Update Existing Data Using DataRow in C# - MySQL

I need to update existing data in my MySQL database.
I wrote code like this:
String _id = lbID.Text;
dsrm_usersTableAdapters.rm_usersTableAdapter _t = new dsrm_usersTableAdapters.rm_usersTableAdapter();
dsrm_users _mds = new dsrm_users();
_mds.EnforceConstraints = false;
dsrm_users.rm_usersDataTable _m = _mds.rm_users;
_t.FillBy4(_m, _id);
if (_m.Rows.Count > 0)
{
    DataRow _row = _m.Rows[0];
    _row.BeginEdit();
    _row["username"] = txtUserName.Text;
    _row.EndEdit();
    _row.AcceptChanges();
    _t.Update(_m);
}
But nothing changes in my existing data. What is the problem?

I think the problem is that you call DataRow.AcceptChanges() before calling DbDataAdapter.Update(). AcceptChanges sets the row's RowState back to Unchanged, so Update sees nothing to write. Try moving the call to AcceptChanges to after the Update.
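For example, a minimal sketch of the reordering, using the names from your question (note that DataAdapter.Update normally calls AcceptChanges for you, so the explicit call may not even be needed):
DataRow _row = _m.Rows[0];
_row.BeginEdit();
_row["username"] = txtUserName.Text;
_row.EndEdit();          // the row is now in the Modified state
_t.Update(_m);           // the adapter sees the Modified row and sends the UPDATE
_m.AcceptChanges();      // only now mark everything as Unchanged (usually redundant)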

Yes, I moved AcceptChanges() to after Update(), but now it gives this error:
Update requires a valid UpdateCommand when passed DataRow collection with modified rows
The problem now is that I use MySQL and I cannot write the UpdateCommand by hand; VS2008 does not accept the SQL command and automatically deletes it. I don't understand the problem. Do you know another way to do this without hand-writing the SQL (UpdateCommand)?
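One possible way, sketched here as an untested suggestion and assuming the MySQL Connector/NET provider (MySql.Data.MySqlClient), a connection string _connectionString, a table named rm_users and a key column named id (those last two names are guesses based on your typed dataset), is to let MySqlCommandBuilder generate the UpdateCommand from a plain SELECT:
using (var con = new MySqlConnection(_connectionString))
{
    var da = new MySqlDataAdapter("SELECT * FROM rm_users WHERE id = @id", con);
    da.SelectCommand.Parameters.AddWithValue("@id", _id);
    // MySqlCommandBuilder derives the INSERT/UPDATE/DELETE commands from the SELECT,
    // so no hand-written UpdateCommand is required.
    var builder = new MySqlCommandBuilder(da);
    var table = new DataTable();
    da.Fill(table);
    if (table.Rows.Count > 0)
    {
        table.Rows[0]["username"] = txtUserName.Text;
        da.Update(table);   // uses the generated UpdateCommand
    }
}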

Related

SSRS ORA-01008: not all variables bound [duplicate]

I have come across an Oracle problem for which I have so far been unable to find the cause.
The query below works in Oracle SQL developer, but when running in .NET it throws:
ORA-01008: not all variables bound
I've tried:
- Changing the Oracle data type for lot_priority (Varchar2 or Int32).
- Changing the .NET data type for lot_priority (string or int).
- One bind variable name is used twice in the query. This is not a problem in my other queries that use the same bound variable in more than one location, but just to be sure I tried making the second instance its own variable with a different :name and binding it separately.
- Several different ways of binding the variables (see commented code; also others).
- Moving the BindByName assignment around.
- Replacing each bound variable with a literal. I've had two separate variables cause the problem (:lot_pri and :lot_priprc). There were some minor changes I can't remember between the two. Changing to literals made the query work, but they do need to work with binding.
Query and code follow. Variable names have been changed to protect the innocent:
SELECT rf.myrow floworder, rf.stage, rf.prss,
rf.pin instnum, rf.prid, r_history.rt, r_history.wt
FROM
(
SELECT sub2.myrow, sub2.stage, sub2.prss, sub2.pin, sub2.prid
FROM (
SELECT sub.myrow, sub.stage, sub.prss, sub.pin,
sub.prid, MAX(sub.target_rn) OVER (ORDER BY sub.myrow) target_row
,sub.hflag
FROM (
WITH floc AS
(
SELECT flow.prss, flow.seq_num
FROM rpf#mydblink flow
WHERE flow.parent_p = :lapp
AND flow.prss IN (
SELECT r_priprc.prss
FROM r_priprc#mydblink r_priprc
WHERE priprc = :lot_priprc
)
AND rownum = 1
)
SELECT row_number() OVER (ORDER BY pp.seq_num, rpf.seq_num) myrow,
rpf.stage, rpf.prss, rpf.pin,
rpf.itype, hflag,
CASE WHEN rpf.itype = 'SpecialValue'
THEN rpf.instruction
ELSE rpf.parent_p
END prid,
CASE WHEN rpf.prss = floc.prss
AND rpf.seq_num = floc.seq_num
THEN row_number() OVER (ORDER BY pp.seq_num, rpf.seq_num)
END target_rn
FROM floc, rpf#mydblink rpf
LEFT OUTER JOIN r_priprc#mydblink pp
ON (pp.prss = rpf.prss)
WHERE pp.priprc = :lot_priprc
ORDER BY pp.seq_num, rpf.seq_num
) sub
) sub2
WHERE sub2.myrow >= sub2.target_row
AND sub2.hflag = 'true'
) rf
LEFT OUTER JOIN r_history#mydblink r_history
ON (r_history.lt = :lt
AND r_history.pri = :lot_pri
AND r_history.stage = rf.stage
AND r_history.curp = rf.prid
)
ORDER BY myrow
public void runMyQuery(string lot_priprc, string lapp, string lt, int lot_pri) {
Dictionary<int, foo> bar = new Dictionary<int, foo>();
using(var con = new OracleConnection(connStr)) {
con.Open();
using(var cmd = new OracleCommand(sql.rtd_get_flow_for_lot, con)) { // Query stored in sql.resx
try {
cmd.BindByName = true;
cmd.Prepare();
cmd.Parameters.Add(new OracleParameter("lapp", OracleDbType.Varchar2)).Value = lapp;
cmd.Parameters.Add(new OracleParameter("lot_priprc", OracleDbType.Varchar2)).Value = lot_priprc;
cmd.Parameters.Add(new OracleParameter("lt", OracleDbType.Varchar2)).Value = lt;
// Also tried OracleDbType.Varchar2 below, and tried passing lot_pri as an integer
cmd.Parameters.Add(new OracleParameter("lot_pri", OracleDbType.Int32)).Value = lot_pri.ToString();
/*********** Also tried the following, more explicit code rather than the 4 lines above: **
OracleParameter param_lapp
= cmd.Parameters.Add(new OracleParameter("lapp", OracleDbType.Varchar2));
OracleParameter param_priprc
= cmd.Parameters.Add(new OracleParameter("lot_priprc", OracleDbType.Varchar2));
OracleParameter param_lt
= cmd.Parameters.Add(new OracleParameter("lt", OracleDbType.Varchar2));
OracleParameter param_lot_pri
= cmd.Parameters.Add(new OracleParameter("lot_pri", OracleDbType.Varchar2));
param_lapp.Value = lastProcedureStackProcedureId;
param_priprc.Value = lotPrimaryProcedure;
param_lt.Value = lotType;
param_lot_pri.Value = lotPriority.ToString();
//***************************************************************/
var reader = cmd.ExecuteReader();
while(reader.Read()) {
// Get values from table (Never reached)
}
}
catch(OracleException e) {
// ORA-01008: not all variables bound
}
}
}
}
Why is Oracle claiming that not all variables are bound?
I know this is an old question, but it hasn't been correctly addressed, so I'm answering it for others who may run into this problem.
By default, Oracle's ODP.NET binds variables by position and treats each position as a new variable.
Treating each copy as a different variable and setting its value multiple times is a workaround and a pain, as furman87 mentioned, and could lead to bugs if you later rewrite the query and move things around.
The correct way is to set the BindByName property of OracleCommand to true as below:
var cmd = new OracleCommand(cmdtxt, conn);
cmd.BindByName = true;
You could also create a new class that encapsulates OracleCommand and sets BindByName to true on instantiation, so you don't have to set the value each time. This is discussed in this post.
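For instance, a small sketch of such an encapsulation (the class name is made up for illustration):
public static class OracleCommandFactory
{
    // Every command created here binds parameters by name, so a bind variable
    // that appears twice in the SQL text only has to be added once.
    public static OracleCommand Create(string commandText, OracleConnection connection)
    {
        var cmd = new OracleCommand(commandText, connection);
        cmd.BindByName = true;
        return cmd;
    }
}
Usage would then be using(var cmd = OracleCommandFactory.Create(sql.rtd_get_flow_for_lot, con)) { ... } instead of newing up the command directly.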
I found how to run the query without error, but I hesitate to call it a "solution" without really understanding the underlying cause.
This more closely resembles the beginning of my actual query:
-- Comment
-- More comment
SELECT rf.flowrow, rf.stage, rf.process,
rf.instr instnum, rf.procedure_id, rtd_history.runtime, rtd_history.waittime
FROM
(
-- Comment at beginning of subquery
-- These two comment lines are the problem
SELECT sub2.flowrow, sub2.stage, sub2.process, sub2.instr, sub2.pid
FROM ( ...
The second set of comments above, at the beginning of the subquery, was the problem. When it was removed, the query executed. The other comments are fine.
This is not a matter of some rogue or missing newline causing the following line to be commented, because the following line is a SELECT. A missing select would yield a different error than "not all variables bound."
I asked around and found one co-worker who has run into this -- comments causing query failures -- several times.
Does anyone know how this can be the cause? It is my understanding that the very first thing a DBMS would do with comments is see if they contain hints, and if not, remove them during parsing. How can an ordinary comment containing no unusual characters (just letters and a period) cause an error? Bizarre.
You have two references to the :lot_priprc binding variable -- while it should require you to only set the variable's value once and bind it in both places, I've had problems where this didn't work and had to treat each copy as a different variable. A pain, but it worked.
On Charles' comment problem: to make things worse, let
:p1 = 'TRIALDEV'
via a Command Parameter, then execute
select T.table_name as NAME, COALESCE(C.comments, '===') as DESCRIPTION
from all_all_tables T
Inner Join all_tab_comments C on T.owner = C.owner and T.table_name = C.table_name
where Upper(T.owner)=:p1
order by T.table_name
558 line(s) affected. Processing time: 00:00:00.6535711
and when changing the literal string from === to ---
select T.table_name as NAME, COALESCE(C.comments, '---') as DESCRIPTION
[...from...same-as-above...]
ORA-01008: not all variables bound
Both statements execute fine in SQL Developer. The shortened code:
Using con = New OracleConnection(cs)
con.Open()
Using cmd = con.CreateCommand()
cmd.CommandText = cmdText
cmd.Parameters.Add(pn, OracleDbType.NVarchar2, 250).Value = p
Dim tbl = New DataTable
Dim da = New OracleDataAdapter(cmd)
da.Fill(tbl)
Return tbl
End Using
End Using
using Oracle.ManagedDataAccess.dll version 4.121.2.0 with the default settings, in VS2015 on the .NET 4.6.1 platform.
So somewhere in the call chain there might be a parser that looks a bit too aggressively for single-line comments started by -- in the CommandText. But even if that is true, the error message "not all variables bound" is at least misleading.
The solution in my situation was similar to Charles Burns' answer; the problem was related to SQL code comments.
I was updating an already-functioning SSRS report with an Oracle data source. I added some more parameters to the report and tested it in Visual Studio, where it worked great, so I deployed it to the report server. When the report was then executed on the server, I got the error message:
"ORA-01008: not all variables bound"
I tried quite a few different things (installing the TNSNames.ora file on the server, removing single-line comments, validating the dataset query mapping). What it came down to was that I had to remove a comment block placed directly after the WHERE keyword. The error was resolved after moving the comment block to after the WHERE clause conditions. I have other comments in the code as well; it was just the one after the WHERE keyword causing the error.
SQL with error: "ORA-01008: not all variables bound"...
WHERE
/*
OHH.SHIP_DATE BETWEEN TO_DATE('10/1/2018', 'MM/DD/YYYY') AND TO_DATE('10/31/2018', 'MM/DD/YYYY')
AND OHH.STATUS_CODE<>'DL'
AND OHH.BILL_COMP_CODE=100
AND OHH.MASTER_ORDER_NBR IS NULL
*/
OHH.SHIP_DATE BETWEEN :paramStartDate AND :paramEndDate
AND OHH.STATUS_CODE<>'DL'
AND OHH.BILL_COMP_CODE IN (:paramCompany)
AND LOAD.DEPART_FROM_WHSE_CODE IN (:paramWarehouse)
AND OHH.MASTER_ORDER_NBR IS NULL
AND LOAD.CLASS_CODE IN (:paramClassCode)
AND CUST.CUST_CODE || '-' || CUST.CUST_SHIPTO_CODE IN (:paramShipto)
SQL executes successfully on the report server...
WHERE
OHH.SHIP_DATE BETWEEN :paramStartDate AND :paramEndDate
AND OHH.STATUS_CODE<>'DL'
AND OHH.BILL_COMP_CODE IN (:paramCompany)
AND LOAD.DEPART_FROM_WHSE_CODE IN (:paramWarehouse)
AND OHH.MASTER_ORDER_NBR IS NULL
AND LOAD.CLASS_CODE IN (:paramClassCode)
AND CUST.CUST_CODE || '-' || CUST.CUST_SHIPTO_CODE IN (:paramShipto)
/*
OHH.SHIP_DATE BETWEEN TO_DATE('10/1/2018', 'MM/DD/YYYY') AND TO_DATE('10/31/2018', 'MM/DD/YYYY')
AND OHH.STATUS_CODE<>'DL'
AND OHH.BILL_COMP_CODE=100
AND OHH.MASTER_ORDER_NBR IS NULL
*/
It's a bug in managed ODP.NET: 'Bug 21113901 : MANAGED ODP.NET RAISE ORA-1008 USING SINGLE QUOTED CONST + BIND VAR IN SELECT', fixed in patch 23530387, which was superseded by patch 24591642.
I came here looking for help because I got the same error running the statement listed below while going through a Udemy course:
INSERT INTO departments (department_id, department_name)
values( &dpet_id, '&dname');
I had been able to run statements with substitution variables before. A comment by Charles Burns about the possibility of the server reaching some threshold while recreating the variables prompted me to log out and restart SQL Developer. The statement ran fine after logging back in.
Thought I'd share for anyone else venturing here with a limited-scope issue like mine.
I had a similar problem in a legacy application, but the "--" was a string parameter value.
Example:
Dim cmd As New OracleCommand("INSERT INTO USER (name, address, photo) VALUES ('User1', '--', :photo)", oracleConnection)
Dim fs As IO.FileStream = New IO.FileStream("c:\img.jpg", IO.FileMode.Open)
Dim br As New IO.BinaryReader(fs)
cmd.Parameters.Add(New OracleParameter("photo", OracleDbType.Blob)).Value = br.ReadBytes(fs.Length)
cmd.ExecuteNonQuery() 'here throws ORA-01008
Changing the address parameter value from '--' to '00' or anything else makes it work.

How can I speed up updating lots of rows

I have a table with 1,400,000 entries. It is a simple list of documents:
Table - Document
    ID             int
    DocumentPath   nvarchar
    DocumentValid  bit
I scan a directory and set any document found in the directory as valid.
public void SetReportsToValidated(List<int> validatedReports)
{
SqlConnection myCon = null;
try
{
myCon = new SqlConnection(_conn);
myCon.Open();
foreach (int id in validatedReports)
{
SqlDataAdapter myAdap = new SqlDataAdapter("update_DocumentValidated", myCon);
myAdap.SelectCommand.CommandType = CommandType.StoredProcedure;
SqlParameter pId = new SqlParameter("@Id", SqlDbType.Int);
pId.Value = id;
myAdap.SelectCommand.Parameters.Add(pId);
myAdap.SelectCommand.ExecuteNonQuery();
}
}
catch (SystemException ex)
{
_log.Error(ex);
throw;
}
finally
{
if (myCon != null)
{
myCon.Close();
}
}
}
The performance of the updates is OK, but I want more. It takes more than an hour to update 1,000,000 of the documents to valid. Is there a good way to speed up the updates? I am thinking of using some kind of batching (like table-valued parameters).
Each update takes some 5-10 ms when profiled on SQL Server.
Read the reports in and append them together into a DataTable (since they have the same dimensions), then use the SqlBulkCopy object to upload the entire thing. That will probably work better for you. I don't think you will have memory issues given the small number of columns and rows.
At the moment you are calling the database for each record individually. You can use the SqlDataAdapter to do bulk updates by (in a very brief nutshell):
1) defining one SqlDataAdapter
2) setting the .UpdateCommand on the adapter to your update sproc
3) calling the .Update method on the adapter, passing it a DataTable containing the ids of the documents to be updated. This will batch the updated rows from the DataTable into the DB, calling the sproc for each record in a batched manner. You can control the batch size via the .UpdateBatchSize property (see the sketch after the links below).
4) This removes the manual row-by-row looping, which is inefficient for batched updates.
See examples:
http://support.microsoft.com/kb/308055
http://www.c-sharpcorner.com/UploadFile/61b832/4430/
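A rough sketch of the adapter-based batching, reusing the update_DocumentValidated procedure and connection string from the question; the DataTable column name and the batch size are arbitrary choices:
var table = new DataTable();
table.Columns.Add("Id", typeof(int));
foreach (int id in validatedReports)
    table.Rows.Add(id);
table.AcceptChanges();
foreach (DataRow row in table.Rows)
    row.SetModified();                       // mark each row so Update() sends it

using (var con = new SqlConnection(_conn))
using (var cmd = new SqlCommand("update_DocumentValidated", con))
{
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Parameters.Add("@Id", SqlDbType.Int).SourceColumn = "Id";
    cmd.UpdatedRowSource = UpdateRowSource.None;   // required when batching

    var adapter = new SqlDataAdapter { UpdateCommand = cmd, UpdateBatchSize = 100 };
    con.Open();
    adapter.Update(table);                   // sends the rows in batches of 100
}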
Alternatively, you could:
1) Use SqlBulkCopy to bulk insert all the IDs into a staging table in the database (highly efficient)
2) Once they are loaded into that staging table, run a single SQL statement to update your main table from the staging table to validate the documents (sketched after the links below).
See examples:
http://www.adathedev.co.uk/2010/02/sqlbulkcopy-bulk-load-to-sql-server.html
http://www.adathedev.co.uk/2011/01/sqlbulkcopy-to-sql-server-in-parallel.html
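And a sketch of that SqlBulkCopy route; the staging table name (and the assumption that it already exists with a single Id column) is made up, while Document, ID and DocumentValid come from the question:
// 1) Bulk load the ids into a staging table, e.g. dbo.ValidatedReportStaging (Id int)
var staging = new DataTable();
staging.Columns.Add("Id", typeof(int));
foreach (int id in validatedReports)
    staging.Rows.Add(id);

using (var con = new SqlConnection(_conn))
{
    con.Open();
    using (var bulk = new SqlBulkCopy(con) { DestinationTableName = "dbo.ValidatedReportStaging" })
    {
        bulk.WriteToServer(staging);
    }

    // 2) One set-based UPDATE instead of a million single-row calls
    var sql = @"UPDATE d SET d.DocumentValid = 1
                FROM dbo.Document d
                JOIN dbo.ValidatedReportStaging s ON s.Id = d.ID";
    using (var cmd = new SqlCommand(sql, con))
    {
        cmd.ExecuteNonQuery();
    }
}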
Instead of creating the adapter and the parameter every time in the loop, create them once and just assign a different value to the parameter:
SqlDataAdapter myAdap = new SqlDataAdapter("update_DocumentValidated", myCon);
myAdap.SelectCommand.CommandType = CommandType.StoredProcedure;
SqlParameter pId = new SqlParameter("@Id", SqlDbType.Int);
myAdap.SelectCommand.Parameters.Add(pId);
foreach (int id in validatedReports)
{
myAdap.SelectCommand.Parameters[0].Value = id;
myAdap.SelectCommand.ExecuteNonQuery();
}
This might not result in a very dramatic improvement, but it is better than the original code. Also, since you are executing the command manually, you do not need the adapter at all; just use a SqlCommand directly (see the sketch below).
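A sketch of that simplification, keeping one command and one parameter for the whole loop:
using (var con = new SqlConnection(_conn))
using (var cmd = new SqlCommand("update_DocumentValidated", con))
{
    cmd.CommandType = CommandType.StoredProcedure;
    var pId = cmd.Parameters.Add("@Id", SqlDbType.Int);

    con.Open();
    foreach (int id in validatedReports)
    {
        pId.Value = id;
        cmd.ExecuteNonQuery();   // still one round trip per id, but no adapter overhead
    }
}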

How do I get the next identity ID before Submit in Linq-to-Sql?

I want to get the next identity ID and then log it somewhere. Only after this do I want to call SubmitChanges().
Wrap the DataContext in a database transaction and call SubmitChanges to write the changes to the database within that transaction. This way you can get the auto-generated ID while keeping the operation transactional:
using (var con = new SqlConnection(conStr))
{
con.Open();
using (var tran = con.BeginTransaction())
{
using (var db = new YourDataContext(con))
{
// Setting the transaction is needed in .NET 3.5.
// It's a bug in L2S and was fixed in .NET 4.0.
db.Transaction = tran;
var entity = new MyEntity();
db.MyEntities.InsertOnSubmit(entity);
db.SubmitChanges();
var id = entity.Id;
// Do something useful with this id
}
tran.Commit();
}
}
Wrap the whole thing in a transaction, do a SELECT IDENT_CURRENT('table_name'), then submit your changes, then commit the transaction. If you lock the table, that should prevent someone else from inserting a record after your SELECT IDENT_CURRENT and before your insert, which should give you the correct identity value.
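A rough sketch of that approach, reusing the YourDataContext/MyEntity names from the answer above; note that without a table lock another session can still insert in between, so treat this as an approximation rather than a guarantee:
using (var scope = new TransactionScope())
using (var db = new YourDataContext())
{
    // IDENT_CURRENT returns the last identity value generated for the table;
    // assuming an increment of 1, the next id should be that value plus one.
    var lastId = db.ExecuteQuery<decimal>("SELECT IDENT_CURRENT('table_name')").Single();
    var nextId = (int)lastId + 1;
    // ... log nextId somewhere ...

    db.MyEntities.InsertOnSubmit(new MyEntity());
    db.SubmitChanges();
    scope.Complete();
}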
You could add a lastModified date column to the database and read that, or read and increment the identity column.
Otherwise it is better to just call SubmitChanges() and then get the next ID.

Why Could Linq to Sql Submit Changes Fail for Updates Despite Data in Change Set

I'm updating a set of objects, but the update fails on a SqlException that says "Incorrect Syntax near 'Where'".
So I crack open SqlProfiler, and here is the generated SQL:
exec sp_executesql N'UPDATE [dbo].[Addresses]
SET
WHERE ([AddressID] = @p0) AND ([StreetAddress] = @p1) AND ([StreetAddress2] = @p2) AND ([City] = @p3) AND ([State] = @p4) AND ([ZipCode] = @p5) AND ([CoordinateID] = @p6) AND ([CoordinateSourceID] IS NULL) AND ([CreatedDate] = @p7) AND ([Country] = @p8) AND (NOT ([IsDeleted] = 1)) AND (NOT ([IsNonSACOGZip] = 1))',N'@p0 uniqueidentifier,@p1 varchar(15),@p2 varchar(8000),@p3 varchar(10),@p4 varchar(2),@p5 varchar(5),@p6 uniqueidentifier,@p7 datetime,@p8 varchar(2)',@p0='92550F32-D921-4B71-9622-6F1EC6123FB1',@p1='125 Main Street',@p2='',@p3='Sacramento',@p4='CA',@p5='95864',@p6='725E7939-AEE3-4EF9-A033-7507579B69DF',@p7='2010-06-15 14:07:51.0100000',@p8='US'
Sure enough, no set statement.
I also called context.GetChangeSet() and the proper values are in the updates section.
Also, I checked the .dbml file, and all of the properties' Update Check values are set to 'Always'.
I am completely baffled by this one; any help out there?
I had overridden GetHashCode to return a concatenation of a few of the fields. When I changed it to return solely the hash of the primary key, it worked.
The root cause is that updates will fail for an object whose hash code changes during its lifecycle, so when you override GetHashCode you need to pick attributes that cannot be updated, such as the primary key.
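For illustration, a minimal sketch of that kind of change for the Addresses entity seen in the generated SQL above (the partial class and member names are assumed from that SQL):
public partial class Address
{
    // Hash only the immutable primary key. Hashing mutable fields such as
    // StreetAddress or City means the hash code changes when they are edited,
    // which breaks the identity tracking LINQ to SQL relies on.
    public override int GetHashCode()
    {
        return AddressID.GetHashCode();
    }

    public override bool Equals(object obj)
    {
        var other = obj as Address;
        return other != null && other.AddressID == AddressID;
    }
}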

Modify column attribute using ADOX [VC++ and MS Access]

I have to add new columns to an existing table. I can successfully add a new column, but the following exception occurs while trying to modify the column attribute to nullable.
Multiple-step OLE DB operation generated errors. Check each OLE DB status value, if available. No work was done
Here is my code:
HRESULT hr = S_OK;
ADOX::_CatalogPtr pCatalog = NULL;
ADOX::_TablePtr pTable = NULL;
ADOX::TablesPtr pTables = NULL;
hr = pCatalog.CreateInstance(__uuidof(Catalog));
pCatalog->PutActiveConnection("Provider='Microsoft.JET.OLEDB.4.0';data source='C:\\sample.mdb';");
pTables = pCatalog->GetTables();
pTable = pTables->Item["sampletable"];
hr = pTable->Columns->Append( "age", ADOX::adInteger, 0);
ASSERT(hr == S_OK);
pTable->Columns->Item["age"]->Attributes = ADOX::adColNullable;
The equivalent code in VBA works for me without error (assuming I have translated it faithfully).
Something to try, perhaps, is to create a Column object, set its properties (including the nullable attribute), and then append it to the Table object's Columns collection, e.g. this in VBA:
Set oColumn = New ADOX.Column
oColumn.Name = "age"
oColumn.Type = ADOX.adInteger
oColumn.Attributes = ADOX.adColNullable
oTable.Columns.Append oColumn