I'm using Microsoft's SQL Server 2008 R2 (the choice of server version is dictated by the customer).
Note, before each and every test I run this:
UPDATE apps set AssetId = NULL;
Using Microsoft SQL Server Management Studio I run the following queries:
UPDATE apps set AssetId=1 WHERE id=1;
UPDATE apps set AssetId=1 WHERE id=2;
UPDATE apps set AssetId=1 WHERE id=3;
UPDATE apps set AssetId=1 WHERE id=4;
UPDATE apps set AssetId=1 WHERE id=5;
UPDATE apps set AssetId=1 WHERE id=6;
UPDATE apps set AssetId=1 WHERE id=7;
... the query runs without error, and as expected the following returns 7:
select count(*) from apps where AssetId=1;
So far so good.
Now I set all AssetIds to NULL, then run the same queries from a VB6 binary. I store all 7 queries in a single string variable and execute them using an ADO Recordset object. I check SQL Profiler and observe that all the queries appear within a single SQL:BatchStarting EventClass. There are no errors (I selected to display all errors and warnings when setting up the profiler). Yet this returns 4:
select count(*) from apps where AssetId = 1;
-- Only 4 records have an AssetId which is not null:
select id, AssetId from apps where AssetId = 1;
select id, AssetId from apps where AssetId is not null;
I rearranged the UPDATE statements, set AssetId to null for all rows again, then ran the update statements. Still only 4 rows get updated (the rows corresponding to the first 4 update statements in the batch).
Why would only the first 4 statements in a batch of 7 be run? Why would ADO Recordset and Microsoft SQL Server Management Studio have different results for the exact same queries when using the same database on the same server instance?
It almost seems that ADO Recordset has a maximum number of update statements it can run in a single batch (even though the profiler sees all 7 in the batch).
[Further Information added]
I've changed the profiler field outputs, and using the same queries I now see different behavior. As before, only 4 rows update; however, in the profiler the SQL:BatchStarting event shows all 7 semicolon-separated statements, but only 4 of the 7 statements have a SQL:StmtStarting EventClass record (3 of the queries are not mentioned at all by the profiler). As before, I have all errors/warnings checked. I'm fairly sure that previously I was seeing one SQL:StmtStarting for each query in the batch.
I found the solution. I had to turn on the advanced option named "disallow results from triggers". When this option is left at its default of 0, even if your triggers don't obviously return record sets, the batch can stop executing part-way through, and it seems to give no error or warning.
I ran this:
sp_configure 'show advanced options', 1 ;
GO
RECONFIGURE ;
GO
EXEC sp_configure 'disallow results from triggers', '1';
RECONFIGURE;
... with this done all UPDATE statements in my transaction now complete.
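As a quick sanity check (a sketch; it assumes you have permission to view server configuration), the current value of the option can be read from sys.configurations, and value_in_use should show 1 after the RECONFIGURE:
SELECT name, value, value_in_use
FROM sys.configurations
WHERE name = 'disallow results from triggers';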
I think perhaps with 'disallow results from triggers' set to 0 (the default for SQL Server 2008) the batch was accumulating warning messages:
The ability to return results from triggers will be removed in a
future version of SQL Server. Avoid using this feature in new
development work, and plan to modify applications that currently use
it.
I suspect that after a certain number of these accumulate, the server stops executing statements within the batch; no error is given and the batch is marked as completed.
Related
I have to synchronize some tables from one MySQL database to another (a different server on a different machine).
The transfer should include only specific tables, and only the rows of those tables with a particular characteristic (e.g. a column named transfer set to 1).
It should be automatic/transparent, fast, and run in short cycles (at least every 20s).
I tried different ways but none of them matched all requirements.
Database synchronization with Galera works fine but does not exclude tables/rows.
mysqldump is not automatic (it must be started manually) and does not exclude tables/rows either.
Is there no other way to do this than writing some code of my own that runs permanently?
Such a partial sync must be performed with a specially created scheme.
Possible realization:
Check whether your server instances support the FEDERATED storage engine (SHOW ENGINES will list it; it is not always enabled by default).
Check whether the destination server can access the data stored on the source server by using CREATE SERVER.
Create a server attached to the remote source server and the needed remote database, and check that the remote data is accessible, for example:
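A minimal sketch (the link name src_link, the host, credentials, database, and column list below are placeholders, not taken from the question; the federated table must mirror the remote table's structure):
-- On the destination server: describe how to reach the source server.
CREATE SERVER src_link
FOREIGN DATA WRAPPER mysql
OPTIONS (HOST 'source.host', PORT 3306, USER 'sync_user', PASSWORD '***', DATABASE 'srcdb');
-- A FEDERATED table in the service database acts as a window onto the remote table.
CREATE TABLE serviceDB.federated_table (
  id INT NOT NULL,
  must_be_transferred TINYINT UNSIGNED NOT NULL DEFAULT 0,
  created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
  PRIMARY KEY (id)
) ENGINE=FEDERATED
  CONNECTION='src_link/source_table';
-- Check that the remote data is reachable.
SELECT COUNT(*) FROM serviceDB.federated_table;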
On the destination server, create an event procedure which is executed every 20s (create it disabled at first). I recommend creating it in a separate service database. In this event procedure execute queries like:
SET @event_start_timestamp = CURRENT_TIMESTAMP;
INSERT INTO localDB.local_table ( {columns} )
SELECT {columns}
FROM serviceDB.federated_table
WHERE must_be_transferred
  AND created_at < @event_start_timestamp;
UPDATE serviceDB.federated_table
SET must_be_transferred = FALSE
WHERE must_be_transferred
  AND created_at < @event_start_timestamp;
The destination server sends the corresponding SELECT query to the remote source server, which executes it and sends the output back. The received rows are inserted. Then the destination server sends the UPDATE which drops the flag.
Enable the event scheduler (SET GLOBAL event_scheduler = ON;).
Enable the event procedure.
Ensure that your event procedure executes fast enough; it must finish its work before the next firing. I.e. run your code as a regular stored procedure and check its execution time. You may need to increase the scheduling interval.
You can exclude such parallel firings using a static flag in a service table created in your service database: if it is set (the previous event has not finished its work) then the event procedure should exit immediately. I recommend performing this check in any case; a sketch follows.
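Putting these pieces together, a rough sketch of such an event (the event name and the serviceDB.sync_state guard table are invented for illustration; the real work is the INSERT ... SELECT and UPDATE shown earlier):
DELIMITER $$
CREATE EVENT serviceDB.ev_pull_from_source
ON SCHEDULE EVERY 20 SECOND
DISABLE
DO
BEGIN
  -- Try to claim the busy flag atomically; if it was already set, skip this firing.
  UPDATE serviceDB.sync_state SET busy = 1 WHERE id = 1 AND busy = 0;
  IF ROW_COUNT() = 1 THEN
    SET @event_start_timestamp = CURRENT_TIMESTAMP;
    -- ... the INSERT ... SELECT and the flag-clearing UPDATE from above go here ...
    UPDATE serviceDB.sync_state SET busy = 0 WHERE id = 1;
  END IF;
END$$
DELIMITER ;
-- Later, once the event scheduler is on and the timing has been checked:
ALTER EVENT serviceDB.ev_pull_from_source ENABLE;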
You must handle the following situation:
destination receives a row;
source updates this row;
destination marks this row as synchronized.
Possible solution:
The flag must_be_transferred should not be a boolean but an (unsigned tiny) integer, with the following values: 0 = not needed in sync; 1 = needed in sync; 2 = selected for copying; 3 = selected for copying but altered after selection.
Algo (a sketch of these queries follows the list):
dest. updates the rows marked with a non-zero value, setting them to 2;
dest. copies the rows using the condition flag & 2 (i.e. flag IN (2,3));
dest. clears that bit using the expression flag ^ 2 (bitwise XOR) under the same condition;
src. marks altered rows as needed in sync using the expression flag | 1 (bitwise OR).
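A sketch of the corresponding destination-side queries, reusing the names from the earlier snippet (the source-side query is shown as a comment because it belongs to the source application's code):
-- 1. Select for copying: every row that needs syncing gets bit 2 set (value 2).
UPDATE serviceDB.federated_table
SET must_be_transferred = 2
WHERE must_be_transferred <> 0;
-- 2. Copy the selected rows (value 2, or 3 if the source changed them in the meantime).
INSERT INTO localDB.local_table ( {columns} )
SELECT {columns}
FROM serviceDB.federated_table
WHERE must_be_transferred & 2;
-- 3. Clear bit 2: 2 -> 0 (done), 3 -> 1 (changed during the copy, picked up next cycle).
UPDATE serviceDB.federated_table
SET must_be_transferred = must_be_transferred ^ 2
WHERE must_be_transferred & 2;
-- On the source side, whenever a row changes:
-- UPDATE source_table SET must_be_transferred = must_be_transferred | 1 WHERE id = ...;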
I have a problem using an Access Application connected to a MySql Database.
I test an UPDATE on one small table with a VBA procedure, using DAO:
UPDATE tblFormateur
SET Prenom = 'Werner'
WHERE Nom = 'HEISENBERG'
The first time I run the query, it's OK.
But if I run the same query without changing the value (keeping Prenom = 'Werner'), I get an error message saying that the query has not been executed, due to a lock violation.
If I run the query again, but with a different value, e.g. Prenom = 'Peter', the query is executed without error.
On the other hand, if I do the same experiment with ADODB, I do not get any error.
One could say: let's go with ADODB!
The problem is that the Access application forms use DAO, not ADODB, so none of the forms would be able to add new records or update existing ones.
Have you experienced the same issue?
Are there some parameters of the ODBC driver that need to be set?
Thanks in advance for any help.
Windows 11
Access Office 365
ODBC connector 8.0 CE
If you have a linked table as the form's RecordSource and you run an update query, it always gives you this error.
The error appears because Access does not detect any changes: if a field contains the value 'Test' and you delete that value and retype 'Test', Access doesn't detect any change because the value is the same.
To bypass this error you can add a counter field that you increment by 1 before running your query: when you open the form the field is 0, before you run the query it becomes 1, next time 2, and so on (see the sketch below).
Access will then detect a change to the form and won't give you this error.
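The same idea can also be expressed in the SQL itself: if you add a hypothetical counter column such as EditCounter to the table and bump it in every UPDATE, the statement always changes at least one value, even when Prenom keeps its old content:
-- Sketch only: EditCounter is an invented column used purely to force a real change.
UPDATE tblFormateur
SET Prenom = 'Werner',
    EditCounter = EditCounter + 1
WHERE Nom = 'HEISENBERG';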
When I enable server-side prepared statements via the useServerPrepStmts JDBC flag, result set update operations fail after the first request for a given query with:
Result Set not updatable. This result set must come from a statement
that was created with a result set type of ResultSet.CONCUR_UPDATABLE,
the query must select only one table, can not use functions and must
select all primary keys from that table
When I disable server-side prepared statements, result set update operations work flawlessly.
Since the query involves only 1 table, has a primary key, returns a single row, and no functions are involved, what must be happening is that the prepared statement is created with ResultSet.CONCUR_READ_ONLY and then cached server-side. Subsequent requests for the same query will draw the prepared statement from the cache and then, even though the client sends ResultSet.CONCUR_UPDATABLE for rs.updateRow(), concurrency is still set to ResultSet.CONCUR_READ_ONLY on the server.
If I am correct in above assumption, how does one override the server-side cache in this case? (everything else is fine with prepared statement caching, just result set row operations are affected).
Linux (CentOS 5.7) with:
mysql-connector-java 5.1.33
mysql 5.6.20
EDIT (not relevant): I notice that the first query, which always succeeds, has this in the query log: SET SQL_SELECT_LIMIT=1, while all subsequent queries fail with this: SET SQL_SELECT_LIMIT=DEFAULT. Not sure if this is the cause of the problem or just a side effect. Guess I'll try to manually setFetchSize on the client and see if that makes a difference...
The workaround is to append FOR UPDATE to the (otherwise CONCUR_READ_ONLY) SELECT statement on the client and request ResultSet.CONCUR_UPDATABLE concurrency for the new prepared statement. This allows for server statement caching while still being able to modify a JDBC ResultSet.
Side note:
the select ... for update statement itself does not appear to be eligible for caching; i.e. query log shows Prepare and Execute lines on every request.
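For illustration, the only change is to the statement text sent by the client (table and column names here are placeholders; the ? is the usual JDBC parameter marker), combined with requesting ResultSet.CONCUR_UPDATABLE when preparing it:
-- Before: SELECT id, name, status FROM my_table WHERE id = ?
-- After (stays updatable even when the server-side prepared statement is cached):
SELECT id, name, status
FROM my_table
WHERE id = ?
FOR UPDATE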
When I write a procedure in SQL Server 2008, it always includes SET NOCOUNT ON.
I googled it and saw that it's used to suppress the "xx rows were affected" message, but why should I do it?
Is it for security reasons?
EDIT: OK, so I understand from the current answers that it's used mostly for performance, and for coherence with the client's own count...
So is there a reason not to use it? Like if I want my client to be able to compare its count with mine?
I believe SET NOCOUNT ON is mostly used to avoid passing back to the client a potentially misleading information. In a stored procedure, for example, your batch may contain several different statements with their own count of affected records but you may want to pass back to the client just a single, perhaps completely different number.
It's not for security, since a rowcount doesn't really divulge much info, especially compared to the data that is in the same payload.
If you call SQL from an application, the "xxx rows" messages will be returned to the application as a dataset, with network round trips in between, before you get the data, which, as Mihai says, can have a performance impact.
Bottom line: it won't hurt to add it to your stored procedures; it is common practice, but you are not obligated to.
As per MSDN SET NOCOUNT ON
Stops the message that shows the count of the number of rows affected
by a Transact-SQL statement or stored procedure from being returned
as part of the result set.
When SET NOCOUNT is ON, the count is not returned. When SET NOCOUNT is
OFF, the count is returned. The @@ROWCOUNT function is updated even
when SET NOCOUNT is ON.
Another related good post on SO
SET NOCOUNT ON usage
Taken from SET NOCOUNT ON Improves SQL Server Stored Procedure Performance
SET NOCOUNT ON turns off the messages that SQL Server sends back to
the client after each T-SQL statement is executed. This is performed
for all SELECT, INSERT, UPDATE, and DELETE statements. Having this
information is handy when you run a T-SQL statement in a query window,
but when stored procedures are run there is no need for this
information to be passed back to the client.
By removing this extra overhead from the network it can greatly
improve overall performance for your database and application.
If you still need to get the number of rows affected by the T-SQL
statement that is executing you can still use the @@ROWCOUNT option.
By issuing a SET NOCOUNT ON this function (@@ROWCOUNT) still works and
can still be used in your stored procedures to identify how many rows
were affected by the statement.
so is there a reason not to use it?
Instead, use @@ROWCOUNT if you want to compare the count of rows affected.
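A minimal sketch of the usual pattern (object names are invented): suppress the automatic messages, but hand the count back explicitly when the caller needs it.
CREATE PROCEDURE dbo.usp_DeactivateCustomer
    @CustomerId INT
AS
BEGIN
    SET NOCOUNT ON;  -- no "n rows affected" messages are sent to the client

    UPDATE dbo.Customers
    SET IsActive = 0
    WHERE CustomerId = @CustomerId;

    -- @@ROWCOUNT still works, so the count can be returned deliberately.
    SELECT @@ROWCOUNT AS RowsAffected;
END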
I want to implement a batch MySQL script to do something in a database. The thing is that, for each master id that I have, I want to insert 4 tuples, and these tuples should be added in a transaction, meaning that if one of the 4 inserts fails, the transaction should be rolled back. I then need some catching mechanism to detect that a query has failed. I CAN ONLY USE PURE MYSQL, neither PHP nor Perl etc.; I can't even create a stored procedure to do it. In Microsoft SQL Server there is the @@ERROR variable, which would solve my problem, but in MySQL we do not have any system variable showing the error code.
How can I do that?
Cheers,
This is an ugly workaround, but it worked for me when I was trying to import a batch of SQL queries and wrap the entire thing within a transaction, so that I could roll back if any of the SQL queries errored.
Since the size of the batch was massive, a SQL procedure with a condition handler was not an option either.
You have to do this manually, so it really isn't a solution unless you are batching:
First, make sure your entire batch is stored in an SQL file. The SQL file should only contain the batch queries, and no transaction control queries.
Then start up a MySQL command line client and type in transaction commands manually:
mysql> SET AUTOCOMMIT = 0;
mysql> START TRANSACTION;
Then tell the command line client to run the batch file:
mysql> SOURCE path/to/file.sql
After that you can simply manually COMMIT; or ROLLBACK; depending on how happy you are with the result of your queries.
This is such a kludge, though. Anyone have a better approach?
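One slightly less manual variant (a sketch; verify the behaviour with your client version): put the transaction control into a small wrapper file and feed it to the mysql client non-interactively. In batch mode the client stops at the first error by default, so the COMMIT is never reached and the open transaction is rolled back when the connection closes.
-- wrapper.sql   (run with:  mysql your_db < wrapper.sql)
SET autocommit = 0;
START TRANSACTION;
SOURCE path/to/file.sql
COMMIT;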