When using mysql.h, there's a property called insert_id on the MYSQL connection object. Is there a similar feature when using a TADOConnection?
I am well aware of SELECT LAST_INSERT_ID();. What I'm asking is: if the C API has that feature, why is it missing from the ADO connector (if it is indeed missing)?
And if the only solution is to INSERT and then SELECT the last insert id, are these two queries atomic? Can other INSERT queries, executed in between from a different connection, make LAST_INSERT_ID return an unexpected value? (See the sketch below.)
Since SQL Server has OUTPUT Inserted.ID and Postgres has RETURNING id, I find MySQL to have made a dumb design choice with LAST_INSERT_ID().
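Update: the MySQL manual documents that LAST_INSERT_ID() is maintained per connection, so an INSERT from another connection in between cannot disturb the value your connection sees. A minimal sketch of that behavior (in JDBC, just for illustration; the URL, credentials, and table t with an AUTO_INCREMENT id are placeholders):

import java.sql.*;

public class PerConnectionDemo {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:mysql://localhost/mydb"; // hypothetical
        try (Connection connA = DriverManager.getConnection(url, "user", "pass");
             Connection connB = DriverManager.getConnection(url, "user", "pass");
             Statement a = connA.createStatement();
             Statement b = connB.createStatement()) {
            a.executeUpdate("INSERT INTO t (x) VALUES ('from A')");
            // A concurrent insert on a different connection...
            b.executeUpdate("INSERT INTO t (x) VALUES ('from B')");
            // ...does not change what connA's LAST_INSERT_ID() reports.
            try (ResultSet rs = a.executeQuery("SELECT LAST_INSERT_ID()")) {
                rs.next();
                System.out.println("connA still sees its own id: " + rs.getLong(1));
            }
        }
    }
}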
Related
I am working on a high-scale application, on the order of 35,000 QPS, using Hibernate and MySQL.
A large table has an auto-increment primary key, and the generation strategy defined in Hibernate is IDENTITY. Show SQL is enabled as well.
Whenever an insert happens, I see only one query being fired in the DB, which is an INSERT statement.
A few questions follow:
1) I was wondering: how does Hibernate get the auto-increment value after the insert?
2) If the answer is "SELECT LAST_INSERT_ID()", why does it not show up in VividCortex or in the show-SQL logs?
3) How does "SELECT LAST_INSERT_ID()" account for multiple auto-increments in different tables?
4) If MySQL returns a value on insert, why aren't the MySQL clients built so that we can see what is being returned?
Thanks in advance for all the help.
You should call SELECT LAST_INSERT_ID().
Practically, you can't do the same thing as the MySQL JDBC driver using another MySQL client. You'd have to write your own client that reads and writes the MySQL protocol.
The MySQL JDBC driver gets the last insert id by parsing packets of the MySQL protocol. The last insert id is returned in the OK packet the server sends in response to the statement.
This is why SELECT LAST_INSERT_ID() doesn't show up in query metrics. The driver isn't calling that SQL statement; it's picking the integer out of the server's response at the protocol level.
You asked how it's done internally. A relevant line of code is https://github.com/mysql/mysql-connector-j/blob/release/8.0/src/main/protocol-impl/java/com/mysql/cj/protocol/a/result/OkPacket.java#L55
Basically, it parses an integer from a known position in a packet as it receives the server's response.
I'm not going to go into any more detail about parsing the protocol. I don't have experience coding a MySQL protocol client, and it's not something I wish to do.
I think it would not be a good use of your time to implement your own MySQL client.
It probably uses the standard JDBC mechanism to get generated values.
It's not.
You execute it immediately after inserting into one table, and you thus get the values that were generated by that insert. But that's not what is being used, so it's irrelevant.
Not sure what you mean by that: the MySQL JDBC driver allows doing that, using the standard JDBC API.
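For reference, a minimal sketch of that standard JDBC mechanism (Statement.RETURN_GENERATED_KEYS plus getGeneratedKeys()); the connection details and table t are placeholders:

import java.sql.*;

public class GeneratedKeysDemo {
    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:mysql://localhost/mydb", "user", "pass");
             PreparedStatement ps = conn.prepareStatement(
                 "INSERT INTO t (x) VALUES (?)",
                 Statement.RETURN_GENERATED_KEYS)) {
            ps.setString(1, "hello");
            ps.executeUpdate();
            // No extra SQL statement is sent: the driver reads the id out of
            // the OK packet it already received in response to the INSERT.
            try (ResultSet keys = ps.getGeneratedKeys()) {
                if (keys.next()) {
                    System.out.println("auto-increment id: " + keys.getLong(1));
                }
            }
        }
    }
}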
(Too long for a comment.)
SELECT LAST_INSERT_ID() uses the value already available in the connection. (This may explain its absence from any log.)
Each table has its own auto_inc value.
(I don't know any details about Hibernate.)
35K qps is possible, but it won't be easy.
Please give us more details on the queries -- SELECTs? writes? 35K INSERTs?
Are you batching the inserts in any way? You will need to (see the sketch below).
What do you then use the auto_inc value for?
Do you use BEGIN..COMMIT? What value of autocommit?
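To illustrate the batching point, here is a minimal JDBC sketch using addBatch()/executeBatch(); with Connector/J, adding rewriteBatchedStatements=true to the URL lets the driver rewrite the batch into multi-row INSERTs, which matters a lot at 35K qps. The events table and payload column are made up for the example:

import java.sql.*;

public class BatchInsertDemo {
    public static void main(String[] args) throws SQLException {
        // rewriteBatchedStatements=true lets Connector/J collapse the batch
        // into multi-row INSERT statements, drastically cutting round trips.
        String url = "jdbc:mysql://localhost/mydb?rewriteBatchedStatements=true";
        try (Connection conn = DriverManager.getConnection(url, "user", "pass");
             PreparedStatement ps = conn.prepareStatement(
                 "INSERT INTO events (payload) VALUES (?)")) {
            conn.setAutoCommit(false); // one COMMIT per batch, not per row
            for (int i = 0; i < 1000; i++) {
                ps.setString(1, "event-" + i);
                ps.addBatch();
            }
            ps.executeBatch();
            conn.commit();
        }
    }
}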
I want to get data from several tables which have the same column names in one SQL statement, for example:
SELECT name, age FROM table_a UNION SELECT name, age FROM table_b UNION...
But some table_x may not exist, and I can't control which tables the people sending me requests ask for. If one of the tables in a query does not exist, the whole query fails. Is there any syntax to avoid that?
I know I can use SHOW TABLES to get all the tables in the database and compare them against the request parameters first, but I was hoping to do it in MySQL syntax alone.
The short answer is no. If you are using another language in front of it, such as PHP (or any other language, really), you can check the tables as you suggest, but SQL expects the query to be syntactically valid, and if it's not, it will error. There is one (IMO bad) way to do this, if you must: you could use a stored procedure, which would allow you to dynamically build the query as you would in PHP or another language. But that's about all you have with MySQL (or any database that I know of).
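As a sketch of that check-first approach from application code (the same idea a stored procedure would build dynamically), here it is in JDBC; the connection details, the requested table names, and the (name, age) columns are all placeholders:

import java.sql.*;
import java.util.*;

public class SafeUnion {
    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost/mydb", "user", "pass")) {
            List<String> requested = Arrays.asList("table_a", "table_b", "table_x");
            List<String> existing = new ArrayList<>();
            // Keep only the requested tables that actually exist.
            String check = "SELECT table_name FROM information_schema.tables "
                         + "WHERE table_schema = DATABASE() AND table_name = ?";
            try (PreparedStatement ps = conn.prepareStatement(check)) {
                for (String t : requested) {
                    ps.setString(1, t);
                    try (ResultSet rs = ps.executeQuery()) {
                        if (rs.next()) existing.add(t);
                    }
                }
            }
            if (existing.isEmpty()) return;
            // Build the UNION only over tables verified above; never
            // concatenate unvalidated user input into SQL.
            StringBuilder sql = new StringBuilder();
            for (String t : existing) {
                if (sql.length() > 0) sql.append(" UNION ");
                sql.append("SELECT name, age FROM ").append(t);
            }
            try (Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery(sql.toString())) {
                while (rs.next()) {
                    System.out.println(rs.getString("name") + " " + rs.getInt("age"));
                }
            }
        }
    }
}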
I am developing a high-load web application and trying to reduce the number of SQL queries. Quite often I need to update one row and get the result back. It would be nice to be able to run a query and receive the values of the updated fields at the same time, without making two calls to the MySQL server. For example, I execute the following query:
update table set val=val+1 where id=1;
and function returns:
array("val"=>10)
Sure, I understand that I can write my own function which first does the UPDATE, then a SELECT, and returns the result. But the problem is that in that case the MySQL server has to seek the data and update it during the first query, and then the second query has to seek the data again to return it. I am looking for a way for MySQL to seek the data, update it, and return the updated data in one pass.
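MySQL has no UPDATE ... RETURNING, but for a single updated value there is a documented trick: LAST_INSERT_ID(expr) stores the expression as the connection's insert id, and that value rides back in the OK packet of the UPDATE itself. A hedged JDBC sketch; the counters table is a placeholder, and whether getGeneratedKeys() surfaces the value for an UPDATE depends on your driver version:

import java.sql.*;

public class UpdateAndRead {
    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:mysql://localhost/mydb", "user", "pass");
             Statement st = conn.createStatement()) {
            // LAST_INSERT_ID(expr) sets the connection's insert id to expr,
            // so the new value of val travels back in the UPDATE's OK packet.
            st.executeUpdate(
                "UPDATE counters SET val = LAST_INSERT_ID(val + 1) WHERE id = 1",
                Statement.RETURN_GENERATED_KEYS);
            try (ResultSet keys = st.getGeneratedKeys()) {
                if (keys.next()) {
                    System.out.println("new val = " + keys.getLong(1));
                }
            }
            // If your driver doesn't expose generated keys for UPDATE,
            // SELECT LAST_INSERT_ID() reads the same connection-scoped value;
            // it touches no table, so there is no second data seek.
        }
    }
}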
I am currently working on a query in Access 2010 and I am trying to get the below query to work. The connection string between my local DB and the server that I am passing through to is working just fine.
Select column1
, column2
from serverDB.dbo.table1
where column1 in (Select column1 from tbl_Name1)
In this situation, table1 is the table on the server that I am passing through to get to, but tbl_Name1 is the table that is actually in my Access DB, which I am trying to use to constrain the data that I am pulling from the server.
When I try to run the query, I am getting the error that it doesn't think tbl_Name1 exists.
Any help is appreciated!
I just came across a solution that may help others in a similar situation.
This approach is easy because you can just run one query on your local Access database and get everything you need all at once. However, a lot of filtering/churning-through-results may be done on your own local computer behind the scenes, as opposed to on the remote server, so it may not necessarily be quick.
Steps
Create a query, make it a "Pass Through" query, and set up its "ODBC Connect Str" property to connect to the remote database.
Write the pass-through query, something like SELECT RemoteId FROM RemoteTable, and give your pass-through query a name, maybe PassThroughQuery.
Create a new query, make it a regular "Select" query.
Write your new query, using the pass-through query you just created as a table in this new query (it seems weird to use a query as a table, but it works). Join that PassThroughQuery "table" to your local table and filter it based on values in the local table, something like SELECT R.RemoteId, L.LocalValue FROM PassThroughQuery R INNER JOIN LocalTable L ON L.LocalId = R.RemoteId WHERE L.LocalValue = 'SomeText'
This approach allows you to mix/join the results of a pass through query and the data in a local Access database table cleanly, albeit potentially slowly if there is a lot of data involved.
I think the issue is that a pass-through query is one that is run on the server. Since one of the tables is located in the local Access file, the server won't find that table.
A possible workaround, if you must stay with the pass-through, is to build an SQL string with the results of the nested query rather than the query string itself (depending on the number of results, this may or may not be practical).
e.g. Instead of Select column1 from tbl_Name1 you use "c1result1","c1result2",....
So here is my situation: I have a vendor-supplied DB we cannot modify, and a custom DB that imports data from the vendor app and acts on it. Once records are imported from the vendor app, they cannot appear on the list of records to be imported. Also, we only want to display the 250 most recent records that have not been imported.
What I originally started with was selecting the list of IDs that had already been imported from the custom DB, and then querying the vendor DB using that list in a .Where(x => !idList.Contains(x.Id)) clause on the remote query.
This worked up until we passed 2100 records imported into the custom DB, as 2100 is the limit on the number of parameters that can be passed to SQL Server. After finding out this was the actual problem, and not the 'invalid buffer'/'severe error' that ADO.NET reported, my solution was to remove the first 2000 IDs in the remote query and then filter out the remaining records in the local query.
Having to pull back a large number of irrelevant records just to exclude them, so I can get the correct 250 records, seems very inelegant. Is there a better way to do this, short of a cross-DB stored procedure?
Thanks in advance.
This might not be the best answer, depending on how many records you're dealing with, but you could force the SQL to execute and just deal with the results as in-memory objects. Calling the ToList() method will execute the SQL and convert the results to an IEnumerable.
What I might suggest is to start by querying the vendor database first, ordering the results by some kind of criterion (perhaps a date field, oldest to most recent).
You could do a Skip().Take() to "skim" the results, then take each bulk set and insert the records into the custom DB where the ID doesn't already exist (see the sketch below). That way you avoid the problem you have now.
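Rendered as a hedged JDBC sketch rather than LINQ (every connection string, table, and column name here is a placeholder, and the paging syntax assumes SQL Server 2012+), the skim-and-import loop might look like:

import java.sql.*;

public class SkimImport {
    public static void main(String[] args) throws SQLException {
        int pageSize = 500;
        try (Connection vendor = DriverManager.getConnection(
                 "jdbc:sqlserver://vendorhost;databaseName=vendordb", "user", "pass");
             Connection custom = DriverManager.getConnection(
                 "jdbc:sqlserver://customhost;databaseName=customdb", "user", "pass")) {
            // Page through the vendor table oldest-to-newest.
            String page = "SELECT Id FROM dbo.VendorRecords ORDER BY CreatedOn "
                        + "OFFSET ? ROWS FETCH NEXT ? ROWS ONLY";
            // Record each id in the custom DB only if it isn't already there.
            String importSql = "INSERT INTO dbo.imported_records (vendor_id) "
                             + "SELECT CAST(? AS BIGINT) WHERE NOT EXISTS "
                             + "(SELECT 1 FROM dbo.imported_records WHERE vendor_id = ?)";
            try (PreparedStatement ps = vendor.prepareStatement(page);
                 PreparedStatement ins = custom.prepareStatement(importSql)) {
                for (int offset = 0; ; offset += pageSize) {
                    ps.setInt(1, offset);
                    ps.setInt(2, pageSize);
                    boolean sawRows = false;
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {
                            sawRows = true;
                            long id = rs.getLong("Id");
                            ins.setLong(1, id);
                            ins.setLong(2, id);
                            ins.executeUpdate();
                        }
                    }
                    if (!sawRows) break; // no more vendor rows to skim
                }
            }
        }
    }
}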
If you have db-create access to the SQL Server that the vendor's db is running on (or if your custom db is on the same server), you could create a "has been imported" table in a different database on that same server, and then write a stored proc that does a cross-database join of that table against the vendor db, e.g.:
select top 250 v.*
from vendordb.dbo.to_be_imported v
where not exists
(select 1 from customdb.dbo.has_been_imported i where i.idWasImported = v.idToBeImported)
order by whatever;
You might even be able to do this in Linq 2 SQL -- I've never tried adding objects from different databases into a single DataContext...