Insert and Select query at the same time - ms-access

I have two tables in MS Access
Table1 (name, carname)
Table2 (carname, agency)
I executed the following command
st.executeUpdate("insert into Table1 values('"+name+"','"+carname+"')");
and the value is inserted.
At the same time, if the carname in Table2 matches the one the user entered, it has to select the agency. How can I write the query? (These two operations have to happen at the same time.)

You can't do them "at the same time"; Access isn't capable of multi-threading. The only way to do it is to run the statements in succession, one after the other. If that's the case, just put the second statement into an If/Then/Else block and only run it if it fulfills your desired criteria. I honestly don't understand the question; are you trying to match up the last record inserted? If so, select the Max(PrimaryKey) and INNER JOIN it back to the table, and that will give you the record you last inserted.
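A minimal sketch of the two statements run back to back, with the values hard-coded for illustration (in JDBC you would bind them as parameters rather than concatenating strings):
INSERT INTO Table1 (name, carname) VALUES ('Alice', 'Civic');
SELECT agency FROM Table2 WHERE carname = 'Civic';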

Related

Npm mysql getting the insertId of each row when inserting more than one row [duplicate]

Short version
Would someone provide an example of this? There are 3 SQL tables. Using INSERT ... SELECT, take data from table 1 and insert it into table 2. Then, INSERT rows into table 3, using the auto-increment id of each table 2 row just inserted using that INSERT ... SELECT statement.
INSERT ... SELECT creates multiple rows, but you cannot obtain their auto-increment IDs for use in a subsequent INSERT statement.
Expanded version
I'm looking for an efficient way to use the auto increment IDs, created from an INSERT ... SELECT, in a second INSERT.
Imagine this scenario in a warehouse.
The warehouse receives a pallet of goods from a supplier. The pallet contains multiple individual items, which must be dispatched to different customers. The pallet is booked in, broken down and checked. Each item is then assigned to the correct customer and marked as "ready". At this point, each item is dispatched with the dispatch status recorded per customer. Each Customer's account balance is reduced by a given value based on the item.
The issue is linking the account reduction to the item dispatch. There are 3 tables:
GoodsIn: records the pallet arrival from the supplier
CREATE TABLE GoodsIn ('InID', 'CustomerID', 'ItemSKU_ID', 'HasBeenChecked')
GoodsOut: records the SKU dispatch to the Customer
CREATE TABLE GoodsOut ('OutID', 'CustomerID', 'ItemSKU_ID', 'DateDispatched')
Ledger: records each Customer transaction/balance
CREATE TABLE Ledger ('LedgerID', 'BalanceClose', 'AdjustmentAmount', 'CustomerID', 'ActionID')
(I've massively simplified this - please accept that GoodsIn and GoodsOut cannot be combined)
When an SKU is marked as ready for dispatch, I can use the following to automatically update the Ledger balance, taking the last balance row per customer and updating it
INSERT INTO Ledger (BalanceClose, AdjustmentAmount, CustomerID)
SELECT Ledger.BalanceClose +
(SELECT @Price := ItemSKUData.ItemPrice FROM ItemSKUData WHERE ItemSKUData.ItemSKU_ID = GoodsIn.ItemSKU_ID) AS NEWBALANCECLOSE,
@Price AS ADJUSTMENTAMOUNT,
Ledger.CustomerID
FROM Ledger
INNER JOIN GoodsIn ON GoodsIn.CustomerID = Ledger.CustomerID
WHERE GoodsIn.HasBeenChecked = TRUE
AND Ledger.LedgerID IN (SELECT MAX(Ledger.LedgerID) FROM Ledger GROUP BY Ledger.CustomerID)
This all works absolutely fine - I get a new Ledger row, with the updated BalanceClose, for each GoodsIn row where GoodsIn.HasBeenChecked = TRUE. Each of these Ledger rows gets an auto-increment Ledger.LedgerID on INSERT.
I can then do pretty much the same code to INSERT into the GoodsOut table. Again as with Ledger, GoodsOut.OutID is an auto-increment ID.
I now need to link those Ledger rows (Ledger.ActionID) to the GoodsOut.OutID. This is the purpose of Ledger.ActionID - it needs to map to each GoodsOut.OutID, so that the reduction of the Ledger balance is linked to the action of sending the goods to the customer.
In theory, if this were a single INSERT and not an INSERT ... SELECT, I would simply take LAST_INSERT_ID() from the GoodsOut insert and use it in the INSERT INTO Ledger.
But because I'm using an INSERT ... SELECT, I can't get the auto-increment ID of each row.
The only way I can see to do this is to use a dummy column in the GoodsOut table and store the GoodsIn.InID in it. I could then get the GoodsOut.OutID using a WHERE clause in the INSERT ... SELECT for the Ledger.
It doesn't feel very elegant or safe, though.
So this is my question. I need to link table A to table B using table B's auto-increment ID, when all rows in BOTH table A and table B are created using INSERT ... SELECT.
You're right: when you do INSERT ... SELECT for batch inserts, you don't have easy access to the auto-increment IDs. LAST_INSERT_ID() returns only the first ID generated.
One documented behavior of bulk inserts is that the generated IDs are guaranteed to be consecutive, because bulk inserts lock the table until the end of the statement.
https://dev.mysql.com/doc/refman/5.7/en/innodb-auto-increment-handling.html says:
innodb_autoinc_lock_mode = 1 (“consecutive” lock mode)
This is the default lock mode. In this mode, “bulk inserts” use the special AUTO-INC table-level lock and hold it until the end of the statement. This applies to all INSERT ... SELECT, REPLACE ... SELECT, and LOAD DATA statements. Only one statement holding the AUTO-INC lock can execute at a time.
This means that if you know the first value generated, the number of rows inserted (which you can get from ROW_COUNT()), and the order in which the rows were inserted, then you can reliably know all the generated IDs.
The MySQL JDBC driver relies on this, for example. When you do a bulk insert, the full list of generated IDs is not returned to the client (that is, the JDBC driver), but the driver has a Java method to return the full list. It does this by inferring the values, assuming they are consecutive.
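Building on that, a sketch of how the GoodsOut insert from the expanded question could capture its batch of IDs (assuming innodb_autoinc_lock_mode = 1; DateDispatched is filled with NOW() purely for illustration):
INSERT INTO GoodsOut (CustomerID, ItemSKU_ID, DateDispatched)
SELECT CustomerID, ItemSKU_ID, NOW()
FROM GoodsIn
WHERE HasBeenChecked = TRUE
ORDER BY InID; -- fix the insertion order so the ID mapping is predictable
SELECT LAST_INSERT_ID(), ROW_COUNT() INTO @first_out_id, @row_count;
-- The new OutIDs are @first_out_id .. @first_out_id + @row_count - 1,
-- assigned in the ORDER BY order above, so each GoodsIn row can be mapped
-- to its GoodsOut.OutID for the Ledger.ActionID link.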

MySQL Locking 3 consecutive interdependent queries

I don't have real code, sorry, only a problem description.
I would like to understand the best way to solve this problem.
I have 3 queries:
The first one is a long transaction which performs an SQL INSERT into a table.
The second query COUNTs the rows of that table after the INSERT has taken place.
The third query UPDATEs one field of the previously inserted record with the count retrieved by the second query.
So far so good; my 3 queries execute correctly.
Now suppose these 3 queries run inside an API call. If multiple API calls arrive too quickly and run simultaneously, the second (COUNT) query retrieves a wrong value, and consequently the third (UPDATE) query writes a wrong value too.
On top of that, I get deadlocks on the INSERT, because while one call is performing the INSERT, another call's SELECT COUNT tries to read the same rows at the same time.
My question is what would be the best approach to solve this kind of problem.
I don't need code. I just would like to understand the best way to go.
Would I need to lock all the tables, for example?
It is unclear what you are doing, but this might be faster:
CREATE TEMPORARY TABLE t ...; -- all columns except the count
INSERT INTO t ...; -- the incoming data
SELECT COUNT(*) INTO @ct FROM t;
INSERT INTO real_table
(...) -- including the count column last
SELECT ..., @ct FROM t; -- note how the count is tacked on last
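The same sketch with hypothetical column names filled in, so it can be run end to end (real_table and its columns are assumptions, not from the question):
CREATE TEMPORARY TABLE t (colA INT, colB VARCHAR(20)); -- all columns except the count
INSERT INTO t (colA, colB) VALUES (1, 'x'), (2, 'y'); -- the incoming data
SELECT COUNT(*) INTO @ct FROM t;
INSERT INTO real_table (colA, colB, row_total) -- the count column tacked on last
SELECT colA, colB, @ct FROM t;
DROP TEMPORARY TABLE t;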

MySQL Bulk Insert Dependent on Another Table

I have a case where I'm doing two queries: query1 is a bulk INSERT ... ON DUPLICATE KEY UPDATE on table1. For query2, I want to do another bulk INSERT on table2 with some application data along with using the ids inserted/updated from query1. I know I can do this with an intermediate query, selecting the ids I need from table1 and then inserting them into table2 along with application data, but I really want to avoid the extra network back-and-forth of that query along with the db overhead. Is there any way I can either get the ids inserted/updated from query1 when running that, or do some kind of complex, but relatively less expensive INSERT ... SELECT FROM in query2 to avoid this?
As far as I know, getting ids added/modified returned from query1 is impossible without a separate query, and I can't think of a way to batch INSERT ... SELECT FROM where the insertion values for each row are dependent on the selected value, but I'd love to be proven wrong, or shown a way around either of those.
There is no way to get a set of IDs as a result of a bulk INSERT.
One option you have is indeed to run a SELECT query to get the IDs and use them in the second bulk INSERT. But that's a hassle.
Another option is to run the 2nd bulk INSERT into a temporary table, let's call it table3, then use INSERT INTO table2 ... SELECT FROM ... table1 JOIN table3 ...
With a similar use case we eventually found that this is the fastest option, given that you index table3 correctly.
Note that in this case you don't have a SELECT that you need to loop over in your code, which is nice.
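A sketch of that temporary-table option (all names here are hypothetical; unique_key stands for whatever key drives the ON DUPLICATE KEY UPDATE in query1):
CREATE TEMPORARY TABLE table3 (
app_key VARCHAR(20) NOT NULL,
app_data VARCHAR(50),
INDEX (app_key) -- index the join column, as noted above
);
INSERT INTO table3 (app_key, app_data) -- the application data for the 2nd bulk INSERT
VALUES ('k1', 'd1'), ('k2', 'd2');
INSERT INTO table2 (table1_id, app_data) -- pick up the IDs inserted/updated in table1
SELECT table1.id, table3.app_data
FROM table1
JOIN table3 ON table3.app_key = table1.unique_key;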

MySQL UPDATE table1 and INSERT on table2 if id doesn't exist

I have a LEFT JOIN query that shows all the fields from a primary table (tblMarkers) and the values from a second table (tblLocation) where there is a matching record.
tblLocation does not have a record for every ID in tblMarkers.
$query ="SELECT `tblMarkers`.*,`tblLocation`.*,`tblLocation`.`ID` AS `markerID`
FROM
`tblMarkers`
LEFT JOIN `tblLocation` ON `tblMarkers`.`ID` = `tblLocation`.`ID`
WHERE
`tblMarkers`.`ID` = $id";
I am comfortable with using UPDATE to update the tblMarkers fields, but how do I UPDATE or INSERT a record in tblLocation if the record does not exist there yet?
Also, how do I lock the record I am working on to prevent someone else from updating it at the same time?
Can I also use UPDATE tblMarkers * or do I have to list every field in the UPDATE statement?
Unfortunately you might have to implement some validation in your outside script. There is an IF statement in SQL, but I'm not sure if you can trigger different commands based on its outcome.
Locking
In terms of locking, you have two options. For MyISAM tables, you can only lock the entire table: http://dev.mysql.com/doc/refman/5.0/en/lock-tables.html
LOCK TABLES users WRITE;
For InnoDB tables, there is no explicit 'lock' for single rows; however, you can use transactions to get exclusive rights during the operation. http://dev.mysql.com/doc/refman/5.0/en/innodb-locks-set.html
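A minimal sketch of the transaction approach for InnoDB, using the tblMarkers row from the question (the Title column is hypothetical):
START TRANSACTION;
SELECT * FROM tblMarkers WHERE ID = 1 FOR UPDATE; -- row is locked; other writers block here
UPDATE tblMarkers SET Title = 'new title' WHERE ID = 1;
COMMIT; -- releases the lock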
Update
There might be some shorthand notation, but I think you have to list every field in your query. Alternatively, you can always read the entire row, delete it, and insert it again using the shorthand INSERT syntax. It all depends on how many fields you've got.
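For example, listing the fields explicitly (the field names here are hypothetical, since the question doesn't show the schema):
UPDATE tblMarkers
SET Title = 'Fountain', Lat = 51.5007, Lng = -0.1246
WHERE ID = 1;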

Basic query on MySQL INSERT statement

I have N records in a table, and I want to move all of them from one table to another, say from the old table (table1) to the new one (table2). I have a query with a subquery to select the records for insertion.
Say there are 10000 records. While inserting record 6000 the statement hits an exception and stops, yet table2 is still empty. What I want to know is: were the first 5999 records ever inserted into the database?
Thanks in advance.
If this question is unworthy of an answer for some reason, let me know why it was downvoted so I can improve it.
I have a query with a subquery to select the records for insertion
I assume you have some INSERT INTO table2 (<column list>) SELECT <column list> FROM table1 WHERE ... that you are running to move the records.
If so, the INSERT statement runs as part of a transaction and is committed only if the statement executes successfully, i.e. only if it is able to INSERT all the records returned by that SELECT query. Otherwise, the transaction is rolled back and no records are inserted.
Were the first 5999 records ever inserted into the database?
Those records would only have existed in a temporary worktable while the INSERT statement was executing. They would have been committed to the main table only if everything had gone well.
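For a true move (copy then delete), the two statements can be wrapped in an explicit transaction so that either both take effect or neither does; a sketch with hypothetical column names:
START TRANSACTION;
INSERT INTO table2 (col1, col2)
SELECT col1, col2 FROM table1; -- atomic even on its own: all rows or none
DELETE FROM table1; -- empty the old table only if the copy succeeded
COMMIT;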