I am using Access 2007 (normally I use SQL Server). I'm trying to insert records into a table where certain values are IDs from other tables. For example:
insert into table values ((select id from another_table), 1, 'Hello', etc)
This is possible in SQL Server.
I get an error that says, "Query must contain at least one table" or something similar.
Anyone know the syntax for this in Access? I've tested all the selects and they all produce the expected results, but when inserted in the above type of statement, I get the error.
I don't want to extract too much data into memory, so I'd prefer to get the above to work, instead of loading the IDs and names as a collection of objects.
I am not exactly sure what you want, but perhaps you are looking at it from the wrong angle?
INSERT INTO Table (ID, FK_ID, F2)
SELECT 1, T2_ID, "Hello"
FROM Table2
WHERE ID=1
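Applied to the example in the question, the same INSERT ... SELECT pattern might look like the sketch below; the table and column names (target_table, other_id, qty, msg) and the WHERE condition are placeholders, since the real schema isn't shown:
INSERT INTO target_table (other_id, qty, msg)
SELECT another_table.id, 1, 'Hello'
FROM another_table
WHERE another_table.name = 'example';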
Related
I want to insert multiple rows into a table, using a single INSERT statement. This is no problem, since SQL offers the option to provide multiple rows as parameters for a single INSERT statement. Now, those rows contain an ID field that is incremented automatically, i.e. its value is set by the database, not by my code.
As a result, I would like to get the ID values of the inserted rows. My basic question is: How do I do that for MariaDB / MySQL?
As it turns out, this is pretty simple, e.g. in PostgreSQL, as PostgreSQL has the RETURNING clause for INSERT which returns the desired values for one or even for multiple rows. This is exactly what I want and it works.
Unfortunately, neither MariaDB nor MySQL has PostgreSQL's RETURNING clause, so I need to fall back to something such as LAST_INSERT_ID(), but this only returns the ID of the single last inserted row, even if multiple rows were inserted using a single INSERT. How do I get all the ID values?
My code currently looks like this:
INSERT INTO mytable
(foo, bar)
VALUES
('fooA', 'barA'),
('fooB', 'barB');
SELECT LAST_INSERT_ID() AS id;
How can I solve this issue in a way that works even with concurrent writes?
(And no, it's not an option to change to a UUID field, or something like this; the auto-increment field is given, and can not be changed.)
MySQL & MariaDB have the LAST_INSERT_ID() function, and it returns the id generated by the most recent INSERT statement in your current session.
But when your INSERT statement inserts multiple rows, LAST_INSERT_ID() returns the first id in the set generated.
In such a batch of multiple rows, you can rely on the subsequent ids being consecutive. The MySQL JDBC driver depends on this, for example.
If the rows you insert include a mix of NULL and non-NULL values for the id column, you risk breaking this assumption, and the JDBC driver then returns the wrong values for the set of generated ids.
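One hedged workaround, building on the consecutive-ids behaviour described above, is to combine LAST_INSERT_ID() with ROW_COUNT() right after the INSERT to reconstruct the whole range of generated ids:
INSERT INTO mytable (foo, bar)
VALUES
('fooA', 'barA'),
('fooB', 'barB');
-- LAST_INSERT_ID() is the first id of the batch and ROW_COUNT() is the
-- number of rows just inserted, so, assuming consecutive allocation,
-- the generated ids run from first_id through last_id.
SELECT LAST_INSERT_ID() AS first_id,
       LAST_INSERT_ID() + ROW_COUNT() - 1 AS last_id;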
As stated in the comments, you can capture the inserted IDs (SQL Server):
use tempdb
create table test (
id int identity(1,1) primary key,
t varchar(10) null
)
create table ids (
i int not null
)
insert test(t)
output inserted.id into ids
values (null), (null), (null)
select *
from test
select *
from ids
I have to import loads of files into a database; the problem is that, over time, the table gained more columns.
The files are all insert lines from SQLite, but I need them in MySQL. SQLite doesn't provide column names in its SQL files, so the MySQL script crashes when there are more or fewer columns than in the insert statement.
Is there a solution for this? Maybe using a join?
The newly added columns are at the end, so the first ones are ALWAYS the same.
Is there any possibility to insert the SQL file into a temporary table, then make a join on an empty table (or one ghost record) to get the right number of columns, and then do an insert of each line from that table into the table I want the data in?
Files look like:
INSERT into theTable Values (1,1,Text,2913, txt,);
And if columns were added the file is like
INSERT into theTable Values (1,1,Text,2913, txt,added-Text);
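A rough sketch of that staging-table idea (all column names and types below are invented, since the files only show values, not names): create a staging table whose columns match what the older files provide, run those files against it, then copy the rows across and let MySQL fill the newer columns with their defaults.
CREATE TABLE theTable_staging (
  col1 INT,
  col2 INT,
  col3 VARCHAR(255),
  col4 INT,
  col5 VARCHAR(255)
);
-- Run the older dump files against theTable_staging (e.g. after a
-- search-and-replace of the table name in the file), then:
INSERT INTO theTable (col1, col2, col3, col4, col5)
SELECT col1, col2, col3, col4, col5
FROM theTable_staging;
DROP TABLE theTable_staging;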
I want to copy data from one database, which has a table named domains, to another database which also has a table named domains.
I have tried doing it using phpMyAdmin but it doesn't copy, maybe because of the auto_increment values; it just doesn't get copied to the other database.table.
What can be done in this regard? Also, I don't want to copy the old ID (auto_increment) values from the first database to the other.
phpMyAdmin response:
#1136 - Column count doesn't match value count at row 1
Both structures are the same in the two databases, yet I still get this.
My query:
INSERT INTO `site1`.`domains`
SELECT * FROM `site33`.`domains`
^ This much is fixed.
Now comes the auto_increment problem. I get:
#1062 - Duplicate entry '1' for key 'PRIMARY'
You could do something like this:
INSERT INTO database2.table1 (field2,field3)
SELECT table2.field2,table2.field3
FROM table2;
Note:
Do not include the auto-increment column in your INSERT and SELECT statements.
Sql Fiddle example
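Applied to the domains tables from the question, it might look like the sketch below; name and owner are placeholder columns, since the actual schema wasn't posted. Listing every column except the auto-increment id lets MySQL assign fresh ids in site1:
INSERT INTO `site1`.`domains` (name, owner)
SELECT name, owner
FROM `site33`.`domains`;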
You are using an insert statement that has an incorrect number of values, for instance:
for a table with columns a, b and c, these are all invalid:
INSERT INTO YourTable VALUES (1, 2);
INSERT INTO YourTable(b, c) VALUES (1, 2, 3);
INSERT INTO YourTable(a, b, c) VALUES (2, 3);
The number of columns in the column list must match the number of values in the value list.
You may omit the column list, but only if you specify one value for each column, in the order in which the columns exist in the table. This is bad practice, though; it is better to always specify the exact columns you need.
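By contrast, statements like these sketches (using the same hypothetical table with columns a, b and c) have matching column and value counts and would be accepted:
INSERT INTO YourTable (a, b, c) VALUES (1, 2, 3);
INSERT INTO YourTable (b, c) VALUES (2, 3);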
I got a table with a normal setup of auto inc. ids. Some of the rows have been deleted so the ID list could look something like this:
(1, 2, 3, 5, 8, ...)
Then, from another source (Edit: Another source = NOT in a database) I have this array:
(1, 3, 4, 5, 7, 8)
I'm looking for a query I can use on the database to get the list of ID:s NOT in the table from the array I have. Which would be:
(4, 7)
Does such a query exist? My solution right now is either creating a temporary table so that "WHERE table.id IS NULL" works, or, probably worse, using the PHP function array_diff to see what's missing after having retrieved all the ids from the table.
Since the list of ids is closing in on millions of rows, I'm eager to find the best solution.
Thank you!
/Thomas
Edit 2:
My main application is a rather simple table which is populated by a lot of rows. This application is administrated using a browser and I'm using PHP as the interpreter for the code.
Everything in this table is to be exported to another system (which is a 3rd-party product) and there's as yet no way of doing this besides manually using the import function in that program. It's also possible to insert new rows in the other system, although the agreed routine is to never do this.
The problem then is that my system cannot be 100 % sure that the user did everything correctly when he/she pressed the "export" key, or that no rows have ever been created in the other system.
From the other system I can get a CSV file containing all the rows that system has. So, by comparing the CSV file and my table, I can see if:
* There are any rows missing in the other system that should have been imported
* If someone has created rows in the other system
The problem isn't "solving it". It's finding the best solution, since there is so much data in the rows.
Thanks again!
/Thomas
We can use MySQL's NOT IN option.
SELECT id
FROM table_one
WHERE id NOT IN ( SELECT id FROM table_two )
Edited
If you are getting the source from a CSV file then you can simply put these values in directly, like this:
I am assuming that the CSV is like 1,2,3,...,n
SELECT id
FROM table_one
WHERE id NOT IN ( 1,2,3,...,n );
EDIT 2
Or, if you want to do it the other way around, you can use LOAD DATA INFILE to import the data into a temporary table in the MySQL database, retrieve the result, and then drop the table.
Like:
Create table
CREATE TABLE my_temp_table(
  ids INT
);
load .csv file
LOAD DATA LOCAL INFILE 'yourIDs.csv' INTO TABLE my_temp_table
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
(ids);
Selecting records
SELECT ids FROM my_temp_table
WHERE ids NOT IN ( SELECT id FROM table_one )
dropping table
DROP TABLE IF EXISTS my_temp_table
What about using a LEFT JOIN; something like this:
select second_table.id
from second_table
left join first_table on first_table.id = second_table.id
where first_table.id is null
You could also go with a sub-query; depending on the situation, it might or might not be faster, though:
select second_table.id
from second_table
where second_table.id not in (
select first_table.id
from first_table
)
Or with a NOT EXISTS:
select second_table.id
from second_table
where not exists (
select 1
from first_table
where first_table.id = second_table.id
)
The function you are looking for is NOT IN (an alias for <> ALL)
The MySQL documentation:
http://dev.mysql.com/doc/refman/5.0/en/all-subqueries.html
An Example of its use:
http://www.roseindia.net/sql/mysql-example/not-in.shtml
Enjoy!
The problem is that T1 could have a million or ten million rows, and that number could change, so you don't know how many rows your comparison table T2 (the one that has no gaps) should have for doing a WHERE NOT EXISTS or a LEFT JOIN testing for NULL.
But the question is, why do you care if there are missing values? I submit that, when an application is properly architected, it should not matter if there are gaps in an autoincrementing key sequence. Even an application where gaps do matter, such as a check register, should not be using an autoincrementing primary key as a synonym for the check number.
Care to elaborate on your application requirement?
OK, I've read your edits/elaboration. Synchronizing two databases where the second is not supposed to insert any new rows, but might do so, sounds like a problem waiting to happen.
Neither approach suggested above (WHERE NOT EXISTS or LEFT JOIN) is air-tight and neither is a way to guarantee logical integrity between the two systems. They will not let you know which system created a row in situations where both tables contain a row with the same id. You're focusing on gaps now, but another problem is duplicate ids.
For example, if both tables have a row with id 13887, you cannot assume that database1 created the row. It could have been inserted into database2, and then database1 could insert a new row using that same id. You would have to compare all column values to ascertain that the rows are the same or not.
I'd suggest therefore that you also explore GUID as a replacement for autoincrementing integers. You cannot prevent database2 from inserting rows, but at least with GUIDs you won't run into a problem where the second database has inserted a row and assigned it a primary key value that your first database might also use, resulting in two different rows with the same id. CreationDateTime and LastUpdateDateTime columns would also be useful.
However, a proper solution, if it is available to you, is to maintain just one database and give users remote access to it, for example, via a web interface. That would eliminate the mess and complication of replication/synchronization issues.
If a remote-access web-interface is not feasible, perhaps you could make one of the databases read-only? Or does database2 have to make updates to the rows? Perhaps you could deny insert privilege? What database engine are you using?
I have the same problem: I have a list of values from the user, and I want to find the subset that does not exist in another table. I did it in Oracle by building a pseudo-table in the select statement. Here's a way to do it in Oracle; try it in MySQL without the "from dual":
-- find ids from user (1,2,3) that *don't* exist in my person table
-- build a pseudo table and join it with my person table
select pseudo.id from (
select '1' as id from dual
union select '2' as id from dual
union select '3' as id from dual
) pseudo
left join person
on person.person_id = pseudo.id
where person.person_id is null
I have a single procedure that has two insert statements in it for two different tables. I must insert data into table1 before I can insert into table2. I'm using PHP to do the data collection. What I'd like to know is how to insert multiple rows into table2, which can have many rows associated with table1. How would I do this?
I want to only store the person in table1 just one time but table2 requires multiple rows. If these insert statements were in separate procedures, I wouldn't have a problem but I just don't know how I would insert more than one row into table2 without table1 rejecting a second duplicate record.
BEGIN
INSERT INTO user(name, address, city) VALUES(Name, Address, City);
INSERT INTO `order`(order_id, `desc`) VALUES(OrderNo, Description);
END
I'd suggest you do it separately, otherwise you'd need a complicated solution which is prone to error if something changes.
The complicated solution is:
join all orderno and descriptions with a separator. (orderno#description)
join all orders with a different separator. (orderno#description/orderno#description/...)
pass it to the procedure
in the procedure, split the string by order separator, then loop through each of them
for each order, split the string by the first separator, then insert into the appropriate columns
As you can see, this is bad.
I am sorry, but what's stopping you from inserting data into these (seemingly unrelated) tables in separate queries? If you don't like the idea of it failing halfway through, you can wrap it in a transaction. I know mysqli and PDO can do that just fine.
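A minimal sketch of that separate-statements approach, assuming MySQL, an auto-increment id on the user table, and a user_id column on the order table (none of which are shown in the original post):
START TRANSACTION;
INSERT INTO user (name, address, city)
VALUES ('Jane', '1 Main St', 'Springfield');
-- LAST_INSERT_ID() is the auto-increment id generated by the INSERT above,
-- so the order rows can reference the new user without a second lookup.
SET @new_user_id = LAST_INSERT_ID();
INSERT INTO `order` (user_id, order_id, `desc`)
VALUES (@new_user_id, 1001, 'first order'),
       (@new_user_id, 1002, 'second order');
COMMIT;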
Answering your question directly: INSERT's IGNORE mode turns errors during insertion into warnings, so upon attempting to insert a duplicate row a warning is issued and the row is not inserted, but there is no error.
You could use the IGNORE keyword on the first statement.
http://dev.mysql.com/doc/refman/5.1/en/insert.html:
If you use the IGNORE keyword, errors that occur while executing the INSERT statement are treated as warnings instead. For example, without IGNORE, a row that duplicates an existing UNIQUE index or PRIMARY KEY value in the table causes a duplicate-key error and the statement is aborted. With IGNORE, the row still is not inserted, but no error is issued.
But somehow this seems rather inefficient to me, a "stabbed from behind through the chest in the eye" solution.