I wonder if there is a way to have an SQL table update itself dynamically.
I have table1 and table2, and I need to create a table3 using UNION where both tables' ID column (PK) match. The issue is that I do not want to keep recreating table3; instead, if I add a record to either table, it should appear in table3 automatically.
Any advice on how this is done, if possible, or where I should look?
Thanks
Table3 shouldn't be a table, it should be a view.
From the perspective of any given SELECT query and any consuming application looking at the data, a view can be treated like any other table. The fact that it's not a table is entirely transparent in those cases.
What a view does is compile and store a query which examines other tables, and presents the results of that query in a table structure. So any time you select from the view, you're dynamically selecting from the current state of the tables it examines.
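For example, a minimal sketch, assuming both tables have id and name columns and that "match" means the same id exists in both tables (adjust the column list to your real schema):

CREATE VIEW table3 AS
    SELECT t1.id, t1.name
    FROM table1 t1
    WHERE t1.id IN (SELECT id FROM table2)
    UNION
    SELECT t2.id, t2.name
    FROM table2 t2
    WHERE t2.id IN (SELECT id FROM table1);

Any row inserted into table1 or table2 later will show up in table3 automatically the next time the view is queried.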
I have two tables T1 and T2 and want to update one field of T1 from T2 where T2 holds massive data.
What is more efficient?
Updating T1 in a for loop iteration over the values
or
Left join it with T2 and update.
Please note that I'm updating these tables in a shell script.
In general, the JOIN will work much better than a loop. Table size should not be an issue if the join columns are properly indexed.
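As a rough sketch of the join approach (the column and key names here are assumptions, since the actual schema wasn't given):

UPDATE T1
JOIN T2 ON T2.foreign_key = T1.primary_key   -- assumed join columns
SET T1.field1 = T2.field2;                    -- assumed target and source fields

A single statement like this lets the server do the matching, instead of issuing one UPDATE per row from the shell script.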
There is no simple answer as to which will be more effective; it depends on the table size and on how much data you are going to update in one go.
Suppose you are using the InnoDB engine and frequently updating 1,000 or more rows in one go through a join of two heavy tables. That is not a good idea on a production server, because it will lock the table for some time, and that locking can affect other operations on the server.
Option 1: If you are updating only a few rows, based on properly indexed fields (preferably the primary key), then you can go with the join.
Option 2: If you are updating a large amount of data based on a multi-table join, then the approach below will be better:
Step 1: Create a stored procedure.
Step 2: Keep the results of the query below in a cursor.
Suppose you want to update field1 of table1 with the corresponding field2 data from table2:
SELECT a.primary_key, b.field2 FROM table1 a JOIN table2 b ON a.primary_key = b.foreign_key WHERE [place condition here, if any];
Step 3: Now update the rows one by one, based on the primary key, using the values stored in the cursor.
Step 4: Call this stored procedure from your script (a sketch of such a procedure follows below).
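Putting those steps together, a minimal sketch of such a procedure might look like this (the cursor query is the one from Step 2; the column types and the field1/field2 names are assumptions):

DELIMITER $$
CREATE PROCEDURE update_field1_from_table2()
BEGIN
    DECLARE done INT DEFAULT 0;
    DECLARE v_pk INT;
    DECLARE v_field2 VARCHAR(255);
    -- Step 2: keep the join results in a cursor
    DECLARE cur CURSOR FOR
        SELECT a.primary_key, b.field2
        FROM table1 a
        JOIN table2 b ON a.primary_key = b.foreign_key;
    DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;

    OPEN cur;
    read_loop: LOOP
        FETCH cur INTO v_pk, v_field2;
        IF done THEN
            LEAVE read_loop;
        END IF;
        -- Step 3: update one row at a time by primary key
        UPDATE table1 SET field1 = v_field2 WHERE primary_key = v_pk;
    END LOOP;
    CLOSE cur;
END $$
DELIMITER ;

-- Step 4: from your script, run
CALL update_field1_from_table2();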
I can "copy" a table using:
CREATE TABLE copy LIKE original_table
and
CREATE TABLE copy as select * from original_table
In the latter case only the data is copied, but not, e.g., primary keys etc.
So I was wondering: when would I prefer to use the AS SELECT form?
These do different things. CREATE TABLE LIKE creates an empty table with the same structure as the original table.
CREATE TABLE AS SELECT inserts the data into the new table. The resulting table is not empty. In addition, CREATE TABLE AS SELECT is often used with more complicated queries, to generate temporary tables. There is no "original" table in this case. The results of the query are just captured as a table.
EDIT:
The "standard" way to do backup is to use . . . . backup at the database level. This backs up all objects in the database. Backing up multiple tables is important, for instance, to maintain relational integrity among the objects.
If you just want a real copy of a table, first do a create table like and then insert into. However, this can pose a challenge with auto_increment fields. You will probably want to drop the auto_increment property on the column so you can populate such columns.
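For an exact copy, that two-step approach looks like this, using the table names from the question:

CREATE TABLE copy LIKE original_table;
INSERT INTO copy SELECT * FROM original_table;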
The second form is often used when the new table is not an exact copy of the old table, but contains only selected columns or columns that result from a join.
"Create Table as Select..." are most likely used when you have complex select
e.g:
create table t2 as select * from t1 where x1=7 and y1 <>2 from t1;
Now, apparently you should use Create Like if you don't need such complex selects. You can change the PI in this syntax also.
I have this query that works fine. It deletes records that are old, based on the current time.
$cleanacc_1 = "DELETE FROM $acc_1
WHERE `Scheduled` < DATE_SUB(UTC_TIMESTAMP(), INTERVAL 30 SECOND)";
$result = mysql_query($cleanacc_1);
However, there are over 100 tables (accounts) that need deleting and I was wondering if I can combine them into one query. If possible how?
This implies you create a new table for every account. Why are you not creating a record for each account within a single table?
For example...
create table account (id int unsigned primary key auto_increment, other fields...);
If you alter your table structure you will be able to delete individual account records with a single query...
delete from account where condition=true;
Individual transaction records for each account are then stored in another table and contain the account id they relate to...
create table transaction (id, account_id, other transaction fields);
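With that layout, the original time-based cleanup becomes a single statement across all accounts; a sketch, assuming the Scheduled column moves into the transaction table:

DELETE FROM `transaction`
WHERE `Scheduled` < DATE_SUB(UTC_TIMESTAMP(), INTERVAL 30 SECOND);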
If you don't change the database design you'll need to write PHP code that loops through each table and runs your delete query. This is very inefficient and I urge you to redesign the table as suggested.
If you don't understand why my table redesign suggestion is a better approach, post more information about your database and I'll explain in more detail with a working example.
There is no way to do that, AFAIK; anyway, I don't think it would be a big problem to run 100 queries, assuming you are not running them on each request or so.
Are you expecting performance issues? If that's the case, I'd probably use a cron job to run that query every X minutes..
You could set up a view of the tables and then run the delete SQL against the view. That should delete the underlying table data as well. Your table schema and permissions could affect whether this will work or not. Check out this answer; it might help as well.
Does deleting row from view delete row from base table - MYsql?
Please consider the following example.
I have three tables in following structure.
Table names : t1,t2,t3
Fields : Id, name
I'm going to perform a delete query with one condition: the record id must be less than 10.
DELETE FROM t1, t2, t3 USING t1 INNER JOIN t2 INNER JOIN t3 WHERE t1.id < 10 AND t2.id < 10 AND t3.id < 10;
The query executed successfully (MySQL), and I got the expected output.
So please try the same way with your condition.
I got a table with a normal setup of auto inc. ids. Some of the rows have been deleted so the ID list could look something like this:
(1, 2, 3, 5, 8, ...)
Then, from another source (Edit: Another source = NOT in a database) I have this array:
(1, 3, 4, 5, 7, 8)
I'm looking for a query I can use on the database to get the list of ID:s NOT in the table from the array I have. Which would be:
(4, 7)
Does such exist? My solution right now is either creating a temporary table so the command "WHERE table.id IS NULL" works, or probably worse, using the PHP function array_diff to see what's missing after having retrieved all the ids from table.
Since the list of ids is closing in on millions of rows, I'm eager to find the best solution.
Thank you!
/Thomas
Edit 2:
My main application is a rather simple table which is populated by a lot of rows. This application is administered using a browser, and I'm using PHP as the interpreter for the code.
Everything in this table is to be exported to another system (a 3rd party product), and there is as yet no way of doing this besides manually using the import function in that program. It's also possible to insert new rows in the other system, although the agreed routine is to never ever do this.
The problem is then that my system cannot be 100% sure that the user did everything correctly after he/she pressed the "export" key, or that no rows have ever been created in the other system.
From the other system I can get a CSV file containing all the rows that system has. So, by comparing the CSV file and my table I can see if:
* there are any rows missing in the other system that should have been imported
* someone has created rows in the other system
The problem isn't "solving it". It's finding the best solution, since there is so much data in the rows.
Thanks again!
/Thomas
You can use MySQL's NOT IN option.
SELECT id
FROM table_one
WHERE id NOT IN ( SELECT id FROM table_two )
Edited
If you are getting the source from a CSV file, then you can simply put these values in directly, like this:
I am assuming that the CSV values are like 1,2,3,...,n
SELECT id
FROM table_one
WHERE id NOT IN ( 1,2,3,...,n );
EDIT 2
Or, if you want to do it the other way around, you can use mysqlimport to import the data into a temporary table in the MySQL database, retrieve the result, and then drop the table.
Like:
Create table
CREATE TABLE my_temp_table(
    ids INT
);
load .csv file
LOAD DATA LOCAL INFILE 'yourIDs.csv' INTO TABLE my_temp_table
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
(ids);
Selecting records
SELECT ids FROM my_temp_table
WHERE ids NOT IN ( SELECT id FROM table_one )
dropping table
DROP TABLE IF EXISTS my_temp_table
What about using a left join; something like this:
select second_table.id
from second_table
left join first_table on first_table.id = second_table.id
where first_table.id is null
You could also go with a sub-query; depending on the situation, it might or might not be faster, though:
select second_table.id
from second_table
where second_table.id not in (
select first_table.id
from first_table
)
Or with a not exists:
select second_table.id
from second_table
where not exists (
select 1
from first_table
where first_table.id = second_table.id
)
The function you are looking for is NOT IN (an alias for <> ALL)
The MYSQL documentation:
http://dev.mysql.com/doc/refman/5.0/en/all-subqueries.html
An Example of its use:
http://www.roseindia.net/sql/mysql-example/not-in.shtml
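Inline, the equivalence mentioned above looks like this (a sketch reusing the table_one/table_two names from the earlier answer):

SELECT id FROM table_one WHERE id NOT IN ( SELECT id FROM table_two );

-- the same query written with the ALL form
SELECT id FROM table_one WHERE id <> ALL ( SELECT id FROM table_two );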
Enjoy!
The problem is that T1 could have a million rows or ten million rows, and that number could change, so you don't know how many rows your comparison table T2 (the one that has no gaps) should have in order to do a WHERE NOT EXISTS or a LEFT JOIN testing for NULL.
But the question is, why do you care if there are missing values? I submit that, when an application is properly architected, it should not matter if there are gaps in an autoincrementing key sequence. Even an application where gaps do matter, such as a check register, should not be using an autoincrementing primary key as a synonym for the check number.
Care to elaborate on your application requirement?
OK, I've read your edits/elaboration. Synchronizing two databases where the second is not supposed to insert any new rows, but might do so, sounds like a problem waiting to happen.
Neither approach suggested above (WHERE NOT EXISTS or LEFT JOIN) is air-tight and neither is a way to guarantee logical integrity between the two systems. They will not let you know which system created a row in situations where both tables contain a row with the same id. You're focusing on gaps now, but another problem is duplicate ids.
For example, if both tables have a row with id 13887, you cannot assume that database1 created the row. It could have been inserted into database2, and then database1 could insert a new row using that same id. You would have to compare all column values to ascertain that the rows are the same or not.
I'd suggest therefore that you also explore GUID as a replacement for autoincrementing integers. You cannot prevent database2 from inserting rows, but at least with GUIDs you won't run into a problem where the second database has inserted a row and assigned it a primary key value that your first database might also use, resulting in two different rows with the same id. CreationDateTime and LastUpdateDateTime columns would also be useful.
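A minimal sketch of what that could look like (the table and column names here are purely illustrative, not taken from your schema):

CREATE TABLE exported_row (
    id CHAR(36) NOT NULL PRIMARY KEY,   -- holds a UUID instead of an auto_increment integer
    CreationDateTime DATETIME,
    LastUpdateDateTime DATETIME
    -- ... the rest of your columns ...
);

INSERT INTO exported_row (id, CreationDateTime, LastUpdateDateTime)
VALUES (UUID(), UTC_TIMESTAMP(), UTC_TIMESTAMP());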
However, a proper solution, if it is available to you, is to maintain just one database and give users remote access to it, for example, via a web interface. That would eliminate the mess and complication of replication/synchronization issues.
If a remote-access web-interface is not feasible, perhaps you could make one of the databases read-only? Or does database2 have to make updates to the rows? Perhaps you could deny insert privilege? What database engine are you using?
I have the same problem: I have a list of values from the user, and I want to find the subset that does not exist in another table. I did it in Oracle by building a pseudo-table in the select statement. Here's a way to do it in Oracle; try it in MySQL without the "from dual":
-- find ids from user (1,2,3) that *don't* exist in my person table
-- build a pseudo table and join it with my person table
select pseudo.id from (
select '1' as id from dual
union select '2' as id from dual
union select '3' as id from dual
) pseudo
left join person
on person.person_id = pseudo.id
where person.person_id is null
I have a table containing about 500 000 rows. Once a day, I will try to synchronize this table with an external API. Most of the time there are few or no changes since the last update. My question is basically: how should I construct my MySQL query for the best performance? I have thought about using INSERT IGNORE, but it doesn't feel like the best way to go, since only a few rows will be inserted and MySQL must loop through all rows in the table. I have also thought about using LOAD DATA INFILE to insert all rows into a temporary table, then selecting the rows not already in my original table, and then removing the temporary table. Maybe someone else has a better suggestion?
Thank you in advance!
I usually use a temporary table and the LOAD DATA INFILE bulk loader. The bulk loader is much more efficient than trying to insert records using a dynamically created query.
If you index your permanent tables with appropriate unique keys that relate to the keys in the API, then you should find that the INSERT and UPDATE statements work pretty fast. An example of the type of INSERT query I use is as follows:
INSERT INTO keywords(api_adgroup_id, api_keyword_id, keyword_text, match_type, status)
SELECT a.api_adgroup_id, a.api_keyword_id, a.keyword_text, a.match_type, a.status
FROM tmp_keywords a LEFT JOIN keywords b ON a.api_adgroup_id = b.api_adgroup_id AND a.api_keyword_id = b.api_keyword_id
WHERE b.api_keyword_id IS NULL
In this example, I perform an OUTER JOIN on the keywords table to check if it already exists. Only new rows in the temporary table where there isn't a match in the main table (the api_keyword_id in the keywords table is NULL) are inserted.
Also note that in this example I need to use both the ad group id AND the keyword id to uniquely identify the keyword because the AdWords API gives the same keyword/match type combination the same id when it exists in more than one ad group.
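For the UPDATE side of the same pattern, a hedged sketch along these lines could bring changed rows up to date (column names are assumed to mirror the INSERT example above):

UPDATE keywords b
JOIN tmp_keywords a
    ON a.api_adgroup_id = b.api_adgroup_id
   AND a.api_keyword_id = b.api_keyword_id
SET b.keyword_text = a.keyword_text,
    b.match_type = a.match_type,
    b.status = a.status;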