I am not a database engineer, but I have a question about what is possible with a MySQL database.
Is it possible to write SQL that gets data from several tables and then uses that data to update another table?
Also, this work should be scheduled to run daily.
The reason I ask is that I am in this situation:
Our IT department maintains a big database, but its tables do not meet our department's business needs (we only have read permission). Our department has a small database (where we have full permissions), in which we can use custom SQL to create some special tables and update them daily.
So, back to the question: is it possible to set up the SQL and schedule it so that it keeps updating our tables?
Thank you so much!
Is it possible to write SQL that gets data from several tables and
then uses that data to update another table?
Yes, it is possible. You can use an UPDATE ... JOIN construct: get the data from several tables with a SELECT statement, JOIN with that inline view, and perform the update operation on your other table.
Example:
UPDATE Your_Table a
JOIN (
    -- SELECT query to get data from multiple other tables
) xxx ON a.some_column = xxx.some_matching_column
SET a.column_c = xxx.column_c;
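For instance, here is a minimal sketch with the subquery filled in. All table and column names (orders, customers, daily_totals) are hypothetical, not from the original question:
-- Hypothetical example: refresh a summary table from two source tables.
UPDATE daily_totals d
JOIN (
    SELECT o.customer_id, SUM(o.amount) AS total_amount
    FROM orders o
    JOIN customers c ON c.id = o.customer_id
    GROUP BY o.customer_id
) t ON d.customer_id = t.customer_id
SET d.total_amount = t.total_amount;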
Also, this work should be scheduled daily
Sure, use the MySQL Event Scheduler.
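As a minimal sketch (reusing the hypothetical tables from above), you would enable the scheduler once and then create a daily event:
-- Enable the event scheduler (requires an administrative privilege).
SET GLOBAL event_scheduler = ON;

-- Hypothetical daily event that re-runs the summary update.
CREATE EVENT refresh_daily_totals
ON SCHEDULE EVERY 1 DAY
STARTS CURRENT_TIMESTAMP
DO
  UPDATE daily_totals d
  JOIN (
      SELECT o.customer_id, SUM(o.amount) AS total_amount
      FROM orders o
      GROUP BY o.customer_id
  ) t ON d.customer_id = t.customer_id
  SET d.total_amount = t.total_amount;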
I am not sure if this has been answered before, but what I found I am not sure how to make work for me. Here is my problem:
I have a database used to keep track of phones for multiple clients. What needs to be done is to have a query that can be run against multiple databases on the same server. Each database uses the same table name that I am looking at, but the database names are slightly different. I came up with this:
INSERT INTO `export db`.exportinfo2
SELECT *
FROM (SELECT * FROM `export db`.tentantnames).users
WHERE name = 'Caller ID:emergency' AND value > 0
What is supposed to happen: starting from a table that holds all the database names, the query should go to each database, look in the table labeled users, apply a WHERE clause to the data, then export the results to a table in a different database.
I know the code needs to be dynamic, but I am not sure how to make it dynamic and functional. The table that holds all the database names is automatically recreated every few days. I am not sure what else needs to be said without repeating myself: I just need help building a dynamic query that takes database names from a premade table and runs a WHERE statement against the same-named table in each of those databases.
You should look into Synonyms. They can be used to fulfill your purpose.
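As a rough sketch, assuming SQL Server (synonyms are not a MySQL feature), you could point a synonym at one tenant's table, run the query, and repoint it for the next tenant. All names here are hypothetical:
-- Hypothetical sketch, assuming SQL Server.
CREATE SYNONYM dbo.current_tenant_users FOR tenant_db_1.dbo.users;

INSERT INTO [export db].dbo.exportinfo2
SELECT *
FROM dbo.current_tenant_users
WHERE name = 'Caller ID:emergency' AND value > 0;

DROP SYNONYM dbo.current_tenant_users;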
In reviewing many of the answers, I don't see a solution to something I feel should be simple.
I'm attempting to update a couple of fields in my Production database on one server from a restored database on another server, because of a loss of data caused by our ERP vendor's update.
Anyway, I have both servers connected in SSMS and just want to run the query below:
USE coll18_production;
GO
USE coll18_test2;
GO
UPDATE coll18_Production.dbo.PERSON
SET W71_ID_CRD_NO = T2.PERSON_USER1, W71_ID_CRD_DATE = T2.PERSON_USER9
FROM coll18_test2.dbo.PERSON as T2
WHERE coll18_Production.dbo.PERSON.ID = T2.ID;
I would think this would be a simple update, but I can't make a query work against two databases on different servers via the same tables.
Thanks if anyone can make this simple,
Donald
Okay, thanks for the input. In the essence of time, I'm going to do something similar to what cpaccho recommended: create a temp table containing the 2 fields that I want to update from in my Production database. Then I'm going to connect to my Test2 database that I restored from backup, export these two fields plus the primary key as a CSV file, and simply load that data into the temp table in my production database. Then I'll simply run my update from this temp table into the 2 fields in my production PERSON table where the IDs equal each other. A sketch of the plan is below.
Have a great weekend,
Donald
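For reference, a minimal sketch of that plan in T-SQL. The staging table name, column types, and CSV path are all made up for illustration:
-- Hypothetical sketch of the temp-table plan.
USE coll18_production;
GO
CREATE TABLE dbo.PERSON_restore_staging (
    ID           int          NOT NULL PRIMARY KEY,
    PERSON_USER1 varchar(50)  NULL,
    PERSON_USER9 varchar(50)  NULL
);

-- Load the CSV exported from the restored Test2 database.
BULK INSERT dbo.PERSON_restore_staging
FROM 'C:\restore\person_fields.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2);

-- Apply the two fields back to production.
UPDATE p
SET W71_ID_CRD_NO   = s.PERSON_USER1,
    W71_ID_CRD_DATE = s.PERSON_USER9
FROM dbo.PERSON AS p
JOIN dbo.PERSON_restore_staging AS s ON s.ID = p.ID;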
The problem is that, since the databases are on 2 different servers, in order to join between them you will need a way for the servers to talk to each other.
The way to do that is through linked servers. Then you can set up your query to join the 2 tables together using four-part naming (Server.DB.Schema.Table) and accomplish your goal. The query will look sort of like this:
UPDATE a
SET a.column = b.column
FROM Server1.DB.Schema.Table1 a
INNER JOIN Server2.DB.Schema.Table2 b
    ON a.column = b.column
WHERE a.column = something
You will only need to set up the linked server on one side, and the server name in the query will be the name you give the linked server. The only caveat is that this can be slow, because in order to join the tables SQL Server may have to copy an entire table from one server to the other. I would also set up the linked server on the server you are updating, so that you run the update on the same server as the DB it targets.
How to set up a Linked Server (Microsoft KB)
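A minimal sketch of registering the linked server with the built-in system procedures (the linked server name and remote login are hypothetical):
-- Register the other SQL Server instance as a linked server.
EXEC sp_addlinkedserver
    @server = N'RESTORE_SERVER',
    @srvproduct = N'SQL Server';

-- Map logins to a remote SQL login (credentials are placeholders).
EXEC sp_addlinkedsrvlogin
    @rmtsrvname = N'RESTORE_SERVER',
    @useself = N'FALSE',
    @rmtuser = N'restore_user',
    @rmtpassword = N'********';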
A simple, rather hacky way would be to hard copy the table from database to database...
First create a table that contains the changes you want:
USE coll18_test2;
GO
SELECT PERSON_USER1, PERSON_USER9, ID
INTO dbo.MyMigrationOrWhateverNameYouLike
FROM coll18_test2.dbo.PERSON
Then go to SSMS, right-click on the coll18_test2 database --> Tasks --> Generate Scripts, and follow the wizard to generate a script for the newly created table. Don't forget to set, in the advanced options, "Types of data to script" to "Schema and data".
Now that you have your script, just run it in your production database, and write your query based on that table:
UPDATE dbo.PERSON
SET W71_ID_CRD_NO = T2.PERSON_USER1, W71_ID_CRD_DATE = T2.PERSON_USER9
FROM dbo.MyMigrationOrWhateverNameYouLike as T2
WHERE dbo.PERSON.ID = T2.ID;
Finally, drop the MyMigrationOrWhateverNameYouLike table and you're done...
I'm trying to manage a database table as efficiently as possible and get rid of old entries that will never be accessed. Yes, they could probably easily be persisted for many years, but I'd just like to get rid of them, maybe once every month. Would it be more efficient to copy the entries I want to keep into a new table and then simply delete the old table, or should a query manually delete each entry past the threshold that I set?
I'm using MySQL with JPA/JPQL JEE6 with entity annotations and Java persistence manager.
Thanks
Another solution is to design the table with range or list PARTITIONING, and then you can use ALTER TABLE to drop or truncate old partitions.
This is much quicker than using DELETE, but it may complicate other uses of the table.
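As a minimal sketch of the idea (the table, column, and partition names are hypothetical):
-- Hypothetical range-partitioned log table, split by year.
-- Note: in MySQL, the partitioning column must be part of every
-- unique key, hence the composite primary key.
CREATE TABLE log_entries (
    id         BIGINT NOT NULL,
    created_at DATE   NOT NULL,
    message    TEXT,
    PRIMARY KEY (id, created_at)
)
PARTITION BY RANGE (YEAR(created_at)) (
    PARTITION p2022 VALUES LESS THAN (2023),
    PARTITION p2023 VALUES LESS THAN (2024),
    PARTITION pmax  VALUES LESS THAN MAXVALUE
);

-- Dropping a partition discards its rows almost instantly,
-- with no row-by-row DELETE.
ALTER TABLE log_entries DROP PARTITION p2022;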
A single delete query will be the most efficient solution.
Copying data from one database to another can be lengthy if you have a lot of data to keep. It means you have to retrieve all the data with a single query (or multiple, if you want to batch), and issue a lot of insert statements in the other database.
Using JPQL, you can issue a single query to delete all the old entries, something like:
DELETE FROM Entity e WHERE e.date < :threshold
This translates to a single SQL query, and the database will take care of deleting all the unwanted records.
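The SQL the provider emits would be roughly the following; the actual table and column names depend on your entity mapping and are illustrative here:
-- Approximate SQL generated for the JPQL bulk delete above.
DELETE FROM entity_table
WHERE date_column < ?;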
So here is my situation: I have a vendor-supplied DB we cannot modify, and a custom DB that imports data from the vendor app and acts on it. Once records are imported from the vendor app, they cannot appear on the list of records to be imported. Also, we only want to display the 250 most recent records that have not been imported.
What I originally started with was selecting the list of IDs that have been imported from the custom DB, and then querying the vendor DB using that list in a .Where(x => !idList.Contains(x.Id)) clause on the remote query.
This worked up until we broke 2,100 records imported into the custom DB, as 2,100 is the limit on the number of parameters that can be passed to SQL Server. After finding out this was the actual problem, and not the 'invalid buffer'/'severe error' that ADO.NET reported, my solution was to exclude the first 2,000 IDs in the remote query and then exclude the remaining records in the local query.
Having to pull back a large number of irrelevant records just to exclude them, so I can get the correct 250 records, seems very inelegant. Is there a better way to do this, short of a cross-DB stored procedure?
Thanks in advance.
This might not be the best answer, depending on how many records you're dealing with, but you could force the SQL to execute and just deal with the results as in-memory objects. Calling the ToList() method will execute the SQL and give you an in-memory list you can filter further.
What I might suggest is querying the vendor database first, ordering the results by some criterion (perhaps a date field, oldest to most recent).
You could then do a Skip().Take() to page through the results, taking each batch and inserting the rows into the custom DB where the ID doesn't already exist. That way you avoid the problem you have now.
If you have db-create access to the SQL Server that the vendor's db is running on (or if your custom db is on the same server), you could create a "has been imported" table in a different database on that same server, and then write a stored proc that does a cross-database join of that table against the vendor db, e.g.:
SELECT TOP 250 v.*
FROM vendordb.dbo.to_be_imported v
WHERE NOT EXISTS
    (SELECT 1 FROM customdb.dbo.has_been_imported h
     WHERE h.idWasImported = v.idToBeImported)
ORDER BY v.whatever;
You might even be able to do this in LINQ to SQL -- I've never tried adding objects from different databases into a single DataContext...
I have a live database that had some data deleted from it and I need that data back. I have a very recent copy of that database that has already been restored on another machine. Unrelated changes have been made to the live database since the backup, so I do not want to wipe out the live database with a full restore.
The data I need is small -- just a dozen rows -- but each of those dozen rows has a couple of rows in other tables with foreign keys to it, and those rows have god knows how many rows with foreign keys pointing to them, so it would be complicated to restore by hand.
Ideally I'd be able to tell the backup copy of the database to select the dozen rows I need, and the transitive closure of everything that they depend on, and everything that depends on them, and export just that data, which I can then import into the live database without touching anything else.
What's the best approach to take here? Thanks.
Everyone has mentioned sp_generate_inserts. When using this, how do you prevent identity columns from messing everything up? Do you just turn IDENTITY_INSERT on?
I've run into similar situations before, but found that doing it by hand worked best for me.
I restored the backup to a second server and ran my query to get the information that I needed. I then built a script to insert the data using sp_generate_inserts, and repeated this for each of my tables that had related rows.
In total I only had about 10 master records, with related data in 2 other tables. It only took me about an hour to get everything back the way it was.
UPDATE: To answer your question about sp_generate_inserts: as long as you specify @owner='dbo', it will set IDENTITY_INSERT to ON and then set it back to OFF at the end of the script for you.
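For reference, a call might look like the sketch below. sp_generate_inserts is a community-written procedure rather than a built-in, so the parameter names (especially the @from filter shown here, which is from memory) should be checked against the version you install:
-- Hypothetical usage sketch for the community sp_generate_inserts proc.
EXEC sp_generate_inserts
    @table_name = 'PERSON',
    @owner      = 'dbo',
    @from       = 'FROM dbo.PERSON WHERE ID IN (101, 102, 103)';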
You'll have to restore by hand. sp_generate_inserts is good for new data, but to update data I do it this way:
SELECT 'UPDATE YourTable '
     + 'SET Column1 = ' + COALESCE('''' + CONVERT(varchar, Column1) + '''', 'NULL')
     + ', Column2 = '   + COALESCE('''' + CONVERT(varchar, Column2) + '''', 'NULL')
     + ' WHERE KeyColumn = ' + COALESCE('''' + CONVERT(varchar, KeyColumn) + '''', 'NULL')
FROM backupserver.databasename.owner.YourTable
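Run against the backup table, each row produces one UPDATE statement along these lines (values are illustrative):
-- Example of one generated statement.
UPDATE YourTable SET Column1 = 'abc', Column2 = '42' WHERE KeyColumn = '1001'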
You could create inserts this way too, but sp_generate_inserts is better. Watch those identity values, and good luck (I've had this problem before, so I know where you're at right now).
Useful queries:
--find out if there are missing rows, and which ones
SELECT
b.key,c.key
from backupserver.databasename.owner.YourTable b
LEFT OUTER JOIN YourTable c ON b.key=c.key
WHERE c.Key is NULL
--find differences
SELECT
b.key,c.key
from YourTable c
LEFT OUTER JOIN backupserver.databasename.owner.YourTable b ON c.key=b.key
WHERE b.Key is not null
AND ( ISNULL(c.column1,-9999) != ISNULL(b.column1,-9999)
OR ISNULL(c.column2,'~') != ISNULL(b.column2,'~')
OR ISNULL(c.column3,GETDATE()) != ISNULL(b.column3,GETDATE())
)
SQL Server Management Studio for SQL Server 2008 allows you to export table data as insert statements. See http://www.kodyaz.com/articles/sql-server-script-data-with-generate-script-wizard.aspx. This approach lacks some of the flexibility of sp_generate_inserts (for example, you cannot specify a WHERE clause to filter the rows in your table), but it may be more reliable since it is part of the product.