I ran a SQL query that joined many base tables in several steps as #temp tables and produced a final output table (a separate, permanent table). I ran it about four days ago from my local machine via SQL Server Management Studio, connected to the server. My hard disk has since crashed. The base tables and the output table still exist in SQL Server, but I have lost the query I used to build the final table. Can the query I ran from SSMS four days ago be recovered from the logs or by any other means? It was one of the last few queries I tried in that database on that server.
-- Statements whose plans are still in the plan cache, most recent first:
SELECT execquery.last_execution_time AS [Date Time], execsql.text AS [Script]
FROM sys.dm_exec_query_stats AS execquery
CROSS APPLY sys.dm_exec_sql_text(execquery.sql_handle) AS execsql
ORDER BY execquery.last_execution_time DESC;
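Note that sys.dm_exec_query_stats only sees statements whose plans are still in the plan cache; a server restart or cache eviction during the past four days may have removed the entry. If the cache is large, a filtered variant narrows the search (YourBaseTable below is a placeholder for one of your actual base tables):
SELECT execquery.last_execution_time AS [Date Time], execsql.text AS [Script]
FROM sys.dm_exec_query_stats AS execquery
CROSS APPLY sys.dm_exec_sql_text(execquery.sql_handle) AS execsql
WHERE execsql.text LIKE '%YourBaseTable%'
ORDER BY execquery.last_execution_time DESC;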
If that turns up nothing, SSMS may have auto-saved a recovery copy of the script in one of these locations, depending on your OS:
C:\Windows\System32\SQL Server Management Studio\Backup Files\Solution1
C:\Users\YourUsername\Documents\SQL Server Management Studio\Backup Files
I am trying to set up a MySQL database that takes data from three other MySQL databases. The data to be copied would come from a query that standardizes the data format. The method could either run daily as a script or sync in real time; either would be fine for this project.
For example:
The query from source DB:
SELECT order_id, rate, quantity
FROM orders
WHERE date_order_placed = CURDATE()
I then want to insert the results of that query into a destination DB.
The databases are on separate hosts.
I have tried creating scripts that run CSV and SQL exports/imports, without success. I have also tried the Python pymysql library, but it seemed like overkill. I'm pretty lost haha.
Thanks :)
Plan A:
Connect to the source. SELECT ... INTO OUTFILE.
Connect to the destination. LOAD DATA INFILE from the file produced above.
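A minimal sketch of Plan A, reusing the example query from the question (the file path and the destination table name are assumptions):
-- On the source server: dump today's rows to a flat file.
SELECT order_id, rate, quantity
INTO OUTFILE '/tmp/orders_today.csv'
FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n'
FROM orders
WHERE date_order_placed = CURDATE();

-- Copy the file to the destination host, then load it there:
LOAD DATA INFILE '/tmp/orders_today.csv'
INTO TABLE orders_standardized
FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n';
Wrapped in a daily scheduled job, this would cover your "run daily as a script" option.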
Plan B (both MySQL):
Set up replication with the source as a master and the destination as a slave.
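A minimal sketch of the destination (slave) side, assuming binary logging is already enabled on the source and a replication user exists there; the host, credentials, and log coordinates below are placeholders:
CHANGE MASTER TO
    MASTER_HOST = 'source-host',
    MASTER_USER = 'repl',
    MASTER_PASSWORD = '...',
    MASTER_LOG_FILE = 'mysql-bin.000001',
    MASTER_LOG_POS = 4;
START SLAVE;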
Plan C (3 MySQL servers):
Multi-source replication to allow gathering data from two sources into a single, combined, destination.
I think MariaDB 10.0 is when multi-source replication was introduced; a sketch is below. Caution: MariaDB's GTIDs are different from MySQL's, but I think there is a way to make the replication you seek work. (It may be as simple as turning off GTIDs??)
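A minimal sketch of Plan C using MariaDB's named replication connections (hosts and credentials are placeholders; each connection also needs log coordinates or a GTID position supplied):
CHANGE MASTER 'source1' TO
    MASTER_HOST = 'host1', MASTER_USER = 'repl', MASTER_PASSWORD = '...';
CHANGE MASTER 'source2' TO
    MASTER_HOST = 'host2', MASTER_USER = 'repl', MASTER_PASSWORD = '...';
START ALL SLAVES;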
Plan D (as mentioned):
Some ETL software.
Please ponder which Plan you would like to pursue, then ask for help in focusing on one. Meanwhile, your question is too broad.
I have tried to find an answer here and via Google on how to control connections for a linked-server ODBC connection.
Overview
I have a linked server from SQL Server 2014 to MySQL for the purpose of extracting data for our data warehouse. I've queried the database quite a few times without issue. Then yesterday, suddenly the query that reads from the table became slow, and I got reports that the applications using this MySQL database were getting a "too many connections" error.
Details
The following query selects the data from MySQL and inserts to the SQL Server table.
INSERT INTO tmpCustomers
(fieldlist)
SELECT
myc.contact_id,
myl.franchise_id,
myl.lead_source,
LEFT(RTRIM(myc.first_name) + ' ' + RTRIM(myc.last_name),100) AS Name,
myc.first_name,
myc.last_name,
myc.company,
myc.Email,
myc.primary_phone,
myc.home_phone,
myc.mobile_phone,
myc.work_phone,
myc.fax,
myc.address1,
myc.Address2,
myc.City,
myc.[state],
myc.zip_code,
myc.created_date,
myc.updated_date
FROM [MYSQLDB]...[franchise] myf
INNER JOIN [MYSQLDB]...[leads] myl
ON myl.franchise_id = myf.franchise_id
INNER JOIN [MYSQLDBE]...[contact] myc
ON myc.contact_id = myl.contact_id
This query returns about 200K rows of data, and will grow. The MySQL database is used by our customer base, and this is a back-end process to pull data into our data warehouse.
The query had been working without issue over the past week of testing, until yesterday, when it caused our MySQL support team to restart the MySQL server twice.
The ODBC setup was done using the "mysql-connector-odbc-5.3.6-win64.msi" driver. I don't find any settings there to limit the number of connections. ODBC does show an "Allow multiple statements" option, which is not enabled here. It also has "Enable automatic reconnect", though I can't imagine why that would be needed for a single query.
Summary
I can't afford to stop customers from connecting, and I need to keep the import process from using too many connections.
Any input on this would be greatly appreciated.
Thanks
KDS
Update: 2016-Oct-05
AWS server: M3.xlarge
4 vCPUs
15 GiB RAM
2 × 40 GiB SSD drives
It's better to optimize the MySQL server if you can't afford to stop customers from connecting.
With this little information, it is hard to suggest specific MySQL optimizations.
https://dev.mysql.com/doc/refman/5.5/en/too-many-connections.html
It is better to update your configuration file: raise the max_connections limit, tune the InnoDB variables if you are using InnoDB, and take the available RAM into account.
Can you add the above information to your question?
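For reference, a quick way to inspect the connection ceiling and the historical peak, and to raise the limit at runtime. The value 500 is only an example; each connection costs memory, so the limit must fit in available RAM:
SHOW VARIABLES LIKE 'max_connections';
SHOW STATUS LIKE 'Max_used_connections';
SET GLOBAL max_connections = 500;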
I'm going to mark this as answered, as it's been about a year with no real solution. The issue was locks on the MySQL server while the SQL Server linked server was reading the data. SQL Server hints like NOLOCK had no impact on resolving this.
So what we did was take a nightly backup of the MySQL database and restore it to a separate database, which SQL Server links to instead, and process the data from there. The reads usually finish in a minute or two. SQL Server was still putting a lock on the MySQL table, and users then stacked up connections until all the connections to MySQL were used up.
Since I only needed the data daily for reporting purposes, this separate database copy worked, but I don't know of any other fix.
Thanks
KD
I'm running SQL Server 2012 and have set up a Linked Server connection to a Linux MySQL instance via the latest MySQL ODBC drivers. I'm a bit disappointed by the time taken to return the results of a fairly straightforward SELECT query.
select * from OPENQUERY(LinkedServer, 'select * from mysqltable')
The table has approximately 150,000 rows and takes about 70 seconds to return the results to SSMS. Conversely if I query the table via a MySQL Client App (In this case Navicat Essentials, on the same machine as SQL Server) the query executes in about 1 second.
I know that linked servers and ODBC will be slower, but I'm surprised by this performance hit, particularly with such a straightforward query.
I've tried both the Unicode and ANSI drivers and the performance is similar. The MySQL DB uses the UTF-8 charset and collation, and the table is InnoDB. I've also tried explicitly selecting columns rather than *. Again, no difference.
Has anyone experienced this before and got any tips for speeding up the performance or is this fairly normal?
Thanks in advance.
With a linked server, I do not think there is much potential for a significant improvement. But you can try going through SSIS and using a bulk insert.
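One hedged sketch of the flat-file route (paths and the staging table name are assumptions): export on the MySQL side with
SELECT * INTO OUTFILE '/tmp/mysqltable.csv'
FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n'
FROM mysqltable;
then bulk-load the file on the SQL Server side:
BULK INSERT dbo.mysqltable_staging
FROM 'C:\transfer\mysqltable.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', BATCHSIZE = 10000);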
I'm migrating my data warehouse from SQL Server 2005 to SQL Server 2008. There is a large performance decrease on the table updates; the inserts work great.
I'm using the same SSIS package in both environments, but the updates still run slowly on 2008.
I've run update stats full on all tables. The process uses a temp table. I've dropped all indexes (except one needed for the update) but none of these measures helped. I also wrote an update statement that mimics what SSIS is doing, and it runs fast as expected.
The update process uses a data flow task (there are other things in the task, like inserting into a processed table to know what data was used in the update).
This is a brand new database with nothing else running on it. Any suggestions?
Captured STATISTICS IO:
2005: CPU = 0, reads = 150
2008: CPU = 1,700, reads = 33,000
Server RAM:
2005: 40 GB total / 18 GB for SQL Server
2008: 128 GB total / 110 GB for SQL Server
The problem was found in the execution plan. The plan in 2008 was using different tables to build the update statement. Background: because indexed views don't allow other access to their underlying tables while they are being queried, we built smaller, leaner tables for the indexed views to use, rather than our dimension tables, to keep the dimensions available to users. The optimizer was choosing those lean tables instead of the ones we specified in the query.
When I originally captured the execution plans, I used the wrong query, one that did not involve this setup. That made all the difference.
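If the optimizer is silently matching a query to an indexed view (or the lean tables behind it), the EXPAND VIEWS hint forces it to use the base tables as written. A hypothetical illustration with assumed table names, not the actual warehouse query:
UPDATE d
SET d.amount = s.amount
FROM dbo.DimCustomer AS d
INNER JOIN #staging AS s
    ON s.customer_id = d.customer_id
OPTION (EXPAND VIEWS);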
Thanks!
I'm using SQL Server Management Studio to copy the entire contents of a table from SQL Server to an identical table on a MySQL machine. The MySQL DB is connected to Management Studio as a linked server using the MySQL ODBC 5.1 driver. A simple statement works fine, but executes extremely slowly.
INSERT INTO openquery(MYSQL, 'select * from Table1')
SELECT * from MSSQL..Table2
I have a table with about 450,000 records and it takes just over 5 hours to transfer. Is this normal? I don't have prior experience linking MySQL servers.
How long does it take to just run "SELECT * from MSSQL..Table2" if you run it from Management Studio?
There are multiple reasons why your query may be slow:
Whenever you do a massive bulk copy, you usually don't do it all in one shot, because large insert/update/delete transactions are very expensive: the DB has to be prepared to roll back at any time until the transaction completes. It is better to copy in batches (say, 1,000 records at a time); a hand-rolled version is sketched after this list. A good tool for this kind of bulk copy is SSIS (which ships with SQL Server), which can do the batching for you.
You should explicitly specify the sort order on the table you are copying from, especially if you are copying into a table that will have a clustered index. Make sure you insert in the sort order of the clustered index (the clustered key is usually an int/bigint, so ensure records are inserted in order 1, 2, 3, 4, not 100, 5, 27, 3).
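A minimal batching sketch of the first point (the id key and column names are assumptions): push 1,000 rows per statement, in clustered-index order, instead of one 450,000-row transaction.
DECLARE @last INT;
SET @last = 0;

WHILE 1 = 1
BEGIN
    -- Stage the next batch, keyed on the clustered index column.
    SELECT TOP (1000) id, col1, col2
    INTO #batch
    FROM MSSQL..Table2
    WHERE id > @last
    ORDER BY id;

    IF @@ROWCOUNT = 0
    BEGIN
        DROP TABLE #batch;
        BREAK;
    END;

    -- Push the batch across the linked server.
    INSERT INTO openquery(MYSQL, 'select id, col1, col2 from Table1')
    SELECT id, col1, col2 FROM #batch;

    SELECT @last = MAX(id) FROM #batch;
    DROP TABLE #batch;
END;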