SQL Server 2012 Linked Server to MySQL Slow Select Query

I'm running SQL Server 2012 and have set up a Linked Server connection to a Linux MySQL instance via the latest MySQL ODBC drivers. I'm a bit disappointed by the time taken to return the results of a fairly straightforward SELECT query.
select * from OPENQUERY(LinkedServer, 'select * from mysqltable')
The table has approximately 150,000 rows and takes about 70 seconds to return the results to SSMS. Conversely if I query the table via a MySQL Client App (In this case Navicat Essentials, on the same machine as SQL Server) the query executes in about 1 second.
I know that Linked Servers and ODBC will be slower, but I'm surprised by this performance hit, particularly with such a straightforward query.
I've tried both the Unicode and ANSI drivers and the performance is similar. The MySQL database uses the UTF-8 character set and collation, and the table is InnoDB. I've also tried explicitly selecting columns rather than *. Again, no difference.
Has anyone experienced this before and got any tips for speeding up the performance or is this fairly normal?
Thanks in advance.

With a linked server I don't think there is much scope for significant improvement.
But you could try SSIS instead
and use a bulk insert.
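Before reaching for SSIS, one cheap check is to materialize the remote result into a local table in a single pass, so SSMS isn't rendering 150,000 rows while the ODBC fetch is still in flight. A minimal sketch, reusing the linked server and table names from the question; the staging table name is made up:

```sql
-- Pull the remote rows across the linked server once,
-- into a local staging table, instead of streaming them to SSMS.
SELECT *
INTO   dbo.mysqltable_staging   -- hypothetical local staging table
FROM   OPENQUERY(LinkedServer, 'SELECT * FROM mysqltable');

-- Subsequent queries run against the local copy at local speed.
SELECT COUNT(*) FROM dbo.mysqltable_staging;
```

If most of the 70 seconds is the ODBC fetch itself this won't help, but it separates transfer time from rendering time and gives you a baseline to compare SSIS against.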

Related

SQL Server Linked Server To MySQL "Too many connections"

I have attempted to find the answer here and via Google on how to control connections for a linked server ODBC connection.
Overview
I have a linked server from SQL Server 2014 to MySQL for the purposes of extracting data for our data warehouse. I've queried the database quite a few times without issue. Then yesterday, suddenly the query to read from the table became slow, and I got reports that the application using this MySQL database was getting a "too many connections" error.
Details
The following query selects the data from MySQL and inserts to the SQL Server table.
INSERT INTO tmpCustomers
(fieldlist)
SELECT
myc.contact_id,
myl.franchise_id,
myl.lead_source,
LEFT(RTRIM(myc.first_name) + ' ' + RTRIM(myc.last_name),100) AS Name,
myc.first_name,
myc.last_name,
myc.company,
myc.Email,
myc.primary_phone,
myc.home_phone,
myc.mobile_phone,
myc.work_phone,
myc.fax,
myc.address1,
myc.Address2,
myc.City,
myc.[state],
myc.zip_code,
myc.created_date,
myc.updated_date
FROM [MYSQLDB]...[franchise] myf
INNER JOIN [MYSQLDB]...[leads] myl
ON myl.franchise_id = myf.franchise_id
INNER JOIN [MYSQLDBE]...[contact] myc
ON myc.contact_id = myl.contact_id
This query returns about 200K rows of data, and will grow. The MySQL database is used by our customer base, and this is a back-end process to pull data into our data warehouse.
The query has been working without issue over the past week of testing, until yesterday, where it caused our MySQL support to restart the MySQL server twice.
The ODBC setup was done using the "mysql-connector-odbc-5.3.6-win64.msi" version. I don't find any settings there to limit the number of connections. ODBC does show an "Allow multiple statements" option, which is not enabled. It also has "Enable automatic reconnect", which I can't imagine being needed for a single query.
Summary
I can't afford to stop customers from connecting, and need to disable the process from using too many connections when doing the import.
Any input on this would be greatly appreciated.
Thanks
KDS
Update: 2016-Oct-05
AWS server - M3.xlarge
4 CPU
15 GiB
2 40 GiB SSD drives
It's better to optimize the MySQL server itself if you can't afford to stop customers from connecting.
With this little information it's hard to suggest anything specific for MySQL optimization.
https://dev.mysql.com/doc/refman/5.5/en/too-many-connections.html
Better to update your configuration file: raise the max_connections limit, tune the InnoDB variables if you are using InnoDB, and check the available RAM.
Can you add the above information to the question?
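For reference, the relevant knobs can be inspected and raised at runtime like this (a sketch; 500 is an arbitrary example value, and the right numbers depend on your RAM and workload):

```sql
-- See the current limit and how close you have come to it.
SHOW VARIABLES LIKE 'max_connections';
SHOW GLOBAL STATUS LIKE 'Max_used_connections';

-- Raise the limit at runtime (lost on server restart)...
SET GLOBAL max_connections = 500;

-- ...and persist it in my.cnf / my.ini under [mysqld]:
--   max_connections = 500
--   innodb_buffer_pool_size = 8G   -- size to your RAM if using InnoDB
```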
I'm going to mark this as answered as it's been about a year and no real solution to it. The issue was locks on the MySQL server as the SQL Server linked server was reading the data. SQL Server arguments like NOLOCK had no impact on resolving this.
So, what was done was to take a backup of the MySQL database nightly and restore it to a separate database that we linked to for SQL Server, and process the data from there. The reads are usually done in a matter of a minute or two. SQL Server was still putting a lock on the MySQL table, and users then started to stack multiple connections until all the connections to MySQL were used up.
So, since I only needed the data for reporting purposes daily, this separate database copy worked, but I don't know of any other fix to this.
Thanks
KD

How to increase query performance on SQL Server 2000 compared to SQL Server 2008?

I have a complex query and a big database. When I execute my query on SQL Server 2008 it takes 8-10 minutes, but when I execute it on SQL Server 2000 it takes 1-2 hours. Why? I used indexes and examined the execution plan, but I couldn't solve this problem. Can anybody help me, or does anyone have a suggestion?
You could possibly create an auxiliary table for this query. That way the query would run faster, since the work is done before the query, in the background. Note: this usually works only if the query retrieving the data doesn't have to be in sync with the DB.
Also, depending on how you want to use the data, you might be able to cache or precache the results.
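The auxiliary-table idea can be sketched like this (all object and column names here are hypothetical, for illustration; the refresh would typically run off-hours via a SQL Server Agent job):

```sql
-- Precompute the expensive result into an auxiliary table
-- (dbo.bigtable and dbo.report_summary are made-up names).
TRUNCATE TABLE dbo.report_summary;

INSERT INTO dbo.report_summary (customer_id, order_count, total_amount)
SELECT customer_id, COUNT(*), SUM(amount)
FROM   dbo.bigtable
GROUP BY customer_id;

-- The report now reads the small precomputed table instead of
-- running the complex query against the big one.
SELECT * FROM dbo.report_summary WHERE total_amount > 1000;
```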

How to improve update performance of an SSIS 2008 package that runs fine on 2005?

I'm migrating my data warehouse from SQL Server 2005 to SQL Server 2008. There is a large performance decrease on the table updates. The inserts work great.
I'm using the same SSIS package in both environments, but on 2008 the updates are still slow.
I've run update stats full on all tables. The process uses a temp table. I've dropped all indexes (except one needed for the update) but none of these measures helped. I also wrote an update statement that mimics what SSIS is doing, and it runs fast as expected.
The update process uses a data flow task (there are other things in the task, like inserting into a processed table to know what data was used in the update).
This is a brand new database with nothing else running on it. Any suggestions?
Captured STATISTICS IO:
2005: CPU = 0, Reads = 150
2008: CPU = 1,700, Reads = 33,000
Database RAM:
2005: 40 GB total / 18 GB for SQL Server
2008: 128 GB total / 110 GB for SQL Server
The problem was found in the execution plan. The plan in 2008 was using different tables to build the update statement. Background: since we use indexed views, which don't allow any other access while their base tables are being queried, we built smaller/leaner tables for the indexed views to use rather than our dimensions, to keep the dimensions available to users. The optimizer was choosing those tables rather than the ones we specified in the query.
When I originally examined the execution plans, I used the wrong query, which did not have this behavior. This made all the difference.
Thanks!

Extremely slow insert OpenQuery performance on SQL Server to MySQL linked server

I'm using SQL Server Management Studio to copy the entire contents of a table from SQL Server to an identical table on a MySQL machine. The MySQL database is connected to Management Studio as a linked server using the MySQL ODBC 5.1 driver. A simple statement produces correct results, but executes extremely slowly.
INSERT INTO openquery(MYSQL, 'select * from Table1')
SELECT * from MSSQL..Table2
I have a table with about 450,000 records and it takes just over 5 hours to transfer. Is this normal? I don't have prior experience linking MySQL servers.
How long does it take to just run the "SELECT * from MSSQL..Table2", if you run it from management studio?
There are multiple reasons why your query may be slow:
Whenever you do a massive bulk copy, you usually don't do it all in one shot, because large insert/update/delete transactions are very expensive: the DB has to be prepared to roll back at any time until the transaction completes. It is better to copy in batches (say 1000 records at a time). A good tool for doing this bulk copy is SSIS (which comes with SQL Server), which can do this batching for you.
You should explicitly specify the sort order on the table you are copying from, especially if you are copying into a table which will have a clustered index. You should make sure you are inserting in the sort order of the clustered index (i.e. the clustered index is usually an int/bigint so ensure records are inserted in order 1,2,3,4 not 100,5,27,3 etc.)
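The batching advice can be sketched in T-SQL like this. A minimal sketch, not a tuned implementation: it assumes Table2 has an int clustered primary key column named id (an assumption, not from the question), and reuses the MYSQL linked server name from the question:

```sql
-- Copy in batches of 1000 rows, walking the clustered key in order,
-- so each remote insert is a small, cheap transaction.
DECLARE @from INT = 0, @batch INT = 1000, @max INT;
SELECT @max = MAX(id) FROM MSSQL..Table2;

WHILE @from < @max
BEGIN
    INSERT INTO OPENQUERY(MYSQL, 'SELECT * FROM Table1')
    SELECT *
    FROM   MSSQL..Table2
    WHERE  id > @from AND id <= @from + @batch
    ORDER BY id;                  -- insert in clustered-index order

    SET @from = @from + @batch;
END;
```

Each iteration commits independently, so a failure part-way through loses at most one batch rather than forcing a 450,000-row rollback.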

How to increase SQL Server database performance?

I have a table in a SQL Server database with only 900 records and 4 columns.
I am using LINQ to SQL. Now I am trying to retrieve data from that table, for which I have written a select query.
It is not returning data from the database, and it shows a timeout error.
Please give me some ideas: first, how can I increase the timeout, and second, how can I increase the performance of the query so it runs quickly?
Thanks
That is a tiny table, there is either something very wrong with your database, or your application.
Try seeing what is happening in the database with SQL Profiler.
If you have just 900 records and four columns then unless you are storing many megabytes of data in each field the query should be very fast. I think your problem is that the connection is failing, possibly due to a firewall or other networking problem.
To debug I'd suggest running a simpler query and see if you can get any data at all. Also try running the same query from the SQL Server Management Studio to see if it works there.
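As a first diagnostic step in Management Studio, something like this will tell you whether the server itself can answer quickly (the table name is a placeholder for the 900-row table from the question):

```sql
-- If these return instantly in SSMS, the problem is in the
-- application/connection layer, not in SQL Server itself.
SELECT COUNT(*) FROM dbo.YourTable;    -- hypothetical table name
SELECT TOP 10 * FROM dbo.YourTable;
```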