MySQL ODBC Update Query VERY Slow

Our Access 2010 database recently hit the 2GB file size limit, so I ported the database to MySQL.
I installed MySQL Server 5.6.1 x64 on Windows Server 2008 x64.
All OS updates and patches are loaded.
I am using the MySQL ODBC 5.2w x64 Driver, as it seems to be the fastest.
My box has an i7-3960X with 64GB RAM and a 480GB SSD.
I use Access Query Designer as I prefer the interface and I regularly need to append missing records from one table to the other.
As a test, I have a simple Access Database with two linked tables:
tblData links to another Access Database and
tblOnline uses a SYSTEM DSN to a linked ODBC table.
Both tables contain over 10 million records.
Some of my ported working tables already have over 30 million records.
To select records to append, I use a field called INDBYN which is either true or false.
First I run an Update query on tblData:
UPDATE tblData SET tblData.InDBYN = False;
Then I update all matching records:
UPDATE tblData INNER JOIN tblOnline ON tblData.IDMaster = tblOnline.IDMaster SET tblData.InDBYN = True;
This works reasonably fast, even to the linked ODBC table.
Lastly I Append all records where INDBYN is False to tblOnline.
This is also acceptable speed, although slower than appends to a Linked Access table.
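The append itself is just an INSERT ... SELECT built in the Query Designer; roughly the following (a sketch only, since the real field list is whatever the two identical tables contain):
-- Hypothetical shape of the append query (Access SQL)
INSERT INTO tblOnline
SELECT tblData.*
FROM tblData
WHERE tblData.InDBYN = False;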
Within Access everything works 100% and is incredibly fast, except the DB is getting too big.
On the Linked Access Table, it takes 2m15s to update 11,500,000 records.
However, I now need to move the SOURCE table to MySQL, as it is reaching the 2GB limit.
So in future I will need to run the UPDATE statement on a linked ODBC table.
So far, when I run the same simple UPDATE query on the linked ODBC table it runs for more than 20 minutes, and then bombs out saying the query has exceeded the 2GB memory limit.
Both tables are identical in structure.
I do not know how to resolve this and need advice please.
I prefer to use Access as the front-end as I have hundreds of queries already designed for the app, and there is no time to re-develop the app.
I use the InnoDB engine and have tried various tweaks without success. Since my database uses related tables, InnoDB looked like the better option over MyISAM.
I have turned doublewrite on and off and tried various buffer pool sizes, including query cache. It does not make a difference on this particular query.
My current my.ini file looks like this:
#-----------------------------------------------------------------------
# MySQL Server Instance Configuration File
# ----------------------------------------------------------------------
[client]
no-beep
port=3306
[mysql]
default-character-set=utf8
server_type=3
[mysqld]
port=3306
basedir="C:\Program Files\MySQL\MySQL Server 5.6\"
datadir="E:\MySQLData\data\"
character-set-server=utf8
default-storage-engine=INNODB
sql-mode="STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"
log-output=FILE
general-log=0
general_log_file="SQLSERVER.log"
slow-query-log=1
slow_query_log_file="SQLSERVER-slow.log"
long_query_time=10
log-error="SQLSERVER.err"
max_connections=100
query_cache_size = 20M
table_open_cache=2000
tmp_table_size=502M
thread_cache_size=9
myisam_max_sort_file_size=100G
myisam_sort_buffer_size=1002M
key_buffer_size=8M
read_buffer_size=64K
read_rnd_buffer_size=256K
sort_buffer_size=256K
innodb_additional_mem_pool_size=32M
innodb_flush_log_at_trx_commit = 1
innodb_log_buffer_size=16M
innodb_buffer_pool_size = 48G
innodb_log_file_size=48M
innodb_thread_concurrency = 0
innodb_autoextend_increment=64M
innodb_buffer_pool_instances=8
innodb_concurrency_tickets=5000
innodb_old_blocks_time=1000
innodb_open_files=2000
innodb_stats_on_metadata=0
innodb_file_per_table=1
innodb_checksum_algorithm=0
back_log=70
flush_time=0
join_buffer_size=256K
max_allowed_packet=4M
max_connect_errors=100
open_files_limit=4110
query_cache_type = 1
sort_buffer_size=256K
table_definition_cache=1400
binlog_row_event_max_size=8K
sync_relay_log=10000
sync_relay_log_info=10000
tmpdir = "G:/MySQLTemp"
innodb_write_io_threads = 16
innodb_doublewrite
innodb = ON
innodb_fast_shutdown = 1
query_cache_min_res_unit = 4096
query_cache_limit = 1048576
innodb_data_home_dir = "E:/MySQLData/data"
bulk_insert_buffer_size = 8388608
Any advice will be greatly appreciated. Thank you in advance.

Communication between MS Access and MySQL through a linked table is slow. Terribly slow. That is a fact that can't be changed. Why does it happen? Access first loads the data from MySQL, then it processes the command, and finally it writes the data back. On top of that, it does this row by row!
However, you can avoid this if you don't need to use parameters or data from local tables in your "update" query. (In other words, if your query is always the same and it uses only MySQL data.)
The trick is to force the MySQL server to process the query instead of Access. This can be achieved by creating a "pass-through" query in Access, where you write your SQL directly (in MySQL syntax). Access then sends that command to the MySQL server and it is processed entirely on the server, so your query will be almost as fast as running it against a local Access table.
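For your case, the pass-through SQL would be roughly the same two steps written in MySQL syntax (a sketch, assuming both tables end up in MySQL and InDBYN was ported as a TINYINT/BOOLEAN column):
-- step 1: clear the flag everywhere
UPDATE tblData SET InDBYN = 0;
-- step 2: flag the rows that already exist in tblOnline
UPDATE tblData d
INNER JOIN tblOnline o ON o.IDMaster = d.IDMaster
SET d.InDBYN = 1;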

Access is a single-user system. MySQL with InnoDB is a transaction-protected multi-user system.
When you issue an UPDATE command that hits ten or so megarows, MySQL has to construct rollback information in case the operation fails before it hits all the rows. This takes a lot of time and memory.
Try switching your table access method to MyISAM if you're going to do these truly massive UPDATE and INSERT commands. MyISAM isn't transaction-protected so these operations may run faster.
You may find it helpful to do your data migration with some tool other than ODBC. ODBC is severely limited in its ability to handle lots of data, as you have discovered. For example, you could export your Access tables to flat files and then import them with a MySQL client program. See here... https://stackoverflow.com/questions/9185/what-is-the-best-mysql-client-application-for-windows
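For instance, one hedged route is to export each Access table to CSV and load it server-side; the file path, delimiters, and header handling below are assumptions about your export:
-- load a CSV export into the MySQL table
LOAD DATA LOCAL INFILE 'C:/export/tblOnline.csv'
INTO TABLE tblOnline
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\r\n'
IGNORE 1 LINES;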
Once you've imported your data to MySQL, you then can run Access-based queries. But avoid UPDATE requests that hit everything in the database.

Ollie, I get your point on avoiding UPDATEs that hit all rows. I use that to flag rows which are missing from the destination database, and it has been a quick and easy way to append only the missing rows. I see SQLyog has an import tool to append new records only, but it still runs through all rows in the import table and runs for hours. I will see if I can export only the data I want to CSV, but it would still be nice to get the ODBC connector working faster than it does at present, if at all possible.


InnoDB - IBD file growing to eat all of the space on the server

The Environment
We have a staging/test WordPress site on a CentOS server running MariaDB (version 10.3.8). We've been experimenting with a plugin called GeoDirectory (https://wpgeodirectory.com/). The plugin creates a variety of tables in the database. The database also has innodb_file_per_table set to ON, so it has generated an IBD file for each table's tablespace.
The Issue
The staging server had 80 GB of storage. After making a particular change to the settings of the plugin (updating the Place Settings), we noticed that about ten minutes later the staging site was timing out when we tried to access it via the browser.
Logging in via SSH, we noticed that the machine was completely out of space. Looking for the largest files in /var, I noticed that one file was now taking up 70 GB of space (wp_g1a4rar7xx_geodir_gd_place_detail.ibd). This is the IBD file that corresponded to the table where the settings were updated.
Logging into MySQL and running a COUNT(*) on that table showed only 6,000 records.
Because the table had blown up to take up the entirety of the disk, even trying OPTIMIZE TABLE wouldn't work because of the lack of space.
The Question
What could have happened to make that file balloon so large so quickly? How can we avoid this happening in the future?
Reckless Speculation
Based on searches, we believe it might have something to do with the rollback/logging that InnoDB performs, and that we might be able to avoid this by making some changes in my.cnf to limit logging. Again, it could be something completely different.
Here's the current my.cnf from /etc/my.cnf:
[mysqld]
bind-address = 0.0.0.0
# bind-address = ::ffff:127.0.0.1
local-infile=0
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
# Settings user and group are ignored when systemd is used.
# If you need to run mysqld under a different user or group,
# customize your systemd unit file for mariadb according to the
# instructions in http://fedoraproject.org/wiki/Systemd
[mysqld_safe]
log-error=/var/log/mariadb/mariadb.log
pid-file=/var/run/mariadb/mariadb.pid
#
# include all files from the config directory
#
!includedir /etc/my.cnf.d
Thanks in advance for your help!
Based on the name of the plugin, I am guessing that the issue could be related to updates of indexed geometry columns. MariaDB Server 10.2 imported the InnoDB support for SPATIAL INDEX from MySQL 5.7, and it looks like the purge of version history is sometimes being skipped, depending on concurrent DML activity.
I have mentioned this peculiar design choice in MDEV-15284, which reports that SELECT on a SPATIAL INDEX can return inconsistent results if a concurrent ROLLBACK is being executed.
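If you want to confirm that purge is falling behind the next time this happens, the history list length is visible from the server (MariaDB 10.3 / MySQL 5.7 era commands; the INNODB_METRICS counter is only populated if that metric is enabled):
-- look for "History list length" in the TRANSACTIONS section
SHOW ENGINE INNODB STATUS\G
-- or, if the metric is enabled:
SELECT name, `count` FROM information_schema.innodb_metrics
WHERE name = 'trx_rseg_history_len';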

mysql 5.6 Linux vs windows performance

The command below takes 2-3 seconds on a Linux MySQL 5.6 server running PHP 5.4:
exec("mysql --host=$db_host --user=$db_user --password=$db_password $db_name < $sql_file");
On Windows with a similar configuration it takes 10-15 seconds. The Windows machine has a lot more RAM (16 GB) and a similar hard drive. I installed MySQL 5.6 and made no configuration changes. This is on Windows Server 2012.
What are configurations I can change to fix this?
The SQL file creates about 40 InnoDB tables with very minimal inserts.
EDIT: Here is the file I am running:
https://www.dropbox.com/s/uguzgbbnyghok0o/database_14.4.sql?dl=0
UPDATE: On Windows 8 and 7 it was 3 seconds, but on Windows Server 2012 it is 15+ seconds. I disabled System Center 2012 and that made no difference.
UPDATE 2:
I also tried killing almost every service except for MySQL and IIS, and it still performed slowly. Is there something in Windows Server 2012 that causes this to be slow?
Update 3
I tried disabling write-cache buffer flushing and performance is now great.
I didn't have to do this on the other machines I tested with. Does this indicate a bottleneck with how the disk is set up?
https://social.technet.microsoft.com/Forums/windows/en-US/282ea0fc-fba7-4474-83d5-f9bbce0e52ea/major-disk-speed-improvement-disable-write-cache-buffer-flushing?forum=w7itproperf
That is why we call it the LAMP stack, and no doubt why MySQL is so much more popular on Linux than on Windows. But that has more to do with stability and safety; performance-wise the difference should be minimal. A Microsoft professional could tune Windows Server explicitly for MySQL by enabling and disabling services, but we would rather see the configuration in your my.ini. So what are the contributing factors to consider for MySQL on Windows?
The services and policies in Windows are sometimes a big impediment to performance because of all sorts of restrictions and protections.
We should also take into account the Apache (httpd.conf) and PHP (php.ini) configuration, as MySQL is tightly coupled with them.
Antivirus: better to disable this when benchmarking performance.
You must consider these parameters in my.ini, since you have 40 InnoDB tables here:
innodb_buffer_pool_size, innodb_flush_log_at_trx_commit, query_cache_size, innodb_flush_method, innodb_log_file_size, innodb_file_per_table
For example: if the file size of ib_logfile0 is 524288000 bytes, then 524288000 / 1048576 = 500, hence innodb_log_file_size should be 500M.
innodb_flush_log_at_trx_commit = 2
innodb_flush_method = O_DIRECT
https://dev.mysql.com/doc/refman/5.1/en/innodb-tuning.html
When importing data into InnoDB, make sure that MySQL does not have autocommit mode enabled, because that requires a log flush to disk for every insert:
SET autocommit=0;
Most important is innodb_flush_log_at_trx_commit, since this case is about importing a database. Setting it to 2 from the default of 1 can be a big performance booster, especially during a data import, as the log buffer is then only flushed to the OS file cache on each transaction commit rather than synced to disk.
For reference :
https://dev.mysql.com/doc/refman/5.5/en/optimizing-innodb-bulk-data-loading.html
https://dba.stackexchange.com/a/72766/60318
http://kvz.io/blog/2009/03/31/improve-mysql-insert-performance/
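Putting the autocommit and flush points together, a minimal sketch of a bulk-load session as described in the manual page above (assuming the dump's INSERT statements are run from one connection):
SET autocommit = 0;
SET unique_checks = 0;
SET foreign_key_checks = 0;
-- ... run the INSERT statements from the dump here ...
COMMIT;
SET unique_checks = 1;
SET foreign_key_checks = 1;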
Lastly, based on this
mysql --host=$db_host --user=$db_user --password=$db_password $db_name < $sql_file
If the mysqldump (.sql) file does not reside on the same host where you are importing, performance will suffer. Consider copying the .sql file onto the server where you need to import the database, then try importing without the --host option.
Windows is slower at creating files, period. Creating 40 InnoDB tables involves 40 or 80 file creations (an .frm file per table, plus an .ibd file per table when file-per-table is on). Since they are small InnoDB tables, you may as well set innodb_file_per_table=OFF before doing the CREATEs, thereby needing only 40 file creations.
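A hedged one-liner version of that suggestion (innodb_file_per_table is a global setting in 5.6, so this affects other sessions too and requires the SUPER privilege):
SET GLOBAL innodb_file_per_table = OFF;
-- ... run the CREATE TABLE statements from the dump ...
SET GLOBAL innodb_file_per_table = ON;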
Good practice in MySQL is to create tables once, and not be creating/dropping tables frequently. If your application is designed to do lots of CREATEs, we should focus on that. (Note that, even on Linux, table create time is non-trivial.)
If these are temporary tables... 5.7 will have significant changes that will improve the performance (on either OS) in this area. 5.7 is on the cusp of being GA.
(RAM size is irrelevant in this situation.)

Periodic MySQL lockup when Wordpress is under heavy load

I have a MySQL 5.1.61 database running behind two load-balanced Apache webservers hosting a fairly busy (100K uniques per day) WordPress site. I'm caching with Cloudflare, W3TC, and Varnish. Most of the time, the database server handles traffic very well. "show full processlist" shows 20-40 queries at any given time, with most in the sleep state.
Periodically, though (particularly when traffic spikes or when a large number of comments are cleared), MySQL stops responding. I'll find 1000-1500 queries running, many "sending data", etc. No particular query seems to be straining the database (they're all standard Wordpress queries), but it just seems like the simultaneous volume of requests causes all queries to hang up. I'm (usually) still able to log in, to run "show full processlist", or other queries, but the 1000+ queries already in there just sit. The only solution seems to be to restart mysql (sometimes violently via kill -9 if I can't connect).
All tables are innodb, server has 8 cores, 24GB RAM, plenty of disk space, and the following is my my.cnf:
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
port=3306
skip-external-locking
skip-name-resolve
user=mysql
query_cache_type=1
query_cache_limit=16M
wait_timeout = 300
query_cache_size=128M
key_buffer_size=400M
thread_cache_size=50
table_cache=8192
skip-name-resolve
max_heap_table_size = 256M
tmp_table_size = 256M
innodb_file_per_table
innodb_buffer_pool_size = 5G
innodb_log_file_size=1G
#innodb_commit_concurrency = 32
#innodb_thread_concurrency = 32
innodb_flush_log_at_trx_commit = 0
thread_concurrency = 8
join_buffer_size = 256k
innodb_log_file_size = 256M
#innodb_concurrency_tickets = 220
thread_stack = 256K
max_allowed_packet=512M
max_connections=2500
# Default to using old password format for compatibility with mysql 3.x
# clients (those using the mysqlclient10 compatibility package).
old_passwords=1
#2012-11-03
#attempting a ram disk for tmp tables
tmpdir = /db/tmpfs01
[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
Any suggestions how I can potentially improve MySQL config, or other steps to maintain database stability under heavy load?
As has been said, think outside the box and do some rooting around to find out why these queries are slow or somehow hung. An oldie but still a good source of problems, even for (supposedly ;) intelligent system engineers, is load balancing causing issues across webserver or database sessions. With all that caching and load balancing going on, are you sure everything is always connecting end-to-end as intended?
I agree with alditis & Bjoern
I'm pretty noobish with MySQL, but running mysqltuner can reveal some config optimisations based on recent queries against the DB: https://github.com/rackerhacker/MySQLTuner-perl
And if possible, store the DB files on a physically separate partition from the OS; the OS can consume IO, which slows the DB. Like with Bjoern's logrotate issue.
First have a look at basic system behavior at the moment of the problems. Use both vmstat and iostat to see if you can find any issues. Check whether the system starts swapping (the si and so columns in vmstat) and whether lots of IO is happening. This is the first step in debugging your problem.
Another source of useful information is SHOW INNODB STATUS. See http://www.mysqlperformanceblog.com/2006/07/17/show-innodb-status-walk-through/ for how to interpret the output.
It might be that at a certain point in time your writes are killing read performance because they invalidate entries in the query cache.
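If it helps, here are a few things worth capturing the next time it locks up (standard MySQL 5.1 commands; reading the cache counters this way is a rough heuristic):
SHOW ENGINE INNODB STATUS\G        -- semaphore waits, long transactions, lock waits
SHOW FULL PROCESSLIST;             -- what the 1000+ stuck threads are actually doing
SHOW GLOBAL STATUS LIKE 'Qcache%'; -- many Qcache_lowmem_prunes relative to Qcache_hits
                                   -- suggests the 128M query cache is churning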

Do I need to add something to each mysql query in order to use the mysql cache?

OK, so I have followed this tutorial and restarted the MySQL server. However, when I run a query and then execute it again, I see hardly any performance gain. It's about a 0.2-second gain, which tells me that the cache isn't working.
Here is the cache config from the my.cnf file.
# * Query Cache Configuration
#
query_cache_limit = 10M
query_cache_size = 256M
query_cache_type = 1
The way I am testing this is by running a Routine from the database. The routine consists of a simple SELECT statement that joins two small lookup tables.
SHOW VARIABLES LIKE '%query_cache%';
Results
have_query_cache YES
query_cache_limit 10485760
query_cache_min_res_unit 4096
query_cache_size 268435456
query_cache_type ON
query_cache_wlock_invalidate OFF
Edit 1
To add, when I look at the Server Health using Workbench. The query cache hitrate remains at 0%.
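For reference, the same counters that hit rate is based on can be read directly from the client:
-- if Qcache_hits stays at 0 after repeating the same SELECT, the cache is not being used
SHOW GLOBAL STATUS LIKE 'Qcache%';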
You should go back and specify your parameters in bytes. I know the SHOW VARIABLES output looks correct, but those values are used internally by the system, and it looks like you have specified the parameters in MB in the first config block above. Even though the output looks good, the system may be seeing 10M and defaulting to 0.
OK, so here is what solved my problem. I ended up enabling Remote Management in MySQL Workbench, then went to Configuration -> Options File. It gave me an error stating that there was no section called [mysqld] and that it was going to add it. After that I went to the Performance tab and noticed that all of my checkboxes for the cache were unchecked. I checked all the values; MySQL Workbench suggested I suffix my values with a K, M, or G when entering the cache values. After all of that I restarted the server and it was successfully caching my queries.
I'm not sure exactly which technical pieces changed in the config, but Workbench took care of everything for me.

mysql deliberate slowdown

I have a Java application that connects to a database. In the production environment the dataset is very large, so the application is very slow, and I want to simulate the slowness in the development environment. Is there a way to slow down MySQL so the response time is longer?
I know that I could enlarge my test dataset, but processing a large dataset would eat processor cycles; I'm rather searching for something that would do "cheap" sleeps in MySQL.
MySQL has a SLEEP(duration) miscellaneous function, where duration is the number of seconds.
source: http://dev.mysql.com/doc/refman/5.0/en/miscellaneous-functions.html#function_sleep
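For example, a fixed pause issued from the application between the real queries (SLEEP also accepts fractional seconds):
SELECT SLEEP(1.5);  -- blocks this connection for 1.5 seconds, then returns 0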
You can copy the MySQL database files to a (slow, old) USB key and point MySQL's datadir setting at it. That is what I did: I copied the MySQL data directory to a USB key and set the datadir variable accordingly:
#Path to the database root
#datadir="C:/Program Files (x86)/MySQL/MySQL Server 5.0/Data/"
datadir = "E:/data"
I suppose it's important to set innodb_flush_log_at_trx_commit to 1.
# If set to 1, InnoDB will flush (fsync) the transaction logs to the
# disk at each commit, which offers full ACID behavior. If you are
# willing to compromise this safety, and you are running small
# transactions, you may set this to 0 or 2 to reduce disk I/O to the
# logs. Value 0 means that the log is only written to the log file and
# the log file flushed to disk approximately once per second. Value 2
# means the log is written to the log file at each commit, but the log
# file is only flushed to disk approximately once per second.
innodb_flush_log_at_trx_commit=1
This way every UPDATE, DELETE, and INSERT statement results in an I/O flush.
To add to the SLEEP() function comment, here's an easy way to integrate a sleep into any SQL query when you can't run it as a separate statement:
LEFT JOIN (SELECT SLEEP(30)) as `sleep` ON 1=1