A customer notified me that the search feature of an application we developed for him is not working properly. The search is based on queries against Full-Text Search indexes on InnoDB tables in MySQL 5.6 (the first version to support them).
When I run the query manually, I can confirm that it returns only 1 result when it should return 2. I tried other search terms, and the same table row is always omitted. The equivalent queries using LIKE work fine.
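For illustration, the two kinds of queries look roughly like this (the articles table and its columns are hypothetical stand-ins for the real schema):
-- Full-text query: returns 1 row where 2 are expected
SELECT * FROM articles
WHERE MATCH(title, body) AGAINST ('some term' IN NATURAL LANGUAGE MODE);
-- Equivalent LIKE query: returns both rows as expected
SELECT * FROM articles
WHERE title LIKE '%some term%' OR body LIKE '%some term%';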
When I try to replicate the problem in a virtualized environment with the same OS and MySQL server version, I cannot reproduce it: after restoring a database dump, the query works correctly.
After trying several options, what worked was running OPTIMIZE TABLE. After optimizing the table, the search query works correctly and returns the missing record.
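As a sketch of the fix (the table name is hypothetical): if the goal is to rebuild only the full-text index data rather than the whole table, MySQL also provides the innodb_optimize_fulltext_only option:
-- Optional: restrict OPTIMIZE TABLE to merging the InnoDB full-text index data
SET GLOBAL innodb_optimize_fulltext_only = ON;
-- Rebuilding the table / FTS index is what made the missing record searchable again
OPTIMIZE TABLE articles;
-- Restore the default behavior afterwards
SET GLOBAL innodb_optimize_fulltext_only = OFF;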
Why does this happen? What is the explanation for this problem and how can it be detected or prevented?
I don't know much about your code, your database, or your queries, but I see only two likely situations:
Row-level locking that is never released (possibly a bug in MySQL): https://dev.mysql.com/doc/refman/8.0/en/innodb-locking.html
Table corruption: I'm not sure whether OPTIMIZE TABLE actually repairs anything or only reduces fragmentation.
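As a starting point for detecting either case, the checks below are a sketch (db_name/articles is a hypothetical schema/table pair); the INFORMATION_SCHEMA full-text tables have been available since MySQL 5.6:
-- Basic integrity check of the table and its indexes
CHECK TABLE articles;
-- Point the InnoDB FTS diagnostic tables at the table in question
SET GLOBAL innodb_ft_aux_table = 'db_name/articles';
-- Documents marked deleted but not yet purged from the full-text index
SELECT * FROM INFORMATION_SCHEMA.INNODB_FT_DELETED;
-- Tokens currently stored in the on-disk full-text index
SELECT word, doc_id FROM INFORMATION_SCHEMA.INNODB_FT_INDEX_TABLE LIMIT 20;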
Related
I'm working with a MySQL database through MySQL Workbench. Currently, many of my tables appear more than once in the schema list, with some appearing 10+ times. It is not breaking anything in my application or when I run queries on these tables, but it is something I feel should be fixed. I haven't been able to find anything about it and assume it is a quick fix.
(Screenshot: MySQL table list)
Currently running MySQL Workbench 8.0.23 build 365764 SE (64-bit), against a MySQL database hosted on Azure.
Also, is there any reason this would be happening? I have a variety of PHP files set up to read/write data, and my hunch is that some lines of code may be creating the duplicates, which I need to track down. Is there any specific SQL syntax that would create these duplicates (other than CREATE TABLE ..., which I do not have set to run anywhere)?
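One quick way to tell whether the duplicates are real tables or only a Workbench display artifact is to list the table straight from the data dictionary (replace my_table with one of the affected names):
-- Each real table should appear exactly once per schema
SELECT table_schema, table_name, engine, create_time
FROM information_schema.tables
WHERE table_name = 'my_table';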
Thanks for the help.
We installed the latest Percona Server for MySQL, v5.7.28-31, from the Percona repository.
After a successful data migration and a few days of operation, we ran into an issue with low performance and high disk I/O utilization.
While looking for a solution, I found this known issue:
The length of time and resources required for a MySQL query execution increased with a large number of table partitions. Limiting the Estimation of Records in a Query describes the experimental options added to prevent index scans on the partitions and return a specified number of values.
The suggested solution is to change the values of the innodb_records_in_range and innodb_force_index_records_in_range variables, but I can't figure out which values should be set there.
As I understand it, these two values are a count of rows to be scanned during the query, but what will the MySQL optimizer do after scanning those rows? Will it use the indexes it should use, or will it ignore the indexes, just scan those rows, and return incorrect output?
Please help me understand these two variables.
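For context, the Percona release notes describe these as experimental options that make InnoDB report a fixed row estimate to the optimizer instead of probing the index in every partition, with 0 (the default) disabling the behavior. A sketch of setting them, assuming they are dynamic variables as the docs suggest, with purely illustrative values:
-- Illustrative only: cap the row estimate returned to the optimizer (0 = disabled)
SET GLOBAL innodb_records_in_range = 1000;
-- Same cap, applied only when the query uses FORCE INDEX
SET GLOBAL innodb_force_index_records_in_range = 1000;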
PS: If I check the query via EXPLAIN on the server, I see the correct logic and it uses the required indexes.
But on the same server with MySQL 5.7.28-31, at random times we see the incorrect behavior: it stops using the required indexes, and at that point our DB server almost goes down.
PPS: We can't change the query, because on other servers, which run MySQL 5.7.23, we don't have any issues.
Is it really necessary to downgrade the MySQL minor version, or is it possible to set these values correctly?
I need to start off by pointing out that I am by no means a database expert. I do know my way around programming applications in several languages that require database backends, and I am relatively familiar with MySQL, Microsoft SQL Server, and now MemSQL - but again, I am not a database expert, so your input is very much appreciated.
I have been developing an application that has to cross-reference several different tables. One very simple example of an issue I recently had: I have to:
1. On a daily basis, pull down 600K to 1M records into a temporary table.
2. Compare what has changed between this new data pull and the old one. Record that information in a separate table.
3. Repopulate the table with the new records.
Step #2 is a query similar to:
SELECT * FROM (NEW TABLE) LEFT JOIN (OLD TABLE) ON (JOINED FIELD) WHERE (OLD TABLE.FIELD) IS NULL
In this case, I'm comparing the two tables on a given field and then pulling out the rows that have changed.
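A concrete sketch of that anti-join pattern, with hypothetical table and column names:
-- Rows present in the new pull but missing from the old data
SELECT n.*
FROM new_records AS n
LEFT JOIN old_records AS o ON o.record_key = n.record_key
WHERE o.record_key IS NULL;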
In MySQL (v5.6.26, x64), my query times out. I'm running 4 vCPUs and 8 GB of RAM, but note that the rest of my configuration is the default (I did not tweak any parameters).
In MemSQL (v5.5.8, x64), my query runs in about 3 seconds on the first try. I'm running the exact same virtual server configuration with 4 vCPUs and 8 GB of RAM; again, the rest of the configuration is the default (I did not tweak any parameters).
Also, in MemSQL I am running a single-node configuration, and the same goes for MySQL.
I love the fact that using MemSQL allowed me to continue developing my project, and I'm now running even bigger cross-table queries and views that perform fantastically on MemSQL... but in an ideal world I'd use MySQL. I've already found that I need a different set of tools to manage my instance (MySQL Workbench works relatively well with a MemSQL server, but to actually build views and tables I need the open-source SQL Workbench with the MySQL JDBC driver; likewise the Visual Studio MySQL connector works but can be painful at times - for some reason I can add queries but can't add table adapters)... sorry, I'll submit a separate question for that :)
Considering both virtual machines have exactly the same configuration and are SSD-backed, can anyone give me recommendations on how to tune my MySQL instance to run big queries like the one above? I understand I could also create an in-memory database, but I've read there might be persistence issues with that; I'm not sure.
Thank you!
The most likely reason this happens is that you don't have an index on the join field in one or both tables. According to this article:
https://www.percona.com/blog/2012/04/04/join-optimizations-in-mysql-5-6-and-mariadb-5-5/
Vanilla MySQL only supports nested-loop joins, which require an index to perform well (otherwise they take quadratic time).
Both MemSQL and MariaDB support so-called hash joins, which do not require indexes on the tables but consume more memory. Since your dataset is negligibly small for modern RAM sizes, that extra memory overhead is not noticeable in your case.
So all you need to do to address the issue is add an index on the join field in both tables.
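For example (the table and column names are placeholders for whatever your actual join field is):
-- Index the join column on both sides so the nested-loop join can do index lookups
ALTER TABLE new_records ADD INDEX idx_record_key (record_key);
ALTER TABLE old_records ADD INDEX idx_record_key (record_key);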
Also, please describe the issues you are facing with the open-source tools when connecting to MemSQL in a separate question, or at chat.memsql.com, so that we can fix them in the next version (I work for MemSQL, and compatibility with MySQL tools is one of our priorities).
I have written a plug-in for a third-party application that runs the plug-in every minute.
My plug-in runs a JDBC query on MySQL and reports the result. I recently changed the collation of the table and all its columns, and after that the queries run very slowly.
I tried reverting the collation change, but it didn't help. I also removed all the history from the table and inserted fresh data, and then I dropped the database and created a new one. I still have very slow queries. The interesting thing is that when I run the query manually through mysql-client, it runs as fast as expected.
This happens for some very simple queries like:
select 1 as Avail from Report limit 1
but some other queries run pretty fast.
Another thing I changed before this problem occurred was turning off MySQL's binary logs. I really doubt it has anything to do with the performance, but I thought you might need to know.
This is driving me crazy!
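One way to narrow this down, as a sketch, is to run the same diagnostics from both the JDBC connection and mysql-client and compare what each session actually negotiated (Report is the table from the query above):
-- Character set and collation this session is actually using
SHOW SESSION VARIABLES LIKE 'character_set%';
SHOW SESSION VARIABLES LIKE 'collation%';
-- Collation currently set on the table itself
SHOW TABLE STATUS LIKE 'Report';
-- Check whether the slow statements are stuck waiting on something
SHOW FULL PROCESSLIST;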
I'm migrating my .NET/MSSQL application to a RoR/MySQL/EC2/Ubuntu platform. After transferring all my existing data into MySQL, I found that query speed is incredibly slow, even for a super-basic query like select count(*) from countries. It's just a country table containing only around 200 records, but the query takes 0.124 ms, which is obviously not normal.
I'm a newbie to MySQL - can anyone tell me what the possible problem could be, or which initial optimization settings I should turn on after installing MySQL?
A count(*) operation cannot really be optimized further: depending on the storage engine you are using, it either does a full table scan (O(n)) or reads a cached table count (O(1)). Either way, your query should not be that slow. You might want to get in touch with AWS support; it's possible the box is being choked by some other process running on it.
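As a quick illustration of that engine-dependent behavior (using the countries table from the question):
-- InnoDB has no cached exact count, so COUNT(*) scans an index; MyISAM can answer from its stored row count
EXPLAIN SELECT COUNT(*) FROM countries;
-- Shows the engine in use plus a row count (estimated for InnoDB) without running the query
SHOW TABLE STATUS LIKE 'countries';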