EDIT: OK, this was all my error. I made a mistake resetting the least_price values, so the database still had the old, wrong values. Whenever I clicked on a product in my application to get its id and look it up in the database, an afterSave hook would trigger a recalculation of least_price, thus changing it to the new, correct value. I wrongly assumed that looking up a product in the DB was changing a cached value for least_price. I would delete this question, as it is very unlikely to help somebody else, but people have already answered. Thank you, and sorry if I have wasted anybody's time.
I recently set all values of one field (least_price) of my products table to new (higher) values with a PHP script. Now I run this query:
SELECT Products.*
FROM products Products
WHERE (
    Products.least_price > 240
    AND Products.least_price < 500
);
and the result set contains some products with a new least_price value above 500. The result shows wrong (I am assuming the old) values for the least_price field. If I query a particular product individually, e.g. with SELECT * FROM products WHERE id = 123, and that product happens to have a new least_price higher than 500, it shows the (newer/higher) least_price correctly. The next time I run the first query above, the result set is smaller by one product, and the missing product is the one I queried individually.
Can this behaviour be explained by the query cache? I tried to run RESET QUERY CACHE, but unfortunately I don't have the privileges to do that with this hosting provider. Is there anything else I can do to alert MySQL that the least_price attribute has changed?
I am using MySQL 5.6.38-nmm1-log on an x86_64 debian-linux-gnu machine with InnoDB 5.6.38.
No, this can't be due to the query cache.
The query cache doesn't cache row references, it caches the actual results that were returned. So it can't contain results that don't match the criteria in the query.
The cached result for a query is flushed if any of the tables it uses are modified. So it will never return stale data.
For full details of how the MySQL query cache works, see The MySQL Query Cache.
If the least_price column is indexed, your incorrect result could be due to a corrupted index; try repairing the table.
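Since the question mentions InnoDB, note that REPAIR TABLE does not work for InnoDB tables; a rough sketch of what checking and rebuilding could look like instead (the index name idx_least_price is only an assumption):
CHECK TABLE products;
-- rebuild just the suspect index (name assumed):
ALTER TABLE products DROP INDEX idx_least_price, ADD INDEX idx_least_price (least_price);
-- or rebuild the whole table, including all of its indexes:
OPTIMIZE TABLE products;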
Related
I've got a long-running MySQL db operation on my node.js server. This operation performs an INSERT INTO (...) SELECT ... FROM statement that should result in a table with thousands of rows, but I only end up with a fraction of that amount. I'm noticing that my node server shows the request always taking exactly 120000 ms, which has led me to believe that something -- either MySQL or node's MySQL connector -- is artificially limiting my results from the SELECT statement.
Some things to note:
I've tried adding my own LIMIT 0,100000 and my final result is exactly the same as if I had no LIMIT clause at all.
If I run with no WHERE clause, my resulting data goes through July of 2013. I can force later data by adding a WHERE theDateField > '2013-08-01'; I can conclude from this that the query itself should be working, but that something is limiting it.
I get the same result by running my query in MySQL Workbench after removing the LIMIT via preferences (this suggests that the MySQL server itself may be the problem).
Is anyone aware of a setting or something that could cause this behavior?
I have a database table that lists all orders. Each weekend a cron runs and it generates invoices for each customer. The code loops through each customer, gets their recent orders, creates a PDF and then updates the orders table to record the invoice ID against each of their orders.
The final update query is:
update bookings set invoiced='12345' where username='test-username' and invoiced='';
So, set invoiced to 12345 for all orders for test-username that haven't been previously invoiced.
I have come across a problem where orders are being added to the PDF but not updated to reflect the fact that they have been invoiced.
I have started running the update query manually and come across a strange scenario.
A customer may have 60 orders.
If I run the query once, 1 order is updated. I run it again and 1 order is updated; I repeat the process and each time only a small number of orders are updated - between 1 and 3. It doesn't update all 60 in one query as I would expect. I need to run the query repeatedly until it finally comes back with "0 rows affected", and then I can be sure that all rows have been updated.
I am not including a LIMIT XX in my query, so I see no reason why it can't update all orders at once. The query I run repeatedly is identical each time.
Does anybody have any wise suggestions?!
I'm guessing you're using InnoDB. You haven't disclosed the type of code you're running.
But I bet you're seeing an issue that relates to transactions. When a program works differently from an interactive session, it's often a transaction issue.
See here: http://dev.mysql.com/doc/refman/5.5/en/commit.html
Do things work better if you issue a COMMIT; command right after your UPDATE statement?
Note that your language binding may have its own preferred way of issuing the COMMIT; command.
Another way to handle this problem is to issue the SQL command
SET autocommit = 1
right after you establish your connection. This will make every SQL command that changes data do its COMMIT operation automatically.
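As a sketch, using the UPDATE from the question, either of these two patterns covers what is described above:
-- option 1: wrap the update in an explicit transaction and commit it
START TRANSACTION;
UPDATE bookings SET invoiced='12345' WHERE username='test-username' AND invoiced='';
COMMIT;
-- option 2: enable autocommit for the session right after connecting
SET autocommit = 1;
UPDATE bookings SET invoiced='12345' WHERE username='test-username' AND invoiced='';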
Recently, phpMyAdmin has been showing this message at the top of the Records column in my database.
The description of this message is:
"The number of records for InnoDB tables is not correct.
phpMyAdmin uses a quick method to get the row count, and this method only returns an approximate count in the case of InnoDB tables. See $cfg['MaxExactCount'] for a way to modify those results, but this could have a serious impact on performance."
I would like to know will it further affect my database data if I ignore it?
Or should I clear my database and re-create that data?
Thanks.
I would like to know will it further affect my database data if I ignore it?
It won't affect you if you ignore it.
Or should I clear my database and re-create that data?
There's no need to re-create the data; it won't get rid of the message.
All that message is telling you is that the numbers shown in the Rows column might not be exact. This isn't a problem with the data or the database, just something phpMyAdmin does to speed up showing that page, because counting all the rows exactly takes a long time.
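If you ever need the exact figure for one particular table, you can ask for it directly; the table name below is just a placeholder:
SELECT COUNT(*) FROM your_table;         -- exact, but scans the table (slow on big InnoDB tables)
SHOW TABLE STATUS LIKE 'your_table';     -- the Rows column here is only an estimate for InnoDB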
When I run SHOW STATUS LIKE 'Qcache%'; I get the following results:
Variable_name|Value
Qcache_free_blocks|0
Qcache_free_memory|0
Qcache_hits|0
Qcache_inserts|0
Qcache_lowmem_prunes|0
Qcache_not_cached|0
Qcache_queries_in_cache|0
Qcache_total_blocks|0
But I have enabled all the cache settings on my MySQL server, and I get the following result for my query SHOW VARIABLES LIKE '%query_cache%';
Variable_name|Value
have_query_cache|YES
query_cache_limit|2147483648
query_cache_min_res_unit|4096
query_cache_size|2147483648
query_cache_type|ON
query_cache_wlock_invalidate|OFF
Can anyone help me figure out why my Qcache values remain zero? I need this to improve my query performance. My InnoDB table currently has 3 million records, and when I try to put my business logic into a stored procedure I can't get any response from it. I have also already changed all the possible InnoDB buffer values in my my.cnf file, but it is still very, very slow. Please give me some suggestions to improve its performance. Thanks in advance.
I know this post is quite old, but in case you still haven't got an answer: the query cache does not work for stored procedures, as stated here, around the 10th line:
http://dev.mysql.com/doc/refman/5.6/en/query-cache-operation.html
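A quick way to check whether the query cache is being used at all is to run a plain SELECT directly (not through the stored procedure) twice and watch the counters; the table name below is just a placeholder:
SELECT SQL_CACHE * FROM some_table WHERE id = 1;
SHOW STATUS LIKE 'Qcache_queries_in_cache';   -- should increase if the result was cached
SELECT SQL_CACHE * FROM some_table WHERE id = 1;
SHOW STATUS LIKE 'Qcache_hits';               -- should increment after the repeated run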
I am deleting approximately 1/3 of the records in a table using the query:
DELETE FROM `abc` LIMIT 10680000;
The query appears in the processlist with the state "updating". There are 30 million records in total. The table has 5 columns and two indexes, and when dumped to SQL the file is about 9 GB.
This is the only database and table in MySQL.
This is running on a machine with 2GB of memory, a 3 GHz quad-core processor and a fast SAS disk. MySQL is not performing any reads or writes other than this DELETE operation. No other "heavy" processes are running on the machine.
This query has been running for more than 2 hours -- how long can I expect it to take?
Thanks for the help! I'm pretty new to MySQL, so any tidbits about what's happening "under the hood" while running this query are definitely appreciated.
Let me know if I can provide any other information that would be pertinent.
Update: I just ran a COUNT(*), and in 2 hours, it's only deleted 200k records. I think I'm going to take Joe Enos' advice and see how well inserting the data into a new table and dropping the previous table performs.
Update 2: Sorry, I actually misread the number. In 2 hours, it's not deleted anything. I'm confused. Any suggestions?
Update 3: I ended up using mysqldump with --where "true LIMIT 10680000,31622302" and then importing the data into a new table. I then deleted the old table and renamed the new one. This took just over half an hour.
Don't know if this would be any better, but it might be worth thinking about doing the following:
Create a new table and insert 2/3 of the original table into the new one.
Drop the original table.
Rename the new table to the original table's name.
This would prevent the log file from having all the deletes, but I don't know if inserting 20m records is faster than deleting 10m.
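A rough sketch of that copy-and-swap approach (the primary key name abc_pk and the condition identifying the rows to keep are assumptions):
CREATE TABLE abc_new LIKE abc;
INSERT INTO abc_new SELECT * FROM abc WHERE abc_pk >= 10680000;   -- copy only the rows to keep
DROP TABLE abc;
RENAME TABLE abc_new TO abc;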
You should post the table definition.
Also, to find out why it is taking so much time, try enabling profiling for the DELETE via:
SET profiling=1;
DELETE FROM abc LIMIT 10680000;
SET profiling=0;
SHOW PROFILES;
SHOW PROFILE ALL FOR QUERY X; (X is the ID of your query shown in SHOW PROFILES)
and post what it returns (but I think the query must finish before it will return the profiling data).
http://dev.mysql.com/doc/refman/5.0/en/show-profiles.html
Also, I think you'll get more responses on ServerFault ;)
When you run this query, the InnoDB log file for the database is used to record all the details of the rows that are deleted, and if this log file isn't large enough from the outset it will be auto-extended as and when necessary (if configured to do so). I'm not familiar with the specifics, but I expect this auto-extension is not blindingly fast. 2 hours does seem like a long time, but it doesn't surprise me if the log file is growing as the query runs.
Is the table from which the records are being deleted on the end of a foreign key (i.e. does another table reference it through a FK constraint)?
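If you want to check both of those things quickly (a sketch; the variable name is the standard MySQL one and the table name comes from the question):
SHOW VARIABLES LIKE 'innodb_log_file_size';
SELECT table_name, constraint_name
FROM information_schema.key_column_usage
WHERE referenced_table_name = 'abc';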
I hope your query has ended by now ... :) but from what I've seen, LIMIT with large numbers (and I have never tried numbers this large) is very slow. I would try something based on the PK, like
DELETE FROM abc WHERE abc_pk < 10680000;
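If a single large statement is still too slow, a variation on this is to delete in smaller chunks and simply repeat until no rows are affected (the primary key name abc_pk is again only an assumption):
DELETE FROM abc WHERE abc_pk < 10680000 LIMIT 10000;
-- re-run this statement until it reports "0 rows affected"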