MySQL stops writing, but keeps reading

I have a Magento store running on Debian with LAMP; the server is a VPS with 1 GB RAM and a 1-core processor.
MySQL randomly, but often, stops writing new data to tables. Magento doesn't show any error and says that it saved the data successfully. MySQL can still read the tables without problems and the website keeps running fine; it just doesn't save any new data.
If I restart MySQL it starts saving new data again, then stops writing again some time later, seemingly at random (I couldn't relate it to any specific action); it can be days or hours.
I've turned on the query log, but I'm not sure what to look for in it. I found this query appearing frequently after MySQL stops writing:
INSERT INTO index_process_event (process_id,event_id,status) VALUES ('7', '10453', 'error') ON DUPLICATE KEY UPDATE status = VALUES(status);
I've tried reindexing the whole process table as suggested by Henry, but with no success.
After reindexing, the event_id changed.
I don't believe the problem is low RAM; the website only gets around 200 sessions a day, with hardly more than 2 users online at the same time.
Thanks, I appreciate any help.

Check whether there is any space left on your storage.
Also check the system log in addition to the MySQL log.
Grep for any line that contains the string "error".
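If it helps, here is a minimal way to find the relevant files and directory from the MySQL prompt (standard server variables, nothing site-specific assumed):
-- Where the error log is written (grep this file for "error" or "full").
SHOW VARIABLES LIKE 'log_error';
-- Where the general query log goes, if it is enabled.
SHOW VARIABLES LIKE 'general_log_file';
-- The data directory; check the free space on the mount that holds it.
SHOW VARIABLES LIKE 'datadir';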

How do I restore/recover an RDS instance that is corrupted?

I attempted to upgrade the RDS DB instance from db.t3.large to db.m6g.large. Although the modification went through successfully, I started getting errors in my web application. For two tables, no matter what I do, I get this error:
Index column size too large. The maximum column size is 767 bytes.
I know what the error is. However, at this point, there is no way to fix it. Anything touching those two tables is corrupted. I even tried to restore the DB using "Point In Time Restore" and it failed: it got stuck at "Creating" and I finally got an email titled "RDS MySQL - Point In Time Restore Failure". The message is:
We have been unable to perform a point-in-time restore for your Amazon RDS DB instance XXX, in the us-west-2 Region for account ID: XXX. As a result, your instance was put in the 'incompatible-restore' state.
Restoring from older snapshots works, but the new snapshots still fail for those two tables.
I posted this on the AWS developer forum, but there has been no response since Nov 12, 2020.
Thank you for your help.
EDIT: An example of "no matter what I do" with those two tables:
If I connect to those tables using a SQL client such as SequelPro and attempt to query them, that error pops up. Pretty much any DML that I throw at them fails,
e.g. SELECT * FROM table_name or ALTER TABLE.
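Not a fix, but possibly useful context: the 767-byte index prefix limit applies to the older COMPACT/REDUNDANT InnoDB row formats (DYNAMIC/COMPRESSED allow 3072 bytes), so a hedged first check is what row format the two tables use. The schema and table names below are placeholders:
-- Schema and table names are placeholders for the affected tables.
SELECT TABLE_NAME, ROW_FORMAT, CREATE_OPTIONS
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_SCHEMA = 'your_schema'
  AND TABLE_NAME IN ('table_one', 'table_two');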

GoDaddy database storage not shrinking when content is deleted

My site is connected to a GoDaddy MySQL database. The database is only 1 gigabyte, and I was almost at the max, so I started to delete some pictures from it using some MySQL commands.
But when I went back and checked how much space I had, it was the same as before I deleted the content. Does GoDaddy take a long time to refresh its databases, or do MySQL delete commands not work on GoDaddy?
It's doubtful anyone here can truly help unless they work for GoDaddy.
But when you say you deleted pictures from your MySQL database... are you really sure the pictures were stored in the database? I suspect it is more likely that they are stored on disk and there is simply a file path stored in the database. Deleting such records would make a minuscule impact on the size of a 1 GB database.
If you have access to the information schema, try running the following query
SELECT `TABLE_SCHEMA`, `TABLE_NAME`, `DATA_LENGTH`, `INDEX_LENGTH`
FROM `INFORMATION_SCHEMA`.`TABLES`
ORDER BY `DATA_LENGTH`+`INDEX_LENGTH` DESC
That should give you some clues about where the space is going, and at least indicate where you should start trimming data.
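One hedged addition: deleting rows usually does not shrink InnoDB's on-disk files by itself; the freed pages are kept for reuse. If the pictures really were stored in the database, rebuilding the table is what returns the space (assuming innodb_file_per_table is in effect; the table name is a placeholder):
-- Rebuilds the table; with innodb_file_per_table this shrinks the .ibd file.
OPTIMIZE TABLE your_picture_table;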

Hibernate Spring app hangs

I'm working on a Hibernate/Spring/MySQL app. Sometimes when I call getHibernateTemplate().get(class, id) I can see a bunch of HQL in the logs and the application hangs; I have to kill Tomcat. The method reads through a 3,000-line file, and there are 18 of these files, so I've been thinking I've probably been looking at this wrong. I need help checking this at the database level, but I don't know how to approach it. Maybe my database can't take so many hits so fast.
I've looked in phpMyAdmin in the section with information about execution times, and I see red values for:
Innodb_buffer_pool_reads 165
Handler_read_rnd 40
Handler_read_rnd_next 713 k
Created_tmp_disk_tables 8
Opened_tables 30
Can I somehow configure the application to treat the database more gently?
How can I check if this is the issue?
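For context, those numbers come from the server status counters and can be re-read directly; as a rough rule, a Handler_read_rnd_next value that is large compared to the rows you actually return points at full table scans (missing indexes):
-- Re-read the counters that phpMyAdmin highlights.
SHOW GLOBAL STATUS LIKE 'Handler_read%';
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';
SHOW GLOBAL STATUS LIKE 'Created_tmp%';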
Update
I put a
Thread.sleep(2000);
at the end of each cycle and it made the same number of calls (18), so I guess this won't be the reason? Can I discard this approach?
This is a different view of this question:
Hibernate hangs or throws lazy initialization no session or session was closed
where I'm trying something different.
Update 2
Could it be the BufferedReader reading the file? The file is 44 KB. I tried the class from this method:
http://code.hammerpig.com/how-to-read-really-large-files-in-java.html
but it did not work.
Regarding Update 1: never use a sleep or anything slow within a transaction. A transaction has to be closed as fast as possible, because it can block other database operations (what exactly gets blocked depends on the isolation level).
I do not really understand how the database is related to the files in your use case. But if it works for the first file and becomes slow later on, then the problem could be the Hibernate Session (too many objects); in that case, start a new Transaction/Hibernate Session for each file.
I rewrote the program so that I load the information directly into the database using the MySQL statement LOAD DATA INFILE. It works very fast. Then I updated the rows, changing the fields I needed, also with SQL queries. I think there is simply too much information to manage at the same time through memory and abstractions.
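For reference, a minimal sketch of that approach; the file path, table and column names here are placeholders, not the actual schema:
-- Bulk-load the file straight into a table (placeholder names throughout).
LOAD DATA LOCAL INFILE '/tmp/records.csv'
INTO TABLE staging_records
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES
(col_a, col_b, col_c);
-- Then fix up fields with ordinary SQL, as described above.
UPDATE staging_records SET col_c = UPPER(col_c) WHERE col_c IS NOT NULL;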

How to track down a Drupal max_allowed_packet error?

One of my staging sites has recently started spewing huge errors on every admin page along the lines of:
User warning: Got a packet bigger than 'max_allowed_packet' bytes query: UPDATE cache_update SET data = ' ... ', created = 1298434692, expire = 1298438292, serialized = 1 WHERE cid = 'update_project_data' in _db_query() (line 141 of /var/www/vhosts/mysite/mypath/includes/database.mysqli.inc). (where "..." is about 1.5 million characters worth of serialized data)
How should I go about tracking down where the error originates? Would adding debugging code to _db_query do any good, since it gets called so much?
No need to track this down, because I don't think you can fix it anyway.
This is the cache from update.module, containing information about which modules have updated versions and so on. So this is coming from one of the "_update_cache_set()" calls in that module.
Based on a wild guess, I'd say it is the one in this function: http://api.drupal.org/api/drupal/modules--update--update.fetch.inc/function/_update_refresh/6
It is basically building up a huge array with information about all the projects on your site and trying to store it as a single, serialized value.
How many modules do you have installed on this site?
I can think of three ways to "fix" this error:
Increase the max_allowed_packet size (the max_allowed_packet setting in my.cnf); see the example after this list.
Disable update.module (it's not that useful on a staging/production site anyway, since you need to test updates on a dev site first)
Disable some modules ;)
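A sketch of option 1, assuming you have the privileges to change global variables; 64 MB is just an example value, and the my.cnf change is still needed for the setting to survive a restart:
-- Check the current limit, then raise it for the running server.
SHOW VARIABLES LIKE 'max_allowed_packet';
SET GLOBAL max_allowed_packet = 64 * 1024 * 1024;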
I had a similar error and went round and round for about an hour.
I increased the memory limit to 512M and still had the issue, figured that was enough, and went looking elsewhere.
I cleared the caches with drush, still got the error, and then looked at the database tables.
I noticed that all the cache tables were cleared except cache_update. I truncated this table and bam, everything was working normally.
Before I got the memory limit error, I got a max_input_vars error since I am on PHP 5.4. But this question and answer led me to this fix. Not quite sure how or why it worked, but it did.
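The step that actually cleared it, as a statement against the Drupal database:
-- Empties the stale update-status cache; Drupal rebuilds it the next time update status is fetched.
TRUNCATE TABLE cache_update;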

Fixing "Lock wait timeout exceeded; try restarting transaction" for a 'stuck" Mysql table?

From a script I sent a query like this thousands of times to my local database:
update some_table set some_column = some_value
I forgot to add the WHERE part, so the same column was set to the same value for all the rows in the table; this was done thousands of times, and the column was indexed, so the corresponding index was probably updated many times too.
I noticed something was wrong because it took too long, so I killed the script. I have even rebooted my computer since then, but something is stuck in the table, because simple queries take a very long time to run, and when I try dropping the relevant index it fails with this message:
Lock wait timeout exceeded; try restarting transaction
It's an InnoDB table, so the stuck transaction is probably implicit. How can I fix this table and remove the stuck transaction from it?
I had a similar problem and solved it by checking the threads that are running.
To see the running threads, use the following command in the MySQL command-line interface:
SHOW PROCESSLIST;
It can also be sent from phpMyAdmin if you don't have access to the MySQL command-line interface.
This will display a list of threads with their IDs and execution times, so you can KILL the threads that are taking too long to execute.
In phpMyAdmin you will have a button for stopping threads via KILL; if you are using the command-line interface, just use the KILL command followed by the thread ID, as in the following example:
KILL 115;
This will terminate the connection for the corresponding thread.
You can check the currently running transactions with
SELECT * FROM `information_schema`.`innodb_trx` ORDER BY `trx_started`
Your transaction should be one of the first, because it's the oldest in the list. Now just take the value from trx_mysql_thread_id and send it the KILL command:
KILL 1234;
If you're unsure which transaction is yours, repeat the first query very often and see which transactions persist.
Check InnoDB status for locks
SHOW ENGINE InnoDB STATUS;
Check MySQL open tables
SHOW OPEN TABLES WHERE In_use > 0;
Check pending InnoDB transactions
SELECT * FROM `information_schema`.`innodb_trx` ORDER BY `trx_started`;
Check lock dependency - what blocks what
SELECT * FROM `information_schema`.`innodb_locks`;
After investigating the results above, you should be able to see what is locking what.
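One hedged caveat: on MySQL 8.0 and later, the information_schema.innodb_locks and innodb_lock_waits tables were removed, and the equivalent information lives in performance_schema:
-- MySQL 8.0+ equivalents of the lock views used above.
SELECT * FROM performance_schema.data_locks;
SELECT * FROM performance_schema.data_lock_waits;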
The root cause of the issue might be in your code too; please check the related functions, especially the annotations, if you use a JPA implementation like Hibernate.
For example, as described here, the misuse of the following annotation might cause locks in the database:
@Transactional(propagation = Propagation.REQUIRES_NEW)
This started happening to me when my database size grew and I was doing a lot of transactions on it.
The truth is there is probably some way to optimize either your queries or your DB, but try these two statements as a workaround.
Run this:
SET GLOBAL innodb_lock_wait_timeout = 5000;
And then this:
SET innodb_lock_wait_timeout = 5000;
When you establish a connection for a transaction, you acquire a lock before performing the transaction. If you cannot acquire the lock, you retry for some time; if the lock still cannot be obtained, the "lock wait timeout exceeded" error is thrown. The reason you cannot acquire the lock is that you are not closing the connection, so when you try to get the lock a second time, your previous connection is still open and still holding it.
Solution: close the connection or call setAutoCommit(true) (according to your design) to release the lock.
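The same idea from the SQL side, as a rough sketch: with autocommit off, every statement opens a transaction that keeps its locks until the session commits or rolls back:
-- If this is OFF, the session holds its locks until an explicit COMMIT or ROLLBACK.
SHOW VARIABLES LIKE 'autocommit';
-- Releases the locks held by the current session.
COMMIT;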
Restart MySQL and it works fine. BUT beware that if such a query gets stuck, there is a problem somewhere:
in your query (misplaced char, cartesian product, ...)
a very large number of records to edit
complex joins or tests (MD5, substrings, LIKE %...%, etc.)
data structure problem
foreign key model (chain/loop locking)
misindexed data
As @syedrakib said, it works, but this is not a long-lived solution for production.
Beware: doing the restart can leave your data in an inconsistent state.
Also, you can check how MySQL handles your query with the EXPLAIN keyword and see whether anything can be done there to speed it up (indexes, complex tests, ...).
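A minimal illustration of the EXPLAIN suggestion; the query below is a placeholder for whichever statement is misbehaving:
-- Shows the access plan: which indexes are considered and roughly how many rows are scanned.
EXPLAIN SELECT * FROM some_table WHERE some_column = 'some_value';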
Go to the processes in MySQL.
There you can see which tasks are still running.
Kill the particular process or wait until it completes.
I ran into the same problem with an UPDATE statement. My solution was simply to run through the operations available in phpMyAdmin for the table. I optimized, flushed and defragmented the table (not in that order). There was no need for me to drop the table and restore it from backup. :)
I had the same issue. I think it was a deadlock issue with SQL. You can just force-close the SQL process from Task Manager. If that doesn't fix it, restart your computer. You don't need to drop the table and reload the data.
I had this problem when trying to delete a certain group of records (using MS Access 2007 with an ODBC connection to MySQL on a web server). Typically I would delete certain records from MySQL and then replace them with updated records (a cascade delete removes several related records, which streamlines deleting all related records for a single record deletion).
I tried to run through the operations available in phpMyAdmin for the table (optimize, flush, etc.), but I got a "need permission to RELOAD" error when I tried to flush. Since my database is on a web server, I couldn't restart the database, and restoring from a backup was not an option.
I tried running the delete query for this group of records through the cPanel MySQL access on the web and got the same error message.
My solution: I used Sun's (Oracle's) free MySQL Query Browser (which I had previously installed on my computer) and ran the delete query there. It worked right away; problem solved. I was then able to once again perform the function from the Access script over the ODBC connection to MySQL.
The issue in my case: some updates were made to rows within a transaction, and before the transaction was committed, the same rows were being updated elsewhere, outside this transaction. Ensuring that all updates to the rows are made within the same transaction resolved my issue.
The issue was resolved in my case by changing DELETE to TRUNCATE.
Issue:
query = "delete from Survey1.sr_survey_generic_details"
mycursor.execute(query)
Fix:
query = "truncate table Survey1.sr_survey_generic_details"
mycursor.execute(query)
This happened to me when I was accessing the database from multiple platforms, for example from DBeaver and control panels. At some point DBeaver got stuck, and therefore the other panels couldn't process additional information. The solution is to restart all access points to the database: close them all and restart.
Fixed it.
Make sure you don't have a mismatched data type in your insert query.
I had an issue where I was inserting "user browser agent data" into a VARCHAR(255) column and kept hitting this lock; when I changed it to TEXT(255), the problem went away.
So most likely it is a data type mismatch.
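The column change described above, as a statement; the table and column names are placeholders:
-- Placeholder table/column names; switches the column from VARCHAR(255) to TEXT.
ALTER TABLE user_sessions MODIFY browser_agent TEXT;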
I solved the problem by dropping the table and restoring it from backup.