Can't Access Some Tables in MySQL

I need help: some of my MySQL tables are not accessible (it looks like a deadlock), my website is down, and I can't open these tables through Navicat or phpMyAdmin. I tried to kill the query by finding its thread ID using:
SHOW FULL PROCESSLIST
but I didn't find any rows related to the issue.
Note that the issue started after I ran a heavy query against 2 tables which contain more than 10 million rows, and I cancelled the query after it had run for more than 3 hours.
I don't have access to reboot the server. I am using GoDaddy hosting, and they don't offer chat support for my country!
Can I do anything on my side to fix it without rebooting?
Please advise.

Even with that many rows the query should not take 3 hours, which means the SQL itself needs serious attention. Since you said it "looks like deadlock", you can investigate with SHOW ENGINE INNODB STATUS.
If more tables end up in a locked state, we may be able to resolve these situations by setting the value of the innodb_lock_wait_timeout system variable, as sketched below.
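A minimal sketch of both steps (the timeout value is only an example; the default is 50 seconds):
-- Look at the LATEST DETECTED DEADLOCK section of this output:
SHOW ENGINE INNODB STATUS;
-- Make waiting statements give up sooner instead of piling up;
-- SET GLOBAL applies to sessions opened after the change:
SET GLOBAL innodb_lock_wait_timeout = 30;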
For reference:
https://dev.mysql.com/doc/refman/5.7/en/glossary.html#glos_deadlock
https://www.percona.com/blog/2012/09/19/logging-deadlocks-errors/

In a Unix-like OS you can use the system command or the \! prefix to execute any shell command inside the mysql interpreter.
Of course you must have admin privileges to run commands in the shell.
With this you can use some mysql admin command to get the PID and analyse your problem at the OS level, as sketched below.
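For example, to find the mysqld process ID without leaving the client (a sketch; the exact ps options vary by platform):
\! ps -ef | grep mysqld
or equivalently:
system ps -ef | grep mysqld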

Related

MySQL Tables Multiple Records Appearing Over Time

I'm working with a MySQL database through MySQL Workbench. Currently, many of my tables are appearing more than once, with some appearing 10+ times. It is not messing anything up with my application or when I run a query on any of these tables, but it is something that I feel should be fixed. I haven't been able to find anything about it and am assuming this is a quick fix.
MySQL Table List
Currently running MySQL Workbench 8.0.23 build 365764 SE (64-bit) against a MySQL Azure database system.
Also, is there any reason that this would be happening? I have a variety of PHP files set up to read/write data, and my hunch is that some lines of code may be creating duplicates that I need to nail down. Is there any specific SQL syntax that would create these duplicates (other than CREATE TABLE ..., which I do not have set to run anywhere)?
Thanks for the help.

Godaddy database storage not lowering when content is deleted

My site is connected to a GoDaddy MySQL database. The database is only 1 gigabyte, and I was almost at the max, so I started to delete some pictures from it using some MySQL commands.
But when I went back and checked how much space I had, it was the same as before I deleted the content. Does GoDaddy take a long time to refresh its databases, or do MySQL DELETE commands not work on GoDaddy?
It's doubtful anyone here can truly help unless they work for GoDaddy.
But when you say you deleted pictures from your MySQL database... are you really sure the pictures were stored in the database? I suspect it is more likely that they are stored on disk and there is simply a file path stored in the database. Deleting such records would make a minuscule impact on the size of a 1GB database.
If you have access to the information schema, try running the following query
SELECT `TABLE_SCHEMA`, `TABLE_NAME`, `DATA_LENGTH`, `INDEX_LENGTH`
FROM `INFORMATION_SCHEMA`.`TABLES`
ORDER BY `DATA_LENGTH`+`INDEX_LENGTH` DESC
That should give you some clues about where the trouble is, and at least indicate where you should start trimming data.
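A variant that converts the sizes to megabytes can make the output easier to scan (a sketch of the same query):
SELECT `TABLE_SCHEMA`, `TABLE_NAME`,
       ROUND((`DATA_LENGTH` + `INDEX_LENGTH`) / 1024 / 1024, 2) AS `SIZE_MB`
FROM `INFORMATION_SCHEMA`.`TABLES`
ORDER BY `SIZE_MB` DESC;
Also note that InnoDB does not necessarily return space freed by DELETE to the operating system; rebuilding the table (for example with OPTIMIZE TABLE) is usually needed to reclaim it.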

MySQL Workbench kicks me off before query runs

I'm using MySQL Workbench to run a query. The query is reasonably big. When I limit the results to, say, 50 records, the results are as expected. But when I remove the limit, the query runs for about 5 minutes and then prompts me to re-enter my password. After doing so I see the query has not run and has stopped working.
It seems like there is a setting that kicks me off after a certain amount of time, but I cannot see it in the drop-down menus.
Has anyone experienced this? Any advice?
Update:
The query ran and Workbench says results were returned, but they are not, and I got the error pop-up shown in the screenshot.
It sounds like one of three things is the likely culprit:
1. MySQL Workbench may be timing out.
2. The MySQL server is timing out.
3. Something network related (far less likely if your timeout happens consistently at around 5 minutes).
This may help with #1:
Go to Edit -> Preferences -> SQL Editor and set this parameter to a higher value: DBMS connection read time out (in seconds). For instance: 86400.
Close and reopen MySQL Workbench. Kill your previous query, which is probably still running, and run the query again.
If #2, you will need access to the server's timeout configuration (see the sketch below).
If #3, well, there are a number of factors that might need adjusting.
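For #2, the relevant server-side settings can be inspected and raised like this (a sketch; the values are only illustrative, and changing globals requires sufficient privileges):
-- See the current timeouts (values in seconds):
SHOW VARIABLES LIKE '%timeout%';
-- Examples: allow longer reads and longer idle time for new sessions:
SET GLOBAL net_read_timeout = 600;
SET GLOBAL wait_timeout = 28800;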

MySQL Connection Limit Advice

I've hit a problem with my MySQL queries and was hoping someone could offer some help/advice.
I'm developing a PHP-based system which combines quite a lot of data in different tabs on one page (tab1 = profile, tab2 = address, tab3 = payments, etc.), and as a result one page can have up to 34-40 MySQL queries pulling from different tables or with different criteria.
The page load became really slow, and I asked my web host if they knew what was wrong; they advised that it's because of slow MySQL queries (some over 2 seconds). They also said that my MySQL user is only allowed 15 connections at a time.
If my page has 40 queries and only 15 connections are allowed at a time, does this mean they effectively queue and wait for one to complete? If that is the case then I can understand why the page takes a while to load, but I'm not sure of the solution. Is a 15-connection limit considered a lot, or is this quite a tight restriction by my host (HostMonster)?
Also, if there were 15 users accessing the system at the same time, would these 15 connections be split between them, or is it 15 connections per user logged into the site? I assume they mean per database user, but all the people who access the system will be using the same database user, so it seems impossible to create a system that several users can access at once.
The whole connections thing has confused me a little.
Thanks in advance for any help!
Have one connection per page. Run the queries in sequence.
Optimize those queries - see EXPLAIN (sketched below) and use indexes.
Perhaps combine queries to reduce the throughput.
BTW, 10+ queries per page is excessive IMHO.
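For the EXPLAIN step, a minimal sketch (the table, column, and index names are made up for illustration):
-- Look for type=ALL (a full table scan) and key=NULL, which suggest a missing index:
EXPLAIN SELECT * FROM payments WHERE customer_id = 42;
-- If customer_id is unindexed, adding an index usually helps:
CREATE INDEX idx_payments_customer ON payments (customer_id);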
If you are having a max connect errors problem, use this command from the command line:
mysqladmin flush-hosts -uuser -p'password'
This will flush the hosts that MySQL has recorded and rebuild the list. In newer versions (MySQL 5.6) you get more information on this, but not in previous versions.
You can set the following:
max_connect_errors = 10000
to prevent the message from appearing again.
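That setting normally lives in my.cnf under [mysqld]; it can also be applied at runtime without a restart (a sketch):
SET GLOBAL max_connect_errors = 10000;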
15 queries or connections is not a problem at all; on busy databases we have seen a thousand connections and tens of thousands of queries per second.

Fixing "Lock wait timeout exceeded; try restarting transaction" for a "stuck" MySQL table?

From a script I sent a query like this thousands of times to my local database:
update some_table set some_column = some_value
I forgot to add the WHERE part, so the same column was set to the same value for all the rows in the table. This was done thousands of times, and the column was indexed, so the corresponding index was probably updated lots of times too.
I noticed something was wrong because it took too long, so I killed the script. I have even rebooted my computer since then, but something is stuck in the table, because simple queries take a very long time to run, and when I try dropping the relevant index it fails with this message:
Lock wait timeout exceeded; try restarting transaction
It's an InnoDB table, so the stuck transaction is probably implicit. How can I fix this table and remove the stuck transaction from it?
I had a similar problem and solved it by checking the threads that are running.
To see the running threads use the following command in mysql command line interface:
SHOW PROCESSLIST;
It can also be run from phpMyAdmin if you don't have access to the mysql command-line interface.
This will display a list of threads with their corresponding IDs and execution times, so you can KILL the threads that are taking too long to execute.
In phpMyAdmin you will have a button for stopping threads via KILL; if you are using the command-line interface, just use the KILL command followed by the thread ID, as in the following example:
KILL 115;
This will terminate the connection for the corresponding thread.
You can check the currently running transactions with
SELECT * FROM `information_schema`.`innodb_trx` ORDER BY `trx_started`
Your transaction should be one of the first, because it's the oldest in the list. Now just take the value from trx_mysql_thread_id and send it the KILL command:
KILL 1234;
If you're unsure which transaction is yours, repeat the first query very often and see which transactions persist.
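To make your transaction easier to spot, you can select just the interesting columns (a sketch):
SELECT `trx_mysql_thread_id`, `trx_started`, `trx_state`, `trx_query`
FROM `information_schema`.`innodb_trx`
ORDER BY `trx_started`;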
Check InnoDB status for locks
SHOW ENGINE InnoDB STATUS;
Check MySQL open tables
SHOW OPEN TABLES WHERE In_use > 0;
Check pending InnoDB transactions
SELECT * FROM `information_schema`.`innodb_trx` ORDER BY `trx_started`;
Check lock dependency - what blocks what
SELECT * FROM `information_schema`.`innodb_locks`;
After investigating the results above, you should be able to see what is locking what.
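Where available, information_schema.innodb_lock_waits can be joined with innodb_trx to show the blocker directly (a sketch; in MySQL 8.0 this information moved to performance_schema.data_locks and sys.innodb_lock_waits):
SELECT r.trx_mysql_thread_id AS waiting_thread,
       b.trx_mysql_thread_id AS blocking_thread,
       b.trx_query AS blocking_query
FROM `information_schema`.`innodb_lock_waits` w
JOIN `information_schema`.`innodb_trx` r ON r.trx_id = w.requesting_trx_id
JOIN `information_schema`.`innodb_trx` b ON b.trx_id = w.blocking_trx_id;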
The root cause of the issue might be in your code too - please check the related functions, and especially the annotations, if you use a JPA implementation like Hibernate.
For example, as described here, misuse of the following annotation can cause locks in the database:
@Transactional(propagation = Propagation.REQUIRES_NEW)
This started happening to me when my database size grew and I was doing a lot of transactions on it.
The truth is that there is probably some way to optimize either your queries or your DB, but try these two statements as a workaround fix.
Run this:
SET GLOBAL innodb_lock_wait_timeout = 5000;
And then this:
SET innodb_lock_wait_timeout = 5000;
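Note that SET GLOBAL only affects sessions opened after the change, which is why the second, session-level SET is needed to cover the current connection. You can verify the effective value with:
SHOW VARIABLES LIKE 'innodb_lock_wait_timeout';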
When you establish a connection for a transaction, you acquire a lock before performing the transaction. If you are not able to acquire the lock, you retry for some time. If the lock still cannot be obtained, the "lock wait timeout exceeded" error is thrown. The reason you cannot acquire the lock is that you are not closing the connection, so when you try to take the lock a second time, your previous connection is still open and still holding it.
Solution: close the connection, or call setAutoCommit(true) (according to your design), to release the lock.
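The same idea expressed at the SQL level (a sketch): finish or abandon the open transaction, or re-enable autocommit so each statement releases its locks immediately:
COMMIT;  -- or ROLLBACK; either one releases the locks held by the session
SET autocommit = 1;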
Restart MySQL - it works fine.
BUT beware that if such a query gets stuck, there is a problem somewhere:
- in your query (a misplaced char, a Cartesian product, ...)
- very numerous records to edit
- complex joins or tests (MD5, substrings, LIKE %...%, etc.)
- a data structure problem
- the foreign key model (chain/loop locking)
- misindexed data
As @syedrakib said, it works, but it is not a long-lived solution for production.
Beware: doing the restart can leave your data in an inconsistent state.
Also, you can check how MySQL handles your query with the EXPLAIN keyword and see whether anything there can speed up the query (indexes, complex tests, ...).
Go to the processes list in MySQL, where you can see which tasks are still running.
Kill the particular process, or wait until it completes.
I ran into the same problem with an UPDATE statement. My solution was simply to run through the operations available in phpMyAdmin for the table: I optimized, flushed and defragmented the table (not in that order). There was no need to drop the table and restore it from backup for me. :)
I had the same issue. I think it was a deadlock issue with SQL. You can just force-close the SQL process from Task Manager. If that doesn't fix it, restart your computer. You don't need to drop the table and reload the data.
I had this problem when trying to delete a certain group of records (using MS Access 2007 with an ODBC connection to MySQL on a web server). Typically I would delete certain records from MySQL and then replace them with updated records (cascade-deleting several related records streamlines removing everything tied to a single record).
I tried to run through the operations available in phpMyAdmin for the table (optimize, flush, etc.), but I was getting a "need permission to RELOAD" error when I tried to flush. Since my database is on a web server, I couldn't restart the database, and restoring from a backup was not an option.
I tried running the delete query for this group of records from the cPanel MySQL access on the web, and got the same error message.
My solution: I used Sun's (Oracle's) free MySQL Query Browser (which I had previously installed on my computer) and ran the delete query there. It worked right away; problem solved. I was then able to once again perform the function from the Access script using the ODBC Access-to-MySQL connection.
Issue in my case: some updates were made to some rows within a transaction, and before the transaction was committed the same rows were being updated elsewhere, outside this transaction. Ensuring that all updates to the rows are made within the same transaction resolved my issue.
Issue resolved in my case by changing DELETE to TRUNCATE. (TRUNCATE TABLE is DDL: it drops and recreates the table with an implicit commit, so it avoids the row locks a full-table DELETE takes, though it cannot be rolled back.)
Issue:
query = "delete from Survey1.sr_survey_generic_details"
mycursor.execute(query)
Fix:
query = "truncate table Survey1.sr_survey_generic_details"
mycursor.execute(query)
This happened to me when I was accessing the database from multiple platforms, for example from DBeaver and from control panels. At some point DBeaver got stuck, and therefore the other panels couldn't process additional information. The solution is to reboot all access points to the database: close them all and restart.
Fixed it.
Make sure you don't have a mismatched data type in your insert query.
I had an issue where I was trying to insert "user browser agent data" into a VARCHAR(255) column and kept hitting this lock; when I changed it to TEXT(255), that fixed it.
So most likely it is a mismatch of data types.
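A sketch of the kind of change described (the table and column names are made up):
ALTER TABLE visits MODIFY user_agent TEXT;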
I solved the problem by dropping the table and restoring it from backup.