phpMyAdmin blocks when executing a large query - MySQL

I am using phpMyAdmin for my MySQL administration. When I'm running an expensive query, which takes several minutes, phpMyAdmin seems to block all other activity going on in other tabs. I can still use the MySQL console for queries, but I can't use phpMyAdmin anymore in any tab; it only loads and finishes once the big query in the other tab is finished. Can I change this somehow?

That's because of the way PHP handles sessions. One session can only be used by one script at a time. In one browser, all tabs use the same session, so they have to wait for the task to complete.
If you log in to phpMyAdmin in another browser, you create a new session and can do things in parallel (because each browser has its own cookie store).
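To illustrate the mechanism, here is a minimal generic PHP sketch (not phpMyAdmin's own code, and the connection details are placeholders) showing how a script holds the session lock and how session_write_close() releases it before the slow work starts:

<?php
// Minimal sketch of the mechanism (generic PHP, not phpMyAdmin's own code).
// The session file is locked from session_start() until the script ends or
// releases it explicitly, so a second request using the same session waits.
session_start();
$user = $_SESSION['user'] ?? null;    // read what you need while the lock is held

// Releasing the lock here lets other tabs (same session cookie) proceed
// while the slow query below is still running.
session_write_close();

$db = new mysqli('localhost', 'dbuser', 'dbpass', 'mydb');   // placeholder credentials
$db->query('SELECT COUNT(*) FROM some_big_table');           // the slow part

// Re-open the session only if you need to write to it afterwards.
session_start();
$_SESSION['last_run'] = time();

Until the lock is released, any other request that calls session_start() with the same session cookie simply blocks and waits.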

phpMyAdmin is designed to run as a single session through the web server into the database. If you need more sessions, you must use a client (console, SQLyog, Toad) that can open multiple threads to the database, or use another browser so that it gets its own session at the same time.

As this still seems to be quite popular, let me add an up-to-date answer.
Since phpMyAdmin 4.5.0 the session is no longer locked while executing an SQL query (and other potentially long-lasting operations). See https://github.com/phpmyadmin/phpmyadmin/issues/5699 for more information.

Related

Kill Long Running Processes in MySQL

Scenario - you have hundreds of reports running on a slave machine. These reports are either scheduled by MySQL's event scheduler or are called via a Python/R or shell script. Apart from that, there are fifty-odd users connecting to the MySQL slave and running random queries. These people don't really know how to write good queries, and that's fair; they are not supposed to. So, every now and then (read: every day), you see some queries which are stuck because of read/write locks. How do you fix that?
What you do is you don't kill whatever is being written. Instead, you kill all the read queries. Now, that is also tricky, because if you kill all the read queries you will also kill off OUTFILE queries, which are actually write queries (they just don't write to MySQL, they write to disk).
Why killing is necessary (I'm only speaking for MySQL, do not take this out of context)
I have two words for you: slave lag. We don't want that to happen, because if it does, all users, reports and consumers suffer.
I have written the following to kill processes in MySQL based on three questions:
how long has the query been running?
who is running the query?
do you want to kill write/modify queries too?
What I have intentionally not done yet is maintain a history of the processes that have been killed. One should do that in order to analyse and find out who is running all the bad queries, but there are other ways to find that out.
I have created a procedure for this. I haven't spent much time on it, so please suggest whether this is a good way to do it or not.
GitHub Gist
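The gist itself is not reproduced here, but a hypothetical sketch of the same idea in plain PHP, walking information_schema.PROCESSLIST, applying the three questions above and sparing writes/OUTFILE statements, could look like this (the threshold, user name and credentials are invented):

<?php
// Hypothetical sketch of the same idea in plain PHP (the real procedure is in
// the gist). It walks information_schema.PROCESSLIST and kills queries that
// match the three questions above; threshold, user and credentials are invented.
$maxSeconds = 600;             // how long has the query been running?
$targetUser = 'report_user';   // who is running the query?
$killWrites = false;           // do you want to kill write/modify queries too?

$db = new mysqli('localhost', 'admin', 'adminpass');

$stmt = $db->prepare(
    "SELECT ID, INFO FROM information_schema.PROCESSLIST
     WHERE COMMAND = 'Query' AND TIME > ? AND USER = ?"
);
$stmt->bind_param('is', $maxSeconds, $targetUser);
$stmt->execute();
$result = $stmt->get_result();

while ($row = $result->fetch_assoc()) {
    $sql = (string) $row['INFO'];
    // Treat INSERT/UPDATE/DELETE/REPLACE/ALTER and SELECT ... INTO OUTFILE as writes.
    $isWrite = (bool) preg_match('/\b(INSERT|UPDATE|DELETE|REPLACE|ALTER|OUTFILE)\b/i', $sql);
    if ($isWrite && !$killWrites) {
        continue;   // never kill what is being written (or written to disk)
    }
    $db->query('KILL ' . (int) $row['ID']);
}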
Switch to MariaDB. Versions 10.0 and 10.1 implement several limits and timeouts: https://mariadb.com/kb/en/library/query-limits-and-timeouts/
Then write an API layer between what the users write and what actually hits the database. In this layer, add the appropriate limitations.
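As a hedged illustration of that layer, assuming MariaDB 10.1+ where the per-session max_statement_time variable (in seconds) is available, the API could cap every statement it forwards (names and credentials are invented):

<?php
// Hedged example of the API-layer idea, assuming MariaDB 10.1+ where
// max_statement_time (in seconds) can be set per session.
$db = new mysqli('localhost', 'report_user', 'reportpass', 'reports');

// Anything the users submit through this layer is aborted after 30 seconds.
$db->query('SET SESSION max_statement_time = 30');

$userQuery = 'SELECT COUNT(*) FROM big_table';   // imagine this came from a user
$result = $db->query($userQuery);
if ($result === false) {
    echo 'Query aborted or failed: ' . $db->error;
}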

Can I keep a WebSQL database open to improve performance?

I have an HTML5 mobile app running on iOS and Android. Users will normally have a little bit of local data stored in a few tables. Let's say five tables with an average of three records.
The performance of WebSQL is really bad. I read in this post that much of the delay probably lies in opening and closing the database for each transaction. My users will normally only do one transaction at a time, so the time needed to open and close the database for each operation will usually be a relatively big chunk of the total time needed.
I am wondering if I could just open the database once, dispense with all the transaction wrappers and just execute the sql straight away?
The tables are never used by any person or process other than the user updating their data, or the app reading the data after an update and sending it to a server for calculations and statistics.
Most crucially: if I follow the above strategy, and the database is never closed, but the user or the OS closes the app (properly speaking: the webview), will the changed data persist or be lost?
Okay, I found the problem. I use the persistenceJS framework to deal with the local database. It keeps a copy of the WebSQL data stored in a JS object and keeps the database and the JS object in sync. That's a process that takes a while, and I was putting everything in the "flush" handler, which comes after the sync.
I also keep the connection open. For IndexedDB, I could keep it open on the UI and a background thread at the same time without observing any problem. I believe WebSQL will be the same. If you are using just a JS file, you could try out my own JavaScript library; it is a very thin wrapper for both IndexedDB and WebSQL, but it is written in the IndexedDB style.

How can I ask MySQL Workbench to submit queries asynchronously, when performing long operations (e.g. table alterations)?

For all its greatness, it is very annoying that MySQL Workbench 5.2 freezes each time it submits a query instead of allowing it to be performed asynchronously.
It is not even possible to launch a second instance to do other tasks in the meantime.
Do you know if there is a setting somewhere to adjust this behaviour, or is it a "feature"?
Pretty sure it's a feature. You can run more than one query in a script. There are a lot of cases where you would want/need queries to run sequentially. I don't know of any query editor tools that allow for what you want.
If you're using PHP, you could fire off several AJAX requests to pages that each run one of the queries you need run, but unless you are doing something like this often, it wouldn't be worth the time to set up.
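If you do go that route, each page only needs to run one whitelisted statement; a hypothetical endpoint might look like this (the job names, SQL and credentials are invented):

<?php
// run_job.php -- a hypothetical endpoint along those lines. Fire one AJAX
// request (or open one browser tab) per job and they run in parallel on
// separate connections.
set_time_limit(0);          // let a long ALTER run to completion
ignore_user_abort(true);    // keep going even if the tab is closed

$jobs = [                   // whitelist -- never execute raw user input
    'alter_orders'  => 'ALTER TABLE orders ADD COLUMN shipped_at DATETIME NULL',
    'rebuild_stats' => 'ANALYZE TABLE orders',
];

$job = $_GET['job'] ?? '';
if (!isset($jobs[$job])) {
    http_response_code(400);
    exit('unknown job');
}

$db = new mysqli('localhost', 'admin', 'adminpass', 'mydb');
$db->query($jobs[$job]);
echo $job . ' finished';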

Should I store executed DB queries for debugging purposes?

I want to be able to track the data changes in the DB of my app, so I'm thinking about storing, in a dedicated DB table, all INSERT, UPDATE and DELETE queries that my app executes.
Would that be a bad idea?
We do that in debug mode - we output all the queries to a file. Of course, this does not make sense (and is a huge performance hit) on a production server, but we can turn it on there too for short periods of debugging.
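A minimal sketch of that debug-mode logging, assuming all application code already goes through one small wrapper (DEBUG_SQL, the file path and the credentials are invented names):

<?php
// Minimal sketch of the debug-mode approach: every query passes through one
// wrapper, which appends it to a flat file when debugging is switched on.
define('DEBUG_SQL', true);   // switch off in production -- logging every query is a performance hit

function run_query(mysqli $db, string $sql)
{
    if (DEBUG_SQL) {
        // Appending to a flat file keeps the logging itself from generating
        // more queries (unlike logging into a DB table).
        file_put_contents(__DIR__ . '/query-debug.log', date('c') . ' ' . $sql . PHP_EOL, FILE_APPEND);
    }
    return $db->query($sql);
}

$db = new mysqli('localhost', 'app', 'apppass', 'appdb');    // placeholder credentials
run_query($db, "UPDATE users SET last_login = NOW() WHERE id = 42");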
Anyway, MySQL has a query log you can turn on; it will record every single thing the server does.
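For example, the general query log can be switched on at runtime on recent MySQL versions (this needs SUPER privileges; the file path and credentials are placeholders):

<?php
// Turning the general query log on at runtime. The log records every
// statement the server receives.
$db = new mysqli('localhost', 'root', 'rootpass');
$db->query("SET GLOBAL general_log_file = '/var/log/mysql/general.log'");
$db->query("SET GLOBAL general_log = 'ON'");
// ... reproduce the problem, then switch it back off:
$db->query("SET GLOBAL general_log = 'OFF'");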

How to check which cache features are turned on? [PHP/MySQL]

Is there an application or a code I can use to check which cache functions are turned on?
In this app I'm working on, I thought there was MySQL caching, but since I started using SELECT SQL_NO_CACHE in one of my queries (applied yesterday), the caching has not stopped. This leads me to assume it's a PHP caching mechanism at work.
I looked over php.ini for any possible cache features, but didn't see anything that stood out.
Which leads me to this: is there an app I can download, or a shell command I can run, to tell me which cache functions are on or what may be causing the caching?
You probably already know that MySQL has a query caching mechanism. For instance if you have a table named users, and run a query like this:
SELECT COUNT(*) FROM `users`
It may take 3 seconds to run. However, if you run the query again, it may only take 0.02 seconds, because MySQL has cached the results from the first query. However, MySQL will clear its cache if you update the users table in any way, such as by inserting new rows, updating a row, etc. So it's doubtful that MySQL is the problem here.
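If you want to confirm that from the MySQL side (on 5.x, where the query cache still exists), you can inspect the relevant server variables and status counters; a quick sketch with placeholder credentials:

<?php
// Check whether the MySQL query cache is enabled and actually being hit,
// using the server's own variables and status counters.
$db = new mysqli('localhost', 'app', 'apppass');

$vars = $db->query("SHOW VARIABLES LIKE 'query_cache%'");
while ($row = $vars->fetch_assoc()) {
    echo $row['Variable_name'] . ' = ' . $row['Value'] . PHP_EOL;   // query_cache_type, query_cache_size, ...
}

$stats = $db->query("SHOW STATUS LIKE 'Qcache%'");
while ($row = $stats->fetch_assoc()) {
    echo $row['Variable_name'] . ' = ' . $row['Value'] . PHP_EOL;   // Qcache_hits, Qcache_inserts, ...
}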
My hunch is your browser is caching the data. It's also possible that logic in your code is grabbing the old row, updating it, and then displaying the old row data in the form. I really can't say without seeing your code.
You probably need to close and restart your browser. I'd bet it is your browser caching, not the back end.