Simultaneous page loads: queries one by one? - MySQL

I use a PHP script which modifies a MySQL database with multiple PDO queries. I was wondering: if two users load a page at exactly the same time, will MySQL first handle all the queries from user one in one go, or is there a risk that some queries from user one are handled, then some queries from user two, and then some more from user one?
I hope my question is clear. Many thanks in advance ;-).

Unless you use locks or transactions, you can't guarantee the order in which queries are executed. If multiple PHP scripts are executed simultaneously, they will interact with the database unaware of each other.
For example, if a script is executed twice simultaneously and runs two queries a few seconds apart, script one's query 1 could be executed by the database before script two's query 1, but script two's query 2 could still be executed before script one's query 2.
From a higher-level perspective, all of the queries are executed concurrently, so if anything in the code depends on the queries running in a specific order, you need a lock or a transaction to guarantee that order.
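As a minimal sketch of the transaction approach (the accounts table and its columns here are hypothetical), wrapping the dependent statements in one InnoDB transaction prevents another connection's writes from being interleaved between them:
START TRANSACTION;
-- FOR UPDATE locks the row, so a concurrent script blocks here until COMMIT
SELECT balance FROM accounts WHERE user_id = 1 FOR UPDATE;
UPDATE accounts SET balance = balance - 100 WHERE user_id = 1;
COMMIT;
Note that SELECT ... FOR UPDATE only blocks like this on a transactional engine such as InnoDB; with MyISAM you would need LOCK TABLES instead.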

Related

Ruby on Rails: ActiveRecord's first_or_create is very slow

I've got a Ruby script which imports XML files into a MySQL database. It loops through the elements in the XML file and finally calls
table.where(
  value: e['value'],
  ...
).first_or_create
The script has to process a lot of data, most of which is already in the database. Because of this it runs really slowly, since first_or_create obviously triggers a lot of SELECT queries.
Is there any way to handle this more rapidly? Is it related to connection management?
Thanks
first_or_create is a convenience method which doesn't care much about performance on a bigger data set.
Ensure all your indices are in place.
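For example (the table and column names below are assumed, not taken from the question), the repeated lookup inside first_or_create is only cheap if the columns matched in where() are indexed:
-- hypothetical index covering the column(s) used in the where() lookup
CREATE INDEX index_entries_on_value ON entries (value);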
The first obvious way to increase performance: every create statement is wrapped in its own BEGIN/COMMIT transaction block, so that's 3 queries for one insert.
You can place your whole loop inside a single transaction block; that will gain you some time, as BEGIN and COMMIT are then executed only once.
Remember that the round trip to/from the database takes a considerable amount of time, so an obvious performance boost is to combine multiple statements into one. Try to create one SELECT query that checks a batch of, say, 1000 records. The DB will return that, for example, 200 of them don't exist, and you can go ahead and build one multi-row INSERT statement for those 200 records.
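A rough sketch of both ideas (the single wrapping transaction and the batched lookup/insert) in raw SQL; the entries table and value column are hypothetical stand-ins for the real schema:
START TRANSACTION;
-- one query to find which values of the batch already exist
SELECT value FROM entries WHERE value IN ('a', 'b', 'c');
-- one multi-row INSERT for the values the SELECT did not return
INSERT INTO entries (value) VALUES ('b'), ('c');
COMMIT;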
Always measure, and always try to formulate what level of performance you are aiming for, so that you don't make the code more complex than necessary.
It's better to first eliminate the records that don't need to be created: check which records are not yet in the database, then create only those.

MySQL: sequence of executed queries

I have to update a row in a table (InnoDB) and then, right after, select the last row that I updated and make an insert. If the connection is too slow (for the UPDATE statement), can the SELECT statement get the wrong row? Assume that I'm using two different queries.
Are you using SQL to run your script, or are you running it somewhere else (e.g. PHP, Python, C#)?
A script run from within SQL should* always complete one statement before moving on to the next, but if you're unsure you could call something like the sleep function or a wait/delay function to pause before you run your second statement.
*I say should because I've seen some extremely rare cases, usually with longer-running queries, that don't. If your first job takes a long time to complete, it may be worth the effort to schedule the first job in Job Agent, then schedule the second job for later that day.
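In MySQL, the pause mentioned above can be done with the built-in SLEEP() function:
-- wait two seconds before issuing the second statement
DO SLEEP(2);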
MySQL does not keep a record of row insertion order. Any algorithm based on "the last registry that I updated" must implement its own means of gathering the required information; if it doesn't, it will get the wrong row sooner or later. (Network speed is probably not as relevant as concurrent access.)
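One hedged way to make the pair of statements reliable (the orders schema below is invented for illustration) is to key the follow-up SELECT and INSERT on the primary key you just updated, inside one transaction, rather than on any notion of "the last row":
START TRANSACTION;
UPDATE orders SET status = 'paid' WHERE id = 42;
-- re-reading by primary key is guaranteed to return the same row
SELECT * FROM orders WHERE id = 42;
INSERT INTO order_log (order_id, note) VALUES (42, 'status set to paid');
COMMIT;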

Two MySQL requests at the same time - Performance issue

I have a MySQL server with many InnoDB tables.
I have a background script that does A LOT of delete/insert work in one request: it deletes many millions of rows from table 2, then inserts many millions of rows into table 2 using data from table 1:
INSERT INTO table2 (date)
SELECT date FROM table1 GROUP BY date
(The actual request is more complex, but this shows what kind of request I am doing.)
At the same time, I am going to run a second background script that does about a million INSERT or UPDATE requests against table 3, but separately (I mean, I execute the first UPDATE query, then I execute an INSERT query, and so on).
My issue is that each script is fast when run on its own: let's say each takes 30 minutes, so 1 hour total. But when the two scripts run at the same time, it is VERY slow, taking something like 5 hours instead of 1.
So first, I would like to know what can cause this. Is it I/O performance? (Is MySQL slow because it is writing to two different tables and has to switch between them?)
And how could I fix this? If I could pause the big INSERT query while my second background script is running, that would be great, for example... but I can't find a way to do something like that.
I am not an expert at MySQL administration... If you need more information, please let me know!
Thank you !!
30 minutes for a million INSERTs is not fast. Do you have an index on the date column (or whatever column you are pivoting on)?
Regarding your original question: it's difficult to say much without knowing the details of both your scripts and the table structures, but one possible reason why the scripts run reasonably quickly on their own is that you are doing similar kinds of SELECT queries, which might be cached by MySQL and reused for subsequent queries. If you run the two scripts in parallel, the SELECTs for the corresponding query might not stay in the cache (because there are two concurrent processes sending new queries all the time).
You might want to explicitly disable the cache for queries which you are sure you only run once (using the SQL_NO_CACHE modifier) and see if it changes anything. But I'd look into indexing and into your table structure first, because 30 minutes seems extremely slow :) You might also want to introduce partitioning by date for your tables if you know that you always select entries in a given period (say, by month). The exact tricks depend on your data.
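As an illustration only, monthly range partitioning could look like the following; the table name, column, and boundaries are assumptions, and note that MySQL requires the partitioning column to be part of every unique key on the table:
ALTER TABLE table2
PARTITION BY RANGE (TO_DAYS(date)) (
    PARTITION p2015_01 VALUES LESS THAN (TO_DAYS('2015-02-01')),
    PARTITION p2015_02 VALUES LESS THAN (TO_DAYS('2015-03-01')),
    -- catch-all partition for anything newer
    PARTITION pmax VALUES LESS THAN MAXVALUE
);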
UPDATE: Another issue might be that both your queries work with the same table (table 1), and the default transaction isolation level in MySQL is REPEATABLE READ, as far as I recall. So it might be that one query is waiting until the other is done with the table in order to satisfy the isolation level. You might want to lower the transaction isolation level if you are sure that table 1 is not changed while the scripts are working on it.
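Lowering it is a one-line session setting on the connection running the bulk script; whether READ COMMITTED is actually safe here depends on what the scripts do, so treat this as an illustration:
-- affects only the current connection
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;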
You can use the event scheduler to make MySQL launch these queries at different hours of the day. Another related Stack Overflow question has an example of how to do it: MySQL Event Scheduler on a specific time everyday
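For illustration, a minimal event that runs the big INSERT at 03:00 every day; the event name, start timestamp, and statement body are placeholders:
-- the scheduler must be switched on once (needs sufficient privileges)
SET GLOBAL event_scheduler = ON;

CREATE EVENT nightly_rebuild
ON SCHEDULE EVERY 1 DAY STARTS '2015-01-01 03:00:00'
DO
    INSERT INTO table2 (date) SELECT date FROM table1 GROUP BY date;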
Another thing to keep in mind is to use EXPLAIN on the slow query to see what the reason might be that it is that slow.
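For example, running EXPLAIN against the SELECT half of the big statement shows which indexes (if any) MySQL would use for it:
EXPLAIN SELECT date FROM table1 GROUP BY date;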

What is the maximum number of queries?

What is the maximum number of queries that can be run in one go on a dedicated server with 4 GB of RAM?
I am running a cron job that may contain close to one hundred thousand queries. The queries run in a loop; each is a simple SELECT of 3 integer fields.
Please advise.
42, of course. The 43rd query breaks it. No, really :-)
There is no upper limit on the number of queries; the loop can run all day. Unless there is some form of parallel code (i.e. threads), each query from the cron job will run in series (send query, process result, send query, process...), and thus the total number of queries is irrelevant in terms of memory requirements.
There is, however, a potential (if absolutely absurd) limit with updates/inserts/deletes that run within a single transaction, because the transaction needs to be able to be rolled back. (I am not sure whether this is bound by storage, main memory, or something else.)
Happy coding.
Since this is a long-running job, take note: if the cron job "runs into" the next cron job (does not complete in time), serious issues can result, as the same "job" may be executing multiple times! This ugly situation can quickly spiral out of control if the cron jobs keep cascading into each other: each concurrently running "job" places more load on the database server.
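One common guard against overlapping runs is MySQL's named locks; the lock name below is arbitrary. Each run tries to take the lock without waiting and exits immediately if the previous run still holds it:
-- returns 1 if the lock was acquired, 0 if another run still holds it
SELECT GET_LOCK('cron_job_lock', 0);
-- ... run the job's queries only if the result was 1 ...
SELECT RELEASE_LOCK('cron_job_lock');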

MySQL - Concurrent SELECTS - one client waits for another?

I have the following scenario:
I have a database with a particular MyISAM table of about 4 million rows. I use stored procedures (MySQL version 5.1), one in particular to search through these rows on various criteria. The table has several indexes on it, and queries through this stored procedure are normally very fast (< 1 s). Basically, in this search procedure I create and execute some dynamic SQL via a prepared statement; after executing it, I perform "DEALLOCATE PREPARE stmt;".
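For reference, a stripped-down sketch of that dynamic-SQL pattern; the table, column, and variable names here are made up:
-- build and run the dynamic search query, then free the statement
SET @sql = 'SELECT * FROM listings WHERE city = ? LIMIT 15';
PREPARE stmt FROM @sql;
SET @city = 'London';
EXECUTE stmt USING @city;
DEALLOCATE PREPARE stmt;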
Most of the queries run in under a second (I use LIMIT to fetch just 15 rows at a time). However, there are some rare queries which take longer to run (say 2-3 s). I have optimized the searched table as far as I can.
I have developed a web application and I can run and see the results of the fast queries in under a second on my development machine.
However, if I open two browser instances and do a simultaneous search against the development machine, one with the longer-running query and the other with the faster query, the results are returned at the same time. It seems as if the fast query waits for the slower query to finish before returning its results, i.e. both queries take 2-3 seconds...
Is there a reason for this? I thought MyISAM handled SELECTs independently of one another, and that is not the behaviour I am currently experiencing...
Thanks in advance!
Tim
This is just because you are doing it from the same machine; if the searches came from two different machines, they would run at the same time. Would you really want one person to be able to bog down your MySQL server just by opening a bunch of browser windows and hitting refresh?
That is right. Each SELECT query on a MyISAM table locks the entire table until it is finished; the excuse is that this achieves "a very high read throughput". Switching to InnoDB will allow concurrent reads.
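If table-level locking is indeed the bottleneck, the switch is a single statement (the table name here is assumed); note that it rebuilds the table, so expect it to take a while on 4 million rows:
-- convert the search table to InnoDB for row-level locking
ALTER TABLE search_data ENGINE = InnoDB;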