MySql Infinite Loop

I am somewhat inexperienced with MySQL, so please excuse my naivety.
I am trying to merge two tables on an ISR id number and insert the result into another table, adr. I am running the query on a MySQL database in Workbench 5.2, and it takes less than a second if I limit it as follows:
TRUNCATE adr;
INSERT into adr
(SELECT drug08q1.isr, drug08q1.drugname, reac08q1.pt
FROM drug08q1
INNER JOIN reac08q1
ON drug08q1.isr = reac08q1.isr LIMIT 0,2815);
SELECT * FROM adr;
If I increase the limit to:
LIMIT 0,20816
the query runs forever and never finishes. I have no idea why raising the LIMIT beyond some threshold results in what looks like an infinite loop. What could I be doing incorrectly?
Thank you in advance!

Related

MySql Update takes very long

I found a strange behavior with the following query:
UPDATE llx_socpeople SET no_email=1 WHERE rowid IN (SELECT source_id FROM llx_mailing_cibles where tag = "68d74c3bc618ebed67919ed5646d0ffb");
It takes 1 minute and 30 seconds.
When I split the command into two queries:
SELECT source_id FROM llx_mailing_cibles where tag = "68d74c3bc618ebed67919ed5646d0ffb";
Result is 10842
UPDATE llx_socpeople SET no_email=1 WHERE rowid = 10842;
The result is returned in milliseconds.
Table llx_socpeople has about 7,000 records; llx_mailing_cibles has about 10,000 records.
MySQL Version is: 5.7.20-0ubuntu0.16.04.1
I have already tried to optimize and repair both tables, with no effect.
Any ideas?
Currently, since the subquery is run once for every row of the outer query, a longer execution time is to be expected.
What I would suggest is to rely on an inner join to perform the update:
UPDATE llx_socpeople AS t1
INNER JOIN llx_mailing_cibles AS t2
ON t1.rowid = t2.source_id
SET t1.no_email=1
WHERE t2.tag = "68d74c3bc618ebed67919ed5646d0ffb";
This way you should get far better performance.
You can troubleshoot slow queries with MySQL's EXPLAIN statement; find more details on its dedicated page in the official documentation. It can help you discover missing indexes.
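To see what the optimizer is doing, a minimal sketch using the SELECT equivalent of the update (the index name below is made up):
EXPLAIN
SELECT t1.rowid
FROM llx_socpeople AS t1
INNER JOIN llx_mailing_cibles AS t2 ON t1.rowid = t2.source_id
WHERE t2.tag = "68d74c3bc618ebed67919ed5646d0ffb";
-- If the plan shows no usable key on t2, an index on the filter column
-- would let MySQL find the tagged rows without a full table scan:
CREATE INDEX idx_mailing_cibles_tag ON llx_mailing_cibles (tag, source_id);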

SQL delete query is taking too long, 20k records to be deleted from half a million records

In my database I have around 500,000 records. I ran a query to delete around 20,000 of them.
It has been 45 minutes, and HeidiSQL is showing that the command is still being executed.
Here is my command:
DELETE FROM DIRECTINOUT_MSQL WHERE MACHINENO LIKE '%TEXAOUT%' AND DATASENT = 1 AND UPDATETIME = NULL AND DATE <= '2017/4/30';
Please advise how to avoid this kind of situation in the future, and what I should do now. Should I break this query up into smaller queries with extra conditions and execute those?
I have exported my database backup file; it is around 47 MB.
Kindly advise.
Try adding an index. It will improve your query's performance.
A database index, or just index, speeds up the retrieval of data from tables. When you query data from a table, MySQL first checks whether suitable indexes exist; if so, it uses them to locate the exact matching rows instead of scanning the whole table.
https://dev.mysql.com/doc/refman/5.5/en/optimization-indexes.html
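As a rough sketch (the index name is made up, and the best column choice depends on the real data): the leading-wildcard LIKE '%TEXAOUT%' cannot use an index at all, so index the columns the WHERE clause filters with equality or range conditions:
CREATE INDEX idx_directinout_cleanup ON DIRECTINOUT_MSQL (DATASENT, UPDATETIME, `DATE`);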
Which engine are you using, MyISAM or InnoDB?
I guess you are using a MySQL database.
What isolation level is set on the database?
How much time does a SELECT of that data take? Also be aware that some clients apply a row limit that can make you think a query finished within a few seconds when you actually got only part of the results.
I had this problem once. I solved it with a loop.
You first run a fast query to check whether any matching rows remain:
SELECT 1 FROM DIRECTINOUT_MSQL WHERE MACHINENO LIKE '%TEXAOUT%' AND DATASENT = 1 AND UPDATETIME IS NULL AND DATE <= '2017/4/30' LIMIT 1
(Note that UPDATETIME = NULL, as written in the original query, never matches any row; UPDATETIME IS NULL is almost certainly what was intended.)
While that query returns a row, run:
DELETE FROM DIRECTINOUT_MSQL WHERE MACHINENO LIKE '%TEXAOUT%' AND DATASENT = 1 AND UPDATETIME IS NULL AND DATE <= '2017/4/30' LIMIT 1000
I chose 1000 arbitrarily; you have to see what the fastest chunk size for the DELETE statement is on your server configuration.
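If you would rather have MySQL do the looping server-side, here is a minimal sketch under the same assumptions (the procedure name is made up):
DELIMITER //
CREATE PROCEDURE delete_texaout_in_chunks()
BEGIN
  REPEAT
    DELETE FROM DIRECTINOUT_MSQL
     WHERE MACHINENO LIKE '%TEXAOUT%'
       AND DATASENT = 1
       AND UPDATETIME IS NULL
       AND `DATE` <= '2017/4/30'
     LIMIT 1000;
  UNTIL ROW_COUNT() = 0 END REPEAT;  -- ROW_COUNT() is 0 once the last chunk is gone
END //
DELIMITER ;
CALL delete_texaout_in_chunks();
With autocommit on, each iteration commits its own chunk, so the server never has to build up a huge rollback log.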

MYSQL query too big

Despite 8 GB of RAM, when I run this MySQL query I get an error because memory runs out. The reason is that I have a huge amount of data:
DELETE FROM bigtable_main where date = '2009-12-31';
Is there a way to split the above query up so that I can do rows 1 to 999,999 in one query, rows 1,000,000 to 1,999,999 in another query, and so on?
You could use the LIMIT keyword:
DELETE FROM bigtable_main where date = '2009-12-31' LIMIT 1000000;
You simply run this query over and over again until there are no rows left to delete.
DELETEing rows is more complex than you might guess, because MySQL's transaction semantics go to a lot of trouble to make it possible to roll back the deletion. If you do the deletion in smaller chunks (e.g. LIMIT 1000000 or even LIMIT 1000) you demand less rollback work from the MySQL server.
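A minimal sketch of that repeat-until-done pattern; ROW_COUNT() reports how many rows the previous statement affected:
DELETE FROM bigtable_main WHERE date = '2009-12-31' LIMIT 1000000;
SELECT ROW_COUNT();  -- rows deleted by the DELETE above; stop re-running once this reaches 0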

Query taking so much time in mysql but not in oracle

Why does the following query take so much time in MySQL but not in Oracle?
select * from (select * from employee) as a limit 1000
I tested this query on Oracle and MySQL databases with 5,000,000 records in the table.
I know that this query should be written like:
select * from employee limit 1000
But for displaying data and the total number of rows in our custom dynamic grid we have only one query, so we use a simple select * from employee query and then add LIMIT or other conditions.
We have since worked around this problem.
But my question is: why does such a query take so much time in MySQL?
Well, because to perform this query MySQL has to go over all the rows in the table, copy them to a temporary table (in memory or on disk, depending on size), and only then take the first 1000.
In MySQL, if you want to optimize queries with LIMIT, you need to follow practices that prevent scanning the full data set (mainly, an index on the column you sort by).
See here: http://dev.mysql.com/doc/refman/5.0/en/limit-optimization.html
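For example, a minimal sketch, assuming employee has an indexed id column (an assumption, since the question does not show the schema):
-- With ORDER BY on an indexed column, MySQL can read the first 1000
-- entries straight from the index and stop early, instead of copying
-- the whole table into a temporary table first.
SELECT * FROM employee ORDER BY id LIMIT 1000;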

Accessing MySQL tables with over 10 million rows = Error: too many connections

How could I optimize these MySQL queries, which access two tables with more than 10 million rows each?
What the query below does is get every id from the 'users' table that doesn't exist in the 'guests' table. This returns hundreds of thousands of rows, so we limit it to 5000 ids per run. Is there a better way to run this so we could get more done per run?
$before = /* an earlier date */;
$now = /* the current date */;
$query="SELECT users.id
FROM users
LEFT JOIN guests ON guests.id = users.id
WHERE guests.id IS NULL AND (users.in >= '$before' AND users.in <= '$now')
LIMIT 0,5000";
Once we know which IDs don't exist in the guests table, we have to delete those rows from the users table. This means running another 5000 delete queries to delete all those IDs.
If we run this process with both tables containing over 10 million rows of data, our server returns an error that it has too many connections, and the MySQL server can't be accessed anymore until you restart it. But if we run the same process with both tables containing only a few thousand rows, it doesn't encounter this problem, though it still takes some time.
Why is this happening, and how can we avoid it while also optimizing this process?
Two things. First, check how your software handles MySQL connections. It looks like it opens a connection before every query and never reuses it.
Second, you can modify your query to do the work in one statement instead of running a separate query for each user. That way only one connection is needed, and all the processing happens on the MySQL side, which can optimize it further.
Edit: One more thing you can check is running EXPLAIN on your query to make sure you have the proper indexes set up (if the select part is running slowly now).
Warning: test this query before running it on live data. I do not claim responsibility for any lost data.
DELETE u
FROM users AS u
LEFT JOIN guests AS g ON g.id = u.id
WHERE g.id IS NULL
  -- the alias u must be used here, and `in` needs backticks since IN is a reserved word
  AND (u.`in` >= '$before' AND u.`in` <= '$now')
As to the core of your question (too many connections), I suspect your PHP script starts a new connection in a loop for every ID that is to be deleted.
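If you still want to work in batches of 5000 as in the original approach, here is a sketch under the same assumptions ($before and $now are the placeholders from the question; MySQL does not allow LIMIT on a multi-table DELETE, so the batch is picked in a derived table):
DELETE FROM users
WHERE id IN (
    SELECT id FROM (
        SELECT u.id
        FROM users AS u
        LEFT JOIN guests AS g ON g.id = u.id
        WHERE g.id IS NULL
          AND (u.`in` >= '$before' AND u.`in` <= '$now')
        LIMIT 5000
    ) AS batch  -- the derived table works around MySQL's "can't specify target table for update in FROM clause" restriction
);
Re-run the statement until it reports 0 affected rows; one connection is enough for the whole cleanup.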