Error: MySQL client ran out of memory

Can anyone please advise me on this error...
The database has 40,000 news stories, but only the field 'story' is large;
'old' is a numeric value (0 or 1),
and 'title' and 'shortstory' are very short or NULL.
Any advice appreciated. This is the result of running a search query against the database:
Error: MySQL client ran out of memory
Statement: SELECT news30_access.usehtml, old, title, story, shortstory, news30_access.name AS accessname, news30_users.user AS authorname, timestamp, news30_story.id AS newsid FROM news30_story LEFT JOIN news30_users ON news30_story.author = news30_users.uid LEFT JOIN news30_access ON news30_users.uid = news30_access.uid WHERE title LIKE ? OR story LIKE ? OR shortstory LIKE ? OR news30_users.user LIKE ? ORDER BY timestamp DESC

The simple answer is: don't include story in the SELECT clause.
If you do want the story, then limit the number of results being returned. Start with, say, 100 results by adding:
LIMIT 100
to the end of the query. This will get the 100 most recent stories.
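Applied to the statement from the error message, that looks like:
SELECT news30_access.usehtml, old, title, story, shortstory,
       news30_access.name AS accessname, news30_users.user AS authorname,
       timestamp, news30_story.id AS newsid
FROM news30_story
LEFT JOIN news30_users ON news30_story.author = news30_users.uid
LEFT JOIN news30_access ON news30_users.uid = news30_access.uid
WHERE title LIKE ? OR story LIKE ? OR shortstory LIKE ? OR news30_users.user LIKE ?
ORDER BY timestamp DESC
LIMIT 100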
I also note that you are using LIKE on story as well as on the other string columns. You probably want to be using MATCH with a full-text index. This doesn't solve your immediate problem (which is returning too much data to the client), but it will make your queries run faster.
To learn about full text search, start with the documentation.
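A minimal sketch of that approach (the index name and search string are placeholders; FULLTEXT on InnoDB requires MySQL 5.6+, and the MATCH column list must match the index exactly):
ALTER TABLE news30_story ADD FULLTEXT INDEX ft_news (title, story, shortstory);

SELECT news30_story.id, title, shortstory
FROM news30_story
WHERE MATCH(title, story, shortstory) AGAINST ('your search terms')
ORDER BY timestamp DESC
LIMIT 100;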

Related

MySQL 5.7 RAND() and IF() without LIMIT leads to unexpected results

I have the following query
SELECT t.res, IF(t.res = 0, "zero", "more than zero")
FROM (
    SELECT table.*, IF(RAND() <= 0.2, 1, IF(RAND() <= 0.4, 2, IF(RAND() <= 0.6, 3, 0))) AS res
    FROM table
    LIMIT 20
) t
which returns rows where the label in the second column always matches res. That's exactly what you would expect. However, as soon as I remove the LIMIT 20, I receive highly unexpected results: rows where res is 0 but the label reads "more than zero", and vice versa (more rows than 20 are returned, of course; I cut the output off to make it easier to read):
SELECT t.res, IF(t.res = 0, "zero", "more than zero")
FROM (
    SELECT table.*, IF(RAND() <= 0.2, 1, IF(RAND() <= 0.4, 2, IF(RAND() <= 0.6, 3, 0))) AS res
    FROM table
) t
Side notes:
I'm using MySQL 5.7.18-15-log, and this is a highly abstracted example (the real query is much more complex).
I'm trying to understand what is happening. I do not need answers that offer workarounds without any explanation of why the original version is not working. Thank you.
Update:
Instead of using LIMIT, GROUP BY id also works in the first case.
Update 2:
As requested by zerkms, I added t.res = 0 and t.res + 1 to the second example.
The problem is caused by a change introduced in MySQL 5.7 in how derived tables in (sub)queries are treated.
Basically, in order to optimize performance, some subqueries are merged into the outer query and may be executed at different times and/or multiple times, leading to unexpected results when your subquery is non-deterministic (as in my case with RAND()).
There are two easy (and equally ugly) workarounds to get MySQL to "materialize" these subqueries (i.e. return deterministic results): use LIMIT <high number> or GROUP BY id, both of which force MySQL to materialize the subquery and return the expected results.
The last option is to turn off derived_merge in the optimizer_switch variable: derived_merge=off (make sure to leave all the other parameters as they are).
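For reference, here is how that last option looks in SQL (session scope shown; per the MySQL docs, flags not named in the assignment keep their current values):
-- Inspect the current optimizer flags:
SELECT @@optimizer_switch;
-- Turn off derived table merging for this session only:
SET SESSION optimizer_switch = 'derived_merge=off';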
Further reading:
https://mysqlserverteam.com/derived-tables-in-mysql-5-7/
Subquery's rand() column re-evaluated for every repeated selection in MySQL 5.7/8.0 vs MySQL 5.6

Laravel orderBy slowing down response greatly

Leaving out unnecessary information, the table structure is as follows (not listing all the relations loaded via with()):
products
    id
    launch_date
    name
product_view_history
    id
    account_id
    product_id
    timestamps
I have a query that is taking an irregularly long amount of time. With all the profiling I've done, the actual time spent in SQL is very small (<50 ms), but the time this code takes to execute is in the 900+ ms range:
$this->select('products.*', DB::raw('COUNT(product_view_history.id) as view_count'))
    ->leftJoin('product_view_history', 'product_view_history.product_id', '=', 'products.id', 'outer')
    ->groupBy('product_view_history.product_id')
    ->orderBy('view_count', 'DESC')
    ->orderBy('products.id', 'DESC')
    ->whereNotNull('products.launch_date')
    ->with(['owner.images', 'owner.star', 'owner.follows', 'owner.followers', 'company.products.alphas'])
    ->take(Config::get('xxxx.limits.small'))
    ->get();
However, the time it takes for this code to execute drops to the appropriate <50 ms if I comment out ->orderBy('view_count', 'DESC'). If I swap out get() with toSql() and run both of those queries manually, I find the times to be similar and small. To be clear, what I am measuring is not SQL query time; I am just taking the time in milliseconds before and directly after this code and logging the difference.
Can anyone see any reason why ->orderBy('view_count', 'DESC') would add close to a full second to the execution of the code, even though the SQL itself is not (or only minimally) slower?
It seems that executing the query raw and then hydrating and eager-loading speeds things up. This does not answer WHY that orderBy would cause such an issue, but it does answer how to get around the issue at hand:
$products = self::hydrate(DB::select(
"select `products`.*, COUNT(product_view_history.id) as view_count
from `products` left join `product_view_history`
on `product_view_history`.`product_id` = `products`.`id`
where `products`.`launch_date` is not null
group by `product_view_history`.`product_id`
order by `view_count` desc, `products`.`id` desc limit {$limit}"))
->load(['owner.images', 'owner.star', 'owner.follows', 'owner.followers', 'company.products.alphas']);
For me it was caused by using the "wrong" data type in the where clause.
For example, I filtered by a column called "username", which is a varchar, but passed an int as the value to filter by.
The query took very long when using orderBy, but was fast again once I removed the orderBy.
The solution, at least for me, was to cast the value to a string, after which the orderBy was as fast as before. I do not know the real reason, but presumably MySQL casts or sorts differently when the data types do not match.
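A hypothetical illustration of the mismatch in plain SQL (table and values are made up): MySQL has to cast the varchar column row by row to compare it to a number, which also defeats any index on that column.
-- Numeric literal against a varchar column: every row is cast, no index use.
SELECT * FROM users WHERE username = 12345 ORDER BY username;
-- String literal: compared directly, an index on username can be used.
SELECT * FROM users WHERE username = '12345' ORDER BY username;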

Long query time

I have a query to fetch the last 10 TV show episodes, sorted by date (from newest to oldest), like this:
return $this->getEntityManager()->createQuery('SELECT t FROM AppBundle:TvShow t JOIN t.episodes e ORDER BY e.date DESC')->setFirstResult(0)->setMaxResults(10)->getResult();
It returns only 9 episodes. We have similar queries on the same page that work fine. Only when I setMaxResults(11) does it return 10 episodes.
Another issue with this query: it takes too long compared to other, similar queries (about 200 ms).
What do you suggest for me?
Thanks in advance.
As in Richard's answer: wrong results with setMaxResults and a fetch-joined collection are normal Doctrine behaviour.
To make it work you can use the Doctrine Paginator (available since Doctrine 2.2; docs: http://docs.doctrine-project.org/en/latest/tutorials/pagination.html).
Example usage:
use Doctrine\ORM\Tools\Pagination\Paginator;
$query->setMaxResults($limit);
$query->setFirstResult($offset);
$results = new Paginator($query, $fetchJoin = true);
Long query time looks like a topic for another question.
Straight from the documentation:
If your query contains a fetch-joined collection specifying the result limit methods are not working as you would expect. Set Max Results restricts the number of database result rows, however in the case of fetch-joined collections one root entity might appear in many rows, effectively hydrating less than the specified number of results.
https://doctrine-orm.readthedocs.org/en/latest/
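To see why, it helps to look at the kind of SQL a fetch join produces (a hypothetical sketch, not Doctrine's actual generated SQL):
SELECT s.*, e.*
FROM tv_show s
JOIN episode e ON e.tvshow_id = s.id
ORDER BY e.date DESC
LIMIT 10;
-- If one show owns two of the ten newest episodes, it occupies two of the
-- ten rows; hydration collapses them into one TvShow entity, so 10 rows
-- come back but only 9 distinct entities.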

phpMyAdmin Query Execution Time

I tried to run the following query in phpMyAdmin on my Cloud VPS (cPanel):
SELECT bankcode, bankname
FROM newbankdetails
WHERE
type='DB' AND
bankcode IN ( SELECT drawingbankname
FROM dd1
WHERE entrydate BETWEEN '01/04/2014' AND '30/04/2014'
AND paymentmode='DD'
AND state='TAMIL NADU' )
It takes a very, very long time (nearly 5 to 6 hours), then says it is out of time, or keeps running for days without showing any error or any result.
Whereas it works perfectly on my local machine in XAMPP.
It only happens when working with large tables, around 12 GB in this case.
How can I speed it up so the result displays as instantly as it does in localhost XAMPP?
I think replacing IN with a LEFT JOIN can make more sense, like this:
SELECT bankcode, bankname
FROM newbankdetails nd
LEFT JOIN dd1 ON nd.bankcode = dd1.drawingbankname
WHERE nd.type = 'DB'
  AND dd1.entrydate BETWEEN '01/04/2014' AND '30/04/2014'
  AND dd1.paymentmode = 'DD'
  AND dd1.state = 'TAMIL NADU'
I can't test it, but I think the dd1.paymentmode = 'DD' condition in the WHERE clause automatically removes the rows where dd1.paymentmode IS NULL, effectively turning the LEFT JOIN into an inner join.
First you should use EXPLAIN (https://dev.mysql.com/doc/refman/5.0/en/using-explain.html).
That will tell you which part of the query is slow.
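For example, just prefix the query with EXPLAIN and look at the key and rows columns of the output; a NULL key means a full table scan:
EXPLAIN SELECT bankcode, bankname
FROM newbankdetails
WHERE type = 'DB'
  AND bankcode IN (SELECT drawingbankname
                   FROM dd1
                   WHERE entrydate BETWEEN '01/04/2014' AND '30/04/2014'
                     AND paymentmode = 'DD'
                     AND state = 'TAMIL NADU');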
Your problem seems to be related to the size of the table. On your localhost you may have a small table, but in production it is very large. Also put an index on nd.bankcode in the first table.
You should try to remove the subselect and use a JOIN instead.
Also, you should put indexes on the columns you use for searching.
Put an index on drawingbankname, entrydate, paymentmode, and state. Also put an index on nd.bankcode.
The column drawingbankname is a varchar; you could convert it to a fixed CHAR.
Remove the subselect and use a JOIN.
Use EXPLAIN again after the changes to see the progress.
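A sketch of those indexes (the index names are illustrative, and column order matters: equality columns first, the range column last):
CREATE INDEX idx_dd1_lookup ON dd1 (state, paymentmode, entrydate, drawingbankname);
CREATE INDEX idx_nbd_type_bankcode ON newbankdetails (type, bankcode);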

Practical limit to length of SQL query (specifically MySQL)

Is it particularly bad to have a very, very large SQL query with lots of (potentially redundant) WHERE clauses?
For example, here's a query I've generated from my web application with everything turned off, which should be the largest possible query this program can generate:
SELECT *
FROM 4e_magic_items
INNER JOIN 4e_magic_item_levels
ON 4e_magic_items.id = 4e_magic_item_levels.itemid
INNER JOIN 4e_monster_sources
ON 4e_magic_items.source = 4e_monster_sources.id
WHERE (itemlevel BETWEEN 1 AND 30)
AND source!=16 AND source!=2 AND source!=5
AND source!=13 AND source!=15 AND source!=3
AND source!=4 AND source!=12 AND source!=7
AND source!=14 AND source!=11 AND source!=10
AND source!=8 AND source!=1 AND source!=6
AND source!=9 AND type!='Arms' AND type!='Feet'
AND type!='Hands' AND type!='Head'
AND type!='Neck' AND type!='Orb'
AND type!='Potion' AND type!='Ring'
AND type!='Rod' AND type!='Staff'
AND type!='Symbol' AND type!='Waist'
AND type!='Wand' AND type!='Wondrous Item'
AND type!='Alchemical Item' AND type!='Elixir'
AND type!='Reagent' AND type!='Whetstone'
AND type!='Other Consumable' AND type!='Companion'
AND type!='Mount' AND (type!='Armor' OR (false ))
AND (type!='Weapon' OR (false ))
ORDER BY type ASC, itemlevel ASC, name ASC
It seems to work well enough, but it's also not particularly high traffic (a few hundred hits a day or so), and I wonder if it would be worth the effort to try and optimize the queries to remove redundancies and such.
Reading your query makes me want to play an RPG.
This is definitely not too long. As long as they are well formatted, I'd say a practical limit is about 100 lines. After that, you're better off breaking subqueries into views just to keep your eyes from crossing.
I've worked with some queries that are 1000+ lines, and that's hard to debug.
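For instance, pulling the joins out into a view looks like this (a hypothetical sketch using the tables from the question):
CREATE VIEW leveled_magic_items AS
SELECT mi.id, mi.source, mi.type, mi.name, mil.itemlevel
FROM 4e_magic_items mi
INNER JOIN 4e_magic_item_levels mil ON mi.id = mil.itemid;

-- The long query can then start from the view instead of repeating the joins:
SELECT * FROM leveled_magic_items WHERE itemlevel BETWEEN 1 AND 30;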
By the way, may I suggest a reformatted version? This is mostly to demonstrate the importance of formatting; I trust this will be easier to understand.
select *
from
4e_magic_items mi
,4e_magic_item_levels mil
,4e_monster_sources ms
where mi.id = mil.itemid
and mi.source = ms.id
and itemlevel between 1 and 30
and source not in(16,2,5,13,15,3,4,12,7,14,11,10,8,1,6,9)
and type not in(
'Arms' ,'Feet' ,'Hands' ,'Head' ,'Neck' ,'Orb' ,
'Potion' ,'Ring' ,'Rod' ,'Staff' ,'Symbol' ,'Waist' ,
'Wand' ,'Wondrous Item' ,'Alchemical Item' ,'Elixir' ,
'Reagent' ,'Whetstone' ,'Other Consumable' ,'Companion' ,
'Mount'
)
and ((type != 'Armor') or (false))
and ((type != 'Weapon') or (false))
order by
type asc
,itemlevel asc
,name asc
/*
Some thoughts:
==============
0 - Formatting really matters, in SQL even more than most languages.
1 - consider selecting only the columns you need, not "*"
2 - use of table aliases makes it short & clear ("mi", "mil" in my example)
3 - joins in the WHERE clause will un-clutter your FROM clause
4 - use NOT IN for long lists
5 - logically, the last two lines can be added to the "type not in" section.
I'm not sure why you have the "or false", but I'll assume some good reason
and leave them here.
*/
The default MySQL 5.0 server limitation is 1MB, configurable up to 1GB.
This is configured via the max_allowed_packet setting on both client and server, and the effective limitation is the lesser of the two.
Caveats:
It's likely that this "packet" limitation does not map directly to characters in a SQL statement. Surely you want to take into account character encoding within the client, some packet metadata, etc.
SELECT @@global.max_allowed_packet;
This is the only real limit, and it's adjustable on the server, so there is no real straight answer.
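To raise it, assuming you have the privileges (note that the client-side max_allowed_packet may need raising as well):
-- Applies to new connections; set it in my.cnf to make it permanent:
SET GLOBAL max_allowed_packet = 64 * 1024 * 1024;  -- 64 MB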
From a practical perspective, I generally consider any SELECT that ends up taking more than 10 lines to write (putting each clause/condition on a separate line) to be too long to easily maintain. At this point, it should probably be done as a stored procedure of some sort, or I should try to find a better way to express the same concept--possibly by creating an intermediate table to capture some relationship I seem to be frequently querying.
Your mileage may vary, and there are some exceptionally long queries that have a good reason to be. But my rule of thumb is 10 lines.
Example (mildly improper SQL):
SELECT x, y, z
FROM a, b
WHERE fiz = 1
AND foo = 2
AND a.x = b.y
AND b.z IN (SELECT q, r, s, t
FROM c, d, e
WHERE c.q = d.r
AND d.s = e.t
AND c.gar IS NOT NULL)
ORDER BY b.gonk
This is probably too large; optimizing, however, would depend largely on context.
Just remember, the longer and more complex the query, the harder it's going to be to maintain.
Most databases support stored procedures to avoid this issue. If your code is fast enough to execute and easy to read, you don't want to have to change it in order to get the compile time down.
An alternative is to use prepared statements, so you take the parsing hit only once per client connection and then pass in only the parameters for each call.
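In MySQL, a server-side prepared statement looks roughly like this (a minimal sketch with made-up names):
PREPARE find_rows FROM
  'SELECT x, y, z FROM a WHERE fiz = ? AND foo = ?';
SET @fiz = 1, @foo = 2;
EXECUTE find_rows USING @fiz, @foo;  -- later calls reuse the parsed statement
DEALLOCATE PREPARE find_rows;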
I'm assuming that by 'turned off' you mean a field doesn't have a value?
Instead of checking that something is not this and also not that, etc., can't you just check whether the field is NULL? Or set the field to 'off' and check whether type (or whatever) equals 'off'.