MySQL query consumes too much memory - mysql

I have a giant MySQL table, and the HostGator shared server I use has 4 GB of RAM. I am trying to execute the following simple query using phpMyAdmin:
DELETE FROM Table1_main where date = '2009-12-31'
However, this query just times out because there is insufficient RAM. How can I execute this query without buying a higher-performance server?

Is there an index on that column? That's usually the way to speed up a simple query like this.
Here's how to create an index on that column:
CREATE INDEX idx_date ON Table1_main (date); -- MySQL requires a name for the index
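You can then check whether the index is actually used with EXPLAIN. A minimal sketch (EXPLAIN on a DELETE requires MySQL 5.6 or later; on older servers, EXPLAIN the equivalent SELECT instead):
-- MySQL 5.6+: explain the DELETE directly
EXPLAIN DELETE FROM Table1_main WHERE date = '2009-12-31';
-- Older servers: explain the equivalent SELECT
EXPLAIN SELECT COUNT(*) FROM Table1_main WHERE date = '2009-12-31';
-- Either way, the "key" column should show idx_date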

NOTE: this is a hack, if you just want to get it done.
I bet there is a better way to do this.
DELETE FROM Table1_main WHERE date = '2009-12-31' LIMIT 1000000
Try increasing the number until it breaks.

If you have MySQL version 5.1 or later, you can use partitions.
The other solution would be to use a loop that deletes N records at a time until everything is deleted.
Please take a look at insights on both approaches at:
http://mysql.rjweb.org/doc.php/deletebig
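For illustration, a minimal sketch of the partition approach (the partition scheme and names here are assumptions, and note that MySQL requires the partition column to be part of every unique key on the table):
-- Repartitioning rebuilds the table, which is itself expensive; do it once, off-peak
ALTER TABLE Table1_main
PARTITION BY RANGE (TO_DAYS(date)) (
    PARTITION p2009 VALUES LESS THAN (TO_DAYS('2010-01-01')),
    PARTITION p2010 VALUES LESS THAN (TO_DAYS('2011-01-01')),
    PARTITION pmax VALUES LESS THAN MAXVALUE
);
-- Dropping a partition discards all of its rows almost instantly,
-- without logging individual row deletes
ALTER TABLE Table1_main DROP PARTITION p2009;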
Hope that helps,

Related

MySQL indexing has no speed effect through PHP but does in phpMyAdmin

I am trying to speed up a simple SELECT query on a table that has around 2 million entries, in a MariaDB MySQL database. It took over 1.5s until I created an index for the columns that I need, and running it through phpMyAdmin showed a significant boost in speed (it now takes around 0.09s).
The problem is, when I run it through my PHP server (mysqli), the execution time does not change at all. I'm logging my execution time by running microtime() before and after the query, and it takes ~1.5s to run it regardless of having the index or not (I tried removing/re-adding it to see the difference).
Query example:
SELECT `pair`, `price`, `time` FROM `live_prices`
FORCE INDEX (pairPriceTime)
WHERE `time` = '2022-08-07 03:01:59';
Index created:
ALTER TABLE `live_prices` ADD INDEX pairPriceTime (pair, price, time);
Any thoughts on this? Does PHP PDO ignore indexes? Do I need to restart the server in order for it to "acknowledge" that there is a new index? (Which is a problem since I'm using a shared hosting service...)
If that is really the query, then it needs an INDEX starting with the column tested in the WHERE:
INDEX(time)
Or, to make a "covering index":
INDEX(time, pair, price)
However, I suspect that most of your accesses involve pair? If so, then other queries may need
INDEX(pair, time)
especially if you ask for a range of times.
To discuss various options further, please provide EXPLAIN SELECT ...
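Concretely, the suggested indexes and the EXPLAIN look something like this (index names here are illustrative):
CREATE INDEX idx_time ON live_prices (`time`);
-- Covering index: the whole query can be answered from the index alone
CREATE INDEX idx_time_pair_price ON live_prices (`time`, `pair`, `price`);
-- Then check which index the optimizer actually picks
EXPLAIN SELECT `pair`, `price`, `time`
FROM `live_prices`
WHERE `time` = '2022-08-07 03:01:59';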
PDO, mysqli, phpmyadmin -- These all work the same way. (A possible exception deals with an implicit LIMIT on phpmyadmin.)
Try hard to avoid the use of FORCE INDEX -- what helps on today's query and dataset may hurt on tomorrow's.
When you see puzzling anomalies in timings, run the query twice. Caching may be the explanation.
The MySQL documentation says:
The FORCE INDEX hint acts like USE INDEX (index_list), with the addition that a table scan is assumed to be very expensive. In other words, a table scan is used only if there is no way to use one of the named indexes to find rows in the table.
The MariaDB documentation on FORCE INDEX says this:
FORCE INDEX works by only considering the given indexes (like with USE_INDEX) but in addition, it tells the optimizer to regard a table scan as something very expensive. However, if none of the 'forced' indexes can be used, then a table scan will be used anyway.
Use of the index is not mandatory. Since you have only specified one condition (the time), the optimizer can choose to use some other index for the fetch. I would suggest that you add another condition to the WHERE clause, or add an ORDER BY:
ORDER BY pair, price, time
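That is, something along these lines (a sketch; whether the optimizer then picks pairPriceTime still depends on the data):
SELECT `pair`, `price`, `time`
FROM `live_prices`
WHERE `time` = '2022-08-07 03:01:59'
ORDER BY pair, price, time;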
I ended up creating another index (just for the time column) and it did the trick, running at ~0.002s now. Setting the LIMIT clause had no effect since I was always getting 423 rows (for 423 coin pairs).
Bottom line, I probably needed a more specific index, although the weird part is that the first index worked great on PMA but not through PHP, but the second one now applies to both approaches.
Thank you all for the kind replies :)

Quickest way to increment counter in MySQL DB

I'm running a forum on a VPS, running Percona DB, with PHP 5.5.8, Opcode caching, etc, it's all very speed orientated.
I'm also running New Relic, (yes I have the t-shirt).
As I'm tuning the application, I'm optimising the queries the forum makes to the DB, starting with whatever is at the top of my time-consumed list.
Right now, the most time-consuming query I have (as it's the most frequently used) is a simple hit counter on each topic.
So the query is:
UPDATE topics SET num_views = num_views + 1 WHERE id_topic = ?
I can't think of a simpler way to perform this, nor do I know whether any of the various other ways might be quicker, and why.
Is there a way of writing this query to be even faster, or an index I can add to a field to aide speed?
Thanks.
Assuming id_topic is indexed, you're not going to do better. The only recommendation I would make is to look at the other indexes on this table and make sure you don't have redundant ones that include num_views in them; those would slow down this update.
For example, if you had the following indexes:
(some_column, num_views)
(some_column, num_views, another_column)
Index #1 would be extraneous, since it is a prefix of index #2, and would just add to the insert/update overhead.
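You can list a table's existing indexes, and spot redundant prefixes like that, with:
SHOW INDEX FROM topics;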
Not sure if that is an improvement, but you could check the following:
How about only adding a row for each page hit to a separate table, instead of locking and updating the counter row?
And then using a COUNT to get the results, and caching them instead of doing the count each time?
(And maybe compacting the table once per day?)
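A minimal sketch of that idea (table and column names here are assumptions):
-- One row per view; plain inserts don't contend on a single counter row
CREATE TABLE topic_hits (
    id_topic INT NOT NULL,
    hit_time TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
    KEY idx_topic (id_topic)
);

INSERT INTO topic_hits (id_topic) VALUES (?);

-- Periodically roll the hits up into the counter, then compact the log
-- (a production version would only delete the rows it actually counted)
UPDATE topics t
JOIN (SELECT id_topic, COUNT(*) AS n FROM topic_hits GROUP BY id_topic) h
    ON h.id_topic = t.id_topic
SET t.num_views = t.num_views + h.n;
TRUNCATE TABLE topic_hits;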

mysql: slow query on indexed field

The orders table has 2M records. There are ~900K unique ship-to-ids.
There is an index on ship_to_id (the field is int(8)).
The query below takes nearly 10 minutes to complete. I've run SHOW PROCESSLIST, which shows Command = Query and State = Sending data.
When I run explain, the existing index is used, and possible_keys is NULL.
Is there anything I should do to speed this query up? Thanks.
SELECT
ship_to_id as customer_id
FROM orders
GROUP BY ship_to_id
HAVING SUM( price_after_discount ) > 0
Does not look like you have a useful index. Try adding an index on price_after_discount, and add a where condition like this:
WHERE price_after_discount > 0
to minimize the number of rows you need to sum as you can obviously discard any that are 0.
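Combined, the query would look something like this (a sketch; note the WHERE assumes price_after_discount is never negative, otherwise it changes the result):
SELECT ship_to_id AS customer_id
FROM orders
WHERE price_after_discount > 0
GROUP BY ship_to_id
HAVING SUM(price_after_discount) > 0;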
Also try running the "top" command and looking at the I/O "wait" column while the query is running. If it's high, it means your query causes a lot of disk I/O. You can increase various memory buffers to speed this up if you have the RAM (that applies if you're using InnoDB; MyISAM caching is done through the filesystem cache). Restarting the server will flush these caches.
If you do not have enough RAM (and you shouldn't need too much for 2M records), then consider a partitioning scheme, maybe on the ship_to_id column (if your version of MySQL supports it).
If all the orders in that table aren't current (i.e. not going to change again) then you could archive them off into another table to reduce how much data has to be scanned.
Another option is to throw a last_modified timestamp on the table with an index. You could then keep track of when the query is run and store the results in another table (query_results). When it's time to run the query again, you would only need to select the orders that were modified since the last time the query was run, then use that to update the query_results. The logic is a little more complicated, but it should be much faster assuming a low percentage of the orders are updated between query executions.
MySQL will use an index for a GROUP BY, at least according to the documentation.
To be most useful, all the columns used in the query should be in the index. This prevents the engine from having to reference the original data as well as the index. So, try an index on orders(ship_to_id, price_after_discount).
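For instance (the index name is illustrative):
-- Covers the GROUP BY and the SUM, so the table itself need not be read
CREATE INDEX idx_shipto_price ON orders (ship_to_id, price_after_discount);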

Remove over 100,000 rows from mysql table - server crashes

When I try to remove over 100,000 rows from a MySQL table, the server freezes and none of its websites can be accessed anymore!
I waited 2 hours and then restarted the server and restored the account.
I used following query:
DELETE FROM `pligg_links` WHERE `link_id` > 10000
By contrast,
SELECT * FROM `pligg_links` WHERE `link_id` > 10000
works perfectly.
Is there a better way to do this?
You could delete the rows in smaller sets. A quick script that deletes 1000 rows at a time should see you through.
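A minimal sketch of such a script as a MySQL stored procedure (the name is illustrative; the same loop can just as well live in PHP or a shell script):
DELIMITER //
CREATE PROCEDURE delete_in_chunks()
BEGIN
    REPEAT
        -- Each small DELETE finishes quickly and releases its locks
        DELETE FROM pligg_links WHERE link_id > 10000 LIMIT 1000;
    UNTIL ROW_COUNT() = 0 END REPEAT;
END //
DELIMITER ;

CALL delete_in_chunks();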
"Delete from" can be very expensive for large data sets.
I recommend using partitioning.
This may be done slightly differently in PostgreSQL and MySQL, but in PostgreSQL you can create many tables that are "partitions" of the larger table. Queries and whatnot can be run on the larger table or on a single partition. This can greatly increase the speed with which you can query, given that you partition correctly. Also, you can delete a partition by simply dropping it. This happens very quickly because it is roughly equivalent to dropping a table.
Documentation for table partitioning can be found here:
http://www.postgresql.org/docs/8.3/static/ddl-partitioning.html
Make sure you have an index on the link_id column.
And try to delete in chunks of, say, 10,000 at a time.
Deleting from a table is a very costly operation.

Faster MySQL Queries?

At my work I have several tables with over 200,000 rows of data. I have to set up some queries that look over 15,000+ rows at a time, so sometimes I get this error:
PHP Fatal error: Maximum execution time of 180 seconds exceeded
So, how do I make these queries faster?
The query is like this:
SELECT toemail, toname
FROM email_sent
WHERE companyid = '$member[companyid]'
Thanks.
Create an index on email_sent (companyid):
CREATE INDEX ix_emailsent_companyid ON email_sent (companyid)
Optimization might be the answer. If it's not enough, you can always just increase PHP's time limit.
This will set it for just that script:
set_time_limit docs
Set the number of seconds a script is allowed to run. If this is reached, the script returns a fatal error. The default limit is 30 seconds or, if it exists, the max_execution_time value defined in the php.ini.
Or, edit php.ini and change the max_execution_time setting. This will change it globally, of course. It sounds like it has already been adjusted (by your sysadmin?) as the default is 30 seconds.
Add an index, if you haven't already.
Another way is to switch from MyISAM to InnoDB.
The first thing you might want to look into is indexing any columns which participate in the query. For example, if your query is always testing the value of a column FirstName, you might want to index that.
If you provide a DDL (Data Definition Language) script or a description of the tables, as well as the queries that are taking so long, we might be able to provide better tips for indexing.
If you've already tuned as much as you can and you still get timeouts, you might want to see if you can increase the transaction timeout limit. I don't know enough about your server setup to give details, but that sort of thing is usually possible.
UPDATE
If your query is:
SELECT toemail,toname FROM email_sent WHERE companyid = '$member[companyid]'
My first question is: do you have an index on companyid and if not, does creating one improve performance?
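If not, a covering index that also includes the selected columns would let MySQL answer the query from the index alone (a sketch; the index name is illustrative):
CREATE INDEX ix_emailsent_company_cover ON email_sent (companyid, toemail, toname);
-- EXPLAIN should then show "Using index" in the Extra column
-- ('some-id' below is a placeholder value)
EXPLAIN SELECT toemail, toname FROM email_sent WHERE companyid = 'some-id';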