I have a large table with what are essentially log entries. For most of my queries, I need a table with the most recent entries, so I created a 'view' from the following query:
SELECT t1.store_id, t1.code_id, t1.working, t1.expiration, t1.details, t1.price
FROM code_stores t1
LEFT OUTER JOIN code_stores t2
ON (t1.store_id = t2.store_id AND t1.code_id = t2.code_id AND t1.id < t2.id)
WHERE t2.store_id IS NULL
Then I use this 'view' in my query. Unfortunately this is leading to slow queries, so I'd like to cache the results of this view somehow. Since this view will only change a few times a day (when I can run a query to update the cache table), I would like to create a temporary table that stores the results of this view, and update this throughout the day.
How do I go about doing this? I read about materialized views, but it appears that they aren't supported by MySQL. Any help would be greatly appreciated.
With the idea of using a temporary table, I suggest using a trigger so that the table is updated each time the code_stores table changes.
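A minimal sketch of that idea, assuming a regular (non-temporary) cache table named latest_codes: a trigger cannot reliably maintain a TEMPORARY table, since those are private to a single connection. All names and column types other than code_stores are illustrative:

-- Cache table mirroring the view's columns; the types are assumptions.
CREATE TABLE latest_codes (
    store_id   INT NOT NULL,
    code_id    INT NOT NULL,
    working    TINYINT,
    expiration DATETIME,
    details    TEXT,
    price      DECIMAL(10,2),
    PRIMARY KEY (store_id, code_id)
);

DELIMITER //
CREATE TRIGGER code_stores_ai
AFTER INSERT ON code_stores
FOR EACH ROW
BEGIN
    -- Newer log entries have higher ids, so the latest insert simply wins.
    REPLACE INTO latest_codes
        (store_id, code_id, working, expiration, details, price)
    VALUES
        (NEW.store_id, NEW.code_id, NEW.working,
         NEW.expiration, NEW.details, NEW.price);
END//
DELIMITER ;

Your queries then read from latest_codes instead of the self-join view.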
I have a MySQL database with just 1 table:
Fields are: blocknr (not unique), btcaddress (not unique), txid (not unique), vin, vinvoutnr, netvalue.
Indexes exist on both btcaddress and txid.
I need to delete all "deletable" record pairs. The conditions are:
txid must be the same (there can be more than 2 records with same txid)
vinvoutnr must be the same
vin must be different (vin can only take the values 0 and 1, so one record must have 0 and the other 1)
In a table of 36M records, about 33M records will be deleted.
I've used this:
delete t1
from registration t1
inner join registration t2
where t1.txid=t2.txid and t1.vinvoutnr=t2.vinvoutnr and t1.vin<>t2.vin;
It works but takes 5 hours.
Maybe this would work too (not tested yet):
delete t1
from registration as t1, registration as t2
where t1.txid=t2.txid and t1.vinvoutnr=t2.vinvoutnr and t1.vin<>t2.vin;
Or should I forget about a delete query and instead build a new table containing all the non-deletable rows, then drop the original?
Database can be offline for this delete query.
Based on your question, you are deleting most of the rows in the table. That is just really expensive. A better approach is to empty the table and re-populate it:
create table temp_registration as
<query for the rows to keep here>;
truncate table registration;
insert into registration
select *
from temp_registration;
Your logic is a bit hard to follow, but I think the rows to keep are:
select r.*
from registration r
where not exists (select 1
from registration r2
where r2.txid = r.txid and
r2.vinvoutnr = r.vinvoutnr and
r2.vin <> r.vin
);
For best performance, you want an index on registration(txid, vinvoutnr, vin).
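Putting the pieces together, an end-to-end sketch might look like this (untested; it assumes the NOT EXISTS query above really captures your keep condition):

alter table registration add index idx_txid_vout_vin (txid, vinvoutnr, vin);

create table temp_registration as
select r.*
from registration r
where not exists (select 1
                  from registration r2
                  where r2.txid = r.txid and
                        r2.vinvoutnr = r.vinvoutnr and
                        r2.vin <> r.vin
                 );

truncate table registration;

insert into registration
select *
from temp_registration;

drop table temp_registration;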
Given that you expect to remove the majority of your data it does sound like the simplest approach would be to create a new table with the correct data and then drop the original table as you suggest. Otherwise ADyson's corrections to the JOIN query might help to alleviate the performance issue.
I have two tables T1 and T2 and want to update one field of T1 from T2 where T2 holds massive data.
What is more efficient?
Updating T1 in a for loop, iterating over the values,
or
Left join it with T2 and update.
Please note that I'm updating these tables from a shell script.
In general, a JOIN will perform much better than a loop. Size should not be an issue as long as the join columns are properly indexed.
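For example, a single multi-table UPDATE does the whole job in one statement; a sketch with illustrative column names, assuming T2.t1_id references T1.id:

-- An index on T2.t1_id keeps the join cheap.
UPDATE T1
JOIN T2 ON T2.t1_id = T1.id
SET T1.some_field = T2.some_field;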
There is no simple answer as to which will be more efficient; it depends on the table sizes and how much data you update in one go.
Suppose you are using the InnoDB engine and frequently updating 1,000 or more rows in one go through a join of two heavy tables. That is not a good idea on a production server, because the statement will hold locks for some time, and that locking can stall other operations.
Option 1: If you are updating only a few rows, matched on properly indexed fields (ideally the primary key), go with the join.
Option 2: If you are updating a large amount of data based on a multi-table join, the following approach is better:
Step 1: Create a stored procedure.
Step 2: Open a cursor over the results of the query below. Suppose you want to update field1 of table1 with the corresponding field2 data from table2:
SELECT a.primary_key, b.field2 FROM table1 a JOIN table2 b ON a.primary_key = b.foreign_key WHERE [place condition here if any...];
Step 3: Update the rows one by one by primary key, using the values fetched from the cursor.
Step 4: Call this stored procedure from your script.
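A minimal sketch of Steps 1 to 4 (all table and column names are illustrative, and the VARCHAR type is an assumption):

DELIMITER //
CREATE PROCEDURE update_table1_from_table2()
BEGIN
    DECLARE done INT DEFAULT 0;
    DECLARE v_pk INT;
    DECLARE v_field2 VARCHAR(255);
    DECLARE cur CURSOR FOR
        SELECT a.primary_key, b.field2
        FROM table1 a
        JOIN table2 b ON a.primary_key = b.foreign_key;
    DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;

    OPEN cur;
    read_loop: LOOP
        FETCH cur INTO v_pk, v_field2;
        IF done THEN
            LEAVE read_loop;
        END IF;
        -- One small update per row keeps each lock short-lived.
        UPDATE table1 SET field1 = v_field2 WHERE primary_key = v_pk;
    END LOOP;
    CLOSE cur;
END//
DELIMITER ;

CALL update_table1_from_table2();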
I have a MySQL query which uses 3 tables with 2 inner joins. Then, I have to find the maximum of a group from this query output. Combining them both is beyond me. Can I break down the problem by storing the output of the first complicated query into some sort of temporary table, give it a name and then use this table in a new query? This will make the code more manageable. Thank you for your help.
This is very straightforward:
CREATE TEMPORARY TABLE tempname AS (
SELECT whatever, whatever
FROM rawtable
JOIN othertable ON this = that
)
The temporary table will vanish when your connection closes. A temp table contains the data that was captured at the time it was created.
You can also create a view, like so:
CREATE VIEW viewname AS (
SELECT whatever, whatever
FROM rawtable
JOIN othertable ON this = that
)
Views are permanent objects (they don't vanish when your connection closes) but they retrieve data from the underlying tables at the time you invoke them.
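Either way, you can then run your max-of-group step against the saved result as if it were an ordinary table; the column names here are illustrative:

SELECT grouping_col, MAX(value_col) AS max_value
FROM tempname
GROUP BY grouping_col;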
I have a Drupal 6 application that requires more joins than MySQL's 61-table join limit allows. I understand that this is an excessive number, but the query is run only once a day, and the results are cached for further reference.
Are there any MySQL configuration parameters that could help, or any other approaches short of changing the logic behind collecting the data?
My approach would be to split the humongous query into smaller, simpler queries, and use temporary tables to store the intermediate steps. I use this approach frequently and it helps me a lot (sometimes it is even faster to create some temp tables than to join all the tables in one big query).
Something like this:
drop table if exists temp_step01;
create temporary table temp_step01
select t1.*, t2.someField
from table1 as t1 inner join table2 as t2 on t1.id = t2.table1_id;
-- Add the appropriate indexes to optimize the subsequent queries
alter table temp_step01
add index idx_1 (field1);
-- Create all the temp tables that you need, and finally show the results
select sXX.*
from temp_stepXX as sXX;
Remember: temporary tables are visible only to the connection that creates them. If you need to make the result visible to other connections, you'll need to create a "real" table (of course, that is usually only worthwhile for the last step of your process).
I have this query that works fine. It deletes records that are old, based on the current time.
$cleanacc_1 = "DELETE FROM $acc_1
WHERE `Scheduled` < DATE_SUB(UTC_TIMESTAMP(), INTERVAL 30 SECOND)";
$result = mysql_query($cleanacc_1);
However, there are over 100 tables (accounts) that need deleting and I was wondering if I can combine them into one query. If possible how?
This implies you create a new table for every account. Why are you not creating a record for each account within a single table?
For example...
create table account (id int unsigned primary key auto_increment, other fields...);
If you alter your table structure you will be able to delete individual account records with a single query...
delete from account where condition=true;
Individual transaction records for each account are then stored in another table and contain the account id they relate to...
create table transaction (id, account_id, other transaction fields);
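With that single-table design, the question's 30-second cleanup collapses into one statement (assuming the Scheduled column moves onto the transaction table):

DELETE FROM transaction
WHERE `Scheduled` < DATE_SUB(UTC_TIMESTAMP(), INTERVAL 30 SECOND);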
If you don't change the database design you'll need to write PHP code that loops through each table and runs your delete query. This is very inefficient and I urge you to redesign the table as suggested.
If you don't understand why my table redesign suggestion is a better approach, post more information about your database and I'll explain in more detail with a working example.
No way to do that, AFAIK; anyway, I don't think it would be a big problem to run 100 queries, assuming you are not running them for each request or so.
Are you expecting performance issues? If that's the case, I'd probably use a cron job to run the cleanup every X minutes.
You could set up a view of the tables and then run the DELETE SQL against the view. That should delete the underlying table data as well. Your table schema and permissions could affect whether this will work or not. Check out this answer, it might help as well:
Does deleting row from view delete row from base table - MYsql?
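Note that in MySQL this only works for a simple single-table view (one that uses the MERGE algorithm); a view built as a UNION over your 100 tables would not be deletable. A sketch for one table, treating the question's $acc_1 placeholder as a literal table name:

CREATE VIEW recent_acc_1 AS
SELECT * FROM acc_1;

-- Deleting through the view removes the rows from acc_1 itself.
DELETE FROM recent_acc_1
WHERE `Scheduled` < DATE_SUB(UTC_TIMESTAMP(), INTERVAL 30 SECOND);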
Please consider the following example.
I have three tables in following structure.
Table names: t1, t2, t3
Fields: Id, name
I'm going to perform a delete query with one condition: the record id must be less than 10.
DELETE FROM t1, t2, t3 USING t1 INNER JOIN t2 INNER JOIN t3 WHERE t1.id < 10 AND t2.id < 10 AND t3.id < 10;
The query executed successfully in MySQL and I got the expected output.
So please try the same way with your condition.