I have a table with 2,255,440 records.
A cron job runs every minute and inserts up to 50-100 records on each execution.
The inserts are working fine.
The problem is another cron job, which also runs every minute and updates these records according to data received from another server.
Each update query is taking around 6-7 seconds.
Here are the table information and an example of the update query.
Records are updated with this query:
Query:
UPDATE `$month`
SET `acctstoptime`='$data->acctstoptime',
`acctsessiontime`='$data->acctsessiontime',
`acctinputoctets`='$data->acctinputoctets',
`acctoutputoctets`='$data->acctoutputoctets',
`acctterminatecause`='$data->acctterminatecause'
WHERE `radacctid`=$data->radacctid
Is there a single-column index on the `radacctid` column?
If not, you should create one.
CREATE INDEX:
Indexes are used to retrieve data from the database more quickly than would otherwise be possible. Users cannot see the indexes; they are just used to speed up searches and queries.
Syntax:
CREATE INDEX [index name] ON [table name]([column name]);
Arguments:
index name  - Name of the index.
table name  - Name of the table.
column name - Name of the column.
Example
Code:
CREATE INDEX radacctid ON table_name(radacctid);
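Applied to your case, a quick check-and-create sketch is below; the monthly table name is a placeholder for whatever `$month` expands to, and the index name is just a suggestion. (If `radacctid` is already the table's primary key, the lookup is already indexed and the slowness lies elsewhere.)
-- Check whether radacctid is already indexed (placeholder table name)
SHOW INDEX FROM `your_month_table`;
-- If not, add a single-column index on radacctid
CREATE INDEX idx_radacctid ON `your_month_table` (`radacctid`);
-- On MySQL 5.6+ you can EXPLAIN the UPDATE itself to confirm the index
-- is picked up (the "key" column should show idx_radacctid or PRIMARY);
-- the id value here is a placeholder
EXPLAIN UPDATE `your_month_table`
SET `acctstoptime` = NOW()
WHERE `radacctid` = 12345;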
Related
I'm working on automating the process of building a database. After the initial build, this database needs daily updates.
This database has 51 tables, divided into 3 schemas (17 tables in each schema), and holds a total of 20 million records, each record with a PK of manage_number.
I need to update 2,000-3,000 records every day, but I don't know which method to use.
Option 1: Make a table for PK indexing
This method creates a separate lookup table in the same database that pairs each PK (manage_number) with the name of the table it is stored in, i.e. a table of metadata about which table each manage_number lives in. This is the methodology currently applied. The problem is that the build time takes 5-6 times longer than before (it increased from 2 minutes to 12 minutes).
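A minimal sketch of what such a lookup table and the daily update flow could look like; the table, schema, and column names below are assumptions for illustration, not the actual schema.
-- Hypothetical lookup table mapping each PK to the table that holds it
CREATE TABLE pk_location (
    manage_number VARCHAR(32) NOT NULL PRIMARY KEY,
    table_name    VARCHAR(64) NOT NULL
);
-- Daily update: first find the owning table for a given PK ...
SELECT table_name FROM pk_location WHERE manage_number = 'TARGET_NUMBER';
-- ... then run a single-table UPDATE against only that table
-- (the statement is built dynamically in application code)
UPDATE schema1.some_table
SET some_column = 'new value'
WHERE manage_number = 'TARGET_NUMBER';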
Option 2: Multi-table update query
This approach performs the update query against all 17 tables with the same schema at once. However, in this case the load in the FROM clause is expected to be very high, and I think it will be a burden on the server.
The update query may look like the one below.
UPDATE table1, table2, table3, ..., table17
SET data_here
WHERE manage_number = 'TARGET_NUMBER';
Please share which way is better, or if you have a better way.
Thank you.
The MySQL database has only one table, products. Each record contains one integer auto-increment primary key, one integer modified_time field, and 10 varchar fields.
Twice a day, cron launches a process that receives 700,000 XML records with new/updated/old products from another server. Usually about 100,000 products a day are updated or new (the other 600,000 don't change).
The question is which way will be faster:
1) Do something like DROP TABLE and then recreate the same table, or DELETE FROM products. Then INSERT everything we receive (700k records).
2) Loop over every XML record, compare the modified_time field, and UPDATE if the XML modified_time is newer than the modified_time in the database.
As I understand it, the second way will lead to 700k SELECT queries, 100k UPDATE queries, and some DELETE queries.
So which way will be faster? Thanks in advance.
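To make the second approach concrete, one iteration per XML record would look roughly like the sketch below; the id value and the varchar column name are placeholders, not the actual schema.
-- Run once per XML record (~700k times per import)
SELECT modified_time FROM products WHERE id = 12345;
-- Run only when the XML record is newer than the stored row (~100k times)
UPDATE products
SET some_varchar_field = 'new value', modified_time = 1717171717
WHERE id = 12345;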
I've been running a website that has built up a large amount of data in the process.
Users save data like ip, id, and date to the server, and it is stored in a MySQL database. Each entry is stored as a single row in a table.
Right now there are approximately 24 million rows in the table.
Problem 1:
Things are getting slow now, as a full table scan can take many minutes, even though I have already indexed the table.
Problem 2:
If a user is pulling SELECT data from the table, it can potentially block all other users' access to the site (as the table is locked) until the query is complete.
Our server:
32 GB RAM
12-core CPU with 24 threads
The table uses the MyISAM engine.
EXPLAIN SELECT SUM(impresn), SUM(rae), SUM(reve), `date`
FROM `publisher_ads_hits`
WHERE date BETWEEN '2015-05-01' AND '2016-04-02' AND userid='168'
GROUP BY date
ORDER BY date DESC
To add to the comment from @Max P.: if you write to MyISAM tables, ALL SELECTs are blocked, because there is only a table lock. If you use InnoDB, there is a row lock that only locks the rows it needs. Also, show us the EXPLAIN of your queries, as it is possible that you must create some new indexes. MySQL can generally use only one index per table in a query, so if you use more fields in the WHERE condition it can be useful to have a COMPOSITE INDEX over those fields.
According to the EXPLAIN output, the query doesn't use an index. Try adding a composite index on (userid, date).
If you have many update and delete operations, try changing the engine to InnoDB.
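A sketch of both suggestions, using the table and column names from the query above (the index name is arbitrary, and the engine conversion rewrites the whole table, so run it during a maintenance window):
-- Composite index covering the WHERE (userid) and GROUP BY/ORDER BY (date)
ALTER TABLE `publisher_ads_hits`
    ADD INDEX idx_userid_date (userid, `date`);
-- Optional: switch to InnoDB for row-level locking instead of table locks
ALTER TABLE `publisher_ads_hits` ENGINE = InnoDB;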
The basic problem is the full table scan. Some suggestions are:
Partition the table based on date, and don't keep more than 6-12 months of data in the live system (a sketch follows below).
Add an index on userid.
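A minimal sketch of range partitioning by date for the table from the question, assuming the date column is a DATE or DATETIME type (TO_DAYS requires that); the partition boundaries are purely illustrative, and on MySQL the partitioning column must be part of every unique key on the table.
ALTER TABLE `publisher_ads_hits`
PARTITION BY RANGE (TO_DAYS(`date`)) (
    PARTITION p2015h1 VALUES LESS THAN (TO_DAYS('2015-07-01')),
    PARTITION p2015h2 VALUES LESS THAN (TO_DAYS('2016-01-01')),
    PARTITION p2016h1 VALUES LESS THAN (TO_DAYS('2016-07-01')),
    PARTITION pmax    VALUES LESS THAN MAXVALUE
);
-- Old data can then be dropped cheaply, one partition at a time
ALTER TABLE `publisher_ads_hits` DROP PARTITION p2015h1;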
I have about 100K records for which I have to run the following query:
delete from users where name in #{String}
where the string could contain 100K names of this form: Joe, Kate, etc.
For performance, is it better to run the above statement, or to delete one record at a time in a loop with a single session.commit(); at the end?
EDIT:
There can be only one record for each value.
If you can create batches of queries to run, then breaking it up into batches would most likely be the fastest approach:
delete from users where name in ('name1','name2','name3',.....'nameX');
delete from users where name in ('nameX+1','nameX+2','nameX+3',.....'nameX+X');
etc..
If you already have the names in a table, you can just do this:
delete from users where name in (select name from table_with_names_to_be_deleted)
It would be better not to use such statements!
The query optimizer will have a field day parsing such a query. I suggest joining against a temporary table instead, or using some other WHERE clause that the records to be deleted have in common.
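A minimal sketch of the temporary-table approach, assuming MySQL and a users table with a name column as in the question; the temporary table and the way it is loaded are illustrative only.
-- Load the ~100K names to delete into a temporary table
CREATE TEMPORARY TABLE names_to_delete (
    name VARCHAR(255) NOT NULL,
    PRIMARY KEY (name)
);
-- In practice, bulk-load the names (multi-row INSERTs or LOAD DATA)
INSERT INTO names_to_delete (name) VALUES ('Joe'), ('Kate');
-- Multi-table DELETE joining against the temporary table
DELETE users
FROM users
JOIN names_to_delete ON names_to_delete.name = users.name;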
I have a MyISAM table in MySQL which consists of two fields (f1 integer unsigned, f2 integer unsigned) and contains 320 million rows. I have an index on f2. Every week I insert about 150,000 rows into this table. I would like to know how frequently I need to run ANALYZE and OPTIMIZE on this table (as they would probably take a long time and block the table in the meantime). I do not do any delete or update statements, I just insert new rows every week. Also, I am not using this table in any joins, so, based on this information, are ANALYZE and OPTIMIZE really required?
Thanks in advance,
Tim
ANALYZE TABLE checks and stores the key distribution; OPTIMIZE TABLE essentially reorganizes the table and reclaims unused space.
If you never... ever... delete or update the data in your table and only insert new rows, you won't need ANALYZE or OPTIMIZE.
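For completeness, if you ever do want to run them (for example, after a large one-off cleanup), the syntax is simply the following, with the table name as a placeholder:
ANALYZE TABLE your_table;
OPTIMIZE TABLE your_table;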