Closed 8 years ago.
Is it possible to restore only certain records, with their original IDs, from a full backup or a parallel db?
Let's say records were deleted after a specific date; can those records be restored without restoring the entire table?
To be clear: say I still have records 500 - 720 in a backup or parallel db, but the table has had new records added since the backup, so I don't want to lose those either. I simply want to slot records 500 - 720 back into the current table with their original IDs.
If you have a copy of the db, that's going to be the easiest and quickest way - create a copy of your table with just the rows you need:
CREATE TABLE table2
AS
SELECT * FROM table1
WHERE table1.ID BETWEEN 500 AND 720
then dump table2 with mysqldump:
mysqldump -u user -p thedatabase table2 > table2_dump.sql
then ship the dump to the main db server, load it into a temporary database, and insert the missing records using:
INSERT INTO table1
SELECT *
FROM temp_db.table2
If you don't have a copy of the db with the missing records, just a backup, then I don't think you can do such a selective restore directly from the backup. If you just have a single dump file of the entire db, you will have to restore a complete copy to a temporary db and insert the missing records in a similar manner to the one described above, but with a WHERE clause in the insert.
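A minimal sketch of that fallback, assuming the full dump is in a file called full_backup.sql and the table and ID range match the example above (the database, account and file names are placeholders):
mysql -u user -p -e "CREATE DATABASE temp_db"
mysql -u user -p temp_db < full_backup.sql
and then, from the MySQL client:
INSERT INTO thedatabase.table1
SELECT *
FROM temp_db.table1
WHERE ID BETWEEN 500 AND 720;
If some IDs in that range still exist in table1, using INSERT IGNORE (or filtering them out in the WHERE clause) avoids duplicate-key errors. Once the rows are back you can DROP DATABASE temp_db to reclaim the space.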
Closed 9 days ago.
I have a WordPress site, and when updating the main theme I saw that MySQL was consuming a high percentage of CPU. I opened phpMyAdmin, and this appeared in the process list:
"Waiting for table metadata lock" and "copy to tmp table"
What should I do? My site stopped working and my server space is running out.
Only the process running "copying to tmp table" is doing any work. The others are waiting.
Many types of ALTER TABLE operations in MySQL work by making a copy of the table and filling it with an altered form of the data. In your case, ALTER TABLE wp_posts ENGINE=InnoDB converts the table to the InnoDB storage engine. If the table was already using that storage engine, it's almost a no-op, but it can serve to defragment a tablespace after you delete a lot of rows.
Because it is incrementally copying rows to a new tablespace, it takes more storage space. Once it is done, it will drop the original tablespace. So it will temporarily need to use up to double the size of that table.
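If you want to estimate how much temporary space the copy will need, a quick check of the current table size is something like this (the schema name is a placeholder for your WordPress database):
SELECT table_name,
       ROUND((data_length + index_length) / 1024 / 1024) AS size_mb
FROM information_schema.TABLES
WHERE table_schema = 'your_wordpress_db'
  AND table_name = 'wp_posts';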
There should be no reason to run that command many times. Did you do that? The one that's doing the work is in progress, but it takes some time, depending on how many rows are stored in the table and on how powerful your database server is. Be patient, and don't try to start the same ALTER TABLE again in another tab.
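If you did start it more than once, the extra copies are the ones stuck on "Waiting for table metadata lock", and you can cancel them. A minimal sketch, assuming your account is allowed to see and kill those connections:
SHOW FULL PROCESSLIST;
-- note the Id of each duplicate ALTER TABLE that is waiting, then for each one:
KILL QUERY 123;   -- 123 is a placeholder for the waiting query's Id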
Closed 6 years ago.
I'm working on a database that has one table with 21 million records. Data is loaded once when the database is created, and there are no further insert, update or delete operations. A web application accesses the database to run SELECT statements.
It currently takes 25 seconds for a request to receive a response. However, if multiple clients make simultaneous requests, the response time increases significantly. Is there a way of speeding this up?
I'm using MyISAM instead of InnoDB, with a fixed max row count, and have indexed the table on the searched field.
If no data is being updated/inserted/deleted, then this might be a case where you want to tell the database not to lock the table while you are reading it.
For MySQL this seems to be something along the lines of:
SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED ;
SELECT * FROM TABLE_NAME ;
SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ ;
(ref: http://itecsoftware.com/with-nolock-table-hint-equivalent-for-mysql)
More reading in the docs, if it helps:
https://dev.mysql.com/doc/refman/5.7/en/innodb-transaction-isolation-levels.html
The TSQL equivalent, which may help if you need to google further, is
SELECT * FROM TABLE WITH (nolock)
This may improve performance. As noted in other comments, some good indexing may help, and maybe breaking the table out further (if possible) to spread things around, so you aren't accessing all the data if you don't need it.
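As a quick sanity check that the existing index is actually being used by the slow query, something like the following (the table and column names are placeholders for your own):
EXPLAIN SELECT *
FROM big_table
WHERE searched_field = 'some value';
-- if the type column shows ALL (a full table scan), an index on the searched column(s) may help, e.g.:
ALTER TABLE big_table ADD INDEX idx_searched_field (searched_field);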
As a note: locking a table prevents other people changing data while you are using it. Not locking a table that has a lot of inserts/deletes/updates may cause your selects to return multiple rows of the same data (as it gets moved around on the hard drive), rows with missing columns, and so forth.
Since you've got one table you are selecting against, your requests are all taking turns locking and unlocking it. If you aren't doing updates, inserts or deletes, then your data shouldn't change, so you should be OK to forgo the locks.
Closed 6 years ago.
General question:
Is it safe to regularly run a mysql repair command?
This is the bash script that I've added to the crontab scheduler:
while read line; do
# skip database tables that are okay
echo "$line"|grep -q OK$ && continue
echo "WARNING: $line"
done < <(mysqlcheck -u cron -p1234 -A --auto-repair)
The idea is to run a bash script like this every hour to repair MySQL.
Does it have a negative effect on the database itself?
Thanks,
Client 1: (mimics the mysqlcheck)
lock tables category read;
select count(*) from category;
...
... do stuff
...
unlock tables;
Client 2:
(while Client 1 has the Read Lock, prior to unlock tables)
mysql> insert category(category_name,parent_id) values ('z',1);
(... Client 1 finally performs the `unlock tables`)
Query OK, 1 row affected (12 min 20.93 sec)
So that is the experience you may have.
To clarify the opinion expressed in my earlier comment on this thread: "it is not at all clear to me why you find it necessary to 'repair(!)' your database 'every(!!) hour(!!)' (or, at all ...)." If you do "find yourself in such a disagreeable situation," then you had damn well better find out why!
And, as to the question of "does it have a negative effect on the database itself?", my answer is that it could "definitely be 'Yes.'" (But(!) that is merely my experiential opinion!)
By and large, the data-structures of a database are intended to be self-maintaining over a very long period of time. “Frequent analysis and/or optimization” should no more be necessary, on a database, than “frequent de-fragmentation” should be necessary on a (modern...) file system. The algorithms are designed to selectively adjust the internal statistics-counters on a case-wise basis, in order to “self-tune” the system to consistently produce “good enough” performance without draconian intervention.
In my opinion, operations such as mysqlcheck should only be performed after pervasive changes have been made to the database contents, such as a mass-delete or mass-insert. And, the object of your quest should never be "repair."
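If you do decide to check or defragment a table after such a mass change, a hedged, targeted example (run against a single table, off-peak, rather than every table every hour):
CHECK TABLE category;
-- only if a lot of rows were just deleted, reclaim the space:
OPTIMIZE TABLE category;
Note that OPTIMIZE TABLE can itself lock the table while it runs, so schedule it deliberately.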
Closed 9 years ago.
Lately I have been tasked with deleting and reinserting approximately 15 million rows on a MyISAM table that has about 150 million rows, and doing so while the table/db still remains available for inserts/reads.
In order to do so I have started a process that takes small chunks of data and reinserts them via INSERT ... SELECT statements into a cloned table with the same structure, with a sleep between runs so as not to overload the server; it skips over the data to be deleted and inserts the replacement data.
This way, while the cloned table was being built (which took 8+ hours), new data kept coming into the source table. At the end I just had to sync the tables with the new data that was added during those 8+ hours and rename the tables.
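For illustration, a minimal sketch of the chunked copy described above (the table names, id column, chunk size and credentials are hypothetical placeholders, not the actual schema):
#!/bin/bash
# copy rows in chunks of 100000 ids, pausing between chunks
for start in $(seq 1 100000 150000000); do
  end=$((start + 99999))
  mysql -u user -p"$PASS" thedb -e "
    INSERT INTO table_clone
    SELECT s.* FROM source_table s
    LEFT JOIN rows_to_delete d ON d.id = s.id
    WHERE s.id BETWEEN $start AND $end
      AND d.id IS NULL;"
  sleep 2   # throttle so the server is not overloaded
done
# ... load the replacement rows into table_clone, then swap atomically:
mysql -u user -p"$PASS" thedb -e "RENAME TABLE source_table TO source_table_old, table_clone TO source_table;"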
Everything was fine with the exception of one thing: the cardinality of the indexes on the cloned table is way off, and execution plans for queries executed against it are awful (some went from a few seconds to 30+ minutes).
I know this can be fixed by running ANALYZE TABLE on it, but that also takes a lot of time (I'm currently running one on a slave server and it has been executing for more than 10 hours now), and I can't afford to have this table offline for writes while the analyze is performed. It would also stress the server's I/O, putting pressure on the server and slowing it down.
Can someone explain why building a MyISAM table via INSERT ... SELECT statements results in a table with such poor internal index statistics?
Also, is there a way to incrementally build the table and have the indexes in good shape at the end?
Thanks in advance.
Closed 11 years ago.
I have a MySQL database that is continually growing.
Every once in a while I OPTIMIZE all the tables. Would this be the sort of thing I should put on a cron job, daily or weekly?
Are there any specific tasks I could set to run automatically that keeps the database in top performance?
Thanks
Ben
You can get suggestions for optimizing the tables in your database by executing this query:
SELECT * FROM `db_name`.`table_name` PROCEDURE ANALYSE(1, 10);
This will suggest an Optimal_fieldtype for each column; you then have to ALTER your table so that the optimal field type is used.
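As a hypothetical example (the column name and suggested type are placeholders, not output from your database): if ANALYSE reports that an INT column only ever holds small values, you might shrink it with:
ALTER TABLE `db_name`.`table_name`
  MODIFY COLUMN status TINYINT UNSIGNED NOT NULL;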
Also, you can profile your queries in order to make sure that proper indexing has been done on the table.
I suggest you try SQLyog, which has both a "Calculate Optimal Datatype" feature and an "SQL Profiler", which will definitely help you in optimizing server performance.
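For the profiling side, a plain-MySQL sketch that needs no extra tooling (the query is just an example):
SET profiling = 1;
SELECT * FROM `db_name`.`table_name` WHERE some_column = 'some value';
SHOW PROFILES;              -- lists recent queries with their total durations
SHOW PROFILE FOR QUERY 1;   -- stage-by-stage breakdown of the first profiled query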