I've recently come across a query that has been taking an abnormally long time for the past few days. We migrated our MySQL database to a new server last month, and the problem never happened on the previous server. The MySQL version on the old server was 5.1.34 and on the current one it's 5.1.58 (not sure whether that has anything to do with this issue).
The query is as below:
SELECT table_name,
partition_name,
subpartition_name,
partition_method,
subpartition_method,
partition_expression,
subpartition_expression,
partition_description,
partition_comment,
nodegroup,
tablespace_name
FROM information_schema.partitions
WHERE table_schema LIKE 'wialogdb'
AND NOT ISNULL(partition_name)
AND table_name LIKE 'freemail'
ORDER BY table_name,
partition_name,
partition_ordinal_position,
subpartition_ordinal_position;
This is a query on information_schema.PARTITIONS fired by Navicat to collect details about the table structure, and it's very difficult to reproduce. When you edit a table, Navicat has to gather all the details about it from the information schema (e.g. the list of engines, the table columns, the output of SHOW CREATE TABLE, etc.), and PARTITIONS is one of the tables it has to check. As you can see, the WHERE condition is not "correct": WHERE TABLE_SCHEMA LIKE 'wialogdb' should really be WHERE TABLE_SCHEMA = 'wialogdb'. That version of the query is much faster, but this is Navicat's internal code, so we cannot change it. We didn't have this issue in the past (old MySQL 5.1.34).
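For reference, this is the equality form that runs much faster (the same query, with the LIKE comparisons replaced by = and the ISNULL test rewritten as IS NOT NULL):
SELECT table_name,
       partition_name,
       subpartition_name,
       partition_method,
       subpartition_method,
       partition_expression,
       subpartition_expression,
       partition_description,
       partition_comment,
       nodegroup,
       tablespace_name
FROM information_schema.partitions
WHERE table_schema = 'wialogdb'
  AND partition_name IS NOT NULL
  AND table_name = 'freemail'
ORDER BY table_name,
         partition_name,
         partition_ordinal_position,
         subpartition_ordinal_position;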
Any help would be highly appreciated.
Thanks in advance.
If it's not your software, don't try to fix it. Let the creators know you have performance issues and that you have found something which can improve performance.
There's a reason why people buy licenses for software: use the support.
What are the differences between MySQL and Oracle databases? I know both are RDBMSs, both use SQL as the query language, and both are developed by Oracle. So what are the technical differences between the two?
I used Oracle at Deutsche Bank for 1.5 years, and I have some experience with MySQL from another job.
In general, Oracle is a much more powerful and deeper RDBMS, which allows you to build arbitrarily complex systems. That's why it is used in banking, the military, and scientific fields.
MySQL is a light, simple RDBMS that works very well for the web, for example a small online shop, your personal web page, or a school's site. More complex web projects often use PostgreSQL instead.
Oracle allows you to use packages (usually written in PL/SQL), cursors, the PL/SQL language itself, roles, snapshots, synonyms, and tablespaces.
Oracle also has more advanced data types, and some common types go by different names.
For example, MySQL's BIGINT (8 bytes) is called NUMBER(19,0) in Oracle.
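As a concrete sketch of that mapping (the table and column names are made up for illustration):
-- MySQL
CREATE TABLE accounts (id BIGINT);
-- Oracle equivalent
CREATE TABLE accounts (id NUMBER(19,0));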
One quirk worth mentioning is select * from dual, where DUAL is a default virtual table in Oracle; MySQL doesn't need it, since a bare SELECT is allowed.
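To illustrate (both statements just fetch the current time):
-- Oracle: every SELECT needs a FROM clause, so scalar expressions select from DUAL
SELECT SYSDATE FROM dual;
-- MySQL: a bare SELECT works, and FROM DUAL is also accepted for compatibility
SELECT NOW();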
For a deeper comparison, please check the comparison table on Oracle's website:
https://docs.oracle.com/cd/E12151_01/doc.150/e12155/oracle_mysql_compared.htm#i1027526
MySQL and Oracle are both RDBMSs. Oracle did not develop MySQL; it acquired it.
The two are broadly similar, with syntax differences. For example, to limit rows in MySQL:
SELECT * FROM tbl LIMIT 1;
and in Oracle:
SELECT * FROM tbl WHERE ROWNUM <= 1;
MySQL is open source and Oracle is paid. For more query differences you can see here.
I'm working on a database that has one table with 21 million records. Data is loaded once when the database is created, and there are no further insert, update, or delete operations. A web application accesses the database to run SELECT statements.
It currently takes 25 seconds per request for the server to return a response, and if multiple clients make simultaneous requests the response time increases significantly. Is there a way to speed this up?
I'm using MyISAM instead of InnoDB with a fixed max rows setting, and I have indexed the column being searched.
If no data is being updated/inserted/deleted, then this might be a case where you want to tell the database not to lock the table while you are reading it.
For MySQL this seems to be something along the lines of:
SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED ;
SELECT * FROM TABLE_NAME ;
SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ ;
(ref: http://itecsoftware.com/with-nolock-table-hint-equivalent-for-mysql)
More reading in the docs, if it helps:
https://dev.mysql.com/doc/refman/5.7/en/innodb-transaction-isolation-levels.html
The T-SQL equivalent, which may help if you need to google further, is
SELECT * FROM TABLE WITH (nolock)
This may improve performance. As noted in other comments, some good indexing may help (sketched below), and maybe breaking the table out further (if possible) to spread things around so you aren't accessing all the data when you don't need it.
As a note: locking a table prevents other people from changing data while you are using it. Not locking a table that has a lot of inserts/deletes/updates may cause your selects to return multiple rows of the same data (as it gets moved around on the hard drive), rows with missing columns, and so forth.
Since you've got one table you are selecting against, your requests are all taking turns locking and unlocking it. If you aren't doing updates, inserts, or deletes, then your data shouldn't change, so you should be OK to forgo the locks.
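As a sketch of the indexing suggestion (the table and column names here are hypothetical, since the question doesn't show the schema):
-- a plain index on the column the WHERE clause filters on
ALTER TABLE big_table ADD INDEX idx_search (search_col);
-- a covering index lets MySQL answer the query from the index alone
ALTER TABLE big_table ADD INDEX idx_cover (search_col, returned_col);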
Lately I have been tasked with deleting and reinserting approximately 15 million rows in a MyISAM table that has about 150 million rows, while keeping the table/db available for inserts/reads.
To do this, I started a process that takes small chunks of data and reinserts them via INSERT ... SELECT statements into a cloned table with the same structure, sleeping between runs so as not to overload the server; it skips over the data to be deleted and inserts the replacement data.
This way, while the cloned table was being built (it took 8+ hours), new data kept coming into the source table. At the end I just had to sync the tables with the data added during those 8+ hours and rename the tables.
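For context, the process looked roughly like this (the table names, id ranges, and the rows_to_delete list are hypothetical stand-ins):
-- clone the structure, indexes included
CREATE TABLE source_clone LIKE source_table;
-- copy one chunk, skipping the rows slated for deletion
INSERT INTO source_clone
SELECT s.*
FROM source_table s
LEFT JOIN rows_to_delete d ON d.id = s.id
WHERE s.id BETWEEN 1 AND 100000
  AND d.id IS NULL;
-- pause so the server isn't overloaded, then repeat with the next id range
SELECT SLEEP(5);
-- once caught up, swap the tables atomically
RENAME TABLE source_table TO source_old, source_clone TO source_table;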
Everything was fine except for one thing: the cardinality of the indexes on the cloned table is way off, and the execution plans for queries against it are awful (some went from a few seconds to 30+ minutes).
I know this can be fixed by running ANALYZE TABLE, but that also takes a lot of time (I'm currently running one on a slave server and it has been executing for more than 10 hours now), and I can't afford to take the table offline for writes while the analysis is performed. It would also stress the server's I/O, putting pressure on the server and slowing it down.
Can someone explain why building a MyISAM table via INSERT ... SELECT statements results in a table with such poor internal index statistics?
Also, is there a way to build the table incrementally and end up with the indexes in good shape?
Thanks in advance.
Is it possible to restore, from a full backup or a parallel db, only certain records with their original IDs?
Let's say records were deleted from a specific date onwards; can those records be restored without restoring the entire table?
To be clear: let's say I still have records 500-720 in a backup or parallel db, but the table has had new records added since the backup, and I don't want to lose those either. So I simply want to slot records 500-720 back into the current table with their original IDs.
If you have a copy of the db, that's going to be the easiest and quickest way - create a copy of your table with just the rows you need:
CREATE TABLE table2
AS
SELECT * FROM table1
WHERE table1.ID BETWEEN 500 AND 720
then dump table2 with mysqldump:
mysqldump -u username -p thedatabase table2 > table2_dump.sql
then ship the dump to the main server, load it into a temporary database, and insert the missing records using:
INSERT INTO table1
SELECT *
FROM temp_db.table2
If you don't have a copy of the db with the missing records, just a backup, then I don't think you can do such a selective restore. If you only have a single dump file of the entire db, you will have to restore a complete copy to a temporary db and insert the missing records in a similar manner to the one described above, but with a WHERE clause in the INSERT.
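A sketch of that last step, assuming the full dump was restored into a database called temp_db and the table has a numeric id column:
INSERT INTO table1
SELECT *
FROM temp_db.table1
WHERE id BETWEEN 500 AND 720;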
I have a MySQL database that is continually growing.
Every once in a while I run OPTIMIZE on all the tables. Is this the sort of thing I should put on a daily or weekly cron job?
Are there any specific tasks I could set to run automatically that keeps the database in top performance?
Thanks
Ben
You can get column-type optimization suggestions for the tables in your database by executing this query:
SELECT * FROM `db_name`.`table_name` PROCEDURE ANALYSE(1, 10);
This will suggest an Optimal_fieldtype for each column; you then have to ALTER the table so that the optimal field type is actually used.
You can also profile your queries to make sure that proper indexing is in place on the table.
I suggest you try SQLyog, which provides both a "Calculate Optimal Datatype" feature and an "SQL Profiler", both of which will definitely help you in optimizing server performance.
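As for the scheduled maintenance the question asks about, the statements a daily or weekly cron job would need to run (via the mysql client) are simply these; the database and table names are examples:
-- reclaims unused space and defragments the data file
OPTIMIZE TABLE db_name.table1;
-- refreshes index statistics without a full rebuild
ANALYZE TABLE db_name.table1;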