Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 7 years ago.
When should the DELAY_KEY_WRITE attribute be used?
How does it help?
CREATE TABLE (...) DELAY_KEY_WRITE = 1;
Another performance option in MySQL is the DELAY_KEY_WRITE option. According to the MySQL documentation the option makes index updates faster because they are not flushed to disk until the table is closed.
Note that this option applies only to MyISAM tables.
You can enable it on a table by table basis using the following SQL statement:
ALTER TABLE sometable DELAY_KEY_WRITE = 1;
This can also be set in the advanced table options in the MySQL Query Browser.
This performance option can be handy if you have to do a lot of updates, because it lets you delay writing the indexes until tables are closed. So if you make frequent updates to large tables, you may want to check out this option.
Ok, so when does MySQL close tables?
That should have been your next question. It looks as though tables are opened when they are needed, but then added to the table cache. This cache can be flushed manually with FLUSH TABLES; but here's how they are closed automatically according to the docs:
1. When the cache is full and a thread tries to open a table that is not in the cache.
2. When the cache contains more than table_cache entries and a thread is no longer using a table.
3. When FLUSH TABLES; is called.
"If DELAY_KEY_WRITE is enabled, this means that the key buffer for tables with this option are not flushed on every index update, but only when a table is closed. This speeds up writes on keys a lot, but if you use this feature, you should add automatic checking of all MyISAM tables by starting the server with the --myisam-recover option (for example, --myisam-recover=BACKUP,FORCE)."
So if you do use this option you may want to flush your table cache periodically, and make sure you startup using the myisam-recover option.
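Putting the advice above together, a minimal sketch (the table name is a placeholder):

```sql
-- Enable delayed key writes on an existing MyISAM table
ALTER TABLE sometable DELAY_KEY_WRITE = 1;

-- Periodically force the delayed index blocks to disk by closing tables
FLUSH TABLES;
```

The server should also be started with the --myisam-recover option mentioned in the docs, so that indexes left unflushed by a crash are checked and repaired automatically.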
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 9 days ago.
I have a WordPress site, and when updating the main theme I saw that MySQL was consuming a high percentage of CPU. I opened phpMyAdmin and this appeared in the process list:
"Waiting for table metadata lock" and "copy to tmp table"
What should I do? My site stopped working and my server space is running out.
Only the process running "copying to tmp table" is doing any work. The others are waiting.
Many types of ALTER TABLE operations in MySQL work by making a copy of the table and filling it with an altered form of the data. In your case, ALTER TABLE wp_posts ENGINE=InnoDB converts the table to the InnoDB storage engine. If the table was already using that storage engine, it's almost a no-op, but it can serve to defragment a tablespace after you delete a lot of rows.
Because it is incrementally copying rows to a new tablespace, it takes more storage space. Once it is done, it will drop the original tablespace. So it will temporarily need to use up to double the size of that table.
There should be no reason to run that command many times. Did you do that? The one that's doing the work is in progress, but it takes some time, depending on how many rows are stored in the table and on how powerful your database server is. Be patient, and don't start the ALTER TABLE again in another tab.
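To see which session is doing the work and which ones are just waiting on the metadata lock, you can inspect the process list and kill the waiting duplicates (the id below is a placeholder from your own processlist output):

```sql
-- Show every connection and what it is currently doing
SHOW FULL PROCESSLIST;

-- Kill a duplicate ALTER TABLE that is stuck on "Waiting for table metadata lock";
-- 123 stands for the Id column of that row in the processlist
KILL 123;
```

Leave the one session whose State is "copy to tmp table" running; killing it would discard the work done so far.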
Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 1 year ago.
I want to periodically insert data from a MySQL database into ClickHouse, i.e., when data is added or updated in the MySQL database, I want that data to be added automatically to ClickHouse.
I am thinking of using Change Data Capture (CDC). CDC is a technique that captures changes made to data in MySQL and applies them to the destination ClickHouse table. It imports only changed data, not the entire database. To use the CDC method with a MySQL database, we must use the binary log (binlog). The binlog allows us to capture change data as a stream, enabling near-real-time replication.
Binlog not only captures data changes (INSERT, UPDATE, DELETE) but also table schema changes such as ADD/DROP COLUMN. It also ensures that rows deleted from MySQL are also deleted in ClickHouse.
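For the binlog to be usable this way, the MySQL server must write row-based binlog events. A minimal my.cnf sketch (all values are examples to adapt, not required settings):

```ini
# my.cnf sketch -- example values only
[mysqld]
server-id        = 1
log_bin          = mysql-bin
binlog_format    = ROW     # row-based events are what CDC consumers read
binlog_row_image = FULL    # include all columns in each row event
```

With this in place, a replication client (or ClickHouse itself, see below) can subscribe to the change stream.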
Once I have the changes, how can I insert them into ClickHouse?
[experimental] MaterializedMySQL
Creates a ClickHouse database with all the tables existing in MySQL, and all the data in those tables.
The ClickHouse server works as a MySQL replica: it reads the binlog and performs the DDL and DML queries.
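A minimal sketch of setting it up on the ClickHouse side (host, database, user, and password are placeholders; the engine was experimental at the time, hence the setting):

```sql
-- Allow the experimental database engine for this session
SET allow_experimental_database_materialized_mysql = 1;

-- Mirror a MySQL database; ClickHouse subscribes to its binlog
CREATE DATABASE mysql_mirror
ENGINE = MaterializedMySQL('mysql-host:3306', 'source_db', 'repl_user', 'password');
```

After this, tables in mysql_mirror track INSERT/UPDATE/DELETE from MySQL with no further application code.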
https://clickhouse.tech/docs/en/engines/database-engines/materialized-mysql/
https://altinity.com/blog/2018/6/30/realtime-mysql-clickhouse-replication-in-practice
https://clickhouse.tech/docs/en/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-sources/#dicts-external_dicts_dict_sources-mysql
https://altinity.com/blog/dictionaries-explained
https://altinity.com/blog/2020/5/19/clickhouse-dictionaries-reloaded
Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 6 years ago.
I'm using MySQL (InnoDB storage engine). I want to implement row-level implicit locking on an UPDATE statement, so that no other transaction can read or update that row concurrently.
Example:
Transaction1 is executing
"UPDATE Customers
SET City='Hamburg'
WHERE CustomerID=1;"
Then, at the same time, Transaction2 should not be able to read or update the same row, but Transaction2 should be able to access other rows.
Any help would be appreciated.
Thank you for your time.
If there are no other statements supporting that UPDATE, it is atomic.
If, for example, you needed to look at the row before deciding to change it, then it is a little more complex:
BEGIN;
SELECT ... FOR UPDATE;
-- decide what you need to do
UPDATE ...;
COMMIT;
No other connection can change the row(s) SELECTed until the COMMIT.
Other connections can see the rows involved, but they may be seeing the values before the BEGIN started. Reads usually don't matter. What usually matters is that everything between BEGIN and COMMIT is "consistent", regardless of what is happening in other connections.
Your connection might be delayed, waiting for another connection to release something (such as rows held by a SELECT ... FOR UPDATE). Some other connection might be delayed. Or there could be a "deadlock" -- when InnoDB decides that waiting will not work.
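Using the table from the question, the pattern above looks like this (a sketch; the read-then-decide step is whatever your application logic needs):

```sql
START TRANSACTION;

-- Lock the row; other transactions trying to update it will block until COMMIT
SELECT City FROM Customers WHERE CustomerID = 1 FOR UPDATE;

-- decide what to do based on the value just read, then:
UPDATE Customers SET City = 'Hamburg' WHERE CustomerID = 1;

COMMIT;
```

Other rows of Customers remain available to other transactions, which is the row-level behavior the question asks for.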
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
I'm working on a database that has one table with 21 million records. Data is loaded once when the database is created and there are no more insert, update or delete operations. A web application accesses the database to make select statements.
It currently takes 25 seconds per request for the server to return a response. However, if multiple clients make simultaneous requests, the response time increases significantly. Is there a way of speeding this up?
I'm using MyISAM instead of InnoDB with fixed max rows, and I have indexed the searched field.
If no data is being updated/inserted/deleted, then this might be case where you want to tell the database not to lock the table while you are reading it.
For MySQL this seems to be something along the lines of:
SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED ;
SELECT * FROM TABLE_NAME ;
SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ ;
(ref: http://itecsoftware.com/with-nolock-table-hint-equivalent-for-mysql)
More reading in the docs, if it helps:
https://dev.mysql.com/doc/refman/5.7/en/innodb-transaction-isolation-levels.html
The TSQL equivalent, which may help if you need to google further, is
SELECT * FROM TABLE WITH (nolock)
This may improve performance. As noted in other comments some good indexing may help, and maybe breaking the table out further (if possible) to spread things around so you aren't accessing all the data if you don't need it.
As a note: locking a table prevents other sessions from changing data while you are using it. Not locking a table that has a lot of inserts/deletes/updates may cause your selects to return multiple rows of the same data (as it gets moved around on the hard drive), rows with missing columns, and so forth.
Since you've got one table you are selecting against, your requests are all taking turns locking and unlocking the table. If you aren't doing updates, inserts, or deletes, then your data shouldn't change, so you should be OK to forgo the locks.
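Before reaching for isolation-level tricks, it's worth confirming the query is actually using an index. A sketch (table, column, and index names are placeholders for your own schema):

```sql
-- "type: ALL" in the output means a full scan of all 21 million rows
EXPLAIN SELECT * FROM big_table WHERE searched_field = 'some value';

-- An index on the searched column lets MySQL skip straight to matching rows
CREATE INDEX idx_searched_field ON big_table (searched_field);
```

If EXPLAIN already shows the index being used, then concurrency (locking), not the index, is the bottleneck, and the advice above applies.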
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
I am using both types in my database, and I found the following information about the two:
1. InnoDB locks the particular row in the table, and MyISAM locks the entire MySQL table.
2. MyISAM is the original storage engine. It is a fast storage engine. It does not support transactions.
3. InnoDB is the most widely used storage engine with transaction support. It is an ACID compliant storage engine.
So, I am confused about which tables to store as MyISAM and which as InnoDB in my database.
Please give me a suggestion.
MyISAM and InnoDB are two database engines, and each is better in its own respects. MyISAM works best for non-transactional purposes, such as where you mainly need SEARCH. InnoDB works better where you use transactions, such as INSERT, UPDATE, DELETE.
The main difference between InnoDB and MyISAM is support for transactions and referential integrity.
Each is better at its own design goals.
Refer to this link to have a clear idea about MyISAM and InnoDB
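To see which engine each of your tables currently uses, and to switch one over, a sketch (the database and table names are placeholders):

```sql
-- List every table in a schema with its storage engine
SELECT TABLE_NAME, ENGINE
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'your_database';

-- Convert a table to InnoDB (the data is copied into a new table)
ALTER TABLE sometable ENGINE = InnoDB;
```

Note that the ALTER TABLE rebuilds the table, so on large tables it can take a while and temporarily needs extra disk space.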