The usage of the mysql option --unbuffered

There is an option named "unbuffered" for the mysql client, and the MySQL manual has only a very brief line about it: "Flush the buffer after each query."
My question is: what is this option actually used for?
I tried reading the mysql client source code, and it seems to mean "flush the mysql client's log/output buffer after each query", but I'm not sure.
Thanks.

The default behavior for the database is to buffer the whole result set before outputting anything. If you run an unbuffered query, you are asking MySQL to start sending output as soon as possible. In theory only one row is held in memory at a time, so you can stream huge tables without running out of memory.
The downside is that you cannot run two unbuffered queries at the same time: whereas buffered queries will just queue the second one, unbuffered statements will throw an error.
Another downside is that you don't know how many rows are left until you finish iterating through the result.

Related

MySql 5.5; possible to exclude a table from logging?

MySQL 5.5 has a few logging options, among which are the binary log (with the binlog options), which I do not want to use, and the general query log file, which I do want to use.
However, one program using one table in that database is filling this log file with 50+ MB per day, so I would like that table to be excluded from this log.
Is that possible, or is the only way to install another MySql version and then to move this 1 table?
Thanks,
Alex
There are options for filtering the binlog by table, but not the query logs.
There are no options for filtering the general query log. It is either enabled for all queries, or else it's disabled.
There are options for filtering the slow query log, but not by table. For example, to log only queries that take longer than N seconds, or queries that don't use an index. Percona Server adds some options to filter the slow query log based on sampling.
You can use a session variable to disable either slow query or general query logging for queries run in a given session. This is a dynamic setting, so you can change it at will. But you would need to change your client code to do this every time you query that specific table.
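For example, a minimal sketch of the session-variable approach (assuming it is the general query log that is filling up; sql_log_off needs the SUPER privilege, and the long_query_time trick only affects the slow query log):
SET SESSION sql_log_off = ON;        -- stop writing this session's statements to the general query log
SET SESSION long_query_time = 3600;  -- effectively keep this session's queries out of the slow query log
-- ... run the statements against that table ...
SET SESSION sql_log_off = OFF;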
Another option is to implement log rotation for the slow query log, so it never grows too large. See https://www.percona.com/blog/2013/04/18/rotating-mysql-slow-logs-safely/

SQL Query Cache

I know that a SQL query can use the query cache to return data instead of reprocessing all of the data. Here is the question I would like to ask:
I'm working on a shared database server as one of the developers, and I need to do performance testing on the queries I'm handling.
If I clear the query cache,
for example using FLUSH QUERY CACHE; or RESET QUERY CACHE;,
will it affect the other developers, or does it only clear away my local query cache?
If it will affect others, is there any way to clear it locally, or to make my queries bypass the query cache for testing?
Two clarifications to begin with:
The MySQL query cache is a server-side feature; there's no such thing as a "local cache". You're probably confused by the LOCAL keyword in the FLUSH command. As the docs explain, it's just an alias for NO_WRITE_TO_BINLOG (so it's related to replication, and "local" means "this server").
MySQL will only return cached data if you've enabled the feature and either made it the default or opted in with the SQL_CACHE hint. In my experience, most servers do not enable it by default.
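As a quick sketch (the table name is a placeholder; the variables are the standard query-cache settings), you can check how the cache is configured and opt a single query in:
SHOW VARIABLES LIKE 'query_cache_type';          -- OFF, ON, or DEMAND
SHOW VARIABLES LIKE 'query_cache_size';          -- 0 means the cache is effectively disabled
SELECT SQL_CACHE * FROM my_table WHERE id = 1;   -- cached when the cache is ON, or DEMAND plus this hint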
Let's now answer your question. At The MySQL Query Cache we can read:
The query cache is shared among sessions, so a result set generated by
one client can be sent in response to the same query issued by another
client.
Which makes sense: a cache that cannot reuse stored data is not as useful.
I don't know what you want to test exactly. Your data should always be fresh:
The query cache does not return stale data. When tables are modified,
any relevant entries in the query cache are flushed.
However you might want to get an idea of how long the query takes to run. You can always opt out with the SQL_NO_CACHE keyword:
The server does not use the query cache. It neither checks the query
cache to see whether the result is already cached, nor does it cache
the query result.
Just take into account that a query that runs for the second time might run faster even without cache because part of the data segments might be already loaded into RAM.
Try using the SQL_NO_CACHE option in your query. This will stop MySQL from caching the results:
SELECT SQL_NO_CACHE * FROM TABLE
With SQL Server, for cached data you can use DBCC DROPCLEANBUFFERS and force a manual CHECKPOINT.
However it works at the Server (instance) level:
Use DBCC DROPCLEANBUFFERS to test queries with a cold buffer cache without shutting down and restarting the server.
To drop clean buffers from the buffer pool, first use CHECKPOINT to produce a cold buffer cache. This forces all dirty pages for the current database to be written to disk and cleans the buffers. After you do this, you can issue DBCC DROPCLEANBUFFERS command to remove all buffers from the buffer pool.
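A minimal sketch of that sequence (SQL Server, run in the database you are testing):
CHECKPOINT;              -- write dirty pages to disk so the remaining buffers are clean
DBCC DROPCLEANBUFFERS;   -- drop the clean buffers so the next query starts with a cold cache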
The query/buffer cache is global, not local. If the buffer or query cache is dropped, it is dropped globally and will affect every user of the database server.

Does executing a statement always take in memory for the result set?

I was told by a colleague that when the database server executes an SQL statement, it always puts the result data into RAM/swap, and thus it is not practical to select large result sets.
I thought that code such as
my $sth = $dbh->prepare('SELECT million_rows FROM table');
$sth->execute;
while (my @data = $sth->fetchrow_array) {
    # process the row
}
retrieves the result set row by row, without it being loaded to RAM.
But I can't find any reference to this in DBI or MySQL docs. How is the result set really created and retrieved? Does it work the same for simple selects and joins?
Your colleague is right.
By default, the perl module DBD::mysql uses mysql_store_result which does indeed read in all SELECT data and cache it in RAM. Unless you change that default, when you fetch row-by-row in DBI, it's just reading them out of that memory buffer.
This is usually what you want unless you have very, very large result sets. Otherwise, until the client has fetched the last row, mysqld has to keep that data ready, and my understanding is that this causes blocks on writes to the same rows (blocks? tables?).
Keep in mind, modern machines have a lot of RAM. A million-row result set is usually not a big deal. Even if each row is quite large at 1 KB, that's only 1 GB RAM plus overhead.
If you're going to process millions of rows of BLOBs, maybe you do want mysql_use_result -- or you want to SELECT those rows in chunks with progressive uses of LIMIT x,y.
See mysql_use_result and mysql_store_result in perldoc DBD::mysql for details.
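As a hedged sketch of the streaming alternative (the mysql_use_result attribute is documented in DBD::mysql; the DSN, credentials, and table are placeholders):
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('DBI:mysql:database=test', 'user', 'password',
                       { RaiseError => 1 });

my $sth = $dbh->prepare('SELECT million_rows FROM table');
# Ask DBD::mysql for mysql_use_result() instead of mysql_store_result(),
# so rows are streamed from the server rather than buffered in client RAM.
$sth->{mysql_use_result} = 1;
$sth->execute;

while (my @row = $sth->fetchrow_array) {
    # process one row at a time
}
$sth->finish;
$dbh->disconnect;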
This is not true (if we are talking about the database server itself, not client layers).
MySQL can buffer the whole resultset, but this is not necessarily done, and if done, not necessarily in RAM.
The resultset is buffered if you are using inline views (SELECT FROM (SELECT …)), the query needs to sort (which is shown as using filesort), or the plan requires creating a temporary table (which is shown as using temporary in the query plan).
Even when using temporary, MySQL only keeps the table in memory while its size does not exceed the limit set by tmp_table_size. When the table grows over this limit, it is converted from an in-memory table to an on-disk MyISAM table.
You may, though, explicitly instruct MySQL to buffer the resultset by adding the SQL_BUFFER_RESULT modifier to the outermost SELECT.
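For instance (reusing the hypothetical query from the question):
-- forces the result into a temporary table, so table locks can be released sooner
SELECT SQL_BUFFER_RESULT million_rows FROM table;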
See the docs for more detail.
No, that is not how it works.
The database will not hold all the rows in RAM/swap.
However, it will try, and mysql tries hard here, to cache as much as possible (indexes, results, etc...). Your mysql configuration gives values for the available memory buffers for different kinds of caches (for different kinds of storage engines) - you should not allow this cache to swap.
Test it
Bottom line: it should be very easy to test this using the client only (I don't know Perl's DBI; it might be doing something that forces mysql to load everything on prepare, but I doubt it). Anyway... test it:
Actually issue a prepare on SELECT SQL_NO_CACHE million_rows FROM table and then fetch only a few rows out of the millions.
Then compare the performance with SELECT SQL_NO_CACHE only_fetched_rows FROM table and see how that fares.
If the performance is comparable (and fast), then I believe you can call your colleague's bluff.
Also, if you enable logging of the statements actually issued to mysql and give us a transcript of that, then we (non-Perl folks) can give a more definitive answer on what mysql would do.
I am not super familiar with this, but it looks to me like DBD::mysql can either fetch everything up front or only as needed, based on the mysql_use_result attribute. Consult the DBD::mysql and MySQL documentation.

Identifying who makes a lot of INSERT requests in MySQL

Recently, I noticed that my MySQL server processes a lot of INSERTs. How can I detect which user or database this activity is coming from?
insert 33 k 97.96 k 44.21%
SHOW FULL PROCESSLIST will return every connection, user, and query currently active, if you have the PROCESS permission. That's more for immediate problems, but it has the least overhead.
If you use query logging, then instead of the regular query log (it can slow your server down noticeably) use the binary log to keep it minimal. It only tracks actions that change tables, like CREATE/DROP/ALTER and INSERT/UPDATE/REPLACE.
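If you go the binary-log route, a sketch of inspecting it from SQL (the binlog file name below is a placeholder; SHOW BINARY LOGS lists the real names):
SHOW BINARY LOGS;                                    -- list the available binlog files
SHOW BINLOG EVENTS IN 'mysql-bin.000001' LIMIT 20;   -- peek at the first events of one file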
What you should log periodically (once a minute):
SHOW FULL PROCESSLIST;
SHOW GLOBAL STATUS;
With the slow log enabled as well, this gives you a very good chance of being able to answer this kind of question.
If you have binary logging enabled you can check time/user who inserted rows.
If you have general log enabled then everything is logged.
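If the general log is not already on, a minimal sketch of enabling it temporarily (both variables are dynamic; the file path is a placeholder):
SET GLOBAL general_log_file = '/var/log/mysql/general.log';
SET GLOBAL general_log = 'ON';
-- ... let the INSERT activity happen, then:
SET GLOBAL general_log = 'OFF';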
Look in your query logs. This will show every connect into MySQL, and show every command that they execute.

Fixing "Lock wait timeout exceeded; try restarting transaction" for a 'stuck" Mysql table?

From a script I sent a query like this thousands of times to my local database:
update some_table set some_column = some_value
I forgot to add the WHERE part, so the same column was set to the same value for all the rows in the table; this was done thousands of times, and the column was indexed, so the corresponding index was probably updated lots of times too.
I noticed something was wrong because it took too long, so I killed the script. I have even rebooted my computer since then, but something is stuck in the table, because simple queries take a very long time to run, and when I try dropping the relevant index it fails with this message:
Lock wait timeout exceeded; try restarting transaction
It's an InnoDB table, so the stuck transaction is probably implicit. How can I fix this table and remove the stuck transaction from it?
I had a similar problem and solved it by checking the threads that are running.
To see the running threads use the following command in mysql command line interface:
SHOW PROCESSLIST;
It can also be sent from phpMyAdmin if you don't have access to mysql command line interface.
This will display a list of threads with corresponding ids and execution time, so you can KILL the threads that are taking too much time to execute.
In phpMyAdmin you will have a button for stopping threads by using KILL, if you are using command line interface just use the KILL command followed by the thread id, like in the following example:
KILL 115;
This will terminate the connection for the corresponding thread.
You can check the currently running transactions with
SELECT * FROM `information_schema`.`innodb_trx` ORDER BY `trx_started`
Your transaction should be one of the first, because it's the oldest in the list. Now just take the value from trx_mysql_thread_id and send it the KILL command:
KILL 1234;
If you're unsure which transaction is yours, repeat the first query very often and see which transactions persist.
Check InnoDB status for locks
SHOW ENGINE InnoDB STATUS;
Check MySQL open tables
SHOW OPEN TABLES WHERE In_use > 0;
Check pending InnoDB transactions
SELECT * FROM `information_schema`.`innodb_trx` ORDER BY `trx_started`;
Check lock dependency - what blocks what
SELECT * FROM `information_schema`.`innodb_locks`;
After investigating the results above, you should be able to see what is locking what.
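To see blocked and blocking transactions side by side, a sketch that joins the same INFORMATION_SCHEMA tables (MySQL 5.5-5.7; in 8.0 the lock tables moved to performance_schema):
SELECT r.trx_id              AS waiting_trx,
       r.trx_mysql_thread_id AS waiting_thread,
       r.trx_query           AS waiting_query,
       b.trx_id              AS blocking_trx,
       b.trx_mysql_thread_id AS blocking_thread,
       b.trx_query           AS blocking_query
FROM information_schema.innodb_lock_waits w
JOIN information_schema.innodb_trx b ON b.trx_id = w.blocking_trx_id
JOIN information_schema.innodb_trx r ON r.trx_id = w.requesting_trx_id;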
The root cause of the issue might be in your code too: please check the related functions, especially the annotations, if you use a JPA implementation like Hibernate.
For example, as described here, misuse of the following annotation might cause locks in the database:
@Transactional(propagation = Propagation.REQUIRES_NEW)
This started happening to me when my database size grew and I was doing a lot of transactions on it.
The truth is there is probably some way to optimize either your queries or your DB, but try these two statements as a workaround.
Run this:
SET GLOBAL innodb_lock_wait_timeout = 5000;
And then this:
SET innodb_lock_wait_timeout = 5000;
When you establish a connection for a transaction, you acquire a lock before performing the transaction. If you cannot acquire the lock, you retry for some time. If the lock is still not obtainable, the "lock wait timeout exceeded" error is thrown. The reason you cannot acquire the lock is that you are not closing the connection: when you try to get the lock a second time, your previous connection is still open and still holding the lock.
Solution: close the connection, or use setAutoCommit(true) (according to your design) to release the lock.
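In SQL terms, a minimal sketch of releasing the locks held by the hung session (run in that session, if it is still reachable):
COMMIT;              -- or ROLLBACK; ends the open transaction and releases its row locks
SET autocommit = 1;  -- each following statement then commits (and unlocks) on its own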
Restart MySQL, it works fine.
BUT beware that if such a query is stuck, there is a problem somewhere:
in your query (misplaced char, cartesian product, ...)
very numerous records to edit
complex joins or tests (MD5, substrings, LIKE %...%, etc.)
data structure problem
foreign key model (chain/loop locking)
misindexed data
As @syedrakib said, it works, but this is no long-living solution for production.
Beware: doing the restart can leave your data in an inconsistent state.
Also, you can check how MySQL handles your query with the EXPLAIN keyword and see if something is possible there to speed up the query (indexes, complex tests,...).
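For example (table and column names reuse the hypothetical query from the question):
EXPLAIN SELECT * FROM some_table WHERE some_column = 'some_value';
-- check the key and rows columns: no usable index means the statement scans every row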
Go to the process list in MySQL (SHOW PROCESSLIST), so you can see whether a task is still running.
Kill the particular process, or wait until it completes.
I ran into the same problem with an "update"-statement. My solution was simply to run through the operations available in phpMyAdmin for the table. I optimized, flushed and defragmented the table (not in that order). No need to drop the table and restore it from backup for me. :)
I had the same issue. I think it was a deadlock issue with SQL. You can just force close the SQL process from Task Manager. If that didn't fix it, just restart your computer. You don't need to drop the table and reload the data.
I had this problem when trying to delete a certain group of records (using MS Access 2007 with an ODBC connection to MySQL on a web server). Typically I would delete certain records from MySQL then replace with updated records (cascade delete several related records, this streamlines deleting all related records for a single record deletion).
I tried to run through the operations available in phpMyAdmin for the table (optimize, flush, etc.), but I was getting a "need RELOAD permission" error when I tried to flush. Since my database is on a web server, I couldn't restart the database. Restoring from a backup was not an option.
I tried running delete query for this group of records on the cPanel mySQL access on the web. Got same error message.
My solution: I used Sun's (Oracle's) free MySQL Query Browser (which I had previously installed on my computer) and ran the delete query there. It worked right away; problem solved. I was then able to once again perform the function from the Access script over the ODBC connection to MySQL.
Issue in my case: Some updates were made to some rows within a transaction and before the transaction was committed, in another place, the same rows were being updated outside this transaction. Ensuring that all the updates to the rows are made within the same transaction resolved my issue.
The issue was resolved in my case by changing DELETE to TRUNCATE.
Issue:
query = "delete from Survey1.sr_survey_generic_details"
mycursor.execute(query)
Fix:
query = "truncate table Survey1.sr_survey_generic_details"
mycursor.execute(query)
This happened to me when I was accessing the database from multiple places, for example from DBeaver and from control panels. At some point DBeaver got stuck, and therefore the other panels couldn't process additional information. The solution is to restart all access points to the database: close them all and reopen them.
Fixed it.
Make sure you don't have a mismatched data type in the INSERT query.
I had an issue where I was trying to store "user browser agent data" in a VARCHAR(255) column and kept hitting this lock; when I changed it to TEXT(255), it was fixed.
So most likely it is a data type mismatch.
I solved the problem by dropping the table and restoring it from backup.