I'm using Azure Database for MySQL and I'm running into a problem with "Failed Connections".
Connections pile up at a specific time of day, and one particular SELECT query accounts for more than half of the entries in the slow query log.
Among the server parameters, the connection-related values are as follows:
connect_timeout : 10 (units are seconds)
wait_timeout : 360
interactive_timeout : 28800
long_query_time : 5
net_read_timeout : 120
net_write_timeout : 240
slave_net_timeout : 60
I've heard that I need to create a read replica to offload the slow queries, but I wonder whether there are other solutions, such as adjusting the MySQL parameters.
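For reference, a quick way to inspect those values from a client session is sketched below; note that on Azure Database for MySQL the server parameters are normally changed through the portal or CLI, so the SET GLOBAL statement is only an illustration and 600 is a placeholder value, not a recommendation for this workload.
-- Show the current connection-related settings
SHOW GLOBAL VARIABLES
WHERE Variable_name IN ('connect_timeout', 'wait_timeout', 'net_read_timeout', 'net_write_timeout');
-- Illustrative adjustment of one of the timeouts
SET GLOBAL wait_timeout = 600;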
This is how I do it in SQL Server:
CREATE DATABASE MyDatabase
ON (NAME = 'MyDatabase_Data',
    FILENAME = 'c:\db\MyDatabase_Data.mdf',
    SIZE = 20MB,
    FILEGROWTH = 10%,
    MAXSIZE = 100MB)
LOG ON
(NAME = 'MyDatabase_Log',
    FILENAME = 'c:\db\MyDatabase_Log.ldf',
    SIZE = 5MB,
    FILEGROWTH = 5%,
    MAXSIZE = UNLIMITED
)
How could I do that in these databases?
In DB2 you can also specify MAXSIZE of each tablespace, as well as the maximum size and number of log files. For example:
create db mydb on /whatever dbpath on /whatever
catalog tablespace managed by automatic storage maxsize 500 M
user tablespace managed by automatic storage maxsize 100 G;
update db cfg for mydb using logfilsiz 4000 logprimary 10 logsecond 100;
There is a question of temporary tablespace(s) though. By default they are created such that you cannot specify their maximum size, and really you should not. However, there is a workaround, although with a possible performance penalty.
Link to the manual.
In regards to PostgreSQL: As far as I know, you can't. However, there are workarounds explained in this posting:
https://bytes.com/topic/postgresql/answers/421532-database-size-limiting
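As a rough illustration of that kind of workaround, the database size can be monitored from within PostgreSQL and acted on externally (pg_database_size() is a built-in function; 'mydb' is just a placeholder name):
-- Report the current on-disk size of a database; a scheduled job could alert
-- or block new connections once this exceeds a chosen threshold.
SELECT pg_size_pretty(pg_database_size('mydb')) AS db_size;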
Hope this helps.
What is the meaning of "master heartbeat time period" in MySQL server, and how can I configure this variable in my.cnf?
As mentioned on the MySQL Performance Blog,
MASTER_HEARTBEAT_PERIOD is a value in seconds in the range 0 to 4294967, with a resolution in milliseconds. After the loss of a beat the slave I/O thread will disconnect and try to connect again.
You can configure it on a slave using syntax also mentioned in that article and in the queries below.
mysql_slave > STOP SLAVE;
mysql_slave > CHANGE MASTER TO MASTER_HEARTBEAT_PERIOD=1;
mysql_slave > START SLAVE;
More information on using CHANGE MASTER TO can be found in the MySQL documentation:
MASTER_HEARTBEAT_PERIOD sets the interval in seconds between replication heartbeats. Whenever the master's binary log is updated with an event, the waiting period for the next heartbeat is reset. interval is a decimal value having the range 0 to 4294967 seconds and a resolution in milliseconds; the smallest nonzero value is 0.001. Heartbeats are sent by the master only if there are no unsent events in the binary log file for a period longer than interval.
Setting interval to 0 disables heartbeats altogether. The default value for interval is equal to the value of slave_net_timeout divided by 2.
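To verify the effective setting on a replica, the heartbeat-related status variables can be inspected (a sketch; as far as I know these status variables are available on 5.5+ replicas):
-- On the slave: current heartbeat interval and number of heartbeats received
SHOW GLOBAL STATUS LIKE 'Slave_heartbeat_period';
SHOW GLOBAL STATUS LIKE 'Slave_received_heartbeats';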
I'm debugging a problem with slow queries on a MySQL server. Queries normally complete in 100-400 milliseconds but sometimes rocket to tens or hundreds of seconds.
The queries are generated by an application over which I have no control, and there are multiple databases (one for each customer). The slow queries seem to appear randomly, and neither RAM, disk nor CPU is under load when the slow queries are logged. When I run the queries manually, they run fine (in milliseconds), which makes me suspect locking issues in combination with other read and write queries. The queries themselves are horrible (unable to use an index in either the WHERE or the ORDER BY clause), but the largest tables are relatively small (up to 200,000 rows) and there are almost no JOINs. When I profile the queries, most of the time is spent sorting the result (in the case where the query runs fine).
I'm unable to reproduce the extreme slowness in a test environment, and my best idea right now is to stop the production MySQL server, create a copy of the databases, enable full query logging and start the server again. This way I should be able to replay the load and reproduce the problem. But the general query log seems to record only the query, not the target database for the query. Do I have any other record/replay options for MySQL?
You can use the slow query log: http://dev.mysql.com/doc/refman/5.1/en/slow-query-log.html
Just set the threshold to a very small value (hopefully you're running MySQL > 5.1).
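For example, something along these lines on a 5.1+ server (a threshold of 0 captures essentially every statement, so only leave it in place temporarily):
-- Enable the slow query log and lower the threshold so almost everything is captured
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 0;
-- Optionally also log queries that cannot use an index
SET GLOBAL log_queries_not_using_indexes = 'ON';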
Otherwise you can use tcpdump:
http://www.mysqlperformanceblog.com/2008/11/07/poor-mans-query-logging/
and of course if you use that, you may want to look at the percona toolkit's pt-query-digest to process the tcpdump output: http://www.percona.com/doc/percona-toolkit/2.1/pt-query-digest.html
For future reference, you may want to set up query and server monitoring:
https://github.com/box/Anemometer/wiki
and
https://github.com/box/RainGauge/wiki/What-is-Rain-Gauge%3F
I finally nailed the problem. The application is doing something like this:
# Roughly what the application does: fetch and process the result one row at a time
cursor = conn.cursor()
cursor.execute("SELECT * FROM `LargeTable`")
row = cursor.fetchone()
while row is not None:
    do_something_that_takes_a_while(row)  # slow per-row processing
    row = cursor.fetchone()
cursor.close()
It fetches and processes the result set one row at a time. If the loop takes 100 seconds to complete, the table stays locked on the server for 100 seconds.
Changing this setting on the MySQL server:
set global SQL_BUFFER_RESULT=ON;
made the slow queries disappear instantly, because result sets are now buffered into a temporary table, so the table lock can be released regardless of how slowly the application consumes the result set. The setting brings a host of other performance problems with it, but fortunately the server is dimensioned to handle them.
Percona is working on a new tool called Playback which does exactly what you want:
http://www.mysqlperformanceblog.com/2013/04/09/percona-playback-0-6-for-mysql-now-available/
From what I've been able to find on the web, MySQL stores statements that alter data in the binary log, which is then read by the slave. What remains unclear is what happens to those statements next. Are they replayed as if they had happened on the slave server?
For example, say there is a query with the current time in it, like "UPDATE something SET updatedat = NOW()", and due to the replication delay the query ends up on the slave a couple of seconds later. Will the values in the table be different?
Or if there is master-master replication, at time 1000 the following query happens on server 1:
UPDATE t SET data = 'old', updatedat = 1000 WHERE updatedat < 1000
At time 1001 on server 2 the following query happens:
UPDATE t SET data = 'new', updatedat = 1001 WHERE updatedat < 1001
Then, when server 2 fetches the replication log from server 1, will the value on server 2 be "old"? If so, is there a way to avoid it?
For example, say there is a query with the current time in it, like "UPDATE something SET updatedat = NOW()", and due to the replication delay the query ends up on the slave a couple of seconds later. Will the values in the table be different?
No. With row-based replication the row values themselves are copied to the slave, and with statement-based replication the master writes its timestamp into the binary log along with the statement, so the time will be the same.
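Which of the two behaviours applies depends on the server's binary log format, which can be checked and, if needed, changed (a sketch; switching to ROW makes the slave apply the master's row images instead of re-executing the statement):
-- See which replication format the server is using
SHOW GLOBAL VARIABLES LIKE 'binlog_format';
-- Illustrative switch to row-based replication (requires the SUPER privilege)
SET GLOBAL binlog_format = 'ROW';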
I have a MySQL database that is continually growing.
Every once in a while I OPTIMIZE all the tables. Would this be the sort of thing I should put on a cron job daily or weekly?
Are there any specific tasks I could set to run automatically that keeps the database in top performance?
Thanks
Ben
You can get optimization suggestions for the tables in your database by executing this query:
SELECT * FROM `db_name`.`table_name` PROCEDURE ANALYSE(1, 10);
This will suggest an optimal field type for each column; you then have to ALTER your tables so that the optimal field types are actually used.
You can also profile your queries in order to make sure that proper indexing has been done on the tables.
I suggest you try SQLyog, which offers both a "Calculate Optimal Datatype" feature and a "SQL Profiler", which will definitely help you in optimizing server performance.
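As a sketch of the kind of scheduled maintenance the question mentions (the table name is just a placeholder; OPTIMIZE TABLE locks the table while it runs, so it is usually scheduled during a low-traffic window):
-- Rebuild the table and refresh its index statistics
OPTIMIZE TABLE my_table;
ANALYZE TABLE my_table;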