I've run into a problem where I run some query and the mysqld process starts using 100% CPU, without ever finishing. I want to pinpoint this query. The problem is that log/development.log contains only queries that have already finished. Any ideas?
I think you have a few options for this. The first is to take a close look at your development.log and see which actions are causing it. Look at the queries you're asking Rails to run and try to pinpoint the offending one. If a query is taking a long time, it usually means you've hit something like an N+1 query pattern, a missing index, or some other performance killer.
You say that the dev log only has queries that have finished. Can't you work out what the next query to run would be?
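Once you have a suspect query from the log, EXPLAIN is a quick way to check whether it uses an index without actually running it (the table and column names below are just placeholders):
EXPLAIN SELECT * FROM users WHERE email = 'someone@example.com';
A type of ALL in the EXPLAIN output means a full table scan, which on a large table is a common cause of this kind of runaway query.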
Your other options involve starting mysqld with logging enabled (I think the names of some of these options have changed in newer versions; see the note below):
mysqld --log[=file_name] --log-slow-queries[=file_name]
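For reference, in newer MySQL versions you can switch the equivalent logs on at runtime with system variables, roughly like this (the file paths are just examples):
SET GLOBAL general_log = 'ON';
SET GLOBAL general_log_file = '/var/log/mysql/general.log';
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL slow_query_log_file = '/var/log/mysql/slow.log';
SET GLOBAL long_query_time = 1;  -- log statements slower than 1 second
The general query log writes statements as the server receives them, so it will contain your query even if it never finishes.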
Or show the current statement list using processlist from within the mysql client:
show processlist;
To prevent this kind of thing from happening again, you could also take some time to look at a Rails performance monitor like RPM from New Relic (http://www.newrelic.com/).
I hope this helps!
You could take a look at running/unfinished statements via the
show processlist;
command.
If you have access to MySQL, consider the SQL query
SHOW PROCESSLIST
Or from the command line:
mysqladmin processlist
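Either way, the Time and Info columns tell you how long each statement has been running and what it is. If you spot the runaway statement, you can stop it from another session with KILL (the id below is a placeholder; use the Id value from the process list):
KILL QUERY 1234;  -- stops just the running statement
KILL 1234;        -- closes the whole connection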
Alternatively, the most powerful way is to override the 'execute' method of the ActiveRecord::Base connection instance. This article shows the general approach:
http://www.misuse.org/science/2006/12/12/sql-logging-in-rails/
You put this code into application.rb:
# define SQL_LOG_FILE, SQL_LOG_MAX_LINES
connection = ActiveRecord::Base.connection
class << connection
  alias :original_exec :execute

  def execute(sql, *name)
    # try to log the SQL statement, but ignore any errors raised in this block;
    # we log before executing, in case the execution itself raises an error
    begin
      lines = File.exist?(SQL_LOG_FILE) ? IO.readlines(SQL_LOG_FILE) : []
      log = File.new(SQL_LOG_FILE, "w+")
      # keep the log to the specified maximum number of lines
      if lines.length > SQL_LOG_MAX_LINES
        lines.slice!(0..(lines.length - SQL_LOG_MAX_LINES))
      end
      lines << Time.now.strftime("%x %I:%M:%S %p") + ": " + sql + "\n"
      log.write(lines.join)
      log.close
      $sql_log = sql
    rescue Exception => e
      # swallow logging errors so they never break the original query
    end
    # execute the original statement
    original_exec(sql, *name)
  end # def execute
end # class <<
I have created a Discord bot that interacts with a MySQL database, but when you run a command that uses the UPDATE query it doesn't execute the update query but executes sleep, meaning the data in the DB isn't changed.
(from comment)
@client.command()
async def SetJob(ctx, uid: str, rank: str):
    disout = exec("UPDATE users SET 'job'='{0}' WHERE identifier='{1}'".format(rank, uid))
    if ctx.message.author == client.user:
        return
    if ctx.message.author.id not in whitelisted:
        await ctx.send(embed=discord.Embed(title="You are not authorized to use this bot", description='Please contact Not Soviet Bear to add you to the whitelisted members list', color=discord.Color.red()))
        return
    else:
        await ctx.send(embed=discord.Embed(title="Job Change", description="Job changed to '{0}' for Identifier'{1}'".format(rank, uid), color=discord.Color.blue()))
I assume your "bot" is periodically doing SHOW PROCESSLIST? Well, the UPDATE probably finished so fast that it did not see the query.
The Sleep says that the connection is still sitting there, but doing nothing. (There is no "sleep command"; "Sleep" indicates that no query is running at that instant.)
So, perhaps the question is "why did my update not do anything?". To debug that (or get help from us):
Check for errors after running the update. (You should always do this.)
Figure out the exact text of the generated SQL. (Sometimes there is an obvious syntax error or failure to escape, say, quotes.)
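For example, with hypothetical values rank = 'police' and uid = 'abc123', the format() call above produces the first statement below. The single quotes around job turn the column name into a string literal, which MySQL rejects with a syntax error; backticks (or no quotes at all) around the column name are what it actually needs:
UPDATE users SET 'job'='police' WHERE identifier='abc123';   -- syntax error: 'job' is a string literal
UPDATE users SET `job`='police' WHERE identifier='abc123';   -- works
Passing the values as query parameters instead of string-formatting them into the SQL would also avoid quoting and injection problems.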
I'm using SQL Magic to connect to a db2 instance. However, I can't seem to find the syntax anywhere on how to close the connection when I'm done querying the database.
You cannot explicitly close a connection using Jupyter SQL Magic. In fact, that is one of the shortcomings of using Jupyter SQL Magic to connect to Db2. You need to close your session to close the Db2 connection. Hope this helps.
This probably isn't very useful, and to the extent it is, it's probably not guaranteed to work in the future. But if you need a really hackish way to close the connection, I was able to do it this way (for a Postgres DB; I assume it's similar for Db2):
In[87]: connections = %sql -l
Out[87]: {'postgresql://ngd#node1:5432/graph': <sql.connection.Connection at 0x7effdbcf6b38>}
In[88]: conn = connections['postgresql://ngd#node1:5432/graph']
In[89]: conn.session.close()
In[90]: %sql SELECT 1
...
StatementError: (sqlalchemy.exc.ResourceClosedError) This Connection is closed
[SQL: SELECT 1]
[parameters: [{'__name__': '__main__', '__doc__': 'Automatically created module for IPython interactive environment', '__package__': None, '__loader__': None, '__s ... (123202 characters truncated) ... stgresql://ngd#node1:5432/graph']", '_i28': "conn = connections['postgresql://ngd#node1:5432/graph']\nconn.session.close()", '_i29': '%sql SELECT 1'}]]
A big problem is that if you want to reconnect, that doesn't seem to work. Even after running %reload_ext sql and trying to connect again, it still thinks the connection is closed when you try to use it. So unless someone knows how to fix that behavior, this is only useful for disconnecting when you don't want to re-connect again (to the same DB with the same params) before restarting the kernel.
You can also restart the kernel.
This is the simplest way I've found to close all connections at the end of the session. You must restart the kernel to be able to re-establish the connection.
connections = %sql -l
[c.session.close() for c in connections.values()]
Sorry for being too late, but I've just started working with SQL Magic and got annoyed with the constant errors appearing. It's a bit of an awkward patch, but this helped me use it.
def multiline_qry(qry):
    try:
        %sql {qry}
    except Exception as ex:
        if str(type(ex).__name__) != 'ResourceClosedError':
            template = "An exception of type {0} occurred. Arguments:\n{1!r}"
            message = template.format(type(ex).__name__, ex.args)
            print(message)
qry = '''DROP TABLE IF EXISTS EMPLOYEE;
CREATE TABLE EMPLOYEE(firstname varchar(50),lastname varchar(50));
INSERT INTO EMPLOYEE VALUES('Tom','Mitchell'),('Jack','Ryan');
'''
multiline_qry(qry)
Log out of the notebook first if you want to close the connection.
I am using the Mariaex.start_link method to establish a connection with a MySQL database, and it returns a pid. I was wondering what the best practice is for managing these pids, i.e. close and create new ones every time? Keep 1, 2, ... n pid(s) around as needed?
Also, how would I close that connection or kill that pid? I tried Process.exit with :normal, which doesn't stop it, and I tried it with :kill, but I get an error (probably from Mariaex) and it doesn't seem clean to kill it that way.
Thanks!
You might refer to the Ecto codebase to see how it handles this case.
Basically, it starts a connection, executes a query and stops the Mariaex GenServer immediately after:
with {:ok, conn} <- Mariaex.start_link(opts) do
  value = Ecto.Adapters.MySQL.Connection.execute(conn, sql, [], opts)
  GenServer.stop(conn)
  value
end
Running Django via gunicorn against RDS (AWS MySQL), I'm seeing this error in my gunicorn logs:
Exception _mysql_exceptions.ProgrammingError: (2014, "Commands out of sync; you can't run this command now") in <bound method Cursor.__del__ of <MySQLdb.cursors.Cursor object at 0x690ecd0>> ignored
I can't reliably reproduce it yet, nor can I track down the underlying code that's causing it.
I am using raw cursors in some places, following this pattern:
cursor = connections['read_only'].cursor()
sql = "select username from auth_user;"
cursor.execute(sql)
rows = cursor.fetchall()
usernames = []
for row in rows:
    usernames.append(row[0])
In some places I immediately reuse the cursor for another query execute() / fetchall() pattern. Sometimes I don't.
I also use raw manager queries in some places.
I'm not explicitly closing cursors, but I don't believe that I should.
Other than that: I'm not using any stored procedures, no init_command parameters, nor anything else indicated in the other answers I've seen posted here.
Any ideas or suggestions for how to debug would be appreciated.
Check out https://code.djangoproject.com/ticket/17289
You'll need to do something like:
while cursor.nextset() is not None:   # drain any remaining result sets
    if verbose:
        print "rows modified %s" % cursor.rowcount
I have queries that return thousands of results; is it possible to show only the query time, without the actual results, in the MySQL console or from the command line?
Use SET profiling = 1; at the mysql prompt.
See the MySQL documentation on SHOW PROFILES for more details.
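A minimal profiling session looks something like this (the SELECT is just a stand-in for your own query):
SET profiling = 1;
SELECT * FROM your_table WHERE some_column = 'some_value';
SHOW PROFILES;               -- lists recent statements with their durations
SHOW PROFILE FOR QUERY 1;    -- per-stage timing breakdown for statement 1
SHOW PROFILES reports the duration of each statement, so you can read the timing off it, although the query itself still has to execute (and print its rows unless you redirect them).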
It's not possible to get the execution time without fetching the result or actually executing the SQL.
See why we cannot get the execution time without actually executing the query.
If you're using Linux:
Create a file query.sql
Enter the query to test into it. e.g. select * from table1
Run this command:
time mysql your_db_name -u'db_user_name' -p'your_password' < query.sql > /dev/null
The output will look something like this:
real 0m4.383s
user 0m0.022s
sys 0m0.004s
the "real" line is what you're looking at. In the above example, the query took 4.38 seconds.
Obviously, entering your DB password on the command line is not such a great idea, but will do as a quick and dirty workaround.