Postgres vs MySQL: Commands out of sync;

MySQL scenario:
When I execute SELECT queries in MySQL from multiple threads, I get the message "Commands out of sync; you can't run this command now". I found that this is due to the limitation that the results of one query must be consumed before another query can be issued. C++ example:
void DataProcAsyncWorker::Execute()
{
    std::thread(&DataProcAsyncWorker::Run, this).join();
}

void DataProcAsyncWorker::Run()
{
    sql::PreparedStatement *prep_stmt = c->con->prepareStatement(query);
    ...
}
Important:
I can't avoid using a separate thread per query (SELECT, INSERT, etc.), because the module I'm building is integrated with NodeJS and would otherwise block the thread until the result has been obtained. For this reason I need to run the query in the background (on a new thread) and resolve the "promise" with the result obtained from MySQL.
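To illustrate, here is a minimal sketch of that background-thread pattern, with std::promise/std::future standing in for the real NodeJS promise machinery; runQuery() is a hypothetical stub for the Connector/C++ calls:

#include <future>
#include <memory>
#include <string>
#include <thread>

// Hypothetical stub for the real work (prepare, execute, consume results).
std::string runQuery(const std::string& query) {
    return "result of: " + query;
}

// Runs the query on a worker thread and returns a future; in the real
// module, the NodeJS binding would resolve the JS promise instead.
std::future<std::string> queryAsync(const std::string& query) {
    auto prom = std::make_shared<std::promise<std::string>>();
    std::future<std::string> fut = prom->get_future();
    std::thread([prom, query] {
        prom->set_value(runQuery(query));  // blocking work happens off the main thread
    }).detach();
    return fut;
}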
Important:
I am keeping several connections open (for example, 10), and each SQL call picks one of them. That is: 1. A connection pool that contains 10 established connections, e.g.:
for (int i = 0; i < 10; i++) {
    Com *c = new Com;
    c->id = i;
    c->con = openConnection();
    c->con->setSchema("gateway");
    conns.push_back(c);
}
The problem occurs when executing >= 100 SELECT queries per second. I believe that even with the load balanced across connections, 100 queries per second is a high number, and a connection (e.g. conns.at(10)) may still be busy with a result that has not yet been consumed.
My question:
Does PostgreSQL have this limitation as well?
Note:
In the PHP docs for MySQL, mysqli_free_result is required after mysqli_query; otherwise you get a "Commands out of sync" error. By contrast, the PostgreSQL documentation says pg_free_result is completely optional after pg_query.
That said, has anyone using PostgreSQL faced problems like "commands are out of sync"? Perhaps the error goes by another name there?
Or is PostgreSQL able to deal with this automatically, the equivalent of free_result happening invisibly, so the error never reaches me?

You need to finish using one prepared statement (or cursor or similar construct) before starting another.
"Commands out of sync" is often cured by adding the closing statement.

"Question:
Does PostgreSQL have this limitation as well? Or in PostgreSQL there is also such a limitation?"
No, PostgreSQL does not have this limitation. In libpq's normal query mode the entire result set is returned at once, so there is no half-consumed result to leave the connection out of sync.
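As a quick sanity check, here is a minimal libpqxx sketch (the connection string is a placeholder): each exec() hands back a fully materialized pqxx::result, so back-to-back queries on the same connection just work.

#include <pqxx/pqxx>
#include <iostream>

int main() {
    // Placeholder connection string; adjust dbname/user for your setup.
    pqxx::connection conn("dbname=gateway user=postgres");
    pqxx::nontransaction txn(conn);
    pqxx::result r1 = txn.exec("SELECT 1");
    pqxx::result r2 = txn.exec("SELECT 2");  // fine: r1 is already fully fetched
    std::cout << r1[0][0].as<int>() + r2[0][0].as<int>() << "\n";
}

Note that a single connection still must not be used by two threads at the same time; what goes away is the need to drain or free a pending result before issuing the next query.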

Related

Postgres vs MySQL: Commands out of sync;

MySQL scenario:
When I execute SELECT queries in MySQL from multiple threads, I get the message "Commands out of sync; you can't run this command now". I found that this is due to the limitation that the results of one query must be consumed before another query can be issued.
C++ example:
void DataProcAsyncWorker::Execute()
{
    std::thread(&DataProcAsyncWorker::Run, this).join();
}

void DataProcAsyncWorker::Run()
{
    sql::PreparedStatement *prep_stmt = c->con->prepareStatement(query);
    ...
}
Important:
I can't avoid using a separate thread per query (SELECT, INSERT, etc.), because the module I'm building is integrated with NodeJS and would otherwise block the thread until the result has been obtained. For this reason I need to run the query in the background (on a new thread) and resolve the "promise" with the result obtained from MySQL.
Important:
I am keeping several connections open (for example, 10), and each SQL call picks one of them.
This is:
1. A connection pool that contains 10 established connections, e.g.:
for (int i = 0; i < 10; i++) {
    Com *c = new Com;
    c->id = i;
    c->con = openConnection();
    c->con->setSchema("gateway");
    conns.push_back(c);
}
2. The problem occurs when executing >= 100 SELECT queries per second. I believe that even with the load balanced across connections, 100 queries per second is a high number, and a connection (e.g. conns.at(50)) may still be busy with a result that has not yet been consumed.
My question:
A. Does PostgreSQL have this limitation as well?
B. Which SQL server is recommended for a large number of queries per second without the need to open new connections? That is, on a single connection conns.at(0), can I execute SELECT commands from 2 simultaneous threads?
Additional:
1. I can even create a larger number of connections in the pool, but when I simulate more queries per second than the number of pre-set connections, I get the error "Commands out of sync". The only solution I found was a mutex, which is bad for performance.
I found that PostgreSQL deals with this (queuing) very efficiently, unlike MySQL where I need to call _free_result; in PostgreSQL I can run multiple queries on the same connection without receiving the error "Commands out of sync".
Note: I did the test using libpqxx (a C++ library for connecting to and querying a PostgreSQL server) and it really worked like a charm, without giving me any headache.
Note: I don't know whether it allows multithreaded execution or whether execution is done synchronously on the server side for each connection; the only thing I know is that this error does not occur in PostgreSQL.
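For reference, one way to avoid the mutex-around-every-query workaround, regardless of the server, is a checkout-style pool: each connection is owned by exactly one thread until its results are fully consumed, so no connection ever sees a second command while a result is pending. A hedged sketch (the Conn type parameter would be the question's Com; only the pool's bookkeeping is locked, not the queries themselves):

#include <condition_variable>
#include <mutex>
#include <queue>

template <typename Conn>
class ConnectionPool {
public:
    explicit ConnectionPool(std::queue<Conn*> conns) : free_(std::move(conns)) {}

    // Blocks only while every connection is checked out.
    Conn* acquire() {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return !free_.empty(); });
        Conn* c = free_.front();
        free_.pop();
        return c;
    }

    // Return the connection only after its results have been consumed.
    void release(Conn* c) {
        { std::lock_guard<std::mutex> lk(m_); free_.push(c); }
        cv_.notify_one();
    }

private:
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<Conn*> free_;
};

Queries on different connections still run in parallel; the mutex only guards the queue of free connections, which is far cheaper than serializing every query.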

How to Close a Connection When Using Jupyter SQL Magic?

I'm using SQL Magic to connect to a db2 instance. However, I can't seem to find the syntax anywhere on how to close the connection when I'm done querying the database.
You cannot explicitly close a connection using Jupyter SQL Magic. In fact, that is one of the shortcomings of using Jupyter SQL Magic to connect to Db2: you need to close your session to close the Db2 connection. Hope this helps.
This probably isn't very useful, and to the extent it is, it's probably not guaranteed to work in the future. But if you need a really hackish way to close the connection, I was able to do it this way (for a Postgres db; I assume it's similar for Db2):
In[87]: connections = %sql -l
Out[87]: {'postgresql://ngd@node1:5432/graph': <sql.connection.Connection at 0x7effdbcf6b38>}
In[88]: conn = connections['postgresql://ngd@node1:5432/graph']
In[89]: conn.session.close()
In[90]: %sql SELECT 1
...
StatementError: (sqlalchemy.exc.ResourceClosedError) This Connection is closed
[SQL: SELECT 1]
[parameters: [{'__name__': '__main__', '__doc__': 'Automatically created module for IPython interactive environment', '__package__': None, '__loader__': None, '__s ... (123202 characters truncated) ... stgresql://ngd@node1:5432/graph']", '_i28': "conn = connections['postgresql://ngd@node1:5432/graph']\nconn.session.close()", '_i29': '%sql SELECT 1'}]]
A big problem: if you want to reconnect, that doesn't seem to work. Even after running %reload_ext sql and trying to connect again, it still thinks the connection is closed when you try to use it. So unless someone knows how to fix that behavior, this is only useful for disconnecting when you don't want to reconnect again (to the same db with the same params) before restarting the kernel.
You can also restart the kernel.
This is the simplest way I've found to close all connections at the end of the session. You must restart the kernel to be able to re-establish the connection.
connections = %sql -l
[c.session.close() for c in connections.values()]
Sorry for being late, but I've just started working with SQL Magic and got annoyed with the constant errors appearing. It's a bit of an awkward patch, but this helped me use it.
def multiline_qry(qry):
    try:
        %sql {qry}
    except Exception as ex:
        if str(type(ex).__name__) != 'ResourceClosedError':
            template = "An exception of type {0} occurred. Arguments:\n{1!r}"
            message = template.format(type(ex).__name__, ex.args)
            print(message)

qry = '''DROP TABLE IF EXISTS EMPLOYEE;
CREATE TABLE EMPLOYEE(firstname varchar(50), lastname varchar(50));
INSERT INTO EMPLOYEE VALUES('Tom','Mitchell'),('Jack','Ryan');
'''
multiline_qry(qry)
Log out of the notebook first if you want to close the connection.

General error: 1615 Prepared statement needs to be re-prepared

I have been running into this issue every time I try to sync a medium-sized JSON object to my database so we can perform some reporting on it. While looking into what can cause it, I came across these links on the matter:
http://blog.corrlabs.com/2013/04/mysql-prepared-statement-needs-to-be-re.html
http://bugs.mysql.com/bug.php?id=42041
Both seem to point me in the direction of table_definition_cache. However, they say the issue is due to a mysqldump happening on the server at the same time, and I can assure you that this is not the case. Furthermore, I have slimmed the query down to insert only one object at a time.
public function fire($job, $data)
{
    foreach (unserialize($data['message']) as $org)
    {
        // Ignore ID 33421, this will time out.
        // It contains all users in the system.
        if ($org->id != 33421) {
            $organization = new Organization();
            $organization->orgsync_id = $org->id;
            $organization->short_name = $org->short_name;
            $organization->long_name = $org->long_name;
            $organization->category = $org->category->name;
            $organization->save();

            $org_groups = $this->getGroupsInOrganization($org->id);
            if (!is_int($org_groups))
            {
                foreach ($org_groups as $group)
                {
                    foreach ($group->account_ids as $account_id)
                    {
                        $student = Student::where('orgsync_id', '=', $account_id)->first();
                        if (is_object($student))
                        {
                            $student->organizations()->attach($organization->id, array('is_officer' => ($group->name == 'Officers')));
                        }
                    }
                }
            }
        }
    }

    $job->delete();
}
This is the code that is running when the error is thrown, which normally comes in the form of:
SQLSTATE[HY000]: General error: 1615 Prepared statement needs to be re-prepared (SQL: insert into `organization_student` (`is_officer`, `organization_id`, `student_id`) values (0, 284, 26))
It is then followed by this error, repeated 3 times:
SQLSTATE[HY000]: General error: 1615 Prepared statement needs to be re-prepared (SQL: insert into `organizations` (`orgsync_id`, `short_name`, `long_name`, `category`, `updated_at`, `created_at`) values (24291, SA, Society of American, Professional, 2014-09-15 16:26:01, 2014-09-15 16:26:01))
If anyone can point me in the right direction I would be very grateful. I am more curious about what actually triggers the error than about finding the cause of this specific issue. It also seems to be somewhat common in Laravel applications when using the ORM.
While mysqldump is the most commonly reported cause of this, it is not the only one.
In my case, running artisan:migrate on any database will also trigger this error for other databases on the same server.
http://bugs.mysql.com/bug.php?id=42041
mentions table locks/flushes, which would be invoked by a mysqldump, so it is worth checking whether you have any migrations, locks, or flushes happening simultaneously.
Failing that, try switching the prepares to emulated:
'options' => [
    \PDO::ATTR_EMULATE_PREPARES => true
]
This error occurs while a mysqldump is in progress. It doesn't matter which DB is being dumped; wait for the dump to finish and the error will vanish. It is the table definitions being dumped that cause this error.
Yeah, I tried changing these MySQL settings, but it still occurs sometimes (mostly when heavy MySQL backups/dumps run at night):
table_open_cache: 128 => 16384
table_definition_cache: 1024 => 16384
I had a similar problem. In my case it seemed to be caused by using a view that itself used other views; the net effect might have been that it took several ms to process. It was particularly annoying because sometimes the error occurred and sometimes it did not. I programmed my way around it by creating temporary tables within the stored procedure rather than relying on the views. The server running the database reported MariaDB ver. 10.2.35.

Mysql + django exception: "Commands out of sync; you can't run this command now"

Running django via gunicorn to RDS (AWS mysql), I'm seeing this error in my gunicorn logs:
Exception _mysql_exceptions.ProgrammingError: (2014, "Commands out of sync; you can't run this command now") in <bound method Cursor.__del__ of <MySQLdb.cursors.Cursor object at 0x690ecd0>> ignored
I can't reliably reproduce it yet, nor can I track down the underlying code that's causing it.
I am using raw cursors in some places, following this pattern:
cursor = connections['read_only'].cursor()
sql = "select username from auth_user;"
cursor.execute(sql)
rows = cursor.fetchall()
usernames = []
for row in rows:
    usernames.append(row[0])
In some places I immediately reuse the cursor for another query execute() / fetchall() pattern. Sometimes I don't.
I also use raw manager queries in some place.
I'm not explicitly closing cursors, but I don't believe that I should.
Other than that: I'm not using any stored procedures, no init_command parameters, nor anything else indicated in the other answers I've seen posted here.
Any ideas or suggestions for how to debug would be appreciated.
Check out https://code.djangoproject.com/ticket/17289
you'll need to do something like:
while cursor.nextset() is not None:
    if verbose:
        print "rows modified %s" % cursor.rowcount

"foreach" loop : Using all cores in R (especially if we are sending sql queries inside foreach loop)

I intend to use "foreach" to utilize all the cores in my CPU. The catch is that I need to send a SQL query inside the loop. The script works fine with a normal 'for' loop, but it gives the following error when I change it to 'foreach'.
The error is :
select: Interrupted system call
select: Interrupted system call
select: Interrupted system call
Error in { : task 1 failed - "expired MySQLConnection"
The code I used is:
library(foreach)
library(doMC)
library(RMySQL)
library(multicore)
registerDoMC(cores=6)
m <- dbDriver("MySQL", max.con = 100)
con <- dbConnect(m, user = "*****", password = "******", host = "**.**.***", dbname = "dbname")
list <- dbListTables(con)
foreach(i = 1:length(list)) %dopar% {
    query <- paste("SELECT * FROM ", list[i], " WHERE `CLOSE` BETWEEN 1 AND 100", sep = "")
    t <- dbGetQuery(con, query)
}
Though 'foreach' works fine on my system for all other purposes, it gives this error only in the case of SQL queries. Is there a way to send SQL queries inside a 'foreach' loop?
My suggestion is this:
Move the database queries outside the loop, and lock access so you don't do parallel database queries. I think that will speed things up too, as you won't have parallel disk access, while still being able to do parallel processing.
Meaning (pseudo code):
db = connect to database
threadlock = lock()
parfor {
    threadlock.lock()
    result = db query (pull all data here, as you can't process while you load without keeping the database locked)
    threadlock.unlock()
    process resulting data (which is now just data, and not a SQL object)
}
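For what it's worth, the same pattern sketched in C++ with std::mutex (fetchRows() and process() are stubs standing in for the real query and processing): only the query itself is serialized, while the processing still runs in parallel.

#include <mutex>
#include <string>
#include <thread>
#include <vector>

using Rows = std::vector<std::string>;  // placeholder result type

// Stubs standing in for the real database query and the CPU-bound work.
Rows fetchRows(const std::string& table) { return {table + " row"}; }
void process(const Rows&) { /* parallel processing would go here */ }

std::mutex dbLock;  // serializes access to the single shared connection

void worker(const std::string& table) {
    Rows rows;
    {
        std::lock_guard<std::mutex> guard(dbLock);
        rows = fetchRows(table);  // only the query is serialized
    }
    process(rows);  // runs concurrently with other workers
}

int main() {
    std::vector<std::thread> workers;
    for (const char* t : {"table1", "table2", "table3"})
        workers.emplace_back(worker, std::string(t));
    for (auto& w : workers) w.join();
}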