Problem with querying in Amazon RDS - MySQL

Hi, I have a Python script that connects to an Amazon RDS instance and checks for new entries.
The script works perfectly against localhost, but on RDS it does not detect the new entry; once I cancel the script and run it again, I get the new entry. For testing I tried it out like this:
cont = MySQLdb.connect("localhost", "root", "password", "DB")
cursor = cont.cursor()
for i in range(0, 100):
    cursor.execute("SELECT COUNT(*) FROM box")
    A = cursor.fetchone()
    print A
During this process, when I add a new entry, the script does not detect it; but when I close the connection and run the script again, the new entry shows up. Why is this? I checked the query cache and it was at 0. What else am I missing?

I have seen this happen in MySQL command-line clients as well.
My understanding (from other people linking to this URL) is that Python's DB-API often silently creates transactions: http://www.python.org/dev/peps/pep-0249/
If that is true, then your cursor is looking at a consistent snapshot of the data, even after another transaction adds rows. You could try calling rollback() on the connection inside the for loop to end the implicit transaction that the SELECT is running in.
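For illustration, a minimal sketch of the polling loop with that fix (assuming the same MySQLdb setup as in the question; the one-second pause is my addition, just to make the polling visible):

import time
import MySQLdb

cont = MySQLdb.connect("localhost", "root", "password", "DB")
cursor = cont.cursor()
for i in range(0, 100):
    cursor.execute("SELECT COUNT(*) FROM box")
    print(cursor.fetchone())
    cont.rollback()  # end the implicit transaction; the next SELECT starts a fresh one
    time.sleep(1)    # assumption: pause between polls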

I got the solution to this: it is due to the transaction isolation level in MySQL. All I had to do was set the default transaction isolation level:
transaction-isolation = READ-COMMITTED
And since I am using Django, I had to add this to the Django database settings:
'OPTIONS': {
    "init_command": "SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED"
}
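For context, here is roughly how that option sits inside a full Django DATABASES entry (the host and credentials below are placeholders):

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'DB',
        'USER': 'root',
        'PASSWORD': 'password',
        'HOST': 'your-instance.rds.amazonaws.com',  # placeholder RDS endpoint
        'PORT': '3306',
        'OPTIONS': {
            "init_command": "SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED",
        },
    }
}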

Related

MaxScale proxy does not route read requests from flask-sqlalchemy to the slaves

I have a MaxScale MariaDB cluster with one master and two slaves. I am using the flask-sqlalchemy ORM for querying and writing.
I have written read queries in the style
db.session.query(User).join()...
Now all my read queries are going to the MaxScale master node.
Below are the MaxScale logs:
2021-09-14 17:38:26 info : (1239) (Read-Write-Service) > Autocommit: [disabled], trx is [open], cmd: (0x03) COM_QUERY, plen: 287, type: QUERY_TYPE_READ, stmt: SELECT some_col FROM user
2021-09-14 17:38:26 info : (1239) [readwritesplit] (Read-Write-Service) Route query to master: Primary <
I have tried other ways too:
import mysql.connector

conn = mysql.connector.connect(...)
conn.autocommit = True  # in mysql.connector, autocommit is a property
cursor = conn.cursor()
cursor.execute(query)
This works fine and routes the query to one of the slaves.
But most of my code is written in ORM style. Is there any way to achieve this while using flask-sqlalchemy?
If autocommit is disabled, you always have an open transaction: use START TRANSACTION READ ONLY to start an explicit read-only transaction. This allows MaxScale to route the transaction to a slave.
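As a minimal sketch of that suggestion from flask-sqlalchemy (untested against MaxScale; db and the query come from the question, while dropping to the raw DBAPI connection is my assumption, since the ORM session does not expose a read-only transaction flag for MySQL):

# Hedged sketch: wrap the read in an explicit read-only transaction so
# MaxScale's readwritesplit can route it to a slave.
raw = db.engine.raw_connection()  # DBAPI connection underneath flask-sqlalchemy
try:
    cur = raw.cursor()
    cur.execute("START TRANSACTION READ ONLY")
    cur.execute("SELECT some_col FROM user")
    rows = cur.fetchall()
    raw.commit()  # ends the read-only transaction
finally:
    raw.close()   # return the connection to the pool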

MySQL executes sleep command when UPDATE query is used

I have created a Discord bot that interacts with a MySQL database, but when you run a command that uses an UPDATE query, it doesn't execute the UPDATE query but executes Sleep, meaning the data in the DB isn't changed.
(from comment)
@client.command()
async def SetJob(ctx, uid: str, rank: str):
    disout = exec("UPDATE users SET 'job'='{0}' WHERE identifier='{1}'".format(rank, uid))
    if ctx.message.author == client.user:
        return
    if ctx.message.author.id not in whitelisted:
        await ctx.send(embed=discord.Embed(title="You are not authorized to use this bot", description='Please contact Not Soviet Bear to add you to the whitelisted members list', color=discord.Color.red()))
        return
    else:
        await ctx.send(embed=discord.Embed(title="Job Change", description="Job changed to '{0}' for Identifier '{1}'".format(rank, uid), color=discord.Color.blue()))
I assume your "bot" is periodically doing SHOW PROCESSLIST? Well, the UPDATE probably finished so fast that the poll never saw it running.
The Sleep entry says that the connection is still sitting there but doing nothing. (There is no "sleep command"; "Sleep" indicates that no query is running at that instant.)
So perhaps the real question is "why did my UPDATE not do anything?". In order to debug that (or get help from us), do the two checks below; a sketch follows.
Check for errors after running the UPDATE. (You should always do this.)
Figure out the exact text of the generated SQL. (Sometimes there is an obvious syntax error or a failure to escape, say, quotes.)
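A sketch of both checks with a plain DB-API cursor (conn and cur are assumed to come from MySQLdb or mysql.connector; using %s placeholders also lets the driver handle the quoting that the string-formatted version gets wrong):

sql = "UPDATE users SET job = %s WHERE identifier = %s"
print("SQL:", sql, "params:", (rank, uid))  # step 2: see exactly what is sent
try:
    cur.execute(sql, (rank, uid))  # the driver escapes the parameters
    conn.commit()                  # without a commit the change may be rolled back
    print(cur.rowcount, "row(s) updated")  # 0 usually means the WHERE matched nothing
except Exception as e:
    print("UPDATE failed:", e)     # step 1: always check for errors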

Postgres vs MySQL: Commands out of sync;

MySQL scenario:
When I execute SELECT queries in MySQL using multiple threads, I get the following message: "Commands out of sync; you can't run this command now". I found that this is due to the limitation of having to "consume" the results before making another query on the same connection. C++ example:
void DataProcAsyncWorker::Execute()
{
    std::thread(&DataProcAsyncWorker::Run, this).join();
}

void DataProcAsyncWorker::Run()
{
    sql::PreparedStatement *prep_stmt = c->con->prepareStatement(query);
    ...
}
Important:
I can't avoid using multiple threads per query (SELECT, INSERT, etc.) because the module I'm building is being integrated with NodeJS, which "locks" the thread until the result is obtained; for this reason I need to run the query in the background (a new thread) and resolve the "promise" with the result obtained from MySQL.
Important:
I am keeping several connections open [example: 10], and with each SQL call the function chooses a connection. That is, a connection pool containing 10 established connections, e.g.:
for (int i = 0; i < 10; i++) {
    Com *c = new Com;
    c->id = i;
    c->con = openConnection();
    c->con->setSchema("gateway");
    conns.push_back(c);
}
The problem occurs when executing >= 100 SELECT queries per second. I believe that even with the connections balanced, 100 queries per second is a high rate, and a connection (e.g. conns.at(10)) may still be busy with a result set that has not yet been consumed.
My question:
Does PostgreSQL have this limitation as well?
Note:
The PHP documentation for MySQL says the mysqli_free_result command is required after mysqli_query; otherwise you get a "Commands out of sync" error. In contrast, the PostgreSQL documentation says pg_free_result is completely optional after pg_query.
That said, has anyone using PostgreSQL already faced problems related to "commands out of sync"? Maybe there is another name for this error?
Or is PostgreSQL able to deal with this problem automatically, so that free_result is effectively called invisibly by the server without causing this error?
You need to finish using one prepared statement (or cursor or similar construct) before starting another on the same connection.
"Commands out of sync" is often cured by adding the missing closing/consuming call.
"Question: Does PostgreSQL have this limitation as well?"
No, PostgreSQL does not have this limitation.
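To illustrate the rule in Python terms (a sketch; the connection and query are placeholders): fully consume or close each result set, and never let two threads interleave statements on the same connection:

import threading

lock = threading.Lock()  # one lock per pooled connection

def run_query(conn, query):
    # Serialize access: execute, fetch, and close must all finish before
    # another thread starts its own statement on this connection,
    # otherwise MySQL answers "Commands out of sync".
    with lock:
        cur = conn.cursor()
        cur.execute(query)
        rows = cur.fetchall()  # consume the whole result set
        cur.close()
        return rows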

Closing active connections using RMySQL

As per my question earlier today, I suspect I have an issue with unclosed connections that is blocking data from being inserted into my MySQL database. Data is being accepted into tables that are not currently being used (hence I suspect many open connections are preventing uploads into that particular table).
I am using RMySQL on Ubuntu servers to upload data to a MySQL database.
I'm looking for a way to a) determine whether connections are open and b) close them if they are. Running exec sp_who and exec sp_who2 from the SQL command line returns an error (these are SQL Server procedures, not MySQL).
Another note: I am able to connect, complete the uploading process, and end the R session successfully, yet there is no data on the server (checked via the SQL command line) when I try only that table.
(By the way: if all else fails, would simply deleting the table and creating a new one with the same name fix it? It would be quite a pain, but doable.)
a. dbListConnections(dbDriver(drv = "MySQL"))
b. dbDisconnect(dbListConnections(dbDriver(drv = "MySQL"))[[index of the MySQLConnection you want to close]])
To close all of them:
lapply(dbListConnections(dbDriver(drv = "MySQL")), dbDisconnect)
Yes, you could just rewrite the table, though of course you would lose all existing data. Alternatively, you can specify dbWriteTable(..., overwrite = TRUE).
I would also play with the other options, like row.names, header, field.types, quote, sep, and eol. I've had a lot of weird behavior in RMySQL as well. I can't remember specifics, but it seems like I've had no error message when I had done something wrong, like forgetting to set row.names. HTH
Close all active connections:
dbDisconnectAll <- function() {
    ile <- length(dbListConnections(MySQL()))
    lapply(dbListConnections(MySQL()), function(x) dbDisconnect(x))
    cat(sprintf("%s connection(s) closed.\n", ile))
}
Executing:
dbDisconnectAll()
Simplest:
lapply(dbListConnections(dbDriver(drv = "MySQL")), dbDisconnect)
This lists all connections and disconnects each of them via lapply.
Closing a connection
You can use dbDisconnect() together with dbListConnections() to disconnect those connections RMySQL is managing:
all_cons <- dbListConnections(MySQL())
for (con in all_cons)
    dbDisconnect(con)
Check that all connections have been closed:
dbListConnections(MySQL())
You could also kill any connection you're allowed to (not just those managed by RMySQL):
dbGetQuery(mydb, "show processlist")
Where mydb is:
mydb <- dbConnect(MySQL(), user = 'user_id', password = 'password',
                  dbname = 'db_name', host = 'host')
Close a particular connection
dbGetQuery(mydb, "kill 2")
dbGetQuery(mydb, "kill 5")
lapply(dbListConnections(MySQL()), dbDisconnect)
In current releases the dbListConnections function is deprecated, and DBI no longer requires drivers to maintain a list of connections. As such, the solutions above may no longer work; in RMariaDB, for example, they raise errors.
I came up with the following alternative, which uses the MySQL server's own functionality and should work with current DBI / driver versions:
### listing all open connection to a server with open connection
query <- dbSendQuery(mydb, "SHOW processlist;")
processlist <- dbFetch(query)
dbClearResult(query)
### getting the id of your current connection so that you don't close that one
query <- dbSendQuery(mydb, "SELECT CONNECTION_ID();")
current_id <- dbFetch(query)
dbClearResult(query)
### making a list with all other open processes by a particular set of users
# E.g. when you are working on Amazon Web Services you might not want to close
# the "rdsadmin" connection to the AWS console. Here e.g. I choose only "admin"
# connections that I opened myself. If you really want to kill all connections,
# just delete the "processlist$User == "admin" &" bit.
queries <- paste0("KILL ",processlist[processlist$User == "admin" & processlist$Id != current_id[1,1],"Id"],";")
### making function to kill connections
kill_connections <- function(x) {
    query <- dbSendQuery(mydb, x)
    dbClearResult(query)
}
### killing other connections
lapply(queries, kill_connections)
### killing current connection
dbDisconnect(mydb)

Keep MySQL connection open

I'm making an eggdrop Tcl script to write the activity of several public IRC channels to a database (over time this will be 10 to 15 channels, I think). I have two options in mind for handling the database connection:
1. A user says something -> open a MySQL connection to the database -> insert information about what the user said -> close the connection
2. Start the bot -> open a MySQL connection to the database -> insert information whenever there is channel activity -> wait for more information, etc.
I think it's better to use case 1, but when there is a lot of channel activity I think opening and closing a connection every time will cause a massive server load and slow things down drastically after a while.
What's the best way to do this?
If you want to keep the connection open, just call
mysql::ping $dbhandle
from time to time.
This can be done with something like this:
proc keepMySqlOpen {dbhandle} {
    mysql::ping $dbhandle
    after 2000 [list keepMySqlOpen $dbhandle]
}
...
set dbh [mysql::open ...]
keepMySqlOpen $dbh
...
Another option is just to use mysql::ping before accessing the db, which, according to the mysqltcl manual, should reconnect if necessary. This might be the best of both worlds (let the connection time out if there is not much activity; keep it open otherwise).