Conditional duplicate key updates with MySQL using Peewee

I have a case where I need to do conditional updates/inserts with peewee.
The query looks similar to the one shown in conditional-duplicate-key-updates-with-mysql.
Right now I do a get_or_create and then, if it was not a create, check the condition in application code and call insert with on_conflict_replace.
But this is prone to race conditions, since the condition check happens back in the web server, not in the database server.
Is there a way to do the same with a single insert in peewee?
Using: AWS Aurora-MySQL-5.7
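Roughly, the current approach looks like this (a sketch; the model and the condition are made up for illustration):
from peewee import *

db = MySQLDatabase('app')

class Counter(Model):
    key = CharField(unique=True)
    value = IntegerField()

    class Meta:
        database = db

# Check-then-write across two round trips.
row, created = Counter.get_or_create(key='jobs', defaults={'value': 1})
if not created and row.value < 100:  # condition evaluated in the web server
    (Counter
     .insert(key='jobs', value=row.value + 1)
     .on_conflict_replace()  # REPLACE INTO on duplicate key
     .execute())
# Another request may change the row between get_or_create and insert.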

Yes, Peewee supports the ON DUPLICATE KEY UPDATE syntax. Here's an example from the docs:
class User(Model):
    username = TextField(unique=True)
    last_login = DateTimeField(null=True)
    login_count = IntegerField()

# Insert a new user.
User.create(username='huey', login_count=0)

# Simulate the user logging in. The login count and timestamp will be
# either created or updated correctly.
now = datetime.now()
rowid = (User
         .insert(username='huey', last_login=now, login_count=1)
         .on_conflict(
             preserve=[User.last_login],  # Use the value we would have inserted.
             update={User.login_count: User.login_count + 1})
         .execute())
Doc link: http://docs.peewee-orm.com/en/latest/peewee/querying.html#upsert
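The update mapping accepts arbitrary expressions, so the condition itself can run on the database server rather than in the web server. A sketch of a conditional update using MySQL's IF() through peewee's fn (the threshold of 100 is made up):
from peewee import fn

# Only bump the counter when the stored value satisfies the condition;
# IF() is evaluated by MySQL, so there is no read-check-write race.
rowid = (User
         .insert(username='huey', last_login=now, login_count=1)
         .on_conflict(
             preserve=[User.last_login],
             update={User.login_count: fn.IF(
                 User.login_count < 100,  # hypothetical condition
                 User.login_count + 1,
                 User.login_count)})
         .execute())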

Related

Kafka connect jdbc sink upsert mode issue

I'm trying to replicate a table in real time using Kafka Connect. The database used is MySQL 5.7.
When the insert and update modes are used separately, the columns behave as expected. However, when I use the upsert mode, no change is observed in the database.
Configuration file (filled via the UI)
Sink
topic = custom-p2p
connector class = JdbcSinkConnector
name = sink
tasks max = 1
key converter class = org.apache.kafka.connect.storage.StringConverter
value converter class = org.apache.kafka.connect.json.JsonConverter
jdbc url = jdbc:mysql://127.0.0.1:3306/p2p_service_db4?user=root&password=root&useSSL=false
insert mode = upsert
auto create = true
auto evolve = true
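For reference, the sink block in standard .properties form would look roughly like this (canonical property names per the Confluent JDBC sink connector; the pk.mode/pk.fields lines are an assumption, added because upsert mode needs a key to match rows on and none appears above):
name=sink
connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
tasks.max=1
topics=custom-p2p
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
connection.url=jdbc:mysql://127.0.0.1:3306/p2p_service_db4?user=root&password=root&useSSL=false
insert.mode=upsert
auto.create=true
auto.evolve=true
# Assumed: upsert requires a primary-key definition; record_value reads it from the message value.
pk.mode=record_value
pk.fields=auto_id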
Source
connector class = JdbcSourceConnector
name = source-new
tasks max = 1
key converter class = org.apache.kafka.connect.storage.StringConverter
value converter class = org.apache.kafka.connect.json.JsonConverter
jdbc url = jdbc:mysql://127.0.0.1:3306/p2p_service_db3?user=root&password=root&useSSL=false
table loading mode = timestamp+incrementing
incrementing column name = auto_id
timestamp column name = last_updated_at
topic prefix = custom-
The issue I'm having is that when the sink insert mode is set to insert, insertion takes place properly, and when it is set to update, that also works exactly as expected; however, when the value is changed to upsert, neither insertion nor update takes place.
Please let me know if something is configured wrong. Why is this mode not working? Is there an alternative if both inserts and updates need to be replicated to the backup DB?
Thank you in advance. Let me know if any other information is needed.

INSERT … ON DUPLICATE KEY UPDATE with Condition

My table has a cas field, and I want to implement compare-and-set in the save operation.
Here is my SQL code:
INSERT INTO `test_cas_table`(id,name,cas) VALUES(3, "test data", 2)
ON DUPLICATE KEY UPDATE
id = VALUES(id),
name = VALUES(name),
cas = IF(cas = VALUES(cas) - 1, VALUES(cas) , "update failure")
Because the cas field is BIGINT, setting it to "update failure" when cas != VALUES(cas) - 1 causes the statement to fail, which is how a stale write gets rejected.
But this approach is ugly. Is there a prettier implementation?
I'd also like to know whether PostgreSQL has a prettier way to do this.
I want to implement it in a single statement.
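For comparison, PostgreSQL does allow the condition directly in the upsert: ON CONFLICT ... DO UPDATE accepts a WHERE clause, so a stale write is simply skipped rather than forced to error. A sketch, assuming id is the primary key:
INSERT INTO test_cas_table (id, name, cas) VALUES (3, 'test data', 2)
ON CONFLICT (id) DO UPDATE
SET name = EXCLUDED.name,
    cas  = EXCLUDED.cas
WHERE test_cas_table.cas = EXCLUDED.cas - 1;
-- No row is updated when the CAS check fails; inspect the affected-row
-- count to detect the lost race.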
Is your identity column auto-generated? If so, there's no need to check for duplicates. Just insert your new information and your database will handle it.
However, if you have an identity column which isn't auto-generated (like an email or official document instead of an auto-increment integer primary key), you need to first query your database for the value you're about to persist. That way, instead of receiving a SQLException (Java), you check your query result and tell the user to change their email or official document if it was already taken by another user.

Update Case-Sensitive DB Field In Laravel 5.3 With Postgres

I am trying to update a database column with raw SQL in Laravel. It's important to mention that the update code was written for the MySQL driver, but now I use Postgres. The column name is dayID. The update code is:
DB::update("update table set travel = ... WHERE dayID = {$this->dayID}");
I must use raw SQL because I make some updates to polygon types.
The problem is that dayID is automatically lowercased to dayid, so I get an error:
column "dayid" does not exist
I tried putting the column name in a variable and using that in the update query, but it failed with the same error:
$var = "dayID";
DB::update("update table set travel = ... WHERE ".$var." = {$this->dayID}");
How can I fix it?
Try DB::table with update, as below:
DB::table('table_name')
    ->where('dayID', $this->dayID)
    ->update(['travel' => '...']);
Laravel documentation:
https://laravel.com/docs/5.3/queries#updates
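If the raw query has to stay (e.g. for the polygon updates), quoting the identifier is another option: Postgres folds unquoted identifiers to lowercase, which is why dayID ends up as dayid. A sketch using Laravel's query bindings (your_table and $travel are placeholders):
// Double quotes preserve the identifier's case in Postgres;
// bindings avoid interpolating values into the SQL string.
DB::update('update your_table set travel = ? where "dayID" = ?', [$travel, $this->dayID]);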

How to update a row in a MySQL database using Ruby's Sequel toolkit?

This should be the simplest thing but for some reason it's eluding me completely.
I have a Sequel connection to a database named DB. It's using the Mysql2 engine if that's important.
I'm trying to update a single record in a table in the database. The short loop I'm using looks like this:
dataset = DB["SELECT post_id, message FROM xf_post WHERE message LIKE '%#{match}%'"]
dataset.each do |row|
  new_message = process_message(row[:message])
  # HERE IS WHERE I WANT TO UPDATE THE ROW IN THE DATABASE!
end
I've tried:
dataset.where('post_id = ?', row[:post_id]).update(message: new_message)
Which is what the Sequel cheat sheet recommends.
And:
DB["UPDATE xf_post SET message = ? WHERE post_id = ?", new_message, row[:post_id]]
Which should be raw SQL executed by the Sequel connector. Neither throws an error nor outputs any error message (I'm using a logger with the Sequel connection), yet both calls fail to update the records in the database. The data is unchanged when I query the database after running the code.
How can I make the update call function properly here?
Your problem is you are using a raw SQL dataset, so the where call isn't going to change the SQL, and update is just going to execute the raw SQL. Here's what you want to do:
dataset = DB[:xf_post].select(:post_id, :message).
  where(Sequel.like(:message, "%#{match}%"))
That will make the where/update combination work.
Note that your original code has a trivial SQL injection vulnerability if match depends on user input, which this new code avoids. Consider Dataset#escape_like if you want to escape LIKE metacharacters inside match; otherwise, if match depends on user input, users can supply complex matching syntax that the database may execute slowly or mishandle.
Note that the reason that
DB["UPDATE xf_post SET message = ? WHERE post_id = ?", new_message, row[:post_id]]
doesn't work is that it only creates a dataset; it doesn't execute it. You can call update on that dataset to run the query and return the number of affected rows.
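Putting the pieces together, the loop might look like this (a sketch; process_message is the asker's function, and escape_like is the escaping mentioned above):
safe = DB[:xf_post].escape_like(match)
dataset = DB[:xf_post].select(:post_id, :message).
  where(Sequel.like(:message, "%#{safe}%"))

dataset.each do |row|
  new_message = process_message(row[:message])
  # Scope the update to this row by primary key; update returns the
  # number of affected rows.
  DB[:xf_post].where(post_id: row[:post_id]).update(message: new_message)
end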

I set a key in Django's request.session, but it has no effect

I set the jobfile key in the session below:
def getjoblist(request):
    joblist = request.GET.get('job')
    jobliststr = joblist[0:-1]
    request.session['jobfile'] = jobliststr
    return HttpResponse("ok\n")
I checked the MySQL database and the number of rows increments, but when I test for the key, it shows the key is not in the session:
if request.session.has_key('jobfile'):  # returns False
I don't know why.
You may need to save the session after adding the new key:
request.session['jobfile'] = jobliststr
request.session.save()
I figured out why: first I made the request with the shell command curl, and a session was saved. Then I made the request from a browser, and a session was also saved. But those two sessions are not the same, because sessions differ between clients. So I couldn't get the key.
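As a footnote, the session is keyed by the client's session cookie, which is why the curl and browser requests above saw different sessions. A sketch of a check view (the view name is made up; the in operator is the idiomatic membership test on request.session):
from django.http import HttpResponse

def checkjob(request):
    # Only requests carrying the same session cookie will see this key.
    if 'jobfile' in request.session:
        return HttpResponse(request.session['jobfile'] + "\n")
    return HttpResponse("no jobfile\n")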