req = Test()
setattr(req, 'test', 1);
session.add(req)
print req.id
How do I get the last id for the req object?
You should commit it first (after session.add(req)).
flush() is enough; it will execute the queries and fill in the auto-generated IDs.
If you do a session.commit() before the final print statement, the id attribute will be set. As written, there is no reason for the engine to perform any SQL queries, so the object has not been inserted yet. Once the INSERT is run (on session flush or commit), the ID will be there.
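For instance, a minimal sketch using the objects from the question (assuming session and the mapped Test class already exist):
req = Test()
req.test = 1
session.add(req)
session.flush()    # emits the INSERT, so the autoincrement primary key is assigned
print(req.id)      # populated now
session.commit()   # commit() flushes first, so committing alone also works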
I have a case where I need to use conditional updates/inserts using peewee.
The query looks similar to what is shown here: conditional-duplicate-key-updates-with-mysql.
As of now, what I'm doing is a get_or_create, and then, if it was not a create, checking the condition in code and calling an insert with on_conflict_replace.
But this is prone to race conditions, since the condition check happens back in the web server, not in the database server.
Is there a way to do the same with insert in peewee?
Using: AWS Aurora-MySQL-5.7
Yes, Peewee supports the ON DUPLICATE KEY UPDATE syntax. Here's an example from the docs:
from datetime import datetime
from peewee import Model, TextField, DateTimeField, IntegerField

class User(Model):
    username = TextField(unique=True)
    last_login = DateTimeField(null=True)
    login_count = IntegerField()

# Insert a new user.
User.create(username='huey', login_count=0)

# Simulate the user logging in. The login count and timestamp will be
# either created or updated correctly.
now = datetime.now()
rowid = (User
         .insert(username='huey', last_login=now, login_count=1)
         .on_conflict(
             preserve=[User.last_login],  # Use the value we would have inserted.
             update={User.login_count: User.login_count + 1})
         .execute())
Doc link: http://docs.peewee-orm.com/en/latest/peewee/querying.html#upsert
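For the conditional part of the question, the values in the update dict can themselves be SQL expressions, so the check can stay on the database server. A rough, untested sketch (reusing the User model above; the one-hour condition and the use of MySQL's IF() via fn are purely illustrative assumptions, not something from the docs):
from datetime import datetime, timedelta
from peewee import fn

now = datetime.now()
rowid = (User
         .insert(username='huey', last_login=now, login_count=1)
         .on_conflict(
             # Illustrative condition: only bump the counter when the existing
             # row's last_login is more than an hour old; otherwise keep it.
             update={User.login_count: fn.IF(
                 User.last_login < now - timedelta(hours=1),
                 User.login_count + 1,
                 User.login_count)})
         .execute())
Inside ON DUPLICATE KEY UPDATE, a plain column reference refers to the existing row's values, which is what moves the check from the web server into MySQL.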
I have the following Sequence Container inside of a ForEach loop in my SSIS package:
I am busy testing the ROLLBACK TRANSACTION statement; it executes fine, but it does not roll back.
Am I missing anything?
Thanks in advance.
EDIT:
This is what the data flow looks like in my Sequence Container:
I took an alternative route to make sure that my failover works. I added a column to my tables that stores a GUID; when there is an error, I simply delete the records from the tables where that GUID matches the one used in that session. I add the GUID in the SELECT statement that my ForEach container uses and then write it to the table.
SELECT
ReconMedicalAidFile.fReconMedicalAidFileID
,ReconMedicalAidFile.fReconMedicalAidID
,ReconMedicalAids.fMedicalAidID
,ReconMedicalAidFile.fFileName
,ReconMedicalAidFile.fFileLocation
,ReconMedicalAidFile.fFileImportedDate
,ReconMedicalAidFile.fNumberRecords
,ReconMedicalAidFile.fUser
,ReconMedicalAidFile.fIsImported
,CONVERT(varchar(50),NEWID()) AS 'Session'
FROM ReconMedicalAidFile
INNER JOIN ReconMedicalAids ON ReconMedicalAidFile.fReconMedicalAidID = ReconMedicalAids.fReconMedicalAidID
WHERE (fIsImported = 0) AND (fNumberRecords = 0)
I added this code in the Script Task; its parameters map to the GUID selected above:
DELETE FROM BankmedStatments
WHERE fSession = ?
DELETE FROM ReconMedicalAidData
WHERE fSession = ?
I'm trying to perform a check to see if a record exists before inserting it, so I won't get an error.
If it exists, I'll update a field.
(mydb(mydb.myitems.itemNumber == int(row)).update(oldImageName=fileName) or
 mydb.myitems.insert(itemNumber=int(row), oldImageName=fileName))
If I try to update a record that does not exist, then it should return a 1 or something other than 0. But in the case above, it always returns 0, so the insert keeps happening.
Why is that?
Thanks!
UPDATE:
Adding model:
mydb.define_table('myitems',
                  Field('itemNumber', 'id', notnull=True, unique=True),
                  Field('oldImageName', 'string'))
If I try to update a record that does not exist, then it should return a 1 or something other than 0.
If you try to update a record that does not exist, .update() will return None, so the insert will then happen. If matching records exist, .update() will return the number of records updated.
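In other words, the or in the original code falls through to the insert whenever the update returns a falsy value; made explicit, the same check looks roughly like this (illustrative sketch only, using the row and fileName variables from the question):
updated = mydb(mydb.myitems.itemNumber == int(row)).update(oldImageName=fileName)
if not updated:  # None (or 0) means nothing matched, so insert instead
    mydb.myitems.insert(itemNumber=int(row), oldImageName=fileName)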
In any case, you should instead do:
mydb.myitems.update_or_insert(mydb.myitems.itemNumber == int(row),
                              oldImageName=fileName)
or alternatively:
mydb.myitems.update_or_insert(dict(itemNumber=int(row)),
                              oldImageName=fileName)
The first argument to update_or_insert is _key, which can either be a DAL query or a dictionary with field names as keys.
This is basically a correction of Anthony's answer which did not give quite the desired result when I tried it. If you do:
mydb.myitems.update_or_insert(mydb.myitems.itemNumber == int(row),
                              itemNumber=int(row),
                              oldImageName=fileName)
then the code should insert a record with the itemNumber and fileName if there is not already an item with that number; otherwise, it updates it.
If you leave out the itemNumber=int(row) bit, then web2py makes up an itemNumber, which is probably not what is wanted.
See http://www.web2py.com/books/default/chapter/29/06/the-database-abstraction-layer#update_or_insert
I am trying to execute a SELECT ... FOR UPDATE query using Laravel 3:
SELECT * from projects where id = 1 FOR UPDATE;
UPDATE projects SET money = money + 10 where id = 1;
I have tried several things for several hours now:
DB::connection()->pdo->exec($query);
and
DB::query($query)
I have also tried adding START TRANSACTION; ... COMMIT; to the query
and I tried to separate the SELECT from the UPDATE in two different parts like this:
DB::query($select);
DB::query($update);
Sometimes I get 0 rows affected, sometimes I get an error like this one:
SQLSTATE[HY000]: General error: 2014 Cannot execute queries while other unbuffered queries are active. Consider using PDOStatement::fetchAll(). Alternatively, if your code is only ever going to run against mysql, you may enable query buffering by setting the PDO::MYSQL_ATTR_USE_BUFFERED_QUERY attribute.
SQL: UPDATE `sessions` SET `last_activity` = ?, `data` = ? WHERE `id` = ?
I want to lock the row in order to update sensitive data, using Laravel's database connection.
Thanks.
If all you need to do is increase money by 10, you don't need to lock the row before the update. Simply executing the update query will do the job. The SELECT query will only slow down your script and doesn't help in this case.
UPDATE projects SET money = money + 10 where id = 1;
I would definitely use different queries, so you have control over what you are doing.
I would use a transaction.
If we read this simple explanation, PDO transactions are quite straightforward. It gives us this simple but complete example, which illustrates that everything works as we would expect (consider $db to be your DB::connection()->pdo).
try {
    $db->beginTransaction();
    $db->exec("SOME QUERY");

    $stmt = $db->prepare("SOME OTHER QUERY?");
    $stmt->execute(array($value));

    $stmt = $db->prepare("YET ANOTHER QUERY??");
    $stmt->execute(array($value2, $value3));

    $db->commit();
}
catch (PDOException $ex) {
    // Something went wrong, roll back!
    $db->rollBack();
    echo $ex->getMessage();
}
Let's go to your real statements. For the first of them, the SELECT ..., I wouldn't use exec but query, since, as stated here:
PDO::exec() does not return results from a SELECT statement. For a
SELECT statement that you only need to issue once during your program,
consider issuing PDO::query(). For a statement that you need to issue
multiple times, prepare a PDOStatement object with PDO::prepare() and
issue the statement with PDOStatement::execute().
and assign its result to a temporary variable, like:
$result = $db->query($select);
After this executes, I would call $result->fetchAll() or $result->closeCursor(), since, as we can read here:
If you do not fetch all of the data in a result set before issuing
your next call to PDO::query(), your call may fail. Call
PDOStatement::closeCursor() to release the database resources
associated with the PDOStatement object before issuing your next call
to PDO::query().
Then you can exec the update:
$result = $db->exec($update);
And after all that, just in case, I would again call fetchAll() or closeCursor() on any statement that still has an open result set (exec() itself just returns the affected-row count, not a statement object).
If the aim is
to lock the row in order to update sensitive data, using Laravel's database connection.
then maybe you can use PDO transactions:
DB::connection()->pdo->beginTransaction();
DB::connection()->pdo->commit();
DB::connection()->pdo->rollBack();
I have a table that is storing data that needs to be processed. I have id, status, data in the table. I'm currently going through and selecting id, data where status = #. I'm then doing an update immediately after the select, changing the status # so that it won't be selected again.
My program is multithreaded, and sometimes two threads grab the same id because they query the table at nearly the same time. I looked into SELECT ... FOR UPDATE; however, I either wrote the query wrong or I'm not understanding what it is used for.
My goal is to find a way of grabbing the id and data that I need and setting the status so that no other thread tries to grab and process the same data. Here is the code I tried. (I wrote it all together here for demonstration purposes; in the real program the prepares are set up at the beginning so a prepare isn't done every time it runs, just in case anyone was concerned about that.)
my $select = $db->prepare("SELECT id, data FROM `TestTable` WHERE _status=4 LIMIT ? FOR UPDATE") or die $DBI::errstr;
if ($select->execute($limit))
{
    while ($data = $select->fetchrow_hashref())
    {
        my $update_status = $db->prepare("UPDATE `TestTable` SET _status = ?, data = ? WHERE _id=?");
        $update_status->execute(10, "", $data->{_id});
        push(@array_hash, $data);
    }
}
When I run this with multiple threads, I get many duplicate inserts when I try to do an insert after processing my transaction data.
I'm not terribly familiar with MySQL, and in the research I've done I haven't found anything that really cleared this up for me.
Thanks.
As a sanity check, are you using InnoDB? MyISAM has zero transactional support, aside from faking it with full table locking.
I don't see where you're starting a transaction. MySQL's autocommit option is on by default, so starting a transaction and later committing would be necessary unless you turned off autocommit.
It looks like you are simply relying on the database's locking mechanisms. I googled perl dbi locking and found this:
$dbh->do("LOCK TABLES foo WRITE, bar READ");
my $sth  = $dbh->prepare("SELECT x, y, z FROM bar");
my $sth2 = $dbh->prepare("INSERT INTO foo SET a = ?");
$sth->execute();
while (my @ary = $sth->fetchrow_array()) {
    $sth2->execute($ary[0]);
}
$sth2->finish();
$sth->finish();
$dbh->do("UNLOCK TABLES");
I'm not really saying GIYF, as I am also fairly new to both MySQL and DBI, but perhaps you can find other answers that way.
Another option might be as follows, though it only works if you control all the code accessing the data. You can create a lock column in the table. When your code accesses the table, it does something like this (pseudocode):
if row.lock != 1
    row.lock = 1
    read row
    update row
    row.lock = 0
    next
else
    sleep 1
    redo
Again, though, this trusts that every user/script that accesses this data agrees to follow the policy, and the check-and-set of the lock column needs to be a single atomic statement (e.g. an UPDATE ... SET lock = 1 WHERE id = ? AND lock != 1, checking the affected-row count), otherwise the same race simply moves into the lock column. If you cannot ensure that, then this won't work.
Anyway, that's all the knowledge I have on the topic. Good luck!