How to persist tables with a master-detail relationship within a single transaction? - mysql

I'm trying to persist two tables with a master-detail relationship in MySQL 5.6 using Delphi XE3 and Zeos 7.0.4. When I do ApplyUpdates on the master, the auto-increment field stays at 0. I need the auto-increment value so I can link the detail table to the master table's Id field after ApplyUpdates. I'm using a ZConnection with AutoCommit = False and TransactionIsolationLevel = tiReadCommitted, and ZQuery components with CachedUpdates = True. What am I missing?
ZQPerson.Append;
ZQEmployee.Append;
try
  ZQPersonName.Value := Edit1.Text;
  ZQPerson.ApplyUpdates; // Here I expected the auto-increment value in ZQPerson's Id field, but it is always 0
  ZQEmployeePersonID.Value := ZQPersonId.Value; // Here I'd link Employee to its Person record
  ZQEmployeeRegNo.Value := StrToInt(Edit2.Text);
  ZQEmployee.ApplyUpdates;
  ZConnection1.Commit; // Persist both tables in a single transaction to avoid a master row without details
except
  ZQPerson.CancelUpdates;
  ZQEmployee.CancelUpdates;
  ZConnection1.Rollback; // In case of exceptions roll back everything
  raise;
end;
ZQPerson.CommitUpdates;
ZQEmployee.CommitUpdates;
My ZSQLMonitor trace is this:
2013-08-29 00:01:23 cat: Execute, proto: mysql-5, msg: INSERT INTO person (Id, name) VALUES (NULL, 'Edit1') --> This is just after ZQPerson.ApplyUpdates
2013-08-29 00:01:50 cat: Execute, proto: mysql-5, msg: INSERT INTO employee (Id, RegNo, ProductId) VALUES (NULL, 1000, 0), errcode: 1452, error: Cannot add or update a child row: a foreign key constraint fails (`test`.`employee`, CONSTRAINT `FK_A6085E0491BDF8EE` FOREIGN KEY (`PersonId`) REFERENCES `person` (`Id`) --> This is just after ZQEmployee.ApplyUpdates
2013-08-29 00:02:05 cat: Execute, proto: mysql-5, msg: Native Rollback call --> Rollback after Exception on the ZQEmployee.ApplyUpdates

Are you starting the transaction with ZConnection1.StartTransaction? I also think you must Refresh ZQuery1 after calling ZQuery1.ApplyUpdates to get the new id.
Reading your comment, you must be doing a select * without a where clause, right? I can recommend this approach (a rough sketch of step 1 follows the list):
1) select and increment the current auto-increment value
2) select from the master table where id = [step 1 id] // it will be empty, of course
3) add the detail rows using the id from step 1
4) assign the id in the master dataset
5) apply both updates
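A rough SQL sketch of step 1, using the test.person table from the question's trace; note that reading the counter this way is not safe under concurrent inserts, so treat it as an illustration only:
-- step 1 (illustration): peek at the next auto-increment value for test.person
SELECT AUTO_INCREMENT
  FROM information_schema.TABLES
 WHERE TABLE_SCHEMA = 'test'
   AND TABLE_NAME = 'person';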

The workaround I found was this one. It doesn't fully satisfy me, because it doesn't keep the use of the database's auto-increment feature transparent: I have to call the Last_Insert_ID() function myself. I'm in contact with the Zeos developers to check this out.
function LastInsertID(ATableName: string): Integer;
var
  DBQuery: TZQuery;
begin
  DBQuery := TZQuery.Create(Self);
  try
    DBQuery.Connection := ZConnection1;
    DBQuery.SQL.Clear;
    // LAST_INSERT_ID() is connection-scoped; the FROM clause isn't strictly needed
    DBQuery.SQL.Add('Select Last_Insert_ID() as Last_Insert_ID from ' + ATableName);
    DBQuery.Open;
    Result := DBQuery.FieldByName('Last_Insert_ID').Value;
  finally
    DBQuery.Free;
  end;
end;
procedure Persist;
var
  LastID: Integer;
begin
  ZQPerson.Append;
  ZQEmployee.Append;
  try
    ZQPersonName.Value := Edit1.Text;
    ZQPerson.ApplyUpdates; // The INSERT runs here, but the Id field of ZQPerson still reads 0
    LastID := LastInsertID('Person'); // Getting Last_Insert_ID(), even inside the uncommitted transaction, works
    ZQEmployeePersonId.Value := LastID; // Link the two tables using the Last_Insert_ID() result
    ZQEmployeeRegNo.Value := StrToInt(Edit2.Text);
    ZQEmployee.ApplyUpdates;
    ZConnection1.Commit; // Persist both tables in a single transaction to avoid a master row without details
  except
    ZQPerson.CancelUpdates;
    ZQEmployee.CancelUpdates;
    ZConnection1.Rollback; // In case of exceptions roll back everything
    raise;
  end;
  ZQPerson.CommitUpdates;
  ZQEmployee.CommitUpdates;
end;
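As a side note on the workaround: MySQL keeps LAST_INSERT_ID() per connection, so the value is stable even while other sessions insert concurrently, and the FROM clause in the helper function isn't strictly required. A minimal plain-SQL sketch, reusing the person table from the trace:
INSERT INTO person (Id, name) VALUES (NULL, 'Edit1');
SELECT LAST_INSERT_ID(); -- returns the id generated by the INSERT above on this same connection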

I tested this on a simple database with a master and a detail table nested through a TDataSource and related by the WHERE clause of the detail query:
object conMysql: TZConnection
  TransactIsolationLevel = tiReadCommitted
object zqryMaster: TZQuery
  Connection = conMysql
  SQL.Strings = (
    'select * from temp.master')
object dsNestedMaster: TDataSource
  DataSet = zqryMaster
object zqryDetail: TZQuery
  Connection = conMysql
  SQL.Strings = (
    'select * from temp.detail'
    'where id_master = :id')
After starting the transaction, all updates must wait for confirmation, or be rolled back if an error occurs:
try
  zqryMaster.Connection.StartTransaction;
  zqryMaster.Edit;
  zqryDetail.Edit;
  zqryMaster.FindField('dt_mov').Value := Now;
  while not zqryDetail.Eof do
  begin
    zqryDetail.Edit;
    zqryDetail.FindField('dt_mov').Value := Now;
    zqryDetail.ApplyUpdates;
    zqryDetail.Next;
    //raise Exception.Create('simple error'); //use for tests, check the database afterwards
  end;
  zqryMaster.ApplyUpdates;
  zqryMaster.Connection.Commit;
except
  zqryMaster.Connection.Rollback;
  zqryMaster.CancelUpdates;
  zqryDetail.CancelUpdates;
end;
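For reference, a hypothetical DDL for the two test tables (column names are inferred from the queries above; transactional behaviour requires InnoDB):
CREATE TABLE temp.master (
  id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
  dt_mov DATETIME
) ENGINE=InnoDB;
CREATE TABLE temp.detail (
  id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
  id_master INT UNSIGNED NOT NULL,
  dt_mov DATETIME,
  CONSTRAINT fk_detail_master FOREIGN KEY (id_master) REFERENCES temp.master (id)
) ENGINE=InnoDB;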

Related

Yii2 mysql how to insert a record into a table where column value already exists and should be unique?

I have a table, say 'mytable', that uses a "rank" column which is unique. I have created some records where rank is successively rec A (rank=0), rec B (rank=1), rec C (rank=2), rec D (rank=3), rec E (rank=4).
I need to insert a new record that will take an existing rank, say 1, and modify the rank values of the following records accordingly.
The result should be: rec A (rank=0), new rec (rank=1), rec B (rank=2), rec C (rank=3), rec D (rank=4), rec E (rank=5).
How can I do this? Can it be solved with MySQL only, or should I write a significant amount of code in PHP (Yii2)?
Assuming no rank is skipped, you need to shift the existing ranks before saving the new record. To do that you can use the beforeSave() method of your ActiveRecord like this:
class MyModel extends \yii\db\ActiveRecord
{
    public function beforeSave($insert)
    {
        if (!parent::beforeSave($insert)) {
            return false;
        }
        if ($insert) { // only when saving a new record
            $query = self::find()
                ->where(['rank' => $this->rank]);
            if ($query->exists()) { // check if the rank is already present in the DB
                // build the query directly because Yii2
                // doesn't support ORDER BY in an update
                $command = static::getDb()->createCommand(
                    "UPDATE " . static::tableName() .
                    " SET rank = rank + 1 WHERE rank >= :rank ORDER BY rank DESC",
                    [':rank' => $this->rank]
                );
                $command->execute();
            }
        }
        return true;
    }
    // ... other content of your model ...
}
MySQL allows ORDER BY in an UPDATE query, which helps us deal with the fact that the UPDATE is applied row by row and the UNIQUE constraint is checked after each row is updated; shifting the highest ranks first avoids temporary duplicates.
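In plain SQL, inserting at rank 1 into the hypothetical mytable from the question would shift the existing rows like this:
-- shift ranks 1..4 up by one, highest rank first, so UNIQUE(rank) never sees a duplicate
UPDATE mytable SET rank = rank + 1 WHERE rank >= 1 ORDER BY rank DESC;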
It would be more problematic if there were skipped ranks. In that case you would need to shift ranks only until you hit the first skipped rank.
Another option might be creating a BEFORE INSERT trigger on the table that would do the rank shifting.
Note:
It might be a good idea to also implement the afterDelete() method to shift the ranks in the opposite direction when a record is removed, to avoid skipped ranks.
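A sketch of that opposite shift in plain SQL, again with the hypothetical mytable and assuming the deleted row had rank 2:
-- close the gap left by the deleted row, lowest rank first
UPDATE mytable SET rank = rank - 1 WHERE rank > 2 ORDER BY rank ASC;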
Resources:
\yii\db\BaseActiveRecord::beforeSave()
\yii\db\ActiveRecord::updateAllCounters() - replaced with direct update
MySQL triggers
MySQL UPDATE syntax

How to get RowsAffected in Go-Gorm's Association Mode

I insert a relation by the following code:
db.Where(exercise).FirstOrCreate(&exercise).Model(&User{ID: userID}).Association("Exercises").Append(&exercise)
Corresponding SQL printed by debug console to the code is:
INSERT INTO `user_exercise` (`user_id`,`exercise_id`) SELECT 1,1 FROM DUAL WHERE NOT EXISTS (SELECT * FROM `user_exercise` WHERE `user_id` = 1 AND `exercise_id` = 1)
I want to know whether a new record was created in user_exercise, but because of the generated SQL, if the relation already exists nothing is inserted and no error is produced.
Go-Gorm's Association object doesn't have a RowsAffected attribute, so I can't get RowsAffected from the query to confirm if a new record is created.
Though I can get RowsAffected from the first db object, like
db.Where(exercise).FirstOrCreate(&exercise).Model(&User{ID: userID}).Association("Exercises").Append(&exercise)
if db.RowsAffected == 1 {
// do something
}
Since db is shared by all queries, if another query executes at the same time and affects rows, is it safe to read RowsAffected from the global db object?
Assuming that the user_exercise table has a unique constraint on (user_id, exercise_id), the insert will return an error if you try to insert an already existing pair, which is exactly what you want.
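If that composite unique key is not in place yet, it could be added with something like this (the constraint name is made up; table and column names come from the question):
ALTER TABLE user_exercise
  ADD CONSTRAINT uq_user_exercise UNIQUE (user_id, exercise_id);
-- a second INSERT of the same (user_id, exercise_id) pair then fails with error 1062 (duplicate entry)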
So just do something like this...
db.Where(exercise).FirstOrCreate(&exercise)
ue := struct {
    UserID     uint
    ExerciseID uint
}{
    UserID:     userID,
    ExerciseID: exercise.ID,
}
if err := db.Table("user_exercise").Create(&ue).Error; err != nil {
    // will enter here if the record wasn't created (e.g. duplicate key)
}
If it doesn't return an error, it means a new record was created.

Stored Procedure With Function giving me errors in Oracle

I have a stored procedure and a function, and I call the function from the stored procedure in Oracle. The function CalculateIncomeTax is what gives me errors. In MSSQL this type of update is possible; I have done it before, calling the function from the stored procedure. When I read around, the answer I get is to use a package, because I cannot use a function to update a table from another table. Please tell me if you have any idea. The error I get is:
table string.string is mutating, trigger/function may not see it
Cause: A trigger (or a user defined plsql function that is referenced in this statement) attempted to look at (or modify) a table that was in the middle of being modified by the statement which fired it.
Action: Rewrite the trigger (or function) so it does not read that table.
This is the function:
CREATE OR REPLACE function CalculateIncomeTax(periodId NVARCHAR2,
  employeeId NVARCHAR2, taxableIncome NUMBER) return NUMBER
AS
  IncomeTax NUMBER(18,4);
  Taxable NUMBER(18,4);
BEGIN
  SELECT SUM(CASE WHEN (taxableIncome > T.TAX_CUMMULATIVE_AMOUNT)
                  THEN (taxableIncome - T.TAX_CUMMULATIVE_AMOUNT) * T.TAX_PERCENTAGE / 100
                  ELSE 0.00 END) INTO IncomeTax
  FROM TAX_LAW T JOIN PAY_GROUP P ON P.PAY_FORMULA_ID = T.TAX_FORMULA_ID
  JOIN PAYROLL_MASTER PP ON P.PAY_CODE = PP.PAY_PAY_GROUP_CODE
  WHERE PP.PAY_EMPLOYEE_ID = employeeId AND PP.PAY_PERIOD_CODE = periodId;
  if IncomeTax IS NULL THEN
    IncomeTax := 0;
  end if;
  return IncomeTax;
end;
/
This is the stored procedure:
CREATE OR REPLACE PROCEDURE PROCESSPAYROLLMASTER (periodcode VARCHAR2)
AS
BEGIN
INSERT INTO PAYROLL_MASTER
(
PAY_PAYROLL_ID,PAY_EMPLOYEE_ID ,PAY_EMPLOYEE_NAME,PAY_SALARY_GRADE_CODE
,PAY_SALARY_NOTCH_CODE,PAY_BASIC_SALARY,PAY_TOTAL_ALLOWANCE
,PAY_TOTAL_CASH_BENEFIT,PAY_MEDICAL_BENEFIT,PAY_TOTAL_BENEFIT
,PAY_TOTAL_DEDUCTION,PAY_GROSS_SALARY,PAY_TOTAL_TAXABLE,PAY_INCOME_TAX
,PAY_TAXABLE,PAY_PERIOD_CODE,PAY_BANK_CODE,PAY_BANK_NAME,PAY_BANK_ACCOUNT_NO
,PAY_PAY_GROUP_CODE )
SELECT
1,
E.EMP_ID AS PAY_EMPLOYEE_ID ,
E.EMP_FIRST_NAME || ' ' || E.EMP_LAST_NAME AS PAY_EMPLOYEE_NAME,
E.EMP_RANK_CODE,
'CODE',
(SC.SAL_MINIMUM_AMOUNT+( SN.SAL_SALARY_PERCENTAGE *
SC.SAL_MINIMUM_AMOUNT)/100) AS PAY_BASIC_SALARY,
0,
0,
0,
0,
0,
0,
0,
0,
0,
periodcode,
'BANKCODE',
'BANKNAME',
'BANKNUMBER',
'GENERAL'
FROM EMPLOYEE E
LEFT JOIN SALARY_SCALE SC ON SC.SAL_RANK_CODE = E.EMP_RANK_CODE
LEFT JOIN SALARY_NOTCH SN ON SC.SAL_ID = SN.SAL_SALARYSCALE_ID
WHERE E.EMP_RANK_CODE = SC.SAL_RANK_CODE AND E.EMP_STATUS=2;
CALCULATEALLOWANCE(v_payrollId,periodcode);
CALCULATECASHBENEFITS(v_payrollId,periodcode);
CALCULATEDEDUCTIONS(v_payrollId,periodcode);
-- UPDATE PAYROLL PAY_INCOME_TAX
UPDATE PAYROLL_MASTER PM
   SET PM.PAY_INCOME_TAX = CalculateIncomeTax(PM.PAY_PERIOD_CODE, PM.PAY_EMPLOYEE_ID, PM.PAY_TOTAL_TAXABLE)
 WHERE PM.PAY_PAYROLL_ID = v_payrollId;
UPDATE PAYROLL_PROCESS
   SET PAY_CANCELLED = 1
 WHERE PAY_PAY_GROUP_CODE = 'GENERAL'
   AND PAY_PERIOD_CODE = periodcode
   AND PAY_ID <> v_payrollId;
COMMIT;
END ;
/
The function is querying the same table you are updating, which is what the error is reporting. As it happens you are not changing the value of the column you're querying, but Oracle doesn't check to that level - not least because there could be, for instance, a trigger that has less obvious side-effects.
The best solution really would be not to update at all, and to calculate and set all the values as part of the original insert, by joining to all the relevant tables. But you are already calling other procedures which are, presumably, updating some of the values you're inserting as zeros, including pay_total_taxable.
Unless you're able to reevaluate those as well, you may be stuck with doing a further update. In which case, you could remove the reference to the payroll_master table from the function and instead pass in the relevant data.
I think this is equivalent, though without the table structures, sample data and what the other procedures are doing it's hard to be sure (so this is untested, obviously):
create or replace function calculateincometax (
  p_periodid nvarchar2,
  p_employeeid nvarchar2,
  p_paypaygroupcode payroll_master.pay_pay_group_code%type,
  p_taxableincome number
) return number as
  l_incometax number(18, 4);
begin
  select coalesce(sum(case when p_taxableincome > t.tax_cummulative_amount
                           then (p_taxableincome - t.tax_cummulative_amount) * t.tax_percentage / 100
                           else 0 end), 0)
    into l_incometax
    from tax_law t
    join pay_group p
      on p.pay_formula_id = t.tax_formula_id
   where p.pay_code = p_paypaygroupcode;
  return l_incometax;
end;
/
and then include the extra argument in your call:
update payroll_master pm
set pm.pay_income_tax = calculateincometax(pm.pay_period_code, pm.pay_employee_id,
pm.pay_pay_group_code, pm.pay_total_taxable)
where pm.pay_payroll_id = v_payrollid;
Although v_payrollid isn't defined in what you've shown, so even that isn't entirely clear.
I've also modified the function argument and local variable names with prefixes to remove potential ambiguity (which you seem to do by removing underscores from the names), removed the unused variable, and added a coalesce() call in place of the separate null check. Those things aren't directly relevant to the approach though.
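A quick standalone check of the reworked function could look like this (the argument values are made up purely for illustration and must exist in your data to give a meaningful result):
-- hypothetical values; the pay group code must exist in PAY_GROUP for a non-zero result
SELECT calculateincometax('2013-08', 'EMP001', 'GENERAL', 2500) FROM dual;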

what is a mysql buffered cursor w.r.t python mysql connector

Can someone please give an example to understand this?
After executing a query, a MySQLCursorBuffered cursor fetches the entire result set from the server and buffers the rows.
For queries executed using a buffered cursor, row-fetching methods such as fetchone() return rows from the set of buffered rows. For nonbuffered cursors, rows are not fetched from the server until a row-fetching method is called. In this case, you must be sure to fetch all rows of the result set before executing any other statements on the same connection, or an InternalError (Unread result found) exception will be raised.
Thanks
I can think of two ways these two types of Cursors are different.
The first way is that if you execute a query using a buffered cursor, you can get the number of rows returned by checking MySQLCursorBuffered.rowcount. However, the rowcount attribute of an unbuffered cursor returns -1 right after the execute method is called. This, basically, means that the entire result set has not yet been fetched from the server. Furthermore, the rowcount attribute of an unbuffered cursor increases as you fetch rows from it, while the rowcount attribute of a buffered cursor remains the same, as you fetch rows from it.
The following code snippet tries to illustrate the points made above:
import mysql.connector

conn = mysql.connector.connect(database='db',
                               user='username',
                               password='pass',
                               host='localhost',
                               port=3306)

buffered_cursor = conn.cursor(buffered=True)
unbuffered_cursor = conn.cursor(buffered=False)

create_query = """
drop table if exists people;
create table if not exists people (
    personid int(10) unsigned auto_increment,
    firstname varchar(255),
    lastname varchar(255),
    primary key (personid)
);
insert into people (firstname, lastname)
values ('Jon', 'Bon Jovi'),
       ('David', 'Bryan'),
       ('Tico', 'Torres'),
       ('Phil', 'Xenidis'),
       ('Hugh', 'McDonald')
"""

# Create and populate a table
results = buffered_cursor.execute(create_query, multi=True)
conn.commit()

buffered_cursor.execute("select * from people")
print("Row count from a buffered cursor:", buffered_cursor.rowcount)
unbuffered_cursor.execute("select * from people")
print("Row count from an unbuffered cursor:", unbuffered_cursor.rowcount)

print()
print("Fetching rows from a buffered cursor: ")
while True:
    try:
        row = next(buffered_cursor)
        print("Row:", row)
        print("Row count:", buffered_cursor.rowcount)
    except StopIteration:
        break

print()
print("Fetching rows from an unbuffered cursor: ")
while True:
    try:
        row = next(unbuffered_cursor)
        print("Row:", row)
        print("Row count:", unbuffered_cursor.rowcount)
    except StopIteration:
        break
The above snippet should return something like the following:
Row count from a buffered cursor: 5
Row count from an unbuffered cursor: -1
Fetching rows from a buffered cursor:
Row: (1, 'Jon', 'Bon Jovi')
Row count: 5
Row: (2, 'David', 'Bryan')
Row count: 5
Row: (3, 'Tico', 'Torres')
Row count: 5
Row: (4, 'Phil', 'Xenidis')
Row count: 5
Row: (5, 'Hugh', 'McDonald')
Row count: 5
Fetching rows from an unbuffered cursor:
Row: (1, 'Jon', 'Bon Jovi')
Row count: 1
Row: (2, 'David', 'Bryan')
Row count: 2
Row: (3, 'Tico', 'Torres')
Row count: 3
Row: (4, 'Phil', 'Xenidis')
Row count: 4
Row: (5, 'Hugh', 'McDonald')
Row count: 5
As you can see, the rowcount attribute for the unbuffered cursor starts at -1 and increases as we loop through the result it generates. This is not the case with the buffered cursor.
The second way to tell the difference is by paying attention to which of the two (under the same connection) executes first. If you start with executing an unbuffered cursor whose rows have not been fully fetched and then try to execute a query with the buffered cursor, an InternalError exception will be raised, and you will be asked to consume or ditch what is returned by the unbuffered cursor. Below is an illustration:
import mysql.connector
conn = mysql.connector.connect(database='db',
user='username',
password='pass',
host='localhost',
port=3306)
buffered_cursor = conn.cursor(buffered=True)
unbuffered_cursor = conn.cursor(buffered=False)
create_query = """
drop table if exists people;
create table if not exists people (
personid int(10) unsigned auto_increment,
firstname varchar(255),
lastname varchar(255),
primary key (personid)
);
insert into people (firstname, lastname)
values ('Jon', 'Bon Jovi'),
('David', 'Bryan'),
('Tico', 'Torres'),
('Phil', 'Xenidis'),
('Hugh', 'McDonald')
"""
# Create and populate a table
results = buffered_cursor.execute(create_query, multi=True)
conn.commit()
unbuffered_cursor.execute("select * from people")
unbuffered_cursor.fetchone()
buffered_cursor.execute("select * from people")
The snippet above will raise an InternalError exception with a message indicating that there is an unread result. What it is basically saying is that the result returned by the unbuffered cursor needs to be fully consumed before you can execute another query with any cursor under the same connection. If you replace unbuffered_cursor.fetchone() with unbuffered_cursor.fetchall(), the error will disappear.
There are other less obvious differences, such as memory consumption. Buffered cursors will likely consume more memory, since they fetch the entire result set from the server and buffer the rows.
I hope this proves useful.

mysql update updating all user rows in table

Problem:
I have a MySQL stored procedure which runs the following UPDATE:
IF target = 'sup' THEN
UPDATE my_table SET deleted = 1, last_updated = lastUpdate WHERE id = ID AND user_id = accountID;
END IF;
The input parameters are:
(IN ID BIGINT, IN lastUpdate DATETIME, IN target VARCHAR(3), IN accountID BIGINT)
When this sproc is called, MySQL updates all of the rows in the table for that user_id and seems to ignore the id in the WHERE clause.
Background:
A mobile app makes an ajax json call to a .NET webservice, which then calls the mySql sproc.
The json call is like:
{"id":["5","6","10"],"lastUpdated":"2014-07-19 22:28:53","target":"sup","accountID":"309"}
At the .NET webservice, it converts each id entry to Int64 and sends it to the MySQL sproc:
For Each checkedID As String In id
    cmd.Parameters.AddWithValue("#ID", CType(checkedID, Int64)).Direction = ParameterDirection.Input
    cmd.Parameters.AddWithValue("#lastUpdate", dte).Direction = ParameterDirection.Input
    cmd.Parameters.AddWithValue("#target", target).Direction = ParameterDirection.Input
    cmd.Parameters.AddWithValue("#accountID", accountID).Direction = ParameterDirection.Input
    cmd.ExecuteNonQuery()
    cmd.Parameters.Clear()
Next
Research and fix attempts:
Lots of search
Using MySQL Workbench; running the SQL directly correctly updates just the targeted row:
UPDATE my_table SET deleted = 1, last_updated = '1970-01-01 10:10:10' WHERE id = "7" AND user_id = 309;
However, if I call the sproc from within MySQL Workbench, it still updates all of the rows for the targeted user:
CALL `my_sproc`(7, '1990-01-01 10:10:10', 'sup', 309);
I cannot see anything wrong with the sproc, unless I've just looked at it for too long. The mobile app has got close to 100 MySQL sprocs, and this is the only one causing an issue.
I am stumped.
Just adding an explicit reference to the table fixed the issue: my_table.id.
Because the IN parameter is named ID, the unqualified id in the WHERE clause is resolved as the parameter rather than the column (routine parameters and local variables take precedence over column names), so id = ID compares the parameter with itself and matches every row for that user. Qualifying the column as my_table.id (or renaming the parameter) removes the ambiguity.
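A sketch of the corrected statement, using the procedure body from the question:
IF target = 'sup' THEN
    UPDATE my_table
       SET deleted = 1, last_updated = lastUpdate
     WHERE my_table.id = ID          -- qualified, so this is the column, not the ID parameter
       AND my_table.user_id = accountID;
END IF;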