magento enterprise_targetrule/index model not saving - mysql

I am executing this code:
$model1 = Mage::getModel('enterprise_targetrule/index')->load(5511);
var_dump($model1);
$model2 = Mage::getModel('enterprise_targetrule/index')
    ->load(5511)
    ->setFlag('0')
    ->save();
var_dump($model2);
$model3 = Mage::getModel('enterprise_targetrule/index')->load(5511);
var_dump($model3);
die();
The outputs from the var_dump calls are exactly what I would expect: $_data[flag] is 1 for $model1, 0 for $model2 and $model3, and $_origData[flag] is 1 for $model1 and $model2, and 0 for $model3.
So far, it is all looking exactly right. However, when I then execute select * from enterprise_targetrule_index on my database, immediately after running this code, I get this result:
mysql> select * from enterprise_targetrule_index;
+-----------+----------+-------------------+---------+------+
| entity_id | store_id | customer_group_id | type_id | flag |
+-----------+----------+-------------------+---------+------+
|      5511 |        7 |                 0 |       1 |    1 |
+-----------+----------+-------------------+---------+------+
WHY?
Why is the flag not getting updated? The models are correct, all the fields are correct, the save and load calls all succeed and return perfect results, but the database is not updated! It's as if the change I save() never gets written, and yet it can somehow still be loaded, at least within that script. What is going on here? What is special about this model that makes it unable to save?

$model2 = Mage::getModel('enterprise_targetrule/index')->load(5511);
$model2->setFlag('0');
$model2->save();
echo $model2->getFlag();
If you use var_dump, it displays the whole object.

It turns out the reason for this is the die(). When SQL queries are executed, they are only performed in memory; they don't get written to the database until and unless the process terminates successfully. Because I was using die(), the queries were never written.
I was thrown by this initially, because it all happens in memory and not in the database. Once it is written, it does go down as a transaction, but the whole transaction gets written at the same time, which is why I didn't see a rollback command in the MySQL general log: it wasn't technically rolling back, it was preventing even the first query from being written. Very strange, and it does make testing harder, but good to keep in mind.

Recovering from an EventStoreException where sequence is already in use within DeadlineHandler

Sometimes an EventStoreException occurs saying the event couldn't be stored because it has the same sequence as another event in the aggregate.
This happens when EventA and EventB have almost the same timestamp.
CommandA is sent by a controller and CommandB is sent by a Saga within a DeadlineHandler.
So the handling of the deadline fails; the EventStoreException is logged, but the deadline is not retried.
Would it help if we configure the Saga with a PropagatingErrorHandler?
Events table:
timestamp                      | aggregate_id                         | seq | type
-------------------------------+--------------------------------------+-----+------------
2020-11-30T15:14:51.345541552Z | b02a5364-ee34-431a-ab1a-6c59bb937845 |   0 | MyAggregate
2020-11-30T15:14:52.06794746Z  | b02a5364-ee34-431a-ab1a-6c59bb937845 |   1 | MyAggregate
Exception details:
org.axonframework.eventsourcing.eventstore.EventStoreException: An event for aggregate [b02a5364-ee34-431a-ab1a-6c59bb937845] at sequence [1] was already inserted
java.sql.BatchUpdateException: Batch entry 0 INSERT INTO events
(event_id, aggregate_id, sequence_number, type, timestamp, payload_type, payload_revision, payload, metadata)
VALUES
('d5be369e-5fd0-475e-b5b6-e12449a4ed04',
'b02a5364-ee34-431a-ab1a-6c59bb937845',
1,
'MyAggregate',
'2020-11-30T15:14:52.067871723Z',
'MyEvent',
NULL,
'{"payload":"payload"}',
'{"metaData":"metaData"}')
was aborted: ERROR: duplicate key value violates unique constraint "uk_aggregate_identifier_sequence_number"
Detail: Key (aggregate_id, sequence_number)=(b02a5364-ee34-431a-ab1a-6c59bb937845, 1) already exists.
As you can see, the timestamps of the events are nearly the same:
EventA: 2020-11-30T15:14:52.06794746Z vs. EventB: 2020-11-30T15:14:52.067871723Z
To first answer your question, configuring a PropagatingErrorHandler does not help, because the TrackingEventProcessor is not going to retry a DeadlineMessage. It only works for retrying real events, which is not the case for a DeadlineMessage, since it is not an Event.
Now to your problem: we are assuming that your Saga has a DeadlineHandler and this component is dispatching a Command towards your Aggregate at the same time another component is also dispatching a Command to the same Aggregate. In that way, the Aggregate fails to handle the second Command.
Based on that, we can give you two pieces of advice:
Have a consistent Routing Strategy which is used by a Distributed implementation of the CommandBus. In short, it will give you the following:
Two commands with the same routing key will always be routed to the same segment.
Have a RetryScheduler configured on your CommandGateway (a sketch follows below). You can read more about it here.
The RetryScheduler is capable of scheduling retries when command execution has failed.
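For illustration, here is a minimal sketch of wiring a retrying gateway in Axon 4 terms; the executor, retry count and interval values below are arbitrary assumptions, not values from the question:
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;

import org.axonframework.commandhandling.CommandBus;
import org.axonframework.commandhandling.gateway.CommandGateway;
import org.axonframework.commandhandling.gateway.DefaultCommandGateway;
import org.axonframework.commandhandling.gateway.IntervalRetryScheduler;
import org.axonframework.commandhandling.gateway.RetryScheduler;

public class GatewayConfig {

    public CommandGateway retryingCommandGateway(CommandBus commandBus) {
        // executor that performs the delayed retries
        ScheduledExecutorService retryExecutor = Executors.newSingleThreadScheduledExecutor();

        RetryScheduler retryScheduler = IntervalRetryScheduler.builder()
                .retryExecutor(retryExecutor)
                .maxRetryCount(3)     // give up after 3 attempts (assumption)
                .retryInterval(1000)  // wait 1000 ms between attempts (assumption)
                .build();

        // commands dispatched through this gateway are retried on failure
        return DefaultCommandGateway.builder()
                .commandBus(commandBus)
                .retryScheduler(retryScheduler)
                .build();
    }
}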

How to send and receive raw string from MySQL

I am working on a SQL Developer-like application, in which user enters some SQL command and the result from the server is shown in a <div> on web page.
The problem is that the user can enter ANY string, valid SQL or not. For example, if a user sends select * from employees; I want to receive and display in
the <div> text EXACTLY as below:
+---------+----------+---------------+----------+
| user_id | username | full_name | password |
+---------+----------+---------------+----------+
|       1 | john     | John Doe      | admin    |
And when he enters a bad SQL string, the <div> message should be the standard MySQL error string, for example:
mysql> select * from usrsss;
ERROR 1146 (42S02): Table 'mydb.usrsss' doesn't exist
I know about the security risk; I do not care about it at this point.
Can this be done, given that I have no control over the syntax of the SQL string being sent by the user?
First of all, the prompt you see there, mysql>, represents the MySQL Shell. This is not SQL or JDBC but a command line interface provided by MySQL.
This MySQL Shell allows you to execute:
SQL statements.
A variety of other statements that are NOT part of SQL.
The JDBC API you want to use will allow you to run the first group of statements -- the SQL statements. Unfortunately, it won't allow you to run the second one, so you are out of luck for this one.
Also, for the first group the JDBC API will provide error codes and error messages that are not exactly the same ones you see when using the MySQL Shell.
Bottom line, you can simulate some of these commands, but it will not be the exact same experience that you probably expect.
However... and this is a big one, why do you want to do this in the first place? One of my developers asked me if he could do this, since it's not difficult to implement; this way we could easily run any SQL command from the web page. Great? Well... no. This is a HUGE SECURITY RISK. If anyone hacks your web page, you would be exposing the whole database.
In other words, don't deploy this code to production.
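That said, as a rough approximation of the command-line client's error format, you can rebuild a similar message from the SQLException that JDBC gives you. A minimal sketch (the connection is assumed to already exist; the exact getMessage() wording can differ slightly from the command-line client):
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

// Runs the user's SQL and returns either a summary or a MySQL-style error line.
static String runUserSql(Connection conn, String sql) {
    try (Statement st = conn.createStatement()) {
        boolean hasResultSet = st.execute(sql); // works for any statement type
        return hasResultSet ? "query returned a result set" : st.getUpdateCount() + " row(s) affected";
    } catch (SQLException e) {
        // e.g. "ERROR 1146 (42S02): Table 'mydb.usrsss' doesn't exist"
        return "ERROR " + e.getErrorCode() + " (" + e.getSQLState() + "): " + e.getMessage();
    }
}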
If you are using java.sql.Connection:
create a statement first by using .createStatement(),
use .executeQuery() for queries,
and .executeUpdate() for inserts, updates and deletes.
When querying, identify the number of columns in order to build the table.
ResultSet rs = statement.executeQuery(sql);
ResultSetMetaData metaData = rs.getMetaData();
In ResultSetMetaData,
.getColumnCount() will give you the column count;
in a for loop create the columns, and while creating them .getColumnName(int index) will give you each column name.
After creating the table, iterate the ResultSet,
while (rs.next()) {
    rs.getString(columnIndex);
}
use the above method to get values, and add rows to your table.
Don't forget to surround the code block with
try {
    // ...
} catch (SQLException e) {
    // e.getMessage() holds the probable cause of the error
}
so if anything goes wrong you can catch the SQLException that is thrown and report its message, which describes the probable cause of the error.
Work your way out... Enjoy!
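Putting those steps together, here is a minimal sketch; the connection is assumed to exist already, and HTML escaping and column padding are left out:
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.SQLException;
import java.sql.Statement;

// Builds a plain-text table from an arbitrary SELECT using ResultSetMetaData.
static String renderQuery(Connection conn, String sql) throws SQLException {
    StringBuilder out = new StringBuilder();
    try (Statement st = conn.createStatement();
         ResultSet rs = st.executeQuery(sql)) {
        ResultSetMetaData md = rs.getMetaData();
        int cols = md.getColumnCount();
        for (int i = 1; i <= cols; i++) {          // header row
            out.append(md.getColumnName(i)).append(i < cols ? " | " : "\n");
        }
        while (rs.next()) {                        // data rows
            for (int i = 1; i <= cols; i++) {
                out.append(rs.getString(i)).append(i < cols ? " | " : "\n");
            }
        }
    }
    return out.toString();
}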

TOS DI variable in tMySqlInput

I'm relatively new to Talend OS DI. I managed to run simple queries against MySQL with the tMySqlInput component. However, today I have a more ambitious query and am having some trouble making it work.
I need a query where the result depends on the previous row. I got it working in MySQL Workbench but not in Talend. Example: the delay time between two dates.
Here is the request :
SET @var = NULL;
SELECT id, start_date, end_date, @var precedent, UNIX_TIMESTAMP(TIMEDIFF(start_date, @var)) AS diff, @var:=start_date AS temp
FROM ma_table
ORDER BY start_date;
and errors are :
You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'SELECT id, start_date, end_date, id_process_type, @var precedent, UNIX_TIMESTAMP' at line 2
...Not very useful. Is this syntax forbidden in Talend? Do other solutions exist for such queries in Talend (for the delay between two dates, for example), or maybe another component? I am currently looking at tMysqlRow.
Thanks for any ideas!
As @Gabriele B mentions, you might want to consider doing this in a more "Talend" way.
I'd personally make use of the tMemorizeRows component to do this though.
To simplify this I've gone and made the start and end dates as integers but it should be trivial to handle this using proper dates.
If we have some data that shows the start and end date of a process and we want to work out the delay between finishing the last one and starting the next process we can read all of the data in and then use the tMemorizeRows component to remember the last 2 rows:
We then access the memorized data by looking at the array index. So here we go to a tJavaRow component that has an extra output column, startDelay. We then calculate it by subtracting the last process's end date from the current process's start date:
output_row.id = input_row.id;
output_row.startdate = input_row.startdate;
output_row.enddate = input_row.enddate;

// index [0] holds the current row's memorized values, index [1] the previous row's
if (id_tMemorizeRows_1[0] != 1) {
    output_row.startDelay = startdate_tMemorizeRows_1[0] - enddate_tMemorizeRows_1[1];
} else {
    // first row: there is no previous process to compare against
    output_row.startDelay = 0;
}
The conditional statement is there to avoid null pointer errors on the first row of the data, as enddate_tMemorizeRows_1[1] will be null at that point. You could of course handle the null in other ways.
This process is reasonably easy to understand and maintain (although there is that small bit of Java code in there) and has the benefit of only needing to load the data once and only keeping a small part of it in memory at any one time. It should also be very fast.
You should consider refactoring the statement to do it in a "Talend" way; maybe a little slower, but more portable and robust.
If your table is not huge, for example, I would recommend loading it in memory using tCacheOutput/tCacheInput (you can find them on Talend Exchange) and this design:
tMySqlLoad ----> tCacheOutput_1
     |
 OnSubjobOk
     |
     v
tCacheInput_1 ----> tMap_1 ----+
                               |
                             tJoin ----> tMap_3 ----> [output]
                               |
tCacheInput_2 ----> tMap_2 ----+
First of all, you dump your table into a memory buffer.
Then you read this buffer twice. It's in memory, so it won't hurt performance.
In tMap_1 you add an auto-increment index using a Numeric.sequence (see the sketch below).
You do the same in tMap_2, but with a starting number of 2 (basically, you shift the index).
Then you auto-join the table using these brand new columns.
Finally, in tMap_3 you release your payload (i.e. make the diff).
This is going to be a verbose but robust solution if your table is small. If it's not, and performance is not an issue, you can try an even more verbose solution like prepared statements.
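For reference, the expressions used in tMap_1 and tMap_2 to add the shifted index could look like the sketch below, using Talend's built-in Numeric.sequence routine; the column and sequence names are placeholders:
// tMap_1: expression for a new output column "idx", numbering rows 1, 2, 3, ...
Numeric.sequence("join_seq_1", 1, 1)

// tMap_2: same idea but starting at 2, so row N of the first flow joins row N-1 of the second
Numeric.sequence("join_seq_2", 2, 1)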

"show full processlist" shows no time-hogging processes, yet my script takes too long

I have a script residing on my webserver's cron that should run every night. It has stopped running recently because it exceeds the time limit the webserver sets on cron jobs. It used to run fine. Any time I ran it manually, it was very quick (well under 5 minutes). All of a sudden, it takes over half an hour.
The script basically updates a MySQL database. The DB is around 60 MB according to them. I can't seem to verify this information, but it seems reasonable (though the file that I transfer to the server every night is only around 2 MB).
I have followed the steps they suggested to optimize my DB, but nothing really came of that. It still takes ages for the script to run. All the script does is delete everything out of the DB and fill it in again with our updated inventory.
So now I am running "show full processlist" on one Putty window, while running the script in another. "show full processlist" shows only a couple of items, both of which show 0 for the time.
mysql> show full processlist;
+-----------+-------------------+-------------------+------------------------------+---------+------+-------+-----------------------+
| Id        | User              | Host              | db                           | Command | Time | State | Info                  |
+-----------+-------------------+-------------------+------------------------------+---------+------+-------+-----------------------+
| 142841868 | purposely omitted | purposely omitted | purposely omitted_net_-_main | Sleep   |    0 |       | NULL                  |
| 142857238 | purposely omitted | purposely omitted | NULL                         | Query   |    0 | NULL  | show full processlist |
+-----------+-------------------+-------------------+------------------------------+---------+------+-------+-----------------------+
2 rows in set (0.05 sec)
If I keep using the show full processlist command really quickly, occasionally I can catch other things being listed in this table but then they disappear the next time I run it. This indicates to me that they are being processed very quickly!
So does anyone have any ideas what is going wrong? I am fairly new to this :(
Thanks!!
PS here is my code
#!/usr/bin/perl
use strict;
use DBI;
my $host = 'PURPOSLEY OMITTED';
my $db = 'PURPOSLEY OMITTED';
my $db_user = 'PURPOSLEY OMITTED';
my $db_password = "PURPOSLEY OMITTED";
my $dbh = DBI->connect("dbi:mysql:$db:$host", "$db_user", "$db_password");
$dbh->do("DELETE FROM main");
$dbh->do("DELETE FROM keywords");
open FH, "PURPOSLEY OMITTED" or die;
while (my $line = <FH>) {
my @rec = split(/\|/, $line);
print $rec[1].' : '.$rec[2].' : '.$rec[3].' : '.$rec[4].' : '.$rec[5].' : '.$rec[6].' : '.$rec[7];
$rec[16] =~ s/"//g;
$rec[17] =~ s/"//g;
$rec[13] =~ chomp($rec[13]);
my $myquery = "INSERT INTO main (medium, title, artist, label, genre, price, qty, catno,barcode,created,received,tickler,blurb,stockid) values (\"$rec[0]\",\"$rec[1]\",\"$rec[2]\",\"$rec[3]\",\"$rec[4]\",\"$rec[5]\",\"$rec[6]\",\"$rec[7]\",\"$rec[8]\",\"$rec[9]\",\"$rec[10]\",\"$rec[11]\",\"$rec[12]\",\"$rec[13]\")";
$dbh->do($myquery);
$dbh->do("INSERT IGNORE INTO keywords VALUES (0, '$rec[2]','$rec[13]')");
$dbh->do("INSERT LOW_PRIORITY IGNORE INTO keywords VALUES (0, \"$rec[1]\", \"$rec[13]\")");
print "\n";
}
close FH;
$dbh->disconnect();
I have two suggestions:
(less impact) Use TRUNCATE instead of DELETE; it is significantly faster, and is particularly easy to use when you don't need to worry about an auto-incrementing value.
Restructure slightly to work in batches for the inserts. Usually I do this by keeping a stack variable of a given size (start with something like 20 rows), and for the first 20 rows, it just fills the stack; but on the 20th row it also actually performs the insert and resets the stack. It might boggle your mind how much this can improve performance :-)
Pseudo-code:
const buffer_size = 20
while(row) {
    stack.addvalues(row.values)
    if(stack.size >= buffer_size) {
        // INSERT INTO mytable (fields) VALUES stack.all_values()
        stack.empty()
    }
}
// after the loop, flush whatever is still left in the stack
then play with the "buffer" size. I have seen scripts where tweaking the buffer to upwards of 100-200 rows at a time sped up massive imports by almost as many times (i.e. a drastically disproportionate amount of work was involved in the "overhead" of executing the individual INSERTs (network, etc)
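The question's script is Perl, but for illustration the same buffering idea looks like this in JDBC terms; the table and column names are placeholders, and the buffer size is the same arbitrary 20 as above:
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

// Inserts rows in batches instead of issuing one INSERT per row.
static void batchInsert(Connection conn, List<String[]> rows) throws SQLException {
    final int BUFFER_SIZE = 20; // tune this; 100-200 often pays off on large imports
    String sql = "INSERT INTO main (title, artist) VALUES (?, ?)"; // placeholder columns
    try (PreparedStatement ps = conn.prepareStatement(sql)) {
        int pending = 0;
        for (String[] row : rows) {
            ps.setString(1, row[0]);
            ps.setString(2, row[1]);
            ps.addBatch();
            if (++pending >= BUFFER_SIZE) { // flush a full buffer
                ps.executeBatch();
                pending = 0;
            }
        }
        if (pending > 0) {                  // flush the leftovers
            ps.executeBatch();
        }
    }
}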

Application Mutex via MySQL InnoDB row locks

My application consists of a couple of Apache servers talking to a common MySQL box. Part of the application lets users create one-hour appointments in the future. I need a mechanism which prevents different users, coming from the different Apache instances at the same time, from booking the same one-hour appointment slot. I've seen a similar "inter-system mutex" solution implemented on Oracle databases (basically 'select ... for update') but haven't dealt with the details of doing the same with MySQL. I would appreciate any advice, code or documentation references, best practices, etc. I did try to google around, but mostly discussions about MySQL's internal mutexes come up.
These are the MySQL settings I thought relevant (my code will have try-catch and all, and should never bail without unlocking what it locked, but I have to account for what happens in those cases as well):
mysql> show variables like 'innodb_rollback_on_timeout';
+----------------------------+-------+
| Variable_name | Value |
+----------------------------+-------+
| innodb_rollback_on_timeout | OFF |
+----------------------------+-------+
1 row in set (0.00 sec)
mysql> show variables like 'innodb_lock_wait_timeout';
+--------------------------+-------+
| Variable_name | Value |
+--------------------------+-------+
| innodb_lock_wait_timeout | 100 |
+--------------------------+-------+
1 row in set (0.00 sec)
mysql> select @@autocommit;
+--------------+
| @@autocommit |
+--------------+
| 1 |
+--------------+
1 row in set (0.00 sec)
Any alternative solutions (outside of MySQL) you could recommend? I do have a memcached instance running as well, but it gets flushed rather often (and I'm not sure I want to bring in memcachedb, etc. to make it fault tolerant).
Appreciate your help...
One can also use MySQL's and MariaDB's GET_LOCK (and RELEASE_LOCK) functions:
https://dev.mysql.com/doc/refman/5.5/en/miscellaneous-functions.html#function_get-lock (outdated link)
https://dev.mysql.com/doc/refman/8.0/en/locking-functions.html#function_get-lock
https://mariadb.com/kb/en/library/get_lock/
The functions can be used to realize the behavior described in the question.
Acquiring a lock named my_app_lock_1:
SELECT GET_LOCK('my_app_lock_1', 1000); -- lock name 'my_app_lock_1', timeout in seconds
+---------------------------------+
| GET_LOCK('my_app_lock_1', 1000) |
+---------------------------------+
| 1 |
+---------------------------------+
Releasing the lock:
DO RELEASE_LOCK('my_app_lock_1'); -- DO discards the result set
Please note (the quotes from MariaDB's documentation):
Names are locked on a server-wide basis. If a name has been locked by one client, GET_LOCK() blocks any request by another client for a lock with the same name. This allows clients that agree on a given lock name to use the name to perform cooperative advisory locking. But be aware that it also allows a client that is not among the set of cooperating clients to lock a name, either inadvertently or deliberately, and thus prevent any of the cooperating clients from locking that name. One way to reduce the likelihood of this is to use lock names that are database-specific or application-specific. For example, use lock names of the form db_name.str or app_name.str.
Locks obtained with GET_LOCK() do not interact with transactions.
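For illustration, this is roughly how the advisory lock could be used from JDBC around the booking logic; the lock name, timeout and the booking step itself are placeholders:
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Serializes access to one appointment slot across all Apache/app instances.
static boolean bookSlot(Connection conn, String slotKey) throws SQLException {
    String lockName = "booking." + slotKey; // application-specific lock name
    try (PreparedStatement get = conn.prepareStatement("SELECT GET_LOCK(?, 10)")) { // 10 s timeout
        get.setString(1, lockName);
        try (ResultSet rs = get.executeQuery()) {
            if (!rs.next() || rs.getInt(1) != 1) {
                return false; // lock not acquired within the timeout
            }
        }
    }
    try {
        // ... check that the slot is still free and insert the appointment here ...
        return true;
    } finally {
        // must be released on the same connection that acquired it
        try (PreparedStatement rel = conn.prepareStatement("SELECT RELEASE_LOCK(?)")) {
            rel.setString(1, lockName);
            rel.executeQuery();
        }
    }
}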
Answering my own question here. A variation of this is what we eventually ended up doing (in PHP):
<?php
$conn = mysql_connect('localhost', 'name', 'pass');
if (!$conn) {
echo "Unable to connect to DB: " . mysql_error();
exit;
}
if (!mysql_select_db("my_db")) {
echo "Unable to select mydbname: " . mysql_error();
exit;
}
mysql_query('SET AUTOCOMMIT=0'); //very important! this makes FOR UPDATE work
mysql_query('START TRANSACTION');
$sql = "SELECT * from my_mutex_table where entity_id = 'my_mutex_key' FOR UPDATE";
$result = mysql_query($sql);
if (!$result) {
echo "Could not successfully run query ($sql) from DB: " . mysql_error();
exit;
}
if (mysql_num_rows($result) == 0) {
echo "No rows found, nothing to print so am exiting";
exit;
}
echo 'Locked. Hit Enter to unlock...';
$response = trim(fgets(STDIN));
mysql_free_result($result);
echo "Unlocked\n";
?>
To verify it works, run it from two different consoles. Time performance is a bit worse than with standard file-lock-based mutexes, but still very acceptable.
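For completeness, here is the same SELECT ... FOR UPDATE pattern expressed in JDBC terms, assuming a my_mutex_table like the one used in the PHP snippet; this is a sketch, not the code we actually run:
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Holds a row lock on the mutex row while the critical section runs.
static void withMutex(Connection conn, String mutexKey, Runnable criticalSection) throws SQLException {
    conn.setAutoCommit(false); // required, otherwise the FOR UPDATE lock is released immediately
    try (PreparedStatement ps = conn.prepareStatement(
            "SELECT entity_id FROM my_mutex_table WHERE entity_id = ? FOR UPDATE")) {
        ps.setString(1, mutexKey);
        try (ResultSet rs = ps.executeQuery()) {
            if (!rs.next()) {
                throw new IllegalStateException("mutex row not found: " + mutexKey);
            }
        }
        criticalSection.run(); // other sessions block on the same SELECT ... FOR UPDATE
        conn.commit();         // releases the row lock
    } catch (SQLException | RuntimeException e) {
        conn.rollback();       // also releases the row lock
        throw e;
    } finally {
        conn.setAutoCommit(true);
    }
}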