Kafka Connect JDBC sink upsert mode issue - MySQL

I'm trying to replicate a table in real time using Kafka Connect. The database used is MySQL v5.7.
When I use the insert and update modes separately, the connector behaves as expected. However, when I use the upsert mode, no change is observed in the database.
Configuration (filled in via the UI):

Sink:
topic = custom-p2p
connector class = JdbcSinkConnector
name = sink
tasks max = 1
key converter class = org.apache.kafka.connect.storage.StringConverter
value converter class = org.apache.kafka.connect.json.JsonConverter
jdbc url = jdbc:mysql://127.0.0.1:3306/p2p_service_db4?user=root&password=root&useSSL=false
insert mode = upsert
auto create = true
auto evolve = true

Source:
connector class = JdbcSourceConnector
name = source-new
tasks max = 1
key converter class = org.apache.kafka.connect.storage.StringConverter
value converter class = org.apache.kafka.connect.json.JsonConverter
jdbc url = jdbc:mysql://127.0.0.1:3306/p2p_service_db3?user=root&password=root&useSSL=false
table loading mode = timestamp+incrementing
incrementing column name = auto_id
timestamp column name = last_updated_at
topic prefix = custom-
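For reference, here is roughly how the sink settings above map onto the Confluent JDBC connector's standard property names in a worker config file; the mapping from the UI labels is my assumption, and the source config maps analogously (mode, incrementing.column.name, timestamp.column.name, topic.prefix):

name=sink
connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
tasks.max=1
topics=custom-p2p
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
connection.url=jdbc:mysql://127.0.0.1:3306/p2p_service_db4?user=root&password=root&useSSL=false
insert.mode=upsert
auto.create=true
auto.evolve=true
# Note: the JDBC sink's upsert mode also relies on key configuration
# (pk.mode / pk.fields), which does not appear in the settings above.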
The issue I'm having is that when the sink's insert mode is set to insert, insertion takes place properly; when it is set to update, that also works perfectly as expected. However, when the value is changed to upsert, neither insertion nor update takes place.
Please let me know if something is wrong with this setup, and why this mode is not working. Is there some alternative, given that both inserts and updates need to be replicated in the backup DB?
Thank you in advance. Let me know if any other information is needed.

Related

Conditional duplicate key updates with MySQL using Peewee

I have a case where I need to use conditional updates/inserts using peewee.
The query looks similar to what is shown here: conditional-duplicate-key-updates-with-mysql
As of now, what I'm doing is a get_or_create and then, if it is not a create, checking the condition in code and calling an insert with on_conflict_replace.
But this is prone to race conditions, since the condition check happens back in the web server, not in the DB server.
Is there a way to do the same with insert in peewee?
Using: AWS Aurora-MySQL-5.7
Yes, Peewee supports the ON DUPLICATE KEY UPDATE syntax. Here's an example from the docs:
from datetime import datetime

from peewee import Model, TextField, DateTimeField, IntegerField

class User(Model):
    username = TextField(unique=True)
    last_login = DateTimeField(null=True)
    login_count = IntegerField()

# Insert a new user.
User.create(username='huey', login_count=0)

# Simulate the user logging in. The login count and timestamp will be
# either created or updated correctly.
now = datetime.now()
rowid = (User
         .insert(username='huey', last_login=now, login_count=1)
         .on_conflict(
             preserve=[User.last_login],  # Use the value we would have inserted.
             update={User.login_count: User.login_count + 1})
         .execute())
Doc link: http://docs.peewee-orm.com/en/latest/peewee/querying.html#upsert
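Since the update argument accepts arbitrary expressions, the conditional check from the linked article can be pushed into the statement itself, so it runs on the database server and avoids the race condition. A minimal sketch, reusing the User model and now from the example above, with a hypothetical condition on login_count:

from peewee import fn

(User
 .insert(username='huey', last_login=now, login_count=1)
 .on_conflict(
     # Hypothetical condition: only bump the count while it is below 10.
     update={User.login_count: fn.IF(User.login_count < 10,
                                     User.login_count + 1,
                                     User.login_count)})
 .execute())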

Grails: DB row update fails silently when domain class contains 'version false'

I'm using Grails v3.2.9.
I have a domain class Offer containing:
static mapping = {
    version false
}
I insert a row into the offer table; then, in another transaction, I try to update the value of one column in that row, but the offer update silently fails while other entities in the same transaction are updated properly.
I save the offer as follows:
offer.save(failOnError: true)
so it is not the case of offer.save() failing silently when validation fails.
However, if I add a version column to the offer table (dbCreate is set to none) and change the Offer domain class to contain
static mapping = {
    version true
}
the row starts to be updated successfully.
When I inspect the audit_log for the offer table, there are only insertion events; no update event is there.
It is very weird, as I have other domain classes containing version = false, and updating those works fine.
Any help would be appreciated.
Since version = false, the property Offer.version is null and the column does not exist in the database. Normally, when you perform updates, Hibernate automatically checks the version property against the version column in the database. So I am just guessing it may be a bug with the Hibernate session trying to check a value that is null. I tried to replicate your scenario but did not succeed.
Have you tried flushing the session when you save?
offer.save(flush: true, failOnError: true)
It is also possible that when you changed the domain from one that required a version to one that doesn't, the underlying database table didn't drop that column.
By default the version column is created as NOT NULL in the physical database. Even though Hibernate doesn't care about the version property in the domain, the physical database won't let that record be inserted, and hence it fails.
While this explains why a record is not inserted, it doesn't explain why no error is thrown. It shouldn't fail silently; it should throw a SQL exception.
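If that is the suspicion, a quick way to check (assuming the table is named offer, as in the question) is to inspect the physical schema and remove the leftover column:

SHOW CREATE TABLE offer;
-- If a leftover NOT NULL version column shows up here, drop it:
ALTER TABLE offer DROP COLUMN version;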

MySQL database column constraint error

I am having a problem and would really appreciate some help. I am using a PHP application to interact with the database. I have a database which works perfectly. However, when I backed it up and moved it to another PC, it started to act up, even though it is identical to the original. I have a table called authorized and a column authorized whose default is not null. When I try to update the authorized column, the following message comes up (on the original system it still works fine; I can't seem to find the problem):
Error: Column 'authorized' cannot be null
sql: update `authorized` set `authorized` = :authorized where `authorized_id` = :identity_id;
arr_sql_param: Array
(
    [:identity_id] => 22
    [:authorized] =>
)
Sent From: update_grid()
Reading your code:
public function sql_update() {
    ...
    // make sql statement from keys and values in $_POST data
    foreach ($_POST as $key => $val) {
        $sql_set .= "`$key` = :$key, ";
        $sql_param[":$key"] = $this->cast_value($val, $key);
    ...
    // posted values are saved here for pdo execute
    $sql_param = array();
    $sql_param[':identity_id'] = $identity_id;
    ...
    $sql_final = "update `$this->table` set $sql_set where `$this->identity_name` = :identity_id;";
    ...
And the error:
Error: Column 'authorized' cannot be null
sql: update authorized set authorized = :authorized where authorized_id = :identity_id;
Looking at the error output, I see that the :authorized parameter is included in the statement but its value is empty.
That leads to two possible conclusions:
1. The database schemas may differ between the two systems: the same code works fine on your development system (your PC), but in the new environment the column cannot be NULL. That is, on the new environment the authorized column in the authorized table is defined NOT NULL, while on your dev environment you don't have this constraint. Compare SHOW CREATE TABLE authorized from both systems to see if this is true.
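For instance, running this on both systems and comparing the column definition would confirm it:

SHOW CREATE TABLE `authorized`;
-- If the new system shows the `authorized` column as NOT NULL while the
-- dev system allows NULL, that difference explains the error.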
2. Since the column value for authorized comes from the $_POST array, is it possible that it's simply not posted by the browser for some reason? I can't find a reason for that in your code, though.

Adding output converter to pyodbc connection in SQLAlchemy

using:
Python 2.7.3
SQLAlchemy 0.7.8
PyODBC 3.0.3
I have implemented my own Dialect for the EXASolution DB using PyODBC as the underlying db driver. I need to make use of PyODBC's output_converter function to translate DECIMAL(x, 0) columns to integers/longs.
The following code snippet does the trick:
pyodbc = self.dbapi
dbapi_con = connection.connection
dbapi_version = dbapi_con.getinfo(pyodbc.SQL_DRIVER_VER)
(major, minor, patch) = [int(x) for x in dbapi_version.split('.')]
if major >= 3:
    dbapi_con.add_output_converter(pyodbc.SQL_DECIMAL, self.decimal2int)
I have placed this code snippet in the initialize(self, connection) method of
class EXADialect_pyodbc(PyODBCConnector, EXADialect):
The code gets called and no exception is thrown, but this is a one-time initialization. Later on, other connections are created, and these connections are not passed through my initialization code.
Does anyone have a hint on how connection initialization works with SQLAlchemy, and where to place my code so that it gets called for every new connection created?
This is an old question, but something I hit recently, so an updated answer may help someone else along the way. In my case, I was trying to automatically downcase MSSQL UNIQUEIDENTIFIER columns (GUIDs).
You can grab the raw (pyodbc) connection through the session or engine to do this:
from pyodbc import SQL_DECIMAL
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

# decimal2int is the converter function from the question
engine = create_engine(connection_string)
make_session = sessionmaker(engine)
...
session = make_session()
session.connection().connection.add_output_converter(SQL_DECIMAL, decimal2int)
# or
connection = engine.connect().connection
connection.add_output_converter(SQL_DECIMAL, decimal2int)
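To get the converter registered on every new connection (the original question), one option is SQLAlchemy's pool "connect" event, which fires once for each new DBAPI connection. A minimal sketch, reusing pyodbc and the decimal2int converter from the question:

import pyodbc
from sqlalchemy import create_engine, event

engine = create_engine(connection_string)

@event.listens_for(engine, "connect")
def register_converters(dbapi_con, connection_record):
    # Runs for every new DBAPI connection in the pool, so all
    # connections get the converter, not just the first one.
    dbapi_con.add_output_converter(pyodbc.SQL_DECIMAL, decimal2int)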

Problem with querying in Amazon RDS

Hi, I have a Python script that connects to an Amazon RDS machine and checks for new entries.
My script works perfectly on localhost, but on RDS it does not detect the new entry; once I cancel the script and run it again, I get the new entry. For testing I tried it out like this:
import MySQLdb

cont = MySQLdb.connect("localhost", "root", "password", "DB")
cursor = cont.cursor()
for i in range(0, 100):
    cursor.execute("Select count(*) from box")
    A = cursor.fetchone()
    print A
During this process, when I add a new entry, the script does not detect it, but when I close the connection and run it again, I get the new entry. Why is this? I checked the cache and it was at 0. What else am I missing?
I have seen this happen in MySQL command-line clients as well.
My understanding (from other people linking to this URL) is that Python's DB API often silently creates transactions: http://www.python.org/dev/peps/pep-0249/
If that is true, then your cursor is looking at a consistent snapshot of the data, even after another transaction adds rows. You could try doing a rollback on the connection (cont.rollback(); per PEP 249, rollback lives on the connection, not the cursor) in the for loop to end the implicit transaction that the SELECT is running in.
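Applied to the test loop from the question, that suggestion would look roughly like this:

for i in range(0, 100):
    cursor.execute("Select count(*) from box")
    print cursor.fetchone()
    cont.rollback()  # end the implicit transaction so the next SELECT sees new commits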
I got the solution to this: it is due to the transaction isolation level in MySQL. All I had to do was set the default transaction isolation level:
transaction-isolation = READ-COMMITTED
And since I am using Django for this, I had to add the following to the Django database settings:
'OPTIONS': {
    "init_command": "SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED"
}
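Outside Django, the same fix can be applied directly to the test script from the question; a minimal sketch:

import MySQLdb

cont = MySQLdb.connect("localhost", "root", "password", "DB")
cursor = cont.cursor()
# READ COMMITTED takes a fresh snapshot for each statement, so rows
# committed by other sessions become visible without reconnecting.
cursor.execute("SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED")
for i in range(0, 100):
    cursor.execute("Select count(*) from box")
    print cursor.fetchone()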