Can the Debezium MySQL connector be configured so messages it emits contain metadata that can be used to infer which MySQL user performed the change?
My use case is CDC for a web application whose production and staging environments share the same MySQL database. I'd like to route events originating from staging to a separate set of topics, so they can be observed by staging-environment consumers only (and vice versa).
I would also like to avoid requiring the application to explicitly store something on each changed row indicating which environment made the change; that would require schema changes on all tables Debezium is capturing, and couple the application logic to the CDC implementation. The application environments connect to the database with separate database users, so perhaps a connection ID could be used to infer the environment somehow?
My understanding is that MySQL binlog entries do not contain metadata about which user or connection a change was performed by, so I'm unsure how else I could tie changes to environments without explicit help from application logic.
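One possible angle on the connection-ID idea: Debezium's MySQL `source` block does expose a `thread` field (the server thread/connection ID of the event, when available), so the idea can at least be prototyped on the consumer side. Below is a rough sketch with kafkajs; the topic name and the thread-to-environment mapping are assumptions (the mapping would have to be maintained separately, e.g. by polling `information_schema.processlist`), and whether `thread` is actually populated for row-based binlog events would need to be verified:

```typescript
// Experimental consumer-side router based on Debezium's source.thread field.
// Topic name and thread mapping are hypothetical; thread may be null.
import { Kafka } from "kafkajs";

const kafka = new Kafka({ clientId: "env-router", brokers: ["localhost:9092"] });
const consumer = kafka.consumer({ groupId: "env-router" });

// Hypothetical set of MySQL thread IDs known to belong to the staging user;
// it would need to be refreshed by polling information_schema.processlist.
const stagingThreads = new Set<number>();

async function run() {
  await consumer.connect();
  await consumer.subscribe({ topics: ["dbserver1.inventory.orders"] }); // assumed topic
  await consumer.run({
    eachMessage: async ({ message }) => {
      const event = JSON.parse(message.value?.toString() ?? "{}");
      // With the JSON converter's schemas enabled the envelope is under `payload`.
      const source = event.payload?.source ?? event.source ?? {};
      const thread: number | null = source.thread ?? null;
      const env =
        thread !== null && stagingThreads.has(thread) ? "staging" : "production";
      // Re-publishing to an environment-specific topic is omitted here.
      console.log(`event from thread ${thread} -> ${env}`);
    },
  });
}

run().catch(console.error);
```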
Related
I have an AWS RDS Aurora database for the production environment, and I need to build a database for the development environment.
I have tried AWS Database Migration Service (DMS), snapshots, and mysqldump.
First, DMS doesn't support migrating AUTO_INCREMENT columns and indexes, but I need them.
Second, restoring a snapshot overwrites the development database's user data (the MySQL user accounts used for connecting) with production's, and I want to keep those different between environments.
Last, mysqldump is very slow, and I'm concerned it may degrade the production database's performance.
So I'm looking for another way.
This is what I want:
Everything except the MySQL user data, including AUTO_INCREMENT values and indexes, must be migrated.
The development database must be synced to the production database (reset and re-migrated) every day, automatically.
The migration process should be as fast as possible, with as little impact on production performance as possible.
Does anyone know how to build this?
Using an AWS DMS task, you can specify what schema to migrate and the type of migration.
The task-creation page should look similar to this: [screenshot of the AWS DMS task settings page]
Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ). Each AZ runs on its own physically distinct, independent infrastructure, and is engineered to be highly reliable.
Yes you can. Migrations are among the most time-consuming tasks handled by database administrators (DBAs). Although the task becomes easier with the advent of managed migration services such as the AWS Database Migration Service (AWS DMS), many large-scale database migrations still require a custom approach due to performance, manageability, and compatibility requirements.
Extra
* Amazon RDS provides high availability and failover support for DB instances using Multi-AZ deployments.
* ElastiCache improves the performance of your database by caching query results.
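If the daily reset needs to be automated rather than clicked through the console, the same task can be created programmatically. A rough sketch with the AWS SDK for JavaScript (the ARNs, region, and schema name are placeholders; the endpoints and replication instance must already exist, and per the question's caveat, DMS alone won't carry over AUTO_INCREMENT values and secondary indexes):

```typescript
// Sketch: creating a DMS full-load task via the AWS SDK (v2).
// All ARNs and names below are placeholders.
import AWS from "aws-sdk";

const dms = new AWS.DMS({ region: "us-east-1" });

// Include every table in the application's schema ("mydb" is a placeholder).
const tableMappings = {
  rules: [
    {
      "rule-type": "selection",
      "rule-id": "1",
      "rule-name": "include-app-schema",
      "object-locator": { "schema-name": "mydb", "table-name": "%" },
      "rule-action": "include",
    },
  ],
};

dms
  .createReplicationTask({
    ReplicationTaskIdentifier: "prod-to-dev-daily",
    SourceEndpointArn: "arn:aws:dms:us-east-1:123456789012:endpoint/source",   // placeholder
    TargetEndpointArn: "arn:aws:dms:us-east-1:123456789012:endpoint/target",   // placeholder
    ReplicationInstanceArn: "arn:aws:dms:us-east-1:123456789012:rep/instance", // placeholder
    MigrationType: "full-load",
    TableMappings: JSON.stringify(tableMappings),
  })
  .promise()
  .then((res) => console.log(res.ReplicationTask?.Status))
  .catch(console.error);
```

The daily "reset and re-migrate" could then be a scheduled job (e.g. a cron or EventBridge rule) that restarts this task.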
I want to test Node and Deno side by side and route users via a proxy to a single MySQL DB.
How will it impact the database?
Can timestamp conflicts arise from concurrent CRUD operations, or does MySQL have some mechanism to cope with connections from multiple servers?
What about the database's performance and memory footprint in RAM? Will it occupy the same amount of space as if there were only one server issuing CRUD requests?
What would happen if I added another server that connects to the DB, for example a Java or Go server?
It will have virtually no impact on the database beyond that of any other concurrent process connecting to it.
This is not a Deno issue but a database issue.
The exact same problems can happen even with your current single Node.js instance, because virtually all systems these days are concurrent/parallel.
You might as well replace the Deno app with another Node.js instance, Java, etc., or even your current Node.js app.
Data in a database can change once you have loaded it into the client, and it is up to you to implement the code that handles such scenarios.
The statement that MySQL is not "ACID" is neither negative nor relevant in and of itself, because it has no context.
If you need absolutely guaranteed integrity on a record, make sure you lock it when you select it (in MySQL, SELECT ... FOR UPDATE), but there will be a trade-off.
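For illustration, here is what that pessimistic lock might look like from Node with the mysql2 driver; the `accounts` table, its columns, and the credentials are invented for the example:

```typescript
// Sketch of pessimistic locking with mysql2. SELECT ... FOR UPDATE holds a
// row lock until the transaction ends, so no other process (Node, Deno,
// Java, Go, ...) can modify the row between our read and our write.
import mysql from "mysql2/promise";

async function adjustBalance(accountId: number, delta: number) {
  const conn = await mysql.createConnection({
    host: "localhost",
    user: "app",
    password: "secret",
    database: "bank", // placeholder schema
  });
  try {
    await conn.beginTransaction();
    // Lock the row for the duration of this transaction.
    const [rows] = await conn.execute(
      "SELECT balance FROM accounts WHERE id = ? FOR UPDATE",
      [accountId]
    );
    const balance = (rows as { balance: number }[])[0].balance;
    await conn.execute("UPDATE accounts SET balance = ? WHERE id = ?", [
      balance + delta,
      accountId,
    ]);
    await conn.commit();
  } catch (err) {
    await conn.rollback();
    throw err;
  } finally {
    await conn.end();
  }
}
```

The trade-off mentioned above is throughput: concurrent writers to the same row now queue behind the lock.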
I have a requirement to sync local DB data with a central SQL Server. The remote users (mostly around 10 people) will be using laptops that host the application and a local DB. The internet connection is not 24x7. During periods of no connectivity, the laptop user should be able to make changes in the local DB, and once the connection is restored, the data should be synced with the central SQL Server automatically. The sync usually consists of just data updates. I have looked at two options, Sync Framework and Merge replication. I can't use Sync Framework as I am not a C# expert. For Merge replication, I believe additional hardware is required, which is not possible. The solution should be easy to develop and maintain.
Are there any other options available? Is it possible to use SSIS in this scenario?
I would use Merge replication for this scenario. I'm unaware of any "additional hardware" requirements.
SSIS could do this job but it does not give you any help out-of-the-box - you would be reinventing the wheel for a very common and complex scenario.
An idea...
The idea requires an intermediate database (an exchange database).
In the exchange database you have tables with data for each direction of synchronization, and you use change tracking on both the exchange DB and the central DB.
On the local database side, this could mean rows with flags:
row was created in the local DB
row came from the exchange DB
row requires resynchronization (e.g. when it is updated)
Synchronization between the local DB and the exchange DB:
When synchronizing, first send the data from the local DB (marked as created locally or requiring resynchronization), then download the data from the exchange DB (marked by change tracking as changed).
Synchronization between the exchange DB and the central DB is simple, based on the database engine's change tracking.
More about Change Tracking here!
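As a concrete sketch of the exchange/central side, polling change tracking from Node might look like this with the `mssql` package; the `dbo.Orders` table, its `Id` key, and the connection details are placeholders, and Change Tracking must already be enabled on the database and the table:

```typescript
// Sketch: reading rows changed since the last sync via SQL Server Change
// Tracking. Table, key column, and credentials are placeholders.
import sql from "mssql";

async function pullChanges(lastSyncVersion: number) {
  const pool = await sql.connect({
    server: "central-sql",
    database: "exchange",
    user: "syncuser",
    password: "secret",
    options: { trustServerCertificate: true },
  });

  // CHANGETABLE returns everything changed after @ver, joined back to the
  // base table to fetch the current column values.
  const changes = await pool
    .request()
    .input("ver", sql.BigInt, lastSyncVersion)
    .query(`
      SELECT ct.SYS_CHANGE_VERSION, ct.SYS_CHANGE_OPERATION, ct.Id, o.*
      FROM CHANGETABLE(CHANGES dbo.Orders, @ver) AS ct
      LEFT JOIN dbo.Orders AS o ON o.Id = ct.Id;
    `);

  // Remember the new high-water mark for the next poll.
  const version = await pool
    .request()
    .query("SELECT CHANGE_TRACKING_CURRENT_VERSION() AS v;");

  return { changes: changes.recordset, nextVersion: version.recordset[0].v };
}
```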
We are building a Node.js application that is connected to a MySQL database. The main purpose of this application is to manage a sport event: manage the entries, draw, results, etc. We host this application at a hosting provider (the global instance), but because we don't want to take any risk with a failing internet connection at the event location, we also want to be able to run a local instance of this server that communicates with the global instance.
The local instance has to be the 'master' instance of the specific event, that we would like to use for editing match information. We would like to use the global instance as 'publish' server that serves live match information to viewers on the website.
Therefore we are planning to do the following:
On the global instance (MySQL): Master the 'main' database (with user information, etc.) and slave the event specific database.
On the local instance (MySQL): Slave the 'main' database and master the event specific database.
Once we have saved a result in the local event database, it will start replicating that information to the global database.
The problem
Our problem is that we want to publish the live match information as fast as possible. But we don't know how we can tell whether the global node.js application's database has been updated. Long polling on the MySQL database doesn't seem to be a very good idea. Sending an event from the local to the global node.js application could be a solution, but then the local application needs to know when the replication is finished.
We have been thinking about this problem for a long time. Is there a way to generate a Node.js event when the replication is ready, or is there a whole different method which we can use to gain the same result?
MySQL Enterprise Monitor can report replication events using SMTP or SNMP.
There are Node modules for both, such as node-snmp-server for SNMP or simplesmtp for SMTP.
So it would be possible to configure an alarm in MySQL Enterprise Monitor and implement a listener in the Node process to receive the notification.
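A rough sketch of the SMTP side with the simplesmtp module mentioned above; the port and the event name are arbitrary choices, and parsing the mail body to distinguish replication alarms from other alerts is left out:

```typescript
// Sketch: an in-process SMTP listener that turns an incoming Enterprise
// Monitor alert mail into a Node event the app can react to, e.g. by
// pushing fresh match data to website viewers.
import { EventEmitter } from "events";
// simplesmtp ships no type definitions, hence require().
const simplesmtp = require("simplesmtp");

export const replicationEvents = new EventEmitter();

const smtp = simplesmtp.createServer({ disableDNSValidation: true });
smtp.listen(2525); // point the monitor's SMTP notification target here

smtp.on("startData", (connection: any) => {
  connection.body = "";
});
smtp.on("data", (connection: any, chunk: Buffer) => {
  connection.body += chunk.toString();
});
smtp.on(
  "dataReady",
  (connection: any, done: (err: Error | null, id?: string) => void) => {
    // Emit a Node event carrying the raw alert mail body.
    replicationEvents.emit("replication-alert", connection.body);
    done(null, "queued");
  }
);
```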
I have read MongoDB's replication docs and MySQL's Cluster page; however, I cannot figure out how to configure my web apps to connect to the database.
My apps will have connection information: database host, username, password, etc. Even with multi-server functionality, do I need a big master with a fixed IP that distributes the load to the servers? Then, how can I prevent a single point of failure? Are there any common approaches to this problem?
Features such as MongoDB's replica sets are designed to enable automatic failover and recovery. These will help avoid single points of failure at the database level if properly configured. You don't need a separate "big master" to distribute the load; that is the gist of what replica sets provide. Your application connects using a database driver and generally does not need to be aware of the status of individual replicas. For critical writes in MongoDB you can request that the driver does a "safe" commit which requires data to be confirmed written to a minimum number of replicas.
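As an illustration of that last point, connecting from Node with the official mongodb driver might look like the following; the host names, replica-set name, and database/collection names are placeholders:

```typescript
// Sketch: connecting to a MongoDB replica set. The driver discovers the
// members from the seed list and fails over automatically; w: "majority"
// makes writes acknowledge only once a majority of members have them
// (the "safe" commit described above).
import { MongoClient } from "mongodb";

const uri =
  "mongodb://db1.example.com:27017,db2.example.com:27017,db3.example.com:27017" +
  "/?replicaSet=rs0"; // placeholder hosts and replica-set name

async function main() {
  const client = new MongoClient(uri, { writeConcern: { w: "majority" } });
  await client.connect();

  // A critical write: acknowledged only after a majority of members have it.
  await client
    .db("app")
    .collection("events")
    .insertOne({ type: "signup", at: new Date() });

  await client.close();
}

main().catch(console.error);
```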
To be comprehensively insulated from server failures, you still have to consider other factors such as physical failure of disks, machines, or networking equipment and provision with appropriate redundancy. For example, your replica sets should be distributed across more than one server or instance. If all of those instances are in the same physical colocation facility, your single point of failure could still be the hopefully unlikely (but possible) case where the colocation facility loses power or network.