MySQL: replicate two databases

I have a database set up online where I take user registrations and issue each registrant a pass to enter the event (a public gathering). At the event I also have to perform registrations, but the internet connection there is not always stable, so I am considering setting up a database offline. I found some guides on MySQL replication, but I am not getting the full picture of whether it can work the way I want.
At the event I will run a database on my localhost and register users offline, while new registrations continue to come in online (on another server hosting a copy of the same database). The users table has an auto-increment primary key, which is going to be a huge problem when syncing the two databases with MySQL replication: when both servers add a record to the same table, they will assign the same id. Is there something I can do to avoid this issue?

Well, master-master replication does exist, and may suit your purposes, but it has some drawbacks.
I think you should consider taking registrations in a different form when on-site, then insert them into your main database when you get home. This is a pretty common way to do things.
If you really need to do it with MySQL, come up with a "merge" tool that can re-create the users registered off-site, on demand; as you pointed out, you'll need to account for different auto-increment IDs, but that isn't necessarily an actual problem. It just needs to be dealt with.
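For example, a rough sketch of that merge step, assuming the off-site rows were first loaded into a staging table (here called offline_users, a hypothetical name) and that the users table has these columns:

-- Omit the id column so the main database assigns fresh auto-increment values
-- to every user created off-site.
INSERT INTO users (name, email, registered_at)
SELECT name, email, registered_at
  FROM offline_users;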

If your primary concern is that you have two systems that need to generate IDs that are unique across both without any coordination, there are a couple of things you can do:
Use auto_increment_increment and auto_increment_offset. Simply put, one of your two servers would use even IDs and the other one odd IDs (see the sketch after this list).
Use a different key, maybe a natural key (an email address?).
Use something like a UUID (or GUID if you're in the Microsoft world; they're really the same thing).
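A minimal sketch of the first option, assuming exactly two servers that must never collide; these settings would go in each server's my.cnf:

# On the on-site (offline) server:
[mysqld]
auto_increment_increment = 2   # step ids by 2...
auto_increment_offset    = 1   # ...starting at 1, so this server produces odd ids

# On the online server:
[mysqld]
auto_increment_increment = 2   # step ids by 2...
auto_increment_offset    = 2   # ...starting at 2, so this server produces even ids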

You can dump the database from the server to your local machine from the terminal like below.
Run command: mysqldump -h hostname -u username -ppassword databasename > C:\path_to_file
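To load that dump into the local (offline) server, something along these lines should work; hostname, username and the file path are placeholders, and the database must already exist locally (CREATE DATABASE databasename;):
mysql -h localhost -u username -p databasename < C:\path_to_file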

Related

Concept of schema in MySQL

Is there a suggested way to use "schemas" in MySQL? For example, if I have one database called events and I want to have two environments, dev and prod, what might be a way to do that? Currently I add a table prefix, but it seems a bit hack-ish.
You create a separate database for that, because MySQL does not have the concept of a schema within a database the way e.g. PostgreSQL does.
You create one database for production, e.g. prod_database, with the table names event and event_type, and one database for dev, e.g. dev_database, with the same table names event and event_type, since you always want the same table names across environments.
You could (and should) even use the same database name if you host the databases on different servers, which for production and development/staging would also make sense, e.g. to test server version upgrades on one setup without affecting production.
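A minimal sketch of that layout; the column definitions are placeholders:

CREATE DATABASE prod_database;
CREATE DATABASE dev_database;

-- Same table names in every environment; only the database differs.
CREATE TABLE prod_database.event (
    id   INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(255) NOT NULL
);
CREATE TABLE prod_database.event_type (
    id   INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(255) NOT NULL
);
CREATE TABLE dev_database.event      LIKE prod_database.event;
CREATE TABLE dev_database.event_type LIKE prod_database.event_type;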

mysqldump: how to fetch dependent rows

I'd like a snapshot of a live MySQL DB to work with on my development machine. The problem is that the DB is too large, so my thought was to execute:
mysqldump [connection-info-here] --no-autocommit --where="1 limit 1000" mydb > /dump.sql
I think this will give me the first thousand rows of every table in database mydb. I anticipate that the resulting dataset will break a lot of foreign key constraints since some records will be missing. As a result the application I mean to run on the dev machine will fail.
Is there a way to mysqldump a sample of the database while ensuring that all records dumped abide by key constraints? (For instance, if a row with a foreign key is dumped, the matching record in the referenced table is also dumped.)
If that isn't possible, how do you guys deal with this problem?
No, there's no option for mysqldump to dump only rows that match in foreign key relationships. You already know about the --where option, and that won't do it.
I've had the same task as you, to dump a subset of data but only data that is related. For example, for creating a test instance.
I've been using MySQL for many years, I've worked as a MySQL consultant and trainer, and I try to keep up with current tools. I have never heard of any MySQL tool that does this operation.
The only solution I can suggest is to write your own script to dump table by table using SELECT...INTO OUTFILE.
It's sometimes easier to write a custom script just for your specific schema, than for someone to write a general-purpose tool that works for everyone's schema.
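For example, a hand-rolled pair of queries for a hypothetical orders/customers schema might look roughly like this; the table names, column names and date filter are all assumptions, and the point is simply to dump exactly the parent rows that the sampled child rows reference (note that INTO OUTFILE writes on the database server host and needs the FILE privilege):

-- A sample of child rows.
SELECT *
  INTO OUTFILE '/tmp/orders_sample.csv'
  FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
  FROM orders
 WHERE created_at >= '2013-01-01';

-- Only the parent rows referenced by that sample, so the foreign key holds.
SELECT c.*
  INTO OUTFILE '/tmp/customers_sample.csv'
  FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
  FROM customers AS c
  JOIN (SELECT DISTINCT customer_id
          FROM orders
         WHERE created_at >= '2013-01-01') AS o
    ON o.customer_id = c.id;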
How I have dealt with this problem in the past is I don't copy data from the live database. I find some other way to create a subset of fake data for testing. It's probably better to create synthetic data anyway, because then you don't risk accidentally using live data in your dev/test environment, in case some of it is private data.

MySQL backup multi-client DB for single client

I am facing a problem for a task I have to do at work.
I have a MySQL database which holds the information of several clients of my company, and I have to create a backup/restore procedure to back up and restore that information for any single client. To clarify: if client A loses his data, I have to be able to recover it while being sure I am not modifying the data of clients B, C, ...
I am not a DB administrator, so I don't know if I can do this using standard mysql tools (such as mysqldump) or any other backup tools (such as Percona Xtrabackup).
For the backup, my research (and my intuition) led me to this possible solution:
create the restore INSERT statements using the INSERT ... SELECT syntax (http://dev.mysql.com/doc/refman/5.1/en/insert-select.html), as sketched below;
save these inserts into a SQL file, either in the proper order or letting the script temporarily disable foreign key checks so the constraints are satisfied;
of course, I do this for all my clients on a daily basis, using one file per client (and day).
Then, if I have to restore the data for a specific client:
I delete all of his remaining data;
I restore the correct data using the SQL file I created during the backup.
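A rough sketch of those restore steps for one hypothetical table; the table name, the client_id column and the backup_orders staging table are assumptions about the schema:

-- Restore client 42 from a previously saved copy of the table.
SET FOREIGN_KEY_CHECKS = 0;

DELETE FROM orders WHERE client_id = 42;
INSERT INTO orders
SELECT * FROM backup_orders WHERE client_id = 42;

SET FOREIGN_KEY_CHECKS = 1;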
This way I believe I can recover the right data for client A without touching the data of client B. Would my solution actually work? Is there a better way to achieve the same result? Or do you need more information about my problem?
Please forgive me if this question is not well formed, but I am new here and this is my first question, so I may be imprecise. Thanks anyway for the help.
Note: we will also backup the entire database with mysqldump.
You can use the --where parameter and provide a condition like client_id=N. Of course I am making an assumption, since you don't provide any information on your schema.
If you have a star schema, then you could probably write a small script that backs up all the lookup tables (assuming they are adequately small) using the --tables parameter, and use the --where condition for your client data table. For additional performance, perhaps you could partition the table by client_id.
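A rough sketch of that approach; the database name, table names and client id are assumptions about your schema:

# Lookup tables in full, client data filtered by client_id.
mysqldump -u backupuser -p mydb lookup_countries lookup_plans > lookups.sql
mysqldump -u backupuser -p --where="client_id=42" mydb orders invoices > client_42.sql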

Using a table in another database

I've been asked to build a module for a web application, which will also be used as a stand-alone website. Since this is the case, I wanted to use a separate database, and wondered if there was a way of having a table in one database act as a "pointer" to a table in another database.
For example, I have databases db1 and db2
db1 has table users, so I want to have db2.users point to db1.users.
I know I could set up triggers and whatnot to sync two separate tables, but this sounds cooler :)
EDIT
So in my code I'm using sql such as
select * from users
Now, at the database level, I want "users" to actually be db1.users. Then, if I want to, I can remove the alias/pointer and "select * from users" will point to the users table in the current database. I guess what I'm looking for is a "global alias" type of thing.
Just use it directly from another database?
SELECT ... FROM `db1`.`users` LEFT JOIN `db2`.`something`
The federated storage engine offers something similar to the feature you asked for.
And if your databases are on the same database server, the federated storage engine sounds a bit like overkill to me. You may want to create a view instead.
Both methods won't be useful if db1 is not available. As Emmerman already points out, you need to store the data in db2 if you want to prepare for the case of db1 being unavailable.
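A minimal sketch of the view approach mentioned above, assuming both databases live on the same server:

-- Inside db2, expose db1.users under the local name "users".
CREATE VIEW db2.users AS SELECT * FROM db1.users;

-- Later, dropping the view and creating a real users table in db2 makes
-- "SELECT * FROM users" point at the local table again.
-- DROP VIEW db2.users;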

Default database for MySQL

Is there a way to allocate a default database to a specific user in MySQL so they don't need to specify the database name while making a query?
I think you need to revisit some concepts - as Lmwangi points out, if you are connecting with the mysql client then my.cnf can set it.
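For the mysql command-line client case, a minimal sketch of that my.cnf entry (the database name is a placeholder, and this affects only the mysql client, not other programs or connectors):

[mysql]
# default database used when this client connects
database = myapp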
However, your use of the word query suggests that you are talking about connecting from some programming environment - in that case you will always need a connection object, and having a default database to connect to will not make creating it any faster or simpler. Efficiently managing your connection(s) might be interesting for you, but for that you should let us know exactly what your environment is.
If you select a default database (for example with USE), you don't need to specify the database name in every query, but you do need to select that database once per connection.
The best thing to do would be to use a MySQL trigger on the connection. However, MySQL only accepts triggers for updates, deletes and inserts. A quick Google search yielded an interesting stored procedure alternative; see MySQL Logon trigger.
When you assign permissions to each user group, you can also specify, in the same file, several other things for that group, for example the database that the group's users need to use.
You can do this with a specification file, depending on the language you are working with, as a simple variable. Later, you only have to read that variable to know which database you need to work with. But, I repeat, it depends on the language. The specification file can be an XML file, a phpspecs file, or anything like that.