MySQL: Location of data when developing on different machines

Having just installed MySQL, which I want to use for research software development, I face the question of where I should store my data files.
I have three computers (home, work, laptop), all of which have a development environment (Java/Eclipse) and I want all those machines to be able to access the database(s).
If I just had one machine, it would be a no-brainer and I would just use localhost.
I can't decide where best to locate the data files. The options I am considering (but happy to hear other views) are:
1) Just store on the local machine and let Dropbox take care of syncing the data.
The data might get quite large, though: it could exceed the storage capacity of at least one of the machines, and syncing might take a long time?
2) Use a Network Storage device (I have a Synology unit)
3) I have my own domain registered so I could use that?
4) Use a cloud-based service.
I'm not sure how these work, what they cost, or what the backup options are.
In all the above, unless I use localhost, I am concerned about access times if everything has to go "over the internet", especially if I make heavy use of SQL queries/updates.
I am also worried about backing up the databases in case I need to restore.
You might ask why I want to use MySQL? In the future I might want to do a PHP roll-out, and MySQL seems the way to go.
Thanks in advance.
G

Maybe consider local installs on each machine, but with MySQL replication. That way, if your laptop doesn't have internet service, you can still work with the local data, even though it might be a tiny bit out of date.
Replication also partly addresses backups.
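If you want to see how out of date a local replica is, you can ask MySQL directly. A minimal JDBC sketch, assuming classic replication and placeholder connection details:

import java.sql.*;

public class ReplicaLagCheck {
    public static void main(String[] args) throws SQLException {
        // Connect to the local replica (hypothetical credentials).
        try (Connection c = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/mydb", "user", "pass");
             Statement st = c.createStatement();
             // SHOW SLAVE STATUS reports replication health on a classic replica.
             ResultSet rs = st.executeQuery("SHOW SLAVE STATUS")) {
            if (rs.next()) {
                // NULL here means the replication threads are not running.
                System.out.println("Seconds behind master: "
                        + rs.getString("Seconds_Behind_Master"));
            } else {
                System.out.println("This server is not configured as a replica.");
            }
        }
    }
}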

I am not sure how large your application is. If it's huge, you could use a dedicated database server; otherwise, use one of your computers to store the data (pick the one with the most disk space and memory).
You may be wondering how to access data on a different computer. There is actually no need to worry about that, because the application always connects through a database connector using the IP and port defined in your configuration. Since you use Java/Eclipse, you would use JDBC to access the database.
For example, a JDBC connection looks like this:
// Replace localhost and 3306 with the IP and port of your database server;
// "myuser" here is the database (schema) name, followed by the username and password.
Connection conn = DriverManager.getConnection("jdbc:mysql://localhost:3306/myuser", "root", "root");
String sql = "select * from yourtable"; // your query
Statement st = conn.createStatement(); // createStatement() already returns a Statement; no cast needed
ResultSet rs = st.executeQuery(sql);   // iterate rs to read the rows, then close everything

Related

Best database setup for dual POS?

I am currently setting up my first POS system and would like some advice on the best database setup for my situation.
I have two POS computers that need to have seamless database integration between the two of them (when adding a record on the one, it should reflect on the other instantly).
I would like both computers to store a local, updated version of the database on their respective HDDs.
I would also like to back the database up to some sort of cloud storage in case something happens to both PCs.
It is imperative that both PCs can communicate with each other and update the database even when there is NO network connection.
NOTE: Both PCs will be connected to a Wi-Fi router with internet access. I'm currently using phpMyAdmin to manage the database.
I have very limited database knowledge. Would Master-Master replication be the best option for this scenario? If so, what would be the best way to go about it?
Thank you.

Storing MySQL credentials in a MySQL database

This is a similar question to "Storing MS SQL Server credentials in a MySQL Database"
So, in theory, imagine I have one MySQL server. I have a "master" database, and then X number of other generic databases. What I'm looking for is a way for an app (for argument's sake, let's say a PHP web app) to first access the master database. That database then needs to tell the app which database to connect to, handing it the credentials, username, and so on in the process.
What is the best way to go about this?
The ideas I have so far:
Store the credentials in the master database for all the other databases. These credentials would of course be encrypted in some way, probably AES. The app would get the encrypted credentials, decrypt, and connect.
Store the credentials elsewhere, maybe on a completely separate server. When the master database is accessed, it returns some sort of token, which can be used to access the credential storage. Again, encrypted via AES.
Use some sort of existing system that does exactly this and that I am simply not aware of.
Not do this at all, and come up with a completely different approach.
To give a little example: "master" would contain a list of clients. Each client would have its own separate database, with its own permissions etc.
I've had no reason to do this kind of thing myself, but your first two ideas sound good to me, and (as long as you include the server address) they aren't even necessarily separate ideas: you could have some clients on the same server as master and some elsewhere, and the client logic won't need to care. The only issue I can see is keeping the data in the "master" schema synced with the server's security data. Also, I wouldn't bother keeping database permissions in the master schema, since presumably all clients have the same permissions, just scoped to their own schema. If you have "permissions" (settings) that limit what specific clients can do (perhaps limited by contract or features paid for), it would be much easier to keep those in that client's schema, but somewhere their db user cannot change the data.
Edit: It is a decent idea to have separate database users in this kind of situation; it lets you worry less about queries from one client inadvertently (or perhaps maliciously) modifying another's data (each client account should only have permission to access its own schema). It would probably also be a good idea to keep the code for the "master" coordination (and connection) somewhat segregated from the client code base, to prevent access to that database accidentally leaking into the client code; even if the credentials are encrypted, you probably don't want the client code to have any more access than necessary to your connection info.
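For illustration, a minimal sketch of provisioning such a per-client schema and user over JDBC; all names are made up, and it must run on an administrative connection:

import java.sql.*;

public class ProvisionTenant {
    // Creates a schema and a user that can only touch that schema.
    // DDL cannot be parameterized, so validate `tenant` (e.g. [a-z0-9_]+) first.
    public static void provision(Connection admin, String tenant, String password)
            throws SQLException {
        try (Statement st = admin.createStatement()) {
            st.executeUpdate("CREATE DATABASE " + tenant);
            st.executeUpdate("CREATE USER '" + tenant + "'@'%' IDENTIFIED BY '" + password + "'");
            // Grant access to this client's schema only -- nothing else.
            st.executeUpdate("GRANT ALL PRIVILEGES ON " + tenant + ".* TO '" + tenant + "'@'%'");
        }
    }
}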
I did something like this not long ago. It sounds like you're trying to build some kind of one-database-per-tenant multi-tenant system.
Storing encrypted credentials in a directory database is fine, since there's really no fundamentally different way to do it. At some point, you need to worry about storing some secret (your encryption key) no matter what you do.
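As an illustration of the first idea above, a minimal sketch of encrypting and decrypting a credentials string with AES-GCM via the standard javax.crypto API; where the key comes from is exactly the secret-storage problem just mentioned:

import javax.crypto.Cipher;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Arrays;

public class CredentialCrypto {
    private static final int IV_LEN = 12;     // standard GCM nonce size
    private static final int TAG_BITS = 128;  // authentication tag length

    // key must be 16, 24, or 32 bytes; in practice it comes from your secret store.
    public static byte[] encrypt(byte[] key, String plaintext) throws Exception {
        byte[] iv = new byte[IV_LEN];
        new SecureRandom().nextBytes(iv);
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"),
                new GCMParameterSpec(TAG_BITS, iv));
        byte[] ct = c.doFinal(plaintext.getBytes(StandardCharsets.UTF_8));
        byte[] out = new byte[IV_LEN + ct.length]; // prepend the IV for decrypt()
        System.arraycopy(iv, 0, out, 0, IV_LEN);
        System.arraycopy(ct, 0, out, IV_LEN, ct.length);
        return out;
    }

    public static String decrypt(byte[] key, byte[] blob) throws Exception {
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.DECRYPT_MODE, new SecretKeySpec(key, "AES"),
                new GCMParameterSpec(TAG_BITS, Arrays.copyOfRange(blob, 0, IV_LEN)));
        return new String(c.doFinal(blob, IV_LEN, blob.length - IV_LEN),
                StandardCharsets.UTF_8);
    }
}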
In my use case, I was able to get away with a setup where the directory just mapped tenants to db-hosts. The database name and credentials for each tenant were derived from the tenant's identifier (a string). So something like, given a TenantID T:
host = whatever the directory says.
dbname = "db_" + T
dbuser = T
dbpass = sha1("some secret string" + T)
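In Java, that derivation is only a few lines. A sketch (the secret string is obviously illustrative; the host still comes from the directory):

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class TenantCredentials {
    public static String dbName(String t) { return "db_" + t; }
    public static String dbUser(String t) { return t; }

    // dbpass = sha1("some secret string" + T), hex-encoded
    public static String dbPass(String t) throws Exception {
        MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
        byte[] digest = sha1.digest(("some secret string" + t).getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) hex.append(String.format("%02x", b));
        return hex.toString();
    }
}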
From a security standpoint, this is no better (actually a bit worse) than storing AES encrypted credentials in the directory database, since if someone owns your app server, they can learn everything either way. But it's pretty good, and easy to implement.
This is also nice because you can imagine extending the idea a bit: get rid of the directory server entirely and write a function that maps your tenant-id to one of N database hosts. That works great until you add or remove db servers, and then you need to handle shuffling things around. See how memcache works, for example.
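For instance, a naive (non-consistent) version of that mapping could be as simple as the following; note that adding or removing a host reshuffles almost every tenant, which is exactly the problem consistent hashing is meant to soften:

public class TenantRouter {
    // Hypothetical fixed pool of database hosts.
    static final String[] HOSTS = { "db0.example.com", "db1.example.com", "db2.example.com" };

    static String hostFor(String tenantId) {
        // Mask the sign bit rather than Math.abs (which breaks on Integer.MIN_VALUE).
        return HOSTS[(tenantId.hashCode() & 0x7fffffff) % HOSTS.length];
    }
}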
You can use Vault to do this in a much more systematic way. In fact, this is a strong use case for it.
Percona has already written a great blog post on it.

MySQL Custom replication

I have a weird scenario and I can't seem to find the best way to make it work.
I have an inventory app hosted on a Linode server. This app handles different companies. Each company has its own database.
All companies have multiple stores located at different locations.
All stores need to use the same app and at the same time the data has to be synced.
I need to replicate the data, but all stores/apps need to be able to write/read and replicate at the same time. The problem is that most of them don't have an internet connection for hours at a time. They are totally disconnected from the world (LAN only).
Conventional MySQL replication is not going to work, because it needs internet connectivity to stay operational.
What do I do???
Is having my own software solution that replicates data at a higher level a good idea? If yes, are there any best practices I should follow?
I also can't use MySQL's auto_increment_increment and auto_increment_offset settings for ID generation, because some of the clients keep opening more and more stores. Do I need to generate my own GUID for each entity to make sure IDs don't clash, by prefixing the store's unique ID (STOREID-UNIQUEID)?
MySQL's replication should be able to handle network downtime, as long as it has enough time, bandwidth, and disk space to download the logs while the connection is up.
I'm not sure how the auto-reconnect handles extended downtime, but you should be able to fix reconnection issues with a scheduled job that restarts replication.
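A sketch of such a job in Java; START SLAVE is harmless if the replication threads are already running, and all connection details are placeholders:

import java.sql.*;
import java.util.concurrent.*;

public class ReplicationNudger {
    public static void main(String[] args) {
        ScheduledExecutorService ses = Executors.newSingleThreadScheduledExecutor();
        ses.scheduleAtFixedRate(() -> {
            // START SLAVE is a no-op when replication is already running.
            try (Connection c = DriverManager.getConnection(
                    "jdbc:mysql://localhost:3306/mysql", "repl_admin", "secret");
                 Statement st = c.createStatement()) {
                st.execute("START SLAVE");
            } catch (SQLException e) {
                e.printStackTrace(); // log it and retry on the next tick
            }
        }, 0, 5, TimeUnit.MINUTES);
    }
}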
GUIDs are a good option for multi-site key generation. The other option is to use a site (client) identifier along with the auto-increment as the PK.
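A sketch of both options; the names and the bit layout are illustrative:

import java.util.UUID;

public class StoreKeys {
    // Option 1: globally unique string key, prefixed with the store id
    // (the STOREID-UNIQUEID format from the question).
    public static String guidKey(String storeId) {
        return storeId + "-" + UUID.randomUUID();
    }

    // Option 2: pack a site identifier next to the local auto-increment.
    // Assumes storeId < 2^15 and counter < 2^48 so the result stays positive.
    public static long packedKey(int storeId, long localAutoIncrement) {
        return ((long) storeId << 48) | localAutoIncrement;
    }
}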

openshift: is it possible to create multiple mysql cartridges?

OpenShift offers scalability. However, it seems to me that if you are using MySQL, MySQL queries/hits will be the bottleneck in the end (assuming you are lucky enough to generate traffic that needs scaling, and considering the max_connections limit on OpenShift).
Suppose I want to use OpenShift: is it possible to create multiple MySQL cartridges to balance the load, and to create dynamic environment variables that assign requests to different MySQL cartridges? (Suppose I send an id or something, and the environment variable for the MySQL cartridge is chosen as "dbname" + the last digit of that id.)
This is a simplified example which should multiply the database capacity by ten (if this is unrelated data). Can it be done?
I hope some OpenShift guy or girl will clarify this for me...
cheers
Edit: Thanks mbaird for your info:
To clarify:
I wasn't talking about auto-scaling, but about using, for instance, 11 static/persistent db cartridges that would never scale up or down.
You could then store user information in any of them depending on (for instance) the last digit of the user's id.
The 11th database cartridge could hold a table used to look up the user's id and then redirect that user to the right database (if the last digit = 0, db = db0; if the last digit = 1, db = db1; etc.). This would let me call the right database for the right user.
Of course, this is not auto-scaling, but it would multiply the database capacity by (roughly) ten.
However, it would require the ability to create multiple MySQL cartridges and corresponding environment variables to gain access to all of them.
It seems to me this is not possible right now, so I will investigate your suggestions.
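For what it's worth, the routing itself would be trivial if the platform exposed one set of connection settings per cartridge; a sketch, with entirely hypothetical environment-variable names:

public class ShardRouter {
    // Hypothetical per-shard variables MYSQL_DB0_URL .. MYSQL_DB9_URL,
    // plus a directory database for the id lookup described above.
    public static String jdbcUrlFor(long userId) {
        long lastDigit = userId % 10;
        return System.getenv("MYSQL_DB" + lastDigit + "_URL");
    }
}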
The OpenShift database tier currently doesn't scale. Further, even if you could add a second MySQL cartridge, it wouldn't give you a scalable database; it would give you a new, empty database. What you are looking for is the ability to scale the MySQL cartridge across multiple gears, not to add another cartridge.
I've actually seen some comments from OpenShift (although I can't seem to find them now) that the databases on OpenShift are for development only and you should look for another service to host your database if you have a mission-critical application that requires database fail-over and scalability.
Since you are specifically using MySQL, I would look into using Amazon RDS (either MySQL or the new Aurora engine which is MySQL compatible) or ScaleDB.

Which server should I choose for MySQL: Windows or Unix/Linux/Ubuntu/Debian?

I'm working on a SaaS project, and MySQL is our main database. Our application is written in C#/.NET and runs on a Windows 2003 server.
Considering maintenance, cost, options, and performance, which server platform should I choose for MySQL hosting: Windows or Unix/Linux/Ubuntu/Debian?
The scenario is as following:
The server I run today has a moderate transaction volume. The databases grow by 5 MB daily, and we expect that to reach 50 MB daily within a couple of months; the system is mission critical.
I don't know how big the database is going to get. We rent a VPS to host the application and database server.
Most of our queries are simple, but our ORM tool constantly makes use of subqueries. We also run reports, both simple and heavy ones. Some run when a user clicks, but most run from a queue.
Buying extra co-lo space would be nice as we get more clients. It's a SaaS project, after all.
When developing, you can use your Windows box to also run a MySQL server. If and when you want to move your DBMS to a separate server, it can be either a Windows or a Linux server.
MySQL and its supporting tools (for backup, etc.) probably offer more choices on Linux.
There are also 3rd party suppliers who will host your MySQL database on their servers. The benefit is they will handle backups, maintenance etc.
Also: look into phpMyAdmin for use as a great admin tool.
Larry
I think you need more information to make an informed decision. It's hard to just pull out a "best" answer based on no specific information.
What is your expected transaction volume?
How big will the database get?
How complex are your queries, i.e. are they long-running or relatively quick?
Are you hosting the application on your own server at your own location? If you have to buy extra co-lo space, maybe an extra server isn't the best option.
How "mission critical" is this database? E.g., maybe you need replicated servers to ensure stability.
There is a server sizing tool online at http://www.sizinglounge.com/, so you should check that out. It sounds like your server could be smaller than their smallest tier, but it should be a good place to start.
If this is a mission critical application you need to do some kind of replication to an extra server in case the primary one fails, so you are definitely looking at two systems. This has to be in addition to a good backup plan.
Given that you are uncertain about how big it could get, you might just continue renting a server. For your backup, one idea would be to look at running MySQL on an Amazon EC2 instance. BTW, it is important to have a remote replicated server: if you have two systems next to each other and an environmental problem comes up, they could both be out of commission at the same time, but with a remote copy your options are open to potentially working around it.
If you run a lot of read-only queries locally and have your site hosted somewhere, it might make sense to set up a local replicated database copy to query against. That could potentially improve both your website and local performance quite a bit. Plus, it would give you some good peace of mind to have a local copy under your control.
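A minimal sketch of that read/write split at the application level, with placeholder URLs and credentials; anything that must immediately read its own writes should also go to the primary, since the replica may lag:

import java.sql.*;

public class ReadWriteSplit {
    private static final String PRIMARY = "jdbc:mysql://primary.example.com:3306/app";
    private static final String REPLICA = "jdbc:mysql://localhost:3306/app";

    // Route by intent: local replica for read-only work, remote primary for writes.
    public static Connection connect(boolean readOnly) throws SQLException {
        return DriverManager.getConnection(readOnly ? REPLICA : PRIMARY, "user", "pass");
    }
}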
HTH,
Brandon