I am currently setting up my first POS system and would like some advice on the best database setup for my situation.
I have two POS computers that need seamless database integration between them: when a record is added on one, it should appear on the other instantly.
I would like both computers to store a local, up-to-date copy of the database on their respective HDDs.
I would also like to back the database up to some sort of cloud storage in case something happens to both PCs.
It is imperative that both PCs can communicate with each other and update the database even when there is NO internet connection.
NOTE: Both PCs will be connected to a Wi-Fi router with internet access. I'm currently using phpMyAdmin to manage the (MySQL) database.
I have very limited database knowledge. Would Master-Master replication be the best option for this scenario? If so, what would be the best way to go about it?
Thank you.
Related
I am after some advice, please. I am no developer and I outsource my work requirements to various freelancers. I have a specific requirement, but due to my lack of skills I'm not quite sure what to ask for, hence my question here.
I have a system where I have several Raspberry Pi "drones" that collect data. These drones are all connected to the web and at present instantly send the data via a live feed directly to a MySQL server hosted at Amazon. This server is accessible via a static IP address.
Each drone is given a unique ID and the data collected is tagged with that ID so we know where it comes from.
The existing MySQL server collects and processes all this data and we have a website that displays the stats. Nothing really complicated and the current system works very well.
The issue I have is that we occasionally have internet connection issues from the drones, so I want to make the whole system more robust. When the drones have a connection issue we lose data, as the drones do not store anything locally; that is what I want to resolve.
Just as a heads up… due to the data structure the drones will not write to a file; they have to feed directly into a MySQL server.
To resolve this issue my plan is to have a MySQL server run on each RPi with the same table structure, etc. as the main server. Each RPi will write to its own local MySQL server, and I then need that server to "update" the main server at Amazon. Please note the data will only ever be sent in this direction; it will never come from Amazon back to the drones. When a drone can communicate with the main server I would like the drone-based MySQL server to update it pretty much instantly (or as close as I can get it), but where there is an internet connection issue I need the drone to store its own data until the connection is restored, at which point it will update the main server.
As I have said, I am no developer so I wouldn't be undertaking this work myself, but I would like to know what I need to ask for in order to get the right system.
If anyone can help I would appreciate some pointers. In addition, if this is the type of work you could undertake please feel free to let me know and maybe we could talk further via PM; after all… someone needs to do it.
Many Thanks.
I recommend using a scheduled update to the Amazon database, in whatever programming language you are already using; something that looks like:
while (gathering data) {
    store data into local MySQL
    for (each record in local MySQL) {
        if (there is internet) {
            store record in remote MySQL
            optional: read the remote record back to check the data was stored correctly
            delete record from local MySQL
        } else {
            break
        }
    }
}
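As a rough, hedged sketch of that loop in Java/JDBC: the readings table, its columns, the URLs, and the credentials are all assumptions for illustration, and MySQL Connector/J is assumed to be on the classpath. INSERT IGNORE makes a re-sent row harmless, so each row is deleted locally only once its push has succeeded:

import java.sql.*;

// Store-and-forward: every reading lands in the local MySQL first; a
// periodic sync pass then pushes buffered rows to the remote server and
// deletes each one locally only after its push succeeded.
public class DroneSync {
    static final String LOCAL  = "jdbc:mysql://localhost:3306/drone";
    static final String REMOTE = "jdbc:mysql://203.0.113.10:3306/central"; // the static IP at Amazon

    static void syncPass() {
        try (Connection local  = DriverManager.getConnection(LOCAL,  "drone", "secret");
             Connection remote = DriverManager.getConnection(REMOTE, "drone", "secret");
             Statement  read   = local.createStatement();
             ResultSet  rs     = read.executeQuery("SELECT id, drone_id, value FROM readings");
             PreparedStatement push = remote.prepareStatement(
                 "INSERT IGNORE INTO readings (id, drone_id, value) VALUES (?, ?, ?)"); // idempotent re-send
             PreparedStatement purge = local.prepareStatement("DELETE FROM readings WHERE id = ?")) {
            while (rs.next()) {
                push.setString(1, rs.getString("id"));
                push.setString(2, rs.getString("drone_id"));
                push.setDouble(3, rs.getDouble("value"));
                push.executeUpdate();                    // forward one buffered row
                purge.setString(1, rs.getString("id"));
                purge.executeUpdate();                   // drop it locally once it is safely remote
            }
        } catch (SQLException e) {
            // No internet (or a failed push): leave the rows buffered locally
            // and simply try again on the next scheduled pass.
        }
    }

    public static void main(String[] args) throws InterruptedException {
        while (true) {       // e.g. run as a scheduled task on each RPi
            syncPass();
            Thread.sleep(5_000);
        }
    }
}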
I am making a JavaFX program and need to use a small MySQL database. Currently I am hosting one on my computer, but I can't access it from other computers on other networks. I need the MySQL server to be accessible from anywhere. How do I host one that does that? Thanks in advance, all help is welcome.
Well you have a few options depending on how important this MySQL database is to you, how you intend to connect to it from outside, and what you want to do with it.
The naive implementation would involve opening your firewall and forwarding whatever port you have configured MySQL on (3306 by default) to the IP address of your server. If you do this you absolutely must secure your database with a password!!! You'll also need to keep the server's public IP address handy so you know how to find it when you go out.
Use Amazon AWS, Google Compute, Google App Engine, or some other cloud platform to host a MySQL instance. All the big players also tend to host pretty awesome RDBMS solutions. The advantage here is that you're not exposing your home computer to malice, and you are connecting into an ecosystem that will answer a lot of other questions for you as they come up along the way (e.g. how do you ensure redundancy? Backups? Scale your network for traffic?). There's a ton of other advantages too. It's the cloud... dude...
Use a SaaS DB service such as Firebase (Note: We are leaving MySQL and SQL database territory with Firebase)
If you plan to let other parties access your MySQL instance to make use of your data, you might also want to consider implementing a REST API (or SOAP API if you hate the future) which acts as an abstraction layer to interact with and provide the data from your database in a consistent and reliable format.
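For instance, here is a minimal, hedged sketch of such an abstraction layer using only the JDK's built-in HttpServer and JDBC. The items table, database name, port, and credentials are assumptions for the sketch, and MySQL Connector/J is assumed to be on the classpath:

import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.sql.*;

// A tiny read-only REST endpoint in front of MySQL: clients call
// GET /items for the data and never talk to the database directly.
public class RestSketch {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/items", exchange -> {
            StringBuilder json = new StringBuilder("[");
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:mysql://localhost:3306/mydb", "user", "password");
                 Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery("SELECT name FROM items")) {
                while (rs.next()) {
                    if (json.length() > 1) json.append(',');
                    json.append('"').append(rs.getString("name")).append('"');
                }
            } catch (SQLException e) {
                // real code would send a 500 here instead of an empty list
            }
            byte[] body = json.append(']').toString().getBytes();
            exchange.getResponseHeaders().set("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) { os.write(body); }
        });
        server.start();   // the database itself stays safely behind the firewall
    }
}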
That's the best answer I can give with the details afforded. Look around, though; the options in this arena are nearly limitless depending on how and what you're trying to do.
You should be able to access your machine from your LAN pretty easily unless there are firewall rules preventing connections to your machine. Another way: many cloud hosting providers have a free tier you can sign up for to bring up a test instance of MySQL. Example: OpenShift.
Having just installed MySQL, which I want to use for research software development, I face the question: where should I store my data files?
I have three computers (home, work, laptop), all of which have a development environment (Java/Eclipse) and I want all those machines to be able to access the database(s).
If I just had one machine, it would be a no brainer and I would just use localhost.
I can't decide where best to locate the data files. The options I am considering (but happy to hear other views) are:
1) Just store on the local machine and let Dropbox take care of syncing the data.
The data might get quite large and exceed the storage capacity on at least one of the machines and it might also take a long time to sync?
2) Use a Network Storage device (I have a Synology unit)
3) I have my own domain registered so I could use that?
4) Use a cloud based service.
Not sure how these work, the costs and the backup options.
In all the above, unless I use localhost, I am concerned about access times if I have to go "over the internet", especially if I make heavy use of SQL queries/updates.
I am also worried about backing up the databases in case I need to restore.
You might ask why I want to use MySQL? In the future, I might want to do a PHP rollout, and MySQL seems the way to go.
Thanks in advance.
G
Maybe consider local installs on each machine, but with MySQL replication. That way, if your laptop doesn't have internet service, you can still work with the local data, even though it might be a tiny bit out of date.
Replication also kinda addresses backups.
I am not sure of the size of your application. If it's huge, you could use an independent database server; otherwise use one of your computers as the place to store the data (use the one with the biggest disk and the most memory).
I think you are wondering how to access data on a different computer; actually there is no need to worry about that, because you always use a database connector with the IP/port defined in your application. If you use Java/Eclipse, you should use JDBC to access the database.
For example, the JDBC setup looks like the code below.
import java.sql.*;

Connection conn = DriverManager.getConnection(
        "jdbc:mysql://localhost:3306/myuser", "root", "root"); // replace localhost and 3306 with the IP and port of your database
String sql = "select * from yourtable"; // your query
Statement st = conn.createStatement(); // no cast needed
ResultSet rs = st.executeQuery(sql);   // call rs.next() to step through the rows
I've got a very specific use case and because I'm not too familiar with database replication, I am open to suggestions and ideas about how to accomplish the following in the best possible way:
A web application + database is running on a remote server. Let's call this set-up R for remote.
Now suppose there are 3 separate geographical locations which need read+write access to the database. I will call these locations L1, L2 and L3.
The main problem: the remote server might be unavailable or the internet connection of one of the locations might not always work, rendering the remote application unavailable; but we want the application to work as a high availability solution (on-site) even when the remote server is down or when there is an internet connection problem.
Partial solution: So I was thinking about giving each geographical location its own server with a local copy of the web application. The web application itself can get updated when needed from a version control system automatically (for example using git hooks).
So far so good... (at least I believe so?)
But what about our data? The really tricky part seems to be the database replication. Let's assume no DNS or IP failover and assume that the user first tries to access the remote server directly and if this does not work, the user can still use the local server on-site instead. This all happens inside a web browser (or similar client).
One possible (but unsatisfactory) solution would be to use master-slave replication from R (master) to L1, L2 and L3 (slaves). Done asynchronously, this should be quite fast? I think this is a viable solution for temporary local read-only database access when the main server is broken or can't be accessed.
But... what about read-write support? I suppose we would need multi-master replication in this case, but I am afraid that synchronous replication using something like (for example) MySQL Cluster or Galera would slow things down, especially since L1, L2 and L3 are on lower bandwidth connections. And they are connected through WAN. (Also, L1, L2 or L3 might not always be online.)
The real question: How would you tackle this specific use case? At the moment I am leaning towards multi-master replication if it doesn't slow down things too much. The application itself will mainly be used by employees on-site but by some external people over WAN as well. Would multi-master replication work well? What if for example L1 is down for 24 hours and suddenly comes back on-line? What if R can't be accessed?
EXTRA: not my main question, but I also need the synchronized data to be sent securely over SSL if possible; please take this into account in your answer.
Perhaps I am still forgetting some necessary details; if so, please respond with some feedback and I will try to update my question accordingly.
Please note that I haven't decided on a database yet and the database schema will be developed from scratch, so ideas using other databases or database engines are welcome as well. (At the moment I have most experience with MySQL and PostgreSQL)
As you are still undecided, I would strongly recommend you have a look at MS-SQL merge replication. It is strong, highly reliable, replicates over LAN and HTTPS (so-called web replication), and is not that expensive.
Terminology differs from the MySQL master/slave idea. We are here talking about one publisher and multiple subscribers. All changes made at subscriber level are collected and sent to the publisher, then redistributed to all subscribers (with, if needed, fancy options like 'filtered subscriptions').
Standard architecture will then be:
a publisher, somewhere on a server, which collects and redistributes changes between subscribers. The publisher need not be accessed by end users.
other subscriber database servers, either for local or web access, replicating with the publisher. Subscribers are accessed by end users.
We have been using this architecture for years, including:
one subscriber for internet access
one subscriber for intranet access
tens of subscribers for local access: some subscribers are on our construction projects, somewhere in the desert ...
Such an architecture is not available off the shelf with MySQL. I guess it could be built, but it would then certainly be a lot more expensive than just buying the corresponding MS-SQL licenses. Do not forget that the free Express edition of MS-SQL can be a subscriber.
Be careful: if you are planning to go with such a configuration, I would (really) strongly advise you to set all primary keys to the uniqueidentifier data type, randomly generated. This avoids the typical replication pitfall where PKs are set to int with automatic increment and independent servers generate identical primary keys between two replications (MS-SQL offers a tool to avoid such problems, where you can allocate PK ranges per server, but that solution is a real PITA ...).
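As a hedged illustration of that advice, here is a small sketch of minting random keys on the client; the orders table and its columns are made up, and the same pattern works whether the column is an MS-SQL uniqueidentifier or, say, a MySQL CHAR(36):

import java.sql.*;
import java.util.UUID;

// Mint the primary key on the client as a random UUID, so two servers
// that have not replicated yet can never generate the same key.
public class UuidKeys {
    public static void insertOrder(Connection conn, String customer) throws SQLException {
        String id = UUID.randomUUID().toString();   // e.g. "550e8400-e29b-41d4-a716-446655440000"
        try (PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO orders (id, customer) VALUES (?, ?)")) {
            ps.setString(1, id);
            ps.setString(2, customer);
            ps.executeUpdate();
        }
    }
}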
I'm working on a SaaS project and MySQL is our main database. Our application is written in C#/.NET and runs on a Windows 2003 server.
Considering maintenance, cost, options, and performance, which server platform should I choose for MySQL hosting: Windows or Unix/Linux (Ubuntu/Debian)?
The scenario is as following:
The server I run today has a moderate transaction volume. Databases grow by 5 MB daily, we expect that to reach 50 MB daily within a couple of months, and the system is mission critical.
I don't know how big the database is going to get. We rent a VPS to host the application and database server.
Most of our queries are simple, but our ORM tool makes constant use of subqueries. We also run reports, both simple and heavy ones; some run when a user clicks, but most run in order from a queue.
Buying extra co-lo space would be nice as we get more clients. That's a SaaS project, after all.
When developing, you can use your Windows box to also run a MySQL server. If and when you want to move your DBMS to a separate server, it can be either a Windows or a Linux server.
MySQL and its supporting tools (for backup, etc.) probably offer more choices on Linux.
There are also 3rd party suppliers who will host your MySQL database on their servers. The benefit is they will handle backups, maintenance etc.
Also: look into phpMyAdmin for use as a great admin tool.
Larry
I think you need more information to make an informed decision. It's hard to just pull out a "best" answer based on no specific information.
What is your expected transaction volume?
How big will the database get?
How complex are your queries, i.e. are they long-running or relatively quick?
Are you hosting the application on your own server at your own location? If you have to buy extra co-lo space maybe an extra server isn't the best option.
How "mission critical" is this database? Ie maybe you need replicated servers to ensure stability.
There is a server sizing tool online at http://www.sizinglounge.com/, so you should check that out. It sounds like your server could be smaller than their smallest tier, but it should be a good place to start.
If this is a mission critical application you need to do some kind of replication to an extra server in case the primary one fails, so you are definitely looking at two systems. This has to be in addition to a good backup plan.
Given that you are uncertain about how big it could get you might just continue renting a server. For your backup one idea would be to look at running MySQL on an Amazon EC2 instance. BTW it is important to have a remote replicated server. If you have two systems next to each other and an environmental problem comes up, they could both be out of commission at the same time. But with a remote copy your options are open to potentially working around it.
If you run a lot of read-only queries locally and have your site hosted somewhere, it might make sense to set up a local replicated database copy to query against. That could potentially improve both your website and local performance quite a bit. Plus it would give you some good peace of mind, having a local copy under your control.
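As a hedged sketch of that read/write split (hostnames, database name, table, and credentials are all assumptions; the local replica is kept current by standard MySQL replication):

import java.sql.*;

// Hypothetical read/write split: writes go to the remote primary, while
// heavy read-only queries hit a local replica kept in sync by replication.
public class ReadWriteSplit {
    static final String PRIMARY = "jdbc:mysql://db.example.com:3306/app"; // hosted with the site
    static final String REPLICA = "jdbc:mysql://localhost:3306/app";      // local replicated copy

    public static void main(String[] args) throws SQLException {
        try (Connection write = DriverManager.getConnection(PRIMARY, "app", "secret");
             Connection read  = DriverManager.getConnection(REPLICA, "app", "secret")) {
            try (Statement st = write.createStatement()) {
                st.executeUpdate("INSERT INTO log (msg) VALUES ('hello')"); // writes go to the primary only
            }
            try (Statement st = read.createStatement();
                 ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM log")) {
                // With asynchronous replication this count may lag the insert
                // above by a moment, which is fine for reporting workloads.
                if (rs.next()) System.out.println("rows: " + rs.getLong(1));
            }
        }
    }
}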
HTH,
Brandon