What database should I use with S3 as a host? - mysql

I'm completely new to S3. I started coding a site today. I'm a MySQL guy, but I'm not sure if I can host a MySQL database on their servers, or what my options are. What is my best option for database storage?
Edited to add: I know this question sounds vague, but I literally don't know what my options are. Can I use a MySQL database on Amazon's servers, or am I forced to use Amazon's SimpleDB?

S3 itself is not typical data storage that you would simply use with an RDBMS. It's only accessible via a web service API and is not a block device. I think your best bet for hosting a database on Amazon would be the Relational Database Service (RDS), which is, in effect, managed MySQL.
You could also run your own MySQL server on EC2, using EBS as the backing store.
Also, SimpleDB is very nice, but it is not a relational database. It's more like a persistent hash map: it is not transactional and is only eventually consistent. It belongs to the category of NoSQL solutions, and you have to design your system in very specific ways to use it.
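Since RDS speaks the ordinary MySQL wire protocol, switching an existing MySQL app over is mostly a matter of pointing the connection at the RDS endpoint. A minimal sketch (the endpoint, user, and database names below are made up for illustration; yours come from the AWS console):

```python
def rds_dsn(user: str, password: str, endpoint: str, database: str, port: int = 3306) -> str:
    """Build a MySQL connection URL. RDS exposes a normal MySQL endpoint,
    so only the host part differs from a self-hosted setup."""
    return f"mysql://{user}:{password}@{endpoint}:{port}/{database}"

# Hypothetical RDS endpoint -- the real one is shown in the AWS console.
dsn = rds_dsn("app", "secret", "mydb.abc123.us-east-1.rds.amazonaws.com", "shop")
print(dsn)
```

Everything else (drivers, SQL, schema) stays the same as with a self-hosted MySQL server.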

S3 is a storage provider; you can't run your own code on it. You pay for file storage and can access it from your application, which has to be hosted elsewhere (Amazon EC2 for example).

Related

Saving all local MySQL operations to replicate them to an online database

I have two databases right now: one in local and one in the cloud.
I was wondering if it would be possible to save all the MySQL DML commands/operations that I perform on my local database somewhere (e.g. the local file system, or even a new separate table in the database), so I can then process those commands one by one and replay them against my database in the cloud.
The reason is that I'm using my database in the cloud as the backup database and so my local database and my online database should be in sync.
If what I'm thinking isn't possible, is there another way around this?
Thank you very much!
Replication, or a tool such as Canal, may solve your problem.
Yes, there is such a feature: the Binary Log. It's fully documented in the MySQL manual: https://dev.mysql.com/doc/refman/8.0/en/binary-log.html
It's probably not easy to use replication in your case, because it would require your remote MySQL Server to connect to your local MySQL as a client, and I expect that's not possible because of firewalls or NAT.
But you could collect your local binary logs, upload them periodically, and then apply them to the remote MySQL server using the same technique as point-in-time recovery. See https://dev.mysql.com/doc/refman/8.0/en/point-in-time-recovery-binlog.html
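As a miniature proof of the statement-log idea, here is a sketch using SQLite (from Python's standard library) standing in for the two MySQL servers: each DML statement is applied locally and recorded in a log table, then the log is replayed in order against the remote copy. This is, in spirit, the bookkeeping MySQL's binary log does for you.

```python
import sqlite3

local = sqlite3.connect(":memory:")   # stands in for the local MySQL server
remote = sqlite3.connect(":memory:")  # stands in for the cloud backup

for db in (local, remote):
    db.execute("CREATE TABLE offers (id INTEGER PRIMARY KEY, title TEXT)")
local.execute("CREATE TABLE dml_log (seq INTEGER PRIMARY KEY AUTOINCREMENT, stmt TEXT)")

def run_and_log(sql: str) -> None:
    # apply the statement locally and remember it for later replay
    local.execute(sql)
    local.execute("INSERT INTO dml_log (stmt) VALUES (?)", (sql,))

run_and_log("INSERT INTO offers (id, title) VALUES (1, 'widget')")
run_and_log("UPDATE offers SET title = 'gadget' WHERE id = 1")

# replay pending statements against the remote copy, in original order
for (stmt,) in local.execute("SELECT stmt FROM dml_log ORDER BY seq"):
    remote.execute(stmt)

print(remote.execute("SELECT title FROM offers WHERE id = 1").fetchone()[0])
```

In production you would not roll this by hand: the binary log already captures every change, and `mysqlbinlog` can turn it back into replayable SQL.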

replicate tables from mysql database to online hosting server having cpanel (5gbfree.com)

I am looking for a way to mirror or replicate a local MySQL database to an online hosting provider, 5gbfree.com. I saw master-slave replication, but I don't know how to use it with an online server. Can you help me, please?
I tried setting the online database up as the slave, but it didn't work: I don't know how to configure the online database as the slave.
Do you have access to the MySQL configuration file (my.cnf / my.ini)? If not, it is not possible to configure replication. Replication relies heavily on access to that file and on additional configuration (e.g. firewall rules, the range of IP addresses to accept connections from, creating a replication user account, restarting the service).
Your best bet is to export the data from your localhost and import it at your hosting provider.
Or, check with your provider. They might be able to help you.
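The export/import route is easy to script. A sketch that just assembles the two command lines, `mysqldump` for the local export and `mysql` for the load at the host (the remote host and user names are placeholders):

```python
def dump_and_load_cmds(database: str, remote_host: str, remote_user: str):
    """Build the mysqldump command for the local export and the mysql
    command that loads the dump at the hosting provider."""
    dump = ["mysqldump", "--single-transaction", "-u", "root", "-p", database]
    load = ["mysql", "-h", remote_host, "-u", remote_user, "-p", database]
    return dump, load

# Placeholder host/user -- substitute whatever your provider gives you.
dump, load = dump_and_load_cmds("mydb", "sql.example-host.com", "myuser")
print(" ".join(dump))
print(" ".join(load))
```

`--single-transaction` gives a consistent snapshot for InnoDB tables without locking them. Note this is a one-shot copy, not ongoing replication; re-run it whenever you need to refresh the remote copy.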

Local and Remote data store sync

I have a situation where I would like a desktop application to be useable whether an internet connection is present or not.
I could use MySQL on a web server, together with a local MySQL database (or maybe an MS Access database) on the local drive, and then just update the data when the connection is restored. My issues are as follows.
Sync local changes to the remote server: in a multi-site/multi-user scenario, how do I bring the database back in sync when the connection is restored without losing other users' changes already on the server?
Sync remote changes to local: likewise, how do I update from the server without losing changes made locally while offline?
Currently I am using XML files and LINQ to XML querying, but it is unsatisfactory to continue with these files, so a better solution is required.
Any help would be appreciated to identify what technology would work best and how to keep them in sync.
Thanks in advance.
"Jet Replication Objects (JRO)", the replication features of the Access Database Engine, have been deprecated (ref: here). I believe that the related management features have also been completely removed from Access 2013. Therefore, native Access replication should no longer be considered a viable option.
Microsoft's recommendation would be to use SQL Server and its replication features. However, SQL Server Express has limitations on how much it can do (e.g., it can be a "Subscriber" but not a "Publisher" or "Distributor", ref: here) so presumably there would have to be a non-free copy of SQL Server involved somehow.
I haven't yet had the occasion to use MySQL replication myself, but it is probably worth considering. Chances are good that you could still use Access as a front-end (via ODBC linked tables).
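Whatever engine you settle on, the two-way sync logic itself often reduces to row versioning: stamp each row with a last-modified timestamp and let the newer write win when merging. A minimal, engine-agnostic sketch of that last-writer-wins merge (the row data and timestamps are invented for illustration):

```python
def merge(local: dict, remote: dict) -> dict:
    """Merge two row sets keyed by row id; each value is (data, timestamp).
    The version with the newer timestamp wins (last-writer-wins)."""
    merged = dict(remote)
    for row_id, (data, ts) in local.items():
        if row_id not in merged or ts > merged[row_id][1]:
            merged[row_id] = (data, ts)
    return merged

local = {1: ("edited offline", 105), 2: ("local only", 90)}
remote = {1: ("edited on server", 100), 3: ("server only", 95)}
print(merge(local, remote))
```

Be aware of the trade-off: last-writer-wins silently discards the older of two concurrent edits to the same row. If that matters, you need conflict detection (e.g. keep both versions and ask the user), which is exactly what full replication frameworks add on top of this.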

Best way to synchronize MySQL bases on different servers

I have two servers. One is located in our office, and its MySQL base contains our offers, our clients etc.
The second server is located at our hosting provider's datacenter. It uses the same database structure and the same offers, and I use it for our website.
I was synchronizing these two servers manually, by sending JSON from one server to the other every few hours, but now I need real-time synchronization.
Which way should I use?
Master-slave replication from the company server to the website server. The problem is that our slave website database has its own changeable tables too: for example orders, user sessions, view counts and so on. And I need to somehow send those tables back to the master server at the office.
Use only one database for both servers. The problem is that there could be up to 100 queries per pageview, and I think that running every query over the internet could be quite slow.
We cannot use only one server for all tasks because we are unable to provide a stable, low-latency internet connection at our office. So when the internet is down, our site or our CRM system would be down too.
Maybe there is a third and better way to do this?
You can try the Data Comparison tool in dbForge Studio for MySQL. It will connect to two different MySQL servers (using a simple connection, SSL, SSH, or an HTTP tunnel), compare them, and show the differences; it will then offer to run a synchronization script, or let you view/save it.
There is also stand-alone dbForge Data Compare tool.
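For the master-slave option described in the question, the slave-only tables (orders, sessions, view counts) can be excluded from replication in the slave's my.cnf, so replicated statements from the master can never clobber them. A sketch, with made-up database and table names:

```ini
# my.cnf on the website (slave) server -- schema/table names are examples
[mysqld]
replicate-ignore-table = shop.user_sessions
replicate-ignore-table = shop.viewcounts
```

Note this only protects the slave-side tables; one-way replication still won't push them back to the office server. That direction needs a separate channel, e.g. master-master replication or an application-level export.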

ADO.NET (Sql Compact + MySQL + IBM db2 expressC)

I'm developing an app which will have a central database users can add entries to. The database will have to be on a server somewhere, but I want the users to be able to add entries offline; the app will sync to the main DB when a connection is available. So I suppose I need two databases: the main one sitting on a server (preferably Linux), and a small one on each client machine to use as a buffer when offline. The app will be coded in C# for Windows. I'm having trouble deciding which databases to use and whether I can leverage any replication technology to make this easier. Also, I don't want to pay for anything ;) So I guess my questions are...
Will I have any trouble writing code in ADO.NET to move data from something like SQL Compact Edition to MySQL?
Are there any replication solutions which will move stuff from the local to the main database for me?
I've recently discovered IBM's DB2 Express-C, but I'm not sure if it can run serverless as well as server-installed. Does anyone know?
Firebird can be server or serverless. Can I replicate between them? Is the server mode capable of heavy use?
Firebird can be server or serverless. Can I replicate between them?
Yes.
Is the server mode capable of heavy use?
Define 'heavy use'. I've had production systems with 200 simultaneous users pumping 20 transactions/minute each on databases in the 10-20GB range. I'm sure there are many larger deployments out there.
Also, what you describe seems like the "briefcase model". You should look into it if you haven't already done so. Maybe the solution is not replication at the database level, but rather a smarter fat client.
Just answering two of your questions; I don't know about DB2 or Firebird.
Will I have any trouble writing code in ADO.NET to move data from something like SQL Compact Edition to MySQL?
That should be very trivial; install MySQL Connector/NET and you're good to go.
Are there any replication solutions which will move stuff from local to main database for me
SQL Server replication is made for this, but I don't suppose it would work with MySQL.