I have multiple shops (a few hundred) and they all need to write to an online MySQL database at about the same time. The program was written in VB6 and it is currently updating the database successfully (via ODBC), although only three stores are live at the moment. The plan for the near future is to have all the other stores update the same MySQL database.
Will MySQL be able to handle all these stores updating it at the same time over ODBC, or is there a better way to do it?
Whether you run into problems caused by too many concurrent connections depends very much on the platform your database is running on.
https://dev.mysql.com/doc/refman/5.5/en/too-many-connections.html
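If you can already connect to the server, it is easy to check the configured connection limit and the high-water mark of connections actually used. A minimal sketch, assuming mysql-connector-python and placeholder host/credentials:

```python
# Minimal sketch using mysql-connector-python; host and credentials are placeholders.
import mysql.connector

conn = mysql.connector.connect(host="db.example.com", user="app", password="secret")
cur = conn.cursor()

# Configured connection limit.
cur.execute("SHOW VARIABLES LIKE 'max_connections'")
print(cur.fetchone())          # e.g. ('max_connections', '151')

# Highest number of simultaneous connections seen since the server started.
cur.execute("SHOW STATUS LIKE 'Max_used_connections'")
print(cur.fetchone())

# The limit can be raised at runtime if the server has memory to spare
# (needs the SUPER privilege):
# cur.execute("SET GLOBAL max_connections = 500")

cur.close()
conn.close()
```

If the high-water mark is already close to the limit, raising max_connections is an option, but each extra connection costs server memory, so test rather than guess.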
Creating a test environment to stress test your database would be a sensible way for you to find out whether it is going to be up to the job.
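If it helps, here is a rough sketch of such a test, spinning up one thread per simulated store; the table, columns and credentials are invented for the example:

```python
# Rough concurrency stress test: N threads each open their own connection and run inserts.
# The sales table, its columns and the credentials are placeholders for illustration.
import threading
import mysql.connector

N_STORES = 300

def simulate_store(store_id):
    conn = mysql.connector.connect(
        host="db.example.com", user="app", password="secret", database="shops"
    )
    cur = conn.cursor()
    for _ in range(100):
        cur.execute(
            "INSERT INTO sales (store_id, amount) VALUES (%s, %s)",
            (store_id, 9.99),
        )
    conn.commit()
    cur.close()
    conn.close()

threads = [threading.Thread(target=simulate_store, args=(i,)) for i in range(N_STORES)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("all simulated stores finished")
```

If the server refuses connections or the inserts start timing out, the errors will surface as exceptions in the worker threads, which gives you a concrete idea of where the limit sits.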
If your database cannot cope with the load, there are many scalable cloud-based services around now which can easily be expanded to cope with higher load. Google Cloud, Windows Azure or Amazon may be worth a look.
I identified an issue with infrastructure I created on Google Cloud Platform and would like to ask the community for help.
I was charged unexpectedly, even though in theory it should be almost impossible for me to exceed the free-tier limits. But I noticed that my database is huge, at 1 GB and growing, and there are some very large buckets.
My database is managed by a Django app, and in the admin panel there are only 2 records in production. There were several backups and things like that, but I didn't create them.
Can anyone give me a guide on how to solve this?
I would assume that you manage the database yourself, i.e. it is not Cloud SQL. Databases pre-allocate files in larger chunks in order to be efficient on writes. You can check this yourself: write an additional 100k records and most likely the size will not change.
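You can also see where the space is actually going by asking MySQL itself. A minimal sketch, again assuming direct access to the instance and placeholder credentials:

```python
# Report on-disk size (data + indexes) per schema; connection details are placeholders.
import mysql.connector

conn = mysql.connector.connect(host="localhost", user="root", password="secret")
cur = conn.cursor()
cur.execute(
    """
    SELECT table_schema,
           ROUND(SUM(data_length + index_length) / 1024 / 1024, 1) AS size_mb
    FROM information_schema.tables
    GROUP BY table_schema
    ORDER BY size_mb DESC
    """
)
for schema, size_mb in cur.fetchall():
    print(f"{schema}: {size_mb} MB")
cur.close()
conn.close()
```

Note that this reports logical table sizes; the files on disk can be noticeably larger because of the pre-allocation described above.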
Go to Cloud SQL in the console and it will show how many SQL instances you have. If all you see is the "create instance" prompt, that means you don't have any Google-managed SQL instances and the one we're talking about is standalone.
You can also check Deployment Manager to see whether you deployed one from the Marketplace or whether it's a standalone VM with MySQL installed manually.
I ran a simple experiment and deployed (from the Marketplace) MySQL 5.6 on top of Ubuntu 14.04 LTS. The initial DB size was over 110 MB, but it contains some records from the start (schema, permissions, etc.), so it is not "empty" at all.
It is still odd that your DB is already 1 GB. Try deploying a new one (if your use case permits that). Maybe this is some old DB that was used for something, had all its content deleted afterwards, and some "trash" still remains and takes up a lot of space.
Or it may well be exactly as NeatNerd said: the DB allocates extra space for performance and optimisation reasons.
If you share more details, I can give you a better explanation.
I have an application which is going to be deployed to a hosting platform, most probably phpfog.
It is very similar to how WordPress.com operates, where each customer can host their own individual installation of the app on our servers. We host the 'work' files and provide the database (However, it is NOT WordPress; it's a custom app).
Each user of the application has their own separate MySQL database.
I am wondering what the most cost-effective service would be to provide this. It seems that most cloud services offer, for instance, one massive 50 GB database. It is conceivable that instead of individual databases we could have one huge one and prefix all the tables per user, but that seems bloated and unwieldy. It's also not really possible without major structural changes to have one big database for everyone (and the same tables inside it for everyone), as the app is primarily designed to be standalone.
Each database really won't get that big. We are talking low single-digit GB; I'd suggest the biggest would be 5 GB. However, there will be a lot of them, as obviously it's one per customer.
What would be the most cost- and performance-effective way of handling this?
Amazon RDS in fact provides a full database server rather than a single database; I misunderstood their offering.
In this case, RDS is a drop-in replacement for existing MySQL databases and will work perfectly.
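Since RDS exposes an ordinary MySQL endpoint, switching over is essentially a connection-string change, and you can still create one database per customer on the same instance. A hedged sketch (the endpoint, credentials and naming scheme are placeholders, and CREATE USER IF NOT EXISTS needs MySQL 5.7+):

```python
# Provision a per-customer database on a single RDS MySQL instance.
# The endpoint, credentials and naming scheme are placeholders.
import mysql.connector

admin = mysql.connector.connect(
    host="myapp.abc123.us-east-1.rds.amazonaws.com", user="admin", password="secret"
)
cur = admin.cursor()

def provision_customer(customer_id):
    db_name = f"app_customer_{customer_id}"
    db_user = f"cust_{customer_id}"
    cur.execute(f"CREATE DATABASE IF NOT EXISTS `{db_name}`")
    cur.execute(
        f"CREATE USER IF NOT EXISTS '{db_user}'@'%' IDENTIFIED BY 'per-customer-password'"
    )
    # Restrict each customer's user to its own database only.
    cur.execute(f"GRANT ALL PRIVILEGES ON `{db_name}`.* TO '{db_user}'@'%'")

provision_customer(42)
cur.close()
admin.close()
```

That keeps the app's "one database per customer" design intact while everything runs on one managed server.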
I am working with a client who is syncing between a SQL Server database and a MySQL database that contain the exact same schema and data. We want to centralize that data into one database. Other than performance and maintainability issues, what else is bad about the original design?
You can create a linked server in SQL Server that points to the MySQL instance.
Despite being completely proprietary, one of the nice connectivity features SQL Server offers is the ability to query other servers through a linked server. Essentially, a linked server is a method of directly querying another RDBMS; this often happens through an ODBC driver installed on the server.
Refer to this article for the step-by-step process: SQL Server Linked Server to MySQL.
Provided you grant the proper permissions to the MySQL user you connect on behalf of, you can write to the MySQL instance as well. So you can update your stored procedures to perform an additional step that inserts records into MySQL.
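The linked server itself is configured in T-SQL with sp_addlinkedserver and the MySQL ODBC driver. If a linked server is not an option, a plainly different, application-level alternative is to copy newly created rows across with a small script. A rough sketch; the DSN, credentials and the orders table/columns are invented for the example:

```python
# Application-level alternative to a linked server: copy new rows from SQL Server to MySQL.
# The DSN, credentials and the orders table/columns are invented for illustration.
import pyodbc
import mysql.connector

src = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=sqlserver-host;"
    "DATABASE=app;UID=sync;PWD=secret"
)
dst = mysql.connector.connect(host="mysql-host", user="sync", password="secret", database="app")

src_cur = src.cursor()
dst_cur = dst.cursor()

# Copy rows created since the last sync; the watermark would normally be persisted somewhere.
last_sync = "2024-01-01 00:00:00"
src_cur.execute(
    "SELECT id, customer_id, total, created_at FROM orders WHERE created_at > ?",
    (last_sync,),
)
for row in src_cur.fetchall():
    dst_cur.execute(
        "REPLACE INTO orders (id, customer_id, total, created_at) VALUES (%s, %s, %s, %s)",
        tuple(row),
    )
dst.commit()
```

Whether you do this in a stored procedure via the linked server or in a script like the above, the key point is the same: one side has to be the system of record.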
A much easier solution is to use a commercial application: Omega Sync from Spectral Core.
Omega Sync can compare and synchronize both database schema and table data. You can even synchronize data of heterogeneous databases (for example, compare your local SQL Server database with a MySQL replica on your web site - and synchronize all the differences in just a few minutes).
On the other hand, I think you've already mentioned the main problems you may encounter when synchronizing two databases at the same time. Aside from those two, I think the other issue is resources: since two different RDBMSs serve the application, each needs its own resources, and when I update a particular user's record the application still has to work out which database it actually lives in. But I'd love to hear more from other people out there; this is really an interesting topic to discuss. ;)
I need to work with a fairly large amount of data, and am considering both MySQL and SQLite. So I'm trying to get a good, high-level overview of both packages:
How well do each handle large databases?
Is SQLite as much of a handful to work with as MySQL?
Are there any good (web-based) resources comparing these two?
SQLite is a database library and runs only inside the program which uses it. It cannot be written to at the same time from other programs, although other processes can read from it. You cannot connect to it remotely, and it saves its data on a locally accessible filesystem (possibly mounted from a file server). Forget what I said: those statements were based on outdated assumptions, and I need to read up on sqlite3 because it can now do things I was not aware of.
MySQL is a database server, i.e. it can run on another machine, and multiple computers and programs can connect to it at the same time.
Although SQLite can also handle quite large datasets, in most circumstances people will choose MySQL for large datasets, because they want remote access to the data while the program is running (without exposing the database files to well-intentioned, inadvertent "cleanup" actions), for administrative purposes or to run reports.
If what you need is an embedded database that will only ever be used by a single application, SQLite will be just fine.
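The practical difference shows up in how you connect: SQLite is just a file opened in-process through the standard library, while MySQL needs a running server you connect to over the network. A minimal sketch (the MySQL host and credentials are placeholders):

```python
# SQLite: embedded, in-process, data lives in a single local file.
import sqlite3

lite = sqlite3.connect("app.db")
lite.execute("CREATE TABLE IF NOT EXISTS events (id INTEGER PRIMARY KEY, msg TEXT)")
lite.execute("INSERT INTO events (msg) VALUES (?)", ("hello",))
lite.commit()
lite.close()

# MySQL: client/server, many programs can connect over the network at once.
# Host and credentials are placeholders.
import mysql.connector

srv = mysql.connector.connect(host="db.example.com", user="app", password="secret", database="app")
cur = srv.cursor()
cur.execute("CREATE TABLE IF NOT EXISTS events (id INT AUTO_INCREMENT PRIMARY KEY, msg TEXT)")
cur.execute("INSERT INTO events (msg) VALUES (%s)", ("hello",))
srv.commit()
cur.close()
srv.close()
```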
And no, SQLite is not as much of a handful as MySQL. MySQL is not really difficult, but it has a number of strange quirks that hit people when they try to get it installed. Once it is running, it is pretty painless.
You might also look at PostgreSQL; I find it a bit easier to manage and maintain, as some aspects feel more 'logical' than MySQL's. That being said, in practice there is not a huge difference.
I'm working on a SaaS project and MySQL is our main database. Our application is written in C# .NET and runs on a Windows 2003 server.
Considering maintenance, cost, options, and performance, which server platform should I choose for MySQL hosting: Windows or Unix/Linux (Ubuntu/Debian)?
The scenario is as follows:
The server I run today has a moderate transaction volume. The databases grow by 5 MB daily, we expect that to reach 50 MB within a couple of months, and the system is mission critical.
I don't know how big the database is eventually going to be. We rent a VPS to host the application and the database server.
Most of our queries are simple, but our ORM tool makes constant use of subqueries. We also run reports, both simple and heavy ones. Some run when a user clicks, but most run from a queue.
Buying extra co-lo space will be nice as we get more clients; it's a SaaS project, after all.
When developing, you can use your Windows box to also run a MySQL server. If and when you want to move your DBMS to a separate server, it can be either a Windows or a Linux server.
MySQL and its supporting tools (for backups, etc.) probably offer more choices on Linux.
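For example, mysqldump works the same way on both platforms and is easy to schedule. A rough sketch of a nightly dump driven from Python (paths and credentials are placeholders, and it assumes mysqldump is on the PATH):

```python
# Nightly logical backup using mysqldump, driven from Python.
# Paths and credentials are placeholders; assumes mysqldump is on the PATH.
import subprocess
from datetime import date

outfile = f"/var/backups/mysql/saas-{date.today().isoformat()}.sql"
with open(outfile, "w") as fh:
    subprocess.run(
        [
            "mysqldump",
            "--host=db.example.com",
            "--user=backup",
            "--password=secret",
            "--single-transaction",   # consistent snapshot for InnoDB without locking writers
            "--all-databases",
        ],
        stdout=fh,
        check=True,
    )
print(f"backup written to {outfile}")
```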
There are also 3rd party suppliers who will host your MySQL database on their servers. The benefit is they will handle backups, maintenance etc.
Also, look into phpMyAdmin; it's a great admin tool.
Larry
I think you need more information to make an informed decision. It's hard to just pull out a "best" answer based on no specific information.
What is your expected transaction volume?
How big will the database get?
How complex are your queries, i.e. are they long-running or relatively quick?
Are you hosting the application on your own server at your own location? If you have to buy extra co-lo space maybe an extra server isn't the best option.
How "mission critical" is this database? Ie maybe you need replicated servers to ensure stability.
There is a server sizing tool online at http://www.sizinglounge.com/, so you should check that out. It sounds like your server could be smaller than their smallest tier, but it should be a good place to start.
If this is a mission critical application you need to do some kind of replication to an extra server in case the primary one fails, so you are definitely looking at two systems. This has to be in addition to a good backup plan.
Given that you are uncertain about how big it could get, you might just continue renting a server. For your backup, one idea would be to look at running MySQL on an Amazon EC2 instance. By the way, it is important to have a remote replicated server: if you have two systems next to each other and an environmental problem comes up, they could both be out of commission at the same time, whereas with a remote copy you keep the option of working around it.
If you run a lot of read-only queries locally and have your site hosted somewhere else, it might make sense to set up a local replicated database copy to query against. That could potentially improve both your website's and your local performance quite a bit. Plus, having a local copy under your control would give you some peace of mind.
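One way to get that benefit without touching the main application much is to open two connections and send the heavy read-only report queries to the local replica while writes keep going to the primary. A hedged sketch (hostnames, credentials and the transactions table are invented for the example):

```python
# Route writes to the primary and heavy read-only reports to a local replica.
# Hostnames, credentials and the transactions table are placeholders.
import mysql.connector

primary = mysql.connector.connect(
    host="primary.example.com", user="app", password="secret", database="saas"
)
replica = mysql.connector.connect(
    host="127.0.0.1", user="report", password="secret", database="saas"
)

def record_transaction(customer_id, amount):
    # Writes always go to the primary server.
    cur = primary.cursor()
    cur.execute(
        "INSERT INTO transactions (customer_id, amount) VALUES (%s, %s)",
        (customer_id, amount),
    )
    primary.commit()
    cur.close()

def report_since(customer_id, since):
    # Heavy read-only reporting runs against the local replica.
    cur = replica.cursor()
    cur.execute(
        "SELECT COUNT(*), SUM(amount) FROM transactions "
        "WHERE customer_id = %s AND created_at >= %s",
        (customer_id, since),
    )
    row = cur.fetchone()
    cur.close()
    return row
```

Keep in mind replication is asynchronous, so reports from the replica may lag the primary by a little.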
HTH,
Brandon