I identified an issue with infrastructure I created on Google Cloud Platform and would like to ask the community for help.
I received a charge I did not expect, since in theory it should be almost impossible for me to exceed the free-tier limits. But I noticed that my database is huge, 1 GB and growing, and there are some very heavy buckets.
My database is managed by a Django app, and in the admin panel there are only 2 records in production. There were also several backups and similar artifacts, but I didn't create them.
Can anyone give me some guidance on how to troubleshoot this?
I would assume that you manage the database yourself, i.e. it is not Cloud SQL. Databases pre-allocate files in larger chunks in order to be efficient on writes. You can check this yourself: write an additional 100k records and the size most likely will not change.
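If you want to see where the space actually goes, a minimal sketch along these lines (assuming the PyMySQL driver and placeholder credentials) compares each table's real data size with its pre-allocated free space:

    # Rough sketch: inspect how much of each table's file is real data vs.
    # pre-allocated/free space. Assumes PyMySQL is installed; host, user
    # and password below are placeholders for your own setup.
    import pymysql

    conn = pymysql.connect(host="localhost", user="root", password="secret")
    try:
        with conn.cursor() as cur:
            cur.execute("""
                SELECT table_schema,
                       table_name,
                       ROUND(data_length / 1024 / 1024, 1)  AS data_mb,
                       ROUND(index_length / 1024 / 1024, 1) AS index_mb,
                       ROUND(data_free / 1024 / 1024, 1)    AS free_mb
                FROM information_schema.tables
                WHERE table_schema NOT IN
                      ('mysql', 'information_schema', 'performance_schema')
                ORDER BY data_length DESC
            """)
            for schema, table, data_mb, index_mb, free_mb in cur.fetchall():
                print(f"{schema}.{table}: data={data_mb} MB, "
                      f"index={index_mb} MB, free={free_mb} MB")
    finally:
        conn.close()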
Go to Cloud SQL in the console and it will show how many SQL instances you have. If you only see the "create instance" prompt, that means you don't have any Google-managed SQL instances and the one we're talking about is standalone.
You can also check Deployment Manager to see whether you deployed one from the Marketplace or whether it's a standalone VM with MySQL installed manually.
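If you prefer the command line over the console, a rough sketch like the one below (assuming the gcloud CLI is installed and authenticated against your project) lists Cloud SQL instances, Deployment Manager deployments, and plain Compute Engine VMs so you can see where the database actually lives:

    # Minimal sketch: list the places a MySQL server could be hiding.
    # Assumes the gcloud CLI is installed and authenticated.
    import subprocess

    for cmd in (
        ["gcloud", "sql", "instances", "list"],                    # Google-managed Cloud SQL
        ["gcloud", "deployment-manager", "deployments", "list"],   # Marketplace / DM deployments
        ["gcloud", "compute", "instances", "list"],                # standalone VMs running MySQL
    ):
        print("$", " ".join(cmd))
        subprocess.run(cmd, check=False)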
I ran a simple experiment and deployed (from the Marketplace) MySQL 5.6 on top of Ubuntu 14.04 LTS. The initial DB size was over 110 MB, but it contains some data out of the box (schema, permissions, etc.), so it isn't truly "empty".
It is still odd that your DB is already 1 GB. Try deploying a new one (if your use case permits). Maybe this is an old DB that was used for something else, had its content deleted afterwards, and still carries "trash" that takes up a lot of space.
Or it may be exactly as NeatNerd said: the DB pre-allocates space for performance and optimisation reasons.
If you share more details I can give you a better explanation.
Related
I have multiple shops (a few hundred) and they all need to write to an online MySQL database at about the same time. The program I have was written in VB6 and it is currently updating the database successfully (via ODBC), although only three stores are updating it at the moment. The plan for the near future is to have all the other stores update the same MySQL database.
Will MySQL be able to handle all these stores updating it at the same time over ODBC, or is there a better way to do it?
In terms of running into problems caused by too many concurrent connections, it depends very much on the platform your database is running on.
https://dev.mysql.com/doc/refman/5.5/en/too-many-connections.html
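For reference, a quick way to see the configured limit and the peak usage so far is to query the server variables and status; a minimal sketch, assuming the PyMySQL driver and placeholder credentials:

    # Minimal sketch: check the connection limit and peak usage.
    # Host, user and password are placeholders.
    import pymysql

    conn = pymysql.connect(host="db.example.com", user="app", password="secret")
    with conn.cursor() as cur:
        cur.execute("SHOW VARIABLES LIKE 'max_connections'")
        print(cur.fetchone())      # e.g. ('max_connections', '151')
        cur.execute("SHOW STATUS LIKE 'Max_used_connections'")
        print(cur.fetchone())      # peak concurrent connections seen so far
    conn.close()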
Creating a test environment to stress test your database would be a sensible way to find out whether it is up to the task.
If your database cannot cope with the load, there are many scalable cloud-based services around now that can easily be expanded to cope with higher load. Google Cloud, Windows Azure or Amazon may be worth a look.
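A very crude way to simulate many stores writing at once is to open concurrent connections from a script; the sketch below is only an illustration (PyMySQL, with a hypothetical sales table and placeholder credentials) meant to surface connection-limit or locking problems early:

    # Crude concurrency smoke test: N threads each open a connection and
    # insert one row, roughly simulating many stores writing at once.
    # The table, database and credentials are placeholders.
    import threading
    import pymysql

    N_STORES = 100

    def store_update(store_id: int) -> None:
        conn = pymysql.connect(host="db.example.com", user="app",
                               password="secret", database="shops",
                               autocommit=True)
        try:
            with conn.cursor() as cur:
                cur.execute(
                    "INSERT INTO sales (store_id, amount) VALUES (%s, %s)",
                    (store_id, 9.99),
                )
        finally:
            conn.close()

    threads = [threading.Thread(target=store_update, args=(i,))
               for i in range(N_STORES)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print("all stores updated without connection errors")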
I have an application which is going to be distributed to a hosting platform, most probably phpfog.
It is very similar to how WordPress.com operates, where each customer can host their own individual installation of the app on our servers. We host the 'work' files and provide the database (However, it is NOT WordPress; it's a custom app).
Each user of the application has their own separate MySQL database.
I am wondering what the most cost-effective service would be to provide this. It seems that most cloud services offer, for instance, one massive 50 GB database. It is certainly conceivable that instead of individual databases we have one huge one and prefix all the tables per user, but that seems really bloated and unwieldy. It's also not really possible, without major structural changes, to have one big database for everyone (and the same tables inside it for everyone), as the app is primarily designed to be standalone.
Each database really won't get that big. We are talking low GB; I'd suggest the biggest would be 5 GB. However, there will be a LOT of them, as it's obviously one per customer.
What would be the most cost- and performance-effective way of handling this?
Amazon RDS in fact provides a full database server rather than a single database; I had misunderstood their offering.
In this case, RDS is a drop-in replacement for existing MySQL databases and will work perfectly.
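To illustrate the per-customer model on a single MySQL (or RDS) server, provisioning can be as simple as creating one database and one scoped user per tenant; a rough sketch, with hypothetical names and credentials:

    # Rough sketch: provision one database and one scoped user per customer
    # on a single MySQL (or RDS) server. Names, host ('localhost') and
    # credentials are placeholders.
    import pymysql

    def provision_customer(admin_conn, slug: str, password: str) -> None:
        db = f"app_{slug}"
        user = f"u_{slug}"
        with admin_conn.cursor() as cur:
            cur.execute(f"CREATE DATABASE `{db}`")
            cur.execute(f"CREATE USER '{user}'@'localhost' IDENTIFIED BY %s",
                        (password,))
            cur.execute(f"GRANT ALL PRIVILEGES ON `{db}`.* TO '{user}'@'localhost'")

    conn = pymysql.connect(host="rds.example.com", user="admin", password="secret")
    provision_customer(conn, "customer42", "per-tenant-password")
    conn.close()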
We're running MySQL 5.1 on CentOS 5 and I need to securely wipe data. Simply issuing a DELETE query isn't an option; we need to comply with DoD file deletion standards. This will be done on a live production server without taking MySQL down. Short of taking the server down and using a secure deletion utility on the DB files, is there a way to do this?
Update
The data sanitization will be done once per database, when we remove some of the tables. We don't need to delete data continuously. CPU time isn't an issue; these servers are nowhere near capacity.
If you need a really secure open-source database, you could take a look at Security Enhanced PostgreSQL running on SELinux. A very aggressive vacuum strategy can ensure your data gets overwritten quickly. Strong encryption can help as well; pgcrypto has some fine PGP functions.
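For illustration, symmetric PGP encryption of a column with pgcrypto looks roughly like this (a sketch assuming psycopg2, the pgcrypto extension already installed, and a hypothetical table and credentials):

    # Sketch: store a column as pgcrypto PGP ciphertext, so what hits the
    # disk is never plaintext. Assumes psycopg2, pgcrypto installed, and a
    # hypothetical table `patients(name_enc bytea)`.
    import psycopg2

    conn = psycopg2.connect("dbname=app user=app password=secret host=localhost")
    with conn, conn.cursor() as cur:
        cur.execute(
            "INSERT INTO patients (name_enc) VALUES (pgp_sym_encrypt(%s, %s))",
            ("Jane Doe", "a-strong-passphrase"),
        )
        cur.execute(
            "SELECT pgp_sym_decrypt(name_enc, %s) FROM patients",
            ("a-strong-passphrase",),
        )
        print(cur.fetchall())
    conn.close()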
Not as far as I know. Secure deletion requires the CPU to do a fair bit of work, especially the DoD standard, which I believe is 3 passes of alternating 1s and 0s. You can, however, encrypt the hard drive; then a user would need physical access and the CentOS password to recover the data. As long as you routinely monitor access logs for suspicious activity on the server, this should be "secure".
While searching found this article: Six Steps to Secure Sensitive Data in MySQL
Short of that though, I do not think a DoD standard wipe is viable or even possible without taking the server down.
EDIT
One thing I found is this software: data wiper. If there is a comparable Linux version, it might work, since it "wipes unused disk space". But again this may take a major performance toll on your server, so it may be advisable to run it at night at a set time, and I do not know what the repercussions (if any) are of doing this too often to a hard drive.
One other resource is this forum thread. It talks about wiping unused space, etc. From that thread one tool stands out in particular: sfill from the secure_deletion toolkit. The man page should be helpful.
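A minimal sketch of how sfill might be invoked against the filesystem holding the MySQL data directory (the path is a placeholder, the tool must be installed, and it needs root; expect heavy I/O, so run it off-hours):

    # Rough sketch: wipe free space on the filesystem that contains the
    # MySQL data directory using sfill. The path is a placeholder; check
    # `datadir` in my.cnf. Requires root and the secure-delete tools.
    import subprocess

    MYSQL_DATADIR = "/var/lib/mysql"
    subprocess.run(["sfill", "-v", MYSQL_DATADIR], check=True)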
If it's on a disk, you could just use: http://lambda-diode.com/software/wipe/
I have several databases hosted on a shared server, and a local testing server which I use for development.
I would like to keep both set of databases somewhat synchronized (more or less daily).
So far, my ideas to solve the problem seem very clumsy. Anyway, for reference, here is what I have considered so far:
Make a database dump from the online databases, trash the local databases, and recreate them from the dump. It's a lot of work and requires a lot of download time, which guarantees I won't do it as often as I would like (a rough sketch of automating this is shown after this list).
Write a small web service to access the new data, and write a small application locally to communicate with said web service, download the newest data, and update the local databases.
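For reference, a rough sketch of what automating the first idea could look like (host names, credentials and the database list are placeholders; the local databases must already exist):

    # Rough sketch of option 1: dump each remote database and reload it
    # locally. Schedule with cron once it works. All names/credentials
    # below are placeholders.
    import subprocess

    DATABASES = ["shop", "blog"]
    REMOTE = ["-h", "shared-host.example.com", "-u", "remote_user", "-psecret"]
    LOCAL = ["-h", "localhost", "-u", "root", "-plocalsecret"]

    for db in DATABASES:
        dump = subprocess.run(["mysqldump", *REMOTE, db],
                              check=True, capture_output=True).stdout
        subprocess.run(["mysql", *LOCAL, db], input=dump, check=True)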
Both solutions sound like a lot of work for a problem that is probably already solved a zillion times over. Or maybe it's even an existing feature which I completely overlooked.
Is there an easy way to keep databases more or less in sync? Ideally something that I can set up once, schedule, and forget about.
I am using MySQL 5 (MyISAM) databases on both servers.
=============
Edit: I had a look at replication, but it seems I can't go that route because the shared hosting does not give me enough control over the server itself (I have most permissions on my databases, but not on the MySQL server itself).
I only need to keep the data synchronized, nothing else. Is there any other solution that doesn't require full control on the server?
Edit 2:
Sorry, I forgot to mention I am running on a LAMP stack on the shared server, so Windows-only solutions won't work.
I am surprised to see that there is no obvious off-the-shelf solution for this problem.
Have you considered replication? It's not to be trifled with but may be what you want. See here for more details... http://dev.mysql.com/doc/refman/5.0/en/replication-configuration.html
Take a look at the Microsoft Sync Framework - you will need to code in .NET, but it can solve your problem.
http://msdn.microsoft.com/en-in/sync/default(en-us).aspx
Here is a sample for SQL Server, but it can be adapted to MySQL as well using the ADO.NET provider for MySQL.
http://code.msdn.microsoft.com/sync/Release/ProjectReleases.aspx?ReleaseId=4835
You will need additional tables in your MySQL database for change tracking and anchors (keeping track of the last synchronization) for this to work, but you won't need full control as long as you can access the database.
Replication would have been simpler :), but this might just work in your case.
I'm working on a SaaS project and MySQL is our main database. Our application is written in C# .NET and runs on a Windows 2003 server.
Considering maintenance, cost, options and performance, which server platform should I choose for MySQL hosting: Windows or Unix/Linux (Ubuntu/Debian)?
The scenario is as follows:
The server I run today has a moderate transaction volume. The databases grow by about 5 MB daily, we expect that to rise to around 50 MB within a couple of months, and the system is mission critical.
I don't know how big the database is going to get. We rent a VPS to host the application and the database server.
Most of our queries are simple, but our ORM tool makes constant use of subqueries. We also run reports, both simple and heavy ones. Some run when a user clicks, but most run from a queue.
Buying extra co-lo space would be nice as we get more clients; it's a SaaS project, after all.
When developing, you can use your Windows box to also run a MySQL server. If and when you want to move your DBMS to a separate server, it can be either a Windows or a Linux server.
MySQL and its supporting tools for backups etc. probably offer more choices on Linux.
There are also 3rd party suppliers who will host your MySQL database on their servers. The benefit is they will handle backups, maintenance etc.
Also: look into phpMyAdmin for use as a great admin tool.
Larry
I think you need more information to make an informed decision. It's hard to just pull out a "best" answer based on no specific information.
What is your expected transaction volume?
How big will the database get?
How complex are your queries, i.e. are they long-running or relatively quick?
Are you hosting the application on your own server at your own location? If you have to buy extra co-lo space maybe an extra server isn't the best option.
How "mission critical" is this database? Ie maybe you need replicated servers to ensure stability.
There is a server sizing tool online at http://www.sizinglounge.com/, so you should check that out. It sounds like your server could be smaller than their smallest tier, but it should be a good place to start.
If this is a mission critical application you need to do some kind of replication to an extra server in case the primary one fails, so you are definitely looking at two systems. This has to be in addition to a good backup plan.
Given that you are uncertain about how big it could get, you might just continue renting a server. For your backup, one idea would be to look at running MySQL on an Amazon EC2 instance. BTW, it is important to have a remote replicated server: if you have two systems next to each other and an environmental problem comes up, they could both be out of commission at the same time, whereas with a remote copy your options are open to potentially working around it.
If you run a lot of read-only queries locally and have your site hosted somewhere else, it might make sense to set up a local replicated database copy to query against. That could potentially improve both your website and local performance quite a bit. Plus, it would give you some good peace of mind having a local copy under your control.
HTH,
Brandon