I'm a little (very) paranoid. I use Workbench both for my own projects and for work. One of the things that completely frightens me is erroneously running a dangerous SQL command like delete from x while connected to my work's remote server, thinking it is my development machine. The question, then: is there a way to configure Workbench to prevent you from making stupid (and usually tired) mistakes, or is there an alternative tool that does? Or is it just a matter of being more careful?
Getting a confirmation from the user, as suggested in the comments, is probably not a good solution, depending on how frequently you send queries. After the first 20-30 such confirmation dialogs you get tired of them and just click them away.
A much better way is to establish 2 simple habits:
Give your users only the absolute minimum of privileges they need to do their work. This limits the damage they can cause (a sketch follows below).
Make backups. There's no question of whether you'll need a backup, only when.
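A rough sketch of the first habit, with made-up user, password, and schema names; the last statement additionally enables MySQL's safe-updates mode (the same setting Workbench's "Safe Updates" preference toggles), which rejects UPDATE and DELETE statements that lack a key-based WHERE clause:

    -- Hypothetical least-privilege account: it can read and write rows in
    -- one schema, but cannot DROP, ALTER, or touch other databases.
    CREATE USER 'app_dev'@'localhost' IDENTIFIED BY 'change_me';
    GRANT SELECT, INSERT, UPDATE, DELETE ON myapp.* TO 'app_dev'@'localhost';

    -- Reject UPDATE/DELETE statements with no key-based WHERE clause,
    -- so a stray "delete from x" fails instead of emptying the table.
    SET SESSION sql_safe_updates = 1;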
Related
I'm new to MySQL, and something really weird has happened that I can't figure out.
Recently, INSERT queries to some of the tables have become extremely slow. Weirdly enough, the query time is always right around 60 seconds.
The tables each have only 10k to 35k entries, so I don't think they are that big (though they are indeed the biggest ones in the database).
And the slowness affects only INSERT queries; DELETE, UPDATE, and SELECT all execute in 0.000x seconds.
Can someone help me figure out why this is happening?
UPDATE: So I turned on the general log and noticed that all my INSERT queries are followed by 'DO sleep(60)'. It seems my server got hacked?
Where can I find the malicious script that injects the sleep() command after each INSERT?
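For reference, on MySQL 5.1 and later the general query log mentioned above can be toggled at runtime without a restart (the file path here is just an example):

    -- Toggle the general query log at runtime; it records every statement
    -- the server receives, so it shows exactly what clients are sending
    SET GLOBAL general_log_file = '/var/log/mysql/general.log';
    SET GLOBAL general_log = 'ON';
    -- ... reproduce the slow INSERT and inspect the log ...
    SET GLOBAL general_log = 'OFF';  -- turn it back off; it grows quickly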
If you use code to build the queries, copy the code base off the server to your machine (ideally in a VM, just in case) and search for the changes within the code. Alternatively, you could restore the code base from source control (you use source control, right?!).
If it's stored procedures you use, you'll need to change them back to a working version without the sleep. Check previous backups to try to find out when this happened, which might help a wider investigation into how they got in and what they did.
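One quick way to check whether the sleep lives inside the database itself, rather than in the code base, is to search the stored-routine and trigger definitions via information_schema (a sketch; adjust the pattern to taste):

    -- Search stored procedures and functions for an injected sleep
    SELECT ROUTINE_SCHEMA, ROUTINE_NAME
    FROM information_schema.ROUTINES
    WHERE ROUTINE_DEFINITION LIKE '%sleep%';

    -- Triggers are another common hiding place for a per-INSERT payload
    SELECT TRIGGER_SCHEMA, TRIGGER_NAME, EVENT_MANIPULATION
    FROM information_schema.TRIGGERS
    WHERE ACTION_STATEMENT LIKE '%sleep%';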
You'll also need to think about the wider implications of this. Do you store user data? If so, you'll need to inform your users that your database has been compromised, so they should assume their accounts are too and change their passwords.
Finally, wipe the server. A hacked server is no longer in your control (or that's how you should look at it). Wipe it, reinstall everything, and put in changes to help prevent the same hack happening again.
A little bit stumped here; I can't seem to find anyone with the same question. I feel like my question is a lot simpler than it probably sounds. Basically, I want to have an exact copy of my Rails database on a different server, as it is being populated. Let me explain with an example.
The bottom line here is that I have a production website that needs to be up at all times.
Currently, if the website goes down, I have to use the latest copy I have of the database (because the server is down), and when the site comes back up, I have to manually import anything new into the original MySQL server. So I am looking for a way to keep the MySQL servers on both machines in sync with each other, so that if one goes down, they both still have the same information.
I understand that this can add a lot of overhead to the Rails app, which I am not that concerned about, as I can find ways to defer the MySQL queries. Unless someone knows a better way to do this?
I've been doing a lot of research, reading up on replication etc., but I'm just not sure which MySQL solution would work.
This is what I'm looking at:
When my MySQL server fails for some reason, or certain queries take really long to execute and lock some tables, I want the other insert/update/select queries to still run at normal speed, without having to wait for locks to be released or for the main database to come back up. I'm thinking there should be a second MySQL server for this, but even if there is, is what I mentioned possible, and would it involve a lot of change to my existing programming logic?
When my database is being backed up, I would still like my site to function normally; all inserts/selects/updates should work as usual.
When I need to alter a large table, I wouldn't like it to affect my application; there should be a backup server to work from.
So what do I need to do to get all this done, and would it require changing a lot of existing code to suit the new setup? [My site has a lot of reads and writes.]
There's no easy way. You're asking for a highly-available MySQL-based setup, and that requires a lot of work at the server and client ends.
Some issues, for example:
"When I need to alter a large table, I wouldn't like it to affect my application; there should be a backup server to work from."
If you're altering the table, you can't trivially create a copy to work from during the update. What about the changes that are made to your copy while the first update is taking place?
Have a search for "High Availability MySQL". It's mostly a solved problem, but the solution depends heavily on your exact requirements. You cannot just ask for "I want my SQL server to run at full speed always forever no matter what I throw at it".
Not a MySQL-specific answer, but a general one: have a read-only copy of your DB for the site to render from, synced to the master DB regularly. This way you can keep your site working even if the master DB is under load or locked due to inserts/deletes. For efficiency, keep this copy as denormalized as you can.
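The usual mechanism for that kind of synced read-only copy is plain MySQL master-slave replication. A minimal sketch follows; the host name, credentials, and binlog coordinates are placeholders, and the my.cnf settings appear as comments because they are server configuration rather than SQL:

    -- On the master (my.cnf needs: server-id = 1 and log-bin = mysql-bin,
    -- then a restart), create a replication account:
    CREATE USER 'repl'@'%' IDENTIFIED BY 'secret';
    GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';
    SHOW MASTER STATUS;  -- note the current binlog file and position

    -- On the slave (my.cnf needs: server-id = 2), point it at the master
    -- using the coordinates from SHOW MASTER STATUS:
    CHANGE MASTER TO
      MASTER_HOST     = 'master.example.com',
      MASTER_USER     = 'repl',
      MASTER_PASSWORD = 'secret',
      MASTER_LOG_FILE = 'mysql-bin.000001',
      MASTER_LOG_POS  = 4;
    START SLAVE;
    SHOW SLAVE STATUS;  -- Slave_IO_Running / Slave_SQL_Running should be 'Yes'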
I have several databases hosted on a shared server, and a local testing server which I use for development.
I would like to keep both set of databases somewhat synchronized (more or less daily).
So far, my ideas to solve the problem seem very clumsy. Anyway, for reference, here is what I have considered so far:
Make a database dump from the online databases, trash the local databases, and recreate them from the dump. It's a lot of work and requires a lot of download time (which guarantees I won't do it as often as it should be done).
Write a small web service to access the new data, and write a small application locally to communicate with said web service, download the newest data, and update the local databases.
Both solutions sound like a lot of work for a problem that is probably already solved a zillion times over. Or maybe it's even an existing feature which I completely overlooked.
Is there an easy way to keep the databases more or less in sync? Ideally something that I can set up once, schedule, and forget about.
I am using MySQL 5 (MyISAM) databases on both servers.
=============
Edit: I had a look at replication, but it seems I can't go that route because the shared hosting does not give me enough control over the server itself (I have most permissions on my databases, but not on the MySQL server itself).
I only need to keep the data synchronized, nothing else. Is there any other solution that doesn't require full control on the server?
Edit 2:
Sorry, I forgot to mention I am running on a LAMP stack on the shared server, so Windows-only solutions won't work.
I am surprised to see that there is no obvious off-the-shelf solution for this problem.
Have you considered replication? It's not to be trifled with, but it may be what you want. See here for more details: http://dev.mysql.com/doc/refman/5.0/en/replication-configuration.html
Take a look at the Microsoft Sync Framework - you will need to code in .NET, but it can resolve your issues.
http://msdn.microsoft.com/en-in/sync/default(en-us).aspx
Here is a sample for SQL Server, but it can be adapted to MySQL as well using the ADO.NET provider for MySQL.
http://code.msdn.microsoft.com/sync/Release/ProjectReleases.aspx?ReleaseId=4835
For this to work you will need additional tables in your MySQL database for change tracking and anchors (keeping track of the last synchronization), but you won't need full control of the server as long as you can access the database.
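A rough sketch of what that change tracking can look like on the MySQL side; the table and column names are invented for illustration, and the Sync Framework generates its own equivalents of these:

    -- One anchor row per synced table, remembering the last sync point
    CREATE TABLE sync_anchor (
      table_name VARCHAR(64) PRIMARY KEY,
      last_sync  DATETIME NOT NULL
    );

    -- Each synced table carries a last-modified stamp the sync job can
    -- filter on (assumes the table has no other auto-updated TIMESTAMP;
    -- MySQL 5 allows only one per table)
    ALTER TABLE products
      ADD COLUMN last_modified TIMESTAMP
      DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP;

    -- A sync pass then pulls only the delta since the stored anchor
    -- (deletes need tombstone rows or a separate log; omitted here)
    SELECT p.*
    FROM products p
    JOIN sync_anchor a ON a.table_name = 'products'
    WHERE p.last_modified > a.last_sync;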
Replication would have been simpler :), but this might just work in your case.
We are looking to have about 35-40 people writing to an Access database via script on a shared drive. The metrics break down to them needing to write about 3-7 times an hour. Would Access support this without going ape on me?
Yes, I would love to use SQL Server for this, but that means going through massive amounts of red tape, meetings, paperwork, etc. that I would prefer not to bother with.
Could you not make them go with SQL Server Express, the free edition, and skip the red tape?
In answer to your question, though: I've seen Access cause big problems in environments with this many users, although that was pre-2007. I dunno how much has changed since.
If it were me, I'd avoid Access at all cost.
Could it? Yes, if you are very careful, perform locking, and ensure that nobody steps on anybody else. Access is really not designed for any form of concurrency. I know of one place that managed to make it work in a very concurrent environment, but that environment basically logged everything, and if the DB clobbered itself, it would restore from the last backup and replay the logged operations against the Access file automatically, so that the failures were transparent. I would not recommend following that course of action...
Should you do it? No. Is there any reason that you cannot use something like PostgreSQL or MySQL?
Yes, it would work. No, it's not a good idea.
Access would be able to handle the load, as long as those 35-40 people aren't all trying to access the database at once. It'll quickly bog down when you start having more than a couple of concurrent users, particularly if those users are all trying to update something.
The problem is that it's not safe. You need to have the entire database file accessible on a network share where all users can write to it. You'll have multiple instances of Access trying to read and modify the file at the same time, and unless you are very careful with locking, it's quite possible for the database to become damaged or corrupt.
You'll also never be able to add any kind of access control beyond basic file permissions. You might not need it now, but internal databases often end up needing to be exposed to the wider world somehow.
It's not worth it. There are plenty of real RDBMS systems out there, for free, that are designed to handle this kind of thing. Why spend time trying to make Access work in such an environment, when you could just install SQL Server Express and be done with it? It has limitations, but if you're seriously considering Access, you're never going to be anywhere near those. Or use MySQL, PostgreSQL, Firebird...
I would avoid Access too. Have you ever thought about SQL Server CE? It should handle multiple users better, and it is file-based just like Access.
7 * 40 = 280 writes per hour.
280 / 60 ≈ 4.7 per minute.
If your script is light, and if you don't read results too often, maybe...
Of course, I don't recommend you try it. Time for meetings! ;)
If the connections are opened only as long as needed to run the scripts, and you use transactions and have some retry logic built in when there's a conflict, there really oughtn't be too much of an issue.
If your script takes 1 second to do its update (that's a pretty long time in computer/database terms, of course), and there are 280 updates per hour, then even if you were lucky enough that no two users ever ran their scripts simultaneously, you would still have 3,320 seconds per hour when the database was not in use.
I don't see an issue, assuming that you know how to properly manage your connections and manage your Jet transactions.
That volume is not a problem for Access so long as it's on a stable LAN or very high speed WAN. Wireless connections are also a bad idea.
I have several clients who are adding about 200K to 300K transactions per year into their systems. That's about 1,000 per working day, using both an Access front end and back end.
That said one of them will be upsizing shortly to SQL Server. I fired the other client when they hired a PHB (Dilbert's pointy haired boss.)
It's iffy. The first time the database crashes you'll wish you went with SQL Server Express. And it will crash, eventually.
In my previous job we had a product with an Access database backend. We had some clients with 25 users. We refused clients who had 40 potential users because we knew from experience that the database would corrupt itself on a regular basis, and performance would be unacceptable.
The day we went to SQL Server Express, the performance of the application doubled, and the problems with crashing and corruption virtually disappeared.