ENVIRONMENT:
Ubuntu Hardy LTS, Apache 2, Passenger
Virtual Server One: Rails 2.3.8 application accessing MySQL common_database
Virtual Server Two: Rails 3.0.4 application accessing the same MySQL common_database
The first application gets regular use from our clients. The second application is released but seeing light usage currently. The database seems to be working fine.
Someone advised me that this configuration could wind up corrupting the database. We anticipate that the second application will eventually get heavy usage. It would kill some momentum if the database became corrupted. A few more facts:
Neither application has the ability to change database structure.
Both apps will be accessing the same tables at the same time.
It is impossible for both apps to attempt to update the same record at the same time.
Has anyone experienced corruption of a MySQL database that is accessed in this kind of configuration? Were you able to overcome the problem? How?
What was the case made for risk of corruption?
As long as you aren't running rake db:migrate and both apps have the same models, you have roughly the same risks as if two or more web servers running the same application version had been spun up. If the two apps diverge in what they expect the database schema to look like, you may run into issues, especially if the divergence involves foreign keys or application logic based on magic values.
With any concurrent manipulation of a database you run into a category of issues around modifying the same records simultaneously (who wins when two clients try to modify the same piece of data; what kind of locking model do you use, etc).
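To make the locking question concrete, here is a minimal sketch of the pessimistic approach in MySQL (the accounts table and its columns are hypothetical; InnoDB is assumed):

    START TRANSACTION;
    -- Lock the row; any other client's SELECT ... FOR UPDATE on it blocks until we commit
    SELECT balance FROM accounts WHERE id = 42 FOR UPDATE;
    -- Safe to modify now; nobody else can change the row underneath us
    UPDATE accounts SET balance = balance - 100 WHERE id = 42;
    COMMIT;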
I want to test Node and Deno and try to redirect users via proxy to one MySQL DB.
How will it impact the database?
Can timestamp conflicts arise from concurrent CRUD operations, or does MySQL have some mechanism to cope with connections from multiple servers?
What about the performance or memory footprint of the database in RAM? Will it occupy the same amount of space as if there were only one server issuing CRUD requests?
What would happen if I added another server that connects to the DB, for example a Java or Go server?
It will have virtually no impact on the database beyond that of any other concurrent process connecting to it.
This is not a Deno issue but rather a database issue.
The exact same problems can happen even with your current single Node.js instance, because the nature of all systems these days is concurrent/parallel.
You might as well replace the Deno app with another Node.js instance, Java, etc. Or even your current Node.js app.
Data in a database can change once you have loaded it into the client, and it is up to you to implement the code that handles such scenarios.
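One common way to handle that is optimistic locking; a sketch using a hypothetical version column:

    -- Read the row along with its current version
    SELECT id, balance, version FROM accounts WHERE id = 42;

    -- Later, write back only if nobody changed the row in the meantime;
    -- if another client bumped the version first, this matches zero rows
    UPDATE accounts
    SET balance = 900, version = version + 1
    WHERE id = 42 AND version = 7;
    -- An affected-row count of 0 means a conflict your code must handle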
The claim that MySQL is not "ACID" is neither negative nor relevant in and of itself, because on its own it has no context.
If you need complete, absolute integrity on a record, make sure you lock it when you select it, but there will be a trade-off.
I am developing a Drupal site using MariaDB.
The import process of a 77MB dump file locally (Docker container running MariaDB) takes about 2 minutes.
The same import to an Amazon RDS (db.m4.large) running a MariaDB database takes more than 30 minutes.
Isn't Amazon RDS supposed to be quicker?
What is the recommended practice for having a quick dev environment for SQL? (The local Docker service is running too slow.)
Thanks,
Yaron
If you are already on RDS, just use a snapshot.
Take a snapshot from production. (or find one of the automated snapshots)
Create a new DB from the snapshot
It's very fast and avoids the latency and the millions of individual queries that an import involves.
However, this is just one very crude approach to making a dev environment.
Some people have scripts that create the data set for DEV from scratch. This might be more appropriate, and even necessary, if for example you have a large database and developers who like to work locally on their own computers.
Some people have scripts that sanitize DEV to eliminate sensitive and personal data, which you could run after the snapshot.
Some people even have DEV as a replica of the main DB and modify the DEV db so that additional usage doesn't clash with the replicated changes. This is a bit delicate though.
Often Dev and Tests use dummy data, and Staging uses real data (cloned from Production and possibly sanitized).
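If you do still need to import a dump, relaxing a few session settings for the duration of the load usually speeds it up considerably. A sketch (benchmark against your own data, and only run it on a dev copy):

    SET autocommit = 0;
    SET unique_checks = 0;
    SET foreign_key_checks = 0;
    SOURCE /path/to/dump.sql;  -- SOURCE is a mysql client command
    SET foreign_key_checks = 1;
    SET unique_checks = 1;
    COMMIT;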
We have a 2GB DigitalOcean server dedicated to the MySQL instance behind two other PHP servers. We are using Percona Server 5.6 on it, and we have configured MySQL replication; this configuration is working fine.
Our issue is that our site monitoring tools sometimes report that some of the URLs hosted on these servers are down (maybe once every week or two). When I check, I can see that the load on the MySQL master server is very high (maybe 35-40), so MySQL is not responding. At that point I usually restart the MySQL service; the restart brings the load back to normal, and the sites start working again.
This is the back-end MySQL database server for 20-25 PHP applications (WordPress, Drupal and some custom applications).
Here are my questions,
Why does the server load go back down on its own after a spike happens?
Is there any way to tell which database is causing the issue, so that I can identify the offending application as well?
How can I identify the root cause of this issue?
Depending upon your working dataset, a 2GB server providing access for 20-25 PHP applications (WordPress, Drupal and some custom applications) could itself be the issue.
For example, if you have a 1.4GB buffer pool (assuming all tables are InnoDB) and 10GB of data, then your various applications could end up competing for resources such as I/O, buffer pool pages, the Adaptive Hash Index, and the query cache. They could also, assuming caching is used, be invalidating their caches within a similar timeframe, thus sending expensive queries to the database.
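To check whether your working set even fits in memory, you can compare the total InnoDB data and index size against the configured buffer pool; a rough sketch:

    -- Total InnoDB data + index size, in MB
    SELECT ROUND(SUM(data_length + index_length) / 1024 / 1024) AS innodb_mb
    FROM information_schema.tables
    WHERE engine = 'InnoDB';

    -- Configured buffer pool size, in bytes
    SHOW VARIABLES LIKE 'innodb_buffer_pool_size';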
Whilst a load of 50 is something that you would normally want to avoid, the load average is not something that you should concern yourself with when viewed in isolation.
"The use of the uninterruptible state has since grown in the Linux kernel, and nowadays includes uninterruptible lock primitives. If the load average is a measure of demand in terms of running and waiting threads (and not strictly threads wanting hardware resources), then they are still working the way we want them to."
http://www.brendangregg.com/blog/2017-08-08/linux-load-averages.html
If the issue is happening once per week then it is starting to sound like a batch process, or cache expiration issue - too much happening at once for the resources available.
The best thing to do is to monitor and look for the cause. Since you are already using Percona Server, PMM should give you the perfect insight to find it (PMM also works with Oracle MySQL, MariaDB, Aurora, etc.). You can try a demo to see the insights that you can gain: https://pmmdemo.percona.com. The software is open source and free to use.
You can look in QAN to find the most expensive queries, whilst looking at Prometheus data to give an insight into the host itself. There are some recommendations to get the most from PMM, depending upon your flavour of MySQL.
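Even without PMM, enabling the slow query log is a cheap first step towards finding the expensive queries; a sketch (the one-second threshold is just a starting point):

    SET GLOBAL slow_query_log = 'ON';
    SET GLOBAL long_query_time = 1;  -- log anything slower than 1 second
    SET GLOBAL slow_query_log_file = '/var/log/mysql/slow.log';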
I have a small app running on a production server. In the next update the db schema will change; this means the production database schema will need to change and there will need to be some data manipulation.
What's the best way to do this, i.e. run a one-off script to complete these tasks when I deploy to the production server?
Stack:
Nodejs
Expressjs
MySQL using node-mysql
Codeship
Elasticbeanstalk
Thanks!
"The best way" depends on your circumstances. Is this a rather seldom occurrence, or is it likely to happen on a regular basis? How many production servers are there? Are there other environments, e.g. for integration tests, staging etc.? Do your developers have an own DB environment on their machines? Does your process involve continuous integration?
The more complex your landscape is, the better it is to use solutions like Todd R suggested (Liquibase, Flywaydb).
If you just have one production server and it can be down for maintenance for a few hours, then it could be sufficient to:
Schedule a maintenance downtime with your stakeholders and users
Shutdown the server
Create a backup
Update the database structure and contents as necessary
Deploy software updates
Restart the server
Test the result (manually or automatically)
Inform your stakeholders and users
If anything goes wrong, rollback to a backed up version of the database and your software.
Having database update scripts is advisable. Having tested them at least once is even more advisable. Creating a backup in advance is essential.
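As a sketch of what such an update script might look like (table and column names are hypothetical, and note that MySQL DDL statements are not transactional, so the backup is your real safety net):

    -- migrate-v2.sql: run once during the maintenance window
    ALTER TABLE users ADD COLUMN full_name VARCHAR(255) NULL;

    -- Backfill the new column from the old data
    UPDATE users SET full_name = CONCAT(first_name, ' ', last_name);

    -- Drop the columns the new code no longer reads
    ALTER TABLE users DROP COLUMN first_name, DROP COLUMN last_name;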
http://www.liquibase.org/ or http://flywaydb.org/ - pretty "heavy" for one-time use, but if you'll need to change the schema again in the future, it's probably worth investing the time to learn one of these.
I have a MySQL install on a shared server and have access through phpMyAdmin. I want to make a continuous, real-time clone of that database to a cloud MySQL database (we have created an Nginx-ready MySQL server specially for this database). I then want to update the code to point to the new database...
I think you will have difficulty doing real-time replication of a MySQL database in a shared-server environment. Since you appear to be moving DB servers, I would be inclined to take a hot copy of your data and install that on the new DB server. At the same time as taking that copy, you should switch on query logging in your application.
Your switch over would then consist of running logged queries against the new database (faster than they were logged!) and finally, at a point that all logged queries have been run, switching the configuration of the app so that the new db is used.
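If logging in the application is awkward, MySQL's general query log can capture the statements server-side instead; a sketch (note it records reads as well as writes, so filter before replaying):

    SET GLOBAL general_log_file = '/var/log/mysql/general.log';
    SET GLOBAL general_log = 'ON';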
Edit: the problem with a hot copy is that data is being written to the DB at the same time as it is being copied. That means that the 'last updated' time will be different for each table. On that basis, is it possible in your application to set up a last_updated column for each row? If so, you will be able to tell, for each table, which logged queries still need to be applied.
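In MySQL such a column can maintain itself; a sketch with a hypothetical table (on versions before 5.6, only one TIMESTAMP column per table can auto-update this way):

    ALTER TABLE orders
      ADD COLUMN last_updated TIMESTAMP NOT NULL
      DEFAULT CURRENT_TIMESTAMP
      ON UPDATE CURRENT_TIMESTAMP;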
What you're looking for is replication. It has far too many options to cover here in a single post.
http://dev.mysql.com/doc/refman/5.5/en/replication.html
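At its simplest, pointing a replica at the master comes down to a few statements on the replica (a sketch; the host, user, and binlog coordinates are placeholders, and the master must already have binary logging and a unique server-id configured):

    CHANGE MASTER TO
      MASTER_HOST = 'master.example.com',
      MASTER_USER = 'repl',
      MASTER_PASSWORD = 'secret',
      MASTER_LOG_FILE = 'mysql-bin.000001',
      MASTER_LOG_POS = 4;
    START SLAVE;
    SHOW SLAVE STATUS\G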
If you're going to do replication over the internet, you'll want to secure it. Your host might allow a virtual local area network, so that replication doesn't use up your bandwidth resources.
A great set of tools from Percona that you should look at is Maatkit, now continued as the Percona Toolkit.
https://launchpad.net/percona-toolkit
Documentation and usage examples
http://www.maatkit.org/doc/
It's good for other tasks but it also allows you to replicate a live database quickly.
When you're working with live databases, make sure your backups are up to date.