Background: I used to develop using MAMP and over the months/years I've accumulated a large mysql database (a few gigs) that I use for development for my different projects. I finally got around to setting up a VM using Vagrant and I've gotten everything set up and working nicely except my database. I'm running a CentOS 6.5 guest box on an OSX host.
My problem: I need my database to be completely persistent so I can vagrant up/destroy as many boxes as I need to, while the MySQL data persists.
My solution #1: I initially mounted a synced folder using vboxsf. This works pretty well and seems to be my best option so far, but performance is pretty bad. Query-intensive pages on my dev sites take 1-3 seconds to load whereas they might normally take under a second to load.
My solution #2: I then tried mounting a synced folder using NFS because the performance should be much better. The issue here is that MySQL complains because, given the nature of NFS, it can't chown the data directory to the mysql:mysql user. I get the following errors when trying to start up the mysqld service:
chown: changing ownership of '/www/mysql': Operation not permitted
chmod: changing permissions of '/www/mysql': Permission denied
So, my question is: are there any better ways to accomplish what I need? I feel like NFS would be the best solution, but I don't know how to work around the ownership/permission issues automatically with Vagrant. Any help would be appreciated.
I had the same issue/requirement for my local dev on a Mac, and I found a solution for a MySQL-only Vagrant box with the external data linked as a synced folder. I guess it'll run on Windows too.
Here is the Vagrant box config: https://github.com/ronnyhartenstein/vagrant-mysql-shared-folder
And if you understand German, here is my blog article with some background info and tests (and failures, of course): http://blog.rh-flow.de/2014/11/11/es-hat-sich-ausgemampft-vagrant-ist/
First of all, let me start by saying this is not best practice. You may know yourself that this can lead to problems, e.g. if your PC dies or you want to hand a project over to another person for development. Of course, especially as a one-person endeavour, there are more important things than having test data importers and the like :) So let's look for solutions.
NFS Permissions
To get NFS permissions right, your users need to have the same UID and GID on host and guest. It's pretty tricky to set up and you should not change it from the guest. Maybe you can change it on the host to make the share writable by mysql and make the UID and GID match. Of course, the moment the host changes, this won't work anymore.
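A quick way to check whether the IDs already line up (a rough sketch; the usernames are assumptions, and realigning system UIDs can break other file ownership, so treat that as a last resort):

# On the OS X host: note the uid/gid of the user who owns the synced folder
id
# On the CentOS guest: note the uid/gid of the mysql user
id mysql
# If they differ you could, in principle, realign the guest side, e.g.
# usermod -u <host-uid> mysql && groupmod -g <host-gid> mysql
# and then re-chown /var/lib/mysql, but only if you know what you are changing.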
rsync shared folder
Rsync might not be the fastest in terms of syncing, but if you create one rsync shared folder where only MySQL is writing, and which syncs back to some folder on your host, this might be a solution. The "real" projects could still live inside a VirtualBox share or NFS, and you wouldn't need to bother with correct permissions.
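Vagrant's built-in rsync synced folders only sync from host to guest, so the "sync back" part would have to be an explicit rsync from inside the guest, for example to the default /vagrant shared folder. A minimal sketch (paths and schedule are assumptions; copying a live InnoDB datadir can be inconsistent, so stopping mysqld first is safer):

# Inside the guest: copy the local MySQL datadir to the (slow but persistent)
# shared folder, e.g. from cron or as a step before "vagrant halt"
sudo rsync -a --delete /var/lib/mysql/ /vagrant/mysql-backup/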
There might be some other solutions as well:
Create a backup/restore strategy
One way to go would be to back up MySQL inside your Vagrant box at various points, e.g. every day. You could also run the backup when the box is shut down, thus creating a backup right before you destroy the box. By placing this backup in a shared folder, you'd have up-to-date data in case you destroy a box. Performance should be pretty good, as the data MySQL is using wouldn't be on a shared folder.
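A minimal sketch of that idea, assuming the default /vagrant synced folder (credentials and paths are assumptions to adjust to your setup):

# Inside the guest: dump everything to the shared folder, e.g. daily from cron
# or from a script run just before "vagrant halt" / "vagrant destroy"
mysqldump -u root -p --all-databases --single-transaction > /vagrant/backup/all-databases.sql

# After provisioning a fresh box: restore from the latest dump
mysql -u root -p < /vagrant/backup/all-databases.sql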
Run MySQL on host or other vagrant box
It's of course possible to connect from within your vagrant box to your host or another vagrant box which runs MySQL. Your host or this box could be long-lived and could serve as a central "MySQL Server" for all your projects.
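For example, with VirtualBox's default NAT networking the host is usually reachable from the guest at 10.0.2.2, so the apps inside the box could simply point there (the user, password and grant below are assumptions):

# From inside the Vagrant box: talk to a MySQL server running on the host
mysql -h 10.0.2.2 -u devuser -p mydatabase

# On the host, MySQL must listen on an address the guest can reach (bind-address)
# and the user needs a grant that allows remote connections, e.g.
# GRANT ALL ON mydatabase.* TO 'devuser'@'10.0.2.%' IDENTIFIED BY '...';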
Have a MySQL slave running on the same machine which writes to shared folder
I believe a master/slave combination is possible with MySQL: run both on one machine, with the master (which you use in your projects) living inside your VM and not writing anything to a shared folder, and a slave which writes to your shared folder and mirrors the master. This would mean you get good performance and a delay of a few seconds between writing something and having it written to your shared folder. Of course, keeping this setup running and making sure it works all the time can be tricky.
You can use bindfs to change the user/group of a share. I'm actually using a plugin called vagrant-bindfs which lets you remount a share with different ownership. It works, but I haven't tried it with MySQL to see how it performs.
Relevant lines in my Vagrantfile:
unless Vagrant.has_plugin?("vagrant-bindfs")
  raise 'vagrant-bindfs is not installed! Please install with vagrant plugin install vagrant-bindfs'
end

config.vm.synced_folder "../", "/temp-nfs-mounts/sites-unbinded", type: :nfs
config.bindfs.bind_folder "/temp-nfs-mounts/sites-unbinded", "/sites", :force_user => "vagrant", :force_group => "vagrant", :create_as_user => true
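If the goal is to let MySQL own its data directory, the same bindfs options could presumably be pointed at the mysql user instead; this is untested on my side, and the mount points are just examples:

config.vm.synced_folder "../mysql-data", "/temp-nfs-mounts/mysql-unbinded", type: :nfs
config.bindfs.bind_folder "/temp-nfs-mounts/mysql-unbinded", "/var/lib/mysql", :force_user => "mysql", :force_group => "mysql", :create_as_user => true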
I have a small instance running in GCE. I had some trouble with MongoDB, so after a few tries I decided to reset the instance. But... it didn't seem to come back online, so I stopped the instance and restarted it.
It is a Bitnami MEAN stack which starts Apache and the other services at startup.
But... I can't reach the instance! No SCP, no SSH, no web service running. When I try to connect via SSH (in GCE) it times out; I can't make a connection on port 22. The instance information says 'The instance is booting up and sshd is not running yet', which is possible of course... but I can't reach the instance in any way, not even after waiting an hour :) Not sure what's happening if I can't connect to it somehow :(
There is some activity in the console... some CPU usage, mostly 0%, some incoming traffic but no outgoing...
I hope someone can give me a hint here!
Update 1
After the helpful tip from Serhii... I found this in the logs...
Booting from Hard Disk 0...
[ 0.872447] piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
/dev/sda1 contains a file system with errors, check forced.
/dev/sda1: Inodes that were part of a corrupted orphan linked list found.
/dev/sda1: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY.
(i.e., without -a or -p options)
fsck exited with status code 4
The root filesystem on /dev/sda1 requires a manual fsck
Update 2...
So, I need to fsck the drive...
I created a snapshot, made a new disk from that snapshot, and added the new disk as an extra disk to another instance. Now that instance won't boot, with the same problem... removing the extra disk fixed it again. So adding the disk makes it crash even though it isn't the boot disk?
First, have a look at Compute Engine -> VM instances -> NAME_OF_YOUR_VM -> Logs -> Serial port 1 (console) and try to find errors and warnings that could be connected to a lack of free space or to SSH. It would be helpful if you updated your post with this information. If your instance has run out of free space, follow these instructions.
You can try to connect to your VM via Serial console by following this guide, but keep in mind that:
The interactive serial console does not support IP-based access restrictions such as IP whitelists. If you enable the interactive serial console on an instance, clients can attempt to connect to that instance from any IP address.
You can find more details in the documentation.
Have a look at the Troubleshooting SSH guide and Known issues for SSH in browser. In addition, Google provides a troubleshooting script for Compute Engine to identify issues with SSH login/accessibility of your Linux based instance.
If you still have a problem try to use your disk on a new instance.
EDIT: It looks like your test VM is trying to boot from the disk that you created from the snapshot. Try to follow this guide.
If you still have a problem, you can try to recreate the boot disk from a snapshot to resize it.
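As a rough sketch of the usual rescue workflow (the names, zone and device path are assumptions; the important part is that the repaired disk is attached as a secondary disk, so the rescue VM does not try to boot from it):

# Create a disk from the snapshot and attach it to a healthy rescue VM
gcloud compute disks create recovery-disk --source-snapshot=my-snapshot --zone=us-central1-a
gcloud compute instances attach-disk rescue-vm --disk=recovery-disk --zone=us-central1-a

# On the rescue VM: repair the filesystem (check the device name with lsblk first)
sudo fsck -y /dev/sdb1

# Detach the repaired disk and boot a new instance from it
gcloud compute instances detach-disk rescue-vm --disk=recovery-disk --zone=us-central1-a
gcloud compute instances create recovered-vm --zone=us-central1-a --disk=name=recovery-disk,boot=yes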
I don't get it. Every time I've deleted all the content of a MySQL database directory in /var/lib/mysql/mywebsite, my website still runs on any device or browser without any negative effect. If I look in phpMyAdmin, the database is completely empty! If I delete the whole directory (including the directory itself), it has an effect (the site is gone), but after restoring this directory with a different database, the old version appears on the website again instead of the newly restored database!
BUT if I clear the tables of that same database in phpMyAdmin, it affects my site immediately... Why is that? Could it be that there is some kind of DB caching on my VServer?
After:
service mysqld restart
everything is back to normal, meaning: the right content from the right database.
Would be nice if somebody could help me with this.
CentOS 6.9 (Final)
Plesk Onyx 17.5.3 Update Nr. 4
Wordpress 4.7.4 (without caching enabled / caching plugins)
On Unix, if a process has a file open while another process deletes it, the process can still access the file -- it doesn't really go away until all processes close it. The mysqld process already has the database files open. So deleting the directory doesn't affect the contents of the database until you restart it, which forces it to reopen all the files.
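You can see this for yourself while the daemon is still running; a quick check (assuming lsof is installed and the process is called mysqld):

# List files mysqld still holds open even though they were deleted from disk
sudo lsof -p "$(pidof mysqld)" | grep -i deleted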
Also, mysqld loads indexes into memory when possible, and doesn't reread them from the file if it doesn't need to.
In general, you should avoid manipulating the files used by a database directly while the daemon is running; the results can be very unpredictable. You should preferably use commands in the database client to manage the database contents, but if you need to restore it from a file-level backup, you should shut down the daemon first.
Try dropping the database in phpMyAdmin, or execute this in your console:
DROP DATABASE database_name;
Fellows,
Here is my scenario:
I have an application X (let's say WordPress) running in a Docker container and linked to another Docker container running MySQL. If, for some reason, I lose the database, is there a way to recover without any data loss of the latest changes?
For some reason (not sure if this is true or not), any file changes inside a Docker container are automatically lost. Therefore, if the database is inside a Docker container, anything that is in the DB is automatically lost as well.
If instead MySQL is running on the server and the database goes down, I can just restart the server and everything is OK, because the changes to the filesystem have been written. And before making the system live I can also force a backup to have the latest changes.
In both cases, of course, I somehow manage to get a decent backup on a daily basis. And in both cases I want the minimum data loss.
And of course I understand that there may be data corruption with the second option. I want to have the latest written data even if it is corrupted.
As far as I was aware, for MS SQL, PostgreSQL, and even MySQL databases (so, I assumed, in general for RDBMS engines), you cannot simply back up the file system they are hosted on, but need to do an SQL-level backup to have any hope of internal consistency and therefore ability to actually restore.
But then answers like this and indeed the official docs referenced seem to suggest that one can just tar away on database data:
docker run --volumes-from dbdata -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /dbdata
These two ideas seem at odds with one another. Is there something special about how Docker works that makes it unnecessary to use SQL-level backups? If not, what am I missing in my understanding? (Why is something used as the official example when you can't use it to back up a production database? That can't be right...)
Under certain circumstances, it should be safe to use the image of a database on a disk:
The database server is not running.
All persistent data is on the disk system(s) being backed up (logs, tables spaces, temporary storage).
All components are restored together.
You are restoring the image to the same server on the same path.
The last condition is important, because some aspects of the database configuration may be stored in operating system files.
You need to do the backup from within the database whenever the server is running: the server is responsible for the internal consistency of the data, and a disk image taken while it runs may not be complete or recoverable. If the server is not running, then the state of the database should be consistent in the persistent storage.
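Two corresponding sketches, reusing the dbdata volume container from the example above (the db container name and password variable are assumptions): an SQL-level backup while the server runs, or a file-level backup with the server stopped.

# Option 1: online, SQL-level backup
docker exec db mysqldump -u root -p"$MYSQL_ROOT_PASSWORD" --all-databases --single-transaction > backup.sql

# Option 2: offline, file-level backup -- stop the server, tar the volume, restart
docker stop db
docker run --rm --volumes-from dbdata -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /dbdata
docker start db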
I launched an Amazon EC2 instance with Amazon Linux and an Amazon EBS root volume. I also started Tomcat 7 and MySQL 5.5 on this EBS volume.
Later I decided to change from Amazon Linux to Ubuntu. To do that I need to launch another Amazon EC2 instance with a new EBS root volume. Now I want to copy Tomcat 7 and MySQL from the older EBS volume to the new one. I have tables and data in MySQL which I don't want to lose, and an application running on Tomcat. How do I go about it?
A couple of thoughts and suggestions.
First, if you are going to have any kind of significant load on your database, running it on an EBS-backed volume is probably not a great idea, as EBS-backed storage is incredibly slow relative to the machine's local/ephemeral storage (/mnt). Now obviously you don't want DB data on ephemeral storage, so there is really nothing you can do about it if you want to run MySQL on EC2. So my suggestion would be to utilize an RDS instance for your DB if your infrastructure requirements allow for it.
Second, if this is a production application, you are undoubtedly going to have some down time as you make this transition. The question is whether you need to absolutely minimize the amount of downtime. If so, then you need to have an idea as to the size of your database. Is it going to take a long time to dump/load? If not, you could probably just get your new instance up and running, and tested on an older copy of your database and then just dump and load the current database at the time of cutover.
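A minimal sketch of that dump/load cutover (hostnames and credentials are assumptions):

# On the old instance: dump everything, keeping InnoDB tables consistent
mysqldump -u root -p --all-databases --single-transaction > dump.sql

# Copy the dump to the new Ubuntu instance and load it there
scp dump.sql ubuntu@NEW_INSTANCE:/tmp/
mysql -u root -p < /tmp/dump.sql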
If it is a large database then perhaps you can turn on MySQL binary logging. Then make a dump of the database at a known binary log position and install this dump on your new instance. When you are ready to cut over, you can replay the binary logs on the new instance to bring it current. Similarly, you could just set up the DB on the new instance as a replica until the cutover, at which point you make it the master.
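A rough sketch of the binary-log variant (the file name, position and paths are placeholders; a dump taken with --master-data records the position for you):

# On the old instance: with log-bin enabled in my.cnf, take a dump that
# records the current binlog file and position
mysqldump -u root -p --all-databases --single-transaction --master-data=2 > dump.sql

# At cutover time: replay everything written since the dump onto the new server
mysqlbinlog --start-position=107 /var/lib/mysql/mysql-bin.000003 | mysql -h NEW_INSTANCE -u root -p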
You may even consider just using rsync to sync the physical database files if you don't want to mess with binary logging, though this can be a problematic approach if you are not that familiar with dealing with the actual physical database files.
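If you go that route, it could look roughly like this (host and paths are assumptions; mysqld should be stopped on both sides, and --rsync-path="sudo rsync" assumes passwordless sudo on the new instance):

# On the old instance, with MySQL stopped on both machines, push the raw datadir
sudo service mysqld stop
sudo rsync -avz --rsync-path="sudo rsync" /var/lib/mysql/ ubuntu@NEW_INSTANCE:/var/lib/mysql/

# On the new Ubuntu instance: fix ownership and start MySQL again
sudo chown -R mysql:mysql /var/lib/mysql
sudo service mysql start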
As far as your application goes, that should be much simpler to migrate, assuming it is just a collection of files. I would not copy the Tomcat 7 installation itself, but rather just install Tomcat on Ubuntu and then adjust the configuration to match your current setup.
As far as the cutover itself goes, this should be pretty straightforward and would vary in approach depending on whether you are using an Elastic IP for your server or whether it is behind a load balancer.