I am playing with MySQL, and the command SELECT @@datadir; has got me thinking. For someone like me without the means to have a dedicated server, I am relegated to cheap VPSs, which are no good since once my disk space quota is consumed, there is no adjusting my quota upwards.
Since a new connection would be a new thread, I imagine it would be convenient to have some form of network multithreading with my datadir sitting at Dropbox or Google. Is there a database system which allows one to access a datadir across a network?
The MySQL server requires a local directory path for its data directory.
You could fool it into using NFS. However, be warned that "standard" Linux NFS is not a very good solution: it introduces locking problems which may lead to a lockdown of the database (I have experienced this).
Alternatively, some storage devices provide their own NFS clients.
Or you could use a SAN/NAS/whatever. I would further suggest that what you may be looking for is a separate storage device, not a separate "machine".
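If you do go the NFS route despite the warning above, the setup is just a mount plus a datadir setting. A minimal sketch, assuming a hypothetical NFS export at nas.example.com:/exports/mysql:

    # /etc/fstab -- mount the NFS export where MySQL expects its data
    nas.example.com:/exports/mysql  /var/lib/mysql  nfs  hard,vers=4  0 0

    # /etc/my.cnf -- point the server at the mounted directory
    [mysqld]
    datadir = /var/lib/mysql

The locking problems mentioned above apply regardless of how cleanly this mounts.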
The default port for a MySQL connection is 3306. But can we set two different ports for it? Maybe ports 30 and 3306, so that we can have connections at localhost:30 and localhost:3306, assuming both ports are free. I am trying to run this using XAMPP on Windows 10.
It is not recommended by MySQL, for good reasons.
Warning
Normally, you should never have two servers that update data in the same databases. This may lead to unpleasant surprises if your operating system does not support fault-free system locking. If (despite this warning) you run multiple servers using the same data directory and they have logging enabled, you must use the appropriate options to specify log file names that are unique to each server. Otherwise, the servers try to log to the same files.
Even when the preceding precautions are observed, this kind of setup works only with MyISAM and MERGE tables, and not with any of the other storage engines. Also, this warning against sharing a data directory among servers always applies in an NFS environment. Permitting multiple MySQL servers to access a common data directory over NFS is a very bad idea. The primary problem is that NFS is the speed bottleneck. It is not meant for such use. Another risk with NFS is that you must devise a way to ensure that two or more servers do not interfere with each other. Usually NFS file locking is handled by the lockd daemon, but at the moment there is no platform that performs locking 100% reliably in every situation.
https://dev.mysql.com/doc/refman/8.0/en/multiple-data-directories.html
See the link above; there you will also find what you have to do.
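For completeness, what the linked page describes is running two separate server instances, each with its own port, data directory, and log files. A hedged sketch of such a configuration using mysqld_multi (all paths here are illustrative):

    # my.cnf -- two independent instances managed by mysqld_multi
    [mysqld1]
    port      = 3306
    datadir   = /var/lib/mysql1
    socket    = /var/lib/mysql1/mysql.sock
    log-error = /var/log/mysql/mysqld1.err

    [mysqld2]
    port      = 30
    datadir   = /var/lib/mysql2
    socket    = /var/lib/mysql2/mysql.sock
    log-error = /var/log/mysql/mysqld2.err

Start both with mysqld_multi start 1,2. Note the separate datadir and log-error values for each instance, exactly as the warning above requires.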
As far as I was aware, for MS SQL, PostgreSQL, and even MySQL databases (so, I assumed, in general for RDBMS engines), you cannot simply back up the file system they are hosted on, but need to do an SQL-level backup to have any hope of internal consistency and therefore the ability to actually restore.
But then answers like this and indeed the official docs referenced seem to suggest that one can just tar away on database data:
docker run --volumes-from dbdata -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /dbdata
These two ideas seem at odds with one another. Is there something special about how Docker works that makes it unnecessary to use SQL-level backups? If not, what am I missing in my understanding? (Why is something used as the official example when you can't use it to back up a production database? That can't be right...)
Under certain circumstances, it should be safe to use the image of a database on a disk:
The database server is not running.
All persistent data is on the disk system(s) being backed up (logs, table spaces, temporary storage).
All components are restored together.
You are restoring the image to the same server on the same path.
The last condition is important, because some aspects of the database configuration may be stored in operating system files.
Whenever the server is running, you need to do the backup within the database: the server is responsible for the internal consistency of the data, and a disk image may not be complete or recoverable. If the server is not running, then the state of the database in persistent storage should be consistent.
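For a running server in the Docker setup from the question, an SQL-level backup would look something like the following. This is a sketch, assuming a MySQL container named dbdata and root credentials in $MYSQL_ROOT_PASSWORD (both hypothetical):

    # logical backup of a running server; --single-transaction gives a
    # consistent snapshot for InnoDB tables without locking them
    docker exec dbdata mysqldump --all-databases --single-transaction \
        -uroot -p"$MYSQL_ROOT_PASSWORD" > backup.sql

The resulting backup.sql is then safe to archive with tar, snapshot, or anything else, because it is a self-contained consistent dump rather than live datadir files.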
We have installed WordPress on an EC2 t1.micro instance and installed BuddyPress on top of that. Everything works fine for a single user, but when two users access the site at the same time, it goes down because of a RAM issue: httpd (Apache) takes the maximum memory. How do I overcome this? Is there any configuration I need to change in httpd.conf, or any network/traffic-blocking tool I need to install?
Micro instances are notoriously too small to handle WordPress and MySQL together. They're going to thrash (overuse the disk swap feature) or just run out of RAM and crash.
You are going to have to do a lot of tuning to get this right on a micro instance, and it is never going to be rock-stable. It's a pain in the neck. If your time is worth more than a dollar an hour compared to hosting fees, you should upgrade to an instance with more RAM, or sign up for one of the many US$6 per month shared hosting accounts available in the world.
Where to start tuning? Try setting a value in the Apache httpd.conf.
Set MaxRequestWorkers to a low number; you might try 4. When this number is low, you also won't have many simultaneous clients connecting from your Apache/PHP to your MySQL server.
Requests from web-browser clients will be enqueued when all your workers are busy. That works correctly, but may make your web site seem slow to your users. See the backlog parameter in the Linux documentation for listen(2) for an explanation of that queuing.
That will save both on Apache RAM and MySQL resources.
http://httpd.apache.org/docs/current/mod/mpm_common.html#maxrequestworkers
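As a sketch, the directive lives in the prefork MPM section of httpd.conf (the value 4 is just the starting point suggested above; on Apache versions before 2.4 the directive is called MaxClients):

    # httpd.conf -- cap simultaneous Apache worker processes
    <IfModule mpm_prefork_module>
        ServerLimit       4
        MaxRequestWorkers 4
    </IfModule>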
Then you probably should look at the my.cnf file for MySQL and see what you can play around with.
Edit: MySQL, Apache, and PHP are all drawing on the same pool of RAM -- 512MB if I remember correctly. Reducing the number of Apache workers should help control RAM usage by Apache (and PHP, which is probably running in the Apache server's address space). Do that.
Then, go find the memory_limit in php.ini. It's set to 128M in many standard installations. Try reducing it to 64M or 40M. That will make each php instance use less RAM. But, if your WordPress installation is complex (lots of plugins, fancy theme), it may make some pages fail to load. WordPress will announce the problem as memory running out. http://php.net/memory-limit
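A sketch of that one-line edit (pick whichever value your pages tolerate):

    ; php.ini -- cap per-request PHP memory (64M is the value suggested above)
    memory_limit = 64M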
Then, jump into MySQL's my.cnf. The standard MySQL install comes with a sample file called my-small.cnf, which contains the configuration parameters for a small MySQL instance. Yours can be small: WordPress's tables contain hundreds or a few thousands of rows, not hundreds of thousands. Save your old my.cnf and then copy the contents of my-small.cnf into my.cnf. Restart your MySQL server after doing that.
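For a sense of scale, the small-instance sample contains settings along these lines (values illustrative, in the spirit of my-small.cnf rather than copied from any particular version):

    # my.cnf -- small-footprint settings (illustrative values)
    [mysqld]
    key_buffer_size  = 16K
    max_connections  = 10
    table_open_cache = 4
    sort_buffer_size = 64K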
Those steps may help you squeak by in a micro instance. They may not. They are, I suppose, worth a try.
I'm trying to set up MySQL running in a virtual server (CentOS 6), but as disk-intensive stuff isn't great on a VM, I intend to store the database on the host (KVM on CentOS 6) server, and connect to it via Samba or NFS, or the like.
The trouble is that MySQL doesn't seem to like the /var/lib/mysql directory being mounted from a network drive, and I'm getting various different errors in the various configurations I've tried.
My end game is to have the DB server on a VM that can be easily moved between hosts, and the data on a redundant (probably clustered) server. In the meantime, the storage area I'm using on the host server is mirrored using DRBD.
Has anyone done something similar, and can suggest a config that works, or an altogether better way of doing it?
Using a file-level protocol is a really bad idea; file-level protocols are designed to do a very different job.
For block-device-level protocols there are two choices (DRBD doesn't apply here): AoE or iSCSI. IIRC, AoE is tightly coupled to physical network interfaces, which may cause some complications in your setup; hence I'd recommend having a long, hard look at iSCSI.
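A rough sketch of the iSCSI approach from the VM side, assuming the host exports the DRBD-mirrored storage as a target (the portal address and IQN here are hypothetical):

    # on the VM: discover the host's iSCSI target and log in
    iscsiadm -m discovery -t sendtargets -p 192.168.122.1
    iscsiadm -m node -T iqn.2013-01.com.example.host:mysql -p 192.168.122.1 --login

    # the LUN shows up as a local block device (e.g. /dev/sdb);
    # format and mount it where MySQL expects its datadir
    mkfs.ext4 /dev/sdb
    mount /dev/sdb /var/lib/mysql

Because the VM sees an ordinary block device, MySQL behaves as it would on local disk, which is the whole point of using a block-level protocol here.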
Is it correct to use a backup of a vm as a means of restoring a MySQL database?
Are there any dangers in doing this?
My own feeling is that a VM backup/snapshot is at the OS level, not the DB level, and therefore may not back up the database in the correct way. Does anybody have any advice on this?
It's perfectly fine as long as you do one of two things:
Ensure consistency of the tables, by shutting down the database or by using something like FLUSH TABLES WITH READ LOCK while taking the snapshot (you probably don't want to do this; a sketch of the sequence follows below), or
Use a transactionally-safe storage engine such as InnoDB (the default) for all tables that are likely to change around the time of the snapshot, and rely on its ability to recover from what looks like a crashed state, i.e. the copy of a running server.
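A minimal sketch of the lock-and-snapshot sequence from the first option (the snapshot step is whatever your hypervisor provides, shown here only as a comment):

    -- in a mysql client session that stays open for the whole procedure:
    FLUSH TABLES WITH READ LOCK;
    -- ... take the VM snapshot from the hypervisor now ...
    UNLOCK TABLES;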
Once you realise that taking a snapshot of a running VM and booting the snapshot on another machine looks just like pulling the plug on that server and rebooting it, your choice becomes relatively easy: Make sure the system can recover from pulling the plug, and it can recover from a VM snapshot backup.
Based on a recommendation from Jeff Hunter posted on the VMware blog, the answer is no: it's not safe to rely on snapshots for MySQL backups. His recommendation is basically to dump the DB through a separate process (and then allow the snapshot to copy the dump).
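A hedged sketch of that dump-then-snapshot approach as a cron job (schedule, output path, and credential handling are all illustrative):

    # /etc/cron.d/mysql-dump -- nightly logical backup; later VM snapshots
    # then pick up a consistent dump file rather than raw datadir files
    0 2 * * * root mysqldump --all-databases --single-transaction > /var/backups/mysql-$(date +\%F).sql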