I have a web server and a database server; they are not the same machine.
I can get all the database sizes and so on, but I would like to know the physical disk space used and available on the MySQL server. I have no rights on the DB server, so I can't use any command lines; it needs to be done by connecting to the database and then asking...
Is this possible? I think it's not, but I'm hoping someone has a better way.
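For reference, the "database sizes" part mentioned above can be done over a normal client connection with a query against information_schema; a rough sketch (the host and user are placeholders), though it reports table data and index sizes, not the free space on the underlying disk:
mysql -h db.example.com -u appuser -p -e "SELECT table_schema, ROUND(SUM(data_length + index_length)/1024/1024, 1) AS size_mb FROM information_schema.tables GROUP BY table_schema;"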
As far as I was aware, for MS SQL, PostgreSQL, and even MySQL databases (so, I assumed, for RDBMS engines in general), you cannot simply back up the file system they are hosted on; you need to do an SQL-level backup to have any hope of internal consistency and therefore the ability to actually restore.
But then answers like this and indeed the official docs referenced seem to suggest that one can just tar away on database data:
docker run --volumes-from dbdata -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /dbdata
These two ideas seem at odds with one another. Is there something special about how Docker works that makes it unnecessary to use SQL-level backups? If not, what am I missing in my understanding? (Why is something used as the official example when you can't use it to back up a production database? That can't be right...)
Under certain circumstances, it should be safe to use a disk image of a database:
The database server is not running.
All persistent data is on the disk system(s) being backed up (logs, tablespaces, temporary storage).
All components are restored together.
You are restoring the image to the same server on the same path.
The last condition is important, because some aspects of the database configuration may be stored in operating system files.
You need to do the backup through the database whenever the server is running: the server is responsible for the internal consistency of the data, and a disk image taken while it runs may not be complete or recoverable. If the server is not running, then the state of the database should be consistent in persistent storage.
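Applied to the docker example from the question, that means stopping the database container first so the "server is not running" condition holds. A minimal sketch, assuming the database actually runs in a container named db (hypothetical) that uses the dbdata volume container from the question:
docker stop db
docker run --rm --volumes-from dbdata -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /dbdata
docker start db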
I need to put up a server that handles a very large number of connections, approximately 1 million at the same time.
I need to know how to do it and what technologies to use.
I have messages from users (I'll use the XMPP protocol), and they have to pass through a server, so if 1 million people use the same server at the same time it will crash. The users will also use a database (MySQL?) for registration.
So... how do I set up the server so that it doesn't crash? What kind of server should I use (Apache, MySQL server...)? And what should I do so the database can hold all that traffic? Will queries take too long?
I've read the documents on configuring MySQL, and they don't recommend having more than 1000 connections. link
Thank you!
You should set up load balancers (like nginx or HAProxy) that send users to the right server, and you can build a cloud: a web server cluster and a database cluster behind the load balancers. You can also set up a backup server with rsync, which can restore your data if anything goes wrong.
Check the DigitalOcean tutorials; they're worth a shot:
haproxy
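A minimal HAProxy sketch of that idea, balancing XMPP client connections (port 5222) across two hypothetical application servers; the addresses are placeholders, and a real configuration needs far more tuning:
cat > /etc/haproxy/haproxy.cfg <<'EOF'
global
    maxconn 100000

defaults
    mode tcp
    timeout connect 5s
    timeout client  1m
    timeout server  1m

frontend xmpp_in
    bind *:5222
    default_backend xmpp_nodes

backend xmpp_nodes
    balance leastconn
    server node1 10.0.0.11:5222 check
    server node2 10.0.0.12:5222 check
EOF
systemctl restart haproxy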
I have a MySQL database on Amazon RDS (about 600 GB of data). I need to move it back home to our local dedicated servers, but I don't know where to start.
Every time I try to start a SQL dump it freezes. Is there a way to move it onto S3, maybe even splitting it into smaller parts before starting the download?
How would you go about migrating a 600 GB MySQL DB?
Have you tried the innobackupex script? It lets you back up a running database (hot backup) and tar|gzip the final backup, so you get a smaller file. It works only with innodb_file_per_table=1.
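A sketch of that tar|gzip hot backup (user and password are placeholders; note that innobackupex needs filesystem access to the MySQL data directory, so it runs on the database host itself):
innobackupex --user=backup --password=secret --stream=tar ./ | gzip > /backups/full.tar.gz
# streamed tar archives must later be extracted with tar's -i flag, e.g. tar -izxf full.tar.gz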
If you have downtime to move the database, you can also try optimizing tables to reclaim some space (especially if you have done a lot of deletes).
You can also think about getting rid of some data (logs, archives, etc.) and moving it later.
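On the S3/splitting part of the question: a dump from RDS can be compressed, split, and pushed to S3; a rough sketch (endpoint, credentials, database, and bucket are placeholders), with --single-transaction and --quick to help a large InnoDB dump avoid locking and buffering problems:
mysqldump -h "$RDS_ENDPOINT" -u "$DB_USER" -p --single-transaction --quick "$DB_NAME" \
  | gzip \
  | split -b 1G - dump.sql.gz.part_
aws s3 cp . "s3://$BUCKET/mysql-dump/" --recursive --exclude "*" --include "dump.sql.gz.part_*"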
I have been doing some research on best backup procedures for largish (27.678 GB) MySQL database tables.
Currently we are using a program called Rapidsync (an offsite backup tool), but it is slow and it locks the tables it is currently backing up, causing downtime/slowness in SQL.
Our current server is running Windows 2008 R2, with SQL server 2008 on the same box as well.
Hardware specs for the dedicated server are:
16 GB RAM
CPU: Intel Xeon E3-1230 V2 @ 3.30GHz
1 TB hard drive
In terms of databases, we have 58 in total, varying in size, some of which ideally need to be backed up weekly or even daily.
Through a program we use called Navicat, you can tunnel to a database over SSH and copy databases manually. Is this a reliable and feasible option if we were to install it on our local machine and copy them across? Or would it be more secure/efficient to use a SQL dump?
I hope I have given all of the necessary info but please do ask if you need to know more.
P.S. Only free options at the moment, as we are on a tight budget! :)
Thanks in advance
You mentioned SSH, so I suppose the backups can also be done on a server other than the database server itself. On Unix you can use the excellent Percona XtraBackup tool, which is free and supports online (lock-free) backups of InnoDB data files as well as incremental backups. It may be possible to compile it on Windows too (part of it is written in C, part in Perl).
So you can set up a weekly full backup and daily incremental backups. The tool keeps track of which pages of the InnoDB data files have changed and copies only those.
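A rough sketch of that weekly-full/daily-incremental scheme (paths and credentials are placeholders):
# weekly full backup into a fixed directory
innobackupex --user=backup --password=secret --no-timestamp /data/backups/full
# daily incremental backup: copies only InnoDB pages changed since the full backup
innobackupex --user=backup --password=secret --incremental /data/backups/inc --incremental-basedir=/data/backups/full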
I have 2 servers for my website
Web Server
For sending dynamic content, mostly created with PHP
A lot of RAM and a fast processor, only a few GBs of hard drive space.
and a
File Server
For sending static content: images, videos, etc.
A few TBs of hard drive space, not as much RAM and a slower processor.
I want to use the speed of the Web Server but the space of the File Server. However, I've heard the overhead of NFS will make it so slow that it won't matter...
I will be using MySQL, and I want to know how I should set up the database so I can keep the data on the File Server but have the queries performed and processed by the Web Server.
The advice you received is correct in my experience... running mysqld on one box and using a remote server via NFS for file storage is not very fast (if the remote storage were a SAN that would be a different matter).
You can reduce the number of times your database is hit and leverage the RAM on the web server by caching on that tier. Look into introducing something like memcached to help with the most expensive MySQL operations.
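For illustration, a minimal way to run memcached on the web server with a fixed memory cap (the 2048 MB figure is just an example; the PHP side then has to check the cache before hitting MySQL):
memcached -d -m 2048 -l 127.0.0.1 -p 11211
echo stats | nc 127.0.0.1 11211    # quick check that it is answering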
If some of your tables are small but used frequently, you could consider running a second instance of MySQL on your web server just for those tables. Keep in mind, though, that you will have two separate points of database failure that need to be managed (appropriate backups, security updates, etc.).
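If you go that route, the second instance just needs its own data directory, port, and socket so it doesn't clash with the existing setup; a hypothetical sketch (MySQL 5.7+ style initialization, all paths and the port are placeholders):
# initialize a separate data directory for the local instance
mysqld --initialize --datadir=/var/lib/mysql-local
# run it on its own port and socket
mysqld --datadir=/var/lib/mysql-local --port=3307 --socket=/var/run/mysqld/mysqld-local.sock &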