WordPress site malfunctions when server disk is full - MySQL

My company has some private servers for web development.
The server has 600 GB of storage, but at times it runs out of disk space.
When that happens, one of our websites, which runs WordPress, malfunctions and some of its features can't be accessed.
Can anyone tell me what the possible cause is and how it can be prevented?
Thanks in advance.

When that happens, one of our websites, which runs WordPress, malfunctions and some of its features can't be accessed.
That's what happens when the disk is full; there's nothing you can do about that at the application level. Temporary files can no longer be created, including the session files needed to track whether a user is logged in. MySQL may need to write temporary data even when doing only a SELECT. The web server may need swap space on the hard disk at peak times, and so on.
I'm more concerned with WP malfunctioning than with why the server disk is full.
That won't work. You need to fix the source of the problem, which is the server's disk filling up. You can't make WordPress work despite the disk being full.

It could be down to any number of things, probably none of them at the WordPress level. With no disk space, Apache can't write its access and error logs, PHP can't write session files, MySQL can't write its data or temporary files, and so on.
The only answer applicable to you at this stage is to stop maxing out the disk space.
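As a stopgap while you track down what is filling the disk, a small cron job can warn you before it is completely full. A minimal sketch; the path, threshold, and alert address are placeholders to adapt:

<?php
// check-disk.php - run from cron, e.g. every 15 minutes
$path  = '/';                        // filesystem to watch
$free  = disk_free_space($path);     // bytes currently free
$total = disk_total_space($path);    // total filesystem size

$usedPercent = 100 * (1 - $free / $total);

if ($usedPercent > 95) {             // arbitrary threshold - tune to taste
    mail('admin@example.com', 'Disk almost full',
         sprintf('%s is %.1f%% full, %.2f GB free',
                 $path, $usedPercent, $free / 1024 / 1024 / 1024));
}

That buys you warning time; it does not replace fixing whatever is eating the space.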

Related

1 GB database with only two records

I identified an issue with an infrastructure I created on Google Cloud Platform and would like to ask the community for help.
I was charged for something I did not expect, since in theory it should be almost impossible for me to exceed the free-tier limits. But I noticed that my database is huge, 1 GB and growing, and there are some very heavy buckets.
The database is managed by a Django app, and in the tool's admin panel there are only 2 records in production. There were several backups and things like that, but I didn't create them.
Can anyone give me some guidance on how to solve this?
I would assume that you manage the database yourself, i.e. it is not Cloud SQL. Databases pre-allocate files in large chunks in order to be efficient on writes. You can check this yourself: write an additional 100k records, and most likely the file size will not change.
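If it is MySQL, you can see how much of that file size is live data versus indexes and pre-allocated free space with a standard information_schema query (this is stock MySQL metadata, so it works from any client):

SELECT table_schema,
       ROUND(SUM(data_length + index_length) / 1024 / 1024, 1) AS used_mb,
       ROUND(SUM(data_free) / 1024 / 1024, 1) AS free_mb
FROM information_schema.tables
GROUP BY table_schema;

A free_mb that dwarfs used_mb supports the pre-allocation explanation rather than runaway data.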
Go to Cloud SQL and it will show how many SQL instances you have. If you see the "create instance" prompt, that means you don't have any Google-managed SQL instances, and the one we're talking about is standalone.
You can also check Deployment Manager to see whether you deployed one from the Marketplace or whether it's a standalone VM with MySQL installed manually.
As a simple experiment I deployed (from the Marketplace) MySQL 5.6 on top of Ubuntu 14.04 LTS. The initial DB size was over 110 MB, but it contains some records from the start (schema, permissions, etc.), so it's not "empty" at all.
It is still odd that your DB is already 1 GB. Try deploying a new one (if your use case permits). Maybe this is an old DB that was used for something, had all its content deleted afterwards, and some leftover "trash" still takes up a lot of space.
Or it may well be exactly as NeatNerd said: the DB pre-allocates space for performance and optimisation reasons.
If you can share more details, I can give you a better explanation.

Effect of loading files from a MySQL database on site performance

I have a website that loads all of its images from a MySQL database.
Sometimes when many clients connect to the site it slows down, so I'm making some optimizations to my server and code to improve overall performance.
As one candidate change, I want to know whether moving the files out of the database and serving them as static files, instead of as dynamically generated content, would significantly improve performance.
If so, are there any benchmarks available on it?
Storing images in a database is generally a bad idea, yet you see lots of people doing it without any good reason.
In 99% of cases I would recommend storing only file path references to the images in the database and keeping the images themselves as static files, as sketched after this list.
Here are some reasons why:
You don't tie up both the application server and the database server transmitting images to the browser; you can offload this to the web server itself, which is far better optimized for serving static content.
If you have a sizeable site, you will eventually want to move static images onto a CDN anyway. You can't do that with files stored in the database.
Your application will be slower when inserting images into the database, as you basically have to upload the file to the application server and then turn around and write it into the DB, as opposed to simply writing a path reference.
Your DB can grow in size at a significant rate with enough images. You don't want to tie up your DB's file system with a bunch of files that can be stored cheaply in other ways (such as a distributed file storage service like Amazon S3).
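As a concrete illustration of the path-reference approach, an upload handler writes the bytes to disk and inserts only the location. This is a sketch; the table name, paths, and credentials are made up:

<?php
// upload.php - store the image on disk, keep only its path in MySQL
$pdo  = new PDO('mysql:host=localhost;dbname=app', 'user', 'secret');
$dest = '/var/www/static/images/' . uniqid() . '.png';   // hypothetical path

if (move_uploaded_file($_FILES['image']['tmp_name'], $dest)) {
    $stmt = $pdo->prepare('INSERT INTO images (path) VALUES (?)');
    $stmt->execute([$dest]);
}
// The web server (or a CDN) now serves the file directly; PHP and MySQL
// are no longer involved in delivering it.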
I have a similar situation to yours. The solution is simple: cache the content.
The first time you run the query to fetch an image, e.g.:
SELECT * FROM images WHERE id = 1
then simply cache the result to a file:
file_put_contents("image1.png", $row['data']);
On subsequent requests, check whether the file already exists; if it does, serve it directly and avoid querying the database entirely.
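Putting that together, a minimal cache-or-fetch sketch might look like this (assumes a data column holding the image bytes; credentials are placeholders and error handling is omitted):

<?php
// image.php?id=1 - serve the cached copy if present, otherwise fetch and cache
$id    = (int) $_GET['id'];
$cache = "cache/image{$id}.png";            // hypothetical cache directory

if (!file_exists($cache)) {
    $pdo  = new PDO('mysql:host=localhost;dbname=app', 'user', 'secret');
    $stmt = $pdo->prepare('SELECT data FROM images WHERE id = ?');
    $stmt->execute([$id]);
    file_put_contents($cache, $stmt->fetchColumn());
}

header('Content-Type: image/png');
readfile($cache);                           // cache hits never touch MySQL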

MySQL Server Runs out of Disk Space?

Our company's web application stores a ton of data points on thousands of visitors a day, and we anticipate the hard disks will fill up soon. Our server cannot support more hard drives, and we are not interested in little tricks that free up just enough space to buy us a few hours.
How can we solve this issue? The database is huge, over 200 GB, and our website needs to stay available, so I don't believe copying it over to a new, larger server is a good option for us. Furthermore, what happens when THAT server runs out of disk space?
What do large scale web sites normally do to remedy this issue?
Thanks!
You may want to investigate splitting the data across multiple database servers as "shards". You will likely have to add some logic to your application so it knows where to find a given set of data and how to combine results that originate from multiple shards. There are third-party applications that can assist you with this process.
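The routing logic can be as simple as hashing a stable key, such as a visitor ID, to pick a shard. A toy sketch with made-up host names and credentials:

<?php
// Route each visitor's data to one of several database servers ("shards").
$shards = [
    'mysql:host=db0.internal;dbname=metrics',
    'mysql:host=db1.internal;dbname=metrics',
    'mysql:host=db2.internal;dbname=metrics',
];

function shard_for($visitorId, array $shards) {
    // Simple modulo routing: the same visitor always lands on the same shard.
    $dsn = $shards[$visitorId % count($shards)];
    return new PDO($dsn, 'user', 'secret');
}

$db = shard_for(42, $shards);   // every query for visitor 42 goes to shard 0

Modulo routing is the easiest scheme but remaps almost every key when you add a server; a lookup table or consistent hashing copes with growth better.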

Using a MySQL database is slow

We have a dedicated MySQL server, with about 2000 small databases on it. (It's a Drupal multi-site install - each database is one site).
When you load each site for the first time in a while, it can take up to 30 seconds to return the first page. After that, pages return at an acceptable speed. I've traced this through the stack to MySQL. Also, when you connect with the command-line mysql client, the connection is fast, then "use dbname" is slow, and then queries are fast.
My hunch is that this is due to the server not being configured correctly and the unused DBs falling out of a cache, or something like that, but I'm not sure which cache or setting applies in this case.
One thing I have tried is innodb_buffer_pool_size. This was set to the default 8 MB. I tried raising it to 512 MB (the machine has ~2 GB of RAM, and the additional RAM was available), as the reading I did indicated that more should give better performance, but this made the system run slower, so it's back at 8 MB now.
Thanks for reading.
With 2000 databases you should adjust the table cache setting (table_open_cache on MySQL 5.1 and later, table_cache before that). You almost certainly have a lot of misses in this cache.
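You can gauge that directly: if the Opened_tables counter keeps climbing while the server runs, tables are constantly being evicted and reopened. A quick check (credentials are placeholders):

<?php
// Show table-cache counters: a fast-growing Opened_tables means misses.
$pdo = new PDO('mysql:host=localhost', 'root', 'secret');

foreach ($pdo->query("SHOW GLOBAL STATUS LIKE 'Open%tables'") as $row) {
    echo $row['Variable_name'], ' = ', $row['Value'], "\n";
}
// Compare Opened_tables against the cache size itself:
// SHOW VARIABLES LIKE '%table%cache%';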
Try running mysqltuner and/or tuning-primer.sh to get more information on potential issues with your settings.
Also, Drupal does database-intensive work; check your Drupal installations, as you may be generating a lot of (perhaps too many) queries.
As for innodb_buffer_pool_size, you certainly have a lot of page cache misses with such a small buffer (8 MB). The ideal size is one where all your data and indexes fit in the buffer, and with 2000 databases... 8 MB is certainly far too small, but it will be hard for you to grow it. Tuning a MySQL server is hard: if MySQL takes too much RAM, your Apache won't get enough.
Solutions are:
check that you connect by IP address rather than by DNS name
(just in case)
buy more RAM
move MySQL to a separate server
adjust your settings
For Drupal, try storing sessions in memcache instead of the database (you'll need RAM for that, but it takes load off MySQL); modules for that are available. If you have Drupal 7 you can even move some of the cache tables into memcache instead of MySQL (do not do that with big cache tables); see the settings sketch below.
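For the Drupal 7 case, the memcache contrib module is typically wired up in settings.php along these lines (the module path varies by install; treat this as illustrative and check the module's README for the exact directives):

// additions to settings.php (which is already a PHP file)
$conf['cache_backends'][]    = 'sites/all/modules/memcache/memcache.inc';
$conf['cache_default_class'] = 'MemCacheDrupal';
// Keep the form cache in the database - form tokens must not be evicted.
$conf['cache_class_cache_form'] = 'DrupalDatabaseCache';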
Edit: one last thing. I hope you have not modified Drupal to use persistent database connections; some modules allow that (and old Drupal 5 installs try to do it automatically). With 2000 databases you would kill your server. Check the MySQL error log for "too many connections" errors.
Hello Rupertj, as I read it, you are using InnoDB tables, right?
InnoDB tables are a bit slower than MyISAM tables, but I don't think that is the major problem here. As you said, you are using Drupal; is that a kind of multi-site setup, like a WordPress network?
If so, I'm sorry to say that with this kind of system, each time you install a plugin or similar, your database grows in tables and of course in data, and it can become very, very slow. I have experienced this myself, not with Drupal but with the WordPress blog system, and it was a nightmare for me and my friends.
Since then I have abandoned the project, and my only advice to you is: don't install a lot of plugins in your Drupal system.
I hope this advice helps you; it helped me a lot with WordPress.
This sounds like a caching issue in Drupal, not MySQL. It seems there are a few very heavy queries, or many, many small ones, or both, that hammer the database server. Once that work is done, Drupal stores the results in several caching layers, after which only one (or very few) queries are needed to build a page. Slow in the beginning, fast after that.
You will have to profile it to determine what the cause is, but the table cache seems like a likely suspect.
However, you should also be mindful of persistent connections, which should always be turned off (yes, for everyone, not just you). Apache/PHP persistent connections are a pessimisation that you and everyone else can generally do without.
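If you want to audit a codebase for persistent connections, these are the usual PHP patterns to grep for (shown only so you can find and remove them):

<?php
// Each of these creates a persistent connection - avoid all three:
$db1 = mysql_pconnect('localhost', 'user', 'secret');      // legacy mysql API
$db2 = new mysqli('p:localhost', 'user', 'secret');        // 'p:' host prefix
$db3 = new PDO('mysql:host=localhost', 'user', 'secret',
               array(PDO::ATTR_PERSISTENT => true));       // PDO attribute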

Securely deleting/wiping MySQL data from hard disk

We're running MySQL 5.1 on CentOS 5, and I need to securely wipe data. Simply issuing a DELETE query isn't an option; we need to comply with DoD file deletion standards. This will be done on a live production server without taking MySQL down. Short of taking the server down and using a secure deletion utility on the DB files, is there a way to do this?
Update
The data sanitization will be done once per database, when we remove some of the tables. We don't need to delete data continuously. CPU time isn't an issue; these servers are nowhere near capacity.
If you need a really secure open-source database, you could take a look at Security Enhanced PostgreSQL running on SELinux. A very aggressive vacuum strategy can ensure your data gets overwritten quickly. Strong encryption can help as well; pgcrypto has some fine PGP functions.
Not as far as I know. Secure deletion requires the CPU to do a fair amount of work, especially the DoD standard, which I believe is 3 passes of overwriting with 1s and 0s. You can, however, encrypt the hard drive; a user would then need physical access and a password for the CentOS box to recover the data. As long as you routinely monitor the access logs for suspicious activity on the server, this should be "secure".
While searching I found this article: Six Steps to Secure Sensitive Data in MySQL
Short of that though, I do not think a DoD standard wipe is viable or even possible without taking the server down.
EDIT
One thing I found is this software: data wiper. If there is a comparable Linux version, it might work, since it "wipes unused disk space". But again, this may take a major performance toll on your server, so it may be advisable to run it at night at a set time; I also don't know what the repercussions (if any) are of doing this too often to a hard drive.
One other resource is this forum thread. It talks about wiping unused space, etc. One resource from that thread stands out in particular: the secure_deletion toolkit's sfill. The man page should be helpful.
If it's on a disk, you could just use: http://lambda-diode.com/software/wipe/