Can't shrink my SQL Server 2008 Log files - 260GB currently [closed] - sql-server-2008

My database log files have grown to 260GB and I need to shrink them. I have tried numerous scripts such as:
DBCC SHRINKFILE ('HM_Log', 0)
I have also used the Shrink option in SQL Server Management Studio; however, the log file doesn't seem to shrink.
Has anyone got any suggestions?
The database is on my production server and actively used. I also have a maintenance plan set up to run all the relevant tasks, including a daily transaction log backup and a weekly full backup, which appears to be working fine.
I need to shrink it down so I can do a full backup + restore on my local development machine; at the moment the log file is too large for my local drive.

You need to back up the transaction log first; once the log has been backed up (and thereby truncated), shrinking the file should work.
See here: http://msdn.microsoft.com/en-us/library/ms178037.aspx
In particular, this bit:
Typically, truncation occurs automatically under the simple recovery model when the database is backed up and under the full recovery model when the transaction log is backed up. However, truncation can be delayed by a number of factors. For more information, see http://msdn.microsoft.com/en-us/library/ms345414.aspx.
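A minimal T-SQL sketch of that sequence, assuming the database is named HM and the log's logical file name is HM_Log as in the question (the backup path is only a placeholder):

-- Check what is holding log truncation back (e.g. LOG_BACKUP, ACTIVE_TRANSACTION)
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'HM';

-- Back up the log so the inactive portion can be truncated
BACKUP LOG HM TO DISK = N'D:\Backups\HM_log.trn';

-- Then shrink the physical file down to a target size in MB (here 1024 MB)
USE HM;
DBCC SHRINKFILE ('HM_Log', 1024);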

Related

Is it OK to copy DB files from one MySQL server to another? [closed]

I have two identically configured MySQL 5.6.10 servers and needed to move the data files fast from one to the other. Is this an OK procedure?
Here is what I did:
1) Shut down both servers
2) Moved all the files from one box to the other (DATA is on a separate drive on both machines)
3) Turned the second server on
4) Connected it back to the app server
It took about 5 minutes to move all files (~50GB) and all seems to work. I just wonder if I missed anything?
Thanks much for your feedback.
If both server versions are the same, then I think it's perfectly fine, not just OK; I have done the same many times without any data loss. But this method comes with costs:
You have to shut down the MySQL server (which is not good if it's a production server).
You have to make sure the permissions of the data (mysql) directory are the same as on the previous machine.
You will have to monitor the MySQL error log while starting the second server (a quick sanity check you can run afterwards is sketched below).
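That sanity check is not part of the original procedure; it's just a sketch of what you could run on the second server once it is up, with mydb.orders and mydb.customers standing in for your own tables:

-- List the schemas to confirm everything was copied across
SHOW DATABASES;

-- Verify a few important tables came through the file copy intact
CHECK TABLE mydb.orders, mydb.customers;

-- Spot-check row counts against the original server
SELECT COUNT(*) FROM mydb.orders;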
You can use mysqldump, but if you don't want to, you can use MySQL Workbench's migration wizard; it really takes care of everything.
A much safer and recommended way would be database backup and recovery:
do a full backup from server1 and restore it to server2. Later on, you can go for differential backups.

how to clear server/host log [closed]

Suddenly my phpMyAdmin page can't load and keeps redirecting back to the same login page, although I can log in to MySQL through the shell. I tried clearing the browser cache and still have the issue. I read in a post that I should clear the server/host log files, so I searched Google for how to do that and found that the log files are in the /var/log folder, but I don't know which files can be deleted; I'm afraid of losing my data in MySQL. I'm running a LAMP server. Which log files exactly should I delete?
There's no mysql data in /var/log, just log files, so you can't lose any data.
Delete a bunch of the older *.gz files. These are log files from previous weeks or months, depending on how many old files you keep during rotation. You should also update your logrotate configuration files to reduce the number of old files you save, so you don't fill up the disk again.

Continuous Backup of Mediawiki [closed]

I administer MediaWiki for my organisation. We use it as our intranet site, and it has accumulated a huge organisational knowledge base. I have to make sure that MediaWiki is always up and running and that the knowledge base is always backed up.
Is there a way to take continuous backups of the MediaWiki files and databases? My MediaWiki is hosted on a LAMPP server with Debian OS.
I am trying to find a way to automate the backup process.
It depends on what you mean by "continuous". If you want a copy of the database running that is always the same as the main database, you will need to set up "replication" - see http://dev.mysql.com/doc/refman/5.1/en/replication.html for how to do that.
If you want a database backup that is relatively current, then running mysqldump every hour or so is a pretty good solution.
You'll need to backup the files separately, because they are in your file system not the database. Look at running rsync every hour or so.
Why do you want a "continuous" backup and how would you use it? Do either of these approaches answer your question?
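For the replication route, here is a rough sketch of the classic master/replica setup in plain MySQL statements; the host names, the repl user, the password and the binary log coordinates are all placeholders (take the real coordinates from SHOW MASTER STATUS on the master, and the master needs log-bin enabled in my.cnf):

-- On the master: create an account the replica will connect as
CREATE USER 'repl'@'wiki-replica' IDENTIFIED BY 'replica_password';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'wiki-replica';

-- Note the current binary log file and position
SHOW MASTER STATUS;

-- On the replica: point it at the master using the coordinates noted above
CHANGE MASTER TO
    MASTER_HOST = 'wiki-master',
    MASTER_USER = 'repl',
    MASTER_PASSWORD = 'replica_password',
    MASTER_LOG_FILE = 'mysql-bin.000123',
    MASTER_LOG_POS = 107;
START SLAVE;

-- Confirm replication is running (look at Slave_IO_Running / Slave_SQL_Running)
SHOW SLAVE STATUS\G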

Any reason to run JETCOMP on smaller MDB file? [closed]

Is there any reason to run JETCOMP.EXE or any other compaction method on an MDB file if it is relatively small (i.e. 0-200MB)?
I can understand that if it is approaching a size limit (e.g. 1GB for some older MDB formats) it would make sense, but is there any need otherwise?
Can a failure to run JETCOMP (or similar) bring about data corruption problems?
It is good practice to run a regular compact on an Access database of any size, as it has several benefits. The first is the size of the file: not really an issue at around 200MB, but it can't hurt. It will also re-index all your tables, which is especially important if you have amended any table structures or added new tables. And it will re-evaluate any queries to ensure they have the fastest execution plan.
In terms of a failure of the compact causing corruption: the first thing the compact does is create a backup copy of the original file, so if it fails you still have the original to revert back to, and therefore you shouldn't run into any corruption issues.
Access databases are very prone to data corruption, especially when the database is shared over the network. It won't hurt to run Compact and Repair every now and then just to make sure the DB is in optimum condition.

How many MySQL database can be created in a single domain in Plesk control panel? [closed]

Do you have any experience with this? I currently have 1,900 MySQL databases in a single domain in my Plesk control panel, and I wonder whether my MySQL server will become overloaded or go out of service due to such a high number of databases in the system.
Do you have any suggestions? Each database is for a user of my service, by the way.
MySQL itself doesn't place any restrictions on the number of databases you can have, and I doubt Plesk does either; I'm sure it just displays all the databases present on the MySQL server.
However, your host may have a limit (which you'd have to ask them about), or if you start getting a huge number of databases, you may actually run into a filesystem limit. As the MySQL documentation says, each database is stored as a directory, so you could hypothetically hit the filesystem's upper limit for how many subdirectories are allowed.
I've got well over 5,000 databases running on a Linux-based Plesk cluster (one DB server, one web server) and it's running fine, though I have had to increase open-files limits due to the huge number of files. I can't run the MySQL tuning primer any more, though; well, I can, but it takes about 4 hours.
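If you want to see where a server like this stands, here is a quick sketch of the kind of checks both answers allude to (database count and open-files pressure); these are plain MySQL statements and nothing here is specific to Plesk:

-- How many databases the server is actually carrying
SELECT COUNT(*) AS db_count FROM information_schema.SCHEMATA;

-- Current limits on simultaneously open files and cached table handles
SHOW VARIABLES LIKE 'open_files_limit';
SHOW VARIABLES LIKE 'table_open_cache';

-- Number of tables that have been opened; if this grows quickly, the table cache is too small
SHOW GLOBAL STATUS LIKE 'Opened_tables';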