Is there any reason to run JETCOMP.EXE or any other compaction method on an MDB file if it is relatively small (ie. 0-200MB)?
I can understand if it is approaching a size limit (eg. 1GB for some older MDB formats) then it would make sense, but is there any need otherwise?
Can a failure to run JETCOMP (or similar) bring about data corruption problems?
It is good practice to run a regular compact on an Access database of any size, as it has several benefits. First is the size of the file: not really an issue at around 200 MB, but it can't hurt. Compacting also rebuilds the indexes on all your tables, which is especially important if you have amended any table structures or added new tables. Finally, it forces saved queries to be re-evaluated so they get a fresh, fastest execution plan.
As for a failed compact causing corruption: the first thing the compact does is create a backup copy of the original file, so if it fails you still have the original to revert to, and you shouldn't run into any corruption issues.
Access databases are quite prone to corruption, especially when the database is shared over a network. It won't hurt to run Compact and Repair every now and then just to make sure the DB is in optimum condition.
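Jet's compact can't easily be scripted portably, but the underlying mechanic, reclaiming pages left free by deletes, is common to most file-based databases. As an illustrative analogy only (SQLite here, not Access; the demo table is made up), SQLite's VACUUM does the equivalent of a Jet compact:

```python
import os
import sqlite3
import tempfile

# Create a throwaway database file and fill it with ~4 MB of blobs
path = os.path.join(tempfile.mkdtemp(), "demo.db")
con = sqlite3.connect(path)
con.execute("CREATE TABLE t (data BLOB)")
con.executemany("INSERT INTO t VALUES (?)", [(b"x" * 4096,) for _ in range(1000)])
con.commit()
size_full = os.path.getsize(path)

# Deleting rows marks pages as free but does NOT shrink the file
con.execute("DELETE FROM t")
con.commit()
size_after_delete = os.path.getsize(path)

# VACUUM rewrites the file and reclaims the free pages -- the same idea
# as a Jet/Access compact
con.execute("VACUUM")
size_after_vacuum = os.path.getsize(path)
con.close()

print(size_full, size_after_delete, size_after_vacuum)
```

The middle number stays large even though the table is empty; only the explicit compact step shrinks the file, which is exactly why Jet databases keep growing until you run JETCOMP or Compact and Repair.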
I have two identically configured MySQL 5.6.10 servers and needed to move the data files fast from one to the other. Is this an OK procedure?
Here is what I did:
1) Shut down both servers
2) Moved all the files from one box to the other (DATA is on a separate drive on both machines)
3) Turned the second server on
4) Connected it back to the app server
It took about 5 minutes to move all files (~50GB) and all seems to work. I just wonder if I missed anything?
Thanks much for your feedback.
If both server versions are the same, then I think it's perfectly fine, not just OK; I have done the same many times without any data loss. But this method comes at a cost:
You have to shut down the MySQL server (which is not good if it's a production server).
You have to make sure the permissions on the data (mysql) directory are the same as on the previous server.
You will have to monitor the MySQL error log while starting the second server.
You can use mysqldump, but if you don't want to, MySQL Workbench's migration wizard really takes care of everything.
A much safer and recommended way would be Database Backup And Recovery.
Do a full backup from server1 and restore it to server2. From then on, you can go for differential backups.
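A sketch of that full-backup flow, with the commands built as argument lists so nothing runs until you feed them to subprocess yourself; the `root` user is an assumption, and you would run the dump on server1 and the restore on server2:

```python
def dump_command(user: str, out_file: str) -> list[str]:
    # --single-transaction: consistent snapshot for InnoDB without locking
    # --all-databases: every schema on the server in one file
    return [
        "mysqldump", f"--user={user}", "--password",
        "--single-transaction", "--all-databases",
        f"--result-file={out_file}",
    ]

def restore_command(user: str) -> list[str]:
    # The restore reads the dump on stdin, e.g.:
    #   subprocess.run(restore_command("root"), stdin=open("full_backup.sql"))
    return ["mysql", f"--user={user}", "--password"]

dump = dump_command("root", "full_backup.sql")
restore = restore_command("root")
print(" ".join(dump))
print(" ".join(restore))
```

Unlike copying the raw data directory, this logical dump works across versions and doesn't require shutting the source server down.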
I will be writing a program in Delphi that reads and writes MySQL database tables on a regular basis, roughly every 5 seconds. Is this going to be CPU intensive, or get to a point where the computer freezes completely? I know that nonstop reading and writing to a hard drive can freeze everything on a computer, but I am not really sure about a MySQL database.
Databases are designed to handle many transactions frequently, but it really depends on the queries you are using. A simple SELECT on a couple of rows is unlikely to cause an issue, but large-scale updates targeting many tables, or queries with multiple joins, can slow performance. It all depends on what your queries are.
This all depends on the computer and the complexity of the query.
As David has said, it really does depend on the hardware and queries you are processing.
I would suggest measuring the processing time of each query to determine whether the writes will pile up behind the other queries issued at the 5-second interval.
You can find information on how to measure your MySQL processes here.
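A minimal way to get that per-query timing from client code, sketched here in Python with a stand-in function in place of a real MySQL call (swap in your driver's `cursor.execute` for `fake_query`):

```python
import time

def timed(fn, *args, **kwargs):
    """Run fn and return (result, elapsed_seconds)."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    return result, time.perf_counter() - start

# Stand-in for a real query; replace with e.g. cursor.execute(sql)
def fake_query():
    time.sleep(0.01)
    return "rows"

result, elapsed = timed(fake_query)
print(f"query took {elapsed * 1000:.1f} ms")
```

If the elapsed time regularly approaches your 5-second interval, the writes will start to queue up behind one another; if it stays in the low milliseconds, the polling loop is harmless.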
Morning. We have 8 databases on our live server. I have created a new one on our test/development server. In MySQL Administrator I've backed up this new database to an SQL file; this file is now on the new server. If I use "restore" in MySQL Administrator to create this database, will it affect the other databases that are there, or will they carry on working as normal?
Is there a better way to do this?
The new DB is only a few KB in size; the others contain many years of info and data and are huge. Any help appreciated.
No, it won't. Since your DB is small, it will not affect the other DBs at all. If it were much bigger, the import would most probably slow your server down a bit while it runs, but after the import the databases would work normally. Of course the server will share resources to keep one more database running, and over time that may make some difference in performance, depending on how big this DB grows. But you should be more concerned with hardware capacity than with the database itself.
Of course, I assume that by "database" you mean a new schema on your existing server, not a new server instance.
I have a question: is it preferable to store images directly in the database using datatypes like BLOB or binary, or is the way Paperclip stores images (maintaining a folder structure and keeping only the path in the database) the standard approach?
Storing pictures in the database helps you keep your data synchronized (what if, by any chance, a folder name is changed manually?). It also saves you the small effort of remembering to back up data outside the database itself.
On the other hand, retrieving an image from a database is much slower than from the file system, and database storage space, as I recall, is more expensive on a web server.
That said, it's just a matter of choice. In case you decide to go for the database, there is a gem to help you do that: 'paperclip_database'.
There are a few advantages and disadvantages.
Advantages
Easy to protect referential integrity: no discrepancy between the image path saved in the DB and the real image saved on the filesystem.
The database backup also contains the images. (But you have to back up the application data anyway, so this is not a strong argument.)
Easier rights management
Disadvantages
Performance issues: every image has to be loaded from the database. The more images involved, the more weight this argument should get.
Browser caching / checking whether an image has changed (If-Modified-Since) no longer works out of the box. (But you could implement such a check yourself.)
(Not a complete list.)
My Conclusion
On small sites with less traffic and images it's okay to store images in the database. On bigger sites I wouldn't do that.
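The two strategies side by side, sketched with Python's bundled sqlite3 so the example is self-contained (the trade-off is the same for MySQL/Rails; table and column names are invented for the demo):

```python
import os
import sqlite3
import tempfile

workdir = tempfile.mkdtemp()
con = sqlite3.connect(":memory:")
image_bytes = b"\x89PNG fake image data"

# Option 1: store the image itself as a BLOB -- referential integrity
# for free, but every read goes through the database
con.execute("CREATE TABLE images_blob (id INTEGER PRIMARY KEY, data BLOB)")
con.execute("INSERT INTO images_blob (data) VALUES (?)", (image_bytes,))

# Option 2: store only the path (Paperclip-style) -- fast file serving
# and browser caching, but the row and the file can drift apart if the
# file is moved or deleted behind the database's back
file_path = os.path.join(workdir, "avatar_1.png")
with open(file_path, "wb") as f:
    f.write(image_bytes)
con.execute("CREATE TABLE images_path (id INTEGER PRIMARY KEY, path TEXT)")
con.execute("INSERT INTO images_path (path) VALUES (?)", (file_path,))
con.commit()

blob = con.execute("SELECT data FROM images_blob").fetchone()[0]
path = con.execute("SELECT path FROM images_path").fetchone()[0]
with open(path, "rb") as f:
    assert blob == f.read()  # same bytes, two storage strategies
```

Note that with option 2, deleting the file on disk would leave a dangling path in `images_path` with no error from the database, which is exactly the referential-integrity risk listed above.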
I'm trying to figure out why a MySQL database is abnormally slow on a live server. Deleting a row from a table (with fewer than 100 rows) can take anywhere between 1 and 20 seconds. I've checked the running processes and cannot see anything that would eat all the CPU or memory.
Also the website is not launched yet so there's just me on it.
In these conditions, what could be the reason for the database to be so slow? Is there any way to diagnose this kind of problem?
Are you sure that it's the DB that is slow?
Connect to your server using the command line, launch mysql, and run a few sample queries from there. If it's plenty fast there (which it should be, unless you're swapping like mad or have a gazillion funky triggers), you can safely eliminate SQL as the culprit. If not, there is likely a problem with your schema, your database configuration (does it have enough memory?) or your server (is the RAM broken?).
Other sources of slowness might be latency. Examples:
Time needed to do a DNS lookup (e.g. on occasion, it's faster to connect to 127.0.0.1 than it is to connect to localhost)
Lag due to the DB being located on a separate server (especially if the DB is at the other end of the world)
Time needed to retrieve the results back from the DB, if blobs are involved.
Dreadfully slow NFS:
http://lists.freebsd.org/pipermail/freebsd-fs/2013-April/017125.html
etc.
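The DNS point above is easy to check for yourself; a small sketch timing name resolution with Python's standard library (the numbers will vary with your resolver setup, and 3306 is just MySQL's default port):

```python
import socket
import time

def resolve_time(host: str) -> float:
    """Seconds spent resolving host:port to socket addresses."""
    start = time.perf_counter()
    socket.getaddrinfo(host, 3306)  # 3306 = default MySQL port
    return time.perf_counter() - start

for host in ("127.0.0.1", "localhost"):
    print(f"{host}: {resolve_time(host) * 1000:.2f} ms")
```

A literal IP skips the resolver entirely; if "localhost" is consistently slower, name-lookup latency is part of your picture, and that cost is paid on every new connection your application opens.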