After someone messed up the server, Magento could not connect to the MySQL DB.
On the first try, I used mysql -u <username> -h localhost -p and failed to authenticate.
After a lot of struggle this guy helped me (the solution is in the comments), so I finally succeeded in connecting to the DB using Magento's credentials. But then I couldn't connect remotely; this one didn't help, since --skip-networking disables remote connections, but I finally figured that out as well (I no longer remember what I did: either I changed something in my.cnf or in /etc/hosts).
So now I can connect with Magento username/password (configured in configuration.php) both locally and remotely.
Still, Magento prints errors to the screen saying that it can't connect to MySQL.
I checked both local.xml and config.xml (under <Magento root>/app/etc) and both seem to be configured correctly.
I started thinking about installing the whole thing from scratch. The problem is that there isn't any good backup and I'm not sure what data I'd lose (if any) by doing that, but if I have to, I'll back up the files + DB and go for it...
Any ideas?
UPDATE
After endless digging, it turned out there were other XML files in the same directory as local.xml and config.xml. Removing these files (which had been created as backups but were left with the .xml extension) solved the problem.
Conclusion: if you back up XML files, save the backup as file.xml.backup so it won't be treated the same as a file with an .xml extension!
If you're thinking about reinstalling the whole thing, may I first advise doing that on a different server than the messed-up one, just to keep the data on the old one in case things turn bad. You may also do it on the same server, but with a different vhost, home folder and MySQL database.
Here is the procedure I use for Magento project migrations, imports and other moves from one server to another.
This requires that you can access mysql + mysqldump from the shell.
I use this procedure regularly on Debian-based distros with LAMP.
On the source server
1. Clean the DB
This is necessary if you consider your DB too heavy to be downloaded onto your new destination server.
Also, please make sure that you really know which tables you are truncating. I can't say precisely which ones, as this depends on your Magento version.
Roughly: truncate the index tables plus core_url_rewrite, the log tables, cron_schedule, the flat catalog tables, the dataflow batch tables and profile history, and the reports aggregation tables.
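For example, from the mysql shell (a minimal sketch; these table names assume Magento 1.x, so check them against your version before truncating):
mysql> SET FOREIGN_KEY_CHECKS = 0;
mysql> TRUNCATE log_visitor;
mysql> TRUNCATE log_url;
mysql> TRUNCATE core_url_rewrite;
mysql> TRUNCATE cron_schedule;
mysql> TRUNCATE dataflow_batch_export;
mysql> TRUNCATE dataflow_batch_import;
mysql> SET FOREIGN_KEY_CHECKS = 1;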
2. Backup the DB
mysqldump -h [host] -u [user] -p'[password]' [dbname] > magento.sql
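If all your tables are InnoDB, you can add --single-transaction to get a consistent dump without locking the tables while the site is still live:
mysqldump --single-transaction -h [host] -u [user] -p'[password]' [dbname] > magento.sql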
3. Clean your Magento filesystem
From your Magento root folder:
rm -rf var/session/* && rm -rf var/cache/* && rm -rf var/log/*
4. Archive your Magento filesystem
From your Magento root folder:
tar -zcvf magento.tar.gz .
On the destination server
Retrieve your magento.sql and magento.tar.gz any way you like (wget, copy/paste from an SSH GUI client...) and put them in your new Magento root directory.
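For example, with scp run from the destination server (the source paths here are placeholders):
scp [user]@[source_host]:/path/to/magento.sql .
scp [user]@[source_host]:/path/to/magento.tar.gz .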
5. Import your DB
mysql -h [your_host] -u [user] -p'[password]' [dbname]
That will open the mysql shell on your new DB:
mysql> SET FOREIGN_KEY_CHECKS = 0;
mysql> source /full/path/to/magento.sql
...
mysql> SET FOREIGN_KEY_CHECKS = 1;
6. Extract your magento.tar.gz
From your new Magento root directory
tar -zxvf magento.tar.gz
You should now be able to see your site. Some permission changes and some fine-tuning of app/etc/local.xml may be needed to make it fit your destination server's MySQL configuration.
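As a starting point for the permissions, a common Magento 1 setup looks like the sketch below; www-data is an assumption for Debian with Apache, so substitute your actual web server user:
# run from the Magento root; www-data is assumed, adjust to your web server user
chown -R www-data:www-data .
find . -type d -exec chmod 755 {} \;
find . -type f -exec chmod 644 {} \;
chmod -R 775 var media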
Try flushing the cache from the backend, or delete var/cache/*.
I am attempting to back up a MySQL database on a Linux server before I install some upgrades to the software (Omeka) that uses the database.
The command supplied by the Omeka documentation for this is the following:
mysqldump -h localhost -u username -p omeka_db_name > omeka_db_backup.sql
However, when I run this, I get the ever-so-helpfully vague message "permission denied." It does this even if I run the command as sudo, and no matter what directory I try to save the backup file to. It doesn't prompt me for a MySQL password when I run mysqldump, but it does when I run the "mysql" command, and it accepts the password I put in, so I know the issue isn't that I'm using the wrong credentials.
I cannot navigate to the MySQL folder directly in the shell, and when I use WinSCP to access the server, the MySQL folder is listed as owned by "mysql" and not by "root." So I'm assuming that I don't have permission to copy anything from this folder and that this is my problem. I don't want to willy-nilly assign ownership of the MySQL folder to root, because I'm afraid it might break MySQL's ability to read and write to this folder.
All I want to do is copy the database files somewhere as a backup. Heck, I'll copy the whole MySQL folder someplace if I have to. How can I do that without breaking MySQL?
Root has permissions for everything. There may be some additional safeguards, depending on the system (some security software limits root's permissions).
You can just use:
mysqldump -h localhost -u username -p omeka_db_name > /path/to/some/other/directory/omeka_db_backup.sql
And put the backup in a directory you can normally access. If you use mysqldump, you don't need to write into the MySQL data directory at all.
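If you are not sure whether a target directory is writable by your shell user, a quick throwaway test before dumping (test_write is just a scratch file name):
touch /path/to/some/other/directory/test_write && rm /path/to/some/other/directory/test_write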
I've recently upgraded a server to Debian 9 and MySQL to the latest version. I have a simple backup script that I run before performing any work on a production site, but this time, when running my script, I encounter the following:
mysqldump: unknown variable 'local-infile=0'
Here is my script. What's going on?
#!/bin/bash
# [skipping commentary]
SITE=prod
# Set the directory that the Drupal root is IN, no trailing slashes
DROOT=[website_root]
# Set the directory for storing backups, no trailing slashes
BUD=/$DROOT/notes/backups
# Don't edit; End of defining variables
echo Doing a full back up...
echo Prepare to enter MySQL password...
# tar -czf $BUD/$SITE-files-$(date +'%Y%m%d%H%M%S').tgz $DROOT/docroot
mysqldump -u mysql_user -p drupal > $BUD/$SITE-drupal-$(date +'%Y%m%d%H%M%S').sql
mysqldump -u mysql_user -p civicrm > $BUD/$SITE-civicrm-$(date +'%Y%m%d%H%M%S').sql
ls -lh $BUD
pwd
echo Finished with backups...
MySQL version 10.1.37-MariaDB-0+deb9u1 Debian 9.6
Edit: When I SSH in and run mysqldump with the correct permissions I get the same issue. Weirdest thing: the cron job that runs a similar process backs up my databases as ordered.
The best way to solve this is simply to rename the variable to:
loose-local-infile=1
This will allow mysqldump to merely throw a warning, rather than a fatal error.
The suggestion to comment out the variable is not an option if you want LOAD DATA INFILE functionality out of the box, and MySQL 8+ for security reasons requires you to set this variable for both server (mysqld) and client. It is the [client] variable grouping in your config that chokes mysqldump if you don't add the "loose-" prefix to local-infile.
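A minimal sketch of the relevant config section (the file path varies; on Debian it is typically /etc/mysql/my.cnf or a file under /etc/mysql/conf.d/):
[client]
loose-local-infile=1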
It seems the new version you installed was compiled without support for the local-infile parameter. And because the package management system (usually) keeps your current configuration file, you can try to find this parameter in your my.cnf file and comment it out.
This parameter manages the LOAD DATA LOCAL functionality, but it seems to have some potential security issues (more here).
I've been migrating DBs (a few GB in size) in MySQL Workbench 6.1, from one MySQL server to another. Never having done this before, I thought it was 99% reliable. Instead, 2 out of 3 tries have failed.
My DBs don't have complex features (triggers, stored procedures and functions, ...). The errors, though, are difficult to interpret, almost always about tables failing to export, reason unknown. There might occasionally be a duplicated key index in the source, but that shouldn't prevent an export from happening?
I've tried all the different methods available in the interface:
1) Server > Data Export > Data Import
2) Migration wizard
3) Schema transfer wizard
4) Reverse engineer
but there was no real difference.
Also, all these methods seem to be variants of the same thing. Do these menu options rely on the same procedure internally? How different are they really?
My questions are generic:
1) Is there a foolproof method that is relaxed about errors? E.g., is mysqldbcopy from MySQL Utilities much better than the Workbench wizards?
2) Does the configuration of the MySQL wizards make any difference (e.g. a checkbox that causes errors by being too demanding when the source DB has a problem)? I just want to transfer the DB; I'm not after perfection on the target server. I've switched SSL=NO, but it's still not working.
3) What is the single most important cause of errors in migration, e.g. server overload, insufficient memory, table structure?
Thanks in advance,
There might be occasionally a duplicated key index in source, but that shouldn't prevent an export from happening?
Yeah, it shouldn't prevent the export operation.
I've tried all the different methods available in the interface:
Any of the interfaces you used might have a timeout configured, so the operation doesn't fully execute because your database is BIG.
So how do you migrate a MySQL database from one server to another?
To do it properly, I suggest you use the command line, like this:
Step 1: Create a backup file on the old server
mysqldump -u [[user_name]] -p[[password]] [[db_name]] > db_backup.sql
Step 2: Transfer the backup file to the new server.
Step 3: Import the backup file on the new server.
mysql -u [[user_name]] -p[[password]] [[db_name]] < db_backup.sql
Pro tip:
You can combine steps 1 & 2 if you have remote MySQL access enabled on the old server. Just execute this command on the new server and it will download the backup file into the current directory of the new server.
mysqldump -h [[xxx.xx.xxx.xxx]] -u [[user_name]] -p[[password]] [[db_name]] > db_backup.sql
where [[xxx.xx.xxx.xxx]] represents the IP address/hostname of the old server.
Extra Note:
Please note that there is no space between -p and [[password]]. You can also omit [[password]] if you think it's a security issue to include the password in the command; you will then be prompted for it interactively.
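Alternatively, if the two servers can reach each other over SSH, you can skip the intermediate file entirely and pipe the dump straight into the new server (all names are placeholders, as above):
mysqldump -u [[user_name]] -p[[password]] [[db_name]] | ssh [[user]]@[[new_server_ip]] "mysql -u [[user_name]] -p[[password]] [[db_name]]"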
If you have access to your terminal, you can try using mysqldump, and you could also try the Percona XtraBackup tool.
MySQL dump (if your DB is too large, I suggest you run these inside a screen session):
Back up all databases: mysqldump -u root -pxxxx --all-databases > all_db_backup.sql
Back up specific tables: mysqldump -u root -pxxxx DatabaseName table1 table2 > tables.sql
Back up individual databases: mysqldump -u root -pxxx --databases DB1 DB2 > Only_DB.sql
To import: sync all the files to the other server and import as shown below.
mysql -u root -pxxxx < all_db_backup.sql (use screen for large databases)
Individual DB: mysql -u root -pxxx DBName < DB.sql
(Note: before you import, make sure your backup file already contains CREATE DATABASE IF NOT EXISTS statements; if not, create those databases before importing.)
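A dump taken with --databases (the Only_DB.sql example above) already includes CREATE DATABASE IF NOT EXISTS statements; for a plain table-level dump you can create the database up front, for example:
mysql -u root -pxxxx -e "CREATE DATABASE IF NOT EXISTS DBName"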
I have a local copy of Bitbucket Server on my machine and I'm running tests before putting it on a server.
I'm trying to use the Bitbucket DIY Backup, but every time I run the backup it completely deletes the directory the database should be backed up into and then tells me it cannot find the directory.
It backs up the home and archive directories as it should, with no issues, but it won't work for the database.
Here is the line used for creating the dump that seems to be causing the directory to be deleted:
mysqldump -username=root -password= --databases=bitbucket_server > ../bitbucket-backups-diy/bitbucket-database/bitbucket_server.sql
I have tested the connection settings from the line above with the following line, and I get a list of the tables in the database, as I would expect:
mysql -D bitbucket_server -u root -p -e "show tables"
Any help would be greatly appreciated, thanks in advance.
Sam
I have stopped the bash script from deleting the directory, and now it stores the dump in there.
Thanks to @BerndBuffen, I altered the way the dump accesses my database. Instead of using:
mysqldump -username=root -password= --databases=bitbucket_server > ../bitbucket-backups-diy/bitbucket-database/bitbucket_server.sql
I now use:
mysqldump -uroot bitbucket_server > ../bitbucket-backups-diy/bitbucket-database/bitbucket_server.sql
You also need to add the following line above the mysqldump call to create the folder:
mkdir -p ../bitbucket-backups-diy/bitbucket-database
Because my root user doesn't have a password on my local database, I don't need to list a password. Note also that the original line used single-dash long options (-username, -password), which mysqldump does not parse as intended; it has to be -u root (or --user=root). When I put the backup onto a live server I will just need to add -p back into the script and it should work.
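In other words, the live-server version of the dump line should simply be:
mysqldump -u root -p bitbucket_server > ../bitbucket-backups-diy/bitbucket-database/bitbucket_server.sql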
Hopefully this can help anyone else having this problem.
Sam
I am copying database backup files from xampp/mysql/data on Windows to /var/lib/mysql on Linux, but this only creates an empty database in phpMyAdmin on Linux.
Please, someone help me solve this issue; these backup files are all I have.
The best way is:
Step 1: Take a backup on Windows with mysqldump:
mysqldump -uroot -proot123 -A > backup.sql
Step 2: Move this backup to Linux; you can use the WinSCP tool for this.
Step 3: Restore this backup on the Linux machine:
mysql -uroot -proot123 < backup.sql
Modification:
It seems your DB engine is MyISAM and you just copied the files/folders from Windows to Linux, so set the ownership as below:
chown -R mysql.mysql /var/lib/mysql
First, create a .my.cnf file containing the MySQL root password in your user's home folder on Linux.
On Windows there is a .my.ini or something similar which serves the same purpose. That way you will not have to re-enter your passwords a lot during the next steps, which you are very likely to repeat several times until you get them right, I fear. :)
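A minimal ~/.my.cnf looks like this (restrict it with chmod 600 ~/.my.cnf so other users cannot read the password):
[client]
user=root
password=your_root_password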
Since unix/linux and Windows save files differently, you might very likely run into errors during a simple copy-and-restore process, depending on how you copy the files.
Your best bet is likely copying the original MySQL folders to another Windows machine (with an installed MySQL instance, of course) and placing them such that MySQL can find them. I don't know what else you might need if the databases are not found instantly, since I have never had to do this before and have no test setup here to check this case.
When the databases are found by MySQL on the WINDOWS server, look up from the mysql CLI prompt which encoding etc. the DB uses:
SELECT SCHEMA_NAME 'database', default_character_set_name 'charset', DEFAULT_COLLATION_NAME 'collation' FROM information_schema.SCHEMATA;
Then create a new database on the LINUX server with the same name and the same encodings, from the MySQL CLI:
create database <db-name> character set <charset> collate <collation>;
Then, on the WINDOWS server in a CMD window, do a mysqldump, which looks the same on Windows as on Linux:
mysqldump <db-name> > <db-name>.sql
Then copy the dump over to the LINUX server and replay it:
mysql <db-name> < <db-name>.sql
Afterwards you will have to recreate a user. If you know which user and password your web app used to access the database, create a new user with those credentials and grant them full access to your database.
If you no longer know the credentials, create an arbitrary user and then change the database credentials in the config file of your web application.
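A sketch of recreating such a user from the MySQL CLI (the user name and password are placeholders):
mysql> CREATE USER 'webapp_user'@'localhost' IDENTIFIED BY 'new_password';
mysql> GRANT ALL PRIVILEGES ON `<db-name>`.* TO 'webapp_user'@'localhost';
mysql> FLUSH PRIVILEGES;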
If you have problems, check the unix file permissions of the files you copied, so that MySQL can access them.
Good luck, mate.