Adding an additional MySQL data folder to the server (Ubuntu)

Here's the deal: I removed MySQL 5.0.xx and neglected to dump a data folder which is on a mounted drive.
I now have MySQL 5.6.5 installed and running, and the data folder works fine in the default directory. I attempted to switch the data dir in the my.cnf file, but that results in the error "The server quit without updating PID file."
What I would like to do is keep my.cnf pointing at the default data directory while also adding the external database to MySQL. This is how I had it set up in MySQL 5.0.xx. The only problem is that I created the database via a GUI and specified that the data would actually be stored on the mounted drive. I can't quite figure out how to do this via the command line, and I have found no good sources of documentation or examples.

You probably created a symbolic link to the directory on the mounted drive. This is done with the ln command:
cd /var/lib/mysql
ln -s /mounted_drive/data_directory/db_name db_name   # link the external DB directory into the active data dir
On Ubuntu the MySQL data folder resides in /var/lib/mysql.
Generally you can set the datadir variable in my.cnf (see http://dev.mysql.com/doc/refman/5.6/en/server-options.html#option_mysqld_datadir) to change the default data directory.
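For example, a minimal my.cnf sketch (the path /mnt/data/mysql is illustrative; on Ubuntu the file usually lives at /etc/mysql/my.cnf):
[mysqld]
datadir = /mnt/data/mysql
Note that on Ubuntu, AppArmor must also be allowed to use the new path (see the temp-dir question below), or mysqld may fail to start with errors much like the "server quit without updating PID file" one above.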

Related

MySQL backup from a mounted root drive

I had a problem with my hdd. There is a new system running in its place, and I found that I can mount and access the / of my old hdd (it had a Debian Linux distribution). However, I forgot to back up some important data in the DB tables, and I was wondering if there is any way to execute a mysql server command from the MySQL server installation on the mounted drive?
If your hdd contains a complete OS, you can simply mount it and chroot into it. In the chrooted environment, start your MySQL server and take a data backup with the mysqldump command.
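A minimal sketch of this chroot approach, assuming the old root is mounted at /mnt/oldroot (the device name and paths are illustrative):
mkdir -p /mnt/oldroot
mount /dev/sdb1 /mnt/oldroot              # the old / partition
mount --bind /dev /mnt/oldroot/dev        # give the chroot access to devices
mount --bind /proc /mnt/oldroot/proc
chroot /mnt/oldroot /bin/bash
/etc/init.d/mysql start                   # inside the chroot: start the old server
mysqldump -u root -p --all-databases > /root/all-databases.sql
The dump file ends up inside the chroot, i.e. at /mnt/oldroot/root/all-databases.sql from the host's point of view.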
Or you can simply install mysql-server on your new system, change the datadir option in /etc/my.cnf to point at the mounted partition, and back up the data with the mysqldump command. You can then revert the my.cnf change.

MySQL won't start after changing temp dir on Ubuntu 12.04

I run Ubuntu 12.04.
I am trying to move the temp dir, as /tmp has filled up; somehow I set it to only 1 MB, which is obviously not enough for a large MySQL database.
What I need to do is move it elsewhere, so I looked online for a solution and found an article which seems to make sense.
In the my.cnf file at /etc/mysql/my.cnf I changed the tmpdir directive to /mysqltmp. I created the directory as root, then chmod 777 on that dir. I rebooted and the MySQL server won't start (it was starting fine just before this).
The error log says:
/usr/sbin/mysqld: Can't create/write to file '/mysqltmp/ibqADloJ'
It looks like a permissions error; however, the directory has full permissions, so why is this a problem?
Probably AppArmor is getting in your way. Have a look at /etc/apparmor.d/usr.sbin.mysqld and make your new temp folder writable by the mysqld process (or configure mysqld to write its temporary data to a directory it already has write permission for).
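A sketch of what that could look like, assuming the stock Ubuntu profile layout (the /mysqltmp path comes from the question):
# /etc/apparmor.d/local/usr.sbin.mysqld
/mysqltmp/ r,
/mysqltmp/** rwk,
Then reload the profile:
sudo service apparmor reload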

Magento can't connect to MySQL

After someone messed up the server, Magento could not connect to the MySQL DB.
On the first try, I used mysql -u <username> -h localhost -p and failed to authenticate.
After a lot of struggle this guy helped me (the solution is in the comments), so I finally succeeded in connecting to the DB using Magento's credentials. But then I couldn't connect remotely; this one didn't help, since --skip-networking disables remote connections, but I finally figured that out as well (I don't remember now what I did; I either changed something in my.cnf or in /etc/hosts).
So now I can connect with the Magento username/password (configured in configuration.php) both locally and remotely.
Still, Magento prints errors to the screen saying it can't connect to MySQL.
I checked both local.xml and config.xml (under <Magento root>/app/etc) and both seem to be configured correctly.
I started thinking about installing the whole thing from scratch. The problem is that there isn't any good backup, and I'm not sure what data I would lose (if any) by doing that; but if I have to, I'll back up the files + DB and go for it...
Any ideas?
UPDATE
After endless digging, it turned out there were other XML files in the same directory as local.xml and config.xml. Removing these files (which had been created as backups but were left with the .xml extension) solved the problem.
Conclusion: if you back up XML files, save the backup as file.xml.backup so it won't be treated the same as a file with an .xml extension!
If you're thinking about reinstalling the whole thing, may I advise, as a foreword, doing that on a different server than the messed-up one, just to keep the data on the old one in case things turn bad. You may also want to do it on the same server but with a different vhost, home folder and MySQL database.
Here is the procedure I use for Magento project migrations, imports and other moves from one server to another.
It requires that you can access mysql and mysqldump from the shell.
This is a procedure I use regularly on Debian-based distros with LAMP.
On the source server
1. Clean the DB
This is necessary if you consider your DB too heavy to be downloaded from your new destination server.
Also, please make sure that you really know which tables you are truncating; I cannot say precisely which, as this depends on your Magento version.
Roughly: truncate the index tables plus core_url_rewrite, the log tables, cron_schedule, the flat catalog tables, the dataflow batch tables and profile history, and the report aggregation tables (example sketched below).
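As a hedged example (table names vary between Magento versions; verify with SHOW TABLES first), the cleanup might look like this from the mysql shell:
mysql> SET FOREIGN_KEY_CHECKS = 0;
mysql> TRUNCATE TABLE log_visitor;
mysql> TRUNCATE TABLE log_url;
mysql> TRUNCATE TABLE cron_schedule;
mysql> TRUNCATE TABLE core_url_rewrite;
mysql> SET FOREIGN_KEY_CHECKS = 1;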
2. Backup the DB
mysqldump -h [host] -u [user] -p'[password]' [dbname] > magento.sql
3. Clean your Magento filesystem
From your Magento root folder:
rm -rf var/session/* && rm -rf var/cache/* && rm -rf var/log/*
4. Archive your Magento filesystem
From your Magento root folder:
tar -zcvf magento.tar.gz .
On the destination server
Retrieve your magento.sql and magento.tar.gz any way you like (wget, copy/paste from an SSH GUI client...) and put them in your new Magento root directory.
5. Import your DB
mysql -h [your_host] -u [user] -p'[password]' [dbname]
That will open the mysql shell on your new DB
mysql> SET FOREIGN_KEY_CHECKS = 0;
mysql> source /full/path/to/magento.sql
...
mysql> SET FOREIGN_KEY_CHECKS = 1;
6. Extract your magento.tar.gz
From your new Magento root directory
tar -zxvf magento.tar.gz
You should now be able to see your site. Some permission modifications and a fine-tuning of app/etc/local.xml (sketched below) may be needed to make it fit your destination server's MySQL configuration.
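For reference, the DB connection block in a Magento 1 app/etc/local.xml looks roughly like this (all values are illustrative and must match your destination server):
<resources>
    <default_setup>
        <connection>
            <host><![CDATA[localhost]]></host>
            <username><![CDATA[magento_user]]></username>
            <password><![CDATA[secret]]></password>
            <dbname><![CDATA[magento_db]]></dbname>
        </connection>
    </default_setup>
</resources>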
Try flushing the cache from the backend, or delete var/cache/*.

How to import a large SQL file in phpMyAdmin

I want to import an SQL file of approx. 12 MB, but it's causing problems while loading. Is there any way to upload it without splitting the SQL file?
Try importing it from the MySQL console, as appropriate for your OS:
mysql -u {DB-USER-NAME} -p {DB-NAME} < {db.file.sql path}
or if it's on a remote server use the -h flag to specify the host.
mysql -u {DB-USER-NAME} -h {MySQL-SERVER-HOST-NAME} -p {DB-NAME} < {db.file.sql path}
Three things you have to do:
In the php.ini of your PHP installation (note: find the right php.ini to edit, depending on whether you want it for CLI, Apache, or nginx), set
post_max_size=500M
upload_max_filesize=500M
memory_limit=900M
or other values to suit.
Restart/reload Apache if you have Apache installed, or php-fpm if you use nginx (example commands after this list).
Remote server? Increase max_execution_time as well, as it will take time to upload the file.
NGINX installation?
You will have to add client_max_body_size 912M; to the http {...} block in /etc/nginx/nginx.conf.
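A sketch of the corresponding restart commands; the service names are assumptions and vary by distro and PHP version:
sudo service apache2 restart     # if you use Apache
sudo service php5-fpm restart    # if you use nginx + php-fpm (may be php7.x-fpm etc.)
sudo service nginx reload        # after adding client_max_body_size to nginx.conf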
Edit the config.inc.php file located in the phpmyadmin directory. In my case it is located at C:\wamp\apps\phpmyadmin3.2.0.1\config.inc.php.
Find the line with $cfg['UploadDir'] on it and update it to $cfg['UploadDir'] = 'upload';
Then, create a directory called 'upload' within the phpmyadmin directory (for me, at C:\wamp\apps\phpmyadmin3.2.0.1\upload\).
Then place the large SQL file that you are trying to import into the new upload directory. Now when you go to the DB import page within the phpMyAdmin console, you will notice a drop-down that wasn't there before: it contains all of the SQL files in the upload directory you have just created. You can select one of them and begin the import.
If you’re not using WAMP on Windows, then I’m sure you’ll be able to adapt this to your environment without too much trouble.
Reference : http://daipratt.co.uk/importing-large-files-into-mysql-with-phpmyadmin/comment-page-4/
Solution for LINUX USERS (run with sudo)
Create 'upload' and 'save' directories:
mkdir /etc/phpmyadmin/upload
mkdir /etc/phpmyadmin/save
chmod a+w /etc/phpmyadmin/upload
chmod a+w /etc/phpmyadmin/save
Then edit phpmyadmin's config file:
gedit /etc/phpmyadmin/config.inc.php
Finally add absolute path for both 'upload' and 'save' directories:
$cfg['UploadDir'] = '/etc/phpmyadmin/upload';
$cfg['SaveDir'] = '/etc/phpmyadmin/save';
Now just drop your files in the /etc/phpmyadmin/upload folder and you'll be able to select them from phpMyAdmin.
Hope this helps.
Just one line and you are done (make sure the mysql command is available globally, or go to the MySQL installation folder and enter its bin folder):
mysql -u database_user_name -p -D database_name < complete_file_path_with_file_name_and_extension
Here
-u stands for user
-p stands for password
-D stands for database
Don't forget to add the < sign after the database name.
The complete file path with name and extension can look like
c:\folder_name\"folder name"\sql_file.sql
If your folder or file names contain spaces, wrap them in double quotes.
Tip and note: you can write your password directly after -p, but this is not recommended, because it will be visible to anyone watching your screen at the time; if you leave it off, you will be prompted for it when you press Enter.
Create a zip or tar file and upload it in phpMyAdmin. That's it!
I was able to import a large .sql file by having the following configuration in httpd.conf file:
Alias /phpmyadmin "C:/xampp/phpMyAdmin/"
<Directory "C:/xampp/phpMyAdmin">
AllowOverride AuthConfig
Require all granted
php_admin_value upload_max_filesize 128M
php_admin_value post_max_size 128M
php_admin_value max_execution_time 360
php_admin_value max_input_time 360
</Directory>
I don't understand why nobody mentions the easiest way: just split the large file with http://www.rusiczki.net/2007/01/24/sql-dump-file-splitter/
and then execute the separately generated files via MySQL admin, starting with the one containing the structure.
OK, you use phpMyAdmin, but sometimes the best way is through the terminal:
Connect to the database: mysql -h localhost -u root -p (switch localhost and root for your database location and user)
Start the import from the dump: \. /path/to/your/file.sql
Go grab a coffee and brag about yourself because you used the terminal.
And that's it. Just remember that if you are on a remote server, you must upload the .sql file to some folder first.
phpMyAdmin also accepts compressed files in gzip format, so you can gzip the file (use 7-Zip if you don't have a compressor) and upload the gzipped file. Since it's a text file, it will have a good compression ratio.
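For example (the file name is illustrative):
gzip -c dump.sql > dump.sql.gz    # -c writes to stdout, keeping the original file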
You will have to edit the php.ini file: change upload_max_filesize and post_max_size to accommodate your file size.
Try running phpinfo() to see their current values. If you are not at liberty to change the php.ini file directly, try ini_set().
If that is also not an option, you might like to give BigDump a try.
One solution is to use the command line;
mysql -h yourhostname -u username -p databasename < yoursqlfile.sql
Just ensure the path to the SQL file to import is stated explicitly.
In my case, I used this:
mysql -h localhost -u root -p databasename < /home/ejalee/dumps/mysqlfile.sql
Voilà! You are good to go.
For that you will have to edit the php.ini file. If you are using an Ubuntu server, the link "Upload large file in phpMyAdmin" might help you.
In MAMP, you can load huge files by:
creating a new folder in this directory:
/MAMP/bin/phpMyAdmin/"folderName"
and then editing "/MAMP/bin/phpMyAdmin/config.inc.php" line 531:
$cfg['UploadDir'] = 'folderName';
Copy your .sql or .csv files into this folder.
Now you will have another option in phpMyAdmin: "Select from the web server upload directory folderName/". You can select your file and import it.
You can load any file now!
I stumbled on an article and this worked best for me
Open up the config.inc.php file within the phpmyadmin dir with your favorite code editor. In your local MAMP environment, it should be located here:
Hard Drive » Applications » MAMP » bin » config.inc.php
Do a search for the phrase $cfg['UploadDir'] – it's going to look like this:
$cfg['UploadDir'] = '';
Change it to look like this:
$cfg['UploadDir'] = 'upload';
Then, within that phpmyadmin dir, create a new folder & name it upload.
Take that large .sql file that you’re trying to import, and put it in that new upload folder.
Now, the next time you go to import a database into phpMyAdmin, you’ll see a new dropdown field right below the standard browse area in your “File to Import” section.
The answer for those with shared hosting: best to use this little script, which I just used to import a 300 MB DB file to my server. The script is called BigDump.
It provides a script to import large DBs on resource-limited servers.
The best way to upload a large file is not to use phpMyAdmin, because phpMyAdmin first uploads the file using a PHP upload class and then executes the SQL, which is when the timeouts usually happen.
The best way is:
Enter the wamp folder > bin > mysql > bin directory, then write this line:
mysql -u root -p listnames < latestdb.sql
Here listnames is the database name (please create the empty database first), and latestdb.sql is the SQL file holding your data.
One important thing: if your database file contains Unicode data, you must open your latestdb.sql file and add one line before all the others:
SET NAMES utf8;
Then run the command above again.
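Alternatively (my suggestion, not part of the original answer), you can pass the character set on the command line instead of editing the file:
mysql --default-character-set=utf8 -u root -p listnames < latestdb.sql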
I have made a PHP script which is designed to import large database dumps generated by phpMyAdmin. It's called PETMI and you can download it here: [project page] [gitlab page]. It has been tested with a 1 GB database.
First, copy your MySQL dump file to local disk C:\ for easy file location, and open your command prompt.
Then navigate to the MySQL bin folder; e.g. if you are using XAMPP, run or type the commands below.
cd \        takes you to the root of the drive
cd xampp    takes you to the xampp folder
cd mysql    takes you to the mysql folder
cd bin      takes you to the bin folder
Then run the command below:
mysql -u dbusername -p -D dbname < c:\yourdbtoupload.sql
This will prompt for a password; enter your password, or just press Enter if you are not using one.
Change your server settings to allow file uploads larger than 12 MB and you should be fine.
Usually the server settings allow only 5 to 8 MB for file uploads.
Open your sql file in a text editor (like Notepad)
Select All -> Copy
Go to phpMyAdmin, select your database and go to SQL tab
Paste the content you have copied in clipboard
It might pop up a JavaScript error; ignore it
Execute
First find this location and open the php.ini file in Notepad or Sublime Text:
/Applications/XAMPP/xamppfiles/etc/php.ini
Then find post_max_size, upload_max_filesize and memory_limit in the php.ini text and change the sizes as below:
post_max_size=450M
upload_max_filesize=450M
memory_limit=700M
Note: before doing this, stop phpMyAdmin in XAMPP or WAMP, apply the above changes, and then start everything again (XAMPP or manager-osx); it will work perfectly. You will then be able to upload large files in phpMyAdmin. Thanks
For Windows, first open XAMPP, right-click Config, and open the php.ini file. Then update these values in php.ini:
post_max_size = 800M
upload_max_filesize = 800M
max_execution_time = 6000
max_input_time = 6000
memory_limit = 1000M

Can't find file: './ci/users.frm' (errno: 13)

I installed LAMP on Ubuntu 11.04 and copied a project over from Windows:
the PHP directory (/ci/) to /var/www/
and
the MySQL project directory (/ci/) to /var/lib/mysql/.
Full text of the error that I get:
A Database Error Occurred
Error Number: 1017
Can't find file: './ci/users.frm' (errno: 13)
SELECT COUNT(*) AS `numrows` FROM (`users`) WHERE `email` = 'admin@localsite.com'
I googled and found that it's a permissions problem, but I don't know what to do next.
Log from /var/log/mysql/error.log:
110622 19:27:21 [ERROR] /usr/sbin/mysqld: Can't find file: './ci/users.frm' (errno: 13)
Permissions problem meaning the permissions on the file: MySQL probably can't read it. Just change the owner and group to mysql and it should work.
chown mysql:mysql /var/lib/mysql/ci/*
As well as the files being readable by the MySQL user, the directory containing the .MYI files needs to be readable, writable and executable by the MySQL user. On my system this was achieved by:
chown -R mysql:mysql /var/lib/mysql/dbname   # mysql owns the directory and everything in it
chmod -R 660 /var/lib/mysql/dbname           # files: read/write for owner and group
chown mysql:mysql /var/lib/mysql/dbname      # re-assert ownership on the directory itself
chmod 700 /var/lib/mysql/dbname              # directory: read/write/execute for the mysql user only
This is an old topic, but I didn't find anything else that worked for me. So, for anyone running into the same problem where the above file-permission suggestions still don't fix the "Can't find file" errors, here's what worked for me and my particular issue.
I was doing a rescue from one CentOS server to another using a recovery image, which had a different OS than the original, and the original filesystem was mounted on a temporary dir. While I had access to the original /var/lib/mysql files, I didn't have access to the mysql admin or dump utilities, which require the server to be running anyway (they're not automatically included when doing a recovery from a read-only image). Backups were a week old and I wanted to see if I could recover the most recent data possible.
Changing the standard file permissions on these still kept giving "Can't find file" for nearly all of the database tables, even though I could see that the tables were there. It turned out to be related to the SELinux context on the files I had moved over using rsync. All of the rescued dirs and files looked like this:
$ ls -alZ
drwx------. mysql mysql unconfined_u:object_r:admin_home_t:s0 somedb_dev
drwx------. mysql mysql unconfined_u:object_r:admin_home_t:s0 somedb_local
drwx------. mysql mysql unconfined_u:object_r:admin_home_t:s0 somedb_production
drwx------. mysql mysql unconfined_u:object_r:admin_home_t:s0 somedb_staging
The -Z flag shows the security context of files and dirs. Notice the unconfined_u and admin_home_t context; these are different from what they should be:
drwx------. mysql mysql system_u:object_r:mysqld_db_t:s0 mysql
Changing these database files to the proper context with the chcon command solved the problem and gave mysqld proper access:
$ chcon -R -u system_u -t mysqld_db_t somedb_*
This changed all my custom databases to the proper SELinux context, and the files could now be recognized by mysqld. I recommend running the chcon command while the database server is not active, just as a precaution.
Hope that helps someone running into the same problem I had! Of course, you can turn off SELinux temporarily to test whether this is in fact the issue, but I didn't want turning off SELinux to be a permanent solution.
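As an alternative to chcon (my suggestion, not part of the original answer): if the files sit under the standard /var/lib/mysql path, restorecon can reapply the default contexts recorded in the SELinux policy:
restorecon -Rv /var/lib/mysql    # relabel recursively, verbosely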
I followed these steps:
Stop the mysql service.
Modify the my.cnf datadir line to point at my custom location.
Delete all the ibdata* and ib_logfile* files in the new custom location.
Change the permissions of the entire folder:
chown mysql:mysql -R /custom_location/mysql/*
Start the mysql service again.
It works!!
Thanks
This error also occurs if the table is not in the database; so if you changed the permissions of the directory and are still running into issues, check your database and make sure the table is there.
So let's say you got an error like the OP:
Can't find file: './ci/users.frm'
ci is the database name
users is the table name
So in this case, if you changed permissions and still had the issue, you would verify that the users table is in the ci database (a quick check is sketched below).
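From the mysql shell (ci and users are the names from the OP's error):
mysql> USE ci;
mysql> SHOW TABLES LIKE 'users';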
@Brent Baisley: it does work in XAMPP for Linux, but the location is different.
I upgraded the kernel today to fix the new Linux “Dirty Cow” vulnerability (CVE-2016-5195). After the reboot I got the .frm permission error too.
So, if you get the following error:
Can't find file: 'yourtablename.frm' (errno: 13 - Permission denied) SQL query :...
You can do:
chown mysql:mysql /opt/lampp/var/mysql/yourDBname/*.frm
This will resolve your issue.
If you'd like to check whether your permissions on any of the files were modified, before you execute the permission change, do:
ls -l /opt/lampp/var/mysql/yourDBname/*.frm
Hope that helps someone.
If you have a failed RENAME TABLE statement, it can leave the MySQL metadata in a bad state. The solution is to recreate the schema or recreate the table.