MySQL batch file update - mysql

One of my clients has an issue with his MySQL database. To solve the issue I need to just run a simple update on a table. I will need to send that to my client via a batch file.
How do I run a MySQL update on a table via a batch file?

Typically I put the SQL commands I want to run in a plain text file. You can then execute the file by launching the MySQL client and entering:
\. filename
This runs each line of the file as if it were typed at the prompt, and it is easy to test.
If you need more than that, you can give your client a command they can copy, paste, and run that pipes the file into MySQL as input. Make sure usernames and passwords are handled by your command line or your script.
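For illustration, a minimal sketch of that flow, assuming a hypothetical file named fix.sql that holds the update, plus placeholder database and user names:
# fix.sql might contain a single statement, e.g.:
#   UPDATE orders SET status = 'resolved' WHERE order_id = 1234;   (made-up table and columns)
# a one-liner the client can copy and paste (they will be prompted for the password):
mysql -u clientuser -p clientdb < fix.sql
# or, from an already-open mysql> prompt:  \. fix.sql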

I find the following preferable.
On Linux:
mysql -u root -p -D database < file
I have been using this and I find it more convenient.

Related

What parameters does MySQL Workbench pass to mysqldump?

I need to write a script to automate MySQL backup of a database. So to determine what I will need, I go into MySQL Workbench, select the Schema, select Data Export, set a couple of controls (at the moment: Export to Self-Contained File & Include Create Schema) and Start Export.
Export Progress shows me command-line:
Running: mysqldump --defaults-file="/tmp/tmpTbhnzh/extraparams.cnf" --user=*** --host=*** --protocol=tcp --port=3306 --default-character-set=utf8 --skip-triggers "<schema-name>"
I need to know what is in that temporary "defaults file" if I'm to replicate whatever it is that MySQL Workbench passes to mysqldump. But the backup completes so quickly and deletes the file that I can't even copy it, of course!
Is there a way I can know just what arguments Workbench is passing to mysqldump so I can know I'm generating a good, robust script? (To be clear: I'm sure I can look up the mysqldump documentation to find arguments corresponding to whatever UI items I fill in explicitly, but I'm wondering what other "goodies" MySQL Workbench might know about and put in the parameters file.)
A bit of digging around in the Python scripts (there's one called wb_admin_export.py) shows the answer is... not very exciting: it's your password.
It also includes ignore-tables if there are any to ignore.
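If you want to replicate that in your own script, the temporary defaults file is presumably just a standard MySQL option file along these lines (the exact contents here are an assumption; the table name is a placeholder):
[mysqldump]
password=your_password_here
ignore-table=schema_name.table_to_skip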

How do I use crontab to export and rsync databases in sql format to my computer?

Situation
I am trying to automate the downloading of my websites' files, emails and databases to my iMac. I am also trying to use rsync so that, aside from the first download, the rest will just be incremental. I discovered crontab and have finally gotten the right expression to rsync my websites' files, as follows:
30 01 * * * rsync -azvv -e 'ssh -p MYPORTNUMBER' root@MYIPADDRESS:/var/www/vhosts/ /Users/MYUSERNAME/var/www/vhosts
Question
How do I automate the downloading of all the databases on my server in SQL format to my iMac, and do it incrementally? There are so many solutions out there, but they all seem to suggest using a shell script, and that shell script has to be stored on the server. Is it possible to store the shell script on my iMac and then have crontab run it? And can someone please help me with the expressions for it?
I have tried creating a .sh file on my iMac, then running it via crontab or Terminal, but I get the error message "cannot execute binary file".
Or, should I be finding the script to transform the databases into sql format, store it somewhere on the server, then download it to my computer, then delete it off the server?
I have never touched shell script or crontab until now, so I am very scared that some wrong command will screw up the server or delete or modify files unknowingly.
You want a shell script that dumps your MySQL database to a .sql file. The script will need to live on your server and create the file there; you can then rsync that file over to your Mac. Something like this to get you started:
#!/bin/sh
# dump the whole database to a plain SQL file (add -u/-p options if your setup needs credentials)
/usr/bin/mysqldump MyDatabase > mydatabase_backup.sql
Of course this just overwrites your backup each time, so if you want to get extra credit you can learn how to create variables based on the current date in a shell script and use the date in your backup file name, along the lines of the sketch below.
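A minimal sketch of that idea, with a crontab entry on the iMac that triggers the dump over SSH and then pulls it down with rsync. Paths, port, user and database names are placeholders; it also assumes key-based SSH login (as your existing rsync line implies) and that mysqldump credentials come from the server's ~/.my.cnf.
#!/bin/sh
# backup_db.sh -- lives on the server; writes a date-stamped dump
DATE=$(date +%Y-%m-%d)
/usr/bin/mysqldump MyDatabase > /var/backups/mydatabase_backup_"$DATE".sql
Then, on the iMac, a crontab entry that runs the script remotely and syncs the backup directory down:
45 01 * * * ssh -p MYPORTNUMBER root@MYIPADDRESS '/root/backup_db.sh' && rsync -az -e 'ssh -p MYPORTNUMBER' root@MYIPADDRESS:/var/backups/ /Users/MYUSERNAME/db_backups/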

export mysql database schema and data using phpmyadmin

I'm trying to export a database with phpMyAdmin, but the result is a file that can't be read. In phpMyAdmin I leave the options at their defaults and only add compression. After downloading, I'm unable to open the resulting .sql file (after extracting) in Geany, and I get a complaint about the file being in an encoding that's not supported. Every time I do the export, the file seems to be of a random size, as if the process just stops somewhere.
It's a small database with Joomla stuff that needs to be moved to another site. I would like to use mysqldump but don't have SSH access. Other than the phpMyAdmin in the cPanel I can't think of another way to access that data, but phpMyAdmin doesn't seem to be up for doing the job.
Is there any setting I should have to have a look at or some other way to get that data exported?
Try disabling compression or using another compression method. There have been a few bugs in phpMyAdmin; maybe you are stumbling over one of them. Also try opening the SQL file in an ordinary text editor and checking the content; maybe you can get some more information about the problem you are experiencing.
If you have access to the commandline you can also try the following command: mysqldump -u <username> -p <database_name> > dumpfilename.sql.

Importing larger SQL files into MySQL

I have a 400 MB SQL backup file. I'm trying to import that file into a MySQL database using WAMP → import, but the import failed for several reasons, such as the upload file size being too large. So, I've gone ahead and made a few changes to the PHP and MySQL configuration settings under WAMP. The changes made are:
Under WAMP->MySQL->my.ini
max_allowed_packet = 800M
read_buffer_size = 2014K
Under WAMP->PHP->PHP.ini
max_input_time = 20000
memory_limit = 128M
Is this the right way to import such a large SQL file using WAMP → import?
If so, did I make the right changes to the configuration files? What are the maximum values allowed for the above variables?
You can import large files this command line way:
mysql -h yourhostname -u username -p databasename < yoursqlfile.sql
Since you state (in a clarification comment to another person's answer) that you are using MySQL Workbench, you could try using the "sql script" option there. This will avoid the need to use the commandline (although I agree with the other answer that it's good to climb up on that horse and learn to ride).
In MySQL Workbench, go to the File menu, then select "Open Script". This is probably going to take a long time and might even crash the app, since it's not made to open things as big as what you describe.
The better way is to use the command line. Make sure you have the MySQL client installed on your machine. This means the actual MySQL client (not the Workbench GUI or phpMyAdmin or anything like that); the MySQL documentation describes the command-line tool. Once you have that downloaded and installed, open a terminal window on your machine, and you have two choices for slurping data from your file system (such as a backup file) up into your target database. One is to use the 'source' command and the other is to use the < redirection operator.
Option 1:
from the directory where your backup file is:
$ mysql -u username -p -h hostname databasename < backupfile.sql
Option 2:
from the directory where your backup file is:
$ mysql -u username -p -h hostname
[enter your password]
> use databasename;
> source backupfile.sql
Obviously, both of these options require that you have a backup file that is SQL.
Use this from mysql command window:
mysql> use db_name;
mysql> source backup-file.sql;
You can run the import from any SQL query browser using the following command:
source C:\path\to\file\dump.sql
Run the above code in the query window ... and wait a while :)
This is more or less equal to the command-line version stated previously. The main difference is that it is executed from within MySQL instead. For those using non-standard encodings, this is also safer than the command-line version because there is less risk of encoding failures.
We experienced the same issue when moving the SQL server in-house.
A good solution that we ended up using is splitting the SQL file into chunks. There are several ways to do that:
http://www.ozerov.de/bigdump/ seems good (but I have never used it)
http://www.rusiczki.net/2007/01/24/sql-dump-file-splitter/ I used this one and it was very useful for getting structure out of the mess, and you can take it from there.
Hope this helps :)
If you are using the source command on Windows remember to use f:/myfolder/mysubfolder/file.sql and not f:\myfolder\mysubfolder\file.sql
I believe the easiest way is to upload the file using the MySQL command line.
Use the following command from the terminal to access the MySQL command line and then run source:
mysql --host=hostname -uuser -ppassword
source filename.sql
or run it directly from the terminal prompt:
mysql --host=hostname -uuser -ppassword < filename.sql
Three things you have to do, if you are doing it locally:
In php.ini or php.cfg of your PHP installation
post_max_size=500M
upload_max_filesize=500M
memory_limit=900M
Or set other values as needed. Then restart Apache.
Or:
Use the PHP BigDump tool. It's the best I have seen, and it's free and open source.
Had a similar problem, but in Windows. I was trying to figure out how to open a large MySQL SQL file in Windows, and these are the steps I had to take:
Go to the download website (http://dev.mysql.com/downloads/).
Download the MySQL Community Server and install it (select the developer or full install, so it will install client and server tools).
Open the MySQL command-line client from the Start menu.
Enter your password used in install.
At the mysql> prompt, enter:
CREATE DATABASE database_name;
USE database_name;
SOURCE myfile.sql
That should import your large file.
The question is a few months old, but for other people looking:
A simpler way to import a large file is to make a subdirectory 'upload' in your folder c:/wamp/apps/phpmyadmin3.5.2
and edit this line in the config.inc.php file in the same directory to include the folder name
$cfg['UploadDir'] = 'upload';
Then place the incoming .sql file in the folder /upload.
Working from inside the phpMyAdmin console, go to the new database and import. You will now see an additional option to upload files from that folder. Choose the correct file and be a little patient. It works.
If you still get a time out error try adding $cfg['ExecTimeLimit'] = 0; to the same config.inc.php file.
I had difficulty importing an .sql file where the user name was root and the password differed from the root password on my new server. I simply removed the password before I exported the .sql file, and the import worked smoothly.
In item 1.16 of the phpMyAdmin FAQ they say:
1.16 I cannot upload big dump files (memory, HTTP or timeout problems).
Starting with version 2.7.0, the import engine has been rewritten and these problems should not occur. If possible, upgrade your phpMyAdmin to the latest version to take advantage of the new import features.
The first things to check (or ask your host provider to check) are the values of upload_max_filesize, memory_limit and post_max_size in the php.ini configuration file. All of these three settings limit the maximum size of data that can be submitted and handled by PHP. One user also said that post_max_size and memory_limit need to be larger than upload_max_filesize. There exist several workarounds if your upload is too big or your hosting provider is unwilling to change the settings:
Look at the $cfg['UploadDir'] feature. This allows one to upload a file to the server via scp, ftp, or your favorite file transfer method. phpMyAdmin is then able to import the files from the temporary directory. More information is available in the Configuration section of this document.
Using a utility (such as BigDump) to split the files before uploading. We cannot support this or any third party applications, but are aware of users having success with it.
If you have shell (command line) access, use MySQL to import the files directly. You can do this by issuing the “source” command from within MySQL:
source filename.sql;
The simplest solution is MySQL Workbench. Just copy the .sql file's text into the query window of MySQL Workbench and execute it. Everything remaining will be handled by it.
If you want to see the import progress, install pv with Homebrew and run this:
brew install pv
pv -p dump.sql | mysql -h localhost -u root -p db_name
my.ini
max_allowed_packet = 800M
read_buffer_size = 2014K
PHP.ini
max_input_time = 20000
memory_limit = 128M
post_max_size=128M

Expand mysql memory

I am trying to import a database that is larger than the 128 MB limit into MySQL on a XAMPP machine. How can I expand the MySQL memory on XAMPP?
Assuming you're using PHPMyAdmin, you can't - that's a hard limit.
To do it via PHPMyAdmin, you need to split your import files up and run them separately.
Alternatively, use a different MySQL client, like the mysql command-line client that comes as part of XAMPP and allows you to run scripts directly.
The mysql command allows you to connect to the database using the client, and then run the script from there, like this:
C:\xampp\mysql\bin>mysql.exe -u root
mysql> source C:/mysqlscripts/import.sql;
Note the forward slashes in the file path.
Or, you can run it on the Windows command line with the file as a parameter, as described in the documentation.
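That parameter-style invocation would look roughly like this (a sketch, not an exact command from the documentation; the database name and file path are placeholders):
C:\xampp\mysql\bin>mysql.exe -u root db_name < C:\mysqlscripts\import.sql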