Execute INSERT statements present in a file - mysql

I have a file with many INSERT statements. How do I execute them without having to manually copy and paste them for execution? That is quite tedious, as the file is about 60 MB.

You can have MySQL run a file both from the command prompt, via the mysql executable, and from inside the client itself. I think the former is easier:
mysql < file-to-import.sql
(you may need username/password, etc.)
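With credentials, that becomes something like the following (the username and database name here are placeholders):
mysql -u username -p your_database < file-to-import.sql
Or, from inside the mysql client itself, the equivalent is the source command (or its \. shorthand):
source file-to-import.sql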

Create your file with the .sql extension, then put all your queries in the file. Don't forget to add ";" after each statement. After that, from a command line, execute mysql < "path-to-your-file/theFilename.sql"

Related

Appserv command line batch

I use Java and MySQL with Appserv. I have a database file named "modb.sql" (in a specific directory relative to my program's location), and I frequently need to drop the old database, create a new database (with the same name every time, inside phpMyAdmin), and import the modb.sql file into the new DB.
Is there a way to automate this process and include it with the program or in a setup file, instead of me or the user doing it manually?
I could use another MySQL database manager, or C# instead of Java, if that would allow the process to be automatic.
You can do all of this from the Windows command line or through a scheduled task. I don't have Appserv installed, so I can't give you the exact file path locations, which may vary based on where you installed it. You could technically do it from a Java application as well, but that's a lot of overhead and not really automation the way a scheduled task is.
You'll basically be calling mysql.exe directly from your scheduled task -- or from a batch file.
First, let's create a new SQL file, perhaps called DropAndCreateMoDb.sql. Put your commands to drop (DROP DATABASE ...) and recreate the database here. Of course, the exact drop command depends on what you call your database, and the create command depends heavily on what structure is created by modb.sql; you'll also create any permissions that you need here. This automates the dropping and recreating.
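A minimal sketch of that file, assuming the database is simply named modb and that the application connects as a hypothetical appuser account (adjust both to your setup):
DROP DATABASE IF EXISTS modb;
CREATE DATABASE modb;
-- re-grant whatever rights your application account actually needs
GRANT ALL PRIVILEGES ON modb.* TO 'appuser'@'localhost';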
Next, write a batch file. You don't strictly need to -- you could call MySQL twice from the scheduled task -- but we're trying to do this the right way. This is untested, and obviously you'll want to substitute the proper paths and MySQL username/password (I suggest creating a maintenance account for this so your full credentials aren't in the batch file; the usual disclaimers apply about properly securing that account), but perhaps something like:
@ECHO OFF
"C:\Program Files\Appserv\MySQL\bin\mysql.exe" -u foo -pbar < C:\data\DropAndCreateMoDb.sql
"C:\Program Files\Appserv\MySQL\bin\mysql.exe" -u foo -pbar < C:\data\modb.sql
Obviously it doesn't catch any errors and your username and password are in the clear here, but as long as this is on your local development machine for development purposes and you know and understand the risks, it will work.
At this point, you can double-click the .bat file and it should drop and recreate your database. To finish it off and fully automate it, you'll add a scheduled task. Go to the Scheduled Tasks control panel and add a new task. Tell it the path to your new DropAndRecreate.bat or whatever you decide to call it, tell Windows when you want the task to run, and now it's fully automated.
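If you prefer creating the task from a prompt instead of the control panel, schtasks can do it as well; a sketch, with a hypothetical path and schedule:
schtasks /Create /TN "DropAndRecreateMoDb" /TR "C:\data\DropAndRecreate.bat" /SC DAILY /ST 02:00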
There are lots of variables here that make the specifics of the implementation very dependent on your exact configuration and so on. Make sure you understand what each step actually does instead of just copying and pasting.

How to migrate large mysql database where remote server has stringent limits?

I have a local database that's about 1 GB, and my remote host is a free host that I am using for testing. I want to make sure everything works before I spend money on a paid host. The problem is that phpMyAdmin on the remote server only allows 50 MB files, which just doesn't cut it, especially since the restore usually fails due to execution time limits. Below is a list of everything I've tried.
LOCAL
phpmyadmin -----> backing up tables no longer works because of timeouts, even with modified php.ini settings, due to the sheer size of the db
mysqldumper -----> the program creates dumps with plain INSERTs; there is no option to make it create INSERT IGNOREs. I'll explain the problem below.
mysqlworkbench -----> creates the database using my local server's database name (the problem is my remote server has a different database name, and I can't open a 1 GB .sql file to edit the database name at the very top; my computer just craps out and I have to force quit Workbench)
sqlsplitter (Mac program) -----> cuts up large .sql or .sql.gz files
REMOTE
phpmyadmin with .gz/.sql files cut up into 20 MB chunks
-----> timeout. phpMyAdmin's resume function doesn't work either; it just overwrites old data
mysqldumper -----> the process ends in an error randomly midway through my restore on the remote server, using a backup created with mysqldumper on my local computer (single file or multipart, both don't work). Could be at 10% completion, could be at 50%.
bigdump -----> used single and multipart dumps from mysqldumper, same problem. Randomly quits halfway through. Some multiparts completed successfully, but when one failed and I tried the failed part again, it would give me an error saying a unique key already exists in the table. I don't want to unset all my unique keys and have to go through and delete all the duplicates later.
mysqldumper -----> does not work with a dump from mysqlworkbench
bigdump -----> gives me an SQL error, denied for creating database, using a dump from mysqlworkbench (I cannot open up a 1 GB file to delete the one line that says CREATE DATABASE)
Does anybody know of a better method to upload to my host? I have no command-line access there and only a 500 MB space limit (no limit on SQL space, though).
Thanks
Use mysqldump. Figure out what the error you're seeing is, and fix it. The mysqldump utility works; I've restored dump files with hundreds of gigabytes of data to servers, and I never use anything else. If it doesn't work for you, you're doing something incorrectly.
You can prevent it from writing a USE database-name; statement at the top of the file by invoking it with the database name as the last argument, without using the --databases option before it.
You can add the --insert-ignore command line option to write all the INSERT statements as INSERT IGNORE, to work around your partial insert issues (see the combined example below).
You can use --no-data to extract a dump file that contains table definitions, not data, and get all of the tables declared, first.
You can use the --no-create-info option to extract a dump file with just the inserts, not the table definitions.
http://dev.mysql.com/doc/refman/5.6/en/mysqldump.html
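Putting those options together (the credentials and database name below are placeholders), you might dump the schema and the data separately, giving the database name as the last argument so no CREATE DATABASE or USE line is emitted:
mysqldump -u user -p --no-data your_db > schema.sql
mysqldump -u user -p --no-create-info --insert-ignore your_db > data.sql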
You can also use a simple bash loop to extract each table into its own file, so you have smaller files to work with:
for TABLE in $(mysql [args] -NBe 'show tables in database-name'); do mysqldump [args] database-name $TABLE > $TABLE.sql; done
When restoring the files, add the --compress option to the mysql command line arguments for a faster transfer, and specify your (new) database name as the last argument, so the client will use the correct database before applying the file, which no longer contains the database name.
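For example (the host, user, and database name are placeholders, and this assumes your host permits remote MySQL connections):
mysql --compress -h remote-host -u user -p new_db_name < schema.sql
mysql --compress -h remote-host -u user -p new_db_name < data.sql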

Importing Product Open Data with BigDump

I'm trying to import the .sql file from Open Product Data, the POD Database Dump. The site recommends to use BigDump, but when I try to run the php script from BigDump in my localhost, I'm getting this message:
Stopped at the line 339.
At this place the current query includes more than 300 dump lines. That can happen if your dump file was created by some tool which doesn't place a semicolon followed by a linebreak at the end of each query, or if your dump contains extended inserts or very long procedure definitions. Please read the BigDump usage notes for more infos. Ask for our support services in order to handle dump files containing extended inserts.
How can I fix this?
I already tried to open the .sql file, but it always stops working/crashes on my Mac.
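If the dump contains extended (multi-row) INSERTs, as the error message suggests, one possible workaround, sketched below with placeholder names, is to load it once into a local MySQL from the command line (which has no such limit) and re-export it with one row per INSERT so tools like BigDump can split the queries:
mysql -u root -p pod < pod_dump.sql
mysqldump -u root -p --skip-extended-insert pod > pod_single_inserts.sql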

MySQL batch file update

One of my clients has an issue with his MySQL database. To solve the issue I need to just run a simple update on a table. I will need to send that to my client via a batch file.
How do I run a MySQL update on a table via a batch file?
Typically I put the SQL commands I want to run in a plain text file. You can then execute the file by launching MySQL and typing:
\. filename
This will run each line of the file as if it was typed from input. It is also easy to test.
If you need more, you can launch MySQL via a command they can cut and paste and pipe the file into MySQL as the input. Make sure usernames and passwords are handled by your command line or your script.
I find the following preferable.
On Linux:
mysql -u root -p -D database < file
I have been using this and I find it more convenient.
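If the client is on Windows, a minimal batch-file sketch might look like this (the file names and credentials are hypothetical; %~dp0 expands to the folder containing the batch file, so the .sql file can ship alongside it):
@ECHO OFF
mysql -u clientuser -p -D clientdb < "%~dp0fix_table.sql"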

Mysql outfile to current working directory?

Is there a way I can make the OUTFILE statement in a .sql file point to the current working directory of the .sql file itself, without manually specifying an absolute path? As it is now, the default location is the data directory of the schema I'm working with (i.e. C:\progra~1\mysql\etc\etc).
Thanks!
It seems like scripting would be your best bet here: use Perl or PHP to generate the query with the OUTFILE path set to the current working directory.
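For example, from a shell you could inject the current directory into the query yourself (a sketch with placeholder names; keep in mind INTO OUTFILE writes on the server, so this only works when the client and server share a filesystem):
mysql -u user -p -e "SELECT * FROM db1.tbl1 INTO OUTFILE '$(pwd)/output.tsv'"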
Not directly, but you can get similar functionality by using mysql at the command line.
First, some background: the LOAD DATA INFILE statement has a LOCAL variant, LOAD DATA LOCAL INFILE, that allows you to read data from a file on your local file system; without LOCAL, it reads from the specified location on the server.
Unfortunately, the SELECT INTO OUTFILE statement always writes to the specified location on the server's filesystem (which can include server-configured mounts such as NFS or CIFS shares); there is no LOCAL equivalent.
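For illustration, the LOCAL read form from the background above looks like this (table and path are placeholders, and both client and server must permit local_infile):
LOAD DATA LOCAL INFILE '/home/me/data.tsv' INTO TABLE db1.tbl1;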
However, you can approximate such functionality by using the mysql client in batch mode:
mysql -e 'SELECT * FROM db1.tbl1' -B > output.tsv
The -B option causes the statement to run in batch mode, so the query results are printed tab-delimited, with no borders. Using > output.tsv redirects that output to the specified file, so you have something equivalent to what you want, without the FIELDS or LINES options of SELECT INTO OUTFILE, but close enough!