Importing zipped files into MySQL using CMD - mysql

I am trying to import zipped database files into MySQL at the command prompt using the following command:
7z < backup.sql.7z | mysql -u root test
The root user doesn't have any password associated with it.
test is my target (empty) database.
I use 7-Zip for unzipping.
The zipped database, backup.sql.7z, is located on the D drive.
But it gives the following error.
So instead I used the following command:
7z < backup.7z | mysql -u root test
Note: this time I am using backup.7z instead of backup.sql.7z.
But then I get the following error.
Clearly there's something wrong with my SQL syntax.
What would be the correct syntax to use, then?

I needed to import from a compressed file as well, and stumbled upon your question.
After a bit of messing around, I found that this worked for me:
7z x -so backup.7z | mysql -u root test
x is the extract command.
-so makes 7-Zip write to stdout.
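Since your archive is on the D drive, the full-path form of the same pipe should presumably be:
7z x -so D:\backup.sql.7z | mysql -u root test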

Nothing wrong with your syntax; it's just a limitation of 7-Zip. It's better to use xz in this case, which doesn't put extraneous junk on stdout, or to call 7z.dll directly from your favorite programming language. 7z.exe is really meant for archive management rather than Unix-style piping, and Igor is very reluctant to change that.
If you try a plain 7z < somefile.7z, you'll immediately see that all you get back is a usage listing.
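To make the xz suggestion concrete, here's a minimal sketch, assuming the dump were recompressed as a hypothetical backup.sql.xz:
# -d decompresses, -c writes the result to stdout for mysql to consume
xz -dc backup.sql.xz | mysql -u root test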

Related

Call external program from mysql

How can I call an external program from mysql?
I am a complete beginner at this. On Linux Mint 20, I created a database of all my video files; the paths of the videos are all listed in a table.
I can access the DB using Bash with:
mysql -u root -proot -e "use collection; select path from videos where path Like '%foo%' or path Like '%bar%'"
That searches for what I want, but now I want to pipe the chosen video(s) to MPV/VLC, whatever.
Apart from the fact that I am doing it as root, am I going about this the wrong way?
I just want to perform quick searches in a terminal, then fire up the vid(s).
Thanks a lot, folks.
If I'm understanding correctly: you want to query your DB for a specific type of file or path, and then use the result of the query to open the files?
You don't open the program from MySQL, but you can open it from bash.
Figure out the bash command that opens that program, then loop over the output of your query in bash to open the results one by one.
Alternatively you can output the results to a temporary file and read from it with bash:
mysql -u user -ppassword -e "YOUR QUERY" > /tmp/output.txt
If you can get the right output into your output.txt file, I would look into reading from that file in bash with a loop. Something like:
while IFS= read -r line
do
    mpv "$line"
done < /tmp/output.txt
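You could also skip the temporary file entirely and pipe the query straight into the loop. A minimal sketch, assuming the collection database and root credentials from the question (-N suppresses the column-name header row):
# query the paths and play each match; adjust the LIKE pattern to taste
mysql -u root -proot -N -e "select path from videos where path like '%foo%'" collection |
while IFS= read -r line
do
    mpv "$line"
done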

Retrieve lost file using Vi in MySQL

I would like to know how to retrieve a file edited with Vi inside MySQL. I logged in using:
mysql -uuser -p -hserver -A database
Then I do:
\e
The editor opens and I type my query of 200 lines; then I :wq and \G (when I save the file it says: /tmp/sql9SbYQZ saved) and I see the result.
Now, if I make a mistake or run a different query and I try to type \e again, the query is lost.
$ ll /tmp/sql9SbYQZ
ls: /tmp/sql9SbYQZ: No such file or directory
Is there a way to retrieve the lost file?
Here's what I added to my .vimrc in order to save the current query in case I make a mistake.
nmap <F7> :w! /tmp/query.sql\| wq!<CR>
This creates a mapping for the F7 key (you can change the key, of course). So every time you open a file, whether via edit or \e, you save it with the F7 key instead.
This saves a backup of your current query to /tmp/query.sql and then saves and closes the temporary file. That way, if you make a mistake, you just re-open the backup file and try again.
Here's also a link you might like: http://vim.wikia.com/wiki/Open_the_last_edited_file
With the vi/vim editor used by mysql, crontab, and many other tools, the work is done in a tmp file, as you can see from your messages.
Edit: (big d'oh!) removed the cruft about ls -l /tmp/...; you already did that.
In the future, the solution is to tell vim to write the buffer to a file name of your choosing, e.g.
:w! /home/you/scripts/mysql2.sql
Then close the editor with
:q
Note you may not need the ! after w.
I hope this helps.
Here is something you can try:
In Linux, do the following
$ cd
$ cp .mysql_history mystuff.txt
$ vi mystuff.txt
You should see the contents of .mysql_history: the mysql client records all queries and commands executed. Hopefully, your query is in there.
Give it a try!
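If the history is long, here's a hedged sketch for fishing the lost query back out, assuming the default ~/.mysql_history location:
# search the history for likely fragments of the lost query
grep -i 'select' ~/.mysql_history | tail -n 5
# or page through the whole history starting from the end
less +G ~/.mysql_history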

Can't find a .frm file when trying to import data into MySQL

I am trying to import English wikipedia dump into MySQL so I can use the JWPL library to work with it.
I installed MySQL, created a database named wikidump, ran a SQL script that created the needed tables, and tried to run the following import command to load the data:
mysqlimport -u root -p --local --default-character-set=utf8 wikidump `pwd`/*.txt
When I do so, I get the following error:
mysqlimport: Error: 1017, Can't find file: '.\wikidump\#002.frm' (errno: 22), when using table: *
I ran the command from the root directory of the files to import. Is that okay?
Is this a problem with the db or with the files I am trying to import?
Any clues on what to do next?
(Sorry if it's a simple question and I'm just missing something obvious; I'm a newbie to SQL and I did my best searching for an answer.)
I got this message once when I tried to read in gzipped data files and needed to uncompress them first...
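If that's the cause here too, decompressing before the import should fix it, e.g. (assuming gzipped .txt data files):
gunzip ./*.txt.gz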
I ran into this problem too.
It seems the command doesn't support the use of "*". The way I solved it was to list all the file names in another file, use the shell to prepend "mysqlimport ..." to every file name, and then run that file as a script, repeating the import command for all the files.
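A shell loop expresses the same idea more directly. A minimal sketch, assuming a Unix-style shell (which expands the glob itself) and the wikidump database and .txt data files from the question:
for f in ./*.txt
do
    # one mysqlimport invocation per data file, so no wildcard ever reaches mysqlimport
    mysqlimport -u root -p --local --default-character-set=utf8 wikidump "$f"
done
Note that -p will prompt for the password on every iteration; putting the credentials in an option file avoids that.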

Import Multiple .sql dump files into mysql database from shell

I have a directory with a bunch of .sql files that are mysql dumps of each database on my server.
e.g.
database1-2011-01-15.sql
database2-2011-01-15.sql
...
There are quite a lot of them actually.
I need to create a shell script, or probably a one-liner, that will import each database.
I'm running on a Linux Debian machine.
I'm thinking there is some way to pipe the results of an ls into some find command or something...
Any help and education is much appreciated.
EDIT
So ultimately I want to automatically import one file at a time into the database.
E.g. if I did it manually on one it would be:
mysql -u root -ppassword < database1-2011-01-15.sql
cat *.sql | mysql? Do you need them in any specific order?
If you have too many to handle this way, then try something like:
find . -name '*.sql' | awk '{ print "source",$0 }' | mysql --batch
This also gets around some problems with passing script input through a pipeline, though you shouldn't have any problems with pipeline processing under Linux. The nice thing about this approach is that the mysql utility reads each file itself instead of having it read from stdin.
One-liner to read in all .sql files and import them:
for SQL in *.sql; do DB=${SQL/\.sql/}; echo importing $DB; mysql $DB < $SQL; done
The only trick is the bash substring replacement to strip out the .sql suffix to get the database name.
There is a superb little script at http://kedar.nitty-witty.com/blog/mydumpsplitter-extract-tables-from-mysql-dump-shell-script which will take a huge mysqldump file and split it into a single file for each table. Then you can run this very simple script to load the database from those files:
for i in *.sql
do
    echo "file=$i"
    mysql -u admin_privileged_user --password=whatever your_database_here < "$i"
done
mydumpsplitter even works on .gz files, but it is much, much slower than gunzipping first, then running it on the uncompressed file.
I say huge, but I guess everything is relative. It took about 6-8 minutes to split a 2000-table, 200MB dump file for me.
I don't remember the exact mysql syntax, but it will be something like this:
find . -name '*.sql' | xargs mysql ...
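As written this won't quite work, because mysql doesn't accept SQL files as arguments; a hedged working variant of the same idea (hypothetical credentials and target database):
# run one mysql per file, redirecting each file into it
find . -name '*.sql' | xargs -I{} sh -c 'mysql -u root -ppassword somedb < "{}"'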
I created a script some time ago to do precisely this, which I called (completely uncreatively) "myload". It loads SQL files into MySQL.
Here it is on GitHub
It's simple and straightforward: it allows you to specify mysql connection parameters, and will decompress gzipped sql files on the fly. It assumes you have one file per database, and that the base of the filename is the desired database name.
So:
myload foo.sql bar.sql.gz
This will create databases called "foo" and "bar" (if they don't exist) and import the sql file into each.
For the other side of the process, I wrote this script (mydumpall) which creates the corresponding sql (or sql.gz) files for each database (or some subset specified either by name or regex).

Mysql command line restore error "The system cannot find the file specified."

I've got a rather strange issue: I've got an automated build tool that calls through to the mysql command line to tear down and then set up a database from an SQL file.
On one computer it works fine; it's basically calling:
mysql -h {Connection::Host} -u {Connection::User} --password={Connection::Password} < {sqlFile}
I've just checked it out on another computer and tried to build, and it keeps giving me the error "The system cannot find the file specified." The MySQL versions are the same (5.1) and no other files have changed. The only thing I know is different is where the build files are deployed: at home they are deployed to:
d:/code/projects/xxxxx/
whereas on this computer that doesn't work it is deployed to:
c:\Documents and Settings\xxxxxx\My Documents\Projects\Other\xxxxxx\
The interwebs brought back a few possibilities, such as the spaces within the path; however, I've tried adding -i to the command (ignore spaces) and it made no difference.
Anyone have any ideas?
It's most likely the spaces in the path: everything after the first space is passed to the program as a separate parameter.
Try surrounding {sqlFile} with double quotes.
mysql .... < "{sqlFile}"
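For example, with hypothetical host, credentials, and a path containing spaces like the one in the question:
mysql -h localhost -u root --password=secret < "C:\Documents and Settings\user\My Documents\Projects\build\setup.sql"
Without the quotes, the shell splits the path at the first space, which is exactly what produces "The system cannot find the file specified."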