My aim is to set up a scheduled job on a Windows server that deletes datasets on a local MySQL server, also installed on a Windows server.
But I don't want to just delete data, I also want to create a logfile.
With this command:
mysql -v -h localhost -u user --password=password -P 3307 --database=mydatabase -B --silent --skip-named-commands < query.sql > logging.txt
and the following SQL file "query.sql":
select count(*) from table;
I get the following logfile:
--------------
select count(*) from table
--------------
4420101
=======================
My first question is: can I suppress the echoed query and the two lines above and below it?
The final SQL file will contain about 20 lines of SQL commands. My preferred goal is to create a formatted logfile like this:
Job started at <date>
Deleting <4420101> datasets in table <table>
4420101 rows affected
Deleting <22013> datasets in table <persons>
etc.
So I have to create logfile lines from SELECT statements and variables, while other lines such as DELETE statements should not appear in the logfile. Is this possible?
Include --skip-column-names:
mysql -v -h localhost -u user --password=password -P 3307 --database=mydatabase -B --silent --skip-named-commands --skip-column-names < query.sql > logging.txt
Hope this helps.
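To build the formatted lines themselves, you can generate them with SELECT CONCAT(...) statements inside query.sql. With --silent and --skip-column-names, only the results of SELECT statements reach the logfile, while DELETE statements print nothing (dropping the -v flag also removes the echoed statement and the dashed lines around it). A minimal sketch, where mytable and the WHERE condition are hypothetical placeholders:
-- only SELECT output appears in the logfile
SELECT CONCAT('Job started at ', NOW());
SELECT CONCAT('Deleting <', COUNT(*), '> datasets in table <mytable>') FROM mytable;
-- the DELETE itself writes nothing to the logfile
DELETE FROM mytable WHERE created_at < NOW() - INTERVAL 30 DAY;
-- ROW_COUNT() reports the rows changed by the immediately preceding statement
SELECT CONCAT(ROW_COUNT(), ' rows affected');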
The code below extracts the views separately from the database. However, I'm trying to get this to run in a single docker run or exec command.
Right now when I try, the pipe, in combination with trying to escape quotes, gives me errors.
mysql -u username INFORMATION_SCHEMA \
  --skip-column-names --batch \
  -e "select table_name from tables where table_type = 'VIEW'
  and table_schema = 'database'" \
  | xargs mysqldump -u username database \
  > views.sql
Anyone know how to achieve this within one docker command?
For example:
docker exec -i $(docker-compose ps -q mysqldb) mysql ...
Much love.
You can run both the mysql client command and the mysqldump tool from somewhere that's not "on the database server". In your case, you can run them from the host that has the MySQL server, assuming you launched the database with options like docker run -p 3306:3306. It would look something like
mysql -h 127.0.0.1 -u username INFORMATION_SCHEMA \
--skip-column-names --batch \
-e "select table_name from tables where table_type = 'VIEW' and table_schema = 'database'" \
| xargs mysqldump -h 127.0.0.1 -u username database \
> views.sql
This avoids all of the shell quoting problems of trying to feed this into docker exec, and also avoids needing root-level access on the host for an administrative task (if you can run any Docker command at all, then you can use docker run to add yourself to the host's /etc/sudoers, among other things).
I also agree with @MichaelBoesl's answer, though: this is long enough that trying to make it into a one-liner isn't worth the trouble the various quoting and escaping will bring. I'd probably write this into a script and put the SQL query into a file.
#!/bin/sh
# Connection settings, overridable from the environment
: ${MYSQL_HOST:=127.0.0.1}
: ${MYSQL_USER:=username}
: ${MYSQL_DATABASE:=INFORMATION_SCHEMA}
# Write the view-listing query to a temporary file
cat >/tmp/dump_views.sql <<SQL
SELECT table_name
FROM tables
WHERE table_type='VIEW' AND table_schema='database';
SQL
# List the views, then hand the names to mysqldump
mysql -h "$MYSQL_HOST" -u "$MYSQL_USER" --skip-column-names --batch \
  "$MYSQL_DATABASE" </tmp/dump_views.sql \
  | xargs mysqldump -h "$MYSQL_HOST" -u "$MYSQL_USER" "$MYSQL_DATABASE"
You can put all your commands into a bash script on the container and just execute the script!
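For example, a sketch assuming the script above is saved as dump_views.sh and the compose service is named mysqldb as in the question:
# copy the script into the running container, execute it there,
# and collect the dump on the host
docker cp dump_views.sh "$(docker-compose ps -q mysqldb)":/tmp/dump_views.sh
docker exec -i "$(docker-compose ps -q mysqldb)" sh /tmp/dump_views.sh > views.sql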
When I run any select or update query in MySQL Workbench, it shows either the number of rows affected or the number of rows returned.
In my script, I use mysql -u user -h ip db -se "select * from .."
I have tried redirecting mysql output:
./script.sh >> script.log 2>&1
but it only shows messages for errors, not for successful runs.
It does not show 27 row(s) returned, so I cannot check whether an update statement, select, or procedure ran successfully.
How can I get output for statements that run successfully?
I found a solution in the options of mysql.
Run it like this:
mysql -u USER -h HOST -P PORT --password -vv -se "YOUR QUERY" >output.txt 2>errors.txt
The addition of the -vv parameter will give you the number of affected rows.
-vvv will also tell you how much time it took to run the query.
For example, I ran this:
mysql -u Nic3500 -h localhost -P 3306 --password -vv -se "INSERT INTO stackoverflow.activity (activity_id, activity_name) VALUES ('10', 'testtest');" >output.txt 2>&1
And output.txt is:
--------------
INSERT INTO stackoverflow.activity (activity_id, activity_name) VALUES ('10', 'testtest')
--------------
Query OK, 1 row affected
Bye
You can do this with a query using INTO OUTFILE, as shown in the MySQL documentation.
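A sketch reusing the table from the example above; note that the MySQL server itself writes the file, so the account needs the FILE privilege and the target path must be permitted by secure_file_priv:
SELECT activity_id, activity_name
INTO OUTFILE '/tmp/activity.txt'
FROM stackoverflow.activity;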
Trying to connect to an rds mysql server from an ec2 ubuntu server.
I use
mysql -h my_host_name -u admin_name -p database < data.sql
When the password prompt appears, I enter my password. However, all this does is create a new blank line and nothing else.
Any ideas?
When mysql is processing file input, it doesn't normally print informative messages; it only displays the results of SELECT queries. If you want to see messages from queries that modify the database, add the -v option to make it verbose.
mysql -v -h my_host_name -u admin_name -p database < data.sql
If you use -v -v it will produce even more details, and -v -v -v will be most informative.
The blank line probably means that mysql is processing what is inside your "data.sql".
If you need to see what is been processed, you can first connect to mysql server with:
mysql -h my_host_name -u admin_name -p
Change to your database (if you have one defined and your SQL is not creating one...):
mysql> use my_database;
Then run your script with:
mysql> source data.sql;
{}'s
I am trying to create a batch file that opens command prompt, changes directory and then runs MySQL queries:
C:\xampp\mysql\bin\mysql.exe -u admin -padmin -h localhost mydatabase
select * from table;
When I run the batch file, the MySQL command line opens and connects to the database, but the select * from table; command doesn't run.
What is the correct way to do this?
A batch file cannot pass lines that come after the call to the exe into that program.
If you want to send a command, create a text file that contains it. You can name it commands.txt:
select * from table
Then tell mysql to read from that file:
C:\xampp\mysql\bin\mysql.exe -u admin -padmin -h localhost mydatabase < commands.txt
If you need the results of the command, save them like this:
C:\xampp\mysql\bin\mysql.exe -u admin -padmin -h localhost mydatabase < commands.txt > results.txt
You can read more about that approach here.
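Putting it together, the whole batch file can be a single line plus the redirects, reusing the paths and credentials from the question:
@echo off
REM run the statements from commands.txt and save the results
C:\xampp\mysql\bin\mysql.exe -u admin -padmin -h localhost mydatabase < commands.txt > results.txt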
I want to copy a mysql database from my local computer to a remote server.
I am trying to use the mysql dump command. All the examples on the internet suggest doing something like
The initial mysql> is just the prompt I get after logging in.
mysql> mysqldump -u user -p pass myDBName | NewDBName.out;
But when I do this I get: You have an error in your SQL syntax; check the manual that corresponds ... to use near 'mysqldump -u user -p pass myDBName | NewDBName.out'
Since I have already logged in do I need to use -u and -p? Not doing so gives me the same error. Can you see what is wrong?
In addition to what Alexandre said, you probably don't want to pipe (|) output to NewDBName.out, but rather redirect it there (>).
So from the Windows/Unix command line:
mysqldump -u user -ppass myDBName > NewDBName.out
Note that if you have large binary fields (e.g. BLOBS) in some columns you may need to set an additional option (I think it was --hex-blob, but there might have been another option too). If that applies to you, add a comment and I'll research the setting.
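If it is --hex-blob, the invocation would look like this (it makes mysqldump emit binary columns as hexadecimal literals so they survive transport intact):
mysqldump --hex-blob -u user -ppass myDBName > NewDBName.out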
mysqldump is not an SQL statement that you execute inside a mysql session but a distinct binary that should be started from your OS shell.
There are a few ways to use this. One of them is to pipe the output of mysqldump to another MySQL instance:
echo CREATE DATABASE remote_db | mysql -h remote_host -u remote_user -premote_password
mysqldump -h source_host -u root -ppassword source_db | mysql -h remote_host -u remote_user -premote_password -D remote_db
I have had to dump large sets of data recently. Here is what I found on a 200MB database with 10,000+ records in many of the tables. I used the Linux time command to get actual timings.
12 minutes using:
mysqldump -u user -ppass myDBName > db-backups.sql
7 minutes to clone the database:
mysqldump -u user -ppass myDBName | mysql -u user -ppass cloneDBName
And in less than a second:
mysqlhotcopy -u user -p pass myDBName cloneDBName
The last one blew my mind, but you have to be logged in locally on the machine where the database server resides. Personally I think this is much faster than doing a dump remotely; then you can compress the .sql file and transfer it manually.
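That manual route could look like this; remote_host is a hypothetical destination and the clone database is assumed to already exist there:
# dump and compress locally, copy it over, then restore remotely
mysqldump -u user -ppass myDBName | gzip > myDBName.sql.gz
scp myDBName.sql.gz remote_host:/tmp/
ssh remote_host "gunzip < /tmp/myDBName.sql.gz | mysql -u user -ppass cloneDBName"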