I have a MySQL table with a large number of rows (10M).
From the mysql client, I want to run a query but not print the results, because even though the query runs in 15 seconds, printing the results to the console takes many minutes.
How can I achieve this?
EDIT: My query is the following:
select user_id, count(*) as ct from user_geo_loc group by user_id, lat, lng;
EDIT 2: At the end of the execution, the mysql client prints the following:
9950710 rows in set (9.31 sec)
I want to see this timing information but not print the results (which takes 15 minutes).
On Linux, you can redirect the output to /dev/null to suppress it, like this:
mysql -u username -p database -e "SELECT * FROM table" > /dev/null
On Windows the equivalent would be:
mysql -u username -p database -e "SELECT * FROM table" > NUL
Please note: the only thing still printed on the console will be errors. To suppress those as well, redirect stderr to stdout by adding 2>&1 at the end (on Linux).
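For example, to discard both the result set and any error output in one go (same placeholder credentials as above):
mysql -u username -p database -e "SELECT * FROM table" > /dev/null 2>&1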
In the console, you can redirect the output to the null device:
$ mysql -uUSER -pPASSWORD -e"select ..." DATABASE_NAME > /dev/null
or you can redirect it into a file to look at the results later (this is much faster than printing the output to the console):
$ mysql -uUSER -pPASSWORD -e"select ..." DATABASE_NAME > ./output.txt
It seems like you want a pager?
Run the following (in the MySQL console):
pager less
This will pipe the output through less and only show the first "screen" of info at a time.
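For example, with the query from the question (the confirmation line is printed by the client and its exact wording may vary by version):
mysql> pager less
PAGER set to 'less'
mysql> select user_id, count(*) as ct from user_geo_loc group by user_id, lat, lng;
You can switch back to normal output afterwards with nopager.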
Related
I have a q.sql file that has queries like
SET SQL_SAFE_UPDATES = 0;
UPDATE student SET gender = 'f' WHERE gender = 'm';
.
.
UPDATE student SET rollno = '03' WHERE rollno = '003';
This .sql file is executed through a shellscript:
mysql -uuser -ppass DB < q.sql
The command completes successfully even when one of the queries in the q.sql file has failed. Now I want to verify whether all the queries ran successfully.
I tried echo $?, but it always prints 0, i.e. command successful, even if one of the queries in q.sql has failed.
mysql -uuser -ppass DB < q.sql
echo $?
If a query fails, I want it to print "failed" or stop further execution of the shell script.
If you use bash, you can use set -e in your script and execute each line of your MySQL script with the -e option.
#!/bin/bash
set -e                                  # exit immediately if any command returns a non-zero status
while read -r line; do                  # assumes one complete SQL statement per line in q.sql
    mysql -uuser -ppass DB -e "$line"   # mysql exits non-zero if the statement fails
done < q.sql
For information, help set (in bash) shows:
-e Exit immediately if a command exits with a non-zero status.
and the man mysql page shows:
--execute=statement, -e statement
Execute the statement and quit. The default output format is like that produced with --batch. See Section 4.2.3.1, "Using Options on the Command Line", for some examples. With this option, mysql does not use the history file.
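If you want the script to print "failed" and stop (as asked above), a minimal sketch along the same lines, still assuming one complete SQL statement per line in q.sql:
#!/bin/bash
while read -r line; do
    if ! mysql -uuser -ppass DB -e "$line"; then
        echo "failed: $line" >&2        # report the statement that failed
        exit 1                          # stop further execution of the shell script
    fi
done < q.sql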
You can catch the output in a file for further processing:
mysql -uuser -ppass DB < q.sql > mysql.out
If you have a query that produces a lot of output, you can run the output through a pager rather than watching it scroll off the top of your screen:
mysql -uuser -ppass DB < q.sql | more
If you want to get the interactive output format in batch mode, use mysql -t. To echo to the output the statements that are executed, use mysql -v.
https://dev.mysql.com/doc/refman/8.0/en/batch-mode.html
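For example, to combine both and still capture everything in a file for later inspection:
mysql -t -v -uuser -ppass DB < q.sql > mysql.out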
When I run any select or update query in MySQL Workbench, it shows either the number of rows affected or the number of rows returned.
In my script, I use mysql -u user -h ip db -se"select * from.."
I have tried redirecting mysql output:
./script.sh >> script.log 2>&1
but the log shows messages only for errors, not when a query runs successfully.
It does not show "27 row(s) returned", so I cannot check whether an update statement, select, or procedure ran successfully.
How can I get this output for statements that run successfully?
I found a solution in the options of mysql.
Run it like this:
mysql -u USER -h HOST -P PORT --password -vv -se "YOUR QUERY" >output.txt 2>errors.txt
The addition of the -vv parameter will give you the number of affected rows.
-vvv will also tell you how much time it took to run the query.
Example: I ran this:
mysql -u Nic3500 -h localhost -P 3306 --password -vv -se "INSERT INTO stackoverflow.activity (activity_id, activity_name) VALUES ('10', 'testtest');" >output.txt 2>&1
And output.txt is:
--------------
INSERT INTO stackoverflow.activity (activity_id, activity_name) VALUES ('10', 'testtest')
--------------
Query OK, 1 row affected
Bye
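Similarly, a sketch with -vvv (same placeholders as above) also reports how long the statement took:
mysql -u USER -h HOST -P PORT --password -vvv -se "YOUR QUERY" >output.txt 2>&1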
You can do this with a query using INTO OUTFILE, as shown in the MySQL documentation.
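A minimal sketch of that approach (your_table and the output path are placeholders; the file is written by the MySQL server itself, so the path must be writable by the server and permitted by its secure_file_priv setting):
mysql -uuser -ppass DB -e "SELECT * FROM your_table INTO OUTFILE '/tmp/your_table.csv' FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n'"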
I have a Magento database in which I want to search for a particular string/pattern.
But the database is too large, so I cannot export it to an .sql file and then search that file (even an editor like Geany crashes opening files that large).
So how can I search the database for an exact match of a [string/pattern] and display the full text of the matches, using only the command line and the MySQL database credentials?
I tried the command below, but it requires the username to be given as -u[USERNAME], and it doesn't display the full query or result in the terminal window.
mysqldump -p[PASSWORD] [DATABASE] --extended=FALSE | grep [pattern] | less -S
Does anyone have a solution for this?
You can first log in to the MySQL CLI as specified in http://dev.mysql.com/doc/refman/5.7/en/connecting.html
mysql --host=localhost --user=myname --password=mypass mydb
Then you can use a query to find your pattern. If you know the table you want to search, as well as the column, that makes things easy. The SELECT statement looks like this:
SELECT column FROM table WHERE column LIKE '%pattern%';
http://dev.mysql.com/doc/en/select.html
If you don't know the table's name, you can list them all and try to find the right one by its name.
SHOW TABLES;
Edited with better code
You didn't say whether this was a one-off or not, but this will check all tables in a schema for a value.
First, in your home directory, set up a file named .my.cnf with the following contents and change its permissions to 700 (replace [USERNAME] and [PASSWORD] with your username and password):
[client]
user=[USERNAME]
password="[PASSWORD]"
Then execute the following (replacing [DATABASE] and [CHECKSTRING] with your database and the check string):
mysql [DATABASE] --silent -N -e "show tables;"|while read table; do mysql [DATABASE] --silent -N -e "select * from ${table};"|while read line;do if [[ "${line}" == *"[CHECKSTRING]"* ]]; then echo "${table}***${line}";fi;done;done
If checking for 51584 the result would be something like
test_table***551584,'column 2 value','column 3 value'
test_table5***'column 1 value',251584,'column 3 value'
If you want to know which column had the value, select from INFORMATION_SCHEMA.COLUMNS and add another nested loop.
mysql [DATABASE] --silent -N -e "show tables;"|while read table; do mysql [DATABASE] --silent -N -e "select column_name from information_schema.columns where table_schema='[DATABASE]' and table_name = '${table}';"|while read column; do mysql [DATABASE] --silent -N -e "select ${column} from ${table};"|while read line;do if [[ "${line}" == *"[CHECKSTRING]"* ]]; then echo "${table}***${column}***${line}";fi;done;done;done
If checking for 51584 the result would be something like
test_table***column1***551584
test_table5***column2***251584
First of all, you need to log in to the database with the correct username and password using the command below.
sudo mysql -u root -p
Then select the database you want to work in, e.g.:
SHOW DATABASES;
USE Test;
Now your database is ready to work with from the terminal. Here I assume my database name is "Test".
Now, for string/pattern matching, use a command like the one below, or follow the link http://www.mysqltutorial.org/mysql-regular-expression-regexp.aspx:
SELECT
column_list
FROM
table_name
WHERE
string_column REGEXP pattern;
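For instance, a hedged example against Magento's core_config_data table (substitute whatever table, columns, and pattern fit your case):
SELECT config_id, path, value
FROM core_config_data
WHERE value REGEXP 'example\\.com';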
I need to run a monthly bash script via cron that is related to our company's billing system. This is done with two stored procedures. When I run them via the MySQL console and workbench, they work fine.
I've looked at this article and this is basically the way I do it.
I call via cron, a shell script that looks like this:
mysql -h 192.168.1.1 -u<username> -p<password> mydatabase < /path/to/billing_periods.sql
My text file that has the commands in it looks like this:
call sp_start_billing_period();
call sp_bill_clients();
What happens is that the first query runs, but the second one, on the second line, doesn't.
I can make a stored procedure that wraps these two, but I was just hoping to learn why this is happening... Perhaps it's a mistake I made or a limitation in the way this works.
I also considered doing this (two calls to the MySQL shell):
mysql -h 192.168.1.1 -u<username> -p<password> mydatabase -e "call sp_start_billing_period();"
mysql -h 192.168.1.1 -u<username> -p<password> mydatabase -e "call sp_bill_clients();"
You could try separating each statement with a semicolon.
mysql -h 192.168.1.1 -u<username> -p<password> mydatabase -e "call sp_start_billing_period();call sp_bill_clients();"
If you have your statements in a file you can do:
while read LINE; do mysql -u<username> -p<password> mydatabase -e"$LINE";echo "-----------";done < statements.sql
I think you are only allowed to execute a single statement from your input .sql file; see the mysql documentation (man page) for -e statement.
· --execute=statement, -e statement
Execute the statement and quit. The default output format is like that produced with --batch.
The -e is implicit. At least when I run different MySQL queries, I put each in its own script, as you already suggested.
I have set up a cronjob using crontab -e as follows:
12 22 * * * /usr/bin/mysql >> < FILE PATH >
This does not run the mysql command. It only creates a blank file.
The mysqldump command, on the other hand, does run via cron.
What could the problem be?
Surely mysql is the interactive interface into MySQL.
Assuming that you're just running mysql and appending the output to your file with >>, the first time it tries to read from standard input, it will probably get an end-of-file and exit.
Perhaps you might want to think about providing a command for it to process, something like:
12 22 * * * /usr/bin/mysql
-u me
-pnever_you_mind
-e "select * from my_table"
-D my_database
>>/home/me/output_file
(split across multiple lines for readability, but should be on one line).
As an aside, that's not overly secure since your password may be visible from ps while the process is running. Since it's only an example, I'm not too worried, but you should consider storing the password in a properly secured my.cnf file if you go down this path.
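A minimal sketch of that safer route, assuming a ~/.my.cnf that only the cron user can read (chmod 600), so no password appears in the crontab or in ps:
[client]
user=me
password="never_you_mind"
With that in place, the crontab entry needs no credentials on the command line:
12 22 * * * /usr/bin/mysql -D my_database -e "select * from my_table" >>/home/me/output_file 2>&1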
In terms of running a shell script from cron which in turn executes MySQL commands, that should work as well. One choice is with a here-doc:
/usr/bin/mysql -u me -pnever_you_mind -D my_database <<EOF
select * from my_table;
select * from my_other_table where id = 74;
EOF
12 22 * * * /usr/bin/mysql >> < FILE PATH > 2>&1
Redirect your error messages to the same file so you can debug it.
There is also a good article about how to debug cron jobs:
How to debug a broken cron job