MySQL query to update multiple rows using an input file from Linux - mysql

I'm trying to update multiple rows in a DB using a small script.
I need to update the rows based on some specific user_ids which I have in a list on Linux machine.
#! /bin/bash
mysql -u user -ppassword db -e "update device set in_use=0 where user_id in ()";
As you see above, the user_ids are in a file, let's say /opt/test/user_ids_txt.
How can I import them into this command?

This really depends on the format of user_ids_txt. If we assume it just happens to be in the correct syntax for your SQL IN statement, the following will work:
#! /bin/bash
mysql -u user -ppassword db -e "update device set in_use=0 where user_id in ($(< /opt/test/user_ids_txt))";
The bash interpreter will substitute in the contents of the file. This can be dangerous for SQL queries, so I would echo out the command on the terminal to make sure it is correct before implementing it. You should be able to preview your SQL query by simply running the following on the command line:
echo "update device set in_use=0 where user_id in ($(< /opt/test/user_ids_txt))"
If your file is not in the SQL IN syntax, you will need to edit it (or a copy of it) before running your query. I would recommend something like sed for this.
Example
Let's say your file /opt/test/user_ids_txt is just a list of user_ids in the format:
aaa
bbb
ccc
You can use sed to edit this into the correct SQL syntax:
sed "s/^/'/g; s/$/'/g; 2,\$s/^/,/g" /opt/test/user_ids_txt
The output of this command will be:
'aaa'
,'bbb'
,'ccc'
If you look at this sed command, you will see 3 separate commands separated by semicolons. The individual commands translate to:
1: Add ' to the beginning of every line
2: Add ' to the end of every line
3: Add , to the beginning of every line but the first
Note: If your IDs are strictly numeric, you only need the third command.
This would make your SQL query translate to:
update device set in_use=0 where user_id in ('aaa'
,'bbb'
,'ccc')
Rather than make a temporary file to store this, I would use a bash variable, and simply plug that into the query like this:
#! /bin/bash
in_statement="$(sed 's/^/\'/g; s/$/\'/g; 2,$s/^/,/g' /opt/test/user_ids_txt)"
mysql -u user-ppassword db -e "update device set in_use=0 where user_id in (${in_statement})";
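As a side note, if the IDs are strictly numeric (so no quoting is needed), paste is a simpler alternative to sed for building the comma-separated list. This is just a sketch reusing the same file path and credentials as above:
#! /bin/bash
# Join the ID file into a single comma-separated line, e.g. 101,102,103 (numeric IDs assumed)
in_statement="$(paste -sd, /opt/test/user_ids_txt)"
mysql -u user -ppassword db -e "update device set in_use=0 where user_id in (${in_statement})";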

Related

Detect and delete line break directly out of mysql query

I'm trying to detect and delete a line break in a subject (called m.subject) in mail information retrieved via CONCAT from a MySQL database.
That said, the line break may or may not occur in the subject and therefore must be detected.
My query looks like this:
mysql --default-character-set=utf8 -h $DB_HOST -D $TARGET -u $DB_USER -p$DB_PW -N -r -e "SELECT CONCAT(m.one,';',m.two,';',m.three,';',m.subject,';',m.four,';',m.five,';',(SELECT CONCAT(special_one) FROM special_$SQL_TABLE WHERE msg_id = m.six ORDER BY time DESC LIMIT 1)) FROM mails_$SQL_TABLE m WHERE m.rtime BETWEEN $START AND $END AND m.seven = 1 AND m.eight IN (2);"
I tried to delete it afterwards, but I am running into performance trouble because I already loop over all lines several times. Is there an easy way to detect and cut it directly in the CONCAT buildup? It is crucial for me to retrieve only one line per record after extraction.
Updating/changing the database is not an option for me, as I only want to read the current state.
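One way to handle this directly in the CONCAT, sketched here as a suggestion rather than a tested answer, is to strip CR and LF with nested REPLACE calls so the break never reaches the output. Applied to m.subject (the subquery on special_$SQL_TABLE is omitted for brevity), it would look roughly like:
# Wrap m.subject in REPLACE(REPLACE(..., '\r', ''), '\n', '') inside the CONCAT
mysql --default-character-set=utf8 -h $DB_HOST -D $TARGET -u $DB_USER -p$DB_PW -N -r -e \
  "SELECT CONCAT(m.one,';',m.two,';',m.three,';',REPLACE(REPLACE(m.subject,'\r',''),'\n',''),';',m.four,';',m.five) FROM mails_$SQL_TABLE m WHERE m.rtime BETWEEN $START AND $END AND m.seven = 1 AND m.eight IN (2);"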

How to convert MySQL query output into an array in shell scripting?

I am storing the output of a MySQL query in a variable using shell scripting. The output of the SQL query spans multiple rows. When I check the count of the variable (which I think is an array), it gives 1. My code snippet is as follows:
sessionLogin=`mysql -ugtsdbadmin -pgtsdbadmin -h$MYSQL_HOST -P$MYSQLPORT CMDB -e " select distinct SessionID div 100000 as 'MemberID' from SessionLogin where ClientIPAddr like '10.104%' and LoginTimestamp > 1426291200000000000 order by 1;"`
echo "${#sessionLogin[#]}"
How can I store the MySQL query output in an array in shell scripting?
You can loop over the output from mysql and append to an existing array. For example, in Bash 3.1+, a while loop with process substitution is one way to do it (please replace the mysql parameters with your actual command):
output=()
while read -r output_line; do
    output+=("$output_line")
done < <(mysql -u user -ppass -hhost DB -e "query")
echo "There are ${#output[#]} lines returned"
Also take a look at the always excellent BashFAQ.
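If Bash 4+ is available, mapfile (also known as readarray) is a shorter alternative for the same idea; again, substitute your real mysql command:
# Read each output line into the array as one element
mapfile -t output < <(mysql -u user -ppass -hhost DB -e "query")
echo "There are ${#output[@]} lines returned"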

How do I combine a mysql and linux bash script?

I have to read some columns from MySQL, change a column using a bash script, and then update that column in MySQL.
My MySQL query is like "select description from story".
Then I will iterate over each row of the result set and edit the description with some shell scripting. After editing, I will update that row.
The pseudo code looks like :
select id,description from story
for each description in result set
$orig_description=description
$orig_id= id
apply shell script file script.sh ($edited_description=./script.sh)
update story set description=$edited_description where id=$orig_id
What is the easiest way to accomplish this task, and how?
As per your queries and explanation, a sample script would be:
cmd="mysql -u [user] -p[pass]"
cmdRes=$($cmd -e "select id,description from story")
for val in "$cmdRes";
do
#parse val for id and description
#val1=id
#val2=description
#apply modification logic
$cmd -e "update story set desc=${Val2} where id=${val1}"
done

Can MySQL check that file exists?

I have a table that holds relative paths to real files on the HDD. For example:
SELECT * FROM images -->
id | path
1 | /files/1.jpg
2 | /files/2.jpg
Can I create a query to select all records pointing to non-existent files? I need the check to be done by the MySQL server itself, without iterating in the PHP client.
I would go with a query like this:
SELECT id, path, ISNULL(LOAD_FILE(path)) as not_exists
FROM images
HAVING not_exists = 1
The function LOAD_FILE tries to load the file as a string, and returns NULL when it fails.
Please note that a failure in this case might simply mean that mysql cannot read that specific location, even if the file actually exists.
EDIT:
As @ostrokach pointed out in the comments, this isn't standard SQL, even though MySQL allows it. To follow the standard, it could be:
SELECT *
FROM images
WHERE LOAD_FILE(PATH) IS NULL
The MySQL LOAD_FILE command has very stringent requirements on the files that it can open. From the MySQL docs:
[LOAD_FILE] Reads the file and returns the file contents as a string. To use this function, the file must be located on the server host, you must specify the full path name to the file, and you must have the FILE privilege. The file must be readable by all and its size less than max_allowed_packet bytes. If the secure_file_priv system variable is set to a non-empty directory name, the file to be loaded must be located in that directory.
So if the file can't be reached by the mysql user, or any of the other requirements are not satisfied, LOAD_FILE will return NULL.
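As a quick sanity check before relying on LOAD_FILE, you can inspect the relevant server settings and your privileges from the shell. This is only a diagnostic sketch; adjust the credentials to your setup:
# Shows the directory restriction, the size limit, and whether your account has the FILE privilege
mysql -u user -p -e "SHOW VARIABLES LIKE 'secure_file_priv'; SHOW VARIABLES LIKE 'max_allowed_packet'; SHOW GRANTS;"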
You can get a list of IDs that correspond to missing files using awk:
mysql db_name --batch -s -e "SELECT id, path FROM images" \
| awk '{if(system("[ -e " $2 " ]") == 1) {print $1}}' \
>> missing_ids.txt
or simply using bash:
mysql db_name --batch -s -e "SELECT id, path FROM images" \
| while read -r id path ; do if [[ ! -e "$path" ]] ; then echo "$id" ; fi ; done \
>> missing_ids.txt
This also has the advantage of being much faster than LOAD_FILE.
MySQL only handles the database, so there is no way for you to fire an SQL statement that checks on the HDD whether the file exists. You need to iterate over the rows and check it with PHP.
It's not possible using stock MySQL. However, you can write a UDF (user-defined function), probably in C, load it using the CREATE FUNCTION statement, and use it from MySQL as you would any built-in function.

Using shell script to insert data into remote MYSQL database

I've been trying to get a shell (bash) script to insert a row into a REMOTE database, but I've been having some trouble :(
The script is meant to upload a file to a server, get a URL, hash, and file size, connect to a remote MySQL database, and insert the data into an existing table. I've gotten it working until the remote MySQL database bit.
It looks like this:
#!/bin/bash
zxw=randomtext
description=randomtext2
for file in "$#"
do
echo -n *****
ident= *****
data= ****
size=` ****
hash=`****
mysql --host=randomhost --user=randomuser --password=randompass randomdb
insert into table (field1,field2,field3) values('http://www.example.com/$hash','$file','$size');
echo "done"
done
I'm a total noob at programming so yeah :P
Anyway, I added the \ to escape the brackets as I was getting errors. As it is right now, the script works fine until it connects to the MySQL database. It just connects to the MySQL database and doesn't run the insert command (and I don't even know if the insert command would work in bash).
PS: I've tried both the mysql commands from the command line one by one, and they worked, though I defined the hash/file/size and didn't have the escaping "".
Anyway, what do you guys think? Is what I'm trying to do even possible? If so how?
Any help would be appreciated :)
The insert statement has to be sent to mysql as its input, not written as another line in the shell script, so you need to make it a "here document".
mysql --host=randomhost --user=randomuser --password=randompass randomdb << EOF
insert into table (field1,field2,field3) values('http://www.example.com/$hash','$file','$size');
EOF
The << EOF means take everything before the next line that contains nothing but EOF (no whitespace at the beginning) as standard input to the program.
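Alternatively, as a rough sketch reusing the placeholder names from the question, you can pass the statement directly with -e instead of a here document:
# 'table' and the field names are the question's placeholders; use your real table/column names
mysql --host=randomhost --user=randomuser --password=randompass randomdb \
  -e "insert into table (field1,field2,field3) values('http://www.example.com/$hash','$file','$size');"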
This might not be exactly what you are looking for but it is an option.
If you want to bypass the annoyance of actually including your query in the sh script, you can save the query as a .sql file (sometimes useful when the query is REALLY big and complicated). This can be done with simple file I/O in whatever language you are using.
Then you can simply include something like this in your sh script:
mysql -u youruser -pyourpass -h remoteHost < query.sql &
This is called batch mode execution. Optionally, you can include the ampersand at the end so that line of the sh script does not block.
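For example, a minimal sketch of that batch-mode flow, generating query.sql from the upload script first (yourdb, mytable, and the field names are placeholders):
# Build the statement in query.sql, then feed it to mysql on stdin
printf "insert into yourdb.mytable (field1,field2,field3) values('http://www.example.com/%s','%s','%s');\n" \
  "$hash" "$file" "$size" > query.sql
mysql -u youruser -pyourpass -h remoteHost < query.sql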
Also, if you are concerned about the same data getting entered multiple times and your RDBMS becoming inconsistent, you should explore MySQL transactions (commit, rollback, etc.).
Don't use raw SQL from bash; bash has no sane facility for sanitizing the data beforehand. Generate a CSV file and upload that instead.
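A rough sketch of that CSV route (yourdb, mytable, and the field names are placeholders, and LOAD DATA LOCAL must be allowed by both client and server):
# Append one CSV row per file; this simple format assumes no commas or quotes in the values
printf '%s,%s,%s\n' "http://www.example.com/$hash" "$file" "$size" >> rows.csv
mysql --local-infile=1 -u youruser -pyourpass -h remoteHost yourdb \
  -e "LOAD DATA LOCAL INFILE 'rows.csv' INTO TABLE mytable FIELDS TERMINATED BY ',' (field1,field2,field3);"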