How to make a batch file write an SQL query containing < to a file without it being treated as a redirect - mysql

I'm looking to create a batch file which, when executed, creates and then runs an SQL file against a MySQL database. The issue I'm having is that the query uses the < symbol, which cmd treats as a redirection operator when it appears in a SET command, so it tries to use half my query as a filename.
here is an extract of my batch file:
set FILE=query.sql
set TEXT=SELECT count(*) FROM calls where CallStart < '2012-01-01';
echo %TEXT% >> "%FILE%"
Can anyone help with how to get a batch file to write the < symbol into a file without the redirection kicking in?

Assuming this is a Windows batch file, try escaping the < character with ^:
set TEXT=SELECT count(*) FROM calls where CallStart ^< '2012-01-01';
or use delayed expansion and quote the string:
SETLOCAL enabledelayedexpansion
set FILE=query.sql
set "TEXT=SELECT count(*) FROM calls where CallStart < '2012-01-01';"
echo !TEXT!>>"%FILE%"
I'm not on a Windows machine, so I can't test this out for you!


PostgreSQL multiple CSV import and add filename to each column

I've got 200k CSV files and I need to import them all into a single PostgreSQL table. It's a list of parameters from various devices; each CSV file's name contains the device's serial number, and I need that serial number in one of the columns for each row.
To simplify, I've got a few columns of data (no headers); let's say the columns in each CSV file are Date, Variable, Value, and the file name looks like SERIALNUMBER_and_someOtherStuffIDontNeed.csv.
I'm trying to use Cygwin to write a bash script that iterates over the files and does it for me, but for some reason it won't work, failing with 'syntax error at or near "as"'.
Here's my code:
#!/bin/bash
FILELIST=/cygdrive/c/devices/files/*
for INPUT_FILE in $FILELIST
do
psql -U postgres -d devices -c "copy devicelist
(
Date,
Variable,
Value,
SN as CURRENT_LOAD_SOURCE(),
)
from '$INPUT_FILE
delimiter ',' ;"
done
I'm learning SQL so it might be an obvious mistake, but I can't see it.
Also, I know that in this form I will get the full file name, not just the serial number bit I want, but I can probably handle that later.
Please advise.
Thanks.
I don't think there is a CURRENT_LOAD_SOURCE() function in Postgres. A workaround is to leave the name column NULL on COPY and patch it to the desired value just after the copy. I prefer a shell here-document because that makes quoting inside the SQL body easier. (BTW: for 10K files, the globbing needed to build FILELIST might exceed ARG_MAX for the shell ...)
#!/bin/bash
FILELIST="`ls /tmp/*.c`"
for INPUT_FILE in $FILELIST
do
echo "File:" $INPUT_FILE
psql -U postgres -d devices <<OMG
-- I have a schema "tmp" for testing purposes
CREATE TABLE IF NOT EXISTS tmp.filelist(name text, content text);
COPY tmp.filelist ( content)
from '$INPUT_FILE' delimiter ',' ;
-- patch the name column for the rows just copied from this file
UPDATE tmp.filelist SET name = '$INPUT_FILE'
WHERE name IS NULL;
OMG
done
For anyone interested in an answer: I used a Python script to change the file names, and then another script using psycopg2 to connect to the database and do everything in one connection. Took 10 minutes instead of 10 hours.
Here's the code:
Renaming the files (apparently, to import from CSV you need all the rows to be filled, and the information I needed was in the first 4 columns anyway, so I generate whole new CSVs instead of just renaming them):
import os
import csv

path = 'C:/devices/files'
os.chdir(path)
i = 0
for file in os.listdir(path):
    try:
        i += 1
        if i % 10000 == 0:
            # just to see the progress
            print(i)
        serial_number = file[:8]
        creader = csv.reader(open(file))
        cwriter = csv.writer(open('processed_' + file, 'w'))
        for cline in creader:
            # keep the first four columns, drop the rest, prepend the serial number
            new_line = [val for col, val in enumerate(cline) if col not in (4, 5, 6, 7)]
            new_line.insert(0, serial_number)
            # print(new_line)
            cwriter.writerow(new_line)
    except:
        print('problem with file: ' + file)
        pass
Updating database:
import os
import psycopg2

path = "C:\\devices\\files"
directory_listing = os.listdir(path)
conn = psycopg2.connect("dbname='devices' user='postgres' host='localhost'")
cursor = conn.cursor()
print(len(directory_listing))
i = 100001
while i < 218792:
    current_file = directory_listing[i]
    i += 1
    full_path = "C:/devices/files/" + current_file
    with open(full_path) as f:
        cursor.copy_from(file=f, table='devicelistlive', sep=",")
conn.commit()
conn.close()
Don't mind the while loop and the odd numbers; I was just doing it in portions for testing purposes. It can easily be replaced with a for loop.

automate csv import in mysql db in linux environment

Is there a way to have a .csv imported into a SQL table automatically in a MySQL DB? I know how to do it manually, but there is a situation where a .csv is exported nightly from PeopleSoft and we want it imported automatically into a SQL table in a Linux environment. Please give me a sample script to do that. If there's a way, can anyone point me in that direction (I'm not a SQL expert)!
You can try creating a stored procedure: write the CSV load query into the SP, then create an event to call the SP. I hope this helps.
CREATE EVENT IF NOT EXISTS `load_csv_event`
ON SCHEDULE EVERY 23 HOUR
DO CALL my_sp_load_csv();
Also, you can directly create an event and write the load query into it.
You could create a crontab job, for example (this one runs every minute):
* * * * * /path/to/load_script.sh
Where load_script.sh might look like this (don't forget to make it executable):
#!/bin/bash
IMPORTED_FILE_PATH=/path/to/your/imported/file.csv
TABLENAME=target_table_name
DATABASE=db_name
TMP_FILENAME=/tmp/${TABLENAME}.csv
# do nothing if imported file does not exist
[ -f "$IMPORTED_FILE_PATH" ] || exit 0
# if temporary file exists, then it means previous import job is running. Also do nothing
[ -f "$TMP_FILENAME" ] && exit 0
# Move it to tmp and rename to target table name
mv "$IMPORTED_FILE_PATH" "$TMP_FILENAME"
mysqlimport --user=mysqlusername --password=mysqlpassword --host=mysqlhost --local $DATABASE $TMP_FILENAME
rm -f "$TMP_FILENAME"
It is just an example (not tested). You should add error handling, logging, etc.
Also, take a look at the manual for mysqlimport.

Can MySQL check that file exists?

I have a table that holds relative paths to real files on the HDD, for example:
SELECT * FROM images -->
id | path
1 | /files/1.jpg
2 | /files/2.jpg
Can I create a query that selects all records pointing to non-existent files? I need the check done by the MySQL server itself, without iterating in a PHP client.
I would go with a query like this:
SELECT id, path, ISNULL(LOAD_FILE(path)) as not_exists
FROM images
HAVING not_exists = 1
The function LOAD_FILE tries to load the file as a string, and returns NULL when it fails.
Note that a failure in this case might simply mean that MySQL cannot read that specific location, even if the file actually exists.
EDIT:
As @ostrokach pointed out in the comments, this isn't standard SQL, even though MySQL allows it. To follow the standard it could be:
SELECT *
FROM images
WHERE LOAD_FILE(PATH) IS NULL
The MySQL LOAD_FILE function has very stringent requirements on the files that it can open. From the MySQL docs:
[LOAD_FILE] Reads the file and returns the file contents as a string. To use this function, the file must be located on the server host, you must specify the full path name to the file, and you must have the FILE privilege. The file must be readable by all and its size less than max_allowed_packet bytes. If the secure_file_priv system variable is set to a non-empty directory name, the file to be loaded must be located in that directory.
So if the file can't be reached by the mysql user, or any of the other requirements are not satisfied, LOAD_FILE will return NULL.
You can get a list of IDs that correspond to missing files using awk:
mysql db_name --batch -s -e "SELECT id, path FROM images" \
| awk '{if(system("[ -e " $2 " ]") == 1) {print $1}}' \
>> missing_ids.txt
or simply using bash:
mysql db_name --batch -s -e "SELECT id, path FROM images" \
| while read id path ; do if [[ ! -e "$path" ]] ; then echo $id ; fi ; done \
>> missing_ids.txt
This also has the advantage of being much faster than LOAD_FILE.
MySQL only handles the database, so there is no way for you to fire an SQL statement that checks the HDD for whether a file exists. You need to iterate over the rows and check them with PHP.
It's not possible using stock MySQL. However, you can write a UDF (user-defined function), probably in C, load it using the CREATE FUNCTION statement, and use it from MySQL as you would any built-in function.

Disable MySQL Foreign Key Constraint (FOREIGN_KEY_CHECKS) from bat file

I need to run around 100 .sql files from a batch file to load data into a lookup table in our application. I need to disable constraints before the loading process starts and enable them again after the process finishes.
My current code is
for /r "%ScriptsPathLookup%" %%f in (*.sql) do (
mysql --host=%Server% --port=%PortNumber% --user=%UserName% --password=%UserPassword% --database=%DB% <%ConstrainPath%\Constrain-disable.sql<%%f)
Here Constrain-disable.sql contains: SET FOREIGN_KEY_CHECKS = 0;
But this is not working. I believe if I go and put 'SET FOREIGN_KEY_CHECKS = 0;' in all the .sql files it will load correctly. This is not the best approach and would be tough to maintain. Can anyone suggest a better solution? Thanks.
Your example code has a double input redirect (<), which isn't going to work. A relatively simple approach, using the same looping mechanism that you have, is to create a temporary file with the disable code at the top and then feed that file to the MySQL command.
set TempFile="%TEMP%\MyTempSql.sql"
for /r "%ScriptsPathLookup%" %%f in (*.sql) do (
type %ConstrainPath%\Constrain-disable.sql >%TempFile%
echo.>>%TempFile%
type %%f >>%TempFile%
mysql --host=%Server% --port=%PortNumber% --user=%UserName% --password=%UserPassword% --database=%DB% <%TempFile%
)
del %TempFile%

Using MySQL in Powershell, how do I pipe the results of my script into a csv file?

In PowerShell, how do I execute my MySQL script so that the results are piped into a CSV file? The result of this script is just a small set of columns that I would like copied into a CSV file.
I can have it go directly to the shell by doing:
mysql> source myscript.sql
And I have tried various little things like:
mysql> source myscript.sql > mysql.out
mysql> source myscript.sql > mysql.csv
in infinite variation, and I just get errors. My DB connection is alright because I can do basic table queries from the command line, etc. I haven't been able to find a solution on the web so far either...
Any help would be really appreciated!
You don't seem to be running PowerShell, but rather the mysql command-line tool (perhaps you started it in a PowerShell console, though).
Note also that the mysql command-line tool cannot export directly to CSV.
However, to redirect the output to a file, just run:
mysql mydb < myscript.sql >mysql.out
or e.g.
echo select * from mytable | mysql mydb >mysql.out
(and whatever arguments to mysql you need, like username, hostname)
Are you looking for SELECT INTO OUTFILE? dev.mysql.com/doc/refman/5.1/en/select.html – Pekka
Yep. Select into outfile worked! But to make sure you get column names you also need to do something like:
-- the first SELECT supplies the header row; the OUTFILE path is an example and must not already exist
SELECT 'a', 'b', 'c'
UNION ALL
SELECT a, b, c
FROM actual
INTO OUTFILE '/tmp/mysql.csv'
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n';