I want to create a deployment script that somehow emulates Oracle deployment scripts, where with &param you can reuse previously declared parameters.
I need to call this script for different users on different databases automatically.
For example my script should be:
USE &param;
DROP TABLE IF EXISTS `TEST` ;
CREATE TABLE IF NOT EXISTS `TEST` (X INT(16))
etc....
Of course, &param is what I would have used in an Oracle environment.
Thanks
Updates:
Forgot to mention that I am using a Windows environment for now. I have created a batch script to call the MySQL script. The easiest way, I thought, would be to pass two commands to mysql: 1) use the schema I have as a parameter, and 2) call the script, which will create the table regardless of the schema. Unfortunately, mysql seems to understand that I want to connect to schema X, but it doesn't want to call the script.
REM param is the schema and mainsql is the script
SET param="%1"
SET mainsql="script.sql"
echo %param%
echo %mainsql%
mysql -u <user> --password=<psw> %param% "source %mainsql%;"
As far as I know you can't directly pass variables into a MySQL script. The best you can do is set user variables in a wrapper shell script. Something like:
passed_var1=$1
passed_var2=$2
mainsql=script.sql
mysql $(usual_parameters) -e "set @user_var1=$passed_var1; set @user_var2=$passed_var2; source $mainsql"
Adjust for actual use, of course.
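On the SQL side, script.sql can then read those user variables as values; note that user variables cannot stand in for identifiers, so this still won't work for USE or for table names. A minimal sketch, reusing the TEST table from the question:
-- script.sql (sketch): assumes the wrapper above has set @user_var1
DROP TABLE IF EXISTS `TEST`;
CREATE TABLE IF NOT EXISTS `TEST` (X INT(16));
INSERT INTO `TEST` (X) VALUES (@user_var1);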
Related
Is there a way to have a .csv imported into a SQL table automatically in a MySQL db? I know how to do it manually, but there is a situation where a .csv is exported nightly from PeopleSoft, and we want that imported automatically into a SQL table in a Linux environment. Please give me a sample script to do that. If there's a way, can anyone point me in that direction (I'm not a SQL expert)?
You can try creating a stored procedure:
1. Write the CSV load query into the stored procedure.
2. Create an event to call the stored procedure.
I hope this helps.
CREATE EVENT IF NOT EXISTS `load_csv_event`
ON SCHEDULE EVERY 1 DAY
STARTS TIMESTAMP(CURRENT_DATE) + INTERVAL 23 HOUR
DO CALL my_sp_load_csv();
Also, you can directly create an event and write the load query into it.
You could create a crontab job, for example:
* * * * * /path/to/load_script.sh
Where load_script.sh may be like (do not forget make it executable):
#!/bin/bash
IMPORTED_FILE_PATH=/path/to/your/imported/file.csv
TABLENAME=target_table_name
DATABASE=db_name
TMP_FILENAME=/tmp/${TABLENAME}.csv
# do nothing if imported file does not exist
[ -f "$IMPORTED_FILE_PATH" ] || exit 0
# if the temporary file exists, a previous import job is still running; also do nothing
[ -f "$TMP_FILENAME" ] && exit 0
# Move it to tmp and rename to target table name
mv "$IMPORTED_FILE_PATH" "$TMP_FILENAME"
mysqlimport --user=mysqlusername --password=mysqlpassword --host=mysqlhost --local --fields-terminated-by=',' "$DATABASE" "$TMP_FILENAME"
rm -f "$TMP_FILENAME"
It is just an example (not tested). You should add error handling, logging, etc.
Also look at the manual for mysqlimport.
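If mysqlimport proves awkward for the comma-separated format, an equivalent sketch (same hypothetical names as above, also not tested) is to run LOAD DATA LOCAL INFILE through the mysql client from the same script:
# alternative: load the CSV via the mysql client instead of mysqlimport
# (local_infile must be enabled on both the client and the server)
mysql --user=mysqlusername --password=mysqlpassword --host=mysqlhost --local-infile=1 "$DATABASE" \
  -e "LOAD DATA LOCAL INFILE '$TMP_FILENAME' INTO TABLE $TABLENAME FIELDS TERMINATED BY ','"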
Basically I am in the unenviable position of updating our entire system to stop using a certain table and instead use another one. I've already done this for all of our code; now I need to do it for all of our functions and procedures.
I know that I can get a list of the functions / procedures in a database as such:
SELECT * FROM INFORMATION_SCHEMA.ROUTINES
I also know that I can look at the code for an individual function / procedure as such:
SHOW CREATE FUNCTION function_name
SHOW CREATE PROCEDURE procedure_name
However, I don't want to have to look through each function and procedure one by one, as we have over 200 of them.
I'm wondering if there is anything like...
SELECT * FROM INFORMATION_SCHEMA.ROUTINES WHERE code_column_name LIKE '%search_string%'
There doesn't seem to be any column in INFORMATION_SCHEMA.ROUTINES that contains the code, but... is there a way to do this on a different table perhaps?
I would use UNIX grep for that.
If you output the results of SHOW CREATE FUNCTION/PROCEDURE to a flat file on disk, you can then run grep on it. Or pipe it directly through grep.
Here is one way to do it:
echo 'show create function foo' | mysql -h<host> -u<user> -p<pass> <schema> | grep <obsolete-tablename>
or dump the entire database to disk and then grep it.
mysqldump -h<host> -u<user> -p<pass> <schema> --routines > mydump.sql
grep <obsolete-tablename> mydump.sql
Try:
SELECT * FROM mysql.proc WHERE body LIKE '%search_string%'
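If you only need to know which routines to edit, you can narrow the columns; a sketch assuming MySQL 5.x (mysql.proc was removed in MySQL 8.0), where old_table_name stands for the table you are replacing:
SELECT db, name, type
FROM mysql.proc
WHERE body LIKE '%old_table_name%';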
You asked about MySQL but I will just mention that I use this for stored procedures in MS SQL Server:
SELECT object_name(id)
FROM syscomments
WHERE text LIKE '%wibble%'
The INFORMATION_SCHEMA.ROUTINES table (which you mention) also contains similar information, but the text is cut off after 4000 characters, so you find fewer matches; not very helpful.
I'm having a really hard time believing this question has never been asked before; it MUST have been! I'm working on a batch file that needs to run some SQL commands. None of the tutorials explaining this work (referring to this link: Pass parameters to sql script, which someone will undoubtedly mention)! I've tried other posts on this site verbatim and still nothing works.
The way I see it, there are two ways I can approach this:
1. Either figure out how to call my basic MYSQL script and specify a parameter or..
2. Find an equivalent "USE <database>;" command that works in batch
My Batch file so far:
:START
#ECHO off
:Set_User
set usrCode = 0
mysql -u root SET @usrCode = '0'; \. caller.sql
Simply put, I want to pass 'usrCode' to my MYSQL script 'caller.sql' which looks like this:
USE `my_db`;
CALL collect_mismatch(@usrCode);
I know that procedures are a whole other topic to get into, but assume that the procedure is working just fine. I just can't get my parameter from Batch to MYSQL.
Ideally I would like to have the 'USE' & 'CALL' commands in my batch file, but I can't find anything that lets me select a database in batch before CALLing my procedure. That's when I tried the above link, which boasts a simple command-line entry and you're off to the races, but that isn't the case.
Any help would be greatly appreciated.
This will work:
echo SET @usrCode = '0'; > params.sql
type params.sql caller.sql | mysql -u root dbname
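To wire that into a batch file that takes the user code as its first argument, a minimal sketch (my_db is taken from the question's caller.sql):
@ECHO off
REM sketch: build params.sql from the first batch argument, then feed both files to mysql
echo SET @usrCode = '%1'; > params.sql
type params.sql caller.sql | mysql -u root my_db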
How do I do such a thing?
In mysql I do:
SELECT LOAD_FILE('/path/to/file');
What about postgres? Without using the \copy command of psql?
That depends on what you want to do exactly.
You have COPY for reading structured data into (temporary) tables.
Note that this is the SQL command, which is similar, but not the same as the \copy command of psql!
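For example, a minimal server-side COPY sketch (PostgreSQL 9.0+ syntax; the table definition and path are made up, the file is read by the server process, and COPY from a server file requires superuser rights):
CREATE TEMP TABLE t_import (col1 text, col2 integer);
COPY t_import FROM '/path/to/file.csv' WITH (FORMAT csv);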
And there is pg_read_file() for reading in any text file.
Edit - a basic example:
CREATE FUNCTION f_showfile(myfile text)
RETURNS text AS
$x$
BEGIN
RETURN pg_read_file(myfile, 0, 1000000); -- 1 MB max.
-- or you could read into a text var and do stuff with it.
END;
$x$
LANGUAGE plpgsql VOLATILE;
Only superusers can use this function. Be careful not to open security holes. You could create a function with SECURITY DEFINER, REVOKE FROM PUBLIC and GRANT TO selected roles. If security is an issue, read this paragraph at the provided link:
Writing SECURITY DEFINER Functions Safely
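For the REVOKE/GRANT part, a sketch along these lines (the role name app_reader is an assumption):
-- lock the wrapper down so only chosen roles may call it
REVOKE ALL ON FUNCTION f_showfile(text) FROM PUBLIC;
GRANT EXECUTE ON FUNCTION f_showfile(text) TO app_reader;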
With pg_read_file() you can only read from the log file directory and the database directory. On Linux you could create a symlink to a data directory (at a safe location) like this:
cd /path/to/my/database
ln -s /var/lib/postgresql/text_dir/ .
Then call like this:
SELECT f_showfile('text_dir/readme.txt');
Output:
f_showfile
-------------------------------------------------------------------------
This is my text from a file.
I've been trying to get a shell (bash) script to insert a row into a REMOTE database, but I've been having some trouble :(
The script is meant to upload a file to a server, get a URL, HASH, and a file size, connect to a remote mysql database, and insert the data into an existing table. I've gotten it working until the remote MYSQL database bit.
It looks like this:
#!/bin/bash
zxw=randomtext
description=randomtext2
for file in "$@"
do
echo -n *****
ident= *****
data= ****
size=` ****
hash=`****
mysql --host=randomhost --user=randomuser --password=randompass randomdb
insert into table (field1,field2,field3) values('http://www.example.com/$hash','$file','$size');
echo "done"
done
I'm a total noob at programming so yeah :P
Anyway, I added the \ to escape the brackets, as I was getting errors. As it is right now, the script works fine until it connects to the mysql database. It just connects to the mysql database and doesn't run the insert command (and I don't even know if the insert command would work in bash).
PS: I've tried both the mysql commands from the command line one by one, and they worked, though I defined the hash/file/size and didn't have the escaping "".
Anyway, what do you guys think? Is what I'm trying to do even possible? If so how?
Any help would be appreciated :)
The insert statement has to be sent to mysql as its input, not written as another line of the shell script, so you need to make it a "here document".
mysql --host=randomhost --user=randomuser --password=randompass randomdb << EOF
insert into table (field1,field2,field3) values('http://www.site.com/$hash','$file','$size');
EOF
The << EOF means: take everything up to the next line that contains nothing but EOF (with no whitespace at the beginning) and feed it to the program as standard input.
This might not be exactly what you are looking for but it is an option.
If you want to bypass the annoyance of actually including your query in the sh script, you can save the query as a .sql file (useful sometimes when the query is REALLY big and complicated). This can be done with simple file IO in whatever language you are using.
Then you can simply include in your sh script something like:
mysql -u youruser -pyourpass -h remoteHost randomdb < query.sql &
This is called batch mode execution. Optionally, you can include the ampersand at the end to ensure that that line of the sh script does not block.
Also if you are concerned about the same data getting entered multiple times and your rdbms getting inconsistent, you should explore MySql transactions (commit, rollback, etc).
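For the transaction point, query.sql could wrap the statement explicitly; a sketch with made-up table and column names (the literal values would be filled in when the file is generated):
-- query.sql (sketch): make the insert atomic
START TRANSACTION;
INSERT INTO uploads (url, filename, filesize)
VALUES ('http://www.example.com/abc123', 'file.txt', 1024);
COMMIT;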
Don't use raw SQL from bash; bash has no sane facility for sanitizing the data beforehand. Generate a CSV file and upload that instead.
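A sketch of that CSV route, reusing the host and credentials from the question (the table name uploads and its columns are assumptions, values are assumed to contain no commas, and local_infile must be enabled on client and server):
# append one row per uploaded file, then bulk-load the CSV
printf '%s,%s,%s\n' "http://www.example.com/$hash" "$file" "$size" >> /tmp/uploads.csv
mysql --host=randomhost --user=randomuser --password=randompass --local-infile=1 randomdb \
  -e "LOAD DATA LOCAL INFILE '/tmp/uploads.csv' INTO TABLE uploads FIELDS TERMINATED BY ',' (url, filename, filesize)"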