Looping through databases in a PostgreSQL function

I'd like to loop through a list of PostgreSQL databases and run some queries on them from within a PostgreSQL function. Here's a code example...
CREATE OR REPLACE FUNCTION my_function()
RETURNS VOID AS
$$
DECLARE
db VARCHAR(50); -- this declaration is where the confusion is
BEGIN
FOR db IN
SELECT datname FROM pg_catalog.pg_database WHERE datname ~ '^mydbname_'
LOOP
-- this is just an example
SELECT * FROM db;
END LOOP;
END;
$$
LANGUAGE 'plpgsql';
I'm aware that I can use PostgreSQL's EXECUTE to evaluate the queries as a string (e.g., EXECUTE 'SELECT * FROM ' || db || ';';), but my queries are rather long and complex.
Is there a way to do this in PostgreSQL? Is there a "database" declaration type?

You can't use a variable as an object-name (database, table, column) in a query directly. You'll have to use EXECUTE.
This isn't going to work anyway because you can't do cross-database queries. Either do this from the client or look at using dblink. There is an implementation of SQL/MED (Foreign Data Wrappers) but oddly I don't think there is a PostgreSQL wrapper yet.
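A minimal sketch of combining both suggestions (dynamic SQL plus the dblink extension); this assumes dblink is installed, and the connection string and inner query ('some_table') are placeholders, not from the original question:
CREATE OR REPLACE FUNCTION my_function()
RETURNS void AS
$$
DECLARE
db text;
row_count bigint;
BEGIN
FOR db IN
SELECT datname FROM pg_catalog.pg_database WHERE datname ~ '^mydbname_'
LOOP
-- dblink opens a separate connection to each database and runs the query there;
-- the query is passed as a string, so it can be built dynamically as well
SELECT t.n INTO row_count
FROM dblink('dbname=' || db, 'SELECT count(*) FROM some_table') AS t(n bigint);
RAISE NOTICE 'database %: % rows', db, row_count;
END LOOP;
END;
$$
LANGUAGE plpgsql;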

I wasn't able to do it with a PostgreSQL function. If it helps, here is a very simple bash script to iterate over all databases:
#!/bin/bash
all="SELECT datname FROM pg_database WHERE datistemplate = false and datname != 'postgres'"
psql -h host -U user postgres --no-align -t -c "${all}" | while read -a bd ; do
echo "Processing ${bd}..."
psql -h host -U user "${bd}" -c "SELECT current_database()"
# psql -h host -U user "${bd}" -f fix.sql
done

Isn't the db name in pg_database of type name?
Try DECLARE db_name NAME;

Unix: Passing Param to MYSQL files from BASH Shell Script

I want to pass some variables to a MySQL .sql file from a Bash shell script.
Here is my shell script.
#!/bin/bash
echo $0 Started at $(date)
mysql -uroot -p123xyzblabla MyMYSQLDBName<mysqlfile.sql PARAM_TABLE_NAME
Please note that it is MYSQL and not SQLPLUS
In my mysqlfile.sql, I want to read and use the passed parameter/argument (PARAM_TABLE_NAME):
select count(*) from PARAM_TABLE_NAME
Question 1: What is the correct syntax to pass variable(PARAM_TABLE_NAME) to sql file (mysqlfile.sql)?
Question 2: How can I print variable(PARAM_TABLE_NAME) in sql file (mysqlfile.sql)?
Basically, I want to make a generic SQL script which can load or select data from tables based on received inputs.
Thanks
There is no such thing as passing a parameter to a SQL file. A SQL file is no more than a text file that contains a list of SQL statements. These statements are interpreted by the mysql client program exactly as if you typed them on your keyboard.
The mysql client does not provide the feature you are looking for.
But I can think of a few tricks to achieve a similar effect:
create/populate a configuration table prior to reading your SQL file, then write your SQL file so that it takes this table's contents into account:
bash> mysql -e "INSERT INTO config_table VALUES(1, 2, 3)"
bash> mysql < script.sql
prepend your SQL file with some variable declarations, then use these variables in the rest of your script:
bash> (echo "SET @var=123;" ; cat script.sql) | mysql
[example script.sql]
SELECT * FROM mytable WHERE id = @var;
write your SQL file with some placeholders that you replace on the fly, e.g. with sed:
bash> sed "s/__VAR_A__/mytable/g" script.sql |mysql
[example script.sql]
SELECT * FROM __VAR_A__ WHERE id = 123;
All the above is quite dirty. A much cleaner solution would involve stored procedures or functions. Then you would just pass your parameters as procedure parameters:
bash> PARAM1='foo'; PARAM2='bar'
bash> mysql -e "CALL MyProc($PARAM1);"
bash> mysql -e "SELECT MyFunc($PARAM2);"
Note: it is not possible to parametrize a table name in SQL, so you will need to resort to dynamic SQL in all cases (except for the sed-based hack, which I do not recommend).
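A minimal sketch of that dynamic-SQL approach (the procedure name count_rows and the employee table are illustrative, not from the original question):
DELIMITER $$
CREATE PROCEDURE count_rows(IN tbl VARCHAR(64))
BEGIN
-- the table name is spliced into the statement text, then prepared and executed
SET @sql = CONCAT('SELECT COUNT(*) FROM `', tbl, '`');
PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;
END$$
DELIMITER ;
Then from the shell:
bash> mysql -e "CALL count_rows('employee');"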
This is an old thread but I think it may still be useful to some people. Something like this should work:
mysql -uroot -p123xyzblabla MyMYSQLDBName -e "set @testVar='customer_name'; source mysqlfile.sql;"
Now @testVar (customer_name) is available for you to use in mysqlfile.sql.
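For illustration, mysqlfile.sql could then use the variable like this (the customers table and name column are made-up placeholders):
-- mysqlfile.sql
SELECT * FROM customers WHERE name = @testVar;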
HTH
The way to pass parameters has already been answered in this or other threads. However, specific to the sample in your question, I'd like to add that you can't use the variable declaration method as a placeholder for a table name, as the documentation says:
User variables are intended to provide data values. They cannot be used directly in an SQL statement as an identifier or as part of an identifier, such as in contexts where a table or database name is expected
If you want to use a table name parameter, you can still use sed, or stored procedures or functions, as answered by @RandomSeed.
In addition to that, another way is using PREPARE and EXECUTE in your script. The following example allows you to create a database/schema (if you wanted to use stored procedures instead, you would need to have them already stored in a database):
[myscript.sql]
set @s=CONCAT("CREATE DATABASE ", @dbname);
PREPARE stmt FROM @s;
EXECUTE stmt;
Then use any of the proposed syntaxes in the other answers to set the @dbname variable:
mysql -uroot -p123xyzblabla MyMYSQLDBName -e "set @dbname='mydatabase'; source myscript.sql;"

How can I drop all MySQL Databases with a certain prefix?

I need to drop hundreds of mysql databases that all share a common prefix, but have random id's as the rest of their name ( eg. database_0123456789, database_9876543210 ). All these databases are on the same server. There are other databases on that same server that I don't want to drop.
This is what I'd like to do:
DROP DATABASE `database_*`;
How can I drop these efficiently? Is there a MySQL query I can run? Maybe a shell script?
The syntax of the DROP DATABASE statement supports only a single database name. You will need to execute a separate DROP DATABASE statement for each database.
You can run a query to return a list of database names, or, maybe more helpfully, to generate the actual statements you need to run. If you want to drop all databases that start with the literal string database_ (including the underscore character), then:
SELECT CONCAT('DROP DATABASE `',schema_name,'` ;') AS `-- stmt`
FROM information_schema.schemata
WHERE schema_name LIKE 'database\_%' ESCAPE '\\'
ORDER BY schema_name
Copy the results from that query, and you've got yourself a SQL script.
(Save the results as a plain text file (e.g. dropdbs.sql), review it with your favorite text editor to remove any goofy header and footer lines, make sure the script looks right, save it, and then from the mysql command line tool: mysql> source dropdbs.sql.)
Obviously, you could get more sophisticated than that, but for a one-time shot, this is probably the most efficient.
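For the example database names from the question, the generated dropdbs.sql would contain something like:
DROP DATABASE `database_0123456789` ;
DROP DATABASE `database_9876543210` ;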
No need for an external script file. A stored procedure using prepared statements might do the trick:
CREATE PROCEDURE kill_em_all()
BEGIN
DECLARE done INT DEFAULT FALSE;
DECLARE dbname VARCHAR(255);
DECLARE cur CURSOR FOR SELECT schema_name
FROM information_schema.schemata
WHERE schema_name LIKE 'database\_%' ESCAPE '\\'
ORDER BY schema_name;
DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = TRUE;
OPEN cur;
read_loop: LOOP
FETCH cur INTO dbname;
IF done THEN
LEAVE read_loop;
END IF;
SET @query = CONCAT('DROP DATABASE ',dbname);
PREPARE stmt FROM @query;
EXECUTE stmt;
END LOOP;
END;
Once you have that procedure, you just have to:
CALL kill_em_all();
When done:
DROP PROCEDURE kill_em_all
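Note that if you create the procedure from the mysql command-line client, you need to switch the statement delimiter first so the semicolons inside the body don't terminate the CREATE PROCEDURE early; roughly:
DELIMITER $$
CREATE PROCEDURE kill_em_all()
BEGIN
-- ... body as above ...
END$$
DELIMITER ;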
This question still lacks an answer that doesn't require creating a file first.
Our build server automatically creates a database for every topic branch while running unit tests. After a while, information_schema queries get really slow, which causes our tests to fail.
I created a batch file which runs every day. I did not want to deal with temporary files, so here is my solution.
#ECHO OFF
REM drops all databases excluding defaults
SET user=user
SET pass=pass
mysql ^
-u %user% ^
-p%pass% ^
-NBe "SELECT CONCAT('drop database `', schema_name, '`;') from information_schema.schemata where schema_name NOT IN ('mysql', 'test', 'performance_schema', 'information_schema')" | mysql -u %user% -p%pass%
Modifying spencer7593's answer:
Here is the query to generate the desired DROP statements and save them to a file, where prefix is the database prefix:
SELECT CONCAT('DROP DATABASE ',schema_name,' ;') AS stmt
FROM information_schema.schemata
WHERE schema_name LIKE 'prefix\_%' ESCAPE '\\'
ORDER BY schema_name into outfile '/var/www/pardeep/file.txt';
If you get a permission denied error, change the folder permissions to 777 or change the folder owner to mysql using:
chown -R mysql /var/www/pardeep/
Then run the generated statements:
source /var/www/pardeep/file.txt;

way to search entire database for a string in MySQL

I am working on a MySQL database that is huge (about 120 tables). I am trying to make some sense of it and it will help a great deal if I can search all 120 tables + columns for a string I am looking for.
Is that possible to do on a MySQL DB?
There is one solution, which might not be what you want. If you dumped the table into a file (mysqldump) with the data, then you would be able to grep any information you wanted out of it.
It would remove the need for time consuming search queries, and is the most efficient way I can think of.
This will help you find a string in the entire database. Matching table names are collected into a pre-existing table_names table; note that identifiers cannot be bound as ? parameters, so the per-column query has to be built dynamically:
DELIMITER $$
CREATE PROCEDURE sp_search1(IN searchstring VARCHAR(255))
BEGIN
DECLARE done INT DEFAULT FALSE;
DECLARE v_table TEXT;
DECLARE v_column TEXT;
DECLARE searchcursor CURSOR FOR
SELECT table_name, column_name FROM information_schema.columns
WHERE table_schema = DATABASE()
ORDER BY table_name, ordinal_position;
DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = TRUE;
OPEN searchcursor;
search_loop : LOOP
FETCH searchcursor INTO v_table, v_column;
IF done THEN
LEAVE search_loop;
END IF;
-- table and column names are spliced into the statement text; only the value is bound
SET @sql = CONCAT('SELECT COUNT(*) INTO @cnt FROM `', v_table, '` WHERE `', v_column, '` = ?');
PREPARE stmt2 FROM @sql;
SET @needle = searchstring;
EXECUTE stmt2 USING @needle;
DEALLOCATE PREPARE stmt2;
IF @cnt > 0 THEN
INSERT INTO `table_names`(`table_name`) VALUES (v_table);
END IF;
END LOOP;
CLOSE searchcursor;
END$$
DELIMITER ;
Just wanted to add on to Omnipresent's answer, which is the de facto way to search a db.
Unfortunately, 99% of the time, my db is huge and an average dump has few newlines, meaning grepping for the string I want returns the vast majority of the sql file.
I now prefer to use the --tab switch which makes a tab delimited txt file per table in a db.
This means not only do I get one record per line, but I can quickly get the table my search term is in.
Try this:
mysqldump -u user_name -p database_name --tab=tmp
Where tmp is an empty directory you've created.
An ls of tmp would give you something like this:
users.sql
users.txt
orders.sql
orders.txt
where the sql files contain the create table syntax, and the txt contain the data.
Note that the tab option utilizes mysql's SELECT INTO OUTFILE which means this trick cannot be done anywhere but localhost.
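To see which table a search term lives in once the per-table .txt files exist, something like this works (assuming the tmp directory from above):
grep -l 'STRING_TO_SEARCH_FOR' tmp/*.txt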
On Unix machines, if the database is not too big:
mysqldump -u <username> -p<password> <database_name> --extended-insert=FALSE | grep <string to search> | less -S
You could just iterate over each table:
# -N suppresses the column-name header so only table names are printed
mysql="mysql -uUSER -pPASS -hHOST --protocol=tcp -N dbname -e"
for table in `$mysql "show tables;"`
do
echo $table
$mysql "select * from $table;" | grep STRING_TO_SEARCH_FOR
done

How do I pass a variable to a mysql script?

I know that with mysql you can write SQL statements into a .sql file and run the file from the mysql command line like this:
mysql> source script.sql
How do I pass a variable to the script? For example, if I want to run a script that retrieves all the employees in a department, I want to be able to pass in the number of the department as a variable.
I am not trying to run queries through a shell script. There are simple queries I run from the mysql command line. I'm tired of retyping them all the time, and writing a shell script for them would be overkill.
#!/bin/bash
#verify the passed params
echo 1 cmd arg : $1
echo 2 cmd arg : $2
export db=$1
export tbl=$2
#set the params ... Note the quotes ( needed for non-numeric values )
mysql -uroot -pMySecretPaassword \
-e "set #db='${db}';set #tbl='${tbl}';source run.sql ;" ;
#usage: bash run.sh my_db my_table
#
#eof file: run.sh
-- file: run.sql
SET @query = CONCAT('SELECT * FROM ', @db , '.' , @tbl ) ;
SELECT 'RUNNING THE FOLLOWING query : ' , @query ;
PREPARE stmt FROM @query;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;
-- eof file: run.sql
You can re-use the whole concept from the following project.
Like this:
set @department := 'Engineering';
Then, reference @department wherever you need to in script.sql:
update employee set salary = salary + 10000 where department = @department;
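Putting it together at the mysql prompt (the value is just an example):
mysql> set @department := 'Engineering';
mysql> source script.sql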
You really should be looking at a more appropriate way of doing this. I'm going to guess that you're trying to run MySQL queries via a shell script. You should instead be using something like Perl or PHP.