Setting envs or globals in bash script [NOT A duplicate] - mysql

I'm looking for a way (preferably cross-platform) to set something globally accessible from a bash script.
My company uses a bash script to request access credentials for a MySQL database. It prints a username, password, and DB endpoint that I end up copy-pasting into my terminal to connect to our MySQL DB.
I thought I'd amend the script to set environment variables and use them in an alias with the credentials, defined in my bashrc, but it turns out a bash script can't set environment variables in the calling shell: a child process cannot modify its parent's environment.
So I tried to define the mysql alias, with the username, password, and domain pre-filled, in that same script. Same issue: an alias set in a script doesn't survive into the calling shell.
I essentially want to be able to run the script that gives me the credentials and then not have to do any manual copy-pasting.
What I tried (if it gives more context):
#!/bin/bash
# Script gets the credentials
# Script now has username, password, endpoint variables
export MYSQL_USER="$username"
export MYSQL_PASSWORD="$password"
export MYSQL_ENDPOINT="$endpoint"
# Script finishes
and in my bashrc:
alias mysqlenv="mysql -h $MYSQL_ENDPOINT -u $MYSQL_USER -p'$MYSQL_PASSWORD'"
I appreciate this isn't working and might not be the best solution, so I'm open to other options.
PS: Forgot to mention the credentials expire every 24 hours, which is why I want to streamline the process.
PS2: I can't simply source the credential script, because it doesn't just export environment variables; it takes parameters from the CLI, has me log in to my company system in my browser, etc.
PS3: I know putting the MySQL password on the command line is bad practice, but this is a non-issue here, as that password is already printed to the terminal by the credential script (written by someone else in the company).

Since you can already parse the credentials, I'd use your awk code to output shell commands:
getMysqlCredentials() {
    credential_script.sh | awk '
    {parse the output}
    END {
        printf "local MYSQL_USER=\"%s\"\n", username
        printf "local MYSQL_PASSWORD=\"%s\"\n", password
        printf "local MYSQL_ENDPOINT=\"%s\"\n", endpoint
    }
    '
}
Then I'd have a wrapper function around mysql that invokes that function and sources its output:
mysql() {
    source <(getMysqlCredentials)
    command mysql -h "$MYSQL_ENDPOINT" -u "$MYSQL_USER" -p"$MYSQL_PASSWORD" "$@"
}

Related

Google Cloud SQL does not let me download console output. Permission for output not allowed, and mysql -u root does not work

Using: MySQL 5.7
What I want to achieve:
To save console output of Cloud SQL output as a text.
Error:
Cloud SQL returns this:
ERROR 1045 (28000): Access denied for user 'root'@'%' (using password: YES)
Things I tried:
Logging in with no password → it asks for a password anyway, and no password (including the instance's own) works.
Creating various users with passwords → same error.
Creating a Cloud SQL instance with skip-grant-tables, so that no permission is required to modify the tables → Cloud SQL does not support this flag.
I tried setting the flag on the database manually, but Cloud Shell doesn't support root login without a password either.
Possible solution:
If I could run mysql -u root against Cloud SQL with no password, this would work just fine. It seems that no user besides root can even log in to the instance.
Thank you in advance. Any clues / help is appreciated.
I believe the simplest solution is to use the Google Cloud SDK with the following commands.
You export the results of the query in CSV format to a Google Cloud Storage bucket, then copy them from the bucket to your system. After that you'll have to parse the CSV file, which is a standard procedure.
There's a how-to guide here, and you can take a look at a concrete example below:
Have some variables that will be used in multiple commands
INSTANCE_ID=your_cloud_sql_instance_id
BUCKET=gs://your_bucket here
Create a bucket if you don't have one, choosing the location accordingly:
gsutil mb -l EU -p $DEVSHELL_PROJECT_ID $BUCKET
You can read the explanation of the following commands in the documentation, but the bottom line is that you will have a CSV file on your file system at the end. Also make sure to edit the DATABASE variable below, as well as the corresponding query.
gsutil acl ch -u `gcloud sql instances describe $INSTANCE_ID --format json | jq -c ".serviceAccountEmailAddress" | cut -d \" -f 2`:W $BUCKET
DATABASE=db_visit
FILENAME=$DATABASE'_'`date +%Y%m%d%H%M%S`_query.csv
gcloud beta sql export csv $INSTANCE_ID $BUCKET/$FILENAME --database=$DATABASE --query="select * from visit"
gsutil cp $BUCKET/$FILENAME .
To automate the login through the mysql client, make subsequent queries, and capture their output, I encourage you to research a solution along the lines of pexpect.
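Once the export lands on your file system, the "standard procedure" of parsing the CSV can be sketched like this. This is a minimal illustration with a made-up two-row file; it assumes no embedded commas inside quoted fields, so real Cloud SQL exports with free-text columns may need a proper CSV parser.

```shell
#!/bin/bash
# Minimal sketch: extract one column from a simple quoted CSV file.
# The file contents and column layout are invented for illustration.
csv_file=$(mktemp)
cat > "$csv_file" <<'EOF'
"1","alice","2024-01-02"
"2","bob","2024-01-03"
EOF

# Print the second column, stripping the surrounding quotes.
names=$(awk -F',' '{ gsub(/"/, "", $2); print $2 }' "$csv_file")
echo "$names"

rm -f "$csv_file"
```

Running this prints the two names, one per line.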

mysql: passing pwd into sql script

I am going to run the following, but I also need to pass a password into the .sql file. I am hoping I can read the password from some secure file and pass it in:
mysql --login-path=db_server ${db} < "/path/to/sql_script/update.sql"
Although the command-line tool will get really upset, it'll still let you supply a password with the --password=XXXXX argument. You cannot put it in the .sql file.
You can also put the password in a config file in your home directory, such as ~/.mylogin.cnf or ~/.my.cnf.

Can I enter password once for multiple mysql command line invocations, where the queries are not known upfront?

You can avoid re-entering mysql command line password by putting the queries into a file.
In my case, the later queries are not determined until after the first queries have finished.
This happens in a non-interactive script so running a mysql console is not an option.
Is there any notion of a session for mysql command line interactions? Or can I set it up to listen for commands on a local unix socket (the output is required to be returned)? Or something like that?
User @smcjones mentions using the .my.cnf file or mysql_config_editor. Those are good suggestions; I give my +1 vote to him.
Another solution is to put the credentials in any file of your choosing and then specify that file when you invoke MySQL tools:
mysql --defaults-extra-file=my_special.cnf ...other arguments...
And finally, just for completeness, you can use environment variables for some options, like host and password. But strangely, not the user. See http://dev.mysql.com/doc/refman/5.7/en/environment-variables.html
export MYSQL_HOST="mydbserver"
export MYSQL_PWD="Xyzzy"
mysql ...other arguments...
I don't really recommend using an environment variable for the password, since anyone who can run ps on your client host can see the environment variables for the mysql client process.
There are a few ways to handle this in MySQL.
Put the password in a hidden .my.cnf file in the home directory of the user the script runs as:
[client]
user=USER
password=PASSWORD
Use mysql_config_editor
mysql_config_editor set --login-path=client --host=localhost \
    --user=localuser --password
When prompted to enter your password, enter it like you otherwise would.
IMO this is the worst option, but I'll add it for the sake of completeness.
You could always create a function wrapper for mysql that supplies your password:
#! /bin/bash
local_mysql_do_file() {
    mysql -u localuser -h localhost -pPASSWORD_NO_SPACE < "$1"
}
# usage
local_mysql_do_file file.sql

How do i import imdb list files into mysql database on windows?

I am using MySQL and imdbpy (to import the data). I have never used imdbpy before and have no idea how the Python script works. Here is the list of text files, a few of which I am going to import into my database:
ftp://ftp.fu-berlin.de/pub/misc/movies/database/
Here is the link to the imdbpy guide that I am trying to follow:
http://imdbpy.sourceforge.net/docs/README.sqldb.txt
I don't quite understand this part:
Create a database named "imdb" (or whatever you like),
using the tool provided by your database; as an example, for MySQL
you will use the 'mysqladmin' command:
# mysqladmin -p create imdb
For PostgreSQL, you have to use the "createdb" command:
# createdb -W imdb
To create the tables and to populate the database, you must run
the imdbpy2sql.py script:
# imdbpy2sql.py -d /dir/with/plainTextDataFiles/ -u 'URI'
Where the 'URI' argument is a string representing the connection
to your database, with the schema:
scheme://[user[:password]@]host[:port]/database[?parameters]
How do I run that imdbpy2sql.py script?
Assuming you have Python installed, you would run imdbpy2sql.py -d /dir/with/plainTextDataFiles/ -u 'URI' from the command prompt.
Note: the command prompt will need to be 'sitting' in the directory where imdbpy2sql.py exists, or you will need to prefix imdbpy2sql.py with the full directory path.
The script imdbpy2sql.py can be found on their site: https://bitbucket.org/alberanid/imdbpy/src/ed8b3d354c9f1e2de7056d78b21e49f64ee52591/bin/imdbpy2sql.py?at=default
Since the script doesn't seem to depend on any other files, it should be sufficient to copy it into your working directory.

Executing MySQL commands in shell script?

I’m looking to create a deploy script that I can run from a terminal and it automatically deploys my site from a repository. The steps I’ve identified are:
Connect to remote server via SSH
Fetch latest version of site from remote repository
Run any SQL patches
Clean up and exit
I’ve placed the SSH connection and git pull commands in my shell file, but what I’m stuck with is MySQL with it being an (interactive?) shell itself. So in my file I have:
#!/bin/bash
# connect to remote server via SSH
ssh $SSH_USER@$SSH_HOST
# update code via Git
git pull origin $GIT_BRANCH
# connect to the database
mysql --user $MYSQL_USER --password=$MYSQL_PASSWORD --database=$MYSQL_DBNAME
# run any database patches
# disconnect from the database
# TODO
exit 0
As you can see, I’m connecting to the database, but not sure how to then execute any MySQL statements.
At the moment, I have a directory containing SQL patches in numerical order: 1.sql, 2.sql, and so on. Then in my database, I have a table that simply records the last patch that was run. So I'd need to do a SELECT statement, read the last patch that was run, and then apply any necessary patches.
How do I issue the SELECT statement to the mysql prompt in my shell script?
Then what would be the normal flow? Close the connection and re-open it, passing a patch file as the input? Or to run all required patches in one connection?
I assume I'll be checking the last patch file and looping over any patches in between?
Help here would be greatly appreciated.
Assuming you want to do all the business on the remote side:
ssh $SSH_USER@$SSH_HOST << END_SSH
git pull origin $GIT_BRANCH
mysql --user $MYSQL_USER --password=$MYSQL_PASSWORD --database=$MYSQL_DBNAME << END_SQL
<sql statements go here>
END_SQL
END_SSH
You could get the output from mysql using Perl or similar. This could be used to do your control flow.
Put your mysql commands into a file as you would enter them.
Then run it as: mysql -u <user> -p -h <host> < file.sql
You can also pass queries on the mysql command line using -e. Run your SELECT MAX(patch) query that way and read the output in your script.
cat *.sql | mysql --user $MYSQL_USER --password=$MYSQL_PASSWORD --database=$MYSQL_DBNAME
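The numbered-patch flow from the question could be sketched like this. The file-listing part is pure shell and runs as-is; the mysql invocations are shown only as comments, since the table name (`schema_version`) and column names are assumptions, not something from the original setup.

```shell
#!/bin/bash
# List numbered patch files greater than the last applied patch,
# in numeric order (so 10.sql sorts after 3.sql, not before).
pending_patches() {
    local last=$1 dir=$2
    for f in "$dir"/*.sql; do
        local n
        n=$(basename "$f" .sql)
        [ "$n" -gt "$last" ] 2>/dev/null && echo "$n"
    done | sort -n | while read -r n; do
        echo "$dir/$n.sql"
    done
}

# Usage might look like this (table/column names are hypothetical):
#   last=$(mysql -N -B -e 'SELECT MAX(patch) FROM schema_version' mydb)
#   for patch in $(pending_patches "$last" /path/to/patches); do
#       mysql mydb < "$patch"
#   done
```

Sorting numerically matters here: a plain glob expands lexicographically, which would apply 10.sql before 2.sql.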