I provide password in envoy script but it prompts to enter password manually - laravel-envoy

With Laravel 5.8 I deploy to a remote server with an Envoy command, passing the password on the command line like this:
envoy run Hostels2Deploy --lardeployer_password=111 --app_version=0.105a
and my Envoy file is:
@setup
$server_login_user = 'lardeployer';
$lardeployer_password = isset($lardeployer_password) ? $lardeployer_password : "Not Defined";
@endsetup
@servers(['dev' => $server_login_user.':'.$lardeployer_password.'@NNN.NN.NNN.N'])
@task('clean_old_releases')
echo "Step # 81";
echo 'The password is: {{ $lardeployer_password }}';
echo 'The $server_login_user is: {{ $server_login_user }}';
echo "Step # 00 app_version ::{{ $app_version }}";
cd {{ $release_number_dir }}
# php artisan envoy:delete-old-versions Hostels2Deployed
@endtask
@macro('Hostels2Deploy', ['on' => 'dev'])
clean_old_releases
@endmacro
With the credentials in the servers block I expected not to have to enter the password manually, but on the command line I still see a password prompt. I echoed the $server_login_user and $lardeployer_password variables and they have valid values.
What is the correct way to do this?

I found a solution using SSH keys: Envoy connects over plain ssh, which does not accept a password embedded in the host string, so key-based authentication is the way to go. In /home/user/.ssh/config on my local machine, add:
Host laravelserver
    IdentityFile ~/.ssh/id_rsa
    HostName NNN.NN.NNN.N
    Port 22
    User lardeployer
and in the Envoy file connect to this server like:
@servers(['dev' => ['laravelserver']])
Also, on the remote machine, lardeployer's public key must be added to the authorized_keys file, and the service restarted:
sudo systemctl restart ssh
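For completeness, the key pair itself can be created and installed with standard OpenSSH tools. This is a sketch; the host and user name are taken from the question and are assumptions:

```shell
# Create a key pair locally if one does not exist yet (no passphrase here;
# add one if your workflow allows it):
mkdir -p ~/.ssh
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/id_rsa

# Install the public key on the server (this appends it to the remote
# ~/.ssh/authorized_keys for lardeployer) -- run once, manually:
# ssh-copy-id lardeployer@NNN.NN.NNN.N

# Verify that key auth works without a password prompt:
# ssh -o BatchMode=yes laravelserver true && echo "key auth OK"
```

After this, Envoy's ssh connection should pick up the key via the Host alias and never prompt.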

Related

How to store credentials in sql configuration file as environment variables

I've got an SQL configuration file that's something like this:
[client]
database = dev
host = my-host.com
user = dev
password = super-secret-password
default-character-set = utf8
Is there any way I can swap out the plaintext host and password with some sort of environment variable, so I don't have to push them to GitHub directly? To deploy, I push to GitHub, build a Docker image of the pushed code, pull it onto an AWS server, and run it.
I'd rather not push the plaintext config file directly, so I was wondering how to get around this.
You can use GitHub Secrets to store sensitive data for your projects.
Read more about it here: Creating encrypted secrets for a repository.
Create an env variable using a GitHub Action:
steps:
  - name: Execute script
    env:
      PASSWORD: ${{ secrets.SCRIPT_CREDENTIALS }}
    run: # your script to connect to the database here
For example, with a PHP script you can follow this method:
<?php
$servername = "localhost";
$username = "username";
$password = getenv("PASSWORD");

$conn = new mysqli($servername, $username, $password);
if ($conn->connect_error) {
    die("Connection failed: " . $conn->connect_error);
}
echo "Connected successfully";
?>
To make a change to a .cfg file you can also use a GitHub Action like this:
steps:
  - name: Edit your config file
    env:
      PASSWORD: ${{ secrets.SCRIPT_CREDENTIALS }}
    run: echo "password = ${{ secrets.SCRIPT_CREDENTIALS }}" >> file.cfg
Update on this for anyone using Django and having a similar issue; I was able to figure it out like this.
Before, my database connection settings were set up like this:
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.mysql",
        "OPTIONS": {
            "read_default_file": "local.cnf",
        },
    }
}
Rather than doing this, it's easier to do something like:
import os

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.mysql",
        "NAME": "dev",
        "USER": "dev",
        "PASSWORD": os.environ["DEV_PASS"],
        "HOST": os.environ["DEV_HOST"],
    }
}
so then you can specify your environment variables as usual.
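Since the deploy path in the question goes through a Docker image, the variables can be injected at `docker run` time instead of being baked into the image. A sketch; DEV_PASS/DEV_HOST are the names used above, and the image name is made up:

```shell
# On the AWS host, pass the secrets into the container at run time:
# docker run -e DEV_PASS="$DEV_PASS" -e DEV_HOST="$DEV_HOST" my-app-image

# Inside the container the application sees them as ordinary env vars:
export DEV_HOST="my-host.com"
printf 'connecting to host=%s\n' "$DEV_HOST"   # prints: connecting to host=my-host.com
```

This way the image stays free of secrets and the same image works across environments.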

How to execute mysql script insertion on terraform user_data?

The last line of the script was not executed.
I tried to execute the code manually on the instance created and it was successful.
#!/bin/bash
#install tools
apt-get update -y
apt-get install mysql-client -y
#Create MySQL config file
echo "[mysql]" >> ~/.my.cnf
echo "user = poc5admin" >> ~/.my.cnf
echo "password = poc5password" >> ~/.my.cnf
#test
echo "endpoint = ${rds_endpoint}" >> ~/variables
hostip=$(hostname -I)
endpoint=${rds_endpoint}
echo "$hostip" >> ~/variables
#I have created a table here but I will remove the code since it is unnecessary...
#Create User
echo "CREATE USER 'poc5user'@'%' IDENTIFIED BY 'poc5pass';" >> ~/mysqlscript.sql
echo "GRANT EVENT ON * . * TO 'poc5user'@'%';" >> ~/mysqlscript.sql
cp mysqlscript.sql /home/ubuntu/mysqlscript.sql
mysql -h $endpoint -u poc5admin < ~/mysqlscript.sql
Expected result: There should be a Database, Table and User created on the RDS instance.
You can create the database from a bash script like this, but it is not a recommended approach for working with RDS. It is better to place your data in S3 and import it from there.
Here is an example that creates the DB:
resource "aws_db_instance" "db" {
  allocated_storage    = 20
  storage_type         = "gp2"
  engine               = "mysql"
  engine_version       = "5.7"
  instance_class       = "db.t2.micro"
  name                 = "mydb"
  username             = "foo"
  password             = "foobarbaz"
  parameter_group_name = "default.mysql5.7"

  s3_import {
    source_engine         = "mysql"
    source_engine_version = "5.6"
    bucket_name           = "mybucket"
    bucket_prefix         = "backups"
    ingestion_role        = "arn:aws:iam::1234567890:role/role-xtrabackup-rds-restore"
  }
}
Why do you need ~/.my.cnf? It is better to place these scripts in S3.
Second, if you still want to run the import from your local environment, you can do it with local-exec:
resource "null_resource" "main_db_update_table" {
  provisioner "local-exec" {
    on_failure  = "fail"
    interpreter = ["/bin/bash", "-c"]
    command     = <<-EOT
      mysql -h ${aws_rds_cluster.db.endpoint} -u your_username -pyour_password your_db < mysql_script.sql
    EOT
  }
}
But it is better to go with S3. If you want to run the import from the remote host, you can explore remote-exec.
With user-data you can do this too, but it seems your MySQL script is not being generated properly; it is better to copy the script to the remote host and then run it there.
There is no such thing as Terraform executing "user_data". User data is a bootstrap script for EC2 instances which you can use to install software/binaries or to run your script at boot time.
The script is executed by cloud-init, not by Terraform itself. Terraform's responsibility is only to set the user data on the EC2 instances.
You may check the cloud-init output logs, which should contain the result of your user-data script.
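On an Ubuntu instance the user-data output (including any errors from the mysql commands) usually lands in /var/log/cloud-init-output.log. A small guarded sketch for checking it:

```shell
# Inspect the tail of the cloud-init output log if it exists on this machine:
LOG=/var/log/cloud-init-output.log
if [ -f "$LOG" ]; then
  tail -n 50 "$LOG"
else
  echo "no cloud-init log found at $LOG"
fi
```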
From your code I am not able to tell at which step you copied the file below:
cp mysqlscript.sql /home/ubuntu/mysqlscript.sql
mysql -h $endpoint -u poc5admin < ~/mysqlscript.sql
I am assuming that you are creating a new server and it does not contain this file.
Thank you for your inputs. I found the answer: move the config file to /etc/mysql/my.cnf and then execute
mysql -h $endpoint -u poc5admin < ~/mysqlscript.sql
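A related hygiene note: instead of a world-readable ~/.my.cnf, the client credentials can be passed explicitly with --defaults-extra-file (which must be the first option on the command line). A sketch using the credentials from the question; the actual mysql invocation is left commented because it needs the live RDS endpoint:

```shell
# Write a client config with restricted permissions:
cat > /tmp/mysql-client.cnf <<'EOF'
[client]
user = poc5admin
password = poc5password
EOF
chmod 600 /tmp/mysql-client.cnf

# Then run the script against the endpoint without credentials on the CLI
# (--defaults-extra-file must come before the other options):
# mysql --defaults-extra-file=/tmp/mysql-client.cnf -h "$endpoint" < ~/mysqlscript.sql
```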

How to access phpmyadmin cloud9 cakephp3?

Hi people, I have a problem developing in Cloud9. I followed the steps to configure MySQL and phpMyAdmin. I run the app with the following line: bin/cake server -H 0.0.0.0 -p 8080. The app runs fine, but when I try to access phpMyAdmin (https://james-mand-cortana.c9users.io/phpmyadmin/) it shows an error: Error: PhpmyadminController could not be found.
When I run the app through the index.php file instead (without bin/cake server -H 0.0.0.0 -p 8080), accessing phpMyAdmin works fine.
So basically, I want to run my application with bin/cake server -H 0.0.0.0 -p 8080 and still access phpMyAdmin without any problem.
Thanks for the help.
Here is an excerpt from the index.php:
<?php
if (php_sapi_name() === 'cli-server') {
    $_SERVER['PHP_SELF'] = '/' . basename(__FILE__);
    $url = parse_url(urldecode($_SERVER['REQUEST_URI']));
    $file = __DIR__ . $url['path'];
    if (strpos($url['path'], '..') === false && strpos($url['path'], '.') !== false && is_file($file)) {
        return false;
    }
}
require dirname(__DIR__) . '/vendor/autoload.php';

use App\Application;
use Cake\Http\Server;

$server = new Server(new Application(dirname(__DIR__) . '/config'));
$server->emit($server->run());
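The underlying issue is that bin/cake server runs PHP's built-in web server with CakePHP's front controller handling every URL, so /phpmyadmin is treated as a controller route rather than a directory. One workaround is to serve phpMyAdmin from a second built-in PHP server on another port. A sketch; the phpMyAdmin install path is an assumption:

```shell
# Serve phpMyAdmin separately, e.g. on port 8081, if it is installed:
PMA_DIR=/usr/share/phpmyadmin   # assumed install location
if [ -d "$PMA_DIR" ]; then
  php -S 0.0.0.0:8081 -t "$PMA_DIR"
else
  echo "phpMyAdmin not found at $PMA_DIR"
fi
```

Then the CakePHP app stays on port 8080 and phpMyAdmin is reachable on 8081, with no routing conflict.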

inno setup giving error code 2 with mysql setup

I am executing this line in Inno Setup, but I am getting exit code 2.
; Setting root password, default root (blank). e.g.: mypass4u#
Filename: "{app}\mysql\bin\mysqladmin.exe"; \
Parameters: "-u root -e ""update mysql.user set password=PASSWORD('mypass4u#') where user='root';"""; \
StatusMsg: "Setting password root"; \
Flags: runhidden;
I get the following message in the debug window:
[11:56:54.387] -- Run entry --
[11:56:54.392] Run as: Current user
[11:56:54.396] Type: Exec
[11:56:54.400] Filename: C:\Program Files (x86)\Company\Myapp\mysql\bin\mysqladmin.exe
[11:56:54.405] Parameters: -u root -e "update mysql.user set password=PASSWORD('mypass4u#') where user='root';"
[11:56:54.758] Process exit code: 2
What could be causing this error?
I assume you wanted to use mysql.exe, not mysqladmin.exe.
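A sketch of the entry rewritten for mysql.exe, assuming it sits in the same bin directory as mysqladmin.exe; as far as I know, mysqladmin only accepts its own subcommands and has no -e option for arbitrary SQL, which would explain the non-zero exit code:

```
Filename: "{app}\mysql\bin\mysql.exe"; \
    Parameters: "-u root -e ""update mysql.user set password=PASSWORD('mypass4u#') where user='root';"""; \
    StatusMsg: "Setting root password"; \
    Flags: runhidden;
```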

Bash - Break up returned value from MySQL query

I am trying to break up a returned value from a MySQL call in a shell script. Essentially what I have done so far is query the database for the IP addresses I have stored in a specific table, and store the returned value in a bash variable. The code is below:
#!/bin/bash
# This file will be used to obtain the system details for given ip address
retrieve_details()
{
# get all the ip addresses in the hosts table to pass to snmp call
host_ips=$(mysql -u user -ppassword -D honours_project -e "SELECT host_ip FROM hosts" -s)
echo "$host_ips"
# break up the returned host ip values
# loop through array of ip addresses
# while [ ]
# do
# pass ip values to snmp command call
# store details into mysql table host_descriptions
# done
}
retrieve_details
So this returns the following:
192.168.1.1
192.168.1.100
192.168.1.101
These are essentially the values I have in my hosts table. So what I am trying to do is break up each value such that I can get an array that looks like the following:
arr[0]=192.168.1.1
arr[1]=192.168.1.100
arr[2]=192.168.1.101
...
I have reviewed this link: bash script - select from database into variable, but I don't believe it applies to my situation. Any help would be appreciated.
host_ips=($(mysql -u user -ppassword -D honours_project -e "SELECT host_ip FROM hosts" -s))
The outer () converts the result into an array, but you need to change your IFS (Internal Field Separator) to a newline first:
IFS=$'\n'
host_ips=($(mysql -u user -ppassword -D honours_project -e "SELECT host_ip FROM hosts" -s))
unset IFS
for i in "${host_ips[@]}"; do echo "$i"; done
To print with keys:
for i in "${!host_ips[@]}"
do
    echo "key:" $i "value:" "${host_ips[$i]}"
done
wspace@lw:~$ echo $host_ips
192.168.1.1 192.168.1.100 192.168.1.101
wspace@lw:~$ arr=($(echo $host_ips))
wspace@lw:~$ echo ${arr[0]}
192.168.1.1
wspace@lw:~$ echo ${arr[1]}
192.168.1.100
wspace@lw:~$ echo ${arr[2]}
192.168.1.101
wspace@lw:~$ echo ${arr[@]}
192.168.1.1 192.168.1.100 192.168.1.101
wspace@lw:~$
Maybe this is what you want.
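A variant worth mentioning, since it avoids touching IFS entirely: on bash 4+, mapfile (also spelled readarray) reads each line of command output into one array element. The mysql call is simulated here with printf so the sketch is self-contained:

```shell
# In the real script, the printf below would be the mysql ... -s command:
mapfile -t host_ips < <(printf '192.168.1.1\n192.168.1.100\n192.168.1.101\n')

echo "${#host_ips[@]}"    # prints: 3
echo "${host_ips[1]}"     # prints: 192.168.1.100
```

Unlike the IFS approach, mapfile also copes safely with fields containing spaces or glob characters.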