AWS EB undefined RDS_HOSTNAME with Database hosts array empty - mysql

I'm currently working on a Laravel project using AWS EB with RDS.
When I run
php artisan migrate --seed
then I get
PHP Notice: Undefined index: RDS_HOSTNAME in /var/app/current/config/database.php on line 5
PHP Notice: Undefined index: RDS_USERNAME in /var/app/current/config/database.php on line 6
PHP Notice: Undefined index: RDS_PASSWORD in /var/app/current/config/database.php on line 7
PHP Notice: Undefined index: RDS_DB_NAME in /var/app/current/config/database.php on line 8
and
Database hosts array is empty. (SQL: select * from
information_schema.tables where table_schema = ? and table_name =
migrations and table_type = 'BASE TABLE')
I'm not using a .env file; instead I define these variables as environment properties in the Elastic Beanstalk configuration, and my ./config/database.php file reads them from there.
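For reference, setting them with the EB CLI would look something like this (a sketch with placeholder values; the Environment properties page in the EB console works the same way):
# placeholder values, set via the EB CLI's setenv command
eb setenv RDS_HOSTNAME=xxxxx.rds.amazonaws.com RDS_USERNAME=dbuser RDS_PASSWORD=secret RDS_DB_NAME=mydb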
I also tested changing the variables' prefix in EB to RDS_ instead of DB_, but that didn't solve the problem.

Per the Elastic Beanstalk documentation here:
Note: Environment properties aren't automatically exported to the
shell, even though they are present in the instance. Instead,
environment properties are made available to the application through
the stack that it runs in, based on which platform you're using.
So Elastic Beanstalk is going to pass those environment variables to the Apache HTTP server process, but not the Linux shell where you are running that php command.
Per the documentation here, you need to use the get-config script to pull those environment variable values into your shell.
So you'll need to do this for each variable:
/opt/elasticbeanstalk/bin/get-config environment -k RDS_USERNAME
which will print the value of RDS_USERNAME. Then export it so it can be used by other commands:
export RDS_USERNAME="value"
Do that for all four: RDS_HOSTNAME, RDS_USERNAME, RDS_PASSWORD and RDS_DB_NAME. Then if you run
export
you should see RDS_HOSTNAME, RDS_USERNAME, RDS_PASSWORD and RDS_DB_NAME listed with their respective values.
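A quick way to do all four at once is a small shell loop built on the same get-config call (a sketch, assuming the standard /opt/elasticbeanstalk/bin/get-config path shown above):
# export each RDS_* environment property into the current shell
for key in RDS_HOSTNAME RDS_USERNAME RDS_PASSWORD RDS_DB_NAME; do
  export "$key"="$(/opt/elasticbeanstalk/bin/get-config environment -k "$key")"
done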
Once that's done, when you run
php artisan migrate --seed
it will work as expected.

Environment variable not found: DATABASE_URL. Prisma and mysql

I developed an API with Node.js, Express, Prisma and MySQL, working locally at first. Once it worked, I deployed the API on Heroku and added the ClearDB add-on to have a MySQL DB on Heroku.
The deployment itself is okay: when I go to my root URI I get the "Cannot GET /" message, and when I connect to my ClearDB database with MySQL Workbench I can see my tables, columns, etc.
The main problem comes from Prisma.
When I go to the "Run console" of my Heroku project, the command npx prisma init works perfectly, but when I run npx prisma migrate deploy (or dev), or try npx prisma db push, I get this error:
Error: Get Config: Schema parsing - Error while interacting with query-engine-node-api library
Error code: P1012
error: Environment variable not found: DATABASE_URL.
--> schema.prisma:10
|
9 | provider = "mysql"
10 | url = env("DATABASE_URL")
|
All my code is in a GitHub repo, and I've configured my .env (which is in the root folder of my server) like this:
DATABASE_URL="mysql://<username>:<my-password>@eu-cdbr-west-30.cleardb.net/heroku_36d295ebb6686a2"
NODE_ENV="development"
APP_SECRET="jwtsecret12"
NODE_PATH="./src"
ACCESS_TOKEN_SECRET="651651651848754cdfce9fz8ef4ef54se8f4sef48s69ef84e"
I hope you have all the information that you need to help me :)
PS: Locally my project works perfectly.
Thank you very much in advance!
Your .env file is irrelevant. It should not be used on Heroku (and should not be tracked in your repository).
ClearDB provides an environment variable called CLEARDB_DATABASE_URL, not DATABASE_URL. You can either change your code to use this variable instead of DATABASE_URL, or you can set DATABASE_URL to the same value:
Retrieve your database URL by issuing the following command:
heroku config | grep CLEARDB_DATABASE_URL
CLEARDB_DATABASE_URL => mysql://adffdadf2341:adf4234@us-cdbr-east.cleardb.com/heroku_db?reconnect=true
Copy the value of the CLEARDB_DATABASE_URL config variable.
If you’re using Ruby on Rails and the mysql2 gem, you will need to change the mysql:// scheme in the CLEARDB_DATABASE_URL to mysql2://
heroku config:set DATABASE_URL='mysql://adffdadf2341:adf4234@us-cdbr-east.cleardb.com/heroku_db?reconnect=true'
Adding config vars:
DATABASE_URL => mysql2://adffd...b?reconnect=true
Restarting app... done, v61.
The connection information for Heroku Postgres can change at any time; since the ClearDB documentation provides the preceding guidance, I would hope that ClearDB's does not.
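If you prefer not to paste the value by hand, you can copy it across with the Heroku CLI in one step. A sketch, where <your-app> is a placeholder for your app name:
# copy ClearDB's URL into DATABASE_URL, which Prisma's env("DATABASE_URL") reads
heroku config:set -a <your-app> DATABASE_URL="$(heroku config:get -a <your-app> CLEARDB_DATABASE_URL)"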

Value for 'configPath' when running checkForServerUpgrade on AWS RDS

To prepare my upgrade from MySQL 5.7 to MySQL 8, I want to run the upgrade checker utility. Here's what I did so far:
1. installed mysqlsh on my machine
2. started mysqlsh
3. executed util.checkForServerUpgrade targeting the server that I want to upgrade
Here's the exact command that I used in step 3:
util.checkForServerUpgrade('root@my-remote-host:3306', { "password":"my-password" })
This runs fine but some checks are not executed because I don't provide the configPath parameter. For example, here's a warning that I get:
14) Removed system variables for error logging to the system log configuration
To run this check requires full path to MySQL server configuration file to be specified at 'configPath' key of options dictionary
More information:
https://dev.mysql.com/doc/relnotes/mysql/8.0/en/news-8-0-13.html#mysqld-8-0-13-logging
Does anybody know what value I should provide for the configPath parameter?
I tried to do the same, using the util.checkForServerUpgrade command with configPath defined, without success. I then tried to run the same check directly from outside the mysqlsh shell, and that worked:
mysqlsh -- util check-for-server-upgrade root@localhost --target-version=8.0.13 --output-format=JSON --config-path=/etc/mysql/my.cnf
Note that when I tried to run the following command from within mysqlsh, in a session connected as root@localhost:
util.checkForServerUpgrade({"configPath":"/etc/mysql/my.cnf"})
mysqlsh replied with:
"Util.checkForServerUpgrade: Argument #1: Invalid values in connection options: configPath (ArgumentError)"
Try passing the connection string as well, for example:
util.checkForServerUpgrade('root@localhost', {'configPath': '/etc/my.cnf'})
This worked for me; without the connection string it doesn't.

How to write the app version to the database from my Envoy script?

I deploy my changes with a Laravel 5.8 Envoy command, and I need my Envoy script to write the app version to the database.
For this I created a console command, located in the app/Console/Commands/envoyWriteAppVersion.php file, but I could not find how to pass an additional parameter to my console command. I tried:
php artisan envoy:write-app-version "654"
php artisan envoy:write-app-version 654
php artisan envoy:write-app-version app_version=7.654
But I got the error:
Too many arguments, expected arguments "command".
This task did not complete successfully on one of your servers
What is the valid way?
Thanks!
I found a working approach: in my console command's method I use
$arguments = $this->arguments();
as described here: https://laravel.com/docs/5.8/artisan#command-io
and run it from the console with a space:
php artisan envoy:write-app-version 0.101
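The "Too many arguments" error usually means the command's $signature does not declare the argument; once it does (for example 'envoy:write-app-version {app_version}', a hypothetical name), the Envoy deploy task can call it like this. A sketch; deriving the version from git describe is just an example:
# derive a version string and pass it as the command's single argument
APP_VERSION=$(git describe --tags --always)
php artisan envoy:write-app-version "$APP_VERSION"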

How to set livy.server.session.timeout on EMR cluster bootstrap?

I am creating an EMR cluster and using a Jupyter notebook to run some Spark tasks.
My tasks die after approximately 1 hour of execution, and the error is:
An error was encountered:
Invalid status code '400' from https://xxx.xx.x.xxx:18888/sessions/0/statements/20 with error payload: "requirement failed: Session isn't active."
My understanding is that it is related to the Livy config livy.server.session.timeout, but I don't know how I can set it in the bootstrap of the cluster (I need to do it at bootstrap time because the cluster is created with no SSH access).
Thanks a lot in advance
On EMR, livy-conf is the classification for the properties in Livy's livy.conf file. So when creating an EMR cluster, choose advanced options with Livy selected as an application to install, and pass this EMR configuration in the Enter Configuration field:
[{"Classification": "livy-conf", "Properties": {"livy.server.session.timeout": "5h"}}]
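If you create the cluster with the AWS CLI instead of the console, the same classification can be passed at creation time, which also fits the no-SSH constraint. A sketch, with placeholder name, release label and instance settings:
# pass the livy-conf classification when the cluster is created
aws emr create-cluster \
  --name "my-cluster" \
  --release-label emr-5.30.0 \
  --applications Name=Spark Name=Livy \
  --configurations '[{"Classification":"livy-conf","Properties":{"livy.server.session.timeout":"5h"}}]' \
  --use-default-roles \
  --instance-type m5.xlarge \
  --instance-count 3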
On EMR, Livy binary is located at /etc/livy/, and so the config file is at /etc/livy/conf/livy.conf
To verify this:
1. Create an EMR cluster with a known EC2 key pair, Livy selected, and the above config
2. Using the EC2 key pair, log in to the EC2 master node associated with the cluster: ssh -i some-ec2-key-pair.pem hadoop@ec2-00-00-00-0.ca-region-n.compute.amazonaws.com
3. Navigate to /etc/livy/conf, open livy.conf (e.g. vim livy.conf) and see the updated value of livy.server.session.timeout
If you don't want the Livy session to go down at all, then set the property livy.server.session.timeout-check to false in /etc/livy/conf/livy.conf.
Another way to do it, if you don't want to recreate the cluster, is:
Go to /etc/livy/conf/livy.conf and set the livy.server.session.timeout property to the value you would like.
After that, run sudo restart livy-server so the configuration is applied.
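On the master node, that edit-and-restart sequence could look something like this. A sketch: it assumes the property is not already set in the file, and the restart command depends on the EMR release's init system:
# append the timeout setting and restart Livy
echo "livy.server.session.timeout = 5h" | sudo tee -a /etc/livy/conf/livy.conf
sudo restart livy-server   # on systemd-based releases: sudo systemctl restart livy-server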

MySQL login-path issues with clustercheck script used in xinetd

default: on
# description: mysqlchk
service mysqlchk
{
# this is a config for xinetd, place it in /etc/xinetd.d/
disable = no
flags = REUSE
socket_type = stream
type = UNLISTED
port = 9200
wait = no
user = root
server = /usr/bin/mysqlclustercheck
log_on_failure += USERID
only_from = 0.0.0.0/0
#
# Passing arguments to clustercheck
# <user> <pass> <available_when_donor=0|1> <log_file> <available_when_readonly=0|1> <defaults_extra_file>"
# Recommended: server_args = user pass 1 /var/log/log-file 0 /etc/my.cnf.local"
# Compatibility: server_args = user pass 1 /var/log/log-file 1 /etc/my.cnf.local"
# 55-to-56 upgrade: server_args = user pass 1 /var/log/log-file 0 /etc/my.cnf.extra"
#
# recommended to put the IPs that need
# to connect exclusively (security purposes)
per_source = UNLIMITED
}
It is kind of strange that the script works fine when run manually, but when it runs via /etc/xinetd.d/ it does not work as expected.
In the mysqlclustercheck script, instead of using the --user= and --password= syntax, I am using the --login-path= syntax.
The script runs fine when I run it from the command line, but the xinetd status was showing signal 13. After debugging, I found that even a simple command like this is not working:
mysql_config_editor print --all >>/tmp/test.txt
We don't see any output generated when it is run via xinetd (mysqlclustercheck).
Have you tried the following instead of /usr/bin/mysqlclustercheck?
server = /usr/bin/clustercheck
I am wondering if you could test your binary location with the linux which command.
It's been a long time since this question was asked, but it just came to my attention.
First of all, as mentioned, the Percona cluster check script is called clustercheck, so make sure you are using the correct name and the correct path.
Secondly, since the script runs fine from the command line, it seems to me that the path of the mysql client command is not known to xinetd when it runs the cluster check script.
Since the mysqlclustercheck script, as offered by Percona, uses only the binary name mysql without specifying the absolute path, I suggest you do the following:
Find where mysql client command is located on your system:
ccloud#gal1:~> sudo -i
gal1:~ # which mysql
/usr/local/mysql/bin/mysql
gal1:~ #
Then edit the /usr/bin/mysqlclustercheck script, and in the following line:
MYSQL_CMDLINE="mysql --defaults-extra-file=$DEFAULTS_EXTRA_FILE -nNE --connect-timeout=$TIMEOUT \
replace mysql with the exact path of the mysql client command that you found in the previous step.
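Following the example output above, that line would become something like this (the /usr/local/mysql/bin path is just the one which printed in this example; use whatever path it prints on your system):
# only the binary name changes; the rest of the line, including its trailing backslash, stays as in the original script
MYSQL_CMDLINE="/usr/local/mysql/bin/mysql --defaults-extra-file=$DEFAULTS_EXTRA_FILE -nNE --connect-timeout=$TIMEOUT \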
I also see that you are not passing MySQL connection credentials for connecting to the MySQL server. The mysqlclustercheck script, as offered by Percona, uses a user/password pair in order to connect to the MySQL server.
So normally, you should execute the script in the command line like:
gal1:~ # /usr/sbin/clustercheck haproxy haproxyMySQLpass
HTTP/1.1 200 OK
Content-Type: text/plain
where haproxy/haproxyMySQLpass are the MySQL connection user/password of the HAProxy monitoring user.
Additionally, you should specify them in your script's xinetd settings, like:
server = /usr/bin/mysqlclustercheck
server_args = haproxy haproxyMySQLpass
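After updating the xinetd settings, you can check the whole chain from the shell. A sketch: 127.0.0.1 and port 9200 match the config shown in the question, and the restart command depends on your init system:
# reload xinetd so the new server_args take effect
sudo systemctl restart xinetd   # or: sudo service xinetd restart
# query the health-check port served by xinetd; expect HTTP/1.1 200 OK when the node is healthy
curl -i http://127.0.0.1:9200/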
Last but not least, the signal 13 you are getting is because you are trying to write output from a script run by xinetd. If, for example, in your mysqlclustercheck you add a statement like
echo "debug message"
you are probably going to see the broken pipe signal (13 in POSIX).
Finally, I had issues with this script on SLES 12.3, and I eventually managed to run it not as 'nobody' but as 'root'.
Hope it helps