How to write the app version to the database from my Envoy script?

I deploy my changes with a Laravel 5.8 Envoy command, and I need my Envoy script to write the app version to the database.
For this I created a console command, located in the app/Console/Commands/envoyWriteAppVersion.php file,
but I could not find how to pass an additional parameter to my console command. I tried:
php artisan envoy:write-app-version "654"
php artisan envoy:write-app-version 654
php artisan envoy:write-app-version app_version=7.654
But I got the error:
Too many arguments, expected arguments "command".
This task did not complete successfully on one of your servers
What is the valid way?
Thanks!

I found a working solution: in my console command method I use
$arguments = $this->arguments();
as described in https://laravel.com/docs/5.8/artisan#command-io,
and run it from the console with a space-separated argument:
php artisan envoy:write-app-version 0.101
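For this to work, the argument also has to be declared in the command's $signature; without it, artisan rejects any extra input with the "Too many arguments" error above. A minimal sketch (the class name follows the question's file name; the settings table and its columns are assumptions for illustration, not from the original post):

<?php

namespace App\Console\Commands;

use Illuminate\Console\Command;
use Illuminate\Support\Facades\DB;

class envoyWriteAppVersion extends Command
{
    // Declaring {app_version} is what lets "php artisan envoy:write-app-version 0.101"
    // accept a positional argument
    protected $signature = 'envoy:write-app-version {app_version}';

    protected $description = 'Write the deployed app version to the database';

    public function handle()
    {
        $arguments = $this->arguments();

        // Assumed table and columns, purely for illustration
        DB::table('settings')->updateOrInsert(
            ['key' => 'app_version'],
            ['value' => $arguments['app_version']]
        );
    }
}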

Related

Environment variable not found: DATABASE_URL. Prisma and mysql

I first developed an API locally with Node.js, Express, Prisma and MySQL. Once it worked, I deployed my API on Heroku and took the ClearDB add-on to have a MySQL DB on Heroku.
So the deployment is okay: when I go to my root URI I get the "Cannot GET /" message, and when I connect to my ClearDB with MySQL Workbench I see my tables, columns, etc.
The main problem is from Prisma.
When I go to the "Run console" of my Heroku project, the command npx prisma init works perfectly, BUT when I type npx prisma migrate deploy (or dev), or try npx prisma db push, I get this error:
Error: Get Config: Schema parsing - Error while interacting with query-engine-node-api library
Error code: P1012
error: Environment variable not found: DATABASE_URL.
--> schema.prisma:10
|
9 | provider = "mysql"
10 | url = env("DATABASE_URL")
|
All my code is in a GitHub repo. I've configured my .env (which is in the root folder of my server) like this:
DATABASE_URL="mysql://<username>:<my-password>@eu-cdbr-west-30.cleardb.net/heroku_36d295ebb6686a2"
NODE_ENV="development"
APP_SECRET="jwtsecret12"
NODE_PATH="./src"
ACCESS_TOKEN_SECRET="651651651848754cdfce9fz8ef4ef54se8f4sef48s69ef84e"
I hope you have all the information you need to help me :)
PS: Locally my project works perfectly.
Thank you very much in advance!
Your .env file is irrelevant. It should not be used on Heroku (and should not be tracked in your repository).
ClearDB provides an environment variable called CLEARDB_DATABASE_URL, not DATABASE_URL. You can either change your code to use this variable instead of DATABASE_URL, or you can set DATABASE_URL to the same value:
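The first option is a one-line change in the datasource block of schema.prisma (a sketch based on the excerpt in the question; the block name "db" is prisma init's default and an assumption here):

datasource db {
  provider = "mysql"
  url      = env("CLEARDB_DATABASE_URL")
}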
For the second option, retrieve your database URL by issuing the following command:
heroku config | grep CLEARDB_DATABASE_URL
CLEARDB_DATABASE_URL => mysql://adffdadf2341:adf4234@us-cdbr-east.cleardb.com/heroku_db?reconnect=true
Copy the value of the CLEARDB_DATABASE_URL config variable.
If you’re using Ruby on Rails and the mysql2 gem, you will need to change the mysql:// scheme in the CLEARDB_DATABASE_URL to mysql2://
heroku config:set DATABASE_URL='mysql://adffdadf2341:adf4234@us-cdbr-east.cleardb.com/heroku_db?reconnect=true'
Adding config vars:
DATABASE_URL => mysql2://adffd...b?reconnect=true
Restarting app... done, v61.
The connection information for Heroku Postgres can change at any time, but since the ClearDB documentation provides the preceding guidance I would hope that it does not do so.

Value for 'configPath' when running checkForServerUpgrade on AWS RDS

To prepare my upgrade from MySQL 5.7 to MySQL 8, I want to run the upgrade checker utility. Here's what I did so far:
installed mysqlsh on my machine
started mysqlsh
executed util.checkForServerUpgrade targeting the server that I want to upgrade
Here's the exact command that I used in step 3:
util.checkForServerUpgrade('root@my-remote-host:3306', { "password":"my-password" })
This runs fine but some checks are not executed because I don't provide the configPath parameter. For example, here's a warning that I get:
14) Removed system variables for error logging to the system log configuration
To run this check requires full path to MySQL server configuration file to be specified at 'configPath' key of options dictionary
More information:
https://dev.mysql.com/doc/relnotes/mysql/8.0/en/news-8-0-13.html#mysqld-8-0-13-logging
Does anybody know the value I should provide for the configPath parameter?
I tried to do the same with util.checkForServerUpgrade, defining configPath, without success. I then ran the same check from outside the mysqlsh shell, with success:
mysqlsh -- util check-for-server-upgrade root@localhost --target-version=8.0.13 --output-format=JSON --config-path=/etc/mysql/my.cnf
and it worked. Note that when I tried to run the following command from mysqlsh in the root@localhost session:
util.checkForServerUpgrade({"configPath":"/etc/mysql/my.cnf"})
mysqlsh replied with:
"Util.checkForServerUpgrade: Argument #1: Invalid values in connection options: configPath (ArgumentError)"
Try including the connection string as well, for example:
util.checkForServerUpgrade('root@localhost', {'configPath': '/etc/my.cnf'})
This worked for me; without the connection string it doesn't.
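Putting the two observations together for the original remote host, the interactive call would presumably look like this (host, password and config-file path taken from the question; whether that path is readable from where mysqlsh runs is a separate concern):

util.checkForServerUpgrade('root@my-remote-host:3306', {'password': 'my-password', 'configPath': '/etc/mysql/my.cnf'})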

AWS EB undefined RDS_HOSTNAME with Database hosts array empty

Currently in a Laravel project using AWS EB with RDS.
When I run
php artisan migrate --seed
then I get
PHP Notice: Undefined index: RDS_HOSTNAME in /var/app/current/config/database.php on line 5
PHP Notice: Undefined index: RDS_USERNAME in /var/app/current/config/database.php on line 6
PHP Notice: Undefined index: RDS_PASSWORD in /var/app/current/config/database.php on line 7
PHP Notice: Undefined index: RDS_DB_NAME in /var/app/current/config/database.php on line 8
and
Database hosts array is empty. (SQL: select * from information_schema.tables where table_schema = ? and table_name = migrations and table_type = 'BASE TABLE')
I'm not using a .env file but defining these variables in the EB configuration (Environment properties). My ./config/database.php reads them.
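The screenshots from the original post are not reproduced here; this is a sketch of the relevant part of config/database.php, reconstructed from the notices above (the exact layout is an assumption):

'mysql' => [
    'driver'   => 'mysql',
    'host'     => $_SERVER['RDS_HOSTNAME'],   // line 5 in the notices
    'username' => $_SERVER['RDS_USERNAME'],   // line 6
    'password' => $_SERVER['RDS_PASSWORD'],   // line 7
    'database' => $_SERVER['RDS_DB_NAME'],    // line 8
],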
I tested changing the variables' prefix in EB from DB_ to RDS_, but that didn't solve the problem.
Per the Elastic Beanstalk documentation here:
Note: Environment properties aren't automatically exported to the
shell, even though they are present in the instance. Instead,
environment properties are made available to the application through
the stack that it runs in, based on which platform you're using.
So Elastic Beanstalk is going to pass those environment variables to the Apache HTTP server process, but not the Linux shell where you are running that php command.
Per the documentation here, you need to use the get-config script to pull those environment variable values into your shell.
So you'll need to do this for each variable:
/opt/elasticbeanstalk/bin/get-config environment -k RDS_USERNAME
which will print the value of RDS_USERNAME. Then export it so it can be used by other commands:
export RDS_USERNAME="value"
Do that for all four: RDS_HOSTNAME, RDS_USERNAME, RDS_PASSWORD and RDS_DB_NAME. Then if you run
export
you should see RDS_HOSTNAME, RDS_USERNAME, RDS_PASSWORD and RDS_DB_NAME, each with its respective value.
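A sketch that does all four exports in one go, using command substitution (the get-config path is the one from the documentation above):

export RDS_HOSTNAME="$(/opt/elasticbeanstalk/bin/get-config environment -k RDS_HOSTNAME)"
export RDS_USERNAME="$(/opt/elasticbeanstalk/bin/get-config environment -k RDS_USERNAME)"
export RDS_PASSWORD="$(/opt/elasticbeanstalk/bin/get-config environment -k RDS_PASSWORD)"
export RDS_DB_NAME="$(/opt/elasticbeanstalk/bin/get-config environment -k RDS_DB_NAME)"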
Once that's done and you run
php artisan migrate --seed
the migration will run as expected.

MySQL login-path issues with clustercheck script used in xinetd

default: on
# description: mysqlchk
service mysqlchk
{
# this is a config for xinetd, place it in /etc/xinetd.d/
disable = no
flags = REUSE
socket_type = stream
type = UNLISTED
port = 9200
wait = no
user = root
server = /usr/bin/mysqlclustercheck
log_on_failure += USERID
only_from = 0.0.0.0/0
#
# Passing arguments to clustercheck
# <user> <pass> <available_when_donor=0|1> <log_file> <available_when_readonly=0|1> <defaults_extra_file>"
# Recommended: server_args = user pass 1 /var/log/log-file 0 /etc/my.cnf.local"
# Compatibility: server_args = user pass 1 /var/log/log-file 1 /etc/my.cnf.local"
# 55-to-56 upgrade: server_args = user pass 1 /var/log/log-file 0 /etc/my.cnf.extra"
#
# recommended to put the IPs that need
# to connect exclusively (security purposes)
per_source = UNLIMITED
}
It is kind of strange that the script works fine when run manually, but when it runs via /etc/xinetd.d/ it does not work as expected.
In the mysqlclustercheck script, instead of the --user= and --password= syntax, I am using the --login-path= syntax.
The script runs fine when I run it from the command line, but the xinetd status shows signal 13. After debugging, I found that even a simple command like this does not work:
mysql_config_editor print --all >>/tmp/test.txt
No output is generated when it is run via xinetd (mysqlclustercheck).
Have you tried the following instead of /usr/bin/mysqlclustercheck?
server = /usr/bin/clustercheck
I am wondering if you could verify the binary's location with the Linux which command.
A long time has passed since this question was asked, but it just came to my attention.
First of all, as mentioned, the Percona cluster check script is called clustercheck, so make sure you are using the correct name and correct path.
Secondly, since the script runs fine from the command line, it seems to me that the path of the mysql client command is not known to xinetd when it runs the cluster check script.
Since the mysqlclustercheck script as offered by Percona uses only the binary name mysql, without specifying the absolute path, I suggest you do the following:
Find where the mysql client command is located on your system:
ccloud@gal1:~> sudo -i
gal1:~ # which mysql
/usr/local/mysql/bin/mysql
gal1:~ #
then edit the script /usr/bin/mysqlclustercheck, and in the following line:
MYSQL_CMDLINE="mysql --defaults-extra-file=$DEFAULTS_EXTRA_FILE -nNE --connect-timeout=$TIMEOUT \
put the exact path of the mysql client command you found in the previous step.
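With the location found above, the edited line would read like this (a sketch; the rest of the line stays unchanged):

MYSQL_CMDLINE="/usr/local/mysql/bin/mysql --defaults-extra-file=$DEFAULTS_EXTRA_FILE -nNE --connect-timeout=$TIMEOUT \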
I also see that you are not passing MySQL connection credentials. The mysqlclustercheck script as offered by Percona uses a user/password pair to connect to the MySQL server.
So normally, you should execute the script in the command line like:
gal1:~ # /usr/sbin/clustercheck haproxy haproxyMySQLpass
HTTP/1.1 200 OK
Content-Type: text/plain
where haproxy/haproxyMySQLpass are the user/password of the HAProxy monitoring user in MySQL.
Additionally, you should pass them in the script's xinetd settings, like:
server = /usr/bin/mysqlclustercheck
server_args = haproxy haproxyMySQLpass
Last but not least, the signal 13 you are getting occurs because you try to write output in a script run by xinetd. If, for example, you add a statement like
echo "debug message"
to your mysqlclustercheck, you will probably see the broken pipe signal (13 in POSIX).
Finally, I had issues with this script on SLES 12.3, and I eventually managed to run it not as 'nobody' but as 'root'.
Hope it helps

Error when starting Yesod app on openshift - command line args?

I am getting the following error when (re)starting my Yesod app on OpenShift:
server: InvalidYaml (Just (YamlException "Yaml file not found: xxx.xxx.xxx.xxx"))
Where xxx.xxx.xxx.xxx is an IP address. I did find a link to a Heroku+Yesod issue that said something about "removing an argument", but it didn't say from where, and of course the scripts/settings are different in the case of OpenShift. Any idea what this error is and how to get past it?
I'm assuming, based on the question, that you're using the standard scaffolding. If you look in the code, you'll find that it uses loadAppSettingsArgs, which is described as:
Same as loadAppSettings, but get the list of runtime config files from the command line arguments.
If you don't want to pay attention to command line arguments, just replace the call to loadAppSettingsArgs with loadAppSettings [].
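In the scaffolded Application.hs this is a one-line change; a sketch, assuming the standard scaffolding's appMain (the error arises because OpenShift passes the server's IP address as a command-line argument, which loadAppSettingsArgs then tries to open as a YAML config file):

-- before: extra runtime config files are taken from the command line,
-- so OpenShift's IP-address argument is misread as a YAML file path
settings <- loadAppSettingsArgs [configSettingsYmlValue] useEnv

-- after: the first argument of loadAppSettings is the list of extra
-- config files; pass [] to ignore the command line entirely
settings <- loadAppSettings [] [configSettingsYmlValue] useEnv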