PM2 start script with multiple arguments (serve)

I'm trying to run serve frontend/dist -l 4000 from PM2. This is supposed to serve a Vue app on port 4000.
In my ecosystem.config.js, I have:
{
  name: 'parker-frontend',
  max_restarts: 5,
  script: 'serve',
  args: 'frontend/dist -l 4000',
  instances: 1,
},
But when I do pm2 start, in the logs I have the following message:
Exposing /var/lib/jenkins/workspace/parker/frontend/dist directory on port NaN
Whereas if I run the same command: serve frontend/dist -l 4000, it runs just fine on port 4000.

After running serve frontend/dist -l 5000 I got an error in the PM2 logs.
In its call stack I found:
at Object.<anonymous> (/usr/lib/node_modules/pm2/lib/API/Serve.js:242:4)
Notice the path: /usr/lib/node_modules/pm2/lib/API/Serve.js
There is another command called serve inside PM2 itself, and that one was run instead of the correct command. It is not the serve I had installed earlier with npm i -g serve. This happens because of how Node module resolution works: local modules take priority over global ones.
To use the globally installed version (the correct one), you need to specify the exact path to your global serve.
To find the path on Linux, you can just do:
$ which serve
/usr/local/bin/serve
Then put the path in your ecosystem.config.js script property.
Final working ecosystem.config.js:
{
  name: 'parker-frontend',
  script: '/usr/local/bin/serve', // pm2 has its own 'serve', which doesn't work here; make sure to use the global one
  args: 'frontend/dist -l 5000',
  instances: 1,
},
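Alternatively, you can lean into PM2's built-in static server instead of working around it. A minimal sketch (assuming a reasonably recent PM2; the process name is illustrative):

pm2 serve frontend/dist 4000 --name parker-frontend

Here the directory and port are positional arguments, so there is no args string for PM2 to misparse.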

Related

Heroku not recognizing updated config vars for Watir, and not sure if config vars are pointing to the right Chrome files

I have a Ruby (non-Rails) & Watir web scraper working locally; however, after I deployed it to Heroku, it looks like Heroku is unable to start Chrome/ChromeDriver. I tried following the solutions from here: Heroku: Unable to find chromedriver when using Selenium
Here is my configuration:
args = %w[--disable-infobars --no-sandbox --disable-gpu]
options = {
  binary: ENV['GOOGLE_CHROME_BIN'],
  prefs: { password_manager_enable: false, credentials_enable_service: false },
  args: args
}
b = Watir::Browser.new(:chrome, options: options)
Gemfile
ruby '2.7.1'
gem 'watir', '6.19.0'
gem 'xpath', '3.2.0'
gem 'google-api-client'
gem 'webdrivers'
Edit: I also set the following buildpacks:
heroku buildpacks:set https://github.com/heroku/heroku-buildpack-google-chrome
heroku buildpacks:set https://github.com/heroku/heroku-buildpack-chromedriver
And here is the error message in the Heroku log:
unknown error: Chrome failed to start: exited abnormally. (Selenium::WebDriver::Error::UnknownError)
(unknown error: DevToolsActivePort file doesn't exist)
(The process started from chrome location /app/.apt/opt/google/chrome/chrome is no longer running, so ChromeDriver is assuming that Chrome has crashed.)
I also ran it with --headless added to the args:
args = %w[--disable-infobars --headless window-size=1600,1200 --no-sandbox --disable-gpu]
and got this message:
timed out after 30 seconds, waiting for true condition on #<Watir::Browser:0x7a2c669409432a9a url="https://client.schwab.com/login/signon/customercenterlogin.aspx" title="Login | Charles Schwab"> (Watir::Wait::TimeoutError)
I had originally set the config vars to the following as I was following Heroku: Unable to find chromedriver when using Selenium:
heroku config:set GOOGLE_CHROME_BIN=/app/.apt/opt/google/chrome/chrome
heroku config:set GOOGLE_CHROME_SHIM=/app/.apt/opt/google/chrome/chrome
But I realized it was pointing to the wrong path: when I ran find /app/ -name "*chrome*" in the Heroku console, the path I had set didn't exist, and I found the correct path instead (/app/.apt/usr/bin/google-chrome):
/app/.apt/usr/bin/google-chrome
/app/.apt/usr/bin/google-chrome-stable
/app/.apt/usr/share/gnome-control-center/default-apps/google-chrome.xml
/app/.apt/usr/share/man/man1/google-chrome-stable.1.gz
/app/.apt/usr/share/man/man1/google-chrome.1.gz
/app/.apt/usr/share/doc/google-chrome-stable
/app/.apt/usr/share/applications/google-chrome.desktop
/app/.apt/usr/share/menu/google-chrome.menu
/app/.apt/usr/share/appdata/google-chrome.appdata.xml
/app/.apt/opt/google/chrome
/app/.apt/opt/google/chrome/chrome_100_percent.pak
/app/.apt/opt/google/chrome/chrome
/app/.apt/opt/google/chrome/google-chrome
/app/.apt/opt/google/chrome/cron/google-chrome
/app/.apt/opt/google/chrome/chrome-sandbox
/app/.apt/opt/google/chrome/chrome_200_percent.pak
/app/.apt/etc/cron.daily/google-chrome
/app/.profile.d/010_google-chrome.sh
/app/.profile.d/chromedriver.sh
/app/.chromedriver
/app/.chromedriver/bin/chromedriver
/app/vendor/bundle/ruby/2.7.0/gems/webdrivers-4.6.0/lib/webdrivers/tasks/chromedriver.rake
/app/vendor/bundle/ruby/2.7.0/gems/webdrivers-4.6.0/lib/webdrivers/chrome_finder.rb
/app/vendor/bundle/ruby/2.7.0/gems/webdrivers-4.6.0/lib/webdrivers/chromedriver.rb
/app/vendor/bundle/ruby/2.7.0/gems/webdrivers-4.6.0/spec/webdrivers/chrome_finder_spec.rb
/app/vendor/bundle/ruby/2.7.0/gems/webdrivers-4.6.0/spec/webdrivers/chromedriver_spec.rb
/app/vendor/bundle/ruby/2.7.0/gems/selenium-webdriver-3.142.7/lib/selenium/webdriver/chrome.rb
/app/vendor/bundle/ruby/2.7.0/gems/selenium-webdriver-3.142.7/lib/selenium/webdriver/chrome
/app/vendor/bundle/ruby/2.7.0/gems/google-api-client-0.53.0/generated/google/apis/chromeuxreport_v1
/app/vendor/bundle/ruby/2.7.0/gems/google-api-client-0.53.0/generated/google/apis/chromeuxreport_v1.rb
So I then changed my vars to:
heroku config:set GOOGLE_CHROME_BIN=/app/.apt/usr/bin/google-chrome
heroku config:set GOOGLE_CHROME_SHIM=/app/.apt/usr/bin/google-chrome
My questions:
Are my config vars GOOGLE_CHROME_BIN & GOOGLE_CHROME_SHIM pointing to the right files now, or should they be pointing to different binaries?
It looks like Heroku is still not detecting the updated config vars, as the log is still showing /app/.apt/opt/google/chrome/chrome. After updating them, I pushed the changes to Heroku, triggering a rebuild, and also restarted the dynos. Even the config vars settings in the dashboard show the updated values. What else can I do to make Heroku recognize the updated config vars?
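For what it's worth, one way to see what a dyno actually receives (as opposed to what the dashboard shows) is to print the vars from a one-off dyno. A quick check, assuming the Heroku CLI:

heroku run 'echo $GOOGLE_CHROME_BIN $GOOGLE_CHROME_SHIM'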
Thanks for any help!

How can I make Jelastic start PM2 to launch an 'npm' command instead of a file?

I'm using a Jelastic Node.js PM2 environment and I want my app to be started with something like the following:
pm2 start npm --name "app name" -- start
(my server is not a JS file).
The command runs fine if I use a Jelastic 'npm' environment, but I'd rather have the benefits of PM2.
I tried setting various APP_FILE (start, npm start, a pm2 config file path), Entry Points and PROCESS_MANAGER_FILE, without success. I usually get this error:
Node ID : 53209
-----------------------
result 1 Failed to start
Stopping nodejs server[ OK ] Starting nodejs server [FAILED]
The comment from @Jelastic worked! Indeed, using a PM2 'ecosystem file' works in Jelastic.
Set APP_FILE (or possibly PROCESS_MANAGER_FILE) to ecosystem.config.js (this path is relative to ROOT_DIR).
The content of this file should look something like this:
module.exports = {
  apps: [
    {
      script: "yarn",
      args: "--cwd myserver1 start",
      name: "myserver1",
    },
    // You can use this setup to start multiple processes too.
    {
      script: "yarn",
      args: "--cwd myserver2 start",
      name: "myserver2",
    },
  ],
};
--cwd tells yarn to switch the current working directory. If you use npm, you can use --prefix instead, as shown in the sketch below.
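For example, an npm equivalent of the first entry above might look like this (a sketch; myserver1 is the same placeholder directory as above):

module.exports = {
  apps: [
    {
      script: "npm",                      // run npm itself as the managed process
      args: "--prefix myserver1 start",   // run the start script of the package in myserver1
      name: "myserver1",
    },
  ],
};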
Read more about PM2 ecosystem files: https://pm2.keymetrics.io/docs/usage/application-declaration/

oc-command to forward local-ports to remote debug ports based on service-name instead of pod-name

To minimize the setup time for attaching a debug session to the remote pod (a microservice deployed on OpenShift) using IntelliJ,
I am trying to get the most out of the 'Before launch' setting of the Remote Debug configuration.
I use 2 steps before attaching the debugger to the JVM socket, with the following command-line arguments (this setup works but needs editing after every new deploy):
-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:8000
step 1:
external tools: oc with arguments:
login
https://url.of.openshift.environment
--username=<login>
--password=<password>
step 2:
external tools: oc with arguments:
port-forward
microservice-name-65-6bhz8 -> this needs to be changed after every deploy
8000
3000
3001
background info:
this is the info in the service's YAML under spec > containers > env:
- name: JAVA_TOOL_OPTIONS
  value: >-
    -agentlib:jdwp=transport=dt_socket,server=y,address=8000,suspend=n
    -Dcom.sun.management.jmxremote=true
    -Dcom.sun.management.jmxremote.port=3000
    -Dcom.sun.management.jmxremote.rmi.port=3001
    -Djava.rmi.server.hostname=127.0.0.1
    -Dcom.sun.management.jmxremote.authenticate=false
    -Dcom.sun.management.jmxremote.ssl=false
As the name of the pod changes on every (re-)deploy, I am trying to find an oc command that can port-forward without having to provide the pod name (e.g. based on the service name).
Or a completely different solution that allows me to hit one button to set up a debug session (preferably in IntelliJ).
[Screenshot: IntelliJ settings]
----------------------------- edit after tips -------------------------------
For now I made a small batch-script which does the trick:
Feel free to help with an even faster solution
(I'm checking https://openshiftdo.org/)
or other intelliJent solutions
set /p _username=Type your username:
set /p _password=Type your password:
oc login replace-with-openshift-console-url --username=%_username% --password=%_password%
oc project replace-with-project-name
oc get pods --selector app=replace-with-app-name -o jsonpath={.items[?(@.status.phase=='Running')].metadata.name} > temp.txt
set /p PODNAME= <temp.txt
del temp.txt
oc port-forward %PODNAME% 8000 3000 3001
You're going to need the pod name in order to port-forward, but of course you can fetch that programmatically and consistently, so you don't need to update it in place every time.
There are a number of ways you can do this: via JSONPath, Go templates, bash, etc. An example would be to use the following, replacing your app name as required:
oc get pod -l app=replace-me -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}'
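If you want to skip the temp file entirely, the lookup and the port-forward can also be combined into a single line (a bash sketch, using the same placeholder app label):

oc port-forward $(oc get pod -l app=replace-me -o jsonpath='{.items[0].metadata.name}') 8000 3000 3001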

Error starting Apache Drill in Embedded Mode on Windows 10

I am trying to start Apache Drill 1.10 in Embedded Mode on Windows 10 x64 (with Oracle JVM 1.8.0_131). When launching the command
sqlline.bat -u "jdbc:drill:zk=local"
I get the following:
Error during udf area creation [/C:/Users/<user>/drill/udf/registry] on file system [file:///] (state=,code=0)
So, after some googling, I have changed the drill-override.conf file this way:
drill.exec: {
  cluster-id: "drillbits1",
  zk.connect: "localhost:2181",
  udf: {
    # number of retry attempts to update remote function registry
    # if registry version was changed during update
    retry-attempts: 10,
    directory: {
      # Override this property if custom file system should be used to create remote directories
      # instead of default taken from Hadoop configuration
      fs: "file:///",
      # Set this property if custom absolute root should be used for remote directories
      root: "/c:/work"
    }
  }
}
Then I have checked the following:
proper permissions set on the folder
console started as an Administrator
But I still get the same error:
Error during udf area creation [/c:/work/drill/udf/registry] on file system [file:///] (state=,code=0)
I can't disable UDF since I don't have an active connection.
Any suggestions?
This seems to be related to ownership of the folders, as per this link.
Details of the solution from the link are quoted as follows:
Run these commands before the first time you are running sqlline.bat.
mkdir %userprofile%\drill
mkdir %userprofile%\drill\udf
mkdir %userprofile%\drill\udf\registry
mkdir %userprofile%\drill\udf\tmp
mkdir %userprofile%\drill\udf\staging
takeown /R /F %userprofile%\drill
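To verify the ownership took effect, dir /q lists the owner of each entry (a quick check with built-in Windows tools):

dir /q %userprofile%\drill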

Using environment properties with files in elastic beanstalk config files

Working with Elastic Beanstalk .config files is kinda... interesting. I'm trying to use environment properties with the files: configuration option in an Elastic Beanstalk .config file. What I'd like to do is something like:
files:
  "/etc/passwd-s3fs":
    mode: "000640"
    owner: root
    group: root
    content: |
      ${AWS_ACCESS_KEY_ID}:${AWS_SECRET_KEY}
To create an /etc/passwd-s3fs file with content something like:
ABAC73E92DEEWEDS3FG4E:aiDSuhr8eg4fHHGEMes44zdkIJD0wkmd
I.e. use the environment properties defined in the AWS Console (Elastic Beanstalk/Configuration/Software Configuration/Environment Properties) to initialize system configuration files and such.
I've found that it is possible to use environment properties in container_commands, like so:
container_commands:
  000-create-file:
    command: echo ${AWS_ACCESS_KEY_ID}:${AWS_SECRET_KEY} > /etc/passwd-s3fs
However, doing so will require me to manually set the owner, group, file permissions, etc. It's also much more of a hassle than the files: configuration option when dealing with larger configuration files...
Anyone got any tips on this?
How about something like this. I will use the word "context" for dev vs. qa.
Create one file per context:
dev-envvars
export MYAPP_IP_ADDR=111.222.0.1
export MYAPP_BUCKET=dev
qa-envvars
export MYAPP_IP_ADDR=111.222.1.1
export MYAPP_BUCKET=qa
Upload those files to a private S3 folder, s3://myapp/config.
In IAM, add a policy to the aws-elasticbeanstalk-ec2-role role that allows reading s3://myapp/config.
Add the following file to your .ebextensions directory:
envvars.config
files:
  "/opt/myapp_envvars":
    mode: "000644"
    owner: root
    group: root
    # change the source when you need a different context
    #source: https://s3-us-west-2.amazonaws.com/myapp/dev-envvars
    source: https://s3-us-west-2.amazonaws.com/myapp/qa-envvars
Resources:
  AWSEBAutoScalingGroup:
    Metadata:
      AWS::CloudFormation::Authentication:
        S3Access:
          type: S3
          roleName: aws-elasticbeanstalk-ec2-role
          buckets: myapp
commands:
  # commands executes after files per
  # http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html
  10-load-env-vars:
    command: . /opt/myapp_envvars
Per the AWS Developer's Guide, commands "run before the application and web server are set up and the application version file is extracted," and before container_commands. I guess the question will be whether that is early enough in the boot process to make the environment variables available when you need them. I actually wound up writing an init.d script to start and stop things in my EC2 instance. I used the technique above to deploy the script.
Credit for the "Resources" section that allows downloading from secured S3 goes to the May 7, 2014 post that Joshua@AWS made to this thread.
I am gravedigging, but since I stumbled across this in the course of my travels: there is a "clever" way to do what you describe, at least as of 2018 (and at least since 2016). You can retrieve an environment variable by key with get-config:
/opt/elasticbeanstalk/bin/get-config environment --key YOUR_ENV_VAR_KEY
And likewise all environment variables (as JSON, or as YAML with --output YAML):
/opt/elasticbeanstalk/bin/get-config environment
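On a typical environment this prints the full set as one JSON object, along the lines of the following (illustrative keys and values only):

{"YOUR_ENV_VAR_KEY": "some-value", "ANOTHER_KEY": "another-value"}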
Example usage in a container command:
container_commands:
  00_store_env_var_in_file_and_chmod:
    command: "/opt/elasticbeanstalk/bin/get-config environment --key YOUR_ENV_KEY | install -D /dev/stdin /etc/somefile && chmod 640 /etc/somefile"
Example usage in a file:
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/00_do_stuff.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/bin/bash
      YOUR_ENV_VAR=$(source /opt/elasticbeanstalk/bin/get-config environment --key YOUR_ENV_VAR_KEY)
      echo "Hello $YOUR_ENV_VAR"
I was introduced to get-config by Thomas Reggi in https://serverfault.com/a/771067.
I assume that AWS_ACCESS_KEY_ID and AWS_SECRET_KEY are known to you prior to the app deployment.
You can create the file on your workstation and submit it to the Elastic Beanstalk instance together with the code on git aws.push:
$ cd .ebextensions
$ echo 'ABAC73E92DEEWEDS3FG4E:aiDSuhr8eg4fHHGEMes44zdkIJD0wkmd' > passwd-s3fs
In .config:
files:
  "/etc/passwd-s3fs":
    mode: "000640"
    owner: root
    group: root
container_commands:
  10-copy-passwords-file:
    command: "cat .ebextensions/passwd-s3fs > /etc/passwd-s3fs"
You might have to play with the permissions or execute cat with sudo. Also, I put the file into .ebextensions as an example; it can be anywhere in your project.
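If you'd rather avoid the copy-then-fix-permissions dance, a variant of the container command (a sketch using coreutils install, same paths as above) sets mode, owner, and group in a single step:

container_commands:
  10-copy-passwords-file:
    command: "install -m 640 -o root -g root .ebextensions/passwd-s3fs /etc/passwd-s3fs"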
Hope it helps.