PM2 deploy, user password?

I'm trying to use pm2 deploy, but I can't find how to include the user password in the config file (https://pm2.keymetrics.io/docs/usage/deployment/):
{
  "apps" : [{
    "name" : "HTTP-API",
    "script" : "http.js"
  }],
  "deploy" : {
    // "production" is the environment name
    "production" : {
      "user" : "ubuntu",
      "host" : ["xxxxxxxx"],
      "ref" : "origin/master",
      "repo" : "git@github.com:Username/repository.git",
      "path" : "/var/www/my-repository",
      "post-deploy" : "npm install; grunt dist"
    }
  }
}
I'm not able to run npm install on my server without sudo. Given that, how can I pass the password inside this config?
================= SOLUTIONS ===================
The only solution I found is to pass the password directly on the command line and read it with sudo -S:
production: {
  key: "/home/xxxx/.ssh/xxx.pem",
  user: "xxx",
  host: ["xxxxxxxx"],
  ssh_options: "StrictHostKeyChecking=no",
  ref: "origin/xxxx",
  repo: "xxxxx@bitbucket.org:xxxx/xxxxx.git",
  path: "/home/xxxx/xxxxxx",
  'pre-setup': "echo '## Pre-Setup'; echo 'MYPASS' | sudo -S bash setup.sh;",
  'post-setup': "echo '## Post-Setup'",
  'post-deploy': "echo 'MYPASS' | sudo -S bash start.sh;",
}

As I understand it, there is no option for an SSH password in pm2 deploy, only keys.
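If keeping the password in plain text is a concern, one alternative (a sketch, not something pm2 provides itself) is to grant the deploy user passwordless sudo for just the deployment scripts, so the hooks can call sudo without any password. Assuming a deploy user named deploy and scripts under /var/www/my-repository/current (both hypothetical names; adjust to your own paths):

# Edit with: sudo visudo -f /etc/sudoers.d/pm2-deploy  (visudo validates the syntax)
# Contents of /etc/sudoers.d/pm2-deploy:
deploy ALL=(root) NOPASSWD: /bin/bash /var/www/my-repository/current/setup.sh
deploy ALL=(root) NOPASSWD: /bin/bash /var/www/my-repository/current/start.sh

With that in place, the hooks can invoke exactly those commands, e.g. 'post-deploy': "sudo /bin/bash /var/www/my-repository/current/start.sh", and no password needs to appear in the ecosystem file.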

Related

Can I set env key from param in template

In OpenShift 4.3, I'm trying to set an env key from a param value within a template. For example:
"env": [
{
"name: "${FOO}-TEST",
"value": "${BAR}"
},
{
"name: "TEST",
"value": "${BAR}"
}
]
"parameters": [
{
"name": "FOO",
"required": true
},
{
"name": "BAR",
"required": true
}
]
Then I run oc new-app with -p FOO=X -p BAR=Y, and checking the env vars on the pod, it shows:
TEST=Y
But does not show:
X-TEST=Y
In a template, can I not use a parameter value as an env key?
I think you can use a parameter value as an env key.
Could you check whether the template works as you expect, as follows?
Export the template as a YAML file first.
$ oc get template <your template name> -o yaml > test-template.yml
Check from the output whether the parameters you specified are substituted.
$ oc process -f test-template.yml -p FOO=X -p BAR=Y
Here is my simple test result, e.g.:
$ cat test-temp.yml
:
  containers:
  - env:
    - name: "${NAME}-KEY"
      value: ${NAME}
:
$ oc process -f test-temp.yml -p NAME=test
:
  "containers": [
    {
      "env": [
        {
          "name": "test-KEY",
          "value": "test"
        }
      ],
:
I hope it helps you.
Export your variables and pass them to oc process:
oc process -f yamlFile -p FOO=${FOO} -p BAR=${BAR}
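If the processed output looks right, one way to create the objects (a sketch, reusing the exported test-template.yml from the first answer) is to pipe the result straight into oc create:

$ oc process -f test-template.yml -p FOO=X -p BAR=Y | oc create -f -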

PM2: Deploy multiple environments on a single server?

I am using PM2 for deployment and process management. The application handles lots of DNS tasks, so it's easiest to run the development app from the remote server and either rsync or SFTP on save (still sorting this out).
This being the case, it is ideal for the dev app to be on the same VM as the production app. However, the structure of the PM2 deployment configuration file (ecosystem.config.js) doesn't seem to make this possible: when I run pm2 deploy development, the development version takes over the production process on the VM.
Here is what I have:
module.exports = {
  apps: [
    {
      name: "APP NAME",
      script: "app.js",
      env_development: {
        NODE_ENV: "development",
        ...
      },
      env_production: {
        NODE_ENV: "production",
        ...
      }
    }
  ],
  deploy: {
    production: {
      user: "user",
      host: ["123.123.123.123"],
      ref: "origin/master",
      repo: "git@gitlab.com:me/repo.git",
      path: "/var/www/app",
      "post-deploy":
        "npm install && pm2 reload ecosystem.config.js --env production"
    },
    development: {
      user: "user",
      host: ["123.123.123.123"],
      ref: "origin/master",
      repo: "git@gitlab.com:me/repo.git",
      path: "/var/www/app-dev",
      "post-deploy":
        "npm install && pm2 reload ecosystem.config.js --env development"
    }
  }
};
Any thoughts on the best way to go about accomplishing this?
After referencing this PR, I'm thinking you should be able to add append_env_to_name: true as a property to the object in the apps array of the ecosystem.config.js:
So your updated ecosystem.config.js file would be as follows:
module.exports = {
  apps: [
    {
      name: "APP NAME",
      append_env_to_name: true, // <===== add this line
      script: "app.js",
      env_development: {
        NODE_ENV: "development",
        ...
      },
      env_production: {
        NODE_ENV: "production",
        ...
      }
    }
  ],
  deploy: {
    production: {
      user: "user",
      host: ["123.123.123.123"],
      ref: "origin/master",
      repo: "git@gitlab.com:me/repo.git",
      path: "/var/www/app",
      "post-deploy":
        "npm install && pm2 reload ecosystem.config.js --env production"
    },
    development: {
      user: "user",
      host: ["123.123.123.123"],
      ref: "origin/master",
      repo: "git@gitlab.com:me/repo.git",
      path: "/var/www/app-dev",
      "post-deploy":
        "npm install && pm2 reload ecosystem.config.js --env development"
    }
  }
};
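With append_env_to_name enabled, PM2 appends the environment name to the app name, so the two deployments run as separately named processes (e.g. "APP NAME-development" and "APP NAME-production") and deploying one no longer replaces the other. A typical flow, as a sketch (run setup once per target before the first deploy):

$ pm2 deploy ecosystem.config.js production setup
$ pm2 deploy ecosystem.config.js production
$ pm2 deploy ecosystem.config.js development setup
$ pm2 deploy ecosystem.config.js development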

How to find public IP address of MySQL persistent on Openshift 3 web console?

I deployed a WildFly application and a MySQL (persistent) service from the OpenShift web console and tried to connect to MySQL from outside with the JDBC driver in Eclipse. However, I cannot find the public IP address anywhere in the web console.
How can I find the public IP address of the MySQL service, or how can I configure a specific IP address for it? I attached an image of both services on OpenShift.
UPDATED
In the Eclipse IDE, I opened the log section of the MySQL pod and found the IP addresses of the MySQL service:
"readinessProbe" : {
"exec" : {"command" : [
"/bin/sh",
"-i",
"-c",
"MYSQL_PWD=\"$MYSQL_PASSWORD\" mysql -h 127.0.0.1 -u $MYSQL_USER -D $MYSQL_DATABASE -e 'SELECT 1'"
]},
"initialDelaySeconds" : 5,
"timeoutSeconds" : 1,
"periodSeconds" : 10,
"successThreshold" : 1,
"failureThreshold" : 3
},
....
"phase" : "Running",
"conditions" : [
{
"type" : "Initialized",
"status" : "True",
"lastTransitionTime" : "2017-04-02T06:35:00Z"
},
{
"type" : "Ready",
"status" : "True",
"lastTransitionTime" : "2017-04-03T16:47:27Z"
},
{
"type" : "PodScheduled",
"status" : "True",
"lastTransitionTime" : "2017-04-02T06:35:00Z"
}
],
"hostIP" : "172.31.14.159",
"podIP" : "10.1.72.72",
"startTime" : "2017-04-02T06:35:00Z",
"containerStatuses" : [{
"name" : "mysql",
"state" : {"running" : {"startedAt" : "2017-04-03T16:47:07Z"}},
"lastState" : {"terminated" : {
"exitCode" : 255,
"reason" : "Error",
"startedAt" : "2017-04-02T06:36:28Z",
....
I tried to connect to the MySQL pod with the hostIP (172.31.14.159) or the podIP (10.1.72.72), but the connection failed. Then I found the following MySQL command in the log contents:
"exec" : {"command" : [
"/bin/sh",
"-i",
"-c",
"MYSQL_PWD=\"$MYSQL_PASSWORD\" mysql -h 127.0.0.1 -u $MYSQL_USER -D $MYSQL_DATABASE -e 'SELECT 1'"
]},
So I tried to connect to the MySQL database service with the IP 127.0.0.1, and the connection was SUCCESSFUL.
Now I am confused about what this 127.0.0.1 address is: my local PC, or the MySQL pod of the OpenShift container? How can I reach the MySQL service via the host IP, not 127.0.0.1? I'm afraid I missed some procedure.
Your MySQL pod doesn't have a public IP address, but you can use port forwarding.
With Eclipse:
How-To:
https://blog.openshift.com/getting-started-eclipse-jboss-tools-openshift-online-3/
Download:
http://marketplace.eclipse.org/content/jboss-tools-luna
With Openshift CLI:
$ oc port-forward <pod> [<local_port>:]<remote_port> [...[<local_port_n>:]<remote_port_n>]
such as
$ oc port-forward <pod> 5000:3306
Now you can connect with the URL jdbc:mysql://127.0.0.1:5000/database. MySQL in the pod (remote port 3306) is forwarded to your local port 5000.
https://docs.openshift.com/container-platform/3.3/dev_guide/port_forwarding.html
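Putting it together, a minimal sketch (the pod name mysql-1-abcde is hypothetical; look up the real one with oc get pods):

$ oc get pods
$ oc port-forward mysql-1-abcde 5000:3306
# While the forward is running, connect from Eclipse with:
#   jdbc:mysql://127.0.0.1:5000/<database>

Note that the readiness probe in the pod log runs inside the MySQL container, so its 127.0.0.1 refers to the pod itself; the 127.0.0.1 you connect to through the port forward is your local machine.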

Apache Drill web UI authentication failing

I am new to Apache Drill. I added the configuration below to drill-override.conf:
drill.exec {
  security.user.auth {
    enabled: true,
    packages += "org.apache.drill.exec.rpc.user.security",
    impl: "pam",
    pam_profiles: [ "sudo", "login" ]
  }
}
But it gives an error when logging in through the Web UI saying:
username and password invalid
How can I get Drill to accept my root user?
Don't forget to add the PAM library (libjpam.so) to some directory, say <jpamdir>.
Edit <drill_home>/conf/drill-env.sh and add:
export DRILL_JAVA_LIB_PATH="-Djava.library.path=<jpamdir>"
export DRILLBIT_JAVA_OPTS="-Djava.library.path=<jpamdir>"
export DRILL_SHELL_JAVA_OPTS="-Djava.library.path=<jpamdir>"
and make changes to <drill_home>/conf/drill-override.conf:
drill.exec: {
  cluster-id: "",
  zk.connect: "",
  impersonation: {
    enabled: true,
    max_chained_user_hops: 3
  },
  security: {
    auth.mechanisms : ["PLAIN"],
  },
  security.user.auth: {
    enabled: true,
    packages += "org.apache.drill.exec.rpc.user.security",
    impl: "pam",
    pam_profiles: [ "sudo", "login" ]
  }
}
Run Drill as root:
sudo <drill_home>/bin/sqlline -u jdbc:drill:zk=local -n <user> -p <password>
Log in to the Drill web UI/client with a Linux user.
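For completeness, a minimal sketch of the <jpamdir> part (the /opt/pam path is only an example):

# Put libjpam.so somewhere on the machine running the drillbit
$ sudo mkdir -p /opt/pam
$ sudo cp libjpam.so /opt/pam/
# Use /opt/pam as <jpamdir> in the drill-env.sh exports above, then restart the drillbit
$ <drill_home>/bin/drillbit.sh restart

With the "login" PAM profile, credentials are checked against the operating system's user database, so the web UI login should be a Linux username and password that exist on that machine.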

Logstash Forwarder on AWS Elastic Beanstalk

What is the best possible way to install the Logstash Forwarder on an Elastic Beanstalk application (Rails application) to forward logs to Logstash?
Here is what I did: create the config file .ebextensions/02-logstash.config:
files:
  "/etc/yum.repos.d/logstash.repo":
    mode: "000755"
    owner: root
    group: root
    content: |
      [logstash-forwarder]
      name=logstash-forwarder repository
      baseurl=http://packages.elasticsearch.org/logstashforwarder/centos
      gpgcheck=1
      gpgkey=http://packages.elasticsearch.org/GPG-KEY-elasticsearch
      enabled=1
commands:
  "100-rpm-key":
    command: "rpm --import http://packages.elasticsearch.org/GPG-KEY-elasticsearch"
  "200-install-logstash-forwarder":
    command: "yum -y install logstash-forwarder"
  "300-install-contrib-plugin":
    command: "rm -rf /etc/logstash-forwarder.conf && cp /var/app/current/logstash-forwarder.conf /etc/ "
    test: "[ ! -f /etc/logstash-forwarder.conf ]"
  "400-copy-cert":
    command: "cp /var/app/current/logstash-forwarder.crt /etc/pki/tls/certs/"
  "500-install-logstash":
    command: "service logstash-forwarder restart"
And the logstash-forwarder.conf itself:
{
  "network": {
    "servers": [
      "logstashIP:5000"
    ],
    "timeout": 15,
    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt"
  },
  "files": [
    {
      "paths": [
        "/var/log/messages",
        "/var/log/secure",
        "/var/log/eb-version-deployment.log",
        "/var/app/support/logs/passenger.log",
        "/var/log/eb-activity.log",
        "/var/log/eb-commandprocessor.log"
      ],
      "fields": {
        "type": "syslog"
      }
    }
  ]
}
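Once the environment is deployed, it can be worth confirming on the instance (via eb ssh) that the forwarder actually started with this config; a rough sketch:

# on the Elastic Beanstalk instance
$ sudo service logstash-forwarder status
$ cat /etc/logstash-forwarder.conf
# The Logstash server referenced as logstashIP:5000 must be listening with a matching
# input and the same logstash-forwarder.crt certificate for events to arrive.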