PM2: Deploy multiple environments on a single server?

I am using PM2 for deployment / process management. The application handles lots of DNS tasks, so it's easiest to run the development app from the remote server and either rsync or SFTP files over on save (I'm still sorting this part out).
Given that, the ideal setup is for the dev app to live on the same VM as the production app. However, the structure of the PM2 deployment configuration file (ecosystem.config.js) doesn't seem to allow this: when I run pm2 deploy development, the development version takes over the production process on the VM.
Here is what I have:
module.exports = {
  apps: [
    {
      name: "APP NAME",
      script: "app.js",
      env_development: {
        NODE_ENV: "development",
        ...
      },
      env_production: {
        NODE_ENV: "production",
        ...
      }
    }
  ],
  deploy: {
    production: {
      user: "user",
      host: ["123.123.123.123"],
      ref: "origin/master",
      repo: "git@gitlab.com:me/repo.git",
      path: "/var/www/app",
      "post-deploy":
        "npm install && pm2 reload ecosystem.config.js --env production"
    },
    development: {
      user: "user",
      host: ["123.123.123.123"],
      ref: "origin/master",
      repo: "git@gitlab.com:me/repo.git",
      path: "/var/www/app-dev",
      "post-deploy":
        "npm install && pm2 reload ecosystem.config.js --env development"
    }
  }
};
Any thoughts on the best way to accomplish this?

After referencing this PR, I'm thinking you should be able to add append_env_to_name: true as a property on the object in the apps array of ecosystem.config.js.
Your updated ecosystem.config.js file would then be as follows:
module.exports = {
  apps: [
    {
      name: "APP NAME",
      append_env_to_name: true, // <===== add this line
      script: "app.js",
      env_development: {
        NODE_ENV: "development",
        ...
      },
      env_production: {
        NODE_ENV: "production",
        ...
      }
    }
  ],
  deploy: {
    production: {
      user: "user",
      host: ["123.123.123.123"],
      ref: "origin/master",
      repo: "git@gitlab.com:me/repo.git",
      path: "/var/www/app",
      "post-deploy":
        "npm install && pm2 reload ecosystem.config.js --env production"
    },
    development: {
      user: "user",
      host: ["123.123.123.123"],
      ref: "origin/master",
      repo: "git@gitlab.com:me/repo.git",
      path: "/var/www/app-dev",
      "post-deploy":
        "npm install && pm2 reload ecosystem.config.js --env development"
    }
  }
};
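With append_env_to_name set, deploying each target separately should (if I understand the option correctly) give you two distinct PM2 processes on the same VM, for example APP NAME-production and APP NAME-development, rather than one replacing the other. A rough usage sketch, where the setup commands only need to be run once per target:

  pm2 deploy ecosystem.config.js production setup     # one-time setup per target
  pm2 deploy ecosystem.config.js development setup

  pm2 deploy ecosystem.config.js production            # subsequent deploys
  pm2 deploy ecosystem.config.js development
  pm2 ls                                               # should now list both processes separately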

Related

Can't connect to flask app inside a docker container from host [duplicate]

I'm trying to run a Flask application and a MySQL database by running docker-compose up on my computer. The Flask app is running on port 5000.
if __name__ == "__main__":
    app.run(port=5000, debug=True)
The Docker container responds properly when I use the docker exec command, but I can't get any response from the host using the URL http://localhost:5000.
The curl -X GET <url> command gives the following output:
curl: (56) Recv failure: Connection reset by peer
The docker ps command is giving the following output:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
bffa59c471f6 customer_transaction_app "/bin/sh -c 'python …" About an hour ago Up About an hour 0.0.0.0:5000->5000/tcp customer_transaction_app_1
ad60c2830ac0 mysql "docker-entrypoint.s…" About an hour ago Up About an hour 33060/tcp, 0.0.0.0:32001->3306/tcp customertransaction_db_host
Here is the Dockerfile:
FROM python:3.8
EXPOSE 5000
COPY requirements.txt /app/requirements.txt
WORKDIR /app
RUN pip install -r requirements.txt
COPY . /app
CMD python main.py
Here is the docker-compose.yml file:
version: "2"
services:
app:
build: ./
depends_on:
- db
ports:
- "5000:5000"
db:
container_name: customertransaction_db_host
image: mysql
restart: always
ports:
- "32001:3306"
volumes:
- customertransaction-db-vol:/var/lib/mysql
environment:
MYSQL_ROOT_PASSWORD: 123456
MYSQL_DATABASE: customertransaction_db
MYSQL_USER: user
MYSQL_PASSWORD: 123456
volumes:
customertransaction-db-vol: {}
Both containers reside inside the Docker network customer_transaction_default. The docker network inspect command gives the following output:
[
    {
        "Name": "customer_transaction_default",
        "Id": "4b5b20f503af0026a2f1ef185436c9a8e3d9c2ece690e93ece0e6b12f7821edb",
        "Created": "2021-06-20T17:52:15.603679073+05:30",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.24.0.0/16",
                    "Gateway": "172.24.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "ad60c2830ac0f7e270daf03334ea8a8170200e92c2bc43492c378bd1d89cd3ac": {
                "Name": "customertransaction_db_host",
                "EndpointID": "de4597a1f58d711640f71a6169111f9842c7c5d74320825657a2518d07f36504",
                "MacAddress": "02:42:ac:18:00:02",
                "IPv4Address": "172.24.0.2/16",
                "IPv6Address": ""
            },
            "bffa59c471f6762bb802fcee37db356cf2c7a59f4f88192e3546dd10ad9dbb2d": {
                "Name": "customer_transaction_app_1",
                "EndpointID": "a3ded03e28343921d799c0efc334034028821c231e1469d4359cd387c7f43f70",
                "MacAddress": "02:42:ac:18:00:03",
                "IPv4Address": "172.24.0.3/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]
Since you are trying to connect to a server inside your Docker network, you should change the host in the connection string you use to connect to MySQL to the name of the container you want to reach.
In your case, replace localhost with "customertransaction_db_host".
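As a rough illustration only (whether your app uses mysql-connector-python, PyMySQL, or SQLAlchemy is an assumption here; the credentials are simply taken from your docker-compose.yml):

# hypothetical snippet for the app container, not your actual code
import mysql.connector

conn = mysql.connector.connect(
    host="customertransaction_db_host",  # the container name, instead of localhost
    port=3306,                           # the in-network container port, not the 32001 host mapping
    user="user",
    password="123456",
    database="customertransaction_db",
)

Note that inside the Compose network the app talks to MySQL on port 3306 directly; the 32001 mapping only matters when connecting from the host machine.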

PM2 deploy, user password?

I'm trying to use pm2 deploy, but I can't find how to include a user password in the config file (https://pm2.keymetrics.io/docs/usage/deployment/):
{
  "apps" : [{
    "name" : "HTTP-API",
    "script" : "http.js"
  }],
  "deploy" : {
    // "production" is the environment name
    "production" : {
      "user" : "ubuntu",
      "host" : ["xxxxxxxx"],
      "ref" : "origin/master",
      "repo" : "git@github.com:Username/repository.git",
      "path" : "/var/www/my-repository",
      "post-deploy" : "npm install; grunt dist"
    },
  }
}
I'm not able to run npm install on my server without sudo (according to this), so how can I pass the password inside this config?
The only solution I found is to pass the password directly on the command line and read it with sudo -S:
production: {
  key: "/home/xxxx/.ssh/xxx.pem",
  user: "xxx",
  host: ["xxxxxxxx"],
  ssh_options: "StrictHostKeyChecking=no",
  ref: "origin/xxxx",
  repo: "xxxxx@bitbucket.org:xxxx/xxxxx.git",
  path: "/home/xxxx/xxxxxx",
  'pre-setup': "echo '## Pre-Setup'; echo 'MYPASS' | sudo -S bash setup.sh;",
  'post-setup': "echo '## Post-Setup'",
  'post-deploy': "echo 'MYPASS' | sudo -S bash start.sh;",
}
As I understand it, there is no option for an SSH password in pm2 deploy, only keys.
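If switching to key-based auth is an option, a typical setup looks roughly like this (the key path and host below are placeholders, not values from your setup):

  ssh-keygen -t ed25519 -f ~/.ssh/pm2_deploy               # generate a key pair locally
  ssh-copy-id -i ~/.ssh/pm2_deploy.pub user@your-server    # install the public key on the server
  # then reference the private key in the deploy config, e.g. key: "/home/you/.ssh/pm2_deploy"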

How to run webpack in production in a package.json script on Windows with other commands?

I am using the MERN stack and running Windows 10. I am trying to run this npm script from the package.json file.
"scripts": {
"build": "set NODE_ENV=production webpack && gulp",
"postinstall": "npm run build",
"start": "node ./bin/www"
},
When I run npm run build (screenshot of the terminal output omitted), it looks like gulp runs and that is it. My bundle is not optimized for production at all. I will include the webpack config in case it is needed.
const webpack = require('webpack');
const path = require('path');

module.exports = {
  entry: {
    app: './src/app.js'
  },
  output: {
    filename: 'public/build/bundle.js',
    sourceMapFilename: 'public/build/bundle.map'
  },
  devtool: '#source-map',
  plugins:
    process.env.NODE_ENV === 'production'
      ? [
          new webpack.DefinePlugin({
            'process.env': {
              NODE_ENV: JSON.stringify('production')
            }
          }),
          new webpack.optimize.UglifyJsPlugin({
            minimize: true,
            compress: {
              warning: true
            }
          })
        ]
      : [],
  module: {
    loaders: [
      {
        test: /\.jsx?$/,
        exclude: /(node_modules)/,
        loader: 'babel-loader',
        query: {
          presets: ['react', 'es2015', 'stage-1']
        }
      }
    ]
  }
};
You might want to use cross-env, a platform-independent way to use environment variables:
"scripts": {
"build": "cross-env NODE_ENV=production webpack && gulp",
// or if the former does not work
"build": "npm run build:webpack && gulp",
"build:webpack": "cross-env NODE_ENV=production webpack",
},
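If you go the cross-env route, it needs to be installed as a dev dependency first:

  npm install --save-dev cross-env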
Note that there are other ways to do production builds or evaluate environment variables in the webpack configuration.

Apache Drill web UI authentication failing

I am new to Apache Drill. I added the code below to drill-override.conf:
drill.exec {
  security.user.auth {
    enabled: true,
    packages += "org.apache.drill.exec.rpc.user.security",
    impl: "pam",
    pam_profiles: [ "sudo", "login" ]
  }
}
But it gives an error when logging in through the Web UI:
username and password invalid
How can I assign my root user to Drill?
Don't forget to add the PAM library (libjpam.so) to some directory, say <jpamdir>.
Edit <drill_home>/conf/drill-env.sh and add:
export DRILL_JAVA_LIB_PATH="-Djava.library.path=<jpamdir>"
export DRILLBIT_JAVA_OPTS="-Djava.library.path=<jpamdir>"
export DRILL_SHELL_JAVA_OPTS="-Djava.library.path=<jpamdir>"
and make changes to <drill_home>/conf/drill-override.conf:
drill.exec: {
  cluster-id: "",
  zk.connect: "",
  impersonation: {
    enabled: true,
    max_chained_user_hops: 3
  },
  security: {
    auth.mechanisms: ["PLAIN"],
  },
  security.user.auth: {
    enabled: true,
    packages += "org.apache.drill.exec.rpc.user.security",
    impl: "pam",
    pam_profiles: [ "sudo", "login" ]
  }
}
Run Drill as root:
sudo <drill_home>/bin/sqlline -u jdbc:drill:zk=local -n <user> -p <password>
Then log in to the Drill Web UI/client with a Linux user.
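For example, if you would rather not log in as root, a plain Linux account on the Drillbit host should work with the PAM "login" profile (the username below is just a placeholder):

  sudo useradd drilluser     # create a local Linux user
  sudo passwd drilluser      # set the password used for the Web UI login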

Logstash Forwarder on AWS Elastic Beanstalk

What is the best possible way to install the Logstash Forwarder on an Elastic Beanstalk application (Rails application) to forward logs to Logstash?
Here is what I did: I created the config file .ebextensions/02-logstash.config:
files:
  "/etc/yum.repos.d/logstash.repo":
    mode: "000755"
    owner: root
    group: root
    content: |
      [logstash-forwarder]
      name=logstash-forwarder repository
      baseurl=http://packages.elasticsearch.org/logstashforwarder/centos
      gpgcheck=1
      gpgkey=http://packages.elasticsearch.org/GPG-KEY-elasticsearch
      enabled=1

commands:
  "100-rpm-key":
    command: "rpm --import http://packages.elasticsearch.org/GPG-KEY-elasticsearch"
  "200-install-logstash-forwarder":
    command: "yum -y install logstash-forwarder"
  "300-install-contrib-plugin":
    command: "rm -rf /etc/logstash-forwarder.conf && cp /var/app/current/logstash-forwarder.conf /etc/ "
    test: "[ ! -f /etc/logstash-forwarder.conf ]"
  "400-copy-cert":
    command: "cp /var/app/current/logstash-forwarder.crt /etc/pki/tls/certs/"
  "500-install-logstash":
    command: "service logstash-forwarder restart"
And here is the logstash-forwarder.conf:
{
  "network": {
    "servers": [
      "logstashIP:5000"
    ],
    "timeout": 15,
    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt"
  },
  "files": [
    {
      "paths": [
        "/var/log/messages",
        "/var/log/secure",
        "/var/log/eb-version-deployment.log",
        "/var/app/support/logs/passenger.log",
        "/var/log/eb-activity.log",
        "/var/log/eb-commandprocessor.log"
      ],
      "fields": {
        "type": "syslog"
      }
    }
  ]
}