Crontab to reload PM2 cluster processes - pm2

I've been stuck on this issue for a day with no effective results. Imagine I want to reload (zero downtime, thus not restart) a PM2 cluster process every hour with crontab.
The PM2 app is this one, which I start with sudo pm2 start app.json (btw, I think it's not relevant, but I'm running pm2 as sudo; I can't recall why):
{
  "apps": [{
    "name": "autocosts.prod",
    "script": "bin/server.js",
    "cwd": "/var/www/autocosts.prod/",
    "node_args": "--use_strict",
    "args": "-r prod --print --pdf --social --googleCaptcha --googleAnalytics --database",
    "exec_mode": "cluster",
    "instances": 4,
    "wait_ready": true,
    "listen_timeout": 50000,
    "watch": false,
    "exp_backoff_restart_delay": 200,
    "env": {
      "NODE_ENV": "production"
    },
    "log_date_format": "DD-MM-YYYY"
  }]
}
My crontab line is
# reload every hour
0 * * * * /usr/local/bin/node /usr/local/bin/pm2 reload /var/www/autocosts.prod/app.json > /var/log/pm2/app.cron.log 2>&1
But on that log file I get an error
[PM2] Applying action reloadProcessId on app [autocosts.prod](ids: [ 0, 1, 2, 3 ])
[PM2][ERROR] Process 0 not found
Process 0 not found
It seems PM2 can't detect the id numbers of the cluster.
How can I get crontab to reload (zero downtime) a PM2 cluster process?

I found the problem: it was indeed due to the fact that I was running pm2 as sudo, and a root crontab has no access to a series of environment variables.
Change the owner of the PM2 executable to yourself (run which pm2 to find its full path) so that you can use it without sudo. Do the same for ~/.pm2.
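For example, a minimal sketch of that fix, assuming pm2 lives at /usr/local/bin/pm2 (as in the crontab line above) and your login user is the one that should own it:
# take ownership of the pm2 executable and of PM2's runtime directory
sudo chown $USER /usr/local/bin/pm2
sudo chown -R $USER ~/.pm2
# then move the hourly reload into your own crontab (crontab -e), not root's
0 * * * * /usr/local/bin/node /usr/local/bin/pm2 reload /var/www/autocosts.prod/app.json > /var/log/pm2/app.cron.log 2>&1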

Related

Can't install CloudWatch agent via CloudFormation on Amazon ECS-optimized AMI

I am creating a CloudFormation template which creates some resources such as an EC2 instance, an autoscaling group, and a launch configuration.
In the UserData property of the launch configuration resource, I tried to install the CloudWatch agent as follows:
"UserData":{ "Fn::Base64" : {
"Fn::Join" : ["", [
"#!/bin/bash -xe\n",
"yum -y install aws-cfn-bootstrap\n",
"/opt/aws/bin/cfn-init -v",
" --stack ", { "Ref": "AWS::StackName" },
" --resource LaunchCongig",
" --region ", { "Ref" : "AWS::Region" },"\n",
"yum -y install wget\n",
"# Get the CloudWatch Logs agent\n",
"wget https://s3.amazonaws.com/aws-cloudwatch/downloads/latest/awslogs-agent-setup.py\n",
"# Install the CloudWatch Logs agent\n",
"python ./awslogs-agent-setup.py -n -r ", { "Ref" : "AWS::Region" }, " -c /etc/cwlogs.cfg || error_exit 'Failed to run CloudWatch Logs agent setup'\n",
"service awslogs start"
]]}
After SSHing into the instance, I checked the file /var/log/cloud-init-output.log to see if everything was fine, but here is what I got:
+ wget https://s3.amazonaws.com/aws-cloudwatch/downloads/latest/awslogs-agent-setup.py
--2017-02-17 14:36:10-- https://s3.amazonaws.com/aws-cloudwatch/downloads/latest/awslogs-agent-setup.py
Resolving s3.amazonaws.com (s3.amazonaws.com)... 52.216.226.59
Connecting to s3.amazonaws.com (s3.amazonaws.com)|52.216.226.59|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 47998 (47K) [text/x-python]
Saving to: ‘awslogs-agent-setup.py’
0K .......... .......... .......... .......... ...... 100% 196K=0.2s
2017-02-17 14:36:10 (196 KB/s) - ‘awslogs-agent-setup.py’ saved [47998/47998]
+ python ./awslogs-agent-setup.py -n -r eu-west-1 -c /etc/cwlogs.cfg
Step 1 of 5: Installing pip ...Traceback (most recent call last):
File "./awslogs-agent-setup.py", line 1144, in <module>
main()
File "./awslogs-agent-setup.py", line 1140, in main
setup.setup_artifacts()
File "./awslogs-agent-setup.py", line 693, in setup_artifacts
self.install_pip()
File "./awslogs-agent-setup.py", line 600, in install_pip
fail("Could not install pip. Please try again or see " + AGENT_SETUP_LOG_FILE + " for more details")
TypeError: fail() takes exactly 2 arguments (1 given)
+ error_exit 'Failed to run CloudWatch Logs agent setup'
/var/lib/cloud/instance/scripts/part-001: line 8: error_exit: command not found
Feb 17 14:36:12 cloud-init[2798]: util.py[WARNING]: Failed running /var/lib/cloud/instance/scripts/part-001 [127]
Feb 17 14:36:12 cloud-init[2798]: cc_scripts_user.py[WARNING]: Failed to run module scripts-user (scripts in /var/lib/cloud/instance/scripts)
Feb 17 14:36:12 cloud-init[2798]: util.py[WARNING]: Running module scripts-user (<module 'cloudinit.config.cc_scripts_user' from '/usr/lib/python2.7/dist-packages/cloudinit/config/cc_scripts_user.pyc'>) failed
Cloud-init v. 0.7.6 finished at Fri, 17 Feb 2017 14:36:12 +0000. Datasource DataSourceEc2. Up 85.78 seconds
What is wrong with this script? Is there any other way to install the agent?
Thank you.
EDIT:
I figured that this was maybe because the python-pip package didn't get installed, so I added this to the UserData:
"yum -y install python-pip\n",
After that I deployed the template again and, strangely, got the same error.
I am using an Amazon ECS-optimized AMI.
I solved the problem by installing the agent directly with yum install awslogs:
"UserData":{ "Fn::Base64" : {
"Fn::Join" : ["", [
"#!/bin/bash -xe\n",
"yum -y install aws-cfn-bootstrap\n",
"/opt/aws/bin/cfn-init -v",
" --stack ", { "Ref": "AWS::StackName" },
" --resource launchConfig",
" --region ", { "Ref" : "AWS::Region" },"\n",
"yum -y install awslogs\n",
"service awslogs start"
]]}
Here is the output from the log file:
Installed:
awslogs.noarch 0:1.1.2-1.10.amzn1
Dependency Installed:
aws-cli.noarch 0:1.11.29-1.45.amzn1
aws-cli-plugin-cloudwatch-logs.noarch 0:1.3.3-1.15.amzn1
freetype.x86_64 0:2.3.11-15.14.amzn1
libjpeg-turbo.x86_64 0:1.2.90-5.14.amzn1
mailcap.noarch 0:2.1.31-2.7.amzn1
python27-botocore.noarch 0:1.4.86-1.62.amzn1
python27-colorama.noarch 0:0.2.5-1.7.amzn1
python27-dateutil.noarch 0:2.1-1.3.amzn1
python27-docutils.noarch 0:0.11-1.15.amzn1
python27-futures.noarch 0:3.0.3-1.3.amzn1
python27-imaging.x86_64 0:1.1.6-19.9.amzn1
python27-jmespath.noarch 0:0.9.0-1.11.amzn1
python27-ply.noarch 0:3.4-3.12.amzn1
python27-pyasn1.noarch 0:0.1.7-2.9.amzn1
python27-rsa.noarch 0:3.4.1-1.8.amzn1
Complete!
+ service awslogs start
Starting awslogs: [ OK ]
Cloud-init v. 0.7.6 finished at Fri, 17 Feb 2017 15:33:42 +0000. Datasource DataSourceEc2. Up 83.47 seconds
Everything works fine this way. Hope that will help someone someday!
For ECS specifically, see Using CloudWatch Logs with Container Instances in the EC2 Container Service documentation for details on configuring CloudWatch Logs. The documentation recommends using yum install -y awslogs instead of the Python install script.
The documentation provides a complete sample in the Configuring CloudWatch Logs at Launch with User Data section.
In your case, since you're already managing your config files with cfn-init and AWS::CloudFormation::Init metadata, you don't need any complex parsing of config files in your User-Data script, but you can still use the documentation's script as a reference. One thing worth adding to your User-Data script is chkconfig awslogs on, so the service comes back up after a reboot (see the sketch below).
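Putting that together, a minimal sketch of the resulting User-Data script (the package and service names are the ones from the accepted fix above):
#!/bin/bash -xe
# install the CloudWatch Logs agent from the Amazon Linux package repo
yum -y install awslogs
# ensure the service is started again after a reboot
chkconfig awslogs on
service awslogs start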

Is there a way to switch cwd by changing environment in PM2 - node.js

I am using PM2 to manage the execution of a couple of micro-apps on node.
Goal:
I would like to be able to automatically switch settings and the cwd value based on the environment the app is executing in.
For example: on my local machine CWD should be ~/user/pm2, while on the server it needs to be E:\Programs\PM2.
Is there any way to do this using JSON config options with PM2? Is there a better way to manage the variables for different environments?
You can save a shell script, say pm2_dev.sh, containing the cd command as its first line:
#!/bin/bash
cd /foo/bar
pm2-dev run my-app.js
Or you can pass the target directory as an argument to your script:
# pm2_dev.sh ~/user/pm2
In that case the file should be:
#!/bin/bash
cd $1
pm2-dev run my-app.js
If you do not want to change the environment with a shell script, you can follow the approach from the documentation:
{ "apps" : [{
"script" : "worker.js",
"watch" : true,
"env": {
"NODE_ENV": "development",
},
"env_production" : {
"NODE_ENV": "production"
} },{
"name" : "api-app",
"script" : "api.js",
"instances" : 4,
"exec_mode" : "cluster" }] }
When running your application you should use the --env option, as described in the docs:
--env <environment_name>: specify environment to get specific env variables (for JSON declaration)
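For example, assuming the JSON above is saved as ecosystem.json (the file name is just an illustration), you would start the apps with the production variables like this:
pm2 start ecosystem.json --env production
Without --env, pm2 falls back to the plain env block, so NODE_ENV would be development here.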
Finally, you can wrap the configuration in a JS module that conditionally returns parameters based on the current environment:
module.exports = (function (env) {
  if (env === 'development')
    return { folder: '~/user/pm2' };
  else if (env === 'production')
    return { folder: 'E:\\Programs\\PM2' }; // backslashes must be escaped in JS strings
}(process.env.NODE_ENV));
Then you can require the config file wherever you need it and be sure it always returns the correct values.
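A sketch of a consumer of that module (the file name config.js and the chdir call are assumptions for illustration):
// app bootstrap: load the per-environment settings
var os = require('os');
var config = require('./config'); // the module above, assumed saved as config.js
// note: Node does not expand '~', so resolve it against the home directory first
var folder = config.folder.replace(/^~/, os.homedir());
process.chdir(folder); // cwd now matches the current NODE_ENV
console.log('cwd switched to', process.cwd());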

You are starting 1 processes in fork_mode without load balancing. To enable it remove -x option

I have the following application definition:
{
  "apps": [
    {
      "name": "dbm",
      "script": "node_modules/core-dbm/src/app.js",
      "args": "--conf=configuration/local.js --dev",
      "instances": 1,
      "exec_mode": "fork",
      "env": {
        "NODE_ENV": "development"
      },
      "env_production": {
        "NODE_ENV": "production"
      },
      "autorestart": false
    }
  ]
}
When I start this configuration, I get the following warning:
You are starting 1 processes in fork_mode without load balancing. To enable it remove -x option.
I don't understand what it is trying to tell me. Obviously, I don't need load balancing with a single process and I don't have a -x option anywhere.
pm2 does not properly check the value of instances: even though 1 is a perfectly valid setting for running a single process in fork mode, it still triggers this warning. To get rid of it, just remove the instances setting (see the sketch below).
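For reference, here is the same app definition with the instances key dropped (everything else unchanged):
{
  "apps": [
    {
      "name": "dbm",
      "script": "node_modules/core-dbm/src/app.js",
      "args": "--conf=configuration/local.js --dev",
      "exec_mode": "fork",
      "env": {
        "NODE_ENV": "development"
      },
      "env_production": {
        "NODE_ENV": "production"
      },
      "autorestart": false
    }
  ]
}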

MySQL import hangs on Vagrant CoreOS box on Mac

I have a local development setup using the following:
Mac Yosemite 10.10.3
Vagrant 1.7.3
CoreOS alpha version 681.0.0
2 Docker containers, one for Apache/PHP and another for MySQL, both based on Ubuntu 12.10
It's set up to sync the local dev directory ~/Sites to the Vagrant box using NFS, since my working directories as well as the MySQL data directories live there (~/Sites/.coreos-databases/mysql). From what I have read this is not the best type of setup, but it has worked for me for quite some time, as well as for others at work.
Recently I have not been able to import any database dumps into this setup. The import starts and hangs approximately halfway through the process. It happens on the command line as well as with Sequel Pro. It does import some of the tables, but freezes at exactly the same spot every time. The size of the dump doesn't seem to matter: the one I have been attempting is only 104 KB, and someone else is having the same issue with a 100 MB+ dump, freezing at about the halfway point too.
My Vagrantfile:
Vagrant.configure("2") do |config|
  # Define the CoreOS box
  config.vm.box = "coreos-alpha"
  config.vm.box_url = "http://alpha.release.core-os.net/amd64-usr/current/coreos_production_vagrant.json"

  # Define a static IP
  config.vm.network "private_network",
    ip: "33.33.33.77"

  # Share the current folder via NFS
  config.vm.synced_folder ".", "/home/core/sites",
    id: "core",
    :nfs => true,
    :mount_options => ['nolock,vers=3,udp,noatime']

  # Provision docker with shell
  # config.vm.provision
  config.vm.provision "shell",
    path: ".coreos-devenv/scripts/provision-docker.sh"
end
Dockerfile for mysql:
# Start with Ubuntu base
FROM ubuntu:12.10
# Install some basics
RUN apt-get update
# Install mysql
RUN apt-get install -y mysql-server
# Clean up after install
RUN apt-get clean && rm -rf /var/lib/apt/lists/*
# Add a grants file to set up remote user
# and disable the root user's remote access.
ADD grants.sql /etc/mysql/
# Add a conf file for correcting "listen"
ADD listen.cnf /etc/mysql/conf.d/
# Run mysqld on standard port
EXPOSE 3306
ENTRYPOINT ["/usr/sbin/mysqld"]
CMD ["--init-file=/etc/mysql/grants.sql"]
I 'vagrant ssh' in and run dmesg, and this is what it spits out after it freezes:
[ 465.504357] nfs: server 33.33.33.1 not responding, still trying
[ 600.091356] INFO: task mysqld:1501 blocked for more than 120 seconds.
[ 600.092388] Not tainted 4.0.3 #2
[ 600.093277] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 600.094442] mysqld D ffff880019dbfbc8 0 1501 939 0x00000000
[ 600.095953] ffff880019dbfbc8 ffffffff81a154c0 ffff88001ec61910 ffff880019dbfba8
[ 600.098871] ffff880019dbffd8 0000000000000000 7fffffffffffffff 0000000000000002
[ 600.101594] ffffffff8150b4e0 ffff880019dbfbe8 ffffffff8150ad57 ffff88001ed5eb18
[ 600.103794] Call Trace:
[ 600.104376] [<ffffffff8150b4e0>] ? bit_wait+0x50/0x50
[ 600.105934] [<ffffffff8150ad57>] schedule+0x37/0x90
[ 600.107505] [<ffffffff8150da7c>] schedule_timeout+0x20c/0x280
[ 600.108369] [<ffffffff8101d799>] ? read_tsc+0x9/0x10
[ 600.109370] [<ffffffff810d731e>] ? ktime_get+0x3e/0xa0
[ 600.110353] [<ffffffff8101d799>] ? read_tsc+0x9/0x10
[ 600.111327] [<ffffffff810d731e>] ? ktime_get+0x3e/0xa0
[ 600.112347] [<ffffffff8150a31c>] io_schedule_timeout+0xac/0x130
[ 600.113368] [<ffffffff810a9ee7>] ? prepare_to_wait+0x57/0x90
[ 600.114358] [<ffffffff8150b516>] bit_wait_io+0x36/0x50
[ 600.115332] [<ffffffff8150b145>] __wait_on_bit+0x65/0x90
[ 600.116343] [<ffffffff81146072>] wait_on_page_bit+0xc2/0xd0
[ 600.117453] [<ffffffff810aa360>] ? autoremove_wake_function+0x40/0x40
[ 600.119304] [<ffffffff81146179>] filemap_fdatawait_range+0xf9/0x190
[ 600.120646] [<ffffffff81152ffe>] ? do_writepages+0x1e/0x40
[ 600.121346] [<ffffffff81147f96>] ? __filemap_fdatawrite_range+0x56/0x70
[ 600.122397] [<ffffffff811480bf>] filemap_write_and_wait_range+0x3f/0x70
[ 600.123460] [<ffffffffa0207b1e>] nfs_file_fsync_commit+0x23e/0x3c0 [nfs]
[ 600.124399] [<ffffffff811e7bf0>] vfs_fsync_range+0x40/0xb0
[ 600.126163] [<ffffffff811e7cbd>] do_fsync+0x3d/0x70
[ 600.127092] [<ffffffff811e7f50>] SyS_fsync+0x10/0x20
[ 600.128086] [<ffffffff8150f089>] system_call_fastpath+0x12/0x17
Any ideas as whats going on here?
I am also using this same setup. Vagrant defaults to UDP, so removing that option from your mount seems to work. I haven't tested it extensively, but I didn't run into the MySQL issues you had.
config.vm.synced_folder ".", "/home/core/sites",
  id: "core",
  nfs_version: "4",
  :nfs => true,
  :mount_options => ['nolock,noatime']
This worked for me. YMMV.
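To confirm what the share actually mounted with, a quick sanity check from inside the VM (the path comes from the Vagrantfile above):
# inside the CoreOS box, after a vagrant reload
mount | grep /home/core/sites
# the options should now list vers=4 and proto=tcp instead of vers=3 and udp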

rabbitmq 3.3.4 shovel configuration is crashing start process

I'm trying to configure the shovel plugin via the config file (running in Docker) but I get this error:
BOOT FAILED
===========
Error description:
{error,{failed_to_cluster_with,[rabbit@dalmacpmfd57],
"Mnesia could not connect to any nodes."}}
The config is set up this way because the destination for the shovel will be created on demand when a dev environment is spun up; the source is a permanent RabbitMQ instance that the new dev environment will attach to.
Here is the config file contents:
[
  {rabbitmq_shovel, [
    {shovels, [
      {indexer_replica_static, [
        {sources, [
          {broker, ["amqp://guest:guest@rabbitmq/newdev"]},
          {declarations, [
            {'queue.declare', [{queue, <<"Indexer_Replica_Static">>}, durable]},
            {'queue.bind', [{exchange, <<"Indexer">>},
                            {queue, <<"Indexer_Replica_Static">>}]}
          ]}
        ]},
        {destinations, [
          {broker, "amqp://"},
          {declarations, [
            {'exchange.declare', [{exchange, <<"Indexer_Replica_Static">>},
                                  {type, <<"fanout">>}, durable]},
            {'queue.declare', [{queue, <<"Indexer_Replica_Static">>}, durable]},
            {'queue.bind', [{exchange, <<"Indexer_Replica_Static">>},
                            {queue, <<"Indexer_Replica_Static">>}]}
          ]}
        ]},
        {queue, <<"Indexer_Replica_Static">>},
        {prefetch_count, 0},
        {ack_mode, on_confirm},
        {publish_properties, [{delivery_mode, 2}]},
        {reconnect_delay, 2.5}
      ]}
    ]},
    {reconnect_delay, 2.5}
  ]}
].
[UPDATE]
This is being run in Docker, but since I couldn't debug the issue there I tried booting up Rabbit locally with the same config file. I noticed in the logs that the config file variable I set (RABBITMQ_CONFIG_FILE) isn't reflected in the log, and the shovel settings haven't been applied (no surprise, huh). I verified the variable with an echo statement and the correct path is displayed: /dev/rabbitmq_server-3.3.4/rabbitmq
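Roughly, the check looked like this (the path is the one from the update; launching via sbin/rabbitmq-server is an assumption based on the sbin paths in the report below):
# in the shell used to launch the broker
echo $RABBITMQ_CONFIG_FILE    # prints /dev/rabbitmq_server-3.3.4/rabbitmq
sbin/rabbitmq-server          # yet the startup report below still shows: config file(s) : (none)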
=INFO REPORT==== 3-Sep-2014::15:30:37 ===
node           : rabbit@dalmacpmfd57
home dir       : /Users/e002678
config file(s) : (none)
cookie hash    : n6vhh8tY7Z+uR2DV6gcHUg==
log            : /usr/local/rabbitmq_server-3.3.4/sbin/../var/log/rabbitmq/rabbit@dalmacpmfd57.log
sasl log       : /usr/local/rabbitmq_server-3.3.4/sbin/../var/log/rabbitmq/rabbit@dalmacpmfd57-sasl.log
database dir   : /usr/local/rabbitmq_server-3.3.4/sbin/../var/lib/rabbitmq/mnesia/rabbit@dalmacpmfd57
Thanks!