PM2-health: can I use the pm2-health module for sending email alerts/notifications?

I have a Node.js application which runs on PM2, and I need to be able to send email notifications whenever a crash/restart occurs. My idea is to monitor the application for crashes and trigger a mail action from pm2-health. The documentation of the pm2-health module is here, but I'm unable to use it for sending email alerts. Can anyone explain how to use it for this purpose?
P.S.: Also, it would be great if you could explain the SMTP configuration for Gmail. (I have configured Postfix to use the Gmail SMTP according to this and it works fine for a test Gmail account, but it doesn't work with pm2-health.)

This is how I got pm2-health working with my Gmail account:
Install pm2-health module:
pm2 install pm2-health
Open PM2 module config file:
vim ~/.pm2/module_conf.json
Update it with the Gmail account’s SMTP parameters:
{
  "pm2-health": {
    "smtp": {
      "host": "smtp.gmail.com",
      "port": 465,
      "user": "EXAMPLE_sender@gmail.com",
      "password": "PASSWORD",
      "secure": true,
      "disabled": false
    },
    "mailTo": "NOTIFICATION_RECIPIENT_EMAIL_ADDRESS",
    "replyTo": "EXAMPLE_SENDER@gmail.com",
    "events": [
      "exit"
    ],
    "exceptions": true,
    "messages": true,
    "messageExcludeExps": [],
    "metric": {},
    "metricIntervalS": 60,
    "aliveTimeoutS": 300,
    "addLogs": false,
    "appsExcluded": [],
    "snapshot": {
      "url": "",
      "token": "",
      "auth": {
        "user": "",
        "password": ""
      },
      "disabled": false
    }
  },
  "module-db-v2": {
    "pm2-health": {}
  }
}
Save and close it
Restart pm2-health:
pm2 restart pm2-health
Test it by restarting one of your PM2-managed Node processes. You should receive an email about that event.
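For example, assuming your application is registered with PM2 under a placeholder name like my-app (not from the question), restarting it should fire the configured "exit" event:
pm2 restart my-app
pm2 logs pm2-health
If no email arrives, the pm2-health module log shown by the second command is the first place to look.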

For anyone trying to use this with a 2FA-enabled Gmail account, you need to use an App Password. More information here: https://support.google.com/accounts/answer/185833

Related

Firebase Local Emulator Suite does not find the 'database.rules.json' file when started from CLI

The firebase.json file:
{
  "emulators": {
    "auth": {
      "port": 9099
    },
    "database": {
      "port": 9000,
      "rules": "database.rules.json",
      "target": "default"
    },
    "ui": {
      "enabled": true
    }
  }
}
There is a database.rules.json file that sits next to the firebase.json file, containing the JSON rules which work correctly on the live database.
The files created automatically by the command:
firebase init emulators
are all in the project root directory. I copied the rules from the live project and created the database.rules.json file right next to the other files, like so:
ROOT
|
|.firebaserc
|firebase.json
|database.rules.json
However, the CLI command:
$ firebase emulators:start
returns the following error:
⚠ database: Did not find a Realtime Database rules file specified in a firebase.json config file.
The emulator runs all right, but the rules are not taken into account.
Why is the 'database.rules.json' NOT found by the emulator? Is this a path issue?
The reason it did not work as expected was that a local database should have been initialized, but was not.
To initiate a local database, the following command should be used:
firebase init database
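Running that command typically adds a top-level database section to firebase.json pointing at the rules file; a sketch, assuming the default file name:
{
  "database": {
    "rules": "database.rules.json"
  }
}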
As I had an existing database (created in the Realtime Database console) which I had linked with my local project when using the interactive command:
firebase init emulators
I thought I could then start the emulator and test away. Not so.
Failing to initialize a database with a CLI command seems to trigger the CLI response:
⚠ database: Did not find a Realtime Database rules file specified in a firebase.json config file
When what is really meant is:
Duh, there is no local database. You must create a local database first, even if you link this project to an existing remote
Try something like this in firebase.json:
{
  "functions": {
    "source": "functions"
  },
  "firestore": {
    "rules": "./firestore.rules"
  },
  "database": {
    "rules": "./database.rules.json"
  },
  "emulators": {
    "auth": {
      "port": 9099
    },
    "functions": {
      "port": 5001
    },
    "firestore": {
      "port": 8080
    },
    "database": {
      "port": 9000
    },
    "hosting": {
      "port": 5000
    },
    "pubsub": {
      "port": 8085
    },
    "storage": {
      "port": 9199
    },
    "ui": {
      "enabled": true
    }
  }
}

Packer custom image build failed with ssh authentication error

I'm trying to build a custom image for an AWS EKS managed node group. Note: my custom image (Ubuntu) already has MFA and private-key-based authentication enabled.
I cloned the GitHub repository for the EKS-related build from the URL below.
git clone https://github.com/awslabs/amazon-eks-ami && cd amazon-eks-ami
Next, I made a few changes in order to run the Makefile:
cat eks-worker-al2.json
{
  "variables": {
    "aws_region": "eu-central-1",
    "ami_name": "template",
    "creator": "{{env `USER`}}",
    "encrypted": "false",
    "kms_key_id": "",
    "aws_access_key_id": "{{env `AWS_ACCESS_KEY_ID`}}",
    "aws_secret_access_key": "{{env `AWS_SECRET_ACCESS_KEY`}}",
    "aws_session_token": "{{env `AWS_SESSION_TOKEN`}}",
    "binary_bucket_name": "amazon-eks",
    "binary_bucket_region": "eu-central-1",
    "kubernetes_version": "1.20",
    "kubernetes_build_date": null,
    "kernel_version": "",
    "docker_version": "19.03.13ce-1.amzn2",
    "containerd_version": "1.4.1-2.amzn2",
    "runc_version": "1.0.0-0.3.20210225.git12644e6.amzn2",
    "cni_plugin_version": "v0.8.6",
    "pull_cni_from_github": "true",
    "source_ami_id": "ami-12345678",
    "source_ami_owners": "00012345",
    "source_ami_filter_name": "template",
    "arch": null,
    "instance_type": null,
    "ami_description": "EKS Kubernetes Worker AMI with AmazonLinux2 image",
    "cleanup_image": "true",
    "ssh_interface": "",
    "ssh_username": "nandu",
    "ssh_private_key_file": "/home/nandu/.ssh/template_rsa.ppk",
    "temporary_security_group_source_cidrs": "",
    "security_group_id": "sg-08725678910",
    "associate_public_ip_address": "",
    "subnet_id": "subnet-01273896789",
    "remote_folder": "",
    "launch_block_device_mappings_volume_size": "4",
    "ami_users": "",
    "additional_yum_repos": "",
    "sonobuoy_e2e_registry": ""
After adding the user and private key, the build fails with the error below.
Logs:
amazon-ebs: Error waiting for SSH: Packer experienced an authentication error when trying to connect via SSH. This can happen if your username/password are wrong. You may want to double-check your credentials as part of your debugging process. original error: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain.
For me, the fix was simply to change the AWS region (or remove the aws_region setting) in the Packer template.
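If that alone doesn't help, a general Packer debugging step (not part of the original answer) is to enable verbose logging and the step-by-step debug mode, so you can see exactly which SSH user and key Packer tries, using the template file name from the question:
PACKER_LOG=1 packer build -debug eks-worker-al2.json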

Can you run/host a Firebase emulator that is accessible from a public IP?

I am using the firebase emulator to host some GCF functions on my machine. They are configured to run/host on localhost:5001. This works great.
I am now using Google Tasks in my app, and the task I have needs to call a GCF function. Tasks does not run locally, so I've set up my machine to be accessed via a public IP address and opened port 5001 to allow traffic in on that port. The idea is that the Google task can call the function on my machine, so I can properly test.
I cannot seem to get the emulator to work with any outside public IP address. Is this just not possible or if it is, how do I configure?
UPDATED: there isn't much code to review here... I just want to know if you can configure the emulator to listen on a public port. Here is the default firebase.json file:
{
  "functions": {
    "predeploy": [
      "npm --prefix \"$RESOURCE_DIR\" run lint"
    ],
    "source": "functions"
  },
  "emulators": {
    "functions": {
      "port": 5001
    },
    "pubsub": {
      "port": 8085
    },
    "ui": {
      "enabled": true
    }
  }
}
In my research, I found you can add the "host" attribute to allow your local network to access the emulator using "host": "0.0.0.0", so it looks like the config below. This works to allow access via my local IP, like http://192.168.1.216:5001.
{
  "functions": {
    "predeploy": [
      "npm --prefix \"$RESOURCE_DIR\" run lint"
    ],
    "source": "functions"
  },
  "emulators": {
    "functions": {
      "port": 5001,
      "host": "0.0.0.0"
    },
    "pubsub": {
      "port": 8085
    },
    "ui": {
      "enabled": true
    }
  }
}
Now, using my public IP, which is set up using NAT forwarding, I can't access the site http://108.49.78.181:5001.
There isn't code here so much as a question: is this even possible? If it is, I'd love some example of how to do it.
Accessing the function should be like this
http://localhost:5001/{project_name}/us-central1/{function_name}
{project_name} must be replaced by the name of your project in .firebaserc
{function_name} must be replaced by the name of the function exported from your js file
Requesting http://localhost:5001/ directly should return Not Found
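For example, with a hypothetical project id my-project and an exported function named helloWorld, the URL would be:
http://localhost:5001/my-project/us-central1/helloWorld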
Access from external IP
Using "host": "0.0.0.0" means that the server will listen on all IP addresses.
By default Firebase accepts only localhost, but you can change this by using the host option in firebase.json for each specific emulator:
"hosting": {
"port": 5000,
"host": "0.0.0.0"
},
"functions": {
"port": 5001,
"host": "0.0.0.0"
}
and likewise for the other emulators (firestore, auth, ui, database, ...).
Start emulators with
firebase emulators:start
Don't forget to set up port forwarding on your router to the local IP of the computer running the emulator, and to allow the port through your firewall.
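Once the host is set to 0.0.0.0 and the router/firewall forward the port, you can sanity-check reachability from an outside machine, for example (public IP taken from the question; the project id and function name are placeholders):
curl http://108.49.78.181:5001/my-project/us-central1/helloWorld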
Yes, you can access this from a public IP address, or from a private IP address within a private network. All you need to do is change the host in the firebase.json file.
{
  "firestore": {
    "rules": "firestore.rules",
    "indexes": "firestore.indexes.json"
  },
  "emulators": {
    "functions": {
      "host": "192.168.0.240",
      "port": 5001
    },
    "firestore": {
      "host": "192.168.0.240",
      "port": 8080
    },
    "ui": {
      "host": "192.168.0.240",
      "enabled": true
    },
    "pubsub": {
      "host": "192.168.0.240",
      "port": 8085
    }
  }
}
Now,
"host": "192.168.0.240",
could be any IP address or host, like localhost, 10.20.0.25, or a private IP as mentioned above.

Get host status by CheckMK Web-API

I'm trying to get the status of a host with the CheckMK Web API. Can someone point me in the right direction on how to get this data?
We're currently using CheckMK enterprise 1.4.0.
I've tried:
https://<monitoringhost.tld>/<site>/check_mk/webapi.py?action=get_host&_username=<user>&_secret=<secret>&output_format=json&effective_attributes=1&request={"hostname": "<hostname>"}
But the response does not have any relevant information about the host itself (e.g. state up/down, uptime, etc.).
{
  "result": {
    "attributes": {
      "network_scan": {
        "scan_interval": 86400,
        "exclude_ranges": [],
        "ip_ranges": [],
        "run_as": "api"
      },
      "tag_agent": "cmk-agent",
      "snmp_community": null,
      "ipv6address": "",
      "alias": "",
      "management_protocol": null,
      "site": "testjke",
      "tag_address_family": "ip-v4-only",
      "tag_criticality": "prod",
      "contactgroups": [
        true,
        []
      ],
      "network_scan_result": {
        "start": null,
        "state": null,
        "end": null,
        "output": ""
      },
      "parents": [],
      "management_address": "",
      "tag_networking": "lan",
      "ipaddress": "",
      "management_snmp_community": null
    },
    "hostname": "<host>",
    "path": ""
  },
  "result_code": 0
}
The Web API is only for getting/setting the configuration of a host or other objects. If you want to get the live status of a host, use Livestatus.
If you have enabled Livestatus on port 6557 (the default), you can query the status of a host over the network. If you are logged into a shell locally, you can use 'lq'.
OMD[mysite]:~$ lq "GET hosts\nColumns: name"
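To get the actual up/down state that the question asks about, you can add the state column and a filter (a sketch using standard Livestatus hosts-table columns; replace the host name):
OMD[mysite]:~$ lq "GET hosts\nColumns: name state\nFilter: name = myhost"
The state column is 0 for UP, 1 for DOWN, and 2 for UNREACHABLE.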
Why:
The CheckMK Web API is for accessing WATO. WATO is the source for creating the Nagios configuration. Nagios does the monitoring of the hosts, and the Livestatus API is an extension of the Nagios core.
http://<monitoringhost.tld>/<site>/check_mk/view.py?view_name=allhosts&output_format=csv
You can use all the views that you see in the web UI by adding output_format=[csv|json|python].
You will get the data of the table that you see.
You also need to add the credentials as seen in your question.
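Combining that with the credential parameters from the question, the request would look roughly like this (placeholders as in the question):
http://<monitoringhost.tld>/<site>/check_mk/view.py?view_name=allhosts&_username=<user>&_secret=<secret>&output_format=json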

sensu mailer and pipe

I'm switching over from Nagios to Sensu. I'm using Chef to automate the process. Everything is working great except the mailer; actually, I've narrowed it down to the "pipe" that is supposed to redirect the JSON output from the check to the handler. It doesn't. When I use
{
  "handlers": {
    "email": {
      "type": "pipe",
      "command": "mail -s \"sensu alert\" alert@example.com",
      "severities": [
        "ok",
        "critical"
      ]
    }
  }
}
I get a blank email. When I use the mailer.rb handler, I get no email whatsoever. I made sure to include mail to and mail from in the mailer.json. I see the logs have the correct information for the handler and email parameters.
So I've concluded the "pipe" isn't working. Can anybody help with that? I would greatly appreciate it. I wish there was a Sensu community, but it may be too new to have one.
With regards to the mailer.rb, have you checked the server logs (by default in /var/log/sensu/sensu-server.log) for errors? If there is an error in any of the handlers, they will show up in those logs.
mailer.rb requires several gems in order to run. To find out if you are using sensu's embedded ruby or not, check /etc/default/sensu for EMBEDDED_RUBY. If that is false, you will need to make sure your system ruby has all those gems (sensu-handler, mail, timeout) installed. If it is set to true, do the same with sensu's embedded ruby:
/opt/sensu/embedded/bin/gem list
Make sure the gems are installed, try again, and check the sensu-server.log for errors.
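For example, to install the gems mentioned above into Sensu's embedded Ruby (a sketch; the timeout library ships with Ruby's standard library and usually needs no separate install):
/opt/sensu/embedded/bin/gem install sensu-handler mail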
If you have more issues, there is in fact a community - check out #sensu on Freenode.
You can write your own event data JSON and pass it through a pipe as follows:
cat event.json | /opt/sensu/embedded/bin/ruby mailer.rb
The easiest way to get the event.json file is from the sensu-server.log.
To use mailer.rb you need your own mail server! If you post the Sensu server logs, I think I can help you.
I've done some testing, and the mail-into-pipe approach does not work with GNU mail/mailx (I assume you're using Ubuntu or something similar?).
Two solutions:
1) install BSD mail:
sudo apt-get install bsd-mailx
2) Or modify the command slightly: to get mail to read from stdin you'll need to do something like:
{
  "handlers": {
    "email": {
      "type": "pipe",
      "command": "echo $(cat) > /tmp/mail.txt; mail -s \"sensu alert\" alert@example.com < /tmp/mail.txt"
    }
  }
}
The idea is normally that you read the event json from stdin within a scripting language and then pull out bits of the event.json that you want to send. The above will e-mail out the entire json file.
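As a minimal sketch of that idea (not the original mailer.rb; it assumes jq is installed and uses a placeholder recipient), a pipe handler can read the event from stdin and mail only the fields it wants:
#!/bin/sh
# Read the whole event JSON from stdin, extract a few fields, and mail them.
EVENT=$(cat)
CLIENT=$(echo "$EVENT" | jq -r '.client.name')
CHECK=$(echo "$EVENT" | jq -r '.check.name')
echo "$EVENT" | jq -r '.check.output' | mail -s "sensu alert: $CLIENT/$CHECK" alert@example.com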
You can use the Sensu mailer handler. Please find the setup steps below:
sensu-install -p sensu-plugins-mailer
apt-get install postfix
/etc/init.d/postfix start
cd /etc/sensu/conf.d/
When we install this plugin we get 3 Ruby files.
This time we are using this file: handler-mailer.rb
First we need to create the handler config file in /etc/sensu/conf.d/:
vim handler-mailer.json
{
  "mailer": {
    "admin_gui": "http://127.0.0.1:3000/",
    "mail_from": "localhost",
    "mail_to": ["yourmailid-1", "yourmailid-2"],
    "smtp_address": "localhost",
    "smtp_port": "25"
  }
}
Now we need to create a mail handler file in /etc/sensu/conf.d/:
{
  "handlers": {
    "mymailer": {
      "type": "pipe",
      "command": "/opt/sensu/embedded/bin/handler-mailer.rb",
      "severities": [
        "critical",
        "unknown"
      ]
    }
  }
}
In the above file, the handler name is mymailer; we need to use this handler name in our checks.
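For example, a check definition that routes its results through the mymailer handler could look like this (the check name and command are placeholders, not from the original answer):
{
  "checks": {
    "check-disk": {
      "command": "/opt/sensu/embedded/bin/check-disk-usage.rb",
      "subscribers": ["production"],
      "interval": 60,
      "handlers": ["mymailer"]
    }
  }
}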
Use bin/handler-mailer-mailgun.rb, bin/handler-mailer-ses.rb, or bin/handler-mailer.rb.
Example:
echo '{
  "id": "ef6b87d2-1f89-439f-8bea-33881436ab90",
  "action": "create",
  "timestamp": 1460172826,
  "occurrences": 2,
  "check": {
    "type": "standard",
    "total_state_change": 11,
    "history": ["0", "0", "1", "1", "2", "2"],
    "status": 2,
    "output": "No keepalive sent from client for 230 seconds (>=180)",
    "executed": 1460172826,
    "issued": 1460172826,
    "name": "keepalive",
    "thresholds": {
      "critical": 180,
      "warning": 120
    }
  },
  "client": {
    "timestamp": 1460172596,
    "version": "1.0.0",
    "socket": {
      "port": 3030,
      "bind": "127.0.0.1"
    },
    "subscriptions": [
      "production"
    ],
    "environment": "development",
    "address": "127.0.0.1",
    "name": "client-01"
  }
}' | /opt/sensu/embedded/bin/handler-mailer-mailgun.rb
Output:
mail -- sent alert for client-01/keepalive to your.email@example.com