Collecting JVM heap dump from AWS ElasticBeanstalk - amazon-elastic-beanstalk

I'm using AWS ElasticBeanstalk with Tomcat as the web server. I would like to debug and log the Java Virtual Machine's performance and crash reports, and write them to CloudWatch Logs.
Currently, AWS ElasticBeanstalk collects the logs created by the web server, the application server and the Elastic Beanstalk platform scripts, and you can use CloudWatch Logs as a centralized log system.
How can I collect my custom JVM logs in CloudWatch, as mentioned at the beginning?
Thanks.
Florin

I contacted the AWS support team and their answer was:
This can be done using an .ebextensions configuration file to install the awslogs package and define the log file, log_group_name and log_stream_name to stream to CloudWatch.
Kindly find below an example for enabling Catalina logs in CloudWatch for a Tomcat8 environment:
---.ebextensions/cwatch.config---
packages:
  yum:
    awslogs: []

files:
  "/etc/awslogs/awscli.conf":
    mode: "000600"
    owner: root
    group: root
    content: |
      [plugins]
      cwlogs = cwlogs
      [default]
      region = `{"Ref":"AWS::Region"}`

  "/etc/awslogs/awslogs.conf":
    mode: "000600"
    owner: root
    group: root
    content: |
      [general]
      state_file = /var/lib/awslogs/agent-state

      [/var/log/tomcat8]
      log_group_name = `{"Fn::Join":["-", [{ "Ref":"AWSEBEnvironmentName" }, "catalina"]]}`
      log_stream_name = {instance_id}_messages
      file = /var/log/tomcat8/catalina.out
      datetime_format = %b %d %H:%M:%S
      initial_position = start_of_file
      buffer_duration = 5000

      [/var/log/messages]
      log_group_name = `{"Fn::Join":["-", [{ "Ref":"AWSEBEnvironmentName" }, "syslog"]]}`
      log_stream_name = {instance_id}_messages
      file = /var/log/messages
      datetime_format = %b %d %H:%M:%S
      initial_position = start_of_file
      buffer_duration = 5000

      [/var/log/eb-activity.log]
      log_group_name = `{"Fn::Join":["-", [{ "Ref":"AWSEBEnvironmentName" }, "eb"]]}`
      log_stream_name = {instance_id}_eb-activity.log
      file = /var/log/eb-activity.log
      datetime_format = [%Y-%m-%dT%H:%M:%S.%3NZ]
      initial_position = start_of_file
      buffer_duration = 5000

      [/var/log/eb-cfn-init.log]
      log_group_name = `{"Fn::Join":["-", [{ "Ref":"AWSEBEnvironmentName" }, "eb"]]}`
      log_stream_name = {instance_id}_eb-cfn-init.log
      file = /var/log/eb-cfn-init.log
      datetime_format = [%Y-%m-%dT%H:%M:%S.%3NZ]
      initial_position = start_of_file
      buffer_duration = 5000

      [/var/log/eb-commandprocessor.log]
      log_group_name = `{"Fn::Join":["-", [{ "Ref":"AWSEBEnvironmentName" }, "eb"]]}`
      log_stream_name = {instance_id}_eb-commandprocessor.log
      file = /var/log/eb-commandprocessor.log
      datetime_format = [%Y-%m-%dT%H:%M:%S.%3NZ]
      initial_position = start_of_file
      buffer_duration = 5000

      [/var/log/eb-publish-logs.log]
      log_group_name = `{"Fn::Join":["-", [{ "Ref":"AWSEBEnvironmentName" }, "eb"]]}`
      log_stream_name = {instance_id}_eb-publish-logs.log
      file = /var/log/eb-publish-logs.log
      datetime_format = %Y-%m-%d %H:%M:%S,%3N
      initial_position = start_of_file
      buffer_duration = 5000

      [/var/log/eb-tools.log]
      log_group_name = `{"Fn::Join":["-", [{ "Ref":"AWSEBEnvironmentName" }, "eb"]]}`
      log_stream_name = {instance_id}_eb-tools.log
      file = /var/log/eb-tools.log
      datetime_format = %Y-%m-%d %H:%M:%S,%3N
      initial_position = start_of_file
      buffer_duration = 5000

commands:
  "01":
    command: chkconfig awslogs on
  "02":
    command: service awslogs restart
---end---
And here is a simpler version that adds a log file to the default CloudWatch Logs streaming config.
Warning: it does not set a retention policy, and the log group will not follow the environment's retention settings (e.g. "delete when the environment is deleted").
---.ebextensions/log.config---
files:
  /etc/awslogs/config/mylog.conf:
    owner: root
    group: root
    mode: "000644"
    content:
      Fn::Sub: |
        [/var/log/mylog.log]
        log_group_name=/aws/elasticbeanstalk/${AWSEBEnvironmentName}/var/log/mylog.log
        log_stream_name={instance_id}
        file=/var/log/mylog.log

commands:
  restart_awslogs:
    command: service awslogs restart || service awslogs start
---end---
Note: Do not use the sample provided in production. The goal of the sample is to illustrate the functionality.
I didn't check it, but if someone tests this solution, please comment and mark it as valid. If not, I'll delete my answer.
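Note that neither snippet produces JVM-level diagnostics by itself: the awslogs agent only ships files that something else is already writing. As a hedged sketch of how the JVM could be pointed at a file the agent can then stream, here is an example for a Tomcat8 environment; the option namespace, option name and paths are assumptions that should be checked against your platform version:

---.ebextensions/jvm-diagnostics.config---
option_settings:
  aws:elasticbeanstalk:container:tomcat:jvmoptions:
    "JVM Options": >-
      -XX:+PrintGCDetails
      -XX:+PrintGCDateStamps
      -Xloggc:/var/log/tomcat8/gc.log
      -XX:+HeapDumpOnOutOfMemoryError
      -XX:HeapDumpPath=/var/log/tomcat8/
---end---

With that in place, /var/log/tomcat8/gc.log can be streamed by adding one more section to awslogs.conf, following the same pattern as the catalina.out entry above.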

Related

Unable to build podman compatible containers using nix-build and dockerTools.buildImage

The following is invidious.nix, which builds a container that contains nix packages for Bash, Busybox and Invidious:
let
  # nixos-22.05 / https://status.nixos.org/
  pkgs = import (fetchTarball "https://github.com/NixOS/nixpkgs/archive/d86a4619b7e80bddb6c01bc01a954f368c56d1df.tar.gz") {};
in rec {
  docker = pkgs.dockerTools.buildImage {
    name = "invidious";
    contents = [ pkgs.busybox pkgs.bash pkgs.invidious ];
    config = {
      Cmd = [ "/bin/bash" ];
      Env = [];
      Volumes = {};
    };
  };
}
If I load the archive with docker load < result, Docker loads the image correctly.
docker load < result
14508d34fd29: Loading layer [==================================================>] 156.6MB/156.6MB
Loaded image: invidious:2nrcdxgz46isccfgyzdcbirs0vvqhp55
However, if I attempt the same thing using podman, I get the following error:
podman load < result
Error: payload does not match any of the supported image formats:
* oci: initializing source oci:/var/tmp/podman3824611648:: open /var/tmp/podman3824611648/index.json: not a directory
* oci-archive: loading index: open /var/tmp/oci1927542201/index.json: no such file or directory
* docker-archive: loading tar component manifest.json: archive/tar: invalid tar header
* dir: open /var/tmp/podman3824611648/manifest.json: not a directory
If I inspect the result, it does appear to have the correct format for an OCI container:
tar tvfz result
dr-xr-xr-x root/root 0 1979-12-31 19:00 ./
-r--r--r-- root/root 391 1979-12-31 19:00 027302622543ef251be6d3f2d616f98c73399d8cd074b0d1497e5a7da5e6c882.json
dr-xr-xr-x root/root 0 1979-12-31 19:00 669db3729b40e36a9153569b747788611e547f0b50a9f7d77107a04c6ddd887e/
-r--r--r-- root/root 3 1979-12-31 19:00 669db3729b40e36a9153569b747788611e547f0b50a9f7d77107a04c6ddd887e/VERSION
-r--r--r-- root/root 353 1979-12-31 19:00 669db3729b40e36a9153569b747788611e547f0b50a9f7d77107a04c6ddd887e/json
-r--r--r-- root/root 156579840 1979-12-31 19:00 669db3729b40e36a9153569b747788611e547f0b50a9f7d77107a04c6ddd887e/layer.tar
-r--r--r-- root/root 280 1979-12-31 19:00 manifest.json
-r--r--r-- root/root 128 1979-12-31 19:00 repositories
How do I get nix-build to create compliant containers that podman can read?
nix-build version: 2.10.3
podman version: 4.2.0
It turns out the version of podman I'm running can't read gzipped tar archives. The following works:
zcat result | podman load
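Since the failure comes from podman rejecting the gzip wrapper that dockerTools.buildImage puts around the archive, a small guard can decide how to load it. A sketch, assuming `result` is the usual nix-build output link:

```shell
# gzip archives start with the magic bytes 1f 8b; compare the first two bytes.
is_gzip() {
  [ "$(head -c 2 "$1" | od -An -tx1 | tr -d ' \n')" = "1f8b" ]
}

# Usage sketch (podman invocation assumed, not run here):
# if is_gzip result; then zcat result | podman load; else podman load < result; fi
```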

Can I concatenate openssl conf files?

If I have a file (my.cnf) containing
[v3_req]
subjectAltName = @alt_names
[alt_names]
DNS ....
DNS ....
and I concatenate it with openssl.cnf (cat openssl.cnf my.cnf > myopenssl.cnf), will openssl parse the new file and add the subjectAltName to the earlier [v3_req] section, or will it overwrite it (so that I lose the previous values in [v3_req])?
On OpenSSL 1.1.1 you can concatenate, and it will merge sections with the same name.
For example, the following configuration will result in a certificate request with basicConstraints = CA:TRUE and all three SAN entries: localhost, localhost.localdomain and 127.0.0.1.
...
[ req_ext ]
basicConstraints = CA:FALSE
[ req_ext ]
basicConstraints = CA:TRUE
subjectAltName = @alternate_names
[ alternate_names ]
DNS.1 = localhost
DNS.2 = localhost.localdomain
[ alternate_names ]
DNS.3 = 127.0.0.1
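An easy way to convince yourself on your own build is to generate a CSR from a concatenated config and inspect the resulting SANs. A sketch, assuming openssl is on the PATH; all files under /tmp are throwaway names:

```shell
# Base config: [req] wires in an extension section that references @alt_names.
cat > /tmp/base.cnf <<'EOF'
[ req ]
distinguished_name = dn
req_extensions = req_ext
[ dn ]
[ req_ext ]
subjectAltName = @alt_names
[ alt_names ]
DNS.1 = localhost
EOF

# Second config: another [alt_names] section that should be merged in.
cat > /tmp/extra.cnf <<'EOF'
[ alt_names ]
DNS.2 = localhost.localdomain
EOF

# Concatenate, generate a CSR, and print the SAN entries it ended up with.
cat /tmp/base.cnf /tmp/extra.cnf > /tmp/merged.cnf
openssl req -new -newkey rsa:2048 -nodes -keyout /tmp/key.pem \
  -subj "/CN=merge-test" -config /tmp/merged.cnf -out /tmp/csr.pem 2>/dev/null
openssl req -in /tmp/csr.pem -noout -text | grep 'DNS:'
```

If both DNS entries show up, your openssl build merges repeated sections the same way.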

airflow: All tasks finished but dag state is running

I'm using Airflow 2.0.0 with CeleryExecutor and mysql-8.0.22.
Every time we execute any DAG, irrespective of whether the task statuses are failed/success/mixed, the overall DAG state is always running.
Because of this, after some time the scheduler also crashes.
Airflow is installed at /root/
Here is the airflow.cfg:
[core]
dags_folder = /var/airflow/dags
executor = CeleryExecutor
sql_alchemy_conn = mysql://user:password@localhost:3306/airflow
[logging]
base_log_folder = /var/airflow/logs
[webserver]
base_url = http://localhost:8080
default_ui_timezone = UTC
web_server_host = 0.0.0.0
web_server_port = 8080
[celery]
celery_app_name = airflow.executors.celery_executor
worker_concurrency = 8
worker_log_server_port = 8793
broker_url = sqla+mysql://user:password@localhost:3306/celery
result_backend = db+mysql://user:password@localhost:3306/celery
flower_host = 0.0.0.0
flower_port = 5555
operation_timeout = 1.0
[scheduler]
child_process_log_directory = /var/airflow/logs/scheduler
Can somebody help?
Is your database running on the same node as your Airflow workers? The localhost connection strings look wrong for a CeleryExecutor setup.
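For what it's worth, with CeleryExecutor every scheduler and worker process must be able to reach both the metadata database and the broker over the network, so localhost in the connection strings only works in a strictly single-node deployment. A hedged sketch of the affected lines, where db-host is a placeholder for a hostname reachable from all nodes:

sql_alchemy_conn = mysql://user:password@db-host:3306/airflow
broker_url = sqla+mysql://user:password@db-host:3306/celery
result_backend = db+mysql://user:password@db-host:3306/celery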

Cygnus didn't update database

I want to subscribe Orion to send notifications to Cygnus; Cygnus will then save all the data in a MySQL database. I use this script to subscribe to the speed attribute of car1.
(curl 130.206.118.44:1026/v1/subscribeContext -s -S --header 'Content-Type: application/json' --header 'Accept: application/json' --header 'Fiware-Service: vehicles' --header 'Fiware-ServicePath: /4wheels' -d @- | python -mjson.tool) <<EOF
{
"entities": [
{
"type": "car",
"isPattern": "false",
"id": "car1"
}
],
"attributes": [
"speed",
"oil_level"
],
"reference": "http://192.168.1.49:5050/notify",
"duration": "P1M",
"notifyConditions": [
{
"type": "ONCHANGE",
"condValues": [
"speed"
]
}
],
"throttling": "PT1S"
}
EOF
But when I update the speed attribute of car1, Cygnus doesn't update the database.
Databases available:
mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
+--------------------+
3 rows in set (0.00 sec)
Some information about my cygnus service and my cygnus configuration (systemctl status cygnus):
cygnus.service - SYSV: cygnus
Loaded: loaded (/etc/rc.d/init.d/cygnus)
Active: active (exited) since Wed 2015-10-21 17:54:07 UTC; 8min ago
Process: 31566 ExecStop=/etc/rc.d/init.d/cygnus stop (code=exited, status=0/SUCCESS)
Process: 31588 ExecStart=/etc/rc.d/init.d/cygnus start (code=exited, status=0/SUCCESS)
Oct 21 17:54:05 cygnus systemd[1]: Starting SYSV: cygnus...
Oct 21 17:54:05 cygnus su[31593]: (to cygnus) root on none
Oct 21 17:54:07 cygnus cygnus[31588]: Starting Cygnus mysql... [ OK ]
Oct 21 17:54:07 cygnus systemd[1]: Started SYSV: cygnus.
agent_mysql.conf:
# main configuration
cygnusagent.sources = http-source
cygnusagent.sinks = mysql-sink
cygnusagent.channels = mysql-channel
# source configuration
cygnusagent.sources.http-source.channels = mysql-channel
cygnusagent.sources.http-source.type = org.apache.flume.source.http.HTTPSource
cygnusagent.sources.http-source.port = 5050
cygnusagent.sources.http-source.handler = com.telefonica.iot.cygnus.handlers.OrionRestHandler
# url target
cygnusagent.sources.http-source.handler.notification_target = /notify
cygnusagent.sources.http-source.handler.default_service = def_serv
cygnusagent.sources.http-source.handler.default_service_path = def_servpath
cygnusagent.sources.http-source.handler.events_ttl = 10
cygnusagent.sources.http-source.interceptors = ts gi
cygnusagent.sources.http-source.interceptors.ts.type = timestamp
cygnusagent.sources.http-source.interceptors.gi.type = com.telefonica.iot.cygnus.interceptors.GroupingInterceptor$Builder
cygnusagent.sources.http-source.interceptors.gi.grouping_rules_conf_file = /usr/cygnus/conf/grouping_rules.conf
#Orion MysqlSink Configuration
cygnusagent.sinks.mysql-sink.channel = mysql-channel
cygnusagent.sinks.mysql-sink.type = com.telefonica.iot.cygnus.sinks.OrionMySQLSink
cygnusagent.sinks.mysql-sink.enable_grouping = false
# mysqldb ip
cygnusagent.sinks.mysql-sink.mysql_host = 127.0.0.1
# mysqldb port
cygnusagent.sinks.mysql-sink.mysql_port = 3306
cygnusagent.sinks.mysql-sink.mysql_username = root
cygnusagent.sinks.mysql-sink.mysql_password = 12345
cygnusagent.sinks.mysql-sink.attr_persistence = column
cygnusagent.sinks.mysql-sink.table_type = table-by-destination
# mysql channel configuration
cygnusagent.channels.mysql-channel.type = memory
cygnusagent.channels.mysql-channel.capacity = 1000
cygnusagent.channels.mysql-channel.transactionCapacity = 100
After reading this question, I changed this line in my agent_mysql.conf:
cygnusagent.sinks.mysql-sink.attr_persistence = column
to
cygnusagent.sinks.mysql-sink.attr_persistence = row
and restarted the service. Then I updated the Orion entity and queried the database again, and nothing happened.
Cygnus log file: http://pastebin.com/B2FNKcVf
Note: My JAVA_HOME is set.
As you can see in the logs you posted, there is a problem with the Cygnus log file:
java.io.FileNotFoundException: ./logs/cygnus.log (No such file or directory)
After that, Cygnus stops. You must check your configuration regarding log4j; everything is at /usr/cygnus/conf/log4j.properties (it should exist, as it is created by the RPM; if it does not exist because you installed from sources instead of the RPM, it must be created from the available template). In addition, can you post your instance configuration file? Also, which version are you running?
EDIT 1
Recently, we have found another user dealing with the same error, and the problem was the content of the /usr/cygnus/conf/log4j.properties file was:
flume.root.logger=INFO,LOGFILE
flume.log.dir=./log
flume.log.file=flume.log
Instead of what the template contains:
flume.root.logger=INFO,LOGFILE
flume.log.dir=/var/log/cygnus/
flume.log.file=flume.log
Once changed, it worked, because the RPM creates /var/log/cygnus but not ./log.

Configuration of cygnus.conf

I need to store data in Cosmos in MySQL format, but I can't get it to work. I have checked that the data is stored in a text file in Cosmos, but when I log into Hive there is no table under my name.
The following is my Cygnus config. Is it correct? Where is my mistake, given that nothing is stored in MySQL tables?
cygnusagent.sources = http-source
cygnusagent.sinks = hdfs-sink mysql-sink
cygnusagent.channels = hdfs-channel mysql-channel
cygnusagent.sources.http-source.channels = hdfs-channel mysql-channel ckan-channel
cygnusagent.sources.http-source.type = org.apache.flume.source.http.HTTPSource
cygnusagent.sources.http-source.port = 5050
cygnusagent.sources.http-source.handler = es.tid.fiware.fiwareconnectors.cygnus.handlers.OrionRestHandler
cygnusagent.sources.http-source.handler.notification_target = /notify
cygnusagent.sources.http-source.handler.default_service = def_serv
cygnusagent.sources.http-source.handler.default_service_path = def_servpath
cygnusagent.sources.http-source.handler.events_ttl = 10
cygnusagent.sources.http-source.interceptors = ts de
cygnusagent.sources.http-source.interceptors.ts.type = timestamp
cygnusagent.sources.http-source.interceptors.de.type = es.tid.fiware.fiwareconnectors.cygnus.interceptors.DestinationExtractor$Builder
cygnusagent.sources.http-source.interceptors.de.matching_table = /usr/cygnus/conf/matching_table.conf
cygnusagent.sinks.hdfs-sink.channel = hdfs-channel
cygnusagent.sinks.hdfs-sink.type = es.tid.fiware.fiwareconnectors.cygnus.sinks.OrionHDFSSink
cygnusagent.sinks.hdfs-sink.cosmos_host = 130.206.80.46
cygnusagent.sinks.hdfs-sink.cosmos_port = 14000
cygnusagent.sinks.hdfs-sink.cosmos_default_username = cristina.albaladejo
cygnusagent.sinks.hdfs-sink.cosmos_default_password = my_passw
cygnusagent.sinks.hdfs-sink.hdfs_api = httpfs
cygnusagent.sinks.hdfs-sink.attr_persistence = column
cygnusagent.sinks.hdfs-sink.hive_host = 130.206.80.46
cygnusagent.sinks.hdfs-sink.hive_port = 10000
cygnusagent.sinks.hdfs-sink.krb5_auth = false
cygnusagent.sinks.hdfs-sink.krb5_auth.krb5_user = krb5_username
cygnusagent.sinks.hdfs-sink.krb5_auth.krb5_password = xxxxxxxxxxxxx
cygnusagent.sinks.hdfs-sink.krb5_auth.krb5_login_conf_file = /usr/cygnus/conf/krb5_login.conf
cygnusagent.sinks.hdfs-sink.krb5_auth.krb5_conf_file = /usr/cygnus/conf/krb5.conf
# OrionMySQLSink configuration
cygnusagent.sinks.mysql-sink.channel = mysql-channel
cygnusagent.sinks.mysql-sink.type = es.tid.fiware.fiwareconnectors.cygnus.sinks.OrionMySQLSink
cygnusagent.sinks.mysql-sink.mysql_host = 130.206.80.46
cygnusagent.sinks.mysql-sink.mysql_port = 3306
cygnusagent.sinks.mysql-sink.mysql_username = root
cygnusagent.sinks.mysql-sink.mysql_password = my_passw
cygnusagent.sinks.mysql-sink.attr_persistence = column
# hdfs-channel configuration
cygnusagent.channels.hdfs-channel.type = memory
cygnusagent.channels.hdfs-channel.capacity = 1000
cygnusagent.channels.hdfs-channel.transactionCapacity = 100
# mysql-channel configuration
cygnusagent.channels.mysql-channel.type = memory
cygnusagent.channels.mysql-channel.capacity = 1000
cygnusagent.channels.mysql-channel.transactionCapacity = 100
As I answered in this question: How to configure Cygnus to save in mysql
The database tables are not created automatically in column mode, so you have to create them yourself.
The columns look like: recvTime (datetime), field1, field2, ..., field1_md (varchar), field2_md (varchar), ...
If you change
cygnusagent.sinks.mysql-sink.attr_persistence = column
to
cygnusagent.sinks.mysql-sink.attr_persistence = row
the tables are created automatically, but I prefer the column way to save and handle data.
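The column layout described above corresponds to DDL along these lines. This is an illustrative sketch only: the actual table name is derived by Cygnus from the service path and entity, and the column types are assumptions, so check the Cygnus log for the exact table it tries to use:

CREATE TABLE def_servpath_room1_room (
  recvTime DATETIME,
  field1 TEXT,
  field1_md VARCHAR(1000),
  field2 TEXT,
  field2_md VARCHAR(1000)
);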
You can check the Cygnus log to see what is really happening, but it seems that you have to create the database and the table, with the field declarations, yourself.
You also have there my sample configuration, which is running OK.
You can also search for the Cygnus process with ps aux | grep cygnus and kill it with kill XXXX (the number of the Cygnus process).
Then run Cygnus in a terminal like this:
/usr/cygnus/bin/cygnus-flume-ng agent --conf /usr/cygnus/conf/ -f /usr/cygnus/conf/YOURAGENTCONFIGFILE.conf -n cygnusagent -Dflume.root.logger=INFO,console
(You should change the name of your config file and your agent name to run it.)
I hope this helps you.