WildFly JSON log formatter dynamic configuration not applied

WildFly 20 is connected to a Logstash instance listening on TCP port 5300:
logstash.conf:
input {
  tcp {
    codec => json
    port => "5300"
  }
}
output {
  stdout {}
}
To make use of its built-in JSON logging capabilities over a socket connection, as pointed out in wildfly-logstash does not send logs to logstash, WildFly is configured on the WildFly CLI by entering the following sequence of statements (which end up in standalone.xml automatically):
/subsystem=logging/json-formatter=LOG-STASH:add(key-overrides={timestamp=#timestamp,message=#message,logger-name=#source,host-name=#source_host}, exception-output-type=formatted)
/socket-binding-group=standard-sockets/remote-destination-outbound-socket-binding=log-stash:add(host=localhost, port=8000)
/subsystem=logging/socket-handler=LOGSTASH-SOCKET:add(named-formatter=LOG-STASH, outbound-socket-binding-ref=log-stash, level=DEBUG)
/subsystem=logging/async-handler=LOGSTASH-ASYNC:add(queue-length=512, subhandlers=[LOGSTASH-SOCKET])
/subsystem=logging/root-logger=ROOT:add-handler(name=LOGSTASH-ASYNC)
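To verify what was actually stored, the resources can be read back on the same CLI; a small sketch using the standard read-resource operation (not part of the original sequence):
/subsystem=logging/json-formatter=LOG-STASH:read-resource
/subsystem=logging/socket-handler=LOGSTASH-SOCKET:read-resource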
This produces log statements on the standard output of the Logstash node, e.g.:
{
    "level" => "DEBUG",
    "host" => "gateway",
    "processId" => 14972,
    "sequence" => 34696,
    "#version" => "1",
    "#source" => "com.myapplication.TaskService",
    "#source_host" => "device-01",
    "threadName" => "EJB default - 6",
    "threadId" => 215,
    "loggerClassName" => "org.slf4j.impl.Slf4jLogger",
    "mdc" => {},
    "ndc" => "",
    "port" => 64210,
    "processName" => "jboss-modules.jar",
    "#timestamp" => 2021-03-31T14:10:19.869Z,
    "#message" => "task execution successfull: MailDaemon"
}
That is only halfway to the goal: a different set of attribute names (in the individual JSON log messages) is required to fit our enterprise Logstash instances.
In particular, neither "host-name" nor "logger-name" is written as configured; instead "#source_host" and "#source" are logged.
Further adaptation of the log formatter LOG-STASH only partially succeeds.
1) /subsystem=logging/json-formatter=LOG-STASH:write-attribute(name="meta-data",value={service="myapplication-api", serviceversion="1.1.0", instanceId="myapplication-api-1.1.0"})
2) /subsystem=logging/json-formatter=LOG-STASH:write-attribute(name="key-overrides",value=[severity=level,timestamp=#timestamp,message=msg,logger-name=#source,host-name=#source_host])
Further simplification results in the attribute being stored, but not applied:
3) /subsystem=logging/json-formatter=LOG-STASH:write-attribute(name="key-overrides",value={"level"="severity"})
4) /subsystem=logging/json-formatter=LOG-STASH:read-attribute(name="key-overrides")
Statement 1) works and the meta-data fields are added. Statements 2) and 3) bring no results. Statement 4) prints output like:
INFO [org.jboss.as.cli.CommandContext] {
    "outcome" => "success",
    "result" => {"level" => "severity"}
}
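When an attribute reads back correctly but is not applied, one possibility worth checking (an assumption, not confirmed by this setup) is that the server is in the reload-required state; the standard CLI reload operation restarts the services with the stored configuration:
reload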

With the above setup, the following WildFly CLI command successfully renames the wanted keys from their default values:
/subsystem=logging/json-formatter=LOG-STASH:write-attribute(name="key-overrides",value={"level"="severity","sequence"="trace","thread-id"="pid","logger-class-name"="class","thread-name"="thread"})
These settings end up in standalone.xml and in logging.properties, which lives in the same folder on disk.
During my work there was at times a discrepancy between the configured keys in the two files.
Be aware that camel-case key names like threadId produce a configuration error; you have to use thread-id instead. I found this by inspecting the JBoss logging library, i.e. by reading its Java source code.
The produced logging output looks like this:
{
    "pid" => 212,
    "message" => "Synchronizaing finished in 0ms",
    "#version" => "1",
    "loggerName" => "com.myapp.Cache",
    "#timestamp" => 2021-04-08T13:49:00.178Z,
    "port" => 59182,
    "processName" => "jboss-modules.jar",
    "trace" => 4245,
    "host" => "gateway",
    "severity" => "DEBUG",
    "processId" => 10536,
    "mdc" => {},
    "hostName" => "host-alpha",
    "timestamp" => "2021-04-08T15:49:00.176+02:00",
    "class" => "org.slf4j.impl.Slf4jLogger",
    "ndc" => "",
    "thread" => "EJB default - 7"
}
What would still be nice is to have the mdc and ndc fields removed from the output.
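A sketch of one way to achieve that on the Logstash side instead of in WildFly, using the standard mutate filter (this filter block is an addition to the logstash.conf above, not part of the WildFly configuration):
filter {
  mutate {
    # drop the unwanted fields before they reach the output
    remove_field => ["mdc", "ndc"]
  }
}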

Related

How to correctly push data from Logstash to an Elasticsearch server?

I am new to ELK. I need to visualize data from a PostgreSQL database in Kibana. I ran into a little problem and need help.
I use:
Elasticsearch 6.4.1
Kibana 6.4.1
Logstash 6.4.1
When I run the following logstash.conf file, it doesn't send the correct data to the Elasticsearch server. What do I need to change in my configuration file?
logstash.conf:
input {
  jdbc {
    jbdc_connection_string => "path_to_database"
    jdbc_user => "postgres"
    jdbc_password => "postgres"
    jdbc_driver_library => "/path_to/postgresql-42.2.5.jar"
    jdbc_driver_class => "org.postgresql.Driver"
    statement => "SELECT * from documents"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "documents"
  }
}
Only when I use the following output configuration do I see the correct data in the terminal:
stdout {
  codec => json_lines
}
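For comparison, a minimal sketch of an input block that the jdbc plugin accepts; the jdbc { } plugin wrapper, the jdbc_connection_string spelling, and a real JDBC URL (the host and database name here are placeholders) are the notable differences from the configuration above:
input {
  jdbc {
    # the connection string must be a JDBC URL, not a file path
    jdbc_connection_string => "jdbc:postgresql://localhost:5432/your_database"
    jdbc_user => "postgres"
    jdbc_password => "postgres"
    jdbc_driver_library => "/path_to/postgresql-42.2.5.jar"
    jdbc_driver_class => "org.postgresql.Driver"
    statement => "SELECT * FROM documents"
  }
}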

Where do we configure the Yii2 Queue extension in a project?

I am trying to use the yii2-queue
https://github.com/yiisoft/yii2-queue/blob/master/docs/guide/usage.md
It says:
In order to use the extension you have to configure it like the
following:
return [
    'bootstrap' => [
        'queue', // The component registers its own console commands
    ],
    'components' => [
        'queue' => [
            'class' => \yii\queue\<driver>\Queue::class,
            'as log' => \yii\queue\LogBehavior::class,
            // Other driver options
        ],
    ],
];
My question is simple: In which PHP file, in which directory, should I put this code?
Note: I am using the Basic template.
For the Yii2 Basic template: config/console.php
For the Yii2 Advanced template: console/config/main.php
return [
    'bootstrap' => [
        'log',
        'queue',
    ],
    'components' => [
        'queue' => [
            'class' => \yii\queue\db\Queue::class,
            'db' => 'db', // DB connection component or its config
            'tableName' => '{{%queue}}', // Table name
            'channel' => 'default', // Queue channel key
            'mutex' => \yii\mutex\MysqlMutex::class, // Mutex used to sync queries
            'as log' => \yii\queue\LogBehavior::class,
            // 'deleteReleased' => YII_ENV_PROD,
        ],
    ],
];
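Note that the db driver expects the queue table to exist; the extension ships a migration for it, which the yii2-queue guide applies with:
php yii migrate --migrationPath=@yii/queue/db/migrations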
Refer to the Yii2 Queue extension guide.
Add this to the main.php file of the backend or frontend you are using, like this:
'bootstrap' => ['log', 'queue'],
Add this under the components array:
'queue' => [
    'class' => Queue::class,
    'db' => 'db', // DB connection component or its config
    'tableName' => '{{%db_queue}}', // Table name
    'channel' => 'default', // Queue channel key
    'mutex' => MysqlMutex::class, // Mutex used to sync queries
],
To make it work you also need to do the same in the console/config/main.php file and run the listen command from the documentation.
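For example, with the db driver the worker is started from the project root like this (a minimal sketch; see the guide for daemonized variants):
php yii queue/listen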
It is very simple to configure on Yii2 Basic: add the following configuration to the config/web.php file. For Yii2 Advanced, if you are using the frontend, add it to frontend/config/main.php; if you are using the backend, add it to backend/config/main.php.
Just like this:
'components' => [
    'request' => [
        'cookieValidationKey' => 'htXdOInCiP6ut4gNbDO2',
        'csrfParam' => '_frontendCSRF',
    ],
    'queue' => [
        'class' => \yii\queue\<driver>\Queue::class,
        'as log' => \yii\queue\LogBehavior::class,
        // Other driver options
    ],
],

How do I solve this error "HTTP 400 - Unable to verify your data submission" in Yii2?

My Yii 2 application was progressing well until I received an unusual error about a bad HTTP request:
HTTP 400 Unable to verify your data submission.
I have looked it up, and much of the literature indicates the cause is a CSRF issue. However, the CSRF components are all in place within the HTML head section, and the hidden field is submitting the correct token.
Additional Info
Yii version = 2.0.12 App Basic
PHP version = 5.6
OS = Ubuntu
I have disabled all the security firewalls of the host, but I still get the error. Please help; the site is already in production and I cannot find how to solve this. Many thanks in advance.
web/config/main.php
$config = [
    'components' => [
        'session' => ['class' => 'yii\web\DbSession'],
        'request' => [
            'cookieValidationKey' => 'AAOSL2no3kbkJwRA4CNwDuB5g5T5_58t',
        ],
        'cache' => [
            'class' => 'yii\caching\FileCache',
        ],
        'user' => [
            'identityClass' => 'app\models\User',
            'enableAutoLogin' => true,
        ],
        'errorHandler' => ['errorAction' => 'site/error'],
        'log' => [
            'traceLevel' => YII_DEBUG ? 3 : 0,
            'targets' => [
                [
                    'class' => 'yii\log\FileTarget',
                    'levels' => ['error', 'warning'],
                ],
            ],
        ],
        'db' => $db,
        'urlManager' => [
            'enablePrettyUrl' => true,
            'showScriptName' => false,
            'rules' => [
            ],
        ],
    ],
    'params' => $params,
];
if (YII_ENV_DEV) {
    $config['bootstrap'][] = 'debug';
    $config['modules']['debug'] = [
        'class' => 'yii\debug\Module',
        //'allowedIPs' => ['127.0.0.1', '::1'],
    ];
    $config['bootstrap'][] = 'gii';
    $config['modules']['gii'] = [
        'class' => 'yii\gii\Module',
        //'allowedIPs' => ['127.0.0.1', '::1'],
    ];
}
return $config;
As per the change logs, there are bug fixes and enhancements related to the CSRF cookie:
2.0.13 November 03, 2017 updates include
Bug #14542: Ensured only ASCII characters are in CSRF cookie value since binary data causes issues with ModSecurity and some browsers (samdark)
Enh #14087: Added yii\web\View::registerCsrfMetaTags() method that registers CSRF tags dynamically ensuring that caching doesn't interfere (RobinKamps).
2.0.14 February 18, 2018 updates include
Bug #15317: Regenerate CSRF token if an empty value is given
Enh #15496: (CVE-2018-6009): CSRF token is now regenerated on changing identity (samdark, rhertogh)(sammousa)
So update your framework to the latest version (2.0.14) by running composer update in a terminal inside your project root. Once updated, make sure you have
<?= Html::csrfMetaTags() ?>
inside the <head> tag of the layout file you are using, whether main.php or any custom name.
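A minimal sketch of the relevant part of a layout head, assuming the standard basic-template layout (views/layouts/main.php):
<?php use yii\helpers\Html; ?>
<head>
    <meta charset="<?= Yii::$app->charset ?>">
    <?= Html::csrfMetaTags() ?>
    <title><?= Html::encode($this->title) ?></title>
    <?php $this->head() ?>
</head>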
If the error still persists, you can disable CSRF validation for a specific action inside beforeAction:
public function beforeAction($action)
{
    if ($action->id == 'action-name') {
        $this->enableCsrfValidation = false;
    }
    return parent::beforeAction($action);
}
or for a specific controller by adding
public $enableCsrfValidation = false;
Add <?= Html::csrfMetaTags() ?> in your view, or add it to the layout (main.php).

Yii2 mail catcher [duplicate]

This question already has an answer here:
Override Yii2 Swiftmailer Recipient
(1 answer)
Closed 5 years ago.
During development and testing, I want the Yii2 mailer (swiftmailer) not to send emails to the actual addresses, but to replace them with developers' emails. Is there any setting in the config to do this?
Just set useFileTransport to true in the component configuration for the development environment; emails will then not be sent but saved as files, so you can easily test everything.
You should set up configs for the prod and dev environments.
The path for the dev config (if you have the advanced application template) is yourProject/environments/dev/common/config/main-local.php, and for prod it is yourProject/environments/prod/common/config/main-local.php.
You may use the EmailTarget class for logs.
Here is an example config for the dev environment:
return [
    'bootstrap' => ['log'],
    'components' => [
        ...
        'mailer' => [
            'class' => 'yii\swiftmailer\Mailer',
            'viewPath' => '@common/mail',
            'useFileTransport' => true, // set this property to false to send mails to real email addresses
            // comment the following array to send mail using php's mail function
        ],
        'log' => [
            'targets' => [
                [
                    'class' => 'yii\log\EmailTarget',
                    'levels' => ['error'],
                    'except' => ['yii\web\HttpException:404'],
                    'message' => [
                        'to' => ['example@mail.ru'],
                        'from' => ['yourproject@mail.ru'],
                        'subject' => 'Errors',
                    ],
                ],
            ],
        ],
    ],
];
And similarly for the prod environment, but with different emails.
If you don't want to use the EmailTarget class, you may just set your emails for dev in the params config at yourProject/environments/dev/common/config/params-local.php.
The path for prod is yourProject/environments/prod/common/config/params-local.php.
Params config example:
return [
    'sendToEmails' => ['email1@mail.ru', 'email2@mail.ru'],
];
And then you may use the variable in your code this way: Yii::$app->params['sendToEmails'] to get the array of emails to send messages to.
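For example, a hedged sketch of composing a message with those developer addresses (compose(), setTo(), and send() are the standard yii\mail API; the subject and body here are placeholders):
Yii::$app->mailer->compose()
    ->setTo(Yii::$app->params['sendToEmails']) // array of developer addresses
    ->setFrom('yourproject@mail.ru')
    ->setSubject('Test mail')
    ->setTextBody('Plain text content')
    ->send();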
Don't forget to run the php init command in your project after completing your configs.
You may see the detailed docs about environments here.

Mysql to Elasticsearch - won't create index and export data

I am attempting to import a MySQL table into Elasticsearch. It is a table containing 10 different columns with an 8-digit VARCHAR set as the primary key. The MySQL database is located on a remote host.
To transfer the data from MySQL into Elasticsearch I've decided to use Logstash and the JDBC MySQL driver.
I am assuming that Logstash will create the index for me if it isn't there.
Here's my logstash.conf script:
input {
  jdbc {
    jdbc_driver_library => "/home/user/logstash/mysql-connector-java-5.1.17-bin.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://[remotehostipnumber]/databasename"
    jdbc_validate_connection => true
    jdbc_user => "username"
    jdbc_password => "password"
    schedule => "* * * * *"
    statement => "select * from table"
  }
}
output {
  elasticsearch {
    index => "tables"
    document_type => "table"
    document_id => "%{table_id}"
    hosts => "localhost:9200"
  }
  stdout { codec => json_lines }
}
When running the Logstash config test, it outputs a 'Configuration OK' message:
sudo /opt/logstash/bin/logstash --configtest -f /home/user/logstash/logstash.conf
Also, when executing the logstash.conf script, Logstash outputs:
Settings: Default filter workers: 1
Logstash startup completed
But when I go to check whether the index has been created and data has also been added:
curl -XGET 'localhost:9200/tables/table/_search?pretty=true'
I get:
{
  "error" : {
    "root_cause" : [ {
      "type" : "index_not_found_exception",
      "reason" : "no such index",
      "resource.type" : "index_or_alias",
      "resource.id" : "tables",
      "index" : "table"
    } ],
    "type" : "index_not_found_exception",
    "reason" : "no such index",
    "resource.type" : "index_or_alias",
    "resource.id" : "tables",
    "index" : "tables"
  },
  "status" : 404
}
What could be the potential reasons behind the data not being indexed?
PS: I am keeping the Elasticsearch server running in a separate terminal window to ensure Logstash can connect and interact with it.
For those who end up here looking for the answer to a similar problem: my database had 4 million rows, and it must have been too much for the Logstash/Elasticsearch/JDBC pipeline to handle in one statement.
After I divided the initial transfer into 4 separate chunks of work, the script ran and added the desired table into Elasticsearch.
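An alternative to splitting the statement by hand, assuming a reasonably recent logstash-input-jdbc, is to let the plugin page through the result set via its paging options:
jdbc {
  # fetch the large result set in pages instead of one go
  jdbc_paging_enabled => true
  jdbc_page_size => 100000
  # ... connection settings as in the question ...
}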
Use the following code to export data from a MySQL table and create the index in Elasticsearch; note that this uses the standalone elasticsearch-jdbc importer (org.xbib.tools.JDBCImporter) rather than Logstash:
echo '{
  "type" : "jdbc",
  "jdbc" : {
    "url" : "jdbc:mysql://localhost:3306/your_database_name",
    "user" : "your_database_username",
    "password" : "your_database_password",
    "useSSL" : "false",
    "sql" : "SELECT * FROM table1",
    "index" : "Index_name",
    "type" : "Index_type",
    "poll" : "6s",
    "autocommit" : "true",
    "metrics" : {
      "enabled" : true
    },
    "elasticsearch" : {
      "cluster" : "clustername",
      "host" : "localhost",
      "port" : 9300
    }
  }
}' | java -cp "/etc/elasticsearch/elasticsearch-jdbc-2.3.4.0/lib/*" -"Dlog4j.configurationFile=file:////etc/elasticsearch/elasticsearch-jdbc-2.3.4.0/bin/log4j2.xml" "org.xbib.tools.Runner" "org.xbib.tools.JDBCImporter"