I am unable to get Fluentd to explode (parse) my JSON into keys and values. I send the JSON by running this container:
docker run --log-driver=fluentd --log-opt fluentd-address=docker.for.mac.localhost:24226 --log-opt tag="docker" python echo '{"hi":"yo"}'
This is the message Fluentd shows:
2020-08-13 11:11:48.000000000 +0530 docker: {"log":"{\"hi\":\"yo\"}","container_id":"4d26713583925d70781c3840b886e72c3c1866c67d2fe329e3bf9c16de8cd328","container_name":"/nervous_newton","source":"stdout","tag":"docker"}
The log field is still a JSON string; it is not exploded into keys and values.
Here is my Fluentd config:
<source>
  @type forward
  port 24226
  bind 0.0.0.0
</source>
<match docker>
  include_tag_key true
  @type stdout
</match>
<filter docker>
  @type parser
  <parse>
    @type json
  </parse>
  key_name log
  reserve_data true
</filter>
I am running this on my Mac, with Fluentd version 1.0.2 (ruby 2.4.2).
Am I missing something?
How stupid of me. The order is important: the filter must come before the match.
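For reference, a minimal sketch of the reordered config (the same directives as above, with the filter block moved ahead of the match):
<source>
  @type forward
  port 24226
  bind 0.0.0.0
</source>
# The filter now runs before the match consumes the event.
<filter docker>
  @type parser
  key_name log
  reserve_data true
  <parse>
    @type json
  </parse>
</filter>
<match docker>
  @type stdout
  include_tag_key true
</match>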
I've been crawling a number of sites like this trying to get Keycloak working with a MySQL persistence layer. I'm using Docker, but with my own images, so passwords and other sensitive data come from a secrets manager instead of environment variables or Docker secrets. Beyond that, the images are pretty close to stock.
Anyway, I have a MySQL 8 container up and running, and from within the Keycloak 12.0.3 container I can connect to the MySQL container fine:
# mysql -h mysql -u keycloak --password=somethingtochangelater -D keycloak -e "SHOW DATABASES;"
mysql: [Warning] Using a password on the command line interface can be insecure.
+--------------------+
| Database           |
+--------------------+
| information_schema |
| keycloak           |
+--------------------+
So there are no connectivity problems between the containers, and that username/password has access to the keycloak database.
So then I ran several commands to configure the Keycloak instance (keycloak is installed at /opt/myco/bin/keycloak):
/opt/myco/bin/keycloak/bin/standalone.sh &
# Pausing for server startup
sleep 20
# Add mysql module - JDBC driver unpacked at /opt/myco/bin/keycloak-install/mysql-connector-java-8.0.23/mysql-connector-java-8.0.23.jar
/opt/myco/bin/keycloak/bin/jboss-cli.sh --connect --command="module add --name=com.mysql --dependencies=javax.api,javax.transaction.api --resources=/opt/myco/bin/keycloak-install/mysql-connector-java-8.0.23/mysql-connector-java-8.0.23.jar --module-root-dir=/opt/myco/bin/keycloak/modules/system/layers/keycloak/"
# Removing h2 datasource
/opt/myco/bin/keycloak/bin/jboss-cli.sh --connect --command="/subsystem=datasources/data-source=KeycloakDS:remove"
# Registering the MySQL JDBC driver
/opt/myco/bin/keycloak/bin/jboss-cli.sh --connect --command="/subsystem=datasources/jdbc-driver=mysql:add(driver-name=mysql,driver-module-name=com.mysql,driver-class-name=com.mysql.cj.jdbc.Driver)"
# TODO - add connection pooling options here...
# Configuring data source
/opt/myco/bin/keycloak/bin/jboss-cli.sh --connect --command="data-source add --name=KeycloakDS --jndi-name=java:jboss/datasources/KeycloakDS --enabled=true --password=somethingtochangelater --user-name=keycloak --driver-name=com.mysql --use-java-context=true --connection-url=jdbc:mysql://mysql:3306/keycloak?useSSL=false&characterEncoding=UTF-8"
# Testing connection
/opt/myco/bin/keycloak/bin/jboss-cli.sh --connect --command="/subsystem=datasources/data-source=KeycloakDS:test-connection-in-pool"
# Creating admin user
/opt/myco/bin/keycloak/bin/add-user-keycloak.sh -r master -u "admin" -p "somethingelse"
# Shutting down initial server
/opt/myco/bin/keycloak/bin/jboss-cli.sh --connect --command=":shutdown"
This all appears to run fine. Note especially that test-connection-in-pool reports no problems:
{
    "outcome" => "success",
    "result" => [true],
    "response-headers" => {"process-state" => "reload-required"}
}
However, when I go to start the server back up again, it crashes with several exceptions, starting with:
22:31:52,484 FATAL [org.keycloak.services] (ServerService Thread Pool -- 56) Error during startup: java.lang.RuntimeException: Failed to connect to database
at org.keycloak.keycloak-model-jpa#12.0.3//org.keycloak.connections.jpa.DefaultJpaConnectionProviderFactory.getConnection(DefaultJpaConnectionProviderFactory.java:377)
at org.keycloak.keycloak-model-jpa#12.0.3//org.keycloak.connections.jpa.updater.liquibase.lock.LiquibaseDBLockProvider.lazyInit(LiquibaseDBLockProvider.java:65)
...
It keeps going, though I suspect that first exception is ultimately fatal; the server eventually dies with:
22:31:53,114 ERROR [org.jboss.as.controller.management-operation] (ServerService Thread Pool -- 40) WFLYCTL0190: Step handler org.jboss.as.controller.AbstractAddStepHandler$1#33063168 for operation add at address [
("subsystem" => "jca"),
("workmanager" => "default"),
("short-running-threads" => "default")
] failed -- java.util.concurrent.RejectedExecutionException: java.util.concurrent.RejectedExecutionException
at org.jboss.threads#2.4.0.Final//org.jboss.threads.RejectingExecutor.execute(RejectingExecutor.java:37)
at org.jboss.threads#2.4.0.Final//org.jboss.threads.EnhancedQueueExecutor.rejectShutdown(EnhancedQueueExecutor.java:2029)
...
The module at /opt/myco/bin/keycloak/modules/system/layers/keycloak/com/mysql/main has the jar file and module.xml:
# ls
module.xml mysql-connector-java-8.0.23.jar
# cat module.xml
<?xml version='1.0' encoding='UTF-8'?>
<module xmlns="urn:jboss:module:1.1" name="com.mysql">
    <resources>
        <resource-root path="mysql-connector-java-8.0.23.jar"/>
    </resources>
    <dependencies>
        <module name="javax.api"/>
        <module name="javax.transaction.api"/>
    </dependencies>
</module>
The standalone.xml file looks reasonable to me:
...
<subsystem xmlns="urn:jboss:domain:datasources:6.0">
    <datasources>
        ...
        <datasource jndi-name="java:jboss/datasources/KeycloakDS" pool-name="KeycloakDS" enabled="true" use-java-context="true">
            <connection-url>jdbc:mysql://mysql:3306/keycloak?useSSL=false&amp;characterEncoding=UTF-8</connection-url>
            <driver>com.mysql</driver>
            <security>
                <user-name>keycloak</user-name>
                <password>somethingtochangelater</password>
            </security>
        </datasource>
        <drivers>
            <driver name="h2" module="com.h2database.h2">
                <xa-datasource-class>org.h2.jdbcx.JdbcDataSource</xa-datasource-class>
            </driver>
            <driver name="mysql" module="com.mysql">
                <driver-class>com.mysql.cj.jdbc.Driver</driver-class>
            </driver>
        </drivers>
    </datasources>
</subsystem>
...
So... does anyone have any idea what's going on? What else do I need to do to get Keycloak talking to MySQL? Is there anything else I can do to debug the issue?
Not sure what is wrong in your particular case, but I used the jboss/keycloak image and it connects to MySQL just fine. Maybe you can derive your custom image from it. The full setup is in my blog post: https://link.medium.com/eK6IRducpeb
For a standalone Keycloak server, you can try this command:
kc.bat start-dev --db postgres --db-url jdbc:postgresql://localhost:5432/keycloak-server --db-username postgres --db-password root
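That example targets Postgres; since the question uses MySQL, the equivalent invocation would presumably be the following (this assumes the newer, Quarkus-based Keycloak CLI, and the database name and credentials are placeholders):
kc.bat start-dev --db mysql --db-url jdbc:mysql://localhost:3306/keycloak --db-username keycloak --db-password somethingtochangelater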
I typically configure my projects by setting configuration variables in vars/main.yml and rendering a subset of them out to JSON via to_nice_json.
Consider a vars/main.yml like the one below:
# Application Configuration Settings.
config:
  dev:
    # General Settings.
    logger_level: DEBUG
    # PostgreSQL Server Configuration Settings.
    sql_host: "localhost"
    sql_port: 5432
    sql_username: "someuser"
    sql_password: "somepassword"
    sql_db: "somedb"
which I render out via a Jinja2 template and the template module with the following content:
{{ config.dev | to_nice_json }}
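For context, the rendering step is just a regular template task; a minimal sketch (the src and dest paths here are assumptions, not from the original setup):
- name: Render application configuration
  template:
    src: config.json.j2   # contains the one-line template shown above
    dest: /etc/myapp/config.json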
Recently I tried to use Ansible Vault to encrypt the sensitive bits, e.g. the sql_password, using the encrypt_string command:
ansible-vault encrypt_string --vault-id .ansible-vault-password "somepassword" --name 'sql_password'
and inlined the encrypted value directly in the YAML file like this:
# Application Configuration Settings.
config:
  dev:
    # General Settings.
    logger_level: DEBUG
    # PostgreSQL Server Configuration Settings.
    sql_host: "localhost"
    sql_port: 5432
    sql_username: "someuser"
    sql_password: !vault |
      $ANSIBLE_VAULT;1.1;AES256
      35383832623937353934636538306539623336633336643430396662323161333838333463653764
      3839653635326166303636643664333466376236626137310a323839373862626237643162303535
      35333966383834356239376566356263656635323865323466306362323864356663383661333262
      3165643733633262650a663363653832373936383033306137633234626264353538356630336131
      3063
    sql_db: "somedb"
However, when the to_nice_json filter is applied I get the following error:
fatal: [myrole]: FAILED! => {"changed": false, "msg": "AnsibleError: Unexpected templating type error occurred on ({{ config.dev | to_nice_json }}\n): somepassword' is not JSON serializable"}
As can be seen, the variable is properly decrypted, but it errors out when serialising to JSON. If I wrap the inline vault variable in double quotes, however, the decryption doesn't happen and the resulting JSON contains the entire vault blob.
Am I missing something? Is this an issue with the to_nice_json filter, or am I inlining it the wrong way?
As a workaround for such problems, extract the vaulted value into a separate variable (as opposed to a value of a key in a dictionary) and reference it through a template expression; the Jinja2 evaluation coerces the encrypted value into a plain string before to_nice_json sees it:
vars:
  my_sql_password: !vault |
    $ANSIBLE_VAULT;1.1;AES256
    5383832623937353934636538306539623336633336643430396662323161333838333463653764
    3839653635326166303636643664333466376236626137310a323839373862626237643162303535
    35333966383834356239376566356263656635323865323466306362323864356663383661333262
    3165643733633262650a663363653832373936383033306137633234626264353538356630336131
    3063

  # Application Configuration Settings.
  config:
    dev:
      # General Settings.
      logger_level: DEBUG
      # PostgreSQL Server Configuration Settings.
      sql_host: "localhost"
      sql_port: 5432
      sql_username: "someuser"
      sql_password: "{{ my_sql_password }}"
      sql_db: "somedb"
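With that indirection in place the template renders cleanly; with the example values above, the to_nice_json output would look roughly like this (key order may differ):
{
    "logger_level": "DEBUG",
    "sql_db": "somedb",
    "sql_host": "localhost",
    "sql_password": "somepassword",
    "sql_port": 5432,
    "sql_username": "someuser"
}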
I'm using the out_file plugin of Fluentd (version 0.12.35) to write output to a local file. My Fluentd config looks like:
<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>
<source>
  @type http
  port 8888
  bind 0.0.0.0
  body_size_limit 32m
  keepalive_timeout 10s
</source>
<match **>
  type file
  path /var/log/test/logs
  format json
  time_slice_format %Y%m%d
  time_slice_wait 24h
  compress gzip
  include_tag_key true
  utc
  buffer_path /var/log/test/logs.*
</match>
This creates a new gzipped file roughly every 10 minutes:
-rw-r--r-- 1 root root 256546 May 6 07:03 logs.20170506_0.log.gz
-rw-r--r-- 1 root root 260730 May 6 07:14 logs.20170506_1.log.gz
-rw-r--r-- 1 root root 261155 May 6 07:25 logs.20170506_2.log.gz
-rw-r--r-- 1 root root 258903 May 6 08:56 logs.20170506_10.log.gz
-rw-r--r-- 1 root root 282680 May 6 09:08 logs.20170506_11.log.gz
...
-rw-r--r-- 1 root root 261973 May 6 10:44 logs.20170506_19.log.gz
How can I create a single gzipped file for each day? Even setting time_slice_wait to 24h didn't help.
I missed a silly thing in the configuration: the append option (https://docs.fluentd.org/output/file#append).
Updated configuration:
<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>
<source>
  @type http
  port 8888
  bind 0.0.0.0
  body_size_limit 32m
  keepalive_timeout 10s
</source>
<match **>
  type file
  path /var/log/test/logs
  format json
  time_slice_format %Y%m%d
  time_slice_wait 24h
  compress gzip
  include_tag_key true
  utc
  buffer_path /var/log/test/logs.*
  append true
</match>
If anyone is still getting errors: in the match block, type should also be @type.
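In other words, the opening of the match block becomes:
<match **>
  @type file
  ...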
I am using the Fluentd data collector to store Apache httpd logs in MongoDB. I made the necessary changes in the td-agent configuration file, such as:
<source>
  @type tail
  format apache2
  path C:\Program Files (x86)\Apache Group\Apache2\logs\access.log
  tag mongo.apache
</source>
and
<match mongo.**>
  # plugin type
  @type mongo
  # mongodb db + collection
  database apache
  collection access
  # mongodb host + port
  host localhost
  port 27017
  # interval
  flush_interval 10s
  # make sure to include the time key
  include_time_key true
</match>
After making all the necessary changes, I tested the configuration by hitting the Apache server with ab:
ab -n 100 -c 10 http://localhost/
Everything runs fine, but the log entries are not stored in MongoDB.
I am doing this on Windows.
I have this script currently...
#!/bin/bash
# Check NGINX
nginxstat=$(service nginx status)
# Checking Sites
hostsite="localhost:81 - "$(curl --silent --head --location --output /dev/null --write-out '%{http_code}' http://localhost:81 | grep '^2')
##########
# Send to Slack
curl -X POST --data '{"channel":"#achannel","username":"Ansible", "attachments": [{"fallback":"NGINX Reload","pretext":"'"$HOSTNAME"'","color":"good","fields":[{"title":"nginx localhost","value":"'"$hostsite"'","short":true},{"title":"NGINX","value":"'"$nginxstat"'","short":true}]}]}' -i https://xxx.slack.com/services/hooks/incoming-webhook?token=xxx
I've tried and tried and failed; I want to grab the result of an nginx configtest and push it in as well. At the moment an nginx reload kicks in before this script is run; the reload does a config check itself, so the server stays up if the config is wrong.
My nginx status command (which works) displays:
NGINX
----------------
nginx (pid 1234) is running...
but I can't get the same to work with the config test, which I expect is due to the escaping required and the other junk it pumps out, e.g.:
nginx: [warn] "ssl_stapling" ignored, issuer certificate not found
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
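One thing worth noting before the escaping fix in the answer below: nginx writes its config-test output to stderr, so a plain command substitution comes back empty. A sketch, assuming your init script supports a configtest action:
# capture both stdout and stderr of the config test
nginxconf=$(service nginx configtest 2>&1)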
Transform your variable into a JSON string with jq before you embed it in the POST data:
$ echo '"some quoted stuff"' | jq @json
"\"some quoted stuff\""
For example:
nginxstat=$(service nginx status | jq -Rs @json)
Then embed it unquoted (the value already carries its own double quotes). Note the added -Rs flags: they make jq read the raw, possibly multi-line text as a single string instead of trying to parse it as JSON. See also the manual.
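Putting it together with the webhook call from the question (a sketch showing only the NGINX field; the token stays elided as before, and $nginxstat is embedded without surrounding quotes since jq has already quoted it):
curl -X POST --data '{"channel":"#achannel","username":"Ansible","attachments":[{"fallback":"NGINX Reload","pretext":"'"$HOSTNAME"'","color":"good","fields":[{"title":"NGINX","value":'"$nginxstat"',"short":true}]}]}' -i https://xxx.slack.com/services/hooks/incoming-webhook?token=xxx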
Or, if you want it JSON escaped then bash escaped:
echo '"some quoted stuff"' | jq "@json | @sh"
"'\"some quoted stuff\"'"
Did I mention that jq is my new favorite thing?
http://stedolan.github.io/jq/