I have been trying to set up Keycloak logging so it can be scraped by Fluentd and used in Elasticsearch. So far I have used the provided CLI string in my Helm values file.
cli:
  # Custom CLI script
  custom: |
    /subsystem=logging/json-formatter=json:add(exception-output-type=formatted, pretty-print=true, meta-data={label=value})
    /subsystem=logging/console-handler=CONSOLE:write-attribute(name=named-formatter, value=json)
However, as you can see in the picture provided, the generated logs are entirely JSON except for the core of the log, the message field. Currently the message field is provided as comma-separated key-value pairs. Is there any way to tell Keycloak, JBoss or WildFly that it needs to provide the message in JSON too? That would let me search through the data efficiently in Elasticsearch.
Check this project on GitHub: keycloak_jsonlog_eventlistener: Outputs Keycloak events as JSON into the server log.
Keycloak JSON Log Eventlistener
Primarily written for the JBoss Keycloak Docker image, it will output Keycloak events as JSON into the Keycloak server log.
The idea is to parse the logs once they get to Logstash via Journalbeat.
Tested with Keycloak version 8.0.1
Related
I have Postgres running locally. I can access the database locally with psql postgres:///reviewapp and with \dt I can see a few tables.
If I run npx postgraphile -c "postgres:///reviewapp" I don't get any errors in the terminal:
PostGraphile v4.12.4 server listening on port 5000 🚀
‣ GraphQL API: http://localhost:5000/graphql
‣ GraphiQL GUI/IDE: http://localhost:5000/graphiql (RECOMMENDATION: add '--enhance-graphiql')
‣ Postgres connection: postgres:///reviewapp
‣ Postgres schema(s): public
‣ Documentation: https://graphile.org/postgraphile/introduction/
‣ Node.js version: v14.15.5 on darwin x64
‣ Join Mark in supporting PostGraphile development: https://graphile.org/sponsor/
* * *
However when I go to http://localhost:5000/graphql I have an error on the screen:
{"errors":[{"message":"Only POST requests are allowed."}]}
You're visiting the /graphql endpoint, which speaks GraphQL (over POST requests), but you're sending it a web request (over GET). Instead, use the /graphiql endpoint to view the GraphiQL GraphQL IDE; that endpoint speaks web and will give you a nice interface for communicating with the /graphql endpoint. See this output from the PostGraphile command:
‣ GraphQL API: http://localhost:5000/graphql
‣ GraphiQL GUI/IDE: http://localhost:5000/graphiql (RECOMMENDATION: add '--enhance-graphiql')
I recommend you add the --enhance-graphiql option to the PostGraphile CLI to get an even more powerful IDE in the browser.
It is because when you type an address into the address bar of your browser, a GET request is sent, while your PostGraphile instance only accepts POST requests on that endpoint. So this is the problem. You can either avoid sending GET requests, or try to ensure that PostGraphile accepts GET requests as well.
A simple workaround would be to create a small page that acts as a proxy and, upon load, sends a POST request to http://localhost:5000/graphql
There is a GitHub ticket where a middleware is suggested, read this for further information: https://github.com/graphile/postgraphile/issues/442
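For illustration, a minimal valid request to the GraphQL endpoint from the command line could look like this (the query here is just a placeholder; any query your schema supports will do):
curl -X POST http://localhost:5000/graphql \
  -H "Content-Type: application/json" \
  -d '{"query": "{ __typename }"}'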
I have successfully deployed Nexus 3 on an OpenShift cluster via the Nexus3 Helm chart.
In order to feed the Nexus container logs into the EFK stack, I want the Nexus container to output logs in JSON format.
I'm unable to find any documentation on changing the logging format for Nexus 3.
How can I modify the configuration in the image so that the standard output is JSON and easily parsed?
I have been using MySQLHook happily in my Airflow DAG, but now the MySQL server (AWS RDS) will make SSL connections mandatory. My backend engineer told me that in particular the AWS 2019 CA should be used. I looked into the MySQLHook documentation and found the following snippet from https://airflow.readthedocs.io/en/stable/_modules/airflow/hooks/mysql_hook.html:
if conn.extra_dejson.get('ssl', False):
    # SSL parameter for MySQL has to be a dictionary and in case
    # of extra/dejson we can get string if extra is passed via
    # URL parameters
    dejson_ssl = conn.extra_dejson['ssl']
    if isinstance(dejson_ssl, six.string_types):
        dejson_ssl = json.loads(dejson_ssl)
    conn_config['ssl'] = dejson_ssl
It looks like I need to specify some configuration in the form of JSON ("SSL" key) in the extra section of the MySQL connection in Airflow but I couldn't find any examples of this. Can someone enlighten me? Any pointer or an example of such JSON would be very appreciated.
Your Connection.extra data should be a JSON string containing an "ssl" object suitable for passing to the mysql_ssl_set function, according to the "Functions and attributes" section on this page:
This parameter takes a dictionary or mapping, where the keys are parameter names used by the mysql_ssl_set MySQL C API call. If this is set, it initiates an SSL connection to the server; if there is no SSL support in the client, an exception is raised. This must be a keyword parameter.
Presumably something like this would work: {"ssl": {"cert": "PATH TO YOUR PUBLIC CERT FILE ON THE AIRFLOW SERVER"}}
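If the goal is just to verify the RDS server certificate against the AWS 2019 CA bundle, something along these lines may be closer to what you need; the path is an assumption, so point it at wherever the RDS CA bundle is stored on your Airflow workers:
{"ssl": {"ca": "/path/to/rds-ca-2019-root.pem"}}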
I have Keycloak deployed in Kubernetes using the official codecentric chart. Now I want to output Keycloak logs in JSON format in order to export them to Kibana.
A comment on the original reply pointed to a CLI command to do this.
cli:
  # Custom CLI script
  custom: |
    /subsystem=logging/json-formatter=json:add(exception-output-type=formatted, pretty-print=false, meta-data={label=value})
    /subsystem=logging/console-handler=CONSOLE:write-attribute(name=named-formatter, value=json)
Keycloak is a Java application running on WildFly. If you check the main process that is running inside the pod, you will see something like:
/usr/lib/jvm/java/bin/java -D[Standalone] -server -Xms64m -Xmx512m -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=256m -Djava.net.preferIPv4Stack=true -Djboss.modules.system.pkgs=org.jboss.byteman -Djava.awt.headless=true -Dorg.jboss.boot.log.file=/opt/jboss/keycloak/standalone/log/server.log -Dlogging.configuration=file:/opt/jboss/keycloak/standalone/configuration/logging.properties -jar /opt/jboss/keycloak/jboss-modules.jar -mp /opt/jboss/keycloak/modules org.jboss.as.standalone -Djboss.home.dir=/opt/jboss/keycloak -Djboss.server.base.dir=/opt/jboss/keycloak/standalone -Djboss.bind.address=10.217.0.231 -Djboss.bind.address.private=10.217.0.231 -b 0.0.0.0 -c standalone.xml
The important part here is the following:
-Dlogging.configuration=file:/opt/jboss/keycloak/standalone/configuration/logging.properties
So, the logging configuration is passed to the Java process as a JVM option, and read from the file on the path /opt/jboss/keycloak/standalone/configuration/logging.properties.
If you check the content of the file, it has a section like the following:
...
handler.CONSOLE=org.jboss.logmanager.handlers.ConsoleHandler
handler.CONSOLE.level=INFO
handler.CONSOLE.formatter=COLOR-PATTERN
handler.CONSOLE.properties=autoFlush,target,enabled
handler.CONSOLE.autoFlush=true
handler.CONSOLE.target=SYSTEM_OUT
handler.CONSOLE.enabled=true
...
You need to figure out what to change in this logging configuration to meet your JSON requirements. An example would be:
formatter.json=org.jboss.logmanager.formatters.JsonFormatter
formatter.json.properties=keyOverrides,exceptionOutputType,metaData,prettyPrint,printDetails,recordDelimiter
formatter.json.constructorProperties=keyOverrides
formatter.json.keyOverrides=timestamp\=#timestamp
formatter.json.exceptionOutputType=FORMATTED
formatter.json.metaData=#version\=1
formatter.json.prettyPrint=false
formatter.json.printDetails=false
formatter.json.recordDelimiter=\n
Then, in Kubernetes you can create a ConfigMap with the logging configuration that you want, define it as a volume in your pod/deployment, and mount it as a file at that exact path in the pod/deployment definition (see the sketch below). If you do all the steps correctly, you should be able to customize the logging format as you need.
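A minimal sketch of what that could look like; the resource names (keycloak-logging, logging-config, keycloak) are assumptions, so adapt them to the objects your chart actually creates:
apiVersion: v1
kind: ConfigMap
metadata:
  name: keycloak-logging
data:
  logging.properties: |
    # ... the full logging.properties content, including the JSON formatter above ...
And in the pod template of the Deployment/StatefulSet:
  volumes:
    - name: logging-config
      configMap:
        name: keycloak-logging
  containers:
    - name: keycloak
      volumeMounts:
        - name: logging-config
          mountPath: /opt/jboss/keycloak/standalone/configuration/logging.properties
          subPath: logging.properties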
The goal is to be able to send messages using AWS SQS+SNS. This has been a struggle for a few days and I don't know how to make it work.
Symfony 4.2 has a new component, Messenger, that I wanted to use. It is supposed to work with php-enqueue as a third-party transport. I am using that to connect to AWS SQS+SNS.
I can't find any documentation that puts it all together. I see how php-enqueue connects to AWS, but the docs show the config in code rather than in the config YAML or .env files. That is a problem since I want Messenger/enqueue to handle the behind-the-scenes stuff.
I was able to make Symfony Messenger work without php-enqueue for local synchronous messages, but beyond that I am clearly not doing it right. I was hoping someone might have a boilerplate for this configuration.
Here is where I am at. I am just trying to send a message using SQS. I am getting an error:
Error executing "GetQueueUrl" on "https://sqs.us-west-2.amazonaws.com";
AWS HTTP error: Client error: `POST https://sqs.us-west-2.amazonaws.com`
resulted in a `400 Bad Request`
I tried many permutations of keys in the enqueue.yaml file but did not get it right. I used this for help but could not get it to work. https://enqueue.readthedocs.io/en/stable/bundle/config_reference/
Edit: I found that you can add the topic and queue names to the DSN. I no longer get the error and a topic is created, but the queue is not. Now the message bus is working, but only synchronously and locally. No message is sent to AWS.
These are the Composer libs I installed. I am sure that there are too many, but I kept trying to make it work.
"aws/aws-sdk-php": "^3.19",
"enqueue/amqp-lib": "^0.9.8",
"enqueue/enqueue-bundle": "^0.9.8",
"enqueue/messenger-adapter": "^0.2.2",
"enqueue/snsqs": "^0.9.0",
"guzzlehttp/guzzle": "^6.0",
"symfony/amqp-pack": "^1.0",
"symfony/messenger": "4.2.*",
This is my messenger.yaml
framework:
    messenger:
        transports:
            amqp: 'enqueue://default?topic[name]=testQ&queue[name]=testQ'
        routing:
            # Route your messages to the transports
            'App\Message\SmsMessage': amqp
This is enqueue.yaml
enqueue:
    default:
        transport:
            dsn: '%env(resolve:ENQUEUE_DSN)%'
        client: ~
This is the entry in .env
###> enqueue/enqueue-bundle ###
ENQUEUE_DSN=snsqs::?key={key}&secret={secret}&region=us-west-2
###< enqueue/enqueue-bundle ###
This is the code in a controller to send a message:
public function index(MessageBusInterface $messageBus) {
    $message = new SmsMessage('This is so cool');
    $messageBus->dispatch($message);
    ...
}
I had this same issue, which I managed to fix.
This is my messenger.yaml config that's working with SQS
transports:
    sqs:
        dsn: enqueue://default?topic[name]=YOURTOPICNAME&queue[name]=YOURQUEUENAME&receiveTimeout=3
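For completeness, a sketch of where that fragment sits in the full messenger.yaml; the routing entry for SmsMessage is an assumption based on the question above:
framework:
    messenger:
        transports:
            sqs:
                dsn: 'enqueue://default?topic[name]=YOURTOPICNAME&queue[name]=YOURQUEUENAME&receiveTimeout=3'
        routing:
            'App\Message\SmsMessage': sqs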
Hopefully this is of use to someone