I have Keycloak deployed in Kubernetes using the official codecentric chart. Now I want Keycloak to log in JSON format so that I can export the logs to Kibana.
A comment on the original answer pointed to a CLI command that does this.
cli:
  # Custom CLI script
  custom: |
    /subsystem=logging/json-formatter=json:add(exception-output-type=formatted, pretty-print=false, meta-data={label=value})
    /subsystem=logging/console-handler=CONSOLE:write-attribute(name=named-formatter, value=json)
Keycloak is a Java application running on WildFly. If you check the main process that is running inside the pod, you will see something like:
/usr/lib/jvm/java/bin/java -D[Standalone] -server -Xms64m -Xmx512m -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=256m -Djava.net.preferIPv4Stack=true -Djboss.modules.system.pkgs=org.jboss.byteman -Djava.awt.headless=true -Dorg.jboss.boot.log.file=/opt/jboss/keycloak/standalone/log/server.log -Dlogging.configuration=file:/opt/jboss/keycloak/standalone/configuration/logging.properties -jar /opt/jboss/keycloak/jboss-modules.jar -mp /opt/jboss/keycloak/modules org.jboss.as.standalone -Djboss.home.dir=/opt/jboss/keycloak -Djboss.server.base.dir=/opt/jboss/keycloak/standalone -Djboss.bind.address=10.217.0.231 -Djboss.bind.address.private=10.217.0.231 -b 0.0.0.0 -c standalone.xml
The important part here is the following:
-Dlogging.configuration=file:/opt/jboss/keycloak/standalone/configuration/logging.properties
So the logging configuration is passed to the Java process as a JVM option and read from the file at /opt/jboss/keycloak/standalone/configuration/logging.properties.
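To inspect the file that is actually in use, you can read it from inside the running pod (the pod name below is hypothetical):

kubectl exec keycloak-0 -- \
  cat /opt/jboss/keycloak/standalone/configuration/logging.properties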
If you check the content of the file, it has a section like the following:
...
handler.CONSOLE=org.jboss.logmanager.handlers.ConsoleHandler
handler.CONSOLE.level=INFO
handler.CONSOLE.formatter=COLOR-PATTERN
handler.CONSOLE.properties=autoFlush,target,enabled
handler.CONSOLE.autoFlush=true
handler.CONSOLE.target=SYSTEM_OUT
handler.CONSOLE.enabled=true
...
You need to figure out what to change in this logging configuration to meet your JSON requirements. An example would be:
formatter.json=org.jboss.logmanager.formatters.JsonFormatter
formatter.json.properties=keyOverrides,exceptionOutputType,metaData,prettyPrint,printDetails,recordDelimiter
formatter.json.constructorProperties=keyOverrides
formatter.json.keyOverrides=timestamp\=#timestamp
formatter.json.exceptionOutputType=FORMATTED
formatter.json.metaData=#version\=1
formatter.json.prettyPrint=false
formatter.json.printDetails=false
formatter.json.recordDelimiter=\n
Note that the console handler must also point at the new formatter, i.e. handler.CONSOLE.formatter=json instead of COLOR-PATTERN. Then, in Kubernetes, you can create a ConfigMap with the logging configuration you want, define it as a volume in your pod/deployment, and mount it as a file at that exact path. If you do all the steps correctly, you should be able to customize the logging format as you need.
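A minimal sketch of that wiring, with hypothetical names (keycloak-logging, logging-config); the commented values assume the codecentric chart's extraVolumes/extraVolumeMounts settings:

kubectl create configmap keycloak-logging \
  --from-file=logging.properties=./logging.properties

# Mount it over the default file, e.g. via the chart's extraVolumes /
# extraVolumeMounts values (structure is illustrative):
#
#   extraVolumes: |
#     - name: logging-config
#       configMap:
#         name: keycloak-logging
#   extraVolumeMounts: |
#     - name: logging-config
#       mountPath: /opt/jboss/keycloak/standalone/configuration/logging.properties
#       subPath: logging.properties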
Related
I am trying to run OpenShift on Fedora 36 using the Origin Client (oc).
I have updated Fedora to the latest version.
I have installed oc.
Whenever I try to run oc cluster up, it shows the error below:
[root@fedora ridhoswasta]# oc cluster up
Getting a Docker client ...
Checking if image openshift/origin-control-plane:v3.11 is available ...
Checking type of volume mount ...
Determining server IP ...
Checking if OpenShift is already running ...
Checking for supported Docker version (=>1.22) ...
Checking if insecured registry is configured properly in Docker ...
Checking if required ports are available ...
Checking if OpenShift client is configured properly ...
Checking if image openshift/origin-control-plane:v3.11 is available ...
Starting OpenShift using openshift/origin-control-plane:v3.11 ...
I0825 12:11:14.411027 50887 flags.go:30] Running "create-kubelet-flags"
I0825 12:11:16.391985 50887 run_kubelet.go:49] Running "start-kubelet"
I0825 12:11:17.200056 50887 run_self_hosted.go:181] Waiting for the kube-apiserver to be ready ...
E0825 12:16:17.201364 50887 run_self_hosted.go:571] API server error: Get "https://127.0.0.1:8443/healthz?timeout=32s": dial tcp 127.0.0.1:8443: connect: connection refused ()
Error: timed out waiting for the condition
Then I checked the logs of the kubelet container, which show:
Flag --tls-cipher-suites has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --tls-cipher-suites has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --tls-cipher-suites has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --tls-cipher-suites has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --tls-min-version has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --tls-private-key-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --pod-manifest-path has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --file-check-frequency has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --cluster-dns has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
I0825 05:13:19.249680 51788 server.go:417] Version: v1.11.0+d4cacc0
I0825 05:13:19.249928 51788 plugins.go:97] No cloud provider specified.
F0825 05:13:19.253892 51788 server.go:261] failed to run Kubelet: mountpoint for cpu not found
I have tried reinstalling Docker with the latest version, but I still face this issue.
Could someone suggest something else to try?
Thanks!
oc cluster up uses a deprecated version of OpenShift; it has been superseded by OpenShift Local: https://developers.redhat.com/products/openshift-local/overview. Note that OpenShift Local uses a good deal more resources than oc cluster up ever did. There is also a spiritual successor that might be worth checking out: MicroShift: https://microshift.io/
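For reference, a rough sketch of getting started with OpenShift Local, assuming you have downloaded the crc binary and a pull secret from the Red Hat developers site linked above:

crc setup    # prepares the host (hypervisor, networking, etc.)
crc start    # prompts for the pull secret and boots the cluster
crc status   # verify the cluster is running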
My structure
Kubernetes cluster on GKE
Ingress controller deployed using helm
An application which returns a list of IP ranges (note: it gets updated periodically)
curl https://allowed.domain.com
172.30.1.210/32,172.30.2.60/32
Secured application which is not working
What am I trying to do?
Have my clients' IPs served by my API endpoint, which is done:
curl https://allowed.domain.com
172.30.1.210/32,172.30.2.60/32
Deploy my example app with an ingress so it can pull the list from https://allowed.domain.com and allow only those people to access the app
What did I try that didn't work?
Deploy the application with include feature of nginx
nginx.ingress.kubernetes.io/configuration-snippet: |
  include /tmp/allowed-ips.conf;
  deny all;
Yes, it is working, but the problem is that when /tmp/allowed-ips.conf gets updated, the ingress config doesn't get reloaded.
I tried to use an if condition to pull the IPs from the endpoint and deny access if the user is not in the list:
nginx.ingress.kubernetes.io/configuration-snippet: |
  set $deny_access off;
  if ($remote_addr !~ (https://2ce8-73-56-131-204.ngrok.io)) {
    set $deny_access on;
  }
I am using nginx.ingress.kubernetes.io/whitelist-source-range annotation but that is not what I am looking for
None of the options are working for me.
From the official docs of ingress-nginx controller:
The goal of this Ingress controller is the assembly of a configuration file (nginx.conf). The main implication of this requirement is the need to reload NGINX after any change in the configuration file. Though it is important to note that we don't reload Nginx on changes that impact only an upstream configuration (i.e Endpoints change when you deploy your app)
After the nginx ingress resource is initially created, the ingress controller assembles the nginx.conf file and uses it for routing traffic. The Nginx web server does not auto-reload its configuration when nginx.conf or other config files change.
So, you can work around this problem in several ways:
Update the k8s Ingress resource with the new IP addresses and then apply the change to the Kubernetes cluster (kubectl apply, kubectl patch, or similar); this covers your options 2 and 3 (see the sketch after this list).
Run nginx -s reload inside the ingress controller Pod to reload the nginx configuration; this covers your option 1 with the included allow-list file:
$ kubectl exec ingress-nginx-controller-xxx-xxx -n ingress-nginx -- nginx -s reload
Try to write a Lua script (there are good examples for Nginx+Lua+Redis here and here). You need a good understanding of nginx and Lua to estimate whether it is worth trying.
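For the first workaround (updating the Ingress resource), a minimal sketch, assuming a hypothetical ingress named example-app in the default namespace and that access is gated via the configuration-snippet annotation:

# fetch the current list, e.g. "172.30.1.210/32,172.30.2.60/32"
IPS=$(curl -s https://allowed.domain.com)
# turn it into "allow <cidr>;" lines followed by "deny all;"
# (commas are intentionally split into separate arguments for printf)
SNIPPET=$(printf 'allow %s;\n' ${IPS//,/ }; echo 'deny all;')
# overwriting the annotation updates the Ingress resource, which makes the
# controller re-render nginx.conf and reload
kubectl -n default annotate ingress example-app \
  --overwrite \
  nginx.ingress.kubernetes.io/configuration-snippet="$SNIPPET"

Run periodically, this is essentially what the CronJob-based answer below automates.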
Sharing what I implemented at my workplace. We had a managed monitoring tool called Site24x7. The tool pings our server from their VMs with dynamic IPs, and we had to automate the whitelisting of those IPs at GKE.
nginx.ingress.kubernetes.io/configuration-snippet allows you to set arbitrary Nginx configurations.
Set up a K8s CronJob resource on the specific namespace.
The CronJob runs a shell script, which
fetches the list of IPs to be allowed (curl, getent, etc.)
generates a set of NGINX configurations (= the value for nginx.ingress.kubernetes.io/configuration-snippet)
runs a kubectl command which overwrites the annotation of the target ingresses.
Example shell/bash script:
#!/bin/bash
site24x7_ip_lookup_url="site24x7.enduserexp.com"
site247_ips=$(getent ahosts $site24x7_ip_lookup_url | awk '{print "allow "$1";"}' | sort -u)
ip_whitelist=$(cat <<-EOT
# ---------- Default whitelist (Static IPs) ----------
# Office
allow vv.xx.yyy.zzz;
# VPN
allow aa.bbb.ccc.ddd;
# ---------- Custom whitelist (Dynamic IPs) ----------
$site247_ips # Here!
deny all;
EOT
)
for target_ingress in $TARGET_INGRESS_NAMES; do
  kubectl -n $NAMESPACE annotate ingress/$target_ingress \
    --overwrite \
    nginx.ingress.kubernetes.io/satisfy="any" \
    nginx.ingress.kubernetes.io/configuration-snippet="$ip_whitelist" \
    description="*** $(date '+%Y/%m/%d %H:%M:%S') NGINX annotation 'configuration-snippet' updated by cronjob $CRONJOB_NAME ***"
done
The shell/bash script can be stored as a ConfigMap and mounted into the CronJob resource.
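A rough sketch of that wiring; the names (whitelist-refresher, refresh.sh) are made up, and the Job's service account additionally needs RBAC permission to annotate ingresses:

kubectl create configmap whitelist-refresher --from-file=refresh.sh

cat <<'EOF' | kubectl apply -f -
apiVersion: batch/v1
kind: CronJob
metadata:
  name: whitelist-refresher
spec:
  schedule: "*/15 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          # serviceAccountName: a service account allowed to annotate ingresses
          containers:
          - name: refresher
            image: bitnami/kubectl:latest
            command: ["/bin/bash", "/scripts/refresh.sh"]
            env:
            - name: NAMESPACE
              value: default
            - name: TARGET_INGRESS_NAMES
              value: "example-app"
            - name: CRONJOB_NAME
              value: whitelist-refresher
            volumeMounts:
            - name: script
              mountPath: /scripts
          volumes:
          - name: script
            configMap:
              name: whitelist-refresher
EOF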
I managed to set up Artifactory using our existing Tomcat. I have set ARTIFACTORY_HOME=/opt/artifactory, and that part works well. There is, however, also the JFrog access.war, which needs to be running as well. I couldn't figure out which variable to use to specify its home, so it defaults to ~/.jfrog_access, which is not at all what I want.
I moved the content over to my $ARTIFACTORY_HOME/access and symlinked it, but that's not the way to go for sure. Any help appreciated.
In case someone stumbles over this thread and struggles with the same problem:
The solution for me was to also extract the context files (access.xml and artifactory.xml, which are available in the zip file under <zip extract>/misc/tomcat) into the Tomcat configuration folder, e.g. $CATALINA_HOME/conf/Catalina/localhost/. After that, the $ARTIFACTORY_HOME environment variable will be recognized on Access startup.
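For reference, a hedged sketch of that copy step; EXTRACT_DIR is a placeholder for wherever you unpacked the Artifactory distribution zip:

EXTRACT_DIR=./artifactory-extract   # hypothetical extract location
cp "$EXTRACT_DIR"/misc/tomcat/access.xml \
   "$EXTRACT_DIR"/misc/tomcat/artifactory.xml \
   "$CATALINA_HOME"/conf/Catalina/localhost/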
A previous answer finally put me on the right track for solving this problem on Amazon Linux.
In addition to copying access.xml and artifactory.xml to ${catalina.home}/host/MY_HOSTNAME, I found that some other changes were needed.
I modified the docBase attributes in the XML context files because my server has multiple hostnames:
/usr/share/tomcat8/conf/Catalina/repo.mydomain.org/access.xml
<Context path="/access" docBase="${catalina.home}/host/repo.mydomain.org/access.war">
<Parameter name="jfrog.access.bundled" value="true" override="true"/>
<!-- enable annotations scanning of access jar files -->
<JarScanner scanClassPath="false">
<JarScanFilter defaultPluggabilityScan="false" pluggabilityScan="access*" defaultTldScan="false"/>
</JarScanner>
</Context>
/usr/share/tomcat8/conf/Catalina/repo.mydomain.org/artifactory.xml
<Context crossContext="true" path="/artifactory" docBase="${catalina.home}/host/repo.mydomain.org/artifactory.war">
</Context>
Important Note: In order to prevent the above two XML files from being deleted by Tomcat Manager during upgrades via Undeploy/Deploy WAR, make sure they are owned by root and not writable by the tomcat user:
chown root.root access.xml artifactory.xml
chmod 644 access.xml artifactory.xml
If you forget to do the above, you will likely end up missing these files, which will break the communication between the access and artifactory web applications, resulting in login failures ("Username or Password Are Incorrect"). In this case, these errors result from the lack of communication between the web applications, not a problem with the credentials themselves.
/usr/share/tomcat8/conf/Catalina/repo.mydomain.org/manager.xml
This gives me the ability to upload new versions of access.war and artifactory.war via https://repo.mydomain.org:8443/manager/html:
<Context docBase="${catalina.home}/webapps/manager" privileged="true" antiResourceLocking="false">
</Context>
Additionally, I created the following folder to serve as the artifactory.home:
sudo mkdir /usr/share/artifactory
sudo chown tomcat.tomcat /usr/share/artifactory
tomcat8.conf
Add (or modify) the following line:
JAVA_OPTS="-Dartifactory.home=/usr/share/artifactory -Djfrog.access.home=/usr/share/artifactory/access -Dartifactory.access.client.serverUrl.override=http://localhost:8080/access"
Note: The Access Client URL specified above must use localhost in order to prevent the Server HTTP header from being overwritten by Apache and its modules. For instance, if I use:
https://repo.mydomain.org/access/api/v1/system/ping
The Server HTTP header value in the response is:
Server: Apache/2.4.33 (Amazon) OpenSSL/1.0.2k-fips mod_jk/1.2.43
And the Access Client produces the following exception:
[ERROR] (o.j.a.c.AccessClientImpl:154) - Access client/server version mismatch. Client version: 4.1.5, Server version: 2.4.33 (Amazon) OpenSSL
This means the Access Client depends on the first string matching #.#.# in the Server header, which seems like a really fragile part of the Access Client. They should have used X-JFrog-Access-Server or something instead of relying on a value that is set by the web server. So, to reiterate, use http://localhost:8080/access to connect directly to the Tomcat server.
Artifactory 6.2.0 depends on Apache Derby (the specific version can be found in jfrog-artifactory-oss-6.2.0.zip\artifactory-oss-6.2.0\tomcat\lib). This should be added as a shared library to Tomcat:
mkdir /usr/share/tomcat8/shared
cd /usr/share/tomcat8/shared
wget http://central.maven.org/maven2/org/apache/derby/derby/10.11.1.1/derby-10.11.1.1.jar
Add or modify the following line in catalina.properties:
shared.loader=${catalina.home}/shared/*.jar
Since we want https://repo.mydomain.org to go to the Artifactory webapp:
mkdir /usr/share/tomcat8/host/repo.mydomain.org/ROOT
echo '<html><head><meta http-equiv="refresh" content="0;URL=/artifactory"></meta></head><body></body></html>' > /usr/share/tomcat8/host/repo.mydomain.org/ROOT/index.html
And make sure the services automatically start on reboot:
sudo chkconfig httpd on
sudo chkconfig tomcat8 on
Artifactory will then be available at the url:
https://repo.mydomain.org/artifactory/webapp/
I have a scenario where I need to change several parameters of a Hadoop cluster managed by Ambari to document the performance of a particular application. The change in the configs entails a restart of the affected components.
I am using the Ambari REST API to achieve this. I figured out how to do this for all Hadoop service components, but I'm not sure whether the API provides a way to restart the MySQL server that Hive uses.
I have the following questions:
Is it the case that a mere stop and start of mysqld on the appropriate machine is enough to ensure that the required configuration changes are recognized by Ambari and the application?
I chose the 'New MySQL database' option while installing Hive via Ambari. Does this mean that restarts are reflected in Ambari only when they are carried out from the Ambari UI?
Your inputs would be highly appreciated.
Thanks!
Found a solution to the problem. I used the following commands with the Ambari REST API to change configurations and restart services from the backend.
Log in to the host on which the Ambari server is running and use the already provided configs.sh script as described below.
Modifying configuration files
#!/bin/bash
# Usage: <script> CLUSTER_NAME CONFIG_FILE PROPERTY_NAME PROPERTY_VALUE
CLUSTER_NAME=$1
CONFIG_FILE=$2
PROPERTY_NAME=$3
PROPERTY_VALUE=$4
/var/lib/ambari-server/resources/scripts/configs.sh -port <ambari-server-port> set localhost "$CLUSTER_NAME" "$CONFIG_FILE" "$PROPERTY_NAME" "$PROPERTY_VALUE"
where CONFIG_FILE can take values like tez-site, mapred-site, hadoop-site, hive-site etc. PROPERTY_NAME and PROPERTY_VALUE should be set to values relevant to the specified CONFIG_FILE.
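For example, if the script above is saved as set_config.sh (a hypothetical name) and <ambari-server-port> has been substituted (8080 by default), bumping a Tez property on cluster c1 would look like:

./set_config.sh c1 tez-site tez.am.resource.memory.mb 2048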
Restarting host components
curl -uadmin:admin -H 'X-Requested-By: ambari' -X POST -d '
{
  "RequestInfo": {
    "command": "RESTART",
    "context": "Restart MySQL server used by Hive Metastore on node3.cluster.com and HDFS client on node1.cluster.com",
    "operation_level": {
      "level": "HOST",
      "cluster_name": "c1"
    }
  },
  "Requests/resource_filters": [
    {
      "service_name": "HIVE",
      "component_name": "MYSQL_SERVER",
      "hosts": "node3.cluster.com"
    },
    {
      "service_name": "HDFS",
      "component_name": "HDFS_CLIENT",
      "hosts": "node1.cluster.com"
    }
  ]
}' http://localhost:<ambari-server-port>/api/v1/clusters/c1/requests
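The POST returns a request resource whose progress can be polled with a GET; the request id 42 below is hypothetical:

curl -uadmin:admin -H 'X-Requested-By: ambari' \
  http://localhost:<ambari-server-port>/api/v1/clusters/c1/requests/42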
Reference Links:
Restarting components
Modifying configurations
Hope this helps!
Working with Elastic Beanstalk .config files is kinda... interesting. I'm trying to use environment properties with the files: configuration option in an Elastic Beanstalk .config file. What I'd like to do is something like:
files:
  "/etc/passwd-s3fs":
    mode: "000640"
    owner: root
    group: root
    content: |
      ${AWS_ACCESS_KEY_ID}:${AWS_SECRET_KEY}
To create an /etc/passwd-s3fs file with content something like:
ABAC73E92DEEWEDS3FG4E:aiDSuhr8eg4fHHGEMes44zdkIJD0wkmd
I.e. use the environment properties defined in the AWS Console (Elastic Beanstalk/Configuration/Software Configuration/Environment Properties) to initialize system configuration files and such.
I've found that it is possible to use environment properties in container_commands, like so:
container_commands:
  000-create-file:
    command: echo ${AWS_ACCESS_KEY_ID}:${AWS_SECRET_KEY} > /etc/passwd-s3fs
However, doing so requires me to manually set the owner, group, file permissions, and so on. It's also much more of a hassle than the files: configuration option when dealing with larger configuration files...
Anyone got any tips on this?
How about something like this? I will use the word "context" for dev vs. qa.
Create one file per context:
dev-envvars
export MYAPP_IP_ADDR=111.222.0.1
export MYAPP_BUCKET=dev
qa-envvars
export MYAPP_IP_ADDR=111.222.1.1
export MYAPP_BUCKET=qa
Upload those files to a private S3 folder, S3://myapp/config.
In IAM, add a policy to the aws-elasticbeanstalk-ec2-role role that allows reading S3://myapp/config.
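A hedged sketch of attaching such a policy inline to the instance role; the policy name and resource scope are illustrative:

aws iam put-role-policy \
  --role-name aws-elasticbeanstalk-ec2-role \
  --policy-name myapp-config-read \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::myapp/*"
    }]
  }'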
Add the following file to your .ebextensions directory:
envvars.config
files:
  "/opt/myapp_envvars":
    mode: "000644"
    owner: root
    group: root
    # change the source when you need a different context
    #source: https://s3-us-west-2.amazonaws.com/myapp/dev-envvars
    source: https://s3-us-west-2.amazonaws.com/myapp/qa-envvars

Resources:
  AWSEBAutoScalingGroup:
    Metadata:
      AWS::CloudFormation::Authentication:
        S3Access:
          type: S3
          roleName: aws-elasticbeanstalk-ec2-role
          buckets: myapp

commands:
  # commands executes after files per
  # http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html
  10-load-env-vars:
    command: . /opt/myapp_envvars
Per the AWS Developer Guide, commands "run before the application and web server are set up and the application version file is extracted," and before container_commands. I guess the question will be whether that is early enough in the boot process to make the environment variables available when you need them. I actually wound up writing an init.d script to start and stop things in my EC2 instance; I used the technique above to deploy that script.
Credit for the “Resources” section that allows downloading from secured S3 goes to the May 7, 2014 post that Joshua@AWS made to this thread.
I am gravedigging, but since I stumbled across this in the course of my travels: there is a "clever" way to do what you describe, at least as of 2018 (and at least since 2016). You can retrieve an environment variable by key with get-config:
/opt/elasticbeanstalk/bin/get-config environment --key YOUR_ENV_VAR_KEY
And likewise all environment variables (as JSON by default, or with --output YAML):
/opt/elasticbeanstalk/bin/get-config environment
Example usage in a container command:
container_commands:
  00_store_env_var_in_file_and_chmod:
    command: "/opt/elasticbeanstalk/bin/get-config environment --key YOUR_ENV_KEY | install -D /dev/stdin /etc/somefile && chmod 640 /etc/somefile"
Example usage in a file:
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/00_do_stuff.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/bin/bash
      YOUR_ENV_VAR=$(/opt/elasticbeanstalk/bin/get-config environment --key YOUR_ENV_VAR_KEY)
      echo "Hello $YOUR_ENV_VAR"
I was introduced to get-config by Thomas Reggi in https://serverfault.com/a/771067.
I assume that AWS_ACCESS_KEY_ID and AWS_SECRET_KEY are known to you prior to the app deployment.
You can create the file on your workstation and submit it to the Elastic Beanstalk instance together with your code on git aws.push:
$ cd .ebextensions
$ echo 'ABAC73E92DEEWEDS3FG4E:aiDSuhr8eg4fHHGEMes44zdkIJD0wkmd' > passwd-s3fs
In .config:
files:
  "/etc/passwd-s3fs":
    mode: "000640"
    owner: root
    group: root

container_commands:
  10-copy-passwords-file:
    command: "cat .ebextensions/passwd-s3fs > /etc/passwd-s3fs"
You might have to play with the permissions or execute cat with sudo. Also, I put the file into .ebextensions as an example; it can be anywhere in your project.
Hope it helps.