Graphhopper with hybrid mode enabled throws IllegalStateException - graphhopper

I'm trying to start GraphHopper in hybrid mode using the latest code from its git repo.
The config file, per its comments and documentation (and this answer) has:
prepare.ch.weightings: no
prepare.lm.weightings: fastest
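For reference, a minimal config.yml for this setup would look roughly like the sketch below (the data file name, graph location, and flag encoder are illustrative; the two prepare.* lines are the ones relevant to hybrid mode):
graphhopper:
  # illustrative values; adjust to the data mounted at /data
  datareader.file: /data/area-latest.osm.pbf
  graph.location: /data/area-latest.osm-gh
  graph.flag_encoders: car
  # disable contraction hierarchies, enable landmarks (hybrid mode)
  prepare.ch.weightings: no
  prepare.lm.weightings: fastest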
I build it with docker build -t tgraphhopper:latest . and then I start a container with docker run --name tgraphhopper -v ./data:/data -p 8989:8989 tgraphhopper:latest
The error that appears in the logs is:
java.lang.IllegalStateException: Configured graph.ch.weightings: [] is
not equal to loaded [fastest|car]
at com.graphhopper.storage.GraphHopperStorage.loadExisting(GraphHopperStorage.java:254)
at com.graphhopper.GraphHopper.load(GraphHopper.java:781)
at com.graphhopper.GraphHopper.importOrLoad(GraphHopper.java:637)
at com.graphhopper.http.GraphHopperManaged.start(GraphHopperManaged.java:71)
at io.dropwizard.lifecycle.JettyManaged.doStart(JettyManaged.java:27)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:138)
at org.eclipse.jetty.server.Server.start(Server.java:419)
at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:117)
at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:113)
at org.eclipse.jetty.server.Server.doStart(Server.java:386)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at io.dropwizard.cli.ServerCommand.run(ServerCommand.java:53)
at io.dropwizard.cli.EnvironmentCommand.run(EnvironmentCommand.java:44)
at io.dropwizard.cli.ConfiguredCommand.run(ConfiguredCommand.java:87)
at io.dropwizard.cli.Cli.run(Cli.java:78)
at io.dropwizard.Application.run(Application.java:93)
at com.graphhopper.http.GraphHopperApplication.main(GraphHopperApplication.java:33)
What am I missing when trying to start GraphHopper in hybrid mode?

You need to remove the previously created graph cache folder in /data: that graph was imported with CH weightings enabled, so it no longer matches the new configuration (which is what "Configured graph.ch.weightings: [] is not equal to loaded [fastest|car]" is telling you).
(So, if your OSM file is area-latest.osm.pbf, the folder is named area-latest.osm-gh.)
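For example, assuming the OSM extract is area-latest.osm.pbf and ./data is the directory mounted into the container (as in the question), removing the generated graph folder and starting the container again forces a fresh import with the new weightings:
rm -rf ./data/area-latest.osm-gh
docker run --name tgraphhopper -v ./data:/data -p 8989:8989 tgraphhopper:latest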


mkdir /.gitlab-runner: permission denied running GitLab Runner in Kubernetes deployed via Helm

I'm trying to deploy the GitLab Runner (15.7.1) onto an on-premise Kubernetes cluster and getting the following error:
PANIC: loading system ID file: saving system ID state file: creating directory: mkdir /.gitlab-runner: permission denied
This is occurring with both the 15.7.1 image (Ubuntu?) and the alpine3.13-v15.7.1 image. Looking at the deployment, it looks like it should be trying to use /home/gitlab-runner, but for some reason it is trying to use the root directory (/), to which it does not have write access.
Anyone else experience this issue or have a suggestion as to what to look at?
I am using the Helm chart (0.48.0) with a copy of the images from Docker Hub (simply moved into a local registry, as internet access is not available from the cluster). Connectivity to GitLab appears to be working, but the error causes the overall startup to fail. Full logs are:
Registration attempt 4 of 30
Runtime platform arch=amd64 os=linux pid=33 revision=6d480948 version=15.7.1
WARNING: Running in user-mode.
WARNING: The user-mode requires you to manually start builds processing:
WARNING: $ gitlab-runner run
WARNING: Use sudo for system-mode:
WARNING: $ sudo gitlab-runner...
Created missing unique system ID system_id=r_Of5q3G0yFEVe
PANIC: loading system ID file: saving system ID state file: creating directory: mkdir /.gitlab-runner: permission denied
I have tried the 15.7.1 image, the alpine3.13-v15.7.1 image, and the gitlab-runner-ocp:amd64-v15.7.1 image and searched the values.yaml for anything relevant to the path. Looking at the deployment template, it appears that it ought to be using /home/gitlab-runner as the directory (instead of /) [though the docs suggested it was /home].
As for "what was I expecting", of course I was expecting that it would "just work" :)
So, I resolved this (and other) issues with:
Updated the Helm deployment template to mount an empty volume at /.gitlab-runner
[separate issue] explicitly added builds_dir and environment [per gitlab-org/gitlab-runner#3511 (comment 114281106)].
These two steps appeared to be sufficient to get the Helm chart deployment working.
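If you would rather not patch the chart's templates, recent versions of the gitlab-runner chart also accept extra volumes directly in values.yaml, which should achieve the same mount. A sketch (the key names may differ between chart versions, so verify them against your chart's values.yaml):
# values.yaml fragment (illustrative; check the keys for your chart version)
volumes:
  - name: runner-home
    emptyDir: {}
volumeMounts:
  - name: runner-home
    mountPath: /.gitlab-runner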
You can easily create and mount the emptyDir yourself (in case you are creating the gitlab-runner with a Kubernetes manifest *.yml file):
volumes:
  - emptyDir: {}
    name: gitlab-runner
volumeMounts:
  - name: gitlab-runner
    mountPath: /.gitlab-runner
-------------------- OR --------------------
volumeMounts:
  - name: root-gitlab-runner
    mountPath: /.gitlab-runner
volumes:
  - name: root-gitlab-runner
    emptyDir:
      medium: "Memory"

Keycloak on kubernetes and logging json layout format with log4j2

I have Keycloak deployed in Kubernetes using the official codecentric chart. Now I want Keycloak to log in JSON format so that the logs can be exported to Kibana.
A comment on the original reply pointed to a CLI command to do this.
cli:
  # Custom CLI script
  custom: |
    /subsystem=logging/json-formatter=json:add(exception-output-type=formatted, pretty-print=false, meta-data={label=value})
    /subsystem=logging/console-handler=CONSOLE:write-attribute(name=named-formatter, value=json)
Keycloak is a Java application that runs on WildFly. If you check the main process that is running inside the pod, you will see something like:
/usr/lib/jvm/java/bin/java -D[Standalone] -server -Xms64m -Xmx512m -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=256m -Djava.net.preferIPv4Stack=true -Djboss.modules.system.pkgs=org.jboss.byteman -Djava.awt.headless=true -Dorg.jboss.boot.log.file=/opt/jboss/keycloak/standalone/log/server.log -Dlogging.configuration=file:/opt/jboss/keycloak/standalone/configuration/logging.properties -jar /opt/jboss/keycloak/jboss-modules.jar -mp /opt/jboss/keycloak/modules org.jboss.as.standalone -Djboss.home.dir=/opt/jboss/keycloak -Djboss.server.base.dir=/opt/jboss/keycloak/standalone -Djboss.bind.address=10.217.0.231 -Djboss.bind.address.private=10.217.0.231 -b 0.0.0.0 -c standalone.xml
The important part here is the following:
-Dlogging.configuration=file:/opt/jboss/keycloak/standalone/configuration/logging.properties
So, the logging configuration is passed to the Java process as a JVM option, and read from the file on the path /opt/jboss/keycloak/standalone/configuration/logging.properties.
If you check the content of the file, it has a section like the following:
...
handler.CONSOLE=org.jboss.logmanager.handlers.ConsoleHandler
handler.CONSOLE.level=INFO
handler.CONSOLE.formatter=COLOR-PATTERN
handler.CONSOLE.properties=autoFlush,target,enabled
handler.CONSOLE.autoFlush=true
handler.CONSOLE.target=SYSTEM_OUT
handler.CONSOLE.enabled=true
...
You need to figure out what to change in this logging configuration to meet your JSON requirements. An example would be:
formatter.json=org.jboss.logmanager.formatters.JsonFormatter
formatter.json.properties=keyOverrides,exceptionOutputType,metaData,prettyPrint,printDetails,recordDelimiter
formatter.json.constructorProperties=keyOverrides
formatter.json.keyOverrides=timestamp\=#timestamp
formatter.json.exceptionOutputType=FORMATTED
formatter.json.metaData=#version\=1
formatter.json.prettyPrint=false
formatter.json.printDetails=false
formatter.json.recordDelimiter=\n
Then, in Kubernetes you can create a ConfigMap with the logging config that you want, define it as a volume in your pod/deployment, and mount it as a file to that exact path in the pod/deployment definition. If you do all steps correctly, you should be able to customize the logging format as you need.
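A minimal sketch of that wiring, assuming a ConfigMap named keycloak-logging that carries your customized logging.properties (the names and the pod-spec fragment are illustrative; adapt them to what the codecentric chart renders):
apiVersion: v1
kind: ConfigMap
metadata:
  name: keycloak-logging
data:
  logging.properties: |
    # full logging.properties content, including the JSON formatter above
    # and the console handler pointed at it, e.g. handler.CONSOLE.formatter=json
    ...
---
# fragment of the Keycloak pod spec
containers:
  - name: keycloak
    volumeMounts:
      - name: logging-config
        mountPath: /opt/jboss/keycloak/standalone/configuration/logging.properties
        subPath: logging.properties
volumes:
  - name: logging-config
    configMap:
      name: keycloak-logging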

How to deploy dockerhub or redhat container catalog (external registries) images in minishift

I'm studying OpenShift from an administration perspective and it is taking me a lot of effort to understand it. One thing I want to do is deploy well-known containers into it, but all I can do with Minishift is deploy the examples it provides or push a local build, which is not helpful for my purpose. I can't find a 'how to' for linking the Minishift CDK (the one from Red Hat) to external registries like Docker Hub or registry.access.redhat.com.
Any help will be really appreciated.
What I did was:
Connect to the Minishift Docker service
$ eval $(minishift docker-env)
Tried to pull from the Red Hat container catalog.
$ docker pull registry.access.redhat.com/openshift3/jenkins-2-rhel7:3.11.98-6
Trying to pull repository registry.access.redhat.com/openshift3/jenkins-2-rhel7 ...
error parsing HTTP 404 response body: invalid character 'F' looking for beginning of value: "File not found.\""
$ docker pull registry.redhat.io/openshift3/jenkins-2-rhel7:3.11.98-6
Trying to pull repository registry.redhat.io/openshift3/jenkins-2-rhel7 ...
unable to retrieve auth token: 401 unauthorized
$ docker login registry.redhat.io
Username: ********
Password:
WARNING! Your password will be stored unencrypted in /home/cesar.cabral/.docker/config.json.
Configure a credential helper to remove this warning. See https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
$ docker pull registry.redhat.io/openshift3/jenkins-2-rhel7:3.11.98-6
Trying to pull repository registry.redhat.io/openshift3/jenkins-2-rhel7 ...
error parsing HTTP 404 response body: invalid character 'F' looking for beginning of value: "File not found.\""
Tried the same with Docker Hub, no luck either.
$ docker pull hub.docker.com/nginx:1.17.0
Trying to pull repository hub.docker.com/nginx ...
Pulling repository hub.docker.com/nginx
invalid character '<' looking for beginning of value

Creating Openshift build from Dockerfile

I need to create an OpenShift build from a Docker image (Tomcat 9) which is extended through a Dockerfile. The Dockerfile adds an application to the webapps folder.
Here's my attempt:
oc new-build --name=runtime --docker-image="docker.io/tomcat:9.0-jre8" \
--source-image=tomcat:9.0-jre8 \
--source-image-path=/home/carla/build/example.war:. \
--dockerfile=$'FROM tomcat:9.0-jre8\nCOPY example.war /usr/local/tomcat/webapps/example.war'
I have placed the web application (example.war) in the path /home/carla/build and I need to copy it into the /usr/local/tomcat/webapps folder of the Docker image.
The error I get is:
error: BuildConfig "runtime" is invalid: spec.triggers: Invalid value:
. . .
multiple ImageChange triggers refer to the same image stream tag
--> Failed
I think the problem is related to the --source-image parameter; however, I cannot omit it, as I have specified --source-image-path.
Any idea how to fix it?
Thanks

Using environment properties with files in elastic beanstalk config files

Working with Elastic Beanstalk .config files is kinda... interesting. I'm trying to use environment properties with the files: configuration option in an Elastic Beanstalk .config file. What I'd like to do is something like:
files:
  "/etc/passwd-s3fs":
    mode: "000640"
    owner: root
    group: root
    content: |
      ${AWS_ACCESS_KEY_ID}:${AWS_SECRET_KEY}
To create an /etc/passwd-s3fs file with content something like:
ABAC73E92DEEWEDS3FG4E:aiDSuhr8eg4fHHGEMes44zdkIJD0wkmd
I.e. use the environment properties defined in the AWS Console (Elastic Beanstalk/Configuration/Software Configuration/Environment Properties) to initialize system configuration files and such.
I've found that it is possible to use environment properties in container_commands, like so:
container_commands:
  000-create-file:
    command: echo ${AWS_ACCESS_KEY_ID}:${AWS_SECRET_KEY} > /etc/passwd-s3fs
However, doing so requires me to manually set the owner, group, file permissions, etc. It's also much more of a hassle than the files: configuration option when dealing with larger configuration files...
Anyone got any tips on this?
How about something like this? I will use the word "context" for dev vs. qa.
Create one file per context:
dev-envvars
export MYAPP_IP_ADDR=111.222.0.1
export MYAPP_BUCKET=dev
qa-envvars
export MYAPP_IP_ADDR=111.222.1.1
export MYAPP_BUCKET=qa
Upload those files to a private S3 folder, S3://myapp/config.
In IAM, add a policy to the aws-elasticbeanstalk-ec2-role role that allows reading S3://myapp/config.
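The policy only needs read access to the bucket, for example something along these lines (an illustrative statement; scope the resource to your own bucket and prefix):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::myapp/*"
    }
  ]
}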
Add the following file to your .ebextensions directory:
envvars.config
files:
  "/opt/myapp_envvars":
    mode: "000644"
    owner: root
    group: root
    # change the source when you need a different context
    #source: https://s3-us-west-2.amazonaws.com/myapp/dev-envvars
    source: https://s3-us-west-2.amazonaws.com/myapp/qa-envvars

Resources:
  AWSEBAutoScalingGroup:
    Metadata:
      AWS::CloudFormation::Authentication:
        S3Access:
          type: S3
          roleName: aws-elasticbeanstalk-ec2-role
          buckets: myapp

commands:
  # commands execute after files, per
  # http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html
  10-load-env-vars:
    command: . /opt/myapp_envvars
Per the AWS Developer's Guide, commands "run before the application and web server are set up and the application version file is extracted," and before container_commands. I guess the question will be whether that is early enough in the boot process to make the environment variables available when you need them. I actually wound up writing an init.d script to start and stop things in my EC2 instance. I used the technique above to deploy the script.
Credit for the “Resources” section that allows downloading from secured S3 goes to the May 7, 2014 post that Joshua#AWS made to this thread.
I am gravedigging, but since I stumbled across this in the course of my travels: there is a "clever" way to do what you describe, at least as of 2018 (and at least since 2016). You can retrieve an environment variable by key with get-config:
/opt/elasticbeanstalk/bin/get-config environment --key YOUR_ENV_VAR_KEY
And likewise you can retrieve all environment variables (as JSON, or as YAML with --output YAML):
/opt/elasticbeanstalk/bin/get-config environment
Example usage in a container command:
container_commands:
  00_store_env_var_in_file_and_chmod:
    command: "/opt/elasticbeanstalk/bin/get-config environment --key YOUR_ENV_KEY | install -D /dev/stdin /etc/somefile && chmod 640 /etc/somefile"
Example usage in a file:
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/00_do_stuff.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/bin/bash
      YOUR_ENV_VAR=$(/opt/elasticbeanstalk/bin/get-config environment --key YOUR_ENV_VAR_KEY)
      echo "Hello $YOUR_ENV_VAR"
I was introduced to get-config by Thomas Reggi in https://serverfault.com/a/771067.
I assume that AWS_ACCESS_KEY_ID and AWS_SECRET_KEY are known to you prior to the app deployment.
You can create the file on your workstation and ship it to the Elastic Beanstalk instance together with your code when you run $ git aws.push:
$ cd .ebextensions
$ echo 'ABAC73E92DEEWEDS3FG4E:aiDSuhr8eg4fHHGEMes44zdkIJD0wkmd' > passwd-s3fs
In .config:
files:
  "/etc/passwd-s3fs":
    mode: "000640"
    owner: root
    group: root

container_commands:
  10-copy-passwords-file:
    command: "cat .ebextensions/passwd-s3fs > /etc/passwd-s3fs"
You might have to play with the permissions or execute cat with sudo. Also, I put the file into .ebextensions as an example; it can be anywhere in your project.
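For example, if the copied file ends up with the wrong owner or mode, a follow-up command can adjust it (illustrative):
container_commands:
  10-copy-passwords-file:
    command: "cat .ebextensions/passwd-s3fs > /etc/passwd-s3fs"
  11-fix-permissions:
    # illustrative: tighten ownership and permissions after the copy
    command: "chown root:root /etc/passwd-s3fs && chmod 640 /etc/passwd-s3fs"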
Hope it helps.