Neo4j - CSV import - Neo.ClientError.Statement.ExternalResourceFailed

Sorry if this is a redundant post, but none of the existing posts resolve my problem.
My goal
Import a CSV file into Neo4j.
My Problem
Neo.ClientError.Statement.ExternalResourceFailed
Couldn't load the external resource at: file:/AFG-ADM1.csv ()
My Environment
I work on my own computer and installed Neo4j with a docker-compose file, as follows:
version: "3.1"
services:
neo4j:
image: neo4j:5.3.0
restart: unless-stopped
container_name: neo4j
# The ports that will be accessible from outside the container - HTTP (7474) and Bolt (7687).
ports:
- "7474:7474"
- "7687:7687"
# Uncomment the volumes to be mounted to make them accessible from outside the container.
volumes:
- ~/Documents/neo4j/neo4j.conf:/conf/neo4j.conf
- ~/Documents/neo4j/data:/var/lib/neo4j/data
- ~/Documents/neo4j/logs:/var/lib/neo4j/logs
- ~/Documents/neo4j/conf:/var/lib/neo4j/conf
- ~/Documents/neo4j/import:/var/lib/neo4j/import
# - ./metrics/server1:/var/lib/neo4j/metrics
# - ./licenses/server1:/var/lib/neo4j/licenses
# - ./ssl/server1:/var/lib/neo4j/ssl
environment:
- NEO4J_ACCEPT_LICENSE_AGREEMENT=yes
- NEO4J_AUTH=neo4j/password
- EXTENDED_CONF=yes
- NEO4J_EDITION=docker_compose
- NEO4J_initial_server_mode__constraint=PRIMARY
healthcheck:
test: ["CMD-SHELL", "wget --no-verbose --tries=1 --spider localhost:7474 || exit 1"]
user: 1000:1000
In my "~/Documents/neo4j/import" directory, I have the file "AFG-ADM1.csv".
To be sure, I also looked inside the container, and the file is present there.
My neo4j config file, neo4j.conf
# Setting that specifies how much memory Neo4j is allowed to use for the page cache.
server.memory.pagecache.size=100M
# Setting that specifies the initial JVM heap size.
server.memory.heap.initial_size=100M
# Enable server-side routing
dbms.routing.enabled=true
# Use server-side routing for neo4j:// protocol connections.
dbms.routing.default_router=SERVER
server.config.strict_validation.enabled=false
dbms.max_databases=$(my_setting.bat)
#dbms.security.allow_csv_import_from_file_urls=true
I tried both with and without setting dbms.security.allow_csv_import_from_file_urls to true.
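For reference, the same toggle can also be set from the compose file instead of neo4j.conf. A minimal sketch, assuming Neo4j's documented env-var naming for its Docker image (NEO4J_ prefix, '.' written as '_', literal '_' written as '__'):

environment:
  # Hypothetical alternative to the commented-out neo4j.conf line above.
  - NEO4J_dbms_security_allow__csv__import__from__file__urls=true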
My cmd
LOAD CSV WITH HEADERS FROM "file:///AFG-ADM1.csv" AS csvLine
CREATE (p:Pipeline {name: csvLine.name, level: csvLine.Level, shapeGroup: csvLine.shapeGroup})
And I get the ExternalResourceFailed error shown above.

Related

Next cloud and mysql setup: authentication method unknown to the client

Using the Rancher GUI, I'm trying to set up Nextcloud with MySQL database workloads on my AKS cluster. In the environment variables, I have already defined the admin user and password, so why do I get this error on the create admin page?
Error while trying to create admin user: Failed to connect to the
database: An exception occurred in driver: SQLSTATE[HY000] [2054] The
server requested authentication method unknown to the client
I have entered the username and password correctly multiple times.
Below are my configurations for the database and nextcloud so far.
database workload:
Name: nextdb
Docker image: mysql
port: not set
I have the following variables:
MYSQL_ROOT_PASSWORD=rootpassX
MYSQL_DATABASE=nextDB
MYSQL_USER=nextcloud
MYSQL_PASSWORD=passX
volumes configuration:
Volume Type: Bind-Mount
Volume Name: nextdb
Path on the Node : /nextdb
The Path on the Node must be: a directory or create
Mount Point: /var/lib/mysql
nextcloud workload:
Name: nextcloud
Docker Image: nextcloud
Port Mapping:
Port Name : nextcloud80
Publish the container port: 80
Protocol: TCP
As a: Layer-4 load balancer
On listening port: 80
Environment variables:
MYSQL_DATABASE=nextDB
MYSQL_USER=nextcloud
MYSQL_PASSWORD=passX
MYSQL_HOST=nextdb
NEXTCLOUD_ADMIN_USER=admin
NEXTCLOUD_ADMIN_PASSWORD=adminPass
NEXTCLOUD_DATA_DIR=/var/www/html/nextcloud
Volumes:
Volume 1:
name: nextcloud
Volume Type: Bind-Mount
Path on the Node: /nextcloud
The Path on the Node must be: a directory or create.
Mount Point: /var/www/html
Volume 2
name: nextdb
Volume Type: Bind-Mount
Path on the Node: /nextdatabase
The Path on the Node must be: a directory or create.
Mount Point: /var/lib/mysql
What are the problems with my configurations?
Starting with version 8.0, MySQL changed the default authentication method for client connections. To revert to the older authentication method, you need to explicitly specify the default authentication plugin.
If you are able to update your DB service in Rancher to pass the container argument --default-authentication-plugin=mysql_native_password, that should revert MySQL to the older auth method (a compose-style sketch of this follows below).
Alternatively, depending on the MySQL image you are using, you can create a new Docker image from that base which replaces /etc/mysql/my.cnf inside the container. Inspect /etc/mysql/my.cnf before you replace it; if there are any !includedir directives in the config file, you can instead place your supplemental configuration into an included folder, using whatever filename you choose.
The supplemental configuration should look like this:
[mysqld]
default_authentication_plugin=mysql_native_password
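If your stack is driven by a compose file rather than the Rancher GUI, the same container argument can be passed as a command override. A minimal sketch, reusing the service name and credentials from the question (the exact layout is illustrative):

services:
  nextdb:
    image: mysql:8
    # Sketch: revert MySQL 8 to the legacy auth plugin via a container argument.
    command: --default-authentication-plugin=mysql_native_password
    environment:
      - MYSQL_ROOT_PASSWORD=rootpassX
      - MYSQL_DATABASE=nextDB
      - MYSQL_USER=nextcloud
      - MYSQL_PASSWORD=passX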

docker-compose cannot wait for mysql database

I am having real problems getting a docker-compose script to initiate a MySQL database and a Django project, while making the Django project wait until the MySQL database is ready.
I have two files, a Dockerfile and a docker-compose.yml, which I have copied below.
When I run the docker-compose.yml and check the logs of the web container, it says that it cannot connect to the database mydb. However, the second time I run it (without clearing the containers and images), it connects properly and the Django app works.
I have spent a whole day trying a number of things, such as scripts and health checks, but I cannot get it to work.
Dockerfile
FROM python:3.6
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
COPY ./ /code/
RUN pip install -r requirements.txt
RUN python manage.py collectstatic --noinput
docker-compose.yml
version: '3'
services:
  mydb:
    environment:
      - MYSQL_ROOT_PASSWORD=password
      - MYSQL_USER=django
      - MYSQL_PASSWORD=secret
      - MYSQL_DATABASE=dbMarksWebsite
    image: mysql:5.7
    ports:
      # Map default mysql port 3306 to 3308 on outside so that I can connect
      # to mysql using workbench localhost with port 3308
      - "3308:3306"
  web:
    environment:
      - DJANGO_DEBUG=1
      - DOCKER_PASSWORD=secret
      - DOCKER_USER=django
      - DOCKER_DB=dbMarksWebsite
      - DOCKER_HOST=mydb
      - DOCKER_PORT=3306
    build: .
    command: >
      sh -c "sleep 10 &&
             python manage.py migrate &&
             python manage.py loaddata myprojects_testdata.json &&
             python manage.py runserver 0.0.0.0:8080"
    ports:
      - "8080:8080"
    depends_on:
      - mydb
First run (with no existing images or containers):
...
File "/usr/local/lib/python3.6/site-packages/MySQLdb/__init__.py", line 84, in Connect
return Connection(*args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/MySQLdb/connections.py", line 179, in __init__
super(Connection, self).__init__(*args, **kwargs2)
django.db.utils.OperationalError: (2002, "Can't connect to MySQL server on 'mydb' (115)")
Second run:
System check identified no issues (0 silenced).
March 27, 2020 - 16:44:57
Django version 2.2.11, using settings 'ebdjango.settings'
Starting development server at http://0.0.0.0:8080/
Quit the server with CONTROL-C.
I solved it using the following function in my entrypoint.sh:
function wait_for_db()
{
    while ! ./manage.py sqlflush > /dev/null 2>&1; do
        echo "Waiting for the db to be ready."
        sleep 1
    done
}
For anybody who is interested, I found a solution to this:
1 - I wrote a python script that tries to connect to the database every second, but with a timeout. I set this timeout quite high, at 60 seconds, and this seems to work on my computer.
2 - I added the wait command to my compose file.
It should mean that I can bring up a set of test containers for my website, where I can specify the exact version of Python and MySQL used.
The relevant files are listed below:
Dockerfile:
FROM python:3.6
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
COPY ./ /code/
RUN pip install -r requirements.txt
RUN python manage.py collectstatic --noinput
docker-compose.yml
version: '3'
services:
  mydb:
    container_name: mydb
    environment:
      - MYSQL_ROOT_PASSWORD=password
      - MYSQL_USER=django
      - MYSQL_PASSWORD=secret
      - MYSQL_DATABASE=dbMarksWebsite
    image: mysql:5.7
    ports:
      # Map default mysql port 3306 to 3308 on outside so that I can connect
      # to mysql using workbench localhost with port 3308
      - "3308:3306"
  web:
    container_name: web
    environment:
      - DJANGO_DEBUG=1
      - DOCKER_PASSWORD=secret
      - DOCKER_USER=django
      - DOCKER_DB=dbMarksWebsite
      - DOCKER_HOST=mydb
      - DOCKER_PORT=3306
    build: .
    command: >
      sh -c "python ./bin/wait-for.py mydb 3306 django secret dbMarksWebsite 60 &&
             python manage.py migrate &&
             python manage.py loaddata myprojects_testdata.json &&
             python manage.py runserver 0.0.0.0:8080"
    ports:
      - "8080:8080"
    depends_on:
      - mydb
wait-for.py
'''
I don't like adding this in here, but I cannot get the typical wait-for scripts
to work with MySQL database in docker, so I have written a python script that
either times out after ? seconds or successfully connects to the database
The input arguments for the script need to be:
HOST, PORT, USERNAME, PASSWORD, DATABASE, TIMEOUT
'''
import sys
import time

import pymysql


def readCommandLineArgument():
    '''
    Validate the number of command line input arguments and return them.
    '''
    # Get arguments
    if len(sys.argv) != 7:
        raise ValueError("You must pass in 6 arguments: HOST, PORT, USERNAME, PASSWORD, DATABASE, TIMEOUT")
    # Return the arguments as a tuple
    return (sys.argv[1], sys.argv[2], sys.argv[3], sys.argv[4], sys.argv[5], sys.argv[6])


def connectToDB(HOST, PORT, USERNAME, PASSWORD, DATABASE):
    '''
    For now, just try to connect to the database.
    '''
    con = pymysql.connect(host=HOST, port=PORT, user=USERNAME, password=PASSWORD, database=DATABASE)
    with con:
        cur = con.cursor()
        cur.execute("SELECT VERSION()")


def runDelay():
    '''
    I don't like passing passwords in, but this is only used for a test docker
    delay script.
    '''
    # Get the database connection characteristics.
    (HOST, PORT, USERNAME, PASSWORD, DATABASE, TIMEOUT) = readCommandLineArgument()
    # Ensure timeout is an integer greater than zero, otherwise use 60 secs as default
    try:
        TIMEOUT = int(TIMEOUT)
        if TIMEOUT <= 0:
            raise ValueError("Timeout needs to be > 0")
    except ValueError:
        TIMEOUT = 60
    # Ensure port is an integer greater than zero, otherwise use 3306 as default
    try:
        PORT = int(PORT)
        if PORT <= 0:
            raise ValueError("Port needs to be > 0")
    except ValueError:
        PORT = 3306
    # Try to connect to the database TIMEOUT times
    for i in range(0, TIMEOUT):
        try:
            # Try to connect to db
            connectToDB(HOST, PORT, USERNAME, PASSWORD, DATABASE)
            # If an error hasn't been raised, then exit
            return True
        except Exception as Ex:
            print(Ex.args)
            # Sleep for 1 second
            time.sleep(1)
    # If I get here, assume a timeout has occurred
    raise TimeoutError("Timed out waiting for the database")


if __name__ == "__main__":
    runDelay()
For testing/development purposes, you could use a version of the MySQL image that has health checks (I believe there's a healthcheck/mysql image), or configure your own (see example here: Docker-compose check if mysql connection is ready).
For production use, you don't want to upgrade the database schema on startup, nor do you want to assume the database is up. Upgrading the schema automatically encourages you not to think about what happens when you deploy a bug and need to roll back, and parallel schema upgrades won't work. Longer version: https://pythonspeed.com/articles/schema-migrations-server-startup/
Another option is to use a script to control the startup order and wrap the web service's command. The docker-compose documentation recommends "wait-for-it" as one such tool, but others exist.
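As a hedged sketch of the health-check route mentioned above (mysqladmin ping is a common readiness test; depends_on conditions require a Compose version that supports them):

services:
  mydb:
    image: mysql:5.7
    # Sketch: report healthy only once the server answers a ping.
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 5s
      timeout: 3s
      retries: 12
  web:
    build: .
    # Start the web container only after mydb's healthcheck passes.
    depends_on:
      mydb:
        condition: service_healthy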

Getting error with api-platform while trying to connect database

I installed MySQL 8 on a Mac and created an empty database to continue with the API Platform tutorial, but I cannot establish a connection. I ran
bin/console doctrine:database:create
just to check whether the DB had already been created or not, and I get this error:
In AbstractMySQLDriver.php line 106:
An exception occurred in driver: SQLSTATE[HY000] [2054] The server requested authentication method unknown to the client
In PDOConnection.php line 31:
SQLSTATE[HY000] [2054] The server requested authentication method unknown to the client
In PDOConnection.php line 27:
SQLSTATE[HY000] [2054] The server requested authentication method unknown to the client
In PDOConnection.php line 27:
PDO::__construct(): The server requested authentication method unknown to the client [caching_sha2_password]
I just edited my env file, which is:
# In all environments, the following files are loaded if they exist,
# the latter taking precedence over the former:
#
# * .env contains default values for the environment variables needed by the app
# * .env.local uncommitted file with local overrides
# * .env.$APP_ENV committed environment-specific defaults
# * .env.$APP_ENV.local uncommitted environment-specific overrides
#
# Real environment variables win over .env files.
#
# DO NOT DEFINE PRODUCTION SECRETS IN THIS FILE NOR IN ANY OTHER COMMITTED FILES.
#
# Run "composer dump-env prod" to compile .env files for production use (requires symfony/flex >=1.2).
# https://symfony.com/doc/current/best_practices.html#use-environment-variables-for-infrastructure-configuration
###> symfony/framework-bundle ###
APP_ENV=dev
APP_SECRET=bcd5845fdd72a47f771c43b27daf2fb2
#TRUSTED_PROXIES=127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16
#TRUSTED_HOSTS='^localhost|example\.com$'
###< symfony/framework-bundle ###
###> symfony/mailer ###
# MAILER_DSN=smtp://localhost
###< symfony/mailer ###
###> doctrine/doctrine-bundle ###
# Format described at https://www.doctrine-project.org/projects/doctrine-dbal/en/latest/reference/configuration.html#connecting-using-a-url
# For an SQLite database, use: "sqlite:///%kernel.project_dir%/var/data.db"
# For a PostgreSQL database, use: "postgresql://db_user:db_password@127.0.0.1:5432/db_name?serverVersion=11&charset=utf8"
# IMPORTANT: You MUST configure your server version, either here or in config/packages/doctrine.yaml
DATABASE_URL=mysql://root:a123.s.d@localhost:3306/book_api
###< doctrine/doctrine-bundle ###
###> nelmio/cors-bundle ###
CORS_ALLOW_ORIGIN=^https?://(localhost|127\.0\.0\.1)(:[0-9]+)?$
###< nelmio/cors-bundle ###
I researched the MySQL connection problem: MySQL 8 has a different default authentication/password format, which is why the client cannot establish a connection with the API. I could not find a definitive solution, so I solved the problem by reverting MySQL 8 to the MySQL 5.7-style authentication method.
Here is what I did:
docker-compose.yml
db:
  image: mysql:8
  command: mysqld --default-authentication-plugin=mysql_native_password
  restart: always
  environment:
    - MYSQL_DATABASE=api
    - MYSQL_USER=api-platform
    # You should definitely change the password in production
    - MYSQL_ROOT_PASSWORD=root
    - MYSQL_PASSWORD=!ChangeMe!
  volumes:
    - ./api/docker/db/data:/var/lib/mysql
  ports:
    - target: 3306
      published: 3306
      protocol: tcp
api/.env
DATABASE_URL=mysql://api-platform:!ChangeMe!@db:3306/api?server_version=8
api/Dockerfile
Change pdo_pgsql to pdo_mysql.
Adding this to the database service should solve it:
command: --default-authentication-plugin=mysql_native_password
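In a compose file, that line goes under the database service definition. A minimal sketch with an illustrative service name (it mirrors the compose file above):

services:
  db:
    image: mysql:8
    # Sketch: force the legacy authentication plugin.
    command: --default-authentication-plugin=mysql_native_password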

Galera cluster nodes do not trigger wsrep_notify_cmd and wsrep_sst_method

I set up a Galera cluster with 3 nodes using 3 docker containers. There is a requirement that when data is synchronized from the donor node to the other nodes, the data also needs to be populated to the corresponding Redis queue on each synchronized node, driven by that node's wsrep_notify_cmd trigger or wsrep_sst_method trigger.
The problem is that these 2 triggers are only invoked when I start the cluster. There are log entries saying the 2 triggers were invoked when a node joined the cluster, but when I modify the schema or perform CUD actions on one node, the triggers are not fired on the other nodes.
I don't know if I did the configuration incorrectly or if this is simply not the way these triggers work.
Below are the files I use to make the cluster work.
docker-compose.yml
version: '3'
services:
  node1:
    build: ./galera/
    image: galera_mariadb:latest
    container_name: "galera_cluster_node1"
    hostname: node1
    ports:
      - 13306:3306
    networks:
      - galera_cluster
    volumes:
      - ./galera/conf.d/galera.cnf:/etc/mysql/conf.d/galera.cnf
      - /var/data/galera/mysql/node1:/var/lib/mysql/
      # ./galera/scripts contains the bash script which is executed by the wsrep_notify_cmd trigger
      - ./galera/scripts/:/etc/mysql/scripts/
    environment:
      - MYSQL_ROOT_PASSWORD=123
      - REPLICATION_PASSWORD=123
      - MYSQL_DATABASE=test_db
      - MYSQL_USER=maria
      - MYSQL_PASSWORD=123
      - GALERA=On
      - NODE_NAME=node1
      - CLUSTER_NAME=maria_cluster
      - CLUSTER_ADDRESS=gcomm://
    command: --wsrep-new-cluster
  node2:
    image: galera_mariadb:latest
    container_name: "galera_cluster_node2"
    hostname: node2
    links:
      - node1
    ports:
      - 23306:3306
    networks:
      - galera_cluster
    volumes:
      - ./galera/conf.d/galera.cnf:/etc/mysql/conf.d/galera.cnf
      - /var/data/galera/mysql/node2:/var/lib/mysql/
      - ./galera/scripts/:/etc/mysql/scripts/
    environment:
      - REPLICATION_PASSWORD=123
      - GALERA=On
      - NODE_NAME=node2
      - CLUSTER_NAME=maria_cluster
      - CLUSTER_ADDRESS=gcomm://node1
  node3:
    image: galera_mariadb:latest
    container_name: "galera_cluster_node3"
    hostname: node3
    links:
      - node1
    ports:
      - 33306:3306
    networks:
      - galera_cluster
    volumes:
      - ./galera/conf.d/galera.cnf:/etc/mysql/conf.d/galera.cnf
      - /var/data/galera/mysql/node3:/var/lib/mysql/
      - ./galera/scripts/:/etc/mysql/scripts/
    environment:
      - REPLICATION_PASSWORD=123
      - GALERA=On
      - NODE_NAME=node3
      - CLUSTER_NAME=maria_cluster
      - CLUSTER_ADDRESS=gcomm://node1
networks:
  galera_cluster:
    driver: bridge
The Dockerfile used to build the 3 Galera cluster nodes:
# Galera Cluster Dockerfile
FROM hauptmedia/mariadb:10.1
RUN apt-get update \
&& apt-get -y install \
vim \
python \
redis-tools
# remove the default galera.cnf in the original image
RUN rm -rf /etc/mysql/conf.d/galera.cnf
# add the custom galera.cnf
COPY ./conf.d/galera.cnf /etc/mysql/conf.d/galera.cnf
# grant access and execution right
RUN chmod 755 /etc/mysql/conf.d/galera.cnf
galera.cnf file
[galera]
wsrep_on=ON
# wsrep only supports binlog_format='ROW' and storage-engine=innodb
binlog_format=row
default_storage_engine=InnoDB
# to avoid issues with 'bulk mode inserts' using autoinc
innodb_autoinc_lock_mode=2
bind-address=0.0.0.0
# relax flushing logs when running in galera mode
innodb_flush_log_at_trx_commit=0
sync_binlog=0
# Query Cache is supported since version 10.0.14 with wsrep
query_cache_size=8000000
query_cache_type=1
wsrep_provider=/usr/lib/galera/libgalera_smm.so
# use the built-in method to manage State Snapshot Transfers
# we can customize this script to perform a specific logic
wsrep_sst_method=xtrabackup-v2
# This bash is volumed from the host which is used to populate synchronized data to the Redis queue
wsrep_notify_cmd=/etc/mysql/scripts/wsrep_notify.sh
# force transaction level to be read committed
#transaction-isolation = READ-COMMITTED
wsrep_notify.sh
#!/bin/sh -eu

wsrep_log()
{
    # echo everything to stderr so that it gets into common error log
    # deliberately made to look different from the rest of the log
    local readonly tst="$(date +%Y%m%d\ %H:%M:%S.%N | cut -b -21)"
    echo "WSREP_SST: $* ($tst)" >&2
}

wsrep_log_info()
{
    wsrep_log "[INFO] $*"
}

STATUS=""
CLUSTER_UUID=""
PRIMARY=""
MEMBERS=""
INDEX=""

while [ $# -gt 0 ]
do
    case $1 in
        --status)
            STATUS=$2
            shift
            ;;
        --uuid)
            CLUSTER_UUID=$2
            shift
            ;;
        --primary)
            PRIMARY=$2
            shift
            ;;
        --index)
            INDEX=$2
            shift
            ;;
        --members)
            MEMBERS=$2
            shift
            ;;
    esac
    shift
done

wsrep_log_info "--status $STATUS --uuid $CLUSTER_UUID --primary $PRIMARY --members $MEMBERS --index $INDEX"
Here are the log files of the 3 nodes:
node1:
https://drive.google.com/file/d/0B2q2F62RQxVjbkRaQlFrV2NyYnc/view?usp=sharing
node2:
https://drive.google.com/file/d/0B2q2F62RQxVjX3hYZHBpQ2FRV0U/view?usp=sharing
node3:
https://drive.google.com/file/d/0B2q2F62RQxVjelZHQTN3ZDRNZ0k/view?usp=sharing
I have been googling this issue but with no luck. I hope anyone who has experience setting up a Galera Cluster can help me resolve it, or can show me another approach that meets the requirement. Thanks a lot.
wsrep_notify_cmd
Defines the command the node runs whenever cluster membership or the
state of the node changes.
So, the script is run on a node whenever that node changes from one of the statuses described below to any other status.
The possible statuses are:
Undefined - The node has just started up and is not connected to any Primary Component.
Joiner - The node is connected to a primary component and is now receiving a state snapshot.
Donor - The node is connected to a primary component and is now sending a state snapshot.
Joined - The node has a complete state and is now catching up with the cluster.
Synced - The node has synchronized itself with the cluster.
Error (if available) - The node is in an error state.
As you can see, the script is notified when nodes start and change their statuses. It is not notified when data merely replicates between Galera cluster nodes.

How to use a unique value in a Kubernetes ConfigMap

Problem
I have a monitoring application that I want to deploy as a DaemonSet. In the application's configuration, a unique user agent is specified to distinguish the node from other nodes. I created a ConfigMap for the application, but this only works for synchronizing the other, node-independent settings in the environment.
Ideal solution?
I want to specify a unique value, like the node's hostname or another locally-inferred value, to use as the user agent string. Is there a way to pull this information from the system and have Kubernetes populate the desired key with that value (like the hostname)?
Does this make sense, or is there a better way to do it? I was looking through the documentation, but I could not find an answer anywhere for this specific question.
As an example, here's the string in the app config that I have now, versus what I want to use.
user_agent = "app-k8s-test"
But I'd prefer…
user_agent = $HOSTNAME
Is something like this possible?
You can use an init container to preprocess a config template from a config map. The preprocessing step can inject local variables into the config files. The expanded config is written to an emptyDir shared between the init container and the main application container. Here is an example of how to do it.
First, make a config map with a placeholder for whatever fields you want to expand. I used sed and an ad-hoc token to replace. You can also get fancy and use jinja2 or whatever you like; just put whatever pre-processor you want into the init container image. You can use whatever file format you want for the config file(s). I just used TOML here to show it doesn't have to be YAML. I called it ".tpl" because it is not ready to use: it has a string, _HOSTNAME_, that needs to be expanded.
$ cat config.toml.tpl
[blah]
blah=_HOSTNAME_
otherkey=othervalue
$ kubectl create configmap cm --from-file=config.toml.tpl
configmap "cm" created
Now write a pod with an init container that mounts the config map in a volume, expands it, and writes the result to another volume shared with the main container:
$ cat personalized-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod-5
  labels:
    app: myapp
spec:
  containers:
  - name: myapp-container
    image: busybox
    command: ['sh', '-c', 'echo The app is running and my config-map is && cat /etc/config/config.toml && sleep 3600']
    volumeMounts:
    - name: config-volume
      mountPath: /etc/config
  initContainers:
  - name: expander
    image: busybox
    command: ['sh', '-c', 'cat /etc/config-templates/config.toml.tpl | sed "s/_HOSTNAME_/$MY_NODE_NAME/" > /etc/config/config.toml']
    volumeMounts:
    - name: config-tpl-volume
      mountPath: /etc/config-templates
    - name: config-volume
      mountPath: /etc/config
    env:
    - name: MY_NODE_NAME
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName
  volumes:
  - name: config-tpl-volume
    configMap:
      name: cm
  - name: config-volume
    emptyDir: {}
$ kubectl create -f personalized-pod.yaml
$ sleep 10
$ kubectl logs myapp-pod-5
The app is running and my config-map is
[blah]
blah=gke-k0-default-pool-93916cec-p1p6
otherkey=othervalue
I made this a bare pod as an example. You can embed this type of pod in a DaemonSet's pod template, as sketched below.
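A hedged sketch of that embedding (the DaemonSet name is hypothetical; the template reuses the pod spec from above):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: myapp-daemonset
spec:
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp-container
        image: busybox
        command: ['sh', '-c', 'cat /etc/config/config.toml && sleep 3600']
        volumeMounts:
        - name: config-volume
          mountPath: /etc/config
      initContainers:
      - name: expander
        image: busybox
        command: ['sh', '-c', 'cat /etc/config-templates/config.toml.tpl | sed "s/_HOSTNAME_/$MY_NODE_NAME/" > /etc/config/config.toml']
        volumeMounts:
        - name: config-tpl-volume
          mountPath: /etc/config-templates
        - name: config-volume
          mountPath: /etc/config
        env:
        - name: MY_NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
      volumes:
      - name: config-tpl-volume
        configMap:
          name: cm
      - name: config-volume
        emptyDir: {}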
Here, the Downward API is used to set the MY_NODE_NAME environment variable, since the node name is not otherwise readily available from within a container.
Note that, for some reason, you can't get spec.nodeName into a file, just into an env var.
If you just need the hostname in an env var, then you can skip the init container; a minimal sketch follows.
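For that simpler case, a minimal standalone sketch (pod and container names are hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: hostname-env-demo
spec:
  containers:
  - name: app
    image: busybox
    # Sketch: the Downward API injects the node name directly; no init container.
    command: ['sh', '-c', 'echo "user_agent=$MY_NODE_NAME" && sleep 3600']
    env:
    - name: MY_NODE_NAME
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName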
Since the init container only runs once, you should not update the ConfigMap and expect the config to be re-expanded. If you need updates, you can do one of two things:
Instead of an init container, run a sidecar that watches the config map volume and re-expands the template when it changes (or simply does so periodically). This requires that the main container also knows how to watch for config file updates.
Alternatively, make a new config map each time the config template changes, and edit the DaemonSet to change the one line that names the config map, then do a rolling update to pick up the new config, as sketched below.
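A hedged sketch of that one-line change, assuming hypothetical versioned names like cm-v1 and cm-v2 in the DaemonSet's pod template:

volumes:
- name: config-tpl-volume
  configMap:
    # Bumping this name (cm-v1 -> cm-v2) changes the pod template and so
    # triggers a rolling update that picks up the new config.
    name: cm-v2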