mosquitto and mosquitto-auth-plugin giving error mosquitto_auth_acl_check "client id not available"

I am using mosquitto ver 1.6.9 and the Mosquitto Auth Plugin to secure my mosquitto setup.
I had been doing the same on a previous machine running Ubuntu 16 and it was all OK. I upgraded that machine to a newer Ubuntu release and had to rebuild the auth plugin, as libmysqlclient.so had changed from version 18 to 20.
Now when I run mosquitto with the auth plugin, my client devices and apps can connect, but the ACL checks fail and I get the following logs:
1585414235: |-- mosquitto_auth_unpwd_check(username_9786)
1585414235: |-- ** checking backend mysql
1585414235: |-- getuser(username_9786) AUTHENTICATED=1 by mysql
1585414235: |-- mosquitto_auth_acl_check(..., client id not available, username_9786, /iremote/username_9786, MOSQ_ACL_WRITE)
1585414235: |-- aclcheck(username_9786, /iremote/username_9786, 4) AUTHORIZED=0 by none
1585414235: |-- Cached [0F9BF3164FBDC47DC745F5C7EFB055979FB61134] for (client id not available,username_9786,4)
1585414235: |-- mosquitto_auth_acl_check(..., client id not available, username_9786, /iremote/username_9786, MOSQ_ACL_WRITE)
1585414235: |-- aclcheck(username_9786, /iremote/username_9786, 4) CACHEDAUTH: 17
1585414235: |-- mosquitto_auth_acl_check(..., client id not available, username_9786, /iremote/username_9786, MOSQ_ACL_WRITE)
1585414235: |-- aclcheck(username_9786, /iremote/username_9786, 4) CACHEDAUTH: 17
1585414236: |-- mosquitto_auth_acl_check(..., client id not available, username_9786, dev/udevls, MOSQ_ACL_WRITE)
I guess the "client id not available" is the issue, but I am clueless at this point.
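For reference, a minimal sketch of the MySQL backend options from the mosquitto-auth-plug README (the paths, credentials, and table layout below are assumptions, not the actual setup). The AUTHORIZED=0 by none line means the aclquery returned no row matching the topic; note also that if stored ACL topics rely on the %c client-id placeholder, they cannot match while the broker reports the client id as not available:
auth_plugin /etc/mosquitto/auth-plug.so
auth_opt_backends mysql
auth_opt_host localhost
auth_opt_dbname mqtt
auth_opt_user mqttuser
auth_opt_pass mqttpass
# returns the stored password hash for a user
auth_opt_userquery SELECT pw FROM users WHERE username = '%s'
# returns the topics a user may access; %d is the numeric access level (4 in the log above)
auth_opt_aclquery SELECT topic FROM acls WHERE (username = '%s') AND (rw >= %d)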

Podman Non-Root "Error setting up pivot dir"

First time posting on StackOverflow so please be gentle!
I'm setting up a new RHEL8 server to run Podman. Previously I've done this on a pretty vanilla server, but this one is set up in line with our corporate image, which means a home directory mounted over NFS.
When I try a simple podman command such as podman run centos, I get a couple of errors (see below). According to https://github.com/containers/podman/blob/main/rootless.md, Podman non-root is known to have problems with NFS homedirs.
Output from podman run centos (and others):
❯ podman run centos
Resolved "centos" as an alias (/etc/containers/registries.conf.d/000-shortnames.conf)
Trying to pull quay.io/centos/centos:latest...
Getting image source signatures
Copying blob 7a0437f04f83 done
Error: writing blob: adding layer with blob "sha256:7a0437f04f83f084b7ed68ad9c4a4947e12fc4e1b006b38129bac89114ec3621": Error processing tar file(exit status 1): Error setting up pivot dir: mkdir /home/me/.local/share/containers/storage/overlay/2653d992f4ef2bfd27f94db643815aa567240c37732cae1405ad1c1309ee9859/diff/.pivot_root926823499: permission denied
No, my username isn't really 'me'
Is there a way to use Podman non-root in this setup? I'd prefer to avoid creating a local user account to run things under: this is my dev server, not where the application will actually run, but it will involve me building, running, and destroying containers regularly, so I'd rather avoid having to do anything 'clever'.
Output of podman info:
❯ podman info
host:
  arch: amd64
  buildahVersion: 1.23.1
  cgroupControllers: []
  cgroupManager: cgroupfs
  cgroupVersion: v1
  conmon:
    package: conmon-2.0.32-1.module+el8.5.0+13852+150547f7.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.32, commit: 4b12bce835c3f8acc006a43620dd955a6a73bae0'
  cpus: 1
  distribution:
    distribution: '"rhel"'
    version: "8.5"
  eventLogger: file
  hostname: servername
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 2000
      size: 1
    uidmap:
    - container_id: 0
      host_id: 10279927
      size: 1
  kernel: 4.18.0-348.12.2.el8_5.x86_64
  linkmode: dynamic
  logDriver: k8s-file
  memFree: 1881419776
  memTotal: 3918233600
  ociRuntime:
    name: runc
    package: runc-1.0.3-1.module+el8.5.0+13556+7f055e70.x86_64
    path: /usr/bin/runc
    version: |-
      runc version 1.0.3
      spec: 1.0.2-dev
      go: go1.16.7
      libseccomp: 2.5.1
  os: linux
  remoteSocket:
    path: /run/user/10279927/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_NET_RAW,CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.1.8-1.module+el8.5.0+12582+56d94c81.x86_64
    version: |-
      slirp4netns version 1.1.8
      commit: d361001f495417b880f20329121e3aa431a8f90f
      libslirp: 4.4.0
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.5.1
  swapFree: 4294963200
  swapTotal: 4294963200
  uptime: 2h 45m 20.28s (Approximately 0.08 days)
plugins:
  log:
  - k8s-file
  - none
  - journald
  network:
  - bridge
  - macvlan
  volume:
  - local
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - registry.centos.org
  - docker.io
store:
  configFile: /home/me/.config/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: overlay
  graphOptions:
    overlay.mount_program:
      Executable: /usr/bin/fuse-overlayfs
      Package: fuse-overlayfs-1.8-1.module+el8.5.0+13754+92ec836b.x86_64
      Version: |-
        fusermount3 version: 3.2.1
        fuse-overlayfs: version 1.8
        FUSE library version 3.2.1
        using FUSE kernel interface version 7.26
  graphRoot: /home/me/.local/share/containers/storage
  graphStatus:
    Backing Filesystem: nfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "false"
  imageStore:
    number: 0
  runRoot: /run/user/10279927/containers
  volumePath: /home/me/.local/share/containers/storage/volumes
version:
  APIVersion: 3.4.2
  Built: 1642068949
  BuiltTime: Thu Jan 13 10:15:49 2022
  GitCommit: ""
  GoVersion: go1.16.7
  OsArch: linux/amd64
  Version: 3.4.2
Thank you!
Based on this article, https://www.redhat.com/sysadmin/rootless-podman-nfs, Podman and NFS home directories don't mix well together.
This is worked around by changing the graphroot (as described in the article) to write to a local, non-NFS location.
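For example, a minimal sketch of ~/.config/containers/storage.conf pointing the graph root at local disk (the /var/tmp path below is an assumption; any non-NFS location you own will do):
# ~/.config/containers/storage.conf
[storage]
driver = "overlay"
# runroot already lives on tmpfs, so it can stay as-is
runroot = "/run/user/10279927/containers"
# move the image/layer store off the NFS home directory
graphroot = "/var/tmp/me/containers/storage"
After changing this, existing storage may need to be cleared (for example with podman system reset) before Podman picks up the new location.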

How to install MySQL version 8.0.21 using Ansible

I am trying to install MySQL version 8.0.21 using Ansible. I downloaded the required tar bundle from the MySQL community site and tried to install it with dpkg, but ran into issues while passing the passwords.
- name: Set MySQL root password before installing
  debconf: name='mysql-server' question='mysql-server/root_password' value='{{MySQL_root_pass | quote}}' vtype='password'
  become: yes
- name: Confirm MySQL root password before installing
  debconf: name='mysql-server' question='mysql-server/root_password_again' value='{{MySQL_root_pass | quote}}' vtype='password'
  become: yes
- name: Install my_package
  apt: deb="required_package.deb"
  become: true
  become_method: sudo
This is my YAML. MySQL ends up installed with an empty root password.
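One likely cause, for what it's worth: the MySQL 8 community .deb packages read different debconf questions than the distribution's mysql-server package, so the preseeded answers above are never consulted. A sketch assuming the mysql-community-server package from the MySQL APT bundle:
- name: Set MySQL root password before installing
  debconf: name='mysql-community-server' question='mysql-community-server/root-pass' value='{{MySQL_root_pass | quote}}' vtype='password'
  become: yes
- name: Confirm MySQL root password before installing
  debconf: name='mysql-community-server' question='mysql-community-server/re-root-pass' value='{{MySQL_root_pass | quote}}' vtype='password'
  become: yes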
What about the role from geerlingguy?
ansible-galaxy install geerlingguy.mysql
After installing it, your playbook would look like this:
- hosts: YOUR_SERVER_IP_HERE
  become: yes
  vars_files:
    - vars/main.yml
  roles:
    - { role: geerlingguy.mysql }
  pre_tasks:
    - name: Install the MySQL repo.
      yum:
        name: http://repo.mysql.com/mysql-community-release-el7-5.noarch.rpm
        state: present
      when: ansible_os_family == "RedHat"
    - name: Override variables for MySQL (RedHat).
      set_fact:
        mysql_daemon: mysqld
        mysql_packages: ['mysql-server']
        mysql_log_error: /var/lib/mysql/error.log
        mysql_syslog_tag: mysqld
        mysql_pid_file: /var/run/mysqld/mysqld.pid
        mysql_socket: /var/lib/mysql/mysql.sock
      when: ansible_os_family == "RedHat"
Make sure to update the vars/main.yml file with the version you want!
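A sketch of vars/main.yml using that role's documented variables (the password is a placeholder and the version pin is hypothetical; it must exist in the repository you configured):
# vars/main.yml
mysql_root_password: super-secret-password
mysql_packages:
  - mysql-community-server-8.0.21  # hypothetical pin; check the repo for the exact package name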

Issues with helm install Orion Context Broker

I'm trying to install FIWARE Orion on AKS using your Helm chart. I installed MongoDB using:
helm repo add azure-marketplace https://marketplace.azurecr.io/helm/v1/repo
helm install my-release azure-marketplace/mongodb
Consequently, I configured MongoDB in values.yaml as follows:
## database configuration
db:
  # -- configuration of the mongo-db hosts. if multiple hosts are inserted, its assumed that mongo is running as a replica set
  hosts: [my-release-mongodb]
  # - my-release-mongodb
  # -- the db to use. if running in multiservice mode, its used as a prefix.
  name: orion
  # -- Database authentication (not needed if MongoDB doesn't use --auth)
  auth:
    # --user for connecting mongo
    user: root
    # -- password to be used on mongo
    password: mypasswd
    # -- the MongoDB authentication mechanism to use in the case user and password is set
    #mech: SCRAM-SHA-1
I use the command: helm install test orion
As I see this error in the pod logging, I suppose something is wrong:
kubectl logs test-orion-7dfcc9c7fb-8vbgw
time=2021-05-28T19:50:29.737Z | lvl=ERROR | corr=N/A | trans=N/A | from=N/A | srv=N/A | subsrv=N/A | comp=Orion | op=mongocContextCachePersist.cpp[59]:mongocContextCachePersist | msg=Database Error (persisting context: command insert requires authentication)
Can you help me with this, please?
Kind regards,
Johan
You should make sure that MongoDB is actually available at "my-release-mongodb:27017"; you can use "kubectl get services" for that. Besides that, make sure that "root:mypasswd" are actually the credentials set up in MongoDB.
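For instance (a sketch; the secret and key names assume the bitnami-based chart that azure-marketplace mirrors):
# is the service resolvable at the expected name and port?
kubectl get services my-release-mongodb
# the chart stores the generated root password in a secret
kubectl get secret my-release-mongodb -o jsonpath='{.data.mongodb-root-password}' | base64 -d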

Unable to set endpoint using the Azure CLI

I used docker-machine with Azure as the driver to spin up a VM. I then deployed a simple nginx test container onto the host. My issue is that when I try to set an endpoint, I get the following error:
azure vm endpoint create huldra 80 32769
info: Executing command vm endpoint create
+ Getting virtual machines
+ Reading network configuration
+ Updating network configuration
error: Parameter 'ConsoleScreenshotBlobUri' should not be set.
info: Error information has been recorded to /Users/ryan/.azure/azure.err
error: vm endpoint create command failed
When I look at the error log, it pretty much repeats what the console said: Parameter 'ConsoleScreenshotBlobUri' should not be set.
Here are my docker and azure environment details:
❯ docker info
Containers: 1
Running: 1
Paused: 0
Stopped: 0
Images: 3
Server Version: 1.10.2
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 21
Dirperm1 Supported: true
Execution Driver: native-0.2
Logging Driver: json-file
Plugins:
Volume: local
Network: bridge null host
Kernel Version: 4.2.0-18-generic
Operating System: Ubuntu 15.10
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 1.636 GiB
Name: huldra
ID: PHUY:JRE3:DOJO:NNWO:JBBH:42H2:56ZO:HVSB:MZDE:QLOI:GO6F:SCC5
WARNING: No swap limit support
Labels:
provider=azure
~/Projects/dockerswarm master*
❯ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ce51127b2bb8 nginx "nginx -g 'daemon off" 11 minutes ago Up 11 minutes 0.0.0.0:32769->80/tcp, 0.0.0.0:32768->443/tcp machinenginx
❯ azure --version
0.9.17 (node: 5.8.0)
❯ azure vm list
info: Executing command vm list
+ Getting virtual machines
data: Name Status Location DNS Name IP Address
data: ------ --------- -------- ------------------- -------------
data: huldra ReadyRole West US huldra.cloudapp.net x.x.x.x
info: vm list command OK

Docker Remote API does not list containers

I have a locally installed Docker server which runs one container.
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3d7ef4f6bb0a debian "/bin/bash" 7 hours ago Up 7 hours 0.0.0.0:80->2376/tcp nostalgic_fermat
When I tried to use the Docker Remote API to get information about this container, I did not see the JSON output describing the containers running on the host. The result of the REST call is:
wget -v 192.168.99.100:2376/containers/json/
--2016-01-16 23:57:20-- http://192.168.99.100:2376/containers/json/
Connecting to 192.168.99.100:2376... connected.
HTTP request sent, awaiting response... 200 No headers, assuming HTTP/0.9
Length: unspecified
Saving to: 'index.html.3'
index.html.3 [ <=> ] 7 --.-KB/s in 0s
2016-01-16 23:57:20 (297 KB/s) - 'index.html.3' saved [7]
What exactly am I missing?
The version of the API is:
Client:
Version: 1.9.0
API version: 1.21
Go version: go1.4.3
Git commit: 76d6bc9
Built: Tue Nov 3 19:20:09 UTC 2015
OS/Arch: darwin/amd64
EDIT (RESOLVED)
It appears that the Docker server requires TLS client authentication. I was able to authenticate to the Docker host by providing the local Docker server certificates.
The following command saves a JSON file with information about all containers running on the local Docker server.
wget --no-check-certificate --ca-certificate ca.pem --certificate=cert.pem --certificate-type=PEM --private-key=key.pem --private-key-type=PEM https://192.168.99.100:2376/containers/json
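Equivalently with curl, assuming the same certificate paths that docker-machine generated:
curl --cacert ca.pem --cert cert.pem --key key.pem https://192.168.99.100:2376/containers/json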