Enable authentication in MQTT with username and password - configuration

What changes do I need to make in the MQTT configuration file for authentication with a username and password? The conf file settings I have found are:
use_identity_as_username true
username:password
password_file
What changes do we need to make in the configuration file?

I am using the Mosquitto MQTT broker and I do it like this:
Set allow_anonymous false
password_file file_path
Then start the broker with the conf file from the command prompt:
c:\>mosquitto\mosquitto -c mosquitto.conf
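For completeness, a minimal sketch of the whole flow, assuming Mosquitto is installed under c:\mosquitto as in the command above and that the user name myuser is just an example:
c:\>mosquitto\mosquitto_passwd -c c:\mosquitto\passwd.txt myuser
(you will be prompted for the password; -c creates the password file)
mosquitto.conf:
allow_anonymous false
password_file c:\mosquitto\passwd.txt
c:\>mosquitto\mosquitto -c mosquitto.conf
Clients then have to supply the credentials, e.g. mosquitto_sub -h localhost -t test -u myuser -P mypassword.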

Related

How can I do an SSH tunnel with port forwarding on a Windows runner in GitHub Actions?

I have a MongoDB instance running on a Google Compute Engine VM that I want to connect to from my GitHub Action (on a Windows runner, if it makes a difference) to insert test and performance results.
Currently, I am trying to open an SSH tunnel with port forwarding and just test that the port is open.
Here is my GitHub Action step:
- name: 'Create ssh tunnel'
  if: (runner.os == 'Windows')
  run: |
    gcloud config set auth/impersonate_service_account *****@***.iam.gserviceaccount.com
    gcloud compute config-ssh
    $sshTunnelJob = Start-Job -Name SshTunnelJob -ScriptBlock { ssh -o "User=*****_iam_gserviceaccount_com" *****.us-east1-b.**** -vvv -fNT -L 27017:0.0.0.0:27017 }
    Get-Job
    Receive-Job -Name SshTunnelJob | Format-List -Force -Expand CoreOnly
    netstat -aon
    Test-NetConnection localhost -port 27017
    gcloud config unset auth/impersonate_service_account
    gcloud compute config-ssh --remove
I expect this, Test-NetConnection localhost -port 27017, to succeed, but it fails. Forwarding port 80 is succeeding, though.
Here is the output:
WARNING: TCP connect to (::1 : 27017) failed
WARNING: TCP connect to (127.0.0.1 : 27017) failed
ComputerName: localhost
RemoteAddress: ::1
ResolvedAddresses: {::1, 127.0.0.1}
PingSucceeded: True
PingReplyDetails: System.Net.NetworkInformation.PingReply
TcpClientSocket:
TcpTestSucceeded: False
RemotePort: 27017
TraceRoute:
Detailed: False
InterfaceAlias: Loopback Pseudo-Interface 1
InterfaceIndex: 1
InterfaceDescription:
NetAdapter:
NetRoute: MSFT_NetRoute (InstanceID = "DD;9;?B55;55DD55;")
SourceAddress: ::1
NameResolutionSucceeded: True
BasicNameResolution: {Microsoft.DnsClient.Commands.DnsRecord_AAAA,Microsoft.DnsClient.Commands.DnsRecord_A}
LLMNRNetbiosRecords: {}
DNSOnlyRecords: {Microsoft.DnsClient.Commands.DnsRecord_A}
AllNameResolutionResults: {Microsoft.DnsClient.Commands.DnsRecord_AAAA,Microsoft.DnsClient.Commands.DnsRecord_A}
IsAdmin: True
NetworkIsolationContext: Loopback
MatchingIPsecRules:
What am I missing? Is GitHub limiting ports? I couldn't find any documentation on what ports are blocked or not.
Solution 1:
The issue might be that the connection from client to server is blocked by a firewall. Please check whether the relevant GCP firewall rule is enabled for port 27017.
Also, please check the target tags and update them accordingly if required. This will allow instances tagged with mongodb-instance to accept connections on port 27017.
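A hedged sketch of such a rule (the rule name allow-mongodb is illustrative, and the target tag mongodb-instance is assumed to be attached to the VM):
# create an ingress rule that opens TCP 27017 for instances tagged mongodb-instance
gcloud compute firewall-rules create allow-mongodb --allow=tcp:27017 --target-tags=mongodb-instance
Restrict the source ranges (--source-ranges) as appropriate for your setup.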
Solution 2:
As per the output you provided, PingSucceeded was True whereas TcpTestSucceeded returned False. In such cases the host is reachable via ICMP, but the TCP port itself is not accepting connections.
PingSucceeded: True
TcpTestSucceeded: False
As you are expecting Test-NetConnection localhost -port 27017 to succeed, please follow the steps below.
Open PowerShell on the Windows server and type the following command:
tnc <ip_address> -port <PortNumber>
If the device has issues, for example it is powered off or disconnected from the network, a response like the one below is expected.
PingSucceeded : False
TcpTestSucceeded : False
If the connection is healthy (i.e. the MongoDB server can be reached successfully), then the following response is expected in PowerShell.
TcpTestSucceeded : True
This response tells us specifically that port 27017 is open and that Test-NetConnection was able to complete a TCP handshake, so the port should be ready to establish a connection.
The above information is derived from a link drafted by Rodrigo Restrepo.

How to change default MySQL username and password using docker setup?

I want to change the default MySQL username and password, either during setup or afterwards.
I've taken a bit of a scattershot approach and changed every config file I can find containing a username and password, but it still doesn't work.
Files I've edited below:
> ~/.env
> ~/conf/dist/env.ac
> ~/conf/dist/env.docker
> ~/conf/config.sh
> ~/env/docker/etc/authserver.conf.dockerdist
> ~/env/docker/etc/worldserver.conf.dockerdist
> ~/docker-compose.yml
Error during rebuild:
> Searching on /azerothcore/data/sql/custom/db_world/ ...
> ===== DONE =====
> ===== CHECKING DBs ===== ERROR 1045 (28000): Access denied for user 'wowadmin'@'172.19.0.3' (using password: YES)
From the FAQ of the official guide:
How can I change the docker containers configuration?
You can copy the file /conf/dist/.env.docker to .env and place it in the root folder of the project, then edit it according to your needs.
In the .env file you can configure:
the location of the data, etc and logs folders
the open ports
the MySQL root password
Then your docker-compose up will automatically locate the .env with your custom settings.
Okay, got this sorted out; here's the solution.
As the FAQ above says, copy the file /conf/dist/.env.docker to .env, place it in the root folder of the project, and edit it according to your needs (the location of the data, etc and logs folders, the open ports, and the MySQL root password). Then docker-compose up will automatically locate the .env with your custom settings.
.env file:
DOCKER_AC_ENV_FILE=C:\Games\WoW-WotLK-Server\azerothcore-wotlk\conf\dist\env.ac
DOCKER_VOL_DATA=
DOCKER_VOL_ETC=
DOCKER_VOL_LOGS=
DOCKER_VOL_CONF=
DOCKER_WORLD_EXTERNAL_PORT=
DOCKER_SOAP_EXTERNAL_PORT=
DOCKER_AUTH_EXTERNAL_PORT=
DOCKER_DB_EXTERNAL_PORT=
DOCKER_DB_ROOT_PASSWORD=yourpassword
...
Note: Apparently there is an unresolved issue with using a # in the password, so don't.
..\azerothcore-wotlk\conf\dist\
env.ac file:
DB_AUTH_CONF=MYSQL_USER='root'; MYSQL_PASS='yourpassword'; MYSQL_HOST='ac-database'; MYSQL_PORT='3306';
DB_CHARACTERS_CONF=MYSQL_USER='root'; MYSQL_PASS='yourpassword'; MYSQL_HOST='ac-database'; MYSQL_PORT='3306';
DB_WORLD_CONF=MYSQL_USER='root'; MYSQL_PASS='yourpassword'; MYSQL_HOST='ac-database'; MYSQL_PORT='3306';
..\azerothcore-wotlk\env\docker\etc
authserver.conf file:
###############################################
# AzerothCore Auth Server configuration file #
###############################################
[authserver]
# Do not change this
# Files in LogsDir will reflect on your host directory: docker/authserver/logs
LogsDir = "/azerothcore/env/dist/logs"
# Change this configuration accordingly with your docker setup
# The format is "hostname;port;username;password;database":
# - docker containers must be on the same docker network to be able to communicate
# - the DB hostname will be the name of the database docker container
LoginDatabaseInfo = "ac-database;3306;root;yourpassword;acore_auth"
# Add more configuration overwrites by copying settings from authserver.conf.dist
LogLevel = 3
SQLDriverLogFile = "SQLDriver.log"
SQLDriverQueryLogging = 1
worldserver.conf:
################################################
# AzerothCore World Server configuration file #
################################################
[worldserver]
# Do NOT change those Dir configs
# Files in LogsDir will reflect on your host directory: docker/worldserver/logs
LogsDir = "/azerothcore/env/dist/logs"
DataDir = "/azerothcore/env/dist/data"
# Change this configuration accordingly with your docker setup
# The format is "hostname;port;username;password;database":
# - docker containers must be on the same docker network to be able to communicate
# - the DB hostname will be the name of the database docker container
LoginDatabaseInfo = "ac-database;3306;root;yourpassword;acore_auth"
WorldDatabaseInfo = "ac-database;3306;root;yourpassword;acore_world"
CharacterDatabaseInfo = "ac-database;3306;root;yourpassword;acore_characters"
# Add more configuration overwrites by copying settings from worldserver.conf.dist
LogLevel = 2
# Disable idle connections automatic kick since it doesn't work well on macOS + Docker
CloseIdleConnections = 0
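A quick way to check that the new credentials actually work, assuming the database service is named ac-database in docker-compose.yml (as the MYSQL_HOST values above suggest):
docker-compose up -d ac-database
docker-compose exec ac-database mysql -u root -p
# enter yourpassword; SHOW DATABASES; should then list acore_auth, acore_world and acore_characters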

Vitess MySQL authentication is not working

While installing Vitess through Helm, we enabled authentication in site-values.yaml:
mysqlProtocol:
enabled: false
authType: secret
# authType can be: none or secret. For secret, perform the following changes:
username: mysqluser
# this is the secret that will be mounted as the user password
# kubectl create secret generic mysql-user-passowrd --from-literal=password=abc_123
passwordSecret: mysql-user-passowrd
but after this, if we try to connect to MySQL like this:
mysql -h 10.108.8.197 -p 15991 -u mysqluser
then after entering the password it does not authenticate and shows the error Can't connect to MySQL server on '10.108.8.197' (111).
10.108.8.197 is our vtgate service cluster IP; if we try from 127.0.0.1 it is the same.
Is there anything we are missing?
What worked for us:
We deleted the Vitess release installed through Helm with helm delete vitess --purge, then recreated Vitess with the MySQL protocol enabled:
mysqlProtocol:
enabled: true
authType: secret
# authType can be: none or secret. For secret, perform the following changes:
username: mysqluser
# this is the secret that will be mounted as the user password
# kubectl create secret generic mysql-user-passowrd --from-literal=password=abc_123
passwordSecret: mysql-user-passowrd
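For completeness, a hedged sketch of the full sequence in Helm 2 syntax (the chart reference ./vitess and the values file name site-values.yaml are assumptions based on the question):
kubectl create secret generic mysql-user-passowrd --from-literal=password=abc_123
helm delete vitess --purge
helm install --name vitess ./vitess -f site-values.yaml
# then test the MySQL protocol through vtgate (note the capital -P for the port):
mysql -h 10.108.8.197 -P 15991 -u mysqluser -p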

Zabbix proxy - Received empty response from Zabbix Agent

I am trying to set up a Zabbix proxy. My network is as below:
Zabbix server IP: 192.168.101.11 (internal network)
Zabbix proxy server: 192.168.102.109 (internal network)
Zabbix agent: 172.1.16.2 (outside network but pingable from 102.109)
I can ping the zabbix agent IP from my proxy machine.
[root@102_109 ~]# ping 172.1.16.2
PING 172.1.16.2 (172.1.16.2) 56(84) bytes of data.
64 bytes from 172.1.16.2: icmp_seq=1 ttl=64 time=215 ms
64 bytes from 172.1.16.2: icmp_seq=2 ttl=64 time=214 ms
64 bytes from 172.1.16.2: icmp_seq=3 ttl=64 time=214 ms
64 bytes from 172.1.16.2: icmp_seq=4 ttl=64 time=214 ms
I can connect to the zabbix proxy from my zabbix server -
zabbix_get -k agent.ping -s 192.168.102.109
1
My zabbix_proxy.conf file (on 102.109) is as below
ProxyMode=0
Server=192.168.101.11
Hostname=CME_Proxy
LogFile=/tmp/zabbix_proxy.log
DBName=zabbix
DBUser=root
DBPassword=password
And on the zabbix agent machine (172.1.16.2) the configuration is as below.
EnableRemoteCommands=1
LogFile=/tmp/zabbix_agentd.log
Server=192.168.101.11,192.168.102.109
ServerActive=192.168.101.11,192.168.102.109
Hostname=172.1.16.2
AllowRoot=1
On my zabbix front end, I have configured the host as monitored by proxy (CME_Proxy) and there is only 1 item (agent.ping).
I am not able to get any data from the zabbix agent. From my proxy machine, when I run the following command, it returns a blank value.
zabbix_get -k agent.ping -s 172.1.16.2
<this is blank response>
Due to this, in the host configuration, zabbix shows error -
"Received empty response from Zabbix Agent at [172.1.16.2]. Assuming
that agent dropped connection because of access permissions."
Can someone please tell me whether my configuration is correct? If not, how do I do this correctly? If you need additional data, please let me know.
Thank you
Mukul
Figured it out:
In the agent config file, the following parameters
Server=192.168.101.11,192.168.102.109
ServerActive=192.168.101.11,192.168.102.109
should have been
Server=192.168.101.11,172.1.16.1
ServerActive=192.168.101.11,172.1.16.1
On the agent server (172.1.16.2), make some changes in zabbix_agentd.conf:
you need to specify who will have permission to request data from the agent (passive checks).
Server=192.168.102.109 # this allows connections from the proxy IP
# ServerActive=192.168.102.109 # comment out ServerActive if you won't use active checks
In the web interface, set "Monitored by proxy" to CME_Proxy (or whatever you defined as Hostname in zabbix_proxy.conf on 192.168.102.109).
Then check communication, as you did before, from the proxy terminal (192.168.102.109):
zabbix_get -k agent.ping -s 172.1.16.2 # It should return 1.
PS: check Hostname in zabbix_proxy.conf; it should be CME_Proxy, or the same name you defined in the web interface.
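After editing zabbix_agentd.conf the agent has to be restarted for the new Server/ServerActive values to take effect; a minimal sketch, assuming a systemd-based host where the service is named zabbix-agent:
sudo systemctl restart zabbix-agent
zabbix_get -k agent.ping -s 172.1.16.2 # run from the proxy; should now return 1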
# For a step-by-step guide to running the latest Zabbix version 5.0, follow these links:
# https://blog.zabbix.com/zabbix-docker-containers/7150/
# https://techexpert.tips/zabbix/monitoring-docker-using-zabbix/
# It's simple: just add all Zabbix server IPs in the Zabbix agent host's conf file, like below
Server=192.168.101.11,172.1.16.1
ServerActive=192.168.101.11,172.1.16.1
If you are using the Zabbix server-agent model with Docker containers, then when deploying the containers specify the Zabbix server host/container IPs that are allowed to connect to the Zabbix agent container.
Assuming you want to deploy the Zabbix server and agent on the same host running Docker containers, just run the docker commands below.
#Zabbix Server Container
sudo docker run --name zabbix-appliance -p 8080:80 -p 10051:10051 -d -h zabbix-server zabbix/zabbix-appliance
#Zabbix Agent container
sudo docker run --name=dockbix-agent-xxl --privileged -v /:/rootfs -v /var/run:/var/run -p 10050:10050 -e "ZA_Server=192.168.0.3,172.17.0.1" -e "ZA_ServerActive=192.168.0.3,172.17.0.1" -d monitoringartist/dockbix-agent-xxl-limited:latest
# Default username and password of the Zabbix server
# username: Admin password: zabbix
# For monitoring Docker container resources, import a template from this repository: https://github.com/monitoringartist/zabbix-docker-monitoring

SSH issue when connecting MySQL Workbench to Vagrant

I'm trying to connect to my Vagrant MySQL server using MySQL Workbench, but it fails with an error.
The Workbench error log is pasted below.
17:34:50 [INF][ SSH tunnel]: Existing SSH tunnel not found, opening new one
17:34:50 [INF][ SSH tunnel]: Opening SSH tunnel to 127.0.0.1:2222
17:34:50 [ERR][ sshtunnel.py]: Traceback (most recent call last):
File "/usr/share/mysql-workbench/sshtunnel.py", line 231, in _connect_ssh
look_for_keys=has_key, allow_agent=has_key)
File "/usr/lib/python2.7/dist-packages/paramiko/client.py", line 337, in connect
self._auth(username, password, pkey, key_filenames, allow_agent, look_for_keys)
File "/usr/lib/python2.7/dist-packages/paramiko/client.py", line 528, in _auth
raise saved_exception
AuthenticationException: Authentication failed.
17:34:50 [ERR][ SSH tunnel]: Authentication error opening SSH tunnel: Authentication error. Please check that your username and password are correct and try again.
vagrant up command output is pasted below
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
default: Adapter 1: nat
default: Adapter 2: hostonly
==> default: Forwarding ports...
default: 22 => 6216 (adapter 1)
default: 22 => 2222 (adapter 1)
==> default: Running 'pre-boot' VM customizations...
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
default: SSH address: 127.0.0.1:2222
default: SSH username: vagrant
default: SSH auth method: private key
default: Warning: Connection timeout. Retrying...
==> default: Machine booted and ready!
==> default: Checking for guest additions in VM...
==> default: Configuring and enabling network interfaces...
The command vagrant ssh works fine in the terminal. What am I doing wrong here?
If you run vagrant ssh-config it will show which key it is using. It normally does not use ~/.vagrant.d/insecure_private_key but a key in the project directory, such as .vagrant/machines/default/virtualbox/private_key.
If you specify that key in the MySQL Workbench connection panel, you should be able to log on without having to add another key to the VM.
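Typical vagrant ssh-config output looks roughly like the sketch below; the paths are illustrative and will differ on your machine:
Host default
  HostName 127.0.0.1
  User vagrant
  Port 2222
  StrictHostKeyChecking no
  IdentityFile /path/to/project/.vagrant/machines/default/virtualbox/private_key
  IdentitiesOnly yes
The IdentityFile line is the key to select in the Workbench SSH connection settings.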
Regarding the error you mention in the comments:
When using ssh you don't specify the port like this:
ssh 127.0.0.1:2222
You must use the -p option:
ssh 127.0.0.1 -p 2222
After some googling, I got it working by adding my SSH public key to Vagrant's authorized_keys file. Steps below (condensed into shell commands after this list):
generate SSH keys for your machine
copy your public key from the /home/{username}/.ssh/id_rsa.pub file
open a Vagrant SSH session in the terminal (vagrant ssh)
use an editor to edit /home/vagrant/.ssh/authorized_keys (e.g. nano /home/vagrant/.ssh/authorized_keys)
paste your public key at the end of that file and save
done!
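A condensed sketch of those steps as shell commands; the key path assumes the default id_rsa key name:
# on the host machine
ssh-keygen -t rsa # generate a key pair if you don't already have one
cat ~/.ssh/id_rsa.pub # copy the public key that is printed
# inside the VM (after running: vagrant ssh)
nano /home/vagrant/.ssh/authorized_keys # paste the public key on a new line and save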