Connect ejabberd to MySQL via native driver for mod_archive_odbc

I'm trying to connect our ejabberd server to MySQL to add the mod_archive_odbc module. We're running ejabberd 2.1.13. The rest of the server uses mnesia for storage. I tried the DSN approach first, but that failed. I'm currently getting this error in erlang.log:
=PROGRESS REPORT==== 24-Sep-2013::13:50:27 ===
supervisor: {local,ejabberd_sup}
started: [{pid,<0.777.0>},
{name,'ejabberd_mod_archive_odbc_chat.hostname.com'},
{mfargs,
{mod_archive_odbc,start_link,
["chat.hostname.com",
[{database_type,"mysql"},
{default_auto_save,true},
{enforce_default_auto_save,false},
{default_expire,infinity},
{enforce_min_expire,0},
{enforce_max_expire,infinity},
{replication_expire,31536000},
{session_duration,1800},
{wipeout_interval,86400}]]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
=CRASH REPORT==== 24-Sep-2013::13:50:36 ===
crasher:
initial call: mod_archive_odbc:init/1
pid: <0.777.0>
registered_name: 'ejabberd_mod_archive_odbc_chat.hostname.com'
exception exit: {aborted,{no_exists,[sql_pool,"chat.hostname.com"]}}
in function gen_server:terminate/6
ancestors: [ejabberd_sup,<0.37.0>]
This is what the modules section looks like:
{mod_archive_odbc, [{database_type, "mysql"},
                    {default_auto_save, true},
                    {enforce_default_auto_save, false},
                    {default_expire, infinity},
                    {enforce_min_expire, 0},
                    {enforce_max_expire, infinity},
                    {replication_expire, 31536000},
                    {session_duration, 1800},
                    {wipeout_interval, 86400}]}
This is what the database section looks like:
{odbc_server, {mysql, "localhost", "ejabberd", "ejabberd", "password"}}.
I can connect to the MySQL server with the ejabberd user both locally and remotely.
Here is the ngrep output while the errors occur:
# ngrep port 3306
interface: eth0 (10.179.7.192/255.255.255.192)
filter: (ip or ip6) and ( port 3306 )
^Cexit
0 received, 0 dropped
# ngrep -d lo port 3306
interface: lo (127.0.0.0/255.0.0.0)
filter: (ip or ip6) and ( port 3306 )
^Cexit
0 received, 0 dropped
So ejabberd never even attempts a TCP connection to MySQL. By contrast, here is the ngrep output when I connect to MySQL with the ejabberd user from another computer on the network:
# ngrep port 3306
interface: eth0 (10.179.7.192/255.255.255.192)
filter: (ip or ip6) and ( port 3306 )
####
T 10.179.7.235:3306 -> XX.XXX.XXX.XXX:55909 [AP]
J....5.5.32.....xxpKb-VK...................UKXV(a2rh6r].mysql_native_password.
##
T XX.XXX.XXX.XXX:55909 -> 10.179.7.235:3306 [AP]
>...................................ejabberd....).p.P..lt=BTK..w..
##
T 10.179.7.235:3306 -> XX.XXX.XXX.XXX:55909 [AP]
...........
#
T XX.XXX.XXX.XXX:55909 -> 10.179.7.235:3306 [AP]
!....select @@version_comment limit 1
#
T 10.179.7.235:3306 -> XX.XXX.XXX.XXX:55909 [AP]
.....'....def....@@version_comment............................MySQL Community Server (GPL).........
##
T XX.XXX.XXX.XXX:55909 -> 10.179.7.235:3306 [AP]
.....
###
The MySQL module appears to be installed:
(ejabberd@ip-10-179-7-235)1> ejabberd_check:check_database_module(mysql).
ok

The problem was that the changes I was making were never being applied to the running service. Another question, How to uninstall odbc for Ejabberd?, and the comments above by @giavac made me realize that the configuration file is apparently not always authoritative for the configuration.
Specifically, I fixed my problem by adding this line:
override_local.
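For completeness: ejabberd 2.x loads the config file into its Mnesia database on first start, and after that the stored copy takes precedence over the file. The override directives at the top of ejabberd.cfg force the file to win again on the next restart; a sketch of the full set (I only needed override_local):
override_global.
override_local.
override_acls.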


Problems connecting Cloud Run Application to Cloud SQL using Spring boot

I am trying to connect a Spring application (using Kotlin and Gradle) to a Google Cloud SQL instance and database. I am getting the error message
java.lang.RuntimeException: [<project-name>:europe-west1:<db-instance>] The Cloud SQL Instance does not exist or your account is not authorized to access it. Please verify the instance connection name and check the IAM permissions for project "<project-name>"
I have followed the guide on how to connect carefully, but to no avail.
Relevant files
src/main/resources/application.yml
server:
  port: ${PORT:8080}
spring:
  liquibase:
    change-log: classpath:liquibase/db.changelog.xml
    contexts: production
  cloud:
    appId: <project-id>
    gcp:
      sql:
        instance-connection-name: <instance-connection-name>
        database-name: <db-name>
  jpa:
    hibernate:
      dialect: org.hibernate.dialect.MySQL8Dialect
      default_schema: <schema>
      show_sql: true
      ddl-auto: none
  datasource:
    driver-class-name: com.mysql.cj.jdbc.Driver
    continue-on-error: true
    initialization-mode: always
    url: jdbc:mysql:///<db-name>?cloudSqlInstance=<instance-connection-name>&socketFactory=com.google.cloud.sql.mysql.SocketFactory&user=<user>&password=<password>
    username: <user>
    password: <password>
---
spring:
  config:
    activate:
      on-profile: dev
  jpa:
    hibernate:
      ddl-auto: create-drop
  spring.jpa.database-platform: org.hibernate.dialect.H2Dialect
  datasource:
    url: jdbc:h2:mem:mydb
    username: sa
    password: password
    driverClassName: org.h2.Driver
  cloud:
    gcp:
      sql:
        enabled: false
build.gradle.kts
import org.jetbrains.kotlin.gradle.tasks.KotlinCompile
plugins {
    id("org.springframework.boot") version "2.6.5"
    id("io.spring.dependency-management") version "1.0.11.RELEASE"
    kotlin("jvm") version "1.6.10"
    kotlin("plugin.spring") version "1.6.10"
    kotlin("plugin.allopen") version "1.4.32"
    kotlin("plugin.jpa") version "1.4.32"
    kotlin("kapt") version "1.4.32"
}

allOpen {
    annotation("javax.persistence.Entity")
    annotation("javax.persistence.Embeddable")
    annotation("javax.persistence.MappedSuperclass")
}

group = "com.<company>"
version = "0.0.1-SNAPSHOT"
java.sourceCompatibility = JavaVersion.VERSION_17

repositories {
    mavenCentral()
}

dependencies {
    implementation("org.springframework.boot:spring-boot-starter-web:2.6.5")
    implementation("org.springframework.boot:spring-boot-starter-webflux:2.6.5")
    implementation("org.springframework.boot:spring-boot-starter-data-jpa:2.6.5")
    implementation("org.springframework.cloud:spring-cloud-gcp-starter-sql-mysql:1.2.8.RELEASE")
    implementation("org.jetbrains.kotlin:kotlin-reflect:1.6.10")
    implementation("org.jetbrains.kotlin:kotlin-stdlib-jdk8:1.6.10")
    implementation("com.fasterxml.jackson.dataformat:jackson-dataformat-xml:2.13.2")
    implementation("com.fasterxml.jackson.core:jackson-annotations:2.13.2")
    implementation("com.fasterxml.jackson.core:jackson-core:2.13.2")
    implementation("com.fasterxml.jackson.core:jackson-databind:2.13.2.2")
    implementation("com.fasterxml.jackson.module:jackson-module-kotlin:2.13.2")
    implementation("com.fasterxml.jackson.datatype:jackson-datatype-jsr310:2.13.2")
    implementation("org.hibernate:hibernate-core:5.6.7.Final")
    implementation("javax.persistence:javax.persistence-api:2.2")
    implementation("commons-codec:commons-codec:1.15")
    implementation("io.github.microutils:kotlin-logging-jvm:2.1.21")
    implementation("ch.qos.logback:logback-classic:1.2.11")
    implementation("com.google.cloud.sql:mysql-socket-factory-connector-j-8:1.4.4")
    runtimeOnly("com.h2database:h2:2.1.210")
    runtimeOnly("org.springframework.boot:spring-boot-devtools:2.6.5")
    testImplementation("org.springframework.boot:spring-boot-starter-test:2.6.5")
}

tasks.withType<KotlinCompile> {
    kotlinOptions {
        freeCompilerArgs = listOf("-Xjsr305=strict")
        jvmTarget = "17"
    }
}

tasks.withType<Test> {
    useJUnitPlatform()
}
Dockerfile
FROM openjdk:17-alpine
ENV USER=appuser
# <placeholder> Replace context path for your own application
ENV JAVA_HOME=/opt/openjdk-17 \
    HOME=/home/$USER \
    CONTEXT_PATH=/aws-service-baseline
RUN adduser -S $USER
# <placeholder> Add additional packages for the docker container here
RUN apk add --no-cache su-exec
# <placeholder> Replace baseline.jar with your application's JAR file (defined in build.gradle.kts)
COPY Docker/runapp.sh build/libs/<application-name>-0.0.1-SNAPSHOT.jar $HOME/
RUN chmod 755 $HOME/*.sh && \
chown -R $USER $HOME
WORKDIR /home/$USER
CMD [ "./runapp.sh"]
Docker/runapp.sh
#!/bin/sh
set -e
# The module to start.
# <placeholder> Replace this with your own modulename (from module-info)
APP_JAR="<application-name>-0.0.1-SNAPSHOT.jar"
JAVA_PARAMS="-XshowSettings:vm"
echo " --- RUNNING $(basename "$0") $(date -u "+%Y-%m-%d %H:%M:%S Z") --- "
set -x
/sbin/su-exec "$USER:1000" "$JAVA_HOME/bin/java" "$JAVA_PARAMS $JAVA_PARAMS_OVERRIDE" -jar -Dserver.port=$PORT "$APP_JAR"
GCP details
I have made sure the SQL instance's connection is added to the Cloud Run revisions. The IAM roles for the compute service account also seem to be right. See images:
IAM: https://i.stack.imgur.com/yYaC5.png
Database: https://i.stack.imgur.com/NErad.png
Cloud Run connection https://i.stack.imgur.com/fKTSZ.png
Additional details
When running ./gradlew bootRun on my local machine (with GCP credentials present), the app works properly with an SQL connection. It also works when I build the JAR file and run it directly. It does not work out of the box when running in Docker, but if I add the GCP credentials to the Docker container locally, it connects to the database.
Does anyone have any suggestions on what might be wrong? Any help much appreciated!
I have tried connecting both locally and from a Docker container running locally.
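For completeness, "adding the GCP credentials to the Docker container" amounts to something like the following (key path and image name are placeholders); the Cloud SQL socket factory picks the key up via Application Default Credentials:
docker run -p 8080:8080 \
  -v "$HOME/.config/gcloud/application_default_credentials.json:/tmp/keys/adc.json:ro" \
  -e GOOGLE_APPLICATION_CREDENTIALS=/tmp/keys/adc.json \
  <image-name>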
Figured it out! Human error, of course. The Cloud Run service was initially configured with another service account, and not the default Compute Engine service account.
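In case it helps anyone else, something along these lines with the gcloud CLI shows which service account a Cloud Run service runs as, and switches it (service name, region, and account are placeholders):
gcloud run services describe <service-name> --region=<region> \
  --format="value(spec.template.spec.serviceAccountName)"
gcloud run services update <service-name> --region=<region> \
  --service-account=<service-account-email>
Whichever account ends up in use needs the Cloud SQL Client role (roles/cloudsql.client) for the connection to be authorized.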

docker and mysql: Got an error reading communication packets

I have a problem with connectivity in Docker. I use the official mysql 5.7 image and a Prisma server. When I start them via the Prisma CLI, which uses docker-compose underneath (described here), everything works.
But I need to start these containers programmatically via the Docker API, and in this case connections from the app are dropped with [Note] Aborted connection 8 to db: 'unconnected' user: 'root' host: '164.20.10.2' (Got an error reading communication packets).
So here is what I do:
Creating a bridge network:
const network = await docker.network.create({
  Name: manifest.name + '_network',
  IPAM: {
    "Driver": "default",
    "Config": [
      {
        "Subnet": "164.20.0.0/16",
        "IPRange": "164.20.10.0/24"
      }
    ]
  }
});
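(For reference, I believe the CLI equivalent of that network-create call is roughly the following, with the network name written out by hand:
docker network create --driver bridge --subnet=164.20.0.0/16 --ip-range=164.20.10.0/24 myapp_network
)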
Creating the MySQL container and attaching it to the network:
const mysql = await docker.container.create({
  Image: 'mysql:5.7',
  Hostname: manifest.name + '-mysql',
  Names: ['/' + manifest.name + '-mysql'],
  NetworkingConfig: {
    EndpointsConfig: {
      [manifest.name + '_network']: {
        Aliases: [manifest.name + '-mysql']
      }
    }
  },
  Restart: 'always',
  Args: [
    "mysqld",
    "--max-connections=1000",
    "--sql-mode=ALLOW_INVALID_DATES,ANSI_QUOTES,ERROR_FOR_DIVISION_BY_ZERO,HIGH_NOT_PRECEDENCE,IGNORE_SPACE,NO_AUTO_CREATE_USER,NO_AUTO_VALUE_ON_ZERO,NO_BACKSLASH_ESCAPES,NO_DIR_IN_CREATE,NO_ENGINE_SUBSTITUTION,NO_FIELD_OPTIONS,NO_KEY_OPTIONS,NO_TABLE_OPTIONS,NO_UNSIGNED_SUBTRACTION,NO_ZERO_DATE,NO_ZERO_IN_DATE,ONLY_FULL_GROUP_BY,PIPES_AS_CONCAT,REAL_AS_FLOAT,STRICT_ALL_TABLES,STRICT_TRANS_TABLES,ANSI,DB2,MAXDB,MSSQL,MYSQL323,MYSQL40,ORACLE,POSTGRESQL,TRADITIONAL"
  ],
  Env: [
    'MYSQL_ROOT_PASSWORD=secret'
  ]
});
await network.connect({
  Container: mysql.id
});
await mysql.start();
Then I wait for MySQL to boot, create the needed databases and the needed Prisma containers from prismagraphql/prisma:1.1, and start them. The app server resolves the mysql host correctly, but its connections are dropped by MySQL.
Telnet from the app container to the mysql container on port 3306 responds correctly:
J
5.7.21U;uH Kem']#45T]2mysql_native_password
What am I doing wrong?
Check the following server variables:
max_allowed_packet
wait_timeout
net_read_timeout
Also monitor the MySQL process list while the issue occurs to identify timeouts; for example, with the statements below.
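For instance, from a mysql shell while the issue is happening:
SHOW VARIABLES LIKE 'max_allowed_packet';
SHOW VARIABLES LIKE '%timeout%';
SHOW FULL PROCESSLIST;
If a connection sits in the Sleep state and is dropped right around wait_timeout seconds, the disconnects are timeouts rather than a network problem.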
Can you try adding some wait? It could be that the application tries to connect to the MySQL server before it is ready to accept connections. To test this, add some wait on startup (see the sketch below), or run MySQL and the application as separate deployments.
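A minimal sketch of such a wait in Python, assuming a plain TCP probe is enough; the host name and timings are placeholders. Note that the official mysql image restarts once during first-run initialization, so you may want several consecutive successful probes:
import socket
import time

def wait_for_port(host, port=3306, timeout=120):
    # Poll until a TCP connection to host:port succeeds or the timeout expires.
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with socket.create_connection((host, port), timeout=2):
                return True
        except OSError:
            time.sleep(1)
    return False

# e.g. wait_for_port("myapp-mysql") before starting the Prisma containers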
The fix is to add --wait-timeout=28800 (or a higher number) to the MySQL arguments:
Args: [
  "mysqld",
  "--max-connections=1000",
  "--sql-mode=ALLOW_INVALID_DATES,ANSI_QUOTES,ERROR_FOR_DIVISION_BY_ZERO,HIGH_NOT_PRECEDENCE,IGNORE_SPACE,NO_AUTO_CREATE_USER,NO_AUTO_VALUE_ON_ZERO,NO_BACKSLASH_ESCAPES,NO_DIR_IN_CREATE,NO_ENGINE_SUBSTITUTION,NO_FIELD_OPTIONS,NO_KEY_OPTIONS,NO_TABLE_OPTIONS,NO_UNSIGNED_SUBTRACTION,NO_ZERO_DATE,NO_ZERO_IN_DATE,ONLY_FULL_GROUP_BY,PIPES_AS_CONCAT,REAL_AS_FLOAT,STRICT_ALL_TABLES,STRICT_TRANS_TABLES,ANSI,DB2,MAXDB,MSSQL,MYSQL323,MYSQL40,ORACLE,POSTGRESQL,TRADITIONAL",
  "--wait-timeout=28800" // 28800 sec = 8 hours
],
Reference: https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_wait_timeout
But maybe it's wiser to find out the root cause of the idle connections.

Connect to MySQL in a docker container (Vagrant on Windows/VirtualBox)

I am trying to create a virtualised dev environment on Windows using Vagrant and Docker (as are a lot of people). The problem I have is that I cannot connect (or I don't understand how to) from MySQL Workbench running on my Windows laptop to my MySQL DB in a Docker container in Boot2Docker. This is how I visualise the connection:
MySQL Workbench -> 3306 -> Boot2Docker -> 3306 -> Docker -> MySQL
However, I cannot connect to the DB from MySQL Workbench. I have tried connecting to the Boot2Docker host 10.0.2.15 on 3306, and TCP over SSH using the private key of the Boot2Docker box (.vagrant\machines\dockerhost\virtualbox\id).
What am I doing wrong / what have I misunderstood?
My Vagrantfile:
ENV['VAGRANT_DEFAULT_PROVIDER'] = 'docker'
DOCKER_HOST_NAME = "dockerhost"
DOCKER_HOST_VAGRANTFILE = "./host/Vagrantfile"

Vagrant.configure("2") do |config|
  config.vm.network "forwarded_port", guest: 3306, host: 3306
  config.vm.define "mysql" do |v|
    v.vm.provider "docker" do |d|
      d.image = "mysql"
      d.env = {
        :MYSQL_ROOT_PASSWORD => "root",
        :MYSQL_DATABASE => "dockertest",
        :MYSQL_USER => "dockertest",
        :MYSQL_PASSWORD => "docker"
      }
      d.ports = ["3306:3306"]
      d.remains_running = "true"
      d.vagrant_machine = "#{DOCKER_HOST_NAME}"
      d.vagrant_vagrantfile = "#{DOCKER_HOST_VAGRANTFILE}"
    end
  end
end
My host/Vagrantfile (describing my Boot2Docker host) is:
FORWARD_DOCKER_PORTS = 'true'

Vagrant.configure(2) do |config|
  config.vm.provision "docker"
  # The following line terminates all ssh connections. Therefore
  # Vagrant will be forced to reconnect.
  # That's a workaround to have the docker command in the PATH.
  # Clear any existing ssh connections.
  # NOTE: ps aux seems to give variable results depending on run ->
  # process number can be first or second causing provision to fail!
  config.vm.provision "clear-ssh", type: "shell", inline:
    "ps aux | grep 'sshd:' | awk '{print $1}' | xargs kill"
  # "ps aux | grep 'sshd:' | awk '{print $2}' | xargs kill"
  config.vm.define "dockerhost"
  config.vm.box = "dduportal/boot2docker"
  config.vm.network "forwarded_port", guest: 8080, host: 8080
  config.vm.provider "virtualbox" do |vb|
    vb.name = "dockerhost"
  end
end
Solved. As I suspected, the ports were not being forwarded between the host machine and the docker host. The solution is to move the port-forwarding config line:
config.vm.network "forwarded_port", guest: 3306, host: 3306
into the docker host Vagrant file. MySQL Workbench on the Windows host can then connect on localhost:3306.
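The docker host Vagrantfile then ends up looking roughly like this (a sketch; the other lines are unchanged from above):
Vagrant.configure(2) do |config|
  config.vm.provision "docker"
  config.vm.define "dockerhost"
  config.vm.box = "dduportal/boot2docker"
  config.vm.network "forwarded_port", guest: 3306, host: 3306
  config.vm.network "forwarded_port", guest: 8080, host: 8080
end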

MySQL import hangs on Vagrant CoreOS box on Mac

I have a local development setup using the following:
Mac Yosemite 10.10.3
Vagrant 1.7.3
CoreOS alpha version 681.0.0
2 Docker containers, one for Apache/PHP and another for MySQL, both based on Ubuntu 12.10
It's set up to sync the local dev directory ~/Sites to the Vagrant box using NFS, since my working directories as well as the MySQL data directories are located there (~/Sites/.coreos-databases/mysql). From what I have read this is not the best type of setup, but it has worked for me, and for others at work, for quite some time.
Recently I have not been able to import any database dumps into this setup. The import starts and hangs approximately halfway through the process. It happens on the command line as well as with Sequel Pro. It does import some of the tables, but freezes at exactly the same spot every time. It doesn't seem to matter what the size of the dump is - the one I have been attempting is only 104KB. Someone else is having the same issue with a 100MB+ dump - freezing at the same spot, approximately halfway.
My Vagrantfile:
Vagrant.configure("2") do |config|
# Define the CoreOS box
config.vm.box = "coreos-alpha"
config.vm.box_url = "http://alpha.release.core-os.net/amd64-usr/current/coreos_production_vagrant.json"
# Define a static IP
config.vm.network "private_network",
ip: "33.33.33.77"
# Share the current folder via NFS
config.vm.synced_folder ".", "/home/core/sites",
id: "core",
:nfs => true,
:mount_options => ['nolock,vers=3,udp,noatime']
# Provision docker with shell
# config.vm.provision
config.vm.provision "shell",
path: ".coreos-devenv/scripts/provision-docker.sh"
end
Dockerfile for mysql:
# Start with Ubuntu base
FROM ubuntu:12.10
# Install some basics
RUN apt-get update
# Install mysql
RUN apt-get install -y mysql-server
# Clean up after install
RUN apt-get clean && rm -rf /var/lib/apt/lists/*
# Add a grants file to set up remote user
# and disable the root user's remote access.
ADD grants.sql /etc/mysql/
# Add a conf file for correcting "listen"
ADD listen.cnf /etc/mysql/conf.d/
# Run mysqld on standard port
EXPOSE 3306
ENTRYPOINT ["/usr/sbin/mysqld"]
CMD ["--init-file=/etc/mysql/grants.sql"]
I 'vagrant ssh' in and run dmesg and this is what it spits out after it freezes:
[ 465.504357] nfs: server 33.33.33.1 not responding, still trying
[ 600.091356] INFO: task mysqld:1501 blocked for more than 120 seconds.
[ 600.092388] Not tainted 4.0.3 #2
[ 600.093277] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 600.094442] mysqld D ffff880019dbfbc8 0 1501 939 0x00000000
[ 600.095953] ffff880019dbfbc8 ffffffff81a154c0 ffff88001ec61910 ffff880019dbfba8
[ 600.098871] ffff880019dbffd8 0000000000000000 7fffffffffffffff 0000000000000002
[ 600.101594] ffffffff8150b4e0 ffff880019dbfbe8 ffffffff8150ad57 ffff88001ed5eb18
[ 600.103794] Call Trace:
[ 600.104376] [<ffffffff8150b4e0>] ? bit_wait+0x50/0x50
[ 600.105934] [<ffffffff8150ad57>] schedule+0x37/0x90
[ 600.107505] [<ffffffff8150da7c>] schedule_timeout+0x20c/0x280
[ 600.108369] [<ffffffff8101d799>] ? read_tsc+0x9/0x10
[ 600.109370] [<ffffffff810d731e>] ? ktime_get+0x3e/0xa0
[ 600.110353] [<ffffffff8101d799>] ? read_tsc+0x9/0x10
[ 600.111327] [<ffffffff810d731e>] ? ktime_get+0x3e/0xa0
[ 600.112347] [<ffffffff8150a31c>] io_schedule_timeout+0xac/0x130
[ 600.113368] [<ffffffff810a9ee7>] ? prepare_to_wait+0x57/0x90
[ 600.114358] [<ffffffff8150b516>] bit_wait_io+0x36/0x50
[ 600.115332] [<ffffffff8150b145>] __wait_on_bit+0x65/0x90
[ 600.116343] [<ffffffff81146072>] wait_on_page_bit+0xc2/0xd0
[ 600.117453] [<ffffffff810aa360>] ? autoremove_wake_function+0x40/0x40
[ 600.119304] [<ffffffff81146179>] filemap_fdatawait_range+0xf9/0x190
[ 600.120646] [<ffffffff81152ffe>] ? do_writepages+0x1e/0x40
[ 600.121346] [<ffffffff81147f96>] ? __filemap_fdatawrite_range+0x56/0x70
[ 600.122397] [<ffffffff811480bf>] filemap_write_and_wait_range+0x3f/0x70
[ 600.123460] [<ffffffffa0207b1e>] nfs_file_fsync_commit+0x23e/0x3c0 [nfs]
[ 600.124399] [<ffffffff811e7bf0>] vfs_fsync_range+0x40/0xb0
[ 600.126163] [<ffffffff811e7cbd>] do_fsync+0x3d/0x70
[ 600.127092] [<ffffffff811e7f50>] SyS_fsync+0x10/0x20
[ 600.128086] [<ffffffff8150f089>] system_call_fastpath+0x12/0x17
Any ideas as to what's going on here?
I am also using this same setup. Vagrant defaults to UDP, so removing that option from the mount seems to work. I haven't tested it extensively, but I didn't run into the MySQL issues you had.
config.vm.synced_folder ".", "/home/core/sites",
  id: "core",
  nfs_version: "4",
  :nfs => true,
  :mount_options => ['nolock,noatime']
This worked for me. YMMV.

mosquitto 1.4 - once running with ACL enabled, gets "Socket error on client <unknown>, disconnecting"

Following the instructions from Jeremy Gooch (see http://goochgooch.co.uk/2014/08/01/building-mosquitto-1-4/), I installed mosquitto with websockets support on a RPi. I can sub/pub messages via the test site http://test.mosquitto.org/ws.html.
From that point, I enabled user and topic access control in mosquitto.conf for more tests, but the strange thing is that when I start mosquitto again, I see socket errors every second...
sudo /usr/local/sbin/mosquitto -v -c /etc/mosquitto/mosquitto.conf
1429857948: mosquitto version 1.4 (build date 2015-04-20 22:04:51+0800) starting
1429857948: Config loaded from /etc/mosquitto/mosquitto.conf.
1429857948: Opening ipv4 listen socket on port 1883.
1429857948: Opening ipv6 listen socket on port 1883.
1429857948: Warning: Address family not supported by protocol
1429857949: New connection from 127.0.0.1 on port 1883.
1429857949: Sending CONNACK to 127.0.0.1 (0, 5)
1429857949: Socket error on client <unknown>, disconnecting.
1429857950: New connection from 127.0.0.1 on port 1883.
1429857950: Sending CONNACK to 127.0.0.1 (0, 5)
...
I modified the config file to enable only the ACL and commented everything else out, but the socket errors are still there. The config file now looks like this:
sudo nano /etc/mosquitto/mosquitto.conf
autosave_interval 1800
persistence true
persistence_file m2.db
persistence_location /var/tmp/
connection_messages true
log_timestamp true
log_dest stderr
log_type error
log_type warning
log_type debug
allow_anonymous false
password_file /etc/mosquitto/mqtt.pw
acl_file /etc/mosquitto/mqtt.acl
port 1883
protocol mqtt
I even tried using the sample password_file and acl_file, but got the same error.
I searched on Google with no results. Could anyone help with this? Thanks.
1429857949: Sending CONNACK to 127.0.0.1 (0, 5)
A CONNACK return code of 5 means the connection was not authorised. If it works with allow_anonymous set to true, then it sounds like your client isn't sending a username, or isn't sending a correct username and password.
It looks like you have a Paho Python client running.
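If it is the Paho Python client, a minimal sketch of supplying credentials (paho-mqtt 1.x API; the host and credentials are placeholders and must match the entries in mqtt.pw):
import paho.mqtt.client as mqtt

def on_connect(client, userdata, flags, rc):
    # rc 0 = connection accepted, rc 5 = not authorised
    print("CONNACK rc =", rc)

client = mqtt.Client()
client.username_pw_set("myuser", "mypassword")
client.on_connect = on_connect
client.connect("localhost", 1883)
client.loop_forever()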
I had the same problem; my solution was that I wasn't closing the connection. Once I added client.Disconnect(), it solved my problem.
Code:
public IEnumerator ooverhere()
{
    MqttClient client;
    client = new MqttClient(urlPath, port, false, MqttSslProtocols.None, null, null);
    client.ProtocolVersion = MqttProtocolVersion.Version_3_1;
    // Connect returns the CONNACK return code; 0 means the connection was accepted
    byte code = client.Connect(Guid.NewGuid().ToString(), user, pass);
    if (code == 0)
    {
        Debug.Log("successful connection ...");
        //client.MqttMsgPublishReceived += client_recievedMessage;
        Debug.Log("your client id is: " + client.ClientId);
        client.Subscribe(new string[] { "example" }, new byte[] { 0 });
        client.Publish("Helpme", Encoding.UTF8.GetBytes("#" + 0));
        yield return client;
        client.Disconnect();
    }
}