WARNING neutron_lbaas.services.loadbalancer.drivers.haproxy.namespace_driver [-] Stats socket not found for pool - namespaces

I have the following problem with OpenStack Liberty LBaaS. When I create a new pool, this error starts to appear:
WARNING neutron_lbaas.services.loadbalancer.drivers.haproxy.namespace_driver [-] Stats socket not found for pool
I have deployed the controller and compute services on the same node, and I use LBaaS (v1), not LBaaS v2, with the Linux bridge driver.
Can you help with that? I don't know what is wrong.

I had the same problem. For me the messages stopped as soon as I assigned a VIP to the LBaaS.
The stats socket lives in a directory whose path contains the ID:
/var/lib/neutron/lbaas/<lbaas-id>
But this directory is only created the moment you assign a VIP.
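If you want to confirm this on the network node, you can look for the pool's state directory and, once it exists, query the haproxy stats socket directly. A minimal sketch; the /var/lib/neutron/lbaas base path and the "sock" file name are assumptions based on the default state_path, so adjust them if your deployment differs:

import os
import socket

pool_id = "REPLACE-WITH-POOL-ID"
sock_path = os.path.join("/var/lib/neutron/lbaas", pool_id, "sock")

if not os.path.exists(sock_path):
    print("No stats socket yet - has a VIP been assigned to this pool?")
else:
    # Query the haproxy stats socket with the standard "show stat" command.
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(b"show stat\n")
        print(s.recv(65536).decode())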

Related

macOS - Dockerize MySQL service connection refused, crashes upon use

Using the mysql (v8.0.21) image with Docker Desktop for Mac (v4.2.0, Docker Engine v20.10.10).
As soon as the service is up:
entrypoints ready
innoDB initialization done
ready for connection
But as soon as I try to run a direct script (query), it crashes, refuses the connection (also from phpMyAdmin), and restarts again.
(screenshots: terminal log; connection refused for phpMyAdmin)
In the logs we can see an error:
[ERROR] [MY-011947] [InnoDB] Cannot open '/var/lib/mysql/ib_buffer_pool' for reading: No such file or directory
The error we see in the log is not the issue, as it has already been fixed in InnoDB; here is the reference:
https://jira.mariadb.org/browse/MDEV-11840
Note: we are pretty sure there is no error in the docker-compose file, as the same file works fine on Windows as well as Ubuntu; the issue only occurs on macOS.
Thanks @NicoHaase and @Patrick for going through the question and for the suggestions.
I found the reason for the refused connections and the crashes; posting the answer so that it may be helpful for others.
It was actually due to the Docker Desktop macOS client: by default only 2 GB of memory is allocated as a resource, and our scenario required more than that.
We simply allocated more memory according to our requirement, and it started working perfectly fine (a quick way to verify the allocation is sketched after the steps below).
For resource allocation:
open Docker Desktop preferences
Resources > Advanced
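To check what the daemon actually has after moving the slider, you can ask it directly. A minimal sketch with the Docker SDK for Python (assuming the docker package is installed):

import docker

client = docker.from_env()
mem_total = client.info()["MemTotal"]  # total memory visible to the Docker daemon, in bytes
print(f"Docker daemon memory: {mem_total / 1024**3:.1f} GiB")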

Django does not gracefully close MySQL connection upon shutdown when run through uWSGI

I have Django 2.2.6 running under uWSGI 2.0.18, with 6 pre-forking worker processes. Each of these has its own MySQL connection socket (600-second CONN_MAX_AGE). Everything works fine, but when a worker process is recycled or shut down, the uWSGI log ironically says:
Gracefully killing worker 6 (pid: 101)...
But MySQL says:
2020-10-22T10:15:35.923061Z 8 [Note] Aborted connection 8 to db: 'xxxx' user: 'xxxx' host: '172.22.0.5' (Got an error reading communication packets)
It doesn't hurt anything, but the MySQL error log gets spammed full of these, since I let uWSGI recycle the workers every 10 minutes and I have multiple servers.
It would be good if Django could catch the uWSGI worker process "graceful shutdown" and close the MySQL socket before dying. Maybe it does and I'm configuring this setup wrong. Maybe it can't. I'll dig in myself, but thought I'd ask as well.
If CONN_MAX_AGE is set to a positive value, Django creates persistent connections, which get cleaned up at request start and request end. Clean-up here means they are closed if they are invalid, have had too many errors, or were opened more than CONN_MAX_AGE seconds ago.
Otherwise, connections are closed when the request ends. So this problem occurs, by design, when you use persistent connections and do periodic uWSGI reloads.
There is a bit of code that instructs uWSGI to shut down all sockets, but I'm unsure whether this is communicated to Django, or whether uWSGI uses a more brutal method and that is what causes the aborts. It shuts down all uWSGI-owned sockets, so from the looks of it, the Unix sockets and the connections with the web server. There's no hook either that is called just before or during a reload.
Perhaps this gets you on your way. :)
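That said, the uwsgi Python module exposes an atexit attribute that can be set to a callable run just before a worker shuts down, which is one place to close Django's connections yourself. A minimal sketch, untested against this exact setup (placing it in wsgi.py is an assumption):

from django.db import connections

try:
    import uwsgi  # only importable when running under uWSGI

    def _close_db_connections():
        # Close all of Django's open database connections so MySQL sees a
        # clean disconnect instead of logging an aborted connection.
        connections.close_all()

    uwsgi.atexit = _close_db_connections  # called just before worker shutdown/reload
except ImportError:
    # Not running under uWSGI (e.g. manage.py runserver); nothing to hook.
    pass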

Unable to connect to the binlog client in NiFi

I'm building a NiFi dataflow, and I need to get the data changes from a MySQL database, so I want to use the CaptureChangeMySQL processor to do that.
I get the following error when I run the CaptureChangeMySQL processor, and I don't see what's causing it:
Failed to process session due to Could not connect binlog client to any of the specified hosts due to: BinaryLogClient was unable to connect in 10000ms: org.apache.nifi.processor.exception.ProcessException: Could not connect binlog client to any of the specified hosts due to: BinaryLogClient was unable to connect in 10000ms
I have the following controller services enabled:
DistributedMapCacheClientService
DistributedMapCacheServer
But I'm not sure if they are properly configured:
(screenshots: DistributedMapCacheServer properties; DistributedMapCacheClientService properties)
In MySQL, I have enabled the log_bin variable (by default it wasn't). I checked, and binlog files are indeed created when data changes.
So I think the issue is with the controller services and how they connect, but it's not clear to me.
I searched for tutorials about how to use this NiFi processor but could not find how to fix this error. I looked mainly at this one: https://community.hortonworks.com/articles/113941/change-data-capture-cdc-with-apache-nifi-version-1-1.html but it did not help me.
Has anyone already used this processor to do CDC?
Thank you in advance.
I found what was wrong: I was trying to connect to the wrong port for the MySQL Host of the CaptureChangeMySQL processor. :x
For others who are still facing similar issues, check whether the server's firewall is blocking the connection. Allow MySQL port 3306 in your firewall rules.
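A quick reachability test from the NiFi host can help tell the two cases apart (wrong port vs. firewall). A minimal sketch; the hostname and port are placeholders for whatever is configured in the processor:

import socket

try:
    # Attempt a bare TCP connection to the MySQL/binlog endpoint.
    with socket.create_connection(("mysql.example.com", 3306), timeout=5):
        print("TCP connection OK - the port is open and reachable")
except OSError as exc:
    print(f"Cannot reach the MySQL port: {exc}")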

Playframework 2.2 and Heroku: Unable to connect to non-Heroku database

I have my database hosted somewhere else and I have this in my conf file:
db.default.driver= com.mysql.jdbc.Driver
db.default.url="jdbc:mysql://myserver.com:3306/mydb"
db.default.user=myusername
db.default.password=mypassword
When I test it locally, it connects to the database just fine; I'm able to create/delete from the tables, etc. I changed the Heroku config:
heroku config:add DATABASE_URL=mysql://myusername:mypassword@<myserver>:3306/mydb
and the Procfile:
web: target/start -Dhttp.port=${PORT} ${JAVA_OPTS}
When I deploy it to Heroku, I get errors:
2014-06-08T08:21:35.308207+00:00 heroku[web.1]: State changed from starting to crashed
2014-06-08T08:21:35.309586+00:00 heroku[web.1]: State changed from crashed to starting
2014-06-08T08:21:33.996174+00:00 heroku[web.1]: Error R10 (Boot timeout) -> Web process failed to bind to $PORT within 60 seconds of launch
2014-06-08T08:21:33.996382+00:00 heroku[web.1]: Stopping process with SIGKILL
2014-06-08T08:21:35.293114+00:00 heroku[web.1]: Process exited with status 137
The error log is pretty long. Please let me know if you need further information. Any help is appreciated!
This is a shot in the dark, as there isn't any indication of the issue in the logs, but I've faced similar "ghost" failures with Heroku.
It seems that there is very high latency when leaving their network, for whatever reason. While using an Apache Thrift RPC system on Heroku, nothing worked until I bumped the connection timeout to about 30 seconds. I also saw intermittent failures with RabbitMQ (the Heroku add-on version), and their support told me to bump the connection timeout in that case as well.
Based on that, I would add this to your config file:
db.default.connectionTimeout=30 seconds
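To test the latency theory, you could time a bare TCP connect to the external MySQL host from the dyno and compare it with the same check run locally. A minimal sketch; the host name is a placeholder:

import socket
import time

start = time.monotonic()
try:
    with socket.create_connection(("myserver.com", 3306), timeout=30):
        print(f"connected in {time.monotonic() - start:.2f}s")
except OSError as exc:
    print(f"failed after {time.monotonic() - start:.2f}s: {exc}")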

(JPA/Toplink) Network error IOException: Address already in use: connect

I have a JPA project which used to work. This month, I added some data to my database. When I run the usual job (the one I used to run in preceding months), I get this error:
Exception [TOPLINK-4002] (Oracle TopLink Essentials - 2.0.1 (Build b09d-fcs (12/06/2007))): oracle.toplink.essentials.exceptions.DatabaseException
Internal Exception: java.sql.SQLException: Network error IOException: Address already in use: connect
Error Code: 0
I checked my LocalPersistenceFacade, which contains most of the methods I'm calling, by printing a counter, and I get the exact same number of opened and closed connections there: 457. And then my job crashes. Normally, it should go up to 601, not 457.
On the database side, there is no information pointing to a possible crash. Everything seems correct, but my Java code says otherwise.
Does anyone have any idea, please?
Regards,
Jean
My understanding is that you are opening/closing a connection for each row, and the problem you are facing looks like the one described on this page:
Possible Causes
When running a large volume of data through maps that have multiple functions, Windows does not close connections fast enough, which causes the network I/O exception.
Recommendations
Modify the following two values in the Windows registry:
The first modifies the range of ports that Windows uses to open connections. By default it only allows up to port 5000. By modifying this value, Windows will be able to open up more ports before having to recycle back to the beginning. Every connection uses a port, so it starts at 1025 and goes up to this value. When it reaches the max value it goes back to 1025 and tries to open up that port again.
System Key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters
Value Name: MaxUserPort
Data Type: REG_DWORD
Value Data: 5000-65534
The second will "release" closed ports faster. By default Windows leaves a port in the TIME_WAIT state for 240 seconds. This can cause problems if MaxUserPort is set such that a new connection reuses an "older" port that has not been removed from the TIME_WAIT state yet. By decreasing this value, you allow the connections to be released faster.
System Key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters
Value Name: TcpTimedWaitDelay
Data Type: REG_DWORD
Value Data: 30-300
The symptoms and the change introduced (more rows) match perfectly. However, while the suggested "recommendation" may solve the problem, my recommendation would be to use connection pooling (a standalone connection pool like c3p0 or DBCP). This would IMO solve the problem and increase performance.
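Just to illustrate the idea behind that recommendation (open a small, fixed set of connections once and reuse them, instead of opening and closing one per row, which is what c3p0/DBCP do for you on the Java side), here is a rough sketch in Python; open_connection() is a hypothetical factory standing in for whatever currently opens a connection:

import queue

class SimplePool:
    def __init__(self, factory, size=5):
        self._pool = queue.Queue()
        for _ in range(size):
            self._pool.put(factory())  # open a fixed number of connections once

    def acquire(self):
        return self._pool.get()        # borrow an existing connection

    def release(self, conn):
        self._pool.put(conn)           # return it instead of closing it

# pool = SimplePool(open_connection)
# Per-row work then borrows and returns the same few connections, so the OS
# never has to churn through thousands of short-lived ports stuck in TIME_WAIT.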