I still don't know whether the issue is with Docker networking, Node, or the connection from Node to MySQL.
But I have a Docker container running Express Gateway for API management, and every once in a while it starts returning "Operation timed out".
The error is coming from Node.js, but when it happens:
I can't see anything in the logs of the container.
Running tcpdump on the host shows a call being made to the API in the Docker container, but it returns a 500 response (when everything is running correctly I can see, right after it, the call to port 3306 to connect to the database).
Running tcpdump from inside the Docker container returns absolutely nothing (when things work correctly I can see the calls).
Calls that don't require a database connection work correctly! But I still can't see their logs in the container, nor their calls in tcpdump.
It's as if the host is calling another container, but I searched all volumes and images and there's no duplicate.
I tried to check the following:
Resources on the same machine
Resources on the database machine
tcpdump with Wireshark on both the host and the container
Adding connection pooling to Sequelize, in case a database connection is occasionally causing the block (see the sketch after this list)
Checking all OAuth2 routes in case one of them is redirecting to a localhost server or anything like that
Literally adding logs everywhere just to see something when this happens, but in vain
telnet from the host to localhost on the external port and to 172.17.0.2 on the internal port -> slight difference when I do it against localhost: after a while I receive "Connection closed by foreign host"
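Roughly the pooling configuration I added to Sequelize (the database name, credentials, host, and limits here are placeholders, not my real values):

var Sequelize = require('sequelize');

var sequelize = new Sequelize('mydb', 'dbuser', 'dbpassword', {
  host: 'db-host',
  port: 3306,
  dialect: 'mysql',
  pool: {
    max: 5,         // maximum number of open connections
    min: 0,
    acquire: 30000, // ms to wait for a free connection before timing out
    idle: 10000     // ms a connection may sit idle before being released
  }
});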
I don't know if it's normal for a Docker container to hang like this or if an image was not correctly deleted, but things simply worked when I created the container with another name.
Going to be quick and straightforward, as I'm sure I'm just overlooking something simple.
I have 3 load-balanced app servers and therefore a separate database server. I set up my env properly to point to the DB server's IP instead of 127.0.0.1, and entered the correct password and username for a user I created. However, when I try to deploy the servers, it fails with a 2002 error saying the connection timed out. I've tried looking through all the other threads with similar issues, but none seem to really be having the same problem.
Example of my env (except for username and password, obviously):
DB_CONNECTION=mysql
DB_HOST=3.19.111.11 -- External Database IP
DB_PORT=3306
DB_DATABASE=xxx -- Correct database name (as seen in Forge dashboard on DB server)
DB_USERNAME=xxx -- Correct username (just set up this user)
DB_PASSWORD="xyz123" -- Correct password for aforesaid user
I can connect to the database server via TablePlus, so the issue is localized to something I'm doing on the app servers themselves, but I can't see anything wrong.
As additional information, I have set up the individual servers' networks to allow them to connect to the database server and vice versa, although I'm not sure that was necessary.
Adding an APP_URL env var set to the DB's IP and changing DB_HOST to 127.0.0.1 changes the error to Connection Refused, though I'm not sure whether that's better.
Turns out the issue was that despite opening up the network in Forge's control panel, the servers weren't actually able to reach each other. Going into the EC2 dashboard and creating a security group rule that opened port 3306 to each app server's private IP solved the issue.
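For anyone doing the same thing from the CLI instead of the EC2 console, the rule looks roughly like this (the security group ID and private IP below are placeholders for the DB server's group and one app server's private IP; repeat once per app server):

aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 3306 \
    --cidr 10.0.1.15/32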
I am using the Couchbase Server 6.0.2 image from Red Hat
https://access.redhat.com/containers/?tab=overview&get-method=registry-tokens#/registry.connect.redhat.com/couchbase/server
in OpenShift.
The pod is running but does not respond on http://localhost:8091. The logs show the error shown below.
I have 3 questions:
Why is whoami failing in the entrypoint?
Why isn't the server responding on port 8091?
Does the couchbase server image require root permissions?
It seems the couchbase/server image expects to be run as root and then creates its own couchbase user and group.
At the end it runs an entrypoint script which checks whether the user running the whole thing is actually the couchbase user, by executing the whoami command.
This is not the case if you just run it in OpenShift, as the container will be run as some "random" unprivileged user.
This leads to a set of consecutive failures:
Here you will find the evaluation that is done in entrypoint.sh.
Now the whoami command fails since there is no actual user, just said random UID. That failure leaves the first part of the evaluation blank, which results in a failure.
This is a bug in the couchbase/server image, and as such you should, if time allows, contribute to a fix by opening an issue against that repo.
Right now I am connecting to a cluster endpoint that I have set up for an Aurora MySQL-compatible cluster, and after I do a "failover" from the AWS console, my web application is unable to properly connect to the DB instance that should be writable.
My setup is like this:
Java web app (Tomcat 8) with HikariCP as the connection pool and Connector/J as the MySQL driver. I am evaluating Aurora MySQL to see if it will satisfy some of the needs the application has. The web app sits on an EC2 instance that is in the same VPC and security group as the Aurora MySQL cluster. I am connecting through the cluster endpoint to get to the database.
After a failover, I would expect HikariCP to break connections (it does) and then attempt to reconnect (it does); however, the application must be connecting to the wrong server, because any time a write hits the database, a SQLException is thrown that says:
The MySQL server is running with the --read-only option so it cannot execute this statement
What is the solution here? Should I rework my code to flush DNS after all connections go down, or after I start receiving this error, and then try to re-initiate connections after that? That doesn't seem right...
I don't know why I keep asking questions if I'm just going to answer them myself (I should really be more patient), but here's an answer in case anyone stumbles upon this in a Google search:
RDS uses DNS changes behind the cluster endpoint to make failover look "seamless". Since the IP behind the hostname can change, if there is any sort of DNS caching going on, you can see pretty quickly how a change won't be reflected. Here's a page from AWS' docs that goes into it a bit more: https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/java-dg-jvm-ttl.html
To resolve my issue, I went into the JVM's security file and changed the DNS cache TTL to 0, just to verify that this was really what was happening. Seems correct. Now I just need to figure out how to do it properly...
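For the record, the setting lives in the JVM's java.security file; assuming it is the same property the AWS page above describes, the line looks like this:

networkaddress.cache.ttl=0

Setting it to 0 disables DNS caching entirely, which was only for verification; AWS's guidance is to use a small positive TTL (for example 60 seconds) instead, and the same property can also be set programmatically at JVM startup before any connections are opened.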
I have two separate Ubuntu VMs running on VirtualBox. I am getting the error "Remote DB Error: connect ECONNREFUSED". Here is some background information:
When co-located on the same VM, Node.js and MySQL work fine together. The problem only started after moving MySQL to its own VM.
VMs set up in VirtualBox as Internal Network. They have their own static IPs, and the two VMs can ping each other's IP addresses fine.
When I first got the error, the indication was that Node.js was trying to connect on port 3306 ("Error: connect ECONNREFUSED 192.168.1.69:3306"). Then I added the port option when creating the connection object (port: '3306'), but this did not fix the problem.
Next, I saw a thread that suggested checking which port MySQL is listening on by running (netstat -ln | grep mysql), and the result I got back was "unix 2 [ ACC ] STREAM LISTENING 1831 /var/run/mysqld/mysqld.sock". So, since it said it was listening on 1831, I switched the port in my connection creation code to the below:
var connection = mysql.createConnection({
host : '192.168.1.69',
port : '1831',
user : 'root',
password : 'vinson',
database : 'pilot',
stringifyObjects: 'true'
});
However, I was still getting the same error.
UPDATE TO MY POST:
Since my first posting of this... eh.. post, I have learned some things, and in the process made some incremental progress:
By default, MySQL only listens to localhost traffic. In order to have it listen to external traffic you have to change the bind address in its my.cnf file. So I did that (roughly the change shown below) and then restarted MySQL.
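For reference, the my.cnf change was along these lines (the exact file location can differ between MySQL versions; either comment the line out or bind to all interfaces):

[mysqld]
# bind-address = 127.0.0.1
bind-address = 0.0.0.0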
Once I did that, I ran "netstat -tlnp", and a new line was displayed indicating something (definitely MySQL) listening on 0.0.0.0:3306, which was not there before I made the config change and restarted MySQL.
Then, I executed a query again from the NodeJS VM, and I got a different error (hey, I'll take this as a sign of incremental progress):
" Error: Cannot enqueue Query after fatal error."
So that is where I am now. As before, I would be grateful for any ideas as to what I might try next. Thanks for any help!
Ok, I figured out the rest of my issues. The error above (Error: Cannot enqueue Query after fatal error.) was due to the fact that I had not restarted my Node.js server, so it was still holding the old connection object. Once I restarted the Node.js server, I got a new error:
ER_HOST_NOT_PRIVILEGED
The reason I was getting this error was simply that, for a given user, MySQL must know the host from which that user is connecting (a valid connection credential for MySQL is the combination of user/password/host). Once I updated my user account to allow connections from the appropriate remote hosts, everything worked fine!
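For anyone who lands here on the "Cannot enqueue Query after fatal error" message: restarting the Node.js process was enough in my case, but a sketch of how the stale connection object could be replaced automatically looks roughly like this (handleDisconnect is just an illustrative helper name and the credentials are placeholders):

var mysql = require('mysql');

var config = {
  host: '192.168.1.69',
  port: 3306,
  user: 'dbuser',
  password: 'secret',
  database: 'pilot'
};

var connection;

function handleDisconnect() {
  // After a fatal error the old connection object can no longer enqueue queries,
  // so a brand new connection has to be created.
  connection = mysql.createConnection(config);

  connection.connect(function (err) {
    if (err) {
      console.error('Error connecting to MySQL:', err);
      setTimeout(handleDisconnect, 2000); // retry after a short delay
    }
  });

  connection.on('error', function (err) {
    if (err.fatal) {
      handleDisconnect(); // e.g. PROTOCOL_CONNECTION_LOST
    } else {
      throw err;
    }
  });
}

handleDisconnect();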
I use WampServer (Apache, PHP, MySQL) and have no problems when some kind of network adapter (wireless or LAN) is connected (i.e. Local Area Connection has status connected), even if I am not connected to the internet (for example, when I am connected to the router but the router is not connected to the internet).
When there is no network connection, I get a PHP error like "MySQL could not connect to 127.0.0.1 on port 3306".
Interestingly, telnet 127.0.0.1 3306 also says that it could not connect to the port, even though the server and MySQL are the same ones that run fine whenever some kind of local area connection is connected.
So I turned off all kinds of firewalls (antivirus and Windows), but it made no difference at all. That is why this issue is quite puzzling.
Things I have already tried (will update this list along the way):
The skip-networking directive in my.ini.
You could modify your MySQL server and client configuration to connect to one another using a named pipe instead of a TCP/IP loopback connection. That way, the current state of the network connection should have less impact.
To do so, start the server with --enable-named-pipe or the corresponding config file setting, and run the client with --pipe or --protocol=PIPE. Similar configuration should be available for your PHP connector as well; it may depend on which library you use there, and whether or not it picks up the mentioned configuration settings from the my.ini file (settings go without the leading -- there).
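For example, the option-file equivalents would look roughly like this in my.ini (section placement may vary with your WampServer layout):

[mysqld]
enable-named-pipe

[client]
protocol=PIPE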