Playframework 2.2 and Heroku: Unable to connect to non-Heroku database - mysql

I have my database hosted somewhere else, and I have this in my conf/application.conf file:
db.default.driver=com.mysql.jdbc.Driver
db.default.url="jdbc:mysql://myserver.com:3306/mydb"
db.default.user=myusername
db.default.password=mypassword
When I test it locally, it connects to the database just fine; I'm able to create and delete from the tables, etc. I changed the Heroku config:
heroku config:add DATABASE_URL=mysql://myusername:mypassword@<myserver>:3306/mydb
and my Procfile:
web: target/start -Dhttp.port=${PORT} ${JAVA_OPTS}
When I deploy it to Heroku, I get errors:
2014-06-08T08:21:35.308207+00:00 heroku[web.1]: State changed from starting to crashed
2014-06-08T08:21:35.309586+00:00 heroku[web.1]: State changed from crashed to starting
2014-06-08T08:21:33.996174+00:00 heroku[web.1]: Error R10 (Boot timeout) -> Web process failed to bind to $PORT within 60 seconds of launch
2014-06-08T08:21:33.996382+00:00 heroku[web.1]: Stopping process with SIGKILL
2014-06-08T08:21:35.293114+00:00 heroku[web.1]: Process exited with status 137
The error log is pretty long. Please let me know if you need further information. Any help is appreciated!

This is a shot in the dark, as there isn't any indication of the issue in the logs, but I've faced similar "ghost" failures with Heroku.
It seems there is very high latency when leaving their network, for whatever reason. While using an Apache Thrift RPC system on Heroku, nothing worked until I bumped the connection timeout to about 30 seconds. I saw intermittent failures with RabbitMQ (the Heroku add-on version) as well, and their support told me to bump the connection timeout in that case too.
Based on that, I would add this to your config file:
db.default.connectionTimeout=30 seconds
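For completeness, here is a minimal sketch of the resulting conf/application.conf block. The ${?DATABASE_URL} line is standard Typesafe Config substitution (it overrides the url only when the environment variable is set); whether Play 2.2 accepts the mysql:// form of DATABASE_URL rather than a jdbc:mysql:// URL is an assumption you should verify:

# conf/application.conf (sketch)
db.default.driver=com.mysql.jdbc.Driver
db.default.url="jdbc:mysql://myserver.com:3306/mydb"
db.default.url=${?DATABASE_URL}   # overrides the line above when DATABASE_URL is set
db.default.user=myusername
db.default.password=mypassword
db.default.connectionTimeout=30 seconds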

Related

macOS - Dockerize MySQL service connection refused, crashes upon use

Using the mysql (v8.0.21) image with Docker Desktop for Mac (v4.2.0, Docker Engine v20.10.10).
As soon as the service is up:
entrypoints ready
InnoDB initialization done
ready for connections
But as soon as I try to run a direct script (query), it crashes, refuses connections (also from phpMyAdmin), and restarts again.
(Screenshots: terminal log; connection refused for phpMyAdmin.)
In the logs we can see this error:
[ERROR] [MY-011947] [InnoDB] Cannot open '/var/lib/mysql/ib_buffer_pool' for reading: No such file or directory
The error visible in the log is not the issue; it has already been fixed upstream in InnoDB. Here is the reference:
https://jira.mariadb.org/browse/MDEV-11840
Note: we are fairly sure there is no error in the docker-compose file, as the same file works fine on Windows as well as Ubuntu; the issue occurs only on macOS.
Thanks @NicoHaase and @Patrick for going through the question and for the suggestions.
I found the reason for the refused connections and crashes; posting the answer so that it may be helpful for others.
It was actually due to the Docker Desktop macOS client: by default only 2 GB of memory is allocated as a resource, and our scenario required more than that.
We simply allocated more memory to match our requirements, and everything started working perfectly.
For resource allocation:
open Docker Desktop Preferences
Resources > Advanced
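If you also want to bound memory per container, docker-compose can express that too; a minimal sketch, assuming compose file format 2.x and a purely illustrative 4g figure:

# docker-compose.yml (sketch)
version: "2.4"
services:
  db:
    image: mysql:8.0.21
    mem_limit: 4g   # illustrative; must still fit within the Docker Desktop VM allocation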

Druid setup ERROR: service serving this console is not responding

I am trying to set up Druid in cluster mode.
The setup is complete and there are no errors in the server logs.
Even the Druid console launches successfully, but I get the error below:
"It appears that the service serving this console is not responding. The console will not function at the moment."
As you can see, the Apache Druid status shows it running successfully, but elsewhere I get the error message "Error: Request failed with status code 500".
Can anyone help me to resolve it?
The issue is resolved.
You need to add the config below to runtime.properties on every node, in every cluster folder:
druid.host=<respective_server_ip>
Example, for all server nodes (master, data, query):
In the master folder: coordinator-overlord
druid.host=<master_server_ip>
In the data folder: historical, middleManager
druid.host=<data_server_ip>
In the query folder: broker, router
druid.host=<query_server_ip>
Do not configure druid.host=localhost when deploying as a cluster. When druid.host is left unset, Druid falls back to InetAddress.getLocalHost().getCanonicalHostName() to determine the hostname, which is usually what you want.
Comment out the line below in the Druid configuration file conf/druid/cluster/_common/common.runtime.properties:
#druid.host=localhost
Make sure the ZooKeeper service is running; if it is not, start it.
Then restart the services on the master server.
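Put together, the per-node edits might look like this; a minimal sketch, with placeholder IPs, that also assumes ZooKeeper's four-letter-word commands (ruok) are enabled for the liveness check:

# conf/druid/cluster/_common/common.runtime.properties (shared)
#druid.host=localhost          # leave commented out in cluster mode

# runtime.properties under the data node's historical and middleManager folders
druid.host=10.0.1.12           # placeholder: this data server's IP

# quick ZooKeeper liveness check from any node
echo ruok | nc 10.0.1.10 2181  # placeholder ZK host; expect the reply "imok"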

Django does not gracefully close MySQL connection upon shutdown when run through uWSGI

I have Django 2.2.6 running under uWSGI 2.0.18, with 6 pre-forking worker processes. Each of these has its own MySQL connection socket (600-second CONN_MAX_AGE). Everything works fine, but when a worker process is recycled or shut down, the uWSGI log ironically says:
Gracefully killing worker 6 (pid: 101)...
But MySQL says:
2020-10-22T10:15:35.923061Z 8 [Note] Aborted connection 8 to db: 'xxxx' user: 'xxxx' host: '172.22.0.5' (Got an error reading communication packets)
It doesn't hurt anything, but the MySQL error log gets spammed full of these, as I let uWSGI recycle the workers every 10 minutes and I have multiple servers.
It would be good if Django could catch the uWSGI worker process's "graceful shutdown" and close the MySQL socket before dying. Maybe it does and I'm configuring this setup wrong. Maybe it can't. I'll dig in myself, but thought I'd ask as well.
If CONN_MAX_AGE is set to a positive value, Django creates persistent connections, which get cleaned up at request start and request end. Cleanup here means a connection is closed if it is invalid, has had too many errors, or was opened more than CONN_MAX_AGE seconds ago.
Otherwise, connections are closed when the request closes. So this problem occurs, by design, when you use persistent connections and do periodic uWSGI reloads.
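For reference, persistent connections come from the CONN_MAX_AGE key in settings.py; a minimal sketch using the 600-second value from the question (the other keys are placeholders):

# settings.py (sketch)
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'mydb',        # placeholder
        'HOST': '127.0.0.1',   # placeholder
        'CONN_MAX_AGE': 600,   # seconds; 0 closes the connection after every request
    }
}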
There is a bit of code that instructs uWSGI to shut down all of its sockets, but I'm unsure whether this is communicated to Django, or whether uWSGI uses a more brutal method that causes the aborts. It shuts down all uWSGI-owned sockets, so from the looks of it, the Unix sockets and the connections with the web server. There's no hook to be called just before or during a reload, either.
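One possible workaround, as an untested sketch: uWSGI's Python API exposes a uwsgi.atexit hook that runs in a worker just before it exits, and Django can close all of a worker's connections explicitly, which might turn the aborts into clean disconnects:

# wsgi.py (sketch)
from django.core.wsgi import get_wsgi_application
from django.db import connections

application = get_wsgi_application()

try:
    import uwsgi  # importable only when running under uWSGI

    def _close_db_connections():
        # Close this worker's DB connections so MySQL sees a clean
        # disconnect instead of "Got an error reading communication packets".
        connections.close_all()

    uwsgi.atexit = _close_db_connections
except ImportError:
    pass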
Perhaps this gets you on your way. :)

Openshift Backup - Server Not Reachable

I had an OpenShift 2 starter account where my application was running.
OpenShift 2 has been shut down, and now I have received an email telling me to migrate to 3.
But I don't have a backup of the application.
I am getting the following errors.
Running rhc save-snapshot myapp gives this error:
Error in trying to save snapshot. You can try to save manually by running:
ssh 54f03dbd4382ec9101000159@myapp-myapps.rhcloud.com 'snapshot' > myapp.tar.gz
If I try to ssh to the application, the connection is closed:
ssh 54f03dbd4382ec9101000159@myapp-myapps.rhcloud.com
Connection to myapp-myapps.rhcloud.com closed.
If I try to restart the application from the console, I get this error:
could not open session
could not open session
could not open session
Failed to execute: 'control restart' for /var/lib/openshift/54f03dbd4382ec9101000159/mysql
Failed to execute: 'control restart' for /var/lib/openshift/54f03dbd4382ec9101000159/phpmyadmin
Failed to execute: 'control restart' for /var/lib/openshift/54f03dbd4382ec9101000159/php
EDIT: I get the following error in the browser when I try to open my site:
Proxy Error
The proxy server received an invalid response from an upstream server.
The proxy server could not handle the request GET /.
Reason: Error reading from remote server
Apache/2.2.15 (Red Hat) Server at www.mydomain.com Port 80
Need your suggestions. Thanks.
There is a new post on the OpenShift blog:
Updated October 3, 2017
We understand how important your data is, and
we have made a one-time exception to allow you to access your
OpenShift Online v2 data. You have until October 5, 2017 at 4:00 PM
UTC to perform a backup of your application. If you have not used it
before, you can download the rhc tool here.
So you can perform your backup until 2017/10/05.
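For reference, the backup commands look roughly like this (the app name and UUID are placeholders; rhc snapshot save is the documented form of the command):

rhc snapshot save myapp   # writes myapp.tar.gz to the current directory
ssh <app-uuid>@myapp-myapps.rhcloud.com 'snapshot' > myapp.tar.gz   # manual fallback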
Reading around (I don't remember exactly where), I found that paid accounts keep working until December 31st, so I upgraded to Bronze and was able to restart the service and back it up. I don't know whether that was because of the upgrade or because some issue was fixed.

Rails server hangs, rake db:migrate hangs; appears to be hanging on the DB server connection, but I can connect to the DB server without a problem

I'm switching a project from Rails 2 to Rails 3. I run:
rails server
The server starts up without errors:
=> Booting WEBrick
=> Rails 3.0.7 application starting in development on http://0.0.0.0:3000
=> Call with -d to detach
=> Ctrl-C to shutdown server
loaded openid
When I connect to localhost:3000, the server registers a GET request in the log but never responds; the HTTP request just hangs open. Inspecting the process shows that it is connecting to the localhost DB, switching ports every 2 seconds.
development.log shows "Started GET "/" for 127.0.0.1 at Mon Jun 06 12:44:08 -0400 2011" but nothing else.
The same issue occurs if I attempt to run any rake task.
Other people in my office are running the same code on Rails 3 without problems (I tried it with a fresh git clone).
I can connect to the localhost DB without any problems.
The problem did not occur when running Rails 2.
Any ideas about where my problem is, or how I can debug (secret log files, places to sneak in a debugger to see what is going on, etc.)?
EDIT: Problem magically went away, how odd.
You say the app hangs. When you kill it, it should show a backtrace of where it was just before. This should give you a clue where to look for the problem.
Further investigation of this issue determined that database.yml was incorrectly configured (the wrong IP address was being specified for localhost).
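For anyone hitting the same thing, the fix amounts to correcting the host entry in config/database.yml; a minimal sketch with placeholder values:

# config/database.yml (sketch)
development:
  adapter: mysql2
  database: myapp_development   # placeholder
  host: 127.0.0.1               # the wrong IP here was what caused the hang
  username: root                # placeholder
  password: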