Upstart node.js mysql connection

I am new to node.js, so in an attempt to learn it I tried to run this example:
http://markshust.com/2013/11/07/creating-nodejs-server-client-socket-io-mysql
It has no problems and everything works fine on its own. But then I wanted to run this server through forever and upstart, and a strange problem occurred: when I first reboot the system, upstart runs the server fine, but the server script doesn't list the mysql entries. When I then kill the server and forever starts it up again, everything works fine. Could you help me with this? Here is my upstart conf.
#!upstart
description "Forever and Node.js"
start on (starting mysql)
stop on shutdown
expect fork
env NODE_BIN_DIR="***"
env NODE_PATH="***"
env APPLICATION_DIRECTORY="***"
env APPLICATION_START="***"
env LOG="***"
script
PATH=$NODE_BIN_DIR:$PATH
exec forever --sourceDir $APPLICATION_DIRECTORY -a -l $LOG --minUptime 5000 --spinSleepTime 2000 start $APPLICATION_START
end script
pre-stop script
PATH=$NODE_BIN_DIR:$PATH
exec forever stop $APPLICATION_START >> $LOG
end script

Replacing
start on (starting mysql)
with
start on (started mysql)
fixed the problem. Thank you all for this priceless community.
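The distinction is that upstart emits starting mysql before the mysql job itself has run, so the node server raced mysql to its socket, while started mysql fires only once mysql is up. The corrected dependency stanzas would read as follows (the stopping clause is my addition, not part of the conf above, so the node job is also stopped before mysql goes down):
start on (started mysql)
stop on (shutdown or stopping mysql)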

Related

Install MySQL on Windows Docker Image

Has anyone had success adding MySQL to a Windows docker image? I tried two different ways of deploying MySQL to my image.
First, I tried using the msi from MySQL in non-interactive mode, as described in "While Installing Mysql.msi through powershell getting below error". It does not work at all in a container.
Then I tried extracting the zip to set things up manually using the mysqld commands, following this script:
https://github.com/Somesh-K/Automation-Mysql/blob/main/1.mysql_setup_v2.ps1
That does nothing at all. Literally nothing; the executables behave as if they just run and exit (no output, nothing).
Something is very weird about all of this.
Yes, I know that there's a perfectly good MySQL Linux container published by Oracle to Docker Hub. It works. The problem is that running a Windows container and a Linux container that need to interact creates a really unnecessary frustration for the user in terms of networking between the two.
Using a different back-end (like SQL Server) for our application is not feasible, and using .NET Core instead of .NET Framework is not feasible. To simplify, I'd like to just install MySQL on our Windows-based webserver docker image. This seems doable using the two methods described in the links above, but as noted, it does not work and there's very odd behavior from the MySQL binaries when they run in the container.
Here's an example of the odd behavior:
Install Docker Desktop for Windows
Download the Win32 install zip from MySQL and place it in C:\mydata
https://dev.mysql.com/downloads/mysql/
Pull down the ASP.NET image from Docker Hub, run it, and open up PowerShell:
# docker pull mcr.microsoft.com/dotnet/framework/aspnet:4.8
# docker run --name testweb -v C:\mydata:C:\mydata:R -d mcr.microsoft.com/dotnet/framework/aspnet:4.8
# docker exec -it testweb powershell
C:\ > cd C:\mydata
C:\mydata\ > Expand-Archive -Path .\mysql-5.7.36-winx64.zip .
C:\mydata\ > cd .\mysql-5.7.36-winx64\bin
C:\mydata\mysql-5.7.36-winx64\bin\ > .\mysql.exe --version
[zero output, acts like it's an empty executable]
Results
None of the executables/binaries in the extracted mysql bin directory do anything at all on the container. They behave as if someone wrote an executable that just exits. I thought I had a bad install zip, so I extracted the same zip on my regular Windows 10 workstation. There, all of the binaries at least return errors or do something.
This is super odd. Any help would be appreciated.
Downloading this executable and putting it into my container seemed to do the trick:
https://download.microsoft.com/download/2/E/6/2E61CFA4-993B-4DD4-91DA-3737CD5CD6E3/vcredist_x64.exe
I placed it on the container and ran it quietly:
C:\vcredist.exe /Q
After doing this, the executables started working:
C:\ > cmd.exe /C "C:\mysql\bin\mysqld" --initialize-insecure
C:\ > cmd.exe /C "C:\mysql\bin\mysqld" --install
C:\ > Start-Service mysql
C:\ > cmd.exe /C "C:\mysql\bin\mysql" -u root
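The root cause is that the MySQL Windows binaries depend on the Visual C++ runtime, which the aspnet base image doesn't include; a Windows process that can't load a required DLL can exit silently with no console output, which matches the behavior above. To bake the fix into the image instead of patching a running container, a Dockerfile along these lines should work (a sketch under assumptions: the C:/setup path is arbitrary, and vcredist_x64.exe and the MySQL zip are assumed to already be in the build context):
# escape=`
FROM mcr.microsoft.com/dotnet/framework/aspnet:4.8
SHELL ["powershell", "-Command"]
COPY vcredist_x64.exe mysql-5.7.36-winx64.zip C:/setup/
# Install the VC++ runtime the MySQL binaries need, then unpack MySQL
RUN Start-Process C:/setup/vcredist_x64.exe -ArgumentList '/Q' -Wait; `
    Expand-Archive C:/setup/mysql-5.7.36-winx64.zip C:/; `
    Rename-Item C:/mysql-5.7.36-winx64 mysql
# Initialize the data directory and register the Windows service
RUN C:/mysql/bin/mysqld --initialize-insecure; `
    C:/mysql/bin/mysqld --install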

PM2 keeps getting killed every 90 seconds on CentOS 8

I just installed CentOS 8 and added Node.js (tried v12 and v14), and then I installed pm2 using npm install pm2@latest (so at the time of posting it uses v4.4.0). I did try an older version (v3.5.0), but it does the exact same thing.
After pm2 was installed, I ran the command "pm2 startup".
After a restart, pm2 does start, but it gets killed after 90 seconds and then restarts, giving this message:
"pm2 has been killed by signal, dumping process list before exit..."
First I thought it was because of my app (the one that pm2 is supposed to manage), but I removed it from pm2, so it's practically empty, and it still does the same thing.
Running the following command as root worked for me:
pm2 update
I had the same issue and tried several solutions online, but none worked for me.
However, completely removing pm2, restarting the server, and reinstalling pm2 did it for me.
1- Stop and remove pm2
pm2 kill
sudo npm remove pm2 -g
2- Restart the server
sudo reboot
3- Log in again, then reinstall pm2
sudo npm install -g pm2
I did not disable SELinux (I think it's not safe to disable it), but the following method helped me:
Edit the file /etc/systemd/system/pm2-root.service
Add a new line: Environment=PM2_PID_FILE_PATH=/run/pm2.pid
And replace PIDFile=/root/.pm2/pm2.pid with PIDFile=/run/pm2.pid
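Put together, the edited pm2-root.service should end up with something like this in its [Service] section (an excerpt only; the rest of the file stays as pm2 startup generated it), followed by systemctl daemon-reload so systemd picks up the change:
[Service]
Environment=PM2_PID_FILE_PATH=/run/pm2.pid
PIDFile=/run/pm2.pid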
Versions:
CentOS 8.3.2011
Node.js 14.16.0
NPM 7.7.5
PM2 4.5.5
Original answer. Thanks Alec!
Later update. For those who are facing the same issue: it's related to SELinux. Known workarounds (the ones I discovered):
Disabling SELinux entirely (obviously not recommended)
Going to /etc/systemd/system/pm2-root.service and commenting out the PIDFile=... line (add a # in front of that line)
Audit and trace, using the following commands:
# dnf install policycoreutils-python-utils setroubleshoot-server -y
# journalctl -f
At this point, you should see the solution in the output (the log);
it should be something like:
# ausearch -c 'systemd' --raw | audit2allow -M my-systemd
# semodule -i my-systemd.pp
You need to do the last step (the ausearch and semodule commands) twice. I did it once, restarted the machine, and noticed the same issue after 90 seconds. If you read the log carefully, you will notice that the issue seems to be reported twice (it looks the same); probably two things are trying to write to that file (pm2-root.service).
I'm still waiting for the perfect solution (from someone who really knows how to fix this properly), but for those who have this issue, any of these options seems to work just fine.
I've had this problem (on Debian) when for some reason two "PM2 God Daemon" processes (not threads) were launched, so they were conflicting with each other.
Killing one of them solved the issue.

ElasticBeanstalk CLI deploy command succeeds silently, but does not deploy

I'm having an odd issue with the awsebcli package when running it on GitLab's pipeline CI system.
When I run eb deploy locally, the command succeeds (or fails) much as expected. When I run it as part of the CD scripts I've written, it runs successfully (i.e. returns an exit code of 0) but doesn't actually trigger the deployment. No errors are returned; in fact, there's no text output from the command at all.
Can anyone suggest what could be going wrong here?
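For context, the kind of CI job involved presumably looks something like this hypothetical minimal .gitlab-ci.yml (the stage, image, and environment name my-env are assumptions, not taken from the question); adding --verbose or --debug to the eb call is one way to surface why it exits silently:
deploy:
  stage: deploy
  image: python:3.9
  script:
    - pip install awsebcli
    # --verbose makes the CLI print detail it otherwise hides
    - eb deploy my-env --verbose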

Bluemix `docker exec` returns 404

I pushed an image (mysql:5.5, to be exact) to my registry and am currently running the container under the name db; it does appear when I run cf ic ps.
As docker exec seems to be supported now, I tried to run cf ic exec -it db bash, but I get a response of Error response from daemon: 404 error encountered while processing request!. Any exec command I try results in the same error. Does anyone know why this returns a 404 when my container does exist?
For reference, I need to load a dump onto the container, which is why I'm trying docker exec in the first place.
Edit: I can confirm this occurs for any container I create and try to exec -it into. Running logs against any container gives the same error as well.
For some reason the daemon could not reach your container. I've just tried the following command on different kinds of containers and it worked:
cf ic exec -it [containerId] [command]
I think you should retry. If the problem persists, I suggest you restart the container with:
cf ic restart [containerId]
If you still get a 404, you could try a new container instance using docker run again.
Moreover, be sure that you have installed the latest version of the IBM Containers CLI.
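Concretely, with the container name from the question, the retry sequence would be:
cf ic exec -it db bash
cf ic restart db
cf ic exec -it db bash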
Due to a platform issue this command, even though recently added to docker's supported commands on Bluemix, was not working correctly. That bug was resolved a few days ago, so you should try again.

Openshift MYSQL environment variables not set

All of my MySQL environment variables return an empty string, such as
echo getenv('OPENSHIFT_MYSQL_DB_URL');
echo getenv('OPENSHIFT_MYSQL_DB_HOST');
However, others such as
echo getenv('OPENSHIFT_APP_NAME');
echo getenv('OPENSHIFT_REPO_DIR');
work perfectly fine. Any ideas what I am doing wrong?
I had the same issue, and tried creating and recreating applications multiple times without success.
The solution was to use Git to push the code to OpenShift (at least one time). If you only use SFTP to push the code, those variables will not be accessible.
You can just use the rhc app stop and rhc app start commands to restart your application, and the environment variables will then be provided to it. Make sure that you don't just use the rhc app restart command, as it doesn't usually work; think of it as apachectl stop/start vs. apachectl reload.
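A minimal sketch of that sequence, assuming an application named myapp (a placeholder, not from the question):
rhc app stop -a myapp
rhc app start -a myapp
After the start, the OPENSHIFT_MYSQL_DB_* variables should be populated for the application's processes.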