Error Wamp Server - 2 of 3 services running - mysql

I installed Wamp and everything worked correctly, but after I restarted the computer, only 2 out of 3 services are running. I believe the error is on port 3306, but I don't know how to fix it. Can anyone help me with this problem? I tested the ports and they returned the following message:
Your port 80 is used by a process with PID = 1976
The processus of PID 1976 is 'httpd.exe' Session: Services
The service of PID 1976 for 'httpd.exe' is 'wampapache'
This service is from Wampserver - It is correct

Your port 3306 is used by a process with PID = 3908
The processus of PID 3908 is 'mysqld.exe' Session: Services
The service of PID 3908 for 'mysqld.exe' is 'N/A'
N/A means that there are no service related to PID 3908

Your port 3307 is used by a process with PID = 1488
The processus of PID 1488 is 'mysqld.exe' Session: Services
The service of PID 1488 for 'mysqld.exe' is 'wampmariadb'
This service is from Wampserver - It is correct
When I uninstall and reinstall WAMP, all 3 services run, but when I restart the computer, only 2 work.
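Note: in the output above, the mysqld.exe on port 3306 (PID 3908) belongs to no Windows service, which suggests a second MySQL installation is being started at boot, outside of WAMP. A quick way to identify which program that is (the PID comes from the output above and will differ after each reboot):

tasklist /svc /FI "PID eq 3908"
wmic process where processid=3908 get ExecutablePath

The ExecutablePath usually points at the other MySQL installation; disabling its service or autostart entry should free port 3306 for WAMP's MySQL at the next reboot.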

Related

LocalServer - 2/3 services Running WAMP - MySQL is not Running [FIXED]

I recently ran into a problem where the mysqld service wasn't running.
I checked WAMP's MySQL log and found the following messages:
[ERROR] Can't start server: Bind on TCP/IP port: No such file or directory
[ERROR] Do you already have another mysqld server running on port: 3306 ?
So I ran netstat and realized that TCP port 3306 was already in use by another mysqld.exe process:
netstat -ano|find "3306"
TCP 0.0.0.0:3306 0.0.0.0:0 LISTENING 4576
TCP 0.0.0.0:33060 0.0.0.0:0 LISTENING 4576
TCP [::]:3306 [::]:0 LISTENING 4576
TCP [::]:33060 [::]:0 LISTENING 4576
After that I went to Task Manager and found PID 4576, which was the other mysqld.exe running. I ended that task and restarted my WAMP server, and all services started smoothly.
The problem probably occurred because I had two WAMP servers installed, don't ask me why... Hope this helps someone with the same problem.
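For reference, the same fix can be done from the command line instead of Task Manager (4576 is the PID found above; run from an elevated prompt):

netstat -ano | find "3306"
taskkill /PID 4576 /F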

[ERROR][Server] Do you already have another mysqld server running on port: 3306?

When I type mysqld --console (picture 1),
I get the error that I have another MySQL server running on port 3306 (picture 2).
How can I find and kill the other mysqld running on port 3306 (from the command line)?
First, identify the PID of the process using port 3306:
netstat -aon
Then stop that task with:
taskkill /pid 1234 /f
replacing 1234 with the PID you found listening on port 3306.
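A variant that narrows the netstat output to just the port in question, plus a PowerShell one-liner that does both steps at once (Get-NetTCPConnection requires Windows 8 / Server 2012 or later):

netstat -aon | findstr :3306
Get-Process -Id (Get-NetTCPConnection -LocalPort 3306).OwningProcess | Stop-Process -Force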

mysql and gunicorn open connections at the same port

SOME BACKGROUND:
I have created a Django app and I am at the point where I want to deploy it. I have looked at multiple options, including WSGI, but since the new macOS update came out I cannot install mod_wsgi, because I do not have apxs or apxs2 on my computer. (There is some discussion on the web about permissions to write to system files; if you know more and would like to explain, please do.)
However, I looked into other options to try to deploy the app and I want to use Heroku. I have followed the dev guide for Django deployment until I reached the part where I test using "heroku local web".
THE ISSUE
The problem stems from the fact that the local MySQL server uses the same port that Gunicorn is also trying to use. I have found similar posts on Stack Overflow about 'connections in use', but none show how to change the port for Gunicorn. I have found some open ports on my localhost, but every time I try to move MySQL to one of those, the connection times out. Therefore, I would like to know how to change the port Gunicorn binds to, so it does not try to use MySQL's default port, 3306.
I was serving the Django project with the development server it came with, and the database I am using locally is MySQL. I am trying to connect locally with Gunicorn and Heroku now, because I feel that if this works locally it will probably work when I put the project online.
ERROR GIVEN
10:38:52 PM web.1 | [2017-01-08 22:38:52 -0500] [83200] [ERROR] Connection in use: ('0.0.0.0', 3306)
10:38:52 PM web.1 | [2017-01-08 22:38:52 -0500] [83200] [ERROR] Retrying in 1 second.
10:38:53 PM web.1 | [2017-01-08 22:38:53 -0500] [83200] [ERROR] Connection in use: ('0.0.0.0', 3306)
10:38:53 PM web.1 | [2017-01-08 22:38:53 -0500] [83200] [ERROR] Retrying in 1 second.
10:38:54 PM web.1 | [2017-01-08 22:38:54 -0500] [83200] [ERROR] Connection in use: ('0.0.0.0', 3306)
10:38:54 PM web.1 | [2017-01-08 22:38:54 -0500] [83200] [ERROR] Retrying in 1 second.
MY PROCFILE
web: gunicorn project_name.wsgi.application --log-file -
Gunicorn binds successfully when I stop the MySQL server, but then I get an exception because the project cannot connect to the database.
--Thank you
You can specify the port for Gunicorn as follows -
gunicorn --bind 127.0.0.1:8000
So basically the complete command would be
gunicorn --bind 127.0.0.1:8000 myproject.wsgi:application
You can change 8000 to any port you like.
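Applied to the Procfile from the question, that could look like the following sketch. Note that gunicorn expects the module:callable form with a colon (the question's Procfile uses a dot), and under Heroku you would normally bind to the $PORT the platform assigns rather than a hard-coded number:

web: gunicorn project_name.wsgi:application --bind 0.0.0.0:$PORT --log-file -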
To install mod_wsgi on MacOS X see:
https://pypi.python.org/pypi/mod_wsgi
All you need to do is pip install mod_wsgi.
You can then use mod_wsgi-express on the command line to run it on an unprivileged port, with all configuration done for you.
Or, you can integrate it with an existing Apache installation and configure it manually: run mod_wsgi-express module-config, take what it outputs, and add it to the main Apache configuration for the system. Then add your specific WSGI application configuration to the Apache configuration file as well.
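For example, a minimal sketch, assuming project_name/wsgi.py is the WSGI script Django generated for the question's project:

pip install mod_wsgi
mod_wsgi-express start-server project_name/wsgi.py --port 8000
mod_wsgi-express module-config

The start-server command serves the app immediately on the unprivileged port 8000 with Apache configured for you; module-config prints the LoadModule lines to paste into the system Apache configuration.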

Intermittent MySQL connection on Vagrant VirtualBox when Jenkins runs PHPUnit

We have a Jenkins CI server that runs our suite of tests on every commit, triggered by a GitHub hook.
We recently moved the suite of tests from running locally on the Jenkins server to running inside a VirtualBox/Vagrant VM. This is to ensure that the test configuration matches the dev environment. This is an Ubuntu 14.04 guest running on Ubuntu 14.04 host.
After moving to the VM model, PHPUnit occasionally fails with no connection to MySQL. The error is Can't connect to MySQL server on '127.0.0.1'.
This error is intermittent, not easily reproducible. That is, if I trigger a new build on Jenkins, it usually succeeds. However, when the new build is triggered by the GitHub hook, it fails more often than manually triggered builds, and sometimes succeeds.
Here's what I tried (a more robust polling alternative to the fixed sleep is sketched after this list):
sudo service mysql restart before running phpunit
sleep 5 between the mysql restart and phpunit
Connecting to localhost and 127.0.0.1 -- When I tried connecting to localhost, I received intermittent errors Can't connect to MySQL server on '/var/run/mysqld/mysqld.sock'.
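Since a fixed sleep races against however long MySQL takes to come up on a loaded host, one option is to poll until the server actually answers before starting the suite. A minimal sketch, assuming mysqladmin is on the PATH and credentials come from the environment or ~/.my.cnf:

sudo service mysql restart
# poll for up to 30 seconds until mysqld accepts connections
for i in $(seq 1 30); do
    mysqladmin ping -h 127.0.0.1 --silent && break
    sleep 1
done
phpunit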
Here's the full output of the failed build:
sudo service mysql restart
* Stopping MySQL (Percona Server) mysqld
...done.
* Starting MySQL (Percona Server) database server mysqld
...done.
* Checking for corrupt, not cleanly closed and upgrade needing tables.
sleep 5
sudo service mysql status
* /usr/bin/mysqladmin Ver 8.42 Distrib 5.6.23-72.1, for debian-linux-gnu on x86_64
Server version 5.6.23-72.1-log
Protocol version 10
Connection Localhost via UNIX socket
UNIX socket /var/run/mysqld/mysqld.sock
Uptime: 6 sec
Threads: 1 Questions: 111 Slow queries: 0 Opens: 761 Flush tables: 1 Open tables: 754 Queries per second avg: 18.500
phpunit
PHPUnit 4.6.2 by Sebastian Bergmann and contributors.
Configuration read from /vagrant/phpunit.xml
...........EEE.E.............E............................EEEEE.
Time: 8.51 seconds, Memory: 135.25Mb
1) ProcessDatasetsTest::test_process_on_census_fraction
PDOException: SQLSTATE[HY000] [2003] Can't connect to MySQL server on '127.0.0.1' (111)
I've had intermittent connectivity issues with MySQL on Vagrant, but not precisely related to PHPUnit. Connections were dropping out of the blue, until I found out there were many boxes for the same app running at the same time in VirtualBox. I killed them all, then ran vagrant global-status --prune, and I had reliable connections again.
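For reference, the cleanup looks roughly like this (the machine id a1b2c3d is hypothetical; take the real ids from the first command's output):

vagrant global-status
vagrant halt a1b2c3d          # or: vagrant destroy a1b2c3d
vagrant global-status --prune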
We saw a similar issue on a different Vagrant VM -- Can't connect to MySQL server -- and it turned out to be a memory issue. The VM was out of RAM. This was fixed by adding or increasing a swapfile on the VM:
sudo fallocate -l 1G /swapfile.img
sudo chmod 0600 /swapfile.img
sudo mkswap /swapfile.img
sudo swapon /swapfile.img
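To confirm the swap is active, and to keep it across reboots (the fstab entry is the standard format, using the swapfile path above):

swapon -s
echo '/swapfile.img none swap sw 0 0' | sudo tee -a /etc/fstab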

MySQL Cluster - [ndbd] ERROR -- Couldn't start as daemon, error: 'Failed to open logfile'

Recently I wanted to set up a MySQL Cluster: one management node, one SQL node and two data nodes.
It seems to have installed successfully and the management node started, but when I try to start a data node, I hit a problem...
Here is the error message when I try to start the data node (the one in the title):
[ndbd] ERROR -- Couldn't start as daemon, error: 'Failed to open logfile'
Does anyone know what's going wrong?
Basically I followed the step-by-step tutorial on this site and this site.
It would be much appreciated if you could give me some advice!
Thanks
Okay, I came up with a solution to fix this issue: 2013-01-18 09:26:10 [ndbd] ERROR -- Couldn't start as daemon, error: 'Failed to open logfile'
I was stuck with the same issue, and after exploring I opened $MY_CLUSTER_INSTALLATION/ndb_data/ndb_1_cluster.log
1. I found the following message present in the log:
2013-01-18 09:24:50 [MgmtSrvr] INFO -- Got initial configuration
from 'conf/config.ini',
will try to set it when all ndb_mgmd(s) started
2013-01-18 09:24:50 [MgmtSrvr] INFO -- Node 1: Node 1 Connected
2013-01-18 09:24:54 [MgmtSrvr] ERROR -- Unable to bind management
service port: *:1186!
Please check if the port is already used,
(perhaps a ndb_mgmd is already running),
and if you are executing on the correct computer
2013-01-18 09:24:54 [MgmtSrvr] ERROR -- Failed to start mangement service!
2. I checked the services running on that port on my Mac using the following command:
lsof -i :1186
And sure enough, I found the ndb_mgmd(s):
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
ndb_mgmd 418 8u IPv4 0x33a882b4d23b342d 0t0 TCP *:mysql-cluster (LISTEN)
ndb_mgmd 418 9u IPv4 0x33a882b4d147fe85 0t0 TCP localhost:50218->localhost:mysql-cluster (ESTABLISHED)
ndb_mgmd 418 10u IPv4 0x33a882b4d26901a5 0t0 TCP localhost:mysql-cluster->localhost:50218 (ESTABLISHED)
3. To kill the processes on the specific port (for me: 1186) I ran the following command (a tighter variant is sketched after these steps):
lsof -P | grep '1186' | awk '{print $2}' | xargs kill -9
4. I repeated the steps listed in the MySQL Cluster installation PDF:
$PATH/mysqlc/bin/ndb_mgmd -f conf/config.ini --initial --configdir=/$PATH/my_cluster/conf/
$PATH/mysqlc/bin/ndbd -c localhost:1186
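As mentioned in step 3, a tighter variant that avoids grepping all of lsof's output (-t prints bare PIDs, -i restricts the listing to the port):

lsof -ti :1186 | xargs kill -9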
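For reference, the conf/config.ini those commands point at describes the cluster layout from the question. A minimal sketch for one management node, two data nodes and one SQL node; all addresses and directories here are placeholders, not taken from the question:

[ndbd default]
NoOfReplicas=2

[ndb_mgmd]
NodeId=1
HostName=192.168.0.10
DataDir=/var/lib/mysql-cluster

[ndbd]
NodeId=2
HostName=192.168.0.20
DataDir=/usr/local/mysql/data

[ndbd]
NodeId=3
HostName=192.168.0.30
DataDir=/usr/local/mysql/data

[mysqld]
NodeId=4
HostName=192.168.0.40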
Hope this helps!
Hope this will be useful.
In my case, two data nodes were already connected; you can check this from your management node:
[root@ab0]# ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> show
What I did was:
ndb_mgm> shutdown
and then executed the restart command. It worked for me.
Check that the datadir exists and is writeable with "ls -ld /home/netdb/mysql_cluster/data" on datanode1.
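If that check fails, creating the directory and handing it to the user that runs ndbd usually resolves the 'Failed to open logfile' error (the path and the user netdb are taken from the answer above; adjust both to your layout):

sudo mkdir -p /home/netdb/mysql_cluster/data
sudo chown -R netdb /home/netdb/mysql_cluster/data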