phpmyadmin docker site can't be reached - mysql

This is a very frustrating error; I hope you can suggest something.
I removed all images and all containers and ran a Docker system prune.
Then I ran the following command. I know I am not specifying a MySQL host or password, but that shouldn't matter: it should still show me the phpMyAdmin main page, where I can log in and it would just tell me it can't connect to MySQL.
sudo docker run --name adminphp1 -d -p 8000:80 phpmyadmin/phpmyadmin
After running this command, the container list shows the following:
389268e87d4b phpmyadmin/phpmyadmin "/run.sh supervisord…" 2 minutes ago Up 2 minutes 9000/tcp, 0.0.0.0:8000->80/tcp adminphp1
Where does 9000/tcp come from?
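For reference, the ports an image declares via EXPOSE can be listed with docker inspect; a minimal check, assuming the same image tag as above, would be:
sudo docker image inspect --format '{{json .Config.ExposedPorts}}' phpmyadmin/phpmyadmin
This image runs nginx in front of php-fpm (both are visible in the logs below), and 9000/tcp is the port php-fpm conventionally listens on, so an extra exposed port like this is expected.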
Running docker logs adminphp1 shows the following:
Complete! phpMyAdmin has been successfully copied to /var/www/html
/usr/lib/python2.7/site-packages/supervisor/options.py:461: UserWarning: Supervisord is running as root and it is searching for its configuration file in default locations (including its current working directory); you probably want to specify a "-c" argument specifying an absolute path to a configuration file for improved security.
'Supervisord is running as root and it is searching '
2019-04-11 15:15:09,745 CRIT Supervisor is running as root. Privileges were not dropped because no user is specified in the config file. If you intend to run as root, you can set user=root in the config file to avoid this message.
2019-04-11 15:15:09,746 INFO Included extra file "/etc/supervisor.d/nginx.ini" during parsing
2019-04-11 15:15:09,746 INFO Included extra file "/etc/supervisor.d/php.ini" during parsing
2019-04-11 15:15:09,756 INFO RPC interface 'supervisor' initialized
2019-04-11 15:15:09,756 CRIT Server 'unix_http_server' running without any HTTP authentication checking
2019-04-11 15:15:09,756 INFO supervisord started with pid 1
2019-04-11 15:15:10,760 INFO spawned: 'php-fpm' with pid 21
2019-04-11 15:15:10,762 INFO spawned: 'nginx' with pid 22
[11-Apr-2019 15:15:10] NOTICE: fpm is running, pid 21
[11-Apr-2019 15:15:10] NOTICE: ready to handle connections
2019-04-11 15:15:11,826 INFO success: php-fpm entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2019-04-11 15:15:11,827 INFO success: nginx entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
Then I try to access it at website.com:8000, and after loading for a while the browser shows "site can't be reached".
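To narrow this down, one quick check (a sketch, assuming the shell runs on the same machine as the container) is to hit the published port from the host itself, which bypasses DNS and any external firewall:
curl -I http://localhost:8000
If that returns an HTTP response, the container is fine and the problem sits in front of it (security group, firewall, or DNS); if it also hangs, the problem is in the container or the port mapping.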
I would appreciate anything you can suggest.

mysql and gunicorn open connections at the same port

SOME BACKGROUND:
I have created a Django app and I am at the point where I want to deploy it. I have looked at multiple options, including mod_wsgi, but since the new macOS update came out I cannot install mod_wsgi, because I do not have apxs or apxs2 on my computer. (There is some discussion on the web about permissions to write to files; if you know more and would like to explain, please do.)
However, I looked into other options to deploy the app, and I want to use Heroku. I have followed the dev guide for Django deployment up to the part where I test using "heroku local web".
THE ISSUE
The problem stems from the local MySQL server using the same port that Gunicorn is also trying to use. I have found similar posts on Stack Overflow about 'connection in use' errors, but none show how to change the port for Gunicorn. I have found some open ports on my localhost, but every time I try to move MySQL to one of those, the connection times out. Therefore, I would like to know how to change the port Gunicorn binds to, so that it does not try to use MySQL's default port, 3306.
I was serving the Django project with its built-in development server, and the database I am using locally is MySQL. I am trying to run it locally with Gunicorn and Heroku now, because I feel that if this goes right locally it will probably go right when I put the project online.
ERROR GIVEN
10:38:52 PM web.1 | [2017-01-08 22:38:52 -0500] [83200] [ERROR] Connection in use: ('0.0.0.0', 3306)
10:38:52 PM web.1 | [2017-01-08 22:38:52 -0500] [83200] [ERROR] Retrying in 1 second.
10:38:53 PM web.1 | [2017-01-08 22:38:53 -0500] [83200] [ERROR] Connection in use: ('0.0.0.0', 3306)
10:38:53 PM web.1 | [2017-01-08 22:38:53 -0500] [83200] [ERROR] Retrying in 1 second.
10:38:54 PM web.1 | [2017-01-08 22:38:54 -0500] [83200] [ERROR] Connection in use: ('0.0.0.0', 3306)
10:38:54 PM web.1 | [2017-01-08 22:38:54 -0500] [83200] [ERROR] Retrying in 1 second.
MY PROCFILE
web: gunicorn project_name.wsgi.application --log-file -
Gunicorn starts when I stop the MySQL server, but then I get an exception because the project cannot connect to the database.
--Thank you
You can specify the address and port for Gunicorn with the --bind option:
gunicorn --bind 127.0.0.1:8000
So the complete command would be:
gunicorn --bind 127.0.0.1:8000 myproject.wsgi:application
You can change 8000 to any port number you like.
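For Heroku specifically, the dyno's port is assigned at runtime through the PORT environment variable, so a Procfile along these lines (a sketch, reusing the project name from the question) binds Gunicorn wherever Heroku expects it:
web: gunicorn --bind 0.0.0.0:$PORT project_name.wsgi:application --log-file -
heroku local web also sets PORT itself (5000 by default), so the same line works both locally and once deployed.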
To install mod_wsgi on Mac OS X, see:
https://pypi.python.org/pypi/mod_wsgi
All you need to do is pip install mod_wsgi.
You can then use mod_wsgi-express on the command line to run it on an unprivileged port, with all configuration done for you.
Or, you can integrate it with an existing Apache installation and configure it manually yourself: run mod_wsgi-express module-config, take what it outputs, and add it to the main Apache configuration for the system. Then add your specific WSGI application configuration to the Apache configuration file as well.
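A minimal sketch of the first workflow, assuming a Django project whose WSGI script lives at project_name/wsgi.py (the path is illustrative):
pip install mod_wsgi
# serve the app on an unprivileged port; no Apache config to write by hand
mod_wsgi-express start-server project_name/wsgi.py --port 8000
mod_wsgi-express prints the URL it is listening on and tears everything down again on Ctrl-C, which makes it convenient for local testing before touching the system Apache.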

I can't start context broker due to "error starting REST interface"

When I enter the following command:
/etc/init.d/contextBroker start
I get the following output:
Starting contextBroker... cat: /var/run/contextBroker/contextBroker.pid: No such file or directory
pidfile not found [FAILED]
I have two machines where I am practising with context broker, and I haven't touched the second one in days, after I successfully installed it and managed to receive a POST message from a remote weather station.
I see that the directory /var/run/contextBroker/ is actually empty.
What should I do to fix this now? Reinstall context broker, or something else?
Is this somehow my fault, and how do I prevent it in the future? I don't want this happening when my app goes live.
EDIT1: the Orion version is 0.20.0.
EDIT2: I just reinstalled contextBroker and I get the same problem. What exactly are the contents of that directory? Could I maybe just create the files inside it?
EDIT3: Since running contextBroker as a system service still yields an unsuccessful start, I also attempted to run it simply by typing:
contextBroker
in the command line, after which I get the following response:
INFO@14:03:03 contextBroker.cpp[1346]: Orion Context Broker is running
[root@localhost DevF12]# INFO@14:03:03 MongoGlobal.cpp[181]: Successful connection to database
INFO@14:03:03 contextBroker.cpp[1157]: Connected to mongo at localhost:orion
INFO@14:03:03 MongoGlobal.cpp[499]: Database Operation Successful ({ conditions.type: "ONTIMEINTERVAL" })
FATAL@14:03:03 rest.cpp[1013]: Fatal Error (error starting REST interface)
EDIT4: OK, so I tried ps aux | grep contextBroker and the result is:
494 2196 0.0 7.0 688696 135116 ? Ssl Apr21 0:02 /usr/bin/contextBroker -port 1026 -logDir /var/log/contextBroker -pidpath /var/run/contextBroker/contextBroker.pid -dbhost localhost -db orion
root 7299 0.0 6.9 621052 134440 ? Ssl 04:21 0:00 contextBroker -port 1028
root 8870 0.0 0.0 103256 848 pts/0 S+ 08:51 0:00 grep contextBroker
but there simply isn't anything in /var/run/contextBroker/.
Should I create contextBroker.pid myself? And if so, what should its contents be?
EDIT5: I just ran netstat -ntlpd | grep 1026 and the output is:
tcp 0 0 0.0.0.0:1026 0.0.0.0:* LISTEN 2196/contextBroker
tcp 0 0 :::1026 :::* LISTEN 2196/contextBroker
So I guess nothing else but contextBroker is listening?
For the record (it was answered in the comments).
The message FATAL@XX:XX:XX rest.cpp[1013]: Fatal Error (error starting REST interface) means that there is a networking problem, usually a wrong interface or an already-used port.
The usual cause is that there is another instance of Orion running (as a service, for example).
The way to solve it is to kill that process entirely. Show all Orion processes with ps aux | grep contextBroker and issue kill -9 <pid>, where <pid> is the process ID (the second column of the ps output).
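Applied to the ps output in EDIT4 above, where the instance started as a service has PID 2196, the sequence would look roughly like this (use the PID from your own ps output, not this fixed value):
ps aux | grep contextBroker
kill -9 2196
/etc/init.d/contextBroker start
The pidfile under /var/run/contextBroker/ should then be created by the broker itself (it is started with -pidpath, as the ps output shows), so there should be no need to create it by hand.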

MySQL Galera Autostart from boot --wsrep-new-cluster

To recover from a blackout I need to start the Galera cluster when the system boots and I can only do this with the following:
service mysql start --wsrep-new-cluster
"service mysql start" will get launched on boot but will fail because it is the only one in the cluster. How do I get the cluster to start from boot and not fail if it is the only one there?
EDIT
It looks like I have to leave gcomm:// blank for it to start, but that is not the best solution, because if another server came online first then it would fail.
#galera settings
wsrep_provider=/usr/lib/galera/libgalera_smm.so
wsrep_cluster_name="my_wsrep_cluster"
wsrep_cluster_address="gcomm://"
wsrep_sst_method=rsync
wsrep_provider_options="pc.bootstrap=true"
My solution is to edit the init script. This is the solution for Debian; the location of my init script is /etc/init.d/mysql.
There I found this line:
/usr/bin/mysqld_safe "${@:2}" > /dev/null 2>&1 &
and I added the parameter --wsrep-new-cluster:
/usr/bin/mysqld_safe --wsrep-new-cluster "${@:2}" > /dev/null 2>&1 &
and it is working after boot.
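Note that this makes the node bootstrap a new cluster on every boot, which is only safe on the designated first node. On newer systemd-based installations, the same one-off bootstrap is usually done with the galera_new_cluster helper that MariaDB ships, rather than by editing init scripts:
galera_new_cluster
Either way, it should only be run on one node; the others join with a plain service mysql start.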
I've been through this before. The following is the procedure I documented for my co-workers:
First, we determine the node with the latest change.
On each node, go to /var/lib/mysql and examine the grastate.dat file.
We are looking for the node with the highest seqno and a uuid that is not all zeros.
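For reference, a typical grastate.dat looks something like this (the uuid and seqno here are made up for illustration):
# GALERA saved state
version: 2.1
uuid:    5ee99582-bb8d-11e2-b8e3-23de375c1d30
seqno:   8204503945773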
On the node that captured the latest change, start up the cluster in bootstrap mode:
service mysql bootstrap
Start up the other nodes via the usual startup command:
service mysql start
Check that each node has the same list of databases:
mysql -u root -p
show databases;
On any node, check the status of the cluster (the wsrep variables below come from SHOW STATUS LIKE 'wsrep%'; in the mysql client) and ensure you see something like the following:
wsrep_local_state_comment | Synced <-- cluster is synced
wsrep_incoming_addresses | 10.0.0.9:3306,10.0.0.11:3306,10.0.0.13:3306 <-- all nodes are providers
wsrep_cluster_size | 3 <-- cluster consists of 3 nodes
wsrep_ready | ON <-- good :)

hadoop examples not running on amazon ec2

I am using hadoop-1.0.4 on Amazon EC2 with 3 Ubuntu 12.10 instances, 1 master and 2 slaves, installed directly under the ~ directory.
start-all.sh and stop-all.sh now run fine, but when I run jps on the master or the slaves, it prints nothing. Then I tested the Hadoop examples:
~/hadoop$ bin/hadoop jar hadoop-examples-1.0.4.jar pi 10 10000
It shows
Exception in thread "main" java.io.IOException: Permission denied
at java.io.UnixFileSystem.createFileExclusively(Native Method)
at java.io.File.createTempFile(File.java:1879)
at org.apache.hadoop.util.RunJar.main(RunJar.java:115)
However, I have already run chmod -R 777 on the tmp folders.
~/hadoop$ sudo bin/hadoop jar hadoop-examples-1.0.4.jar pi 10 10000
With sudo, it produces
13/05/12 03:58:11 WARN conf.Configuration: DEPRECATED: hadoop-site.xml
found in the classpath. Usage of hadoop-site.xml is deprecated.
Instead use core-site.xml, mapred-site.xml and hdfs-site.xml to
override properties of core-default.xml, mapred-default.xml
and hdfs-default.xml respectively
Number of Maps = 10
Samples per Map = 10000
13/05/12 03:58:12 WARN fs.FileSystem: "54.235.101.85:50001" is a deprecated
filesystem name. Use "hdfs://54.235.101.85:50001/" instead.
13/05/12 03:58:13 INFO ipc.Client: Retrying connect to server:
hdmaster/54.235.101.85:50001. Already tried 0 time(s).
13/05/12 03:58:14 INFO ipc.Client: Retrying connect to server:
hdmaster/54.235.101.85:50001. Already tried 1 time(s).
13/05/12 03:58:15 INFO ipc.Client: Retrying connect to server:
hdmaster/54.235.101.85:50001. Already tried 2 time(s).
Then it failed to connect. So what is the problem? Should I use sudo to run the examples? Thanks a lot.
I think the problem is that 54.235.101.85 is a public IP address. Use ifconfig on all the nodes to get a list of IP addresses and check for addresses beginning with 10.x.x.x, 172.x.x.x, or 192.x.x.x. If you find any, modify the configuration files on all the nodes to use those private addresses instead.
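Concretely, that means pointing fs.default.name in core-site.xml (the property Hadoop 1.x uses for the namenode address) at the master's private address on every node, with the hdfs:// scheme the deprecation warning asks for. A sketch, with 10.0.0.1 standing in for whatever private IP ifconfig reports on the master:
<property>
  <name>fs.default.name</name>
  <value>hdfs://10.0.0.1:50001</value>
</property>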

MySQL Cluster - [ [ndbd] ERROR -- Couldn't start as daemon, error: 'Failed to open logfile ]

Recently I wanted to set up MySQL Cluster: one management node, one SQL node, and two data nodes.
It seems to have installed successfully and the management node started, but when I try to start a data node, I hit a problem.
The error message when I try to start the data node is the one in the title: [ndbd] ERROR -- Couldn't start as daemon, error: 'Failed to open logfile'.
Does anyone know what's going wrong?
Basically I followed the step-by-step tutorials on this site and this site.
It would be very much appreciated if you can give me some advice!
Thanks.
Okay, I came up with a solution to fix this issue: 2013-01-18 09:26:10 [ndbd] ERROR -- Couldn't start as daemon, error: 'Failed to open logfile'.
I was stuck with the same issue, and after exploring, I opened $MY_CLUSTER_INSTALLATION/ndb_data/ndb_1_cluster.log.
1. I found the following message in the log:
2013-01-18 09:24:50 [MgmtSrvr] INFO -- Got initial configuration
from 'conf/config.ini',
will try to set it when all ndb_mgmd(s) started
2013-01-18 09:24:50 [MgmtSrvr] INFO -- Node 1: Node 1 Connected
2013-01-18 09:24:54 [MgmtSrvr] ERROR -- Unable to bind management
service port: *:1186!
Please check if the port is already used,
(perhaps a ndb_mgmd is already running),
and if you are executing on the correct computer
2013-01-18 09:24:54 [MgmtSrvr] ERROR -- Failed to start mangement service!
2. I checked the services running on that port on my Mac using the following command:
lsof -i :1186
And sure enough, I found the ndb_mgmd(s):
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
ndb_mgmd 418 8u IPv4 0x33a882b4d23b342d 0t0 TCP *:mysql-cluster (LISTEN)
ndb_mgmd 418 9u IPv4 0x33a882b4d147fe85 0t0 TCP localhost:50218->localhost:mysql-cluster (ESTABLISHED)
ndb_mgmd 418 10u IPv4 0x33a882b4d26901a5 0t0 TCP localhost:mysql-cluster->localhost:50218 (ESTABLISHED)
3. To kill the processes on that specific port (for me: 1186), I ran the following command:
lsof -P | grep '1186' | awk '{print $2}' | xargs kill -9
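A slightly tighter equivalent, assuming a reasonably recent lsof with the -t (terse, PIDs only) flag, filters by port directly instead of grepping the full listing:
lsof -t -i :1186 | xargs kill -9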
4. I repeated the steps listed in the MySQL Cluster installation PDF again:
$PATH/mysqlc/bin/ndb_mgmd -f conf/config.ini --initial --configdir=/$PATH/my_cluster/conf/
$PATH/mysqlc/bin/ndbd -c localhost:1186
Hope this helps!
Hope this will be useful.
In my case, two data nodes were already connected.
You can check this on your management node:
[root@ab0]# ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> show
What I did was:
ndb_mgm> shutdown
and then executed the restart command. It works for me.
Check that the datadir exists and is writable with "ls -ld /home/netdb/mysql_cluster/data" on datanode1.