Host not served (ejabberd)

I'm using the out-of-the-box ejabberd/ecs image from Docker Hub, and I've tried to run a curl command (from my own container) to register a user, yet got the following message:
Host not served
The actual curl command with output:
/app # curl -ks --request POST https://ejabberd:5443/api/register --data '{"user":"test","host":"localhost","password":"testing"}'
Host not served
/app #
As far as Docker goes, my app and ejabberd containers are in the same network.
Please advise.
Here is my ejabberd.yml, just in case.

I was able to address my issue by adding the container name to my hosts:
# grep -A2 hosts ./home/ejabberd/conf/ejabberd.yml
hosts:
- localhost
- ejabberd
#
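With "ejabberd" listed as a served host, the registration call can target that vhost. A minimal sketch of the call after the fix (whether you register the account under localhost or ejabberd depends on which vhost you want it on, and the exact success output varies by ejabberd version):
/app # curl -ks --request POST https://ejabberd:5443/api/register --data '{"user":"test","host":"ejabberd","password":"testing"}'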

Related

Connect .NET Core Web API to MySQL in a different Docker container

I have 2 Docker containers running on the same virtual machine (Ubuntu Server 18.04 on VMware Workstation 12 Player). The first one is a MySQL container, running on port 3306, and the second one is an ASP.NET Core (v2.0) Web API (port 5000 on the VM, exposed externally through nginx on port 80). My VM IP is 192.168.20.197.
[project architecture image]
My connection string in the Web API project is:
optionsBuilder.UseMySQL("server=localhost;port=3306;database=mydatabase;user=root;CharSet=utf8");
My Dockerfile content is:
# Build stage: restore dependencies and publish the app
FROM microsoft/dotnet:sdk AS build-env
WORKDIR /app
COPY *.csproj ./
RUN dotnet restore
COPY . ./
RUN dotnet publish -c Release -o out
# Runtime stage: copy the published output into a lean runtime image
FROM microsoft/dotnet:aspnetcore-runtime
WORKDIR /app
COPY --from=build-env /app/out .
ENTRYPOINT ["dotnet", "DemoMySql.dll"]
I have tried to make an HTTP request to the Web API on the VM, but the server responds with error 500 when I try to interact with the database (the Web API still works normally when I make it return a sample string, e.g. 192.168.20.197/api/values/samplestring). So how can I connect the Web API to MySQL in a different container?
P.S. Sorry for my bad grammar.
Thanks to @Tao Zhou's and @Nathan Werry's advice, I solved the problem by replacing localhost in the connection string with the IP address of my virtual machine. Then I used the Docker --link flag (a legacy Docker feature) to link the MySQL container to the Web API container.
docker run \
--name <webapi-container-name> \
-p 8082:8081 \
--link <mysql-container-name>:<mysql-image> \
-d <webapi-image>
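For completeness, the connection string then points at the VM address instead of localhost (database name and credentials here are just the ones from the question):
optionsBuilder.UseMySQL("server=192.168.20.197;port=3306;database=mydatabase;user=root;CharSet=utf8");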
To connect from the Web API to MySQL, you could add Docker Compose, which will create a shared network for the Web API and MySQL; you can then reach MySQL by its service name.
version: "3"
services:
  web:
    build: .
    ports:
      - "8000:8000"
  db:
    image: postgres
    ports:
      - "8001:5432"
You could access db at postgres://db:5432; see Networking in Compose.
As another option, you could create your own bridge network and share it between the two containers; see Bridge networks.
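Since the question uses MySQL rather than the Postgres example above, here is a minimal sketch of an equivalent docker-compose.yml (the image tag, password, and port mapping are assumptions, not from the original post):
version: "3"
services:
  web:
    build: .
    ports:
      - "5000:5000"
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example   # assumed password, set your own
      MYSQL_DATABASE: mydatabase
    ports:
      - "3306:3306"
The connection string then uses the service name instead of an IP: server=db;port=3306;database=mydatabase;user=root;...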

Prometheus API returning HTML instead of JSON

I configured Prometheus with Kubernetes and am trying to execute queries using the APIs. I followed this document to configure and call the API:
https://github.com/prometheus/prometheus/blob/master/docs/querying/api.md
I'm executing the below curl command:
curl -k -X GET "https://127.0.0.1/api/v1/query?query=kubelet_volume_stats_available_bytes"
But I'm getting output in HTML instead of JSON.
Is any additional configuration needed to get output in JSON format from Prometheus?
Per the Prometheus documentation, Prometheus "[does] not provide any server-side authentication, authorisation or encryption".
It would seem that you're hitting some proxy, so you need to figure out how to get past that proxy and through to Prometheus. Once you do that, you'll get the response you expect.
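One quick way to see what is actually answering is to inspect the response headers; a proxy or ingress will usually identify itself in the Server header (this probe is a suggestion, not from the original answer):
curl -ksI "https://127.0.0.1/api/v1/query?query=up"
A text/html Content-Type there confirms you are not talking to Prometheus directly.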
When I run Prometheus on my local machine, it runs on port 9090 by default, based on the Prometheus README.md:
* Install Docker
* Change the prometheus.yml section called targets, for example:
  # static_configs: (example)
  #   - targets: ['172.16.129.33:8080']
  The target IP should be your localhost IP; just providing localhost would also work.
* docker build -t prometheus_simple .
* docker run -p 9090:9090 prometheus_simple
* The endpoint for Prometheus is http://localhost:9090
So if I put the port into your curl call, I have:
curl -k -X GET "https://127.0.0.1:9090/api/v1/query?query=kubelet_volume_stats_available_bytes"
And I get:
{"status":"success","data":{"resultType":"vector","result":[]}}

Nginx Docker Volumes empty

When using docker-compose, nginx isn't populating its volume with files the way other images do.
For example, with MySQL, the line below will save the data created at /var/lib/mysql to the local machine at ./volumes/db/data:
./volumes/db/data:/var/lib/mysql
Another example: with WordPress, the line below will save the data created at /var/www/html/wp-content/uploads to the local machine at ./volumes/uploads/data:
./volumes/uploads/data:/var/www/html/wp-content/uploads
This is not working with nginx, though: no matter what I change /some/nginx/path to, nothing ever appears at ./volumes/nginx/data.
./volumes/nginx/data:/some/nginx/path
Does nginx work differently in this regard?
Update
Using a named volume with the following configurations solved this problem:
In the services section of the docker-compose file, I changed ./volumes/nginx/data:/some/nginx/path to nginx_data:/some/nginx/path.
And then my volumes section reads as follows:
volumes:
  nginx_data:
    driver: local
    driver_opts:
      o: bind
      device: ${PWD}/volumes/nginx/data
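Putting the two pieces together, a minimal docker-compose.yml sketch (the service definition and mount path are assumptions for illustration; note that the local driver usually also wants type: none for a bind mount):
version: "3"
services:
  nginx:
    image: nginx
    ports:
      - "8080:80"
    volumes:
      - nginx_data:/usr/share/nginx/html
volumes:
  nginx_data:
    driver: local
    driver_opts:
      type: none   # commonly required alongside o: bind; absent from the snippet above
      o: bind
      device: ${PWD}/volumes/nginx/data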
There should be no difference; a volume mounts a local directory to a directory in the container. Either you are not mounting correctly, or you are mounting an incorrect path inside the nginx container (one which nginx does not use).
Based on the official nginx Docker image docs at https://docs.docker.com/samples/library/nginx/ you should mount to /usr/share/nginx/html:
$ docker run --name some-nginx -v /some/content:/usr/share/nginx/html:ro -d nginx
In addition, I would include full paths in your docker-compose.yaml:
volumes:
- /full_path/volumes/nginx/data:/usr/share/nginx/html
If this is still not working, you should exec into the container and confirm that the directory is mounted:
$ docker exec -it <container_name> sh
$ df -h | grep nginx
# write data, confirm you see it on the docker host's directory
$ cd /usr/share/nginx/html
$ touch foo
# on docker host
$ ls /full_path/volumes/nginx/data/foo
If any of this is failing, I would look at the Docker logs to see if there was an issue mounting the directory, perhaps a path or permission issue.
$ docker logs <container_name>
--- UPDATE ---
I ran everything you are using and it just worked:
$ cat Dockerfile
FROM nginx
RUN touch /usr/share/nginx/html/test1234 && ls /usr/share/nginx/html/
$ docker build -t nginx-image-test .; docker run -p 8889:80 --name some-nginx -v /full_path/test:/usr/share/nginx/html:rw -d nginx-image-test; ls ./test;
...
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
33ccbea6c1c1 nginx-image-test "nginx -g 'daemon of…" 4 minutes ago Up 4 minutes 0.0.0.0:8889->80/tcp some-nginx
$ cat test/index.html
hello world
$ curl -i http://localhost:8889
HTTP/1.1 200 OK
Server: nginx/1.15.3
Date: Thu, 06 Sep 2018 15:35:43 GMT
Content-Type: text/html
Content-Length: 12
Last-Modified: Thu, 06 Sep 2018 15:31:11 GMT
Connection: keep-alive
ETag: "5b91483f-c"
Accept-Ranges: bytes
hello world
--- UPDATE 2 ---
Awesome, you figured it out; this post seems to explain why:
docker data volume vs mounted host directory
The host directory is, by its nature, host-dependent. For this reason, you can’t mount a host directory from Dockerfile because built images should be portable. A host directory wouldn’t be available on all potential hosts.
If you have some persistent data that you want to share between containers, or want to use from non-persistent containers, it’s best to create a named Data Volume Container, and then to mount the data from it.
I ran across this issue as well when trying to create a volume to place my own nginx configurations into /etc/nginx. While I never found out the real cause, I think it has to do with how nginx is built from the Dockerfile.
I solved the problem by using my own Dockerfile to extend the original and copy over the configuration files at build time. Hopefully this helps.
FROM nginx:1.15.2
COPY ./nginx.conf /etc/nginx/nginx.conf
COPY ./global /etc/nginx/global
COPY ./conf.d /etc/nginx/conf.d
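A hypothetical build-and-run sequence for this extended image (the image and container names are placeholders):
$ docker build -t my-nginx .
$ docker run -p 8080:80 --name my-nginx -d my-nginx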

Wildfly on OpenShift 3 with path-based routing and accessible console

I have WildFly 10 running on OpenShift Origin 3 in AWS with an elastic IP.
I set up a Route in OpenShift to map / to the WildFly service. This is working fine. If I go to http://my.ip.address I get the WildFly welcome page.
But if I map a different path, say /wf01, it doesn't work. I get a 404 Not Found error.
My guess is the router is passing the /wf01 along to the service? If that's the case, can I stop it from doing that? Otherwise, how can I map http://my.ip.address/wf01 to my WildFly service?
I also want the WildFly console to be accessible from outside (this is a demo server for my own use). I added "-bmanagement","0.0.0.0" to the deploymentconfig, but looking at the WildFly logs it is still binding to 127.0.0.1:
02:55:41,483 INFO [org.jboss.as] (Controller Boot Thread) WFLYSRV0051: Admin console listening on http://127.0.0.1:9990
A router today cannot remap/rewrite the incoming HTTP path to another path value before passing it along. A workaround is to mount another route+service at the root that handles the root and redirects or forwards.
You can also use port-forward:
oc port-forward -h
Forward 1 or more local ports to a pod
Usage:
oc port-forward POD [LOCAL_PORT:]REMOTE_PORT [...[LOCAL_PORT_N:]REMOTE_PORT_N] [options]
Examples:
# Listens on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in the pod
$ oc port-forward -p mypod 5000 6000
# Listens on port 8888 locally, forwarding to 5000 in the pod
$ oc port-forward -p mypod 8888:5000
# Listens on a random port locally, forwarding to 5000 in the pod
$ oc port-forward -p mypod :5000
# Listens on a random port locally, forwarding to 5000 in the pod
$ oc port-forward -p mypod 0:5000
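For the admin console specifically, a port-forward sketch in the syntax shown above (the pod name is a placeholder):
$ oc port-forward -p <wildfly-pod> 9990:9990
You could then browse to http://localhost:9990 to reach the console without exposing the management interface externally.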

jekyll 2.2.0 | Error: Address already in use - bind(2)

I am new to Jekyll blogging and am trying to view my blog locally at
http://localhost:4000
but it failed.
➜ my-awesome-site > jekyll serve
Notice: for 10x faster LSI support, please install http://rb-gsl.rubyforge.org/
Configuration file: /home/Git/my-awesome-site/_config.yml
Source: /home/Git/my-awesome-site
Destination: /home/Git/my-awesome-site/_site
Generating...
done.
Configuration file: /home/Git/my-awesome-site/_config.yml
jekyll 2.2.0 | Error: Address already in use - bind(2)
I tried
$ lsof -wni tcp:3000
$ lsof -wni tcp:4000
but both of them return nothing.
My Ruby version is:
➜ my-awesome-site > ruby --version
ruby 2.0.0p451 (2014-02-24 revision 45167) [universal.x86_64-darwin13]
What should I do next? I've already reinstalled Jekyll, but the same problem remains.
See the comments at http://jekyllrb.com/docs/usage/; they should help you:
If you need to kill the server, you can kill -9 1234 where "1234" is
the PID.
If you cannot find the PID, then run ps aux | grep jekyll and kill the instance.
The steps here fixed it for me. I had to prepend sudo to the commands.
$> sudo lsof -wni tcp:4000
This gives you information about the process running on TCP port 4000, including its PID (process ID). Now use the command below to kill the process:
$> sudo kill -9 PID
Now you can execute the jekyll serve command to start your site.
See which process is using that port and kill it, then run again; or try running Jekyll on a different port.
If @Matifou's answer here doesn't work, do the following instead:
The fix for anyone: run jekyll serve on an unused port:
Two ways:
In your _config.yml file, specify a port other than 4000, for example:
port: 4001
OR (my preferred choice), add --port 4001 to your jekyll serve command, for example:
bundle exec jekyll serve --livereload --port 4001
From: https://jekyllrb.com/docs/configuration/options/#serve-command-options
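You can then confirm that the site is being served on the new port (a quick check, assuming the default bind address):
curl -I http://localhost:4001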
See my answer here: Is it possible to serve multiple Jekyll sites locally?
My particular problem: NoMachine is interfering:
When I run:
bundle exec jekyll serve --livereload --drafts --unpublished
I get these errors:
jekyll 3.9.0 | Error: Address already in use - bind(2) for 127.0.0.1:4000
.
.
.
/usr/lib/ruby/2.7.0/socket.rb:201:in `bind': Address already in use - bind(2) for 127.0.0.1:4000 (Errno::EADDRINUSE)
ps aux | grep jekyll doesn't show any processes running except this grep command itself. So, that doesn't help.
sudo lsof -wni tcp:4000, however, shows a running nxd nx daemon process:
$ sudo lsof -wni tcp:4000
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
nxd 914803 nx 3u IPv4 7606783 0t0 TCP *:4000 (LISTEN)
nxd 914803 nx 4u IPv6 7599664 0t0 TCP *:4000 (LISTEN)
I discovered this is due to my NoMachine remote login server.
If running NoMachine, click on the NoMachine icon in the top-right of your task bar (this example is on Ubuntu 20.04):
Then click on "Show server status" --> Ports, and you'll see that NoMachine is running nx on port 4000, which is interfering.
So, use the fix above to serve jekyll on a different port, such as 4001 instead of 4000. I recommend leaving the NoMachine port settings as-is, on port 4000, because NoMachine says:
Automatic updates require that hosts with NoMachine client or server installed have access to the NoMachine update server on port 4000 and use the TCP protocol.
See also:
Is it possible to serve multiple Jekyll sites locally?
my answer