Prometheus API returning HTML instead of JSON

I configured Prometheus with Kubernetes and am trying to execute queries using the API. I followed this document to configure and call the API:
https://github.com/prometheus/prometheus/blob/master/docs/querying/api.md
I am executing the curl command below:
curl -k -X GET "https://127.0.0.1/api/v1/query?query=kubelet_volume_stats_available_bytes"
But the output is HTML instead of JSON.
Is any additional configuration needed to get JSON output from Prometheus?

Per the Prometheus documentation, Prometheus "[does] not provide any server-side authentication, authorisation or encryption".
It would seem that you're hitting some proxy, so you need to figure out how to get past that proxy and through to Prometheus. Once you do that, you'll get the response you expect.

When I run Prometheus on my local machine, it listens on port 9090 by default, per the Prometheus README.md:
* Install Docker
* Change the targets entry in prometheus.yml:
#static_configs: (example)
# - targets: ['172.16.129.33:8080']
The target IP should be your localhost IP; just providing localhost would also work.
* docker build -t prometheus_simple .
* docker run -p 9090:9090 prometheus_simple
* The Prometheus endpoint is http://localhost:9090
So if I put the port into your curl call:
curl -k -X GET "https://127.0.0.1:9090/api/v1/query?query=kubelet_volume_stats_available_bytes"
And I get:
{"status":"success","data":{"resultType":"vector","result":[]}}
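If Prometheus is running inside Kubernetes (as in the question), one way to bypass whatever ingress or proxy is serving the HTML is to port-forward straight to the Prometheus service. This is only a sketch: the "monitoring" namespace and "prometheus" service name are assumptions, so adjust them to your cluster.

```shell
# Forward local port 9090 to the Prometheus service in the cluster
# (namespace and service name here are hypothetical).
kubectl -n monitoring port-forward svc/prometheus 9090:9090 &

# Then query the API directly over plain HTTP on the forwarded port:
curl "http://127.0.0.1:9090/api/v1/query?query=kubelet_volume_stats_available_bytes"
```

Because this talks to Prometheus itself rather than the proxy, the response comes back as JSON.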


Host not served (ejabberd)

I'm using the out-of-the-box ejabberd/ecs image from Docker Hub, and I tried to run a curl command (from my own container) to register a user, yet got the following message:
Host not served
The actual curl command with its output:
/app # curl -ks --request POST https://ejabberd:5443/api/register --data '{"user":"test","host":"localhost","password":"testing"}'
Host not served
/app #
As far as Docker goes, my app and ejabberd containers are in the same network.
Please advise.
Here is my ejabberd.yml, just in case.
I was able to address my issue by adding the container name to the hosts list:
# grep -A2 hosts ./home/ejabberd/conf/ejabberd.yml
hosts:
- localhost
- ejabberd
#
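With "ejabberd" now listed as a served vhost, the same register call should no longer be rejected, since the hostname in the request URL matches a host ejabberd serves (a sketch; values are taken from the question and need a running ejabberd container):

```shell
# Same command as before: now that the "ejabberd" vhost is served,
# the API no longer answers "Host not served".
curl -ks --request POST https://ejabberd:5443/api/register \
  --data '{"user":"test","host":"localhost","password":"testing"}'
```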

Zabbix sender. Discovery rules. Host prototypes

I would like to automate host creation on the Zabbix server without using an agent on the hosts. I tried to use Discovery rules and to send JSON data with zabbix_sender, but without luck: the server does not accept the data.
Environment:
Zabbix server 3.4 installed on CentOS 7. Hosts run Windows or Ubuntu.
On the server I created a host named zab_trap.
In that host I created a Discovery rule with key zab_trap.discovery and type Zabbix trapper. Then in the Discovery rule I created a Host prototype named {#RH.NAME}.
Command line with JSON "data":
zabbix_sender.exe -z zab_server -s zab_trap -k zab_trap.discovery -o "{"data":[{"{#RH.NAME}":"HOST1"}]}"
I expected that "HOST1" would be created, but after execution I got:
"info from server: "processed: 0; failed: 1; total: 1; seconds spent: 0.000188"
sent: 1; skipped: 0; total: 1"
There is no error in zabbix_server.log (at debug level 5); I only see this:
trapper got '{"request":"sender data","data":[{"host":"zab_trap","key":"zab_trap.discovery","value":"'{data:[{{#RH.NAME}:HOST1}]}'"}]}'
I think there may be something wrong with the JSON syntax.
Please help.
It seems I have found the solution. The problem is in the way the JSON is sent. As I understand it, the quoting breaks when the JSON is written directly on the command line, but it works if zabbix_sender sends a file containing the JSON.
Command line:
zabbix_sender -z zab_server -s zab_trap -i test.json
The file test.json contains the line:
- zab_trap.discovery {"data":[{"{#RH.NAME}":"HOST1"}]}
Host created.
If you want to use the command line without a JSON file, you need to clean the string first:
zabbix_sender.exe -z zab_server -s zab_trap -k zab_trap.discovery -o "$(echo '{"data":[{"{#RH.NAME}":"HOST1"}]}' | tr -cd '[:print:]')"
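To illustrate the quoting issue (a sketch on a Unix shell; cmd.exe on Windows has different quoting rules): wrapping the value in single quotes keeps the inner double quotes intact, and tr -cd '[:print:]' strips any non-printable characters that would otherwise corrupt the JSON.

```shell
# Single quotes preserve the inner double quotes of the JSON value,
# so zabbix_sender receives valid JSON (example data from the question).
JSON='{"data":[{"{#RH.NAME}":"HOST1"}]}'

# Strip non-printable characters (e.g. stray control bytes from
# copy-paste) before handing the string to zabbix_sender.
CLEAN=$(printf '%s' "$JSON" | tr -cd '[:print:]')
printf '%s\n' "$CLEAN"
# → {"data":[{"{#RH.NAME}":"HOST1"}]}
```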

Getting curl: (52) Empty reply from server when trying to send a curl command to a http address of a docker running an AutoML model

I am trying to send a prediction request as JSON to a Docker image of an AutoML model running in a container. I exported the image from the AutoML UI and stored it in Google Cloud Storage.
I am running the following to launch the docker image.
CPU_DOCKER_GCS_PATH="gcr.io/automl-vision-ondevice/gcloud-container-1.12.0:latest"
YOUR_MODEL_PATH="gs://../../saved_model.pb"
PORT=8501
CONTAINER_NAME="my_random_name"
sudo docker run --rm --name ${CONTAINER_NAME} -p ${PORT}:8501 -v ${YOUR_MODEL_PATH}:/tmp/mounted_model/0001 -t ${CPU_DOCKER_GCS_PATH}
When I run this command, I get the following error, but the program keeps running.
2019-05-09 11:29:06.810470: E tensorflow_serving/sources/storage_path/file_system_storage_path_source.cc:369] FileSystemStoragePathSource encountered a file-system access error: Could not find base path /tmp/mounted_model/ for servable default
I am running the following command to send the prediction request.
curl -d #/home/arkanil/saved_model/cloud_output.json -X POST http://localhost:8501/v1/models/default:predict
This returns
curl: (52) Empty reply from server.
I have tried to follow the steps written in the google docs mentioned below.
https://cloud.google.com/vision/automl/docs/containers-gcs-tutorial#install-docker
https://docs.docker.com/install/linux/docker-ce/debian/
Getting output as
curl: (52) Empty reply from server.
The expected result should be a JSON file depicting the prediction numbers of the AutoML model that is running in the docker.
It seems you are trying to run the container while passing a path to your model in Google Cloud Storage.
You should download saved_model.pb from GCS to your local machine and pass its local path in the YOUR_MODEL_PATH variable.
To download the model, use:
gsutil cp ${YOUR_MODEL_PATH} ${YOUR_LOCAL_MODEL_PATH}/saved_model.pb
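Putting it together, a sketch of the corrected launch. The variable names follow the question and answer above; the local directory path is hypothetical, and TensorFlow Serving expects the directory containing saved_model.pb to be mounted, not the file itself.

```shell
# Download the exported model from GCS into a local directory
# (local path here is illustrative).
YOUR_LOCAL_MODEL_PATH=/home/arkanil/model
mkdir -p "${YOUR_LOCAL_MODEL_PATH}"
gsutil cp "${YOUR_MODEL_PATH}" "${YOUR_LOCAL_MODEL_PATH}/saved_model.pb"

# Mount the local directory (not the gs:// URI) into the container.
sudo docker run --rm --name "${CONTAINER_NAME}" -p ${PORT}:8501 \
  -v "${YOUR_LOCAL_MODEL_PATH}":/tmp/mounted_model/0001 \
  -t "${CPU_DOCKER_GCS_PATH}"
```

With a real local directory mounted, the "Could not find base path" error should disappear and the :predict endpoint should answer.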

How can I use Ansible when I only have read-only access?

I am using Ansible to automate some network troubleshooting tasks, but when I try to ping all my devices as a sanity check I get the following error:
"msg": "Authentication or permission failure. In some cases, you may have been able to authenticate and did not have permissions on the remote directory. Consider changing the remote temp path in ansible.cfg to a path rooted in \"/tmp\".
When I run the command in Ansible verbose mode, right before this error I get the following output:
<10.25.100.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "echo Cmd exec error./.ansible/tmp/ansible-tmp-1500330345.12-194265391907358" && echo ansible-tmp-1500330345.12-194265391907358="echo Cmd exec error./.ansible/tmp/ansible-tmp-1500330345.12-194265391907358" ) && sleep 0'
I am an intern and thus only have read-only access to all devices; therefore, I believe the error is occurring because of the mkdir command. My two questions are thus:
1) Is there anyway to configure Ansible to not create any temp files on the devices?
2) Is there some other factor that may be causing this error that I might have missed?
I have tried searching through the Ansible documentation for any relevant configurations, but I do not have much experience working with Ansible so I have been unable to find anything.
The question does not make sense in a broader context. Ansible is a tool for server configuration automation. Without write access you can't configure anything on the target machine, so there is no use case for Ansible.
In a narrower context, although you did not post any code, you seem to be trying to ping the target server. Ansible ping module is not an ICMP ping. Instead, it is a component which connects to the target server, transfers Python scripts and runs them. The scripts produce a response which means the target system meets minimal requirements to run Ansible modules.
However, you seem to want to run a regular ping command using the Ansible command module on your control machine and check the status:
- hosts: localhost
  vars:
    target_host: 192.168.1.1
  tasks:
    - command: ping {{ target_host }}
You might want to play with failed_when, ignore_errors, or changed_when parameters. See Error handling in playbook.
Note that I suggested running the whole play on localhost because, in your situation, it doesn't make sense to put target machines to which you have only limited access rights into the inventory.
Additionally:
Is there anyway to configure Ansible to not create any temp files on the devices?
Yes. Running commands through the raw module will not create temporary files.
As you seem to have an SSH access, you can use it to run a command and check its result:
- hosts: 192.168.1.1
  tasks:
    - raw: echo Hello World
      register: echo
    - debug:
        var: echo.stdout
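The same check also works as an ad-hoc command, which keeps everything on one line (the host IP follows the example above; this assumes working SSH access to the target):

```shell
# raw sends the command over SSH without copying any module files
# or creating temp directories on the target, so read-only access suffices.
ansible all -i '192.168.1.1,' -m raw -a 'echo Hello World'
```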
If you have multiple nodes and sudo permission, and you want to bypass a read-only restriction, try using the raw module to remount the disk on the remote node with the read/write option; it was helpful for me.
Playbook example:
---
- hosts: bs
  gather_facts: no
  pre_tasks:
    - name: read/write
      raw: ansible bs -m raw -a "mount -o remount,rw /" -b --vault-password-file=vault.txt
      delegate_to: localhost
  tasks:
    - name: dns
      raw: systemctl restart dnsmasq
    - name: read only
      raw: mount -o remount,ro /

Docker API can’t apply json filters

According to https://docs.docker.com/engine/reference/api/docker_remote_api_v1.24/#/list-tasks, a filter can be used to get only the running containers with a particular service name. For some reason, I am getting the full list of all tasks regardless of their names or desired states. I can't find any proper examples of using curl with JSON requests against the Docker API.
I'm using the following command:
A)
curl -X GET -H "Content-Type: application/json" -d '{"filters":[{ "service":"demo", "desired-state":"running" }]}' https://HOSTNAME:2376/tasks --cert ~/.docker/cert.pem --key ~/.docker/key.pem --cacert ~/.docker/ca.pem
Returns everything
B)
Trying to get something working based on Docker Remote API Filter Exited:
curl https://HOSTNAME:2376/containers/json?all=1&filters={%22status%22:[%22exited%22]} --cert ~/.docker/cert.pem --key ~/.docker/key.pem --cacert ~/.docker/ca.pem
This one returns "curl: (60) Peer's Certificate issuer is not recognized.", so I guess that curl request is malformed.
I have asked on the Docker forums and they helped a little. I'm amazed that there is no proper documentation anywhere on the internet on how to use the Docker API with curl; or is it so obvious and I'm just missing something?
I should prefix this with the fact that I have never seen curl erroneously report a certificate error when in fact there was some sort of other issue in play, but I will trust your assertion that this is in fact not a certificate problem.
I thought at first that your argument to filters was incorrect, because
according to the API reference, the filters parameter is...
a JSON encoded value of the filters (a map[string][]string) to process on the containers list.
I wasn't exactly sure how to interpret map[string][]string, so I set up a logging proxy between my Docker client and server and ran docker ps -f status=exited, which produced the following request:
GET /v1.24/containers/json?filters=%7B%22status%22%3A%7B%22exited%22%3Atrue%7D%7D HTTP/1.1\r
If we decode the argument to filters, we see that it is:
{"status":{"exited":true}}
Whereas you are passing:
{"status":["exited"]}
So that's different, obviously, and I was assuming that was the source of the problem...but when trying to verify that, I ran into a curious problem. I can't even run your curl command line as written, because curl tries to perform some globbing behavior due to the braces:
$ curl http://localhost:2376/containers/json'?filters={%22status%22:[%22exited%22]}'
curl: (3) [globbing] nested brace in column 67
If I correctly quote your arguments to filter:
$ python -c 'import urllib; print urllib.quote("""{"status":["exited"]}""")'
%7B%22status%22%3A%5B%22exited%22%5D%7D
It seems to work just fine:
$ curl http://localhost:2376/containers/json'?filters=%7B%22status%22%3A%5B%22exited%22%5D%7D'
[{"Id":...
I can get the same behavior if I use your original expression and pass -g (aka --globoff) to disable the brace expansion:
$ curl -g http://localhost:2376/containers/json'?filters={%22status%22:[%22exited%22]}'
[{"Id":...
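The encoding step itself can be scripted so you never hand-write the percent escapes (a sketch that uses python3 for the encoding; the curl line is left commented out since it needs a reachable Docker daemon):

```shell
# Percent-encode the filters JSON for use in the query string.
FILTERS='{"status":["exited"]}'
ENCODED=$(python3 -c 'import sys, urllib.parse; print(urllib.parse.quote(sys.argv[1]))' "$FILTERS")
echo "$ENCODED"
# → %7B%22status%22%3A%5B%22exited%22%5D%7D

# Then, against a live daemon:
# curl "http://localhost:2376/containers/json?filters=${ENCODED}"
```

Because the braces are already encoded, no -g / --globoff flag is needed on the curl call.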
One thing I would like to emphasize is the utility of sticking a proxy between the docker client and server. If you ever find yourself asking, "how do I use this API?", an excellent answer is to see exactly what the Docker client is doing in the same situation.
You can create a logging proxy using socat. Here is an example.
docker run -v /var/run/docker.sock:/var/run/docker.sock -p 127.0.0.1:1234:1234 bobrik/socat -v TCP-LISTEN:1234,fork UNIX-CONNECT:/var/run/docker.sock
Then run a command like so in another window.
docker -H localhost:1234 run --rm -p 2222:2222 hello-world
This example uses docker on ubuntu.
A simple Docker REST proxy can look like this:
https://github.com/laoshanxi/app-mesh/blob/main/src/sdk/docker/docker-rest.go
Then you can curl like this:
curl -g http://127.0.0.1:6058/containers/json'?filters={%22name%22:[%22jenkins%22]}'