Can I send a POST request using http_load? - stress-testing

I want to run a stress test against an API server using http_load, but it only sends GET requests by default. Does http_load support POST requests?

http_load does not support the POST method. You can use Apache Bench (ab) with the -p parameter instead:
ab -T "application/x-www-form-urlencoded; charset=UTF-8" -p post_data_file.txt -n 1000 -c 32 http://localhost/
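For reference, the file passed with -p contains the raw request body that ab will POST. A minimal sketch, assuming a simple form-encoded payload (the field names are just placeholders):
# write a form-encoded body for ab to POST (placeholder fields)
echo 'username=test&action=login' > post_data_file.txt
ab then sends that body with the Content-Type given by -T, for 1000 requests in total (-n) at a concurrency of 32 (-c).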

Related

HTTP POST request and basic authentication

When I do:
curl https://example.com/my/ressource \
-H "Content-Type: application/json" \
--data '{"itemid":["123","456"]}' \
-u myuser
I get an HTTP 403 Forbidden error. If I use a GET request it works, e.g.
curl https://example.com/my/ressource
works fine. Also, if I disable basic authentication on the server side, the above POST request works fine.
The server is Apache 2.4, acting as a reverse proxy.
What is wrong with the POST request?
It turned out that, in the above case, Apache was acting as a reverse proxy and the backend server behind it could not handle the Authorization header, so the solution was to remove that header with:
RequestHeader unset Authorization
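In context, the directive usually sits in the reverse-proxy configuration. A minimal sketch of what that might look like (the /my/ path and the backend address are placeholders for your own setup; it requires mod_headers and mod_proxy_http to be enabled):
<Location "/my/">
    # strip the Authorization header before forwarding to the backend
    RequestHeader unset Authorization
    ProxyPass        "http://backend.internal:8080/my/"
    ProxyPassReverse "http://backend.internal:8080/my/"
</Location>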

Prometheus API returning HTML instead of JSON

I configured Prometheus with Kubernetes and am trying to execute queries using its API. I followed this document to configure and call the API:
https://github.com/prometheus/prometheus/blob/master/docs/querying/api.md
Executing below curl command for output:
curl -k -X GET "https://127.0.0.1/api/v1/query?query=kubelet_volume_stats_available_bytes"
But I am getting HTML output instead of JSON.
Is any additional configuration needed to get JSON output from Prometheus?
Per the Prometheus documentation, Prometheus "[does] not provide any server-side authentication, authorisation or encryption".
It would seem that you're hitting some proxy, so you need to figure out how to get past that proxy and through to Prometheus. Once you do that, you'll get the response you expect.
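One common way to get past a proxy and reach Prometheus directly when it runs inside a Kubernetes cluster is to port-forward to the Prometheus service and query it over localhost. A sketch, assuming the service is called prometheus and lives in the monitoring namespace (adjust the names to your deployment):
# forward local port 9090 to the Prometheus service inside the cluster
kubectl -n monitoring port-forward svc/prometheus 9090:9090
# in another shell, query the API directly over plain HTTP
curl "http://127.0.0.1:9090/api/v1/query?query=kubelet_volume_stats_available_bytes"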
When I run Prometheus on my local machine, it runs on port 9090 by default, per the Prometheus README.md:
* Install Docker
* change the targets section of prometheus.yml, for example (see the fuller prometheus.yml sketch after this list):
  static_configs:
    - targets: ['172.16.129.33:8080']
  The target IP should be your local machine's IP; just providing localhost would also work.
* docker build -t prometheus_simple .
* docker run -p 9090:9090 prometheus_simple
* endpoint for prometheus is http://localhost:9090
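For reference, a minimal prometheus.yml with that targets block in context might look roughly like this (the job name and scrape interval are illustrative placeholders, not from the original answer):
global:
  scrape_interval: 15s          # how often Prometheus scrapes targets

scrape_configs:
  - job_name: 'my_job'          # placeholder job name
    static_configs:
      - targets: ['172.16.129.33:8080']   # your host IP (or localhost) and port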
So if I put the port into your curl call, I have:
curl -k -X GET "https://127.0.0.1:9090/api/v1/query?query=kubelet_volume_stats_available_bytes"
And I get:
{"status":"success","data":{"resultType":"vector","result":[]}}

Export contents of the Openshift image to a file

I've been searching for this for a while. I don't have access to the binary artifacts used to build the image because an Artifactory migration ruined the repo. There is one particularly precious binary I would love to extract from the image. I know docker save would save me, but I don't have access to docker, only to the oc client.
EDIT:
After looking around a little, I thought that the docker-registry API should be the way to go. Debugging the oc client and the logs of the docker-registry pods, I found that both v1 and v2 API versions seem to be used.
Somehow I cannot get any further than the version check.
Getting the auth token and registry URL from oc:
TOKEN=`oc whoami -t`
URL="https://"`oc -n default get route docker-registry -o jsonpath="{.status.ingress[0].host}"`
Then getting a correct response to:
curl -k -X GET -H "Authorization: Bearer $TOKEN" "$URL/v2/"
...
HTTP/1.1 200 OK
but:
curl -k -X GET -H "Authorization: Bearer $TOKEN" "$URL/v2/_catalog"
...
HTTP/1.1 400 Bad Request
You can log in to the internal image registry, if it is exposed, and then pull the image back down to your local system and do what you want with it. Instructions for logging in can be found in:
http://cookbook.openshift.org/image-registry-and-image-streams/how-do-i-push-an-image-to-the-internal-image-registry.html
That page talks about doing a push, but you want to do a pull.
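A sketch of what that pull could look like, assuming the registry route is exposed and you can run docker (or a compatible client) on a machine that can reach it; the registry hostname, project, image stream, and tag are placeholders:
# log in to the exposed registry with your OpenShift token
docker login -u $(oc whoami) -p $(oc whoami -t) docker-registry.example.com
# pull the image and export it to a tar archive you can dig the binary out of
docker pull docker-registry.example.com/myproject/myimagestream:latest
docker save -o myimage.tar docker-registry.example.com/myproject/myimagestream:latest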

Expired Certificate in Cosmos API

Using the WebHDFS API of Cosmos generates an expired-certificate response.
Using this URL: https://cosmos.lab.fi-ware.org:13000/
we can see the certificate seems to be expired.
Do we need an updated certificate, or is there a way to work around this problem?
The certificate must be renewed, for sure. In the meantime, you can simply ignore the certificate. If you are using curl, use the -k option:
$ curl -k -X POST "https://cosmos.lab.fiware.org:13000/cosmos-auth/v1/token" -H "Content-Type: application/x-www-form-urlencoded" -d "grant_type=password&username=frb#tid.es&password=MY_PASSWORD_IS_PRIVATE"
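If you want to check the certificate's validity dates yourself, one way is to inspect it with openssl against the host and port from the question:
# print the notBefore/notAfter dates of the certificate served on port 13000
echo | openssl s_client -connect cosmos.lab.fi-ware.org:13000 2>/dev/null | openssl x509 -noout -dates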

OAuth2 access to Cosmos' WebHDFS in FIWARE Lab

I've recently seen that access to Cosmos' WebHDFS in FIWARE Lab has been protected with OAuth2. I know I have to add an OAuth2 token to the request in order to continue using WebHDFS, but:
How can I get the token?
How is the token added to the request?
Without the token, the API always returns:
$ curl -X GET "http://cosmos.lab.fi-ware.org:14000/webhdfs/v1/user/gtorodelvalle?op=liststatus&user.name=gtorodelvalle"
Auth-token not found in request header
Yes, WebHDFS access is now protected with OAuth2. This is part of the general mechanism for protecting REST APIs in FIWARE, which performs authentication and authorization. You can find more details here.
First of all, you must request an OAuth2 token from the Cosmos token generator. This is a service running at cosmos.lab.fiware.org:13000. You can do this using any REST client; the easiest way is with the curl command:
$ curl -k -X POST "https://cosmos.lab.fiware.org:13000/cosmos-auth/v1/token" -H "Content-Type: application/x-www-form-urlencoded" -d "grant_type=password&username=frb#tid.es&password=xxxxxxxx"
{"access_token": "qjHPUcnW6leYAqr3Xw34DWLQlja0Ix", "token_type": "Bearer", "expires_in": 3600, "refresh_token": "V2Wlk7aFCnElKlW9BOmRzGhBtqgR2z"}
As you can see, your FIWARE Lab credentials are required in the payload, in the form of a password-based grant type.
Once you have the access token (in the example above, it is qjHPUcnW6leYAqr3Xw34DWLQlja0Ix), simply add it to the same WebHDFS request you were performing in the past. The token is added using the X-Auth-Token header:
$ curl -X GET "http://cosmos.lab.fiware.org:14000/webhdfs/v1/user/frb/path/to/the/data?op=liststatus&user.name=frb" -H "X-Auth-Token: qjHPUcnW6leYAqr3Xw34DWLQlja0Ix"
{"FileStatuses":{"FileStatus":[...]}}
If you try the above request with a random token, the server will return that the token is not valid; that's because you have not authenticated properly:
$ curl -X GET "http://cosmos.lab.fiware.org:14000/webhdfs/v1/user/frb/path/tp/the/data?op=liststatus&user.name=frb" -H "X-Auth-Token: randomtoken93487345"
User token not authorized
In the same way, if you use a valid token but try to access another HDFS userspace, you will get the same answer; that's because you are not authorized to access any HDFS userspace other than your own:
$ curl -X GET "http://cosmos.lab.fiware.org:14000/webhdfs/v1/user/fgalan/path/tp/the/data?op=liststatus&user.name=fgalan" -H "X-Auth-Token: qjHPUcnW6leYAqr3Xw34DWLQlja0Ix"
User token not authorized
IMPORTANT UPDATE:
Since summer 2016, cosmos.lab.fiware.org is not working anymore. Instead, a pair of clusters, storage.cosmos.lab.fiware.org and computing.cosmos.lab.fiware.org, have been set up. Regarding the auth server of Cosmos, it currently runs on computing.cosmos.lab.fiware.org, port TCP/13000.
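Assuming the cosmos-auth endpoint path is unchanged, the token request against the new host would then look like this:
curl -k -X POST "https://computing.cosmos.lab.fiware.org:13000/cosmos-auth/v1/token" -H "Content-Type: application/x-www-form-urlencoded" -d "grant_type=password&username=frb#tid.es&password=xxxxxxxx"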
The right request must be:
curl -X POST "https://cosmos.lab.fi-ware.org:13000/cosmos-auth/v1/token" -H "Content-Type: application/x-www-form-urlencoded" -d "grant_type=password&username=user#domain.com&password=yourpassword" -k
The URL was incorrect; the correct one is https://cosmos.lab.fi-ware.org:13000
-k turns off certificate verification.