Can't execute some of the curl commands in the quickstart guide - daml

I am working through the Digital Asset quickstart guide.
I am able to run:
curl -X GET http://localhost:8080/iou
And:
curl -X GET http://localhost:8080/iou/0
Without a problem. However, I am having trouble running:
curl -X PUT -d '{"issuer":"Alice","owner":"Alice","currency":"AliceCoin","amount":1.0,"observers":[]}' http://localhost:8080/iou
And:
curl -X POST -d '{ "newOwner":"Bob" }' http://localhost:8080/iou/ID/transfer
I get this output:
<html><body><h2>500 Internal Server Error</h2></body></html>
Is there a log somewhere that allows me to see what error occurred? How can I debug this issue?

First I stopped the mvn, navigator and sandbox processes. Then I re-ran
da run damlc -- package daml/Main.daml target/daml/iou
Then I restarted the Sandbox and re-ran
mvn clean compile exec:java
Now it works fine...
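For anyone hitting the same 500, here is the full recovery sequence in one place (a sketch; restart the Sandbox the same way your quickstart version starts it):
# stop the mvn, Navigator and Sandbox processes first (Ctrl+C in each terminal)
da run damlc -- package daml/Main.daml target/daml/iou
# restart the Sandbox on the rebuilt package, then restart the app:
mvn clean compile exec:java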

Related

Export contents of the Openshift image to a file

I've been searching for this for a while. I don't have access to the binary artifacts used to build the image because an Artifactory migration ruined the repo. There is one particularly precious binary I would love to extract from the image. I know docker save would save me, but I don't have access to docker, only to the oc client.
EDIT:
After looking around a little, I thought the docker-registry API should be the way to go. Debugging the oc client and the logs of the docker-registry pods, I found that both v1 and v2 API versions seem to be in use.
Somehow I cannot get any further than the version check.
Getting the auth token and registry url from oc:
TOKEN=`oc whoami -t`
URL="https://"`oc -n default get route docker-registry -o jsonpath="{.status.ingress[0].host}"
Then getting a correct response to:
curl -k -X GET -H "Authorization: Bearer $TOKEN" "$URL/v2/"
...
HTTP/1.1 200 OK
but:
curl -k -X GET -H "Authorization: Bearer $TOKEN" "$URL/v2/_catalog"
...
HTTP/1.1 400 Bad Request
You can log in to the internal image registry, if it is exposed, and then pull the image back down to your local system and do what you want with it. Instructions for logging in can be found at:
http://cookbook.openshift.org/image-registry-and-image-streams/how-do-i-push-an-image-to-the-internal-image-registry.html
That talks about doing a push, but you want to do a pull.
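As a rough sketch of the pull side (assuming the registry route is exposed and docker is available on your local machine; the registry host, project and image names below are placeholders):
TOKEN=$(oc whoami -t)
docker login -u "$(oc whoami)" -p "$TOKEN" docker-registry.example.com
docker pull docker-registry.example.com/<project>/<imagestream>:<tag>
docker save docker-registry.example.com/<project>/<imagestream>:<tag> -o image.tar
The docker save at the end gives you a tarball you can unpack to get at the binary you are after.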

Jenkins Build Time Trend API does not yield output using curl API

I got this link to get the Build Time Trend, along with other data, in Jenkins:
https://jenkins:8080/view/<view-name>/job/<job-name>/<buildnumber>/api/json
This works well in a web browser, but it does not seem to work with curl; it gives no result when I run it as a curl command.
This is what I tried
curl -u user:api_token -s -k "https://jenkins:8080/view/<view-name>/job/<job-name>/<buildnumber>/api/json"
This syntax worked with other APIs.
Not sure what is wrong here.
curl -u userid:api_token -s -k "https://jenkins:8080/view/<view-name>/job/<job-name>/<buildnumber>/api/json" | jq.'causes[]|{result}'
jq.causes[]|{result}: command not found
You need a space between jq and its arguments (and probably not a period).
... | jq 'causes[]|{result}'
        ^
        space here
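Putting it together, a hedged version of the full pipeline (jq paths start with a dot, and the exact path depends on the JSON Jenkins returns, so .causes may need adjusting to match your build's document):
curl -u userid:api_token -s -k "https://jenkins:8080/view/<view-name>/job/<job-name>/<buildnumber>/api/json" | jq '.causes[]|{result}'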

hadoop + ambari cluster change configuration

I want to upload the new blueprint.json file to my Ambari cluster as follows:
curl -u admin:admin -H "X-Requested-By: ambari" -X GET http://10.14.5.40:8080/api/v1/clusters/HDP6?format=blueprint -o /tmp/1-HDP6_blueprint.json
When I run it, everything seems to be OK because we do not get any warning or error,
but when I look at the Ambari GUI parameters I see that the new blueprint.json has not affected the Ambari cluster with the new configuration.
How can I debug this, or how can I get feedback from the curl command about what happened?
Please note that the curl command you used is for downloading the existing cluster configuration in blueprint format (it is a GET: -X GET).
You will have to use curl -X POST to register and upload a new blueprint to Ambari.
curl --verbose -H "X-Requested-By: ambari" -X POST -u admin:admin http://10.14.5.40:8080/api/v1/blueprints/HDP6new?validate_topology=false --data "@./blueprint.json"
Also note that, to change an existing cluster's configuration, uploading a modified blueprint is not the correct way. You may refer to this document for modifying configurations.
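As a quick sanity check after registering (a hedged example reusing the blueprint name from the command above), you can read the blueprint back from Ambari:
curl -u admin:admin -H "X-Requested-By: ambari" -X GET http://10.14.5.40:8080/api/v1/blueprints/HDP6new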

Docker API can’t apply json filters

According to https://docs.docker.com/engine/reference/api/docker_remote_api_v1.24/#/list-tasks, the filters parameter can be used to list only the running tasks of a particular service. For some reason, I am getting the full list of all tasks regardless of their names or desired states. I can't find any proper examples of using curl with JSON requests against the Docker API.
I'm using the following command:
A)
curl -X GET -H "Content-Type: application/json" -d '{"filters":[{ "service":"demo", "desired-state":"running" }]}' https://HOSTNAME:2376/tasks --cert ~/.docker/cert.pem --key ~/.docker/key.pem --cacert ~/.docker/ca.pem
Returns everything
B)
Trying to get something working based on Docker Remote API Filter Exited:
curl https://HOSTNAME:2376/containers/json?all=1&filters={%22status%22:[%22exited%22]} --cert ~/.docker/cert.pem --key ~/.docker/key.pem --cacert ~/.docker/ca.pem
This one returns "curl: (60) Peer's Certificate issuer is not recognized.", so I guess that curl request is malformed.
I have asked on the Docker forums and they helped a little. I'm amazed that there is no proper documentation anywhere on the internet on how to use the Docker API with curl; or is it so obvious that I am not understanding something?
I should preface this by saying that I have never seen curl erroneously report a certificate error when some other issue was actually in play, but I will trust your assertion that this is in fact not a certificate problem.
I thought at first that your argument to filters was incorrect, because
according to the API reference, the filters parameter is...
a JSON encoded value of the filters (a map[string][]string) to process on the containers list.
I wasn't exactly sure how to interpret map[string][]string, so I set up a logging proxy between my Docker client and server and ran docker ps -f status=exited, which produced the following request:
GET /v1.24/containers/json?filters=%7B%22status%22%3A%7B%22exited%22%3Atrue%7D%7D HTTP/1.1\r
If we decode the argument to filters, we see that it is:
{"status":{"exited":true}}
Whereas you are passing:
{"status":["exited"]}
So that's different, obviously, and I was assuming that was the source of the problem...but when trying to verify that, I ran into a curious problem. I can't even run your curl command line as written, because curl tries to perform some globbing behavior due to the braces:
$ curl http://localhost:2376/containers/json'?filters={%22status%22:[%22exited%22]}'
curl: (3) [globbing] nested brace in column 67
If I correctly URL-encode (quote) your argument to filters:
$ python -c 'import urllib; print urllib.quote("""{"status":["exited"]}""")'
%7B%22status%22%3A%5B%22exited%22%5D%7D
It seems to work just fine:
$ curl http://localhost:2376/containers/json'?filters=%7B%22status%22%3A%5B%22exited%22%5D%7D'
[{"Id":...
I can get the same behavior if I use your original expression and pass -g (aka --globoff) to disable the brace expansion:
$ curl -g http://localhost:2376/containers/json'?filters={%22status%22:[%22exited%22]}'
[{"Id":...
One thing I would like to emphasize is the utility of sticking a proxy between the docker client and server. If you ever find yourself asking, "how do I use this API?", an excellent answer is to see exactly what the Docker client is doing in the same situation.
You can create a logging proxy using socat. Here is an example.
docker run -v /var/run/docker.sock:/var/run/docker.sock -p 127.0.0.1:1234:1234 bobrik/socat -v TCP-LISTEN:1234,fork UNIX-CONNECT:/var/run/docker.sock
Then run a command like so in another window.
docker -H localhost:1234 run --rm -p 2222:2222 hello-world
This example uses Docker on Ubuntu.
A Docker REST proxy can be as simple as this:
https://github.com/laoshanxi/app-mesh/blob/main/src/sdk/docker/docker-rest.go
Then you can curl like this:
curl -g http://127.0.0.1:6058/containers/json'?filters={%22name%22:[%22jenkins%22]}'

Error while trying to run a MapReduce job on FIWARE-Cosmos using Tidoop REST API

I am following this guide on GitHub and I am not able to run the example MapReduce job mentioned in Step 5.
I am aware that this file no longer exists:
/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar
And I am aware that the same file can now be found here:
/usr/lib/hadoop-0.20/hadoop-examples-0.20.2-cdh3u6.jar
So I form my call as below:
curl -v -X POST "http://computing.cosmos.lab.fiware.org:12000/tidoop/v1/user/$user/jobs" -d '{"jar":"/usr/lib/hadoop-0.20/hadoop-examples-0.20.2-cdh3u6.jar","class_name":"WordCount","lib_jars":"/usr/lib/hadoop-0.20/hadoop-examples-0.20.2-cdh3u6.jar","input":"testdir","output":"testoutput"}' -H "Content-Type: application/json" -H "X-Auth-Token: $TOKEN"
The input directory exists in my HDFS user space and there is a file called testdata.txt inside it. The testoutput folder does not exist in my HDFS user space, since I know it causes problems if it already exists.
When I execute this curl command, the error I get is {"success":"false","error":1}, which is not very descriptive. Is there something I am missing here?
I have just tested this with my user frb and a valid token for that user:
$ curl -X POST "http://computing.cosmos.lab.fiware.org:12000/tidoop/v1/user/frb/jobs" -d '{"jar":"/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar","class_name":"wordcount","lib_jars":"/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar","input":"testdir","output":"outputtest"}' -H "Content-Type: application/json" -H "X-Auth-Token: xxxxxxxxxxxxxxxxxxx"
{"success":"true","job_id": "job_1460639183882_0011"}
Please observe that the fat jar with the MapReduce examples in the "new" cluster (computing.cosmos.lab.fiware.org) is at /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar, as detailed in the documentation. /usr/lib/hadoop-0.20/hadoop-examples-0.20.2-cdh3u6.jar was the fat jar in the "old" cluster (cosmos.lab.fiware.org).
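If you want to track the job afterwards, you should be able to query the returned job_id (a hedged example; the endpoint shape is assumed from the creation URL above, so please check the Tidoop documentation for your deployment):
curl -X GET "http://computing.cosmos.lab.fiware.org:12000/tidoop/v1/user/frb/jobs/job_1460639183882_0011" -H "X-Auth-Token: xxxxxxxxxxxxxxxxxxx"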
EDIT 1
Finally, it turned out the user had no account in the "new" pair of Cosmos clusters in FIWARE Lab (storage.cosmos.lab.fiware.org and computing.cosmos.lab.fiware.org), where Tidoop runs, but only in the "old" cluster (cosmos.lab.fiware.org). Thus, the issue was fixed by simply provisioning an account in the "new" ones.