hadoop + ambari cluster change configuration - json

I want to upload a new blueprint.json file to my Ambari cluster, as follows:
curl -u admin:admin -H "X-Requested-By: ambari" -X GET http://10.14.5.40:8080/api/v1/clusters/HDP6?format=blueprint -o /tmp/1-HDP6_blueprint.json
When I run it, everything seems to be OK because I don't get any warning or error.
But when I read the parameters in the Ambari GUI, I see that the new blueprint.json has not affected the Ambari cluster with the new configuration.
How can I debug this, or how can I get feedback from the curl command about what actually happened?

Please note that the curl command you used is for downloading the existing cluster configuration in blueprint format (it uses -X GET).
You will have to use curl -X POST to register and upload a new blueprint to Ambari:
curl --verbose -H "X-Requested-By: ambari" -X POST -u admin:admin http://10.14.5.40:8080/api/v1/blueprints/HDP6new?validate_topology=false --data "@./blueprint.json"
Also note that uploading a modified blueprint is not the correct way to change an existing cluster's configuration. You may refer to the Ambari documentation on modifying configurations for that.
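For what it's worth, Ambari ships a helper script for exactly that, documented on the Ambari wiki page "Modify configurations". A minimal sketch, assuming Ambari 2.x default paths and the cluster from the question (the property and value are placeholders taken from the wiki's example):
# Change a single live property instead of re-uploading a blueprint.
# configs.sh lives on the ambari-server host; the action is get/set/delete.
/var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin -port 8080 \
  set 10.14.5.40 HDP6 core-site "fs.trash.interval" "360"
The change shows up as a new configuration version in the Ambari GUI, which also makes it easy to verify that the call actually took effect.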

Can't execute some of the curl commands in the quickstart guide

I am working through the Digital Asset quickstart guide.
I am able to run:
curl -X GET http://localhost:8080/iou
And:
curl -X GET http://localhost:8080/iou/0
Without a problem. However, I am having trouble running:
curl -X PUT -d '{"issuer":"Alice","owner":"Alice","currency":"AliceCoin","amount":1.0,"observers":[]}' http://localhost:8080/iou
And:
curl -X POST -d '{ "newOwner":"Bob" }' http://localhost:8080/iou/ID/transfer
I get an output of
<html><body><h2>500 Internal Server Error</h2></body></html>
Is there a log somewhere that would let me see what error occurred? How can I debug this issue?
First I stopped the mvn, navigator and sandbox processes. Then I re-ran
da run damlc -- package daml/Main.daml target/daml/iou
Then I restarted the sandbox and re-ran
mvn clean compile exec:java
Now it works fine.
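Collected into one sequence, the fix looks like this (a sketch; the commands are exactly the ones above from the quickstart, and stopping the processes is done manually):
# 1. stop the mvn, navigator and sandbox processes started earlier
# 2. rebuild the DAML package:
da run damlc -- package daml/Main.daml target/daml/iou
# 3. restart the sandbox, then rebuild and rerun the Java app:
mvn clean compile exec:java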

Export contents of the Openshift image to a file

I've been searching for this for a while. I don't have access to the binary items used to build the image because an Artifactory migration ruined the repo. There is one particularly precious binary I would love to extract from the image. I know docker save would save me, but I don't have access to docker, only to the oc client.
EDIT:
After looking around a little, I thought the docker-registry API should be the way to go. Debugging the oc client and the logs of the docker-registry pods, I found that both the v1 and v2 API versions seem to be used.
Somehow I cannot get any further than the version check.
Getting the auth token and registry URL from oc:
TOKEN=`oc whoami -t`
URL="https://"`oc -n default get route docker-registry -o jsonpath="{.status.ingress[0].host}"`
Then getting a correct response to:
curl -k -X GET -H "Authorization: Bearer $TOKEN" "$URL/v2/"
...
HTTP/1.1 200 OK
but:
curl -k -X GET -H "Authorization: Bearer $TOKEN" "$URL/v2/_catalog"
...
HTTP/1.1 400 Bad Request
You can log in to the internal image registry, if it is exposed, and then pull the image back down to your local system and do what you want with it. Instructions for logging in can be found in:
http://cookbook.openshift.org/image-registry-and-image-streams/how-do-i-push-an-image-to-the-internal-image-registry.html
That talks about doing a push, but you want to do a pull.
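A minimal sketch of that pull, assuming docker is available on the local machine and that myproject/myimage:latest is a placeholder for the image stream you need (the route lookup matches the question above):
# Log in to the exposed registry with the OpenShift token, then pull and save.
REGISTRY=`oc -n default get route docker-registry -o jsonpath="{.status.ingress[0].host}"`
docker login -u `oc whoami` -p `oc whoami -t` $REGISTRY
docker pull $REGISTRY/myproject/myimage:latest
docker save $REGISTRY/myproject/myimage:latest -o myimage.tar
The precious binary can then be dug out of the layer tarballs inside myimage.tar.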

Ambari cluster + Service Auto Start Configuration by API

Ambari services can be configured to start automatically on system boot. Each service can be configured to start all components, masters and workers, or selectively.
So how can I enable all services in the Ambari cluster to start automatically on system boot via the API?
Remark: by default, all services are disabled.
You may use the auto-restart API; refer to the following document: https://cwiki.apache.org/confluence/display/AMBARI/Recovery%3A+auto+start+components
Syntax. The syntax of the API is as follows:
curl -u admin:<password> -H "X-Requested-By: ambari" -X PUT 'http://<ambari host>:<ambari port>/api/v1/clusters/<cluster_name>/components?ServiceComponentInfo/component_name.in(<component name>)' -d '{"ServiceComponentInfo" : {"recovery_enabled":"true"}}'
Example. To set auto-restart for the App Timeline Server component of the YARN service, use the curl command as follows:
curl -u admin:<password> -H "X-Requested-By: ambari" -X PUT 'http://localhost:8080/api/v1/clusters/HDPCL/components?ServiceComponentInfo/component_name.in(APP_TIMELINE_SERVER)' -d '{"ServiceComponentInfo" : {"recovery_enabled":"true"}}'
NOTE: You can find the list of components at http://<ambari host>:<ambari port>/api/v1/clusters/<cluster name>/components
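To enable auto-start for everything at once, one option is to loop over that component list and issue the PUT for each entry; a sketch, assuming admin credentials, the HDPCL cluster from the example, and that scraping the JSON output with grep/awk is acceptable:
# Enable recovery (auto-start) for every component in the cluster.
AMBARI="http://localhost:8080/api/v1/clusters/HDPCL"
for comp in `curl -s -u admin:admin $AMBARI/components | grep '"component_name"' | awk -F'"' '{print $4}'`; do
  curl -u admin:admin -H "X-Requested-By: ambari" -X PUT \
    "$AMBARI/components?ServiceComponentInfo/component_name.in($comp)" \
    -d '{"ServiceComponentInfo" : {"recovery_enabled":"true"}}'
done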

Error while trying to run a MapReduce job on FIWARE-Cosmos using Tidoop REST API

I am following this guide on GitHub and I am not able to run the example MapReduce job mentioned in Step 5.
I am aware that this file no longer exists:
/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar
And I am aware that the same file can now be found here:
/usr/lib/hadoop-0.20/hadoop-examples-0.20.2-cdh3u6.jar
So I form my call as below:
curl -v -X POST "http://computing.cosmos.lab.fiware.org:12000/tidoop/v1/user/$user/jobs" -d '{"jar":"/usr/lib/hadoop-0.20/hadoop-examples-0.20.2-cdh3u6.jar","class_name":"WordCount","lib_jars":"/usr/lib/hadoop-0.20/hadoop-examples-0.20.2-cdh3u6.jar","input":"testdir","output":"testoutput"}' -H "Content-Type: application/json" -H "X-Auth-Token: $TOKEN"
The input directory exists in my hdfs user space and there is a file called testdata.txt inside it. The testoutput folder does not exist in my hdfs user space since I know it creates problems.
When I execute this curl command, the error I get is {"success":"false","error":1} which is not very descriptive. Is there something I am missing here?
This has just been tested with my user frb and a valid token for that user:
$ curl -X POST "http://computing.cosmos.lab.fiware.org:12000/tidoop/v1/user/frb/jobs" -d '{"jar":"/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar","class_name":"wordcount","lib_jars":"/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar","input":"testdir","output":"outputtest"}' -H "Content-Type: application/json" -H "X-Auth-Token: xxxxxxxxxxxxxxxxxxx"
{"success":"true","job_id": "job_1460639183882_0011"}
Please observe that the fat jar with the MapReduce examples in the "new" cluster (computing.cosmos.lab.fiware.org) is at /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar, as detailed in the documentation. /usr/lib/hadoop-0.20/hadoop-examples-0.20.2-cdh3u6.jar was the fat jar in the "old" cluster (cosmos.lab.fiware.org).
EDIT 1
Finally, it turned out that the user had no account in the "new" pair of Cosmos clusters in FIWARE Lab (storage.cosmos.lab.fiware.org and computing.cosmos.lab.fiware.org), where Tidoop runs, but only in the "old" cluster (cosmos.lab.fiware.org). Thus, the issue was fixed by simply provisioning an account in the "new" ones.
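Once the job succeeds, the output can be checked over WebHDFS on the storage cluster; a sketch, assuming (as in the Cosmos guides) that storage.cosmos.lab.fiware.org exposes HttpFS/WebHDFS on port 14000 and that $user and $TOKEN are the same as in the question:
# List the job output directory in the user's HDFS space.
curl -X GET "http://storage.cosmos.lab.fiware.org:14000/webhdfs/v1/user/$user/testoutput?op=LISTSTATUS&user.name=$user" \
  -H "X-Auth-Token: $TOKEN"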

Use cURL to add a JSON web page's data in Solr

I see from the UpdateJSON page how to use a command prompt to index a standalone file stored locally. Using this example I was able to successfully make a .json file accessible through Solr:
curl 'http://localhost:8983/solr/update/json?commit=true' --data-binary @books.json -H 'Content-type:application/json'
What I'm not able to find is the proper syntax to do the same for a web page containing JSON data. I've tried with the @:
curl 'http://localhost:8983/solr/update/json?commit=true' --data-binary @[URL] -H 'Content-type:application/json'
and without:
curl 'http://localhost:8983/solr/update/json?commit=true' --data-binary [URL] -H 'Content-type:application/json'
Both ways lead to errors. How do I configure a command to prompt Solr to index the contents at [URL]?
According to the documentation (https://wiki.apache.org/solr/ContentStream), you should first ensure remote streaming is enabled (in solrconfig.xml, search for enableRemoteStreaming).
Then the command should be of this kind:
curl 'http://localhost:8983/solr/update/json?commit=true&stream.url=YOURURL' -H 'Content-type:application/json'
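Enabling remote streaming is a one-attribute change in solrconfig.xml; a sketch (the upload limit value is an assumption carried over from the stock example config):
<!-- in solrconfig.xml, inside <requestDispatcher>: allow stream.url / stream.file -->
<requestParsers enableRemoteStreaming="true" multipartUploadLimitInKB="2048000" />
After editing solrconfig.xml, reload or restart the Solr core so the change takes effect.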