I checked their API documentation and didn't find anything useful that gives me the ability to create snapshots of the URLs I choose.
Submitting a POST request to the root path (https://pragma.archivelab.org) with JSON data containing url (String) and annotation (Object) fields will save a snapshot of url using the Wayback Machine and store the annotation object, making a bidirectional link between the stored snapshot and annotation entries.
Here's an example of such a request:
curl -X POST -H "Content-Type: application/json" -d '{"url": "google.com", "annotation": {"id": "lst-ib", "message": "Theres a microphone button in the searchbox"}}' https://pragma.archivelab.org
I'm working on an app to translate Revit models that have already been published to the cloud into IFC format, using the Data Management and Model Derivative APIs, and I have run into two key issues with the Model Derivative API.
1. Model-Derivative Translation Issue:
I have run into a failed translation error which has also popped up in other threads:
Model Uploader Error
The file is not a Revit file or is not a supported version
code:"Revit-UnsupportedFileType"
message:"<message>The file is not a Revit file or is not a supported version.</message>"
type:"error"
code:"TranslationWorker-RecoverableInternalFailure"
message:"Possibly recoverable warning exit code from extractor: -536870935"
type:"error
However, my case is somewhat different, and the other answers don't apply. Previous cases failed due to an incorrect Revit version (apparently the translation may work on a 2016 version but not a 2019 version) or to corruption during upload of the file. Neither applies here: the .rvt to .svf translation was successful for this model; only my .rvt to .ifc translation has failed. I am also not uploading through the app, but accessing files that are already on BIM 360 Docs.
Another strange part of this behavior is that the .rvt to .ifc translation was successful for earlier versions of the same model. This leads me to believe there may be a file-size issue, where the latest version of the model is too large for translation, although I haven't found any file-size limits in the Model Derivative documentation.
2. Model-Derivative Download Issue:
Routing the download of a translated file through my server before sending it on to the client, in order to use the 3-legged OAuth token, means downloading the same file twice (once from the Data Management API endpoint to my server, then again from my server to the client). This is problematic for large models.
Currently my solution has been to just pass the 3-legged OAuth token and file URI to the client and have the request go straight from the client to the Autodesk endpoint, although I understand this is bad practice.
I have not found any samples that incorporate this download endpoint; a NodeJS one would be optimal for me. I also wish a Content-Length header were attached to this endpoint's response, to give a better idea of download progress.
The relevant endpoints for both of my issues are here:
https://forge.autodesk.com/en/docs/model-derivative/v2/tutorials/translate-source-file-to-obj/
Translate:
curl -X 'POST' -H 'Authorization: Bearer WmzXZq9MATYyfrnOFpYOE75sL5dh' -H 'Content-Type: application/json' -v 'https://developer.api.autodesk.com/modelderivative/v2/designdata/job' -d '{
  "input": {
    "urn": "dXJuOmFkc2sub2JqZWN0czpvcy5vYmplY3Q6bW9kZWxkZXJpdmF0aXZlL0E1LmlhbQ"
  },
  "output": {
    "formats": [
      {
        "type": "obj"
      }
    ]
  }
}'
Note: I have used "type": "ifc" rather than "type": "obj" as my output format on this endpoint; that is not the issue.
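In other words, the output block in my request body looks like this (everything else in the job payload is unchanged from the sample above):
"output": {
  "formats": [
    {
      "type": "ifc"
    }
  ]
}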
Verify:
curl -X 'GET' -H 'Authorization: Bearer RWLzh098vuF3068r73FI7nF2RORf' -v 'https://developer.api.autodesk.com/modelderivative/v2/designdata/dXJuOmFkc2sub2JqZWN0czpvcy5vYmplY3Q6bW9kZWxkZXJpdmF0aXZlL0E1LmlhbQ/manifest'
Download:
curl -X 'GET' -H 'Authorization: Bearer RWLzh098vuF3068r73FI7nF2RORf' -v 'https://developer.api.autodesk.com/modelderivative/v2/designdata/dXJuOmFkc2sub2JqZWN0czpvcy5vYmplY3Q6bW9kZWxkZXJpdmF0aXZlL0E1LmlhbQ/manifest/urn%3Aadsk.viewing%3Afs.file%3AdXJuOmFkc2sub2JqZWN0czpvcy5vYmplY3Q6bW9kZWxkZXJpdmF0aXZlL0E1LmlhbQ%2Foutput%2Fgeometry%2Fbc3339b2-73cd-4fba-9cb3-15363703a354.obj'
Try setting up a proxy service in your backend to relay user requests: have it fetch the derivatives and inject the token into the request headers server-side, so the token is never exposed to clients and the files don't have to land physically on your backend as a go-between - see here for a sample.
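Here is a minimal sketch of such a proxy in Node.js with Express, since a NodeJS sample was requested. The route path and the getInternalToken() helper are illustrative assumptions, not an official Forge sample:
const express = require('express');
const fetch = require('node-fetch');

const app = express();

app.get('/proxy/:urn/:derivativeUrn', async (req, res) => {
  // getInternalToken() is a hypothetical helper that returns a token cached on the server.
  const token = await getInternalToken();
  const url = 'https://developer.api.autodesk.com/modelderivative/v2/designdata/'
    + req.params.urn + '/manifest/' + encodeURIComponent(req.params.derivativeUrn);
  const upstream = await fetch(url, { headers: { 'Authorization': 'Bearer ' + token } });
  res.status(upstream.status);
  // Forward Content-Length when Autodesk supplies one, so the client can show progress.
  const len = upstream.headers.get('content-length');
  if (len) res.set('Content-Length', len);
  // Stream the body straight through without buffering the whole file in memory.
  upstream.body.pipe(res);
});

app.listen(3000);
Because the response is piped, the file streams through the proxy rather than being downloaded twice, and the token stays server-side.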
Will update this answer once we get to the bottom of the first issue.
How can I update rpccorsdomain value with my private node already mining?
Note
I have already set it up to point to a URL (--rpccorsdomain "http://mywebsite.com") and need to update it to a different value.
You can restart the RPC API using either the console or through a curl command.
In the console, you can issue an admin.stopRPC() and then restart it passing in the new cors value with admin.startRPC(host, port, newCorsList, apis).
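For example, in the geth console (host, port, cors value, and API list here are illustrative):
admin.stopRPC()
admin.startRPC("127.0.0.1", 8545, "http://myotherwebsite.com", "web3,eth,net,admin")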
If you prefer using curl:
curl -X POST -H "Content-Type: application/json" --data '{"jsonrpc": "2.0", "method": "admin_stopRPC", "params": [], "id": 1}' nodeHostName:nodePortNumber
curl -X POST -H "Content-Type: application/json" --data '{"jsonrpc": "2.0", "method": "admin_startRPC", "params": ["<host>", <port>, "<corsList>", "<apis>"], "id": 1}' nodeHostName:nodePortNumber
The full list of available management APIs can be found here.
Alternatively, you can just stop the node and restart it passing in the new cors list via command line options.
I have a Zabbix web monitoring task where I need to pass JSON data to the URL via an HTTP POST. For example, the curl command to run this request is:
curl -H "Content-Type: application/json" --data @myData.json https://example.net
When I am configuring the Zabbix web monitoring task, where do I put this JSON data?
I see under the "Step of the web scenario" there are fields for 'Post', 'Variable', and 'Headers'. Does the JSON data go directly in one of those fields?
curl --data means a POST request, so paste your JSON data into the Zabbix Post field, and set the JSON header in the Zabbix Headers field: Content-Type: application/json.
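For instance, assuming myData.json holds a simple payload (an illustrative placeholder, since the file's contents aren't shown), the web scenario step would look like:
Post field: {"example": "data"}
Headers field: Content-Type: application/json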
I am having trouble requesting the current status of a resumable upload. Based on the Google documentation, the following request should return a Range header with the current range Google has of my upload, but I keep getting the following response:
Failed to parse Content-Range header
Here is my curl request:
curl -H "Content-Range: bytes */1443452365" -H "Content-Length: 0" locationUrl -X PUT
I have also tried "bytes */*" and "*/*" for the Content-Range header, but no luck.
Any ideas?
First, you need to check the correct request format, like the sample given below:
curl -H "Accept: application/json" -H "Content-type: application/json" -X POST -d '{"id":100}' http://localhost/api/postJsonReader.do
And, as discussed for other command-line tools, when sending raw HTTP data, be aware that POST and PUT operations require computing the value of the Content-Length header. You can use the UNIX tool wc for this: place the entire HTTP body into a text file such as template_entry.xml and run wc -c template_entry.xml. An accidentally incorrect Content-Length value is often difficult to debug.
Lastly, you can request the status between chunks, not just if the upload is interrupted. If the upload request is interrupted, follow the procedure outlined in resume an interrupted upload.
I am new to Couchbase and I want to know about how CRUD can be achieved in it.
I have successfully created a bucket, and I've tried to insert documents into it using curl.
Creating a bucket using curl succeeded, like the following:
curl -X POST -u admin:citrus -d name=test-bucket -d ramQuotaMB=100 -d authType=none -d replicaNumber=2 -d proxyPort=11216 http://example.com:8091/pools/default/buckets
Now how can I create sample documents in this bucket?
How can I achieve this using a REST API? Please help me.
Couchbase isn't designed to use a REST API for data creation or mutation; perhaps you are thinking of CouchDB, which does offer this and shares some similarities with Couchbase, although they are distinct technologies.
You need to use one of the SDKs to interact with your bucket; SDKs are available in all the major languages: Java, Ruby, Python, C, etc. Check out the list of them here; they also contain getting-started guides which cover basic operations such as get/set, as well as more complex examples of views and topics such as locking. A minimal Node.js sketch follows the link below.
http://docs.couchbase.com/
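For instance, here's a minimal sketch using the Couchbase Node.js SDK 2.x (npm install couchbase); the connection string, document ID, and content are illustrative:
const couchbase = require('couchbase');

const cluster = new couchbase.Cluster('couchbase://example.com');
const bucket = cluster.openBucket('test-bucket');

// Create (or overwrite) a document with the ID 'doc::1'.
bucket.upsert('doc::1', { type: 'sample', message: 'hello' }, (err) => {
  if (err) throw err;
  // Read the document back to verify.
  bucket.get('doc::1', (err, res) => {
    if (err) throw err;
    console.log(res.value);
    bucket.disconnect();
  });
});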
I think you should use the Sync Gateway of Couchbase. The Sync Gateway provides a REST API which allows you to Create, Read, Update and Delete (CRUD) documents.
For example, if you have a Couchbase Server running on port 8091 with a bucket called test-bucket, you can set up your Sync Gateway with the following content in your sync_gateway.json configuration file:
{
  "log": ["HTTP+"],
  "adminInterface": "127.0.0.1:4985",
  "interface": "0.0.0.0:4984",
  "databases": {
    "test-db": {
      "server": "http://localhost:8091",
      "bucket": "test-bucket",
      "username": "test-bucket",
      "password": "test-bucket-password",
      "users": {
        "GUEST": {"disabled": false, "admin_channels": ["*"] }
      }
    }
  }
}
Then, after starting the Sync Gateway, you can create a document like the following:
curl -X PUT -H 'Content-Type: application/json' http://localhost:4984/test-db/myNewDocId -d @document.file
With document.file being a file containing the JSON content of the document you'd like to create, and myNewDocId being the ID of the new document.
You can find all supported REST API methods and details in the official documentation: http://developer.couchbase.com/documentation/mobile/1.1.0/develop/references/sync-gateway/rest-api/document/index.html
Couchbase now supports adding documents via a curl call.
You can do:
curl localhost:8093/query/service -u uname:passwd -d 'statement=INSERT INTO `bucketName` (KEY, VALUE) VALUES ( "my_doc_id", {"Price":"price"} );'
Note that we are using port 8093, which is the query service port. For this to run, you must have the query service (N1QL) running.