I've uploaded a file to OSS and have its object ID. If the bucket object has not been translated yet, how can I check the derivatives info with the object ID?
It's straightforward: Base64-encode your objectId, then call GET {urn}/manifest. If it returns a 404 HTTP status code, that URN hasn't been translated yet.
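For example, here is a minimal sketch of deriving the URN from an objectId in Node.js (the objectId value is hypothetical; Model Derivative expects URL-safe Base64 without padding):

// Hypothetical objectId returned by the OSS upload
const objectId = 'urn:adsk.objects:os.object:mybucket/test.rvt';

// URL-safe Base64 encoding without padding
const urn = Buffer.from(objectId)
    .toString('base64')
    .replace(/\+/g, '-')
    .replace(/\//g, '_')
    .replace(/=+$/, '');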
If your file is stored on BIM 360/ACC, you will need to get the derivative URN from the file's version tip. Please follow this tutorial, but look for relationships.derivatives.data.id instead for the URN, as in the example below:
https://forge.autodesk.com/en/docs/bim360/v1/tutorials/document-management/download-document/#step-4-find-the-storage-object-id-for-the-file
"derivatives": {
"data": {
"type": "derivatives",
"id": "dXJuOmFkc2sud2lwcHJvZDpmcy5maWxlOnZmLkVueWtrU3FjU0lPVTVYMGhRdy1mQUM_dmVyc2lvbj0x"
},
// ...
},
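A minimal sketch of pulling that URN out of the version response (assuming version holds the JSON returned by the Data Management GET version endpoint from the tutorial):

// The id is already Base64-encoded, so it can be passed straight to GET {urn}/manifest
const derivativeUrn = version.data.relationships.derivatives.data.id;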
Node.js code sample tested with yiskang/forge-viewmodels-nodejs-svf2
const {
    DerivativesApi
} = require('forge-apis');
const { getClient, getPublicToken } = require('./routes/common/oauth');

const derivativeApi = new DerivativesApi();
const urn = 'dXJuOmFkc2sub2JqZWN0czpvcy5vYmplY3Q6bXlidWNrZXQvdGVzdC5ydnQ';

getPublicToken().then(accessToken => {
    derivativeApi.getManifest(urn, {}, null, accessToken).then(function (res) {
        console.log(res.statusCode, res.statusMessage);
    },
    function (err) {
        // When the URN hasn't been translated, it ends up here
        console.error('error', err.statusCode, err.statusMessage);
        // If you want to redirect the page somewhere, write your code here
    });
}, function (err) {
    console.error(err);
});
ref: https://stackoverflow.com/a/70664111/7745569
I'm using React to create a web application. I have a DynamoDB table in AWS and an AppSync API configured.
I'm using the following to make an API call:
const [items, setItems] = useState([]);

useEffect(() => {
    (async () => {
        const apiGroups = await API.graphql(graphqlOperation(queries.getNtig, { PK: "Status", SK: "Active" }));
        setItems(apiGroups.data.getNtig.Group);
    })();
}, []);
Later on I use the results to create a dropdown. I had this working perfectly with REST, but I'm trying to switch to GraphQL.
I see the JSON response in the web console:
{
    "data": {
        "getNTIG": {
            "PK": "Status",
            "SK": "Active",
            "Group": [
                "Group1",
                "Group2"
            ]
        }
    }
}
I always get Unhandled Rejection (TypeError): apiGroups.data.getNtig is undefined
Any help greatly appreciated.
The problem is that the key you are referring to is named getNTIG, not getNtig. GraphQL is case sensitive, so it is important to use the right case.
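For example, a corrected version of the call above (a sketch, assuming the generated query is also named getNTIG):

const apiGroups = await API.graphql(graphqlOperation(queries.getNTIG, { PK: "Status", SK: "Active" }));
setItems(apiGroups.data.getNTIG.Group);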
I've been following this viewer walkthrough tutorial (Node.js) for uploading and showing a file in the Forge Viewer.
I've been using Angular to recreate the example, except instead of a user uploading a file, the file is hardcoded into the app from my assets folder for testing purposes.
The issue comes when I try to translate the Revit file into SVF.
I know there isn't an issue with the Revit file itself, as I have used models.autodesk.io to check that all is good.
I can successfully create a bucket and post a job, but when checking the translation status to see if the translation is complete, I receive this:
{
    "type": "manifest",
    "hasThumbnail": "false",
    "status": "failed",
    "progress": "complete",
    "region": "US",
    "urn": "dXJuOmFkc2sub2JqZWN0czpvcy5vYmplY3Q6OTA0ZmZmYmMtODI1Ni00OWY2LWI3YzYtNDI3MmM1ZDlmNDljL2RyYXBlLnJ2dA",
    "version": "1.0",
    "derivatives": [
        {
            "name": "drape.rvt",
            "hasThumbnail": "false",
            "status": "failed",
            "progress": "complete",
            "messages": [
                {
                    "type": "error",
                    "code": "Revit-UnsupportedFileType",
                    "message": "<message>The file is not a Revit file or is not a supported version.</message>"
                },
                {
                    "type": "error",
                    "message": "Possibly recoverable warning exit code from extractor: -536870935",
                    "code": "TranslationWorker-RecoverableInternalFailure"
                }
            ],
            "outputType": "svf"
        }
    ]
}
I'm pretty sure my code for translating a project is correct; I think the issue is coming from uploading the file to my bucket.
The body of the PUT request must contain the contents of the file.
Here is my code for loading and reading a file using XMLHttpRequest and FileReader:
loadFile(bucketKey, accessToken) {
    const reader: XMLHttpRequest = new XMLHttpRequest();
    reader.open('GET', './assets/drape.rvt', true);
    reader.responseType = 'blob';
    reader.onloadend = (request) => {
        const blob: Blob = reader.response;
        console.log(blob); // returns Blob {size: 372736, type: "text/xml"}
        // Create file from blob
        const modelFile: File = new File([blob], 'drape.rvt');
        this.readFile(bucketKey, accessToken, modelFile);
    };
    reader.send();
}
readFile(bucketKey, accessToken, modelFile) {
    const myReader: FileReader = new FileReader();
    myReader.readAsArrayBuffer(modelFile);
    myReader.onloadend = (e) => {
        const arrayBuffer: ArrayBuffer = myReader.result as ArrayBuffer;
        this.fileToBucket(bucketKey, accessToken, arrayBuffer);
    };
}
And the PUT request:
fileToBucket(bucketKey, accessToken, fileContent) {
    const encodedBucketKey = encodeURIComponent(bucketKey);
    const encodedFileName = encodeURIComponent('drape.rvt');
    const uploadURI = `https://developer.api.autodesk.com/oss/v2/buckets/${encodedBucketKey}/objects/${encodedFileName}`;
    const options = {
        headers: new HttpHeaders({
            'Content-Type': 'application/octet-stream',
            Authorization: 'Bearer ' + accessToken
        })
    };
    const body = {
        data: fileContent
    };
    this.http.put(uploadURI, body, options)
        .subscribe(
            success => {
                // URL safe base64 encoding
                const urn = btoa(success.objectId);
                this.translateObject(accessToken, urn);
            }, error => {
                console.log('fileToBucket');
                console.log(error);
            });
}
I'm assuming that the file content is the issue; here is the equivalent using Node.js from the tutorial: PUT request read file.
You can use a utility web app like https://oss-manager.autodesk.io/ to check if the file you previously uploaded is correct (by downloading it or trying to translate it to SVF through the app's UI), then upload the file using this utility and try to translate it with your app. It can also be used to delete all the derivatives for a given file in your bucket.
That could help narrow down the issue.
It's also possible that the file was not uploaded correctly the first time (at a certain point when you were still testing things), so the translation failed back then, and now it won't try to translate the file again. You can force the translation to take place by adding the x-ads-force header to the POST Job request with the value "true" - see https://forge.autodesk.com/en/docs/model-derivative/v2/reference/http/job-POST/
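For reference, a minimal sketch (Node.js with axios; the urn and accessToken variables are assumed) of re-posting the translation job with x-ads-force so a previously failed manifest is replaced:

const axios = require('axios');

async function forceTranslate(urn, accessToken) {
    const job = {
        input: { urn },
        output: { formats: [{ type: 'svf', views: ['2d', '3d'] }] }
    };
    const res = await axios.post(
        'https://developer.api.autodesk.com/modelderivative/v2/designdata/job',
        job,
        {
            headers: {
                'Content-Type': 'application/json',
                Authorization: 'Bearer ' + accessToken,
                'x-ads-force': 'true' // re-translate even though a (failed) manifest already exists
            }
        }
    );
    return res.data;
}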
Basically, I am setting up a web server via Node.js and Express (I am a beginner at this) to retrieve data by reading a JSON file.
For example, this is my data.json file:
[
    {
        "color": "black",
        "category": "hue",
        "type": "primary"
    },
    {
        "color": "red",
        "category": "hue",
        "type": "primary"
    }
]
I am trying to retrieve all of the colors with this code, so they display on localhost:
router.get('/colors', function (req, res) {
    fs.readFile(__dirname + '/data.json', 'utf8', function (err, data) {
        data = JSON.parse(data);
        res.json(data); // this displays all of the contents of data.json
    })
});

router.get('/colors:name', function (req, res) {
    fs.readFile(__dirname + '/data.json', 'utf8', function (err, data) {
        data = JSON.parse(data);
        for (var i = 0; i < data.length; i++) {
            res.json(data[i][1]); // trying to display the values of color
        }
    })
});
How do I go about doing this?
What you are trying to do is actually pretty simple once you break it into smaller problems. Here is one way to break it down:
Load your JSON data into memory for use by your API.
Define an API route which extracts only the colours from your JSON data and sends them to the client as a JSON.
var data = [];

try {
    data = JSON.parse(fs.readFileSync('/path/to/json'));
} catch (e) {
    // Handle JSON parse error, file-not-exists error, etc.
    data = [{
        "color": "black",
        "category": "hue",
        "type": "primary"
    },
    {
        "color": "red",
        "category": "hue",
        "type": "primary"
    }];
}

router.get('/colors', function (req, res, next) {
    var colors = data.map(function (item) {
        return item.color;
    }); // This will look like: ["black","red"]
    res.json(colors); // Send your array as a JSON array to the client calling this API
})
Some improvements in this method:
The file is read only once, synchronously, when the application starts, and the data is cached in memory for future use.
Using Array.prototype.map to extract an array of colors from the objects.
Note:
You can structure the array of colors however you like and send it down as a JSON in that structure.
Examples:
var colors = data.map(function(item){return {color:item.color};}); // [{"color":"black"},{"color":"red"}]
var colors = {colors: data.map(function(item){return item.color;})} // { "colors" : ["black" ,"red"] }
Some gotchas in your code:
You are using res.json in a for loop, which is incorrect, as the response should only be sent once. Ideally, you would build the JS object in the structure you need by iterating over your data, then send the completed object once with res.json (which, I'm guessing, internally JSON.stringifys the object and sends it as a response after setting the correct headers).
Reading files is an expensive operation. If you can afford to read the file once and cache the data in memory, that is more efficient (provided your data is not prohibitively large, in which case using files to store the info might be inefficient to begin with).
In Express, you can do it this way:
router.get('/colors/:name', (req, res) => {
    const key = req.params.name
    const content = fs.readFileSync(__dirname + '/data.json', 'utf8')
    const data = JSON.parse(content)
    const values = data.reduce((values, value) => {
        values.push(value[key])
        return values
    }, [])
    // values => ['black', 'red']
    res.send(values)
});
Then curl http://localhost/colors/color returns ['black', 'red'].
What you're looking to do is:
res.json(data[i]['color']);
If you don't really want to use the keys in the JSON, you may want to use the Object.values function:
...
data = JSON.parse(data)
var values = []
for (var i = 0; i < data.length; i++) {
    values.push(Object.values(data[i])[0]) // 0 - color, 1 - category, 2 - type
}
res.json(values) // ["black","red"]
...
You should never use fs.readFileSync in production. Any sync function will block the event loop until its execution is complete, delaying everything afterwards (use with caution if deemed necessary). A few days back I had the worst experience myself and learned that the hard way.
In Express you can define a route with a param or query and use it to filter the contents inside the fs.readFile callback function.
/**
 * Get color by name
 *
 * @param {String} name name of the color
 * @return {Array} array of the color data matching the param
 */
router.get('/colors/:name', (req, res) => {
    const color = req.params.name
    const filename = __dirname + '/data.json';
    fs.readFile(filename, 'utf8', (err, data) => {
        if (err) {
            return res.send([]); // handle any error returned by the readFile function here
        }
        try {
            data = JSON.parse(data); // parse the JSON string to an array
            let filtered = []; // initialise an empty array
            if (data.length > 0) { // we expect an ARRAY of objects; check for the array here or else any map, filter, reduce, forEach call will break the app
                filtered = data.filter((obj) => {
                    return obj.color === color; // keep the object if the condition is true
                });
            }
            return res.send(filtered); // send the response
        }
        catch (e) {
            return res.send([]); // handle any error thrown by JSON.parse here
        }
    });
});
To summarise: use the asynchronous fs.readFile function so that the event loop is not clogged up. Inside the callback, parse the content and then return the response. The return is really important, or else you might end up getting Error: Can't set headers after they are sent.
DISCLAIMER: The code above is untested but should work. It is just to demonstrate the idea.
I don't think you can access the JSON values without their keys. You can iterate over the objects with a forEach loop; look into forEach, it may help you.
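For instance, a minimal sketch along those lines (assuming data is the parsed array from the question):

var colors = [];
data.forEach(function (item) {
    colors.push(item.color); // collect just the color value from each object
});
res.json(colors); // ["black", "red"]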
Background
Recently, I have been working with the Elasticsearch Node.js API to bulk-index a large JSON file. I have successfully parsed the JSON file. Now I should be able to pass the index-ready array into the Elasticsearch bulk command. However, judging from console.log output, the array shouldn't be causing any problems.
Bugged Code
The following code is supposed to take an API URL (with a JSON response) and parse it using the Node.js HTTP library. Then using the Elasticsearch Node.js API, it should bulk-index every entry in the JSON array into my Elasticsearch index.
var APIUrl = /* The URL to the JSON file on the API provider's server. */
var bulk = [];

/*
Used to ready the JSON file for indexing
*/
var makebulk = function(ParsedJSONFile, callback) {
    var JSONArray = path.to.array; /* The array was nested... */
    var action = { index: { _index: 'my_index', _type: 'my_type' } };
    for (const item of items) {
        var doc = { "id": `${item.id}`, "name": `${item.name}` };
        bulk.push(action, doc);
    }
    callback(bulk);
}
/*
Used to index the output of the makebulk function
*/
var indexall = function(madebulk, callback) {
    client.bulk({
        maxRetries: 5,
        index: "my_index",
        type: "my_type",
        body: makebulk
    }, function(err, resp) {
        if (err) {
            console.log(err);
        } else {
            callback(resp.items);
        }
    });
}
/*
Gets the specified URL, parses the JSON object,
extracts the needed data and indexes into the
specified Elasticsearch index
*/
http.get(APIUrl, function(res) {
    var body = '';
    res.on('data', function(chunk) {
        body += chunk;
    });
    res.on('end', function() {
        var APIURLResponse = JSON.parse(body);
        makebulk(APIURLResponse, function(resp) {
            console.log("Bulk content prepared");
            indexall(resp, function(res) {
                console.log(res);
            });
            console.log("Response: ", resp);
        });
    });
}).on('error', function(err) {
    console.log("Got an error: ", err);
});
When I run node bulk_index.js on my web server, I receive the following error: TypeError: Bulk body should either be an Array of commands/string, or a String. However, this doesn't make any sense because the console.log(res) command (From the indexall function under http.get client request) outputs the following:
Bulk content prepared
Response: [ { index: { _index: 'my_index', _type: 'my_type', _id: '1' } },
{ id: '5', name: 'The Name' }, ... },
... 120690 more items ]
The above console output appears to show the array in the correct format.
Question
What does TypeError: Bulk body should either be an Array of commands/string, or a String indicate is wrong with the array I am passing into the client.bulk function?
Notes
My server is currently running Elasticsearch 6.2.4 and Java Development Kit version 10.0.1. Everything works as far as the Elasticsearch server goes, and even my Elasticsearch mappings (I didn't provide the client.indices.putMapping code, however I can if it is needed). I have spent multiple hours reading over every scrap of documentation I could find regarding this TypeError, but I couldn't find much about why it is thrown, so I am not sure where else to look.
Seems to be a typo in your code:

var indexall = function(madebulk, callback) {
    client.bulk({
        maxRetries: 5,
        index: "my_index",
        type: "my_type",
        body: makebulk

Check madebulk vs. makebulk: the function parameter is named madebulk, but body refers to makebulk, which is the function itself rather than the prepared array, hence the TypeError.
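A corrected version of the function (a sketch based on the code above), using the parameter that was actually passed in:

var indexall = function(madebulk, callback) {
    client.bulk({
        maxRetries: 5,
        index: "my_index",
        type: "my_type",
        body: madebulk // the prepared array, not the makebulk function
    }, function(err, resp) {
        if (err) {
            console.log(err);
        } else {
            callback(resp.items);
        }
    });
}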
Below is a sample JSON document. It contains two fields:
{
    "_id": "daef4a0e39c0c7a00feb721f6c4ce8b9",
    "_rev": "2-8c7ef28df59ecbdaa23b536e58691416",
    "name": "sukil",
    "skills": "java"
}
In server.js
var express = require('express');
var app = express();
var cloudant = require('cloudant');

cloudant({ account: "test", password: "test" }, function(err, cloudant) {
    var alice = cloudant.use('opti-update')
    alice.atomic("_design/sample", "inplace", "daef4a0e39c0c7a00feb721f6c4ce8b9", { field: "name", value: "bar" }, function(error, response) {
        console.log(error + "" + response);
    })
})
Here _design/sample is the design document name, inplace is the update function name, and next is the document ID. It returns a document update conflict error, and the response is undefined.
The design document is shown below:
{
    "_id": "_design/sample",
    "_rev": "9-94393ee4665bdfd6fb283e3419a53f24",
    "updates": {
        "inplace": "function(doc,req){var field = req.body.field;var value = req.body.value;doc[field] = value;return [doc,''];}"
    }
}
I want to update the data in Cloudant using the Node.js cloudant module, specifically the name field in the JSON document. I tried the method above, but it gives a document update conflict error. How do I resolve this?
The atomic method already assumes the first parameter is a design document name, so there is no need to explicitly specify the "_design/" prefix:
alice.atomic("sample", "inplace", "daef4a0e39c0c7a00feb721f6c4ce8b9", {field: "name", value: "bar"}, function (error, response) {
console.log(error+""+response);
})
This may be causing the problem.