Forge API translating .fbx to SVF2 doesn't work and translates only to SVF - autodesk-forge

I am using the forge-apis package on Node.js and I want to translate a .fbx file to SVF2. When I do so and load the model, the size and GPU memory used are the same as with a normal translation to SVF, and when I check viewer.model.isSVF2() it returns false.
const {
    DerivativesApi,
    JobPayload,
    JobPayloadInput,
    JobPayloadOutput,
    JobSvfOutputPayload
} = require('forge-apis');
and
router.post('/jobs', async (req, res, next) => {
    const xAdsForce = (req.body.xAdsForce === true);
    let job = new JobPayload();
    job.input = new JobPayloadInput();
    job.input.urn = req.body.objectName;
    if (req.body.rootFilename && req.body.compressedUrn) {
        job.input.rootFilename = req.body.rootFilename;
        job.input.compressedUrn = req.body.compressedUrn;
    }
    job.output = new JobPayloadOutput([
        new JobSvfOutputPayload()
    ]);
    job.output.formats[0].type = 'svf2';
    job.output.formats[0].views = ['2d', '3d'];
    try {
        // Submit a translation job using [DerivativesApi](https://github.com/Autodesk-Forge/forge-api-nodejs-client/blob/master/docs/DerivativesApi.md#translate).
        const result = await new DerivativesApi().translate(job, { xAdsForce: xAdsForce }, req.oauth_client, req.oauth_token);
        res.status(200).end();
    } catch (err) {
        next(err);
    }
});
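For reference, one way to confirm which derivative formats the service actually produced is to inspect the manifest once the job completes. A minimal sketch using the same forge-apis client and the oauth objects from the request; this diagnostic route is hypothetical, not part of the original app:

router.get('/jobs/:urn/manifest', async (req, res, next) => {
    try {
        // getManifest is part of the same DerivativesApi client.
        const { body } = await new DerivativesApi().getManifest(
            req.params.urn, {}, req.oauth_client, req.oauth_token);
        // Each derivative reports the outputType that was actually
        // generated ('svf', 'svf2', ...) along with its status.
        res.json(body.derivatives.map(d => ({ type: d.outputType, status: d.status })));
    } catch (err) {
        next(err);
    }
});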
How can I handle this problem? Thanks a lot.

Related

API call to JSON to interface to Mat-Tree

I'm running into issues trying to convert a JSON response from an API call into an interface that will be accepted by buildFileTree. The call pulls from SQL, it works in Dapper, and I can see the array of data in my web app's console. The issue is when I change the initialize() value for buildFileTree from my static JSON file 'SampleJson' (inside the project) to my new interface 'VehicleCatalogMod': the tree shows up with SampleJson, but when I switch the data to VehicleCatalogMod, the tree collapses.
dataStoreNew: VehicleCatalogMod[] = [];

constructor(private _servicesService: ServicesService) {
    this._servicesService.GetVehicleCat()
        .subscribe(data => {
            this.dataStoreNew = [];
            this.dataStoreNew = data;
            console.log(data);
        });
    this.initialize();
}

initialize() {
    this.treeData = SampleJson;
    // Working with SampleJson; this is where the problem happens
    const data = this.buildFileTree(VehicleCatalogMod, 0);
    console.log(data);
    this.dataChange.next(data);
}
buildFileTree(obj: object, level: number): TodoItemNode[] {
    return Object.keys(obj).reduce<TodoItemNode[]>((accumulator, key) => {
        let value = obj[key];
        const node = new TodoItemNode();
        node.item = key;
        if (value != null) {
            if (typeof value === 'object') {
                node.children = this.buildFileTree(value, level + 1);
            } else {
                node.item = value;
            }
        }
        return accumulator.concat(node);
    }, []);
}

GetVehicleCat(): Observable<any> {
    console.log('Vehicle Catalog got called');
    return this.http.get('https://api/right/here',
        { headers: this.options.headers });
}
I tried a multitude of different things to get this working and I'm pretty much stuck. The same error occurs when I try this.dataStoreNew instead. There are no errors in the console; it literally just collapses the tree into one indistinguishable line. Also, when I used const vcm = new VehicleCatalogMod(); the tree popped up with the different properties, but not the API values.
[Screenshots attached in the original post: the rendered tree with VehicleCatalogMod vs. with SampleJson.]
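For what it's worth, two things in the snippets above look like likely culprits: GetVehicleCat() is asynchronous, so initialize() runs before dataStoreNew is populated, and buildFileTree is handed the VehicleCatalogMod type itself rather than the fetched data. A minimal, untested rearrangement under those assumptions:

constructor(private _servicesService: ServicesService) {
    this._servicesService.GetVehicleCat()
        .subscribe(data => {
            this.dataStoreNew = data;
            // Build the tree only once the API data has actually arrived,
            // and pass the data itself, not the interface name.
            this.initialize(this.dataStoreNew);
        });
}

initialize(source: object) {
    const data = this.buildFileTree(source, 0);
    this.dataChange.next(data);
}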

How to save & retrieve an image to/from MySQL?

Two-part question.
Part 1:
I'm uploading an image to my server and want to save it to my database.
So far:
table:
resolver:
registerPhoto: inSequence([
    async (obj, { file }) => {
        const { filename, mimetype, createReadStream } = await file;
        const stream = createReadStream();
        const t = await db.images.create({
            Name: 'test',
            imageData: stream,
        });
    },
])
executing query:
Executing (default): INSERT INTO `images` (`Id`,`imageData`,`Name`) VALUES (DEFAULT,?,?);
But nothing is saved.
I'm new to this and am probably missing something, but I don't know what.
Part2:
This follows on from part 1: let's say I manage to save the image; how do I read it and send it back to my FE?
An edit: I've read a lot of guides that save an image name to the DB and the actual image in a folder. This is NOT what I'm after; I want to save the image to the DB and then be able to fetch it from the DB and present it.
This took me some time, but I finally figured it out.
First step (saving to the DB):
You have to get the entire stream data and read it like this:
export const readStream = async (stream, encoding = 'base64') => {
    stream.setEncoding(encoding);
    return new Promise((resolve, reject) => {
        let data = '';
        // eslint-disable-next-line no-return-assign
        stream.on('data', chunk => (data += chunk));
        stream.on('end', () => resolve(data));
        stream.on('error', error => reject(error));
    });
};
Use it like this:
const streamData = await readStream(stream);
Before saving, I turn the stream data into a buffer (note that streamData is a base64 string here, so the buffer holds that text rather than the raw image bytes; the retrieval code below relies on this):
const buff = Buffer.from(streamData);
Finally, the save part:
db.images.create(
    {
        Name: filename,
        imageData: buff,
        Length: stream.bytesRead,
        Type: mimetype,
    },
    { transaction: param }
);
Note that I added the Length and Type parameters; these are needed if you want to return a stream when you return the image.
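For reference, the db.images.create call suggests Sequelize; under that assumption, the images model might be defined along these lines, with column names matching the fields used above (a sketch, not the actual model from the post; sequelize is an assumed connection instance):

const { DataTypes } = require('sequelize');

// Hypothetical model matching the fields used in create() above.
// BLOB('long') maps to LONGBLOB on MySQL, large enough for whole images.
const Images = sequelize.define('images', {
    Name: { type: DataTypes.STRING },
    imageData: { type: DataTypes.BLOB('long') },
    Length: { type: DataTypes.INTEGER },
    Type: { type: DataTypes.STRING },
});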
Step 2 (retrieving the image):
As @xadm said multiple times, you cannot return an image from GraphQL, and after some time I had to accept that fact; hopefully GraphQL will remedy this in the future.
What I needed to do was set up a route on my Fastify backend, send an image Id to this route, fetch the image, and then return it.
I had a few different approaches to this, but in the end I simply returned the binary data and encoded it to base64 on the frontend.
Backend part:
const handler = async (req, reply) => {
    const p: postParams = req.params;
    const parser = uuIdParserT();
    const img = await db.images.findByPk(parser.setValueAsBIN(p.id));
    // imageData holds the base64 text saved earlier; 'binary' (latin1)
    // decoding recovers that exact string.
    const binary = img.dataValues.imageData.toString('binary');
    const myStream = new Readable({
        read() {
            this.push(Buffer.from(binary));
            this.push(null);
        },
    });
    reply.send(myStream);
};

export default (server: FastifyInstance) =>
    server.get<null, any>('/:id', opts, handler);
Frontend part:
useEffect(() => {
    // axiosState is the obj that holds the image
    if (!axiosState.loading && axiosState.data) {
        // @ts-ignore
        const b64toBlob = (b64Data, contentType = '', sliceSize = 512) => {
            const byteCharacters = atob(b64Data);
            const byteArrays = [];
            for (let offset = 0; offset < byteCharacters.length; offset += sliceSize) {
                const slice = byteCharacters.slice(offset, offset + sliceSize);
                const byteNumbers = new Array(slice.length);
                // @ts-ignore
                // eslint-disable-next-line no-plusplus
                for (let i = 0; i < slice.length; i++) {
                    byteNumbers[i] = slice.charCodeAt(i);
                }
                const byteArray = new Uint8Array(byteNumbers);
                byteArrays.push(byteArray);
            }
            const blob = new Blob(byteArrays, { type: contentType });
            return blob;
        };
        const blob = b64toBlob(axiosState.data, 'image/jpg');
        const urlCreator = window.URL || window.webkitURL;
        const imageUrl = urlCreator.createObjectURL(blob);
        setimgUpl(imageUrl);
    }
}, [axiosState]);
and finally in the HTML:
<img src={imgUpl} alt="NO" className="imageUpload" />
OTHER:
For anyone attempting the same, NOTE that this is not considered best practice.
Almost every article I found saved the images on the server and stored an image Id and other metadata in the database. For the exact pros and cons of this, I found the following helpful:
Storing Images in DB - Yea or Nay?
I was focused on finding out how to do it if, for some reason, I want to save an image in the database, and I finally solved it.
There are two ways to store images in your SQL database. You either store the actual image on your server and save the image path inside your MySQL DB, OR you create a BLOB from the image and store it in the DB.
Here is a handy read: https://www.technicalkeeda.com/nodejs-tutorials/nodejs-store-image-into-mysql-database
You should save the image in a directory and save the link to this image in the database.

Ionic 4: Recording Audio and Sending to Server (File Gets Corrupted)

I would like to record an audio file in a mobile application (iOS & Android) and transfer it to the server as FormData in Ionic 4. I have used "cordova-plugin-media" to capture the audio with the logic below:
if (this.platform.is('ios')) {
    this.filePaths = this.file.documentsDirectory;
    this.fileExtension = '.m4a';
} else if (this.platform.is('android')) {
    this.filePaths = this.file.externalDataDirectory;
    this.fileExtension = '.3gp';
}

this.fileName = 'recording' + new Date().getHours() + new Date().getMinutes() + new Date().getSeconds() + this.fileExtension;

if (this.filePaths) {
    this.file.createFile(this.filePaths, this.fileName, true).then((entry: FileEntry) => {
        this.audio = this.media.create(entry.toInternalURL());
        this.audio.startRecord();
    });
}
I have also tried to create the media directly, without the file creation step.
I can record and play the audio, but when I try to send this file to the server using the logic below, it doesn't transfer properly (corrupted data), and the web application is unable to play .m4a files.
Please correct me if I am doing anything wrong in my code.
Upload logic:
let formData: FormData = new FormData();
formData.append('recordID', feedbackID);
that.file.readAsDataURL(filePath, file.name).then((data) => {
    const audioBlob = new Blob([data], { type: file.type });
    formData.append('files', audioBlob, file.name);
    that.uploadFormData(formData, feedbackID); // POST logic
});
I have used the solution suggested by Ankush and it works fine: use readAsArrayBuffer instead of readAsDataURL.
The .m4a format is supported on both iOS and Android, and I can also download the same file from the web application.
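A sketch of the adjusted upload logic under that change - same names as in the question, with readAsDataURL swapped for readAsArrayBuffer:

let formData: FormData = new FormData();
formData.append('recordID', feedbackID);

// readAsArrayBuffer yields the raw bytes, so the Blob contains real
// binary data instead of the text of a base64 data URL.
that.file.readAsArrayBuffer(filePath, file.name).then((data) => {
    const audioBlob = new Blob([data], { type: file.type });
    formData.append('files', audioBlob, file.name);
    that.uploadFormData(formData, feedbackID);
});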
I am using the code below to upload an image to the server. I assume that only a few modifications will be required to transfer media instead of an image file.
private uploadPicture(imagePath: string, apiUrl: string): Observable<ApiResponse<ImageUploadResponseModel>> {
    return this.convertFileFromFilePathToBlob(imagePath).pipe(
        switchMap(item => this.convertBlobToFormData(item)),
        switchMap(formData => this.postImageToServer(formData, apiUrl))
    );
}
The rest of the functions used in the above code:
private postImageToServer(formData: FormData, apiUrl: string): Observable<ApiResponse<ImageUploadResponseModel>> {
    const requestHeaders = new HttpHeaders({ enctype: 'multipart/form-data' });
    return this.http.post<ApiResponse<ImageUploadResponseModel>>(apiUrl, formData, { headers: requestHeaders });
}

private convertBlobToFormData(blob: Blob): Observable<FormData> {
    return new Observable<FormData>(subscriber => {
        // A Blob() is almost a File() - it's just missing the two properties below, which we add here
        // tslint:disable-next-line: no-string-literal
        blob['lastModifiedDate'] = new Date();
        // tslint:disable-next-line: no-string-literal
        blob['name'] = 'sample.jpeg';
        const formData = new FormData();
        formData.append('file', blob as Blob, 'sample.jpeg');
        subscriber.next(formData);
        subscriber.complete();
    });
}

private convertFileFromFilePathToBlob(imagePath: string): Observable<Blob> {
    return new Observable<Blob>(subscriber => {
        const directoryPath = imagePath.substr(0, imagePath.lastIndexOf('/'));
        let fileName = imagePath.split('/').pop();
        fileName = fileName.split('?')[0];
        this.file.readAsArrayBuffer(directoryPath, fileName).then(fileEntry => {
            const imgBlob: any = new Blob([fileEntry], { type: 'image/jpeg' });
            imgBlob.name = 'sample.jpeg';
            subscriber.next(imgBlob);
            subscriber.complete();
        }, () => {
            subscriber.error('Some error occurred while reading the image from the file path.');
        });
    });
}

Possible to run Headless Chrome/Chromium in a Google Cloud Function?

Is there any way to run Headless Chrome/Chromium in a Google Cloud Function? I understand I can include and run statically compiled binaries in GCF. Can I get a statically compiled version of Chrome that would work for this?
The Node.js 8 runtime for Google Cloud Functions now includes all the necessary OS packages to run Headless Chrome.
Here is a code sample of an HTTP function that returns screenshots:
Main index.js file:
const puppeteer = require('puppeteer');

exports.screenshot = async (req, res) => {
    const url = req.query.url;
    if (!url) {
        return res.send('Please provide URL as GET parameter, for example: ?url=https://example.com');
    }
    const browser = await puppeteer.launch({
        args: ['--no-sandbox']
    });
    const page = await browser.newPage();
    await page.goto(url);
    const imageBuffer = await page.screenshot();
    await browser.close();
    res.set('Content-Type', 'image/png');
    res.send(imageBuffer);
};
and package.json:
{
  "name": "screenshot",
  "version": "0.0.1",
  "dependencies": {
    "puppeteer": "^1.6.2"
  }
}
I've just deployed a GCF function running headless Chrome. A couple of takeaways:
1. you have to statically compile Chromium and NSS on Debian 8
2. you have to patch environment variables to point to NSS before launching Chromium
3. performance is much worse than what you'd get on AWS Lambda (3+ seconds)
For 1, you should be able to find plenty of instructions online.
For 2, the code that I'm using is the following:
static executablePath() {
    let bin = path.join(__dirname, '..', 'bin', 'chromium');
    let nss = path.join(__dirname, '..', 'bin', 'nss', 'Linux3.16_x86_64_cc_glibc_PTH_64_OPT.OBJ');

    // Prepend the NSS bin and lib directories to PATH / LD_LIBRARY_PATH
    // so Chromium can find them at launch.
    if (process.env.PATH === undefined) {
        process.env.PATH = path.join(nss, 'bin');
    } else if (process.env.PATH.indexOf(nss) === -1) {
        process.env.PATH = [path.join(nss, 'bin'), process.env.PATH].join(':');
    }
    if (process.env.LD_LIBRARY_PATH === undefined) {
        process.env.LD_LIBRARY_PATH = path.join(nss, 'lib');
    } else if (process.env.LD_LIBRARY_PATH.indexOf(nss) === -1) {
        process.env.LD_LIBRARY_PATH = [path.join(nss, 'lib'), process.env.LD_LIBRARY_PATH].join(':');
    }

    // /tmp is the writable part of the GCF filesystem: make the bundled
    // binary executable and expose it there via a symlink.
    if (fs.existsSync('/tmp/chromium') === true) {
        return '/tmp/chromium';
    }
    return new Promise((resolve, reject) => {
        try {
            fs.chmod(bin, '0755', () => {
                fs.symlinkSync(bin, '/tmp/chromium');
                return resolve('/tmp/chromium');
            });
        } catch (error) {
            return reject(error);
        }
    });
}
You also need to use a few required arguments when starting Chrome (see the launch sketch after this list), namely:
--disable-dev-shm-usage
--disable-setuid-sandbox
--no-first-run
--no-sandbox
--no-zygote
--single-process
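Putting it together, a launch call using those flags might look like the sketch below, assuming puppeteer is required as in the answer above and the executablePath() helper resolves to the patched /tmp/chromium binary (this would sit inside an async function):

const puppeteer = require('puppeteer');

// Sketch: launch the statically compiled Chromium with the required flags.
// Awaiting executablePath() covers both its string and Promise return paths.
const browser = await puppeteer.launch({
    executablePath: await executablePath(),
    args: [
        '--disable-dev-shm-usage',
        '--disable-setuid-sandbox',
        '--no-first-run',
        '--no-sandbox',
        '--no-zygote',
        '--single-process',
    ],
});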
I hope this helps.
As mentioned in the comment, work is being done on a possible solution to running a headless browser in a Cloud Function. A directly applicable discussion, "headless chrome & aws lambda", can be read on Google Groups.
The question at hand was: can you run headless Chrome or Chromium in Firebase Cloud Functions? The answer is NO, since the Node.js project will not have access to any Chrome/Chromium executables and will therefore fail (TRUST ME - I've tried!).
A better solution is to use the Phantom npm package, which uses PhantomJS under the hood:
https://www.npmjs.com/package/phantom
Docs and info can be found here:
http://amirraminfar.com/phantomjs-node/#/
or
https://github.com/amir20/phantomjs-node
The site I was trying to crawl had implemented anti-scraping measures; the trick is to wait for the page to load by searching for an expected string or regex match. If you need a regex of any complexity made for you, get in touch at https://AppLogics.uk/ - starting at £5 (GBP).
Here is a TypeScript snippet to make the HTTP or HTTPS call:
const phantom = require('phantom');
const instance: any = await phantom.create(['--ignore-ssl-errors=yes', '--load-images=no']);
const page: any = await instance.createPage();
const status = await page.open('https://somewebsite.co.uk/');
const content = await page.property('content');
same again in JavaScript:
const phantom = require('phantom');
const instance = yield phantom.create(['--ignore-ssl-errors=yes', '--load-images=no']);
const page = yield instance.createPage();
const status = yield page.open('https://somewebsite.co.uk/');
const content = yield page.property('content');
That's the easy bit! If it's a static page, you're pretty much done, and you can parse the HTML with something like the cheerio npm package: https://github.com/cheeriojs/cheerio - an implementation of core jQuery designed for servers!
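For example, a minimal cheerio pass over the content string fetched above:

const cheerio = require('cheerio');

// Parse the HTML string returned by page.property('content') and
// query it with jQuery-style selectors.
const $ = cheerio.load(content);
console.log($('title').text());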
However, if it is a dynamically loading page (lazy loading, or even anti-scraping methods), you will need to wait for the page to update by looping, calling the page.property('content') method, and running a text search or regex to see if your page has finished loading.
I have created a generic asynchronous function that returns the page content (as a string) on success and throws an exception on failure or timeout. It takes as parameters the page variable, text (a string whose presence indicates success), error (a string that indicates failure, or null to skip the error check), and timeout (a number - self-explanatory):
TypeScript:
import { isNullOrUndefined } from 'util'; // from Node's util module

async function waitForPageToLoadStr(page: any, text: string, error: string, timeout: number): Promise<string> {
    const maxTime = timeout ? (new Date()).getTime() + timeout : null;
    let html: string = '';
    html = await page.property('content');
    async function loop(): Promise<string> {
        async function checkSuccess(): Promise<boolean> {
            html = await page.property('content');
            if (!isNullOrUndefined(error) && html.includes(error)) {
                throw new Error(`Error string found: ${error}`);
            }
            if (maxTime && (new Date()).getTime() >= maxTime) {
                throw new Error(`Timed out waiting for string: ${text}`);
            }
            return html.includes(text);
        }
        if (await checkSuccess()) {
            return html;
        } else {
            return loop();
        }
    }
    return await loop();
}
JavaScript:
function waitForPageToLoadStr(page, text, error, timeout) {
    return __awaiter(this, void 0, void 0, function* () {
        const maxTime = timeout ? (new Date()).getTime() + timeout : null;
        let html = '';
        html = yield page.property('content');
        function loop() {
            return __awaiter(this, void 0, void 0, function* () {
                function checkSuccess() {
                    return __awaiter(this, void 0, void 0, function* () {
                        html = yield page.property('content');
                        if (!isNullOrUndefined(error) && html.includes(error)) {
                            throw new Error(`Error string found: ${error}`);
                        }
                        if (maxTime && (new Date()).getTime() >= maxTime) {
                            throw new Error(`Timed out waiting for string: ${text}`);
                        }
                        return html.includes(text);
                    });
                }
                if (yield checkSuccess()) {
                    return html;
                } else {
                    return loop();
                }
            });
        }
        return yield loop();
    });
}
I have personally used this function like this:
TypeScript:
try {
    const phantom = require('phantom');
    const instance: any = await phantom.create(['--ignore-ssl-errors=yes', '--load-images=no']);
    const page: any = await instance.createPage();
    const status = await page.open('https://somewebsite.co.uk/');
    await waitForPageToLoadStr(page, '<div>Welcome to somewebsite</div>', '<h1>Website under maintenance, try again later</h1>', 1000);
} catch (error) {
    console.error(error);
}
JavaScript:
try {
    const phantom = require('phantom');
    const instance = yield phantom.create(['--ignore-ssl-errors=yes', '--load-images=no']);
    const page = yield instance.createPage();
    yield page.open('https://vehicleenquiry.service.gov.uk/');
    yield waitForPageToLoadStr(page, '<div>Welcome to somewebsite</div>', '<h1>Website under maintenance, try again later</h1>', 1000);
} catch (error) {
    console.error(error);
}
Happy crawling!

Node.js - Can I store writeable streams as JSON in Redis?

I am still working on fully understanding streams in Node.js. If I create a writable stream, would I be able to store the stream object as JSON in Redis, and then access it later and continue writing to it (after JSON.parse)?
example:
var fs = require('fs');
var redis = require('redis');

var streamName = fs.createWriteStream(upfilePath, streamopts);
streamName = JSON.stringify(streamName);
rclient.set('streamJSON', streamName);
// ...
var myNewdata = 'whatever';
rclient.get('streamJSON', function (err, streamJSON) {
    var recoveredStream = JSON.parse(streamJSON);
    recoveredStream.write(myNewdata, function (err, written, buffer) {
        // write successful??
    });
});
You can't store variable references in Redis. You would only need to store the filename, then reopen the stream with the 'a' flag, which allows you to append data to it.
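A minimal sketch of that approach, reusing upfilePath from the question - store only the path, then reopen in append mode later:

var fs = require('fs');
var redis = require('redis');
var rclient = redis.createClient();

// Persist only the file path, not the stream object itself.
rclient.set('streamPath', upfilePath);

// Later: look the path up and reopen the file with the 'a' (append) flag,
// so new writes continue where the file left off.
rclient.get('streamPath', function (err, path) {
    var stream = fs.createWriteStream(path, { flags: 'a' });
    stream.write('whatever');
    stream.end();
});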
I thought this was a pretty interesting question and created the module below, which allows you to save the state of a stream and then use it later. But I don't see the point if you can just use the 'a' flag. It might be useful for readable streams, though.
var fs = require('fs');

exports.stringify = function(stream) {
    var obj = {
        path: stream.path,
        writable: stream.writable,
        fd: stream.fd,
        options: {
            encoding: stream.encoding,
            mode: stream.mode
        }
    };
    if (stream.writable) {
        obj.bytesWritten = stream.bytesWritten;
    } else {
        obj.options.bufferSize = stream.bufferSize;
        obj.bytesRead = stream.bytesRead;
    }
    return JSON.stringify(obj);
};

exports.parse = function(json, callback) {
    var obj = JSON.parse(json);
    var stream;
    if (obj.writable) {
        obj.options.flags = 'a';
        stream = fs.createWriteStream(obj.path, obj.options);
        stream.bytesWritten = obj.bytesWritten;
    } else {
        stream = fs.createReadStream(obj.path, obj.options);
        stream.bytesRead = obj.bytesRead;
    }
    // if the stream was already opened, wait until it is
    if (obj.fd !== null) {
        stream.on('open', function() {
            callback(null, stream);
        });
    } else {
        process.nextTick(function() {
            callback(null, stream);
        });
    }
    return stream;
};
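Usage might look like the following, assuming the module above is saved as stream-state.js; in practice, the JSON string would sit in Redis between the two steps:

var fs = require('fs');
var streamState = require('./stream-state');

var stream = fs.createWriteStream('/tmp/upload.dat');
stream.write('first chunk');

// Capture the stream's state (path, flags, position) as JSON...
var json = streamState.stringify(stream);
stream.end();

// ...and later rebuild an equivalent stream, reopened in append mode.
streamState.parse(json, function (err, recovered) {
    recovered.write('more data');
    recovered.end();
});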