I have a new bucket with one object inside, uploaded from a Revit file. When I change something in the Revit file and upload it again under the same name, I always get the same old model on Forge as the first time; nothing is updated.
I use Node.js, and my configuration for the upload looks like this:
let opts = { xAdsForce: true };
try {
    await new DerivativesApi().translate(job, opts, req.oauth_client, req.oauth_token);
} catch (err) {
    next(err);
}
Is there something wrong? Thanks in advance!
Passing in the xAdsForce: true option is the right way to force the translation even if an object with the same name has already been translated. According to the docs, this method returns a promise that resolves to a Job object; inspect that value to see whether everything is OK. Also make sure that the upload of your updated Revit file actually succeeded.
Alternatively, you can trigger the translation manually by making a request to the POST job endpoint with the x-ads-force header set to true.
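For reference, here is a minimal sketch of that manual request using Node 18's built-in fetch; accessToken and objectUrn are placeholders you would supply from your own app:

async function forceTranslate(accessToken, objectUrn) {
    const response = await fetch('https://developer.api.autodesk.com/modelderivative/v2/designdata/job', {
        method: 'POST',
        headers: {
            'Authorization': `Bearer ${accessToken}`,
            'Content-Type': 'application/json',
            'x-ads-force': 'true' // re-translate even if derivatives already exist
        },
        body: JSON.stringify({
            input: { urn: objectUrn }, // the base64-encoded URN of the uploaded object
            output: { formats: [{ type: 'svf', views: ['2d', '3d'] }] }
        })
    });
    return response.json(); // the Job object describing the translation
}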
I'm trying to perform a simple download of a .docx file into a buffer so I can handle it later inside my Cloud Function. I've been using the whole Google platform for multiple projects but never faced the need to download server-side, and now that I need to, I just can't.
The following piece of code is not working; the request just times out (I don't even get an error if I try to catch one):
var bucket = admin.storage().bucket("gs://myBucket.com");
return bucket.file("001Lineales/4x3-1/1000.docx").download().then((contents) => {
    var buffer = contents[0];
    // I never get to this point
}).catch((error) => {
    // No error either
});
I tried it in a local Node.js script and it worked as expected. I also tried a createReadStream() download, but no luck; the function hangs on every attempt to download the file.
return new Promise((resolve, reject) => {
    var archivo = bucket.file(selectedCategory).createReadStream();
    var array = [];
    // Nothing below here ever fires
    archivo.on('data', (d) => { array.push(d); }).on('end', () => {
        var newbuff = Buffer.concat(array);
        resolve(newbuff);
    });
});
The file's read/write permissions are public. The main problem is that debugging is difficult because I'm not able to run this function in the local emulator.
What can I do? Thanks in advance.
EDIT:
Double-checking with a local call through the emulator, I get the following error:
Anonymous caller does not have storage.objects.get access to the Google Cloud Storage object.
Double-check the service account that you've assigned to the Cloud Function and make sure you've given it the permission it needs.
I think Storage Object Viewer will give you what you need to read a file into the buffer.
By default, if you haven't changed it, the App Engine default service account gets used, which I don't think has access to Storage.
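Once the role is granted, a download along these lines should work inside the function. This is a minimal sketch; the bucket name is a placeholder, and note that the bucket is referenced by name, without the gs:// prefix:

const admin = require('firebase-admin');
admin.initializeApp(); // in Cloud Functions this picks up the function's service account

async function readDocx() {
    const bucket = admin.storage().bucket('my-bucket'); // placeholder bucket name
    // download() resolves to a one-element array containing a Buffer.
    const [contents] = await bucket.file('001Lineales/4x3-1/1000.docx').download();
    return contents; // a Buffer with the file's bytes
}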
I have developed a dashboard application in AngularJS which has search functionality and multiple views showing lots of different data coming from that search.
It's a single-page application, so the issue I'm facing is that on page reload all the data is gone and the view renders blank.
Is there any way to solve this issue, or any language/framework that would help me create a single-page application that maintains the same data on page reload?
Any suggestion will be appreciated.
(Screenshots: the search page, the data after a search, and the blank view after a reload.)
You can use sessionStorage to set & get the data.
Step 1:
Create a factory service that will save and return the saved session data based on the key.
app.factory('storageService', ['$rootScope', function($rootScope) {
    return {
        get: function(key) {
            return sessionStorage.getItem(key);
        },
        save: function(key, data) {
            sessionStorage.setItem(key, data);
        }
    };
}]);
Step 2:
Inject the storageService dependency in the controller to set and get the data from the session storage.
app.controller('myCtrl', ['storageService', function(storageService) {
    // Save session data to storageService on a successful response from the $http service.
    storageService.save('key', 'value');

    // Get saved session data from storageService on page reload.
    var sessionData = storageService.get('key');
}]);
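One caveat: sessionStorage only stores strings, so if you are saving search results (objects or arrays) rather than plain strings, serialize them on the way in and parse them on the way out. A small sketch, with an illustrative key name:

// Objects must be serialized before going into sessionStorage.
storageService.save('results', JSON.stringify($scope.results));

// ...and parsed when read back after a reload.
var restored = JSON.parse(storageService.get('results') || 'null');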
You need to save the data before changing the application state / moving to another page or route.
So save that data using one of:
Angular services (save the data, change route, come back, check the data in the service, and reassign your variables from it).
localStorage (same flow: save the data, change route, come back, and reassign your variables from storage).
$rootScope (set the data on $rootScope and move on; after coming back, check the variables and reassign).
Your problem is not saving the data, but saving the state in your URL. As it is, you only save the "step" but no other information, so when you reload the page (refresh, close and re-open the browser, open a bookmark, share a link...) you don't have all the information needed to restore the full page.
Add the relevant bits to the URL (as you have a single-page app, probably in the fragment identifier). Use that information to load the data, rather than relying on other mechanisms to pass data between the "pages".
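For example, in AngularJS you could keep the query in the URL and re-run the search from it on load. A rough sketch; the /api/search endpoint and the q parameter are made up for illustration:

app.controller('searchCtrl', ['$scope', '$location', '$http', function($scope, $location, $http) {
    // Restore state from the URL on load, which also covers a full page reload.
    $scope.query = $location.search().q || '';
    if ($scope.query) {
        runSearch($scope.query);
    }

    $scope.search = function(query) {
        // Put the state in the URL so reload, bookmark, and share all restore it.
        $location.search('q', query);
        runSearch(query);
    };

    function runSearch(query) {
        $http.get('/api/search', { params: { q: query } }).then(function(response) {
            $scope.results = response.data;
        });
    }
}]);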
Get the data again on page refresh, with something like:
$rootScope.$on('$stateChangeStart', function() {
    // here make a call to the source
    $http(); // request and try to get the data
});
Use localStorage, or temporarily save the data for users in a database on your server and get it back with an API call (localStorage has a 10 MB limit).
You can have your code try to retrieve the localStorage values first if they exist.
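A rough sketch of that localStorage-first idea (the key and endpoint are illustrative):

var cached = localStorage.getItem('searchResults');
if (cached) {
    // Restore instantly from the cache after a reload.
    $scope.results = JSON.parse(cached);
} else {
    $http.get('/api/search').then(function(response) {
        $scope.results = response.data;
        // Cache for the next reload; localStorage only holds strings.
        localStorage.setItem('searchResults', JSON.stringify(response.data));
    });
}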
In my application I have dynamic field sets on what is otherwise the same form. I can load them from the server as JavaScript includes, and that works OK.
However, it would be much better to be able to load them from a separate API.
$.getJSON() provides a good way to load the JSON, but I have not found the right place to do this; clearly it needs to complete before the compile step begins.
I see there is a fieldTransform facility in formly. Could this be used to transform vm.fields from an empty object to whatever comes in from the API?
If so, how would I do that?
Thx. Paul
There is an example on the website that does exactly what you're asking about. It uses $timeout to simulate an async operation that loads the field configuration, but you could just as easily use Angular's own $http to get the JSON from the server. It hides the form behind an ng-if and only shows the form when the fields are back (when the ng-if resolves to true, it compiles the template).
Thx @kent
OK, so we need to replace the getFields() promise with this:
function getFields() {
    return $http.get('fields-demo.json', {headers: {'Cache-Control': 'no-cache'}});
}
This returns data.fields, so in vm.loadingData we say
vm.fields = result[0].data;
Seems to work OK for me.
When testing, I noticed that you have to make sure there is nothing wrong with your JSON, such as using a field type you haven't defined; in that case the resulting error message is not very clear.
Furthermore, you need to deal with the situation where the source of the data is unavailable. I tried this:
function getFields() {
    console.log('getting', fields_url);
    return $http.get(fields_url, {headers: {'Cache-Control': 'no-cache'}})
        .error(function() {
            alert("can't get fields from server");
            //return new Promise({status:'fields server access error'}); //??
        });
}
... which does at least show the alert. However, I'm not sure how to replace the promise so as to propagate the error back to the caller.
Paul
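One way to do that propagation, sketched here for illustration: drop the legacy .error() helper in favour of the standard .catch(), and re-reject with $q (which would need to be injected alongside $http):

function getFields() {
    return $http.get(fields_url, {headers: {'Cache-Control': 'no-cache'}})
        .catch(function(response) {
            alert("can't get fields from server");
            // Re-reject so the caller's own error handler still fires.
            return $q.reject({status: 'fields server access error', response: response});
        });
}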
I am trying to read a JSON file with Meteor. I've seen various answers on Stack Overflow but cannot seem to get them to work. I have tried this one, which basically says:
Create a file called private/test.json with the following contents:
[{"id":1,"text":"foo"},{"id":2,"text":"bar"}]
Read the file contents when the server starts (server/start.js):
Meteor.startup(function() {
    console.log(JSON.parse(Assets.getText('test.json')));
});
However, this seemingly very simple example does not log anything to the console. If I try to store it in a variable instead of console.logging it, and then display it client-side, I get
Uncaught ReferenceError: myjson is not defined
where myjson is the variable I stored it in. I have also tried reading the JSON client-side:
Template.hello.events({
    'click input': function () {
        myjson = JSON.parse(Assets.getText("myfile.json"));
        console.log(myjson);
    }
});
Which results in:
Uncaught ReferenceError: Assets is not defined
I have tried all of the options described here: Importing a JSON file in Meteor, with more or less the same outcome.
Hope someone can help me out.
As per the docs, Assets.getText is only available on the server, as it's designed to read data in the private directory, to which clients should not have access (thus the name).
If you want to deliver this information to the client, you have two options:
Use Assets.getText exactly as you have done, but inside a method on the server, and call this method from the client to return the results. This seems like the best option to me, as you're rationing access to your data via the method rather than making it completely public; a short sketch follows after these options.
Put it in the public folder instead and use something like jQuery.getJSON() to read it. This isn't something I've ever done, so I can't provide any further advice, but it looks pretty straightforward.
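A minimal sketch of the first option; the method name readTestJson is made up for illustration:

// server/methods.js — runs on the server, where Assets is available.
Meteor.methods({
    readTestJson: function() {
        return JSON.parse(Assets.getText('test.json')); // reads private/test.json
    }
});

// client — ask the server for the parsed data.
Meteor.call('readTestJson', function(err, data) {
    if (err) {
        console.error(err);
    } else {
        console.log(data); // [{id: 1, text: "foo"}, {id: 2, text: "bar"}]
    }
});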
The server method is OK, just remove the extra semicolon (;). You need a little more in the client call: the JSON data comes from the callback.
Use this in your click event:
if (typeof console !== 'undefined') {
    console.log("You're calling readit");
    Meteor.call('readit', function(err, response) {
        console.log(response);
    });
}
Meteor!
I am using the Box Windows V2 SDK to upload files to my Box account using the following code:
BoxFileRequest request = new BoxFileRequest()
{
    Parent = new BoxRequestEntity() { Id = "0" },
    Name = attachment.Name,
    Description = "This is failing to be sent..."
};
BoxFile file = client.FilesManager.UploadAsync(request, new MemoryStream(attachment.FileContent)).Result;
Uploading the file works great. However, I cannot get the description field sent to the Box server. Is it possible to upload a file with a description, or do I have to call FilesManager.UpdateInformationAsync after the file has been uploaded? It would be nice if this were an option, so I could reduce the number of API calls.
The description must be set in a separate API request after uploading the file.
We have heard that reusing some of the request objects may cause confusion about what can be done with each request. We are evaluating whether this should be changed.