Error while importing images to Google Storage in Cloud AutoML Vision - google-apis-explorer

I am using the Google AutoML Vision API's object detection option in the Google Cloud console. When I try to upload the images to Google Cloud Storage to train the model, I get a "Failed to import" error.
I have tried the CSV file option as well, but the same error persists.
Since I am using the object detection model in the Google Cloud console, no code is required.
The expected result is that the images are imported successfully so that I can move on to the next step and train my model.

I was facing the same problem when uploading a dataset from Cloud Storage (Select a CSV file on Cloud Storage); it showed this error message:
Failed to import data
However, when I checked the IMAGES section the dataset was there, so it was successfully uploaded regardless of the error message (you can ignore it).
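If you want to double-check outside the console, here is a minimal sketch of the same import through the AutoML client library's v1beta1-era Python API (the project, region, dataset ID, and bucket path below are placeholders). The long-running operation raises an error only if the import genuinely failed, which helps distinguish a real failure from a console glitch:

    from google.cloud import automl_v1beta1 as automl

    client = automl.AutoMlClient()
    # Placeholder project, region, and dataset ID -- replace with your own.
    dataset_path = client.dataset_path("my-project", "us-central1", "ICN1234567890")

    # CSV on Cloud Storage listing image URIs and bounding boxes.
    input_config = {"gcs_source": {"input_uris": ["gs://my-bucket/train.csv"]}}

    # import_data returns a long-running operation; .result() blocks until
    # done and raises if the import actually failed on the backend.
    operation = client.import_data(dataset_path, input_config)
    operation.result()
    print("Import finished without errors.")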

Related

BIM 360 using Model Derivative API is unable to see the SVF2 view page

Viewing SVF models from BIM 360 with the Model Derivative APIs works well, but when I use SVF2 I get errors like:
NO ACCESS: Sorry, you don't have the required privileges to access this item. (A screenshot of the error message is attached below.)
Am I missing any other steps in the program? A screenshot of the program is enclosed as well.

View BIM 360 & Fusion models: Unable to load model: Failed to load resource: net::ERR_NAME_NOT_RESOLVED

I did it according to the example on the official website, but the model couldn't be loaded. I don't know why; I followed the tutorial and checked that the API should be enabled normally. I think I need help. Here's a screenshot of the error.
I see two possible explanations for this type of error:
The model is coming from A360 or Fusion Teams. If that's the case, please note that models uploaded to these applications are not translated automatically by the Forge Model Derivative service (hence the viewer error stating that there are no "viewables" to load). You have to open the designs in their owning application first to initiate the translation process, and then you can try accessing them from your own Forge application.
It's possible that the sample application is having issues with some of the characters used in the file names. Check whether the same error happens when you name your files using only the English alphabet.
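To confirm the first explanation programmatically, you can ask the Model Derivative service for the model's manifest; a missing or empty manifest means no translation ever ran, which matches the "no viewables" error. A minimal sketch in Python with requests, assuming you already have a valid access token and the base64-encoded URN (both are placeholders here):

    import requests

    # Placeholders -- substitute your real token and base64-encoded URN.
    ACCESS_TOKEN = "<access-token>"
    URN = "<base64-encoded-urn>"

    resp = requests.get(
        f"https://developer.api.autodesk.com/modelderivative/v2/designdata/{URN}/manifest",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    )

    if resp.status_code == 404:
        print("No manifest: this model was never translated.")
    else:
        manifest = resp.json()
        print("Translation status:", manifest.get("status"))
        print("Derivatives available:", len(manifest.get("derivatives", [])))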

How to get all elements from Revit files uploaded in the system using Revit API

We can view the Revit file in the Autodesk Forge Viewer.
We need to get all elements from Revit files uploaded in the system using the Revit API.
We are trying to use the endpoint below:
https://forge.autodesk.com/en/docs/model-derivative/v2/reference/http/urn-metadata-GET/
But unfortunately we are getting the error below:
"Token does not have the privilege for this request."
errorcode: AUTH-010
Please help me resolve this error.
Our main aim is to get all elements (equipment, floors, rooms, spaces, etc.).
Please make sure you have granted the proper scopes, and call this API using the internal token from your server.
Ref:
https://learnforge.autodesk.io/#/oauth/?id=scopes
https://forge.autodesk.com/en/docs/oauth/v2/developers_guide/scopes/
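As a sketch of both steps together, here is a hypothetical Python example that requests an internal two-legged token with the data:read scope and then calls the metadata endpoint with it (the client ID, secret, and source URN are placeholders). AUTH-010 typically means the token was minted without the scope this endpoint needs:

    import base64
    import requests

    # Placeholder credentials and URN -- replace with your own app's values.
    CLIENT_ID = "<client-id>"
    CLIENT_SECRET = "<client-secret>"
    URN = base64.urlsafe_b64encode(
        b"urn:adsk.objects:os.object:my-bucket/model.rvt"
    ).decode().rstrip("=")

    # Step 1: two-legged token that carries the data:read scope.
    token = requests.post(
        "https://developer.api.autodesk.com/authentication/v1/authenticate",
        data={
            "client_id": CLIENT_ID,
            "client_secret": CLIENT_SECRET,
            "grant_type": "client_credentials",
            "scope": "data:read",
        },
    ).json()["access_token"]

    # Step 2: list the model's metadata (viewable GUIDs) with that token.
    resp = requests.get(
        f"https://developer.api.autodesk.com/modelderivative/v2/designdata/{URN}/metadata",
        headers={"Authorization": f"Bearer {token}"},
    )
    print(resp.json())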

Prevent Conversion of same Drawing file to svf File again when Forge Viewer is called

We have integrated the Autodesk Forge Viewer. We send a request to the Forge APIs for conversion (using the Model Derivative API). After closing the viewer, if we need to show the same file again, we currently post the DWG file for conversion a second time in order to view it.
Instead, is there a way to save the SVF file on my local system so that I need not call the Forge web service twice for the same file?
According to the pricing, every simple conversion job is going to cost 0.2 credits.
Please suggest how I can avoid repeating the same conversion a second (or nth) time.
Thank you,
Shiva Kumar
Unless the DWG file has changed, you do not need to upload the DWG file again and/or POST a translation again. If you do, you will effectively consume 0.2cc. Instead, just reference the URN you received after upload/translation when starting the viewer. The 'bubble' (SVF) persists on the backend depending on the storage policy you chose: in a transient bucket, the file and bubble persist for 24 hours; in a temporary bucket, for 1 month; and in a persistent bucket, forever.
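For example, a minimal Python sketch of that reuse logic (the token and URN are placeholders): check the manifest first, and only POST a new translation job if the manifest is missing or unfinished:

    import requests

    BASE = "https://developer.api.autodesk.com/modelderivative/v2/designdata"
    # Placeholders -- your stored token and the URN saved from the first upload.
    HEADERS = {"Authorization": "Bearer <access-token>"}
    URN = "<base64-encoded-urn>"

    # If a manifest already exists, the SVF is still on the backend:
    # feed the same URN to the viewer and skip the 0.2-credit job.
    manifest = requests.get(f"{BASE}/{URN}/manifest", headers=HEADERS)

    if manifest.status_code == 200 and manifest.json().get("status") == "success":
        print("Already translated; reuse this URN in the viewer.")
    else:
        # Only now pay for a translation job.
        job = requests.post(
            f"{BASE}/job",
            headers={**HEADERS, "Content-Type": "application/json"},
            json={
                "input": {"urn": URN},
                "output": {"formats": [{"type": "svf", "views": ["2d", "3d"]}]},
            },
        )
        print("Posted new translation job:", job.status_code)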
I have found a reference showing how to iterate through a bucket and see the files present in it.
You can log in to the live demo below using your Forge credentials.
Live Demo
Source Code
Thanks.

BigQuery auto-detect schema cause load of Google Drive CSV to fail

I've been using BigQuery for a while and load my data by fetching a CSV from an HTTP address, uploading it to Google Drive using the Drive API, then attaching it to BigQuery using the BigQuery API.
I always specified auto-detect schema via the API and it has worked perfectly on a cron until March 16, 2017.
On March 16 it stopped working. The CSV still loads to Google Drive fine, but BigQuery won't pick it up.
I started troubleshooting by attempting to load the same CSV manually using the BigQuery UI, and noticed something strange: using auto-detect schema seems to prevent the loading of the CSV, because when I enter the schema manually it loads fine.
I thought maybe some rogue data might be the problem, but auto-detect schema isn't working for me now even with incredibly basic test tables, like...
id name
1 Paul
2 Peter
Has anyone else found that auto-detect schema has suddenly stopped working?
Maybe some default setting has changed in the API?
I could not get it to work from Google Drive either today (23 March).
Note: this was my first time ever using BigQuery/Google Cloud Storage.
I had a large CSV of bus stops (134 MB).
I tried uploading it to Google Drive but couldn't get it to import into BigQuery.
I then tried a Google Cloud Storage bucket and it worked fine.
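For reference, here is a minimal sketch of that working combination with the google-cloud-bigquery client library: load the CSV from a Cloud Storage bucket with an explicit schema instead of auto-detect (the project, dataset, table, and bucket names are placeholders, and the two-column schema matches the test table above):

    from google.cloud import bigquery

    client = bigquery.Client()

    # Explicit schema sidesteps the auto-detect failure described above.
    job_config = bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        skip_leading_rows=1,
        schema=[
            bigquery.SchemaField("id", "INTEGER"),
            bigquery.SchemaField("name", "STRING"),
        ],
    )

    # Placeholder bucket and table names -- substitute your own.
    load_job = client.load_table_from_uri(
        "gs://my-bucket/bus_stops.csv",
        "my-project.my_dataset.bus_stops",
        job_config=job_config,
    )
    load_job.result()  # blocks until done; raises if the load fails
    print("Loaded", load_job.output_rows, "rows.")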