Multiple apps were running in the background, how do I close them properly? (Autodesk Forge)

This is a Forge app question. The app was created in Visual Studio as a Forge app, and I was debugging it. I am working on a Forge application that uses Data Management, Design Automation, and Model Derivative. The Design Automation jobs were closing properly. After integrating Model Derivative, I passed an RFA file by mistake and the app ran endlessly. The model is very small, but initially I made a mistake while passing it to Model Derivative: I passed an RFA file, and only later realized that Model Derivative does not read RFA files. In the meantime I tried many times. After passing an RVT file to Model Derivative instead, it ran very fast and worked perfectly. I had deleted those hanging apps: when they ran endlessly and did not show the model in the viewer, I closed the app and reran it, or deleted the app, created a new one, and reran. Then when I looked at the data usage, it showed multiple apps consuming cloud credits. Is it possible that multiple apps were running simultaneously and that the jobs I had deleted did not close properly? The test model was really small.
I stopped the app abruptly and debugged again. My guess is that multiple apps were running on top of one another, but I could be wrong.
Please let me know a few known reasons why an app might keep running in the background, and how to properly end a job on the Forge cloud platform. I simply stopped debugging in Visual Studio, deleted the app in Forge, and created a new one.
The usage graph shows overlapping colors and usage from several apps. My new model for Design Automation is big, and my worry is that I will lose all of my cloud credits if I do not close the jobs properly.
How do I stop a project running in the Forge cloud? I am fairly sure I did not stop the apps running in the cloud properly before creating a new Forge app and debugging again, so all of them may have kept running. I could be wrong, because I am new to the Forge cloud.
I described above what happened; please let me know what could have gone wrong. The model was really small.
Maybe ngrok was not closed properly. I have no idea.
I thought I had closed each job before starting a new one.

The Model Derivative API doesn't support the RFA format currently. In my experience, it will return the following message when you submit a translation job on an RFA file with POST Jobs. Therefore, I'm not sure why you're saying the job or the app keeps running endlessly. Could you share more details on that?
{
"diagnostic": "Failed to trigger translation for this file."
}
Besides, I'm confused by your statement here. What kind of app was deleted? The Forge app inside your myapp page? It's unclear to me.
I had deleted those hanging apps: when they ran endlessly and did not
show the model in the viewer, I closed the app and reran it, or
deleted the app, created a new one, and reran.
To cancel or delete translation jobs, you can call DELETE manifest. According to our engineering team, each translation job has a designed expiration time (e.g. 2 hours). If your translation job cannot be completed within that period, it will be canceled by our service.
Note: to cancel a Design Automation WorkItem, you can call DELETE workitems/:id to stop the job.
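For reference, both of those cancellations are plain REST calls. Here's a minimal C# sketch; the access token, URN, and work item id are placeholders, and the endpoints follow the Model Derivative v2 and Design Automation v3 docs, so double-check them against the current documentation:
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class ForgeCleanup
{
    static async Task Main()
    {
        var client = new HttpClient();
        // Placeholder token - use a real 2-legged OAuth token with the right scopes.
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", "<access_token>");

        // Cancel/delete a Model Derivative translation: DELETE manifest.
        var urn = "<base64-encoded-source-urn>";  // placeholder
        var r1 = await client.DeleteAsync(
            $"https://developer.api.autodesk.com/modelderivative/v2/designdata/{urn}/manifest");
        Console.WriteLine($"Delete manifest: {r1.StatusCode}");

        // Stop a Design Automation v3 work item: DELETE workitems/:id.
        var workItemId = "<workitem-id>";  // placeholder
        var r2 = await client.DeleteAsync(
            $"https://developer.api.autodesk.com/da/us-east/v3/workitems/{workItemId}");
        Console.WriteLine($"Delete work item: {r2.StatusCode}");
    }
}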
Lastly, back to the cloud credits question: the Model Derivative API charges per translation job when you call POST Jobs, not by processing time. Currently, only the Design Automation API charges by the processing time of your work item; see the pricing table here: https://forge.autodesk.com/pricing.
If you have further questions, you can drop me a line at forge[DOT]help[AT]autodesk[DOT]com.
Cheers,

Related

Is there any way to work with a user's local (non-BIM360) file using Design Automation and still get things like user selection?

I am trying to use the Design Automation API to modify a Revit file in place. From what I am reading, it seems that, because you have to upload a Revit model to the cloud, there is no way to modify a single user's Revit model that they have open in their local Revit client, unless the model they are operating on is already on BIM 360 or another cloud service.
I would like to turn all of my Revit add-ins into web services to avoid having to manage versioning and distribution of installers, but it seems there is no way to make the Design Automation API work on a local file that a user has open and modify it in place.
I would also need the ability to get information about what the user currently has selected in the model to make some of these add-ins work.
Could anyone shed some light on whether what I'm asking is possible with Forge in any way, or if it's just a pipe dream for now?
Unfortunately, just as you have already noted, the Forge environment is mainly oriented towards viewing CAD files.
The Design Automation API adds the possibility to programmatically modify them in fully automated batch mode.
In all cases, however, the CAD engine is used to manipulate the seed CAD file, in this case the Revit BIM.
For all end user interaction with and modification of the Revit model, you will still be making use of a standard Revit end user installation on the Windows desktop.
You can certainly modify your add-ins to run fully automated in the Forge Design Automation environment.
However, they will not be able to execute on locally stored Revit models in place.
I hope this clarifies.

Is it possible to partially calculate the graphs of Open Trip Planner?

I have a question about OpenTripPlanner. In the project I'm working on, I've automated the download of GTFS files whenever an update is available.
Every time my server downloads a new update, however, it recreates the entire graph (including the map and the GTFS feeds that have not been modified).
Is it possible to update only the parts of the graph related to specific GTFS feeds in OTP?
The guide says it is possible to save graphs, but that part of the tutorial is very sparse, or maybe I cannot find it.
Link to the guide I am referring to: http://docs.opentripplanner.org/en/latest/Basic-Tutorial/
I am using the following command to start OTP:
java -Xmx512m -jar "/otp-1.4.0-shaded.jar" --build "/downloads" --inMemory
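In case it helps: as far as I can tell, the graph saving the tutorial mentions comes from omitting --inMemory, so the build step writes Graph.obj to disk and later starts can load that saved graph instead of rebuilding everything. A rough sketch, assuming OTP 1.x flags (the serve command looks for the graph at <graphs>/<router>/Graph.obj, so double-check the paths against the docs):
# build once and persist Graph.obj to disk (no --inMemory)
java -Xmx512m -jar "/otp-1.4.0-shaded.jar" --build "/downloads"
# later starts: serve the saved graph without rebuilding
java -Xmx512m -jar "/otp-1.4.0-shaded.jar" --graphs "/" --router downloads --server
Note that, as far as I know, OTP 1.x cannot incrementally rebuild only one GTFS feed's portion of the graph; saving just avoids rebuilding when nothing has changed.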

Google Cloud Platform - I need a webhook to get JSON data. How to approach it?

I am fairly new to the Google Cloud Platform world and I am struggling with my first steps there.
So, what I want to know is how to make a webhook app that will run 24/7 and "catch" data sent from a 3rd-party service (later I will try to do something with this data, manipulate it and push it into a DB, but that's another question).
I have set up a Linux-based instance on GCP, but what's next?
I am familiar with PHP, but this time I want to do it in Python (I'm learning it nowadays).
Which service in GCP should I use, and how do I set up the server to catch all the data the 3rd-party service is sending?
This sounds like a perfect fit for Google App Engine. As long as the 3rd-party service makes HTTP requests, App Engine is a great fit. You can write your application in Python, PHP, Java, or just about anything else, then GAE takes care of the rest. No need to manage Linux, instances, firewall rules, or anything else.
If your load is minimal, you may even fit into the free tier and pay nothing to run your app.
Check out the GAE Python docs at https://cloud.google.com/appengine/docs/python/.
If you want to run parts of your webhook logic on a schedule, you can run them as a cron job. Here is a guide on how to run Python scripts as cron jobs on Google App Engine: Scheduling Tasks With Cron for Python
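Whichever runtime you pick, the webhook itself is just an HTTP endpoint that accepts the 3rd party's POST requests. Here is a rough sketch of that shape, shown as an ASP.NET Core minimal API in C# for consistency with the other answers on this page; on App Engine you would write the Python equivalent, and the /webhook route is a placeholder:
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// The 3rd-party service POSTs its JSON payload here.
app.MapPost("/webhook", async (HttpRequest request) =>
{
    using var reader = new StreamReader(request.Body);
    var json = await reader.ReadToEndAsync();  // raw JSON body
    Console.WriteLine(json);                   // placeholder: validate it, then push it to your DB
    return Results.Ok();                       // acknowledge receipt so the sender doesn't retry
});

app.Run();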

Azure Web Jobs - Triggered from queue - triggered or continuous?

I have a WebJob I've created and I want to deploy it to Azure. However, I'm confused about the configuration when creating it.
It's a WebJob that's triggered from an Azure Storage queue. It works fine locally.
However, when I go to create the WebJob in Azure, I'm confused by the choices... my choices are Triggered or Continuous.
If I choose Continuous, I get a choice of Single or Multi.
If I choose Triggered, I'm given a choice of Scheduled or Manual. I don't want Scheduled, and I'm not sure what Manual means... that doesn't seem right either.
I know the WebJob that's triggered from the Azure queue is really "polling" rather than being triggered... so it seems like Continuous is the right choice. But I'm not sure.
So the question is...when creating a Web Job that's triggered from an Azure Queue, what is the right deploy configuration?
It sounds like you are using the Azure WebJobs SDK. In SDK scenarios, even though your individual functions are 'triggered', the WebJob as a whole runs continuously (i.e. your exe keeps running and does its own internal triggering). So what you want is Continuous, Multi. There is no reason to use Single in most cases, and it isn't relevant anyway until you scale out to multiple instances.
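To make that concrete, here is a minimal sketch in the WebJobs SDK 1.x/2.x style; the "myqueue" queue name is a placeholder, and the storage connection string is read from the AzureWebJobsStorage setting:
using System;
using Microsoft.Azure.WebJobs;

class Program
{
    static void Main()
    {
        // The host keeps the process alive and polls the storage queue itself,
        // which is why the WebJob is deployed as Continuous even though each
        // function is "triggered".
        var host = new JobHost();
        host.RunAndBlock();
    }
}

public class Functions
{
    // Invoked by the SDK whenever a message lands on "myqueue" (placeholder name).
    public static void ProcessQueueMessage([QueueTrigger("myqueue")] string message)
    {
        Console.WriteLine(message);
    }
}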

Schedule function to pull CSV file to database

Sorry, I have not done any coding in Web API, so I don't have a sample to share here.
I just wanted to know how we can schedule a task on the server for Web API to read a CSV file every 15 or 30 minutes.
From the client we can successfully push the CSV to my FTP server, where my application is located. But how do I retrieve the data from the CSV at a fixed interval?
WebAPI is the wrong tool for the job.
You can certainly push (or, more specifically, upload) CSV data to a Web API method, and that method can do whatever you need to do with that data. But Web API doesn't run on a timer. It's not going to periodically look for data to process; you have to send it that data.
A web application is a request/response system. In order to do something, you have to send it a request.
For an application that periodically runs on a timer, consider a Windows Service or even simply a Console Application which is scheduled to run with some job scheduler (Windows Task Scheduler, cron, etc.). That would run periodically without the need for input or a user interface at all.
Now, if the code you need to use is in the WebAPI application already, that's ok. You can refactor that code into a separate Class Library in the same solution as the WebAPI application. Then add the new application (Windows Service or Console Application) to that solution and reference that same Class Library. Both applications can share the same supporting code. Indeed, the application layer should be as thin as possible over a set of common business logic that can be shared by multiple applications.
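As a rough sketch of that shape (the folder path and the CsvImporter class are placeholders for your shared class-library code), a console app like this could be scheduled to run every 15 or 30 minutes with Windows Task Scheduler:
using System;
using System.IO;

class Program
{
    static void Main()
    {
        // Placeholder path: wherever your FTP server drops the uploaded CSV files.
        var inbox = @"C:\ftp\inbox";

        foreach (var file in Directory.GetFiles(inbox, "*.csv"))
        {
            foreach (var line in File.ReadLines(file))
            {
                var fields = line.Split(',');
                // CsvImporter.Save(fields);  // hypothetical shared business logic that writes to the DB
            }
            File.Delete(file);  // or move it to an archive folder so it isn't processed twice
        }
    }
}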