Autodesk Forge European server for Design Automation

The base URL for posting a Design Automation workitem is
https://developer.api.autodesk.com/da/us-east/v3/workitems
which points to a US east coast location for the server.
Several other Forge APIs provide a way to specify that requests should be handled by European servers (either by adding the region to the URL or through an option naming the region in which the request should be handled).
This possibility seems to be missing for the Design Automation API.
Is this correct, and is this API currently limited to a single US server?
If not, is there documentation available where I can find how to direct a workitem to a European server?
I am asking because we run Inventor Design Automation jobs from a website, and these typically take 45 to 55 seconds from start to displaying the modified design. Occasionally, however, a task takes 2 to 4 minutes, and I am trying to find out whether this could be caused by a congested server queue. If there is more than one server, I can try running the same job on different servers and at different times of day to see whether these occasional delays have anything in common.

Currently, Design Automation is only available in the us-east region.
However, occasionally a task takes 2 to 4 minutes and I am trying to find out whether this could be caused by a congested server queue.
The [GET] workitems/:id endpoint returns various statistics. You can derive how long your workitem spent in the queue by subtracting timeQueued from timeDownloadStarted.
"stats": {
"timeQueued": "2022-03-28T12:34:18.3289895Z",
"timeDownloadStarted": "2022-03-28T12:34:18.5377785Z",
"timeInstructionsStarted": "2022-03-28T12:34:19.6206329Z",
"timeInstructionsEnded": "2022-03-28T12:34:36.1960527Z",
"timeFinished": "2022-03-28T12:34:36.2611905Z",
"bytesDownloaded": 106
},
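For illustration, here is a minimal Python sketch (the helper name and the use of the requests library are my own; a valid access token is assumed) that fetches a workitem and derives the queued time from those two timestamps:

import requests
from datetime import datetime

BASE = "https://developer.api.autodesk.com/da/us-east/v3"

def queued_seconds(workitem_id, access_token):
    # Fetch the workitem status, which includes the "stats" block shown above.
    resp = requests.get(f"{BASE}/workitems/{workitem_id}",
                        headers={"Authorization": f"Bearer {access_token}"})
    resp.raise_for_status()
    stats = resp.json()["stats"]
    # Timestamps carry 7 fractional digits; trim to microseconds so fromisoformat accepts them.
    parse = lambda ts: datetime.fromisoformat(ts[:26])
    return (parse(stats["timeDownloadStarted"]) - parse(stats["timeQueued"])).total_seconds()

If that number stays small while the occasional slow jobs still occur, the delay is happening during processing rather than in the queue.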

While I see some benefits in hitting a server closer to your geographical location (which is not possible for Design Automation at this point), those benefits would not include much of an improvement in workitem processing time. You are indeed reaching "one" server in the US today, but your workitems are processed by several machines in the background.

Related

Translation from IFC to SVF way too slow

I tried to translate a big IFC file (150 MB) for the viewer, and the translation alone took around 15 minutes, not counting the upload. Is such a translation time normal, and can something be done about it? I am currently using only the free credits, for testing. Is there a performance increase with the paid credits?
The /modelderivative/v2/regions/eu/designdata/job endpoint is used with the advanced option conversionMethod: modern.
No, there is no performance difference between trial and paid accounts. As far as I know, a Forge dev account with a valid trial plan can use the same full set of Forge features as a paid one.
What happens after submitting a translation job?
After you submit a translation request and it returns success, your request has been sent to our service and placed into the translation queue. It does not mean that your model will be processed immediately.
Instead, you have to line up! Since our service resources are limited and shared worldwide, you will need to wait for the enqueued IFC translation jobs from other Forge users to complete before our service starts to process yours. Therefore, waiting for 15 minutes is not slow from my perspective.
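If you want to see how long a job actually waits, here is a rough Python sketch (access token and base64-encoded URN assumed; the polling loop is my own) that submits the EU translation job and polls the manifest until it finishes:

import time
import requests

BASE = "https://developer.api.autodesk.com/modelderivative/v2/regions/eu/designdata"

def translate_and_wait(urn, token):
    headers = {"Authorization": f"Bearer {token}", "Content-Type": "application/json"}
    job = {
        "input": {"urn": urn},
        "output": {"formats": [{
            "type": "svf",
            "views": ["2d", "3d"],
            "advanced": {"conversionMethod": "modern"},  # IFC-specific option from the question
        }]},
    }
    requests.post(f"{BASE}/job", json=job, headers=headers).raise_for_status()
    started = time.time()
    while True:
        manifest = requests.get(f"{BASE}/{urn}/manifest", headers=headers).json()
        print(f"{time.time() - started:5.0f}s  status={manifest['status']}  progress={manifest.get('progress')}")
        if manifest["status"] in ("success", "failed", "timeout"):
            return manifest
        time.sleep(15)  # poll modestly; the job is queued and processed server-side

The elapsed time before the manifest first reports progress is mostly queueing, not translation.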

Max Concurrent Connections in Microsoft Graph API

I have a Windows Service that accesses Outlook items through Exchange Web Services (EWS). It often experiences concurrency issues.
Based on this doc, a maximum of 27 connections per user can be made concurrently for Exchange Online. I couldn't find documentation for the detailed throttling policies for Microsoft Graph.
Does anyone know whether the Graph API allows more concurrent connections than EWS? Also, does the Graph API generally perform better than EWS, for example in response time?
This is documented in Microsoft Graph throttling guidance. There is also an in-depth look at Throttling patterns available.
Each endpoint throttles a little differently. For Exchange/Outlook endpoints it is 10,000 requests per app ID, per user, within a 10-minute window.
The term "connections" doesn't really apply to an API like Microsoft Graph. As a REST-based API, it is fundamentally stateless. Each request is a self-contained transaction.

Any Way to Detect Micro Delays using EWS?

I am encountering some very long response times from Exchange Online, called via the EWS Managed API 2.0 in C#. I suspect I am being throttled, but I cannot find anything in the Admin portal for my O365 account that lets me prove this. Some search results mention that PowerShell can show messages indicating "micro delays" have been applied, but I'm stuck in C#/EWS. So my question is: is there anything in the responses to my EWS calls that can identify whether these micro delays have been applied? BTW, response times are very close to the 100-second timeout, which is killing my code.
100 seconds isn't a micro delay; micro-delays are milliseconds (capped at 500 ms) and are aimed at spreading out a large volume of requests (e.g. if an app is going to make 100 sequential requests, a micro-delay spreads the load of those requests over a greater time by penalizing the app more and more, which lowers the resource load on the server). One request taking 100 seconds to fulfill is probably more to do with the request itself, e.g. overuse of search filters or an overcomplex search, which may also affect throttling; if you're using batching, each request within the batch could have a micro-delay applied.
EWS doesn't return metrics of throttle usage (the newer REST API does give a little more information back in this regard). What you need is access to the EWS logs, which have that information. Each Exchange request the EWS Managed API makes has a ClientRequestId to help correlate the request to a log entry; there is more detail at https://msdn.microsoft.com/en-us/library/office/dn720380(v=exchg.150).aspx
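If you cannot get at the server-side logs, you can at least correlate and time things yourself. A rough Python sketch using raw SOAP over HTTP (the EWS Managed API exposes the same correlation id as ExchangeService.ClientRequestId in C#; the header names and SOAP body here are assumptions for illustration):

import time
import uuid
import requests

EWS_URL = "https://outlook.office365.com/EWS/Exchange.asmx"

def timed_ews_call(soap_body, auth):
    # Tag the request with a correlation id so it can be matched to a log entry later.
    request_id = str(uuid.uuid4())
    headers = {
        "Content-Type": "text/xml; charset=utf-8",
        "client-request-id": request_id,
        "return-client-request-id": "true",
    }
    started = time.time()
    resp = requests.post(EWS_URL, data=soap_body, headers=headers, auth=auth)
    # Micro-delays are sub-second; anything near 100 s points at the request itself.
    print(f"{request_id} took {time.time() - started:.3f}s (HTTP {resp.status_code})")
    return resp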

In which domains are message oriented middleware like AMQP useful?

What problems do MOM (Message Oriented Middleware) solve? Scalability? Integration?
In which domains are they typically used, and in which are they typically not used?
For example, say, is Google using such a solution for its main search engine or to power GMail?
What about big websites like Walmart, eBay, FedEx (pretty much a Java shop) and buy.com (pretty much an MS shop)? Does MOM solve a need there?
Does it make any sense when you're writing a webapp where you control the server side and have a homogeneous environment (say, tens of Amazon EC2 instances all running Linux + Java JVMs) and where the clients are, well, web browsers?
Does it make sense for desktop apps that need to communicate with a server?
Or is it 'only' for big enterprise stuff, where you typically have a happy mix of countless different systems that need to communicate in one way or another?
I'm a bit confused as to what they're useful for, and I think that with examples of where they are and aren't appropriate I could better understand their use.
This is a great question.
The main uses of messaging are: scaling, offloading work, integration, monitoring, event handling, routing, networking, push, mobility, buffering, queueing, task sharing, alerts, management, logging, batch, data delivery, pubsub, multicast, audit, scheduling, ... and more. Basically: anything where you need data but don't want to make a database request. (Caching is another, longer story).
Another way of looking at this is to notice that many applications used to be built by assuming that users (people) would perform actions that would be fulfilled by executing a transaction on a database (including reads, writes). But today, many actions are not user-initiated. Instead they are application-initiated. For example "tell me when the book that I want to buy is in stock". The best way to solve this class of problems is with messaging of some sort. Whether you call it middleware or web push or real time salad dressing does not matter. It's all messaging.
When you enable applications to initiate or react to events, then it is much easier to scale because your architecture can be based on loosely coupled components. It is also much easier to integrate those components if your messaging is based on a stable, scalable, serviceable tool, preferably using open standard APIs and protocols.
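As a concrete illustration of the 'tell me when the book is in stock' example above, here is a minimal pub/sub sketch using RabbitMQ via the pika client (a broker on localhost is assumed; the exchange and message names are made up):

import pika

# Publisher and subscriber would normally be separate processes; one is shown for brevity.
conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
ch = conn.channel()
ch.exchange_declare(exchange="stock-events", exchange_type="fanout")

# Subscriber: reacts to the event instead of polling a database.
q = ch.queue_declare(queue="", exclusive=True).method.queue
ch.queue_bind(exchange="stock-events", queue=q)

# Publisher: the inventory service announces the event.
ch.basic_publish(exchange="stock-events", routing_key="",
                 body="ISBN 0131103628 back in stock")

ch.basic_consume(queue=q, auto_ack=True,
                 on_message_callback=lambda c, m, p, body: print("notify user:", body.decode()))
ch.start_consuming()

Neither side knows about the other; both only know the exchange, which is exactly the loose coupling described above.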
I hope this helps. We try to maintain a list of useful links about messaging here.
Please get in touch with questions and comments on any of this; we are dead easy to find.
To address your specific questions:
In which domains are they typically used, and in which are they typically not used?
Like databases, messaging systems crop up everywhere.
For example, say, is Google using such a solution for its main search engine or to power GMail?
Google uses a lot of home-grown technology, but many of their open-source contributions and known use cases suggest that messaging is (or should be) central to some of their main services.
What about big websites like Walmart, eBay, FedEx (pretty much a Java shop) and buy.com (pretty much an MS shop)? Does MOM solve a need there?
Very much so.
An example use case is scaling web page requests. When the user makes a web request, the web server puts it onto a queue for background processing. This means that the web server can keep working while the request is processed. It also means that the web server does not need to know how the request is handled, making system maintenance, upgrade and rollback much simpler because the main parts are 'decoupled'.
So, anyway, the web request gets processed by a back-end service, or possibly by many services, e.g. 'look up book titles', 'draw shopping cart', 'get advertisement', 'check user account'... Finally all the results get put onto another queue, ready for collection and user response by the web server. Typically the system will include a timeout of around 100 ms so that any late results just get thrown away. The user sees whatever was processed within the time interval. This is one reason why some large e-commerce sites have pages that appear to load in stages.
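A toy Python sketch of that request/response pattern (in-process queues stand in for a real broker; the 100 ms budget is the timeout mentioned above):

import queue
import threading
import time

requests_q = queue.Queue()   # the web tier writes work here
results_q = queue.Queue()    # and collects rendered fragments here

def backend_worker():
    # A decoupled service: it knows only the queues, not the web server.
    while True:
        part = requests_q.get()
        results_q.put(f"rendered {part}")

threading.Thread(target=backend_worker, daemon=True).start()

def handle_web_request(page_parts):
    for part in page_parts:
        requests_q.put(part)                    # enqueue and keep serving other users
    rendered, deadline = [], time.time() + 0.1  # ~100 ms budget
    for _ in page_parts:
        try:
            rendered.append(results_q.get(timeout=max(0, deadline - time.time())))
        except queue.Empty:
            break                               # late results are simply dropped
    return rendered

print(handle_web_request(["book titles", "shopping cart", "advertisement"]))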
There are many more use cases...
Does it make any sense when you're writing a webapp where you control the server side and have a homogeneous environment (say, tens of Amazon EC2 instances all running Linux + Java JVMs) and where the clients are, well, web browsers?
Definitely. If you have an unknown, or unbounded, number of users, server side instances, and application latencies, then it makes sense to use messaging, even if just as a scalable substrate for non-blocking RPC.
Does it make sense for desktop apps that need to communicate with a server?
In lots of cases. One very common case is when the server pushes events to the desktop app, e.g. game events, tweets, price feeds in finance, system alerts...
Or is it 'only' for big enterprise stuff, where you typically have a happy mix of countless different systems that need to communicate in one way or another?
Definitely not only for those 'legacy integration' cases but they are important too. At RabbitMQ, the biggest customers we have in terms of pure scale or message volume are cloud providers and big web application providers.
I will give only one answer, from prior experience: take a look at this middleware employed by big companies here. Middleware has one purpose: to glue disconnected systems (written in disparate languages) together so that they can interact with one another and streamline the business process. With Entera, which I have experience with, it creates a middle layer in which the unix box, using processes written in C, interacts with the mainframe system (DB2, COBOL) via a front-end written in PowerBuilder (I am not naming the company!).
From the description I have given, Entera is a middleware that provides a number of things: smooth integration of the flow of data regardless of endian format, and the ability for different languages to talk to the middleware broker (a broker is a CORBA- or DCE-like process that conforms to The Open Group standards and listens on a particular port), which is specified by an IDL that makes a process appear to be local. If you understand the terminology used in Remoting under Microsoft's .NET Framework, you are not far off the mark! The middleware generates stubs which are linked at compile time, and it manages the creation of the process, hosting it off a port and multi-threading it at run time. The modern front-ends (such as .NET, Java, PowerBuilder, even the unspeakable VB6...ok...VB.NET for the purists out there) can interact by opening a connection to the specified port on a particular IP address and, using the generated stubs, can interact with it directly.
Obviously, from what was described, you can see how legacy systems can have new life breathed into them, and thus how the process can scale. The major downside is the cost factor, which can run into thousands of dollars. Big companies who use mainframes as their back-end processing systems for billing/invoicing, and who generate huge revenue, can obviously afford such an expensive product; to them it would seem like throwing pennies into a pool of water, because the middleware prolongs the business process, breathes new life into it, and can extend the business a good number of years into the future without worrying about the 'legacy' tag attached to it.
Incidentally, I carried this out as part of my thesis for my BSc in Information Systems, which covered this commercial front-end. There was an open-source version of the middleware available on SourceForge called FreeDCE, but development efforts have declined or stopped.
Edit:
#cocotwo: That is exactly what middleware does; as you said, it is a plumbing tool. Message-oriented middleware is not really heard of, AFAIK, because I would imagine the processes (functions) would need to be called as if they were locally visible within the application domain of the front-end to make interaction easy.
Using messages may have advantages over RPC calls in that the messages are queued in a safe holding area in the event of a network disconnection; there may be some data caching going on within that aspect to allow the front-end to continue regardless. It would be useful, for instance, for 'updating the status of a particular billing/invoice number': a one-way data write to the back-end via the middleware.
Ok, big companies have advanced systems infrastructure, with technicians around the clock to ensure smooth delivery of the data flow, so that would have to be factored in. The company I worked with had an IBM Global Support contract to fulfill in order to ensure maximum uptime of 99% with six nines after the decimal point, with hot-swapping/balanced clusters/mirroring systems in place...
Whereas with RPC, if a disconnection occurs, the front-end would have to be restarted, or would have to handle the disconnection event. It really depends on whether the message-queueing middleware handles each message in real time and passes results back to the front-end immediately...
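A toy Python sketch of that store-and-forward behavior (send_to_backend is a placeholder for the real network call): messages survive the disconnection in a local holding area and are flushed when the link returns, where a plain RPC call would simply fail:

import collections

outbox = collections.deque()  # the local 'safe-holding area'

def send_to_backend(msg):
    # Placeholder for the real call; raises while the link is down.
    raise ConnectionError("link down")

def enqueue(msg):
    outbox.append(msg)  # the front-end continues regardless

def flush():
    # Retry pending messages in order; stop at the first failure and try again later.
    while outbox:
        try:
            send_to_backend(outbox[0])
            outbox.popleft()
        except ConnectionError:
            return False
    return True

enqueue("invoice 1042: status=PAID")  # the one-way write survives the outage
flush()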
This is where each (message-queueing and RPC-related middleware) has its strengths and weaknesses, and there is also the cost-mitigation factor: support, maximum uptime, development effort, and training. That's a biggie here, as middleware is really proprietary (despite following The Open Group layout/standards) and complex to set up and to glue together via scripts.
Good answers and discussion here. Our consulting team has two preferred "messaging" solutions: RabbitMQ, and NXTera, a high-speed RPC middleware that is the contemporary version of the Entera mentioned above. My partners and I have developed several solutions using RabbitMQ; it is the best tool available in that space right now. Additionally, I happen to work for the company that makes NXTera/Entera.
From experience I can clearly say that both of these products meet the need for reliability and low maintenance, as discussed above. There are situations where a messaging service like RabbitMQ is the right choice: where publish/subscribe, certified delivery, queueing or store-and-forward are required.
In other cases, RPCs (remote procedure calls) are the best and fastest solutions for transactional and distributed processing in enterprise or cloud-based applications. When it is right to use an RPC, but SOAP/.NET (yes, these are RPC implementations) is too slow, expensive or complex, a lightweight high-speed RPC middleware like NXTera/Entera is the right choice for us.
There is some use case overlap between RPC middleware and message oriented middleware, and where there are you can use either successfully. But both are strong and dependable choices.
The large companies I work with use both RPC and MoM side by side. As for Internet companies, Google (Protocol Buffers) and Facebook (Thrift) show that RPCs have a role to play in modern web and cloud-based development.

How do popular routing gps/phones/mapping web sites update their route information?

How do popular routing gps/phones/mapping web sites update their route information?
And do any phones send back data based on the user's actual trip to allow the system to update route information?
What do you mean by "route information"? The map data they use to calculate routes is usually provided by companies like NavTeq, who provide updates to the data on a regular basis.
Concerning data collected from users, TomTom provides so-called "IQ Routes", which are based on actual traffic data. This means that when you travel at 5 am, the system will likely suggest a different route than when travelling during rush hour.
The required data is collected by the TomTom systems, but AFAIK users have to manually upload it to TomTom, or at least agree to provide the data when they do an online update of their system.
The two major players in this world are TeleAtlas, a TomTom subsidiary, and NavTeq, a Nokia subsidiary.
IMO TomTom/TeleAtlas has the most advanced system. They operate a real-time system for measuring traffic flows, HD Traffic. It takes into account data from other HD Traffic users, but also anonymized data extracted from the GSM network. In addition to the real-time view this provides, TeleAtlas also compiles a statistical average out of it; TomTom sells that as IQ Routes.
Now it follows logically that if there's a lot of new traffic across a river, then probably someone built a bridge there ;)
In addition to HD Traffic and IQ Routes, TomTom also allows their users to report map errors and updates with MapShare. For many classes of changes (e.g. one-way roads, blocked roads, or changed road names), TomTom can use MapShare to distribute map updates immediately, without issuing a full map update. As a TomTom subsidiary, TeleAtlas presumably has access to these reported updates as well.