Azure ML getting model weights - cortana-intelligence

I have deployed a regression model on Azure ML. Is it possible to retrieve the model's weights/coefficients programmatically from Azure, rather than just getting predicted values?

I think you can. In your training experiment, add an output to your Evaluate Model module, then select deploy web service right away without going through the predictive-experiment option.
Once you publish and click the TEST button, you should see the values.

No. Azure Machine Learning does not currently support exporting weights from its built-in models.
If you have a method for extracting weights from Python models, you may be able to work this out using the Execute Python Script module.
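As a rough sketch of what that could look like: if your regression model can be refit (or unpickled) inside the Execute Python Script module, the weights are just the solution of an ordinary least-squares fit. The data and helper below are hypothetical, not Azure-specific:

```python
# Hypothetical sketch: recover linear-regression weights from the training
# data by refitting the same model and emitting coefficients instead of
# predictions (e.g. as the module's output DataFrame).
import numpy as np

def extract_weights(X, y):
    """Fit an ordinary least-squares model and return its coefficients.

    A column of ones is appended, so the last weight is the intercept.
    """
    X_aug = np.column_stack([X, np.ones(len(X))])
    weights, *_ = np.linalg.lstsq(X_aug, y, rcond=None)
    return weights

# toy data generated from y = 2*x0 + 3*x1 + 1
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = 2 * X[:, 0] + 3 * X[:, 1] + 1
print(extract_weights(X, y))  # ≈ [2. 3. 1.]
```

For a model trained elsewhere in the experiment graph this only works if you can reproduce the fit inside the script; the built-in modules themselves don't hand their internals to the Python step.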
The primary purpose of Azure Machine Learning is to make deployable and scalable web services from the machine learning modules. Though the authoring experience for creating ML models is great, it is not intended to be a place to create and export models, but instead a place to create and operationalize your models.
UPDATE: New features may make this answer outdated.

Related

How can I use the Design Automation API to extract metadata from an uploaded AutoCAD file?

Per my meeting with Denis Grigor, I was informed that the Design Automation API has the same capabilities as the Model Derivative API for extracting metadata from an uploaded AutoCAD file. Model Derivative has a fixed-job pricing structure which is more cost-effective for large files, since it's charged per job, whereas Design Automation is charged per processing hour.
My client will only be extracting data from smaller files, so it doesn't make sense to use Model Derivative API if Design Automation can do the same.
However, I don't know where to start. Which specific APIs do I need to use if I want to upload an AutoCAD file such as .dxf or .dwg and retrieve geometric results using the Design Automation API?
Whether you are setting up a Design Automation pipeline for AutoCAD, Inventor, Revit, or any other "engine", the process is pretty much the same:
develop and debug a plugin/script (in your case an AutoCAD plug-in) locally
upload the plugin/script to Design Automation service as an app bundle
create a Design Automation activity - a reusable template for tasks you will want to execute later, specifying the engine, app bundle, inputs, outputs, etc.
create a Design Automation work item, executing a task based on an activity with specific inputs/outputs (usually just URLs where input files can be downloaded from and output files uploaded to)
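To make the last step concrete, here is a rough sketch of posting a work item to the Design Automation v3 REST endpoint. The activity ID, argument names, URLs, and token below are all placeholders — the argument names in particular must match whatever parameters your activity defines:

```python
# Hypothetical sketch of submitting a Design Automation v3 work item.
# ACTIVITY_ID, the file URLs, and the access token are placeholders.
import json
import urllib.request

DA_WORKITEMS_URL = "https://developer.api.autodesk.com/da/us-east/v3/workitems"

def build_workitem(activity_id, input_url, output_url):
    """Build the work-item payload: which activity to run, plus where the
    input file is downloaded from and the output file uploaded to."""
    return {
        "activityId": activity_id,
        "arguments": {
            # keys here must match the activity's declared parameter names
            "inputFile": {"url": input_url},
            "outputFile": {"verb": "put", "url": output_url},
        },
    }

def submit_workitem(payload, access_token):
    req = urllib.request.Request(
        DA_WORKITEMS_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {access_token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

The service then downloads the input from your URL, runs the engine with your app bundle, and uploads the result to the output URL; you poll the returned work-item status until it completes.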
Here's a blog post with a simple example using Design Automation for Inventor - it takes an Inventor plugin that generates custom screenshots, and turns it into a Design Automation activity that is later executed with different input Inventor models: https://forge.autodesk.com/blog/simple-introduction-design-automation-inventor.
The same process is also explained in this tutorial: https://learnforge.autodesk.io/#/tutorials/modifymodels.

Web Development with BIM 360 and Forge - Backend

Anybody recommend any backends or frameworks for Forge?
I'm seeing resources for Node.js, PHP, .NET Core and others for the backend.
Are any of these any more convenient or dependable with Forge than the others?
I also know Python and thought Django would be another option but I don't see too many resources on the Python side of things.
Any perspectives on the tools (pro or con) would be great.
The more I understand the kinds of tech stacks, user projects and ways people use Forge to expand on BIM 360 and other APIs the more it can help me and the community get familiar with the service.
This relies completely on the existing stack used by your company. Forge is a collection of APIs accessible via endpoints.
Any library just abstracts the calls away in an accessible way. I've had moderate success with the .NET Core Forge package; it works very well, but you are giving up some strict typing.
If you don't want to be bound by abstractions made by other people, create your own! This will ultimately lead to the most lightweight solution, since you are only creating what you need.
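As an illustration of the roll-your-own route, a minimal wrapper can be little more than a token fetch plus thin request helpers. The sketch below uses the two-legged OAuth flow against the v1 authentication endpoint; the credentials and scope are placeholders, and any real client should check the current authentication API version:

```python
# Hypothetical minimal Forge client: fetch a two-legged OAuth token.
# Client ID/secret below are placeholders for your app's credentials.
import json
import urllib.parse
import urllib.request

AUTH_URL = "https://developer.api.autodesk.com/authentication/v1/authenticate"

def build_token_request(client_id, client_secret, scope="data:read"):
    """Form-encoded body for the two-legged client_credentials flow."""
    return urllib.parse.urlencode({
        "client_id": client_id,
        "client_secret": client_secret,
        "grant_type": "client_credentials",
        "scope": scope,
    }).encode()

def get_token(client_id, client_secret, scope="data:read"):
    req = urllib.request.Request(
        AUTH_URL, data=build_token_request(client_id, client_secret, scope)
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["access_token"]
```

From there, every other Forge call is just an HTTP request with the bearer token attached, in whatever backend language your stack already uses.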
Cheers

How can I provide different models in support of a/b testing with ML Kit?

I'm looking at the implementation path for ML Kit and I'm a little concerned because I don't see any description of support for multiple models. I need to support A/B tests with my models.
My planned workflow:
build a "default model" that everyone can use.
retrain the model as input comes in from the user base, and update the model on a schedule.
allow a/b testing for using/not using the model, and comparing different models to decide a progression.
users download the model (possibly converting to Core ML?) and run it locally as needed.
Your workflow is supported using Firebase Remote Config and A/B testing. Here is how you'd go about it:
Publish your TFLite models in ML Kit, via the Firebase console. Give each model a unique name.
When loading the remote model in the app, use Remote Config to dynamically switch the model name via the Firebase console instead of hardcoding the model name.
You can use A/B testing in combination with Remote Config to set a different value for the config variable per experiment variant (i.e. each variant's value will be the name of a different model you have published).
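The client-side control flow then reduces to: read the config value, fall back to a default model name if it's missing. A language-agnostic sketch (in a real app this would live in the Kotlin/Swift SDK code; the key and model names are made up):

```python
# Hypothetical sketch of Remote-Config-driven model selection.
# The dict stands in for the fetched Remote Config values.
DEFAULT_MODEL = "regression_model_v1"

def pick_model(remote_config, key="active_model_name"):
    """Return the model name the A/B experiment assigned, or the default."""
    name = remote_config.get(key)
    return name if name else DEFAULT_MODEL

# variant B of an experiment might set:
config = {"active_model_name": "regression_model_v2_experimental"}
print(pick_model(config))  # regression_model_v2_experimental
print(pick_model({}))      # regression_model_v1
```

Because the name is resolved at runtime, rolling a new model out to a variant is purely a console change — no app release required.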

MVVMCross View blocked by sqlite call

I am trying to build a fairly simple SQLite-based mobile app using MvvmCross and Portable Class Libraries. The database is fairly large, so queries take long enough that I don't want the UI to be blocked while they run.
The way I currently have it set up is in a few classes based on the MvvmCross N+1 tutorials (the N=10 tutorial). I have two services that manage the look-ups for the two entities.
How can I perform these database calls on a separate thread and have the view updated when they complete? I assume this capability exists within MvvmCross; I just haven't been able to track down the documentation or any tutorials on it specifically.
Any help pointing me to the right direction would be much appreciated.
Thanks,
JH
Portable Class Libraries do give you access to the ThreadPool, so you could use that to marshal the work onto a background thread: http://msdn.microsoft.com/en-us/library/system.threading.threadpool.queueuserworkitem.aspx
Alternatively, if you have a setup/configuration which allows the TPL to be used in your library, then you could use the TaskFactory or async and await to provide background threading. This is probably the better long-term route, but getting it set up and working initially may take longer, as Xamarin's async support is very new and its PCL support is changing.
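The shape of either approach is the same: run the query off the UI thread, then hand the result to a completion callback that updates the view. A language-neutral sketch in Python (the lookup function and the "view" hook are hypothetical stand-ins for the real C# service and data binding):

```python
# Hypothetical sketch: run a slow lookup on a worker thread and invoke a
# callback with the result on completion, so the UI is never blocked.
from concurrent.futures import ThreadPoolExecutor

executor = ThreadPoolExecutor(max_workers=2)

def slow_lookup(entity_id):
    # stands in for the real SQLite query
    return {"id": entity_id, "name": f"entity-{entity_id}"}

def lookup_async(entity_id, on_done):
    """Submit the query to a worker thread; call on_done with the result."""
    future = executor.submit(slow_lookup, entity_id)
    future.add_done_callback(lambda f: on_done(f.result()))

results = []
lookup_async(7, results.append)
executor.shutdown(wait=True)  # a real UI app would not block like this
print(results)  # [{'id': 7, 'name': 'entity-7'}]
```

In MvvmCross the equivalent of `on_done` would raise the view-model's property-changed notification, which the framework marshals back onto the UI thread for you.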

Experience of developing web version of a desktop application

We are in the process of starting a web version of a desktop application developed in WinForms in VS2008 with LINQ to SQL. Has anyone ever done such an implementation? What issues did you face when reusing code for the web version?
If you partitioned your business logic and data layer into well-separated objects, it works well. But if you have UI logic scattered throughout, it's going to be painful. My advice: separate projects and unit tests for UI, business objects, business logic, and data, with interfaces between each layer. I've done it multiple times and it works out best. Of course, you're already tied into an existing system.
If you designed your application with an n-tier architecture, you already have separate logic, data-access, and UI layers. With this architecture you don't need to rewrite the logic and data-access layers; you only need to write a new web UI.
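To illustrate the layering both answers describe, here is a small sketch (all names are made up): the business layer depends only on an interface, so a desktop UI and a web UI can share the same logic and data-access code untouched:

```python
# Hypothetical sketch of n-tier separation: the business layer talks to an
# interface, so swapping the desktop UI for a web UI touches neither layer.
from typing import Protocol

class CustomerRepository(Protocol):          # data-access interface
    def find(self, customer_id: int) -> dict: ...

class SqlCustomerRepository:                 # concrete data layer
    def find(self, customer_id: int) -> dict:
        # a real implementation would query the database
        return {"id": customer_id, "name": "Acme"}

class CustomerService:                       # business-logic layer
    def __init__(self, repo: CustomerRepository):
        self.repo = repo

    def display_name(self, customer_id: int) -> str:
        customer = self.repo.find(customer_id)
        return f"{customer['name']} (#{customer['id']})"

# both the existing desktop UI and the new web UI reuse the same service:
service = CustomerService(SqlCustomerRepository())
print(service.display_name(42))  # Acme (#42)
```

If the WinForms code calls into layers shaped like this, the web project only replaces the top box; if UI logic leaked downward, that leak is exactly where the porting pain shows up.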