How can I provide different models in support of a/b testing with ML Kit?
I'm looking at the implementation path for ML Kit and I'm a little concerned because I don't see any description of support for multiple models. I need to support A/B tests with my models.
My planned workflow:
build a "default model" that everyone can use.
retrain the model as input comes in from the user base. update the model on a schedule.
allow a/b testing for using/not using the model, and comparing different models to decide a progression.
users download the model, possibly converting to CoreML ?, and running it locally as needed.

Your workflow is supported using Firebase Remote Config and A/B testing. Here is how you'd go about it:
Publish your TFLite models in ML Kit, via the Firebase console. Give each model a unique name.
When loading the remote model in the app, use Remote Config to dynamically switch the model name via the Firebase console instead of hardcoding the model name.
You can use A/B Testing in combination with Remote Config to set different values for the config variable (i.e. each different value will be the name of a different model you have published).
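To make that concrete, here is a minimal Android (Java) sketch of the idea, assuming the firebase-ml-model-interpreter custom-model API (class and method names vary between ML Kit SDK versions, and the java.util imports are omitted since this is a fragment you would run from, say, an Activity); the Remote Config key ml_model_name and the model names are made up for this example:

FirebaseRemoteConfig remoteConfig = FirebaseRemoteConfig.getInstance();
Map<String, Object> defaults = new HashMap<>();
defaults.put("ml_model_name", "default_model"); // assumed baseline model name
remoteConfig.setDefaultsAsync(defaults);

remoteConfig.fetchAndActivate().addOnCompleteListener(task -> {
    // Remote Config (and the A/B test experiment driving it) decides which
    // published model this particular user should load.
    String modelName = remoteConfig.getString("ml_model_name");

    FirebaseCustomRemoteModel remoteModel =
            new FirebaseCustomRemoteModel.Builder(modelName).build();
    FirebaseModelDownloadConditions conditions =
            new FirebaseModelDownloadConditions.Builder().requireWifi().build();

    FirebaseModelManager.getInstance()
            .download(remoteModel, conditions)
            .addOnSuccessListener(unused -> {
                try {
                    // The model is available locally; create an interpreter and run inference.
                    FirebaseModelInterpreterOptions options =
                            new FirebaseModelInterpreterOptions.Builder(remoteModel).build();
                    FirebaseModelInterpreter interpreter =
                            FirebaseModelInterpreter.getInstance(options);
                    // ... run interpreter.run(...) with your inputs
                } catch (FirebaseMLException e) {
                    // fall back to a bundled local model, or skip the ML path for this session
                }
            });
});

Because the experiment only changes a Remote Config string, the same pattern carries over to iOS with the corresponding ML Kit APIs.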

Related

Modify environment variables or configurations for a module on an IoT Edge deployment

Is there a quick way to modify environment variables, or configuration that is accessible to a module, in IoT Edge?
Once a deployment is created, the environment variables become read-only.
What would be the best practice of maintaining a modifiable set of configurations, so I can rather easily change them on the fly, and have the module be able to access them?
On Azure Cloud Services, for example, there are web configurations that are editable and would restart the service so they would kick in (since they are accessible to the service). I am looking for the same kind of behavior.
You can modify the module's twin in the portal and deploy it; the module should be informed of the update. Alternatively, you could send your module a direct message.
Screenshots on how to update the IoT Edge module's environment variables. Note: I am using the VisionAI Kit camera in this example.
Your scenario is Cloud to Device (C2D) communication. Refer here for details.
Of the available options, the best match is a module twin desired property update.
You can easily create handlers for desired property updates in your edge module implementation and run custom logic based on the changes in desired properties.
For C#, the handler registration looks like this:
await ioTHubModuleClient.SetDesiredPropertyUpdateCallbackAsync(OnDesiredPropertiesUpdate, null);
This would be a great read on it.
P.S.: Environment variables are designed to be read-only after deployment. They should only contain configuration that is deployment-specific and does not change post-deployment.

Hexagonal Architecture / Ports & Adapters: Application Configuration with multiple driver adapters

I'm looking for some guidance or best practice for how to configure and structure an Application which conforms to Hexagonal architecture and supports multiple (driver) adapters simultaneously.
My API / Application Layer / Ports represent the boundary of the Application. I am now writing the driver adapters, with the goal that the application supports both a console / CLI adapter and REST adapter in tandem.
Does anyone have any thoughts on approaches to the Main Component that configures and wires the application together?
A single Main Component that configures the full application, including all primary adapters, along with loading the application configuration. In this case it would start the REST services and the CLI console app.
A separate Main Component for each type of primary adapter, i.e. one for the REST application and one for the CLI / Console application. My concern is that this will result in a lot of duplication when configuring the Application within the boundary (i.e. the API Services, Repositories, etc.).
Follow the above approach but extract the common configuration / wiring into a shared class.
If anyone has any examples they could share that would be interesting to see.
Cheers,
Steve
This is an interesting question.
From my point of view, trying to be faithful to the pattern as explained by its author: although it would also be possible to run more than one driver adapter for one driver port, the "app as a whole" (let's call it the system, since the app is the hexagon) is an instance of a driver adapter running on each driver port of the hexagon, and a driven adapter implementing each driven port.
The configuration of the system is the adapter to select for each port. When you run the main component, you have to specify which adapter you want for every port.
That said, I studied two approaches in order to run the system:
(1) Have an additional component (name it main component, composition root, startup, init, or whatever you want) that instantiates the driven adapters and the hexagon, and finally instantiates the driver adapters and runs them. This way, the system architecture would look like an app container on the driver side, and a plugin architecture on the driven side.
(2) Run each driver adapter on its own. It is the driver adapter that starts the game, asking the hexagon for a driver port instance, and the hexagon in turn asks every driven port for a driven adapter instance.
So to your question about the main component in your example: according to my approach (1), I would have two hexagon instances running, but you could have just one; I don't see any problem with that.
I wrote a theoretical article about hexagonal architecture at https://softwarecampament.wordpress.com/portsadapters/ , and I'm now working on an article about how to implement hexagonal architecture, along with a code example.
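To make the wiring concrete, here is a minimal Java sketch along the lines of approach (1), with the common wiring extracted into a shared class as the asker's third option suggests. Every name here (OrderingUseCases, OrderingApp, RestAdapter, CliAdapter, ...) is invented for the example:

// Driver port: the API of the hexagon.
interface OrderingUseCases {
    String placeOrder(String product, int quantity);
}

// Driven port: something the hexagon needs from the outside.
interface OrderRepository {
    void save(String orderId, String product, int quantity);
}

// The hexagon: depends only on its ports.
class OrderingApp implements OrderingUseCases {
    private final OrderRepository repository;
    OrderingApp(OrderRepository repository) { this.repository = repository; }
    public String placeOrder(String product, int quantity) {
        String id = java.util.UUID.randomUUID().toString();
        repository.save(id, product, quantity);
        return id;
    }
}

// A driven adapter implementing the driven port.
class InMemoryOrderRepository implements OrderRepository {
    private final java.util.Map<String, String> store = new java.util.HashMap<>();
    public void save(String orderId, String product, int quantity) {
        store.put(orderId, product + " x" + quantity);
    }
}

// Shared wiring for everything inside the boundary (the asker's option 3).
class ApplicationAssembly {
    static OrderingUseCases assemble() {
        OrderRepository repository = new InMemoryOrderRepository();
        return new OrderingApp(repository);
    }
}

// Driver adapters only translate their protocol into port calls.
class CliAdapter {
    private final OrderingUseCases app;
    CliAdapter(OrderingUseCases app) { this.app = app; }
    void run(String[] args) {
        System.out.println("order id: " + app.placeOrder(args[0], Integer.parseInt(args[1])));
    }
}

class RestAdapter {
    private final OrderingUseCases app;
    RestAdapter(OrderingUseCases app) { this.app = app; }
    void start() { /* register HTTP routes that delegate to app.placeOrder(...) */ }
}

// Single main component (approach 1): one hexagon instance, several driver adapters.
public class Main {
    public static void main(String[] args) {
        OrderingUseCases app = ApplicationAssembly.assemble();
        new RestAdapter(app).start();
        if (args.length >= 2) {
            new CliAdapter(app).run(args);
        }
    }
}

The point is that RestAdapter and CliAdapter never duplicate the configuration inside the boundary: both receive the already-assembled hexagon, so adding or removing a driver adapter does not touch the shared wiring.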

Azure ML getting model weights

I have deployed a regression model on Azure ML. Is it possible to get the weights/coefficients of the model programmatically from Azure, rather than just the predicted value?
I think you can do so: in your training experiment, add an output to your Evaluate Model module, then select deploy web service right away, without going through the predictive experiment option.
Once you publish and click the TEST button, you should see the values as below.
No. Currently we do not support exporting weights from the models included with Azure Machine Learning.
If you have a method for extracting weights from Python models, you may be able to work this out using the Execute Python Script module.
The primary purpose of Azure Machine Learning is to make deployable and scalable web services from the machine learning modules. Though the authoring experience for creating ML models is great, it is not intended to be a place to create and export models, but instead a place to create and operationalize your models.
UPDATE: New features may make this answer outdated.

Production vs QA configuration

Time and again I am faced with the issue of having multiple environments that must be configured individually for an application that would run in all of them (e.g. QA, regional production env's, dev, staging, etc.) and I am wondering what would be the best way to organize different configurations?
Would it be in the database? Different configuration files per environment? Or maybe the same file with different sections/xml tags? How would these be then deployed? Embedded within the app? Or put manually in after installation to be modified in-place?
This question is not technology-specific - I've worked with .net and Java, web-apps and desktop apps and this issue comes up time and again. I'm looking to learn different approaches to maybe adapt a hybrid to address this.
EDIT: There's one caveat that I must point out - when configuration is part of the deployed solution, it is generally installed under the root user on the host. In large organizations, developers usually don't have root access to production hosts, so any changes to the configuration require a new build and deployment. Needless to say this isn't the nicest approach - especially at organizations that have a very strict release process involving multiple teams and approval levels... (sigh, I know!)
Borrowed from Jez Humble and David Farley's book "Continuous Delivery" (page 41), configuration can be supplied in several ways:
Your build scripts can pull configuration in and incorporate it into your binaries at build time.
Your packaging software can inject configuration at packaging time, such as when creating assemblies, ears, or gems.
Your deployment scripts or installers can fetch the necessary information, or ask the user for it, and pass it to your application at deployment time as part of the installation process.
Your application itself can fetch configuration at startup time or run time.
They consider it bad practice to inject configuration at build and compile time, because you should be able to deploy the same binary file to every environment.
In my experience, you can bake the configuration files for every environment (except sensitive information) into your deployment file (war, jar, zip, etc.), and design your application to take an extra parameter when it starts, to pick up the right set of configuration files (from your extracted deployment file, or from a local/remote file system if they are sensitive, or from a database) during the application's startup.
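As a minimal Java sketch of that idea (the system property name, environment variable, and file naming scheme are all assumptions for the example):

import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public class AppConfig {
    public static Properties load() throws IOException {
        // 1. Decide which environment we are running in, e.g. -Denv=prod or APP_ENV=prod.
        String env = System.getProperty("env",
                System.getenv().getOrDefault("APP_ENV", "dev"));

        // 2. Pick up the matching configuration set packaged with the application.
        String resource = "config-" + env + ".properties";
        Properties props = new Properties();
        try (InputStream in = AppConfig.class.getClassLoader().getResourceAsStream(resource)) {
            if (in == null) {
                throw new IOException("Missing configuration file: " + resource);
            }
            props.load(in);
        }
        return props;
    }
}

The same artifact is then started with, say, java -Denv=prod -jar app.jar in production and -Denv=test in QA, so the binary never changes between environments.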
The question is difficult to answer because it's somewhat vague. There is no technology-agnostic approach to configuration as far as I know. Exactly how configuration is set up will depend on the language/technology in question.
I'm not familiar with .NET, but with Java a popular approach is to have a Maven build set up with different profiles. Each profile is specific to an environment. You can then define different properties files that have environment-specific values; an example from the above link is:
environment.properties - This is the default configuration and will be packaged in the artifact by default.
environment.test.properties - This is the variant for the test environment.
environment.prod.properties - This is basically the same as the test variant and will be used in the production environment.
You can then build your project as follows:
mvn -Pprod package
I have good news and bad news.
The good news is that Config4* (of which I am the maintainer) neatly addresses this issue with its support for adaptive configuration. Basically, this is the ability for a configuration file to adapt itself to the environment (including hostname, username, environment variables, and command-line options) in which it is running. Read Chapter 2 of the "Getting Started" manual for details. Don't worry: it is a short chapter.
The bad news is that, currently, Config4* implementations exist only for C++ and Java, so your .Net applications are out of luck. And even with C++ and Java applications, it won't make pragmatic sense to retrofit Config4* into an existing application. Because of this, I'd advise trying to use Config4* only in new applications.
Despite the bad news, I think it is worth your while to read the above-mentioned chapter of the Config4* documentation, because doing so may provide you with ideas that you can adapt to fit your needs.

Can I parameterize a CruiseControl.NET project configuration such that the parameters are exposed by the web interface?

I am currently trying to use NAnt and CruiseControl.NET to manage various aspects of my software development. Currently, NAnt handles just about everything, including replacing environment specific settings (e.g., database connection strings) based on an input target that I specify on the command line.
CruiseControl.NET is used to build the application for the default environment (dev) anytime new code is committed. I also want CruiseControl.NET to invoke a build for my additional environments, test and stage, but I do not want these to be automatically invoked every time a dev build is invoked (daily), as test and stage deployments happen far less frequently. Test and stage deployments only occur when the application is ready for QA.
I can easily do this by specifying multiple projects, one for each environment. However, I already have many projects configured, one for each milestone within my application. If I have to set up 3 projects for each milestone, the CruiseControl.NET configuration can get out of hand quickly.
Here is my question:
Can I parameterize a CruiseControl.NET project configuration such that the parameters are exposed by the web interface?
Preferably (I think), I could have checkboxes for each environment (e.g., dev, test, stage) exposed in the web interface. A build would be made for each environment that is checked, whether the build was forced or automatic. It would be even better if I could default the checked state.
This feature (Dynamic Build Parameters) is currently being worked on for 1.5, and you can try it out in the nightlies. Here's a post describing the feature.
As Scott has mentioned, this isn't available, but it wouldn't take much to write a little template and then auto-generate the ccnet.config file from that template and a list of environments, in a mail-merge type of way.
Unfortunately, you can't do anything like that with CruiseControl.NET. It's a good idea, so you might want to submit it as a feature request.
This is fully supported now, starting with CruiseControl.NET 1.5: http://cruisecontrolnet.org/projects/ccnet/wiki/Parameters