This has been puzzling me for a while and I may be 'barking up the wrong tree'.
We currently use SageMaker to predict component failures for certain products in a basic way. This is done fairly simply by training the model and passing "modelcode, manufacture_date, component_code, failure_type" to the endpoint.
The issue is that certain products have trends in component failures, and passing the above doesn't include the historic issues with the product in question. For example, a product may have had two component failures that we would expect to lead to a third, because other products have shown the same issues/trend.
Ideally we would pass nested JSON into the endpoint as follows:
{
  "modelcode": "XX001",
  "manufacture_date": "2008.10.08",
  "component_failures": [
    {
      "component_code": "CC001",
      "failure_type": "shattered",
      "failure_date": "2010.01.01"
    },
    {
      "component_code": "CC012",
      "failure_type": "cracked",
      "failure_date": "2012.12.19"
    }
  ]
}
Is this possible using AWS SageMaker, or would I have to use an alternative product?
Thanks.
Yes it is possible.
SageMaker is very flexible, in that you can customize your own inference code to handle different types of input.
For example, if you are using MXNet as your deep learning framework, you can supply your own inference script and customize how it handles input and output for your use case. You can find a detailed explanation here: https://sagemaker.readthedocs.io/en/stable/using_mxnet.html#process-model-input
There is a similar mechanism for the TensorFlow framework.
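As a rough illustration of that idea, here is a minimal sketch of how a custom inference script could flatten the nested payload from the question into a fixed-length feature row. The `input_fn` name follows the SageMaker framework-container convention, but the feature layout and the `MAX_HISTORY` window are my own assumptions, not part of any SageMaker API:

```python
import json

# Assumed fixed window: keep the N most recent failures (hypothetical choice).
MAX_HISTORY = 3


def input_fn(request_body, content_type="application/json"):
    """Flatten the nested payload into a fixed-length feature row.

    The nested "component_failures" list becomes (code, type, date)
    triples, padded with empty strings so the model always sees the
    same number of columns.
    """
    if content_type != "application/json":
        raise ValueError("Unsupported content type: %s" % content_type)
    payload = json.loads(request_body)
    row = [payload["modelcode"], payload["manufacture_date"]]
    failures = payload.get("component_failures", [])[-MAX_HISTORY:]
    for failure in failures:
        row.extend([failure["component_code"],
                    failure["failure_type"],
                    failure["failure_date"]])
    # Pad missing history slots so the row length is constant.
    row.extend([""] * (3 * (MAX_HISTORY - len(failures))))
    return row
```

A real script would also encode these strings numerically (or hand them to the framework's own tensor conversion); the point is only that the endpoint's input handler, not the caller, decides how the failure history becomes features.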
Related
I have an optimization algorithm deployed as a live deployment. It takes a set of objects and returns a set of objects of potentially different size. This works just fine when I'm using the REST API.
The problem is, I want to let the user of the Workshop app query the model with a set of objects. The returned objects need to be written back to the ontology.
I looked into an action-backed function, but it seems like I can't query the model from a function?!
I looked into webhooks, but they seem not to fit the purpose; I would also need to handle the API key, and I can't write back to the ontology?!
I know how to query the model with scenarios, but it is per sample, which does not fit the purpose, plus I can't write back to the ontology.
My questions:
Is there any way to call the model from a function and write the return back to the ontology?
Is there any way to call a model from workshop with a set of objects and write back to the ontology?
Are modeling objectives just the wrong place for this use case?
Do I need to implement the optimization in Functions itself?
I've answered the questions below and also tried to address some of the earlier points.
Q: "I looked into an action backed function but it seems like I can't query the model from a function?!"
A: That is correct; at this time you can't query a model from a function. However, there are JavaScript-based linear optimization libraries which can be used in a function.
Q: "I looked into webhooks but it seems to not fit the purpose and I also would need to handle the API key and can't write back to the ontology?!"
A: Webhooks are for hitting resources on networks where a Magritte agent is installed. So if you have, say, a Flask app on your corporate network, you could hit that app to conduct the optimization. Then set the webhook as "writeback" on an action and use the webhook outputs as inputs for an ontology edit function.
Q: "I know how to query the model with scenarios, but it is per sample and that does not fit the purpose, plus I can't write back to the ontology."
A: When querying a model via Workshop, you can pass in a single object as well as any objects linked in a 1:1 relationship with that object. This linking is defined in the modeling objective's modeling API. You are correct that you can't pass in an arbitrary collection of objects. You can write back to the ontology, however: you have to set up an action to apply the scenario back to the ontology (https://www.palantir.com/docs/foundry/workshop/scenarios-apply/).
Q: "Is there any way to call the model from a function and write the return back to the ontology?"
A: Not from an ontology edit function.
Q: "Is there any way to call a model from workshop with a set of objects and write back to the ontology?"
A: Only object sets where the objects have 1:1 links within the ontology. You can write back by applying the scenario (https://www.palantir.com/docs/foundry/workshop/scenarios-apply/).
Q: "Are modeling objectives just the wrong place for this use case? Do I need to implement the optimization in Functions itself?"
A: If you can write the optimization in an ontology edit function, it will be quite a bit more straightforward. The main limitation is that you have to use TypeScript, which is not as commonly used for this kind of thing as Python. There are some basic linear optimization libraries available for JS/TS.
A general question: I have used, for example, Weka's classifier model functionality in its tool. But is there a way to "call Weka" from a website and get a model's response?
It doesn't have to be Weka, but I want to implement some simple classification based on JSON coming from a website.
Thanks.
You can write a REST webservice in Java which loads your model and makes predictions using data it receives, sending back the predictions in a suitable format. There are a number of frameworks for writing such webservices (e.g., JAX-RS).
In terms of using the Weka API, check out the Use Weka in your Java code article.
Hello there. I am creating an application in Flutter and receiving a JSON response from an API. I know we need to parse the response to use it in the Flutter app. I found that with the plain approach, jsonData['key'], I can get and show the data and handle any kind of response easily, but when I use the model approach I face a lot of issues with the data structure and the data types involved.
I think a model only provides an object structure, so you can access data in an object way, like jsonData.key instead of jsonData['key']. This is only my thinking; you can correct me if I am wrong.
I just want to know: if I use the non-model way, will it affect my app or not?
Models are not resilient: your code will always break if the API is modified.
Using an object is good practice because it takes advantage of the strongly typed language. This gives you a better debugging process and catches potential errors while writing (and at compile time). And this is independent of the state-management package you choose.
Firstly, this has nothing to do with GetX. Parsing JSON into models is much cleaner. You can compare two objects, but how do you compare two JSON blobs?
And if you need to create an instance of the object, how would you do so without a model? How would you pass it to another class or a function? I think the answers to these questions will solve your dilemma.
Spring Security uses UserDetailsManager to manage users & authorities.
I want to use the same implementation, JdbcUserDetailsManager, for an admin user-management page (user CRUD, group management, pagination). But unfortunately there is no implementation for paging, group management, and so on.
So I've run into these issues:
1. User CRUD, because of JSON conversion for REST.
2. Group CRUD, because of JSON conversion for REST.
3. User paging: UserDetailsManager has no corresponding implementation.
4. Group paging: no implementation.
5. User JSON to POJO: for create/update operations the UserDetails implementations InetOrgPerson, Person, or User could be used, but I have a JSON conversion issue and cannot mark the classes with @JsonIgnore.
6. User POJO to JSON: for read operations I cannot remove important fields (for example, the password) from the data.
Each of these issues has a few possible solutions:
1.1. Create one more user object similar to User and add the expected JSON annotations, OR create a builder in the REST controller to build the User object from the input parameter map (the builder is a good pattern, but I think this is an ugly way to manage something that was already implemented once).
1.2. Add a Spring Data JPA repository (which duplicates part of the security logic of JdbcUserDetailsManager), OR extend JdbcUserDetailsManager and add the unimplemented parts for managing users, groups, etc.
2. The solution is the same as for 1.
3. If 1. is implemented using Spring Data JPA, there is no problem; otherwise, corresponding factory builders need to be implemented to provide paging dynamically.
4. The solution is the same as for 3.
5. Solved by steps 1 and 2.
6. Solved by steps 1 and 2.
Which approach should I follow: implement the managers based on Spring Data JPA with additional POJOs, or extend the functionality of JdbcUserDetailsManager?
Having described the situation, I think I'll implement the solution using Spring Data JPA and extra POJO entities, to get full CRUD and JSON-handling abilities. This way seems to save implementation time, and the code will be cleaner to support.
If my choice is mistaken, please let me know. Also, please let me know if these issues are already solved in Spring (sorry, I did not find solutions for them). And if someone prefers a different solution architecture, I would be glad to discuss or consider clever ideas.
I have been consulting several references to figure out how to output trained Weka models as Java source code, so that I can use the classifiers I train in actual code for research applications I have been developing.
While playing with Weka 3.7, I noticed that it outputs Java code to its main text buffer for simpler (in my case, supervised) classification methods such as the J48 decision tree, but it removes the option for RandomTree and RandomForest (rather, it voids it by disabling the checkbox and greying out the text), even though those are the ones that give me the best performance in my situation.
Note: I am clicking on the "More Options" button and checking "Output source code:".
Does Weka not allow you to output RandomTree or RandomForest as Java code? If so, why? Or, if it does and just doesn't put it in the output buffer (since a random forest is multiple decision trees, I imagine it doesn't want to waste buffer space), where in the file system does Weka output Java code by default?
Are there any tricks to get Weka to give me my trained RandomForest as Java code? Or is Serialization of the output *.model files my only hope when it comes to RF and RandomTree?
Thanks in advance to those who provide help.
NOTE (as an addendum to the answer provided below): if you run into a similar situation (needing to use your trained classifier/ML model in your code), I recommend following the links posted in the answer. If you do not specifically need the Java code for the RandomForest, de-serializing the model works quite nicely and fits into Java application code, fulfilling its task as a trained model meant to predict future unlabelled instances.
RandomTree and RandomForest can't be output as Java code. I'm not sure of the reasoning why, but they don't implement the "Sourceable" interface.
This explains a little about outputting a classifier as Java code: Link 1
This shows which classifiers can be output as Java code: Link 2
Unfortunately, I think the easiest route will be serialization, although you could try implementing "Sourceable" for other classifiers on your own.
Another, perhaps inconvenient, solution would be to use Weka to rebuild the classifier every time you use it. You wouldn't need to load the ".model" file, but you would need to load your training data and relearn the model. Here is a starter's guide to building classifiers in your own Java code: http://weka.wikispaces.com/Use+WEKA+in+your+Java+code.
Solved the problem for myself by turning the output of WEKA's -printTrees option of the RandomForest classifier into Java source code.
http://pielot.org/2015/06/exporting-randomforest-models-to-java-source-code/
Since I am using classifiers with Android, all of the existing options had disadvantages:
shipping Android apps with serialized models didn't reliably work across devices
computing the model on the phone took too many resources
The final code will consist of three classes only: the class with the generated model + two classes to make the classification work.