Get weights for each layer of the model using OpenVINO API

I'd like to get the tensor of weights and biases for each layer of the network using the C++/Python API on the OpenVINO framework. I found this solution but unfortunately it uses an older API (release 2019 R3.1) that is no longer supported. For example, the class CNNNetReader does not exist in the latest OpenVINO. Does anyone know how to achieve this in new releases? (2020+)
Thanks!

Can you try something like this? First you need to read in a model, then iterate over the nodes in the graph and filter out the Constants (I'm making a copy here because the nodes are held as shared pointers). Each Constant node can then be transformed into a vector of values if that's what you're looking for. To do that you can use the templated method cast_vector<T>, and you can figure out which type to use by checking what the Constant holds (using get_element_type).
ov::Core core;
const auto model = core.read_model("path");

// Collect every Constant node -- these hold the weights and biases.
// (std::copy_if can't downcast shared_ptr<Node> to shared_ptr<Constant>,
// so use dynamic_pointer_cast in a loop instead.)
std::vector<std::shared_ptr<ov::op::v0::Constant>> weights;
for (const auto& op : model->get_ops()) {
    if (auto constant = std::dynamic_pointer_cast<ov::op::v0::Constant>(op)) {
        weights.push_back(constant);
    }
}

for (const auto& w : weights) {
    if (w->get_element_type() == ov::element::Type_t::f32) {
        const std::vector<float> floats = w->cast_vector<float>();
    } else if (w->get_element_type() == ov::element::Type_t::i32) {
        const std::vector<int32_t> ints = w->cast_vector<int32_t>();
    } else if (...) {
        ...
    }
}

Use the class Core and the method read_model to read the model. Next, use the methods of the class Model to get the details for each layer.
import openvino.runtime as ov
ie = ov.Core()
network = ie.read_model('face-detection-adas-0001.xml')
# print(network.get_ops())
# print(network.get_ordered_ops())
# print(network.get_output_op(0))
# print(network.get_result())
Refer to OpenVINO Python API and OpenVINO Runtime C++ API for more information.
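If you specifically want the weight values in Python, here is a minimal sketch in the spirit of the C++ answer above: iterate over the ops and keep the Constant nodes, which hold the weights and biases. The accessor names (get_type_name, get_data) are my assumptions based on recent releases, so verify them against your installed version.
import openvino.runtime as ov

core = ov.Core()
model = core.read_model('face-detection-adas-0001.xml')

# Constant nodes hold the weights and biases of the network.
for op in model.get_ops():
    if op.get_type_name() == 'Constant':
        print(op.get_friendly_name(), op.get_element_type(), op.get_shape())
        values = op.get_data()  # assumed accessor; returns a numpy array in recent releases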

Related

Is it possible to convert saga iterator to regular promise?

I'm building an abstraction layer for the keepassxc webextension. It uses redux-saga channels to make Chrome messaging look synchronous. It's working (un)surprisingly well. However, I want to completely abstract away redux-saga, so that it looks like a normal function returning a Promise.
tl;dr
KeePassXC-browser will be a browser extension that allows retrieving passwords stored in the KeePassXC app from the browser.
There are two possible communication protocols: HTTP and NativeClient. So I decided to use a TypeScript interface, and depending on the communication protocol there will be two classes that implement this interface.
Interface:
interface Keepass {
    getDatabaseHash(): Promise<string>;
    getCredentials(origin: string, formUrl: string): Promise<KeepassCredentials[]>;
    associate(): Promise<KeepassAssociation>;
    isAssociated(dbHash: string): Promise<boolean>;
}
The first implementation, for the HTTP communication protocol, uses the fetch API, which is already Promise-based, so the implementation is straightforward and conforms 100% to this interface.
The second implementation, for the NativeClient protocol, uses redux-saga (effects and channels) to make asynchronous messaging look like a synchronous function call. It's a bit complicated, but it works pretty well and covers edge cases that would be hard to handle any other way, because native messaging is a protocol based on standard input and standard output streams, so requests and responses can be interleaved, arrive out of order, etc.
The actual problem I'm failing to solve is that the second implementation does not implement the interface, because it consists of generators, not Promises.
Basically, I would like to convert (wrap) a saga iterator function into a function returning a Promise. There is the nice co library that does essentially this for normal generators, but it doesn't seem to work with redux-saga.
function* someGenerator() {
    const state = yield select(); // execution freezes here when called from wrapper
    const result = yield call(someEffect);
    return result;
}

function wrapper() {
    return co(someGenerator); // returns Promise
}
Is this possible? If so, what am I doing wrong?
Redux-saga is based on generator functions for a specific reason: to allow splitting asynchronous actions into separate yielded parts and managing them from one endpoint, the internal saga process manager. A Promise, in the general case, is a thing-in-itself and can't be partially executed. In other, simplified words: Promises manage the control flow in which they are located, while generators are managed by an outer control flow.
yield select(); // execution freezes here when called from wrapper
Your main misconception is assuming that select actually performs some async operation. It doesn't: it just pauses someGenerator at that point and transfers control to the redux-saga engine, which knows what to do with the yielded value and may start an async process (or may not; it doesn't matter).
When that process is done, the saga engine resumes the generator and passes the resulting value back into it.
You can easily see this in the source code of select (https://github.com/redux-saga/redux-saga/blob/master/src/internal/io.js#L139). It just returns an object with a certain structure that the saga engine understands; the engine then performs the real action and resumes your generator via generatorName.next(resultValue).
UPD. Purely theoretically, you can wrap it in a re-assignable promise, but it's not a usable approach:
// Your library code
import { select, call } from 'redux-saga/effects';

function deferredPromise() {
    let resolver = null;
    const promise = new Promise(resolve => (resolver = resolve));
    return [resolver, promise];
}

function generateSomeGenerator() {
    let [selectDoneResolve, selectDonePromise] = deferredPromise();
    const someGenerator = function* () {
        const state = yield select(); // execution freezes here when called from wrapper
        const [newSelectDoneResolve, newSelectDonePromise] = deferredPromise();
        selectDoneResolve({
            info: state,
            nextPromise: newSelectDonePromise
        });
        selectDoneResolve = newSelectDoneResolve;
        selectDonePromise = newSelectDonePromise;
        const result = yield call(someEffect);
        return result;
    };
    return {
        someGenerator,
        selectDonePromise
    };
}

const { someGenerator: someGeneratorImpl, selectDonePromise } = generateSomeGenerator();
export const someGenerator = someGeneratorImpl;

// Wrapper for interface
selectDonePromise.then(watchDone);

function watchDone({ info, nextPromise }) {
    // Do something with your info
    nextPromise.then(watchDone);
}

Getting all of the variables of a Dart class for JSON encoding

I have a class
class Person {
  String _fn, _ln;
  Person(this._fn, this._ln);
}
Is there a way to get a list of the variables and then serialize it? Essentially I want to make a toJson, but I wanted it to be generic enough that the key is the variable name and the value is the value of that variable.
In javascript it would be something like:
var myObject = {}; //.... whatever you want to define it as..
var toJson = function() {
    var list = Object.keys(myObject);
    var json = {};
    for (var key in list) {
        json[list[key]] = myObject[list[key]];
    }
    return JSON.stringify(json);
}
Dart doesn't have built-in functionality for serialization. There are several packages with different strategies available at pub.dartlang.org. Some use mirrors, which is harmful for client applications because it results in big or huge JS output size. The new reflectable package replaces mirrors without that disadvantage, but I don't know if the serialization packages have already been ported to use it instead. There are also packages that use code generation.
There is a question with an answer that lists available solutions. I'll look it up when I'm back.

Grails JSON response of models and submodels (one to many)

I'm very new to Grails, so I hope I have an easy question for you.
I have a DomainModel, and inside this model is a related model (one-to-many). Let's say Service, where a service has 'n' tasks.
I select (via findAllBy()) e.g. 3 services, and every service has at least one task (or three, for example).
Now my question. I don't want to return "render foundServices as JSON". Reason: I don't want to let people all over the world know my model definition and some maybe "secret" properties, which are all filled automatically by the database select. Is that a right thought, or is it a "too much and too deep security" thought?
So I tried to find out how I can return the relevant data I need in a shape similar to these objects.
I tried:
List<Service> servicesSelection = Service.findAllByCompany("someCompany")
ArrayList services = new ArrayList()
for (Service service : servicesSelection) {
    ArrayList myService = new ArrayList()
    myService.add(service.id)
    myService.add(service.getServiceName())
    for (Tasks task : service.tasks) {
        ArrayList serviceTasks = new ArrayList()
        serviceTasks.add(task.id)
        serviceTasks.add(task.getTaskName())
        myService.add(serviceTasks)
    }
    services.add(myService)
}
render services as JSON
1) Is this too much "overhead"?
2) Do you think "ok, doesn't matter, return the whole DomainModel (from the search result)"?
3) If I put my own "array lists" together, how can this be done so it is similar to the domain models, to easily access all properties and the 'n' task list in each service?
Thank you very much!
It's not too much overhead if your security requirements dictate that certain information not be shared. In most cases, I don't think it's a problem to just convert the whole domain object to JSON, but your app might be a special case.
You could write the code to do this in a manner more consistent with Groovy/Grails practice:
def services = []
for (s in Service.findAllByCompany("someCompany")) {
    def tasks = []
    for (t in s.tasks) {
        tasks << [id: t.id, taskName: t.taskName]
    }
    def service = [id: s.id, serviceName: s.serviceName, tasks: tasks]
    services << service
}
render services as JSON
I just noticed that your code also didn't provide keys for the ids and names (it used lists instead of maps), which is probably something you'd want to do, and which the example code I've written does.

What are some uses of closures for OOP?

PHP and .Net have closures; I have been wondering what are some examples of using closures in OOP and design patterns, and what advantages they have over pure OOP programming.
As a clarification, this is not an OOP vs. functional programming question, but about how to best use closures in an OOP design. How do closures fit in, say, factories or the observer pattern? What are some tricks you can pull that clarify the design and result in looser coupling, for example?
Closures are useful for event-handling. This example is a bit contrived, but I think it conveys the idea:
class FileOpener
{
    public FileOpener(OpenFileTrigger trigger)
    {
        trigger.FileOpenTriggered += (sender, args) => { this.Open(args.PathToFile); };
    }

    public void Open(string pathToFile)
    {
        //…
    }
}
My file opener can either open a file by directly calling instance.Open(pathToFile), or it can be triggered by some event. If I didn't have anonymous functions + closures, I'd have to write a method whose only purpose was to respond to this event.
Any language that has closures can use them for trampolining, which is a technique for refactoring recursion into iteration. This can get you out of "stack overflow" problems that naive implementations of many algorithms run into.
A trampoline is a function that "bounces" a closure back up to its caller. The closure captures "the rest of the work".
For example, in Python you can define a recursive accumulator to sum the values in an array:
testdata = list(range(0, 1000))

def accum(items):
    if len(items) == 0:
        return 0
    elif len(items) == 1:
        return items[0]
    else:
        return items[0] + accum(items[1:])

print("will blow up:", accum(testdata))
On my machine, this craps out with a stack overflow when the length of items exceeds 998.
The same function can be done in a trampoline style using closures:
def accum2(items):
    bounced = trampoline(items, 0)
    while callable(bounced):
        bounced = bounced()
    return bounced

def trampoline(items, initval):
    if len(items) == 0:
        return initval
    else:
        return lambda: trampoline(items[1:], initval + items[0])
By converting recursion to iteration, you don't blow out the stack. The closure has the property of capturing the state of the computation in itself rather than on the stack as you do with recursion.
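The same driver loop generalizes beyond accum2: any function rewritten to return a zero-argument closure (a "thunk") instead of recursing directly can be run through one generic trampoline. A small illustrative sketch (my own, not from the answer above):
def trampoline_run(thunk):
    # keep bouncing until we get a non-callable result
    # (assumes the final value itself is not callable)
    while callable(thunk):
        thunk = thunk()
    return thunk

def sum_to(n, acc=0):
    if n == 0:
        return acc
    return lambda: sum_to(n - 1, acc + n)  # the closure captures the remaining work

print(trampoline_run(sum_to(100000)))  # 5000050000, with no RecursionError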
Suppose you want to provide a class with the ability to create any number of FileOpener instances, but following IoC principles, you don't want the class creating FileOpeners to actually know how to do so (in other words, you don't want to new them). Instead, you want to use dependency injection. However, you only want this class to be able to generate FileOpener instances, and not just any instance. Here's what you can do:
class AppSetup
{
    private IContainer BuildDiContainer()
    {
        // assume this builds a dependency injection container and registers the types you want to create
    }

    public void setup()
    {
        IContainer container = BuildDiContainer();
        // create a function that uses the dependency injection container to create a `FileOpener` instance
        Func<FileOpener> getFileOpener = () => { return container.Resolve<FileOpener>(); };
        DependsOnFileOpener dofo = new DependsOnFileOpener(getFileOpener);
    }
}
Now you have your class that needs to be able to make FileOpener instances. You can use dependency injection to provide it with this capability while retaining loose coupling:
class DependsOnFileOpener
{
    public DependsOnFileOpener(Func<FileOpener> getFileOpener)
    {
        // this class can create FileOpener instances any time it wants, without knowing where they come from
        FileOpener f = getFileOpener();
    }
}

What is Map/Reduce?

I hear a lot about map/reduce, especially in the context of Google's massively parallel compute system. What exactly is it?
From the abstract of Google's MapReduce research publication page:
MapReduce is a programming model and an associated implementation for processing and generating large data sets. Users specify a map function that processes a key/value pair to generate a set of intermediate key/value pairs, and a reduce function that merges all intermediate values associated with the same intermediate key.
The advantage of MapReduce is that the processing can be performed in parallel on multiple processing nodes (multiple servers) so it is a system that can scale very well.
Since it's based on the functional programming model, the map and reduce steps each have no side-effects (the state and results from each subsection of a map process do not depend on the others), so the data set being mapped and reduced can be separated over multiple processing nodes.
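To make that concrete, here is a toy word-count sketch in Python (an illustration of the model, not Google's implementation). Each chunk's map step and each key's reduce step are independent, which is exactly what lets the real system farm them out to many machines:
from collections import defaultdict

def map_phase(chunk):
    # emit an intermediate (key, value) pair for every word
    return [(word, 1) for word in chunk.split()]

def shuffle(pairs):
    # group intermediate values by key
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(key, values):
    # merge all intermediate values associated with one key
    return key, sum(values)

chunks = ["the quick brown fox", "the lazy dog", "the end"]
pairs = [p for chunk in chunks for p in map_phase(chunk)]             # parallelizable per chunk
counts = dict(reduce_phase(k, v) for k, v in shuffle(pairs).items())  # parallelizable per key
print(counts)  # {'the': 3, 'quick': 1, 'brown': 1, ...}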
Joel's Can Your Programming Language Do This? piece discusses how understanding functional programming was essential at Google in coming up with MapReduce, which powers its search engine. It's a very good read if you're unfamiliar with functional programming and how it enables scalable code.
See also: Wikipedia: MapReduce
Related question: Please explain mapreduce simply
Map is a function that applies another function to all the items on a list, to produce another list with all the return values on it. (Another way of saying "apply f to x" is "call f, passing it x". So sometimes it sounds nicer to say "apply" instead of "call".)
This is how map is probably written in C# (it's called Select and is in the standard library):
public static IEnumerable<R> Select<T, R>(this IEnumerable<T> list, Func<T, R> func)
{
    foreach (T item in list)
        yield return func(item);
}
As you're a Java dude, and Joel Spolsky likes to tell GROSSLY UNFAIR LIES about how crappy Java is (actually, he's not lying, it is crappy, but I'm trying to win you over), here's my very rough attempt at a Java version (I have no Java compiler, and I vaguely remember Java version 1.1!):
// represents a function that takes one arg and returns a result
public interface IFunctor
{
    Object invoke(Object arg);
}

public static Object[] map(Object[] list, IFunctor func)
{
    Object[] returnValues = new Object[list.length];
    for (int n = 0; n < list.length; n++)
        returnValues[n] = func.invoke(list[n]);
    return returnValues;
}
I'm sure this can be improved in a million ways. But it's the basic idea.
Reduce is a function that turns all the items on a list into a single value. To do this, it needs to be given another function func that turns two items into a single value. It would work by giving the first two items to func. Then the result of that along with the third item. Then the result of that with the fourth item, and so on until all the items have gone and we're left with one value.
In C# reduce is called Aggregate and is again in the standard library. I'll skip straight to a Java version:
// represents a function that takes two args and returns a result
public interface IBinaryFunctor
{
    Object invoke(Object arg1, Object arg2);
}

public static Object reduce(Object[] list, IBinaryFunctor func)
{
    if (list.length == 0)
        return null; // or throw something?

    if (list.length == 1)
        return list[0]; // just return the only item

    Object returnValue = func.invoke(list[0], list[1]);
    for (int n = 2; n < list.length; n++) // start at 2: list[1] is already folded in
        returnValue = func.invoke(returnValue, list[n]);

    return returnValue;
}
These Java versions need generics adding to them, but I don't know how to do that in Java. But you should be able to pass them anonymous inner classes to provide the functors:
String[] names = getLotsOfNames();
String commaSeparatedNames = (String) reduce(names,
    new IBinaryFunctor() {
        public Object invoke(Object arg1, Object arg2)
        { return ((String) arg1) + ", " + ((String) arg2); }
    });
Hopefully generics would get rid of the casts. The typesafe equivalent in C# is:
string commaSeparatedNames = names.Aggregate((a, b) => a + ", " + b);
Why is this "cool"? Simple ways of breaking up larger calculations into smaller pieces, so they can be put back together in different ways, are always cool. The way Google applies this idea is parallelization, because both map and reduce can be shared out over several computers.
But the key requirement is NOT that your language can treat functions as values. Any OO language can do that. The actual requirement for parallelization is that the little func functions you pass to map and reduce must not use or update any state. They must return a value that is dependent only on the argument(s) passed to them. Otherwise, the results will be completely screwed up when you try to run the whole thing in parallel.
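A quick Python sketch of why that matters: with a pure, associative func you can split the list into chunks, reduce each chunk independently (each chunk could live on a different machine), then reduce the partial results, and you get the same answer as one sequential pass:
from functools import reduce

def add(a, b):
    # pure: the result depends only on the arguments
    return a + b

data = list(range(100))
chunks = [data[i:i + 25] for i in range(0, len(data), 25)]

sequential = reduce(add, data)
partials = [reduce(add, chunk) for chunk in chunks]  # each chunk reduced independently
combined = reduce(add, partials)

assert sequential == combined == 4950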
After getting frustrated with either very long waffley or very short vague blog posts, I eventually discovered this very good, rigorous, concise paper.
Then I went ahead and made it more concise by translating it into Scala, where I've provided the simplest case, in which a user simply specifies the map and reduce parts of the application. In Hadoop/Spark, strictly speaking, a more complex programming model is employed that requires the user to explicitly specify 4 more functions, outlined here: http://en.wikipedia.org/wiki/MapReduce#Dataflow
import scalaz.syntax.id._

trait MapReduceModel {
  type MultiSet[T] = Iterable[T]

  // `map` must be a pure function
  def mapPhase[K1, K2, V1, V2](map: ((K1, V1)) => MultiSet[(K2, V2)])
                              (data: MultiSet[(K1, V1)]): MultiSet[(K2, V2)] =
    data.flatMap(map)

  def shufflePhase[K2, V2](mappedData: MultiSet[(K2, V2)]): Map[K2, MultiSet[V2]] =
    mappedData.groupBy(_._1).mapValues(_.map(_._2))

  // `reduce` must be a monoid
  def reducePhase[K2, V2, V3](reduce: ((K2, MultiSet[V2])) => MultiSet[(K2, V3)])
                             (shuffledData: Map[K2, MultiSet[V2]]): MultiSet[V3] =
    shuffledData.flatMap(reduce).map(_._2)

  def mapReduce[K1, K2, V1, V2, V3](data: MultiSet[(K1, V1)])
                                   (map: ((K1, V1)) => MultiSet[(K2, V2)])
                                   (reduce: ((K2, MultiSet[V2])) => MultiSet[(K2, V3)]): MultiSet[V3] =
    mapPhase(map)(data) |> shufflePhase |> reducePhase(reduce)
}
// Kinda how MapReduce works in Hadoop and Spark, except `.par` would ensure 1 element gets a process/thread on a cluster.
// Furthermore, the splitting here won't enforce any kind of balance and is quite unnecessary anyway, as one would expect
// the data to already be split on HDFS - i.e. the filename would constitute K1.
// The shuffle phase will also be parallelized, and use the same partition as the map phase.
abstract class ParMapReduce(mapParNum: Int, reduceParNum: Int) extends MapReduceModel {
  def split[T](splitNum: Int)(data: MultiSet[T]): Set[MultiSet[T]]

  override def mapPhase[K1, K2, V1, V2](map: ((K1, V1)) => MultiSet[(K2, V2)])
                                       (data: MultiSet[(K1, V1)]): MultiSet[(K2, V2)] = {
    val groupedByKey = data.groupBy(_._1).map(_._2)
    groupedByKey.flatMap(split(mapParNum / groupedByKey.size + 1))
      .par.flatMap(_.map(map)).flatten.toList
  }

  override def reducePhase[K2, V2, V3](reduce: ((K2, MultiSet[V2])) => MultiSet[(K2, V3)])
                                      (shuffledData: Map[K2, MultiSet[V2]]): MultiSet[V3] =
    shuffledData.map(g => split(reduceParNum / shuffledData.size + 1)(g._2).map((g._1, _)))
      .par.flatMap(_.map(reduce))
      .flatten.map(_._2).toList
}
Map is a native JS method that can be applied to an array. It creates a new array as the result of some function applied to every element in the original array. So if you mapped function(element) { return element * 2; }, it would return a new array with every element doubled. The original array is left unmodified.
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/map
Reduce is a native JS method that can also be applied to an array. It applies a function to an array and has an initial output value called an accumulator. It loops through each element in the array, applies a function, and reduces them to a single value (which begins as the accumulator). It is useful because you can have any output you want, you just have to start with that type of accumulator. So if I wanted to reduce something into an object, I would start with an accumulator {}.
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/reduce