Why is a MATLAB/Simulink function call creating duplicated/cloned parameters?

I have a MATLAB/Simulink model that is converted to C code for use in an embedded application.
Inside this Simulink model I have a few Stateflow charts with function calls. These subsystems refer to some base-workspace parameters to do their calculations.
Whenever I try to simulate this model, stimulating the inputs and reading the outputs, I get the following glitch: the workspace-defined variable 'multiplies', and the simulation crashes because these new cloned parameters have no defined values.
If I try to run the model again, more clones are created (4, 5, 6, ...).
Does anyone have any clue about this?
Thanks!


stable-baselines3, gym: train while also calling step/predict

With stable-baselines3, given an agent we can call "action = agent.predict(obs)", and then with Gym "new_obs, reward, done, info = env.step(action)" (more or less; I may have missed an input or an output).
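For reference, a minimal sketch of that predict/step loop, assuming an older Gym API where reset() returns only the observation (as in the 4-tuple step above); the environment and algorithm choices are purely illustrative:

import gym
from stable_baselines3 import PPO

env = gym.make("CartPole-v1")
agent = PPO("MlpPolicy", env)  # an untrained agent, just to show the loop

obs = env.reset()
for _ in range(1_000):
    # predict returns (action, hidden_state); the state only matters for recurrent policies
    action, _state = agent.predict(obs)
    new_obs, reward, done, info = env.step(action)
    obs = env.reset() if done else new_obs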
We also have "agent.learn(10_000)" as an example, yet there we are less involved in the process and don't call the environment ourselves.
I am looking for a way to train the agent while still calling "env.step" myself. If you wonder why: I am trying to implement self-play, with the agent and a previous version of it playing in one environment (for example, turn-based play such as chess).
WKR, Oren.
But why do you need that? If you take a look at the implementation of any learn method, you will see it is nothing more than an iteration over time steps calling collect_rollouts and train, with some additional logging and setup at the beginning for, e.g., saving the agent later. Your env.step is called inside collect_rollouts.
I would rather suggest you write a callback based on CheckpointCallback, which saves your agent (model) every N training steps, and attach this callback to your learn call. In your environment you could then instantiate a "new previous" version of your model every N steps by calling ModelClass.load(file) on the file saved by the callback, so that your environment can select the other player's actions for self-play. A sketch of this setup follows below.
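A minimal sketch of that idea, assuming PPO; SelfPlayEnv is a hypothetical environment written for this answer, not part of stable-baselines3:

import os
import gym
import numpy as np
from stable_baselines3 import PPO
from stable_baselines3.common.callbacks import CheckpointCallback

class SelfPlayEnv(gym.Env):
    # Hypothetical two-player environment: the opponent is the latest checkpoint.
    def __init__(self, checkpoint_dir):
        self.checkpoint_dir = checkpoint_dir
        self.opponent = None
        self.observation_space = gym.spaces.Box(-1.0, 1.0, shape=(4,), dtype=np.float32)
        self.action_space = gym.spaces.Discrete(2)

    def reset(self):
        # Reload the most recently saved opponent, if any checkpoint exists yet.
        files = sorted(os.listdir(self.checkpoint_dir)) if os.path.isdir(self.checkpoint_dir) else []
        if files:
            self.opponent = PPO.load(os.path.join(self.checkpoint_dir, files[-1]))
        return self.observation_space.sample()

    def step(self, action):
        obs = self.observation_space.sample()  # toy dynamics, stands in for the real game
        if self.opponent is not None:
            opp_action, _ = self.opponent.predict(obs)  # the previous version takes its turn
        return obs, 0.0, False, {}

checkpoint_cb = CheckpointCallback(save_freq=10_000, save_path="./checkpoints/",
                                   name_prefix="selfplay")
env = SelfPlayEnv("./checkpoints/")
model = PPO("MlpPolicy", env)
# env.step is still called internally by collect_rollouts during learn()
model.learn(total_timesteps=100_000, callback=checkpoint_cb)

Note that sorted() just picks the lexicographically last file; real "newest checkpoint" logic would parse the step count out of the filename.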

Accessing regmap RegFields

I am trying to find a clean way to access the regmap that is used with *RegisterNode for creating documentation and testing files. TLRegisterNode has methods for generating the JSON through some annotations. These are generated in the regmap method by adding them to the ElaborationArtefacts object. Other protocols don't seem to have these annotations.
Is there any way to iterate over the "regmap" RegFields post-elaboration, or during it?
I cannot just access the regmap, as it's not really a val/var, it's a method. I can't quite figure out where this information is being stored. I don't really believe it's actually "storing" any information so much as it is simply creating the hardware to attach the specified logic to the RegisterNode-based logic.
The JSON output is actually fine for me, as I could just write a post-processing script to convert the JSON to my required formats, but I'm wondering whether I can access this information, or whether I could add a custom function call at the end. I cannot extend the case class *RegisterNode, but I'm not sure if it's possible to add custom functions to run at the end of the regmap method.
Here is something I threw together quickly:
// in *RegisterRouter.scala
// The hook takes a Seq because (RegField.Map*) => Unit is not a valid function type.
def customregmap(customFunc: Seq[RegField.Map] => Unit, mapping: RegField.Map*) = {
  regmap(mapping: _*)  // build the register hardware as usual
  customFunc(mapping)  // then hand the same mapping to the user's hook
}

def regmap(mapping: RegField.Map*) = {
  // normal stuff
}
A user could then create a custom function to run and pass it to the regmap call or to the RegisterRouter:
def myFunc(mapping: Seq[RegField.Map]): Unit = {
  println("I'm doing my custom function for regmap!")
}
// ...
node.customregmap(myFunc,
  0x0 -> coreControlRegFields,
  0x4 -> fdControlRegFields,
  0x8 -> fdControl2RegFields,
)
This is just a quick example. I believe a better approach, if something like this were possible, would be to have a Seq of functions attached to the RegisterNode that are run at the end of the regmap method, similar to how TLRegisterNode currently works. A user could then add an arbitrary number of hooks while still using the plain regmap call.
Background (not directly part of question):
I have a unified register script that I have built over the years, in which I describe the registers for a particular IP. It works very similarly to RegField/node.regmap, except it obviously doesn't know about diplomacy and the like. It will generate the Verilog, but also a variety of files for DV (basic `define macros for simple Verilog simulations, and more complex uvm_reg_block definitions, with the ability to describe multiples of the IP for a subsystem all the way up to the SoC level). It will also print out C header files for SW and Sphinx reStructuredText for documentation.
Diplomacy actually solves one of the main issues I've been dealing with, so I'm obviously trying to push most of my newer designs to Chisel/Diplomacy.
I ended up solving this by creating my own RegisterNode, which is the same as the rocket-chip RegisterNodes except that I use a different ElaborationArtefact to grab the info and store it for later.

CAPL Test Functions and Normal Functions in CAPL

I want to know the difference between "CAPL test functions" and normal functions (like in C or C++) that can be used in CAPL.
In which scenarios should I use test functions?
Thanks.
Test functions are mainly used in test nodes, which are used for running test cases (defined in an XML file) and providing reports about the results.
Normal functions can be used in test, simulation, and program nodes.
Internally pre-defined CAPL functions do not require function libraries or linked header files in order to use and compile them.
CAPL provides functions in three categories:
1. CAPL's internal library
2. user-defined functions
3. DLL functions, which require the user to implement a dynamic-link library
The idea behind test functions and normal functions is quite simple. You can use both in Vector CANoe (test modules) and vTEST Studio. By the way, to make a function visible in an outer scope you use the 'export' keyword.
Test functions:
- they are always top-most (they cannot be nested in or executed by any other function)
- they do not return anything
- they provide additional logging in Vector CANoe test reports (visible either in the HTML/XML-based report or in the CANoe Test Report Viewer)
- use them only in CAPL test modules, as 'test steps' of test cases (top-most functions)
Normal functions:
- they may be called by other functions and by test functions
- they may have a return value
- executing one does not influence test logs directly (only information added by testStep, testStepPassed, etc. will appear in the test report)
- use them in test cases only when you need to return values (test functions cannot be used in that case)
- use them as smaller building blocks of test functions

Lambda function calling another Lambda function

I want to create a Lambda function that runs through files in an S3 bucket and, if needed, triggers other Lambda functions to parse those files in parallel.
Is this possible?
Yes, it's possible. You would use the AWS SDK (which is included in the Lambda runtime environment for you) to invoke other Lambda functions, just like you would in code running anywhere else. A minimal example is below.
You'll have to specify which language you are writing the Lambda function in if you want a more detailed answer.
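For instance, in Python with boto3 (the function name and payload here are placeholders):

import json
import boto3

lambda_client = boto3.client("lambda")
lambda_client.invoke(
    FunctionName="parse-file",  # placeholder name of the target Lambda
    InvocationType="Event",     # asynchronous; "RequestResponse" waits for the result
    Payload=json.dumps({"key": "path/to/file"}),
)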
If I understand your problem correctly, you want one Lambda that goes through a list of files in an S3 bucket. Some condition decides whether a file should be parsed or not, and for the files that should be parsed you want another 'file-parsing' Lambda to do the work.
To do this you will need two Lambdas: one 'S3 reader' and one 'S3 file parser'.
For triggering the 'S3 file parser' Lambda you have a few different options. Here are two:
Trigger it using an SNS topic (here is an article on how to do that). If you have a very long list of files this might be an issue, as you will most likely surpass the number of Lambda instances that can run in parallel.
Trigger it by invoking it with the AWS SDK (see the article 'Leon' posted as a comment for how to do that); a sketch follows below. What you need to consider here is that a long list of files might cause the 'S3 reader' Lambda that controls the invocations to time out, since there is a 5-minute runtime limit for a Lambda.
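A rough sketch of that second option in Python with boto3; the bucket, function name, and parse condition are all placeholders:

import json
import boto3

s3 = boto3.client("s3")
lambda_client = boto3.client("lambda")

def handler(event, context):
    # 'S3 reader' Lambda: walk the bucket and fan out to the parser Lambda.
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket="my-bucket"):
        for obj in page.get("Contents", []):
            if not obj["Key"].endswith(".csv"):  # stand-in parse condition
                continue
            # 'Event' invokes asynchronously, so the reader does not wait for
            # each parser to finish, which helps with the runtime limit.
            lambda_client.invoke(
                FunctionName="s3-file-parser",
                InvocationType="Event",
                Payload=json.dumps({"bucket": "my-bucket", "key": obj["Key"]}),
            )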
Depending on the actual use case, another potential solution is to have just one Lambda that is triggered when a file is uploaded to the S3 bucket, decides whether the file should be parsed, and then parses it if needed. More info about how to do that can be found in this article and this tutorial.

SSIS SQL Task Map Result Set to Project Parameter

I am implementing a custom auditing framework, logging ETL events such as start, end, error, insertrows, etc.
As well as logging at a package level, I'm implementing "session logging", where a sequence of package executions (i.e. a controller package that executes several packages) is a session. In order to keep track of the session, the stored procedures always return a SessionLogID.
I was hoping I could map this result set to a project parameter, as otherwise I will have to save it to a user variable and then pass it around between packages via parameters. That would mean every single package would need a package parameter and a user variable called SessionLogID, and I don't want to do that if I don't need to.
Open to other suggestions.
Thanks,
Adam
Parameters cannot change at runtime; they are a set-once kind of deal, whereas variables can change at any time. You can set the variable once in the parent package and map that variable to each child package's parameter.