I want to know the difference between CAPL "Test Functions" and normal functions (like in C or C++) that can be used in CAPL.
Under which scenarios should I use Test Functions?
Thanks.
Test functions are mainly used in test nodes, which run test cases (defined, for example, in an XML file) and provide reports about the results.
Normal functions can be used in test, simulation, and program nodes.
CAPL's internally pre-defined functions do not require function libraries or linked header files in order to use and compile them.
CAPL provides functions in 3 categories:
1. CAPL's internal library
2. User-defined functions
3. DLL functions, which require the user to implement a dynamic link library
The idea behind test functions and normal functions is quite simple. You can use both in Vector CANoe (test modules) and vTEST Studio. By the way, to make a function visible in an outer scope you use the 'export' keyword.
Test Functions:
- they are always top-most (they cannot be nested or executed by any other function)
- they do not return anything
- they provide additional logging in Vector CANoe test reports (visible either in the HTML/XML-based reports or in the CANoe Test Report Viewer)
- use them only in CAPL test modules as 'test steps' of test cases (top-most functions)
Casual Functions:
- might be called by other functions and test functions
- might have a return value
- executing a function does not influence test logs directly (only information added by testStep, testStepPass, etc. will be added to the test report)
- use them in test cases only when you need to return values (test functions cannot be used in that case)
- use them as smaller building blocks of test functions (see the sketch below)
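To tie these points together, here is a minimal CAPL sketch (all names are made up, and it assumes the Test Feature Set services testStepPass/testStepFail are available in the test module):

// casual function: may return a value and can be called from anywhere
int addOffset(int value)
{
  return value + 10;
}

// test function: no return value, appears as its own step in the test report
testfunction tf_CheckValue(int expected)
{
  int actual;
  actual = addOffset(5);
  if (actual == expected)
    testStepPass("1.0", "value is as expected");
  else
    testStepFail("1.0", "value is wrong");
}

// test case: invokes the test function as a top-level test step
testcase TC_Example()
{
  tf_CheckValue(15);
}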
Could someone please tell me why a variable is declared as "EXTERNAL" in a module, how to use it in other modules through the LINKAGE SECTION, and how to pass it into new fields so I can write it to a new file?
EXTERNAL items are commonly found in WORKING-STORAGE. These are normally not passed from one program to another via CALL and LINKAGE but shared directly via the COBOL runtime.
Declaring an item as EXTERNAL behaves like "runtime named global storage": you assign a name and a length to a global piece of memory and can access it anywhere in the same run unit (no direct CALL needed), even in cases like the following:
MAIN
-> CALL B
B: somevar EXTERNAL
-> MOVE 'TEST' TO somevar
-> CANCEL B
-> CALL C
C: somevar EXTERNAL -> now contains 'TEST'
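To make the sharing concrete, here is a minimal sketch of the matching data declarations (the data name and length are made up; every program that wants to share the item declares it the same way in its own WORKING-STORAGE):

*> in program B, WORKING-STORAGE SECTION
01  SOMEVAR     PIC X(10) EXTERNAL.
*> later, in B's PROCEDURE DIVISION
    MOVE 'TEST' TO SOMEVAR

*> in program C, WORKING-STORAGE SECTION (same name, same description)
01  SOMEVAR     PIC X(10) EXTERNAL.
*> later, in C's PROCEDURE DIVISION
    DISPLAY SOMEVAR  *> still contains 'TEST'

Because both declarations name the same EXTERNAL item, they refer to the same storage for the life of the run unit; no CALL ... USING and no LINKAGE SECTION entry is needed.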
On an IBM Z mainframe running z/OS, the runtime routines for all High Level Languages (HLLs) are called Language Environment (LE). Decades ago, each HLL had its own runtime, and this caused some problems when they were mixed into the same run unit; starting in the early 1990s, IBM switched all HLLs to LE for their runtime.
LE has the concept of an enclave. Part of the text at that link says an enclave is the equivalent of a run unit in COBOL.
Your question is tagged CICS, and sometimes behavior is different when running in that environment. Quoting from that link...
Under CICS the execution of a CICS LINK command creates what Language Environment calls a Child Enclave. A new environment is initialized and the child enclave gets its runtime options. These runtime options are independent of those options that existed in the creating enclave.
[...]
Something similar happens when a CICS XCTL command is executed. In this case we do not get a child enclave, but the existing enclave is terminated and then reinitialized with the runtime options determined for the new program. The same performance considerations apply.
So, as @SimonSobich noted, if you use CALLs to invoke your subroutines when running in CICS, EXTERNAL data is global to the run unit. But if you use EXEC CICS XCTL to invoke your subroutines, you may see different behavior and have to design your application differently.
In Foundry's Slate application, is there a clean way to write functions that accept arguments as input using the handlebar syntax?
Instead of function arguments, inputs to a Slate Function are defined by a Handlebar reference inside the Function itself; for example, to access data from a query inside a Function, you might write:
const data = {{q_myQuery}}
Defining dependencies in this way allows Slate to automatically recompute the Function outputs whenever the values of upstream dependencies change. In this way, you never "call" a function, but rather some other element in Slate references the Function output and that output is updated whenever the inputs change.
If you want some kind of code reuse, you can use functionLibraries to write common code that you can share between Functions. These are standard JavaScript functions that are included in the global JavaScript scope; they can be referenced simply by function name from any Function and take parameters using normal JavaScript syntax. Since these are vanilla JavaScript, you cannot use Handlebars inside a functionLibrary - any input must be passed in as a parameter from the parent Function, as in the sketch below.
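For example (a sketch only; q_myQuery, sumColumn, and the column name are made-up names), a functionLibrary might contain a plain JavaScript helper:

// in a functionLibrary: plain JavaScript, no Handlebars allowed here
function sumColumn(rows, columnName) {
    return rows.reduce(function (total, row) {
        return total + row[columnName];
    }, 0);
}

and a Function would resolve its Handlebar inputs first and pass them in as ordinary parameters:

// in a Slate Function: the Handlebar reference is the input, the library call does the work
const rows = {{q_myQuery}};
return sumColumn(rows, "amount");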
From the documentation (Slate > Concepts > Functions):
Per-Document level function libraries
Users are able to write reusable javascript functions with parameters. This will assist in the refactoring of code and reducing the copying and pasting of code in functions. You can also re-run and update all the functions dependent on a function library using the Re-run All Function button.
Default JavaScript libraries available
For enhanced use of functions, Slate ships by default (as of Slate 2.15) with the following external JavaScript libraries: Lodash, Math.js, Moment, Numeral and es6-shim. Feel free to use these libraries when writing your functions. Do not use ES6 syntax features unless all users are mandated to use a browser supporting these features.
I want to use Randoop to generate unit test cases for an application that consists of many API functions. I can specify which methods to use to form the tests using the methodlist option. But what if a method uses class fields that have to be set using some setter functions first? Right now, these setter functions are being called after the API call in almost all the tests generated by Randoop. Can I specify the order in which the methods are called in my test?
Not currently. I determined this by reading the Randoop manual.
If you want to suggest the feature, you can do so at the issue tracker or on the mailing list.
I am using a third-party library that is a wrapper over some C functions. Unfortunately, nearly all of the Go functions are free functions (they don't have a receiver; they are not methods). This is not the design approach I would have taken, but it is what I have.
Using just Go's standard "testing" library:
Is there a solution that allows me to create tests where I can mock functions?
Or is the solution to wrap the library into structures and interfaces, then mock the interface to achieve my goal?
I have created a Monte Carlo simulation that also processes the produced dataset. One of my evaluation algorithms looks for specific models that it then passes to the third-party function for evaluation. I know my edge cases and what the call counts should be, and this is what I want to test.
Perhaps a simple counter is all that is needed?
Other projects using this library that I have found either do not have full coverage or have no tests at all.
You can do this by using a reference to the actual function whenever you need to call it.
Then, when you need to mock the function you just point the reference to a mock implementation.
Let's say this is your external function:
// this is the wrapper for an external function
func externalFunction(i int) int {
return i * 10 // do something with input
}
You never call this directly but instead declare a reference to it:
var processInt func(int) int = externalFunction
When you need to invoke the function you do it using the reference:
fmt.Println(processInt(5))
Then, when you want to mock the function, you just assign a mock implementation to that reference:
processInt = mockFunction
This playground puts it together and might be more clear than the explanation:
https://play.golang.org/p/xBuriFHlm9
If you have a function that receives a func(int) int, you can send that function either the actual externalFunction or the mockFunction when you want it to use the mock implementation.
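Building on that, and on the "simple counter" idea from the question, here is a hedged sketch of a test using only the standard testing package. It assumes the externalFunction wrapper and the processInt reference above live in the same package; evaluateModels and the package name are made up, and in practice evaluateModels would sit in the normal source file while the test lives in a _test.go file.

package mypkg

import "testing"

// hypothetical code under test: calls the external function once per model
func evaluateModels(models []int) {
    for _, m := range models {
        processInt(m)
    }
}

func TestEvaluateModelsCallCount(t *testing.T) {
    // restore the real implementation when the test finishes
    original := processInt
    defer func() { processInt = original }()

    // mock implementation: a simple counter with a canned return value
    calls := 0
    processInt = func(i int) int {
        calls++
        return 0
    }

    evaluateModels([]int{1, 2, 3})

    if calls != 3 {
        t.Errorf("expected 3 calls to the external function, got %d", calls)
    }
}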
I want to create a Lambda function that runs through S3 files and if needed triggers other Lambda functions to parse the files in parallel.
Is this possible?
Yes it's possible. You would use the AWS SDK (which is included in the Lambda runtime environment for you) to invoke other Lambda functions, just like you would do in code running anywhere else.
You'll have to specify which language you are writing the Lambda function in if you want a more detailed answer.
If I understand your problem correctly, you want one lambda that goes through a list of files in an S3 bucket. Some condition decides whether a file should be parsed or not, and for the files that should be parsed you want another 'file-parsing' lambda to parse those files.
To do this you will need two lambdas - one 'S3 reader' and one 'S3 file parser'.
For triggering the 'S3 file parser' lambda you have a few different options. Here are two:
Trigger it using an SNS topic. (Here is an article on how to do that.) If you have a very long list of files this might be an issue, as you will most likely surpass the number of instances of a lambda that can run in parallel.
Trigger it by invoking it with the AWS SDK. (See the article 'Leon' posted as a comment to see how to do that.) What you need to consider here is that a long list of files might cause the 'S3 reader' lambda that controls the invocations to time out, since there is a 5 minute runtime limit for a lambda. A sketch of this option is shown below.
Depending on the actual use case, another potential solution is to have just one lambda that gets triggered when a file is uploaded to the S3 bucket, let it decide whether the file should be parsed, and parse it if needed. More info about how to do that can be found in this article and this tutorial.
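Since the question does not specify a language, here is a hedged sketch of the second option in Go using the AWS SDK for Go (v1). The bucket name, the parser lambda's name, the .csv condition, and the payload shape are all made up, and the aws-lambda-go handler wiring plus S3 pagination are omitted for brevity; it only shows the SDK calls for listing the files and invoking the parser asynchronously.

package main

import (
    "encoding/json"
    "fmt"
    "strings"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/lambda"
    "github.com/aws/aws-sdk-go/service/s3"
)

func main() {
    sess := session.Must(session.NewSession())
    s3Client := s3.New(sess)
    lambdaClient := lambda.New(sess)

    // list the files in the bucket (pagination omitted for brevity)
    out, err := s3Client.ListObjectsV2(&s3.ListObjectsV2Input{
        Bucket: aws.String("my-input-bucket"),
    })
    if err != nil {
        fmt.Println("listing objects failed:", err)
        return
    }

    for _, obj := range out.Contents {
        key := aws.StringValue(obj.Key)

        // made-up condition: only .csv files need parsing
        if !strings.HasSuffix(key, ".csv") {
            continue
        }

        // asynchronous ("Event") invocation, so this lambda does not
        // wait for each parse to finish
        payload, _ := json.Marshal(map[string]string{
            "bucket": "my-input-bucket",
            "key":    key,
        })
        _, err := lambdaClient.Invoke(&lambda.InvokeInput{
            FunctionName:   aws.String("s3-file-parser"),
            InvocationType: aws.String("Event"),
            Payload:        payload,
        })
        if err != nil {
            fmt.Println("invoking parser for", key, "failed:", err)
        }
    }
}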