Scenario Outline: Using Whitelists & Ranges? - junit

I have been building up a Cucumber automation framework and have a number of components to test. I have used Scenario Outlines to capture various values and their expected responses.
What I see as a problem:
I have to specify every single type of input data and the error message to go with it. From the example Scenario Outline below you can see I have certain numbers that are all expected to return the same message. If anything does not equal these values, an error message is returned:
Scenario Outline: Number is or is not valid
Given I send an event with the "Number" set to <num>
Then I will receive the following <message>
Examples:
| num | message |
| 0 | "Processed event" |
| 1 | "Processed event" |
| 2 | "Processed event" |
| 3 | "Processed event" |
| 4 | "Processed event" |
| 5 | "Processed event" |
| 6 | "Message failed" |
| -1 | "Message failed" |
| "One" | "Message failed" |
What I would like to do:
I would basically like to have a "whitelist" of good data defined in the Scenario Outline, and if any other value is input, it returns the expected error message. Like the following:
Scenario Outline: Number is or is not valid
Given I send an event with the "Number" set to <num>
Then I will receive the following <message>
Examples:
| num | message |
| 0-5 | "Processed event" |
| Anything else | "Message failed" |
Is something like this possible with the code behind it? As you can see, it would make an automation suite far more concise and maintainable. If so, please let me know; keen to discuss.
Thanks!
Kirsty

Cucumber is a tool to support BDD. This means that it works really well when you have to communicate about behavior. But this particular problem leans towards validating the properties of the event validator, i.e. property-based testing. So it might be worth splitting the test strategy accordingly.
It appears there is a rule that valid events are processed and invalid events are rejected. This is something you could test with Cucumber. For example:
Feature: Events

  This system accepts events. Events are JSON messages.
  Examples of well known valid and invalid JSON messages
  can be found in the supplementary documentation.

  Scenario: The system accepts valid events
    When a well known valid event is sent
    Then the system accepts the event
    And responds with "Processed event"

  Scenario: The system rejects invalid events
    When a well known invalid event is sent
    Then the system rejects the event
    And responds with "Message failed"
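For reference, a minimal sketch of how these steps might bind to Cucumber step definitions in Java; the EventSystem stub here is a placeholder for the real system under test:

package my.example.project;

import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;

import static org.assertj.core.api.Assertions.assertThat;

public class EventStepDefinitions {

    // Stand-in for the real system under test: it accepts "Number" values 0-5
    // and answers with the same messages used in the scenarios above.
    static class EventSystem {
        String send(Object event) {
            boolean valid = event != null && event.toString().matches("^[0-5]$");
            return valid ? "Processed event" : "Message failed";
        }
    }

    private final EventSystem system = new EventSystem();
    private String response;

    @When("a well known valid event is sent")
    public void aWellKnownValidEventIsSent() {
        response = system.send(3); // any representative valid value
    }

    @When("a well known invalid event is sent")
    public void aWellKnownInvalidEventIsSent() {
        response = system.send("One"); // any representative invalid value
    }

    @Then("the system accepts the event")
    public void theSystemAcceptsTheEvent() {
        assertThat(response).isEqualTo("Processed event");
    }

    @Then("the system rejects the event")
    public void theSystemRejectsTheEvent() {
        assertThat(response).isEqualTo("Message failed");
    }

    @Then("responds with {string}")
    public void respondsWith(String message) {
        assertThat(response).isEqualTo(message);
    }
}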
It also appears there is a rule that valid events have a field "Number" set to any value between 0 and 5. And since it sounds like a JSON object, I'm guessing the strings "0", "1", "2", "3", "4", "5" are also valid. Anything else is invalid.
A good way to test this exhaustively is by using a property-based testing framework, for example JQwik. Given a description of a set of either valid or invalid values, it will randomly try a few. A simplified example:
package my.example.project;

import net.jqwik.api.*;

import static org.assertj.core.api.Assertions.assertThat;

class ValidatorProperties {

    @Provide
    Arbitrary<Object> validValues() {
        Arbitrary<Integer> validNumbers = Arbitraries.integers().between(0, 5);
        Arbitrary<String> validStrings = validNumbers.map(Object::toString);
        return Arbitraries.oneOf(validNumbers, validStrings);
    }

    @Provide
    Arbitrary<Object> invalidValues() {
        Arbitrary<Integer> invalidNumbers = Arbitraries.oneOf(
            Arbitraries.integers().lessOrEqual(-1),
            Arbitraries.integers().greaterOrEqual(6)
        );
        Arbitrary<String> invalidStrings = invalidNumbers.map(Object::toString);
        return Arbitraries.oneOf(
            invalidNumbers,
            invalidStrings,
            Arbitraries.just(null)
        );
    }

    @Property
    void accepts0To5(@ForAll("validValues") Object value) {
        Validator validator = new Validator();
        assertThat(validator.isValid(value)).isTrue();
    }

    @Property
    void rejectsAnythingElse(@ForAll("invalidValues") Object value) {
        Validator validator = new Validator();
        assertThat(validator.isValid(value)).isFalse();
    }

    static class Validator {
        boolean isValid(Object event) {
            return event != null && event.toString().matches("^[0-5]$");
        }
    }
}
Split this way, the Cucumber tests describe how the system should respond to valid and invalid events, while the JQwik tests describe what the properties of a valid and an invalid event are. This allows much more clarity on the first and greater fidelity on the second.

Related

Sentinel KQL JSON with Dynamic Label

I'm experimenting with Microsoft Sentinel and trying to understand how to parse JSON elements. One experiment is that I've wired my house with temperature and humidity sensors and fed them in; now the difficulty is the parsing. They're syslog events with a Message containing JSON as shown below.
SENSOR =
{
  "ZbReceived":
  {
    "0x03FA":
    {
      "Device": "0x03FA",
      "Name": "2_Back_Bedroom",
      "Humidity": 71.66,
      "Endpoint": 1,
      "LinkQuality": 66
    }
  }
}
Unfortunately the devices include the device ID as a label in the JSON, which makes it hard for me to figure out how to extract all the fields. There are 8 sensors, so repeating this for every one of them seems inefficient, but maybe it's necessary?
Is there a way I could extract the values from 8 different sensors? I've tried .[0]. and other variants, but no luck.
print T = dynamic('SENSOR = {"ZbReceived":{"0x03FA":{"Device":"0x03FA","Name":"2_Back_Bedroom","Humidity":71.66,"Endpoint":1,"LinkQuality":66}}}')
| mv-expand humidity = parse_json(substring(T, 9)).ZbReceived.["0x03FA"].Humidity
| mv-expand device = parse_json(substring(T, 9)).ZbReceived.["0x03FA"].Device
| mv-expand name = parse_json(substring(T, 9)).ZbReceived.["0x03FA"].Name
| mv-expand battery = parse_json(substring(T, 9)).ZbReceived.["0x03FA"].Battery
| mv-expand temperature = parse_json(substring(T, 9)).ZbReceived.["0x03FA"].Temperature
Quick explanation of the query below: parse strips the "SENSOR = " prefix and casts the remainder to dynamic, bag_keys() picks up the device id key (e.g. 0x03FA) without hard-coding it, and bag_unpack() expands that device's properties into columns.
print T = dynamic('SENSOR = {"ZbReceived":{"0x03FA":{"Device":"0x03FA","Name":"2_Back_Bedroom","Humidity":71.66,"Endpoint":1,"LinkQuality":66}}}')
| parse tostring(T) with "SENSOR = " sensor:dynamic
| project device = sensor.ZbReceived[tostring(bag_keys(sensor.ZbReceived)[0])]
| evaluate bag_unpack(device)
| Device | Endpoint | Humidity | LinkQuality | Name           |
|--------|----------|----------|-------------|----------------|
| 0x03FA | 1        | 71.66    | 66          | 2_Back_Bedroom |
Fiddle
P.S.
For clarity, the line with the project operator could be replaced with the following 2 lines:
| extend device_id = tostring(bag_keys(sensor.ZbReceived)[0]) // e.g., 0x03FA
| project device = sensor.ZbReceived[device_id]
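The same pattern should carry over to the live syslog data. A rough sketch, assuming the JSON payload arrives in the Syslog table's SyslogMessage column prefixed with "SENSOR = " (table and column names may differ in your workspace):

Syslog
| parse SyslogMessage with "SENSOR = " sensor:dynamic
| extend device_id = tostring(bag_keys(sensor.ZbReceived)[0]) // e.g., 0x03FA
| extend device = sensor.ZbReceived[device_id]
| project TimeGenerated, device_id,
          name = tostring(device.Name),
          humidity = todouble(device.Humidity),
          temperature = todouble(device.Temperature)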

how can I match these regular expressions?

I have a URL stored in a database. From a given URL, I have to fetch data from the database, but I'm confused about how to identify the actual URL. It's confusing when the part of the URL supplied by the user is dynamic, like text or a number.
example:
/user should match the url /user
/user/1 should match the url /user/{id}
/user/name/johndoe should match the url /user/name/{name}
/user/1/johndoe should match the url /user/{id}/{name}
I have tried this one; it works for /user/1, i.e. with integers, but I cannot make it work with strings. There's no way to identify the string parameters. Or is there another way, perhaps a workaround in MySQL?
Goal: fetch the content from the database where a given URL string matches the url column. Note: {id}, {name} and {type} are dynamic values.
Controller:
$sanitizedUrl = preg_replace('/\/*\d{1,}/','/{id}', '/user/name/1');
UrlContent::where('url', $sanitizedUrl)->first()->content;
This code only works with URLs whose dynamic parts are numbers. Example: /user/12/type/12 is replaced with /user/{id}/type/{id}.
Database: url_contents

| url                 | content                        |
|---------------------|--------------------------------|
| /user/name/{id}     | matches if url is /user/name/1 |
| /user/{id}/{name}   | if url is /user/1/john         |
| /user/{type}/{name} | if url is /user/{type}/{name}  |
You should do it the other way around:
function getUrlContent($url)
{
    // Map each placeholder to the pattern it should match.
    $replacements = [
        '{id}'   => '\d+',
        '{name}' => '\w+',
    ];
    $urlContents = UrlContent::all();
    foreach ($urlContents as $urlContent) {
        // Turn the stored template into an anchored regex, e.g.
        // "/user/{id}/{name}" becomes "~^/user/\d+/\w+$~",
        // so only full-path matches are accepted.
        $pattern = '~^' . strtr($urlContent->url, $replacements) . '$~';
        if (preg_match($pattern, $url)) {
            return $urlContent->content;
        }
    }
    return null;
}

$url = '/user/1/johndoe';
var_dump(getUrlContent($url));
Also, you may want to cache the built regular expressions for a while to reduce the overhead of the MySQL round trips. If the url_contents table grows significantly, you may have to add another column which contains the regular expression itself:
+-----------------+----------------+------------+
| url | url_regex | content |
+-----------------+----------------+------------+
| /user/name/{id} | /user/name/\d+ | ......... |
+-----------------+----------------+------------+
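With such a column in place, one possible sketch is to let MySQL do the matching directly; the whereRaw call and the stored patterns below are illustrative, not tested against this schema:

$url = '/user/name/1';

// Assumes url_regex stores anchored patterns such as ^/user/name/[0-9]+$
$content = UrlContent::whereRaw('? REGEXP url_regex', [$url])->value('content');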

Kusto KQL reference first object in an JSON array

I need to grab the value of the first entry in a json array with Kusto KQL in Microsoft Defender ATP.
The data format looks like this (anonymized), and I want the value of "UserName":
[{"UserName":"xyz","DomainName":"xyz","Sid":"xyz"}]
How do I split or in any other way get the "UserName" value?
In WDATP/MSTAP, for the "LoggedOnUsers" type of arrays, you want "mv-expand" (multi-value expand) in conjunction with "parsejson".
"parsejson" will turn the string into JSON, and mv-expand will expand it into LoggedOnUsers.Username, LoggedOnUsers.DomainName, and LoggedOnUsers.Sid:
DeviceInfo
| mv-expand parsejson(LoggedOnUsers)
| project DeviceName, LoggedOnUsers.UserName, LoggedOnUsers.DomainName
Keep in mind that if the packed field has multiple entries (like DeviceNetworkInfo's IPAddresses field often does), the entire row will be expanded once per entry - so a row for a machine with 3 entries in "IPAddresses" will be duplicated 3 times, with each different expansion of IpAddresses:
DeviceNetworkInfo
| where Timestamp > ago(1h)
| mv-expand parsejson(IPAddresses)
| project DeviceName, IPAddresses.IPAddress
To access the first entry's UserName property you can do the following:
print d = dynamic([{"UserName":"xyz","DomainName":"xyz","Sid":"xyz"}])
| extend result = d[0].UserName
To get the UserName for all entries, you can use mv-expand/mv-apply:
print d = dynamic([{"UserName":"xyz","DomainName":"xyz","Sid":"xyz"}])
| mv-apply d on (
project d.UserName
)
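Combining the two, a minimal sketch that pulls only the first logged-on user per device without expanding rows, using the DeviceInfo columns already mentioned above:

DeviceInfo
| extend FirstUser = tostring(parse_json(LoggedOnUsers)[0].UserName)
| project DeviceName, FirstUser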
Thanks for the reply, but the proposed solution didn't work for me. However, I instead found the following solution:
project substring(split(split(LoggedOnUsers,',',0),'"',4),2,9)
The output of this is: UserName

Type error when equipping Menhir with an abstract syntax tree

EDIT:
My below question still stands but I appreciate that it's hard to answer without sifting through a pile of code. Therefore, to ask a somewhat similar question, does anyone have any examples of Menhir being used to implement an AST? Preferably not "toy" projects like a calculator but I would appreciate any help I could get.
Original Question:
I'm trying to implement an abstract syntax tree using Menhir and there's an issue I can't seem to solve. My set up is as follows:
The AST's specification is generated using atdgen. This is basically a file with all of my grammar rules translated to the ATD format. This allows me to serialize some JSON, which is what I use to print out the AST.
In my parser.mly file I have a long list of productions. As I'm using Menhir, I can link these productions up to AST node creation, i.e. each production from the parser corresponds to an instruction to record a value in the AST.
The second point is where I'm really struggling to make progress. I have a huge grammar (the ast.atd file is ~600 lines long and the parser.mly file is ~1000 lines long), so it's hard to pin down where I'm going wrong. I suspect I have a type error somewhere along the way.
Snippets of Code
Here's what my ast.atd file looks like:
...
type star = [ Star ]
type equal = [ Equal ]
type augassign = [
| Plusequal
| Minequal
| Starequal
| Slashequal
| Percentequal
| Amperequal
| Vbarequal
| Circumflexequal
| Leftshiftequal
| Rightshiftequal
| Doublestarequal
| Doubleslashequal
]
...
Here's what my parser.mly file looks like:
...
and_expr // Used in: xor_expr, and_expr
: shift_expr
{ $1 }
| and_expr AMPERSAND shift_expr
{ `And_shift ($1, `Ampersand, $3) } ;
shift_expr // Used in: and_expr, shift_expr
: arith_expr
{ $1 }
| shift_expr pick_LEFTSHIFT_RIGHTSHIFT arith_expr
{ `Shift_pick_arith ($1, $2, $3) } ;
pick_LEFTSHIFT_RIGHTSHIFT // Used in: shift_expr
: LEFTSHIFT
{ `Leftshift }
| RIGHTSHIFT
{ `Rightshift } ;
...
The error I get when I try to compile the files with
ocamlbuild -use-menhir -tag thread -use-ocamlfind -quiet -pkgs
'core,yojson,atdgen' main.native
is a type error, i.e.
This expression has type [GIANT TYPE CONSTRUCTION] but an expression
was expected of type [DIFFERENT GIANT TYPE CONSTRUCTION]
I realise that this question is somewhat difficult to answer in the abstract like this, and I'm happy to provide a link to the dropbox of my code, but I'd really appreciate if anyone could point me in the right direction.
Possibly of interest: I have some productions in parser.mly that were initially "empty", which I dealt with by using the OCaml option type (Some and None). Perhaps I could be having issues here?
For examples of code using Menhir, you can have a look at the list on the right of the OPAM menhir page; all of those depend on menhir.
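As a general debugging aid, not specific to this code base, giving Menhir explicit %type declarations for intermediate nonterminals makes the compiler report the mismatch at the offending rule instead of as one giant inferred polymorphic-variant type. The type and rule names below are assumptions:

(* parser.mly declarations section: hypothetical %type annotations;
   Ast.and_expr and Ast.shift_expr stand in for the atdgen-generated types. *)
%type <Ast.and_expr> and_expr
%type <Ast.shift_expr> shift_expr
%type <[ `Leftshift | `Rightshift ]> pick_LEFTSHIFT_RIGHTSHIFT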

Is there a way to create custom settings that could be a representation of JSON structure?

I have a generic JSON string containing a bunch of arrays. This list can grow in the future and can have any number of repeating elements.
For ex:
"parent_node": {
"node_1": {
"a": "1",
"b": "2"
},
"node_2": {
"a": "1",
"b": "2"
},
"node_3": {
"a": "1",
"b": "2"
}
}
I can easily use a static resource, but maintenance becomes a problem. My idea is to provide user-friendly customization. Using JSON would be much easier for me, but my Salesforce users are not familiar with JSON, and that adds a dependency on them learning to build a valid JSON file.
I am trying to use custom settings, but they don't seem to be much help. My idea is to accommodate all future enhancements without modifying Apex code; every new addition of child elements or even parent elements must be configurable.
Take into account that a custom setting is the equivalent of a table in a relational database, so you cannot use "document-based" representations.
If the fields on the nodes are fixed, you could use custom settings for that. For example, the previous JSON structure could be represented as a custom setting like:
| parent node   | node  | a | b |
|---------------|-------|---|---|
| parent node x | node1 | 1 | 2 |
| parent node x | node2 | 1 | 2 |
| parent node x | node3 | 1 | 2 |
| parent node x | node4 | 1 | 2 |
A few considerations:
Make the custom setting type "List".
Make its visibility "Public".
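For what it's worth, a rough Apex sketch of reading such a list custom setting back into a nested map; the Parent_Node__c object and its field names are made up for illustration:

// Hypothetical list custom setting Parent_Node__c with fields Node__c, A__c, B__c.
Map<String, Map<String, String>> nodes = new Map<String, Map<String, String>>();
for (Parent_Node__c row : Parent_Node__c.getAll().values()) {
    nodes.put(row.Node__c, new Map<String, String>{ 'a' => row.A__c, 'b' => row.B__c });
}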