Sentinel KQL JSON with Dynamic Label

I'm experimenting with Microsoft Sentinel and trying to understand how to parse JSON elements. As an experiment I've wired my house with temperature and humidity sensors and fed them into Sentinel; the difficulty now is the parsing. They arrive as syslog events with a Message containing JSON, as shown below.
SENSOR =
{
  "ZbReceived": {
    "0x03FA": {
      "Device": "0x03FA",
      "Name": "2_Back_Bedroom",
      "Humidity": 71.66,
      "Endpoint": 1,
      "LinkQuality": 66
    }
  }
}
Unfortunately the devices include the device ID as a label in the JSON, which makes it hard for me to figure out how to extract all the fields. There are 8 sensors, so repeating this for every one of them seems inefficient, but maybe it's necessary?
Is there a way I could extract the values from 8 different sensors? I've tried .[0]. and other variants, but no luck.
print T = dynamic('SENSOR = {"ZbReceived":{"0x03FA":{"Device":"0x03FA","Name":"2_Back_Bedroom","Humidity":71.66,"Endpoint":1,"LinkQuality":66}}}')
| mv-expand humidity = parse_json(substring(T, 9)).ZbReceived.["0x03FA"].Humidity
| mv-expand device = parse_json(substring(T, 9)).ZbReceived.["0x03FA"].Device
| mv-expand name = parse_json(substring(T, 9)).ZbReceived.["0x03FA"].Name
| mv-expand battery = parse_json(substring(T, 9)).ZbReceived.["0x03FA"].Battery
| mv-expand temperature = parse_json(substring(T, 9)).ZbReceived.["0x03FA"].Temperature

Quick explanation: extract the device ID key with bag_keys(), index back into the bag with it, then unpack the inner object with bag_unpack():
print T = dynamic('SENSOR = {"ZbReceived":{"0x03FA":{"Device":"0x03FA","Name":"2_Back_Bedroom","Humidity":71.66,"Endpoint":1,"LinkQuality":66}}}')
| parse tostring(T) with "SENSOR = " sensor:dynamic
| project device = sensor.ZbReceived[tostring(bag_keys(sensor.ZbReceived)[0])]
| evaluate bag_unpack(device)
Output:

+--------+----------+----------+-------------+----------------+
| Device | Endpoint | Humidity | LinkQuality | Name           |
+--------+----------+----------+-------------+----------------+
| 0x03FA | 1        | 71.66    | 66          | 2_Back_Bedroom |
+--------+----------+----------+-------------+----------------+
P.S.
For clarity, the line with the project operator could be replaced with the following 2 lines:
| extend device_id = tostring(bag_keys(sensor.ZbReceived)[0]) // e.g., 0x03FA
| project device = sensor.ZbReceived[device_id]
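The logic of the query translates to any language with dynamic JSON access: read the unknown key first, then index back in with it. A small Python sketch of the same idea (variable names are mine), which works unchanged for all 8 sensors because the key is never hard-coded:

```python
import json

# Copy of the syslog Message payload from the question;
# the device ID ("0x03FA") is itself the JSON key.
raw = 'SENSOR = {"ZbReceived":{"0x03FA":{"Device":"0x03FA","Name":"2_Back_Bedroom","Humidity":71.66,"Endpoint":1,"LinkQuality":66}}}'

sensor = json.loads(raw[len("SENSOR = "):])  # strip the prefix, like the parse step
zb = sensor["ZbReceived"]
device_id = next(iter(zb))                   # first key, like bag_keys(...)[0]
device = zb[device_id]                       # index back in with the key
print(device_id, device["Name"], device["Humidity"])
```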


Scenario Outline: Using Whitelists & Ranges?

I have been building up a Cucumber automation framework and have a number of components to test. I have used Scenario Outlines to capture various values and their expected responses.
What I see as a problem:
I have to specify every single type of input data and the error message to go with it. From the example Scenario Outline below you can see I have certain numbers that are all expected to return one message. If the input does not equal any of these values, an error message is returned:
Scenario Outline: Number is or is not valid
  Given I send an event with the "Number" set to <num>
  Then I will receive the following <message>

  Examples:
    | num   | message           |
    | 0     | "Processed event" |
    | 1     | "Processed event" |
    | 2     | "Processed event" |
    | 3     | "Processed event" |
    | 4     | "Processed event" |
    | 5     | "Processed event" |
    | 6     | "Message failed"  |
    | -1    | "Message failed"  |
    | "One" | "Message failed"  |
What I would like to do:
I would basically like to have a "whitelist" of good data defined in the Scenario Outline, and for any other input value it should return the expected error message. Like the following:
Scenario Outline: Number is or is not valid
  Given I send an event with the "Number" set to <num>
  Then I will receive the following <message>

  Examples:
    | num           | message           |
    | 0-5           | "Processed event" |
    | Anything else | "Message failed"  |
Is this possible with the code behind it? As you can see, it would have the benefit of making an automation suite far more concise and maintainable. If so please let me know, keen to discuss.
Thanks!
Kirsty
Cucumber is a tool to support BDD. This means that it works really well when you have to communicate about behavior. But this particular problem is going towards validating the properties of the event validator, i.e. property-based testing. So it might be worth splitting the test strategy accordingly.
It appears there is a rule that valid events are processed and invalid events are rejected. This is something you could test with Cucumber. For example:
Feature: Events

  This system accepts events. Events are JSON messages.
  Examples of well known valid and invalid JSON messages
  can be found in the supplementary documentation.

  Scenario: The system accepts valid events
    When a well known valid event is sent
    Then the system accepts the event
    And responds with "Processed event"

  Scenario: The system rejects invalid events
    When a well known invalid event is sent
    Then the system rejects the event
    And responds with "Message failed"
It also appears there is a rule that valid events have a field "Number" set to a value between 0-5. And since this sounds like a JSON object, I'm guessing the strings "0", "1", "2", "3", "4", "5" are also valid. Anything else is invalid.
A good way to test this exhaustively is by using a property-based testing framework, for example JQwik. Given a description of a set of either valid or invalid values, it will randomly try a few of them. A simplified example:
package my.example.project;

import net.jqwik.api.*;

import static org.assertj.core.api.Assertions.assertThat;

class ValidatorProperties {

    @Provide
    Arbitrary<Object> validValues() {
        Arbitrary<Integer> validNumbers = Arbitraries.integers().between(0, 5);
        Arbitrary<String> validStrings = validNumbers.map(Object::toString);
        return Arbitraries.oneOf(validNumbers, validStrings);
    }

    @Provide
    Arbitrary<Object> invalidValues() {
        Arbitrary<Integer> invalidNumbers = Arbitraries.oneOf(
            Arbitraries.integers().lessOrEqual(-1),
            Arbitraries.integers().greaterOrEqual(6)
        );
        Arbitrary<String> invalidStrings = invalidNumbers.map(Object::toString);
        return Arbitraries.oneOf(
            invalidNumbers,
            invalidStrings,
            Arbitraries.just(null)
        );
    }

    @Property
    void accepts0To5(@ForAll("validValues") Object value) {
        Validator validator = new Validator();
        assertThat(validator.isValid(value)).isTrue();
    }

    @Property
    void rejectsAnythingElse(@ForAll("invalidValues") Object value) {
        Validator validator = new Validator();
        assertThat(validator.isValid(value)).isFalse();
    }

    static class Validator {
        boolean isValid(Object event) {
            return event != null && event.toString().matches("^[0-5]$");
        }
    }
}
Split this way, the Cucumber tests describe how the system should respond to valid and invalid events, while the JQwik tests describe what the properties of a valid and an invalid event are. This allows much more clarity on the first and greater fidelity on the second.
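The same valid/invalid split can also be sketched without a framework. Below is a rough Python stand-in (all names are mine; is_valid mirrors the hypothetical validator from the Java example) that randomly samples values from each pool, which is the core idea a property-based framework automates:

```python
import random

def is_valid(event):
    # Mirrors the hypothetical Java validator: accepts ints 0-5 and "0"-"5".
    return event is not None and str(event) in {"0", "1", "2", "3", "4", "5"}

# Pools describing the valid and invalid value sets; a framework like
# JQwik or Hypothesis would generate these values for you.
valid_pool = list(range(6)) + [str(n) for n in range(6)]
invalid_pool = list(range(-100, 0)) + list(range(6, 100)) + ["One", None]

# Randomly sample each pool and assert the expected outcome.
for _ in range(200):
    assert is_valid(random.choice(valid_pool))
    assert not is_valid(random.choice(invalid_pool))
```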

How do you write an array of numbers to a csv file?

let mut file = Writer::from_path(output_path)?;
file.write_record([5.34534536546, 34556.456456467567567, 345.56465456])?;
Produces the following error:
error[E0277]: the trait bound `{float}: AsRef<[u8]>` is not satisfied
--> src/main.rs:313:27
|
313 | file.write_record([5.34534536546, 34556.456456467567567, 345.56465456])?;
| ------------ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ the trait `AsRef<[u8]>` is not implemented for `{float}`
| |
| required by a bound introduced by this call
|
= help: the following implementations were found:
<&T as AsRef<U>>
<&mut T as AsRef<U>>
<Arc<T> as AsRef<T>>
<Box<T, A> as AsRef<T>>
and 44 others
note: required by a bound in `Writer::<W>::write_record`
--> /home/mlueder/.cargo/registry/src/github.com-1ecc6299db9ec823/csv-1.1.6/src/writer.rs:896:12
|
896 | T: AsRef<[u8]>,
| ^^^^^^^^^^^ required by this bound in `Writer::<W>::write_record`
Is there any way to use the csv crate with numbers instead of structs or characters?
Only strings or raw bytes can be written to a file; if you try to give it something else, it isn't sure how to handle the data (as @SilvioMayolo mentioned). You can map your float array to one of strings, and then you will be able to write the string array to the file.
let float_arr = [5.34534536546, 34556.456456467567567, 345.56465456];
let string_arr = float_arr.map(|e| e.to_string());
file.write_record(&string_arr)?;
This can obviously be combined into one line without the extra variable, but it is a little easier to see the extra step we need to take if it is split apart.

How can I match these regular expressions?

I have URLs stored in a database. From a given URL, I have to fetch data from the database, but I'm confused about how to identify which stored URL it corresponds to, since parts of the user's URL are dynamic, like text or a number.
example:
/user should match the url /user
/user/1 should match the url /user/{id}
/user/name/johndoe should match the url /user/name/{name}
/user/1/johndoe should match the url /user/{id}/{name}
I have tried the approach below; it works for /user/1, i.e. with integers, but I cannot make it work with strings, because there's no way to identify the string parameters. Or is there some other way, maybe a MySQL workaround?
Goal: fetch the content from the database where a given URL string matches the url column. Note: {id}, {name} and {type} are dynamic values.
Controller:
$sanitizedUrl = preg_replace('/\/*\d{1,}/','/{id}', '/user/name/1');
UrlContent::where('url', $sanitizedUrl)->first()->content;
This code can only handle URLs that contain numbers. Example: /user/12/type/12 is replaced with /user/{id}/type/{id}.
Database: url_contents

+---------------------+--------------------------------+
| url                 | content                        |
+---------------------+--------------------------------+
| /user/name/{id}     | matches if url is /user/name/1 |
| /user/{id}/{name}   | if url is /user/1/john         |
| /user/{type}/{name} | if url is /user/{type}/{name}  |
+---------------------+--------------------------------+
You should do it the other way around: turn each stored URL template into a regular expression and match the incoming URL against it:

function getUrlContent($url)
{
    $replacements = [
        '{id}'   => '\d+',
        '{name}' => '\w+',
    ];
    $urlContents = UrlContent::all();
    foreach ($urlContents as $urlContent) {
        // Anchor the pattern so a template only matches the whole URL.
        if (preg_match('~^' . strtr($urlContent->url, $replacements) . '$~', $url)) {
            return $urlContent->content;
        }
    }
    return null;
}

$url = '/user/1/johndoe';
var_dump(getUrlContent($url));
You may also want to cache the built regular expressions for a while to avoid the MySQL round trip on every request. If the url_contents table grows significantly, you may have to add another column containing the regular expression itself:
+-----------------+----------------+------------+
| url | url_regex | content |
+-----------------+----------------+------------+
| /user/name/{id} | /user/name/\d+ | ......... |
+-----------------+----------------+------------+
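The same reverse-matching loop ports directly to other languages. Here is a rough Python sketch of the idea (URL_CONTENTS is a hypothetical in-memory stand-in for the url_contents table):

```python
import re

# Hypothetical stand-in for the url_contents table rows.
URL_CONTENTS = [
    ("/user/name/{id}",   "content for /user/name/{id}"),
    ("/user/{id}/{name}", "content for /user/{id}/{name}"),
]

REPLACEMENTS = {"{id}": r"\d+", "{name}": r"\w+"}

def get_url_content(url):
    # Turn each stored template into a regex and test the incoming URL
    # against it, mirroring the strtr/preg_match loop above.
    for template, content in URL_CONTENTS:
        pattern = template
        for placeholder, regex in REPLACEMENTS.items():
            pattern = pattern.replace(placeholder, regex)
        if re.fullmatch(pattern, url):  # fullmatch anchors both ends
            return content
    return None

print(get_url_content("/user/1/johndoe"))  # matches /user/{id}/{name}
```

Using re.fullmatch keeps the patterns anchored, so a short template cannot accidentally match inside a longer URL.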

Kusto KQL: reference the first object in a JSON array

I need to grab the value of the first entry in a json array with Kusto KQL in Microsoft Defender ATP.
The data format looks like this (anonymized), and I want the value of "UserName":
[{"UserName":"xyz","DomainName":"xyz","Sid":"xyz"}]
How do I split or in any other way get the "UserName" value?
In WDATP/MSTAP, for the "LoggedOnUsers" type of arrays, you want mv-expand (multi-value expand) in conjunction with parsejson.
parsejson will turn the string into JSON, and mv-expand will expand it into LoggedOnUsers.UserName, LoggedOnUsers.DomainName, and LoggedOnUsers.Sid:
DeviceInfo
| mv-expand parsejson(LoggedOnUsers)
| project DeviceName, LoggedOnUsers.UserName, LoggedOnUsers.DomainName
Keep in mind that if the packed field has multiple entries (like DeviceNetworkInfo's IPAddresses field often does), the entire row will be expanded once per entry, so a row for a machine with 3 entries in IPAddresses will be duplicated 3 times, with each different expansion of IPAddresses:
DeviceNetworkInfo
| where Timestamp > ago(1h)
| mv-expand parsejson(IPAddresses)
| project DeviceName, IPAddresses.IPAddress
to access the first entry's UserName property you can do the following:
print d = dynamic([{"UserName":"xyz","DomainName":"xyz","Sid":"xyz"}])
| extend result = d[0].UserName
to get the UserName for all entries, you can use mv-expand/mv-apply:
print d = dynamic([{"UserName":"xyz","DomainName":"xyz","Sid":"xyz"}])
| mv-apply d on (
project d.UserName
)
Thanks for the reply, but the proposed solution didn't work for me. Instead I found the following solution:
project substring(split(split(LoggedOnUsers,',',0),'"',4),2,9)
The output of this is the UserName value.
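For comparison, the same first-element access outside KQL; a minimal Python sketch using the anonymized payload from the question:

```python
import json

# Anonymized LoggedOnUsers payload from the question.
raw = '[{"UserName":"xyz","DomainName":"xyz","Sid":"xyz"}]'

users = json.loads(raw)  # equivalent of parse_json()
first_user = users[0]    # equivalent of d[0]
print(first_user["UserName"])
```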

Fail to convert a record with union types to JSON with WebSharper

I'm using WebSharper to convert records/unions to JSON to post to an API. These are the declarations:
[<NamedUnionCases "Icon">]
type IconName =
| Aaabattery
| Abacus
[<NamedUnionCases>]
type Size =
| H1
| H2
| H3
| H4
| H5
type Icon = {title:string; name:IconName; size:Size; updated:DateTime; }
The value I'm encoding looks like this (as printed):
Icon {title = "hello";
      name = Aaabattery;
      size = H1;
      updated = 11/06/2015 3:18:29 p. m.;}
This is how I encode:
let ToJString (jP:Core.Json.Provider, msg:Widget) =
let enc = jP.GetEncoder<Widget>()
enc.Encode msg
|> jP.Pack
|> Core.Json.Stringify
printfn "D:"
let j = Core.Json.Provider.Create()
let data = ToJString(j, widget)
printfn "D: %A" data
The program never reaches the last printfn "D: %A" data. However, if I turn the unions into enums or remove them, it works. What is missing?
[<NamedUnionCases>] without an argument relies on the names of the arguments to disambiguate between cases. For example, with the following type:
[<NamedUnionCases>]
type Foo =
| Case1 of x: int
| Case2 of y: int
values are serialized as either {"x":123} or {"y":123}, so deserialization is possible by checking which fields are present. But in your type Size, all cases have zero arguments, so they would essentially all be serialized as {}, and the deserializer wouldn't know which case to choose.
There are several solutions:
If you want to serialize these values as objects with a field telling which case it is, use [<NamedUnionCases "fieldName">] to get e.g. {"fieldName":"H1"}.
If you want to serialize them as constant numbers or strings, use the Constant attribute like this:
type Size =
| [<Constant 1>] H1
| [<Constant 2>] H2
| [<Constant 3>] H3
| [<Constant 4>] H4
| [<Constant 5>] H5
This way, for example, H1 will be serialized simply as 1.
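As a rough cross-language analogy (not WebSharper output), the Constant approach corresponds to making each case a fixed integer, e.g. a Python IntEnum, so a member serializes as its number:

```python
import json
from enum import IntEnum

class Size(IntEnum):  # analogue of tagging each case with [<Constant n>]
    H1 = 1
    H2 = 2
    H3 = 3
    H4 = 4
    H5 = 5

# IntEnum members are ints, so H1 serializes simply as 1.
print(json.dumps({"title": "hello", "size": Size.H1}))
```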