Perform a query with regular expressions in Orion-LD Context Broker - fiware

This is my request:
http://localhost:1026/ngsi-ld/v1/entities?q=measurementVariable~=*temperature
This is the answer:
{
"type": "https://uri.etsi.org/ngsi-ld/errors/BadRequestData",
"title": "ngsi-ld query language: after match operator must come a RegExp",
"detail": "Variable"
}
I have tried several regular expressions and get the same error every time. What am I doing wrong?

Something like:
curl -L -X GET 'http://localhost:1026/ngsi-ld/v1/entities/?q=name~=.*'
should work; * on its own is not a valid regex, but .*temperature is.
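To see why the bare asterisk fails: * is a repetition operator and needs something to repeat, while .*temperature is the well-formed way to say "anything ending in temperature". Any regex engine draws the same line; a quick demonstration with jq (assuming only that the ~= operator expects a normal regex, as the error message says):

```shell
# '.*temperature' is a valid regex and matches:
jq -n '"measured temperature" | test(".*temperature")'
# a bare '*' has nothing to repeat, so the regex engine rejects it:
jq -n '"measured temperature" | try test("*") catch "not a valid regex"'
```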

Orion-LD had a bug in the regex processing that was fixed just a few weeks ago. My guess is you need to pick up a newer version of the broker.

Related

How do I write Grok patterns to parse JSON data?

I was tasked to filter through data using the elasticsearch-kibana stack. My data comes in JSON format like so,
{
"instagram_account": "christywood",
"full_name": "Christy Wood",
"bio": "Dog mom from Houston",
"followers_count": 1000,
"post_count": 1000,
"email": christy#gmail.com,
"updated_at": "2022-07-18 02:06:29.998639"
}
However, when I try to import the data into Kibana, I get an error that states my data does not match the default GROK pattern.
I tried writing my own GROK, using the list of acceptable syntaxes in this repo, but the debugger always parses the key rather than the actual desired value. For instance, the GROK pattern
%{USERNAME:instagram_account}
returns this undesired data structure
{
"instagram_account": "instagram_account"
}
I've tried a couple of other syntax options, but my debugger always grabs the key and not the actual value. No wonder Elasticsearch cannot make sense of my data!
I've searched for examples, but I am unable to find any that use JSON data. To be fair, I'm very unfamiliar with Grok and would like to understand what %, \n, and other delimiters mean in this context.
Please tell me what I'm doing wrong, and point me in the right direction. Thank you!
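For what it's worth: Grok matches a pattern anywhere in the text unless it is anchored by surrounding literals, so %{USERNAME:instagram_account} happily matches the key string instagram_account itself. To capture the value instead, you would spell out the literal context around it, along these lines (a sketch based on the sample line above):

```
"instagram_account": "%{USERNAME:instagram_account}"
```

That said, since this input is already JSON, the usual approach is to skip Grok entirely and parse it as JSON, e.g. with the json filter in Logstash or the json processor in an Elasticsearch ingest pipeline, which turns the keys into fields directly.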

AWS CLI create multiple users

I am trying to create multiple users using the AWS CLI. This is just an exercise in learning.
I get different errors depending on what changes I make. After a bit of searching, I changed the encoding and used file://, which eliminated the "Invalid JSON received" errors.
I've tried this with a json file that is ASCII encoded and only one user.
aws iam create-user --cli-input-json file://aws-ec2.json --profile MyProf
I get:
Parameter validation failed:
Invalid length for parameter Path, value: 0, valid range: 1-inf
Invalid length for parameter PermissionsBoundary, value: 0, valid range: 20-inf
Invalid length for parameter Tags[0].Key, value: 0, valid range: 1-inf
If I add another user by changing the UserName line to "MyEC2","SecondEC2", it just gives me "Invalid JSON received".
Here's the JSON I am using:
{
"Path": "",
"UserName": "MyEC2",
"PermissionsBoundary": "",
"Tags": [
{
"Key": "",
"Value": ""
}
]
}
I know I'm doing something wrong, I just can't figure out what it is!
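For what it's worth, the validation errors themselves point at the cause: the file sends empty strings for Path, PermissionsBoundary, and the tag key, and each of those has a minimum length (1, 20, and 1 characters respectively). Optional parameters should be omitted rather than passed as "". A minimal sketch of a working input file (the tag key and value here are made-up examples):

```json
{
    "UserName": "MyEC2",
    "Tags": [
        {
            "Key": "Purpose",
            "Value": "learning"
        }
    ]
}
```

As for multiple users: create-user creates exactly one user per call, which is why "MyEC2","SecondEC2" comes back as Invalid JSON received. A shell loop over the names, e.g. for u in MyEC2 SecondEC2; do aws iam create-user --user-name "$u" --profile MyProf; done, is the usual workaround.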
Thank you John. No one had anything to add. A friend of mine, networking person, suggested that I copy the command into Excel, concatenate the columns and then copy and paste the commands into the CLI.
I was hoping for a different response, but as I wrote in my question, this was just an exercise in learning. Your answer is an answer, and apparently a correct answer.
Thanks again for your time.

Parse jolokia output with jq

I have an Apache Artemis broker, from which I can get some management information through Jolokia. The response is in JSON format; I also have jq to do "JSON stuff" with it.
curl -s -X GET --url 'http://localhost:8161/console/jolokia/read/org.apache.activemq.artemis:*'
This works; and provides a json response.
I want to make a kind of generic script to check some values from this response; hence a few questions:
(For ease of testing I stored the response in a file broker.json, normally I would just pipe the output from curl to jq or store it in a variable, depending on how often jq has to be called)
One of the keys I want to query I can get like this:
jq '."value"."org.apache.activemq.artemis:broker=\"broker1\""' broker.json
However, in a more generic script, I won't know the name of the broker (which is "broker1" here); is there some way I can wildcard the key like this: "org.apache.activemq.artemis:broker=\"*\"" ? My attempts so far have not given me anything
The second question is a bit harder I think.
In the response there is a field that can be found by querying .request.timestamp; the value is in seconds since the epoch.
On the broker are queues, and some of them might have messages; I want to find those that have messages older than, say, 5 minutes.
I can find one such object with this key:
jq '."value"."org.apache.activemq.artemis:address=\"my.queue\",broker=\"broker1\",component=addresses,queue=\"my.queue\",routing-type=\"anycast\",subcomponent=queues"' broker.json
This object contains two keys I can use for this purpose:
- FirstMessageAge: age in ms
- FirstMessageTimestamp: timestamp in milliseconds since epoch.
How would I query for this? Ideally I'd like to get the answer "my.queue has messages older than X"; where my.queue can also be obtained from having the key "Address" or "Name"
Artemis uses Address and Queues as separate entities; for all practical purposes here, both have the same name.
I am trying to make a (simple) script that can periodically monitor the broker health (not too many messages on queues for too long, queues having consumers, stuff like that), which can all be gotten from this single REST call. I think that with the answers to the above questions I should be able to figure out how to get this.
is there some way I can wildcard the key like this:
"org.apache.activemq.artemis:broker=\"*\""
The best way to match wildcards on key names is by using with_entries or to_entries. Since you have not provided an example in accordance with the MCVE guidelines, it's not clear exactly how you'd do so, but by analogy with the example you give, you could start with:
.value
| to_entries[]
| select(.key | test("^org.apache.activemq.artemis:broker=\".*\""))
| .value
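For the second question (messages older than five minutes), the same to_entries trick can be combined with the timestamps. A sketch, using a stripped-down stand-in for broker.json with shortened key names; only .request.timestamp (seconds), FirstMessageTimestamp (milliseconds), and Name are taken from the question:

```shell
# Stand-in for the real jolokia response (normally: curl ... > broker.json).
cat > broker.json <<'EOF'
{
  "request": { "timestamp": 1000000 },
  "value": {
    "org.apache.activemq.artemis:queue=\"my.queue\",subcomponent=queues":
      { "Name": "my.queue", "FirstMessageTimestamp": 999600000 },
    "org.apache.activemq.artemis:queue=\"fresh\",subcomponent=queues":
      { "Name": "fresh", "FirstMessageTimestamp": 999900000 }
  }
}
EOF

# Flag queues whose oldest message is more than 5 minutes (300000 ms)
# behind the response timestamp.
old=$(jq -r '
  (.request.timestamp * 1000) as $now            # seconds -> milliseconds
  | .value | to_entries[]
  | select(.key | test("subcomponent=queues"))   # only queue beans
  | .value
  | select(.FirstMessageTimestamp != null and .FirstMessageTimestamp > 0)
  | select(($now - .FirstMessageTimestamp) > 300000)
  | "\(.Name) has messages older than 5 minutes"
' broker.json)
echo "$old"
```

Note that this trusts the broker's own .request.timestamp rather than the local clock, which keeps the comparison immune to clock skew between the monitoring host and the broker.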

Process jsonpath postprocessor result

I run a request in JMeter whose response looks something like this:
{
"Items":[
{
"Available":3,
"Info":[
{
"Sample1":1,
"Sample2":33,
"Sample3":50,
"Sample4":"asd",
"Sample5":88,
"Sample6":null,
"Sample7":null,
"Sample8":null,
"Sample9":35,
"Sample0":35
}
]
}
]
}
And my goal is to go through the list of items (in my sample there is only one but there can be more) and if 'Available' is greater than 0 then save some values from 'Info' into a variable to use them for the next request.
Right now my solution is that I added JSON path postprocessor and there I separate the values like this:
$.Items[?(@.Available > 0)].Info[0].Sample1[0];
$.Items[?(@.Available > 0)].Info[0].Sample2[0];
$.Items[?(@.Available > 0)].Info[0].Sample3[0]...
but obviously this is not a very beautiful solution and I also think that this will take too much resource if I have to do it many times.
So my question is: is it somehow possible to extract the
$.Items[?(@.Available > 0)].Info[0]
element and then process it to get the fields I need?
I believe you can do it in a single JSON Path query using an asterisk - * - which stands for the wildcard character, like:
$.Items[?(@.Available > 0)].Info[0].*
References:
JSON Path Operators
Advanced Usage of the JSON Path Extractor in JMeter
After I solved the problem for myself, I completely forgot about this question. But I'll post my solution now in case someone else is struggling with the same problem.
I'm sure there are ways to solve this problem within JMeter, but manipulating JSON with regular expressions got way too complicated for me, and it also felt a little ridiculous given that there are such nice tools for this. So I wrote a little Java program using Gson, packed it into a .jar, and added it to JMeter.

Defining a RegEx Type in JSON Schema

I am looking for an elegant way of checking that a value is a valid regular expression in a JSON schema. So far I have been content with requiring a string type:
{
"pattern": { "type": "string" }
}
I would like to make my check more strict and see that pattern is a valid regular expression:
{
"definitions": {
"regex": {
??? #not another regex -- that was already disproved
}
},
"pattern": { "type": "regex" }
}
By valid I mean checking that there are no syntax errors in the regular expression, such as open parentheses and so on.
I thought for a while that one of the possible solutions was a regex that would match a regex, but I was shown this was already discussed for instance in Regexp that matches valid regexps, turning out that such an approach was impossible. What other ways are there? I can think of a few directions this could lead, but could not find any information. Can I have the schema validator, for instance, somehow compile the regex? Do some validators support a regex primitive type unofficially? Is it in line to become a primitive type in JSON Schema v5, or some standard extension? Can I "shell out" of the schema and make the check? Or anything else?
Then I'd have to agree with what's said in Regexp that matches valid regexps - It's not possible.
However, I guess you could limit the complexity of the allowed regex, e.g. in its most limited form \.\*, only allowing the regex .*, up to more complex, but still simple, constructs. Like
^(?:(?:[.\w\s]|\\w|\\d|\\s)(?:\*|\+|{\d+(?:,\d*)?})?)+$
allowing things like .* as well as prefix\d+postfix, \w{1,3}\d+, and so on...
(This was meant as a comment, but could be a possible answer, so... ;)
Regards
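Worth adding: newer JSON Schema drafts address this question directly. Draft-07 defines "regex" as one of the values for the format keyword, meaning the string should be a valid ECMA-262 regular expression. A sketch of how that would look in the schema above:

```json
{
    "definitions": {
        "regex": { "type": "string", "format": "regex" }
    },
    "properties": {
        "pattern": { "$ref": "#/definitions/regex" }
    }
}
```

Note that format is annotation-only by default, so whether malformed regexes are actually rejected depends on the validator and how it is configured; many validators can be told to enforce format, which effectively gives you the "compile the regex and see" check.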