Token not allowed in path expression - Reading configuration file in Play Framework - JUnit

I am writing JUnit test cases in Play. I want to read certain configurations from a configuration file, so I am loading that file programmatically:
private Configuration additionalConfigurations;

Config additionalConfig = ConfigFactory.parseFile(new File("conf/application.conf"));
Config resolConfig = additionalConfig.resolve(ConfigResolveOptions.noSystem());
additionalConfigurations = new Configuration(resolConfig);

running(fakeApplication(additionalConfigurations.asMap()), new Runnable() {
    @Override
    public void run() {
        // test code
    }
});
While running my test case using "play test" I am getting the error: "Token not allowed in path expression: '[' (you can double-quote this token if you really want it here)". The configuration entry that triggers this error is:
Mykey.a.b.c"[]".xyz = "value"
I have double-quoted the square brackets, but I am still getting the error.

After hours of research I finally found out why this throws an exception. When I do
Config additionalConfig = ConfigFactory.parseFile(new File("conf/application.conf"));
additionalConfig.resolve(ConfigResolveOptions.noSystem());
the configuration file is parsed with the double quotes taken into account, and so no exception is thrown. However, parsing does one more thing: it strips those double quotes. The map we get after parsing, which we pass to
fakeApplication(additionalConfigurations.asMap())
now has a key like Mykey.a.b.c[].xyz
Play then parses this map again, and now that the double quotes have been removed, it throws the exception. So the solution is:
Mykey."\""a.b.c"[]"\"".xyz = "value"
With this, the first parse produces the string Mykey."a.b.c[]".xyz, so the second parse goes through without throwing any exception.
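To make the two-stage parsing concrete, here is a minimal, self-contained sketch against the same Typesafe Config API (the class name is made up, and I quote the whole path element as "c[]", which HOCON should treat the same as c"[]"):

import com.typesafe.config.Config;
import com.typesafe.config.ConfigException;
import com.typesafe.config.ConfigFactory;

public class QuotedPathDemo {
    public static void main(String[] args) {
        // First parse: the '[' is legal because that path element is double-quoted.
        Config ok = ConfigFactory.parseString("Mykey.a.b.\"c[]\".xyz = \"value\"");
        System.out.println(ok.getString("Mykey.a.b.\"c[]\".xyz")); // value

        // Second parse: this is what the key looks like once the quotes have
        // been stripped, and it fails with the exact error from the question.
        try {
            ConfigFactory.parseString("Mykey.a.b.c[].xyz = \"value\"");
        } catch (ConfigException e) {
            System.out.println(e.getMessage()); // Token not allowed in path expression: '['
        }
    }
}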

Related

Why are the double backslashes in this parameter definition in Terraform required?

While struggling to write a Terraform module to deploy a Helm chart, I was getting:
│ Error: YAML parse error on external-dns/templates/serviceaccount.yaml: error unmarshaling JSON: while decoding JSON: json: cannot unmarshal object into Go struct field .metadata.annotations of type string
with a resource definition like this one:
resource "helm_release" "external_dns" {
name = "externaldns"
namespace = var.external_dns_namespace
repository = "https://charts.bitnami.com/bitnami"
chart = "external-dns"
version = "5.3.0"
set {
name = "serviceAccount.annotations.eks.amazonaws.com/role-arn"
value = resource.aws_iam_role.external_dns_role.arn
}
}
Then I found a public repository with a similar module, https://github.com/lablabs/terraform-aws-eks-external-dns/blob/master/main.tf, and saw that it has the last parameter defined as
set {
  name  = "serviceAccount.annotations.eks\\.amazonaws\\.com/role-arn"
  value = aws_iam_role.external_dns[0].arn
}
I tried adding those double backslashes (\\) and everything works! Now I would like to understand why these double backslashes are required before the last two dots but not before the other two.
I understand that, in Terraform, a double backslash means a literal backslash... but I cannot understand why it would be required there.
This is what I am trying to put into the Terraform module.
Any help with an explanation for this issue will be appreciated :)
in name = "serviceAccount.annotations.eks\\.amazonaws\\.com/role-arn" you want to define 3 groups, that are separated by dots:
serviceAccount -> annotations -> eks.amazonaws.com/role-arn
Since your third group happens to contain dots, you successfully found out that you must escape the dot characters in order to preserve proper structure.
Without escaping, the string would somehow mean
serviceAccount -> annotations -> eks -> amazonaws-> com/role-arn, which makes no sense here
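For illustration only, here is a small Java sketch, not Helm's actual parser, that splits a set path on unescaped dots to show the grouping (class name is made up):

public class SetPathDemo {
    public static void main(String[] args) {
        // In Java source, "\\\\" in the regex is one literal backslash; the
        // lookbehind matches only dots NOT preceded by a backslash.
        String path = "serviceAccount.annotations.eks\\.amazonaws\\.com/role-arn";
        for (String group : path.split("(?<!\\\\)\\.")) {
            // Drop the escape characters inside each group.
            System.out.println(group.replace("\\.", "."));
        }
        // Prints:
        // serviceAccount
        // annotations
        // eks.amazonaws.com/role-arn
    }
}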

groovy "MissingMethodException" RESTAPI call

I am trying to access data from a REST API using Groovy code, where I am getting the error below:
groovy.lang.MissingMethodException: No signature of method: java.lang.String.call() is applicable for argument types: () values: []
Possible solutions: wait(), chars(), any(), wait(long), take(int), tap(groovy.lang.Closure)
at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.unwrap(ScriptBytecodeAdapter.java:70)
at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.invokeMethodN(ScriptBytecodeAdapter.java:182)
at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.invokeClosure(ScriptBytecodeAdapter.java:586)
The error mostly points to the following lines of code:
String requestString = getRequestStringPrefix() + sb.toString()
readHistory(authToken,ricMap,outFile)
writeInstFile(outFile)
I am really new to Groovy coding and do not understand the exact cause of the issue or how to resolve it in the code.
With getRequestStringPrefix() you are calling a method of that name or, as a Groovy shortcut, the call() method on the underlying object, as if it were written getRequestStringPrefix.call().
I'm not sure what your intention was, but the line:
String requestString = getRequestStringPrefix() + sb.toString()
should look like
String requestString = getRequestStringPrefix + sb.toString()
because the variable getRequestStringPrefix (a strange name for a variable) is defined as a String further down:
String getRequestStringPrefix = """{
"ExtractionRequest": {..."""

How to split the data of an ObjectNode in Apache Flink

I'm using Flink to process the data coming from some data source (such as Kafka, Pravega, etc.).
In my case, the data source is Pravega, which provided me a flink connector.
My data source is sending me some JSON data as below:
{"key": "value"}
{"key": "value2"}
{"key": "value3"}
...
...
Here is my piece of code:
PravegaDeserializationSchema<ObjectNode> adapter =
        new PravegaDeserializationSchema<>(ObjectNode.class, new JavaSerializer<>());

FlinkPravegaReader<ObjectNode> source = FlinkPravegaReader.<ObjectNode>builder()
        .withPravegaConfig(pravegaConfig)
        .forStream(stream)
        .withDeserializationSchema(adapter)
        .build();

final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
DataStream<ObjectNode> dataStream = env.addSource(source).name("Pravega Stream");

dataStream.map(new MapFunction<ObjectNode, String>() {
    @Override
    public String map(ObjectNode node) throws Exception {
        return node.toString();
    }
})
.keyBy("word") // ERROR
.timeWindow(Time.seconds(10))
.sum("count");
As you see, I used the FlinkPravegaReader and a proper deserializer to get the JSON stream coming from Pravega.
Then I try to transform the JSON data into Strings, key them with keyBy, and count them.
However, I get an error:
The program finished with the following exception:
Field expression must be equal to '*' or '_' for non-composite types.
org.apache.flink.api.common.operators.Keys$ExpressionKeys.<init>(Keys.java:342)
org.apache.flink.streaming.api.datastream.DataStream.keyBy(DataStream.java:340)
myflink.StreamingJob.main(StreamingJob.java:114)
It seems that keyBy threw this exception.
Well, I'm not a Flink expert so I don't know why. I've read the source code of the official WordCount example. In that example, there is a custom splitter which is used to split the String data into words.
So I'm wondering whether I need to use some kind of splitter in this case too. If so, what kind of splitter should I use? Can you show me an example? If not, why did I get such an error and how can I solve it?
I guess you have read the documentation about how to specify keys:
Specify keys
The example code uses keyBy("word") because word is a field of the POJO type WC.
// some ordinary POJO (Plain Old Java Object)
public class WC {
    public String word;
    public int count;
}

DataStream<WC> words = // [...]
DataStream<WC> wordCounts = words.keyBy("word").window(/* window specification */);
In your case, you put a map operator before keyBy, and the output of that map operator is a String. So there is obviously no word field in your case. If you actually want to group this string stream, you need to write it like this: .keyBy(String::toString)
Or you can even implement a customized KeySelector to generate your own key, as in the sketch below.
Customized Key Selector
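For example, a minimal sketch of such a KeySelector, assuming the DataStream<String> produced by the map operator in the question (variable names are illustrative):

import org.apache.flink.api.java.functions.KeySelector;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.KeyedStream;

// 'lines' stands for the output of your map(ObjectNode -> String) operator.
DataStream<String> lines = dataStream.map(node -> node.toString());

KeyedStream<String, String> keyed = lines.keyBy(new KeySelector<String, String>() {
    @Override
    public String getKey(String value) {
        // Use the whole string as the key; if you want to group by a JSON
        // field, parse 'value' here and return that field instead.
        return value;
    }
});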

How to validate if a JSON path exists in JSON

In a given JSON document, how can I validate whether a JSON path exists?
I am using jayway-jsonpath and have the code below:
JsonPath.read(jsonDocument, jsonPath)
The above code can potentially throw the exception below:
com.jayway.jsonpath.PathNotFoundException: No results for path:
$['a.b.c']
In order to mitigate this, I intend to validate that the path exists before trying to read it with JsonPath.read.
For reference I went through the following two documents, but couldn't really find what I want:
http://www.baeldung.com/guide-to-jayway-jsonpath
https://github.com/json-path/JsonPath
While it is true that you can catch the exception, as mentioned in the comments, there is a more elegant way to check whether a path exists without writing try/catch blocks all over the code.
You can use the following configuration option with jayway-jsonpath:
com.jayway.jsonpath.Option.SUPPRESS_EXCEPTIONS
With this option active no exception is thrown. If you use the read method, it simply returns null whenever a path is not found.
Here is an example with JUnit 5 and AssertJ showing how you can use this configuration option, avoiding try / catch blocks just for checking if a json path exists:
@ParameterizedTest
@ArgumentsSource(CustomerProvider.class)
void replaceStructuredPhone(JsonPathReplacementArgument jsonPathReplacementArgument) {
    DocumentContext dc = jsonPathReplacementHelper.replaceStructuredPhone(
            JsonPath.parse(jsonPathReplacementArgument.getCustomerJson(),
                    Configuration.defaultConfiguration().addOptions(Option.SUPPRESS_EXCEPTIONS)),
            "$.cps[5].contactPhoneNumber", jsonPathReplacementArgument.getUnStructuredPhoneNumberType());

    UnStructuredPhoneNumberType unstructRes = dc.read("$.cps[5].contactPhoneNumber.unStructuredPhoneNumber");
    assertThat(unstructRes).isNotNull();

    // this path does not exist, since it should have been deleted.
    Object structRes = dc.read("$.cps[5].contactPhoneNumber.structuredPhoneNumber");
    assertThat(structRes).isNull();
}
You can also create a JsonPath object or ReadContext with a Configuration if you have a use case to check multiple paths.
// Suppress errors thrown by JsonPath and instead return null if a path does not exist in a JSON blob.
Configuration suppressExceptionConfiguration = Configuration
        .defaultConfiguration()
        .addOptions(Option.SUPPRESS_EXCEPTIONS);

ReadContext jsonData = JsonPath.using(suppressExceptionConfiguration).parse(jsonString);

for (int i = 0; i < listOfPaths.size(); i++) {
    String pathData = jsonData.read(listOfPaths.get(i));
    if (pathData != null) {
        // do something
    }
}
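If you need this check in several places, you could wrap it in a small helper. A minimal sketch (class and method names are made up here):

import com.jayway.jsonpath.Configuration;
import com.jayway.jsonpath.JsonPath;
import com.jayway.jsonpath.Option;
import com.jayway.jsonpath.ReadContext;

public class JsonPathChecker {

    private static final Configuration SUPPRESS = Configuration
            .defaultConfiguration()
            .addOptions(Option.SUPPRESS_EXCEPTIONS);

    // Returns true if the path resolves to a non-null value. Note that a path
    // whose value is an explicit JSON null will also be reported as absent.
    public static boolean pathExists(String json, String jsonPath) {
        ReadContext ctx = JsonPath.using(SUPPRESS).parse(json);
        return ctx.read(jsonPath) != null;
    }
}

For example, pathExists("{\"a\":{\"b\":1}}", "$.a.b") returns true, while pathExists("{}", "$.a.b") returns false.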

Getting the line number of the JSON file at which validation failed

I am using json-schema-validator for validating my JSON.
I want to show the line number in the JSON data file where a validation failure occurs, and to present the failure messages in a user-friendly manner.
I get the pointer to the JSON node where the validation failure occurred as follows:
JsonNode jsondatanode = JsonLoader.fromFile(new File("jsondata.json"));
JsonNode jsonschemanode = JsonLoader.fromFile(new File("jsonschema.json"));
final JsonSchemaFactory factory = JsonSchemaFactory.byDefault();
final JsonSchema datastoreschema = factory.getJsonSchema(jsonschemanode);
ProcessingReport report;
report = datastoreschema.validate(jsondatanode);
However, the pointer is inconvenient for locating the JSON object/attribute when the JSON file contains many nodes of the type specified by the pointer.
I got the following validation failure message:
--- BEGIN MESSAGES ---
error: instance value (12) not found in enum (possible values:["true","false","y","n","yes","no",0,1])
level: "error"
schema: {"loadingURI":"#","pointer":"/properties/configuration/items/properties/skipHeader"}
instance: {"pointer":"/configuration/0/skipHeader"}
domain: "validation"
keyword: "enum"
value: 12
enum: ["true","false","y","n","yes","no",0,1]
--- END MESSAGES ---
I want to show a custom message for validation failures, with the line number in the JSON data file that caused the schema validation failure. I know I can access the individual details of the validation report, and I want to produce the custom message as follows:
List<ProcessingMessage> messages = Lists.newArrayList((AbstractProcessingReport) report);
JsonNode reportJson = messages.get(0).asJson();
if (reportJson.get("keyword").toString().equals("enum")) {
    System.out.println("Value " + reportJson.get("value").toString() + " is invalid in " + filepath + " at line " + linenumber);
} else {
    // handle other keywords
    // ...
}
What I don't understand is how I can get that linenumber variable in the above code.
Edit
Now I realize that
instance: {"pointer":"/configuration/0/skipHeader"}
shows which occurrence of skipHeader is the problem; in this case it is the 0th instance of skipHeader inside configuration. However, I still think it would be better to get the line number that ran into the problem.
(library author here)
While it can be done (I have an implementation of JsonParser somewhere which does just that), the problem is that the line/column information will most of the time be irrelevant.
To save bandwidth, JSON sent over the wire is usually on a single line, so the problem remains that you would get, say, "line 1, column 202" without being any the wiser.
I'll probably do this anyway for the next major version, but for 2.2.x it is too late...
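In the meantime, if your JSON data files are pretty-printed on disk (as in the question), one possible workaround is to re-read the file with a plain Jackson streaming parser and record the line of each field you care about. A rough sketch, not part of json-schema-validator (the field name is taken from the report above):

import java.io.File;

import com.fasterxml.jackson.core.JsonFactory;
import com.fasterxml.jackson.core.JsonLocation;
import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.core.JsonToken;

public class LineNumberLookup {
    public static void main(String[] args) throws Exception {
        JsonFactory factory = new JsonFactory();
        try (JsonParser parser = factory.createParser(new File("jsondata.json"))) {
            JsonToken token;
            while ((token = parser.nextToken()) != null) {
                // Print the location of every "skipHeader" field in the file.
                if (token == JsonToken.FIELD_NAME && "skipHeader".equals(parser.getCurrentName())) {
                    JsonLocation loc = parser.getCurrentLocation();
                    System.out.println("skipHeader at line " + loc.getLineNr());
                }
            }
        }
    }
}

Mapping these locations back to the pointers in the ProcessingReport would still require tracking the full path while streaming, but for a file with only a few candidate nodes this is often enough.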