I have the JSON below:
{
"status":"success",
"data":{
"_id":"ABCD",
"CNTL":{"XMN Version":"R3.1.0"},
"OMN":{"dree":["ANY"]},
"os0":{
"Enable":true,"Service Reference":"","Name":"",
"TD ex":["a0.c985.c0"],
"pn ex":["s0.c100.c0"],"i ex":{},"US Denta Treatment":"copy","US Denta Value":0,"DP":{"Remote ID":"","cir ID":"","Sub Options":"","etp Number":54469},"pe":{"Remote ID":"","cir ID":""},"rd":{"can Identifier":"","can pt ID":"","uno":"Default"},"Filter":{"pv":"pass","pv6":"pass","ep":"pass","pe":"pass"},"sc":"Max","dc":"","st Limit":2046,"dm":false},
"os1":{
"Enable":false,"Service Reference":"","Name":"",
"TD ex":[],
"pn ex":[],"i ex":{},"US Denta Treatment":"copy","US Denta Value":0,"DP":{"Remote ID":"","cir ID":"","Sub Options":"","etp Number":54469},"pe":{"Remote ID":"","cir ID":""},"rd":{"can Identifier":"","can pt ID":"","uno":"Default"},"Filter":{"pv":"pass","pv6":"pass","ep":"pass","pe":"pass"},"sc":"Max","dc":"","st Limit":2046,"dm":false},
"ONM":{
"ONM-ALARM-XMN":"Default","Auto Boot Mode":false,"XMN Change Count":0,"CVID":0,"FW Bank Files":[],"FW Bank":[],"FW Bank Ptr":65535,"pn Max Frame Size":2000,"Realtime Stats":false,"Reset Count":0,"SRV-XMN":"Unmodified","Service Config Once":false,"Service Config pts":[],"Skip ot":false,"Name":"","Location":"","dree":"","Picture":"","Tag":"","PHY Delay":0,"Labels":[],"ex":"From OMN","st Age":60,"Laser TX Disable Time":0,"Laser TX Disable Count":0,"Clear st Count":0,"MIB Reset Count":0,"Expected ID":"ANY","Create Date":"2023-02-15 22:41:14.422681"},
"SRV-XMN Values":{},
"nc":{"Name":"ABCD"},
"Alarm History":{
"Alarm IDs":[],"Ack Count":0,"Ack Operator":"","Purge Count":0},"h FW Upgrade":{"wsize":64,"Backoff Divisor":2,"Backoff Delay":5,"Max Retries":4,"End Download Timeout":0},"Epn FW Upgrade":{"Final Ack Timeout":60},
"UNI-x 1":{"Max Frame Size":2000,"Duplex":"Auto","Speed":"Auto","lb":false,"Enable":true,"bd Rate Limit":200000,"st Limit":100,"lb Type":"PHY","Clear st Count":0,"ex":"Off","pc":false},
"UNI-x 2":{"Max Frame Size":2000,"Duplex":"Auto","Speed":"Auto","lb":false,"Enable":true,"bd Rate Limit":200000,"st Limit":100,"lb Type":"PHY","Clear st Count":0,"ex":"Off","pc":false},
"UNI-POTS 1":{"Enable":true},"UNI-POTS 2":{"Enable":true}}
}
All I am trying to do is replace one small value in this super-complicated JSON: the value of "TD ex" under "os0", from ["a0.c985.c0"] to ["a0.c995.c0"].
Is FreeMarker the best way to do this? I need to change only one value. Can this be done through regex, or should I use Gson?
I can replace the value like this:
JsonObject jsonObject = new JsonParser().parse(inputJson).getAsJsonObject();
// "TD ex" holds a single-element array, so getAsString() returns that element
JsonElement jsonElement = jsonObject.get("data").getAsJsonObject().get("os0").getAsJsonObject().get("TD ex");
String str = jsonElement.getAsString();
System.out.println(str);
// Rewrite the digits in the middle segment of "a0.c985.c0"
String[] strs = str.split("\\.");
String replaced = strs[0] + "." + strs[1].replaceAll("\\d+", "995") + "." + strs[2];
System.out.println(replaced);
How do I put it back and regenerate the JSON?
FreeMarker is a template engine, so it's not the tool for this. Load the JSON with a real JSON parser library (like Jackson or Gson) into a node tree, change the value in that tree, and then use the same library to generate JSON from the node tree. Also, always avoid doing anything to JSON with regular expressions: JSON (like most practical languages) can describe the same value in many ways, so writing a truly correct regular expression is impractical.
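For example, a minimal Gson sketch of that load-modify-serialize round trip (assuming the structure shown in the question; inputJson is the string you already have):
import com.google.gson.*;

JsonObject root = new JsonParser().parse(inputJson).getAsJsonObject();
JsonArray tdEx = root.getAsJsonObject("data")
                     .getAsJsonObject("os0")
                     .getAsJsonArray("TD ex");
tdEx.set(0, new JsonPrimitive("a0.c995.c0")); // replace the single element in place
String outputJson = new GsonBuilder().setPrettyPrinting().create().toJson(root);
Because the change happens in the node tree itself, serializing root writes the whole document back out with only that one value altered.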
I am using great_expectations for pipeline testing.
I have one DataFrame batch of type:
great_expectations.dataset.pandas_dataset.PandasDataset
I want to build a dynamic validation expression, i.e.
batch.<validation_type>(column_name, value)
in which validation_type, column_name, and value come from a JSON file.
JSON structure:-
{
"column_name": "sex",
"validation_type": "expect_column_values_to_be_in_set",
"validation_value": ["MALE","FEMALE"]
},
When I build this expression I get the error message described below.
Code:-
def add_validation(self, batch, validation_list):
    for d in validation_list:
        expression = "." + d["validation_type"] + "(" + d["column_name"] + "," + str(d["validation_value"]) + ")"
        print(expression)
        batch + expression
    batch.save_expectation_suite(discard_failed_expectations=False)
    return batch
Output:-
print statement output
.expect_column_values_to_be_in_set(sex,['MALE','FEMALE'])
Error:-
TypeError: ufunc 'add' did not contain a loop with signature matching types dtype('
In great_expectations, the expectation_suite object is designed to capture all of the information necessary to evaluate an expectation. So, in your case, the most natural thing to do would be to translate the source json file you have into the great_expectations expectation suite format.
The best way to do that will depend on where you're getting the original JSON structure from -- you'd ideally want to do the translation as early as possible (maybe even before creating that source JSON?) and keep the expectations in the GE format.
For example, if all of the expectations you have are of the type expect_column_values_to_be_in_set, you could do a direct translation:
expectations = []
for d in validation_list:
    expectation_config = {
        "expectation_type": d["validation_type"],
        "kwargs": {
            "column": d["column_name"],
            "value_set": d["validation_value"]
        }
    }
    expectations.append(expectation_config)  # collect each translated expectation

expectation_suite = {
    "expectation_suite_name": "my_suite",
    "expectations": expectations
}
On the other hand, if you are working with a variety of different expectations, you would also need to make sure that the validation_value in your JSON gets mapped to the right kwargs for the expectation (for example, if you expect_column_values_to_be_between then you actually need to provide min_value and/or max_value).
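One hedged way to handle that mapping (the builder table below is illustrative, not part of great_expectations itself, and it assumes validation_value is a [min, max] pair for the "between" case):
# Hypothetical per-type kwarg builders; extend as you add expectation types
KWARG_BUILDERS = {
    "expect_column_values_to_be_in_set": lambda d: {
        "column": d["column_name"],
        "value_set": d["validation_value"],
    },
    "expect_column_values_to_be_between": lambda d: {
        "column": d["column_name"],
        "min_value": d["validation_value"][0],  # assumes [min, max]
        "max_value": d["validation_value"][1],
    },
}

def to_expectation_config(d):
    return {
        "expectation_type": d["validation_type"],
        "kwargs": KWARG_BUILDERS[d["validation_type"]](d),
    }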
I would like to dump a nested data structure in Ruby to JSON (I am aware of the Marshal module, but I need a standard format) and be able to load/parse the data structure again. Catch: I use structs (or, easier for the example, hashes) as keys of hashes. Example:
require 'json'
h = {{hello: 123} => 123}
JSON.parse(JSON.generate(h)) #=> {"{:hello=>123}"=>123}
So the problem is that JSON.generate(h) serialises the key {:hello=>123} as a string, and when I parse the result again, it remains a string.
How can I solve this and regain the original structure after generate/parse?
JSON only allows strings as object keys. For this reason to_s is called for all keys.
You'll have the following options to solve your issue:
The best option is changing the data structure so it can properly be serialized to JSON (a sketch of this follows after the eval example below).
You'll have to handle the stringified key yourself. A Hash converted to a string produces perfectly valid Ruby syntax, which can be turned back into a Hash using Kernel#eval, as Andrey Deineko suggested in the comments.
result = json.transform_keys { |key| eval(key) }
# json.transform_keys(&method(:eval)) is the same as the above.
The Hash#transform_keys method is relatively new (available since Ruby 2.5.0) and might not yet be available in your development environment. You can replace it with a simple Enumerable#map if needed.
result = json.map { |k, v| [eval(k), v] }.to_h
Note: if the incoming JSON contains any user-generated content, I highly suggest you stay away from eval, since you might allow the user to execute code on your server.
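As for the first option, a minimal sketch (assuming the keys are themselves JSON-serializable, as in the example above) is to store each key as a JSON string instead of relying on to_s, which avoids eval entirely:
require 'json'

h = { { hello: 123 } => 123 }

# Encode each key as a JSON string before dumping the outer hash
dumped = JSON.generate(h.transform_keys { |k| JSON.generate(k) })

# Decode the keys again on the way back (symbolize_names restores :hello)
restored = JSON.parse(dumped).transform_keys { |k| JSON.parse(k, symbolize_names: true) }
#=> {{:hello=>123}=>123}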
I need a standard format
YAML is a standard format that would suffice here:
▶ h = {{hello: 123} => 123}
#⇒ {{:hello=>123}=>123}
▶ YAML.dump h
#⇒ "---\n? :hello: 123\n: 123\n"
▶ YAML.load _
#⇒ {{:hello=>123}=>123}
As already pointed out by mudasobwa, YAML is a good tool: it also allows you to store custom class objects:
require 'yaml'
class MyCaptain
  attr_accessor :name, :ship
  def initialize(name, ship)
    @name = name
    @ship = ship
  end
end
kirk = MyCaptain.new('James T. Kirk', 'USS Enterprise NCC-1701')
picard = MyCaptain.new('Jean-Luc Picard', 'Enterprise NCC-1701D')
captains = [kirk, picard]
File.open("my_captains.yml", "w") do |file|
  file.write captains.to_yaml
end
p YAML.load_file('my_captains.yml')
#=> [#<MyCaptain:0x007f889d0973b0 @name="James T. Kirk", @ship="USS Enterprise NCC-1701">, #<MyCaptain:0x007f889d096b40 @name="Jean-Luc Picard", @ship="Enterprise NCC-1701D">]
I have this JSON file in a data lake that looks like this:
{
"id":"398507",
"contenttype":"POST",
"posttype":"post",
"uri":"http://twitter.com/etc",
"title":null,
"profile":{
"#class":"PublisherV2_0",
"name":"Company",
"id":"2163171",
"profileIcon":"https://pbs.twimg.com/image",
"profileLocation":{
"#class":"DocumentLocation",
"locality":"Toronto",
"adminDistrict":"ON",
"countryRegion":"Canada",
"coordinates":{
"latitude":43.7217,
"longitude":-31.432},
"quadKey":"000000000000000"},
"displayName":"Name",
"externalId":"00000000000"},
"source":{
"name":"blogs",
"id":"18",
"param":"Twitter"},
"content":{
"text":"Description of post"},
"language":{
"name":"English",
"code":"en"},
"abstracttext":"More Text and links",
"score":{}
}
}
In order to call the data into my application, I have to turn the JSON into a string using this code:
DECLARE @input string = @"/MSEStream/{*}.json";
REFERENCE ASSEMBLY [Newtonsoft.Json];
REFERENCE ASSEMBLY [Microsoft.Analytics.Samples.Formats];
@allposts =
    EXTRACT jsonString string
    FROM @input
    USING Extractors.Text(delimiter:'\b', quoting:true);
@extractedrows =
    SELECT Microsoft.Analytics.Samples.Formats.Json.JsonFunctions.JsonTuple(jsonString) AS er
    FROM @allposts;
@result =
    SELECT er["id"] AS postID,
           er["contenttype"] AS contentType,
           er["posttype"] AS postType,
           er["uri"] AS uri,
           er["title"] AS Title,
           er["acquisitiondate"] AS acquisitionDate,
           er["modificationdate"] AS modificationDate,
           er["publicationdate"] AS publicationDate,
           er["profile"] AS profile
    FROM @extractedrows;
OUTPUT @result
TO "/ProcessedQueries/all_posts.csv"
USING Outputters.Csv();
This outputs the JSON into a .csv file that is readable, and when I download the file all data is displayed properly. My problem is when I need to get the data inside profile: because the JSON is now a string, I can't seem to extract any of that data and put it into a variable to use. Is there any way to do this, or do I need to look into other options for reading the data?
You can use JsonTuple on the profile string to further extract the specific properties you want. An example of U-SQL code to process nested Json is provided in this link - https://github.com/Azure/usql/blob/master/Examples/JsonSample/JsonSample/NestedJsonParsing.usql.
You can use JsonTuple on the profile column to further extract specific nodes
E.g. use JsonTuple to get all the child nodes of the profile node and extract specific values like how you did in your code.
@childnodesofprofile =
    SELECT Microsoft.Analytics.Samples.Formats.Json.JsonFunctions.JsonTuple(profile) AS childnodes_map
    FROM @result;
@values =
    SELECT childnodes_map["name"] AS name,
           childnodes_map["id"] AS id
    FROM @childnodesofprofile;
Alternatively, if you are interested in specific values, you can also pass parameters to the JsonTuple function to get the specific nodes you want. The code below gets the locality node from the recursively nested nodes (as described by the "$..locality" construct).
@locality =
    SELECT Microsoft.Analytics.Samples.Formats.Json.JsonFunctions.JsonTuple(profile, "$..locality").Values AS locality
    FROM @result;
Other supported constructs by JsonTuple
JsonTuple(json, "id", "name") // field names
JsonTuple(json, "$.address.zip") // nested fields
JsonTuple(json, "$..address") // recursive children
JsonTuple(json, "$[?(#.id > 1)].id") // path expression
JsonTuple(json) // all children
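Applied to the profile data in the question, a nested-field lookup would look something like this (a sketch mirroring the locality example above; the path follows the sample JSON):
@latitude =
    SELECT Microsoft.Analytics.Samples.Formats.Json.JsonFunctions.JsonTuple(profile, "$.profileLocation.coordinates.latitude").Values AS latitude
    FROM @result;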
Hope this helps.
I have a chunk of json that has the following format:
{"page":{"size":7,"number":1,"totalPages":1,"totalElements":7,"resultSetId":null,"duration":0},"content":[{"id":"787edc99-e94f-4132-b596-d04fc56596f9","name":"Verification","attributes":{"ruleExecutionClass":"VerificationRule"},"userTags":[],"links":[{"rel":"self","href":"/endpoint/787edc99-e94f-4132-b596-d04fc56596f9","id":"787edc99-e94f-...
Basically, the size attribute (in this case) tells me that there are 7 parts to the content section. How do I convert this chunk of JSON to an array in Perl, and can I do it using the size attribute? Or is there a simpler way, like just using decode_json()?
Here is what I have so far:
my $resources = get_that_json_chunk(); # function returns exactly the json you see, except all 7 resources in the content section
my @decoded_json = @$resources;
foreach my $resource (@decoded_json) {
I've also tried something like this:
my $deserialize = from_json( $resources );
my @decoded_json = (@{$deserialize});
I want to iterate over the array and handle the data. I've tried a few different ways because I read a little about array refs, but I keep getting "Not an ARRAY reference" errors and "Can't use string ("{"page":{"size":7,"number":1,"to"...) as an ARRAY ref while "strict refs" in use"
Thank you to Matt Jacob:
my $deserialized = decode_json($resources);
print "$_->{id}\n" for #{$deserialized->{content}};
I am using Dell Boomi to map data from one system to another. I can use groovy in the maps but have no experience with it. I tried to do this with the other Boomi tools, but have been told that I'll need to use groovy in a script. My inbound data is:
132265,Brown
132265,Gold
132265,Gray
132265,Green
I would like to output:
132265,"Brown,Gold,Gray,Green"
Hopefully this makes sense! Any ideas on the groovy code to make this work?
It can be elegantly solved with groupBy and the spread operator:
@Grapes(
    @Grab(group='org.apache.commons', module='commons-csv', version='1.2')
)
import org.apache.commons.csv.*
def csv = '''
132265,Brown
132265,Gold
132265,Gray
132265,Green
'''
def parsed = CSVParser.parse(csv, CSVFormat.DEFAULT.withHeader('code', 'color'))
parsed.records.groupBy({ it.code }).each { k, v -> println "$k,\"${v*.color.join(',')}\"" }
The above prints:
132265,"Brown,Gold,Gray,Green"
Well, I don't know how you are getting your data, but here is a general way to achieve your goal. You can use a library, such as the one below, to parse the CSV.
https://github.com/xlson/groovycsv
The example for your data would be:
@Grab('com.xlson.groovycsv:groovycsv:1.1')
import static com.xlson.groovycsv.CsvParser.parseCsv
def csv = '''
132265,Brown
132265,Gold
132265,Gray
132265,Green
'''
def data = parseCsv(csv)
I believe you want to associate the number with the various color values. So, for each line, you can create a map keyed by the number, holding a list of the colors associated with that number, splitting each line by ",":
map = [:]
for (line in data) {
    number = line.split(',')[0]
    colour = line.split(',')[1]
    if (!map[number])
        map[number] = []
    map[number].add(colour)
}
println map
So map should contain:
[132265:["Brown","Gold","Gray","Green"]]
Well, if it is not what you want, you can extract the general idea.
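To get from that map to the exact output format asked for, one more step joins the colors per key (a small sketch building on the map above):
map.each { number, colours ->
    println "$number,\"${colours.join(',')}\""
}
// prints: 132265,"Brown,Gold,Gray,Green"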
Assuming your data is coming in as a comma separated string of data like this:
"132265,Brown 132265,Gold 132265,Gray 132265,Green 122222,Red 122222,White"
The following Groovy script code should do the trick.
def csvString = "132265,Brown 132265,Gold 132265,Gray 132265,Green 122222,Red 122222,White"
LinkedHashMap.metaClass.multiPut << { key, value ->
    delegate[key] = delegate[key] ?: []
    delegate[key] += value
}
def map = [:]
def csv = csvString.split().collect{ entry -> entry.split(",") }
csv.each{ entry -> map.multiPut(entry[0], entry[1]) }
def result = map.collect{ k, v -> k + ',"' + v.join(",") + '"'}.join("\n")
println result
Would print:
132265,"Brown,Gold,Gray,Green"
122222,"Red,White"
Do you HAVE to use scripting for some reason? This can be easily accomplished with out-of-the-box Boomi functionality.
Create a map function that prepends the ID field to a string of your choice (e.g. 222_concat_fields). Then use that value to set a dynamic process property.
The value of the process prop will contain the result of concatenating the name fields. Simply adding this function to your map should take care of it. Then use the final value to populate your result.
Well, it depends upon how the data is coming in.
If the data which you have posted in the question is coming in a single document, then you can easily handle this in a map with groovy scripting.
If the data is coming in as multiple documents, i.e.:
doc1: 132265,Brown
doc2: 132265,Gold
doc3: 132265,Gray
doc4: 132265,Green
In that case it cannot be handled in a map; you will need to use a Data Process step with custom scripting.
The groovy code you are asking for depends upon the input profile in which you are getting the data. Please provide more information, i.e. input profile, fields, etc.