Hey guys, I have a single string output which I need to convert to JSON:
Policy Name: Default_US1 Id: abc123 Buckets: bucket1,bothplaces
Policy Name: Default_CH1 Id: def456 Buckets: support,ch1,ch2
Policy Name: Default_NY2 Id: ghi789 Buckets: demo,bucket1,test1,test
How SHOULD it look in JSON format?
[
{"Policy Name": "Default_US1"}, {"Id": "abc123"}, {"Buckets":[ "bucket1","bothplaces"]}
{"Policy Name": "Default_CH1"}, {"Id": "def456"}, {"Buckets":[ "support","ch1","ch2"]}
{"Policy Name": "Default_NY2"}, {"Id": "ghi789"}, {"Buckets":[ "demo","bucket1","test1","test"]}
]
Above is my current attempt... but other than not working, I know instinctively it's missing something (or several things), and I can't figure out what or how to remedy it.
Directions on how to do it in PowerShell would be a plus, but are not necessary.
I keep trying but messing up; I know the best test is getting ConvertFrom-Json to show me normal output.
I do not care much how it ends up looking at the end; I just want to extract all that data, with JSON being the format of choice. Any VALID JSON result is something I can work with and manipulate... but first I need a valid JSON conversion.
Ok, so you were correct - your current JSON format is ghastly! The mistake you are making is treating each little bit of data as a separate object when there appears to be a natural hierarchy in your data model.
The following structure more naturally fits your data model. However, this is purely based on a cursory examination of the input data you have posted - I know nothing about the data model itself.
[
{
"Name": "Default_US1",
"Id": "abc123",
"Buckets": [
"bucket1",
"bothplaces"
]
},
{
"Name": "Default_CH1",
"Id": "def456",
"Buckets": [
"support",
"ch1",
"ch2"
]
},
{
"Name": "Default_NY2",
"Id": "ghi789",
"Buckets": [
"demo",
"bucket1",
"test1",
"test2"
]
}
]
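If you want to script the conversion itself, here is a minimal sketch (in TypeScript rather than PowerShell, since any JSON-capable language works; the same match-and-split approach ports to a PowerShell loop ending in ConvertTo-Json). It assumes every input line follows exactly the format shown in the question:
// Parse each "Policy Name: ... Id: ... Buckets: ..." line into an object,
// then serialize the whole array as JSON.
const input = `Policy Name: Default_US1 Id: abc123 Buckets: bucket1,bothplaces
Policy Name: Default_CH1 Id: def456 Buckets: support,ch1,ch2
Policy Name: Default_NY2 Id: ghi789 Buckets: demo,bucket1,test1,test`;

const policies = input.split("\n").map((line) => {
  const m = line.match(/^Policy Name: (\S+) Id: (\S+) Buckets: (\S+)$/);
  if (!m) throw new Error(`Unrecognized line: ${line}`);
  // comma-separated bucket list becomes a JSON array
  return { Name: m[1], Id: m[2], Buckets: m[3].split(",") };
});

console.log(JSON.stringify(policies, null, 2)); // valid JSON, round-trips through ConvertFrom-Json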
I have the following JSON:
[
{
"date": "29/11/2021",
"Name": "jack"
},
{
"date": "30/11/2021",
"Name": "Adam"
},
{
"date": "27/11/2021",
"Name": "james"
}
]
Using Talend, I want to add 2 fields to each object to get something like:
[
{
"company": "AMA",
"service": "BI",
"date": "29/11/2021",
"Name": "jack"
},
{
"company": "AMA",
"service": "BI",
"date": "30/11/2021",
"Name": "Adam"
},
{
"company": "AMA",
"service": "BI",
"date": "27/11/2021",
"Name": "james"
}
]
Currently, I use 3 components (tJSONDocOpen, tFixedFlowInput, tJSONDocOutput), but I can't find the right configuration of the components to get the job done!
If you are not comfortable with JSON, just do these steps:
In the metadata, create a FileJson, then drop it into your job as a tFileInputJson, and set up your job design and mapping from there.
In your tFileOutputJson, don't forget to replace the name of the data block "Data" with "".
What you need to do here, according to Talend practices, is read your JSON, extract each object from it, add your properties, and finally rebuild your JSON in a file.
An efficient way to do this is with the tMap component.
The first tFileInputJSON has to specify which properties to read from the JSON, by setting your 2 objects in the mapping field.
Then the tMap simply adds 2 columns to your main stream; here is an example with hard-coded string values. Depending on your needs, this component also lets you assign dynamic data to your 2 new columns; it's a powerful tool for manipulating the structure of a data stream.
You will find more info about this component in the official documentation: https://help.talend.com/r/en-US/7.3/tmap/tmap, especially the "tMap scenarios" part.
Note
Instead of using the tMap, if you are comfortable with Java, you can use a tJavaRow instead. With it, you can set up your 2 new columns with whatever Java code you want, as long as you have defined the output schema of the component.
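// pass the existing fields through to the output row unchanged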
output_row.Name = input_row.Name;
output_row.date = input_row.date;
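// populate the 2 new columns with hard-coded values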
output_row.company = "AMA";
output_row.service = "BI";
Good day community, I am using LUIS and training a data set so it can classify between different meanings of words. After training, I want to import a set of data to test it. There is a batch testing option that lets me import a JSON file, however it keeps showing this error:
BadArgument: Dataset object cannot be null. Parameter name: dataSet
I have already followed the JSON format that it gave, which is like this:
[
{
"text": "hey dad, are you hungry?",
"intent": "None",
"entities":
[
{
"entity": "FamilyMember",
"startPos": 4,
"endPos": 6
}
]
},
{
.
.
.
}
]
My JSON file is formatted like this:
[
{
"text" : "Hello"
"intent": "Greetings"
},
{
"text" : "I want bread"
"intent": "Request"
}
]
Can anyone tell me what I am doing wrong? The training doesn't include any entities, so I did not put them into my JSON file.
Thank you.
You still need to provide the entities attribute and give it an empty array, otherwise you'll receive a different error. Regarding your format, you're missing commas after your text attributes.
[
{
"text" : "Hello",
"intent": "Greetings",
"entities": []
},
{
"text" : "I want bread",
"intent": "Request",
"entities": []
}
]
When I used the above JSON, the batch test completed successfully for me.
I have a JSON object which is very complex and very large. I know how to parse it and get values, but I want to learn the fastest way to filter data from that JSON.
The actual JSON is very big and complex, so to keep it simple I created a sample JSON which looks like the following. I want to extract only the "CompanyTitle" of companies whose "OfficeLocations" include "NY".
{
"Companies": [
{
"Url": "www.abc.com",
"CompanyTitle": "title of ABC",
"OfficeLocations": [
"Online",
"NY",
"CO"
],
"OfficeLocationsDisplay": "Campus/Online"
},
{
"Url": "www.xyz.com",
"CompanyTitle": "title of xyz",
"OfficeLocations": [
"CO",
"NY",
"IL"
],
"OfficeLocationsDisplay": "Campus/Online"
}]
}
Note: I have already implemented the parsing, but it is very slow. So I want to learn whether there is a faster way; if so, I will use that instead of my parsing.
This JSON is loaded on the page from a .NET model, so I need to do it from the same page.
Thanks
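For what it's worth, once the JSON is parsed into an object, a single linear pass is about as fast as filtering gets for data of this shape. A minimal sketch in TypeScript (the interface and function names here are illustrative, not from the asker's code):
interface Company {
  Url: string;
  CompanyTitle: string;
  OfficeLocations: string[];
  OfficeLocationsDisplay: string;
}

// One pass: keep companies whose locations include the target, project the title.
function titlesIn(data: { Companies: Company[] }, location: string): string[] {
  return data.Companies
    .filter((c) => c.OfficeLocations.includes(location))
    .map((c) => c.CompanyTitle);
}

// titlesIn(parsed, "NY") -> ["title of ABC", "title of xyz"]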
What is the right way to format your responses in JSON, and why? I've seen different services do it in two ways; consider a simple GET /users resource:
{
"success": true,
"message": "User created successfully",
"data": [
{"id": 1, "name": "John"},
{"id": 2, "name": "George"},
{"id": 3, "name": "Bob"},
{"id": 4, "name": "Jane"}
]
}
That is how I usually do it. I have some abstract helper fields like success and message, and there may be more, but the question is whether I should nest the data in the data field inside an object named the same way as the resource - users:
{
"success": true,
"message": "User created successfully",
"data": {
"users": [
{"id": 1, "name": "John"},
{"id": 2, "name": "George"},
{"id": 3, "name": "Bob"},
{"id": 4, "name": "Jane"}
]
}
}
Even if we don't use the abstraction:
{
"users": [
{"id": 1, "name": "John"},
{"id": 2, "name": "George"},
{"id": 3, "name": "Bob"},
{"id": 4, "name": "Jane"}
]
}
It seems the users key is redundant, as any client will know the route it called (/users, where users is already mentioned), and client code like
$users = $request->perform('http://this.api/users')->body()->json_decode();
looks much better than
$users = $request->perform('http://this.api/users')->body()->json_decode()->users;
as it avoids repeating users.
One use case where the envelope can be useful is when you expect to deal with large lists and need pagination to prevent huge response payloads. The envelope is a good place to put the pagination metadata:
{
"users": [...],
"offset": 0,
"limit": 50,
"total": 10000
}
(This is what we do in a RESTful API I'm working on)
Clearly this is only relevant for requests that return lists of things (e.g. /users/) and not for requests that return single entities (e.g. /users/42). And even for requests that return lists, you don't have to use an envelope; one alternative would be to use response headers for this metadata instead.
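For illustration, the envelope above corresponds to a response type along these lines (the field names come from the example, not from any standard):
interface User {
  id: number;
  name: string;
}

interface PagedUsers {
  users: User[];
  offset: number; // index of the first item in this page
  limit: number;  // maximum number of items per page
  total: number;  // total number of items across all pages
}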
PS. I would only advise having success and message fields if you have a concrete use case for them. Otherwise don't bother; they are simply unnecessary.
Just to get on the same page, data is a field in a JSON object. In the first example the value of data is an array. In the second example the value of data is an object.
Either is valid, so to answer your question: no it is not necessary to nest named objects in an named object. It is necessary that all fields of an object be named, but you are free to nest arrays within an object.
It really just depends on what the processor expects. If data can be anything, then the first approach is fine. If code expects the value of the data field to be an object, then you have to use something like the second example.
Regarding the comment you added under the first comment: more descriptive data is better data, as every piece of information is useful to the consumer of your API (the REST endpoint). So if you know that the content is a user, or whatever else, it's better to make that explicit in the schema or the endpoint URL.
Better description = better consumption :-)
My current project sends a lot of data to the browser in JSON via ajax requests.
I've been trying to decide which format I should use. The two I have in mind are
[
{
"colname1": "content",
"colname2": "content"
},
{
"colname1": "content",
"colname2": "content"
},
...
]
and
{
"columns": [
"column name 1",
"column name 2",
],
"rows": [
[
"content",
"content"
],
[
"content",
"content"
]
...
]
}
The first method is better because it is easier to work with: I just have to parse it once received. The second needs some post-processing to convert it into a format more like the first, so it is easier to work with in JavaScript (a sketch of that conversion follows below).
The second is better because it is less verbose and therefore takes up less bandwidth and downloads more quickly. Before compression it is usually between 75% and 85% of the size of the first format.
GZip compression complicates things further, bringing the difference in file size nearer to 85% to 95%.
Which format should I go with and why?
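For context, the post-processing the second format needs is small. A minimal sketch in TypeScript (the type and function names are illustrative), zipping each row with the column names:
interface Tabular {
  columns: string[];
  rows: string[][];
}

// Rebuild row objects by pairing each row value with its column name.
function toObjects(data: Tabular): Record<string, string>[] {
  return data.rows.map((row) =>
    Object.fromEntries(data.columns.map((col, i) => [col, row[i]] as [string, string]))
  );
}

// toObjects({ columns: ["colname1", "colname2"], rows: [["content", "content"]] })
// -> [{ colname1: "content", colname2: "content" }]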
I'd suggest using RJSON:
RJSON (Recursive JSON) converts any JSON data collection into more compact recursive form. Compressed data is still JSON and can be parsed with JSON.parse. RJSON can compress not only homogeneous collections, but any data sets with free structure.
Example:
JSON:
{
"id": 7,
"tags": ["programming", "javascript"],
"users": [
{"first": "Homer", "last": "Simpson"},
{"first": "Hank", "last": "Hill"},
{"first": "Peter", "last": "Griffin"}
],
"books": [
{"title": "JavaScript", "author": "Flanagan", "year": 2006},
{"title": "Cascading Style Sheets", "author": "Meyer", "year": 2004}
]
}
RJSON:
{
"id": 7,
"tags": ["programming", "javascript"],
"users": [
{"first": "Homer", "last": "Simpson"},
[2, "Hank", "Hill", "Peter", "Griffin"]
],
"books": [
{"title": "JavaScript", "author": "Flanagan", "year": 2006},
[3, "Cascading Style Sheets", "Meyer", 2004]
]
}
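In code, the round trip looks roughly like this; note that pack/unpack are the entry points I recall from the project's README, so treat the names as an assumption:
// Sketch only: RJSON's pack/unpack names are assumed, not verified here.
declare const RJSON: {
  pack(data: unknown): unknown;
  unpack(data: unknown): unknown;
};

const original = { users: [{ first: "Homer", last: "Simpson" }, { first: "Hank", last: "Hill" }] };
const packed = RJSON.pack(original);              // compact recursive form
const wire = JSON.stringify(packed);              // still plain JSON on the wire
const restored = RJSON.unpack(JSON.parse(wire));  // back to the original shape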
Shouldn't the second bit of example 1 be "rowname1", etc.? I don't really get example 2, so I guess I would point you towards 1. There is much to be said for having data immediately workable without pre-processing it first. Justification: I once spent too long optimizing an array system that turned out to work perfectly, but it's hell to update now.