jq: format a JSON array by appending values under a specific value

I am trying to get this:
[
["John Black",[
["Lorem ipsum dolor sit amet.",27],
["Ut tempus lectus ut mi.",23]
]],
["Peter Pan",[
["Quisque pulvinar odio.",22],
["Nec ut lorem quis interdum elit.",32]
]],
["Gary Halbert",[
["Placerat aliquam.",17]
]],
["Richard Gere",[
["Porttitor commodo fermentum.",28]
]]
]
So far, this is what I have:
export A=$(cat <<'EOL'
[
["John Black",["Lorem ipsum dolor sit amet.",27]],
["Peter Pan",["Quisque pulvinar odio.",22]],
["John Black",["Ut tempus lectus ut mi.",23]],
["Gary Halbert",["Placerat aliquam.",17]],
["Peter Pan",["Nec ut lorem quis interdum elit.",32]],
["Richard Gere",["Porttitor commodo fermentum.",28]]
]
EOL
)
echo "$A" | jq 'map({(.[0]): .[1]}) | add'
Resulting in this:
{
"John Black": [
"Ut tempus lectus ut mi.",
23
],
"Peter Pan": [
"Nec ut lorem quis interdum elit.",
32
],
"Gary Halbert": [
"Placerat aliquam.",
17
],
"Richard Gere": [
"Porttitor commodo fermentum.",
28
]
}
I am using jq-1.5.
Any ideas? Thanks.

This is an appropriate use case for a reducer. Most of the code below is concerned not with joining items under shared keys, but with getting them into the desired nested-list form:
jq -n '[
inputs |
reduce .[] as $item ({}; .[$item[0]] += [$item[1]]) |
to_entries |
.[] |
[.key, .value]
]' <<<"$A"
...which yields the following output (edited only to compact the whitespace):
[
["John Black", [["Lorem ipsum dolor sit amet.", 27], ["Ut tempus lectus ut mi.",23]]],
["Peter Pan", [["Quisque pulvinar odio.",22], ["Nec ut lorem quis interdum elit.", 32]]],
["Gary Halbert", [["Placerat aliquam.", 17]]],
["Richard Gere", [["Porttitor commodo fermentum.", 28]]]
]
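If the names do not have to appear in the same order as their first appearance in the input, a group_by-based variant produces the same nesting with less machinery. This is only a sketch, and note that group_by sorts the groups alphabetically by name:
jq 'group_by(.[0]) | map([.[0][0], map(.[1])])' <<<"$A"
Here group_by(.[0]) collects the pairs sharing a name into one group, .[0][0] takes the name from the first pair in each group, and map(.[1]) gathers the [text, number] pairs.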

The CouchDB engine is append-only: each document update is appended to a file managed by CouchDB, and each file corresponds to a database. To use it properly, think about how to avoid repeated updates to the same document, and remember that all revisions will be stored.
My suggestion for each document:
{
"name": "John Black",
"entries": [
{
"test": "Lorem ipsum dolor sit amet.",
"value": 27
},
{
"text": "Ut tempus lectus ut mi.",
"value": 23
}
],
"type": "user"
}
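For example (a sketch only, with a hypothetical host and database name), a document shaped like that can be created with a plain POST to the database:
curl -X POST http://localhost:5984/users \
     -H 'Content-Type: application/json' \
     -d '{"name": "John Black", "type": "user", "entries": [{"text": "Lorem ipsum dolor sit amet.", "value": 27}]}'
CouchDB assigns the _id and _rev automatically.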


How do you output processed JSON from AWS Glue to DynamoDB?

{
"adult": false,
"backdrop_path": "/example.jpg",
"belongs_to_collection": null,
"budget": 350000,
"genres": [
{
"id": 18,
"name": "Drama"
}
],
"homepage": "",
"id": 123,
"imdb_id": "a3f4w4f4",
"original_language": "en",
"overview": "Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum",
"popularity": 27.298,
"poster_path": "/example.jpg",
"production_companies": [
{
"id": 60,
"logo_path": "/example.png",
"name": "example 1",
"origin_country": "US"
},
{
"id": 10212,
"logo_path": null,
"name": "example 2",
"origin_country": ""
}
],
"production_countries": [
{
"iso_3166_1": "US",
"name": "United States of America"
}
],
"release_date": "1970-04-10",
"revenue": 1000000,
"runtime": 97,
"spoken_languages": [
{
"iso_639_1": "en",
"name": "English"
}
],
"status": "Released",
"tagline": "Lorem ipsum.",
"title": "Example name",
"video": false,
"vote_average": 8.5,
"vote_count": 5004
}
I am new to AWS Glue. From what I know, it creates a Zeppelin notebook that flattens the JSON you throw at it using the relationalize transform, and then normally allows writing to RDS/S3, etc.
I didn't find any good information on exporting directly to DynamoDB from AWS Glue.
Above is one of the JSON items in a collection I want to store in DynamoDB.
The JSON fields and keys are consistent across the other JSON items, although some have fewer or more subitems.
If the DynamoDB table and schema exist (you can assume each JSON key maps to a DynamoDB attribute), I want AWS Glue to insert or update this JSON information in DynamoDB.
How do I do that? Can AWS Glue recreate a DynamoDB schema? I want to automate as much as possible.
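For reference, one common pattern (not from the original post; the table name and record fields below are hypothetical) is to turn the flattened records into plain dicts and write them to an existing DynamoDB table with boto3's batch writer. put_item overwrites any item with the same primary key, so this behaves as insert-or-update:
import boto3

# Hypothetical table; in a Glue job the records would come from the
# relationalized DynamicFrame converted to a list of dicts.
table = boto3.resource("dynamodb").Table("movies")

records = [
    {"id": 123, "title": "Example name", "status": "Released", "runtime": 97},
]

# batch_writer() buffers PutItem requests and flushes them in batches.
# Note: numeric values must be int or Decimal, not float, for boto3.
with table.batch_writer() as writer:
    for record in records:
        writer.put_item(Item=record)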

v-if statement to compare one parameter and, if it matches, return another parameter within the same object in Vue.js

What v-if could I use to look for a matching title in an object in a JSON file and retrieve the object's content if the title is found?
Here is the sample json file I am using:
{
"data": [
{
"image_path": "static/products/T130-SHEET-COLLECTION.jpg",
"title": "Productivity Tools",
"title_color": "badge-warning",
"heading": "T130 Sheet Collection",
"read_more_url": "javascript:void(0);",
"content": "blanket posuere proin blandit accumsan senectus netus nullam curae, ornare laoreet adipiscing luctus mauris adipiscing pretium eget fermentum, tristique lobortis est ut metus lobortis tortor.",
"vendor": "George Courey",
"space": "Bedding",
"category": "Linen"
},
{
"image_path": "static/products/PRESTIGE-PLUS-T180-SHEET-COLLECTION.jpg",
"title": "Productivity Tools",
"title_color": "badge-warning",
"heading": "Prestige Plus T180 Sheet Collection",
"read_more_url": "javascript:void(0);",
"content": "blanket posuere proin blandit accumsan senectus netus nullam curae, ornare laoreet adipiscing luctus mauris adipiscing pretium eget fermentum, tristique lobortis est ut metus lobortis tortor.",
"vendor": "George Courey",
"space": "Bedding",
"category": "Linen"
}
]
}
v-if in v-for:
<div v-for="item of data">
<div v-if="item.title === 'condition'">
{{ item.title }}
</div>
</div>
It is unclear what your condition would be, i.e. whether you are matching duplicates or comparing against another value.
Either way, I would recommend using a computed property instead of relying on a v-if. This has numerous benefits: cleaner separation of template and logic, easier reading and debugging, and the filtering logic is easier to write in plain JS.
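A minimal sketch of that approach (the property names items and wantedTitle are hypothetical, and the data is assumed to be already loaded from the JSON file):
export default {
  data() {
    return {
      items: [],                          // filled from the JSON file (fetch/axios/import)
      wantedTitle: 'Productivity Tools'   // the title to match
    };
  },
  computed: {
    // Only the objects whose title matches are exposed to the template.
    matchingItems() {
      return this.items.filter(item => item.title === this.wantedTitle);
    }
  }
};
The template then loops over the filtered result instead of testing each item:
<div v-for="item of matchingItems" :key="item.heading">
  {{ item.heading }}
</div>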

From JSON Multidimensional array to SQL

I have this JSON structure and need to load it into a database, but I do not know if the structure I made is the best possible. The code has some repetitions that I think are unnecessary and probably not correct.
JSON:
{
"info": [
{
"atriz": "Sandra Bullock",
"atriz-id": "162",
"atriz-slug": "sandra-bullock",
"carreira": {
"id": "264",
"inicio": "08/10/91",
"final": "16/08/18",
"videos": [
{
"id": "2930500",
"titulo": "Lorem ipsum habitant commodo cubilia eget blandit",
"desc": "Suscipit augue dictum ultrices ultricies aliquam mattis nostra taciti sagittis",
"code": "Sb6y1oTwd",
"source": "http://URL",
"download": true,
"link": "http://URL",
"duration": null,
"post_status": "pending",
"category": "drama",
"upload": {
"send": true,
"id": "118840448",
"url": "http://URL",
"status": "finished"
}
},
{
"id": "2930499",
"titulo": "Lorem ipsum habitant commodo cubilia eget blandit",
"desc": "Suscipit augue dictum ultrices ultricies aliquam mattis nostra taciti sagittis",
"code": "R2G0GhTwF",
"source": "http://URL",
"download": true,
"link": "http://URL",
"duration": null,
"post_status": "pending",
"category": "acao",
"upload": {
"send": true,
"id": "118840554",
"url": "http://URL",
"status": "finished"
}
}
]
}
},
{
"atriz": "Jennifer Lawrence",
"atriz-id": "207",
"atriz-slug": "jennifer-lawrence",
"carreira": {
"id": "263",
"inicio": "02/01/88",
"final": "09/08/18",
"videos": [
{
"id": "2930443",
"titulo": "Lorem ipsum habitant commodo cubilia eget blandit",
"desc": "Suscipit augue dictum ultrices ultricies aliquam mattis nostra taciti sagittis",
"code": "DNJNYHWh",
"source": "http://URL",
"download": true,
"link": "http://URL",
"duration": null,
"post_status": "pending",
"category": "drama",
"upload": {
"send": true,
"id": "118844113",
"url": "http://URL",
"status": "finished"
}
},
{
"id": "2930442",
"titulo": "Lorem ipsum habitant commodo cubilia eget blandit",
"desc": "Suscipit augue dictum ultrices ultricies aliquam mattis nostra taciti sagittis",
"code": "OqieXJHwh",
"source": "http://URL",
"download": true,
"link": "http://URL",
"duration": null,
"post_status": "pending",
"category": "comedia",
"upload": {
"send": true,
"id": "118844112",
"url": "http://URL",
"status": "finished"
}
}
]
}
}
]
}
I do not understand much SQL, so I used an online converter and it returned this:
SQL:
CREATE TABLE IF NOT EXISTS infos (
`info_atriz` VARCHAR(17) CHARACTER SET utf8,
`info_atriz_id` INT,
`info_atriz_slug` VARCHAR(17) CHARACTER SET utf8,
`info_carreira_id` INT,
`info_carreira_inicio` DATETIME,
`info_carreira_final` DATETIME,
`info_carreira_videos_id` INT,
`info_carreira_videos_titulo` VARCHAR(49) CHARACTER SET utf8,
`info_carreira_videos_desc` VARCHAR(78) CHARACTER SET utf8,
`info_carreira_videos_code` VARCHAR(9) CHARACTER SET utf8,
`info_carreira_videos_source` VARCHAR(10) CHARACTER SET utf8,
`info_carreira_videos_download` VARCHAR(4) CHARACTER SET utf8,
`info_carreira_videos_link` VARCHAR(10) CHARACTER SET utf8,
`info_carreira_videos_duration` INT,
`info_carreira_videos_post_status` VARCHAR(7) CHARACTER SET utf8,
`info_carreira_videos_category` VARCHAR(7) CHARACTER SET utf8,
`info_carreira_videos_upload_send` VARCHAR(4) CHARACTER SET utf8,
`info_carreira_videos_upload_id` INT,
`info_carreira_videos_upload_url` VARCHAR(10) CHARACTER SET utf8,
`info_carreira_videos_upload_status` VARCHAR(8) CHARACTER SET utf8
);
INSERT INTO infos VALUES
('Sandra Bullock',162,'sandra-bullock',264,'1991-10-08 00:00:00','2018-08-16 00:00:00',2930500,'Lorem ipsum habitant commodo cubilia eget blandit','Suscipit augue dictum ultrices ultricies aliquam mattis nostra taciti sagittis','Sb6y1oTwd','http://URL','True','http://URL',NULL,'pending','drama','True',118840448,'http://URL','finished'),
('Sandra Bullock',162,'sandra-bullock',264,'1991-10-08 00:00:00','2018-08-16 00:00:00',2930499,'Lorem ipsum habitant commodo cubilia eget blandit','Suscipit augue dictum ultrices ultricies aliquam mattis nostra taciti sagittis','R2G0GhTwF','http://URL','True','http://URL',NULL,'pending','acao','True',118840554,'http://URL','finished'),
('Jennifer Lawrence',207,'jennifer-lawrence',263,'1988-01-02 00:00:00','2018-08-09 00:00:00',2930443,'Lorem ipsum habitant commodo cubilia eget blandit','Suscipit augue dictum ultrices ultricies aliquam mattis nostra taciti sagittis','DNJNYHWh','http://URL','True','http://URL',NULL,'pending','drama','True',118844113,'http://URL','finished'),
('Jennifer Lawrence',207,'jennifer-lawrence',263,'1988-01-02 00:00:00','2018-08-09 00:00:00',2930442,'Lorem ipsum habitant commodo cubilia eget blandit','Suscipit augue dictum ultrices ultricies aliquam mattis nostra taciti sagittis','OqieXJHwh','http://URL','True','http://URL',NULL,'pending','comedia','True',118844112,'http://URL','finished');
Is this really the best possible structure for the database? Is there no way to optimize the repetition of values such as ('Sandra Bullock', 162, 'sandra-bullock', 264, '1991-10-08 00:00:00', '2018-08-16 00:00:00')?
The answer depends on the goals you are trying to achieve, and on how you will use this data later.
The structure is not very good: you will have several records for a single JSON item (possibly a huge number, depending on the JSON), and retrieving data that way is not convenient.
It is preferable to use the relational model:
Generally, each table/relation represents one "entity type" (such as customer or product). The rows represent instances of that type of entity (such as "Lee" or "chair") and the columns represent values attributed to that instance (such as address or price).
So, in your case, I would use more than 1 table.
-- videos with columns {id, titulo, desc, etc.}
-- carreira, with the id of the related "atriz" record as a separate column, plus the other columns {id, inicio, final}
-- carreira_videos - a separate table which stores carreira-video relations. The table may contain only two columns: carreira_id and video_id. For example, the "Sandra Bullock" item in your example JSON has a carreira with two videos, so the carreira_videos table will have two records:
carreira_id | video_id
264 | 2930500
264 | 2930499
-- atriz with columns {atriz, atriz-id, atriz-slug} and a link to the related carreira record (see the sketch below).
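A rough sketch of that schema in SQL (table names and column types are guesses, and the atriz-carreira link is kept only on the carreira table to avoid a circular reference):
CREATE TABLE atriz (
  id INT PRIMARY KEY,          -- "atriz-id"
  nome VARCHAR(100),           -- "atriz"
  slug VARCHAR(100)            -- "atriz-slug"
);
CREATE TABLE carreira (
  id INT PRIMARY KEY,
  atriz_id INT,                -- references atriz(id)
  inicio DATE,
  final DATE
);
CREATE TABLE video (
  id INT PRIMARY KEY,
  titulo VARCHAR(255),
  descricao TEXT,
  code VARCHAR(20),
  source VARCHAR(255),
  download BOOLEAN,
  link VARCHAR(255),
  duration INT,
  post_status VARCHAR(20),
  category VARCHAR(50),
  upload_send BOOLEAN,
  upload_id INT,
  upload_url VARCHAR(255),
  upload_status VARCHAR(20)
);
CREATE TABLE carreira_video (
  carreira_id INT,             -- references carreira(id)
  video_id INT,                -- references video(id)
  PRIMARY KEY (carreira_id, video_id)
);
With this layout, 'Sandra Bullock' and her carreira dates are stored once, and each video row is stored once, no matter how many videos the JSON contains.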
Actually, you're right about the amount of data repetition in your example. If your JSON has more than just two items in the "info" node, there are even more duplicates. If you use the relational model (as is preferred in SQL-based databases) and an optimal schema such as the one described above, you will avoid the repetition.
You need to study the basics of relational databases and SQL before you go further. Without understanding them, it is not realistic to create an optimized design.
https://www.w3schools.com/sql/
https://searchdatamanagement.techtarget.com/definition/relational-database
Actually, if you want to store the JSON itself, you may not need an SQL-based database at all. You should look at NoSQL approaches and databases: https://en.wikipedia.org/wiki/NoSQL
https://www.quackit.com/json/tutorial/list_of_json_databases.cfm
All depends on your goals.

Insert comments in Jira with Talend

I would like some advice about how to insert a comment into a Jira issue via Talend Open Studio.
Here is my job:
So, I am trying to insert the comment via Talend.
I use a tHttpRequest component set up like this:
The URI is my connection string to the Jira account.
As it's a POST method, my header is Content-Type | application/json.
My POST parameters are in a JSON file:
{
"fields": {
"project": {
"key": "TRL"
},
"summary": "A",
"description": "B",
"issuetype": {
"name": "Task"
},
"labels": ["Webapp"],
"reporter": {
"name": "x.x"
},
"assignee": {
"name": "x.x"
}
},
"body": "TEST1",
"visibility": {
"type": "role",
"value": "Administrators"
}}
When I launch this job, the following error appears:
It is as if the response body file were NULL, or maybe this is not the right way to insert the comment.
To clarify: with Insomnia, inserting the comment works.
I also tried the tRest component, but I did not manage to link it with tFileInputDelimited or tJIRAOutput.
Before continuing my work, I want to know whether I am going in the right direction. Any clues?
Thanks in advance,
Ale
I'd recommend using the tRest or tRestClient components. You can just send your JSON as "HTTP body" with these components.
On the JIRA side, you can get the necessary information here: https://developer.atlassian.com/jiradev/jira-apis
Assuming you're working with the on-premise JIRA, you'd use something like this:
URL: hostname + /rest/api/2/issue/{issueIdOrKey}/comment
HTTP Body:
{
"body": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Pellentesque eget venenatis elit. Duis eu justo eget augue iaculis fermentum. Sed semper quam laoreet nisi egestas at posuere augue semper.",
"visibility": {
"type": "role",
"value": "Administrators"
}
}
Don't forget about authentication.
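To sanity-check the endpoint outside Talend, a quick test could look like this (a sketch with a hypothetical host, issue key and credentials):
curl -u user:password \
     -X POST \
     -H "Content-Type: application/json" \
     -d '{"body": "TEST1", "visibility": {"type": "role", "value": "Administrators"}}' \
     "https://jira.example.com/rest/api/2/issue/TRL-1/comment"
If that succeeds, the same URL and body should work as the "HTTP body" of tRest/tRestClient.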

Fiware POI add_poi web service fw_media structure

I created JSON for adding a new POI with fw_media data included, based on the JSON from example_components/fw_media.json:
{"fw_core":
{"location":{"wgs84":{"latitude":1,"longitude":1}},
"categories":["Field"],
"name":{"":"poljana 1"}},
"fw_media": {
"entities": [
{
"type": "photo",
"short_label": {
"en": "Sunset at sea"
},
"caption": {
"en": "Sunset on the Bothnian Bay, Northwest from Hailuoto summer 2013"
},
"description": {
"": "Lorem ipsum dolor sit amet, consectetur adipisci elit, sed eiusmod tempor incidunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquid ex ea commodi consequat.",
"fi": "Oli mukava retki."
},
"thumbnail": "http://www.example.org/sunset_on_sea_tbn.jpg",
"url": "http://www.example.org/sunset_on_sea.jpg",
"copyright": "Photo: Ari Okkonen"
},
{
"type": "audio",
"short_label": {
"": "Säkkijärven polkka"
},
"url": "http://www.example.org/sakkijarven_polkka.mp3"
}
],
"last_update": {
"timestamp": 1390203898
}, "source": {
"website": "http://www.cie.fi",
"name": "CIE, University of Oulu",
"id": "7c32c67d-cf00-4d11-9acc-2471141e03a3",
"license": "http://www.gnu.org/licenses/gpl.html"
}
}}
But I'm getting an error:
JSON does not validate. Violations:
[fw_media] The property - source - is not defined and the definition
does not allow additional properties
POI data validation failed!
Can you give me an example of working fw_media JSON?
Also, is it possible to upload an image with the POI in fw_media? (Not just a URL to the image, but the whole image.)
Corrected in the master branch on GitHub. Thank you for your report.
It seems that an old example somewhere still has the fw_image.source field, which is not implemented. Just edit the source structure out of the data and it should be OK. Of course, it makes no sense to have one source description common to all images, etc., for a single POI.
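If the POI JSON is in a file called, say, poi.json (a hypothetical name), jq can strip the offending block in one step:
jq 'del(.fw_media.source)' poi.json > poi_fixed.json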
A source field might be useful on a per-item basis, so I'll introduce it in the next release.
Please could you tell me where that erroneous "Sunset at sea" example is, so I can go and correct it. The correction is tracked in https://github.com/Chiru/FIWARE-POIDataProvider/issues/7
I feel that uploading pictures is a kind of Specific Enabler functionality, so it is left to the community to implement. I suggest using a media repository in combination with a standard POI-DP via specialized client software.