I'm trying to delete some items in DynamoDB where a particular attribute is currently not null. I'm not sure how many items there will be, so first I use a scan with a filter attribute to get those items, and then I'll feed that result set to delete-item.
So far I've been able to perform the first step, but I can't figure out how nested queries might work in DynamoDB (using the AWS CLI).
filter.json
{
"demo": {
"ComparisonOperator": "NOT_NULL"
}
}
First Query:
aws dynamodb scan --table-name test --scan-filter file://D:\filter.json
Now I need to find a way to feed the result set of the above query into delete-item
UPDATE 1
Output of the scan query:
{
"Count": 2,
"Items": [
{
"demo": {
"S": "Hai"
},
"id": {
"S": "123"
}
},
{
"demo": {
"S": "Welcome"
},
"id": {
"S": "124"
}
}
],
"ScannedCount": 3643,
"ConsumedCapacity": null
}
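One way to chain the two steps is to extract just the key attribute from the scan and loop over the results in a shell. A minimal sketch, assuming id is the table's only key attribute, a POSIX shell, and jq being available:
aws dynamodb scan --table-name test \
    --scan-filter file://filter.json \
    --query "Items[].id" --output json \
| jq -c '.[]' \
| while read -r key; do
    # each line is one key value, e.g. {"S":"123"}
    aws dynamodb delete-item --table-name test --key "{\"id\": $key}"
  done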
I am trying to delete all documents in my collection infrastructure that have a type.primary property of "pipelines" and a type.secondary property of "oil."
I'm trying to use the following query:
db.infrastructure.deleteMany({$and: [{"properties.type.primary": "pipelines"}, {"properties.type.secondary": "oil"}] }),
That returns: { acknowledged: true, deletedCount: 0 }
I expect my query to work because in MongoDB Compass, I can retrieve 182 documents that match the query {$and: [{"properties.type.primary": "pipelines"}, {"properties.type.secondary": "oil"}] }
My documents appear with the following structure (relevant section only):
properties": {
"optional": {
"description": ""
},
"original": {
"Opername": "ENBRIDGE",
"Pipename": "Lakehead",
"Shape_Leng": 604328.294581,
"Source": "EIA"
},
"required": {
"unit": null,
"viz_dim": null,
"years": []
},
"type": {
"primary": "pipelines",
"secondary": "oil"
}
...
My understanding is that I just need to pass a filter to deleteMany() and that $and expects an array of objects. For some reason the two combined isn't working here.
I realized the simplest answer was the correct one: I had spelled my database name incorrectly.
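For anyone else seeing deletedCount: 0 on a query that matches in Compass, it's worth confirming which database the shell is actually using before suspecting the filter. A quick check, assuming a default local mongosh connection:
mongosh --quiet --eval "db.getName()"                          # database the shell is pointed at
mongosh --quiet --eval "db.adminCommand({ listDatabases: 1 })" # every database name on the server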
Is there a way to do a deep copy using Apache VTL?
I'm trying to use x-amazon-apigateway-integration with requestTemplates.
The input JSON is as shown below:
{
"userid": "21d6523137f6",
"time": "2020-06-16T15:22:33Z",
"item": {
"UserID" : { "S": "21d6523137f6" },
<... some complex json nodes here ...>,
"TimeUTC" : { "S": "2020-06-16T15:22:33Z" },
}
}
The requestTemplate is as shown below:
requestTemplates:
application/json: !Sub
- |
#set($inputRoot = $input.path('$'))
{
"TableName": "${tableName}",
"ConditionExpression": "attribute_not_exists(TimeUTC) OR TimeUTC > :sk",
"ExpressionAttributeValues": {
":sk":{
"S": "$util.escapeJavaScript($input.path('$.time'))"
}
},
"Item": "$input.path('$.item')", <== Copy the entire item over to Item.
"ReturnValues": "ALL_OLD",
"ReturnConsumedCapacity": "INDEXES",
"ReturnItemCollectionMetrics": "SIZE"
}
- {
tableName: !Ref EventsTable
}
The problem is that the item gets copied like this:
"Item": "{UserID={S=21d6523137f6}, Lat={S=37.33180957}, Lng={S=-122.03053391}, ... other json elements..., TimeUTC={S=2020-06-16T15:22:33Z}}",
As you can see, the whole nested JSON becomes a single attribute, while I expected it to become a fully blown JSON node of its own, like below:
"Item": {
"UserID" : { "S": "21d6523137f6" },
"Lat": { "S": "37.33180957" },
"Lng": { "S": "-122.03053391" },
<.... JSON nodes ...>
"TimeUTC" : { "S": "2020-06-20T15:22:33Z" }
},
Is it possible to do a deep/nested copy operation on a JSON node like the above, without the kung-fu of iterating over the node and appending the children to a JSON node variable, etc.?
By the way, I'm using an AWS API Gateway request template, so it may not support all the Apache VTL templating options.
You need to use the $input.json method instead of $input.path. $input.path returns an object for VTL to manipulate, so rendering it inside a string produces the map-style toString() output you saw, whereas $input.json returns the selected sub-tree serialized as a JSON string.
"Item": $input.json('$.item'),
Note that I removed the double quotes.
If you had the double quotes because you want to stringify $.item, you can do that like so:
"Item": "$util.escapeJavaScript($input.json('$.item'))",
I'm trying to get cell values from a JSON-formatted table, but only for specific columns, and have the output be its own object.
JSON example:
{
"rows":[
{
"id":409363222161284,
"rowNumber":1,
"cells":[
{
"columnId":"nameColumn",
"value":"name1"
},
{
"columnId":"infoColumn",
"value":"info1"
},
{
"columnId":"excessColumn",
"value":"excess1"
}
]
},
{
"id":11312541213,
"rowNumber":2,
"cells":[
{
"columnId":"nameColumn",
"value":"name2"
},
{
"columnId":"infoColumn",
"value":"info2"
},
{
"columnId":"excessColumn",
"value":"excess2"
}
]
},
{
"id":11312541213,
"rowNumber":3,
"cells":[
{
"columnId":"nameColumn",
"value":"name3"
},
{
"columnId":"infoColumn",
"value":"info3"
},
{
"columnId":"excessColumn",
"value":"excess3"
}
]
}
]
}
The ideal output would be filtered down to two columns, nameColumn and infoColumn, with each row becoming a single object of the values.
Output example:
{
"name": "name1",
"info": "info1"
}
{
"name": "name2",
"info": "info2"
}
{
"name": "name3",
"info": "info3"
}
I've tried quite a few different combinations of select statements, and this is the closest I've come, but it only selects one of the columns.
jq '.rows[].cells[] | {name: (select(.columnId=="nameColumn") .value), info: "infoHere"}'
{
"name": "name1",
"info": "infoHere"
}
{
"name": "name2",
"info": "infoHere"
}
{
"name": "name3",
"info": "infoHere"
}
If I try to combine another one, it's not so happy.
jq -j '.rows[].cells[] | {name: (select(.columnId=="nameColumn") .value), info: (select(.columnId=="infoColumn") .value)}'
Nothing is output.
Edit
Apologies for being unclear. The final output would ideally be a CSV of the selected columns' values:
name1,info1
name2,info2
Presumably you would want the output to be grouped by row (each cell object has only one columnId, so two selects can never both match the same cell, which is why your combined attempt emits nothing), so let's first consider:
.rows[].cells
| map(select(.columnId=="nameColumn" or .columnId=="infoColumn"))
This produces a stream of JSON arrays, the first of which, using your main example, would be:
[
{
"columnId": "nameColumn",
"value": "name1"
},
{
"columnId": "infoColumn",
"value": "info1"
}
]
If you want the output in some alternative format, then you could tweak the above jq program accordingly.
If you wanted to select a large number of columns, a long "or" expression might become unwieldy, so you might also want to consider using a "whitelist" (see e.g. Whitelisting objects using select, and the sketch below).
Or you might want to use del to delete the unwanted columns.
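For illustration, the whitelist variant could be passed in with --argjson (a sketch; the $cols name and input.json file are placeholders):
jq --argjson cols '["nameColumn","infoColumn"]' '
  .rows[].cells
  | map(select(.columnId as $c | $cols | index($c)))
' input.json
This produces the same stream of arrays as the "or" version above, but the column list can grow without touching the program.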
Producing CSV
One way would be to use @csv with the -r command-line option, e.g. with:
| map(select(.columnId=="nameColumn" or .columnId=="infoColumn")
| {(.columnId): .value} )
| add
| [.nameColumn, .infoColumn]
| @csv
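Put together as a complete invocation (assuming the input lives in input.json):
jq -r '.rows[].cells
  | map(select(.columnId=="nameColumn" or .columnId=="infoColumn")
        | {(.columnId): .value})
  | add
  | [.nameColumn, .infoColumn]
  | @csv' input.json
Note that @csv quotes string fields, producing "name1","info1"; if you want the bare name1,info1 form from your edit, replace @csv with join(",").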
I just started using json-server and I'm struggling with one thing. I want to have nested URLs, so that e.g. a request to get a user's orgs would look like
/rest/user/orgs and would return an array of the user's orgs. My db.json currently looks like this:
{
"rest": {
"user": {
"select": {
"org": []
},
"orgs": [{
"id": "5601e1c0-317c-4af8-9731-a1863f677e85",
"name": "DummyOrg"
}],
"logout": {}
}
}
}
Any idea what I am doing wrong?
This is not supported by the library. The way to get this working is to add a custom routes file to the server, where you map (or redirect) requests made to /rest/user/ to /.
db.json
{
"select": {
"org": []
},
"orgs": [{
"id": "5601e1c0-317c-4af8-9731-a1863f677e85",
"name": "DummyOrg"
}],
"logout": {}
}
routes.json
{
"/rest/user/*": "/$1"
}
and then run it using json-server db.json --routes routes.json
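With the routes in place, a request like this (json-server listens on port 3000 by default) gets rewritten to /orgs:
curl http://localhost:3000/rest/user/orgs
which should return the array from db.json:
[{"id":"5601e1c0-317c-4af8-9731-a1863f677e85","name":"DummyOrg"}]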
I have done a lot of research on Stack Overflow but cannot find any related post.
Assume I have JSON like:
{
"talk": {
"docs": {
"count": 22038185,
"deleted": 626193
},
"store": {
"size_in_bytes": 6885993125,
"throttle_time_in_millis": 1836569
}
},
"list": {
"docs": {
"count": 22038185,
"deleted": 626193
},
"store": {
"size_in_bytes": 6885993125,
"throttle_time_in_millis": 1836569
}
}
}
I want to filter out "store" field in all keys to get an output like
{
"talk": {
"docs": {
"count": 22038185,
"deleted": 626193
}
},
"list": {
"docs": {
"count": 22038185,
"deleted": 626193
}
}
}
How can I achieve it with jq?
Use del and recurse together.
jq 'del(recurse|.store?)' foo.json
You can also use the shorthand .. for recurse with no arguments:
jq 'del(..|.store?)' foo.json
The ? prevents errors when recurse reaches something for which .store is an invalid filter.
If you only want to remove the "store" key when it occurs at the second level, then consider:
map_values( del(.store) )
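As a complete command against the example file:
jq 'map_values(del(.store))' foo.json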
Postscript
Subsequently, the OP asked:
But what if there are many fields to delete? Can we only keep 'docs'?
Answer (in this particular case):
map_values( {docs} )
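Or, as a complete command:
jq 'map_values({docs})' foo.json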