While putting the JSON below into DynamoDB using the AWS CLI with this command:
aws dynamodb put-item --table-name ScreenList --item file://tableName.json
I am getting a "Parameter validation failed" exception. I have gone through the AWS docs rigorously but failed to find an example of inserting a complicated JSON document. Any help is welcome.
The updated JSON:
{
"itemName": {
"S": "SCREEN_LIST"
},
"productName": {
"S": "P2P_MOBITEL"
},
"screenList": {
"L": [
{
"menu": {
"L": [
{
"M": {
"menuId": {
"N": "1"
},
"menuText": {
"S": "ENG_HEADING"
},
"menuType": {
"S": "Dynamic"
}
}
}
]
},
"M": {
"screenFooter": {
"S": "F_LANGUAGE_CHANGE"
},
"screenHeader": {
"S": "H_LANGUAGE_CHANGE"
},
"screenId": {
"S": "LANGUAGE_CHANGE"
},
"screenType": {
"S": ""
}
}
}
]
}
}
It seems that you are defining the complex types incorrectly. According to the AWS documentation, you should define a list like this:
"L": [{"S": "Cookies"}, {"S": "Coffee"}, {"N": "3.14159"}]
and a map like this:
"M": {"Name": {"S": "Joe"}, "Age": {"N": "35"}}
which means that a menu map should be defined like this:
"menu": {
"L": [
{
"M": {
"menuId": {"N" :"1"},
"menuText": {"S" :"PACKS_SCREEN"},
"menuType": {"S" :"Dynamic"}
}
}
]
}
Notice the "M" and "L" attributes.
You should change the rest of your JSON in a similar fashion.
You can find full JSON definition here in the Options section.
UPDATE
Now your list definition is incorrect. You have:
"screenList":{
"L":[
{
"menu":{ ... },
"M":{ ... }
}
]
}
While it should be:
"screenList":{
"L":[
{
"M":{ ... }
},
{
"M":{ ... }
}
]
}
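Putting both fixes together, the whole corrected item would look roughly like this (a sketch: each screenList element becomes a single "M" map, with menu moved inside that map; note that DynamoDB historically rejected empty string values, so the empty screenType may also need to be dropped or populated):
{
  "itemName": {"S": "SCREEN_LIST"},
  "productName": {"S": "P2P_MOBITEL"},
  "screenList": {
    "L": [
      {
        "M": {
          "menu": {
            "L": [
              {
                "M": {
                  "menuId": {"N": "1"},
                  "menuText": {"S": "ENG_HEADING"},
                  "menuType": {"S": "Dynamic"}
                }
              }
            ]
          },
          "screenFooter": {"S": "F_LANGUAGE_CHANGE"},
          "screenHeader": {"S": "H_LANGUAGE_CHANGE"},
          "screenId": {"S": "LANGUAGE_CHANGE"},
          "screenType": {"S": ""}
        }
      }
    ]
  }
}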
For example, I have a JSON file whose numeric keys are in a messed-up order:
{
"data": {
"31": {
...
},
"52": {
...
},
"1": {
...
}
}
}
I want to sort it by number so the JSON data is no longer messed up:
{
"data": {
"52": {
...
},
"31": {
...
},
"52": {
...
}
}
}
I tried code that does:
with open("file.json", "r", encoding="utf-8") as f:
    file = json.load(f)
file["data"].update(
    {num: {"question": question, "answer": answer, "options": options}}
)
The error I get: TypeError: cannot convert dictionary update sequence element #0 to a sequence
Option #1
Use sorted with key argument:
import json
my_dict = {
"data": {
"31": {
"k": "..."
},
"52": {
"k": "..."
},
"1": {
"k": "..."
}
}
}
my_dict['data'] = dict(sorted(my_dict['data'].items(), key=lambda t: int(t[0])))
print(my_dict['data'])
Result:
{'1': {'k': '...'}, '31': {'k': '...'}, '52': {'k': '...'}}
Option #2
Use json.dumps with sort_keys argument:
print(json.dumps(my_dict, indent=2, sort_keys=True))
Result:
{
"data": {
"1": {
"k": "..."
},
"31": {
"k": "..."
},
"52": {
"k": "..."
}
}
}
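A caveat on Option #2: sort_keys compares keys as strings, so it only matches numeric order here by coincidence ("100" would sort before "2"); Option #1's int(t[0]) key is the safe choice. The TypeError from the question, incidentally, is raised when dict.update() is passed a sequence whose elements are not (key, value) pairs, and is unrelated to sorting. To write the numerically sorted result back to the file, a minimal sketch (Python dicts preserve insertion order since 3.7, so the order survives json.dump):
import json

# Load, sort the "data" keys numerically, and rewrite the file in place.
with open("file.json", "r", encoding="utf-8") as f:
    data = json.load(f)

data["data"] = dict(sorted(data["data"].items(), key=lambda t: int(t[0])))

with open("file.json", "w", encoding="utf-8") as f:
    json.dump(data, f, indent=2, ensure_ascii=False)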
I'm trying to get my mocked JSON data via GraphQL in Gatsby. The response contains the correct data, but also two null objects. Why is this happening?
I'm using the gatsby-transformer-json plugin to query my data and gatsby-source-filesystem to point the transformer at my JSON files.
categories.json
the mock file I'm trying to get to work :)
{
"categories": [
{
"title": "DEZERTY",
"path": "/dezerty",
"categoryItems": [
{
"categoryName": "CUKRIKY",
"image": "../../../../static/img/dessertcategories/cukriky.jpg"
},
{
"categoryName": "NAHODNE",
"image": "../../../../static/img/dessertcategories/nahodne.jpg"
},
]
},
{
"title": "CANDY BAR",
"path": "/candy-bar",
"categoryItems": [
{
"categoryName": "CHEESECAKY",
"image": "../../../../static/img/dessertcategories/cheesecaky.jpg"
},
{
"categoryName": "BEZLEPKOVÉ TORTY",
"image": "../../../../static/img/dessertcategories/bezlepkove-torty.jpg"
},
]
}
]
}
GraphQL query in GraphiQL
query Collections {
allMockJson {
edges {
node {
categories {
categoryItems {
categoryName
image
}
title
path
}
}
}
}
}
And the response GraphiQL gives me
{
"data": {
"allMockJson": {
"edges": [
{
"node": {
"categories": null
}
},
{
"node": {
"categories": null
}
},
{
"node": {
"categories": [
{
"categoryItems": [
{
"categoryName": "CHEESECAKY",
"image": "../../../../static/img/dessertcategories/cheesecaky.jpg"
},
{
"categoryName": "BEZLEPKOVÉ TORTY",
"image": "../../../../static/img/dessertcategories/bezlepkove-torty.jpg"
}
],
"title": "DEZERTY",
"path": "/dezerty"
},
{
"categoryItems": [
{
"categoryName": "CUKRIKY",
"image": "../../../../static/img/dessertcategories/CUKRIKY.jpg"
},
{
"categoryName": "NAHODNE",
"image": "../../../../static/img/dessertcategories/NAHODNE.jpg"
}
],
"title": "CANDY BAR",
"path": "/candy-bar"
}
]
}
}
]
}
}
}
I expected only to get the DEZERTY and CANDY BAR sections. Why are there null categories and how do I fix it?
Thanks in advance
Your JSON contains syntax errors in the DEZERTY and CANDY BAR objects; Gatsby fails silently without telling you. Run the file through a JSON linter and you will see:
Error: Parse error on line 12: },
Error: Parse error on line 25: },
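Those two parse errors point at the trailing commas after the last categoryItems entries; JSON does not allow a comma after the final element of an array. The fixed categories.json, for reference:
{
  "categories": [
    {
      "title": "DEZERTY",
      "path": "/dezerty",
      "categoryItems": [
        {
          "categoryName": "CUKRIKY",
          "image": "../../../../static/img/dessertcategories/cukriky.jpg"
        },
        {
          "categoryName": "NAHODNE",
          "image": "../../../../static/img/dessertcategories/nahodne.jpg"
        }
      ]
    },
    {
      "title": "CANDY BAR",
      "path": "/candy-bar",
      "categoryItems": [
        {
          "categoryName": "CHEESECAKY",
          "image": "../../../../static/img/dessertcategories/cheesecaky.jpg"
        },
        {
          "categoryName": "BEZLEPKOVÉ TORTY",
          "image": "../../../../static/img/dessertcategories/bezlepkove-torty.jpg"
        }
      ]
    }
  ]
}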
Try again. Your query should work now.
You should look into an IDE that highlights these types of errors and saves you time and frustration.
I want to read "Items" from the AWS CLI response and write a for loop over "Items", but I am not able to do it in a shell script.
The output of the AWS CLI command is in JSON format.
Shell code:
[ ~]$ response_scan=`aws dynamodb scan --table-name recovery-plan --max-items 10 --attributes-to-get '["job_id", "job_type", "launch_category"]'`
[ ~]$ echo $response_scan
{
"Count": 166,
"Items": [
{ "launch_category": { "S": "TOT" }, "job_type": { "S": "TEST" }, "job_id": { "S": "39504214122e" } },
{ "job_type": { "S": "TEST" }, "job_id": { "S": "8c48-914d0aa2a186" } },
{ "job_type": { "S": "TEST" }, "job_id": { "S": "cbd07892491d" } },
{ "job_type": { "S": "TEST1" }, "job_id": { "S": "7afef48b0283" } },
{ "job_type": { "S": "TEST" }, "job_id": { "S": "7d678fab68e1" } }
],
"NextToken": "eyJFasasaseGNsdXasasaslX2Ftb3VudCasasI6IDEwfQ==",
"ScannedCount": 166,
"ConsumedCapacity": null
}
Can anyone help me iterate over response_scan["Items"]?
What I am doing exactly:
I want to add a field, launch_category, to each item/row that does not have it.
The value of launch_category is TOT for job_type TEST and TOT1 for TEST1.
response_scan=$(aws dynamodb scan --table-name recovery-plan --max-items 10 --attributes-to-get '["job_id", "job_type", "launch_category"]' --query Items[].job_type.S --output text )
Or you can use jq to parse json - https://stedolan.github.io/jq/
The above answer still works, but updating here as per the AWS documentation: --projection-expression is now recommended instead of --attributes-to-get.
--attributes-to-get (list)
This is a legacy parameter. Use ProjectionExpression instead. For more information, see AttributesToGet in the Amazon DynamoDB Developer Guide .
response_scan=$(aws dynamodb scan \
    --table-name recovery-plan \
    --max-items 10 \
    --projection-expression "job_id, job_type, launch_category" \
    --query 'Items[].job_type.S' --output text \
)
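To go beyond extracting job_type and actually patch the items that are missing launch_category, one approach is to let jq select those items and emit the key fields, then call update-item per row. A sketch, assuming job_id is the table's partition key (pagination via NextToken is omitted for brevity):
aws dynamodb scan --table-name recovery-plan --output json \
    --projection-expression "job_id, job_type, launch_category" |
jq -r '.Items[] | select(.launch_category == null) | [.job_id.S, .job_type.S] | @tsv' |
while IFS=$'\t' read -r job_id job_type; do
    # TOT for TEST, TOT1 for TEST1
    if [ "$job_type" = "TEST1" ]; then value="TOT1"; else value="TOT"; fi
    aws dynamodb update-item --table-name recovery-plan \
        --key "{\"job_id\": {\"S\": \"$job_id\"}}" \
        --update-expression "SET launch_category = :v" \
        --expression-attribute-values "{\":v\": {\"S\": \"$value\"}}"
done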
I have a MongoDB collection that is structured as below:
[
{
"subject_id": "1",
"name": "Maria",
"dob": "1/1/00",
"gender": "F",
"visits": {
"1/1/18": {
"date_entered": "1/2/18",
"entered_by": "Sally"
},
"1/2/18": {
"date_entered": "1/2/18",
"entered_by": "Tim",
}
},
"samples": {
"XXX123": {
"collected_by": "Sally",
"collection_date": "1/3/18"
}
}
},
{
"subject_id": "2",
"name": "Bob",
"dob": "1/2/00",
"gender": "M",
"visits": {
"1/3/18": {
"date_entered": "1/4/18",
"entered_by": "Tim"
}
},
"samples": {
"YYY456": {
"collected_by": "Sally",
"collection_date": "1/5/18"
},
"ZZZ789": {
"collected_by": "Tim",
"collection_date": "1/6/18"
},
"AAA123": {
"collected_by": "Sally",
"collection_date": "1/7/18"
}
}
}
]
If I wanted to query the database to find all samples collected by Sally or all visits entered by Tim, what would be the best way of doing that?
I'm new to MongoDB and my attempts with various regexes haven't produced results. Any advice would be greatly appreciated.
I first used $project on the required fields, applying $objectToArray to turn the visits and samples maps into arrays, then $unwind to create a separate record per element of the arrays created in the $project stage.
The results are then filtered using $match.
This works for the data provided in the question:
db.so.aggregate([
{$project: {visits: {$objectToArray: "$visits"}, samples: {$objectToArray: "$samples"}}},
{$unwind: "$visits"},
{$unwind: "$samples"},
{ $match: {
$or : [
{ "visits.v.entered_by" : "Tim" },
{ "samples.v.collected_by" : "Sally" }
]
}
}
])
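The same pipeline driven from Python with pymongo, for reference (a sketch; the connection string and the collection name so are assumptions carried over from the answer). Note that the two $unwind stages produce the cross product of visits and samples, so a matching subject can appear several times; a trailing $group on _id would deduplicate if needed.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
coll = client["test"]["so"]

pipeline = [
    # Turn the visits/samples maps into arrays of {k: <key>, v: <value>} pairs.
    {"$project": {
        "visits": {"$objectToArray": "$visits"},
        "samples": {"$objectToArray": "$samples"},
    }},
    # One document per visit/sample combination (cross product).
    {"$unwind": "$visits"},
    {"$unwind": "$samples"},
    # Keep rows where Tim entered the visit or Sally collected the sample.
    {"$match": {"$or": [
        {"visits.v.entered_by": "Tim"},
        {"samples.v.collected_by": "Sally"},
    ]}},
]

for doc in coll.aggregate(pipeline):
    print(doc)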
I am trying to test my Lambda manually with the following DynamoDB event input configured in the test console.
Let's call this Json-1:
{
"Records": [
{
"eventID": "1",
"eventVersion": "1.0",
"dynamodb": {
"Keys": {
"Id": {
"N": "101"
}
},
"NewImage": {
"Message": {
"S": "New item!"
},
"Id": {
"N": "101"
}
},
"StreamViewType": "NEW_AND_OLD_IMAGES",
"SequenceNumber": "111",
"SizeBytes": 26
},
"awsRegion": "us-west-2",
"eventName": "INSERT",
"eventSourceARN": eventsourcearn,
"eventSource": "aws:dynamodb"
},
{
"eventID": "2",
"eventVersion": "1.0",
"dynamodb": {
"OldImage": {
"Message": {
"S": "New item!"
},
"Id": {
"N": "101"
}
},
"SequenceNumber": "222",
"Keys": {
"Id": {
"N": "101"
}
},
"SizeBytes": 59,
"NewImage": {
"Message": {
"S": "This item has changed"
},
"Id": {
"N": "101"
}
},
"StreamViewType": "NEW_AND_OLD_IMAGES"
},
"awsRegion": "us-west-2",
"eventName": "MODIFY",
"eventSourceARN": sourcearn,
"eventSource": "aws:dynamodb"
},
{
"eventID": "3",
"eventVersion": "1.0",
"dynamodb": {
"Keys": {
"Id": {
"N": "101"
}
},
"SizeBytes": 38,
"SequenceNumber": "333",
"OldImage": {
"Message": {
"S": "This item has changed"
},
"Id": {
"N": "101"
}
},
"StreamViewType": "NEW_AND_OLD_IMAGES"
},
"awsRegion": "us-west-2",
"eventName": "REMOVE",
"eventSourceARN": sourcearn,
"eventSource": "aws:dynamodb"
}
]
}
However, the JSON of my DynamoDB items looks like this.
Let's call this Json-2:
{
"id": {
"S": "RIGHT-aa465568-f4c8-4822-9c38-7563ae0cd37b-1131286033464633.jpg"
},
"lines": {
"L": [
{
"M": {
"points": {
"L": [
{
"L": [
{
"N": "0"
},
{
"N": "874.5625"
}
]
},
{
"L": [
{
"N": "1765.320601851852"
},
{
"N": "809.7800925925926"
}
]
},
{
"L": [
{
"N": "3264"
},
{
"N": "740.3703703703704"
}
]
}
]
},
"type": {
"S": "guard"
}
}
}
]
},
"modified": {
"N": "1483483932472"
},
"qastatus": {
"S": "reviewed"
}
}
Using the Lambda function below, I can connect to my table. My goal is to create a JSON document that Elasticsearch will accept.
#Override
public Object handleRequest(DynamodbEvent dynamodbEvent, Context context) {
List<DynamodbEvent.DynamodbStreamRecord> dynamodbStreamRecordlist = dynamodbEvent.getRecords();
DynamoDB dynamoDB = new DynamoDB(new AmazonDynamoDBClient());
log.info("Whole event - "+dynamodbEvent.toString());
dynamodbStreamRecordlist.stream().forEach(dynamodbStreamRecord -> {
if(dynamodbStreamRecord.getEventSource().equalsIgnoreCase("aws:dynamodb")){
log.info("one record - "+dynamodbStreamRecord.getDynamodb().toString());
log.info(" getting N from new image "+dynamodbStreamRecord.getDynamodb().getNewImage().toString());
String tableName = getTableNameFromARN(dynamodbStreamRecord.getEventSourceARN());
log.info("Table name :"+tableName);
Map<String, AttributeValue> keys = dynamodbStreamRecord.getDynamodb().getKeys();
log.info(keys.toString());
AttributeValue attributeValue = keys.get("Id");
log.info("Value of N: "+attributeValue.getN());
Table table = dynamoDB.getTable(tableName);
}
});
return dynamodbEvent;
}
The format of a JSON item that Elasticsearch expects, and what I want to map the test input JSON to, is this.
Let's call this Json-3:
{
_index: "bar-guard",
_type: "bar-guard_type",
_id: "LEFT-b1939610-442f-4d8d-9991-3ca54685b206-1147042497459511.jpg",
_score: 1,
_source: {
#SequenceNumber: "4901800000000019495704485",
#timestamp: "2017-01-04T02:24:20.560358",
lines: [{
points: [[0,
1222.7129629629628],
[2242.8252314814818,
1254.702546296296],
[4000.0000000000005,
1276.028935185185]],
type: "barr"
}],
modified: 1483483934697,
qastatus: "reviewed",
id: "LEFT-b1939610-442f-4d8d-9991-3ca54685b206-1147042497459511.jpg"
}
},
So what I need is to read Json-1 and map it to Json-3.
However, Json-1 does not seem to be complete, i.e. it does not have the information that a DynamoDB JSON has, like the points and lines in Json-2.
And so, I was trying to get a connection to the original table and then read this additional lines and points information by using the ID.
I am not sure if this is the right approach. Basically, I want to figure out a way to get the actual JSON that DynamoDB stores, not the one annotated with attribute types.
How can I get lines and points from Json-2 using Java? I know we have DocumentClient in JavaScript, but I am looking for something in Java.
Also, I came across a converter, but it doesn't help me: https://github.com/aws/aws-sdk-js/blob/master/lib/dynamodb/converter.js
Is this something I should use DynamoDBMapper or the Scan Java Document API for?
http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/dynamodbv2/datamodeling/DynamoDBMapper.html#marshallIntoObjects-java.lang.Class-java.util.List-com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBMapperConfig-
If yes, I am a little lost as to how to do that in the code below:
ScanRequest scanRequest = new ScanRequest().withTableName(tableName);
ScanResult result = dynamoDBClient.scan(scanRequest);
for(Map<String, AttributeValue> item : result.getItems()){
AttributeValue value = item.get("lines");
if(value != null){
List<AttributeValue> values = value.getL();
for(AttributeValue value2 : values){
//what next?
}
}
}
OK, this seems to work for me.
ScanRequest scanRequest = new ScanRequest().withTableName(tableName);
ScanResult result = dynamoDBClient.scan(scanRequest);
for (Map<String, AttributeValue> item : result.getItems()) {
    AttributeValue value = item.get("lines");          // "lines" is an "L" (list) attribute
    if (value != null) {
        List<AttributeValue> values = value.getL();    // unwrap the list
        for (AttributeValue value2 : values) {
            if (value2.getM() != null) {               // each element is an "M" (map)
                Map<String, AttributeValue> map = value2.getM();
                AttributeValue points = map.get("points");
                List<AttributeValue> pointsvalues = points.getL();  // list of point pairs
                for (AttributeValue valueOfPoint : pointsvalues) {
                    List<AttributeValue> pointList = valueOfPoint.getL();  // one [x, y] pair
                    for (AttributeValue valueOfPoint2 : pointList) {
                        // valueOfPoint2.getN() holds a single numeric coordinate
                    }
                }
            }
        }
    }
}
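If the goal is the plain JSON rather than manual traversal, the v1 AWS SDK for Java can convert a typed attribute-value map into a document-API Item directly, which is the Java counterpart of the JavaScript converter linked above. A sketch (assumes the aws-java-sdk-dynamodb dependency; result is the ScanResult from the snippet above):
import com.amazonaws.services.dynamodbv2.document.Item;
import com.amazonaws.services.dynamodbv2.document.ItemUtils;
import com.amazonaws.services.dynamodbv2.model.AttributeValue;
import java.util.Map;

for (Map<String, AttributeValue> attributeMap : result.getItems()) {
    // Strip the {"S": ...}/{"N": ...}/{"L": ...} type descriptors.
    Item item = ItemUtils.toItem(attributeMap);
    // Plain JSON, ready to index into Elasticsearch.
    String json = item.toJSON();
    System.out.println(json);
}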