While running db.runCommand, the dollar sign is not recognized, and $answer is treated as a literal string rather than being resolved to the field's value.
[
{
"update": "userfeedback",
"updates": [
{
"q": {
"userId": 8426,
"questionIdentifier": "resumeLink"
},
"u": {
"$set": {
"answer": [
{
"resumeLink": "$answer",
"resumeId": "$UUID().hex().match(/^(.{8})(.{4})(.{4})(.{4})(.{12})$/).slice(1,6).join('-')",
"uploadSizeInByte": -1,
"source":"manual",
"dateUploaded": "$updatedAt"
}
]
}
}
}
]
}
]
Output: the dollar sign is not recognized; the strings are stored literally.
[
{
"resumeLink" : "$answer",
"resumeId" : "$UUID().hex().match(/^(.{8})(.{4})(.{4})(.{4})(.{12})$/).slice(1,6).join('-')",
"uploadSizeInByte" : -1,
"source" : "manual",
"dateUploaded" : "$updatedAt"
}
]
A similar query works when using updateMany.
Update query:
db.getCollection('userfeedback').updateMany(
{userId:8426, questionIdentifier:"resumeLink"},
[{
"$set": {
answer: [{
"resumeLink": "$answer",
"resumeId": UUID().hex().match(/^(.{8})(.{4})(.{4})(.{4})(.{12})$/).slice(1,6).join('-'),
"uploadSizeInByte": -1,
"source":"manual",
"dateUploaded": "$updatedAt"
}]
}
}]
)
Result:
[
{
"resumeLink": "https://cdn.upgrad.com/resume/asasjyotiranjana11.docx",
"dateUploaded": "2051-04-26T14:30:00.000Z",
"uploadSizeInByte": 644234,
"resumeId": "7fa1478d-478f-4869-9c4b-7ca8c0b9434g",
"source": "hiration"
}
]
Can someone help me get the same result with runCommand? Thanks in advance.
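A minimal sketch of one possible fix, assuming MongoDB 4.2+ and the mongo shell: the update command only treats u as an aggregation pipeline (where "$answer" and "$updatedAt" resolve to field values) when u is an array, exactly as in the updateMany call above; as a plain document, everything on the right-hand side of $set is a literal, which is why the dollar-prefixed strings come back unchanged. The UUID() expression is client-side shell JavaScript, so it must be evaluated before the command document is built:

var resumeId = UUID().hex().match(/^(.{8})(.{4})(.{4})(.{4})(.{12})$/).slice(1, 6).join('-');
db.runCommand({
    update: "userfeedback",
    updates: [
        {
            q: { userId: 8426, questionIdentifier: "resumeLink" },
            u: [  // array => aggregation pipeline, so "$..." is a field path
                {
                    $set: {
                        answer: [
                            {
                                resumeLink: "$answer",     // current value of the answer field
                                resumeId: resumeId,        // computed client-side above
                                uploadSizeInByte: -1,
                                source: "manual",
                                dateUploaded: "$updatedAt" // current value of updatedAt
                            }
                        ]
                    }
                }
            ]
        }
    ]
})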
I'm having some trouble writing a query to return a triple-nested value from a document. The documents I'm using are structured like this:
{
"areaname": "name1",
"places": [
{
"placename": "place1",
"objects": [
{
"objname": "obj1",
"tags": [
"tag1",
"tag2"
]
},
{
"objname": "obj2",
"tags": [
"tag6",
"tag7"
]
}
]
},
{
"placename": "place2",
"objects": [
{
"objname": "obj45",
"tags": [
"tag46",
"tag34"
]
},
{
"objname": "obj77",
"tags": [
"tag56",
"tag11"
]
}
]
}
]
}
It is quite simple actually, but I can't find a solution to a simple query like:
"return the objname of the object that contains tag1 inside its tags"
So for the given document, if I use "tag1" as a parameter, the query is expected to return "obj1".
It should give me the same result if I use "tag2" as a parameter.
Another example: using "tag56" it should return only "obj77".
Right now I have no problem returning the whole document using dot notation or top-level fields such as areaname:
db.users.find( {"places.objects.tags":"tag1"}, { areaname: 1, _id:0 } )
Is this even possible?
Keeping it simple:
[
{
"$match" : {
"places.objects.tags" : "tag1"
}
},
{
"$unwind" : "$places"
},
{
"$unwind" : "$places.objects"
},
{
"$match" : {
"places.objects.tags" : "tag1"
}
},
{
"$group" : {
"_id" : "$_id",
"obj_names" : {
"$push" : "$places.objects.objname"
}
}
}
]
You should add any other fields you want to keep to the $group stage.
This can also be done without the double $unwind stage, but I chose this for readability.
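For the sample document, running this pipeline with db.users.aggregate([...]) and "tag1" should return something like the following (the _id value depends on the document):

{ "_id" : ObjectId("..."), "obj_names" : [ "obj1" ] }

With "tag56" as the parameter, obj_names would instead contain only "obj77".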
I wrote a script to aggregate some data, but the output isn't in a shape I can iterate over.
I tried modifying the $project part of the aggregation pipeline, but I don't think I'm doing it right.
from bson.son import SON  # needed for the SON(...) sort document in the last stage

pipeline = [
{
"$match": {
"manu": {"$ne": "randomized"},
}},
{
"$match": {
"rssi": {"$lt": "-65db"}
}
},
{"$sort": {"time": -1}},
{
"$group": {"_id": "$mac",
"lastSeen": {"$first": "$time"},
"firstSeen": {"$last": "$time"},
}
},
{
"$project":
{
"_id": 1,
"lastSeen": 1,
"firstSeen": 1,
"minutes":
{
"$trunc":
{
"$divide": [{"$subtract": ["$lastSeen", "$firstSeen"]}, 60000]
}
},
}
},
{
"$facet": {
"0-5": [
{"$match": {"minutes": {"$gte": 1, "$lte": 5}}},
{"$count": "0-5"},
],
"5-10": [
{"$match": {"minutes": {"$gte": 5, "$lte": 10}}},
{"$count": "5-10"},
],
"10-20": [
{"$match": {"minutes": {"$gte": 10, "$lte": 20}}},
{"$count": "10-20"},
],
}
},
{"$project": {
"0-5": {"$arrayElemAt": ["$0-5.0-5", 0]},
"5-10": {"$arrayElemAt": ["$5-10.5-10", 0]},
"10-20": {"$arrayElemAt": ["$10-20.10-20", 0]},
}},
{"$sort": SON([("_id", -1)])}
]
data = list(collection.aggregate(pipeline, allowDiskUse=True))
So I basically get the output as {'0-5': 2914, '5-10': 1384, '10-20': 1295}, which cannot be iterated through.
Ideally it should be something like
{'timeframe': '0-5', 'count': 262}
Any suggestions?
Thanks in advance.
You can try the aggregation below (replacing your current $facet stage and everything after it):
db.col.aggregate([{
"$facet": {
"0-5": [
{"$match": {"minutes": {"$gte": 1, "$lte": 5}}},
{"$count": "total"},
],
"5-10": [
{"$match": {"minutes": {"$gte": 5, "$lte": 10}}},
{"$count": "total"},
],
"10-20": [
{"$match": {"minutes": {"$gte": 10, "$lte": 20}}},
{"$count": "total"},
]
},
},
{
$project: {
result: { $objectToArray: "$$ROOT" }
}
},
{
$unwind: "$result"
},
{
$unwind: "$result.v"
},
{
$project: {
timeframe: "$result.k",
count: "$result.v.total"
}
}
])
$facet returns a single document that contains three fields (the results of the sub-aggregations). You can use $objectToArray to reshape it into k and v fields and then $unwind to get a single document per key.
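Assuming none of the facet buckets is empty (an empty array would be dropped by the second $unwind), the reshaped result comes back as one document per timeframe, which is easy to iterate over in Python. With the counts from the question's output it would look like:

[{'timeframe': '0-5', 'count': 2914},
 {'timeframe': '5-10', 'count': 1384},
 {'timeframe': '10-20', 'count': 1295}]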
I am new to JSON, and I'm trying to create JSON that works for this HashMap:
HashMap<SomeEnum,HashMap<Integer,String>> agentNumbers;
So i created this JSON
{
"agentNumbers": [
{
"Additional": [
{
"insuranceId": 111,
"agentNumber": "09090"
},
{
"insuranceId": 1111,
"agentNumber": "090900"
}
]
},
{
"Full": [
{
"insuranceId": 1112,
"agentNumber": "090901"
}
]
}
]
}
When I do gson.fromJson(...), it says:
com.google.gson.JsonSyntaxException: java.lang.IllegalStateException: Expected
BEGIN_ARRAY but was BEGIN_OBJECT at line 1 column 20 path $.agentNumbers[0]
Please guide me on what I'm missing.
Thanks
I guess it'll work if you leave out the wrapping agentNumbers array and model the map directly, so each enum name is a key whose value is the inner map. Something like this:
{
    "Additional": {
        "111": "data1",
        "112": "data2",
        "113": "data3",
        "114": "data4",
        "115": "data5"
    },
    "Full": {
        "111": "data1",
        "112": "data2",
        "113": "data3",
        "114": "data4",
        "115": "data5"
    }
}
Try this.
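On the Java side, a minimal sketch of the matching deserialization, assuming SomeEnum declares constants named Additional and Full (Gson maps enum keys by constant name and parses the "111"-style keys into Integers):

import com.google.gson.Gson;
import com.google.gson.reflect.TypeToken;
import java.lang.reflect.Type;
import java.util.HashMap;

enum SomeEnum { Additional, Full }

class AgentNumbersParser {
    static HashMap<SomeEnum, HashMap<Integer, String>> parse(String json) {
        // TypeToken captures the generic type, which is otherwise erased at
        // runtime, so Gson can reconstruct the nested map.
        Type type = new TypeToken<HashMap<SomeEnum, HashMap<Integer, String>>>() {}.getType();
        return new Gson().fromJson(json, type);
    }
}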
I am trying to test my Lambda manually with the following DynamoDB event input configured in tests.
Let's call this Json-1:
{
"Records": [
{
"eventID": "1",
"eventVersion": "1.0",
"dynamodb": {
"Keys": {
"Id": {
"N": "101"
}
},
"NewImage": {
"Message": {
"S": "New item!"
},
"Id": {
"N": "101"
}
},
"StreamViewType": "NEW_AND_OLD_IMAGES",
"SequenceNumber": "111",
"SizeBytes": 26
},
"awsRegion": "us-west-2",
"eventName": "INSERT",
"eventSourceARN": eventsourcearn,
"eventSource": "aws:dynamodb"
},
{
"eventID": "2",
"eventVersion": "1.0",
"dynamodb": {
"OldImage": {
"Message": {
"S": "New item!"
},
"Id": {
"N": "101"
}
},
"SequenceNumber": "222",
"Keys": {
"Id": {
"N": "101"
}
},
"SizeBytes": 59,
"NewImage": {
"Message": {
"S": "This item has changed"
},
"Id": {
"N": "101"
}
},
"StreamViewType": "NEW_AND_OLD_IMAGES"
},
"awsRegion": "us-west-2",
"eventName": "MODIFY",
"eventSourceARN": sourcearn,
"eventSource": "aws:dynamodb"
},
{
"eventID": "3",
"eventVersion": "1.0",
"dynamodb": {
"Keys": {
"Id": {
"N": "101"
}
},
"SizeBytes": 38,
"SequenceNumber": "333",
"OldImage": {
"Message": {
"S": "This item has changed"
},
"Id": {
"N": "101"
}
},
"StreamViewType": "NEW_AND_OLD_IMAGES"
},
"awsRegion": "us-west-2",
"eventName": "REMOVE",
"eventSourceARN": sourcearn,
"eventSource": "aws:dynamodb"
}
]
}
However, the JSON of the DynamoDB items looks like this.
Let's call this Json-2:
{
"id": {
"S": "RIGHT-aa465568-f4c8-4822-9c38-7563ae0cd37b-1131286033464633.jpg"
},
"lines": {
"L": [
{
"M": {
"points": {
"L": [
{
"L": [
{
"N": "0"
},
{
"N": "874.5625"
}
]
},
{
"L": [
{
"N": "1765.320601851852"
},
{
"N": "809.7800925925926"
}
]
},
{
"L": [
{
"N": "3264"
},
{
"N": "740.3703703703704"
}
]
}
]
},
"type": {
"S": "guard"
}
}
}
]
},
"modified": {
"N": "1483483932472"
},
"qastatus": {
"S": "reviewed"
}
}
Using the Lambda function below, I can connect to my table. My goal is to create JSON that Elasticsearch will accept.
#Override
public Object handleRequest(DynamodbEvent dynamodbEvent, Context context) {
List<DynamodbEvent.DynamodbStreamRecord> dynamodbStreamRecordlist = dynamodbEvent.getRecords();
DynamoDB dynamoDB = new DynamoDB(new AmazonDynamoDBClient());
log.info("Whole event - "+dynamodbEvent.toString());
dynamodbStreamRecordlist.stream().forEach(dynamodbStreamRecord -> {
if(dynamodbStreamRecord.getEventSource().equalsIgnoreCase("aws:dynamodb")){
log.info("one record - "+dynamodbStreamRecord.getDynamodb().toString());
log.info(" getting N from new image "+dynamodbStreamRecord.getDynamodb().getNewImage().toString());
String tableName = getTableNameFromARN(dynamodbStreamRecord.getEventSourceARN());
log.info("Table name :"+tableName);
Map<String, AttributeValue> keys = dynamodbStreamRecord.getDynamodb().getKeys();
log.info(keys.toString());
AttributeValue attributeValue = keys.get("Id");
log.info("Value of N: "+attributeValue.getN());
Table table = dynamoDB.getTable(tableName);
}
});
return dynamodbEvent;
}
The format of a JSON item that Elasticsearch expects is this, and it is what I want to map the test input JSON to.
Let's call this Json-3:
{
_index: "bar-guard",
_type: "bar-guard_type",
_id: "LEFT-b1939610-442f-4d8d-9991-3ca54685b206-1147042497459511.jpg",
_score: 1,
_source: {
#SequenceNumber: "4901800000000019495704485",
#timestamp: "2017-01-04T02:24:20.560358",
lines: [{
points: [[0,
1222.7129629629628],
[2242.8252314814818,
1254.702546296296],
[4000.0000000000005,
1276.028935185185]],
type: "barr"
}],
modified: 1483483934697,
qastatus: "reviewed",
id: "LEFT-b1939610-442f-4d8d-9991-3ca54685b206-1147042497459511.jpg"
}
}
So what I need is to read Json-1 and map it to Json-3.
However, Json-1 does not seem to be complete, i.e., it does not have the information that a DynamoDB JSON has, like the points and lines in Json-2.
And so, I was trying to get a connection to the original table and then read this additional information of lines and points by using the ID.
I am not sure if this is the right approach. Basically, I want to figure out a way to get the actual JSON that DynamoDB has, not the one with attribute types.
How can I get lines and points from Json-2 using Java? I know we have DocumentClient in JavaScript, but I am looking for something in Java.
Also, I came across a converter, but it doesn't help me: https://github.com/aws/aws-sdk-js/blob/master/lib/dynamodb/converter.js
Is this something that I should use DynamoDBMapper or ScanJavaDocumentAPI for?
http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/dynamodbv2/datamodeling/DynamoDBMapper.html#marshallIntoObjects-java.lang.Class-java.util.List-com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBMapperConfig-
If yes, I am a little lost on how to do that in the code below:
ScanRequest scanRequest = new ScanRequest().withTableName(tableName);
ScanResult result = dynamoDBClient.scan(scanRequest);
for(Map<String, AttributeValue> item : result.getItems()){
AttributeValue value = item.get("lines");
if(value != null){
List<AttributeValue> values = value.getL();
for(AttributeValue value2 : values){
//what next?
}
}
}
OK, this seems to work for me:
ScanRequest scanRequest = new ScanRequest().withTableName(tableName);
ScanResult result = dynamoDBClient.scan(scanRequest);
for (Map<String, AttributeValue> item : result.getItems()) {
    AttributeValue value = item.get("lines");
    if (value != null) {
        // "lines" is a DynamoDB list (L) of maps (M)
        List<AttributeValue> values = value.getL();
        for (AttributeValue value2 : values) {
            if (value2.getM() != null) {
                Map<String, AttributeValue> map = value2.getM();
                // each line map holds a "points" list of [x, y] pairs
                AttributeValue points = map.get("points");
                List<AttributeValue> pointsvalues = points.getL();
                if (!pointsvalues.isEmpty()) {
                    for (AttributeValue valueOfPoint : pointsvalues) {
                        // each point is itself a list of numeric (N) values
                        List<AttributeValue> pointList = valueOfPoint.getL();
                        for (AttributeValue valueOfPoint2 : pointList) {
                            // valueOfPoint2.getN() holds the coordinate as a string
                        }
                    }
                }
            }
        }
    }
}
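If unwrapping the attribute tree by hand gets unwieldy, here is a sketch of an alternative, assuming your AWS SDK for Java v1 version ships the document API's ItemUtils class: it converts the typed AttributeValue map (the Json-2 shape) into an Item whose toJSON() yields the plain JSON that Elasticsearch accepts (the Json-3 _source shape):

import com.amazonaws.services.dynamodbv2.document.Item;
import com.amazonaws.services.dynamodbv2.document.ItemUtils;
import com.amazonaws.services.dynamodbv2.model.AttributeValue;
import java.util.Map;

class DynamoJson {
    // Converts a typed attribute map, e.g. one entry of result.getItems()
    // from the scan above, into plain JSON without walking L/M/N/S values.
    static String toPlainJson(Map<String, AttributeValue> attributeMap) {
        Item item = ItemUtils.toItem(attributeMap);
        return item.toJSON();
    }
}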
I have the following objects indexed:
{ "ProjectName" : "Project 1",
"Roles" : [
{ "RoleName" : "Role 1", "AddedAt" : "2015-08-14T17:11:31" },
{ "RoleName" : "Role 2", "AddedAt" : "2015-09-14T17:11:31" } ] }
{ "ProjectName" : "Project 2",
"Roles" : [
{ "RoleName" : "Role 1", "AddedAt" : "2015-10-14T17:11:31" } ] }
{ "ProjectName" : "Project 3",
"Roles" : [
{ "RoleName" : "Role 2", "AddedAt" : "2015-11-14T17:11:31" } ] }
I.e., a list of projects with different roles, added at different times.
(The Roles list is a nested field.)
What I need is an aggregation that counts how many projects exist per role, BUT only(!) if the role was added to the project in a certain period.
A classic query (without the date range filtering) looks like this (and works well):
{ // ... my main query here
"aggs" : {
"agg1" : {
"nested" : {
"path" : "Roles"
},
"aggs" : {
"agg2": {
"terms": {
"field" : "Roles.RoleName"
},
"aggs": {
"agg3":{
"reverse_nested": {}
}}}}}}
But this approach is not working for me: if I filter by dates starting from, say, '2015-09-01', both 'Role 1' and 'Role 2' would be counted for the first project, because even though 'Role 1' was added outside the range, the project still matches through 'Role 2''s AddedAt date.
So I suppose I should add the following condition somewhere:
"range": { "Roles.AddedAt": {
"gte": "2015-09-01T00:00:00",
"lte": "2015-12-02T23:59:59"
}}
But I cannot find a correct way to do that.
The results of the working query are (kind of) the following:
"aggregations": {
"agg1": {
"doc_count": 17,
"agg2": {
"buckets": [
{
"key": "Role 1",
"doc_count": 2,
"agg3": {
"doc_count": 2
}
},
{
"key": "Role 2",
"doc_count": 2,
"agg3": {
"doc_count": 2
}
},
Try this:
{
"aggs": {
"agg1": {
"nested": {
"path": "Roles"
},
"aggs": {
"NAME": {
"filter": {
"query": {
"range": {
"Roles.AddedAt": {
"gte": "2015-09-01T00:00:00",
"lte": "2015-12-02T23:59:59"
}
}
}
},
"aggs": {
"agg2": {
"terms": {
"field": "Roles.RoleName"
},
"aggs": {
"agg3": {
"reverse_nested": {}
}
}
}
}
}
}
}
}
}
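One note on the filter aggregation above: the extra query wrapper comes from the older (1.x) filter DSL. On more recent Elasticsearch versions the range query goes directly under filter, e.g.:

"filter": {
    "range": {
        "Roles.AddedAt": {
            "gte": "2015-09-01T00:00:00",
            "lte": "2015-12-02T23:59:59"
        }
    }
}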