I'm relatively new to Firebase and Firestore. I have created 2 indexes in the Firestore web console, and it seems I also have to update firestore.indexes.json in my IDE.
Based on the image above, is this the correct way of adding them in VS Code? I'm just assuming it is.
{
  "indexes": [
    {
      "collectionGroup": "timeinout",
      "queryScope": "COLLECTION",
      "fields": [
        {
          "fieldPath": "uid",
          "order": "ASCENDING"
        },
        {
          "fieldPath": "createdAt",
          "order": "ASCENDING"
        },
        {
          "fieldPath": "uid",
          "order": "ASCENDING"
        },
        {
          "fieldPath": "timein",
          "order": "ASCENDING"
        }
      ]
    }
  ],
  "fieldOverrides": []
}
The index definition is almost there, but it will only create one index, across all four of those fields (one of which, uid, will be duplicated).
To create 2 indexes, you can put 2 entries in the indexes array rather than putting all 4 fields into a single index entry:
{
  "indexes": [
    {
      "collectionGroup": "timeinout",
      "queryScope": "COLLECTION",
      "fields": [
        {
          "fieldPath": "uid",
          "order": "ASCENDING"
        },
        {
          "fieldPath": "createdAt",
          "order": "ASCENDING"
        }
      ]
    },
    {
      "collectionGroup": "timeinout",
      "queryScope": "COLLECTION",
      "fields": [
        {
          "fieldPath": "uid",
          "order": "ASCENDING"
        },
        {
          "fieldPath": "timein",
          "order": "ASCENDING"
        }
      ]
    }
  ],
  "fieldOverrides": []
}
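As a side note, once firestore.indexes.json matches the indexes you want, you can push the definitions from your machine with the Firebase CLI:

firebase deploy --only firestore:indexes

Conversely, firebase firestore:indexes prints the indexes currently in the project as JSON, which helps keep the local file in sync with what you create in the web console.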
I have the following output coming from a Step Functions task (ListObjectsV2):
{
  "Contents": [
    {
      "ETag": "\"86c12c034bc6c30cb89b500b954c188f\"",
      "Key": "55271f52fffe4461a2ee3228ebb97157/input/batch_1.csv",
      "LastModified": "2023-02-09T13:46:20Z",
      "Size": 796014,
      "StorageClass": "STANDARD"
    },
    {
      "ETag": "\"58e4a770e0f66073b00d185df500f07f\"",
      "Key": "55271f52fffe4461a2ee3228ebb97157/input/batch_2.csv",
      "LastModified": "2023-02-09T13:47:20Z",
      "Size": 934038,
      "StorageClass": "STANDARD"
    },
    {
      "ETag": "\"460abd0de64d5cb67e8f0d46878cb1ef\"",
      "Key": "55271f52fffe4461a2ee3228ebb97157/input/batch_3.csv",
      "LastModified": "2023-02-09T13:46:57Z",
      "Size": 794264,
      "StorageClass": "STANDARD"
    },
    {
      "ETag": "\"1bfedc3dc92e4ba8d04e24b9b5a0ed58\"",
      "Key": "55271f52fffe4461a2ee3228ebb97157/input/batch_4.csv",
      "LastModified": "2023-02-09T13:46:24Z",
      "Size": 788756,
      "StorageClass": "STANDARD"
    },
    {
      "ETag": "\"9d6c434ce5ebdf203a790fbcf19338dc\"",
      "Key": "55271f52fffe4461a2ee3228ebb97157/input/batch_5.csv",
      "LastModified": "2023-02-09T13:47:07Z",
      "Size": 831156,
      "StorageClass": "STANDARD"
    }
  ],
  "IsTruncated": false,
  "KeyCount": 5,
  "MaxKeys": 1000,
  "Name": "vita-internal-text-classification-dev-183576513728",
  "Prefix": "55271f52fffe4461a2ee3228ebb97157"
}
I want an array containing only the Key field of each object, to pass to the next state, like so:
[
  {
    "Key": "55271f52fffe4461a2ee3228ebb97157/input/batch_1.csv"
  },
  {
    "Key": "55271f52fffe4461a2ee3228ebb97157/input/batch_2.csv"
  },
  {
    "Key": "55271f52fffe4461a2ee3228ebb97157/input/batch_3.csv"
  },
  {
    "Key": "55271f52fffe4461a2ee3228ebb97157/input/batch_4.csv"
  },
  {
    "Key": "55271f52fffe4461a2ee3228ebb97157/input/batch_5.csv"
  }
]
So far I've tried setting the ResultPath to:
$.Contents[*].Key
$.Contents[*].['Key']
What I get is:
[
  "55271f52fffe4461a2ee3228ebb97157/input/batch_1.csv",
  "55271f52fffe4461a2ee3228ebb97157/input/batch_2.csv",
  "55271f52fffe4461a2ee3228ebb97157/input/batch_3.csv",
  "55271f52fffe4461a2ee3228ebb97157/input/batch_4.csv",
  "55271f52fffe4461a2ee3228ebb97157/input/batch_5.csv"
]
That's close, but it's not the format I need. Any help?
The way I've solved this is to use an Inline Map state with a Pass state to build the necessary format. You can see this pattern in an example here showing how to use Step Functions Distributed Map to bulk delete objects from S3; the relevant part is the inner Create Object Identifier Array Map state. If you were doing this in Standard Workflows, this could be a cost concern given the number of state transitions involved. But since the Item Processor uses Express Workflows, which are billed by duration (and these are super fast), it works pretty well.
{
  "Comment": "A state machine to bulk delete objects from S3 using Distributed Map",
  "StartAt": "Confirm Bucket Provided",
  "States": {
    "Confirm Bucket Provided": {
      "Type": "Choice",
      "Choices": [
        {
          "Not": {
            "Variable": "$.bucket",
            "IsPresent": true
          },
          "Next": "Fail - No Bucket"
        }
      ],
      "Default": "Check for Prefix"
    },
    "Check for Prefix": {
      "Type": "Choice",
      "Choices": [
        {
          "Not": {
            "Variable": "$.prefix",
            "IsPresent": true
          },
          "Next": "Generate Parameters - Without Prefix"
        }
      ],
      "Default": "Generate Parameters - With Prefix"
    },
    "Generate Parameters - Without Prefix": {
      "Type": "Pass",
      "Parameters": {
        "Bucket.$": "$.bucket",
        "Prefix": ""
      },
      "ResultPath": "$.list_parameters",
      "Next": "Delete Objects from S3 Bucket"
    },
    "Fail - No Bucket": {
      "Type": "Fail",
      "Error": "InsufficientArguments",
      "Cause": "No Bucket was provided"
    },
    "Generate Parameters - With Prefix": {
      "Type": "Pass",
      "Next": "Delete Objects from S3 Bucket",
      "Parameters": {
        "Bucket.$": "$.bucket",
        "Prefix.$": "$.prefix"
      },
      "ResultPath": "$.list_parameters"
    },
    "Delete Objects from S3 Bucket": {
      "Type": "Map",
      "ItemProcessor": {
        "ProcessorConfig": {
          "Mode": "DISTRIBUTED",
          "ExecutionType": "EXPRESS"
        },
        "StartAt": "Create Object Identifier Array",
        "States": {
          "Create Object Identifier Array": {
            "Type": "Map",
            "ItemProcessor": {
              "ProcessorConfig": {
                "Mode": "INLINE"
              },
              "StartAt": "Create Object Identifier",
              "States": {
                "Create Object Identifier": {
                  "Type": "Pass",
                  "End": true,
                  "Parameters": {
                    "Key.$": "$.Key"
                  }
                }
              }
            },
            "ItemsPath": "$.Items",
            "ResultPath": "$.object_identifiers",
            "Next": "Delete Objects"
          },
          "Delete Objects": {
            "Type": "Task",
            "Next": "Clear Output",
            "Parameters": {
              "Bucket.$": "$.BatchInput.bucket",
              "Delete": {
                "Objects.$": "$.object_identifiers"
              }
            },
            "Resource": "arn:aws:states:::aws-sdk:s3:deleteObjects",
            "Retry": [
              {
                "ErrorEquals": [
                  "States.ALL"
                ],
                "BackoffRate": 2,
                "IntervalSeconds": 1,
                "MaxAttempts": 6
              }
            ],
            "ResultSelector": {
              "Deleted.$": "$.Deleted",
              "RetryCount.$": "$$.State.RetryCount"
            }
          },
          "Clear Output": {
            "Type": "Pass",
            "End": true,
            "Result": {}
          }
        }
      },
      "ItemReader": {
        "Resource": "arn:aws:states:::s3:listObjectsV2",
        "Parameters": {
          "Bucket.$": "$.list_parameters.Bucket",
          "Prefix.$": "$.list_parameters.Prefix"
        }
      },
      "MaxConcurrency": 5,
      "Label": "S3objectkeys",
      "ItemBatcher": {
        "MaxInputBytesPerBatch": 204800,
        "MaxItemsPerBatch": 1000,
        "BatchInput": {
          "bucket.$": "$.list_parameters.Bucket"
        }
      },
      "ResultSelector": {},
      "End": true
    }
  }
}
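Applied directly to the ListObjectsV2 output in the question, a stripped-down version of that inner conversion could look like the sketch below (state names are illustrative, and it assumes the listing result is present at $.Contents):

"Build Key Array": {
  "Type": "Map",
  "ItemsPath": "$.Contents",
  "ItemProcessor": {
    "ProcessorConfig": {
      "Mode": "INLINE"
    },
    "StartAt": "Extract Key",
    "States": {
      "Extract Key": {
        "Type": "Pass",
        "Parameters": {
          "Key.$": "$.Key"
        },
        "End": true
      }
    }
  },
  "ResultPath": "$.object_identifiers",
  "End": true
}

Each element of Contents goes through the Pass state, which keeps only the Key field, so the Map's result is exactly the array of { "Key": ... } objects you're after.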
I am trying to set up an automated Kibana alert that takes in data from a defined extraction query. I get all the information I want; however, the query response returns values without rounding them (up to 12 decimal places). Where in the extraction query, and what, do I specify to round this value?
{
  "size": 0,
  "query": {
    "bool": {
      "filter": [
        {
          "match_all": {
            "boost": 1
          }
        },
        {
          "range": {
"#timestamp": {
"from": "{{period_end}}||-24h",
"to": "{{period_end}}",
"include_lower": true,
"include_upper": true,
"format": "epoch_millis",
"boost": 1
}
}
}
],
"adjust_pure_negative": true,
"boost": 1
}
},
"_source": {
"includes": [],
"excludes": []
},
"stored_fields": "*",
"docvalue_fields": [
{
"field": "#timestamp",
"format": "date_time"
},
{
"field": "timestamp",
"format": "date_time"
}
],
"script_fields": {},
"aggregations": {
"2": {
"terms": {
"field": "tag.country.keyword",
"size": 20,
"min_doc_count": 1,
"shard_min_doc_count": 0,
"show_term_doc_count_error": false,
"order": [
{
"1": "desc"
},
{
"_key": "asc"
}
]
},
"aggregations": {
"1": {
"avg": {
"field": "my_field"
}
}
}
}
}
}
Here, I'm talking about the "avg" aggregation at the very bottom. As I understand, right below the "field" key, I should specify a "script" key, defining the rounding function that I want to use. Can anybody help me come up with the correct function?
I'm not sure what to specify in the "script" key to make the rounding function work.
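One caveat first: a "script" inside the "avg" aggregation would round each document's value before averaging, which is not quite the same as rounding the final average. To round the result itself, a sibling bucket_script pipeline aggregation is probably closer to what's wanted. A minimal sketch, assuming two decimal places (the "1_rounded" name is illustrative):

"aggregations": {
  "1": {
    "avg": {
      "field": "my_field"
    }
  },
  "1_rounded": {
    "bucket_script": {
      "buckets_path": {
        "avg": "1"
      },
      "script": "Math.round(params.avg * 100.0) / 100.0"
    }
  }
}

This would replace the innermost "aggregations" block in the query above; the bucket_script runs once per country bucket, on the already-computed average.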
The entire JSON file is rather large, so I've only taken out the subsection I'm having an issue with.
{
  "diagrams": {
    "5f759d15cd046720c28531dd": {
      "_id": "5f759d15cd046720c28531dd",
      "offsetX": 320,
      "offsetY": 42,
      "zoom": 80,
      "modified": 1604279356,
      "nodes": {
        "5f9f5c3ccd046720c28531e4": {
          "nodeID": "5f9f5c3ccd046720c28531e4",
          "type": "start",
          "coords": [
            360,
            120
          ],
          "data": {
            "name": "Start",
            "color": "standard",
            "ports": [
              {
                "type": "",
                "target": "5f9f5c3ccd046720c28531e6"
              }
            ],
            "steps": []
          }
        },
        "5f9f5c3ccd046720c28531e5": {
          "nodeID": "5f9f5c3ccd046720c28531e5",
          "type": "block",
          "coords": [
            760,
            120
          ],
          "data": {
            "name": "Help Message",
            "color": "standard",
            "steps": [
              "5f9f5c3ccd046720c28531e6",
              "5f9f5c3ccd046720c28531e7"
            ]
          }
        },
        "5f9f5c3ccd046720c28531e6": {
          "nodeID": "5f9f5c3ccd046720c28531e6",
          "type": "speak",
          "data": {
            "randomize": false,
            "dialogs": [
              {
                "voice": "Alexa",
                "content": "You said help. Do you want to continue?"
              }
            ],
            "ports": [
              {
                "type": "",
                "target": "5f9f5c3ccd046720c28531e7"
              }
            ]
          }
        },
        "5f9f5c3ccd046720c28531e7": {
          "nodeID": "5f9f5c3ccd046720c28531e7",
          "type": "interaction",
          "data": {
            "name": "Choice",
            "else": {
              "type": "path",
              "randomize": false,
              "reprompts": []
            },
            "choices": [
              {
                "intent": "",
                "mappings": []
              },
              {
                "intent": "",
                "mappings": []
              }
            ],
            "reprompt": null,
            "ports": [
              {
                "type": "else",
                "target": null
              },
              {
                "type": "",
                "target": null
              },
              {
                "type": "",
                "target": "5f9f5c3ccd046720c28531e9"
              }
            ]
          }
        },
        "5f9f5c3ccd046720c28531e8": {
          "nodeID": "5f9f5c3ccd046720c28531e8",
          "type": "block",
          "coords": [
            1170,
            260
          ],
          "data": {
            "name": "Exit",
            "color": "standard",
            "steps": [
              "5f9f5c3ccd046720c28531e9"
            ]
          }
        },
        "5f9f5c3ccd046720c28531e9": {
          "nodeID": "5f9f5c3ccd046720c28531e9",
          "type": "exit",
          "data": {
            "ports": []
          }
        }
      },
      "children": [],
      "creatorID": 42661,
      "variables": [],
      "name": "Help Flow",
      "versionID": "5f759d15cd046720c28531db"
    }
  }
}
The Current JSON Schema Definition I have is:
{
  "$schema": "http://json-schema.org/schema#",
  "type": "object",
  "properties": {
    "diagrams": {
      "type": "object"
    }
  },
  "required": [
    "diagrams"
  ]
}
The problem I am having is that diagrams contains multiple objects whose names are random strings, e.g. "5f759d15cd046720c28531dd".
Within each of those objects there are properties such as _id and offsetX, which I want to express, as well as a nodes object, which again contains multiple objects with arbitrary names, e.g. "5f9f5c3ccd046720c28531e4", "5f9f5c3ccd046720c28531e5", and so on. Each of those is a unique node definition, and some nodes have different properties from others (nodeID, type, data vs. nodeID, type, data, coords).
My question is: with all these arbitrary names and per-node differences, how do I write one JSON Schema definition that covers all the ways a diagram/node can be made?
You can do this with additionalProperties or patternProperties.
additionalProperties applies to any property that isn't declared in properties or patternProperties.
{
  "type": "object",
  "additionalProperties": {
    "type": "object",
    "properties": {
      "_id": { ... },
      "offsetX": { ... },
      ...
    }
  }
}
Your property names appear to always be hex numbers. If you want to enforce that those property names are always hex numbers, you can use patternProperties. Any property that matches the regex must conform to that schema.
{
  "type": "object",
  "patternProperties": {
    "^[0-9a-f]{24}$": {
      "type": "object",
      "properties": {
        "_id": { ... },
        "offsetX": { ... },
        ...
      }
    }
  },
  "additionalProperties": false
}
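The same idea nests for the nodes object, and differing node shapes can be handled by requiring only the properties every node shares. A rough sketch, to be treated as a starting point rather than a complete schema (the type constraints are assumptions based on the sample above):

{
  "type": "object",
  "patternProperties": {
    "^[0-9a-f]{24}$": {
      "type": "object",
      "properties": {
        "_id": { "type": "string" },
        "offsetX": { "type": "number" },
        "nodes": {
          "type": "object",
          "patternProperties": {
            "^[0-9a-f]{24}$": {
              "type": "object",
              "properties": {
                "nodeID": { "type": "string" },
                "type": { "type": "string" },
                "coords": {
                  "type": "array",
                  "items": { "type": "number" }
                },
                "data": { "type": "object" }
              },
              "required": ["nodeID", "type", "data"]
            }
          },
          "additionalProperties": false
        }
      }
    }
  },
  "additionalProperties": false
}

Since coords is not listed in required, nodes with and without it both validate; if some node types need stricter rules, if/then or oneOf branches keyed on type can be layered on top.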
I have JSON data that I am getting from a URL, and I want to merge the entries that have the same value. Below is the sample data. I have searched the Internet but didn't find a solution; everything I found was about merging identical keys.
[
  {
    "banana": [
      {
        "color": "yellow",
        "size": "small"
      },
      {
        "size": "medium"
      },
      {
        "size": "large"
      }
    ],
    "process_name": "fruits"
  },
  {
    "carnivores": [
      {
        "name": "lion"
      },
      {
        "name": "tiger"
      },
      {
        "name": "chetah"
      },
      {
        "name": "dianosaur"
      }
    ],
    "process_name": "animal"
  },
  {
    "apple": [
      {
        "color": "red",
        "size": "large"
      }
    ],
    "process_name": "fruits"
  }
]
I want to merge the objects whose "process_name" is "fruits" into one, so the result should be:
[
  {
    "banana": [
      {
        "color": "yellow",
        "size": "small"
      },
      {
        "size": "medium"
      },
      {
        "size": "large"
      }
    ],
    "apple": [
      {
        "color": "red",
        "size": "large"
      }
    ],
    "process_name": "fruits"
  },
  {
    "carnivores": [
      {
        "name": "lion"
      },
      {
        "name": "tiger"
      },
      {
        "name": "chetah"
      },
      {
        "name": "dianosaur"
      }
    ],
    "process_name": "animal"
  }
]
Can anyone help with this?
Just a snippet to start:
// data definition
const json_before = [
  {
    "banana": [
      {
        "color": "yellow",
        "size": "small"
      },
      {
        "size": "medium"
      },
      {
        "size": "large"
      }
    ],
    "process_name": "fruits"
  },
  {
    "carnivores": [
      {
        "name": "lion"
      },
      {
        "name": "tiger"
      },
      {
        "name": "chetah"
      },
      {
        "name": "dianosaur"
      }
    ],
    "process_name": "animal"
  },
  {
    "apple": [
      {
        "color": "red",
        "size": "large"
      }
    ],
    "process_name": "fruits"
  }
];
// processing
const json_after = json_before.reduce((arr, next) => {
  const exist = arr.find(el => el.process_name === next.process_name);
  if (exist) Object.assign(exist, next);
  else arr.push(next);
  return arr;
}, []);
// check
console.log(json_after);
The main idea is to use reduce for the array traversal. To keep the processing code short, I've used Object.assign to modify the previously added element, although in real code it would be better to use Object.keys and check explicitly that properties of the previous object aren't being overwritten.
Note that this is mutating the source data (check by adding console.log(json_before)). Again, if you don't want this, you should clone each element yourself, without direct object assignment.
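If avoiding mutation matters, here is a small non-mutating variant (just a sketch; it shallow-copies each element, so nested arrays are still shared with the source, and structuredClone(next) could be substituted for a deep copy):

// Non-mutating variant: push a shallow copy, so merges touch the copy,
// never the original elements of json_before.
const json_after = json_before.reduce((arr, next) => {
  const exist = arr.find(el => el.process_name === next.process_name);
  if (exist) Object.assign(exist, next); // "exist" is already a copy
  else arr.push({ ...next });
  return arr;
}, []);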
I'm trying to build a query allowing me to make multiple aggregations (on the same level, not sub-aggregations) in a single query. Here's the request I'm sending:
{
  "index": "index20",
  "type": "arret",
  "body": {
    "size": 0,
    "query": {
      "bool": {
        "must": [
          {
            "multi_match": {
              "query": "anim fore",
              "analyzer": "query_analyzer",
              "type": "cross_fields",
              "fields": [
                "doc_id"
              ],
              "operator": "and"
            }
          }
        ]
      }
    },
    "aggs": {
      "anim_fore": {
        "terms": {
          "field": "suggest_keywords.autocomplete",
          "order": {
            "_count": "desc"
          },
          "include": {
            "pattern": "anim.*fore.*"
          }
        }
      },
      "fore": {
        "terms": {
          "field": "suggest_keywords.autocomplete",
          "order": {
            "_count": "desc"
          },
          "include": {
            "pattern": "fore.*"
          }
        }
      }
    }
  }
}
However, I'm getting the following error when executing this query:
Error: [parsing_exception] Unknown key for a START_OBJECT in [fore]., with { line=1 & col=1351 }
I've been trying to change this query in many forms to make it work, but I always end up with this error. It seems really strange to me, as this query looks compatible with the format specified in the ES documentation.
Maybe there is something specific about terms aggregations, but I haven't been able to sort it out.
The error is in your include settings, which should simply be strings:
"aggs": {
"anim_fore": {
"terms": {
"field": "suggest_keywords.autocomplete",
"order": {
"_count": "desc"
},
"include": "anim.*fore.*" <--- here
}
},
"fore": {
"terms": {
"field": "suggest_keywords.autocomplete",
"order": {
"_count": "desc"
},
"include": "fore.*" <--- and here
}
}
}
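As an aside (general terms-aggregation behavior, not specific to this mapping): include and exclude accept either a regex as a plain string, as above, or an array of exact values, e.g. "include": ["animal", "forest"] (values here purely illustrative).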
You have trailing commas after doc_id and after the closing bracket of the must array; your query should look like this:
"must": [
{
"multi_match": {
"query": "anim fore",
"analyzer": "query_analyzer",
"type": "cross_fields",
"fields": [
"doc_id" // You have trailing comma here
],
"operator": "and"
}
}
] // And here