Array of objects nested in an Array of objects - json

What is the correct way to represent this structure in JSON? It's an array of strings with identifiers ('A' is the identifier, 'Printer' is the array item), and each item then has a nested list of strings, also with identifiers:
A Printer
A0010 Not printing
A0020 Out of ink
A0030 No power
A0040 Noise
A0300 Feedback
A0500 Other
B PC Issues
B0010 No power
B0020 BSOD
B0030 Virus related
B0300 Feedback
B0500 Other
Thank you for your help

Does this work for you? It makes it easy to filter for things, and you can use Object.keys to find the corresponding message:
const json = {
  data: [
    {
      identifier: 'A',
      itemType: 'Printer',
      error: [
        { 'A0010': 'Not printing' },
        { 'A0020': 'Out of ink' },
        { 'A0030': 'No power' },
        { 'A0040': 'Noise' },
        { 'A0300': 'Feedback' },
        { 'A0500': 'Other' }
      ]
    },
    {
      identifier: 'B',
      itemType: 'PC Issues',
      error: [
        { 'B0010': 'No power' },
        { 'B0020': 'BSOD' },
        { 'B0030': 'Virus related' },
        { 'B0300': 'Feedback' },
        { 'B0500': 'Other' }
      ]
    }
  ]
}
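As a minimal sketch of the Object.keys lookup suggested above (using a shortened version of the `json` structure, and a hypothetical `messageFor` helper):

```javascript
// Shortened version of the structure above; each entry in `error`
// is a one-key object mapping an error code to its message.
const json = {
  data: [
    {
      identifier: 'A',
      itemType: 'Printer',
      error: [{ A0010: 'Not printing' }, { A0020: 'Out of ink' }]
    },
    {
      identifier: 'B',
      itemType: 'PC Issues',
      error: [{ B0010: 'No power' }, { B0020: 'BSOD' }]
    }
  ]
};

// Look up the message for a given code, e.g. 'B0020'.
function messageFor(code) {
  for (const item of json.data) {
    // Object.keys(e)[0] is the error code of each one-key object.
    const match = item.error.find(e => Object.keys(e)[0] === code);
    if (match) return match[code];
  }
  return undefined;
}
```

`messageFor('B0020')` returns `'BSOD'`. One downside of this shape is that every lookup has to walk the one-key objects instead of indexing directly.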

I'm not totally sure what you mean by identifier, unless you mean via JavaScript 👇
var a = {
  "Printer": [
    { "identifier": "A0010", "reason": "Not printing" },
    { "identifier": "A0020", "reason": "Out of ink" },
    { "identifier": "A0030", "reason": "No power" },
    { "identifier": "A0040", "reason": "Noise" },
    { "identifier": "A0300", "reason": "Feedback" },
    { "identifier": "A0500", "reason": "Other" }
  ]
}
var b = {
  "PC Issues": [
    { "identifier": "B0010", "reason": "No power" },
    { "identifier": "B0020", "reason": "BSOD" },
    { "identifier": "B0030", "reason": "Virus related" },
    { "identifier": "B0300", "reason": "Feedback" },
    { "identifier": "B0500", "reason": "Other" }
  ]
}
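With this shape, a lookup needs no Object.keys at all; a hypothetical `reasonFor` helper is just Array.prototype.find (sketched against a shortened version of the `a` object above):

```javascript
// Shortened version of the `a` object above.
var a = {
  "Printer": [
    { "identifier": "A0010", "reason": "Not printing" },
    { "identifier": "A0020", "reason": "Out of ink" }
  ]
};

// Find the reason for a given identifier within one category array.
function reasonFor(category, id) {
  var hit = category.find(function (e) { return e.identifier === id; });
  return hit ? hit.reason : undefined;
}
```

For example, `reasonFor(a.Printer, 'A0020')` yields `'Out of ink'`.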

Related

How to map network_adapters and storage blocks using Packer vSphere-ISO HCL2 (JSON syntax)

I'm trying to build a VM using the HCL2 JSON syntax. It's not a deal breaker if this is impossible; I just prefer to use the JSON syntax as I find it easier and cleaner to manipulate programmatically. I keep running into the issue of adding storage and network_adapters complex types. I believe both keywords are of type list(map(string)). Here is what I have currently:
var-defs.pkr.json:
{
  "variable": {
    "network_adapters": {
      "description": "List of network adapters to add to the VM.",
      "type": "list(object({ network=string, network_card=string }))",
      "default": [
        {
          "network": "Infra",
          "network_card": "vmxnet3"
        }
      ]
    },
    "storage": {
      "description": "List of virtual disks to add to the VM.",
      "type": "list(object({ disk_controller_index=number, disk_size=number, disk_thin_provisioned=bool }))",
      "default": [
        {
          "disk_controller_index": 0,
          "disk_size": 65536,
          "disk_thin_provisioned": true
        }
      ]
    }
  }
}
I also tried setting the type as "list(map(string))".
var.auto.pkrvars.json:
{
  "network_adapters" : [
    {
      "network": "Infra",
      "network_card": "vmxnet3"
    }
  ],
  "storage" : [
    {
      "disk_controller_index" : 0,
      "disk_size" : 65536,
      "disk_thin_provisioned" : true
    },
    {
      "disk_controller_index" : 0,
      "disk_size" : 65536,
      "disk_thin_provisioned" : true
    }
  ]
}
I've tried the following approaches with no success:
source.pkr.json (direct assignment):
{
  "source" : {
    "vsphere-iso" : {
      "hcl2-json-build-vm" : {
        "network_adapters" : "${var.network_adapters}",
        "storage" : "${var.storage}"
      }
    }
  }
}
source.pkr.json (splatting):
{
  "source" : {
    "vsphere-iso" : {
      "hcl2-json-build-vm" : {
        "network_adapters" : [
          {
            "network" : "${var.network_adapters[*].network}",
            "network_card" : "${var.network_adapters[*].network_card}"
          }
        ],
        "storage" : [
          {
            "disk_controller_index" : "${var.storage[*].disk_controller_index}",
            "disk_size" : "${var.storage[*].disk_size}",
            "disk_thin_provisioned" : "${var.storage[*].disk_thin_provisioned}"
          }
        ]
      }
    }
  }
}
source.pkr.json (dynamic blocks):
{
  "source" : {
    "vsphere-iso" : {
      "hcl2-json-build-vm" : {
        "dynamic" : {
          "network_adapters" : {
            "for_each" : "${var.network_adapters}",
            "content" : {
              "network" : "${network_adapters.network}",
              "network_card" : "${network_adapters.network_card}"
            }
          },
          "storage" : {
            "for_each" : "${var.storage}",
            "content" : {
              "disk_controller_index" : "${storage.disk_controller_index}",
              "disk_size" : "${storage.disk_size}",
              "disk_thin_provisioned" : "${storage.disk_thin_provisioned}"
            }
          }
        }
      }
    }
  }
}
Can anyone help point out my mistakes, or let me know if this is even possible currently? Much appreciated!
edit:
To document this better (and, let's be honest, give the thread a bump), the errors I get when defining network_adapters and storage as user variables and using the splatting method are:
Error: Incorrect attribute value type
on source.pkr.json line 39:
(source code not available)
with var.storage as tuple with 2 elements.
Inappropriate value for attribute "disk_controller_index": number required.
Error: Incorrect attribute value type
on source.pkr.json line 40:
(source code not available)
with var.storage as tuple with 2 elements.
Inappropriate value for attribute "disk_size": number required.
Error: Incorrect attribute value type
on source.pkr.json line 41:
(source code not available)
with var.storage as tuple with 2 elements.
Inappropriate value for attribute "disk_thin_provisioned": bool required.
Error: Incorrect attribute value type
on source.pkr.json line 32:
(source code not available)
with var.network_adapters as tuple with 1 element.
Inappropriate value for attribute "network": string required.
Error: Incorrect attribute value type
on source.pkr.json line 33:
(source code not available)
with var.network_adapters as tuple with 1 element.
Inappropriate value for attribute "network_card": string required.
The string interpolation doesn't seem to work as expected. Am I doing it wrong?
After a little bit of tinkering, I was able to get the suggestion made by SwampDragons here to work. The final implementation looks something like this, in case anyone runs into the same issue in the future:
var-defs.pkr.json
...
"network_adapters": {
  "description": "List of network adapters to add to the VM.",
  "type": "list(map(string))",
  "default": [
    {
      "network": "LLE-Infra",
      "network_card": "vmxnet3"
    }
  ]
},
"storage": {
  "description": "List of virtual disks to add to the VM.",
  "type": "list(map(string))",
  "default": [
    {
      "disk_controller_index": 0,
      "disk_size": 65536,
      "disk_thin_provisioned": true
    }
  ]
}
...
var.auto.pkrvars.json
...
"network_adapters" : [
  {
    "network" : "Infra",
    "network_card" : "vmxnet3"
  }
],
"storage" : [
  {
    "disk_controller_index" : 0,
    "disk_size" : 65536,
    "disk_thin_provisioned" : true
  },
  {
    "disk_controller_index" : 0,
    "disk_size" : 65536,
    "disk_thin_provisioned" : true
  }
]
...
source.pkr.json
...
"dynamic" : {
  "network_adapters" : {
    "for_each" : "${var.network_adapters}",
    "content" : {
      "network" : "${network_adapters.value.network}",
      "network_card" : "${network_adapters.value.network_card}"
    }
  },
  "storage" : {
    "for_each" : "${var.storage}",
    "content" : {
      "disk_controller_index" : "${storage.value.disk_controller_index}",
      "disk_size" : "${storage.value.disk_size}",
      "disk_thin_provisioned" : "${storage.value.disk_thin_provisioned}"
    }
  }
}
...

Value of property QueueConfigurations must be of type List

I am trying to write SQS triggers for my S3 bucket, and I am running into an error saying "Value of property QueueConfigurations must be of type List." Is there something wrong with my indentation/formatting, or is it a content error? I recently had to transcribe this from YAML to JSON, and I could really use a second pair of eyes on this. Keep in mind that the code block below starts so deeply indented because I stripped out some sensitive info I shouldn't post. Thanks in advance!
"NotificationConfiguration" : {
  "QueueConfigurations" : {
    "Id" : "1",
    "Event" : "s3:ObjectCreated:*",
    "Filter" : {
      "S3Key" : {
        "Rules" : {
          "Name" : "prefix",
          "Value" : "prod_hvr/cdc/"
        }
      }
    },
    "Queue" : "arn:aws:sqs:us-east-1:958262988361:interstate-cdc_feeder_prod_hvr_dev"
  },
  "QueueConfigurations" : {
    "Id" : "2",
    "Event" : "s3:ObjectCreated:*",
    "Filter" : {
      "S3Key" : {
        "Rules" : {
          "Name" : "prefix",
          "Value" : "prod_hvr/latency/"
        }
      }
    },
    "Queue" : "arn:aws:sqs:us-east-1:958262988361:interstate-latency_hvr_dev"
  }
}
It should be something like the below. Also, as per the docs, "Id" is not a valid attribute.
{
  "NotificationConfiguration": {
    "QueueConfigurations": [
      {
        "Event": "s3:ObjectCreated:*",
        "Filter": {
          "S3Key": {
            "Rules": [
              {
                "Name": "prefix",
                "Value": "prod_hvr/cdc/"
              }
            ]
          }
        },
        "Queue": "arn:aws:sqs:us-east-1:958262988361:interstate-cdc_feeder_prod_hvr_dev"
      },
      {
        "Event": "s3:ObjectCreated:*",
        "Filter": {
          "S3Key": {
            "Rules": [
              {
                "Name": "prefix",
                "Value": "prod_hvr/latency/"
              }
            ]
          }
        },
        "Queue": "arn:aws:sqs:us-east-1:958262988361:interstate-latency_hvr_dev"
      }
    ]
  }
}

Creating Multiple QueueConfigurations in CloudFormation

I'm currently trying to write multiple QueueConfigurations into my CloudFormation template. Each is an SQS queue that is triggered when an object is created to a specified prefix. Here's what I have so far:
{
  "Resources": {
    "S3Bucket": {
      "Type" : "AWS::S3::Bucket",
      "Properties" : {
        "BucketName" : { "Ref" : "paramBucketName" },
        "LoggingConfiguration" : {
          "DestinationBucketName" : "test-bucket",
          "LogFilePrefix" : { "Fn::Join": [ "", [ { "Ref": "paramBucketName" }, "/" ] ] }
        },
        "NotificationConfiguration" : {
          "QueueConfigurations" : [{
            "Id" : "1",
            "Event" : "s3:ObjectCreated:*",
            "Filter" : {
              "S3Key" : {
                "Rules" : {
                  "Name" : "prefix",
                  "Value" : "folder1/"
                }
              }
            },
            "Queue" : "arn:aws:sqs:us-east-1:958262988361:interstate-cdc_feeder_prod_hvr_dev"
          }],
          "QueueConfigurations" : [{
            "Id" : "2",
            "Event" : "s3:ObjectCreated:*",
            "Filter" : {
              "S3Key" : {
                "Rules" : {
                  "Name" : "prefix",
                  "Value" : "folder2/"
                }
              }
            },
            "Queue" : "arn:aws:sqs:us-east-1:958262988361:interstate-latency_hvr_dev"
          }]
        }
      }
    }
  }
}
I've encountered the error saying Encountered unsupported property Id. I thought that by defining the ID, I would be able to avoid the Duplicate object key error.
Does anyone know how to create multiple triggers in a single CloudFormation template? Thanks for the help in advance.
It should be structured like the below: there should be only one QueueConfigurations attribute that contains all the queue configurations within it. Also, the Id parameter is not a valid property.
{
  "Resources": {
    "S3Bucket": {
      "Type" : "AWS::S3::Bucket",
      "Properties" : {
        "BucketName" : { "Ref" : "paramBucketName" },
        "LoggingConfiguration" : {
          "DestinationBucketName" : "test-bucket",
          "LogFilePrefix" : { "Fn::Join": [ "", [ { "Ref": "paramBucketName" }, "/" ] ] }
        },
        "NotificationConfiguration" : {
          "QueueConfigurations" : [
            {
              "Event" : "s3:ObjectCreated:*",
              "Filter" : {
                "S3Key" : {
                  "Rules" : [
                    {
                      "Name" : "prefix",
                      "Value" : "folder1/"
                    }
                  ]
                }
              },
              "Queue" : "arn:aws:sqs:us-east-1:958262988361:interstate-cdc_feeder_prod_hvr_dev"
            },
            {
              "Event" : "s3:ObjectCreated:*",
              "Filter" : {
                "S3Key" : {
                  "Rules" : [
                    {
                      "Name" : "prefix",
                      "Value" : "folder2/"
                    }
                  ]
                }
              },
              "Queue" : "arn:aws:sqs:us-east-1:958262988361:interstate-latency_hvr_dev"
            }
          ]
        }
      }
    }
  }
}
There is more information about QueueConfiguration in the documentation.

Sub-records in Avro with Morphlines

I'm trying to convert JSON into Avro using the kite-sdk morphline module. After playing around I'm able to convert the JSON into Avro using a simple schema (no complex data types).
Then I took it one step further and modified the Avro schema as displayed below (subrec.avsc). As you can see, the schema consists of a subrecord.
As soon as I tried to convert the JSON to Avro using the morphlines.conf and the subrec.avsc it failed.
Somehow the JSON paths "/record_type[]/alert/action" are not translated by the toAvro function.
The morphlines.conf
morphlines : [
  {
    id : morphline1
    importCommands : ["org.kitesdk.**"]
    commands : [
      # Read the JSON blob
      { readJson: {} }
      { logError { format : "record: {}", args : ["#{}"] } }

      # Extract JSON
      { extractJsonPaths { flatten: false, paths: {
        "/record_type[]/alert/action" : /alert/action,
        "/record_type[]/alert/signature_id" : /alert/signature_id,
        "/record_type[]/alert/signature" : /alert/signature,
        "/record_type[]/alert/category" : /alert/category,
        "/record_type[]/alert/severity" : /alert/severity
      } } }
      { logError { format : "EXTRACTED THIS : {}", args : ["#{}"] } }

      { extractJsonPaths { flatten: false, paths: {
        timestamp : /timestamp,
        event_type : /event_type,
        source_ip : /src_ip,
        source_port : /src_port,
        destination_ip : /dest_ip,
        destination_port : /dest_port,
        protocol : /proto
      } } }

      # Create Avro according to schema
      { logError { format : "WE GO TO AVRO" } }
      { toAvro { schemaFile : /etc/flume/conf/conf.empty/subrec.avsc } }

      # Create Avro container
      { logError { format : "WE GO TO BINARY" } }
      { writeAvroToByteArray { format: containerlessBinary } }
      { logError { format : "DONE!!!" } }
    ]
  }
]
And the subrec.avsc
{
  "type" : "record",
  "name" : "Event",
  "fields" : [
    { "name" : "timestamp", "type" : "string" },
    { "name" : "event_type", "type" : "string" },
    { "name" : "source_ip", "type" : "string" },
    { "name" : "source_port", "type" : "int" },
    { "name" : "destination_ip", "type" : "string" },
    { "name" : "destination_port", "type" : "int" },
    { "name" : "protocol", "type" : "string" },
    {
      "name" : "record_type",
      "type" : [ "null", {
        "name" : "alert",
        "type" : "record",
        "fields" : [
          { "name" : "action", "type" : "string" },
          { "name" : "signature_id", "type" : "int" },
          { "name" : "signature", "type" : "string" },
          { "name" : "category", "type" : "string" },
          { "name" : "severity", "type" : "int" }
        ]
      } ]
    }
  ]
}
The { logError { format : "EXTRACTED THIS : {}", args : ["#{}"] } } command outputs the following:
[{
  /record_type[]/alert/action = [allowed],
  /record_type[]/alert/category = [],
  /record_type[]/alert/severity = [3],
  /record_type[]/alert/signature = [GeoIP from NL, Netherlands],
  /record_type[]/alert/signature_id = [88006],
  _attachment_body = [{
    "timestamp": "2015-03-23T07:42:01.303046",
    "event_type": "alert",
    "src_ip": "1.1.1.1",
    "src_port": 18192,
    "dest_ip": "46.231.41.166",
    "dest_port": 62004,
    "proto": "TCP",
    "alert": {
      "action": "allowed",
      "gid": "1",
      "signature_id": "88006",
      "rev": "1",
      "signature": "GeoIP from NL, Netherlands ",
      "category": "",
      "severity": "3"
    }
  }],
  _attachment_mimetype = [json/java + memory],
  basename = [simple_eve.json]
}]
UPDATE 2017-06-22
You MUST populate the data in the structure in order for this to work, by using addValues or setValues:
{
  addValues {
    micDefaultHeader : [
      {
        eventTimestampString : "2017-06-22 18:18:36"
      }
    ]
  }
}
After debugging the morphline toAvro sources, it appears that the record is the first object to be evaluated, no matter what you put in your mappings structure.
The solution is quite simple, but unfortunately took a little extra time: Eclipse, running the Flume agent in debug mode, cloning the source code, and lots of coffee.
Here it goes.
my schema:
{
  "type" : "record",
  "name" : "co_lowbalance_event",
  "namespace" : "co.tigo.billing.cboss.lowBalance",
  "fields" : [
    {
      "name" : "dummyValue",
      "type" : "string",
      "default" : "dummy"
    },
    {
      "name" : "micDefaultHeader",
      "type" : {
        "type" : "record",
        "name" : "mic_default_header_v_1_0",
        "namespace" : "com.millicom.schemas.root.struct",
        "doc" : "standard millicom header definition",
        "fields" : [
          {
            "name" : "eventTimestampString",
            "type" : "string",
            "default" : "12345678910"
          }
        ]
      }
    }
  ]
}
morphlines file:
morphlines : [
  {
    id : convertJsonToAvro
    importCommands : ["org.kitesdk.**"]
    commands : [
      {
        readJson {
          outputClass : java.util.Map
        }
      }
      {
        addValues {
          micDefaultHeader : [{}]
        }
      }
      {
        logDebug { format : "my record: {}", args : ["#{}"] }
      }
      {
        toAvro {
          schemaFile : /home/asarubbi/Development/test/co_lowbalance_event.avsc
          mappings : {
            "micDefaultHeader" : micDefaultHeader
            "micDefaultHeader/eventTimestampString" : eventTimestampString
          }
        }
      }
      {
        writeAvroToByteArray {
          format : containerlessJSON
          codec : null
        }
      }
    ]
  }
]
the magic lies here:
{
  addValues {
    micDefaultHeader : [{}]
  }
}
and in the mappings:
mappings : {
  "micDefaultHeader" : micDefaultHeader
  "micDefaultHeader/eventTimestampString" : eventTimestampString
}
Explanation: inside the code, the first field name evaluated is micDefaultHeader, which is of type RECORD. As there is no way to specify a default value for a RECORD (logically correct), the toAvro code evaluates it, finds no value configured in the mappings, and fails because it (wrongly) detects that the record is empty when it shouldn't be.
However, taking a look at the code, you can see that it only requires a Map object containing no values to please the parser and continue to the next element.
So we add a map object using addValues and fill it with an empty map, [{}]. Note that the name must match the record that is giving you the empty value; in my case, "micDefaultHeader".
Feel free to comment if you have a better solution, as this looks like a "dirty fix".

DataTables Uncaught SyntaxError: Unexpected token :

I'm trying to use the DataTables component with data provided by a REST API. Chrome reports the error Uncaught SyntaxError: Unexpected token : on line 2 (see the JSON below) when I use server-side data, but it works if I use a text file. The setup is:
$('#table_id').dataTable({
  "bProcessing": true,
  "bServerSide": true,
  "sAjaxSource": "http://mylocalhost:8888/_ah/api/realestate/v1/properties/demo",
  //"sAjaxSource": "data.txt",
  "sAjaxDataProp": "items",
  "aoColumns": [{
    "mData": "id"
  }],
  "fnServerData": function (sUrl, aoData, fnCallback, oSettings) {
    oSettings.jqXHR = $.ajax({
      "url": sUrl,
      "data": aoData,
      "success": fnCallback,
      "dataType": "jsonp",
      "cache": false
    });
  }
});
The JSON returned by the server or in the data.txt file:
{
  "iTotalRecords" : 10,
  "iTotalDisplayRecords" : 10,
  "sEcho" : "1",
  "items" : [
    { "id" : "0" },
    { "id" : "1" },
    { "id" : "2" },
    { "id" : "3" },
    { "id" : "4" },
    { "id" : "5" },
    { "id" : "6" },
    { "id" : "7" },
    { "id" : "8" },
    { "id" : "9" }
  ]
}
Changing sAjaxSource to data.txt works, but it fails when the same data comes from the server.
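A likely cause (not confirmed in the thread): with `dataType: "jsonp"`, jQuery injects the response as a script, and a bare JSON body is not valid JavaScript, so the browser stops at the first colon with exactly this error. A minimal sketch of the failure mode, without jQuery:

```javascript
// A JSONP response must be executable JavaScript (a callback call).
// A bare JSON object is not: evaluated as a script, '{' opens a block
// and the first ':' is a syntax error, i.e. "Unexpected token :".
const body = '{ "iTotalRecords" : 10 }';

let scriptError = null;
try {
  new Function(body)(); // roughly what dataType: "jsonp" does with the body
} catch (e) {
  scriptError = e; // SyntaxError
}

// dataType: "json" parses the body instead of executing it.
const parsed = JSON.parse(body);
```

If the API really returns plain JSON (and is reachable same-origin or via CORS), switching the $.ajax call to `dataType: "json"` should fix it; JSONP only works when the server wraps the payload in the requested callback function.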