This is the Outputs section of the JSON CloudFormation template I use:
"Outputs": {
"Value":{"Fn::Join": [" ",["Username : ",{"Ref": "Username"},"Password : ",{"Ref": "Pass"}]]},
}
How do I print the output in the format below?
username : abc
password : abc
Right now I am getting both on the same line (Username : abc Password : abc).
Try appending a newline; since the arguments are treated as plain strings, it should work.
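For example, a minimal sketch of that suggestion, using "\n" as the Fn::Join delimiter (note that the CloudFormation console may still render the value on a single line):

"Value": { "Fn::Join": ["\n", [
  { "Fn::Join": [" : ", ["Username", { "Ref": "Username" }]] },
  { "Fn::Join": [" : ", ["Password", { "Ref": "Pass" }]] }
]]}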
Why not print these out as separate outputs?
"Outputs" : {
"Username" : {
"Description" : "Username",
"Value": { "Ref": "Username" }
},
"Password" : {
"Description" : "Password",
"Value": { "Ref": "Password" }
}
}
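Once they are separate outputs, you can read them back with the AWS CLI, for example (the stack name here is illustrative):

aws cloudformation describe-stacks --stack-name my-stack --query "Stacks[0].Outputs"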
I am integrating the Lua module into nginx. I want to check whether a parameter's value is empty or not. With the following code, I get the correct result for this JSON request:
Request
{
"data": {
"user": {
"username": "Ethen",
"type": "PDF"
}
},
"passport": {
"user": "001"
}
}
Code
local arg = ngx.req.get_body_data();
L="return "..arg:gsub('("[^"]-"):','[%1]=')
T=loadstring(L)()
ngx.print(T.data.user.username)
if T.data.user.username == "" then
ngx.say("Empty username");
end
But when I use this request, I get an error (attempt to call a nil value, stack traceback):
Request
{
"reference" : "567456314",
"callback_url" : "http://www.example.com/",
"verification_mode" : "any",
"document" : {
"proof" : "data:image/png;base64,iVBOR=“,
"additional_proof": "data:image/png;base64,iVBORw0=",
"supported_types" : ["id_card","driving_license","passport"],
"expiry_date" : "",
"document_number" : ""
},
"address" : {
"proof" : "data:image/png;base64,iVBORw0KG=",
"supported_types" : ["id_card","bank_statement"],
"name" : "",
"issue_date":""
}
}
Code
local arg = ngx.req.get_body_data();
L="return "..arg:gsub('("[^"]-"):','[%1]=')
T=loadstring(L)()
ngx.print(T.document.proof)
if T.document.proof == "" then
ngx.say("Empty proof");
end
What is the problem, and what is the solution? Thanks in advance!
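The root cause: the gsub pattern '("[^"]-"):' only rewrites keys where the colon immediately follows the closing quote ("key":). In the second request most keys are written as "key" : value, with spaces around the colon, so they are left untouched; the generated chunk is therefore not valid Lua, loadstring returns nil, and calling that nil value raises "attempt to call a nil value". Rather than patching the pattern, a real JSON parser is safer; a minimal sketch assuming the cjson module bundled with OpenResty:

local cjson = require "cjson.safe"   -- the safe variant returns nil, err instead of raising

ngx.req.read_body()                  -- make sure the request body has been read
local body = ngx.req.get_body_data()
local t, err = cjson.decode(body)
if not t then
    ngx.say("invalid JSON: ", err)
    return
end

ngx.print(t.document.proof)
if t.document.proof == "" then
    ngx.say("Empty proof")
end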
I have this specific JSON file I'm not allowed to change, which looks like this:
{
"computers" : {
"John" : {
"version" : "1.42.0",
"environment" : "Default",
"platform" : "x64",
"admin" : "true"
},
"Peter" : {
"version" : "1.43.6",
"environment" : "Default",
"platform" : "x64",
"admin" : "true"
},
"Eric" : {
"version" : "1.43.6",
"environment" : "Default",
"platform" : "x64",
"admin" : "false"
}
}
}
I use the JSON.parse() method to parse the file, and I put the result into a MatTableDataSource.
The problem is that when I need to display it in my MatTable, I can't really access it the way I want.
I have a column where I want to display all the version parameters, but I can't write: this.dataSource.computers.????.version
Do you see what I'm getting at? Do you have any idea of what I could do differently to solve this?
Looking forward to your answers.
The Angular mat-table requires the input data to be an array. First, we use Object.keys() to extract the keys, which gives us the list of names. Then we use Object.values() to get the values under each key as an array. Finally, we map over that array and attach the matching name from the list of names to each object.
const data = {
"computers": {
"John": {
"version": "1.42.0",
"environment": "Default",
"platform": "x64",
"admin": "true"
},
"Peter": {
"version": "1.43.6",
"environment": "Default",
"platform": "x64",
"admin": "true"
},
"Eric": {
"version": "1.43.6",
"environment": "Default",
"platform": "x64",
"admin": "false"
}
}
};
const nameList = Object.keys(data.computers);
const dataList = Object.values(data.computers).map((obj, index) => {
obj['name'] = nameList[index];
return obj;
});
console.log(dataList);
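Equivalently, Object.entries() yields each key together with its value, which avoids pairing by index (a sketch in the same spirit):

const dataList = Object.entries(data.computers).map(
  ([name, props]) => ({ name, ...props })
);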
I am trying to find a value in a JSON file, and when it matches I need the entire enclosing JSON block, not just the matching fragment.
Here is my sample json
[{
"name" : "Redirect to Website 1",
"behaviors" : [ {
"name" : "redirect",
"options" : {
"mobileDefaultChoice" : "DEFAULT",
"destinationProtocol" : "HTTPS",
"destinationHostname" : "SAME_AS_REQUEST",
"responseCode" : 302
}
} ],
"criteria" : [ {
"name" : "requestProtocol",
"options" : {
"value" : "HTTP"
}
} ],
"criteriaMustSatisfy" : "all"
},
{
"name" : "Redirect to Website 2",
"behaviors" : [ {
"name" : "redirect",
"options" : {
"mobileDefaultChoice" : "DEFAULT",
"destinationProtocol" : "HTTPS",
"destinationHostname" : "SAME_AS_REQUEST",
"responseCode" : 301
}
} ],
"criteria" : [ {
"name" : "contentType",
"options" : {
"matchOperator" : "IS_ONE_OF",
"values" : [ "text/html*", "text/css*", "application/x-javascript*" ],
}
} ],
"criteriaMustSatisfy" : "all"
}]
I am trying to match "name" : "redirect" inside each behaviors array, and if it matches I need the entire block, including the "criteria" section; as you can see, it's under the same {} block.
I managed to find the values using select, but I'm not able to get the parent section.
https://jqplay.org/s/BWJwVdO3Zv
Any help is much appreciated!
To avoid unwanted duplication:
.[]
| first(select(.behaviors[].name == "redirect"))
Equivalently:
.[]
| select(any(.behaviors[]; .name == "redirect"))
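For example, run against the sample above (assuming it is saved as sample.json; note that both objects in the sample contain a behavior named "redirect", so both are printed):

jq '.[] | select(any(.behaviors[]; .name == "redirect"))' sample.json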
You can try this jq command:
<file jq '.[] | select(.behaviors[].name=="redirect")'
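If you want the matching objects kept together in a single array rather than emitted as a stream, a small variation (sketch):

jq 'map(select(any(.behaviors[]; .name == "redirect")))' sample.json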
I have an index/type in ES which has the following type of records:
body "{\"Status\":\"0\",\"Time\":\"2017-10-3 16:39:58.591\"}"
type "xxxx"
source "11.2.21.0"
The body field is a JSON string, so I want to search, for example, for the records that have Status:0 in their JSON body.
The query should look something like this (it doesn't work):
GET <host>:<port>/index/type/_search
{
"query": {
"match" : {
"body" : "Status:0"
}
}
}
Any ideas?
You have to change the analyzer settings of your index.
For the JSON pattern you presented, you will need a char_filter and a tokenizer which strip the JSON punctuation and then tokenize according to your needs.
Your analyzer should contain a tokenizer and a char_filter like these:
{
"tokenizer" : {
"type": "pattern",
"pattern": ","
},
"char_filter" : [ {
"type" : "mapping",
"mappings" : [ "{ => ", "} => ", "\" => " ]
} ],
"text" : [ "{\"Status\":\"0\",\"Time\":\"2017-10-3 16:39:58.591\"}" ]
}
Explanation: the char_filter removes the characters { } ", and the tokenizer splits on commas.
These can be tested using the Analyze API. If you execute the above JSON against this API you will get these tokens:
{
"tokens" : [ {
"token" : "Status:0",
"start_offset" : 2,
"end_offset" : 13,
"type" : "word",
"position" : 0
}, {
"token" : "Time:2017-10-3 16:39:58.591",
"start_offset" : 15,
"end_offset" : 46,
"type" : "word",
"position" : 1
} ]
}
The first token ("Status:0") returned by the Analyze API is the one you were using in your search.
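For completeness, a sketch of how such an analyzer might be wired into the index settings (names like json_body_analyzer and my-index are illustrative, and the mapping assumes a recent Elasticsearch without mapping types):

PUT /my-index
{
  "settings": {
    "analysis": {
      "char_filter": {
        "strip_json": {
          "type": "mapping",
          "mappings": [ "{ => ", "} => ", "\" => " ]
        }
      },
      "tokenizer": {
        "comma": { "type": "pattern", "pattern": "," }
      },
      "analyzer": {
        "json_body_analyzer": {
          "type": "custom",
          "char_filter": [ "strip_json" ],
          "tokenizer": "comma"
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "body": { "type": "text", "analyzer": "json_body_analyzer" }
    }
  }
}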
I'm trying to convert JSON into Avro using the kite-sdk morphline module. After playing around I'm able to convert the JSON into Avro using a simple schema (no complex data types).
Then I took it one step further and modified the Avro schema as displayed below (subrec.avsc). As you can see, the schema consists of a subrecord.
As soon as I tried to convert the JSON to Avro using the morphlines.conf and the subrec.avsc it failed.
Somehow the JSON paths "/record_type[]/alert/action" are not translated by the toAvro function.
The morphlines.conf
morphlines : [
{
id : morphline1
importCommands : ["org.kitesdk.**"]
commands : [
# Read the JSON blob
{ readJson: {} }
{ logError { format : "record: {}", args : ["#{}"] } }
# Extract JSON
{ extractJsonPaths { flatten: false, paths: {
"/record_type[]/alert/action" : /alert/action,
"/record_type[]/alert/signature_id" : /alert/signature_id,
"/record_type[]/alert/signature" : /alert/signature,
"/record_type[]/alert/category" : /alert/category,
"/record_type[]/alert/severity" : /alert/severity
} } }
{ logError { format : "EXTRACTED THIS : {}", args : ["#{}"] } }
{ extractJsonPaths { flatten: false, paths: {
timestamp : /timestamp,
event_type : /event_type,
source_ip : /src_ip,
source_port : /src_port,
destination_ip : /dest_ip,
destination_port : /dest_port,
protocol : /proto,
} } }
# Create Avro according to schema
{ logError { format : "WE GO TO AVRO"} }
{ toAvro { schemaFile : /etc/flume/conf/conf.empty/subrec.avsc } }
# Create Avro container
{ logError { format : "WE GO TO BINARY"} }
{ writeAvroToByteArray { format: containerlessBinary } }
{ logError { format : "DONE!!!"} }
]
}
]
And the subrec.avsc
{
"type" : "record",
"name" : "Event",
"fields" : [ {
"name" : "timestamp",
"type" : "string"
}, {
"name" : "event_type",
"type" : "string"
}, {
"name" : "source_ip",
"type" : "string"
}, {
"name" : "source_port",
"type" : "int"
}, {
"name" : "destination_ip",
"type" : "string"
}, {
"name" : "destination_port",
"type" : "int"
}, {
"name" : "protocol",
"type" : "string"
}, {
"name": "record_type",
"type" : ["null", {
"name" : "alert",
"type" : "record",
"fields" : [ {
"name" : "action",
"type" : "string"
}, {
"name" : "signature_id",
"type" : "int"
}, {
"name" : "signature",
"type" : "string"
}, {
"name" : "category",
"type" : "string"
}, {
"name" : "severity",
"type" : "int"
}
] } ]
} ]
}
At the { logError { format : "EXTRACTED THIS : {}", args : ["#{}"] } } command, I get the following output:
[{
/record_type[]/alert / action = [allowed],
/record_type[]/alert / category = [],
/record_type[]/alert / severity = [3],
/record_type[]/alert / signature = [GeoIP from NL,
Netherlands],
/record_type[]/alert / signature_id = [88006],
_attachment_body = [{
"timestamp": "2015-03-23T07:42:01.303046",
"event_type": "alert",
"src_ip": "1.1.1.1",
"src_port": 18192,
"dest_ip": "46.231.41.166",
"dest_port": 62004,
"proto": "TCP",
"alert": {
"action": "allowed",
"gid": "1",
"signature_id": "88006",
"rev": "1",
"signature" : "GeoIP from NL, Netherlands ",
"category" : ""
"severity" : "3"
}
}],
_attachment_mimetype=[json/java + memory],
basename = [simple_eve.json]
}]
UPDATE 2017-06-22
You MUST populate the data in the structure in order for this to work, by using addValues or setValues:
{
addValues {
micDefaultHeader : [
{
eventTimestampString : "2017-06-22 18:18:36"
}
]
}
}
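Equivalently you can use setValues, which replaces any existing values for the field instead of appending to them (a sketch):

{
  setValues {
    micDefaultHeader : [
      {
        eventTimestampString : "2017-06-22 18:18:36"
      }
    ]
  }
}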
After debugging the sources of the morphline toAvro command, it appears that the record is the first object to be evaluated, no matter what you put in your mappings structure.
The solution is quite simple, but unfortunately it took a little extra time: Eclipse, running the Flume agent in debug mode, cloning the source code, and lots of coffee.
Here it goes.
My schema:
{
"type" : "record",
"name" : "co_lowbalance_event",
"namespace" : "co.tigo.billing.cboss.lowBalance",
"fields" : [ {
"name" : "dummyValue",
"type" : "string",
"default" : "dummy"
}, {
"name" : "micDefaultHeader",
"type" : {
"type" : "record",
"name" : "mic_default_header_v_1_0",
"namespace" : "com.millicom.schemas.root.struct",
"doc" : "standard millicom header definition",
"fields" : [ {
"name" : "eventTimestampString",
"type" : "string",
"default" : "12345678910"
} ]
}
} ]
}
Morphlines file:
morphlines : [
{
id : convertJsonToAvro
importCommands : ["org.kitesdk.**"]
commands : [
{
readJson {
outputClass : java.util.Map
}
}
{
addValues {
micDefaultHeader : [{}]
}
}
{
logDebug { format : "my record: {}", args : ["#{}"] }
}
{
toAvro {
schemaFile : /home/asarubbi/Development/test/co_lowbalance_event.avsc
mappings : {
"micDefaultHeader" : micDefaultHeader
"micDefaultHeader/eventTimestampString" : eventTimestampString
}
}
}
{
writeAvroToByteArray {
format : containerlessJSON
codec : null
}
}
]
}
]
The magic lies here:
{
addValues {
micDefaultHeader : [{}]
}
}
And in the mappings:
mappings : {
"micDefaultHeader" : micDefaultHeader
"micDefaultHeader/eventTimestampString" : eventTimestampString
}
Explanation:
Inside the code, the first field name evaluated is micDefaultHeader, which is of type RECORD. As there is no way to specify a default value for a RECORD (logically correct), the toAvro code evaluates it, finds no value configured in the mappings, and therefore fails: it (wrongly) detects that the record is empty when it shouldn't.
However, if you take a look at the code, you can see that it requires a Map object containing no values in order to satisfy the parser and continue to the next element.
So we add a map object using addValues and fill it with an empty map: [{}]. Note that the name must match the record that is giving you the empty value; in my case, "micDefaultHeader".
Feel free to comment if you have a better solution, as this looks like a "dirty fix".