Configuring amq.topic binding with x-filter-jms-selector argument - qpid

Any idea how to configure an amq.topic binding with the x-filter-jms-selector argument?
I know how to do that in the web admin UI.
If we are amending the Qpid config file directly, how do we add this filter there?

The config JSON will have something like this -
{
  "id" : "1c91c97b-df6d-44e8-bf5d-673e7f0133b5",
  "name" : "amq.topic",
  "type" : "topic",
  "durableBindings" : [ {
    "arguments" : { },
    "bindingKey" : "*.*.event",
    "destination" : "test"
  }, {
    "arguments" : {
      "x-filter-jms-selector" : "event NOT IN ('location', 'weather')"
    },
    "bindingKey" : "*.*.tick",
    "destination" : "test"
  } ],
  "lastUpdatedBy" : "guest",
  "lastUpdatedTime" : 1590073211015,
  "createdBy" : null,
  "createdTime" : 1589575285215
}
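For reference, the binding above already shows the shape: the selector goes into the binding's arguments map under the x-filter-jms-selector key. The following is a minimal sketch of how such a selector then behaves from the JMS side; the broker URL, credentials, and topic address syntax are assumptions, so adjust them for your client and broker.

// Hedged sketch: shows how the selector in the binding above would filter
// messages. Broker URL, credentials, and address syntax are assumptions.
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.jms.Topic;

import org.apache.qpid.jms.JmsConnectionFactory;

public class SelectorSketch {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory =
                new JmsConnectionFactory("amqp://localhost:5672"); // hypothetical URL
        Connection connection = factory.createConnection("guest", "guest");
        try {
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            // Routed via the "*.*.tick" binding key from the config above
            Topic topic = session.createTopic("amq.topic/eu.de.tick"); // address syntax is an assumption
            MessageProducer producer = session.createProducer(topic);

            TextMessage kept = session.createTextMessage("routed to 'test'");
            kept.setStringProperty("event", "price"); // passes the JMS selector
            producer.send(kept);

            TextMessage dropped = session.createTextMessage("not routed");
            dropped.setStringProperty("event", "weather"); // rejected: NOT IN ('location', 'weather')
            producer.send(dropped);
        } finally {
            connection.close();
        }
    }
}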

Related

Extract JSON value using JMeter

I have this JSON:
{
  "totalMemory" : 12206567424,
  "totalProcessors" : 4,
  "version" : "0.4.1",
  "agent" : {
    "reconnectRetrySec" : 5,
    "agentName" : "1001",
    "checkRecovery" : false,
    "backPressure" : 10000,
    "throttler" : 100
  },
  "logPath" : "/eq/equalum/eqagent-0.4.1.0-SNAPSHOT/logs",
  "startTime" : 1494837249902,
  "status" : {
    "current" : "active",
    "currentMessage" : null,
    "previous" : "pending",
    "previousMessage" : "Recovery:Starting pipelines"
  },
  "autoStart" : false,
  "recovery" : {
    "agentName" : "1001",
    "partitionInfo" : { },
    "topicToInitialCapturePosition" : { }
  },
  "sources" : [ {
    "dataSource" : "oracle",
    "name" : "oracle_source",
    "captureType" : "directOverApi",
    "streams" : [ ],
    "idlePollingFreqMs" : 100,
    "status" : {
      "current" : "active",
      "currentMessage" : null,
      "previous" : "pending",
      "previousMessage" : "Trying to init storage"
    },
    "host" : "192.168.191.5",
    "metricsType" : { },
    "bulkSize" : 10000,
    "user" : "STACK",
    "password" : "********",
    "port" : 1521,
    "service" : "equalum",
    "heartbeatPeriodInMillis" : 1000,
    "lagObjective" : 1,
    "dataSource" : "oracle"
  } ],
  "upTime" : "157 min, 0 sec",
  "build" : "0-SNAPSHOT",
  "target" : {
    "targetType" : "equalum",
    "agentID" : 1001,
    "engineServers" : "192.168.56.100:9000",
    "kafkaOptions" : null,
    "eventsServers" : "192.168.56.100:9999",
    "jaasConfigurationPath" : null,
    "securityProtocol" : "PLAINTEXT",
    "stateMonitorTopic" : "_state_change",
    "targetType" : "equalum",
    "status" : {
      "current" : "active",
      "currentMessage" : null,
      "previous" : "pending",
      "previousMessage" : "Recovery:Starting pipelines"
    },
    "serializationFormat" : "avroBinary"
  }
}
I am trying to use JMeter to extract the value of agentID. How can I do that in JMeter, and which would be better: a Regular Expression Extractor or the JSON Extractor?
What I am trying to do is extract the agentID value so I can use it in another HTTP Request sampler, but first I have to extract it from this response.
Thanks!
I believe using the JSON Extractor is the best way to get this agentID value; the relevant JsonPath query is as simple as $..agentID
See the following reference material:
JsonPath - Getting Started - for initial information regarding JsonPath language, functions, operators, etc.
JMeter's JSON Path Extractor Plugin - Advanced Usage Scenarios - for more complex scenarios.
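As a quick way to sanity-check the expression outside JMeter, here is a minimal sketch using the Jayway json-path library (the engine behind JMeter's JSON Extractor); the trimmed JSON sample string is an assumption for illustration.

// Minimal sketch: verifying the $..agentID deep-scan expression with the
// Jayway json-path library. The trimmed JSON sample is illustrative only.
import java.util.List;

import com.jayway.jsonpath.JsonPath;

public class AgentIdCheck {
    public static void main(String[] args) {
        String json = "{ \"target\" : { \"agentID\" : 1001, \"targetType\" : \"equalum\" } }";
        // "$.." performs a deep scan, so every agentID anywhere in the
        // document is returned as a list of matches
        List<Integer> ids = JsonPath.read(json, "$..agentID");
        System.out.println(ids.get(0)); // prints 1001
    }
}

In JMeter itself, give the extractor a variable name (say agentID), set the JSON Path expression to $..agentID and Match No. to 1, then reference the value in the next sampler as ${agentID}.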

firebase rules and validation

I am trying to do some basic validation on my Firebase database to ensure that when an order is submitted, both email and phone are present.
This is the rule I thought would achieve this, but I am getting an error in the simulator saying write access denied:
{
  "rules": {
    "Orders": {
      ".read": true,
      ".write": true,
      "$order_id": {
        ".validate": "newData.hasChildren(['email', 'phone'])"
      }
    }
  }
}
This is a child node of /Orders:
"-KeDyBIqnzNik0vOCEfQ" : {
"date" : "2017-03-02T23:22:32+1100",
"email" : "beanindustries#gmail.bean",
"items" : [ {
"description" : "Almond",
"name" : "Cappuccino",
"price" : ".5",
"qty" : 1
}, {
"description" : "Almond",
"name" : "Cappuccino",
"price" : ".5",
"qty" : 1
} ],
"name" : "Mr Bean",
"notes" : "\n\n",
"phone" : "0412258499",
"status" : "new"
}
So, it turns out I wasn't using the data section within the simulator, which is why my validations failed. I have since pasted the above JSON into that box, and the validations appear to be working as expected.

Manipulating JSON messages from Kafka topic using Logstash filter

I am using Logstash 2.4 to read JSON messages from a Kafka topic and send them to an Elasticsearch index.
The JSON format is as below --
{
  "schema": {
    "type": "struct",
    "fields": [
      {
        "type": "string",
        "optional": false,
        "field": "reloadID"
      },
      {
        "type": "string",
        "optional": false,
        "field": "externalAccountID"
      },
      {
        "type": "int64",
        "optional": false,
        "name": "org.apache.kafka.connect.data.Timestamp",
        "version": 1,
        "field": "reloadDate"
      },
      {
        "type": "int32",
        "optional": false,
        "field": "reloadAmount"
      },
      {
        "type": "string",
        "optional": true,
        "field": "reloadChannel"
      }
    ],
    "optional": false,
    "name": "reload"
  },
  "payload": {
    "reloadID": "328424295",
    "externalAccountID": "9831200013",
    "reloadDate": 1446242463000,
    "reloadAmount": 240,
    "reloadChannel": "C1"
  }
}
Without any filter in my config file, the target documents from the ES index look like below --
{
  "_index" : "kafka_reloads",
  "_type" : "logs",
  "_id" : "AVfcyTU4SyCFNFP2z5-l",
  "_score" : 1.0,
  "_source" : {
    "schema" : {
      "type" : "struct",
      "fields" : [ {
        "type" : "string",
        "optional" : false,
        "field" : "reloadID"
      }, {
        "type" : "string",
        "optional" : false,
        "field" : "externalAccountID"
      }, {
        "type" : "int64",
        "optional" : false,
        "name" : "org.apache.kafka.connect.data.Timestamp",
        "version" : 1,
        "field" : "reloadDate"
      }, {
        "type" : "int32",
        "optional" : false,
        "field" : "reloadAmount"
      }, {
        "type" : "string",
        "optional" : true,
        "field" : "reloadChannel"
      } ],
      "optional" : false,
      "name" : "reload"
    },
    "payload" : {
      "reloadID" : "155559213",
      "externalAccountID" : "9831200014",
      "reloadDate" : 1449529746000,
      "reloadAmount" : 140,
      "reloadChannel" : "C1"
    },
    "@version" : "1",
    "@timestamp" : "2016-10-19T11:56:09.973Z"
  }
}
But I want only the value part of the "payload" field to go to my ES index as the target JSON body. So I tried to use the 'mutate' filter in my config file, as below --
input {
  kafka {
    zk_connect => "zksrv-1:2181,zksrv-2:2181,zksrv-4:2181"
    group_id => "logstash"
    topic_id => "reload"
    consumer_threads => 3
  }
}
filter {
  mutate {
    remove_field => [ "schema", "@version", "@timestamp" ]
  }
}
output {
  elasticsearch {
    hosts => ["datanode-6:9200","datanode-2:9200"]
    index => "kafka_reloads"
  }
}
With this filter, the ES documents now look like below --
{
  "_index" : "kafka_reloads",
  "_type" : "logs",
  "_id" : "AVfch0yhSyCFNFP2z59f",
  "_score" : 1.0,
  "_source" : {
    "payload" : {
      "reloadID" : "850846698",
      "externalAccountID" : "9831200013",
      "reloadDate" : 1449356706000,
      "reloadAmount" : 30,
      "reloadChannel" : "C1"
    }
  }
}
But actually it should be like below --
{
  "_index" : "kafka_reloads",
  "_type" : "logs",
  "_id" : "AVfch0yhSyCFNFP2z59f",
  "_score" : 1.0,
  "_source" : {
    "reloadID" : "850846698",
    "externalAccountID" : "9831200013",
    "reloadDate" : 1449356706000,
    "reloadAmount" : 30,
    "reloadChannel" : "C1"
  }
}
Is there a way to do this? Can anyone help me on this?
I also tried the below filter --
filter {
  json {
    source => "payload"
  }
}
But that is giving me errors like --
Error parsing json {:source=>"payload", :raw=>{"reloadID"=>"572584696", "externalAccountID"=>"9831200011", "reloadDate"=>1449093851000, "reloadAmount"=>180, "reloadChannel"=>"C1"}, :exception=>java.lang.ClassCastException: org.jruby.RubyHash cannot be cast to org.jruby.RubyIO, :level=>:warn}
Any help will be much appreciated.
Thanks
Gautam Ghosh
You can achieve what you want using the following ruby filter:
ruby {
  code => "
    event.to_hash.delete_if {|k, v| k != 'payload'}
    event.to_hash.update(event['payload'].to_hash)
    event.to_hash.delete_if {|k, v| k == 'payload'}
  "
}
What it does is:
remove all fields but the payload one
copy all payload inner fields to the root level
delete the payload field itself
You'll end up with what you need.
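For clarity on what the filter has to accomplish, here is a minimal sketch of the same "promote payload to the root" transformation in plain Java with Jackson. It is illustrative only, not part of the Logstash pipeline, and the trimmed event string is an assumption.

// Illustrative sketch only: the "promote payload to the document root"
// transformation expressed with Jackson, outside of Logstash.
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class PromotePayload {
    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        String event = "{ \"schema\" : { \"name\" : \"reload\" }, "
                + "\"payload\" : { \"reloadID\" : \"850846698\", \"reloadAmount\" : 30 } }";

        // Drop everything except payload's children, which become the new root
        JsonNode payload = mapper.readTree(event).get("payload");
        System.out.println(mapper.writeValueAsString(payload));
        // {"reloadID":"850846698","reloadAmount":30}
    }
}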
It's been a while, but here is a valid workaround; hope it's useful. Note that json_encode is a separate Logstash filter plugin and may need to be installed.
json_encode {
  source => "json"
  target => "json_string"
}
json {
  source => "json_string"
}

jsTree - Setting href attributes in Json data

I'm trying to create a jsTree treeview that gets its data from a .NET web service.
Everything is working, except for the a-node's href attribute. Whatever I try, it always renders as '#'.
As I understand from the documentation, all attributes in any data object get copied to the a-node.
Below is an example of my current JSON object. Can anyone figure out why the href attribute isn't copied to the nodes?
[ { "attributes" : { "id" : "rootnode_2",
"rel" : "root2"
},
"children" : [ { "attributes" : { "id" : "childnode_9",
"rel" : "folder"
},
"children" : [ { "attributes" : { "id" : "childnode_23",
"rel" : "folder"
},
"children" : null,
"data" : { "href" : "http://www.google.com",
"title" : "Test_Below_1"
},
"state" : null
} ],
"data" : { "href" : "http://www.google.com",
"title" : "Test_1"
},
"state" : null
},
{ "attributes" : { "id" : "childnode_10",
"rel" : "folder"
},
"children" : [ { "attributes" : { "id" : "childnode_24",
"rel" : "folder"
},
"children" : null,
"data" : { "href" : "http://www.google.com",
"title" : "Test_Below_2"
},
"state" : null
} ],
"data" : { "href" : "http://www.google.com",
"title" : "Test_2"
},
"state" : null
}
],
"data" : { "href" : "http://www.google.com",
"title" : "Glatt"
},
"state" : "closed"
} ]
This is how I initialize the tree:
$("#jstreejson").jstree({
json_data : {
"data": treeObject
},
themes: {
"theme": "apple",
"dots": true,
"icons": true,
"url": "/Scripts/themes/apple/style.css"
},
plugins: ['core', 'themes', 'json', "json_data"]
});
So... I'm not sure that's entirely correct. You can't control the anchor attributes as far as I know, but what you can do is add stuff to the attr hash in the JSON and then use the select_node.jstree event to open the desired link, i.e.:
.bind("select_node.jstree", function (e,data) {
var href_address = data.rslt.obj.attr("whatever");
// open desired link
}

How to add Timestamp to Spring-Data-Mongo in Roo?

I have a Spring Roo project that I am trying to create based on the log4mongo-java appender, and I want to get access to data entries that look like:
{
  "_id" : ObjectId("4f16cd30b138685057c8ebcb"),
  "timestamp" : ISODate("2012-01-18T13:46:24.704Z"),
  "level" : "INFO",
  "thread" : "catalina-exec-8180-3",
  "message" : "method execution[execution(TerminationComponent.terminateCall(..))]",
  "loggerName" : {
    "fullyQualifiedClassName" : "component_logger",
    "package" : ["component_logger"],
    "className" : "component_logger"
  },
  "properties" : {
    "cookieId" : "EDE44DC03EB65D91657885A34C80595E"
  },
  "fileName" : "LoggingAspect.java",
  "method" : "logForComponent",
  "lineNumber" : "81",
  "class" : {
    "fullyQualifiedClassName" : "com.comcast.ivr.core.aspects.LoggingAspect",
    "package" : ["com", "comcast", "ivr", "core", "aspects", "LoggingAspect"],
    "className" : "LoggingAspect"
  },
  "host" : {
    "process" : "2220#pacdcivrqaapp01",
    "name" : "pacdcivrqaapp01",
    "ip" : "24.40.31.85"
  },
  "applicationName" : "D2",
  "eventType" : "Development"
}
The timestamp looks like:
"timestamp" : ISODate("2012-01-17T22:30:19.839Z")
How can I add a field in my Logging domain object to map this field?
That's just the JavaScript Date (according to the MongoDB docs, and as can be demonstrated in the shell), so try java.util.Date.
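As a minimal sketch (the class name and collection name are assumptions, not from the original project), a Spring Data MongoDB domain object could map the field like this:

// Hedged sketch: a hypothetical Spring Data MongoDB domain class mapping the
// BSON ISODate "timestamp" field to java.util.Date. Class and collection
// names are assumptions for illustration.
import java.util.Date;

import org.springframework.data.annotation.Id;
import org.springframework.data.mongodb.core.mapping.Document;
import org.springframework.data.mongodb.core.mapping.Field;

@Document(collection = "logs") // hypothetical collection name
public class LogEntry {

    @Id
    private String id;

    @Field("timestamp") // BSON ISODate deserializes to java.util.Date
    private Date timestamp;

    private String level;
    private String message;

    public Date getTimestamp() { return timestamp; }
    public void setTimestamp(Date timestamp) { this.timestamp = timestamp; }
}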