Google Stackdriver Error Reporting not picking up errors - google-compute-engine

Logs with severity "ERROR" are not identified by the Error Reporting tool. Application logs are forwarded to Google Stackdriver Logging using the fluentd agent, and some of them come from third-party Java components.
{
  insertId: "14sf3lvg3ccncgh"
  jsonPayload: {
    class: "o.a.w.MarkupContainer"
    message: "Unable to find component with id 'search2' in [Form [Component id = form]]
Expected: 'form:search2'.
Found with similar names: 'form:search'
at org.apache.wicket.markup.MarkupStream.throwMarkupException(MarkupStream.java:526) ~[wicket-core-6.22.0.jar:6.22.0]
at org.apache.wicket.MarkupContainer.renderNext(MarkupContainer.java:1438) ~[wicket-core-6.22.0.jar:6.22.0]
at org.apache.wicket.MarkupContainer.renderAll(MarkupContainer.java:1557) ~[wicket-core-6.22.0.jar:6.22.0]
at org.apache.wicket.MarkupContainer.renderComponentTagBody(MarkupContainer.java:1532) ~[wicket-core-6.22.0.jar:6.22.0]
at org.apache.wicket.MarkupContainer.onComponentTagBody(MarkupContainer.java:1487) ~[wicket-core-6.22.0.jar:6.22.0]"
    milsec: "576"
    reportLocation: {…}
    serviceContext: {…}
    tag: "test.gui"
    thread: "[ajp-apr-8009-exec-5]"
  }
  labels: {…}
  logName: "projects/myservice/logs/test.gui"
  receiveTimestamp: "2017-08-29T15:20:16.847782870Z"
  resource: {…}
  severity: "ERROR"
  timestamp: "2017-08-29T15:20:11Z"
}
With the following configuration, my application logs are forwarded correctly to Google's Stackdriver Logging and all entries are correctly identified:
<source>
  @type tail
  path /var/log/test/test_gui/test_gui.log
  pos_file /var/lib/google-fluentd/pos/test_gui-multiline.pos
  read_from_head true
  tag test.gui
  format multiline
  time_format %Y-%m-%d %H:%M:%S
  format_firstline /\d{4}-\d{1,2}-\d{1,2} \d{1,2}:\d{1,2}:\d{1,2},\d{1,3}\s(?<severity>\S*)/
  format1 /^(?<time>\d{4}-\d{1,2}-\d{1,2} \d{1,2}:\d{1,2}:\d{1,2}),(?<milsec>\d{1,3})\s(?<severity>\S*)\s(?<class>\S*)\s(?<thread>\[\S*\])\s(?<message>.*)/
</source>
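For reference, a raw log line that format_firstline and format1 match would look like this (reconstructed from the parsed fields in the entry above; the exact layout of the third-party logs is an assumption):

2017-08-29 15:20:11,576 ERROR o.a.w.MarkupContainer [ajp-apr-8009-exec-5] Unable to find component with id 'search2' in [Form [Component id = form]]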
However, entries with severity ERROR were never noticed by Error Reporting.
The output was initially identified as textPayload, so I used the following filter to ensure the output became jsonPayload:
<filter test.gui>
  @type record_transformer
  <record>
    serviceContext {"service": "test.gui", "version": "1"}
    reportLocation {"filePath": "test_gui.log", "lineNumber": "unknown", "functionName": "unknown"}
    tag ${tag}
  </record>
</filter>
Still, the error's jsonPayload is being ignored.
However, if I replace the message using the filter, Error Reporting suddenly works:
<filter test.gui>
  @type record_transformer
  <record>
    serviceContext {"service": "test.gui", "version": "1"}
    reportLocation {"filePath": "test_gui.log", "lineNumber": "unknown", "functionName": "unknown"}
    message "java.lang.TestError: msg
    at com.example.TestClass.test (TestClass.java:51)
    at com.example.AnotherClass (AnotherClass.java:25)"
    tag ${tag}
  </record>
</filter>
How can I force Error Reporting to pick up these error entries? My next step would be to implement some form of alerting.

The third-party components did not produce a correct Java stack trace, so I needed reportLocation; however, it has to be nested inside context.
I changed the following line:
reportLocation {"filePath": "test_gui.log", "lineNumber": "unknown", "functionName": "unknown"}
to
context { "reportLocation" : {"filePath": "test_gui.log", "lineNumber": 1, "functionName": "unknown"} }
which ensured that the logs are now picked up by Stackdriver Error Reporting.
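With that change, each entry's jsonPayload carries the fields Error Reporting inspects when no parseable stack trace is present: message, serviceContext, and context.reportLocation. A sketch of the resulting shape, abbreviated from the entry above:

{
  "serviceContext": { "service": "test.gui", "version": "1" },
  "message": "Unable to find component with id 'search2' ...",
  "context": {
    "reportLocation": {
      "filePath": "test_gui.log",
      "lineNumber": 1,
      "functionName": "unknown"
    }
  }
}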
This is the final version of my filter:
<filter test.gui>
  @type record_transformer
  <record>
    serviceContext {"service": "test.gui", "version": "1"}
    context { "reportLocation" : {"filePath": "test_gui.log", "lineNumber": 1, "functionName": "unknown"} }
    tag ${tag}
  </record>
</filter>
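To verify that new entries are picked up, something like the following should list the reported errors (a sketch assuming the gcloud beta components are installed; double-check the flags with gcloud beta error-reporting events list --help):

gcloud beta error-reporting events list --service=test.gui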

Related

Ignore Snyk code quality issue with .snyk file

Snyk finds a code quality issue that should be ignored. I'm using the Snyk CLI:
snyk code test
✗ [High] Server-Side Request Forgery (SSRF)
Path: project/src/main/java/com/MyClass.java, line 140
Info: Unsanitized input from an HTTP parameter flows into org.apache.http.client.methods.HttpPost, where it is used as an URL to perform a request. This may result in a Server-Side Request Forgery vulnerability.
That's an example.
I know that to ignore something I need to put it in the .snyk file.
I had trouble doing that, so I've put the same thing in four variants:
ignore:
  'java/Ssrf':
    - '*':
        reason: None Given
        expires: 2023-02-17T14:43:55.203Z
        created: 2023-01-18T14:43:55.205Z
  'CWE-918':
    - '*':
        reason: None Given
        expires: 2023-02-17T14:43:55.203Z
        created: 2023-01-18T14:43:55.205Z
  java/Ssrf:
    - '*':
        reason: None Given
        expires: 2023-02-17T14:43:55.203Z
        created: 2023-01-18T14:43:55.205Z
  CWE-918:
    - '*':
        reason: None Given
        expires: 2023-02-17T14:43:55.203Z
        created: 2023-01-18T14:43:55.205Z
But it still reports that problem.
I've added --policy-path=.snyk to 'snyk code test' - no help.
I've tried using 'Server-Side Request Forgery (SSRF)' as the id - no success.
All I can find in the documentation is ignoring dependency vulnerabilities. Is it possible to use that for the code check?
I got CWE-918 and 'java/Ssrf' by running that test with JSON output:
"rules": [
{
"id": "java/Ssrf",
"name": "Ssrf",
"shortDescription": {
"text": "Server-Side Request Forgery (SSRF)"
},
"defaultConfiguration": {
"level": "error"
},
"precision": "very-high",
"repoDatasetSize": 233,
"cwe": [
"CWE-918"
]
}
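(For reference, that rules block looks like SARIF-style output, so presumably it came from something like the following; the output file name is arbitrary:)

snyk code test --json > results.json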
Is it possible to do that at all?

How to access JSON elements in the fluentd config match directive

I have set up fluentd in my Kubernetes cluster (AKS) to send the logs to Azure Blob Storage using the Microsoft plugin azure-storage-append-blob. Currently my logs are stored under the path containername/logs/file.log, but I want them under containername/logs/podname/file.log. I've used the fluent-plugin-kubernetes_metadata_filter plugin to enrich the records with Kubernetes metadata. Below is my current configuration, which did not work out for me, followed by a sample JSON record from the logs. I know this is possible; I just need a little help or guidance to finish this off.
Current configuration:
<match fluent.**>
  @type null
</match>

<source>
  @type tail
  path /var/log/containers/*.log
  pos_file /var/log/td-agent/tmp/access.log.pos
  tag container.*
  format json
  time_key time
  time_format %Y-%m-%dT%H:%M:%S.%NZ
  read_from_head true
</source>

<match container.var.log.containers.**fluentd**.log>
  @type null
</match>

<filter container.**>
  @type kubernetes_metadata
</filter>

<match **>
  @type azure-storage-append-blob
  azure_storage_account mysaname
  azure_storage_access_key mysaaccesskey
  azure_container fluentdtest
  auto_create_container true
  path logs/
  append false
  azure_object_key_format %{path}%{tag}%{time_slice}_%{index}.log
  time_slice_format %Y%m%d-%H-%M
  # if you want to use %{tag} or %Y/%m/%d/ like syntax in path / azure_blob_name_format,
  # need to specify tag for %{tag} and time for %Y/%m/%d in <buffer> argument.
  <buffer tag,time,timekey>
    @type file
    path /var/log/fluent/azurestorageappendblob
    timekey 300s
    timekey_wait 10s
    timekey_use_utc true # use utc
    chunk_limit_size 5MB
    queued_chunks_limit_size 1
  </buffer>
</match>
Sample JSON from the logs:
container.var.log.containers.nginx-connector-deployment-5bbfdf4f86-p86dq_mynamespace_nginx-ee437ca90cb3924e1def9bdaa7f682577fc16fb023c00975963a105b26591bfb.log:
{
  "log": "2020-07-16 17:12:56,761 INFO spawned: 'consumer' with pid 87068\n",
  "stream": "stdout",
  "docker": {
    "container_id": "ee437ca90cb3924e1def9bdaa7f682577fc16fb023c00975963a105b26591bfb"
  },
  "kubernetes": {
    "container_name": "nginx",
    "namespace_name": "mynamespace",
    "pod_name": "nginx-connector-deployment-5bbfdf4f86-p86dq",
    "container_image": "docker.io/nginx",
    "container_image_id": "docker-pullable://docker.io/nginx:f908584cf96053e50862e27ac40534bbd57ca3241d4175c9576dd89741b4926",
    "pod_id": "93a630f9-0442-44ed-a8d2-9a7173880a3b",
    "host": "aks-nodepoolkube-15824989-vmss00000j",
    "labels": {
      "app": "nginx",
      "pod-template-hash": "5bbfdf4f86"
    },
    "master_url": "https://docker.io:443/api",
    "namespace_id": "87092784-26b4-4dd5-a9d2-4833b72a1366"
  }
}
Below is the official GitHub link for the append-blob plugin: https://github.com/microsoft/fluent-plugin-azure-storage-append-blob
Please refer to the link below for a fluentd configuration that reads JSON/non-JSON multiline logs; try that configuration, it should work:
How to get ${kubernetes.namespace_name} for index_name in fluentd?
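One approach worth trying for the pod name specifically (a sketch, not verified: it assumes azure-storage-append-blob expands fluentd buffer placeholders in azure_object_key_format, the same mechanism the config comment above describes for %{tag} and time): add the record accessor $.kubernetes.pod_name as a buffer chunk key and reference it in the object key format.

<match **>
  @type azure-storage-append-blob
  # ... same storage account settings as above ...
  path logs/
  azure_object_key_format %{path}${$.kubernetes.pod_name}/%{time_slice}_%{index}.log
  time_slice_format %Y%m%d-%H-%M
  <buffer tag, time, $.kubernetes.pod_name>
    @type file
    path /var/log/fluent/azurestorageappendblob
    timekey 300s
  </buffer>
</match>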

JUnit XML to JSON format in Groovy with XmlSlurper

I am trying to write a bridge function to convert XML data to the JSON format shown below. Here is the data I have.
The sample XML file is:
<testsuites>
  <testsuite tests="4" failures="4" errors="0" name="AT">
    <testcase name="#1 notificate › v1 › announcement › announcement.feature/#TEST CASE: Notification: Send an announcement: Send an announcement using the minimum requirements"/>
    <testcase name="#2 notifiivate › v1 › announcement › announcement.feature/#TEST CASE: Notification: Send an ant"/>
    <testcase name="#1 No tests found in features/tests/auth/auth.POST.js">
      <failure/>
    </testcase>
    <testcase name="#2 versioninfo › versioninfo › versioninfo.feature/#TEST CASE: CDP ADMIN: Get version info: Get the version of the CDP service">
      <failure>
        name: AssertionError
        message: Rejected promise returned by test
        values:
      </failure>
    </testcase>
    <testcase name="#3 projects › edit_entitlement › edit_entitlement.feature/#TEST CASE: CDP ADMIN: Edit Entitlement: Attempt to edit an entitlement_id to be a negative number">
      <failure>
        ---
        name: AssertionError
        message: Rejected promise returned by test
        values:
        ...
      </failure>
    </testcase>
  </testsuite>
</testsuites>
I am trying to write a function in Groovy to get the JSON format below:
{
  "testsuites": {
    "testsuite": {
      "tests": "4",
      "failures": "4",
      "errors": "0",
      "name": "AT-cdpServer.Default",
      "testcase": [
        {
          "name": "#1 notificate › v1 › announcement › Send an announcement: Send an announcement using the minimum requirements"
        },
        {
          "name": "#2 notifiivate › v1 › announcement › announcement.feature/#TEST CASE: Notification: Send an ant"
        },
        {
          "name": "#1 No tests found in features/tests/auth/auth.POST.js",
          "failure": []
        },
        {
          "name": "#2 versioninfo › versioninfo › versioninfo.feature/#TEST CASE: CDP ADMIN: Get version info: Get the version of the CDP service",
          "failure": "---\n name: AssertionError\n message: Rejected promise returned by test\n values: {\"Rejected promise returned by test. Reason:\":\"Error {\\n message: 'no schema with key or ref \\\"/versioninfo.get.200\\\"',\\n}\"}\n at: Ajv.validate (node_modules/ajv/lib/ajv.js:95:19)\n ..."
        },
        {
          "name": "#3 projects › edit_entitlement › edit_entitlement.feature/#TEST CASE: CDP ADMIN: Edit Entitlement: Attempt to edit an entitlement_id to be a negative number",
          "failure": "---\n name: AssertionError\n message: Rejected promise returned by test\n values: {\"Rejected promise returned by test. Reason:\":\"TypeError {\\n message: 'Only absolute URLs are supported',\\n}\"}\n ..."
        }
      ]
    }
  }
}
I'd appreciate any pointers in the right direction, thank you.
So far I have this; it reads all the data, but the structure is off:
def toJsonBuilder(xml){
    def xmlToJson = build(new XmlSlurper().parseText(xml))
    new groovy.json.JsonBuilder(xmlToJson)
}

def build(node){
    if (node instanceof String){
        return // ignore strings...
    }
    def map = [(node.name()): node.collect]
    if (!node.attributes().isEmpty()) {
        map.put(node.name(), node.attributes().collectEntries{it})
    }
    if (!node.children().isEmpty() && !(node.children().getAt(0) instanceof String)) {
        map.put(node.children().name, node.children().collect{build(it)}.findAll{it != null})
    } else if (node.text() != ''){
        map.put(node.name(), node.text())
    }
    map
}
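For comparison, here is a minimal sketch that produces the nesting shown above. (Note that [(node.name()): node.collect] in the attempt above resolves node.collect as a child-element lookup, which is almost certainly not intended.) The sketch uses XmlParser instead of XmlSlurper, since its children()/attributes() semantics are simpler to reason about here; 'report.xml' is a hypothetical input path, and a child name that occurs only once collapses to an object rather than a list:

import groovy.json.JsonBuilder
import groovy.xml.XmlParser // on Groovy 2.x this is groovy.util.XmlParser

// Turn an element into a map: attributes become plain entries, and child
// elements are grouped by name so repeated <testcase> nodes become a list.
def build(node) {
    def map = node.attributes().collectEntries { it }
    def elements = node.children().findAll { it instanceof groovy.util.Node }
    elements.groupBy { it.name() }.each { name, nodes ->
        def values = nodes.collect { child ->
            def grandchildren = child.children().findAll { it instanceof groovy.util.Node }
            if (grandchildren.isEmpty() && child.attributes().isEmpty()) {
                def text = child.text().trim()
                text ?: [] // an empty <failure/> becomes []
            } else {
                build(child)
            }
        }
        map[name] = values.size() == 1 ? values[0] : values
    }
    map
}

def root = new XmlParser().parseText(new File('report.xml').text)
println new JsonBuilder([(root.name()): build(root)]).toPrettyString()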

Orion / Proton subscription: java.lang.NullPointerException parsing an event from Orion

My Proton instance fails with a java.lang.NullPointerException whenever an event is sent by Orion.
This is the Proton log:
proton_1 | 01-Jul-2016 09:46:03.117 INFO [http-nio-8080-exec-1] com.ibm.hrl.proton.webapp.providers.EventXmlNgsiMessageReader.readFrom started event message body reader
proton_1 | 01-Jul-2016 09:46:03.125 INFO [http-nio-8080-exec-1] com.ibm.hrl.proton.webapp.providers.EventXmlNgsiMessageReader.readFrom Event: ApeContextUpdate
proton_1 | 01-Jul-2016 09:46:03.126 SEVERE [http-nio-8080-exec-1] com.ibm.hrl.proton.webapp.providers.EventXmlNgsiMessageReader.readFrom Could not parse XML NGSI event java.lang.NullPointerException, reason: null
proton_1 | last attribute name: null last value: null
proton_1 | 01-Jul-2016 09:46:03.130 INFO [http-nio-8080-exec-1] com.ibm.hrl.proton.webapp.providers.EventXmlNgsiMessageReader.readFrom finished event message body reader
proton_1 | 01-Jul-2016 09:46:03.131 INFO [http-nio-8080-exec-1] com.ibm.hrl.proton.webapp.resources.EventResource.submitNewEvent starting submitNewEvent
proton_1 | 01-Jul-2016 09:46:03.132 SEVERE [http-nio-8080-exec-1] com.ibm.hrl.proton.webapp.resources.EventResource.submitNewEvent Could not send event, reason: java.lang.NullPointerException, message: null
I've read the Appendix of the User guide and double-checked the event name and the attributes list.
This is the XML sent by Orion:
POST /ProtonOnWebServer/rest/events HTTP/1.1
User-Agent: orion/0.28.0 libcurl/7.19.7
Host: localhost:8080
Accept: application/xml, application/json
Content-length: 772
Content-type: application/xml
<notifyContextRequest>
  <subscriptionId>57762eb9982959644644f9ee</subscriptionId>
  <originator>localhost</originator>
  <contextResponseList>
    <contextElementResponse>
      <contextElement>
        <entityId type="Ape" isPattern="false">
          <id>u1</id>
        </entityId>
        <contextAttributeList>
          <contextAttribute>
            <name>carsharing</name>
            <type>urn:x-ogc:def:trs:IDAS:1.0:ISO8601</type>
            <contextValue>2016-07-01T11:01:06</contextValue>
          </contextAttribute>
        </contextAttributeList>
      </contextElement>
      <statusCode>
        <code>200</code>
        <reasonPhrase>OK</reasonPhrase>
      </statusCode>
    </contextElementResponse>
  </contextResponseList>
</notifyContextRequest>
This is the definition of the Proton project (BTW, this is the project copied from the server filesystem, because the REST API also fails with a NullPointerException):
{
  "epn": {
    "events": [
      {
        "name": "ApeContextUpdate",
        "createdDate": "Fri Jul 01 2016",
        "attributes": [
          {
            "name": "entityId",
            "type": "String",
            "dimension": "0"
          },
          {
            "name": "entityType",
            "type": "String",
            "dimension": "0"
          },
          {
            "name": "carsharing",
            "type": "Date",
            "dimension": "0"
          }
        ]
      }
    ],
    "epas": [],
    "contexts": {
      "temporal": [],
      "segmentation": [],
      "composite": []
    },
    "consumers": [],
    "producers": [],
    "name": "t0"
  }
}
and this is my docker-compose file:
mongo:
  image: mongo:2.6
  command: --smallfiles --quiet

proton:
  image: fiware/proactivetechnologyonline
  ports:
    - "8080:8080"

orion:
  image: fiware/orion:0.28
  links:
    - mongo
    - proton
  command: -dbhost mongo --silent
  ports:
    - "1026:1026"
I'm using Orion 0.28 (the last version that supports XML notifications) and the latest Proton.
UPDATE 1 - catalina.log
07-Jul-2016 07:52:39.914 INFO [http-nio-8080-exec-1] com.ibm.hrl.proton.webapp.providers.EventXmlNgsiMessageReader.readFrom started event message body reader
07-Jul-2016 07:52:39.924 INFO [http-nio-8080-exec-1] com.ibm.hrl.proton.webapp.providers.EventXmlNgsiMessageReader.readFrom Event: ApeContextUpdate
07-Jul-2016 07:52:39.924 SEVERE [http-nio-8080-exec-1] com.ibm.hrl.proton.webapp.providers.EventXmlNgsiMessageReader.readFrom Could not parse XML NGSI event java.lang.NullPointerException, reason: null
last attribute name: null last value: null
07-Jul-2016 07:52:39.928 INFO [http-nio-8080-exec-1] com.ibm.hrl.proton.webapp.providers.EventXmlNgsiMessageReader.readFrom finished event message body reader
07-Jul-2016 07:52:39.929 INFO [http-nio-8080-exec-1] com.ibm.hrl.proton.webapp.resources.EventResource.submitNewEvent starting submitNewEvent
07-Jul-2016 07:52:39.929 SEVERE [http-nio-8080-exec-1] com.ibm.hrl.proton.webapp.resources.EventResource.submitNewEvent Could not send event, reason: java.lang.NullPointerException, message: null
The problem seems to be that your Proton instance is not actually configured with your project's JSON definition file; therefore, when you POST an event of any type, you will always get a NullPointerException, since no such event can be found in Proton's metadata.
Please try to configure your instance's admin interface, as described here:
http://proactive-technology-online.readthedocs.io/en/latest/Proton-InstallationAndAdminGuide/index.html (the "Setup Apache Tomcat for management" part)
Then run the following query:
GET http://<ip of the machine running Proton>:8080/ProtonOnWebServerAdmin/resources/definitions
This should return all the project definitions this instance has...
Then, if you see your project in the list, you can retrieve its specific definition by running:
GET http://<ip of the machine running Proton>:8080/ProtonOnWebServerAdmin/resources/definitions/{definition_name}
I think this will either return nothing or be empty.
You can update the definitions using the RESTful interface, as described here: http://forge.fiware.org/plugins/mediawiki/wiki/fiware/index.php/Complex_Event_Processing_Open_RESTful_API_Specification (under the "Managing Definitions Repository" part).
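For example, uploading or updating a project definition should be possible with a plain HTTP PUT along these lines (a sketch; check the exact verb and path against the API spec above; t0.json is the definition file shown earlier and t0 is the project name from it):

curl -X PUT -H "Content-Type: application/json" -d @t0.json http://<ip of the machine running Proton>:8080/ProtonOnWebServerAdmin/resources/definitions/t0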

BOSH implementation on ejabberd

I tried to start BOSH on ejabberd. My ejabberd.cfg snippet is below:
{5280, ejabberd_http, [
  {request_handlers, [
    {["xmpp-httpbind"], mod_http_bind}
  ]},
  captcha,
  http_bind,
  http_poll,
  web_admin
]}
http://localhost:5280/http-bind fails to open any page, and my client gets this response from the server:
Sent XML:
<iq to='localhost' id='uid:50502b03:00004823' type='get' xmlns='jabber:client'><query xmlns='jabber:iq:auth'><username>anurag</username></query></iq>
Received XML:
<iq xmlns='jabber:client' from='localhost' id='uid:505029df:00004823' type='error'><error code='503' type='cancel'><service-unavailable xmlns='urn:ietf:params:xml:ns:xmpp-stanzas'/></error></iq>
Sent XML: </stream:stream>
auth failed. reason: 0
ce: 18
I am using the gloox library to create the client.
Did you add {mod_http_bind, []} to your modules section?
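In the old Erlang-style ejabberd.cfg, that would look something like this (a sketch; your other modules are elided):

{modules, [
  %% ... your existing modules ...
  {mod_http_bind, []}
]}.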