I have a JSON file with the following contents:
[
{
"url" : "www.google.com",
"valid_from" : " Jul 31 10:16:13 2017 GMT",
"valid_till" : " Jul 31 10:16:13 2019 GMT",
"validity" : "Valid",
"days" : "464"
},
{
"url" : "www.youtube.com",
"valid_from" : " Apr 9 12:12:17 2017 GMT",
"valid_till" : " Apr 9 12:12:17 2019 GMT",
"validity" : "Valid",
"days" : "351"
}
]
I want to delete a block of JSON by passing a url argument corresponding to the block to delete.
I have a script cert-check-script-delete.sh which contains the following code:
line_num=1
cat certs.json >> certs-new.json
while read p; do                       # Iterate through each line in certs.json
    if [[ $p == *"$1"* ]];             # Check if current line contains argument
    then
        sed -i "${line_num-1}d" certs-new.json   # {
        sed -i "${line_num}d" certs-new.json     # Url
        sed -i "${line_num+1}d" certs-new.json   # Valid from
        sed -i "${line_num+2}d" certs-new.json   # Valid till
        sed -i "${line_num+3}d" certs-new.json   # Validity
        sed -i "${line_num+4}d" certs-new.json   # Days
        sed -i "${line_num+5}d" certs-new.json   # }
        break
    fi
    ((line_num++))
done <certs.json
mv certs-new.json certs.json
And after running my script with the argument www.youtube.com I'm getting weird behaviour: it seems to just delete random lines:
{
"valid_from" : " Jul 31 10:16:13 2017 GMT",
"validity" : "Valid",
{
"valid_till" : " Apr 9 12:12:17 2019 GMT",
"validity" : "Valid",
"days" : "351"
}
]
I know I should use jq for inserting/deleting JSON but I'm not able to install it at work, so please don't just comment saying use jq.
Any help is appreciated!
You can do it like below:
sed -i '/www.youtube.com/I,+6 d;$!N;/www.youtube.com/!P;D' certs-new.json
If the search string is provided as a command-line parameter, use it like this:
sed -i '/'$1'/I,+6 d;$!N;/'$1'/!P;D' certs-new.json
How it works: the first part matches the pattern www.youtube.com and deletes that line together with the 6 lines below it; the second part of the sed command looks at each remaining line paired with the one after it and deletes a line whenever the following line contains the pattern, which removes the opening brace above the matched block.
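Spelled out with comments, the same command reads like this (GNU sed is assumed here; the I flag and the addr,+N address form are GNU extensions):
sed -i '
# delete the line matching the url (case-insensitive because of I) and the 6 lines after it
/www.youtube.com/I,+6 d
# unless already on the last line, append the next input line to the pattern space
$!N
# print the first of the two lines only if the pair does not contain the url
# (this is what drops the opening { directly above a matching url line)
/www.youtube.com/!P
# delete the first line from the pattern space and restart the cycle with the remainder
D
' certs-new.json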
In your example the output will be:
[
{
"url" : "www.google.com",
"valid_from" : " Jul 31 10:16:13 2017 GMT",
"valid_till" : " Jul 31 10:16:13 2019 GMT",
"validity" : "Valid",
"days" : "464"
},
]
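If you also need the result to stay valid JSON (note the comma left after the google block in the output above), a fallback that does not require installing anything is Python's json module, which is preinstalled on most Linux systems. This is only a minimal sketch, assuming python is on the PATH and the file is certs.json as in the question:
python -c '
import json, sys

url = sys.argv[1]
with open("certs.json") as f:
    certs = json.load(f)

# keep every block whose url does not match the argument
certs = [c for c in certs if c.get("url") != url]

with open("certs.json", "w") as f:
    json.dump(certs, f, indent=4)
' "$1"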
Our origin-node.service on the master node fails with:
root#master> systemctl start origin-node.service
Job for origin-node.service failed because the control process exited with error code. See "systemctl status origin-node.service" and "journalctl -xe" for details.
root#master> systemctl status origin-node.service -l
[...]
May 05 07:17:47 master origin-node[44066]: bootstrap.go:195] Part of the existing bootstrap client certificate is expired: 2020-02-20 13:14:27 +0000 UTC
May 05 07:17:47 master origin-node[44066]: bootstrap.go:56] Using bootstrap kubeconfig to generate TLS client cert, key and kubeconfig file
May 05 07:17:47 master origin-node[44066]: certificate_store.go:131] Loading cert/key pair from "/etc/origin/node/certificates/kubelet-client-current.pem".
May 05 07:17:47 master origin-node[44066]: server.go:262] failed to run Kubelet: cannot create certificate signing request: Post https://lb.openshift-cluster.mydomain.com:8443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests: EOF
So it seems that kubelet-client-current.pem and/or kubelet-server-current.pem contains an expired certificate and the service tries to create a CSR using an endpoint which is probably not yet available (because the master is down). We tried redeploying the certificates according to the OpenShift documentation Redeploying Certificates, but this fails when it detects an expired certificate:
root#master> ansible-playbook -i /etc/ansible/hosts openshift-master/redeploy-openshift-ca.yml
[...]
TASK [openshift_certificate_expiry : Fail when certs are near or already expired] *******************************************************************************************************************************************
fatal: [master.openshift-cluster.mydomain.com]: FAILED! => {"changed": false, "msg": "Cluster certificates found to be expired or within 60 days of expiring. You may view the report at /root/cert-expiry-report.20200505T042754.html or /root/cert-expiry-report.20200505T042754.json.\n"}
[...]
root#master> cat /root/cert-expiry-report.20200505T042754.json
[...]
"kubeconfigs": [
{
"cert_cn": "O:system:cluster-admins, CN:system:admin",
"days_remaining": -75,
"expiry": "2020-02-20 13:14:27",
"health": "expired",
"issuer": "CN=openshift-signer#1519045219 ",
"path": "/etc/origin/node/node.kubeconfig",
"serial": 27,
"serial_hex": "0x1b"
},
{
"cert_cn": "O:system:cluster-admins, CN:system:admin",
"days_remaining": -75,
"expiry": "2020-02-20 13:14:27",
"health": "expired",
"issuer": "CN=openshift-signer#1519045219 ",
"path": "/etc/origin/node/node.kubeconfig",
"serial": 27,
"serial_hex": "0x1b"
},
[...]
"summary": {
"expired": 2,
"ok": 22,
"total": 24,
"warning": 0
}
}
There is a guide for OpenShift 4.4, Recovering from expired control plane certificates, but it does not apply to 3.11 and we did not find such a guide for our version.
Is it possible to recreate the expired certificates without a running master node for 3.11? Thanks for any help.
OpenShift Ansible: https://github.com/openshift/openshift-ansible/releases/tag/openshift-ansible-3.11.153-2
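For reference, the expiry of the certificate the kubelet complains about can be checked directly with openssl (diagnostic only; the path is taken from the journal output above):
openssl x509 -noout -subject -enddate -in /etc/origin/node/certificates/kubelet-client-current.pem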
Update 2020-05-06: I also executed redeploy-certificates.yml, but it fails at the same TASK:
root#master> ansible-playbook -i /etc/ansible/hosts playbooks/redeploy-certificates.yml
[...]
TASK [openshift_certificate_expiry : Fail when certs are near or already expired] ******************************************************************************
Wednesday 06 May 2020 04:07:06 -0400 (0:00:00.909) 0:01:07.582 *********
fatal: [master.openshift-cluster.mydomain.com]: FAILED! => {"changed": false, "msg": "Cluster certificates found to be expired or within 60 days of expiring. You may view the report at /root/cert-expiry-report.20200506T040603.html or /root/cert-expiry-report.20200506T040603.json.\n"}
Update 2020-05-11: Running with -e openshift_certificate_expiry_fail_on_warn=False results in:
root#master> ansible-playbook -i /etc/ansible/hosts -e openshift_certificate_expiry_fail_on_warn=False playbooks/redeploy-certificates.yml
[...]
TASK [Wait for master API to come back online] *****************************************************************************************************************
Monday 11 May 2020 03:48:56 -0400 (0:00:00.111) 0:02:25.186 ************
skipping: [master.openshift-cluster.mydomain.com]
TASK [openshift_control_plane : restart master] ****************************************************************************************************************
Monday 11 May 2020 03:48:56 -0400 (0:00:00.257) 0:02:25.444 ************
changed: [master.openshift-cluster.mydomain.com] => (item=api)
changed: [master.openshift-cluster.mydomain.com] => (item=controllers)
RUNNING HANDLER [openshift_control_plane : verify API server] **************************************************************************************************
Monday 11 May 2020 03:48:57 -0400 (0:00:00.945) 0:02:26.389 ************
FAILED - RETRYING: verify API server (120 retries left).
FAILED - RETRYING: verify API server (119 retries left).
[...]
FAILED - RETRYING: verify API server (1 retries left).
fatal: [master.openshift-cluster.mydomain.com]: FAILED! => {"attempts": 120, "changed": false, "cmd": ["curl", "--silent", "--tlsv1.2", "--max-time", "2", "--cacert", "/etc/origin/master/ca-bundle.crt", "https://lb.openshift-cluster.mydomain.com:8443/healthz/ready"], "delta": "0:00:00.182367", "end": "2020-05-11 03:51:52.245644", "msg": "non-zero return code", "rc": 35, "start": "2020-05-11 03:51:52.063277", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
root#master> systemctl status origin-node.service -l
[...]
May 11 04:23:28 master.openshift-cluster.mydomain.com origin-node[109972]: E0511 04:23:28.077964 109972 bootstrap.go:195] Part of the existing bootstrap client certificate is expired: 2020-02-20 13:14:27 +0000 UTC
May 11 04:23:28 master.openshift-cluster.mydomain.com origin-node[109972]: I0511 04:23:28.078001 109972 bootstrap.go:56] Using bootstrap kubeconfig to generate TLS client cert, key and kubeconfig file
May 11 04:23:28 master.openshift-cluster.mydomain.com origin-node[109972]: I0511 04:23:28.080555 109972 certificate_store.go:131] Loading cert/key pair from "/etc/origin/node/certificates/kubelet-client-current.pem".
May 11 04:23:28 master.openshift-cluster.mydomain.com origin-node[109972]: F0511 04:23:28.130968 109972 server.go:262] failed to run Kubelet: cannot create certificate signing request: Post https://lb.openshift-cluster.mydomain.com:8443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests: EOF
[...]
I had this same case in a customer environment. The error happens because the certificate had expired; I "cheated" by changing the OS date to before the expiry date, and the origin-node service started on my masters:
systemctl status origin-node
● origin-node.service - OpenShift Node
Loaded: loaded (/etc/systemd/system/origin-node.service; enabled; vendor preset: disabled)
Active: active (running) since Sáb 2021-02-20 20:22:21 -02; 6min ago
Docs: https://github.com/openshift/origin
Main PID: 37230 (hyperkube)
Memory: 79.0M
CGroup: /system.slice/origin-node.service
└─37230 /usr/bin/hyperkube kubelet --v=2 --address=0.0.0.0 --allow-privileged=true --anonymous-auth=true --authentication-token-webhook=true --authentication-token-webhook-cache-ttl=5m --authorization-mode=Webhook --authorization-webhook-c...
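Roughly, the trick looks like this (the timestamp is only an example before the expiry reported in the question; stop time synchronization first, otherwise the clock is corrected straight back, and the service name may be chronyd or ntpd depending on the host):
systemctl stop chronyd
date -s "2020-02-19 12:00:00"
systemctl restart origin-node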
The openshift_certificate_expiry role uses the openshift_certificate_expiry_fail_on_warn variable to determine if the playbook should fail when the days left are less than openshift_certificate_expiry_warning_days.
So try running the redeploy-certificates.yml with this additional variable set to "False":
ansible-playbook -i /etc/ansible/hosts -e openshift_certificate_expiry_fail_on_warn=False playbooks/redeploy-certificates.yml
I want to store a JSON log file where each entry has a form like this:
[
{
"remote-addr" : "127.0.0.1",
"date" : " 2018.07.28"
}
]
and I use this code:
var format = json(
  ':remote-addr:date'
);

app.use(logger({
  format: format,
  stream: fs.createWriteStream('log.json')
}));
I use this code and get:
{"remote-addr":"::ffff:127.0.0.1","date":"Sat, 28 Jul 2018 04:38:41 GMT"}
{"remote-addr":"::ffff:127.0.0.1","date":"Sat, 28 Jul 2018 04:38:41 GMT"}
{"remote-addr":"::ffff:127.0.0.1","date":"Sat, 28 Jul 2018 04:38:42 GMT"}
{"remote-addr":"::ffff:127.0.0.1","date":"Sat, 28 Jul 2018 04:38:48 GMT"}
This is a JSON file of sorts, but there are no [ ] and no commas between the entries.
How can I get a valid JSON file?
Technically Morgan does not allow you to do this, because its entire purpose is to write one standard access.log line per request (credits to Douglas Wilson for pointing that out).
Yes, you can hack around the single log line, as you did, which gives you a valid JSON line. However, to make your log.json file valid JSON as a whole, the only way I can think of is to apply some kind of post-processing to the file.
Here's what the post-processing looks like: first, read the file line by line; then build the valid JSON; finally, save it to a separate file (or overwrite log.json).
Here's how I did it. Input: your current log.json file:
{"remote-addr":"::ffff:127.0.0.1","date":"Sat, 28 Jul 2018 04:38:41 GMT"}
{"remote-addr":"::ffff:127.0.0.1","date":"Sat, 28 Jul 2018 04:38:41 GMT"}
{"remote-addr":"::ffff:127.0.0.1","date":"Sat, 28 Jul 2018 04:38:42 GMT"}
{"remote-addr":"::ffff:127.0.0.1","date":"Sat, 28 Jul 2018 04:38:48 GMT"}
The post-processing script I wrote:
const fs = require('fs');
// Since Node.js v0.12 and as of Node.js v4.0.0, there is a stable
// readline core module. That's the easiest way to read lines from a file,
// without any external modules. Credits: https://stackoverflow.com/a/32599033/1333836
const readline = require('readline');
const lineReader = readline.createInterface({
  input: fs.createReadStream('log.json')
});

const realJSON = [];

lineReader.on('line', function (line) {
  realJSON.push(JSON.parse(line));
});

lineReader.on('close', function () {
  // final-log.json is the post-processed, valid JSON file
  fs.writeFile('final-log.json', JSON.stringify(realJSON), 'utf8', () => {
    console.log('Done!');
  });
});
Result: the final-log.json file, which is valid JSON (I validated it with jsonlint, all good).
[{
"remote-addr": "::ffff:127.0.0.1",
"date": "Sat, 28 Jul 2018 04:38:41 GMT"
}, {
"remote-addr": "::ffff:127.0.0.1",
"date": "Sat, 28 Jul 2018 04:38:41 GMT"
}, {
"remote-addr": "::ffff:127.0.0.1",
"date": "Sat, 28 Jul 2018 04:38:42 GMT"
}, {
"remote-addr": "::ffff:127.0.0.1",
"date": "Sat, 28 Jul 2018 04:38:48 GMT"
}]
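Assuming the script above is saved as post-process.js (the file name is arbitrary), it can be run whenever a snapshot of the log is needed, and the result can be sanity-checked with any JSON parser, for example:
node post-process.js
python -mjson.tool < final-log.json > /dev/null && echo "valid JSON"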
I am trying to invoke an API from Data Pipeline and I am getting the error below.
This is what I am trying:
aws apigateway test-invoke-method --rest-api-id int836id123 --resource-id 1ukckkkwq1 --http-method POST --body "{\"QUEUEURL\": \"\",
\"BUCKETREGION\": \"us-east-1\",
\"FLAGFILE\": \"\",
\"FTPUSERID\": \"abcd-test-parameter\",
\"FTPPATH\": \"/abcd/Incr1\",
\"FTPPASSWORD\": \"abcd-test-parameter\",
\"PARAMETERSTOREREGION\":\"us-east-1\",
\"ISFTP2S3\": \"false\",
\"FTPSERVER\": \"11.42.123.111\",
\"BUCKETNAME\": \"path/Lineite/MAIN\",
\"QUEUEREGION\": \"\",
\"LOCALPATH\": \"path\"}"
I have verified there is no extra space or newline in the command.
I also tried to run it without the backslashes, but I get the same error.
Here is the error I get:
2018 : Lambda invocation failed with status: 400\nMon Apr 02 06:45:20
UTC 2018 : Execution failed: Could not parse request body into json:
Unexpected character ('Q' (code 81)): was expecting double-quote to
start field name\n at [Source: [B#72073757; line: 1, column: 3]\nMon
Apr 02 06:45:20 UTC 2018 : Method completed with status: 400\n",
"latency": 41,
"headers": {} }
When I run the request from the AWS CLI it works, but it does not work from Data Pipeline.
You could use a here-document to define properly formatted JSON so that you don't have to worry about escaping the quotes. Define a function as:
jsonDump()
{
cat <<EOF
{
"QUEUEURL":"",
"BUCKETREGION":"us-east-1",
"FLAGFILE":"",
"FTPUSERID":"abcd-test-parameter",
"FTPPATH":"/abcd/Incr1",
"FTPPASSWORD":"abcd-test-parameter",
"PARAMETERSTOREREGION":"us-east-1",
"ISFTP2S3":"false",
"FTPSERVER":"11.42.123.111",
"BUCKETNAME":"path/Lineite/MAIN",
"QUEUEREGION":"",
"LOCALPATH":"path"
}
EOF
}
and now call the function as below:
aws apigateway test-invoke-method --rest-api-id int836id123 --resource-id 1ukckkkwq1 --http-method POST --body "$(jsonDump)"
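If escaping still misbehaves in the pipeline's shell, another option is to keep the payload in a file (the name body.json below is arbitrary) and let the AWS CLI load the parameter value from it with its file:// prefix:
jsonDump > body.json
aws apigateway test-invoke-method --rest-api-id int836id123 --resource-id 1ukckkkwq1 --http-method POST --body file://body.json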
I'm trying to follow the guide on the link below:
http://www.viaboxx.de/code/easily-generate-live-heatmaps-for-geolocations-with-elk/#codesyntax_1
It worked fine for me the first time but when I try it now, it gives me the following error at a step where I'm trying to load the csv data. The command I execute is:
cat test.csv | /opt/logstash/bin/logstash -f geostore.conf
and I get the following error:
Settings: Default pipeline workers: 2
Pipeline main started
Error parsing csv {:field=>"message", :source=>"", :exception=>#<NoMethodError: undefined method `each_index' for nil:NilClass>, :level=>:warn}
Pipeline main has been shutdown
stopping pipeline {:id=>"main"}
Can you please help? I've spent days trying to figure this out.
Edit: adding the geostore.conf:
input {
  stdin {}
}
filter {
  # Step 1, drop the csv header line
  if [message] =~ /^#/ {
    drop {}
  }
  # Step 2, split latitude and longitude
  csv {
    separator => ','
    columns => [ 'lat', 'lon' ]
  }
  # Step 3: move lat and lon into the location object,
  # for the geo_point type defined in ES
  mutate {
    rename => [ "lat", "[location][lat]", "lon", "[location][lon]" ]
  }
}
output {
elasticsearch {
hosts => 'localhost'
index => 'geostore'
document_type => "locality"
flush_size => 1000
}
}
I've changed my output section from this:
output {
elasticsearch {
hosts => 'localhost'
index => 'geostore'
document_type => "locality"
flush_size => 1000
}
to this
output {
elasticsearch {
hosts => 'localhost'
index => 'geostore'
document_type => "locality"
flush_size => 1000
stdout {}
}
and now I'm getting a bit more verbose error message:
fetched an invalid config {:config=>"input {\n stdin {}\n}\nfilter {\n #
Step 1, drop the csv header line\n if [message] =~ /^#/ {\n drop {}\n }\n
\n # Step 2, split latitude and longitude\n csv {\n separator => ','\n
columns => [ 'lat', 'lon' ]\n }\n \n # Step 3\n # move lat and lon into
location object \n # for defined geo_point type in ES\n mutate { \n rename
=> [ \"lat\", \"[location][lat]\", \"lon\", \"[location][lon]\" ]\n
}\n}\noutput {\n elasticsearch {\n hosts => 'localhost'\n index =>
'geostore'\n document_type => \"locality\"\n flush_size => 1000\n
stdout {}\n }\n}\n\n", :reason=>"Expected one of #, => at line 29, column 12
(byte 543) after output {\n elasticsearch {\n hosts => 'localhost'\n
index => 'geostore'\n document_type => \"locality\"\n flush_size =>
1000\n stdout ", :level=>:error}
I can't understand why it worked the first time.
Settings: Default pipeline workers: 2
Pipeline main started
Error parsing csv {:field=>"message", :source=>"", :exception=>#<NoMethodError: undefined method `each_index' for nil:NilClass>, :level=>:warn}
2017-03-30T13:46:31.171Z localhost.localdomain 53.97917361, -6.389038611
2017-03-30T13:46:31.171Z localhost.localdomain 54.00310028, -6.397707778
2017-03-30T13:46:31.172Z localhost.localdomain 53.99960056, -6.381966111
2017-03-30T13:46:31.172Z localhost.localdomain 54.00534917, -6.423718889
2017-03-30T13:46:31.172Z localhost.localdomain 51.92071667, -8.475726111
2017-03-30T13:46:31.172Z localhost.localdomain 51.82731222, -8.381912222
2017-03-30T13:46:31.173Z localhost.localdomain 51.81096639, -8.415731667
2017-03-30T13:46:31.173Z localhost.localdomain 54.28450222, -8.463775556
2017-03-30T13:46:31.173Z localhost.localdomain 54.27841, -8.495700278
2017-03-30T13:46:31.173Z localhost.localdomain 54.2681225, -8.462056944
2017-03-30T13:46:31.174Z localhost.localdomain 52.276167, -9.680497
2017-03-30T13:46:31.174Z localhost.localdomain 52.25660139, -9.703921389
2017-03-30T13:46:31.174Z localhost.localdomain 52.27031306, -9.723975556
2017-03-30T13:46:31.174Z localhost.localdomain 54.95663111, -7.714384167
2017-03-30T13:46:31.175Z localhost.localdomain 54.00133111, -7.352790833
2017-03-30T13:46:31.175Z localhost.localdomain 52.34264222, -6.4854175
2017-03-30T13:46:31.176Z localhost.localdomain 52.32439028, -6.464626111
2017-03-30T13:46:31.176Z localhost.localdomain 52.33008944, -6.487005
2017-03-30T13:46:31.176Z localhost.localdomain 53.70765861, -6.374657778
2017-03-30T13:46:31.177Z localhost.localdomain 53.72636306, -6.326768611
2017-03-30T13:46:31.177Z localhost.localdomain 53.71461361, -6.336066111
2017-03-30T13:46:31.177Z localhost.localdomain 51.55948417, -9.244535833
2017-03-30T13:46:31.177Z localhost.localdomain 53.52894667, -7.358543056
2017-03-30T13:46:31.177Z localhost.localdomain 53.51801167, -7.324215
2017-03-30T13:46:31.179Z localhost.localdomain 53.16202278, -6.795522222
2017-03-30T13:46:31.179Z localhost.localdomain 53.182702, -6.819299
2017-03-30T13:46:31.179Z localhost.localdomain 52.83053972, -8.991989444
2017-03-30T13:46:31.180Z localhost.localdomain 52.85651944, -8.965725833
2017-03-30T13:46:31.180Z localhost.localdomain 53.02885028, -7.300381667
2017-03-30T13:46:31.180Z localhost.localdomain
Pipeline main has been shutdown
stopping pipeline {:id=>"main"}
Hopefully this will help others as well.
I deleted the geostore index from the command line:
curl -XDELETE 'localhost:9200/geostore?pretty';
and then went to Kibana to delete it from there as well. I then recreated the index and mapping as below and it worked:
curl -XPUT 'http://localhost:9200/geostore'
curl -XPUT 'http://localhost:9200/geostore/_mapping/locality' -d '
{
"locality" : {
"properties" : {
"location" : {
"type" : "geo_point",
"geohash_prefix": true,
"geohash_precision": "1km"
}
}
}
}'
cat test.csv | /opt/logstash/bin/logstash -f geostore.conf
This will take a few seconds to start up Logstash, parse the input and store the result in Elasticsearch.
Now that we have the data in Elasticsearch, let's move on to Kibana 4. After logging into Kibana, you need to add the index to Kibana.
Go to: Settings -> Indices -> Add New -> write "geostore" in the index name field.
After you add the index, you'll see all the fields in the documents of the index; in particular, you should check that the location property is classified as geo_point.
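From the command line, the mapping can be double-checked with the standard Elasticsearch mapping API:
curl -XGET 'http://localhost:9200/geostore/_mapping/locality?pretty'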
The whole process is described in detail at the link below.
http://www.viaboxx.de/code/easily-generate-live-heatmaps-for-geolocations-with-elk/#codesyntax_1
I am parsing through a log file and get result lines (using grep) like the following:
2017-01-26 17:19:40 +0000 docker: {"source":"stdout","log":"I, [2017-01-26T17:19:40.703988 #24] INFO -- : {\"tags\":\"structured_log\",\"payload\":{\"results\":[{\"baserate\":\"-1\"}]},\"commit_stamp\":1485451180,\"resource\":\"google_price_result_metric\",\"object_id\":\"20170126171940700\"}","container_id":"6ecbf7f64e4c9557e9dd1efbc6666a3c6c53f9cd5c18414ed5633cad8c302e","container_name":"/test-container-b49c8188c3ebe4b93300"}
2017-01-26 17:19:40 +0000 docker: {"container_id":"6ecbf7f64e4c9557e9dd1efbc6666a3c6c53f9cd5c18414ed5633cad8c302e","container_name":"/test-container-b49c8188c3ebe4b93300","source":"stdout","log":"I, [2017-01-26T17:19:40.704364 #24] INFO -- : method=POST path=/prices.xml format=xml controller=TestController action=prices status=200 duration=1686.51 view=0.08 db=0.62"}
I then extract the JSON objects with the following command:
... | grep -o -E "\{.*$"
I know I can parse a single line with python -mjson.tool like so:
... | grep -o -E "\{.*$" | tail -n1 | python -mjson.tool
But I want to parse both lines (or n lines). How can I do this in bash?
(I think xargs is supposed to let me do this, but I am new to the tool and can't figure it out)
jq can be told to accept plain text as input, and attempt to parse an extracted subset as JSON. Consider the following example, tested with jq 1.5:
jq -R 'capture("docker: (?<json>[{].*[}])$") | .json? | select(.) | fromjson' <<'EOF'
2017-01-26 17:19:40 +0000 docker: {"source":"stdout","log":"I, [2017-01-26T17:19:40.703988 #24] INFO -- : {\"tags\":\"structured_log\",\"payload\":{\"results\":[{\"baserate\":\"-1\"}]},\"commit_stamp\":1485451180,\"resource\":\"google_price_result_metric\",\"object_id\":\"20170126171940700\"}","container_id":"6ecbf7f64e4c9557e9dd1efbc6666a3c6c53f9cd5c18414ed5633cad8c302e","container_name":"/test-container-b49c8188c3ebe4b93300"}
2017-01-26 17:19:40 +0000 docker: {"container_id":"6ecbf7f64e4c9557e9dd1efbc6666a3c6c53f9cd5c18414ed5633cad8c302e","container_name":"/test-container-b49c8188c3ebe4b93300","source":"stdout","log":"I, [2017-01-26T17:19:40.704364 #24] INFO -- : method=POST path=/prices.xml format=xml controller=TestController action=prices status=200 duration=1686.51 view=0.08 db=0.62"}
EOF
...properly yields:
{
"source": "stdout",
"log": "I, [2017-01-26T17:19:40.703988 #24] INFO -- : {\"tags\":\"structured_log\",\"payload\":{\"results\":[{\"baserate\":\"-1\"}]},\"commit_stamp\":1485451180,\"resource\":\"google_price_result_metric\",\"object_id\":\"20170126171940700\"}",
"container_id": "6ecbf7f64e4c9557e9dd1efbc6666a3c6c53f9cd5c18414ed5633cad8c302e",
"container_name": "/test-container-b49c8188c3ebe4b93300"
}
{
"container_id": "6ecbf7f64e4c9557e9dd1efbc6666a3c6c53f9cd5c18414ed5633cad8c302e",
"container_name": "/test-container-b49c8188c3ebe4b93300",
"source": "stdout",
"log": "I, [2017-01-26T17:19:40.704364 #24] INFO -- : method=POST path=/prices.xml format=xml controller=TestController action=prices status=200 duration=1686.51 view=0.08 db=0.62"
}
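Alternatively, if you keep the grep extraction from your question, each extracted line is already a bare JSON object, and jq reads a stream of JSON values by default, so a plain identity filter is enough to pretty-print every object:
... | grep -o -E "\{.*$" | jq '.'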