I am trying to read OBD-2 data from a Hyundai Ioniq Electric (28 kWh version) using a Raspberry Pi and a Bluetooth ELM327 interface. Connection and data transfer work fine.
For example: sending 2105<cr><lf> gives a response (<cr> is value 0x0d = 13):
7F2112<cr>7F2112<cr>7F2112<cr>02D<cr>0:6105FFFFFFFF<cr>7F2112<cr>1:00000000001616<cr>2:161616161621FA<cr>3:26480001501616<cr>4:03E82403E80FC0<cr>5:003A0000000000<cr>6:00000000000000<cr><cr>>
The value C0 in 4:03E82403E80FC0 seems to be the State of charge (SOC) display value:
C0 -> 192 -> 192/2 % = 96%
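As a quick sanity check, the same arithmetic in Python (a minimal sketch; the frame string is copied from the response above, and taking its last byte as the SOC display value is the assumption stated above):
frame_4 = "03E82403E80FC0"                   # data of line "4:" from the 2105 response
soc_display = int(frame_4[-2:], 16) / 2.0    # 0xC0 = 192 -> 96.0 %
print(soc_display)                           # 96.0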
There are some decoding tables available (see https://github.com/JejuSoul/OBD-PIDs-for-HKMC-EVs/tree/master/Ioniq%20EV%20-%2028kWh), but how do I use these tables?
For example sending 2101<cr><lf> gives the response:
02C<cr>
0:6101FFFFF800<cr>
01E<cr>
0:6101000003FF<cr>
03D<cr>
0:6101FFFFFFFF<cr>
016<cr>
0:6101FFE00000<cr>
1:0002D402CD03F0<cr>
1:0838010A015C2F<cr>
7F2112<cr>
1:B4256026480000<cr>
1:0921921A061B03<cr>
2:000582003401BD<cr>
2:0000000A002702<cr>
2:000F4816161616<cr>
2:00000000276234<cr>
3:04B84100000000<cr>
3:5B04692F180018<cr>
3:01200000000000<cr>
3:1616160016CB3F<cr>
4:00220000600000<cr>
4:00D0FF00000000<cr>
4:CB0100007A0002<cr>
5:000001F3026A02<cr>
5:5D4000025D4600<cr>
6:D2000000000000<cr>
6:00DECA0000D8E6<cr>
7:008A2FEB090002<cr>
8:0000000003E800<cr>
<cr>
>
Please note that the line feeds were added after every carriage return (<cr>) for better readability; they are not part of the original data response.
How can I decode temperature, currents, etc. from this data?
I found the mistake myself. The ELM327 datasheet (http://elmelectronics.com/DSheets/ELM327DS.pdf) explains the AT commands in detail.
The problem was the mixing of CAN responses from multiple ECUs, caused by the AT H0 command (headers off) in the initialization phase (not described in the question). See also ELM327DS.pdf, page 44 (Multiple Responses).
When using AT H1 on startup, the responses can be decoded without problems.
Initialization (with AT H1 = headers on)
AT D\r\n
AT Z\r\n
AT L0\r\n
AT E0\r\n
AT S0\r\n
AT H1\r\n
AT SP 0\r\n
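For reference, a minimal Python sketch of sending this initialization sequence with pyserial; the RFCOMM device path and baud rate below are assumptions for a typical Bluetooth ELM327 setup, not values taken from my actual configuration:
import serial   # pyserial

# Assumed device path and baud rate for a Bluetooth ELM327 bound to an RFCOMM port.
ser = serial.Serial('/dev/rfcomm0', 38400, timeout=2)

INIT_COMMANDS = [b'AT D', b'AT Z', b'AT L0', b'AT E0', b'AT S0', b'AT H1', b'AT SP 0']

def send(cmd):
    """Send one command and read until the ELM327 prompt character '>'."""
    ser.write(cmd + b'\r\n')
    return ser.read_until(b'>')

for cmd in INIT_COMMANDS:
    print(cmd, send(cmd))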
Afterwards, communication with the ECUs:
Response to the first command 0100\r\n:
SEARCHING...\r7EB06410080000001\r7EC06410080000001\r\r>
Response to the second command 2101\r\n:
7EE037F2112\r7ED102C6101FFFFF800\r7EA10166101FFE00000\r7EC103D6101FFFFFFFF\r7EB101E6101000003FF\r7EA2109211024062703\r7EC214626482648A3FF\r7ED2100907D87E15592\r7EB210838011D88B132\r7ED2202A1A7024C0134\r7EA2200000000546900\r7EC22C00D9E1C1B1B1B\r7EB220000000A000802\r7EA2307200000000000\r7ED23050343102000C8\r7EC231B1B1C001BB50F\r7EB233C04B8320000D0\r7EC24B5010000810002\r7ED24047400C8760017\r7EB24FF300000000000\r7ED25001401F387F46A\r7EC256AC100026CB100\r7EC2600E3C50000DE69\r7ED263F001300000000\r7EC27008CC38209015C\r7EC280000000003E800\r\r>
Response to the third command 2105\r\n:
7EE037F2112\r7ED037F2112\r7EA037F2112\r7EC102D6105FFFFFFFF\r7EB037F2112\r7EC2100000000001B1C\r7EC221C1B1B1B1B2648\r7EC2326480001641A1B\r7EC2403E80803E80147\r7EC25003A0000000000\r7EC2600000000000000\r\r>
Now every response starts with the ID of the responding ECU. Pay attention only to responses starting with 7EC.
Example:
Suppose you are looking for the battery current in amps. In the document Spreadsheet_IoniqEV_BMS_2101_2105.xls you find the battery current at:
response 21 for 2101: last byte = high byte of battery current
response 22 for 2101: first byte = low byte of battery current
So look at the response to 2101\r\n and search for 7EC21 and 7EC22. You will find:
7EC214626482648A3FF: take the last byte as the battery current high byte -> FF
7EC22C00D9E1C1B1B1B: take the first byte after 7EC22 as the battery current low byte -> C0
The battery current value is therefore 0xFFC0.
This value is two's complement encoded:
0xFFC0 = 65472 -> 65472 - 65536 = -64 -> -6.4 A
Result: the battery is being charged with 6.4 A.
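A minimal Python sketch of this decoding step (the two frame strings are copied from the 2101 response above; the division by 10 follows the calculation above):
frame_21 = "7EC214626482648A3FF"   # response 21 of 2101
frame_22 = "7EC22C00D9E1C1B1B1B"   # response 22 of 2101

high = frame_21[-2:]               # last byte                            -> 'FF'
low = frame_22[5:7]                # first byte after the '7EC22' header  -> 'C0'

raw = int(high + low, 16)          # 0xFFC0 = 65472
if raw >= 0x8000:                  # two's complement -> negative value
    raw -= 0x10000                 # 65472 - 65536 = -64
print(raw / 10.0)                  # -6.4 A (negative = battery is being charged)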
For a coding example see:
https://github.com/greenenergyprojects/obd2-gateway, file src/obd2/obd2.ts
I am new to Python, so please excuse me if I am not asking the questions in a Pythonic way.
My requirements are as follows, and I need to write Python code to implement them.
I will be reading 60 JSON files as input. Each file is approximately 150 GB.
The sample structure for all 60 JSON files is shown below. Please note that each file will have only ONE JSON object, and the huge size of each file comes from the number and size of the "array_element" array contained in that one huge JSON object.
{
"string_1":"abc",
"string_1":"abc",
"string_1":"abc",
"string_1":"abc",
"string_1":"abc",
"string_1":"abc",
"array_element":[]
}
The transformation logic is simple: I need to merge all the array_element arrays from all 60 files and write them into one HUGE JSON file. That is, the output JSON file will be almost 150 GB × 60 in size.
Questions I am requesting your help on:
For reading: I am planning to use the "ijson" module's ijson.items(file_object, "array_element"). Could you please tell me whether ijson.items will "yield" (that is, NOT load the entire file into memory) one item at a time from the "array_element" array in the JSON file? I don't think json.load is an option here because we cannot hold such a huge dictionary in memory.
For writing: I am planning to read each item using ijson.items, "encode" it with json.dumps, and then write it to the file using file_object.write, NOT json.dump, since I cannot hold such a huge dictionary in memory. Could you please let me know whether the flush() call in the code shown below is needed? To my understanding, the internal buffer gets flushed automatically when it is full, and its size stays constant, so it won't grow to the point of overloading memory. Please let me know.
Is there a better approach than the ones mentioned above for incrementally reading and writing huge JSON files?
Code snippet showing the reading and writing logic described above:
import json
import ijson

for input_file in input_files:
    with open(input_file, "r") as f:
        objects = ijson.items(f, "array_element")
        for item in objects:
            item_str = json.dumps(item, indent=2)
            with open("output.json", "a") as out:
                out.write(item_str)
                out.write(",\n")
                out.flush()

with open("output.json", "a") as out:
    out.seek(0, 2)
    out.truncate(out.tell() - 1)
    out.write("]\n}")
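For clarity, this rough sketch shows the streaming merge I am aiming for; I am assuming the prefix has to be "array_element.item" to get individual elements, and that json.dumps needs default=float because ijson yields Decimal values (please correct me if either assumption is wrong):
import json
import ijson

input_files = ["sample1.json", "sample2.json"]   # placeholder names

with open("output.json", "w") as out:
    out.write('{\n"array_element": [\n')
    first = True
    for input_file in input_files:
        with open(input_file, "rb") as f:
            # Assumption: "array_element.item" yields one array element at a time.
            for item in ijson.items(f, "array_element.item"):
                if not first:
                    out.write(",\n")
                out.write(json.dumps(item, default=float))  # Decimal -> float
                first = False
    out.write("\n]\n}\n")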
Hope I have asked my questions clearly. Thanks in advance!!
The following program assumes that the input files have a format that is predictable enough to skip JSON parsing for the sake of performance.
My assumptions, inferred from your description, are:
All files have the same encoding.
All files have a single position somewhere at the start where "array_element":[ can be found, after which the "interesting portion" of the file begins.
All files have a single position somewhere at the end where ]} marks the end of the "interesting portion".
All "interesting portions" can be joined with commas and still be valid JSON.
When all of these points are true, concatenating a predefined header fragment, the respective file ranges, and a footer fragment would produce one large, valid JSON file.
import re
import mmap

head_pattern = re.compile(br'"array_element"\s*:\s*\[\s*', re.S)
tail_pattern = re.compile(br'\s*\]\s*\}\s*$', re.S)

input_files = ['sample1.json', 'sample2.json']

with open('result.json', "wb") as result:
    head_bytes = 500
    tail_bytes = 50
    chunk_bytes = 16 * 1024

    result.write(b'{"JSON": "fragment", "array_element": [\n')

    for input_file in input_files:
        print(input_file)
        with open(input_file, "r+b") as f:
            mm = mmap.mmap(f.fileno(), 0)
            start = head_pattern.search(mm[:head_bytes])
            end = tail_pattern.search(mm[-tail_bytes:])
            if not (start and end):
                print('unexpected file format')
                break
            start_pos = start.span()[1]
            end_pos = mm.size() - end.span()[1] + end.span()[0]
            if input_files.index(input_file) > 0:
                result.write(b',\n')
            pos = start_pos
            mm.seek(pos)
            while True:
                if pos + chunk_bytes >= end_pos:
                    result.write(mm.read(end_pos - pos))
                    break
                else:
                    result.write(mm.read(chunk_bytes))
                    pos += chunk_bytes

    result.write(b']\n}')
If the file format is 100% predictable, you can throw out the regular expressions and use mm[:head_bytes].index(b'...') etc for the start/end position arithmetic.
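For example, a rough sketch of that variant, reusing the mm, head_bytes, and tail_bytes names from the code above and assuming the literal byte sequence "array_element":[ (with no extra whitespace) appears exactly once near the start of each file:
def array_span(mm, head_bytes=500, tail_bytes=50):
    """Return (start_pos, end_pos) of the array body for a fully predictable layout."""
    marker = b'"array_element":['
    start_pos = mm[:head_bytes].index(marker) + len(marker)           # bytes.index on the head slice
    end_pos = mm.size() - tail_bytes + mm[-tail_bytes:].rindex(b']')  # absolute offset of the final ']'
    return start_pos, end_pos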
I have ingested some logs into Splunk, which now look like the sample below when searching from the search head.
{\"EventID\":563662,\"EventType\":\"LogInspectionEvent\",\"HostAgentGUID\":\"11111111CE-7802-1111111-9E74-BD25B707865E\",\"HostAgentVersion\":\"12.0.0.967\",\"HostAssetValue\":1,\"HostCloudType\":\"amazon\",\"HostGUID\":\"1111111-08CF-4541-01333-11901F731111109\",\"HostGroupID\":71,\"HostGroupName\":\"private_subnet_ap-southeast-1a (subnet-03160)\",\"HostID\":85,\"HostInstanceID\":\"i-0665c\",\"HostLastIPUsed\":\"192.168.43.1\",\"HostOS\":\"Ubuntu Linux 18 (64 bit) (4.15.0-1051-aws)\",\"HostOwnerID\":\"1111112411\",\"HostSecurityPolicyID\":1,\"HostSecurityPolicyName\":\"Base Policy\",\"Hostname\":\"ec2-11-11-51-45.ap-southeast-3.compute.amazonaws.com (ls-ec2-as1-1b-datalos) [i-f661111148a3f6]\",\"LogDate\":\"2020-07-08T11:52:38.000Z\",\"OSSEC_Action\":\"\",\"OSSEC_Command\":\"\",\"OSSEC_Data\":\"\",\"OSSEC_Description\":\"Non standard syslog message (size too large)\",\"OSSEC_DestinationIP\":\"\",\"OSSEC_DestinationPort\":\"\",\"OSSEC_DestinationUser\":\"\",\"OSSEC_FullLog\":\"Jul 8 11:52:37 ip-172-96-50-2 amazon-ssm-agent.amazon-ssm-agent[24969]: \\\"Document\\\": \\\"{\\\\n \\\\\\\"schemaVersion\\\\\\\": \\\\\\\"2.0\\\\\\\",\\\\n \\\\\\\"description\\\\\\\": \\\\\\\"Software Inventory Policy Document.\\\\\\\",\\\\n \\\\\\\"parameters\\\\\\\": {\\\\n \\\\\\\"applications\\\\\\\": {\\\\n \\\\\\\"type\\\\\\\": \\\\\\\"String\\\\\\\",\\\\n \\\\\\\"default\\\\\\\": \\\\\\\"Enabled\\\\\\\",\\\\n \\\\\\\"description\\\\\\\": \\\\\\\"(Optional) Collect data for installed applications.\\\\\\\",\\\\n \\\\\\\"allowedValues\\\\\\\": [\\\\n \\\\\\\"Enabled\\\\\\\",\\\\n
How can I format this correctly so that it is displayed as JSON when searching from the search head? I'm pretty new to Splunk, so I have little idea about this.
My file_monitor > props.conf looks like this:
[myapp:data:events]
pulldown_type=true
INDEXED_EXTRACTIONS= json
KV_MODE=none
category=Structured
description=data
disabled=false
TRUNCATE=88888
I'm trying to set up a rule in an Azure Stream Analytics job using reference data and an input stream coming from an event hub.
This is my reference data JSON packet in BLOB storage:
{
"ruleId": 1234,
"Tag" : "TAG1",
"metricName": "velocity",
"alertName": "velocity over 500",
"operator" : "AVGGREATEROREQUAL",
"value": 500
}
And here is the transformation query in the stream analytics job:
WITH transformedInput AS
(
    SELECT
        metric = GetArrayElement(DeviceInputStream.data, 0),
        masterTag = rules.Tag,
        ruleId = rules.ruleId,
        alertName = rules.alertName,
        ruleOperator = rules.operator,
        ruleValue = rules.value
    FROM
        DeviceInputStream timestamp by EventProcessedUtcTime
    JOIN
        rules ON DeviceInputStream.masterTag = rules.Tag
)
--rule output--
SELECT
    System.Timestamp as time,
    transformedInput.Tag as Tag,
    transformedInput.ruleId as ruleId,
    transformedInput.alertName as alert,
    AVG(metric.velocity) as avg
INTO
    alertruleblob
FROM
    transformedInput
GROUP BY
    transformedInput.masterTag,
    transformedInput.ruleId,
    transformedInput.alertName,
    ruleOperator,
    ruleValue,
    TumblingWindow(second, 6)
HAVING
    ruleOperator = 'AVGGREATEROREQUAL' AND avg(metric.velocity) >= ruleValue
This is not yielding any results. However, when I run a test with sample input and reference data, I get the expected results; it just doesn't seem to work with the streaming data. My use case is: if the average velocity is greater than 500 over a 6 second window, store that result in another blob storage. The velocity value has been greater than 500 for some time, but I'm not getting any results.
What am I doing wrong?
This was working all along. I just had to specify the input path of the reference blob in the reference input path of the Stream Analytics job, including the file name. I was basically referencing only the blob container without the actual file, so when I changed the path pattern to "filename.json", I got the results. It was a stupid mistake.
I have saved the model using
mx.model.save(model = fit_dl, prefix = "model", iteration = 10)
and loaded later
fit <- mx.model.load(prefix = "model", iteration = 10)
Now, using the object fit, I want to extract the input features (the column names of the training data). How can I do that?
Posting this for the sake of the open source community.
As per my email exchange with the maintainer of the mxnet package, Qiang Kou replied with the following:
From: Qiang Kou
To: Shiv Onkar Kumar
Sent: Wednesday, 14 June 2017 10:33 PM
Subject: Re: Extract Input parameters from “mxnet” model
Hi, Shiv,
I don't think this is possible, since we never store this information in the model.
Best,
Qiang Kou
The end goal for this is to be part of a chatbot that returns an airport's weather.
Using import.io, I built an endpoint to query the weather service I'd like to use, which provides this response:
{"extractorData"=>
{"url"=>
"https://www.aviationweather.gov/metar/data?ids=kokb&format=decoded&hours=0&taf=off&layout=on&date=0",
"resourceId"=>"66ca907842aabb6b08b8bc12049ad533",
"data"=>
[{"group"=>
[{"Timestamp"=>[{"text"=>"Data at: 2135 UTC 12 Dec 2016"}],
"Airport"=>[{"text"=>"KOKB (Oceanside Muni, CA, US)"}],
"FullText"=>
[{"text"=>
"KOKB 122052Z AUTO 24008KT 10SM CLR 18/13 A3006 RMK AO2 SLP179 T01780133 58021"}],
"Temperature"=>[{"text"=>"17.8°C ( 64°F)"}],
"Dewpoint"=>[{"text"=>"13.3°C ( 56°F) [RH = 75%]"}],
"Pressure"=>
[{"text"=>
"30.06 inches Hg (1018.0 mb) [Sea level pressure: 1017.9 mb]"}],
"Winds"=>
[{"text"=>"from the WSW (240 degrees) at 9 MPH (8 knots; 4.1 m/s)"}],
"Visibility"=>[{"text"=>"10 or more sm (16+ km)"}],
"Ceiling"=>[{"text"=>"at least 12,000 feet AGL"}],
"Clouds"=>[{"text"=>"sky clear below 12,000 feet AGL"}]}]}]},
"pageData"=>
{"resourceId"=>"66ca907842aabb6b08b8bc12049ad533",
"statusCode"=>200,
"timestamp"=>1481578559306},
"url"=>
"https://www.aviationweather.gov/metar/data?ids=kokb&format=decoded&hours=0&taf=off&layout=on&date=0",
"runtimeConfigId"=>"2ddb288f-9e57-4b58-a690-1cd409f9edd3",
"timestamp"=>1481579246454,
"sequenceNumber"=>-1}
I seem to be running into two issues. How do I:
pull each field and write it into its own variable
ignore the "text" modifier in the response.
If you're getting a response object, you might want to do something like
parsed_json = JSON.parse(response.body)
Then you can do things like parsed_json["some_field"]. (Note that JSON.parse returns string keys by default; pass symbolize_names: true if you want to use symbol keys such as parsed_json[:some_field].)
The simple answer is:
require 'json'
foo = JSON['{"a":1}']
foo # => {"a"=>1}
JSON is smart enough to look at the parameter and, based on whether it's a string or an Array or Hash, parse it or serialize it. In the above case it parsed it back into a Hash.
From that point it takes normal Ruby to dive into the hash you got back and access particular values:
foo = JSON['{"a":1, "b":[{"c":3}]}']
foo # => {"a"=>1, "b"=>[{"c"=>3}]}
foo['b'][0]['c'] # => 3
How to walk through a hash is covered extensively on the internet and here on Stack Overflow, so search around and see what you can find.