Splunk query output formatting to JSON format

I have ingested some logs into Splunk, which now look like the below when searching from the search head.
{\"EventID\":563662,\"EventType\":\"LogInspectionEvent\",\"HostAgentGUID\":\"11111111CE-7802-1111111-9E74-BD25B707865E\",\"HostAgentVersion\":\"12.0.0.967\",\"HostAssetValue\":1,\"HostCloudType\":\"amazon\",\"HostGUID\":\"1111111-08CF-4541-01333-11901F731111109\",\"HostGroupID\":71,\"HostGroupName\":\"private_subnet_ap-southeast-1a (subnet-03160)\",\"HostID\":85,\"HostInstanceID\":\"i-0665c\",\"HostLastIPUsed\":\"192.168.43.1\",\"HostOS\":\"Ubuntu Linux 18 (64 bit) (4.15.0-1051-aws)\",\"HostOwnerID\":\"1111112411\",\"HostSecurityPolicyID\":1,\"HostSecurityPolicyName\":\"Base Policy\",\"Hostname\":\"ec2-11-11-51-45.ap-southeast-3.compute.amazonaws.com (ls-ec2-as1-1b-datalos) [i-f661111148a3f6]\",\"LogDate\":\"2020-07-08T11:52:38.000Z\",\"OSSEC_Action\":\"\",\"OSSEC_Command\":\"\",\"OSSEC_Data\":\"\",\"OSSEC_Description\":\"Non standard syslog message (size too large)\",\"OSSEC_DestinationIP\":\"\",\"OSSEC_DestinationPort\":\"\",\"OSSEC_DestinationUser\":\"\",\"OSSEC_FullLog\":\"Jul 8 11:52:37 ip-172-96-50-2 amazon-ssm-agent.amazon-ssm-agent[24969]: \\\"Document\\\": \\\"{\\\\n \\\\\\\"schemaVersion\\\\\\\": \\\\\\\"2.0\\\\\\\",\\\\n \\\\\\\"description\\\\\\\": \\\\\\\"Software Inventory Policy Document.\\\\\\\",\\\\n \\\\\\\"parameters\\\\\\\": {\\\\n \\\\\\\"applications\\\\\\\": {\\\\n \\\\\\\"type\\\\\\\": \\\\\\\"String\\\\\\\",\\\\n \\\\\\\"default\\\\\\\": \\\\\\\"Enabled\\\\\\\",\\\\n \\\\\\\"description\\\\\\\": \\\\\\\"(Optional) Collect data for installed applications.\\\\\\\",\\\\n \\\\\\\"allowedValues\\\\\\\": [\\\\n \\\\\\\"Enabled\\\\\\\",\\\\n
How can I format this correctly so it shows as JSON when searching from the search head? I'm pretty new to Splunk, so I don't have much of an idea about this.
My file_monitor > props.conf looks like this:
[myapp:data:events]
pulldown_type=true
INDEXED_EXTRACTIONS= json
KV_MODE=none
category=Structured
description=data
disabled=false
TRUNCATE=88888
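If the double quotes really are backslash-escaped inside the indexed events (as the sample above suggests), JSON extraction cannot work, because the raw text is not valid JSON. A quick Python check illustrates this (the event is abbreviated here):

import json

# Abbreviated stand-in for the event shown above, escaped quotes included.
raw = '{\\"EventID\\":563662,\\"EventType\\":\\"LogInspectionEvent\\"}'

try:
    json.loads(raw)
except json.JSONDecodeError as err:
    print("rejected as-is:", err)

# Once the escaping is removed, the event parses cleanly.
print(json.loads(raw.replace('\\"', '"')))

If that is the case, the usual fix is upstream: emit plain JSON from the source (or strip the escaping at index time) so that INDEXED_EXTRACTIONS = json can take effect.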


Issue with connecting data in Databricks from data lake and reading JSON into Folium

I'm working on something based on this blog post:
https://python-visualization.github.io/folium/quickstart.html#Getting-Started
specifically part 13, using Choropleth maps.
The piece of code they use is the following:
import folium
import pandas as pd

url = (
    "https://raw.githubusercontent.com/python-visualization/folium/master/examples/data"
)
state_geo = f"{url}/us-states.json"
state_unemployment = f"{url}/US_Unemployment_Oct2012.csv"
state_data = pd.read_csv(state_unemployment)

m = folium.Map(location=[48, -102], zoom_start=3)
folium.Choropleth(
    geo_data=state_geo,
    name="choropleth",
    data=state_data,
    columns=["State", "Unemployment"],
    key_on="feature.id",
    fill_color="YlGn",
    fill_opacity=0.7,
    line_opacity=0.2,
    legend_name="Unemployment Rate (%)",
).add_to(m)
folium.LayerControl().add_to(m)
m
If I use this, I get the requested map.
Now I try to do this with my own data. I work in Databricks, so I have a JSON file with the GeoJSON data (source_file1) and a CSV file (source_file2) with the data that needs to be "plotted" on the map.
source_file1 = "dbfs:/mnt/sandbox/MAARTEN/TOPO/Belgie_GEOJSON.JSON"
state_geo = spark.read.json(source_file1,multiLine=True)
source_file2 = "dbfs:/mnt/sandbox/MAARTEN/TOPO/DATASVZ.csv"
df_2 = spark.read.format("CSV").option("inferSchema", "true").option("header", "true").option("delimiter",";").load(source_file2)
state_data = df_2.toPandas()
When I adjust the code as below:
m = folium.Map(location=[48, -102], zoom_start=3)
folium.Choropleth(
    geo_data=state_geo,
    name="choropleth",
    data=state_data,
    columns=["State", "Unemployment"],
    key_on="feature.properties.name_nl",
    fill_color="YlGn",
    fill_opacity=0.7,
    line_opacity=0.2,
    legend_name="% Marktaandeel CC",
).add_to(m)
folium.LayerControl().add_to(m)
m
So when I pass the geo_data parameter as a Spark DataFrame, I get the following error:
ValueError: Cannot render objects with any missing geometries: DataFrame[features: array<struct<geometry:struct<coordinates:array<array<array<string>>>,type:string>,properties:struct<arr_fr:string,arr_nis:bigint,arr_nl:string,fill:string,fill-opacity:double,name_fr:string,name_nl:string,nis:bigint,population:bigint,prov_fr:string,prov_nis:bigint,prov_nl:string,reg_fr:string,reg_nis:string,reg_nl:string,stroke:string,stroke-opacity:bigint,stroke-width:bigint>,type:string>>, type: string]
I think that when transforming the data from the "blob format" in the Azure data lake to the Spark DataFrame, something goes wrong with the format. I tested this in a Jupyter notebook on my desktop, reading the data straight from file into folium, and it all works.
If I load it directly from the source, like the example does with their web page, and adjust the geo_data parameter for the folium function:
m = folium.Map(location=[48, -102], zoom_start=3)
folium.Choropleth(
    geo_data=source_file1,  # this now points directly at the data lake path
    name="choropleth",
    data=state_data,
    columns=["State", "Unemployment"],
    key_on="feature.properties.name_nl",
    fill_color="YlGn",
    fill_opacity=0.7,
    line_opacity=0.2,
    legend_name="% Marktaandeel CC",
).add_to(m)
folium.LayerControl().add_to(m)
m
I get an error saying the function expects a local file path; it is caused by passing a path prefixed with "dbfs:" (use "/dbfs" instead).
So I started wondering what the difference is between my JSON file and the one from the blog post. The only thing I can imagine is that the Azure data lake doesn't store my JSON as a JSON but as a block blob file, and for some reason I am not converting it properly so that folium can read it.
Azure blob storage (data lake)
So can someone with folium knowledge let me know:
A. Is it not possible to load the geo_data directly from a data lake?
B. In what format do I need to upload the data?
Any thoughts on this would be helpful!
Thanks in advance!
Solved this issue: I just had to replace "dbfs:" with "/dbfs". I had tried it a lot of times, but used "/dbfs:" and got another error.
Can't believe I was this stupid. :-)
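For reference, a minimal sketch of the working version (paths as in the question; the only real change is reading via the /dbfs FUSE mount instead of the dbfs: URI):

import folium
import pandas as pd

# "/dbfs/..." is the local FUSE mount of the same files that "dbfs:/..."
# refers to; folium expects an ordinary file path, not a dbfs: URI.
geo_path = "/dbfs/mnt/sandbox/MAARTEN/TOPO/Belgie_GEOJSON.JSON"
csv_path = "/dbfs/mnt/sandbox/MAARTEN/TOPO/DATASVZ.csv"

state_data = pd.read_csv(csv_path, sep=";")

m = folium.Map(location=[50.8, 4.4], zoom_start=8)  # assumption: centre on Belgium
folium.Choropleth(
    geo_data=geo_path,                  # plain local path, no Spark needed
    name="choropleth",
    data=state_data,
    columns=["State", "Unemployment"],  # column names as used in the question
    key_on="feature.properties.name_nl",
    fill_color="YlGn",
    fill_opacity=0.7,
    line_opacity=0.2,
    legend_name="% Marktaandeel CC",
).add_to(m)
folium.LayerControl().add_to(m)
m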

Lua Roblox HttpService:PostAsync stripping JSON

I'm trying to validate against an external API that uses hashing of the body + API key.
In Roblox I have:
body = '{"attributes":[{"name":"player_level","value":120},{"name":"is_whale","value":true,"data_type":"boolean"}]}'
which I then send like this:
local attributeData = game.HttpService:PostAsync(encoded_url,body,Enum.HttpContentType.ApplicationJson,false,headers)
But in my logs:
19 Aug 2022 20:44:34.886 2022-08-19 20:44:34 +0000 severity=INFO, method=POST path=/api/v1/appuser/toms_awesome_rblx_guy1/attributes format=json controller=Api::V1::Appuser::AttributesController action=create status=0 duration=1.09 view=0.00 db=0.00 ip= attributes=[{"name"=>"player_level", "value"=>120}, {"name"=>"is_whale", "value"=>true, "data_type"=>"boolean"}] session_id=62f52645795301001985e2d7 enc=0fea853f09e361b9d80579d99acda0a69c8e3c7045a2834d30b03e650a5d5f86 appuser_external_id=toms_awesome_rblx_guy1
Notice how the {}'s were stripped off to become this: attributes=[{"name"=>"player_level", "value"=>120}, {"name"=>"is_whale", "value"=>true, "data_type"=>"boolean"}]
This is ultimately causing issues with validation because the hash was made on my end with the {}'s.
Any ideas where I'm going wrong? I've tried different combinations of Enum.HttpContentType.ApplicationUrlEncoded, "data="..body, and encoded_body = game.HttpService:JSONEncode(body).
But they all seem to prepend things to the generated JSON in the logs, like data= or _json=, none of which were in my original hash, which is based on the body JSON string that I wouldn't expect to change.
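To see why this breaks validation, here is a sketch in Python (the exact hashing scheme and key are hypothetical; the point is that the digest is computed over the exact byte string, so any re-serialization between client and verifier, even just losing the outer braces, yields a different hash):

import hashlib

SECRET = "example-api-key"  # hypothetical stand-in for the real API key

body = '{"attributes":[{"name":"player_level","value":120}]}'
# What effectively gets hashed if the outer {} are stripped along the way:
stripped = '"attributes":[{"name":"player_level","value":120}]'

h_client = hashlib.sha256((body + SECRET).encode()).hexdigest()
h_server = hashlib.sha256((stripped + SECRET).encode()).hexdigest()
print(h_client == h_server)  # False: one byte of difference breaks the match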

How to decode OBD-2 data from Hyundai Ioniq EV

I am trying to read OBD-2 data from a Hyundai Ioniq Electric (28 kWh version), using a Raspberry Pi and a Bluetooth ELM327 interface. Connection and data transfer work fine.
For example, sending 2105<cr><lf> gives this response (<cr> is the value 0x0d = 13):
7F2112<cr>7F2112<cr>7F2112<cr>02D<cr>0:6105FFFFFFFF<cr>7F2112<cr>1:00000000001616<cr>2:161616161621FA<cr>3:26480001501616<cr>4:03E82403E80FC0<cr>5:003A0000000000<cr>6:00000000000000<cr><cr>>
The value C0 in 4:03E82403E80FC0 seems to be the state of charge (SOC) display value:
C0 -> 192 -> 192 / 2 % = 96%
There are some decoding tables available (see https://github.com/JejuSoul/OBD-PIDs-for-HKMC-EVs/tree/master/Ioniq%20EV%20-%2028kWh), but how do I use these tables?
For example sending 2101<cr><lf> gives the response:
02C<cr>
0:6101FFFFF800<cr>
01E<cr>
0:6101000003FF<cr>
03D<cr>
0:6101FFFFFFFF<cr>
016<cr>
0:6101FFE00000<cr>
1:0002D402CD03F0<cr>
1:0838010A015C2F<cr>
7F2112<cr>
1:B4256026480000<cr>
1:0921921A061B03<cr>
2:000582003401BD<cr>
2:0000000A002702<cr>
2:000F4816161616<cr>
2:00000000276234<cr>
3:04B84100000000<cr>
3:5B04692F180018<cr>
3:01200000000000<cr>
3:1616160016CB3F<cr>
4:00220000600000<cr>
4:00D0FF00000000<cr>
4:CB0100007A0002<cr>
5:000001F3026A02<cr>
5:5D4000025D4600<cr>
6:D2000000000000<cr>
6:00DECA0000D8E6<cr>
7:008A2FEB090002<cr>
8:0000000003E800<cr>
<cr>
>
Please note that a line feed was added after every carriage return (<cr>) for better readability; it is not part of the original data response.
How can I decode temperatures, currents, etc. from this data?
I have found the mistake myself. The ELM327 datasheet (http://elmelectronics.com/DSheets/ELM327DS.pdf) explains the AT commands in detail.
The problem was the mixing of CAN responses from multiple ECUs, caused by the AT H0 command (headers off) in the initialization phase (not shown in the question). See also ELM327DS.pdf page 44 (Multiple Responses).
When using AT H1 on startup, the responses can be decoded without problems.
Initialization (with AT H1 = headers on)
AT D\r\n     (set all to defaults)
AT Z\r\n     (reset)
AT L0\r\n    (linefeeds off)
AT E0\r\n    (echo off)
AT S0\r\n    (spaces off)
AT H1\r\n    (headers on)
AT SP 0\r\n  (protocol auto)
Afterwards, communication with the ECUs:
Response on first command 0100\r\n:
SEARCHING...\r7EB06410080000001\r7EC06410080000001\r\r>
Response on second command 2101\r\n:
7EE037F2112\r7ED102C6101FFFFF800\r7EA10166101FFE00000\r7EC103D6101FFFFFFFF\r7EB101E6101000003FF\r7EA2109211024062703\r7EC214626482648A3FF\r7ED2100907D87E15592\r7EB210838011D88B132\r7ED2202A1A7024C0134\r7EA2200000000546900\r7EC22C00D9E1C1B1B1B\r7EB220000000A000802\r7EA2307200000000000\r7ED23050343102000C8\r7EC231B1B1C001BB50F\r7EB233C04B8320000D0\r7EC24B5010000810002\r7ED24047400C8760017\r7EB24FF300000000000\r7ED25001401F387F46A\r7EC256AC100026CB100\r7EC2600E3C50000DE69\r7ED263F001300000000\r7EC27008CC38209015C\r7EC280000000003E800\r\r>
Response on third command 2105\r\n:
7EE037F2112\r7ED037F2112\r7EA037F2112\r7EC102D6105FFFFFFFF\r7EB037F2112\r7EC2100000000001B1C\r7EC221C1B1B1B1B2648\r7EC2326480001641A1B\r7EC2403E80803E80147\r7EC25003A0000000000\r7EC2600000000000000\r\r>
Now every response starts with the ID of the responding ECU. Pay attention only to responses starting with 7EC (the battery ECU).
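A minimal Python sketch of that filtering step, using a few lines from the 2105 response above (this is only a sketch, not a full multi-frame reassembler):

# Keep only frames from the 7EC ECU and order them by their frame index.
raw = "7EE037F2112\r7EC102D6105FFFFFFFF\r7EC2100000000001B1C\r7EC221C1B1B1B1B2648"

frames = {}
for line in raw.split("\r"):
    ecu, rest = line[:3], line[3:]
    if ecu == "7EC":                 # the ECU we are interested in
        frames[rest[:2]] = rest[2:]  # frame index -> payload (hex string)

for idx in sorted(frames):
    print(idx, frames[idx])
# 10 2D6105FFFFFFFF
# 21 00000000001B1C
# 22 1C1B1B1B1B2648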
Example:
Looking for the battery current in amps: in the document Spreadsheet_IoniqEV_BMS_2101_2105.xls, you find the battery current at:
response 21 for 2101: last byte = high byte of battery current
response 22 for 2101: first byte = low byte of battery current
So look at the response to 2101\r\n and search for 7EC21 and 7EC22. You will find:
7EC214626482648A3FF: take the last byte for the battery high byte -> FF
7EC22C00D9E1C1B1B1B: take the first byte after 7EC22 for the battery low byte -> C0
The battery current value is: FFC0
This value is two's-complement encoded:
0xFFC0 = 65472 -> 65472 - 65536 = -64 -> -6.4 A
Result: the battery is being charged with 6.4 A.
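The same decoding as a small Python helper (the divide-by-10 scaling is taken from the worked example above):

def twos_complement(hex_str, bits=16):
    """Decode a two's-complement hex string, e.g. 'FFC0' -> -64."""
    value = int(hex_str, 16)
    if value >= 1 << (bits - 1):
        value -= 1 << bits
    return value

high, low = "FF", "C0"                 # bytes found in 7EC21... and 7EC22...
amps = twos_complement(high + low) / 10
print(amps)                            # -6.4 -> battery charging at 6.4 A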
For a coding example see:
https://github.com/greenenergyprojects/obd2-gateway, file src/obd2/obd2.ts

Showing JSON data in columns

The software I'm using saves a copy of the data, which I think is JSON, in a separate table when I save records in the database.
What I want to do is query the JSON data contained in the DATASETS column separately.
I'm using SQL Server 2012.
This is the query I tried so far:
SELECT TOP 1 IND, SNAPSHOTDATE, DATASETS, USERNAME, OWNERFORM
FROM TBLSNAPSHOTS
Result:
105 2018-09-14 02:59:34.000 { "Datasets": [{"Name": "TBLSTOKLAR","Lines": [{"IND": "102","STOKNO": "","MALINCINSI": "TITIZ PLASTIK BUYUK KASIK 10 ADET","STOKKODU": "8691262708050","ANABIRIM": "102","BIRIMEX": "102","ALTSEVIYE": "","KRITIKSEVIYE": "","USTSEVIYE": "","DEPOSEVIYESI": "True","URETICI": "","AYLIKVADE": "0","SERINO": "","DEPO": "1","STOKGRUBU": "","GARANTI": "0","PRIM": "0","IPTAL": "False","STOKTIPI": "0","STOKTAKIP": "0","TEMINYERI": "1","RAFOMRU": "0","RESIM": "","KALAN": "0","REZERV": "0","KOD1": "","KOD2": "","KOD3": "","KOD4": "","KOD5": "","KOD6": "","KOD7": "","KOD8": "","KOD9": "","KOD10": "","TAKSITSAYISI": "0","ISTIHBARAT": "","FIYATYOK": "","DELETED": "","ALISFIYATI": "0","ESKIALISFIYATI": "0","SONALISTARIHI": "","SONSATISTARIHI": "","KARTINACILMATARIHI": "14.09.18 ı. 02:57:58","DEVIRIND": "","MALIYET": "1","KDVGRUBU": "1","AKTIF": "False","ISCILIKIND": "0","ISCILIKBIRIMIND": "0","ISCILIKACIKLAMA": "","ISCILIKSTOKKODU": "","ALISFIYATIDEGISMETARIHI": "","STATUS": "1","DALISFIYATI": "","APB": "","OIV": "0","KARORANI": "0","OTV": "0","ISK": "0","STOKGRUPTANIMI": "","ISKSATISFIYATI2": "0","ISKSATISFIYATI3": "0","ALISKDVORANI": "18","ALISISKORANI": "","SIPARISALINMASIN": "False","SIPARISVERILMESIN": "False","P1": "","P2": "","P3": "","SATISKOSULU": "","DEFAULTALISFIYATI": "","DEFAULTALISFIYATIDEGISMESTARIHI": "","KDVGRUBUT": "","HEDEFSATISFIAYTI": "","KURUMISKONTOSU": "","TICARIISKONTO": "","ITSBILDIRIMI": "False","MAXISKORANI": "","IMALATCISATISFIYATI": "","DKUR": "1","ACILSEVK": "False","SOGUKSEVK": "False","ICMIKTAR": "","TICARISEKIL": "","MAXISKTUTAR": "","TAXE": "","KOD11": "","DAPB": "","IKINCIEL": "","ETICARET": "","STOKNEVI": "0","OTVORANSAL": "True","POZ": "","YAZARKASA": "False","KOD12": "","KOD13": "","KOD14": "","KOD15": "","KOD16": "","KOD17": "","KOD18": "","KOD19": "","KOD20": "","KOD21": "","UID": "{0DE71D73-E447-45B0-BF6A-1D312DBAFDD2}"}]}]} ADMIN frmEdtStok
In SQL Server 2012: no, you can't directly query the JSON. In SQL Server 2016 they added functions that let you do this:
https://learn.microsoft.com/en-us/sql/t-sql/functions/json-query-transact-sql?view=sql-server-2017
But if you need to stay on 2012, you are limited to string-parsing it (don't do this) or writing/finding a CLR function which parses it using .NET code and returns the results.
If you simply must do it quickly, there are some hacky solutions to parse it, like so: https://www.red-gate.com/simple-talk/sql/t-sql-programming/consuming-json-strings-in-sql-server/ but don't expect them to work smoothly with complex JSON.
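If pulling the rows into application code is an option, any ordinary JSON library can handle the DATASETS column. A sketch in Python (pyodbc is one common SQL Server driver; the connection string is a placeholder):

import json
import pyodbc

conn = pyodbc.connect("DSN=mydsn")  # placeholder connection string
row = conn.cursor().execute(
    "SELECT TOP 1 DATASETS FROM TBLSNAPSHOTS"
).fetchone()

doc = json.loads(row[0])
for dataset in doc["Datasets"]:
    # field names as they appear in the sample row above
    print(dataset["Name"], dataset["Lines"][0]["STOKKODU"])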

Max date row using Talend Data Integration

Using Talend Data Integration, the following data is input from a CSV file:
Time A B C
18:22.0 -0.30107087 3.79899955 6.52000999
18:23.5 -9.648633 0.84515321 1.50116444
18:25.0 -6.01184034 7.66982508 4.42568207
18:25.9 -9.22426033 3.12263751 5.10443783
18:26.7 -9.00698662 4.03901815 0.01316811
18:27.4 -4.31255579 6.25724602 5.02961922
18:28.2 -2.67013335 7.5932107 5.41628265
18:28.8 -1.76213241 6.26981592 7.44536877
18:29.5 -2.18590617 5.58567238 6.23928976
18:30.3 0.42078096 3.1429882 8.46290493
18:30.9 0.36391866 3.02926373 8.86752415
18:31.6 0.35673606 3.07176089 8.93396378
18:32.4 0.35374331 3.05081153 8.93994904
18:33.0 0.38187516 3.05799413 8.89745235
18:33.7 0.32920274 3.03644633 8.9315691
18:34.4 0.37529111 3.07475352 8.93575954
18:35.0 0.40342298 3.07654929 8.86393356
18:35.7 0.35254622 3.05260706 8.9267807
How do I extract only the row with the maximum time (18:35.7 0.35254622 3.05260706 8.9267807) and load it into a JSON file?
You can achieve this by sorting your file on the Time column (a reverse alphanumeric sort puts the most recent Time on top), then taking the first row and writing it to a JSON file, like so:
tSampleRow_1 config: (screenshot of the component settings)
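For reference, the same sort-then-take-first logic outside Talend, as a pandas sketch ("input.csv" stands in for the actual file):

import pandas as pd

# Columns as in the sample above: Time, A, B, C (whitespace-delimited).
df = pd.read_csv("input.csv", sep=r"\s+")

# A reverse sort on Time puts the most recent row first; the fixed
# mm:ss.s format makes the alphanumeric order match chronological order.
latest = df.sort_values("Time", ascending=False).head(1)
latest.to_json("max_row.json", orient="records")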