Custom Function to Extract Data from JSON API Based on Column Values in Excel VBA

I have an Excel workbook that looks something like this:
| Lat | Long  | Area      |
|-----|-------|-----------|
| 5.3 | 103.8 | AREA_NAME |
I also have a JSON api with a url of the following structure:
https://example.com/api?token=TOKEN&lat=X.X&lng=X.X
that returns a JSON object with the following structure:
{ "Area": "AREA_NAME", "OTHERS": "Other_details"}
I tried to implement a VBA function that will help me extract AREA_NAME. However, I keep getting syntax errors, and I don't know where I am going wrong.
Function get_p()
Source = Json.Document (Web.Contents("https://example.com/api?token=TOKEN&lat=5.3&lng=103.8"))
name = Source[Area]
get_p = Name
End Function
I intentionally hardcoded the lat and long values for development purposes. Eventually, I want the function to accept lat and long as parameters. I got the first line of the function from the Power Query Editor.
Where am I going wrong? How to do this properly in VBA? Or is there a simpler way using PowerQuery?
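Whatever the VBA fix turns out to be, the extraction step itself is ordinary JSON parsing. A minimal Python sketch of the same logic, using the sample payload from the question (the function name `extract_area` is just illustrative), shows what the function needs to do once it has the response body:

```python
import json

def extract_area(response_body):
    # Parse the API's JSON response and return its "Area" field
    return json.loads(response_body)["Area"]

# Sample response with the structure shown in the question
sample = '{"Area": "AREA_NAME", "OTHERS": "Other_details"}'
print(extract_area(sample))  # AREA_NAME
```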

Related

KQL | How do I extract or check for data in a long string with many quotation marks?

I'm a complete newbie to KQL and data in general.
I'm working with a data column with long strings like this:
"data": {"stageID":1670839857060,"entities":[{"entity":{"key":"BearKnight","owner":0,"id":"[2|1]"},"levels":{"main":4,"star":1,"ShieldWall.main":4,"ShieldWall.enhance":0,"ShieldThrow.main":4,"ShieldThrow.enhance":0}},{"entity":{"key":"DryadHealer","owner":0,"id":"[3|1]"},"levels":{"main":5,"star":1,"HealingTouch.main":5,"HealingTouch.enhance":0,"CuringTouch.main":5,"CuringTouch.enhance":0}},{"entity":{"key":"HumanKnight","owner":1,"id":"[4|1]"},"levels":{"main":4,"star":0,"BullRush.main":4,"BullRush.enhance":0,"FinishingStrike.main":4,"FinishingStrike.enhance":0,"SwordThrow.main":4,"SwordThrow.enhance":0,"StrongAttack.main":0,"StrongAttack.enhance":0}},
I need to get a list of the HeroNames that appear as [ "key":"HeroName","owner":0 ] but not as [ "key":"HeroName","owner":1 ].
I've been trying the extract_all and has_any functions, but I can't work with the data while it has all the quotation marks. Can I parse this somehow and remove them?
My ideal output would be a list of hero names that have owner:0.
For example, for the string above the ideal output is: "BearKnight","DryadHealer"
print txt = 'data: {"stageID":1670839857060,"entities":[{"entity":{"key":"BearKnight","owner":0,"id":"[2|1]"},"levels":{"main":4,"star":1,"ShieldWall.main":4,"ShieldWall.enhance":0,"ShieldThrow.main":4,"ShieldThrow.enhance":0}},{"entity":{"key":"DryadHealer","owner":0,"id":"[3|1]"},"levels":{"main":5,"star":1,"HealingTouch.main":5,"HealingTouch.enhance":0,"CuringTouch.main":5,"CuringTouch.enhance":0}},{"entity":{"key":"HumanKnight","owner":1,"id":"[4|1]"},"levels":{"main":4,"star":0,"BullRush.main":4,"BullRush.enhance":0,"FinishingStrike.main":4,"FinishingStrike.enhance":0,"SwordThrow.main":4,"SwordThrow.enhance":0,"StrongAttack.main":0,"StrongAttack.enhance":0}}]}'
| parse txt with * ": " doc
| mv-apply e = parse_json(doc).entities on (
    where e.entity.owner == 0
    | summarize HeroNames = make_list(e.entity.key)
)
| project-away txt, doc
HeroNames
["BearKnight","DryadHealer"]
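The mv-apply step above is doing ordinary JSON filtering; the same logic in plain Python (against a trimmed version of the sample payload) may help clarify what the query computes:

```python
import json

# Trimmed version of the sample payload from the question
data = ('{"stageID":1670839857060,"entities":['
        '{"entity":{"key":"BearKnight","owner":0}},'
        '{"entity":{"key":"DryadHealer","owner":0}},'
        '{"entity":{"key":"HumanKnight","owner":1}}]}')

doc = json.loads(data)
# Keep only the hero names whose owner is 0
hero_names = [e["entity"]["key"] for e in doc["entities"]
              if e["entity"]["owner"] == 0]
print(hero_names)  # ['BearKnight', 'DryadHealer']
```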

Convert string column to json and parse in pyspark

My dataframe looks like
| ID | Notes                              |
|----|------------------------------------|
| 1  | '{"Country":"USA","Count":"1000"}' |
| 2  | {"Country":"USA","Count":"1000"}   |
ID : int
Notes : string
When I use from_json to parse the Notes column, it gives all null values.
I need help parsing this Notes column into separate columns in PySpark.
When you use the from_json() function, make sure the column value is exactly a JSON/dictionary in string format. In the sample data you have given, the Notes value with ID=1 is not valid JSON (it is a string wrapped in additional single quotes). That is why it returns NULL values:
from pyspark.sql.functions import from_json
from pyspark.sql.types import MapType, StringType

df = df.withColumn("Notes", from_json(df.Notes, MapType(StringType(), StringType())))
You need to change your input data so that every Notes value has the same format, a JSON/dictionary as a string and nothing more, because that is the root cause of the issue. The following is the correct format that fixes the problem:
| ID | Notes                            |
|----|----------------------------------|
| 1  | {"Country":"USA","Count":"1000"} |
| 2  | {"Country":"USA","Count":"1000"} |
To parse the Notes column values into columns in PySpark, you can simply use the json_tuple() function (no need for from_json()). It extracts the elements from a JSON column (in string format) and returns them as new columns.
from pyspark.sql.functions import col, json_tuple

df = df.select(col("id"), json_tuple(col("Notes"), "Country", "Count")) \
       .toDF("id", "Country", "Count")
df.show()
NOTE: json_tuple() also returns null if the column value is not in the correct format (make sure the column values are json/dictionary as a string without additional quotes).
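The underlying problem can be reproduced without Spark at all; plain Python's json module rejects the value wrapped in extra single quotes for exactly the same reason (the helper `try_parse` below is just illustrative):

```python
import json

bad = "'{\"Country\":\"USA\",\"Count\":\"1000\"}'"  # extra single quotes, as in row ID=1
good = '{"Country":"USA","Count":"1000"}'           # plain JSON string, as in row ID=2

def try_parse(s):
    # Return the parsed value, or None if the string is not valid JSON
    try:
        return json.loads(s)
    except ValueError:
        return None

print(try_parse(bad))   # None, because the quotes make it invalid JSON
print(try_parse(good))  # {'Country': 'USA', 'Count': '1000'}
```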

How do you write an array of numbers to a csv file?

let mut file = Writer::from_path(output_path)?;
file.write_record([5.34534536546, 34556.456456467567567, 345.56465456])?;
Produces the following error:
error[E0277]: the trait bound `{float}: AsRef<[u8]>` is not satisfied
--> src/main.rs:313:27
|
313 | file.write_record([5.34534536546, 34556.456456467567567, 345.56465456])?;
| ------------ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ the trait `AsRef<[u8]>` is not implemented for `{float}`
| |
| required by a bound introduced by this call
|
= help: the following implementations were found:
<&T as AsRef<U>>
<&mut T as AsRef<U>>
<Arc<T> as AsRef<T>>
<Box<T, A> as AsRef<T>>
and 44 others
note: required by a bound in `Writer::<W>::write_record`
--> /home/mlueder/.cargo/registry/src/github.com-1ecc6299db9ec823/csv-1.1.6/src/writer.rs:896:12
|
896 | T: AsRef<[u8]>,
| ^^^^^^^^^^^ required by this bound in `Writer::<W>::write_record`
Is there any way to use the csv crate with numbers instead of structs or characters?
Only strings or raw bytes can be written to a file; if you give the writer anything else, it doesn't know how to handle the data (as @SilvioMayolo mentioned). You can map your float array to one of strings, and then you will be able to write the string array to the file.
let float_arr = [5.34534536546, 34556.456456467567567, 345.56465456];
let string_arr = float_arr.map(|e| e.to_string());
file.write_record(&string_arr)?;
This can obviously be combined into one line without the extra variable, but splitting it apart makes the extra step easier to see.

Kusto KQL reference first object in an JSON array

I need to grab the value of the first entry in a json array with Kusto KQL in Microsoft Defender ATP.
The data format looks like this (anonymized), and I want the value of "UserName":
[{"UserName":"xyz","DomainName":"xyz","Sid":"xyz"}]
How do I split or in any other way get the "UserName" value?
In WDATP/MSTAP, for the "LoggedOnUsers" type of arrays, you want "mv-expand" (multi-value expand) in conjunction with "parsejson".
"parsejson" will turn the string into JSON, and mv-expand will expand it into LoggedOnUsers.UserName, LoggedOnUsers.DomainName, and LoggedOnUsers.Sid:
DeviceInfo
| mv-expand parsejson(LoggedOnUsers)
| project DeviceName, LoggedOnUsers.UserName, LoggedOnUsers.DomainName
Keep in mind that if the packed field has multiple entries (like DeviceNetworkInfo's IPAddresses field often does), the entire row will be expanded once per entry, so a row for a machine with 3 entries in "IPAddresses" will be duplicated 3 times, once for each expansion of IPAddresses:
DeviceNetworkInfo
| where Timestamp > ago(1h)
| mv-expand parsejson(IPAddresses)
| project DeviceName, IPAddresses.IPAddress
To access the first entry's UserName property, you can do the following:
print d = dynamic([{"UserName":"xyz","DomainName":"xyz","Sid":"xyz"}])
| extend result = d[0].UserName
To get the UserName for all entries, you can use mv-expand/mv-apply:
print d = dynamic([{"UserName":"xyz","DomainName":"xyz","Sid":"xyz"}])
| mv-apply d on (
project d.UserName
)
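For comparison, the same first-entry access is a one-liner in plain Python once the string is parsed, which may help make clear what `d[0].UserName` is doing in the query above:

```python
import json

# The anonymized LoggedOnUsers value from the question
logged_on_users = '[{"UserName":"xyz","DomainName":"xyz","Sid":"xyz"}]'

# Parse the string into a list, index the first entry, read its UserName
first_user = json.loads(logged_on_users)[0]["UserName"]
print(first_user)  # xyz
```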
Thanks for the reply, but the proposed solution didn't work for me. Instead I found the following solution:
project substring(split(split(LoggedOnUsers,',',0),'"',4),2,9)
The output of this is: UserName

Jira JSON date format

I am using the Jira API, and need the start and end dates for a sprint.
The JSON data I get back is :
{"jodaTimeZoneId":"Europe/Berlin","sprints":[{"id":5,"start":"13082015044305","end":"27082015044305",...
Normally, JSON returns the date in milliseconds, and you need to deserialize it.
Here however, I can clearly see the date (13-08-2015 & 27-08-2015) followed by some other numbers I don't care about. Is there any way Angular can get the correct format using | date? Or any other way I can use?
When I use {{13082015044305 | date:'dd-MM-yyyy'}} it returns 21-07-2384, because the value is being parsed as milliseconds. The date format is not one the filter recognizes, so convert it to a recognized format first.
So I used
input.toString().replace(/(\d\d)(\d\d)(\d\d\d\d)(\d\d\d\d\d\d)/, '$1-$2-$3');
Used it in a custom filter.
app.filter('correctDateFormat', function() {
    return function(input) {
        return input.toString().replace(/(\d\d)(\d\d)(\d\d\d\d)(\d\d\d\d\d\d)/, '$1-$2-$3');
    };
});
Then
Display the date as
{{13082015044305 | correctDateFormat }}
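The regex replacement above is language-agnostic: it splits the 14-digit value into day, month, year, and six trailing digits that get discarded. A quick Python sketch of the same substitution confirms what the custom filter produces:

```python
import re

raw = "13082015044305"  # ddMMyyyy followed by six digits we don't need
# Same pattern and replacement as the Angular filter above
formatted = re.sub(r"(\d\d)(\d\d)(\d\d\d\d)(\d\d\d\d\d\d)", r"\1-\2-\3", raw)
print(formatted)  # 13-08-2015
```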
I think you can use
{{ data | filter:options }}
where data is your JSON, with a date filter option like this:
{{'1388123412323' | date:'MM/dd/yyyy # h:mma'}}