Edit Parameter in JSON - json

I want to deploy an Azure ARM template.
In the parameters section I defined an IP range for the subnet.
"SubnetIP": {
    "defaultValue": "10.0.0.0",
    "type": "string"
},
"SubnetMask": {
    "type": "int",
    "defaultValue": 16,
    "allowedValues": [16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27]
}
When creating the private IP I used
"privateIPAddress": "[concat(parameters('SubnetIP'), copyindex(20))]",
This does not give me the expected output, because SubnetIP is 10.0.0.0 and not 10.0.0. Is there a way to edit the parameter inside that function?
Regards, Stefan

You should do a bit calculation if you want this to be robust:
"ipAddress32Bit": "[add(add(add(mul(int(split(parameters('ipAddress'),'.')[0]),16777216),mul(int(split(parameters('ipAddress'),'.')[1]),65536)),mul(int(split(parameters('ipAddress'),'.')[2]),256)),int(split(parameters('ipAddress'),'.')[3]))]",
"modifiedIp": "[add(variables('ipAddress32Bit'),1)]",
"ipAddressOut": "[concat(string(div(variables('modifiedIP'),16777216)), '.', string(div(mod(variables('modifiedIP'),16777216),65536)), '.', string(div(mod(variables('modifiedIP'),65536),256)), '.', string(mod(variables('modifiedIP'),256)))]"
Not going to take credit for that (source). The addition happens in the modifiedIp variable in this example. You could also combine this with the copy function.
Edit: OK, I thought this was somewhat obvious, but I'll explain how I understand what's going on (I might be wrong):
He takes the individual IP address pieces (10.1.2.3 > 10, 1, 2, 3).
He multiplies each piece by a specific power of 256 to get its decimal representation.
He sums the pieces.
He adds 1 (to get the next IP address in decimal representation).
He casts the decimal number back to an IP address.
To illustrate the idea, use these links:
https://www.browserling.com/tools/dec-to-ip
https://www.ipaddressguide.com/ip
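For what it's worth, the same arithmetic can be sanity-checked outside ARM. Here is a minimal Python sketch mirroring the template's conversion (the function names are mine, not part of the template):

```python
# Hypothetical helper names; this just mirrors the template's arithmetic.
def ip_to_int(ip):
    a, b, c, d = (int(part) for part in ip.split('.'))
    return a * 16777216 + b * 65536 + c * 256 + d

def int_to_ip(n):
    return '.'.join(str(x) for x in
                    (n // 16777216, n // 65536 % 256, n // 256 % 256, n % 256))

print(int_to_ip(ip_to_int('10.0.0.0') + 1))  # → 10.0.0.1
```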

So you want only the first part of the specified subnet? Maybe try something like this:
"variables": {
    "SubnetPrefix": "[substring(parameters('SubnetIP'), 0, lastIndexOf(parameters('SubnetIP'), '.'))]",
    "privateIPAddress": "[concat(variables('SubnetPrefix'), '.', copyindex(20))]"
}
(Note the '.' in the concat: lastIndexOf excludes the final dot, so it has to be added back.) It would not be pretty for subnets larger than /24, but in the example it could work. Have a look at the ARM template string functions.

how to find the key of the minimum value in a nested json?

I have a JSON as shown below. Every group has name, percent_cpu and percent_memory keys. As an example I have shown 2 groups; there can be up to N.
[
    {
        "name": "esx1",
        "percent_cpu": 10,
        "percent_memory": 20
    },
    {
        "name": "esx2",
        "percent_cpu": 30,
        "percent_memory": 15
    },
    ...
]
I want to compare each group based on the percent_cpu key, find the lowest value, and get the value of the name key from that group.
If anyone can point me in the right direction or show me an example that would be great.
The answer that I'm looking for here is [{"name":"esx1"}, {"name":"esxN"}]
min can accept a key argument.
So, assuming you loaded your JSON into the list l, min(l, key=lambda s: s["percent_cpu"]) should give you
{
    "name": "esx1",
    "percent_cpu": 10,
    "percent_memory": 20
}
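A runnable sketch with the sample data from the question:

```python
# Pick the entry with the lowest percent_cpu and read its name.
l = [
    {"name": "esx1", "percent_cpu": 10, "percent_memory": 20},
    {"name": "esx2", "percent_cpu": 30, "percent_memory": 15},
]
lowest = min(l, key=lambda s: s["percent_cpu"])
print(lowest["name"])  # → esx1
```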

Storing array-type data in a MySQL table

To try and simplify, let me see if this makes it any easier. I've created 3 fictitious JSON representations of states I want to store in MySQL. I'm looking for some suggestions as to the best way to store them. In particular, the "state" data is the part that I'm not too sure how to approach.
If it helps, I'm fine with creating a specific DB table per round.
User state for Round 1
{
    "user": 1,
    "round": 0,
    "move": 0,
    "points": 10,
    "state": [100, 200, 300, 10, 50]
}
User state for Round 2
{
    "user": 1,
    "round": 1,
    "move": 3,
    "points": 150,
    "state": [10, 50, 800]
}
User state for Round 3
{
    "user": 1,
    "round": 2,
    "move": 7,
    "points": 1175,
    "state": [
        [100, 200, 300],
        [40, 10, 20],
        [800, 1000, 50, 90, 20]
    ]
}
Original post below
I'm currently working on a game that utilizes MySQL. The game works on rounds, where a round represents a set of moves. Possible moves can either be represented by a one dimensional array or multi-dimensional array. For example:
[100, 200, 300, 50, 10]
or
[
    [100, 200, 300, 50, 10],
    [20, 500, 10, 5, 800],
    [5, 1, 4]
]
The values will always be integers. And as one can see, in the multi-dimensional case, each row may not be the same length. In the one-dimensional case, the length can be arbitrary. However, for a given round, the array lengths are fixed. The variance of length, as well as whether it is one- or multi-dimensional, is defined per round.
One can think of the multi-dimensional case as a round that is comprised of sub-rounds. Once a sub-round is passed, it goes on to the next sub-round until all sub-rounds are completed.
A player playing a round can exit, but the state will be saved. Hence it is possible for a player to have multiple "saved" rounds.
As the player can play on different devices, I don't want to store the round state on the device; rather, I want to serve the state on demand. Currently the client submits a move to the server and the server responds with an updated state. But right now there is no persistence, so if the server goes down, the state is lost (still in dev, so not an issue right now).
I'm looking for approaches towards saving this state with MySQL.
One approach I thought of was having a DB table per round, but this doesn't quite solve how the one-dimensional or multi-dimensional array would be stored. Here I'd imagine the user_id would be the primary key.
The other would be a DB table which can handle storing arbitrary data. Here I'd imagine the primary key would be a composite of user_id and round_id.
Of course there is some additional state the table would need to store like the number of moves, the points earned in the round, etc.
To represent the "array data", would the JSON data type, or encoding the data in a string or binary blob, be best? I would prefer something that won't consume a ton of extra time in decoding (for example, some JSON encoders/decoders are not that speedy). Trying to determine some good options to try.
The server code is C++.
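Whichever layout you pick, the JSON round-trip itself is cheap to prototype. Below is a hypothetical sketch (in Python rather than the C++ your server uses, purely for illustration) of encoding a round state into a string suitable for a TEXT or JSON column and decoding it back; the keys mirror the example documents above:

```python
import json

# Serialize a round state for storage in a string/JSON column, then decode it.
# Works unchanged for both the flat and nested "state" shapes.
state = {"user": 1, "round": 2, "move": 7, "points": 1175,
         "state": [[100, 200, 300], [40, 10, 20], [800, 1000, 50, 90, 20]]}
encoded = json.dumps(state)    # this string would go into the column
decoded = json.loads(encoded)  # read back on load
print(decoded["state"][2])     # → [800, 1000, 50, 90, 20]
```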

Regex Return First Match

I have a weather file where I would like to extract the first value for "air_temp" recorded in a JSON file. The format this HTTP retriever uses is regex (I know it is not the best method).
I've shortened the JSON file to 2 data entries for simplicity - there are usually 100.
{
    "observations": {
        "notice": [
            {
                "copyright": "Copyright Commonwealth of Australia 2017, Bureau of Meteorology. For more information see: http://www.bom.gov.au/other/copyright.shtml http://www.bom.gov.au/other/disclaimer.shtml",
                "copyright_url": "http://www.bom.gov.au/other/copyright.shtml",
                "disclaimer_url": "http://www.bom.gov.au/other/disclaimer.shtml",
                "feedback_url": "http://www.bom.gov.au/other/feedback"
            }
        ],
        "header": [
            {
                "refresh_message": "Issued at 12:11 pm EST Tuesday 11 July 2017",
                "ID": "IDN60901",
                "main_ID": "IDN60902",
                "name": "Canberra",
                "state_time_zone": "NSW",
                "time_zone": "EST",
                "product_name": "Capital City Observations",
                "state": "Aust Capital Territory"
            }
        ],
        "data": [
            {
                "sort_order": 0,
                "wmo": 94926,
                "name": "Canberra",
                "history_product": "IDN60903",
                "local_date_time": "11/12:00pm",
                "local_date_time_full": "20170711120000",
                "aifstime_utc": "20170711020000",
                "lat": -35.3,
                "lon": 149.2,
                "apparent_t": 5.7,
                "cloud": "Mostly clear",
                "cloud_base_m": 1050,
                "cloud_oktas": 1,
                "cloud_type_id": 8,
                "cloud_type": "Cumulus",
                "delta_t": 3.6,
                "gust_kmh": 11,
                "gust_kt": 6,
                "air_temp": 9.0,
                "dewpt": 0.2,
                "press": 1032.7,
                "press_qnh": 1031.3,
                "press_msl": 1032.7,
                "press_tend": "-",
                "rain_trace": "0.0",
                "rel_hum": 54,
                "sea_state": "-",
                "swell_dir_worded": "-",
                "swell_height": null,
                "swell_period": null,
                "vis_km": "10",
                "weather": "-",
                "wind_dir": "WNW",
                "wind_spd_kmh": 7,
                "wind_spd_kt": 4
            },
            {
                "sort_order": 1,
                "wmo": 94926,
                "name": "Canberra",
                "history_product": "IDN60903",
                "local_date_time": "11/11:30am",
                "local_date_time_full": "20170711113000",
                "aifstime_utc": "20170711013000",
                "lat": -35.3,
                "lon": 149.2,
                "apparent_t": 4.6,
                "cloud": "Mostly clear",
                "cloud_base_m": 900,
                "cloud_oktas": 1,
                "cloud_type_id": 8,
                "cloud_type": "Cumulus",
                "delta_t": 2.9,
                "gust_kmh": 9,
                "gust_kt": 5,
                "air_temp": 7.3,
                "dewpt": 0.1,
                "press": 1033.1,
                "press_qnh": 1031.7,
                "press_msl": 1033.1,
                "press_tend": "-",
                "rain_trace": "0.0",
                "rel_hum": 60,
                "sea_state": "-",
                "swell_dir_worded": "-",
                "swell_height": null,
                "swell_period": null,
                "vis_km": "10",
                "weather": "-",
                "wind_dir": "NW",
                "wind_spd_kmh": 4,
                "wind_spd_kt": 2
            }
        ]
    }
}
The regex I am currently using is .*air_temp": (\d+).*, but this is returning 9 and 7.3 (entries 1 and 2). Could someone suggest a way to return only the first value?
I have tried using a lazy quantifier, but have had no luck.
This regex will help you. But I think you should capture and extract the first match with the features of the programming language you are using.
.*air_temp": (\d{1,3}\.\d{0,3})[\s\S]*?},
To understand the regex better: take a look at this.
Update
The above solution works if you have only two data entries. For more than two entries, use this one:
header[\s\S]*?"air_temp": (\d{1,3}\.\d{0,3})
Here we match the word header first and then match anything in a non-greedy way. After that, we match our expected pattern; thus we get the first match. Play with it here in regex101.
To capture negative numbers, we need to check whether a - character exists. We do this with ?, which indicates zero or one occurrence of the preceding element.
So the regex becomes,
header[\s\S]*?"air_temp": (-?\d{1,3}\.\d{0,3}) Demo
But the use of \K without the global flag (in another answer given by mickmackusa) is more efficient. To detect negative numbers, the modified version of that regex is
air_temp": \K-?\d{1,2}\.\d{1,2} demo.
Here {1,2} means 1~2 occurrences of the previous character. We use this as {min_occurrence,max_occurrence}.
I do not know which language you are using, but this looks like a matter of whether the global flag is set.
If the global flag is not set, only the first result is returned. If the global flag is set on your regex, it will iterate through, returning all possible results. You can test it easily using Regex101: https://regex101.com/r/x1bwg2/1
The lazy/greediness should not have any impact with regard to using/not using the global flag.
If \K is allowed in your coding language, use this: Demo
/air_temp": \K[\d.]+/ (117 steps); this will be highly efficient in searching your very large JSON text.
If no \K is allowed, you can use a capture group: (Demo)
/air_temp": ([\d.]+)/ this will still move with decent speed through your JSON text
Notice that there is no global flag at the end of the pattern, so after one match, the regex engine stops searching.
Update:
For "less literal" matches (but it shouldn't matter if your source is reliable), you could use:
Extended character class to include -:
/air_temp": \K[\d.-]+/ #still 117 steps
or change to negated character class and match everything that isn't a , (because the value always terminates with a comma):
/air_temp": \K[^,]+/ #still 117 steps
For a very strict match (if you are looking for a pattern that means you have ZERO confidence in the input data)...
It appears that your data doesn't go beyond one decimal place, temps between 0 and 1 prepend a 0 before the decimal, and I don't think you need to worry about temps in the hundreds (right?), so you could use:
/air_temp": \K-?[1-9]?\d(?:\.\d)?/ # 200 steps
Explanation:
Optional negative sign
Optional tens digit
Required ones digit
Optional decimal which must be followed by a digit
Accuracy Test Demo
Real Data Demo
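As an illustration (Python here, though your retriever may use a different engine), re.search scans for the first match and then stops, which is exactly the non-global behaviour these answers rely on. The sample text is a trimmed stand-in for the real JSON:

```python
import re

# re.search returns only the first match, so the first air_temp wins.
text = '"air_temp": 9.0, ... "air_temp": 7.3,'
m = re.search(r'"air_temp": (-?[\d.]+)', text)
print(m.group(1))  # → 9.0
```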

jq: Turn an array of objects into individual objects and use each array index as a new key

I have several large json objects (think GB scale), where the object values in some of the innermost levels are arrays of objects. I'm using jq 1.4 and I'm trying to break these arrays into individual objects, each of which will have a key such as g__0 or g__1, where the numbers correspond to the index in the original array, as returned by the keys function. The number of objects in each array may be arbitrarily large (in my example it is equal to 3). At the same time I want to keep the remaining structure.
For what it's worth the original structure comes from MongoDB, but I am unable to change it at this level. I will then use this json file to create a schema for BigQuery, where an example column will be seeds.g__1.guid and so on.
What I have:
{
    "port": 4500,
    "notes": "This is an example",
    "seeds": [
        {
            "seed": 12,
            "guid": "eaf612"
        },
        {
            "seed": 23,
            "guid": "bea143"
        },
        {
            "seed": 38,
            "guid": "efk311"
        }
    ]
}
What I am hoping to achieve:
{
    "port": 4500,
    "notes": "This is an example",
    "seeds": {
        "g__0": {
            "seed": 12,
            "guid": "eaf612"
        },
        "g__1": {
            "seed": 23,
            "guid": "bea143"
        },
        "g__2": {
            "seed": 38,
            "guid": "efk311"
        }
    }
}
Thanks!
The following jq program should do the trick. At least it produces the desired results for the given JSON. The program is so short and straightforward that I'll let it speak for itself:
def array2object(prefix):
    . as $in
    | reduce range(0; length) as $i ({}; .["\(prefix)\($i)"] = $in[$i]);

.seeds |= array2object("g__")
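If it helps to spot-check the result, the same transformation can be sketched in Python (not jq; just for verification against the expected output above):

```python
# Turn an array into an object keyed by "<prefix><index>".
def array2object(arr, prefix):
    return {f"{prefix}{i}": v for i, v in enumerate(arr)}

doc = {"port": 4500, "notes": "This is an example",
       "seeds": [{"seed": 12, "guid": "eaf612"},
                 {"seed": 23, "guid": "bea143"},
                 {"seed": 38, "guid": "efk311"}]}
doc["seeds"] = array2object(doc["seeds"], "g__")
print(list(doc["seeds"]))  # → ['g__0', 'g__1', 'g__2']
```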
So, you essentially want to transpose (pivot) your data in the BigQuery table so that, instead of having the data in rows, you will have it in columns.
Thus, my recommendation would be:
First, load your data as-is to start with.
Then, instead of doing the schema transformation outside of BigQuery, let's rather do it within BigQuery!
Below is an example of how to achieve the transformation you are looking for (assuming you have at most three items/objects in the array):
#standardSQL
SELECT
  port, notes,
  STRUCT(
    seeds[SAFE_OFFSET(0)] AS g__0,
    seeds[SAFE_OFFSET(1)] AS g__1,
    seeds[SAFE_OFFSET(2)] AS g__2
  ) AS seeds
FROM yourTable
You can test this with dummy data using CTE like below
#standardSQL
WITH yourTable AS (
  SELECT
    4500 AS port, 'This is an example' AS notes,
    [STRUCT<seed INT64, guid STRING>
      (12, 'eaf612'), (23, 'bea143'), (38, 'efk311')
    ] AS seeds
  UNION ALL SELECT
    4501 AS port, 'This is an example 2' AS notes,
    [STRUCT<seed INT64, guid STRING>
      (42, 'eaf412'), (53, 'bea153')
    ] AS seeds
)
SELECT
  port, notes,
  STRUCT(
    seeds[SAFE_OFFSET(0)] AS g__0,
    seeds[SAFE_OFFSET(1)] AS g__1,
    seeds[SAFE_OFFSET(2)] AS g__2
  ) AS seeds
FROM yourTable
So, technically, if you know the max number of items/objects in the seeds array, you can just manually write the needed SQL statement to run against the real data.
Hope you got the idea.
Of course, you can script/automate the process; you can find examples for similar pivoting tasks here:
https://stackoverflow.com/a/40766540/5221944
https://stackoverflow.com/a/42287566/5221944

How can I use RegEx to extract data within a JSON document

I am no RegEx expert. I am trying to understand if I can use RegEx to find a block of data in a JSON file.
My Scenario:
I am using an AWS RDS instance with enhanced monitoring. The monitoring data is being sent to a CloudWatch log stream. I am trying to use the data posted in CloudWatch to be visible in log management solution Loggly.
The ingestion is no problem, I can see the data in Loggly. However, the whole message is contained in one big blob field. The field content is a JSON document. I am trying to figure out if I can use RegEx to extract only certain parts of the JSON document.
Here is a sample extract from the JSON payload I am using:
{
    "engine": "MySQL",
    "instanceID": "rds-mysql-test",
    "instanceResourceID": "db-XXXXXXXXXXXXXXXXXXXXXXXXX",
    "timestamp": "2017-02-13T09:49:50Z",
    "version": 1,
    "uptime": "0:05:36",
    "numVCPUs": 1,
    "cpuUtilization": {
        "guest": 0,
        "irq": 0.02,
        "system": 1.02,
        "wait": 7.52,
        "idle": 87.04,
        "user": 1.91,
        "total": 12.96,
        "steal": 2.42,
        "nice": 0.07
    },
    "loadAverageMinute": {
        "fifteen": 0.12,
        "five": 0.26,
        "one": 0.27
    },
    "memory": {
        "writeback": 0,
        "hugePagesFree": 0,
        "hugePagesRsvd": 0,
        "hugePagesSurp": 0,
        "cached": 505160,
        "hugePagesSize": 2048,
        "free": 2830972,
        "hugePagesTotal": 0,
        "inactive": 363904,
        "pageTables": 3652,
        "dirty": 64,
        "mapped": 26572,
        "active": 539432,
        "total": 3842628,
        "slab": 34020,
        "buffers": 16512
    },
My Question
My question is: can I use RegEx to extract, say, a subset of the document? For example, CPU utilization, or memory, etc.? If that is possible, how do I write the RegEx? If possible, I can then use it to drill down into the extracted document to get individual data elements as well.
Many thanks for your help.
First, I agree with Sebastian: a proper JSON parser is better.
Anyway, sometimes the dirty approach must be used. If your text layout will not change, then a regexp is simple:
E.g. "total": (\d+\.\d+) gets the CPU usage and "total": (\d\d\d+) the total memory usage (match at least 3 digits so as not to match the first total; memory will probably never be less than 100 :-).
If changes are to be expected, make it a bit more stable: ["']total["']\s*:\s*(\d+\.\d+).
It may also be possible to match across return chars like this: "cpuUtilization"\s*:\s*\{\s*\n.*\n\s*"irq"\s*:\s*(\d+\.\d+), making it a bit more stable (this time for the irq value).
And so on and so on.
You see that you can quickly get into very complex expressions. That approach is very fragile!
P.S. Depending on the exact details of Loggly's regex flavor, details may change. The above examples are based on Perl.
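As an illustration only (in Python rather than Perl, against a trimmed stand-in for the real document), the first two patterns behave like this:

```python
import re

# The "dirty" approach: pull the CPU total with the float pattern, and the
# memory total with the 3-or-more-digits pattern that skips the CPU value.
doc = '"cpuUtilization": { "irq": 0.02, "total": 12.96 }, "memory": { "total": 3842628 }'
cpu_total = re.search(r'"total":\s*(\d+\.\d+)', doc).group(1)
mem_total = re.search(r'"total":\s*(\d\d\d+)', doc).group(1)
print(cpu_total, mem_total)  # → 12.96 3842628
```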