Can Jinja print a negative decimal number with more than 2 decimal places? - jinja2

I've got an API call that's returning the following JSON:
{
  "account_id": "accountID",
  "processed_length": 41,
  "sentiment": "NEGATIVE",
  "sentiment_score": -0.800000011920929,
  "text": "This is the worst possible solution ever."
}
I'm trying to print sentiment_score with its actual value, but {{sentiment_score}} prints as 0, and {{sentiment_score|float}} prints as 0.0. How can I get the full value, including the minus sign, in Jinja? {{sentiment}} and {{text}} print their values just fine.

I was able to get it to show the minus sign by using {{0 + sentiment_score}}.
I didn't have the problem of truncated numbers though.
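For what it's worth, here is a minimal sketch in plain Python with the jinja2 package (outside QuickBase Pipelines, whose rendering step may coerce values differently) showing that Jinja itself keeps the sign and the full precision once the value is a real float; the template text and variable name just mirror the question:

from jinja2 import Template

data = {"sentiment_score": -0.800000011920929}

tmpl = Template(
    "raw: {{ sentiment_score }}\n"
    "as float: {{ sentiment_score | float }}\n"
    "formatted: {{ '%.15f' | format(sentiment_score | float) }}\n"
    "sign workaround: {{ 0 + sentiment_score }}"
)

# every line prints the full negative value, e.g. -0.800000011920929
print(tmpl.render(data))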

I was able to get the value to print properly in QuickBase Pipelines by just using the output value from the Iterate over JSON step.

Related

Redshift JSON Parsing

I have some JSON data in a Redshift table, stored in a column of type character varying. An example entry is:
[{"value":["*"], "key":"testData"}, {"value":"["GGG"], key: "differentData"}]
I want to return values based on keys. How can I do this? I'm attempting to do something like
json_extract_path_text(column, 'value') but unfortunately it errors out. Any ideas?
So the first issue is that your string isn't valid JSON. There are mismatched and missing quotes. I think you mean:
[{"value":["*"], "key":"testData"}, {"value":["GGG"], "key": "differentData"}]
I don't know if this is a data issue or a transcription error, but these functions won't work unless the JSON text is valid.
The next thing to consider is that, at the top level, this JSON is an array, so you will need the json_extract_array_element_text() function to pick out an element of the array. For example:
json_extract_array_element_text('json string', 0)
So putting this together we can extract the first "value" with (untested):
json_extract_path_text(
  json_extract_array_element_text(
    '[{"value":["*"], "key":"testData"}, {"value":["GGG"], "key": "differentData"}]', 0
  ), 'value'
)
This should return the string ["*"].
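As a quick sanity check of the nesting (done here in Python rather than on Redshift, since I can't run the SQL), the same two steps, taking the first array element and then reading its "value" path, produce that string; the variable names are just illustrative:

import json

raw = '[{"value":["*"], "key":"testData"}, {"value":["GGG"], "key": "differentData"}]'

# rough analogue of json_extract_array_element_text(raw, 0)
first = json.loads(raw)[0]

# rough analogue of json_extract_path_text(..., 'value'), returned as text
print(json.dumps(first["value"]))   # -> ["*"]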

How do I search for a string in this JSON with Python

My JSON file looks something like:
{
  "generator": {
    "name": "Xfer Records Serum",
    ....
  },
  "generator": {
    "name": "Lennar Digital Sylenth1",
    ....
  }
}
I ask the user for a search term, and the input is searched for in the name key only; all matching results are returned. That means if I input just 's', both of the entries above would be returned. Please also explain how to return all the object names that are generators. The simpler the method, the better for me. I use the json library, but if another library is required, that's not a problem.
Before switching to JSON I tried XML, but it did not work.
If your goal is just to search all name properties, this will do the trick:
import re

def search_names(term, lines):
    name_search = re.compile(r'\s*"name"\s*:\s*"(.*' + term + r'.*)",?$', re.I)
    return [x.group(1) for x in [name_search.search(y) for y in lines] if x]

with open('path/to/your.json') as f:
    lines = f.readlines()

print(search_names('s', lines))
which would return both names you listed in your example.
The way the search_names() function works is that it builds a regular expression matching any line that starts with "name": (with a varying amount of whitespace), followed by your search term with any other characters around it, terminated by " plus an optional , and the end of the string. It applies that expression to each line from the file, filters out the non-matching lines, and returns the value of the name property (the capture group contents) for each match.
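If you'd rather stay with the json library you mentioned, a sketch along these lines should also work, assuming the file parses as valid JSON (note that duplicate "generator" keys at the same level, as in your excerpt, would be collapsed by json.load, which is one reason the line-based regex above can be preferable for this particular file):

import json

def find_names(node, term):
    # walk dicts and lists, collecting every "name" value that contains term (case-insensitive)
    matches = []
    if isinstance(node, dict):
        for key, value in node.items():
            if key == "name" and isinstance(value, str) and term.lower() in value.lower():
                matches.append(value)
            else:
                matches.extend(find_names(value, term))
    elif isinstance(node, list):
        for item in node:
            matches.extend(find_names(item, term))
    return matches

with open('path/to/your.json') as f:   # same placeholder path as the regex version
    print(find_names(json.load(f), 's'))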

NetLogo: no " " in CSV spreadsheet since NetLogo 6.0.3

I want to use syntax that substitutes "#N/A" for the calculated value 0, but the quotes ("") are not written to the CSV file in NetLogo 6.0.3 (what appears is just #N/A). I want to calculate an average in Excel from numerical data mixed with "#N/A", but the calculation result comes out as #N/A. If "#N/A" were written to the CSV file as a quoted string, Excel could compute the average. In NetLogo 6.0.1 this was possible. What should I do in NetLogo 6.0.3?
The "correct" way to do this is to handle it in excel by ignoring N/As in your average. That way, you preserve those values as N/As and so have to be conscious about how you deal with them. You can do this by calculating the average with something like =AVERAGE(IF(ISNUMBER(A2:A5), A2:A5)) and then entering with ctrl+shift+enter instead of just enter. That, of course, is kind of annoying.
To solve it on the NetLogo side, report the value "\"#N/A\"" instead of "#N/A". That will preserve the quotes when you import into Excel. Alternatively, you could output pretty much any string other than "#N/A". For instance, reporting "not-a-number" would make it a string, as would just using an empty string. The quotes you see in Excel are actually part of the string, not just indicators that the field is a string. In general, fields in CSV don't have a type; Excel just interprets what it can as a number. It treats the exact field #N/A as special, so modifying it in any way (not just adding quotes around it) will prevent it from being interpreted in that special way.
It's also worth noting that this was a bug in previous versions of NetLogo (I'm assuming you're using BehaviorSpace here; the CSV extension has always worked this way). There was no way to output a string without having a quote at the beginning and end of it; that is, the string value itself would contain quotes. The behavior you're seeing now is a consequence of fixing that bug: you can output true #N/A values if you want to, which there was no way of doing before.
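To make the distinction concrete, here is a small sketch (plain Python, not NetLogo; the file name is just an example) of the two kinds of field described above, as they would sit in the raw CSV text:

with open("na_demo.csv", "w") as f:
    # a bare field, which Excel interprets as its special N/A value
    f.write('#N/A,1,2\n')
    # a field whose value itself contains quote characters ("#N/A"); CSV escaping
    # doubles the inner quotes, and Excel then treats the field as plain text
    f.write('"""#N/A""",1,2\n')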
Maybe this will work for you. Assuming you have the csv extension enabled:
extensions [ csv ]
You can use a reporter that replaces 0 values in a list (or list of lists) with the string value "#NA" (or "N/A" if you want, but for me #NA is what works with Excel).
to-report replace-zeroes [ list_ ]
  if list_ = [] [ report [] ]
  let out map [ i ->
    ifelse-value is-list? i
      [ replace-zeroes i ]
      [ ifelse-value ( i != 0 ) [ i ] [ "#NA" ] ]
  ] list_
  report out
end
As a quick check:
to test
  ca
  ; make fake list of lists for csv output
  let fake n-values 3 [ i -> n-values 5 [ random 4 ] ]
  ; replace the 0 values with the NA values
  let replaced replace-zeroes fake
  ; print both the base and 0-replaced lists
  print fake
  print replaced
  ; export to csv
  csv:to-file "replaced_out.csv" replaced
  reset-ticks
end
Observer output (random):
[[0 0 2 2 0] [3 0 0 3 0] [2 3 2 3 1]]
[[#NA #NA 2 2 #NA] [3 #NA #NA 3 #NA] [2 3 2 3 1]]
Excel output: (screenshot omitted)

Cannot parse JSON String data to Integer

Suppose my JSON is like this.
{
  "count": 32,
  "weight": 1.13,
  "name": "grape",
  "isFruit": true,
  "currentPrice": "30.00"
}
If I read my JSON like this,
String current = json.getString("currentPrice");
the current variable will have the value "30.00". Is there any way I can parse this as an Integer? I tried Integer.parseInt, but it throws a NumberFormatException for the input string "30.00".
I tried removing the quotes with a regex, but that didn't work.
You need to use parseInt(current) to get current as a number. The variants are:
parseInt(num);      // default way (no radix)
parseInt(num, 10);  // parseInt with a radix (decimal)
parseFloat(num);    // floating point
Number(num);        // Number constructor
You want parseFloat(). 30.00 isn't an integer, even though it's numerically equal to the integer 30.
If you want it as an integer, you can use Math.floor() to convert it, or parseInt() to take just the integer portion; but if you want to keep the whole value (in case it isn't always a whole number), parse it as a float.
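The thread mixes Java (in the question) with JavaScript-style function names (in the answers), but the idea is language independent. A minimal sketch of "parse as a float first, then truncate only if you really need an integer", written here in Python purely to illustrate the flow:

s = "30.00"
price = float(s)    # parsing the decimal string succeeds: 30.0
whole = int(price)  # truncation toward zero: 30
print(price, whole)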

Parsing a JSON object in Ruby with a wildcard?

Problem:
I'm relatively new to programming and learning Ruby, I've worked with JSON before but have been stumped by this problem.
I'm taking a hash, running hash.to_json, and getting back a JSON object that looks like this:
'quantity' =
{
  "line_1": {
    "row": "1",
    "productNumber": "111",
    "availableQuantity": "4"
  },
  "line_2": {
    "row": "2",
    "productNumber": "112",
    "availableQuantity": "6"
  },
  "line_3": {
    "row": "3",
    "productNumber": "113",
    "availableQuantity": "10"
  }
}
I want to find the 'availableQuantity' value that's greater than 5 and return the line number.
Further, I'd like to return the line number and the product number.
What I've tried
I've been searching for a way to use a wildcard in a JSON query to get past the "line_" key for each entry, but with no luck.
First I tried to simply identify an 'availableQuantity' value within the JSON object that is greater than 5:
q = JSON.parse(quantity)
q.find {|key| key["availableQuantity"] > 5}
However this returns the error: "{TypeError}no implicit conversion of String into Integer."
I've googled this error but I can not understand what it means in the context of this problem.
or even
q.find {|key, value| value > 2}
which returns the error: "undefined method `>' for {"row"=>"1", "productNumber"=>111, "availableQuantity"=>4}:Hash"
This attempt looks so simplistic I'm ashamed, but it reveals a fundamental gap in my understanding of how to loop over structures with Enumerable.
Can anyone explain a solution, and ideally what the steps in the solution mean? For example, does the solution require using an enumerable with find? Or does Ruby handle a direct query to the JSON?
This would help my learning considerably.
I want to find the 'availableQuantity' value that's greater than 5 and [...] return the line number and the product number.
First problem: your value is not a number, so you can't compare it to 5. You need to_i to convert.
Second problem: getting the line number is easiest with regular expressions. /\d+/ is "any consecutive digits". Combining that...
q.select { |key, value|
  value['availableQuantity'].to_i > 5
}.map { |key, value|
  [key[/\d+/].to_i, value['productNumber'].to_i]
}
# => [[2, 112], [3, 113]]