NetLogo: no " " in CSV spreadsheet since NetLogo 6.0.3

I want to use syntax that substitutes the string "#N/A" for a calculated value of 0, but the quotes ("") are not written to the csv file in NetLogo 6.0.3 (what is written is ⇒ #N/A). I want to calculate an average in Excel from a mix of "#N/A" and numerical data, but #N/A is displayed as the calculation result. If "#N/A" (with the quotes) were written to the csv file, Excel could do the calculation. In NetLogo 6.0.1 this was possible. What should I do in NetLogo 6.0.3?

The "correct" way to do this is to handle it in excel by ignoring N/As in your average. That way, you preserve those values as N/As and so have to be conscious about how you deal with them. You can do this by calculating the average with something like =AVERAGE(IF(ISNUMBER(A2:A5), A2:A5)) and then entering with ctrl+shift+enter instead of just enter. That, of course, is kind of annoying.
To solve it on the NetLogo side, report the value "\"#N/A\"" instead of "#N/A". That will preserve the quotes when you import into Excel. Alternatively, you could output pretty much any string other than "#N/A". For instance, reporting "not-a-number" would make it a string, or you could even use an empty string. The quotes you see in Excel are actually part of the string, not just indicators that the field is a string. In general, fields in CSV don't have a type; Excel just interprets what it can as a number. It treats the exact field #N/A as special, so modifying it in any way (not just adding quotes around it) will prevent Excel from interpreting it in that special way.
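To make that concrete, here is a minimal sketch using Python's csv module (an analogue for illustration only, not NetLogo or BehaviorSpace) of what each variant looks like in the raw file:
import csv, sys

# Hypothetical field values: the bare token, the same token wrapped in literal
# quote characters, and an ordinary replacement string.
writer = csv.writer(sys.stdout)
writer.writerow(["#N/A", '"#N/A"', "not-a-number"])
# Output line: #N/A,"""#N/A""",not-a-number
# Excel reads the first field as its special #N/A error value, the second as
# the literal text "#N/A" (quotes included), and the third as a plain string.
Either of the last two variants keeps Excel from turning the whole calculation into the #N/A error, which is what the question describes.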
It's also worth noting that this was a bug in previous versions of NetLogo (I'm assuming you're using BehaviorSpace here; the csv extension has always worked this way). There was no way to output a string without having a quote at the beginning and end of it; that is, the string value itself would contain quotes. The new behavior is a consequence of fixing that. Now you can output true #N/A values if you want to, which there was no way of doing before.

Maybe this will work for you. Assuming you have the csv extension enabled:
extensions [ csv ]
You can use a reporter that replaces 0 values in a list (or list of lists) with the string value "#NA" (or "N/A" if you want, but for me #NA is what works with Excel).
to-report replace-zeroes [ list_ ]
  if list_ = [] [ report [] ]
  let out map [ i ->
    ifelse-value is-list? i
      [ replace-zeroes i ]
      [ ifelse-value ( i != 0 ) [ i ] [ "#NA" ] ]
  ] list_
  report out
end
As a quick check:
to test
  ca
  ; make fake list of lists for csv output
  let fake n-values 3 [ i -> n-values 5 [ random 4 ] ]
  ; replace the 0 values with the NA values
  let replaced replace-zeroes fake
  ; print both the base and 0-replaced lists
  print fake
  print replaced
  ; export to csv
  csv:to-file "replaced_out.csv" replaced
  reset-ticks
end
Observer output (random):
[[0 0 2 2 0] [3 0 0 3 0] [2 3 2 3 1]]
[[#NA #NA 2 2 #NA] [3 #NA #NA 3 #NA] [2 3 2 3 1]]
Excel output:

Related

Google Sheets Scripts -> ss.getRange() -> JSON.stringify removing the brackets at the element level for 1-dimensional arrays

If you want to stringify columns A, B, C for a few rows, it makes sense that JSON.stringify returns something like [ ["1a","2a","3a"], ["1b","2b","3b"] ].
However, if you are using just one column, i.e. a 1-dimensional array, then what JSON.stringify does is terrible: [ ["1a"], ["1b"] ]
What my API expects is ["1a","1b"]
What am I missing? How can I tell it to format it properly?
From the question:
However, if you are using just one column, i.e. a 1-dimensional array, then what JSON.stringify does is terrible: [ ["1a"], ["1b"] ]
It looks like you have a misconception: getValues() returns a two-dimensional array no matter whether the range refers to a single row or a single column. Anyway, one way to convert the two-dimensional array into a one-dimensional array is by using Array.prototype.flat().
let column = [[1],[2],[3]];
console.log(column.flat()); // [1, 2, 3]

jq filter to ignore values in select statement based on array values

Given the following JSON input :
{
"hostname": "server1.domain.name\nserver2.domain.name\n*.gtld.net",
"protocol": "TCP",
"port": "8080\n8443\n9500-9510",
"component": "Component1",
"hostingLocation": "DC1"
}
I would like to obtain the following JSON output :
{
  "hostname": [
    "server1.domain.name",
    "server2.domain.name",
    "*.gtld.net"
  ],
  "protocol": "TCP",
  "port": [
    "8080-8080",
    "8443-8443",
    "9500-9510"
  ],
  "component": "Component1",
  "hostingLocation": "DC1"
}
Considering:
1. That the individual values in the port array may, or may not, be separated by a - character (I have no control over this).
2. That if an individual value in the port array does not contain the - separator, I then need to add it and repeat the value after the - separator. For example, 8080 becomes 8080-8080, 8443 becomes 8443-8443, and so forth.
3. And finally, that if a value in the port array is already of the format value-value, I should simply leave it unmodified.
I've been banging my head against this filter all afternoon, after reading many examples both here and in the official jq documentation. I simply can't figure out how to accommodate consideration #3 above.
The filter I have now :
{hostname: .hostname | split("\n"), protocol: .protocol, port: .port | split("\n") | map(select(. | contains("-") | not)+"-"+.), component: .component, hostingLocation: .hostingLocation}
Yields the following output JSON :
{
  "hostname": [
    "server1.domain.name",
    "server2.domain.name",
    "*.gtld.net"
  ],
  "protocol": "TCP",
  "port": [
    "8080-8080",
    "8443-8443"
  ],
  "component": "Component1",
  "hostingLocation": "DC1"
}
As you can see above, I subsequently lose the 9500-9510 value as it already contains the - string which my filter weeds out.
If my logic does not fail me, I would need to stick an if statement within my select statement so that only array values that do not contain the - string are sent to the select, while values that do contain the separator are left untouched. However, I cannot seem to figure this last piece out.
I will happily accept any alternative filter that yields the desired output, however I am also really keen on understanding where my logics fails in the above filter.
Thanks in advance to anyone spending their valuable time helping me out!
/Joel
First, we split the hostname string by a newline character (.hostname /= "\n") and do the same with the port string (.port /= "\n"). Actually, we can combine these identical operations into one: (.hostname, .port) /= "\n"
Next, for every element of the port array (.port[]) we split at any non-digit character (split("[^\\d]";"g")), giving an array of digit-only strings, from which we take the first element (.[0]), then a dash sign, and finally either the second element, if present, or the first one again (.[1]//.[0]).
With your input in a file called input.json, the following should convert it into the desired format:
jq '
(.hostname, .port) /= "\n" |
.port[] |= (split("[^\\d]";"g") | "\(.[0])-\(.[1]//.[0])")
' input.json
Regarding your considerations:
1. As we split at any non-digit character, it makes no difference what other character separates the values of a port range. If more than one character could separate them (e.g. an arrow -> or spaces before and after the dash sign), simply replace the regex [^\\d] with [^\\d]+ for capturing more than one non-digit character.
2. and 3. We always produce a range by including a dash sign and a second value, which, depending on the presence of a second item, may be either that item or the first one again.
Regarding your approach:
Inside map you used select which evaluates to empty if the condition (contains("-") | not) is not met. As "9500-9510" does indeed contain a dash sign, it didn't survive. An if statement inside the select statement wouldn't help because even if select doesn't evaluate to empty it still doesn't modify anything, it just reproduces its input unchanged. Therefore, if select is letting through both cases (containing and not containing dash signs) it becomes useless. You could, however, work with an if statement outside of the select statement, but I considered the above solution as a simpler approach.
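To illustrate that difference, here is a small Python analogue (hypothetical, not jq; the list is made up for the example) of the two behaviours:
ports = ["8080", "8443", "9500-9510"]

# select-like behaviour: items that already contain "-" are filtered out
# before the mapping, so "9500-9510" never reaches the output
selected = [p + "-" + p for p in ports if "-" not in p]
print(selected)  # ['8080-8080', '8443-8443']

# if-like behaviour: items without "-" are transformed, the rest pass through unchanged
mapped = [p if "-" in p else p + "-" + p for p in ports]
print(mapped)    # ['8080-8080', '8443-8443', '9500-9510']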

Netlogo: Using .csv as a cross-reference to raster value

I am trying to import a raster file that contains land-cover codes. Once the raster sets the patch variables to these land-cover codes, I want to link those codes to a separate .csv that has vegetation-specific parameters for each land-cover code. Thus each patch will be assigned the .csv variables based on its land-cover code. I'm completely stumped as to how to do this. More generally, how can I use a .csv as a cross-reference file? I don't have any code examples but here is an example of the kind of .csv I want to use:
Table example
So this .csv would assign the GR1 variables to multiple patches with land-cover code GR1
I agree with JenB for sure, especially if your values table is relatively short. However, if you have a lot of values, it might work to use the csv and table extensions together to make a dictionary where the 'land-cover code' acts as the key to retrieve the other data for your patch. So one path would be:
Read the csv
Take one of the columns as the key
Keep the remaining columns as a list of values
Combine these two lists into a list of lists
Make a dictionary out of those two lists
Have each patch query the dictionary for the values of interest
So with this example csv table:
lcover,fuel,type
GR1,15,a
GR2,65,b
GR3,105,a
And these extensions and variables:
extensions [ csv table ]
globals [ csvRaw keyValList dataList patchDataDict ]
patches-own [ land-cover fuel patchType ]
We can run a code block to do all these steps (more explanation in comments):
to setup
  ca
  ; Load the csv
  set csvRaw but-first csv:from-file "landCoverMeta.csv"
  print csvRaw
  ; Pull first value (land cover)
  set keyValList map first csvRaw
  print keyValList
  ; Pull data values
  set dataList map but-first csvRaw
  print dataList
  ; Combine these two lists into a list of lists
  let tempList ( map [ [ a b ] -> list a b ] keyValList dataList )
  ; Make a dictionary with the land cover as the key
  ; and the other columns as the value (in a list)
  set patchDataDict table:from-list tempList
  ask patches [
    ; Randomly set patch 'land cover' for this example
    set land-cover one-of [ "GR1" "GR2" "GR3" ]
    ; Query the dictionary for the fuel column (item 0 since
    ; we've used landcover as the key) and for the type (item 1)
    set fuel item 0 table:get patchDataDict land-cover
    set patchType item 1 table:get patchDataDict land-cover
  ]
  ; Do some stuff based on the retrieved values
  ask patches [
    set pcolor fuel
    if patchType = "a" [
      sprout 1
    ]
  ]
  reset-ticks
end
This generates a toy landscape where each patch is assigned a fuel and patchType value according to a query based on the first column of that csv:
Hopefully that gets you started

How do I search for a string in this JSON with Python

My JSON file looks something like:
{
  "generator": {
    "name": "Xfer Records Serum",
    ....
  },
  "generator": {
    "name": "Lennar Digital Sylenth1",
    ....
  }
}
I ask the user for a search term and search for it in the name key only. All matching results should be returned, so if I input just 's', both of the above would be returned. Please also explain how to return all the object names that are generators. The simpler the method, the better it is for me. I use the json library, but another library would not be a problem.
Before switching to JSON I tried XML but it did not work.
If your goal is just to search all name properties, this will do the trick:
import re

def search_names(term, lines):
    # raw strings so the backslashes in the pattern are passed through to re
    name_search = re.compile(r'\s*"name"\s*:\s*"(.*' + term + r'.*)",?$', re.I)
    return [x.group(1) for x in [name_search.search(y) for y in lines] if x]

with open('path/to/your.json') as f:
    lines = f.readlines()

print(search_names('s', lines))
which would return both names you listed in your example.
The way the search_names() function works is that it builds a regular expression matching any line that starts with "name": (with a varying amount of whitespace), followed by your search term with any other characters around it, terminated by " plus an optional comma at the end of the string. It then applies that pattern to each line from the file, filters out the non-matching lines, and returns the value of the name property (the capture group contents) for each match.
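If your file is valid JSON, a parse-based sketch may be simpler; note that json.load keeps only the last of the duplicate "generator" keys shown in the question, so that structure would need fixing first. The helper below (find_names, a hypothetical name, not a library function) just walks whatever structure it is given and collects matching "name" values:
import json

def find_names(obj, term):
    # Recursively collect every "name" value containing term (case-insensitive).
    matches = []
    if isinstance(obj, dict):
        for key, value in obj.items():
            if key == "name" and isinstance(value, str) and term.lower() in value.lower():
                matches.append(value)
            else:
                matches.extend(find_names(value, term))
    elif isinstance(obj, list):
        for item in obj:
            matches.extend(find_names(item, term))
    return matches

with open('path/to/your.json') as f:
    data = json.load(f)

print(find_names(data, 's'))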

Importing JSON dates into SAS gives incorrect $ format where datetime data is expected

I am trying to import data containing some date columns/fields into SAS. The data are in JSON format and hence need to be converted before import; I use the SAS JSON libname engine for this.
But when I convert/import the data, SAS does not interpret the dates as proper dates and does not let me manipulate the data with date constraints and so on. Instead, SAS imports the dates as Format = $, whatever that is, though the values do show up in the imported data. SAS imports the data without errors, but every date field other than date_fi is not properly formatted as a date.
I am using the following script
filename resp "C:\Temp\transaktioner_2017-07.json" lrecl=1000000000;
filename jmap "C:\Temp\transaktioner.map";
filename head "c:\temp\header.txt";

options metaserver="DOMAIN" metaport=8561
        metarepository="Foundation" metauser="USER"
        metapass='CENSORED';

libname CLIENT sasiola tag=SOMETAG port=10011
        host="DOMAIN"
        signer="https://CENSORED";

proc http HEADEROUT=head
          url='http://VALID_PATH/acubiz_sas/_design/view/_view/bymonth?key="2017-07"'
          method="GET" CT="application/json" out=resp;
run;

libname space JSON fileref=resp map=jmap; *automap=create;
LIBNAME SASDATA BASE "D:\SASData"; * outencoding='UTF-8';

Data SASDATA.Transaktioner;
  Set space.Rows_value;
run;

data _null_;
  if exist("Acubiz.EMS_TRANSAKTIONER", "DATA") then
    rc = dosubl("proc sql noprint; drop table Acubiz.EMS_TRANSAKTIONER; quit;");
run;

data Acubiz.EMS_TRANSAKTIONER;
  set sasdata.transaktioner;
run;

proc metalib;
  omr (library="/Shared Data/SAS Visual Analytics/Autoload/AcubizEMSAutoload/Acubiz_EMS"
       repname="Foundation");
  folder="/Shared Data/SAS Visual Analytics/Autoload/AcubizEMSAutoload";
  select ("EMS_TRANSAKTIONER");
run; quit;

libname CLIENT clear;
libname space clear;
For this conversion, I use the following JSON map file, called 'transaktioner.map'.
The field date_fi imports in the proper date format which I can manipulate as date-format in SAS Visual Analytics, but confirmeddate_fi does not.
The most important parts of this file are here.
{
  "NAME": "date_fi",
  "TYPE": "NUMERIC",
  "INFORMAT": [ "e8601dt19", 19, 0 ],
  "FORMAT": ["DATETIME", 20],
  "PATH": "/root/rows/value/date_fi",
  "CURRENT_LENGTH": 20
},
{
  "NAME": "confirmeddate_fi",
  "TYPE": "NUMERIC",
  "INFORMAT": [ "e8601dt19", 19, 0 ],
  "FORMAT": ["DATETIME", 20],
  "PATH": "/root/rows/value/confirmeddate_fi",
  "CURRENT_LENGTH": 20
},
Does anyone know how I might import the data so that the date fields are interpreted as such?
I have been messing with different informats in the JSON map file to solve this riddle and have managed to get to where I can import the data without errors, but SAS does not interpret the date fields as dates.
The actual fields are explained here with some examples (taken from the imported data):
Reference that works
date_fi: "2017-07-14T00:00:00" (Apparently never timestamped but uses T00:00:00 - checked 9 instances)
Should work
invoicedate_fi: "2017-08-01T00:00:00" (Apparently never timestamped but uses T00:00:00 - checked 9 instances)
invoicedate_fi: "2017-07-19T00:00:00"
invoicedate_fi: "2017-07-17T00:00:00"
arrivaldate_fi: "2017-08-13T00:00:00" (Apparently never timestamped but uses T00:00:00 - checked 9 instances)
departuredate_fi: "2017-08-09T00:00:00" (Apparently never timestamped but uses T00:00:00 - checked 9 instances)
Do not work as numeric - even though they are specified as dates in the map file (for use with the SAS JSON libname)
markedreadydate_fi: "2017-08-02T11:41:56" (This field is often but not always timestamped)
markedreadydate_fi: "2017-07-31T15:08:03"
markedreadydate_fi: "2017-07-19T00:00:00"
confirmeddate_fi: "2017-07-21T00:00:00" (This field is often but not always timestamped)
confirmeddate_fi: "2017-08-06T20:11:26"
confirmeddate_fi: "2017-07-14T18:38:41"
confirmeddatefinance_fi: "2017-07-31T15:54:10" (This field is often but not always timestamped)
confirmeddatefinance_fi: "2017-08-17T10:33:32"
confirmeddatefinance_fi: "2017-07-26T08:21:34"
markedreadydate_fi: "2017-07-19T00:00:00" (This field is often but not always timestamped)
Does anyone have pertinent info on this issue, as I am at my wit's end? I have exhausted SAS Tech Support on this date issue.
PS: As a proof of concept, we are importing approximately 110,000 rows, and the import finishes without any errors.
A good PDF explaining the different ISO formats in SAS can be found here
Apparently the solution is to import the date columns as CHARACTER instead of NUMERIC, and then do the conversion to datetime values in the SAS code, like so:
Data SASDATA.Transaktioner(drop=
    arrivaldate_fi_temp
    departuredate_fi_temp
    confirmeddate_fi_temp
    confirmeddatefinance_fi_temp
    datetoshow_fi_temp
    date_fi_temp
    invoicedate_fi_temp
    markedreadydate_fi_temp
  );
  Set space.Rows_value(rename=(
    confirmeddate_fi=confirmeddate_fi_temp
    datetoshow_fi=datetoshow_fi_temp
    date_fi=date_fi_temp
    invoicedate_fi=invoicedate_fi_temp
    markedreadydate_fi=markedreadydate_fi_temp
    arrivaldate_fi=arrivaldate_fi_temp
    departuredate_fi=departuredate_fi_temp
    confirmeddatefinance_fi=confirmeddatefinance_fi_temp
  ));
  *length invoicedate_fi 8.;
  format
    confirmeddate_fi
    datetoshow_fi
    date_fi
    invoicedate_fi
    markedreadydate_fi
    arrivaldate_fi
    departuredate_fi
    confirmeddatefinance_fi
    datetime20.;
  if confirmeddate_fi_temp ne '' then confirmeddate_fi=input(confirmeddate_fi_temp,E8601DT19.); else confirmeddate_fi=.;
  if datetoshow_fi_temp ne '' then datetoshow_fi=input(datetoshow_fi_temp,E8601DT19.); else datetoshow_fi=.;
  if date_fi_temp ne '' then date_fi=input(date_fi_temp,E8601DT19.); else date_fi=.;
  if invoicedate_fi_temp ne '' then invoicedate_fi=input(invoicedate_fi_temp,E8601DT19.); else invoicedate_fi=.;
  if markedreadydate_fi_temp ne '' then markedreadydate_fi=input(markedreadydate_fi_temp,E8601DT19.); else markedreadydate_fi=.;
  if arrivaldate_fi_temp ne '' then arrivaldate_fi=input(arrivaldate_fi_temp,E8601DT19.); else arrivaldate_fi=.;
  if departuredate_fi_temp ne '' then departuredate_fi=input(departuredate_fi_temp,E8601DT19.); else departuredate_fi=.;
  if confirmeddatefinance_fi_temp ne '' then confirmeddatefinance_fi=input(confirmeddatefinance_fi_temp,E8601DT19.); else confirmeddatefinance_fi=.;
run;
I then remove all the NUMERIC type specifics for the date fields in the map file. This way the JSON libname engine does NOT try to interpret the date formats; the SAS code above does that instead.
I.e., the map file specification must be changed back to something like this for all date fields:
{
  "NAME": "date_fi",
  "TYPE": "CHARACTER",
  "PATH": "/root/rows/value/date_fi",
  "CURRENT_LENGTH": 19
},