Currently I'm playing around with exporting my data from NetLogo to a CSV file and then loading it into Tableau with the following code:
to write-result-to-file
  ; if nothing to write then stop
  if empty? Result-File [ stop ]
  ; open the file
  file-open Result-File
  ; write one comma-separated row into the file
  file-print (word Days-passed "," num-susceptible "," num-infected "," num-recovered)
  ; close the file
  file-close
end
Where I'm running into trouble is that when I load the data into Tableau, it isn't properly picking up the measures/dimensions. Is there a way in NetLogo to specify the headers of each of my rows/columns before they are exported to the CSV file?
This question was asked and answered over on NetLogo Users. James Steiner's answer is copied below, with a few typos in the code corrected. It's really quite elegant.
You can print the headers to your results-file during setup!
You might want to make a subroutine to handle all writing to the file, so you don't have to repeat code:
to write-csv [ #filename #items ]
  ;; #items is a list of the data (or headers!) to write.
  if is-list? #items and not empty? #items [
    file-open #filename
    ;; quote non-numeric items
    set #items map quote #items
    ;; if there is only one item, print it by itself;
    ;; otherwise join the items with commas
    ;; (NetLogo 6 syntax; in NetLogo 5 use [ (word ?1 "," ?2) ])
    ifelse length #items = 1
      [ file-print first #items ]
      [ file-print reduce [ [a b] -> (word a "," b) ] #items ]
    ;; close up
    file-close
  ]
end
to-report quote [ #thing ]
  ifelse is-number? #thing
    [ report #thing ]
    [ report (word "\"" #thing "\"") ]
end
You would call it with
write-csv "myfilename.csv" ["label1" "label2" "label3"]
to write the column headers in your setup routine, and then
write-csv "myfilename.csv" [10.0 "sometext" 20.3]
to write a row of data - in this case a number, a string, and another number.
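Wired into the original model, it might look like the sketch below (setup-result-file is a made-up name, and the asker's Result-File, Days-passed, and num-* globals are assumed; note that list is needed for the data row because literal lists may only contain constants, not variables):
to setup-result-file
  ;; write the header row once, during setup
  if not empty? Result-File [
    write-csv Result-File ["days-passed" "susceptible" "infected" "recovered"]
  ]
end

to write-result-to-file
  ;; write one row of model data per call
  if not empty? Result-File [
    write-csv Result-File (list Days-passed num-susceptible num-infected num-recovered)
  ]
end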
I have a JSON file containing a list of data where each element has one field called url.
[
  { ...,
    ...,
    "url": "us.test.com"
  },
  ...
]
In a different file I have a list of mappings that I need to replace the affected url fields with, formatted like this:
us.test.com test.com
hello.com/se hello.com
...
So the end result should be:
[
  { ...,
    ...,
    "url": "test.com"
  },
  ...
]
Is there a way to do this in Vim or do I need to do it programmatically?
Well, I'd do this programmatically in Vim ;-) As you'll see it's quite similar to Python and many other scripting languages.
Let's suppose we have the JSON file open. Then
:let foo = json_decode(join(getline(1, '$')))
will load the JSON into a VimScript variable. So :echo foo will show [{'url': 'us.test.com'}, {'url': 'hello.com/se'}].
Now let's switch to the "mapping" file. We're going to split all lines and build a Dictionary like this:
:let bar = {}
:for line in getline(1, '$') | let field = split(line) | let bar[field[0]] = field[1] | endfor
Now :echo bar shows {'hello.com/se': 'hello.com', 'us.test.com': 'test.com'} as expected.
To perform a substitution we do simply:
:for field in foo | let field.url = bar->get(field.url, field.url) | endfor
And now foo contains [{'url': 'test.com'}, {'url': 'hello.com'}] which is what we want. The remaining step is to write the new value into a buffer with
:put =json_encode(foo)
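Wrapped up as a single function, the whole thing might look like this (a sketch; ReplaceUrls is a made-up name, and it assumes the JSON is in the current buffer, every mapping line has exactly two fields, and a Vim new enough to have json_decode() and deletebufline()):
function! ReplaceUrls(mapfile) abort
  " decode the JSON in the current buffer
  let foo = json_decode(join(getline(1, '$')))
  " build the old-url -> new-url Dictionary from the mapping file
  let bar = {}
  for line in readfile(a:mapfile)
    let field = split(line)
    let bar[field[0]] = field[1]
  endfor
  " rewrite each url, falling back to the original when unmapped
  for field in foo
    let field.url = get(bar, field.url, field.url)
  endfor
  " replace the buffer contents with the updated JSON (a single line)
  call deletebufline('%', 1, '$')
  call setline(1, json_encode(foo))
endfunction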
You could…
turn those lines in your mappings file (/tmp/mappings for illustration purpose):
us.test.com test.com
hello.com/se hello.com
...
into:
g/"url"/s#us.test.com#test.com#g
g/"url"/s#hello.com/se#hello.com#g
...
with:
:%normal Ig/"url"/s#
:%s/ /#
:%normal A#g
The idea is to turn the file into a script that will perform all those substitutions on all lines matching "url".
If you are confident that those strings are only in "url" lines, you can just do:
:%normal I%s#
:%s/ /#
:%normal A#g
to obtain:
%s#us.test.com#test.com#g
%s#hello.com/se#hello.com#g
...
write the file:
:w
and source it from your JSON file:
:source /tmp/mappings
See :help :g, :help :s, :help :normal, :help :range, :help :source, and :help pattern-delimiter.
I need to write output to a JSON file that grows over time. What I have:
{
  "something": {
    "foo": "bar"
  }
}
And I use (spit "./my_file" my-text :append true).
With :append, new entries are added, and the file ends up looking like this:
{
  "something": {
    "foo": "bar"
  }
},
{
  "something": {
    "foo": "bar"
  }
},
}
My problem is that I need something like this:
[
  {
    "something": {
      "foo": "bar"
    }
  },
  {
    "something": {
      "foo": "bar"
    }
  }
]
But I really do not know how to include the new data within the [ ].
If you want to perform updates in-place -- meaning you care more about performance than safety -- this can be accomplished using java.io.RandomAccessFile:
(import '[java.io RandomAccessFile])

(defn append-to-json-list-in-file [file-name new-json-text]
  (let [raf (RandomAccessFile. file-name "rw")
        lock (.lock (.getChannel raf)) ;; avoid concurrent invocation across processes
        current-length (.length raf)]
    (if (= current-length 0)
      (do
        (.writeBytes raf "[\n")           ;; on the first write, prepend a "["
        (.writeBytes raf new-json-text)   ;; ...before the data...
        (.writeBytes raf "\n]\n"))        ;; ...and a final "\n]\n"
      (do
        (.seek raf (- current-length 3))  ;; move to before the last "\n]\n"
        (.writeBytes raf ",\n")           ;; put a comma where that "\n" used to be
        (.writeBytes raf new-json-text)   ;; ...then the new data...
        (.writeBytes raf "\n]\n")))       ;; ...then a new "\n]\n"
    (.close lock)
    (.close raf)))
As an example of usage: if no out.txt exists yet, then the result of the following three calls:
(append-to-json-list-in-file "out.txt" "{\"hello\": \"birds\"}")
(append-to-json-list-in-file "out.txt" "{\"hello\": \"trees\"}")
(append-to-json-list-in-file "out.txt" "{\"goodbye\": \"world\"}")
...will be a file containing:
[
{"hello": "birds"},
{"hello": "trees"},
{"goodbye": "world"}
]
Note that the locking prevents multiple processes from calling this code at once with the same output file. It doesn't provide safety from multiple threads in the same process doing concurrent invocations -- if you want that, I'd suggest using an Agent or other inherently-single-threaded construct.
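A minimal sketch of that idea, funneling all writes through a single Agent so only one call touches the file at a time (the names here are made up):
(def json-file-writer (agent nil))

(defn safe-append [file-name new-json-text]
  ;; send-off queues actions on the agent, which runs them one at a time
  (send-off json-file-writer
            (fn [_] (append-to-json-list-in-file file-name new-json-text))))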
There's also some danger that this could corrupt a file that has been edited by other software -- if a file ends with "\n]\n\n\n" instead of "\n]\n", for example, then seeking to three bytes before the current length would put us in the wrong place, and we'd generate malformed output.
If instead you care more about ensuring that output is complete and not corrupt, the relevant techniques are not JSON-specific (and call for rewriting the entire output file, rather than incrementally updating it); see Atomic file replacement in Clojure.
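For completeness, here is a sketch of the rewrite-everything approach using cheshire (an assumption; any JSON library with parse and generate functions would do). It is simpler and safer, but re-reads and re-writes the whole file on every append:
(require '[cheshire.core :as json])

(defn append-by-rewriting [file-name new-entry]
  ;; new-entry is Clojure data, not a pre-encoded JSON string
  ;; read the existing array (or start fresh), append, and write it all back
  (let [existing (try (json/parse-string (slurp file-name))
                      (catch java.io.FileNotFoundException _ []))]
    (spit file-name (json/generate-string (conj (vec existing) new-entry)))))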
Complete Julia newbie here.
I'd like to do some processing on a CSV. Something along the lines of:
using CSV
in_file = CSV.Source("/dir/in.csv")
out_file = CSV.Sink("/dir/out.csv")
for line in CSV.eachline(in_file)
    replace!(line, "None", "")
    CSV.writeline(out_file, line)
end
This is in pseudocode, those aren't existing functions.
Idiomatically, should I iterate on 1:CSV.countlines(in_file)? Do a while and check something?
If all you want to do is replace a string in the line, you do not need any CSV parsing utilities. All you do is read the file line by line, replace, and write. So:
infile = "/path/to/input.csv"
outfile = "/path/to/output.csv"

out = open(outfile, "w+")
for line in readlines(infile)
    newline = replace(line, "a", "b")
    write(out, newline)
end
close(out)
This will replicate the pseudocode you have in your question.
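Note that this targets Julia 0.5. On Julia 1.x, replace takes a Pair and readlines strips newlines, so the same idea, using the question's "None" replacement, becomes (a sketch):
infile = "/path/to/input.csv"
outfile = "/path/to/output.csv"

open(outfile, "w") do out
    for line in readlines(infile)
        # replace(s, old => new) is the Julia 1.x spelling
        println(out, replace(line, "None" => ""))
    end
end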
If you need to parse and read the CSV field by field, use the readcsv function in Base.
data = readcsv(infile)
typeof(data)  # Array{Any,2}
This will return the data in the file as a two-dimensional array. You can process this data any way you want, and write it back using the writecsv function.
for i in 1:size(data, 1)  # iterate over rows
    data[i, 1] = "This is " * data[i, 1]  # add text to the first column
end
writecsv(outfile, data)
Documentation for these functions:
http://docs.julialang.org/en/release-0.5/stdlib/io-network/?highlight=readcsv#Base.readcsv
http://docs.julialang.org/en/release-0.5/stdlib/io-network/?highlight=readcsv#Base.writecsv
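readcsv and writecsv were removed in Julia 1.0; the equivalent functionality now lives in the DelimitedFiles standard library. A sketch of the same row-wise edit on Julia 1.x:
using DelimitedFiles

data = readdlm(infile, ',', Any)   # 2-dimensional Array, like readcsv
for i in 1:size(data, 1)
    data[i, 1] = "This is " * string(data[i, 1])
end
writedlm(outfile, data, ',')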
I want turtles to read and adopt data from a CSV file. I have written the following code. The problem is that even though the data gets loaded, I'm unable to make the individual turtles take on each of the income values. Any assistance would be appreciated.
extensions [csv]
breed [households household]
households-own [income]
globals [income-data]

to setup
  load-income-data
  setup-households
end

to load-income-data
  set income-data []
  file-open "income.csv"
  while [ not file-at-end? ] [
    set income-data sentence income-data (file-read-line)
  ]
  user-message "income data loading complete!"
  file-close
end

to setup-households
  create-households 700
  ask one-of households [
    setxy random-xcor random-ycor
    set income income-data
  ]
end
Have a look at the File Input Example in the NetLogo Models Library (Code Examples). You need to use foreach to loop through the imported values / agents.
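A minimal sketch of that idea in NetLogo 6 syntax, creating one household per imported value (it assumes income.csv holds one numeric income per line; read-from-string converts the strings that file-read-line returns):
to setup-households
  create-households length income-data [
    setxy random-xcor random-ycor
  ]
  ;; pair each household with one value from the imported list
  (foreach sort households income-data [ [h v] ->
    ask h [ set income read-from-string v ]
  ])
end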
I have a script that scrapes HTML article pages of a webshop. I'm testing with a set of 22 pages of which 5 article pages have a product description and the others don't.
This code puts the right info on screen:
if doc.at_css('.product_description')
  doc.css('div > .product_description > p').each do |description|
    puts description
  end
else
  puts "no description"
end
But now I'm stuck on how to correctly output the found product descriptions to an array, from which I'm writing them to a CSV file.
I've tried several options, but none of them has worked so far.
If I replace puts description with @description << description.content, then all the descriptions of the articles end up in the upper lines of the CSV, although they do not belong to the articles in those lines.
When I also replace "no description" with @description = "no description", the first 14 lines in my CSV receive one letter of "no description" each. Looks funny, but it is not exactly what I need.
If more code is needed, just shout!
This is the CSV code I use in the script:
CSV.open("artinfo.csv", "wb") do |row|
row << ["category", "sub-category", "sub-sub-category", "price", "serial number", "title", "description"]
(0..#prices.length - 1).each do |index|
row << [
#categories[index],
#subcategories[index],
#subsubcategories[index],
#prices[index],
#serial_numbers[index],
#title[index],
#description[index]]
end
end
It sounds like your data isn't lined up properly. If it were, you should be able to do:
CSV.open("artinfo.csv", "w") do |csv|
csv << ["category", "sub-category", "sub-sub-category", "price", "serial number", "title", "description"]
[#categories, #subcategories, #subsubcategories, #prices, #serial_numbers, #title, #description].transpose.each do |row|
csv << row
end
end
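The underlying fix is to push exactly one value per article while scraping, using a placeholder when the description is missing, so all the arrays stay the same length. A sketch, reusing doc and @description from the question (it joins multi-paragraph descriptions into one cell):
# inside the per-article scraping loop
@description << if doc.at_css('.product_description')
                  doc.css('div > .product_description > p').map(&:text).join(' ')
                else
                  'no description'
                end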