How to store centrality values in a file sequentially? - csv

I'm using the network extension "nw". I calculated a centrality metric, betweenness, and I'm trying to print the values of the nodes sequentially to a CSV file.
I want the result to consist of two columns: the first is the turtle id and the second is that turtle's betweenness, one row per turtle, and so on.
Here is my code to save the values:
to save
  file-open "turtles.csv"
  let Dk1 [ nw:betweenness-centrality ] of turtle-set sort turtles
  if is-number? Dk1 [ set Dk1 precision Dk1 2 ]
  file-print (word "betweenness-centrality: " Dk1)
  file-close
end
The result of this code changes every time it is executed, and the values are different from what appears in the world.

Remember that agentsets in NetLogo are unordered while lists are ordered. The sort primitive returns a list. In this case, sort turtles returns a list of turtles sorted by who number. However, if you then turn that list back into a turtle-set you'll lose the ordered properties of the list.
Instead of using the of primitive to get a list of betweenness-centrality values from an agentset, just iterate over the list returned by sort. For example:
foreach sort turtles [ a-turtle -> show [who] of a-turtle ]
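Applied to the question's goal of a two-column CSV, a sketch of the save procedure could look like this (assuming the nw extension is loaded and "turtles.csv" is the desired output file):

```netlogo
to save
  file-open "turtles.csv"
  file-print "who,betweenness"   ;; header row
  foreach sort turtles [ a-turtle ->
    file-print (word [who] of a-turtle "," precision [nw:betweenness-centrality] of a-turtle 2)
  ]
  file-close
end
```

Because foreach walks the sorted list one turtle at a time, the rows come out in who order on every run.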

Related

Is there a way to sort a list of ordered tuples of strings into a list of strings while maintaining the order of the strings in the tuples?

I have a list of tuples. Each tuple is ordered according to which string appears before another. So in the first tuple in the list, 'gi-' linearly precedes 'ba-'.
list = [('gi-', 'ba-'), ('be-', 'ke-'), ('be-', 'ba-'), ('gi-', 'ke-'), … ]
I am trying to make a function to give the ordered_list. Notice that I expect some strings to be unordered with respect to each other, such as 'ke-' and 'ba-'.
ordered_list = ['gi-', 'be-', {'ke-', 'ba-'}, … ]
I'm not sure what sorting method I would use for this task. This is an nltk project, similar to forming a Cinque-style order of adverbs.
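One way to read the task: the pairs define a partial order, and strings that never constrain each other should land in the same group. A minimal sketch with Python's standard-library graphlib, using only the four pairs shown (pairs and ordered are made-up names; the elided pairs would split the groups further):

```python
from graphlib import TopologicalSorter

pairs = [('gi-', 'ba-'), ('be-', 'ke-'), ('be-', 'ba-'), ('gi-', 'ke-')]

# TopologicalSorter expects {node: set of predecessors}
graph = {}
for before, after in pairs:
    graph.setdefault(after, set()).add(before)
    graph.setdefault(before, set())

ts = TopologicalSorter(graph)
ts.prepare()
ordered = []
while ts.is_active():
    ready = ts.get_ready()   # every node whose predecessors are all done
    ordered.append(set(ready))
    ts.done(*ready)

print(ordered)  # two layers: first {'gi-', 'be-'}, then {'ba-', 'ke-'}
```

Each call to get_ready() returns one "layer" of mutually unordered strings, which matches the set notation in the expected output.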

Netlogo: is it possible to import timestamp and turtle from the same CSV?

I'm new to NetLogo. I have been able to load and print a CSV file, but now that I want to generate the turtles from the CSV I have trouble finding a good solution.
My CSV file is structured as follows
I have run into some online examples that generate turtles from rows, but my number of observations is too large to pivot. I would like to generate a turtle set based on each column, which changes to the next value after a tick. Or is this not possible, and should I make another CSV with an initial turtle list without the datetime and value shifts? Advice is appreciated.
Yeah, that is definitely possible and not too complicated to do. You can start by using the number of columns to create as many turtles as you need, then using tick counts and turtle identifiers to choose and update the data you are reading from your csv. The following code should work as long as your time intervals are regular.
extensions [csv time]

globals [my-csv current-time]
turtles-own [value]

to setup
  clear-all
  set my-csv csv:from-file "my-csv.csv"
  set current-time time:anchor-to-ticks (time:create "2011/01/01 00:00") 15 "minutes"
  ;; one turtle per value column (column 0 holds the timestamps)
  create-turtles ((length item 0 my-csv) - 1)
  reset-ticks
end

to go
  tick
  update-turtle-values
end

to update-turtle-values
  ask turtles [
    ;; row number `ticks` of the file; `who + 1` skips the timestamp column
    set value item (who + 1) (item ticks my-csv)
  ]
end

How to automatically change the GIS data file on model runs?

General context: I have a model in NetLogo working with bees and resources in the landscape. I need to run the model with GIS data from one season of the year, and then I need to run the same model but with GIS data from another season of the year.
Objectively, I need to run the same model with two different initial conditions: GIS data 1 and GIS data 2.
Question: I would like to implement something that, at the end of every iteration with GIS data 1, automatically initializes the environment with the information from GIS data 2, running the model with this new data. Would you have any idea how to do this?
You can create a global variable which is used to check which run is currently being executed.
In some degree of pseudocode:
globals [
  control-variable
]

to setup
  clear things ; see note about this below in the answer
  ifelse (control-variable = 0)
    [ import GIS data 1 ]
    [ import GIS data 2 ]
end

to go
  run-the-model
  set control-variable control-variable + 1
  manage-runs
end

to manage-runs
  ifelse (control-variable = 1)
    [ setup
      go ]
    [ stop ]
end
Note that, at the beginning of setup, I didn't use clear-all as the standard practice would suggest. This is because, as from the NetLogo Dictionary, clear-all:
Combines the effects of clear-globals, clear-ticks, clear-turtles, clear-patches, clear-drawing, clear-all-plots, and clear-output.
... meaning that control-variable too would have been deleted at the beginning of the second setup.
Therefore, you should instead use all the <clear-something> that are relevant to your code AND manually clear all your globals apart from control-variable.
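A sketch of what that selective clearing could look like (which globals need resetting by hand depends on your model; my-other-global is a placeholder):

```netlogo
to clear-keeping-control
  clear-ticks
  clear-turtles
  clear-patches
  clear-drawing
  clear-all-plots
  clear-output
  ;; reset your own globals manually, leaving control-variable untouched
  set my-other-global 0
end
```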

Cypher LOAD CSV - how to create a linked list of nodes ordered by a property?

I'm new to Neo4j and looking for some guidance :-)
Basically I want to create the graph below from the CSV below. The NEXT relationship is created between Points based on the order of their sequence property. I would like to be able to ignore whether the sequence values are consecutive. Any ideas?
(s1:Shape)-[:POINTS]->(p1:Point)
(s1:Shape)-[:POINTS]->(p2:Point)
(s1:Shape)-[:POINTS]->(p3:Point)
(p1)-[:NEXT]->(p2)
(p2)-[:NEXT]->(p3)
and so on
shape_id,shape_pt_lat,shape_pt_lon,shape_pt_sequence,shape_dist_traveled
"1-700-y11-1.1.I","53.42646060879","-6.23930113514121","1","0"
"1-700-y11-1.1.I","53.4268571616632","-6.24059395687542","2","96.6074531286277"
"1-700-y11-1.1.I","53.4269700485041","-6.24093540883784","3","122.549696670773"
"1-700-y11-1.1.I","53.4270439028769","-6.24106779537932","4","134.591291249566"
"1-700-y11-1.1.I","53.4268623569266","-6.24155684094256","5","172.866609667575"
"1-700-y11-1.1.I","53.4268380666968","-6.2417384245122","6","185.235926544428"
"1-700-y11-1.1.I","53.4268874080753","-6.24203735638874","7","205.851454672516"
"1-700-y11-1.1.I","53.427394066848","-6.24287421729846","8","285.060040065768"
"1-700-y11-1.1.I","53.4275257974236","-6.24327509689195","9","315.473852717259"
"1-700-y11-1.2.O","53.277024711771","-6.20739084216546","1","0"
"1-700-y11-1.2.O","53.2777605784999","-6.20671521402849","2","93.4772699644143"
"1-700-y11-1.2.O","53.2780318605927","-6.2068238246152","3","124.525619356934"
"1-700-y11-1.2.O","53.2786209984572","-6.20894363498438","4","280.387737910482"
"1-700-y11-1.2.O","53.2791038678913","-6.21057305710353","5","401.635418300665"
"1-700-y11-1.2.O","53.2790975844245","-6.21075327761739","6","413.677012879457"
"1-700-y11-1.2.O","53.2792296384738","-6.21116766400758","7","444.981964564454"
"1-700-y11-1.2.O","53.2799500357098","-6.21065767664905","8","532.073870043666"
"1-700-y11-1.2.O","53.2800290799386","-6.2105343995296","9","544.115464622458"
"1-700-y11-1.2.O","53.2815594673093","-6.20949562301196","10","727.987702875002"
It is the 3rd part that I can't finish: creating the NEXT relationship!
//1. Create Shape
USING PERIODIC COMMIT 10000
LOAD CSV WITH HEADERS FROM
'file:///D:\\shapes.txt' AS csv
With distinct csv.shape_id as ids
Foreach (x in ids | merge (s:Shape {id: x} ));
//2. Create Point, and Shape to Point relationship
USING PERIODIC COMMIT 10000
LOAD CSV WITH HEADERS FROM
'file:///D:\\shapes.txt' AS csv
MATCH (s:Shape {id: csv.shape_id})
with s, csv
MERGE (s)-[:POINTS]->(p:Point {id: csv.shape_id,
  lat: csv.shape_pt_lat, lon: csv.shape_pt_lon,
  sequence: toInt(csv.shape_pt_sequence), dist_travelled: csv.shape_dist_traveled});
//3.Create Point to Point relationship
USING PERIODIC COMMIT 10000
LOAD CSV WITH HEADERS FROM
'file:///D:\\shapes.txt' AS csv
???
You'll want APOC Procedures installed for this one. It has both a means of batch processing, and a quick way to link all nodes in a collection together.
Since you already have all shapes and the points of each shape in the db, you don't need to do another LOAD CSV, just use the data you've got.
We'll use apoc.periodic.iterate() to batch process each shape, and apoc.nodes.link() to link all ordered points in the shape by relationships.
CALL apoc.periodic.iterate(
  "MATCH (s:Shape) RETURN s",
  "WITH {s} AS shape
   MATCH (shape)-[:POINTS]->(point:Point)
   WITH shape, point
   ORDER BY point.sequence ASC
   WITH shape, COLLECT(point) AS points
   CALL apoc.nodes.link(points, 'NEXT')",
  {batchSize:1000, parallel:true}) YIELD batches, total
RETURN batches, total
EDIT
Looks like there may be a bug when using procedure calls within apoc.periodic.iterate() where no mutating operations occur (I attempted this after including a SET operation in the second part of the query to set a property on some nodes; the property was not added).
Unsure if this is a general case of procedure calls being executed within procedure calls, or if this is specific to apoc.periodic.iterate(), or if this only occurs with both iterate() and link().
I'll file a bug if I can learn more about the cause. In the meantime, if you don't need batching, you can forgo apoc.periodic.iterate():
MATCH (shape:Shape)-[:POINTS]->(point:Point)
WITH shape, point
ORDER BY point.sequence ASC
WITH shape, COLLECT(point) AS points
CALL apoc.nodes.link(points, 'NEXT')

Accessing 2 columns in a 2D list in python

So I'm new to using 2D lists in python. Basically I have a huge excel file in csv format. I have stored all the cells into a list called matrix. However, I only need the information in columns 4 - 5. I tried using range for the 2D list, but it doesn't seem to work. The two columns contain customer IDs and a True statement, respectively. My main purpose is just to count how many times each customer ID appears and store it into another 2D array. I've only gotten this far:
with open('authlog_20140305-20140617.csv', 'r') as file:
    contents = csv.reader(file)
    matrix = list()
    for row in contents:
        matrix.append(row)
for item in matrix:  # what I want is to read only columns 4 - 5 in matrix
    for item2 in uniqueIDs:
        if item != item2:
            item2.append(item)
Some help would be greatly appreciated!
I don't know what uniqueIDs is, but I assume it is a list or set, right?
This code:
for line in matrix:
is going to iterate over each line of your matrix. For you to see the 4th and 5th columns, you would just need to use line[3] for the 4th and line[4] for the 5th (remember that lists in python are 0 indexed).
After that you can do what you need with that information.
I am going to take a leap of faith and assume that you need to count items in the 4th column iff the 5th column equals the string "true" (or some other simple if condition):
import csv
from collections import Counter

with open('authlog_20140305-20140617.csv', 'r') as f:
    contents = csv.reader(f)
    c = Counter(row[3] for row in contents if row[4] == 'true')
print(dict(c))
See the docs on collections.Counter and generator expressions.
sample data:
1,2,3,4,true
1,2,3,4,true
1,2,3,5,true
1,2,3,5,false
output: {'4': 2, '5': 1}