I want to run my NetLogo model to see how changing the landscape scenario ('baseline' vs 'future') affects agents' annual travel distance. Agents are initialized on a random patch within their residential postcode boundary.
To be able to compare changes in travel, agents should be initialized from the same patch in both scenarios. I have been trying to use random-seed but can't get it to work. I want to run the model for 100 different initial patches, but keep the same patch across the baseline and future scenario of a given run (ideally that is two runs, one with the baseline scenario and one with the future scenario).
When I use random-seed in setup, it initializes agents from different patches for each scenario.
I tried setting a global variable random-seed-turtles and setting up two runs in BehaviorSpace with two different seeds:
[ "random-seed-turtles" 1 2 ]
[ "landscape-scenario" "baseline" "future"]
It creates turtles from the same patch for each baseline run, but the patch differs for the future scenario.
Is there a way to code this so that turtles get a different initial patch for each of the 100 runs, but the same origin within an individual run?
e.g.
run1
baseline my_home = patch 113 224
future my_home = patch 113 224
Also, does the place where you insert the random-seed command matter?
The patch value (landscape availability) changes every tick, reading from a raster prepared for that timestep. landscape-scenario is a chooser on the interface with the values 'baseline' and 'future', each reading a different set of rasters. Does this interfere with random-seed?
NOTE: The answer below assumes that you want the baseline and the future scenarios to be run as part of a single model iteration.
Also, I propose two (very similar) approaches. Which one you'll prefer will depend on your specific situation and needs, which we don't know since you didn't share the structure of your model.
You have to create a turtles-own variable where each turtle will directly store its starting patch.
Then, when you close your baseline scenario and prepare your future scenario, you will have to manually clear all the variables that you want to reset, except the variable where each turtle stored its starting patch (so, for example, you shouldn't use clear-all or clear-turtles at that point, because those commands would also clear the turtles-own variable that you want to keep).
To show the approach with an example, see the code below:
globals [
  ; Put here your global variables.
]

patches-own [
  ; Put here your patches' variables.
]

turtles-own [
  my-start
  my-value ; I only included this to show that you have to manually clear the turtles' variables EXCEPT for 'my-start'.
]

to setup
  clear-all
  reset-ticks
  ; Do here what you need to do to prepare the baseline scenario.
  create-turtles 20 [
    setxy random-xcor random-ycor
    set my-value random 10
    set my-start patch-here
  ]
end

to go
  run-model
  clear-baseline-scenario
  prepare-future-scenario
  run-model
end

to run-model
  ; Put here all the things you need to have to run your model,
  ; including a stop condition (and ticks, if you want them).
end

to clear-baseline-scenario
  clear-globals
  clear-patches
  reset-ticks
  ; Now, you also have to manually clear all your turtles' variables EXCEPT
  ; for 'my-start':
  ask turtles [
    set my-value 0
  ]
end

to prepare-future-scenario
  ; Do here what you need to do to prepare the future scenario,
  ; and also tell your agents to go to their starting patch:
  ask turtles [
    move-to my-start
  ]
end
There is another approach which basically is just the same solution but applied to patches instead of turtles.
In this case, you will have a patches-own variable that signals whether a patch is a starting patch (for example by taking a 1/0 value, or TRUE/FALSE if you prefer).
Then, when going from the baseline scenario to the future scenario, you will clear all the things you need to clear except that patches-own variable (i.e. without using clear-all or clear-patches at that point, because those commands would also clear the patches-own variable that you want to keep).
This approach is doable if you are not interested in having exactly the same turtle starting on that same patch, but you are happy to have any turtle starting on any of the starting patches.
So it will be something like:
globals [
  ; Put here your global variables.
]

patches-own [
  slope ; I only included this to show that you have to manually clear the patches' variables EXCEPT for 'starting-patch?'.
  starting-patch?
]

turtles-own [
  ; Put here your turtles' variables.
]

to setup
  clear-all
  reset-ticks
  ; Do here what you need to do to prepare the baseline scenario.
  create-turtles 20 [
    setxy random-xcor random-ycor
    set starting-patch? true
  ]
  ask patches [
    set slope random 5
    if (starting-patch? = 0) [
      set starting-patch? false
    ]
  ]
end

to go
  run-model
  clear-baseline-scenario
  prepare-future-scenario
  run-model
end

to run-model
  ; Put here all the things you need to have to run your model,
  ; including a stop condition (and ticks, if you want them).
end

to clear-baseline-scenario
  clear-globals
  clear-turtles
  reset-ticks
  ; Now, you also have to manually clear all your patches' variables that you
  ; want to clear EXCEPT for 'starting-patch?':
  ask patches [
    set slope 0
  ]
end

to prepare-future-scenario
  ; Do here what you need to do to prepare the future scenario,
  ; and also create your agents again, each on an empty starting patch:
  create-turtles 20 [
    move-to one-of patches with [starting-patch? and (not any? turtles-here)]
  ]
end
Related
I'm relatively new to using Octave. I'm working on a project that requires me to collect the RGB values of all the pixels in a particular image and compare them to a list of other values. This is a time-consuming process that takes about half a minute to run. As I make edits to my code and test it, it is annoying to wait 30 seconds to see whether my updates work. Is there a way to run the code once to load the data I need, then set up an artificial starting point, so that when I rerun the code (or type something into the command window) only the desired section (the part after the time-consuming step) runs, leaving the already-loaded data intact?
You may declare the variable you want to keep as a global variable, and then use clear -v instead of clear all.
clear all is a kind of atomic bomb, loved by many users; I have never understood why. Fortunately, it does not close the session: that is still a job for quit() ;-)
To illustrate the proposed solution:
>> a = rand(1,3)
a =
0.776777 0.042049 0.221082
>> global a
>> clear -v
>> a
error: 'a' undefined near line 1, column 1
>> global a
>> a
a =
0.776777 0.042049 0.221082
Octave works in an interactive session. If you run your script in a new Octave session each time, you will have to re-compute all your values each time. But you can also start Octave and then run your script at the interactive terminal. At the end of the script, the workspace will contain all the variables your script used. You can type individual statements at the interactive terminal prompt, which use and modify these variables, just like running a script one line at a time.
You can also set breakpoints. You can set a breakpoint at any point in your script, then run your script. The script will run until the breakpoint, then the interactive terminal will become active and you can work with the variables as they are at that point.
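For example (a minimal sketch; my_script.m and line 25 are placeholders for your own file and for the first line after the time-consuming part):
dbstop("my_script", 25)   % pause execution at line 25 of my_script.m
my_script                 % runs the slow part, then stops at the breakpoint
% inspect or change variables at the debug prompt, then
dbcont                    % resume (or dbquit to abort)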
If you don't like the interactive stuff, you can also write a script this way:
clear
if 1
% Section 1
% ... do some computations here
save my_data
else
load my_data
end
% Section 2
% ... do some more computations here
When you run the script, Section 1 will be run, and the results saved to file. Now change the 1 to 0, and then run the script again. This time, Section 1 will be skipped, and the previously saved variables will be loaded.
I have a polygon feature data set of emergency service zones for the Tucson Metropolitan Area and want to copy the polygon attributes to the patches. The code is only creating/coloring a few patches (picture shown below).
[screenshot: current output, with only a few scattered patches colored]
I want the colored patches to cover the entire emergency service zone (second picture shown below).
[screenshot: desired output, with colored patches covering each entire emergency service zone]
I tried using the vertices of the polygon and was not successful. I tried using the center of the polygon and got a different output than what I want.
My code is:
to setup-gis ;; copy gis features to patches
  clear-patches
  show "Loading patches..."
  gis:apply-coverage ESZs-dataset "STATION_NO" emergency-zone
  foreach gis:feature-list-of ESZs-dataset [ feature ->
    ask patches [
      let centroid1 gis:location-of gis:centroid-of feature
      ask patch item 0 centroid1 item 1 centroid1 [
        set emergency-zone gis:property-value feature "STATION_NO"
        set pcolor yellow
        ;show emergency-zone
      ]
    ]
  ]
  show "Done"
end
foreach gis:feature-list-of ESZs-dataset [ polygon ->   ; for each polygon
  ask patches gis:intersecting polygon [
    set emergency-zone (gis:property-value polygon "STATION_NO")
    set pcolor yellow
  ]
]
If I understood correctly, you want to give attribute values to all the patches intersecting the corresponding polygon, so you should ask the patches that intersect each polygon, using ask patches gis:intersecting.
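For instance, folded back into your setup-gis procedure (keeping your ESZs-dataset and the emergency-zone patch variable from the question), it could look roughly like the sketch below; this is untested against your data, not a guaranteed drop-in replacement:
to setup-gis ;; copy gis features to patches
  clear-patches
  show "Loading patches..."
  foreach gis:feature-list-of ESZs-dataset [ polygon ->
    ; every patch overlapping this polygon gets the polygon's attribute
    ask patches gis:intersecting polygon [
      set emergency-zone gis:property-value polygon "STATION_NO"
      set pcolor yellow
    ]
  ]
  show "Done"
end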
I am developing a CPU in VHDL. I am using ModelSim for simulation and testing. In the simulation script I load a program from a binary file into the instruction memory. Now I want to automatically check if the program fits into memory and abort the simulation if it doesn't. Since the memory is basically an array of std_logic_vectors, all I would have to do is read the corresponding signal attribute for use in a comparison. My problem is: how do I access a VHDL signal attribute in Tcl inside ModelSim?
The closest I have gotten so far is to use the describe command:
describe sim/:tb:uut:imem:mem_array
which prints something like
# Array(0 to 255) [length 256] of
# Array(31 downto 0) [length 32] of
# VHDL standard subtype STD_LOGIC
Now, of course I could parse the length out of there via string operations. But that would not be a very generic solution. Ideally I would like to have something like this:
set mem_size [get_attribute sim/:tb:uut:imem:mem_array'length]
I have searched Stack Overflow, googled up and down, and searched through the commands in the command reference manual, but I could not find a solution. I am confident there must be a rather easy solution and I just lack the proper wording to successfully search for it. To me, this doesn't look overly specific and I am sure this could come in handy on many occasions when automating design testing. I am using version 10.6.
I would be very grateful if an experienced ModelSim user could help me out.
Disclaimer: I'm not a Tcl expert, so there's probably a more optimized solution out there.
There's a command called examine that you can use to get the value of objects.
I created a similar testbench here with a 256 x 32 array; the results were:
VSIM> examine -radix hex sim/:tb:uut:imem:mem_array
# {32'hXXXXXXXX} {32'hXXXXXXXX} {32'hXXXXXXXX} {32'hXXXXXXXX} {32'hXXXXXXXX} ...
This is the value of sim/:tb:uut:imem:mem_array at the last simulation step (i.e., now).
The command returns a list of values for each match (you can use wildcards), so in our case it's a list with a single item. You can get the depth by counting the number of elements it returns:
VSIM> llength [lindex [examine sim/:tb:uut:imem:mem_array] 0]
# 256
You can get the bit width of the first element by using examine -showbase -radix hex, which will return 32'hFFFFFFFF, where the leading 32 is the part you want to parse out. Wrapping that into a function could look like:
proc get_bit_width { signal } {
    set first_element [lindex [lindex [examine -radix hex -showbase $signal] 0] 0]
    # Replace everything after 'h, including 'h itself, to return only the width
    return [regsub "'h.*" $first_element ""]
}
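Called on the array above, it should then report the element width, e.g.:
VSIM> get_bit_width sim/:tb:uut:imem:mem_array
# 32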
Hope this gives some pointers!
So, I actually found an easy solution. Further study of the command reference manual brought to light that only a few special signal attributes can be accessed and length is not one of them, but I noticed that ModelSim automatically adds a size object to its object database for the memory array. So I can simply use
set ms [examine sim/:tb:uut:imem:mem_array_size]
to obtain the size and then check if the program fits.
This is just perfect for me, elegant and easy.
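For completeness, the fit check in the simulation script then only needs a couple more lines of Tcl. This is a sketch: program_words (the number of instruction words read from the binary file) is assumed to already exist in your script.
set mem_size [examine sim/:tb:uut:imem:mem_array_size]
if {$program_words > $mem_size} {
    echo "Program ($program_words words) does not fit into instruction memory ($mem_size words), aborting."
    quit -sim
}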
I was wondering how you can group patches-own variables in order to loop over them. I am using NetLogo 5.3.1.
Specifically I am doing this:
patches-own [some-variable other-variables]

to setup
  gis:apply-coverage dataset-1 "some-variable" some-variable
  ; the line above repeated for 1000 other variables
end
and I would like to do it like this:
globals [group-variables]
patches-own [some-variable other-variables]

to setup
  set group-variables (list some-variable other-variables)
  foreach group-variables [
    gis:apply-coverage dataset-1 "?" ?
  ]
end
But this seems to be impossible: NetLogo complains that the patch variables are turtle/patch-only, so they cannot be referenced in an observer procedure like setup. I also got a message that gis:apply-coverage was expecting one thing but got something else instead.
What other way can I use to group these variables somehow, without slowing the program down?
I have looked at lists, arrays and tables, but the problem is that gis:apply-coverage demands a patch variable, which rules out arrays and tables. Lists would need to be defined in a patch context, but gis:apply-coverage needs to be called in an observer context. read-from-string does not support reading a variable, and building a string of everything and then calling run on it does not improve execution speed.
I think the main problem is that you use the ? variable as a string ("?"). This cannot work, because it does not refer to the current foreach loop variable.
Maybe there are better solutions, but I got it to work by using the run primitive, which allows you to create a command from a combination of strings and variables.
Here is a short example, using the countries dataset from the GIS code examples:
extensions [gis]

globals [group-vars shp]
patches-own [CNTRY_NAME POP_CNTRY]

to load-multiple-vars-from-shp
  ca
  ; Load Data
  set shp gis:load-dataset "C:/Program Files/NetLogo 5.3.1/app/models/Code Examples/GIS/data/countries.shp"
  ; Print properties
  print gis:property-names shp
  ; Select two properties to write to patch-variable
  set group-vars (list "CNTRY_NAME" "POP_CNTRY")
  ; Loop over group-vars
  foreach group-vars
  [
    ; Apply coverage of current variable
    run (word "gis:apply-coverage shp \"" ? "\"" ?)
  ]
  ; Visualize patch variables to check if everything is working
  ask patches
  [
    set plabel substring (word CNTRY_NAME) 0 1
    set pcolor POP_CNTRY
  ]
end
It seems that it is not possible for me to trigger an event in OpenNMS using a threshold...
First the facts (with as much detail as I can give):
I want to monitor an HTML file, or rather its content.
If a value is not what I expect, OpenNMS should notify me.
My HTML file:
Document Count: 5
In /var/lib/opennms/rrd/snmp/NODE there are two files named "documentCount" (.jrb & .meta)
--> because of the http-datacollection-config.xml
My log file contains:
INFO [LegacyScheduler-Thread-2-of-50] RrdUtils: updateRRD: updating RRD file /var/lib/opennms/rrd/snmp/21/documentCount.jrb with values '1385031023:5'"
so the "5" is collected correctly.
now i created a threshold for this case:
<threshold type="high" ds-type="node"
value="4.0" rearm="2.0" trigger="1" triggeredUEI="uei.opennms.org/threshold/highThresholdExceeded"
filterOperator="or" ds-name="documentCount"
/>
The threshold is also enabled in my collectd-configuration.xml.
In my opinion the threshold of 4 is exceeded, because the value is 5, so the highThresholdExceeded event should be fired. BUT IT DOESN'T.
So I'm here to ask if someone has an idea.
Regards, dawn
Check collectd.log with the following:
tail -f collectd.log | grep -i thresholding
A while back, threshold checking was moved so that it is evaluated while the data is being retrieved, as opposed to being a post-processing step on the RRD files.
Even with the log level at info, you should find some clues as to why the threshold rule is not matching any data.