NetLogo gis:apply-coverage: NaN error

I am trying to apply a polygon feature attribute (a zone code) from the shapefile to the patches. It should be easy enough with gis:apply-coverage, but the value prints out as NaN for all the patches (it should be 0, 1, 2, 3, ..., i.e., the value of the Zone_Code attribute).
I have already tried changing the minimum threshold value to 0.000001, and the zones are fairly large, so I don't think that is the problem. The rest of the shapefiles have worked with no problems, although I haven't used apply-coverage with them. I'm using NetLogo version 5.3.
code:
gis:set-coverage-minimum-threshold 0.000001
gis:apply-coverage zones-dataset "ZONE_CODE" landuse_type
ask patches [
  print landuse_type
]

It does seem like that should work if your shapefile's polygons cover some of your patches. Have you checked that the loaded zone shapefile has the correct projection, envelope, etc.? If the shapefile is offset from the NetLogo world, it may not overlay any patches. You can quickly confirm that your shapefile is at the very least intersecting your patches:
ask patches gis:intersecting zones-dataset [
  set pcolor blue
]
If that doesn't work, maybe your world envelope and the envelope of "zones-dataset" do not overlap. Otherwise, your code looks fine and worked for me with a made-up polygon shapefile.
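If the envelopes turn out to be the problem, one minimal sketch of a fix (the file name "zones.shp" is a placeholder) is to set the world envelope from the dataset itself, so the polygons are guaranteed to overlay the patches:

extensions [ gis ]
globals [ zones-dataset ]

to setup
  clear-all
  ; placeholder path: point this at your actual shapefile
  set zones-dataset gis:load-dataset "zones.shp"
  ; match the NetLogo world to the data's envelope so the polygons overlay the patches
  gis:set-world-envelope gis:envelope-of zones-dataset
end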

GeoServer repeated rendering (EPSG:4326)

I tried to convert a .dwg to .shp with ArcGIS for AutoCAD and QGIS.
I defined the coordinate system as EPSG:4326, like this: [screenshot]
It displays fine in QGIS: [screenshot]
But with GeoServer, the rendering repeats: [screenshot]
The coordinate system I set: [screenshot]
But when I set the SRS to EPSG:3857, it works fine!
What happened? Can you help me?
The data is not in EPSG:4326.
With this projection (4326), the bounds are ±180 and ±90 degrees. Your first screenshot and the GeoServer one show coordinates with values around 73000:
--> the data source has the wrong coordinate system declared
--> QGIS manages to display it anyway
--> GeoServer fails at this
You need to fix the coordinate system at the source.
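For example, once you have identified the true CRS of the data (EPSG:32644 below is only a stand-in; substitute the real one), you could assign the correct definition, or reproject, with GDAL's ogr2ogr:

# assign the correct CRS definition without changing the coordinates
ogr2ogr -a_srs EPSG:32644 fixed.shp source.shp
# or actually reproject the coordinates into EPSG:4326
ogr2ogr -s_srs EPSG:32644 -t_srs EPSG:4326 reprojected.shp source.shp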

How to get Autodesk Viewer LayerManager to RestoreState properly

I've encountered a bug in the Autodesk Viewer LayerManager extension that breaks the restoreState functionality. I am saving the state of a multi-layer DWG file using getState and re-applying that state using restoreState. When I restore the state, most or all of the layers are hidden, even if they weren't when I saved it.
It looks like this is an issue with how the state is being saved and interpreted. I dug into the state JSON and found the list of visible layers (state.objectSet[0].isolated) in this form:
["0","1","2","3","4","5"]
After some experimenting I found out that the LayerManager expects either the integer indices of the layers or the string names of the layers, something like:
[0,1,2,3,4,5]
or
["layer0","layer1","layer2","layer3","layer4","layer5"]
(assuming those are the names of each layer)
So the current implementation breaks because it looks for layers named "0", "1", "2", etc., no matter what the actual layer names are.
I am wondering if there is a way to fix or work around this. A temporary solution is to parse the state JSON and cast the layer numbers to integers (see the sketch below), but that is a bit of a hack.
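For reference, a minimal sketch of that workaround (assuming, as above, that the stringified indices live in state.objectSet[0].isolated):

// workaround sketch: cast the stringified layer indices back to integers
// before handing the saved state to restoreState
var state = viewer.getState();
// ...persist `state`, then reload it later...
if (state.objectSet && state.objectSet[0] && Array.isArray(state.objectSet[0].isolated)) {
    state.objectSet[0].isolated = state.objectSet[0].isolated.map(Number);
}
viewer.restoreState(state);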
This is a known issue and is currently being looked into by our engineering team. Stay tuned to our Forge Blog and watch the release notes to keep tabs on a fix.
In the meantime, as a quick workaround, you can programmatically reveal all the layers once all graphics are loaded:
viewer.addEventListener(Autodesk.Viewing.GEOMETRY_LOADED_EVENT, () => viewer.showAll());

Octave contourf() Not Coloring in the Line

I'm having trouble filling in my curves using contourf() on Octave. I'm running Octave 3.6.4 on Mac OS X 10.8.5.
When I use contour(x,fp,data3) I get the following, which is correct:
[Contour plot of data, not filled]
However, when I try contourf(x,fp,data3) in order to fill in the gaps, I get this monstrosity:
[Contourf plot of data, filled, but not correct]
What can I do to fix this? I've read the contourf() documentation and can't see anything there that I'm missing. Any advice would be helpful.
Thanks!
P.S. Here's a link to a smaller version of the data file: https://www.dropbox.com/s/lmvdzi7l42tasr8/Ch942.csv?dl=0. The whole file is huge, so this represents the first few lines but still shows the problem when plotted in Octave.
Unfortunately I don't have enough reputation points to post more than two links, so I've deleted the contour() plot that looks right, but isn't filled in. Sorry.
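For anyone trying to reproduce this, a minimal sketch using the sample file above (my x and fp here are plain index vectors; the asker's real ones may differ):

% load the sample data and compare contour vs. contourf
data3 = csvread('Ch942.csv');
x  = 1:columns(data3);   % placeholder axis vectors
fp = 1:rows(data3);
figure; contour(x, fp, data3);   % outlines only; looks correct
figure; contourf(x, fp, data3);  % filled version that shows the problem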

Faster-RCNN Evaluation

I am training a Faster R-CNN (VGG-16 architecture) on the INRIA Person dataset. It was trained for 180,000 training steps. But when I evaluate the network, it gives varying results for the same image.
Following are the images:
I am not sure why it gives different results for the same set of weights. The network is implemented in Caffe.
Any insight into the problem is much appreciated.
The following image shows the different network losses:
Recently, I also prepared my own dataset for training and got results similar to yours.
The following are my experiences, which I share with you:
Check the input format, including the images and your bounding-box CSV or XML file (usually placed in the Annotations folder): are all the bounding boxes (x1, y1, x2, y2) correct?
Then check the roidb/imdb loading Python script (located at FasterRCNN/lib/datasets/pascal_roi.py; yours is probably inria.py), and make sure _load_xxx_annotation() correctly loads all bounding boxes by printing bounding_box and filename. Importantly, if your script was copied and modified from pascal_roi.py or any other prototype script, check whether it saves all the ROI and image info into a cache file; if it does, you need to delete that cache file whenever you change any configuration files, then retry (see the sketch below for a quick way to inspect the cache).
Finally, make sure all bounding boxes are generated correctly while the network is training (e.g., print the minibatch variable to show the filename and the corresponding x1, y1, x2, y2 in FasterRCNN/lib/roi_data_layer/layer.py). If the ROI generator works correctly, the generated bounding boxes will not differ much from your manually selected ones.
Some similar issues may cause this problem as well.
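As a quick way to do the cache check mentioned above, here is a small sketch; the cache path and entry keys are assumptions based on the stock py-faster-rcnn layout, so adapt them to your inria.py:

import os
import pickle

# assumed location of the roidb cache written by the dataset class
cache_file = 'data/cache/inria_train_gt_roidb.pkl'

with open(cache_file, 'rb') as f:
    # stock py-faster-rcnn writes this with Python 2's cPickle;
    # under Python 3 you may need pickle.load(f, encoding='latin1')
    roidb = pickle.load(f)

# print a few entries so you can compare the boxes with your annotations
for entry in roidb[:5]:
    print(entry.get('image'), entry.get('boxes'))

# if the boxes look stale after a config/annotation change, delete the cache:
# os.remove(cache_file)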

How to extract data from an embedded Raphael dataset to CSV?

I am attempting to extract the data from this Google Politics Insights webpage, from January 2012 to the present, for Mitt Romney and Barack Obama, for the following datasets:
Search Trends - based on volume
Google News Mentions - mentions in articles and blog posts
YouTube Video Views - views from candidate channels
For a visual example, here's what I mean: [screenshot]
Using Firebug I was able to figure out that the data is stored in a format readable by Raphael 2.1.0; I looked at the dataset, and nothing strikes me as a simple way to convert the data to CSV.
How do I convert the data, per chart and per presidential candidate, into a CSV with a table each for "Search Trends", "Google News Mentions", and "YouTube Video Views", broken down by the smallest increment of time, with the measured results scaled to a value of 0.0 to 1.0? (Note: the reason for 0.0 to 1.0 is that the graphs do not appear to give volume information, so the volume is relative to the height of the graph itself.)
Alternatively, if there's another source for all three datasets in CSV, that would work too.
The first thing to do is to find out where the data comes from, so I looked at the network traffic in my developer console and found it very soon: the data is stored as JSON here.
Now you've got plenty of data for each candidate. I don't know exactly how these numbers relate to one another, but they are definitely used to calculate the graph. I found the relevant spot in main.js: on line 392 the plotted value is calculated with this expression:
Math.log(dataPoints[i][j] * 100.0) / Math.log(logScaleBase);
My guess is: invert the logarithm (i.e., a bit of exponential calculation) and you should get the right results.
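For instance, a small Python sketch along those lines; the URL, the JSON layout, and the value of logScaleBase are all assumptions, so take the real URL from your network tab and the real base from main.js:

import csv
import json
import math
import urllib.request

LOG_SCALE_BASE = 10.0  # assumption: the value of logScaleBase in main.js

# placeholder URL: use the JSON endpoint found in the developer console
with urllib.request.urlopen("https://example.com/politics-data.json") as resp:
    data_points = json.load(resp)["dataPoints"]  # assumed: a list of numeric series

with open("series.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for series in data_points:
        # same transform as main.js line 392, with a guard against zeros
        scaled = [math.log(max(v, 1e-6) * 100.0) / math.log(LOG_SCALE_BASE)
                  for v in series]
        top = max(scaled) or 1.0  # avoid dividing by zero
        # normalize so the tallest point in each series is 1.0
        writer.writerow([round(s / top, 4) for s in scaled])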