Bit of a challenge here that I've been grappling with for some time. I'll explain my full workflow so you can reproduce it if needed.
I'm creating virtual landscapes in Google SketchUp which I ultimately would like to use in NetLogo to examine how turtles interact with them.
My problem is that by the time I get the landscapes into NetLogo, the units don't seem to relate to the original 3D model.
Step 1: Create a simple hill on a 50 m by 50 m square in SketchUp using the Toposhaper extension.
Step 2: Export to a .dae file and import into MeshLab. Ensure the MeshLab model has the same dimensions as the SketchUp model by adjusting the units with the assistance of the measuring tool, then export from MeshLab as an .xyz file.
Step 3: Import the .xyz file into QGIS as points by adding a new layer from a delimited text file, selecting field_1 and field_2 as the X and Y fields.
Step 4: Create a raster from the points using Raster > Interpolation > Interpolation. Add field_3 as the interpolation attribute, set the number of rows and columns to 50 by 50 (to correspond to the 50 m x 50 m 3D model), and adjust cell size X and Y to match, to ensure NetLogo will read the resulting .asc file.
Step 5: Finally, I set up a model in NetLogo to receive the raster. First, in Model Settings I set the min and max pxcor and pycor to 0 and 50. Then, using the GIS extension, I load the raster and apply its z-values to a patch variable called target-elev:
to load-gis
  ;; load the raster into the global variable 'elevation'
  set elevation gis:load-dataset "cone_50.asc"
  ;; match the NetLogo world to the raster's extent
  gis:set-world-envelope-ds gis:envelope-of elevation
  ;; copy each raster cell's value into the patch variable 'target-elev'
  gis:apply-raster elevation target-elev
end
Now each patch of my 50-by-50 NetLogo world should have an elevation value taken from my 50-by-50 raster. In theory, since each patch represents a 1 m x 1 m cell, adding all the elevation values together should (roughly) give me the total volume of the raised area of the hill. The figure I get is higher, however, and the discrepancy gets worse with larger volumes.
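For reference, this is the check I run, a minimal sketch assuming each patch represents a 1 m x 1 m cell (so metres of elevation sum directly to cubic metres); the filter guards against the NaN value that gis:apply-raster assigns to patches outside the raster:
to-report estimated-volume
  ;; sum per-patch elevations, skipping patches the raster did not cover;
  ;; their target-elev is NaN, which is neither <= 0 nor >= 0
  report sum [target-elev] of patches with [(target-elev <= 0) or (target-elev >= 0)]
end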
Can anyone help?
I'm currently working on my bachelor's project, and I'm using the PointNet deep neural network.
My project group and I have created a dataset of point clouds (each an unordered list of 3D coordinates) and segmentation files, but we can't train PointNet to predict segmentation with the dataset.
Each segmentation file is a list containing the same number of rows as there are points in the corresponding point cloud, and each row is either a 1 or a 2, depending on whether the corresponding point belongs to segment 1 or segment 2.
When PointNet predicts, it outputs a list with one element per point, where each element is the segment that PointNet predicts the corresponding point belongs to.
When we run the benchmark dataset from the original PointNet implementation, the system runs and can predict segmentation, so we know that the error is somewhere in our dataset, even though we have tried our best to make it look like the original benchmark dataset.
The implemented PointNet uses PyTorch's Conv2d, MaxPool2d, and Linear layers. For calculating the loss, both the nn.functional.nll_loss and the nn.NLLLoss functions have been used. When using nn.NLLLoss, the weight parameter was set to a tensor of [1, 100] to combat potential imbalance of the data.
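For illustration, this is roughly how the weighted loss is applied (the shapes below are made up; note that nn.NLLLoss expects log-probabilities and 0-indexed targets, so 1/2 segment labels have to be shifted to 0/1):
import torch
import torch.nn as nn

# illustrative shapes only: 8 clouds per batch, 1024 points each, 2 segment classes
log_probs = torch.randn(8, 2, 1024).log_softmax(dim=1)  # (N, C, points) log-probabilities
labels = torch.randint(1, 3, (8, 1024)) - 1             # 1/2 segment labels shifted to 0/1

# class weights [1, 100] to combat potential class imbalance, as described above
criterion = nn.NLLLoss(weight=torch.tensor([1.0, 100.0]))
loss = criterion(log_probs, labels)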
These are the things we have tried:
We have tried downsampling the point clouds, i.e. removing points using voxel downsampling
We have tried downscaling and normalizing all values so they are between 0 and 1, using the formula (data - np.min(data)) / (np.max(data) - np.min(data))
We have tried running a Euclidean clustering function on the data, so that each scanned object stands on its own
We have tried replicating another dataset, which was created from the same raw data and which we know has worked before
In the attached link, images of the data files with a description can be found.
Cheers everyone
I'm not a dev; I'm doing this for a school project. I'm trying to put the following dataset into a surface plot in gnuplot on Windows (qt terminal, if that's important).
https://files.catbox.moe/nbc6l1.json
As you can see, it's a huge set of data, pulled directly from an image into a CSV file, which I then converted to JSON.
When I type in "splot 'C:\Users\tyler\ESRP Data\sampleOutput.json'", this is what I get.
As you can see, there's only a single line, when there should be something approaching an intensity chart in a three-dimensional space. Is it a problem with the data? Do I need a specific command to do this?
It would help if you attached an example of your image data to the question, and also if you provided a link to a plot similar to the one you are trying to create. There are many different styles one might use to represent a surface. I will attempt to guess at a possible solution.
Input image (scribbled in GIMP and saved as a png image):
Gnuplot surface plot:
set border -1
unset tics
# surface represented by colored lines in 3D
# down-sample by 4x in each dimension to get an interpretable surface
set palette defined (0 "blue", 1 "white")
splot 'scribble.png' binary filetype=png every 4:4:4 using 1:2:3:3 with lines lc palette
My ultimate goal is to convert landcover raster (.tif) objects to sf objects representing the raster's grid together with the original value of each cell within each geometry. I have been able to do this for smaller rasters by doing the following:
library(sf)
library(stars)
# import raster using stars
landcover_stars <- read_stars("my_raster.tif")
# convert to sf object using st_as_sf
landcover_grid_sf <- st_as_sf(landcover_stars)
With larger rasters (e.g. my largest raster is currently 11482 x 12607 cells), however, the read_stars() function imports the raster as a "stars proxy", a step the package takes to handle large raster datasets. While stars proxy objects are not accepted by the st_as_sf() function, it is possible to set proxy = FALSE in read_stars() to force an in-memory object. If I do this with my largest dataset, however, running st_as_sf(landcover_stars) on the resulting object crashes my laptop (16 GB RAM, i7 2.70 GHz processor).
Is there a way I can proceed that eases the load on my machine when converting very large stars objects to sf?
In addition, could it be that it is actually the newly generated sf object that is exhausting my machine?
Here is a dummy raster in case you'd like to test it, with integer values randomly generated ranging from 1 to 10:
library(raster)
# 12000 x 12000 dummy raster with random integer values from 1 to 10
raster(nrows = 12000, ncols = 12000, xmn = 0, xmx = 10,
       vals = floor(runif(12000 * 12000, min = 1, max = 11)))
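For what it's worth, the workaround I have been considering is converting the raster tile by tile, roughly like the sketch below (the 2000-cell tile size is arbitrary, and I have not verified this on the full dataset):
library(sf)
library(stars)

# convert in tiles: read each window into memory via GDAL's RasterIO,
# convert it with st_as_sf(), and bind the pieces together at the end
tile <- 2000
dims <- dim(read_stars("my_raster.tif", proxy = TRUE))
pieces <- list()
for (x0 in seq(1, dims[["x"]], by = tile)) {
  for (y0 in seq(1, dims[["y"]], by = tile)) {
    win <- list(nXOff = x0, nYOff = y0,
                nXSize = min(tile, dims[["x"]] - x0 + 1),
                nYSize = min(tile, dims[["y"]] - y0 + 1))
    chunk <- read_stars("my_raster.tif", RasterIO = win)
    pieces[[length(pieces) + 1]] <- st_as_sf(chunk)
  }
}
landcover_grid_sf <- do.call(rbind, pieces)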
I would like to modify the ImageNet Caffe model as described below:
As the input channel number for temporal nets is different from that
of spatial nets (20 vs. 3), we average the ImageNet model filters of
first layer across the channel, and then copy the average results 20
times as the initialization of temporal nets.
My question is: how can I achieve the above? How can I open the Caffe model to be able to make those changes to it?
I read the net surgery tutorial but it doesn't cover the procedure needed.
Thank you for your assistance!
AMayer
The Net Surgery tutorial should give you the basics you need to cover this. But let me explain the steps you need to do in more detail:
Prepare the .prototxt network architectures: You need two files: the existing ImageNet .prototxt file, and your new temporal network architecture. You should make all layers except the first convolutional layer identical in both networks, including the names of the layers. That way, you can use the ImageNet .caffemodel file to initialize the weights automatically.
As the first conv layer has a different size, you have to give it a different name in your .prototxt file than it has in the ImageNet file. Otherwise, Caffe will try to initialize this layer with the existing weights too, which will fail as they have different shapes. (This is what happens in the edit to your question.) Just name it e.g. conv1b and change all references to that layer accordingly.
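For example, the renamed first layer in the temporal .prototxt might look like the following sketch, which assumes a CaffeNet-style conv1 (96 filters, kernel size 11, stride 4); adapt the parameters to your actual architecture:
layer {
  name: "conv1b"        # renamed so Caffe does not try to copy ImageNet's conv1 weights here
  type: "Convolution"
  bottom: "data"
  top: "conv1b"
  convolution_param {
    num_output: 96
    kernel_size: 11
    stride: 4
  }
}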
Load the ImageNet network for testing, so you can extract the parameters from the model file:
import caffe
import numpy as np

old_net = caffe.Net('imagenet.prototxt', 'imagenet.caffemodel', caffe.TEST)
Extract the weights and biases of the first convolutional layer from this loaded model:
conv_1_weights = old_net.params['conv1'][0].data
conv_1_biases = old_net.params['conv1'][1].data
Average the weights across the input channels (axis 1):
conv_av_weights = np.mean(conv_1_weights, axis=1, keepdims=True)
Load your new network together with the old .caffemodel file, as all layers except for the first layer directly use the weights from ImageNet:
new_net = caffe.Net('new_network.prototxt', 'imagenet.caffemodel', caffe.TEST)
Assign your calculated average weights and the original biases to the new network. The assignment broadcasts the single averaged channel across all 20 input channels of conv1b, which performs the "copy the average results 20 times" step:
new_net.params['conv1b'][0].data[...] = conv_av_weights
new_net.params['conv1b'][1].data[...] = conv_1_biases
Save your weights to a new .caffemodel file:
new_net.save('new_weights.caffemodel')
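If you then want to fine-tune the temporal net starting from these weights, you can pass them to the caffe binary (the solver path here is a placeholder):
caffe train --solver=temporal_solver.prototxt --weights=new_weights.caffemodel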
I am trying to visualize slope for an elevation raster using QGIS's terrain analysis tool. The results are not what I would expect.
The elevation raster is from NASA's SRTM program. I picked a relatively mountainous region, tile N39W121, to run a test.
The elevation model looks like this:
But the resulting slope raster only has two values, 0 and 89.9.
I used default settings in QGIS's DEM tool, set to slope mode. Can anyone help me figure out what I'm doing wrong? Is it a problem with the original data, or is it the settings? I am at a loss. Calculating hillshade and ruggedness index with the same tool produced results as expected.
By default, SRTM's horizontal map units are in degrees (WGS84), while SRTM's vertical units are in meters. This either needs to be compensated for in QGIS's DEM analysis settings, or the SRTM raster needs to be converted to a projection that uses meters for its map units.
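For example, with GDAL (which, as far as I know, the QGIS DEM tool is based on) you could either reproject the tile to a metre-based CRS first (UTM zone 10N covers N39W121), or stay in WGS84 and pass gdaldem's scale factor for degree-based rasters; the file names below are just examples:
# option 1: reproject to a metre-based projection, then compute slope
gdalwarp -t_srs EPSG:32610 N39W121.hgt n39w121_utm.tif
gdaldem slope n39w121_utm.tif slope.tif

# option 2: keep WGS84 and apply the degrees-to-metres scale factor
gdaldem slope N39W121.hgt slope_wgs84.tif -s 111120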