Uncertainty about the term "label image file" (ITK)

I've been using ITK recently and I'm not very skilled with it yet, so I apologize if my question seems naive!
Here is the question: I've built a couple of the ITK examples on my machine (Win 7 x64) successfully and then tried to run one to test the result. As I was interested in watershed segmentation, I tried that example (WS3D). Besides the input and output image files that have to be specified, it needs an additional parameter called LabelImageFile, exactly like this:
WS3D InputImageFile LabelImageFile OutputImageFile
Unfortunately I have no idea what the LabelImageFile is. How can I obtain a label image for a specific image? I'd be very grateful if anyone can help me. Many thanks in advance,
Shawn

The watershed segmentation algorithm operates on a function f, a continuous height function defined over the image domain, such that a catchment basin is the set of points whose paths of steepest descent terminate at the same local minimum of f. These catchment basins are your watershed result.
The image the filter expects as LabelImageFile is this height function f. You can generate one such image by applying a gradient magnitude filter to the original image.
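If it helps, here is a minimal sketch of how such a height image could be generated using ITK's Python wrapping (untested here; the file names are just placeholders, and the equivalent C++ filter is itk::GradientMagnitudeImageFilter):
import itk

# Read the original image, compute its gradient magnitude, and save the
# result; this gradient magnitude image can serve as the height function
# that the watershed example reads as its extra label-image argument.
image = itk.imread("InputImage.mha", itk.F)       # input file name is a placeholder
grad_filter = itk.GradientMagnitudeImageFilter.New(image)
grad_filter.Update()
height = grad_filter.GetOutput()
itk.imwrite(height, "HeightImage.mha")            # output file name is a placeholder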

Transform Edit2D Areas

I am using the Edit2D extension on an SVF created from a 2D DWG file and have a question about transforms. The Autodesk.Edit2D.Polygon objects that are created have a getArea() method, which is great. However, it's not in the correct unit scale. I tested one, and something that should be roughly 230 sq ft in size is coming back as about 2.8.
I notice that the method takes an argument of type Autodesk.Edit2D.MeasureTransform, which I'm sure is what I need; however, I don't know how to get that transform. I see that I can get viewer.model.getData().viewports[1].transform, but that is just an array of 16 numbers, not a transform object, so it causes an error when I try to pass it in.
I have not been able to find any documentation on this. Can someone tell me what units this is coming back in and/or how to convert to the same units as the underlying DWG file?
Related question: how do I tell what units the underlying DWG is in?
EDIT
To add to this, I tried getting all of the polylines in the drawing, which have an area property. In this case I was able to figure out that the polylines in the underlying DWG were reporting their areas in square inches (not sure if that's always the case). I generated Edit2D polygons based on the polylines, so it basically just drew over them.
I then compared the area property of each polyline to the result of getArea() on the corresponding polygon to find the ratio. In this case it was always about 83 or 84 times smaller than the square-foot value of the polyline it came from (there is some degree of error in my tracing system, so I don't expect them to be exact at this point). However, that doesn't fit any unit conversion that I know of. So the remaining questions are:
What unit is this?
Is this consistent or do I need to look somewhere else for this scale?
Maybe you missed section 3.2, Units for Areas and Lengths, of https://forge.autodesk.com/en/docs/viewer/v7/developers_guide/advanced_options/edit2d-use/
If you use Edit2D without the MeasureExtension, it will display all coordinates in model units. You can customize units by modifying or replacing DefaultUnitHandler. More information is available in the Customize Edit2D tutorial.
and https://forge.autodesk.com/en/docs/viewer/v7/developers_guide/advanced_options/edit2d-customize/
BTW, you can get the DefaultUnitHandler via edit2dExt.defaultContext.unitHandler
OK, after a great deal of experimentation and frustration, I think I have it working. I ended up looking directly at the JS for the getArea() method in dev tools. Searching through the script, I found a class called DefaultMeasureTransform that inherits from MeasureTransform and takes a viewer argument. I was able to construct that and then pass it in as an argument to getArea():
const transform = new Autodesk.Edit2D.DefaultMeasureTransform(viewer);
const area = polygon.getArea(transform);
Now the area variable matches the units in the original CAD file (within acceptable rounding error, anyway; it's about 0.05 square inches off).
It would be nice to have better documentation on the coordinate systems; am I missing it somewhere? Either way, this is working, so hopefully it helps someone else.

ITK difference between SetInitialTransform and SetMovingInitialTransform

I am using the ITK library for image registration. I wonder, when setting initial parameters for an ImageRegistrationMethodv4-based registration, should I use SetMovingInitialTransform and SetFixedInitialTransform as in the tutorial, or just SetInitialTransform?
Does the "transform" in SetInitialTransform mean the transform for the moving image or for the fixed image? Thank you :)
(Please read this with caution--I do not have the library with me to test this answer; it's based on memory only.)
I believe SetInitialTransform() refers to the transform which is actually optimized by the registration method. In other words, it is a collection of transform parameters that specify an "initial guess" for the optimization process; these parameters will then start moving around at each iteration. (They are therefore applied to the moving image.)
I think SetMovingInitialTransform() and SetFixedInitialTransform() refer to static initial transforms which do not change at all during the registration process. They merely "set up" the moving and fixed images at desired starting locations, if you are not satisfied with their default positions in space.
If you have some simple 2D images, try testing this answer out with simple initial transformations, like a 5-unit translation transformation or something.
You could try reading the ImageRegistrationMethodv4 documentation for a little more info.
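For what it's worth, here is a rough, untested sketch (ITK Python wrapping, placeholder file names, and a simple translation transform are all assumptions) of how the three calls are typically combined. The transform handed to SetInitialTransform() is the one whose parameters the optimizer adjusts, while the moving/fixed initial transforms stay constant:
import itk

fixed_image = itk.imread("fixed.png", itk.F)     # file names are placeholders
moving_image = itk.imread("moving.png", itk.F)

FixedImageType = type(fixed_image)
MovingImageType = type(moving_image)
TransformType = itk.TranslationTransform[itk.D, 2]

# The transform that the registration method actually optimizes.
optimized_transform = TransformType.New()
optimized_transform.SetIdentity()

# A static starting position for the moving image, e.g. a 5-unit shift
# along x; this one is never optimized.
moving_initial_transform = TransformType.New()
parameters = moving_initial_transform.GetParameters()
parameters[0] = 5.0
parameters[1] = 0.0
moving_initial_transform.SetParameters(parameters)

metric = itk.MeanSquaresImageToImageMetricv4[FixedImageType, MovingImageType].New()
optimizer = itk.RegularStepGradientDescentOptimizerv4.New()
optimizer.SetLearningRate(4.0)
optimizer.SetMinimumStepLength(0.001)
optimizer.SetNumberOfIterations(200)

registration = itk.ImageRegistrationMethodv4.New(
    fixed_image=fixed_image,
    moving_image=moving_image,
    metric=metric,
    optimizer=optimizer,
    initial_transform=optimized_transform,       # optimized at each iteration
)
registration.SetMovingInitialTransform(moving_initial_transform)  # held constant

identity = TransformType.New()
identity.SetIdentity()
registration.SetFixedInitialTransform(identity)  # held constant as well

# Multi-resolution settings are left at their defaults here.
registration.Update()
print(registration.GetTransform().GetParameters())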

SolidWorks: hollow/shell an STL part

I'm pretty new at SolidWorks!
But I've been able to create a solid from an STL file. It's a truncated tetrahedron shape.
Now I want this shape to be hollow (for 3D printing and adding threads).
So I searched for a while and found a tutorial for the Shell tool. This didn't work out because it gave me an error saying that the faces may offset into adjacent spaces.
So I thought: if I had one part and then the same part scaled down by 3 mm, placed them in the same spot, and then subtracted one from the other somehow, it would give me the shelled shape I want.
Would this work? Does anybody know a way to do this, or have a better way to hollow out my solid?
STL & PART upload.
Files Google Drive
If you have SOLIDWORKS Professional or Premium, you can use ScanTo3D to turn the part into a Solid / Surface body. At that point, you can manipulate the geometry as you would anything else in SOLIDWORKS.
Here's a video showing both turning on ScanTo3D and how to use it.
https://youtu.be/ZjzqWCfNfmQ
"So I thought if I had one part and then a the same part but scale it
3mm. Place them on the same spot and then subtract them of some sort.
It would give me the same shelled shape I want."
Use the Move/Copy Body command to copy it.
Use the Scale command to scale it.
Use the Combine feature to subtract the smaller body from the main body.
Alternatively, use the Check Geometry feature to find any faulty faces, and ALWAYS run Import Diagnostics on an imported body. If you can find and fix a faulty face, try the Shell tool again. If the minimum radius is too small, then you will need to manually offset faces using the Offset Surface command.

Tiled Game Background and the "'float' object cannot be interpreted as an integer" error

I am working my way through an entry level Pygame tutorial on a Windows 7 machine and for the following code, I am getting this error:
"builtins.TypeError: 'float' object cannot be interpreted as an integer"
# 6 - Draw the screen elements
for x in range(width//grass.get_width()+1):
    for y in range(height//grass.get_height()+1):
        screen.blit(castle,(0,30))
Through my research on this site I found that using the integer division operator (//) got me past the error, but alas my grass image won't tile. I know this code works with (/) on my Linux machine, because I have completed the game there previously. If you take the time to look into this, I truly appreciate your help! :-)
Well, it seems that the problem is that in your loop you are blitting the castle, and not a grass tile. (As for the original error: in Python 3, / always returns a float, which range() will not accept, while // performs integer division; the / version presumably worked on your Linux machine because it was running Python 2, where dividing two integers with / already gives an integer.)
This should fix it:
screen.blit(grass,(x*grass.get_width(),y*grass.get_height()))
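For reference, here is a minimal, self-contained sketch of the whole drawing step (the window size and image paths are assumptions based on typical versions of that tutorial, not your exact code): the grass is tiled across the window inside the loop, and the castle is drawn once afterwards.
import pygame

pygame.init()
width, height = 640, 480
screen = pygame.display.set_mode((width, height))
grass = pygame.image.load("resources/images/grass.png")    # path is an assumption
castle = pygame.image.load("resources/images/castle.png")  # path is an assumption

# 6 - Draw the screen elements
# // is integer division, so range() receives ints even on Python 3.
for x in range(width // grass.get_width() + 1):
    for y in range(height // grass.get_height() + 1):
        screen.blit(grass, (x * grass.get_width(), y * grass.get_height()))
screen.blit(castle, (0, 30))   # the castle is drawn once, on top of the grass

pygame.display.flip()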

Python graphics skipping every other pixel when drawing a .PPM file from a function

I'm writing a program for a college course. I import a .PPM file, saved as a 2-D array, from main into the function. Then I have to update the pixels of a graphics window (which is opened in main) using the setPixel() method and the color_rgb() function.
The pixels are updating, however there is a white pixel in between each colored pixel for some reason. It's not the PPM file (they were supplied by my professor and I've tried multiple ones), so it has to be my function.
Warning: I am not allowed to use anything in my program that we have not yet covered in our course (it's a first year, 4 month course so the scope is not massive). I don't need to know exactly HOW to fix it, as much as I need to know why it's doing it (AKA: I need to be able to explain how I fixed it, and why it was breaking in the first place).
Here is my function:
def Draw_Pic(pic, pic_array, sizeX, sizeY, gfx_window):
    for y in range(sizeY):
        for x in range(0, sizeX, 3):
            pixel_color = color_rgb(pic_array[y][x], pic_array[y][x+1], pic_array[y][x+2])
            pic.setPixel(x, y, pixel_color)
    gfx_window.update()
You are using range(0, sizeX, 3), which produces the values from 0 up to sizeX in steps of 3.
So your x goes 0, 3, 6, 9, and so on. That makes perfect sense for the part where you assemble the pixel color from three components, but then you call pic.setPixel(x, y, pixel_color) with that same interleaved x, so pixels land only at every third column and the columns in between are never set.
Hope that helped.
Edit: Also, written that way, you will only copy about 1/3 of the image in pic_array.
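In case it helps, here is one possible corrected version of the function. This is only a sketch: it assumes Zelle's graphics module (which setPixel() and color_rgb() suggest) and assumes each row of pic_array stores sizeX pixels as flat R, G, B triples, i.e. 3 * sizeX values per row. The idea is to keep one counter for the screen column and a separate index into the row data.
from graphics import color_rgb   # assuming Zelle's graphics.py, as in the original code

def Draw_Pic(pic, pic_array, sizeX, sizeY, gfx_window):
    for y in range(sizeY):
        for px in range(sizeX):              # px = the pixel column on screen
            i = 3 * px                       # i = index of that pixel's red value in the row
            pixel_color = color_rgb(pic_array[y][i],
                                    pic_array[y][i + 1],
                                    pic_array[y][i + 2])
            pic.setPixel(px, y, pixel_color)  # plot at the pixel column, not the array index
    gfx_window.update()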