I need advice on how to proceed. I have an object in 3ds Max onto which an image containing text should be "pasted", and some parts of this text should be golden (reflective). How should I proceed so that I can still edit a given part of the text in 3ds Max?
Well, thank you
Your best bet here is to add a texture! Adding a reflection map would help a lot in bringing out the metallic look of the gold exactly where you want it.
I am doing some cat-poop research, and I tried to use YOLOv5 to detect different types of poop in a litter box. I collected about 130 poop pictures (just the poop, with no background), labeled them, and used Roboflow to generate the annotations. I then followed the Colab notebook to train and got the best.pt file for detection. But when I run detection on a random litter-box picture, the bounding box just marks the whole image, or half of it, instead of marking the poop in that image.
Then I tried labeling 3 litter-box images (marking the poop inside each image) and did it all over again. But when I ran detection on a litter-box image, nothing happened. I am so confused. Is it because the poop shapes and colors vary so much from one to another that the detection didn't work?
Could anyone give me some clues on how to label the images and train on them?
Thank you
First, I must say that your project is interesting and funny as well, no offence.
Your problem is most likely the number of training images. We can't expect the model to detect well after training it with 130 images. Experts recommend at least 1,500 images per class.
And some tips for labelling images in Roboflow:
Draw a box that includes all the parts of the object of interest. Don't leave any areas out.
Try to avoid overlapping boxes.
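As a sanity check on the annotations themselves, it helps to remember what a YOLO label line actually contains: a class index followed by the box center and size, all normalized to the image dimensions. Roboflow normally generates these files for you; the helper below (the function name is just for illustration) only shows the conversion so you can verify a label by hand:

```python
# Hypothetical helper: convert a pixel bounding box (x_min, y_min, x_max, y_max)
# into a YOLO-format label line: "class x_center y_center width height",
# with all four box values normalized to [0, 1].
def to_yolo_label(cls, box, img_w, img_h):
    x_min, y_min, x_max, y_max = box
    xc = (x_min + x_max) / 2 / img_w
    yc = (y_min + y_max) / 2 / img_h
    w = (x_max - x_min) / img_w
    h = (y_max - y_min) / img_h
    return f"{cls} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}"

# A box around one object in a 640x480 litter-box photo:
print(to_yolo_label(0, (100, 200, 300, 400), 640, 480))
# -> 0 0.312500 0.625000 0.312500 0.416667
```

If a value in your label files falls outside [0, 1], or the boxes cover the whole image, that would explain detections that mark the entire picture.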
I'm implementing a credit card OCR engine.
My first attempt was to use OpenCV to detect the ROI and then pass the ROI to a CNN.
The CNN actually recognizes the digits quite well; however, the ROI detection fails when the embossed card digits are a similar color to the background.
My second attempt was to adapt YOLO or RetinaNet for both detection and recognition.
However, YOLO takes too many hours to train, and RetinaNet cannot detect/recognize the digits on a credit card (although it catches numbers in natural scenes quite well).
I don't know how to implement this.
If you can give me some advice, I would be very thankful.
Thank you in advance.
For example, instead of inferring a batch of 64 28x28 images and adding the 64 results together, why can't I add a layer to the network that crops these 64 images out of a single 224x224 input image? It seems this would be more elegant and faster.
[gif of different lighting]
How do you do this? I find it odd that I can't find slice examples like this; I'm guessing I must be using the wrong terms, or asking the question the wrong way.
I tried the Slice layer, but it keeps wanting to slice the 8-bit gray channel, for example creating four 224x224 2-bit images.
Any ideas?
By the way, my application is really cool! I am doing unsupervised grouping of 3D objects using many different lighting angles, which eliminates manual labeling of classes!
https://github.com/GemHunt/lighting-augmentation
Thanks Much! Paul Krush
This was answered for now on the Caffe Google Group:
https://groups.google.com/forum/#!topic/caffe-users/_uii8kTMOdM
One or more of these answers will work for me:
1.) Pre- and post-processing in Python is better
2.) Check out windowing
3.) Try the Reshape layer: (1,1,224,224) -> (1,64,28,28)
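One caveat on option 3: a plain reshape of a 224x224 blob to (64,28,28) does not produce spatial 28x28 crops, it produces strips of 28 consecutive rows read in raster order. A small NumPy sketch (illustrative only, outside Caffe) shows the difference, and the reshape-plus-transpose that does yield the 64 spatial tiles:

```python
import numpy as np

# A 224x224 single-channel "image" we want to split into an 8x8 grid of 28x28 tiles.
img = np.arange(224 * 224).reshape(224, 224)

# A plain reshape to (64, 28, 28) gives 28-row raster strips, NOT spatial tiles:
strips = img.reshape(64, 28, 28)

# To crop the 64 spatial 28x28 tiles, split both axes into (grid, within-tile)
# parts and swap the middle axes before flattening the grid dimensions:
tiles = img.reshape(8, 28, 8, 28).transpose(0, 2, 1, 3).reshape(64, 28, 28)

# tiles[0] is the top-left 28x28 crop; tiles[63] is the bottom-right one.
assert np.array_equal(tiles[0], img[:28, :28])
assert np.array_equal(tiles[63], img[196:, 196:])
# The plain reshape does not reproduce the top-left crop:
assert not np.array_equal(strips[0], img[:28, :28])
```

So if the network layer only offers a reshape, the same transpose step would be needed (e.g. via a permutation layer or in Python pre-processing) to get true crops.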
If I have a map (say, of the United States), an anatomy picture of the human body, or some other image with discrete sections, and I have a plain black-and-white outline of that image, what is the easiest way to determine the coordinates of the discrete sections for use in an HTML image map?
I've used the CoffeeCup map maker, which is tedious (but also the best manual image-map maker I could find). Is there something, maybe free, maybe expensive, that can do this task automatically?
Cheers!
I quite like iMapBuilder; I installed the Chrome app and find it very easy to create image maps there for both regular and irregular areas.
You can import your image, set the area and actions, then simply add the embed code to your site.
I don't create too many image maps so maybe this isn't quite what you're looking for but I hope it helps.
I am looking to develop some code that, by looking at images downloaded from Google Maps, can categorize which parts of the image depict land and which parts depict sea.
I am a bit of a newbie to computer vision and machine learning, so I am looking for a few pointers on specific techniques or APIs that may be useful (I am not looking for the code to this solution).
What I have come up with so far:
Edge detection may not be much help (on its own). Although it gives quite a nice outline of the coast, artefacts on the surface of or above the sea (things like clouds, ships, etc.) may give false positives for land mass.
Extracting the blue colour component of the image may give a very good indication of which parts are sea, since the sea has a much higher level of blue saturation than the land.
Any help is of course, greatly appreciated.
EDIT (for anyone who may want to do something similar):
Use the static Google Maps API to fetch map images (not satellite photos; these have too much noise/artefacts to be precise). Example URL:
http://maps.google.com/maps/api/staticmap?sensor=false&size=1000x1000&center=dover&zoom=12&style=feature:all|element:labels|visibility:off&style=feature:road|element:all|visibility:off
To generate my threshold images I used Image Processing Lab. I applied the normalized RGB -> extract blue channel filter, then Binarization -> Otsu threshold. This produced extremely useful images without the need to fiddle with threshold values (the algorithm is very clever, so I won't muddy the waters by attempting to explain it).
I assume you are using the satellite-view images from Google Maps; otherwise you wouldn't have written about ships and other artefacts.
As you already said, it might be a good idea to simply try to extract the blue part of the image.
Just looking at the blue channel of an RGB image isn't going to work (I just tried), since the woods and similar areas will not give a good threshold against the water.
So you can try converting the image to the YCbCr color space and looking at the chrominance channels there.
This is an example I just made with a screenshot from Google Maps. I converted it to YCbCr in Matlab and took just the Cb channel.
You can then binarize this image with a well-chosen threshold, which shouldn't be too hard to find.
You will probably still have small artefacts, which you can remove with morphological operators (opening the image several times).
This should remove small artefacts and leave the parts that are land and the parts that are water.
Hope it helps... if not, please keep asking...
EDIT
I've just tried again with another screenshot in Matlab:
Convert the image to the YCbCr color space
Look at just the Cb channel
Find a threshold on the Cb image, either fixed or via e.g. Otsu's method, which finds an appropriate threshold in a bimodal histogram
Perform opening or other morphological filters to eliminate small noise
The original image I made:
After applying a threshold on the Cb image:
After applying an opening (5) on the image
I just picked a threshold manually; you might get better results by checking which threshold works best. But as you can see, this approach should also cope with the different colors of water in rivers and the ocean.
You are looking for a segmentation algorithm that assigns each pixel to one of two classes (land, sea). One of the simplest approaches is thresholding:
define a threshold t
if pixel value > t, assign the pixel to land
else assign it to sea (usually you keep a bitmap that tracks each pixel's class)
Since this approach works best when you can distinguish land and sea masses easily, I would suggest comparing the hue value of the pixels (i.e. finding a threshold between blue and green).