I'm sorry the title is vague. I'm creating a Pokédex site and I have the Pokémon and a photo of Ash. Ash is 140 cm tall; the only problem is that some Pokémon are tiny and some are huge, and I want the images to be no bigger than 400px tall and wide.
The biggest Pokémon is 880 cm tall, so I need that one to be 400px tall and shrink Ash to the right height relative to the Pokémon. But for the smallest Pokémon I need to make Ash 400px tall and scale the Pokémon's height relative to Ash.
I don't even know where to start with this problem, so any ideas would be wonderful!
Don't display Ash's photo at full scale once you're past your 400-pixel limit. Use a cross-multiplication to shrink Ash's picture when the Pokémon is very tall, and another one to enlarge Ash's picture when the Pokémon is very small.
With an 880 cm Pokémon at 400 pixels, while Ash is 140 cm tall, Ash's picture should be 140 × (400/880) ≈ 64 pixels tall. The ratio 400/880 tells you that each centimeter of the Pokémon is worth about 0.45 pixels, and 880/400 tells you that each pixel represents 2.2 cm.
You can of course prefer round numbers and go for a "1 pixel = 3 cm" scale, so that your Pokémon uses an image about 293 pixels tall instead (880/3 ≈ 293.3).
For very tiny Pokémon, you do the same operation, but enlarge Ash's picture instead.
At 400 pixels tall, Ash's picture has these characteristics: each pixel is worth 0.35 cm, and one centimeter takes about 2.86 pixels. So a 10 cm Pokémon needs a picture about 29 pixels tall (10 × (400/140) ≈ 28.57).
Once you're confident with the formula, you can compute the scale automatically in your program. Since it's very basic arithmetic, the implementation language shouldn't be a real problem - JS can do it perfectly, or PHP.
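If it helps, here is the whole idea as a tiny sketch. I'm showing it in Python for brevity (the function name is just illustrative); the same two lines of arithmetic work in JS or PHP:

ASH_HEIGHT_CM = 140
MAX_PX = 400

def image_heights(pokemon_height_cm):
    # Whichever figure is taller gets the full 400px; the other one
    # is scaled by the same pixels-per-centimeter ratio.
    px_per_cm = MAX_PX / max(pokemon_height_cm, ASH_HEIGHT_CM)
    return round(pokemon_height_cm * px_per_cm), round(ASH_HEIGHT_CM * px_per_cm)

print(image_heights(880))  # (400, 64)  - huge Pokémon, Ash shrinks
print(image_heights(10))   # (29, 400)  - tiny Pokémon, Ash fills the frame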
In pseudo-code, I would do something like this.
Assuming you store height (in cm) on an Ash object and on every Pokémon object, and that you can later use their imgHeight property to size your images:
const setScaled = (big, small) => {
  // The taller figure gets the full 400px...
  big.imgHeight = 400;
  // ...and the shorter one is scaled by the same ratio.
  small.imgHeight = Math.round((400 / big.height) * small.height);
  return [big, small];
};
for (const poke of dex) {
  let scaled;
  if (poke.height > ash.height) {
    scaled = setScaled(poke, ash);
    // now scaled[0] is poke, scaled[1] is ash
  } else {
    scaled = setScaled(ash, poke);
    // now scaled[0] is ash, scaled[1] is poke
  }
}
Related
I have a dataset that provides bounding box coordinates in the following format.
height = 84, width = 81, x = 343, y = 510. Now, I want to normalize these values (0-1) to train using the YOLOv5 model. I have looked online and found that I can normalize these values in two ways. Way 1:
Normalized(Xmin) = (Xmin+w/2)/Image_Width
Normalized(Ymin) = (Ymin+h/2)/Image_Height
Normalized(w) = w/Image_Width
Normalized(h) = h/Image_Height
Way 2: divide x_center and width by image width, and y_center and height by image height.
Now, I am not sure which way I should follow to normalize the values in the given dataset. Can anyone suggest a solution? Also, the size of the images in my dataset is 1024 × 1024. If I convert the images to 512 × 512, how do I figure out the new bounding box coordinates, i.e. what will be the values of height, width, x and y?
First, Yolov5 will resize your images and bounding boxes for you, so you don't have to worry about that. By default, it will resize the longest side to 640px and the shortest side will be resized to a length that preserves the proportion of the original image.
About the normalization [0-1]: YOLOv5 expects the center points of the bbox, not the minimum points, so if your box dimensions are height = 84px and width = 81px, and those x and y are the minimum points of the bbox (I'm not sure from your post), your formula works, because you're computing the center points:
Normalized(x_center) = (Xmin+w/2)/Image_Width
Normalized(y_center) = (Ymin+h/2)/Image_Height
...
About the resizing:
https://github.com/ultralytics/yolov5/discussions/7126#discussioncomment-2429260
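As a concrete sketch of the normalization (plain Python; to_yolo is just my name for it), assuming your x and y are indeed the top-left corner of the box:

def to_yolo(x_min, y_min, w, h, img_w, img_h):
    # YOLO label format: normalized center x, center y, width, height.
    x_c = (x_min + w / 2) / img_w
    y_c = (y_min + h / 2) / img_h
    return x_c, y_c, w / img_w, h / img_h

# The values from the question: width=81, height=84, x=343, y=510, 1024x1024 image
print(to_yolo(343, 510, 81, 84, 1024, 1024))
# -> roughly (0.3745, 0.5391, 0.0791, 0.0820)

And since every value is divided by the image dimensions, the normalized labels are dimensionless: resizing the images from 1024 × 1024 to 512 × 512 leaves them unchanged, which answers the second part of your question.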
I read the following code:
https://github.com/endernewton/tf-faster-rcnn/blob/a3279943cbe6b880be34b53329a4fe3f971c2c37/lib/model/config.py#L63
600 is the pixel size of an image's shortest side, and 1000 is the max pixel size of the longest side of a scaled input image.
Could anybody explain this? How should these sizes be determined, and should we change them?
These are used in the prep_im_for_blob function here, where target_size is __C.TRAIN.SCALES = (600,) and max_size is __C.TRAIN.MAX_SIZE = 1000. What it does is scale the image so that the shortest side of the resized image equals __C.TRAIN.SCALES. However, if the resulting image would then be bigger than __C.TRAIN.MAX_SIZE, it instead scales so that the longest side of the resized image equals __C.TRAIN.MAX_SIZE. If your input images typically fall within the 600-1000 pixel range, you don't need to change these values.
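In other words (a simplified sketch of that rule in Python, not the exact source code):

def compute_scale(im_h, im_w, target_size=600, max_size=1000):
    im_size_min = min(im_h, im_w)
    im_size_max = max(im_h, im_w)
    scale = target_size / im_size_min   # shortest side -> 600
    if scale * im_size_max > max_size:  # would the longest side exceed 1000?
        scale = max_size / im_size_max  # cap the longest side at 1000 instead
    return scale

print(compute_scale(600, 800))   # 1.0  (both constraints already satisfied)
print(compute_scale(500, 1500))  # ~0.667 (capped by the 1000px longest side)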
I made an interface for a game using an extended viewport, and when I resize the screen the aspect ratio changes and every element in the scene is scaled, but when this happens this is what I get:
This is the most annoying issue I've dealt with; any advice? I tried making the tower n times bigger and then just setting a bigger world size for the viewport, but the same thing happens. I don't know where these extra pixels on the images come from.
I'm loading the image from an atlas:
new TextureRegion(skin.getAtlas().findRegion("tower0"));
the atlas looks like this:
skin.png
size: 1024,1024
format: RGBA8888
filter: Nearest,Nearest
repeat: none
tower0
rotate: false
xy: 657, 855
size: 43, 45
orig: 43, 45
offset: 0, 0
index: -1
In the third picture, you are drawing your source image just slightly bigger than its actual size in screen pixels, so there are some boundaries where extra pixels have to be filled in to make it fill its full on-screen size. Here are some ways to fix this.
Use linear filtering. For the best appearance, use MipMapLinearLinear for the min filter. This is a quick and dirty fix. The results might look slightly blurry.
Draw your game to a FrameBuffer that is sized to the same aspect ratio as your screen, but shrunk down to a size where your sprites will be drawn pixel perfect at their original scale. Then draw that FrameBuffer to the screen using an upsampling shader. There are some good ones you can find by searching for pixel upscale shaders.
The best looking option is to write a custom Viewport class that sizes your world width and height such that you will always be drawing the sprites pixel perfect or at a whole-number multiple. The downside here is that your world size will be inconsistent across devices. Some devices will see more of the scene at once. I've used this method in a game where the player is always traveling in the same direction, so I position the camera to show the same amount of space in front of the character regardless of world size, which keeps it fair.
Edit:
I looked up my code where I did option 3. As a shortcut, rather than writing a custom Viewport class, I used a StretchViewport, and simply changed its world width and height right before updating it in the game's resize() method. Like this:
// Integer pixel scale: the largest whole-number multiple of the
// minimum world size that still fits on the screen.
int pixelScale = Math.min(
        height / MIN_WORLD_HEIGHT,
        width / MIN_WORLD_WIDTH);
// Derive a world size that maps 1 world unit to pixelScale screen pixels.
int worldWidth = width / pixelScale;
int worldHeight = height / pixelScale;
stretchViewport.setWorldWidth(worldWidth);
stretchViewport.setWorldHeight(worldHeight);
stretchViewport.update(width, height, true);
Now you may still have rounding artifacts if your pixel scale becomes something that isn't cleanly divisible for both the screen width and height. You might want to do a bit more in your calculations, like round pixelScale off to the nearest common integer factor between screen width and height. The tricky part is picking a value that won't result in a huge variation in amounts of "zoom" between different phone dimensions, but you can quickly test this by experimenting with resizing a desktop window.
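For the rounding idea, the useful fact is that the common integer factors of the screen width and height are exactly the divisors of their gcd, so you can snap the ideal scale to the nearest one. A quick sketch of that arithmetic (in Python rather than Java, and the names are mine):

from math import gcd

def snap_pixel_scale(screen_w, screen_h, ideal_scale):
    g = gcd(screen_w, screen_h)
    # Every common integer factor of width and height divides gcd(w, h).
    common_factors = [d for d in range(1, g + 1) if g % d == 0]
    # Pick the common factor closest to the ideal pixel scale.
    return min(common_factors, key=lambda f: abs(f - ideal_scale))

print(snap_pixel_scale(1920, 1080, 7))  # gcd is 120; 6 and 8 both divide it, returns 6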
In my case, I merged options 2 and 3. I rounded worldWidth and worldHeight up to the nearest even number and used that size for my FrameBuffer. Then I draw the FrameBuffer to the screen at just the right size to crop off any extra from the rounding. This eliminates the possibility of variations in common factors. Quite a bit more complicated, though. Maybe someday I'll clean up that code and publish it.
Suppose you have an image that is 1500 × 1500 CSS pixels to be displayed on your web page, but you expect that all of your users will be using a mobile device whose width and height do not exceed the equivalent of 500 CSS pixels. What is the best practice?
Keep the original image and set the width and height attributes to 500 or less in the HTML.
Display the image as is, setting the width and height attributes to 1500 or less in the HTML.
Keep the original image and set the width and height attributes to 500px or less in the CSS.
Resize the image to 500 X 500 CSS pixels or smaller in a program such as Photoshop.
I want to reduce a 480 × 480 bitmap image to 30 × 30 pixels while keeping the overall height and width intact. (I do not want to scale or use the height/width properties!)
480/16 = 30, so I need to take the average pixel value of each 16 × 16 block of pixels and put it into the new image.
How do I take the average in ActionScript 3.0? I looked at the getPixels() method; is there any simple way/method to achieve this?
Let me put it more simply: I am trying to reduce the pixels in a bitmap image from 480 × 480 to 30 × 30, with the height and width staying the same, and I expect some amount of distortion after converting the image to 30 × 30.
I tried scaling, but it reduces the width and height, and if I increase the width and height again it just regains the normal pixels. Thanks!
Why don't you simply make a copy of the whole image in code, use simple scaling to scale the copy down, and only present that to the user? Also look at this from Stack Overflow:
How to resize dynamically loaded image into flash (as3)
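For what it's worth, here is that down-then-up idea sketched with Python and Pillow, purely to illustrate the technique (in AS3 you would draw the scaled copy back at full size with smoothing disabled):

from PIL import Image

img = Image.open("tile.png")                         # hypothetical 480x480 source
small = img.resize((30, 30), Image.BOX)              # BOX averages each 16x16 block
pixelated = small.resize((480, 480), Image.NEAREST)  # back to full size, blocky
pixelated.save("tile_pixelated.png")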