Drawing shapes versus rendering images? - pygame

I am using Pygame 1.9.2a with Python 2.7 to design an experiment. So far I have used Pygame only on an as-needed basis, and I am not familiar with all of its classes or concepts (Sprites, for instance, I know nothing about).
I need to draw many shapes (45-50 at a time) on the screen at different locations to create a crowded display. The shapes range from displaced Ts and displaced Ls to line intersections (like _| or † or ‡). I'm sorry that I cannot post an image of this: apparently I do not have the reputation of 10 that is needed to post images.
I also need these shapes in 8 different orientations. I was initially contemplating generating point lists and using these to draw lines, but for a single shape I will need four points, and I need 50 of these shapes. Again, I'm not sure how to rotate these once drawn. Can I use pygame's transform module or something? I think it can be used on, say, Rects. Or will I have to generate points for the different orientations too, so that when drawn they come out looking rotated, that is, in the desired orientation?
The alternative I was thinking of was to generate images for the shapes in GIMP or some software like that. But, for any screen, I will have to load around 50 images. Will I have to use Pygame Image and make 50 calls for something like this? Or is there an easier way to handle multiple images?
Also, which method would be a bigger hit to performance? Since it is an experiment, I am worried about timing precision too. I don't know if there is a different way to generate shapes in Pygame. Please help me decide which of these two (or a different method) is better for my purposes.
Thank you!

It is easier to use pygame.draw.rect() or pygame.draw.polygon() (because you don't need to know how to use GIMP or Inkscape :) ), but you have to draw onto a separate pygame.Surface() (to get a bitmap); then you can rotate it, add alpha (to make it transparent), and then put it on the screen.
You can create a function that generates the images (as Surface() objects) for all shapes in all orientations at program start. If you later need better-looking images, you can change that function to load images created in GIMP instead.
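A minimal sketch of that idea, assuming per-pixel-alpha Surfaces and pygame.transform.rotate() for the 8 orientations (the point list and helper names below are made-up examples, not anything from the question):

```python
import pygame

def make_shape_surface(points, size=(40, 40), color=(0, 0, 0)):
    """Draw one shape from a point list onto its own transparent Surface."""
    surf = pygame.Surface(size, pygame.SRCALPHA)   # per-pixel alpha
    pygame.draw.lines(surf, color, False, points, 2)
    return surf

def make_all_orientations(points):
    """Pre-render the 8 rotated versions of one shape at program start."""
    base = make_shape_surface(points)
    return [pygame.transform.rotate(base, angle) for angle in range(0, 360, 45)]

pygame.init()
screen = pygame.display.set_mode((800, 600))

# a rough displaced-T given as a point list (example coordinates)
t_points = [(5, 5), (35, 5), (20, 5), (20, 35)]
t_orientations = make_all_orientations(t_points)

# later, blit whichever pre-rendered orientation is needed
screen.blit(t_orientations[3], (100, 150))
pygame.display.flip()
```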
Try each method on your own - that is the best way to check which one works for you.
By the way: you can save the generated images with pygame.image.save() and then load them later. You can also keep all the elements on one image and use just part of that image (for example with Surface.subsurface(), or by restricting drawing with Surface.set_clip()).
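For example (the file name here is hypothetical, and Surface.subsurface() is one way to take a rectangular part of a bigger image):

```python
import pygame

pygame.init()
screen = pygame.display.set_mode((800, 600))

# save a generated Surface once so it can be reused between runs
shape = pygame.Surface((40, 40), pygame.SRCALPHA)
pygame.draw.line(shape, (0, 0, 0), (20, 5), (20, 35), 2)
pygame.image.save(shape, "shape.png")               # hypothetical file name

# later: load one big "sheet" and blit 40x40 cells out of it
sheet = pygame.image.load("shape.png").convert_alpha()
cell = sheet.subsurface(pygame.Rect(0, 0, 40, 40))  # just part of the image
screen.blit(cell, (200, 200))
pygame.display.flip()
```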

Related

3D Animation out of html files using Plotly

I am using Plotly with Python 3.11 in order to create some html files that contain a 3D animation of a surface and a set of points.
Since it is important to me to see the geometric relation between this set of points and the surface, having that picture as an html file instead of a static format is of great value to me, because I can move around the image to see better how those points relate to the surface.
Now, the points I was talking about are particles in motion: they have a trajectory, so I can capture an html snapshot of the situation at every time step, but only as separate html files. For that reason I would find it extremely useful to create an animation out of those html files, so that I can decide when to stop the motion and look at the positions in case something interesting happens.
Does anyone know if such a thing is possible? I have been searching around, and most of the information I find is about "static" animations, meaning that I cannot "move around" within a given frame.
Thank you in advance!!

How to draw a path partially in HTML5's canvas?

Let's say I have a curved path made using a series of bezierCurveTo() calls. I'd like to make it appear progressively in an animation, by increasing the percentage of it that is drawn frame after frame. The problem is that I cannot find a standard way to draw only part of a canvas path - would someone know of a good way (or even a tricky way) to achieve this?
Sure...and Simon Porritt did all the hard math for us!
jsBezier is a small lib with a function, pointAlongCurveFrom(curve, location, distance), that will let you incrementally plot each point along your Bezier curve.
jsBezier is available on GitHub: https://github.com/sporritt/jsBezier
Just found a small library that does exactly that: https://github.com/camoconnell/lazy-line-painter
It relies on the Raphael lib (http://raphaeljs.com/), and the two put together do not make too big a payload.

Best way for pre-rendering with HTML5's Canvas?

I'm trying to build a game development API for Google's GWT to make canvas games, and I have a question about the pre-rendering issue.
First: I am not entirely sure how browsers/JavaScript/GWT manage a deleted canvas - whether its data stays in memory or not after a removeChild() or RootPanel.Remove() (with GWT) - or even what the correct method is to remove it from memory.
So the solution I've come up with is to use multiple (as needed) big, hidden canvases as a pre-render palette, and to use drawImage() magic to pull the prerendered images from them onto the main context, dealing on my own with problems of insertion, removal, empty spaces, etc.
Is this the best solution? Or should I try using one little canvas for every little image and texture that is prerendered? Or should I try something completely different altogether?
Thanks in advance, and sorry for my spelling.
Using a canvas to pre-render your items is a good idea; however, it's not always the best choice.
If your items are complex (with gradients, shadows and visual effects), then yes, it will help. But if your items are simple (images, polygons, simple bezier curves, ...), your framerate won't increase and can even decrease (because of the extra drawImage()). In that case it's better to render in real time.
From my experiments, you won't lose performance by using several small canvases (maybe a bit of memory), and they can be easier to manage than one big canvas (like an object-oriented scene).
If your items change from time to time, it is also much easier to manage the sizes of your temporary canvases.
Hope this helps.

Programmatically create Background Images in Flex 3

I'm developing a visualization of certain parts of a warehouse with Flex 3. In this visualization there are lots of blocks on which 1 to x pallets can be placed, where x is between 9 and 15. I need to represent each pallet with a black square, each place that is already assigned to a pallet but not physically occupied with a grey square, and each free place with a white square. I first thought of just using a Canvas for each place on a block and changing its color when the state changes. But the hundreds of canvases that result from this approach are not updated quickly enough for my purposes (the screen freezes for a few seconds).
I don't want to use embedded images because of the large number of images I would have to embed in the application (the images appear in 4 orientations).
My idea was to create background images that reflect the state of the whole block only when that state is first needed, and to cache them, so that the computation time is spread over the whole runtime.
My problem now is that I don't know how to create them in a way that lets me use them as "backgroundImages". As far as I understand, I would need them as a Class object, but I don't know how to achieve that without embedding the images.
I'm of course open to better approaches to solve my problem. Thanks for your support.
I would suggest using the Graphics property of a Sprite, for example. It provides a basic drawing API for things like lines, circles and rectangles.
Besides that, you can draw bitmap images onto the Graphics to produce more advanced results.

How can I turn an image file of a game map into boundaries in my program?

I have an image of a basic game map. Think of it as just horizontal and vertical walls which can't be crossed. How can I go from a png image of the walls to something in code easily?
The hard way is pretty straightforward... it's just that if I change the image map, I would like an easy way to translate that into code.
Thanks!
edit: The map is not tile-based. It's top down 2D.
I dabble in video games, and I personally would not want the hassle of checking the boundaries of pictures on the map. Wouldn't it be cleaner if these walls were objects that just happened to have an image property (or something like it)? The image would be displayed, but the object would have well-defined coordinates, and a function could decide whether an object was hit every time the player moved.
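A rough sketch of that idea, written here with Python/pygame Rects purely for illustration (the class and helper names are made up):

```python
import pygame

class Wall(object):
    """A wall that happens to have an image, but collides via its rect."""
    def __init__(self, x, y, width, height, image=None):
        self.rect = pygame.Rect(x, y, width, height)  # well-defined coordinates
        self.image = image                            # purely visual

def hits_any_wall(player_rect, walls):
    """Decide whether a proposed player position collides with any wall."""
    return any(wall.rect.colliderect(player_rect) for wall in walls)

walls = [Wall(0, 0, 400, 16), Wall(0, 0, 16, 300)]    # top and left borders
player = pygame.Rect(50, 50, 24, 24)
print(hits_any_wall(player.move(0, -40), walls))      # True: crosses the top wall
```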
I need more details.
Is your game tile-based? Is it 3D?
If it's tile-based, you could downsample your image to the tile resolution and then do a 1:1 conversion, with each pixel representing a tile.
I suggest writing a script that takes each individual pixel and determines whether it represents part of a wall or not (i.e. black or white). Then code your game so that walls are built from individual little blocks, each represented by one pixel. Shouldn't be TOO hard...
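A minimal sketch of such a script, assuming a dark-pixels-are-walls map image and pygame (the file name, tile size and brightness threshold are made-up examples):

```python
import pygame

def walls_from_image(path, tile_size=8, threshold=128):
    """Turn every dark pixel of a map image into one little wall block."""
    image = pygame.image.load(path)                  # e.g. "map.png" (hypothetical)
    walls = []
    for y in range(image.get_height()):
        for x in range(image.get_width()):
            color = image.get_at((x, y))
            if (color.r + color.g + color.b) / 3 < threshold:  # dark enough = wall
                walls.append(pygame.Rect(x * tile_size, y * tile_size,
                                         tile_size, tile_size))
    return walls

pygame.init()
walls = walls_from_image("map.png")
print("built %d wall blocks" % len(walls))
```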
If you don't need to precompute anything from the map info, you can just check at runtime in your game logic, using a getPixel(x, y)-like function.
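For instance, pygame's equivalent of getPixel() is Surface.get_at(); a tiny sketch under that assumption (the wall colour is just an example):

```python
import pygame

def is_blocked(map_surface, x, y, wall_color=(0, 0, 0)):
    """Runtime check: is the map pixel at (x, y) a wall?"""
    color = map_surface.get_at((int(x), int(y)))
    return (color.r, color.g, color.b) == wall_color

# usage inside the movement logic (map_image is the loaded wall image):
#   if not is_blocked(map_image, player_x + dx, player_y + dy):
#       player_x += dx
#       player_y += dy
```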
Well, I can see a few cases, each with a different "best solution" depending on where your graphics come from:
Your graphics are tiled, so you can easily "recognize" a block because it uses the same graphics as other blocks. All you would have to do is write a program that, given a list of "blocking tiles" and a map, produces a "collision map" by comparing each map tile with the tiles in the blocking list (a rough sketch of this appears after the last case below).
Your graphics are just arbitrary graphics (e.g. a picture, or some CG artwork) and you don't expect the pixels of one block to be the same as the pixels of another block. You could still try to apply an "edge detection" algorithm to your picture, but my guess is that you should instead split your picture into a BG layer and an FG layer, so that the FG layer uses a pre-defined color (or alpha = 0), and test pixels against that color to decide whether things are blocking or not.
You don't have many blocking shapes, but they are complex (polygons, ellipses) and would be inefficient to render as a bitmap of the world or to pack as "tile attributes". This is typically the case for point-and-click adventure games, for instance. In that case, you will probably want to create paths that match your boundaries with a vector drawing program and dig up a library that does polygon intersection or bezier collision tests.
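For the first (tiled) case, here is a rough sketch of such a collision-map builder, assuming pygame and made-up file names and tile size; it simply marks a map tile as blocking when its pixels match one of the known blocking tiles:

```python
import pygame

def build_collision_map(map_path, blocking_paths, tile=16):
    """Mark a map tile as blocked when its pixels equal a known blocking tile."""
    world = pygame.image.load(map_path)
    # blocking tile images are assumed to be exactly tile x tile pixels
    blockers = [pygame.image.tostring(pygame.image.load(p), "RGB")
                for p in blocking_paths]
    cols, rows = world.get_width() // tile, world.get_height() // tile
    collision = []
    for ty in range(rows):
        row = []
        for tx in range(cols):
            piece = world.subsurface(pygame.Rect(tx * tile, ty * tile, tile, tile))
            row.append(pygame.image.tostring(piece, "RGB") in blockers)
        collision.append(row)
    return collision

# hypothetical files: the full map plus one image per blocking tile
grid = build_collision_map("map.png", ["wall.png", "rock.png"])
```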
Good luck and have fun.