Is there a way to set a scale/minimum size on an ellipsoid the same way that minimumPixelSize (and maximumScale) do with a model?

I want to use CZML to draw some spheres floating around an area of the globe, and I want them to keep the same size regardless of zoom (within limits). This is trivial with a glTF model, but I cannot find a way to do it with an ellipsoid.
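For example, with a glTF model something like the following keeps the entity readable at any zoom; this is just a minimal Entity API sketch with placeholder values, included to show what I mean by "trivial" for models:
// minimumPixelSize keeps the model from shrinking below 64 px on screen,
// and maximumScale caps how far it may be blown up to achieve that.
viewer.entities.add({
  position: Cesium.Cartesian3.fromDegrees(-75.6, 40.0, 1000.0), // placeholder location
  model: {
    uri: 'path/to/sphere.glb', // placeholder URI
    minimumPixelSize: 64,
    maximumScale: 200
  }
});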

Related

Drawing over terrain with depth test?

I'm trying to render geometric shapes over uneven terrain (the terrain is loaded from a heightmap; the shape geometry is also generated from averaged heights across the heightmap, but it does not fit the terrain exactly). I have the following problem: sometimes the terrain shows through the shape, as shown in the picture.
I need to draw both the terrain and the shapes with depth testing enabled so they do not obstruct other objects in the scene. Could someone suggest a solution to make sure the shapes are always rendered on top? Lifting them up is not really feasible; I essentially need to replace the colors of the actual terrain pixels, and doing that in the terrain's pixel shader seems too expensive.
Thanks in advance.
I had a similar problem and this is how I solved it:
1. First render the terrain and keep its depth buffer. Do not render any other objects.
2. Render a solid bounding box of the shape you want to put on the terrain. Make sure the bounding box covers the whole height range the shape covers; an over-conservative estimate is to use the global minimum and maximum elevation of the entire terrain.
3. In the pixel shader, read the depth buffer and reconstruct the world-space position.
4. Check whether this position is inside your shape. In your case you can check whether its xy (or xz) projection is within the given distance from the center of your circle.
5. Transform the position into your shape's local coordinate system and compute the desired color.
6. Alpha-blend the result over the render target.
This method results in shapes perfectly aligned with the terrain surface. It also does not produce any artifacts and works with any terrain.
The possible drawback is that it requires using deferred-style shading and I do not know if you can do this. Still, I hope this might be helpful for you.
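To make steps 3 and 4 concrete, here is a minimal JavaScript sketch of the math involved; the 4x4 matrix helper and the inverse view-projection matrix are assumed to come from whatever math library you use, and in a real renderer this logic would live in the pixel shader:
// Reconstruct a world-space position from a depth sample and test
// whether it lies inside a circle on the ground plane.
// ndcX, ndcY and depth are in normalized device coordinates ([-1, 1]);
// invViewProj is the inverse of (projection * view).
function reconstructWorldPos(ndcX, ndcY, depth, invViewProj) {
  var clip = [ndcX, ndcY, depth, 1.0];
  var world = multiplyMat4Vec4(invViewProj, clip); // assumed math helper
  // Perspective divide brings the point back from homogeneous coordinates.
  return [world[0] / world[3], world[1] / world[3], world[2] / world[3]];
}

function isInsideCircle(worldPos, center, radius) {
  // Project onto the ground plane (ignore the height axis, here y).
  var dx = worldPos[0] - center[0];
  var dz = worldPos[2] - center[2];
  return dx * dx + dz * dz <= radius * radius;
}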

Canvas: Mouse Events

I know it is not possible to add event handlers to specific circles or rectangles drawn on a canvas. But there are some nice frameworks like EaselJS, KineticJS, Paper.js and Fabric.js that support event handling on specific elements.
Can someone explain to me how they work?
I think there are only two solutions.
1. You create a new canvas element for each shape and stack them on top of each other. This way you can give each canvas its own event handler.
2. You have only one canvas and one event handler. Then you have to do the mathematical calculations yourself to find out whether a specific element was clicked. If you only have circles or rectangles this is fairly easy, but if you have paths with lots of curves it is quite difficult.
I don't want to use those libraries, so it would be nice if someone could help me.
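For circles, for example, I imagine the check in solution 2 is little more than comparing the click position against each stored center and radius; a rough sketch (the canvas id and the circles array are made up):
// One canvas, one click handler: loop over the stored circles and
// test whether the click falls inside any of them.
var canvas = document.getElementById('myCanvas'); // assumed element id
var circles = [{ x: 100, y: 80, r: 30 }, { x: 220, y: 150, r: 50 }];

canvas.addEventListener('click', function (e) {
  var rect = canvas.getBoundingClientRect();
  var mx = e.clientX - rect.left;
  var my = e.clientY - rect.top;
  circles.forEach(function (c) {
    var dx = mx - c.x, dy = my - c.y;
    if (dx * dx + dy * dy <= c.r * c.r) {
      console.log('circle clicked', c);
    }
  });
});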
Here's a BRIEF summary of how canvas drawing libraries work
An unaltered canvas is just a big bitmap. Once you draw shapes on the canvas, they become inaccessible, forgotten pixels.
Canvas drawing libraries store all your shapes into “retained” objects. Each shape object has sufficient information about itself to allow the drawing library to redraw it whenever necessary.
The canvas drawing libraries are the "controllers" for objects. The libraries have the algorithms necessary to track, manipulate and redraw all shape objects as necessary.
The following information is retained about every shape object:
Basic identification:
- ID
- Shape name
- Parent or container
Inherent properties of the shape:
- Rectangular shapes (rect, image, text) know their width and height.
- Circular shapes (circles, ellipses, regular polygons, stars) know their radius and side count.
- Lines know their length.
- Curved shapes (arcs, Beziers, paths) know their anchor points and control points.
- Text also knows…well, the text!
- Images also know their pixel data (usually stored in JavaScript Image objects).
Transformational information:
- Starting X/Y coordinate
- Translations: accumulated movements off the starting coordinate
- Rotations: accumulated rotations of this shape (usually in radians)
- Scalings: accumulated resizings
- Other, less common transforms are skews and warps
Layering information: the current z-index
Styling information:
- StrokeColor
- StrokeWidth
- FillColor
- Opacity
- isVisible
- lineCaps
- cornerRadius
Tracking abilities:
- Bounding box: the smallest rectangle that completely contains this shape. This is used for "hit testing" to see whether the mouse is inside the object (for selecting and dragging).
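A minimal illustration of that retained-object idea (all names here are invented for the example; they are not taken from EaselJS or any of the other libraries):
// A tiny "retained mode" layer on top of the canvas: shapes are kept
// as plain objects and are redrawn and hit-tested from that data.
var shapes = [
  { id: 'box1', type: 'rect', x: 20, y: 20, w: 120, h: 80, fill: 'tomato' }
];

function redraw(ctx) {
  ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
  shapes.forEach(function (s) {
    ctx.fillStyle = s.fill;
    ctx.fillRect(s.x, s.y, s.w, s.h);
  });
}

// Bounding-box hit test used to decide which shape a mouse event targets.
function hitTest(px, py) {
  return shapes.filter(function (s) {
    return px >= s.x && px <= s.x + s.w && py >= s.y && py <= s.y + s.h;
  });
}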
If you don't want to use a library, you may find my answer in this thread helpful. As markE says, once the canvas is written to there is no way of tracking that data (unless you care to loop through each individual pixel and test its colour, though that is really only useful for pixel-level collision detection).

When drawing on a canvas, should calculations be done relative to cartesian plane coordinates?

I've been seeing a lot of canvas-graphics-related javascript projects and libraries lately and was wondering how they handle the coordinate system. When drawing shapes and vectors on the canvas, are the points calculated based on a cartesian plane and converted for the canvas, or is everything calculated directly for the canvas?
I tried playing around with drawing a circle by graphing all its tangent lines until the line intersections start to resemble a curve and found the difference between the cartesian planes I'm familiar with and the coordinate system used by web browsers very confusing. The function for a circle, for example, "y^2 + x^2 = r^2" would need to be translated to "(y-1)^2 + (x-1)^2 = r^2" to be seen on the canvas. And then negative slopes were positive slopes on the canvas and everything would be upside down :/ .
The easiest way for me to think about it was to pretend the origin of a cartesian plane was in the center of the canvas and adjust my coordinates accordingly. On a 500 x 500 canvas, the center would be 250,250, so if I ended up with a point at 50,50, it would be drawn at (250 + 50, 250 - 50) = (300, 200).
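In code, that manual conversion is just a couple of lines; a quick sketch using the 500 x 500 canvas from the example:
// Convert a point from a math-style coordinate system (origin in the
// center, y pointing up) to canvas coordinates (origin top-left, y down).
function toCanvas(x, y, width, height) {
  return { x: width / 2 + x, y: height / 2 - y };
}
toCanvas(50, 50, 500, 500); // { x: 300, y: 200 }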
I get the feeling I'm over-complicating this, but I can't wrap my mind around the clean way to work with a canvas.
Current practice can perhaps be exemplified by a quote from David Flanagan's book "JavaScript: The Definitive Guide", which says that
"Certain canvas operations and attributes (such as extracting raw pixel values and setting shadow offsets) always use this default coordinate system"
(the default coordinate system is that of the canvas). And it continues with
"In most canvas operations, when you specify the coordinates of a point, it is taken to be a point in the current coordinate system [that's, for example, the Cartesian plane you mentioned, @Walkerneo], not in the default coordinate system."
Why is using a "current coordinate system" more useful than using the canvas coordinate system directly?
First and foremost, I believe, because it is independent of the canvas itself, which is tied to the screen (more specifically, the default coordinate system dimensions are expressed in pixels). Using for instance a Cartesian (orthogonal) coordinate system makes it easy for you (well, for me too, obviously :-D ) to specify your drawing in terms of what you want to draw, leaving the task of how to draw it to the transformations offered by the Canvas API. In particular, you can express dimensions in the natural units of your drawing, and perform a scale and a translation to fit (or not, as the case may be...) your drawing to the canvas.
Furthermore, using transformations is often a clearer way to build your drawing since it allows you to get "farther" from the underlying coordinate system and specify your drawing in terms of higher level operations ('scale', 'rotate', 'translate' and the more general 'transform'). The above-mentioned book gives a very nice example of the power of this approach, drawing a Koch (fractal) snowflake in many fewer lines than would be possible (if at all) using canvas coordinates.
The HTML5 canvas, like most graphics systems, uses a coordinate system where (0,0) is in the top left and the x-axis and y-axis go from left to right and top down respectively. This makes sense if you think about how you would create a graphics system with nothing but a block of memory: the simplest way to map coordinates (x,y) to a memory slot is to take x+w*y, where w is the width of a line.
This means that the canvas coordinate system differs from what you use in mathematics in two ways: (0,0) is not the center like it usually is, and y grows down rather than up. The last part is what makes your figures upside down.
You can set transformations on the canvas that make the coordinate system more like what you are used to:
var ctx = document.getElementById('canvas').getContext('2d');
ctx.translate(250,250); // Move (0,0) to (250, 250)
ctx.scale(1,-1); // Make y grow up rather than down
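After those two calls you can draw using the math-style coordinates directly; for instance, continuing the snippet above (this bit is mine, not from the book):
ctx.beginPath();
ctx.arc(0, 0, 100, 0, 2 * Math.PI); // circle of radius 100 around the new origin
ctx.stroke();
ctx.fillRect(-50, 0, 100, 5);       // a bar sitting on the x-axis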

Calculate 3D coordinates from 2D Image plane accounting for perspective without direct access to view/projection matrix

First time asking a question on the stack exchange, hopefully this is the right place.
I can't seem to develop a close enough approximation algorithm for my situation as I'm not exactly the best in terms of 3D math.
I have a 3d environment in which I can access the position and rotation of any object, including my camera, as well as run trace lines from any two points to get distances between a point and a point of collision. I also have my camera's field of view. I do not have any form of access to the world/view/projection matrices however.
I also have a collection of 2D images that are basically a set of screenshots of the 3D environment from the camera; each collection is from the same point and angle, and a typical set is taken at about a 60 degree angle down from the horizon.
I have been able to get to the point of using "registration point entities" that can be placed in the 3D world to represent the corners of the 2D image; when a point is picked on the 2D image it is read as a coordinate in the range 0-1, which is then interpolated between the 3D positions of the registration points. This seems to work well, but only if the image is a perfect top-down view. When the camera is tilted and another dimension of perspective is introduced, the results become grossly inaccurate, as there is no compensation for this perspective.
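For reference, that interpolation step is essentially a bilinear blend of the four 3D corner positions; a small sketch of my current approach (the corner naming and vector layout are just for illustration):
// u, v: the picked point in the image, each in the range 0-1.
// topLeft, topRight, bottomLeft, bottomRight: 3D positions of the
// registration entities placed at the image corners.
function lerp3(a, b, t) {
  return [a[0] + (b[0] - a[0]) * t,
          a[1] + (b[1] - a[1]) * t,
          a[2] + (b[2] - a[2]) * t];
}

function imagePointTo3D(u, v, topLeft, topRight, bottomLeft, bottomRight) {
  var top = lerp3(topLeft, topRight, u);
  var bottom = lerp3(bottomLeft, bottomRight, u);
  return lerp3(top, bottom, v);
}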
I don't need to be able to calculate the height of a point, say a window on a skyscraper; what I need is at least the coordinate at the base of the image plane, or, if I extend a line out from a specified image-space point, at least the point where that line would intersect the ground if there were nothing in the way.
All of the material I found about this says to just deproject the point using the world/view/projection matrices, which I find straightforward in itself, except that I don't have access to these matrices, only data I can collect at screenshot time, and the other algorithms I found use complex maths I simply don't grasp yet.
One end goal of this would be to be able to place markers in the 3D environment where a user clicks in the image, without being able to run a simple deprojection from the user's view.
Any help would be appreciated, thanks.
Edit: Herp derp, while my implementation for doing so is a bit odd due to the limitations of my situation, the solution essentially boiled down to ananthonline's answer about simply recalculating the view/projection matrices.
Between position, rotation and FOV of the camera, could you not calculate the View/Projection matrices of the camera (songho.ca/opengl/gl_projectionmatrix.html) - thus allowing you to unproject known 3D points?
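A sketch of that idea without explicitly assembling the matrices: from the camera position, its orientation basis vectors and the FOV you can build the ray through an image point and intersect it with the ground plane. The vector layout, the y-up convention and ground at y = 0 are assumptions for the example:
// px, py: picked image point in the range 0-1 (0,0 = top-left corner).
// cam: { pos, forward, right, up } as 3-element arrays (unit vectors);
// fovY: vertical field of view in radians; aspect: image width/height.
function imagePointToGround(px, py, cam, fovY, aspect) {
  var tanY = Math.tan(fovY / 2);
  var tanX = tanY * aspect;
  // Offsets from the image center, expressed in camera space.
  var sx = (px * 2 - 1) * tanX;
  var sy = (1 - py * 2) * tanY;
  // Ray direction = forward + sx * right + sy * up.
  var dir = [
    cam.forward[0] + sx * cam.right[0] + sy * cam.up[0],
    cam.forward[1] + sx * cam.right[1] + sy * cam.up[1],
    cam.forward[2] + sx * cam.right[2] + sy * cam.up[2]
  ];
  // Intersect the ray with the ground plane y = 0.
  var t = -cam.pos[1] / dir[1];
  if (t < 0) return null; // the ray never reaches the ground
  return [cam.pos[0] + t * dir[0], 0, cam.pos[2] + t * dir[2]];
}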

How can I turn an image file of a game map into boundaries in my program?

I have an image of a basic game map. Think of it as just horizontal and vertical walls which can't be crossed. How can I easily go from a PNG image of the walls to something in code?
The hard way is pretty straightforward... it's just that if I change the image map I would like an easy way to translate that into code.
Thanks!
Edit: The map is not tile-based. It's top-down 2D.
I dabble in video games, and I personally would not want the hassle of checking the boundaries of pictures on the map. Wouldn't it be cleaner if these walls were objects that just happened to have an image property (or something like it)? The image would display, but the object would have well defined coordinates and a function could decide whether an object was hit every time the player moved.
I need more details.
Is your game tile-based? Is it 3D?
If it's tile-based, you could downsample your image to the tile resolution and then do a 1:1 conversion, with each pixel representing a tile.
I suggest writing a script that takes each individual pixel and determines whether it represents part of a wall or not (i.e. black or white). Then code your game so that walls are built from individual little blocks, represented by the pixels. Shouldn't be TOO hard...
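A rough sketch of such a script in the browser, drawing the map image onto an off-screen canvas just to read its pixels back (the brightness threshold of 128 is an arbitrary choice for separating black from white):
// Build a 2D boolean grid of walls from a map image: dark pixels
// become walls, light pixels stay walkable.
function buildWallGrid(img) {
  var cv = document.createElement('canvas');
  cv.width = img.width;
  cv.height = img.height;
  var ctx = cv.getContext('2d');
  ctx.drawImage(img, 0, 0);
  var data = ctx.getImageData(0, 0, img.width, img.height).data;

  var walls = [];
  for (var y = 0; y < img.height; y++) {
    var row = [];
    for (var x = 0; x < img.width; x++) {
      var i = (y * img.width + x) * 4;                 // RGBA stride of 4
      var brightness = (data[i] + data[i + 1] + data[i + 2]) / 3;
      row.push(brightness < 128);                      // dark pixel => wall
    }
    walls.push(row);
  }
  return walls;
}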
If you don't need to precompute anything from the map info, you can just check at runtime using a getPixel(x, y)-like function.
Well, I can see a few cases, each with a different "best solution" depending on where your graphics come from:
1. Your graphics are tiled, so you can easily "recognize" a block because it uses the same graphics as the other blocks. All you would have to do is write a program that, given a list of "blocking tiles" and a map, produces a "collision map" by comparing each tile with the tiles in the blocking list.
2. Your graphics are just arbitrary artwork (e.g. a picture, or some CG rendering) and you don't expect the pixels of one block to be the same as the pixels of another block. You could still try to apply an edge-detection algorithm to your picture, but my guess is that you should rather split your picture into a background layer and a foreground layer, so that the foreground layer has a pre-defined color (or alpha = 0), and test pixels against that color to decide whether things are blocking or not.
3. You don't have many blocking shapes, but they are usually complex (polygons, ellipses) and would be inefficient to render as a bitmap of the world or to pack as "tile attributes". This is typically the case for point-and-click adventure games, for instance. In that case, you probably want to create paths that match your boundaries with a vector drawing program and dig for a library that does polygon intersection or Bezier collisions.
Good luck and have fun.