I wanted to see if it's possible to plot WNBA shot charts using BigQuery GIS. I've seen a lot of articles about BigQuery GIS for latitude/longitude data, and one post about using it with images, but I'm confused and not sure whether this is the right use case for BigQuery GIS.
I have the court dimensions and the shots taken this season from the WNBA's stats site. If possible, I'd love to see which zones different shots were taken in: plot out the whole court, then define "zones" within it so that for any point I can check which zone it falls in (i.e. low post, right wing, etc.). I have the points in half-court form, so I would only be plotting half a court.
I've converted the points to feet so they all fall within the boundaries of the half court. The corners of the half-court rectangle are (0, 0) [lower left], (50, 0) [lower right], (0, 47) [upper left], and (50, 47) [upper right].
The arc for the three-point line is a little more complicated, but I have that as well, along with the other zone dimensions.
You can use it this way, although it might not be the best use case for BigQuery GIS.
It seems like you're mapping court positions directly to coordinates. That's fine for GIS systems that work with planar geometries, but BigQuery GIS interprets coordinates as latitude/longitude on a sphere, and with ranges up to 50 degrees that introduces spherical distortion. So while BigQuery GIS usually gives you a more precise answer thanks to its spherical geography, here you'd get a less precise one.
If you do go this route, I suggest using smaller ranges, e.g. scaling everything down 100x so the coordinates fall in the range (0, 0) to (0.5, 0.47); at that scale the distortions should be tiny.
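As a rough sketch of what I mean (the 100x factor, the function names and the "left low post" box are purely illustrative assumptions), you could scale the shot and zone coordinates before loading them, e.g. in JavaScript:
// Scale half-court coordinates in feet (x: 0-50, y: 0-47) down by 100 so they
// land in a tiny lat/lng-like range, then emit WKT that BigQuery can load.
const SCALE = 100;

// A shot becomes a POINT in scaled "degrees".
function shotToWkt(xFeet, yFeet) {
  return `POINT(${xFeet / SCALE} ${yFeet / SCALE})`;
}

// A rectangular zone becomes a closed POLYGON ring in the same scaled units.
function zoneToWkt(x1, y1, x2, y2) {
  const [a, b, c, d] = [x1 / SCALE, y1 / SCALE, x2 / SCALE, y2 / SCALE];
  return `POLYGON((${a} ${b}, ${c} ${b}, ${c} ${d}, ${a} ${d}, ${a} ${b}))`;
}

console.log(shotToWkt(38, 20));       // POINT(0.38 0.2)
console.log(zoneToWkt(0, 0, 19, 19)); // a hypothetical "left low post" box
Once both are loaded as GEOGRAPHY values (e.g. via ST_GEOGFROMTEXT), ST_CONTAINS(zone, shot) tells you which zone each shot falls in.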
I've been seeing a lot of canvas-graphics-related JavaScript projects and libraries lately and was wondering how they handle the coordinate system. When drawing shapes and vectors on the canvas, are the points calculated on a Cartesian plane and then converted for the canvas, or is everything calculated directly in canvas coordinates?
I tried playing around with drawing a circle by graphing all its tangent lines until the line intersections start to resemble a curve, and found the difference between the Cartesian planes I'm familiar with and the coordinate system used by web browsers very confusing. The equation of a circle, for example, "y^2 + x^2 = r^2", would need to be translated to "(y-1)^2 + (x-1)^2 = r^2" to be seen on the canvas. And then negative slopes were positive slopes on the canvas and everything would be upside down :/ .
The easiest way for me to think about it was to pretend the origin of a Cartesian plane was in the center of the canvas and adjust my coordinates accordingly. On a 500 x 500 canvas, the center would be at (250, 250), so if I ended up with a point at (50, 50), it would be drawn at (250 + 50, 250 - 50) = (300, 200).
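In code, my workaround looks roughly like this (the helper name is just mine, for illustration):
// Treat the middle of a 500 x 500 canvas as the origin and flip y, so the
// math I'm used to comes out the right way up.
function toCanvas(x, y, width = 500, height = 500) {
  return { x: width / 2 + x, y: height / 2 - y };
}

console.log(toCanvas(50, 50)); // { x: 300, y: 200 }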
I get the feeling I'm over-complicating this, but I can't wrap my mind around the clean way to work with a canvas.
Current practice can perhaps be exemplified by a quote from David Flanagan's book "JavaScript: The Definitive Guide", which says that
"Certain canvas operations and attributes (such as extracting raw pixel values and setting shadow offsets) always use this default coordinate system"
(the default coordinate system is that of the canvas). And it continues with
"In most canvas operations, when you specify the coordinates of a point, it is taken to be a point in the current coordinate system [that's, for example, the Cartesian plane you mentioned, @Walkerneo], not in the default coordinate system."
Why is using a "current coordinate system" more useful than using the canvas coordinate system directly?
First and foremost, I believe, because it is independent of the canvas itself, which is tied to the screen (more specifically, the default coordinate system dimensions are expressed in pixels). Using for instance a Cartesian (orthogonal) coordinate system makes it easy for you (well, for me too, obviously :-D ) to specify your drawing in terms of what you want to draw, leaving the task of how to draw it to the transformations offered by the Canvas API. In particular, you can express dimensions in the natural units of your drawing, and perform a scale and a translation to fit (or not, as the case may be...) your drawing to the canvas.
Furthermore, using transformations is often a clearer way to build your drawing, since it allows you to get "farther" from the underlying coordinate system and specify your drawing in terms of higher-level operations ('scale', 'rotate', 'translate' and the more general 'transform'). The above-mentioned book gives a very nice example of the power of this approach, drawing a Koch (fractal) snowflake in many fewer lines than would be possible (if at all) using canvas coordinates.
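As a minimal sketch of what that buys you (the sizes, the repeat count and the 'canvas' element id are arbitrary assumptions on my part), the same square can be stamped at several rotations without ever computing a pixel coordinate by hand:
// Stamp one square at several rotations around the canvas centre; the square
// itself is always described in its own coordinates.
var ctx = document.getElementById('canvas').getContext('2d');

for (var i = 0; i < 6; i++) {
  ctx.save();
  ctx.translate(250, 250);              // move the origin to the centre
  ctx.rotate(i * Math.PI / 6);          // rotate the coordinate system
  ctx.strokeRect(-100, -100, 200, 200); // the square never changes
  ctx.restore();                        // back to the default system
}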
The HTML5 canvas, like most graphics systems, uses a coordinate system where (0,0) is in the top left and the x-axis and y-axis go from left to right and top down respectively. This makes sense if you think about how you would create a graphics system with nothing but a block of memory: the simplest way to map coordinates (x,y) to a memory slot is to take x+w*y, where w is the width of a line.
This means that the canvas coordinate system differs from what you use in mathematics in two ways: (0,0) is not the center like it usually is, and y grows down rather than up. The last part is what makes your figures upside down.
You can set transformations on the canvas that make the coordinate system more like what you are used to:
var ctx = document.getElementById('canvas').getContext('2d');
ctx.translate(250,250); // Move (0,0) to (250, 250)
ctx.scale(1,-1); // Make y grow up rather than down
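With that in place, you can draw using the coordinates from your question as-is; continuing the snippet above (the radius is arbitrary):
// After the translate/scale above, the origin is at the canvas centre and y
// grows up, so a circle around (0, 0) comes straight from x^2 + y^2 = r^2.
var r = 100;
ctx.beginPath();
ctx.arc(0, 0, r, 0, 2 * Math.PI); // no manual flipping or shifting needed
ctx.stroke();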
This is my first time asking a question on Stack Exchange; hopefully this is the right place.
I can't seem to develop a close enough approximation algorithm for my situation, as I'm not exactly the best at 3D math.
I have a 3D environment in which I can access the position and rotation of any object, including my camera, and I can run trace lines between any two points to get the distance from a point to a point of collision. I also have my camera's field of view. However, I do not have any form of access to the world/view/projection matrices.
I also have a collection of 2D images that are basically screenshots of the 3D environment from the camera. Each collection is taken from the same point and angle, typically at about a 60-degree angle down from the horizon.
I have been able to get to the point of using "registration point entities" placed in the 3D world that represent the corners of the 2D image. When a point is picked on the 2D image, it is read as a coordinate in the range 0-1, which is then interpolated between the 3D positions of the registration points. This works well, but only if the image is a perfect top-down shot. When the camera is tilted and another dimension of perspective is introduced, the results become grossly inaccurate, because there is no compensation for that perspective.
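For reference, the interpolation I'm doing is roughly the following (the helper names are just for illustration):
// Bilinear interpolation between the four registration points, using the
// picked image coordinate (u, v) in the 0-1 range. Fine for a straight
// top-down shot, but it ignores perspective once the camera is tilted.
function lerp(a, b, t) {
  return { x: a.x + (b.x - a.x) * t,
           y: a.y + (b.y - a.y) * t,
           z: a.z + (b.z - a.z) * t };
}

function imagePointToWorld(u, v, p00, p10, p01, p11) {
  var bottom = lerp(p00, p10, u); // along the bottom edge of the image
  var top    = lerp(p01, p11, u); // along the top edge of the image
  return lerp(bottom, top, v);    // then between the two edges
}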
I don't need to calculate the height of a point, say a window on a skyscraper. At minimum I need the coordinate at the base of the image plane, or, if I extend a line out from a specified image-space point, the point where that line would intersect the ground if nothing were in the way.
All of the material I found about this says to just deproject the point using the world/view/projection matrices, which seems straightforward in itself, except that I don't have access to those matrices, only data I can collect at screenshot time, and the other algorithms I've found use complex maths I simply don't grasp yet.
One end goal of this would be to place markers in the 3D environment where a user clicks on the image, without being able to run a simple deprojection from the user's view.
Any help would be appreciated, thanks.
Edit: Herp derp, while my implementation for doing so is a bit odd due to the limitations of my situation, the solution essentially boiled down to ananthonline's answer about simply recalculating the view/projection matrices.
Between position, rotation and FOV of the camera, could you not calculate the View/Projection matrices of the camera (songho.ca/opengl/gl_projectionmatrix.html) - thus allowing you to unproject known 3D points?
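As a rough sketch of the projection half of that, following the conventions on the linked page (the function name, parameter choices and row-major layout are my own assumptions), something like this can be rebuilt from the FOV alone:
// Build an OpenGL-style perspective projection matrix from the vertical field
// of view, aspect ratio and near/far planes. Each inner array is one row in
// ordinary math notation.
function perspective(fovYRadians, aspect, near, far) {
  var f = 1 / Math.tan(fovYRadians / 2);
  return [
    [f / aspect, 0,  0,                           0],
    [0,          f,  0,                           0],
    [0,          0,  (far + near) / (near - far), (2 * far * near) / (near - far)],
    [0,          0, -1,                           0]
  ];
}

// Example: 60-degree FOV, 4:3 screenshots, near/far picked to taste.
var proj = perspective(Math.PI / 3, 4 / 3, 0.1, 1000);
The view matrix is then just the inverse of the camera's world transform (translate by the negated camera position, then apply the transposed rotation), and with both matrices you can unproject image points back into world-space rays.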
Given a particular zoom level, how accurate is the scale provided by the satellite view in Google Maps?
Can one use it to reasonably accurately determine the square footage of a given building in the picture?
Thanks.
The imagery is very accurate, and at the finest zoom levels (19 or 20) you will be able to perform area calculations with great precision. The location information in Google Maps would definitely be more accurate than trying to get readings with a handheld GPS device (there are some apps out there that let you walk around a perimeter setting waypoints and then calculate the internal area from those waypoints).
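The waypoint approach those apps use boils down to a polygon-area calculation; here is a rough planar sketch (real tools work on projected or geodesic coordinates, but for a building-sized footprint the planar version is close once you convert to metres or feet):
// Shoelace formula: area of a simple polygon from its corner points, given in
// consistent planar units.
function polygonArea(points) { // points: [{ x, y }, ...]
  var sum = 0;
  for (var i = 0; i < points.length; i++) {
    var a = points[i];
    var b = points[(i + 1) % points.length]; // wrap around to close the ring
    sum += a.x * b.y - b.x * a.y;
  }
  return Math.abs(sum) / 2;
}

// Example: a 30 x 40 ft rectangle -> 1200 square feet.
console.log(polygonArea([{x:0,y:0}, {x:30,y:0}, {x:30,y:40}, {x:0,y:40}]));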
Here is a relatively painless utility that demonstrates this:
http://www.daftlogic.com/projects-google-maps-area-calculator-tool.htm
One issue if you are trying to calculate square footage using the imagery, however, would be determining the number of stories.
Not sure about the accuracy. At the 200 ft. zoom level I superimposed the scale over Rice Stadium in Houston and it shows the playing field as a little over 200 ft. long and 50 meters wide. That means the width is about right but the length is way off since the standard football field is 300 ft. long. Probably has something to do with the angle of the photo. If the satellite is directly overhead it's probably more accurate. Just a thought.
The graphic scale is not consistent as you zoom in and out.
I placed two zoomed images into a CAD program and sized them by measuring the graphic scale. I got two differently sized maps.
I have a camera at coordinates (x, y) and height h that is looking onto the x-y plane at a specific angle, with a specific field of view. I want to calculate the four corners of the area the camera can see on the plane.
There is probably some kind of formula for that, but I can't seem to find it on Google.
Edit: I should probably mention that I mean a camera in the 3D-Graphics sense. Specifically I'm using XNA.
I've had to do similar things for debugging graphics code in 3D games. I found the easiest way of thinking about it was creating the vectors representing the corners of the screen and then calculating the intersection with whatever relevant objects (in this case, a plane).
Take a look at your view-projection matrix (or whatever your camera's matrix stack looks like multiplied together) and note that the screen space it outputs to has corners with homogenized coordinates of (-1, -1), (-1, 1), (1, -1), (1, 1). Knowing this, you're left with one free variable and can solve for the vector representing each corner of the view.
Although, this is a pain. It's much easier to construct a vector for each corner as if the camera weren't rotated or translated, and then transform them by the camera's world matrix (the inverse of the view matrix) to get world-space vectors. Then you can calculate the intersection between the vectors and the plane to get your four corners.
I have a day job, so I leave the math to you. Some links that may help with that, however:
Angle/Field of view calculations
Line plane intersection
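A minimal sketch of that second approach in plain JavaScript (not XNA; the vector helpers and names are just for illustration, so adapt them to your engine's types):
// Tiny vector helpers.
var dot = function (a, b) { return a.x * b.x + a.y * b.y + a.z * b.z; };
var sub = function (a, b) { return { x: a.x - b.x, y: a.y - b.y, z: a.z - b.z }; };
var add = function (a, b) { return { x: a.x + b.x, y: a.y + b.y, z: a.z + b.z }; };
var mul = function (a, s) { return { x: a.x * s, y: a.y * s, z: a.z * s }; };

// Directions to the four screen corners for an un-rotated camera looking down -z,
// built from the vertical field of view and the aspect ratio.
function cornerDirections(fovYRadians, aspect) {
  var halfH = Math.tan(fovYRadians / 2); // half image-plane height at z = -1
  var halfW = halfH * aspect;
  return [
    { x: -halfW, y: -halfH, z: -1 },
    { x: -halfW, y:  halfH, z: -1 },
    { x:  halfW, y: -halfH, z: -1 },
    { x:  halfW, y:  halfH, z: -1 }
  ];
}

// Ray/plane intersection: the plane is given by any point on it and its normal.
// Returns the hit point, or null if the ray is parallel or points away.
function intersectPlane(origin, dir, planePoint, planeNormal) {
  var denom = dot(dir, planeNormal);
  if (Math.abs(denom) < 1e-9) return null;
  var t = dot(sub(planePoint, origin), planeNormal) / denom;
  return t < 0 ? null : add(origin, mul(dir, t));
}
Rotate each corner direction by the camera's orientation, then call intersectPlane with the camera position as the ray origin; the four hit points are the corners you're after.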
Ignoring lens distortions, and assuming the sensor sits roughly at the focal distance, you just have one triangle formed by the sensor size and the lens, and another from the lens out to the subject; similar triangles give you the size of the subject plane.
If you want a tilted object plane, that's just a projection onto the perpendicular object plane.
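As a quick sketch of the similar-triangles step (the numbers are arbitrary and purely illustrative):
// Similar triangles: the visible width of the subject plane scales with
// distance in the same ratio as sensor width to focal length. Any consistent
// units work.
function visibleWidth(sensorWidth, focalLength, distance) {
  return sensorWidth * distance / focalLength;
}

// Example: a 36 mm sensor behind a 50 mm lens, subject 10 m (10000 mm) away.
console.log(visibleWidth(36, 50, 10000)); // 7200 mm, about 7.2 m across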
I have a set of coordinate data from a 3rd-party provider. However, when I plot those coordinates on Google Maps with annotations, the annotated points are not exactly where they should be. For example, some points should sit on the road, but they are placed slightly off it.
My question is, how to solve this kind of discrepancy?
Thanks!
Coordinates (lat and long), by themselves, do not describe a position on the Earth. You need a third piece of information, called the datum. The datum for Google Maps is WGS84. The datum establishes such things as where (0, 0) is on the Earth's surface.
If you've received coordinates, and those coordinates are based on a different datum, then they will not plot correctly on Google Maps.
On the other hand, if the points came from any kind of mobile device (even if it is using WGS84), there are inherent inaccuracies in such measurements (thankfully generally down to < 5m for GPS these days, I believe) that mean that they will not align 100%.