Which is right: spherical area or Cartesian area?

I am asking a theoretical question here. I was comparing the areas calculated by MapInfo with those calculated by ArcGIS, and I always had differences. When Googling this, I ended up at a link explaining the differences:
Area calculation MapInfo/FME. Basically, the default method for MapInfo is spherical, while the default method for ArcGIS is Cartesian. When I changed the default parameter in each GIS, I got the results of the other GIS, so both are correct.
But now, which result is better, or I should rather say "more right": spherical or Cartesian?
Thanks

In simple terms, spherical area takes account of the curvature of the Earth's surface, whereas Cartesian measurements are calculated as if the objects were on a flat plane. Generally speaking, this means that spherical measurements should be more accurate.
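To make the distinction concrete, here is a minimal sketch using turf.js, whose turf.area is geodesic, alongside a plain shoelace computation on the raw longitude/latitude pairs, which is essentially what a Cartesian area does on unprojected data. The polygon is arbitrary and the shoelace result comes out in "square degrees", so it is only illustrative:

```js
// turf.area accounts for curvature (result in square metres);
// the shoelace formula below treats lon/lat as flat x/y.
const turf = require("@turf/turf");

const poly = turf.polygon([[
  [-5, 52], [-4, 52], [-4, 53], [-5, 53], [-5, 52],
]]);

console.log(turf.area(poly)); // geodesic ("spherical") area, m^2

function planarArea(ring) {
  // Shoelace formula on raw coordinates; in practice you would
  // project to a planar CRS first, so this is illustrative only.
  let sum = 0;
  for (let i = 0; i < ring.length - 1; i++) {
    sum += ring[i][0] * ring[i + 1][1] - ring[i + 1][0] * ring[i][1];
  }
  return Math.abs(sum / 2);
}
console.log(planarArea(poly.geometry.coordinates[0])); // "Cartesian" area
```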


Plotting lineArcs with turf.js that don't match up with their surrounding geodesic strings

Background
We are supplied with some AIXM data (an XML-based superset of GML) which describes polygon areas on a map as a mix of GeodesicStrings (a list of coordinates) and ArcByCentrePoints (a centre point coordinate with a radius, start bearing and end bearing). We are taking this data and converting it into a simple list of coordinates that we then display using a Google Maps polyline.
Problem
When we plot a shape with an arc, the start and end points of the arc usually don't match up with the end point of the preceding line and the start point of the subsequent line. It looks as if the radial distance is out by an amount which doesn't appear to be proportional to the radius. See screenshot: interestingly the smaller arc at the top seems fine but the larger arc is inset.
We're pretty sure the data is correct because it looks fine when we use a third party tool to visualise it, so we're doing something wrong.
Implementation
We are using the turf.js library to convert the arc description into a set of points using their lineArc function. Internally this utilises their destination function which "uses the Haversine formula to account for global curvature". We combine these generated points, in the correct sequence, with the points taken directly from the preceding and subsequent GeodesicString elements to give us our final polygon.
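For context, here is a minimal sketch of that conversion step. The arc's field names (centre, radiusNm, startBearing, endBearing) are invented for illustration, but turf.lineArc and turf.getCoords are the real @turf/turf calls:

```js
const turf = require("@turf/turf");

// Convert one parsed ArcByCentrePoint into a list of [lon, lat] points
// that can be spliced between the GeodesicString coordinates.
function arcToPoints(arc) {
  const center = turf.point([arc.centre.lon, arc.centre.lat]);
  const radiusKm = arc.radiusNm * 1.852; // turf defaults to kilometres
  const line = turf.lineArc(center, radiusKm, arc.startBearing, arc.endBearing);
  return turf.getCoords(line); // [[lon, lat], ...]
}
```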
Data
Input: Fragment of AIXM (GML) describing polygon
Output: Resulting list of points
Help!
I'm aware this question is light on code but I hope I've described the problem adequately and that some kind person with more GIS knowledge than me (>0) might be able to point me in the right direction. Thanks :)
I've given a couple of presentations on debugging and one of the things I say is that you should keep an open mind and shouldn't get too fixated on a possible cause of a bug because you can waste a lot of time tracking down a false lead.
Sadly, in this case I didn't take my own advice. I was so obsessed with the idea that the problem arose from a complex cause, such as issues with the implementation of the Haversine formula, that I overlooked the far simpler answer. My code was taking a string representation of the radius, including the units (e.g. nautical miles or metres), and converting it into kilometres. Sadly, I was using parseInt rather than parseFloat as part of this, and so instantly losing precision. It was as simple as that - a schoolboy error.
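The bug in miniature (the radiusToKm helper and its unit handling are a simplified reconstruction, not the actual code):

```js
parseInt("2.5", 10); // 2   -- truncates at the decimal point
parseFloat("2.5");   // 2.5

// Roughly what the fix looked like:
function radiusToKm(value, unit) {
  const r = parseFloat(value); // was parseInt, silently losing precision
  return unit === "NM" ? r * 1.852 : r / 1000; // nautical miles or metres
}
```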
Big thanks to Stefano Borghi, a maintainer of Turf JS, for all his help with this and for helping me see the wood for the trees.

What is a coordinate reference system in GIS? How is it different from a projection system?

We hear a lot about CRS in GIS. I am working with QGIS, and whenever I add a layer I need to specify the CRS, but what I am confused about is what exactly a CRS is and how it differs from a projection system. Why do countries have their own CRS, and how is it determined?
If I understood the question correctly, most APIs use the WGS84 format to specify geographical coordinates, which is briefly explained in this Wikipedia article. Basically, the coordinates are polar coordinates referring to an ellipsoid whose centre is located at the Earth's centre of gravity.
The Earth is a sphere, and not a completely round one. However, we would still like to treat it as if it were flat, to make proper maps and take measurements.
For example, using WGS84 on a map of Norway would make it look horribly distorted.
That is why different regions have their own projections for their own cartographic needs.
I found a good definition of a CRS here. For all intents and purposes, a projected CRS is the same as its projection.
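As a small illustration of the difference in practice, here is a sketch using proj4js to express one WGS84 point in a projected CRS. EPSG:32633 (UTM zone 33N, which covers much of southern Norway) is just an example choice:

```js
const proj4 = require("proj4");

// Define the target projected CRS (UTM zone 33N on the WGS84 datum).
proj4.defs("EPSG:32633", "+proj=utm +zone=33 +datum=WGS84 +units=m +no_defs");

const lonLat = [10.75, 59.91]; // roughly Oslo, in WGS84 degrees
const eastNorth = proj4("EPSG:4326", "EPSG:32633", lonLat);
console.log(eastNorth); // the same point as metres east/north
```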

Heat map visualization for discrete values on Google Maps

I'm working on the following scenario: I have a geographical location and I need to create a heat-map visualization of travel times (by car) from that location to anywhere around it. I'm planning on using the Google Distance Matrix API to get travel durations. But since it has a limit on the number of API calls, I need to somehow limit the calls.
My plan, so far, is the following: compute the travel duration (basically a numeric value) to a set of points evenly distributed on a grid around the given position (e.g. 0.5 km east; 0.5 km east, 0.5 km north; 0.5 km east, 1 km north; etc.). These points would represent the centres of square-shaped areas, and I will treat the travel duration to the centre as the travel duration to anywhere in that area. These areas would then be displayed as coloured squares on a Google Map, in a heat-map style.
A good example of something similar is this: http://project.wnyc.org/transit-time/#40.72280,-73.95464,12,709
So, my questions are:
Does it seem like a good strategy?
Is there a better visualisation strategy for something like this?
How can I create those square-shaped colored areas on Google Maps?
Thanks!
Calculating duration would surely involve traffic flow rather than simply distance. If your calculations are based purely on distance, you could use Google Maps directions requests to calculate the distance to each point.
I'm not sure a heat map is the way forward for this scenario.
There are a number of ways you could achieve this. Here are a few:
a. Use a custom overlay
(https://developers.google.com/maps/documentation/javascript/examples/overlay-simple)
b. Draw polygons on the map and give them different colours based on
the journey duration. This would involve taking the area in question and slicing it up into polygons however you need to. These polygons could take the same shape as your example (a rough sketch follows after this list). You would need to be rather precise with your latlngs. SQL's spatial queries would help you here, depending on the tech you're using. (https://developers.google.com/maps/documentation/javascript/examples/polygon-arrays)
c. Depending on how specific you wanted to be, you could draw circles with different radius values and different colours.
d. You could make custom markers in the shapes you require and add them to the map at the correct latlng to fill an area. You could have different markers for different durations and add them accordingly.
I'm sure there are other options as well.
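As a rough sketch of options (a) and (b) for square cells, here is one way to draw a coloured grid cell with google.maps.Rectangle. The 0.5 km cell size and the durationToColor ramp are made up for illustration:

```js
// Draw one grid cell on the map, coloured by travel duration (minutes).
function addCell(map, centerLat, centerLng, durationMin) {
  const halfKm = 0.25; // half of a 0.5 km square
  const dLat = halfKm / 111.32; // ~111.32 km per degree of latitude
  const dLng = halfKm / (111.32 * Math.cos(centerLat * Math.PI / 180));
  new google.maps.Rectangle({
    map: map,
    bounds: {
      north: centerLat + dLat, south: centerLat - dLat,
      east: centerLng + dLng, west: centerLng - dLng,
    },
    fillColor: durationToColor(durationMin),
    fillOpacity: 0.5,
    strokeWeight: 0,
  });
}

function durationToColor(minutes) {
  // Crude three-bucket ramp, purely illustrative.
  return minutes < 15 ? "#00cc00" : minutes < 30 ? "#ffcc00" : "#cc0000";
}
```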

How to calculate the angle between three points based on Latitude/Longitude

Below is a map captured from Google Maps. I want to calculate the angle ABC. I have the coordinates (latitude/longitude) of the three points.
Is there an approach that would solve my problem?
Thanks
You can find the heading between any two points using the google.maps.geometry.spherical.computeHeading method of the Google Maps JavaScript API v3:
computeHeading(from: LatLng, to: LatLng) returns a number: the heading from one LatLng to another LatLng. Headings are expressed in degrees clockwise from north within the range [-180, 180).
The angle between the two will be the difference between the two headings.
Example using computeHeading in this answer
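A minimal sketch of that calculation, assuming the geometry library is loaded and a, b, c are google.maps.LatLng objects with b as the vertex:

```js
function angleAtB(a, b, c) {
  const spherical = google.maps.geometry.spherical;
  const headingToA = spherical.computeHeading(b, a); // degrees, [-180, 180)
  const headingToC = spherical.computeHeading(b, c);
  let angle = Math.abs(headingToA - headingToC);
  return angle > 180 ? 360 - angle : angle; // interior angle, 0..180
}
```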
You can approximate the angle using the law of cosines. I say approximate because the curvature of the Earth is going to have some non-zero effect on the calculation.
In your example it should suffice to calculate the distances between the points and then perform the appropriate manipulations on the law of cosines. Refer to the second formula in the applications of the law of cosines wiki article and the corresponding picture.
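A sketch of that approach, using turf.js for the great-circle distances (any distance function would do); the clamp guards against floating-point values straying just outside [-1, 1]:

```js
const turf = require("@turf/turf");

// Angle at B via the law of cosines:
// cos(B) = (AB^2 + BC^2 - AC^2) / (2 * AB * BC)
function angleAtBviaCosines(a, b, c) { // each point as [lon, lat]
  const ab = turf.distance(a, b); // kilometres, great-circle
  const bc = turf.distance(b, c);
  const ac = turf.distance(a, c);
  const cosB = (ab * ab + bc * bc - ac * ac) / (2 * ab * bc);
  return (Math.acos(Math.min(1, Math.max(-1, cosB))) * 180) / Math.PI;
}
```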

Calculate 3D coordinates from 2D Image plane accounting for perspective without direct access to view/projection matrix

First time asking a question on the stack exchange, hopefully this is the right place.
I can't seem to develop a close enough approximation algorithm for my situation as I'm not exactly the best in terms of 3D math.
I have a 3d environment in which I can access the position and rotation of any object, including my camera, as well as run trace lines from any two points to get distances between a point and a point of collision. I also have my camera's field of view. I do not have any form of access to the world/view/projection matrices however.
I also have a collection of 2d images that are basically a set of screenshots of the 3d environment from the camera. Each collection is from the same point and angle, and a typical set is taken at roughly a 60 degree angle down from the horizon.
I have been able to get to the point of using "registration point entities" that can be placed in the 3d world to represent the corners of the 2d image; when a point is picked on the 2d image it is read as a coordinate in the range 0-1, which is then interpolated between the 3d positions of the registration points. This seems to work well, but only if the image is from a perfect top-down angle. When the camera is tilted and another dimension of perspective is introduced, the results become increasingly inaccurate, as there is no compensation for this perspective.
I don't need to be able to calculate the height of a point, say a window on a skyscraper. At a minimum, I need the coordinate at the base of the image plane; or, if I extend a line out from a specified image-space point, the point at which that line would intersect the ground if nothing were in the way.
All of the material I found about this says to just deproject the point using the world/view/projection matrices, which I find straightforward in itself, except that I don't have access to these matrices, just data I can collect at screenshot time, and the other algorithms use complex maths I simply don't grasp yet.
One end goal of this would be able to place markers in the 3d environment where a user clicks in the image, while not being able to run a simple deprojection from the user's view.
Any help would be appreciated, thanks.
Edit: Herp derp. While my implementation is a bit odd due to the limitations of my situation, the solution essentially boiled down to ananthonline's answer about simply recalculating the view/projection matrices.
Between the position, rotation and FOV of the camera, could you not calculate the view/projection matrices of the camera (songho.ca/opengl/gl_projectionmatrix.html), thus allowing you to unproject known 3D points?
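To sketch what that could look like in plain JavaScript: rebuild a world-space ray through an image point from the camera's position, rotation and vertical FOV, then intersect it with a ground plane. The axis conventions, rotation order (pitch about x, then yaw about y) and the y = 0 ground plane are all assumptions to adapt to the actual engine:

```js
// camera = { pos: [x, y, z], yawDeg, pitchDeg, fovYDeg, aspect }
// (u, v) in [0, 1] across the image, origin at the top-left (assumed).
function rayThroughImagePoint(camera, u, v) {
  const toRad = Math.PI / 180;
  const ndcX = u * 2 - 1;
  const ndcY = 1 - v * 2;

  // Frustum half-extents at distance 1 (camera looks down -z: an assumption).
  const tanHalfFovY = Math.tan((camera.fovYDeg * toRad) / 2);
  let d = [ndcX * tanHalfFovY * camera.aspect, ndcY * tanHalfFovY, -1];

  // Rotate into world space: pitch about x, then yaw about y (assumed order).
  const p = camera.pitchDeg * toRad, yw = camera.yawDeg * toRad;
  const cp = Math.cos(p), sp = Math.sin(p);
  d = [d[0], d[1] * cp - d[2] * sp, d[1] * sp + d[2] * cp];
  const cy = Math.cos(yw), sy = Math.sin(yw);
  d = [d[0] * cy + d[2] * sy, d[1], -d[0] * sy + d[2] * cy];

  return { origin: camera.pos, dir: d };
}

// Intersect the ray with the ground plane y = 0 (assumption).
function intersectGround(ray) {
  if (Math.abs(ray.dir[1]) < 1e-9) return null; // parallel to the ground
  const t = -ray.origin[1] / ray.dir[1];
  if (t < 0) return null; // ground is behind the camera
  return [
    ray.origin[0] + t * ray.dir[0],
    0,
    ray.origin[2] + t * ray.dir[2],
  ];
}
```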