Google Heatmap - Visualizing data when there is a wide variance in weights assigned

We are creating a Google Map with a heatmap layer. We are trying to show the energy use of ~1300 companies spread out over the United States. For each of the companies we have their lat/long and energy use in kWh. Our plan is to weight the companies on the heatmap by their kWh use. We have been able to produce the map with the heatmap layer; however, because we have such a huge variance in energy use (ranging from thousands to billions of kWh), the companies using smaller amounts of energy are not showing up at all. Even when you zoom in on their location, you can't see any coloring on the map.
Is there a way to have all companies show up in the heatmap, no matter how small their energy use is? We have tried setting maxIntensity, but still have some of the smaller companies not showing up. We are also concerned about setting maxIntensity too low, since we would then be treating a company using 50 million kWh the same as one using 3 billion kWh. Is there any way to set a minIntensity? Or to have some coloring visible on the map for all the companies?

Heatmap layers accept a gradient property, expecting an array of colors as its value. These colors are always mapped linearly against your sample, starting from zero. Also, the first color (let's say, gradient[0]) should be transparent, since it's supposed to map zeroes or nulls. If you give a non-transparent color to the first gradient point, the whole world will have that color.
This means that if, for example, you supply a gradient of 20 points, all points weighing less than 1/20th of the maximum will show as an interpolation between gradient[0] (transparent) and gradient[1] (the first non-transparent color in your gradient). This will result in semi-transparent data points for non-normalized samples.
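For reference, here is a minimal sketch of how a gradient is supplied; the colors are only an example, but note the transparent first entry:

    var heatmap = new google.maps.visualization.HeatmapLayer({
      data: weightedPoints,   // assumed: an array of {location, weight} objects
      map: map
    });
    // gradient[0] must be transparent, or the whole map gets tinted
    heatmap.set('gradient', [
      'rgba(0, 255, 255, 0)',   // maps zero/null intensity
      'rgba(0, 255, 255, 1)',
      'rgba(0, 127, 255, 1)',
      'rgba(255, 0, 0, 1)'
    ]);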
If you need to somehow flatten your values universe, you'll have to feed the heatmap precomputed values. For example, log(kWh) gives a much flatter curve to represent.
Another workaround would be to offset every value by a fraction of the maximum (for example, 10% of the maximum), so the minimum is displaced from zero by at least one color interval.
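A rough sketch of both workarounds, assuming companies is an array of {lat, lng, kWh} objects:

    // Workaround 1: flatten the distribution with a log transform.
    var logWeighted = companies.map(function (c) {
      return {
        location: new google.maps.LatLng(c.lat, c.lng),
        weight: Math.log(c.kWh)   // thousands..billions -> about 7 to 22
      };
    });

    // Workaround 2: offset every weight by a fraction of the maximum,
    // so the smallest company sits at least one color interval above zero.
    var maxKWh = Math.max.apply(null, companies.map(function (c) { return c.kWh; }));
    var offsetWeighted = companies.map(function (c) {
      return {
        location: new google.maps.LatLng(c.lat, c.lng),
        weight: c.kWh + 0.1 * maxKWh
      };
    });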

Related

google javascript maps api - heatmap too faint

I am using the Geocoding and Maps APIs to build a heatmap from a lot of entries (12,000+; these will be filtered down to roughly 500-600 per map), currently using a random 500-entry dataset from these.
The problem is that some of these addresses haven't geocoded correctly (e.g. they show up miles away, sometimes not even on the same continent). This is not an issue on its own (I'm happy for these to be ignored and left in oblivion); however, they are drastically reducing the visibility of the map, so that when zoomed in, even with the opacity set as high as possible, the points are a barely visible pinprick on the map.
Is there a simple way of just stopping these few erroneous entries from interfering or will I have to weed them out?
Below is a sample of how it looks...
Compared to how I'd like it to look (a different data set created previously in Fusion Tables)...
(These are the same zoom levels on Google Maps; the top one is just cropped more to show the difference.)
For anyone else who comes across this...
maxIntensity: The maximum intensity of the heatmap. By default, heatmap colors are dynamically scaled according to the greatest concentration of points at any particular pixel on the map. This property allows you to specify a fixed maximum. Setting the maximum intensity can be helpful when your dataset contains a few outliers with an unusually high intensity.
I found that setting this on a sliding scale worked best, dynamically adjusting the map depending on the number of data points it contains.
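A minimal sketch of that idea; the divisor is just a tuning constant, not a recommended value:

    // Scale maxIntensity with the dataset size so a few stray
    // geocodes can't wash out the rest of the map.
    var heatmap = new google.maps.visualization.HeatmapLayer({
      data: points,   // assumed: the array of LatLngs being plotted
      map: map,
      maxIntensity: Math.max(1, points.length / 50)
    });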

heatmap layer colours based on value

I am using the JavaScript v3 API and I have a heatmap working with my data, displaying where I have collected certain information. I want to create different heatmap overlays based on the data.
So I have a load of mobile signal strength data that I am plotting on the map, and I want to show the good signal areas in green and the bad in red, but give areas with a mix of good and bad samples an orange/yellow overlay.
I have found the 'weight' option, but it seems to be based on the number of occurrences of samples rather than the value of those samples. Can anyone help?
You may transform the weight to get a result that is more differentiated based on the sample value, e.g. by using Math.pow:
weight:Math.pow(signalStrength, 2)
(Modify the exponent to get a result that fits your needs)
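In context, assuming samples is an array of {lat, lng, signalStrength} readings:

    var weighted = samples.map(function (s) {
      return {
        location: new google.maps.LatLng(s.lat, s.lng),
        // exaggerate the difference between strong and weak readings
        weight: Math.pow(s.signalStrength, 2)
      };
    });
    var heatmap = new google.maps.visualization.HeatmapLayer({
      data: weighted,
      map: map
    });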

Reducing flicker in a real-time graph?

I am rendering a scatter plot every 5 seconds where the X-axis denotes time and Y-axis denotes a set of names ordered alphabetically.
A set of data points (say, 'X's) can optionally be grouped into a category and so I use a color to show this. Therefore all 'X's with the same color belong to the same category and so on.
Problem: I have tens of thousands of 'Name's and they can appear randomly on the graph at some point in time. The real purpose is to provide the user with a graph that makes it possible to monitor these names. Every time I render the graph, I get the list of points to be rendered, and the underlying graph library, Flotr2, takes care of assigning colors to the sets of points. So if the dataset contains two categories of points, it assigns two colors, and if a point belonging to a new category arrives, it assigns a third color. What I am observing as a result is a flicker effect:
And when the point disappears, the colors revert to the ones before. Is there a good way to solve this problem? I have two specific problems:
Colors keep changing for every new point being added
A new point added somewhere shifts every other point vertically in either direction. For instance, if Category 2.5 is added, it ends up shifting Category 2 down and Category 1 up because the alphabetical order should be preserved.
In a highly dynamic scenario, such a graph tends to be useless because of how much it shifts visually. One obvious solution I can think of is to pre-allocate space for all possible points and categories in the graph, so that the appearance of a new point does not change anything else; it just draws a point somewhere. However, I am not sure whether this approach is practical for large data sets where the set of names and categories changes often.
Is there a good way to solve this problem? I am open to other graph types that mitigate this problem. In short, I want a real-time display that is capable of showing the appearance of new names on a time axis.
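One way to attack the first problem, assuming the plotting library lets you pin a color to each series rather than auto-assigning one, is to allocate a color the first time a category appears and never reassign it. A minimal sketch; palette and colorForCategory are illustrative names:

    var palette = ['#4bb2c5', '#eaa228', '#c5b47f', '#579575', '#953579'];
    var categoryColor = {};   // category name -> permanently assigned color
    var nextSlot = 0;

    function colorForCategory(category) {
      if (!(category in categoryColor)) {
        // First sighting: claim the next palette slot permanently, so
        // categories arriving later can no longer shift existing colors.
        categoryColor[category] = palette[nextSlot++ % palette.length];
      }
      return categoryColor[category];
    }
    // When building each series for the plot, attach
    // colorForCategory(name) instead of letting the library choose.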

Map shading based on distance

I have a google map which presents the distance from a particular location.
The map consists of a set of polygons, where a polygon encircles an area which is the same distance from the point. So, in other words, I colour a region which is between 0 and 5 minutes from the point in one colour, between 5 and 10 in another colour, and so on up to 120 minutes. This gives me 24 different colours.
What RGB colours would you recommend I use to give a nice contrast on my map? Perhaps there is a standard algorithm for this. Otherwise I can use a lookup table, since it's only 24 different colours.
Thanks,
Barry
One possibility is to choose a single colour and set the opacity so that the circles get progressively fainter as the radius increases. Whether this is a good solution may depend on the importance of the information.
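A sketch of that single-colour idea for a google.maps.Polygon, keeping fillColor fixed and varying fillOpacity per band (band 0 is the closest ring):

    // Opacity for band i of n: closest band most opaque, farthest faintest.
    function bandOpacity(i, n) {
      return 0.9 * (1 - i / n) + 0.05;
    }
    // e.g. the 5-10 minute ring, as band 1 of 24:
    var ring = new google.maps.Polygon({
      paths: ringPath,            // assumed: a precomputed polygon path
      fillColor: '#ff0000',
      fillOpacity: bandOpacity(1, 24),
      strokeWeight: 0,
      map: map
    });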
UPDATE: A better solution is to adopt the MySociety colours as in this map which shows travel times to London. They've done a lot of this and if you write to them, they'll almost certainly let you have the scales they use.

Effective data structure for overlapping spatial areas

I'm writing a game where a large number of objects will have "area effects" over a region of a tiled 2D map.
Required features:
Several of these area effects may overlap and affect the same tile
It must be possible to very efficiently access the list of effects for any given tile
The area effects can have arbitrary shapes but will usually be of the form "up to X tiles distance from the object causing the effect" where X is a small integer, typically 1-10
The area effects will change frequently, e.g. as objects are moved to different locations on the map
Maps could be potentially large (e.g. 1000*1000 tiles)
What data structure would work best for this?
Provided you really do have a lot of area effects happening simultaneously, and that they have arbitrary shapes, I'd do it this way:
When a new effect is created, it is stored in a global list of effects (not necessarily a global variable, just something that applies to the whole game or the current game-map).
It calculates which tiles it affects, and stores a list of those tiles against the effect.
Each of those tiles is notified of the new effect, and stores a reference back to it in a per-tile list (in C++ I'd use a std::vector for this, something with contiguous storage, not a linked list).
Ending an effect is handled by iterating through the interested tiles and removing references to it, before destroying it.
Moving an effect, or changing its shape, is handled by removing the references as above, performing the change calculations, then re-attaching references in the tiles now affected.
You should also have a debug-only invariant check that iterates through your entire map and verifies that the list of tiles in the effect exactly matches the tiles in the map that reference it.
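A skeleton of that cross-referenced structure; all names are illustrative, and the caller supplies the tile computation:

    var effects = [];   // global list of live effects
    var grid = [];      // grid[y][x].effects is the per-tile list
    for (var y = 0; y < 1000; y++) {
      grid[y] = [];
      for (var x = 0; x < 1000; x++) grid[y][x] = { effects: [] };
    }

    function attach(effect, tiles) {
      effect.tiles = tiles;   // tiles this effect currently covers
      tiles.forEach(function (t) { t.effects.push(effect); });
    }

    function detach(effect) {
      effect.tiles.forEach(function (t) {
        t.effects.splice(t.effects.indexOf(effect), 1);
      });
      effect.tiles = [];
    }

    function createEffect(computeTiles) {
      var e = { tiles: [] };
      effects.push(e);
      attach(e, computeTiles());   // caller decides which tiles are covered
      return e;
    }

    function moveEffect(e, computeTiles) {
      detach(e);                   // drop the old references...
      attach(e, computeTiles());   // ...then re-attach in the new tiles
    }

    function endEffect(e) {
      detach(e);
      effects.splice(effects.indexOf(e), 1);
    }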
Usually it depends on the density of your map.
If you know that every tile (or a major part of the tiles) contains at least one effect, you should use a regular grid – a simple 2D array of tiles.
If your map is sparsely filled and there are a lot of empty tiles, it makes sense to use a spatial index such as a quadtree, R-tree, or BSP tree.
Usually BSP-Trees (or quadtrees or octrees).
Some brute force solutions that don't rely on fancy computer science:
1000 x 1000 isn't too large - just a meg. Computers have gigs. You could have a 2D array. Each bit in the bytes could be a 'type of area'. The bigger 'affected area' could be another bit. If you have a reasonable number of different area types you can still use a multi-byte bit mask. If that gets ridiculous you can make the array elements pointers to lists of overlapping area-type objects, but then you lose efficiency.
You could also implement a sparse array, using a hashtable keyed off the coordinates (e.g., key = 1000*x + y), but this is many times slower.
Of course, if you don't mind coding the fancy computer science ways, they usually work much better!
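A sketch of the flat-array-with-bit-flags idea; the area types are made up, and note that a plain bit mask records presence, not how many overlapping sources set a bit:

    var W = 1000, H = 1000;
    var cells = new Uint8Array(W * H);   // one byte per tile = ~1 MB

    // Each bit is one area-effect type; a tile can carry several at once.
    var FIRE = 1, POISON = 2, SLOW = 4;

    function addEffect(x, y, type)    { cells[y * W + x] |= type; }
    function removeEffect(x, y, type) { cells[y * W + x] &= ~type; }
    function hasEffect(x, y, type)    { return (cells[y * W + x] & type) !== 0; }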
If you have a known maximum range for each area effect, you could store only the actual sources, in a data structure of your choosing that's optimized for normal 2D collision testing.
Then, when checking for effects on a tile, simply check (collision-detection style, optimized for your data structure) for all effect sources within the maximum range, and then apply a defined test function (for example, if the area is a circle, check whether the distance is less than a constant; if it's a square, check whether the x and y distances are each within a constant).
If you have a small number (<10) of effect 'field' shapes, you can even do a separate collision-detection pass for each effect field type, within its pre-computed maximum range.
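A sketch of that range-query-plus-shape-test approach, with a naive source list standing in for a real collision structure:

    var MAX_RANGE = 10;   // known maximum reach of any effect
    var sources = [];     // each: { x, y, shape: 'circle' or 'square', radius }

    function effectsOnTile(tx, ty) {
      return sources.filter(function (s) {
        // Broad phase: skip sources beyond the maximum possible range.
        var dx = Math.abs(s.x - tx), dy = Math.abs(s.y - ty);
        if (dx > MAX_RANGE || dy > MAX_RANGE) return false;
        // Narrow phase: the exact per-shape test.
        if (s.shape === 'circle') return dx * dx + dy * dy <= s.radius * s.radius;
        if (s.shape === 'square') return dx <= s.radius && dy <= s.radius;
        return false;
      });
    }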