I am rendering a scatter plot every 5 seconds where the X-axis denotes time and Y-axis denotes a set of names ordered alphabetically.
A set of data points (say, 'X's) can optionally be grouped into a category, and I use color to show this: all 'X's with the same color belong to the same category.
Problem: I have tens of thousands of names, and they can appear on the graph at any point in time. The real purpose is to give the user a graph with which they can monitor these names. Every time I render the graph, I fetch the list of points to be rendered, and the underlying graph library, Flotr2, takes care of assigning colors to the sets of points. So if the dataset contains two categories of points, it assigns two colors, and if a point belonging to a new category arrives, it assigns a third color. As a result, what I am observing is a flicker effect.
When the point disappears, the colors revert to what they were before. I have two specific problems:
Colors keep changing for every new point being added
A new point added somewhere shifts every other point vertically in either direction. For instance, if Category 2.5 is added, it ends up shifting Category 2 down and Category 1 up because the alphabetical order should be preserved.
In a highly dynamic scenario, such a graph tends to be useless because of how much it moves around visually. One obvious solution is to pre-allocate space for every possible point and category in the graph, so that the appearance of a new point changes nothing else and simply draws a point somewhere. However, I am not sure this approach is ideal for large data sets where the set of names and categories changes often.
Is there a good way to solve this problem? I am open to other graph types that mitigate this problem. In short, I want a real-time display that is capable of showing the appearance of new names on a time axis.
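One way to attack both problems is to keep the category-to-color and name-to-row assignments outside the render loop, so Flotr2 never has to choose them. Below is a minimal TypeScript sketch under the assumption that Flotr2 accepts a per-series color and label the way Flot does; palette, colorFor, yFor and buildSeries are my own names, not Flotr2 API.

    // Persistent state that survives across the 5-second render ticks.
    const palette = ["#4da74d", "#cb4b4b", "#9440ed", "#edc240", "#afd8f8"]; // any fixed palette
    const categoryColors = new Map<string, string>(); // category -> color, assigned once
    const nameToY = new Map<string, number>();        // name -> fixed y slot, never shifted

    function colorFor(category: string): string {
      if (!categoryColors.has(category)) {
        categoryColors.set(category, palette[categoryColors.size % palette.length]);
      }
      return categoryColors.get(category)!;
    }

    function yFor(name: string): number {
      if (!nameToY.has(name)) {
        nameToY.set(name, nameToY.size); // first-seen order; stable afterwards
      }
      return nameToY.get(name)!;
    }

    interface Point { time: number; name: string; category: string; }

    // Build one series per category, each with an explicit, stable color.
    function buildSeries(points: Point[]) {
      const byCategory = new Map<string, [number, number][]>();
      for (const p of points) {
        if (!byCategory.has(p.category)) byCategory.set(p.category, []);
        byCategory.get(p.category)!.push([p.time, yFor(p.name)]);
      }
      return [...byCategory.entries()].map(([category, data]) => ({
        data,                       // [x, y] pairs: time vs. fixed row for the name
        color: colorFor(category),  // explicit color, so nothing is reshuffled
        label: category,
        points: { show: true },     // Flot-style per-series option
      }));
    }

Flotr.draw(container, buildSeries(points), options) would then run every 5 seconds; because colors and y slots are looked up rather than recomputed, a new category or name only adds entries and never reshuffles the existing ones. The sketch assigns y slots in first-seen order, which trades alphabetical ordering for stability; if the full list of names is known up front, you could instead seed nameToY alphabetically, which is essentially the pre-allocation idea above.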
Related
I have a problem with the Erase function of ET GeoWizard in ArcMap (I don't have the necessary license for ArcMap's own Erase tool, but the problem would presumably be the same anyway). I have a feature class with many overlapping buffer polygons. They are derived from surfaces, carry a lot of information in the attribute table, and there are 55 of them. I created them to determine the additional area each 10 km buffer adds per surface. However, these buffers now also overlap other surfaces whose area should not be counted again. So I wanted to cut the original surfaces out of each of the 55 buffer rings, ideally without the result being split into several individual pieces; if that cannot be avoided, each ring should at least remain a single feature (or row) in the attribute table, so that I still end up with 55 features. Do you know why this happens and how to avoid or solve it?
For now I have saved each polygon individually, applied the erase function individually (or via batch), and then merged them again. But I still have to do this for several other people, and for the future I would find it very interesting to know where my thinking goes wrong or where the problem lies in the program. I'd be happy for any help :)
I am charting a dataset that contains hundreds of data points, as we log information about every 15 minutes and I want to report over a month's worth of data. When charting a single day's worth of data, the readability is fine. But when I plot a month's worth of data, it is no longer really readable because I have markers enabled. I have markers enabled because a requirement is that the chart stays readable when printed in black & white.
What's the best practice to make the chart readable? Aggregating the data per hour/ per day is not an option as these are gauge readings and the customer doesn't want to lose any fidelity in the readings.
I know this was asked a long time ago, and I'm not 100% sure what your interval will be. Still, this worked for me, and hopefully it will help someone else.
I needed the markers to show up for every other data point, so I used this expression for the name of the Marker Type:
=IIF(RowNumber(Nothing) Mod 2 = 0, "Diamond", "None")
This is based on the commonly used code for alternating rows, but I think you should be able to modify the concept to get to what you need.
Generally you need to set a maximum duration you want to view and test what that will look like for your users. You can do a few things, but they mostly involve the tick marks and labels. On a chart object you can click once on a child element of the chart to select it; you can do this for the vertical or horizontal axis of a standard line chart, as in my example. Let's say you click on the vertical axis near the labels. A dotted rectangle will show that it is selected. Right-click and choose 'Vertical Axis Properties'.
Select 'Major Tick Marks' on the left pane.
Your first option may be to 'Hide major tick marks'; select that and you will see nothing but labels.
Your second may be to change the 'Interval' from Auto to something else: change the 'Interval type' and then adjust the interval as needed.
If you need dynamic control, you can use the 'fx' expression buttons to choose an interval that changes. E.g., say you have a date range: if someone selects a month, maybe you want the interval to be weeks; if someone selects a year, you want months, etc.
What you are describing is hard when you give a person a lot of data. You can also set just the beginning and end, and the line will adjust as necessary. The key to good charting is to let a user see what they need while keeping it readable no matter what. When you have broad options, I find that fewer labels and ticks are best.
I am trying to randomly generate a directed graph for the purpose of making a puzzle game similar to the ice sliding puzzles from Pokémon.
This is essentially what I want to be able to randomly generate: http://bulbanews.bulbagarden.net/wiki/Crunching_the_numbers:_Graph_theory
I need to be able to limit the size of the graph in the x and y dimensions. In the example in the link, it would be restricted to an 8x4 grid.
The problem I am running into is not randomly generating the graph, but randomly generating a graph which I can properly map out in 2D space, since I need something (like a rock) on the far side of a node to make it visually make sense when you stop sliding. The problem with this is that sometimes the rock ends up in the path between two other nodes, or possibly on another node itself, which breaks the entire graph.
After discussing the problem with a few people I know, we came to a couple of conclusions that may lead to a solution. One is to include the obstacles in the grid as part of the graph when constructing it. Another is to start out with a fully filled grid, draw a random path, and delete blocks so that that path works, though the problem then becomes figuring out which ones to delete so that you don't accidentally introduce an additional, shorter path. We were also thinking a dynamic programming algorithm might be beneficial, though none of us is too skilled at creating dynamic programming algorithms from scratch. Any ideas or references about what this problem is officially called (if it's an official graph problem) would be most helpful.
I wouldn't look at it as a graph problem, since as you say the representation is incomplete. To generate a puzzle I would work directly on a grid, and work backwards; first fix the destination spot, then place rocks in some way to reach it from one or more spots, and iteratively add stones to reach those other spots, with the constraint that you never add a stone which breaks all the paths to the destination.
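A rough TypeScript sketch of that backward construction, under the assumption that the board is a rectangular grid where 1 means rock and 0 means ice, that a slide continues until the next cell is a rock or the edge of the board, and that the player must stop exactly on the goal (the function and variable names are mine, not from any library):

    type Grid = number[][]; // 1 = rock, 0 = ice
    const DIRS: [number, number][] = [[1, 0], [-1, 0], [0, 1], [0, -1]];

    // Set of cells from which the goal can be reached by repeated slides, built backwards.
    // A slide moving in direction d stops on cell c exactly when c+d is a rock or the edge;
    // every empty cell strictly behind c along -d (until a rock) can start that slide.
    function cellsReachingGoal(grid: Grid, goal: [number, number]): Set<string> {
      const h = grid.length, w = grid[0].length;
      const key = (x: number, y: number) => `${x},${y}`;
      const blocked = (x: number, y: number) =>
        x < 0 || x >= w || y < 0 || y >= h || grid[y][x] === 1;

      const reach = new Set<string>([key(goal[0], goal[1])]);
      const stack: [number, number][] = [goal];
      while (stack.length > 0) {
        const [cx, cy] = stack.pop()!;
        for (const [dx, dy] of DIRS) {
          if (!blocked(cx + dx, cy + dy)) continue; // slides along (dx,dy) do not stop here
          let x = cx - dx, y = cy - dy;
          while (!blocked(x, y)) {
            if (!reach.has(key(x, y))) { reach.add(key(x, y)); stack.push([x, y]); }
            x -= dx; y -= dy;
          }
        }
      }
      return reach;
    }

    // Keep adding random rocks that create new stopping points, rejecting any rock that
    // makes a previously solvable position unsolvable ("never break an existing path").
    function generate(w: number, h: number, goal: [number, number], attempts = 200): Grid {
      const grid: Grid = Array.from({ length: h }, () => Array<number>(w).fill(0));
      let reach = cellsReachingGoal(grid, goal);
      for (let i = 0; i < attempts; i++) {
        const rx = Math.floor(Math.random() * w), ry = Math.floor(Math.random() * h);
        if (grid[ry][rx] === 1 || (rx === goal[0] && ry === goal[1])) continue;
        grid[ry][rx] = 1; // tentative rock
        const newReach = cellsReachingGoal(grid, goal);
        const broken = [...reach].some(c => !newReach.has(c) && c !== `${rx},${ry}`);
        if (broken) grid[ry][rx] = 0; else reach = newReach;
      }
      return grid;
    }

You would then pick a start cell from the final reach set (ideally one whose slide solution is long enough to be interesting) and draw the rocks as the visible obstacles in the 8x4-style grid from the question.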
You might want to generate a planar graph, i.e. one that can be drawn in two-dimensional space without its edges crossing. An equivalent characterization (Kuratowski's theorem) is that a planar graph contains no subgraph that is a subdivision of K_3,3 (the complete bipartite graph on six nodes) or K_5 (the complete graph on five nodes).
There's a paper on the fast generation of planar graphs.
I'm writing a game where a large number of objects will have "area effects" over a region of a tiled 2D map.
Required features:
Several of these area effects may overlap and affect the same tile
It must be possible to very efficiently access the list of effects for any given tile
The area effects can have arbitrary shapes but will usually be of the form "up to X tiles distance from the object causing the effect" where X is a small integer, typically 1-10
The area effects will change frequently, e.g. as objects are moved to different locations on the map
Maps could be potentially large (e.g. 1000*1000 tiles)
What data structure would work best for this?
Providing you really do have a lot of area effects happening simultaneously, and that they will have arbitrary shapes, I'd do it this way:
when a new effect is created, it is stored in a global list of effects (not necessarily a global variable, just something that applies to the whole game or the current game-map)
it calculates which tiles it affects, and stores a list of those tiles against the effect
each of those tiles is notified of the new effect, and stores a reference back to it in a per-tile list (in C++ I'd use a std::vector for this, something with contiguous storage, not a linked list)
ending an effect is handled by iterating through the interested tiles and removing references to it, before destroying it
moving it, or changing its shape, is handled by removing the references as above, performing the change calculations, then re-attaching references in the tiles now affected
You should also have a debug-only invariant check that iterates through your entire map and verifies that the list of tiles in the effect exactly matches the tiles in the map that reference it.
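A compact TypeScript sketch of that bookkeeping (the answer talks about C++ and std::vector, but the structure is the same; Effect, Tile, GameMap and computeCoverage are illustrative names, not from any particular engine):

    type TileCoord = { x: number; y: number };

    class Effect {
      tiles: Tile[] = []; // tiles currently covered by this effect
      constructor(public computeCoverage: () => TileCoord[]) {}
    }

    class Tile {
      effects: Effect[] = []; // per-tile list with contiguous storage (plain array)
    }

    class GameMap {
      private tiles: Tile[];
      readonly effects = new Set<Effect>(); // the "global" list of live effects

      constructor(readonly width: number, readonly height: number) {
        this.tiles = Array.from({ length: width * height }, () => new Tile());
      }
      tileAt(c: TileCoord): Tile { return this.tiles[c.y * this.width + c.x]; }

      addEffect(effect: Effect): void {
        this.effects.add(effect);
        this.attach(effect);
      }
      removeEffect(effect: Effect): void {
        this.detach(effect);
        this.effects.delete(effect);
      }
      // Moving or reshaping = detach, recompute coverage, re-attach.
      updateEffect(effect: Effect): void {
        this.detach(effect);
        this.attach(effect);
      }

      private attach(effect: Effect): void {
        effect.tiles = effect.computeCoverage().map(c => this.tileAt(c));
        for (const t of effect.tiles) t.effects.push(effect);
      }
      private detach(effect: Effect): void {
        for (const t of effect.tiles) t.effects = t.effects.filter(e => e !== effect);
        effect.tiles = [];
      }

      // Debug-only invariant: every effect/tile reference must be mirrored on both sides.
      checkInvariant(): void {
        for (const t of this.tiles)
          for (const e of t.effects)
            if (!e.tiles.includes(t)) throw new Error("tile references an effect that forgot it");
        for (const e of this.effects)
          for (const t of e.tiles)
            if (!t.effects.includes(e)) throw new Error("effect references a tile that forgot it");
      }
    }

effect.computeCoverage would implement the "up to X tiles from the source" rule from the question; reading the effects on any tile is then a single array lookup.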
Usually it depends on the density of your map.
If you know that every tile (or a major part of the tiles) contains at least one effect, you should use a regular grid, i.e. a simple 2D array of tiles.
If your map is sparsely filled and there are a lot of empty tiles, it makes sense to use a spatial index such as a quadtree, R-tree, or BSP tree.
Usually BSP-Trees (or quadtrees or octrees).
Some brute force solutions that don't rely on fancy computer science:
1000 x 1000 isn't too large - just a meg. Computers have gigs. You could have a 2D array. Each bit in the bytes could be a 'type of area'. The bigger 'affected area' could be another bit. If you have a reasonable number of different area types you can still use a multi-byte bit mask. If that gets ridiculous, you can make the array elements pointers to lists of overlapping area-type objects, but then you lose efficiency.
You could also implement a sparse array, using a hashtable keyed off the coordinates (e.g., key = 1000*x+y), but this is many times slower.
Of course, if you don't mind coding the fancy computer science ways, they usually work much better!
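For what it's worth, here is how those two brute-force layouts could look in TypeScript (a Uint8Array instead of a C byte array; the area-type bits and the 1000*x+y key follow the answer, the names are mine):

    const WIDTH = 1000, HEIGHT = 1000;

    // Dense version: one byte per tile, each bit flags one "type of area".
    const FIRE_AURA = 1 << 0;
    const HEAL_AURA = 1 << 1;
    const SLOW_FIELD = 1 << 2; // illustrative area types

    const dense = new Uint8Array(WIDTH * HEIGHT); // roughly the "meg" mentioned above

    function denseAdd(x: number, y: number, typeBit: number): void {
      dense[y * WIDTH + x] |= typeBit;
    }
    function denseHas(x: number, y: number, typeBit: number): boolean {
      return (dense[y * WIDTH + x] & typeBit) !== 0;
    }

    // Sparse version: a hash map keyed off the coordinates; only touched tiles get an entry.
    const sparse = new Map<number, number>();

    function sparseAdd(x: number, y: number, typeBit: number): void {
      const key = 1000 * x + y; // the key scheme suggested above
      sparse.set(key, (sparse.get(key) ?? 0) | typeBit);
    }
    function sparseHas(x: number, y: number, typeBit: number): boolean {
      return ((sparse.get(1000 * x + y) ?? 0) & typeBit) !== 0;
    }

Note that bit flags only record that some effect of a given type covers a tile, not which object caused it or how many overlap; once you need that, you are back to per-tile lists of objects, as the answer says.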
If you have a known maximum range for each area effect, you could store only the actual sources in a data structure of your choosing that's optimized for normal 2D collision testing.
Then, when checking for effects on a tile, simply check (collision-detection style, optimized for your data structure) for all effect sources within the maximum range, and then apply a defined test function (for example, if the area is a circle, check whether the distance is less than a constant; if it's a square, check whether the x and y distances are each within a constant).
If you have a small number (<10) of effect "field" shapes, you can even do a dedicated collision test for each effect field type, within its pre-computed maximum range.
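A short TypeScript sketch of that two-step check; the EffectSource fields and the circle/square shapes are assumptions made for the example:

    interface EffectSource {
      x: number; y: number;          // tile coordinates of the source object
      maxRange: number;              // known maximum reach of its effect
      shape: "circle" | "square";    // illustrative shape set
    }

    // Broad phase: any structure that returns sources whose bounding box covers the tile.
    // A linear scan here; a bucket grid or quadtree would slot into the same place.
    function sourcesNear(all: EffectSource[], tx: number, ty: number): EffectSource[] {
      return all.filter(s => Math.abs(s.x - tx) <= s.maxRange && Math.abs(s.y - ty) <= s.maxRange);
    }

    // Narrow phase: the per-shape test function described above.
    function affectsTile(s: EffectSource, tx: number, ty: number): boolean {
      const dx = Math.abs(s.x - tx), dy = Math.abs(s.y - ty);
      if (s.shape === "circle") return dx * dx + dy * dy <= s.maxRange * s.maxRange;
      return dx <= s.maxRange && dy <= s.maxRange; // square
    }

    function effectsOnTile(all: EffectSource[], tx: number, ty: number): EffectSource[] {
      return sourcesNear(all, tx, ty).filter(s => affectsTile(s, tx, ty));
    }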
I have an app that finds other users within a 20 mile radius on a google map and associates an icon with each of them. However, I do not want their exact points to be given but rather an approximation. I've wrestled with a few ideas on how to do this:
Only Geocode the Zip Code, make graphic icons for 1-99, use the icon to represent how many results are within the zip code, and use the info window to show hyperlinks to the individual results. The only problem is, I'd like each individual icon to be shown because it just looks a lot better.
Add/Subtract a random number to the lat/lng values stored with each user and add a translucent circle around the icon.
What do you guys suggest?
It depends on the level of privacy you want (the 1st option protects privacy better), but I'd be tempted to go with randomly moving the indicators because it's a more natural representation (people on a map, not groups of people on a map) without too much of a compromise in terms of usefulness.
That depends on how hard you think someone will try to defeat your system.
If you plan to track these positions over time, you give away more information over time than you do in a snapshot. For instance, if you choose a fixed offset from the center of the circle, it may be possible to find this offset by mapping the path over time to the street map. On the other hand, if you continually change the offset, the position may be discoverable by averaging.
Here's one possible scheme based on hysteresis. Leave the visible circle in place until the user exits an invisible bounding circle with a random radius. Then compute a new visible circle with a different random offset, and also set up a new invisible circle with a different random radius. This should generate a visible-circle movement that is almost impossible to reverse engineer, but also avoids lots of jittery movement.
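A minimal TypeScript sketch of that scheme; the radii, the metre-to-degree conversion and all names are placeholder assumptions to be tuned, not values from the answer:

    interface LatLng { lat: number; lng: number; }

    // Rough conversion, good enough for small offsets at mid-latitudes.
    const metersToDegrees = (m: number) => m / 111_320;

    function randomOffset(maxMeters: number): LatLng {
      const r = metersToDegrees(maxMeters) * Math.sqrt(Math.random()); // uniform in a disc
      const a = Math.random() * 2 * Math.PI;
      return { lat: r * Math.sin(a), lng: r * Math.cos(a) };
    }

    class ObfuscatedPosition {
      private visible: LatLng;          // what everyone else sees
      private anchor: LatLng;           // center of the invisible bounding circle
      private boundRadiusM: number;     // random radius of that invisible circle

      constructor(actual: LatLng, private maxOffsetM = 800,
                  private minBoundM = 300, private maxBoundM = 1500) {
        this.anchor = actual;
        this.visible = this.displace(actual);
        this.boundRadiusM = this.newBoundRadius();
      }

      // Call whenever a new real position arrives.
      update(actual: LatLng): LatLng {
        if (distanceMeters(actual, this.anchor) > this.boundRadiusM) {
          // User left the invisible circle: pick a new visible offset and a new bound.
          this.anchor = actual;
          this.visible = this.displace(actual);
          this.boundRadiusM = this.newBoundRadius();
        }
        return this.visible; // otherwise the shown position does not move at all
      }

      private displace(p: LatLng): LatLng {
        const o = randomOffset(this.maxOffsetM);
        return { lat: p.lat + o.lat, lng: p.lng + o.lng };
      }
      private newBoundRadius(): number {
        return this.minBoundM + Math.random() * (this.maxBoundM - this.minBoundM);
      }
    }

    // Equirectangular approximation, fine at these scales.
    function distanceMeters(a: LatLng, b: LatLng): number {
      const dLat = (a.lat - b.lat) * 111_320;
      const dLng = (a.lng - b.lng) * 111_320 * Math.cos((a.lat * Math.PI) / 180);
      return Math.hypot(dLat, dLng);
    }

The visible point is what you would draw on the map (together with the translucent circle from the question). As long as the user stays inside the invisible circle the marker does not move at all, and when it does move, neither the offset nor the trigger distance is predictable, which is what makes the scheme hard to reverse engineer.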