Drawing two-dimensional point-graphs - language-agnostic

I've got a list of objects (probably not more than 100), where each object has a distance to all the other objects. This distance is simply the sum of the absolute differences across all the fields the objects share. There might be few (one) or many (dozens of) fields, so the dimensionality behind the distance is not important.
I'd like to display these points in a 2D graph such that objects which have a small distance appear close together. I'm hoping this will convey clearly how many sub-groups there are in the entire list. Obviously the axes of this graph are meaningless (I'm not even sure "graph" is the correct word to use).
What would be a good algorithm to convert a network of distances into a 2D point distribution? Ideally, I'd like a small change to the distance network to result in a small change in the graphic, so that incremental progress can be viewed as a smooth change over time.
I've made a small example of the sort of result I'm looking for:
Example graphic: http://en.wiki.mcneel.com/content/upload/images/GraphExample.png
Any ideas greatly appreciated,
David
Edit:
It actually seems to have worked. I treat the entire set of values as a 2D particle cloud, constructing inverse-square repulsion forces between all particles and linear attraction forces based on inverse distance. It's not a stable algorithm; the result tends to spin violently whenever an additional iteration is performed, but it does always seem to produce a good separation into visual clusters:
Result: http://en.wiki.mcneel.com/content/upload/images/ParticleCloudSolution.png
I can post the C# code if anyone is interested (there's quite a lot of it sadly)
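For anyone who wants to experiment before the C# appears, here is a minimal sketch of the same idea in Python/NumPy. It is not the author's code: the inverse-square repulsion follows the description above, but the attraction is written as a spring toward each target distance, and the step size and iteration count are assumptions you would tune.

```python
import numpy as np

def layout_2d(dist, iterations=500, step=0.01, seed=0):
    """Embed n objects in 2D from an n x n distance matrix using
    inverse-square repulsion plus a spring toward each target distance."""
    rng = np.random.default_rng(seed)
    n = dist.shape[0]
    pos = rng.standard_normal((n, 2))                  # random initial cloud
    for _ in range(iterations):
        delta = pos[:, None, :] - pos[None, :, :]      # pairwise offsets (n, n, 2)
        d = np.linalg.norm(delta, axis=-1) + 1e-9      # current screen distances
        np.fill_diagonal(d, 1.0)                       # delta is zero there; avoids 0/0
        repulsion = delta / d[..., None] ** 3          # magnitude ~ 1/d^2, pushes apart
        spring = -delta * ((d - dist) / d)[..., None]  # pulls pairs toward target distance
        pos += step * (repulsion + spring).sum(axis=1)
    return pos

# e.g. pos = layout_2d(dist_matrix); plt.scatter(pos[:, 0], pos[:, 1])
```

A small step size damps the spinning mentioned above; shrinking `step` each iteration would also let the layout settle smoothly over time.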

Graphviz contains implementations of several different approaches to solving this problem; consider using its spring model graph layout tools as a basis for your solution. Alternatively, its site contains a good collection of source material on the related theory.

The previous answers are probably helpful, but unfortunately, given your description of the problem, an exact solution isn't guaranteed to exist, and in fact most of the time it won't: pairwise distances measured across many fields generally cannot all be reproduced faithfully in only two dimensions.
I think you need to read up on cluster analysis quite a bit, because there are algorithms that sort your points into clusters based on a relatedness metric, and you can then use graphviz or something similar to draw the results: http://en.wikipedia.org/wiki/Cluster_analysis
One I quite like is minimum-cut partitioning; see http://en.wikipedia.org/wiki/Cut_(graph_theory)
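As a concrete starting point, agglomerative clustering works directly on a distance matrix, so you can count sub-groups without any 2D embedding at all. A minimal SciPy sketch, where the cut-off threshold is an assumed tuning parameter:

```python
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def cluster_from_distances(dist, threshold):
    """dist: full symmetric n x n distance matrix as in the question.
    Returns one cluster label per object."""
    condensed = squareform(dist, checks=False)    # SciPy wants the condensed form
    tree = linkage(condensed, method="average")   # agglomerative clustering
    return fcluster(tree, t=threshold, criterion="distance")
```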

You might want to Google around for terms such as "automatic graph layout" and "force-based algorithms".
GraphViz does implement some of these algorithms; I'm not sure whether it includes any that are useful to you.
One cautionary note: for some of these algorithms, small changes to your graph content can result in very large changes to the layout.

Related

How to make CNN learn positional constraints?

I am working on an image segmentation problem in the medical domain using a fully convolutional CNN.
The problem is that a particular image can contain many similar structures, and our task is to find the correct one. One constraint I'd like the CNN to learn is that there should be no structure below another structure that is found first at the top. In the ground-truth images this is only implicit, because each image contains just one structure. Is it possible to achieve this with a CNN? If not, what could be done to achieve it?
With a traditional CNN, positional constraints cannot be learned, because all of the learning takes place in convolutional layers, which are spatially invariant.
One caveat: a CNN will learn relative arrangements of features to a certain extent. If feature A is always above feature B, successful classification of pixels belonging to A will implicitly decrease the likelihood of pixels above them being classified as B, at least for pixels that are "sufficiently close", because the boundary region would be the opposite of what the CNN has been trained on.
If you do not consider that sufficient, you would need to either design a custom layer that somehow takes position into account (although if there is only one structure in each ground-truth image, I'm not sure your data is sufficient to teach anything about relative locations of multiple objects beyond the aforementioned caveat), or simply post-process the CNN output with a non-learning algorithm designed from your expert knowledge of these positional constraints. As a fellow medical computer vision engineer, I would recommend the latter, especially since it sounds like you are dealing with a hard no-exceptions rule (why bother trying to learn a rule that is already simple?).
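To illustrate the post-processing route, here is a minimal sketch under two assumptions: the CNN emits a binary segmentation mask, and the rule is simply "keep only the topmost detected structure".

```python
import numpy as np
from scipy import ndimage

def keep_topmost(mask):
    """Keep only the connected component of a binary H x W mask whose
    topmost pixel has the smallest row index, discarding anything that
    appears below an already-found structure."""
    labels, n = ndimage.label(mask)
    if n <= 1:
        return mask                                  # zero or one structure: nothing to do
    tops = [np.where(labels == i)[0].min() for i in range(1, n + 1)]
    keep = int(np.argmin(tops)) + 1                  # label of the topmost component
    return (labels == keep).astype(mask.dtype)
```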

Training Faster R-CNN with multiple objects in an image

I want to train a Faster R-CNN network on my own images to detect faces. I have checked quite a few GitHub libraries, but this is the example of a training file I always find:
/data/imgs/img_001.jpg,837,346,981,456,cow
/data/imgs/img_002.jpg,215,312,279,391,cat
But I can't find an example of how to train with images containing multiple objects. Should it be:
1) /data/imgs/img_001.jpg,837,346,981,456,cow,215,312,279,391,cow
or
2) /data/imgs/img_001.jpg,837,346,981,456,cow
/data/imgs/img_001.jpg,215,312,279,391,cow
?
I just could not help quoting Far Cry 3 here: "The definition of insanity is doing the same thing over and over and expecting different results."
(Note that this is purely meant in an entertaining context and not to insult you in any way; I would not take the time to answer your question if I didn't think it worthwhile.)
In your second example, you would feed the exact same input data but require the network to learn two different outcomes. As you already noted, though, many libraries do not support multiple labels per image.
Oftentimes this is done purely for the sake of simplicity, because supporting multiple outputs requires changing your metrics: instead of one-hot encoded targets, you can now have several targets per image.
This is even more challenging in object detection (as opposed to the object classification described above), since you now also have to decide how to represent the targets.
If it is possible at all, I would personally restrict myself to labeling one class per image, or look at another library that does support multiple objects per image, since the effort of rewriting that much code is probably not worth the minute improvement in results.
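For what it's worth, libraries that do support format 2 typically just group rows by filename before building a training sample, so each image carries all of its boxes. A minimal sketch of such a parser, with the CSV layout taken from the examples above:

```python
import csv
from collections import defaultdict

def load_annotations(path):
    """Parse 'filename,x1,y1,x2,y2,class' rows (one row per object) into
    {filename: [((x1, y1, x2, y2), class), ...]}, so an image with several
    objects becomes one sample with several targets."""
    boxes = defaultdict(list)
    with open(path, newline="") as f:
        for fname, x1, y1, x2, y2, cls in csv.reader(f):
            boxes[fname].append(((int(x1), int(y1), int(x2), int(y2)), cls))
    return boxes
```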

Camera image recognition with small sample set

I need to visually recognise some flat pictures shown to a camera. There are not many of them (maybe 30), but discrimination may depend on small details. The input may be partly obscured or shadowed and is subject to lighting changes.
The samples need to be updatable.
There are many existing frameworks for object detection, with the most reliable ones based on deep learning methods (mostly convolutional networks). However, the pretrained models are of course not optimised to discern flat imagery, and even if I start training from scratch, updating the system for new samples would require a cumbersome training process, if I am right about how this works.
Is it possible to use deep learning while still keeping the sample pool flexible?
Is there any other well known reliable method to detect images from a small sample set?
One can use a well-trained visual classification network like Inception or SqueezeNet, slice off the last layer(s), and add a simple statistical algorithm (for example k-nearest neighbours) that can be taught directly from the samples in a non-iterative fashion.
Most classification-related concerns, like insensitivity to lighting and orientation, are then already handled by the pre-trained network, while the network's output keeps enough information for the statistical algorithm to decide the image class.
An implementation using k-nearest neighbours is shown at https://teachablemachine.withgoogle.com/; the source is hosted at https://github.com/googlecreativelab/teachable-machine.
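A minimal sketch of the same recipe, with a Keras MobileNetV2 standing in for Inception/SqueezeNet and scikit-learn's k-NN as the statistical layer; `sample_images`, `sample_labels` and `query_images` are placeholders for your own data:

```python
import tensorflow as tf
from sklearn.neighbors import KNeighborsClassifier

# Pre-trained backbone with the classification head sliced off; its pooled
# output acts as a generic image embedding.
backbone = tf.keras.applications.MobileNetV2(
    weights="imagenet", include_top=False, pooling="avg")

def embed(images):
    """images: float array (n, 224, 224, 3) with raw pixel values 0-255."""
    x = tf.keras.applications.mobilenet_v2.preprocess_input(images)
    return backbone.predict(x)

# "Training" is just storing embeddings, so updating the sample pool is
# instant: re-fit the k-NN whenever samples are added or replaced.
knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(embed(sample_images), sample_labels)
predicted = knn.predict(embed(query_images))
```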
Use transfer learning; you'll still need to build a training set, but you'll get better results than starting with random weights. Try to find a model trained on images similar to yours. You might also do some black-box testing of the selected model with your curated images to baseline its response to them.

Can cesium move 5000 objects?

I tried to enhance the CZML example to move 100, 500 and 1000 objects instead of a few by adding a loop to the built-in CZML code, and the map got stuck after 1000 objects. I saw the lots-of-satellites demo too, but I think that only has a few hundred. If Cesium doesn't have the means to do this, how can I add a fast layer of my own? Is there any way to combine three.js for this enhancement?
The result has to look something like this.
The short answer is, yes, Cesium can handle 5000 objects. The largest single Cesium app I have personally worked on involved over 35,000 time-dynamic objects.
The full answer is a little more involved. If all you are talking about is Billboard rendering, 5000 is easy. If you want to involve more complex types of visualization, with lots of dynamic geometry and polylines, then it can get a little more complicated. It also depends on the browser and CPU/GPU requirements that you are targeting. Some aspects of Cesium are currently CPU bound, while other things (such as static geometry) are GPU bound. Chrome beats Firefox hands-down in the performance department. Furthermore, it's really easy to write slow JavaScript code, so if you run into problems it's important to use the profiler (the one included with Chrome is great) to pinpoint exactly where the app is spending most of its time (it may not be Cesium).
Cesium developers are always on the lookout to improve performance and there's actually a lot of work being done in the CZML & DynamicScene area right now. If you run into a specific bottleneck that you are having trouble getting past, we'd be happy to help point you in the right direction.

Territory Map Generation

Is there a trivial, or at least moderately straightforward, way to generate territory maps (e.g. Risk)?
I have looked in the past and the best I could find were vague references to Voronoi diagrams. [Example image of a Voronoi diagram]
These hold promise, but I haven't seen any straightforward ways of rendering them, let alone holding them in some form of data structure that treats each territory as an object.
Another approach that holds promise is flood fill, but again I'm unsure of the best way to start with it.
Any advice would be much appreciated.
The best reference I've seen on them is Computational Geometry: Algorithms and Applications, which covers Voronoi diagrams, Delaunay triangulations (closely related to Voronoi diagrams; each can be converted into the other), and other similar data structures.
It covers all the data structures you need, but doesn't give you the code necessary to implement them (which may be a good exercise). In terms of code, an Amazon search turns up Computational Geometry in C, which presumably comes with code (although since you'd then be stuck with C, you might as well get the first book and implement it in whatever language you want). I don't have any experience with that book, only the first.
Sorry to have only books to recommend! The only decent online resources I've seen on them are the two Wikipedia articles, which don't really tell you implementation details. This link may be helpful though.
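If a library is acceptable, SciPy exposes Voronoi regions directly, which already gives you a per-territory data structure; a minimal sketch, with 30 random points standing in for territory seeds:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.spatial import Voronoi, voronoi_plot_2d

points = np.random.rand(30, 2)     # the territory seeds ("capitals")
vor = Voronoi(points)

# vor.point_region[i] is the region index of seed i; vor.regions[r] lists
# vertex indices (-1 marks an unbounded edge), so each bounded territory
# is recoverable as a polygon object.
for i, r in enumerate(vor.point_region):
    verts = vor.regions[r]
    if verts and -1 not in verts:
        polygon = vor.vertices[verts]          # (k, 2) array of corners
        print(f"territory {i}: {len(polygon)} corners")

voronoi_plot_2d(vor)                           # quick rendering for inspection
plt.show()
```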
Why not use a map of primitives (triangles, squares), distribute the starting points for the countries (the "capitals"), and then grow the countries by repeatedly adding a random adjacent primitive to each one?
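On a plain square grid, that growth idea might look like the following rough sketch; the four-neighbour adjacency and purely random capitals are assumptions:

```python
import random

def grow_territories(width, height, n_countries, seed=0):
    """Place random capitals on a grid, then repeatedly annex a random
    free cell adjacent to each country until nothing can grow."""
    rng = random.Random(seed)
    owner = [[None] * width for _ in range(height)]
    cells = [(x, y) for y in range(height) for x in range(width)]
    frontiers = []
    for c, (x, y) in enumerate(rng.sample(cells, n_countries)):
        owner[y][x] = c
        frontiers.append([(x, y)])
    free = width * height - n_countries
    while free > 0:
        grew = False
        for c, frontier in enumerate(frontiers):
            while frontier:
                x, y = rng.choice(frontier)
                nbrs = [(x + dx, y + dy)
                        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                        if 0 <= x + dx < width and 0 <= y + dy < height
                        and owner[y + dy][x + dx] is None]
                if nbrs:
                    nx, ny = rng.choice(nbrs)
                    owner[ny][nx] = c            # annex one free neighbour
                    frontier.append((nx, ny))
                    free -= 1
                    grew = True
                    break
                frontier.remove((x, y))          # landlocked cell: drop it
        if not grew:
            break
    return owner                                 # owner[y][x] = country index
```

Growing one cell per country per round keeps the territories roughly equal in size; letting some countries take extra turns would produce deliberately uneven maps.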
CGAL is a C++ library that has data structures and algorithms used in Computational Geometry.
I'm actually dealing with exactly this kind of stuff for my company's video game. The most useful info I've found are at these two links:
Paul Bourke's page at UWA, with his 1989 paper on Delaunay and a series of implementation links.
A great explanation of the pseudocode, plus a visualisation of doing Delaunay, at codeGuru.com.
In terms of rendering these: most of the implementations I've found need massaging to get what you'd want, but since using this for a game map gives you a set of points plus the lines between them, drawing it to the screen could be a very simple matter.