Plotting tools for dashboard - HTML

After browsing available solutions for a while, I'm finding it hard to choose the most appropriate tool for creating a dashboard and populating it with plots. I want an HTML page with multiple plots and tables. I'm planning to store the input data in appropriately formatted CSV files.
The requirements are:
- plot coordinates are shown on mouse hover
- ability to show coordinates of points on a plotted line (points in a scatter plot, or bar values for a bar chart) 'sticking' to the nearest line on hover, with appropriate handling of multiple lines (show several y values for the same x)
- ability to interactively switch plotted data on/off
- easily embeddable into an HTML page; doesn't require additional plugins to be installed
- a good variety of plot types
- not too slow to load, and stable; there could be ~50 plots on one page (this is for internal use only, so speed is not that important)
- does it all with minimal effort
So far I have checked out (by no means final opinions; correct me if I'm wrong):
- gnuplot + canvas - looks good, but the samples on their page don't work well for me; mouse clicks aren't always handled correctly
- python + matplotlib + mplh5canvas - feels a bit raw; as I understand it, I'd need to implement some of the features above in Python myself
- RGraph - looks awesome at first glance, but I'm not sure how good it is, since I'd never heard of it before, I have no experience with JS, and it may be hard to customize(?)
- some other random options that seemed bad enough to dismiss
Suggestions?

RGraph looks awesome, and is awesome. It's not difficult to use and there are a lot of examples online.
RGraph example page
They've got 22 types of graphs (correct me if I'm wrong), and as I already said, it's easy to use.
Documentation on the capabilities of each graph type is also available on the website.

I ended up using Highcharts because of its very advanced features and ready-to-use templates (in addition to the things mentioned in the question).

FirebaseVisionImage / ML Toolkit cropRect() support

I am posting this question by request of a Firebase engineer.
I am using the Camera2 API in conjunction with Firebase-mlkit vision. I am using both barcode and on-platform OCR. The things I am trying to decode are mostly labels on equipment. In testing the application I have found that trying to scan the entire camera image produces mixed results. The main problem is that the field of view is too wide.
If there are multiple bar codes in view, firebase returns multiple results. You can sort of work around this by looking at the coordinates and picking the one closest to the center.
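For illustration, that workaround might look something like this (a sketch against the legacy firebase-ml-vision classes; the frame dimensions are whatever you scanned, and the class name is just for the example):

```java
import android.graphics.Rect;
import com.google.firebase.ml.vision.barcode.FirebaseVisionBarcode;
import java.util.List;

public class BarcodePicker {
    // Return the detected barcode whose bounding box center is nearest
    // the center of the scanned frame, or null if the list is empty.
    public static FirebaseVisionBarcode pickCentermost(
            List<FirebaseVisionBarcode> barcodes, int frameWidth, int frameHeight) {
        float cx = frameWidth / 2f;
        float cy = frameHeight / 2f;
        FirebaseVisionBarcode best = null;
        float bestDist = Float.MAX_VALUE;
        for (FirebaseVisionBarcode barcode : barcodes) {
            Rect box = barcode.getBoundingBox();
            if (box == null) continue;
            float dx = box.exactCenterX() - cx;
            float dy = box.exactCenterY() - cy;
            float dist = dx * dx + dy * dy; // squared distance is enough to compare
            if (dist < bestDist) {
                bestDist = dist;
                best = barcode;
            }
        }
        return best;
    }
}
```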
Scanning text is more or less the same, except that you get multiple Blocks, which are often incomplete (you'll get a couple of letters here and there).
You can't just narrow the camera mode, though - for this type of scanning, the user benefits from the "wide" camera view for alignment. The ideal situation would be if you have a camera image (let's say for the sake of argument it's 1920x1080) but only a subset of the image is given to firebase-ml. You can imagine a camera view that has a guide box on the screen, and you orient and zoom the item you want to scan within that box.
You can select what kind of image comes from the Camera2 API, but firebase-ml spits out warnings if you choose anything other than YUV_420_888. The problem is that there's no great way in the Android API to deal with YUV images unless you do it yourself. That's what I ultimately ended up doing - I solved my problem by writing a RenderScript that takes an input YUV, converts it to RGBA, crops it, then applies any rotation if necessary. The result is a Bitmap, which I then feed into either the FirebaseVisionBarcodeDetector or the FirebaseVisionTextRecognizer.
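Whether the crop happens inside the RenderScript or on the resulting Bitmap, the hand-off to the detectors looks roughly the same. Here's a minimal sketch that crops the Bitmap for simplicity (the guide-box coordinates are made-up placeholders):

```java
import android.graphics.Bitmap;
import com.google.firebase.ml.vision.FirebaseVision;
import com.google.firebase.ml.vision.barcode.FirebaseVisionBarcodeDetector;
import com.google.firebase.ml.vision.common.FirebaseVisionImage;

// fullFrame is the RGBA Bitmap produced by the RenderScript YUV -> RGBA step
private void scanCropped(Bitmap fullFrame) {
    // Crop to the on-screen guide box (placeholder coordinates)
    Bitmap cropped = Bitmap.createBitmap(fullFrame, 560, 390, 800, 300);

    FirebaseVisionImage visionImage = FirebaseVisionImage.fromBitmap(cropped);
    FirebaseVisionBarcodeDetector detector =
            FirebaseVision.getInstance().getVisionBarcodeDetector();

    detector.detectInImage(visionImage)
            .addOnSuccessListener(barcodes -> {
                // barcodes now come only from the cropped region
            })
            .addOnFailureListener(e -> {
                // handle the failure
            });
}
```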
Note that the bitmap itself causes mlkit runtime warnings, urging me to use the YUV format instead. This is possible, but difficult. You would have to read the byte array and stride information from the original camera2 YUV image and create your own. The object that comes from camera2 is unfortunately a package-protected class, so you can't subclass it or create your own instance - you'd essentially have to start from scratch. (I'm sure there's a reason Google made this class package-protected, but it's extremely annoying that they did.)
The steps I outlined above all work, but with format warnings from mlkit. What makes it even better is the performance gain - the barcode scanner operating on an 800x300 image takes a tiny fraction of the time it takes on the full-size image!
It occurs to me that none of this would be necessary if firebase paid attention to cropRect. According to the Image API, cropRect defines what portion of the image is valid. That property seems to be mutable, meaning you can get an Image and change its cropRect after the fact. That sounds perfect. I thought that I could get an Image off of the ImageReader, set cropRect to a subset of that image, and pass it to Firebase, and Firebase would ignore anything outside of cropRect.
This does not seem to be the case. Firebase seems to ignore cropRect. In my opinion, firebase should either support cropRect, or the documentation should explicitly state that it ignores it.
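For reference, the attempt that Firebase appears to ignore looked roughly like this (a sketch only; the crop coordinates are made up and rotation handling is omitted):

```java
import android.graphics.Rect;
import android.media.Image;
import android.media.ImageReader;
import com.google.firebase.ml.vision.common.FirebaseVisionImage;
import com.google.firebase.ml.vision.common.FirebaseVisionImageMetadata;

private void scanWithCropRect(ImageReader reader) {
    Image image = reader.acquireLatestImage();
    if (image == null) return;
    try {
        // Mark only the guide-box region as valid (placeholder coordinates)
        image.setCropRect(new Rect(560, 390, 1360, 690));
        // One would expect the recognizers to honor cropRect here; they do not seem to
        FirebaseVisionImage visionImage = FirebaseVisionImage.fromMediaImage(
                image, FirebaseVisionImageMetadata.ROTATION_0);
        // ... pass visionImage to the barcode detector or text recognizer ...
    } finally {
        image.close();
    }
}
```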
My request to the firebase-mlkit team is:
- Define the behavior I should expect with regard to cropRect, and document it more explicitly.
- Explain at least a little about how images are processed by these recognizers. Why is it so insistent that YUV_420_888 be used? Maybe only the Y channel is used in decoding? Doesn't the recognizer have to convert to RGBA internally? If so, why does it get angry at me when I feed in Bitmaps?
- Make these recognizers either pay attention to cropRect, or state that they don't, and provide another way to tell them to work on a subset of the image, so that I can get the performance (reliability and speed) one would expect from running ML on a smaller image.
--Chris

Doing an Image Search?

Is it possible to perform an image search with Google Maps? For example, if I had a small section of a map showing road configurations, but no labels to indicate street or place names, is there a way to do an image search, similar to what you can do with regular Google, to identify that location? I have tried this with regular Google, and it does not work. Does anyone know of software or an app that can do this? Thanks!
Yes, it may be possible, but I do not think any currently available software or app does it. A program would need to be written that takes the small section of map you have (ideally a good-quality image, preferably in a raw bitmap format) and overlays it on the main Google map, moving it around: the search picture is scanned across the bigger map, each pixel compared frame by frame, and a best-fit process is used. When it finds a match, it can report how closely it matches and the confidence level that the location found is actually correct. It would be best to narrow the search area down as much as possible. A bit of artificial intelligence might be very useful here too; a face-recognition AI program could be modified to complete the task. As far as I know, none of that exists in a single readily available app or program. Maybe someday someone will create such a program; it is possible.
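To make the pixel-scanning idea concrete, a naive grayscale best-fit search could look like the sketch below. It assumes the patch and the map are already at the same scale and orientation, which is exactly the hard part with real maps:

```java
import java.awt.Point;
import java.awt.image.BufferedImage;

public class MapPatchSearch {
    // Slide the small patch across the big map and return the offset
    // with the lowest sum of absolute pixel differences (the best fit).
    public static Point bestMatch(BufferedImage map, BufferedImage patch) {
        Point best = new Point(0, 0);
        long bestScore = Long.MAX_VALUE;
        for (int y = 0; y <= map.getHeight() - patch.getHeight(); y++) {
            for (int x = 0; x <= map.getWidth() - patch.getWidth(); x++) {
                long score = 0;
                // Abort this offset early once it is already worse than the best so far
                for (int py = 0; py < patch.getHeight() && score < bestScore; py++) {
                    for (int px = 0; px < patch.getWidth(); px++) {
                        score += Math.abs(gray(map.getRGB(x + px, y + py))
                                        - gray(patch.getRGB(px, py)));
                    }
                }
                if (score < bestScore) {
                    bestScore = score;
                    best = new Point(x, y);
                }
            }
        }
        return best;
    }

    // Average the R, G and B channels into a single intensity value
    private static int gray(int rgb) {
        return (((rgb >> 16) & 0xFF) + ((rgb >> 8) & 0xFF) + (rgb & 0xFF)) / 3;
    }
}
```

Real implementations would use normalized cross-correlation and an image pyramid rather than this brute-force scan, but the principle is the same.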

toggle buttons to control rgl 3d scatterplot not working

We are working on a meta-analysis of lizard niches and convergent evolution, and created a 3D plot of PCA scores, where the dots are lizard species from 24 different families.
We decided to use our 3D plots as supplementary material for our manuscript because there are very interesting patterns that are obscured when plotted in two dimensions. For example, all nocturnal species stick together and are separated from the rest in the third dimension.
So, to make the figures really useful, I wanted to include some controls to turn objects on/off (like lizard families, ellipsoids, or functional groups). I tried Plotly, which gives you very nice, interactive plots, but Plotly's symbols are limited to only five for 3D plots.
I finally did it in R using the rgl package. I had to create new symbols by overlapping pre-existing ones, but in the end I got what I needed.
I followed an online tutorial on creating interactive controls (https://cran.r-project.org/web/packages/rgl/vignettes/WebGL.html) that can be embedded in HTML, and with a lot of effort I got those controls working. The problem is that they only work on my computer, and only in Chrome (not in Firefox or Safari). I asked one of the creators of the package, and he told me the tutorial was intended to be used with R Markdown; given that I had changed the HTML code, the weird behavior was not surprising.
After that I learned some R Markdown and HTML to better understand the tutorial, and redid the code. Now I have the plots and I have the buttons, but when I compile the script using knitr, the buttons don't work. Some of them do nothing; others turn on a different set of points.
I am sorry for the length of this post, but I really tried everything and I can't find a solution.
Here is a link to a sample of my dataset, my R script, and the HTML generated using Knitr: https://drive.google.com/open?id=0B-fCxMGN3utrbWgtZzhWYVpxSjg
Thank you so much in advance!
The issue is that there are at least two different schemes for embedding rgl scenes in a web page and for linking a button to the rgl scene, and you're mixing them without providing the "glue" to make them work together.
An rglwidget() always has an elementId. Normally it's some random string, but if you want to refer to that scene, you should specify one.
The toggleButton() function uses the older scheme for inserting the scene into your web page. So you need to translate the elementId to the prefix that it uses.
So try something like this:
```{r results='asis'}
# rgl provides plot3d(), rglwidget() and the legacy button helpers
library(rgl)
# plot3d() returns a named vector of rgl object ids for the scene
x <- plot3d(rnorm(10), rnorm(10), rnorm(10))
# Give the widget a fixed elementId so the scene can be referred to later
rglwidget(elementId = "theplot")
# Translate the elementId into the prefix used by the older embedding scheme
elementId2Prefix("theplot")
# Create a button that toggles the data points on and off
toggleButton(x["data"], prefix = "theplot")
```

Optical character recognition

Hey everyone,
I'm trying to create a program in Java that can read numbers off the screen and also recognise images on the screen. I was wondering how I can achieve this.
The font of the numbers will always be the same. I have never programmed anything like this before, but my idea of how it works is to have the program take a screenshot, then overlay the image of each number on a section of the screenshot and check whether they match, repeating this for each number. If this is the correct way to do it, how would I put that in code?
Thanks in advance for any help.
You could always train a neural net to do it for you; they can get pretty accurate. If you use something like MATLAB, it actually has capabilities for that already. Apparently there's a neural network library for Java (http://neuroph.sourceforge.net/), although I've never used it personally.
Here's a tutorial about using neuroph: http://www.certpal.com/blogs/2010/04/java-neural-networks-and-neuroph-a-tutorial/
You can use a neural network, support vector machine, or other machine-learning construct for this, but it will not do the entire job. If you take a screenshot, you are left with a very large image on which you will need to find the individual characters. You also need to deal with the fact that the camera might not be pointed straight at the text you want to read. You will likely need a series of algorithms to lock onto the right parts of the image, and then downsample each one so that its size becomes neutral.
Here is a simple Java applet I wrote that does some of this.
http://www.heatonresearch.com/articles/42/page1.html
It lets you draw on a relatively large area and locks in on your char. Then it recognizes it. I am using the alphabet, but digits should be easier. The complete Java source code is included.
One simpler approach could be to use template matching. If the font is the same and/or the size (in pixels) is known, simple template matching can do the job for you. If the size of the input is unknown, you might have to create copies of the image at different scales and do the matching at each scale.
The one with the extreme value (highest or lowest, depending on the template-matching method you use) is your result.
Follow this link for details
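To make that concrete, here is a minimal sketch of such matching, assuming the glyphs have already been cut out of the screenshot and that the templates (one same-sized image per digit, in the same font) are loaded elsewhere:

```java
import java.awt.image.BufferedImage;

public class DigitMatcher {
    // templates[d] is a reference image of digit d, the same size as the glyph.
    // Returns the digit whose template differs least from the glyph.
    public static int recognize(BufferedImage glyph, BufferedImage[] templates) {
        int bestDigit = -1;
        long bestScore = Long.MAX_VALUE;
        for (int d = 0; d < templates.length; d++) {
            long score = difference(glyph, templates[d]);
            if (score < bestScore) {
                bestScore = score;
                bestDigit = d;
            }
        }
        return bestDigit;
    }

    // Sum of absolute per-pixel intensity differences (lower = better match)
    private static long difference(BufferedImage a, BufferedImage b) {
        long total = 0;
        for (int y = 0; y < a.getHeight(); y++) {
            for (int x = 0; x < a.getWidth(); x++) {
                int ga = a.getRGB(x, y) & 0xFF; // blue channel as intensity,
                int gb = b.getRGB(x, y) & 0xFF; // fine for black-and-white glyphs
                total += Math.abs(ga - gb);
            }
        }
        return total;
    }
}
```

A screenshot to feed this can be grabbed with java.awt.Robot's createScreenCapture(), which returns a BufferedImage you can cut into glyphs.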

Heatmap Tools For Web Apps

I have a client that wants the web app I'm building for them to display heat maps of their data.
I haven't worked with heat maps at all, and I was wondering if anyone knew of some good tools for generating them.
Thanks.
Heat maps are often used in place of a more conventional term: kernel density estimates. If you need to compute these on the fly, consider GRASS GIS, specifically the v.kernel or v.neighbors modules. These will generate a continuous estimate (i.e. a raster surface) of density at some target resolution (defined by the current region settings). GRASS GIS can be controlled via Python code, so it would be a simple matter to write a Python wrapper around the underlying modules that could export the results to your web application.
For small datasets, the R project has several functions for reading/writing spatial data, and computing kernel density estimates.
I realize this is an old, old post -- but the next guy to stumble across this page might try gheat for heatmaps in webapps. There are ports for Django and Google App Engine, if you happen to be using those backends.
I'll assume this is data with three values per data-point - we'll call them x, y and z.
It would really help if x and y were spatial coordinates as that makes things easier.
Anyway, generate a bitmap of x by y (scaled appropriately).
For each x and y pair in the data, scale z to between 0 and 1 (or 0 and however many colours you have in your map), and plot z as a colour represented by that value.
E.g. a simple map could just use the R portion of RGB, in which case you'd have 256 gradations for your red.
Most likely, you'd want something more fancy, but you should be able to get the idea.
If your datapoints are spread apart, you can either plot them as rectangles that take up the space, or smoothly interpolate between them.
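A minimal sketch of the red-channel version described above (assuming x and y are already pixel coordinates and z has been normalised to 0..1):

```java
import java.awt.image.BufferedImage;

public class SimpleHeatmap {
    // points is an array of {x, y, z} triples: x/y in pixels, z normalised to 0..1
    public static BufferedImage render(double[][] points, int width, int height) {
        BufferedImage img = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
        for (double[] p : points) {
            int x = (int) p[0];
            int y = (int) p[1];
            // Scale z to the 256 gradations of the red channel
            int red = (int) Math.round(p[2] * 255);
            img.setRGB(x, y, red << 16); // red sits in the high byte of 0xRRGGBB
        }
        return img;
    }
}
```

For sparse data you would replace the single setRGB() call with a filled rectangle or an interpolation pass, as noted above.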
NOTE: There is a web-based tool that does it here; I found it linked from the Wikipedia article on heat maps. There's a Java one linked from there too.
If you want to generate heatmaps on the client side (with JavaScript), I can recommend heatmap.js. It uses the HTML5 canvas element to generate dynamic web heatmaps; you can add new data at any time and the heatmap refreshes.
Pyheat is another good Python library for building heatmaps.
Gheat has already been mentioned in J.J.'s reply.
I can also recommend a heatmap tool for both PC and mobile web pages: http://miapex.com
Might I suggest my own jQuery plugin?
jQuery Hottie makes it easy to take normal markup and add a background color like so:
<!-- Coloring elements is easy! -->
<ul id="example1">
  <li>1</li>
  <li>2</li>
  <li>3</li>
  <li>4</li>
  <li>5</li>
</ul>
and then initialize the plugin with:
$('ul#example1 li').hottie();