How to set up a heatmap for temperature in Grafana

I'd like to set up a heatmap in Grafana that shows the temperatures recorded over, say, a week, so I can see the frequency of each temperature across 24 hours. For example, I've noticed that my temperature seems to peak around 7 PM, so I'd want 7 PM to show the darkest red blob, indicating that it's generally the warmest hour, with the boxes getting lighter as the evening goes on and things cool down. I have 3 sensors logging to an InfluxDB database, tagged by sensor location. I've absolutely no idea how I'd even start with something like this. I've tried setting up a heatmap (which sounds like the sort of thing I want). I've also seen the carpet plot mentioned, but haven't tried it yet and am unsure whether it would do what I want. Can anyone suggest the best way to do this and how I should proceed?
Thank you for your time.
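For what it's worth, a minimal sketch of the kind of query that is often fed to a Grafana heatmap panel from InfluxDB. The "environment" measurement, "temperature" field, and 'kitchen' tag value are assumptions for illustration only; adjust them to your schema.

    -- InfluxQL sketch for a Grafana heatmap panel; measurement, field and
    -- tag names here are assumptions, not taken from the question.
    -- $timeFilter is Grafana's time-range macro for the InfluxDB data source.
    SELECT mean("temperature")
    FROM "environment"
    WHERE "location" = 'kitchen' AND $timeFilter
    GROUP BY time(1h) fill(none)

Note that the heatmap panel keeps wall-clock time on the X axis, so over a week you would see the warm 7 PM band repeating each day rather than a single aggregated 24-hour profile; if you want hour-of-day on one axis and day on the other, the carpet plot plugin mentioned in the question is probably the closer fit.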

Related

How to do calculated Attributes in a database

I hope you can help me understand how to solve my issue. The basis is the creation of a database where the working hours of employees should be stored.
A little math is involved with these hours, and the combination of database + math seems to be a problem.
I want to store the theoretically workable hours of an employee:
52 (weeks a year) * 40 (hours a week) = 2080; minus holidays etc. -> 1900 hours of expected time per year.
His actually worked time would go up by 8 hours each day until he reaches 1900, which would already be an issue, meaning I don't really know how to implement that.
But the problem continues:
This time shall be split equally between 12 months. Okay, so 1900 divided by 12 in 12 different columns... sounds stupid, but now he reports sick in February, his actual time decreases within this month, and accordingly his overall working time decreases as well.
Also there are things like part-time workers or people taking a sabbatical, and these hours also need to be connected to different projects (another table in the same DB).
In Excel this issue was relatively easy to solve, but with a pure DB I am kind of lost as to how to approach it.
So, 2 questions: is something like this even possible in a MySQL DB (I somehow doubt it)?
And how would I do it (in the DB or with some additional software/frontend)?
It sounds like you are trying to make a DTR system (Daily Time Records). I recommend designing the database so it can cater to flexible scenarios for all types of employees: a common way of storing information (date and time) from which you can calculate these people's working hours.
You can worry about the algorithms later; they will follow from your database design.
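To make that concrete, here is a minimal sketch of the "store raw entries, calculate later" idea in MySQL. All table and column names are made up for illustration.

    -- One row per employee per day/event; nothing is pre-aggregated.
    CREATE TABLE time_entry (
        id          INT AUTO_INCREMENT PRIMARY KEY,
        employee_id INT NOT NULL,
        project_id  INT NULL,              -- link to your projects table
        work_date   DATE NOT NULL,
        hours       DECIMAL(4,2) NOT NULL, -- e.g. 8.00 for a normal day
        entry_type  ENUM('worked','holiday','sick','sabbatical') NOT NULL
    );

    -- Actual hours per employee per month, derived on demand instead of
    -- being stored in twelve separate columns.
    SELECT employee_id,
           DATE_FORMAT(work_date, '%Y-%m') AS work_month,
           SUM(CASE WHEN entry_type = 'worked' THEN hours ELSE 0 END) AS worked_hours
    FROM time_entry
    GROUP BY employee_id, DATE_FORMAT(work_date, '%Y-%m');

The 1900 expected hours would then live as a per-employee target (possibly in its own table), and over/under time is just the difference between that target and the summed actuals.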

Weka J48 Gets stuck on Building Model on Training Data

I'm trying to use Weka to look at my data sets. When I load my data set in, go to Classify, choose J48, and click Start, it will begin normally: the bird in the bottom right-hand corner will go back and forth and there will be an "x 1" next to it. The status will update to "Building model on train data", but then after a second or two the bird will stop and sit back down, and it will change to "x 0". No further progress is made after that.
The file I am looking at is a CSV file with 5 columns. The first row is a row of labels, and the total number of rows is 1971 (in each column, obviously).
I have done some research on this and found no solutions. Possibly I'm looking in the wrong place? Any guidance or resolutions to this issue would be much appreciated!
Img of Screen when stopped
May have found a solution myself... It could have been due to the size of the data. I reduced the amount of data and it started loading what look like large matrices. I'll provide two screenshots. Does this seem like a safe assumption to make, that it was due to data size? Confirmation would be appreciated.
Screenshot of J48 Results 1
Screenshot of J48 Results 2

importXML is making too many Mapquest transactions

I am trying to use the following formula and a key I obtained from mapquest (for the free service) to initiate their location services and calculate the distances between one variable location and 20 of my plant locations.
=importXML("http://mapquestapi.com/directions/v2/route?key=*****************&outFormat=xml&from=" & $B$3 & "&to=" & 56244,"//response/route/distance")
This is working flawlessly, except that after using it for a short period of time I received an email stating I have used 80% of my allotment (15,000 transactions) for the month.
The variable location has only been changed around 20-25 times this month, so I don't see how I could have used that many transactions. Can someone explain what exactly this formula is doing and how I could make it more efficient, if possible? I feel like it has to be making unnecessary transactions. Keep in mind I do not need the actual directions; all I need is the driving mileage.
Thanks in advance.

Filter records from a database on a minimum time interval for making graph

We have a MySQL database table with statistical data that we want to present as a graph, with timestamp used as the x axis. We want to be able to zoom in and out of the graph between resolutions of, say, 1 day and 2 years.
In the zoomed-out state, we will not want to fetch all the data from the table, since that would mean too much data being shipped through the servers, and the graph resolution will be good enough with less data anyway.
In MySQL you can make queries that only select e.g. every tenth row, which could be usable in this case. However, the intervals between values stored in the database aren't consistent: two values can be separated by as little as 10 minutes and as much as 6 hours, possibly more.
So the issue is that it is difficult to calculate a good stepping interval for the query. If we skip every tenth value for some resolution, that may work for series with 10 minutes in between, but for 6-hour intervals we will throw away too much and the graph will end up with too low a resolution for comfort.
My impression is that MySQL isn't able to make the stepping interval depend on time, so that it would skip rows that are e.g. within five minutes of an already included row.
One solution could be to set 6 hours as the minimum resolution requirement for the graph, so we don't throw away values unless 6 hours is represented by a sufficiently small distance in the graph. I fear that this may result in too much data being read and sent through the system if the actual interval is smaller.
Another solution is to have more intelligence in the Java code, reading sets of data iteratively from low resolution and downwards until the data is good enough.
Any ideas for a solution that would enable us to get optimal resolution in one read, without too large result sets being read from the database, while not putting too much load on the database? I'm having wild ideas about installing an intermediate NoSQL component to store the values in, that might support time intervals the way I want - not sure if that actually is an option in the organisation.
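One pattern that fits the "one read, bounded result set" requirement, instead of skipping every Nth row, is to let MySQL bucket the timestamps itself and average each bucket. A sketch, with assumed table and column names (stats_data, ts, val) and a one-hour bucket:

    -- Bucket rows into fixed intervals and return one averaged point per
    -- bucket; dense stretches get averaged, sparse ones pass through.
    -- 3600 (seconds) is an assumed bucket width chosen by the caller.
    SELECT
        FROM_UNIXTIME(FLOOR(UNIX_TIMESTAMP(ts) / 3600) * 3600) AS bucket_start,
        AVG(val) AS avg_val
    FROM stats_data
    WHERE ts >= ? AND ts < ?          -- zoom window supplied by the Java side
    GROUP BY bucket_start
    ORDER BY bucket_start;

The application can pick the bucket width from the zoom level, roughly (visible time span / number of points you want on screen), so the result set size stays bounded regardless of how irregular the raw intervals are.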

Good design for a DB that decreases data resolution over time

I have something like 20,000 data points in a database and I want to display it on the google annotated graph. I think around 2000 points would be a good number to actually use the graph for, so I want to use averages instead of the real amount of data points I have.
This data counts the frequency of something at a certain time; it would be like Table(frequency, datetime).
So for the first week the datetime will have an interval of 10 minutes, and frequency will be an average of all the frequencies in that 10-minute interval. Similarly, for the month after that I will have a datetime interval of an hour, etc.
I think this is something you can see on google finance too, after some time the resolution of the datapoints decreases even when you zoom in.
So what would be a good design for this? Is there already a tool that exists to do something like this?
I already thought of (though it might not be good) a giant table of all 20,000 points and several smaller tables that represent each time interval (1 week, 1 month etc) that are built through queries to the larger table and constantly updated and trimmed with new averages.
Keep the raw data in the db in one table. Then have a second reporting table which you populate from the raw table with a script or query. The transformation that populates the reporting table can group and average the buckets however you want. The important thing is to not transform your data on initial insert -- keep all your raw data. That way you can always roll back or rebuild if you mess something up.
ETL. Learn it. Love it. Live it.
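A rough MySQL sketch of that raw-table-plus-reporting-table setup, with made-up names and a fixed 10-minute bucket (the bucket width could be widened for older data when the job runs):

    -- Raw points are inserted as-is and never modified.
    CREATE TABLE raw_points (
        recorded_at DATETIME NOT NULL,
        frequency   DOUBLE NOT NULL
    );

    -- Pre-averaged buckets that the graph actually reads.
    CREATE TABLE report_points (
        bucket_start  DATETIME NOT NULL PRIMARY KEY,
        avg_frequency DOUBLE NOT NULL
    );

    -- Periodic job: (re)build 10-minute averages from the raw table.
    INSERT INTO report_points (bucket_start, avg_frequency)
    SELECT FROM_UNIXTIME(FLOOR(UNIX_TIMESTAMP(recorded_at) / 600) * 600) AS bucket_start,
           AVG(frequency)
    FROM raw_points
    GROUP BY bucket_start
    ON DUPLICATE KEY UPDATE avg_frequency = VALUES(avg_frequency);

The chart then queries report_points, while raw_points stays untouched so the buckets can always be rebuilt at a different resolution later.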