Grafana: Combination of variables (Athena Dataset) - mysql

Goal: I have an Athena dataset that is visualized with Grafana. With this I want to create several variables so I can precisely select individual areas. The test data has a format similar to this one:
Time SensorID Location Measurement
/ 1 Berlin 12.1
/ 2 London 14.0
/ 3 NewYork 23.3
/ 3 Sydney 45.1
/ 2 London 1.3
/ 1 NewYork 17.3
/ 2 Berlin 18.9
/ 3 Sydney 4.8
I now want 2 variables where I can select the SensorID and Location at the same time. For example, if I select SensorID = 1 and Location = Berlin, the Measurement in my Grafana graph should be 12.1.
Is there a solution to this issue? The syntax for the Athena plugin is very new to me, even if it is similar to MySQL. I tried to create the syntax, but it won't work for me (see the pictures below):
Creation of the first variable
Creation of the panel function for the different variables
I would really appreciate hearing about possible solutions or help with the Athena syntax :)

Is there a way to combine these variables in a way that makes sense?

Hello Stack Overflow community!
I am a sociology student working on a thesis project comparing home value appreciation and neighborhood racial composition over time.
I'm currently using two separate data sources and trying to combine them in a way that makes sense without aggregating anything.
The first data source is GIS data which has information on home sales in each year by home. The second is census data which has yearly estimates of racial composition by census tract. Both are in .csv formats.
My goal is to create a set of variables for each home row in the GIS data that represent the racial composition of the tract the home is in at the year it was sold (e.g. home 1 | 2010 | $500,000 | Census tract 10 | 10% white).
I began doing this by going into Stata and using the following strategy:
For example, if I'm looking at a home sold in 2010 in Census tract 10 and I find that this tract was 10% white in 2010, I can use something like
replace percentwhite = 10 if censustract==10 & year==2010
However, this seemed incredibly time consuming, as I'm using data that go back decades and cover a couple dozen Census tracts.
Does anyone have any suggestions on how I might do this smarter, not harder? The first thought I had was to aggregate the data by census tract and year, but was hoping to avoid that if possible. Thank you so much in advance for your help and have a terrific day and start to the new year!
It sounds like you can simply merge census data onto your GIS data. That will be much less painful than using -replace-. Here's an example:
*GIS data: information on home sales in each year by home
clear
input censustract house_id year house_value_k
10 100 2010 200
11 101 2020 500
11 102 1980 100
end
tempfile GIS_data
save `GIS_data'
*census data: yearly estimates of racial composition by census tract
clear
input censustract year percentwhite
10 2010 20
10 2000 10
11 2010 25
11 2000 5
end
tempfile census_data
save `census_data'
*easy method: merge the census data onto your GIS data
use `GIS_data', clear
merge m:1 censustract year using `census_data'
drop if _merge==2
list
*hard method: use -replace-
use `GIS_data', clear
gen percentwhite=.
replace percentwhite=20 if censustract==10 & year==2010
replace percentwhite=10 if censustract==10 & year==2000
replace percentwhite=25 if censustract==11 & year==2010
replace percentwhite=5 if censustract==11 & year==2000
list
Both methods "work", but using -merge- is much easier and less prone to errors.
Note: I intentionally created the data sets so that the merge wouldn't be perfect. You will likely want to drop some of the observations in that case. In the code above, I dropped observations where _merge==2.

M/Chip Select 4 tag values (9F7E and D5)

I'm working on a profile for MasterCard EMV cards on M/Chip Select 4 version 1.1b and I need some help understanding the data elements for the 9F7E (Application Life Cycle Data) tag and the D5 (Application Control) tag. Unfortunately, the MasterCard SSF form doesn't explain this information. From our card vendor we found a document that contains an application ID number; is it similar to the ALCD (9F7E)? And how could I find the D5 value?
9F7E is a 48-byte field organized into 4 parts:
2 bytes - version number (for your case, 03)
7 bytes - type approval ID, which is provided by MasterCard while certifying the applet
20 bytes - issuer-specific contents denoting application identification
20 bytes - application code identification
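As a rough illustration, here is a small Python sketch that splits a 9F7E value according to the breakdown above. The field names and lengths are taken only from that breakdown; verify them against MasterCard's documentation for your product before relying on them.

# Split a 9F7E (Application Life Cycle Data) value into the parts listed above.
# The lengths below simply follow that breakdown; check them against the spec.
ALCD_LAYOUT = [
    ("version_number", 2),
    ("type_approval_id", 7),
    ("issuer_specific_app_identification", 20),
    ("application_code_identification", 20),
]

def parse_9f7e(alcd: bytes) -> dict:
    parts, offset = {}, 0
    for name, length in ALCD_LAYOUT:
        parts[name] = alcd[offset:offset + length].hex().upper()
        offset += length
    return parts

# Dummy value, padded to cover the fields listed above (not from a real card).
example = bytes.fromhex("0003") + bytes(sum(n for _, n in ALCD_LAYOUT) - 2)
print(parse_9f7e(example))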
For options available on D5, download the M/Chip 4 Version 1.1 Issuer Guide to Debit and Credit Parameter Management.pdf from MasterCard Connect. It has bit-by-bit information.
You can find this information in:
M/Chip 4 Version 1.1 Issuer Guide to Debit and Credit Parameter Management
There, you will see the following (related to the tag D5 value):
The Application Control activates or de-activates functions in the application.
The coding of the Application Control data element varies depending on the
version of M/Chip 4, and on whether the Lite or Select application is being
used.
You are using M/Chip Select 4 version 1.1b, so here it is:
And if you have M/Chip Lite 4 version 1.1b, the first byte will look like this:
The second byte value is the same for both:
Hope this helps.

Time-aware WMS Service

I am trying to develop a WMS service using ThinkGeo Map Suite WMS Server Edition. There is a requirement for viewing past data. I am new to GIS, and based on my research it seems possible to make a WMS time-aware. I am specifically looking for an example or a suggestion that can point me in the right direction on how to achieve a time-aware WMS using the ThinkGeo SDK.
Thank you in advance.
As per the WMS specification, a "TIME" parameter has to be passed to enable temporal rendering of maps. The parameter format has to follow the ISO 8601:2000 specification.
Format:
ccyy-mm-ddThh:mm:ss.sssZ
EXAMPLE 1 : ccyy Year only
EXAMPLE 2 : ccyy-mm Year and month
EXAMPLE 3 : ccyy-mm-dd Year, month and day
EXAMPLE 4 : ccyy-mm-ddThhZ Year, month, day and hour in UTC
Period format :
EXAMPLE 1 : P1Y — 1 year
EXAMPLE 2 : P1M10D — 1 month plus 10 days
EXAMPLE 3 : PT2H — 2 hours
EXAMPLE 4 : PT1.5S — 1.5 seconds
The ThinkGeo WMS Server Edition does not have a property for this in the GetMapRequest class, but it can be handled using custom parameters. If someone has a better solution, that would be helpful.
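For illustration, here is a minimal Python sketch of what a GetMap request carrying a TIME value looks like on the wire. The endpoint, layer name, and bounding box are placeholders, and with ThinkGeo the TIME value would arrive as one of those custom parameters for your server-side code to interpret.

from urllib.parse import urlencode

def getmap_url(base_url, layer, bbox, time_value, width=800, height=600):
    # Standard WMS 1.1.1 GetMap parameters plus the optional TIME parameter.
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.1.1",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "STYLES": "",
        "SRS": "EPSG:4326",
        "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": width,
        "HEIGHT": height,
        "FORMAT": "image/png",
        "TIME": time_value,   # ISO 8601 instant, interval, or period as described above
    }
    return base_url + "?" + urlencode(params)

# Placeholder endpoint and layer; a single instant, an interval, and a periodic interval:
print(getmap_url("http://example.com/wms", "history", (-180, -90, 180, 90), "2015-06-01"))
print(getmap_url("http://example.com/wms", "history", (-180, -90, 180, 90), "2015-01-01/2015-12-31"))
print(getmap_url("http://example.com/wms", "history", (-180, -90, 180, 90), "2015-01-01/2015-12-31/P1M"))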

Weka Decision Tree

I am trying to use Weka to analyze some data. I've got a dataset with 3 variables and 1000+ instances.
The dataset references movie remakes and
how similar they are (0.0-1.0)
the difference in years between the movie and the remake
and lastly if they were made by the same studio (yes or no)
I am trying to make a decision tree to analyze the data. Using J48 (because that's all I have ever used) I only get one leaf. I'm assuming I'm doing something wrong. Any help is appreciated.
Here is a snippet from the data set:
Similarity YearDifference STUDIO TYPE
0.5 36 No
0.5 9 No
0.85 18 No
0.4 10 No
0.5 15 No
0.7 6 No
0.8 11 No
0.8 0 Yes
...
If interested, the data can be downloaded as a CSV here: http://s000.tinyupload.com/?file_id=77863432352576044943
Your data set is not balanced, because there are almost 5 times more "No" than "Yes" values for the class attribute. That's why the J48 tree is actually just one leaf that classifies everything as "No". You can do one of these things:
sample your data set so you have an equal number of No and Yes instances (see the sketch below)
try a better classification algorithm, e.g. Random Forest (it's located a few spaces below J48 in the Weka Explorer GUI)
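To illustrate the first option concretely, here is a minimal sketch in Python with scikit-learn rather than Weka (in Weka you would use a supervised resampling filter instead). The column names and CSV path are assumptions based on the snippet above.

import pandas as pd
from sklearn.tree import DecisionTreeClassifier
from sklearn.utils import resample

# Assumed columns: Similarity, YearDifference, StudioType; the file path is a placeholder.
df = pd.read_csv("remakes.csv")

# Downsample the majority class ("No") so both classes are equally represented.
majority = df[df["StudioType"] == "No"]
minority = df[df["StudioType"] == "Yes"]
majority_down = resample(majority, replace=False, n_samples=len(minority), random_state=42)
balanced = pd.concat([majority_down, minority])

# Train a small decision tree on the balanced data.
X = balanced[["Similarity", "YearDifference"]]
y = balanced["StudioType"]
tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(tree.get_n_leaves(), "leaves, depth", tree.get_depth())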

PathFinding Algorithm: How to handle changing Weights Efficiently

So, I have a simple pathfinding algorithm which precalculates the shortest route to several target endpoints, each of which has a different weight. This is somewhat equivalent to having a single endpoint with a node between it and each target, where those edges have different weights. The algorithm it uses is a simple spreading algorithm, which in 1D looks like this (| means wall, - means space):
5 - - - 3 | - - - 2 - - - - 2
5 4 - - 3 | - - - 2 - - - - 2 : Handled distance 5 nodes
5 4 3 - 3 | - - - 2 - - - - 2 : Handled distance 4 nodes
5 4 3 2 3 | - - - 2 - - - - 2 : Handled distance 3 nodes
5 4 3 2 3 | - - 1 2 1 - - 1 2 : Handled distance 2 nodes
Done. Any remaining rooms are unreachable.
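For reference, here is a small Python sketch of this spreading pass on a 2D grid. The grid representation, the max-value-wins rule, and the rule that a value stops spreading once it would drop to 0 are my reading of the 1D trace above.

# Spreading pass as described above: each target has a weight, a cell keeps the
# highest value any target can deliver to it, and a value stops spreading once
# stepping further would drop it to 0.
import heapq

def spread(walls, targets, width, height):
    """walls: set of (x, y) cells; targets: dict mapping (x, y) -> weight."""
    value, source = {}, {}                 # source[c] = cell that c was filled from
    heap = []
    for cell, weight in targets.items():
        value[cell], source[cell] = weight, None
        heapq.heappush(heap, (-weight, cell))
    while heap:
        neg_v, (x, y) = heapq.heappop(heap)
        v = -neg_v
        if value.get((x, y)) != v:
            continue                       # stale heap entry
        for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nb
            if not (0 <= nx < width and 0 <= ny < height) or nb in walls:
                continue
            if v - 1 > value.get(nb, 0):   # higher value wins; 1 is the smallest kept
                value[nb], source[nb] = v - 1, (x, y)
                heapq.heappush(heap, (-(v - 1), nb))
    return value, source

# The 1D example above as a 1-row grid: targets 5, 3, 2, 2 and a wall after the 3.
walls = {(5, 0)}
targets = {(0, 0): 5, (4, 0): 3, (9, 0): 2, (14, 0): 2}
value, source = spread(walls, targets, 15, 1)
print(["|" if (x, 0) in walls else value.get((x, 0), "-") for x in range(15)])
# -> [5, 4, 3, 2, 3, '|', '-', '-', 1, 2, 1, '-', '-', 1, 2]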
So, let's suppose I have a precalculated pathfinding solution like this one, where only the 5 is a target:
- - - - | 5 4 3 2 1 -
If I change the wall to a room, recomputing is simple: just re-handle all the distance nodes (but ignore the ones which already exist). However, I am not able to figure out an efficient way to handle what to do if the 4 became a wall. Clearly the result is this:
- - - - | 5 | - - - -
However, in a 2D solution I'm not sure how to efficiently deal with the 4. It is easily possible to store that the 4 depends on the 5 and thus needs recalculation, but how do I determine its new dependency and value safely? I'd rather avoid recalculating the entire array.
One solution, which is better than nothing, is (roughly) to only recalculate array elements within a Manhattan distance of 5 from the 5, and to maintain source information.
This would basically mean reapplying the algorithm to a selected area. But can I do better?
Hmmm. One solution I've come up with is this:
Keep a list of the nodes that are reachable most quickly from each node. If a node becomes a wall, check which node it was reachable from and grab the corresponding list. Then recheck all those nodes using the standard algorithm. When reaching a node where the new distance is smaller, mark it as being in need of retesting.
Take all the neighbors of the marked nodes which are unmarked and reapply the algorithm on them, ignoring any marked nodes that this technique hits. If the reapplied algorithm increases the value of a marked node, use the new value.
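A rough Python sketch of a simplified variant of that idea, assuming the value/source dictionaries come from a spread like the one sketched earlier: instead of marking and comparing, it clears the whole dependent region and re-spreads it from its surviving neighbours and from any targets inside it. This is a sketch of the approach, not a tuned implementation.

# When a cell becomes a wall: invalidate only the cells whose fill chain passed
# through it, then re-spread that region from its surviving neighbours and from
# any targets inside it. Works on the value/source dicts built by spread() above.
import heapq

def add_wall(walls, value, source, targets, wall_cell, width, height):
    walls.add(wall_cell)

    # 1. Collect every cell filled (directly or indirectly) through wall_cell.
    children = {}
    for cell, parent in source.items():
        children.setdefault(parent, []).append(cell)
    affected, stack = set(), [wall_cell]
    while stack:
        cell = stack.pop()
        affected.add(cell)
        stack.extend(children.get(cell, []))
    for cell in affected:
        value.pop(cell, None)
        source.pop(cell, None)

    # 2. Re-seed: targets inside the affected region keep their own weight, and
    #    untouched neighbours of the region can fill it back in.
    heap = []
    for cell in affected:
        if cell in targets and cell not in walls:
            value[cell], source[cell] = targets[cell], None
            heapq.heappush(heap, (-targets[cell], cell))
    for x, y in affected:
        for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nb in value and nb not in affected:
                heapq.heappush(heap, (-value[nb], nb))

    # 3. Relax exactly as in the normal spread; only the cleared region changes.
    while heap:
        neg_v, (x, y) = heapq.heappop(heap)
        v = -neg_v
        if value.get((x, y)) != v:
            continue
        for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nb
            if not (0 <= nx < width and 0 <= ny < height) or nb in walls:
                continue
            if v - 1 > value.get(nb, 0):
                value[nb], source[nb] = v - 1, (x, y)
                heapq.heappush(heap, (-(v - 1), nb))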