Non-Scaled SSRS Line Chart with Multiple Series - reporting-services

I am trying to present time series from multiple sensors on a single SSRS (v14) line chart.
I need to plot N series, each independently scaled to the space provided by the chart (i.e., each with its own independent vertical axis).
More about the data
There can be anywhere from ~1 to 10 series.
The challenge is that they span different orders of magnitude:
One might be degrees F (~0-212)
One might be carbon ppm (~1-16)
One might be ft-lbs thrust (~10k-100k)
The point is, they have no relation to each other and can be very different.
The exact value is not important; I can hide the vertical axis.
More about what I am trying to do
The idea is to show the multiple time series plotted together against time for the 4 hours before and after
'an event'. It's not necessarily the exact value that is important; the subject matter expert would be looking for something odd (temperature falls, thrust spikes, etc.).
Things I have tried
If there were just 2 series, I could easily use the secondary axis available in the SSRS chart. That's exactly the idea I am chasing, but in this case I want each of N series to plot using its own axis.
I have tried stacking N transparent graphs on top of each other. This would be a really ugly solution, but SSRS won't even let you do it; it unstacks them for you.
I have experimented with the Allow Scale Breaks property on the vertical axis. This would solve the problem, but we don't like the 'double jagged line'.
Turning on a logarithmic scale is a possibility. It does a better job of displaying all the data, but it's not really what we want: it changes the shape of data that ranges over a couple of orders of magnitude.
I tried the sparkline component and am having the same problem.

This approach is essentially the same as Greg's answer above. I've had to do this same process in the past to compare trends even though the units were dissimilar.
I took a very simple approach of adding an additional column to the query that showed each value as a percentage of the maximum value in each series.
As an example (just 2 series here for clarity), I started with data like this in myTable:
Series  Month  myValue
A       Jan    4
A       Feb    8
A       Mar    16
B       Jan    200
B       Feb    300
B       Mar    400
My Dataset query would be something like:
SELECT *,
       myValue * 1.0 / MAX(myValue) OVER (PARTITION BY Series) AS myPlotValue -- the * 1.0 guards against integer division if myValue is an integer column
FROM myTable
This gives us a final dataset which looks like this:
Series  Month  myValue  myPlotValue
A       Jan    4        0.25
A       Feb    8        0.5
A       Mar    16       1
B       Jan    200      0.5
B       Feb    300      0.75
B       Mar    400      1
As you can see, all plot values are now between 0 and 1.
I created the charts using the myPlotValue field and had the option of using the original values from the myValue field as data point labels.

After talking to some math people, it turns out this is a standard problem, and it is solved by a process called normalization of the data.
Essentially, you rescale all the series to fit in a given range (usually 0-1).
You can also scale and add an offset if that makes sense for your problem domain.
https://www.statisticshowto.datasciencecentral.com/normalized/
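For illustration, here is a minimal sketch of that kind of normalization in Python; the series names and values are made up, not taken from the question:
def normalize(values, lo=0.0, hi=1.0):
    # Linearly rescale so min(values) maps to lo and max(values) maps to hi.
    v_min, v_max = min(values), max(values)
    if v_max == v_min:          # flat series: avoid division by zero
        return [lo for _ in values]
    return [lo + (v - v_min) * (hi - lo) / (v_max - v_min) for v in values]

series = {
    "temperature_F": [70, 150, 212, 180],            # hypothetical sensor readings
    "thrust_ftlbs":  [10000, 55000, 100000, 80000],
}
plot_series = {name: normalize(vals) for name, vals in series.items()}
# Every series now spans 0-1, so they can all share one (hidden) vertical axis.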

Related

Find the Relationship Between Two Logarithmic Equations

No idea if I am asking this question in the right place, but here goes...
I have a set of equations that were calculated based on numbers ranging from 4 to 8: an equation for when this number is 5, one for when it is 6, one for when it is 7, etc. These equations were determined by fitting a best-fit line to data points in a Google Sheets graph (example graph not shown here). For example:
When the number is between 6 and 6.9, this equation is used: windGust6to7 = -29.2 + (17.7 * log(windSpeed))
When the number is between 7 and 7.9, this equation is used: windGust7to8 = -70.0 + (30.8 * log(windSpeed))
I am using these equations to create an image in Python, but the image is too choppy since each equation covers a range from x to x.9. In order to smooth the image out and make it more accurate, I would really need an equation for every 0.1 change in the number: an equation for 6, a different equation for 6.1, one for 6.2, etc. (An example output image created using the current equations is not shown here.)
So my question is: is there a way to find the relationship between the two example equations I gave above, in order to use that to create a smoother-looking image?
This is not about logarithms; for the purposes of this derivation, log(windSpeed) is a constant term. Rather, you're trying to find a fit for your mapping:
6 (-29.2, 17.7)
7 (-70.0, 30.8)
...
... and all of the other numbers you have already. You need to determine two basic search parameters:
(1) Where in each range is your function an exact fit? For instance, for the first one, is it exactly correct at 6.0, 6.5, 7.0, or elsewhere? Change the left-hand column to reflect that point.
(2) What sort of fit do you want? You are basically fitting a pair of parameterized equations, one for each coefficient:
x    y (intercept)        x    y (slope)
6    -29.2                6    17.7
7    -70.0                7    30.8
For each of these, you want to find the coefficients of a good matching function. This is a large field of statistical and algebraic study. Since you have four ranges, you will have four points for each function. It is straightforward to fit a cubic equation to each set of points in Cartesian space. However, the resulting function may not be as smooth as you like; in such a case, you may well find that a 4th- or 5th- degree function fits better, or perhaps something exponential, depending on the actual distribution of your points.
You need to work with your own problem objectives and do a little more research into function fitting. Once you determine the desired characteristics, look into scikit for fitting functions to do the heavy computational work for you.
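For illustration, here is a rough sketch of this coefficient-fitting idea using NumPy. Only the rows for 6 and 7 come from the question; the other anchor points and coefficient values are made-up placeholders, and np.log should be swapped for np.log10 if the original equations use base-10 logs:
import numpy as np

# (number, intercept, slope) anchor points; only the 6 and 7 rows are real.
numbers    = np.array([5.0, 6.0, 7.0, 8.0])
intercepts = np.array([-12.0, -29.2, -70.0, -125.0])   # -12.0 and -125.0 are placeholders
slopes     = np.array([11.0, 17.7, 30.8, 46.0])        # 11.0 and 46.0 are placeholders

# Fit each coefficient as a cubic function of the number (4 points -> exact cubic).
intercept_fit = np.polynomial.Polynomial.fit(numbers, intercepts, deg=3)
slope_fit     = np.polynomial.Polynomial.fit(numbers, slopes, deg=3)

def wind_gust(number, wind_speed):
    # Smoothly interpolated equation of the same form: a + b * log(windSpeed).
    return intercept_fit(number) + slope_fit(number) * np.log(wind_speed)

print(wind_gust(6.3, 25.0))   # now defined for any number in [5, 8], not just whole steps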

Cumulative Frequency Tables and Chart Output

I'm working with some rather large time-series data sets related to futures prices and am in the process of converting some calculations, which I previously did in Excel, to R. This conversion has been relatively straightforward thus far, but I'm having a bit of trouble replicating my histograms with their cumulative frequency distributions in R as I had them in Excel. If you're familiar with Excel, the Histogram function in the Data Analysis ToolPak automatically creates a cumulative frequency distribution table with the cumulative percentages of each, in this case, price level, next to the histogram.
I've had some success creating some basic histograms using ggplot, here is a snippet of that code:
ggplot(data = CrudeRaw, aes(x = CrudeRaw$X7_1_F)) +
  geom_histogram(breaks = seq(X7_F_M_L, X7_F_M_H, by = 0.01),
                 col = "blue",
                 fill = "white",
                 alpha = 0.2) +
  labs(title = "X7 1 Month Price Distribution",
       x = "Price Levels",
       y = "Frequency") +
  xlim(c(X7_F_M_L, X7_F_M_H)) +
  ylim(c(0, 100))
Several questions regarding formatting and usage.
a) CrudeRaw is a data frame which contains roughly 276 rows and no fewer than 50 columns. For the purposes of this project I've chopped the data into 20-period, 60-period, 120-period, 180-period, and 240-period subsets. The data is in chronological order by date.
Question(s): ggplot cannot take numeric data types, only data frames, so I can only feed it the entire df even though I am interested in creating distributions for the aforementioned subsets. Is there a way that I can still do this?
b) How do I get every bin (price) to show up on the x-axis rather than a number marking every 5 bins (-15, -10, -5, 0, 5 ..., 15)?
c) I've successfully created a cumulative frequency table using the following code:
round(cbind(cumsum(table(X7_F)))/NROW(X7_F),2)
But I'd like a way to either output each of these tables (of which there are many) to a CSV file or, ideally, create a "report" of sorts with R which can be saved to a PDF, or perhaps even embed the table within the histogram it is associated with.
d) I've done some searching on how to output data to a CSV file, but it wasn't clear from the examples I went over how I could output multiple arrays to the same sheet or workbook en masse. That is, I would like to output my 20-, 60-, 120-, 180-, and 240-period arrays of prices to the same workbook. I'm thinking that by creating another data frame I could then pass these subsets of the data to the ggplot function, which is what I mentioned I was having trouble doing in part a).
e) Lastly (for now) how do I overlay the CFD onto my histograms?
Please advise if you require any additional information or colour in order to help me and many thanks in advance for your responses!

Difference between quantile results and iqr

I'm trying to understand a little more about how Octave calculates quartiles and interquartile range. Consider the following:
A=[1 4 7 10 14];
quantile(A, [0.25 0.75])
ans = 3.2500 11.0000
This result seems consistent with Method 3 on the Wikipedia page about quartiles. Given that the interquartile range is Q3-Q1, I'd expect the result to be 7.75.
However, running iqr(A) gives a result of 6. Clearly this is calculated from 10 minus 4 from the original data, which is consistent with Method 2 from the same Wikipedia page.
What is the reason for using two different methods for calculating Q1 and Q3?
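For illustration (in Python rather than Octave), here is a sketch of how two common interpolation conventions give exactly these two answers; it assumes NumPy 1.22 or later, where np.quantile accepts the method keyword:
import numpy as np

A = np.array([1, 4, 7, 10, 14])

# 'hazen' places the p-th quantile at position n*p + 0.5 and interpolates,
# which matches the 3.25 / 11.0 result above.
q1, q3 = np.quantile(A, [0.25, 0.75], method="hazen")
print(q1, q3, q3 - q1)     # 3.25 11.0 7.75

# 'linear' places it at position (n-1)*p + 1, which lands on the actual data
# points 4 and 10 for this five-element vector, matching iqr()'s answer of 6.
q1, q3 = np.quantile(A, [0.25, 0.75], method="linear")
print(q1, q3, q3 - q1)     # 4.0 10.0 6.0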

Report builder 2.0 expression to sort values highest to lowest on a chart

I have a 3D cylinder chart that I am having some problems with. I want to effectively sort the cylinders with the highest value at the back and the lowest value at the front; otherwise the tallest cylinders cover the smallest ones.
I have tried sorting both a-z and z-a, but I really need it to be dynamic based on the values. I have also tried sorting by the actual value field, both a-z and z-a, but this seems to return completely random results.
The data in the database looks like the example below. I use a parameter to separate by supplier.
Date        catgeory_Type  cost  supplier
01/01/2013  apple          $5    abc
01/01/2013  pear           $10   def
01/01/2013  bannana        $15   cgi
01/02/2013  apple          $7    etc
01/02/2013  pear           $12   etc
01/02/2013  banana         $18   etc
I believe I need some form of expression that sorts the values based on cost, as both a-z and z-a in this instance would produce cylinders that block other cylinders.
I have tried sorting the series group by =Sum(Fields!cost.Value, "DataSet1") and by =Fields!cost.Value, but this seems to return random results.
I would be happy even if I could achieve a custom sort such as sort by "bannana, pear, apple" although for some "suppliers" this would still cause me an issue.
Edit 1: strangely enough, this works with a line chart but not a 3D cylinder chart.
Edit 2: attached is an example (image not included here). I want the tallest cylinders at the back, but the methods mentioned above do not work.
In Chart Area Properties -> 3D Options, enable series clustering:
Choose this option to cluster series groups. When multiple series for bar or column charts are clustered, they are displayed along two distinct rows in the chart area. If series are not clustered, their corresponding data points are displayed adjacent to each other in one row. This option is applicable only to bar and column charts.
Also try changing the rotation and inclination degrees to get a better look, and decrease the wall thickness as well.

calculating the density of a set

(I wish my mathematical vocabulary was more developed)
I have a website. On that website is a video. As a user watches the video, a bit of javascript stores how far they have gotten so far in the video. When they stop watching the video, that number of seconds is stored. There's no pattern to when the js will do this, unfortunately.
So if one person is watching the video, we might see this set:
3
6
8
10
12
16
And another person might get bored immediately:
1
3
This data is all stored in the same place, anonymously. So the sorted table with all this info would look like this:
1
3
3
6
8
10
12
16
Finally, the number of times the video was started at all is stored. In this case it would be 2.
So: how do I get the average 'high-time' (the farthest point reached in the video) across all of the times the video was played?
I know that if we had a value for every second:
1
2
3
4
5
6
7
...
14
15
16
1
2
3
Then we could count up the values and divide by the number of plays:
(19) / 2 = 9.5
Or if the data was otherwise uniform, say in increments of 5, then we could count that up and multiply it by 5 (in the example, we would have some loss of precision, but that's ok):
5
10
15
5
(4) * 5 / 2 = 10
So it seems like I have a general function which would work:
count * (1/d) / plays = avg
where d is the density of the numbers (in the example above with 5-second increments, d = 1/5).
Is there a way to derive the density, d, from a set of effectively random numbers?
Why not just keep the last time that has been provided, and average across those? If you either throw away, or only pay attention to, the last number, it seems like you could just average over these.
You might also want to check out the term standard deviation as the raw average of this might not be the most useful measurement. If you have the standard deviation as well, it could help you realize that you have an average of 7, but it is composed of mostly 1's and 15's.
If you HAVE to have all the data, like you suggested, I will try and think about this a little bit more. I'm not totally certain how you can associate a value with all the previous values that came with it. Do you ALWAYS know the sequence by which numbers are output? If so, I think I know of a way you could derive the 'last' one, which might be slightly computationally expensive.
If you only have a sequence of integers, I think you may be able to increase each value (exponentially?) to 'compensate' for the fact that a later value 'contains' earlier values. I'm still working through this idea, but maybe it will give someone else a seed. What if you average over the sum of these, and then take the base2 logarithm of this average? Does that provide any kind of useful metric? That should 'weight' the later values to the point where they compensate for the sum of earlier values. I think.
In Python-esque code:
from math import log2

total = 0
count = 0
for node in nodes:
    # Weight each reported time exponentially so a later value outweighs the
    # sum of all earlier ones; the log2 below undoes the weighting.
    total += 2 ** node.value
    count += 1
weighted_average = log2(total / count)
print(weighted_average)
print("Thanks Brian")
I think that #brian-stiner is on the right track in one of his comments.
Start with something like:
1
3
3
6
8
10
12
16
Turn that into numbers and counts.
1, 1
3, 2
6, 1
8, 1
10, 1
12, 1
16, 1
And then reading from the end down, find all of the points that happened more often than any remaining ones.
3, 2
16, 1
Take differences in counts.
3, 1
16, 1
And you have an estimate of stopping places.
This will not be an unbiased estimate. But if the JavaScript is independently inconsistent and the number of people is large, the biases should be fairly small.
It won't be right, but it will be close enough for government work.
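For what it's worth, a minimal sketch of this counting heuristic in Python (the function and variable names are mine):
from collections import Counter

def estimate_stops(times):
    # Count how often each reported time appears in the pooled, anonymous data.
    counts = Counter(times)
    # Scan from the largest time down, keeping each time whose count exceeds
    # every count seen so far; these are the candidate stopping points.
    marks, best = [], 0
    for t in sorted(counts, reverse=True):
        if counts[t] > best:
            marks.append((t, counts[t]))
            best = counts[t]
    marks.reverse()   # ascending by time, e.g. [(3, 2), (16, 1)]
    # The difference between consecutive counts estimates how many viewers
    # stopped at each candidate point.
    return [(t, c - (marks[i + 1][1] if i + 1 < len(marks) else 0))
            for i, (t, c) in enumerate(marks)]

print(estimate_stops([1, 3, 3, 6, 8, 10, 12, 16]))   # [(3, 1), (16, 1)]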
Assuming increments are always around 5, with some missing and some a bit longer or shorter, it won't be easy (possible?) to do this exactly. My suggestion: compute something like a 'moving count', similar to a moving average.
So, for second 7: count how many numbers are 5, 6, 7, 8, or 9 and divide by 5. That will give you a pretty good guess of how many people watched the 7th second. Do the same for second 10. The difference would be close to the number of people who left between seconds 7 and 10.
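A rough sketch of that moving count in Python (the names and example calls are mine):
def moving_count(times, second, window=5):
    # Count the reported times within a `window`-second band centred on
    # `second`, then divide by the window width to estimate how many viewers
    # were still watching at that second.
    half = window // 2
    return sum(second - half <= t <= second + half for t in times) / window

times = [1, 3, 3, 6, 8, 10, 12, 16]           # the pooled data from the question
still_at_7  = moving_count(times, 7)          # band 5-9 catches 6 and 8  -> 0.4
still_at_13 = moving_count(times, 13)         # band 11-15 catches 12     -> 0.2
# The drop between two such estimates approximates how many viewers left in between.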
To get the total time watched for each user, you'll have to parse the list from smallest to largest. If you have 4 views, you'll go through your list until you find that you no longer have 4 identical numbers; the last number where you had 4 identical numbers is the maximum of the first view. Then you'll look for when the 3 identical numbers stop, and so on. For example:
4 views data:
1111222233334445566778
4 views side by side:
1 1 1 1
2 2 2 2
3 3 3 3 <- first view max is 3 seconds
4 4 4 <- second view max is 4 seconds
5 5
6 6
7 7 <- third view max is 7 seconds
8 <- fourth view max is 8 seconds
EDIT- Oh, I just noticed that they are not uniform. In that case, the moving average would probably be your best bet.
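For completeness, a sketch of the side-by-side parsing described above; it assumes perfectly uniform, duplicated ticks, and the names are mine:
from collections import Counter

def per_view_maxima(ticks, views):
    # With uniform ticks, the k-th longest view's maximum is the largest time
    # that still appears at least k times in the pooled list.
    counts = Counter(ticks)
    return [max(t for t, c in counts.items() if c >= k)
            for k in range(views, 0, -1)]     # shortest view first

ticks = [int(ch) for ch in "1111222233334445566778"]
print(per_view_maxima(ticks, 4))   # [3, 4, 7, 8] -> the maxima of the four views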
The number of values roughly corresponds to the number of time periods in which your JavaScript sends the values (minus 1/2 if the video stop is accompanied by an obligatory time posting, since its moment is random within the interval).
If all clients have similar intervals and you know them, you may just use:
SELECT (COUNT(*) - 0.5) * 5.0 / (SELECT counter FROM countertable)
FROM ticktable
5.0 is the interval between the posts here.
Note that it does not even look at the values: you could as well just store "ticks".
For the max time, you could use MAX() on your field. Perhaps something like...
SELECT MAX(play_time) AS maxTime FROM video
Which would give you the longest time someone has played the video for.
If you want other things, like AVG(), then you'll need more complex queries, e.g. collecting on a per-user basis.
MySQL also has standard deviation functions, STDDEV() and STD(), which could help you too.