Use 2 Datasets inside a Matrix - reporting-services

I'm creating a report that shows how many sq ft my company worked during a time period and what the cost per sq ft is. I have these 2 datasets:
ServiceProviderSqFt: ServiceProviderID, ServiceProviderName, Total, Month
CostSqFt: ServiceProviderID, ServiceProviderName, Cost
So the matrix I created looks like this:
ServiceProvider | Expr(Months) | Cost Per Sq Foot |
ServiceProvider | Sum(Total)   | missing          |
So, the word missing is where I'm having problems. I need to put the Cost for each provider there, so it looks like this:
Service Provider | Jan | Cost Per Sq Foot
Provider 1 | 250 | 1.10 |
Any thoughts?
Thanks in advance

If you have to use 2 separate datasets, you can use the Lookup function. There are many resources on that out there. The best option with SSRS is to combine those datasets at the database level as subqueries so that you can work with just the one dataset in the report. Hopefully this points you in the right direction.
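For example, a minimal sketch of the Lookup approach, using the dataset and field names from the question: assuming the matrix is bound to ServiceProviderSqFt, the "missing" cell could match each provider to its row in CostSqFt and pull out Cost:

=Lookup(Fields!ServiceProviderID.Value, Fields!ServiceProviderID.Value, Fields!Cost.Value, "CostSqFt")

The first argument is evaluated against the matrix's own dataset, the second and third against the named dataset, so this returns the first matching Cost per ServiceProviderID.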

Related

SSRS - Reducing Processing time

My reports are parameterized stored procedures, with no filtering at the report level. No graphs or eye candy, just data, with 1 or 2 levels of grouping, and the data ordered in the SP.
The user can choose which columns they wish to see; typically they choose the supplied defaults, but they can pick from up to 100 additional optional columns.
The tablix has logic to "hide" columns the user doesn't want to see.
The stored procedure part is fast, but the processing time in SSRS typically accounts for about 95% of the total time.
Any ideas on how to make SSRS process a set of columns (that could be different for each user) more quickly? Even hidden columns seem to be fully processed - is there any way to make SSRS more efficient at ignoring what it won't need?
Thanks for your thoughts.
SSRS 2016, Oracle 12c
I would use a matrix and have the optional columns returned as rows from the SP so instead of something like
ColumnA | ColumnB | Optional1 | Optional2 | Optional3 | Optional4
ABC     | DEF     | 5         | 10        |           | 15
GHI     | KJL     | 20        |           |           | 25
It would return something like
ColumnA | ColumnB | OptionalCol | Amount
ABC     | DEF     | 'Optional1' | 5
ABC     | DEF     | 'Optional2' | 10
ABC     | DEF     | 'Optional4' | 15
GHI     | KJL     | 'Optional1' | 20
GHI     | KJL     | 'Optional4' | 25
In the report you could use a matrix with a column group grouped on OptionalCol.
This might make the SP slightly slower, but it would mean SSRS only has to render enough columns for the data selected. It also makes the design a lot simpler, as you don't have to worry about hiding columns.
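Since the data source here is Oracle, one way to reshape the SP output like that is UNPIVOT; a minimal sketch, assuming a hypothetical source table my_table with the columns shown above:

SELECT ColumnA, ColumnB, OptionalCol, Amount
FROM my_table
UNPIVOT (
    Amount FOR OptionalCol IN (
        Optional1 AS 'Optional1',
        Optional2 AS 'Optional2',
        Optional3 AS 'Optional3',
        Optional4 AS 'Optional4'
    )
);

UNPIVOT drops NULL amounts by default, which is exactly the sparse row layout shown above.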

How to group after two fields and concatenate information from third field in query

I want to group data in ONE table by two columns (numbers) and concatenate the information from a third column (short text) for the grouped results.
I am a non-programming end user of Access with only a little experience in SQL and no experience in VBA, but I managed to follow the steps in this other question, which already gets me halfway to solving my own problem:
Concatenate multiple rows in one field in Access?
That's why I will reuse the example data used in that post.
One probably just needs to tweak the code slightly for everything to work as intended.
The slightly changed example data from the referenced question looks like this:
Table "YourTable"
Year | Order Number | Product Types
2014 | 100001 | TV
2014 | 100001 | Phone
2016 | 100001 | Internet
2014 | 100002 | Phone
2014 | 100002 | Phone
2014 | 100003 | TV
2014 | 100003 | Internet
2015 | 100003 | Phone
2015 | 100003 | Notebook
For each available combination of Year and Order Number, I want all corresponding distinct entries in the column "Product Types" listed, separated by a slash or semicolon.
To do this with only one column to group by (Order Number), you can find the solution under the above-linked question in the answer by HansUp:
https://stackoverflow.com/a/12428291/3954188
He uses the function "Concatenate values from related records" provided by Allen Browne to achieve the desired result, and also provides the final query as an example. Everything works fine for grouping by one column using these resources.
How would I modify the query to get it working the way I'd like it to, or is this impossible and in need of another solution?
Please post the modified function code and/or query if possible. I managed to implement the function and the example solution from the other question, but I'm not well versed in SQL or VBA.
(I'm using Win 7, 64-bit and MS Office 2013.)
Include Year and Order Number in your query's GROUP BY. Then you want to concatenate the Product Types values within each of those groups.
I stored your sample data in an Access 2010 table named YourTable. With those data, this is the output from the query below ...
Year Order Number Expr1
---- ------------ --------------
2014 100001 Phone;TV
2014 100002 Phone;Phone
2014 100003 Internet;TV
2015 100003 Notebook;Phone
2016 100001 Internet
SELECT
    y.Year,
    y.[Order Number],
    ConcatRelated(
        '[Product Types]',
        'YourTable',
        '[Year]=' & y.Year & ' AND [Order Number]=' & y.[Order Number],
        '[Product Types]',
        ';'
    ) AS Expr1
FROM YourTable AS y
GROUP BY
    y.Year,
    y.[Order Number];
You can get this in MySQL using the following SQL (identifiers containing spaces must be quoted with backticks):
SELECT `Year`, `Order Number`, GROUP_CONCAT(DISTINCT `Product Types` SEPARATOR ';')
FROM YourTable
GROUP BY `Year`, `Order Number`;

Add variance in the rows of table

I have an SSRS report which should look like the one below:
--------------------------------
Year Product Total customers
--------------------------------
2015 prd1 100
prd2 50
prd3 60
2014 prd1 80
prd2 60
prd3 60
Variance
Prd1 20
Prd2 -10
Prd3 0
I've done the year-wise grouping and the data mapping, but I'm not sure how to add the variance (between 2015 and 2014) in each row, based on each product of the year.
Update:
My dataset looks like this
Year CategoryId CategoryDesc TotalCustomerCount
2013 Prd1 Testproduct 100
2013 Prd2 Testprod2 50
2013 Prd3 Tesrprod3 45
2014 Prd1 Testproduct 80
2014 Prd2 Testprod2 60
You can see that some products may be missing in a given year.
Note: The dataset is created from a dimensional cube and not from SQL queries.
It is kind of hard to tell exactly without knowing what your current dataset looks like.
But I believe that Stanislovas' example will be of little use to you, because it only works if your dataset has a single row for each product, with columns holding the total for each year, which I'm guessing you do not have because you used row grouping to get the above result. If you did, you could have used column grouping instead of row grouping to get a better overview.
You have two possibilities:
Replace your current dataset completely with a dataset that has columns for each year value (like in Stanislovas' example). To achieve this kind of dataset, you need your query to look something like this:
SELECT DISTINCT myTable.Product, t1.Total AS 'Total2014', t2.Total AS 'Total2015'
FROM myTable
JOIN (SELECT Product, SUM(Total) AS Total
      FROM myTable
      WHERE Year = 2014
      GROUP BY Product) AS t1 ON t1.Product = myTable.Product
JOIN (SELECT Product, SUM(Total) AS Total
      FROM myTable
      WHERE Year = 2015
      GROUP BY Product) AS t2 ON t2.Product = myTable.Product
This can then be used to make a table that looks like this:
---------------------------------------
| Product | 2014 | 2015 | Variance |
---------------------------------------
| prd1 | 100 | 80 | 20 |
| prd2 | 50 | 60 | -10 |
| prd3 | 60 | 60 | 0 |
...
Or you can add a second dataset that has these differences calculated before the data reaches the report. Here is an example to help you with your query: https://stackoverflow.com/a/15002915/4579864
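As a rough sketch of that second approach, reusing the hypothetical myTable from the query above, the per-product variance can be precomputed with conditional aggregation:

SELECT Product,
       SUM(CASE WHEN Year = 2015 THEN Total ELSE 0 END)
     - SUM(CASE WHEN Year = 2014 THEN Total ELSE 0 END) AS Variance
FROM myTable
WHERE Year IN (2014, 2015)
GROUP BY Product;

This also copes with products that are missing in one of the two years, since the absent year simply contributes 0 to its SUM.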
If you need any more help, just leave a comment and I'll try to explain further. This should at least get you started.
I think the easiest way to handle this requirement is to generate the data in the query using T-SQL. However, the output you require can be produced from SSRS using a dataset with the same structure as the one you provided in the update.
In order to recreate your scenario I used this dataset.
Year Product CategoryDesc TotalCustomerCount
2013 Prd1 Testproduct 100
2013 Prd2 Testprod2 50
2013 Prd3 Tesrprod3 45
2014 Prd1 Testproduct 80
2014 Prd2 Testprod2 60
My approach is to take the minimum and maximum year values (2013 and 2014) plus the product in every row, and look up the two TotalCustomerCount values to subtract them. This is a step-by-step guide.
First, create a calculated field in your dataset. Right-click the dataset in the Report Data pane, name the field Year_Product, and set this expression in the field source textbox:
=Fields!Year.Value & "-" & Fields!Product.Value
This will produce an additional field called Year_Product, which holds the Year and Product fields concatenated with a - character in the middle, for example: 2013-Prd1, 2014-Prd1, 2013-Prd3, etc.
Now create a tablix with this data arrangement:
In the cell highlighted in red use this expression:
=Lookup(Max(Fields!Year.Value, "DataSet13") & "-" & Fields!Product.Value,
        Fields!Year_Product.Value, Fields!TotalCustomerCount.Value, "DataSet13")
-
Lookup(Min(Fields!Year.Value, "DataSet13") & "-" & Fields!Product.Value,
        Fields!Year_Product.Value, Fields!TotalCustomerCount.Value, "DataSet13")
This will look up the max year of the row combined with its product in the Year_Product field and get the corresponding TotalCustomerCount, then do the same for the min year and subtract:
Year_Product: 2014-Prd1 has TotalCustomerCount: 80
Year_Product: 2013-Prd1 has TotalCustomerCount: 100
The above example produces -20, since 80 - 100 = -20.
It will show repeated products, because every product may be present twice. To avoid this it is necessary to sort the tablix by Product: go to the tablix properties and set the sorting option below.
Now hide the duplicated rows. Go to tablix properties / Row visibility and select "Show or hide based on an expression".
Use this expression to conditionally hide the duplicated products:
=IIF(
Fields!Product.Value=PREVIOUS(Fields!Product.Value),
true,
false
)
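Since the comparison already yields a Boolean, the same visibility rule can be written more compactly (equivalent behavior, just without the IIF wrapper):

=Fields!Product.Value = Previous(Fields!Product.Value)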
Finally, if you preview the report, you will see something like this (I recreated both of your tables).
Note that I am using only one dataset, the same one you provided in your question.
Hopefully this is what you are looking for; let me know if you need further help.

ssrs iif conditional sum based on dense rank

I looked at this thread already but get #Error
SSRS Conditional Summing
Back Story:
I have an SSRS report to QA. The Total Calls value is going up based on the Orders value, i.e. the Total Calls value repeats if a salesperson took 5 orders.
This should not be the case. Example:
The left side is wrong; the right side is correct at the employee level (in grey).
abc | 500 | order-001
not
abc | 500 | order-001
abc | 500 | order-002
abc | 500 | order-003
So I modified the SP to use the DENSE_RANK function.
Now, within SSRS:
At the supervisor level I want to do a sum of total calls:
=sum(IIF(Fields!Dense_Rank.Value=1 Or "NULL",Fields!TotalCalls.Value,0))
but this expression evaluates to #Error at the supervisor level.
Finally, I'm after a quick fix for this, not re-inventing the wheel or changing the requirements.
Any help would be greatly appreciated.
Assuming Fields!Dense_Rank.Value refers to a column in your dataset called Dense_Rank (naming fields after T-SQL functions is generally not advised, as it may lead to confusion), the #Error comes from the condition Fields!Dense_Rank.Value=1 Or "NULL": the Or operator is given a Boolean on one side and the string literal "NULL" on the other, which cannot be combined, and a null value must be tested with Is Nothing rather than compared to a string. I think what you are trying to achieve is the following:
=sum(
IIF
(
(Fields!Dense_Rank.Value=1 Or Fields!Dense_Rank.Value Is Nothing),
Fields!TotalCalls.Value,
0
)
)

Should I worry about 1B+ rows in a table?

I've got a table which keeps track of article views. It has the following columns:
id, article_id, day, month, year, views_count.
Let's say I want to keep track of daily views, one row per day, for every article. If I have 1,000 user-written articles, the number of rows would compute to:
365 (1 year) * 1,000 => 365,000
Which is not too bad. But let's say the number of articles grows to 1M and, as time passes, 3 years go by. The number of rows would then compute to:
365 * 3 * 1,000,000 => 1,095,000,000
Obviously, over time, this table will keep growing, and quite fast. What problems will this cause? Or should I not worry, since RDBMSs commonly handle situations like this?
I plan on using the views data in our reports, broken down by month or even year. Should I worry about 1B+ rows in a table?
The question to ask yourself (or your stakeholders) is: do you really need 1-day resolution on older data?
Have a look into how products like MRTG, via RRD, do their logging. The theory is that you don't store all the data at maximum resolution indefinitely, but regularly aggregate it into larger and larger summaries.
That allows you to have 1-second resolution for perhaps the last 5 minutes, then 5-minute averages for the last hour, then hourly for a day, daily for a month, and so on.
So, for example, if you have a bunch of records like this for a single article:
year | month | day | count | type
-----+-------+-----+-------+------
2011 | 12 | 1 | 5 | day
2011 | 12 | 2 | 7 | day
2011 | 12 | 3 | 10 | day
2011 | 12 | 4 | 50 | day
You would then, at regular intervals, create new record(s) that summarise these data; in this example, just the total count for the month:
year | month | day | count | type
-----+-------+-----+-------+------
2011 | 12 | 0 | 72 | month
Or the average per day:
year | month | day | count | type
-----+-------+-----+-------+------
2011 | 12 | 0 | 2.3 | month
Of course, you may need some flag to indicate the "summarised" status of the data; in this case I've used a 'type' column to distinguish the "raw" records from the processed ones, allowing you to purge the day records as required.
INSERT INTO statistics (article_id, year, month, day, count, type)
SELECT article_id, year, month, 0, SUM(count), 'month'
FROM statistics
WHERE type = 'day'
GROUP BY article_id, year, month
(I haven't tested that query, it's just an example)
The answer is "it depends". but yes, it will probably be a lot to deal with.
However - this is generally a problem of "cross that bridge when you need to". It's a good idea to think about what you could do if this becomes a problem for you in the future.. but it's probably too early to actually implement any suggestions until they're necessary.
My suggestion, if it ever comes to that, is not to keep the individual records for longer than X months (where you adjust X according to your needs). Instead, you'd store the aggregated data that you currently feed into your reports. You'd run, say, a daily script that looks at your records, grabs any that are over X months old, creates a "daily_stats" record of some sort, and then deletes the originals (or, better yet, archives them somewhere).
This ensures that only X months' worth of data are ever in the db, while you still have quick access to an aggregated form of the stats for long-timeline reports.
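A minimal sketch of such a script in MySQL-flavored SQL, assuming hypothetical table names article_views (the question's layout) and monthly_stats, with X = 6 months:

-- roll up anything older than 6 months into monthly aggregates
INSERT INTO monthly_stats (article_id, year, month, views_count)
SELECT article_id, year, month, SUM(views_count)
FROM article_views
WHERE (year * 12 + month) < (YEAR(CURRENT_DATE) * 12 + MONTH(CURRENT_DATE) - 6)
GROUP BY article_id, year, month;

-- then purge (or archive) the detail rows that were just aggregated
DELETE FROM article_views
WHERE (year * 12 + month) < (YEAR(CURRENT_DATE) * 12 + MONTH(CURRENT_DATE) - 6);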
It's not something you need to worry about if you can put some practices in place.
Partition the table; this should make archiving easier to do (see the sketch after this list)
Determine how much data you need at present
Determine how much data you can archive
Ensure that the table is built right, particularly in terms of data types and indexes
Schedule for a time when you will archive partitions that meet the aging requirements
Schedule for index checking (and other table checks)
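For the partitioning point above, a minimal sketch in MySQL (table and partition names are hypothetical); range-partitioning by year keeps each year's rows together, so aged partitions can be archived or dropped cheaply:

CREATE TABLE article_views (
    id          BIGINT   NOT NULL,
    article_id  INT      NOT NULL,
    day         TINYINT  NOT NULL,
    month       TINYINT  NOT NULL,
    year        SMALLINT NOT NULL,
    views_count INT      NOT NULL,
    PRIMARY KEY (id, year)  -- MySQL requires the partition key in every unique key
)
PARTITION BY RANGE (year) (
    PARTITION p2014 VALUES LESS THAN (2015),
    PARTITION p2015 VALUES LESS THAN (2016),
    PARTITION pmax  VALUES LESS THAN MAXVALUE
);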
If you have a DBA on your team, you can discuss it with them, and I'm sure they'll be glad to assist.
Also, as is done in many data warehouses (and I just saw #Taryn's post, which I agree with), store aggregated data as well. Which aggregates to keep follows readily from the data you hold in the involved table. If records can still be edited or updated, that makes it even more important to set restrictions, such as how much recent (and therefore still modifiable) data to keep, and to have procedures and jobs in place so that the aggregated data is checked and updated daily and can be checked or updated manually whenever changes are made. That way, data integrity is maintained. Discuss with your DBA what other approaches you can take.
By the way, in case you didn't already know: aggregated data are normally needed for weekly or monthly reports, and many other reports based on an interval. Choose the granularity of your aggregation as needed, but not so fine that maintaining it becomes tedious or excessive.