Partial Data Set in WEBI 4.0 - business-objects

When I run a query in Web Intelligence, I only get part of the data, but I want to get all of it.
The result set I am retrieving from the database is quite large (10 million rows). However, I do not want 10 million rows in my report; I want to summarize it so that the report has at most 50 rows.
Why am I getting only a partial data set as a result of WEBI query?
(I also noticed that in the bottom right corner there is an exclamation mark indicating I am working with a partial data set, and when I click refresh I still get the partial data set.)
BTW, I know I can see the SQL query while I build it in the query editor, but can I also see the corresponding query for a report I have already made? If yes, how?
UPDATE: I have tried editing the 'Limit size of result set to:' option in the Query Options of the Business Layer, first setting the value to 9 999 999 and then again unchecking the option entirely. However, I am still getting the partial result.
UPDATE: I have checked the number of rows in the result set: it is 9.6 million. Now it's even more confusing why I'm not getting all the rows (the maximum number of rows was set to 9 999 999).
SELECT
I_ATA_MV_FinanceTreasury.VWD_Segment_Value_A.Description_TXT,
count(I_ATA_MV_FinanceTreasury.VWD_Party_A.Party_KEY)
FROM
I_ATA_MV_FinanceTreasury.VWD_Segment_Value_A RIGHT OUTER JOIN
I_ATA_MV_FinanceTreasury.VWD_Party_A ON
(I_ATA_MV_FinanceTreasury.VWD_Segment_Value_A.Segment_Value_KEY=I_ATA_MV_FinanceTreasury.VWD_Party_A.Segment_Value_KEY)
GROUP BY 1

The "Limit size of result set" setting is a little misleading. You can choose an amount lower than the associated setting in the universe, but not higher. That is, if the universe is set to a limit of 5,000, you can set your report to a limit lower than 5,000, but you can't increase it.
Does your query include any measures? If not, and your query is set to retrieve duplicate rows, you will get an un-aggregated result.
If you're comfortable reading SQL, take a look at the report's generated SQL, and that might give you a clue as to what's going on. It's possible that there is a measure in the query that does not have an aggregate function (as it should).

While this may be a little off-topic, I personally would advise against loading that much data into a Web Intelligence document, especially if you're going to aggregate it to 50 rows in your report.
These are not the kind of data volumes WebI was designed to handle (regardless of whether it can cope with them or not). Ideally, you should push down as much of the aggregation as possible to your database (which is much better equipped to handle such volumes) and return only the data you really need.
Have a look at this link, which contains some best practices. For example, slide 13 specifies that:
50,000 rows per document is a reasonable number
What you need to do is to add a measure to your query and make sure that this measure uses an aggregate database function (e.g. SUM()). This will cause WebI to create a SQL statement with GROUP BY.
Another alternative is to disable the option Retrieve duplicate rows. You can set this option by opening the data provider's properties.
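As a hedged illustration of pushing the aggregation down, the generated SQL from the question could be wrapped in a database view and exposed in the universe, so that only the summarized rows ever reach WebI (the view name below is a placeholder; the tables and columns come from the question's SQL):

CREATE VIEW VW_Party_Count_By_Segment AS
SELECT
    s.Description_TXT,
    COUNT(p.Party_KEY) AS Party_Count
FROM I_ATA_MV_FinanceTreasury.VWD_Segment_Value_A s
RIGHT OUTER JOIN I_ATA_MV_FinanceTreasury.VWD_Party_A p
    ON s.Segment_Value_KEY = p.Segment_Value_KEY
GROUP BY s.Description_TXT;

Built on a view like this, the query can never hit the row limit, because the database only ever returns one row per segment.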

Related

What is the Optimized way to Paginate Active Record Objects with Filter?

I want to display the users list with pagination from my Rails API. However, I have a few constraints: before displaying the users, I want to check which users have access to the view files. Here is the code:
def verified_client
  conditions = {}
  conditions[:user_name] = fetch_verified_users_with_api_call # returns [user_1, user_2, ...]
  @users = User.where(conditions).where('access NOT LIKE ?', 'admin_%').ordered
  will_paginate(@users, params[:page])
end
Q1) Is there a way where I don't have to make an SQL call when users try to fetch subsequent pages (page 2, page 3, ... page n)?
Q2) What would happen when the verified_users list returns a million items? I suspect the SQL will fail.
I could have used LIMIT and OFFSET in the query, but then I would not know the total result count or the number of pages; to get those I would have to fire one more SQL call for the count and write my own logic to compute the number of pages.
Generated SQL:
select *
from users
where user_name IN (user_1, user_2 .... user_10000)
AND (access NOT LIKE 'admin_%')
That query is hard to optimize. It probably does essentially all the work for each page and there is no good way to prevent this scan. Adding these may help:
INDEX(access)
INDEX(user_name, access)
I have seen 70K items in an IN list, but I have not heard of 1M. What is going on? Would it be shorter to say which users are not included? Could there be another table with the user list? (Sometimes a JOIN works better than IN, especially if you have already run a Select to get the list.)
Could the admins be filtered out of the IN list before building this query? Then,
INDEX(user_name)
is likely to be quite beneficial.
Is there at most one row per user? If so, then pagination can be revised to be very efficient. This is done by "remembering where you left off" instead of using OFFSET. More: http://mysql.rjweb.org/doc.php/pagination
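As a rough sketch of the "remember where you left off" approach (assuming user_name is unique and indexed; :last_seen_user_name is a placeholder for the last value shown on the previous page):

SELECT *
FROM users
WHERE user_name > :last_seen_user_name
  AND user_name IN (...)            -- the verified users, as in the generated SQL above
  AND access NOT LIKE 'admin_%'
ORDER BY user_name
LIMIT 20;

Because the WHERE clause starts from the last seen key instead of skipping rows with OFFSET, each page reads only the rows it returns.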
Q1) Is there a way where I don't have to make an SQL call when users try to fetch subsequent pages (page 2, page 3, ... page n)?
The whole idea of pagination is that you make the query faster by returning a small subset of the total number of records. In most cases the number of requests for the first page will vastly outnumber the other pages, so this could very well be a case of premature optimization that might do more harm than good.
If it actually is a problem, it's better addressed with SQL caching, ETags or other caching mechanisms - not by loading a bunch of pages at once.
Q2) What would happen when the verified_users list returns a million items? I suspect the SQL will fail.
Your database or application will very likely slow to a halt and then crash when it runs out of memory. Exactly what happens depends on your architecture and how grumpy your boss is on that given day.
Q1) Is there a way where I don't have to make an SQL call when users try to fetch subsequent pages (page 2, page 3, ... page n)?
You can get the whole result set and store it in your app. As far as the database is concerned, this is not slow or non-optimal; performance, including memory, then becomes your app's problem.
Q2) What would happen when the verified_users list returns a million items? I suspect the SQL will fail.
What will happen is that all those entries will be concatenated into the SQL string. There is a maximum SQL statement size (in MySQL it is governed by max_allowed_packet), and a million entries would very likely exceed it.
A possible solution, if you have a way to identify the verified users in the database, is to do a join with that table, as sketched below.
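For example (a sketch only; the staging table and its name are hypothetical), the verified names could be bulk-inserted once and then joined instead of being interpolated into a huge IN (...) list:

CREATE TEMPORARY TABLE verified_users (user_name VARCHAR(255) PRIMARY KEY);
-- ... bulk-insert the names returned by fetch_verified_users_with_api_call ...

SELECT u.*
FROM users u
JOIN verified_users v ON v.user_name = u.user_name
WHERE u.access NOT LIKE 'admin_%';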
What is the Optimized way to Paginate Active Record Objects with Filter?
The three things that are not premature optimizations with databases are (1) using indexed queries rather than table scans, (2) avoiding correlated sub-queries, and (3) reducing network turns.
Make sure you have an index the query can use, in particular for the ordering, so be clear about what order you are asking for.
If, instead of relying on the access field starting with a prefix, you had a field that flags an admin user, you could create an index whose first column is that admin flag and whose second column is the one you are ordering by. This lets the database filter and sort the records efficiently, which is especially important when paging with OFFSET and LIMIT.
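A hedged sketch of that idea, assuming a new is_admin flag and ordering by user_name (both are assumptions, since the question does not say what .ordered sorts by):

ALTER TABLE users
  ADD COLUMN is_admin TINYINT(1) NOT NULL DEFAULT 0,
  ADD INDEX idx_admin_user_name (is_admin, user_name);

-- The paged query can then use the composite index for both the filter and the sort:
SELECT * FROM users
WHERE is_admin = 0
ORDER BY user_name
LIMIT 20 OFFSET 40;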
As for network turns, you might simply use paging and not worry about them. One idea is to prefetch the next page if possible: after getting the results of page 1, query for page 2; hold the page 2 results until they are viewed, then fetch page 3.

SSRS IIF efficiency

I am using the following expression to tier the sales figures.
=sum(iif(Fields!InitialValue.Value>=500000 and Fields!InitialValue.Value<1000000,Fields!InitialValue.Value,nothing))
Basically, I just change the greater than and less than values for each cell. We have 4 tiers.
From what I understand, the IIF statement will go through each line and evaluate it before returning anything.
I am also averaging the size of each new account, so I have 8 cells that evaluate the data each time. I will also need to add how many accounts are in each tier, which means 12 passes at the same data. It takes some time to generate this report.
Is this the most efficient method?
Thanks in advance for all your help!
From what I can tell there are two ways you could increase efficiency here. The way I would do it is to add a column to your query that labels each row by tier; the data then arrives in SSRS already classified and never needs to be re-evaluated. My theory is that SSRS is not as smart as a query optimizer. Another way, which may or may not speed things up, is to add a calculated field to your dataset that does essentially the same thing; I believe SSRS would then calculate it once and be done with it.
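A minimal sketch of the first option, assuming a dataset query over a Sales table (the table name and the upper tier boundaries are made up; only the 500,000-1,000,000 band comes from the question):

SELECT
    s.*,
    CASE
        WHEN s.InitialValue <   500000 THEN 'Tier 1'
        WHEN s.InitialValue <  1000000 THEN 'Tier 2'
        WHEN s.InitialValue <  5000000 THEN 'Tier 3'
        ELSE 'Tier 4'
    END AS Tier
FROM Sales s;

In the report you would then just SUM, AVG and COUNT grouped by Tier, instead of re-running the IIF over the whole dataset for every cell.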

Storing row order in MySQL

I need to give my script's admin page the ability to change the order in which rows are displayed.
There is a default order for newly added rows (they go to the end of the list), and the admin should be able to change the position of a specific row.
I'm planning to treat the rows like a doubly linked list so that I can reposition them.
Is it OK to use the linked-list method for saving the display position of MySQL rows?
Is there a better method?
Should I use a separate table to store the order, or is it OK to add two next and prev columns to the original table?
Is it possible then to use MySQL's ORDER BY with this method?
Edit: I also thought of using spaced order codes (e.g. 0, 100, 200, ...), but the gaps may eventually run out.
I think you'll be better off just storing the ordering position in a dedicated field, instead of trying to implement a linked list.
The issue with the linked list is that it requires some sort of list traversal to "reconstruct" the order before you can display it to the user. Normally you'd employ a recursive query to do that, but unfortunately MySQL doesn't support recursive queries, so you'll either need to fiddle with stored procedures or end up making a database round-trip for each and every list node.
All in all, just updating the order field of several rows from time to time (when you need to reorder) is probably cheaper than traversing the list every time (when you need to display it), especially if you mostly move rows by small distances. And if you introduce gaps (as you already mentioned), the number of rows you'll actually need to update falls dramatically, at the price of increased complexity.
You may also be able to piggy-back the order field onto the clustering mechanism offered by InnoDB.
YMMV, of course, but I'd advise benchmarking the simple order field approach on representative amounts of data before attempting to implement anything more sophisticated...
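For reference, a minimal sketch of the plain order-field approach with gaps (table and column names are placeholders):

ALTER TABLE items
  ADD COLUMN sort_order INT NOT NULL DEFAULT 0,
  ADD INDEX idx_sort_order (sort_order);

-- New rows go to the end of the list:
INSERT INTO items (title, sort_order)
SELECT 'new row', COALESCE(MAX(sort_order), 0) + 100 FROM items;

-- Display in the admin-defined order:
SELECT * FROM items ORDER BY sort_order;

-- Move row 42 between two neighbours by picking a value inside the gap;
-- renumber back to multiples of 100 only when a gap is exhausted.
UPDATE items SET sort_order = 150 WHERE id = 42;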

SSAS calculated measure: Access relational database

I recently asked a question about many-to-many relationships and how they can be used to calculate intersections, which was answered well. Now there is another nice-to-have requirement for our cube: extending that approach to more data. The general question remains: how many orders contain both product x and y?
However, the measure groups are now much larger, currently about 1.4 billion rows. I tried to implement that using the method described in the other post, with several hidden cross-referenced measure groups. However, this is simply too much for our hardware: the cube is reaching sizes close to 0.5 TB, and queries take several minutes to complete.
Now I would like to try another option: can I access our relational database in a calculated measure? It seems I can, using UDFs as described in this article. I could write a function in C# that queries our relational database and returns all the orders that contain the products chosen by the user. But in order to do that, I need to supply all the dimensional data the user has selected to the UDF. I also need the UDF to return the calculated value so it can be output as the result of the calculated member. Is that possible? If yes, how? The example Microsoft provides only includes a small deterministic string function as the UDF.
Here my own results:
It seems to be possible, though with limitations. The class Microsoft.AnalysisServices.AdomdServer.Context can provide you with the CurrentMember of each hierarchy; however, this does not work with Excel-style subselects. It either contains a single member or the All member.
Another option is to get the MDX query using the DMV SELECT * FROM $System.DISCOVER_SESSIONS. There is a column on that view which contains the last MDX query for a given session. However, in order not to overwrite your own last query, you must not use the current connection but open a new one. The session ID can be obtained through Microsoft.AnalysisServices.AdomdServer.Context.CurrentConnection.SessionID.
The second approach is OK for our use case. It does not allow you to handle axes, since the UDF has cell scope but you don't know which cell you are in. If anyone knows anything about that last bit, please tell me. Thanks!
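For what it's worth, a sketch of the DMV lookup used in that second approach (run on a new connection from inside the UDF; the session id placeholder would come from Context.CurrentConnection.SessionID):

SELECT SESSION_LAST_COMMAND
FROM $System.DISCOVER_SESSIONS
WHERE SESSION_ID = '<session id of the original connection>'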

Statistical Process Control Charts in SQL Server 2008 R2

I'm hoping you can point me in the right direction.
I'm trying to generate a control chart (http://en.wikipedia.org/wiki/Control_chart) using SQL Server 2008. Creating a basic control chart is easy enough. I'd just calculate the mean and standard deviations and then plot them.
The complex bit (for me at least) is that I would like the chart to reset the mean and the control limits when a step change is identified.
Currently I'm only interested in a really simple method of identifying a step change, 5 points appearing consecutively above or below the mean. There are more complex ways of identifying them (http://en.wikipedia.org/wiki/Western_Electric_rules) but I just want to get this off the ground first.
The process I have sort of worked out is:
Aggregate and order by month and year, apply row numbers.
Calculate overall mean
Identify if each data item is higher, lower or the same as the mean, tag with +1, -1 or 0.
Identify when there are 5 consecutive data items above or below the mean (currently using a cursor).
Recalculate the mean if 5 points are above or 5 points are below the mean.
Repeat until end of table.
Is this sort of process possible in SQL Server? It feels like I may need a recursive UDF, but recursion is a bit beyond me!
A nudge in the right direction would be much appreciated!
Cheers
Ok, I ended up just using WHILE loops to iterate through. I won't post full code but the steps were:
Set up a user defined table data type in order to pass data into a stored procedure parameter.
Wrote an accompanying stored procedure that uses row numbers and WHILE loops to iterate over each data value in the input table, then uses the current row number to do set-based processing on a subset of the input data (checking whether the following 5 points are above/below the mean, and recalculating the mean and standard deviations when that flag is tripped).
Outputs a table with the original values, row numbers, months, mean values, and the upper and lower control limits.
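A rough skeleton of that setup (names, column types and the output step are illustrative only, not the actual production code):

CREATE TYPE dbo.ControlChartInput AS TABLE (
    RowNum      INT PRIMARY KEY,   -- already ordered by year/month
    PeriodDate  DATE,
    Value       DECIMAL(18, 4)
);
GO

CREATE PROCEDURE dbo.usp_ControlChart
    @Data dbo.ControlChartInput READONLY
AS
BEGIN
    DECLARE @Row INT = 1;
    DECLARE @MaxRow INT = (SELECT MAX(RowNum) FROM @Data);
    DECLARE @Mean DECIMAL(18, 4) = (SELECT AVG(Value) FROM @Data);

    WHILE @Row <= @MaxRow
    BEGIN
        -- Step change: the next 5 points all sit on the same side of the current mean,
        -- so recalculate the mean (and control limits) from this point onwards.
        IF 5 = (SELECT COUNT(*) FROM @Data
                WHERE RowNum BETWEEN @Row AND @Row + 4 AND Value > @Mean)
           OR 5 = (SELECT COUNT(*) FROM @Data
                   WHERE RowNum BETWEEN @Row AND @Row + 4 AND Value < @Mean)
            SET @Mean = (SELECT AVG(Value) FROM @Data WHERE RowNum >= @Row);

        SET @Row = @Row + 1;
    END;

    -- ... output the rows together with RowNum, month, mean and upper/lower control limits ...
END;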
I've also got one up and running that works based on full Nelson rules and will also state which test the data has failed.
Currently it's only used by me as I develop it further, so I've set up an Excel sheet with some VBA that dynamically constructs a SQL string and passes it to a pivot table as the command text. That way you can repeatedly ping the USP with different data sets and also change a few of the other parameters controlling how the procedure runs (such as adjusting the control limits and the like).
Ultimately I want to be able to pass the resulting data to Business Objects reports and dashboards that we're working on.