I am creating a departmental bar chart that shows time frames for a set of tasks. Some departments share tasks, others have unique ones. I have the chart running, except that I don't want all possible tasks listed for every department. I would like to display only the tasks that the department actually did.
Here is an example of the data (# in days):
Dept  Task            Days
--------------------------
IT    Pending            5
IT    In Process         8
CD    Pending           10
CD    1st Inspection    15
CD    Re-inspection      5
In this case I don't want to see "1st Inspection" or "Re-inspection" for IT, because IT doesn't do those jobs, nor do we want CD to have "In Process".
Is it possible to remove these unneeded series for a category?
The primary reason for asking is that our data set is so large it is nearly impossible to read the report. I think removing these unneeded columns would really help.
It must have been a long week for me. I switched how I was generating the graph and got what I wanted. My data was fine; I am now using the status as the category field, and it's working.
Related
I have 6 datasets; each one is the same query but with a different WHERE clause based on employee type. They generate 6 tables. At the top of the report there is a summary table which uses reportitems!textboxXX.value to grab the totals for 2 particular fields from all the tables. I'm also using StartDate and EndDate parameters. Each reportitems! expression in the summary table comes from a different dataset in the report, if that is relevant.
When I run the report using dates from yesterday back to the 9th of May I get the desired output.
But when I go back to the 8th I get multiple rows, all the same, and as I go even further back I get even more copies of the same information. To my knowledge nothing interesting or different happened on the 8th of May, and the tables further down the report look exactly the same. Any idea what might be causing this? The design view looks like this, if that helps. There's no grouping or filters on the table.
Still not certain about the mechanics behind this, but I found a 'solution' so I figured I'd post it. I just made a new dataset for my summary table where the query was simply SELECT 1. Now I get a single row every time.
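A likely mechanic (my assumption, not confirmed in the thread): a table with no grouping renders its detail row once per row in its bound dataset, so a summary table bound to a detail dataset repeats once per underlying row, and widening the date range returns more rows. A minimal sketch of that behaviour using sqlite3; the table name, columns, and dates are hypothetical:

```python
import sqlite3

# Hypothetical detail data, standing in for the report's employee datasets.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE hours (emp TEXT, day TEXT, total INTEGER)")
conn.executemany("INSERT INTO hours VALUES (?, ?, ?)",
                 [("a", "2013-05-08", 8),
                  ("b", "2013-05-08", 6),
                  ("a", "2013-05-09", 8)])

# Summary bound to the detail dataset: one rendered row per source row,
# so widening the date range multiplies the "summary" rows.
wide = conn.execute(
    "SELECT * FROM hours WHERE day >= '2013-05-08'").fetchall()
print(len(wide))  # 3 -> the summary row would render 3 times

# Summary bound to a one-row dataset instead: always exactly one row.
one = conn.execute("SELECT 1").fetchall()
print(one)  # [(1,)]
```

This is why the SELECT 1 dataset fixes it: the summary renders once regardless of the date parameters.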
Lifecycle report
Before I spend a few hundred hours working on a report I wanted to get some feedback to determine if SSRS can do what I want to accomplish.
I have a database that stores data about product lifecycles. The database stores a variety of stages that a product may be in the form "Product Name: Stage Description: start date: end date".
When I query the database the data comes back with one row per stage, so if a product has 4 stages (for example: Planned, Production, Retire, Remove) I get 4 rows of data. This first problem can probably be solved in SQL by concatenating all stages into a single row.
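The concatenation step can be done entirely in SQL. A sketch using sqlite3's GROUP_CONCAT (SQL Server would use STRING_AGG or FOR XML PATH instead); the table, product, and dates below are made up for illustration:

```python
import sqlite3

# One row per stage, as described in the question (hypothetical data).
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE lifecycle
    (product TEXT, stage TEXT, start_date TEXT, end_date TEXT)""")
conn.executemany("INSERT INTO lifecycle VALUES (?, ?, ?, ?)", [
    ("Widget", "Planned",    "2010-01-01", "2010-06-30"),
    ("Widget", "Production", "2010-07-01", "2013-12-31"),
    ("Widget", "Retire",     "2014-01-01", "2014-06-30"),
    ("Widget", "Remove",     "2014-07-01", "2014-12-31"),
])

# Collapse to one row per product, stages joined into a single string.
rows = conn.execute("""
    SELECT product, GROUP_CONCAT(stage, ', ')
    FROM lifecycle
    GROUP BY product""").fetchall()
print(rows)  # one row for 'Widget' (SQLite doesn't guarantee stage order)
```

For the horizontal timeline itself, though, the per-stage rows with start and end dates are actually the more useful shape, since each stage becomes one colored bar segment.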
The data needs to be displayed horizontally plotted against a time scale (in years and quarters) with different colors to represent each stage using the associated start and end date for each stage. In addition, there are special conditions where we may pay extended support for a product that has gone "end of life" from a manufacturer. I would like to indicate where this occurs with a symbol (like a $ sign).
If SSRS cannot do this I would be interested in other suggested products.
Comments, opinions and suggestions appreciated.
[Lifecycle Report]
A tale of two cities, almost... I have 17,000 rows of data that come in as a pair of strings in 2 columns. There are always 5 item numbers and 5 item unit counts per row (unit counts are always 4 characters). Each unit has to match up with an item or the row is invalid. What I'm trying to do is "unpivot" the strings into individual rows of Item Number and Item Units.
So here's an example of one row of data and the two columns
Record ID Column: 0
Item Number Column: A001E10 A002E9 A003R20 A001B7 XA917D3
Item Units Column: 001800110002000300293
I wrote a C# Windows app test harness to unpivot the data into individual rows and it works fine and dandy. It unpivots the data into 85,000 (5 × 17,000) rows and displays them in a grid, which is what I expect (ID, Item Number and Item Units).
0 | A001E10 | 0018
0 | A002E9 | 0011
and so on...
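The parsing logic by itself is straightforward. A standalone sketch in Python (the real code is C# inside the SSIS script, emitting rows to an output buffer, so this only mirrors the unpivot step); note the units string below is a well-formed 20-character example, since 5 counts × 4 characters should give exactly 20:

```python
# Unpivot one source row, assuming five space-separated item numbers and
# five fixed-width (4-character) unit counts that must pair up.
def unpivot(record_id, item_numbers, item_units):
    numbers = item_numbers.split()            # 5 item numbers
    units = [item_units[i:i + 4]              # 5 fixed-width unit counts
             for i in range(0, len(item_units), 4)]
    if len(numbers) != 5 or len(units) != 5:
        raise ValueError(f"invalid row {record_id!r}")  # counts must match
    return [(record_id, n, u) for n, u in zip(numbers, units)]

rows = unpivot("0",
               "A001E10 A002E9 A003R20 A001B7 XA917D3",
               "00180011000200030029")
print(rows[0])  # ('0', 'A001E10', '0018')
print(rows[1])  # ('0', 'A002E9', '0011')
```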
In my SSIS package I added a script task to process this same data, using basically the same code as my test harness. When I run the task I can see it loads the 17,000 rows, but it only generates roughly 15,000 on the output, so obviously something isn't right.
What I'm thinking is that I don't have the script task set up correctly, even though it uses the same code as my test harness, and that it's dropping records for some reason.
If I go back into my task and give it a particular record ID that it missed on the first pass, it will process that ID and generate the right output. So the record itself is fine, but for some reason it gets missed or dropped in the initial process. Maybe something to do with buffers?
Well - I figured it out.
We have a sequence container with tons of data flow tasks inside it that run in parallel. We're relying on the engine to prioritize and handle the data extract and load correctly. However, this one particular script task was not being handled correctly by the engine within that sequence container.
The clue was that the script task ran fine on its own, outside the whole process. So we pulled it out of the sequence container, placed it by itself after the container, and now it runs correctly.
I have a project table which lists every project. I have a cost center table which lists every cost center. I have an analyst table which shows the project, the cost center, and the analyst assigned to them. The projects and cost centers are drop-down lists. Every project should have every cost center included in it, and for every project and cost center combination there should be an analyst assigned. How do I see which ones I have missed? The query I keep trying has two outer joins, and Access doesn't like that. With 30 projects and 15 cost centers it is easy to forget to assign an analyst to one of the combinations.
It would also be helpful to have some kind of query that easily shows who is assigned to what projects, preferably in a crosstab format (similar to a pivot table). I think I can do that if I have the correct query that links these 3 tables together and shows every project with every cost center and which analyst is assigned to them.
If my setup with 3 tables is the main problem I can redo the database design. I thought I was designing it correctly by having a separate table for projects and cost centers and a 3rd table that combines them with the analysts. But now that I can't figure out how to get this query to work, I am thinking maybe that wasn't the best design idea.
Sorry. I figured it out. I guess writing the question helped me think it through.
I used one query that had the project table and the cost center table in it. This created a list of every possible combination.
I then made a second query that linked the first query to the analyst table. I forced the query to show every combination from the first query and then tell me when an analyst matched that combination. This way I get blanks for every time I missed adding an analyst. It was also very easy to turn this 2nd query into a pivot table that shows all of the blanks.
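The two-query approach described above can be written as a cross join followed by a left join, with a NULL analyst marking each missed combination. A sketch using sqlite3 (Access would split this into two saved queries, and these table and column names are assumptions, not the poster's actual schema):

```python
import sqlite3

# Hypothetical miniature version of the three tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE projects (project TEXT);
CREATE TABLE cost_centers (cost_center TEXT);
CREATE TABLE analysts (project TEXT, cost_center TEXT, analyst TEXT);
INSERT INTO projects VALUES ('P1'), ('P2');
INSERT INTO cost_centers VALUES ('CC1'), ('CC2');
INSERT INTO analysts VALUES ('P1', 'CC1', 'Alice'), ('P2', 'CC2', 'Bob');
""")

# Query 1's job: every possible combination (CROSS JOIN).
# Query 2's job: LEFT JOIN to analysts; NULL means nobody is assigned.
missing = conn.execute("""
    SELECT p.project, c.cost_center
    FROM projects p CROSS JOIN cost_centers c
    LEFT JOIN analysts a
      ON a.project = p.project AND a.cost_center = c.cost_center
    WHERE a.analyst IS NULL
    ORDER BY p.project, c.cost_center""").fetchall()
print(missing)  # [('P1', 'CC2'), ('P2', 'CC1')]
```

Dropping the WHERE clause gives the full grid (every combination plus its analyst or a blank), which is the shape you'd feed into a crosstab.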
Sorry again for posting this question.
I'm trying to select the ten most similar properties for a given property on a realty site, and I was wondering if you guys could help me out. The variables I'm working with are price (int), area (int), bathrooms (int), bedrooms (int), suites (int) and parking (int). At the moment I'm thinking of ordering by ABS(a - b), but wouldn't that be slow if I had to calculate that every time a property is viewed? (I'm not sure I could cache this, since the database is constantly being updated.) Is there another option?
Thanks for your help!
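For reference, the ABS-based ordering itself might look like this (a sketch with sqlite3; the table, sample data, scale factors, and the subset of attributes are all assumptions). One detail worth noting: the attributes have very different ranges, so each difference is divided by a rough scale to keep price from dominating the room counts:

```python
import sqlite3

# Hypothetical properties table with a subset of the listed attributes.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE properties
    (id INTEGER, price INTEGER, area INTEGER, bedrooms INTEGER)""")
conn.executemany("INSERT INTO properties VALUES (?, ?, ?, ?)", [
    (1, 200000, 90, 2),
    (2, 210000, 95, 2),
    (3, 500000, 200, 4)])

# Attributes of the property being viewed (id 1), excluded from results.
target = (200000, 90, 2)
similar = conn.execute("""
    SELECT id
    FROM properties
    WHERE id <> 1
    ORDER BY ABS(price - ?) / 10000.0   -- scaled so units are comparable
           + ABS(area - ?) / 10.0
           + ABS(bedrooms - ?)
    LIMIT 10""", target).fetchall()
print(similar)  # [(2,), (3,)] -- id 2 is the closer match
```

The question's concern is valid: this expression can't use an index, so it scans every property on each page view.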
One solution could be to create a new table containing the precomputed results, like this:
property_id similar_properties_ids
--------------------------------------
1 2,5,8
2 3,10
...
...
A cron job running at regular intervals would do the calculation for all the properties and fill in similar_properties_ids.
That way there's no calculation overhead at runtime, but the downside is that the results are a little stale (as of the last cron run).
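A sketch of that precompute-then-lookup pattern with sqlite3 (the schema, distance function, and data are assumptions for illustration; in production the loop would run from the cron job, not at request time):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE properties (id INTEGER, price INTEGER, area INTEGER);
CREATE TABLE similar_properties
    (property_id INTEGER, similar_properties_ids TEXT);
INSERT INTO properties VALUES
    (1, 200000, 90), (2, 210000, 95), (3, 500000, 200);
""")

# Cron-job side: for each property, rank the others by a simple
# absolute-difference distance and store the top ten ids as CSV.
for (pid, price, area) in conn.execute("SELECT * FROM properties").fetchall():
    neighbours = conn.execute("""
        SELECT id FROM properties WHERE id <> ?
        ORDER BY ABS(price - ?) + ABS(area - ?) LIMIT 10""",
        (pid, price, area)).fetchall()
    ids = ",".join(str(n) for (n,) in neighbours)
    conn.execute("INSERT INTO similar_properties VALUES (?, ?)", (pid, ids))

# Runtime side: a page view becomes a cheap single-row lookup.
row = conn.execute("""
    SELECT similar_properties_ids
    FROM similar_properties WHERE property_id = 1""").fetchone()
print(row[0])  # "2,3"
```

A normalized variant (one row per property pair instead of a CSV column) would also let you join back to the properties table directly.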