Second dataset result not showing on first execution - reporting-services

I had created an SSRS report which generates a list of students with their marks in the selected subjects, displayed in a matrix successfully. After that I needed to show the overall analysis of each subject in the same report.
So I created two procedures in total.
The first one, for dataset 1, returns the list of students with their respective subject-wise marks.
Alongside the temp tables, I created a permanent table into which I push the analysis of each subject.
The second procedure, for dataset 2, fetches the subject-wise analysis from that table.
Every time the second proc executes, I delete the data from that table.
My problem is that I'm not getting the data from dataset 2 on the first execution; from the second run onwards I get the data.
Whenever I change the parameters, the first run again returns no data.
ALTER Proc [dbo].[SP_Get_IGCSESubjectMarks_GetLastTerm_HTS2] --7,'1,17,8','2537,2555,2558,2568'
(
@ReportId int = 7,
@SubjectId varchar(200),
@SectionId varchar(200)
)
AS
BEGIN
-------------------
------------------- Some code
Insert into #temp (Name,Class,SubjectName,Section,enrollNo,TermName,TestName,Marks)
select Name,Class,'Total',Section,enrollNo,TermName,'Percentage',SUM(Marks)*100/sum(maxmarksare) from #temp1
GROUP BY Name,Class,Section,enrollNo,SubjectName,TermName,SubjectOrder
-- TmpIgcseData is the permanent table into which I push the subject-wise analysis; it is the table the 2nd dataset reads from.
delete from TmpIgcseData
insert into TmpIgcseData(enrollno,SName,SubjectName,TestName,Marks,OrderNumber) select enrollno,Name,SubjectName,TestName,Marks,SubjectOrder from #temp
select @UID as Id,* from #temp
drop table #temp
drop table #temp1
end
--------------------------------------------------------------
ALTER PROC [dbo].[IgcesResultAnalysis_HTS2]
AS
begin
------------
------------ Some code.
select * from #distsubjects
--Deleting the data from the table
delete from TmpIgcseData
End

This is really bad practice. You are editing data in a permanent table (TmpIgcseData). What happens if two people execute the report at the same time? You also cannot rely on the execution order of the datasets in SSRS.
It is much better to pass the required parameters to both procedures and do all the work within each procedure. In other words, do not rely on another proc to prepare the data unless you call that proc from within your main proc and restrict yourself to temp tables scoped to the main proc.
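As a rough sketch of that approach (column types here are assumptions, so adjust them to your data), the analysis proc could take the same parameters as the first one and build everything in its own temp table instead of reading TmpIgcseData:
ALTER PROC [dbo].[IgcesResultAnalysis_HTS2]
(
    @ReportId int = 7,
    @SubjectId varchar(200),
    @SectionId varchar(200)
)
AS
BEGIN
    -- #-tables are scoped to this session, so two users running
    -- the report at the same time cannot overwrite each other
    CREATE TABLE #igcse
    (
        enrollno    int,
        SName       varchar(100),
        SubjectName varchar(100),
        TestName    varchar(100),
        Marks       decimal(18, 2),
        OrderNumber int
    );

    -- recompute the subject-wise analysis from the base tables here,
    -- using the parameters above, then return it:
    SELECT * FROM #igcse;
END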

Related

Split table data into two tables from one dataset in SSRS

I need to split the rows from one dataset into two tables in SSRS.
The first table should have the first 30 records and the second table should start at row number 31. The number of records may increase, so this should be dynamic. I need to do this in the SSRS design only, not in the SP.
I have tried the expression RowNumber(Nothing)/30 as a table filter, but the RowNumber function cannot be used in a table filter.
Please suggest.
I don't think you can use any kind of aggregation in a table filter, so you would have to look at alternatives.
If you cannot change the stored proc then you could dump the results of the stored proc into a temp table then do additional processing on that. You can do all this in your report's dataset query.
For example:
-- capture the stored proc output (column list must match the proc's result set)
CREATE TABLE #t (myFirstSPColumn int, mySecondSPColumn varchar(10))

INSERT INTO #t
EXEC myStoredProc

-- number the rows in blocks of 30: rows 1-30 get TableNumber 0, rows 31-60 get 1, and so on
SELECT *,
       (ROW_NUMBER() OVER(ORDER BY CountryID) - 1) / 30 AS TableNumber
FROM #t
This will run the stored proc, put the results into a temp table and then add a TableNumber column which you can use directly in your report
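With TableNumber now an ordinary dataset field, no aggregation is needed in the filter: on the first tablix set the filter Expression to =Fields!TableNumber.Value with Operator = and Value 0, and on the second use Operator >= and Value 1 so it picks up row 31 onwards.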

extract data from sql, modify it and save the result to a table

This may seem like a dumb question. I want to set up a SQL database with records containing numbers. I would like to run a query to select a group of records, take the values in that group, do some basic arithmetic on the numbers, and then save the results to a different table, still linked by a foreign key to the original records. Is that possible in SQL without taking the data to another application and then importing it back? If so, what is the basic function/procedure to accomplish this?
I'm coming from an excel/macro/basic python background and want to investigate if it's worth the switch to SQL.
PS. I'm wanting to stay open source.
A tiny example using PostgreSQL (9.6):
-- Create tables
CREATE TABLE initialValues(
    id serial PRIMARY KEY,
    value int
);

CREATE TABLE addOne(
    id serial,
    id_init_val int REFERENCES initialValues(id),
    value int
);

-- Init values
INSERT INTO initialValues(value)
SELECT a.n
FROM generate_series(1, 100) as a(n);

-- Insert values into the second table by selecting them from the first one
WITH init_val as (SELECT i.id, i.value FROM initialValues i)
INSERT INTO addOne(id_init_val, value)
(SELECT id, value + 1 FROM init_val);
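To verify the foreign-key link, you can join the result table back to the originals (a quick check, not part of the setup):
-- each addOne row points at the initialValues row it was derived from
SELECT a.id_init_val, i.value AS original, a.value AS modified
FROM addOne a
JOIN initialValues i ON i.id = a.id_init_val
LIMIT 5;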
In MySQL you can use CREATE TABLE ... SELECT (https://dev.mysql.com/doc/refman/8.0/en/create-table-select.html)
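A minimal sketch of that in MySQL (note this copies the data but does not create the foreign key, so add the constraint separately if you need the link enforced):
CREATE TABLE addOne AS
SELECT id AS id_init_val, value + 1 AS value
FROM initialValues;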

load TableC from TableB based on value of TableA in SSDT/SSIS

I have 3 tables-
--server 1
CREATE TABLE TableA (GROUP_ID INT
,STATUS VARCHAR(10))
--server 2
CREATE TABLE TableB (GROUP_ID INT
,NAME VARCHAR(10)
,STATE VARCHAR(50)
,COMPANY VARCHAR(50))
-- server 1
CREATE TABLE TableC (GROUP_ID INT
,NAME VARCHAR(10)
,STATE VARCHAR(50)
,COMPANY VARCHAR(50))
Sample data
INSERT INTO TableA (GROUP_ID, STATUS) VALUES (1, 'READY'), (2, 'NOT READY'), (3, 'READY'), (4, 'NOT READY')
INSERT INTO TableB (GROUP_ID, NAME, STATE, COMPANY) VALUES (1, 'Mike', 'NY', 'aaa'), (1, 'Rick', 'OK', 'bbb'), (2, 'Smith', 'TX', 'ccc'), (3, 'Nancy', 'MN', 'bbb'), (4, 'Roger', 'CA', 'aaa')
I am trying to build an SSDT (SSIS 2012) package to load the data into TableC from TableB for only those GROUP_IDs which have STATUS = 'READY' in TableA, and then change the STATUS to 'LOADED'.
I need to accomplish this using project-level parameters or variables for TableA's GROUP_ID and STATUS, because I will be doing this for about 60 tables and those values might change.
I must build an SSIS package; it is a requirement.
Using a linked server is not preferred, unless it's impossible to achieve this through SSIS.
Any help would be appreciated.
As the two tables are on separate servers, you could create a Data Flow with two Sources. You'll need to set up Connection Managers to both databases, then point one Source to the database holding TableA, and the other to the database holding TableB. Once this is done, you can join the two with a Merge Join, and then discard the records which don't have the value or values you want using a Conditional Split.
First you'll need to set up the Sources as already discussed. However, since you want to use a Merge Join, you'll need to sort the output from the sources. You can do this in SSIS with a Sort transform, but you're better off just building an ORDER BY clause into the SELECT statement that you have in the source (see the sketch after these steps), and then telling SSIS that the output is sorted:
Right click on each Source, and select Show Advanced Editor.
Go to the Input and Output Properties tab.
Select OLE DB Source Output, then set IsSorted on the right-hand side to True.
Expand OLE DB Source Output, then expand Output Columns.
Click on the column you're sorting by (presumably GROUP_ID), and set SourceKeyPosition to 1.
That last bit can be a little fiddly - getting around the properties in SSIS takes some practice if you're not used to it.
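As a rough sketch, the two source queries could look like this (table and column names taken from the question; both must sort on the join key):
-- Source 1: the server/database holding TableA
SELECT GROUP_ID, STATUS
FROM dbo.TableA
ORDER BY GROUP_ID;

-- Source 2: the server/database holding TableB
SELECT GROUP_ID, NAME, STATE, COMPANY
FROM dbo.TableB
ORDER BY GROUP_ID;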
Since the STATUS value you want to load might change, you could set this up in the Project Parameters. Just go to that page from the Solution Explorer, and click to add a new parameter.
As you're using 2012, you'll be able to configure this value after release in SSMS, avoiding the need to re-work this or create a configuration file.
When you set up the Conditional Split, you have a couple of options. If you might want to send rows with other STATUS values into other tables in future, you should add a case for each value of interest; if you only care about the READY rows, a single condition comparing STATUS to the parameter is enough.
When you drag the output of the Conditional Split to the destination, it'll ask which output you want to use. If you've set it up the same way I have, use Conditional Split Default Output, and it'll pass through all rows which don't meet one of the conditions you've stated.
If you need to update the values of the data while you're loading it, it depends where you want the updates to show. If you want to leave TableA and TableB alone, but change the value in TableC, then you could set up a Derived Column transform after the Conditional Split and before the Destination. You could then replace the value in the STATUS column with one you set (this can be parameterised, as above).
If you want to update the STATUS field in TableA, then you should go back to the Control Flow, and after the Data Flow you've been working on, add an Execute SQL Task which is connected to the database holding TableA, and which runs a simple SQL update statement.
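For example, the Execute SQL Task could run something like this (hard-coded here for clarity; you can map the two status values from the project parameters instead):
UPDATE dbo.TableA
SET STATUS = 'LOADED'
WHERE STATUS = 'READY';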
If this is going to be running outside of business hours and you know there won't be any new rows during this time, you can simply update all rows which currently have a STATUS of READY. If you need to update the rows more precisely because the situation might be continuing to change while you work, then you might need to re-think this - one option would be to grab all of the GROUP_ID values you want to update at the beginning, store that in a variable, and use the variable as a parameter in the Source select statements and Execute SQL Task update statement. You could also choose to work in a loop instead, but that would obviously be a lot slower than operating on the rows in bulk.
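If you do capture the groups up front, that first step is just a query whose full result set you store in an SSIS variable (the variable name here is an assumption):
-- Execute SQL Task at the start of the Control Flow;
-- store the result set in an Object variable, e.g. User::ReadyGroups
SELECT GROUP_ID
FROM dbo.TableA
WHERE STATUS = 'READY';
The Sources and the final update then filter on that captured list instead of re-reading STATUS.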
This part is from my original answer before the question was updated, but I'll leave it here in case it's useful to anyone else:
If the tables (A and B) are in the same database, instead of the Conditional Split you could set the source up to be a select statement which joins Table A to Table B, and has a WHERE clause that only selects the rows with a STATUS of READY:
select b.GROUP_ID, b.NAME, b.STATE, b.COMPANY
from TableA a
inner join TableB b
on a.GROUP_ID = b.GROUP_ID
where a.STATUS = 'READY';

increment a value when a row is selected SQL

Is there any way to keep track of how many times a row has been pulled from a SQL table?
For example, in my table I have a column count. Every time a SQL statement pulls a particular row (let's call it rowA), rowA's count value increases by 1.
Either in the settings of the table or in the statement would be fine, but I can't find anything like this.
I know that I could split it into two statements to achieve the same thing, but I would prefer to only send one.
The best way to do this is to restrict read-access of the table to a stored procedure.
This stored procedure would take various inputs (filter options) to determine which rows are returned.
Before the rows are returned, their counter field is incremented.
Note that the update and the select share the same where clause.
create procedure Select_From_Table1
    @pMyParameter varchar(20) -- sample filter parameter
as
begin
    -- First, update the counter, only on the rows that match our filter
    update MyTable
    set Counter = Counter + 1
    where MyFilterField like CONCAT('%', @pMyParameter, '%') -- sample filter enforcement

    -- Now, return those rows
    select *
    from MyTable
    where MyFilterField like CONCAT('%', @pMyParameter, '%') -- sample filter enforcement
end
A decent alternative would be to handle it on the application side in your data-access layer.
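If you really do want a single statement (and to close the gap between the update and the select, during which another session could change the rows), SQL Server's OUTPUT clause can update the counter and return the rows at once - a sketch using the same sample names as above:
update MyTable
set Counter = Counter + 1
output inserted.*
where MyFilterField like CONCAT('%', @pMyParameter, '%')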

SQL Server 2008 - how to automatically drop and create an output table?

I would like to set up a table within a SQL Server DB that stores the results from a long and complex query that takes almost an hour to run. After running the query the rest of the analysis is done by colleagues using Excel pivot tables.
I would prefer not to output the results to text, and want to keep it within SQL Server and then just set up Excel to pivot directly from the server.
My problem is that the output will not always have exactly the same columns, and manually setting up an output table to INSERT INTO every time would be tedious.
Is there a way to create a table on the fly based on the type of data you are selecting?
E.g. if I want to run:
SELECT
someInt,
someVarchar,
someDate
FROM someTable
And insert this into a table called OutputTable, which has to look like this
CREATE TABLE OutputTable
(
someInt int null,
someVarchar varchar(255) null,
someDate date null
) ON [primary]
Is there some way to make SQL Server interrogate the fields in the select statement and then automatically generate the CREATE TABLE script?
Thanks
Karl
SELECT
someInt,
someVarchar,
someDate
INTO dbo.OutputTable
FROM someTable
...doesn't explicitly generate a CREATE script (at least not one you can see) but does the job!
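One caveat: SELECT ... INTO fails if OutputTable already exists, so to re-run it you need to drop the table first. On SQL Server 2008 (which predates DROP TABLE IF EXISTS), the usual pattern is:
IF OBJECT_ID('dbo.OutputTable', 'U') IS NOT NULL
    DROP TABLE dbo.OutputTable;

SELECT
    someInt,
    someVarchar,
    someDate
INTO dbo.OutputTable
FROM someTable;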