I want to use the Phonograph writeback dataset for downstream analysis in Foundry. When I make an edit to Phonograph, will the writeback dataset be automatically updated too?
No. The writeback dataset is only built automatically when the registration with Phonograph is updated. Typically, users either:
1. Build the writeback dataset on a recurring schedule appropriate for the use case (e.g. every day), or
2. Build it on demand when analysis is needed.
I have developed an SSIS package to run 3 reports from Reporting Services that are data-driven subscriptions.
When I run the SSIS job, it executes all 3 reports at once; what I need is to run the reports sequentially, in other words one by one. How can I do this?
This is expected behavior. When you trigger a data-driven subscription job, SQL Server Agent starts the job, and as far as SSIS is concerned that step is complete as soon as the job has started. The SSIS package then goes on to trigger the next data-driven subscription job, and the next (assuming you have put the job triggers in sequence).
Now if you want to create a dependency in the way the jobs run, i.e. Job1 followed by Job2 followed by Job3, you need to write an additional piece of code yourself. The way to go about it is to monitor the status of each subscription.
In the ReportServer database there is a table called dbo.Subscriptions containing a column named 'LastStatus'. I don't currently have any subscriptions in my local database and I haven't been able to find documentation for the table, but I am fairly sure this is either a boolean or a status flag such as 'Success' or 'Failure'. After triggering the first job, you would write some .NET code that monitors this status on a polling interval; once you get the desired outcome, move on to triggering the next job.
Hope this is clear. I will update this answer with a working example when I can; in the meantime, a rough sketch of the approach is below.
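Here is that sketch in C#. The connection strings, the subscription ID, the terminal status text, and the Agent job name are all assumptions you would need to replace with your own values.

```
// Sketch: poll ReportServer.dbo.Subscriptions.LastStatus for one subscription,
// then kick off the next data-driven subscription's SQL Server Agent job.
// Connection strings, IDs, status text and job name below are assumptions.
using System;
using System.Data.SqlClient;
using System.Threading;

class SubscriptionMonitor
{
    static void Main()
    {
        const string reportServerConn = "Server=.;Database=ReportServer;Integrated Security=SSPI;";
        const string msdbConn         = "Server=.;Database=msdb;Integrated Security=SSPI;";
        const string subscriptionId   = "00000000-0000-0000-0000-000000000000"; // hypothetical
        const string nextJobName      = "DataDrivenSubscription_Job2";           // hypothetical

        // Poll every 30 seconds until the subscription reports a terminal status.
        while (true)
        {
            string status;
            using (var conn = new SqlConnection(reportServerConn))
            using (var cmd = new SqlCommand(
                "SELECT LastStatus FROM dbo.Subscriptions WHERE SubscriptionID = @id", conn))
            {
                cmd.Parameters.AddWithValue("@id", Guid.Parse(subscriptionId));
                conn.Open();
                status = (string)cmd.ExecuteScalar();
            }

            Console.WriteLine("Current status: " + status);

            // The exact LastStatus text varies; treating a status that starts with
            // "Done" as the finished state is an assumption to verify locally.
            if (status != null && status.StartsWith("Done", StringComparison.OrdinalIgnoreCase))
                break;

            Thread.Sleep(TimeSpan.FromSeconds(30));
        }

        // Start the next data-driven subscription job via SQL Server Agent.
        using (var conn = new SqlConnection(msdbConn))
        using (var cmd = new SqlCommand("msdb.dbo.sp_start_job", conn))
        {
            cmd.CommandType = System.Data.CommandType.StoredProcedure;
            cmd.Parameters.AddWithValue("@job_name", nextJobName);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}
```

You would repeat the poll-then-start step after Job2 before kicking off Job3.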
I have a folder with around 15 reports in it; these are Report Server reports. Running each report individually will take a while, so I want them to run together. What I want to be able to do is somehow run all the reports in this folder at once. Is this possible?
This is a somewhat ambiguous question, so let me explain. What are you asking specifically?
Q: Can you run multiple reports at the same time?
A: Yes, and there are several ways to accomplish this.
1. You can use SQL Server Agent jobs
2. Use batch files with Task Scheduler
3. Use an SSIS package and an Agent job to run them at specific times, etc.
Hopefully none of the reports depends on another. Another thing you have to take into consideration is how hard you will be hitting the SSRS or SQL Server: running them all at one time may take longer than one at a time, depending on the capacity of the SQL Server and which tables are going to be locked during each of these processes.
You might want to give a little more detail in your question...
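If you do end up hitting the report server directly, here is a rough sketch that renders every report in a folder at the same time via SSRS URL access (the rs:Format parameter); the server URL, report paths, and export format are assumptions.

```
// Sketch: export several SSRS reports in parallel via URL access.
// The server URL and report paths are assumptions.
using System;
using System.IO;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;

class ParallelReportRunner
{
    static async Task Main()
    {
        const string reportServerUrl = "http://reportserver/ReportServer"; // hypothetical
        string[] reportPaths =
        {
            "/SalesReports/DailySales",   // hypothetical report paths
            "/SalesReports/Inventory",
            "/SalesReports/Backorders"
        };

        var handler = new HttpClientHandler { UseDefaultCredentials = true };
        using (var client = new HttpClient(handler))
        {
            // Kick off every export at once and wait for all of them to finish.
            var exports = reportPaths.Select(path => ExportAsync(client, reportServerUrl, path));
            await Task.WhenAll(exports);
        }
    }

    static async Task ExportAsync(HttpClient client, string serverUrl, string reportPath)
    {
        // URL access: ?/Folder/Report&rs:Format=EXCELOPENXML renders the report as .xlsx.
        string url = serverUrl + "?" + Uri.EscapeDataString(reportPath) + "&rs:Format=EXCELOPENXML";
        byte[] content = await client.GetByteArrayAsync(url);

        string fileName = reportPath.TrimStart('/').Replace('/', '_') + ".xlsx";
        File.WriteAllBytes(fileName, content);
        Console.WriteLine("Exported " + reportPath + " -> " + fileName);
    }
}
```

As noted above, firing them all at once is only worthwhile if the report server and the underlying tables can handle the concurrent load.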
I would recommend an SSIS package, especially as it is also one of the options presented by #Michael and it can email the Excel workbook you mentioned in an earlier comment.
The following resource covers the execution and export of an SSRS report using SSIS quite well, including code you will need as a starting point: Executing an SSRS Report from an SSIS Package.
You could save some time in coding the solution by using the following custom Task that can be integrated into SSIS: SSIS ReportGenerator Task.
There is one problem in your requirements, though, which is merging the reports into one Excel workbook; I assume you want a separate sheet for each report within the same workbook?
Reporting Services can use multiple worksheets (to divide a report up into pages, a.k.a. pagination), but only for a single report; it can't merge several reports into one Excel file. This can be accomplished with custom code, however. There's a somewhat basic example here: Merging workbooks into a master workbook with a separate sheet for each file.
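As a starting point for that custom code, here is a rough sketch that copies the first worksheet of each exported workbook into a single master workbook using Excel interop; the folder and file paths are assumptions, and it needs Excel installed plus a reference to Microsoft.Office.Interop.Excel.

```
// Sketch: merge each exported report workbook into one master workbook,
// one sheet per report. Paths below are assumptions.
using System.IO;
using Excel = Microsoft.Office.Interop.Excel;

class WorkbookMerger
{
    static void Main()
    {
        string sourceFolder = @"C:\Reports\Exports";      // hypothetical export folder
        string masterPath   = @"C:\Reports\Master.xlsx";  // hypothetical output file

        var excel = new Excel.Application { DisplayAlerts = false, Visible = false };
        Excel.Workbook master = excel.Workbooks.Add(); // note: keeps its default blank sheet

        try
        {
            foreach (string file in Directory.GetFiles(sourceFolder, "*.xlsx"))
            {
                Excel.Workbook source = excel.Workbooks.Open(file, ReadOnly: true);
                Excel.Worksheet sheet = (Excel.Worksheet)source.Worksheets[1];

                // Copy the report's sheet after the last sheet of the master workbook,
                // then name it after the source file.
                sheet.Copy(After: master.Worksheets[master.Worksheets.Count]);
                ((Excel.Worksheet)master.Worksheets[master.Worksheets.Count]).Name =
                    Path.GetFileNameWithoutExtension(file);

                source.Close(SaveChanges: false);
            }

            master.SaveAs(masterPath);
        }
        finally
        {
            master.Close();
            excel.Quit();
        }
    }
}
```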
One way to run all the reports at once is to add a subscription to each of them and set the same subscription start time on all of the reports. Once the start time arrives, all the reports will run simultaneously and will generate an Excel/PDF (or whatever format is specified) file at the shared location.
Assume a fairly conventional SSRS 2012 report (in Visual Studio 2012) with a main report, a set of sub-reports, a shared dataset that is populated at the start of the report, and a shared data source.
Is there any simple way within a sub-report's custom code (this is VBA, right?) to access the shared dataset, either to read or update records locally? (No updates back to the database itself.) I'm seeing hints out there that this is possible but no clear examples yet.
And if the above is possible, assuming that a call in the sub-report changed a record in the shared dataset, could that record change be displayed in the main report body?
Yes and No.
I think the overall concept would work but a few points won't.
I don't think you'd be able to use the report dataset from the custom code; the code won't have access to the report's data source directly. You'd probably need to use ADO to access the database from VB.
The only way to see the updated data would be to refresh the report, either manually or automatically on a timer.
I don't see how the subreport is going to figure out what to update the value to. You might have some idea that I'm not seeing right now.
The easier way I see this working would be to use parameters that default to NULL: select the row to update with one parameter and the new value with another, then have an UPDATE in your main query that only runs if the parameters are populated.
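To make that concrete, here is a rough sketch of the pattern as plain ADO.NET in C# (the same logic could live in VB, or directly in the report's main dataset query); the table, column, and parameter names are hypothetical.

```
// Sketch of the "update only when the parameters are supplied" pattern.
// Table, column, and parameter names are hypothetical.
using System;
using System.Data.SqlClient;

class ConditionalUpdate
{
    // rowId and newValue mirror report parameters that default to NULL.
    static void RunMainQuery(string connectionString, int? rowId, string newValue)
    {
        const string sql = @"
            -- Only touch data when both parameters were actually supplied.
            IF @RowId IS NOT NULL AND @NewValue IS NOT NULL
                UPDATE dbo.ReportData
                SET    SomeColumn = @NewValue
                WHERE  Id = @RowId;

            -- Always return the data the report renders.
            SELECT Id, SomeColumn
            FROM   dbo.ReportData;";

        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(sql, conn))
        {
            cmd.Parameters.AddWithValue("@RowId", (object)rowId ?? DBNull.Value);
            cmd.Parameters.AddWithValue("@NewValue", (object)newValue ?? DBNull.Value);
            conn.Open();

            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine(reader.GetInt32(0) + ": " + reader.GetString(1));
            }
        }
    }
}
```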
We have many SSIS packages that move, import, and export large amounts of data. What is the best way to get alerts or notifications if the expected amount of data is not processed? Or how can we get a daily report on how the different SSIS packages are functioning? Is there a way to write or use a custom component and simply plug it into the SSIS packages, instead of writing a custom component for each package?
For your first question, we use user variables in SSIS to log the number of rows processed by each step along with the package name and execution id. You can then run reports on the history table, and if any of the executions have a large variance in the rowcounts processed, you can trigger an event.
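For illustration, here is a rough sketch of what that logging could look like as the body of an SSIS Script Task's Main() method; the Dts object and ScriptResults enum come from the Script Task code template, and the user variable, audit database, and log table names are assumptions.

```
// Sketch of an SSIS Script Task body that logs package name, execution id and a
// row count (captured earlier, e.g. by a Row Count transformation) to a log table.
public void Main()
{
    string packageName = Dts.Variables["System::PackageName"].Value.ToString();
    string executionId = Dts.Variables["System::ExecutionInstanceGUID"].Value.ToString();
    int rowsProcessed  = (int)Dts.Variables["User::RowCount"].Value; // hypothetical user variable

    // Hypothetical audit database and table.
    const string auditConn = "Server=.;Database=ETL_Audit;Integrated Security=SSPI;";

    using (var conn = new System.Data.SqlClient.SqlConnection(auditConn))
    using (var cmd = new System.Data.SqlClient.SqlCommand(
        @"INSERT INTO dbo.PackageRowCountLog (PackageName, ExecutionId, RowsProcessed, LoggedAt)
          VALUES (@name, @execId, @rows, GETDATE());", conn))
    {
        cmd.Parameters.AddWithValue("@name", packageName);
        cmd.Parameters.AddWithValue("@execId", executionId);
        cmd.Parameters.AddWithValue("@rows", rowsProcessed);
        conn.Open();
        cmd.ExecuteNonQuery();
    }

    Dts.TaskResult = (int)ScriptResults.Success;
}
```

A reporting query over the log table can then compare each run's row count to the package's recent average and raise an alert when the variance is too large.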
Yes. See here, or alternatively google "custom ssis component tutorial".
Our database changes constantly (!), and new columns are often added.
Is Reporting Services the right tool to choose for reporting in this case?
Case 1: Developers add a new column to a table used in a report. Will the old reports, created with a report model based on the old table, still work?
Case 2: Developers add a new column, and end users want to be able to report on it. If we update the report model, will the old reports based on the old report model still work? Or do we have to create a new report model every time the end users want to report on a newly created column?
Regards
Lars
Reporting Services has the necessary strategies for change management, so adding a new column to a table in the underlying data source does not affect existing reports.
If you want to include a newly added table column in your report model, you should update your report model (not recreate it from scratch). Updating the report model automatically inserts the new column into the model and does not break your old reports. On the other hand, updating the report model does not update or delete existing items if you change them in the underlying source (for example a table or column name, or a column data type); you have to change those manually in the report model and in the affected reports.
So, in your case, you won't have any problems with Reporting Services.
Below is the change management section of the Reporting Services report model documentation, which I strongly suggest you read.
Change Management
Models and the reports based on them have many internal and external dependencies. Therefore, you need to consider the impact of changes introduced into the dependency chain. Report models based on relational data sources use GUID attributes to identify each entity, attribute, and role. As mentioned, the report model-generation process sets the GUIDs, which are re-created at each generation. For that reason and to preserve edits on the report model, generating a new report model each time a change occurs is not an option. You must work with the existing model and update it, either manually or by using the update options described below.
The Semantic Query Engine manages missing attributes when they are not critical to report processing. This functionality is in place to keep reports running when security attributes preclude users from seeing some attributes in the report that may be allowed to other users. Thus, if a user is not allowed to access a field such as the employee home telephone number, the Employee Listing report will run for that user but will not show the excluded information. This functionality works to your advantage when models are edited to delete a non-critical attribute. The report will still run after you have removed an attribute, although the report might show a blank field. However, query or report processing can be broken by other changes to the model. Remember that you should not overwrite a model generated from a relational data source when any reports depend on it.
Schema Changes
If the underlying schema changes and report model entities or attributes are affected, you might have to update the report model accordingly. To do so in BIDS, use the Autogenerate command on the Reporting Model menu. You can also select Autogenerate from the model item's context menu. By using the context menu, you can select which item on the model you want to update without having to update the entire model.
The autogeneration process will show informational, warning, and alert messages. These messages will show all items in the model that are out-of-sync with the underlying DSV, even though those items are not specifically included in the item selected for autogeneration. This functionality helps detect potential errors that may lead to unpredictable errors when running reports based on the model.
Automatic update affects newly added items only. The autogeneration process will add any new entity, attribute, or role found in the DSV, but will not delete or change any entity, attribute, or role. Therefore, you need to manually manage updated or deleted items. The messages shown at the end of the generation process will highlight any entity, attribute, or role that needs to be updated in the resulting out-of-sync model. You will have to update the model manually or revert the DSV changes to maintain model-to-schema coherence.
Data Source Changes
You can develop and test your model in a development environment and then deploy the model in a production environment easily by changing the connection string in the data source file that the DSV uses. The two data source schemas must be identical.
Note that the DSV contains statistics based on the actual database data. As mentioned in the section "Statistics in Report Model Generation," the value of those statistics will drive some algorithm decisions during the model generation. Therefore, if the development database data is significantly different from the production database data, the model might not be optimized for the data that will eventually be used.
Hope this helps.