I'm trying to make a query which will export a report for multiple clients.
Right now I have it set up with multiple queries, one per client, but I would like to make it a single query, because eventually I will run into hundreds of queries and reports that could be handled by just one query and one report.
What I would like is a query with a [client] parameter: when I execute the query, it would run once for each client in the [client] parameter. Obviously, each client's output would be saved to a different file.
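Something like this, with made-up table and column names, run once per value of [client]:
select *
from ClientReportData          -- made-up table name
where ClientID = [client];     -- [client] supplied by the report parameter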
I'm trying to create a query that grabs records that fall within a specified time frame, hence BETWEEN. However, I need to do this from an interactive report in which the user can specify the endDate and startDate parameters.
I've read that RedShift may not necessarily support variables or parameters and that I may need to use temp tables, but my requirement is that users can pass in values. I'm confused about how using temp tables with pre-defined values allows indeterminate values to be passed to the base query...
Here's my initial attempt (don't laugh):
prepare prep_select_plan (date, date)
AS select TOP 10 * from table WHERE date BETWEEN $1 AND $2;
EXECUTE prep_select_plan(@startDate, @endDate);
DEALLOCATE prep_select_plan;
Is there another platform that would allow me to create a web-based, interactive report with the ability to have end users enter parameter values?
Update:
I've attached the dataset properties window to elicit feedback on how to pass a value to the RedShift query.
When connecting to a database like RedShift or MySQL using ODBC, you have to use a more generic syntax for parameters. Rather than using the @ symbol and the parameter name, you just use a ? in the query. For example, the EXECUTE line would look like this:
EXECUTE prep_select_plan(?, ?);
You can map which report parameters go to the query parameters in the Parameters tab of the dataset properties. The parameters are simply mapped in the order they appear; you can't reference the same name in multiple places in the query. It's not as user-friendly, but SSRS is primarily designed to work with SQL Server.
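Putting it together with the earlier attempt, the statements sent through the ODBC dataset would be along these lines (table and column names are placeholders):
prepare prep_select_plan (date, date)
AS select TOP 10 * from my_table WHERE my_date BETWEEN $1 AND $2;
EXECUTE prep_select_plan(?, ?);   -- @startDate and @endDate map to the ?s in order
DEALLOCATE prep_select_plan;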
I have a single shared dataset which calls a stored procedure. I have multiple tables which use the same dataset and have filters on the tables themselves to only include certain records.
Does the dataset get called for each table or does it only get called once?
The easiest thing to do is run the report and see what happens in the database. In this example I have used SQL Server Profiler to view the database activity. I have tested using a simple report run through Visual Studio.
(Screenshots: the Dataset definition; a report with two tables using the same Dataset but different filters; the rendered report; and the resulting SQL Server Profiler trace.)
You can see that the Dataset query has been run only once. So in this case we can say that referencing a Dataset multiple times will not cause it to be loaded multiple times.
With SSRS it's always risky to say this will always be the case in all scenarios, but based on this example it seems like a good bet.
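For reference, a minimal setup along the lines of the one tested (names made up): one stored procedure behind the shared dataset, with each table applying its own filter to the same result set in memory.
-- made-up stored procedure standing in for the shared dataset
CREATE PROCEDURE dbo.GetOrderSummary AS
SELECT OrderID, Region, Total FROM dbo.Orders;
-- table 1 filters on Region = 'East', table 2 on Region = 'West';
-- the Profiler trace contains a single EXEC dbo.GetOrderSummary for the whole report.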
I have one SQL function, let's say theFunction(item_id). It takes an item id and computes one value as its return. I read one table from the DB and I am supposed to compute a new value to append to each row, using this function with the item_id particular to that row. Which design block would do this for me with the following SQL (if I'm not wrong)?
select theFunction(item_id);
I assume that the block gives me the item_id of each row as a variable.
You can use another Table Input step, have it accept fields from previous steps, and execute it for every row (both config options are at the bottom of the step's window).
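For example, the second Table Input step's query would be just this, with the ? filled in from the incoming item_id field on each row:
SELECT theFunction(?) AS new_value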
Beware that this is a rather slow implementation. Each query is executed separately and as such each row requires a round trip to the database.
Alternatively, you can use the Execute Row SQL Script step. I believe it allows you to pass all SQL statements in a single trip to the database.
An SQL function is probably much more efficient to run in the database, for all rows at once, instead of making a separate call into the database from PDI for each row to execute the function. So if performance is at all a relevant concern, I'd suggest a whole different strategy:
Write your rows to a table in the database. End your transformation here.
On the job level, first execute your transformation from above, then execute the function in an "Execute SQL script..." component, giving it an SQL command somewhat like "UPDATE my_temp_table set target_col = theFunction(item_id)".
Continue your job with the remaining steps in a new transformation, starting from that table as input.
This of course presupposes that you don't have too many other threads going on, but if your transformation is simple and linear -- or at least if it can be made single-linear at this particular step -- it may be possible to split it up into two parts, before and after this SQL call.
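Spelled out, the "Execute SQL script..." step above would contain just one set-based statement:
-- one statement for all rows, instead of one database round trip per row
UPDATE my_temp_table
SET target_col = theFunction(item_id);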
I'm migrating the data from an Access database to SQL Server via the SQL Server Migration Assistant (SSMA). The Access application will continue to be used with the local tables converted to linked tables.
One continuous form hangs for 15 - 30 seconds when it's loading. It displays approximately 2000 records. When I looked in SQL Server Profiler to see what it was doing, it was making a separate call to the backend database for each record in the form. So the delay when the form opens is caused by the 2000-odd separate calls to the database.
This is amazingly inefficient. Is there any way to get Access to make a single call to the backend database and retrieve all the records at once?
I don't know if this is relevant but the Record Source for the form is a view in the SQL Server backend database, which is linked to via an Access linked table (so, hopefully, Access just sees it as a table, not a view). I needed an Instead Of trigger on the view in SQL Server, and a unique index on the linked table in Access, to allow the records to be updated via the form.
If the act of opening that continuous form really does generate ~2000 separate SQL queries (one for every row in the view) then that is unusual behaviour for Access interacting with a SQL Server linked "table". Under normal circumstances what takes place is:
Access submits a single query to return all of the Primary Key values for all rows in the table/view. This query may be filtered and/or sorted by other columns based on the Filter and Order By properties of the form. This gives Access a list of the key values for every row that might be displayed in the form, in the order in which they will appear.
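That key-fetching query shows up in Profiler as something like this (using the same sample table as the prepared statement below):
SELECT "dbo"."myTbl"."ID" FROM "dbo"."myTbl"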
Access then creates a SQL prepared statement using sp_prepexec to retrieve entire rows from the table/view ten (10) rows at a time. The first call looks something like this...
declare @p1 int
set @p1=4
exec sp_prepexec @p1 output,N'@P1 int,@P2 int,@P3 int,@P4 int,@P5 int,@P6 int,@P7 int,@P8 int,@P9 int,@P10 int',N'SELECT "ID","AgentName" FROM "dbo"."myTbl" WHERE "ID" = @P1 OR "ID" = @P2 OR "ID" = @P3 OR "ID" = @P4 OR "ID" = @P5 OR "ID" = @P6 OR "ID" = @P7 OR "ID" = @P8 OR "ID" = @P9 OR "ID" = @P10',358,359,360,361,362,363,364,365,366,367
select @p1
...and each subsequent call uses sp_execute, something like this
exec sp_execute 4,368,369,370,371,372,373,374,375,376,377
Access repeats those calls until it has retrieved enough rows to fill the current page of continuous forms. It then displays those forms immediately.
Once the forms have been displayed, Access will "pre-fetch" a couple more batches of rows (10 rows each) in anticipation of the user hitting PgDn or starting to scroll down.
If the user clicks the "Last Record" button in the record navigator, Access again uses sp_prepexec and sp_execute to request enough 10-row batches to fill the last page of the form, and possibly pre-fetch another couple of batches in case the user decides to hit PgUp or start scrolling up.
So in your case if Access really is causing SQL Server to run individual queries for every single row in the view then there may be something particular about your SQL View that is causing it. You could test that by creating an Access linked table to a single SQL Table or a simple one-table SQL View, then use SQL Server Profiler to check if opening that linked table causes the same behaviour.
It turned out the problem was two aggregate fields. One field's Control Source was =Count(ID) and the other field's Control Source was =Sum(Total_Qty).
Clearing the control sources of those two fields allowed the form to open quickly. SQL Server Profiler shows it calling sp_execute, as Gord Thompson described, to retrieve seven batches of 10 rows at a time. Much quicker than making 2000 calls to retrieve one row at a time.
I've come across the same problem again but this time with a different cause. I'm including it here for completeness, to help anyone in a similar situation:
This time the underlying query was hanging and SQL Server Profiler showed the same behaviour as before, with Access making separate calls to the SQL Server database to bring back one record at a time, for every record in the query.
The cause turned out to be the ORDER BY clause in the query. I guess Access had to pull back all the records in the linked table from SQL Server before being able to order them. It makes sense when I think about it, although I don't know why Access doesn't just pull all the records through at once instead of getting them one at a time.
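For the record, the query that triggered it was shaped like this (names made up):
SELECT * FROM dbo_MyLinkedTable ORDER BY SomeColumn;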
I would try setting the Recordset Type to Snapshot (on the Data tab of the form's property sheet and/or the property sheet of the query you are using as the form's Record Source).
I have an input CSV file with columns eid, ename, designation. Next I use a Lookup transformation; inside the lookup I am using a query like:
select * from employee where ename = ?
I need to pass the ? parameter from the CSV file. That is, the ename value in the CSV file has to be passed into the query via the Lookup transformation.
Inside the Lookup I changed the mode to Partial cache, and on the Advanced tab I selected Modify the SQL Statement, placed my query there, and clicked on the Parameters tab. But I don't know how to pass the parameter.
You can't add parameters to your lookup query. If by adding the parameters your goal is to reduce the amount of data read from the database, you don't have to worry; the partial cache will do that for you.
Partial cache means that the lookup query is not executed during the validation phase (unlike the full cache option) and that rows are added to the cache as they are queried from the database one by one. So, if your lookup table has one million rows and the incoming data only references 10 of them, the lookup will do 10 selects against your database and end up with only 10 rows in the cache.
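With partial cache, the component itself parameterizes the query per incoming row; the statement it issues is roughly of the shape shown on the Advanced tab:
select * from (select * from employee) [refTable]
where [refTable].[ename] = ?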