Data source retains schema after drop and add - SSIS

I have several packages that are almost identical; they differ only by columns added or removed in different database versions. When I copy a package and modify the data flow of the copy, I delete the OLE DB Data Source and add a new one. Once the new one is defined, its preview shows exactly what I expect. The columns, however, are still from the OLE DB source that was deleted. It's as if they are being cached somewhere.
Seems like I need to close the package and re-open it after removing the data source. Is there some other way to clear this cached state? What's going on internally that causes this to happen?
More... it looks like it's the parametrized connection manager that is holding on to previous parameters until the package is closed and re-opened.

If I understand your workflow, you are copying and pasting packages and then tweaking the source definition in the data flow. The challenge is that the CustomerID in one system is varchar(7) and defined as varchar(12) in another. The "trick" becomes having the design engine recognize the metadata change and behave accordingly.
My usual hack is to radically change the source. I find using the query SELECT 1 AS foo does the trick. After doing that, the metadata for the OLE DB Source component drops all references to existing columns, which percolates to the downstream components. I then switch back to the proper source and double-click the first red X to have it map the IDs from old to new.
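A sketch of that sequence, with a hypothetical original query:

    -- 1. Original source query
    SELECT T.CustomerID, T.OrderTotal
    FROM dbo.Orders AS T;

    -- 2. Temporarily swap in a throwaway query so every cached column is invalidated
    SELECT 1 AS foo;

    -- 3. Restore the original query, then double-click the red X on each downstream
    --    component to remap the old lineage IDs to the new columns.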
If you want a more brain-surgical route than civil-war surgery, change the column name in your source for anything that should have registered a metadata change. Thus

    SELECT T.MyColumn, T.IsFine FROM dbo.MyTable AS T

becomes

    SELECT T.MyColumnX, T.IsFine FROM dbo.MyTable AS T

Now only the first column gets kiboshed throughout the data flow. Reset it back to the "right" column name and all is well.
Internally, I don't know, but that never stops me from guessing. Validation fires off, and the SSIS engine recognizes that the data types are still compatible, so it doesn't change the existing metadata. A column that no longer exists is enough to make it sit up and take notice, and so the cached sizing goes away.
Some folks like to try using the Advanced properties to change the sizes, but I find I have better success with the above approach than changing the size only to have the designer slap my hand and disallow my proposed changes.

Related

Power Automate updating expression automatically to wrong value?

Not sure if anyone else has noticed this behavior in Power Automate. I would click on a dynamic content expression and see something I needed to fix, like body('parse json')?['variable_1']?['variable_2'], but ['variable_2'] is not inside ['variable_1']. After deleting the ?['variable_1'] segment and clicking update, clicking the expression again would pull up the old value, body('parse json')?['variable_1']?['variable_2']. I had to delete the whole expression body('parse json')?['variable_1']?['variable_2'], go to Dynamic content, re-add it, and then update, and then it worked.

I believe it has to do with changing my JSON schema around, or possibly with how the caching is done. If the dynamic content was created with the old schema, where ['variable_2'] was inside ['variable_1'], it would keep correcting to the old version, because you can't get ['variable_2'] without having ?['variable_1'] in place. I'll try to recreate this phenomenon again, but I was curious if any of you had seen it. I think it means you need to delete variables related to the old JSON schema, or it may autocorrect based on the old JSON schema it is pointing to.

How do I refresh a CSV data set in QuickSight and not replace the data set, as this loses my calcs

I am looking to refresh a data set in QuickSight; it is in SPICE. The data set comes from a CSV file that has been updated and now has more data than the original file I uploaded.
I can't seem to find a way to simply repoint to the same file with the same format. I know how to replace the file, but whenever I do this it states that it can't create some of my calculated fields and so drops multiple rows of data!
I assume I'm missing something obvious but I can't seem to find the right method or any help on the issue.
Thanks
Unfortunately, QuickSight doesn't support refreshing file data sets, to my knowledge. One solution, however, is to put your CSV in S3 and refresh from there.
The one gotcha with this approach is that you'll need to create a manifest file pointing to your CSV. This isn't too difficult and the QuickSight documentation is pretty helpful.
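For reference, a minimal manifest sketch (the bucket and file names here are hypothetical; see the QuickSight documentation for the full set of options):

    {
        "fileLocations": [
            { "URIs": ["s3://my-bucket/my-data.csv"] }
        ],
        "globalUploadSettings": {
            "format": "CSV",
            "delimiter": ",",
            "containsHeader": "true"
        }
    }

You then create the QuickSight data set from S3 and point it at this manifest; refreshing the data set re-reads the CSV from the bucket.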
You can replace the data source by going into the Analysis and clicking on the pencil icon, as highlighted in Step 1. By replacing the dataset this way, you will not lose any calculated fields that were already created on the old dataset.
If you instead try to replace the data source by going into the Datasets page, as highlighted below, you'll lose all calculated fields, modifications, etc.
I don't know when this was introduced, but you can now do this exact thing through "Edit Dataset", starting either from the Datasets page or from the 'pencil' -> Edit dataset inside an Analysis. It is called "update file" and will result in an updated dataset (additional or different data) without losing anything from your analysis, including calculated fields, filters, etc.
The normal caveat applies in that the newer uploaded file MUST contain the same column names and data types as the original, although it can also contain additional columns if needed.

Editing SQL code in multiple queries at one time

I'd like to automate a procedure some. Basically, what I do is import a few spreadsheets from Excel, delete the old spreadsheets that I previously imported, and then change a few queries to reflect the title of the new imports. And then I change the name of the queries to reflect that I've changed them.
I suppose I could make this a bit easier by keeping the imported documents the same name as the old ones, so I'm open to doing that, but that still leaves changing the queries. That's not too difficult, either. The name stays pretty much the same, except the reports I'm working with are dated. I wish I could just do a "find and replace" in the SQL editor, but I don't think there's anything like that.
I'm open to forms, macros, or visual basic. Just about anything.
I've just been doing everything manually.
Assuming I have correctly understood the setup, there are a few ways in which this could be automated, without the need to continually modify the SQL of the queries which operate on the imported spreadsheet.
Following the import, you could execute an append query to transfer the data into a known, pre-existing table (after deleting any existing data from the table), avoiding the need to modify any of your other queries; a sketch follows below. Alternatively, you could rename the imported table.
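A minimal sketch of the append-query approach in Access SQL, with hypothetical table names (StagingData as the known table your queries use, Import_20240101 as the freshly imported sheet):

    -- Clear out the previous import
    DELETE * FROM StagingData;

    -- Append the new import into the known table that all other queries reference
    INSERT INTO StagingData
    SELECT *
    FROM [Import_20240101];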
The task is then reduced to identifying the name of the imported table, given that it will vary for each import.
If the name of the spreadsheet follows logical rules (you mention that the sheets are dated), then perhaps you could calculate the anticipated name based on the date on which the import occurs.
Alternatively, you could store a list of the tables present in your database and then query this list for additions following the import to identify the name of the imported table.
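One way to get that list in Access is to query the hidden MSysObjects system table (undocumented, so treat this as an assumption to verify; Type = 1 filters to local tables):

    SELECT MSysObjects.Name
    FROM MSysObjects
    WHERE MSysObjects.Type = 1
      AND MSysObjects.Name NOT LIKE "MSys*";

Run this before and after the import; the new table's name is whatever appears in the second result but not the first.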

SSIS Excel formatting won't change from text field in destination editor **workaround in place**

I have created an SSIS package in Visual Studio 2008 that takes a SQL select statement and populates an Excel sheet; the Excel sheet is duplicated from a template file with all the formatting and cells set up.
The issue I am having is that no matter what I do I cannot change the Excel destination formatting to anything other than General; it overwrites the destination's formatting and writes decimal numbers as text, e.g. '1.50, always prepending the ' to fields.
I have tried inserting a row, as per some suggestions, since people think this is where SSIS scans for formatting types. However, the field always comes up as Unicode string [DT_WSTR] in the Advanced Editor and always defaults back if I change it.
Please can someone help! I'm happy to provide any additional info if I've missed anything. I've seen some posts with the same issue, but none of the solutions seem to be working, or I'm missing something else.
Update
I figured out the reason none of the recommended fixes were working: it was due to using a select statement in the Excel destination instead of selecting the table.
Using a select statement essentially wipes out any formatting changes you make.
So what I decided in the end was to create a data-only sheet (which is hidden) using the basic table data access mode, then reference that in a front-end sheet with all the formatting already in place, using a =VALUE(C1) formula to return just the value. I protected the cells to hide the formulas.
I have found that, when I change a Data Flow Task in SSIS that exports to (or imports from) Excel, I often have to "start over", or SSIS will somehow retain some of the properties of the old Data Flow Task: data types, column positions, and so on. For me, that often means:
1) Deleting the Source and Destination objects within the Data Flow Task, AND ALSO deleting/recreating the Connection Object for the Excel spreadsheet. I've done this enough times that I now save myself time by copy/pasting my Source and Destination names to and from a Notepad window, and I choose names that remind me of the objects they refer to (the table and file, respectively).
2) Remembering to rebuild the ARROW's metadata, too: after you change and/or recreate the Source object, remember to DOUBLE-CLICK THE ARROW next, before re-creating the Destination. That shows the arrow's metadata, and it also creates/updates it.
3) When recreating the destination, DELETE THE SPREADSHEET from prior runs (or rename or move, etc.), and have SSIS recreate it. (In your new destination object, there's a button to create that spreadsheet, using the metadata.)
If you still have problems after the above, take a look at your data types... make sure you've picked SQL data types that SSIS supports.
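One hedged way to steer this from the source side is to cast explicitly in the source query, so SSIS derives the metadata you intend (table and column names here are hypothetical):

    SELECT
        CAST(OrderID AS INT)            AS OrderID,   -- maps to DT_I4
        CAST(Amount  AS DECIMAL(18, 2)) AS Amount,    -- maps to DT_NUMERIC
        CAST(Notes   AS NVARCHAR(255))  AS Notes      -- maps to DT_WSTR
    FROM dbo.MyExport;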
At the link below, about 2/3rds of the way down the page, you'll find a table "Mapping of Integration Services Data Types to Database Data Types", with SSIS data types in the 1st column ("Data Type"), and your T-SQL equivalent data types in the 3rd column ("SQL Server (SqlClient)"):
Integration Services Data Types
Hope that helps...

How do I re-use reports on different datasets?

What is the best way to re-use reports on different tables / datasets?
I have a number of reports built in BIRT, which get their data from a flat (un-normalized) MySQL table whose data has in turn been imported from an Excel sheet.
In BIRT, I've constructed my query like this, such that I can change the field names and re-use the report:
SELECT * FROM
(SELECT index as "Index", name as "Name", param1 as "First Parameter" FROM mytable) t
However, when I switch to a new client's data, I need to change the query to point to the new data source, and this doesn't seem sustainable or anywhere near good practice.
So... what is a good practice?
Is this a reporting issue, or a database-design issue?
Do I create a standard view that the report connects to?
If I have a standard view, do I create a different view with the same structure for each data table, or keep replacing the view with a reference to the correct data table each time I run the report?
What's annoying is the Excel sheets keep changing - new columns are added, and different clients name their data differently. Even if I can standardize this, I'd store different client data in different tables... so would I need to create a different report for each client, or pass in the table name to the report?
There are two ways and the path you choose is really dictated by how much flexibility you have architecturally.
First, you are on the right track by renaming your selected columns to a common name, since that name is what is used to bind the data to the control on the report.

Have you considered a stored procedure to access the data? This removes the query from the report and allows you to set up the stored proc on any database to return the necessary columns; see the sketch below.

If you cannot off-load to a stored proc, you can always rely on altering the query text at run-time. Because BIRT reports are not compiled (they are XML), you can change the query based on parameters and have it executed for each run of the design. Look at the onCreate event for the Data Set: you can access this.queryText and do any dynamic string substitution you need via JavaScript. Hidden parameters are a good way to help alter/tune the query.

If you build the Data Set correctly, changing the underlying data could be as easy as changing the Data Source and then re-associating the Data Set to the new Data Source (in the edit data set window). I have done this MANY times and it works well.

If you are going down this route, I would add the Data Source(s), Data Set(s), and any controls that they provide data to, to a report library. With the library you can use the controls in many reports and maintain them in one spot. If you update the library, all the reports using the library get updated as well.
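A minimal sketch of the stored-procedure approach for a MySQL source (the procedure and column names are hypothetical; each client database would define its own version returning the same column aliases):

    DELIMITER //
    CREATE PROCEDURE GetReportData()
    BEGIN
        -- Alias the client-specific columns to the common names the report binds to
        SELECT idx    AS `Index`,
               name   AS `Name`,
               param1 AS `First Parameter`
        FROM mytable;
    END //
    DELIMITER ;

The report then calls the procedure instead of embedding the SELECT, so only the procedure body changes per client.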
Alternatively, if you want to really commit to a fully re-usable strategy that allows you to build a library of reusable components, you could check out the free Reusable Component Library at BIRT Exchange. In my opinion this strategy would give you the re-use you are looking for, but at the expense of maintainability. It is abstraction to the point of obfuscation. It requires totally generic names for columns and controls, which makes debugging very difficult. While it would not be my first choice (the option above would be), others have used it successfully, so I thought I would include it here since it directly speaks to your question.