Crystal Reports XI - exporting a shared variable string to CSV truncates the string

I have two tables that store information on customer appointments.
AppointmentMaster has one record for each appointment:
Customer  Name           ApptDate  ApptID
-----------------------------------------
2554      Smith,Bob      20140301  100
2468      Jones, Grace   20140301  101
2795      Roberts, Sam   20140302  102
2408      Harris, Chuck  20140305  103
AppointmentDetails holds a record for each operation performed at the appointment (sometimes none, sometimes dozens):
ApptID  Operation  OpDescription
-----------------------------------------------------
100     A10        Corrected the A10 unit.
100     IA         Resolved issues with internal account.
100     C5         Brief consult with client.
101     A10C       Replaced cage on A10 unit.
101     U1         Updated customer account.
103     C5         Brief consult with client.
My client needs a CSV file that contains one line per appointment. One of the fields in the CSV is a pipe-separated list of all operation codes performed at the appointment. The CSV file would look like this:
"2554", "Smith,Bob", "20140301", "A10|IA|C5|"
"2468", "Jones, Grace", "20140301", "A10C|U1|"
"2795", "Roberts, Sam", "20140302", ""
"2408", "Harris, Chuck", "20140305", "C5|"
I have a Crystal report created that displays the fields correctly; however, when I export to CSV I am seeing a file like this:
"2554", "Smith,Bob", "20140301", "C5|"
"2468", "Jones, Grace", "20140301", "U1|"
"2795", "Roberts, Sam", "20140302", ""
"2408", "Harris, Chuck", "20140305", "C5|"
Only the last Operation is getting exported to CSV even though all of them display on screen.
If I export as PDF, Excel, or Record Style, the file has all of the operations. Unfortunately I need a CSV. I am trying to avoid having to build multiple reports and stitch them together with a script, if possible; the client wants to be able to easily run and export this themselves on demand.
I created three formula fields to initialize, update and display a shared variable that concatenates the operations together.
My report is grouped by ApptID and looks like this:
Group Header #1 (suppressed)
  {#InitializeOperations}:
    WhilePrintingRecords;
    shared StringVar Operations := "";
Details (suppressed)
  {#UpdateOperations}:
    WhilePrintingRecords;
    shared StringVar Operations := Operations + {AppointmentDetails.Operation} + "|";
Group Footer #1
  {AppointmentMaster.Customer}
  {AppointmentMaster.Name}
  {AppointmentMaster.ApptDate}
  {#DisplayOperations}:
    WhilePrintingRecords;
    shared StringVar Operations;
I have tried using EvaluateAfter({#UpdateOperations}) instead of WhilePrintingRecords in {#DisplayOperations}, and have even tried removing any evaluation-time statement from it entirely, but I still can't get the desired effect in the CSV file, despite it looking correct on screen and in every other export format I have tried.
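For reference, the EvaluateAfter variant of the display formula was along these lines (a sketch following the formula names above):

  {#DisplayOperations}:
    EvaluateAfter({#UpdateOperations});
    shared StringVar Operations;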
Any help you can provide is appreciated.

Related

How to split table data into separately named Excel files using an SSIS package?

I'm working with a set of data from SQL Server that I'd like to get into a group of Excel files. This task needs to be automated to run on a monthly basis. The data looks like this:
Site   ID   FirstName  LastName
-----  ---  ---------  --------
North  111  Jim        Smith
North  112  Tim        Johnson
North  113  Sachin     Tedulkar
South  201  Horatio    Alger
South  205  Jimi       Hendrix
South  215  Bugs       Bunny
I'd like the results to look like this:
In an Excel file named **North.xls**:
ID   FirstName  LastName
---  ---------  --------
111  Jim        Smith
112  Tim        Johnson
113  Sachin     Tedulkar
In an Excel file named **South.xls**:
ID   FirstName  LastName
---  ---------  --------
201  Horatio    Alger
205  Jimi       Hendrix
215  Bugs       Bunny
There are between 70 and 100 values in the Site column that I'd like to split on. I'm using SSIS to perform this task, but I'm getting stuck after I've pulled the data from SQL Server with an OLE DB Source task. What should come next? If there is an easier way to do this using other tools, I'm open to that too.
You can create an Execute SQL Task which executes a SELECT DISTINCT on the column "Site" and stores the values in an object variable.
In the next step you build a Foreach Loop Container, which iterates over the object variable.
The Foreach Loop Container holds a Data Flow Task. In the Data Flow you have an ADO.NET Source, and you build an expression for its SQL statement.
In the expression you build a dynamic SELECT; the WHERE part restricts it to the current iteration value.
Direct the Data Flow to a Flat File Destination. In the expression of the Flat File Destination you can name the file with the current iteration value.
Do you have any questions? Do you need screenshots?
Update:
A more detailed explanation:
Create an Execute SQL Task:
It should return a full result set, and in the SQLStatement property write the SELECT DISTINCT query on your Site column.
Define the result set as "0" and map it to a variable of type Object.
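Assuming the table used later in this answer, that statement might look like:

SELECT DISTINCT [Site]
FROM [Test].[dbo].[Sites]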
Create a Foreach Loop Container:
Set the Enumerator to "Foreach ADO Enumerator" and, in the "ADO object source variable" combo box, select the variable you defined in part 1.
Map a new variable of type String in the Variable Mappings. On each pass of the loop, this variable holds the current value from the object variable.
Now place a Data Flow Task in the Foreach Loop Container.
You can use either an "OLE DB Source" or an "ADO NET Source" as your data source.
I will explain the "ADO NET Source":
Add an ADO.NET Source to your Data Flow and configure it against your source table.
Then add an expression to the ADO.NET Source (expressions for a data flow component are set on the Data Flow Task itself):
Open the expression editor and select the property [ADO NET Source].[SqlCommand]. In this editor you can build dynamic SQL queries.
Expressions are very powerful. Here is the documentation: https://learn.microsoft.com/en-us/sql/integration-services/expressions/integration-services-ssis-expressions?view=sql-server-2017
The expression should look something like this:
"SELECT [Site]
,[ID]
,[FirstName]
,[LastName]
FROM [Test].[dbo].[Sites]
where Site = '" + #[User::sIterator] + "'"
Now, on each pass of the loop, the SQL query selects a different site.
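For example, when the iteration variable holds "North" (one of the Site values from the question), the expression evaluates to:

SELECT [Site]
      ,[ID]
      ,[FirstName]
      ,[LastName]
FROM [Test].[dbo].[Sites]
where Site = 'North'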
Make the file name dynamic with expressions:
Create a Connection Manager for your Flat File Destination.
Select the Expressions property of the Connection Manager, as we did before for the Data Flow Task.
Now build your expression for the ConnectionString property. The ConnectionString is the full path including the file name:
"E:\\" + #[User::sIterator] + ".csv"
Dont forget you have to qoute "\" in expressions with "\". So always write "\\" not "\".
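With the iteration value "North" from the earlier example, this expression evaluates to the path E:\North.csv.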

Skip rows when importing CSV to PowerPivot

I frequently need to pull some CSV reports and analyze them using Power Pivot. The "issue" is that the tool spits out the report like this:
Report Name Keywords (Group contains 778600, Campaign contains us-en)
Client XYZ
Scope Entire Account
Date Range 3/12/2015
Filters Campaign contains us-en; Group contains 778600; Clicks > 0; Reduced Dimension
Keyword Account Publisher Campaign Group Search Bid $ Status Destination URL
Total for all 2 keywords
Keyword Account Publisher Campaign Group Search Bid $ Status Destination URL
bla bla bla Account Name Publisher Name Campaign Name Group Name 1 Active URL
So what I always need to do is remove the first 9 rows of the CSV prior to importing. Usually I can do this in Notepad++, but sometimes the CSV is so large that I can't actually open it to edit. So far I'm using a program called 010 Editor, but I only have a few days left on it.
Is there an easy way to skip those rows when importing?
Thanks a lot
You can use Power Query (free to download) to load data into Power Pivot. It allows you to skip the first x rows and filter out rows with blank/null values. Once you get this working, you can copy the M code to reuse it on other CSVs, or you can automate it as a function and just feed it file locations.
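A minimal M sketch, assuming the file path and delimiter, that exactly 9 leading rows should be skipped, and that the keyword column is named Keyword (all assumptions):

let
    // Read the raw CSV; path and delimiter are hypothetical
    Source = Csv.Document(File.Contents("C:\Reports\keywords.csv"), [Delimiter = ","]),
    // Drop the 9 report-header rows described in the question
    Skipped = Table.Skip(Source, 9),
    // Promote the first remaining row to column headers
    Promoted = Table.PromoteHeaders(Skipped),
    // Filter out blank/null keyword rows
    Filtered = Table.SelectRows(Promoted, each [Keyword] <> null and [Keyword] <> "")
in
    Filtered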

SSIS - get the 2nd and 3rd rows, and all the data from the 9th row

In my CSV file the data is like this:
************* file format***************************
filename, abc
date,20141112
count,456765
id,1234
,,
,,
,,
name,address,occupation,id,customertype
sam,hjhjhjh,dr,1,s
michael,dr,2,m
tina,dr,4,s
*********************more than 30000 records in each load *************************************
I have got the file in the above format, and I want to take the date and count from the 2nd and 3rd rows; the actual data starts from the 9th row. Is it possible without a Script Task? I am not so good with scripting.
Can anyone please help with how to get this?
It is possible to do this without using a Script Task. The flow is like this:
Put two DFTs (Data Flow Tasks) into your package: one to reformat your text file and split it into two separate text files, one for your 2nd and 3rd rows and another for everything from the 9th row on. The other DFT will do the rest of the operation, which is quite simple.
1st DFT --> Flat File Source --> Row Number Transformation (you can get this transformation for your SQL Server version from <http://microsoft-ssis.blogspot.in/p/ssis-addons.html>) --> Conditional Split (output 1: RowNumber == 2 || RowNumber == 3; output 2: RowNumber > 8) --> put the results into two different flat files, _1 and _2, named as per your convenience.
Now you are ready with your required two flat files as sources for your 2nd DFT...
*If this solves your problem, mark it as the answer.

Creating / Appending a Flat File Destination based on date.

The Backstory:
I have a process that loads physician demographic data into our system. This data can come in at any time and at any interval between updates. The data is what we call "Term-by-Exclusion", meaning that the source file takes precedence, and any physician record in the db that is not in the source file is marked as "Termed" or Inactive.
The Problem:
I need to be able to output the data from the source into a flat file destination, as a daily report to a companion COBOL system. The source data is loaded into an ETL.PhysicianLoad table prior to processing, and the ETL table is wiped before each new processing transaction, so retaining a full day's records is not possible as it stands now without the output file.
Example: ProcessOutput_10152013.txt
The output file ideally needs to be a comprehensive record of the entire day's processing. Meaning: I want to continuously append to that day's file until the end of that day, then email a notification stating the file is ready for pickup. Any data that comes in after the turn of the day should then be placed in a newly created file.
Output should look like this (no headers)
BatchID | LastName | FirstName | MiddleInitial | Date
0001 | Smith | John | A | 10/15/13
0001 | Smith | Sue | R | 10/15/13
0001 | Zeller | Frank | L | 10/15/13
0002 | Peters | Paula | D | 10/15/13
0002 | Rivers | Patrick | E | 10/15/13
0002 | Waters | Oliver | G | 10/15/13
What I am thinking:
I am thinking about using a CurrentDate variable that holds the current date, compared against an expression-based variable called FileName, which concatenates the current mmddyyyy into the "ProcessOutput_.txt" name. My thinking is that I should be able to look for a file with that name in the destination folder, and if it exists, I should be able to write to it; otherwise I will have to create a new file. I can then set my Flat File Destination via an expression to the FileName variable.
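A sketch of such a FileName expression (the output folder is hypothetical; the date functions build the mmddyyyy portion):

"C:\\Output\\ProcessOutput_"
+ RIGHT("0" + (DT_WSTR, 2) MONTH(GETDATE()), 2)
+ RIGHT("0" + (DT_WSTR, 2) DAY(GETDATE()), 2)
+ (DT_WSTR, 4) YEAR(GETDATE())
+ ".txt"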
Can anyone see a better way of doing this or any issues that may arise from this solution I am not seeing?
My thought process was in the right place, but flawed.
Here is how I solved the problem.
After trying to build my control/data flows using the logic in the original question, I discovered that I was working myself into a corner.
So that got me thinking again: how can I do this the easiest possible way?
First, do I have the correct variables defined? No.
CurrentDate - has to be there to define the date portion of the file name.
FileName - has to be present for obvious reasons.
So what did I miss?
FileExists (Type: boolean) - Something that will identify the existence of the file.
PlaceholderFile (Type: String) - Generic FileName Variable
Now, what to do with them?
Add a VB Script Task to the control flow that sets the FileExists flag:

'File.Exists lives in System.IO, so add "Imports System.IO" at the top of the script.
'Check to see if ProspectivePhysician_<currentdate>.txt exists.
Dts.Variables("User::FileExists").Value = File.Exists(Dts.Variables("User::FileName").Value.ToString())
Now that the existence of the destination file is known, create the data flow from the source table. Check the FileExists variable in a Conditional Split, separating the data flow into two branches. Create two Flat File Destinations called "Existing" and "New", setting them both to the same flat file location for the time being.
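The two Conditional Split conditions can be as simple as this (a sketch; the branch names match the destinations above):

Existing branch: @[User::FileExists]
New branch:      !@[User::FileExists]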
If you attempt to run the package at this point, you will receive validation errors from one of the two destinations, as the first holds ownership of the file and will not allow the second to validate it.
How to fix this? Use expressions to swap the actual FileName value back and forth.
For the Existing Flat File ConnectionString value, use the following expression:
@[User::FileExists] == True ? @[User::FileName] : @[User::PlaceholderFile]
For the New Flat File ConnectionString value, use the following expression:
@[User::FileExists] == True ? @[User::PlaceholderFile] : @[User::FileName]
Finally, right-click each of the Flat File Destination objects in the Data Flow and set the Overwrite property to True on the New destination and False on the Existing destination. This ensures that the append action is used on the existing file.

Pulling data from a text file to generate a report

I have a program in MS Access, using VBA. I need to come up with an If statement to pull data from a text file. The data is a list of procedures and prices, and I have to pull the prices from the text file to show in a report how much each procedure costs.
ID PID M1 M2 M3 Total
1 11120390(procedure)
2 180(price) 360 180 540 1080(total Price)
3 2 1 3 6(Units sold)
4
5 200(Price) 200 600 800 1600(total price)
6 1 3 4 8(Units Sold)
7 11120390(procedure)
The table in the text file is set up like this, and I need to pull the procedure number and the price of each procedure from the text file.
This is a general answer to a vaguely-presented question. You typically have to go through these steps:
- Make a connection to the file
- Open the file
- Parse the file (as Simon was saying): go through it as a series of strings, find an orientation point, and get to the relevant parts
- Import the relevant parts, perhaps into a holding table
- Present the data in typical Access fashion (query, report)
And if the file isn't well structured or correctly generated, you'll need extra parsing code and perhaps error handling to deal with aberrations.
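As a minimal VBA sketch of those steps, assuming single-space-delimited lines like the sample above (the file path, the 8-digit test for procedure rows, and the Debug.Print output are illustrative assumptions):

'Scan the file line by line and use an 8-digit second field
'as the orientation point for a procedure row.
Sub ParseProcedureFile()
    Dim f As Integer
    Dim s As String
    Dim parts() As String

    f = FreeFile
    Open "C:\data\procedures.txt" For Input As #f   'connect to / open the file

    Do While Not EOF(f)
        Line Input #f, s                            'read the file as a series of strings
        parts = Split(Trim$(s), " ")
        If UBound(parts) >= 1 Then
            If Len(parts(1)) = 8 And IsNumeric(parts(1)) Then
                'Found a procedure row; the next numeric row carries its prices
                'and could be appended to a holding table from here.
                Debug.Print "Procedure: " & parts(1)
            End If
        End If
    Loop

    Close #f
End Sub

From there, the relevant values can be appended to a holding table with an INSERT query and presented through an ordinary Access query and report, as the steps above describe.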