SSRS Text Data Output - Header and Details on the same row?

I have turned headers off in the report server config file and am attempting to output the header rows above the details in the CSV output. What is happening instead is that the header is displayed on the same line as the details. If I add another table with the header row in it, it works, but it leaves a one-row gap between the header and the content. Any help getting this data to line up correctly would be greatly appreciated.

You could investigate using XSLT to transform the XML output into the desired format. This is really the only option I know of that I've used in the past for making a custom CSV-type output. You could then undo the alteration to the server-wide (?) config file, as the XSLT file would be applied to just that report, making it easier for deployment.
http://msdn.microsoft.com/en-us/library/ms159716(v=sql.90).aspx
(There are probably more up-to-date links out there; just Google/Bing "SSRS XSLT", etc.)
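To illustrate the idea, here is a rough sketch of such a transform, prototyped locally with Python and lxml so you can test it against the report's XML export before wiring it up in SSRS (if I recall correctly, the .xsl file gets attached via the report's DataTransform property and applied when rendering through the XML extension). The element and attribute names (Detail, FieldA, FieldB) and the header text are placeholders; your report's XML will use its own names and namespace.

from lxml import etree

# Placeholder XSLT: emits a header line once, then one CSV line per detail row.
# local-name() is used because SSRS XML output usually carries a namespace;
# Detail/FieldA/FieldB are made-up names - substitute the ones from your report.
XSL = b"""<?xml version="1.0" encoding="utf-8"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="text" encoding="utf-8"/>
  <xsl:template match="/">
    <xsl:text>HeaderA,HeaderB&#10;</xsl:text>
    <xsl:for-each select="//*[local-name()='Detail']">
      <xsl:value-of select="@FieldA"/>
      <xsl:text>,</xsl:text>
      <xsl:value-of select="@FieldB"/>
      <xsl:text>&#10;</xsl:text>
    </xsl:for-each>
  </xsl:template>
</xsl:stylesheet>"""

transform = etree.XSLT(etree.fromstring(XSL))
report = etree.parse("report.xml")   # the report exported with the XML renderer
print(str(transform(report)))        # header on its own line, details below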

Related

Pipeline unable to read field of plain text file

Using the latest version of Apache Hop, I'm trying to read in a plain text file. This text file is old and basically only structured by its lines (it has no delimiter, no separator, no enclosure, etc.). I would like to read and process the lines of this file as rows in my transformation.
I use the "Text file input" transformation to read the file. Apparently reading it works, but I seem to have no fields available when trying to retrieve the fields. It simply states that no fields were found.
When I run "preview records" I do get empty records equal to the number of lines in the file, so that is good. However, there is no data shown as there is no field detected.
Curiously enough, when I press "Show file content" I DO get the desired content, nicely structured in rows, so I know the file is being read correctly.
Does anyone know how to best read this kind of file?
PS: The files can be anywhere from 10 to 100000 lines.
When there is no header row with field names, or Hop is not able to detect any fields, you can also create a field in the fields tab and it will put the content in there.
As we just use a position-based approach and split the content using the specified delimiter, everything should go into "field1" when no delimiter is found in the data.
Figured it out. The naming is a bit misleading, but you can use the "CSV File input" and then set a TAB as the delimiter. Then use the preview on your file and you should find that the lines are actually being parsed.

I'm getting errors in documents generated with python-docx, specifically if I include tables from a template

I am using python-docx to programmatically insert data into a new document. When opening the new file, I get the following error message.
Word found unreadable content in document_name. Do you want to recover the contents of this document? If you trust the source of this document, click Yes.
Here is the process that my code is going through to get to this point:
1. Copy a docx file that we will call our findings templates to a working folder
2. Copy another docx file that is our report document to the same working folder
3. Locate a table in our findings document that we want to include in the report
4. Fill in some data in the table, and put the now completed table into the report document.
5. Save the report document as a new file, called generated.docx
What I have figured out so far:
- If I don't fill in any information in the table, and just copy it from the findings templates into the report, I still get the above error message.
- If I insert other data into the report without the table from the findings templates, the document is all good with no errors.
- The source files have no errors; at least Word doesn't complain when opening either the findings document or the report document.
- If I let Word correct the errors, all hyperlinks in the document are broken: the text for the link is there along with the link style, but the target is missing, and when looking at the document after hitting Alt+F9 you can see { HYPERLINK } indicating the missing target as well.
After quite a bit of googling and finding some similar answers that haven't resolved the issue, I feel like this might be relevant: the tables in the findings document contain a large number of merged cells. It is only one table, not nested tables as I initially thought.
The heading is 2 rows deep, with 4 merged cells on the left for the finding title; on the right are two columns with headings and the relevant data below. The body of the table is a mixture of merged cells per row. Some rows will have all cells merged, others will have 2 cells merged out of 3.
Here is the code I am using to snag the table from the findings document:
for table in findings_templates.tables:
    row = table.rows[0]
    for cell in row.cells:
        if title.lower() in cell.text.lower():
            severity = get_severity_from_template(table)
            for item in severity_array:
                if severity in item[1]:
                    anchor = item[0]
            # snip
            # Insert some data into table here
            # snip
            addTableAfterParagraph(report_document, table, title)
            return True
Since the errors occur with or without modification, I'll leave out the modification code. Here is the code that inserts the table into the report document:
def addTableAfterParagraph(report_document, table, title):
    for para in report_document.paragraphs:
        if para.text == title:
            p = para._p
            p.addnext(table._tbl)
Additionally, I added some print lines for table._tbl.xml, and I don't see much of a difference between the source table and the one inserted into the document, except that the first line has a few differing xmlns attributes.
I'd love some troubleshooting tips, or any suggestions. Let me know if any more information is needed. Thanks in advance!
UPDATE: It's the hyperlinks in the source table that are causing the issue. I'm marking this solved for now and may open another more specific question if I can't figure it out.
I ended up reading data from the source document tables, then creating my own tables programmatically, and inserting that data back in along with performing any transforms, such as creating hyperlinks, styles, etc.
It was painful, but ultimately solved the issue and provides flexibility in the future.
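For anyone hitting the same thing, here is a minimal sketch of that rebuild approach with python-docx. The file names, URL, cell positions, and styling are placeholders, and merged cells would need extra handling; this is an illustration of the idea, not the exact code used above.

from docx import Document
from docx.opc.constants import RELATIONSHIP_TYPE as RT
from docx.oxml import OxmlElement
from docx.oxml.ns import qn

def add_hyperlink(paragraph, url, text):
    # The hyperlink relationship must be created in the *target* document's part.
    # Moving a <w:tbl> element between documents leaves its r:id values pointing
    # at relationships that only exist in the source, which is what Word flags.
    r_id = paragraph.part.relate_to(url, RT.HYPERLINK, is_external=True)
    hyperlink = OxmlElement('w:hyperlink')
    hyperlink.set(qn('r:id'), r_id)
    run = OxmlElement('w:r')
    text_el = OxmlElement('w:t')
    text_el.text = text
    run.append(text_el)
    hyperlink.append(run)
    paragraph._p.append(hyperlink)

report_document = Document('report.docx')        # placeholder file names
findings_templates = Document('findings.docx')

source_table = findings_templates.tables[0]
new_table = report_document.add_table(rows=len(source_table.rows),
                                      cols=len(source_table.columns))
# new_table.style = 'Table Grid'  # optionally map your real table style here

# Copy the cell text; merged cells appear repeatedly in row.cells and
# would need de-duplication plus re-merging in the new table.
for i, row in enumerate(source_table.rows):
    for j, cell in enumerate(row.cells):
        new_table.cell(i, j).text = cell.text

# Recreate hyperlinks against the report document instead of copying them over.
add_hyperlink(new_table.cell(0, 0).paragraphs[0], 'https://example.com', 'finding details')

report_document.save('generated.docx')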

Removing Headers in eText in BI Publisher

We have a scenario where we should not display the header in the CSV output generated using an eText template.
Our output looks like this:
Header000001 Header000002
------------ ------------
Adetail1 Bdetail1
Adetail2 Bdetail2
Adetail3 Bdetail3
Desired output is:
Adetail1 Bdetail1
Adetail2 Bdetail2
Adetail3 Bdetail3
We tried all possible options in the eText template, like removing the header section, verifying the data using the BI Publisher Desktop tool, verifying logs, etc.
We are not getting any error in the BI Publisher Desktop tool.
The same question was posted by somebody some time ago and it was resolved, but the solution was not provided.
It would be very helpful if anybody can provide the exact solution.
The header will just be another block in your eText template. You can use the <DISPLAY CONDITION> command to skip printing that block in the output. The display condition command specifies when the enclosed record or data field group should be displayed. The command parameter is a boolean expression: when it evaluates to true, the record or data field group is displayed; otherwise it is skipped. You can just give the condition as false, and that block will be skipped.
I have created a template using the provided data XML to output a CSV without headers. A delimiter-based template is used, but the header is not printed.
Access it from here.

How to Read In Fixed Width File Using SSIS with Multiple Lines Into SQL Server 2008 DB Table

I have attempted numerous approaches and I have also researched the topic on the Internet, only to find single-line fixed-width files being read in using a Flat File Connection. How do you do this when you have three or more differing lines?
When configuring the Flat File Connection Manager Editor, ensure you select 'Ragged right' from the Format drop-down list.
The fixed width option is confusing and it is not suitable for a file with a carriage return / line feed at the end of each row.
Once you have selected Ragged right, head to the Advanced section and add every single column that is contained within your text file to the list and set its InputColumnWidth and OutputColumnWidth.
These values represent the length of each column within your text file.

Lookup transformation in SSIS is not sending error data rows into file. Why?

I have started working with SSIS recently and I have hit a dead end with redirecting the Lookup transformation's error rows to a file. I have configured it to send the rows that do not have a match to a flat file destination, but the file does not contain the column data, which comes out blank; only the error code and error column make it into the text file.
This is the data contained in the text file:
,-1071607778,0
,-1071607778,0
,-1071607778,0
The first position is supposed to be the data in my field, but it seems to be blank for a reason I don't understand. Can anybody help me clarify this? What am I missing?
That looks like the error output (ErrorCode and ErrorColumn), not the row data.