Reporting Services 2008 R2 throws native compiler error "[BC30494] Line is too long."

An unexpected error occurred while compiling expressions. Native compiler return value: '[BC30494] Line is too long.'.
When RS throws this error, the typical scenario appears to be that there are too many text boxes on a specific data region, and the only known workaround seems to be to 'minify' text box names (i.e. rename TextBox345 to T345).
My report is not that large (<100 text boxes), but I make extensive use of the Lookup() function to set many of the text box style properties from a styles dataset (>2500 Lookup() calls).
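For illustration, each style property is driven by an expression of roughly this shape (the dataset and field names here are only placeholders):
=Lookup(Fields!StyleKey.Value, Fields!StyleKey.Value, Fields!FontColor.Value, "Styles")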
So my guess is that the VB code-behind that gets generated for the Lookup() function is quite verbose and therefore breaks the 64K limit for a generated VB code block per data region.
Can I test my hypothesis? I.e. is there a way I can inspect the generated VB code?
Any suggestions as to how to fix or dodge this problem? Needless to say, using abbreviated names in my case didn't cut it.

Quite a delayed response, but for the sake of posterity:
The .vb source file of the generated code exists on disk temporarily within the directory C:\Users\{RS Service Account Name}\AppData\Local\Temp. As you mentioned, if any line exceeds 65,535 characters, compilation will fail due to a VB limitation.
This issue was just fixed in Reporting Services 2012 SP2 CU5; the KB article is located here. Unfortunately, mainstream support for SQL 2008 R2 has ended, so the fix is unlikely to be backported. As for workarounds:
Shorten the textbox names to the absolute minimum possible (e.g., use all possible single-character names, then all possible two-character names)
Try using subreports to split up the report
Try to rework your dataset query so that you can reduce the number of Lookup() calls, as sketched below
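For example (field and dataset names here are placeholders), a style expression such as
=Lookup(Fields!StyleKey.Value, Fields!StyleKey.Value, Fields!FontColor.Value, "Styles")
collapses to a plain field reference once the style columns are joined into the main dataset query:
=Fields!FontColor.Value
which should generate far less expression code per text box.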

Related

How can I read in a TXT file in Access that is over 255 char/line and contains control char?

I am running Access 2010. I need to read a TXT file into a string. Each line can be anywhere from 40 to 320 characters long, ending in a CR. The biggest problem is that the lines of the TXT file contain commas (,) and quotation marks (") as part of the data.
Is there a trick to doing this? Even if it means getting each character and testing to see whether it is a CR....
To accomplish this task, you will need to write your own import code that will read directly from the file. The Microsoft Access import features will not handle a file like this very well, and since you want to analyze each line in code, it is better to handle reading it yourself.
There are many approaches you can take, and all will involve file handles and opening the file. But the best approach is to use a class that does all of the dirty work for you.
One such class is the LargeTextFile class that can be found in any of the Microsoft Access Developer's Handbooks (Volume 1) for Access 97, 2000, 2002 or 2003, written by Getz, Litwin, and Gilbert (Sybex), if you have access to one of them.
Another option would be the clsReadTextFile class, available for free on The Access Web (the Access MVP site):
http://www.theaccessweb.com/downloads/clsReadTextFile.txt
Using clsReadTextFile you can process your file line by line, using code similar to this:
Dim file As New clsReadTextFile
Dim line As String
file.FileName = "C:\MyFile.txt"
file.cfOpenFile                          'open the file for sequential reading
Do While Not file.EndOfFile
    file.csGetALine                      'read the next line from the file
    line = file.Text                     'the text of the line just read
    If InStr(line, "MySearchText") > 0 Then
        'Do something
    End If
Loop
file.cfCloseFile
The line string variable will contain the text of the line just read, and you can write code to parse it how you need and process it appropriately. Then the loop will go on to read the next line. This will allow you to process each line of the file manually in your code.
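If you would rather not rely on a class at all, a bare-bones version of the same loop can be written with VBA's built-in Open and Line Input statements (the path and search text below are just placeholders). Line Input reads up to the next carriage return, so lines longer than 255 characters, embedded commas, and quotation marks are not a problem:
Dim fnum As Integer
Dim strLine As String
fnum = FreeFile
Open "C:\MyFile.txt" For Input As #fnum
Do While Not EOF(fnum)
    Line Input #fnum, strLine            'reads the raw line up to the next CR (or CR+LF)
    If InStr(strLine, "MySearchText") > 0 Then
        'Do something
    End If
Loop
Close #fnum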
It is not clear from your post whether you can use (or have tried) the tools available in the product for this task. Access 2010 offers linking to a .txt file as well as appending a .txt file to a table. These are standard features on the External Data tab of the ribbon.
The Long Text (formerly Memo) field type allows roughly 64K characters when entered through the interface. Not sure if you wish to attempt to bring all the txt data into a single field; if so, this limit is important.
If the CRs of the text document imply a new record/row of data (rather than a continuous string for the entire document), and if there is any consistent structure within all rows of data, then the import wizard can use either character count or symbols (i.e. commas, if they exist) as the means to separate each individual row of data into separate fields in a single row of a table.

Working on migration of SPL 3.0 to 4.2 (TEDA)

I am working on migrating 3.0 code to the new 4.2 framework. I am facing a few difficulties:
How do I do CDR-level deduplication in the new 4.2 framework? (Note: table deduplication is already done.)
Where should I implement PostDedupProcessor: context or chainsink custom? In either case, do I need to remove duplicate hashcodes from the list or just reject the tuples? Here I am also updating columns for a few tuples.
My file is not moving into the archive. The temporary output file is getting generated, but it is empty and outside the load directory. What could be the possible reasons? I have thoroughly checked the config parameters, and after adding logs it seems the correct output is being sent from the transformer custom, so I don't know where it gets stuck. I printed the TableRowGenerator stream to the logs (at the end of DataProcessor).
1. and 2.:
You need to select the type of deduplication; there is not a big difference whether you choose "table-" or "cdr-level-deduplication".
The ite.businessLogic.transformation.outputType parameter affects this. There is only one Dedup; you cannot have both.
For "cdr-level-deduplication", select recordStream and do the transformation to table row format (e.g. if you want to use the TableFileWriter) in xxx.chainsink.custom::PostContextDataProcessor.
In xxx.chainsink.custom::PostContextDataProcessor you need to add custom code for duplicate handling: reject (discard) tuples, set special column values, or write them to different target tables.
3.:
Possible reasons could be:
Missing forwarding of window punctuations or the statistic tuple
An error in the BloomFilter configuration; you would see it easily because the PE is down and the error log gives hints about the wrong sha2 functions being used
To troubleshoot your ITE application, I recommend enabling the following debug sinks if checking the StreamsStudio live graph is not sufficient:
ite.businessLogic.transformation.debug=on
ite.businessLogic.group.debug=on
ite.businessLogic.sink.debug=on
Run a test with a single input file only and check the flow of your record and statistic tuples. The debug sinks also write punctuation markers to the debug files.

How do you check a GUID against a list of known GUIDs in an SSRS expression?

I was writing an expression in SSRS/Visual Studio 2008, trying to compare a GUID to a list of known GUIDs... however, I was running up against errors in Visual Studio when I attempted that. Here is my code:
IIf(Fields!Id.Value = "E1A5AA02-6B0F-4D0D-87B6-E88773314B73" ...
It took a little digging, and it eventually led me to this question to find the answer; in the end I used a combination of string conversion and casing to yield the result:
IIf(UCase(CType(Fields!Id.Value, GUID).ToString) = "E1A5AA02-6B0F-4D0D-87B6-E88773314B73" ...
For completeness, I probably should have wrapped UCase around both sides of the equation, just in case.
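For illustration, that version would look something like:
IIf(UCase(CType(Fields!Id.Value, GUID).ToString) = UCase("E1A5AA02-6B0F-4D0D-87B6-E88773314B73") ...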

What does Backpatching mean?

What does backpatching mean? Please illustrate with a simple example.
Back patching usually refers to the process of resolving forward branches that have been planted in the code, e.g. at 'if' statements, when the value of the target becomes known, e.g. when the closing brace or matching 'else' is encountered.
In the intermediate code generation stage of a compiler, we often need to emit "jump" instructions to places in the code that don't exist yet. To deal with these cases, a placeholder target label is inserted for the instruction.
A marker nonterminal in the production rule gives the semantic action a place to record the position of such an instruction so that it can be filled in later.
Some statements, like conditionals, while loops, etc., will be represented as a bunch of "if" and "goto" instructions when the intermediate code is generated.
The problem is that these "goto" instructions do not have a valid target at the beginning (when the compiler starts reading the source code line by line, i.e. the first pass). Only after reading the whole source code for the first time are the labels and addresses that these "goto"s point to determined.
The question is: can we make the compiler fill in the X in the "goto X" statements in one single pass or not?
The answer is yes.
If we don't use backpatching, this can be achieved by a two-pass analysis of the source code. Backpatching, however, lets us create and hold a separate list that is exclusively designed for "goto" statements. Since everything is done in only one pass, the first pass will not fill in the X in the "goto X" statements, because the compiler doesn't know where the X is at first glance. But it does store the location of each X in that exclusive list, and after going through the whole code and finding the target of X, the placeholder is replaced by the actual address or reference.
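As a minimal sketch of that idea (not from the original answer, and using a made-up instruction format), here the positions of unresolved gotos are remembered in a list and patched once the target address is known, written in VB.NET:
Imports System
Imports System.Collections.Generic

Module BackpatchSketch
    Sub Main()
        Dim code As New List(Of String)()       'emitted intermediate instructions
        Dim pending As New List(Of Integer)()   'indices of gotos awaiting a target

        code.Add("if x < 0 goto ?")             'target unknown at this point
        pending.Add(code.Count - 1)
        code.Add("x = x + 1")

        'later, the target address becomes known: it is the next instruction slot
        Dim target As Integer = code.Count
        For Each i As Integer In pending        'backpatch every remembered goto
            code(i) = code(i).Replace("?", target.ToString())
        Next
        pending.Clear()
        code.Add("print x")                     'this is the instruction at 'target'

        For Each instr As String In code
            Console.WriteLine(instr)
        Next
    End Sub
End Module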
Backpatching is the process of leaving blank entries for goto instructions whose target address is unknown (a forward transfer) in the first pass, and filling in these unknowns in a second pass.
Backpatching:
The syntax-directed definition can be implemented in two or more passes (we have both synthesized attributes and inherited attributes).
Build the tree first.
Walk the tree in depth-first order.
The main difficulty with code generation in one pass is that we may not know the target of a branch when we generate code for flow-of-control statements.
Backpatching is the technique to get around this problem.
Generate branch instructions with empty targets
When the target is known, fill in the label of the branch instructions (backpatching).
Backpatching is a process in which the operand field of an instruction containing a forward reference is left blank initially. The address of the forward-referenced symbol is put into this field when its definition is encountered in the program.
Backpatching is the activity of filling in the unspecified label information by using the appropriate semantic actions during the code generation process.
It is done for:
Boolean expressions.
Flow-of-control statements.

Why is SSIS complaining that "There is a partial row at the end of the file"?

I'm importing a flat file into a database using a Data Flow Task in SSIS. The file is very simple: it contains three comma-separated values per row. Whenever I run this task, however, I receive a warning from the Flat File component:
Warning: 0x8020200F: There is a partial row at the end of the file.
This warning seems to happen regardless of the size of the file: even with only a handful of rows in the file, visually validated (with extended characters and whatnot visible), I still receive it. Moreover, it doesn't seem to matter whether I have a blank row at the end of the file or I just end it without a trailing CR+LF.
How can I get rid of this warning so I can run my package with WarnAsError enabled?
(BTW, it seems someone else may have had a similar problem in There is a partial row at the end of the file, though it wasn't much of a question.)
I have found three things to try if you encounter this problem. In at least two out of the three cases, SSIS was ignoring rows of my input file with only the above warning to show for it. Because of that, I do not recommend ignoring this warning!
Step 1: verify that your flat file is valid
This error will appear when you have an invalid input file. This can be especially hard to detect if your input file has millions of lines, as mine do, but it's vital that you discover file format violations because SSIS will happily give you this warning and continue on its way without importing the offending lines or, in some cases, the lines after the offending lines. The easiest way I found to discover a problem with the source file is to check the number of rows that are being imported successfully. If it's vastly different than the number you expect in your flat file, something may have gone wrong in the middle somewhere.
Step 2: try a dummy line at the end (fixed-width only)
If you are using a fixed-width format input file, Microsoft may have a helpful KB article for you. Basically, they suggest that you add a dummy line at the end of the file.
I am not using fixed-width files, so I can't say how useful this technique is.
Step 3: turn off text qualification for non-text
This is the tricky one, because I believe the TextQualified property is True by default. If your input file uses non-text fields (integers, etc.), then you must tell SSIS that it should not expect those columns to be qualified as text. Otherwise, SSIS will treat your input file as invalid in spite of it looking perfectly valid.
TextQualified is a property of the columns in your Flat File Connection Manager.
To change it, open up your connection manager, click "Advanced", and then click on a non-text column. Make sure the TextQualified property is set to False. You will need to do this for all of your non-text columns.
If the byte width of a line in the file is known, you can always double check that the total byte size of the file can be divided by the expected line size to give you a nice round line count number (as opposed to a decimal).
It also helps to know from your source just how many records are expected, but if you don't have this you can at least double-check the record count of the resulting loaded tables against the line count calculated while loading the file.
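A quick sketch of that size check in VB.NET (the path and row width below are placeholders; the row width must include the CR+LF bytes):
Imports System
Imports System.IO

Module RowCountCheck
    Sub Main()
        Dim path As String = "C:\data\input.txt"   'placeholder path
        Dim rowBytes As Long = 128                  'expected bytes per row, including CR+LF
        Dim info As New FileInfo(path)
        Dim total As Long = info.Length
        If total Mod rowBytes = 0 Then
            Console.WriteLine("Exact row count: " & (total \ rowBytes))
        Else
            Console.WriteLine("File size is not a whole number of rows; the last row may be partial.")
        End If
    End Sub
End Module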
I've seen this error often when a source flat text file is missing its last \r\n at the end of the file.
Running on 64-bit Windows was fine and led to no missing rows, but I lost the last row when running on Windows 2008.
My workaround is:
1. Open the SSIS package in BIDS on the Windows 2008 machine.
2. Open the file connection manager and make sure the Text Qualifier is set to
3. Rebuild it.
Everything then works fine on both Windows 7 and Windows 2008.