SSIS: Data at the root level is invalid

I want to iterate over the values matched by the XPath /root/filepath in xml_1.xml using a Foreach Loop container in SSIS. My XML file content looks like this:
<root>
  <filepath>1.txt</filepath>
  <filepath>2.txt</filepath>
</root>
My package uses a Foreach Loop container with the Foreach NodeList Enumerator (screenshots omitted). When I run this package, I always get the following error message:
SSIS package "C:\Users\zhanzhex\source\repos\Integration Services Project1\Integration Services Project1\loopThrowXml.dtsx" starting.
Error: 0xC0010014 at Foreach Loop Container, Foreach NodeList Enumerator: Data at the root level is invalid. Line 1, position 1.
Warning: 0x80019002 at Foreach Loop Container: SSIS Warning Code DTS_W_MAXIMUMERRORCOUNTREACHED. The Execution method succeeded, but the number of errors raised (1) reached the maximum allowed (1); resulting in failure. This occurs when the number of errors reaches the number specified in MaximumErrorCount. Change the MaximumErrorCount or fix the errors.
Warning: 0x80019002 at loopThrowXml: SSIS Warning Code DTS_W_MAXIMUMERRORCOUNTREACHED. The Execution method succeeded, but the number of errors raised (1) reached the maximum allowed (1); resulting in failure. This occurs when the number of errors reaches the number specified in MaximumErrorCount. Change the MaximumErrorCount or fix the errors.
SSIS package "C:\Users\zhanzhex\source\repos\Integration Services Project1\Integration Services Project1\loopThrowXml.dtsx" finished: Failure.
The program '[1168] DtsDebugHost.exe: DTS' has exited with code 0 (0x0).
Can anybody tell me why this happens and how to fix it?
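For reference, this is the enumerator configuration the setup implies (a sketch; property names as shown in the Foreach NodeList Enumerator editor, with the file supplied through a file connection rather than pasted in as text):

DocumentSourceType         = FileConnection
DocumentSource             = (connection pointing to xml_1.xml)
EnumerationType            = NodeText
OuterXPathStringSourceType = DirectInput
OuterXPathString           = /root/filepath

"Data at the root level is invalid. Line 1, position 1" is the XML parser's way of saying that the text handed to it does not start with XML. With this enumerator it typically means the source value itself is being parsed as XML, for example when DocumentSourceType is left at DirectInput but the value supplied is a file path.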

Related

mariadb c-connector bind / execute messes up memory allocation

I'm using the MariaDB C connector with prepare, bind, and execute. It usually works, but one case ends in "corrupted unsorted chunks" and a core dump when freeing the bind buffer. I suspect the whole malloc organisation is messed up after calling mysql_stmt_execute(). My test program MysqlDynamic.c shows:
The problem is connected only to the x509cert variable bound by bnd[9].
Freeing memory fails only if bnd[9].is_null = 0; if it is null, execute ends normally.
Freeing memory (using FreeStmt()) after bind and before execute ends normally.
A print of bnd[9].buffer before execute shows the (void*) points to the correct string buffer.
The behavior is the same whether bnd[9].buffer_length is set to STMT_INDICATOR_NTS or to strlen().
Other similar bindings (picture, bnd[10]) do not lead to corrupted memory and a core dump.
I defined a C structure for test data in my test program MysqlDynamic.c, which is bound via the MYSQL_BIND structure. The bindings for x509cert (a string buffer) in bindInsTest() are:
bnd[9].buffer_type = MYSQL_TYPE_STRING;
bnd[9].buffer_length = STMT_INDICATOR_NTS;
bnd[9].is_null = &para->x509certI;
bnd[9].buffer = (void*) para->x509cert;
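For comparison, here is a minimal sketch of the conventional way to bind a nullable string parameter with the C API (function and variable names are mine, not from MysqlDynamic.c). Note that, as far as I know, STMT_INDICATOR_NTS belongs to the indicator mechanism of the array/bulk interface (assigned via bnd[i].u.indicator), while buffer_length is an unsigned byte count, so assigning -1 to it yields a huge length:

#include <mysql.h>
#include <string.h>

/* Sketch: fill one MYSQL_BIND slot for a nullable string parameter. */
static void bind_string_param(MYSQL_BIND *b, const char *s,
                              my_bool *is_null, unsigned long *len)
{
    memset(b, 0, sizeof(*b));            /* unset fields must be zeroed  */
    *len = s ? (unsigned long)strlen(s) : 0;
    b->buffer_type   = MYSQL_TYPE_STRING;
    b->buffer        = (void *)s;        /* data to send                 */
    b->buffer_length = *len;             /* byte count, not an indicator */
    b->length        = len;              /* actual length for this row   */
    b->is_null       = is_null;          /* points at the NULL flag      */
}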
Please get the details from the source file MysqlDynamic.c. Adapt the defines in the source to your environment, verify the content, and run it; you will find compile info in the source code. MysqlDynamic -c creates the table, MysqlDynamic -i inserts 3 records each run, and MysqlDynamic -d drops the table again.
MysqlDynamic -vc shows:
session set autocommit to <0>
connection id: 175
mariadb server ver:<100408>, client ver:<100408>
connected on localhost to db test by testA
>> if program get stuck - table is locked
table t_test created
mysql connection closed
pgm ended normaly
MysqlDynamic -i shows:
ins2: BufPara <92> name<master> stamp<> epoch<1651313806000>
cert is cert<(nil)> buf<(nil)> null<1>
picure is pic<0x5596a0f0c220> buf<0x5596a0f0c220> null<0> length<172>
ins1: BufPara <91> name<> stamp<2020-04-30> epoch<1650707701123>
cert is cert<0x5596a0f181d0> buf<0x5596a0f181d0> null<0>
picure is pic<(nil)> buf<(nil)> null<1> length<0>
ins0: BufPara <90> name<gugus> stamp<1988-10-12T18:43:36> epoch<922337203685477580>
cert is cert<(nil)> buf<(nil)> null<1>
picure is pic<(nil)> buf<(nil)> null<1> length<0>
free(): corrupted unsorted chunks
Aborted (core dumped)
Checking the t_test table content shows that all records are inserted as expected.
You can disable loading of x509cert and/or picture by commenting out the defines on lines 57/58; the program then ends normally. You can also comment out line 208; the buffers are then indicated as NULL.
Questions:
Is there a generic coding mistake in the program causing this behavior?
Can you run the program in your environment without a core dump? I'm currently using version 10.04.08.
Any improvement to the code is welcome.
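One way to narrow this down (a suggestion, assuming a Linux build with debug symbols): run the failing insert case under Valgrind, which usually reports the first out-of-bounds write with a stack trace long before glibc aborts in free():

valgrind --tool=memcheck ./MysqlDynamic -i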

ssis execute process task capture exit code into variable

I'm trying to capture the exit code from an SSIS Execute Process Task into a variable. The exit code is a string, although the value itself is an integer: 0 or 1 means success, anything else means failure.
As far as I know I cannot specify multiple success values in the SuccessValue property, so I decided to capture the exit code into a variable, pass it to a Script Task, and evaluate there whether the exit code represents success or failure.
I've set up a variable of type String to capture the exit code of my app.
Unfortunately, the value is empty after the Execute Process Task runs, no matter whether I put the variable directly into StandardOutputVariable or into the Expressions tab.
On execution, in the Locals window in debug mode, I see the value is empty (e.g. {}).
Is there a way to overcome this?
I'd appreciate any feedback.
Whatever you are capturing in the variable is not the exit code; it is the output value (StandardOutput).
To get the exit code, add the suffix below in the command arguments:
Executable: cmd.exe
Arguments: /C mycmd_with_parameters & echo %ERRORLEVEL%
%ERRORLEVEL% will hold the exit code of the executable, and the echo writes it to standard output. More info on %ERRORLEVEL%.
There is one caveat:
When an external command is run by CMD.EXE, it will detect the executable's Return or Exit Code and set the ERRORLEVEL to match. In most cases the ERRORLEVEL will be the same as the Exit Code, but there are some cases where they can differ.
An Exit Code can be detected directly with redirection operators (Success/Failure ignoring the ERRORLEVEL); this can often be more reliable than trusting the ERRORLEVEL, which may or may not have been set correctly.
Now you will get the exit code of the executable as standard output, and you can capture it into the variable as you have configured.
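To illustrate where that value comes from, here is a hypothetical stand-in for mycmd_with_parameters (a minimal sketch, not part of the original setup): whatever main() returns is exactly what cmd.exe exposes as %ERRORLEVEL%.

#include <stdio.h>

int main(void)
{
    fprintf(stderr, "simulating a failure\n"); /* goes to StandardError, not the captured output */
    return 3;  /* cmd.exe sets %ERRORLEVEL% to 3; echo %ERRORLEVEL% then prints 3 to stdout */
}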
The StandardOutputVariable of an Execute Process Task contains the console output, so if you are running a batch script, all echo results should be saved in that variable.
You can check the variable's value while debugging at runtime.

unable to open a csv with fortran

I am trying to read input from a CSV file (input.csv, in the same folder as this .f95 file) which has some numbers in certain columns. When I execute my program, it gives the error:
program received signal SIGSEGV: segmentation fault - invalid memory reference. backtrace =#0 ffffffff
The code is below; it terminates after the first print statement.
program section
  implicit none
  integer :: no_num, count_le, count_cve, count_cvx, count_ce
  integer :: i, j
  double precision, allocatable, dimension(:) :: x1, y1, x2, y2
  double precision, allocatable, dimension(:) :: xc, yc, rc, sa, ea
  character(2), allocatable, dimension(:) :: asc, asl, sd
  character(100) :: rand_char ! random characters/txt
  print *, "program"
  open(unit=100, file='input.csv')
  print *, "check"
  close(100)
end program

Error while indexing csv file

I am trying to index a CSV file in Endeca. Indexing works fine when the line length is less than 65536 characters. For larger data it throws the exception below:
FATAL 02/18/14 15:45:53.122 UTC (1392738353122) FORGE {baseline}: TextObjectInputStream: while reading "/opt/soft/endeca/apps/MyApp/data/processing/TestRecord.csv", delimiter " " not found within allowed distance of 65536 characters. ............................................. .............................................. ERROR 02/17/14 16:10:58.060 UTC (1392653458060) FORGE {baseline}: I/O Exception: Error reading data from Java: EdfException thrown in: edf/src/format/Shared/TextObjectInputStream.cpp:76. Message is: exit called
How can I increase this limit so that I can index large data (more than 65536 characters in a single line) in Endeca?
I imagine you've fixed this by now. If not, this error appears when the row delimiter isn't set correctly in your Record Adapter.
If your records legitimately are that long in a CSV file, switch to XML or something else.

Getting DRAM_Reads and DRAM_Writes in command line mode of CUDA Profiler

I am trying to use the CUDA profiler on the command line. I am interested in DRAM reads and DRAM writes, so I am providing the following counters in my profiler configuration file:
fb_subp0_read_sectors
fb_subp0_write_sectors
fb0_subp0_read_sectors
fb0_subp0_write_sectors
fb1_subp0_read_sectors
fb1_subp0_write_sectors
But in my cuda_profile output files I notice warnings like:
NV_Warning: Ignoring the invalid profiler config option: fb0_subp0_read_sectors
NV_Warning: Ignoring the invalid profiler config option: fb0_subp0_write_sectors
NV_Warning: Ignoring the invalid profiler config option: fb1_subp0_read_sectors
NV_Warning: Ignoring the invalid profiler config option: fb1_subp0_write_sectors
The values I get from the fb_subp0_read_sectors and fb_subp0_write_sectors counters are not equal to what I get from the NVIDIA Visual Profiler, perhaps because I am not passing the correct counters in the config file.
The GPU is a Tesla M2050, and CUDA 4.1 is used. How do I get DRAM reads and DRAM writes on the command line?
EDIT: After a bit of reading, I think a GPU has either fb0_/fb1_-style counters or fb_-style counters. But even with:
fb_subp0_read_sectors
fb_subp0_write_sectors
fb_subp1_read_sectors
fb_subp1_write_sectors
I get the warning:
NV_Warning: Counter 'fb_subp1_read_sectors' is not compatible with other selected counters and it cannot be profiled in this run.
NV_Warning: Counter 'fb_subp1_write_sectors' is not compatible with other selected counters and it cannot be profiled in this run.
Thanks,
Sayan
Not all counters can be profiled in a single run, due to hardware constraints.
According to the warning message, you can profile the first two counters in one run and the last two in a second run.
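For reference, a sketch of how the two runs could look with the legacy command-line profiler (the config and log file names here are illustrative):

# profile_run1.cfg (first run):
fb_subp0_read_sectors
fb_subp0_write_sectors

# profile_run2.cfg (second run):
fb_subp1_read_sectors
fb_subp1_write_sectors

# launch the same app once per config:
CUDA_PROFILE=1 CUDA_PROFILE_CSV=1 CUDA_PROFILE_CONFIG=profile_run1.cfg CUDA_PROFILE_LOG=cuda_profile_run1.csv ./my_app

Summing across subpartitions should then approximate the Visual Profiler's numbers: DRAM reads ≈ fb_subp0_read_sectors + fb_subp1_read_sectors, and likewise for writes. As far as I know, each sector on this architecture is 32 bytes, if you want the totals in bytes.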