What is the simple way to find the column name from a LineageID in SSIS?

What is the simple way to find the column name from a LineageID in SSIS? Is there any system variable available?

I remember thinking this can't be that hard; I can write some script in the error redirect to look up the column name from the input collection.
string badColumn = this.ComponentMetaData.InputCollection[Row.ErrorColumn].Name;
What I learned was that the failing column isn't in that collection. Well, it is, but the ErrorColumn reported is not quite what I needed. I couldn't find that package, but here's an example of why I couldn't get what I needed. Hopefully you will have better luck.
This is a simple data flow that will generate an error once it hits the derived column due to division by zero. The Derived Column generates a new output column (LookAtMe) as the result of the division. The data viewer on the Error Output tells me the failing column is 73. Using the above script logic, if I attempted to access column 73 in the input collection, it would fail because that is not in the collection. LineageID 73 is LookAtMe, and LookAtMe is not in my error branch; it's only in the non-error branch.
This is a copy of my XML, and you can see that, yes, outputColumn id 73 is LookAtMe.
<outputColumn id="73" name="LookAtMe" description="" lineageId="73" precision="0" scale="0" length="0" dataType="i4" codePage="0" sortKeyPosition="0" comparisonFlags="0" specialFlags="0" errorOrTruncationOperation="Computation" errorRowDisposition="RedirectRow" truncationRowDisposition="RedirectRow" externalMetadataColumnId="0" mappedColumnId="0"><properties>
I really wanted that data though, and I'm clever, so I figured I could union all my results back together and then conditionally split them back out to get it. The problem is, Union All is an asynchronous transformation. Async transformations result in the data being copied from one set of buffers to another, resulting in... new lineage IDs being assigned. So even with a Union All bringing the two streams back together, you wouldn't be able to call up the data flow chain to find that original lineage ID, because it's in a different buffer.
Around this point, I conceded defeat and decided I could live without intelligent/helpful error reporting in my packages.

I know this is a long-dead thread, but I tripped across a manual solution to this problem and thought I would share it for anyone who happens upon the same problem. Granted, this doesn't provide a programmatic solution, but for simple debugging it should do the trick. The solution uses a Derived Column as an example, but this seems to work for any Data Flow component.
Answer provided by Todd McDermid and taken from AskSQLServerCentral:
"[...] Unfortunately, the lineage ID of your columns is pretty well hidden inside SSIS. It's the "key" that SSIS uses to identify columns. So, in order to figure out which column it was, you need to open the Advanced Editor of the Derived Column component or Data Conversion. Do that by right clicking and selecting "Advanced Editor". Go to the "Input and Output Properties" tab. Open the first node - "Derived Column Input" or "Data Conversion Input". Open the "Input Columns" tab. Click through the columns, noting the "LineageID" property of each. You may have to do the same with the "Derived Column Output" node, and "Output Columns" inside there. The column that matches your recorded lineage ID is the offending column."

For anyone using SQL Server versions before SS2016, here are a couple of reference links for a way to get the Column name:
http://www.andrewleesmith.co.uk/2017/02/24/finding-the-column-name-of-an-ssis-error-output-error-column-id/
which is based on:
http://toddmcdermid.blogspot.com/2016/04/finding-column-name-for-errorcolumn.html
I appreciate we aren't supposed to just post links, but this solution is quite convoluted, and I've tried to summarise by pulling info from both Todd and Andrew's blog posts and recreating them here. (thank you to both if you ever read this!)
From Todd's page:
Go to the "Inputs and Outputs" page, and select the "Output 0" node.
Change the "SynchronousInputID" property to "None". (This changes
the script from synchronous to asynchronous.)
On the same page, open the "Output 0" node and select the "Output
Columns" folder. Press the "Add Column" button. Change the "Name"
property of this new column to "LineageID".
Press the "Add Column" button again, and change the "DataType"
property to "Unicode string [DT_WSTR]", and change the "Name"
property to "ColumnName".
Go to the "Script" page, and press the "Edit Script" button. Copy
and paste this code into the ScriptMain class (you can delete all
other method stubs):
public override void CreateNewOutputRows()
{
    IDTSInput100 input = this.ComponentMetaData.InputCollection[0];
    if (input != null)
    {
        IDTSVirtualInput100 vInput = input.GetVirtualInput();
        if (vInput != null)
        {
            foreach (IDTSVirtualInputColumn100 vInputColumn in vInput.VirtualInputColumnCollection)
            {
                Output0Buffer.AddRow();
                Output0Buffer.LineageID = vInputColumn.LineageID;
                Output0Buffer.ColumnName = vInputColumn.Name;
            }
        }
    }
}
Feel free to attach a dummy output to that script, with a data viewer, and see what you get. From here, it's "standard engineering" for you ETL gurus. Simply merge join the error output of the failing component with this metadata, and you'll be able to transform the ErrorColumn number into a meaningful column name.
But for those of you that do want to understand what the above script is doing:
It gets the "first" (and only) input attached to the script component.
It gets the virtual input related to that input. The "input" is what the script can actually "see" on the input - and since we didn't mark any columns as being "ReadOnly" or "ReadWrite", that means the input has NO columns. However, the "virtual input" has the complete list of every column that exists, whether or not we've said we're "using" it.
We then loop over all of the "virtual columns" on this virtual input, and for each one...
Get the LineageID and column name, and push them out as a new row on our asynchronous script.
The image and text from Andrew's page help explain it in a bit more detail:
This map is then merge-joined with the ErrorColumn lineage ID(s) coming down the error path, so that the error information can be appended with the column name(s) from the map. I included a second script component that looks up the error description from the error code, so the error table rows that we see above contain both column names and error descriptions.
The remaining component that needs explaining is the conditional split – this exists just to provide metadata to the script component that creates the map. I created an expression (1 == 0) that always evaluates to false for the “No Rows – Metadata Only” path, so no rows ever travel down it.
Whilst this solution does require the insertion of some additional plumbing within the data flow, we get extremely valuable information logged when errors do occur. So especially when the data flow is running unattended in Production – when we don’t have the tools & techniques available at design time to figure out what’s going wrong – the logging that results gives us much more precise information about what went wrong and why, compared to simply giving us the failed data and leaving us to figure out why it was rejected.

There is no simple way to find out a column name from its lineage ID.
If you want to do this in BIDS, you have to inspect every component inside the data flow using the Advanced Editor's Input and Output Properties tab, and check the LineageID of each column on each input/output path.
But you can:
inspect the XML - this is very difficult
write a .NET application and use FindColumnByLineageId
However, the second option involves a lot of coding and an understanding of the pipeline, because you have to programmatically open the package, iterate over the tasks, iterate inside containers, and iterate over the transformations inside each data flow to find the particular component before you can use the proposed method.
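For illustration, here is a minimal design-time sketch of that approach as a standalone console app. The package path and the lineage ID value are placeholders, and it assumes references to the Microsoft.SqlServer.ManagedDTS and Microsoft.SqlServer.DTSPipelineWrap assemblies:
using System;
using Microsoft.SqlServer.Dts.Runtime;
using Microsoft.SqlServer.Dts.Pipeline.Wrapper;

class LineageLookup
{
    static void Main()
    {
        Application app = new Application();
        // Hypothetical path - point this at your own package.
        Package pkg = app.LoadPackage(@"C:\packages\MyPackage.dtsx", null);
        Walk(pkg.Executables, 73); // 73 = the lineage ID you are chasing
    }

    static void Walk(Executables execs, int lineageId)
    {
        foreach (Executable exec in execs)
        {
            TaskHost th = exec as TaskHost;
            if (th != null && th.InnerObject is MainPipe)
            {
                // A data flow task: scan every component's output columns.
                MainPipe pipe = (MainPipe)th.InnerObject;
                foreach (IDTSComponentMetaData100 comp in pipe.ComponentMetaDataCollection)
                    foreach (IDTSOutput100 output in comp.OutputCollection)
                        foreach (IDTSOutputColumn100 col in output.OutputColumnCollection)
                            if (col.LineageID == lineageId)
                                Console.WriteLine(comp.Name + ": " + col.Name);
            }
            else if (exec is IDTSSequence)
            {
                // Recurse into sequence containers and for/foreach loops.
                Walk(((IDTSSequence)exec).Executables, lineageId);
            }
        }
    }
}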

Here is a solution that:
Works at package runtime (not pre-populating)
Is automated through a Script Task and Component
Doesn't involve installing new assemblies or custom components
Is nicely BIML compatible
Check out the full solution here.
EDIT
Here is the short version.
Create 2 Object variables, execsObj and lineageIds
Create Script Task in Control flow, give it ReadWrite access to both variables
Add an assembly reference in your Script Task to Microsoft.SqlServer.DTSPipelineWrap.dll (this may require the SQL Server Client SDK to be installed; it is needed for the MainPipe object below)
Insert the following code into your Script Task
Dictionary<int, string> lineageIds = null;
public void Main()
{
// Grab the executables so we have to something to iterate over, and initialize our lineageIDs list
// Why the executables? Well, SSIS won't let us store a reference to the Package itself...
Dts.Variables["User::execsObj"].Value = ((Package)Dts.Variables["User::execsObj"].Parent).Executables;
Dts.Variables["User::lineageIds"].Value = new Dictionary<int, string>();
lineageIds = (Dictionary<int, string>)Dts.Variables["User::lineageIds"].Value;
Executables execs = (Executables)Dts.Variables["User::execsObj"].Value;
ReadExecutables(execs);
Dts.TaskResult = (int)ScriptResults.Success;
}
private void ReadExecutables(Executables executables)
{
foreach (Executable pkgExecutable in executables)
{
if (object.ReferenceEquals(pkgExecutable.GetType(), typeof(Microsoft.SqlServer.Dts.Runtime.TaskHost)))
{
TaskHost pkgExecTaskHost = (TaskHost)pkgExecutable;
if (pkgExecTaskHost.CreationName.StartsWith("SSIS.Pipeline"))
{
ProcessDataFlowTask(pkgExecTaskHost);
}
}
else if (object.ReferenceEquals(pkgExecutable.GetType(), typeof(Microsoft.SqlServer.Dts.Runtime.ForEachLoop)))
{
// Recurse into FELCs
ReadExecutables(((ForEachLoop)pkgExecutable).Executables);
}
}
}
private void ProcessDataFlowTask(TaskHost currentDataFlowTask)
{
MainPipe currentDataFlow = (MainPipe)currentDataFlowTask.InnerObject;
foreach (IDTSComponentMetaData100 currentComponent in currentDataFlow.ComponentMetaDataCollection)
{
// Get the inputs in the component.
foreach (IDTSInput100 currentInput in currentComponent.InputCollection)
foreach (IDTSInputColumn100 currentInputColumn in currentInput.InputColumnCollection)
lineageIds.Add(currentInputColumn.ID, currentInputColumn.Name);
// Get the outputs in the component.
foreach (IDTSOutput100 currentOutput in currentComponent.OutputCollection)
foreach (IDTSOutputColumn100 currentoutputColumn in currentOutput.OutputColumnCollection)
lineageIds.Add(currentoutputColumn.ID, currentoutputColumn.Name);
}
}
Create a Script Component in the Data Flow with ReadOnly access to lineageIds, and add the following code.
public override void Input0_ProcessInputRow(Input0Buffer Row)
{
    Dictionary<int, string> lineageIds = (Dictionary<int, string>)Variables.lineageIds;

    int? colNum = Row.ErrorColumn;
    if (colNum.HasValue && (lineageIds != null))
    {
        if (lineageIds.ContainsKey(colNum.Value))
            Row.ErrorColumnName = lineageIds[colNum.Value];
        else
            Row.ErrorColumnName = "Row error";
    }
    Row.ErrorDescription = this.ComponentMetaData.GetErrorDescription(Row.ErrorCode);
}
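Note: this assumes ErrorColumnName and ErrorDescription have been added as new string output columns on the script component (on the Inputs and Outputs page); the Row.ErrorColumnName and Row.ErrorDescription properties are generated from those columns and won't exist otherwise.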

Related

How to set up SSIS parent package such that 4 child packages can run at the same time with different parameter values passed in?

I have created a child SSIS package that executes according to the "ProcessName" variable value that is specified initially. Now, I wish to create a parent package such that I can execute 4 child package tasks with different ProcessName values passed in, to be executed in parallel. How can I maintain my child package and pass in different values to each of the 4 Execute Package Tasks such that the ProcessName variable values are different for each of them? I am new to SSIS and would deeply appreciate it if someone could advise or point me in the right direction.
I would see this as a pattern like the following
The "trick" here is that within each Sequence Container, SEQC, I need to define my variable that holds my parameter value. That variable needs to be scoped to the container - otherwise, there is only one SSIS variable and the 4 processes that attempt to initialize that value will be in conflict.
In the SSIS Variables menu, there is a Move Variable icon (second one listed)
Here you can see that I have ParameterValue defined in both "SEQC Opt 1a" and "SEQC Opt 1b" and they're initialized with different values.
The first step within the Sequence container is an Execute SQL Task where I pull back the intended parameter value. Maybe that is not needed in your case but it can be helpful to have a repository of run-time values. In the case of 1b, this is much more what my execution pattern looks like. I have a query that pulls back any packages to be run within the scope of this container and the starting value. e.g.
ContainerName|PackageName|StartingValue
SEQC Opt 1a |Child0.dtsx|100
SEQC Opt 1a |Child1.dtsx|200
SEQC Opt 1a |Child2.dtsx|300
SEQC Opt 1b |Child5.dtsx|600
SEQC Opt 1b |Child6.dtsx|700
SEQC Opt 1b |Child7.dtsx|800
This table pattern allows me to dynamically run packages both in parallel and in serial. Assume Child7 and Child2 in the above set are very slow but the other 4 packages are relatively fast. The fast ones would start up, do their work, and complete, and then the next one would run. There are limits to how many parallel operations can fire at once, so you can't scale infinitely across processes; a balance of serial and parallel operations makes sense.
Once you have your pattern working for one sequence container: copy, paste, rename, and, assuming you look values up in a table keyed by task name as I show above, it's ready to go.
NOTE for everyone reading this answer: this answer is not complete, with full examples and steps. Based on the comment above, I am posting it now so the requestor can see it and get started.
This came from notes I wrote for myself a long while back on how to do this. I am posting it as an answer because it is helpful and too large to post as a comment. I have not rewritten anything from my original notes.
Currently I cannot find my full code to post complete details/steps. If/when I do, I will post it here, but this should be a good description of what to do and how. It also covers how to handle child package error trapping.
-- my notes, saved for myself, posted as an answer:
Steps for creating child packages:
Create any variables needed in the child package
Create the corresponding variable in the parent package (the name does not have to be the same, and you may want to name it something that identifies it as a child package variable)
Child Package:
Need to set up: Package Configurations
a. Right click on the package and click Package Configurations
b. Click the checkbox to Enable package configurations
Click Add and set the parameters:
a. Configuration Type: Parent package variable
b. Specify the configuration setting directly: put the parent variable name that the child package is going to access in here
c. Click "Next"
d. In the "Objects" window, scroll down to the variable you are setting from the parent variable name you selected above, and check the "Value" option under Properties for that variable name
e. Click "Next"
f. Under Configuration Name: Set a detailed name for what this variable is/does.
Error Handling (NOTE: This is not required, but you won't capture the child error messages if you don't do this):
a. Go to the Event Handlers tab
b. In the drop down (on the top right) select OnError
c. Add a Script Task
d. Pass as read only variables:
System::ErrorDescription
System::SourceName
System::PackageName
e. Copy/paste the code below into the script task in the Main() function.
----- this is for the error handling
public void Main()
{
    // build out the error message
    string ErrorMessageToPassToParent = "Package Name: " + Dts.Variables["System::PackageName"].Value.ToString() + Environment.NewLine + Environment.NewLine +
        "Step Failed On: " + Dts.Variables["System::SourceName"].Value.ToString() + Environment.NewLine + Environment.NewLine +
        "Error Description: " + Dts.Variables["System::ErrorDescription"].Value.ToString();

    // have to do this FIRST so you can access the variable without passing it into the script task from the SSIS toolbox
    // Populate the collection of variables. This will include parent package variables.
    Variables vars = null;
    Dts.VariableDispenser.GetVariables(ref vars);

    // check if this variable exists in the parent first, and if so, set it to the value of the child variable
    // (do this so that if the parent package does not have the variable, it will not error out when trying to set a non-existent variable)
    if (Dts.VariableDispenser.Contains("OnError_ErrorDescription_FromChild") == true)
    {
        // Lock the to and from variables.
        // parent variable
        Dts.VariableDispenser.LockForWrite("User::OnError_ErrorDescription_FromChild");
        // Need to call GetVariables again after locking them. Not sure why - perhaps to get a clean post-lock set of values.
        Dts.VariableDispenser.GetVariables(ref vars);
        // Set parentvar = childvar
        vars["User::OnError_ErrorDescription_FromChild"].Value = ErrorMessageToPassToParent;
        vars.Unlock();
    }
    Dts.TaskResult = (int)ScriptResults.Success;
}
Parent Package:
Add this variable to properly capture the child error messages (not required, but you won't capture child error messages if you don't):
variable: OnError_ErrorDescription_FromChild
Error Handling (NOTE: This is not required, but you won't capture the child error messages if you don't do this):
a. Go to the Event Handlers tab
b. In the drop down (on the top right) select OnError
c. Add a Script Task
d. Pass as read only variables:
User::OnError_ErrorDescription_FromChild
e. Copy/paste the code below into the script task in the Main() function.
----- this is for the error handling
public void Main()
{
    // get the variable from the parent package for the error
    string ErrorFromChildPackage = Dts.Variables["User::OnError_ErrorDescription_FromChild"].Value.ToString();

    // check whether the value is empty or not (so we know if the error came from the child package or occurred in the parent package itself)
    if (ErrorFromChildPackage.Length > 0)
    {
        // Then raise the error that was created in the child package
        Dts.Events.FireError(0, "Capture Error From Child Package Failure",
            ErrorFromChildPackage,
            String.Empty, 0);
        //Dts.TaskResult = (int)ScriptResults.Failure;
    } // end if the error length of the variable is > 0

    Dts.TaskResult = (int)ScriptResults.Success;
}
NOTES:
For error handling:
a. The child package error handling is written so it won't fail if the variables or error handling do not exist in the parent package.
b. If you include the error handling (and variable) in the parent package, it MUST exist in the child package though.

SSIS Compilation problems - DirectRowToOutput

Input buffer does not contain a definition for 'DirectRowToOutput0', or likewise for the other properties below.
Row.DirectRowToOutput0();
Row.ErrorMessage = ex.Message;
Row.DirectRowToFailedValidation();
I had some packages in the SSIS package store and attempted to import them using the Package Import Wizard project. It had some issues, compilation failed, and it completely broke all the previous script components, so I fished the code out of some backups and pasted it back into some new script tasks.
'ErrorMessage' I did add to a new output flow and column, but it looks like things don't work that way anymore.
New Script tasks appear to be C# 2012.
What have I missed? I am struggling to find which documentation I should be using, and these version conflicts are really hard to deal with.
Using SSDT 2017.
"DirectRowToOutputX()" means "Provide support for filtering outputs to named output groups". In other words, if you ADD SUPPORT for output filtering, then you get the functionality that you list above. Here's how:
When you configure your Script Component, you need to click Inputs and Outputs, then in the inputs and outputs pane, select the output that you'd like to filter. Then in Common Properties, select ExclusionGroup and set it to some value other than zero. Now go back and edit your script and the Row.DirectRowToOutput0() statement will work. Code example below.
public override void Input0_ProcessInputRow(Input0Buffer Row)
{
    bool keepThisRow = ValidateMyRow(Row);
    if (keepThisRow)
    {
        // Get/set Row values, do something useful
        Row.DirectRowToOutput0(); // for Output0
    }
    /* Else do nothing - the row will be filtered OUT of the output if not explicitly
     * included
     */
}
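One extra note on how ExclusionGroup behaves: outputs with ExclusionGroup left at zero receive every row, while outputs sharing a non-zero ExclusionGroup only receive the rows you explicitly direct to them with DirectRowToOutputX(). So if you have several filtered outputs (say Output0 and FailedValidation, as in the question), give them the same non-zero ExclusionGroup value and make sure their SynchronousInputID points at the input; any row you never direct anywhere is silently discarded.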

Saving JSON data to DB in Zotonic

I'm trying to write a small app that retrieves a JSON file (it contains a list of items, which all have some properties), saves its contents to the DB, and then displays some of it later on. I have Zotonic up and running, and generating some HTML is no problem.
At the moment I'm stuck trying to figure out how to define a custom resource and how to get the data from the JSON into the DB. Once the data is there I should be fine; that part seems well covered by the documentation.
I wrote some standalone Erlang scripts that fetch the data, and I noticed that Zotonic has a library for decoding JSON, so that part should be fine. Any tips on where to put which code, or where to look further?
The z_db module allows for creating custom tables by using:
z_db:create_table(Table, Cols, Context).
The Table variable is your table name which can be either an atom or a list containing a single atom.
The Cols is a list of column definitions, which are defined by records. Currently the record definition (you can find this in include/zotonic.hrl) is:
-record(column_def, {name, type, length, is_nullable=true, default, primary_key}).
See Erlang docs on records for more info on records
Example code which I put in users/sites/[sitename]/models/m_[sitename].erl:
init(Context) ->
case z_db:table_exists(?table,Context) of
false ->
z_db:create_table(tablename,
[
#column_def{name=id, type="serial"},
#column_def{name=gid, type="integer", is_nullable=false},
#column_def{name=magnitude, type="real"},
#column_def{name=depth, type="real"},
#column_def{name=location, type="character varying"},
#column_def{name=time, type="integer"},
#column_def{name=date, type="integer"}
], Context);
true -> ok
end,
ok.
Pay attention to which options of the record you specify. Most of the errors I got were from, for example, specifying a length on the integer fields.
The models/m_sitename:init/1 does not get called on site start. The sitename:init/1 does get called so I call the init function there to ensure the table exists. Example:
init(Context) ->
m_sitename:init(Context).
It is called by Zotonic with the Context variable of the site automatically. You can get this variable manually as well with z:c(sitename). So if you call m_sitename:init(Context) from somewhere else, you would do:
m_sitename:init(z:c(sitename)).
Next, insertion in the DB can be done with:
z_db:insert(Table, PropList, Context).
Where Table is again an atom or a list containing a single atom representing the table name. Context is the same as above.
PropList is a property list which is a list containing tuples consisting of two elements where the first is an atom and the second is its associated value/property. Example:
PropList = [
{row, Value},
{anotherrow, AnotherValue}
].
Table = tablename.
Context = z:c(sitename).
z_db:insert(Table, PropList, Context).
See Erlang docs on Property Lists for more info on property lists.
=== The dependencies have been updated so if you build from source the step directly below is no longer needed ===
The JSON part is a bit more tricky. Included with Zotonic are mochijson2 and, as a secondary dependency, jiffy. The latest version of jiffy contains jiffy:decode/2, which allows you to specify maps as a return type - much more readable than the standard {struct, {struct, <<"">>}} monster. To update to the latest version, edit the line in deps/twerl/rebar.config that says
{jiffy, ".*", {git, "https://github.com/davisp/jiffy.git", {tag, "0.8.3"}}},
to
{jiffy, ".*", {git, "https://github.com/davisp/jiffy.git", {tag, "0.14.3"}}},
Now run z:m(). in the Zotonic shell. (You must do this after every change in your code.)
Now check in the Zotonic shell that jiffy:decode/2 is available by typing jiffy: and pressing <tab>; it will show a list of available functions and their arities.
To retrieve a JSON file from the internet run:
{ok, {{_, 200, _}, _, Body}} = httpc:request(get, {"url-to-JSON-here", []}, [], [])
Which will yield the variable Body with the contents. See Erlang docs on http client for more info on this call.
Next convert the contents of Body to Erlang terms with:
JsonData = jiffy:decode(Body, [return_maps]).
What you have to do next depends a lot on the structure of your JSON resource. Keep in mind that everything is now in binary UTF-8 encoded strings! If you print JsonData to the screen (just enter JsonData. in your Zotonic/Erlang shell) you will see a lot of entries like #{<<"key">> => <<"Value">>}.
My data was nested so I had to extract the needed data like this:
[{_,ItemList}|_] = ListData.
This gave me a list of maps, and in order to deal with them as individual items I used the following function:
get_maps([]) ->
done;
get_maps([First|Rest]) ->
Map = maps:get(<<"properties">>, First),
case is_map(Map) of
true ->
map_to_proplist(Map),
get_maps(Rest);
false -> done
end,
done;
get_maps(_) ->
done.
As you might remember, the z_db:insert/3 function needs a property list to populate rows, so that's what the call to map_to_proplist/1 is for. What this function looks like is completely dependent on how your data looks, but as an example here is what worked for me:
map_to_proplist(Map) ->
case is_map(Map) of
true ->
{Value1,_} = string:to_integer(binary_to_list(maps:get(<<"key1">>, Map))),
{Value2,_} = string:to_float(binary_to_list(maps:get(<<"key2">>, Map))),
{Value3,_} = string:to_float(binary_to_list(maps:get(<<"key3">>, Map))),
Value4 = binary_to_list(maps:get(<<"key4">>, Map)),
{Value5,_} = string:to_integer(binary_to_list(maps:get(<<"key5">>, Map))),
{Value6,_} = string:to_integer(binary_to_list(maps:get(<<"key6">>, Map))),
PropList = [{rowname1, Value1}, {rowname2, Value2}, {rowname3, Value3}, {rowname4, Value4}, {rowname5, Value5}, {rowname6, Value6}],
m_sitename:insert_items(PropList,z:c(sitename)),
ok;
false ->
ok
end.
See the documentation on string:to_integer/1 and string:to_float/1 as to why the tuples are needed when casting. The call to m_sitename:insert_items(PropList, z:c(sitename)) calls z_db:insert/3 in models/m_sitename.erl, wrapped in a catch:
insert_items(PropList, Context) ->
    (catch z_db:insert(?table, PropList, Context)).
Ok, quite a long post but this should get you up and running if you were looking for this answer.
The above was done with Zotonic 0.13.2 on Erlang/OTP 18.
A repost (except the JSON part) of my post in the Zotonic Developers group.

Can not query nodes but can see all the nodes and properties in the data browser

I have imported a few nodes with properties via the Cypher CSV import (command below), and the nodes seem to have loaded correctly, as I can view them in the REST API (the data browser). When I execute a MATCH (n) RETURN n query, all of the nodes are displayed in the Results pane, and when I click on one of the nodes its properties are displayed in the left pane of the browser (I would attach a screenshot showing what I am trying to refer to here, which would make this issue a lot clearer and easier to understand, but apparently we neophytes are prohibited from providing such useful information).
However, when I try to query any of the nodes directly, I get no rows returned. By "query the nodes directly" I am referring to querying with a WHERE condition where I ask for a specific property:
MATCH (n)
WHERE n:Type="Idea"
RETURN n
Type is one of the properties of the node. No rows are returned from the query. I can click on the node in the Stream pane to open the properties dialog, and I can see the Type property is clearly "Idea."
Am I missing something? The nodes and properties seem to have loaded into the DB correctly, but I can't seem to query anything. Is "ID" a restricted term? Do I even need an "ID" property? (I thought I read somewhere that you shouldn't trust the auto-generated IDs, as they aren't guaranteed to be unique over time.)
Import statement used to load the nodes is below:
$ auto-index name, ID
$ import-cypher -i ProjectNodesCSV.csv -o ProjectOut.csv CREATE (n:Project {ID:{ID},Name: {Name}, Type: {Type}, ProjectGroupName: {ProjectGroupName}, ProjectCategoryName: {ProjectCategoryName}, UnifierID: {UnifierID}, StartDate: {StartDate}, EndDate: {EndDate}, CapitalCosts: {CapitalCosts}, OandMCosts: {OandMCosts}}) RETURN ID(n) as ID, n.Name as Name

Passing a variable value from Control Flow to Data Flow in SSIS

I have a fairly straightforward SSIS package where I can't successfully pass the value of a package-scoped variable from the Control Flow to a Data Flow task. Consider the below diagram:
The Execute SQL task gets values from a list of "machines". This is used to control a ForEach Loop Container, which works very well. Next a script task performs some math and assigns a single number to a package scoped variable (integer type). I have added message boxes that pop up during the loop so that I can verify that the value of this variable is being set properly.
The last icon is a data flow where I want to use the variable value. I have a simple script task that contains just a message box showing me the current value of this same variable. Every time, the variable is the value that I initially set in the designer (BIDS). Therefore, the value is not being "passed" to the data flow. I have verified multiple times that the names of the variables are correct (including case sensitive values).
This should be pretty simple, and I am getting frustrated with this issue. I would greatly appreciate any suggestions or comments. Thank you!
How are you setting the package variable from your script task? It should look like this (C#; note the object is Dts, not DTS):
Dts.Variables["testVariable"].Value = "some value";
Then to test it from the script component in your dataflow task:
public override void PostExecute()
{
    base.PostExecute();
    MessageBox.Show(Variables.testVariable, "test");
}
I did this in a test package and it worked fine.
EDIT
Also make sure that you added the variable to the ReadWriteVariables section of the properties for the script tasks.
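A related gotcha, in case it helps: a Script Task only sees variables you list in its ReadOnlyVariables/ReadWriteVariables properties, or that you lock manually. If you'd rather lock manually than list the variable in the properties, here is a sketch of that pattern inside the control-flow Script Task, following the same VariableDispenser approach used in the parent/child answer above (the variable name User::testVariable and its Int32 type are assumptions):
public void Main()
{
    Variables vars = null;
    // Lock the variable explicitly instead of listing it in ReadWriteVariables.
    Dts.VariableDispenser.LockForWrite("User::testVariable");
    Dts.VariableDispenser.GetVariables(ref vars);
    vars["User::testVariable"].Value = 42; // assumes an Int32 package variable
    vars.Unlock();
    Dts.TaskResult = (int)ScriptResults.Success;
}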