I am trying to insert a generic list into SQL Server with SqlBulkCopy, and I am having trouble with the identity field. I want my destination table to generate the identity values. How should I handle this?
Here is my code:
using (var bulkCopy = new SqlBulkCopy(ConfigurationManager.ConnectionStrings["ConnectionString"].ConnectionString))
{
    bulkCopy.BatchSize = (int)DetailLines;
    bulkCopy.DestinationTableName = "dbo.tMyTable";

    var table = new DataTable();
    var props = TypeDescriptor.GetProperties(typeof(tBFFormularyStatusList))
        // Dirty hack to make sure we only have system data types
        // i.e. filter out the relationships/collections
        .Cast<PropertyDescriptor>()
        .Where(propertyInfo => propertyInfo.PropertyType.Namespace.Equals("System"))
        .ToArray();

    foreach (var propertyInfo in props)
    {
        bulkCopy.ColumnMappings.Add(propertyInfo.Name, propertyInfo.Name);
        table.Columns.Add(propertyInfo.Name, Nullable.GetUnderlyingType(propertyInfo.PropertyType) ?? propertyInfo.PropertyType);
    }

    var values = new object[props.Length];
    foreach (var item in myGenericList)
    {
        for (var i = 0; i < values.Length; i++)
        {
            values[i] = props[i].GetValue(item);
        }
        table.Rows.Add(values);
    }

    bulkCopy.WriteToServer(table);
}
This throws the following exception:
Property accessor 'ID' on object 'ProcessFlatFiles.DetailsClass' threw the following exception: 'Object does not match target type.'
I have also tried
using (var bulkCopy = new SqlBulkCopy(ConfigurationManager.ConnectionStrings["ConnectionString"].ConnectionString, SqlBulkCopyOptions.KeepIdentity))
{
Finally I got it working this way:
// Note: & on two enum flags yields SqlBulkCopyOptions.Default (0) rather than combining them
// (combining would be |); Default lets the destination generate the identity values, which is what I wanted here.
using (var bulkCopy = new SqlBulkCopy(ConfigurationManager.ConnectionStrings["ConnectionString"].ConnectionString, SqlBulkCopyOptions.KeepNulls & SqlBulkCopyOptions.KeepIdentity))
{
    bulkCopy.BatchSize = (int)DetailLines;
    bulkCopy.DestinationTableName = "dbo.myTable";
    bulkCopy.ColumnMappings.Clear();
    bulkCopy.ColumnMappings.Add("SourceColumnName", "DestinationColumnName");
    bulkCopy.ColumnMappings.Add("SourceColumnName", "DestinationColumnName");
    bulkCopy.ColumnMappings.Add("SourceColumnName", "DestinationColumnName");
    bulkCopy.ColumnMappings.Add("SourceColumnName", "DestinationColumnName");
    .
    .
    .
    .
    bulkCopy.ColumnMappings.Add("SourceColumnName", "DestinationColumnName");
    bulkCopy.WriteToServer(datatable);
}
I know this is an old question, but I thought it was worth adding this alternative (if you already have the correct schema, you can skip steps 1-3):
1. Perform a simple TOP 1 select from the destination table to return a DataTable with the destination table's schema.
2. Use the DataTable's Clone method to generate a DataTable with the same schema and no data.
3. Insert your data into this cloned table.
4. Perform SqlBulkCopy's WriteToServer. If the column orders match, the identity values can be read from the source; if the KeepIdentity option isn't provided in SqlBulkCopy's constructor, the default is to ignore these values and let the destination provide them.
The important point is that if you have the columns in the correct order (including identity columns), everything is handled for you. A minimal sketch of these steps follows below.
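Here is a rough sketch of those steps, assuming the same connection string, destination table (dbo.tMyTable) and generic list (myGenericList) as in the question; the property-to-column assignments inside the loop are only indicated, since they depend on your class:
// 1. Pull one row just to capture the destination schema.
var connectionString = ConfigurationManager.ConnectionStrings["ConnectionString"].ConnectionString;
var schemaTable = new DataTable();
using (var conn = new SqlConnection(connectionString))
using (var da = new SqlDataAdapter("SELECT TOP 1 * FROM dbo.tMyTable", conn))
{
    da.Fill(schemaTable);
}

// 2. Clone it: same columns (including the identity column), no data.
var table = schemaTable.Clone();

// 3. Load your objects into the cloned table; leave the identity column untouched.
foreach (var item in myGenericList)
{
    var row = table.NewRow();
    // row["SomeColumn"] = item.SomeProperty;   // map your own properties here
    table.Rows.Add(row);
}

// 4. Bulk copy. Without SqlBulkCopyOptions.KeepIdentity the destination
//    generates the identity values itself.
using (var bulkCopy = new SqlBulkCopy(connectionString))
{
    bulkCopy.DestinationTableName = "dbo.tMyTable";
    bulkCopy.WriteToServer(table);
}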
I need multiple staged WebViews to hold multiple loaded websites at the same time.
I was hoping to manage this by making an array of WebView objects, so I could call them later as view[i].
var view:Array=[webview0, webview1, webview2];
for each (var v in view){
var v:StageWebView = new StageWebView();
This gives error: 1086: Syntax error: expecting semicolon before left bracket.
Does someone know how to make an array like that?
You're doing something really weird there in terms of syntax. If you just want an Array of freshly created instances, it goes like this:
// Initialize the array.
var Views:Array = new Array;

// This loop counts 0, 1, 2.
for (var i:int = 0; i < 3; i++)
{
    // Create a new instance.
    // Yes, you can omit () with the new operator if there are no arguments.
    var aView:StageWebView = new StageWebView;

    // Assign the new element to your array.
    Views[i] = aView;
}
Or, if you need only 3 then you don't need to go algorithmic.
var Views:Array = [new StageWebView, new StageWebView, new StageWebView];
Not on topic, but related: here is an example of one HTML page holding multiple StageWebViews:
https://www.w3schools.com/graphics/tryit.asp?filename=trymap_basic_many
I have an Excel file that loosely resembles the following format:
I'll explain the next step of the SSIS package first, since the column names are not "important": I am un-pivoting the data in a data flow to start getting it into a usable shape.
The issue is that the file will be updated: historical years and quarters will be removed and new ones added to replace them. That means, as we all know, the metadata on the data flow breaks.
The cell range and position etc. will always remain the same.
Is there a way it can be handled in a data flow with the column names (2016q1) being fluid?
Thanks
You're going to like this, as it also does the un-pivot for you.
Use a C# Script Component source:
Add the namespace:
using System.Data.OleDb;
Add your 4 output columns and select their data types.
Add this code to the CreateNewOutputRows method:
public override void CreateNewOutputRows()
{
    /*
      Add rows by calling the AddRow method on the member variable named "<Output Name>Buffer".
      For example, call MyOutputBuffer.AddRow() if your output was named "MyOutput".
    */
    string fileName = @"C:\test.xlsx";
    string SheetName = "Sheet1";
    string cstr = "Provider=Microsoft.ACE.OLEDB.12.0;Data Source=" + fileName + ";Extended Properties=\"Excel 12.0;HDR=YES;IMEX=1\"";

    OleDbConnection xlConn = new OleDbConnection(cstr);
    xlConn.Open();
    OleDbCommand xlCmd = xlConn.CreateCommand();
    xlCmd.CommandText = "Select * from [" + SheetName + "$]";
    xlCmd.CommandType = CommandType.Text;
    OleDbDataReader rdr = xlCmd.ExecuteReader();

    //int rowCt = 0; //Counter
    while (rdr.Read())
    {
        for (int i = 2; i < rdr.FieldCount; i++) //loop from 3rd column to last
        {
            Output0Buffer.AddRow();
            Output0Buffer.ColA = rdr[0].ToString();
            Output0Buffer.ColB = rdr[1].ToString();
            Output0Buffer.FactName = rdr.GetName(i);
            Output0Buffer.FactValue = rdr.GetDouble(i);
        }
        //rowCt++; //increment counter
    }
    xlConn.Close();
}
If the columns always remain in the same order, you can also skip the header rows entirely and tell the provider that the first row does not contain headers; a sketch of that variant follows below.
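If you go that route, here is a sketch (reusing fileName and SheetName from above). An assumption here is that the data always starts on the second row; with HDR=NO the provider returns every row, including the old header row, and names the columns F1, F2, and so on:
string cstr = "Provider=Microsoft.ACE.OLEDB.12.0;Data Source=" + fileName + ";Extended Properties=\"Excel 12.0;HDR=NO;IMEX=1\"";
using (OleDbConnection xlConn = new OleDbConnection(cstr))
{
    xlConn.Open();
    OleDbCommand xlCmd = xlConn.CreateCommand();
    xlCmd.CommandText = "Select * from [" + SheetName + "$]";
    OleDbDataReader rdr = xlCmd.ExecuteReader();

    bool headerRow = true;
    while (rdr.Read())
    {
        if (headerRow) { headerRow = false; continue; }   // skip the old header row
        for (int i = 2; i < rdr.FieldCount; i++)
        {
            Output0Buffer.AddRow();
            Output0Buffer.ColA = rdr[0].ToString();
            Output0Buffer.ColB = rdr[1].ToString();
            // No real column names with HDR=NO, so carry the ordinal position instead.
            Output0Buffer.FactName = "Column" + (i + 1).ToString();
            Output0Buffer.FactValue = Convert.ToDouble(rdr[i]);
        }
    }
}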
I have a question. I'm trying to insert some values into a MySQL database, but I have a small issue.
for (int i = 0; i < Chauffeurlijst.Count; i++)
{
    db.Insert("Insert into route (Voertuigen_ID,Chauffeurs_ID,Planning_ID) VALUES(@voertuigid,@chauffeurid,@planningid)", new List<KeyValuePair<string, object>>
    {
        new KeyValuePair<string, object>("@voertuigid", Voertuigenlijst[i].ID),
        new KeyValuePair<string, object>("@chauffeurid", Chauffeurlijst[i].ID),
        new KeyValuePair<string, object>("@planningid", planning.ID)
    });
}
I have two notable variables: a Voertuigenlijst (a list of Voertuig) and a Chauffeurlijst (a list of Chauffeur). The problem is that Voertuigenlijst's count can be smaller than the count of Chauffeurlijst (it can also be the same). When the Voertuigenlijst is smaller, the program will of course crash, because the last item in the list of Voertuigen has already been allocated. What I would like to do is use index i-1 when Voertuigenlijst[i] doesn't exist anymore. Is there a nice solution for this problem?
You can do this by using another local variable to determine the index for Voertuigenlijst. Basically, you want to check that the index is within your range. If not, then use the last index in your list. For example --
for (int i = 0; i < Chauffeurlijst.Count; i++)
{
    var j = i < Voertuigenlijst.Count ? i : Voertuigenlijst.Count - 1;
    db.Insert("Insert into route (Voertuigen_ID,Chauffeurs_ID,Planning_ID) VALUES(@voertuigid,@chauffeurid,@planningid)",
        new List<KeyValuePair<string, object>>
        {
            new KeyValuePair<string, object>("@voertuigid", Voertuigenlijst[j].ID),
            new KeyValuePair<string, object>("@chauffeurid", Chauffeurlijst[i].ID),
            new KeyValuePair<string, object>("@planningid", planning.ID)
        });
}
This will use the last Voertuig in the list if Chauffeurlijst.Count > Voertuigenlijst.Count. However, this code does not handle the case where Voertuigenlijst.Count > Chauffeurlijst.Count; I'm sure you can figure out a good solution using this example, though. You may also want to handle the case where one of the lists is empty, as in the sketch below.
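For example, here is a rough sketch (reusing the question's db.Insert helper and planning variable) that guards against empty lists and also covers the case where Voertuigenlijst is the longer list:
// Nothing sensible to insert if either list is empty.
if (Chauffeurlijst.Count == 0 || Voertuigenlijst.Count == 0)
    return;

// Iterate over the longer list and clamp both indexes to their last element,
// so neither list is ever indexed out of range.
int total = Math.Max(Chauffeurlijst.Count, Voertuigenlijst.Count);
for (int i = 0; i < total; i++)
{
    var voertuig = Voertuigenlijst[Math.Min(i, Voertuigenlijst.Count - 1)];
    var chauffeur = Chauffeurlijst[Math.Min(i, Chauffeurlijst.Count - 1)];

    db.Insert("Insert into route (Voertuigen_ID,Chauffeurs_ID,Planning_ID) VALUES(@voertuigid,@chauffeurid,@planningid)",
        new List<KeyValuePair<string, object>>
        {
            new KeyValuePair<string, object>("@voertuigid", voertuig.ID),
            new KeyValuePair<string, object>("@chauffeurid", chauffeur.ID),
            new KeyValuePair<string, object>("@planningid", planning.ID)
        });
}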
When adding a derived column to a data flow with ezAPI, I get the following warnings:
"Add stuff here.Inputs[Derived Column Input].Columns[ad_zip]" on "Add stuff here" has usage type READONLY, but is not referenced by an expression. Remove the column from the list of available input columns, or reference it in an expression.
I've tried to delete the input columns, but either the method is not working or I'm doing it wrong:
foreach (Microsoft.SqlServer.Dts.Pipeline.Wrapper.IDTSInputColumn100 col in derFull.Meta.InputCollection[0].InputColumnCollection)
{
    Console.WriteLine(col.Name);
    derFull.DeleteInputColumn(col.Name);
}
I have the following piece of code that fixes the problem.
I got it from a guy called Daniel Otykier, so he is probably the one that should be credited for it... unless he got it from someone else :-)
static public void RemoveUnusedInputColumns(this EzDerivedColumn component)
{
    var usedLineageIds = new HashSet<int>();

    // Parse all expressions used in new output columns, to determine which input lineage ID's are being used:
    foreach (IDTSOutputColumn100 column in component.GetOutputColumns())
    {
        AddLineageIdsFromExpression(column.CustomPropertyCollection, usedLineageIds);
    }

    // Parse all expressions in replaced input columns, to determine which input lineage ID's are being used:
    foreach (IDTSInputColumn100 column in component.GetInputColumns())
    {
        AddLineageIdsFromExpression(column.CustomPropertyCollection, usedLineageIds);
    }

    var inputColumns = component.GetInputColumns();

    // Remove all input columns not used in any expressions:
    for (var i = inputColumns.Count - 1; i >= 0; i--)
    {
        if (!usedLineageIds.Contains(inputColumns[i].LineageID))
        {
            inputColumns.RemoveObjectByIndex(i);
        }
    }
}

static private void AddLineageIdsFromExpression(IDTSCustomPropertyCollection100 columnProperties, ICollection<int> lineageIds)
{
    int lineageId = 1;
    var expressionProperty = columnProperties.Cast<IDTSCustomProperty100>().FirstOrDefault(p => p.Name == "Expression");
    if (expressionProperty != null)
    {
        // Input columns used in expressions are always referenced as "#xxx" where xxx is the integer lineage ID.
        var expression = expressionProperty.Value.ToString();
        var expressionTokens = expression.Split(new[] { ' ', ',', '(', ')' });
        foreach (var c in expressionTokens.Where(t => t.Length > 1 && t.StartsWith("#") && int.TryParse(t.Substring(1), out lineageId)))
        {
            if (!lineageIds.Contains(lineageId)) lineageIds.Add(lineageId);
        }
    }
}
Simple but not 100% Guaranteed Method
Call ReinitializeMetaData on the base component that EzApi is extending:
dc.Comp.ReinitializeMetaData();
This doesn't always respect some of the customizations and logic checks that EzAPI has, so test it carefully. For most vanilla components, though, this should work fine.
100% Guaranteed Method But Requires A Strategy For Identifying Columns To Ignore
You can set the UsageType property of those VirtualInputColumns to the enumerated value DTSUsageType.UT_IGNORED using EzApi's SetUsageType wrapper method.
But! You have to do this after you're done modifying any of the other metadata of your component (attaching other components, adding new input or output columns, etc.) since each of these triggers the ReinitializeMetaData method on the component, which automatically sets (or resets) all UT_IGNORED VirtualInputColumn's UsageType to UT_READONLY.
So some sample code:
// define EzSourceComponent with SourceColumnToIgnore output column, SomeConnection for destination
EzDerivedColumn dc = new EzDerivedColumn(this);
dc.AttachTo(EzSourceComponent);
dc.Name = "Errors, Go Away";
dc.InsertOutputColumn("NewDerivedColumn");
dc.Expression["NewDerivedColumn"] = "I was inserted!";
// Right here, UsageType is UT_READONLY
Console.WriteLine(dc.VirtualInputCol("SourceColumnToIgnore").UsageType.ToString());
EzOleDbDestination d = new EzOleDbDestination(f);
d.Name = "Destination";
d.Connection = SomeConnection;
d.Table = "dbo.DestinationTable";
d.AccessMode = AccessMode.AM_OPENROWSET_FASTLOAD;
d.AttachTo(dc);
// Now we can set usage type on columns to remove them from the available inputs.
// Note the false boolean at the end.
// That's required to not trigger ReinitializeMetadata for usage type changes.
dc.SetUsageType(0, "SourceColumnToIgnore", DTSUsageType.UT_IGNORED, false);
// Now UsageType is UT_IGNORED and if you saved the package and viewed it,
// you'll see this column has been removed from the available input columns
// ... and the warning for it has gone away!
Console.WriteLine(dc.VirtualInputCol("SourceColumnToIgnore").UsageType.ToString());
I was having exactly your problem and found a way to solve it. The problem is that EzDerivedColumn does not have the PassThrough indexer defined in its class.
You just need to add this to the class:
private PassThroughIndexer m_passThrough;
public PassThroughIndexer PassThrough
{
    get
    {
        if (m_passThrough == null)
            m_passThrough = new PassThroughIndexer(this);
        return m_passThrough;
    }
}
And alter ReinitializeMetaDataNoCast() to this:
public override void ReinitializeMetaDataNoCast()
{
    try
    {
        if (Meta.InputCollection[0].InputColumnCollection.Count == 0)
        {
            base.ReinitializeMetaDataNoCast();
            LinkAllInputsToOutputs();
            return;
        }

        Dictionary<string, bool> cols = new Dictionary<string, bool>();
        foreach (IDTSInputColumn100 c in Meta.InputCollection[0].InputColumnCollection)
            cols.Add(c.Name, PassThrough[c.Name]);

        base.ReinitializeMetaDataNoCast();

        foreach (IDTSInputColumn100 c in Meta.InputCollection[0].InputColumnCollection)
        {
            if (cols.ContainsKey(c.Name))
                SetUsageType(0, c.Name, cols[c.Name] ? DTSUsageType.UT_READONLY : DTSUsageType.UT_IGNORED, false);
            else
                SetUsageType(0, c.Name, DTSUsageType.UT_IGNORED, false);
        }
    }
    catch { }
}
That is the strategy used by other components. If you want to see all the code, you can check my EzApi2016 repository on GitHub; I'm updating the original code from Microsoft to SQL Server 2016.
I have 20 fields on a form. How do I update the fields modified by the user at runtime, and how do I check which fields have changed, so that I can update only those values in the table using LINQ? I am working on a Windows application using C# and VS2010.
Please refer to the code below (currently I am passing all values, which I know is not the correct way):
private void UpdateRecord(string groupBoxname)
{
    using (SNTdbEntities1 context = new SNTdbEntities1())
    {
        {
            Vendor_Account va = new Vendor_Account();
            var Result = from grd in context.Vendor_Account
                         where grd.Bill_No == dd_billNo.Text
                         select grd;

            if (Result.Count() > 0)
                if ((dd_EditProjectName.Text != "Select") && (dd_billNo.Text != "Select") && (dd_editVendorName.Text != "Select"))
                {
                    foreach (var item in Result)
                    {
                        va.Account_ID = item.Account_ID;
                    }
                    va.Amount_After_Retention = Convert.ToDecimal(txt_AD_AfterRet.Text);
                    va.Balance = Convert.ToDecimal(txt_AD_Balance.Text);
                    va.Bill_Amount = Convert.ToDecimal(txt_AD_BillAmount.Text);
                    va.Bill_Date = Convert.ToDateTime(dt_AD_BillDate.Text);
                    va.Bill_No = dd_billNo.Text;
                    va.Comments = txt_AD_Comments.Text;
                    va.Paid_Till_Date = string.IsNullOrEmpty(txt_AD_Paid.Text) ? 0 : Convert.ToDecimal(txt_AD_Paid.Text);
                    va.Project_Name = dd_EditProjectName.Text;
                    va.Retention_Perc = Convert.ToDecimal(txt_retPerc.Text);
                    va.Amount_After_Retention = Convert.ToDecimal(txt_AD_AfterRet.Text);
                    va.Vendor_Name = dd_editVendorName.Text;
                    va.Vendor_Code = txt_AD_Code.Text;

                    context.Vendor_Account.ApplyCurrentValues(va);
                    //context.Vendor_PersonalInfo.AddObject(vpi);
                    context.SaveChanges();

                    MessageBox.Show("Information Updated Successfully!");
                    lbl_Warning.Text = "";
                    entityDataSource1.Refresh();
                }
                else
                {
                    MessageBox.Show("Vendor Name, Project Name and Bill No cannot be blank!!");
                }
        }
    }
}
Entity Framework will do that task for you.
Since you didn't provide any code, I cannot be precise about the answer, but please check these links:
http://msdn.microsoft.com/en-us/library/aa697427(v=vs.80).aspx, section: Manipulating Data and Persisting Changes
http://www.codeproject.com/KB/database/sample_entity_framework.aspx
Note that the SaveChanges() function will persist any modifications made to the records tracked by EF; a minimal sketch follows below.
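For example, here is a minimal sketch using the context and entity names from the question (purely illustrative, not your exact code): load the tracked entity, copy the form values onto it, and call SaveChanges(). The context compares each property with its original value and includes only the modified columns in the generated UPDATE.
using (SNTdbEntities1 context = new SNTdbEntities1())
{
    // Load the tracked entity for the selected bill.
    var va = context.Vendor_Account.FirstOrDefault(v => v.Bill_No == dd_billNo.Text);
    if (va != null)
    {
        // Assign the form values; only properties whose value actually
        // changes are marked as modified by the change tracker.
        va.Bill_Amount = Convert.ToDecimal(txt_AD_BillAmount.Text);
        va.Comments = txt_AD_Comments.Text;
        // ... assign the remaining form fields the same way ...

        // Generates an UPDATE containing only the modified columns.
        context.SaveChanges();
    }
}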
Create some duplicates of the fields, compare the value from the form element with the local duplicate, and if it has changed, update it; see the sketch below.
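A rough sketch of that idea, reusing the entity and control names from the question (which fields you track is up to you): take a snapshot of the values when the record is loaded into the form, then on save copy across only the values that differ.
// Snapshot taken when the record is loaded into the form.
private readonly Dictionary<string, string> originalValues = new Dictionary<string, string>();

private void LoadSnapshot()
{
    originalValues["Comments"] = txt_AD_Comments.Text;
    originalValues["BillAmount"] = txt_AD_BillAmount.Text;
    // ... one entry per editable field ...
}

private void SaveChangedFieldsOnly(Vendor_Account va)
{
    // Only copy values whose text differs from the snapshot.
    if (originalValues["Comments"] != txt_AD_Comments.Text)
        va.Comments = txt_AD_Comments.Text;
    if (originalValues["BillAmount"] != txt_AD_BillAmount.Text)
        va.Bill_Amount = Convert.ToDecimal(txt_AD_BillAmount.Text);
    // ... repeat for the other fields ...
}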