Dynamically create a CSV file with FileHelpers

FileHelpers supports a feature called "RunTime Records" which lets you read a CSV file into a DataTable when you don't know the layout until runtime.
Is it possible to use FileHelpers to create a CSV file at runtime in the same manner?
Based on some user input, the CSV file that must be created will have different fields that can only be known at runtime. I can create the needed Type for the FileHelper engine as described in their reading section, but I can't figure out what format my data needs to be in to be written.
var engine = new FileHelpers.FileHelperEngine(GenerateCsvType());
engine.WriteStream(context.Response.Output, dontKnow);
EDIT
Alternatively, can anyone suggest a good CSV library that can create a CSV file without knowing its fields until runtime? For example, create a CSV file from a DataTable.

In fact, the library currently only supports runtime records for reading, but for writing purposes you can use the DataTableToCsv method like this:
CsvEngine.DataTableToCsv(dt, filename);
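For instance, if you build the runtime layout into a DataTable first, a minimal sketch could look like the following (the column names and output path are made up for illustration):
using System.Data;
using FileHelpers;

// Build a DataTable whose columns are only known at runtime
var dt = new DataTable("Customers");
dt.Columns.Add("Name", typeof(string));     // hypothetical runtime column
dt.Columns.Add("Amount", typeof(decimal));  // hypothetical runtime column
dt.Rows.Add("ClientA", 20000.00m);
dt.Rows.Add("ClientB", 30000.00m);

// Writes the DataTable out as a comma-delimited CSV file
CsvEngine.DataTableToCsv(dt, @"C:\temp\customers.csv");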
Let me know if that helps.

I know this is an old question, but I ran into the same issue myself and spent some time looking for a solution, so I decided to share my findings.
If you are using FileHelpers RunTime Records to create your definition, you can populate that same definition using reflection.
For example, if you create a definition:
DelimitedClassBuilder cb = new DelimitedClassBuilder("Customers", ",");
cb.AddField("StringField", "string");
Type t = cb.CreateRecordClass();
FileHelperEngine engine = new FileHelperEngine(t);
Now you can use the same type created by FileHelpers to populate your values as follows:
object customClass = Activator.CreateInstance(t);
System.Reflection.FieldInfo field = customClass.GetType().GetField("StringField", System.Reflection.BindingFlags.Public | System.Reflection.BindingFlags.Instance);
if (field != null)
{
    field.SetValue(customClass, "StringValue");
}
And then write it to a file or a string:
string line = engine.WriteString(new object[] { customClass });
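Putting the pieces together, a rough end-to-end sketch might look like this; the field names are hypothetical stand-ins for whatever your user input produces, and the DelimitedClassBuilder namespace depends on your FileHelpers version:
using System;
using System.Collections.Generic;
using FileHelpers;
using FileHelpers.Dynamic;   // DelimitedClassBuilder; older versions use FileHelpers.RunTime instead

// Field names known only at runtime (hypothetical values standing in for user input)
var fieldNames = new List<string> { "Name", "City", "Amount" };

var cb = new DelimitedClassBuilder("DynamicRecord", ",");
foreach (var name in fieldNames)
    cb.AddField(name, "string");          // treating every field as a string keeps the sketch simple
Type recordType = cb.CreateRecordClass();

var engine = new FileHelperEngine(recordType);

// Create a record and fill its public fields via reflection
object record = Activator.CreateInstance(recordType);
recordType.GetField("Name").SetValue(record, "ClientA");
recordType.GetField("City").SetValue(record, "Montreal");
recordType.GetField("Amount").SetValue(record, "20000.00");

// WriteString returns the CSV text; WriteFile takes a path instead
string csv = engine.WriteString(new object[] { record });
Console.WriteLine(csv);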


ClosedXML data validation for allowing only numbers

I want to add data validation such that it allows only numbers (ClosedXML).
I think you can do it in memory, but you can also use Excel's built-in "Data Validation" feature.
If you rely on Excel's data validation, though, you need to be sure that the data you insert programmatically really are numbers only.
Alternatively, you can create an Excel template file with the data validation already set up, then use ClosedXML to load that template and insert the data.
Basically, to load the template you can use this code (C#):
var workbook = new XLWorkbook("BasicTable.xlsx");
Info at: https://github.com/ClosedXML/ClosedXML/wiki/Loading-and-Modifying-Files
For Excel's data validation, here are some examples:
https://www.got-it.ai/solutions/excel-chat/excel-tutorial/data-validation/data-validation-how-to-allow-numbers-only
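If you would rather do it entirely in code instead of a template, ClosedXML can also add the validation itself. Treat the following as a sketch: the range A1:A100 and the limits are arbitrary, and the exact data-validation API has changed between ClosedXML versions (newer releases use CreateDataValidation() rather than SetDataValidation()):
using ClosedXML.Excel;

var workbook = new XLWorkbook();
var ws = workbook.Worksheets.Add("Data");

// Restrict A1:A100 to whole numbers between 0 and 1,000,000
// (use .Decimal instead of .WholeNumber if fractional values should be allowed)
var validation = ws.Range("A1:A100").SetDataValidation();
validation.WholeNumber.Between(0, 1000000);
validation.ErrorTitle = "Invalid value";
validation.ErrorMessage = "Only numbers are allowed.";

workbook.SaveAs("BasicTable.xlsx");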

Save api json to database using laravel

Is it possible to get data from an API URL and save it directly to the database when working with Laravel? The data I get from the URL is of the format {"name":"100KVA SUKAM Generator","level":"5.965"}.
Yes, you can create a table with a JSON-type field (or text) and keep the data there:
$table->json('data_from_api');
https://laravel.com/docs/5.2/migrations#writing-migrations
If you want to persist the data as regular columns, you can use mass assignment. First, convert the JSON to an array with json_decode and save the data like this:
$data = json_decode($jsonData, true);
Model::create($data);
Don't forget to add all the columns to the $fillable property of the model.

How to use ReplaceRows from .NET Google.Apis.Fusiontables.v2 (stream csv)?

Goal: to update a Fusion Table by replacing old rows with new ones from a CSV file without headers, using ReplaceRows().
I am using the Google.Apis.Fusiontables.v2 library.
I have read and reread the documentation, but still can't get my code working.
Authentication is working and I am able to perform simple INSERTs without issue:
string sql = "INSERT INTO 11t9VLt3vzb46oGQMaS2LTSPWUyBYNcfi1shkmvag (rpu_id, NO_BAIL, 'Usage (description)', 'Use (description)', 'Sup. louable m2', 'Sup. Utilisable m2', 'SumTotal Lou', 'Percent Lou', 'SumTotal Util', 'Percent Util') VALUES (9999,1111,'Test','Test En',1,2,3,4,5,6)"
Sqlresponse sqlRspnse = service.Query.Sql(sql).Execute();
I have tried using ReplaceRowsMediaUpload directly from the TableResource class without luck.
Calling the upload function from the service object doesn't error out, but I'm not sure what to do next that would actually replace the rows in the Fusion Table (service is a FusiontablesService):
StreamReader str = new StreamReader(Server.MapPath("~") + @"\sample2.csv");
service.Table.ReplaceRows("1X7JMLFy75uq20UnU6cLrGTTDfp6lLuD1Fc3vYYjQ", str.BaseStream, "text/csv").Upload();
I've tried:
service.Table.ReplaceRows("1X7JMLFy75uq20UnU6cLrGTTDfp6lLuD1Fc3vYYjQ").Execute()
following the upload, but this just puts the Fusion table in "stuck" mode.
Can someone please provide the lines required to make ReplaceRows work? (Explanations would be appreciated, but aren't necessary!).
You should change "text/csv" to "application/octet-stream". (See the accepted MIME types here: https://developers.google.com/fusiontables/docs/v2/reference/table/replaceRows)
StreamReader str = new StreamReader(Server.MapPath("~") + @"\sample2.csv");
service.Table.ReplaceRows("1X7JMLFy75uq20UnU6cLrGTTDfp6lLuD1Fc3vYYjQ", str.BaseStream, "application/octet-stream").Upload();
The call to Upload should be enough.
Also, try creating a new table to test it out, to be sure your setup is correct.
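For reference, here is a slightly fuller sketch of the same call (same service object and table ID as above) that checks the result of the upload; it assumes the standard Google.Apis upload types, where Upload() returns an IUploadProgress exposing Status and Exception:
using (var stream = System.IO.File.OpenRead(Server.MapPath("~") + @"\sample2.csv"))
{
    var request = service.Table.ReplaceRows(
        "1X7JMLFy75uq20UnU6cLrGTTDfp6lLuD1Fc3vYYjQ", stream, "application/octet-stream");

    // Upload() blocks until the upload finishes and reports how it went
    var progress = request.Upload();

    if (progress.Status != Google.Apis.Upload.UploadStatus.Completed)
    {
        // progress.Exception explains why the replace failed
        throw progress.Exception ?? new Exception("ReplaceRows upload did not complete.");
    }
}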
You can use a REST API call to replace the rows in your Google Fusion Table directly instead of writing methods to do that. Here is an example:
POST https://www.googleapis.com/upload/fusiontables/v2/tables/tableId/replace
Please refer to this document for more details; it also has a testing environment tool.

Populate Derived Column with File's Date Modified

I'm new to .NET and SQL and am working on an SSIS package that is pulling data from flat files and inputting it into a SQL table. The part that I need assistance with is getting the Date Modified of the files and populating a derived column I created in that table with it. I have created the following variables: FileDate of type DateTime, FilePath of type String, and SourceFolder of type String for the path of the files. I was thinking that the Date Modified could be populated in the derived column within the Data Flow, using a Script Component? Can someone please advise on whether I'm on the right track? I appreciate any help. Thanks.
A Derived Column Transformation can only work with Integration Services expressions. A Script Component would allow you to access the .NET libraries, and you would want to use the method that @wil kindly posted or go with the static methods in System.IO.File.
However, I don't believe you would want to do this in a Data Flow Task. SSIS would have to evaluate that code for every row that flows through from the file. On a semi-related note, you cannot write to a variable until the ... event is fired to signal the data flow has completed (I think it's OnPostExecute, but don't quote me), so you wouldn't be able to use said variable in a downstream Derived Column at any rate. You would, of course, just modify the data pipeline to inject the file modified date at that point.
What would be preferable, and perhaps your intent, is to use a Script Task prior to the Data Flow Task to assign the value to your FileDate variable. Inside your Data Flow, then use a Derived Column to add the @FileDate variable into the pipeline.
// This code is approximate. It should work but it's only been parsed by my brain
//
// Assumption:
// SourceFolder looks like a path x:\foo\bar
// FilePath looks like a file name blee.txt
// SourceFolder [\] FilePath is a file that the account running the package can access
//
// Assign the last mod date to FileDate variable based on file system datetime
// Original code, minor flaws
// Dts.Variables["FileDate"].Value = File.GetLastWriteTime(System.IO.Path.Combine(Dts.Variables["SourceFolder"].Value,Dts.Variables["FilePath"].Value));
Dts.Variables["FileDate"].Value = System.IO.File.GetLastWriteTime(System.IO.Path.Combine(Dts.Variables["SourceFolder"].Value.ToString(), Dts.Variables["FilePath"].Value.ToString()));
Edit
I believe something is amiss with either your code or your variables. Do your values approximately line up with mine for FilePath and SourceFolder? Variables are case sensitive but I don't believe that to be your issue given the error you report.
This is the full script task. You can see from the screenshot below that the design-time value for FileDate is 2011-10-05 09:06, while the run-time value (Locals window) is 2011-09-23 09:26:59, which is the last modified date of the c:\tmp\witadmin.txt file.
using System;
using System.Data;
using Microsoft.SqlServer.Dts.Runtime;
using System.Windows.Forms;

namespace ST_f74347eb0ac14a048e9ba69c1b1e7513.csproj
{
    [System.AddIn.AddIn("ScriptMain", Version = "1.0", Publisher = "", Description = "")]
    public partial class ScriptMain : Microsoft.SqlServer.Dts.Tasks.ScriptTask.VSTARTScriptObjectModelBase
    {
        enum ScriptResults
        {
            Success = Microsoft.SqlServer.Dts.Runtime.DTSExecResult.Success,
            Failure = Microsoft.SqlServer.Dts.Runtime.DTSExecResult.Failure
        };

        public void Main()
        {
            Dts.Variables["FileDate"].Value = System.IO.File.GetLastWriteTime(
                System.IO.Path.Combine(
                    Dts.Variables["SourceFolder"].Value.ToString(),
                    Dts.Variables["FilePath"].Value.ToString()));
            Dts.TaskResult = (int)ScriptResults.Success;
        }
    }
}
C:\tmp>dir \tmp\witadmin.txt
Volume in drive C is Local Disk
Volume Serial Number is 3F21-8G22
Directory of C:\tmp
09/23/2011 09:26 AM 670,303 witadmin.txt

SSIS - Is there a Data Flow Source component that will handle CSV files where the column order may change?

We have written a number of SSIS packages that import data from CSV files using the Flat File Source.
It now seems that after these packages are deployed into production, the providers of these files may deliver files where the column order of the files changes (Don't ask!). Currently if this happens, our packages will fail.
For example, an additional column is inserted at the beginning of each row. In this case, the flat file source continues to use the existing column order, which obviously has a detrimental effect on the transformation!
E.g., using a trivial example, the original file has the following content:
OurReference,Client,Amount
235,ClientA,20000.00
236,ClientB,30000.00
The output from the flat file source is:
OurReference Client Amount
235 ClientA 20000.00
236 ClientB 30000.00
Subsequently, the file delivered changes to:
OurReference,ClientReference,Client,Amount
235,A244,ClientA,20000.00
236,B222,ClientB,30000.00
When the existing, unchanged package is run against this file, the output from the flat file source is:
OurReference Client Amount
235 A244 ClientA,20000.00
236 B222 ClientB,30000.00
Ideally, we would like to use a data source that will cope with this problem, i.e. one which produces output based on the column names instead of the column order.
Any suggestions would be welcomed!
Not that I know of.
One possibility, to check for the problem in advance, is to set up two different connection managers, one of which reads the entire row as a single column. That one can read the first row, tell whether the layout is OK or not, and abort.
If you want to do the work, you can take it a step further and make that single-column connection manager the only one, and use a Script Component in your flow to parse each row and assign values to the columns you need later in the flow.
As far as I know, there is no way to dynamically add columns to the flow at runtime, so all the columns you need will have to be added to the Script Component's output. Whether they can be found and parsed from each line is up to you. Any "new" (i.e. unanticipated) columns cannot be used; for columns that are missing you could supply defaults or throw an exception.
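To make the name-based approach concrete, here is a rough sketch of what the Script Component could look like. These are my assumptions, not part of the question: the single input column is named Column0, Output 0 is asynchronous (SynchronousInputID set to None) with string output columns OurReference, Client and Amount already added, the header row reaches the script because "column names in the first data row" is unchecked, and no field contains an embedded comma:
using System.Collections.Generic;

public class ScriptMain : UserComponent
{
    private Dictionary<string, int> _map;   // header name -> position in the current file

    public override void Input0_ProcessInputRow(Input0Buffer Row)
    {
        string[] parts = Row.Column0.Split(',');   // naive split: no quoted-field handling

        if (_map == null)
        {
            // First row is the header: remember where each named column lives
            _map = new Dictionary<string, int>();
            for (int i = 0; i < parts.Length; i++)
                _map[parts[i].Trim()] = i;
            return;   // the header row is not copied to the asynchronous output
        }

        Output0Buffer.AddRow();
        Output0Buffer.OurReference = parts[_map["OurReference"]];
        Output0Buffer.Client = parts[_map["Client"]];
        // Default (or throw) when an expected column is missing from this file
        Output0Buffer.Amount = _map.ContainsKey("Amount") ? parts[_map["Amount"]] : string.Empty;
    }
}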
A final possibility is to use the SSIS object model to modify the package before running it, to alter the connection manager, or even to write the entire package dynamically using the object model based on an inspection of the input file. I have done quite a bit of package generation in C# using templates and then adding information based on metadata I obtained from master files describing the mainframe files.
The best approach would be to run a check before the SSIS package imports the CSV data. This may have to be an external script/application, because I don't think you can manipulate the data within MS Business Intelligence Development Studio.
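As a sketch of that idea, a small external console check (the expected header below is just an example) could compare the incoming file's first line with the layout the package expects and signal a failure through its exit code, which an Execute Process Task can then act on:
using System;
using System.IO;
using System.Linq;

class HeaderCheck
{
    static int Main(string[] args)
    {
        // args[0] is the path to the incoming CSV; the expected layout is hard-coded for illustration
        string[] expected = { "OurReference", "Client", "Amount" };

        string header = File.ReadLines(args[0]).FirstOrDefault() ?? string.Empty;
        string[] actual = header.Split(',').Select(c => c.Trim()).ToArray();

        if (!expected.SequenceEqual(actual, StringComparer.OrdinalIgnoreCase))
        {
            Console.Error.WriteLine("Unexpected column layout: " + header);
            return 1;   // non-zero exit code -> fail the package before the import runs
        }

        return 0;
    }
}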
Here is a rough approach. I will write down the limitations at the end.
Create a flat file source. Put the entire row in one column.
Do not check "Column names in the first data row".
Create a Script Component
Code:
public override void Input0_ProcessInputRow(Input0Buffer Row)
{
    string sRow = Row.Column0;
    string sManipulated = string.Empty;

    string[] columns = sRow.Split(',');
    foreach (string column in columns)
    {
        sManipulated = string.Format("{0}{1}", sManipulated, column.PadRight(15, ' '));
    }

    /* Note: For the sake of demonstration I am padding to 15 chars. */
    Row.Column0 = sManipulated;
}
Create a flat file destination
Map Column0 to Column0
Limitation: I have arbitrarily padded each field to 15 characters. Points to consider:
1. Do we need each field to be the same size?
2. If yes, what is that size?
A generic way to handle that would be to create a table that stores the file name, the fields, and the field sizes.
Use the file name to dynamically create the source and destination connection managers.
Use the field name and corresponding field size to decide the padding. Not sure if you need this much flexibility. If you have any questions, please respond.
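If you do end up needing per-field sizes, a hedged variation of the script above might look like this; the widths are hard-coded here, but in practice they would be loaded from the metadata table described above:
// Hypothetical per-field widths, in file order; load these from your metadata table in practice
private static readonly int[] FieldWidths = { 15, 10, 12 };

public override void Input0_ProcessInputRow(Input0Buffer Row)
{
    string[] columns = Row.Column0.Split(',');
    string sManipulated = string.Empty;

    for (int i = 0; i < columns.Length; i++)
    {
        // Fall back to 15 characters if the file has more columns than configured widths
        int width = i < FieldWidths.Length ? FieldWidths[i] : 15;
        sManipulated += columns[i].PadRight(width, ' ');
    }

    Row.Column0 = sManipulated;
}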