Appending data to SSIS flexible file storage

[SSIS] Is there any setting for the SSIS Flexible File Destination that supports appending data to an Apache Parquet file? I am writing my data to the file in a loop, and it looks like each write overwrites the existing data. Appreciate any help. Thanks
Writing data in a loop; expecting the data to be appended, but it is getting overwritten.

The Flexible File Task is the Azure equivalent of the File System Task, which handles copying/deleting files and folders.
The Flexible File Destination is the sink for writing Parquet files. Based on this Microsoft Q&A thread from 2 years ago, Append has been a requested feature for about 4 years and has not been implemented, so there is no setting to enable it:
https://learn.microsoft.com/en-us/answers/questions/226219/flexible-file-destination-enable-appending-data-mo

Related

Load CSV data as RDF using Ontorefine CLI

I'm trying to programmatically add a CSV file that's generated every day to a GraphDB repository. I have already created the CSV-to-RDF mapping using Ontorefine. How does one use the CSV and the mapping now to add RDF triples programmatically?
Use the open source CLI https://github.com/Ontotext-AD/ontorefine-client (that's probably what @aksanoble refers to).
Please note that the CLI is not yet available in Ontotext Refine 1.0 (which was split off from GraphDB), and will be available in September. In the meantime, you could use GraphDB 9.11.
We are working on extended ETL pipeline scenarios, including
Reuse of cleaning and transformation scripts between projects
Run all cleaning, transformation and RDF data update or download steps on a new dataset automatically
BTW, is your file stored locally or accessed through a URL? We have an idea to handle the latter case specially.
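If the immediate goal is just to get the RDF produced by the mapping into GraphDB programmatically, one option is to POST the serialized triples to the repository's RDF4J-compatible statements endpoint. A minimal sketch, assuming a local GraphDB at port 7200, a repository called myrepo, and a Turtle file named mapped-output.ttl (all placeholder names for your setup):
# Minimal sketch: add Turtle triples produced by the CSV-to-RDF mapping
# to a GraphDB repository via its RDF4J-compatible REST API.
# The base URL, repository id and file name below are assumptions.
import requests

GRAPHDB_URL = "http://localhost:7200"
REPOSITORY = "myrepo"

with open("mapped-output.ttl", "rb") as f:
    response = requests.post(
        f"{GRAPHDB_URL}/repositories/{REPOSITORY}/statements",
        data=f,
        headers={"Content-Type": "text/turtle"},
    )
response.raise_for_status()  # GraphDB returns 204 No Content on success
Wrapping the daily export plus a script like this in a cron job would cover the "generated every day" part of the question.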

Merits of JSON vs CSV file format while writing to HDFS for downstream applications

We are in the process of extracting source data (xls) and ingesting it into HDFS. Is it better to write these files in CSV or JSON format? We are contemplating choosing one of them, but before making the call we would like to know the merits and demerits of each.
Factors we are trying to figure out:
Performance (data volume is 2-5 GB)
Loading vs. reading data
How easy it is to extract metadata (structure) information from either of these formats.
The ingested data will be consumed by other applications which support both JSON and CSV.

Efficiently Aggregate Many CSVs in Spark

Pardon my simple question but I'm relatively new to Spark/Hadoop.
I'm trying to load a bunch of small CSV files into Apache Spark. They're currently stored in S3, but I can download them locally if that simplifies things. My goal is to do this as efficiently as possible. It seems like it would be a shame to have some single-threaded master downloading and parsing a bunch of CSV files while my dozens of Spark workers sit idly. I'm hoping there's an idiomatic way to distribute this work.
The CSV files are arranged in a directory structure that looks like:
2014/01-01/fileabcd.csv
2014/01-01/filedefg.csv
...
I have two years of data, with directories for each day, and a few hundred CSVs inside of each. All of those CSVs should have an identical schema, but it's of course possible that one CSV is awry and I'd hate for the whole job to crash if there are a couple problematic files. Those files can be skipped as long as I'm notified in a log somewhere that that happened.
It seems that every Spark project I have in mind is in this same form and I don't know how to solve it. (e.g. trying to read in a bunch of tab-delimited weather data, or reading in a bunch of log files to look at those.)
What I've Tried
I've tried both SparkR and the Scala libraries. I don't really care which language I need to use; I'm more interested in the correct idioms/tools to use.
Pure Scala
My original thought was to enumerate and parallelize the list of all year/mm-dd combinations so that I could have my Spark workers all processing each day independently (download and parse all CSV files, then stack them on top of each other (unionAll()) to reduce them). Unfortunately, downloading and parsing the CSV files using the spark-csv library can only be done in the "parent"/master job, and not from each child, as Spark doesn't allow job nesting. So that won't work as long as I want to use the Spark libraries to do the importing/parsing.
Mixed-Language
You can, of course, use the language's native CSV parsing to read in each file then "upload" them to Spark. In R, this is a combination of some package to get the file out of S3 followed by a read.csv, and finishing off with a createDataFrame() to get the data into Spark. Unfortunately, this is really slow and also seems backwards to the way I want Spark to work. If all my data is piping through R before it can get into Spark, why bother with Spark?
Hive/Sqoop/Phoenix/Pig/Flume/Flume Ng/s3distcp
I've started looking into these tailored tools and quickly got overwhelmed. My understanding is that many/all of these tools could be used to get my CSV files from S3 into HDFS.
Of course it would be faster to read my CSV files in from HDFS than S3, so that solves some portion of the problem. But I still have tens of thousands of CSVs that I need to parse and am unaware of a distributed way to do that in Spark.
So right now (Spark 1.4) SparkR has support for JSON or Parquet file formats. CSV files can be parsed, but then the Spark context needs to be started with an extra jar (which needs to be downloaded and placed in the appropriate folder; I've never done this myself but my colleagues have).
sc <- sparkR.init(sparkPackages="com.databricks:spark-csv_2.11:1.0.3")
sqlContext <- sparkRSQL.init(sc)
There is more information in the docs. I expect that a newer Spark release will have more support for this.
If you don't do this you'll need to either resort to a different file format or use Python to convert all your files from .csv into .parquet. Here is a snippet from a recent Python talk that does this.
from pyspark.sql import Row

# Read all the CSV files as plain text, with plenty of partitions.
data = sc.textFile(s3_paths, 1200).cache()

# Turn each split line into a Row; the column names are placeholders.
def caster(x):
    return Row(colname1=x[0], colname2=x[1])

df_rdd = data \
    .map(lambda x: x.split(',')) \
    .map(caster)

# Infer the schema and write the result out as Parquet.
ddf = sqlContext.inferSchema(df_rdd).cache()
ddf.write.save('s3n://<bucket>/<filename>.parquet')
Also, how big is your dataset? You may not even need Spark for analysis. Note also that, as of right now:
SparkR has only DataFrame support.
There is no distributed machine learning yet.
For visualisation you will need to convert a distributed DataFrame back into a normal one if you want to use libraries like ggplot2.
If your dataset is no larger than a few gigabytes, the extra bother of learning Spark might not be worthwhile yet.
It's modest now, but you can expect more in the future.
I've run into this problem before (though with reading a large quantity of Parquet files) and my recommendation would be to avoid DataFrames and to use RDDs.
The general idiom used was:
Read in a list of the files, with each file path being one line (in the driver). The expected output here is a list of strings.
Parallelize the list of strings and map over them with a custom CSV reader, with the return being a list of case classes.
You can also use flatMap if at the end of the day you want a data structure like List[weather_data] that could be written out to Parquet or a database.
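A rough PySpark version of that idiom (the original answer talks about Scala case classes, but the shape is the same). This is only a sketch, assuming the files live in S3 and boto3 credentials are available on the workers; the bucket name, prefix and parse_file helper are hypothetical:
# Sketch of the RDD idiom: list files in the driver, parallelize the keys,
# parse each file on the workers with a plain CSV reader, and flatMap the
# rows into one RDD. Bucket/prefix names are placeholders.
import csv
import io
import boto3
from pyspark import SparkContext

sc = SparkContext(appName="csv-aggregate")

# 1. Build the list of file keys in the driver (one string per file).
s3 = boto3.client("s3")
keys = [obj["Key"]
        for page in s3.get_paginator("list_objects_v2").paginate(
            Bucket="my-bucket", Prefix="2014/")
        for obj in page.get("Contents", [])]

# 2. Parse a single file; skip it (and log) if it is malformed.
def parse_file(key):
    try:
        body = boto3.client("s3").get_object(
            Bucket="my-bucket", Key=key)["Body"].read()
        return list(csv.reader(io.StringIO(body.decode("utf-8"))))
    except Exception as exc:
        print("skipping %s: %s" % (key, exc))  # shows up in executor logs
        return []

# 3. Parallelize the keys and flatMap the parsed rows into one RDD,
#    which can then be converted to a DataFrame or written to Parquet.
rows = sc.parallelize(keys, max(len(keys), 1)).flatMap(parse_file)
Because the parsing happens inside the map, the workers share the download/parse load, and a bad file only costs its own rows plus a log line rather than the whole job.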

Load Excel files into MySQL automatically [duplicate]

This question already has answers here:
Automate transfer of csv file to MySQL
(3 answers)
Closed 8 years ago.
I would like to know what would be the best way to automate the loading of an excel file into a mysql database.
The file would most likely be .csv, although if there is a solution for text files, I can live with that. The data in the file would have to replace what is already in the database table.
I am searching for a solution meanwhile, and have found several for doing approximately this manually, as in loading a file once, but I need this to happen every few minutes, if possible.
There is a native MySQL feature that allows importing a CSV file easily: LOAD DATA INFILE. All you need to do is declare your field- and line-separator correctly, if the default settings do not match your input file.
Please note that a CSV file is not an Excel file. It is a file format that Excel happens to be able to read.
If you really want to import Excel files (a .xlsx file, that is), then you need some external library to first parse the Excel file, as MySQL is not able to read it natively.
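To cover the "every few minutes" requirement, you can put the LOAD DATA statement in a small script and schedule it (cron, Windows Task Scheduler, etc.). A minimal Python sketch using mysql-connector-python; the connection details, table name and file path are all placeholders, and it assumes the server allows LOAD DATA LOCAL INFILE (local_infile enabled):
# Minimal sketch: reload a CSV into MySQL, replacing the current contents.
# Connection details, table name and file path are hypothetical.
import mysql.connector

conn = mysql.connector.connect(
    host="localhost", user="loader", password="secret",
    database="mydb", allow_local_infile=True)
cur = conn.cursor()

# The question asks for the file to replace what is in the table,
# so empty it first, then bulk-load the new file.
cur.execute("TRUNCATE TABLE my_table")
cur.execute("""
    LOAD DATA LOCAL INFILE '/data/export.csv'
    INTO TABLE my_table
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    LINES TERMINATED BY '\\n'
    IGNORE 1 LINES
""")
conn.commit()
cur.close()
conn.close()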

Indirection in SSIS

Is it possible to perform any sort of indirection in SSIS?
I have a series of jobs that perform FTP and loop through the files before running another DTSX package on them. Currently this incurs a lot of repeated cruft for pulling down the files and for logging.
Is there any way of redesigning this so I only need one package rather than 6?
Based on your comment:
Effectively the 6 packages are really 2 x 3. The 1st for each "group" is FTP pull
down and XML parsing to place into flat tables. The 2nd then transforms and
loads that data.
Instead of downloading files using one package and inserting data into tables using another package, you can do that in a single package.
Here is a link containing an example which downloads files from FTP and saves them to local disk.
Here is a link containing an example that loops through CSV files in a given folder and inserts the data into a database.
Since you are using XML files, here is a link that shows how to loop through XML files.
You can effectively combine the above examples into a single package by placing the control flow tasks one after the other.
Let me know if this is not what you are looking for.