Import CSV with paths to PDF files

I have a custom content type (product) containing a title, some text, and a file field (a PDF).
A lot of products have to be imported into the Drupal CMS. I discovered the "Feeds" module, which seems to fulfill my requirements.
I managed to import a CSV file containing a title and some text.
Now my CSV file also has a column containing the path to each PDF file (like C:\mypdfs\product1.pdf). How can I get Feeds to import those PDF files as well? In the "Mapping" configuration I'm not sure which target to select for the column that contains the path to a local file.

Using the Feeds Tamper module you can manipulate the value of the imported data for one target. You can build a custom tamper plugin and use it to process the retrieved value (the file path): use file_get_contents to read the file from the imported path and file_save_data to save it in Drupal, which returns a managed file object that you can attach to the entity.
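A minimal sketch of that approach for Drupal 7 (the callback signature and all names here are illustrative assumptions, not Feeds Tamper's exact plugin scaffolding; only file_get_contents and file_save_data are core API):

    <?php
    // Illustrative tamper-style callback: turn the CSV value (a local
    // file path) into a managed Drupal file.
    function mymodule_pdf_path_callback($result, $item_key, $element_key, &$field) {
      // $field holds the raw CSV value, e.g. "C:\mypdfs\product1.pdf".
      $path = str_replace('\\', '/', $field);   // Normalize Windows separators.
      $data = file_get_contents($path);
      if ($data === FALSE) {
        $field = NULL;                          // Missing or unreadable: skip it.
        return;
      }
      // Save into the public file system. Returns a managed file object
      // (fid, uri, ...) that can be mapped onto the product's file field.
      $file = file_save_data($data, 'public://' . basename($path), FILE_EXISTS_RENAME);
      $field = $file ? $file : NULL;
    }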

Related

How to use Google Dataprep for several files located in Google Cloud Storage?

I imported a text file from GCS, did some preparation using Dataprep, and wrote the result back to GCS as a CSV file. What I want to do is repeat this for all the text files in that bucket. Is there a way to process all the files in the bucket at once?
Here is my procedure: I selected a text file from GCS (I can't select more than one), did some preparation (renaming columns, creating new columns, etc.), then wrote it back to GCS as CSV.
You can use the Dataset with parameters feature to load several files at once.
You can then use a wildcard to select all the files that you want to load.
Note that all the files need to have the same schema (same columns) for this to work.
See https://cloud.google.com/dataprep/docs/html/Create-Dataset-with-Parameters_118228628 for more information on how to use this feature.
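For example (bucket and prefix are hypothetical), pointing a parameterized dataset at gs://my-bucket/imports/*.txt would load every text file under that prefix as a single dataset.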
Another solution is to put all the files in the same folder (technically, under the same prefix on GCS) and use the large + button to load every file in that folder.

Export BigQuery table to GCS as CSV or JSON generates files of type "file"

After running a query and saving the results into a table, I went on to export its content to a GCS bucket. In the table view, I clicked Export.
Because the table was bigger than 1 GB, I used a wildcard URI:
bucketname/all_years*
Then, because I wanted both CSV and JSON, I specified CSV as the export format, started the export, and repeated the process for JSON.
I didn't check whether the files inside the bucket had extensions (I deleted the bucket right away due to costs, but as far as I remember they weren't .csv / .json there either). Once I downloaded the content from the bucket to my Windows machine, I got files of type "file".
To work around this I had to open each file's properties, add .csv / .json, and click OK.
Why did I get files of type "file" even though I specified CSV and JSON as the export formats?
Setting the Export Format controls the file format but not the file extension; you should set the extension explicitly.
So instead of bucketname/all_years*, use bucketname/all_years*.csv or bucketname/all_years*.json.
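The same rule applies when exporting from the command line; a sketch with the bq tool (dataset, table, and bucket names are placeholders):

    # One export per format, with the extension spelled out in the wildcard URI.
    bq extract --destination_format=CSV \
        mydataset.all_years 'gs://bucketname/all_years_*.csv'
    bq extract --destination_format=NEWLINE_DELIMITED_JSON \
        mydataset.all_years 'gs://bucketname/all_years_*.json'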

How to rename a file, using exiftool, to a new name contained in a CSV import file

Using exiftool, I have exported image file data to a CSV file. I have manipulated that data and now I want to import the new values back into the original files. Everything works as I would expect, except that I also want to rename the images to the new file names contained in the CSV file (those names were generated manually as well as by programs; they could not be generated by a rule). I know how to rename files using data that is in the source image file (e.g. I've found advice on how to incorporate the camera model name into the file name), but I don't know how to rename the source image file to the name that I have specified in the CSV file.
It would be simple enough to do this renaming separately from exiftool, but I'm curious to know whether exiftool can do it. It seems to be able to do pretty much anything else.
According to this thread, Phil Harvey (creator of exiftool) says it isn't possible to rename files from a CSV file. It's "a feature to prevent people from messing up their files unintentionally."
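Doing the renaming separately is indeed simple; for instance, a minimal sketch in PHP (the file names.csv and its column layout, current name then new name, are assumptions):

    <?php
    // Rename files according to a CSV; nothing exiftool-specific here.
    $fh = fopen('names.csv', 'r');
    fgetcsv($fh);                        // Skip the header row (assumed present).
    while (($row = fgetcsv($fh)) !== FALSE) {
      list($old, $new) = $row;           // Columns assumed: old name, new name.
      if (!rename($old, $new)) {
        echo "Could not rename $old\n";
      }
    }
    fclose($fh);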

Creating a CSV file with the Report Generation Toolkit in LabVIEW

I want to create .csv files with the Report Generation Toolkit in LabVIEW.
They must actually be .csv files that can be opened with Notepad or something similar.
Creating a .csv is not that hard; it's just a matter of adding the extension to the file name that's going to be created.
If I create a .csv file this way it opens nicely in Excel, just the way it should, but if I open it in Notepad it shows all kinds of characters and doesn't come close to the data I wrote to the file.
I create the files with the LabVIEW code below:
Link to image (I can't post an image yet because I have too few points)
I know .csv files can be created with the Write to Spreadsheet VI, but I would like to use the Report Generation Toolkit because it's pretty easy to add columns and rows to the file, and that is something I really need.
You can use the Robust CSV package from the lavag.org forum to read and write 2D arrays to CSV files.
http://lavag.org/files/file/239-robust-csv/
Calling a file "csv" does not make it a CSV file. I've never used the toolkit to generate an Excel file, but I assume it creates an XLS or XLSX file regardless of what extension you give it, which is why you're seeing gibberish (probably XLS, since it's been around for a while and I believe XLSX is XML, not binary).
I'm not sure what your problem is with the Write to Spreadsheet VI. It has an append input, so I assume you can use that to at least add rows directly to a file, although I can't say I've ever tried it. I would prefer handling all the data in memory explicitly, where you can easily use the array functions to add rows or columns to the array and then overwrite the entire file.

SSIS adding XML to a fixed-width flat file

I have a package that looks for new files and parses them into their respective tables. Since the filenames include a date, I have to search for a specific string that identifies each file instead of using a static filename. Everything works, except that using this dynamic method of finding the file seems to add a ton of XML formatting to the file, which is then inserted as records into the database. When I point the flat file connection manager at the file directly by its name, this problem does not happen. Any ideas?