Is it possible to specify the name of the output file in a Foundry transform? [duplicate] - palantir-foundry

This question already has answers here:
How can I have nice file names & efficient storage usage in my Foundry Magritte dataset export?
(3 answers)
Closed 6 months ago.
I have a PySpark transform in Palantir Foundry that's outputting to a csv file for export into other systems.
Currently, using the write_dataframe method, the output file gets a name like this:
spark/part-00002-cfba77d5-c6ce-4b2a-ac9a-59173c7ede5a-c000.snappy.csv
Is it possible to specify a filename, such as "my_export.csv"?

It's likely easier to accomplish this using an export task rather than via a transform. Some documentation on export tasks is available here, but it is described in more detail in the in-platform docs.
If you're using a file system or SFTP export task, there is an option to rewrite paths in the task config. For example,
rewritePaths:
  ".*": "my_export.csv"
would rename all files to my_export.csv. I wouldn't recommend doing exactly that, as you'll have a collision if there are multiple files, but you can also capture part of the existing file name and use it to make the renamed files unique:
rewritePaths:
  "^spark/(.*)": "my_export-$1.csv"
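For illustration, the rewrite rule behaves like a regular-expression substitution. A rough Python equivalent (the task config uses $1 where Python's re module uses \1):

```python
import re

# Rough Python equivalent of the export task's rewrite rule,
# for illustration only: the config's $1 corresponds to \1 here.
def rewrite_path(path: str) -> str:
    return re.sub(r"^spark/(.*)", r"my_export-\1.csv", path)

print(rewrite_path("spark/part-00002-cfba77d5.snappy.csv"))
# my_export-part-00002-cfba77d5.snappy.csv.csv
```

Note the doubled extension in the result: the original file name already ends in .csv, so you may want a pattern that captures only the stem.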

Related

How to use Google Dataprep for several files located in Google Cloud Storage?

I imported a text file from GCS, did some preparations using Dataprep, and wrote the result back to GCS as a CSV file. What I want to do is repeat this for all the text files in that bucket. Is there a way to do this for all the files in the bucket (in GCS) at once?
Here is my procedure: I selected a text file from GCS (it's not possible to select more than one), did some preparations (renaming columns, creating new columns, etc.), then wrote it back to GCS as CSV.
You can use the Dataset with parameters feature to load several files at once.
You can then use a wildcard to select all the files that you want to load.
Note that all the files need to have the same schema (same columns) for this to work.
See https://cloud.google.com/dataprep/docs/html/Create-Dataset-with-Parameters_118228628 for more information on how to use this feature.
Another solution is to add all the files into a folder* and use the large + button to load all the files in that folder.
[*] technically, under the same prefix on GCS

Write json object to file.json in assets using angular [duplicate]

This question already has answers here:
angular2 http.post() to local json file
(2 answers)
Closed 2 years ago.
I am using Angular to create an application and have a requirement to store a JSON object, currently held in a variable, to file.json located in src/app/assets.
I have searched a lot and have not found a way to do this.
Ask if you need any more information.
You cannot write files with Angular. Don't forget that the Angular app is not running in the directory structure you create. It's not even running on a server. It's running in the browser as compiled JavaScript. It has no direct write access to any filesystem.
If you need to write to server-side files in your application, you need some server-side code. This can be achieved, for example, with NodeJS (Express, NestJS…) if you want to stick with JavaScript. Either way, you can't write files directly with Angular.
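As a sketch of that server-side idea (using Python's standard library here rather than NodeJS, purely for illustration; the endpoint and file path are made up), the Angular app would POST the JSON object to a small backend that does the actual writing:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical helper: persists a JSON payload on the server side.
def save_json(payload: dict, path: str = "assets/file.json") -> None:
    with open(path, "w", encoding="utf-8") as f:
        json.dump(payload, f, indent=2)

class SaveHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON body the Angular app sent via HttpClient.post().
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        save_json(payload, "file.json")
        self.send_response(204)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), SaveHandler).serve_forever()
```

The point is only that the filesystem write happens in the backend process; the browser-side Angular code never touches the disk.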

How to bulk import documents with custom metadata from csv to Alfresco repo?

I have an Excel file (or CSV) that holds a list of documents with their properties and absolute paths on a local hard drive.
Now that we are going to use Alfresco (v5.0.d) as our DMS, I have already created a custom aspect that reflects the CSV fields, and I'm looking for a good approach to import all the documents from the CSV file into the Alfresco repository.
You could simply write a Java application to parse your CSV and upload the files one by one using the RESTful API. Don't forget to replicate the folder tree in your Alfresco repo (it is not recommended to have more than 1,000 folders/documents at the same level of the hierarchy, since that would require tweaking in a few non-trivial use cases).
To create the folder, refer to this answer.
To actually upload the files, refer to my answer here.
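If you go the scripted route, here's a rough sketch of the parsing half (in Python rather than Java, purely for brevity; the CSV column names are made up). Each resulting job would then be POSTed to the repository via the REST API as described in the linked answers:

```python
import csv
from pathlib import PurePosixPath

# Hypothetical CSV layout: path,title,docType
# (adjust the columns to your custom aspect's properties).
def read_manifest(csv_path: str) -> list:
    """Parse the manifest into one upload job per document."""
    with open(csv_path, newline="", encoding="utf-8") as f:
        jobs = []
        for row in csv.DictReader(f):
            jobs.append({
                "local_path": row["path"],
                # Mirror the local folder tree in the repository.
                "repo_folder": str(PurePosixPath(row["path"]).parent),
                "properties": {k: v for k, v in row.items() if k != "path"},
            })
        return jobs

# Each job would then be uploaded as multipart form data to the
# repository's upload web script, with the custom-aspect properties
# set afterwards; see the linked answers for the exact REST calls.
```

Keeping the parse step separate from the upload step makes it easy to dry-run the script against the CSV before touching the repository.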

BMC Remedy User 7.5: could I make a macro that can read a .csv file?

I need to create a macro in BMC Remedy User 7.5 that can read a CSV file and update all the items contained in it.
Is this possible?
I have to work through a large batch of items and edit their location.
Thank you
You can't do this with a macro, but you can use the Remedy Import Tool. It has the capability to automatically import a CSV file; it takes as inputs a mapping file and the CSV file (with full paths, of course).
Check out the guide titled "BMC Remedy Action Request System 7.6.04 Configuration Guide". You should find what you're looking for there.
Best of luck,
Mike

CSVDE export file-column order wrong?

I'm using CSVDE to export data from our active directory into a CSV file, which then gets imported into a database. I'm using the -l switch to specify the columns that I'd like to export, but they don't come out in the same order consistently. Is there a workaround for this that doesn't involve opening the file in Excel? This is a nightly batch process and we'd like it to run unattended.
Thanks!
If you simply want a command-line utility that can re-order the CSV (and do much else as well), take a look at my FOSS CSV stream editor, CSVfix.
Per the docs:
LDAP can return attributes in any order, and csvde does not attempt to impose any order on the columns.
How about writing a Python script to reorder the CSV file? You may find Python's csv module useful for this.
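For example, a small script along these lines (the column names are hypothetical; use the attributes you pass to the -l switch) could run unattended as a post-processing step in the nightly batch:

```python
import csv

# Rewrite a CSVDE export so the columns always come out in a fixed
# order, regardless of the order in which LDAP returned the attributes.
def reorder_csv(src: str, dst: str, column_order: list) -> None:
    with open(src, newline="") as fin, open(dst, "w", newline="") as fout:
        reader = csv.DictReader(fin)
        # extrasaction="ignore" silently drops any columns
        # that are not in the requested order list.
        writer = csv.DictWriter(fout, fieldnames=column_order,
                                extrasaction="ignore")
        writer.writeheader()
        for row in reader:
            writer.writerow(row)
```

Because DictReader keys rows by header name rather than position, the script works no matter how csvde happened to order the columns on a given night.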