Upload data to MapQuest DMv2 through a CSV file using a Data Manager API call

I need to upload data to MapQuest DMv2 through a CSV file. After going through the documentation, I found the following syntax for uploading data:
http://www.mapquestapi.com/datamanager/v2/upload-data?key=[APPLICATION_KEY]&inFormat=json&json={"clientId": "[CLIENT_ID]","password": "[REGISTRY_PASSWORD]","tableName": "mqap.[CLIENT_ID]_[TABLENAME]","append":true,"rows":[[{"name":"[NAME]","value":"[VALUE]"},...],...]}
This is fair enough if I want to put individual rows in rows[], but there is no mention of the procedure for uploading data through a CSV file, even though the documentation clearly states that "CSV, KML, and zipped Shapefile uploads are supported". How can I achieve this through the Data Manager API service?

Use a multipart POST to upload the CSV instead of the rows. You can see it working here.

I used curl to accomplish that. Here is an example curl.exe command line. You can call it from a batch file, or, in my case, from a C# program.
curl.exe -F clientId=XXXXX -F password=XXXXX -F tableName=mqap.XXXXX_xxxxx -F append=false --referer http://www.mapquest.com -F "file=@C:\\file.csv" "http://www.mapquestapi.com/datamanager/v2/upload-data?key=KEY&ambiguities=ignore"
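If you want to script the same call, here is a rough bash sketch of the equivalent upload; the key, table name, and file path are placeholders to substitute with your own values (note the @ prefix on the file argument, which is what tells curl to send the file's contents):

#!/bin/sh
# Placeholder credentials -- replace with your own values.
KEY="YOUR_APP_KEY"
CLIENT_ID="XXXXX"
PASSWORD="XXXXX"
TABLE="mqap.${CLIENT_ID}_mytable"
# Multipart POST of the CSV to the Data Manager upload endpoint.
curl -F "clientId=${CLIENT_ID}" \
     -F "password=${PASSWORD}" \
     -F "tableName=${TABLE}" \
     -F append=false \
     --referer http://www.mapquest.com \
     -F "file=@/path/to/file.csv" \
     "http://www.mapquestapi.com/datamanager/v2/upload-data?key=${KEY}&ambiguities=ignore"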

Related

Published https://docs.google.com/spreadsheets URL redirects to another URL (CSV data)

We auto-publish a Google Docs spreadsheet (one tab as CSV). Google Docs provides a fixed URL that refers to the CSV. We import this CSV into another tool for product data import.
Suddenly this URL is redirected by Google Spreadsheet. If we go to "File/Publish To The Internet" again, we get the same URL for that CSV.
Question: How can we get the URL without the redirection?
Error: Source file
https://docs.google.com/spreadsheets/d/e/2PACX-1vTQsBEmvOwFwxORMqYg2N6LzzYqdqsdDCjxqsdqsdH72gdMCP4xrs1lsN37RO4h1-rjJsQ/pub?gid=501162839&single=true&output=csv doesn't exist (HTTPS : File not found ! (HTTP/1.0 307 Temporary Redirect)). Please check the source file path.
In short, the collection process needs to follow the Location header. Depending on how you're getting the CSV, this might be simple or a pain. I collect CSVs using curl, so just adding the -L switch is sufficient to make sure the incoming files are the CSV we're looking for instead of the HTML we were getting without -L. Without knowing what utility or process you're using to download the CSV, I can't be more specific, unfortunately.
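For example, a minimal curl invocation that follows the redirect and saves the published CSV looks like this (the output filename and the <ID>/<GID> placeholders stand in for your own published URL):

curl -L -o products.csv "https://docs.google.com/spreadsheets/d/e/<ID>/pub?gid=<GID>&single=true&output=csv"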

Load thousands of JSON files into BigQuery

I have around 10,000 JSON files, and I want to load them into BigQuery. As BQ only accepts NDJSON, I spent hours searching for a solution, but I can't find an easy and clean way to convert all the files to NDJSON.
I tested cat test.json | jq -c '.[]' > testNDJSON.json and it works well to convert a file, but how to convert all the files at once?
Right now, my ~10k files are in a GCP bucket, and weigh ~5 GB.
Thanks!
Did you come across Dataprep in your search? Dataprep can read data from Cloud Storage, help you format the data, and insert the data into BigQuery for you.
Alternatively, you can use a Cloud Dataflow I/O transform to deal with this automatically. See the link below for reference.
Hope this helps.
My suggestion is to use a Google-provided Cloud Dataflow template to transfer your files to BQ; you can use the one called Cloud Storage Text to BigQuery. It's important to consider the UDF function that transforms your JSON files.
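If you'd rather not set up Dataflow, the jq one-liner from the question can also be applied in bulk from a shell; this is a minimal sketch assuming gsutil, jq, and bq are available, with hypothetical bucket, dataset, and table names:

# Pull the JSON files down from the bucket.
mkdir -p raw ndjson
gsutil -m cp 'gs://my-bucket/*.json' raw/

# Apply the same jq conversion to every file.
for f in raw/*.json; do
  jq -c '.[]' "$f" > "ndjson/$(basename "$f")"
done

# Concatenate and load; NDJSON is what BigQuery expects.
cat ndjson/*.json > all.ndjson
bq load --source_format=NEWLINE_DELIMITED_JSON mydataset.mytable all.ndjson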

Get IFC schema version

Opening an *.ifc file, we can find FILE_SCHEMA in the header, for example:
HEADER;
...
FILE_SCHEMA (('IFC4'));
ENDSEC;
We are downloading an IFC file as a stream, and it would be nice to know the file schema version for it.
Is it somehow possible to get this information via the Data Management API?
This is already an old post, but just to mention, for those who download the file before any other operation: once downloaded, the following command can be used (in a Unix-like environment) to get exactly the IFC schema (e.g. "IFC2X3", "IFC4"):
grep "^FILE_SCHEMA" file.ifc | cut -d"'" -f2
Of course, this command can be integrated into a program written in Node.js, for example (using childProcess.exec), or any other programming language. Note that this is usually faster than streaming the file and searching in it, or even using a language-specific library to "grep" the file, especially for big IFC files.
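Given a header like the one quoted in the question, the command prints just the schema identifier:

$ grep "^FILE_SCHEMA" file.ifc | cut -d"'" -f2
IFC4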

Export RTP statistics from wireshark/tshark into XML or CSV

Basically, I would like to export Wireshark's analysis of RTP streams into CSV or XML format, to read it back in for some tests. I can do the following using tshark on the command line.
tshark -r rtp.pcap -q -z rtp,streams
Is there a way to specify an output file and its format? If there's a way to do this through Wireshark directly, that's welcome too.
Note: what I need to store is the overall statistics of all the streams, not the detailed ones for each stream.
You can save the output to a text file using the redirect operator, i.e. > output.txt. This is very basic and difficult to parse, but unfortunately there does not seem to be any way to control the format of the output. The -T -E -e combination outputs details from each packet, and the -w option outputs a raw file.
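Putting that together with the command from the question, a minimal invocation that captures the overall stream statistics to a file looks like this (output.txt is just an example name):

tshark -r rtp.pcap -q -z rtp,streams > output.txt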
Wireshark
Go to Telephony -> RTP -> Show All Streams
You can copy the values to the clipboard in CSV format.
See also Wireshark Wiki

Use text from CSV in Imagemagick

I have a CSV with information relating to images, such as names etc. Is there a way that I can use the CSV file's information to push to ImageMagick (via PHP on Windows) to add it to the images?
Sure. Check out http://php.net/manual/en/function.fgetcsv.php to get the CSV information into the script. Then, assuming you are using the PHP ImageMagick extension, just run the commands against the given image, or read the image in from some variable populated from the file. If you are not using the extension in PHP, you could try running ImageMagick through PHP's exec method: http://php.net/manual/en/function.exec.php
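As a rough illustration of the command-line side (the kind of command a PHP script could build and run through exec), here is a shell sketch; the CSV layout (filename,caption) and the file names are assumptions:

# data.csv is assumed to hold lines like: photo1.jpg,Sunset over the bay
while IFS=, read -r image caption; do
  # Draw the caption text onto the bottom edge of each image.
  convert "$image" -gravity South -pointsize 24 -fill white \
          -annotate +0+10 "$caption" "annotated_$image"
done < data.csv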