Mismatch between Socrata JSON and CSV metadata - socrata

There is a mismatch between the columns defined in the JSON metadata and the CSV data for a dataset.
For example, the metadata listing shows the columns - name, address1, address2, city, ...
https://data.montgomerycountymd.gov/api/views/ecam-8hbr.json
But the CSV listing has address1, address2, category, city, ...
https://data.montgomerycountymd.gov/resource/ecam-8hbr.csv?$limit=2&$offset=0

It's not that they don't match; the fields are just presented in different orders.
The CSV lists the columns in alphabetical order:
"address1","address2","category",...,"zip"
The JSON metadata, on the other hand, lists the fields in the order the dataset was originally defined (when the data was first uploaded). If you look at the dataset in the web UI, the fields appear in the same order as in the JSON metadata. Note that the location column in the dataset is broken out into component columns in the export.
"name","address1",...,"location","location_city","location_address","location_zip","location_state"
Both representations have the same 34 standard fields. The JSON metadata includes an additional 4 fields that appear to be computed and related to other datasets, which are not included in the CSV export.
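
To see the difference concretely, here is a minimal sketch in Python (assuming the requests library; Socrata's view metadata exposes each column's API name under the fieldName key) that prints both orderings for this dataset:

import csv
import io
import requests

DATASET = "ecam-8hbr"
META_URL = f"https://data.montgomerycountymd.gov/api/views/{DATASET}.json"
CSV_URL = f"https://data.montgomerycountymd.gov/resource/{DATASET}.csv?$limit=1"

# Column order as defined when the dataset was created (JSON metadata).
meta = requests.get(META_URL, timeout=30).json()
meta_columns = [col["fieldName"] for col in meta["columns"]]

# Column order in the CSV export (alphabetical).
csv_text = requests.get(CSV_URL, timeout=30).text
csv_columns = next(csv.reader(io.StringIO(csv_text)))

print("JSON metadata order:", meta_columns)
print("CSV header order:   ", csv_columns)
print("In metadata but not in the CSV:", sorted(set(meta_columns) - set(csv_columns)))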

Related

Data conversion from DT_STR to Currency to load into 2 destinations

I have a data flow task that loads data from a flat file source (Products.csv) and transfers it to:
Staging table: to save valid data
InvalidRows: a flat file destination to save invalid data
The data that I need to load are shown in the screenshot.
So, my Data Conversion component converts 2 attributes: ProductID and Price (as shown in the screenshot).
I'm sure that the result should be as follows:
Staging table: contains the two products where ID=4 and ID=6
InvalidRows: contains the product where ID=5
When executing, I get the following result:
"The value could not be converted because of a potential loss of
data."
which is caused by the price column that should be converted from DT_STR to DT_CY.
It's a conversion problem; I tried several things (Data Conversion / Derived Column), but without success.
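
Outside of SSIS, the failing conversion can be sketched like this in Python (the sample Price values are hypothetical). It only illustrates why a non-numeric currency string triggers "potential loss of data"; in SSIS the usual fix is to clean the string in a Derived Column, or redirect failing rows, before casting to DT_CY:

from decimal import Decimal, InvalidOperation

# DT_CY is a fixed-point currency type; Decimal stands in for it here.
rows = [
    {"ProductID": "4", "Price": "12.50"},    # converts fine
    {"ProductID": "5", "Price": "12,50 $"},  # symbol + wrong separator -> fails
    {"ProductID": "6", "Price": "7.00"},     # converts fine
]

valid, invalid = [], []
for row in rows:
    try:
        row["Price"] = Decimal(row["Price"])
        valid.append(row)                    # would go to the staging table
    except InvalidOperation:
        invalid.append(row)                  # SSIS would redirect these to InvalidRows

print("Staging table:", valid)
print("InvalidRows:  ", invalid)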

JSON in R: fetching data from JSON

I have a dataframe of more than 10000 customers with 4 columns: customer_id, x, y, z.
The x, y, z columns store data in JSON format, and I want to extract that data. Consider that the customers took a survey and answered different questions; some customers have less data inside these variables and some have more, but the variable names inside are the same. I want an output dataframe that contains customer_id and all the information available inside x, y, z.
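
The question asks about R, but here is a hedged, language-agnostic sketch of the flattening logic in Python/pandas (the column contents and JSON keys are made up); in R, jsonlite::fromJSON combined with tidyr/dplyr would play the same role.

import json
import pandas as pd

# Sketch: parse the JSON stored in x, y, z and expand it into flat columns.
# Missing keys simply become NaN for that customer.
df = pd.DataFrame({
    "customer_id": [1, 2],
    "x": ['{"q1": "yes", "q2": "no"}', '{"q1": "no"}'],   # uneven keys
    "y": ['{"score": 5}', '{"score": 3}'],
    "z": ['{"comment": "ok"}', '{}'],
})

parts = [df[["customer_id"]]]
for col in ["x", "y", "z"]:
    expanded = pd.json_normalize(df[col].apply(json.loads).tolist())
    parts.append(expanded.add_prefix(f"{col}_"))

result = pd.concat(parts, axis=1)
print(result)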

MS Access - split one text field dynamically into columns

I have an Excel file with 900+ columns that I need to import into Access on a regular basis. Unfortunately, I get the Excel file as is and can't change the data structure. The good news is I only need a few of those 900+ columns. Unfortunately, MS Access can't work with files of more than 255 columns.
So the idea is to import the file as a CSV, with all columns of each row in a single text field, and then use VBA in Access to split it apart again.
Question:
As I don't need all columns, I only want to keep some items. As input I have a list of column numbers to keep. The list is dynamic in the sense that it is user defined: there is a table with all the item numbers the user wants to have.
I can split the sourceTbl field relatively easily:
SELECT split(field1, vbTab) from sourceTbl
If I knew that I always needed to extract certain columns, I could probably write something like
SELECT getItem(field1, vbTab, 1), getItem(field1, vbTab, 4), ...
Where getItem would be a custom function that returns item number i. The problem is that which (and how many) columns to retrieve is not static; I read that dynamically from another table that lists the item numbers to keep.
Sample Data:
sourceTbl: field1 = abc;def;rtz;jkl;wertz;hjk
columnsToKeep: 1,4,5
Should output: abc, jkl, wertz
The Excel files have around 20k rows each, about 100 MB of data per file, and there are about 5 files per import. Filtered down to the needed columns, all imported data is about 50 MB.
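
Using the sample data above, here is a rough sketch of the "split and keep only the listed items" logic. In Access this would live in a small VBA function like the hypothetical getItem; Python here only illustrates the algorithm, and the delimiter is whatever the export actually uses (tab or semicolon):

def keep_items(line, columns_to_keep, delimiter=";"):
    parts = line.split(delimiter)
    # columnsToKeep is 1-based in the question, so shift by one;
    # out-of-range numbers are skipped rather than raising an error.
    return [parts[i - 1] for i in columns_to_keep if 0 < i <= len(parts)]

field1 = "abc;def;rtz;jkl;wertz;hjk"
columns_to_keep = [1, 4, 5]   # would normally be read from the columnsToKeep table

print(keep_items(field1, columns_to_keep))   # ['abc', 'jkl', 'wertz']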

Exporting empty csv in SSRS from data driven subscription

How do I remove the commas from an exported CSV file created from an SSRS (2012) data-driven subscription when the data returned is empty? The report is created daily and then exported as a CSV file. When the report returns no results, the exported CSV file consists of one line of commas only. When it does return results, it outputs a proper CSV file. The file needs to be generated daily regardless of the results, but it needs to be empty (without the commas) when no data is returned.
e.g. when data is returned:
Name1, Address1, City1, State1, Zip1
Name2, Address2, City2, State2, Zip2
e.g. when no data is returned I need it to be blank, but it currently creates:
,,,,
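
One way to picture a workaround, sketched here as a post-processing step that is not part of the original subscription setup (the file name is hypothetical): detect a file whose only content is a line of delimiters and truncate it.

import re
from pathlib import Path

csv_path = Path("daily_report.csv")   # hypothetical output of the subscription

text = csv_path.read_text(encoding="utf-8").strip()

# A file produced from an empty result set looks like ",,,," (commas only).
if re.fullmatch(r",*", text):
    csv_path.write_text("", encoding="utf-8")   # leave a truly empty file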

Matching Field Names option not working during import to FileMaker with CSV

I have created a product table with the field names product_id, prod_name, and description. But while importing from a CSV file, the fields in the CSV file are not matching.
Can anyone tell me how to achieve this?
CSV files don't include field names, so you can't use the Matching Names option.
But you can drag the fields up and down so that they match up, and then in a script you can import by last order.
I found that checking the box marked "Don't import first record (contains field names)" tells FM that the first row contains field names, and then the "matching names" option for Arrange by becomes available. I'm in FM Pro 16 and using MS Excel (.xlsx) format to import.