I've got an audit log for Power BI that we are trying to import into our data warehouse. The issue is that we can't break the rows up under the correct headings with how it is currently formatted. Below is a photo of how it is downloaded. The main issue is that not every row has the same column headers in it; e.g. ItemName isn't in the last two rows of our data.
Downloaded Data: (image showing example data not lining up under the correct columns)
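In case it helps to show what normalizing these rows could look like, here is a minimal Python sketch, assuming each downloaded row has already been parsed into name/value pairs; the sample records, the column names other than ItemName, and the output file name are all made-up placeholders, not the actual export format:

```python
# Sketch: collect the union of the field names across all records, then write
# every record under that single header row, leaving missing values blank.
import csv

records = [
    {"Activity": "ViewReport", "ItemName": "Sales", "UserId": "a@x.com"},
    {"Activity": "ExportReport", "UserId": "b@x.com"},  # no ItemName here
]

# Union of all field names across the records, in first-seen order.
fieldnames = []
for rec in records:
    for key in rec:
        if key not in fieldnames:
            fieldnames.append(key)

with open("audit_normalized.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=fieldnames, restval="")
    writer.writeheader()
    writer.writerows(records)  # missing keys are written as blanks
```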
I have a CSV file that I am using as a source in SSIS. The file contains additional blank columns.
There are additional blank columns in S, U, and V; is there a way I can remove them through an SSIS Script Task before using the file as a source?
Perhaps I misunderstand the problem, but you have a CSV with blank columns that you'd like to import into SQL Server but you do not want to import the blank columns into your table.
So, don't create the target columns in your table. That's it. There's no problem with having "extra" data in your data pipeline; just don't land it.
The Flat File Connection Manager must* have a definition for all the columns. As it appears you do not have headers for the blank columns, and a column name is required, you will need to set up the Flat File Connection Manager as if the file has no header row and then skip 1 row so the header line itself is not treated as data. (It might be a "data starts on line" setting; I'm doing this from memory.) Specifying no header row means you need to provide the column names manually. I favor naming them something obvious like IgnoreBlankColumn_S, IgnoreBlankColumn_U, IgnoreBlankColumn_V; that way future maintainers will know this is an intentional design decision, since those source columns contain no data.
*You can write a query against a text file, which would allow you to pull in only specific columns, but this is not going to be worth the effort.
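If you do decide to pre-process the file outside the connection manager instead, here is a rough Python sketch of dropping entirely blank columns before SSIS ever reads the file; this is an alternative to the approach above, not part of it, and the file names are placeholders:

```python
# Sketch: remove columns that are blank in every row, writing a cleaned copy
# of the CSV for SSIS to consume.
import csv

with open("source.csv", newline="") as f:
    rows = list(csv.reader(f))

header, data = rows[0], rows[1:]

# Keep a column if its header is non-empty or any data row has a value in it.
keep = [
    i for i in range(len(header))
    if header[i].strip() or any(i < len(r) and r[i].strip() for r in data)
]

with open("cleaned.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for row in rows:
        writer.writerow([row[i] if i < len(row) else "" for i in keep])
```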
Here is my raw Excel data:
Here is my PivotTable to give you an idea of how I would like my JSON structure to be:
But when I convert from .XLSX to .CSV to .JSON and then load this file into Firebase, here is what my data looks like (from what I can see, a completely different structure to my PivotTable):
It looks like it has structured the data according to its row number. Any ideas please?
The Firebase Realtime Database stores JSON data. It doesn't store tables, nor rows and columns, nor spreadsheets.
So during your conversion process the data gets converted from the table you have in Excel to the most closely corresponding JSON structure, which means that each row in your table becomes a top-level node in the JSON, and each cell in that row becomes a property, with the column heading as the property name and the value from the cell as the value of that property.
If there was any code involved in this conversion, you will have to modify the code to generate the structure you want. If you've tried to do that but got stuck, edit your question to include the minimal, complete/standalone code with which we can easily reproduce the problem.
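As a rough illustration only (I don't know your actual column names or the grouping your PivotTable implies, so everything below is hypothetical), reshaping the CSV into a nested structure before loading it could look something like this in Python: group the rows by a key column instead of letting each row number become a node.

```python
# Hypothetical sketch: turn flat CSV rows into JSON nested by a chosen key
# column rather than keyed by row number. The column names ("Category",
# "Item", "Value") and file names are made-up placeholders.
import csv
import json

nested = {}
with open("data.csv", newline="") as f:
    for row in csv.DictReader(f):
        category = row["Category"]
        item = row["Item"]
        # Each category becomes a node; each item becomes a child property.
        nested.setdefault(category, {})[item] = row["Value"]

with open("data.json", "w") as f:
    json.dump(nested, f, indent=2)
```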
In Magento I am importing products in bulk. Every month I get a CSV of the whole data (all products). I want to upload only those products whose attribute values actually got changed.
For example:
If I have 5 products in Magento, I have a CSV with those 5 products. Of those 5 products, only 1 product's description got changed in the new CSV. So I want to import only that changed product.
If this is not possible, then can we get all changed products after the import?
Thanks.
Usually, you should get an incremental csv, with only the data that's changed.
One thing that you could potentially do is, after loading the product and setting new data from the CSV, you can use $product->dataHasChangedFor($field) to determine if the new data is different than the original one for the particular field.
To see more about how this works, you can check the implementation in Varien_Object. Basically Magento stores original data that's loaded separately, so it allows comparing it with the newly set data.
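To illustrate the incremental-CSV idea mentioned above (outside of Magento itself; this is only a generic Python sketch, and the "sku" key column and file names are assumptions about your data), you can diff this month's export against last month's and keep only the rows whose values changed:

```python
# Generic sketch: produce an incremental CSV containing only products whose
# values changed between two monthly exports.
import csv

def load(path, key="sku"):
    with open(path, newline="") as f:
        return {row[key]: row for row in csv.DictReader(f)}

old_rows = load("products_last_month.csv")
new_rows = load("products_this_month.csv")

# A row is "changed" if it is new or differs from last month's row.
changed = [row for sku, row in new_rows.items() if old_rows.get(sku) != row]

if changed:
    with open("products_changed.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=changed[0].keys())
        writer.writeheader()
        writer.writerows(changed)
```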
Cheers.
I want to create JasperReports reports based on dynamically generated CSV files.
The generated CSV file contains two different column headers. I want to omit the first header's details and only read values from the second header.
Refer to the attached image.
I want to skip the first 3 rows and read the CSV from the 4th or 5th row onwards.
Please note that I am unable to delete the first header. How can we meet this requirement to generate such a report?
The 'AMOUNT' datatype is BigDecimal, and several calculations are also performed based on 'AMOUNT'.
How can we view reports in iReport with this requirement?
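One possible workaround, sketched below in Python purely as an assumption (it is not a JasperReports feature), is to strip the first header block from the generated CSV before handing it to the report, so the CSV data source only ever sees the second header and the rows under it; the number of rows to skip and the file names are placeholders:

```python
# Sketch of a pre-processing workaround: drop the first few rows (the first
# header block) so the report input starts at the second header.
ROWS_TO_SKIP = 3  # adjust to 4 if the second header starts on the 5th row

with open("generated.csv", newline="") as src:
    lines = src.readlines()

with open("report_input.csv", "w", newline="") as dst:
    dst.writelines(lines[ROWS_TO_SKIP:])
```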
I am currently experiencing difficulties when trying to append data to existing tables.
I have about 100 CSV files that I would like to create a single table from; the files all have different column structures, but this isn't really an issue as the associated field names are in the first row of each file.
First, I create a new table from one of the files indicating that my field names are in the first row. I change the particular fields that have more than 256 characters to memo fields and import the data.
I then add to the table the fields that are missing.
Now, when I try to append more data, I again select that my field names are in the first row, but now I receive a truncation error for data that is destined for the memo fields.
Why is this error occurring? Is there a workaround for this?
Edit:
Here is an update regarding what I've attempted to solve the problem:
Importing and appending tables will not work unless they have the exact same structure. Moreover, you cannot create a Master table with all fields and properties set, then append all tables to the master. You still receive truncation errors.
I took CodeSlave's advice and attempted to upload the table, set the fields that I needed to be Memo fields, and then append the table. This worked, but again, the memo fields are not necessarily in the same order in every data file, and I have 1200 data files to import into 24 tables. Importing the data table by table is just NOT an option for this many tables.
I expect what you are experiencing is a mismatch between the source file (CSV) and the destination table (MS Access).
MS Access will make some guesses about what the field types are in your CSV file when you are doing the import. However, it's not perfect. Maybe it's seeing a string as a memo, or a float as a real. It's impossible for me to know without seeing the data.
What I would normally do is:
Import the second CSV into its own (temporary) table
Clean up the second table
Then use an SQL query to append those records from the second table to the first table (a sketch of such an append query follows below)
Delete the second table
(repeat for each CSV file you are loading).
If I knew ahead of time that every CSV file was already identical in structure, I'd be inclined to instead concatenate them all together into one, and only have to do the import/clean-up once.
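For reference, here is a minimal sketch of the append-and-delete steps described above, assuming hypothetical table names, column names, and database path (MasterTable, TempImport, etc.); the same INSERT ... SELECT statement can be run directly as an Access append query, and the Python/pyodbc wrapper is just one way to script it on a machine with the Access ODBC driver installed:

```python
# Sketch: append the cleaned temporary table into the master table, then
# drop it. Table names, column names, and the database path are placeholders.
import pyodbc

conn = pyodbc.connect(
    r"DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};"
    r"DBQ=C:\data\imports.accdb;"
)
cursor = conn.cursor()

# Only reference the columns the two tables share; anything else in the
# temporary table is simply ignored.
cursor.execute(
    "INSERT INTO MasterTable (ID, Description, Notes) "
    "SELECT ID, Description, Notes FROM TempImport"
)
cursor.execute("DROP TABLE TempImport")
conn.commit()
conn.close()
```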
I had a very similar problem: trying to import a CSV file with large text fields (>255 chars) into an existing table. I declared the fields as Memo but they were still being truncated.
Solution: start an import to link a table and then click on the Advanced button. Create a link specification which defines the relevant fields as Memo fields, and then save the link specification. Then cancel the import. Do another import, this time the one you want, which appends to an existing table. Click on the Advanced button again and select the link specification you just created. Click Finish and the data should be imported correctly without truncation.
I was having this problem, but noticed it always happened only to the first row. By inserting a blank row in the CSV it would import perfectly; you then need to remove the blank row from the Access table.
Cheers,
Grae Hunter
Note: I'm using Office 2010