Magento 2.1.1 "We can't find required columns: sku" (CSV)

My apologies if a solution has been provided elsewhere; I have searched and could not find anything similar to what I am experiencing. I am trying to upload categories on a Magento CE 2.1.1 website. I have a file with almost 4000 categories and subcategories, and the only practical way is to upload via a CSV file.
I downloaded a sample file to use, and when I upload that same sample file it works fine when I click the "Check Data" button. However, when I replace the values in the rows with my own and save the file as CSV with UTF-8 text encoding, I get the error message below. This also happens when I save the file as CSV even without changing the values. I have tested this with a CSV file saved from both Mac Numbers and Windows Excel.
I only need to upload Categories (and not products) but I am not sure if this is possible.
File links:
Importing
Not importing
Actual project sample file
The files are quite similar but strangely one is working and the other is not.
Error
We can't find required columns: sku.
Column names: "sku;store_view_code;attribute_set_code;product_type;categories;product_websites;name;description;short_description;weight;product_online;tax_class_name;visibility;price;special_price;special_price_from_date;special_price_to_date;url_key;meta_title;meta_keywords;meta_description;base_image;base_image_label;small_image;small_image_label;thumbnail_image;thumbnail_image_label;swatch_image;swatch_image_label;created_at;updated_at;new_from_date;new_to_date;display_product_options_in;map_price;msrp_price;map_enabled;gift_message_available;custom_design;custom_design_from;custom_design_to;custom_layout_update;page_layout;product_options_container;msrp_display_actual_price_type;country_of_manufacture;additional_attributes;qty;out_of_stock_qty;use_config_min_qty;is_qty_decimal;allow_backorders;use_config_backorders;min_cart_qty;use_config_min_sale_qty;max_cart_qty;use_config_max_sale_qty;is_in_stock;notify_on_stock_below;use_config_notify_stock_qty;manage_stock;use_config_manage_stock;use_config_qty_increments;qty_increments;use_config_enable_qty_inc;enable_qty_increments;is_decimal_divided;website_id;related_skus;related_position;crosssell_skus;crosssell_position;upsell_skus;upsell_position;additional_images;additional_image_labels;hide_from_product_page;bundle_price_type;bundle_sku_type;bundle_price_view;bundle_weight_type;bundle_values;bundle_shipment_type;associated_skus" are invalid

This might be because you opened the file in Excel, which will add a BOM (byte order mark) to the start of the file. When the Magento importer tries to read the file, it expects the first header cell to say sku, but it instead sees the BOM.
Two ways to solve this:
1) Don't open it in Excel - use Google Sheets, or a text editor if you are feeling brave.
2) If you opened the file in Excel, close it, open it in Notepad++, click Encoding at the top and set it to "Encode in UTF-8" (NOT "Encode in UTF-8-BOM"). Then save and you are good to go.
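If re-saving from Notepad++ isn't an option, the BOM can also be stripped with a few lines of script. Here is a minimal Python sketch, assuming the import file fits in memory; the file names import.csv and import_nobom.csv are placeholders:

# Minimal sketch: rewrite a CSV without the UTF-8 BOM.
# "import.csv" and "import_nobom.csv" are placeholder names.
with open("import.csv", "r", encoding="utf-8-sig") as src:    # utf-8-sig strips a BOM if present
    content = src.read()

with open("import_nobom.csv", "w", encoding="utf-8", newline="") as dst:
    dst.write(content)

Uploading the rewritten file should then pass the "Check Data" step, since the first header is now literally sku.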

Related

Create a CSV table from a JSON file with a BOM marker in a Logic App

I'm trying to create a CSV table which has emojis in it and send it as an attachment via a Logic App. So far, my app creates the JSON with the emojis in it fine, and the input going into the Create CSV Table action also looks fine. But when I download the CSV created from my email, the emojis come out as gibberish, and I realise that is because the CSV has been created without a BOM marker.
Any ideas how I can fix this? I've already tried the solution below, but I don't think it works because of the conversion from CSV into bytes required to attach a file to an email.
https://techcommunity.microsoft.com/t5/integrations-on-azure-blog/create-csv-files-with-bom-marker-in-logic-app/ba-p/2919113
One workaround is to transform the downloaded CSV file. To get the emojis to display in the CSV file, follow the steps below:
1) Open Microsoft Excel.
2) Navigate to Data >> Get External Data >> From Text.
3) Add the location of the CSV file that you want to import and click Transform Data. Make sure you set the File Origin to 65001: Unicode (UTF-8) and the Delimiter to Comma.
4) The file will then open in the Power Query Editor.
5) Navigate to Home >> Close and Load.
6) You can now see the emojis displayed correctly in your file.
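For reference, the BOM that Excel looks for when auto-detecting UTF-8 is just the three bytes EF BB BF at the start of the file. Outside of the Logic App itself, a minimal Python sketch that prepends a UTF-8 BOM to an existing CSV would look like this (file names are placeholders; this only illustrates the byte-level fix, not a Logic Apps action):

# Minimal sketch: prepend a UTF-8 BOM so Excel auto-detects the encoding.
# "report.csv" and "report_bom.csv" are placeholder names.
import codecs

with open("report.csv", "rb") as src:
    data = src.read()

if not data.startswith(codecs.BOM_UTF8):           # BOM_UTF8 is the byte sequence EF BB BF
    with open("report_bom.csv", "wb") as dst:
        dst.write(codecs.BOM_UTF8 + data)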

How to retrieve original pdf stored as MySQL mediumblob?

A table containing almost four thousand records includes a mediumblob field for each record that contains the record's associated PDF report. Under both MySQL Workbench and phpMyAdmin the relevant DOCUMENT column displays the data as a BLOB button or link. In the case of phpMyAdmin the link also indicates the size of the data the Blob contains.
The issue is that when the Blob button/link is clicked, MySQL Workbench's SQL Editor only displays the raw Blob data, while in phpMyAdmin the link only allows the Blob data to be saved as a .bin file instead of displaying or saving the data as a viewable PDF file. All previous attempts to retrieve the original PDFs using PHP have failed - see the related earlier thread: Extract Pdf from MySql Dump Saved as Text.
The filename field in the table shows that all the stored files are PDF files. Further research and tests indicate that the mediumblob data has been stored as application/octet-stream.
My question is how can the original PDFs be retrieved as readable PDFs? Is it possible for a .bin file saved from the database to be converted or used to recover the original PDF file?
Any assistance would be greatly appreciated.
In line with my assumption and Isaac's suggestion, the only solution was to speak to one of the software developers. It transpires that the documents were zipped using a third-party library, and the header was removed, before being stored in the database.
The third-party library used is version 2.0.50727 of Chilkat, available from www.chilkatsoft.com. That version no longer appears to be available, but hopefully at least one of the later versions may do the job.
Thanks again for everyone's input and assistance.
Based on the discussion in the comments, it sounds like you'll need to either refer to the original source code or consult with the original developer to determine exactly how the data was stored.
Using phpMyAdmin to download the mediumblob data as a file will download a .bin file in many cases. I actually don't recall how it determines the content type (for instance, a PNG file will download with a .png extension, but most other binary files simply download as a .bin when phpMyAdmin isn't sure what the extension should be, PDF included). So the behavior you're seeing from phpMyAdmin is expected and correct, but since the .bin file doesn't work when it's renamed to .pdf, something has probably gone wrong with the import and upload.
BLOB data is often stored in a pretty standardized way, but it seems your data doesn't follow that method.
Without seeing the code directly, we can't say exactly what happened when the data was stored; we would only be guessing.
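As a starting point for that investigation, it can help to dump one of the blobs and look at its leading bytes, which usually reveal the real format. A rough Python sketch, assuming the mysql-connector-python package; the connection details, table name documents, and column name DOCUMENT are placeholders:

# Rough sketch: dump one mediumblob and inspect its leading bytes.
# Connection details, table name, and column name are placeholders.
import mysql.connector

conn = mysql.connector.connect(host="localhost", user="user",
                               password="secret", database="mydb")
cur = conn.cursor()
cur.execute("SELECT DOCUMENT FROM documents LIMIT 1")
(blob,) = cur.fetchone()
cur.close()
conn.close()

with open("sample.bin", "wb") as f:
    f.write(blob)

# 25 50 44 46 ("%PDF") -> plain PDF, 50 4B 03 04 ("PK") -> zip archive,
# a leading 78 byte -> possibly raw zlib-compressed data.
print(bytes(blob[:8]).hex())

Given the Chilkat detail above, the bytes may well not match any common signature, which would be consistent with a compressed stream whose header was stripped.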

HERE XYZ Studio issue with uploading files

It seems that XYZ Studio has some problems with accepting files. The upload of .geojson and .csv files is recommended, but it tells me I am trying to upload "unsupported file types". It still worked a few weeks ago, but I cannot upload any .geojson or .csv files right now.
Kindly cross-check the names in the header of your CSV file. If the file does not have columns labelled Latitude and Longitude, XYZ Studio may give you a message saying that you are trying to upload an unsupported file.
I ran into a similar issue. It turns out HERE Studio accepts comma-delimited (,) CSVs only. If you modify the file in Excel and it gets saved as a caret-delimited (^) CSV, the uploader will read the file as one wide column and throw errors.
If HERE is listening, some documentation on properly formatted file types, formats, and limitations, along with sample code for the CSV, JSON, Shapefile, and GeoJSON formats, would be immensely helpful to users of Studio, as there is little of this in the API/platform documentation.
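If a file has ended up caret-delimited, rewriting it with commas before uploading is straightforward. A minimal Python sketch, assuming the input really does use ^ as its separator; the file names are placeholders:

# Minimal sketch: convert a caret-delimited CSV to a comma-delimited one.
# "points_caret.csv" and "points_comma.csv" are placeholder names.
import csv

with open("points_caret.csv", newline="", encoding="utf-8") as src, \
     open("points_comma.csv", "w", newline="", encoding="utf-8") as dst:
    reader = csv.reader(src, delimiter="^")
    writer = csv.writer(dst)            # default delimiter is a comma
    for row in reader:
        writer.writerow(row)

The same pass is a convenient place to confirm that the header row contains Latitude and Longitude columns, per the answer above.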

CodedUI test does not read data from CSV input file

I am having difficulty mapping a CSV file with the Coded UI test method. This is most likely a stupid question but I cannot seem to find a solution for my problem, at least not one that works. I have made sure to set the property of the CSV file to Copy always.
I have also imported the CSV file by writing the following line above the test method.
[DataSource("Microsoft.VisualStudio.TestTools.DataSource.CSV", "|DataDirectory|\\Data\\login.csv", "login#csv", DataAccessMethod.Sequential), DeploymentItem("login.csv"), TestMethod]
The file name is login.csv and it resides in the Data folder.
The test compiles without any problem, but once the test executes, the fields that should receive input from the CSV file are left empty and the execution is interrupted. I've tried replacing the data from the CSV file with plain strings and it works perfectly fine. The piece of code I am using to import each parameter is:
TestContext.DataRow["Username"].ToString()
Also, the CSV file contains something along the following lines:
Username,Password,Fullname
admin#mail.com,password,Admin
Is there anyone who can point out what it is I am forgetting?
Update: I pinpointed the issue; it seems the problem only affects the first column in the CSV file. When I try to import any of the other values, it works perfectly fine.
Some text files start with a Byte Order Mark (BOM). The CSV reader within Coded UI does not handle the BOM and treats it as part of the first field name. The screenshot below shows the debug trace of a CSV file with a BOM and that same file shown in Notepad++. The DataRow.ItemArray[...] values are as expected. The DataRow.Table.Columns.ResultsView[...] shows the field names, but the first field name includes the BOM.
This CSV file with a BOM was created in Visual Studio using Solution Explorer => Add => New item => C# => General => Text file. Previously I created a spreadsheet with Microsoft Excel and saved it as a CSV file; that file did not have a BOM. I have also created files with Notepad++ and saved them as CSV, and they did not have a BOM. It appears that Visual Studio creates files with a BOM, but when editing CSV files it does not add a BOM.
Visual Studio can create files with the correct encoding. Within "Step 2 - Create a data set" of this Microsoft page it states the text below. (Thanks also to Holistic Developer for providing very similar details in a comment.):
It is important to save the .csv file using the correct encoding. On the FILE menu, choose Advanced Save Options and choose Unicode
(UTF-8 without signature) – Codepage 65001 as the encoding.
For Visual Studio 2010, I could solve the issue by selecting the "Western European (Windows) - Codepage 1252" encoding for CSV files.
Summary of steps:
In Visual Studio 2010, open the CSV file > go to the File menu > select "Advanced Save Options" > select "Western European (Windows) - Codepage 1252" > Save.
This should help.
This is not the best solution, but it is a workaround of sorts. I simply set the first element to something random, and since I don't need access to the first element, it doesn't matter that I can't access it.
If anyone finds a correct way to solve this problem I'd be grateful for your solution.
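A quick way to tell whether a given data file is affected is to check its first three bytes before running the test. A minimal Python sketch, run outside the test project; login.csv is the file from the question:

# Minimal sketch: report whether a file starts with a UTF-8 BOM.
# "login.csv" is the data file from the question; pass another path as an argument if needed.
import sys

path = sys.argv[1] if len(sys.argv) > 1 else "login.csv"
with open(path, "rb") as f:
    head = f.read(3)

if head == b"\xef\xbb\xbf":
    print(path + " starts with a UTF-8 BOM - re-save it as UTF-8 without signature.")
else:
    print(path + " does not start with a UTF-8 BOM.")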

Creating a CSV file with the Report Generation Toolkit in Labview

I want to create .csv files with the Report Generation Toolkit in LabVIEW.
They must actually be .csv files which can be opened with Notepad or something similar.
Creating a .csv is not that hard; it's just a matter of adding the extension to the file name that's going to be created.
If I create a .csv file this way it opens nicely in Excel, just the way it should, but if I open it in Notepad it shows all kinds of characters and doesn't even come close to the data I wrote to the file.
I create the files with the LabVIEW code below:
Link to image (can't post an image yet because I have too few points)
I know .csv files can be created with the Write to Spreadsheet VI, but I would like to use the Report Generation Toolkit because it makes it easy to add columns and rows to the file, and that is something I really need.
You can use the Robust CSV package on the lavag.org forum to read and write 2D arrays to CSV files.
http://lavag.org/files/file/239-robust-csv/
Calling a file "csv" does not make it a CSV file. I have never used the toolkit to generate an Excel file, but I'm assuming it creates an XLS or XLSX file regardless of what extension you give it, which is why you're seeing gibberish (probably XLS, since it's been around for a while and I believe XLSX is XML, not binary).
I'm not sure what your problem is with the Write to Spreadsheet VI. It has an append input, so I assume you can use that to at least add rows directly to a file, although I can't say I've ever tried it. I would prefer handling all the data in memory explicitly, where you can easily use the array functions to add rows or columns to the array and then overwrite the entire file.
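For comparison, this is all a plain-text CSV amounts to: rows of comma-separated values that any text editor can display. A minimal Python sketch (the file name and column names are invented for illustration) in the spirit of building the data in memory and overwriting the whole file:

# Minimal sketch: a real CSV is just plain text with comma-separated values.
# Builds a small 2D array in memory and overwrites the whole file in one go.
import csv

data = [
    ["time", "voltage", "current"],    # invented column names
    [0.0, 1.23, 0.45],
    [0.1, 1.27, 0.44],
]

with open("measurements.csv", "w", newline="", encoding="utf-8") as f:
    csv.writer(f).writerows(data)      # opening the result in Notepad shows readable text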