WooCommerce CSV Import Suite Creating Duplicate Products

We are trying to get the CSV importer to work properly, but it's creating duplicate products. Ultimately we want to import hundreds of product variations, but we're not close to getting that working yet. We've eliminated all of the variables we can think of, and we have a pretty simple test that's failing.
What we did was this:
1. Export all of the products (WooCommerce >> CSV Import Suite >> Export Products tab, Limit = unlimited, Offset = 0, Columns = All Columns). We tested with "include hidden data" both checked and unchecked.
2. Save the CSV file to the desktop (Windows) without opening or editing it in any way.
3. Click the Import button, upload the file, and click the final button to start the process.
I would expect it to skip every product in the import file because it already exists in the database, but it routinely adds 8 of the 67 products as new ones. Each time we've tested, it has been the same 8 products, and the "include hidden data" option on the export doesn't affect the results.
Has anyone seen this issue? Any ideas on a workaround or fix?
If not, does anyone have any suggestions on how to de-duplicate the records?

Check the ID column. If it's a new product, you'll want to leave the ID cell blank.

The WooCommerce CSV Product Import Suite doesn't strictly need the post ID, but it uses it to know whether you are updating, creating, or removing. For a variation you will always need a unique SKU; if you reuse the parent's SKU, or the same SKU for every variation, you end up with duplicate or missing variations, because the attributes change but the SKU stays the same.
WooCommerce seems to care more about the SKU than about attributes, and it is not smart enough to account for missing attributes. The gist: ALWAYS use a unique SKU. If you are uploading to existing products and aren't sure whether attributes have changed, use the override option to remove blank cells; this will clear unwanted attributes that are no longer in use.
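Before importing, it can help to scan the export file for exactly the problems described above: blank SKUs and SKUs shared across rows. Here is a minimal Python sketch; the column name "sku" is an assumption, so check the header row of your own export and adjust it.

```python
import csv
from collections import Counter

def find_sku_problems(path, sku_column="sku"):
    """Report blank and duplicated SKUs in an export CSV.

    The column name 'sku' is an assumption -- check the header
    row of your own export and adjust accordingly.
    """
    with open(path, newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))
    skus = [(row.get(sku_column) or "").strip() for row in rows]
    # +2 converts a 0-based data index to a 1-based file line (header is line 1)
    blanks = [i + 2 for i, s in enumerate(skus) if not s]
    dupes = {s: n for s, n in Counter(s for s in skus if s).items() if n > 1}
    return blanks, dupes
```

Running this on the exported file before re-importing should tell you whether the 8 re-created products are the ones with missing or shared SKUs.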

Related

Remove simple product from grouped product during csv import in magento 2

I have added some associated_sku (of simple products) to one grouped product & removed it from another grouped product.
Actually I wrongly associated a simple product to a grouped product initially in my CSV file. I wanted to remove it from the old grouped product and add it to the new one.
My import behavior is Add or Update.
But after importing all the data, the newly associated simple products show under the new grouped product, as they should, but they are not removed from the old grouped product whose associated_sku column I removed them from. How can I achieve this?
The following link helped me a lot to achieve what was needed. It is clear we can only achieve this by removing the associations programmatically in Magento; with the current import behavior "Add/Update", this is the only possible solution.
https://minhducnho.wordpress.com/2018/02/13/magento-2-remove-unlink-child-product-from-grouped-product-programmatically/
But the code mentioned is very specific: you need to create $product at the top and make a few more adjustments accordingly.

Editing SQL code in multiple queries at one time

I'd like to automate a procedure some. Basically, what I do is import a few spreadsheets from Excel, delete the old spreadsheets that I previously imported, and then change a few queries to reflect the title of the new imports. And then I change the name of the queries to reflect that I've changed them.
I suppose I could make this a bit easier by keeping the imported documents the same name as the old ones, so I'm open to doing that, but that still leaves changing the queries. That's not too difficult, either. The name stays pretty much the same, except the reports I'm working with are dated. I wish I could just do a "find and replace" in the SQL editor, but I don't think there's anything like that.
I'm open to forms, macros, or visual basic. Just about anything.
I've just been doing everything manually.
Assuming I have correctly understood the setup, there are a few ways in which this could be automated, without the need to continually modify the SQL of the queries which operate on the imported spreadsheet.
Following the import, you could either execute an append query to transfer the data into a known pre-existing table (after deleting any existing data from that table), avoiding the need to modify any of your other queries, or alternatively rename the imported table.
The task is then reduced to identifying the name of the imported table, given that it will vary for each import.
If the name of the spreadsheet follows logical rules (you mention that the sheets are dated), then perhaps you could calculate the anticipated name based on the date on which the import occurs.
Alternatively, you could store a list of the tables present in your database and then query this list for additions following the import to identify the name of the imported table.

Importing CSV product data - updating price column

I’ve searched but can’t find exactly what I’m trying to do, or just losing my mind... probably both.
I’ve got a data feed with the following fields: Product ID, Name, Retail Price, Sales Price. Right now I dump this into a Google Sheets file via scripts, trim the necessary fields, export to CSV, and import it back into MySQL, but I need the data to go straight from the feed CSV URL to MySQL, with a few fields corrected along the way.
For some reason, some of the prices have odd values such as 4728.7376 while others have 282. I’d basically like to trim anything past the decimal point. When I tried that, some rows don’t have any prices at all (it’s annoying), and that seems to break the import.
Any suggestions on simplifying this so it pulls from the URL feed, fixes the issues, and imports into MySQL? The database is on Google Cloud, but I’m currently testing with MAMP. I can only get the import to work using varchar(100) columns, yet I need to do calculations with those columns, at least wherever a value is present.
Thanks in advance!
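One way to sketch the cleaning step is in Python: parse the feed text, truncate prices to whole units, and map missing prices to None (which a parameterized MySQL insert turns into NULL, so numeric columns can be used instead of varchar). The column names here are assumptions; match them to the real feed header.

```python
import csv
import io
import urllib.request

def clean_rows(csv_text):
    """Parse the feed and truncate prices to whole units.

    Column names (product_id, name, retail_price, sales_price) are
    assumptions -- match them to the real feed header. Missing prices
    become None, which a parameterized INSERT stores as NULL.
    """
    out = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        for col in ("retail_price", "sales_price"):
            raw = (row.get(col) or "").strip()
            row[col] = int(float(raw)) if raw else None
        out.append(row)
    return out

# Fetching the feed would then be, roughly (FEED_URL is hypothetical):
# text = urllib.request.urlopen(FEED_URL).read().decode("utf-8")
# rows = clean_rows(text)
# ...each row inserted with a parameterized INSERT so None becomes NULL.
```

With the price columns declared as INT (nullable) in MySQL, the calculations work directly wherever a value is present.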

While importing products in Magento, how can we know which particular products got updated?

In Magento I am importing bulk products. Every month I get a CSV of the whole data set (all products). I want to upload only those products whose attribute values actually changed.
For example :
If I have 5 products in Magento and a CSV with those same 5 products, and only 1 product's description has changed in the new CSV, then I want to import only that changed product.
If this is not possible, can we at least get a list of all changed products after the import?
Thanks.
Usually, you should get an incremental csv, with only the data that's changed.
One thing you could potentially do is, after loading the product and setting the new data from the CSV, use $product->dataHasChangedFor($field) to determine whether the new data is different from the original for that particular field.
To see more about how this works, you can check the implementation in Varien_Object. Basically Magento stores original data that's loaded separately, so it allows comparing it with the newly set data.
Cheers.
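Another option, outside Magento entirely, is to diff this month's CSV against last month's before importing, and feed Magento only the rows that are new or changed. A minimal Python sketch, assuming the export has a "sku" column that uniquely identifies each product:

```python
import csv

def changed_rows(old_path, new_path, key="sku"):
    """Return rows from the new CSV that are new or differ from the old CSV.

    Keying on 'sku' is an assumption; use whatever column uniquely
    identifies a product in your export.
    """
    with open(old_path, newline="", encoding="utf-8") as f:
        old = {row[key]: row for row in csv.DictReader(f)}
    with open(new_path, newline="", encoding="utf-8") as f:
        # A row is kept if its SKU is absent from the old file or any field differs.
        return [row for row in csv.DictReader(f)
                if old.get(row[key]) != row]
```

Writing the returned rows back out as a CSV gives you a small incremental file to import, and the returned list doubles as the "which products changed" report.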

Insert missing rows in CSV of incrementally numbered files generated by directory listing?

I have created a CSV from a set of files in a directory that are numbered incrementally:
img1_1.jpg, img1_2.jpg ... img1_1999.jpg, img1_2000.jpg
The CSV output is like so:
filename, datetime
eg:
img1_1.JPG,2011-05-11 09:16:33.000000000
img1_3.jpg,2011-05-11 10:10:55.000000000
img1_4.jpg,2011-05-11 10:17:31.000000000
img1_6.jpg,2011-05-11 10:58:37.000000000
The problem is, there are a number of files missing in the listing, as some of the files don't exist. As a result, when imported, the actual row number does not match the file number.
Can anyone think of a reasonably efficient way to insert the missing rows so that the row number and filename matches up other than manually inserting rows for the missing ones? (There are over 800 missing rows).
Background
A previous programmer developed an uploader script and did not save the creation time of the mysql record in the database. I figured the easiest way to find the creation time for the majority of the records would be to output a directory listing of all the files and combine them in a spreadsheet.
You need to do exactly what you wrote in your comment replying to #tadman.
Write a text parser script that injects the missing lines with a date/time value marking each record as an empty one, i.e. one with no real data behind it (e.g. date it to 1950-01-01 00:00:00). When that's done, bulk import the CSV. I think this is the best and most efficient solution.
Also, think about any future insert/delete/update events that might occur in your data.
They could break the chain you initially had, so you might prefer instead to introduce a numeric field for the jpeg IDs (and index that field), leaving the PK as is (auto-increment).
In that case you avoid CSV manipulation, as well as being chained to your auto-increment PK (meaning you won't get into trouble if a new jpeg arrives with an ID that was previously deleted, an existing ID, etc.).
So the solution really depends on how you want to use this table in the future. If you give more details, I am sure the community can come up with even more ideas.
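The parser script suggested above is short in Python: extract the number from each filename, then walk from 1 to the highest number, emitting either the real row or a placeholder. A sketch, assuming the filename pattern from the question and the 1950-01-01 placeholder:

```python
import csv
import re

PLACEHOLDER = "1950-01-01 00:00:00.000000000"

def fill_missing(rows, prefix="img1_", ext=".jpg"):
    """Insert placeholder rows so row number matches the file number.

    rows: list of (filename, datetime) tuples from the listing CSV.
    Filenames are assumed to embed their number as '_<n>.' (img1_42.jpg).
    """
    by_num = {}
    for name, dt in rows:
        m = re.search(r"_(\d+)\.", name)
        if m:
            by_num[int(m.group(1))] = (name, dt)
    # Walk every number up to the highest one seen, filling gaps.
    return [by_num.get(n, (f"{prefix}{n}{ext}", PLACEHOLDER))
            for n in range(1, max(by_num) + 1)]
```

Read the listing CSV with the csv module, pass the rows through fill_missing, write the result back out, and the row numbers line up with the file numbers for the bulk import.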
If it's a one-time thing, it might be easiest to open up your csv in a spreadsheet.
If your table above is in sheet1, you could put something like the following in sheet2 (this is OpenOffice, but there are similar functions in Excel):
pre_filename | filename | datetime
img1_1 | = A2&".JPG" | =OFFSET(Sheet1.$B$1;MATCH(B2;Sheet1.$A$2:$A$4;0);0)
You should be able to select the three cells above and drag them down to however many you need.