I need to access the eBay Merchant Integration Platform via SFTP. Using the web CSV upload is not an option, because I want to automate the process.
The setup works in general: I can upload products, update quantities and prices, and receive offers periodically. However, I fail to delete an inventory item (which is not the same as setting the quantity to 0; that works fine).
I tried to upload my delete-inventory.csv to the store/inventory folder on the SFTP server, but the error message in the response CSV looks as if eBay interprets the file as a 'normal' inventory.csv.
My question is: To which of the folders (e.g. store/availability, store/distribution, store/product) should I upload my delete-inventory.csv if not to the store/inventory folder?
Call me dumb but I'm unable to find it in the docs.
Here's my delete-inventory.csv:
SKU,Action,Channel ID,Format
test-sku,DeleteInventory,EBAY_US,FIXED_PRICE
And here's the eBay response CSV:
SKU, Group ID, Locale, ePID, Channel ID, Item ID, Status, Message Type, Message ID, Message
test-sku,,,,,,FAILED,ERROR,335101,Invalid request for SKU. Atleast one of shipToLocationAvailability or offers is required.
NB: The same happened when I removed the optional columns Channel ID and Format from my delete-inventory.csv or when I played around with optional/mandatory columns. It does not seem to be a syntactical problem.
Thanks for your help!
Reference: https://developer.ebay.com/devzone/merchant-products/mipng/regular/content/user-guide/definitions-delete-inventory-feed.html?tocpath=Managing%20inventory%7CFeed%20definitions%7C_____10
I asked eBay support and they told me to use the store/deleteInventory folder. This folder had appeared neither in the directory tree of two SFTP clients on two computers nor in the web interface, but after I contacted them, it was suddenly there...
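In case it helps others automating this, here is a minimal upload sketch with Python and paramiko; the host name and credentials are placeholders, so substitute the SFTP details from your own MIP account:

import paramiko

# host and credentials are placeholders; use the SFTP details from your MIP account
transport = paramiko.Transport(('mip.ebay.com', 22))
transport.connect(username='YOUR_MIP_USER', password='YOUR_MIP_PASSWORD')
sftp = paramiko.SFTPClient.from_transport(transport)
# upload the delete feed into store/deleteInventory, not store/inventory
sftp.put('delete-inventory.csv', 'store/deleteInventory/delete-inventory.csv')
sftp.close()
transport.close()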
While eBay MIP is marketed as "easy to use", it is far from it. Even the sample files contain errors, let alone a file written from scratch.
Uploads produce error codes that don't even have proper explanations. Moreover, once an upload produces an error and you have it corrected, a different error message occurs on the next upload.
Today is my second day uploading a test file and I am still not successful.
When a section is renamed, the get-sections API doesn't reflect the updated name, whereas the get-pages API shows the updated parent section name. This seems to be a bug/data inconsistency in the OneNote API.
A change to anything at page level updates the lastModifiedDateTime of the section, but nothing changes at notebook level. This again seems like a data inconsistency issue.
Can somebody clear up this confusion?
(Note: all of the above can be tested using the MS Graph API Explorer.)
These are two separate topics:
Section renaming
This is a known limitation/bug in OneNote - if you rename a section in OneNote Online (in your browser), then the API GET ~/notebooks/id/sections or GET ~/sections will give you the "old" name. This is because OneNote Online doesn't actually rename a file, it only marks the file as "to be renamed" - if you were to look at the file itself in OneDrive/SharePoint it would still have the old name.
Once a OneNote native client (for example OneNote for Windows) sees the section that has been marked as "to be renamed", it actually renames the file.
The OneNote API GET ~/sections/id/pages actually looks at the section binaries and is able to tell whether the section is renamed or not, which is why that name can be trusted as the "most up to date" one.
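To illustrate, a quick sketch of comparing the two names via MS Graph with Python requests; the token is a placeholder, and depending on defaults you may need $expand=parentSection:

import requests

ACCESS_TOKEN = '...'  # placeholder; get one e.g. from the Graph Explorer
headers = {'Authorization': 'Bearer ' + ACCESS_TOKEN}
base = 'https://graph.microsoft.com/v1.0/me/onenote'
# may still return the old name until a native client performs the rename
sections = requests.get(base + '/sections', headers=headers).json()['value']
# reads the section binaries, so the parent section name here is up to date
# (add ?$expand=parentSection if it is not returned by default)
pages = requests.get(base + '/sections/%s/pages' % sections[0]['id'], headers=headers).json()['value']
print(sections[0]['displayName'], 'vs', pages[0]['parentSection']['displayName'])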
I have communicated this feedback to our team and we are exploring alternatives - I encourage you to start an item in uservoice so we can better understand impact.
https://onenote.uservoice.com/forums/245490-onenote-developer-apis
LastModifiedTime (LMT) on notebook/section clarifications:
The LMT of a section is equal to max(LMT of pages under it).
The LMT of a section group however is not max (LMT of sections and section groups under it). A section group is a folder and its LMT should behave like that of a folder in a traditional file system (reflects time of last add/delete of a file/folder directly under it).
However, there is nothing stopping you from using $expand and calculating the LMT (as you understand it) yourself based on the entities below the notebook/section group.
https://blogs.msdn.microsoft.com/onenotedev/2014/12/16/beta-get-onenote-entities-in-one-roundtrip-using-expand/
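For instance, a rough sketch (Python, MS Graph, placeholder token) of deriving a notebook-level LMT yourself from the expanded sections:

import requests

ACCESS_TOKEN = '...'  # placeholder
headers = {'Authorization': 'Bearer ' + ACCESS_TOKEN}
url = 'https://graph.microsoft.com/v1.0/me/onenote/notebooks?$expand=sections'
for nb in requests.get(url, headers=headers).json()['value']:
    # ISO 8601 UTC timestamps compare correctly as strings
    lmts = [s['lastModifiedDateTime'] for s in nb.get('sections', [])]
    print(nb['displayName'], max(lmts) if lmts else nb['lastModifiedDateTime'])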
I have the following scenario: two Revit files, ModelA.rvt and ModelB.rvt. They are cross-referenced together, zipped, and uploaded twice under different object keys (ModelA.zip, ModelB.zip). The ZIP files are identical, very small (4 MB), and contain both files. Both are uploaded successfully in a loop using:
PUT https://developer.api.autodesk.com/oss/v2/buckets/:bucketKey/objects/:objectName
Files are overwritten with token scope data:write, and a POST job is called with x-ads-force = true in case of a model update. I then call the POST job twice in a loop, once with ModelA.rvt as rootFilename for ModelA.zip and once with ModelB.rvt for ModelB.zip. Both POST jobs complete successfully.
Right after, I poll the manifest for both zip files every 10 seconds. ModelB.zip is translated 100% in a few seconds, but ModelA.zip never finishes (a few hours so far); it just hangs for no reason. On Friday I thought it was a temporary issue, but it still persists.
I tried this scenario 3 times, each time with a different set of files, today and 3 days ago. Same result. This set is the simplest one, and all the files are already present in the cloud. I still have no idea what is going on.
When I list the bucket objects, the zip files are never present, which is another weird thing. Other files with non-zip extensions are there.
Does anyone have a clue what is causing this, or what a possible workaround could be? This is a serious issue, because it undermines the usability and reliability of the whole API.
The linked Revit files need to be in one zip file with the new v2 API. See this post for more details: http://adndevblog.typepad.com/cloud_and_mobile/2016/07/translate-referenced-files-by-derivative-api.html
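For reference, a sketch of what the single-zip job could look like in Python; the bucket name, object key, and token are placeholders:

import base64, requests

ACCESS_TOKEN = '...'  # placeholder, scope data:read data:write
# base64-encode the object URN of the single zip (unpadded); names are placeholders
urn = base64.urlsafe_b64encode(b'urn:adsk.objects:os.object:mybucket/Models.zip').decode().rstrip('=')
job = {
    'input': {'urn': urn, 'compressedUrn': True, 'rootFilename': 'ModelA.rvt'},
    'output': {'formats': [{'type': 'svf', 'views': ['2d', '3d']}]},
}
requests.post('https://developer.api.autodesk.com/modelderivative/v2/designdata/job',
              headers={'Authorization': 'Bearer ' + ACCESS_TOKEN, 'x-ads-force': 'true'},
              json=job)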
While trying to import some Android projects into Eclipse, I noticed that every file in the project is 0 bytes after the import. These projects are stored on Drive, so there is some chance of reverting them to the previous version.
Reverting files to previous versions is easy to do when you've got a few files - you simply do it through a browser. However, I have hundreds of files and I need to fetch one revision back for each. I have been able to download a number of files by hand thus far, but there has to be a better way.
I have asked Google support and actually got a response back, but it's clear that there is no built-in functionality to do this. So I have started looking at the Drive API but I can see that there might be a bit of a learning curve.
Wondering if anyone has run into this before? Ideally I would like to identify one folder and for each file underneath, fetch the last version of the file. If anyone has a good approach for this, I would love to hear it.
thanks!
Rough Python (Drive API v2 client) to do what you want is:
# build the v2 service once; creds is an OAuth2 credentials object for your project
from googleapiclient.discovery import build
service = build('drive', 'v2', credentials=creds)

# get the id of the folder https://developers.google.com/drive/v2/reference/files/list
fid = service.files().list(q="title = 'foo'").execute()['items'][0]['id']
# get the children of that folder https://developers.google.com/drive/v2/reference/children/list
children = service.children().list(folderId=fid, maxResults=999).execute()['items']
# for each child,
for child in children:
    # get the revisions https://developers.google.com/drive/v2/reference/revisions/list
    revisions = service.revisions().list(fileId=child['id']).execute()['items']
    # iterate, or take revisions[-2] (typically the one before the current), whatever
    # works best for you, and use its downloadUrl to fetch the file
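Continuing the sketch, the final download step needs the access token on the request itself; here revision is one item from the revisions list above, and ACCESS_TOKEN is a placeholder:

import requests
resp = requests.get(revision['downloadUrl'],
                    headers={'Authorization': 'Bearer ' + ACCESS_TOKEN})
open('restored-file', 'wb').write(resp.content)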
With each call that you make, you'll need to provide an access token. For something like this, you can generate an access token using the oauth playground https://developers.google.com/oauthplayground/
You'll also need to register a project at the cloud/api console https://code.google.com/apis/console/
So yes, it's a disproportionate amount of learning to do something fairly simple. It's a few minutes' work for somebody familiar with Drive, and I would guess 3 days for somebody who isn't. You might want to throw it up on freelancer.com.
Has anyone experienced the same error while downloading CSV quotes from the Yahoo! Finance web service?
Whether I download from my app or via the URL in a web browser, I get this error:
in the l1 tag (last trade price), a number like 5.05544704E8, ...
in the d1 tag (last trade date), the date 1/1/1970
The problem appeared in the last few days (it is now 21/07/2011). Why?
Thanks!
Yahoo's financial results appear to be unreliable recently. This problem is not the only one: recently some stocks were given "N/A" (via CSV only; the web pages were fine), some indexes were unavailable, etc. So this is not your error; it is just the Yahoo web service giving wrong results because of bugs or other problems on their side.
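Until it stabilizes, one option is to sanity-check the returned fields before trusting them. A rough sketch against the classic quotes.csv URL (the f= format codes and the AAPL symbol are just examples):

import requests

row = requests.get('http://download.finance.yahoo.com/d/quotes.csv',
                   params={'s': 'AAPL', 'f': 'l1d1'}).text.strip().split(',')
price, date = float(row[0]), row[1].strip('"')
# 1/1/1970 is the Unix epoch, i.e. a missing timestamp; treat such rows as failed quotes
if date == '1/1/1970' or not (0 < price < 1e6):
    raise ValueError('bogus quote from Yahoo: %r' % row)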
Question:
How can I get a list of all checked out files per user in Sourcegear Vault?
Use of this functionality:
From time to time we have developers leaving files checked out, and although this results in drastic punishment (they owe a coffee to the person who needed the checked-out file), we are still left with files checked out and work held up.
We would like to display a list of all files currently checked out by each developer. That way they can check whether they have anything checked out before they go home or leave the office.
In the Vault Client app, use the Search tab at the bottom of the window.
Select Search By: "Checked Out By" to see a list of all files checked out by a specific user, or by any user.
You can choose to search a specific sub-folder, or from the root, recursively or not.
To automate this, use the Vault command-line client (vault.exe):
vault -host myhost.mydomain -user something -password something -repository myrepo listcheckouts
This will give you a list, in XML, of all checked-out files and their users. You can transform the results, or use the command-line client's source code (provided as an example with the Vault .NET API) as a starting point to write your own version.
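If you want to go one step further, here is a rough sketch of grouping that XML output per user in Python; note that the element names ('checkoutitem', 'user', 'path') are assumptions, so inspect your client version's actual output first:

import subprocess, collections
import xml.etree.ElementTree as ET

out = subprocess.run(['vault', '-host', 'myhost.mydomain', '-user', 'something',
                      '-password', 'something', '-repository', 'myrepo', 'listcheckouts'],
                     capture_output=True, text=True).stdout
by_user = collections.defaultdict(list)
for item in ET.fromstring(out).iter('checkoutitem'):  # assumed element name
    by_user[item.findtext('user', '?')].append(item.findtext('path', ''))
for user, files in sorted(by_user.items()):
    print('%s: %d file(s) checked out' % (user, len(files)))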
The various clients and APIs can be grabbed from http://sourcegear.com/vault/downloads.html - didn't want to link to a specific version that would be outdated after the next release.