Stale data from Microsoft Graph and Excel API

We're using the Microsoft Graph .NET Client Library to send requests to the Excel API in order to read or write to Excel files in Office365. We have noticed that the data that we get back from the API is sometimes stale.
For instance, if we add a row to an Excel file and then immediately read all rows from the same file, the new row is sometimes missing from the data we read back even though the add request succeeded. If we wait a few seconds, the row shows up. The problem does not reproduce consistently, and the delay varies from under a second to sometimes tens of seconds. The same problem occurs with update and delete operations as well.
Based on this, we speculate that behind the API, data takes a significant amount of time to propagate across all of Microsoft's servers, and if our requests are not always routed to the same server, we will occasionally hit a server that does not have the latest data.
Could someone who is working on either the Microsoft Graph API or the Excel API verify this guess? We understand that as Microsoft transitions from shipping packaged software to building cloud services, there will be problems and challenges, so we don't expect an immediate solution. However, since our business depends greatly on this API, if there is a known problem, please let us know so that at least we can try to find a workaround on our end.
Any response would be greatly appreciated. Thank you in advance.

Please check
https://dev.office.com/blogs/power-your-apps-with-the-new-excel-rest-api
Copied from the above URL:
Note: Any request that modifies the workbook should be performed in a persisted session. Find more details on how to create a persisted session in our documentation.
Create a persisted session
POST .../workbook/CreateSession
content-type: Application/Json
authorization: Bearer {access-token}
{ "persistChanges": true }
Response
HTTP code: 201, Created
content-type: application/json;odata.metadata
{ "#odata.context": "https://graph.microsoft.com/v1.0/$metadata#microsoft.graph.sessionInfo", "id": "{session-id}", "persistChanges": true}
Usage
The session ID returned from the CreateSession call is then passed as a header on subsequent API requests using the workbook-session-id HTTP header.
GET .../workbook/Worksheets
authorization: Bearer {access-token}
workbook-session-id: {session-id}
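For reference, here is a minimal sketch of the same flow in C# using plain HttpClient. The item id and token are placeholders (assumptions), not real values; adapt the drive-item path to however you address the workbook:

// Sketch: create a persisted session, then send the returned session id
// on every subsequent Excel API call so reads see your own writes.
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;

class ExcelSessionSketch
{
    static async Task Main()
    {
        // "your-item-id" is a placeholder for the drive item id of your workbook.
        string baseUrl =
            "https://graph.microsoft.com/v1.0/me/drive/items/your-item-id/workbook";

        using var http = new HttpClient();
        http.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", "your-access-token"); // placeholder

        // 1. Create a persisted session.
        var body = new StringContent("{\"persistChanges\": true}",
                                     Encoding.UTF8, "application/json");
        var created = await http.PostAsync(baseUrl + "/createSession", body);
        created.EnsureSuccessStatusCode();
        using var json = JsonDocument.Parse(await created.Content.ReadAsStringAsync());
        string sessionId = json.RootElement.GetProperty("id").GetString();

        // 2. Pass the session id on every subsequent workbook request.
        var get = new HttpRequestMessage(HttpMethod.Get, baseUrl + "/worksheets");
        get.Headers.Add("workbook-session-id", sessionId);
        var worksheets = await http.SendAsync(get);
        Console.WriteLine(await worksheets.Content.ReadAsStringAsync());
    }
}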

Related

ExactOnline: The remote server returned an error: (400) Bad Request

I'm trying to connect to the ExactOnline server using HttpWebRequest from C#. When I try to get the response, I get an exception: "The remote server returned an error: (400) Bad Request".
The Web Request looks like:
Method: GET
Address: "https://start.exactonline.nl/api/v1/3175257/Logistics/Items?$select=Code&$top=1"
Accept: application/json
ContentType: application/json
Authorization: "Bearer access_token"
where access_token looks like:
"stampNL001.gAAAAGivCOkntSKiT0xYatuOkLEkbA0cCcPAbdDZGctQSAHRuaJ1KfvMY1QjnKWLM4BnRNRh8Vpg9H-3ISW6Vs1Xr0EXjHxgxH1o-n4BJAySMw1tCF-v9heoQ_vQjS2zz8SZtYj1OT9U8DSJnvKzdd6dVKN90G3NA6k80EiS95wgxsVSBAIAAIAAAADO4MGzvH-iyio7XsXArprV_ey-zH9H-NPT2n4CBbjlIJ8gIkjLFvXrcJrZ2lwUBFOrgaHQwfU8dvmnSyRRzlZEe9wSfcpX16BPB7tZzrR_mdQozAtgWVxtIdzxUIHlqaFk0BNhOIfMdDxnagivTdo3HNdTVg9N8K0lx-TX4aNeeoRgzMho46Z1ix1te6rJ8_GjJeAjl7iyVDYqoK_D2Zlaa6cIYNillNlaOYxV2e95tcKoMLPRKUx3ULBtht_joijvA8raWhNBxHiJZQsIyCbTCJuC-dARqicrbdOqNkv769oRgnhLokWHt44dLpwQJ990eWqj1R6ppmF-W5s6d5EpQsLqkFSiPtpIHkao3D4Yxv6BCD8bhsjfjwAiISyyIPt7GbVv4OPZ7dDTMBZbWJBX2JLPWsxiPqb1Y1dOUPMxfFty9mM22qBXq8VA3EyA96-JwNqgIy4eP5hbXmeEU-BOxnF4vp_dZEZU-iM5fV-uYjZYduVtMNBHW-ubQZ811_rv1trx0TP7eEz8dbcfNlB0uAcb6NR-5tC2qwV0wb59qOjO2HQhb0TKGslPjefjwyhNK4ZVSWL0Cr_1KzxpKjA1suY12gBv_J6vQ4js3dlW1MxwypJaUzMMBvtGPqS2N3zcLvrMth1wiB7IjxfA5jd3hRo5_F3iCLTeDtLxToKpNA"
The same code (same input) worked two weeks ago.
What am I doing wrong? Thanks.
The answer is in the response of this endpoint if you query it without $filter. Look at the following screenshot, https://imgur.com/X4ufb94, which shows your endpoint call: it returns a 400 Bad Request and states the answer to your problem: $filter is required for this endpoint.
Now look at this screenshot with $filter added, https://imgur.com/c7fiGTx: it returns data without errors. I don't have any logistics data to show, but the response comes back empty and without an error.
More in depth:
It looks like $filter is now actively enforced as a required addition on certain endpoints. It has been documented as 'required' for over a year, but only recently did they start to actively block queries without the $filter parameter (speaking from experience).
From the release notes of August 2021:

Mandatory filtering for properties on 14 REST API endpoints

To help keep API traffic in Exact Online running efficiently, we have made filtering mandatory for several properties within 12 API endpoints. Filtering helps ensure that only relevant data is retrieved when you make API calls, so you don’t have to work with a large amount of data.
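As a minimal sketch of the fix, the request from the question just needs a $filter clause appended to the query string. The Code value below is a hypothetical example; substitute a real filter expression:

// Same request as in the question, with the now-mandatory $filter added.
// 'ITEM-001' is a made-up item code.
using System;
using System.IO;
using System.Net;

class ExactOnlineFilterSketch
{
    static void Main()
    {
        string filter = Uri.EscapeDataString("Code eq 'ITEM-001'");
        string url = "https://start.exactonline.nl/api/v1/3175257/Logistics/Items"
                   + "?$select=Code&$top=1&$filter=" + filter;

        var request = (HttpWebRequest)WebRequest.Create(url);
        request.Method = "GET";
        request.Accept = "application/json";
        request.ContentType = "application/json";
        request.Headers["Authorization"] = "Bearer access_token"; // your real token

        using var response = (HttpWebResponse)request.GetResponse();
        using var reader = new StreamReader(response.GetResponseStream());
        Console.WriteLine(reader.ReadToEnd());
    }
}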
PS. I post this as a new answer since my original answer was swiftly hidden and subsequently deleted (within an hour) by several mods. Corrections/changes to that answer, on the other hand, are not reviewed as expediently; I'm still waiting a day later. Since I think my answer is correct (I recently had to deal with the same issue in code that had been running just fine for many months), I post it as a separate, new answer for anyone looking for a real answer to the same issue.

How does this webpage's data access work?

I'm trying to get data from this site: [1] https://www.eurobet.it/it/scommesse/#!/calcio/?temporalFilter=TEMPORAL_FILTER_OGGI_DOMANI
I found this link where I can get the data in JSON format: [2] https://www.eurobet.it/detail-service/sport-schedule/services/discipline/calcio?prematch=1&live=0&temporalFilter=TEMPORAL_FILTER_OGGI_DOMANI
But there is a problem:
The JSON link doesn't work every time; in fact, sometimes I get a 404 error.
I noticed that if I open the first link [1] before opening the second [2] it works perfectly.
This error is also more frequent when I try to scrape other data on the same site: [3] https://www.eurobet.it/detail-service/sport-schedule/services/discipline/calcio/piu-giocate/u-o-goal?prematch=1&live=0&temporalFilter=TEMPORAL_FILTER_OGGI_DOMANI
With link [3] I try to get all the "u-o-goal" odds, but it works only if (before starting my program to scrape the data) I press the "U/O GOAL" button on the main page [1] -> https://i.stack.imgur.com/Nei5u.png
In my code, I'm using Java and htmlunit to scrape the data.
My question is: how does this webpage work? Why can't I open links [2]/[3] directly? I know there is some sort of request-and-approval system behind it, but I can't see where.
You cannot directly open these URLs since the website (and many like it) uses cookies and bot-prevention/session-tracking techniques so it can gather data about usage of the website, e.g. it expects a "Referer" header to be set.
I'm not going to code a solution for you but I can at least help you understand what you need to do to get to where you want...
I've attempted to summarise how I'd typically unpick a request like this in order to recreate it, but in essence you need to understand the sequence of HTTP requests being made (this is how the web works: HTTP requests).
First you typically start with no session cookies and you access the site directly (no referer).
Once you access a website, typically the server responds with a session cookie for you to communicate back to the server a unique session ID so it has some sort of record of your browser having already been in contact.
Your browser may make more requests (asynchronously), and in doing so it typically sends the cookies and the referring URL (usually the base URL will work; just don't use something that starts with anything other than "https://www.eurobet.it").
Anything else you're going to need to figure out yourself. Lots of headers are optional; lots of query params have defaults.
https://stackoverflow.com/a/64671815/7619034 - here's an answer I've given before that answers this type of question which comes up often enough.
So, to explain a bit further for your specific scenario...
When you access https://www.eurobet.it/it/scommesse/#!/calcio/?temporalFilter=TEMPORAL_FILTER_OGGI_DOMANI, the server responds with HTTP headers:
...
set-cookie: __cfduid=dd38d***********41125; ...
...
The rest doesn't look that relevant:
Going straight to the other request: https://www.eurobet.it/detail-service/sport-schedule/services/discipline/calcio?prematch=1&live=0&temporalFilter=TEMPORAL_FILTER_OGGI_DOMANI
This HTTP request takes (as input):
cookie: __cfduid=dd38d***********41125; mbox=session#6661556c.....b6e8cc1fa6f03#1608242987; at_check=true; s_ecid=MCMID%***********2021453010; AMCVS_45F10C3A53DAEC9F0A490D4D%40AdobeOrg=1; AMCV_45F10C3A53DAEC9F0A490D4D%40AdobeOrg=1075005958%7CMCIDTS%7C18614%7CMCMID%7C91883906030825914429183258312021453010%7CMCAID%7CNONE%7CMCOPTOUT-1608248327s%7CNONE%7CvVersion%7C4.4.1; s_cc=true
...
referer: https://www.eurobet.it/it/scommesse/
...
x-eb-accept-language: it_IT
x-eb-marketid: 5
x-eb-platformid: 1
Cookies are (typically) set in an initial request using the Set-Cookie header and then passed back to the server in subsequent requests using the cookie header.
I'm not certain how many of these values are relevant but you'd need to figure out where each came from in the chain of HTTP requests between the initial one and this one and you'd need to replicate them (see url above of my previous answer - warning this can be time consuming).
The other headers can be set statically most likely since they probably aren't due to change.
If you have access to curl on the command line, you can attempt to reconstruct some of these requests by hand. Some will be time sensitive since cookies do expire after an amount of time (see set-cookie header details for exactly when). Once you've reconstructed a working request, you can then start coding it in your application.
If you can work all this out you should be able to re-construct the chain of HTTP GET requests to get the JSON data you want. Good luck!
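If it helps, here is a hedged sketch of that two-step chain, written in C# for brevity even though the question uses Java/htmlunit; the x-eb-* header values are copied from the captured request above and may not all be required:

// Sketch: hit the main page first so the server issues its session cookies,
// then call the JSON endpoint with the same cookie jar plus a Referer.
using System;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

class EurobetChainSketch
{
    static async Task Main()
    {
        var cookies = new CookieContainer();
        using var handler = new HttpClientHandler { CookieContainer = cookies };
        using var http = new HttpClient(handler);

        // Step 1: load the main page; Set-Cookie responses land in the container.
        await http.GetAsync("https://www.eurobet.it/it/scommesse/");

        // Step 2: call the JSON endpoint with the same cookies and a Referer.
        var request = new HttpRequestMessage(HttpMethod.Get,
            "https://www.eurobet.it/detail-service/sport-schedule/services/" +
            "discipline/calcio?prematch=1&live=0&temporalFilter=TEMPORAL_FILTER_OGGI_DOMANI");
        request.Headers.Referrer = new Uri("https://www.eurobet.it/it/scommesse/");
        request.Headers.Add("x-eb-accept-language", "it_IT"); // captured above;
        request.Headers.Add("x-eb-marketid", "5");            // may be optional
        request.Headers.Add("x-eb-platformid", "1");

        var response = await http.SendAsync(request);
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}

In htmlunit the same effect usually comes for free, since its WebClient keeps a cookie jar across requests; the key point is to hit [1] first and reuse the same client for [2]/[3].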

Creating scene definition against beta server gets 422 error

I'm trying to use the Unity AR/VR Toolkit with an SVF file I've created by following the test-2legged script. My understanding from this answer is that the script needs to be updated to use the new server (https://developer-api-beta.autodesk.io) and a URL-safe-encoded URN everywhere. I've done that, but when I try to create the scene definition (PUT /arkit/v1/${urn}/scenes/${scene}) I get a 422 with a message of "must be a valid Bearer token for the requested resource (TK1-003)" (I've tried giving the token all scopes listed in the Forge docs).
This works fine with the server in the unmodified test-2legged script. I'm operating on a file that's already in Forge, so I'm not including the bucket, and I've experimented with including and not including the object ID; the default server (https://developer-api.autodesk.io) works fine with just a URN, but the beta server I can't get to work no matter what I try.
I don't see any documentation for this endpoint, so I'm not sure if its use changed between the servers. As far as I can tell, I can't get SVF files created with the toolkit against the non-beta server (I get 404s for the meshes), so I'm assuming I should be on the beta server, but I can't get it to work.
I took a look at the source code, and TK1-003 means that the Bearer token was either invalid, expired, or missing the data:create and data:write scopes required for this operation. Note that if you call an API with an invalid token, the server bans you for a couple of minutes.
If you're still having issues, please let me know and I'll remote-assist you.
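For reference, a hedged sketch of requesting a 2-legged token that carries those scopes, assuming the legacy v1 authenticate endpoint that the test-2legged script used at the time (client id/secret are placeholders):

// Sketch: request a 2-legged token with the data:create and data:write
// scopes mentioned above. Endpoint version is an assumption; adjust if
// your app targets a newer auth version.
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

class ForgeTokenSketch
{
    static async Task Main()
    {
        using var http = new HttpClient();
        var form = new FormUrlEncodedContent(new Dictionary<string, string>
        {
            ["client_id"] = "your-client-id",         // placeholder
            ["client_secret"] = "your-client-secret", // placeholder
            ["grant_type"] = "client_credentials",
            ["scope"] = "data:read data:create data:write"
        });
        var response = await http.PostAsync(
            "https://developer.api.autodesk.com/authentication/v1/authenticate",
            form);
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}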

Checking for new data on server

I have an application that downloads data via NSURLConnection in the form of a JSON object; it then displays the data to the user. As new data may be created on the server at any point, what is the best way to detect this and download the new data?
At the moment I am planning to have the application download all the data every 30-40 seconds and then check the downloaded data against the current data: if it is the same, do nothing; if it is different, proceed with the alterations. However, this seems a bit unnecessary, especially as the data may not change for a while. Is there a more efficient way of updating the application data when new server data is created?
Use ETag if the server supports it.
Wikipedia ETag
"If the resource content at that URL ever changes, a new and different ETag is assigned."
You could send an HTTP HEAD request to the server with the "If-Modified-Since" header set to the time you received the last version. If the server handles this correctly, it should return 304 (Not Modified) while the file is unchanged; as soon as it doesn't return that, you GET the file and proceed as usual.
See HTTP/1.1: Header Field Definitions
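A hypothetical exchange (host, path, and dates are invented for illustration):

HEAD /latest.json HTTP/1.1
Host: api.example.com
If-Modified-Since: Sat, 29 Oct 2022 19:43:31 GMT

HTTP/1.1 304 Not Modified

Once the server-side data changes, the same request returns 200 with a fresh Last-Modified header, which you store for the next poll; only then do you issue the full GET.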

webRequest API not capturing all page requests from application

I am trying to download JSON data from a web application. The URL/API is static, and I can use it to call the webpage that returns the data. A session-variable parameter, created when you launch the application (it times out if the application is not actively used), needs to be added to the URL/API call to connect to the server and download the JSON data. My current process is to open the developer tools, launch the web application, and when the specific JSON request is made, copy the parameter value and add it to a script that mimics the page request and downloads the JSON data.
I am trying to avoid manually copying and pasting this session-variable parameter. I want to automatically capture the web request, parse out the value I need, set a cookie on my machine, and then pick up the cookie with a PHP script to initiate the JSON data download with the valid session value.
I have looked into creating a Chrome extension using the chrome.webRequest listeners (onResponseStarted/onCompleted) with the following code:
chrome.webRequest.onCompleted.addListener(function (details) {
    console.log(details);
    chrome.cookies.set(
        { url: "http://localhost/MySite/", name: "MyCookie", value: "Tested" }
    );
}, { urls: ["<all_urls>"] });
This code works for the main web requests, but it doesn't pick up all the JSON data requests made by the application. The application is in SWF (Flash) format, which is most likely the problem, but I can see the requests in the Network panel of the Developer Tools, and they are captured by chrome://net-internals, which leads me to believe that I should be able to capture them somehow.
I have looked into chrome.devtools.network but I cannot seem to figure out how that is supposed to work. Any advice or direction would be greatly appreciated.