MapMyFitness i/o usage - json

(This post may cost me some reputation, but anyway.)
There's a portal for sharing sport activities - MapMyFitness.
And here's their API.
I want to test POST Workout in the I/O Docs. The fields are:
activity_type: /v7.0/activity_type/16/
aggregates: test
name: Run / Jog
privacy: /v7.0/privacy_option/3/
start_datetime: Sat, 14 Dec 2013 12:22:43 GMT
start_locale_timezone: US/Central
But I still get the following error:
"error_message": "Could not deserialize body as given content type"
What am I doing wrong?
P.S. Unfortunately, I couldn't find any community or active forum for help.

I am missing some of the information required to fully answer your question, but I have a hunch that you are requesting a content-type that is not 'application/json'. The workout API only supports JSON responses.
If you look again at the I/O Docs for POST Workout, you'll see the Content-type field is filled in with application/json by default.
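If you are calling the API from code rather than through I/O Docs, the same rule applies: send the body as JSON and set Content-Type: application/json. Here is a minimal C# sketch under that assumption (the endpoint URL below is a placeholder and the field values are copied from the question; check the API reference for the exact resource path and body schema):

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class PostWorkoutSketch
{
    static async Task Main()
    {
        using var client = new HttpClient();
        // Authentication headers (API key / OAuth token) omitted; add whatever the API requires.

        // Field values copied from the question; verify the body schema against the docs.
        var json = "{"
            + "\"name\": \"Run / Jog\","
            + "\"activity_type\": \"/v7.0/activity_type/16/\","
            + "\"privacy\": \"/v7.0/privacy_option/3/\","
            + "\"start_datetime\": \"Sat, 14 Dec 2013 12:22:43 GMT\","
            + "\"start_locale_timezone\": \"US/Central\""
            + "}";

        // StringContent sets the Content-Type header; application/json is what the deserializer expects.
        var content = new StringContent(json, Encoding.UTF8, "application/json");

        // Placeholder URL - substitute the real POST Workout endpoint from the API docs.
        var response = await client.PostAsync("https://api.example.com/v7.0/workout/", content);

        Console.WriteLine((int)response.StatusCode);
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}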

Related

ExactOnline: The remote server returned an error: (400) Bad Request

I'm trying to connect to the ExactOnline server using HttpWebRequest from C#. When I try to get the response, I get an exception: "The remote server returned an error: (400) Bad Request".
The Web Request looks like:
Method: GET
Address: "https://start.exactonline.nl/api/v1/3175257/Logistics/Items?$select=Code&$top=1"
Accept: application/json
ContentType: application/json
Authorization: "Bearer access_token"
where access_token looks like:
"stampNL001.gAAAAGivCOkntSKiT0xYatuOkLEkbA0cCcPAbdDZGctQSAHRuaJ1KfvMY1QjnKWLM4BnRNRh8Vpg9H-3ISW6Vs1Xr0EXjHxgxH1o-n4BJAySMw1tCF-v9heoQ_vQjS2zz8SZtYj1OT9U8DSJnvKzdd6dVKN90G3NA6k80EiS95wgxsVSBAIAAIAAAADO4MGzvH-iyio7XsXArprV_ey-zH9H-NPT2n4CBbjlIJ8gIkjLFvXrcJrZ2lwUBFOrgaHQwfU8dvmnSyRRzlZEe9wSfcpX16BPB7tZzrR_mdQozAtgWVxtIdzxUIHlqaFk0BNhOIfMdDxnagivTdo3HNdTVg9N8K0lx-TX4aNeeoRgzMho46Z1ix1te6rJ8_GjJeAjl7iyVDYqoK_D2Zlaa6cIYNillNlaOYxV2e95tcKoMLPRKUx3ULBtht_joijvA8raWhNBxHiJZQsIyCbTCJuC-dARqicrbdOqNkv769oRgnhLokWHt44dLpwQJ990eWqj1R6ppmF-W5s6d5EpQsLqkFSiPtpIHkao3D4Yxv6BCD8bhsjfjwAiISyyIPt7GbVv4OPZ7dDTMBZbWJBX2JLPWsxiPqb1Y1dOUPMxfFty9mM22qBXq8VA3EyA96-JwNqgIy4eP5hbXmeEU-BOxnF4vp_dZEZU-iM5fV-uYjZYduVtMNBHW-ubQZ811_rv1trx0TP7eEz8dbcfNlB0uAcb6NR-5tC2qwV0wb59qOjO2HQhb0TKGslPjefjwyhNK4ZVSWL0Cr_1KzxpKjA1suY12gBv_J6vQ4js3dlW1MxwypJaUzMMBvtGPqS2N3zcLvrMth1wiB7IjxfA5jd3hRo5_F3iCLTeDtLxToKpNA"
The same code (same input) worked two weeks ago.
What am I doing wrong? Thanks.
The answer is in the response from this endpoint if you query it without $filter. Look at the following screenshot: https://imgur.com/X4ufb94. It shows your endpoint call; it returns a 400 Bad Request error and states the answer to your problem: $filter is required for this endpoint.
Now look at this screenshot with $filter added: https://imgur.com/c7fiGTx. It returns data without errors. I don't have any Logistics data to show, but the response says exactly that, and without any error.
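For example, in the HttpWebRequest code from the question, adding a $filter would look roughly like this (the filter expression and the item code 'ITEM001' are hypothetical; use a property and value that fit your data):

using System;
using System.IO;
using System.Net;

class ExactOnlineFilterSketch
{
    static void Main()
    {
        var accessToken = "...";  // your bearer token

        // Same request as in the question, with a (hypothetical) $filter added.
        var url = "https://start.exactonline.nl/api/v1/3175257/Logistics/Items"
                + "?$filter=" + Uri.EscapeDataString("Code eq 'ITEM001'")
                + "&$select=Code&$top=1";

        var request = (HttpWebRequest)WebRequest.Create(url);
        request.Method = "GET";
        request.Accept = "application/json";
        request.Headers["Authorization"] = "Bearer " + accessToken;

        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            Console.WriteLine(reader.ReadToEnd());
        }
    }
}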
More in depth:
It looks like $filter has become an enforced requirement on certain endpoints. It had been documented as 'required' for over a year, but only recently did they start actively blocking queries without the $filter parameter (speaking from experience).
From the release notes from august 2021:
Mandatory filtering for properties on 14 REST API endpoints

To help keep API traffic in Exact Online running efficiently, we have made filtering mandatory for several properties within 12 API endpoints. Filtering helps ensure that only relevant data is retrieved when you make API calls, so you don’t have to work with a large amount of data.
PS. I post this as a new answer since my original answer was swiftly hidden and subsequently deleted (within an hour) by several mods. Corrections/changes to that answer, on the other hand, are not reviewed as expediently; I'm still waiting a day later. Since I think my answer is correct (I recently had to deal with the same issue in code that had been running fine for many months), I'm posting it as a separate, new answer for anyone looking for a real answer to the same issue.

ARPC Verification Failure on POS

We are successfully processing transactions and verifying ARQC data using the KW command on a Thales 9000 HSM; however, the POS is failing to verify the ARPC, with ISO error code Z1.
Below is our response data. We have been doing some research online and consulting industry experts, but with no luck.
Tag 8A - 00
Tag 91 - FEA27497000000000000000000000000
Tag 9F36 - 006C
Any help is greatly appreciated.
The ARPC is a cryptographically calculated value. What you have here seems too low in entropy to be the result of such a calculation.
Are you sure you have the request to and the response from the HSM right? It does not seem like it. You might be interpreting the response incorrectly; it would help to add a log showing what you sent to and received from the HSM.

Stale data from Microsoft Graph and Excel API

We're using the Microsoft Graph .NET Client Library to send requests to the Excel API in order to read or write to Excel files in Office365. We have noticed that the data that we get back from the API is sometimes stale.
For instance, if we add a row to an Excel file, and then immediately read all rows from the same file, even if the add request succeeds, the row will still be missing from the data that we read back. If we wait for a few seconds, the row will show up. This problem does not reproduce consistently, and the delay time varies from less than a second to sometimes tens of seconds. The same problem occurs in update or delete operations as well.
Based on this, we speculate that behind the API, data takes a significant amount of time to propagate across all of Microsoft's servers, and if our requests are not always routed to the same server, we will occasionally hit a server that does not have the latest data.
Could someone who is working on either the Microsoft Graph API or the Excel API verify this guess? We understand that as Microsoft transitions from shipping packaged software to building cloud services, there will be problems and challenges, so we don't expect an immediate solution. However, since our business depends greatly on this API, if there is a known problem, please let us know so that at least we can try to find a workaround on our end.
Any response would be greatly appreciated. Thank you in advance.
Please check
https://dev.office.com/blogs/power-your-apps-with-the-new-excel-rest-api
Copied from the above URL:
Note: Any request that modifies the workbook should be performed in a persisted session. Find more details on how to create a persisted session in our documentation.
Create a persisted session
POST .../workbook/CreateSession
content-type: Application/Json
authorization: Bearer {access-token}
{ "persistChanges": true }
Response
HTTP code: 201, Created
content-type: application/json;odata.metadata
{ "#odata.context": "https://graph.microsoft.com/v1.0/$metadata#microsoft.graph.sessionInfo", "id": "{session-id}", "persistChanges": true}
Usage
The session ID returned from the CreateSession call is then passed as a header on subsequent API requests, using the workbook-session-id HTTP header.
GET .../workbook/Worksheets
authorization: Bearer {access-token}
workbook-session-id: {session-id}
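For reference, here is a rough C# sketch of that flow using HttpClient directly, so it maps one-to-one onto the requests quoted above. The drive-item path is a placeholder (adjust it to however you address the workbook); only the createSession call, the persistChanges body and the workbook-session-id header come from the documentation:

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;

class PersistedSessionSketch
{
    static async Task Main()
    {
        var accessToken = "{access-token}";   // acquired via your usual auth flow
        // Placeholder workbook URL - replace {item-id} with the id of your Excel file.
        var workbookUrl = "https://graph.microsoft.com/v1.0/me/drive/items/{item-id}/workbook";

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);

        // 1. Create a persisted session so that writes are committed before subsequent reads.
        var body = new StringContent("{ \"persistChanges\": true }", Encoding.UTF8, "application/json");
        var createResponse = await client.PostAsync(workbookUrl + "/createSession", body);
        createResponse.EnsureSuccessStatusCode();

        using var doc = JsonDocument.Parse(await createResponse.Content.ReadAsStringAsync());
        var sessionId = doc.RootElement.GetProperty("id").GetString();

        // 2. Send the session id on every subsequent workbook request.
        var request = new HttpRequestMessage(HttpMethod.Get, workbookUrl + "/worksheets");
        request.Headers.Add("workbook-session-id", sessionId);

        var worksheetsResponse = await client.SendAsync(request);
        Console.WriteLine(await worksheetsResponse.Content.ReadAsStringAsync());
    }
}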

Comprehensive CEP (Proton) REST API documentation

I've searched the repo + FiWare Wikis and was unable to find any detailed API documentation.
I saw this: http://forge.fiware.org/plugins/mediawiki/wiki/fiware/index.php/Complex_Event_Processing_Open_RESTful_API_Specification
I'm running a CEP instance on Fiware Cloud, and I keep getting 500s and 405s for the calls I'm trying.
That spec, however, often references the user guide for more details on each endpoint parameter. Is there a more recent version?
The last release was more than a year ago, according to that spec. Are the docs up to date with the latest API version?
Otherwise I'll have to reverse-engineer the API...
P.S.: The CEP instance is running at http://130.206.117.120:8080/
Let me know if there are some sanity checks I can make ;)
I'm also using CEP, and I haven't found any REST API specs newer than the doc you linked. However, I didn't have any problems accessing the resources.
You can try
GET: http://130.206.117.120:8080/ProtonOnWebServerAdmin/resources/instances/ProtonOnWebServer
It shows the state and definitions-url of the engine.
I also found it helpful to browse through examples from the engine, or from tutorials.
I saw that your instance uses the DoSAttack definition, so I tried POSTing an event to the engine: POST http://130.206.117.120:8080/ProtonOnWebServer/rest/events with header Content-Type: application/json and payload {"Name":"TrafficReport","volume":"22"}, and got 200 OK.
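In case it is easier to reproduce from code, here is a small C# sketch of that same POST (URL, header and payload exactly as above; nothing else is assumed):

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class ProtonEventSketch
{
    static async Task Main()
    {
        using var client = new HttpClient();

        // Same payload and Content-Type as in the manual test above.
        var payload = "{\"Name\":\"TrafficReport\",\"volume\":\"22\"}";
        var content = new StringContent(payload, Encoding.UTF8, "application/json");

        var response = await client.PostAsync(
            "http://130.206.117.120:8080/ProtonOnWebServer/rest/events", content);

        Console.WriteLine((int)response.StatusCode);  // expect 200 OK
    }
}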
For more complex tasks (apart from the REST API), this page lists the latest specs.
Hope it helps!

Chrome HEAD request?

Why does Chrome send a HEAD request? Example in logs:
2013-03-04 07:43:51 W3SVC7 NS1 GET /page.html 80 - *.*.*.* HTTP/1.1 Mozilla/5.0+(Windows+NT+6.1;+WOW64)+AppleWebKit/537.22+(KHTML,+like+Gecko)+Chrome/25.0.1364.97+Safari/537.22
2013-03-04 07:43:51 W3SVC7 NS1 HEAD / - 80 - *.*.*.* HTTP/1.1 Mozilla/5.0+(Windows+NT+6.1;+WOW64)+AppleWebKit/537.22+(KHTML,+like+Gecko)+Chrome/25.0.1364.97+Safari/537.22
I have a ban system, and this HEAD request is really annoying; it happens in exactly the same second as the GET request.
What is the nature of it? Any help is appreciated.
P.S.: I noticed that the HEAD requests all go only to my homepage.
RFC 2616 states:
9.4 HEAD
The HEAD method is identical to GET except that the server MUST NOT return a message-body in the response. The metainformation contained in the HTTP headers in response to a HEAD request SHOULD be identical to the information sent in response to a GET request. This method can be used for obtaining metainformation about the entity implied by the request without transferring the entity-body itself. This method is often used for testing hypertext links for validity, accessibility, and recent modification.
The response to a HEAD request MAY be cacheable in the sense that the information contained in the response MAY be used to update a previously cached entity from that resource. If the new field values indicate that the cached entity differs from the current entity (as would be indicated by a change in Content-Length, Content-MD5, ETag or Last-Modified), then the cache MUST treat the cache entry as stale.
Most likely it is trying to verify that the client's cookie/session is valid with the server.