How to return the full comments when using the VSTS Api to get a list of commits? - azure-pipelines-release-pipeline

I have a script that uses the VSTS API to generate a log of commits between two different versions; however, the response from the VSTS API returns the comments truncated. This is the most important part of the log for my purposes, so I would like to receive the comments in full from the API.
The response comes back with each commit's comment cut short (and commentTruncated set to true).
I could obviously loop through each commit ID and get the full comment by calling the API for each of these IDs, but as this script will be part of our release process I want it to be as fast as possible, and all those extra calls would add unnecessary time.
From what I can see there is no way to get the full comments when getting the list of commits, but I'm hoping someone out there can help me with this?
Thanks

Use API 2.0 if you can; there is no 'commentTruncated' anymore.
All the messages are returned in full.
The longest commit message I came across was 770 characters, 105 words across 13 lines.
Finding the repository id: Get a list of repositories.
Then: Get a list of commits, by repository id.
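For illustration, a rough Python sketch of those two calls using the requests library; the account name, repository name, and response field names are assumptions to be checked against the API 2.0 reference:
import requests

account = "myaccount"              # placeholder VSTS account name
token = "personal-access-token"    # placeholder personal access token
base = f"https://{account}.visualstudio.com/DefaultCollection/_apis/git"
auth = ("", token)                 # a PAT is sent as the password with an empty username

# Step 1: list repositories and pick the one we want by name.
repos = requests.get(f"{base}/repositories",
                     params={"api-version": "2.0"}, auth=auth).json()
repo_id = next(r["id"] for r in repos["value"] if r["name"] == "MyRepo")

# Step 2: list the commits for that repository.
commits = requests.get(f"{base}/repositories/{repo_id}/commits",
                       params={"api-version": "2.0"}, auth=auth).json()
for c in commits["value"]:
    print(c["commitId"], c["comment"])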

Adding the max comment length parameter to some high value did it for me:
_apis/tfvc/changesets?maxCommentLength=1000
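The same parameter can be set from a script; a minimal sketch, where the account name and token are placeholders and the 1000-character limit is just the value from the answer above:
import requests

account = "myaccount"              # placeholder
token = "personal-access-token"    # placeholder
url = f"https://{account}.visualstudio.com/DefaultCollection/_apis/tfvc/changesets"
resp = requests.get(url,
                    params={"maxCommentLength": 1000, "api-version": "2.0"},
                    auth=("", token))
for cs in resp.json()["value"]:
    print(cs["changesetId"], cs["comment"])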

Related

how does google-cloud-function generate function-execution-id?

A Cloud Function triggered by an HTTP request has a corresponding function-execution-id for each calling request (in the request and response headers). It is used for tracing and viewing the log of a specific request in Stackdriver Logging. In my case, it is a string of 12 characters. When I continuously make HTTP requests to a cloud function and look at the function-execution-id, I get the results below:
j8dorcyxyrwb
j8do4wolg4i3
j8do8bxu260m
j8do2xhqmr3s
j8dozkdlrjzp
j8doitxtpt29
j8dow25ri4on
On each line, the first 4 characters are the same ("j8do") but the rest are different, so I wonder what the structure of the function-execution-id is.
How was it generated?
The execution id is opaque, meaning that it doesn't contain any useful data. It is just a unique ID. How it was generated should not be of any issue to you, the consumer. From examination, it looks like it might be some time-based value similar to UUIDv1, but any code that you write that consumes these IDs should make no assumptions about how they were generated.
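Since the question notes that the id is surfaced in the request and response headers, consuming code only needs to read it and pass it along, never parse it. A minimal sketch for a Python HTTP function; the header name used here is an assumption based on the question's description, so verify it against the headers your function actually receives:
def handle(request):
    # Read the execution id as an opaque string and log it for correlation.
    execution_id = request.headers.get("Function-Execution-Id", "unknown")
    print(f"handling request, execution_id={execution_id}")  # shows up in the function's logs
    return f"ok ({execution_id})"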

How to find how many json endpoints an api has

I’m in the middle of making an Express app. It’s just a learning project.
I’m getting some info from an anime API called jikan.me; it provides info about different anime series, like a picture URL and a synopsis.
For example, one is at https://api.jikan.me/anime/16.
Now, the jikan api might have a json endpoint at anime/1 but there's nothing at anime/2.
I want to find a list of all the numbers (https://api.jikan.me/anime/[numbers]) that actually contain endpoints.
I've tried simply going to https://api.jikan.me/anime but it returns error: No ID/Path Given.
I'm expecting there is likely no absolute answer to this problem but that I might learn something about server-side code along the way.
Where would I begin to look to find this info?
This is a bit late, but Jikan is an unofficial REST API for MyAnimeList. The IDs correspond to the IDs on MAL. For example, https://myanimelist.net/anime/1 can be parsed through https://api.jikan.moe/anime/1, but the ID 2 does not exist on MAL. It's a 404, hence that error.
To initially get some IDs, you can try the search endpoint.
Furthermore, I'll be releasing REST 2.2 quite soon (this month) which will give you the ability to parse from pages like these and thus you'll get another endpoint that provides a handful of IDs to get their data from.
Source: I'm the developer of Jikan
If it's not in the documentation, it's probably information that isn't available to you. A REST API needs to be specifically configured to offer certain endpoints; that number at the end might just be an ID that's looked up in an internal database, and there's no way for the application to know whether there's going to be something there. All it can do is return an error message for you to handle, as is the case here.
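In practice, then, a client that wants to know which IDs exist can simply probe them and treat a 404 as "no entry for this ID". A rough sketch, using the api.jikan.moe host quoted above (adjust the path to whatever the current Jikan version expects, and mind the API's rate limits):
import time
import requests

existing = []
for anime_id in range(1, 21):                      # small illustrative range
    resp = requests.get(f"https://api.jikan.moe/anime/{anime_id}")
    if resp.status_code != 404:                    # 404 just means no entry with this ID
        resp.raise_for_status()
        existing.append(anime_id)
    time.sleep(1)                                  # be polite to the public API

print(existing)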

How to retrieve git commit history for specific time period with cURL?

I am trying to retrieve commit history as JSON and output in a txt file.
curl https://api.github.com/repos/username/repo/commits > commitHistory.txt
With the above curl command, I only get the first page of the commit history. I would like to retrieve the full commit history, or maybe set a date range when doing it. How should I do it?
You can use since and until parameters to get commits only for specific time period:
curl 'https://api.github.com/repos/username/repo/commits?since=2016-11-01T00:00:00Z&until=2016-11-01T23:59:59Z'
For details, see the API doc.
Responses from the GitHub API are automatically paginated for large result sets, so you'll need to inspect the Link header and make further requests as long as there are more results. The API documentation offers more information:
Requests that return multiple items will be paginated to 30 items by default. You can specify further pages with the ?page parameter. For some resources, you can also set a custom page size up to 100 with the ?per_page parameter. Note that for technical reasons not all endpoints respect the ?per_page parameter, see events for example.
curl 'https://api.github.com/user/repos?page=2&per_page=100'
Note that page numbering is 1-based and that omitting the ?page parameter will return the first page.
For more information on pagination, check out our guide on Traversing with Pagination.
You could also do this using a Python library like github3.py (or equivalent), which will handle pagination for you.
In terms of a specific date range, philipjkim's answer is correct: use the since and until parameters.
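Putting the two answers together, a short Python sketch that restricts the range with since/until and then follows the Link header until there are no more pages (the repository and dates are placeholders):
import requests

url = "https://api.github.com/repos/username/repo/commits"
params = {"since": "2016-11-01T00:00:00Z",
          "until": "2016-11-30T23:59:59Z",
          "per_page": 100}
commits = []
while url:
    resp = requests.get(url, params=params)
    resp.raise_for_status()
    commits.extend(resp.json())
    # requests exposes the parsed Link header as resp.links; 'next' is absent on the last page.
    url = resp.links.get("next", {}).get("url")
    params = None  # the 'next' URL already carries every query parameter

print(len(commits), "commits in the date range")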

REST: Updating Multiple Resources With One Request - Is it standard or to be avoided?

A simple REST API:
GET: items/{id} - Returns a description of the item with the given id
PUT: items/{id} - Updates or Creates the item with the given id
DELETE: items/{id} - Deletes the item with the given id
Now the API-extension in question:
GET: items?filter - Returns all item ids matching the filter
PUT: items - Updates or creates a set of items as described by the JSON payload
[[DELETE: items - deletes a list of items described by JSON payload]] <- Not Correct
I am now interested in the collection-level DELETE and PUT operations recycling the functionality that can already be accessed via PUT/DELETE items/{id}.
Question: Is it common to provide an API like this?
Alternative: In the age of a single connection carrying multiple requests, issuing multiple requests is cheap and would be more atomic, since each change either succeeds or fails. But in the age of NoSQL databases, a change to the list might already have happened even if the request processing dies with an internal server error or for whatever other reason.
[UPDATE]
After considering the White House Web Standards and Wikipedia: REST Examples, the following example API is now proposed:
A simple REST API:
GET: items/{id} - Returns a description of the item with the given id
PUT: items/{id} - Updates or Creates the item with the given id
DELETE: items/{id} - Deletes the item with the given id
Top-resource API:
GET: items?filter - Returns all item ids matching the filter
POST: items - Updates or creates a set of items as described by the JSON payload
PUT and DELETE on /items is not supported and forbidden.
Using POST seems to do the trick, as it is the method for creating new items in an enclosing resource, appending rather than replacing.
HTTP Semantics POST Reads:
Extending a database through an append operation
Whereas the PUT method would require replacing the complete collection in order to return an equivalent representation, as quoted by HTTP Semantics PUT:
A successful PUT of a given representation would suggest that a subsequent GET on that same target resource will result in an equivalent representation being returned in a 200 (OK) response.
[UPDATE2]
An alternative that seems even more consistent for the update aspect of multiple objects is the PATCH method. The difference between PUT and PATCH is described in RFC 5789 as follows:
The difference between the PUT and PATCH requests is reflected in the way the server processes the enclosed entity to modify the resource identified by the Request-URI. In a PUT request, the enclosed entity is considered to be a modified version of the resource stored on the origin server, and the client is requesting that the stored version be replaced. With PATCH, however, the enclosed entity contains a set of instructions describing how a resource currently residing on the origin server should be modified to produce a new version. The PATCH method affects the resource identified by the Request-URI, and it also MAY have side effects on other resources; i.e., new resources may be created, or existing ones modified, by the application of a PATCH.
So compared to POST, PATCH may also be a better idea, since PATCH allows an update, whereas POST only allows appending, i.e. adding without the chance of modification.
So POST seems to be wrong here and we need to change our proposed API to:
A simple REST API:
GET: items/{id} - Returns a description of the item with the given id
PUT: items/{id} - Updates or Creates the item with the given id
DELETE: items/{id} - Deletes the item with the given id
Top-resource API:
GET: items?filter - Returns all item ids matching the filter
POST: items - Creates one or more items as described by the JSON payload
PATCH: items - Creates or Updates one or more items as described by the JSON payload
You could PATCH the collection, e.g.
PATCH /items
[ { id: 1, name: 'foo' }, { id: 2, name: 'bar' } ]
Technically PATCH would identify the record in the URL (i.e. PATCH /items/1) and not in the request body, but this seems like a good pragmatic solution.
Supporting deleting, creating, and updating in a single call is not really covered by standard REST conventions. One possibility is a special "batch" service that lets you assemble calls together:
POST /batch
[
{ method: 'POST', path: '/items', body: { title: 'foo' } },
{ method: 'DELETE', path: '/items/bar' }
]
which returns a response with a status code for each embedded request:
[ 200, 403 ]
Not really standard, but I've done it and it works.
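For illustration, here is a minimal sketch of such a batch endpoint in Python/Flask. The in-memory store, the routes, and the payload shape are made up to mirror the request/response pair above, not taken from any standard:
from flask import Flask, request, jsonify

app = Flask(__name__)
items = {}  # hypothetical in-memory store keyed by title

@app.post("/items")
def create_item():
    body = request.get_json()
    items[body["title"]] = body
    return jsonify(body), 201

@app.delete("/items/<title>")
def delete_item(title):
    return ("", 204) if items.pop(title, None) is not None else ("", 404)

@app.post("/batch")
def batch():
    # Dispatch each embedded call by hand and collect its status code,
    # mirroring the [200, 403]-style response above. A real implementation
    # would reuse the framework's routing and decide how atomic the batch should be.
    statuses = []
    for call in request.get_json():
        method, path = call["method"], call["path"]
        if method == "POST" and path == "/items":
            items[call["body"]["title"]] = call["body"]
            statuses.append(201)
        elif method == "DELETE" and path.startswith("/items/"):
            title = path.rsplit("/", 1)[1]
            statuses.append(204 if items.pop(title, None) is not None else 404)
        else:
            statuses.append(400)
    return jsonify(statuses)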
Updating Multiple Resources With One Request - Is it standard or to be avoided?
Well, sometimes you simply need to perform atomic batch operations or other resource-related operations that just do not fit the typical scheme of a simple REST API, but if you need it, you cannot avoid it.
Is it standard?
There is no universally accepted REST API standard so this question is hard to answer. But by looking at some commonly quoted api design guidelines, such as jsonapi.org, restfulapi.net, microsoft api design guide or IBM's REST API Conventions, which all do not mention batch operations, you can infer that such operations are not commonly understood as being a standard feature of REST APIs.
That said, an exception is the Google API design guide, which mentions the creation of "custom" methods that can be associated with a resource by using a colon, e.g. https://service.name/v1/some/resource/name:customVerb. It also explicitly mentions batch operations as a use case:
A custom method can be associated with a resource, a collection, or a service. It may take an arbitrary request and return an arbitrary response, and also supports streaming request and response. [...] Custom methods should use HTTP POST verb since it has the most flexible semantics [...] For performance critical methods, it may be useful to provide custom batch methods to reduce per-request overhead.
So in the example case you provided, you would do the following according to Google's API guide:
POST /api/items:batchUpdate
Also, some public APIs decided to offer a central /batch endpoint, e.g. google's gmail API.
Moreover, as mentioned at restfulapi.net, there is also the concept of a resource "store", in which you store and retrieve whole lists of items at once via PUT – however, this concept does not apply to server-managed resource collections:
A store is a client-managed resource repository. A store resource lets an API client put resources in, get them back out, and decide when to delete them. A store never generates new URIs. Instead, each stored resource has a URI that was chosen by a client when it was initially put into the store.
Having answered your original questions, here is another approach to your problem that has not been mentioned yet. Please note that this approach is a bit unconventional and does not look as pretty as the typical REST API endpoint naming scheme. I am personally not following this approach but I still felt it should be given a thought :)
The idea is that you could make a distinction between CRUD operations on a resource and other resource-related operations (e.g. batch operations) via your endpoint path naming scheme.
For example consider a RESTful API that allows you to perform CRUD operations on a "company"-resource and you also want to perform some "company"-related operations that do not fit in the resource-oriented CRUD-scheme typically associated with restful api's – such as the batch operations you mentioned.
Now instead of exposing your resources directly below /api/companies (e.g. /api/companies/22) you could distinguish between:
/api/companies/items – i.e. a collection of company resources
/api/companies/ops – i.e. operations related to company resources
For items the usual RESTful api http-methods and resource-url naming-schemes apply (e.g. as discussed here or here)
POST /api/companies/items
GET /api/companies/items
GET /api/companies/items/{id}
DELETE /api/companies/items/{id}
PUT /api/companies/items/{id}
Now for company-related operations you could use the /api/companies/ops/ route-prefix and call operations via POST.
POST /api/companies/ops/batch-update
POST /api/companies/ops/batch-delete
POST /api/companies/ops/garbage-collect-old-companies
POST /api/companies/ops/increase-some-timestamps-just-for-fun
POST /api/companies/ops/perform-some-other-action-on-companies-collection
Since POST requests do not have to result in the creation of a resource, POST is the right method to use here:
The action performed by the POST method might not result in a resource that can be identified by a URI.
https://www.rfc-editor.org/rfc/rfc2616#section-9.5
As far as I understand the REST concept, it covers updating multiple resources with one request. The trick here is to assume a container around those multiple resources and treat it as one single resource. E.g. you could just assume that a list of IDs identifies a resource that contains several other resources.
In those examples on Wikipedia they also talk about resources in the plural.

What are the common ways for deleting local/client objects using a REST API?

Is there a common design pattern for dispatching deleted objects to the requestor (the client of the API)?
Challenges we are having:
1. When an object is deleted on the API completely, the client will not know that the object is gone and will keep it locally (as the API only shows objects changed after a certain date).
2. If we enable an object property to show that it is deleted (e.g. "deleted = TRUE"), then eventually the number of objects in the API grows and slows down the transfer rate.
Another option we are looking into is to have separate endpoints on the API that show a list of deleted objects only (is this a pattern that anyone uses?).
I'm looking for the most "RESTful" way to delete local objects.
The way I handle it is a variation on your #1: each item has a last-updated field in the database, and if something is deleted, I make an entry in another table of deleted items, whose updated value is when it was deleted.
The client makes a request asking for "changes since X", where X is its own locally stored last-updated value. The request returns the new data and an array of deleted items, and on the client I then purge those values.
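As a sketch, the client side of that scheme could look like the snippet below; the /changes endpoint, the field names, and the local cache are assumptions for illustration, not an established API:
import requests

local_items = {}                               # hypothetical local cache keyed by item id
last_updated = "2016-11-01T00:00:00Z"          # cursor persisted from the previous sync

resp = requests.get("https://api.example.com/changes", params={"since": last_updated})
payload = resp.json()

for item in payload["changed"]:                # new or updated objects
    local_items[item["id"]] = item
for deleted_id in payload["deleted"]:          # ids taken from the deleted-items table
    local_items.pop(deleted_id, None)

last_updated = payload["asOf"]                 # remember for the next "changes since X" call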
Stale data is always a problem with client/server applications. If a client loads some data, then some object is deleted on the server, and then the client sends a DELETE request, the RESTful thing to do would be to return a 404, which indicates "not found". That way the client knows that if it sends a DELETE and gets a 404, the resource was deleted from underneath it.
What if you think of your resource not as a list, but rather as a changeset?
E.g. the changesets you have in git or SVN.
This way, there's always a "head" version, and the client always has some version, and the resource is the change between client's last and head.
That way you can apply whatever you've learned by examining/using version control systems.
If you need anything more complex, the science behind it is called Operational Transformation (OT) - http://en.wikipedia.org/wiki/Operational_transformation