I want to know all the reasons why GraphQL might be used instead of a REST API.
As far as I know, instead of making multiple HTTP requests, you can group them into a single request using GraphQL (to reduce the number of HTTP round trips).
Can anybody describe this in a little more detail, please?
Thanks in advance.
There are many articles available on the internet covering this question in more detail. I am trying to give a short overview here.
GraphQL offers a couple of advantages over REST.
Main difference
In a REST interface, everything is about resources. For example, you'd fetch the "car" resources with IDs 25 and 83 by calling endpoints like this:
GET /cars/25
GET /cars/83
Note how you have to call the interface twice. The endpoint ("cars") and your resource are coupled.
In GraphQL you could get both cars with one call, using this example query:
GET /api?query={ car(ids: [25, 83]) { model, manufacturer { address } } }
Note how you even specify the exact data you want to fetch (model, manufacturer and its address). Compared to REST, the endpoint ("api") is no longer resource-specific.
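For illustration, a response to that query could look roughly like this (the concrete values are made up; the exact shape depends on your schema):
{
  "data": {
    "car": [
      { "model": "Roadster", "manufacturer": { "address": "1 Example Street" } },
      { "model": "Estate", "manufacturer": { "address": "2 Sample Avenue" } }
    ]
  }
}
No other fields come back, which is the point of the query language.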
Some advantages
As already mentioned in the question, you can reduce the amount of HTTP operations with the help of GraphQL queries (avoid underfetching).
By specifying exactly, which data you want to fetch, you are able to reduce the overhead being transmitted over the interface (avoid overfetching).
Because GraphQL queries are flexible, you are less likely to couple the interface consumer too tightly to the producer; with REST, the exact requirements of a specific consumer often end up baked into a fixed set of endpoints.
Because each consumer specifies exactly which data it requires, you can gather more detailed statistics on data usage in your backend.
Related
I have to implement a REST endpoint that receives start and end dates (among other arguments). It does some computations to generate a result that is a kind of forecast according to the server state at invocation epoch and the input data (imagine a weather forecast for next few days).
Since the endpoint does not alter the system state, I plan to use GET method and return a JSON.
The issue is that the output also includes an image file (a plot). So my idea is to create a unique ID for the file and include a URI in the JSON response to be consumed later (I think this is the way suggested by the HATEOAS principle).
My question: since this image file is a resource that is valid only as part of the response to a single invocation of the original endpoint, I need a way to delete it once it has been consumed.
Would it be RESTful to delete it after serving it via a GET?
or expose it only via a DELETE?
or not delete it on consumption and keep it for some time? (A purge would have to be performed anyway, since I can't guarantee that the client ever consumes the file.)
I would appreciate your ideas.
Would it be RESTful to delete it after serving it via a GET?
Yes.
or expose it only via a DELETE?
Yes.
or not delete it on consumption and keep it for some time?
Yes.
The last of these options (caching) is a decent fit for REST in HTTP, since we have meta-data that we can use to communicate to general purpose components that a given representation has a finite lifetime.
So the representation of the report (which includes the link to the plot) could be accompanied by an Expires header that informs the client that it has an expected shelf life.
You might, therefore, plan to garbage collect the image resource after 10 minutes, and if the client hasn't fetched it before then - poof, gone.
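As a rough sketch of that approach, assuming a Node.js/Express server (Express, the route names and computeForecast are all assumptions for illustration, not part of the original API):
const crypto = require("crypto");
const express = require("express");
const app = express();

const plotStore = new Map(); // plotId -> image buffer, purged after a fixed TTL

app.get("/forecast", (req, res) => {
  const { start, end } = req.query;
  const { report, plotBuffer } = computeForecast(start, end); // hypothetical computation

  const plotId = crypto.randomUUID();
  plotStore.set(plotId, plotBuffer);

  // Garbage-collect the plot after 10 minutes, whether or not it was ever fetched.
  setTimeout(() => plotStore.delete(plotId), 10 * 60 * 1000);

  res.set("Expires", new Date(Date.now() + 10 * 60 * 1000).toUTCString());
  res.json({ ...report, plot: `/plots/${plotId}` });
});

app.get("/plots/:id", (req, res) => {
  const image = plotStore.get(req.params.id);
  if (!image) return res.status(404).end(); // expired or never existed
  res.type("image/png").send(image);
});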
The reason that you might want to keep the image around after you send the response to the GET: the network is unreliable, and the GET message may never reach its destination. Having things in cache saves you the compute of trying to recalculate the image.
If you want confirmation that the client did receive the data, then you must introduce another message to the protocol, for the client to inform you that the image has been downloaded successfully.
It's reasonable to combine these strategies: schedule yourself to evict the image from the cache in some fixed amount of time, but also evict the image immediately if the consumer acknowledges receipt.
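Continuing the same (assumed) Express sketch above, the acknowledgement can simply be a DELETE on the image resource, while the timer still evicts it eventually:
app.delete("/plots/:id", (req, res) => {
  // The client acknowledging receipt doubles as the early-cleanup signal.
  plotStore.delete(req.params.id);
  res.status(204).end();
});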
But REST doesn't make any promises about liveness - you could send a response with a link to the image, but 404 Not Found every attempt to GET it, and that's fine (not useful, of course, but fine). REST doesn't promise that resources have stable representations, or that the resource is somehow eternal.
REST gives us standards for how we request things, and how responses should be interpreted, but we get a lot of freedom in choosing which response is appropriate for any given request.
You could offer a download link in the JSON response to that binary resource that also contains the parameters that are required to generate that resource. Then you can decide yourself when to clean that file up (managing disk space) or cache it - and you can always regenerate it because you still have the parameters. I assume here that the generation doesn't take significant time.
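For example, the JSON response could carry a link whose query string holds everything needed to redraw the plot (the parameter names here are invented for illustration):
{
  "summary": "sunny with scattered showers",
  "plot": "/plots?start=2024-01-01&end=2024-01-07&generatedAt=2024-01-01T12:00:00Z"
}
If the file has already been cleaned up, the server can regenerate it from those parameters instead of returning 404.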
It's a tricky one. Typically GET requests should be repeatable, as an important HTTP feature, in case the original request failed. Some people might rely on that.
It could also be construed as a 'non-safe' operation, GET resulting in what is effectively a DELETE.
I would be inclined to expire the image after X seconds/minutes instead, perhaps also supporting DELETE at that endpoint if the client got the result and wants to clean up early.
I have a question to RESTful services. In REST the POST method is used to create an entity.
And GET is used to query entities. Right?
As I read in other posts, it is not allowed in HTTP to send a GET request with a body.
But when I want to send JSON to make a query, what is the best way? Are there any best practices, and how do you solve such JSON queries?
Thanks for your answers
In REST the POST method is used to create an entity. And GET is used to query entities. Right?
Not really. GET is used to fetch representations of resources. POST is deliberately vague -- anything not worth standardizing can use POST.
when I want to send Json to make a query, what is the best way?
There is no best way to do it, just trade-offs.
The basic plot of HTTP is that you GET representations of resources. If the resource you want doesn't exist, you create a new one. So the "REST" flow would look something like sending a request to the server to create a "the answer to my query" resource, and then using GET to obtain the current representation of that resource. Which is great, because we can fetch the latest representation of that resource any time we're worried that our copy is out of date. Other people with the same query can use the same resource, so we can use a general-purpose cache to take on a lot of the work. The end result is "web scale".
OK, not that great, because we learned that sending information over insecure channels is a bad idea; but we can put a general-purpose caching proxy in front of our server, and get some scale that way.
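A sketch of that flow using fetch, run inside an async context (the /queries URL and the payload are made up for the example):
// 1. Ask the server to create "the answer to my query" as a resource.
const created = await fetch("/queries", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ status: "open", assignee: "alice" }),
});
const queryUrl = created.headers.get("Location"); // e.g. /queries/42

// 2. The answer is now an ordinary resource: it can be re-fetched and cached.
const result = await (await fetch(queryUrl)).json();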
But "create a new resource" is a lot of ceremony when you only expect to need the query once.
Creating a new resource would use POST in this situation anyway, so why not return a representation of the solution right away? And the answer is: go right ahead! That works great... but it doesn't give you any cache support at all. You are effectively performing a remote call under the guise of modifying a resource.
Also, POST doesn't promise idempotent semantics -- on an unreliable network, requests can get lost, and general purpose components won't know that in this particular case it is harmless to just repeat the same request.
PUT has idempotent semantics... but it also has very specific opinions about the contents of the payload that don't match "query" at all.
You can dig through other standardized methods, but there aren't really any good fits. The only methods that are close are SEARCH and REPORT, which are coupled to WebDAV semantics.
You can invent your own non-standard method, but general-purpose components won't understand it.
You can standardize a new method with the semantics you need, but that's a lot of work.
Or you can just use POST.
Remember, the web took over the world using nothing more than GET and POST. So it's probably fine.
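For completeness, the plain-POST variant could look like this, again inside an async context (the /search endpoint and the payload are invented for the example):
const response = await fetch("/search", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ status: "open", createdAfter: "2024-01-01" }),
});
// The answer comes back directly, but intermediaries can't cache it.
const results = await response.json();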
Let's say you have a set of resources which look like:
/v1/API/Events
/v1/API/Transactions
where Events look like:
{
"id":1,
"name":"blah",
"date":"2010-01-11",
"duration":1231231,
"transaction_id":3
}
except that the transaction_id can be left out or set to null
Transactions look like:
{
"id":1,
"name":"transaction_name",
"date":"2015-01-01"
}
Now here is the issue: there are times where it would be beneficial to be able to get a Transaction and its events at the same time. It would definitely be beneficial to be able to POST a Transaction with its events, i.e.
{
"name":"new_transaction_one",
"events": [
{
"name":"blob",
"date":"2010-01-01",
"duration":10
},
{
"name":"blob_2",
"date":"2010-01-10",
"duration":15
}
]
}
and it would also be useful to be able to make a GET request like:
/v1/API/Transactions/1?withEvents=Y
Other options would be to have another resource:
/v1/API/TransactionsWithEvents
But if you have objects with several different sets of child records, you would have to have a lot of different combinations. I also don't like that they have different paths even though we are talking about the same resource.
I'm leaning towards using query parameters in the GET request but I'm wondering if there are any gotchas.
Here is a case where typical RESTful API scaffolding reaches its limits. It would probably be preferable to create some convenience methods (especially for POST) that can work across the different resources, for example, creating a transaction and its associated events in a single POST. You will find that almost any service with a reasonable level of complexity needs convenience methods to prevent the user from having to (using the same example) create the transaction, read the transaction ID from the response, then create the events, then create the transaction-to-event relations.
Tying it back to your example, that may mean you have a method like
POST /v1/API/CreateTransactionWithEvents
You may not need a similar convenience endpoint for the case of returning events with transactions, as I think your query-parameter approach makes sense here, since you are just enriching the data returned for the record with its related events.
GET /v1/API/Transactions/{ID}?withEvents=1
This is a bit more of a gray area and really subject to what works best alongside your other APIs (so you are not doing something totally different), while still providing a clear API to clients, etc.
Just think of the typical resource-related endpoints (i.e., for Transactions and Events) as the main backbone of your RESTful service, adding convenience methods as appropriate to address specific resource CRUD use cases that are not easily handled by the backbone endpoints. This could be cases where you want to prevent the client from making a series of API calls to get something you could provide in a single call, or when you need to do something like atomically adding records across resources with a single API call.
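A rough sketch of both endpoints, assuming a Node.js/Express server (Express, the db helpers and the route shapes are all assumptions for illustration):
const express = require("express");
const app = express();
app.use(express.json());

app.post("/v1/API/CreateTransactionWithEvents", async (req, res) => {
  const { name, date, events = [] } = req.body;

  // Create the transaction and its events in one unit of work,
  // so the client doesn't have to chain several calls together.
  const transaction = await db.createTransaction({ name, date });
  for (const event of events) {
    await db.createEvent({ ...event, transaction_id: transaction.id });
  }
  res.status(201).json(transaction);
});

app.get("/v1/API/Transactions/:id", async (req, res) => {
  const transaction = await db.getTransaction(req.params.id);
  if (!transaction) return res.status(404).end();

  // Enrich the record with its child events only when asked to.
  if (req.query.withEvents === "1") {
    transaction.events = await db.getEventsByTransaction(transaction.id);
  }
  res.json(transaction);
});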
Let's say that the API is well documented and every possible response field is described.
Should a web application's server API exclude null fields in a JSON response to lower the amount of traffic? Is this a good idea at all?
I was trying to calculate the amount of traffic reduced for a large app like Twitter, and the numbers are actually quite convincing.
For example: if you exclude a single response field, "someGenericProperty":null, which is 26 bytes, from every single API response, while Twitter reportedly handles 13 billion API requests per day, the traffic reduction will be >300 GB.
More than 300 GB less traffic every day is quite a money saver, isn't it? That's probably the most naive and simplistic calculation ever, but still.
In general, no. The more public the API and the more potential consumers of the API, the more invariant the API should be.
Developers getting started with the API are confused when a field shows up sometimes, but not other times. This leads to frustration and ultimately wastes the API owner's time in the form of support requests.
There is no way to know exactly how downstream consumers are using an API. Often, they are not using it just as the API developer imagines. Elements that appear or disappear based on the context can break applications that consume the API. The API developer usually has no way to know when a downstream application has been broken, short of complaints from downstream developers.
When data elements appear or disappear, uncertainty is introduced. Was the data element not sent because the API considered it to be irrelevant? Or has the API itself changed? Or is some bug in the consumer's code not parsing the response correctly? If the consumer expects a field and it isn't there, how does that get debugged?
On the server side, extra code is needed to strip those fields out of the response. What if the logic that strips out the data is wrong? It's a chance to inject defects, and it means there is more code that must be maintained.
In many applications, network latency is the dominating factor, not bandwidth. For performance reasons, many API developers will favor a few large request/responses over many small request/responses. At my last company, the sales and billing systems would routinely exchange messages of 100 KB, 200 KB or more. Sometimes only a few KB of the data was needed. But overall system performance was better than fetching some data, discovering more was needed then sending additional request for that data.
For most applications some inconsistency is more dangerous than superfluous data is wasteful.
As always, there are a million exceptions. I once interviewed for a job at a torpedo maintenance facility. They had underwater sensors on their firing range to track torpedoes. All sensor data were relayed via acoustic modems to a central underwater data collector. Acoustic underwater modems? Yes. At 300 baud, every byte counts.
There are battery-powered embedded applications where every byte counts, as well as low-frequency RF communication systems.
Another exception is sparse data. For example, imagine a matrix with 4,000,000 rows and 10,000 columns where 99.99% of the values of the matrix are zero. The matrix should be represented with a sparse data structure that does not include the zeros.
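For example, a coordinate-list style representation stores only the non-zero entries (the values below are made up):
{
  "rows": 4000000,
  "cols": 10000,
  "entries": [
    { "row": 12, "col": 7, "value": 3.5 },
    { "row": 980431, "col": 42, "value": -1.0 }
  ]
}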
It definitely depends on the service and the amount of data it provides; you should evaluate the ratio of null to non-null data and set a threshold above which it is worth excluding those elements.
Thanks for sharing, it's an interesting point to me.
The question approaches this from the wrong side: JSON is not the best format for compressing or reducing traffic in the first place; something like Google Protocol Buffers or BSON is.
I am carefully re-evaluating nullables in the API schema right now. We use Swagger (OpenAPI), and JSON Schema does not really have something like a nullable type, and I think there is a good reason for this.
If you have a JSON response that maps a DB integer field which is suddenly NULL (or can be, according to the DB schema), well, that is indeed OK for a relational DB but not at all healthy for your API.
I suggest adopting a much more elegant approach: make better use of "required", also for the response.
If the field is optional in the response schema and has a null value in the DB, do not return the field at all.
We have enabled strict schema checks for the API responses as well, and this gives us much better control of our data and forces us not to rely on implicit states in the API.
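A minimal sketch of what that could look like on the server, in plain JavaScript (the field names are invented for the example):
// Drop every property whose value is null before serialising the response.
function omitNulls(obj) {
  return Object.fromEntries(
    Object.entries(obj).filter(([, value]) => value !== null)
  );
}

const row = { id: 1, name: "blah", someGenericProperty: null };
console.log(JSON.stringify(omitNulls(row))); // {"id":1,"name":"blah"}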
For the API client that of course means doing checks like:
if ("key" in response) {
console.log("Optional key value:" + response[key]);
} else {
console.log("Optional key not found");
}
I am developing an online game where characters can perform complex actions against other objects and characters. I am building a REST API, and having a lot of trouble trying to follow even some of the most basic standards. I know that REST isn't always the answer, but for a variety of reasons it makes sense for me to use REST since the rest of the API uses it appropriately.
Here are some tricky examples:
GET /characters/bob/items
This returns an array of items that Bob is carrying.
I need to perform a variety of 'operations' against these items, and I'm having a very difficult time modeling this as 'resources'.
Here are some potential operations, depending on the nature of the item:
throw, eat, drop, hold
This is complicated because these 'operations' are only suitable for certain items. For example, you can't eat a sword. Moreover, 'eat' essentially has a side-effect of 'deleting' the resource. Using 'throw' may also 'delete' the resource. Using 'drop' may 'transform' the resource into another resource type. 'Throw' requires that I provide a 'location'. 'Hold' requires that I supply which hand to hold the item in. So how do you model these operations as resources? None of them are 'alike' because they each require different parameters and result in completely different behaviors.
Currently, I have an 'actions' resource that I POST these arbitrary actions to. But this feels far too much like RPC, and not standardized or discoverable:
POST /actions/throw
{
characterId: 5,
itemId: 10,
x: 100,
y: 150
}
I try to stick to resources and GET/POST/PUT/PATCH/DELETE where possible, but the base verbs tend to map directly to CRUD calls. Other, more complex operations generally can't be mapped without additional information.
Focusing on the resources, I'd probably do something like this (posting messages to the resources):
POST /characters/bob/items/{bombId}?action=throw
POST /characters/bob/items/{foodId}?action=eat
POST /characters/bob/items/{potionId}?action=add&addedItem={ingredientId}
Return an error when the action is not appropriate for the item.
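A sketch of what that dispatch could look like, assuming a Node.js/Express server (Express, the handler names and the supports() check are all assumptions for illustration):
const express = require("express");
const app = express();
app.use(express.json());

const actions = {
  throw: (character, item, params) => throwItem(character, item, params), // expects x and y in params
  eat: (character, item) => eatItem(character, item), // consuming deletes the item
  drop: (character, item) => dropItem(character, item),
  hold: (character, item, params) => holdItem(character, item, params), // expects the hand in params
};

app.post("/characters/:name/items/:itemId", async (req, res) => {
  const handler = actions[req.query.action];
  if (!handler) return res.status(400).json({ error: "unknown action" });

  const character = await loadCharacter(req.params.name);
  const item = character.items.find((i) => String(i.id) === req.params.itemId);
  if (!item || !item.supports(req.query.action)) {
    // e.g. trying to eat a sword
    return res.status(409).json({ error: "action not allowed for this item" });
  }
  res.json(await handler(character, item, req.body));
});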
Where I want a resource to “do a complex action” while remaining RESTful, I'd POST a complex document to the resource that describes what I want to happen. (The complex document could be in XML, JSON, or any number of other formats.) This is somewhat distinct from the more common pattern of mapping POST to “create a child resource”, but the meaning of POST is “do non-idempotent action defined by body content”. That's a reasonable fit for what you're after.
As part of the HATEOAS principle of discovery, when you GET the resource which you will later POST to, part of the document returned should say what these complex action documents are and where they should be sent to. Logically, think of filling in a form and submitting it (even if the “form” is actually slots in a JSON document or something like that).
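For instance, a GET on the item could advertise which action documents it accepts and where to send them (the shape below is invented to illustrate the idea):
{
  "id": 10,
  "type": "bomb",
  "actions": {
    "throw": { "href": "/characters/bob/items/10", "fields": ["x", "y"] },
    "drop": { "href": "/characters/bob/items/10", "fields": [] }
  }
}
The client would then POST a document such as { "action": "throw", "x": 100, "y": 150 } to the advertised href, much like filling in and submitting a form.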