JSON over Diameter protocol

I am new to Diameter and have this basic question.
I have 2 peers talking to each other over the Diameter protocol. I need to send some data between these two entities, and I am trying to find out whether JSON or XML is supported over Diameter. What is the best way to transfer file content over Diameter? Is it possible to transfer JSON data over Diameter?
Any sample links or code samples would be helpful.
Thanks in advance...

You can send any kind of data you want with Diameter, but keep in mind that it is designed for transmitting Authentication, Authorization and Accounting (AAA) data. This is control data that is primarily used for granting access, enforcing policy, and measuring usage. The actual network traffic that Diameter controls flows over completely different networks. So if control is what you are after, then you should next research which interface(s) you would need for your application. There are many good online resources for that, including RFCs, the IETF, 3GPP, and Wikipedia.

Diameter peers use Commands to communicate, where Commands are sets of AVPs (Attribute-Value Pairs). Commands and AVPs are defined by the applications that use them.
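To make "sets of AVPs" concrete: each AVP is a small binary TLV structure (RFC 6733), and arbitrary octets, including a JSON string, can ride in an OctetString-typed AVP. Below is a minimal TypeScript (Node) sketch; the AVP code 9001 and vendor ID 99999 are made-up placeholders, not registered values.

```typescript
// Minimal sketch of the RFC 6733 AVP layout: Code (4 bytes), Flags (1),
// Length (3), optional Vendor-ID (4), then data padded to a 4-byte boundary.
function encodeAvp(code: number, vendorId: number, data: Buffer): Buffer {
  const headerLen = 12;                     // 8-byte header + 4-byte Vendor-ID
  const avpLen = headerLen + data.length;   // AVP Length excludes padding
  const padded = (avpLen + 3) & ~3;         // pad up to the next 4-byte boundary
  const buf = Buffer.alloc(padded);         // zero-filled, so padding is zeroed
  buf.writeUInt32BE(code, 0);               // AVP Code
  buf.writeUInt8(0x80, 4);                  // Flags: V (vendor-specific) bit set
  buf.writeUIntBE(avpLen, 5, 3);            // AVP Length is a 24-bit field
  buf.writeUInt32BE(vendorId, 8);           // Vendor-ID
  data.copy(buf, headerLen);                // the payload octets
  return buf;
}

// Made-up code/vendor values; any JSON simply travels as opaque octets.
const json = Buffer.from(JSON.stringify({ fileName: "report.json", chunk: 1 }));
const avp = encodeAvp(9001, 99999, json);
```

Note that the receiving peer has to know, from the application's AVP dictionary, that this particular AVP carries JSON; Diameter itself only sees octets.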
Why do you want to use Diameter for file transfer?

I think you can use the Diameter protocol for JSON (never tried it myself though). But Diameter is a protocol for a specific purpose and uses AVPs.

It is not possible to send JSON or XML data over the Diameter protocol. Diameter is a strict protocol which works by having well-defined command and attribute codes.

Related

BLE communication data type

Bluetooth newbie here.
Is there a best practice among the data types used for BLE communication?
In my case I am setting up an ESP32 that acts as a Server: it has a single Characteristic with a Notify property, so it repeatedly sends data to all the Clients once they connect (a Raspberry Pi as the Client, in my case).
Right now the data exchanged is just bytes (based on Neil Kolban's "BLE_notify" Arduino example), but it would be great to send Strings or, better, JSON data. Is that possible?
You can send whatever format you want if you use custom characteristic UUIDs. However, BLE data transfer is generally slow, so you'd better make your data as compact as possible. JSON might not be the best option here; I've not heard of anyone sending JSON over BLE. Some binary format is what's generally used.
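To illustrate the size argument, here is a hedged sketch comparing a JSON payload with a packed binary layout for the same reading; the fields and scaling factors are invented for illustration. With the default ATT MTU, a notification payload is only about 20 bytes, so the fixed layout fits comfortably while the JSON version is already near the limit.

```typescript
// The same reading (temperature, humidity, counter) as JSON vs packed binary.
const reading = { t: 23.5, h: 41.2, n: 1024 };

const asJson = new TextEncoder().encode(JSON.stringify(reading));
console.log(asJson.byteLength);                  // ~28 bytes, grows with field names

const packed = new DataView(new ArrayBuffer(8));
packed.setInt16(0, Math.round(reading.t * 100)); // temperature in centi-degrees
packed.setInt16(2, Math.round(reading.h * 100)); // humidity in centi-percent
packed.setUint32(4, reading.n);                  // message counter
console.log(packed.byteLength);                  // always 8 bytes
```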

What is the RESTful way to return a JSON + binary file in an API

I have to implement a REST endpoint that receives start and end dates (among other arguments). It does some computations to generate a result that is a kind of forecast according to the server state at invocation time and the input data (imagine a weather forecast for the next few days).
Since the endpoint does not alter the system state, I plan to use GET method and return a JSON.
The issue is that the output also includes an image file (a plot). So my idea is to create a unique ID for the file and include a URI in the JSON response to be consumed later (I think this is the way suggested by the HATEOAS principle).
My question is: since this image file is a resource that is valid only as part of the response to a single invocation of the original endpoint, I need a way to delete it once it has been consumed.
Would it be RESTful to delete it after serving it via a GET?
or expose it only via a DELETE?
or not delete it on consumption and keep it for some time? (A purge would have to be performed anyway, since I can't ensure the client consumes the file.)
I would appreciate your ideas.
Would it be RESTful to delete it after serving it via a GET?
Yes.
or expose it only via a DELETE?
Yes.
or not delete it on consumption and keep it for some time?
Yes.
The last of these options (caching) is a decent fit for REST in HTTP, since we have meta-data that we can use to communicate to general purpose components that a given representation has a finite lifetime.
So the representation of the report (which includes the link to the plot) could be accompanied by an Expires header that informs the client that it has an expected shelf life.
You might, therefore, plan to garbage collect the image resource after 10 minutes, and if the client hasn't fetched it before then - poof, gone.
The reason that you might want to keep the image around after you send the response to the GET: the network is unreliable, and the GET message may never reach its destination. Having things in cache saves you the compute of trying to recalculate the image.
If you want confirmation that the client did receive the data, then you must introduce another message to the protocol, for the client to inform you that the image has been downloaded successfully.
It's reasonable to combine these strategies: schedule yourself to evict the image from the cache in some fixed amount of time, but also evict the image immediately if the consumer acknowledges receipt.
But REST doesn't make any promises about liveness - you could send a response with a link to the image, but 404 Not Found every attempt to GET it, and that's fine (not useful, of course, but fine). REST doesn't promise that resources have stable representations, or that the resource is somehow eternal.
REST gives us standards for how we request things, and how responses should be interpreted, but we get a lot of freedom in choosing which response is appropriate for any given request.
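To make the combination of strategies concrete, here is a minimal sketch in Express; the endpoint names, the ten-minute TTL, and the in-memory Map are illustrative choices, not anything the question prescribes.

```typescript
import express from "express";
import { randomUUID } from "crypto";

const app = express();
const plots = new Map<string, Buffer>();      // plot id -> rendered image bytes
const TTL_MS = 10 * 60 * 1000;                // shelf life announced to clients

// Placeholder standing in for whatever actually renders the forecast plot.
const renderPlot = (_params: unknown): Buffer => Buffer.alloc(0);

app.get("/forecast", (req, res) => {
  const id = randomUUID();
  plots.set(id, renderPlot(req.query));
  setTimeout(() => plots.delete(id), TTL_MS); // scheduled garbage collection
  res.set("Expires", new Date(Date.now() + TTL_MS).toUTCString());
  res.json({ forecast: [], plot: `/plots/${id}` });
});

app.get("/plots/:id", (req, res) => {
  const img = plots.get(req.params.id);
  if (!img) return res.sendStatus(404);       // expired or acknowledged: fine
  res.type("image/png").send(img);
});

// Optional acknowledgment: the client may evict the plot early once consumed.
app.delete("/plots/:id", (req, res) => {
  plots.delete(req.params.id);
  res.sendStatus(204);
});

app.listen(3000);
```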
You could offer a download link in the JSON response to that binary resource that also contains the parameters that are required to generate that resource. Then you can decide yourself when to clean that file up (managing disk space) or cache it - and you can always regenerate it because you still have the parameters. I assume here that the generation doesn't take significant time.
It's a tricky one. Typically GET requests should be repeatable, as an important HTTP feature, in case the original failed. Some people might rely on that.
It could also be construed as a "non-safe" operation: a GET resulting in what is effectively a DELETE.
I would be inclined to expire the image after X seconds/minutes instead, perhaps also supporting DELETE at that endpoint if the client got the result and wants to clean up early.

Autodesk Forge randomly loses object and room information

I'm using Autodesk Forge to integrate with our remodeling tool. In particular, I need to count objects of different families and types and determine which room they actually belong to. I use the Model Derivative API for this purpose. To keep the room/area information I convert .rvt files to .nwc files, as suggested here. However, when I retrieve data with GET /modelderivative/v2/designdata/{urn}/metadata/{guid}/properties, I face the following problems from time to time:
Room information sometimes disappears from Objects for some reason
Objects disappear from result data for some reason (but they seem to exist when I browse them in A360)
I have no idea what the reason for this could be.
I have no explanation for the disappearance of room data or objects for you.
If you can provide a reproducible case demonstrating that, I will gladly pass it on to the development team for analysis.
If you are interested in an immediate reliable solution and full control, which I assume is the case, I would suggest following the second bullet item in the advice provided by Eason in the previous answer that you refer to above:
Extract all the room information and object relationships you are interested in via the Revit API, store that data somewhere yourself, and use it later on wherever you like to your heart's content.
Then you will be completely safe and independent of all other components and their unpredictable behaviour.
If the only information that you need is the room containing each family instance, I can even implement a suitable Revit add-in for you.
Another suggestion that might help, if that is indeed the data you require: determine that information in a Revit add-in and attach it to each family instance in your own shared parameter. That will ensure that it remains intact through the translation process. AFAIK, all shared parameter data is retained, independent of other behaviour.
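If you stay on the Model Derivative route in the meantime, here is a hedged diagnostic sketch for spotting affected objects: call the same properties endpoint and list the objects that come back without room data. The response shape follows the Model Derivative documentation, but where (and whether) a room value appears inside properties depends on the source model, so the Constraints/Room path below is only a guess.

```typescript
const BASE = "https://developer.api.autodesk.com/modelderivative/v2/designdata";

// List objects whose properties lack room data (guessed property path).
async function objectsWithoutRooms(urn: string, guid: string, token: string) {
  const res = await fetch(`${BASE}/${urn}/metadata/${guid}/properties`, {
    headers: { Authorization: `Bearer ${token}` },
  });
  const body = await res.json();
  return body.data.collection.filter(
    (obj: any) => !obj.properties?.Constraints?.Room
  );
}
```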

REST API - file (ie images) processing - best practices

We are developing a server with a REST API which accepts and responds with JSON. The problem is how to upload images from the client to the server.
Note: I am also talking about a use-case where the entity (user) can have multiple files (carPhoto, licensePhoto) and also other properties (name, email, ...), but when you create a new user you don't send these images; they are added after the registration process.
The solutions I am aware of, each of which has some flaws:
1. Use multipart/form-data instead of JSON
good: POST and PUT requests are as RESTful as possible; they can contain text inputs together with the file.
cons: It is not JSON anymore, which is much easier to test, debug etc. compared to multipart/form-data.
2. Allow to update separate files
The POST request for creating a new user does not allow adding images (which is OK in our use-case, as I said at the beginning); uploading pictures is done by a PUT request as multipart/form-data to, for example, /users/4/carPhoto.
good: Everything (except the file upload itself) remains in JSON; it is easy to test and debug (you can log complete JSON requests without being afraid of their length).
cons: It is not intuitive; you can't POST or PUT all variables of the entity at once, and this address /users/4/carPhoto can be considered more as a collection (the standard use-case for a REST API looks like /users/4/shipments). Usually you can't (and don't want to) GET/PUT each variable of an entity, for example /users/4/name. You can get the name with GET and change it with PUT at /users/4. If there is something after the ID, it is usually another collection, like /users/4/reviews.
3. Use Base64
Send it as JSON but encode files with Base64.
good: Same as the first solution; the service is as RESTful as possible.
cons: Once again, testing and debugging is a lot worse (the body can be megabytes of data); there is an increase in size and also in processing time on both the client and the server.
I would really like to use solution no. 2, but it has its cons... Can anyone give me better insight into what the "best" solution is?
My goal is to have RESTful services with as much standards included as possible, while I want to keep it as simple as possible.
OP here (I am answering this question after two years; the post made by Daniel Cerecedo was not bad at the time, but web services are developing very fast).
After three years of full-time software development (with a focus also on software architecture, project management and microservice architecture) I definitely choose the second way (but with one general endpoint) as the best one.
If you have a special endpoint for images, it gives you much more power over handling those images.
We have the same REST API (Node.js) for both mobile apps (iOS/Android) and the frontend (using React). This is 2017, therefore you don't want to store images locally; you want to upload them to some cloud storage (Google Cloud, S3, Cloudinary, ...), so you want some general handling of them.
Our typical flow is that as soon as you select an image, it starts uploading in the background (usually a POST to an /images endpoint), returning the ID after the upload. This is really user-friendly, because the user chooses an image and then typically proceeds with some other fields (i.e. address, name, ...), so when he hits the "send" button, the image is usually already uploaded. He does not wait, watching a screen that says "uploading...".
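A hedged client-side sketch of that flow; the /images endpoint and its { id } response are conventions of our own API, not a standard.

```typescript
// Upload starts as soon as the user picks a file, long before "send".
async function uploadInBackground(file: File): Promise<string> {
  const body = new FormData();
  body.append("image", file);
  const res = await fetch("/images", { method: "POST", body });
  const { id } = await res.json();          // server answers with the image's ID
  return id;
}

// When the user finally hits "send", the form only references that ID.
async function submitUser(fields: { name: string }, carPhotoId: string) {
  await fetch("/users", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ ...fields, carPhoto: carPhotoId }),
  });
}
```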
The same goes for getting images. Especially with mobile phones and limited mobile data, you don't want to send original images; you want to send resized images, so they do not take up that much bandwidth (and, to make your mobile apps faster, you often don't want the client to resize them at all; you want an image that fits perfectly into your view). For this reason, good apps use something like Cloudinary (or we have our own image server for resizing).
Also, if the data is not private, then you send back to the app/frontend just the URL and it downloads the image from cloud storage directly, which is a huge saving of bandwidth and processing time for your server. In our bigger apps there are a lot of terabytes downloaded every month; you don't want to handle that directly on each of your REST API servers, which are focused on CRUD operations. You want to handle that in one place (our image server, which has caching etc.) or let cloud services handle all of it.
Small 2023 update: if possible, put a CDN in front of the pictures; it usually will save you a lot of money and make the pictures even more available (i.e. no issues when traffic peaks happen).
Cons: The only "cons" you should think of is "not assigned images". The user selects an image and continues with filling in other fields, but then says "nah" and closes the app or tab, while meanwhile you have successfully uploaded the image. This means you have uploaded an image that is not assigned anywhere.
There are several ways of handling this. The easiest one is "I don't care", which is relevant if this is not happening very often, or if you actually want to store every image the user sends you (for any reason) and you don't want any deletion.
Another one is easy too: you have a CRON job that runs, say, every week and deletes all unassigned images older than one week.
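A minimal sketch of such a job; node-cron and the storage helper are illustrative stand-ins for whatever scheduler and storage layer you actually use.

```typescript
import cron from "node-cron";

// Assumed storage-layer helper; implement against your database or bucket.
declare function deleteUnassignedImagesOlderThan(cutoff: Date): Promise<void>;

cron.schedule("0 3 * * 0", async () => {    // every Sunday at 03:00
  const cutoff = new Date(Date.now() - 7 * 24 * 60 * 60 * 1000);
  await deleteUnassignedImagesOlderThan(cutoff);
});
```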
There are several decisions to make:
The first about resource path:
Model the image as a resource on its own:
Nested in user (/user/:id/image): the relationship between the user and the image is implicit in the path
In the root path (/image):
The client is held responsible for establishing the relationship between the image and the user, or;
If a security context is being provided with the POST request used to create an image, the server can implicitly establish a relationship between the authenticated user and the image.
Embed the image as part of the user
The second decision is about how to represent the image resource:
As a Base64-encoded JSON payload
As a multipart payload
This would be my decision track:
I usually favor design over performance unless there is a strong case for it. It makes the system more maintainable and can be more easily understood by integrators.
So my first thought is to go for a Base64 representation of the image resource, because it lets you keep everything JSON. If you choose this option you can model the resource path as you like.
If the relationship between user and image is 1 to 1, I'd favor modeling the image as an attribute, especially if both data sets are updated at the same time. In any other case you can freely choose to model the image either as an attribute, updating it via PUT or PATCH, or as a separate resource.
If you choose a multipart payload, I'd feel compelled to model the image as a resource on its own, so that other resources (in our case, the user resource) are not impacted by the decision to use a binary representation for the image.
Then comes the question: is there any performance impact in choosing Base64 vs multipart? We could think that exchanging data in multipart format should be more efficient. But this article shows how little the two representations differ in terms of size.
My choice: Base64
Consistent design decision
Negligible performance impact
As browsers understand data URIs (base64 encoded images), there is no need to transform these if the client is a browser
I won't cast a vote on whether to have it as an attribute or a standalone resource; it depends on your problem domain (which I don't know) and your personal preference.
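For completeness, a minimal sketch of the Base64 option; the field names and endpoint are illustrative, not prescribed by anything above.

```typescript
import { readFileSync } from "fs";

// Embed the image bytes directly in the user's JSON representation.
async function putUserWithPhoto() {
  const photo = readFileSync("carPhoto.jpg");
  const user = {
    name: "Jane Doe",
    email: "jane@example.com",
    carPhoto: {
      mediaType: "image/jpeg",
      data: photo.toString("base64"),       // inflates the payload by ~33%
    },
  };
  await fetch("https://api.example.com/users/4", {
    method: "PUT",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(user),
  });
}
```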
Your second solution is probably the most correct. You should use the HTTP spec and MIME types the way they were intended and upload the file via multipart/form-data. As far as handling the relationships goes, I'd use this process (keeping in mind I know zero about your assumptions or system design):
POST to /users to create the user entity.
POST the image to /images, making sure to return a Location header to where the image can be retrieved per the HTTP spec.
PATCH to /users/4 and assign its carPhoto the ID of the photo given in the Location header of step 2.
There's no easy solution. Each way has its pros and cons. But the canonical way is using the first option: multipart/form-data. As the W3C recommendation says:
The content type "multipart/form-data" should be used for submitting forms that contain files, non-ASCII data, and binary data.
We aren't really sending forms, but the implicit principle still applies. Using Base64 as a binary representation is incorrect because you're using the wrong tool to accomplish your goal; on the other hand, the second option forces your API clients to do more work in order to consume your API service. You should do the hard work on the server side in order to supply an easy-to-consume API. The first option is not easy to debug, but once you get it working, it probably never changes.
Using multipart/form-data, you're sticking with the REST/HTTP philosophy. You can view an answer to a similar question here.
Another option is mixing the alternatives: you can use multipart/form-data, but instead of sending every value separately, you can send a value named payload with the JSON payload inside it. (I tried this approach using ASP.NET Web API 2 and it works fine.)
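A minimal sketch of that mixed approach; the endpoint and field names are illustrative.

```typescript
// One multipart request: the file plus a single "payload" part that carries
// the whole JSON document.
const payload = { name: "Jane Doe", email: "jane@example.com" };
const photoBlob = new Blob([/* image bytes */], { type: "image/jpeg" });

const form = new FormData();
form.append("payload", JSON.stringify(payload));
form.append("carPhoto", photoBlob, "car.jpg");
await fetch("https://api.example.com/users", { method: "POST", body: form });
```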

Is using HTML5 Server-sent-events (SSE) ReSTful?

I am not able to understand whether HTML5's Server-sent events (SSE) really fit in a ReST architecture. I understand that NOT all aspects of HTML5/HTTP need to fit in a ReST architecture, but I would like to know from experts which half of HTTP SSE is in (the ReSTful half or the other half!).
One view could be that it is ReSTful, because there is an "initial" HTTP GET request from the client to the server, and the rest can just be seen as partial-content responses with a different Content-Type ("text/event-stream").
A request sent without any idea of how many responses (events) are going to come back? Is that ReSTful?
Motivation for the question: we are developing the server side of an app, and we want to support both ReST clients (in general) and browsers (in particular). While SSE will work for most HTML5 browser clients, we are not sure whether SSE is suitable for a pure ReST client. Hence the question.
Edit1:
Was reading Roy Fielding's old article, where he says :
"In other words, a single user request results in a potentially large number of server obligations. As such, a benevolent user can produce a disproportionate load on the publisher or broker that is distributing notifications. On the Internet, we don’t have the luxury of designing just for benevolent users, and thus in HTTP systems we call such requests a denial-of-service exploit.... That is exactly why there is no standard mechanism for notifications in HTTP"
Does that imply SSE is not ReSTful ?
Edit2:
Was going through Twitter's REST API.
While REST purists might debate whether their REST API is really/fully REST, just the title of the section Differences between Streaming and REST seems to suggest that streaming (and even SSE) cannot be considered ReSTful!? Does anyone contend that?
I think it depends:
Do your server-sent events use hypermedia and hyperlinks to describe possible state changes?
The answer to that question is the answer to whether or not they satisfy REST within your application architecture.
Now, the manner in which those events are sent/received may or may not adhere to REST - everything I have read about SSE suggests that they do not. I suspect it will impact several principles, especially layering - though if intermediaries were aware of the semantics of SSE you could probably negate this.
I think this is orthogonal as it's just part of the processing directive for HTML and JavaScript that the browser (via the JavaScript it is running) understands. You should still be able to have client-side application state decoupled from server-side resource state.
Some of the advice I've seen on how to deal with scaling using SSE doesn't fit REST - i.e. introducing custom headers (modifying the protocol).
How do you respect REST while using SSE?
I'd like to see some kind of
<link rel="event" href="http://example.com/user/1" />
Then the processing directives (including code-on-demand such as JavaScript) of whatever content-type/resource you are working with tell the client how to subscribe and utilize the events made available from such a hyperlink. Obviously, the data of those events should itself be hypermedia containing more hyperlinks that control program flow. (This is where I believe you make the distinction between REST and not-REST).
At some point the browser could become aware of that link relationship - just like a stylesheet and do some of that fancy wire-up for you, so all you do is just listen for events in JavaScript.
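A hedged sketch of doing that wire-up by hand today; the "event" link relation and the links array in the event payload are invented for illustration.

```typescript
// Discover the stream from a hypermedia link, then subscribe with EventSource.
const link = document.querySelector('link[rel="event"]') as HTMLLinkElement;
const source = new EventSource(link.href);

source.onmessage = (e) => {
  const event = JSON.parse(e.data);         // ideally hypermedia: data plus links
  for (const l of event.links ?? []) {
    console.log(`follow ${l.rel}: ${l.href}`); // program flow driven by links
  }
};
```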
While I do think that your application can still fit a REST style around SSE, they are not REST themselves (Since your question was specifically about their use, not their implementation I am trying to be clear about what I am speaking to).
I dislike that the specification uses HTTP because it does away with a lot of the semantics and effectively tunnels an anemic protocol through an otherwise relatively rich one. This is supposedly a benefit but strikes me as selling dinner to pay for lunch.
ReST clients (in general) and Browsers (in particular).
How is your browser not a REST client? Browsers are arguably the most RESTful clients of all. It's all the crap we stick into them via JavaScript that makes them stop adhering to REST. I suspect/fear that as long as we continue to think about our REST-API "clients" and our browser clients as fundamentally different, we will be stuck in this current state - presumably because all the REST people are looking for a hyperlink that the RPC people have no idea needs to exist ;)
I think SSE can be used by a REST API. According to the Fielding dissertation, there are some architectural constraints the application MUST meet if we want to call it REST:
client-server architecture: ok - the client triggers while the server does the processing
stateless: ok - we still store client state on the client and HTTP is still a stateless protocol
cache: ok - we have to use the no-cache header
uniform interface
identification of resources: ok - we use URIs
manipulation of resources through representations: ok - we can use HTTP methods with the same URI
self-descriptive messages: ok, partially - we use the Content-Type header; we can add RDF to the data if we want, but there is no standard which describes that the data is RDF-coded. We should define a text/event-stream+rdf MIME type or something like that, if that is supported.
hypermedia as the engine of application state: ok - we can send links in the data (see the sketch after this list)
layered system: ok - we can add other layers which can transform the data stream, a.k.a. pipes and filters, where the pump is the server, the filters are these layers, and the sink is the client
code on demand: ok - optional, does not matter
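To make the cache and hypermedia items above concrete, a minimal server-side sketch in Express; the endpoint and payload shape are invented for illustration.

```typescript
import express from "express";

const app = express();

app.get("/events", (req, res) => {
  res.set({
    "Content-Type": "text/event-stream",
    "Cache-Control": "no-cache",            // the cache constraint above
    Connection: "keep-alive",
  });
  const timer = setInterval(() => {
    const event = {
      message: "order shipped",
      links: [{ rel: "order", href: "/orders/42" }], // hypermedia in the data
    };
    res.write(`data: ${JSON.stringify(event)}\n\n`);
  }, 5000);
  req.on("close", () => clearInterval(timer));
});

app.listen(3000);
```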
Btw., there is no rule that you cannot use different technologies together. So you can, for example, use a REST API and websockets together if you want, but if the websockets part does not meet at least the self-descriptive message and HATEOAS constraints, then the client will be hard to maintain. Scalability can be another problem, since the other constraints are about that.