BLE communication data type - JSON

Bluetooth newbie here.
Is there a best practice among the data type used for BLE communication?
In my case I am setting up an ESP32 that acts as a server: it has a single Characteristic with a Notify property, so it repeatedly sends data to all the clients once they connect (a Raspberry Pi as the client, in my case).
Right now the data exchanged is just bytes (based on Neil Kolban's "BLE_notify" Arduino example), but it would be great to send strings or, better, JSON data. Is that possible?

You can send data in whatever format you want if you use custom characteristic UUIDs. However, BLE data transfer is generally slow, so you'd better make your data as compact as possible. JSON might not be the best option here; I've not heard of anyone sending JSON over BLE. Some binary format is what's generally used. A sketch of the difference follows.
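To illustrate, here is a minimal sketch (Node.js on the Raspberry Pi client side; the 8-byte field layout is an invented example, not any BLE standard) of decoding a compact binary notification and what the same data costs as JSON:
// Hypothetical 8-byte notification payload:
// uint32 timestamp, int16 temperature in centi-degrees, uint16 battery in mV
function decodeNotification(buf) {
    return {
        timestamp: buf.readUInt32LE(0),
        temperature: buf.readInt16LE(4) / 100,
        batteryMv: buf.readUInt16LE(6),
    };
}

const payload = Buffer.from([0x10, 0x27, 0x00, 0x00, 0xd2, 0x08, 0x64, 0x0e]);
const reading = decodeNotification(payload);
console.log(payload.length, "bytes on air");             // 8
console.log(JSON.stringify(reading).length, "as JSON");  // ~56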

Related

WebSockets over Protocol Buffers (protobufs)/binary vs JSON/text performance

Are protobufs sent as binary data over WebSockets faster than JSON sent as text data over WebSockets? On paper this seems to be true, even taking into account the small overhead of handling bytes on both sides. Has anyone actually had a chance to try this, with some concrete results? Thanks!
So I've made a small project to research this and I've got some results. You can find the project here; more information is in the README and in the results package.
To answer the question: YES, protocol buffers are faster than JSON over 100,000 messages sent as ping-pong (no processing on them except marshalling and unmarshalling). But the difference is not as notable as I would have expected.
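For reference, a minimal sketch of the marshalling being benchmarked, using the protobufjs npm package (the message shape is invented for illustration):
const protobuf = require("protobufjs");

// Parse an inline schema so the example is self-contained
const root = protobuf.parse(`
    syntax = "proto3";
    message Ping { uint32 seq = 1; string body = 2; }
`).root;
const Ping = root.lookupType("Ping");

const msg = { seq: 42, body: "hello" };
const binary = Ping.encode(Ping.create(msg)).finish();  // compact binary bytes
const text = Buffer.from(JSON.stringify(msg));          // human-readable, larger
console.log(binary.length, "bytes as protobuf,", text.length, "bytes as JSON");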

REST API - file (i.e. images) processing - best practices

We are developing a server with a REST API that accepts and responds with JSON. The problem is how to upload images from the client to the server.
Note: I am also talking about a use-case where the entity (user) can have multiple files (carPhoto, licensePhoto) as well as other properties (name, email, ...), but when you create a new user you don't send these images; they are added after the registration process.
These are the solutions I am aware of, but each of them has some flaws:
1. Use multipart/form-data instead of JSON
good: POST and PUT requests are as RESTful as possible; they can contain text inputs together with the file.
cons: it is no longer JSON, which is much easier to test and debug than multipart/form-data.
2. Allow updating separate files
The POST request for creating a new user does not allow adding images (which is OK in our use-case, as I said at the beginning); uploading pictures is done by a PUT request as multipart/form-data to, for example, /users/4/carPhoto.
good: everything (except the file upload itself) remains in JSON; it is easy to test and debug (you can log complete JSON requests without being afraid of their length).
cons: it is not intuitive - you can't POST or PUT all variables of the entity at once - and the address /users/4/carPhoto can be mistaken for a collection (a standard use-case for a REST API looks like /users/4/shipments). Usually you can't (and don't want to) GET/PUT each variable of an entity, for example users/4/name; you get the name with GET and change it with PUT at users/4. If there is something after the id, it is usually another collection, like users/4/reviews.
3. Use Base64
Send it as JSON but encode the files with Base64.
good: same as the first solution - it is as RESTful as possible.
cons: once again, testing and debugging is a lot worse (the body can hold megabytes of data), and there is an increase in size and in processing time on both the client and the server. (Options 1 and 3 are sketched in code below.)
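For concreteness, here is a rough sketch of what options 1 and 3 look like from a Node.js (18+) client; the host name and field names are made up for illustration:
const fs = require("fs");

// Option 1: multipart/form-data - text fields and the file travel together
async function createUserMultipart() {
    const form = new FormData();  // global in Node 18+
    form.append("name", "John");
    form.append("carPhoto", new Blob([fs.readFileSync("car.jpg")]), "car.jpg");
    return fetch("https://api.example.com/users", { method: "POST", body: form });
}

// Option 3: plain JSON with the file Base64-encoded (roughly 33% size overhead)
async function createUserBase64() {
    const body = {
        name: "John",
        carPhoto: fs.readFileSync("car.jpg").toString("base64"),
    };
    return fetch("https://api.example.com/users", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(body),
    });
}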
I would really like to use solution no. 2, but it has its cons... Can anyone give me better insight into which solution is best?
My goal is to have RESTful services with as much standards included as possible, while I want to keep it as simple as possible.
OP here (I am answering this question after two years; the post made by Daniel Cerecedo was not bad at the time, but web services are developing very fast).
After three years of full-time software development (with a focus also on software architecture, project management and microservice architecture) I definitely choose the second way (but with one general endpoint) as the best one.
If you have a special endpoint for images, it gives you much more power over handling those images.
We have the same REST API (Node.js) for both mobile apps (iOS/Android) and the frontend (using React). This is 2017, so you don't want to store images locally; you want to upload them to some cloud storage (Google Cloud, S3, Cloudinary, ...), and therefore you want some general handling for them.
Our typical flow is that as soon as you select an image, it starts uploading in the background (usually a POST to an /images endpoint), returning the ID after the upload. This is really user-friendly: the user chooses an image and then typically proceeds with some other fields (i.e. address, name, ...), so by the time he hits the "send" button, the image is usually already uploaded. He does not wait, watching a screen that says "uploading...". (A sketch of this flow follows below.)
The same goes for getting images. Especially with mobile phones and limited mobile data, you don't want to send original images; you want to send resized images so they do not take up that much bandwidth (and to make your mobile apps faster, you often don't want to resize at all - you want the image that fits perfectly into your view). For this reason, good apps use something like Cloudinary (or we have our own image server for resizing).
Also, if the data are not private, you send back to the app/frontend just a URL and it downloads the image from cloud storage directly, which is a huge saving of bandwidth and processing time for your server. In our bigger apps many terabytes are downloaded every month; you don't want to handle that directly on each of your REST API servers, which are focused on CRUD operations. You want to handle it in one place (our image server, which has caching etc.) or let cloud services handle all of it.
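A minimal sketch of that flow from the client's point of view (the /images endpoint follows the answer; the host and response shape are assumptions):
// 1. As soon as the user picks an image, upload it in the background
async function uploadImage(file) {
    const form = new FormData();
    form.append("image", file);
    const res = await fetch("https://api.example.com/images", { method: "POST", body: form });
    const { id } = await res.json();  // assuming the server answers with the new image's ID
    return id;
}

// 2. By the time the user hits "send", the upload is usually done -
//    the main form only references the image by its ID
async function submitForm(fields, imageId) {
    return fetch("https://api.example.com/users", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ ...fields, imageId }),
    });
}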
Small 2023 update: if possible, put a CDN in front of the pictures; it will usually save you a lot of money and make the pictures even more available (i.e. no issues when traffic peaks happen).
Cons: the only "cons" you should think about is "not assigned images". The user selects an image and continues filling in other fields, but then says "nah" and closes the app or tab - meanwhile you have already uploaded the image successfully. This means you have uploaded an image that is not assigned anywhere.
There are several ways of handling this. The easiest one is "I don't care", which is a relevant one if this does not happen very often, or if you actually want to store every image the user sends you (for whatever reason) and don't want any deletion.
Another one is easy too: a CRON job that, say, every week deletes all unassigned images older than one week (sketched below).
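A sketch of that weekly cleanup using the node-cron npm package (the data-access helper is hypothetical):
const cron = require("node-cron");

// Hypothetical helper - replace with your DB/storage calls,
// e.g. DELETE FROM images WHERE assigned = false AND created_at < cutoff
async function deleteUnassignedImagesBefore(cutoff) {
    console.log("would delete unassigned images older than", cutoff);
}

// Every Sunday at 03:00, drop images that were uploaded over a week ago
// but never assigned to any entity
cron.schedule("0 3 * * 0", async () => {
    const cutoff = new Date(Date.now() - 7 * 24 * 60 * 60 * 1000);
    await deleteUnassignedImagesBefore(cutoff);
});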
There are several decisions to make:
The first about resource path:
Model the image as a resource on its own:
Nested in user (/user/:id/image): the relationship between the user and the image is established implicitly
In the root path (/image):
The client is held responsible for establishing the relationship between the image and the user, or;
If a security context is being provided with the POST request used to create an image, the server can implicitly establish a relationship between the authenticated user and the image.
Embed the image as part of the user
The second decision is about how to represent the image resource:
As a Base64-encoded JSON payload
As a multipart payload
This would be my decision track:
I usually favor design over performance unless there is a strong case for it. It makes the system more maintainable and can be more easily understood by integrators.
So my first thought is to go for a Base64 representation of the image resource, because it lets you keep everything JSON. If you choose this option you can model the resource path as you like. (A sketch appears at the end of this answer.)
If the relationship between user and image is 1 to 1, I'd favor modeling the image as an attribute, especially if both data sets are updated at the same time. In any other case you can freely choose to model the image either as an attribute, updating it via PUT or PATCH, or as a separate resource.
If you choose a multipart payload, I'd feel compelled to model the image as a resource on its own, so that other resources (in our case, the user resource) are not impacted by the decision to use a binary representation for the image.
Then comes the question: is there any performance impact in choosing Base64 vs multipart? We could think that exchanging data in multipart format should be more efficient, but this article shows how little the two representations differ in size.
My choice is Base64:
Consistent design decision
Negligible performance impact
As browsers understand data URIs (base64 encoded images), there is no need to transform these if the client is a browser
I won't cast a vote on whether to have it as an attribute or standalone resource, it depends on your problem domain (which I don't know) and your personal preference.
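To make the Base64 option concrete, a minimal sketch of a user payload with the image embedded as a data URI (all field names are illustrative):
const fs = require("fs");

// The image rides inside the JSON document; a browser client can put
// this data URI straight into an <img> src without any transformation
const user = {
    name: "John",
    email: "john@example.com",
    carPhoto: "data:image/jpeg;base64," + fs.readFileSync("car.jpg").toString("base64"),
};
console.log(JSON.stringify(user).length, "bytes as a JSON body");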
Your second solution is probably the most correct. You should use the HTTP spec and mimetypes the way they were intended and upload the file via multipart/form-data. As for handling the relationships, I'd use this process (keeping in mind I know zero about your assumptions or system design; a code sketch follows the steps):
POST to /users to create the user entity.
POST the image to /images, making sure to return a Location header to where the image can be retrieved per the HTTP spec.
PATCH to /users/:id and assign carPhoto the ID of the photo given in the Location header of step 2.
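A sketch of those three steps with Node 18+ fetch (paths follow the steps above; the host, IDs and response shapes are assumptions):
async function createUserWithPhoto(userFields, photoBlob) {
    // Step 1: create the user entity
    const userRes = await fetch("https://api.example.com/users", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(userFields),
    });
    const user = await userRes.json();

    // Step 2: upload the image; per the HTTP spec the server answers
    // with a Location header pointing at the new resource
    const form = new FormData();
    form.append("image", photoBlob, "car.jpg");
    const imageRes = await fetch("https://api.example.com/images", {
        method: "POST",
        body: form,
    });
    const imageLocation = imageRes.headers.get("Location");  // e.g. /images/123

    // Step 3: link the photo to the user
    await fetch(`https://api.example.com/users/${user.id}`, {
        method: "PATCH",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ carPhoto: imageLocation }),
    });
}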
There's no easy solution, and each way has its pros and cons, but the canonical way is the first option: multipart/form-data. As the W3C recommendation says:
The content type "multipart/form-data" should be used for submitting forms that contain files, non-ASCII data, and binary data.
We aren't really sending forms, but the implicit principle still applies. Using Base64 as a binary representation is incorrect because you're using the wrong tool to accomplish your goal; on the other hand, the second option forces your API clients to do more work in order to consume your service. You should do the hard work on the server side in order to supply an easy-to-consume API. The first option is not easy to debug, but once you get it working, it probably never changes.
Using multipart/form-data you're sticking with the REST/HTTP philosophy. You can view an answer to a similar question here.
Another option is mixing the alternatives: use multipart/form-data but, instead of sending every value separately, send a single value named payload with the JSON payload inside it. (I tried this approach using ASP.NET Web API 2 and it works fine; a sketch follows.)
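That mixed approach could look like this sketch (the payload field name comes from the answer; everything else is assumed):
const fs = require("fs");

// multipart/form-data with a single "payload" part carrying the JSON,
// plus one part per binary file
const form = new FormData();
form.append("payload", JSON.stringify({ name: "John", email: "john@example.com" }));
form.append("carPhoto", new Blob([fs.readFileSync("car.jpg")]), "car.jpg");

fetch("https://api.example.com/users", { method: "POST", body: form })
    .then((res) => console.log(res.status));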

JSON over Diameter protocol

I am new to Diameter and have this basic question.
I have 2 peers talking to each other over Diameter protocol. I need to send some data between these 2 entities and I am trying to decide whether JSON or XML is supported over Diameter. What is the best way to transfer file content over Diameter? Is it possible to transfer JSON data over Diameter?
Any sample links or code samples would be helpful.
Thanks in advance...
You can send any kind of data you want with Diameter, but keep in mind that it is designed for transmitting Authentication, Authorization and Accounting (AAA) data. This is control data that is primarily used for granting access, enforcing policy, and measuring usage. The actual network traffic that Diameter controls flows over completely different networks. So if control is what you are after, you should next research which interface(s) you would need for your application. There are many good online resources for that, including RFCs, IETF, 3GPP, and Wikipedia.
Diameter peers use Commands to communicate, where Commands are sets of AVPs (Attribute-Value Pair). Commands and AVPs are defined by the applications that use them.
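For a feel of the wire format, a minimal sketch of encoding one AVP as laid out in RFC 6733 §4.1 (the example value is made up; User-Name is AVP code 1):
// An AVP is a 4-byte code, 1 flag byte, a 3-byte length covering
// header + data, then the data padded to a 4-byte boundary
function encodeAvp(code, flags, data) {
    const length = 8 + data.length;            // 8-byte header without Vendor-ID
    const buf = Buffer.alloc(Math.ceil(length / 4) * 4);
    buf.writeUInt32BE(code, 0);
    buf.writeUInt8(flags, 4);
    buf.writeUIntBE(length, 5, 3);
    data.copy(buf, 8);
    return buf;
}

// User-Name AVP with the Mandatory flag (0x40) set
const avp = encodeAvp(1, 0x40, Buffer.from("alice@example.com", "utf8"));
console.log(avp.length, "bytes on the wire");  // 28 (25 padded to a 4-byte boundary)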
Why do you want to use Diameter for file transfer?
I think you can use the Diameter protocol to carry JSON (never tried it myself though), but Diameter is a protocol for a specific purpose and uses AVPs.
It is not possible to send JSON or XML data over the Diameter protocol. Diameter is a strict protocol that works with well-defined command and attribute codes.

Raw CAN data from OBD2

I am new to OBD-II. I want to get raw CAN data from my vehicle (a Renault Duster, India). I am using an OBDLink connector. Basically my question is: how do I extract only the CAN data from the vehicle's OBD connector? Is this possible?
Any comments are appreciated.
The cheapest good option IMO is a Teensy with a CAN bus shield; you can read more about it here: https://oshpark.com/shared_projects/VeJFD9qA
The other off-the-shelf option would be an OBDLink SX adapter; they support reading at the commonly used 500 kbps.
If you're serious about receiving all CAN data without losing any frames, you should buy a SocketCAN-capable CAN adapter. These do not pipe all the frames through serial ICs like the ELM327 or STNxxxx and allow for better bandwidth.
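With a SocketCAN adapter on Linux, reading raw frames can look roughly like this sketch, assuming the socketcan npm package and a can0 interface that is already up:
const can = require("socketcan");

// Open the raw CAN channel and print every frame as it arrives
const channel = can.createRawChannel("can0", true /* timestamps */);
channel.addListener("onMessage", (msg) => {
    // msg.id is the arbitration ID, msg.data a Buffer of up to 8 payload bytes
    console.log(msg.id.toString(16), msg.data);
});
channel.start();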

Is it worth excluding null fields from a JSON server response in a web application to reduce traffic?

Let's say that the API is well documented and every possible response field is described.
Should a web application's server API exclude null fields in a JSON response to lower the amount of traffic? Is this a good idea at all?
I was trying to calculate the amount of traffic reduced for a large app like Twitter, and the numbers are actually quite convincing.
For example: if you exclude a single response field, "someGenericProperty":null, which is 26 bytes, from every single API response, while Twitter reportedly handles 13 billion API requests per day, the traffic reduction is 26 bytes × 13 billion requests ≈ 338 GB per day.
More than 300 GB less traffic every day is quite a money saver, isn't it? That's probably the most naive and simplistic calculation ever, but still.
In general, no. The more public the API and the more potential consumers it has, the more invariant the API should be.
Developers getting started with the API are confused when a field shows up sometimes but not at other times. This leads to frustration and ultimately wastes the API owner's time in the form of support requests.
There is no way to know exactly how downstream consumers are using an API. Often, they are not using it just as the API developer imagines. Elements that appear or disappear based on the context can break applications that consume the API. The API developer usually has no way to know when a downstream application has been broken, short of complaints from downstream developers.
When data elements appear or disappear, uncertainty is introduced. Was the data element not sent because the API considered it irrelevant? Or has the API itself changed? Or is some bug in the consumer's code not parsing the response correctly? If the consumer expects a field and it isn't there, how does that get debugged?
On the server side, extra code is needed to strip those fields from the response. What if the logic that strips out the data is wrong? It's a chance to inject defects, and it means there is more code to maintain.
In many applications, network latency is the dominating factor, not bandwidth. For performance reasons, many API developers will favor a few large request/responses over many small ones. At my last company, the sales and billing systems would routinely exchange messages of 100 KB, 200 KB or more. Sometimes only a few KB of the data was needed, but overall system performance was better than fetching some data, discovering more was needed, and then sending additional requests for that data.
For most applications some inconsistency is more dangerous than superfluous data is wasteful.
As always, there are a million exceptions. I once interviewed for a job at a torpedo maintenance facility. They had underwater sensors on their firing range to track torpedoes. All sensor data were relayed via acoustic modems to a central underwater data collector. Acoustic underwater modems? Yes. At 300 baud, every byte counts.
There are battery-powered embedded applications where every byte counts, as well as low-frequency RF communication systems.
Another exception is sparse data. For example, imagine a matrix with 4,000,000 rows and 10,000 columns where 99.99% of the values are zero. The matrix should be represented with a sparse data structure that does not include the zeros, as sketched below.
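A minimal sketch of such a structure, storing only non-zero cells keyed by their position (numbers are illustrative):
// Only non-zero cells of the 4,000,000 x 10,000 matrix are stored;
// zeros are implied by absence
const sparse = new Map();
const set = (row, col, value) => { if (value !== 0) sparse.set(`${row},${col}`, value); };
const get = (row, col) => sparse.get(`${row},${col}`) ?? 0;

set(1234567, 42, 3.14);
console.log(get(1234567, 42));  // 3.14
console.log(get(0, 0));         // 0 - never stored, costs no memory
console.log(sparse.size);       // 1 entry instead of 40,000,000,000 cells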
It definitely depends on the service and the amount of data it provides; you should evaluate the ratio of null to non-null data and set a threshold above which it is worth excluding those elements.
Thanks for sharing; it's an interesting point to me.
The question comes at it from the wrong side: JSON is not the best format for compressing or reducing traffic in the first place; something like Google Protocol Buffers or BSON is.
I am carefully re-evaluating nullables in the API schema right now. We use Swagger (OpenAPI), and JSON Schema does not really have something like a nullable type; I think there is a good reason for this.
If you have a JSON response that maps a DB integer field which can suddenly be NULL (according to the DB schema), well, that is fine for a relational DB but not at all healthy for your API.
I suggest adopting a much more elegant approach: make better use of "required", also for the response.
If a field is optional in the response API schema and has a null value in the DB, do not return the field at all (a server-side sketch follows).
We have enabled strict schema checks for the API responses as well; this gives us much better control over our data and forces us not to rely on state in the API.
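On the server, dropping null fields before serialization can be as small as this sketch:
// Strip null-valued keys from a response object before sending it,
// so optional fields are simply absent rather than null
function stripNulls(obj) {
    return Object.fromEntries(
        Object.entries(obj).filter(([, value]) => value !== null)
    );
}

// Prints {"id":7,"name":"Alice"} - nickname is omitted, not null
console.log(JSON.stringify(stripNulls({ id: 7, name: "Alice", nickname: null })));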
For the API client that of course means doing checks like:
if ("key" in response) {
console.log("Optional key value:" + response[key]);
} else {
console.log("Optional key not found");
}