Using
NotificationHubClient hub = NotificationHubClient.CreateClientFromConnectionString(notificationHubConnection, notificationHubName, enableTestSend);
NotificationOutcome outcome = await hub.SendDirectNotificationAsync(fcmNotification, deviceUri);
I am finally able to send and receive notifications using FCM via the Azure hub to a Xamarin Android app. However, the payload is not present in the received RemoteMessage even though the sent fcmNotification JSON payload looks good and passes validation. I am basically looking at the RemoteMessage.Data property, but the expected payload data is not there. Looking at the RemoteMessage structure, I haven't found any part of the payload either.
I know that the Azure hub manipulates the notification by adding the necessary headers, like the "application/json" content type. Are there any other settings that need to be passed to enable a "data"-only payload?
Additional settings are not necessary, but the entire notification content has to have this type of structure:
"{ \"data\":{ \"A\": \"aaa\",\"B\": \"bbb\",\"C\": \"ccc\",\"D\": \"ddd\",\"E\": \"eee\",\"F\": \"fff\"}}"
The number of data elements is up to you. The element names can be anything, as can the associated content, except that special characters must be escaped with backslashes. Both element names and content can be inserted from variables to build a traditional composite string.
What is especially important is the inserted spaces, as shown. Also note that conventional JSON formatting is not acceptable, because those spaces are needed.
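For illustration, here is a minimal sketch (untested) of composing such a payload as a composite string and sending it with the client from the question; hub and deviceUri are the variables from above, and FcmNotification is the Microsoft.Azure.NotificationHubs wrapper for a raw FCM JSON body (older SDK versions call it GcmNotification):
string valueA = "aaa"; // element content inserted from a variable
string payload = "{ \"data\":{ \"A\": \"" + valueA + "\",\"B\": \"bbb\"}}";
NotificationOutcome outcome = await hub.SendDirectNotificationAsync(new FcmNotification(payload), deviceUri);
// On the Xamarin Android side the values then appear in RemoteMessage.Data["A"], etc.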
Related
We want to send some events to Application Insights with data showing which features a user owns and which are available for the session. These are variable, and the list of items will probably grow/change as we continue deploying updates. Currently we do this by building a list of properties dynamically at start-up, with values of Available/True.
Since AI formats each event's data as JSON, we thought it would be interesting to send our custom data as JSON too, so it can be processed in a similar fashion. Having tried to send data as JSON, though, we bumped into an issue where AI seems to add escape characters to the strings:
E.g. if we send a property through with JSON like:
{"Property":[{"Value1"},..]}
It gets saved in AI as:
{\"Property\":[{\"Value1\"},..]} ).
Has anyone successfully sent custom JSON to AI, or is the platform deliberately safeguarding against such usage? In our case, where we parse the data out in Power BI, being able to send a JSON array would considerably simplify and speed up some queries.
AI treats custom properties as strings, so you'd have to stringify any JSON you want to send (and keep it under the length limit for custom property values), then re-parse it on the other side.
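For example, here is a minimal sketch with the .NET SDK; the event and property names are invented, and any JSON serializer would do:
using System.Collections.Generic;
using Microsoft.ApplicationInsights;
using Newtonsoft.Json;

var telemetry = new TelemetryClient();
// Stringify the feature map into a single custom property. Keep the result
// under the documented property-value length limit (8192 characters at the time of writing).
string featuresJson = JsonConvert.SerializeObject(
    new Dictionary<string, bool> { ["FeatureA"] = true, ["FeatureB"] = true });
telemetry.TrackEvent("SessionStart",
    new Dictionary<string, string> { ["Features"] = featuresJson });
// On the query side, parse the "Features" string back into JSON.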
I have a situation where I have to write an API to create a resource, and among the data fields I need to accept is a string that is basically the contents of an HTML file. As I see it, I have a choice between structuring the entire thing as a JSON object, where this field is a string field containing the URL-encoded HTML, and using multipart/form-data as the Content-Type, where each of the fields and the (UTF-8 encoded) HTML string is a part in the message.
I am not comfortable abandoning JSON, as I feel it violates REST principles by leaving the content of the entity I am about to create unstructured; consumers lose information because they cannot tell immediately from my API definition what data to feed it. Practically, though, multipart/form-data handles things like HTML file content better and more efficiently, since I will not have to URL-encode it and I can also control the character encoding.
Which is the better approach in this context while upholding RESTful principles? Are there other trade-offs I should be aware of? And what about parsing JSON with a huge string field (~200 KB) embedded in it?
EDIT: I was reading some similar questions on SO, and one approach that stood out was the two-step approach: make a first call with the metadata to create the entity, then upload the file to the created entity as an update using multipart/form-data. In that context, I guess what I am asking is how sound an approach is where I send both the metadata and the file in a single API call as multipart data, with each metadata field being a part in the multipart message, as is the file.
The canonical way to upload files to a REST API is multipart/form-data. As the W3C recommendation says:
The content type "multipart/form-data" should be used for submitting
forms that contain files, non-ASCII data, and binary data.
multipart/form-data has advantages over base64 for representing binary data: it sticks to the REST/HTTP philosophy and simplifies the development of API clients.
Returning values from Forms: multipart/form-data
W3 Recommendation guide
The good practice is to use multipart/form-data whenever files are uploaded to the server along with database fields. Do not send a base64 JSON string as the request to your REST API, as it might corrupt the file or degrade the performance of your application.
As far as documenting a multipart/form-data REST API for your consumers is concerned, you have to require your API consumers to use the form fields you have predefined in your web service.
Returning Values from Forms: multipart/form-data
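As a concrete illustration of the single-call approach from the question's edit, here is a rough sketch using .NET's HttpClient; the endpoint and field names are invented:
using System;
using System.IO;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class Upload
{
    static async Task Main()
    {
        using var http = new HttpClient();
        using var form = new MultipartFormDataContent();
        // Ordinary metadata fields, one part each.
        form.Add(new StringContent("My page"), "title");
        // The HTML content goes in as its own UTF-8 part; no URL-encoding required.
        form.Add(new StringContent(File.ReadAllText("page.html"), Encoding.UTF8, "text/html"),
                 "content", "page.html");
        HttpResponseMessage response = await http.PostAsync("https://example.com/api/resources", form);
        Console.WriteLine(response.StatusCode);
    }
}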
I started using FormData objects everywhere on the client-side, in lieu of regular form input fields, for dynamic REST posts. FormData is presented in a positive light in various tutorials, so I went with it.
However, down the line, this caused me problems when decoding the form data into my Go structs. FormData objects are sent as "multipart/form-data" (regardless of files being sent) and I believe my decoder in Go didn't convert the raw data back to string form. Eventually my SQL queries were throwing panics, as hex data was being sent in where strings should have been.
With some adjustment I could have kept using FormData, but I've decided to revert to the simple, universal recommendation: use "multipart/form-data" only for special cases like sending files. Otherwise, just use regular "application/x-www-form-urlencoded".
I'm using Django REST Framework and all the API calls come from Android and iOS apps. The system works perfectly most of the time, however, when an internal server error happens and I get an email from Django, the POST of the WSGIRequest contains <could not parse> instead of the actual posted JSON data (even though 'CONTENT_TYPE': 'application/json' is also in the header, and the data is sent as JSON).
This is really annoying, as it would be great to see the request body that actually causes the error, not just the stacktrace.
The <could not parse> part is very similar to this question (in the ModPythonRequest part): django request.POST contains <could not parse>, except the actual problem is slightly different. Also the reference link in that question (https://stackoverflow.com/questions/12471661/mod-python-could-not-parse-the-django-post-request-for-blackberry-and-some-andro) seems to have gone down, even though the name looked very promising.
I'm on Django 1.6.2 and DRF 2.3.13.
The POST dictionary of the WSGIRequest is always going to be unparseable here, because it is only meant to hold parsed form data when the Content-Type is application/x-www-form-urlencoded or multipart/form-data.
The data you want is in the body attribute of the WSGIRequest object, which isn't output when that object is converted to a string to be written to the log.
When using Django REST framework, you will typically want to access request.DATA (which will handle whatever formats you have parsers configured for - defaulting to form content and JSON) instead of Django's standard request.POST, which will only handle form encoded data.
Recently I ran into a weird issue with HTTP header usage (Adding multiple custom http request headers mystery). To work around the problem at the time, I put the fields into a JSON string and added that JSON string to a single header instead of adding the fields as separate HTTP headers.
For example, instead of
request.addHeader("UserName", mUserName);
request.addHeader("AuthToken", mAuthorizationToken);
request.addHeader("clientId","android_client");
I created a JSON string and added it to a single header:
String jsonStr="{\"UserName\":\"myname\",\"AuthToken\":\"123456\",\"clientId\":\"android_client\"}";
request.addHeader("JSonStr",jsonStr);
Since I am new to writing REST services and dealing with HTTP, I don't know whether this usage is proper or not. I would appreciate some insight into this.
Some links
http://lists.w3.org/Archives/Public/ietf-http-wg/2011OctDec/0133.html
Yes, you may use JSON in HTTP headers, given some limitations.
According to the HTTP spec, your header field-body may only contain visible ASCII characters, tab, and space.
Since many JSON encoders (e.g. json_encode in PHP) will encode invisible or non-ASCII characters (e.g. "é" becomes "\u00e9"), you often don't need to worry about this.
Check the docs for your particular encoder or test it, though, because JSON strings technically allow almost any Unicode character. For example, JavaScript's JSON.stringify() does not escape multibyte Unicode by default. However, you can easily modify it to do so, e.g.:
// Escape everything outside the visible ASCII range so the result
// is safe to use in an HTTP header field-body.
var charsToEncode = /[\u007f-\uffff]/g;
function http_header_safe_json(v) {
  return JSON.stringify(v).replace(charsToEncode, function(c) {
    // Emit a \uXXXX escape, zero-padded to four hex digits.
    return '\\u' + ('000' + c.charCodeAt(0).toString(16)).slice(-4);
  });
}
Source
Alternatively, you can do as #rocketspacer suggested and base64-encode the JSON before inserting it into the header field (e.g. the way JWT does it). This makes the JSON unreadable by humans in the header, but ensures that it conforms to the spec.
Worth noting, the original ARPA spec (RFC 822) has a special description of this exact use case, and the spirit of this echoes in later specs such as RFC 7230:
Certain field-bodies of headers may be interpreted according to an
internal syntax that some systems may wish to parse.
Also, RFC 822 and RFC 7230 explicitly give no length constraints:
HTTP does not place a predefined limit on the length of each header field or on the length of the header section as a whole, as described in Section 2.5.
Base64-encode it before sending, just like JSON Web Tokens do.
Here's a Node.js example:
const myJsonStr = JSON.stringify(myData);
// Base64 output is header-safe: no quotes, newlines or non-ASCII characters.
const headerFriendlyStr = Buffer.from(myJsonStr, 'utf8').toString('base64');
res.setHeader('foo', headerFriendlyStr);
Decode it when you need to read it:
const myBase64Str = req.headers['foo'];
const myJsonStr = Buffer.from(myBase64Str, 'base64').toString('utf8');
const myData = JSON.parse(myJsonStr);
Generally speaking, you do not send data in the headers of a REST API. If you need to send a lot of data, it is best to use an HTTP POST and send the data in the body of the request. But it looks like you are trying to pass credentials in the header, which some REST APIs do use. Here is an example of passing credentials in a REST API for a service called SMSIfied, which allows you to send SMS text messages via the Internet. This example uses basic authentication, which is a common technique for REST APIs, but you will need to use SSL with it to make it secure. Here is an example of how to implement basic authentication with WCF and REST.
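For illustration, a minimal sketch of the basic-authentication approach with .NET's HttpClient; the credentials are placeholders and HTTPS is assumed:
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;

var http = new HttpClient();
// Basic auth: "username:password", base64-encoded, in the standard
// Authorization header rather than a custom JSON header.
string credentials = Convert.ToBase64String(Encoding.ASCII.GetBytes("username:password"));
http.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", credentials);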
From what I understand, using a JSON string in a header is not as much of an abuse as using HTTP DELETE in place of HTTP GET; there has even been a proposal to use JSON in HTTP headers. Of course, more thorough insights are still welcome, and the accepted answer is still to be given.
In the design-first age, the OpenAPI specification gives another perspective: an HTTP header parameter may be an object. E.g. the object value of X-MyHeader is
{"role": "admin", "firstName": "Alex"}
However, within the HTTP request/response the value is serialized, e.g.:
X-MyHeader: role=admin,firstName=Alex
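For instance, a header parameter declared as follows in an OpenAPI 3 document (a hand-written fragment, not taken from any real API) produces the serialization shown above; simple is the default style for headers, and explode: true flattens the object into name=value pairs:
{
  "name": "X-MyHeader",
  "in": "header",
  "schema": { "type": "object" },
  "style": "simple",
  "explode": true
}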
Let's say I have a Northwind database and I use an ADO.NET Entity Data Model that I generate automatically from the tables in the database. Then I add a new WCF Data Service that inherits from DataService<T>. When I start the web application that runs the service, I can request data like this:
http://machine/Northwind.svc/Orders
This will return all orders from the Orders table in Atom/XML format. The problem is that I do not want XML; I want JSON. I think I have tried all kinds of settings (web.config) and attributes in my application, but I still get XML no matter what. I can only get JSON when I use Fiddler and change the request's Accept header to ask for JSON.
I do not like the concept of content negotiation. I want the data always returned in JSON format. How can I achieve that?
Keep in mind that I did not create any model objects, they are automatically created based on database tables and relationships.
Well, content negotiation comes with HTTP. In any case, you could intercept the incoming request and add/overwrite the Accept header to always specify JSON. There's a sample of how to support JSONP that uses a similar trick; I think you should be able to modify it to always return JSON as well: http://archive.msdn.microsoft.com/DataServicesJSONP.
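Something along these lines, as a sketch (untested) modeled on that sample's trick; NorthwindEntities stands in for your generated context class:
using System.Data.Services;

public class Northwind : DataService<NorthwindEntities>
{
    public static void InitializeService(DataServiceConfiguration config)
    {
        config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
    }

    // Rewrite the Accept header before the request is processed, so every
    // response is serialized as JSON regardless of what the client asked for.
    protected override void OnStartProcessingRequest(ProcessRequestArgs args)
    {
        args.OperationContext.RequestHeaders["Accept"] = "application/json";
        base.OnStartProcessingRequest(args);
    }
}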
The behavior you criticize is defined by the OData protocol specification. OData defaults to Atom, and the client can control the media type of the representation either via the Accept HTTP header or via the $format parameter in the query string (though I'm not sure whether WCF Data Services supports the latter yet).