I'm writing an app using Node.js and Express.js. The app has a (small) REST API and then a web front end. I use MongoDb.
For the API, I tend to POST data to some endpoint, do processing or whatever, and dump it in a database. However, I have some database schema I would like to enforce. What are my options / best practices for enforcing a specific structure on my POST data, so I know that certain fields are present and of specific types?
It would be nice if this could be done at the middleware level, but it isn't necessary. What do people usually do for validation / schema enforcement?
node-validator is what you are looking for. You can use it as a standalone module like this:
var check = require('validator').check;
// Validate
check('test@email.com').len(6, 64).isEmail(); // Methods are chainable
check('abc').isInt(); // Throws 'Invalid integer'
Or you can use express-validator, which is built on top of node-validator, as middleware.
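For instance, a minimal sketch with express-validator (the API surface has changed across versions, so treat the exact method names as illustrative):

var express = require('express');
var { body, validationResult } = require('express-validator');

var app = express();
app.use(express.json());

app.post('/users',
  // Declarative per-field rules run as ordinary Express middleware
  body('email').isEmail(),
  body('age').isInt({ min: 0 }),
  function (req, res) {
    var errors = validationResult(req);
    if (!errors.isEmpty()) {
      // Reject before the request ever reaches the database code
      return res.status(400).json({ errors: errors.array() });
    }
    res.status(201).send('ok');
  }
);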
Here is a more recent benchmark of various JSON schema validators.
Also, for best practices you may want to check out JSON Schema, which attempts to lay out a standard way of defining how a JSON object should be structured.
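As a sketch of what that buys you, here is how a made-up schema could be enforced with a JSON Schema validator such as Ajv:

var Ajv = require('ajv');
var ajv = new Ajv();

// Hypothetical schema: 'email' and 'age' are required, with fixed types
var schema = {
  type: 'object',
  properties: {
    email: { type: 'string' },
    age: { type: 'integer', minimum: 0 }
  },
  required: ['email', 'age'],
  additionalProperties: false
};

var validate = ajv.compile(schema);
var valid = validate({ email: 'test@email.com', age: 30 });
if (!valid) {
  console.log(validate.errors); // One entry per violated constraint
}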
What is the best way to handle a JSON API request in Symfony?
For example, I have a customers API that allows me to add a new customer, edit and delete one, and read them as well.
My question is how to develop this in Symfony: as a bundle? With a database and entity classes?
If you have any tutorial, that would be great.
As a Client:
You can use Guzzle to make HTTP calls for you and then convert the JSON responses using PHP's native json_decode.
Another option is to use a REST client; you will find many of them on Packagist. Here is one:
https://packagist.org/packages/Mashape/unirest-php
As a Server:
FOSRestBundle is going to provide you with many of the features you'll need. Here is a tutorial:
http://williamdurand.fr/2012/08/02/rest-apis-with-symfony2-the-right-way/
Of course you should check more sources as well. I have a project of my own which uses it, and you can check some of its code:
https://github.com/renatomefidf/sammui
You can also check the LiipHelloBundle for various other usages and examples:
https://github.com/liip/LiipHelloBundle
Good luck!
I am working on a demo using MarkLogic to store emails exported from Outlook as XML, so that they stay searchable and accessible when I move away from Outlook.
I am using an AngularJS front end calling either the native MarkLogic REST services or my own REST services written in Java using Jersey.
The MarkLogic search REST service works very well for getting back a list of references to documents based on various search criteria, but I also want to display information stored inside the found documents.
I would like to avoid multiple REST calls and to get back only the needed information, so I am trying to use the eval REST service to run an XQuery.
It works well for getting XML back (inside a multipart/mixed message), but I don't seem to be able to get JSON instead, which would be much more convenient and is very easy with most other MarkLogic REST services.
I could use json:transform-to-json() in my XQuery, or transform the XML to JSON in my Java code, but neither looks very elegant to me.
Is there a more efficient method to get where I am trying to go?
First, json:transform-to-json seems plenty elegant to me. But of course it's not always the right answer.
I see three options you haven't mentioned.
server-side transforms - REST search supports server-side transforms that transform each document when you perform a bulk read by query. Those server-side transforms could generate any JSON you need.
search extract-document-data - this is the simplest way to extract portions of documents. It works best if your documents are JSON, so they match your JSON response; otherwise you get XML inside your JSON response... unless you're OK with that.
custom search snippets - another very powerful way to customize what search returns
None of these options requires the privileges that eval requires, which is a very good thing. Since eval allows execution of arbitrary code on your server, it requires special privileges and should be used with great care. Two other options to consider before you reach for eval are (1) custom XQuery installed in an HTTP server, and (2) REST extensions.
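As a rough client-side sketch of the first option, assuming a transform named to-json has already been installed on the server (the transform name and query term are made up; authentication is omitted for brevity):

// Hypothetical: assumes a transform named 'to-json' has been installed
// beforehand at /v1/config/transforms/to-json on the MarkLogic server.
fetch('http://localhost:8000/v1/search?q=meeting&transform=to-json', {
  headers: { 'Accept': 'application/json' }
})
  .then(function (res) { return res.json(); })
  .then(function (results) {
    // Each matching document arrives already transformed to JSON
    console.log(results);
  });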
The answers from Sam are what I would suggest. Specifically, I would set a search option for extract-document-data. (This is a Search API option: if you are POSTing the request, you can add the option in the XML you post; if you are using GET, you need to register the option ahead of time and reference it by name.) Relevant URLs to assist:
https://docs.marklogic.com/guide/rest-dev/search#id_48838
https://docs.marklogic.com/guide/search-dev/appendixa#id_44222
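For the POST variant, here is a hedged sketch of a combined query that carries the option inline, per the Search API docs above (the extract-path is made up; point it at the elements you actually need, and note authentication is omitted):

var combined =
  '<search xmlns="http://marklogic.com/appservices/search">' +
    '<qtext>watermellon</qtext>' +
    '<options>' +
      '<extract-document-data selected="include">' +
        '<extract-path>/message/subject</extract-path>' +
      '</extract-document-data>' +
    '</options>' +
  '</search>';

fetch('http://localhost:8000/v1/search', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/xml', // combined query + options payload
    'Accept': 'application/json'       // ask for JSON results back
  },
  body: combined
})
  .then(function (res) { return res.json(); })
  .then(function (data) { console.log(data.results); });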
As for JSON: MarkLogic 8 will transform the content for you. Use the Accept header, or just add format=json to your request:
Example - XML, which is how my content is stored:
http://localhost:8000/v1/search?q=watermellon
...
<search:result index="1" uri="/sample/collections/1.xml" path="fn:doc(&quot;/sample/collections/1.xml&quot;)" score="34816" confidence="0.5982239" fitness="0.6966695" href="/v1/documents?uri=%2Fsample%2Fcollections%2F1.xml" mimetype="application/xml" format="xml">
  <search:snippet>
    <search:match path="fn:doc(&quot;/sample/collections/1.xml&quot;)/x">
      <search:highlight>watermellon</search:highlight>
    </search:match>
  </search:snippet>
</search:result>
...
Example - JSON output for the same content (still stored as XML):
http://localhost:8000/v1/search?q=watermellon&format=json
...
"index":1,
"uri":"/sample/collections/1.xml",
"path":"fn:doc(\"/sample/collections/1.xml\")",
"score":34816,
"confidence":0.5982239,
"fitness":0.6966695,
"href":"/v1/documents?uri=%2Fsample%2Fcollections%2F1.xml",
"mimetype":"application/xml",
"format":"xml",
"matches":[
{
"path":"fn:doc(\"/sample/collections/1.xml\")/x",
"match-text":[
{
"highlight":"watermellon"
}
]
}
]
}
...
For real heavy lifting, you can use server-side transforms as in Sam's description. One note about this: server-side transformations are not part of the Search API, but part of the REST API. Just mentioning it so you have some idea of which tool you are using in each case.
I've started using Apache NiFi and I'm still learning it and experimenting with it. I really want to use NiFi to get JSON documents from APIs and put them in my Elasticsearch database. So far this works using the built-in GetTwitter and PutElasticsearch processors.
However, now I want to do this with APIs other than Twitter, and I'm kinda stuck here. First off, I don't even know which processor to use. I would think GetHttp, or InvokeHttp with 'GET' as the HTTP verb, but it doesn't seem to work. If I use GetHttp, do I have to provide an SSL service with a keystore and truststore? Why would I have to do that?
Apache NiFi is still quite new, so it is hard to find decent guides / information about these kinds of things. I have read and searched the documentation but haven't gotten any the wiser.
An example JSON to pick up from an API is:
https://api.ssllabs.com/api/v2/getEndpointData?host=www.bnpparibasfortis.be&s=193.58.4.82
Thanks in advance for anyone that can offer some help / insight.
What processor you use to get the JSON data is entirely dependent on the API you want to hit. The GetHttp or InvokeHttp processors should work to grab the data from a URL. If you'll notice, the SSL service is an optional property for both GetHttp and InvokeHttp, so you only need to use it when you want to communicate via HTTPS. Also, from the UI you can right-click on a processor and then click "usage" to bring up the documentation for that processor.
At this link[1] you can find a NiFi template that uses GetHttp to get JSON data from randomuser.me and does various processing on it. It's primarily a template to showcase the different Avro processors, but the method of grabbing the JSON should be relevant.
[1] https://github.com/hortonworks-gallery/nifi-templates/blob/master/templates/Convert_To_Avro_From_CSV_and_JSON.xml
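As a quick sanity check outside NiFi, you can confirm that the example endpoint answers a plain HTTPS GET with JSON (so no client keystore should be required), e.g. with the fetch API in a browser console or Node 18+:

// The server only needs a normal HTTPS connection; no client
// certificate (keystore) is involved for this public API.
fetch('https://api.ssllabs.com/api/v2/getEndpointData?host=www.bnpparibasfortis.be&s=193.58.4.82')
  .then(function (res) { return res.json(); })
  .then(function (data) { console.log(data); });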
I have a situation where I have to write an API to create a resource, and among the data fields I need to accept is a string that is basically the contents of an HTML file. As I see it, I have a choice between structuring the entire thing as a JSON object, where this field is a string holding the URL-encoded HTML, and using multipart/form-data as the Content-Type, where each of the fields and the (UTF-8 encoded) HTML string is a part in the message.
I am not comfortable with dropping JSON, as I feel it violates REST principles not to structure the content of the entity I am about to create: there is a loss of information for consumers, since they can't tell immediately from my API definition what data to feed it. But practically, multipart/form-data handles things like HTML file content better and more efficiently, as I will not have to URL-encode it and can control the character encoding as well.
Which is the better approach in this context while upholding RESTful principles? Also, are there other trade-offs I should be aware of? What about parsing a JSON document with a huge string field (~200 KB) embedded?
EDIT: I was reading some similar questions on SO, and one approach that stood out was the two-step approach of making a first call with the metadata to create the entity, and then uploading the file as an UPDATE to the created entity using multipart/form-data. In that context, I guess what I am asking is how sound an approach it is to send both the metadata and the file in a single API call as multipart data, where each metadata field is a part in the multipart message, as is the file.
The canonical way to upload files to a REST API is to use multipart/form-data. As the W3C recommendation says:
The content type "multipart/form-data" should be used for submitting
forms that contain files, non-ASCII data, and binary data.
multipart/form-data has advantages over base64 for representing binary data: it sticks to the REST/HTTP philosophy and simplifies the development of API clients.
Returning values from Forms: multipart/form-data
W3 Recommendation guide
Good practice is to use multipart/form-data whenever files are uploaded to the server along with database fields. Do not send a base64-encoded JSON string as the request to your REST API, as it inflates the payload (base64 adds roughly a third in size) and degrades the performance of your application.
As far as documenting a multipart/form-data REST API for your consumers is concerned, you have to require your API consumers to use the same form fields which you have predefined in your web service.
Returning Values from Forms: multipart/form-data
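For the single-call variant from the question's EDIT, here is a browser-side sketch (the field names and URL are made up): each metadata field and the HTML content become separate parts of one multipart request:

var form = new FormData();
// Ordinary metadata fields, one part each
form.append('title', 'My resource');
form.append('author', 'jdoe');
// The HTML content as its own part; a Blob lets you pin the charset
var html = new Blob(['<html>...</html>'], { type: 'text/html;charset=UTF-8' });
form.append('content', html, 'page.html');

fetch('/api/resources', {
  method: 'POST',
  body: form // the browser sets multipart/form-data and the boundary itself
});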
I started using FormData objects everywhere on the client side, in lieu of regular form input fields, for dynamic REST POSTs. FormData is presented in a positive light in various tutorials, so I went with it.
However, down the line, this caused me problems when decoding the form data into my Go structs. FormData objects are sent as "multipart/form-data" (regardless of files being sent) and I believe my decoder in Go didn't convert the raw data back to string form. Eventually my SQL queries were throwing panics, as hex data was being sent in where strings should have been.
So with some adjustment I could have kept using FormData; however, I've decided to revert to the simple universal recommendation: use "multipart/form-data" only for special cases, like when sending files. Otherwise, just use regular "application/x-www-form-urlencoded".
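For the simple non-file case, the urlencoded request is easy to build, e.g. with URLSearchParams (endpoint and fields made up):

var params = new URLSearchParams();
params.append('name', 'Jane');
params.append('city', 'Ghent');

fetch('/api/customers', {
  method: 'POST',
  // fetch sets Content-Type: application/x-www-form-urlencoded for us
  body: params
});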
Let's say I have a Northwind database and I use an ADO.NET Entity Data Model which I generate automatically from the tables in the database. Then I add a new WCF Data Service that inherits from DataService. When I start the web application that runs the service, I can request data like this:
http://machine/Northwind.svc/Orders
This will return all orders from the Orders table in Atom/XML format. The problem is I do not want XML; I want JSON. I think I tried all kinds of settings (web.config) and attributes in my application, but I still get XML, no matter what. I can only get JSON when I use Fiddler and change the request header to accept JSON.
I do not like the concept of content negotiation. I want always to return data in JSON format. How can I achieve that?
Keep in mind that I did not create any model objects; they are automatically created based on database tables and relationships.
Well, content negotiation comes with HTTP. In any case, you could intercept the incoming request and add/overwrite the Accept header to always specify JSON. There's a sample of how to support JSONP which uses a similar trick; I think you should be able to modify it to always return JSON as well: http://archive.msdn.microsoft.com/DataServicesJSONP
The behavior you criticize is defined by the OData protocol specification. OData defaults to Atom, and the client can control the media type of the representation either by the Accept HTTP header or by the $format parameter in the query string (but I'm not sure whether WCF Data Services support the latter yet).
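To illustrate both mechanisms from the client side (the URL is the one from the question; whether $format=json is honored depends on the service and OData version):

// Option 1: content negotiation via the Accept header
fetch('http://machine/Northwind.svc/Orders', {
  headers: { 'Accept': 'application/json' }
}).then(function (res) { return res.json(); });

// Option 2: the $format query option, if the service supports it
fetch('http://machine/Northwind.svc/Orders?$format=json');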