Best Practices when using JSON to update DB

I am working on an API for my first web app, a scheduling system, with Django on the back end.
Shifts are sent to the front end in a single JSON object (schedule) that contains shift objects. This object can then be modified by the front end and sent back so the server can update the database according to any changes in the JSON object.
My question: is it better to use "marker" properties, such as {... "delete": true...} and {... "new": true...} within the shift objects to let the server know what has changed, or should the back end figure it out on its own by comparing the incoming data to existing data?
The first option seems to me to allow for fewer database queries, while the second option seems more robust (i.e. it does not depend on the front end to properly tag its changes).
Also, as this is my first attempt at web dev, any related recommendations or criticisms are encouraged.
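To make the first option concrete, here is a rough sketch of the branching I have in mind, written against an in-memory dict instead of the real Django models (the field names are made up):

import json

def apply_schedule(payload, shifts_by_id):
    """Apply a marker-tagged schedule to existing shifts keyed by id."""
    for shift in payload["shifts"]:
        if shift.get("delete"):
            shifts_by_id.pop(shift["id"], None)      # would be a DELETE query
        elif shift.get("new"):
            shifts_by_id[shift["id"]] = shift        # would be an INSERT
        else:
            shifts_by_id[shift["id"]].update(shift)  # would be an UPDATE

existing = {1: {"id": 1, "start": "09:00", "end": "17:00"}}
incoming = json.loads(
    '{"shifts": [{"id": 1, "delete": true},'
    ' {"id": 2, "new": true, "start": "10:00", "end": "18:00"}]}')
apply_schedule(incoming, existing)
print(existing)  # shift 1 removed, shift 2 added

The second option would instead compare payload["shifts"] against the existing rows and derive the inserts, updates, and deletes on the server itself.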

Related

API POST Endpoints and Image Processing

So, for my Android app, I have some data that I would like to POST to an API endpoint as JSON, and one of the pieces of data is an image. Everything besides the image goes into a PostgreSQL database. I want to put the images somewhere (not important where) and then put a link to each image in the database.
Here's the thing: while that image is connected to the other pieces of data I send to the API endpoint, I would be sending the image somewhere else, with the link added to the database at a different time. So here's the mental gymnastic I am trying to get over:
How would I send these two separate pieces of data (an image, and everything else in a single JSON object), and have the image associated with the JSON object that gets put into the database, without the image and data getting mixed up when multiple users are doing the same thing?
To simplify, say I have the following information as a single JSON object going to an endpoint called api.example.com/frontdoor. The object looks something like this:
{
  "visitor_id": "5d548e53-c351-4016-9078-b0a572df0bca",
  "name": "John Doe",
  "appointment": false,
  "purpose": "blahblahblah..."
}
That JSON object is consumed by the server, and its fields are put into their respective tables in the database.
At the same time, an image is taken, given a UUID as a file name, and sent to api.example.com/face; the server then processes it and somehow adds a link to the image in the proper database row.
The question is, how do I accomplish that? How would I go about relating these two pieces of data that get sent to two different places?
In the end, I plan on having a separate endpoint such as api.example.com/visitors provide a JSON object with a list of all visits that looks something like:
{
  "visits": [
    {
      "visitor_id": "5d548e53-c351-4016-9078-b0a572df0bca",
      "name": "John Doe",
      "appointment": false,
      "purpose": "blahblahblah...",
      "image": "imgbin.example.com/faces/c3118272-9e9d-4c54-8824-8cf4cfaa679f.png"
    },
    ...
  ]
}
Mainly, I am trying to get my head around the design of all of this so I can start writing code. Any help would be appreciated.
As I understand it, your question is about executing an action on the server side where two different sub-services are involved: one service to update the text data in a SQL database, and another to store an image and then link the image's reference back to the main data. There are two approaches that come to my mind.
1) Generate a unique ID on the client side and associate it with both the JSON object upload and the image upload. Then, when the image is uploaded, the image upload service can take this ID, find the corresponding record in SQL, and update the image path. However, generating unique IDs on the client side is not a recommended approach, because there is a chance of collision: more than one client could generate the same ID, which would break the logic. To work around this, the client can first make a call to an ID generation service, which will generate the ID uniquely on the server side and send it back; the client can then perform the uploads using that ID. The downside is that the client needs to make an extra call to the server to get the unique ID. The advantage is that the UI can get separate updates for the data and the image: when the data upload service succeeds, it can say that the data was successfully updated, and when the image is uploaded at some later point, it can say that the image upload has completed. Thus, the responses of each upload can be managed differently. However, if the data and image upload have to happen together and have to be atomic (the whole upload fails if either the data or the image upload fails), then this approach can't be used, because the server must group both actions in a transaction.
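A rough sketch of this first approach from the client's point of view (shown with Python's requests library purely for illustration; the /ids endpoint is made up):

import requests

BASE = "https://api.example.com"

# 1. Ask the server for a unique ID so both uploads can reference the same record.
visitor_id = requests.post(BASE + "/ids").json()["id"]

# 2. Upload the text data, tagged with that ID.
requests.post(BASE + "/frontdoor", json={
    "visitor_id": visitor_id,
    "name": "John Doe",
    "appointment": False,
    "purpose": "blahblahblah...",
})

# 3. Upload the image separately, tagged with the same ID, whenever it is ready.
with open("face.png", "rb") as f:
    requests.post(BASE + "/face", data={"visitor_id": visitor_id},
                  files={"image": f})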
2) Another approach is to have a common endpoint for both image and data upload. Both are uploaded together in a single call to the server; the server first generates a unique ID and then makes two parallel calls, one to the data upload service and one to the image upload service, passing this unique ID as a parameter to both. If the two uploads have to be atomic, then the server must group these sub-service calls in a transaction. Regarding the response, it can be synchronous or asynchronous. If the UI needs to wait for the uploads to succeed, then the response will be synchronous and the server will have to wait for both sub-services to complete before responding. But if the UI doesn't need to wait, the server can respond immediately after making the calls to these sub-services with a message that the upload request has been accepted. In this case, the sub-service calls are processed asynchronously.
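A rough server-side sketch of this second approach (Flask and SQLite are used here purely for illustration; the endpoint, table, and field names are assumptions):

import os
import uuid
import sqlite3
from flask import Flask, request, jsonify

app = Flask(__name__)
os.makedirs("faces", exist_ok=True)
db = sqlite3.connect("visits.db", check_same_thread=False)
db.execute("CREATE TABLE IF NOT EXISTS visits "
           "(id TEXT PRIMARY KEY, name TEXT, appointment INTEGER, "
           "purpose TEXT, image TEXT)")

@app.route("/frontdoor", methods=["POST"])
def frontdoor():
    visitor_id = str(uuid.uuid4())        # server-generated correlation ID
    meta = request.form                   # the text fields of the multipart body
    image = request.files["image"]        # the image part of the same request

    image_path = "faces/%s.png" % visitor_id
    image.save(image_path)                # stand-in for the image upload service

    with db:                              # the row insert commits (or rolls back) as one unit
        db.execute("INSERT INTO visits VALUES (?, ?, ?, ?, ?)",
                   (visitor_id, meta["name"],
                    1 if meta.get("appointment") == "true" else 0,
                    meta["purpose"], image_path))

    return jsonify({"visitor_id": visitor_id}), 201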
In my opinion, approach 2 is better because the server has more control over grouping the related actions together. Regarding the response, it depends on the use case. If the user cares about whether their post was properly recorded on the server (as when making a payment), then it is better to have a synchronous implementation. However, if the user initiates the action and leaves (as in the case of generating a report or sending an email), then it can have an asynchronous implementation. The asynchronous implementation is better in terms of server utilization, because the server is free to accept other requests rather than waiting for the sub-services' actions to complete.
These are two general approaches; I am sure there are several variations, or maybe entirely different approaches, for this problem.
Ah, too long of an answer, but I hope it helps. Let me know if you have further questions.

How to check if the change in nested data is permissible

We have a nested JSON structure in our web app on the frontend like Rows > Columns > Elements > Rows > Columns > Elements ...
We also have an API call which sends the entire data as JSON to the backend.
In the backend we have a set of several permissions, like column size change, row background change, element ordering change, etc., that are permitted or denied for various types of users.
We want to identify in the backend if the change of the nested structure is permissible.
Example 1 [Update data]:
The user has CHANGED the size of a 'Column', where the size is represented as a property in the 'Column' object.
or
Example 2 [Remove/Add data]:
The user has removed/added an 'Element' from a 'Column'.
We know that we can do a full traversal of the entire tree and work out whether the change was permissible, but we are looking for a better, faster, resource-saving solution for concurrent connections and many users/big trees.
This question seems to be general for different technologies, but I want to let you know that we are using Laravel / Lumen / Dingo in the backend & Ember.js on the frontend.
Thanks for reading and helping :)
One option is to not send the entire JSON to the server, but to instead send a JSON Patch (see http://jsonpatch.com/). Then, on the server, have rules that effectively map the paths in the patch to permissions. In other words, since you are only sending the change and not the entire JSON, the need to parse the entire tree goes away.
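Roughly, the patch and the permission rules could look like this (a sketch; the rule names and paths are made up):

import re

# The client sends only the changes, as a JSON Patch document, e.g.:
patch = [
    {"op": "replace", "path": "/rows/0/columns/1/size", "value": 6},
    {"op": "add", "path": "/rows/0/columns/1/elements/2", "value": {"type": "text"}},
]

# Rules mapping path patterns to the permission each operation needs (made up).
RULES = [
    (re.compile(r"^/rows/\d+/columns/\d+/size$"), "column.size.change"),
    (re.compile(r"^/rows/\d+/columns/\d+/elements/\d+$"), "element.order.change"),
    (re.compile(r"^/rows/\d+/background$"), "row.background.change"),
]

def is_permitted(patch, user_permissions):
    for op in patch:
        needed = next((perm for pattern, perm in RULES if pattern.match(op["path"])), None)
        if needed is None or needed not in user_permissions:
            return False   # unknown path or missing permission: reject the whole patch
    return True

print(is_permitted(patch, {"column.size.change", "element.order.change"}))  # True
print(is_permitted(patch, {"column.size.change"}))                          # False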
You can have an API for returning permissions (have a Permission model).
Then check for that permission for any actions you need in the frontend by using ember-can.
This way, you can ensure that when you send data back from the frontend to the backend for updating, it complies with the permissions defined in the backend, with no need for a lot of back and forth.
I think you can have a type for each change. For example, a column change maps to colChange (or simpleChange). Send the type of change with the JSON; permission can then be checked by change type. There can also be groups of change types, and permissions can be set on groups. If you don't send data on each change, keep a stack of user changes (push the type of change onto the stack on each user change), and send that stack with the JSON to the backend.
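A tiny sketch of that idea, with made-up change types and groups:

# Each user action pushes a change type onto a stack that is sent with the JSON.
changes = ["colSizeChange", "elementAdd"]

# Change types belong to groups, and permissions are granted per group (made-up data).
GROUP_OF = {"colSizeChange": "layout", "rowBackgroundChange": "style", "elementAdd": "content"}
user_groups = {"layout", "content"}

allowed = all(GROUP_OF.get(change) in user_groups for change in changes)
print(allowed)  # True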

Review before writing to database from UI

This is more of a question on design approach. I have an application which has the following details:
UI in Angular
UI uses an API which is in Node/Express
Database is just a JSON file for now.
I want to move from the JSON file to MongoDB. What I'd like is: whenever anyone uses the UI to make changes to the database, I'd like to review the changes before they are applied. What is the best way to achieve this?
This was easier for me with the JSON file because I was creating a pull request on git where I would review all the changes and then update.
Things that I have thought of:
Let the UI write to a separate clone collection (table), then review the changes there and update the main collection accordingly. Not sure if this is the right way to do it.
Are you yourself wanting to review changes, or wanting an end user to review before saving? If it's you, you have a few options:
You can create a MongoDB collection of pending objects that get moved to a different collection once they're approved. This is OK, but not great, because you'll end up shuttling objects around; it's probably more reasonable to use a flag and do aggregate grouping instead of collection-based delineation.
You can simply use a property on an object as a flag and send objects that are pending review to your DB with that flag enabled (using some value like true, 1, or another way of saying "this is true/on/enabled, etc.").
If you want an end user to be able to save, you can use Mongoose hooks/middleware to fire off validators or whatever you want and return a response with meaningful data back to your Angular UI. From there, you can have the user 'review' what they're saving. Nothing persists or gets saved at that point; it's only saved once they send everything back up again (if that's how you choose to build the save process).
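A minimal sketch of the flag approach (written with pymongo here purely for illustration, even though the API in question is Node/Express; collection and field names are made up):

from pymongo import MongoClient

items = MongoClient()["mydb"]["items"]

# Writes coming from the UI land with a pending flag set.
items.insert_one({"name": "new thing", "pendingReview": True})

# Your review step: list what is pending, then approve or reject each document.
for doc in items.find({"pendingReview": True}):
    print(doc)

# Approve by clearing the flag; reject by deleting the document instead.
items.update_many({"pendingReview": True}, {"$set": {"pendingReview": False}})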

What are the common ways to delete local/client objects using a REST API?

Is there a common design pattern for dispatching deleted objects to the requestor (the client of the API)?
Challenges we are having:
1. When an object is deleted on the API completely, the client will not know that the object is gone and will keep it locally (as the API only shows objects changed after a certain date).
2. If we add a property to the object to show that it is deleted (e.g. "deleted = TRUE"), then eventually the number of objects in the API grows and slows down the transfer rate.
Another option we are looking into is to have a separate endpoint on the API that shows a list of deleted objects only (is this a pattern that anyone uses?).
I'm looking for the most "RESTful" way to delete local objects.
The way I handle it is a variation on your #1: each item has a last-updated field in the database, and if something is deleted, I make an entry in another table of deleted items, whose updated value is when it was deleted.
The client makes a request asking for "changes since X", where X is its own locally stored last-updated value... it returns new data and an array of deleted items. Then, on the client, I purge those values.
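In code, the idea looks roughly like this (a sketch with made-up field names, not my actual implementation):

from datetime import datetime

def changes_since(last_sync, items, deleted_items):
    """Server side: items and deleted_items are rows with an 'updated' timestamp."""
    return {
        "items": [i for i in items if i["updated"] > last_sync],
        "deleted": [d["id"] for d in deleted_items if d["updated"] > last_sync],
    }

def apply_on_client(response, local_store):
    """Client side: upsert the changed items, purge the deleted ones."""
    for item in response["items"]:
        local_store[item["id"]] = item
    for deleted_id in response["deleted"]:
        local_store.pop(deleted_id, None)

last_sync = datetime(2024, 1, 1)
items = [{"id": 7, "name": "foo", "updated": datetime(2024, 2, 1)}]
deleted = [{"id": 3, "updated": datetime(2024, 3, 1)}]
local = {3: {"id": 3}, 7: {"id": 7, "name": "old"}}
apply_on_client(changes_since(last_sync, items, deleted), local)
print(local)  # item 7 refreshed, item 3 purged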
Stale data is always a problem with client/server applications. If a client loads some data, then some object is deleted on the server, and then the client sends a DELETE request, the RESTful thing to do would be to return a 404, which indicates "not found". If the client sends a DELETE and gets a 404, it knows the resource was deleted from underneath it...
What if you think of your resource not as a list, but rather as a changeset?
E.g. the changesets you have in git or SVN.
This way, there's always a "head" version, and the client always has some version, and the resource is the change between client's last and head.
That way you can apply whatever you've learned by examining/using version control systems.
If you need anything more complex, the science behind it is called Operational Transformation (OT): http://en.wikipedia.org/wiki/Operational_transformation

Hoping to port a working jQuery Validator implementation's rules to JSONSchema

I'm attempting to move an existing (and working) client-side jQuery validation schema to JSONSchema to allow myself to validate arbitrary JSON on both the client and server.
My application is essentially a bunch of gigantic forms with lots of complex logic determining which questions should be asked based on the user's response to other questions. The forms each have over 200 fields.
Right now I'm only doing client-side validation and that works well about 99% of the time. Browser issues have cropped up on a few occasions, but nothing catastrophic. That being said, I want to do server-side validation (!).
After reading the JSONSchema draft and browsing around some of the v3 implementations, it seems like I might lose some of the more complex rules that my application has come to depend upon. I want to be sure that I'm not missing something before moving too far in any direction.
Some examples:
"If x == 10, then y is required, otherwise it's optional"
10 could be a literal value, an enum, etc., but I need to be able to reference another field in the same structure and guarantee that its value not only exists, but is equivalent to a specific type/value.
I think this is addressed in this thread on the JSONSchema list.
"If x = today's date, and y = tomorrow's date, then x > y"
This logic will be used to ensure that the "from" date comes before the "to" date.
From what I can see there's nothing like this and the only way I can see doing it is passing in a freshly eval-ed chunk of JSON as the schema.
The closest thing I've found to meet the above needs is CERNY.
If I'm barking up the wrong tree, please let me know. I've also looked into running backbone.js on both the client and server.
tl;dr:
I want to maintain one set of validation rules for large and complex forms and apply these validation rules to arbitrary JSON documents on both the client and server side.
There are many tricks, but not all of them are possible. For example,
"if x == 10 then y is required" can be achieved with something like this (draft 3):
"type":[
{"properties":{"x":{"enum":[10]}, "y":{"required":true}}},
{"properties":{"x":{"disallow":[{"enum":[10]}]}}}
]
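If you want to try it out, here is a quick sketch using the Python jsonschema package (assuming its Draft3Validator, since the schema above is draft 3):

from jsonschema import Draft3Validator

schema = {
    "type": [
        {"properties": {"x": {"enum": [10]}, "y": {"required": True}}},
        {"properties": {"x": {"disallow": [{"enum": [10]}]}}},
    ]
}

validator = Draft3Validator(schema)
print(validator.is_valid({"x": 10, "y": 1}))  # True: x is 10 and y is present
print(validator.is_valid({"x": 10}))          # False: x is 10 but y is missing
print(validator.is_valid({"x": 5}))           # True: x is not 10, so y is optional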
Let's say it's possible but very tricky… a schema is basically supposed to validate the structure, not its content (even if there are a few properties for this).
Another possible way, which I personally like, is to "extend" the current validation graph with an external URL-based schema. The idea is to send parameters of the current document to a URL, which will return a relevant schema according to those parameters.
Example:
{
  "extends": {"$ref": "http://checkCustomValidity/{x}/{y}/"}
}
Where at "runtime" the schema sent back could be a {"disallow":"any"} if not allowed or {} if ok
This is useful as the URL can be used for both the client and the server side (your client will not be completely standalone, but in some cases you just cannot avoid that).
A real-life usage for this is in cases where you are obliged to use a remote service anyway. For example, if I have to check whether my nickname is already used on the server during registration, I code a server-side web service answering to the request path: http://www.server.com/isNicknameUsed/{nickname}
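As an illustration only (not my actual service), such a remote schema endpoint could be as small as this Flask sketch, with a placeholder rule:

from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/checkCustomValidity/<x>/<y>/")
def check_custom_validity(x, y):
    # Placeholder rule: pretend the combination is only valid when x and y differ.
    if x == y:
        return jsonify({"disallow": "any"})   # a schema that rejects everything
    return jsonify({})                        # an empty schema accepts anything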