REST: Updating multiple records - json

I need to update multiple records using a single HTTP request. An example is selecting a list of emails and marking them as 'Unread'. What is the best (RESTful) way to achieve this?
The way I'm doing it right now is by using a sub-resource action:
PUT http://example.com/api/emails/mark-as-unread
(in the body)
{"ids": [1, 2, 3, ...]}

I read this site - http://restful-api-design.readthedocs.io/en/latest/methods.html#actions - and it suggests to use an "actions" sub-collection. e.g.
POST http://example.com/api/emails/actions
(in the body)
{"type":"mark-as-unread", "ids":[1,2,3....]}
Quotes from the referenced webpage:
Sometimes, it is required to expose an operation in the API that inherently is non RESTful. One example of such an operation is where you want to introduce a state change for a resource, but there are multiple ways in which the same final state can be achieved, ... A great example of this is the difference between a “power off” and a “shutdown” of a virtual machine.
As a solution to such non-RESTful operations, an “actions” sub-collection can be used on a resource. Actions are basically RPC-like messages to a resource to perform a certain operation. The “actions” sub-collection can be seen as a command queue to which new action can be POSTed, that are then executed by the API. ...
It should be noted that actions should only be used as an exception, when there’s a good reason that an operation cannot be mapped to one of the standard RESTful methods. ...
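For illustration, a minimal sketch of how such an actions endpoint could dispatch server-side (framework-free JavaScript; the in-memory store shape and the handler registry are assumptions, not from the question):

```javascript
// In-memory stand-in for the email store; a real API would hit a database.
const emails = new Map([
  [1, { id: 1, unread: false }],
  [2, { id: 2, unread: false }],
  [3, { id: 3, unread: true }],
]);

// Handlers for the "actions" sub-collection, keyed by action type.
// Adding a new action means adding one entry here.
const actionHandlers = {
  'mark-as-unread': (store, ids) => {
    for (const id of ids) {
      const email = store.get(id);
      if (email) email.unread = true;
    }
  },
};

// What a POST /api/emails/actions body would be routed through.
function handleAction(store, body) {
  const handler = actionHandlers[body.type];
  if (!handler) throw new Error(`unknown action: ${body.type}`);
  handler(store, body.ids);
}

handleAction(emails, { type: 'mark-as-unread', ids: [1, 2] });
```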

Create an algorithm-endpoint, like
http://example.com/api/emails/mark-unread
mark-unread is an algorithm name. It becomes the endpoint name in REST, and the list of ids are the arguments to this algorithm. Typically people send them as URL query arguments in the POST call, like
http://example.com/api/emails/mark-unread?ids=1,2,3,4
This is safe in the sense that POST makes no idempotency promises, so side effects are expected behavior. You might decide differently: if your bulk update carries the entire state of the objects, opt for PUT
http://example.com/api/emails/bulk-change-state
then you would have to put the actual state into the body of the http call.
I'd prefer a bunch of simple algorithms like mark-unread?ids=1,2,3,4 over one monolithic PUT, as it helps with debugging, is transparent in logs, etc.
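Server-side, parsing the comma-separated ids argument is straightforward; a minimal sketch in JavaScript (the query-string shape is taken from the URL above, the function name is invented):

```javascript
// Turns the "1,2,3,4" value of the ids query argument into an array of numbers.
// Invalid entries are dropped rather than failing the whole request.
function parseIds(raw) {
  if (!raw) return [];
  return raw
    .split(',')
    .map((s) => Number.parseInt(s.trim(), 10))
    .filter((n) => Number.isInteger(n));
}

console.log(parseIds('1,2,3,4')); // → [ 1, 2, 3, 4 ]
console.log(parseIds('1, x, 3')); // → [ 1, 3 ]
```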

It's a bit complicated to get an array of models into an action method as an argument. The easiest approach is to form a JSON string on your client and POST all of that to the server (to your action method). You can adopt the following approach.
Say your email model is like this:
public class Email
{
    public int EmailID { get; set; }
    public int StatusID { get; set; }
    // more properties
}
So your action method will take the form:
public bool UpdateAll(string EmailsJson)
{
    // Note: the element type is Email[], not Emails[]
    Email[] emails = JsonConvert.DeserializeObject<Email[]>(EmailsJson);
    foreach (Email eml in emails)
    {
        // do update logic
    }
    return true;
}
Using Json.NET to help with the serialization.
On the client you can write the ajax call as follows:
$.ajax({
    url: 'api/emailsvc/updateall',
    method: 'post',
    data: {
        EmailsJson: JSON.stringify([{
            ID: 1,
            StatusID: 2
            //...more json object properties.
        }
        // more json objects
        ])
    },
    success: function (result) {
        if (result)
            alert('updated successfully');
    }
});


Apply OData function on retrieved data in a query

I just started working with OData and I was under the impression that OData querying is quite flexible.
But in some cases I want to retrieve updated/newly calculated data on the fly. In my case this data is SalaryData values. At some point, I want them slightly tweaked by an additional calculation function applied to them. The critical point is that this must happen on retrieval of the data, as part of the general request query.
But I don't know: is a function applicable in this case?
Ideally, I want to have the similar request:
/odata/Employee(1111)?$expand=SalaryData/CalculculationFunction(40)
Here I want to apply CalculculationFunction with parameters on SalaryData.
Is it possible to do this in OData this way? Or should I create an entity set of salary data and retrieve the calculated data directly, using a query something like
/odata/SalaryData(1111)/CalculculationFunction(40)
But this way is the least preferable for me, because I don't want to use the id of SalaryData in the request.
Current example of the function I created:
[EnableQuery(MaxExpansionDepth = 10, MaxAnyAllExpressionDepth = 10)]
[HttpGet]
[ODataRoute("({key})/FloatingWindow(days={days})")]
public SingleResult<Models.SalaryData> MovingWindow([FromODataUri] Guid key, [FromODataUri] int days)
{
    if (days <= 0)
        return new SingleResult<Models.SalaryData>(Array.Empty<Models.SalaryData>().AsQueryable());

    var cachedSalaryData = GetAllowedSalaryData().FirstOrDefault(x => x.Id.Equals(key));
    var mappedSalaryData = mapper.Map<Models.SalaryData>(cachedSalaryData);
    mappedSalaryData = Models.SalaryData.FloatingWindowAggregation(days, mappedSalaryData);
    var salaryDataResult = new[] { mappedSalaryData };
    return new SingleResult<Models.SalaryData>(salaryDataResult.AsQueryable());
}
There is always an overlap between What is OData Compliant Routing vs What can I do with Routes in Web API. It is not always necessary to conform to the OData (V4) specification, but a non-conforming route will need custom logic on the client as well.
The common workaround for this type of request is to create Function endpoint bound to the Employee item that accepts the parameter input that will be used to materialize the data. The URL might look like this instead:
/odata/Employee(1111)/WithCalculatedSalary(40)?$expand=SalaryData
This method could then internally call the existing MovingWindow function from the SalaryDataController to build the results. You could also engineer both functions to call a common set based routine.
The reason that you should bind this function to the EmployeeController is that the primary identifying resource that correlates the resulting data is the Employee.
In this way OData v4 compliant clients would still be able to execute this function and importantly would be able to discover it without any need for customisations.
If you didn't need to return the Employee resource as part of the response then you could still serve a collection of SalaryData from the EmployeeController:
/odata/Employee(1111)/CalculatedSalary(days=40)
[EnableQuery(MaxExpansionDepth = 10, MaxAnyAllExpressionDepth = 10)]
[HttpGet]
[ODataRoute("({key})/CalculatedSalary(days={days})")]
public IQueryable<Models.SalaryData> CalculatedSalary([FromODataUri] int key, [FromODataUri] int days)
{
    ...
}
builder.EntitySet<Employee>("Employee")
       .EntityType
       .Function("CalculatedSalary")
       .ReturnsCollectionFromEntitySet<SalaryData>("SalaryData")
       .Parameter<int>("days");
$compute and $search in ASP.NET Core OData 8
The OData v4.01 specification does have support for the System Query Option $compute, which was designed to enable clients to append computed values to the response structure. You could hijack this pipeline and define your own function that can be executed from a $compute clause, but the expectation is that system canonical functions are used with a combination of literal values and field references.
The ASP.NET implementation has only introduced support for this in the OData Lib v8 runtime. As yet I have not found a good example of how to implement custom functions, but syntactically it is feasible.
The same concept could be used to augment the $apply execution: if this calculation operates over a collection and effectively performs an aggregate evaluation, then $apply may be the more natural fit.
It might be that your current CalculculationFunction can be translated directly into a $compute statement, otherwise if you promote some of the calculation steps (metadata) as columns in the schema (you might use SQL Computed Columns for this...) then $compute could be a viable option.
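For illustration only, a spec-level $compute request (the property names here are hypothetical, and this assumes the salary figure is exposed as a plain numeric property) might look like:

```
GET /odata/Employee(1111)?$compute=MonthlySalary mul 12 as AnnualSalary&$select=Name,AnnualSalary
```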

How to handle POST requests which result in creating interdependent different resources in Spring?

I'm currently building a HATEOAS/HAL based REST application with Spring MVC and JPA (Hibernate). Basically the application gives access to a database and allows data retrieval/creation/manipulation.
So far I've already got a lot of things done including a working controller for one of the resources, let's call it x.
But I don't want to give the API user the opportunity to create just an x resource, because this alone would be useless and could be deleted right away. He/she also has to define a new y and a z resource to make things work. So: allowing all those resources to be created independently would not break anything, but it might produce dead data, like a z resource floating around without any connection, completely invisible and useless to the user.
Example: I don't want the user to create a new customer without directly attaching a business contract to the customer. (Two different resources: /customers and /contracts).
I did not really find any answers or best practice on the web, except for some sort of bulk POSTing, but only to one resource, where you would POST a ton of customers at once.
Now the following options come to my mind:
Let the user create the resources as he/she wants. If there are customers created and never connected to a contract - I don't care. The logic here would be: Allow the user to create /customers (and return some sort of id, of course). Then if he/she wants to POST a new /contract later I would check if the customer's id given exists and if it does: create the contract.
Expect the user, when POSTing to /customers, to also include contract data.
Option 1 would be the easiest way (and maybe more true to REST?).
Option 2 is a bit more complicated, since the user does not send single resources any more.
Currently, the controller method for adding a customer starts like that:
@RequestMapping(value = "", method = RequestMethod.POST)
public HttpEntity<Customers> addCustomer(@RequestBody Customers customer) {
    //stuff...
}
This way the JSON in the RequestBody would directly fit in my customers class and I can continue working with it. Now with two (or more) expected resources included in the RequestBody this cannot be done the same way any more. Any ideas on how to handle that in a nice way?
I could create some sort of wrapper class (like CustomersContracts), that consists of customers and contract data and has the sole purpose of storing this kind of data in it. But this seems ugly.
I could also take the raw JSON in the RequestBody, parse it and then manually create a customer and a contract object from it, save the customer, get its id and attach it to the contract.
Any thoughts?
Coming back to here after a couple of months. I finally decided to create some kind of wrapper resource (these are example class names):
public class DataImport extends ResourceSupport implements Serializable {
    /* The classes referenced here are @Entitys */
    private Import1 import1;
    private Import2 import2;
    private List<Import3> import3;
    private List<Import4> import4;
}
So the API user always has to send an Import1 and Import2 JSON object and an Import3 and Import4 JSON array (can also be empty).
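For illustration, the request body such a wrapper would accept might look like this (the field names mirror the class above; the nested object shapes are assumptions):

```json
{
  "import1": { "name": "Acme Corp" },
  "import2": { "contractNumber": "C-2024-001" },
  "import3": [ { "note": "first" }, { "note": "second" } ],
  "import4": []
}
```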
In my controller class I do the following:
@RequestMapping(*snip*)
public ResponseEntity<?> add(@RequestBody DataImport dataImport) {
    Import1 import1 = dataImport.getImport1();
    Import2 import2 = dataImport.getImport2();
    List<Import3> import3 = dataImport.getImport3();
    List<Import4> import4 = dataImport.getImport4();
    // continue...
}
I still don't know if it's the best way to do this, but it works quite well.

handle a String[] via the PortletPreferences (Liferay6.2)

I have built a MVCPortlet that runs on Liferay 6.2.
It uses a PortletPreferences page that works fine to set/get String preference parameters via the top-right configuration menu.
Now I would need to store there a String[] instead of a regular String.
It seems to be possible as you can store and get some String[] via
portletPreferences.getValues("paramName", StringArrayData);
I want the data to be stored from a form multiline select.
I suppose that I need to call my derived controller (derived from DefaultConfigurationAction) and invoke portletPreferences.setValues(String, String[]) there.
If so, in the middle, I will need the config jsp to pass the String[] array to the controller via a
request.setAttribute(String, String[]);
Do you think the app can work this way in theory?
If so, here are the problems I encountered when trying to make it work:
For any reason, in my config jsp,
request.setAttribute("paramName", myStringArray);
does not work ->
actionRequest.getAttribute("paramName")
retrieves null in my controller
This is quite a surprise as this usually works.
Maybe the config.jsp works a bit differently than standard jsps?
Then, how can I turn my multiline html select into a String[] attribute?
I had in mind to call a JS function when the form is submitted.
this JS function would generate the StringArray from the select ID (easy)
and then would call the actionURL (more complicated).
Is it possible?
thx in advance.
In your render phase (e.g. in config.jsp) you can't change the state of your portlet - e.g. I wouldn't expect any attributes to persist that are set there. They might survive to the end of the render phase, but not persist to the next action call. From a rendered UI to action they need to be part of a form, not request attributes.
You can store portletpreferences as String[], no problem, see the API for getting and setting them
I think maybe you can use an array on the client side, and update that javascript array when the user selects new values.
So you have the javascript array; then when the user clicks on the action, you can execute the action from javascript as well, something like this:
Here "products" is the array with your products.
A.io.request(url, {
    type: 'POST',
    data: {
        key: products
    },
    on: {
        success: function(event, id, obj) {
        }
    }
});
From the action method you can try to get the parameter with:
ParamUtil.getParameterValues(request,"key");
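The "multiline select into a String[]" step from the question can be done in plain JavaScript before the request is sent. A sketch (written against plain objects so it is framework-agnostic; in a browser you would pass the select element's `options` collection):

```javascript
// Collects the values of all selected options into an array.
// Works on any array-like of { value, selected } objects, so in a
// browser you can call it as collectSelected(mySelect.options).
function collectSelected(options) {
  return Array.from(options)
    .filter((opt) => opt.selected)
    .map((opt) => opt.value);
}

const fakeOptions = [
  { value: 'red', selected: true },
  { value: 'green', selected: false },
  { value: 'blue', selected: true },
];
console.log(collectSelected(fakeOptions)); // → [ 'red', 'blue' ]
```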

Optimize Large Switch Statement - AS3

I have a very large switch statement that handles socket messages from a server. It currently has a little over 100 cases and will continue to grow over time. I feel like I should be doing something more optimized than a switch statement.
My idea:
Have a large array of function callbacks. Then I could simply do something like
myArrayOfCallbacks[switchValue](parameters);
This should turn something that was O(n), where n is the number of switch cases, into constant time, right? I would think it would be a pretty good optimization.
Any comments or suggestions on a different method?
I would go with a client implementation that mirrors the backend. That way you can go without extra collections.
if (eventType in responseController) {
    // Before the call, you could do security checks, fallbacks, logging, etc.
    responseController[eventType](data);
}
// Where eventType is the 'method' name:
// for example, the command from the socket server is 'auth';
// if you have implemented a method `auth` in your responseController,
// it will be called.
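A runnable sketch of that dispatch idea (the handler names and return values are invented for illustration):

```javascript
// Message handlers keyed by event type; adding a new server command
// means adding one property here instead of another switch case.
const responseController = {
  auth(data) { return `authenticated as ${data.user}`; },
  chat(data) { return `message: ${data.text}`; },
};

// Constant-time dispatch: an object property lookup instead of a switch.
function dispatch(eventType, data) {
  if (eventType in responseController) {
    return responseController[eventType](data);
  }
  return `unhandled event: ${eventType}`;
}

console.log(dispatch('auth', { user: 'alice' })); // → authenticated as alice
```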
Since you're calling a switch-case on one value, you'd better arrange the possible values into a static array, and have another static array of corresponding functions to call. Then you do it like this:
public static const possibleValues:Array = ['one value', 'two value', ...];
// in case of ints, use only the second array
public static const callbacks:Array = [oneFunction, twoFunction, ...];
// make sure functions are uniform in parameters! You can use 1 parameter "message" as is
...
var switchValue = message.type; // "message" is an Object representing the message
                                // with all its contents
var callbackIndex:int = possibleValues.indexOf(switchValue);
if (callbackIndex >= 0 && callbacks[callbackIndex] != null)
    callbacks[callbackIndex](message);
So yes, you were pretty right in your guess.

Function Parameter best practice

I have a question regarding the use of function parameters.
In the past I have always written my code such that all information needed by a function is passed in as a parameter. I.e. global parameters are not used.
However, looking over other people's code, functions without parameters seem to be the norm. I should note that these are private functions of a class, and that the values that would have been passed in as parameters are in fact private member variables of that class.
This leads to neater-looking code, and I'm starting to lean towards this for private functions, but I would like other people's views.
E.g.
Start();
Process();
Stop();
is neater and more readable than:
paramD = Start(paramA, paramB, paramC);
Process(paramA, paramD);
Stop(paramC);
It does break encapsulation from a method point of view but not from a class point of view.
There's nothing wrong in principle with having functions access object fields, but the particular example you give scares me, because the price of simplifying your function calls is that you're obfuscating the life cycle of your data.
To translate your args example into fields, you'd have something like:
void Start() {
// read FieldA, FieldB, and FieldC
// set the value of FieldD
}
void Process() {
// read FieldA and do something
// read FieldD and do something
}
void Stop() {
// read the value of FieldC
}
Start() sets FieldD by side effect. This means that it's probably not valid to call Process() until after you've called Start(). But the code doesn't tell you that. You only find out by searching to see where FieldD is initialized. This is asking for bugs.
My rule of thumb is that functions should only access an object field if it's always safe to access that field. Best if it's a field that's initialized at construction time, but a field that stores a reference to a collaborator object or something, which could change over time, is okay too.
But if it's not valid to call one function except after another function has produced some output, that output should be passed in, not stored in the state. If you treat each function as independent, and avoid side effects, your code will be more maintainable and easier to understand.
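A compact sketch of the contrast (JavaScript for brevity; the names are invented):

```javascript
// Field-based version: processStep() silently depends on startStep()
// having run first, and nothing in its signature tells you that.
class FieldBased {
  startStep(a, b) { this.d = a + b; }   // sets this.d by side effect
  processStep(a) { return a * this.d; } // breaks if startStep() was skipped
}

// Parameter-based version: the dependency is explicit in the signature,
// so the data flow is visible at every call site.
function startStep(a, b) { return a + b; }
function processStep(a, d) { return a * d; }

const d = startStep(2, 3);
console.log(processStep(4, d)); // → 20
```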
As you mentioned, there's a trade-off between them. There's no hard rule for always preferring one to another. Minimizing the scope of variables will keep their side effect local, the code more modular and reusable and debugging easier. However, it can be an overkill in some cases. If you keep your classes small (which you should do) then the shared variable would generally make sense. However, there can be other issues such as thread safety that might affect your choice.
Not passing the object's own member attributes as parameters to its methods is the normal practice: effectively when you call myobject.someMethod() you are implicitly passing the whole object (with all its attributes) as a parameter to the method code.
I generally agree with both of Mehrdad and Mufasa's comments. There's no hard and fast rule for what is best. You should use the approach that suits the specific scenarios you work on bearing in mind:
readability of code
cleanliness of code (it can get messy if you pass a million and one parameters into a method, especially if they are class-level variables. An alternative is to encapsulate parameters into groups and create e.g. a struct to hold multiple values in one object)
testability of code. This is important in my opinion. I have occasionally refactored code to add parameters to a method purely to improve testability, as it can allow for better unit testing
This is something you need to measure on a case by case basis.
For example, ask yourself: if you were to use a parameter in a private method, is it ever going to be reasonable to pass a value that is anything other than that of a specific property in the object? If not, then you may as well access the property/field directly in the method.
On the other hand, you may ask yourself: does this method mutate the state of the object? If not, then perhaps it would be better as a static method with all its required values passed as parameters.
There are all sorts of considerations; the uppermost has to be "What is most understandable to other developers?"
In an object-oriented language it is common to pass in dependencies (classes that this class will communicate with) and configuration values in the constructor and only the values to actually be operated on in the function call.
This can actually be more readable. Consider code where you have a service that generates and publishes an invoice. There can be a variety of ways to do the publication - via a web-service that sends it to some sort of centralized server, or via an email sent to someone in the warehouse, or maybe just by sending it to the default printer. However, it is usually simpler for the method calling Publish() to not know the specifics of how the publication is happening - it just needs to know that the publication went off without a hitch. This allows you to think of less things at a time and concentrate on the problem better. Then you are simply making use of an interface to a service (in C#):
// Notice the consuming class needs only know what it does, not how it does it
public interface IInvoicePublisher {
    void Publish(Invoice anInvoice);
}
This could be implemented in a variety of ways, for example:
public class DefaultPrinterInvoicePublisher : IInvoicePublisher
{
    DefaultPrinterFacade _printer;

    public DefaultPrinterInvoicePublisher(DefaultPrinterFacade printer) {
        _printer = printer;
    }

    public void Publish(Invoice anInvoice) {
        var printableObject = ...; // Generate crystal report, or something else that can be printed
        _printer.Print(printableObject);
    }
}
The code that uses it would then take an IInvoicePublisher as a constructor parameter too so that functionality is available to be used throughout.
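The same constructor-injection idea, sketched compactly in JavaScript (the names are invented; the C# interface above is the pattern being mirrored):

```javascript
// The consuming service only knows that the publisher has a publish()
// method, not how publication happens (printer, email, web service, ...).
class InvoiceService {
  constructor(publisher) { this.publisher = publisher; }
  send(invoice) { return this.publisher.publish(invoice); }
}

// One possible implementation; a test can swap in a fake just as easily.
const printerPublisher = {
  publish: (invoice) => `printed invoice #${invoice.id}`,
};

const service = new InvoiceService(printerPublisher);
console.log(service.send({ id: 42 })); // → printed invoice #42
```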
Generally, it's better to use parameters. Greatly increases the ability to use patterns like dependency injection and test-driven design.
If it is an internal only method though, that's not as important.
I don't pass the object's state to the private methods, because the method can access that state anyway.
I pass parameters to a private method when the private method is invoked from a public method and the public method receives a parameter which it then sends on to the private method.
public void DoTask(string jobid, object t)
{
    DoTask1(jobid, t);
    DoTask2(jobid, t);
}
private void DoTask1(string jobid, object t)
{
}
private void DoTask2(string jobid, object t)
{
}