Grails JSON response of models and submodels (one to many)

I'm very new to Grails, so I hope I have an easy question for you.
I have a domain model, and inside this model is a related model (one-to-many). Let's say Service, where a service has 'n' tasks.
I select (via findAllBy()) e.g. 3 services, and every service has at least one task, maybe three.
Now my question: I don't want to return "render foundServices as JSON". Reason: I don't want to let people all over the world know my model definition and some maybe "secret" properties, which are all filled automatically from the database select. Is that a reasonable thought, or is it a "too much and too deep security" thought?
So I tried to find out how I can return the relevant data in a way similar to these objects.
I tried:
List<Service> servicesSelection = Service.findAllByCompany("someCompany")
ArrayList services = new ArrayList()
for (Service service : servicesSelection) {
    ArrayList myService = new ArrayList()
    myService.add(service.id)
    myService.add(service.getServiceName())
    for (Tasks task : service.tasks) {
        ArrayList serviceTasks = new ArrayList()
        serviceTasks.add(task.id)
        serviceTasks.add(task.getTaskName())
        myService.add(serviceTasks)
    }
    services.add(myService)
}
render services as JSON
1) Is this too much "overhead"?
2) Do you think "OK, it doesn't matter, return the whole domain model (from the search result)"?
3) If I put my own "array lists" together, how can this be done in a way similar to domain models, so I can easily access all properties and the 'n' task list in each service?
Thank you very much!

It's not too much overhead if your security requirements dictate that certain information not be shared. In most cases, I don't think it's a problem to just convert the whole domain object to JSON, but your app might be a special case.
You could write the code to do this in a manner more consistent with Groovy/Grails practice:
def services = []
for (s in Service.findAllByCompany("someCompany")) {
    def tasks = []
    for (t in s.tasks) {
        tasks << [id: t.id, taskName: t.taskName]
    }
    def service = [id: s.id, serviceName: s.serviceName, tasks: tasks]
    services << service
}
render services as JSON
I just noticed that your code also doesn't provide keys for the ids and names (it uses lists instead of maps). You probably want keys, and the example code above uses them.
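For what it's worth, the same structure can be written more compactly with Groovy's collect; this is just a condensed variant of the loop above, using the same assumed Service and Tasks properties:
def services = Service.findAllByCompany("someCompany").collect { s ->
    [
        id         : s.id,
        serviceName: s.serviceName,
        // nested list of maps, one map per task
        tasks      : s.tasks.collect { t -> [id: t.id, taskName: t.taskName] }
    ]
}
render services as JSON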

Related

Is there a work-around to serializing some Microsoft Graph entities with interfaces, like Domain

I wanted to generate a test program to execute against our client tenants to verify we could handle all the data our new Microsoft Graph app collects. My plan was to serialize the data using:
XmlSerializer serializer = new XmlSerializer(typeof(List<T>));
It failed on the first entity I tried (Microsoft.Graph.Domain in this case) with the error:
Cannot serialize member Microsoft.Graph.Entity.AdditionalData of type ... because it is an interface.
A search on Stack Overflow found suggestions to decorate the problematic class property with XmlIgnore so XmlSerializer will ignore it; others recommended implementing IXmlSerializable. One post seemed to propose serializing to XAML.
I'm also open to a better way to collect real customer data that I can import into my unit tests; as a developer I do not have direct access to customer accounts.
Does anyone have other suggestions on how to serialize Microsoft Graph entities?
I replaced my XmlSerializer with a JSON one (Json.NET):
public void SerializeObjectsToJson<T>(List<T> serializableObjects)
{
    var jsonStr = JsonConvert.SerializeObject(serializableObjects);
    // assumed: write to the same fqpathname field the deserializer reads
    File.WriteAllText(fqpathname, jsonStr, Encoding.UTF8);
}
public List<T> DeserializeObjectsFromJson<T>()
{
    using (var textReader = new StreamReader(fqpathname, Encoding.UTF8))
    {
        var jsonStr = textReader.ReadToEnd();
        return JsonConvert.DeserializeObject<List<T>>(jsonStr);
    }
}
This all seems to work with Domain, User, SubscribedSkus, Organization, etc.

Passing scala slick table in the akka actor message

I want to send a Slick table as part of an Akka actor message, so that the remote actor at the other end can connect to the database and do CRUD operations on the MySQL database. I am unable to get my head around the Slick types, and the compiler/Eclipse complains. How can I get this done? Is it a good idea to pass Slick queries as part of actor messages?
object RemoteActorMessages {
  case class Create(table: Table[A])
  case class RunQuery(query: Query[_, _, _])
  case class Result(code: Int, message: String)
}

class DBActor extends Actor {
  def receive = {
    case Create(table) => createTable(table)
    case RunQuery(query) => runQuery(query)
    case ... // and so on
  }
}

def createTable(table: Table[M]): Future[A] = Future {
  db.withSession(implicit session => tableQuery[table].ddl.create)
}

def runQuery(query: Query[_, _, _]): Future[A] = Future {
  db.withSession { implicit session =>
    query.run
  }
}
Warning: the code might have some type errors; discretion is appreciated.
I am also confused about how to send results back to the sender of the messages. For example, query.list.run gives back a list of model objects, so how should I frame my Result message?
I think this is a case of "when you have a hammer, everything becomes a nail". I believe this is not the right use case for actors. One reason (not the only one) is that DB operations are "slow" and they would block the actor threads for a long time.
Arguably you want a service that manages the table operations, using Futures and a custom ExecutionContext to isolate the impact (this is how it's done in Play, for example). Something like:
object DBService {
  def createTable(): Future[Boolean] = ???
  ...
}
Actors should only receive commands like CreateTable and then call the corresponding method in the service (see the sketch below).
Incidentally, this would simplify your use case, as the service could know about the table and other Slick specifics, whilst the actor would be oblivious to them.
It's not the only way, but it's arguably simpler.
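As a minimal sketch of that split (the pool size, message names, and Result codes are assumptions; the Slick specifics are elided behind a comment):
import java.util.concurrent.Executors
import scala.concurrent.{ExecutionContext, Future}
import akka.actor.Actor

object DBService {
  // dedicated pool so slow JDBC calls never starve the actor system's threads
  private implicit val dbEc: ExecutionContext =
    ExecutionContext.fromExecutor(Executors.newFixedThreadPool(10))

  def createTable(): Future[Boolean] = Future {
    // db.withSession { implicit session => ... } // Slick specifics live here
    true
  }
}

case object CreateTable
case class Result(code: Int, message: String)

class DBActor extends Actor {
  import context.dispatcher // only used to deliver the reply

  def receive = {
    case CreateTable =>
      val replyTo = sender() // capture before the async callback runs
      DBService.createTable().foreach { ok =>
        replyTo ! Result(if (ok) 0 else 1, "create table")
      }
  }
}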

Reflectively save domain class instances in Grails

The problem is as follows: I want to handle a POST request with a JSON body. The body consists of an array of JSON objects, without further nesting, i.e. simple HashMaps. All of these objects represent JSON-serialized domain classes from an Android application, which have their counterparts in my Grails app. I am thinking of parsing the JSON body, iterating through every element, and saving each node as its corresponding domain class instance.
a) How should I save the instance? I am quite new to Grails/Groovy, so please excuse any huge mistakes. The code so far is:
public static Object JSONArray2Instances(String json, Class type) {
    def slurper = new JsonSlurper()
    def result = slurper.parseText(json)
    // we only want to parse JSON arrays
    if (!(result instanceof JSONArray))
        return null
    result.each {
        def instance = it.asType(type)
        // now I need to save to the domain class!
    }
}
b) Where do I place the corresponding code? Currently it is in /grails-app/src/groovy. Where do the tests go? (Since it is not a "real" Grails component.)
c) Is an intermediate command object more appropriate?
Your code should go into the controller which handles the request. Please take a look at the gson-grails plugin, which has examples of how to serialize and deserialize objects and map them to domain objects. Also take a look at the Grails basics, where they talk about the conventions used in a Grails application and its layout. There are good examples on the Grails site. Hope this helps.
I solved my problem as follows, based on the help provided in the comment from allthenutsandbolts (the Grails-Gson plugin was not needed).
Let N2696AdminAction be the name of a domain class.
In my controller:
class N2696AdminActionController extends RestfulController {
    static responseFormats = ['json', 'xml']
    def JSONHandlerService

    N2696AdminActionController() {
        super(N2696AdminAction)
    }

    @Override
    @Transactional
    def save() {
        if (request != null)
            JSONHandlerService.instancesfromJSON(request.JSON)
    }
}
Then I delegate persisting to my service as follows:
class JSONHandlerService {
    def instancesfromJSON(Object request) {
        // we only want to parse JSON arrays
        if (!(request instanceof JSONArray))
            return null
        request.each {
            def domainClass = Class.forName("${it.type}",
                true, Thread.currentThread().getContextClassLoader())
            def newDomainObject = domainClass.newInstance(it)
            newDomainObject.save(failOnError: true, flush: true, insert: true)
        }
    }
}
type is a JSON attribute which holds the full (package-inclusive) name of my class. This way, I can save to multiple domain classes with the same POST request.
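For illustration, a POST body this handler would accept might look like the following; every attribute except type (and the package/class names) is hypothetical:
[
    { "type": "com.example.N2696AdminAction", "name": "reboot", "priority": 1 },
    { "type": "com.example.SomeOtherDomainClass", "label": "cleanup" }
]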

Advice on filtering data and code reuse with Onion Architecture

Here are my questions, and then I'll give you the background for them:
I would prefer to use Method 2 as my application design, so is there a way to provide filtering like Method 1 without introducing references to non-business code and without allowing access to the database model in the Core project?
How do you handle code reuse? The namespaces for each object are something like Project.Core.Domain or Project.Core.Services, but it feels weird making the namespace something like CompanyName.Core.Domain when it is not stored in that project. Currently, I'm copying the source code files and renaming namespaces to handle this, but I'm wondering if there is an organizational way to handle this, or something else I hadn't thought of.
Technologies I'm using:
ASP.NET MVC 3
Linq-to-SQL
StructureMap
Moq
MSTest
Method 1:
Here's how I used to set up my web projects:
The Data project would contain all repositories and LINQ data contexts. In a repository, I would return a collection of objects from the database using IQueryable.
public IQueryable<Document> List()
{
    return from d in db.Documents
           select d;
}
This allowed me to set up filters that were static methods. These were also stored in the Data project.
public static IQueryable<Document> SortByFCDN(this IQueryable<Document> query)
{
    return from d in query
           orderby d.ID
           select d;
}
In the service layer, the filter could be applied like this.
public IPagedList<Document> ListByFCDN(int page, IConfiguration configuration)
{
    return repository.List().SortByFCDN().ToPagedList(page, configuration.PageSize, configuration.ShowRange);
}
Therefore, the repository only has to provide a ListAll method that returns all items as an IQueryable object, and the service layer then determines how to filter it down before returning the subset of data.
I like this approach; it made my repositories cleaner while leaving the bulk of the code in the services.
Method 2:
Here's how I currently set up my web projects:
Using the Onion Architecture:
Core: Contains business domain model, all interfaces for the application, and the service class implementations.
Infrastructure: Contains the repository implementations, Linq data contexts, and mapping classes to map the Linq database model to the business model.
Since I'm separating my business code from database code, I do not want to add references in the Core project to things like LINQ to gain access to IQueryable. So I've had to perform the filtering at the repository layer, map the database model to the domain model, and then return a collection of domain objects to the service layer. This could add additional methods to my repositories.
This is what I ended up doing:
1) Created a filtering enum object in the Core project.
public enum FilterType
{
    SortFCDN
}
2) In the service class (also within the Core project), do something like:
public IPagedList<Document> ListByFCDN(int page)
{
    Dictionary<FilterType, object> filters = new Dictionary<FilterType, object>();
    filters.Add(FilterType.SortFCDN, "");
    return repository.List(page, filters);
}
3) In the repository (under the Infrastructure project):
public IPagedList<Document> List(int page, Dictionary<FilterType, object> filters)
{
    // Query all documents and map to the model.
    return (from d in db.DbDocuments
            select d).Filter(filters).Map(
                page,
                configuration.Setting("DefaultPageSize", true).ToInt(),
                configuration.Setting("DefaultShowRange", true).ToInt());
}
4) Create a filters class in the Infrastructure project:
public static class DocumentFilters
{
    public static IQueryable<DbDocument> Filter(this IQueryable<DbDocument> source, Dictionary<FilterType, object> filters)
    {
        foreach (KeyValuePair<FilterType, object> item in filters)
        {
            switch (item.Key)
            {
                case FilterType.SortFCDN:
                    source = source.SortFCDN();
                    break;
            }
        }
        return source;
    }

    public static IQueryable<DbDocument> SortFCDN(this IQueryable<DbDocument> source)
    {
        return from d in source
               orderby d.ID
               select d;
    }
}
The service layer (Core project) can then decide which filters to apply and pass them to the repository (Infrastructure project) before the query executes. Multiple filters can be applied, as long as there is only one per FilterType.
The filters dictionary can hold the type of filter and any value/object that needs to be passed into the filter, and new filters can easily be added; a sketch of a value-carrying filter follows.
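As an illustration, here is how a hypothetical ByAuthor filter could be added under this scheme; the filter name and the Author column on DbDocument are assumptions, not part of the original design:
// Core project: extend the enum
public enum FilterType
{
    SortFCDN,
    ByAuthor // hypothetical value-carrying filter
}

// Core project, service layer: the author name travels as the dictionary value
var filters = new Dictionary<FilterType, object>();
filters.Add(FilterType.ByAuthor, "jsmith");

// Infrastructure project: new case inside DocumentFilters.Filter's switch
case FilterType.ByAuthor:
    source = source.Where(d => d.Author == (string)item.Value);
    break;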

Linq to SQL and concurrency with Rob Conery repository pattern

I have implemented a DAL using Rob Conery's spin on the repository pattern (from the MVC Storefront project), where I map database objects to domain objects using LINQ and use LINQ to SQL to actually get the data.
This is all working wonderfully, giving me the full control over the shape of my domain objects that I want, but I have hit a problem with concurrency that I thought I'd ask about here. I have concurrency working, but the solution feels like it might be wrong (just one of those glitchy feelings).
The basic pattern is:
private MyDataContext _dataContext;
private Table<Task> _tasks;

public Repository(MyDataContext dataContext)
{
    _dataContext = dataContext;
}

public IQueryable<Domain.Task> GetTasks()
{
    _tasks = _dataContext.Tasks;
    return from t in _tasks
           select new Domain.Task
           {
               Name = t.Name,
               Id = t.TaskId,
               Description = t.Description
           };
}

public void SaveTask(Domain.Task task)
{
    Task dbTask = null;
    // Logic for new tasks omitted...
    dbTask = (from t in _tasks
              where t.TaskId == task.Id
              select t).SingleOrDefault();
    dbTask.Description = task.Description;
    dbTask.Name = task.Name;
    _dataContext.SubmitChanges();
}
So with that implementation I've lost concurrency tracking because of the mapping to the domain Task. I get it back by storing the private Table, which is my data context's list of tasks at the time of getting the original task.
I then update the tasks from this stored Table and save what I've updated.
This works: I get change conflict exceptions raised when there are concurrency violations, just as I want.
However, it just screams to me that I've missed a trick.
Is there a better way of doing this?
I've looked at the .Attach method on the data context, but that appears to require storing the original version in a similar way to what I'm already doing.
I also know that I could avoid all this by doing away with the domain objects and letting the LINQ to SQL generated objects flow all the way up my stack, but I dislike that just as much as I dislike the way I'm handling concurrency.
I worked through this and found the following solution. It works in all the test cases I (and more importantly, my testers!) can think of.
I am using the .Attach() method on the data context and a TimeStamp column. This works fine the first time you save a particular primary key back to the database, but I found that the data context then throws a System.Data.Linq.DuplicateKeyException, "Cannot add an entity with a key that is already in use."
The workaround I created was to add a dictionary that stores the item I attach the first time around; every subsequent time I save, I reuse that item.
Example code is below. I do wonder if I've missed any tricks; concurrency is pretty fundamental, so the hoops I'm jumping through seem a little excessive.
Hopefully the below proves useful, or someone can point me towards a better implementation!
private Dictionary<int, Payment> _attachedPayments = new Dictionary<int, Payment>();

public void SavePayments(IList<Domain.Payment> payments)
{
    Dictionary<Payment, Domain.Payment> savedPayments =
        new Dictionary<Payment, Domain.Payment>();

    // Items with a zero id are new
    foreach (Domain.Payment p in payments.Where(p => p.PaymentId != 0))
    {
        // The list of attached payments that works around the LINQ data context
        // DuplicateKeyException
        if (_attachedPayments.ContainsKey(p.PaymentId)) // Already attached
        {
            Payment dbPayment = _attachedPayments[p.PaymentId];
            // Just a method that maps domain to datacontext types
            MapDomainPaymentToDBPayment(p, dbPayment, false);
            savedPayments.Add(dbPayment, p);
        }
        else // Attach this payment to the datacontext
        {
            Payment dbPayment = new Payment();
            MapDomainPaymentToDBPayment(p, dbPayment, true);
            _dataContext.Payments.Attach(dbPayment, true);
            savedPayments.Add(dbPayment, p);
        }
    }

    // There is some code snipped, but this is just brand new payments
    foreach (var payment in newPayments)
    {
        Domain.Payment payment1 = payment;
        Payment newPayment = new Payment();
        MapDomainPaymentToDBPayment(payment1, newPayment, false);
        _dataContext.Payments.InsertOnSubmit(newPayment);
        savedPayments.Add(newPayment, payment);
    }

    try
    {
        _dataContext.SubmitChanges();
        // Grab the Timestamp into the domain object
        foreach (Payment p in savedPayments.Keys)
        {
            savedPayments[p].PaymentId = p.PaymentId;
            savedPayments[p].Timestamp = p.Timestamp;
            _attachedPayments[savedPayments[p].PaymentId] = p;
        }
    }
    catch (ChangeConflictException)
    {
        foreach (ObjectChangeConflict occ in _dataContext.ChangeConflicts)
        {
            Payment entityInConflict = (Payment)occ.Object;
            // Use the datacontext refresh so that I can display the new values
            _dataContext.Refresh(RefreshMode.OverwriteCurrentValues, entityInConflict);
            _attachedPayments[entityInConflict.PaymentId] = entityInConflict;
        }
        throw;
    }
}
I would look at trying to utilise the .Attach method by passing the 'original' and 'updated' objects, thus achieving true optimistic concurrency checking from LINQ to SQL. IMO this would be preferable to using version or datetime stamps in either the DBML objects or your domain objects. I'm not sure how MVC allows for this idea of persisting the 'original' data, however. I've been trying to investigate the validation scaffolding in the hope that it stores the 'original' data, but I suspect it is only as good as the most recent post (and/or failed validation), so that idea may not work.
Another crazy idea I had was this: override GetHashCode() for all of your domain objects, where the hash represents the unique set of data for that object (minus the ID, of course). Then, either manually or with a helper, bury that hash in a hidden field in the HTML POST form and send it back to your service layer with your updated domain object. Do the concurrency checking in your service layer or data layer (by comparing the original hash with a newly extracted domain object's hash), but be aware that you need to check for and raise concurrency exceptions yourself. It's nice to use the DBML functions, but the idea of abstracting away the data layer is to not depend on a particular implementation's features. So having full control of the optimistic concurrency checking on your domain objects in your service layer (for example) seems like a good approach to me; a rough sketch follows.
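As a rough sketch of that hash idea, assuming a hypothetical Domain.Task whose editable fields are Name and Description (which fields to fold in is up to you):
public class Task
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Description { get; set; }

    // Hash over every field except the ID, so two snapshots of the same row
    // compare equal only if none of the editable data has changed.
    public override int GetHashCode()
    {
        unchecked
        {
            int hash = 17;
            hash = hash * 23 + (Name ?? string.Empty).GetHashCode();
            hash = hash * 23 + (Description ?? string.Empty).GetHashCode();
            return hash;
        }
    }
}

// Service-layer check (sketch): originalHash came back from the hidden form field
// bool conflict = originalHash != currentDbSnapshot.GetHashCode();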