Where does aggregated data fit into a REST JSON API and a mobile client?

I have an iOS app talking to a REST JSON API. I have mapped the model resources 1-1 with the controllers/endpoints in the API, e.g. User (/users), Friendship (/friendships), Rating (/ratings), RatedObject (/rated_objects), etc. On the device I'm using RestKit/Core Data to store/sync all objects.
Now I'm starting to need different kinds of aggregated data, e.g. different rating averages and rating counts on the RatedObject depending on the friendship type. I have solved it for now by adding the data to the RatedObject type:
RatedObject
    name
    size
    ratingAverageFromCloseFriends
    ratingCountFromCloseFriends
    ratingAverageFromFamily
    ratingCountFromFamily
    ratingAverageFromAllFriends
    ratingCountFromAllFriends
But as more kinds of aggregated data appear on different kinds of objects, this is getting out of hand. I also sometimes need the average from only one specific friend, and that can't be added to the model.
I store all data locally on the iOS device, and the aggregated data should be easy to update from the server.
How should I solve this and make it easy and natural for both the client and the server?
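For illustration only: one way to picture what the question is asking for is a parameterized aggregate resource, so that each combination (friendship type, single friend) is a query rather than a new model field. The URL and field names below are hypothetical, not part of the existing API:

GET /rated_objects/42/rating_summary?friendship_type=family

{
    "rated_object_id": 42,
    "friendship_type": "family",
    "rating_average": 4.2,
    "rating_count": 17
}

The single-friend case would then be the same endpoint with, say, ?friend_id=7 instead of a friendship type.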

Related

Is storing frequently used data in JSON in MySQL worth it?

In my Vue.js app the main focus is working with prospects. Prospects have many related things like contacts, listings, and half a dozen other objects/tables.
They also have interactions, of which there could be 30 or more per prospect, while most things like emails or phones would have 1-3 results. I load 50 prospects at a time into the front end.
I'm trying to decide if loading it all into the front end to work with 50 prospects at a time is a good idea, or if I should have a JSON column with interactions as part of the prospects table that I would update each time an interaction is saved, with minimal info like date, type, subject...
It seems like an extra step (and duplicate data; how important is that?) to update the JSON column with each interaction, but it also seems like it would save looking up and loading data all the time.
I'm not a programmer, but I have been teaching myself with tutorials and YouTube how to do something I need done for my business. Any opinions from those who deal with this professionally would be appreciated.
Also, if anyone wants to tell me how to format this question better, I'm all ears.
Thanks
Imagine you have 1000 records, but you are sending only 50 of them, and your user filters by price. Will you display the filtered results from those 50, or from all 1000?
That depends on whether you want to expose all 1000 records to the front end. It's a choice between that and calling your server API every time.
If you are calling the server, consider using a cache like Redis to store your results.
Pseudo code:

Read request received:
    results = Redis.get('key')
    if the key exists:
        return the cached results
    else:
        query MySQL for the latest results
        Redis.set('key', results)
        return results

Create request received:
    write to MySQL
    Redis.delete('key')  // the next read will build a new cache with fresh data

Your key can be anything unique, e.g. your URL ('/my/url').
https://laravel.com/docs/8.x/redis
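Outside Laravel, the same cache-aside pattern might look like the following TypeScript sketch. The ioredis client is a real library; queryDatabase and writeDatabase are placeholders for your MySQL calls:

import Redis from "ioredis";

// Placeholders for your actual MySQL access layer.
declare function queryDatabase(url: string): Promise<unknown[]>;
declare function writeDatabase(row: unknown): Promise<void>;

const redis = new Redis(); // connects to localhost:6379 by default

// Read path: serve from the cache when possible, otherwise query and cache.
async function handleRead(url: string): Promise<string> {
    const cached = await redis.get(url); // the key can be anything unique, e.g. the URL
    if (cached !== null) {
        return cached; // cache hit
    }
    const results = await queryDatabase(url);
    const payload = JSON.stringify(results);
    await redis.set(url, payload);
    return payload;
}

// Write path: persist first, then invalidate so the next read rebuilds the cache.
async function handleCreate(url: string, row: unknown): Promise<void> {
    await writeDatabase(row);
    await redis.del(url);
}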

Batch Processing - OData

I want to group multiple operations into a single HTTP request payload.
I have an API key that allows me to make GET requests and return tables in a database as JSON blocks. Certain attributes are 'expandable', and OData (Open Data Protocol) allows you to 'expand' multiple attributes within the "CompanyA" table (i.e. Marketing, Sales, HR):
http://api.blahblah.com/odata/CompanyA?apikey=b8blachblahblachc&$expand=Marketing,Sales,HR
I would like to select multiple tables (the request above only contains one table, CompanyA), and I understand this is possible via "Batch Requests":
https://www.odata.org/documentation/odata-version-3-0/batch-processing/
The documentation above, alongside Microsoft's, is hard to translate into what I described.
I wanted it to be as simple as the following, but I know it is not, and I can't figure out how to get there:
http://api.blahblah.com/odata/CompanyA,CompanyB,CompanyC?apikey=b8blachblahblachc
The end goal is to have one JSON file that contains details about each table in the DB, rather than having to write each individual query and save its result to a file, as below:
http://api.blahblah.com/odata/CompanyA?apikey=b8blachblahblachc
http://api.blahblah.com/odata/CompanyB?apikey=b8blachblahblachc
http://api.blahblah.com/odata/CompanyC?apikey=b8blachblahblachc
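For what it's worth, in OData v3 the three reads above would be wrapped into a single POST to the service's $batch endpoint, with each GET as its own MIME part. The boundary value is arbitrary, and whether this particular service exposes $batch at all is an open question; this is only a sketch of the wire format from the OData documentation:

POST http://api.blahblah.com/odata/$batch HTTP/1.1
Content-Type: multipart/mixed; boundary=batch_36522ad7

--batch_36522ad7
Content-Type: application/http
Content-Transfer-Encoding: binary

GET http://api.blahblah.com/odata/CompanyA?apikey=b8blachblahblachc HTTP/1.1

--batch_36522ad7
Content-Type: application/http
Content-Transfer-Encoding: binary

GET http://api.blahblah.com/odata/CompanyB?apikey=b8blachblahblachc HTTP/1.1

--batch_36522ad7
Content-Type: application/http
Content-Transfer-Encoding: binary

GET http://api.blahblah.com/odata/CompanyC?apikey=b8blachblahblachc HTTP/1.1

--batch_36522ad7--

Note that the response is also multipart, one JSON body per part, so producing a single combined JSON file still requires merging the parts on the client.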

Different companies listed in Exact Online REST API system divisions and XML API Administrations

When I download the list of companies using the end point Administrations, either through the user front end or directly using an HTTP GET, I receive an XML document with contents such as:
<?xml version="1.0" encoding="UTF-8"?>
<eExact xsi:...>
  <Administrations>
    <Administration>
      ...
    </Administration>
  </Administrations>
</eExact>
I can also receive the list of companies using the REST API system/divisions.
In general the number and names of the companies listed in both are equal, although some fields present in the XML API are not present in the REST API and vice versa.
However, sometimes the contents are different. For instance, today I had a scenario where there were only 2 companies listed in the XML topic, but over 900 in system/divisions.
This occurs both when using the APIs directly and when going through Invantive SQL.
Why is the outcome different?
You can also use one of the four views:
AllAdministrations (similar to Administrations)
AllAdministrationCustomers (-)
AllAdministrationClassifications (similar to AdministrationClassifications)
AllAdministrationAssignedTypes (similar to AdministrationAssignedTypes)
These query the administrations across all subscriptions an accountant has access to.
All topics are read using a specific company (named division in the URL) from which to retrieve the data.
The system/divisions REST API returns ALL companies accessible to the current user, so the outcome does not depend on the division used in the URL of the request.
However, the XML topic Administrations returns ONLY the companies accessible to the current user that belong to the SAME customer account as the division used in the URL of the request.
A customer account is a set of one or more companies that is billed independently. For entrepreneur licenses, this is generally the same list of companies.
However, for an accountant the outcome depends on which company is used, since they may have hundreds of different customers, each with their own licenses, plus many companies under their own customer code.
In general, it is wiser to use system/divisions.
However, when you need additional fields, or for instance the classifications of a company, you will need to use the XML API. The easiest way to determine the minimum number of companies to retrieve the XML API Administrations data for is the following (sketched in code after the list):
First retrieve all system/divisions.
For every distinct value of customercode, find one division, for instance the one with the minimum value.
For each of these divisions, access the end point Administrations.
Combine the output of each of those.
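A TypeScript sketch of those four steps; fetchDivisions and fetchAdministrations are hypothetical wrappers around the REST and XML APIs, not real client methods:

interface Division {
    code: number;          // the division number used in URLs
    customerCode: string;  // the customer account this division belongs to
}

// Placeholders for the actual REST (system/divisions) and XML (Administrations) calls.
declare function fetchDivisions(): Promise<Division[]>;
declare function fetchAdministrations(division: number): Promise<object[]>;

async function collectAllAdministrations(): Promise<object[]> {
    // Step 1: retrieve all system/divisions.
    const divisions = await fetchDivisions();

    // Step 2: keep one division per distinct customercode (here: the minimum).
    const perCustomer = new Map<string, Division>();
    for (const d of divisions) {
        const current = perCustomer.get(d.customerCode);
        if (current === undefined || d.code < current.code) {
            perCustomer.set(d.customerCode, d);
        }
    }

    // Steps 3 and 4: read Administrations once per selected division and combine.
    const combined: object[] = [];
    for (const d of perCustomer.values()) {
        combined.push(...(await fetchAdministrations(d.code)));
    }
    return combined;
}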

Storing userID and other data and using it to query database

I am developing an app with PhoneGap and have been storing the user id and user level in local storage, for example:
window.localStorage["userid"] = "20";
This is populated once the user has logged in to the app. It is then used in AJAX requests to pull in their information and things related to their account (some of it quite private). The app is also being used in a web browser, since I am using the exact same code for the web. Is there a way this can be manipulated? For example, could a user change the value in order to get back info that isn't theirs?
If, for example, another app in their browser stores the same key "userid", it will overwrite mine, and they will get someone else's data back in my app.
How can this be prevented?
Before going further into attack vectors: storing this kind of sensitive data on the client side is not a good idea. Use a token instead, because any data stored on the client side can be spoofed by attackers.
Your concerns are justified. A possible attack vector here is an Insecure Direct Object Reference. Let me show one example.
You are storing the userID client side, which means you cannot trust that data anymore.
window.localStorage["userid"] = "20";
Hackers can change that value to anything they want. They will probably change it to a value lower than 20, because common practice suggests that 20 comes from a column configured as auto increment, which means there should be a valid user whose userid is 19, or 18, or lower.
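Nothing stops a user from doing exactly that in the browser's developer console:

window.localStorage["userid"] = "19"; // every subsequent AJAX request now asks for user 19's data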
Let me assume that your application has a module for getting products by userid. The backend query would then be similar to the following one.
SELECT * FROM products WHERE owner_id = 20
When hackers change that value to something else, they will manage to get data that belongs to someone else. They may also get the chance to remove or update data that belongs to someone else.
The possible malicious attack vectors really depend on your application and features. As I said before, you need to figure this out, and you should not expose sensitive data like the userID.
Using a token instead of the userID will stop these break-in attempts. The only thing you need to do is create one more column named "token" and use it instead of userid. (Don't forget to generate long and unpredictable token values.)
SELECT * FROM products WHERE owner_token = 'iZB87RVLeWhNYNv7RV213LeWxuwiX7RVLeW12'
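A minimal sketch of generating such a token in Node; randomBytes is cryptographically secure, and 32 bytes yields a 64-character hex string:

import { randomBytes } from "crypto";

// 32 random bytes -> 64 hex characters: long and unpredictable.
const token: string = randomBytes(32).toString("hex");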

Couchbase - Splitting a JSON object into many key-value entries - performance improvement?

Say my Couchbase DB has millions of user objects; each user object contains some primitive fields (score, balance, etc.).
And say I read & write most of those fields on every server request.
I see 2 options of storing the User object in Couchbase:
A single JSON object mapped to a user key (e.g. user_555)
Mapping each field into a separate entry (e.g. score_555 and balance_555)
Option 1 - Single CB lookup, JSON parsing
Option 2 - Twice the lookups, less parsing if any
How can I tell which one is better in terms of performance?
What if I had 3 fields? What if 4? Does it make a difference?
Thanks
Eyal
Think about your data structure and access patterns first, before worrying about whether JSON parsing or extra lookups will add overhead to your system.
From my perspective and experience, I would try to model documents based upon logical object groupings: I would store 'user' attributes together. If you were to store each field separately, you'd have to do a series of lookups whenever you wanted to provide a client or service with a full overview of the player profile.
I've used Couchbase as the main data store for a social mobile game. We store 90% of user data in a user document, which contains all the relevant fields such as score, level, progress, etc. For the majority of operations, such as a new score or upgrades, we want to be dealing with the whole User object in the application layer, so it makes sense to inflate the user object from the Couchbase document, alter/read what we need, and then persist it again if there have been changes.
The only time we have ID references to other documents is in the form of player purchases, where we have an array of IDs that each reference a separate purchase document. We do this because we wanted richer information on each purchase (date of transaction, transaction ID, product type, etc.) that isn't relevant to the user document: when a purchase is made, we verify it's legitimate, then add it to the User inventory and create the separate purchase document.
So our structure is:
UserDoc:
    - Fields specific to a User (score, level, progress, friends, inventory)
    - Arrays of IDs pointing to specific purchases
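As a concrete picture of that layout (the field names and values here are invented for illustration), a user_555 document might look like:

{
    "score": 18200,
    "level": 7,
    "progress": 0.45,
    "friends": ["user_812", "user_977"],
    "inventory": ["shield_small", "potion_health"],
    "purchaseIds": ["purchase_555_1", "purchase_555_2"]
}

with each purchase_555_* key holding its own document (transaction date, transaction ID, product type, and so on).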
The only time I'd consider splitting out specific fields as you outlined above would be if your user document got seriously large, but I think it's best to divide documents up by groupings of data as opposed to specific fields.
Hope that helped!