Decoupling Client and Server - Common Patterns?

Suppose you have a mobile app that needs to ask the server for 20 other Users near your current location. The URL to get this data might look something like this (not escaped):
https://example.com/api/users?lat=40.240239&long=-111.657920&count=20
The server could then respond in one of two ways:
Option #1: Return all User objects directly, as JSON, in one large array.
Option #2: Return an array of UUIDs corresponding to the Users who match the request. The client would then have two choices:
a) Send a request for all User objects in one big batch:
https://example.com/api/users?ids=[1,2,3,4...]
b) Send requests for each User independently:
https://example.com/api/users?id=1
https://example.com/api/users?id=2 ...
Currently, my application implements Option #1. In the name of responsiveness, I've eliminated every possible network round-trip by returning as much data as possible in as few requests as possible. However, I'm starting to see problems with this choice because my client and server logic are very tightly coupled: it's difficult to maintain, versioning is a nightmare, and client-side caching is much harder than it would be with, say, Option #2b.
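For concreteness, a rough sketch of what Option #2b could look like on the client side, with a naive in-memory cache (the endpoints and User fields here are just placeholders, not my real API):

// Hypothetical sketch of Option #2b: fetch only UUIDs, then fetch each User,
// skipping the network entirely for Users that are already cached.
interface User {
  id: string;
  name: string;
}

const userCache = new Map<string, User>();

async function fetchNearbyUserIds(lat: number, long: number, count: number): Promise<string[]> {
  const res = await fetch(
    `https://example.com/api/users?lat=${lat}&long=${long}&count=${count}`
  );
  return res.json(); // server returns only the matching UUIDs
}

async function fetchUser(id: string): Promise<User> {
  const cached = userCache.get(id);
  if (cached) return cached; // no round-trip for Users we already have

  const res = await fetch(`https://example.com/api/users?id=${id}`);
  const user: User = await res.json();
  userCache.set(id, user);
  return user;
}

async function nearbyUsers(lat: number, long: number, count: number): Promise<User[]> {
  const ids = await fetchNearbyUserIds(lat, long, count);
  return Promise.all(ids.map(fetchUser));
}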
Based on your experience, which option do you recommend (or a different method entirely)? What would you consider the "industry standard" way of serving data to a mobile app?

Related

Meteor performance comparison of publishing static data vs. getting data via HTTP Get request

I am building an app that receives a bunch of static, read-only data. The user does not change the data or send any data to the server. The app just gets the data and presents it to the user in various views.
For example, a parts list with part numbers and prices. This data is currently stored in MongoDB.
I have a few options for getting the data to the client. I could just use Meteor's publication system and have the client subscribe to the data it needs.
Or I could map all the data the client needs into one JSON file, save the JSON file to Amazon S3, and have the client make a simple GET request to grab the data.
If we wanted this app to scale to many, many users, would avoiding Meteor publications be the better choice? Or would either method perform about the same? Using the Meteor publication system would be the easiest, but I am worried that going down this route would lead to performance issues if a lot of clients request the data. If the performance of publishing and a GET request is about the same, I would just stick with the publication as it's the easiest.
In this case Meteor will provide better performance. If your data flow is mostly server-to-client, then clients do not have to worry about polling the server and the server does not have to worry about handling those requests.
Also, Meteor requires very few resources to send data to the client because the connection is persistent. Take an app like CodeFights, which is built on Meteor: it constantly has thousands of connections to and from it, and its performance is great.
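If it helps to picture the publication route, a minimal sketch could look like this (the collection and publication names are made up):

// server -- publish the read-only parts collection to every subscriber
import { Meteor } from 'meteor/meteor';
import { Mongo } from 'meteor/mongo';

export const Parts = new Mongo.Collection('parts');

Meteor.publish('allParts', function () {
  // Static data: just send every part document
  return Parts.find({});
});

// client -- subscribe once; the data is then available in minimongo
Meteor.subscribe('allParts', () => {
  console.log('parts loaded:', Parts.find().count());
});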
As a side note, if you are ready to serve your static data as a JSON file from a separate server (AWS S3), then it means you do not expect that data to be very big, since it has to fit in a single file and be loaded entirely into the client's memory.
In that case, you might even want to reconsider the need to perform any separate request at all (whether HTTP or Meteor Pub/Sub).
For instance, you could simply embed the data in your app, or serve it through SSR / the Fast Render package.
Then, if you are really concerned about scalability, you might even reconsider the need to use Meteor, since you do not seem to need any client-server interactivity (no real need for Pub/Sub, no reactivity…). Once your prototype is ready, you could rework it as a separate, static SPA, so that you do not even need to serve it through Node / Meteor.

Database Security when hosted on client

I have a database along with a REST API for clients to access the data. For performance and other reasons, I need to move the application, along with the data, onto the client's physical server. Is there a way for me to encrypt the data in the database so that the only way the client can get access to it is through the API that I expose, and not by cracking MySQL and getting at the raw data? I do not want the client to see the data stored in my DB, as I feel they will steal it or share it. What can I do to accomplish that?
One idea:
Is it possible to implement some form of one-way encryption, where it's based on the lookup value provided in the API?
E.g. an API lookup by email: the email gets one-way hashed and compared against the DB for a match, and the matching record is returned. This way, if they happen to look at my database, they cannot see a list of emails; all they see is data similar to an /etc/passwd file.
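Roughly what I have in mind (table and column names are placeholders, and the non-email columns of the matched row would of course still be readable):

// Sketch of the idea: the API hashes the lookup value before querying,
// so the stored email column only ever contains one-way hashes.
import { createHash } from 'crypto';
import mysql from 'mysql2/promise';

function hashEmail(email: string): string {
  return createHash('sha256').update(email.toLowerCase()).digest('hex');
}

async function lookupByEmail(email: string) {
  const db = await mysql.createConnection({
    host: 'localhost',
    user: 'app',
    database: 'app',
  });
  // Only the hash is stored and compared; the plaintext email never hits the DB
  const [rows] = await db.execute(
    'SELECT * FROM users WHERE email_hash = ?',
    [hashEmail(email)]
  );
  return rows;
}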
No.
From the 10 Immutable Laws of Security
Law #3: If a bad guy has unrestricted physical access to your computer, it's not your computer anymore
What you want is fundamentally impossible, without caveats. Always and everywhere.

Technology stack for a multiple queue system

I'll describe the application I'm trying to build and the technology stack I'm considering at the moment, to get your opinion.
Users should be able to work on a list of tasks. These tasks come from an API with all the information about them: id, image URLs, description, etc. The API is only available in one datacenter, so to avoid the latency, for example in China, the tasks are stored in a queue.
So you'll have different queues depending on your country, and once you finish your task it will be sent to another queue, which later writes this information back to the original datacenter.
The list of tasks is quite large; that's why there is an API call to get the tasks (~10k rows) and store them in a queue, and users work on them depending on the queue for the country they are in.
For this system, where you can have around 100 queues, I was thinking of Redis to manage the task list requests (e.g. get 5k rows from the China queue, write 500 rows to the write queue, etc.).
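Roughly, the Redis side I have in mind would look like this (key names are just placeholders):

// Sketch of per-country task queues backed by Redis lists.
import Redis from 'ioredis';

const redis = new Redis();

// After calling the upstream API, push the ~10k tasks into the country queue
async function enqueueTasks(country: string, tasks: object[]): Promise<void> {
  const serialized = tasks.map((t) => JSON.stringify(t));
  await redis.rpush(`queue:${country}`, ...serialized);
}

// A user pulls a batch of tasks for their country
async function dequeueBatch(country: string, count: number): Promise<object[]> {
  const batch: object[] = [];
  for (let i = 0; i < count; i++) {
    const item = await redis.lpop(`queue:${country}`);
    if (item === null) break;
    batch.push(JSON.parse(item));
  }
  return batch;
}

// Completed tasks go to the write queue, drained later back to the origin datacenter
async function pushResult(result: object): Promise<void> {
  await redis.rpush('queue:write', JSON.stringify(result));
}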
The API responses come as lists of JSON objects, and these ~10k rows need to be stored somewhere. Since you need to be able to filter within a queue, MySQL isn't an option unless I store every field of the JSON object as a new row. The first thought is a NoSQL DB, but I wasn't too happy with MongoDB in the past, and the API response format doesn't change much. Since I also need relational tables for other things, I was thinking of PostgreSQL: it's a relational database and it gives you the ability to store JSON and filter on it.
What do you think? Ask me if something isn't clear.
You can use the hstore extension from PostgreSQL (or its native json/jsonb types) to store that data, or dynamic columns from MariaDB (a MySQL fork).
If you can move your persistence stack to Java, then many interesting options are available: MapDB (but it requires memory and its API is changing rapidly), Persistit, or MVStore (the engine behind H2).
All of these let you store JSON with decent performance. I suggest you use a full-text search engine like Lucene to avoid searching JSON content in a slow way.
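As a concrete sketch of the PostgreSQL route, using the native jsonb type rather than hstore (table and column names are made up, via the node-postgres client):

// Store each raw JSON task in a jsonb column and filter on its fields.
import { Pool } from 'pg';

const pool = new Pool({ database: 'tasks_db' });

async function setup(): Promise<void> {
  await pool.query(`
    CREATE TABLE IF NOT EXISTS tasks (
      id      serial PRIMARY KEY,
      country text NOT NULL,
      payload jsonb NOT NULL
    )`);
  // A GIN index keeps filters on the jsonb fields fast
  await pool.query(
    'CREATE INDEX IF NOT EXISTS tasks_payload_idx ON tasks USING gin (payload)'
  );
}

async function insertTask(country: string, task: object): Promise<void> {
  // node-postgres serializes plain objects to JSON for jsonb parameters
  await pool.query('INSERT INTO tasks (country, payload) VALUES ($1, $2)', [
    country,
    task,
  ]);
}

async function findByStatus(country: string, status: string) {
  // ->> extracts a jsonb field as text, so you can filter on it directly
  const { rows } = await pool.query(
    "SELECT payload FROM tasks WHERE country = $1 AND payload->>'status' = $2",
    [country, status]
  );
  return rows.map((r) => r.payload);
}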

What database/technology to use for a notification system on a node.js site?

I'm looking to implement notifications within my Node.js application. I currently use MySQL for relational data (users, submissions, comments, etc.). I use MongoDB for page views only.
To build a notification system, does it make more sense (from a performance standpoint) to use MongoDB or MySQL?
Also, what's the convention for showing new notifications to users? At first, I was thinking that I'd have a notification icon; the user clicks on it and it makes an AJAX call to look for all new notifications for that user. But I want to show the user that the icon is actually worth clicking (either with a different color or a bubble with the number of new notifications, like Google Plus does).
I could do it when the user logs in, but that would mean the user would only see new notifications when they logged out and back in (because it'd be saved in their session). Should I poll for updates? I'm not sure if that's the recommended method, as it seems like overkill to show a single digit (or more, depending on the number of notifications).
If you're using Node then you can 'push' notifications to a connected user via WebSockets. The linked document is an example of one well-known WebSocket engine that has good performance and good documentation. That way your application can send notifications to any user, or sets of users, or everyone, based on simple queries that you set up.
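A minimal sketch of that push approach, assuming Socket.IO as the WebSocket engine (the event and room names are made up):

// Push notifications to connected users over Socket.IO rooms.
import { createServer } from 'http';
import { Server } from 'socket.io';

const httpServer = createServer();
const io = new Server(httpServer);

io.on('connection', (socket) => {
  // Have each client join a room named after its user id once authenticated
  const userId = socket.handshake.auth.userId as string;
  socket.join(`user:${userId}`);
});

// Anywhere in the app: push a notification to one user...
function notifyUser(userId: string, message: string): void {
  io.to(`user:${userId}`).emit('notification', { message });
}

// ...or broadcast to everyone
function notifyAll(message: string): void {
  io.emit('notification', { message });
}

httpServer.listen(3000);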
Data storage is a different question. Generally MySQL does have poor performance in highly scaled setups, and Mongo generally has quicker read-query responses, but it depends on what data structure you wish to use. If your data is in a simple key-value structure with no real need for relational data, then perhaps a memory store such as Redis would be the most suitable.
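For the unread-count badge specifically, a simple Redis counter per user would be enough (key names are made up):

// Per-user unread-notification counters in Redis.
import Redis from 'ioredis';

const redis = new Redis();

async function addNotification(userId: string): Promise<void> {
  // Bump the unread counter whenever a new notification is created
  await redis.incr(`notifications:unread:${userId}`);
}

async function unreadCount(userId: string): Promise<number> {
  const count = await redis.get(`notifications:unread:${userId}`);
  return count ? parseInt(count, 10) : 0;
}

async function markAllRead(userId: string): Promise<void> {
  await redis.del(`notifications:unread:${userId}`);
}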
This answer has more information on your question too if you want to follow up and investigate more.

Sharing Ruby variables between Sinatra requests

I am trying to write a simple quiz game in Sinatra and I need to have common objects accessible to all users (lobby state, chat messages, etc.). The problem is that Sinatra reloads the code after every request and the objects go back to their initial state. How can I implement this?
Well, the topic is a bit tricky. Sinatra actually doesn't reset the server state:
require 'sinatra'

# Process-wide state shared by every request served by this process
GlobalState = {}
GlobalState[:some_counter] = 0

get '/' do
  # Read the shared counter, then increment it for the next request
  response = "GlobalState[:some_counter]: #{GlobalState[:some_counter]}"
  GlobalState[:some_counter] += 1
  response
end
There is nothing wrong with this code: if you run it and go to http://localhost:4567 you will see GlobalState[:some_counter] incremented as expected.
But this approach is discouraged for the following reasons, which are mainly related to the web nature of the application:
Since the data is stored in a Ruby object, if you stop the server you lose the data. However, if you don't need persistent data, that's not a problem.
When you run a web app, you usually have simultaneous instances of your app in order to serve multiple requests concurrently. There are a couple of ways to accomplish this:
Forks: multiple processes of the same application. They don't share memory, so Ruby global state variables become useless.
Threads: threads share memory, so you can access global state, but you have to manage concurrent access to the same global object, with non-trivial consequences.
You can't associate data with a user, and vice versa: this is because HTTP doesn't provide any mechanism for preserving state (it is a stateless protocol). To resolve this you need either:
Client-side data storing: cookies, HTML5 Local Storage...
Server-side data storing: sessions (not really server-side only; you at least need to associate sessions with their respective clients, usually by storing session IDs in cookies)
For these reasons, data management in web apps is not trivial. Anyway, don't worry, you don't have to reinvent the wheel; the solutions are at hand:
Sinatra cookies for client-side data storing
Sinatra sessions for client-server data sharing
Databases for data persistence
There isn't a way to do this without some type of persistent store. You would have to store information in either the database or cookies.