I have a 10 GB SQL database and want to provide access to that data to a mobile app through a REST API. The mobile app will be used by fewer than 100 users. My DB is a bit sluggish because it was not built for this much data; it has simply grown over the years. My question is: will the REST API create more burden for my DB?
A REST API isn't going to create any extra burden on the DB if it's a normal client–server setup.
Let me give a quick example of how a REST API works:
Client <---(REST API protocol)---> Server <---(query optimization and similar tuning improve DB performance here)---> DB
Before REST, the server used to keep some data about each client, mostly known as session data. That created a burden for the server (more memory use) and also made things dependent on user state: to perform certain operations, a user first had to follow certain steps.
In a REST architecture, by contrast, every method/call is independent of previous calls.
So basically, REST is just another design for communication between two or more parties (services, clients, whatever).
So I don't see the REST API affecting your DB, though again it depends on your product/service architecture design, developer quality, and so on.
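To make that statelessness concrete, here is a minimal sketch of a stateless endpoint (assuming Node.js with Express; the /orders route, the token check, and the db stub are illustrative assumptions, not from the question):

const express = require('express');
const app = express();

// Stand-in for your existing data layer.
const db = { findOrderById: async (id) => ({ id, status: 'shipped' }) };

app.get('/orders/:id', async (req, res) => {
  // Everything needed to serve the request arrives with the request itself:
  // the resource id in the URL and the caller's identity in a header.
  if (!req.get('Authorization')) {
    return res.status(401).json({ error: 'missing token' });
  }
  // No session is read or written; the DB does the same work it would
  // for any other client, so the API layer adds no per-user state.
  res.json(await db.findOrderById(req.params.id));
});

app.listen(3000);

Each request carries everything the server needs, so the API layer holds no per-user state and the DB sees one query per request, exactly as it would from any other client.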
I have a full-stack app that uses React, Node.js, Express, and MySQL. I want the React app to respond to database updates the way Firebase does: when data changes, I want a real-time notification sent to my app.
I want to use stock MySQL (no plugins), so that I can use AWS RDS or whatever.
I will use socket.io to push the real-time notifications to the web app.
To avoid off-target responses, I'll summarize various approaches that are not what I am looking for:
The server could poll, or each client could poll. (Not real-time, but included for completeness. When I search, polling is the only solution I find.)
Write a wrapper that handles all MySQL updates, manages subscriptions, and sends the notifications. That is a complicated component to build and maintain. Firebase is popular because it both increases performance and reduces complexity. I like Firebase a lot but want to do the same thing with MySQL.
Use Firebase to handle the real-time notifications. The MySQL wrapper could use Firebase to handle the subscriptions and notifications, but there is still the problem of triggering the notifications in the first place. Also, I don't want to use Firebase. (For example, my application needs to run in an air-gapped environment.)
The question: Using a stock MySQL database, when a table changes, can a notification server discover the change in real-time (no polling), so that it can send notifications?
To clarify: by "stock MySQL", I mean no plugins, no need for C compilers, and even no need for root access.
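For what it's worth, one non-polling approach that fits these constraints is to stream MySQL's binary log, which the server writes anyway when binlog is enabled; reading it needs REPLICATION privileges, not root, and no plugins. Below is a rough sketch assuming the zongji npm package as the binlog client and socket.io for the push side; the credentials, port, and table handling are illustrative:

// Sketch: watch MySQL's binlog and push changes to browsers via socket.io.
// Assumes binlog is enabled with binlog_format=ROW and a user holding
// REPLICATION SLAVE / REPLICATION CLIENT privileges.
const ZongJi = require('zongji');          // binlog client (assumption)
const { Server } = require('socket.io');

const io = new Server(3001, { cors: { origin: '*' } });

const zongji = new ZongJi({
  host: 'localhost',
  user: 'repl_user',       // hypothetical credentials
  password: 'secret',
});

zongji.on('binlog', (evt) => {
  const name = evt.getEventName(); // 'writerows', 'updaterows', 'deleterows', ...
  if (name === 'writerows' || name === 'updaterows' || name === 'deleterows') {
    const table = evt.tableMap[evt.tableId].tableName;
    // Fan the change out to every subscribed client in real time.
    io.emit('db-change', { table, type: name, rows: evt.rows });
  }
});

zongji.start({
  startAtEnd: true,        // only future changes, not history
  includeEvents: ['tablemap', 'writerows', 'updaterows', 'deleterows'],
});

Whether a managed service exposes the binlog to clients varies, so this is worth verifying against the specific hosting setup before committing to it.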
I have been building a web application for 50k users. My application will include:
APIs + Socket server: NestJS + SocketIO
Database server: MySQL
Frontend server: ReactJS
I'm going to choose EC2 instances for those. Could you help me choose appropriate instances for each server (e.g. t2.xlarge or similar)? My application will have three environments: development, staging & production.
Thanks!
Nobody can provide the information you seek.
Every application is different. Some apps are compute-intensive (e.g. video transcoding), some are memory-intensive (e.g. data manipulation) and some are network-intensive (e.g. video chat). Also, the way users interact with an app differs from app to app.
The only way you will know the "appropriate instances for each server" is to set up a test platform, select a particular server configuration, then simulate typical usage of your application with the desired number of users (e.g. 50k). Monitor each server (CPU, RAM) and find any bottlenecks. Then adjust instance types and app configurations, and test again.
Yes, it's a lot of work, but that's the only way you'll really know what system sizes and configurations are required. Or, of course, you can simply get real users on your app, monitor it very closely, and make changes on the fly.
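As a concrete starting point for such a test, here is a minimal load-test sketch, assuming the autocannon npm package and a hypothetical staging URL; the connection count and duration are placeholders to adjust toward your expected traffic:

const autocannon = require('autocannon');

autocannon(
  {
    url: 'https://staging.example.com/api/feed', // hypothetical endpoint
    connections: 500, // concurrent connections; raise toward your target load
    duration: 60,     // seconds
  },
  (err, result) => {
    if (err) throw err;
    // Latency percentiles and throughput show where the bottleneck is
    // when read alongside CPU/RAM metrics on each server.
    console.log('p99 latency (ms):', result.latency.p99);
    console.log('requests/sec:', result.requests.average);
  }
);

Socket.IO traffic behaves differently from plain HTTP, so the websocket side deserves its own test, but the monitor-adjust-retest loop is the same.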
The current system that I am working on handles a variety of dataset sizes; most are around 100 rows, but a handful of clients have 250,000 or more results. We need to handle a search across these result fields, pagination for varying page sizes up to 50, and filtering of all results on a specific field.
Currently the server is set up to do all of these functions. Something to consider: a search fires off a backend call, a column filter fires off another backend call, and so on. So lots of, most likely fast, calls to the backend.
The client could do these things on a cached copy of the large dataset, but filtering/sorting would probably be slower when the dataset reaches the higher end of the spectrum.
Our primary considerations are speed and user experience. The backend approach would likely mean faster but more frequent calls, causing lots of short load times and spinners for the user. The frontend approach would likely mean a long initial load time, then faster loads/data changes for subsequent operations like filter/sort.
To those that have run into similar issues, what do you recommend? What were your concerns? Could you offer some good resources for this type of issue? Any and all assistance would be helpful.
PS sorry if this doesn't fit the standard code questions on SO, just looking for experienced help on this issue.
In the case of large datasets you have to use server-side sorting, search, and pagination.
For performance, you should cache your HTTP calls if you are calling the same endpoints a couple of times within a given time period.
You can find many examples online of caching HTTP calls using RxJS, with handy operators like shareReplay that cache and replay data for every subscriber, so you avoid making many calls to the server.
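Here is a minimal sketch of that caching idea (assuming RxJS 7 and a hypothetical /api/results endpoint; the fetch call stands in for whatever HTTP client you use):

import { defer, shareReplay } from 'rxjs';

// defer() makes the call lazy: nothing happens until someone subscribes.
const results$ = defer(() =>
  fetch('/api/results').then((res) => res.json()) // hypothetical endpoint
).pipe(
  // Cache the latest response and replay it to every later subscriber,
  // so repeated subscriptions hit the cache instead of the server.
  shareReplay({ bufferSize: 1, refCount: false })
);

results$.subscribe((rows) => console.log('first subscriber fetches:', rows.length));
results$.subscribe((rows) => console.log('second subscriber replays:', rows.length));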
I'm writing an Android app that will sync with a MySQL DB on my web server (there will also be a website reading from/writing to the same DB). The Android app will store a copy of the data locally in a SQLite DB to provide access while offline. If the user creates a row while offline, that record will be uploaded to the server the next time a data connection is available. I'm designing the app and website myself, so I have the ability to set it up as I see fit (meaning it doesn't have to conform to someone else's server).
The SQLite DB will have a column for id (which will represent the id as stored on the server) and a localID column. When the server receives the data, it will acknowledge the new data by returning an array (in JSON format) of the id numbers as stored on the server.
What would be better for this type of scenario: a transaction-safe engine or a non-transaction-safe one (such as MyISAM)? It's my understanding that MyISAM would be faster and take less space, but I can't afford to lose data. I'm thinking that if the Android app doesn't receive the confirmation, it would resubmit the data. It seems like that would prevent data loss, but I need a second (more experienced) opinion. If you would go with a transaction-safe DB, which would you recommend, as I've never worked with one?
TIA!
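To make the acknowledgement flow concrete, here is a sketch of the client-side resubmit logic the question describes (shown in JavaScript for brevity; the endpoint, payload shape, and markSynced helper are my assumptions):

// Sketch: upload offline-created rows keyed by localID, retry until acknowledged.
const markSynced = async (localID, id) => {
  // In the real app: UPDATE rows SET id = ? WHERE localID = ? in the local SQLite DB.
  console.log(`row ${localID} confirmed as server id ${id}`);
};

async function uploadPending(pendingRows) {
  // Sending localID with each row makes a retried upload idempotent:
  // the server can recognize a localID it has already stored.
  const res = await fetch('https://example.com/api/rows', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(pendingRows.map((r) => ({ localID: r.localID, data: r.data }))),
  });
  if (!res.ok) return; // no confirmation: rows stay pending and are resubmitted later

  // The server acknowledges with the ids it assigned, e.g. [{ localID: 7, id: 1042 }]
  for (const { localID, id } of await res.json()) {
    await markSynced(localID, id);
  }
}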
A real, transaction-safe engine (such as InnoDB) should be your default choice until you've measured that it's not fast enough.
Consider using UUIDs to generate IDs on the client that are guaranteed to be unique on the server.
Have you thought about how you would handle updates from multiple devices that each had offline changes? You should consider some known patterns for dealing with this kind of synchronization:
Stack Overflow question
Data Replication book
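Here is a minimal sketch of the client-generated-UUID idea (shown in Node.js for brevity; in the Android app the equivalent would be java.util.UUID.randomUUID()):

// Generate the row's permanent id on the client, so an offline-created
// row needs no server round-trip to get a collision-free identifier.
const { randomUUID } = require('crypto'); // built into Node 14.17+

function newLocalRow(data) {
  return {
    id: randomUUID(), // e.g. '3b241101-e2bb-4255-8caf-4136c566a962'; same id used on the server
    synced: false,    // flip to true once the server acknowledges the upload
    data,
  };
}

console.log(newLocalRow({ note: 'created offline' }));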
In my 4 years of experience, I have developed a lot of web applications. Now that the concept of the programmable web is getting more and more popular and new APIs are being released almost every day, I would like to develop a Java API/library for a few of these endpoints, e.g. StackApps, Reddit, Digg, etc. What I would like to know from you is:
1. How does the API of a regular web app differ from the API of these libraries? In other words, what is the difference between the two from a design perspective?
2. What are the best API development practices?
3. What are all the factors that I need to consider before designing the API?
Please comment if the details are not sufficient.
Stability
If you offer an API to your web app, it is probably because you want other people to build applications using it. If it is not stable, they will hate you for forcing them to keep up with your frequent changes. If a migration takes too long, their site might remain non-functional for a long time while they figure out the new way of doing things in your API.
Compactness
You want the API to be complete but compact, as in not too much to remember.
Orthogonality
Design it so there is one and only one way to change each property or trigger an action. Actions in an orthogonal API should have minimal (if any) side effects.
Also, it's not good practice to remove a feature from a public API once it has been released.
Security and Authentication
Since the API is web-exposed, you will have to authenticate each request and grant appropriate access. Security common sense applies here.
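As an illustration, a per-request authentication check might look like the following sketch (assuming Express and a bearer-token scheme; verifyToken stands in for real validation such as JWT verification):

const express = require('express');
const app = express();

// Stand-in for real token validation (JWT verification, API-key lookup, etc.).
const verifyToken = (token) => token === 'demo-token';

app.use((req, res, next) => {
  const header = req.get('Authorization') || '';
  const token = header.replace(/^Bearer /, '');
  if (!verifyToken(token)) {
    return res.status(401).json({ error: 'invalid or missing token' });
  }
  next(); // authenticated: continue to the route handler
});

app.get('/v1/items', (req, res) => res.json([{ id: 1 }]));
app.listen(3000);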
Fast Responses or Break into pieces
I believe that in a web environment we should have fast responses and avoid requests that take too long to complete. If a long-running task is unavoidable, it is better to send an ACK right away and break the task into several pieces handled by subsequent calls.
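One common shape for that acknowledge-then-continue pattern is to return 202 Accepted with a task id and expose a status endpoint; here is a sketch (assuming Express; the routes and in-memory task store are illustrative):

const express = require('express');
const { randomUUID } = require('crypto');

const app = express();
const tasks = new Map(); // in-memory task store; a real API would persist this

app.post('/v1/reports', express.json(), (req, res) => {
  const id = randomUUID();
  tasks.set(id, { status: 'pending' });

  // Kick off the slow work without blocking the response.
  setTimeout(() => tasks.set(id, { status: 'done', result: 'report-url' }), 5000);

  // ACK right away; the client follows up on the status endpoint.
  res.status(202).json({ id, statusUrl: `/v1/reports/${id}` });
});

app.get('/v1/reports/:id', (req, res) => {
  const task = tasks.get(req.params.id);
  if (!task) return res.status(404).end();
  res.json(task);
});

app.listen(3000);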
From my experience, all good APIs were not made to solve a generic problem, but to solve a specific problem for someone who required a certain abstraction. That abstraction then evolves as the requirements and/or the underlying layers change.
So instead of trying to find the API that will do it all, I'd start by finding one or two good problem cases where your API could help.