Sharding logic for MySQL database [closed]

Usually, sharding logic is placed in the app server: a query is routed to the correct shard by consulting the shard metadata. Apart from the app server, where else can the sharding logic be placed so that the load on the app server is reduced?

A fairly popular solution is to use the Ambassador pattern.
The idea is that an additional application runs alongside the main one, and the shard-routing logic lives in the ambassador. This offloads the main service.
1. The client sends a request with the payload.
2. The app server decides that this payload has to be saved in the database and forwards the data to the ambassador app.
3. The ambassador applies its algorithm for distributing data among the shards (using metadata or a hash function; see the sketch below).
4. The data is sent to the target shard.
With this approach, the routing logic, and the load it creates, is moved out of the main service.
It is worth noting that the ambassador should be placed on the same physical host as the main service to reduce network latency.
This may not be the best approach, but it is one option.
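As a rough sketch of the hash-based variant (the shard addresses and key format here are invented for illustration, not taken from the question):

    import hashlib

    # Connection strings for the shards; these names are made up.
    SHARDS = [
        "mysql://shard0.internal/app",
        "mysql://shard1.internal/app",
        "mysql://shard2.internal/app",
    ]

    def shard_for(key: str) -> str:
        # A stable hash, so the same key always routes to the same shard.
        digest = hashlib.md5(key.encode("utf-8")).hexdigest()
        return SHARDS[int(digest, 16) % len(SHARDS)]

    print(shard_for("user:42"))  # always the same shard for this key

One caveat: with plain modulo hashing, adding a shard remaps most keys, so in practice the ambassador would use consistent hashing or the metadata lookup mentioned above to avoid mass data movement.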

Related

Is cloud functions a valid replacement/implementation of a distributed system? [closed]

I want to process a list of data in parallel; processing one element of the data won't affect the others.
With, for example, Google Pub/Sub + Cloud Functions, I could achieve something scalable and parallel, which looks like a distributed system.
I have little knowledge of distributed programming, and it seems to take a lot of time to master.
So I would like to know: is this a valid replacement for, or implementation of, a distributed system?
For the specific use case you're talking about - dividing work among function invocations to run in parallel - yes, it sounds like that would be adequate.
I would be very hesitant to call it a full "distributed system" (at least not without a very strict definition of what that really is). If you take Wikipedia's explanation of distributed computing, you might have a very basic system in place, but the lack of a peer-to-peer direct messaging system probably makes it unsuitable for many of the applications listed on that page.
The bottom line I think you should really consider is if it satisfies the requirements of the problem at hand. Whether or not it's a "distributed system" is mostly irrelevant - either it works or it doesn't for that use case.
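For concreteness, a minimal sketch of the fan-out the question describes, assuming Google Cloud Pub/Sub and a first-generation Python Cloud Function (the project, topic, and function names are placeholders):

    import base64
    import json

    from google.cloud import pubsub_v1  # pip install google-cloud-pubsub

    publisher = pubsub_v1.PublisherClient()
    topic_path = publisher.topic_path("my-project", "work-items")  # placeholders

    def fan_out(items):
        # One message per element; each message can trigger an independent
        # function invocation, so elements are processed in parallel.
        futures = [
            publisher.publish(topic_path, json.dumps(item).encode("utf-8"))
            for item in items
        ]
        for future in futures:
            future.result()  # wait until the broker has accepted each message

    def process_item(event, context):
        # The Cloud Function side: Pub/Sub delivers the payload
        # base64-encoded in event["data"].
        item = json.loads(base64.b64decode(event["data"]))
        return item  # replace with the real per-element processing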

Would it be faster to use server-side or client-side pagination/filtering/searching? [closed]

The current system that I am working on handles a variety of dataset sizes; most are around 100 results, but a handful of clients have 250,000 or more. We need to handle search across the result fields, pagination with page sizes up to 50, and filtering of all results on a specific field.
Currently the server is set up to do all of these functions. Something to consider: a search fires off a backend call, a column filter fires off a backend call, and so on. So: many, most likely fast, calls to the backend.
The client could do these things on a cached large dataset, but filtering/sorting would probably be slower when the dataset reaches the higher end of the spectrum.
Our primary considerations are speed and user experience. The backend approach would likely mean faster but more frequent calls, causing lots of short load times and spinners for the user. The frontend approach would likely mean a long initial load time but faster loads/data changes for subsequent operations like filter/sort.
To those that have run into similar issues, what do you recommend? What were your concerns? Could you offer some good resources for this type of issue? Any and all assistance would be helpful.
PS sorry if this doesn't fit the standard code questions on SO, just looking for experienced help on this issue.
In the case of large data, you have to use server-side sorting, search, and pagination.
For performance, you should cache your HTTP calls if you are calling the same endpoints several times within a given time period.
You can find many examples online of caching HTTP calls using RxJS, with handy operators like shareReplay that cache and replay data for every subscriber, which saves you from making many calls to the server.
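A minimal sketch of the server-side approach, using sqlite3 so it runs standalone; the table and column names are made up:

    import sqlite3

    ALLOWED_SORT = {"name", "created_at"}  # whitelist: user input never lands in the SQL string

    def fetch_page(conn, search, sort, page, page_size=50):
        # The database filters, sorts, and pages, so only one page of rows
        # ever crosses the wire to the client.
        if sort not in ALLOWED_SORT:
            sort = "created_at"
        offset = (page - 1) * page_size
        sql = (f"SELECT * FROM results WHERE name LIKE ? "
               f"ORDER BY {sort} LIMIT ? OFFSET ?")
        return conn.execute(sql, (f"%{search}%", page_size, offset)).fetchall()

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE results (name TEXT, created_at TEXT)")
    print(fetch_page(conn, search="a", sort="name", page=1))

Client-side caching of the kind described above then sits on top of this, so repeating the same query within a short window costs no extra round trip.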

Will a REST API drain server resources? [closed]

I have a 10 GB SQL database and want to provide access to that data to a mobile app using a REST API. The mobile app will be used by fewer than 100 users. My DB is a bit sluggish, as it was not built for so much data but has grown over the years. My question is: will the REST API create more burden for my DB?
A REST API isn't going to create any extra burden on the DB if it's a normal client/server setup.
Let me give a quick sketch of how a REST API works:
client <--(REST API)--> server <--(query optimization and similar tuning to keep the DB fast)--> db
Before REST, the server used to keep some per-client data, commonly known as session data. But that created a burden for the server (more memory use), and it made behaviour depend on user state: to do certain operations, the user first had to follow certain steps.
In a REST architecture, by contrast, every method/call is independent of previous calls.
So basically, REST is just another design for communication between two or more parties (services, clients, whatever).
So I don't see a REST API affecting your DB by itself (though, again, it depends on your product/service architecture, developer quality, etc.).
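A framework-free sketch of that statelessness (the request shape and field names are invented): each call carries its own identity and paging state, so the server keeps nothing in memory between requests.

    def handle_request(request):
        # Identity and paging state arrive with every call; no session
        # store is read or written between requests.
        token = request["headers"]["Authorization"]
        page = int(request.get("params", {}).get("page", 1))
        return {"user": token, "page": page}

    print(handle_request({"headers": {"Authorization": "token-abc"},
                          "params": {"page": "2"}}))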

Django Celery: splitting one request into multiple tasks [closed]

I am looking to achieve something like this:
One input JSON request encapsulates what to do; then:
- split that input into 3 or more sub-requests, depending on the JSON, e.g. putting data in the database
- one agent wakes up because it processes one part of that request, e.g. pushing data to some server
- another agent wakes up because part of the request is for it too, e.g. uploading data to some other server
- meanwhile, another request could query state information about which parts of the request have executed and finished
Is Django + Celery good for this?
The main goal is to serve the parts of one request independently, so that waiting on a server in one part of the request does not hold up the other parts, which should be processed without any lag.
If your JSON contains all the sub-requests and they can be handled asynchronously, this seems like a job for RxJava, which handles event-based programs using observable sequences. Best to read the docs first to see if it fits your use case.
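Since the question asks specifically about Django + Celery: the fan-out itself is a natural fit for a Celery group. A sketch, with the broker URL and task bodies as placeholders:

    from celery import Celery, group

    app = Celery("tasks", broker="redis://localhost:6379/0")  # assumed broker

    @app.task
    def save_to_db(payload):
        ...  # put part of the payload in the database

    @app.task
    def upload_to_server_a(payload):
        ...  # push data to one server

    @app.task
    def upload_to_server_b(payload):
        ...  # upload data to another server

    def handle(payload):
        # The sub-requests run independently on workers, so waiting on one
        # server in one branch does not delay the other branches.
        job = group(save_to_db.s(payload),
                    upload_to_server_a.s(payload),
                    upload_to_server_b.s(payload)).apply_async()
        return job.id  # with a result backend, this id can drive a status endpoint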

Twitter xAuth vs open source [closed]

I am developing an open source desktop Twitter client. I would like to take advantage of the new xAuth authentication method; however, my app is open source, which means that if I put the keys directly into a source file, it may be a vulnerability (am I correct? That's what the Twitter support guy told me).
On the other hand, putting the key directly into a binary also doesn't make sense. I am writing my application in Python, so if I just ship the pyc files, it takes only a second more to get the keys, thanks to the excellent reflection capabilities of Python. If I create a small .so file with the keys, it is also trivial to obtain them by looking at the raw binary (the keys have a fixed length and character set).
What is your opinion? Is it really a security hole to expose the API keys?
Security hole? In broad terms, yes. Realistically though, these aren't nuclear launch codes we're talking about.
About the worst thing that could happen is that someone could take and use your app's keys to do something against Twitter's TOS that will end up getting the keys banned. No user data would be vulnerable since you're not distributing the user tokens (that would be much worse from a security standpoint). Since anyone can register an app in 2 seconds at no cost, the only reason to do that kind of impersonation would be specifically to besmirch the reputation of you or your app.
One thing you could do is leave them out of the source code but make it clear that users compiling from source need to obtain their own keys and put them in the appropriate place, while leaving the keys in the binary version that you distribute. Not 100% secure, but it makes things that little bit harder, which will deter a certain number of ne'er-do-wells.
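For the compile-from-source path, one common shape is to read the keys from the environment at startup; the variable names here are illustrative:

    import os

    CONSUMER_KEY = os.environ.get("TWITTER_CONSUMER_KEY")
    CONSUMER_SECRET = os.environ.get("TWITTER_CONSUMER_SECRET")

    if not (CONSUMER_KEY and CONSUMER_SECRET):
        raise SystemExit("Set TWITTER_CONSUMER_KEY and TWITTER_CONSUMER_SECRET: "
                         "builds from source must use your own registered app keys.")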