User-based rate limit implementation - MySQL

I have an Express.js backend server, and I am using MySQL to store the users' information. I want to implement an API rate limit that caps the number of requests a user can make in a minute; however, if I store the request count per minute in a database and query that count every time an API request is made, this system can easily be abused. Is there a better way to do this?
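For context, the usual alternative is to keep these short-lived counters out of MySQL entirely and hold them in process memory (or Redis, if you run several Node processes). A minimal sketch of a fixed-window limiter as Express middleware, assuming a hypothetical req.user.id set by your authentication middleware:

```js
// Minimal fixed-window rate limiter held in process memory.
// Assumes earlier auth middleware has set req.user.id (hypothetical).
const WINDOW_MS = 60 * 1000; // one minute
const MAX_REQUESTS = 100;    // allowed requests per user per window

const counters = new Map(); // userId -> { windowStart, count }

function rateLimit(req, res, next) {
  const userId = req.user.id;
  const now = Date.now();
  let entry = counters.get(userId);

  // Start a fresh window if none exists or the previous one has expired.
  if (!entry || now - entry.windowStart >= WINDOW_MS) {
    entry = { windowStart: now, count: 0 };
    counters.set(userId, entry);
  }

  entry.count += 1;
  if (entry.count > MAX_REQUESTS) {
    return res.status(429).json({ error: 'Too many requests' });
  }
  next();
}

module.exports = rateLimit;
```

Mount it with app.use(rateLimit) after authentication. The trade-off: the counters reset on restart and are per-process, so a multi-instance deployment would move them into something like Redis rather than MySQL.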

Related

Strategy to implement paid API in the mobile application

I'm developing an app that shows the scores of sports games in real time. I'm using a paid API that allows a limited number of requests, and to show the scores in real time I'm using a short-polling technique (hit the API every 2-3 seconds to see whether the score has changed).
If I placed that API URL directly in the application, every application user would be hitting the API directly. Assuming 10 users are using the application, then 10 API calls would be used up per interval (2-3 seconds), right?
So what strategy (better way or approach) should I use to prevent these multiple API calls?
What I could come up with is to store the API's JSON response in the MySQL database. That way I would be serving the data to application users through the database (users would hit the database, not the actual API). Is that the right way to do it?
1. Store the API JSON response in the MySQL database.
2. Convert the stored data back into JSON.
3. Have the application users poll that JSON from the database.
I don't know if this is the correct way to do it! That's why I posted this question.
Thank you
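For what it's worth, the approach described above amounts to a small server-side cache: one background poller is the only thing that ever calls the paid API, and all app users read the stored copy. A minimal sketch, assuming Node 18+ for the built-in fetch and a placeholder SCORES_API_URL (the cached response could just as well be written to MySQL instead of a variable):

```js
const express = require('express');
const app = express();

const SCORES_API_URL = 'https://example.com/scores'; // placeholder upstream URL
let latestScores = null; // could be persisted to MySQL instead

// Background poller: the only code path that hits the paid API.
async function pollScores() {
  try {
    const res = await fetch(SCORES_API_URL);
    latestScores = await res.json();
  } catch (err) {
    console.error('poll failed:', err.message);
  }
}
setInterval(pollScores, 3000); // one upstream call every 3 s, however many users there are

// App users poll this endpoint instead of the paid API.
app.get('/scores', (req, res) => {
  if (!latestScores) return res.status(503).json({ error: 'not ready yet' });
  res.json(latestScores);
});

app.listen(3000);
```

The key property is that upstream usage is one call per interval regardless of how many users are polling.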

MySQL - how to limit the number of transactions per second on a table

How do I limit the number of transactions per second on a table in MySQL?
For example, to prevent brute-force login attempts via an API.
As David says, do this on the API. You cannot and should not limit your database. There's no way to distinguish the origin of the query, so you'll just shut down the database for everyone if one person decides to flood it, making a denial-of-service attack easier.
As for a solution, there are many options.
Nginx has a rate-limiting feature built in that can limit requests per interval of time, and is very flexible. This can be focused on particular endpoints, paths, or other criteria, making it easy to protect whatever parts of your system are vulnerable.
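As a rough illustration (zone name, rate, and paths are placeholders; see the nginx docs for details):

```nginx
# In the http block: track clients by IP, allowing 10 requests per second each.
limit_req_zone $binary_remote_addr zone=apilimit:10m rate=10r/s;

server {
    location /api/ {
        # Enforce the limit here; allow short bursts of up to 20 requests.
        limit_req zone=apilimit burst=20 nodelay;
        proxy_pass http://backend;
    }
}
```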
You'll also need to block clients that are trying to attack your system. Consider something like fail2ban which can read logs and automatically block source traffic from offenders. Log every failed attempt and this tool can do the rest.

Amazon API submitting requests too quickly

I am creating a games comparison website and would like to include Amazon prices in it. The problem I am facing is using their API to get the prices for the 25,000 products I already have.
I am currently using the ItemLookup operation from Amazon's API and have it working to retrieve the price; however, after about 10 results I get an error saying 'You are submitting requests too quickly. Please retry your requests at a slower rate'.
What is the best way to slow down the request rate?
Thanks,
If your application is trying to submit requests that exceed the maximum request limit for your account, you may receive error messages from Product Advertising API. The request limit for each account is calculated based on revenue performance. Each account used to access the Product Advertising API is allowed an initial usage limit of 1 request per second. Each account will receive an additional 1 request per second (up to a maximum of 10) for every $4,600 of shipped item revenue driven in a trailing 30-day period (about $0.11 per minute).
From Amazon API Docs
If you're just planning on running this once, then simply sleep for a second in between requests.
If this is something you're planning on running more frequently, it'd probably be worth optimising it by making sure that the length of time the query takes to return is subtracted from that sleep (so, if my API query takes 200 ms to come back, we only sleep for 800 ms).
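A minimal sketch of that idea, assuming a hypothetical lookupItem(id) wrapper around the ItemLookup call:

```js
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// At most one request per second, counting each request's own duration
// toward the one-second budget.
async function lookupAllItems(itemIds) {
  const results = [];
  for (const id of itemIds) {
    const start = Date.now();
    results.push(await lookupItem(id)); // hypothetical wrapper around ItemLookup
    const elapsed = Date.now() - start;
    if (elapsed < 1000) await sleep(1000 - elapsed); // 200 ms call -> 800 ms sleep
  }
  return results;
}
```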
Since the error only shows up after about 10 results, you should check how many requests you can make before being throttled. If it always appears after 10 fast requests, you could add a delay, e.g.
wait(500)
or a few hundred ms more. If it only happens every 10th request, you could build a loop and apply that delay on every 9th request.
If your requests contain a lot of repetition, you can add a cache and clear it once a day.
Or contact AWS about purchase authorization for a higher request limit.
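A rough sketch of such a cache, reusing the same hypothetical lookupItem(id) wrapper:

```js
const cache = new Map(); // itemId -> cached API response

// Repeated lookups are served from the cache; only misses hit the API.
async function cachedLookup(id) {
  if (cache.has(id)) return cache.get(id);
  const result = await lookupItem(id); // hypothetical wrapper around ItemLookup
  cache.set(id, result);
  return result;
}

// Clear the cache once a day so prices do not go stale.
setInterval(() => cache.clear(), 24 * 60 * 60 * 1000);
```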
I ran into the same problem even when I added a delay of one or more seconds.
I believe that when you start making too many requests with only a one-second delay, Amazon doesn't like it and decides you're a spammer.
You'll have to generate another key pair (and use it when making further requests) and add a delay of 1.1 seconds to be able to make fast requests again.
This worked for me.

Storing socket.io data

I'm developing an app using socket.io where users send and receive data in the channels/rooms they are present in. I need your suggestion for storing the data that is passed from a user to a channel, so that when someone enters that channel they can get the data previously sent to it.
So how should I save the data for a particular channel?
I had planned to store the data in a MySQL database, with channel id, channel name, and channel message columns.
But won't it be a problem, as the number of users grows, to insert each message as a new row in the database?
Please suggest the best way to handle this.
Until you have thousands of simultaneous users, it hardly matters. Just use whatever you are most comfortable with. When you get thousands of users you can change the architecture, if necessary.
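As a sketch of the MySQL-backed design from the question (table and column names are illustrative), using socket.io rooms together with the mysql2 driver:

```js
const { Server } = require('socket.io');
const mysql = require('mysql2/promise');

const pool = mysql.createPool({ host: 'localhost', user: 'app', database: 'chat' });
const io = new Server(3000);

io.on('connection', (socket) => {
  // Replay the stored history when a user joins a channel.
  socket.on('join', async (channelId) => {
    socket.join(channelId);
    const [rows] = await pool.query(
      'SELECT channel_message FROM messages WHERE channel_id = ? ORDER BY id',
      [channelId]
    );
    socket.emit('history', rows);
  });

  // Persist each message, then broadcast it to everyone in the channel.
  socket.on('message', async ({ channelId, text }) => {
    await pool.query(
      'INSERT INTO messages (channel_id, channel_message) VALUES (?, ?)',
      [channelId, text]
    );
    io.to(channelId).emit('message', { channelId, text });
  });
});
```

One row per message is fine at this scale; if insert volume ever becomes the bottleneck, messages can be batched before writing.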

Amazon SQS to funnel database writes

Assume I am building Netflix and I want to log each view by userID and movie ID.
The format would be viewID, userID, timestamp.
However, in order to scale this, assume we're getting 1,000 views a second. Would it make sense to queue these views in SQS and have queue readers dequeue them one by one and write them to the MySQL database? This way the database is not overloaded with write requests.
Does this look like something that would work?
Faisal,
This is a reasonable architecture; however, you should know that writing to SQS is going to be many times slower than writing to something like RabbitMQ (or any other local message queue).
By default, SQS FIFO queues support up to 3,000 messages per second with batching, or up to 300 messages per second (300 send, receive, or delete operations per second) without batching. To request a limit increase, you need to file a support request.
That being said, starting with SQS wouldn't be a bad idea since it is easy to use and debug.
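As a rough sketch of the consumer side (queue URL, region, and table are placeholders), using the AWS SDK v3 with long polling and one multi-row INSERT per received batch:

```js
const {
  SQSClient,
  ReceiveMessageCommand,
  DeleteMessageBatchCommand,
} = require('@aws-sdk/client-sqs');
const mysql = require('mysql2/promise');

const sqs = new SQSClient({ region: 'us-east-1' });
const pool = mysql.createPool({ host: 'localhost', user: 'app', database: 'views' });
const QUEUE_URL = 'https://sqs.us-east-1.amazonaws.com/123456789012/views'; // placeholder

async function drainQueue() {
  while (true) {
    const { Messages } = await sqs.send(new ReceiveMessageCommand({
      QueueUrl: QUEUE_URL,
      MaxNumberOfMessages: 10, // SQS maximum per receive
      WaitTimeSeconds: 20,     // long polling
    }));
    if (!Messages || Messages.length === 0) continue;

    // One multi-row INSERT per batch instead of one write per view.
    const rows = Messages.map((m) => {
      const { viewID, userID, timestamp } = JSON.parse(m.Body);
      return [viewID, userID, timestamp];
    });
    await pool.query('INSERT INTO views (view_id, user_id, ts) VALUES ?', [rows]);

    await sqs.send(new DeleteMessageBatchCommand({
      QueueUrl: QUEUE_URL,
      Entries: Messages.map((m) => ({ Id: m.MessageId, ReceiptHandle: m.ReceiptHandle })),
    }));
  }
}

drainQueue().catch(console.error);
```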
Additionally, you may want to investigate MongoDB for logging; check out the following references:
MongoDB is Fantastic for Logging
http://blog.mongodb.org/post/172254834/mongodb-is-fantastic-for-logging
Capped Collections
http://blog.mongodb.org/post/116405435/capped-collections
Using MongoDB for Real-time Analytics
http://blog.mongodb.org/post/171353301/using-mongodb-for-real-time-analytics