EC2 instance type for an API with a complex database and data calculation - MySQL

I need advice from anyone who uses AWS EC2 instances to host their projects.
Currently I have a PHP project (backend API) and a ReactJS frontend. When testing locally, the API response time is 3 seconds (I'm still optimizing my backend code to reduce it to 2 seconds), but my main concern is that when deployed to a staging machine in AWS, using t3.medium for the backend and t2.medium for the frontend, the response time is at least 19 seconds. Here are my goals:
1. For staging, a response time of around 5 seconds at most, since this is mainly used for testing purposes.
2. For production, I want the same response time as my local machine. My local machine has an i7 and 16 GB of RAM (with, of course, too many other applications running and lots of Chrome tabs open locally). The initial target for production is 10-15 users, but this will grow once our app is well tested and stable (I mean, the data should be accurate).
At first my plan was to test all the available EC2 instance types and see which of them suits my requirements, particularly the response time, but a friend told me that this would cost a lot, since AWS charges for the resources used every time an EC2 instance is provisioned. Also, what is the best approach, given that my backend API runs a lot of scripts? The scripts call the Amazon Selling Partner API and Advertising API, which are currently very slow APIs themselves; some of their endpoints have a response time of at least 30 seconds, which is why I decided to run them in the background through cron jobs. These scripts also perform database writes after the response from the Amazon API is successful.
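The cron-driven pattern described above (call the slow Amazon API in the background, write to the database only on success) could be sketched like this. This is a hypothetical illustration, not the actual project code: the API call is stubbed out, and sqlite3 stands in for MySQL so the snippet is self-contained; the `sales` table and column names are made up.

```python
import sqlite3

# Hypothetical stand-in for the slow Amazon SP-API call. The real
# endpoint can take 30+ seconds, which is exactly why this runs from
# a cron job rather than inside a user-facing request.
def fetch_report_from_amazon():
    return [("B00EXAMPLE", 42), ("B01EXAMPLE", 7)]  # (asin, units_sold)

def run_job(conn):
    rows = fetch_report_from_amazon()
    # Write to the database only after a successful API response, and
    # inside one transaction so a crash can't leave partial data behind.
    with conn:
        conn.executemany(
            "INSERT OR REPLACE INTO sales (asin, units) VALUES (?, ?)", rows
        )

conn = sqlite3.connect(":memory:")  # stand-in for the MySQL connection
conn.execute("CREATE TABLE sales (asin TEXT PRIMARY KEY, units INTEGER)")
run_job(conn)
```

The key point of the design is that the API response time seen by users is decoupled from the 30-second Amazon calls: the web request only ever reads already-written rows.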
Thank you

Related

Retrieve streaming data from API using Cloud Functions

I want to stream real-time data from the Twitter API to Cloud Storage and BigQuery. I have to ingest and transform the data using Cloud Functions, but the problem is I have no idea how to pull data from the Twitter API and ingest it into the cloud.
I know I also have to create a scheduler and a Pub/Sub topic to trigger Cloud Functions. I have created a Twitter developer account. The main problem is actually streaming the data into Cloud Storage.
I'm really new to GCP and streaming data so it'll be nice to see a clear explanation on this. Thank you very much :)
You have to design your solution first. What do you want to achieve: streaming or microbatches?
If streaming, you have to use Twitter's streaming API. In short, you initiate a connection and stay up and running (and connected), receiving the data.
If batches, you have to query an API and download a set of messages, in a query-response mode.
That being said, how do you implement it with Google Cloud? Streaming is problematic because you have to be always connected, and with serverless products you have timeout concerns (9 minutes for Cloud Functions V1, 60 minutes for Cloud Run and Cloud Functions V2).
However, you can imagine invoking your serverless product regularly, staying connected for a while (say, 1 hour), and scheduling a trigger every hour.
Or use a VM to do that (or a pod on a Kubernetes cluster).
You can also consider microbatches, where you invoke your Cloud Function every minute and get all the messages from the past minute.
In the end, it all depends on your use case. What real-time latency do you expect? Which products do you want to use?
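The microbatch option could be sketched as below. This is a simplified stand-in, not production code: `fetch_tweets_since` is a stub for the Twitter recent-search API, and a plain dict stands in for a Cloud Storage bucket so the snippet runs anywhere; the real function would use the `google-cloud-storage` client and be wired to a Cloud Scheduler → Pub/Sub trigger.

```python
import json
from datetime import datetime, timedelta, timezone

# Hypothetical stub: the real version would call Twitter's recent-search
# endpoint with start_time/end_time set to the one-minute window.
def fetch_tweets_since(start, end):
    return [{"id": 1, "text": "hello"}, {"id": 2, "text": "world"}]

bucket = {}  # stand-in for a Cloud Storage bucket

def microbatch(event=None, context=None):
    """Entry point invoked every minute by Cloud Scheduler via Pub/Sub."""
    end = datetime.now(timezone.utc).replace(second=0, microsecond=0)
    start = end - timedelta(minutes=1)
    tweets = fetch_tweets_since(start, end)
    # One object per window keeps the job idempotent: re-running the
    # same minute overwrites the same object instead of duplicating data.
    name = f"tweets/{start:%Y%m%d-%H%M}.json"
    bucket[name] = json.dumps(tweets)
    return name

obj = microbatch()
```

Naming objects by window also makes the downstream BigQuery load simple: one load job per object, with retries safe by construction.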

.NET Core API stalls on high load

I'm currently in the first phase of optimizing a gaming back-end. I'm using .NET Core 2.0, EF and MySQL. Currently all of this is running on a local machine, used for dev. To do initial load testing, I've written a small console app that simulates the way the final client will use the API. The API is hosted under IIS 8.5 on a Windows Server 2012R2 machine. The simulating app is run on 1-2 separate machines.
So, this all works very well for around 100-120 requests/s. The CPU load is around 15-30% on the server, and the number of connections on the MySQL server averages around 100 (I've set max_connections to 400, and it's never near that value). Response times average way below 100 ms. However, as soon as we push request figures a bit higher than that, the system seems to completely stall at intervals. The CPU load drops below 5%, while the response times skyrocket. So, it acts like a traffic-jam situation. During the stall, both the MySQL and the dotnet processes seem to "rest".
I do realize I'm nowhere near a production setup in any respect; the MySQL instance is a dev instance, etc. However, I'm still curious what could be the cause of this. Any ideas?
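For reference, the load-simulation side can be sketched compactly. This is a hypothetical Python stand-in for the .NET console app described above, with the API call stubbed out; the shape is the same: a fixed pool of concurrent workers issuing requests and measuring achieved throughput, so that a stall shows up as the req/s figure plateauing or collapsing as concurrency rises.

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stub for one API call; the real version would use an
# HTTP client against the IIS-hosted endpoint and return the status code.
def call_api(i):
    time.sleep(0.01)  # simulated server latency
    return 200

def run_load(total_requests=200, concurrency=20):
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        statuses = list(pool.map(call_api, range(total_requests)))
    elapsed = time.perf_counter() - start
    return statuses, total_requests / elapsed  # responses, achieved req/s

statuses, rps = run_load()
```

Running this at stepped concurrency levels (20, 50, 100, ...) and plotting achieved req/s against offered load makes the knee point, where the server stops scaling, easy to locate.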

deployed php app can have more than ten thousand hits per day

After deploying my PHP application to Google Cloud Platform, it may get around 10,000 hits simultaneously. Can the platform handle that number of requests at a time?
Thanks
I'm guessing that you'd need more than one server to handle 10k requests/second, but both the VM (Compute Engine) and App Engine platforms have scaled to over 1M requests/second.
I don't have a link for App Engine, but it's possible to handle that level of traffic -- see this article about Snapchat for an example.

How to create a C# WCF application with high availability and performance

I have developed a C# WCF application, which when called performs inserts and updates in a MySQL 5.6 database, running on a Windows 2008 server, with IIS. The requests can range from a single update or insert for 1 row, to 1000 updates or 1000 inserts per request.
Initially, the number of third-party remote connections was minimal, but now the load and number of requests have increased.
Therefore, I'm now looking to provide the best possible solution in terms of a highly available service with redundant MySQL failover, while ensuring that the service can handle the number of requests and provides rapid responses.
Can anyone offer any advice on how to achieve this?
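Independent of the HA topology, one thing that directly affects "rapid response" for 1000-row requests is batching all the writes for a request into a single transaction, so the request costs one commit rather than a thousand. A minimal sketch of that pattern, using sqlite3 as a stand-in for the MySQL connection (the `events` table and its columns are hypothetical; the same shape works with a MySQL driver's `executemany`):

```python
import sqlite3

def apply_batch(conn, rows):
    # One transaction per request: commits once, and rolls back the
    # whole batch atomically if any row fails.
    with conn:
        conn.executemany(
            "INSERT INTO events (id, payload) VALUES (?, ?) "
            "ON CONFLICT(id) DO UPDATE SET payload = excluded.payload",
            rows,
        )

conn = sqlite3.connect(":memory:")  # stand-in for the MySQL connection
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")
apply_batch(conn, [(i, f"p{i}") for i in range(1000)])  # 1000-row request
apply_batch(conn, [(1, "updated")])  # updates are idempotent upserts
```

The upsert form also makes retries safe, which matters once a failover layer starts replaying requests against the surviving MySQL node.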

Application design for data persistence over unreliable internet

I have a Flex ActionScript 3 schedule-reminder app which talks to a web service over the internet via Wi-Fi. The problem is that the Wi-Fi connection is unreliable and there are frequent dropouts. The schedule the app gives reminders for doesn't change very frequently, so instead of calling the web service to fetch the schedule every day/hour, the app can store the data locally. Also, when the user updates the schedule in the app, the web service is notified that the task on the schedule is complete. This data can also be stored locally, so that the next time the user opens the app and there is an internet connection, the app can update the web service.
What are the suggestions for the application design in such a case? Are there any examples?
For storing the schedule locally, use a shared object. Here is a tutorial on the subject, if you haven't used them before.
Any time the user adds/edits an item, attempt to send it to the server. Make sure to store the changed/new item in the shared object. If the send fails, have the application periodically (e.g. every minute, every 10 seconds, or every 15 minutes, depending on how you want to set it up) check for a successful connection. As soon as it has one, have the app sync with the server. Make sure the server sends back a signal confirming a successful save before the app stops trying to send the change.
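The store-first, retry-until-acknowledged pattern described above could be sketched as follows. Python is used here for brevity; in the actual AS3 app, `pending` would live in the SharedObject and `flush` would run on a Timer. The `send` callable and its failure mode are illustrative assumptions.

```python
class SyncQueue:
    def __init__(self, send):
        self.send = send   # callable: returns True only on a server ack
        self.pending = []  # persisted locally (the SharedObject in AS3)

    def add_change(self, change):
        self.pending.append(change)  # store first, so nothing is lost
        self.flush()                 # then try to send immediately

    def flush(self):
        # Called right away and again on a timer. An item is removed
        # only after the server acknowledges it, never on mere send.
        still_pending = []
        for change in self.pending:
            try:
                ok = self.send(change)
            except OSError:  # connection dropout
                ok = False
            if not ok:
                still_pending.append(change)
        self.pending = still_pending

# Usage: simulate a dropout, then a restored connection.
online = False
def send(change):
    if not online:
        raise OSError("no connection")
    return True

q = SyncQueue(send)
q.add_change({"task": 1, "done": True})  # offline: change stays queued
online = True
q.flush()                                 # periodic retry now succeeds
```

The essential invariant is that the local store, not the network call, is the source of truth for what still needs syncing.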
Does your application run all the time, or just for brief stints? It would only be able to sync when the app is open on the user's computer, of course. How frequently do you lose/regain connectivity?