I created an AWS account for my AWS tutorials and selected the free tier. I deployed 2 JAR files to test. Nothing big, just a to-do app, minimal code just to check that it works for my tutorials. Two days later I received an alert which I had set up to trigger at 1% of $1. Here's the breakdown:
I am aware that I have to stop/terminate the application when I'm done with the tutorial for the day, but what I'm curious about is why the request count is so high after just 2 days.
I'm not an advanced programmer.
I have a web app in Google Apps Script where users punch in and punch out their working hours. I have 5 setInterval functions that update their hours every minute.
The 1st screenshot is from the dev version, where I'm the only user, but the 2nd screenshot is from the prod version. (I've hidden the function names.)
As you can see in the 1st screenshot, the functions run exactly every minute. But in the 2nd screenshot, since the app is shared with several users, they are triggered many times within a minute. This obviously increases the load on the app, and it takes time to process each request since the functions are already running several times a minute.
My questions:
Is there a solution where I could limit the executions to run only once per minute, no matter how many users are actively using the app?
Will deploying the functions as a library and calling them from the web app reduce the number of executions?
(Screenshot 1: dev version. Screenshot 2: prod version.)
I want to stream real-time data from the Twitter API to Cloud Storage and BigQuery. I have to ingest and transform the data using Cloud Functions, but the problem is I have no idea how to pull data from the Twitter API and ingest it into the cloud.
I know I also have to create a scheduler and a Pub/Sub topic to trigger Cloud Functions. I have created a Twitter developer account. The main problem is actually streaming the data into Cloud Storage.
I'm really new to GCP and streaming data, so it would be nice to see a clear explanation of this. Thank you very much :)
First you have to design your solution. What do you want to achieve: streaming or microbatches?
If streaming, you have to use Twitter's streaming API. In short, you initiate a connection and stay up and running (and connected), receiving the data as it arrives.
If batches, you have to query an API and download a set of messages, in a query-response mode.
That being said, how do you implement it on Google Cloud? Streaming is problematic because you have to stay connected all the time, and with serverless products you have timeout concerns (9 minutes for Cloud Functions V1, 60 minutes for Cloud Run and Cloud Functions V2).
However, you can imagine invoking your serverless product regularly: stay connected for a while (let's say 1 hour) and schedule a trigger every hour.
Or use a VM to do that (or a pod on a Kubernetes cluster).
You can also consider microbatches, where you invoke your Cloud Function every minute and fetch all the messages from the past minute, as in the sketch below.
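For the microbatch route, here is a minimal sketch of a Pub/Sub-triggered Cloud Function in Go (one of the supported runtimes), invoked every minute by a Cloud Scheduler job that publishes to the topic. The bucket name, the search query, and the TWITTER_BEARER_TOKEN variable are illustrative placeholders, and the raw responses are simply dumped to Cloud Storage; a separate load or transform step would feed BigQuery.

```go
// A minimal microbatch sketch, assuming a Pub/Sub-triggered Cloud Function
// scheduled every minute. Bucket name, query, and env var are placeholders.
package twitterbatch

import (
	"context"
	"fmt"
	"io"
	"net/http"
	"os"
	"time"

	"cloud.google.com/go/storage"
)

// PubSubMessage is the payload of a Pub/Sub event.
type PubSubMessage struct {
	Data []byte `json:"data"`
}

// PullTweets fetches the last minute of tweets and writes the raw JSON
// response to a Cloud Storage object named after the batch timestamp.
func PullTweets(ctx context.Context, m PubSubMessage) error {
	// Twitter API v2 recent search, restricted to the past minute.
	start := time.Now().Add(-1 * time.Minute).UTC().Format(time.RFC3339)
	url := "https://api.twitter.com/2/tweets/search/recent?query=cloud&start_time=" + start

	req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
	if err != nil {
		return err
	}
	req.Header.Set("Authorization", "Bearer "+os.Getenv("TWITTER_BEARER_TOKEN"))

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("twitter API returned %s", resp.Status)
	}

	client, err := storage.NewClient(ctx)
	if err != nil {
		return err
	}
	defer client.Close()

	// One object per batch; BigQuery can load these files later,
	// or a second function can transform and stream them in.
	obj := client.Bucket("my-tweet-bucket").Object("tweets/" + start + ".json")
	w := obj.NewWriter(ctx)
	if _, err := io.Copy(w, resp.Body); err != nil {
		return err
	}
	return w.Close()
}
```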
In the end, it all depends on your use case. How close to real time do you need to be? Which product do you want to use?
I need advice from anyone who uses AWS EC2 instances to host their projects.
Currently I have a PHP project (for the backend API) and a React.js project (for the frontend). When testing locally, the API response time is 3 seconds (I'm still optimizing my backend code to reduce it to 2 seconds), but my main concern is that when deployed to a staging machine in AWS, using a t3.medium for the backend and a t2.medium for the frontend, the response time is at least 19 seconds. Here are my goals:
1. For staging, at most 5 seconds of response time, since this environment is mainly used for testing purposes.
2. For production, I want the same response time as on my local machine. My local machine has an i7 and 16 GB of RAM (with, of course, too many other applications running and lots of Chrome tabs open). The initial target for production is 10-15 users, but that will grow once our app is well tested and stable (I mean the data should be accurate).
At first my plan was to test all the available EC2 instance types and see which of them meets my requirements, particularly the response time, but a friend told me that would cost a lot, since AWS charges for the resources used every time an EC2 instance is provisioned. Also, what is the best approach, given that my backend API runs a lot of scripts? The scripts call the Amazon Selling Partner API and Advertising API, which are themselves currently very slow; some of their endpoints have response times of at least 30 seconds, which is why I decided to run them in the background through cron jobs. These scripts also perform database writes after a successful response from the Amazon API.
Thank you
I am trying to build a way to log all the times my friends and I are on our video game server. I found an API which returns a JSON file of online players given a server's IP address.
I plan to fetch a new JSON file every 30 seconds and then log player gaming sessions by figuring out when players get on and when they are no longer on.
The problem is that this is my first time using a database like this for my websites. I want to use (and will use) Golang to retrieve the JSON file and update my MySQL database of player logs.
PROBLEM: I have no freaking clue how to make my Golang program run every 30 seconds to update my database. I can easily get a simple program to grab data and update a local database, but I'm lost on how to run this on my website and have it run every 30-ish seconds, 24/7. I'm used to CRUD with simple HTML forms and other things related to user input, but I've never thought about changing my database separately from website interactions.
Is there a standard solution to my problem? Am I thinking about this all wrong? Do I make sense? Does God really exist!!?
I’m using BlueHost
I insist on Golang for the experience
First time using Stack Overflow; I don't know if my question was too long.
You need the UNIX/Linux scheduling facility known as cron to do this. cron lets you set a program to run at a specific interval on your machine. (Note that standard cron's finest granularity is one minute, so a strict 30-second interval needs either two staggered jobs or a long-running process.)
Bluehost's cron support lets you run PHP scripts. I don't believe they support running other kinds of programs.
If you have some other always-on machine, you can run your Golang program there (see the sketch below). But to do that, you will have to configure Bluehost to allow an external connection to your MySQL server so that the program can connect. Ask their customer support people about that.
Pro tip: every 30 seconds may be too high an update frequency to use on a shared service like Bluehost. That kind of frequency is tricky to manage even if you completely control the server.
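To make the always-on-machine option concrete, here is a minimal Go sketch that polls every 30 seconds with a ticker and records sightings in MySQL. The player-list endpoint, the DSN, and the `sightings` table are hypothetical placeholders, and Bluehost would have to allow the remote MySQL connection mentioned above.

```go
// A minimal polling sketch, assuming a hypothetical player-list endpoint
// and a "sightings" table; DSN, URL, and schema are placeholders.
package main

import (
	"database/sql"
	"encoding/json"
	"log"
	"net/http"
	"time"

	_ "github.com/go-sql-driver/mysql"
)

type playerList struct {
	Players []string `json:"players"`
}

func main() {
	// Bluehost must allow remote MySQL connections for this DSN to work.
	db, err := sql.Open("mysql", "user:password@tcp(example.bluehost.com:3306)/gamelogs")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	ticker := time.NewTicker(30 * time.Second)
	defer ticker.Stop()

	for range ticker.C {
		if err := poll(db); err != nil {
			log.Println("poll failed:", err) // keep going; one bad poll shouldn't kill the loop
		}
	}
}

func poll(db *sql.DB) error {
	resp, err := http.Get("https://api.example.com/server/1.2.3.4/players")
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	var list playerList
	if err := json.NewDecoder(resp.Body).Decode(&list); err != nil {
		return err
	}

	// Record one row per sighting; session start/end can be derived later
	// by looking for gaps between consecutive sightings of the same player.
	for _, name := range list.Players {
		if _, err := db.Exec(
			"INSERT INTO sightings (player, seen_at) VALUES (?, NOW())", name,
		); err != nil {
			return err
		}
	}
	return nil
}
```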
I have a Flex/ActionScript 3 schedule-reminder app which talks to a web service over Wi-Fi. The problem is that the Wi-Fi connection is unreliable and there are frequent dropouts. The schedule the app reminds about doesn't change very frequently, so instead of calling the web service every day/hour to fetch the schedule, the app can store the data locally. Also, when the user updates the schedule in the app, the web service is updated to mark the task on the schedule as complete. This data can also be stored locally, so that the next time the user uses the app and there is an internet connection, the app can update the web service.
What would you suggest for the application design in such a case? Are there any examples?
For storing the schedule locally, use a shared object. Here is a tutorial on the subject, if you haven't used them before.
Any time the user adds/edits an item, attempt to send it to the server, and make sure to store the changed/new item in the shared object. If the send fails, have the application periodically (e.g. every 10 seconds, every minute, or every 15 minutes, depending on how you want to set it up) check for a successful connection. As soon as it has one, have the app sync with the server. Make sure the server sends back an acknowledgement of a successful save before the app stops retrying the changes.
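Flex specifics aside, the store-locally, retry-until-acknowledged pattern looks roughly like this; the sketch is in Go purely for illustration (in your app the local store would be the shared object rather than a file), and the endpoint, item shape, and file name are hypothetical.

```go
// A language-agnostic sketch of the store-locally, retry-until-acknowledged
// pattern described above. Endpoint, item shape, and file are placeholders.
package main

import (
	"bytes"
	"encoding/json"
	"log"
	"net/http"
	"os"
	"time"
)

type item struct {
	ID   string `json:"id"`
	Done bool   `json:"done"`
}

const queueFile = "pending.json" // local store standing in for the shared object

func loadQueue() []item {
	data, err := os.ReadFile(queueFile)
	if err != nil {
		return nil // no pending changes yet
	}
	var q []item
	_ = json.Unmarshal(data, &q)
	return q
}

func saveQueue(q []item) {
	data, _ := json.Marshal(q)
	_ = os.WriteFile(queueFile, data, 0o644)
}

// send returns true only when the server acknowledges the save (HTTP 200),
// so an item stays queued across dropouts until that happens.
func send(it item) bool {
	body, _ := json.Marshal(it)
	resp, err := http.Post("https://example.com/api/schedule", "application/json", bytes.NewReader(body))
	if err != nil {
		return false // offline or dropped: keep the item queued
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK
}

func main() {
	ticker := time.NewTicker(time.Minute) // the periodic connection check
	defer ticker.Stop()

	for range ticker.C {
		var remaining []item
		for _, it := range loadQueue() {
			if !send(it) {
				remaining = append(remaining, it)
			}
		}
		saveQueue(remaining)
		log.Printf("synced, %d item(s) still pending", len(remaining))
	}
}
```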
Does your application run all the time, or just for brief stints? It would only be able to sync when the app is open on the user's computer, of course. How frequently do you lose/regain connectivity?