I have an IoT Enterprise button that (when pressed) triggers a Lambda function. The Lambda function sends a PUT request to my Philips Hue bridge's API, which turns on (or off) my Living Room lights.
That much is 100% done. Life is good.
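For context, this is roughly what the Lambda does. A minimal sketch with a hypothetical bridge address, API username, and group ID (the real values are different):

```python
import json
import urllib.request

# Hypothetical values: substitute your own bridge address, API username, and group ID.
BRIDGE_ADDRESS = "203.0.113.10"   # WAN IP / port forwarded to the Hue bridge
API_USERNAME = "your-hue-api-username"
GROUP_ID = "1"                    # Living Room group on the bridge

def lambda_handler(event, context):
    """Turn the Living Room lights on via the Hue bridge REST API."""
    url = f"http://{BRIDGE_ADDRESS}/api/{API_USERNAME}/groups/{GROUP_ID}/action"
    body = json.dumps({"on": True}).encode("utf-8")
    req = urllib.request.Request(url, data=body, method="PUT")
    with urllib.request.urlopen(req, timeout=5) as resp:
        return {"statusCode": resp.status, "body": resp.read().decode("utf-8")}
```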
My question:
Is there a specific AWS service that is used to "send" the API request?
I'm assuming that the AWS Lambda service performs this action. But maybe not...
I need to create a firewall rule that only allows "Lambda servers" to pass through my firewall,
if the destination IP = my WAN IP
and the destination port = ##.
I found the following resource, which explains how to list all IPs owned by AWS.
https://docs.aws.amazon.com/general/latest/gr/aws-ip-ranges.html
Here is a quote from the reference guide:
service
The subset of IP address ranges. The addresses listed for API_GATEWAY are egress only.
Specify AMAZON to get all IP address ranges (meaning that every subset is also
in the AMAZON subset). However, some IP address ranges are only in the AMAZON subset
(meaning that they are not also available in another subset).
Type: String
Valid values:
AMAZON | AMAZON_CONNECT | API_GATEWAY | CLOUD9 | CLOUDFRONT |
CODEBUILD | DYNAMODB | EC2 | EC2_INSTANCE_CONNECT | GLOBALACCELERATOR |
ROUTE53 | ROUTE53_HEALTHCHECKS | S3 | WORKSPACES_GATEWAYS
As you can see, "Lambda" isn't a valid (service) string value. I suppose I could allow any IP from the "us-east-1" AWS region. However, this is still too permissive for my liking (225 subnets). By comparison, specifying "EC2" as the service narrows the list down to 82 subnets.
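For reference, here is a small sketch that filters ip-ranges.json by service and region; it is how counts like the ones above can be reproduced (the file changes over time, so the numbers will drift):

```python
import json
import urllib.request

IP_RANGES_URL = "https://ip-ranges.amazonaws.com/ip-ranges.json"

def prefixes(service, region=None):
    """Return the published IPv4 prefixes for a service, optionally limited to one region."""
    with urllib.request.urlopen(IP_RANGES_URL) as resp:
        data = json.load(resp)
    return sorted({
        entry["ip_prefix"]
        for entry in data["prefixes"]
        if entry["service"] == service and (region is None or entry["region"] == region)
    })

# Counts change as AWS updates the file.
print("AMAZON / us-east-1:", len(prefixes("AMAZON", "us-east-1")))
print("EC2 / us-east-1:", len(prefixes("EC2", "us-east-1")))
```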
Thanks (in advance) for your helpful insight!
If you want to limit this to a specific set of IPs (rather than the published AWS public ranges), you will need to run your Lambda inside your VPC in a private subnet and then route its traffic through a NAT Gateway with an EIP.
See more: https://aws.amazon.com/premiumsupport/knowledge-center/internet-access-lambda-function/
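To illustrate: once the Lambda runs in a private subnet, all of its outbound traffic leaves through the NAT Gateway's Elastic IP, so that single address is the only one you need to allow through your firewall. A rough boto3 sketch of the EIP/NAT Gateway part (the subnet and route table IDs are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# Placeholders: substitute your own public subnet and private route table IDs.
PUBLIC_SUBNET_ID = "subnet-0123456789abcdef0"
PRIVATE_ROUTE_TABLE_ID = "rtb-0123456789abcdef0"

# 1. Allocate an Elastic IP. This is the single address to allow on the firewall.
eip = ec2.allocate_address(Domain="vpc")

# 2. Create a NAT Gateway in a public subnet using that EIP, and wait until it is available.
nat = ec2.create_nat_gateway(SubnetId=PUBLIC_SUBNET_ID, AllocationId=eip["AllocationId"])
nat_id = nat["NatGateway"]["NatGatewayId"]
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])

# 3. Send the private subnet's internet-bound traffic through the NAT Gateway.
#    (The Lambda must be configured to run in that private subnet.)
ec2.create_route(
    RouteTableId=PRIVATE_ROUTE_TABLE_ID,
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat_id,
)

print("Allow this IP through your firewall:", eip["PublicIp"])
```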
Related
My service is calling a 3rd-party service (the Binance API), and there is a geolocation restriction on US IP addresses (they banned all US IPs). My deployment's region is set to Tokyo, but the 3rd-party service still sees my request as coming from the US. Is there any solution to give functions a local IP address in the region where they are located?
You can meet this requirement by creating a static external IP address in the same region.
To do this, go through this document, which shows how to assign a static IP address to a Cloud Function; it clearly describes what you need:
In some cases, you might want traffic originating from your function to be associated with a static IP address. For example, this is useful if you are calling an external service that only allows requests from explicitly specified IP addresses.
For more information, you can go through this thread.
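Once the connector and static address are in place as the document describes, a quick way to confirm which IP the third party actually sees is to have the function call an echo service. A minimal sketch of a hypothetical HTTP-triggered function, assuming the requests library is listed in requirements.txt:

```python
import requests

def check_egress_ip(request):
    """HTTP-triggered Cloud Function: report the public IP its outbound traffic uses.

    With a Serverless VPC connector + Cloud NAT configured, this should be the
    reserved static address; without them it will be an arbitrary Google IP.
    """
    ip = requests.get("https://api.ipify.org", timeout=10).text
    return f"Outbound requests leave from: {ip}\n"
```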
Just a simple question: is it possible to route a single IP to the default route when that IP is in a range that is already routed?
I have to route a whole range to a certain IP: 10.0.0.0/8.
This range goes from 10.0.0.1 to 10.255.255.254.
The thing is, we are connected through a VPN to a server with the IP 10.173.90.171.
Can I make my switch route every IP in the range 10.0.0.0/8 except the single IP 10.173.90.171, and make that one go to the default route?
Many Thanks,
It is possible. You just need to add this specific prefix and point it to the desired gateway. In basic IP networks, the routing decision is always made on the most specific matching entry (longest prefix match), here 10.173.90.171/32. This way you can have 10.0.0.0/8 routed via 192.168.0.1 and then 10.173.90.171/32 routed via 172.16.0.1.
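You can sanity-check this longest-prefix-match behaviour with Python's ipaddress module, using the example gateways from above:

```python
import ipaddress

# Example routing table: more-specific prefixes win over less-specific ones.
routes = {
    ipaddress.ip_network("0.0.0.0/0"): "default gateway",
    ipaddress.ip_network("10.0.0.0/8"): "192.168.0.1",
    ipaddress.ip_network("10.173.90.171/32"): "172.16.0.1",
}

def next_hop(destination):
    """Pick the matching route with the longest prefix, as a router/switch would."""
    dest = ipaddress.ip_address(destination)
    matches = [net for net in routes if dest in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return routes[best]

print(next_hop("10.173.90.171"))  # 172.16.0.1 (the /32 wins over the /8)
print(next_hop("10.20.30.40"))    # 192.168.0.1 (covered only by the /8)
print(next_hop("8.8.8.8"))        # default gateway
```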
We have an existing solution where an Event Hub ingests real-time event data from a source. We are using a SAS key for authentication and authorization. For additional security, we have whitelisted the IPs of the source on the Event Hub. We also have a Databricks instance within a VNET reading from this Event Hub; the VNET has been whitelisted on the Event Hub as well.
We now have a new requirement to read off the Event Hub using Azure Functions. The problem is that, since we have enabled IP whitelisting on the Event Hub, we need to whitelist the IPs the Functions connect from as well, and we can't figure out which IPs those are.
The documentation says these IPs remain mostly the same but can change for the Consumption plan, which is what we intend to use.
Does that mean the only other solution is to whitelist the entire Azure region where our functions are hosted, using the list in the Azure service IPs link?
Any other suggestions on what we can try?
Does that mean the only other solution is that we need to whitelist the entire Azure region where our functions are hosted? Any other suggestions on what we can try?
Yes, if you don't know the outbound IP addresses of the Azure Function app, add the IP ranges for its region to the whitelist. You can get those here.
A more realistic option: you can put your Function app in an Azure VNET and allow that VNET to access the Event Hub. However, this requires an App Service plan or a Premium plan Function.
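If it helps to see why region-level whitelisting is the fallback, you can enumerate the Function app's current outbound addresses, but on the Consumption plan that set is not guaranteed to stay fixed. A rough sketch using the azure-mgmt-web SDK (the subscription, resource group, and app name are placeholders):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.web import WebSiteManagementClient

# Placeholders: substitute your own subscription, resource group, and app name.
SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"
RESOURCE_GROUP = "my-resource-group"
FUNCTION_APP_NAME = "my-function-app"

client = WebSiteManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)
site = client.web_apps.get(RESOURCE_GROUP, FUNCTION_APP_NAME)

# Current outbound IPs, plus every IP the app could use. On the Consumption
# plan these can change, which is exactly the whitelisting problem above.
print("outbound:", site.outbound_ip_addresses)
print("possible outbound:", site.possible_outbound_ip_addresses)
```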
I want to have an IP address which, when accessed, will load all the data sent from a GPS device. The GPS device is configured to send data to an IP address and port. I need to run a server-side script to read the data from that port and display it at the IP address. Does GCP provide a static IP address that can be purchased, and can I use Google Cloud Functions or any other GCP tool to read data from that specific port and display it at that IP address? If yes, how could I go about doing this? And is there any other way to implement this using some other platform?
Does GCP provide a static IP address to be purchased
Yes, you can create a static public IP address in Google Cloud.
Public IP addresses are free when attached to running instances/services.
Reserving a Static External IP Address
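As a rough sketch, the reservation can also be done with the google-cloud-compute client (the project, region, and address name below are placeholders; the linked page shows the Console and gcloud equivalents):

```python
from google.cloud import compute_v1

# Placeholders: substitute your own project, region, and address name.
PROJECT_ID = "my-project"
REGION = "us-central1"
ADDRESS_NAME = "gps-listener-ip"

client = compute_v1.AddressesClient()

# Reserve a regional static external address, then wait for the operation to finish.
operation = client.insert(
    project=PROJECT_ID,
    region=REGION,
    address_resource=compute_v1.Address(name=ADDRESS_NAME),
)
operation.result()

reserved = client.get(project=PROJECT_ID, region=REGION, address=ADDRESS_NAME)
print("Reserved static external IP:", reserved.address)
```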
can I use Google Cloud functions or any other GCP tool to read data
from that specific port and display it on that IP address?
You have not provided enough information to answer this part of your question.
Do not mix multiple topics into one question. Create separate questions. You will get more/better answers.
I have a huge problem with the management of an instance group on Google Cloud Compute Engine.
I assigned a static IP, XX.XXX.XX.XX, to the 1st instance of the group; this IP is connected to a domain.
If, during scaling, the first machine that was created gets deleted, none of the new instances will take that IP.
This is a problem because my domain goes down.
I thought about managing this by creating another, separate instance that pings the domain... and, if the domain is down, changes (with gcloud commands) the IP of one of the new instances.
I want to ask: has anyone found a trick to solve this issue?
Thanks, guys
EDIT: OK, the LB is working, but I need to live stream through that LB, because the LB manages the instance group that handles the live streaming.
Now, if I set it up, I can't go live (from any software, such as OBS and similar) :/
So, a little recap:
I have my domain example.com
I have my instance group istance_group_example
Load balancer http_loadb
I set my IP (static, not ephemeral) on the frontend of http_loadb, then I go to Cloudflare and set the static IP there.
If I go to my example.com, I can see my custom page.
Now the problem is, I can access the server, but if I try to create a live stream with OBS (for example), OBS just loads the connection for a while and then stops.
If I point my DNS directly to the IP of an instance inside the instance group (bypassing the load balancer), everything works.
I think what you are actually looking for is an HTTP load balancer. The load balancer should take the static IP your domain is pointing to. From there it can forward traffic to any instances that are healthy at the moment. (The other thing you are looking for is health checks, which more or less do what you set up with that extra instance: they ping (for TCP) or execute GET/HEAD requests regularly, and if any instance is unresponsive it gets taken care of and receives no traffic until it recovers.)
So, the base architecture of your solution would be like this:
One managed instance group set to autoscale (if you need it) and autohealing (pretty much mandatory in this case, so any dead instance gets replaced by a healthy one).
A health check set up on the instance group that will keep polling the instances on the service port to confirm whether they are UP or not. This is important to ensure that the instances are checked consistently and terminated/recreated based on a consistent metric; a minimal handler sketch appears after these steps. The load balancer will use its own health check too.
A global HTTP load balancer (Network services -> Load balancing -> HTTP(S) Load Balancer) pointing to a backend service that you will create. The backend service will point to your instance group and to the relevant ports for your service. Assign another health check here. This is useful so the LB notices as soon as one instance fails and can take it out of the pool of destinations. If you don't specifically need it, don't select session affinity.
For the LB frontend, select HTTP(S) port(s) or anything you need, then under IP address select "create new static address" and name it. This IP address is where the DNS records for your domain should point.
After the LB is ready, go to your nameservers (Cloud DNS if you are managing your domain's DNS from there; if not, whatever solution your registrar provides) and point the A record to the IP you assigned to the load balancer.
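To illustrate the health-check piece mentioned above, a minimal sketch of an endpoint each instance could expose on the service port (the port and path are placeholders; anything other than a quick 200 response makes the LB and autohealer treat the instance as unhealthy):

```python
# Minimal health-check endpoint using only the standard library.
# Run it on each instance and point the LB / instance-group health checks
# at http://<instance>:8080/healthz (port and path are placeholders).
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/healthz":
            body = b"OK"
            self.send_response(200)  # healthy: LB keeps sending traffic here
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), HealthHandler).serve_forever()
```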