API Gateway URLs for AWS Fargate containers

We are deploying our services in a containerized environment using AWS Fargate. Our single task definition contains all of the service definitions and deploys successfully.
We need URLs for the deployed services in these containers for further processing. Is there a way to get API Gateway URLs for these containers?

You can create URLs for your Fargate services by creating Application Load Balancers in front of them.
https://itnext.io/run-your-containers-on-aws-fargate-c2d4f6a47fda
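
As a rough sketch (resource names, ports, and the VPC reference are placeholders), the CloudFormation fragment below shows the two key pieces: a target group with TargetType: ip, which Fargate's awsvpc networking requires, and an ECS service that registers its container with that target group. The ALB's DNS name then serves as the stable URL for the service.

    # Hypothetical CloudFormation fragment; the listener, security groups,
    # and NetworkConfiguration are omitted for brevity.
    ServiceTargetGroup:
      Type: AWS::ElasticLoadBalancingV2::TargetGroup
      Properties:
        TargetType: ip             # Fargate tasks are registered by IP, not instance
        Port: 8080
        Protocol: HTTP
        VpcId: !Ref MyVpc
    FargateService:
      Type: AWS::ECS::Service
      Properties:
        LaunchType: FARGATE
        Cluster: !Ref MyCluster
        TaskDefinition: !Ref MyTaskDefinition
        LoadBalancers:
          - ContainerName: my-service   # must match a container in the task definition
            ContainerPort: 8080
            TargetGroupArn: !Ref ServiceTargetGroup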

Related

AGIC with App Gateway together with Azure Load Balancer

I want to understand the concept and the traffic flow when using AGIC. I'm using Azure advanced networking in AKS. What I see is that Azure automatically creates an Azure Load Balancer once the cluster is created. So now I have an App Gateway working together with AGIC. What is the role of the Azure Load Balancer in this case, and how does traffic flow for ingress and egress of the cluster? Should the load balancer also have a public IP? Any explanation or resources would be very helpful.

How to make Spring Boot Admin Server's Spring Cloud Kubernetes service discovery use HTTP instead of HTTPS

I have Spring Boot Admin Server deployed in OpenShift with the help of the Fabric8 Maven plugin,
and I also have several applications deployed in OpenShift.
Spring Boot Admin Server (SBAS) uses Spring Cloud Kubernetes discovery to discover services (applications) registered/running in the namespace/cluster, i.e. automatic client discovery.
SBAS discovers them as expected, which is fine, but some applications registered in SBAS use HTTP and some use HTTPS for the health check.
I have no idea why SBAS uses HTTP for some apps and HTTPS for others to check health.
Since SBAS uses HTTPS and port 8443, it shows those applications as offline, but the applications are exposed on HTTP port 8080 only.
I have compared the applications' code and OpenShift configurations, but I don't see any difference, and I don't know how to fix this issue.
I am new to all of the above concepts; could someone help me?
I didn't find a solution for this issue, but I found a workaround that helped me.
Since I am using only one port (8080), I deleted the other ports, such as 8443 and 8778, via the OpenShift yml, as sketched below. But if you have to expose more ports, this won't help.
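
The original yml isn't preserved here, but a minimal sketch of the idea (the app name is a placeholder) is a container spec trimmed to the single port:

    # Hypothetical OpenShift deployment fragment: only 8080 is listed;
    # the 8443 and 8778 port entries have been deleted.
    spec:
      template:
        spec:
          containers:
            - name: my-app
              ports:
                - containerPort: 8080
                  protocol: TCP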

Connect to an OpenShift app from lwIP embedded hardware

I have deployed a simple REST API application on OpenShift (Starter plan).
I also have STM32-based hardware running the lwIP (TCP/IP) stack, and my goal is to connect it to the above OpenShift app.
lwIP uses a function (tcp_connect) which takes the external IP of the app.
However, I am struggling to understand and find the external IP of the OpenShift service running in a pod.
Any suggestions?

WSO2 API Manager APIs not displaying properly

I'm deploying WSO2 API Manager 2.6.0 with an external MySQL database, and I'm trying to have my APIs persist when I change my deployment.
Currently I have two deployments using the same external database, one local and the other hosted on an AWS EKS cluster. When I create an API on my local deployment, I can only view it on my AWS deployment if I'm logged in to the Store, and vice versa for my localhost deployment.
The expected and desired behaviour is that all APIs created on both deployments are displayed on the Store whether or not I'm logged in. Are there any configurations I can change to make this happen?
Here is the doc I used to configure the external database: https://docs.wso2.com/display/AM260/Installing+and+Configuring+the+Databases

How to access services in K8s from the internal non-K8s network?

Question: How can I provide reliable access from (non-K8s) services running in a GCE network to other services running inside Kubernetes?
Background: We are running a hosted K8s setup in the Google Cloud Platform. Most services are 12factor apps and run just fine within K8s. Some backing stores (databases) are run outside of K8s. Accessing them is easy by using headless services with manually defined endpoints to fixed internal IPs. Those services usually do not need to "talk back" to the services in K8s.
But some services running in the internal GCE network (but outside of K8s) need to access services running within K8s. We can expose the K8s services using spec.type: NodePort and talk to this port on any of the K8s nodes' IPs. But how can we automatically find the right NodePort and a valid worker node IP? Or maybe there is an even better way to solve this issue.
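
For reference, a NodePort service manifest looks roughly like this (names and port numbers are illustrative); if nodePort is left out, Kubernetes picks one from the 30000-32767 range, which is exactly what makes "finding the right port" non-trivial:

    # Illustrative NodePort service: reachable on <any-node-IP>:30080
    apiVersion: v1
    kind: Service
    metadata:
      name: my-backend
    spec:
      type: NodePort
      selector:
        app: my-backend
      ports:
        - port: 8080        # cluster-internal service port
          targetPort: 8080  # container port
          nodePort: 30080   # pinned node port; must be within 30000-32767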
This setup is probably not a typical use-case for a K8s deployment, but we'd like to go this way until PetSets and Persistent Storage in K8s have matured enough.
As we are talking about internal services, I'd like to avoid using an external load balancer in this case.
You can make cluster service IPs meaningful outside of the cluster (but inside the private network) either by creating a "bastion route" or by running kube-proxy on the machine you are connecting from (see this answer).
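
On GCE, a "bastion route" is essentially a static route that sends the cluster's service CIDR to one of the nodes, where kube-proxy then forwards the traffic. A hypothetical example (the CIDR, instance name, and zone are placeholders for your cluster's values):

    # Route the Kubernetes service CIDR via a cluster node
    gcloud compute routes create k8s-service-route \
      --network=default \
      --destination-range=10.96.0.0/12 \
      --next-hop-instance=k8s-worker-1 \
      --next-hop-instance-zone=us-central1-a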
I think you could also point your resolv.conf at the cluster's DNS service to be able to resolve service DNS names. This could get tricky if you have multiple clusters though.
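
For example (the nameserver IP is a placeholder for the ClusterIP of the cluster's DNS service, and the search domains assume the default cluster.local suffix):

    # Hypothetical resolv.conf fragment on the external machine
    nameserver 10.96.0.10   # ClusterIP of kube-dns/CoreDNS
    search default.svc.cluster.local svc.cluster.local cluster.local

Note that this only solves name resolution; the resolved ClusterIPs still have to be routable from that machine, e.g. via the bastion route or kube-proxy approach above.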
One possible way is to use an Ingress controller. Ingress controllers are designed to provide access from outside a Kubernetes cluster to services running inside it. An Ingress controller runs as a pod within the cluster and routes requests from outside to the correct services based on the configured rules. This can give non-Kubernetes services in a GCE network a reliable way to reach services running in Kubernetes.
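
A minimal Ingress resource (the host and service names are illustrative) might look like this; to keep traffic inside the private network, the ingress controller itself can be exposed on an internal load balancer or node port:

    # Illustrative Ingress: routes requests for backend.internal.example.com
    # to the my-backend service inside the cluster
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: my-backend-ingress
    spec:
      rules:
        - host: backend.internal.example.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: my-backend
                    port:
                      number: 8080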