Get logs from AWS EKS native ingress - kubernetes-ingress

How can I get request/response logs from the Ingress that I am running in AWS EKS?
The Ingress type is:
"kind": "Ingress",
"apiVersion": "networking.k8s.io/v1",

Related

k8s nginx Ingress takes my node IP as Address

I have a 3-node k8s cluster in my virtual environment, which is VMware Fusion.
When I try to create a basic Ingress, it takes the IP of the one node where the nginx controller is running.
But port 80 is not open on all nodes, so it is not working:
curl: (7) Failed to connect to 172.16.242.133 port 80: Connection refused
What am I missing?
I installed the Nginx Ingress Controller.
I installed MetalLB and configured it. It works if I create a service with type: LoadBalancer; it gets an ExternalIP and I can access it.
I deployed a basic app for testing.
I created a service for the app. I can access it via NodePort or ClusterIP; I tried both.
I created a basic Ingress to manage hosts and routing, but this is the step where I am stuck.
My questions:
1) Normally, which IP should the Ingress take as its Address? One of my nodes, or an external DHCP IP?
2) When I create a service with type: LoadBalancer it gets an externalIP. I can point DNS records at this IP and clients can access it. What is wrong with that?
The Ingress controller Service supports two service types: NodePort and LoadBalancer.
When using the NodePort service type you should use the allocated node port number instead of the default port 80. The explanation for this behavior is in the nginx ingress documentation:
However, due to the container namespace isolation, a client located
outside the cluster network (e.g. on the public internet) is not able
to access Ingress hosts directly on ports 80 and 443. Instead, the
external client must append the NodePort allocated to the
ingress-nginx Service to HTTP requests.
So your curl should look like this:
curl 172.16.242.133:<node_port_number>
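To find the allocated node port, you can query the controller Service; a minimal sketch, assuming the default Service name and namespace (ingress-nginx-controller in ingress-nginx), which may differ in your install:
kubectl get svc ingress-nginx-controller -n ingress-nginx \
  -o jsonpath='{.spec.ports[?(@.port==80)].nodePort}'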
When you use MetalLB with the LoadBalancer service type, it takes externalIPs from the configuration you specified when installing MetalLB in the cluster.
More information about nginx ingress controller cooperation with MetalLB is available in the nginx documentation:
MetalLB requires a pool of IP addresses in order to be able to take
ownership of the ingress-nginx Service. This pool can be defined in a
ConfigMap named config located in the same namespace as the MetalLB
controller. This pool of IPs must be dedicated to MetalLB's use, you
can't reuse the Kubernetes node IPs or IPs handed out by a DHCP
server.
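As a rough sketch of such a pool, assuming the legacy ConfigMap-based MetalLB configuration; the metallb-system namespace and the address range are placeholders to adapt to your network:
kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.16.242.200-172.16.242.210   # placeholder range outside DHCP and node IPs
EOF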
My problem was:
I thought the Ingress takes the IP and we point DNS records at that IP, but that is not the case. Why the Ingress object has Address and Port fields I do not know; just for information, I guess, but it is confusing for newbies.
Clients access the Ingress Controller, not the Ingress.
Actually, the Ingress Controller's Service manages the externalIP or NodePort, so we have to configure this.
In my case that is nginx:
kubectl edit service/ingress-nginx-controller -n ingress-nginx
You can change the type to LoadBalancer and you will get an externalIP after configuring MetalLB. Then define your Ingress objects, create DNS records, and you are ready.
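If you prefer not to edit interactively, the same change can be applied with a patch; this sketch assumes the default ingress-nginx-controller Service name and namespace:
kubectl patch service ingress-nginx-controller -n ingress-nginx \
  -p '{"spec": {"type": "LoadBalancer"}}'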

Cannot connect to Google MySQL from deployed Kubernetes NodeJS app

I have been trying for the past couple of days to get my deployed NodeJS Kubernetes LoadBalancer app to connect to a Google Cloud MySQL instance. Both the SQL database and the Kubernetes deployment exist in the same Google project. The ORM of choice for this project is Sequelize. Here is a snippet of my connection configuration:
"deployConfigs": {
"username": DB_USERNAME,
"password": DB_PASSWORD,
"database": DB_DATABASE,
"host": DB_HOST,
"port": 3306,
"dialect": "mysql",
"socketPath": "/cloudsql/INSTANCE_NAME"
}
When I run the application locally with the same configurations, I am able to query from the database. I can also hit the NodeJS LoadBalancer URL to get a valid API response as long as the API does not hit the database.
I have whitelisted my IP as well as the IP for the NodeJS LoadBalancer API but I still get the following response:
{
  "name": "SequelizeConnectionError",
  "parent": {
    "errorno": "ETIMEDOUT",
    "code": "ETIMEDOUT",
    "syscall": "connect",
    "fatal": true
  },
  "original": {
    "errorno": "ETIMEDOUT",
    "code": "ETIMEDOUT",
    "syscall": "connect",
    "fatal": true
  }
}
I followed the instructions for creating a Proxy through a Kubernetes deployment but I don't think that will necessarily solve my issue because I simply want to connect from my Kubernetes app to a persistent database.
Again, I have been able to successfully hit the remote DB when running the container locally and when running the node app locally. I am really unsure as to why this will not connect when deployed.
Thanks!
Kubernetes does a lot of source NATing, so I had to add a rule on my network to allow outgoing traffic everywhere from my cluster in GCE (see the sketch below).
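The original rule is not preserved in the answer; purely as a hypothetical sketch of a very permissive egress-allow rule (rule name, network, and ranges are placeholders, for testing only):
gcloud compute firewall-rules create allow-all-egress \
  --network=default --direction=EGRESS --action=ALLOW \
  --rules=all --destination-ranges=0.0.0.0/0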
This is very permissive so you might just want to add it for testing purposes initially. You can also check connectivity to MySQL by shelling into a running pod:
$ kubectl exec -it <running-pod> sh
/home/user # telnet $DB_HOST 3306
It sounds like you might be attempting to connect to your Cloud SQL instance via its public IP? If that's the case, then be careful as that is not supported. Take a look at this documentation page to figure out what's the best way to go about it.
You mentioned you're already using a proxy, but didn't mention which one. If it's the Cloud SQL Proxy, then it should allow you to perform any kind of operation you want against your database, all it does is establish a connection between a client (i.e. a pod) and the Cloud SQL instance. This Proxy should work without any issues.
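For reference, a minimal sketch of how the first-generation Cloud SQL Proxy is typically started, whether locally or as a sidecar; the instance connection name and credential path are placeholders:
./cloud_sql_proxy -instances=PROJECT:REGION:INSTANCE=tcp:3306 \
  -credential_file=/secrets/cloudsql/credentials.json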
Don't forget to setup the appropriate grants and all of that stuff on the Cloud SQL side of things.
I have figured it out.
When creating a MySQL instance you need to do two things (go to the section titled "Authorized Networks"):
Add a network and give it the name "Application" and the value "127.0.0.1". The environment variable DB_INSTANCE_HOST in your Kubernetes secret should also be set to "127.0.0.1" to prevent ETIMEDOUT or ECONNREFUSED errors when connecting to MySQL from Node.js (see the sketch after these steps).
Create another network and give it the name "Local computer". Search for "my IP address" in a new tab in your browser, then enter that IP in the Local computer value input field. (The goal of step two is to connect your instance to MySQL Workbench running on your computer so that you can start building and managing databases.)
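A sketch of supplying that variable through a Kubernetes secret; the secret name db-config is hypothetical and must be referenced from your Deployment's env section:
kubectl create secret generic db-config \
  --from-literal=DB_INSTANCE_HOST=127.0.0.1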
That's it!
If you have any questions, write back and we can speak on Facebook or Instagram. I can then help you through the deployment process and address any issues you may have.

Data retrieval from Orion Context Broker subscription fails

With a successful subscription in Orion Context Broker, a listening accumulator server fails to receive any data under any circumstances I can find to test.
We are using an Ubuntu virtual machine that has a nested virtual machine with FIWARE Orion in it. Having subscribed to the Orion Context Broker, confirmed that the subscription was successful by checking the database, and confirmed that data is successfully updated, we find the accumulator server still fails to respond. Unable to tell whether this is a failure to send from Orion or to receive by the accumulator, and unsure how to check and continue, we humbly beg the wisdom of the Stack Overflow community.
We have run the accumulator server both on a virtual machine on the same PC and on another PC with non-VM Ubuntu. The subscription payload we send to the Orion VM is presented below:
{
  "duration": "P1M",
  "entities": [
    {
      "type": "Thing",
      "id": "Sensor_GV_01",
      "isPattern": "false"
    }
  ],
  "throttling": "PT1S",
  "reference": "http://10.224.24.236:1028/accumulate",
  "attributes": [
    "temperature",
    "pressure"
  ]
}
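For reference, this payload matches Orion's NGSIv1 subscribeContext format, so it would typically be posted along these lines; the host and file name are placeholders, not taken from the question:
curl http://<orion-host>:1026/v1/subscribeContext \
  -H 'Content-Type: application/json' -H 'Accept: application/json' \
  -d @subscription.json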
EDIT 1
Upon using GET /v2/subscriptions we see that the subscription is present, but it gives only basic info, no timesSent values. It is pretty much the same thing we receive when we ask MongoDB directly.
Also, forgot to mention, the Orion version we are using is 1.9.0. (Screenshot: subscription check.)

Can't connect to AWS Aurora cluster endpoint but can access Writer instance

I have a MySQL Aurora cluster set up on AWS. For the last few weeks I have had all of my apps pointing to an instance endpoint, and it has been working fine. Yesterday, however, I started getting errors on inserts/updates saying that the instance was in read-only mode and couldn't be updated.
Apparently the reader/writer endpoints can change, and what I am really supposed to do is point to the cluster endpoint, which will route the request appropriately. I have tried pointing directly to that cluster endpoint, but it always fails. The error message is fairly generic, telling me to check my username/password, make sure I am not blocked by a firewall, and all of the normal default solutions.
My cluster is in a VPC, but the subnets assigned to the cluster are public (they are routed through an Internet Gateway).
The reader/writer instances have the same Security Group and VPC configuration. I can connect to the Reader instance (read only) but not the Writer instance.
Any idea what else I could look for? Most forums say that I need to check my routing tables or security groups, but from what I can tell they are all open to all traffic (I realize that is a bad configuration; I am just trying to get this working). Is there anything else that I should be checking?
Thanks
Update
I can Telnet in to the Reader instance, but not the Writer instance. They are in the same VPC, and both use the public subnet as far as I can tell.
Update 2
My Lambda functions that are in the same VPC as my RDS can access the cluster endpoint, so I guess it's just a problem getting outside. I thought that would be resolved by having a public subnet in the VPC, but it doesn't seem to work for that endpoint.
Merely having public subnets is not enough; you need to explicitly enable public accessibility for your DB instances. Public Accessibility is an instance-level setting, and you need to turn it ON on all instances in the cluster. Given your symptoms, I suspect you have enabled public access on some of your instances and not on others. You can check this via the CLI using the describe-db-instances API and filtering or searching for PubliclyAccessible. Here is an example:
aws rds describe-db-instances --region us-west-2 --output json --query 'DBInstances[?Engine==`aurora`].{instance:DBInstanceIdentifier, cluster:DBClusterIdentifier, isPublic:PubliclyAccessible }'
[
  {
    "instance": "karthik1",
    "isPublic": true,
    "cluster": "karthik-cluster"
  },
  {
    "instance": "karthik2",
    "isPublic": false,
    "cluster": "karthik-cluster"
  }
]
You modify an instance and enable public access on it using the modify-db-instance API.
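A sketch of that call, using the non-public instance from the output above (verify the identifier and region for your own cluster):
aws rds modify-db-instance --region us-west-2 \
  --db-instance-identifier karthik2 \
  --publicly-accessible --apply-immediately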
Hope this helps.

What are the relationships between Kubernetes services and clusters and Google Compute Engine objects?

I am setting up a couple of services running on Google Container Engine, with traffic coming in through a Google HTTP Load Balancer, using path mapping.
There is a good Google tutorial on setting up content-based load-balancing here, but it is all in terms of plain Google Compute objects like instance groups and backend services. I, however, have Kubernetes services, pods and clusters.
What is the relationship between the Kubernetes objects and the Google Compute resources? How do I map between the two programmatically?
(I am aware that I could be using a Kubernetes web ingress object to do the balancing, as explained here, but it looks like Kubernetes Ingress does not yet support HTTPS, which I need.)
What is the relationship between the Kubernetes objects and the Google Compute resources? How do I map between the two programmatically?
https://github.com/kubernetes/contrib/tree/master/Ingress/controllers/gce#overview
(I am aware that I could be using a Kubernetes web ingress object to do the balancing, as explained here, but it looks like Kubernetes Ingress does not yet support HTTPS, which I need.)
Ingress will support HTTPS in 1.2. This is what the resource will look like: https://github.com/kubernetes/kubernetes/issues/19497#issuecomment-174112834. In the meantime you can set up HTTP load balancing with the Ingress and hand-modify it to support HTTPS. Apologies beforehand that this is convoluted; it will get better soon.
First create an HTTP Ingress:
Create Services of Type=NodePort
Make sure you have BackendService quota
Create a HTTP Ingress (a minimal sketch of the Service and Ingress follows this list)
Expose the node port(s) of the service in the firewall (as also mentioned in https://cloud.google.com/container-engine/docs/tutorials/http-balancer)
Wait till kubectl describe ing shows HEALTHY for your backends.
At this point you should be able to curl your Ingress loadbalancer IP and hit the nginx service (or whatever service you created in step 1).
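A minimal sketch of the NodePort Service and HTTP Ingress from the steps above, written against the current networking.k8s.io/v1 API (the original answer predates it) and assuming an nginx Deployment labeled app: nginx; all names are placeholders:
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ing
spec:
  defaultBackend:
    service:
      name: nginx-svc
      port:
        number: 80
EOF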
Then do the following, manually through the GCE console:
Change the IP of the Ingress resource from "Ephemeral" to "Static" (look for the IP from kubectl get ing in the "External IP addresses" tab)
Create your ssl cert. If you just want a self signed cert you can do:
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /tmp/nginx.key -out /tmp/nginx.crt -subj "/CN=nginxsvc/O=nginxsvc"
Create a new target HTTPS proxy and forwarding rule for the HTTPS load balancer and assign it to the same (static) IP of the http load balancer.
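Roughly, with the gcloud CLI and placeholder names; the url-map is the one the Ingress controller already created (gcloud compute url-maps list will show it):
gcloud compute ssl-certificates create nginx-cert \
  --certificate /tmp/nginx.crt --private-key /tmp/nginx.key
gcloud compute target-https-proxies create nginx-https-proxy \
  --url-map <url-map-created-by-ingress> --ssl-certificates nginx-cert
gcloud compute forwarding-rules create nginx-https-fw --global \
  --target-https-proxy nginx-https-proxy --ports 443 --address <static-ip>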
At this point you should be able to curl https://loadbalancer-ip -k