I can't seem to get the session affinity behavior in the GCP load balancer to work properly. My test has been as follows:
I have a Container Engine cluster with 2 node pools (different zones) with 2 nodes each.
I have a deployment set to replicas: 8, and its pods are spread (almost) evenly across the 4 nodes.
I have a service exposed as follows (IPs redacted):
Name: svc-foo
Namespace: default
Labels: app=foo
Selector: app=foo
Type: NodePort
IP: ....
Port: <unset> 8080/TCP
NodePort: <unset> 31015/TCP
Endpoints: ...:8080,...:8080,...:8080 + 5 more...
Session Affinity: ClientIP
No events.
I have a load balancer with a backend service that has 2 backends pointed at port 31015. It has a health check which passes and a route to reach that backend service.
Finally, session affinity is set to Client IP on that backend service as well.
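For reference, setting Client IP affinity on a backend service with gcloud looks roughly like this (a sketch; the backend service name is a placeholder):
gcloud compute backend-services update my-backend-service \
    --global \
    --session-affinity=CLIENT_IP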
After curling a route and checking the logs in Stackdriver, I see container.googleapis.com/pod_name in the log metadata with a bunch of different pod names. In the Kubernetes UI I also see that all the pods show a small CPU spike, indicating I'm alternating between them and hitting each one. The weird part is that in GCP, when I look at the monitoring of the backend service, the graph shows requests per second going to only one of the pools (even though the logs and CPU graphs from Kubernetes show the other pool being hit as well).
Related
I followed the official walkthrough on how to deploy MySQL as a StatefulSet: https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/
I have it up and running well but the guide says:
The Client Service, called mysql-read, is a normal Service with its own cluster IP that distributes connections across all MySQL Pods that report being Ready. The set of potential endpoints includes the primary MySQL server and all replicas.
Note that only read queries can use the load-balanced Client Service. Because there is only one primary MySQL server, clients should connect directly to the primary MySQL Pod (through its DNS entry within the Headless Service) to execute writes.
this is my connection code:
func NewMysqlClient() *sqlx.DB {
	// DSN format: username:password@protocol(address)/dbname?param=value
	dataSourceName := fmt.Sprintf("%s:%s@tcp(%s)/%s?parseTime=true&multiStatements=true",
		username, password, host, schema,
	)
	log.Println(dataSourceName)

	var mysqlClient *sqlx.DB
	var err error
	connected := false
	log.Println("trying to connect to db")
	// retry up to 7 times, waiting 30 seconds between attempts
	for i := 0; i < 7; i++ {
		mysqlClient, err = sqlx.Connect("mysql", dataSourceName)
		if err == nil {
			connected = true
			break
		}
		log.Println(err)
		log.Println("failed will try again in 30 secs!")
		time.Sleep(30 * time.Second)
	}
	if !connected {
		log.Println(err)
		log.Println("Couldn't connect to db will exit")
		os.Exit(1)
	}
	log.Println("database successfully configured")
	return mysqlClient
}
when I connect the app to the headless MySQL service, I get:
Error 1290: The MySQL server is running with the --super-read-only option so it cannot execute this statement
I am guessing it is connecting to one of the slave replicas. When I connect to the mysql-0.mysql host, everything works fine, which is expected since that is the master node.
My question is: how will my application be able to read from the slave nodes when we are only connecting to the master, given that the application also needs to be able to write data?
I tried using mysql-0.mysql,mysql-1.mysql,mysql-2.mysql but then I get:
dial tcp: lookup mysql-0.mysql;mysql-1.mysql,mysql-2.mysql: no such host
So I want to know whether there is any way to connect to the three replicas together, so that we write to the master and read from any of them, as with other databases like Mongo etc.
If there is no way to connect to all the replicas, how would you suggest I read from the slaves and write to the master?
Thank you!
You have to use the Service name when connecting to MySQL from the Go application.
Your traffic flow looks like this:
the Go application Pod, running inside the same Kubernetes cluster, sends a request to the MySQL Service -> the MySQL Service forwards the traffic to the MySQL StatefulSet Pods (in other words, the replicas)
So if you have created the Service, in your case the hostname will be the Service name: mysql
For example, you can refer to this: https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/
Notice how WordPress connects to MySQL:
containers:
- image: wordpress:4.8-apache
  name: wordpress
  env:
  - name: WORDPRESS_DB_HOST
    value: wordpress-mysql
it's using the MySQL service name wordpress-mysql as hostname to connect.
If you just want to connect to the read replicas, you can use the Service name mysql-read.
OR
You can also try connecting with:
kubectl run mysql-client --image=mysql:5.7 -i --rm --restart=Never -- mysql -h mysql-0.mysql
Option 2
If you just want to connect to a specific Pod (for example the primary, in order to write), you can use:
<pod-name>.mysql
The Headless Service provides a home for the DNS entries that the
StatefulSet controller creates for each Pod that's part of the set.
Because the Headless Service is named mysql, the Pods are accessible
by resolving <pod-name>.mysql from within any other Pod in the same
Kubernetes cluster and namespace.
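Putting the answer above together, a minimal sketch in Go that reuses the DSN style from the question (the function and parameter names here are placeholders, not from the original code): one connection to mysql-0.mysql for writes and a second one to mysql-read for load-balanced reads.
package db

import (
	"fmt"

	_ "github.com/go-sql-driver/mysql"
	"github.com/jmoiron/sqlx"
)

// newClient connects to a single MySQL host; username, password and schema
// are assumed to come from configuration, as in the question.
func newClient(host, username, password, schema string) (*sqlx.DB, error) {
	dsn := fmt.Sprintf("%s:%s@tcp(%s:3306)/%s?parseTime=true", username, password, host, schema)
	return sqlx.Connect("mysql", dsn)
}

// NewMysqlClients returns one handle for writes (the primary, mysql-0.mysql)
// and one for reads (the load-balanced mysql-read Service).
func NewMysqlClients(username, password, schema string) (writer, reader *sqlx.DB, err error) {
	if writer, err = newClient("mysql-0.mysql", username, password, schema); err != nil {
		return nil, nil, err
	}
	if reader, err = newClient("mysql-read", username, password, schema); err != nil {
		return nil, nil, err
	}
	return writer, reader, nil
}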
Another appropriate approach could be for your application code to ignore the master/replica distinction and operate as if it were connected to a single master instance, with read/write query splitting abstracted away in a capable proxy. That proxy is then responsible for routing the write queries to the master instance and the read queries to the replica instances.
Example proxy: https://proxysql.com/
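As a rough sketch, that split is configured through ProxySQL's admin interface; the hostgroup numbers below are arbitrary and the hostnames are simply the Pod DNS names from the question:
-- hostgroup 10 = writer (primary), hostgroup 20 = readers
INSERT INTO mysql_servers (hostgroup_id, hostname, port) VALUES (10, 'mysql-0.mysql', 3306);
INSERT INTO mysql_servers (hostgroup_id, hostname, port) VALUES (20, 'mysql-1.mysql', 3306);
INSERT INTO mysql_servers (hostgroup_id, hostname, port) VALUES (20, 'mysql-2.mysql', 3306);
-- route SELECTs to the readers, everything else to the writer
INSERT INTO mysql_query_rules (rule_id, active, match_digest, destination_hostgroup, apply) VALUES (1, 1, '^SELECT.*FOR UPDATE', 10, 1);
INSERT INTO mysql_query_rules (rule_id, active, match_digest, destination_hostgroup, apply) VALUES (2, 1, '^SELECT', 20, 1);
LOAD MYSQL SERVERS TO RUNTIME;
LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;
SAVE MYSQL QUERY RULES TO DISK;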
I read all I could find, but documentation on this scenario is scant or unclear for podman. I have the following (contrived) ROOTLESS podman setup:
pod-1 name: pod1
Container names in pod1:
p1c1 -- This is also its assigned hostname within pod1
p1c2 -- This is also its assigned hostname within pod1
p1c3 -- This is also its assigned hostname within pod1
pod-2 name: pod2
Container names in pod2:
p2c1 -- This is also its assigned hostname within pod2
p2c2 -- This is also its assigned hostname within pod2
p2c3 -- This is also its assigned hostname within pod2
I keep certain containers in different pods specifically to avoid port conflict, and to manage containers as groups.
QUESTION:
Given the above topology, how do I communicate between, say, p1c1 and p2c1? In other words, step by step, what podman(1) commands do I issue to collect the necessary addressing information for pod1:p1c1 and pod2:p2c1, and then use that information to configure the applications in them so they can communicate with one another?
Thank you in advance!
EDIT: For searchers, additional information can be found here.
Podman doesn't have anything like the "services" concept in Swarm or Kubernetes to provide for service discovery between pods. Your options boil down to:
Run both pods in the same network namespace, or
Expose the services by publishing them on host ports, and then access them via the host
For the first solution, we'd start by creating a network:
podman network create shared
And then creating both pods attached to the shared network:
podman pod create --name pod1 --network shared
podman pod create --name pod2 --network shared
With both pods running on the same network, containers can refer to the other pod by name. E.g., if you were running a web service in p1c1 on port 80, in p2c1 you could curl http://pod1.
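For instance, a quick sketch (the images here are placeholders, not from the question):
# run a web server in pod1 and reach it from a container in pod2 by pod name
podman run -d --pod pod1 --name p1c1 docker.io/library/nginx:alpine
podman run --rm -it --pod pod2 --name p2c1 docker.io/library/alpine:latest \
    wget -qO- http://pod1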
For the second option, you would do something like:
podman pod create --name pod1 -p 1234:1234 ...
podman pod create --name pod2 ...
Now if p1c1 has a service listening on port 1234, you can access that from p2c1 at <some_host_address>:1234.
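Concretely (placeholder image and host address; pod1 was created with -p 1234:1234 as above):
podman run -d --pod pod1 --name p1c1 <image-listening-on-1234>
# from p2c1, reach the service through the host's address
curl http://192.168.1.22:1234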
If I'm interpreting option 1 correctly: if the applications in p1c1 and p2c1 both use, say, port 8080, then there won't be any conflict anywhere (either within the pods or on the outer host), provided I publish with something like 8080:8080 for the app in p1c1 and 8081:8080 for the app in p2c1? Is this interpretation correct?
That's correct. Each pod runs with its own network namespace (effectively, its own IP address), so services in different pods can listen on the same port.
Can the network (not ports) of a pod be reassigned once running? REASON: I'm using podman-compose(1), which creates things for you in a pod, but I may need to change things (like the network assignment) after the fact. Can this be done?
In general you cannot change the configuration of a pod or a
container; you can only delete it and create a new one. Assuming that
podman-compose has relatively complete support for the
docker-compose.yaml format, you should be able to set up the network
correctly in your docker-compose.yaml file (you would create the
network manually, and then reference it as an external network in
your compose file).
Here is a link to the relevant Docker documentation. I haven't tried this myself with podman.
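For example, a sketch of what that might look like in the compose file (service and image names are placeholders; the shared network is created manually first with podman network create shared):
version: "3"
services:
  app:
    image: docker.io/library/nginx:alpine
    networks:
      - shared
networks:
  shared:
    external: true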
The accepted answer from @larsks will only work for rootful containers, in other words when you run every podman command with a sudo prefix. (For instance, when you connect to a postgres container from a Spring Boot application container, you will get a SocketTimeout exception.)
If the two containers run on the same host, get the IP address of the host and use <ipOfHost>:<port>. Example: 192.168.1.22:5432
For more information you can read this blog => https://www.redhat.com/sysadmin/container-networking-podman
Note: the above solution of creating networks only works in rootful mode. You cannot do podman network create as a rootless user.
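As a sketch of that workaround (the host address and image tags are placeholders):
# rootless: publish the database port on the host ...
podman run -d --name postgres -e POSTGRES_PASSWORD=example -p 5432:5432 docker.io/library/postgres:13
# ... and have the other container connect via the host's address
podman run --rm -it -e PGPASSWORD=example docker.io/library/postgres:13 \
    psql -h 192.168.1.22 -p 5432 -U postgres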
I have hosted Jenkins on a Kubernetes cluster which is hosted on Google Cloud Platform. I am trying to run a Python script through Jenkins. The script needs to read a few values from MySQL. The MySQL instance runs separately on one of the instances. I have been facing issues connecting from Kubernetes to the MySQL instance. I am getting the following error:
sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on '35.199.154.36' (timed out)
This is the document I came across
According to the document, I tried connecting via the private IP address.
I generated a secret which has the MySQL username and password, and included the host IP address in the following format, taking this document as a reference:
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
data:
  hostname: <MySQL external ip address>
kubectl create secret generic literal-token --from-literal user=<username> --from-literal password=<password>
This is the raw yaml file for the pod that I am trying to insert in the Jenkins Pod template.
Any help regarding how I can overcome this SQL connection problem would be appreciated.
You can't create a secret in the pod template field. You need to either create the secret before running the jobs and mount it from your pod template, or just refer to your user/password in your pod templates as environment variables, depending on your security requirements.
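For example, referring to that secret as environment variables in the pod template's container spec might look roughly like this (a sketch; the container name and image are placeholders, while the secret name and keys match the kubectl create secret command above):
containers:
- name: python
  image: python:3.9
  env:
  - name: DB_USER
    valueFrom:
      secretKeyRef:
        name: literal-token
        key: user
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: literal-token
        key: password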
I am trying to configure a RabbitMQ cluster in the cloud using a config file.
My procedure is this:
Get the list of instances I want to cluster with (via the cloud API, before cluster startup).
Modify the config file like this:
cluster_formation.peer_discovery_backend = rabbit_peer_discovery_classic_config
cluster_formation.classic_config.nodes.1 = rabbit#host1.my.long.domain.name
cluster_formation.classic_config.nodes.2 = rabbit#host2.my.long.domain.name
...
Run rabbitmq-server
I expect all nodes to form one cluster, but instead there might be 2+ independent clusters. How do I solve this issue?
UPDATE:
I found out that when I run rabbitmqctl join_cluster rabbit@host.in.existing.cluster on a node that is already in some cluster, this node leaves its previous cluster (I expected the clusters to merge). That might be the root of the problem.
UPDATE 2:
I have 4 instances. 3 run bare rabbitmq-servers; 1 is configured to join the other 3. When started, it joins the last instance in its config, and the 2 others show no activity in their logs. This happens with both the classic config and the Erlang config.
When you initially start up your cluster, there is no mechanism to resolve the race condition. Using peer discovery backends will not help with this issue (tested on etcd).
What actually resolved this issue was not deploying the instances simultaneously. When they are started one by one, everything is fine and you get one stable cluster which can handle scaling without failure.
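If a node has already ended up in its own single-node cluster, one way to fold it into the existing cluster by hand is roughly this (a sketch; the target node name is a placeholder, and reset wipes that node's local data):
# on the stray node
rabbitmqctl stop_app
rabbitmqctl reset
rabbitmqctl join_cluster rabbit@host1.my.long.domain.name
rabbitmqctl start_app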
I'm trying to start an HAProxy load balancer with the following configuration:
global
    log 127.0.0.1 local0
    log 127.0.0.1 local0 notice

resolvers docker
    nameserver dnsmasq 1.2.3.4:53

defaults
    mode http
    log global
    option httplog
    option dontlognull

frontend ft_radix_real
    bind *:61616
    maxconn 6000
    acl is_websocket hdr(Upgrade) -i WebSocket
    acl is_websocket hdr_beg(Host) -i ws
    use_backend bk_radix_real if is_websocket

backend bk_radix_real
    balance roundrobin
    option forwardfor
    server radix-real-1 1.2.3.4:1884 check resolvers docker resolve-prefer ipv4
    server radix-real-2 1.2.3.4:1884 check resolvers docker resolve-prefer ipv4
    server radix-real-3 1.2.3.4:1884 check resolvers docker resolve-prefer ipv4
    server radix-real-4 1.2.3.4:1884 check resolvers docker resolve-prefer ipv4

listen stats
    mode http
    option httplog
    option dontlognull
    bind *:1936
    stats enable
    stats scope ft_radix_real
    stats scope bk_radix_real
    stats uri /
    stats realm Haproxy\ Statistics
    stats auth admin:admin
This configuration works when all backend servers are up. However, I would like to be able to start HAProxy even if some (not all) of the backend servers are not running. I checked the configuration documentation but couldn't find a solution. Is this possible?
Since 1.7 you can start HAProxy without resolving all the hosts on startup:
defaults
    # never fail on address resolution
    default-server init-addr none
https://cbonte.github.io/haproxy-dconv/1.7/configuration.html#init-addr
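Applied to the configuration above, that would look something like this in the defaults section (a sketch; last,libc,none first tries the saved state file and libc resolution, and only then lets the server start without an address):
defaults
    mode http
    log global
    option httplog
    option dontlognull
    # keep starting even if a backend address cannot be resolved yet
    default-server init-addr last,libc,none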
I don't see any problem; the checks are there for this. Servers will be checked, the dead ones will be marked down, and only the remaining valid ones will handle the traffic. You probably need to describe what type of issue you're facing exactly.