I have a Kubernetes cluster and have started MySQL via kubectl. I have an image of a Spring Boot application, and I am confused about which JDBC URL to use in application.yml. I have tried multiple IP addresses (from describing pods, services, etc.), but it keeps failing with "Communications link failure".
Below is my mysql-deployment.yml
apiVersion: v1
kind: Service
metadata:
name: mysql
spec:
#type: NodePort
ports:
- port: 3306
#targetPort: 3306
#nodePort: 31000
selector:
app: mysql
clusterIP: None
---
apiVersion: v1
kind: Secret
metadata:
name: mysql-secret
type: Opaque
data:
MYSQL_ROOT_PASSWORD: cGFzc3dvcmQ= #password
MYSQL_DATABASE: dGVzdA== #test
MYSQL_USER: dGVzdHVzZXI= #testuser
MYSQL_PASSWORD: dGVzdDEyMw== #test123
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mysql-pv-claim
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: mysql
spec:
selector:
matchLabels:
app: mysql
strategy:
type: Recreate
template:
metadata:
labels:
app: mysql
spec:
containers:
- image: mysql:5.7
name: mysql
env:
# Use secret in real usage
- name: MYSQL_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mysql-secret
key: MYSQL_ROOT_PASSWORD
- name: MYSQL_DATABASE
valueFrom:
secretKeyRef:
name: mysql-secret
key: MYSQL_DATABASE
- name: MYSQL_USER
valueFrom:
secretKeyRef:
name: mysql-secret
key: MYSQL_USER
- name: MYSQL_PASSWORD
valueFrom:
secretKeyRef:
name: mysql-secret
key: MYSQL_PASSWORD
ports:
- containerPort: 3306
name: mysql
volumeMounts:
- name: mysql-persistent-storage
mountPath: /var/lib/mysql
volumes:
- name: mysql-persistent-storage
persistentVolumeClaim:
claimName: mysql-pv-claim
Your K8S service should expose port and targetPort 3306 and in your JDBC URL use the name of that service:
jdbc:mysql://mysql/database
If your MySQL is a backend service only for apps running in K8S you don't need nodePort in the service manifest.
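For example, a minimal application.yml sketch for the Spring Boot side, assuming the database name, user, and password are the decoded values from the Secret above:

spring:
  datasource:
    # "mysql" is the Service name; cluster DNS resolves it to the MySQL pod
    url: jdbc:mysql://mysql:3306/test
    username: testuser
    password: test123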
If you get a SQLException: Connection refused or Connection timed out or a MySQL specific CommunicationsException: Communications link failure, then it means that the DB isn't reachable at all.
This can have one or more of the following causes:
IP address or hostname in JDBC URL is wrong.
Hostname in JDBC URL is not recognized by local DNS server.
Port number is missing or wrong in JDBC URL.
DB server is down.
DB server doesn't accept TCP/IP connections.
DB server has run out of connections.
Something in between Java and DB is blocking connections, e.g. a firewall or proxy.
I suggest these steps to better understand the problem:
Connect to the MySQL pod and verify the content of the /etc/mysql/my.cnf file.
Connect to MySQL from inside the pod to verify it works (see the sketch after this list).
Remove clusterIP: None from the Service manifest.
Get the IP address of the node where the MySQL pod is running:
kubectl get nodes -o wide
If the MySQL Service is exposed as type NodePort, get the assigned nodePort:
kubectl get svc
In your application.properties, set the JDBC URL to the node's IP address and the nodePort assigned to that Service.
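A sketch of those checks, assuming the pod name placeholder and the credentials from the manifests above:

# verify MySQL answers from inside the pod
kubectl exec -it <mysql-pod-name> -- mysql -u testuser -ptest123 test -e "SELECT 1"

# after exposing the Service as NodePort, application.properties would contain something like
# (31000 is the nodePort commented out in your manifest; substitute the real node IP and port)
spring.datasource.url=jdbc:mysql://<node-ip>:31000/test
spring.datasource.username=testuser
spring.datasource.password=test123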
Below is my output of kubectl get svc:
kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 9d
mysql ClusterIP None <none> 3306/TCP 2h
registry NodePort 10.110.33.13 <none> 8761:31881/TCP 7d
Related
I deployed a MySQL service in Kubernetes and created a database (db1) and a custom user ('foo'@'localhost' identified by 'bar') with all privileges. I can verify it.
But somehow my Flask application cannot connect to it. So I turned on debug mode and got
sqlalchemy.exc.OperationalError
sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'localhost' ([Errno 99] Address not available)")
(Background on this error at: http://sqlalche.me/e/13/e3q8)
This never happens on Linux machines. This is my DATABASE_URL: mysql+pymysql://foo:bar@localhost:3306/db1
Is there anything I am doing wrong? I am new to kubernetes, I don't even know how to go about debugging it. Please let me know if I am not providing enough information.
Thank you.
[EDITED]
mysql-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: mysql
labels:
app: mysql
spec:
replicas: 1
selector:
matchLabels:
app: mysql
template:
metadata:
labels:
app: mysql
spec:
containers:
- image: mysql:5.6
name: mysql
env:
- name: MYSQL_ROOT_PASSWORD
value: "123"
ports:
- containerPort: 3306
name: mysql
volumeMounts:
- name: mysql-persistent-storage
mountPath: /var/lib/mysql
volumes:
- name: mysql-persistent-storage
persistentVolumeClaim:
claimName: mysql-volumeclaim
mysql-service.yaml
apiVersion: v1
kind: Service
metadata:
name: mysql
labels:
app: mysql
spec:
type: ClusterIP
ports:
- port: 3306
selector:
app: mysql
I am a newbie on Kubernetes and am trying to create 2 pods: a front-end application and a back-end MySQL. First I made a YAML file which contains both the application and the MySQL server, like below:
apiVersion: v1
kind: Pod
metadata:
name: blog-system
spec:
containers:
- name: blog-app
image: blog-app:latest
imagePullPolicy: Never
ports:
- containerPort: 8080
args: ["-t", "-i"]
link: blog-mysql
- name: blog-mysql
image: mysql:latest
env:
- name: MYSQL_ROOT_PASSWORD
value: password
- name: MYSQL_PASSWORD
value: password
- name: MYSQL_DATABASE
value: test
ports:
- containerPort: 3306
The MySQL JDBC URL of the front-end application is jdbc:mysql://localhost:3306/test. Pod creation is successful, and the application and MySQL connect without errors. Then I separated the application pod and the MySQL pod into 2 YAML files.
== pod-app.yaml
apiVersion: v1
kind: Pod
metadata:
name: blog-app
spec:
selector:
app: blog-mysql
containers:
- name: blog-app
image: app:latest
imagePullPolicy: Never
ports:
- containerPort: 8080
args: ["-t", "-i"]
link: blog-mysql
== pod-db.yaml
apiVersion: v1
kind: Pod
metadata:
name: blog-mysql
labels:
app: blog-mysql
spec:
containers:
- name: blog-mysql
image: mysql:latest
env:
- name: MYSQL_ROOT_PASSWORD
value: password
- name: MYSQL_PASSWORD
value: password
- name: MYSQL_DATABASE
value: test
ports:
- containerPort: 3306
But the front-end application cannot connect to the MySQL pod; it throws connection exceptions. I am afraid the MySQL JDBC URL or the YAML has some incorrect values. Any advice would be appreciated.
In the working case, since the same pod has two containers, they are able to talk over localhost. In the second case, since you have two pods, you cannot use localhost anymore. You would need to use the pod IP of the MySQL pod in the front-end application, but the problem with using a pod IP is that it may change. Better to expose the MySQL pod as a Service and use the Service name instead of the IP in the front-end application. Check this guide
For this you need to write a Service that exposes the DB pod.
There are 4 types of services.
ClusterIP
NodePort
LoadBalancer
ExternalName
Since you only need access from inside the cluster, use ClusterIP.
For reference, use the following YAML file:
kind: Service
apiVersion: v1
metadata:
name: mysql-svc
spec:
type: ClusterIP
ports:
- port: 3306
targetPort: 3306
selector:
app: blog-mysql
Now you can access this pod using mysql-svc:3306.
Reference it in the blog-app YAML with:
env:
- name: MYSQL_URL
value: mysql-svc
- name: MYSQL_PORT
value: "3306" # env values must be strings, so quote the port
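If the front-end is a Spring Boot app, here is a sketch of how it could consume those variables in its application.yml (the database name test comes from the pod spec above; assuming a Spring Boot front-end):

spring:
  datasource:
    # MYSQL_URL and MYSQL_PORT are resolved from the container environment
    url: jdbc:mysql://${MYSQL_URL}:${MYSQL_PORT}/test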
For more info, see: https://kubernetes.io/docs/concepts/services-networking/service/
A Service gets a cluster DNS name of the form
service_name.namespace.svc.cluster.local
So, assuming you expose the MySQL pod with a Service named blog-mysql in the default namespace, your JDBC connection string will be
jdbc:mysql://blog-mysql.default.svc.cluster.local:3306/test
Refer: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pods
As Arghya Sadhu and Sachin Arote suggested, you can always create a Service and a Deployment. A Deployment helps in cases where you have more than one replica of a pod, and the Service takes care of load balancing.
I have this pod specification:
apiVersion: v1
kind: Pod
metadata:
name: wp
spec:
containers:
- image: wordpress:4.9-apache
name: wordpress
env:
- name: WORDPRESS_DB_PASSWORD
value: mysqlpwd
- name: WORDPRESS_DB_HOST
value: 127.0.0.1
- image: mysql:5.7
name: mysql
env:
- name: MYSQL_ROOT_PASSWORD
value: mysqlpwd
volumeMounts:
- name: data
mountPath: /var/lib/mysql
volumes:
- name: data
emptyDir: {}
I deployed it using:
kubectl create -f wordpress-pod.yaml
Now it is correctly deployed:
kubectl get pods
wp 2/2 Running 3 35h
Then when I do:
kubectl describe po/wp
Name: wp
Namespace: default
Priority: 0
Node: node3/192.168.50.12
Start Time: Mon, 13 Jan 2020 23:27:16 +0100
Labels: <none>
Annotations: <none>
Status: Running
IP: 10.233.92.7
IPs:
IP: 10.233.92.7
Containers:
My issue is that I cannot access the app:
wget http://192.168.50.12:8080/wp-admin/install.php
Connecting to 192.168.50.12:8080... failed: Connection refused.
Nor does wget http://10.233.92.7:8080/wp-admin/install.php
work.
Is there any issue in the pod description or deployment process?
Thanks
With your current setup you would need to run wget from within the cluster, i.e. by performing kubectl exec into another pod, because the 10.233.92.7 IP is valid only within the cluster. Note also that the wordpress:4.9-apache container listens on port 80, not 8080, so the in-cluster URL would be http://10.233.92.7/wp-admin/install.php.
You should create a Service to expose your pod. Create a ClusterIP type Service (the default) for access from within the cluster. If you want access from outside the cluster, i.e. from your desktop, then create a NodePort or LoadBalancer type Service.
Another way to access the application from your desktop is port forwarding (a sketch follows below). In this case you don't need to create a Service.
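A sketch of the port-forward approach (8080 is an arbitrary local port; the wordpress container itself listens on 80):

kubectl port-forward pod/wp 8080:80
# then, from your desktop:
wget http://localhost:8080/wp-admin/install.php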
Here is a tutorial for accessing pods using a NodePort Service. In this case your node needs to have a public IP.
The problem with your configuration is the lack of Services that would allow external access to your WordPress.
There are a lot of materials explaining what the options are and how they are tied to the infrastructure that Kubernetes runs on.
Let me elaborate on 3 of them:
minikube
kubeadm
cloud provisioned (GKE, EKS, AKS)
The base of the WordPress configuration will be the same in each case.
Table of contents:
Running MySQL
Secret
PersistentVolumeClaim
Deployment
Service
Running WordPress
PersistentVolumeClaim
Deployment
Allowing external access
minikube
kubeadm
cloud provisioned (GKE)
There is a good tutorial on the Kubernetes site: HERE!
Running MySQL
Secret
As the official Kubernetes documentation says:
Kubernetes secret objects let you store and manage sensitive information, such as passwords, OAuth tokens, and ssh keys. Putting this information in a secret is safer and more flexible than putting it verbatim in a Pod definition or in a container image.
-- Kubernetes secrets
The example below is a YAML definition of a Secret used for the MySQL password:
apiVersion: v1
kind: Secret
metadata:
name: mysql-password
type: Opaque
data:
password: c3VwZXJoYXJkcGFzc3dvcmQK
Take a specific look at:
password: c3VwZXJoYXJkcGFzc3dvcmQK
This password is base64-encoded.
To create this encoded value, invoke this command from your terminal (the -n prevents a trailing newline from being encoded into the password):
$ echo -n "YOUR_PASSWORD" | base64
Paste the output into the YAML definition and apply it with:
$ kubectl apply -f FILE_NAME.
You can check if it was created correctly with:
$ kubectl get secret mysql-password -o yaml
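To double-check the decoded value (a quick sanity check, assuming a Linux/macOS shell):

$ kubectl get secret mysql-password -o jsonpath='{.data.password}' | base64 --decode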
PersistentVolumeClaim
MySQL requires dedicated space for storing its data. There is official documentation explaining it: Persistent Volumes
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mysql-pv-claim
labels:
app: wordpress
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2Gi
The above YAML will create a storage claim for MySQL. Apply it with the command:
$ kubectl apply -f FILE_NAME.
Deployment
Create a YAML definition of a deployment from the official example and adjust it if there were any changes to names of the objects:
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
name: wordpress-mysql
labels:
app: wordpress
spec:
selector:
matchLabels:
app: wordpress
tier: mysql
strategy:
type: Recreate
template:
metadata:
labels:
app: wordpress
tier: mysql
spec:
containers:
- image: mysql:5.6
name: mysql
env:
- name: MYSQL_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mysql-password
key: password
ports:
- containerPort: 3306
name: mysql
volumeMounts:
- name: mysql-persistent-storage
mountPath: /var/lib/mysql
volumes:
- name: mysql-persistent-storage
persistentVolumeClaim:
claimName: mysql-pv-claim
Take a specific look at the part below, which passes the Secret password into the MySQL pod:
- name: MYSQL_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mysql-password
key: password
Apply it with command: $ kubectl apply -f FILE_NAME.
Service
What was missing in your configuration was Service objects. These objects allow communication with other pods, handle external traffic, etc. Look at the example below:
apiVersion: v1
kind: Service
metadata:
name: wordpress-mysql
labels:
app: wordpress
spec:
ports:
- port: 3306
selector:
app: wordpress
tier: mysql
clusterIP: None
This definition will create an object which points to the MySQL pod.
It will create a DNS entry with the name wordpress-mysql that resolves to the IP address of the pod.
It will not be exposed to external traffic, as that is not needed.
Apply it with command: $ kubectl apply -f FILE_NAME.
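To verify that the Service resolves and MySQL answers, you can run a throwaway client pod (a sketch; the password is whatever you encoded into the Secret):

$ kubectl run -it --rm --image=mysql:5.6 --restart=Never mysql-client -- mysql -h wordpress-mysql -pYOUR_PASSWORD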
Running WordPress
Persistent Volume Claim
Just like MySQL, WordPress requires dedicated space for storing its data. Create it with the example below:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: wp-pv-claim
labels:
app: wordpress
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2Gi
Apply it with command: $ kubectl apply -f FILE_NAME.
Deployment
Create a YAML definition of WordPress as in the example below:
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
name: wordpress
labels:
app: wordpress
spec:
selector:
matchLabels:
app: wordpress
tier: frontend
strategy:
type: Recreate
template:
metadata:
labels:
app: wordpress
tier: frontend
spec:
containers:
- image: wordpress:4.8-apache
name: wordpress
env:
- name: WORDPRESS_DB_HOST
value: wordpress-mysql
- name: WORDPRESS_DB_PASSWORD
valueFrom:
secretKeyRef:
name: mysql-password
key: password
ports:
- containerPort: 80
name: wordpress
volumeMounts:
- name: wordpress-persistent-storage
mountPath: /var/www/html
volumes:
- name: wordpress-persistent-storage
persistentVolumeClaim:
claimName: wp-pv-claim
Take a specific look at:
- name: WORDPRESS_DB_PASSWORD
valueFrom:
secretKeyRef:
name: mysql-password
key: password
This part passes the Secret value into the Deployment.
The definition below tells WordPress where MySQL is located:
- name: WORDPRESS_DB_HOST
value: wordpress-mysql
Apply it with command: $ kubectl apply -f FILE_NAME.
Allowing external access
There are many different approaches for configuring external access to applications.
Minikube
Configuration could differ between different hypervisors.
For example Minikube can expose WordPress to external traffic with:
NodePort
apiVersion: v1
kind: Service
metadata:
name: wordpress-nodeport
spec:
type: NodePort
selector:
app: wordpress
tier: frontend
ports:
- name: wordpress-port
protocol: TCP
port: 80
targetPort: 80
After applying this definition, you will need to enter the minikube IP address with the appropriate port in your web browser.
This port can be found with the command:
$ kubectl get svc wordpress-nodeport
Output of above command:
wordpress-nodeport NodePort 10.76.9.15 <none> 80:30173/TCP 8s
In this case it is 30173.
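The node address itself can be retrieved with:
$ minikube ip
So in this example the WordPress installer would be reachable at http://<minikube-ip>:30173.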
LoadBalancer
In this case it will create NodePort also!
apiVersion: v1
kind: Service
metadata:
name: wordpress-loadbalancer
labels:
app: wordpress
spec:
ports:
- port: 80
selector:
app: wordpress
tier: frontend
type: LoadBalancer
Ingress resource
Please refer to this link: Minikube: create-an-ingress-resource
Also you can refer to this Stack Overflow post
Kubeadm
With the Kubernetes clusters provided by kubeadm there are:
NodePort
The configuration process is the same as in minikube. The only difference is that it will create a NodePort on each and every node in the cluster. After that you can enter the IP address of any of the nodes with the appropriate port. Be aware that you will need to be on the same network, with no firewall blocking your access.
LoadBalancer
You can create a LoadBalancer object with the same YAML definition as in minikube. The problem is that with kubeadm provisioning on a bare-metal cluster, the LoadBalancer will not get an IP address. One of the options is MetalLB.
Ingress
Ingress resources share the same problem as LoadBalancer on kubeadm-provisioned infrastructure. As above, one of the options is MetalLB.
Cloud Provisioned
There are many options, and they are strictly related to the cloud that Kubernetes runs on. Below is an example of configuring an Ingress resource with the NGINX controller on GKE:
Apply both of the YAML definitions:
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.27.1/deploy/static/mandatory.yaml
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.27.1/deploy/static/provider/cloud-generic.yaml
Apply NodePort definition from minikube
Create Ingress resource:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: ingress
annotations:
kubernetes.io/ingress.class: "nginx"
spec:
rules:
- http:
paths:
- path: /
backend:
serviceName: wordpress-nodeport
servicePort: wordpress-port
Apply it with command: $ kubectl apply -f FILE_NAME.
Check if Ingress resource got the address from cloud provider with command:
$ kubectl get ingress
The output should look like this:
NAME HOSTS ADDRESS PORTS AGE
ingress * XXX.XXX.XXX.X 80 26m
After entering the IP address from the above command, you should see the WordPress installation page.
The cloud-provisioned example can also be used for kubeadm-provisioned clusters with MetalLB configured.
I have the following deployment...
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mysql-data-disk
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: mysql-deployment
labels:
app: mysql
spec:
replicas: 1
selector:
matchLabels:
app: mysql
template:
metadata:
labels:
app: mysql
spec:
containers:
- name: mysql
image: mysql:5.7
ports:
- containerPort: 3306
volumeMounts:
- mountPath: "/var/lib/mysql"
subPath: "mysql"
name: mysql-data
env:
- name: MYSQL_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mysql-secrets
key: ROOT_PASSWORD
volumes:
- name: mysql-data
persistentVolumeClaim:
claimName: mysql-data-disk
This works great; I can access the DB like this...
kubectl exec -it mysql-deployment-<POD-ID> -- /bin/bash
Then I run...
mysql -u root -h localhost -p
And I can log into it. However, when I try to access it as a service by using the following yaml...
---
apiVersion: v1
kind: Service
metadata:
name: mysql-service
spec:
selector:
app: mysql
ports:
- protocol: TCP
port: 3306
targetPort: 3306
I can see it by running kubectl describe service mysql-service, which gives...
Name: mysql-service
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"mysql-service","namespace":"default"},"spec":{"ports":[{"port":33...
Selector: app=mysql
Type: ClusterIP
IP: 10.101.1.232
Port: <unset> 3306/TCP
TargetPort: 3306/TCP
Endpoints: 172.17.0.4:3306
Session Affinity: None
Events: <none>
and I get the IP by running kubectl cluster-info
#kubectl cluster-info
Kubernetes master is running at https://192.168.99.100:8443
KubeDNS is running at https://192.168.99.100:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
but when I try to connect using Oracle SQL Developer like this...
It says it cannot connect.
How do I connect to the MySQL running on K8s?
A Service of type ClusterIP will not be accessible outside of the pod network.
If you don't have the LoadBalancer option, then you have to use either a Service of type NodePort or kubectl port-forward (see the sketch below).
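A sketch of the port-forward option (forwarding local port 3306 to the Service; any free local port works):

kubectl port-forward svc/mysql-service 3306:3306

Then point SQL Developer at localhost:3306.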
You need your mysql Service to be of type NodePort instead of ClusterIP to access it from outside Kubernetes.
Then use the NodePort in your client config.
Example Service:
apiVersion: v1
kind: Service
metadata:
name: mysql-service
spec:
type: NodePort
selector:
app: mysql
ports:
- protocol: TCP
port: 3306
nodePort: 30036
targetPort: 3306
So then you can use port 30036 in your client, together with the IP address of the node (see the example below).
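For example, with minikube the host is the node IP reported by minikube ip (192.168.99.100 in your kubectl cluster-info output), so a JDBC-style connection string would look like this (the database name is a placeholder):

jdbc:mysql://192.168.99.100:30036/<your-database>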
I am new to Kubernetes. I am trying to deploy a microservices-based Spring Boot web application on Kubernetes, which I have set up on OpenStack. All Kubernetes services are running fine.
I followed https://github.com/fabric8io/gitcontroller/tree/master/vendor/k8s.io/kubernetes/examples/javaweb-tomcat-sidecar
to deploy a sample springboot application at
https://www.mkyong.com/spring-boot/spring-boot-hello-world-example-jsp/
and I could see the web app at localhost:8080 and <node-ip>:8080.
But the application I am trying to deploy needs MySQL and RabbitMQ, so I created MySQL and RabbitMQ services using the YAML file below (javaweb.yaml):
javaweb.yaml
apiVersion: v1
kind: Service
metadata:
name: mysql
labels:
app: mysql
spec:
type: NodePort
ports:
- protocol: TCP
port: 3306
targetPort: 3306
selector:
app: mysql
---
apiVersion: v1
kind: Service
metadata:
# Expose the management HTTP port on each node
name: rabbitmq-management
labels:
app: rabbitmq
spec:
type: NodePort # Or LoadBalancer in production w/ proper security
ports:
- port: 15672
name: http
selector:
app: rabbitmq
---
apiVersion: v1
kind: Service
metadata:
# The required headless service for StatefulSets
name: rabbitmq
labels:
app: rabbitmq
spec:
ports:
- port: 5672
name: amqp
- port: 4369
name: epmd
- port: 25672
name: rabbitmq-dist
clusterIP: None
selector:
app: rabbitmq
---
apiVersion: v1
kind: Pod
metadata:
name: javaweb
spec:
containers:
- image: chakravarthych/sample:v1
name: war
volumeMounts:
- mountPath: /app
name: app-volume
- image: mysql:5.7
name: mysql
ports:
- protocol: TCP
containerPort: 3306
env:
- name: MYSQL_ROOT_PASSWORD
value: root123
command: ["sh","-c","service mysql start; tail -f /dev/null"]
- image: rabbitmq:3.7-management
name: rabbitmq
ports:
- name: http
protocol: TCP
containerPort: 15672
- name: amqp
protocol: TCP
containerPort: 5672
command: ["sh","-c","service rabbitmq-server start; tail -f /dev/null"]
- image: tomcat:8.5.33
name: tomcat
volumeMounts:
- mountPath: /usr/local/tomcat/webapps
name: app-volume
ports:
- containerPort: 8080
hostPort: 8080
command: ["sh","-c","/usr/local/tomcat/bin/startup.sh; tail -f /dev/null"]
volumes:
- name: app-volume
emptyDir: {}
When I try to access my application at localhost:8080 or <node-ip>:8080, I see a blank page.
command kubectl get all -o wide gave me below output:
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
pod/javaweb 4/4 Running 0 1h 192.168.9.123 kube-node1 <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3d <none>
service/mysql NodePort 10.101.253.11 <none> 3306:30527/TCP 1h app=mysql
service/rabbitmq ClusterIP None <none> 5672/TCP,4369/TCP,25672/TCP 1h app=rabbitmq
service/rabbitmq-management NodePort 10.108.7.162 <none> 15672:30525/TCP 1h app=rabbitmq
which shows that MySQL and RabbitMQ are running.
My question is: how do I check whether my application has access to the MySQL and RabbitMQ services running in Kubernetes?
Note:
I could access rabbitmq at 192.168.9.123:15672 only.
I could also log in to MySQL inside of Docker container.
Did you try a "kubectl log" on the springboot pod? You may have some indication of what is going wrong.