Consul on Kubernetes error messages
This topic provides information about potential error messages associated with Consul on Kubernetes. If you receive an error message that does not appear in this section, refer to the following resources:
- Consul error messages
- API Gateway error messages
- Consul-Terraform-Sync error messages
- Consul Discuss forum
Unable to connect to the Consul client on the same host
If the pods are unable to connect to a Consul client running on the same host, first check if the Consul clients are up and running with `kubectl get pods`.
$ kubectl get pods --selector="component=client"
NAME           READY   STATUS    RESTARTS   AGE
consul-kzws6   1/1     Running   0          58s
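To confirm that a client pod is scheduled on the same node as your workload, you can also pass `-o wide`, which adds the node name to the output. The IP and node name below are illustrative and the output is abridged:

$ kubectl get pods --selector="component=client" -o wide
NAME           READY   STATUS    RESTARTS   AGE   IP          NODE
consul-kzws6   1/1     Running   0          58s   10.32.0.5   node-1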
If you are still unable to connect and see `i/o timeout` or `connection refused` errors when connecting to the Consul client on the Kubernetes worker, this could be because the container networking interface (CNI) does not support the use of `hostPort`.
The IP `10.0.0.10` in the following example error messages refers to the IP of the host where the Consul client pods are running.
Put http://10.0.0.10:8500/v1/catalog/register: dial tcp 10.0.0.10:8500: connect: connection refused
Put http://10.0.0.10:8500/v1/agent/service/register: dial tcp 10.0.0.10:8500: connect: connection refused
Get http://10.0.0.10:8500/v1/status/leader: dial tcp 10.0.0.10:8500: i/o timeout
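To confirm the issue, you can try reaching the client's HTTP port on the host IP from a temporary pod, ideally one scheduled on the same node. This is a minimal sketch in which `10.0.0.10` again stands in for your host's IP; if the CNI does not support `hostPort`, the request times out or is refused:

$ kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- \
    curl --max-time 5 http://10.0.0.10:8500/v1/status/leader
curl: (28) Operation timed out after 5001 milliseconds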
To work around this issue, enable `hostNetwork` in your Helm values. Using the host network allows the pod to use the host's network namespace without requiring the CNI to support port mappings between containers and the host.
client:
  hostNetwork: true
  dnsPolicy: ClusterFirstWithHostNet
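After updating the values, apply them with `helm upgrade`. The release name `consul`, the `consul` namespace, and the `values.yaml` file below are assumptions; substitute your own:

$ helm upgrade consul hashicorp/consul --namespace consul --values values.yaml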
Note
Using the host network has security implications because it gives the Consul client unnecessary access to all network traffic on the host. We recommend raising an issue with the CNI you're using to add support for `hostPort` and switching back to `hostPort` eventually.
ACL auth method login failed
If you see the following error in the init container logs of service mesh pods, check that the pod has a service account name that matches its Kubernetes Service.
consul-server-connection-manager: ACL auth method login failed: error="rpc error: code = PermissionDenied desc = Permission denied"
For example, this deployment will fail because the `serviceAccountName` is `does-not-match` instead of `static-server`.
apiVersion: v1
kind: Service
metadata:
  # This name will be the service name in Consul.
  name: static-server
spec:
  selector:
    app: static-server
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: static-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: static-server
  template:
    metadata:
      name: static-server
      labels:
        app: static-server
      annotations:
        'consul.hashicorp.com/connect-inject': 'true'
    spec:
      containers:
        - name: static-server
          image: hashicorp/http-echo:latest
          args:
            - -text="hello world"
            - -listen=:8080
          ports:
            - containerPort: 8080
              name: http
      serviceAccountName: does-not-match
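To fix the deployment, set `serviceAccountName` to `static-server` so that it matches the Service name. One way to spot the mismatch without redeploying is to read the field directly; this sketch assumes the Deployment shown above:

$ kubectl get deployment static-server --output jsonpath='{.spec.template.spec.serviceAccountName}'
does-not-match

If the output does not match the name of the Kubernetes Service, update the Deployment and reapply it.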
Unbound PersistentVolumeClaims
If your Consul server pods are stuck in the `Pending` state, check if the PersistentVolumeClaims (PVCs) are bound to PersistentVolumes (PVs). If they are not bound, you will see an error similar to the following:
$ kubectl describe pods --namespace consul consul-server-0
Name:         consul-server-0
Namespace:    consul
##...
Status:       Pending
##...
Events:
  Type     Reason            Age                  From               Message
  ----     ------            ----                 ----               -------
  Warning  FailedScheduling  3m29s (x3 over 13m)  default-scheduler  0/3 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.
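You can also inspect the claims directly; an unbound claim reports a `Pending` status. The PVC name below is illustrative, since it depends on your release name and namespace, and the output is abridged:

$ kubectl get pvc --namespace consul
NAME                          STATUS    VOLUME   CAPACITY   STORAGECLASS   AGE
data-consul-consul-server-0   Pending                                      13m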
There are two ways to resolve this issue. The fastest and simplest option is to use an up-to-date version of the Helm chart or `consul-k8s` tool to deploy Consul. The `consul-k8s` tool automatically creates the required PVs for you.
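For example, the `consul-k8s` CLI installs Consul on Kubernetes, along with the PVs it requires, in a single command. This sketch assumes you have already installed the CLI itself:

$ consul-k8s install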
If you cannot use a newer version of the Helm chart or `consul-k8s` tool, you can manually create the `StorageClass` object that governs the creation of PVs, and then specify it in the Consul Helm chart. For example, you can use the following YAML to create a `StorageClass` called `ebs-sc` for AWS EBS volumes:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-sc
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
parameters:
  csi.storage.k8s.io/fstype: xfs
  type: io1
  iopsPerGB: "50"
  encrypted: "true"
Finally, specify the `StorageClass` in the Consul Helm chart values and redeploy Consul to Kubernetes.
##...
server:
  storageClass: "ebs-sc"
##...
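For example, the following redeploys Consul with the updated values. This sketch assumes a release named `consul` in the `consul` namespace and your overrides in `values.yaml`:

$ helm upgrade --install consul hashicorp/consul --namespace consul --values values.yaml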