209
votes

I am trying to deploy nginx on Kubernetes (version v1.5.2). I have deployed nginx with 3 replicas; the YAML file is below:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: deployment-example
spec:
  replicas: 3
  revisionHistoryLimit: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.10
        ports:
        - containerPort: 80

Now I want to expose its port 80 on port 30062 of the node, so I created the service below:

kind: Service
apiVersion: v1
metadata:
  name: nginx-ils-service
spec:
  ports:
    - name: http
      port: 80
      nodePort: 30062
  selector:
    app: nginx
  type: LoadBalancer

This service is working as it should, but the external IP is shown as pending, both on the Kubernetes dashboard and in the terminal (see the attached terminal output and dashboard screenshots).


21 Answers

228
votes

It looks like you are using a custom Kubernetes Cluster (using minikube, kubeadm or the like). In this case, there is no LoadBalancer integrated (unlike AWS or Google Cloud). With this default setup, you can only use NodePort or an Ingress Controller.
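
For example, a NodePort variant of the Service from the question would be a minimal sketch like this (the name, selector, and ports are taken from the question; no cloud load balancer is required):

kind: Service
apiVersion: v1
metadata:
  name: nginx-ils-service
spec:
  type: NodePort            # reachable on every node at <node-ip>:30062
  selector:
    app: nginx
  ports:
    - name: http
      port: 80
      nodePort: 30062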

With the Ingress Controller you can setup a domain name which maps to your pod; you don't need to give your Service the LoadBalancer type if you use an Ingress Controller.
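
A minimal Ingress sketch, assuming an ingress controller is already running in the cluster; the host name is a placeholder, and extensions/v1beta1 matches the old cluster version in the question (newer clusters use networking.k8s.io/v1):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  rules:
  - host: nginx.example.com          # placeholder domain
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx-ils-service
          servicePort: 80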

140
votes

If you are using Minikube, there is a magic command!

$ minikube tunnel

Hopefully someone can save a few minutes with this.

Reference link https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel
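
Roughly, with the tunnel running in a separate terminal, the pending EXTERNAL-IP should get filled in (illustrative output; the addresses are placeholders):

$ minikube tunnel                     # keep this running in its own terminal
$ kubectl get svc nginx-ils-service
NAME                TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
nginx-ils-service   LoadBalancer   10.96.45.10   10.96.45.10   80:30062/TCP   2m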

63
votes

If you are not using GCE or EKS (e.g. you used kubeadm), you can add an externalIPs spec to your Service YAML. You can use the IP associated with your node's primary interface, such as eth0. You can then access the service externally, using the external IP of the node.

...
spec:
  type: LoadBalancer
  externalIPs:
  - 192.168.0.10
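
To find an IP to put into externalIPs, one option (a sketch; the interface name eth0 is an assumption) is:

$ kubectl get nodes -o wide            # INTERNAL-IP column
$ ip addr show eth0                    # or read it from the node's primary interface
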
40
votes

To access a service on minikube, you need to run the following command:

minikube service [-n NAMESPACE] [--url] NAME

More information here: Minikube GitHub
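
Applied to the Service from the question, this would look roughly like this (the printed URL is illustrative):

$ minikube service nginx-ils-service --url
http://192.168.99.100:30062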

38
votes

I created a single-node k8s cluster using kubeadm. When I tried port-forward and kubectl proxy, the external IP showed as pending.

$ kubectl get svc -n argocd argocd-server
NAME            TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
argocd-server   LoadBalancer   10.107.37.153   <pending>     80:30047/TCP,443:31307/TCP   110s

In my case I've patched the service like this:

kubectl patch svc <svc-name> -n <namespace> -p '{"spec": {"type": "LoadBalancer", "externalIPs":["172.31.71.218"]}}'

After this, it started serving over the public IP

$ kubectl get svc argo-ui -n argo
NAME      TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)        AGE
argo-ui   LoadBalancer   10.103.219.8   172.31.71.218   80:30981/TCP   7m50s
14
votes

When using Minikube, you can get the IP and port through which you can access the service by running:

minikube service [service name]

E.g.:

minikube service kubia-http
6
votes

If running on minikube, don't forget to mention the namespace if you are not using default.

minikube service << service_name >> --url --namespace=<< namespace_name >>

6
votes

If you are using minikube, then run the commands below from the terminal:

$ minikube ip
172.17.0.2
$ curl http://172.17.0.2:31245

or simply

$ curl http://$(minikube ip):31245
5
votes

If it is your private k8s cluster, MetalLB would be a better fit. Below are the steps.

Step 1: Install MetalLB in your cluster

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/metallb.yaml
# On first install only
kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"

Step 2: Configure it by using a configmap

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.42.42.100-172.42.42.105 # Update this with your nodes' IP range

Step 3: Create your service to get an external IP (would be a private IP though).
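
For example, a LoadBalancer Service sketch that MetalLB can assign an address to from the pool above (the name and selector are taken from the question):

apiVersion: v1
kind: Service
metadata:
  name: nginx-ils-service
spec:
  type: LoadBalancer        # MetalLB assigns an IP from 172.42.42.100-105
  selector:
    app: nginx
  ports:
    - name: http
      port: 80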

FYR, screenshots of the service before and after the MetalLB installation were attached.

4
votes

Following @Javier's answer, I decided to go with patching the external IP for my load balancer.

 $ kubectl patch service my-loadbalancer-service-name \
-n lb-service-namespace \
-p '{"spec": {"type": "LoadBalancer", "externalIPs":["192.168.39.25"]}}'

This will replace the 'pending' status with a patched-in IP address that you can use for your cluster.

For more on this, please see karthik's post on LoadBalancer support with Minikube for Kubernetes.

Not the cleanest way to do it. I needed a temporary solution. Hope this helps somebody.

3
votes

Adding a solution for those who encountered this error while running on EKS.

First of all run:

kubectl describe svc <service-name>

And then review the events field in the example output below:

Name:                     some-service
Namespace:                default
Labels:                   <none>
Annotations:              kubectl.kubernetes.io/last-applied-configuration:
                            {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"some-service","namespace":"default"},"spec":{"ports":[{"port":80,...
Selector:                 app=some
Type:                     LoadBalancer
IP:                       10.100.91.19
Port:                     <unset>  80/TCP
TargetPort:               5000/TCP
NodePort:                 <unset>  31022/TCP
Endpoints:                <none>
Session Affinity:         None
External Traffic Policy:  Cluster
Events:
  Type     Reason                  Age        From                Message
  ----     ------                  ----       ----                -------
  Normal   EnsuringLoadBalancer    68s  service-controller  Ensuring load balancer
  Warning  SyncLoadBalancerFailed  67s  service-controller  Error syncing load balancer: failed to ensure load balancer: could not find any suitable subnets for creating the ELB

Review the error message:

Failed to ensure load balancer: could not find any suitable subnets for creating the ELB

In my case, the reasons that no suitable subnets were found for creating the ELB were:

1: The EKS cluster was deployed on the wrong subnet group - internal subnets instead of public-facing ones.
(*) By default, Services of type LoadBalancer create public-facing load balancers if no service.beta.kubernetes.io/aws-load-balancer-internal: "true" annotation is provided.

2: The subnets weren't tagged according to the requirements mentioned here.

Tagging VPC with:

Key: kubernetes.io/cluster/yourEKSClusterName
Value: shared

Tagging public subnets with:

Key: kubernetes.io/role/elb
Value: 1
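
A hedged sketch of applying those tags with the AWS CLI (the VPC ID, subnet IDs, and cluster name are placeholders):

aws ec2 create-tags --resources vpc-0abc123 \
  --tags Key=kubernetes.io/cluster/yourEKSClusterName,Value=shared
aws ec2 create-tags --resources subnet-0abc123 subnet-0def456 \
  --tags Key=kubernetes.io/role/elb,Value=1
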
3
votes

Use NodePort:

$ kubectl run user-login --replicas=2 --labels="run=user-login" --image=kingslayerr/teamproject:version2  --port=5000

$ kubectl expose deployment user-login --type=NodePort --name=user-login-service

$ kubectl describe services user-login-service

(Note down the port)

$ kubectl cluster-info

(IP -> get the IP where the master is running)

Your service is then accessible at (IP):(port)

2
votes

The LoadBalancer ServiceType will only work if the underlying infrastructure supports the automatic creation of load balancers and has the respective support in Kubernetes, as is the case with Google Cloud Platform and AWS. If no such feature is configured, the LoadBalancer IP address field is not populated and stays in pending status, and the Service works the same way as a NodePort type Service.
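
In other words, even while EXTERNAL-IP shows <pending>, the Service from the question is still reachable through its node port (illustrative output; the cluster IP and node IP are placeholders):

$ kubectl get svc nginx-ils-service
NAME                TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
nginx-ils-service   LoadBalancer   10.0.171.239   <pending>     80:30062/TCP   5m
$ curl http://<node-ip>:30062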

1
votes

same issue:

os>kubectl get svc right-sabertooth-wordpress

NAME                         TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)
right-sabertooth-wordpress   LoadBalancer   10.97.130.7   <pending>     80:30454/TCP,443:30427/TCP

os>minikube service list

|-------------|----------------------------|-----------------------------|
|  NAMESPACE  |            NAME            |             URL             |
|-------------|----------------------------|-----------------------------|
| default     | kubernetes                 | No node port                |
| default     | right-sabertooth-mariadb   | No node port                |
| default     | right-sabertooth-wordpress | http://192.168.99.100:30454 |
|             |                            | http://192.168.99.100:30427 |
| kube-system | kube-dns                   | No node port                |
| kube-system | tiller-deploy              | No node port                |
|-------------|----------------------------|-----------------------------|

It is, however, accessible via http://192.168.99.100:30454.

1
votes

You can patch in the IP of the node where the pods are hosted (the private IP of the node); this is an easy workaround.

Taking reference from the above posts, the following worked for me:

kubectl patch service my-loadbalancer-service-name \
  -n lb-service-namespace \
  -p '{"spec": {"type": "LoadBalancer", "externalIPs":["<private IP of the node where the deployment runs>"]}}'

1
votes

In case someone is using MicroK8s: You need a network load balancer.

MicroK8s comes with MetalLB; you can enable it like this:

microk8s enable metallb

<pending> should turn into an actual IP address then.
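
The enable command prompts for an address range; it can also be passed inline (a sketch, the range is a placeholder that must match your local network):

microk8s enable metallb:192.168.1.240-192.168.1.250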

0
votes

Check the kube-controller-manager logs. I was able to solve this issue by setting the clusterID tags on the EC2 instance I deployed the cluster on.

0
votes

If you are not on a supported cloud (AWS, Azure, GCP, etc.), you can't use LoadBalancer without MetalLB (https://metallb.universe.tf/), but it is still in beta.

0
votes

There are three types for exposing your service: NodePort, ClusterIP, and LoadBalancer.

When we use a LoadBalancer, we basically ask our cloud provider to give us a DNS name which can be accessed online (note: not a domain name, but a DNS record).

So the LoadBalancer type does not work in our local minikube environment.

0
votes

Maybe the subnet in which you are deploying your service does not have enough IPs.

-1
votes

Deleting the existing service and creating the same service again solved my problem. My problem was that the load-balancing IP I defined was already in use, so the external endpoint stayed pending. When I changed to a new load-balancing IP, it still didn't work.

Finally, deleting the existing service and creating a new one solved my problem.