
I am using minikube version v1.0.0. I now need to create an Ingress resource:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
spec:
  backend:
    serviceName: testsvc
    servicePort: 80

and then I run kubectl apply -f ./ingress.yaml

This error occurred:

error: SchemaError(io.k8s.api.core.v1.CinderVolumeSource): invalid object doesn't have additional properties

My kubectl version is:

Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:55:54Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:45:25Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
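
(Note that the client above is v1.10 while the server is v1.14. For reference, one way to upgrade the darwin/amd64 client, assuming it was installed with Homebrew, is:)

# Upgrade the kubectl client; assumes a Homebrew install
# (other install methods have their own upgrade paths)
brew upgrade kubernetes-cli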

After upgrading kubectl to v1.14.0, I can create the Ingress with no problem. But now the issue is that the Ingress is NOT routing to the pod:

This is my ingress.yaml:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
    name: dv
spec:
    rules:
    - host: ui.dv.com
      http:
          paths:
          - path: /
            backend:
                serviceName: ngsc
                servicePort: 3000

This is my service:

apiVersion: v1
kind: Service
metadata:
    name: ngsc
spec:
    type: NodePort
    selector:
        app: ngsc
    ports:
    - port: 3000
      nodePort: 30080
      name: http
      targetPort: 3000
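
(Since the Service is of type NodePort, it can also be tested independently of the Ingress; a minimal check, assuming the minikube IP is reachable from the host:)

# Bypass the Ingress and hit the Service via its NodePort (30080 above)
curl http://$(minikube ip):30080/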

And this is my deployment:

apiVersion: apps/v1beta1
kind: Deployment
metadata:
    name: ngsc
spec:
    replicas: 2
    template:
        metadata:
            name: ngsc
            labels:
                app: ngsc
        spec:
            containers:
            - image: myimage
              name: ngsc
              imagePullPolicy: IfNotPresent

I have already added ui.dv.com to /etc/hosts. After I start everything and run curl http://ui.dv.com, there is no response.
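
(For reference, the /etc/hosts entry should map the Ingress host to the minikube IP; the address below is a placeholder for the output of minikube ip:)

# /etc/hosts: 192.168.99.100 is a placeholder; use the output of `minikube ip`
192.168.99.100   ui.dv.com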

I checked the nginx log:

Error obtaining Endpoints for Service "default/ngsc": no object matching key "default/ngsc" in local store
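
(A quick way to check whether the Service actually has backing Endpoints, which is what this error is about, is to query them directly; an empty address list would mean the Service selector matches no running pods:)

# If this shows no addresses, the ingress controller has nothing to route to
kubectl get endpoints ngsc -n default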

Here are all the pods:

default       api-server-84dd8bcfc8-2hvlh                1/1     Running            26         3h23m
default       api-server-84dd8bcfc8-s697x                1/1     Running            28         3h23m
default       api-server-84dd8bcfc8-vq4vn                1/1     Running            26         3h23m
default       ngsc-559cbf57df-bcjb7                      1/1     Running            3          3h27m
default       ngsc-559cbf57df-j5v68                      1/1     Running            2          3h27m
kube-system   coredns-fb8b8dccf-ghj4l                    1/1     Running            42         36h
kube-system   coredns-fb8b8dccf-rwhw5                    1/1     Running            41         36h
kube-system   default-http-backend-6864bbb7db-p8fld      1/1     Running            47         36h
kube-system   etcd-minikube                              1/1     Running            3          36h
kube-system   kube-addon-manager-minikube                1/1     Running            4          36h
kube-system   kube-apiserver-minikube                    1/1     Running            27         36h
kube-system   kube-controller-manager-minikube           0/1     Error              4          11m
kube-system   kube-proxy-skn58                           1/1     Running            2          12h
kube-system   kube-scheduler-minikube                    0/1     CrashLoopBackOff   40         36h
kube-system   nginx-ingress-controller-f5744c676-j5r25   1/1     Running            47         3h16m
kube-system   storage-provisioner                        1/1     Running            7          36h

Here, the ingress controller is running.
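
(Note that kube-controller-manager and kube-scheduler in the listing above are not healthy; their logs can be pulled with, for example:)

# Inspect the failing control-plane pods seen in the listing above
kubectl logs -n kube-system kube-scheduler-minikube
kubectl logs -n kube-system kube-controller-manager-minikube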

Now, running:

kubectl describe pods -n kube-system nginx-ingress-controller-f5744c676-j5r25

I get this:

Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Warning  Unhealthy  41m (x98 over 3h9m)    kubelet, minikube  Liveness probe failed: Get http://172.17.0.7:10254/healthz: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
  Warning  Unhealthy  25m (x208 over 3h10m)  kubelet, minikube  Readiness probe failed: Get http://172.17.0.7:10254/healthz: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
  Warning  Unhealthy  5m46s (x4 over 12m)    kubelet, minikube  Readiness probe failed: Get http://172.17.0.6:10254/healthz: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
  Warning  BackOff    35s (x448 over 3h3m)   kubelet, minikube  Back-off restarting failed container
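
(To probe the controller's health endpoint manually, one option is port-forwarding; run the curl in a second shell while the forward is active:)

# Forward the controller's health port to localhost...
kubectl port-forward -n kube-system nginx-ingress-controller-f5744c676-j5r25 10254:10254
# ...then, from another shell, probe it with a short timeout
curl -m 5 http://localhost:10254/healthz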

And kubectl describe ingress shows:

Namespace:        default
Address:          
Default backend:  default-http-backend:80 ()
Rules:
  Host  Path  Backends
  ----  ----  --------
  *     
        /ui   ngsc:3000 (172.17.0.10:3000,172.17.0.7:3000)
  *     
        /api   api-server:8083 (172.17.0.5:8083,172.17.0.9:8083)
Annotations:

kubectl get ing
NAME        HOSTS   ADDRESS     PORTS   AGE
datavisor   *       10.0.2.15   80      3h12m

Finally:

curl http://10.0.2.15/ui

it hangs and eventually stops, with no response.

Anything wrong here?

Comment: Probably an apiVersion and syntax mismatch? Check the k8s docs. – Veerendra Kakumanu

2 Answers

1 vote

There are no errors in your manifest; apparently, you are using the wrong kubectl version.

kubectl needs to be within one minor version of the cluster you are using, as described here.

You must use a kubectl version that is within one minor version difference of your cluster. For example, a v1.2 client should work with v1.1, v1.2, and v1.3 master. Using the latest version of kubectl helps avoid unforeseen issues.

You can check your versions with

kubectl version
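
For example (the output below is illustrative; your values will differ):

kubectl version --short
# Client Version: v1.14.0
# Server Version: v1.14.0
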
1 vote
  1. Please check the community comments:
    Kubernetes create deployment unexpected SchemaError
    Could you please verify your kubectl and minikube versions?
    Did you have any errors during installation?
    Could you please check the logs and events?
    Please also try to create another deployment to see whether other errors appear (see the sketch after this list).

  2. For troubleshooting purposes, please use:

    kubectl get pods
    kubectl get events
    kubectl logs <your_pod>

    Please share your findings.
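
As mentioned in point 1, a minimal throwaway Deployment can be used to check whether the cluster accepts new workloads. This is just a sketch; the name and image are arbitrary placeholders:

# test-deployment.yaml: a hypothetical minimal Deployment used only to
# confirm that the cluster can create and schedule new workloads
apiVersion: apps/v1
kind: Deployment
metadata:
  name: schema-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: schema-test
  template:
    metadata:
      labels:
        app: schema-test
    spec:
      containers:
      - name: nginx
        image: nginx:1.15

Apply it with kubectl apply -f test-deployment.yaml and confirm the pod reaches Running with kubectl get pods.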