
I have deployed my web service (Tomcat) on OpenShift, and when I request my services it sometimes works and sometimes doesn't. It was working perfectly before. The number of pods is 1, and there are no logs for the failures.

The error is:

Application is not available The application is currently not serving requests at this endpoint. It may not have been started or is still starting.

Possible reasons you are seeing this page: The host doesn't exist. Make sure the hostname was typed correctly and that a route matching this hostname exists.

The host exists, but doesn't have a matching path. Check if the URL path was typed correctly and that the route was created using the desired path.

Route and path matches, but all pods are down. Make sure that the resources exposed by this route (pods, services, deployment configs, etc) have at least one pod running.

Output of oc describe routes:
Name:           mysample
Namespace:      enzen
Created:        12 days ago
Labels:         app=mysample
Annotations:        openshift.io/host.generated=true
Requested Host: mysample-enzen.193b.starter-ca-central-1.openshiftapps.com
              exposed on router router (host elb.193b.starter-ca-central-1.openshiftapps.com) 12 days ago
Path:           <none>
TLS Termination:    <none>
Insecure Policy:    <none>
Endpoint Port:      8080-tcp
Service:    mysample
Weight:     100 (100%)
Endpoints:  10.128.18.210:8080

Output of oc describe services:
Name:              mysample
Namespace:         enzen
Labels:            app=mysample
Annotations:       openshift.io/generated-by=OpenShiftNewApp
Selector:          app=mysample,deploymentconfig=mysample
Type:              ClusterIP
IP:                172.30.145.245
Port:              8080-tcp  8080/TCP
TargetPort:        8080/TCP
Endpoints:         10.128.18.210:8080
Session Affinity:  None
Events:            <none>
If this works intermittently, it may be an issue with the infrastructure rather than with your code or definitions. I would encourage you to try curling your endpoint from your machine, and then curling localhost (127.0.0.1) from the pod terminal. If localhost always succeeds while your machine's curl still fails intermittently, then it's probably an issue with the router. For infrastructure issues, like a bad router, you can reach out to the OpenShift Online community team at help.openshift.com. – Will Gordon
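The comparison suggested in the comment can be sketched as a small script: hit the same endpoint repeatedly and count failures, once from your machine (through the router) and once from inside the pod (bypassing the router). This is only an illustrative sketch; the hostname is the one from the question, so substitute your own route host.

```shell
# Repeatedly curl a URL and report how many requests failed, to tell an
# intermittent router problem apart from a dead application.
probe() {
  url="$1"; tries="$2"; fails=0; i=1
  while [ "$i" -le "$tries" ]; do
    # -w '%{http_code}' prints the status code; -m 5 caps each try at 5s.
    code=$(curl -s -o /dev/null -m 5 -w '%{http_code}' "$url" 2>/dev/null) || code=000
    [ "$code" = "200" ] || fails=$((fails + 1))
    echo "try $i: HTTP $code"
    i=$((i + 1))
  done
  echo "failures: $fails/$tries"
}

# From your machine, through the router:
#   probe "http://mysample-enzen.193b.starter-ca-central-1.openshiftapps.com/" 10
# From the pod terminal (oc rsh <pod-name>), bypassing the router:
#   probe "http://127.0.0.1:8080/" 10
```

If the in-pod run shows 0 failures while the external run fails intermittently, the application is healthy and the router (or route/service wiring) is the suspect.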

1 Answer


The initial thought is that the route is trying to spread load across multiple services (https://docs.openshift.com/container-platform/3.9/architecture/networking/routes.html#alternateBackends) and one of those services is down or unavailable. Typically in this case I would recreate the service and the route to verify that they're configured as expected. Perhaps you can share the configuration of the route, the service, and the pod?

oc describe routes
oc describe services
oc describe pods
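For readers unfamiliar with weighted routes, a route that spreads load across services looks roughly like the sketch below (mysample-canary is a hypothetical second service used only for illustration, not one from the question). A route whose `oc describe` shows a single service at weight 100 (100%) has no alternate backends, which rules this theory out.

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: mysample
spec:
  host: mysample-enzen.193b.starter-ca-central-1.openshiftapps.com
  to:
    kind: Service
    name: mysample
    weight: 80            # primary backend gets 80% of traffic
  alternateBackends:      # extra backends that also receive traffic
  - kind: Service
    name: mysample-canary # hypothetical second service
    weight: 20
```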

#### EDIT 10-22-18 ####

Adding the output from the Google doc, with the build pods redacted (as they are not relevant), for the benefit of additional readers. Nothing immediately jumps out as an app/config issue:

oc describe routes
Name:           mysample
Namespace:      enzen
Created:        12 days ago
Labels:         app=mysample
Annotations:        openshift.io/host.generated=true
Requested Host:     mysample-enzen.193b.starter-ca-central-1.openshiftapps.com
              exposed on router router (host elb.193b.starter-ca-central-1.openshiftapps.com) 12 days ago
Path:           <none>
TLS Termination:    <none>
Insecure Policy:    <none>
Endpoint Port:      8080-tcp

Service:    mysample
Weight:     100 (100%)
Endpoints:  10.128.18.210:8080


 oc describe services
Name:              mysample
Namespace:         enzen
Labels:            app=mysample
Annotations:       openshift.io/generated-by=OpenShiftNewApp
Selector:          app=mysample,deploymentconfig=mysample
Type:              ClusterIP
IP:                172.30.145.245
Port:              8080-tcp  8080/TCP
TargetPort:        8080/TCP
Endpoints:         10.128.18.210:8080
Session Affinity:  None
Events:            <none>




oc describe pods
Name:               mysample-15-z85zt
Namespace:          enzen
Priority:           0
PriorityClassName:  <none>
Node:               ip-172-31-29-189.ca-central-1.compute.internal/172.31.29.189
Start Time:         Sun, 21 Oct 2018 20:55:36 +0530
Labels:             app=mysample
                    deployment=mysample-15
                    deploymentconfig=mysample
Annotations:        kubernetes.io/limit-ranger=LimitRanger plugin set: cpu, memory request for container mysample; cpu, memory limit for container mysample
                    openshift.io/deployment-config.latest-version=15
                    openshift.io/deployment-config.name=mysample
                    openshift.io/deployment.name=mysample-15
                    openshift.io/generated-by=OpenShiftNewApp
                    openshift.io/scc=restricted
Status:             Running
IP:                 10.128.18.210
Controlled By:      ReplicationController/mysample-15
Containers:
  mysample:
    Container ID:   cri-o://0cd20854571232b310ce22a282c8d5832908533d28d5d720537bbf3618b86c44
    Image:          docker-registry.default.svc:5000/enzen/mysample@sha256:adadeb7decf82b29699861171c58d7ae5f87ca6eeb1c10e5a1d525e4a0888ebc
    Image ID:       docker-registry.default.svc:5000/enzen/mysample@sha256:adadeb7decf82b29699861171c58d7ae5f87ca6eeb1c10e5a1d525e4a0888ebc
    Port:           8080/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Sun, 21 Oct 2018 20:55:40 +0530
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     1
      memory:  512Mi
    Requests:
      cpu:        20m
      memory:     256Mi
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-8xjb8 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  default-token-8xjb8:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-8xjb8
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  type=compute
Tolerations:     node.kubernetes.io/memory-pressure:NoSchedule
Events:          <none>