I have a Spring Boot app running with Spring Actuator enabled. I am using the Actuator health endpoint as the readiness and liveness probe. Everything works fine with a single replica, but when I scale out to 2 replicas both pods crash: they fail their readiness checks and end up in an endless destroy/re-create loop. If I scale back down to 1 replica the cluster recovers and the Spring Boot app becomes available again. Any ideas what might be causing this?
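For reference, the Actuator side is essentially the defaults; the relevant configuration is roughly the following sketch (property values are illustrative and may differ slightly from the real app):

# application.properties (sketch; actual values may differ)
server.servlet.context-path=/dept
management.endpoints.web.exposure.include=health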
Here is the deployment config (the context root of the Spring Boot app is /dept):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gl-dept-deployment
  labels:
    app: gl-dept
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  selector:
    matchLabels:
      app: gl-dept
  template:
    metadata:
      labels:
        app: gl-dept
    spec:
      containers:
        - name: gl-dept
          image: zmad5306/gl-dept:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
          livenessProbe:
            httpGet:
              path: /dept/actuator/health
              port: 8080
            initialDelaySeconds: 15
            periodSeconds: 10
            timeoutSeconds: 10
            successThreshold: 1
            failureThreshold: 5
          readinessProbe:
            httpGet:
              path: /dept/actuator/health
              port: 8080
            initialDelaySeconds: 15
            periodSeconds: 10
            timeoutSeconds: 10
            successThreshold: 1
            failureThreshold: 5
Comments:

Curl the /dept/actuator/health endpoint (you should be able to via kubectl exec, since the pods live at least 50 seconds): what is the error text that accompanies the non-200 response? – mdaniel
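For example, something along these lines (the pod name is a placeholder, and this assumes curl is available inside the image):

# Pick one of the failing pods and hit the health endpoint from inside it
kubectl get pods -l app=gl-dept
kubectl exec -it <gl-dept-pod-name> -- curl -v http://localhost:8080/dept/actuator/health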
Adding resources: limits: memory: in a PodSpec's containers: is a great, great idea -- not just for minikube, but for the cluster, too, since Kubernetes cannot make intelligent scheduling decisions without knowing how big each moving part is – mdaniel
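For example, a sketch of what that could look like in the container spec (the request and limit values below are placeholders that would need to be tuned to the app's actual footprint):

containers:
  - name: gl-dept
    image: zmad5306/gl-dept:latest
    resources:
      requests:
        memory: "512Mi"   # placeholder; size to the JVM's real usage
        cpu: "250m"
      limits:
        memory: "1Gi"     # placeholder; too-low limits will get the pod OOM-killed
        cpu: "500m"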