I am trying to get a service (the MetaMap Tagger service, medpost-skr) running in minikube.
I have the following Dockerfile, which runs fine on its own:
FROM openjdk:8-jre
RUN mkdir /usr/share/public_mm
ADD public_mm/MedPost-SKR /usr/share/public_mm/MedPost-SKR
ADD public_mm/bin /usr/share/public_mm/bin
ADD required/scripts /usr/share/public_mm/scripts
ENV PATH=/usr/share/public_mm/bin:$PATH
WORKDIR /usr/share/public_mm
# --- BUILD ---
RUN /usr/share/public_mm/bin/install.sh
# --- START ---
CMD skrmedpostctl start
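For reference, the standalone smoke test looks roughly like this (the local tag medpost-skr is my own placeholder, not something defined above):

# Build the image from the Dockerfile above
docker build -t medpost-skr .
# Run it and expose the tagger port; note that with this CMD the
# container only lives as long as the skrmedpostctl command itself
docker run --rm -p 1795:1795 medpost-skr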
And I have the following service/deployment spec (as an Argo workflow template) to start up the service:
apiVersion: argoproj.io/v1alpha1
kind: Workflow  # new type of k8s spec
metadata:
  generateName: nlp-adapt-wf-  # name of workflow spec
spec:
  entrypoint: nlp-adapt-metamap-services  # invoke the build template
  templates:
  - name: nlp-adapt-metamap-services
    steps:
    - - name: medpost-deployment
        template: medpost-server-d
    - - name: medpost-service
        template: medpost-server-s
  - name: medpost-server-d
    resource:
      action: create
      manifest: |
        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: medpost
        spec:
          selector:
            matchLabels:
              app: medpost
              track: stable
          template:
            metadata:
              labels:
                app: medpost
                track: stable
            spec:
              containers:
              - image: ahc-nlpie-docker.artifactory.umn.edu/medpost-skr
                imagePullPolicy: Never
                name: medpost
                ports:
                - containerPort: 1795
  - name: medpost-server-s
    resource:
      action: create
      manifest: |
        apiVersion: v1
        kind: Service
        metadata:
          name: medpost
          namespace: default
          labels:
            app: medpost
        spec:
          selector:
            app: medpost
          ports:
          - name: test1
            protocol: TCP
            port: 1795
            targetPort: 1795
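The workflow is submitted with the argo CLI; a sketch, assuming the template above is saved as nlp-adapt-wf.yaml (the file name is my placeholder):

# Submit the workflow and watch its steps run
argo submit nlp-adapt-wf.yaml --watch
# The resource templates create plain k8s objects, so they can be
# inspected with kubectl afterwards
kubectl get deploy,svc medpost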
When I look at the services, I see:
(base) D20181472:medpost-skr gms$ kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP    60d
medpost      ClusterIP   10.100.189.245   <none>        1795/TCP   26m
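Since the pod never becomes ready, the Service presumably has no endpoints to route to; a diagnostic worth running (a suggestion, not output I have):

# A Service only forwards to ready pods, so with the pod crash-looping
# the endpoint list for medpost should be empty
kubectl get endpoints medpost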
and when I look at the deployment, it appears the deployment never became available:
(base) D20181472:medpost-skr gms$ kubectl get deploy
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
medpost   0/1     1            0           27m
However, when I look at the pod's logs, it appears the service started, yet the pod is in a CrashLoopBackOff state:
(base) D20181472:medpost-skr gms$ kubectl get pods
NAME                      READY   STATUS             RESTARTS   AGE
medpost-859844ddf-sw888   0/1     CrashLoopBackOff   10         28m
(base) D20181472:medpost-skr gms$ kubectl logs medpost-859844ddf-sw888
$Starting skrmedpostctl:
started.
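Because the pod keeps restarting, the logs of the previous (terminated) container instance can be pulled as well (same pod name as above):

# Fetch logs from the last terminated container, which is usually the
# more informative view in a CrashLoopBackOff
kubectl logs medpost-859844ddf-sw888 --previous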
Here is more info about the pod:
kubectl describe pod medpost-859844ddf-sw888
Name:               medpost-859844ddf-sw888
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               minikube/10.0.2.15
Start Time:         Tue, 02 Apr 2019 14:14:57 -0500
Labels:             app=medpost
                    pod-template-hash=859844ddf
                    track=stable
Annotations:        <none>
Status:             Running
IP:                 172.17.0.7
Controlled By:      ReplicaSet/medpost-859844ddf
Containers:
  medpost:
    Container ID:   docker://208de2cc7c5007b53b63f2a063db13a8df5011488ab0be26726c12de54735043
    Image:          ahc-nlpie-docker.artifactory.umn.edu/medpost-skr
    Image ID:       docker://sha256:89ea83d80cd906319afbdd37805228440048aded93156c0fc17f900d1aa6228b
    Port:           1795/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Tue, 02 Apr 2019 14:41:17 -0500
      Finished:     Tue, 02 Apr 2019 14:41:17 -0500
    Ready:          False
    Restart Count:  10
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-mkpqk (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  default-token-mkpqk:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-mkpqk
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  30m                   default-scheduler  Successfully assigned default/medpost-859844ddf-sw888 to minikube
  Normal   Pulled     28m (x5 over 30m)     kubelet, minikube  Container image "ahc-nlpie-docker.artifactory.umn.edu/medpost-skr" already present on machine
  Normal   Created    28m (x5 over 30m)     kubelet, minikube  Created container
  Normal   Started    28m (x5 over 30m)     kubelet, minikube  Started container
  Warning  BackOff    5m6s (x117 over 30m)  kubelet, minikube  Back-off restarting failed container
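The last terminated state shown above (Reason: Completed, Exit Code: 0) can also be extracted directly; a sketch using the same pod name:

# Print why the container last terminated; an exit code of 0 means the
# main process returned normally instead of crashing
kubectl get pod medpost-859844ddf-sw888 \
  -o jsonpath='{.status.containerStatuses[0].lastState.terminated}'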
Edit: here is the deployment:
(base) D20181472:wsd_server gms$ kubectl describe deploy medpost
Name:                   medpost
Namespace:              default
CreationTimestamp:      Tue, 02 Apr 2019 18:03:13 -0500
Labels:                 <none>
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               app=medpost,track=stable
Replicas:               1 desired | 1 updated | 1 total | 0 available | 1 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=medpost
           track=stable
  Containers:
   medpost:
    Image:        ahc-nlpie-docker.artifactory.umn.edu/medpost-skr
    Port:         1795/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      False   MinimumReplicasUnavailable
  Progressing    True    ReplicaSetUpdated
OldReplicaSets:  <none>
NewReplicaSet:   medpost-859844ddf (1/1 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  34s   deployment-controller  Scaled up replica set medpost-859844ddf to 1
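The Available=False / MinimumReplicasUnavailable condition matches the crash-looping pod; the rollout can also be watched directly (a diagnostic, not output I have):

# This will keep waiting as long as 0 of 1 replicas are available
kubectl rollout status deployment/medpost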
Not sure what to make of this. Note that the log output above ($Starting skrmedpostctl: started.) is the desired response from the service.
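One hypothesis worth testing (an assumption on my part, suggested by the clean exit code 0 above): if skrmedpostctl start forks the tagger into the background and then returns, the container's main process exits immediately and Kubernetes restarts it. Overriding the command to keep a foreground process alive would confirm this; the sh -c wrapper and tail -f /dev/null below are illustrative only:

# Override the CMD so something stays in the foreground after startup;
# if the container now stays up, the original CMD was exiting cleanly
docker run --rm ahc-nlpie-docker.artifactory.umn.edu/medpost-skr \
  sh -c 'skrmedpostctl start && tail -f /dev/null'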