
I am using the NVIDIA Clara Deploy SDK, which runs on a Kubernetes cluster. It provides an end-to-end (E2E) pipeline for medical image analysis (from acquisition to analysis/segmentation). However, the E2E flow doesn't work for me, because the output from one of the containers within the pod is empty. Though I am able to get the logs of the main containers, I am not sure how to get the logs of containers that are running within a specific container.

Through online research, I found and executed the command below, which lists all the images:

sudo kubectl get pods --all-namespaces -o jsonpath="{..image}" |tr -s '[[:space:]]' '\n' |sort |uniq -c


Later, I executed this command, which lists the containers within the pod:

sudo kubectl describe pod clara-clara-platform-7bb6f9f5c6-pdzgd

This shows 5 containers in the pod:

1) inference-server
2) dicom-server
3) render-server
4) clara-core
5) clara-dashboard
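
For reference, the container names alone can also be pulled out with a jsonpath query (a sketch using standard kubectl syntax; the pod name is the one from above):

sudo kubectl get pod clara-clara-platform-7bb6f9f5c6-pdzgd -o jsonpath='{.spec.containers[*].name}'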

But Clara has containers within containers, or at least that is how I have understood it; I am not sure whether I am right.

Sharing the documentation below for your reference. I guess all the above container images are part of the main "clara-core" container. How do I get the status of the sub-containers within a main container?


When I try to get the logs of the above containers, I don't see any information about what happened when the AI container (applastchannel) was executed.

Please note that I would like to get the status of the AI container, which is "applastchannel" in my case.

Here is the pod YAML:

apiVersion: v1
items:
- apiVersion: v1
  kind: Pod
  metadata:
    creationTimestamp: null
    generateName: clara-clara-platform-7bb6f9f5c6-
    labels:
      app.kubernetes.io/instance: clara
      app.kubernetes.io/name: clara-platform
      pod-template-hash: 7bb6f9f5c6
    ownerReferences:
    - apiVersion: apps/v1
      blockOwnerDeletion: true
      controller: true
      kind: ReplicaSet
      name: clara-clara-platform-7bb6f9f5c6
      uid: d0f0dc14-8b7e-45e3-8528-0879c7ce9330
    selfLink: /api/v1/namespaces/default/pods/clara-clara-platform-7bb6f9f5c6-pdzgd
  spec:
    containers:
    - args:
      - --model-store=/models
      command:
      - trtserver
      image: clara/trtis:0.1.8
      imagePullPolicy: IfNotPresent
      livenessProbe:
        failureThreshold: 3
        httpGet:
          path: /api/health/live
          port: 8000
          scheme: HTTP
        initialDelaySeconds: 5
        periodSeconds: 5
        successThreshold: 1
        timeoutSeconds: 1
      name: inference-server
      ports:
      - containerPort: 8000
        protocol: TCP
      - containerPort: 8001
        protocol: TCP
      - containerPort: 8002
        protocol: TCP
      readinessProbe:
        failureThreshold: 3
        httpGet:
          path: /api/health/ready
          port: 8000
          scheme: HTTP
        initialDelaySeconds: 5
        periodSeconds: 5
        successThreshold: 1
        timeoutSeconds: 1
      resources: {}
      securityContext:
        runAsUser: 1000
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      volumeMounts:
      - mountPath: /models
        name: pv-clara-volume
        subPath: models
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: clara-service-account-token-c62fp
        readOnly: true
    - image: clara/dicomserver:0.1.8
      imagePullPolicy: Never
      name: dicom-server
      ports:
      - containerPort: 104
        hostPort: 104
        name: dicom-port
        protocol: TCP
      resources: {}
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      volumeMounts:
      - mountPath: /payloads
        name: pv-clara-volume
        subPath: clara-core/payloads
      - mountPath: /app/app.yaml
        name: dicom-server-config
        subPath: app.yaml
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: clara-service-account-token-c62fp
        readOnly: true
    - image: clara/core:0.1.8
      imagePullPolicy: Never
      name: clara-core
      ports:
      - containerPort: 50051
        protocol: TCP
      resources: {}
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      volumeMounts:
      - mountPath: /app/Jobs
        name: pv-clara-volume
        subPath: clara-core/payloads
      - mountPath: /app/Workflows
        name: pv-clara-volume
        subPath: clara-core/workflows
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: clara-service-account-token-c62fp
        readOnly: true
    - image: clara/clara-dashboard:0.1.8
      imagePullPolicy: Never
      name: clara-dashboard
      ports:
      - containerPort: 8080
        hostPort: 8080
        name: dashboard-port
        protocol: TCP
      resources: {}
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      volumeMounts:
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: clara-service-account-token-c62fp
        readOnly: true
    - image: clara/renderserver:0.1.8
      imagePullPolicy: Never
      name: render-server
      ports:
      - containerPort: 2050
        hostPort: 2050
        name: render-port
        protocol: TCP
      resources: {}
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      volumeMounts:
      - mountPath: /app/datasets
        name: pv-clara-volume
        subPath: datasets
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: clara-service-account-token-c62fp
        readOnly: true
    dnsPolicy: ClusterFirst
    enableServiceLinks: true
    imagePullSecrets:
    - name: nvcr.io
    nodeName: whiskey
    priority: 0
    restartPolicy: Always
    schedulerName: default-scheduler
    securityContext: {}
    serviceAccount: clara-service-account
    serviceAccountName: clara-service-account
    terminationGracePeriodSeconds: 30
    tolerations:
    - effect: NoExecute
      key: node.kubernetes.io/not-ready
      operator: Exists
      tolerationSeconds: 300
    - effect: NoExecute
      key: node.kubernetes.io/unreachable
      operator: Exists
      tolerationSeconds: 300
    volumes:
    - name: pv-clara-volume
      persistentVolumeClaim:
        claimName: pv-clara-volume-claim
    - configMap:
        defaultMode: 420
        items:
        - key: app.Release.yaml
          path: app.yaml
        name: clara-configmap
      name: dicom-server-config
    - name: clara-service-account-token-c62fp
      secret:
        defaultMode: 420
        secretName: clara-service-account-token-c62fp
  status:
    phase: Pending
    qosClass: BestEffort
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
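
The status block only shows phase: Pending. If it helps, the per-container states can also be read directly (a sketch using a standard kubectl jsonpath range over .status.containerStatuses; it may be empty while the pod is still Pending):

sudo kubectl get pod clara-clara-platform-7bb6f9f5c6-pdzgd -o jsonpath='{range .status.containerStatuses[*]}{.name}{"\t"}{.state}{"\n"}{end}'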

Can you help me achieve this?

Comments:

can you simply share kubernetes deployment manifest? – 4c74356b41

@4c74356b41 - Hi, I updated the post. – The Great

I am marking the answer below as the solution because the discussion helped me figure out the answer, and the commands also helped. – The Great

1 Answer


Looking at the YAML, I see only these containers in the pod:

Image:                          Name:
clara/core:0.1.8                clara-core
clara/clara-dashboard:0.1.8     clara-dashboard
clara/renderserver:0.1.8        render-server
clara/trtis:0.1.8               inference-server
clara/dicomserver:0.1.8         dicom-server

I'm not sure which one is the one you need (nothing seems to be called "AI"), but either way you can check the logs of any of them with:

kubectl logs clara-clara-platform-7bb6f9f5c6-pdzgd %container_name%

so if inference-server is the one you are interested in:

kubectl logs clara-clara-platform-7bb6f9f5c6-pdzgd inference-server
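
The same thing with the explicit container flag, plus two variants that are sometimes useful here (standard kubectl logs flags). Note that nothing in this pod is named applastchannel, so if Clara Core launches the pipeline/AI stages as separate pods, which is my assumption, you would first have to find that pod and then read its logs the same way:

kubectl logs clara-clara-platform-7bb6f9f5c6-pdzgd -c inference-server              # same command, explicit -c flag
kubectl logs clara-clara-platform-7bb6f9f5c6-pdzgd -c inference-server --previous   # logs from the previous restart, if any
kubectl logs clara-clara-platform-7bb6f9f5c6-pdzgd --all-containers=true            # logs from every container in the pod

# assumption: pipeline stages such as applastchannel run as their own pods,
# so locate that pod first and read its logs the same way
kubectl get pods --all-namespaces | grep -i applastchannel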