1 vote

I am new to Docker and Kubernetes, though I have mostly figured out how it all works at this point.

I inherited an app that uses both, as well as KOPS.

One of the last things I am having trouble with is the KOPS setup. I know for certain that Kubernetes is set up via KOPS: there are two KOPS state stores in an S3 bucket (corresponding to a dev and a prod cluster, respectively).

However, while I can find the server that kubectl/Kubernetes is running on, absolutely none of the servers I have access to seem to have a kops command.

Am I misunderstanding how KOPS works? Does it not do any sort of dynamic monitoring (wouldn't that just be handled by a ReplicaSet on its own?), but rather just set up a cluster and then it's done?

I can include my cluster.spec or config files, if they're helpful to anyone, but I can't really see how they're super relevant to this question.

I guess I'm just confused: as far as I can tell, KOPS is run once, sets up a cluster, and is done. But then whenever one of my node or master servers goes down, it self-heals. I would expect that of the node servers, but not the master servers.

This is all on AWS.

Sorry if this is a dumb question, I am just having trouble conceptually understanding what is going on here.

1
kops is a command line tool, you run it from your own machine (or a jumpbox) and it creates clusters for you; it's not a long-running server itself. It's like Terraform if you're familiar with that, tailored specifically to spinning up k8s clusters. – Amit Kumar Gupta
Kops creates resources on AWS via autoscaling groups. It's this construct (which is an AWS thing) that ensures your nodes come back to the desired number. – Amit Kumar Gupta
Ok, so it basically is run once and it creates the desired group via AWS autoscaling/launch configurations. Got it. I want to use a new Docker image - how would I "deploy" that to the autoscaling group? Run kops again and recreate the whole cluster? Update it in kubectl individually per node? – Steven Matthews
Kops is used for managing k8s clusters themselves, like creating them, scaling, updating, deleting. kubectl is used for managing container workloads that run on k8s. You can create, scale, update, and delete your replica sets with that. How you run workloads on k8s should have nothing to do with how/what tool you (or some cluster admin) use to manage the k8s cluster itself. That is, unless you're trying to change the "system components" of k8s, like the Kubernetes API or kubedns, which are cluster-admin-level concerns but happen to run on top of k8s as container workloads (kinda meta). – Amit Kumar Gupta
Ok, that makes sense. I used Kops to replicate a dev version of the cluster, and it worked. I'll use the deployment and replica yaml from kubectl to recreate it on the new cluster. Still not totally sure how it spins up new pods when one of the nodes goes down, though. – Steven Matthews

1 Answer

1 vote

kops is a command line tool: you run it from your own machine (or a jumpbox) and it creates clusters for you; it's not a long-running server itself. It's like Terraform if you're familiar with that, but tailored specifically to spinning up Kubernetes clusters.
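For concreteness, day-to-day kops usage from a workstation or jumpbox looks roughly like this (the state-store bucket and cluster name below are placeholders, not your actual values):

    # kops only needs to know where its state store lives; it is not a daemon
    export KOPS_STATE_STORE=s3://example-kops-state-store
    kops get clusters                                    # list clusters tracked in that state store
    kops edit cluster dev.example.com                    # change the cluster spec
    kops update cluster dev.example.com --yes            # push changes out to AWS
    kops rolling-update cluster dev.example.com --yes    # roll nodes so they pick up the changes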

kops creates nodes on AWS via autoscaling groups. It’s this construct (which is an AWS thing) that ensures your nodes come back to the desired number.
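Those autoscaling groups correspond to kops "instance groups", so resizing the cluster is also a kops operation rather than something you do by hand in the EC2 console. A rough sketch, again with placeholder names:

    kops get ig --name dev.example.com          # list instance groups (masters and nodes)
    kops edit ig nodes --name dev.example.com   # e.g. change minSize/maxSize
    kops update cluster dev.example.com --yes   # apply the change to the autoscaling group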

kops is used for managing Kubernetes clusters themselves: creating them, scaling, updating, and deleting them. kubectl is used for managing container workloads that run on Kubernetes. You can create, scale, update, and delete your replica sets with that. How you run workloads on Kubernetes should have nothing to do with how/what tool you (or some cluster admin) use to manage the Kubernetes cluster itself. That is, unless you're trying to change the "system components" of Kubernetes, like the Kubernetes API or kubedns, which are cluster-admin-level concerns but happen to run on top of Kubernetes as container workloads.
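So for the "how do I deploy a new Docker image" question from the comments: that is purely a kubectl concern and doesn't involve kops at all. It would look something like this, where the deployment, container, and image names are placeholders:

    kubectl apply -f deployment.yaml                                           # or edit the manifest and re-apply
    kubectl set image deployment/my-app my-app=registry.example.com/my-app:v2  # roll out a new image
    kubectl rollout status deployment/my-app                                   # watch the rolling update
    kubectl scale deployment/my-app --replicas=5                               # scaling workloads is also kubectl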

As for how pods get spun up when nodes go down, that’s what Kubernetes as a container orchestrator strives to do. You declare the desired state you want, and the Kubernetes system makes it so. If things crash or fail or disappear, Kubernetes aims to reconcile this difference between actual state and desired state, and schedules desired container workloads to run on available nodes to bring the actual state of the world back in line with your desired state. At a lower level, AWS does similar things — it creates VMs and keeps them running. If Amazon needs to take down a host for maintenance it will figure out how to run your VM (and attach volumes, etc.) elsewhere automatically.
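You can watch that reconciliation happen yourself. With a Deployment of, say, 3 replicas (names below are placeholders), terminating one node's EC2 instance just results in its pods being rescheduled onto the surviving nodes while the autoscaling group replaces the instance:

    kubectl get pods -l app=my-app -o wide   # note which node each pod runs on
    # ...terminate one of those nodes' EC2 instances in the AWS console...
    kubectl get pods -l app=my-app -o wide   # pods rescheduled onto healthy nodes
    kubectl get deployment my-app            # READY count returns to 3/3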