So, I have a k8s cluster running on AWS, provisioned using kops. I created a secret locally using kubectl:

    kubectl create secret generic aws-es --from-file=./aws_key.txt --from-file=./aws_secret_key.txt
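(Side note: without an explicit key= override, --from-file uses the file name, extension included, as the secret key, so the command above produces keys named aws_key.txt and aws_secret_key.txt rather than aws_key. The actual key names can be checked with:

    kubectl describe secret aws-es

)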

My service.yml has this env:

    - name: AWS_ACCESS_KEY_ID
      valueFrom:
        secretKeyRef:
          name: aws-es
          key: aws_key

And when I update the service in the cluster with:

    kubectl apply -f service.yml

the pod fails with this error:

     Error: secrets "aws-es" not found
     Error syncing pod 

Obviously, my kops installation cannot see the locally created secret. Is there a way for me to propagate that secret to kops' S3 storage?

Which namespace did you create your pods in? – hdhruna
It would be good to add the output of kubectl get secrets to the question. – Jonah Benton
@HiteshDhruna yeah, the issue was that I set the secrets in the default namespace while I was working on pods in kube-system; fixed it. – dgmt

2 Answers


Fixed it. The problem was that I created the secret in the default namespace, while my pods were running in the kube-system namespace.
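A minimal sketch of the fix, assuming the pods run in kube-system and reusing the file names from the question:

    # recreate the secret in the namespace the pods actually run in
    kubectl create secret generic aws-es \
      --from-file=./aws_key.txt \
      --from-file=./aws_secret_key.txt \
      --namespace=kube-system

    # verify it is visible there
    kubectl get secret aws-es --namespace=kube-system

A pod can only reference secrets in its own namespace, so the secret has to exist wherever the pods that consume it are scheduled.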


Creating the secret in the same namespace as the deployment fixed this issue for me as well.

    Error: secrets "xxx" not found
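If it is not obvious where the mismatch is, a quick check like this can track it down (xxx stands in for the secret name from the error, and <deployment> is a placeholder for your deployment's name):

    # see which namespaces actually contain the secret
    kubectl get secrets --all-namespaces | grep xxx

    # see which namespace the deployment lives in
    kubectl get deployments --all-namespaces | grep <deployment>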