I am trying to connect a MySQL instance on Cloud SQL to a Google Kubernetes Engine cluster using the Cloud SQL Proxy Docker image, following this guide.
I have a pod with an Ubuntu image (with the MySQL client installed) and the proxy image as a sidecar.
The proxy logs show "Ready for new connections", but when I try to connect to the proxy with the MySQL client, using 127.0.0.1 (port 3306) as the host, I get the following error:

ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111 "Connection refused").
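
For reference, this is roughly the command I run from inside the toolbox container (the user name is a placeholder):

mysql -h 127.0.0.1 -P 3306 -u <DB_USER> -p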

The pod is created by the following deployment:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "20"
  creationTimestamp: 2019-01-30T15:29:58Z
  generation: 19
  labels:
    app: toolbox
  name: toolbox
  namespace: default
  resourceVersion: "1806058"
  selfLink: /apis/extensions/v1beta1/namespaces/default/deployments/toolbox
  uid: e6ee574e-24a3-11e9-82e3-42010a9a0079
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: toolbox
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: toolbox
    spec:
      containers:
      - image: gcr.io/orayya-229213/toolbox:1.0
        imagePullPolicy: IfNotPresent
        name: toolbox
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      - command:
        - /cloud_sql_proxy
        - -instances=orayya-229213:europe-west2:orayya-db=tcp:0.0.0.0:3306
        - -credential_file=/secrets/cloudsql/credentials.json
        image: gcr.io/cloudsql-docker/gce-proxy:1.12
        imagePullPolicy: IfNotPresent
        name: cloudsql-proxy
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /secrets/cloudsql
          name: cloudsql-credentials
          readOnly: true
        - mountPath: /cloudsql
          name: cloudsql
      dnsPolicy: ClusterFirst
      nodeSelector:
        cloud.google.com/gke-nodepool: default-pool
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - name: cloudsql-credentials
        secret:
          defaultMode: 420
          secretName: cloudsql-credentials
      - emptyDir: {}
        name: cloudsql
status:
  availableReplicas: 1
  collisionCount: 1
  conditions:
  - lastTransitionTime: 2019-01-31T10:49:56Z
    lastUpdateTime: 2019-01-31T10:49:56Z
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: 2019-01-31T10:27:21Z
    lastUpdateTime: 2019-02-03T14:29:31Z
    message: ReplicaSet "toolbox-6d589d797" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 19
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1
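
One way to rule out the MySQL client itself is to check whether anything is listening on the port from the application container. Since both containers share the pod's network namespace, 127.0.0.1 from the toolbox container should reach the proxy. This is a sketch; <POD_NAME> is a placeholder and it assumes netcat is installed in the toolbox image:

kubectl exec -it <POD_NAME> -c toolbox -- nc -zv 127.0.0.1 3306

A "connection refused" here as well would mean the proxy isn't actually listening inside the pod.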

I've also tried several variations from other examples.
I'll be glad to provide more information if needed.
Thank you!

Edit:
I finally managed to make this work. I started over, and apparently I had a problem with the toolbox container. Thanks to everyone who helped!

Comments:

If the Cloud SQL instance and GKE are in the same project, did you try private IP? You won't need the Cloud SQL Proxy in that case, as it provides an IP in your subnet. – night-gold

@night-gold I tried that but got a timeout error, so I tried to ping the private IP; the ping never reached its destination. – EsterSason
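
Worth noting on that ping test: Cloud SQL instances generally don't respond to ICMP, so a failed ping isn't conclusive on its own. A TCP probe of the MySQL port is a more reliable reachability check (sketch; assumes netcat is available and <PRIVATE_IP> stands in for the instance's private address):

nc -zv <PRIVATE_IP> 3306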

1 Answer

While I can't help you fix this particular problem without more information, I can give you some suggestions to help you debug it.

First, figure out the name of the pod you want to take a look at. From the config you posted, it looks like you can use the app label to look it up:

kubectl get pods -l app=toolbox

If your pod isn't listed, there is a problem with your deployment and it isn't creating pods successfully. If it is listed, the problem is inside the pod itself.
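
If the pod is missing, the deployment's events usually say why. These are standard kubectl commands, using the deployment name from your config:

kubectl describe deployment toolbox

kubectl get events --sort-by=.metadata.creationTimestamp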

The pod has two containers inside of it, so the next step is to check on the proxy to make sure it started successfully, and if it did, why it's refusing connections.

kubectl logs <YOUR_POD_NAME> -c cloudsql-proxy

This should print logs associated with the container, and should give you some additional information as to why it isn't working.
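
If the logs look clean, the next thing I'd check is each container's state and restart count, which kubectl reports for the whole pod:

kubectl describe pod <YOUR_POD_NAME>

The Events section at the bottom will show crashes, failed probes, or image pull problems for either container.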