15
votes

I have multiple Docker machines (dev, staging) running on Google Compute Engine, each hosting a Django server that needs access to Google Cloud SQL. I have multiple Google Cloud SQL instances running, and each instance is used by the corresponding Docker machine on my Google Compute Engine instance.

Currently I'm accessing Cloud SQL by whitelisting my Compute Engine IP. But I don't want to rely on IPs for an obvious reason: I don't use a static IP for my dev machines.

Now I want to use the Cloud SQL Proxy to gain access instead. But how do I do that? GCP offers multiple ways to access Google Cloud SQL instances, but none of them fits my use case:

I have this option, https://cloud.google.com/sql/docs/mysql/connect-compute-engine; but this

  1. only gives my Compute Engine instance access to the SQL instance, which I then have to access from inside Docker.
  2. doesn't let me proxy multiple SQL instances on the same Compute Engine machine; I was hoping to run the proxy inside Docker if possible.

So, how do I gain access to Cloud SQL inside Docker? If Docker Compose is a better way to start, how easy is it to port to Kubernetes? (I use Google Container Engine for production.)

1
A single Cloud SQL proxy can proxy multiple instances. What is the reason you need to have multiple proxies? – Vadim
I have been reading up and realised what you said is true, so my 2nd question is invalid now. Do you have any thoughts on Q1: how can I access this proxy connection inside the individual Docker containers? – rrmerugu
I'm not sure I fully understand the question. You can run the proxy as a separate docker image (cloud.google.com/sql/docs/mysql/connect-docker) and then connect to it from your own docker image. – Vadim
Based on your answer, I can see you understand my question. connect-docker is what I meant by using docker-compose in my question. I see Docker Compose is an option, but I'm just exploring whether it's the best one. – rrmerugu
If you connect from GCE instances with static IPs, you can choose to whitelist those IPs and connect directly by IP. If you don't want to maintain IP whitelists, then using the proxy docker container is your best option. – Vadim

1 Answer

19
votes

I was able to figure out how to use cloudsql-proxy in my local Docker environment by using Docker Compose. You will need to download your Cloud SQL instance credentials and have them ready. I keep mine in my project root as credentials.json and add it to the project's .gitignore.
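One way to produce that credentials file is with gcloud; the service-account and project names below are placeholders for whatever account you granted the Cloud SQL Client role:

```shell
# hypothetical service account and project; writes a JSON key to ./credentials.json
gcloud iam service-accounts keys create credentials.json \
  --iam-account=cloudsql-proxy@my-project.iam.gserviceaccount.com
```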

The key part I found was appending =tcp:0.0.0.0:5432 after the GCP instance ID so that the port is forwarded outside the proxy container. Then, in your application, use cloudsql-proxy instead of localhost as the hostname. Make sure the rest of your db creds are valid in your application secrets so that it can connect through the local proxy supplied by the cloudsql-proxy container.
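For example, the app's datasource config can point at the proxy by its Compose service name (a sketch assuming a Postgres database; the database name, user, and password below are placeholders):

```properties
# hypothetical datasource settings; "mydb", "dbuser", and the password are placeholders
db.url=jdbc:postgresql://cloudsql-proxy:5432/mydb
db.user=dbuser
db.password=changeme
```

This works because Compose puts both services on the same default network, where each service is reachable by its service name.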

Note: Keep in mind I'm writing a Tomcat Java application and my docker-compose.yml reflects that.

docker-compose.yml:

version: '3'
services:
  cloudsql-proxy:
    container_name: cloudsql-proxy
    image: gcr.io/cloudsql-docker/gce-proxy:1.11
    command: /cloud_sql_proxy --dir=/cloudsql -instances=<YOUR INSTANCE ID HERE>=tcp:0.0.0.0:5432 -credential_file=/secrets/cloudsql/credentials.json
    ports:
      - 5432:5432
    volumes:
      - ./credentials.json:/secrets/cloudsql/credentials.json
    restart: always

  tomcatapp-api:
    container_name: tomcatapp-api
    build: .
    volumes:
      - ./build/libs:/usr/local/tomcat/webapps
    ports:
      - 8080:8080
      - 8000:8000
    env_file:
      - ./secrets.env
    restart: always
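To address the multi-instance part of the question: a single proxy container can serve several Cloud SQL instances by listing them comma-separated in -instances, each bound to its own TCP port. A sketch with placeholder instance connection names (myproj:us-central1:dev-db and staging-db are made up):

```yaml
  cloudsql-proxy:
    container_name: cloudsql-proxy
    image: gcr.io/cloudsql-docker/gce-proxy:1.11
    # each instance gets its own local port; instance names are placeholders
    command: /cloud_sql_proxy --dir=/cloudsql -instances=myproj:us-central1:dev-db=tcp:0.0.0.0:5432,myproj:us-central1:staging-db=tcp:0.0.0.0:5433 -credential_file=/secrets/cloudsql/credentials.json
    ports:
      - 5432:5432
      - 5433:5433
    volumes:
      - ./credentials.json:/secrets/cloudsql/credentials.json
    restart: always
```

Each application container then picks the port matching the instance it should talk to.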