2 votes

I have a Cloud Run service that accesses a Cloud SQL instance through SQLAlchemy. However, in the Cloud Run logs I see: CloudSQL connection failed. Please see https://cloud.google.com/sql/docs/mysql/connect-run for additional details: ensure that the account has access to "<connection_string>". Going to that link, it says:

"By default, your app will authorize your connections using the Cloud Run (fully managed) service account. The service account is in the format [email protected]."

However, the following (https://cloud.google.com/run/docs/securing/service-identity) says:

"By default, Cloud Run revisions are using the Compute Engine default service account ([email protected]), which has the Project > Editor IAM role. This means that by default, your Cloud Run revisions have read and write access to all resources in your Google Cloud project."

So shouldn't that mean that Cloud Run can already access Cloud SQL? I've already set up the Cloud SQL connection on the Cloud Run deployment page. What do you suggest I do to allow access to Cloud SQL from Cloud Run?

EDIT: I had to enable the Cloud SQL Admin API.

Comments:

Posting the code you use to connect to the instance would be helpful. – Gabe Weiss
Can you post your cloudbuild.yaml if you have one? – Jason R Stevens CFA

2 Answers

4 votes

No, Cloud Run cannot access Cloud SQL by default. You need to follow one of two paths:

  1. Connect through a local Unix socket file: configure permissions as you described and deploy with flags indicating intent to connect to the database (a SQLAlchemy sketch for both paths follows this list). Follow https://cloud.google.com/sql/docs/mysql/connect-run

  2. Connect over a private IP: this involves deploying the Cloud SQL instance into a VPC network, which gives it a private IP address. You then use a Serverless VPC Access connector (currently beta) to let the Cloud Run container reach that VPC network and connect to the database's private IP directly (no IAM permissions needed). Follow https://cloud.google.com/vpc/docs/configure-serverless-vpc-access
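
To make the two paths concrete, here is a minimal SQLAlchemy sketch of what each connection might look like, assuming MySQL with the pymysql driver. Every environment variable name in it (DB_USER, DB_PASS, DB_NAME, INSTANCE_CONNECTION_NAME, DB_PRIVATE_IP) is a placeholder of my own, not something defined above.

import os

import sqlalchemy

# Path 1: Unix socket. On Cloud Run (fully managed), the socket for an
# attached instance appears under /cloudsql/<project:region:instance>.
socket_url = "mysql+pymysql://{user}:{pw}@/{db}?unix_socket=/cloudsql/{conn}".format(
    user=os.environ["DB_USER"],
    pw=os.environ["DB_PASS"],
    db=os.environ["DB_NAME"],
    conn=os.environ["INSTANCE_CONNECTION_NAME"],  # project:region:instance
)
engine_socket = sqlalchemy.create_engine(socket_url)

# Path 2: private IP through a Serverless VPC Access connector; once the
# connector is in place this is an ordinary TCP connection.
tcp_url = "mysql+pymysql://{user}:{pw}@{host}:3306/{db}".format(
    user=os.environ["DB_USER"],
    pw=os.environ["DB_PASS"],
    host=os.environ["DB_PRIVATE_IP"],  # the instance's private IP
    db=os.environ["DB_NAME"],
)
engine_tcp = sqlalchemy.create_engine(tcp_url)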

-1 votes

Cloud SQL Proxy solution

I use the Cloud SQL Proxy to create a local Unix socket file in the /workspace directory provided by Cloud Build.

Here are the main steps:

  1. Pull the Berglas container, populating its call with the _VAR1 substitution, which names an environment variable I've encrypted using Berglas called CMCREDENTIALS. Add as many _VAR{n} substitutions as you require.
  2. Install the Cloud SQL Proxy via wget.
  3. Run an intermediate step (the test suite for this build). This step uses the variables stored in the temporary /workspace directory that Cloud Build provides; the sketch after this list shows how they can be consumed.
  4. Build your image.
  5. Push your image.
  6. Using Cloud Run, deploy the image and include the --set-env-vars flag.
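
Because the test step exports SQL_PROXY_PATH and INSTANCE_NAME1, the test configuration can derive the proxy's socket location from them. Below is a hypothetical helper showing one way to do that, assuming a PostgreSQL database with psycopg2 (the Dockerfile installs libpq-dev); the user and database names are placeholders.

import os

def database_url():
    """Build a SQLAlchemy URL pointing at the proxy's Unix socket.

    cloud_sql_proxy, started with -dir and -instances as in the build step,
    creates one socket per instance, named after the instance connection
    string (for Postgres, a directory containing .s.PGSQL.5432).
    """
    socket_dir = "{}/{}".format(
        os.environ["SQL_PROXY_PATH"],  # e.g. /workspace/cloudsql
        os.environ["INSTANCE_NAME1"],  # project-name:location:dbname
    )
    return "postgresql+psycopg2://{user}:{pw}@/{db}?host={host}".format(
        user="db-user",                  # placeholder
        pw=os.environ["CMCREDENTIALS"],  # decrypted by Berglas in step 1
        db="db-name",                    # placeholder
        host=socket_dir,                 # psycopg2 reads host as the socket directory
    )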

The full cloudbuild.yaml

# basic cloudbuild.yaml
steps:
# pull the berglas container and write the secrets to temporary files 
# under /workspace
  - name: gcr.io/berglas/berglas
    id: 'Install Berglas'
    env:
    - '${_VAR1}=berglas://${_BUCKET_ID_SECRETS}/${_VAR1}?destination=/workspace/${_VAR1}'

    args: ["exec", "--", "/bin/sh"]

# install the cloud sql proxy
  - id: 'Install Cloud SQL Proxy'
    name: alpine:latest
    entrypoint: sh
    args:
      - "-c"
      - "\
      wget -O /workspace/cloud_sql_proxy \
      https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 && \
      sleep 2 && \
      chmod +x /workspace/cloud_sql_proxy"
    waitFor: ['-']

# using the secrets from above, build and run the test suite
  - name: 'python:3.8.3-slim'
    id: 'Run Unit Tests'
    entrypoint: '/bin/bash'
    args: 
      - "-c"
      - "\
      (/workspace/cloud_sql_proxy -dir=/workspace/${_SQL_PROXY_PATH} -instances=${_INSTANCE_NAME1} & sleep 2) && \
      apt-get update && apt-get install -y --no-install-recommends \
      build-essential libssl-dev libffi-dev libpq-dev python3-dev wget && \
      rm -rf /var/lib/apt/lists/* && \
      export ${_VAR1}=$(cat /workspace/${_VAR1}) && \
      export INSTANCE_NAME1=${_INSTANCE_NAME1} && \
      export SQL_PROXY_PATH=/workspace/${_SQL_PROXY_PATH} && \
      pip install -r dev-requirements.txt && \
      pip install -r requirements.txt && \
      python -m pytest -v && \
      rm -rf /workspace/${_SQL_PROXY_PATH} && \
      echo 'Removed Cloud SQL Proxy'"
    
    waitFor: ['Install Cloud SQL Proxy', 'Install Berglas']
    dir: '${_APP_DIR}'

# Using the application/Dockerfile build instructions, build the app image
  - name: 'gcr.io/cloud-builders/docker'
    id: 'Build Application Image'
    args: ['build',
           '-t',
           'gcr.io/$PROJECT_ID/${_IMAGE_NAME}',
           '.',
          ]
    dir: '${_APP_DIR}'

# Push the application image
  - name: 'gcr.io/cloud-builders/docker'
    id: 'Push Application Image'
    args: ['push',
           'gcr.io/$PROJECT_ID/${_IMAGE_NAME}',
          ]

# Deploy the application image to Cloud Run
# populating secrets via Berglas exec ENTRYPOINT for gunicorn
  - name: 'gcr.io/cloud-builders/gcloud'
    id: 'Deploy Application Image'
    args: ['beta', 
           'run',
           'deploy', 
           '${_IMAGE_NAME}',
           '--image',
           'gcr.io/$PROJECT_ID/${_IMAGE_NAME}',
           '--region',
           'us-central1',
           '--platform', 
           'managed',
           '--quiet',
           '--add-cloudsql-instances',
           '${_INSTANCE_NAME1}',
           '--set-env-vars',
           'SQL_PROXY_PATH=/${_SQL_PROXY_PATH},INSTANCE_NAME1=${_INSTANCE_NAME1},${_VAR1}=berglas://${_BUCKET_ID_SECRETS}/${_VAR1}',
           '--allow-unauthenticated',
           '--memory',
           '512Mi'
          ]

# Use the defaults below which can be changed at the command line
substitutions:
  _IMAGE_NAME: your-image-name
  _BUCKET_ID_SECRETS: your-bucket-for-berglas-secrets
  _INSTANCE_NAME1: project-name:location:dbname
  _SQL_PROXY_PATH: cloudsql
  _VAR1: CMCREDENTIALS


# The images we'll push here
images: [
  'gcr.io/$PROJECT_ID/${_IMAGE_NAME}'
]

Dockerfile utilized

The Dockerfile below builds a Python app from the source contained in <myrepo>/application; it sits at application/Dockerfile.

# Use the official lightweight Python image.
# https://hub.docker.com/_/python
FROM python:3.8.3-slim

# Copy local code to the container image.
ENV APP_HOME /application

WORKDIR $APP_HOME

# Install production dependencies.
RUN apt-get update && apt-get install -y --no-install-recommends \
    build-essential \
    libpq-dev \
    python3-dev \
    libssl-dev \
    libffi-dev \
    && rm -rf /var/lib/apt/lists/*

# Copy the application source
COPY . ./

# Install Python dependencies
RUN pip install -r requirements.txt --no-cache-dir

# Grab Berglas from Google Cloud Registry
COPY --from=gcr.io/berglas/berglas:latest /bin/berglas /bin/berglas

# Run the web service on container startup. Here we use the gunicorn
# webserver, with one worker process and 8 threads.
# For environments with multiple CPU cores, increase the number of workers
# to be equal to the cores available.
ENTRYPOINT exec /bin/berglas exec -- gunicorn --bind :$PORT --workers 1 --threads 8 app:app 

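For completeness, here is a hypothetical minimal app.py matching the app:app target in the ENTRYPOINT above; Flask is my assumption (any WSGI app works), and by the time gunicorn starts, berglas exec has already replaced the berglas:// reference in CMCREDENTIALS with its plaintext value.

import os

from flask import Flask  # assumption; any WSGI-compatible framework works

app = Flask(__name__)

@app.route("/")
def index():
    # Never echo secret values in a real service; this only confirms that
    # berglas exec resolved CMCREDENTIALS before gunicorn started.
    return "CMCREDENTIALS present: {}".format("CMCREDENTIALS" in os.environ)
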
Hope this helps someone, though it may be too specific (Python + Berglas) for the original question.