8
votes

I use the Serverless Framework to deploy a Python function to AWS Lambda.

My configuration file serverless.yml is the following:

frameworkVersion: "=1.27.3"

service: recipes

provider:
  name: aws
  endpointType: REGIONAL
  runtime: python3.6
  stage: dev
  region: eu-central-1
  memorySize: 512
  deploymentBucket:
    name: dfki-meta
  versionFunctions: false
  stackTags:
    Project: DFKIAPP
  # Allows updates to all resources except deleting/replacing EC2 instances
  stackPolicy:
    - Effect: Allow
      Principal: "*"
      Action: "Update:*"
      Resource: "*"
    - Effect: Deny
      Principal: "*"
      Action:
        - "Update:Replace"
        - "Update:Delete"
      Resource: "*"
      Condition:
        StringEquals:
          ResourceType:
            - AWS::EC2::Instance
  # Access to RDS and S3 Bucket
  iamRoleStatements:
    - Effect: "Allow"
      Action: "s3:ListBucket"
      Resource: "*"

package:
  individually: true



functions:
  get_recipes:
    handler: handler.get_recipes
    module: recipes_crud
    package:
      include:
        - db/*
    timeout: 10
    events:
      - http:
          path: recipes
          method: get
          request:
            parameters:
              querystring:
                persona: true



plugins:
  # deploy conda package on lambda
  - serverless-python-requirements

custom:
  pythonRequirements:
    dockerizePip: non-linux
    dockerFile: prod_env_dockerfile/Dockerfile
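
For context, my project layout is roughly the following (the handler module and requirements file names are taken from the config above and the deploy log below; everything else is approximate):

.
├── serverless.yml
├── prod_env_dockerfile/
│   └── Dockerfile
├── db/
│   └── ...                 # included via package.include
└── recipes_crud/
    ├── handler.py          # defines get_recipes
    └── requirements.txt    # installed by serverless-python-requirements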

And my Dockerfile:

# based on lambci/lambda:python3.6
FROM lambci/lambda-base:build

ENV PATH=/var/lang/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin \
    LD_LIBRARY_PATH=/var/lang/lib:/lib64:/usr/lib64:/var/runtime:/var/runtime/lib:/var/task:/var/task/lib \
    AWS_EXECUTION_ENV=AWS_Lambda_python3.6 \
    PYTHONPATH=/var/runtime \
    PKG_CONFIG_PATH=/var/lang/lib/pkgconfig:/usr/lib64/pkgconfig:/usr/share/pkgconfig

RUN rm -rf /var/runtime /var/lang && \
  curl https://lambci.s3.amazonaws.com/fs/python3.6.tgz | tar -xz -C / && \
  sed -i '/^prefix=/c\prefix=/var/lang' /var/lang/lib/pkgconfig/python-3.6.pc && \
  curl https://www.python.org/ftp/python/3.6.1/Python-3.6.1.tar.xz | tar -xJ && \
  cd Python-3.6.1 && \
  LIBS="$LIBS -lutil -lrt" ./configure --prefix=/var/lang && \
  make -j$(getconf _NPROCESSORS_ONLN) libinstall inclinstall && \
  cd .. && \
  rm -rf Python-3.6.1 && \
  pip3 install -U pip awscli virtualenv --no-cache-dir

RUN yum install -y wget
RUN wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh
RUN bash Miniconda3-latest-Linux-x86_64.sh -b -p $HOME/miniconda
RUN export PATH="$HOME/miniconda/bin:$PATH" && conda install -c prometeia -y pymssql

But seemingly sls does not use my Dockerfile; it still creates an image called sls-py-reqs-custom:

(node:43146) ExperimentalWarning: The fs.promises API is experimental
Serverless: Installing requirements of recipes_crud/requirements.txt in .serverless/recipes_crud...
Serverless: Building custom docker image from prod_env_dockerfile/Dockerfile...
Serverless: Docker Image: sls-py-reqs-custom
Serverless: Packaging function: get_recipes...
Serverless: Excluding development dependencies...
Serverless: Injecting required Python packages to package...
Serverless: Uploading function: get_recipes (29.08 MB)...
Serverless: Successfully deployed function: get_recipes
Serverless: Successfully updated function: get_recipes

How can I force Serverless to use my customized Docker image?

1 Answer

0
votes

There seems to be some confusion on your part. I mainly want to address these two comments from your original question:

  • but seemingly sls do not use my dockerfile
  • How can I force serverless to use my customized docker ?

TL;DR: the Serverless Framework will not use your Dockerfile, and you cannot force it to. These two technologies are like apples and oranges. To resolve this, your serverless.yml simply needs to be configured so that it can find the path to your function's handler (a rough sketch follows below).
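
As a rough sketch (the handler and module names come straight from your question; treat everything else here as illustrative), the packaging side of your config only needs to point at the handler:

functions:
  get_recipes:
    handler: handler.get_recipes   # file recipes_crud/handler.py, function get_recipes
    module: recipes_crud           # plugin installs recipes_crud/requirements.txt
    package:
      include:
        - db/*

The handler path is resolved purely from serverless.yml; your Dockerfile never enters into it.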

You are using a popular Docker image called docker-lambda. This image is only for local testing. The best use case I can think of is that it lets you work without an internet connection (coding while camping, on an airplane without WiFi, etc.).

To quote the project's README, this image's only purpose is:

use it for running your functions in the same strict Lambda environment, knowing that they'll exhibit the same behavior when deployed live. You can also use it to compile native dependencies knowing that you're linking to the same library versions that exist on AWS Lambda and then deploy using the AWS CLI.
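
For example (a hedged sketch: the handler name comes from your question, and the event payload is made up), a local invocation with docker-lambda looks roughly like this:

docker run --rm \
  -v "$PWD/recipes_crud":/var/task \
  lambci/lambda:python3.6 \
  handler.get_recipes '{"queryStringParameters": {"persona": "test"}}'

That runs your function once inside a container that mimics the Lambda runtime; it has no effect on what sls deploy packages and uploads.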

When you are ready to package/deploy/etc. to the AWS Cloud, the docker-lambda image is of zero use to you.