
I’m trying to figure out how to properly add a Spark step to my AWS EMR cluster from the command line using the AWS CLI.

Some background:

I have a large dataset (thousands of .csv files) that I need to read in and analyze. I have a Python script that looks something like this:

analysis_script.py

import pandas as pd
from pyspark.sql import SQLContext, DataFrame
from pyspark.sql.types import *
from pyspark import SparkContext
import boto3

#Spark context
sc = SparkContext.getOrCreate()
sqlContext = SQLContext(sc)

df = sqlContext.read.format("org.apache.spark.sql.execution.datasources.csv.CSVFileFormat").load("s3n://data_input/*csv")

def analysis(df):
    #do bunch of stuff. Create output dataframe
    return df_output

df_output = analysis(df)

df_output.save_as_csv_to_s3_somehow

I want the output CSV file to go to the directory s3://dataoutput/.

Do I need to package the .py file into a JAR or something? What command do I use to run this analysis on my cluster's nodes, and how do I get the output to the correct directory? Thanks.

I launch the cluster using:

aws emr create-cluster --release-label emr-5.5.0 \
--name PySpark_Analysis \
--applications Name=Hadoop Name=Hive Name=Spark Name=Pig Name=Ganglia Name=Presto Name=Zeppelin \
--instance-groups InstanceGroupType=MASTER,InstanceCount=1,InstanceType=r3.xlarge InstanceGroupType=CORE,InstanceCount=4,InstanceType=r3.xlarge \
--region us-west-2 \
--log-uri s3://emr-logs-zerex/ \
--configurations file://./zeppelin-env-config.json \
--bootstrap-actions Name="Install Python Packages",Path="s3://emr-code/bootstraps/install_python_packages_custom.bash"

1 Answer


I usually use the --steps parameter of aws emr create-cluster, which can be specified as --steps file://mysteps.json. The file looks like this:

[
    {
        "Type": "Spark",
        "Name": "KB Spark Program",
        "ActionOnFailure": "TERMINATE_JOB_FLOW",
        "Args": [
            "--verbose",
            "--packages",
            "org.apache.spark:spark-streaming-kafka-0-8_2.11:2.1.1,com.amazonaws:aws-java-sdk-s3:1.11.27,org.apache.hadoop:hadoop-aws:2.7.2,com.databricks:spark-csv_2.11:1.5.0",
            "/tmp/analysis_script.py"
        ]
    },
    {
        "Type": "Spark",
        "Name": "KB Spark Program",
        "ActionOnFailure": "TERMINATE_JOB_FLOW",
        "Args": [
            "--verbose",
            "--packages",
            "org.apache.spark:spark-streaming-kafka-0-8_2.11:2.1.1,com.amazonaws:aws-java-sdk-s3:1.11.27,org.apache.hadoop:hadoop-aws:2.7.2,com.databricks:spark-csv_2.11:1.5.0",
            "/tmp/analysis_script_1.py"
        ]
    }
]
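
For reference, a minimal sketch of how the steps file might be passed in: you can simply append --steps file://./mysteps.json to the create-cluster command from the question, or submit the same steps to a cluster that is already running with aws emr add-steps (the cluster id below is a placeholder):

# Submit the steps file to an already-running cluster
# (replace j-XXXXXXXXXXXXX with your actual cluster id)
aws emr add-steps --cluster-id j-XXXXXXXXXXXXX --steps file://./mysteps.json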

You can read more about EMR steps here. I use a bootstrap action to copy my code from S3 into /tmp on the nodes and then point each step at those local paths.
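
The bootstrap action itself can be as simple as copying the scripts from S3 onto each node; a minimal sketch, assuming your code lives in a bucket you control (the key names below are placeholders, not from the original post):

#!/bin/bash
# Hypothetical bootstrap script: pull the PySpark jobs onto the node so the
# steps above can reference them at /tmp/analysis_script.py and /tmp/analysis_script_1.py
aws s3 cp s3://emr-code/jobs/analysis_script.py /tmp/analysis_script.py
aws s3 cp s3://emr-code/jobs/analysis_script_1.py /tmp/analysis_script_1.py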

As for writing to S3, here is a link that explains that.
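
For completeness, a rough sketch of what the last line of the question's script could look like: on emr-5.5.0 (Spark 2.1) the DataFrame writer supports CSV natively, so something along these lines should work (the output prefix is the one from the question; Spark writes part files under that prefix rather than a single file):

# Write the result back to S3 as CSV part files under the given prefix
df_output.write \
    .format("csv") \
    .option("header", "true") \
    .mode("overwrite") \
    .save("s3://dataoutput/")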