13
votes

I'm using AWS S3, Glue, and Athena with the following setup:

S3 --> Glue --> Athena

My raw data is stored on S3 as CSV files. I'm using Glue for ETL, and I'm using Athena to query the data.

Since I'm using Athena, I'd like to convert the CSV files to Parquet. I'm using AWS Glue to do this right now. This is the current process I'm using:

  1. Run Crawler to read CSV files and populate Data Catalog.
  2. Run ETL job to create Parquet file from Data Catalog.
  3. Run a Crawler to populate Data Catalog using Parquet file.

The Glue job only allows me to convert one table at a time. If I have many CSV files, this process quickly becomes unmanageable. Is there a better way, perhaps a "correct" way, of converting many CSV files to Parquet using AWS Glue or some other AWS service?


5 Answers

15
votes

I had the exact same situation: I wanted to efficiently loop through the catalog tables created by the crawler, which point to CSV files, and convert them to Parquet. Unfortunately there is not much information available on the web yet, so I wrote a blog post on LinkedIn explaining how I did it. Please have a read, especially point #5. I hope that helps; please let me know your feedback.

Note: As per Antti's feedback, I am pasting the relevant excerpt from my blog below:

  1. Iterating through catalog/database/tables

The Job Wizard comes with an option to run a predefined script on a data source. The problem is that the data source you can select is a single table from the catalog; it does not give you the option to run the job on a whole database or a set of tables. You can modify the script later anyway, but the way to iterate through the database tables in the Glue catalog is also very difficult to find. There are Catalog APIs, but they lack suitable examples. The GitHub example repo could be enriched with many more scenarios to help developers.

After some mucking around, I came up with the script below, which does the job. I used the boto3 client to loop through the tables. I am pasting it here in case it helps someone. I would also like to hear from you if you have a better suggestion.

import sys
import boto3
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job

## @params: [JOB_NAME]
args = getResolvedOptions(sys.argv, ['JOB_NAME'])

sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)


# Use the Glue Catalog API via boto3 to list every table in the source database
client = boto3.client('glue', region_name='ap-southeast-2')

databaseName = 'tpc-ds-csv'
print('\ndatabaseName: ' + databaseName)

Tables = client.get_tables(DatabaseName=databaseName)

tableList = Tables['TableList']

# Convert each catalogued CSV table to Parquet
for table in tableList:
    tableName = table['Name']
    print('\n-- tableName: ' + tableName)

    # Read the CSV-backed catalog table as a DynamicFrame
    datasource0 = glueContext.create_dynamic_frame.from_catalog(
        database="tpc-ds-csv",
        table_name=tableName,
        transformation_ctx="datasource0"
    )

    # Write it back out to S3 as Parquet, one prefix per table
    datasink4 = glueContext.write_dynamic_frame.from_options(
        frame=datasource0,
        connection_type="s3",
        connection_options={
            "path": "s3://aws-glue-tpcds-parquet/" + tableName + "/"
        },
        format="parquet",
        transformation_ctx="datasink4"
    )
job.commit()
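
If the database holds more tables than a single get_tables call returns, a paginator avoids missing any of them. Here is a minimal sketch of the same listing loop using boto3's built-in Glue paginator; the database name and region are just the ones from the script above, and the conversion body would be the same as in the loop there.

import boto3

client = boto3.client('glue', region_name='ap-southeast-2')

# get_tables returns results in pages; the paginator follows NextToken for us
paginator = client.get_paginator('get_tables')

for page in paginator.paginate(DatabaseName='tpc-ds-csv'):
    for table in page['TableList']:
        print('-- tableName: ' + table['Name'])
        # ...convert the table here, as in the loop body above...
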
4
votes

Please refer to EDIT for updated info.

S3 --> Athena

Why not use the CSV format directly with Athena?

https://docs.aws.amazon.com/athena/latest/ug/supported-format.html

CSV is one of the supported formats. Also, to make queries more efficient, you can compress the CSV files for faster loading.

Supported compression formats:

https://docs.aws.amazon.com/athena/latest/ug/compression-formats.html
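
As a rough sketch of that idea, the snippet below gzips a local CSV and uploads it to S3 with boto3, so Athena scans fewer bytes; the file, bucket, and key names are only placeholders, not anything from the question.

import gzip
import shutil
import boto3

# Compress a local CSV file (gzip is one of Athena's supported codecs for CSV)
with open('data.csv', 'rb') as src, gzip.open('data.csv.gz', 'wb') as dst:
    shutil.copyfileobj(src, dst)

# Upload the compressed file; Athena reads .gz objects transparently
s3 = boto3.client('s3')
s3.upload_file('data.csv.gz', 'my-bucket', 'raw/data.csv.gz')
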

Hope it helps.

EDIT:

Why is the Parquet format more helpful than CSV?

https://dzone.com/articles/how-to-be-a-hero-with-powerful-parquet-google-and

S3 --> Glue --> Athena

More details on CSV-to-Parquet conversion:

https://aws.amazon.com/blogs/big-data/build-a-data-lake-foundation-with-aws-glue-and-amazon-s3/

3
votes

I'm not a big fan of Glue, nor of creating schemas from data.

Here's how to do it in Athena, which is dramatically faster than Glue.

This is for the CSV files:

create external table foo (
  id int,
  name string,
  some date
)
row format delimited
  fields terminated by ','
location 's3://mybucket/path/to/csvs/'

This is for the parquet files:

create table bar 
with (
  external_location = 's3://mybucket/path/to/parquet/',
  format = 'PARQUET'
)
as select * from foo 

You don't need to create that Parquet path beforehand, even if you use partitioning.
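
If you have many source tables, the same CTAS statement can be submitted per table from a small script. Below is a minimal sketch using boto3's Athena client; the database name and the results output bucket are hypothetical placeholders, while the query itself is the CTAS from above.

import boto3

athena = boto3.client('athena')

ctas = """
create table bar
with (
  external_location = 's3://mybucket/path/to/parquet/',
  format = 'PARQUET'
)
as select * from foo
"""

# Athena runs queries asynchronously; start_query_execution returns an execution id
response = athena.start_query_execution(
    QueryString=ctas,
    QueryExecutionContext={'Database': 'default'},  # hypothetical database name
    ResultConfiguration={'OutputLocation': 's3://mybucket/athena-results/'}  # hypothetical bucket
)
print(response['QueryExecutionId'])
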

1
votes

It sounds like in your step 1 you are crawling an individual CSV file (e.g. some-bucket/container-path/file.csv). If you instead point your crawler at the path level (e.g. some-bucket/container-path/) and all your CSV files are uniform, the crawler will create a single external table instead of one external table per file, and you'll be able to extract the data from all of the files at once.
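
For reference, this is roughly what pointing a crawler at the folder (rather than a single file) looks like with boto3; the crawler name, IAM role, and catalog database here are hypothetical placeholders.

import boto3

glue = boto3.client('glue')

# Target the prefix, not an individual file, so uniform CSVs land in one table
glue.create_crawler(
    Name='csv-folder-crawler',                       # hypothetical crawler name
    Role='arn:aws:iam::123456789012:role/GlueRole',  # hypothetical IAM role
    DatabaseName='my_database',                      # hypothetical catalog database
    Targets={'S3Targets': [{'Path': 's3://some-bucket/container-path/'}]}
)
glue.start_crawler(Name='csv-folder-crawler')
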

1
votes

You can convert either JSON or CSV files into Parquet directly, without importing them into the catalog first.

This is for JSON files; the code below will convert anything hosted under the rawFiles path:

import sys
from awsglue.job import Job 
from awsglue.transforms import *
from awsglue.context import GlueContext
from pyspark.context import SparkContext
from awsglue.utils import getResolvedOptions

## @params: [JOB_NAME]
args = getResolvedOptions(sys.argv, ['JOB_NAME'])

sparkContext = SparkContext()
glueContext = GlueContext(sparkContext)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)

s3_json_path = 's3://rawFiles/'  
s3_parquet_path = 's3://convertedFiles/'

output = spark.read.load(s3_json_path, format='json') 
output.write.parquet(s3_parquet_path)

job.commit()
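
Since the answer mentions CSV as well, only the read line would change, roughly as in the sketch below (assuming the files have a header row); s3_csv_path is a hypothetical variable analogous to s3_json_path, and spark and s3_parquet_path are reused from the job above.

# CSV variant: let Spark infer the schema from the files under the source path
output = spark.read.load(s3_csv_path, format='csv', header=True, inferSchema=True)
output.write.parquet(s3_parquet_path)
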