35
votes

Suppose that df is a dataframe in Spark. The way to write df into a single CSV file is

df.coalesce(1).write.option("header", "true").csv("name.csv")

This will write the dataframe into a CSV file contained in a folder called name.csv but the actual CSV file will be called something like part-00000-af091215-57c0-45c4-a521-cd7d9afb5e54.csv.

I would like to know if it is possible to avoid the folder name.csv and to have the actual CSV file called name.csv and not part-00000-af091215-57c0-45c4-a521-cd7d9afb5e54.csv. The reason is that I need to write several CSV files which later on I will read together in Python, but my Python code makes use of the actual CSV names and also needs to have all the single CSV files in a folder (and not a folder of folders).

Any help is appreciated.

9
Sorry, but I think my question is different: I already know how to write a single CSV file, but I don't want the folder that you get at the end, and I want the CSV file named as I specified, not the folder. – antonioACR1
Still, you can use copyMerge, as suggested in answers to that question, to copy to one file in a new directory. – T. Gawęda
copyMerge is being removed in the 3.0 lib. – woot

9 Answers

3
votes

A possible solution is to convert the Spark dataframe to a pandas dataframe and save it as CSV (note that this collects the whole result onto the driver, so it only works when the data fits in driver memory):

df.toPandas().to_csv("<path>/<filename>")
2
votes

If you want to use only the Python standard library, this is an easy function that writes to a single file. You don't have to mess with temp files or go through another directory.

import csv

def spark_to_csv(df, file_path):
    """Converts a Spark dataframe to a single CSV file."""
    with open(file_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=df.columns)
        writer.writeheader()
        for row in df.toLocalIterator():
            writer.writerow(row.asDict())
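To sanity-check this approach without a Spark cluster, here is a self-contained sketch that repeats the function and feeds it a hypothetical stand-in object exposing the same two members (`columns` and `toLocalIterator()`) the function relies on; the stand-ins are not part of Spark's API.

```python
import csv
import os
import tempfile

# Same logic as spark_to_csv above, repeated so the sketch runs on its own.
def spark_to_csv(df, file_path):
    """Write any object exposing .columns and .toLocalIterator() to one CSV."""
    with open(file_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=df.columns)
        writer.writeheader()
        for row in df.toLocalIterator():
            writer.writerow(row.asDict())

# Hypothetical stand-ins mimicking the two DataFrame members used above.
class FakeRow:
    def __init__(self, d):
        self._d = d
    def asDict(self):
        return self._d

class FakeDF:
    columns = ["id", "label"]
    def toLocalIterator(self):
        return iter([FakeRow({"id": 1, "label": "a"}),
                     FakeRow({"id": 2, "label": "b"})])

out = os.path.join(tempfile.gettempdir(), "name.csv")
spark_to_csv(FakeDF(), out)
# name.csv now holds a header row followed by the two data rows
```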
2
votes

If the result size is comparable to the Spark driver node's free memory, you may have problems converting the dataframe to pandas.

I would tell Spark to save to some temporary location, and then copy the single CSV file into the desired folder. Something like this:

import os
import shutil

TEMPORARY_TARGET="big/storage/name"
DESIRED_TARGET="/export/report.csv"

df.coalesce(1).write.option("header", "true").csv(TEMPORARY_TARGET)

part_filename = next(entry for entry in os.listdir(TEMPORARY_TARGET) if entry.startswith('part-'))
temporary_csv = os.path.join(TEMPORARY_TARGET, part_filename)

shutil.copyfile(temporary_csv, DESIRED_TARGET)

If you work with Databricks, Spark addresses files as dbfs:/mnt/..., and to use Python's file operations on them you need to change the path to /dbfs/mnt/..., or (more native to Databricks) replace shutil.copyfile with dbutils.fs.cp.
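For the plain-Python route, the dbfs:-to-/dbfs translation can be captured in a small helper. This is a sketch: the /dbfs FUSE mount only exists on Databricks clusters, so the returned path is only meaningful there.

```python
def dbfs_to_local(path):
    """Map a dbfs:/... URI onto the /dbfs/... FUSE mount so that plain
    Python file APIs (open, shutil.copyfile) can reach the same file.
    The /dbfs mount is Databricks-specific."""
    prefix = "dbfs:/"
    if path.startswith(prefix):
        return "/dbfs/" + path[len(prefix):]
    return path

print(dbfs_to_local("dbfs:/mnt/exports/report.csv"))
# prints: /dbfs/mnt/exports/report.csv
```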

1
votes

There is no Spark DataFrame API that writes/creates a single file instead of a directory as the result of a write operation.

Both options below will create one single data file inside the directory, along with the standard metadata files (_SUCCESS, _committed, _started).

 1. df.coalesce(1).write.mode("overwrite").format("com.databricks.spark.csv").option("header", "true").csv("PATH/FOLDER_NAME/x.csv")

 2. df.repartition(1).write.mode("overwrite").format("com.databricks.spark.csv").option("header", "true").csv("PATH/FOLDER_NAME/x.csv")

If you don't use coalesce(1) or repartition(1), and instead take advantage of Spark's parallelism for writing files, it will create multiple data files inside the directory.

Once the write operation is done, you need a function in the driver that combines all the data file parts into a single file (e.g. `cat part-00000* > singlefilename`).

1
votes

I had the same problem and solved it with Python's tempfile module plus boto3's upload_file. (Note: Spark writes a directory of part files, so a temporary directory is needed rather than a single temporary file, and Spark refuses to write to an existing path unless the mode is "overwrite".)

import os
from tempfile import TemporaryDirectory

import boto3

s3 = boto3.resource('s3')

with TemporaryDirectory() as tmp:
    df.coalesce(1).write.format('csv').options(header=True).mode('overwrite').save(tmp)
    # Pick out the single CSV part file Spark wrote into the directory.
    part = next(name for name in os.listdir(tmp) if name.startswith('part-'))
    s3.meta.client.upload_file(os.path.join(tmp, part), S3_BUCKET, S3_FOLDER + 'name.csv')

See https://boto3.amazonaws.com/v1/documentation/api/latest/guide/s3-uploading-files.html for more info on upload_file().

1
votes

A more Databricks-native solution is here:

import os

TEMPORARY_TARGET="dbfs:/my_folder/filename"
DESIRED_TARGET="dbfs:/my_folder/filename.csv"

spark_df.coalesce(1).write.option("header", "true").csv(TEMPORARY_TARGET)

# Find the part file by name rather than by its position in the listing,
# which is not guaranteed to be stable.
temporary_csv = os.path.join(TEMPORARY_TARGET, next(f.name for f in dbutils.fs.ls(TEMPORARY_TARGET) if f.name.startswith('part-')))

dbutils.fs.cp(temporary_csv, DESIRED_TARGET)

Note that if you are working with a Koalas dataframe, you can replace spark_df with koalas_df.to_spark().

1
votes

Create a temp folder inside the output folder, copy the part-00000* file to the output folder under the desired file name, then delete the temp folder. Python code snippet to do the same in Databricks:

fpath=output+'/'+'temp'

def file_exists(path):
  try:
    dbutils.fs.ls(path)
    return True
  except Exception as e:
    if 'java.io.FileNotFoundException' in str(e):
      return False
    else:
      raise

# Clear any leftover temp folder before writing
# (dbutils.fs.rm needs recurse=True to remove a directory).
if file_exists(fpath):
  dbutils.fs.rm(fpath, True)
df.coalesce(1).write.option("header", "true").csv(fpath)

fname=[x.name for x in dbutils.fs.ls(fpath) if x.name.startswith('part-00000')]
dbutils.fs.cp(fpath+"/"+fname[0], output+"/"+"name.csv")
dbutils.fs.rm(fpath, True)

0
votes

For PySpark, you can convert to a pandas dataframe and then save it:

df.toPandas().to_csv("<path>/<filename.csv>", header=True, index=False)

-3
votes
df.write.mode("overwrite").format("com.databricks.spark.csv").option("header", "true").csv("PATH/FOLDER_NAME/x.csv")

You can use this, and if you don't want to hard-code the CSV name every time, you can build an array of CSV file names and loop over it; it will work.