// download the CSV file
ByteArrayOutputStream downloadedFile = downloadFile();
// save the downloaded CSV file in a local temp folder
java.io.File tmpCsvFile = save(downloadedFile);
// read the CSV with Spark SQL
Dataset<Row> ds = session
    .read()
    .option("header", "true")
    .csv(tmpCsvFile.getAbsolutePath());
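save() is my own helper; it just writes the downloaded bytes to a temp file on the driver's local disk, something like this (simplified sketch, the real code may differ in details):

private java.io.File save(ByteArrayOutputStream downloadedFile) throws java.io.IOException {
    // create a temp file in java.io.tmpdir and write the downloaded bytes into it
    java.io.File tmpFile = java.io.File.createTempFile("download", ".csv");
    try (java.io.FileOutputStream out = new java.io.FileOutputStream(tmpFile)) {
        downloadedFile.writeTo(out);
    }
    return tmpFile;
}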
The tmpCsvFile ends up saved at the following path:
/mnt/yarn/usercache/hadoop/appcache/application_1511379756333_0001/container_1511379756333_0001_02_000001/tmp/1OkYaovxMsmR7iPoPnb8mx45MWvwr6k1y9xIdh8g7K0Q3118887242212394029.csv
Exception thrown on reading:
org.apache.spark.sql.AnalysisException: Path does not exist: hdfs://ip-33-33-33-33.ec2.internal:8020/mnt/yarn/usercache/hadoop/appcache/application_1511379756333_0001/container_1511379756333_0001_02_000001/tmp/1OkYaovxMsmR7iPoPnb8mx45MWvwr6k1y9xIdh8g7K0Q3118887242212394029.csv;
I think the problem is that the file is saved on the local filesystem of the driver, and when I try to read it through the Spark SQL API, Spark resolves the path against the default filesystem (HDFS), so it can't find the file. I have already tried sparkContext.addFile(), but it doesn't work.
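This is roughly what I tried with addFile (a sketch; I may not be using it the right way):

// distribute the local temp file to the cluster
session.sparkContext().addFile(tmpCsvFile.getAbsolutePath());

// then try to read it back via the path returned by SparkFiles
Dataset<Row> ds = session
    .read()
    .option("header", "true")
    .csv(org.apache.spark.SparkFiles.get(tmpCsvFile.getName()));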
Any solutions?
Thanks