I have a CSV file that was copied from Azure Blob storage to Azure Data Lake Store. The pipeline was established successfully and the file was copied.
I'm trying to write a sample U-SQL script from here:
Home -> datalakeanalysis1 -> Sample scripts -> New job
It shows me the default script:
//Define schema of file, must map all columns
@searchlog =
    EXTRACT UserId      int,
            Start       DateTime,
            Region      string,
            Query       string,
            Duration    int,
            Urls        string,
            ClickedUrls string
    FROM @"/Samples/Data/SearchLog.tsv"
    USING Extractors.Tsv();

OUTPUT @searchlog
    TO @"/Samples/Output/SearchLog_output.tsv"
    USING Outputters.Tsv();
Note: my file in Data Lake Store is here:
Home -> dls1 -> Data explorer -> rdl1
How can I give the path of my CSV file in the script (my CSV file is stored in Data Lake Store)?
Also, I would like to keep my destination (output) file in Data Lake Store.
How can I modify my script to refer to the Data Lake Store path?
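What I gathered from the documentation so far (I may have this wrong) is that a file in Data Lake Store can be referenced either by a path relative to the job's default Data Lake Store account, or by a full adl:// URI that names the account explicitly. A sketch using my account (rdl1) and folder (blob1):

```sql
// Sketch only: relative path resolves against the job's
// default Data Lake Store account.
@input =
    EXTRACT ID1       int,
            ID2       int,
            Date      DateTime,
            Rs        string,
            Rs1       string,
            Number    string,
            Direction string,
            ID3       int
    FROM @"/blob1/vehicle1_09142014_JR.csv"
    USING Extractors.Csv();

// Alternatively, the absolute form naming the store account:
// FROM @"adl://rdl1.azuredatalakestore.net/blob1/vehicle1_09142014_JR.csv"
```

Is that understanding correct?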
Edit:
I have changed my script as below:
//Define schema of file, must map all columns
@searchlog =
    EXTRACT ID1       int,
            ID2       int,
            Date      DateTime,
            Rs        string,
            Rs1       string,
            Number    string,
            Direction string,
            ID3       int
    FROM @"adl://rdl1.azuredatalakestore.net/blob1/vehicle1_09142014_JR.csv"
    USING Extractors.Csv();

OUTPUT @searchlog
    TO @"adl://rdl1.azuredatalakestore.net/blob1/vehicle1_09142014_JR1.csv"
    USING Outputters.Csv();
However, my job is failing with the attached error:
Moreover, I'm attaching the CSV file that I want to use in the job: Sample CSV file
Is there anything wrong with the CSV file, or with my script? Please help. Thanks.
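One thing I wondered about while debugging: if the failure comes from a header row or from rows that don't match the declared schema, would something like this be the right fix? (skipFirstNRows and silent are parameters I found in the Extractors.Csv documentation; I haven't confirmed they apply to my case.)

```sql
// Guess: skip a possible header row and silently drop rows that
// don't fit the schema (skipFirstNRows / silent are documented
// Extractors.Csv parameters; unverified for my file).
@searchlog =
    EXTRACT ID1       int,
            ID2       int,
            Date      DateTime,
            Rs        string,
            Rs1       string,
            Number    string,
            Direction string,
            ID3       int
    FROM @"adl://rdl1.azuredatalakestore.net/blob1/vehicle1_09142014_JR.csv"
    USING Extractors.Csv(skipFirstNRows:1, silent:true);
```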