I have a Python DataFrame which I need to load into Snowflake tables. I was doing that via a Jupyter notebook and was able to load the data. However, when I ran the same code on EC2 I encountered a segmentation fault. Is it possible to load data from a Python DataFrame to Snowflake using another method, such as creating a temp CSV and loading it via Python, or converting it to PyArrow format and then loading it into Snowflake via Python? Please note that I can't use an S3 bucket.
0 votes
I recommend you update your question with more details. Error messages, sample code, etc. There is clearly something different about your local machine and your EC2 instance, but without any details, it's impossible to point you in the right direction and debug your issue.
- Mike Walton
I wouldn't be able to paste the code as it is in a virtual environment. I will try to post the code from my local machine; meanwhile, can you suggest other methods to load the data into Snowflake?
- user13304521
I think you need to figure out why write_pandas() isn't working on the EC2 instance, because it is running the same commands that you'd be trying to use in alternative solutions (PUT and COPY INTO).
- Mike Walton
1 Answer
0 votes
Your best method is the write_pandas() function, which will create a temp file, PUT it to a Snowflake internal stage, and then execute a COPY INTO command for you.
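For reference, a minimal sketch of that approach, wrapped in a function so the connection details stay out of the way. The table name and credentials below are placeholders, and the 4-tuple return shape assumes a reasonably recent snowflake-connector-python:

```python
import pandas as pd


def load_with_write_pandas(conn, df: pd.DataFrame, table_name: str):
    """Load a DataFrame into an existing Snowflake table.

    Under the hood, write_pandas() writes the frame to temp files,
    PUTs them to a Snowflake internal stage, and runs COPY INTO.
    """
    from snowflake.connector.pandas_tools import write_pandas

    success, nchunks, nrows, _ = write_pandas(conn, df, table_name)
    return success, nrows


# Usage, with real credentials (placeholders shown):
#   import snowflake.connector
#   conn = snowflake.connector.connect(
#       account="my_account", user="my_user", password="my_password",
#       warehouse="my_wh", database="my_db", schema="my_schema")
#   df = pd.DataFrame({"ID": [1, 2], "NAME": ["a", "b"]})
#   ok, nrows = load_with_write_pandas(conn, df, "MY_TABLE")
```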
This is not the only way, though, and since your question doesn't specify which method you are using, I thought I'd start with the best one. If this is the method you're already using, I'll update this answer.
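If write_pandas() keeps segfaulting on the EC2 box, the same pipeline can be driven by hand, which is essentially the temp-CSV idea from the question: dump the frame to a local CSV, PUT it to the table's internal stage, then COPY INTO. A rough sketch (table name, file format options, and the `load_frame` helper are all illustrative, not a fixed API; the cursor calls only run when a real connection is supplied):

```python
import csv
import tempfile
from pathlib import Path

import pandas as pd


def frame_to_csv(df: pd.DataFrame, directory: str) -> Path:
    """Dump the frame to a temp CSV that Snowflake's COPY can read."""
    path = Path(directory) / "load_me.csv"
    df.to_csv(path, index=False, header=False, quoting=csv.QUOTE_MINIMAL)
    return path


def load_sql(csv_path: Path, table: str) -> list:
    """PUT the file to the table's internal stage, then COPY it in."""
    return [
        f"PUT file://{csv_path.as_posix()} @%{table} AUTO_COMPRESS=TRUE",
        f"COPY INTO {table} FROM @%{table} FILE_FORMAT=(TYPE=CSV) PURGE=TRUE",
    ]


def load_frame(conn, df: pd.DataFrame, table: str) -> None:
    """Write the frame to a temp CSV and load it via PUT + COPY INTO."""
    with tempfile.TemporaryDirectory() as tmp:
        path = frame_to_csv(df, tmp)
        cur = conn.cursor()
        for stmt in load_sql(path, table):
            cur.execute(stmt)
```

Note that this avoids S3 entirely: PUT uploads from the local filesystem to the table's internal stage (`@%table`), which every table has by default.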