2
votes

I need to export all the data from a node in my Firebase Realtime Database, but it seems the file is too large to be downloaded through the "Export" option in the Firebase console. Therefore, I've tried to download the JSON file through the Firebase REST API using curl, with this call:

curl --globoff -k -o dr.json "https://mydatabase.firebaseio.com/data.json?format=export"

This command works for files up to a certain size (0–275 MB). Unfortunately, it does not work for the main file I want to download, which is approximately 450 MB. I get this error when trying to download it:

{ "error" : "Payload is too large"}

I've also tried to get the file by splitting it up with a download range, but it still gives me the same "Payload is too large" error:

curl --range 0-55555555 --globoff -k -o dr.json "https://mydatabase.firebaseio.com/data.json?format=export"

Any help would be really appreciated.

Thanks

Have you tried using the Firebase CLI? firebase.googleblog.com/2017/12/… - Doug Stevenson
@DougStevenson I just tried it out and I still get the same error: Error: HTTP Error: 413, Payload is too large. I've tracked down the error in the HTTP status and error codes for JSON (cloud.google.com/storage/docs/json_api/v1/…), and it suggests that I use the "Rewrite" method instead - Soulemane Diaoune
Any idea how I could implement that for a Firebase export? - Soulemane Diaoune
Also, I've unsuccessfully tried to copy the dataset to another Firebase project. Same error. - Soulemane Diaoune
The error codes you've linked seem to be related to the Cloud Storage API. Here are the Realtime Database ones: firebase.google.com/docs/reference/rest/database/… I think they forgot to document your error, or maybe they never expected it to happen. - Rosário Pereira Fernandes
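
For reference, the CLI attempt discussed above would look roughly like this (a sketch only; database:get and its --output flag come from firebase-tools and may differ between versions, and my-project is a placeholder project ID):

firebase database:get /data --output dr.json --project my-project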

3 Answers

1
votes

An easy way to split the file is by using the shallow URI parameter. For example, if you have a data structure like this:

{
    "data":{
        "users":{
            //...dataset
        },
        "posts":{
            //...dataset
        },
        "comments":{
            //...dataset
        }
    }
}

When you run curl --globoff -k -o dr.json "https://mydatabase.firebaseio.com/data.json?shallow=true&format=export" it will return:

{
    "data":{
        "users":true,
        "posts":true,
        "comments":true
    }
}

You can then download the users node using something like:

curl --globoff -k -o dr.json "https://mydatabase.firebaseio.com/data/users.json?format=export"

And the same happens for the posts and comments nodes.
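
For example, with the same placeholder database URL (writing each node to its own output file instead of overwriting dr.json):

curl --globoff -k -o posts.json "https://mydatabase.firebaseio.com/data/posts.json?format=export"

curl --globoff -k -o comments.json "https://mydatabase.firebaseio.com/data/comments.json?format=export"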

Hopefully these nodes are not too large. But if one of them is, you can use the shallow parameter again to split it into smaller pieces:

curl --globoff -k -o dr.json "https://mydatabase.firebaseio.com/data/users.json?shallow=true&format=export"
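
To automate this, here is a minimal sketch that lists the children of /data via shallow=true and then downloads each child node to its own file. It assumes curl and jq are installed, and that the key names are safe to use as URL path segments and file names:

#!/usr/bin/env bash
# List the top-level keys under /data, then export each child node separately.
BASE="https://mydatabase.firebaseio.com/data"

for key in $(curl -s --globoff -k "${BASE}.json?shallow=true" | jq -r 'keys[]'); do
  curl --globoff -k -o "${key}.json" "${BASE}/${key}.json?format=export"
done
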
0
votes

I also faced this problem and got around it by using Postman's Import feature, since downloading a large JSON file can fail partway through. You can paste the usual curl command into it; you just need to click "Save Response" once the response has arrived. Postman may freeze the UI for a while when previewing the JSON, but you don't need to worry about that.

0
votes

For anyone looking for alternative solutions, you can use the shallow feature as described here:

shallow

This is an advanced feature, designed to help you work with large datasets without needing to download everything. To use it, add shallow=true as a parameter. This will limit the depth of the data returned. If the data at the location is a JSON primitive (string, number, or boolean) its value will simply be returned. If the data snapshot at the location is a JSON object, the values for each key will be truncated to true.

Example: https://mydatabase.firebaseio.com/.json?shallow=true&format=export
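
For example, with the database structure from the first answer, a shallow request at the root would return just the top-level keys truncated to true (a sketch; the exact output depends on your data):

curl --globoff -k "https://mydatabase.firebaseio.com/.json?shallow=true"

{ "data": true }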