For whatever reason, the svn solution does not work for me, and since I have no need of svn for anything else, it did not make sense to spend time trying to make it work, so I looked for a simple solution using tools I already had. This script uses only curl and awk to download all files in a GitHub directory described as "/repos/:user/:repo/contents/:path".
A call to the GitHub REST API command "GET /repos/:user/:repo/contents/:path" returns an object that includes a "download_url" link for each file in the directory.
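For reference, each element of that response looks roughly like the fragment below (the file and repository names are made up for illustration); the script only cares about the "download_url" field:

{
  "name": "example.txt",
  "path": "docs/example.txt",
  "type": "file",
  "download_url": "https://raw.githubusercontent.com/someuser/somerepo/main/docs/example.txt"
}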
This command-line script calls that REST API using curl and pipes the result through awk, which filters out all but the "download_url" lines, strips the quote marks and commas from the links, and then downloads each link with another call to curl.
curl -s https://api.github.com/repos/:user/:repo/contents/:path | awk \
'/download_url/ { gsub("\"|,", "", $2); system("curl -O " $2); }'
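For example, to pull everything under a docs folder of a hypothetical repository someuser/somerepo into the current directory (substitute your own user, repo, and path):

curl -s https://api.github.com/repos/someuser/somerepo/contents/docs | awk \
'/download_url/ { gsub("\"|,", "", $2); system("curl -O " $2); }'

Note that the contents endpoint lists only one directory level, so subdirectories would need their own calls, and unauthenticated requests are subject to GitHub's rate limits.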