166
votes

How do I use wget to get all the files from a website?

I need all files except the web page files like HTML, PHP, ASP, etc.

8
Even if you want to download the PHP, it is not possible using wget; wget can only get the raw HTML the server outputs. I guess you know the reason. - Venkateshwaran Selvaraj
NB: Always check with wget --spider first, and always add -w 1 (or more, e.g. -w 5) so you don't flood the other person's server. - isomorphismes
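
A minimal sketch following that advice (the URL is a placeholder): do a dry run first, then download politely with a wait between requests.

# Dry run: nothing is saved, wget only checks what is there
wget --spider -r -np http://example.com/files/
# Real run, with a 1-second pause between requests
wget -r -np -w 1 http://example.com/files/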
How could I download all the pdf files in this page? pualib.com/collection/pua-titles-a.html - user4328677
Stack Overflow is a site for programming and development questions. This question appears to be off-topic because it is not about programming or development. See What topics can I ask about here in the Help Center. Perhaps Super User or Unix & Linux Stack Exchange would be a better place to ask. Also see Where do I post questions about Dev Ops? - jww

8 Answers

273
votes

To filter for specific file extensions:

wget -A pdf,jpg -m -p -E -k -K -np http://site/path/

Or, if you prefer long option names:

wget --accept pdf,jpg --mirror --page-requisites --adjust-extension --convert-links --backup-converted --no-parent http://site/path/

This will mirror the site, but files without a jpg or pdf extension will be automatically removed.
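
For instance, to grab only the PDFs linked under a single path (as asked in the comments above), a sketch along these lines should work; the URL is a placeholder and the 1-second wait is just to be polite to the server:

wget -A pdf -m -p -E -k -K -np -w 1 http://example.com/collection/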

86
votes

This downloaded the entire website for me:

wget --no-clobber --convert-links --random-wait -r -p -E -e robots=off -U mozilla http://site/path/
63
votes
wget -m -p -E -k -K -np http://site/path/

The man page will tell you what those options do.
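
As a hedged summary of the man page, here is what those short options expand to:

# -m  (--mirror)           recursive download with timestamping, infinite depth
# -p  (--page-requisites)  also fetch the images/CSS needed to render each page
# -E  (--adjust-extension) save text/html files with an .html extension
# -k  (--convert-links)    rewrite links so the local copy works offline
# -K  (--backup-converted) keep the original file as *.orig before converting
# -np (--no-parent)        never ascend above the starting directory
wget -m -p -E -k -K -np http://site/path/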

wget will only follow links: if there is no link to a file from the index page, wget will not know about its existence and hence will not download it. That is, it helps if all files are linked to from web pages or from directory indexes.
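
To see what wget can actually reach before downloading anything, a dry-run sketch like this may help (the exact log format varies slightly between wget versions):

# List the URLs wget would visit, without saving anything
wget --spider -r -l 2 -np http://site/path/ 2>&1 | grep '^--' | awk '{print $3}' | sort -u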

26
votes

I was trying to download the zip files linked from Omeka's themes page - a pretty similar task. This worked for me:

wget -A zip -r -l 1 -nd http://omeka.org/add-ons/themes/
  • -A: only accept zip files
  • -r: recurse
  • -l 1: one level deep (i.e., only files directly linked from this page)
  • -nd: don't create a directory structure, just download all the files into this directory.

All the answers with -k, -K, -E etc. options probably haven't really understood the question, as those are for rewriting HTML pages to create a local structure, renaming .php files, and so on. Not relevant here.

To literally get all files except .html etc:

wget -R html,htm,php,asp,jsp,js,py,css -r -l 1 -nd http://yoursite.com
7
votes

You may try:

wget --user-agent=Mozilla --content-disposition --mirror --convert-links -E -K -p http://example.com/

You can also add:

-A pdf,ps,djvu,tex,doc,docx,xls,xlsx,gz,ppt,mp4,avi,zip,rar

to accept only those specific extensions, or to reject specific extensions:

-R html,htm,asp,php

or to exclude specific directories:

-X "search*,forum*"

If the files are disallowed by robots.txt (e.g., for search engines), you also have to add: -e robots=off
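
Putting those pieces together, a hedged sketch (example.com, the extension list, and the excluded directories are placeholders):

wget --user-agent=Mozilla --content-disposition --mirror --convert-links -E -K -p \
     -A pdf,zip -X "search*,forum*" -e robots=off http://example.com/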

5
votes

Try this; it always works for me:

wget --mirror -p --convert-links -P ./LOCAL-DIR WEBSITE-URL
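
For example, with the placeholders filled in (example.com and ./mirror are hypothetical values):

wget --mirror -p --convert-links -P ./mirror http://example.com/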
5
votes
wget -m -A '*' -pk -e robots=off www.mysite.com/

This will download all file types locally, point to them from the HTML files, and ignore the robots.txt file.

2
votes

I know this topic is very old, but I landed here in 2021 looking for a way to download all the Slackware files from a mirror (http://ftp.slackware-brasil.com.br/slackware64-current/).

After reading all the answers, the best option for me was:

wget -m -p -k -np -R '*html*,*htm*,*asp*,*php*,*css*' -X 'www' http://ftp.slackware-brasil.com.br/slackware64-current/

I had to use *html* instead of just html to avoid downloads like index.html.tmp.
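
If your wget is recent enough (1.14+), a regex-based reject is an alternative sketch for the same idea (untested against this mirror):

wget -m -p -k -np --reject-regex '\.(html?|asp|php|css)' -X 'www' http://ftp.slackware-brasil.com.br/slackware64-current/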

Please forgive me for resurrecting this topic; I thought it might be useful to someone other than me, and my question is very similar to @Aniruddhsinh's.