I am trying to mirror a website at the moment. `wget` seems to do the job very well; however, it isn't working on some pages.
According to the manual, the command

wget -r https://www.gnu.org/

should download the GNU page, and it actually does. However, if I use another site, for example the homepage of my personal website, this no longer works:
wget -r https://my-personal.website
The `index.html` is downloaded, but none of the CSS/JS files, let alone anything from the recursive download. All I end up with is the single `index.html`.
I've tried setting the User-Agent with the `-U` option, but that didn't help either. Am I missing an option, and is that what causes `wget` to stop after the `index.html`?
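For completeness, the User-Agent attempt looked roughly like this (the exact browser string I used may have differed; this is just a typical one):

wget -r -U "Mozilla/5.0 (X11; Linux x86_64; rv:115.0) Gecko/20100101 Firefox/115.0" https://my-personal.website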
UPDATE: I've also tried the `--mirror` option, which is not working either and shows the same behavior.
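Concretely, that attempt looked like this (just the option swapped in, same URL, nothing else changed):

wget --mirror https://my-personal.website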