
So I want to scrape one site, but when I iterate over the result pages, after a few requests (about 30 at most) requests.get throws this error:

requests.exceptions.TooManyRedirects: Exceeded 30 redirects

The search URL gets redirected to the main page URL, and every subsequent URL behaves the same until I connect to a different VPN. Even when I spoof the user agent and rotate proxies from a list of free proxies, requests still get redirected after a few of them. I have never run into a problem like this while scraping before. What is the best way to bypass this "redirect block"? allow_redirects=False doesn't work here either.

import requests
import random
import time

agents = [...] # List of user agents

for i in range(1, 100):
    url = "https://panoramafirm.pl/odpady/firmy,{}.html".format(i)
    r = requests.get(url, headers={"User-Agent": random.choice(agents)})
    print(r.status_code)
    time.sleep(random.randint(10, 15))  # random delay between requests

1 Answer


Since you are using requests, you can make use of the allow_redirects=False option: the redirect is then returned to you as a 3xx response instead of being followed, so you can inspect its status code and Location header to see when the block kicks in, rather than letting requests follow the loop until it raises TooManyRedirects.
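A minimal sketch of that idea, assuming the block manifests as a redirect back to the site root (the helper name looks_blocked and the status-code set are my own, not part of requests):

```python
# With allow_redirects=False, requests hands back the 3xx response
# itself instead of following it, so the block can be detected from
# the status code and the Location header.
BLOCK_STATUSES = {301, 302, 303, 307, 308}

def looks_blocked(response):
    """Return True if the response redirects back to the site's main page."""
    location = response.headers.get("Location", "")
    return (response.status_code in BLOCK_STATUSES
            and location.rstrip("/").endswith("panoramafirm.pl"))

# Usage sketch inside your loop (requires the requests package):
#   r = requests.get(url, headers={"User-Agent": agent}, allow_redirects=False)
#   if looks_blocked(r):
#       ...back off or rotate proxy before retrying...
#   else:
#       ...parse r.text as usual...
```

Instead of bypassing the block, this lets you notice it on the first redirected response and back off (or switch exit IPs) before the server escalates.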