0 votes

I am building a C# .NET web scraping/crawler application that keeps sending requests to a server to collect information. The problem is that for certain web pages on this particular server the response is always a 404 Not Found. Surprisingly, though, I've discovered that as long as Fiddler is running the problem vanishes and the request returns a successful response. I've been searching the web for an answer but found none. On the brighter side, after searching the web and analysing Fiddler's timeline feature, I have come to some conclusions:

1. Fiddler loads these web pages using buffered mode, while my application uses streamed mode.
2. Fiddler also appears to reuse the connection; in other words, Keep-Alive is set to true.

Now the question is: how can I mimic or simulate the way Fiddler loads the response in buffered mode, and does Fiddler actually do some trick (i.e. modify the response) to get the correct response? I am using HttpWebRequest and HttpWebResponse to request my pages. I need a way to buffer the HttpWebResponse completely before returning data to the client (which is my server); a rough sketch of what I mean follows my current code below.

    public static String getCookie(String username, String password)
    {
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create("certain link");

        request.UserAgent = "Mozilla/5.0 (Windows NT 6.0; rv:6.0.2) Gecko/20100101 Firefox/6.0.2";
        request.Credentials = new NetworkCredential(username, password);

        // Dispose the response so the underlying connection is released.
        using (HttpWebResponse wr = (HttpWebResponse)request.GetResponse())
        {
            String y = wr.Headers["Set-Cookie"].ToString();
            return y.Replace("; path=/", "");
        }
    }

    /// <summary>
    /// Requests the HTML source of a given web page, using the request credentials given.
    /// </summary>
    /// <param name="username">Account username, or null for an anonymous request.</param>
    /// <param name="password">Account password, or null for an anonymous request.</param>
    /// <param name="webPageLink">URL of the page to download.</param>
    /// <returns>The HTML source of the page.</returns>
    public static String requestSource(String username, String password, String webPageLink)
    {
        String source = "";

        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(webPageLink);

        if (username != null && password != null)
        {
            request.Headers["Cookie"] = getCookie(username, password);
            request.UserAgent = "Mozilla/5.0 (Windows NT 6.0; rv:6.0.2) Gecko/20100101 Firefox/6.0.2";
            request.Credentials = new NetworkCredential(username, password);
        }

        // Dispose both the response and the reader so the connection is released.
        using (HttpWebResponse wr = (HttpWebResponse)request.GetResponse())
        using (StreamReader sr = new StreamReader(wr.GetResponseStream()))
        {
            source = sr.ReadToEnd();
        }

        return source;
    }
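
To make the goal concrete, here is a rough sketch of what I am after: reading the entire response into memory before handing anything to my client, with Keep-Alive enabled to match what I observed in Fiddler (requestBuffered and the buffer size are just placeholders):

    // Sketch: read the whole response into memory (buffered) before returning
    // it, with Keep-Alive enabled to mimic Fiddler's connection reuse.
    // Requires System.Net and System.IO.
    public static byte[] requestBuffered(String webPageLink)
    {
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(webPageLink);
        request.KeepAlive = true; // reuse the connection, as Fiddler appears to
        request.UserAgent = "Mozilla/5.0 (Windows NT 6.0; rv:6.0.2) Gecko/20100101 Firefox/6.0.2";

        using (HttpWebResponse wr = (HttpWebResponse)request.GetResponse())
        using (Stream rs = wr.GetResponseStream())
        using (MemoryStream ms = new MemoryStream())
        {
            // Drain the stream completely before handing anything back.
            byte[] chunk = new byte[8192];
            int read;
            while ((read = rs.Read(chunk, 0, chunk.Length)) > 0)
                ms.Write(chunk, 0, read);
            return ms.ToArray();
        }
    }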
Comment (score 2): FWIW, buffering isn't what was causing the change in behavior; there's something else going on. FWIW, you really need to call .Close() on the object returned from GetResponseStream. That trips many people up. – EricLaw

2 Answers

0 votes

Did you take a look at HttpWebRequest's AllowWriteStreamBuffering property? You could also try appending all of Fiddler's headers to your request so it matches Fiddler's request as closely as possible.
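
For example, something along these lines might get the request closer to what Fiddler sends (a sketch only; webPageLink comes from the question's code, and the header values are typical Firefox defaults rather than the actual captured ones):

    HttpWebRequest request = (HttpWebRequest)WebRequest.Create(webPageLink);

    // Buffer data written to the request stream (mainly relevant for POSTs).
    request.AllowWriteStreamBuffering = true;
    request.KeepAlive = true;

    // Mirror the headers Fiddler shows on its working request; the values
    // below are just typical Firefox examples, not the real captured ones.
    request.UserAgent = "Mozilla/5.0 (Windows NT 6.0; rv:6.0.2) Gecko/20100101 Firefox/6.0.2";
    request.Accept = "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8";
    request.Headers["Accept-Language"] = "en-us,en;q=0.5";
    request.AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate;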

0 votes

Could it be that your scraper is being detected and shut down, and that Fiddler slows it down just enough that it isn't detected? http://google-scraper.squabbel.com/
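
If so, simply pacing the requests might be enough to avoid detection. A minimal sketch, reusing the question's requestSource method (linksToScrape and the 2-second delay are placeholders to tune):

    // Sketch: pace the requests so the scraper looks less like a bot.
    // linksToScrape is a hypothetical list; tune the delay to the target site.
    foreach (String link in linksToScrape)
    {
        String html = requestSource(username, password, link);
        // ... process html ...
        System.Threading.Thread.Sleep(2000); // pause 2 seconds between requests
    }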