
I'm working on an AJAX-crawlable (Google AJAX-crawling) website, but some things are unclear to me. On the back-end of the application I filter out the _escaped_fragment_ parameter and return an HTML snapshot as expected.

When calling the URLs manually as shown below there are no problems:

(1) animals#!dogs

(2) animals?_escaped_fragment_=dogs

When viewing the page source of option (1), the content is loaded dynamically; with option (2), the page source contains the HTML snapshot. So far so good.
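For reference, the AJAX-crawling scheme maps the hashbang URL to the escaped-fragment URL mechanically. A minimal sketch of that translation (the function name `toEscapedFragmentUrl` is mine, for illustration):

```javascript
// Sketch of the URL translation the AJAX-crawling scheme describes:
// everything after "#!" is moved into an _escaped_fragment_ query parameter.
function toEscapedFragmentUrl(url) {
  const i = url.indexOf('#!');
  if (i === -1) return url; // not an AJAX-crawlable URL, leave untouched
  const base = url.slice(0, i);
  const fragment = url.slice(i + 2);
  const sep = base.includes('?') ? '&' : '?'; // append to an existing query string if present
  return base + sep + '_escaped_fragment_=' + encodeURIComponent(fragment);
}

// toEscapedFragmentUrl('animals#!dogs') → 'animals?_escaped_fragment_=dogs'
```

This is exactly the conversion options (1) and (2) above exercise by hand.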

The problem is that, when using Fetch as Google as suggested (Google Fetch), the spider only seems to crawl option (1), as if the hashbang (#!) never gets converted by the AJAX crawler. Even hard-coding die("AJAX test"); inside the function dealing with the _escaped_fragment_ does not reflect in the result generated by the spider.

So far I have done everything according to Google's guidelines, and the only lead I have on this problem is a subpage on the Google forums: Fetch as Google ignoring my hashtag. If this is the case, it would mean there is no accurate way of testing what the Google bot would see until the changes have gone live and the page is re-indexed?

Other pages, such as How to Test If Googlebot Can Access Your AJAX Content and the Google page itself, suggest that this can be tested using Fetch as Google.

The information seems to contradict itself, and I have no idea whether my AJAX content will be crawled correctly by the Google bot. Hopefully someone with more knowledge on the subject can help me out.

Please just let hashbangs die in a fire. They are a horrible hack and have been superseded by the history API. – Quentin
@Quentin I agree. You should read about window.history as Quentin says. – tomloprod

1 Answer


Hashbangs have been abandoned. Push states (the History API's pushState) are the friendlier alternative.
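A minimal sketch of what pushState-based navigation looks like, replacing the `#!dogs` fragment with a clean path. The names `navigate`, `pathFor`, and `loadContent` are hypothetical placeholders for your own routing and rendering code:

```javascript
// Pure helper: build a clean path for a section (no "#!").
function pathFor(section) {
  return '/animals/' + encodeURIComponent(section);
}

// Hypothetical renderer: fetch and render the section via AJAX.
function loadContent(section) {
  // e.g. fetch('/api/' + section).then(...) and update the DOM
}

// Navigate without a hashbang: update the address bar, then render.
function navigate(section) {
  history.pushState({ section }, '', pathFor(section));
  loadContent(section);
}

// Guarded so the snippet is also loadable outside a browser.
if (typeof window !== 'undefined') {
  // Restore state on back/forward navigation.
  window.addEventListener('popstate', (e) => {
    loadContent(e.state ? e.state.section : 'home');
  });
}
```

Because the server can respond to `/animals/dogs` directly with full HTML, crawlers see real URLs and no `_escaped_fragment_` machinery is needed.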