0
votes

I have tried to set my site up (http://www.diablo3values.com) according to the guidelines set out here: https://developers.google.com/webmasters/ajax-crawling/. However, it appears that Google has updated its index (because I see the revisions to the meta description tags), but the AJAX content does not show up in the index.

I am trying to use the "Handle pages without hash fragments" option.

If you view either of the following:

http://www.diablo3values.com/?_escaped_fragment_=

http://www.diablo3values.com/about?_escaped_fragment_=

you will correctly see the HTML snapshot with my content (those are the two pages I am most concerned about).

Any ideas? Am I doing something wrong? How do you get Google to correctly recognize the tag?

2

2 Answers

7
votes

I'm typing this as an answer, since it got a little too long to be a comment.

First of all, your links seem to point to localhost:8080/about, and not /about, which is probably why Google doesn't index it in the first place.

Second, here's my experience with pushState URLs and Google AJAX crawling:

My experience is that AJAX crawling with pushState URLs is handled a little differently by Google than with hashbang URLs. Since Google won't know that your URL is a pushState URL (it looks just like a regular URL), you need to add <meta name="fragment" content="!"> to all your pages, not only the "root" page.

Google also doesn't seem to know that the pages are part of the same application, so it treats every page as a separate AJAX application. This means the Googlebot will never actually build a navigation structure inside _escaped_fragment_, like _escaped_fragment_=/about, as it would with a hashbang URL (#!/about). Instead, it will request /about?_escaped_fragment_= (which you apparently already have set up). This goes for all your "deep links": instead of /?_escaped_fragment_=/thelink, Google will always request /thelink?_escaped_fragment_=.
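To make the pushState mapping concrete, here is a minimal sketch of the rewrite Googlebot performs on a pushState URL. The function name is hypothetical; it only illustrates the mapping described above, not code you would ship.

```typescript
// Sketch: how Googlebot turns a pushState URL into the snapshot request.
// Because a pushState URL looks like a regular URL, the whole path is
// kept and "?_escaped_fragment_=" is appended with an empty value.
function escapedFragmentUrl(pushStateUrl: string): string {
  const sep = pushStateUrl.includes("?") ? "&" : "?";
  return pushStateUrl + sep + "_escaped_fragment_=";
}

// So /about is fetched as /about?_escaped_fragment_= ,
// never as /?_escaped_fragment_=/about
```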

But as I said initially, the reason it doesn't work for you is probably that you have localhost:8080 URLs in the HTML generated for your _escaped_fragment_ requests.

1
votes

Googlebot only knows to crawl the escaped fragment if your URLs conform to the hashbang standard. As users navigate your site, your URLs need to be:

http://www.diablo3values.com/
http://www.diablo3values.com/#!contact
http://www.diablo3values.com/#!about

Googlebot actually needs to see these URLs in the source code so that it can follow them. Then it knows to download the following URLs:

http://www.diablo3values.com/?_escaped_fragment_=contact
http://www.diablo3values.com/?_escaped_fragment_=about
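The hashbang-to-snapshot mapping above can be sketched as a small helper (the function name is hypothetical; it just mirrors the scheme): everything after "#!" becomes the value of the _escaped_fragment_ parameter.

```typescript
// Sketch of the hashbang mapping from the AJAX crawling scheme:
// "#!about" on a page becomes "?_escaped_fragment_=about".
function hashBangToEscapedFragment(url: string): string {
  const i = url.indexOf("#!");
  if (i === -1) return url; // not a hashbang URL; crawled as-is
  const base = url.slice(0, i);
  const fragment = url.slice(i + 2);
  const sep = base.includes("?") ? "&" : "?";
  return base + sep + "_escaped_fragment_=" + encodeURIComponent(fragment);
}
```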

On your site you appear to be loading a new page on each click, and then loading the content of each page via AJAX as well. This is not how I would expect an AJAX site to work. Usually the point of AJAX is that the user never has to load a whole new page: when the user clicks, the new content section is fetched and inserted into the existing page. You serve the navigation once, and then you only serve escaped fragments of the content.
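One way to reconcile "serve the navigation once, then only serve content fragments" is to reuse the same snapshot endpoint for both Googlebot and the AJAX client. This is a sketch under that assumption (the routing helper and its convention are hypothetical): each click resolves the hashbang to the escaped-fragment content URL and fetches only that.

```typescript
// Sketch: resolve a hashbang location to the content URL the client
// should fetch. The convention of reusing the _escaped_fragment_
// snapshot endpoint as the AJAX content source is an assumption,
// not part of the crawling scheme itself.
function contentUrlFor(hash: string): string {
  // "#!about" -> "/?_escaped_fragment_=about"; no hashbang -> home content.
  const page = hash.startsWith("#!") ? hash.slice(2) : "";
  return "/?_escaped_fragment_=" + encodeURIComponent(page);
}
```

With this, a click handler would fetch contentUrlFor(location.hash) and insert the result into the page, so the full page shell is only ever loaded once.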