0
votes

I've developed a website with a core of non-AJAX pages, plus a branch of AJAX pages with URLs like:

 http://mywebsite.com/frameworks/#!ajaxpageaddress1
 http://mywebsite.com/frameworks/#!ajaxpageaddress2

I've set up the site to deliver an HTML snapshot when it receives `_escaped_fragment_` from the Google crawler.
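Under that (now deprecated) AJAX crawling scheme, the crawler rewrites a `#!` URL into a query-string form before requesting it, so the server can tell crawler requests apart. A minimal sketch of that rewrite, using the URL from above:

```python
from urllib.parse import quote

def escaped_fragment_url(hashbang_url):
    """Map a #! URL to the ?_escaped_fragment_= form the crawler requests.

    The fragment after #! is moved into an _escaped_fragment_ query
    parameter, with special characters percent-encoded. The server is
    expected to return the HTML snapshot for such requests.
    """
    base, sep, fragment = hashbang_url.partition("#!")
    if not sep:
        return hashbang_url  # no hashbang: nothing to rewrite
    return base + "?_escaped_fragment_=" + quote(fragment, safe="")

# The crawler would request this URL for the first AJAX page:
print(escaped_fragment_url("http://mywebsite.com/frameworks/#!ajaxpageaddress1"))
# -> http://mywebsite.com/frameworks/?_escaped_fragment_=ajaxpageaddress1
```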

I've tested the AJAX pages using Fetch as Google, which correctly returns the HTML snapshot.

I've submitted a sitemap with the hashbang addresses.
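For reference, the sitemap entries use the pretty `#!` URLs (not the `_escaped_fragment_` form); a sketch with the addresses from above:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://mywebsite.com/frameworks/#!ajaxpageaddress1</loc>
  </url>
  <url>
    <loc>http://mywebsite.com/frameworks/#!ajaxpageaddress2</loc>
  </url>
</urlset>
```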

I've followed every instruction I could find, but Google only indexes the non-AJAX pages (and doesn't report any crawl errors).

Has anyone experienced this, or can spot an obvious step that I need to take?

Thanks, Jeremy


1 Answer

0
votes

First of all, you need to use #! instead of !# ;-)

I've created a solution for this that you can find on GitHub, either as a guide or to use as-is: https://github.com/kubrickology/Logical-escaped_fragment

Simply use `__init()` and `__update()` from that project to AJAXify your pages.