21
votes

We are trying to decide, as a team, between an Angular-based client-side MVC approach and a Node.js / Express.js server-side rendering approach.

Our Angular app downloads as a single index.html and makes XHR requests to populate the page. Since we need pages to be pre-rendered for SEO, we use PhantomJS to save a copy of every page to a location on the server whenever the content changes.

Are there any examples of full-page Backbone or Angular applications that people can point us to, so we can see whether others are doing this?

Alternatively, are there examples of server-side rendered Node.js applications we can look at in the wild?

Lastly, does anyone have opinions on this sort of architecture?

4
Can you explain a little more about what you mean by saving off pages to a location on the server when the content changes? How do you allow users to link to a specific "page" of the web application? Or does this not apply? – dqhendricks
Is this going to be a website or a web application? – Tyson Nero
Hi there, I would recommend reading this very informative blog post on the MVC frameworks out there. It does not cover Express, but it gives a lot of fodder to think about before building MVC applications, and which approaches suit which applications: coding.smashingmagazine.com/2012/07/27/…. – Sachin

4 Answers

34
votes

I've worked on both mostly-server-rendering and mostly-client-rendering applications. Each type has its own advantages and disadvantages, but the idea that you have to choose one or the other is a false dichotomy. If you have the resources, you can combine both and get the best of both worlds.

I see 4 main challenges with purely client-side frameworks:

  • SEO and Analytics
  • Caching
  • Memory
  • Latency

SEO

Because you are using Node.js, the SEO problem can be mitigated by running the same client-side framework on the server to output static pages for Googlebot and company. Google has recently released a nice Analytics API for single-page applications, but wiring that up will be a little more work than simply adding a couple of lines to the end of your master template.
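
As a rough sketch of that idea, assuming Express and a directory of pre-generated snapshot files (the paths, bot list, and file layout here are illustrative, not from the question):

    // Illustrative Express middleware: crawlers get pre-rendered HTML from a
    // snapshots/ directory, normal browsers get the regular single-page index.html.
    var express = require('express');
    var path = require('path');
    var app = express();

    app.use(function (req, res, next) {
      var fragment = req.query._escaped_fragment_; // ?_escaped_fragment_=... from AJAX-crawling bots
      var isBot = /googlebot|bingbot/i.test(req.headers['user-agent'] || '');
      if (fragment !== undefined || isBot) {
        // Map the requested route onto a snapshot generated ahead of time.
        var route = fragment || req.path;
        return res.sendFile(path.join(__dirname, 'snapshots', route, 'index.html'));
      }
      next();
    });

    app.use(express.static(path.join(__dirname, 'public')));
    app.listen(3000);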

Caching

Caching is a really important way to speed up any web application. For small amounts of data it can be faster to cache on the client, in memory or in localStorage, but localStorage space is very limited (currently about 5 MB), and cache invalidation is pretty hard to do there.
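
A minimal sketch of what client-side caching with invalidation might look like (the key scheme and TTL here are illustrative assumptions):

    // Illustrative localStorage cache with time-based invalidation.
    var TTL_MS = 5 * 60 * 1000; // treat entries older than five minutes as stale

    function cacheGet(key) {
      var raw = localStorage.getItem(key);
      if (!raw) return null;
      var entry = JSON.parse(raw);
      if (Date.now() - entry.savedAt > TTL_MS) {
        localStorage.removeItem(key); // stale entry: invalidate it
        return null;
      }
      return entry.value;
    }

    function cacheSet(key, value) {
      try {
        localStorage.setItem(key, JSON.stringify({ savedAt: Date.now(), value: value }));
      } catch (e) {
        // QuotaExceededError: the ~5 MB limit was hit, so just skip caching.
      }
    }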

Memory

Memory is something I've paid dearly for overlooking. Before I knew it I had accidentally built an application that takes up more than 200 MB of RAM. I might be able to bring that down by half with optimizations, but I doubt it would have taken more than 20 MB if I had rendered it all on the server.

Latency

Latency is also easy to miss. Drupal, for example, runs about 50 to 100 SQL queries for each page. When the database server is right next to the application server, you don't have to worry about latency, and all those queries can execute in less than a couple hundred milliseconds. A client-side application, by contrast, will usually take around a hundred milliseconds for a single AJAX request. This means you need to spend a lot of time designing your server-side API to minimize these round trips; but at that point the server already has all the data it needs to just generate the HTML too. Having a client-side application that talks to a properly RESTful interface can turn out to be glacially slow if you are not careful.
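
One common way to cut those round trips is a coarse, page-shaped endpoint that bundles everything the view needs. Here is a sketch under assumed names (loadArticle, loadComments, and loadRelatedArticles are hypothetical data-access functions):

    // Illustrative Express endpoint: one request returns everything the page needs,
    // instead of the client making three separate AJAX calls.
    var express = require('express');
    var app = express();

    app.get('/api/article-page/:id', function (req, res, next) {
      // The queries run in parallel right next to the database, where latency is tiny.
      Promise.all([
        loadArticle(req.params.id),
        loadComments(req.params.id),
        loadRelatedArticles(req.params.id)
      ]).then(function (results) {
        res.json({ article: results[0], comments: results[1], related: results[2] });
      }).catch(next);
    });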

37signals recently blogged about the hybrid client/server architecture they implemented for the new version of Basecamp. This hybrid approach uses the server to render HTML, but leverages something like PJAX on the client to get rid of full page refreshes. The effect is really fast, and it's what I recommend.
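
The PJAX part of that approach boils down to something like the following sketch (the container id and request header are made up for illustration):

    // Rough sketch of the PJAX idea: intercept same-origin link clicks, fetch the
    // new page's HTML from the server, and swap only the main container so the
    // browser never does a full page refresh.
    document.addEventListener('click', function (e) {
      var link = e.target.closest('a');
      if (!link || link.origin !== location.origin) return;
      e.preventDefault();
      fetch(link.href, { headers: { 'X-PJAX': 'true' } }) // server can return a partial for PJAX requests
        .then(function (res) { return res.text(); })
        .then(function (html) {
          document.getElementById('main').innerHTML = html;
          history.pushState({}, '', link.href); // keep the URL bookmarkable
        });
    });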

8
votes

With Node.js on the server, you can in principle use the same code to render on the client as well as on the server. Frameworks that implement this approach are Meteor and Derby, which also do transparent synchronization of data models between the client and the server. Both are still considered alpha, but they already seem to work quite well.

Meanwhile, both client- and server-side rendering have pros and cons:

  • Client-side rendering has the disadvantage that the initial page load takes a long time, but once all the resources are loaded the user can navigate the site seamlessly without full page reloads. You might want to minimize the number of Ajax calls and/or use a client-side cache, e.g. cache data in an Angular.js service or controller (see the sketch after this list).
  • Server-side rendering provides a fast initial page load and is good for SEO, but every time the user navigates, the whole page goes blank for a second while the new URL loads.
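
A minimal AngularJS caching sketch, with the service and URL names invented for illustration:

    // Let $http memoise GET responses so repeat navigation to the same data
    // is served from memory instead of a new round trip.
    angular.module('app').factory('ArticleService', ['$http', function ($http) {
      return {
        get: function (id) {
          // cache: true stores the response in the default $http cache
          return $http.get('/api/articles/' + id, { cache: true })
            .then(function (response) { return response.data; });
        }
      };
    }]);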

So it all depends on whether you want a fast initial page load but don't expect users to stay long (then use server-side rendering), or whether the initial load time is not that important (as in Gmail) because users will navigate around for a long time (then use client-side rendering).

5
votes

We are currently testing this crazy approach: we have an AngularJS app that runs on the client. When we detect Googlebot as the user agent, we spin up a PhantomJS instance and respond to the crawler with its output. The tricky part is knowing when your client app has finished loading, so that you know when to capture and return the page. If you do that before the client-side JS app has loaded, the crawler won't get much data back, mostly just the index.html.
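
The waiting part can be handled by polling for a readiness flag. Here is a rough sketch, assuming the Angular app sets window.prerenderReady = true once its data has loaded (that flag name is an illustrative convention, not taken from our code):

    // PhantomJS script: open the page, poll until the app signals it's ready,
    // then print the fully rendered HTML for the crawler.
    var system = require('system');
    var page = require('webpage').create();
    var url = system.args[1] || 'http://localhost:3000/#!/';

    page.open(url, function (status) {
      if (status !== 'success') { phantom.exit(1); }
      var poll = setInterval(function () {
        var ready = page.evaluate(function () { return window.prerenderReady === true; });
        if (ready) {
          clearInterval(poll);
          console.log(page.content); // rendered HTML, ready to send back to the bot
          phantom.exit();
        }
      }, 100);
    });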

A simple implementation can be found here: http://pastebin.com/N3w2iyr8

UPDATE: At the time I wrote the original answer, nothing like prerender.io existed, but I can point you to it now.

0
votes

My solution for making an Angular application crawlable by Google, used on aisel.co:

  1. Snapshots handled by https://github.com/localnerve/html-snapshots
  2. Add this rule to your .htaccess:

    # If the crawler requested ?_escaped_fragment_=... and the URI is not already
    # a snapshot, rewrite the request to the matching pre-rendered snapshot file.
    RewriteCond %{QUERY_STRING} ^_escaped_fragment_=(.*)$
    RewriteCond %{REQUEST_URI} !^/snapshots/views/ [NC]
    RewriteRule ^(.*)/?$ /snapshots/views/%1 [L]
    
  3. Create a node.js script for the snapshots and run it in the terminal: node snapshots.js

    var htmlSnapshots = require('html-snapshots');

    var result = htmlSnapshots.run({
        input: "array",
        source: [
            "http://aisel.dev/#!/",
            "http://aisel.dev/#!/contact/",
            "http://aisel.dev/#!/page/about-aisel"
        ],
        outputDir: "web/snapshots",
        outputDirClean: true,
        selector: ".navbar-header", // wait for this element before taking the snapshot
        timeout: 10000
    }, function(err, snapshotsCompleted) {
        // html-snapshots writes the "#!" part of the URL into the path; rename it to /views
        var fs = require('fs');
        fs.rename('web/snapshots/#!', 'web/snapshots/views', function(err) {
            if (err) console.log('ERROR: ' + err);
        });
    });
    
  4. Make sure everything works with curl; type in the terminal:

    curl http://aisel.dev/\?_escaped_fragment_\=/page/about-aisel/

This should show the contents of the snapshot at .../www/aisel.dev/public/web/snapshots/views/page/about-aisel/index.html

Don't forget about the directive for Google and other crawlers: your app should contain this meta rule in the head:

    <meta name="fragment" content="!">

The full specification from Google is here: https://developers.google.com/webmasters/ajax-crawling/docs/specification