
Why we left AngularJS: 5 surprisingly painful things about client-side JS

When we opened up Sourcegraph to the public a few months ago, it was a rich AngularJS app. The server delivered the initial HTML page and hosted JSON endpoints. AngularJS did everything else.

But rich client-side JavaScript frameworks aren’t a good fit for every site, especially content sites like Sourcegraph, as we soon learned. Here are some of the unexpectedly painful things we experienced along the way. We hope this is helpful to other developers who are facing a similar decision.

Next week we’ll talk more about how we made the transition from AngularJS to server-side Go templates.

The 5 things about client-side JS frameworks that were surprisingly painful

We knew about many of these pain points in advance, but we didn’t know how painful they would be.

1. Bad search ranking and Twitter/Facebook previews

Search engine crawlers and social preview scrapers can’t load JavaScript-only sites, and serving them alternate versions is complex and slow.

There are 2 ways to allow crawlers to read your site. You can run a browser instance on your server that runs your app’s JavaScript and dumps the resulting HTML from the DOM (using PhantomJS or WebLoop). Or you can create an alternate, server-generated HTML version of your site intended for crawlers.

The former option requires installing WebKit (and possibly Xvfb) on your server and spawning a browser for each page load. (You can cache the pages, but that’s merely an optimization and introduces further complexity.) This will slow down your page loads by a couple of seconds, which harms your search engine rankings.

The latter option (making an alternate server-side site) suffices for simple sites, but it’s a nightmare when you have many kinds of pages. And if Google deems your alternate site to be too different from your main site, it will severely penalize you. You won’t know you’ve crossed the line until your traffic plummets.
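If you do go the alternate-site route, the usual first step is sniffing crawler user agents and routing them to the server-generated pages. A minimal sketch (the `isCrawler` helper and the bot patterns are illustrative, not an exhaustive list):

```javascript
// Hypothetical check for routing crawler requests to a prerendered,
// server-generated version of a page. Real deployments match far more
// bots than these few patterns.
const CRAWLER_PATTERN = /googlebot|bingbot|twitterbot|facebookexternalhit/i;

function isCrawler(userAgent) {
  return CRAWLER_PATTERN.test(userAgent || '');
}

// In an Express-style handler you might branch on it:
//   if (isCrawler(req.headers['user-agent'])) res.send(prerenderedHtml);
//   else res.send(appShellHtml);
```

Every branch like this is another way for the crawler-facing site to drift from the real one, which is exactly the divergence Google penalizes.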

2. Flaky stats and monitoring

Most analytics tools require error-prone, manual integration to use the HTML5 history API (pushState) for navigation. This is because they’re unable to automatically detect when your app navigates to a new page using pushState. Even if they were able, they’d still need to wait for a signal from your app to collect other information about the new page (such as the page title and other page-specific metrics you might be tracking).

How do you fix this? The solution depends on both your client-side routing library and the particular analytics tool you want to integrate. Using Google Analytics with Backbone.js? You’ll want a plugin such as backbone.analytics. Using Heap (which is awesome, BTW) with UI-Router? Set up your own $stateChangeSuccess hook and call heap.track.

You’re not done yet. Are you tracking the initial page load? Perhaps you’re now double-tracking it? Are you tracking page load failures? What about when you use replaceState instead of pushState? It’s tough to even know if you have misconfigured your analytics hooks—or if a dependency upgrade breaks them—without frequently cross-checking your analytics. And when you discover an issue, it’s tough to recover the analytics data you missed (or eliminate duplicates).
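The core of most of these integrations is the same trick: wrapping pushState so every client-side navigation also notifies the analytics tool. A sketch of that idea (the `instrumentHistory` name and the callback shape are hypothetical):

```javascript
// Wrap a History-like object's pushState so each client-side navigation
// also fires an analytics callback. Hypothetical helper, not any
// particular library's API.
function instrumentHistory(history, onNavigate) {
  const originalPushState = history.pushState.bind(history);
  history.pushState = function (state, title, url) {
    originalPushState(state, title, url);
    onNavigate(url); // e.g. send a pageview or a heap.track call here
  };
}
```

Note what this sketch still misses: the initial page load, replaceState navigations, and failed loads, which is exactly the class of gaps described above.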

3. Slow, complex build tools

Front-end JavaScript build tools, such as Grunt, require complex configuration and can be slow. Wonderful projects like ng-boilerplate exist to save you from mucking around with configuration, but they’re still slow, and you can’t avoid the complexity when you want to add a custom build step. (To see what I mean by the complexity, take a look at ngbp’s Gruntfile.)

Once you’ve configured your app perfectly, with your Gruntfiles and all, you still must endure slow JavaScript build times. You can separate your dev and production build pipelines to improve dev speed, but that’s going to bite you later on. This is especially so with AngularJS, which requires the use of ngmin before uglifying code (if you rely on AngularJS’s implicit, parameter-name-based dependency-injection annotations). In fact, we broke Sourcegraph several times because the uglified JavaScript behaved differently from our dev JavaScript.
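To see why minification breaks implicit DI, here is a toy injector that infers dependencies from parameter names the way AngularJS does (this is a simplified illustration, not Angular's actual implementation):

```javascript
// Toy injector: read a function's parameter names from its source text,
// the same basic trick AngularJS uses for implicit DI annotations.
function inferDeps(fn) {
  const match = fn.toString().match(/\(([^)]*)\)/);
  return match[1].split(',').map(s => s.trim()).filter(Boolean);
}

function ctrl($scope, $http) {}
inferDeps(ctrl); // ['$scope', '$http'] — the injector knows what to provide

// After uglification the parameters are renamed, e.g. function (a, b) {},
// and the dependency information is gone:
function mangled(a, b) {}
inferDeps(mangled); // ['a', 'b'] — no longer real service names
```

ngmin's job was to rewrite implicit annotations into the explicit array form (`['$scope', '$http', function ($scope, $http) {...}]`), which survives renaming. Forgetting that step is how production builds diverge from dev builds.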

Things are getting better, though. Gulp is a huge improvement.

4. Slow, flaky tests

Testing a JavaScript-only site requires using a browser-based test framework, such as Selenium, PhantomJS, or WebLoop. Installing these usually means installing WebKit and Java dependencies, configuring Xvfb, and perhaps running a local VNC client and server for testing. Finally, you need to set all of those up on the continuous integration server you use as well.

In contrast, testing server-generated pages usually only requires libraries to fetch URLs and parse HTML, which are much simpler to install and configure.

Once you start writing browser tests, you must deal with asynchronous loading. You can’t test a page element that hasn’t loaded yet, but if it doesn’t load within a certain timeout, then your test should fail. Browser test libraries provide helper functions for this, but they can only help so much on complex pages.
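The helper functions those libraries provide mostly boil down to polling with a deadline, something like this (the `waitFor` name and defaults are illustrative):

```javascript
// Sketch of a "wait until the element appears or time out" helper of the
// kind browser test libraries provide. Hypothetical, simplified version.
function waitFor(predicate, { timeout = 5000, interval = 100 } = {}) {
  return new Promise((resolve, reject) => {
    const deadline = Date.now() + timeout;
    (function poll() {
      if (predicate()) return resolve(true);
      if (Date.now() >= deadline) return reject(new Error('waitFor: timed out'));
      setTimeout(poll, interval);
    })();
  });
}
```

When nearly every assertion has to be wrapped in a poll-with-timeout like this, tests get slower (each wait adds latency) and flakier (any timeout chosen too tight fails intermittently under load).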

What do you get when you combine a heavyweight browser test harness (Selenium, plus either Firefox or WebKit) and much greater test complexity (due to the asynchronous nature of browser tests)? Your tests will require more configuration, take much longer to run, and be much flakier.

5. Slowness is swept under the rug, not addressed

In a rich JavaScript app, page transitions usually happen immediately, and then every distinct element on the page loads asynchronously from the server. In a server-side app, the opposite is generally true: the page isn’t even sent to the client until all of the data is loaded on the server.

That sounds like a win for client-side apps, but it can actually be a curse in disguise.

Consider a client-side JS app that gives the appearance of loading a page immediately after the user clicks a link. Suppose the user navigates to a page with a sidebar that is populated with data that takes 5 seconds to load. The app feels fast at first glance, but if you’re a user who needs the information in the sidebar, the site feels painfully slow to you. Even if the particular content you want loads instantly, you still have to endure the spinning loading indicators and the post-load jitter as the page is filled in.

Now consider the developer who wants to add a new feature to that page. It’s harder to argue that her feature must load quickly—it’s all asynchronous, so who cares if something at the bottom of the page loads a few seconds later? Repeat this a few times and the whole site starts to feel laggy and jittery.

In a server-side app, if one API call was slow, the whole page would block until it finished. It’s impossible to ignore server-side slowness because it’s easier to measure and affects everyone equally. But it’s easier to ignore slowness on a client-side JS app.

You can argue that a good development team should avoid these mistakes, and that the client-side JavaScript framework isn’t the culprit. That’s true, but on the margin, a client-side JavaScript framework reduces the cost of slowness. That alters the incentives for any development team.

What’s next?

None of these issues is a huge problem by itself. There were many things we could have done (and did do) to mitigate them. (Or we could have used a different framework.) Taken together, though, these issues meant that client-side JS frameworks were a big burden on our development.

Also, keep in mind that each site is different. In particular, Sourcegraph is a content site, which means its pages don’t change much after loading (compared to a rich application). We still love AngularJS, JavaScript, Selenium, PhantomJS, Grunt, etc., but they just weren’t the right tools for our site.

Check back next week for more about how we made the transition away from AngularJS to Go’s template library.