Until recently, a client-side web application couldn't do much about SEO. The spiders Google uses to index content didn't know how to handle the complexities of Angular or Ember. When Googlebot came along, it expected your best stuff, not an empty DOM. That left you with two options:
- Don't use client-side rendering for content that needs to be indexed
- Build a second version of your system just to serve spiders (sketched below)
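That second option usually meant sniffing the crawler's user agent on the server and handing it a pre-rendered HTML snapshot. Here's a rough sketch of the idea, assuming an Express-style server; the `renderSnapshot` helper and snapshot cache are hypothetical stand-ins for whatever headless-browser pipeline produced the static pages:

```typescript
import express from "express";

const app = express();

// Crude crawler detection; real lists were longer and constantly stale.
const SPIDER_UA = /googlebot|bingbot|baiduspider/i;

// Hypothetical snapshot store: pre-rendered HTML per route, captured
// ahead of time with a headless browser.
const snapshotCache = new Map<string, string>();

function renderSnapshot(path: string): string {
  return snapshotCache.get(path) ?? "<!doctype html><title>Not found</title>";
}

app.get("*", (req, res) => {
  const ua = req.headers["user-agent"] ?? "";
  if (SPIDER_UA.test(ua)) {
    // Spiders get static HTML with the content already in the DOM...
    res.send(renderSnapshot(req.path));
  } else {
    // ...everyone else gets the normal client-side app shell.
    res.sendFile("index.html", { root: "dist" });
  }
});

app.listen(3000);
```

Maintaining that second rendering pipeline was exactly the pain everyone wanted to stop paying for.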
Then, in 2014, Google announced that Googlebot had started executing JavaScript and rendering pages much like a modern browser, and framework authors were thrilled:

> Good news for SEO on AngularJS sites!

*Brad Green, AngularJS*
Eighteen months later, we wanted to crunch some data and see how Googlebot stacks up against real browsers when it comes to JavaScript errors.
Just over 20% of our customers capture errors from the Googlebot browser. Within individual applications, Googlebot has different failure modes than Chrome: we see different errors, at different stages of execution, from the two browsers. Worse, Googlebot doesn't seem to send a stack trace along with its errors. This is very concerning to us: how do we even begin to debug Googlebot issues when we can't attach a debugger to its browser?
Googlebot also appears to vary its user agent when indexing a page. About 15% of the time, it emulates a mobile browser while crawling, presumably to verify that the mobile content meets Google's new mobile-friendly standards.
*TrackJS Anonymized Data, July 2015*
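Seeing this in your own data takes nothing exotic: hook the global error handler, record whatever stack is available, and tag each report with the browser family parsed from the user agent. A minimal sketch, assuming a hypothetical `/errors` collection endpoint (the field names here are invented for illustration):

```typescript
// Classify the reporting browser from its user-agent string.
function browserFamily(ua: string): string {
  if (/googlebot/i.test(ua)) {
    // Per the data above, roughly 15% of Googlebot traffic crawls
    // with a mobile user agent.
    return /mobile/i.test(ua) ? "Googlebot (mobile)" : "Googlebot (desktop)";
  }
  if (/chrome/i.test(ua)) return "Chrome";
  return "other";
}

window.onerror = (message, source, line, column, error) => {
  const report = {
    message: String(message),
    source,
    line,
    column,
    // In our data, errors reported from Googlebot arrive without a stack.
    stack: error?.stack ?? "(no stack trace)",
    browser: browserFamily(navigator.userAgent),
  };
  // Hypothetical collection endpoint; any beacon or XHR would do.
  navigator.sendBeacon("/errors", JSON.stringify(report));
};
```

With Googlebot, the interesting signal is usually the absence of the `stack` field, which is exactly what makes these reports so hard to act on.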
Google itself acknowledges that rendering can fail:

> Sometimes things don't go perfectly during rendering, which may negatively impact search results for your site.

*Google Webmaster Central*
But if you have that specialized use case and need a client-side rendered app with SEO, you'll need to know when Googlebot is having problems. You'll probably also want to know when your users are having problems. Let us help, with 30 days of free error tracking and bug fixing.