Track performance by user?

Thanks to the awesome folks at SOASTA we’re now using their mPulse system instead of our own boomerang install. This gives us two major wins. First, the tracking code is included via the non-blocking asynchronous iframe method, which gives the best possible loading performance at this point. Second, we can actually see the data. Previously we had boomerang data but no real visibility into it; we were collecting it without using it, which was a total waste.
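For anyone curious, the general shape of the async-iframe pattern is roughly this (a simplified sketch of the technique, not the actual mPulse snippet; the script URL is a placeholder):

```javascript
// Simplified sketch of the non-blocking async-iframe loader pattern.
// NOT the actual mPulse snippet; the script URL below is a placeholder.
(function () {
  // An iframe with nothing to fetch costs essentially nothing to create.
  var frame = document.createElement('iframe');
  frame.src = 'javascript:void(0)';
  frame.style.cssText = 'width:0;height:0;border:0;display:none';
  document.body.appendChild(frame);

  // Start the script download from the iframe's own onload, so it
  // stays off the parent page's critical path.
  var doc = frame.contentWindow.document;
  doc.open();
  doc.write('<body onload="var s = document.createElement(\'script\');' +
            's.src = \'https://example.com/boomerang.js\';' +
            'document.body.appendChild(s);">');
  doc.close();
})();
```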

mPulse tracks the median page load time. Looking at our stats today, I started wondering what the data looks like per user. For example, do users with faster connections typically hit more pages? If they do, the median of our per-user average load times is actually lower than our median page load time.

Take two users, Alice and Bob. Alice is on her desktop in London with a 100Mbps line. She visits 8 pages, and the average load time for her 8 pageviews is 1.2s. Bob is on his iPad over 3G in Alabama. (We’re in the UK, so London is closer!) Bob visits 4 pages, and the average load time for those 4 pages is 2.3s. The arithmetic mean across all 12 pageviews is somewhere in between the two (about 1.57s), but the median is one of Alice’s pageviews, because she contributes 8 of the 12.

What would be really interesting is to group pageviews by user: count up all the Alices and Bobs, then calculate the median (and the 95th and 99th percentiles) over their per-user averages. That would let me say something like: 50% of our users saw an average page load under 1.4s, and 95% under 8s.
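To make the difference concrete, here’s a quick sketch of both calculations over the Alice/Bob numbers above (the individual load times are invented to match the stated averages):

```javascript
// Sketch: median over raw pageviews vs. median over per-user averages.
// Load times (seconds) are invented to match the averages above.
var pageviews = [
  // Alice: 8 pageviews averaging 1.2s
  { user: 'alice', load: 1.0 }, { user: 'alice', load: 1.1 },
  { user: 'alice', load: 1.1 }, { user: 'alice', load: 1.2 },
  { user: 'alice', load: 1.2 }, { user: 'alice', load: 1.3 },
  { user: 'alice', load: 1.3 }, { user: 'alice', load: 1.4 },
  // Bob: 4 pageviews averaging 2.3s
  { user: 'bob', load: 2.0 }, { user: 'bob', load: 2.2 },
  { user: 'bob', load: 2.4 }, { user: 'bob', load: 2.6 }
];

// Linearly interpolated percentile (p in 0..100).
function percentile(values, p) {
  var s = values.slice().sort(function (a, b) { return a - b; });
  var rank = (p / 100) * (s.length - 1);
  var lo = Math.floor(rank);
  var hi = Math.ceil(rank);
  return s[lo] + (s[hi] - s[lo]) * (rank - lo);
}

// 1. Median over raw pageviews -- what mPulse shows today.
var perPage = pageviews.map(function (pv) { return pv.load; });
console.log(percentile(perPage, 50)); // 1.3 -- one of Alice's pageviews

// 2. Median over per-user average load times -- what I'd like to see.
var byUser = {};
pageviews.forEach(function (pv) {
  (byUser[pv.user] = byUser[pv.user] || []).push(pv.load);
});
var perUser = Object.keys(byUser).map(function (u) {
  var sum = byUser[u].reduce(function (a, b) { return a + b; }, 0);
  return sum / byUser[u].length;
});
console.log(percentile(perUser, 50)); // 1.75 -- halfway between Alice and Bob
// The same function gives the tails: percentile(perUser, 95), percentile(perUser, 99).
```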

Having said all that, the data might actually look very similar to what we’re currently seeing. I’ll try to dig out some of our archived boomerang data, do some analysis, and post an update once I have more info.

All Magento pages from Varnish

I just realised I haven’t updated the site since our last big development. We’re now serving almost all of our pages from Varnish; crude research suggests around 90% of our pageviews now come from Varnish. In simple terms, we did the following:

  • Render the cart / account links from a cookie with JavaScript (sketched below)
  • Load the dynamic parts of every page via Ajax, so whole pages can be cached (with a few exceptions we manually exclude)
  • Cache page variants for currency and tax rate
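The first point is the trick that makes the rest possible: the HTML is identical for every visitor, and the only personalised fragments are drawn client-side from a cookie the application sets. A rough sketch of the idea (the cookie name, its JSON payload, and the element IDs are all hypothetical, not our actual code):

```javascript
// Sketch of rendering the personalised header bits from a cookie.
// The cookie name ("cart_summary"), its payload, and the element IDs
// are hypothetical.
function readCookie(name) {
  var match = document.cookie.match(new RegExp('(?:^|; )' + name + '=([^;]*)'));
  return match ? decodeURIComponent(match[1]) : null;
}

document.addEventListener('DOMContentLoaded', function () {
  // e.g. the app sets: cart_summary = {"items": 3, "customer": "Alice"}
  var raw = readCookie('cart_summary');
  var summary = raw ? JSON.parse(raw) : { items: 0, customer: null };

  document.getElementById('cart-link').textContent =
    'Basket (' + summary.items + ')';
  document.getElementById('account-link').textContent =
    summary.customer ? 'Hi ' + summary.customer : 'Log in';
});
```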

We’re also warming / refreshing the cache with a bash script that parses the sitemap and hits every URL with a PURGE then a GET.
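The script itself is only a few lines; something along these lines (the sitemap URL is a placeholder, and it assumes Varnish is configured to accept PURGE requests from this host):

```bash
#!/bin/bash
# Sketch of the warm/refresh loop: parse the sitemap, then PURGE and
# re-GET each URL. The sitemap URL is a placeholder.
SITEMAP="https://www.example.com/sitemap.xml"

# Pull every <loc> entry out of the sitemap XML...
curl -s "$SITEMAP" | grep -oP '(?<=<loc>)[^<]+' | while read -r url; do
  # ...evict any cached copy, then fetch a fresh one into the cache.
  curl -s -o /dev/null -X PURGE "$url"
  curl -s -o /dev/null "$url"
done
```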

The hardest part of this whole performance effort has been measuring the impact of our changes. But our TTFB was previously in the 300-500ms range for most pages, and it’s now in the 20-30ms range for pages served from Varnish. I’m very confident that it’s helping our bottom line.
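For anyone wanting to do a similar before/after spot-check, curl will report TTFB directly (the URL is a placeholder):

```bash
# %{time_starttransfer} is curl's time-to-first-byte measurement.
curl -s -o /dev/null -w 'TTFB: %{time_starttransfer}s\n' "https://www.example.com/"
```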