JSS v15 – What’s new in SSR?

ICYMI, Sitecore JSS v15 dropped recently, the second release to support Sitecore 10.  Reading the release notes, there is a clear focus on performance, along with a whole bunch of improvements to the SDK and sample apps.  Given I’ve been playing a lot lately with the Node SSR proxy, a few changes piqued my interest.

Keep-alive connection pooling

In a headless architecture, performance of the node SSR proxy is paramount.  JSS v15 introduces changes to the HTTP agent, implementing agentkeepalive to support connection pooling for server-side HTTP requests.  Requests to the layout service and GraphQL endpoints (via the Apollo client) will now pool TCP connections, minimising the overhead of recreating a connection on every request.  Under increased load, this overhead is likely to degrade performance significantly and can, in some cases, cause port exhaustion for outbound connections and make an application non-responsive.  I’m looking forward to testing the impacts of these changes with a prod load 🤞🚀   As a side note, there is a recent KB article demonstrating an approach to resolving this issue in previous JSS versions. Although it explicitly calls out Azure App Services, this should have positive perf impacts in any environment…so check it out.

CSP removed in sample

I wrote about this one previously.  Sitecore introduced a Content Security Policy (CSP) header with 9.3+ CM/Standalone instances, which could cause some strange CSP issues in local XP0 dev environments.  While removing the CSP without replacing it is probably not the intention in higher environments, the node proxy sample does now highlight where a new one (or any other headers) can be injected.  See my previous post for more info.
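If you do want to reinstate a CSP (or inject any other header) at the proxy tier, a minimal Express-style middleware does the job. This is a sketch, assuming the Express setup used by the JSS node proxy sample; the policy string itself is an example and must be tuned to your solution:

```javascript
// Sketch: injecting a Content-Security-Policy header from the node proxy.
// Register this middleware before the SSR proxy middleware (app.use).
function securityHeaders(req, res, next) {
  res.setHeader(
    'Content-Security-Policy',
    "default-src 'self'; script-src 'self' 'unsafe-inline'" // example policy only
  );
  next();
}

// Usage (in the sample's index/server file):
// app.use(securityHeaders);
```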

Memory Cache middleware

An optional middleware cache provider has been implemented in the node proxy sample. The SSR process is compute intensive, which can impact response times (Time to First Byte) and CPU/scaling costs. The cache middleware aims to address this with a simple in-memory output cache that caches responses using the URL as the key, in turn minimising the amount of SSR that needs to occur. It’s really important to understand that this is “whole hog” caching. The entire response body for a given request (the HTML of the SSR’d page) will be cached and used as the response for all future requests matching the key (URL by default) until the cache expires. Due to this there are some big limitations, which I’ll address below. Keep these in mind when considering implementing this feature.

The sample provided allows for configuration of the cache expiration duration (TTL) and URL pattern exclusion lists, and restricts caching to GET requests. All handy for tweaking to suit your solution requirements.
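To make the “whole hog” behaviour concrete, here’s a minimal sketch of this style of output cache as an Express-style middleware. The option names (`ttlMs`, `exclusions`) and the `Map`-based store are my own illustration, not the JSS sample’s exact API:

```javascript
// Sketch: in-memory output cache keyed by URL, GET-only, with a TTL
// and URL-pattern exclusions. A cache hit serves the stored HTML and
// skips SSR entirely.
const cache = new Map();

function cacheMiddleware({ ttlMs = 10000, exclusions = [] } = {}) {
  return (req, res, next) => {
    // Only cache GET requests, and skip any excluded URL patterns
    if (req.method !== 'GET' || exclusions.some((re) => re.test(req.url))) {
      return next();
    }
    const hit = cache.get(req.url);
    if (hit && hit.expires > Date.now()) {
      return res.send(hit.body); // whole cached response body, no SSR
    }
    // Cache miss: wrap res.send to capture the SSR'd body on the way out
    const send = res.send.bind(res);
    res.send = (body) => {
      cache.set(req.url, { body, expires: Date.now() + ttlMs });
      return send(body);
    };
    next();
  };
}
```

Note that the key is the URL alone: anything that varies per user on that URL (personalisation, tracking) is baked into whichever response happened to be cached first, which is exactly the source of the cons below.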

There are pros and cons of implementing output caching like this at the node tier. It’s definitely something to assess if it’s right for your solution.

Pros

  • Performance
    • Once a route is cached in memory the time to serve becomes tiny. Response time will be blazing fast 🚀🚀🚀
    • Reduced CPU usage – SSR processing can be CPU intensive. Node tier output caching will minimise the amount of required SSR processing, in theory saving dollarbucks on infrastructure costs.
  • Easy implementation – with the sample provided it’s easy to quickly implement and customise. Just uncomment one line! Other performance/caching improvements will usually require significant dev or infrastructure investment.

Cons

  • Personalisation
    • Well…here’s the kicker. Personalised content will not render correctly on initial page load, given the layout service may not be hit. There are workarounds to help mask this (e.g. re-rendering placeholders on the client), but out of the box…it’s a no go.
    • Also keep in mind if you are serving personalised content that those components may get cached and served to other users.
  • Analytics – usually the JSS layout service handles page event tracking. Again, now that the layout service is not hit for every “page” request, another solution (like using the JSS tracking API) would need to be implemented.
  • Security – No authorisation is performed on cached items. If a user/role has been denied access to a certain item, it may be available to them if it remains in the cache.
  • Fresh content – The node in-memory cache is only cleared when the cache expiry time is hit. Publishing an item in Sitecore will not refresh the node cache. The default expiry is set to 10 seconds, which would be acceptable for most “content freshness” scenarios, but the lower the TTL, the less benefit the cache provides.

These pros/cons only apply to the initial SSR (usually just the first request to an application)…after that the JavaScript application should take over in the browser and communicate with the layout service. However, it’s pretty likely customers have purchased Sitecore for many of these features. Ensure you understand the consequences of enabling this caching option OOTB. It’s not a performance silver bullet, but with some configuration and whitelisting it can benefit applications with cacheable content.

All in all, v15 has some really solid additions, and I’m looking forward to playing with the new features and seeing how far the perf improvements will take an SSR workload. For more information on JSS performance best practices, check out https://jss-docs-preview3.herokuapp.com/guides/performance/
