
What we learned from using Sanity and Next.js to create our new Resource Guru website

By Billy Moon

Resource Guru has just launched an ambitious, rebranded marketing website built with Sanity and Next.js. We have built a site that should serve us for many years to come, and one we can keep iterating on as we evolve as a company.

About the project

Choosing the stack

We decided early on that we wanted to use React as the UI framework, to benefit from its strong performance and the amazing ecosystem that has grown around it in recent years. We also wanted to take a headless approach to content management, to avoid being restricted by the constraints of a traditional all-in-one bundled CMS solution.

We evaluated several candidate CMS systems, using jamstack.org as an excellent resource to discover and compare options, before doing more research into the final shortlist. Whilst we found several systems that would handle our requirements perfectly well, we settled on Sanity, as our engineering team were impressed by its exceptionally well-architected solution. A few things unique to Sanity make it really stand out against the competition:

  • a great API that makes it very easy to integrate with automation, so we are not reliant on the UI to manage data in a pinch
  • a self-hosted Studio that we are in full control of - which was especially important to us, having had poor experiences with disruptive forced upgrades and migrations in other third-party services in the past
  • the ability to create completely custom input components, in a technically superior way to any competitor we evaluated
  • an active community and a responsive support team - and we tested the responsiveness before we made a commitment to use Sanity :)

Some members of the team already had good experience with Next.js which made it an easy choice to add to the stack. Some of the key benefits we were looking forward to exploiting were…

  • great performance
    • static exports
    • prefetch linked pages
    • code splitting
  • great ecosystem
    • excellent documentation
    • examples of many integrations
    • very large user base
For CSS, we decided to use Emotion, as it supports all the features of competing libraries, is well documented and under active maintenance, and did not add too much to the bundle size.

Modelling the content

We knew that we wanted the content to be exported to static JSON as part of our build process, as we did not want to introduce external systems to an optimised cloud CDN server setup. Because we only queried the data at build time - never from the client side or on a per-request basis - we did not have to worry about optimising the queries for speed, and could model the data purely to suit our content input needs.
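The build-time-only querying described above maps naturally onto Next.js data loading. The sketch below is illustrative, not the actual Resource Guru code: the `page` document type, `slug` field, and `shelves` array are assumed names, and the Sanity client is injected as a parameter so the loader can run at build time with `@sanity/client` or in tests with a stub.

```javascript
// Build a GROQ query for a page by slug (hypothetical schema names).
function pageQuery(slug) {
  return `*[_type == "page" && slug.current == "${slug}"][0]{ title, shelves }`;
}

// getStaticProps-style loader: Next.js would call this once per page at
// build time, so the browser never queries Sanity directly. The client is
// passed in rather than imported, to keep the sketch self-contained.
async function loadPageProps(client, slug) {
  const page = await client.fetch(pageQuery(slug));
  return { props: { page } };
}

module.exports = { pageQuery, loadPageProps };
```

In the real site this loader would be wired into a page's `getStaticProps`, with the query string adjusted to the actual document model.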

It took us a few tries to settle on a pattern that worked well for us, but once we did, we were able to re-use it for many components. We created a few global, top-level documents to cater for site-wide configuration and some page defaults. We then defined document types for different media types, and for any components we thought might be used in multiple places on the site.
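As a rough sketch of that pattern, a Sanity schema pairs a global settings document with reusable object types. The type and field names below (`siteSettings`, `heroShelf`, and so on) are invented for illustration, not taken from the actual project.

```javascript
// A site-wide configuration document (assumed names, for illustration).
const siteSettings = {
  name: 'siteSettings',
  type: 'document',
  title: 'Site settings',
  fields: [
    { name: 'siteTitle', type: 'string', title: 'Site title' },
    { name: 'defaultShelves', type: 'array', of: [{ type: 'heroShelf' }] },
  ],
};

// A reusable component type that can appear in multiple documents.
const heroShelf = {
  name: 'heroShelf',
  type: 'object',
  title: 'Hero shelf',
  fields: [
    { name: 'heading', type: 'string' },
    { name: 'image', type: 'image' },
  ],
};

module.exports = { siteSettings, heroShelf };
```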

The one killer feature we are still missing is the ability to create documents inline at the point where they are used as a reference. Without it, we always had to trade off reusable content items against ease of use for content editors. We did not always get that right: we have ended up with some document types that are used only once and would be better placed inline in another document model, and other models that include parts requiring duplicated configuration that would have been better as documents in their own right, referenced from other documents. Largely, though, our model was intuitive and easy enough for multiple CMS editors to use.

The bulk of our content populates a widgetized template in the form of stacked custom components we called shelves. By design, there was very little interaction between shelves, either in layout or in functionality, which made each one easy to design and develop as a discrete, atomic unit. This shelf system worked great for us, and it got even better when we realised (after seeing a screencast from Sanity's Knut) that we could put our shelves into a custom block content editor, which allowed us to drag and drop shelves, and to copy and paste them, even between different pages. Switching to a custom block content editor was shockingly simple: we added a block type to our shelf array and, without needing to touch the data, we were immediately able to interact with our shelves as components within the editor.
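The "added a block type to our shelf array" change might look something like this (shelf type names are hypothetical): the built-in `block` type sits alongside the custom shelf types in the array, which is what lets the block content editor treat shelves as draggable, copy-pastable components.

```javascript
// Sketch of a shelves field as block content (illustrative type names).
const shelvesField = {
  name: 'shelves',
  title: 'Shelves',
  type: 'array',
  of: [
    { type: 'block' },        // the one-line addition: standard block content
    { type: 'heroShelf' },    // custom shelf components, as before
    { type: 'dividerShelf' },
  ],
};

module.exports = { shelvesField };
```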

Building the components

We started off with base components, like the grid system, theme, typography and forms. Using those components we then developed about 25 different shelf components that were designed to fit together to form the complete pages.

Most of the components would focus on presenting content, but we also have some decorative components, for example the divider component which is designed to separate sections of content with geometric shapes that can themselves be defined in the CMS. This approach of adding material to the CMS that is for presentation rather than content is not ideal from a technical standpoint, but is a pragmatic solution that has worked really well for us.

We took the approach of extracting logic from presentation in our components, meaning we could render and test the UI separately by providing static props to the UI components, and then test the logic in unit tests independently of the UI.
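The shape of that split might look like the sketch below (names are invented, and JSX/React are elided so the example stays self-contained - the component is represented as a plain function of props): the logic lives in a pure function that can be unit tested on its own, while the presentational component simply renders whatever props it receives.

```javascript
// Logic: derive component-ready props from a raw CMS document
// (hypothetical field names).
function toHeroProps(doc) {
  return {
    heading: doc.heading || 'Untitled',
    imageUrl: doc.image ? doc.image.url : null,
  };
}

// Presentation: a stateless component of static props. In the real site
// this would return JSX; here it returns a plain description object so
// the sketch can run anywhere.
function Hero({ heading, imageUrl }) {
  return { element: 'section', heading, imageUrl };
}

module.exports = { toHeroProps, Hero };
```

Because `toHeroProps` is pure, its edge cases (missing image, empty heading) can be covered in fast unit tests, while `Hero` can be exercised separately with fixed props.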

Putting it all together

The last thing to do was to apply the CMS content to the components, which we did by mapping over the array of shelves and rendering each shelf with a component based on its type. However, in order to make the data conform exactly to the props required by the UI components, we added a post-processing layer to the CMS data. We used post-processing to recurse over the results of GROQ queries and, based on document type, replace the CMS data with derived data ready for components to consume. This gave us a chance to strip out unused properties, and also to make some substitutions based on environment - for example, defining the URL prefix for different parts of our application to be used in hyperlinks.
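A minimal version of such a post-processing pass might look like this. The transform table, the `appLink` type, and the environment-derived URL prefix are all assumptions for illustration; the real layer would carry one transform per relevant document type.

```javascript
// Illustrative environment-based substitution (assumed value).
const APP_URL_PREFIX = '/app';

// Per-type transforms: replace a CMS node with component-ready data.
const transforms = {
  appLink: (node) => ({
    href: `${APP_URL_PREFIX}${node.path}`,
    label: node.label,
  }),
};

// Recurse over a GROQ result, applying transforms by _type and
// stripping CMS metadata properties the components do not need.
function postProcess(value) {
  if (Array.isArray(value)) return value.map(postProcess);
  if (value && typeof value === 'object') {
    const transform = transforms[value._type];
    if (transform) return transform(value);
    const out = {};
    for (const [key, child] of Object.entries(value)) {
      if (key.startsWith('_')) continue; // drop unused metadata
      out[key] = postProcess(child);
    }
    return out;
  }
  return value;
}

module.exports = { postProcess };
```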

Using GROQ was a great experience: the expressiveness of the language, and the ability to query directly into a desired output JSON shape, allowed us to avoid writing a lot of mapping and formatting functions.
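As a taste of that shaping ability, a GROQ projection can rename fields and pull nested values into the exact JSON shape a component expects, with no post-query mapping. The query below is a hypothetical example, not taken from the site's actual queries.

```javascript
// A GROQ query that projects documents straight into the desired shape:
// quoted keys rename fields, and nested paths are flattened inline.
const query = `
  *[_type == "page"]{
    "slug": slug.current,
    title,
    "shelfTypes": shelves[]._type
  }
`;

module.exports = { query };
```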

Summary

Having a headless CMS gives your engineers more control over how to render the content, and having a CMS Studio that you can deploy and customise yourself gives more control over how you want your content editors to be able to work with the content. There is a definite overhead to working in a more sophisticated way, but usually the benefits will outweigh the cost in the long run.

Funnelling content from a headless CMS into a Next.js site works really well, and having many options at our disposal for how to render and deploy the site as a kind of static/server hybrid is great.

Both Next.js and Sanity themselves evolved as we developed the site, and it was easy to upgrade and leverage the new functionality. It's really good to know that your tech stack has got your back, not just today, but as the procession of time rolls forwards...
