Handling Stripe webhook errors with Netlify Serverless Functions and SANITY

9 replies
Last updated: Feb 11, 2022
Heyo!
I'm working with a serverless function and Stripe. After a successful purchase, Stripe hits a Netlify Serverless Function, which in turn calls SANITY and updates the purchased product in the backend.

I am sometimes seeing timeout errors, along with:

Client network socket disconnected before secure TLS connection was established

Do I need to wrap the client in an async/await statement? Or are there any other measures I can take to better handle this?
Here is what the full code looks like:

import Stripe from "stripe"
import { Handler } from "@netlify/functions"
import sanityClient from "@sanity/client"

// Initialize Stripe client
const stripe = new Stripe(process.env.STRIPE_SECRET_KEY, {
  apiVersion: "2020-08-27",
})
// Initialize SANITY client
const client = sanityClient({
  projectId: process.env.GATSBY_SANITY_ID,
  dataset: process.env.GATSBY_SANITY_DATASET,
  token: process.env.SANITY_TOKEN,
  useCdn: false, // `false` if you want to ensure fresh data
  apiVersion: "2022-02-01",
})

const handler: Handler = async ({ body, headers }) => {
  try {
    // check the webhook to make sure it's valid
    const stripeEvent = stripe.webhooks.constructEvent(
      body,
      headers["stripe-signature"],
      process.env.STRIPE_WEBHOOK_SECRET
    )

    // Only do stuff if this is a successful Stripe Checkout purchase
    if (stripeEvent.type === "checkout.session.completed") {
      // Get the session ID from the webhook, then call Stripe
      // to retrieve the full session data
      // We need the full line items from the completed purchase
      const eventObject: any = stripeEvent.data.object
      const sessionId: string = eventObject.id
      const sessionData: any = await stripe.checkout.sessions.retrieve(
        sessionId,
        {
          expand: ["line_items", "line_items.data.price.product"],
        }
      )
      const lineItems: any = sessionData.line_items.data

      // Loop over each line item and update the stock in SANITY if necessary
      lineItems.forEach((item) => {
        const sanityId = item.price.product.metadata.sanityId
        client
          .patch(sanityId) // Document ID to patch
          .dec({ stock: item.quantity }) // Decrement the stock field by the quantity purchased
          .commit() // Perform the patch and return a promise
          .catch((err) => {
            console.error("Oh no, the update failed: ", err.message)
          })
      })
    }

    return {
      statusCode: 200,
      body: JSON.stringify({ received: true }),
    }
  } catch (err) {
    console.log(`Stripe webhook failed with ${err}`)

    return {
      statusCode: 400,
      body: `Webhook Error: ${err.message}`,
    }
  }
}

export { handler }

AI Update

The issue you're experiencing is likely due to the forEach loop not waiting for the Sanity patch operations to complete before the serverless function returns. Your function is returning a 200 response immediately while the Sanity updates are still in progress, which can cause connection issues and timeouts.

Here are the key problems and solutions:

Main Issue: Fire-and-forget pattern

Your current code uses forEach with non-awaited promises. The function returns before the Sanity patches complete, causing the connection to close prematurely.

Solution: Use Promise.all with map

Replace your forEach loop with this:

// Wait for all patches to complete before returning
await Promise.all(
  lineItems.map(async (item) => {
    const sanityId = item.price.product.metadata.sanityId
    try {
      await client
        .patch(sanityId)
        .dec({ stock: item.quantity })
        .commit()
    } catch (err) {
      console.error(`Failed to update ${sanityId}: ${err.message}`)
      // Consider whether you want to throw here or just log
    }
  })
)

Additional recommendations:

  1. Move client initialization inside the handler - In some serverless environments, a client created outside the handler is reused across frozen invocations, and a stale connection can surface as socket or TLS errors:
const handler: Handler = async ({ body, headers }) => {
  const client = sanityClient({
    projectId: process.env.GATSBY_SANITY_ID,
    dataset: process.env.GATSBY_SANITY_DATASET,
    token: process.env.SANITY_TOKEN,
    useCdn: false,
    apiVersion: "2022-02-01",
  })
  
  // ... rest of your code
}
  2. Update your API version - You're using 2022-02-01, which is quite old. Consider using a more recent version such as 2024-01-01 or later.

  3. Consider using Sanity Functions instead - Since you're already using serverless functions, you might want to explore Sanity Functions, which are native to the platform and handle these connection issues more reliably. They're designed specifically for this type of webhook processing and provide better integration with Sanity's infrastructure.

The TLS connection error you're seeing is a symptom of the network connection being terminated while the client is still trying to communicate with Sanity's API. Making sure all operations complete with await before returning will resolve this.

Just a hunch, but you're probably getting rate limited. The lineItems.forEach will loop super fast and make multiple patch requests to the API, likely hitting the limit of 25 requests per second.
I think I'd build a transaction with all the patches instead and send it as one request.
Here's a pattern you can adopt.
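The linked pattern isn't quoted in the thread, but a minimal sketch of the one-transaction approach might look like this, assuming the same lineItems shape and metadata.sanityId field as the handler code above (decrementStock is a hypothetical helper name, not part of either API):

```typescript
// Sketch: batch all stock decrements into a single Sanity transaction
// instead of firing one patch request per line item.
// `client` is assumed to be the @sanity/client instance from the handler.
async function decrementStock(client: any, lineItems: any[]) {
  const tx = client.transaction()
  for (const item of lineItems) {
    const sanityId = item.price.product.metadata.sanityId
    // On a transaction, patch() accepts a patch-builder callback
    tx.patch(sanityId, (p: any) => p.dec({ stock: item.quantity }))
  }
  // One HTTP request; awaiting it keeps the function alive until it completes
  return tx.commit()
}
```

Because it's a single request, it also sidesteps the 25-requests-per-second concern entirely, and the handler only needs one await before returning its 200 response.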
Ah okay, that makes sense, because it is succeeding for one item but failing with multiple items!
Thank you!
Yarg! That wasn't it. I followed that code and it works fine locally - but I'm still getting weird errors in live mode.
Also seeing socket hang up errors
Ignore the Stripe Webhook signature errors, that is from testing locally and a mismatch in the webhook secret.
Any chance it would relate to using the CDN or not?
Got it! In my rewrite I missed an await!
Thank you, user Y!
