Filtering out duplicate slugs after a large data import

13 replies
Last updated: Sep 8, 2020
Hi all. We've run a large data import and one or two duplicate slugs may have been generated. Any ideas on a way to filter those out?
Sep 8, 2020, 9:16 AM
I guess you could do a GROQ query with something like this:
Sep 8, 2020, 9:36 AM
*[_type == 'something']{
  "hasDuplicateSlug": length(*[_type == 'something' && slug.current == ^.slug.current && _id != ^._id]) > 0
}
Sep 8, 2020, 9:36 AM
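A variant of the projection above that also returns the slug value (same hypothetical 'something' type; count() works on arrays like length() does) can make it easier to spot which slugs actually collide:

```groq
// Sketch: flag duplicates and show the colliding slug alongside each _id.
*[_type == 'something']{
  _id,
  "slug": slug.current,
  "hasDuplicateSlug": count(*[_type == 'something' && slug.current == ^.slug.current && _id != ^._id]) > 0
}
```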
Thanks for getting back to me, appreciate it! That came back as true for every document. I tried switching one of the == for != but got the opposite.
Sep 8, 2020, 9:59 AM
Will keep tinkering...
Sep 8, 2020, 10:00 AM
I updated it recently
Sep 8, 2020, 10:00 AM
So you did... one sec
Sep 8, 2020, 10:00 AM
I forgot you had to filter out the current document
Sep 8, 2020, 10:01 AM
Also, I had put the ] in the wrong place, updated just now. Sorry about that
Sep 8, 2020, 10:02 AM
Ahh, no need for apologies, I really appreciate the help. Unfortunately that timed out; we're running it against more than 4k entries.
Sep 8, 2020, 10:07 AM
Huh, try paginating it?

*[_type == 'something'][0..500]{
  "hasDuplicateSlug": length(*[_type == 'something' && slug.current == ^.slug.current && _id != ^._id]) > 0
}
Sep 8, 2020, 10:10 AM
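To avoid scanning the boolean flags by hand, a filtered variant (same hypothetical 'something' type and pagination window) could return only the documents whose slug collides:

```groq
// Sketch: keep only documents that share slug.current with another document.
*[_type == 'something' && count(*[_type == 'something' && slug.current == ^.slug.current && _id != ^._id]) > 0][0..500]{
  _id,
  "slug": slug.current
}
```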
Great minds, that's exactly what I'm just running πŸ˜‰
Sep 8, 2020, 10:11 AM
That's the ticket, thanks user J - I really appreciate that 👍
Sep 8, 2020, 10:14 AM