Hello guys,
We are currently using the compound operator below to partially match any chunk of text input, in any token order:
```javascript
compound: {
  must: query.split(' ').map(token => ({
    autocomplete: {
      query: token,
      path: 'name',
      tokenOrder: 'any',
    },
  })),
}
```
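For completeness, here is the fragment above in context as a full `$search` stage (the `index` name and the helper function are just placeholders for illustration):

```javascript
// Build the $search stage from a free-text query: one autocomplete clause
// per whitespace-separated token, all of which must match.
function buildSearchStage(query) {
  return {
    $search: {
      index: 'default', // placeholder index name
      compound: {
        must: query.split(' ').map(token => ({
          autocomplete: {
            query: token,
            path: 'name',
            tokenOrder: 'any', // match tokens regardless of their order
          },
        })),
      },
    },
  };
}

// e.g. buildSearchStage('blue shirt') produces two autocomplete clauses,
// one for 'blue' and one for 'shirt'.
```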
With this query we ensure that every word of the text input (split on whitespace) must match for a document to be returned, and each word can match partially, hence the `autocomplete` operator. We also set `tokenOrder` to `any` so the `autocomplete` operator matches regardless of word order.
This approach involves too many workarounds, and we'd prefer something cleaner.
Can you suggest a better solution? Maybe something like a custom analyzer with an `edgeGram` tokenizer? We're currently experimenting with custom analyzers, but it's still unclear to us how to build one that best suits our needs.
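For context, this is the general shape we've been sketching: a whitespace tokenizer combined with an `edgeGram` token filter, so each word is expanded into its prefixes at index time (the analyzer name, field name, and gram sizes below are placeholders we're still tuning, not a settled design):

```javascript
// Draft Atlas Search index definition: custom edgeGram analyzer at index
// time, standard analyzer at query time.
const indexDefinition = {
  mappings: {
    dynamic: false,
    fields: {
      name: {
        type: 'string',
        analyzer: 'nameEdgeGram',          // custom analyzer, applied at index time
        searchAnalyzer: 'lucene.standard', // plain analyzer for the query text
      },
    },
  },
  analyzers: [
    {
      name: 'nameEdgeGram', // placeholder name
      // Split on whitespace first, then expand each word into its prefixes,
      // so partial matches work per word regardless of word order.
      tokenizer: { type: 'whitespace' },
      tokenFilters: [
        { type: 'lowercase' },
        { type: 'edgeGram', minGram: 2, maxGram: 15 }, // gram sizes TBD
      ],
    },
  ],
};
```

With this index, a plain `text` query per token could replace the `autocomplete` clauses, since the prefix expansion happens at index time instead of query time.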
Best regards