When we make a query, in some cases we will want to modify the scores of the documents in the result. This tutorial shows how the function_score query can help us.
[Continue reading…] “Elasticsearch Compound Queries – Function Score Query”
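As a taste of what the tutorial covers, here is a minimal function_score sketch: it wraps an ordinary match query and boosts each hit by the value of a numeric field. The index fields (`title`, `likes`) are hypothetical examples, not from the tutorial itself.

```json
{
  "query": {
    "function_score": {
      "query": { "match": { "title": "elasticsearch" } },
      "functions": [
        { "field_value_factor": { "field": "likes", "modifier": "log1p", "missing": 1 } }
      ],
      "boost_mode": "multiply"
    }
  }
}
```

With `boost_mode: multiply`, each document’s relevance score is multiplied by `log1p(likes)`, so popular documents rank higher without changing which documents match.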
Searching natural language is imprecise because computers can’t fully comprehend it. A Fuzzy Query can find words that need at most a certain number of character changes to match. In this tutorial, we’re gonna look at how to use the Elasticsearch Fuzzy Query, which measures similarity by Levenshtein edit distance.
[Continue reading…] “Elasticsearch Term Level Queries – Fuzzy Query”
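For a quick idea of the syntax, a fuzzy query might look like the sketch below — the field name `title` and the misspelled search term are just illustrative:

```json
{
  "query": {
    "fuzzy": {
      "title": {
        "value": "elastcsearch",
        "fuzziness": "AUTO",
        "prefix_length": 1
      }
    }
  }
}
```

`fuzziness: "AUTO"` lets Elasticsearch pick the allowed edit distance based on term length, and `prefix_length` requires the first character to match exactly, which keeps the expansion cheap.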
We already know some basic Elasticsearch Multi Match Queries. This tutorial gives you more practice: how the Operator affects the Best Fields/Most Fields/Cross Fields types, how to use the Tie Breaker with the Cross Fields type, Fuzziness in the Multi Match Query…
[Continue reading…] “Elasticsearch Multi Match Query – More Practice”
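To preview the options the tutorial exercises, here is a sketch combining several of them in one best_fields query (the field names are hypothetical):

```json
{
  "query": {
    "multi_match": {
      "query": "quick brown fox",
      "type": "best_fields",
      "fields": ["title", "description"],
      "operator": "or",
      "fuzziness": "AUTO",
      "tie_breaker": 0.3
    }
  }
}
```

With best_fields, the score comes from the single best-matching field; `tie_breaker: 0.3` adds 30% of the other fields’ scores so multi-field matches still get some credit.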
In this tutorial, we’re gonna look at how to create an Elasticsearch Custom Analyzer.
[Continue reading…] “Elasticsearch Analyzers – Custom Analyzer”
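The general shape of a custom analyzer is a chain of character filters, one tokenizer, and token filters, declared in the index settings. A minimal sketch (the analyzer name is made up for illustration):

```json
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_custom_analyzer": {
          "type": "custom",
          "char_filter": ["html_strip"],
          "tokenizer": "standard",
          "filter": ["lowercase", "asciifolding"]
        }
      }
    }
  }
}
```

Here incoming text first has HTML markup stripped, is then split into words by the standard tokenizer, and finally each token is lowercased and has accents folded away.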
Elasticsearch Character Filters preprocess (by adding, removing, or changing characters) the stream of characters before it is passed to the Tokenizer. In this tutorial, we’re gonna look at 3 types of Character Filters: HTML Strip, Mapping, and Pattern Replace, which are very important for building Custom Analyzers.
[Continue reading…] “Elasticsearch Character Filters”
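You can try a character filter directly with the `_analyze` API. This sketch uses a Mapping character filter to replace emoticons with words before tokenization (the mappings are an illustrative example):

```json
{
  "tokenizer": "keyword",
  "char_filter": [
    { "type": "mapping", "mappings": [":) => happy", ":( => sad"] }
  ],
  "text": "I am :)"
}
```

Because the replacement happens before the tokenizer runs, the emitted token is `I am happy` — the tokenizer never sees the original `:)`.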
In this tutorial, we’re gonna look at some basic analyzers that Elasticsearch supports.
[Continue reading…] “Elasticsearch Analyzers – Basic Analyzers”
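The quickest way to see what any built-in analyzer does is the `_analyze` API; for example, to inspect the standard analyzer:

```json
{
  "analyzer": "standard",
  "text": "The QUICK Brown-Foxes!"
}
```

The response lists the tokens produced (`the`, `quick`, `brown`, `foxes`), showing that the standard analyzer splits on word boundaries and lowercases.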
In this tutorial, we’re gonna look at Structured Text Tokenizers, which are usually used with structured text such as identifiers, email addresses, zip codes, and paths.
[Continue reading…] “Elasticsearch Tokenizers – Structured Text Tokenizers”
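One of these tokenizers, Path Hierarchy, is easy to try with the `_analyze` API:

```json
{
  "tokenizer": "path_hierarchy",
  "text": "/usr/local/bin"
}
```

It emits every prefix of the path — `/usr`, `/usr/local`, `/usr/local/bin` — so a search for a parent directory also matches documents indexed under its subdirectories.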
In this tutorial, we’re gonna look at 2 tokenizers that can break text or words up into small fragments for partial word matching: the N-Gram Tokenizer and the Edge N-Gram Tokenizer.
[Continue reading…] “Elasticsearch Tokenizers – Partial Word Tokenizers”
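As a preview, here is an `_analyze` request defining an Edge N-Gram tokenizer inline (the gram sizes are illustrative choices):

```json
{
  "tokenizer": {
    "type": "edge_ngram",
    "min_gram": 2,
    "max_gram": 5,
    "token_chars": ["letter"]
  },
  "text": "search"
}
```

This produces the prefixes `se`, `sea`, `sear`, `searc`, which is why the Edge N-Gram Tokenizer is a common building block for search-as-you-type.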
A tokenizer breaks a stream of characters up into individual tokens (characters, words…), then outputs a stream of tokens. We can also use a tokenizer to record the order or position of each term (for phrase and word-proximity queries), or the start and end character offsets of the original word that the term represents (for highlighting search snippets).
In this tutorial, we’re gonna look at how to use some Word Oriented Tokenizers which tokenize full text into individual words.
[Continue reading…] “Elasticsearch Tokenizers – Word Oriented Tokenizers”
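You can see the positions and offsets described above by running a word-oriented tokenizer through the `_analyze` API:

```json
{
  "tokenizer": "standard",
  "text": "Elasticsearch is fast!"
}
```

Each token in the response carries a `position` (its order in the stream) and `start_offset`/`end_offset` (its character range in the original text) — exactly the information phrase queries and highlighters rely on.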