How to limit elastic suggestion with complex conditions?
Question
Limiting an elastic query is not complicated, for example:
` "query":{
"bool":{
"must":[
{"terms":{"brand":["micromax","samsung"]}},
{
"bool":{
"should":[
{ "range": { "price": { "gte": 6000, "lte": 10000 } } },
{ "range": { "price": { "gte": 16000, "lte": 30000 } } }
]
}
}
]
}
}`
But how can I limit an elastic suggestion with complex conditions (a condition with 'match', 'must', 'should', 'not', ... on different fields)?
Is it possible to set a complex condition in 'collate'?
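For reference, the 'collate' option of the phrase suggester takes a templated query, which is where such a condition would have to go. The sketch below only illustrates that structure; the index name, field names, price bounds and the misspelled input are placeholders, and I have not verified how every clause type behaves inside the template:

POST index/_search
{
  "suggest": {
    "brand_suggestion": {
      "text": "micrmax",
      "phrase": {
        "field": "brand",
        "collate": {
          "query": {
            "source": {
              "bool": {
                "must": [
                  { "match": { "{{field_name}}": "{{suggestion}}" } },
                  { "range": { "price": { "gte": 6000, "lte": 10000 } } }
                ]
              }
            }
          },
          "params": { "field_name": "brand" },
          "prune": true
        }
      }
    }
  }
}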
Answer 1
Score: 0
If you follow what the post says, you can create an index and build the suggestions with an aggregation. Note that I used a custom analyzer for matching, while the aggregation runs on the keyword sub-field.
In this example, you search for "micr" and receive the suggestion "micromax". If you change the price filter so that it only accepts prices greater than 10, the suggestion returns no results, because both sample documents have a price of 10.
That way you can apply whatever restriction rules you want to your suggestions.
Mapping
PUT index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "autocomplete": {
          "tokenizer": "my_tokenizer"
        }
      },
      "tokenizer": {
        "my_tokenizer": {
          "type": "ngram",
          "min_gram": 3,
          "max_gram": 3,
          "token_chars": [
            "letter",
            "digit"
          ]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "brand": {
        "type": "text",
        "fields": {
          "autocomplete": {
            "type": "text",
            "analyzer": "autocomplete"
          },
          "raw_term": {
            "type": "keyword"
          }
        }
      }
    }
  }
}
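To see why a partial input such as "micr" can match, you can inspect the tokens that the autocomplete analyzer produces. This is only a quick check against the mapping above, not part of the original answer:

POST index/_analyze
{
  "analyzer": "autocomplete",
  "text": "micromax"
}

With the 3-gram tokenizer this returns "mic", "icr", "cro", "rom", "oma" and "max"; the search text "micr" is analyzed into "mic" and "icr", so the match query in the Query section finds the document.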
Data
POST index/_doc
{
  "brand": "micromax",
  "price": 10
}

POST index/_doc
{
  "brand": "samsung",
  "price": 10
}
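If you run the search immediately after indexing, the documents may not be visible yet because of the refresh interval. A quick way to make them searchable right away is an explicit refresh (a general Elasticsearch note, not part of the original answer):

POST index/_refresh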
Query
GET index/_search
{
  "size": 0,
  "query": {
    "bool": {
      "filter": [
        {
          "range": {
            "price": {
              "gte": 10,
              "lte": 20
            }
          }
        }
      ],
      "must": [
        {
          "match": {
            "brand.autocomplete": "micr"
          }
        }
      ]
    }
  },
  "aggs": {
    "suggestion": {
      "terms": {
        "field": "brand.raw_term",
        "size": 10
      }
    }
  }
}
Result:
{
  "took": 1,
  "timed_out": false,
  "_shards": {
    "total": 1,
    "successful": 1,
    "skipped": 0,
    "failed": 0
  },
  "hits": {
    "total": {
      "value": 1,
      "relation": "eq"
    },
    "max_score": null,
    "hits": []
  },
  "aggregations": {
    "suggestion": {
      "doc_count_error_upper_bound": 0,
      "sum_other_doc_count": 0,
      "buckets": [
        {
          "key": "micromax",
          "doc_count": 1
        }
      ]
    }
  }
}
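Coming back to the original question about combining 'match', 'must', 'should' and 'not' on different fields: the restriction in this approach is just an ordinary bool query, so it can be made as complex as needed, and the terms aggregation will only suggest brands from documents that pass it. A sketch under that assumption; the price bounds and the must_not clause are made up for illustration and should be adapted to your data:

GET index/_search
{
  "size": 0,
  "query": {
    "bool": {
      "must": [
        { "match": { "brand.autocomplete": "micr" } }
      ],
      "should": [
        { "range": { "price": { "gte": 5, "lte": 15 } } },
        { "range": { "price": { "gte": 100, "lte": 200 } } }
      ],
      "minimum_should_match": 1,
      "must_not": [
        { "term": { "brand.raw_term": "samsung" } }
      ]
    }
  },
  "aggs": {
    "suggestion": {
      "terms": {
        "field": "brand.raw_term",
        "size": 10
      }
    }
  }
}

Against the two sample documents this still returns only the "micromax" bucket.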
Comments