I am quite new to Elasticsearch. I have a complex scenario and I am unable to find the right solution (Elasticsearch queries/params) for it. Any help would be highly appreciated.
My Fields
Product Name (String)
Price Min
Price Max
Availability Status (Available/Unavailable)
Besides these, a search will always be filtered on a unique user. So the MySQL query looks like:
Select * from product where product_name like %xxx% AND price >= price_min AND price <= price_max AND availability = availability_status AND user = 1;
I would like the exact Elasticsearch query to solve this scenario; an approximate solution would also be appreciated.
You will need to use a filter here, since you need an exact match and not a full-text match. This way it is even faster.
{
  "query": {
    "filtered": {
      "query": {
        "match": { "name": "YourName" }
      },
      "filter": {
        "bool": {
          "must": [
            { "range": { "price_min": { "gte": 20 }}},
            { "range": { "price_max": { "lte": 170 }}},
            { "term": { "availability": "false" }}
          ]
        }
      }
    }
  }
}
What you provide in "YourName" is a full-text match (similar names will be retrieved if the name field is analyzed). I'm assuming that you have left the mapping unchanged, so the field is analyzed (tokenized and lowercased by the standard analyzer) by default.
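You can sanity-check how a value is analyzed with the _analyze API (a quick sketch for recent ES versions; the index name is hypothetical, and 1.x releases took query-string parameters instead of a JSON body):
GET my_index/_analyze
{
  "analyzer": "standard",
  "text": "YourName"
}
This returns the single lowercased token yourname, which is what the match query actually searches for.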
Since you need the result to match all the criteria, it's an AND combination. Hence, use a bool filter to AND all the terms.
So given your SQL query
SELECT *
FROM product
WHERE product_name LIKE '%xxx%'
AND price >= price_min
AND price <= price_max
AND availability = availability_status
AND user = 1;
We need:
a match query for the product_name
a range filter for the price field
and a term filter for the availability and user fields
All of these are wrapped inside a filtered query, whose filters first discard all non-matching documents; the match query then runs on the remaining documents to match the product_name.
The correct query would then look like this:
{
"query": {
"filtered": {
"query": {
"match": {
"product_name": "xxx"
}
},
"filter": {
"bool": {
"must": [
{
"range": {
"price": {
"gte": 20,
"lte": 170
}
}
},
{
"term": {
"availability": "availability_status"
}
},
{
"term": {
"user": 1
}
}
]
}
}
}
}
}
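Note that the filtered query shown here was deprecated in Elasticsearch 2.0 and removed in 5.0. On 5.x and later, the equivalent (not part of the original answer) is a bool query with the exact-match clauses moved into filter context:
{
  "query": {
    "bool": {
      "must": [
        { "match": { "product_name": "xxx" } }
      ],
      "filter": [
        { "range": { "price": { "gte": 20, "lte": 170 } } },
        { "term": { "availability": "availability_status" } },
        { "term": { "user": 1 } }
      ]
    }
  }
}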
I have a few records in Elasticsearch. I want to group the records by user_id and fetch the latest record per user, but only when its event_type is 1.
If the latest record's event_type value is not 1, then we should not fetch that record. I did this with a MySQL query; please let me know how I can do the same in Elasticsearch.
After executing the MySQL query
SELECT * FROM user_events
WHERE id IN( SELECT max(id) FROM `user_events` group by user_id ) AND event_type=1;
I need the same output from Elasticsearch aggregations.
Elasticsearch Query:
GET test_analytic_report/_search
{
"from": 0,
"size": 0,
"query": {
"bool": {
"must": [
{
"range": {
"event_date": {
"gte": "2022-10-01",
"lte": "2023-02-06"
}
}
}
]
}
},
"sort": {
"event_date": {
"order": "desc"
}
},
"aggs": {
"group": {
"terms": {
"field": "user_id"
},
"aggs": {
"group_docs": {
"top_hits": {
"size": 1,
"_source": ["user_id", "event_date", "event_type"],
"sort": {
"user_id": "desc"
}
}
}
}
}
}
}
With the above query, I have two users, whose user_ids are 55 and 56. The query fetched records with other event_type values, but I want only each user's latest record, and only when its event_type is 1; if a user's last record does not have event_type = 1, that user should not appear at all.
In my data, user_id 56's latest record has event_type 2, so it should not come up in our aggregations.
I tried, but it's not returning the exact result that I want.
Note: event_date is normally the current date and time; I inserted the records manually, which is why the dates differ.
GET user_events/_search
{
"size": 1,
"query": {
"term": {
"event_type": 1
}
},
"sort": [
{
"id": {
"order": "desc"
}
}
]
}
Explanation: This is an Elasticsearch API request in JSON format. It retrieves the latest event of type 1 (specified by "event_type": 1 in the query) from the "user_events" index, with a size of 1 (specified by "size": 1) and sorts the results in descending order by the "id" field (specified by "order": "desc" in the sort).
If your ES version supports it, you can do this with the field collapse feature. Here is an example query:
{
"_source": false,
"query": {
"bool": {
"filter": {
"term": {
"event_type": 1
}
}
}
},
"collapse": {
"field": "user_id",
"inner_hits": {
"name": "the_record",
"size": 1,
"sort": [
{
"id": "desc"
}
]
}
},
"sort": [
{
"id": {
"order": "desc"
}
}
]
}
In the response, you will see that the document you want is in inner_hits, under the name you gave it. In my example it is the_record. You can change the size of the inner hits if you want more records in each group, and sort them as well.
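The relevant part of the response would look roughly like this (a sketch; IDs and values are illustrative):
{
  "hits": {
    "hits": [
      {
        "_id": "...",
        "fields": { "user_id": [55] },
        "inner_hits": {
          "the_record": {
            "hits": {
              "hits": [
                { "_source": { "id": 101, "user_id": 55, "event_type": 1 } }
              ]
            }
          }
        }
      }
    ]
  }
}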
TL;DR
There are many ways to go about it:
Sorting
Collapsing
Latest Transform
All of those solutions are approximations of what you could get with SQL.
But my personal favourite is transform.
Solution - transform jobs
Set up
We create 2 users, with 2 events each.
PUT 75324839/_bulk
{"create":{}}
{"user_id": 1, "type": 2, "date": "2015-01-01T00:00:00.000Z"}
{"create":{}}
{"user_id": 1, "type": 1, "date": "2016-01-01T00:00:00.000Z"}
{"create":{}}
{"user_id": 2, "type": 1, "date": "2015-01-01T00:00:00.000Z"}
{"create":{}}
{"user_id": 2, "type": 2, "date": "2016-01-01T00:00:00.000Z"}
Transform job
This transform job is going to run against the index 75324839.
It will find the latest document per user_id, based on the value of the date field.
And the results are going to be stored in latest_75324839.
PUT _transform/75324839
{
"source": {
"index": [
"75324839"
]
},
"latest": {
"unique_key": [
"user_id"
],
"sort": "date"
},
"dest": {
"index": "latest_75324839"
}
}
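Note that creating the transform does not run it; you still need to start it explicitly:
POST _transform/75324839/_start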
If you were to query latest_75324839, you would find:
{
  "hits": [
    {
      "_index": "latest_75324839",
      "_id": "AGvuZWuqqz7c5ytICzX5Z74AAAAAAAAA",
      "_score": 1,
      "_source": {
        "date": "2016-01-01T00:00:00.000Z",
        "user_id": 1,
        "type": 1
      }
    },
    {
      "_index": "latest_75324839",
      "_id": "AA3tqz9zEwuio1D73_EArycAAAAAAAAA",
      "_score": 1,
      "_source": {
        "date": "2016-01-01T00:00:00.000Z",
        "user_id": 2,
        "type": 2
      }
    }
  ]
}
Get the final results
To get the number of users whose latest event has type = 1 (with the sample data above, only user 1 qualifies), a simple search query such as this one is enough:
GET latest_75324839/_search
{
"query": {
"term": {
"type": {
"value": 1
}
}
},
"aggs": {
"number_of_user": {
"cardinality": {
"field": "user_id"
}
}
}
}
Side notes
This transform job runs in batch mode, which means it will only run once.
It is also possible to run it in a continuous fashion, so that the destination index always holds the latest event for each user_id.
Here are some examples.
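For instance, a continuous version of the job above only needs a sync block telling it which timestamp field to watch for new documents (a sketch; the frequency and delay values are arbitrary):
PUT _transform/75324839_continuous
{
  "source": {
    "index": ["75324839"]
  },
  "latest": {
    "unique_key": ["user_id"],
    "sort": "date"
  },
  "dest": {
    "index": "latest_75324839"
  },
  "frequency": "1m",
  "sync": {
    "time": {
      "field": "date",
      "delay": "60s"
    }
  }
}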
You are looking for an SQL HAVING clause, which would allow you to filter results after grouping. But sadly there is nothing equivalent in Elasticsearch.
So it is not possible to:
sort, collapse and filter afterwards (even post_filter does not help here)
use a top_hits aggregation with custom sorting and then filter
use any map/reduce scripted aggregations, as they do not support sorting
work with subqueries
So basically, Elasticsearch is not a relational database. Any sorting or relation to other documents should be based on scoring, and the score should be calculated independently for each document, distributed across shards.
But there is a tiny loophole, which might be the solution for your use case. It is based on a top_metrics aggregation followed by a bucket_selector pipeline aggregation that eliminates the unwanted event types:
GET test_analytic_report/_search
{
"size": 0,
"aggs": {
"by_id": {
"terms": {
"field": "user_id",
"size": 100
},
"aggs": {
"tm": {
"top_metrics": {
"metrics": {
"field": "event_type"
},
"sort": [
{
"id": {
"order": "desc"
}
}
]
}
},
"event_type_filter": {
"bucket_selector": {
"buckets_path": {
"event_type": "tm.event_type"
},
"script": "params.event_type == 1"
}
}
}
}
}
}
If you require more fields from the source document you can add them to the top_metrics.
It is sorted by id now, but you can also use event_date.
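For example, the tm sub-aggregation could be extended like this (a sketch of just that fragment, returning event_date as well and sorting on it instead of id):
"tm": {
  "top_metrics": {
    "metrics": [
      { "field": "event_type" },
      { "field": "event_date" }
    ],
    "sort": [
      { "event_date": { "order": "desc" } }
    ]
  }
}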
I have JSON documents with entries like:
......
{
"Fieldname" : "booked",
"Fieldvalue" : "yes"
}
...
Within the JSON document there are many fields like this, where a Boolean value is indirectly expressed using Fieldname and Fieldvalue: essentially it signifies that booked = true. Would it be more efficient to transform the JSON before storing it in Elasticsearch, i.e. replacing the above with:
{
"booked" : true
}
The search use case is that I want to figure out whether similar JSON already exists in the system before adding another document.
Yes, the latter is the much cleaner way, for both storage and search purposes. Say you want to get all the booked properties from your index; then you can easily do it this way instead of using the extra Fieldname and Fieldvalue:
GET /properties/_search
{
"size": 10,
"query": {
"bool": {
"must": [
{
"match": {
"country_code.keyword": "US"
}
},
{
"match": {
"booked": true
}
},
{
"range": {
"usd_price": {
"gte": 50,
"lte": 100000
}
}
}
]
}
},
"sort": [
{
"ranking": {
"order": "desc"
}
}
],
"_source": [
"property_id",
"property_name",
"country",
"country_code",
"state",
"state_abbr",
"city"
]
}
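For the original use case (checking whether a similar document already exists before indexing a new one), a count query over the normalized fields is enough; this is a sketch, and the field values are assumptions:
GET /properties/_count
{
  "query": {
    "bool": {
      "filter": [
        { "term": { "booked": true } },
        { "term": { "country_code.keyword": "US" } }
      ]
    }
  }
}
A non-zero count means a matching document is already in the index.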
I'm using the Elasticsearch API, and the schema of the document is as follows:
{
name: "",
born_year: "",
born_month: "",
born_day: "",
book_type: "",
price: <some number>,
country: ""
}
Now what I need is to get the document count per name for those born before a given date (born_year + born_month + born_day < "20051220"). How can I achieve this?
I tried this:
{
"query": {
"query_string": {
"query": "country:\"SL\""
}
},
"size": 0,
"aggs": {
"total": {
"terms": {
"field": "name"
}
}
}
}
But I have no idea how I can add a filter for the birthday.
As mentioned by @Val, you need to add a real date field, which you can easily create by concatenating these three fields at indexing time.
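On Elasticsearch 5 and later you could even build that field at index time with an ingest pipeline (a sketch; the pipeline name is hypothetical, and it assumes zero-padded month and day values):
PUT _ingest/pipeline/build_birth_date
{
  "processors": [
    {
      "set": {
        "field": "date",
        "value": "{{born_year}}-{{born_month}}-{{born_day}}"
      }
    }
  ]
}
Then index your documents with ?pipeline=build_birth_date so each one gets the combined date field.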
As for how you filter based on the date range, there are two ways, and they return different result sets. The level of filtering is your choice: you mentioned querying on the country field, but you have not mentioned at what level you want to filter on the date range, so I will give you queries for both cases.
Mapping, assuming you create a date field:
{
name:"",
born_year:"",
born_month:"",
born_day:"",
book_type:"",
price:<some number>,
country:"",
date : ""
}
Case 1) Filtering the date range for the name aggregation only; here the document count will not be affected by the date range filter:
{
"query": {
"query_string": {
"query": "country:\"SL\""
}
},
"aggs": {
"total": {
"filter": {
"range": {
"date": {
"gte": "your_date_mx",
"lte": "your_date_min"
}
}
},
"aggs": {
"NAME": {
"terms": {
"field": "name",
"size": 10
}
}
}
}
}
}
Case 2) In this case, both your document count and your aggregation will be filtered on the date range, as we add the date range filter at the query level:
{
  "query": {
    "bool": {
      "must": [
        {
          "query_string": {
            "query": "country:\"SL\""
          }
        },
        {
          "range": {
            "date": {
              "gte": "your_date_min",
              "lte": "your_date_max"
            }
          }
        }
      ]
    }
  },
  "aggs": {
    "total": {
      "terms": {
        "field": "name",
        "size": 10
      }
    }
  }
}
So adding a filter to the aggregation will affect only the aggregation counts.
Edit -
Approach 1) With a Groovy script, concatenate the strings, parse the result to an integer, and then compare it with your input date:
{
"query": {
"bool": {
"must": [
{}
],
"filter": {
"script": {
"script": {
"inline": "(doc['year'].value + doc['month'].value + doc['date'].value).toInteger() > 19910701",
"params": {
"param1": 19911122
}
}
}
}
}
}
}
Make sure, when indexing, that a single-digit day (or month) like 6 is indexed as 06.
Approach 2) Parse the strings to an exact date (preferred):
{
"query": {
"bool": {
"must": [
{}
],
"filter": {
"script": {
"script": {
"inline": "Date.parse('dd-MM-yyyy',doc['date'].value +'-'+ doc['month'].value +'-'+ doc['year'].value).format('dd-MM-yyyy') > param1",
"params": {
"param1": "04-05-1991"
}
}
}
}
}
}
}
The second approach is much better, as you don't have to worry about maintaining the string for each field (day, month, year) and later parsing it to a proper integer for comparison.
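One side note: the Groovy examples above require dynamic scripting to be enabled, which it is not by default on the versions they target. Depending on your exact version, you may need something like this in elasticsearch.yml (an assumption; check the docs for your release):
script.inline: true
On 1.x the equivalent setting was script.disable_dynamic: false.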
Actually, I want those records from my DB where the description is blank. Below is the MySQL query for my result; what should the query in Elasticsearch be for the same result?
SELECT * FROM products WHERE description != '';
The Elasticsearch query to match a blank description looks like this:
POST http://localhost:9200/<index>/<indextype>/_search
{
"query": {
"filtered": {
"filter": {
"term": {
"description": ""
}
}
}
}
}
If it's still not working, check your mapping; it should be like this:
PUT http://localhost:9200/<index>/<indextype>/_mapping
{
"products": {
"properties": {
"description": {
"type": "string",
"index": "not_analyzed"
}
}
}
}
If you want the results with description != '', then use the query below, which puts a missing filter in the must_not section of a bool filter. It will only return documents where the field exists and, if you set the "null_value" property to true, where the value is not explicitly null.
{
  "query": {
    "filtered": {
      "filter": {
        "bool": {
          "must_not": {
            "missing": {
              "field": "description",
              "existence": true,
              "null_value": true
            }
          }
        }
      }
    }
  }
}
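On Elasticsearch 5.0 and later the missing filter was removed; the equivalent of must_not missing there is simply requiring the field to exist:
{
  "query": {
    "bool": {
      "filter": {
        "exists": {
          "field": "description"
        }
      }
    }
  }
}
Note that this still does not exclude empty strings; for a true != '' you would additionally add a must_not term on "" against a not_analyzed (or keyword) field.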
I have the following query, where I want to query the indexname index for ID "abc_12-def" documents that fall within the date range specified in the range filter.
But the query below is also fetching values for different IDs (e.g. abc_12-edf, abc_12-pgf, etc.) and documents that fall outside the date range. Any advice on how I can express an AND condition here? Thanks.
curl -XPOST 'localhost:9200/indexname/status/_search?pretty=1&size=1000000' -d '{
"query": {
"filtered" : {
"filter": [
{ "term": { "ID": "abc_12-def" }},
{ "range": { "Date": { "gte": "2015-10-01T09:12:11", "lte" : "2015-11-18T10:10:13" }}}
]
}
}
}'
You need to use a bool query for the AND (aka must) condition:
{
"query": {
"bool": {
"must": [
{
"term": {
"ID": "abc_12-def"
}
},
{
"range": {
"Date": {
"gte": "2015-10-01T09:12:11",
"lte": "2015-11-18T10:10:13"
}
}
}
]
}
}
}
Also, all string fields are by default analyzed using the standard analyzer, which means abc_12-def is tokenized as [abc_12, def], while a term query does not analyze the search string.
If you are looking for an exact match, you should mark the field as not_analyzed. How to map it as not_analyzed is explained here.
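For reference, on the 1.x/2.x versions this answer targets, such a mapping would look roughly like this (newer versions use the keyword type instead):
PUT indexname/_mapping/status
{
  "properties": {
    "ID": {
      "type": "string",
      "index": "not_analyzed"
    }
  }
}
Note that you cannot change the mapping of an existing field; you would have to reindex into a new index with this mapping.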