I made a simple mapping with two fields: one is a text field that I analyze, and the other is a keyword field. For example:
fields: Category_one, Category_two
Data
{"Category_one": "liked wine", "Category_two":"Wine"}
{"Category_one": "liked pasta", "Category_two":"pasta"}
{"Category_one": "liked wine and pasta", "Category_two":"Wine and pasta"}
{"Category_one": "liked wine so much", "Category_two":"Wine"}
...
Now I wrote a search query for the index:
GET cat/_search
{
  "size": 20,
  "query": {
    "match": {
      "Category_one.ngrams": {
        "query": "Nice food place in XYZ location",
        "analyzer": "standard"
      }
    }
  }
}
It's working fine. I want to find the top 5 unique Category_two values, ranked by their match query score on Category_one.
For example:
Let's say the query returns 20 results ("size" in the query above) with different scores, of which the first 6 are Wine (Category_two), the next 4 are pasta (Category_two), and so on.
These 20 results contain duplicate Category_two values. How can I fetch the top unique Category_two values ("Wine", "pasta", "Wine and pasta", ...) ranked by their best match score on Category_one?
Can someone help me understand how to approach this problem? Any suggestions would be appreciated.
Thank you.
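A minimal sketch of one possible approach (assuming Category_two is the keyword field from the mapping above; the aggregation names are arbitrary): keep the same match query, but bucket the hits with a terms aggregation on Category_two, ordered by the best _score inside each bucket:
GET cat/_search
{
  "size": 0,
  "query": {
    "match": {
      "Category_one.ngrams": {
        "query": "Nice food place in XYZ location",
        "analyzer": "standard"
      }
    }
  },
  "aggs": {
    "unique_categories": {
      "terms": {
        "field": "Category_two",
        "size": 5,
        "order": { "best_hit": "desc" }
      },
      "aggs": {
        "best_hit": { "max": { "script": { "source": "_score" } } }
      }
    }
  }
}
Each bucket key is then a unique Category_two value, and the buckets come back sorted by the strongest Category_one match; field collapsing on Category_two is another option when the top hit per category is also needed.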
So I have some JSON data which looks like this:
[
{ "fruit":"apple",
"country": "A"
},
{ "fruit":"banana",
"country": "b"
},
{ "fruit":"apple",
"country": "C"
},
{ "fruit":"banana",
"country": "D"
}]
For now it's all in one table. What I want to do is group the data by fruit and show a separate table per fruit, which in this case would produce two tables: the first with two rows for apple, and the second with two rows for banana. So if there are 5 types of fruit, there should be five tables with their respective data. How can I do this? I tried looping through the data with *ngFor, but I'm stuck on how to group it and render separate tables. Please help!
Thanks!
Some restructuring of the data might be helpful:
// Group the countries by fruit
const mapData = new Map<string, string[]>();
this.data.forEach(value => {
  const fruit = value.fruit;
  const country = value.country;
  // Create an empty bucket the first time a fruit is seen
  if (!mapData.has(fruit)) {
    mapData.set(fruit, []);
  }
  mapData.get(fruit)!.push(country);
});
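From there, one rough way to render one table per fruit (groups here is a hypothetical component property, not from the original snippet) is to turn the Map into an array and nest two *ngFor loops:
// Convert the Map into an array of { fruit, countries } objects for the template
this.groups = Array.from(mapData, ([fruit, countries]) => ({ fruit, countries }));
and in the template:
<table *ngFor="let group of groups">
  <caption>{{ group.fruit }}</caption>
  <tr *ngFor="let country of group.countries">
    <td>{{ country }}</td>
  </tr>
</table>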
I have a query that only matches a tracking number at the start of the string.
Example:
{
"orderStatus": "SUBMITTED",
"orderNumber": "785654",
"orderLine": [
{
"lineNumber": "E1000",
**"trackingnumber": "12345,67890",**
"lineStatus": "IN-PROGRESS",
"lineStatusCode": 50
}
],
"accountNumber": 9076
}
find({'orderLine.trackingnumber' : { $regex: "^12345.*"} })
When I use the above query I get the entire document, but I also want to fetch the document when I search with the value 67890.
I will only ever query with a single tracking number at a time: either 12345 or 67890. The tracking number value may also grow over time, e.g. 12345,56789,01234,56678.
I need to pull the whole document no matter where the tracking number sits within the value.
OUTPUT
The output should be the whole document:
{
"orderStatus": "SUBMITTED",
"orderNumber": "785654",
"orderLine": [
{
"lineNumber": "E1000",
"trackingnumber": "12345,67890",
"lineStatus": "IN-PROGRESS",
"lineStatusCode": 50
}
],
"accountNumber": 9076
}
I have also created an index on the trackingnumber field. I need help here. Thanks in advance.
The following will match either 12345 or 67890; an unanchored regex behaves like a SQL LIKE condition:
find({'orderLine.trackingnumber' : { $regex: /12345/ } })
find({'orderLine.trackingnumber' : { $regex: /67890/ } })
Note that an unanchored regex cannot use your index efficiently: it still has to scan all of the index keys.
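If a partial value such as 6789 matching inside 67890 is a concern, a slightly stricter sketch anchors the value between the comma separators (field name as in the document above):
// Match 67890 only as a complete comma-separated token,
// not as a substring of a longer tracking number
db.order.find({ 'orderLine.trackingnumber': { $regex: /(^|,)67890(,|$)/ } })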
There's also an alternative way to do this.
Create a text index:
db.order.createIndex({'orderLine.trackingnumber': "text"})
You can then make use of this index to search for a value in the trackingnumber field:
db.order.find({$text: {$search: '12345'}})
db.order.find({$text: {$search: '67890'}})
// Do take note that you can't search using a few in-between characters:
// the following query won't give any result.
db.order.find({$text: {$search: '6789'}}) // the trailing 0 has been purposefully removed
To further understand how $text searches work, please go through the MongoDB documentation on the $text operator.
I have a Postgres statement that extracts/iterates over a JSON blob in the value column of a table. I am able to get a count one level deep using the query below, but I can't count any deeper. I was using:
select jsonb_array_length(value -> 'team') as team_count
This returns the proper count, but I can't seem to leverage it to count the names under each team entry.
In a perfect world I would like my results to return 4 rows like this (a title and the matching count of names):
Product Owner, 2
Technical Project Manager, 2
Data Modeler, 0
Engineer, 0
How would I go about amending this query to give me the count of names under each team entry? I tried all sorts of things, but nothing got me close.
Sample JSON is below.
"team":[
{
"title":"Product Owner",
"names":[
"John Smith",
"Jane Doe"
]
},
{
"title":"Technical Project Manager",
"names":[
"Fred Flintstone",
"Barney Rubble"
]
},
{
"title":"Data Modeler"
},
{
"title":"Engineer"
}
You seem to be looking for:
SELECT
    role ->> 'title' AS team_role,
    coalesce(jsonb_array_length(role -> 'names'), 0) AS member_count
FROM jsonb_array_elements(value -> 'team') AS team(role)
Here ->> returns the title as plain text, and coalesce turns the missing names arrays into a count of 0 instead of NULL.
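Wired up against the table itself, that becomes a lateral join (the table name my_table is hypothetical):
-- Assuming the JSON blob lives in the value column of my_table
SELECT
    role ->> 'title' AS team_role,
    coalesce(jsonb_array_length(role -> 'names'), 0) AS member_count
FROM my_table
CROSS JOIN LATERAL jsonb_array_elements(value -> 'team') AS team(role);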
My question is about creating a proper schema or way of storing some data I will be collecting.
My app runs Laravel 6.
So I have a number of 'campaigns', an example of which is like this:
{
"campaign_name": "Campaign 1",
"keywords": ["keyword 1", "keyword 2", "keyword 3"], // there may be hundreds of keywords
"urls": ["google.com", "bing.com", "example.com"], // there may be many urls
"business_names": ["Google", "Bing, "Example"], // there may be many business_names
"locations": [
{
"address": "location 1", //this is a postal address
"lat": "-37.8183",
"lng": "144.957"
},
{
"address": "location 2", //this is a postal address
"lat": "-37.7861",
"lng": "145.312"
}
// there may be 50-100 locations.
]
}
Each url (and each business name) will get matched up with each keyword along with each location.
ie:
google.com
- keyword 1 location 1
- keyword 1 location 2
- keyword 1 location 3
- keyword 2 location 1
- keyword 2 location 2
// etc etc. there may be hundreds of keywords and hundreds of locations.
bing.com
- keyword 1 location 1
- keyword 1 location 2
// etc etc as above.
Each of these combinations will have time-series data points that I want to store and ultimately query.
I can see how a number of tables could be set up to handle this, but is there a way to simplify it slightly by storing some of the data as JSON?
Most of my migrations on past projects have been pretty simple, with just a single relation, but this one is a bit harder for me.
Any help is appreciated. I would ideally like to avoid a large number of tables and complex pivots or associations if possible (while understanding the benefits of normalization...).
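One possible shape, sketched as plain SQL rather than as a Laravel migration (all table and column names here are illustrative, not from the original post): keep the per-campaign lists as JSON columns, and keep the time series itself relational so it stays easy to query.
-- Campaigns keep their rarely-queried lists as JSON columns
CREATE TABLE campaigns (
    id BIGINT PRIMARY KEY,
    name VARCHAR(255) NOT NULL,
    keywords JSON NOT NULL,        -- ["keyword 1", "keyword 2", ...]
    urls JSON NOT NULL,            -- ["google.com", "bing.com", ...]
    business_names JSON NOT NULL,  -- ["Google", "Bing", ...]
    locations JSON NOT NULL        -- [{"address": ..., "lat": ..., "lng": ...}, ...]
);
-- One row per (url or business name) x keyword x location x timestamp
CREATE TABLE data_points (
    id BIGINT PRIMARY KEY,
    campaign_id BIGINT NOT NULL REFERENCES campaigns (id),
    matchable VARCHAR(255) NOT NULL,   -- the url or business name
    keyword VARCHAR(255) NOT NULL,
    address VARCHAR(255) NOT NULL,     -- identifies the location
    recorded_at TIMESTAMP NOT NULL,
    value INTEGER NOT NULL             -- the metric being collected
);
This avoids pivot tables entirely, at the cost of repeating the keyword and address strings in every data point row; if the tables grow large, replacing those columns with foreign keys to keywords and locations tables is the normalized alternative.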
I'm a total noob at Elasticsearch. I researched on the internet how to store highly related tables with Elasticsearch, but it's quite confusing. Here's my problem:
I have approximately 16 tables (one fact table, and the others are dimension tables). I could write a SQL request that joins all the tables into an array of long rows containing all the fields, and map that to JSON, but there would be tons of duplicated fields.
For example, dimension table A contains 3 persons (p1, p2, p3), and the fact table contains more than 1000 rows (for example), all of which have foreign keys/references to these 3 persons.
So what's the ideal way to store it?
Putting each table in a different index, or embedding everything in one single object per row?
Thanks in advance
The facts can form one index, with their dimension fields denormalised onto each document, and each dimension can get a separate index of its own.
PUT /dimension_1/_doc/1
{
  "name": "John Smith"
}
PUT /dimension_2/_doc/1
{
  "name": "John Doe"
}
...
PUT /facts/_doc/1
{
  "title": "facts",
  "name": "p1",
  "dimension_1": {
    "id": 1,
    "name": "John Smith"
  },
  "dimension_2": {
    "id": 1,
    "name": "John Doe"
  }
  ...
}
Refer to https://www.elastic.co/guide/en/elasticsearch/guide/master/denormalization.html
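The payoff of denormalising is that facts can be filtered by a dimension attribute in a single query, with no join. A minimal sketch against the facts index above:
GET /facts/_search
{
  "query": {
    "match": { "dimension_1.name": "John Smith" }
  }
}
The trade-off is that when a dimension's name changes, every fact document embedding it must be updated (e.g. via an update-by-query), which is usually acceptable when dimensions change rarely.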