N1QL query to handle a datetime scenario - Couchbase

I have a scenario where I need to fetch all records from the students documents, which look like this:
{
    "fname": "abc",
    "timeOfAdmission": 1576042885166,
    "lname": "rawat",
    "studentId": "1"
}
where studentId is our documentId.
Is it possible to execute a query like this using N1QL:
select * from students where (CurrentTime - timeOfAdmission) > 3600000
where CurrentTime, timeOfAdmission and 3600000 are all in milliseconds?
How can we write this query using N1QL?

You can use the N1QL date functions: https://docs.couchbase.com/server/current/n1ql/n1ql-language-reference/datefun.html#fn-date-now-millis
SELECT s.*
FROM students AS s
WHERE s.timeOfAdmission < NOW_MILLIS() - 3600000;
CREATE INDEX ix1 ON students(timeOfAdmission);
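If you also want to see the stored milliseconds as a readable timestamp, the same family of date functions includes MILLIS_TO_STR(); a small sketch using the field names from the question:
SELECT s.fname,
       s.lname,
       MILLIS_TO_STR(s.timeOfAdmission) AS admittedAt
FROM students AS s
WHERE s.timeOfAdmission < NOW_MILLIS() - 3600000;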

Related

Lumen model timestamp different from the data in the table

I have a table called teams. When I query it using Eloquent like the following code:
$team = Team::where('id', 23)->get();
this code returns an object like the following JSON:
{
    "id": 23,
    "name": "Test",
    "created_at": "2020-06-03 17:33:14",
    "is_used": false
}
but when I run a direct query in Workbench, the created_at field has a different value.
The query looks like the following MySQL code:
SELECT created_at from teams where id = 23;
-- this query returns 2020-06-04 00:33:14
But the valid value is the one returned by Eloquent.
My question is: how can I make the default insert timestamp in the MySQL table equal to the Eloquent data?
Thanks in advance
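The two values differ by exactly 7 hours, which usually points to a timezone mismatch between the application configuration and the MySQL session rather than different stored data. A quick diagnostic on the MySQL side (a sketch, not specific to Lumen) is to compare the server's timezone settings against UTC:
SELECT @@global.time_zone, @@session.time_zone, NOW(), UTC_TIMESTAMP();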

Converting an N1QL query having a “where” clause with “IN” parameters into mapreduce views

I have a plain select query for which a mapreduce view is already present.
Query:
select count(*) from `my-bucket` where type = 'Order' and status = 'CREATED' and timestamp > 1535605294320 and timestamp <= 1535605594320
view:
function (doc, meta) {
    if (doc._class == "com.myclass.Order") {
        emit([doc.type, doc.status, doc.timestamp], null);
    }
}
Keys for querying the view:
Start key : ["Order","CREATED",1535605294320]
End key : ["Order","CREATED",1535605594320]
Requirement: we now want this view to support a query with an IN clause on the status parameter. We would also like to add further parameters that themselves use IN clauses. A sample N1QL query would look like the one below.
select count(*) from `my-bucket` where type = 'Order' and orderType IN ["TYPE-A","TYPE-B","TYPE-C"] and status IN ["CREATED","READY","CANCELLED"] and timestamp > MILLIS("2016-05-15T03:59:00Z") and timestamp <= MILLIS("2017-05-15T03:59:00Z")
How can I write a query on the view to accomplish this? The only solution that comes to my mind is to fire multiple (let's say x) queries against the view,
where x = m1 * m2 * ... * mn,
mi = the number of parameters in the i-th IN clause,
and n = the number of IN clauses.
Is there a better solution, such as executing these queries in a batch (using the Java SDK) or as a single mapreduce query?
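For the sample query above, with 3 orderType values and 3 status values, this works out to x = m1 * m2 = 3 * 3 = 9 separate view queries, one start/end key pair per (orderType, status) combination.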

Postgres convert json with duplicate IDs

With this select:
json_agg(json_build_object("id", price::money))
I get the resulting value:
[
{"6" : "$475.00"},
{"6" : "$1,900.00"},
{"3" : "$3,110.00"},
{"3" : "$3,110.00"}
]
I would like the data in this format instead:
{
"6": ["$475.00","$1,900.00"],
"3": ["$3,110.00","$3,110.00"]
}
When this is queried on the server or used with jsonb, the IDs are duplicated and only one of the key-value pairs makes it through.
You should aggregate the prices grouped by id and use the aggregate function json_object_agg(). You have to use a derived table (a subquery in the FROM clause) because aggregate functions cannot be nested:
select json_object_agg(id, prices)
from (
select id, json_agg(price::money) as prices
from my_table
group by id
) s
Working example in rextester.
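If the column is jsonb rather than json, the same shape can be produced with the jsonb aggregate functions; a sketch assuming the same my_table(id, price) layout as above:
select jsonb_object_agg(id, prices)
from (
    select id, jsonb_agg(price::money) as prices
    from my_table
    group by id
) s;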

How to iterate through PostgreSQL jsonb array values for purposes of matching within a query

My table has many rows, each containing a jsonb object.
This object holds an array, in which there can potentially be multiple keys of the same name but with different values.
My goal is to scan my entire table and verify which rows contain duplicate values within this json object's array.
Row 1 example data:
{
    "Name": "Bobb Smith",
    "Identifiers": [
        {
            "Content": "123",
            "RecordID": "123",
            "SystemID": "Test",
            "LastUpdated": "2017-09-12T02:23:30.817Z"
        },
        {
            "Content": "abc",
            "RecordID": "abc",
            "SystemID": "Test",
            "LastUpdated": "2017-09-13T10:10:21.598Z"
        },
        {
            "Content": "def",
            "RecordID": "def",
            "SystemID": "Test",
            "LastUpdated": "2017-09-13T10:10:21.598Z"
        }
    ]
}
Row 2 example data:
{
    "Name": "Bob Smith",
    "Identifiers": [
        {
            "Content": "abc",
            "RecordID": "abc",
            "SystemID": "Test",
            "LastUpdated": "2017-09-13T10:10:26.020Z"
        }
    ]
}
My current query was originally used to find duplicates based on a name value, but in cases where the names may be flubbed, using a record ID is a more foolproof method.
However, I am having trouble figuring out how to essentially iterate over each 'Record ID' within every row and compare that 'Record ID' to every other 'Record ID' in every row within the same table to locate matches.
My current query to match 'Name':
discard temporary;
with dupe as (
    select
        json_document->>'Name' as name,
        json_document->'Identifiers'->0->'RecordID' as record_id
    from staging
)
select name as "Name", record_id::text as "Record ID"
from dupe da
where (select count(*) from dupe db where db.name = da.name) > 1
order by name;
The above query would return the matching rows IF the 'Name' field in both rows contained the same spelling of 'Bob'.
I need this same functionality using the nested value of the 'RecordID' field.
The problem here is that
json_document->'Identifiers'->0->'RecordID'
only returns the 'RecordID' at index 0 within the array.
For example, this does NOT work:
discard temporary;
with dupe as (
    select
        json_document->>'Name' as name,
        json_document->'Identifiers'->0->'RecordID' as record_id
    from staging
)
select name as "Name", record_id::text as "Record ID"
from dupe da
where (select count(*) from dupe db where db.record_id = da.record_id) > 1
order by name;
...because the query only checks the 'RecordID' value at index 0 of the 'Identifiers' array.
How could I essentially perform something like
SELECT json_document#>'RecordID'
in order to have my query check every index within the 'Identifiers' array for the 'RecordID' value?
Any and all help is greatly appreciated! Thanks!
I'm hoping to accomplish this with only a Postgres query and NOT by accessing this data with an external language. (Python, etc.)
I solved this by essentially performing the 'unnest()'-like jsonb_array_elements() on my nested jsonb array.
By doing this in a subquery, then scanning those results using a variation of my original query, I was able to achieve my desired result.
Here is what I came up with.
with dupe as (
    select
        json_document->>'Name' as name,
        identifiers->'RecordID' as record_id
    from (
        select *,
            jsonb_array_elements(json_document->'Identifiers') as identifiers
        from staging
    ) sub
    group by record_id, json_document
    order by name
)
select *
from dupe da
where (select count(*) from dupe db where db.record_id = da.record_id) > 1;
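A somewhat more compact variant, assuming the same staging table and json_document column, expands the array directly in the FROM clause and groups on the extracted value; it lists every RecordID that occurs more than once across the table:
select identifiers->>'RecordID' as record_id,
       count(*) as occurrences
from staging,
     jsonb_array_elements(json_document->'Identifiers') as identifiers
group by record_id
having count(*) > 1;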

N1QL Query ARRAY_CONTAINS speed

I have the documents of the following form that I need to query:
{
    "id": "-KWiJ1LlYbXSSRUmocwK",
    "ownerID": "72f16d9d-b905-498c-a7ff-9702cdcae996",
    "orgID": "20071513",
    "teams": [
        "5818f7a75f84c800079186a8",
        "5818cbb25f84c800079186a7"
    ]
}
I want to be able to query based on ownerID and the teams array. My query currently looks like this:
SELECT id
FROM default AS p
WHERE p.ownerID = $1
    OR ARRAY_CONTAINS(p.teams, $2)
ORDER BY id
So I can get documents with the expected ownerID, as well as documents that have a specific team id in the teams array. This query does work, but I'm concerned about performance once I have a lot of documents, and some documents may have up to 20 teams assigned.
Am I on the right track?
EDIT: Couchbase ver 4.1
Couchbase 4.5 introduced array indexing. This allows you to index individual elements of an array, in your case the teams array. This will be essential for the performance of your query. With 4.5.1 or 4.6, you can do the following:
CREATE INDEX idx_owner ON default( ownerID );
CREATE INDEX idx_teams ON default( DISTINCT ARRAY t FOR t IN teams END );
SELECT id
FROM default AS p
WHERE p.ownerID = $1
UNION
SELECT id
FROM default AS p
WHERE ANY t IN p.teams SATISFIES t = $2 END;
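If the ORDER BY id from the original query is still needed, it can be appended after the final arm of the UNION so that it applies to the combined result (as in standard SQL); a sketch:
SELECT id
FROM default AS p
WHERE p.ownerID = $1
UNION
SELECT id
FROM default AS p
WHERE ANY t IN p.teams SATISFIES t = $2 END
ORDER BY id;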