I am new to Node.js and I want to search a large data set. How can I page through documents without using skip() and limit()? My database structure is:
{
"_id" : ObjectId("5a9d1836d2596d624873c84f"),
"updatedAt" : ISODate("2018-03-05T10:13:10.248Z"),
"createdAt" : ISODate("2018-03-05T10:13:10.248Z"),
"phone" : "+92333333",
"country_code" : "+92",
"verified_user" : false,
"verification_code" : "2951",
"last_shared_loc_time" : ISODate("2018-03-05T10:13:10.240Z"),
"share_loc_flag_time" : ISODate("2018-03-05T10:13:10.240Z"),
"share_location" : true,
"deactivate_user" : false,
"profile_photo_url" : null,
"__v" : 0
}
How can I search using createdAt? I need a MongoDB query that the API can call to return those users.
Using skip() is not recommended with big data in MongoDB, because it always requires the server to walk from the beginning of the collection. Instead, you can use the _id index with limit() to do pagination: the _id field is indexed by default in MongoDB, so range queries on it perform well.
For the first page, fetch one page with limit() and remember the _id of the last document returned:
db.users.find().sort({_id: 1}).limit(8);
last_id = ...;  // _id of the last document in this page
Then, for the next page, ask for documents whose _id is greater than that value:
db.users.find({_id: {$gt: last_id}}).sort({_id: 1}).limit(8);
For example, assuming the first _id equals 1000:
1000 < 1008 ==> PAGE 1 where last_id=1008
1008 < 1016 ==> PAGE 2 where last_id=1016
1016 < 1024 ==> PAGE 3 where last_id=1024
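As a rough sketch of this keyset idea in plain JavaScript (an in-memory array stands in for the collection, and nextPage is an illustrative helper, not a driver API):

```javascript
// Keyset ("range") pagination sketch: instead of skipping N documents,
// remember the last _id of the current page and fetch documents after it.
// Equivalent MongoDB query: db.users.find({_id: {$gt: lastId}}).limit(pageSize)
function nextPage(docs, lastId, pageSize) {
  // docs is assumed to be sorted by _id, as the index scan would be
  return docs.filter(d => d._id > lastId).slice(0, pageSize);
}

const docs = Array.from({ length: 20 }, (_, i) => ({ _id: i + 1 }));
const page1 = nextPage(docs, 0, 8);         // _id 1..8
const lastId = page1[page1.length - 1]._id; // 8
const page2 = nextPage(docs, lastId, 8);    // _id 9..16
```

Unlike skip(), each page starts from an index seek on last_id, so the cost does not grow with the page number.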
{
uid : 1,
name: 'abc'
...
}
{
uid : 2,
name: 'abc'
...
}
....
{
uid : 10,
name: 'abc'
...
}
{
uid : 11,
name: 'abc'
...
}
Now you can query in ranges: get the documents where uid > 0 and uid <= 5,
then for the next slot get the documents where uid > 5 and uid <= 10, and so on.
Maybe this approach can help you out.
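A minimal in-memory sketch of this slot approach (slot is a hypothetical helper; in MongoDB the equivalent filter would be { uid: { $gt: lo, $lte: hi } }):

```javascript
// Range ("slot") pagination over a numeric uid: page k covers
// uid in (k * size, (k + 1) * size].
function slot(docs, pageIndex, size) {
  const lo = pageIndex * size;
  const hi = lo + size;
  return docs.filter(d => d.uid > lo && d.uid <= hi);
}

const docs = Array.from({ length: 11 }, (_, i) => ({ uid: i + 1, name: 'abc' }));
const first = slot(docs, 0, 5);  // uid 1..5
const second = slot(docs, 1, 5); // uid 6..10
```

Note the half-open bounds (exclusive lower, inclusive upper) so no uid falls between two slots.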
I created a table with one JSON column; the inserted data has the structure below:
{
"options" : {
"info" : [
{"data" : "data1", "verified" : 0},
{"data" : "data2", "verified" : 1},
... and more
],
"otherkeys" : "some data..."
}
}
I want to run a query that returns the "data" values of the "info" entries where verified = 1.
This is for MySQL 5.7 Community running on Windows 10.
select id, (meta->"$.options.info[*].data") AS `data`
from tbl
WHERE meta->"$.options.info[*].verified" = 1
I expected the output "data2", but the actual output is nothing.
The query below works perfectly:
select id, (meta->"$.options.info[*].data") AS `data`
from tbl
WHERE meta->"$.options.info[1].verified" = 1
But I need to search every item in the array, not only index 1.
How can I fix it?
Try:
SELECT `id`, (`meta` -> '$.options.info[*].data') `data`
FROM `tbl`
WHERE JSON_CONTAINS(`meta` -> '$.options.info[*].verified', '1');
See dbfiddle.
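If it helps to see the array search spelled out, here is the same filtering done application-side in plain JavaScript over the parsed JSON (a sketch of the logic, not MySQL itself):

```javascript
// Check every element of options.info for verified = 1, mirroring
// WHERE JSON_CONTAINS(meta -> '$.options.info[*].verified', '1')
const meta = {
  options: {
    info: [
      { data: 'data1', verified: 0 },
      { data: 'data2', verified: 1 },
    ],
    otherkeys: 'some data...',
  },
};

const verifiedData = meta.options.info
  .filter(item => item.verified === 1) // keep only verified entries
  .map(item => item.data);             // collect their data values
```

The key point is that the [*] path produces the whole array of verified flags, and JSON_CONTAINS then tests membership across all of them rather than a single index.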
I want to query JSON data with Redshift Spectrum to find out if a field in the JSON exists.
So, for example, given the data:
{ "field1" : { "one" : 1, "two" : 2}, "field2" : true }
{ "field2" : false }
And given I have defined my table as:
CREATE EXTERNAL TABLE stackoverflow_sample (
field1 struct<
one:varchar,
two:varchar
>,
field2 boolean
)
I want to be able to query it with something like:
SELECT field2 FROM stackoverflow_sample WHERE field1 IS NOT NULL;
And get the result:
TRUE
However, I keep getting the error "column field1 does not exist".
Any idea how to do this?
As part of a migration from MySQL to MongoDB, I am trying to rewrite all of the MySQL queries as MongoDB queries. I have managed to do so successfully, except for one query.
MySQL query:
SELECT
  (SELECT job_status FROM indexjob ORDER BY job_timestamp DESC LIMIT 1) AS job_status,
  (SELECT job_timestamp FROM indexjob WHERE job_status = 'SUCCESS' ORDER BY job_timestamp DESC LIMIT 1) AS job_timestamp;
MongoDB data structure:
/* 1 */
{
"_id" : ObjectId("590caba811ef2308585f0dbb"),
"_class" : "com.info.sample.bean.IndexJobBean",
"jobName" : "IndexJob",
"jobStatus" : "SUCCESS",
"timeStamp" : "2017.05.05.22.13.19"
}
/* 2 */
{
"_id" : ObjectId("590caf1711ef23082cc58a3b"),
"_class" : "com.info.sample.bean.IndexJobBean",
"jobName" : "IndexJob",
"jobStatus" : "FAILED",
"timeStamp" : "2017.05.05.22.27.59"
}
edit - from comments
The query should return the timestamp of the last jobStatus = "SUCCESS" and the most recent jobStatus (which can be SUCCESS or FAILED). In MySQL I fetched these so that the result is a single row with columns jobStatus and timestamp holding the respective values.
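As a sketch of the logic the two subqueries implement, here it is over the sample documents in plain JavaScript (this relies on the "yyyy.MM.dd.HH.mm.ss" format shown above sorting lexicographically; it is not itself the MongoDB query):

```javascript
// Reproduce the two scalar subqueries: the most recent jobStatus overall,
// and the timeStamp of the most recent SUCCESS.
const jobs = [
  { jobName: 'IndexJob', jobStatus: 'SUCCESS', timeStamp: '2017.05.05.22.13.19' },
  { jobName: 'IndexJob', jobStatus: 'FAILED', timeStamp: '2017.05.05.22.27.59' },
];

// Sort newest first; the string format makes lexicographic order chronological.
const byTimeDesc = [...jobs].sort((a, b) => b.timeStamp.localeCompare(a.timeStamp));

const result = {
  job_status: byTimeDesc[0].jobStatus,                                   // latest status
  job_timestamp: byTimeDesc.find(j => j.jobStatus === 'SUCCESS').timeStamp, // latest SUCCESS time
};
```

With the two sample documents above this yields job_status "FAILED" and job_timestamp "2017.05.05.22.13.19".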
I have a complex location database with the following schema:
table States
id : INT PK AutoIncrement
name : VARCHAR(50) UNIQUE
table Counties
id : INT PK AutoIncrement
stateID : INT ForeignKey -> States(id)
name : VARCHAR(50)
table Towns
id : INT PK AutoIncrement
stateID : INT ForeignKey -> States(id)
countyID : INT ForeignKey -> Counties(id)
name : VARCHAR(50)
table listings
id : INT PK AutoIncrement
name : VARCHAR(50)
stateID : INT
countyID : INT
townID : INT
When I want to display statistics about the geographical distribution in a tree form like this:
state1 (105 results)
  county 1 (50 results)
  county 2 (55 results)
    Town 1 (20 results)
    Town 2 (35 results)
state2 (200 results)
etc...
In MySQL I would have done these kinds of queries:
1st level:
select count(*) as nb, S.name, S.id as stateID from listings L INNER JOIN States S ON S.id=L.stateID GROUP BY S.id;
2nd level:
foreach($results as $result){
$sql = "select count(*) as nb from listings L INNER JOIN Counties C ON C.id=L.countyID WHERE L.stateID=".$result['stateID']." GROUP BY C.id";
}
and so on... There is also a way to do this in a single long query in MySQL.
These are trivial queries and they are very fast on an SSD disk in MySQL.
I am starting to learn MongoDB and I want to know what kind of schema I should use to store my location data to optimize these $count and $group operations.
And which Mongo query would do the job?
Store the documents with a structure like the listings table:
{
"name" : "listing0",
"state" : "Maryland",
"county" : "Washington",
"town" : "Faketown"
}
Then just find the number of listings per (state, county, town) triple with the aggregation pipeline:
> db.listings.aggregate([
// hopefully an initial match stage to select a subset of search results or something
{ "$group" : { "_id" : { "state" : "$state", "county" : "$county", "town" : "$town" }, "count" : { "$sum" : 1 } } }
])
From here you can compute the numbers for the higher level of the tree by iterating over the result cursor, or you can run analogous pipelines to compute the numbers at the higher level of the tree. For example, for the county numbers in a specific state
> db.listings.aggregate([
// hopefully an initial match stage to select a subset of search results or something
{ "$match" : { "state" : "Oregon" } },
{ "$group" : { "_id" : { "state" : "$state", "county" : "$county" }, "count" : { "$sum" : 1 } } }
])
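For intuition, the $group stage above can be mimicked in plain JavaScript over an in-memory array (the sample listings here are made up for illustration):

```javascript
// Simulate the $group stage: count listings per (state, county, town) triple.
const listings = [
  { name: 'listing0', state: 'Maryland', county: 'Washington', town: 'Faketown' },
  { name: 'listing1', state: 'Maryland', county: 'Washington', town: 'Faketown' },
  { name: 'listing2', state: 'Oregon', county: 'Baker', town: 'Sometown' },
];

const counts = {};
for (const l of listings) {
  const key = `${l.state}|${l.county}|${l.town}`; // stands in for the group _id
  counts[key] = (counts[key] || 0) + 1;           // { "$sum": 1 }
}
```

Each distinct key corresponds to one _id document emitted by $group, with count as the accumulated sum.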
I am trying to create a nested JSON array using 2 tables.
I have 2 tables, journal and journaldetail.
Schema is -
journal : journalid, totalamount
journaldetail : journaldetailid, journalidfk, account, amount
Relation between journal and journaldetail is one-to-many.
I want the output in following format :
{ journalid : 1,
totalamount : 1000,
journaldetails : [
{
journaldetailid : j1,
account : "abc",
amount : 500
},
{
journaldetailid : j2,
account : "def",
amount : 500
}
]}
However, following this post, the query I wrote is:
select j.*, row_to_json(jd) as journal from journal j
inner join (
select * from journaldetail
) jd on jd.journalidfk = j.journalid
and the output is like this :
{ journalid : 1,
totalamount : 1000,
journaldetails :
{
journaldetailid : j1,
account : "abc",
amount : 500
}
}
{ journalid : 1,
totalamount : 1000,
journaldetails :
{
journaldetailid : j2,
account : "def",
amount : 500
}
}
I want the child table data as nested array in the parent.
I found the answer here. Here is the query:
select row_to_json(t)
from (
  select journalid,
  totalamount,
  (
    select array_to_json(array_agg(row_to_json(jd)))
    from (
      select journaldetailid, account, amount
      from journaldetail
      where journaldetail.journalidfk = j.journalid
    ) jd
  ) as journaldetails
  from journal j
) as t
This gives the output with the journal details nested as an array.
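The same nesting can also be built application-side from the two flat result sets; a plain JavaScript sketch with illustrative rows:

```javascript
// Build the nested shape from flat journal and journaldetail rows,
// mirroring what array_to_json(array_agg(row_to_json(...))) does in SQL.
const journals = [{ journalid: 1, totalamount: 1000 }];
const details = [
  { journaldetailid: 'j1', journalidfk: 1, account: 'abc', amount: 500 },
  { journaldetailid: 'j2', journalidfk: 1, account: 'def', amount: 500 },
];

const nested = journals.map(j => ({
  ...j,
  journaldetails: details
    .filter(d => d.journalidfk === j.journalid)      // the correlated subquery
    .map(({ journalidfk, ...rest }) => rest),        // drop the FK from the output
}));
```

Doing it in SQL keeps the work in one round trip, but the application-side version can be handy when the rows are already fetched.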