MySQL to MongoDB: query equivalent for this schema

I have a complex location database with the following schema:
table States
    id: INT PK AUTO_INCREMENT
    name: VARCHAR(50) UNIQUE
table Counties
    id: INT PK AUTO_INCREMENT
    stateID: INT FOREIGN KEY -> States(id)
    name: VARCHAR(50)
table Towns
    id: INT PK AUTO_INCREMENT
    stateID: INT FOREIGN KEY -> States(id)
    countyID: INT FOREIGN KEY -> Counties(id)
    name: VARCHAR(50)
table listings
    id: INT PK AUTO_INCREMENT
    name: VARCHAR(50)
    stateID: INT
    countyID: INT
    townID: INT
I want to display some statistics about the geographical distribution in tree form, like this:
state1 (105 results)
    county 1 (50 results)
    county 2 (55 results)
        Town 1 (20 results)
        Town 2 (35 results)
state2 (200 results)
etc.
In MySQL I would have done these kinds of queries:
**1st level:**
select count(*) as nb, S.name, S.id as stateID from listings L INNER JOIN States S ON S.id = L.stateID GROUP BY S.id;
**2nd level:**
foreach ($results as $result) {
    $sql = "select count(*) as nb, C.name, C.id as countyID from listings L INNER JOIN Counties C ON C.id = L.countyID WHERE L.stateID = " . $result['stateID'] . " GROUP BY C.id";
}
and so on... There is also a way to do this in a single long query in MySQL, as sketched below.
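For reference, such a single query could lean on GROUP BY ... WITH ROLLUP (a sketch against the schema above; the rows where town or county comes back NULL are the subtotal rows for the level above):
SELECT S.name AS state, C.name AS county, T.name AS town, COUNT(*) AS nb
FROM listings L
INNER JOIN States S ON S.id = L.stateID
INNER JOIN Counties C ON C.id = L.countyID
INNER JOIN Towns T ON T.id = L.townID
GROUP BY S.name, C.name, T.name WITH ROLLUP;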
These are trivial queries and they run very fast on an SSD in MySQL.
I am starting to learn MongoDB, and I want to know what kind of schema I should use to store my location data so these $count and $group operations are fast.
And which MongoDB query would do the job?

Store the documents with a structure like the listings table:
{
    "name" : "listing0",
    "state" : "Maryland",
    "county" : "Washington",
    "town" : "Faketown"
}
Then just find the number of listings per (state, county, town) triple with the aggregation pipeline:
> db.listings.aggregate([
    // hopefully an initial $match stage to select a subset of search results or something
    { "$group" : {
        "_id" : { "state" : "$state", "county" : "$county", "town" : "$town" },
        "count" : { "$sum" : 1 }
    } }
])
From here you can compute the numbers for the higher levels of the tree by iterating over the result cursor, or you can run analogous pipelines that group at the higher levels directly. For example, for the county numbers in a specific state:
> db.listings.aggregate([
    { "$match" : { "state" : "Oregon" } },
    { "$group" : {
        "_id" : { "state" : "$state", "county" : "$county" },
        "count" : { "$sum" : 1 }
    } }
])
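If you would rather build the whole tree in one pass, you can fold the (state, county, town) groups from the first pipeline into a nested structure on the client. A minimal mongo shell sketch (field names as above; the output shape is just one possible choice):
var tree = {};
db.listings.aggregate([
    { "$group" : { "_id" : { "state" : "$state", "county" : "$county", "town" : "$town" }, "count" : { "$sum" : 1 } } }
]).forEach(function (doc) {
    var id = doc._id;
    // Create state and county nodes on first sight, then accumulate counts upward.
    var state = tree[id.state] = tree[id.state] || { count : 0, counties : {} };
    var county = state.counties[id.county] = state.counties[id.county] || { count : 0, towns : {} };
    state.count += doc.count;
    county.count += doc.count;
    county.towns[id.town] = (county.towns[id.town] || 0) + doc.count;
});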

Related

Gorm - query on one to many relation does not return required results

The relation: a store has many products.
I created the structs like this:
type Store1 struct {
    StoreSeq uint          `json:"storeSeq" gorm:"primaryKey;column:store_seq"`
    NickName string        `json:"nickName" gorm:"column:nick_name"`
    RegDate  *domain.CTime `json:"regDate" gorm:"column:reg_date"`
    Product1 []Product1    `json:"products" gorm:"foreignKey:ProductSeq"`
}

func (*Store1) TableName() string {
    return "store"
}

type Product1 struct {
    ProductSeq   uint          `json:"productSeq"`
    ProductTitle string        `json:"productTitle"`
    RegDate      *domain.CTime `json:"regDate"`
    StoreSeq     *uint         `json:"store_seq"`
}

func (*Product1) TableName() string {
    return "product"
}
and I queried it like this:
pro := new(entity.Product1)
store := new(entity.Store1)
orm.GetData().
    Model(pro).
    Preload("Product1").
    Joins("left join store on store.store_seq = product.store_seq").
    Where("store.store_seq = ?", 1).
    Find(&store)
In my database the tables contain this data:
STORE (store_seq, nick_name, reg_date)
1 | testStore | 2022-03-01 23:19:18
PRODUCT (product_seq, store_seq, product_title, reg_date)
1 | 1 | test    | 2022-03-01 23:19:18
2 | 1 | testaaa | 2022-03-01 23:19:18
I expect:
{
    "storeSeq": 1,
    "nickName": "",
    "regDate": "2022-03-01 23:19:18",
    "products": [
        {
            "productSeq": 1,
            "productTitle": "test",
            "regDate": "2022-03-01 23:19:18",
            "store_seq": 1
        },
        {
            "productSeq": 2,
            "productTitle": "testaaa",
            "regDate": "2022-03-01 23:19:18",
            "store_seq": 1
        }
    ]
}
but it only returns one result:
{
    "storeSeq": 1,
    "nickName": "",
    "regDate": "2022-03-01 23:19:18",
    "products": [
        {
            "productSeq": 1,
            "productTitle": "test",
            "regDate": "2022-03-01 23:19:18",
            "store_seq": 1
        }
    ]
}
I checked the SQL and found that it executes two queries:
[1.725ms] [rows:2] SELECT `product`.`product_seq`,`product`.`product_title`,`product`.`reg_date`,`product`.`store_seq` FROM `product` WHERE `product`.`product_seq` = 1
and
[6.370ms] [rows:1] SELECT `product`.`product_seq`,`product`.`product_title`,`product`.`reg_date`,`product`.`store_seq` FROM `product` left join store on store.store_seq = product.store_seq WHERE store.store_seq = 1
I don't know why it executes the first SQL query; I want it to execute the second query only.
I have no idea; this is my first time using Golang with GORM and the Serverless framework.
I found out that I had made the wrong relation between product and store.
A store has many products, so the Product1 []Product1 field must declare its foreign key as StoreSeq, but I had set the foreign key to ProductSeq.
I also found out that the two queries come from the Preload option: Preload always issues a separate query per preloaded association. A corrected sketch is below.
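For reference, a corrected version might look like this (a sketch assuming the entity/domain packages from the question; the first, wrong query disappears once the relation points at StoreSeq, and the extra Preload query is expected behavior):
type Store1 struct {
    StoreSeq uint          `json:"storeSeq" gorm:"primaryKey;column:store_seq"`
    NickName string        `json:"nickName" gorm:"column:nick_name"`
    RegDate  *domain.CTime `json:"regDate" gorm:"column:reg_date"`
    // The foreign key is the StoreSeq column on the product side, not ProductSeq.
    Product1 []Product1 `json:"products" gorm:"foreignKey:StoreSeq"`
}

var store entity.Store1
err := orm.GetData().
    Preload("Product1"). // loads the products slice in its own query
    First(&store, "store_seq = ?", 1).
    Error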

Creating a JSON array using column names as keys and column values in Postgres

I have a table named list in a PostgreSQL database:
create table list (firstname text, lastname text, age integer);
insert into list values ('SHARON', 'XAVIER', 25);
insert into list values ('RON', 'PETER', 17);
insert into list values ('KIM', 'BENNY', 14);
select * from list;
firstname | lastname | age
-----------+----------+-----
SHARON | XAVIER | 25
RON | PETER | 17
KIM | BENNY | 14
I need to create a JSON array from this table, with each column name as a key and each column as the value, like this:
[
    { "firstname" : "SHARON", "lastname" : "XAVIER", "age" : 25 },
    { "firstname" : "RON", "lastname" : "PETER", "age" : 17 },
    { "firstname" : "KIM", "lastname" : "BENNY", "age" : 14 }
]
Any possible options?
You can use to_jsonb() to convert an entire row to a JSON value, then use jsonb_agg() to aggregate all those into a single JSON array:
select jsonb_agg(to_jsonb(l))
from list l;
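One caveat worth knowing: jsonb does not preserve key order, so the objects may come back with the keys reordered (jsonb sorts them internally, e.g. "age" first). If the column order matters, the json (rather than jsonb) variants preserve it:
select json_agg(to_json(l))
from list l;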

How to select in MySQL with a condition on a nested JSON array?

I created a table that has one JSON column; the inserted data has the structure below:
{
    "options" : {
        "info" : [
            { "data" : "data1", "verified" : 0 },
            { "data" : "data2", "verified" : 1 },
            ... and more
        ],
        "otherkeys" : "some data..."
    }
}
I want to run a query that returns the data of the "info" entries where verified = 1.
This is for MySQL 5.7 Community running on Windows 10.
select id, (meta->"$.options.info[*].data") AS `data`
from tbl
WHERE meta->"$.options.info[*].verified" = 1
I expect the output "data2", but the actual output is nothing.
The query below works perfectly:
select id, (meta->"$.options.info[*].data") AS `data`
from tbl
WHERE meta->"$.options.info[1].verified" = 1
But I need to search all items in the array, not only index 1.
How can I fix it?
Try:
SELECT `id`, (`meta` -> '$.options.info[*].data') `data`
FROM `tbl`
WHERE JSON_CONTAINS(`meta` -> '$.options.info[*].verified', '1');
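Note that JSON_CONTAINS here only filters the rows; the selected data column still returns every data value in the array, not just the verified ones. On MySQL 8.0+ (not 5.7, unfortunately) JSON_TABLE can pull out only the matching elements. A sketch, assuming the same tbl and meta names:
SELECT t.id, jt.data
FROM tbl t,
     JSON_TABLE(
         t.meta,
         '$.options.info[*]'
         COLUMNS (
             data VARCHAR(50) PATH '$.data',
             verified INT PATH '$.verified'
         )
     ) AS jt
WHERE jt.verified = 1;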

Pagination in MongoDB: avoiding skip() and limit()

I am new to Node.js. I want to search in big data: how can I fetch documents without using skip() and limit()? My document structure is:
{
    "_id" : ObjectId("5a9d1836d2596d624873c84f"),
    "updatedAt" : ISODate("2018-03-05T10:13:10.248Z"),
    "createdAt" : ISODate("2018-03-05T10:13:10.248Z"),
    "phone" : "+92333333",
    "country_code" : "+92",
    "verified_user" : false,
    "verification_code" : "2951",
    "last_shared_loc_time" : ISODate("2018-03-05T10:13:10.240Z"),
    "share_loc_flag_time" : ISODate("2018-03-05T10:13:10.240Z"),
    "share_location" : true,
    "deactivate_user" : false,
    "profile_photo_url" : null,
    "__v" : 0
}
How can I search using createdAt?
I need a MongoDB query that the API can call to return those users.
Using skip() is not recommended with big data in MongoDB because it always requires the server to walk from the beginning of the collection. Instead you can paginate on the _id index with limit(): the _id field is indexed by default in MongoDB, so you get good performance for free.
For the first page, fetch the documents with limit() and remember the _id value of the last one:
db.users.find().sort({ "_id" : 1 }).limit(8);
// last_id = _id of the last document returned
Then for the next page, ask only for documents past that last _id:
db.users.find({ "_id" : { "$gt" : last_id } }).sort({ "_id" : 1 }).limit(8);
For example, assuming the first page ends at _id 1008:
1000 < _id <= 1008 ==> PAGE 1, where last_id = 1008
1008 < _id <= 1016 ==> PAGE 2, where last_id = 1016
1016 < _id <= 1024 ==> PAGE 3, where last_id = 1024
The same idea works with any monotonically increasing, indexed field. Say the documents look like:
{ uid : 1, name : 'abc', ... }
{ uid : 2, name : 'abc', ... }
...
{ uid : 10, name : 'abc', ... }
{ uid : 11, name : 'abc', ... }
Now you can query for the data where uid > 0 and uid <= 5, then in the next slot for the data where uid > 5 and uid <= 10, and so on.
Maybe this approach can help you out.
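Since the question asks specifically about createdAt, the same keyset pattern applies there too, with _id as a tie-breaker because createdAt is not unique. A minimal Node.js sketch using the official mongodb driver (collection and field names from the question; the page size and the compound index { createdAt: 1, _id: 1 } are assumptions):
const { MongoClient } = require('mongodb');

// db: a connected Db instance, e.g.
//   const client = await MongoClient.connect(url); const db = client.db('mydb');
// Pass the previous page's final createdAt/_id to fetch the next page.
async function nextPage(db, lastCreatedAt, lastId, pageSize = 8) {
    const filter = lastCreatedAt
        ? {
              $or: [
                  { createdAt: { $gt: lastCreatedAt } },
                  { createdAt: lastCreatedAt, _id: { $gt: lastId } },
              ],
          }
        : {}; // first page: no cursor yet
    return db
        .collection('users')
        .find(filter)
        .sort({ createdAt: 1, _id: 1 })
        .limit(pageSize)
        .toArray();
}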

Postgres nested JSON array using row_to_json

I am trying to create a nested JSON array using 2 tables.
I have 2 tables, journal and journaldetail.
The schema is:
journal : journalid, totalamount
journaldetail : journaldetailid, journalidfk, account, amount
The relation between journal and journaldetail is one-to-many.
I want the output in the following format:
{
    journalid : 1,
    totalamount : 1000,
    journaldetails : [
        {
            journaldetailid : j1,
            account : "abc",
            amount : 500
        },
        {
            journaldetailid : j2,
            account : "def",
            amount : 500
        }
    ]
}
However, following this post, the query I wrote is:
select j.*, row_to_json(jd) as journal
from journal j
inner join (
    select * from journaldetail
) jd on jd.sjournalidfk = j.sjournalid
and the output is like this :
{
    journalid : 1,
    totalamount : 1000,
    journaldetails : {
        journaldetailid : j1,
        account : "abc",
        amount : 500
    }
}
{
    journalid : 1,
    totalamount : 1000,
    journaldetails : {
        journaldetailid : j2,
        account : "def",
        amount : 500
    }
}
I want the child table data as nested array in the parent.
I found the answer here.
Here is the query:
select row_to_json(t)
from (
    select sjournalid,
        (
            select array_to_json(array_agg(row_to_json(jd)))
            from (
                select sjournaldetailid, saccountidfk
                from btjournaldetail
                where j.sjournalid = sjournalidfk
            ) jd
        ) as journaldetail
    from btjournal j
) as t
This gives output in array format.
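On PostgreSQL 9.4+ you can get the same shape a bit more directly with json_agg and json_build_object (a sketch reusing the bt* table and column names from the query above):
select row_to_json(t)
from (
    select j.sjournalid,
           json_agg(json_build_object(
               'journaldetailid', d.sjournaldetailid,
               'account', d.saccountidfk
           )) as journaldetail
    from btjournal j
    join btjournaldetail d on d.sjournalidfk = j.sjournalid
    group by j.sjournalid
) t;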