I am trying to find a query to get the row count and the size in bytes of a table in each of these backend databases:
I) MySQL
II) Oracle DB
III) SQL Server
IV) Mongo DB
V) Teradata
MySQL
Row count of a single table
select count(*) from table_name;
Row count of all tables
SELECT SUM(TABLE_ROWS)
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_SCHEMA = 'database schema name';
Size
SELECT table_schema as `Database`,
table_name AS `Table`,
round(((data_length + index_length) / 1024 / 1024), 2) `Size in MB`
FROM information_schema.TABLES
ORDER BY (data_length + index_length) DESC;
The result for size would look like this:
Database   Table               Size in MB
sakai      sakai_realm_rl_fn   22.39
sonar      file_sources         8.56
sakai      sakai_site_tool      4.55
sakai      sakai_event          4.03
sonar      issues               3.75
sakai      sakai_site_page      3.03
sonar      project_measures     2.03
MongoDB
MongoDB has no rows and columns; everything is a document, so instead we find the number of documents in a collection and its size.
db.collection.stats() // example db.stackoverflow.stats(), where stackoverflow is my collection name
The result would be similar to this
{
"ns" : "test.stackoverflow",
"count" : 2,
"size" : 224,
"avgObjSize" : 112,
"numExtents" : 1,
"storageSize" : 8192,
"lastExtentSize" : 8192,
"paddingFactor" : 1,
"paddingFactorNote" : "paddingFactor is unused and unmaintained in 3.0. It remains hard coded to 1.0 for compatibility only.",
"userFlags" : 1,
"capped" : false,
"nindexes" : 1,
"totalIndexSize" : 8176,
"indexSizes" : {
"_id_" : 8176
},
"ok" : 1
}
db.stats(); // Helps us find the total number of documents in the database
Executing it in the mongo shell gives a result like this:
{
"db" : "test",
"collections" : 9,
"objects" : 1000063,
"avgObjSize" : 112.00058396321032,
"dataSize" : 112007640,
"storageSize" : 175837184,
"numExtents" : 20,
"indexes" : 12,
"indexSize" : 169938160,
"fileSize" : 2080374784,
"nsSizeMB" : 16,
"extentFreeList" : {
"num" : 70,
"totalSize" : 760217600
},
"dataFileVersion" : {
"major" : 4,
"minor" : 22
},
"ok" : 1
}
Here "objects" is the total number of documents across all collections.
More info - http://docs.mongodb.com/manual/reference/method/db.stats
Oracle
Row count of a single table
select count(*) from table_name
Total Size
select * from dba_data_files;
select round((sum(bytes)/1048576/1024),2) from v$datafile;
To execute these queries you need to log in as a system user with all privileges.
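The question also lists SQL Server and Teradata. As a hedged sketch (system view and procedure names can vary by version), the equivalent lookups could look like this:

```sql
-- SQL Server: row count of a single table
SELECT COUNT(*) FROM table_name;

-- SQL Server: rows plus reserved/data/index space in one call
EXEC sp_spaceused 'table_name';

-- Teradata: row count of a single table
SELECT COUNT(*) FROM database_name.table_name;

-- Teradata: current size in bytes from the DBC.TableSizeV system view
SELECT SUM(CurrentPerm) AS size_in_bytes
FROM DBC.TableSizeV
WHERE DatabaseName = 'database_name'
  AND TableName = 'table_name';
```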
The answer to #1 is SHOW TABLE STATUS, or: SELECT table_name, table_rows FROM information_schema.tables
Related
In PostgreSQL I can't find a function in the docs that would let me combine n JSON entities while summing the values of any keys they share.
English is not my main language, so I suspect I don't know the right terms to search for.
In other words
from a table with 2 columns
name data
'didier' {'vinyl': 2, 'cd': 3}
'Anne' {'cd' : 1, 'tape' : 4}
'Pierre' {'cd' : 1, 'tape': 9, 'mp3':2}
I want to produce the following result:
{ 'vinyl' : 2, 'cd' : 5, 'tape' : 13, 'mp3' : 2 }
That is, with a "combine and sum" function.
Thanks in advance for any idea
Didier
Using a the_table CTE for illustration: first "normalize" the data column, then sum per item type (k), and finally aggregate into a JSONB object.
with the_table("name", data) as
(
values
('didier', '{"vinyl": 2, "cd": 3}'::jsonb),
('Anne', '{"cd" : 1, "tape" : 4}'),
('Pierre', '{"cd" : 1, "tape": 9, "mp3":2}')
)
select jsonb_object_agg(k, v) from
(
select lat.k, sum((lat.v)::integer) v
from the_table
cross join lateral jsonb_each(data) as lat(k, v)
group by lat.k
) t;
-- {"cd": 5, "mp3": 2, "tape": 13, "vinyl": 2}
I created a table with one JSON column; the inserted data has the structure below:
{
"options" : {
"info" : [
{"data" : "data1", "verified" : 0},
{"data" : "data2", "verified" : 1},
... and more
],
"otherkeys" : "some data..."
}
}
I want to run a query that returns the "data" value of the "info" entries where verified = 1.
This is for MySQL 5.7 Community running on Windows 10.
select id, (meta->"$.options.info[*].data") AS `data`
from tbl
WHERE meta->"$.options.info[*].verified" = 1
I expect the output to be "data2", but the actual output is nothing.
The query below works perfectly:
select id, (meta->"$.options.info[*].data") AS `data`
from tbl
WHERE meta->"$.options.info[1].verified" = 1
but I need to search every item in the array, not only index 1.
How can I fix it?
(Sorry for my bad English.)
Try:
SELECT `id`, (`meta` -> '$.options.info[*].data') `data`
FROM `tbl`
WHERE JSON_CONTAINS(`meta` -> '$.options.info[*].verified', '1');
See dbfiddle.
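If upgrading is an option: starting with MySQL 8.0 (not 5.7), JSON_TABLE can turn the array into rows so you can filter with a plain WHERE clause. A sketch under that assumption:

```sql
SELECT t.id, jt.data
FROM tbl t,
     JSON_TABLE(t.meta, '$.options.info[*]'
       COLUMNS (
         data     VARCHAR(100) PATH '$.data',
         verified INT          PATH '$.verified'
       )
     ) AS jt
WHERE jt.verified = 1;
```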
I am new to Node.js and I want to search through big data. How can I find documents without using skip() and limit()? My database structure is:
{
"_id" : ObjectId("5a9d1836d2596d624873c84f"),
"updatedAt" : ISODate("2018-03-05T10:13:10.248Z"),
"createdAt" : ISODate("2018-03-05T10:13:10.248Z"),
"phone" : "+92333333",
"country_code" : "+92",
"verified_user" : false,
"verification_code" : "2951",
"last_shared_loc_time" : ISODate("2018-03-05T10:13:10.240Z"),
"share_loc_flag_time" : ISODate("2018-03-05T10:13:10.240Z"),
"share_location" : true,
"deactivate_user" : false,
"profile_photo_url" : null,
"__v" : 0
}
How can I search using createdAt?
I need a MongoDB query that the API can use to return those users.
Using skip() is not recommended with big data in MongoDB, because it always requires the server to walk from the beginning of the collection. Instead you can paginate with the _id index and limit(): the _id field is indexed by default in MongoDB, so it gives good performance.
For the first page, fetch the documents with limit() and remember the _id value of the last one:
db.users.find().limit(8);
last_id = id; // _id of the last document returned above
Then, for the next page, ask for documents whose _id is greater than that value:
db.users.find({ '_id': { '$gt': last_id } }).limit(8);
For example, assuming the first _id is equal to 1000:
1000 < 1008 ==> PAGE 1 where last_id=1008
1008 < 1016 ==> PAGE 2 where last_id=1016
1016 < 1024 ==> PAGE 3 where last_id=1024
{
uid : 1,
name: 'abc'
...
}
{
uid : 2,
name: 'abc'
...
}
....
{
uid : 10,
name: 'abc'
...
}
{
uid : 11,
name: 'abc'
...
}
Now you can query like: get data where uid > 0 and uid < 5,
then in the next slot: get data where uid > 5 and uid < 10, and so on...
Maybe this approach can help you out.
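Putting both ideas together, a minimal mongo shell sketch (collection and field names assumed from the documents above) might look like:

```javascript
// Range pagination on the default _id index, 8 documents per page
var page = db.users.find().sort({ _id: 1 }).limit(8).toArray();
var last_id = page[page.length - 1]._id;

// Next page: everything after the last _id we saw
db.users.find({ _id: { $gt: last_id } }).sort({ _id: 1 }).limit(8);

// The same slot-based idea with a numeric uid field
db.users.find({ uid: { $gt: 0, $lte: 5 } });   // slot 1
db.users.find({ uid: { $gt: 5, $lte: 10 } });  // slot 2
```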
As part of a migration from MySQL to MongoDB, I am trying to rewrite all the MySQL queries as MongoDB queries. I managed to do so successfully, except for one query.
MySQL query:
SELECT(select job_status from indexjob order by job_timestamp desc limit 1) AS job_status,(select job_timestamp from indexjob where job_status='SUCCESS' order by job_timestamp desc limit 1) AS job_timestamp;
MongoDB data structure:
/* 1 */
{
"_id" : ObjectId("590caba811ef2308585f0dbb"),
"_class" : "com.info.sample.bean.IndexJobBean",
"jobName" : "IndexJob",
"jobStatus" : "SUCCESS",
"timeStamp" : "2017.05.05.22.13.19"
}
/* 2 */
{
"_id" : ObjectId("590caf1711ef23082cc58a3b"),
"_class" : "com.info.sample.bean.IndexJobBean",
"jobName" : "IndexJob",
"jobStatus" : "FAILED",
"timeStamp" : "2017.05.05.22.27.59"
}
edit - from comments
The query should return the timestamp of the last jobStatus="SUCCESS" and the latest jobStatus (which can be SUCCESS or FAILED). In MySQL I fetched these so that the result is a single row with columns jobStatus and timeStamp holding the respective values.
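One way to express both lookups in a single MongoDB query is a sketch like the following, assuming MongoDB 3.4+ for $facet and relying on the fact that this timeStamp string format sorts chronologically:

```javascript
db.indexjob.aggregate([
  { $sort: { timeStamp: -1 } },  // newest first
  { $facet: {
      // latest job status, whatever it is
      lastStatus: [
        { $limit: 1 },
        { $project: { _id: 0, jobStatus: 1 } }
      ],
      // timestamp of the most recent SUCCESS
      lastSuccess: [
        { $match: { jobStatus: "SUCCESS" } },
        { $limit: 1 },
        { $project: { _id: 0, timeStamp: 1 } }
      ]
  } }
]);
```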
I have a complex location database with the following schema :
table States
id : INT PK AutoIncrement
name : VarChar 50 UNIQUE
table Counties
id: INT PK AutoIncrement
stateID : INT ForeignKey ->States(id)
name : VARCHAR(50)
table Towns :
id: INT PK AutoIncrement
stateID : INT ForeignKey ->States(id)
countyID : INT ForeignKey ->Counties(id)
name : VARCHAR(50)
table listings
id : INT PK autoincrement
name: varchar(50)
stateID: INT
countyID: INT
townID: INT
I want to display some statistics about the geographical distribution in tree form, like this:
state1 (105 results)
county 1 (50 results)
county 2 (55 results)
Town 1 (20 results)
Town 2 ( 35 results)
state2 (200 results)
etc...
In MySQL I would have done this kind of query:
1st level:
select count(*) as nb, S.name, S.id as stateID from listings L INNER JOIN States S ON S.id=L.stateID GROUP BY S.id;
2nd level:
foreach ($results as $result) {
    $sql = "select count(*) as nb from listings L INNER JOIN Counties C ON C.id=L.countyID WHERE L.stateID=" . $result['stateID'];
}
and so on... There is also a way to do that in a single long query in MySQL.
This is a trivial query, and it is very fast on an SSD disk in MySQL.
I am starting to learn MongoDB and I want to know what kind of schema I should use to store my location data to optimize these count and $group operations.
And which mongo query would do the job?
Store the documents with a structure like the listings table:
{
"name" : "listing0",
"state" : "Maryland",
"county" : "Washington",
"town" : "Faketown"
}
Then just find the number of listings per (state, county, town) triple with the aggregation pipeline:
> db.listings.aggregate([
// hopefully an initial match stage to select a subset of search results or something
{ "$group" : { "_id" : { "state" : "$state", "county" : "$county", "town" : "$town" }, "count" : { "$sum" : 1 } } }
])
From here you can compute the numbers for the higher levels of the tree by iterating over the result cursor, or you can run analogous pipelines that compute the numbers at each higher level directly. For example, for the county numbers in a specific state:
> db.listings.aggregate([
// hopefully an initial match stage to select a subset of search results or something
{ "$match" : { "state" : "Oregon" } },
{ "$group" : { "_id" : { "state" : "$state", "county" : "$county" }, "count" : { "$sum" : 1 } } }
])