How to convert MySQL rows into a JSON response using Node.js? - mysql

I'm trying to create a mark sheet application for schools.
I would like to display the marks entered by the teachers in the Angular web application.
I have a MySQL table like this:
+---------+---------+------------+-------+
| Roll_no | Subject | Category   | Marks |
+---------+---------+------------+-------+
| 1       | English | HomeWork 1 | 10    |
| 1       | English | HomeWork 2 | 10    |
| 1       | Science | HomeWork 1 | 10    |
| 1       | Science | HomeWork 2 | 10    |
| 2       | English | HomeWork 1 | 10    |
| 2       | English | HomeWork 2 | 10    |
| 2       | Science | HomeWork 1 | 10    |
| 2       | Science | HomeWork 2 | 10    |
+---------+---------+------------+-------+
How can I convert this into a JSON response like the one below,
[
  {
    "Roll_no": 1,
    "Marks": [
      {
        "English": { "HomeWork 1": "10", "HomeWork 2": "10" },
        "Science": { "HomeWork 1": "10", "HomeWork 2": "10" }
      }
    ]
  },
  {
    "Roll_no": 2,
    "Marks": [
      {
        "English": { "HomeWork 1": "10", "HomeWork 2": "10" },
        "Science": { "HomeWork 1": "10", "HomeWork 2": "10" }
      }
    ]
  }
]
using SQL queries and Node.js?
I'm using:
Node.js v10.0
Angular v7.0
Thanks in advance.
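A minimal sketch of one way to do this on the SQL side, assuming MySQL 5.7.22 or later (for the JSON aggregate functions) and a hypothetical table name student_marks; the inner query groups categories per subject, and the outer one merges subjects per roll number:
-- Hypothetical table name student_marks; requires MySQL 5.7.22+ for
-- JSON_OBJECTAGG / JSON_ARRAY.
SELECT Roll_no,
       JSON_ARRAY(JSON_OBJECTAGG(Subject, subject_marks)) AS Marks
FROM (
    -- One row per (Roll_no, Subject) with categories folded into an object.
    SELECT Roll_no,
           Subject,
           JSON_OBJECTAGG(Category, Marks) AS subject_marks
    FROM student_marks
    GROUP BY Roll_no, Subject
) AS per_subject
GROUP BY Roll_no;
Depending on the Node.js driver, each row's Marks value may arrive as a JSON string that needs JSON.parse before the response array is sent; alternatively, one more JSON_ARRAYAGG over JSON_OBJECT('Roll_no', Roll_no, 'Marks', ...) would return the whole payload as a single value.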

Related

QueryDSL with DB2: fetching a nested JSON object or JSON array aggregation response

I am trying to fetch nested JSON objects and a JSON list from the database using QueryDSL. I have used a native query with LISTAGG and JSON_OBJECT.
Native query:
SELECT b.id, b.bankName, b.account, b.branch,
       (SELECT CONCAT(CONCAT('[', LISTAGG(JSON_OBJECT(
                   'accountId' VALUE c.accountId,
                   'name' VALUE customer_name,
                   'amount' VALUE c.amount), ',')), ']')
        FROM CUSTOMER_DETAILS c
        WHERE c.bankId = b.id) AS customers
FROM BANK_DETAILS b
BANK_DETAILS
+----+----------+---------+---------+
| id | BankName | account | branch  |
+----+----------+---------+---------+
| 1  | bank1    | savings | branch1 |
| 2  | bank2    | current | branch2 |
+----+----------+---------+---------+
CUSTOMER_DETAILS
+----+-----------+---------------+--------+--------+
| id | accountId | customer_name | amount | BankId |
+----+-----------+---------------+--------+--------+
| 1  | 50123     | Abc1          | 150000 | 1      |
| 2  | 50124     | Abc2          | 25000  | 1      |
| 3  | 50125     | Abc3          | 50000  | 2      |
| 4  | 50126     | Abc4          | 250000 | 2      |
+----+-----------+---------------+--------+--------+
Expected output for the above tables:
[{
  "id": "1",
  "bankName": "bank1",
  "account": "savings",
  "branch": "branch1",
  "customers": [
    {
      "accountId": "50123",
      "name": "Abc1",
      "amount": 150000
    },
    {
      "accountId": "50124",
      "name": "Abc2",
      "amount": 25000
    }
  ]
}, {
  "id": "2",
  "bankName": "bank2",
  "account": "current",
  "branch": "branch2",
  "customers": [
    {
      "accountId": "50125",
      "name": "Abc3",
      "amount": 50000
    },
    {
      "accountId": "50126",
      "name": "Abc4",
      "amount": 250000
    }
  ]
}]
I have tried rewriting this native query in QueryDSL as the multiple queries below, building the same expected output with a forEach loop.
class Repository {
    private final SQLQueryFactory queryFactory;

    public Repository(SQLQueryFactory queryFactory) {
        this.queryFactory = queryFactory;
    }

    public void fetchBankDetails() {
        // First query: load all banks.
        List<BankDetails> bankList = queryFactory
                .select(QBankDetails.bankDetails)
                .from(QBankDetails.bankDetails)
                .fetch();

        // One additional query per bank to load its customers (N+1 pattern).
        bankList.forEach(bankData -> {
            List<CustomerDetails> customerList = queryFactory
                    .select(QCustomerDetails.customerDetails)
                    .from(QCustomerDetails.customerDetails)
                    .where(QCustomerDetails.customerDetails.bankId.eq(bankData.bankId))
                    .fetch();
            bankData.setCustomerList(customerList);
        });
        System.out.println(bankList);
    }
}
I need to improve this code and convert it into a single query in QueryDSL that returns the expected output.
Is there any other way, or any suggestions?

How to get the roleid to enrol a user via the Moodle web service

I want to use the 'enrol_manual_enrol_users' function. One required field for this is 'roleid'. I'd like to pull a list of roles from Moodle and present them to the user, to select which role the student should be enrolled as. I can't see any function which returns a list of roles. Is there a built-in web service for this?
AFAIK there is no Web Services API to retrieve the Moodle roles, since there is normally no need to. You can find the role IDs in the mdl_role table. Unless modified, they will look like this:
+------+--------+------------------+---------------+-------------+------------------+
| "id" | "name" | "shortname"      | "description" | "sortorder" | "archetype"      |
+------+--------+------------------+---------------+-------------+------------------+
| "1"  | ""     | "manager"        | ""            | "1"         | "manager"        |
| "2"  | ""     | "coursecreator"  | ""            | "2"         | "coursecreator"  |
| "3"  | ""     | "editingteacher" | ""            | "3"         | "editingteacher" |
| "4"  | ""     | "teacher"        | ""            | "4"         | "teacher"        |
| "5"  | ""     | "student"        | ""            | "5"         | "student"        |
| "6"  | ""     | "guest"          | ""            | "6"         | "guest"          |
| "7"  | ""     | "user"           | ""            | "7"         | "user"           |
| "8"  | ""     | "frontpage"      | ""            | "8"         | "frontpage"      |
+------+--------+------------------+---------------+-------------+------------------+
Most probably, you will just need the student and teacher roles.
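If you have database access, a quick way to grab just those two is a direct query (a sketch assuming the default 'mdl_' table prefix; adjust it if your site uses another one):
-- Assumes direct read access to the Moodle database and the default
-- 'mdl_' table prefix.
SELECT id, shortname
FROM mdl_role
WHERE shortname IN ('student', 'teacher');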
Since you are working with the Moodle core API, I suggest activating the built-in API documentation (Administration block > Plugins > Web services > API Documentation) in the settings.
The official Web Services Forum is also worth knowing about.
If you don't have access to the database but have admin access, you can go to Site Administration > Users > Permissions > Define Roles and select one, e.g. Student; the roleid is a parameter in the URL.

Summing values from a JSON array in Snowflake

I have source data that contains the following type of JSON array:
[
  [
    "source 1",
    250
  ],
  [
    "other source",
    58
  ],
  [
    "more stuff",
    42
  ],
  ...
]
There can be 1..N pairs of strings and values like this. How can I sum all the values together from this JSON?
You can use FLATTEN; it produces a single row for each element of the input array. Then you can access the number in each element directly.
Imagine you have this input table:
create or replace table input as
select parse_json($$
[
  [
    "source 1",
    250
  ],
  [
    "other source",
    58
  ],
  [
    "more stuff",
    42
  ]
]
$$) as json;
FLATTEN will do this:
select index, value from input, table(flatten(json));
-------+-------------------+
 INDEX | VALUE             |
-------+-------------------+
 0     | [                 |
       |   "source 1",     |
       |   250             |
       | ]                 |
 1     | [                 |
       |   "other source", |
       |   58              |
       | ]                 |
 2     | [                 |
       |   "more stuff",   |
       |   42              |
       | ]                 |
-------+-------------------+
And so you can just use VALUE[1] to access what you want:
select sum(value[1]) from input, table(flatten(json));
---------------+
 SUM(VALUE[1]) |
---------------+
 350           |
---------------+
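One caveat: VALUE[1] is a VARIANT, so if the numeric part can arrive as a string in some source rows, an explicit cast keeps the sum well defined (a small variation on the query above):
-- Cast the VARIANT element to NUMBER before summing.
select sum(value[1]::number) from input, table(flatten(json));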

Best way of keeping logs in MySQL

What would be the best way to keep logs?
Possible solutions that I can think of are below:
1. Create a log table which contains a field that saves data in JSON format
2. Create a table which has a structure identical to the table that saves the data
Currently, I am using solution 1; however, I wonder if there is a better way to keep logs.
Here is the example:
Table Books
| id | name            | created_at | price | author         | reputation |
|----|-----------------|------------|-------|----------------|------------|
| 1  | about korea     | 2018-01-01 | 1000  | korean         | 10         |
| 2  | kimchi vs sushi | 2018-01-01 | 5000  | natural korean | 20         |
| 3  | i love america  | 2018-02-03 | 6000  | kim jong un    | 30         |
Whenever there is a change in Books, I create a record in BooksLog to keep track of the change, like below. Keeping logs this way is flexible: even if the schema changes, the log is not majorly affected. However, this approach has performance issues.
| id | books_id | row                                                                                                                       |
|----|----------|---------------------------------------------------------------------------------------------------------------------------|
| 1  | 1        | { id : 1, name : "about korea", created_at : "2018-01-01", price : 1000, author : "korean", reputation : 10 }             |
| 2  | 2        | { id : 2, name : "kimchi vs sushi", created_at : "2018-01-01", price : 5000, author : "natural korean", reputation : 20 } |
| 3  | 3        | { id : 3, name : "i love america", created_at : "2018-01-01", price : 6000, author : "kim jong un", reputation : 30 }     |
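For reference, a minimal sketch of solution 1 wired up with a trigger, assuming MySQL 5.7+ (for the JSON type and JSON_OBJECT) and the hypothetical Books/BooksLog schema shown above; insert and delete triggers would follow the same pattern:
-- Log table matching the structure shown above; `row` is quoted because
-- ROW is reserved in newer MySQL versions.
CREATE TABLE BooksLog (
    id       INT AUTO_INCREMENT PRIMARY KEY,
    books_id INT NOT NULL,
    `row`    JSON NOT NULL
);

DELIMITER $$
CREATE TRIGGER books_after_update
AFTER UPDATE ON Books
FOR EACH ROW
BEGIN
    -- Snapshot the new row state as a JSON document.
    INSERT INTO BooksLog (books_id, `row`)
    VALUES (NEW.id, JSON_OBJECT(
        'id', NEW.id,
        'name', NEW.name,
        'created_at', NEW.created_at,
        'price', NEW.price,
        'author', NEW.author,
        'reputation', NEW.reputation));
END$$
DELIMITER ;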

How to extract data from JSON using an Oracle Text index

I have a table with an Oracle Text index. I created the index because I need very fast search. The table contains JSON data. Oracle's json_textcontains performs poorly, so I tried playing with CONTAINS (json_textcontains is actually rewritten to CONTAINS, as the query plan shows).
I want to find all JSON documents with a given class_type and id value, but Oracle searches across the whole JSON without requiring that class_type and id be in the same JSON section; i.e., it treats the JSON not as structured data but as one huge string.
Well-formatted JSON looks like this:
{
  "class":[
    {
      "class_type":"ownership",
      "values":[{"nm":"id","value":"1"}]
    },
    {
      "class_type":"country",
      "values":[{"nm":"id","value":"640"}]
    },
    {
      "class_type":"features",
      "values":[{"nm":"id","value":"15"},{"nm":"id","value":"20"}]
    }
  ]
}
The second one, which shouldn't be found, looks like this:
{
  "class":[
    {
      "class_type":"ownership",
      "values":[{"nm":"id","value":"18"}]
    },
    {
      "class_type":"country",
      "values":[{"nm":"id","value":"11"}]
    },
    {
      "class_type":"features",
      "values":[{"nm":"id","value":"7"},{"nm":"id","value":"640"}]
    }
  ]
}
Please see how to reproduce what I'm trying to achieve:
create table perso.json_data(id number, data_val blob);
insert into perso.json_data
values(
1,
utl_raw.cast_to_raw('{"class":[{"class_type":"ownership","values":[{"nm":"id","value":"1"}]},{"class_type":"country","values":[{"nm":"id","value":"640"}]},{"class_type":"features","values":[{"nm":"id","value":"15"},{"nm":"id","value":"20"}]}]}')
);
insert into perso.json_data values(
2,
utl_raw.cast_to_raw('{"class":[{"class_type":"ownership","values":[{"nm":"id","value":"18"}]},{"class_type":"country","values":[{"nm":"id","value":"11"}]},{"class_type":"features","values":[{"nm":"id","value":"7"},{"nm":"id","value":"640"}]}]}')
)
;
commit;
ALTER TABLE perso.json_data
ADD CONSTRAINT check_is_json
CHECK (data_val IS JSON (STRICT));
CREATE INDEX perso.json_data_idx ON json_data (data_val)
INDEXTYPE IS CTXSYS.CONTEXT
PARAMETERS ('section group CTXSYS.JSON_SECTION_GROUP SYNC (ON COMMIT)');
select *
from perso.json_data
where ctxsys.contains(data_val, '(640 INPATH(/class/values/value)) and (country inpath (/class/class_type))')>0
The query returns 2 rows, but I expect to get only the record where id = 1.
How can I use a full-text index so that the search respects the JSON structure (avoiding the false match shown above), without using JSON_TABLE?
There is no option to put the data in relational format.
Thanks in advance.
Please don't use the Text index directly to try to solve this kind of problem. It's not what it's designed for.
In 12.2.0.1.0 the following should work for you (and yes, it does use a specialized version of the Text index under the covers, but it also applies selective post-filtering to ensure the results are correct).
SQL> create table json_data(id number, data_val blob)
2 /
Table created.
SQL> insert into json_data values(
  2  1,utl_raw.cast_to_raw('{"class":[{"class_type":"ownership","values":[{"nm":"id","value":"1"}]},{"class_type":"country","values":[{"nm":"id","value":"640"}]},{"class_type":"features","values":[{"nm":"id","value":"15"},{"nm":"id","value":"20"}]}]}')
  3  )
  4  /
1 row created.
Execution Plan
----------------------------------------------------------
--------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | 1 | 100 | 1 (0)| 00:00:01 |
| 1 | LOAD TABLE CONVENTIONAL | JSON_DATA | | | | |
--------------------------------------------------------------------------------------
SQL> insert into json_data values(
  2  2,utl_raw.cast_to_raw('{"class":[{"class_type":"ownership","values":[{"nm":"id","value":"18"}]},{"class_type":"country","values":[{"nm":"id","value":"11"}]},{"class_type":"features","values":[{"nm":"id","value":"7"},{"nm":"id","value":"640"}]}]}')
  3  )
  4  /
1 row created.
Execution Plan
----------------------------------------------------------
--------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | 1 | 100 | 1 (0)| 00:00:01 |
| 1 | LOAD TABLE CONVENTIONAL | JSON_DATA | | | | |
--------------------------------------------------------------------------------------
SQL> commit
2 /
Commit complete.
SQL> ALTER TABLE json_data
2 ADD CONSTRAINT check_is_json
3 CHECK (data_val IS JSON (STRICT))
4 /
Table altered.
SQL> CREATE SEARCH INDEX json_SEARCH_idx ON json_data (data_val) for JSON
2 /
Index created.
SQL> set autotrace on explain
SQL> --
SQL> set lines 256 trimspool on pages 50
SQL> --
SQL> select ID, json_query(data_val, '$' PRETTY)
2 from JSON_DATA
3 /
ID
----------
JSON_QUERY(DATA_VAL,'$'PRETTY)
------------------------------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------------------------------
----------------
1
{
"class" :
[
{
"class_type" : "ownership",
"values" :
[
{
"nm" : "id",
"value" : "1"
}
]
},
{
"class_type" : "country",
"values" :
[
{
"nm" : "id",
"value" : "640"
}
]
},
{
"class_type" : "features",
"values" :
[
{
"nm" : "id",
"value" : "15"
},
{
"nm" : "id",
"value" : "20"
}
]
}
]
}
2
{
"class" :
[
{
"class_type" : "ownership",
"values" :
[
{
"nm" : "id",
"value" : "18"
}
]
},
{
"class_type" : "country",
"values" :
[
{
"nm" : "id",
"value" : "11"
}
]
},
{
"class_type" : "features",
"values" :
[
{
"nm" : "id",
"value" : "7"
},
{
"nm" : "id",
"value" : "640"
}
]
}
]
}
Execution Plan
----------------------------------------------------------
Plan hash value: 3213740116
-------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 2 | 4030 | 3 (0)| 00:00:01 |
| 1 | TABLE ACCESS FULL| JSON_DATA | 2 | 4030 | 3 (0)| 00:00:01 |
-------------------------------------------------------------------------------
Note
-----
- dynamic statistics used: dynamic sampling (level=2)
SQL> select ID, to_clob(data_val)
2 from json_data
3 where JSON_EXISTS(data_val,'$?(exists(#.class?(#.values.value == $VALUE && #.class_type == $TYPE)))' passing '640'
as "VALUE", 'country' as "TYPE")
4 /
ID TO_CLOB(DATA_VAL)
---------- --------------------------------------------------------------------------------
1 {"class":[{"class_type":"ownership","values":[{"nm":"id","value":"1"}]},{"class_
type":"country","values":[{"nm":"id","value":"640"}]},{"class_type":"features","
values":[{"nm":"id","value":"15"},{"nm":"id","value":"20"}]}]}
Execution Plan
----------------------------------------------------------
Plan hash value: 3248304200
-----------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-----------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 2027 | 4 (0)| 00:00:01 |
|* 1 | TABLE ACCESS BY INDEX ROWID| JSON_DATA | 1 | 2027 | 4 (0)| 00:00:01 |
|* 2 | DOMAIN INDEX | JSON_SEARCH_IDX | | | 4 (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter(JSON_EXISTS2("DATA_VAL" FORMAT JSON , '$?(exists(#.class?(#.values.value
== $VALUE && #.class_type == $TYPE)))' PASSING '640' AS "VALUE" , 'country' AS "TYPE"
FALSE ON ERROR)=1)
2 - access("CTXSYS"."CONTAINS"("JSON_DATA"."DATA_VAL",'{640} INPATH
(/class/values/value) and {country} INPATH (/class/class_type)')>0)
Note
-----
- dynamic statistics used: dynamic sampling (level=2)
SQL> select ID, TO_CLOB(DATA_VAL)
2 from JSON_DATA d
3 where exists (
4 select 1
5 from JSON_TABLE(
6 data_val,
7 '$.class'
8 columns (
9 CLASS_TYPE VARCHAR2(32) PATH '$.class_type',
10 NESTED PATH '$.values.value'
11 columns (
12 "VALUE" VARCHAR2(32) path '$'
13 )
14 )
15 )
16 where CLASS_TYPE = 'country' and "VALUE" = '640'
17 )
18 /
ID TO_CLOB(DATA_VAL)
---------- --------------------------------------------------------------------------------
1 {"class":[{"class_type":"ownership","values":[{"nm":"id","value":"1"}]},{"class_
type":"country","values":[{"nm":"id","value":"640"}]},{"class_type":"features","
values":[{"nm":"id","value":"15"},{"nm":"id","value":"20"}]}]}
Execution Plan
----------------------------------------------------------
Plan hash value: 1621266031
-------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 2027 | 32 (0)| 00:00:01 |
|* 1 | FILTER | | | | | |
| 2 | TABLE ACCESS FULL | JSON_DATA | 2 | 4054 | 3 (0)| 00:00:01 |
|* 3 | FILTER | | | | | |
|* 4 | JSONTABLE EVALUATION | | | | | |
-------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter( EXISTS (SELECT 0 FROM JSON_TABLE( :B1, '$.class' COLUMNS(
"CLASS_TYPE" VARCHAR2(32) PATH '$.class_type' NULL ON ERROR , NESTED PATH
'$.values.value' COLUMNS( "VALUE" VARCHAR2(32) PATH '$' NULL ON ERROR ) ) )
"P" WHERE "CTXSYS"."CONTAINS"(:B2,'({country} INPATH (/class/class_type))
and ({640} INPATH (/class/values/value))')>0 AND "P"."CLASS_TYPE"='country'
AND "P"."VALUE"='640'))
3 - filter("CTXSYS"."CONTAINS"(:B1,'({country} INPATH
(/class/class_type)) and ({640} INPATH (/class/values/value))')>0)
4 - filter("P"."CLASS_TYPE"='country' AND "P"."VALUE"='640')
Note
-----
- dynamic statistics used: dynamic sampling (level=2)
SQL>