I am getting data from a MySQL database in an Express REST API app, using Sequelize as the ORM.
When it comes to a BIT(1) value from MySQL, Sequelize returns an instance of a Buffer object:
{
"id": 4,
"ProductPrice": 12.25,
"ProductQuantityOnHand": 0,
"ProductCode": "P486",
"ProductName": "FirstProduct",
"ProductDescription": null,
"ProductActive": {
"type": "Buffer",
"data": [
1
]
},
"createdAt": "2019-02-02T11:27:00.000Z",
"updatedAt": "2019-02-02T11:27:00.000Z"
}
Here ProductActive is a BIT(1) column, and Sequelize returns it as an object.
How can I get a boolean value instead of an object? Like this:
{
"id": 4,
"ProductPrice": 12.25,
"ProductQuantityOnHand": 0,
"ProductCode": "P486",
"ProductName": "FirstProduct",
"ProductDescription": null,
"ProductActive": true,
"createdAt": "2019-02-02T11:27:00.000Z",
"updatedAt": "2019-02-02T11:27:00.000Z"
}
I might suggest that you just use an INT column in your MySQL table. Assuming you only store the values 0 and 1, those same values should show up in your ORM/application layer.
Since the value 0 is "falsy" in JavaScript, it logically behaves the same way as false, and vice versa for 1, which is "truthy."
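If changing the column type isn't an option, you can coerce the value on the JavaScript side instead. This is an illustrative sketch (the helper name is mine, not a Sequelize API): a MySQL BIT(1) arrives as a one-byte Buffer whose first byte is 0 or 1.

```javascript
// Hypothetical helper: coerce the Buffer that Sequelize returns for a
// MySQL BIT(1) column into a plain boolean. Non-Buffer values fall
// through to ordinary boolean coercion.
function bitToBool(value) {
  if (Buffer.isBuffer(value)) {
    return value[0] === 1;
  }
  return Boolean(value);
}

console.log(bitToBool(Buffer.from([1]))); // true
console.log(bitToBool(Buffer.from([0]))); // false
```

You could call this from a custom getter on the model attribute so the conversion happens automatically whenever the instance is serialized.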
Related
I have this SQL query:
select question.*,
question_option.id
from question
left join question_option on question_option.question_id = question.id;
How do I map the result to the entity so that it looks like the expected result below?
Can anyone give sample code for getting a result like this?
{
"id": 2655,
"type": "MCQSingleCorrect",
"difficultyLevel": "Advanced",
"question": "Which country are you from?",
"answer": null,
"marks": 1.5,
"negativeMarks": 0.5,
"hint": null,
"explanation": null,
"booleanAnswer": null,
"passage": null,
"isPassageQuestion": null,
"audioFile": null,
"videoFile": null,
"questionFiles": [],
"tags": [],
"updatedAt": "2021-12-21T11:57:03.229136Z",
"createdAt": "2021-12-21T11:57:03.229098Z",
"questionOptions": [
{
"id": 2719,
"option": "India",
"index": 1,
"correct": false,
"blank": null
},
{
"id": 2720,
"option": "Newzealand",
"index": 1,
"correct": false,
"blank": null
},
{
"id": 2721,
"option": "England",
"index": 1,
"correct": true,
"blank": null
},
{
"id": 2722,
"option": "Australia",
"index": 1,
"correct": false,
"blank": null
}
]}
I'm answering from the perspective of our comments discussion, where I suggested you don't need JPA in the middle, because you can do every mapping/projection with jOOQ directly. In this case, if you're targeting a JSON client, why not just use SQL/JSON, for example? Rather than joining and mapping, you nest your collection like this:
ctx.select(jsonObject(
key("id", QUESTION.ID),
key("type", QUESTION.TYPE),
..
key("questionOptions", jsonArrayAgg(jsonObject(
key("id", QUESTION_OPTION.ID),
key("option", QUESTION_OPTION.OPTION),
..
)))
))
.from(QUESTION)
.leftJoin(QUESTION_OPTION)
.on(QUESTION_OPTION.QUESTION_ID.eq(QUESTION.ID))
// Assuming you have a primary key here.
// Otherwise, add also the other QUESTION columns to the GROUP BY clause
.groupBy(QUESTION.ID)
.fetch();
This will produce a NULL JSON array if a question doesn't have any options. You can coalesce() it to an empty array if needed. There are other ways to achieve the same thing; you could also use MULTISET if you don't actually need JSON, but just a hierarchy of Java objects.
As a rule of thumb, you hardly ever need JPA in your code when you're using jOOQ, except if you really rely on JPA's object graph persistence features.
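For comparison, here is a hand-written sketch of roughly the SQL/JSON the jOOQ query above corresponds to. PostgreSQL syntax is assumed (jOOQ emits the dialect-appropriate functions for you); table and column names are taken from the question, and the COALESCE shows the empty-array fallback mentioned above:

```sql
SELECT json_build_object(
         'id', q.id,
         'type', q.type,
         'questionOptions', COALESCE(
           -- FILTER drops the NULL row produced by the left join
           -- when a question has no options
           json_agg(json_build_object('id', o.id, 'option', o.option))
             FILTER (WHERE o.id IS NOT NULL),
           '[]'::json
         )
       ) AS question
FROM question q
LEFT JOIN question_option o ON o.question_id = q.id
GROUP BY q.id, q.type;
```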
You can write the query with jOOQ and then do this:
Query nativeQuery = em.createNativeQuery(query.getSQL());
nativeQuery.getResultList(); // or nativeQuery.getSingleResult(), depending on what you need
Read more here:
https://www.jooq.org/doc/3.15/manual/sql-execution/alternative-execution-models/using-jooq-with-jpa/using-jooq-with-jpa-native/
JSON can be fetched directly using SQL (and also jOOQ). Here are some examples:
https://72.services/use-the-power-of-your-database-xml-and-json/
I want to achieve the following JSON transformation using the Jolt processor in NiFi.
Focusing on the values field: in the first input JSON (id 900551), values is populated as follows.
input JSON
{
"id": 900551,
"internal_name": [],
"values": [
{
"id": 1430156,
"form_field_id": 900551,
"pos": 0,
"weight": null,
"category": null,
"created_at": "2020-10-15 12:55:02",
"updated_at": "2020-11-27 10:45:09",
"deleted_at": null,
"settings": {
"image": "myimage.png"
"fix": false,
"bold": false,
"exclusive": false
},
"internal_value": "494699DV7271000,6343060SX0W1000,619740BWR0W1000",
"css_class": null,
"value": "DIFFERENCE",
"settings_lang": {},
"value_html": ""
}
]
}
In the second input JSON file to parse, values is null:
{
"id": 900552,
"internal_name": [],
"values": []
}
I would like to convert the null values to an empty array in my transformation.
Is there a way to do this using existing Jolt operations?
Thanks.
The default operation is what you are looking for. From the Jolt docs:
Defaultr walks the spec and asks "Does this exist in the data? If not, add it."
In our case: if the value for the "values" key is null, put an empty array instead.
Here is the spec:
[
{
"operation": "default",
"spec": {
"values": []
}
}
]
Tested with https://jolt-demo.appspot.com/
Edit, answering the question from the comment: maybe this workaround will work for you.
I am trying to create a SQL query to retrieve DNS answer information so that I can visualize it in Grafana with the aid of TimescaleDB. Right now, I am struggling to get Postgres to query more than one element at a time. The structure of the JSON I am trying to query looks like this:
{
"Z": 0,
"AA": 0,
"ID": 56559,
"QR": 1,
"RA": 1,
"RD": 1,
"TC": 0,
"RCode": 0,
"OpCode": 0,
"answer": [
{
"ttl": 19046,
"name": "i.stack.imgur.com",
"type": 5,
"class": 1,
"rdata": "i.stack.imgur.com.cdn.cloudflare.net"
},
{
"ttl": 220,
"name": "i.stack.imgur.com.cdn.cloudflare.net",
"type": 1,
"class": 1,
"rdata": "104.16.30.34"
},
{
"ttl": 220,
"name": "i.stack.imgur.com.cdn.cloudflare.net",
"type": 1,
"class": 1,
"rdata": "104.16.31.34"
},
{
"ttl": 220,
"name": "i.stack.imgur.com.cdn.cloudflare.net",
"type": 1,
"class": 1,
"rdata": "104.16.0.35"
}
],
"ANCount": 13,
"ARCount": 0,
"QDCount": 1,
"question": [
{
"name": "i.stack.imgur.com",
"qtype": 1,
"qclass": 1
}
]
}
There can be any number of answers, including zero, so I would like to figure out a way to query all of them. For example, I am trying to retrieve the ttl field from every entry in answer; I can query a specific index, but I have trouble querying all occurrences.
This works for querying a single index:
SELECT (data->'answer'->>0)::json->'ttl'
FROM dns;
When I looked around, I found the following as a potential solution for querying all indices within the array, but it did not work and told me "cannot extract elements from a scalar":
SELECT answer->>'ttl' ttl
FROM dns, jsonb_array_elements(data->'answer') answer, jsonb_array_elements(answer->'ttl') ttl
Using jsonb_array_elements() will give you a row for every object in the answer array. You can then dereference that object:
select a.obj->>'ttl' as ttl, a.obj->>'name' as name, a.obj->>'rdata' as rdata
from dns d
cross join lateral jsonb_array_elements(data->'answer') as a(obj)
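The "cannot extract elements from a scalar" error from the earlier attempt usually means some rows hold a non-array value (for example a JSON null) under answer. If your data is like that, a guard on the JSON type keeps those rows out; this sketch reuses the dns table and data column from the question:

```sql
SELECT a.obj->>'ttl' AS ttl
FROM dns d
CROSS JOIN LATERAL jsonb_array_elements(d.data->'answer') AS a(obj)
-- jsonb_array_elements() raises an error on scalars, so only
-- expand rows whose answer field is actually an array
WHERE jsonb_typeof(d.data->'answer') = 'array';
```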
I am working on a Couchbase Lite driven app and trying to do a live query based on this help from Couchbase Mobile.
While it works, I am confused by the number of documents reported as changed. This is all on my laptop: I uploaded a JSON file to Couchbase Server via cbimport, and Sync Gateway then synced all the data successfully to my Android app.
Now I changed one document in Couchbase Server, but all 27 documents are returned as changed in the live query. I was expecting only the document I changed to be returned as changed since the last sync.
Looking at the meta information of each document, the document I changed has the following:
{
"meta": {
"id": "Group_2404_159_5053",
"rev": "15-16148876737400000000000002000006",
"expiration": 0,
"flags": 33554438,
"type": "json"
},
"xattrs": {
"_sync": {
"rev": "7-ad618346393fa2490359555e9c889876",
"sequence": 2951,
"recent_sequences": [
2910,
2946,
2947,
2948,
2949,
2950,
2951
],
"history": {
"revs": [
"3-89bb125a9bb1f5e8108a6570ffb31821",
"4-71480618242841447402418fa1831968",
"5-4c4d990af34fa3f53237c3faafa85843",
"1-4fbb4708f69d8a6cda4f9c38a1aa9570",
"6-f43462023f82a12170f31aed879aecb2",
"7-ad618346393fa2490359555e9c889876",
"2-cf80ca212a3279e4fc01ef6ab6084bc9"
],
"parents": [
6,
0,
1,
-1,
2,
4,
3
],
"channels": [
null,
null,
null,
null,
null,
null,
null
]
},
"cas": "0x0000747376881416",
"value_crc32c": "0x8c664755",
"time_saved": "2020-06-01T14:23:30.669338-07:00"
}
}
}
while the remaining 26 documents look similar to this one:
{
"meta": {
"id": "Group_2404_159_5087",
"rev": "2-161344efd90c00000000000002000006",
"expiration": 0,
"flags": 33554438,
"type": "json"
},
"xattrs": {
"_sync": {
"rev": "1-577011ccb4ce61c69507ba44985ca038",
"sequence": 2934,
"recent_sequences": [
2934
],
"history": {
"revs": [
"1-577011ccb4ce61c69507ba44985ca038"
],
"parents": [
-1
],
"channels": [
null
]
},
"cas": "0x00000cd9ef441316",
"value_crc32c": "0xc37bb792",
"time_saved": "2020-05-28T11:34:50.3200745-07:00"
}
}
}
Is that the expected behavior or there is something I can do about it?
That behavior is expected. The live query re-runs every time there is a database change that impacts the results of the query. In your case, since it's a query that fetches ALL documents in your database, it re-runs when any document in the database changes, and it returns all documents (which is what the query asks for).
Live queries are best suited when you have a filter predicate on your query. For instance, if the app wants to be notified when the status field changes in documents of type "foo", then you will only be notified when the status field changes in a document of type "foo".
If you just care about whether any document in your database has changed, you should use a Database Change Listener instead.
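A Database Change Listener might look like the following fragment. This is a sketch assuming the Couchbase Lite 2.x Android Java API (Database.addChangeListener and DatabaseChange.getDocumentIDs); the change object reports only the IDs of the documents that actually changed, rather than re-running a query over everything:

```java
// Sketch: react only to the documents that changed, instead of
// receiving the full result set of a live query.
ListenerToken token = database.addChangeListener(change -> {
    for (String id : change.getDocumentIDs()) {
        Log.i("app", "changed: " + id);
    }
});

// Later, when the listener is no longer needed:
// database.removeChangeListener(token);
```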
I had an issue today with FileMaker: how to get the first element out of a JSON result without knowing the key.
Example $json result from an API call:
{
"26298070": {
"task_id": "26298070",
"parent_id": "0",
"name": "DEPOT-0045 Research ODBC Model Extraction via Django To cut down on development time from Filemaker to Postgres",
"external_task_id": "32c8fd51-2066-42b9-b88b-8a2275fafc3f",
"external_parent_id": "64e7c829-d88e-48ae-9ba4-bb7a3871a7ce",
"level": "1",
"add_date": "2018-06-04 21:45:16",
"archived": "0",
"color": "#34C644",
"tags": "DEPOT-0045",
"budgeted": "1",
"checked_date": null,
"root_group_id": "91456",
"assigned_to": null,
"assigned_by": null,
"due_date": null,
"note": "",
"context": null,
"folder": null,
"repeat": null,
"billable": "0",
"budget_unit": "hours",
"public_hash": null,
"modify_time": null
}
}
I tried JSONGetElement( $json, "") and got the original json.
I tried JSONGetElement( $json, ".") and got the original json.
I tried JSONGetElement( $json, 1 ) and got nothing.
How do you get the first element out of a JSON String without knowing the name of the element in FileMaker 16 or 17?
Try this for the root element:
JSONListKeys ( $json ; "" )
Result: 26298070
Once you have the root key, you can get the child keys.
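Putting the two functions together, a sketch in FileMaker calculation syntax that fetches the first top-level object without hard-coding its key (this combined expression is my suggestion, not from the original answer): JSONListKeys returns a return-delimited list, so GetValue pulls the first key from it.

```
JSONGetElement ( $json ; GetValue ( JSONListKeys ( $json ; "" ) ; 1 ) )
```

With the example $json above, this returns the inner task object keyed by "26298070".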
I remembered that FileMaker has a function to extract words from text, so I thought I'd see what happened if I extracted the first word as a key.
I tried
JSONGetElement ( $json ; MiddleWords ( $json ; 1 ; 1 ) )
and got the result I was looking for:
{
"add_date": "2018-06-04 21:45:16",
"archived": "0",
"assigned_by": null,
"assigned_to": null,
"billable": "0",
"budget_unit": "hours",
"budgeted": "1",
"checked_date": null,
"color": "#34C644",
"context": null,
"due_date": null,
"external_parent_id": "64e7c829-d88e-48ae-9ba4-bb7a3871a7ce",
"external_task_id": "32c8fd51-2066-42b9-b88b-8a2275fafc3f",
"folder": null,
"level": "1",
"modify_time": null,
"name": "DEPOT-0045 Research ODBC Model Extraction via Django To cut down on development time from Filemaker to Postgres",
"note": "",
"parent_id": "0",
"public_hash": null,
"repeat": null,
"root_group_id": "91456",
"tags": "DEPOT-0045",
"task_id": "26298070"
}
which makes it easy to parse simple JSON schemas that use attributes for keys.