I have an AWS RDS Aurora MySQL connection. These are the steps I use to create the table, insert data, and fetch it:
create table:
CREATE TABLE user (
user_id BINARY(16) PRIMARY KEY NOT NULL,
first_name VARCHAR(25) NOT NULL,
last_name VARCHAR(25) NOT NULL
);
insert into table:
INSERT INTO user VALUES (
UNHEX(REPLACE("9a9cbfga-7426-471a-af27-31be7tb53ii2", "-","")),
"Jon",
"Smith"
);
fetch from table:
SELECT * FROM user WHERE user_id = UNHEX(REPLACE("9a9cbfga-7426-471a-af27-31be7tb53ii2", "-",""));
The handler function that fetches the data goes like this:
// Assumed setup: the Aurora Data API client from the AWS SDK v2.
const AWS = require('aws-sdk');
const rdsDataService = new AWS.RDSDataService();

module.exports.fetchUser = async (event, context, callback) => {
const params = {
resourceArn: 'arn:aws:rds:*********:cluster:************',
secretArn: 'arn:aws:secretsmanager:*************',
sql: 'SELECT * FROM user WHERE user_id = UNHEX(REPLACE("9a9cbfga-7426-471a-af27-31be7tb53ii2", "-",""))',
database: 'dev_db1',
continueAfterTimeout: true,
includeResultMetadata: true
}
try {
const db_res = await rdsDataService.executeStatement(params).promise();
const response = {
body: {
message: 'Data fetched',
data: JSON.stringify(db_res, null, 2)
}
};
callback(null, response);
} catch (error) {
callback(null, error)
}
};
Hitting the HTTP Data API endpoint for this function gives me:
{
"body": {
"message": "Data fetched",
"data": [
[
{
"blobValue": {
"type": "Buffer",
"data": [
202,
136,
22,
206,
45,
214,
75,
233,
156,
28,
163,
223,
186,
115,
89,
107
]
}
},
{
"stringValue": "Jon"
},
{
"stringValue": "Smith"
}
]
]
}
}
All I need is to fetch the complete details of all the users in the user table (for now I've inserted just one record for ease) and show the actual user_id instead of the blobValue. I'm not sure how to convert this binary UUID into a readable form like 9a9cbfga7426471aaf2731be7tb53ii2 or 9a9cbfga-7426-471a-af27-31be7tb53ii2 in the response.
I'm a bit new to handling queries in MySQL & Aurora; any help to get me through this will be really appreciated. Thanks in advance.
MySQL 8 has functions for that: BIN_TO_UUID (https://dev.mysql.com/doc/refman/8.0/en/miscellaneous-functions.html#function_bin-to-uuid)
and UUID_TO_BIN (https://dev.mysql.com/doc/refman/8.0/en/miscellaneous-functions.html#function_uuid-to-bin).
But if those aren't available to you, you have to do it manually.
Since I can't insert your data anywhere (the example UUID contains non-hex characters), the following sample will do:
CREATE TABLE user (
user_id BINARY(16) PRIMARY KEY NOT NULL,
first_name VARCHAR(25) NOT NULL,
last_name VARCHAR(25) NOT NULL
);
INSERT INTO user VALUES (UNHEX(REPLACE("3f06af63-a93c-11e4-9797-00505690773f", "-","")),
"Jon",
"Smith");
SELECT * FROM user WHERE user_id = UNHEX(REPLACE("3f06af63-a93c-11e4-9797-00505690773f", "-", ""));
user_id | first_name | last_name
:------ | :--------- | :--------
null | Jon | Smith
SELECT CONCAT(SUBSTRING(HEX(user_id),1,8),'-',
SUBSTRING(HEX(user_id),9,4),'-',
SUBSTRING(HEX(user_id),13,4),'-',
SUBSTRING(HEX(user_id),17,4),'-',
SUBSTRING(HEX(user_id),21,12)) user_id, first_name, last_name FROM user;
user_id | first_name | last_name
:----------------------------------- | :--------- | :--------
3F06AF63-A93C-11E4-9797-00505690773F | Jon | Smith
db<>fiddle here
But as you need the text form anyway, you can keep the original string as well, so you don't need to HEX/UNHEX the data again:
CREATE TABLE user (
user_id BINARY(16) PRIMARY KEY NOT NULL,
first_name VARCHAR(25) NOT NULL,
last_name VARCHAR(25) NOT NULL,
userid_txt VARCHAR(36)
);
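If you would rather do the conversion in Node instead of SQL: the Data API returns the BINARY(16) column as a blobValue Buffer, which you can format in the handler. A minimal sketch (the record/field positions are assumed from the response shown in the question):

// Format a 16-byte Buffer as the dashed, readable UUID form.
function bufferToUuid(buf) {
  const hex = buf.toString('hex'); // 32 hex characters
  return [
    hex.substring(0, 8),
    hex.substring(8, 12),
    hex.substring(12, 16),
    hex.substring(16, 20),
    hex.substring(20, 32)
  ].join('-');
}

// e.g. for the first record of the executeStatement result:
// const uuid = bufferToUuid(db_res.records[0][0].blobValue);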
I have a Prisma schema like the one below
model t1 {
id Int @id @default(autoincrement())
title String
previousNodeId Int? @unique
}
and I want to update the title of the rows that have null in previousNodeId.
I have written this query:
await prisma.t1.update({
where: { previousNodeId: null },
data: {
title:"update title"
}
});
but I am not able to update. I'm getting this error:
Type 'null' is not assignable to type 'number | undefined'.
The expected type comes from property 'previousNodeId' which is declared here on type 'T1WhereUniqueInput'
I am able to get all rows using findMany.
await prisma.t1.findMany({
where: { previousNodeId: null },
});
Result of findMany
[
{
id: 3,
title: 'demo t1 1',
previousNodeId: null
}
]
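The error comes from update accepting only unique fields in its where clause (that is the T1WhereUniqueInput in the message). A minimal sketch of the same change via updateMany, which accepts non-unique filters such as a null check:

// updateMany filters on arbitrary columns, unlike update, which needs a unique key
await prisma.t1.updateMany({
  where: { previousNodeId: null },
  data: { title: "update title" },
});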
I have a JSON structure stored in a MySQL table. Now, months later, I have a requirement to join on pieces of data found deep in the bowels of this JSON string.
{
"id": "000689ba-9891-4d82-ad13-a7b96dc08ec4",
"type": "emp",
"firstName": "Brooke",
"facilities": {
"0001": [{
"id": 1,
"code": "125",
"name": "DSL",
"type": "MGMTSupport",
"payRate": 18}],
"0002": [
{
"id": 1,
"code": "100",
"name": "Server",
"type": "FOH",
"payRate": 8
}, {
"id": 2,
"code": "320",
"name": "NCFOHTrainer",
"type": "NCHourlyTraining",
"payRate": 14.5
}]
},
"permissions": ["read:availability", "..."],
"primaryJobCode": "150",
"primaryPayRate": 9,
"primaryFacility": "0260"
}
The big question is: how do I shape this as a query in MySQL when the facilities do not follow a single key/value pattern? That is, the key of each entry is the facilityId, so I cannot use a fixed path like '$.0001', and the value is an array, so how do I path into that correctly?
select id as EmployeeId
, companyId as cpkEmployeeId
, json_table( `data`
, '$.facilities[*]' COLUMNS( facilityId VARCHAR(10) PATH '$.????'
, NESTED PATH '??? $[*] ???' COLUMNS ( code VARCHAR(10) PATH '$.code'
, payRate DECIMAL(8,4) PATH '$.payRate') facilities
from employee
;
Yea - the above does not work. Any help appreciated.
Desired output?
[Other columns from the table] plus facilityId, code & payRate.
A single row in the native table could produce something like:
id | companyId | facilityId | code | payRate
--------+-----------+------------+------+---------
1 | 324337 | 0001 | 125 | 18.0000
1 | 324337 | 0002 | 100 | 8.0000
1 | 324337 | 0002 | 320 | 14.5000
-- Pass 1: JSON_KEYS() returns the dynamic facility ids (e.g. ["0001","0002"]),
-- and JSON_TABLE unnests that array into one row per facility id, from which
-- a concrete path such as $.facilities."0001" is built.
WITH
cte AS (
    SELECT test.id,
           test.value,
           jsontable.facilityId,
           CONCAT('$.facilities."', jsontable.facilityId, '"') path
    FROM test
    CROSS JOIN JSON_TABLE(JSON_KEYS(test.value, '$.facilities'),
                          '$[*]' COLUMNS (facilityId CHAR(4) PATH '$')) jsontable
)
-- Pass 2: extract the array behind each facility id and unnest it into
-- one row per job code entry.
SELECT cte.id,
       cte.facilityId,
       jsontable.code,
       jsontable.payRate
FROM cte
CROSS JOIN JSON_TABLE(JSON_EXTRACT(cte.value, cte.path),
                      '$[*]' COLUMNS (code CHAR(3) PATH '$.code',
                                      payRate DECIMAL(6, 4) PATH '$.payRate')) jsontable
https://dbfiddle.uk/?rdbms=mysql_8.0&fiddle=18ab7b6f181b61fb53f88a6de6e049be
I want to retrieve data from my MySQL database via an API, querying by longitude and latitude.
If I query through the placeName column, it returns all the data.
If I query through the id column, it returns all the data.
But using both longitude and latitude columns, an empty array is returned. This is the response:
{
"status": 200,
"data": [],
"message": "Longitude and latitude retrieved successfully"
}
This is the code:
let sql = "select * from sampleData where lng=-73.981750 and lat=40.781040";
connection.query(sql, (err, data, fields) => {
  if (err) throw err;
  res.send({
    status: 200,
    data,
    message: "Longitude and latitude retrieved successfully"
  });
});
UPDATE:
This is the table and query:
CREATE TABLE IF NOT EXISTS `sampleData` (
`id` int(6) unsigned NOT NULL,
`placeName` VARCHAR(200) NOT NULL,
`lng` DECIMAL(11,6) NOT NULL,
`lat` DECIMAL(11,6) NOT NULL,
PRIMARY KEY (`id`)
) DEFAULT CHARSET=utf8;
INSERT INTO `sampleData` (`id`, `placeName`, `lng`, `lat`) VALUES
('1','Citarella Gourmet Market - Upper West Side', ' -73.981750', '40.781040'),
('2','Bloomingdale The Outlet Store', '-73.982090 ', '40.779130');
select * from `sampleData` where lng="-73.981750" and lat="40.781040";
http://sqlfiddle.com/#!9/445322/2/0
This is what I want for the output when Express returns the MySQL response to the client:
{
"status": 200,
"data": [
{
"id": 1,
"placeName": "Citarella Gourmet Market - Upper West Side",
"lng": "-73.981750",
"lat": "40.781040"
}
],
"message": "Longitude and latitude retrieved successfully"
}
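For what it's worth, a sketch (not a confirmed fix) that mirrors the quoted comparison from the fiddle inside the handler, binding the coordinates as string parameters through the driver's placeholders:

// "?" placeholders are escaped by the mysql driver; the values are passed
// as strings, matching the quoted query that works in the fiddle.
let sql = "select * from sampleData where lng = ? and lat = ?";
connection.query(sql, ["-73.981750", "40.781040"], (err, data, fields) => {
  if (err) throw err;
  res.send({
    status: 200,
    data,
    message: "Longitude and latitude retrieved successfully"
  });
});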
Hello, I want to insert data with bulkCreate, for example:
[
{
"typeId": 5,
"devEui": "0094E796CBFCFEF9",
"application_name": "Pressure No.10",
"createdAt": "2020-02-05T08:07:17.000Z",
"updatedAt": "2020-02-05T08:07:17.000Z"
}
]
and my Sequelize code:
return models.sequelize.transaction(t=>{
return models.iot_nodes.bulkCreate(data,{
updateOnDuplicate: ["devEui",]
})
})
When I run this code the first time, the data is inserted into the DB.
My problem is that when I run it again with the same data, it does not update; it just inserts a new row.
I am using a MySQL DB (Laragon).
log:
Executing (f202b84c-c5d8-4c67-954c-e22f96fb93d8): START TRANSACTION;
Executing (default): INSERT INTO `iot_nodes` (`id`,`typeId`,`devEui`,`application_name`,`createdAt`,`updatedAt`) VALUES (NULL,5,'0094E796CBFCFEF9','Pressure No.10','2020-02-05 08:07:17','2020-02-05 08:07:17') ON DUPLICATE KEY UPDATE `id`=VALUES(`id`),`devEui`=VALUES(`devEui`);
Executing (f202b84c-c5d8-4c67-954c-e22f96fb93d8): COMMIT;
This seems to fit the scenario, based on the information given: you want to update the devEui field. The updateOnDuplicate option is documented as:
Fields to update if row key already exists (on duplicate key update)?
So "the row key already exists" means the table must have a unique key (or a primary key) whose value is duplicated when you insert the data.
E.g.
import { sequelize } from '../../db';
import { Model, DataTypes } from 'sequelize';
class IotNode extends Model {}
IotNode.init(
{
typeId: {
type: DataTypes.INTEGER,
unique: true,
},
devEui: DataTypes.STRING,
application_name: DataTypes.STRING,
},
{ sequelize, modelName: 'iot_nodes' },
);
(async function test() {
try {
await sequelize.sync({ force: true });
const datas = [
{
typeId: 5,
devEui: '0094E796CBFCFEF9',
application_name: 'Pressure No.10',
createdAt: '2020-02-05T08:07:17.000Z',
updatedAt: '2020-02-05T08:07:17.000Z',
},
];
await IotNode.bulkCreate(datas, { updateOnDuplicate: ['devEui'] });
await IotNode.bulkCreate(datas, { updateOnDuplicate: ['devEui'] });
} catch (error) {
console.log(error);
} finally {
await sequelize.close();
}
})();
As you can see, I make the typeId unique and execute IotNode.bulkCreate twice. The generated SQL logs:
Executing (default): INSERT INTO "iot_nodes" ("id","typeId","devEui","application_name") VALUES (DEFAULT,5,'0094E796CBFCFEF9','Pressure No.10') ON CONFLICT ("typeId") DO UPDATE SET "devEui"=EXCLUDED."devEui" RETURNING *;
Executing (default): INSERT INTO "iot_nodes" ("id","typeId","devEui","application_name") VALUES (DEFAULT,5,'0094E796CBFCFEF9','Pressure No.10') ON CONFLICT ("typeId") DO UPDATE SET "devEui"=EXCLUDED."devEui" RETURNING *;
Sequelize uses the unique typeId field as the duplicate key. Check the rows in the database:
=# select * from iot_nodes;
id | typeId | devEui | application_name
----+--------+------------------+------------------
1 | 5 | 0094E796CBFCFEF9 | Pressure No.10
(1 row)
The data row is upserted as expected.
If we remove unique: true from the typeId field, Sequelize will use the primary key as the duplicate key. Take a look at the generated SQL below and the resulting rows in the database:
Executing (default): INSERT INTO "iot_nodes" ("id","typeId","devEui","application_name") VALUES (DEFAULT,5,'0094E796CBFCFEF9','Pressure No.10') ON CONFLICT ("id") DO UPDATE SET "devEui"=EXCLUDED."devEui" RETURNING *;
Executing (default): INSERT INTO "iot_nodes" ("id","typeId","devEui","application_name") VALUES (DEFAULT,5,'0094E796CBFCFEF9','Pressure No.10') ON CONFLICT ("id") DO UPDATE SET "devEui"=EXCLUDED."devEui" RETURNING *;
=# select * from iot_nodes;
id | typeId | devEui | application_name
----+--------+------------------+------------------
1 | 5 | 0094E796CBFCFEF9 | Pressure No.10
2 | 5 | 0094E796CBFCFEF9 | Pressure No.10
(2 rows)
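Note the SQL above is from Postgres; on MySQL the same option produces the ON DUPLICATE KEY UPDATE statement seen in the question's log, and it likewise only updates when a unique index matches. A minimal sketch of the question's model with devEui marked unique (assuming devEui is what should identify a node):

// With a unique index on devEui, inserting the same devEui again fires
// ON DUPLICATE KEY UPDATE instead of creating a new row.
const IotNode = sequelize.define('iot_nodes', {
  typeId: DataTypes.INTEGER,
  devEui: { type: DataTypes.STRING, unique: true },
  application_name: DataTypes.STRING,
});

// List the fields to overwrite when the duplicate key fires:
await IotNode.bulkCreate(data, { updateOnDuplicate: ['application_name', 'updatedAt'] });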
I am trying to store nested JSON object using composite tables in Cassandra and the nodejs bindings.
Let's say my data looks like this (friends and foes actually have more complex data structures than a simple map):
{
id: 001,
name: 'luke',
lastname: 'skywalker',
friends: [
{ id: 002,
name: 'han',
lastname: 'solo' },
{ id: 003,
name: 'obiwan',
lastname: 'kenobi' },
{ id: 004,
name: 'leila',
lastname: 'skywalker' }
],
foes: [
{ id: 005,
name: 'dark',
lastname: 'vador' },
{ id: 006,
name: 'boba',
lastname: 'feet' }
]
}
From what I understood from composite keys (here: https://pkghosh.wordpress.com/2013/07/14/storing-nested-objects-in-cassandra-composite_columns/), I expected to store my data like this:
001 | luke | skywalker | friend:002:han:solo | friend:003:obiwan:kenobi | ... | foe:006:boba:feet
I created my table like this:
CREATE TABLE heroes (
id int,
name text,
lastname text,
friend_id int,
friend_name text,
friend_lastname text,
foe_id int,
foe_name text,
foe_lastname text,
PRIMARY KEY ((id, name, lastname), friend_id, foe_id)
);
And then, for each friend or foe, run:
client.execute(
'INSERT INTO heroes (id, name, lastname, friend_id, friend_name, friend_lastname) VALUES (?, ?, ?, ?, ?, ?)',
[001, 'luke', 'skywalker', 002, 'han', 'solo'],
function(err) {
//some code
}
)
Now, when running the query 'SELECT * FROM heroes WHERE id=001' I expected to get only one row, with all the friends and foes added as columns.
Instead I get as many rows as there are friends and foes. On top of that, a row for a friend looks like this:
{ rows:
[ { __columns: [Object],
id: 001,
name: 'luke',
lastname: 'skywalker',
friend_id: 002,
friend_name: 'han',
friend_lastname: 'solo',
foe_id: null,
foe_name: null,
foe_lastname: null } ],
meta:
// some data
}
I would have expected it not to have the foe_* fields at all.
Am I doing something wrong, or is this the way Cassandra handles composite items?
The result you are getting is to be expected, because you have included the friend and foe columns as part of the primary key. Hero has a one-to-many association with friend and foe. In the blog post of mine that you are referring to, I used only the primary entity's attributes as the primary key; all the child entities' attributes are mapped as dynamic columns.
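With CQL 3, one way to get the effect of those dynamic columns is to move each relation into its own row, clustered under the hero, so a single query returns them all. A sketch with illustrative table and column names (not code from the post), using the same client.execute style as the question:

// One row per related person; 'relation' distinguishes friends from foes.
// All rows for a hero live in the same partition, so one query fetches them.
client.execute(
  'CREATE TABLE hero_contacts (' +
  ' hero_id int, relation text, person_id int, name text, lastname text,' +
  ' PRIMARY KEY (hero_id, relation, person_id))',
  function(err) {
    //some code
  }
);

client.execute(
  'INSERT INTO hero_contacts (hero_id, relation, person_id, name, lastname) VALUES (?, ?, ?, ?, ?)',
  [1, 'friend', 2, 'han', 'solo'],
  function(err) {
    //some code
  }
);

// SELECT * FROM hero_contacts WHERE hero_id = 1 then returns friends and foes together.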