I have been struggling with the question below for days; I posted the same question earlier and didn't get any helpful feedback.
I'm using MySQL's built-in AES_ENCRYPT function to encrypt new and existing data.
https://dev.mysql.com/doc/refman/8.0/en/encryption-functions.html
SET @@SESSION.block_encryption_mode = 'aes-256-ecb';
INSERT INTO test_aes_ecb ( column_one, column_two )
values ( aes_encrypt('text','key'), aes_encrypt('text', 'key'));
I used the ECB cipher mode, so no IV is needed. The issue is that I can't decrypt the data from the Node.js side.
I'm using Sequelize and tried to fetch the data through the model and then decrypt it on the Node side.
I tried the libraries below:
"aes-ecb": "^1.3.15",
"aes256": "^1.1.0",
"crypto-js": "^4.1.1",
"mysql-aes": "0.0.1",
Below is a code snippet of the Sequelize call:
async function testmysqlAESModel () {
const users = await test.findAll();
console.log('users', users[0].column_one);
var decrypt = AES.decrypt( users[0].column_one, 'key' );
}
It returns buffer data and I couldn't decrypt it on the Node side. Can someone provide a proper example? I've been struggling for days.
EDIT
I inserted a record into MySQL with the query below.
SET @@SESSION.block_encryption_mode = 'aes-256-ecb';
INSERT INTO test_aes_ecb ( id, column_one, column_two )
VALUES (1, 2, AES_ENCRYPT('test', UNHEX('gVkYp3s6v9y$B&E)H#McQeThWmZq4t7w')));
In Node.js I called it like this:
testmysqlAESModel();
async function testmysqlAESModel () {
const users = await test.findAll();
console.log('users', users[0].column_one);
var decipher = crypto.createDecipheriv(algorithm, Buffer.from("gVkYp3s6v9y$B&E)H#McQeThWmZq4t7w", "hex"), "");
var encrypted = Buffer.from(users[0].column_one); // Note that this is what is stored inside your database, so that corresponds to users[0].column_one
var decrypted = decipher.update(encrypted, 'binary', 'utf8');
decrypted += decipher.final('utf8');
console.log(decrypted);
}
I'm getting the error below.
I used the link below to create a 256-bit key.
https://www.allkeysgenerator.com/Random/Security-Encryption-Key-Generator.aspx
I still couldn't fix it. Can you provide a sample project or any kind of supporting code snippet?
There are multiple issues here:
Ensure that your key has the correct length. AES is specified for certain key lengths (i.e. 128, 192 and 256 bit). If you use any other key length, your key will be padded (zero-extended) or truncated by the crypto library. This is a non-standard process, and different implementations handle it differently. To avoid this, use a key of the correct length and store it as hex instead of ASCII (to avoid charset issues).
Potential issues regarding password-to-key derivation. Some AES implementations use methods to derive keys from passwords/passphrases. Since you are using raw keys in MySQL, you do not want to derive anything but want to use raw keys in Node.js as well. This means that if you are using the native crypto module, you want to use createDecipheriv instead of createDecipher.
Caution: The AES mode you are using (ECB) is inherently insecure, because equal input leads to equal output. There are ways around that using other AES modes, such as CBC or GCM. You have been warned.
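As a side note on that caution, here is a minimal Node-only sketch of what CBC with a random IV could look like (my illustration, not part of the MySQL flow above; the worked example below sticks with ECB to match the question):
var crypto = require('crypto');

// 32-byte key for aes-256-cbc; in practice load it from configuration, not from source code
var key = crypto.randomBytes(32);

function encryptCbc(plaintext) {
    var iv = crypto.randomBytes(16); // fresh random IV per message; the IV is not secret
    var cipher = crypto.createCipheriv('aes-256-cbc', key, iv);
    var ciphertext = Buffer.concat([cipher.update(plaintext, 'utf8'), cipher.final()]);
    return { iv: iv, ciphertext: ciphertext }; // store the IV next to the ciphertext
}

function decryptCbc(iv, ciphertext) {
    var decipher = crypto.createDecipheriv('aes-256-cbc', key, iv);
    return Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString('utf8');
}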
Example:
MySQL SELECT AES_ENCRYPT('text',UNHEX('F3229A0B371ED2D9441B830D21A390C3')) as test; returns the buffer [145,108,16,83,247,49,165,147,71,115,72,63,152,29,218,246];
Decoding this in Node could look like this:
var crypto = require('crypto');
var algorithm = 'aes-128-ecb';
var decipher = crypto.createDecipheriv(algorithm, Buffer.from("F3229A0B371ED2D9441B830D21A390C3", "hex"), "");
var encrypted = Buffer.from([145,108,16,83,247,49,165,147,71,115,72,63,152,29,218,246]); // Note that this is what is stored inside your database, so that corresponds to users[0].column_one
var decrypted = decipher.update(encrypted, 'binary', 'utf8');
decrypted += decipher.final('utf8');
console.log(decrypted);
This prints text again.
Note that F3229A0B371ED2D9441B830D21A390C3 is the key in this example; you would obviously have to create your own. Just ensure that your key has the same length as the example and is a valid hex string.
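If you need a key of the correct length in the first place, Node's built-in crypto module can generate one (a small sketch I'm adding here, not part of the original answer), stored as hex:
var crypto = require('crypto');

// 32 bytes = 256 bits for aes-256-ecb (use 16 bytes for aes-128-ecb as in the example above)
var keyHex = crypto.randomBytes(32).toString('hex');
console.log(keyHex); // a 64-character hex string

// Use the same hex string on both sides, e.g.
//   MySQL:   AES_ENCRYPT('text', UNHEX('<keyHex>')) with block_encryption_mode = 'aes-256-ecb'
//   Node.js: crypto.createDecipheriv('aes-256-ecb', Buffer.from(keyHex, 'hex'), null)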
Related
I am working on a Discord bot written in Node.js; the bot uses a MySQL database server to store information. The problem I have run into is that I cannot seem to retrieve the data from the database in a neat way; every single thing I try seems to run into some issue or another.
The select query returns an object called RowDataPacket. When googling, every single result references this solution: Object.values(JSON.parse(JSON.stringify(rows)))
It suggests that I should get the values back, but I don't; I get an array back that is as hard to work with as the RowDataPacket object.
This is a snippet of my code:
const kenneledMemberRolesTableName = 'kenneled_member_roles'
const kenneledMemberKey = 'kenneled_member'
const kenneledMemberRoleKey = 'kenneled_member_role_id'
const kenneledStaffMemberKey = 'kenneled_staff_member'
const kenneledDateKey = 'kenneled_date'
const kenneledReturnableRoleKey = 'kenneled_role_can_be_returned'
async function findKenneledMemberRoles(kenneledMemberId) {
let sql = `SELECT CAST(${kenneledMemberRoleKey} AS Char) FROM ${kenneledMemberRolesTableName} WHERE ${kenneledMemberKey} = ${kenneledMemberId}`
let rows = await databaseAccessor.runQuery(sql)
let result = JSON.parse(JSON.stringify(rows)).map(row => {
return row.kenneled_member_role_id
})
return result
}
This seemed to work, until I had to do a type conversion on the value. Now the dot notation requires me to reference row.CAST(kenneled_member_role_id AS Char), which cannot work, and I have found no other way to retrieve the data than through dot notation. I swear there must be a better way to work with MySQL RowDataPackets, but the solution eludes me.
I figured out something that works; however, I still feel like this is an inelegant solution. I would love to hear from others if I am misunderstanding how to work with MySQL in Node.js, or if this is just a consequence of the library:
let result = JSON.parse(JSON.stringify(rows)).map(row => {
return row[`CAST(${kenneledMemberRoleKey} AS CHAR)`];
})
So what I did is access the value through brackets instead of dot notation. This seems to work, and at least lets me store part of or the whole expression in a constant variable, hiding the ugliness.
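For what it's worth, a small sketch of that idea applied to the function above, with the expression kept in one constant (castColumn is just a name introduced here; an SQL alias such as CAST(... AS CHAR) AS role_id would be another way to get a clean property name):
const castColumn = `CAST(${kenneledMemberRoleKey} AS CHAR)`;

async function findKenneledMemberRoles(kenneledMemberId) {
    let sql = `SELECT ${castColumn} FROM ${kenneledMemberRolesTableName} WHERE ${kenneledMemberKey} = ${kenneledMemberId}`
    let rows = await databaseAccessor.runQuery(sql)
    // bracket access, since the column name contains parentheses and spaces
    return JSON.parse(JSON.stringify(rows)).map(row => row[castColumn])
}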
I run a Node API server in pm2 cluster mode that communicates with a MySQL DB server.
In module x.js I have code like this:
let insertMappingQuery = ``;
...
...
const constructInsertMappingQuery = () => {
insertMappingQuery += `
INSERT IGNORE INTO messages_mapping (message_id, contact_id)
VALUES (` + message_id + `, ` + contact_id + `);`;
}
When a user sends a message a function will call module x and the code above is executed for his message (let's say message_id = 1)
INSERT IGNORE INTO messages_mapping (message_id, contact_id)
VALUES (1, some_id);
Then another user sends a message and the code is executed for, let's say, message_id = 2; however, the query will look like this:
INSERT IGNORE INTO messages_mapping (message_id, contact_id)
VALUES (1, some_id);
INSERT IGNORE INTO messages_mapping (message_id, contact_id)
VALUES (2, some_id);
So basically, when user two sends a message, the query still contains what user one already executed, so user one has his record inserted twice.
This doesn't happen all the time, but it happens a lot (I would say 30% to 50%), and I couldn't find any pattern for when it happens.
Users don't have to do it at the same time; there might be some time difference (minutes or even hours).
Could this be related to the variable not being cleared from memory? Or a memory leak of some kind?
I don't understand how two different users can share a variable.
Remember that require caches modules, and all subsequent require calls are given the same exports, so write something that exports a function or class so that you can safely call/instantiate things without variables getting shared.
For example:
const db = require(`your/db/connector`);
const Mapper = {
addToMessageMapping: async function(messageId, contactId) {
const query = `
INSERT IGNORE INTO messages_mapping (message_id, contact_id)
VALUES (${messageId}, ${contactId});
`;
...
return db.run(query);
},
...
}
module.exports = Mapper;
And of course this could have been a class, too, or it could even have been that function directly - the only thing that changes is how you make it run that non-conflicting-with-any-other-call function.
Now, consumers of this code simply trust that the following is without side effects:
const mapper = require('mapper.js');
const express, app, etc, whatever = ...
....
app.post(`/api/v1/mappings/message/:msgid`, (req, res, next) => {
const post = getPOSTparamsTheUsualWay();
mapper.addToMessageMapping(req.params.msgId, post.contactId)
.then(() => next())
.catch(error => next(error));
}, ..., moreMiddleware, ... , (req,res) => {
res.render(`blah.html`, {...});
});
Also note that template strings exist specifically to avoid composing strings by concatenating with +; the whole point is that they can take ${...} inside them and template in "whatever is in those curly braces" (variables, function calls, any JS really).
(The second power they have is that you can prefix tag them with a function name and that function will run as part of the templating action, but not a lot of folks need this on a daily basis. ${...} templating though? Every day, thousands of times).
And of course on a last note: it looks like you're building raw SQL, which is always a bad idea. Use prepared statements for whatever database library you're using: it supports them, and they make sure any user input is handled safely. Right now, someone could post to your API with a message id that's ); DROP TABLE messages_mapping; -- and done: your table's gone. Fun times.
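A hedged sketch of that last point, assuming something like the mysql2 driver (the connection details are placeholders); the values are bound by the driver instead of concatenated into the SQL text:
const mysql = require('mysql2/promise');
const pool = mysql.createPool({ host: 'localhost', user: 'app', database: 'app' }); // placeholder credentials

async function addToMessageMapping(messageId, contactId) {
    // '?' placeholders: the driver escapes the values, so a hostile message id can't break out of the query
    const query = `INSERT IGNORE INTO messages_mapping (message_id, contact_id) VALUES (?, ?)`;
    return pool.execute(query, [messageId, contactId]);
}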
Apparently I didn't know that requiring a module caches it and reuses it, so global variables in that module are cached too.
The best solution here is to avoid using global variables and restructure the code. However, if you need a quick fix you can use:
delete require.cache[require.resolve('replaceWithModulePathHere')]
Example:
let somefuncThatNeedsModuleX = () => {
    // drop the cached copy so this require returns a fresh module (with fresh module-level variables)
    delete require.cache[require.resolve('./x')];
    const x = require('./x');
}
I have a Lua script which, simplified, looks like this:
local item = {};
local id = redis.call("INCR", "counter");
item["id"] = id;
item["data"] = KEYS[1]
redis.call("SET", "item:" .. id, cjson.encode(item));
return cjson.encode(item);
KEYS[1] is a stringified json object:
JSON.stringify({name : 'some name'});
What happens is that because I'm using cjson.encode when storing the item, it seems to be getting stringified twice, so the result is:
{"id":20,"data":"{\"name\":\"some name\"}"}
Is there a better way to be handling this?
First, regardless of your question, you're using KEYS the wrong way and your script isn't written according to the guidelines. You should not generate key names in your script (i.e. call SET with "item:" .. id as a key name) but rather use the KEYS input array to declare any keys involved a priori.
Secondly, instead of passing the stringified string with KEYS, use the ARGV input array.
Thirdly, you can do item["data"] = cjson.decode(ARGV[1]) to avoid the double encoding.
Lastly, perhaps you should learn about Redis' Hash data type - it may be more suitable to your needs.
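For illustration, a hedged sketch of the call site in Node once the payload moves into ARGV (assuming the ioredis client; note the script here still generates its own key name, so it only demonstrates the second and third points):
const Redis = require('ioredis');
const redis = new Redis();

const script = `
local item = {};
item["id"] = redis.call("INCR", "counter");
item["data"] = cjson.decode(ARGV[1]); -- decode once, so the payload is not stringified twice
redis.call("SET", "item:" .. item["id"], cjson.encode(item));
return cjson.encode(item);
`;

async function createItem(payload) {
    // 0 declared keys; the stringified payload travels in ARGV[1]
    const reply = await redis.eval(script, 0, JSON.stringify(payload));
    return JSON.parse(reply);
}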
What are the restrictions on a Couchbase document's ID string?
Length?
Are special characters allowed?
What does the string have to start and end with?
Couchbase Guide Sample Code:
var properties = new Dictionary<string, object>
{
{"title", "Little, Big"},
{"author", "John Crowley"},
{"published", 1982}
};
var document = database.GetDocument("978-0061120053");
Debug.Assert(document != null);
var rev = document.PutProperties(properties);
On var document = database.GetDocument("978-0061120053"); what can be used in place of "978-0061120053"?
Quoting from the Couchbase Developer Guide, these are the only limits on keys:
Keys are strings, typically enclosed by quotes for any given SDK.
No spaces are allowed in a key.
Separators and identifiers are allowed, such as underscore: ‘person_93847’.
A key must be unique within a bucket; if you attempt to store the same key in a bucket, it will either overwrite the value or return an error in the case of add().
Maximum key size is 250 bytes. Couchbase Server stores all keys in RAM and does not remove these keys to free up space in RAM. Take this into consideration when you select keys and key length for your application.
This has become more of an exercise in figuring out what I'm doing wrong than anything mission critical, but I'd still like to see what (probably simple) mistake I'm making.
I'm using MySQL (5.1.x) AES_ENCRYPT to encrypt a string. I'm using CF's generateSecretKey('AES') to make a key (I've tried it at the default as well as 128 and 256 bit lengths).
So let's say my code looks like this:
<cfset key = 'qLHVTZL9zF81kiTnNnK0Vg=='/>
<cfset strToEncrypt = '4111111111111111'/>
<cfquery name="i" datasource="#dsn#">
INSERT INTO table(str)
VALUES (AES_ENCRYPT('#strToEncrypt#', '#key#'));
</cfquery>
That works fine as expected and I can select it using SELECT AES_DECRYPT(str,'#key#') AS... with no problems at all.
What I can't seem to do though is get CF to decrypt it using something like:
<cfquery name="s" datasource="#dsn#">
SELECT str
FROM table
</cfquery>
<cfoutput>#Decrypt(s.str,key,'AES')#</cfoutput>
or
<cfoutput>#Decrypt(toString(s.str),key,'AES')#</cfoutput>
I keep getting "The input and output encodings are not same" (including with toString(); without it I get a binary data error). The field type for the encrypted string in the db is BLOB.
This entry explains that MySQL handles AES-128 keys a bit differently than you might expect:
.. the MySQL algorithm just or's the bytes of a given passphrase against the previous bytes if the password is longer than 16 chars and just leaves them 0 when the password is shorter than 16 chars.
Not highly tested, but this seems to yield the same results (in hex).
<cfscript>
function getMySQLAES128Key( key ) {
var keyBytes = charsetDecode( arguments.key, "utf-8" );
var finalBytes = listToArray( repeatString("0,", 16) );
for (var i = 1; i <= arrayLen(keyBytes); i++) {
// adjust for base 0 vs 1 index
var pos = ((i-1) % 16) + 1;
finalBytes[ pos ] = bitXOR(finalBytes[ pos ], keyBytes[ i ]);
}
return binaryEncode( javacast("byte[]", finalBytes ), "base64" );
}
key = "qLHVTZL9zF81kiTnNnK0Vg==";
input = "4111111111111111";
encrypted = encrypt(input, getMySQLAES128Key(key), "AES", "hex");
WriteDump("encrypted="& encrypted);
// note: assumes input is in "hex". either convert the bytes
// to hex in mySQL first or use binaryEncode
decrypted = decrypt(encrypted, getMySQLAES128Key(key), "AES", "hex");
WriteDump("decrypted="& decrypted);
</cfscript>
Note: If you are using MySQL for encryption, be sure to see its documentation, which mentions the plain text may end up in various logs (replication, history, et cetera) and "may be read by anyone having read access to that information".
Update: Things may have changed, but according to this 2004 bug report the .mysql_history file is only created on Unix. (Keep in mind there may be other log files.) Detailed instructions for clearing .mysql_history can be found in the manual, but in summary:
Set the MYSQL_HISTFILE variable to /dev/null (on each log in)
Create .mysql_history as a symbolic link to /dev/null (only once)