I have a MySQL server on a Linux machine. I am using the AES encryption functions to store data in the DB server. The statements look like these:
INSERT into userc (name, town) VALUES ('john',AES_ENCRYPT('nebraska', 'usa2010'));
SELECT CAST(AES_DECRYPT(town, 'usa2010') AS CHAR(50)) town_decrypt from userc;
My concern with this kind of encryption is that everything needed to access my data travels in the clear, so a sniffer or a debug-level log can capture everything.
Is there a way to avoid sending the key in the statement, and instead keep it stored in a file (/home/user/key.txt), calling the encryption in a way similar to this:
INSERT into userc (name, town) VALUES ('john',AES_ENCRYPT('nebraska', key1));
where key1 is a reference to the file where the key is stored?
It looks like this was more of a DB question, so I posted it there and got a solution. The proposal, now being tested, is to create a user-defined function (UDF) in C and load it into MySQL. Being coded in C, it can perform any action I need: reading a file and getting the key from it, based on a parameter it receives.
Of course, this file has to be protected in some way so it is not exposed.
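For reference, a minimal sketch of what the SQL side of that could look like once the UDF is compiled into a shared library. The function name get_key, the library name, and the key identifier 'key1' are all hypothetical; the C code behind it would read /home/user/key.txt and return the key material:
CREATE FUNCTION get_key RETURNS STRING SONAME 'get_key.so';
-- Only the key's name travels in the statement; the key itself stays on the server
INSERT INTO userc (name, town) VALUES ('john', AES_ENCRYPT('nebraska', get_key('key1')));
SELECT CAST(AES_DECRYPT(town, get_key('key1')) AS CHAR(50)) town_decrypt FROM userc;
With this, a sniffer or a debug log sees only the identifier 'key1', never the key material.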
I have a large amount of data in a MySQL database. I want to poll data from the database and push it to ActiveMQ via Camel. The connection between the database and the queue is lost every 15 minutes, and some messages are lost during the interruption. I need to know which messages were lost so that I can poll them again from the database. Messages must not be sent more than once, and this has to be done without any changes to the database schema (I cannot add a Boolean status field to my tables).
Any suggestion is welcome.
Essentially, you need to have some unique identifier in the data you pull from the source database. Maybe it is whatever has already been defined as the primary key. Or, maybe the table has some timestamp field. Or, maybe some combination of fields will be unique.
Once you identify that, when you are putting the data into the target, reject any key that is already in the target. You could use Camel's "idempotency" features, but if you are able to check for the key in the target database, you probably won't need anything else.
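If the target is MySQL, that "reject any key already in the target" check can be folded into the insert itself. A minimal sketch, assuming the target table has a unique key on the identifier (all names here are made up):
-- Duplicates are silently skipped, so re-polled rows are harmless
INSERT IGNORE INTO target_messages (message_id, payload)
VALUES (42, 'some payload');
-- (INSERT ... ON DUPLICATE KEY UPDATE works too, if you prefer an explicit no-op)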
If you have to make the decision about what to send, but do not have access to your remote database from App #1, you'll need to keep a record on the other side of the firewall.
You would need to do this even if the connection did not break every 15 minutes, because you could have failures for other reasons.
If you can have an idempotency store for App #1, another approach could be to transfer data from the local database to some other local table and read from that. You then poll this other table and delete rows whenever the send is successful.
Example:
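(A minimal sketch of that outbox-style approach; the table and column names are hypothetical.)
-- One-time setup: a local staging table with the same structure,
-- including the source table's primary key
CREATE TABLE outbox LIKE source_table;
-- Stage rows to send; INSERT IGNORE skips rows already staged
INSERT IGNORE INTO outbox SELECT * FROM source_table;
-- The poller reads from outbox; once ActiveMQ confirms delivery of a row,
-- delete it so it is never sent twice
DELETE FROM outbox WHERE id = 42;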
It looks like you're using MySQL. If both databases are on MySQL, you could look into MySQL data replication, rather than building your own app with Camel.
I am pulling data from a MySQL server using an ODBC connection to the MySQL tables. Unfortunately, the table names are being appended to the variables, which means I have to run an extra step after the data pull to rename every variable before use. Why is this happening, and how can I prevent it?
E.g., the MySQL table Sched has the variables MAPID, TEST, TYPE, APPOINT, etc., but when I pull the data they come down as Sched_MAPID, Sched_TEST, Sched_TYPE, Sched_APPOINT, etc.
I am using SAS 9.3 (64-bit) and setting a libref that includes the libname (MySQLLib), the type of connection (odbc), the DSN, the username, and the password. The connection obviously tests fine, or I wouldn't be able to see the data, much less pull it. Basically, I set the data using the following:
libname MySQLLib odbc dsn=MySQLSSH user=&MySQLID pwd=&sqlpwd;
Data MySQLSched;
Set MySQLLib.Sched;
run;
Generally, a local SAS table (MySQLSched) is created that I can then manipulate and use as I need, without further touching the original Sched table. I still can, but every variable has the table name appended.
I am not sure if this is a SAS issue or a MySQL issue. I will also ask this same question in the SAS forums. If I receive a pertinent answer there, I will update this post with that answer.
Any ideas?
I have a MySQL database that I use only for logging. It consists of several simple, look-alike MyISAM tables. There is always one local client (i.e. located on the same machine) that only writes data to the DB, and several remote clients that only read data.
What I need is to insert bulks of data from local client as fast as possible.
I have already tried many approaches to make this faster, such as reducing the number of inserts by increasing the length of the VALUES list, using LOAD DATA .. INFILE, and some others.
Now it seems to me that I've come up against the limitation of parsing values from strings into their target data types (it doesn't matter whether this is done when parsing queries or a text file).
So the question is:
Does MySQL provide some means of manipulating data directly for local clients (i.e. not using SQL)? Maybe there is some API that allows inserting data by simply passing a pointer.
Once again: I don't want to optimize the SQL code or invoke the same queries in a script, as hd1 advised. What I want is to pass a buffer of data directly to the database engine. This means I don't want to invoke SQL at all. Is that possible?
Use MySQL's LOAD DATA command:
Write the data to a file in CSV format, then execute this statement:
LOAD DATA INFILE 'somefile.csv' INTO TABLE mytable
For more info, see the documentation.
Other than LOAD DATA INFILE, I'm not sure there is any other way to get data into MySQL without using SQL. If you want to avoid parsing multiple times, you should use a client library that supports parameter binding; the query can be parsed and prepared once and then executed multiple times with different data.
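For instance, the same parse-once idea is visible in MySQL's server-side prepared statements, which the client libraries' binding APIs build on (the table and column names here are made up):
PREPARE ins FROM 'INSERT INTO log_table (col1, col2) VALUES (?, ?)';
-- Parsed once above; executed many times below with different data
SET @a = 1, @b = 'first';
EXECUTE ins USING @a, @b;
SET @a = 2, @b = 'second';
EXECUTE ins USING @a, @b;
DEALLOCATE PREPARE ins;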
However, I highly doubt that parsing the query is your bottleneck. Is this a dedicated database server? What kind of hard disks are being used? Are they fast? Does your RAID controller have battery backed RAM? If so, you can optimize disk writes. Why aren't you using InnoDB instead of MyISAM?
With MySQL you can insert multiple tuples with one insert statement. I don't have an example, because I did this several years ago and don't have the source anymore.
Consider, as mentioned, using one INSERT with multiple values:
INSERT INTO table_name (col1, col2) VALUES (1, 'A'), (2, 'B'), (3, 'C'), ( ... )
This means you only have to connect to your database with one bigger query instead of several smaller ones. It's easier to carry the entire couch through the door once than to run back and forth with all the disassembled pieces of the couch, opening the door every time. :)
Apart from that, you can also run LOCK TABLES table_name WRITE before the INSERT and UNLOCK TABLES afterwards. That ensures that nothing else is inserted in the meantime.
See the LOCK TABLES documentation.
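Put together, a minimal sketch of the lock-then-bulk-insert pattern (table and column names are made up):
LOCK TABLES table_name WRITE;
-- One multi-row insert instead of many single-row ones
INSERT INTO table_name (col1, col2) VALUES
  (1, 'A'), (2, 'B'), (3, 'C');
UNLOCK TABLES;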
INSERT INTO foo (foocol1, foocol2) VALUES ('foocol1val1', 'foocol2val1'), ('foocol1val2', 'foocol2val2') and so on should sort you out. More information and sample code can be found here. If you have further problems, do leave a comment.
UPDATE
If you don't want to use SQL directly, then try this shell script to do as many inserts as you want. Put it in a file, say insertToDb.sh, and get on with your day/evening:
#!/bin/sh
# Inserts one row per invocation; the values are quoted so that strings survive the shell
mysql --user=me --password=foo dbname -h foo.example.com -e "insert into tablename (col1, col2) values ('$1', '$2');"
Invoke it as sh insertToDb.sh col1value col2value. If I've still misunderstood your question, leave another comment.
After some investigation, I found no way of passing data directly to the MySQL database engine (without it being parsed).
My aim was to speed up communication between the local client and the DB server as much as possible. The idea was that if the client is local, it could use some API functions to pass data to the DB engine, thus avoiding SQL (and the parsing of the values in it). The closest solution was proposed by bobwienholt (using a prepared statement and binding parameters), but LOAD DATA .. INFILE turned out to be a bit faster in my case.
The best way to insert data in MS SQL without using INSERT INTO or UPDATE queries is to use the MS SQL interface directly. Right-click on the table name and select "Edit top 200 rows". You will then be able to add data to the database by typing into each cell. To enable searching, or to use SELECT or other SQL commands, right-click on any of the 200 rows you have selected, go to Pane, then select SQL, and you can add a SQL command. Check it out. :D
Without using an INSERT statement, you can use "SQLite Studio" for inserting data into MySQL. It's free and open source, so you can download it and check.
I need to build a small private app. I want to store a piece of personally identifiable information (it's an internal account number, not an SSN or anything super-sensitive) in a table that "encrypts" it.
I put encrypts in quotes because I wish the data to be stored as follows:
stored in a way that, if someone physically looked at the table data, the piece of info would not be discernible
stored in a way that, if someone did a simple SELECT query, the resulting data output would not be discernible
yet when I write my own SELECT statement, I can still decrypt the data and present it in a readable fashion
In other words, I want it only moderately encrypted so that I can still decrypt and read it. I know MD5 hashing prevents the value from ever being read back. I want something less than that.
Thanks
MD5 is NOT encryption. It's hashing.
If you don't mind passing the crypt key around in each query, it's trivial to have this in MySQL:
SELECT AES_DECRYPT(crypted_field, 'crypt key goes here') AS decrypted
and
INSERT INTO yourtable (crypted) VALUES (AES_ENCRYPT('some string', 'crypt key'));
I think what you're looking for is "symmetric-key encryption". You can use a key to encrypt your data, and the same key to decrypt it as needed (as opposed to a hash function which, as you said, makes the original data irrecoverable). In MySQL, I would take a look at the AES_ENCRYPT and AES_DECRYPT functions. Hopefully that gets you pointed in the right direction!
MySQL provides both DES and AES encryption. You will need to figure out key management, but the encryption algorithms are available.
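As one small illustration of the key-management side, a session variable at least keeps the key out of every individual statement (a sketch with a hypothetical table; note the key still crosses the wire once when the variable is set):
SET @key = 'some secret key';
-- account_no should be a VARBINARY/BLOB column to hold the ciphertext
INSERT INTO accounts (account_no) VALUES (AES_ENCRYPT('12345678', @key));
SELECT CAST(AES_DECRYPT(account_no, @key) AS CHAR(50)) AS account_no FROM accounts;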
I created an asymmetric key on one of my SQL servers (2008). I encrypted a password field and I am able to retrieve that password just fine on my development server.
The issue comes into play where I need to move this data to a production server.
Here is the code for the key that was created:
CREATE MASTER KEY ENCRYPTION BY PASSWORD='#########'
CREATE ASYMMETRIC KEY UserEncryptionKey
WITH ALGORITHM = RSA_2048
Now, when I run this on the production server, it creates the key just fine. However, when I run my sproc to get the password, it returns NULL.
SQL:
SELECT EncryptByAsymKey(AsymKey_ID('UserEncryptionKey'), Password )
FROM Users WHERE UserName = '######'
Any thoughts on what I need to do to get the encrypted field to work on multiple SQL Servers?
Please let me know if I need to clarify something.
Thanks
Do not move encrypted data from one database to another. Technically it is possible, true, but you would likely compromise the key in the process, so I would rather not tell you how to do it.
When data is exchanged between sites, the usual procedure separates key management and deployment from the data transfer. Data is decrypted before transfer, and dedicated encryption schemes such as TLS and SSL are used for the transfer itself; these eliminate the problem of deploying and sharing the actual encryption keys.
As a side note, one does not normally encrypt data with asymmetric keys; they are far too slow for bulk data operations. What everybody does instead is encrypt the data with a symmetric key and then encrypt the symmetric key with an asymmetric key.
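A minimal T-SQL sketch of that pattern, reusing the question's UserEncryptionKey; the symmetric key name is hypothetical, and the Password column would need to be varbinary to hold the ciphertext:
-- One-time setup: a symmetric key protected by the existing asymmetric key
CREATE SYMMETRIC KEY UserDataKey
    WITH ALGORITHM = AES_256
    ENCRYPTION BY ASYMMETRIC KEY UserEncryptionKey;
-- Per session: open the symmetric key, then encrypt/decrypt with it
OPEN SYMMETRIC KEY UserDataKey
    DECRYPTION BY ASYMMETRIC KEY UserEncryptionKey;
SELECT EncryptByKey(Key_GUID('UserDataKey'), Password)
FROM Users WHERE UserName = '######';
SELECT CONVERT(varchar(128), DecryptByKey(Password)) AS PlainPassword
FROM Users WHERE UserName = '######';
CLOSE SYMMETRIC KEY UserDataKey;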