I needed to store an id in an encrypted form that takes up exactly 8 characters.
I did that using the following command:
SELECT encode(LPAD(id,4,0),'abc')
This command turns an id of 1 into 0001 and then encodes that as fe5ab21a.
How do I decrypt this code?
Below is an example of a select and the result it generates
SELECT 0001
, DECODE(ENCODE('0001', 'abc'), 'abc')
, UNHEX(DECODE(ENCODE('0001', 'abc'), 'abc'))
, ENCODE('0001','abc')
, DECODE('fe5ab21a', 'abc')
, UNHEX(DECODE('fe5ab21a', 'abc'))
, HEX('0001')
The result, paired with each expression:
| expression                                  | result           |
|---------------------------------------------|------------------|
| 0001                                        | 1                |
| DECODE(ENCODE('0001', 'abc'), 'abc')        | 30303031         |
| UNHEX(DECODE(ENCODE('0001', 'abc'), 'abc')) | 0001             |
| ENCODE('0001','abc')                        | fe5ab21a         |
| DECODE('fe5ab21a', 'abc')                   | 68d357a7005dcbe0 |
| UNHEX(DECODE('fe5ab21a', 'abc'))            | NULL             |
| HEX('0001')                                 | 30303031         |
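A hedged note on the round trip, based on the results above: ENCODE() returns a binary string, and fe5ab21a is its hex rendering, so UNHEX() has to be applied before DECODE(), not after (ENCODE()/DECODE() were deprecated in MySQL 5.7 and removed in 8.0):
-- minimal sketch: store the hex form of the encoded value...
SELECT HEX(ENCODE(LPAD(1, 4, '0'), 'abc'));      -- FE5AB21A, the code from the question
-- ...and reverse it by un-hexing first, then decoding
SELECT DECODE(UNHEX('fe5ab21a'), 'abc');         -- '0001'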
As I understand it, you want to make a short, unique key that is linked to each of your IDs.
Be careful with this kind of encryption: if someone wants to break it, it will take them less than a few minutes; it is really weak. But if it is only for obfuscation, with no security at risk, it is OK.
To improve the security a bit (but decrease the user experience), try using AES_ENCRYPT().
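For instance, a minimal sketch of that route ('some_secret_key' is just a placeholder here; note that AES_ENCRYPT() returns 16-byte blocks, so the result no longer fits in 8 characters):
-- encrypt the zero-padded id and hex it so the value stays printable
SELECT HEX(AES_ENCRYPT(LPAD(1, 4, '0'), 'some_secret_key')) AS enc;       -- 32 hex characters
-- full round trip in one statement: HEX() on the way out, UNHEX() on the way back
SELECT AES_DECRYPT(
         UNHEX(HEX(AES_ENCRYPT(LPAD(1, 4, '0'), 'some_secret_key'))),
         'some_secret_key') AS round_trip;                                -- '0001'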
The problem with your code is that you don't force the type of the input/output; here is a version with proper typing.
SQL Fiddle
MySQL 5.6 Schema Setup:
CREATE TABLE t
(`id` int, `name` varchar(7))
;
INSERT INTO t
(`id`, `name`)
VALUES
(1, 'hello'),
(2, 'hola'),
(3, 'bonjour')
;
Query 1:
select *, ENCODE(id,'KEY') as encrypted_id from t
Results:
| id | name    | encrypted_id |
|----|---------|--------------|
| 1  | hello   | Vw==         |
| 2  | hola    | yw==         |
| 3  | bonjour | iA==         |
Query 2:
SELECT * from t where id = CAST(DECODE(FROM_BASE64('yw=='),'KEY') AS CHAR(50))
Results:
| id | name |
|----|------|
| 2  | hola |
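A hedged side note on the Vw==/yw== values above: the fiddle renders ENCODE()'s binary result as Base64, which is why Query 2 feeds the pasted value through FROM_BASE64(). If you want the stored value to be printable text in the first place, TO_BASE64() (available since MySQL 5.6) makes that explicit, and the lookup in Query 2 then works unchanged:
SELECT t.*, TO_BASE64(ENCODE(id, 'KEY')) AS encrypted_id FROM t;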
My current code is given below. I want to select all the columns from the table using *, but the idcastncrew column should be displayed as castncrewid. The requirement code below doesn't work, though; I wish there were a way to write something like it.
Current code:-
SELECT idcastncrew AS castncrewid,castncrewname,castncrewtype,castncrewrole,imagelink,vendor,mode FROM subscriber;
Requirement :-
SELECT idcastncrew AS castncrewid, * FROM subscriber;
The closest I think you can get is to have the renamed column twice, once with the new name and once with the old name.
While MySQL does not allow * after an aliased column (causing your second code snippet to give an error), it does allow table.* anywhere...
SELECT idcastncrew AS castncrewid, subscriber.*
FROM subscriber;
To reiterate: you'll still get an idcastncrew column, but you will ALSO get a castncrewid column.
There is no way to say "don't include *this* column" when using * in MySQL.
https://dbfiddle.uk/?rdbms=mysql_5.7&fiddle=c69c537e46ad29e3c0c8c03d3ebd1bf7
You can alias the table and then select the column under a new name alongside tablealias.*, for example:
MariaDB [DEV]> create table xxx (id int, str varchar(20));
MariaDB [DEV]> insert into xxx values (1, 'hi');
MariaDB [DEV]> insert into xxx values (2, 'Hello');
MariaDB [DEV]> insert into xxx values (3, 'World');
MariaDB [DEV]> insert into xxx values (4, 'Goodbye');
MariaDB [DEV]> select a.id as id1, a.* from xxx a order by 1;
+------+------+---------+
| id1  | id   | str     |
+------+------+---------+
|    1 |    1 | hi      |
|    2 |    2 | Hello   |
|    3 |    3 | World   |
|    4 |    4 | Goodbye |
+------+------+---------+
I have a table with pre-existing giveaway codes, and I need to select one or more rows and then update three columns of each row with a personal identification, a customer code, and a "reserved" status, so that each row stays reserved until we receive a response from our client's API.
The table looks like this:
code              identification    customer_code    status
------------------------------------------------------------
81Ow3tCs1nNwxKu   --                --               available
I1NdH9F22S7RhU3   --                --               available
Xc942LWe8Z6nt8x   --                --               available
zcLMRO8kSeM7S06   --                --               available
K94erORvzSsU0ik   --                --               available
I tried this but got an error:
UPDATE promo_codes
SET
identification='12345',
customer_code='67890',
status='reserved'
FROM
(SELECT code FROM promo_codes WHERE status='available' LIMIT 2);
Then I tried REPLACE INTO, but that also gave an error:
REPLACE INTO promo_codes(identification,customer_code,status)
VALUES('12345','67890','reserved')
WHERE
(SELECT code FROM promo_codes WHERE status='available' LIMIT 2);
I do not know what else to do. Could someone give me an idea?
Thank you very much for the help.
A little rewriting and your code works.
You should consider adding an ORDER BY RAND(), because a LIMIT without an ORDER BY is fairly meaningless.
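For example, a minimal sketch of that variant (it can be run against the setup below); it reserves 2 random available rows instead of whichever 2 the storage engine returns first:
UPDATE promo_codes
SET
identification='12345',
customer_code='67890',
status='reserved'
WHERE status='available'
ORDER BY RAND()
LIMIT 2;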
CREATE TABLE promo_codes (
`code` VARCHAR(15),
`identification` VARCHAR(20),
`customer_code` VARCHAR(20),
`status` VARCHAR(9)
);
INSERT INTO promo_codes
(`code`, `identification`, `customer_code`, `status`)
VALUES
('81Ow3tCs1nNwxKu', '--', '--', 'available'),
('I1NdH9F22S7RhU3', '--', '--', 'available'),
('Xc942LWe8Z6nt8x', '--', '--', 'available'),
('zcLMRO8kSeM7S06', '--', '--', 'available'),
('K94erORvzSsU0ik', '--', '--', 'available');
UPDATE promo_codes
SET
identification='12345',
customer_code='67890',
status='reserved'
WHERE status='available' LIMIT 2;
SELECT * FROM promo_codes
code            | identification | customer_code | status
:-------------- | :------------- | :------------ | :--------
81Ow3tCs1nNwxKu | 12345          | 67890         | reserved
I1NdH9F22S7RhU3 | 12345          | 67890         | reserved
Xc942LWe8Z6nt8x | --             | --            | available
zcLMRO8kSeM7S06 | --             | --            | available
K94erORvzSsU0ik | --             | --            | available
db<>fiddle here
I have 2 huge BigQuery tables (~20 GB each) in which I would like to find pairs of sessions that have the same tags but belong to different employers. In other words: find all sessions whose tags (excluding anything containing "default") match the tags of sessions made by employees of another employer.
Since there's no BigQuery fiddle, I replicated the problem in MySQL. The query I wrote below gets the data, but it has two major problems:
It is way too slow (no doubt because of the two select-everything subqueries).
It repeats the data for each pair of matching sessions. Notice that every pair also appears again with first_session_id and second_session_id swapped.
The pressing problem is the slowness. I am running this query in a daemon job from a ruby/rake script that generates reports; the connection times out and returns no data, even with 5 retries.
Increasing the timeout didn't help either, and I am wondering if there's another way to write this query.
Here is the SQL Fiddle for detailed inspection:
MySQL 5.6 Schema Setup:
CREATE TABLE session_infos(
session_uuid varchar(255),
session_tag varchar(255) DEFAULT 'default'
);
CREATE TABLE employee_sessions(
employer_id int(11),
employee_id int(11),
session_uuid varchar(255)
);
INSERT INTO session_infos
(session_uuid, session_tag)
VALUES
('dcv3erwfw', 'search'),
('aefd4erww', 'default_search'),
('slkjdvbh8', 'game_default'),
('kjdshvo93', 'client'),
('sdvife333', 'client'),
('dvvnjer54', 'search'),
('lsdvkJHd=', 'fiddle'),
('Udbwjw=23', 'fiddle'),
('jKHDJFWJ1', 'search');
INSERT INTO employee_sessions
(employer_id, employee_id, session_uuid)
VALUES
(1, 1, 'dcv3erwfw'),
(1, 2, 'aefd4erww'),
(2, 1, 'slkjdvbh8'),
(2, 1, 'kjdshvo93'),
(1, 2, 'sdvife333'),
(2, 2, 'dvvnjer54'),
(3, 1, 'lsdvkJHd='),
(2, 1, 'Udbwjw=23'),
(4, 1, 'jKHDJFWJ1');
Query 1:
SELECT
first_table.session_tag AS session_tag,
first_table.sid AS first_session_id,
second_table.sid AS second_session_id
FROM
(
SELECT
si.session_tag AS session_tag,
es.employer_id AS employer_id,
es.session_uuid AS sid
FROM session_infos AS si
JOIN employee_sessions AS es
ON es.session_uuid = si.session_uuid
WHERE
si.session_tag NOT LIKE '%default%'
) first_table
INNER JOIN
(
SELECT
ssi.session_tag AS session_tag,
ses.employer_id AS employer_id,
ses.session_uuid AS sid
FROM session_infos AS ssi
JOIN employee_sessions AS ses
ON ses.session_uuid = ssi.session_uuid
WHERE
ssi.session_tag NOT LIKE '%default%'
) second_table
ON first_table.session_tag = second_table.session_tag
WHERE first_table.employer_id != second_table.employer_id
-- GROUP BY first_table.session_tag, first_table.sid, second_table.sid
Results:
| session_tag | first_session_id | second_session_id |
|-------------|------------------|-------------------|
| search      | dcv3erwfw        | dvvnjer54         |
| search      | dcv3erwfw        | jKHDJFWJ1         |
| client      | kjdshvo93        | sdvife333         |
| client      | sdvife333        | kjdshvo93         |
| search      | dvvnjer54        | dcv3erwfw         |
| search      | dvvnjer54        | jKHDJFWJ1         |
| fiddle      | lsdvkJHd=        | Udbwjw=23         |
| fiddle      | Udbwjw=23        | lsdvkJHd=         |
| search      | jKHDJFWJ1        | dcv3erwfw         |
| search      | jKHDJFWJ1        | dvvnjer54         |
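A hedged note on the second problem: because the self-join matches every qualifying pair in both orders, each pair shows up twice. One sketch that keeps a single ordering of each pair is to add an inequality on the session ids to the join condition; the rest of the query is unchanged:
SELECT
  first_table.session_tag AS session_tag,
  first_table.sid AS first_session_id,
  second_table.sid AS second_session_id
FROM
  ( SELECT si.session_tag, es.employer_id, es.session_uuid AS sid
    FROM session_infos AS si
    JOIN employee_sessions AS es ON es.session_uuid = si.session_uuid
    WHERE si.session_tag NOT LIKE '%default%' ) first_table
INNER JOIN
  ( SELECT si.session_tag, es.employer_id, es.session_uuid AS sid
    FROM session_infos AS si
    JOIN employee_sessions AS es ON es.session_uuid = si.session_uuid
    WHERE si.session_tag NOT LIKE '%default%' ) second_table
  ON  first_table.session_tag = second_table.session_tag
  AND first_table.sid < second_table.sid   -- emit each matching pair only once
WHERE first_table.employer_id != second_table.employer_id
This only removes the duplicated orderings; the two identical derived tables are still scanned, so it does not by itself address the slowness.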
I am trying to insert data from one table into another, and each table has an 'id' field that should be the same but is stored as a different datatype. This 'id' field should represent the same unique value, allowing me to update from one table to the other.
In one table (the new.table one), the 'id' is stored as datatype varchar(35), and in the old.table it is datatype bigint(20) -- I believe this older table holds the integer version of the hex value stored in the new one. I am trying to update data from the new.table back into the old.table.
After searching about this for a while, I tried this simple MySQL query, and it fails:
INSERT INTO old.table (id, field2)
SELECT CAST(CONV(id,16,10) AS UNSIGNED INTEGER), field2
FROM new.table;
It fails with this error:
Out of range value for column 'id' at row 1
I have also tried a simple
SELECT CAST(CONV(id, 16,10) AS UNSIGNED INTEGER) from new.table;
The result is mostly the same integer for every row, even though each hex value in new.table is unique. I've googled this for two days and could really use some help figuring out what is wrong. Thanks.
EDIT: Some example data from the console output of SELECT id FROM new.table:
| 1d2353560110956e1b3e8610a35d903a |
| ec526762556c4f92a3ea4584a7cebfe1.11 |
| 34b8c838c18a4c5690514782b7137468.16 |
| 1233fa2813af44ca9f25bb8cac05b5b5.16 |
| 37f396d9c6e04313b153a34ab1e80304.16 |
The problem is that the id values are too large. CONV() works with 64-bit precision, and MySQL returns the maximum BIGINT UNSIGNED value when an overflow happens.
Query 1:
select CONV('FFFFFFFFFFFFFFFF1',16,10)
Results:
| CONV('FFFFFFFFFFFFFFFF1',16,10) |
|---------------------------------|
| 18446744073709551615 |
Query 2:
select CONV('FFFFFFFFFFFFFFFF',16,10)
Results:
| CONV('FFFFFFFFFFFFFFFF',16,10) |
|--------------------------------|
| 18446744073709551615 |
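To make the overflow concrete: 18446744073709551615 is 2^64 - 1, the BIGINT UNSIGNED ceiling, while a 32-character hex id like the ones shown above needs 128 bits, so it can never fit in a bigint(20) column. A quick sketch:
-- ~0 has all 64 bits set, i.e. the value CONV() clamps to on overflow
SELECT ~0;                                                       -- 18446744073709551615
-- each hex digit carries 4 bits, so a 32-digit id needs 128 bits
SELECT LENGTH('1d2353560110956e1b3e8610a35d903a') * 4 AS bits;   -- 128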
I would suggest implementing your own id-mapping logic in a function instead of using the CONV() function.
EDIT
I would use a user variable to generate a new row number and insert that into the old table.
CREATE TABLE new(
Id varchar(35)
);
insert into new values ('1d2353560110956e1b3e8610a35d903a');
insert into new values ('ec526762556c4f92a3ea4584a7cebfe1.11');
insert into new values ('34b8c838c18a4c5690514782b7137468.16');
insert into new values ('1233fa2813af44ca9f25bb8cac05b5b5.16');
insert into new values ('37f396d9c6e04313b153a34ab1e80304.16');
CREATE TABLE old(
Id bigint(20),
val varchar(35)
);
INSERT INTO old (id, val)
SELECT rn, id
FROM (
  SELECT *, (@Rn := @Rn + 1) rn
  FROM new CROSS JOIN (SELECT @Rn := 0) v
) t1
Query 1:
SELECT * FROM old
Results:
| Id | val                                 |
|----|-------------------------------------|
| 1  | 1d2353560110956e1b3e8610a35d903a    |
| 2  | ec526762556c4f92a3ea4584a7cebfe1.11 |
| 3  | 34b8c838c18a4c5690514782b7137468.16 |
| 4  | 1233fa2813af44ca9f25bb8cac05b5b5.16 |
| 5  | 37f396d9c6e04313b153a34ab1e80304.16 |
I am writing a SQL query for Big SQL.
If it looks like this
select t.city from table t where t.city like 'A%'
It works OK, but the next one fails:
select t.city from table t where t.city like 'A%' escape '\'
I only added an ESCAPE expression, and it gives me the following error:
Error Code: -5199, SQL State: 57067] DB2 SQL Error: SQLCODE=-5199, SQLSTATE=57067, SQLERRMC=Java DFSIO;1;2, DRIVER=4.15.82
I found this documentation: http://www-01.ibm.com/support/knowledgecenter/SSPT3X_2.1.2/com.ibm.swg.im.infosphere.biginsights.bigsql.doc/doc/bsql_like_predicate.html?lang=en
So it seems ESCAPE should work.
If I escape the escape character, I get another error:
Error Code: -130, SQL State: 22019] DB2 SQL Error: SQLCODE=-130, SQLSTATE=22019, SQLERRMC=null, DRIVER=4.15.82. 2) [Error Code: -727, SQL State: 56098] DB2 SQL Error: SQLCODE=-727, SQLSTATE=56098, SQLERRMC=2;-130;22019;, DRIVER=4.15.82
But if I use a character other than '\' as the escape, for example '/', it works fine.
Any ideas why this might happen?
Try this maybe. You might have to escape the escape character.
select t.city from table t where t.city like 'A%' escape '\\'
Based upon this sample:
\connect bigsql
drop table if exists stack.issue1;
create hadoop table if not exists stack.issue1 (
f1 integer,
f2 integer,
f3 varchar(200),
f4 integer
)
stored as parquetfile;
insert into stack.issue1 (f1,f2,f3,f4) values (0,0,'Detroit',0);
insert into stack.issue1 (f1,f2,f3,f4) values (1,1,'Mt. Pleasant',1);
insert into stack.issue1 (f1,f2,f3,f4) values (2,2,'Marysville',2);
insert into stack.issue1 (f1,f2,f3,f4) values (3,3,'St. Clair',3);
insert into stack.issue1 (f1,f2,f3,f4) values (4,4,'Port Huron',4);
select * from stack.issue1;
select * from stack.issue1 where f3 like 'M%';
\quit
I get the following results:
jsqsh --autoconnect --input-file=./t.sql --output-file=t.out
0 rows affected (total: 0.28s)
0 rows affected (total: 0.22s)
1 row affected (total: 0.37s)
1 row affected (total: 0.35s)
1 row affected (total: 0.38s)
1 row affected (total: 0.35s)
1 row affected (total: 0.35s)
5 rows in results(first row: 0.33s; total: 0.33s)
2 rows in results(first row: 0.26s; total: 0.26s)
cat t.out
+----+----+--------------+----+
| F1 | F2 | F3           | F4 |
+----+----+--------------+----+
|  1 |  1 | Mt. Pleasant |  1 |
|  0 |  0 | Detroit      |  0 |
|  4 |  4 | Port Huron   |  4 |
|  3 |  3 | St. Clair    |  3 |
|  2 |  2 | Marysville   |  2 |
+----+----+--------------+----+
+----+----+--------------+----+
| F1 | F2 | F3           | F4 |
+----+----+--------------+----+
|  1 |  1 | Mt. Pleasant |  1 |
|  2 |  2 | Marysville   |  2 |
+----+----+--------------+----+
This shows your syntax is correct. However, based upon the -5199 error code, this looks like an issue with the FMP processes not having enough memory, or an issue with the Hadoop I/O component. You can get further information on this error by issuing
db2 ? sql5199n
from the command line.
The SQL error message should have directed you to the node where the error occurred and to the locations of the Big SQL log file and the associated reader log files.
An SQL5199 error generally means an issue with HDFS (you can run db2 \? SQL5199 as the bigsql user to get details on the message). Check the bigsql and DFS logs to see if they give any pointers to the problem.
Hope this helps.