I have a table called test. There are several records with unformatted SSNs (i.e. they are missing the dashes).
They look like:
123456789
when I want them to be formatted like:
123-45-6789
I want to run a script that inserts these two dashes into records like this, i.e. values that are strings of 9 characters.
If you need to update the rows, you can use the INSERT() string function. The following query checks that the expression does what you want:
with test as (
select '123456789' as ssn
)
select insert(insert(ssn,4,0,'-'),7,0,'-')
from test;
update test set ssn=insert(insert(ssn,4,0,'-'),7,0,'-')
where ssn not like '%-%';
It might be a better idea to leave the data as-is and add a generated column that implements the above insert syntax instead. This depends on the data, which you haven't shown us.
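For example, here is a minimal sketch of that idea, assuming MySQL 5.7+ / MariaDB 10.2+ generated-column support; the column name ssn_formatted is just a placeholder:
alter table test
  add column ssn_formatted varchar(11)
    generated always as (
      case when ssn not like '%-%' and length(ssn) = 9
           then insert(insert(ssn, 4, 0, '-'), 7, 0, '-')
           else ssn
      end
    ) virtual;   -- ssn_formatted is a hypothetical name; the raw data stays in ssn

select ssn, ssn_formatted from test;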
A solution that uses the regexp functions:
update test
set SSN = regexp_replace(SSN, '([0-9]{3})([0-9]{2})([0-9]{4})$', '\\1-\\2-\\3')
where SSN RLIKE '^[0-9]{9}$';

select * from test;
| SSN |
| :---------- |
| 123-45-6789 |
Related
My team has stored array data as a string in MySQL like below
["1","2","22","11"]
How can we select rows from the table where the column contains a certain branch number?
Example of the table:
sno | Name | Branch
1 | Tom | ["1","2","22"]
2 | Tim | ["1","2"]
Can you suggest a query to select all rows containing branch 2?
We tried using FIND_IN_SET(), but that does not work because the double quotes and square brackets are also part of the string.
Use LIKE:
select *
from mytable
where Branch like '%"2"%'
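If the branch number comes from application code, one variation is to build the pattern with CONCAT; here '2' just stands in for whatever value you pass:
select *
from mytable
where Branch like concat('%"', '2', '"%');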
I'm storing permissions in the DB as a JSON array string, and I want to select rows by a specific permission. The table looks like this:
1 | Dog | [3,4]
2 | Cat | [33,4]
3 | Tiger | [5,33,4]
4 | wolf | [3,5]
At the moment I'm selecting them like this:
SELECT * FROM `pages` WHERE access REGEXP '([^"])3([^"])'
It works, but not as it should: this query gives me all records which contain 3, but also those which contain 33. My question is how I should format my regexp to match a row by a specific value in the JSON string.
P.S. I have MySQL 5.5, and as far as I know the JSON functions are not supported in this version.
If you only have numbers in the fields, you can alter your regexp so that it only matches where the string you are looking for (here the '3') does not have another digit immediately next to it:
SELECT * FROM `pages` WHERE access REGEXP '([^"0-9])3([^"0-9])'
REGEXP '[[:<:]]3[[:>:]]'
That is, use the "word boundary" thingies.
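Applied to your query, that would be the following; note that [[:<:]] and [[:>:]] are the pre-MySQL-8.0 word-boundary syntax, which fits the MySQL 5.5 constraint in the question:
SELECT * FROM `pages` WHERE access REGEXP '[[:<:]]3[[:>:]]';
-- matches [3,4] and [3,5] but not [33,4], because the 3 must start and end a "word"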
I have a table that looks like this
+------+------------------------------------+
| id | details |
+------+------------------------------------+
| 1 | {"price":"24.99","currency":"USD"} |
+------+------------------------------------+
Is it possible to, with a single MySQL select statement, obtain the value of price 24.99?
Yes, you can, using JSON_EXTRACT.
It should look something like:
SELECT JSON_EXTRACT(details, "$.price")
FROM table_name
or another form:
SELECT details->"$.price"
FROM table_name
(I don't have MySQL at hand to test it.)
Note that the price in your JSON is stored as a string, not a number, so you would probably want to cast it to a DECIMAL.
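For example, a sketch of that cast; JSON_UNQUOTE strips the surrounding quotes before the value is converted, and DECIMAL(10,2) is just an assumed precision:
SELECT CAST(JSON_UNQUOTE(JSON_EXTRACT(details, '$.price')) AS DECIMAL(10,2)) AS price
FROM table_name;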
I've stumbled on a previously asked and answered question here:
How to use comparison operator for numeric string in MySQL?
I absolutely agree that the answer there is the best one mentioned. But it left me with a question of my own while I was trying to write my own answer. I was trying to select the first number and convert it to an integer. Next I wanted to compare that integer with a number (3 in the case of that question).
This is the query I've created:
SELECT experience,
CONVERT(SUBSTRING_INDEX(experience,'-',1), UNSIGNED INTEGER) AS num
FROM employee
WHERE #num >= 3;
For the sake of simplicity, assume the data inside experience is: 4-8
The query doesn't return any errors. But it doesn't return the data either. I know it's possible to compare the data inside a column with a user defined variable. But is it possible to compare data (the integer in this case) with the variable like I'm trying to do?
This is purely out of curiosity and to learn something.
Yes, a derived table will do. The inner select block below is a derived table, and every derived table needs a name; in my case, xDerived.
The strategy is to let the derived table take care of the computed column. Coming out of the derived chunk is a clean column named num, which the outer select is free to use.
Schema
create table employee
( id int auto_increment primary key,
experience varchar(20) not null
);
-- truncate table employee;
insert employee(experience) values
('4-5'),('7-1'),('4-1'),('6-5'),('8-6'),('5-9'),('10-4');
Query
select id,experience,num
from
( SELECT id,experience,
CONVERT(SUBSTRING_INDEX(experience,'-',1),UNSIGNED INTEGER) AS num
FROM employee
) xDerived
where num>=7;
Results
+----+------------+------+
| id | experience | num |
+----+------------+------+
| 2 | 7-1 | 7 |
| 5 | 8-6 | 8 |
| 7 | 10-4 | 10 |
+----+------------+------+
Note, your #num concept was faulty, but hopefully I interpreted what you meant to do above.
Also, I went with 7, not 3, because with 3 all of your sample data would have been returned, and I wanted to show you that the filter works.
The AS num instruction names the result of convert as num, not a variable named #num.
You could repeat the CONVERT:
SELECT experience,CONVERT(SUBSTRING_INDEX(experience,'-',1),UNSIGNED INTEGER)
FROM employee
WHERE CONVERT(SUBSTRING_INDEX(experience,'-',1),UNSIGNED INTEGER) >= 3;
Or use a partial (derived) table (only one CONVERT):
SELECT experience,num
FROM (select experience,
CONVERT(SUBSTRING_INDEX(experience,'-',1),UNSIGNED INTEGER) as num
FROM employee) as partialtable WHERE num>=3;
Much simpler. (Or at least much shorter.) This will work for the data as described, namely "number, -, other stuff".
SELECT experience,
0+experience AS 'FirstPart'
FROM employee
WHERE 0+experience >= 3
Why? 0+string is parsed as "convert the string to a number, then add it to 0". Converting a string will extract the digits up to the first non-digit, then convert that as numeric.
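A quick way to check that behaviour in a client session:
SELECT 0+'4-8';    -- 4
SELECT 0+'10-4';   -- 10
SELECT 0+'abc';    -- 0 (no leading digits)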
I have one table in my database. The fields of the table are described below.
ID | NAME | QUALIFICATION
1 | ABC | Phd
2 | XYZ | MBA
3 | ADS | MBA
Now my problem is related to updating a QUALIFICATION record. When I update QUALIFICATION, the new value should be appended to the existing value.
For example, say I am going to update the record with id=1. If I update QUALIFICATION with MCA, then it should add MCA to the existing value Phd, separated with a comma. The output will look like below.
ID | NAME | QUALIFICATION
1 | ABC | Phd,MCA
2 | XYZ | MBA
3 | ADS | MBA
When "QUALIFICATION" is null then the update should not be add comma before MCA.
That's a bad database design; never store data as a comma-separated string, as it will make things messy in the future.
You should think of normalizing the tables. For the student, your table should look like:
- id primary key auto_incremented
- name
- other columns related to student
Then another table, student_qualification:
- id primary key auto_incremented
- id_student ( id from student table)
- qualification
So for each student you can add as many qualifications as needed to this table, and you can easily add/edit/delete the data.
You can later retrieve the data easily by simply joining the tables, as sketched below.
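A minimal sketch of that layout and the join; the table and column names here are just placeholders following the outline above:
create table student
( id int auto_increment primary key,
  name varchar(50) not null
  -- other columns related to student
);

create table student_qualification
( id int auto_increment primary key,
  id_student int not null,              -- id from the student table
  qualification varchar(50) not null,
  foreign key (id_student) references student(id)
);

-- all qualifications for every student, one row each
select s.id, s.name, q.qualification
from student s
join student_qualification q on q.id_student = s.id;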
First you have to select the existing value of the qualification column that you want to update, using:
select qualification from tb_name where id = 1
The above query gives you the qualification column value; suppose it is in
$qualification
Now update that row using:
"update tb_name set qualification = '".$qualification.",your new value' where id = 1"
Maybe you can try this:
update employee set qualification = qualification || ',MCA' where id = 1
The above will work in Oracle.
EDIT:
Then you can use a CASE statement with it:
update employee set qualification = case when qualification is null then
'MCA' else qualification || ',MCA' end where id = 1
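Note that in stock MySQL, || is the logical OR operator unless sql_mode includes PIPES_AS_CONCAT, so a MySQL translation of the same idea would use CONCAT; this is just a sketch, keeping the employee table name used above:
update employee
set qualification = case when qualification is null
                         then 'MCA'
                         else concat(qualification, ',MCA')
                    end
where id = 1;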
You can test for NULL in the SET clause, and use concatenation to format the string appropriately.
update student
set qualification = concat(if(qualification is not null,
                              concat(qualification, ','),
                              ''),
                           'MCA')
where id = 1;
This expression also handles the case where qualification is NULL correctly (no leading comma is added).
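A shorter alternative sketch uses CONCAT_WS, which skips NULL arguments, so the comma only appears when an existing value is present (same student table name as above):
update student
set qualification = concat_ws(',', qualification, 'MCA')
where id = 1;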
I agree with @Abhik that this is a bad design for this specific data, and normalization is the better approach for the use case you provide. However, there are other use cases where doing this sort of update would be perfectly valid, so the question is worthy of a proper answer.