mysql: split varchar value and insert parts - mysql

I have denormalised records in my table:
ID, CODES
1 |1|2|3|4
2 |5|6|7|8
In the second column there are int values, saved in a varchar field and separated by the | symbol.
I want to convert them to a normal many-to-many relational form, using a link table.
So I want to create a table like this:
ID CODE
1 1
1 2
1 3
1 4
....
2 8
I understand that I can iterate through the records in a MySQL stored function, split the string and insert the values. But I am interested: is it possible to convert the data this way without a stored procedure/function, using only a query (create table ... select ...)?
Thanks.
UPD: There is a variable number of codes in different rows. Each row has from 1 to 15 codes.

Here's how it works, including test data and so on.
But consider that this is just a fun answer. The way to go is clearly a stored procedure or a function or the like.
drop table testvar;
create table testvar (id int, codes varchar(20));
insert into testvar values (1, '|1|2|3|4'), (2, '|5|6|7|8');
drop table if exists inserttest;
create table inserttest (id int, code int);
select @sql := left(
    concat('insert into inserttest values ',
        group_concat('(', id, ',', replace(right(codes, length(codes) - 1), '|', concat('),(', id, ',')), '),' separator '')),
    length(concat('insert into inserttest values ',
        group_concat('(', id, ',', replace(right(codes, length(codes) - 1), '|', concat('),(', id, ',')), '),' separator ''))) - 1)
from testvar;
prepare stmt1 from @sql;
execute stmt1;
select * from inserttest;
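For completeness, here is a hedged sketch of the stored-procedure route recommended above, a cursor plus a SUBSTRING_INDEX loop (it assumes the same testvar and inserttest tables as the example, including the leading | in each codes value):
drop procedure if exists split_codes;
delimiter $$
create procedure split_codes()
begin
  declare done int default 0;
  declare v_id int;
  declare v_codes varchar(20);
  declare cur cursor for select id, codes from testvar;
  declare continue handler for not found set done = 1;
  open cur;
  read_loop: loop
    fetch cur into v_id, v_codes;
    if done then leave read_loop; end if;
    -- strip the leading '|', then peel off one code at a time
    set v_codes = trim(leading '|' from v_codes);
    while length(v_codes) > 0 do
      insert into inserttest (id, code)
      values (v_id, cast(substring_index(v_codes, '|', 1) as unsigned));
      if locate('|', v_codes) > 0 then
        set v_codes = substring(v_codes, locate('|', v_codes) + 1);
      else
        set v_codes = '';
      end if;
    end while;
  end loop;
  close cur;
end$$
delimiter ;
call split_codes();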

The Oracle way is:
insert into newtestvar
select t.id, to_number(substr(t.codes, p1 + 1, p2))
from (
select testvar.id, testvar.codes, s.num,
instr(testvar.codes, '|',1,s.num) p1,
instr(testvar.codes||'|', '|',1,s.num + 1)- instr(testvar.codes, '|',1,s.num) - 1 p2
from testvar, (select level num from dual connect by level <= 15) s
where s.num <= (length(testvar.codes)-length(replace(testvar.codes, '|')))
) t;
I hope you can adapt it for MySQL.
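One possible MySQL adaptation of the same idea (a sketch only; it assumes the testvar/inserttest tables from the answer above, codes starting with |, and at most 15 codes per row as stated in the question). A derived numbers table stands in for Oracle's CONNECT BY LEVEL, and SUBSTRING_INDEX extracts the n-th code:
insert into inserttest (id, code)
select t.id,
       cast(substring_index(substring_index(t.codes, '|', s.num + 1), '|', -1) as unsigned)
from testvar t
join (select 1 num union all select 2 union all select 3 union all select 4 union all select 5
      union all select 6 union all select 7 union all select 8 union all select 9 union all select 10
      union all select 11 union all select 12 union all select 13 union all select 14 union all select 15) s
  on s.num <= length(t.codes) - length(replace(t.codes, '|', ''));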

Related

Pivoting on temptable data using sql server [duplicate]

I have read the stuff on MS pivot tables and I am still having problems getting this correct.
I have a temp table that is being created. Let's say column 1 is a store number, column 2 is a week number, and column 3 is a total of some type. The week numbers are dynamic, the store numbers are static.
Store Week xCount
------- ---- ------
102 1 96
101 1 138
105 1 37
109 1 59
101 2 282
102 2 212
105 2 78
109 2 97
105 3 60
102 3 123
101 3 220
109 3 87
I would like it to come out as a pivot table, like this:
Store 1 2 3 4 5 6....
-----
101 138 282 220
102 96 212 123
105 37
109
Store numbers down the side and weeks across the top.
If you are using SQL Server 2005+, then you can use the PIVOT function to transform the data from rows into columns.
It sounds like you will need to use dynamic SQL if the weeks are unknown, but it is easier to see the correct code using a hard-coded version initially.
First up, here are some quick table definitions and data for use:
CREATE TABLE yt
(
[Store] int,
[Week] int,
[xCount] int
);
INSERT INTO yt
(
[Store],
[Week], [xCount]
)
VALUES
(102, 1, 96),
(101, 1, 138),
(105, 1, 37),
(109, 1, 59),
(101, 2, 282),
(102, 2, 212),
(105, 2, 78),
(109, 2, 97),
(105, 3, 60),
(102, 3, 123),
(101, 3, 220),
(109, 3, 87);
If your values are known, then you will hard-code the query:
select *
from
(
select store, week, xCount
from yt
) src
pivot
(
sum(xcount)
for week in ([1], [2], [3])
) piv;
See SQL Demo
Then if you need to generate the week number dynamically, your code will be:
DECLARE @cols AS NVARCHAR(MAX),
@query AS NVARCHAR(MAX)
select @cols = STUFF((SELECT ',' + QUOTENAME(Week)
from yt
group by Week
order by Week
FOR XML PATH(''), TYPE
).value('.', 'NVARCHAR(MAX)')
,1,1,'')
set @query = 'SELECT store,' + @cols + ' from
(
select store, week, xCount
from yt
) x
pivot
(
sum(xCount)
for week in (' + @cols + ')
) p '
execute(@query);
See SQL Demo.
The dynamic version generates the list of week numbers that should be converted to columns. Both give the same result:
| STORE | 1 | 2 | 3 |
---------------------------
| 101 | 138 | 282 | 220 |
| 102 | 96 | 212 | 123 |
| 105 | 37 | 78 | 60 |
| 109 | 59 | 97 | 87 |
This is for a dynamic number of weeks.
Full example here: SQL Dynamic Pivot
DECLARE @DynamicPivotQuery AS NVARCHAR(MAX)
DECLARE @ColumnName AS NVARCHAR(MAX)
--Get distinct values of the PIVOT Column
SELECT @ColumnName= ISNULL(@ColumnName + ',','') + QUOTENAME(Week)
FROM (SELECT DISTINCT Week FROM #StoreSales) AS Weeks
--Prepare the PIVOT query using the dynamic column list
SET @DynamicPivotQuery =
N'SELECT Store, ' + @ColumnName + '
FROM #StoreSales
PIVOT(SUM(xCount)
FOR Week IN (' + @ColumnName + ')) AS PVTTable'
--Execute the Dynamic Pivot Query
EXEC sp_executesql @DynamicPivotQuery
I've achieved the same thing before by using subqueries. So if your original table was called StoreCountsByWeek, and you had a separate table that listed the Store IDs, then it would look like this:
SELECT StoreID,
Week1=(SELECT ISNULL(SUM(xCount),0) FROM StoreCountsByWeek WHERE StoreCountsByWeek.StoreID=Store.StoreID AND Week=1),
Week2=(SELECT ISNULL(SUM(xCount),0) FROM StoreCountsByWeek WHERE StoreCountsByWeek.StoreID=Store.StoreID AND Week=2),
Week3=(SELECT ISNULL(SUM(xCount),0) FROM StoreCountsByWeek WHERE StoreCountsByWeek.StoreID=Store.StoreID AND Week=3)
FROM Store
ORDER BY StoreID
One advantage to this method is that the syntax is more clear and it makes it easier to join to other tables to pull other fields into the results too.
My anecdotal results are that running this query over a couple of thousand rows completed in less than one second, and I actually had 7 subqueries. But as noted in the comments, it is more computationally expensive to do it this way, so be careful about using this method if you expect it to run on large amounts of data.
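For example, the subquery form joins naturally to other tables and extra columns (a hedged sketch; the Region table and the StoreName/RegionName columns are hypothetical here):
SELECT s.StoreID, s.StoreName, r.RegionName,
       Week1 = (SELECT ISNULL(SUM(xCount), 0) FROM StoreCountsByWeek w
                WHERE w.StoreID = s.StoreID AND w.Week = 1),
       Week2 = (SELECT ISNULL(SUM(xCount), 0) FROM StoreCountsByWeek w
                WHERE w.StoreID = s.StoreID AND w.Week = 2)
FROM Store s
JOIN Region r ON r.RegionID = s.RegionID
ORDER BY s.StoreID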
This is what you can do:
SELECT *
FROM yourTable
PIVOT (MAX(xCount)
FOR Week in ([1],[2],[3],[4],[5],[6],[7])) AS pvt
DEMO
I'm writing an SP that could be useful for this purpose. Basically this SP pivots any table and returns a new pivoted table, or returns just the set of data. This is the way to execute it:
Exec dbo.rs_pivot_table @schema=dbo,@table=table_name,@column=column_to_pivot,@agg='sum([column_to_agg]),avg([another_column_to_agg]),',
@sel_cols='column_to_select1,column_to_select2,column_to_select1',@new_table=returned_table_pivoted;
Please note that in the parameter @agg the column names must be wrapped in '[' and ']' and the parameter must end with a comma ','
SP
Create Procedure [dbo].[rs_pivot_table]
@schema sysname=dbo,
@table sysname,
@column sysname,
@agg nvarchar(max),
@sel_cols varchar(max),
@new_table sysname,
@add_to_col_name sysname=null
As
--Exec dbo.rs_pivot_table dbo,##TEMPORAL1,tip_liq,'sum([val_liq]),sum([can_liq]),','cod_emp,cod_con,tip_liq',##TEMPORAL1PVT,'hola';
Begin
Declare @query varchar(max)='';
Declare @aggDet varchar(100);
Declare @opp_agg varchar(5);
Declare @col_agg varchar(100);
Declare @pivot_col sysname;
Declare @query_col_pvt varchar(max)='';
Declare @full_query_pivot varchar(max)='';
Declare @ind_tmpTbl int; --Temp table indicator: 1 = global temporary table, 0 = physical table
Create Table #pvt_column(
pivot_col varchar(100)
);
Declare @column_agg table(
opp_agg varchar(5),
col_agg varchar(100)
);
IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(@table) AND type in (N'U'))
Set @ind_tmpTbl=0;
ELSE IF OBJECT_ID('tempdb..'+ltrim(rtrim(@table))) IS NOT NULL
Set @ind_tmpTbl=1;
IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(@new_table) AND type in (N'U')) OR
OBJECT_ID('tempdb..'+ltrim(rtrim(@new_table))) IS NOT NULL
Begin
Set @query='DROP TABLE '+@new_table+'';
Exec (@query);
End;
Select @query='Select distinct '+@column+' From '+(case when @ind_tmpTbl=1 then 'tempdb.' else '' end)+@schema+'.'+@table+' where '+@column+' is not null;';
Print @query;
Insert into #pvt_column(pivot_col)
Exec (@query)
While charindex(',',@agg,1)>0
Begin
Select @aggDet=Substring(@agg,1,charindex(',',@agg,1)-1);
Insert Into @column_agg(opp_agg,col_agg)
Values(substring(@aggDet,1,charindex('(',@aggDet,1)-1),ltrim(rtrim(replace(substring(@aggDet,charindex('[',@aggDet,1),charindex(']',@aggDet,1)-4),')',''))));
Set @agg=Substring(@agg,charindex(',',@agg,1)+1,len(@agg))
End
Declare cur_agg cursor read_only forward_only local static for
Select
opp_agg,col_agg
from @column_agg;
Open cur_agg;
Fetch Next From cur_agg
Into @opp_agg,@col_agg;
While @@fetch_status=0
Begin
Declare cur_col cursor read_only forward_only local static for
Select
pivot_col
From #pvt_column;
Open cur_col;
Fetch Next From cur_col
Into @pivot_col;
While @@fetch_status=0
Begin
Select @query_col_pvt='isnull('+@opp_agg+'(case when '+@column+'='+quotename(@pivot_col,char(39))+' then '+@col_agg+
' else null end),0) as ['+lower(Replace(Replace(@opp_agg+'_'+convert(varchar(100),@pivot_col)+'_'+replace(replace(@col_agg,'[',''),']',''),' ',''),'&',''))+
(case when @add_to_col_name is null then space(0) else '_'+isnull(ltrim(rtrim(@add_to_col_name)),'') end)+']'
print @query_col_pvt
Select @full_query_pivot=@full_query_pivot+@query_col_pvt+', '
--print @full_query_pivot
Fetch Next From cur_col
Into @pivot_col;
End
Close cur_col;
Deallocate cur_col;
Fetch Next From cur_agg
Into @opp_agg,@col_agg;
End
Close cur_agg;
Deallocate cur_agg;
Select @full_query_pivot=substring(@full_query_pivot,1,len(@full_query_pivot)-1);
Select @query='Select '+@sel_cols+','+@full_query_pivot+' into '+@new_table+' From '+(case when @ind_tmpTbl=1 then 'tempdb.' else '' end)+
@schema+'.'+@table+' Group by '+@sel_cols+';';
print @query;
Exec (@query);
End;
GO
This is an example of execution:
Exec dbo.rs_pivot_table @schema=dbo,@table=##TEMPORAL1,@column=tip_liq,@agg='sum([val_liq]),avg([can_liq]),',@sel_cols='cod_emp,cod_con,tip_liq',@new_table=##TEMPORAL1PVT;
then Select * From ##TEMPORAL1PVT would return the pivoted result set.
Here is a revision of @Tayrn's answer above that might help you understand pivoting a little more easily:
This may not be the best way to do this, but this is what helped me wrap my head around how to pivot tables.
ID = the rows you want to pivot on
MY_KEY = the column you are selecting from your original table that contains the column names you want to pivot.
VAL = the value you want returned under each column.
MAX(VAL) => can be replaced with other aggregate functions: SUM(VAL), MIN(VAL), etc.
DECLARE @cols AS NVARCHAR(MAX),
@query AS NVARCHAR(MAX)
select @cols = STUFF((SELECT ',' + QUOTENAME(MY_KEY)
from yt
group by MY_KEY
order by MY_KEY ASC
FOR XML PATH(''), TYPE
).value('.', 'NVARCHAR(MAX)')
,1,1,'')
set @query = 'SELECT ID,' + @cols + ' from
(
select ID, MY_KEY, VAL
from yt
) x
pivot
(
sum(VAL)
for MY_KEY in (' + @cols + ')
) p '
execute(@query);
select * from (select name, ID from Employee) Visits
pivot(sum(ID) for name
in ([Emp1],
[Emp2],
[Emp3]
) ) as pivottable;
Just to give you some idea of how other databases solve this problem: DolphinDB also has built-in support for pivoting, and the SQL looks much more intuitive and neat. It is as simple as specifying the key column (Store), the pivoting column (Week), and the calculated metric (sum(xCount)).
//prepare a 10-million-row table
n=10000000
t=table(rand(100, n) + 1 as Store, rand(54, n) + 1 as Week, rand(100, n) + 1 as xCount)
//use pivot clause to generate a pivoted table pivot_t
pivot_t = select sum(xCount) from t pivot by Store, Week
DolphinDB is a columnar, high-performance database. The calculation in the demo takes as little as 546 ms on a Dell XPS laptop (i7 CPU). For more details, please refer to the online DolphinDB manual: https://www.dolphindb.com/help/index.html?pivotby.html
Pivot is a SQL operator which is used to turn unique values from one column into multiple columns in the output. In other words, it transforms rows into columns (rotating the table). Let us consider this table.
If I want to report this data based on the types of product (Speaker, Glass, Headset) for each customer, then I use the Pivot operator.
Select CustomerName, Speaker, Glass, Headset
from TblCustomer
Pivot
(
Sum(Price) for Product in ([Speaker],[Glass],[Headset])
) as PivotTable

Calculation within a string field MySQL

I have been attempting to edit/add a value in a string column (process) within a table.
The current values in the string are as follows:
1:38,25:39,41:101
What I want to do is add 1000 to every value after "X:" so after the query the values should read:
1:1038,25:1039,41:1101
I have looked at CONCAT but that seems to only insert a value into a string within certain parameters. Any ideas?
You can use the CAST, CONCAT and SUBSTRING_INDEX functions to get the required output, e.g.:
SELECT
CONCAT(SUBSTRING_INDEX(value, ':', 1), ":", (CAST(SUBSTRING_INDEX(value, ':', -1) AS UNSIGNED) + 1000))
FROM test;
Here's the SQL Fiddle.
Use of variables can help you achieve what you want:
select @pre := SUBSTRING_INDEX(SUBSTRING_INDEX(`column_name`, ':', 1), '0', -1),
@post := SUBSTRING_INDEX(SUBSTRING_INDEX(`column_name`, ':', 2), '0', -1),
concat(@pre,":",@post+1000) as required_value from table_name;
References:
for use of variables:
https://dev.mysql.com/doc/refman/5.7/en/user-variables.html
for substring_index:
https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_substring-index
You should normalize the data and store it in a separate table. That said, to answer your question you could use these queries to achieve what you want:
CREATE TEMPORARY TABLE temp (val VARCHAR(50));
SET @str = CONCAT("INSERT INTO temp (val) VALUES ('",REPLACE((SELECT org FROM mytable LIMIT 1), ",", "'),('"),"');");
PREPARE st FROM @str;
EXECUTE st;
SELECT
GROUP_CONCAT(DISTINCT CONCAT(SUBSTRING_INDEX(val, ':', 1), ":", (CAST(SUBSTRING_INDEX(val, ':', -1) AS UNSIGNED) + 1000)))
FROM temp;
Just make sure to replace SELECT org FROM mytable LIMIT 1 in the above query with your query that returns the string 1:38,25:39,41:101 you need to edit. Be aware that this example only shows how to process the value in one string. If you need to process values of multiple rows, you need to adjust a bit more...
Check sqlfiddle: http://sqlfiddle.com/#!9/bf6da4/1/0
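If every row holds exactly three pairs, as in the sample value, one way to adjust is to split out each pair with nested SUBSTRING_INDEX and rebuild the string in a single UPDATE (a hedged sketch only; the mytable/process names are assumptions):
UPDATE mytable
SET process = CONCAT_WS(',',
  -- first pair, e.g. '1:38' -> '1:1038'
  CONCAT(SUBSTRING_INDEX(SUBSTRING_INDEX(process, ',', 1), ':', 1), ':',
         CAST(SUBSTRING_INDEX(SUBSTRING_INDEX(process, ',', 1), ':', -1) AS UNSIGNED) + 1000),
  -- second pair, e.g. '25:39' -> '25:1039'
  CONCAT(SUBSTRING_INDEX(SUBSTRING_INDEX(SUBSTRING_INDEX(process, ',', 2), ',', -1), ':', 1), ':',
         CAST(SUBSTRING_INDEX(SUBSTRING_INDEX(SUBSTRING_INDEX(process, ',', 2), ',', -1), ':', -1) AS UNSIGNED) + 1000),
  -- third pair, e.g. '41:101' -> '41:1101'
  CONCAT(SUBSTRING_INDEX(SUBSTRING_INDEX(process, ',', -1), ':', 1), ':',
         CAST(SUBSTRING_INDEX(SUBSTRING_INDEX(process, ',', -1), ':', -1) AS UNSIGNED) + 1000));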

How to copy data from one table to another "EXCEPT" one field

How do I INSERT into another table, except for a specific field?
e.g.
TABLE A
ID(auto_inc) CODE NAME
1 001 TEST1
2 002 TEST2
I want to insert CODE and NAME into another table, in this case TABLE B, but not ID because it is auto-increment.
Note: I don't want to use "INSERT INTO TABLE B SELECT CODE, NAME FROM TABLE A", because I have an existing table with around 50 fields and I don't want to write it one by one
Thanks for any suggestions and replies.
This can't be done without specifying the columns (excluding the primary key).
This question might help you. Copy data into another table
You can get all the columns using information_schema.columns:
select group_concat(column_name separator ', ')
from information_schema.columns c
where table_name = 'tableA' and
column_name <> 'id';
This gives you the list. Then paste the list into your code. You can also use a prepared statement for this, but a prepared statement might be overkill.
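If you do want the prepared-statement route, a hedged sketch (the tableA/tableB names and the id column are placeholders) could build and run the INSERT ... SELECT in one go:
SELECT CONCAT(
         'INSERT INTO tableB (', GROUP_CONCAT(column_name ORDER BY ordinal_position SEPARATOR ', '), ') ',
         'SELECT ', GROUP_CONCAT(column_name ORDER BY ordinal_position SEPARATOR ', '), ' FROM tableA'
       )
INTO @copy_sql
FROM information_schema.columns
WHERE table_schema = DATABASE()
  AND table_name = 'tableA'
  AND column_name <> 'id';
PREPARE stmt FROM @copy_sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;
With around 50 columns you may also need to raise group_concat_max_len so the generated statement is not truncated.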
Is this a one-time thing?
If yes, do the insert into tableA (select * from tableB),
then alter the table to drop the column that you don't need.
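Read with the question's table names (copying from TABLE A into TABLE B), a minimal sketch of that idea, with placeholder names:
INSERT INTO tableB SELECT * FROM tableA;
ALTER TABLE tableB DROP COLUMN id;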
I tried to copy from one table to another one with one extra field.
The source table is TERRITORY_t.
The principle is to create a temp table identical to the source table, adjust the columns of the temp table, and copy the content of the temp table to the destination table.
This is what I did:
create a temp table called TERRITORY_temp
(generate the SQL by running an export)
CREATE TABLE IF NOT EXISTS TERRITORY_temp (
Territory_Id int(11) NOT NULL,
Territory_Name varchar(50) COLLATE utf8_unicode_ci DEFAULT NULL,
PRIMARY KEY (Territory_Id)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
copy over with
INSERT INTO TERRITORY_temp (Territory_Id, Territory_Name) VALUES
(1, 'SouthEast'),
(2, 'SouthWest'),
(3, 'NorthEast'),
(4, 'NorthWest'),
(5, 'Central');
or
INSERT INTO TERRITORY_temp
SELECT * from TERRITORY_t
add the extra field(s) to match with the new table
copy from the temp table to the destination table
INSERT INTO TERRITORY_new
SELECT * from TERRITORY_temp
Please provide feedback.
Step 1. Create stored procedure
CREATE PROCEDURE CopyDataTable
@SourceTable varchar(255),
@TargetTable varchar(255),
@SourceFilter nvarchar(max) = ''
AS
BEGIN
SET NOCOUNT ON;
DECLARE @SourceColumns VARCHAR(MAX)=''
DECLARE @TargetColumns VARCHAR(MAX)=''
DECLARE @Query VARCHAR(MAX)=''
SELECT
@SourceColumns = ISNULL(@SourceColumns +',', '') + T.COLUMN_NAME
FROM
(
select name as COLUMN_NAME from sys.all_columns
where object_id = (select object_id from sys.tables where name = @SourceTable)
and is_identity = 0
)T
SELECT
@TargetColumns = ISNULL(@TargetColumns +',', '') + T.COLUMN_NAME
FROM
(
select name as COLUMN_NAME from sys.all_columns
where object_id = (select object_id from sys.tables where name = @TargetTable)
and is_identity = 0
)T
set @Query = 'INSERT INTO ' + @TargetTable + ' (' + SUBSTRING(@TargetColumns,2 , 9999) + ') SELECT ' + SUBSTRING(@SourceColumns,2 , 9999) + ' FROM ' + @SourceTable + ' ' + @SourceFilter;
PRINT @Query
--EXEC(@Query)
END
GO
Step 2. Run stored procedure
use YourDatabaseName
exec dbo.CopyDataTable 'SourceTable','TargetTable'
Explanations
a) dbo.CopyDataTable will transfer all data from SourceTable to TargetTable, except fields with Identity
b) You can apply a filter when calling the stored procedure, in order to transfer only rows matching your criteria
exec dbo.CopyDataTable 'SourceTable','TargetTable', 'WHERE FieldName=3'
exec dbo.CopyDataTable 'SourceTable','TargetTable', 'WHERE FieldName=''TextValue'''
c) Remove -- from --EXEC(@Query) when finished

Use sql result to specify table to join

Is there any way I can use a result to specify the table to join?
I'd like to do something like:
SELECT id, some_number, ... FROM sometable NATURAL JOIN someothertable_$some_number;
I know that there's nothing like this in relational algebra, so I probably won't succeed; I just wanted to ask to be sure.
I don't want to use any SQL scripts.
Runnable Example Here: http://sqlfiddle.com/#!2/5e92c/36
Code to setup tables for this example:
create table if not exists someTable
(
someTableId bigint not null auto_increment
, tableId int not null
, someOtherTableId bigint not null
, primary key (someTableId)
, index (tableId, someOtherTableId)
);
create table if not exists someOtherTable_$1
(
someOtherTableId bigint not null auto_increment
, data varchar(128) character set utf8
, primary key (someOtherTableId)
);
create table if not exists someOtherTable_$2
(
someOtherTableId bigint not null auto_increment
, data varchar(128) character set utf8
, primary key (someOtherTableId)
);
insert sometable (tableId, someOtherTableId) values (1, 1);
insert sometable (tableId, someOtherTableId) values (1, 2);
insert sometable (tableId, someOtherTableId) values (2, 2);
insert sometable (tableId, someOtherTableId) values (2, 3);
insert someothertable_$1(data) values ('table 1 row 1');
insert someothertable_$1(data) values ('table 1 row 2');
insert someothertable_$1(data) values ('table 1 row 3');
insert someothertable_$2(data) values ('table 2 row 1');
insert someothertable_$2(data) values ('table 2 row 2');
insert someothertable_$2(data) values ('table 2 row 3');
STATIC SOLUTION
Here's a solution if your tables are fixed (e.g. in the example you only have someOtherTable 1 and 2 / you don't need the code to change automatically as new tables are added):
select st.someTableId
, coalesce(sot1.data, sot2.data)
from someTable st
left outer join someOtherTable_$1 sot1
on st.tableId = 1
and st.someOtherTableId = sot1.someOtherTableId
left outer join someOtherTable_$2 sot2
on st.tableId = 2
and st.someOtherTableId = sot2.someOtherTableId;
DYNAMIC SOLUTION
If the number of tables may change at runtime you'd need to write dynamic SQL. Beware: with every successive table you're going to take a performance hit. I wouldn't recommend this for a production system; but it's a fun challenge. If you can describe your tool set & what you're hoping to achieve we may be able to give you a few pointers on a more suitable way forward.
select group_concat(distinct ' sot' , cast(tableId as char) , '.data ')
into @coalesceCols
from someTable;
select group_concat(distinct ' left outer join someOtherTable_$', cast(tableId as char), ' sot', cast(tableId as char), ' on st.tableId = ', cast(tableId as char), ' and st.someOtherTableId = sot', cast(tableId as char), '.someOtherTableId ' separator '')
into @tableJoins
from someTable;
set @sql = concat('select someTableId, coalesce(', @coalesceCols ,') from someTable st', @tableJoins);
prepare stmt from @sql;
execute stmt;

Split value from one field to two

I've got a table field membername which contains both the last name and the first name of users. Is it possible to split those into 2 fields memberfirst, memberlast?
All the records have this format "Firstname Lastname" (without quotes and a space in between).
Unfortunately MySQL does not feature a split string function. However you can create a user defined function for this, such as the one described in the following article:
MySQL Split String Function by Federico Cargnelutti
With that function:
DELIMITER $$
CREATE FUNCTION SPLIT_STR(
x VARCHAR(255),
delim VARCHAR(12),
pos INT
)
RETURNS VARCHAR(255) DETERMINISTIC
BEGIN
RETURN REPLACE(SUBSTRING(SUBSTRING_INDEX(x, delim, pos),
LENGTH(SUBSTRING_INDEX(x, delim, pos -1)) + 1),
delim, '');
END$$
DELIMITER ;
you would be able to build your query as follows:
SELECT SPLIT_STR(membername, ' ', 1) as memberfirst,
SPLIT_STR(membername, ' ', 2) as memberlast
FROM users;
If you prefer not to use a user defined function and you do not mind the query to be a bit more verbose, you can also do the following:
SELECT SUBSTRING_INDEX(SUBSTRING_INDEX(membername, ' ', 1), ' ', -1) as memberfirst,
SUBSTRING_INDEX(SUBSTRING_INDEX(membername, ' ', 2), ' ', -1) as memberlast
FROM users;
SELECT variant (not creating a user defined function):
SELECT IF(
LOCATE(' ', `membername`) > 0,
SUBSTRING(`membername`, 1, LOCATE(' ', `membername`) - 1),
`membername`
) AS memberfirst,
IF(
LOCATE(' ', `membername`) > 0,
SUBSTRING(`membername`, LOCATE(' ', `membername`) + 1),
NULL
) AS memberlast
FROM `user`;
This approach also takes care of:
membername values without a space: it will add the whole string to memberfirst and sets memberlast to NULL.
membername values that have multiple spaces: it will add everything before the first space to memberfirst and the remainder (including additional spaces) to memberlast.
The UPDATE version would be:
UPDATE `user` SET
`memberfirst` = IF(
LOCATE(' ', `membername`) > 0,
SUBSTRING(`membername`, 1, LOCATE(' ', `membername`) - 1),
`membername`
),
`memberlast` = IF(
LOCATE(' ', `membername`) > 0,
SUBSTRING(`membername`, LOCATE(' ', `membername`) + 1),
NULL
);
It seems that the existing responses are over-complicated or not a strict answer to the particular question.
I think, the simple answer is the following query:
SELECT
SUBSTRING_INDEX(`membername`, ' ', 1) AS `memberfirst`,
SUBSTRING_INDEX(`membername`, ' ', -1) AS `memberlast`
;
I think it is not necessary to deal with more-than-two-word names in this particular situation. If you want to do it properly, splitting can be very hard or even impossible in some cases:
Johann Sebastian Bach
Johann Wolfgang von Goethe
Edgar Allan Poe
Jakob Ludwig Felix Mendelssohn-Bartholdy
Petőfi Sándor
Virág Vendelné Farkas Margit
黒澤 明
In a properly designed database, human names should be stored both in parts and in whole. This is not always possible, of course.
If your plan is to do this as part of a query, please don't do that (a). Seriously, it's a performance killer. There may be situations where you don't care about performance (such as one-off migration jobs to split the fields allowing better performance in future) but, if you're doing this regularly for anything other than a mickey-mouse database, you're wasting resources.
If you ever find yourself having to process only part of a column in some way, your DB design is flawed. It may well work okay on a home address book or recipe application or any of myriad other small databases but it will not be scalable to "real" systems.
Store the components of the name in separate columns. It's almost invariably a lot faster to join columns together with a simple concatenation (when you need the full name) than it is to split them apart with a character search.
If, for some reason, you cannot split the field, at least put in the extra columns and use an insert/update trigger to populate them (a minimal trigger sketch follows this answer). While not 3NF, this will guarantee that the data is still consistent and will massively speed up your queries. You could also ensure that the extra columns are lower-cased (and indexed if you're searching on them) at the same time so as to not have to fiddle around with case issues.
And, if you cannot even add the columns and triggers, be aware (and make your client aware, if it's for a client) that it is not scalable.
(a) Of course, if your intent is to use this query to fix the schema so that the names are placed into separate columns in the table rather than the query, I'd consider that to be a valid use. But I reiterate, doing it in the query is not really a good idea.
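Here is a minimal sketch of the insert trigger idea mentioned above, assuming the question's user table and membername/memberfirst/memberlast columns (a matching BEFORE UPDATE trigger would be needed to keep the columns in sync on updates):
DELIMITER $$
CREATE TRIGGER user_split_name BEFORE INSERT ON `user`
FOR EACH ROW
BEGIN
  -- everything before the first space goes to memberfirst, the rest to memberlast
  SET NEW.memberfirst = SUBSTRING_INDEX(NEW.membername, ' ', 1);
  SET NEW.memberlast  = IF(LOCATE(' ', NEW.membername) > 0,
                           SUBSTRING(NEW.membername, LOCATE(' ', NEW.membername) + 1),
                           NULL);
END$$
DELIMITER ;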
use this
SELECT SUBSTRING_INDEX(SUBSTRING_INDEX( `membername` , ' ', 2 ),' ',1) AS b,
SUBSTRING_INDEX(SUBSTRING_INDEX( `membername` , ' ', -1 ),' ',2) AS c FROM `users` WHERE `userid`='1'
In MySQL this option works:
SELECT Substring(nameandsurname, 1, Locate(' ', nameandsurname) - 1) AS
firstname,
Substring(nameandsurname, Locate(' ', nameandsurname) + 1) AS lastname
FROM emp
Not exactly answering the question, but faced with the same problem I ended up doing this:
UPDATE people_exit SET last_name = SUBSTRING_INDEX(fullname,' ',-1)
UPDATE people_exit SET middle_name = TRIM(SUBSTRING_INDEX(SUBSTRING_INDEX(fullname,last_name,1),' ',-2))
UPDATE people_exit SET middle_name = '' WHERE CHAR_LENGTH(middle_name)>3
UPDATE people_exit SET first_name = SUBSTRING_INDEX(fullname,concat(middle_name,' ',last_name),1)
UPDATE people_exit SET first_name = middle_name WHERE first_name = ''
UPDATE people_exit SET middle_name = '' WHERE first_name = middle_name
The only case where you may want such a function is an UPDATE query which will alter your table to store Firstname and Lastname in separate fields.
Database design must follow certain rules, and database normalization is among the most important ones.
I had a column where the first and last name were both in one column, separated by a comma. The code below worked. There is NO error checking/correction, just a dumb split. I used phpMyAdmin to execute the SQL statement.
UPDATE tblAuthorList SET AuthorFirst = SUBSTRING_INDEX(AuthorLast,',',-1) , AuthorLast = SUBSTRING_INDEX(AuthorLast,',',1);
13.2.10 UPDATE Syntax
This takes smhg's approach from here and curt's from Last index of a given substring in MySQL and combines them. This is for MySQL; all I needed was a decent split of name into first_name and last_name, with the last name a single word and the first name everything before that word, where the name could be null, 1 word, 2 words, or more than 2 words. I.e.: NULL; Mary; Mary Smith; Mary A. Smith; Mary Sue Ellen Smith.
So if name is one word or null, last_name is null. If name is > 1 word, last_name is last word, and first_name all words before last word.
Note that I've already trimmed off stuff like Joe Smith Jr. ; Joe Smith Esq. and so on, manually, which was painful, of course, but it was small enough to do that, so you want to make sure to really look at the data in the name field before deciding which method to use.
Note that this also trims the outcome, so you don't end up with spaces in front of or after the names.
I'm just posting this for others who might google their way here looking for what I needed. This works, of course, test it with the select first.
It's a one time thing, so I don't care about efficiency.
SELECT TRIM(
IF(
LOCATE(' ', `name`) > 0,
LEFT(`name`, LENGTH(`name`) - LOCATE(' ', REVERSE(`name`))),
`name`
)
) AS first_name,
TRIM(
IF(
LOCATE(' ', `name`) > 0,
SUBSTRING_INDEX(`name`, ' ', -1) ,
NULL
)
) AS last_name
FROM `users`;
UPDATE `users` SET
`first_name` = TRIM(
IF(
LOCATE(' ', `name`) > 0,
LEFT(`name`, LENGTH(`name`) - LOCATE(' ', REVERSE(`name`))),
`name`
)
),
`last_name` = TRIM(
IF(
LOCATE(' ', `name`) > 0,
SUBSTRING_INDEX(`name`, ' ', -1) ,
NULL
)
);
This is the method I used to split first_name into first_name and last_name when the data arrived all in the first_name field. It will put only the last word in the last name field, so "john phillips sousa" will be "john phillips" as first name and "sousa" as last name. It also avoids overwriting any records that have been fixed already.
set last_name=trim(SUBSTRING_INDEX(first_name, ' ', -1)), first_name=trim(SUBSTRING(first_name,1,length(first_name) - length(SUBSTRING_INDEX(first_name, ' ', -1)))) where list_id='$List_ID' and length(first_name)>0 and length(trim(last_name))=0
UPDATE `salary_generation_tbl` SET
`modified_by` = IF(
LOCATE('$', `other_salary_string`) > 0,
SUBSTRING(`other_salary_string`, 1, LOCATE('$', `other_salary_string`) - 1),
`other_salary_string`
),
`other_salary` = IF(
LOCATE('$', `other_salary_string`) > 0,
SUBSTRING(`other_salary_string`, LOCATE('$', `other_salary_string`) + 1),
NULL
);
In case someone needs to run over a table and split a field:
First we use the function mentioned above:
CREATE DEFINER=`root`@`localhost` FUNCTION `fn_split_str`($str VARCHAR(800), $delimiter VARCHAR(12), $position INT) RETURNS varchar(800) CHARSET utf8
DETERMINISTIC
BEGIN
RETURN REPLACE(
SUBSTRING(
SUBSTRING_INDEX($str, $delimiter, $position),
LENGTH(
SUBSTRING_INDEX($str, $delimiter, $position -1)
) + 1
),
$delimiter, '');
END
Second, we run a while loop on the string until there aren't any results (I've added $id for the JOIN clause):
CREATE DEFINER=`root`@`localhost` FUNCTION `fn_split_str_to_rows`($id INT, $str VARCHAR(800), $delimiter VARCHAR(12), $empty_table BIT) RETURNS int(11)
BEGIN
DECLARE position INT;
DECLARE val VARCHAR(800);
SET position = 1;
IF $empty_table THEN
DROP TEMPORARY TABLE IF EXISTS tmp_rows;
END IF;
SET val = fn_split_str($str, $delimiter, position);
CREATE TEMPORARY TABLE IF NOT EXISTS tmp_rows AS (SELECT $id as id, val as val where 1 = 2);
WHILE (val IS NOT NULL and val != '') DO
INSERT INTO tmp_rows
SELECT $id, val;
SET position = position + 1;
SET val = fn_split_str($str, $delimiter, position);
END WHILE;
RETURN position - 1;
END
Finally, we can use it like this:
DROP TEMPORARY TABLE IF EXISTS tmp_rows;
SELECT SUM(fn_split_str_to_rows(ID, FieldToSplit, ',', 0))
FROM MyTable;
SELECT * FROM tmp_rows;
You can use the id to join to other tables.
In case you are only splitting one value, you can use it like this:
SELECT fn_split_str_to_rows(null, 'AAA,BBB,CCC,DDD,EEE,FFF,GGG', ',', 1);
SELECT * FROM tmp_rows;
We don't need to empty the temporary table, the function will take care of that.
mysql 5.4 provides a native split function:
SPLIT_STR(<column>, '<delimiter>', <index>)