MySQL TRIM to clean up table littered with spaces

I have a table with about 14,000 rows. One particular column has a lot of blank spaces in the data.
I am a MySQL newbie and can't get the syntax right to trim the whitespace.
Here is my best guess (I know the WHERE clause looks funny):
UPDATE MyTable
SET myColumn=TRIM(myColumn)
WHERE ID > 0
What is the correct syntax?

UPDATE `table` SET `col_name` = TRIM( `col_name` )
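If you want to sanity-check the result before touching the data, you can preview the trimmed values first; a minimal sketch using the MyTable/myColumn names from the question:
SELECT myColumn, TRIM(myColumn) AS trimmed
FROM MyTable
WHERE myColumn <> TRIM(myColumn);
Note that TRIM() only strips leading and trailing spaces, not spaces inside the value.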

Related

SQL: reading the column values?

0.0.0.1 saved in sql table column as 10001.
My database contains values like the one mentioned above. I want to sort based on these values, but if I sort them as-is the order comes out wrong, so I need to convert them to the format mentioned above (10001), i.e. remove the dots (.).
Thank you.
(I guess you're actually using Oracle as the database, judging by the tool - Oracle SQL Developer - you've also tagged, which means the MySQL tag should be removed.)
To me, it looks as if you want to a) remove the dots, and b) change the datatype to a number (so that it sorts correctly):
order by to_number(replace(col, '.', ''))
This presumes that the only characters allowed are digits and dots. If there's a value like 'A.0.0.1', it will of course fail, since you can't convert the letter A to a number.
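Put together, a complete query could look like the sketch below; the table and column names (t, nm) are borrowed from the examples further down, so substitute your own:
select nm
from t
order by to_number(replace(nm, '.', ''));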
Why are you storing the period '.' character to begin with? If it's not needed you can remove it.
If you want to remove all non-alphanumeric characters you could use a regular expression.
create table t (nm varchar2(20));
insert into t values ('.0.0.0.1');
insert into t values ('10.0.1.1.0');
commit;
select * from t;
NM
.0.0.0.1
10.0.1.1.0
update t
set nm = regexp_replace(
regexp_replace(nm, '[^A-Z0-9 ]', ''),
' {2,}', ' '
);
select * from t;
NM
0001
100110
You can use the TRANSLATE function with a SELECT, and the data in the table will not be changed.
See below
create table t (nm varchar2(20));
insert into t values ('.0.0.0.1');
insert into t values ('10.0.1.1.0');
commit;
SELECT translate(nm, '*.','*') from t
TRANSLATE(NM,'*.','*')
0001
100110
SELECT DISTINCT(column_name)
FROM Table_name
ORDER BY
TO_NUMBER (REGEXP_SUBSTR (column_name, '\d+',1,2)),
TO_NUMBER (REGEXP_SUBSTR (column_name,'\d+',1,3)) NULLS FIRST,
TO_NUMBER (REGEXP_SUBSTR (column_name,'\d+',1,4)) NULLS FIRST;

Remove leading '0' from a char and update the table

I have a table with ids that read '0050001B' and such, and then there is '50001B'. They are both the same, but because I'm using char, when I want to return something it sees them as two different ids. Is there any way to update the table so that I can get rid of the leading zeros?
I used the code below to output the ids without leading zeros, but I want to fix the table.
SELECT TRIM(LEADING '0' FROM id) FROM mytable;
But I want to fix the table so the zeros are gone in the table, not just in the output.
Any help is greatly appreciated!
You should use UPDATE:
update mytable
set id = TRIM(LEADING '0' FROM id);
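If you only want to touch rows that actually start with a zero, you could add a filter; a small sketch (the LIKE condition is just an assumption about how you want to limit the update):
update mytable
set id = TRIM(LEADING '0' FROM id)
where id like '0%';
Be aware that if both '0050001B' and '50001B' already exist, trimming makes them collide, which matters if id has to be unique.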

Avoid row was cut by GROUP_CONCAT error on insert without changing group_concat_max_len

I have an insert that uses a GROUP_CONCAT. In certain scenarios, the insert fails with Row XX was cut by GROUP_CONCAT. I understand why it fails but I'm looking for a way to have it not error out since the insert column is already smaller than the group_concat_max_len. I don't want to increase group_concat_max_len.
drop table if exists a;
create table a (x varchar(10), c int);
drop table if exists b;
create table b (x varchar(10));
insert into b values ('abcdefgh');
insert into b values ('ijklmnop');
-- contrived example to show that insert column size varchar(10) < 15
set session group_concat_max_len = 15;
insert into a select group_concat(x separator ', '), count(*) from b;
This insert produces the error Row 2 was cut by GROUP_CONCAT().
I'll try to provide a few clarifications -
The data in table b is unknown, so there is no way to simply set group_concat_max_len to a value greater than 18.
I do know the insert column size.
Why group_concat 4 GB of data when you want the first x characters?
When the concatenated string is longer than 10 chars, it should insert the first 10 characters.
Thanks.
Your example GROUP_CONCAT is probably cooking up this value:
abcdefgh, ijklmnop
That is 18 characters long, including the separator.
Can you try something like this?
set session group_concat_max_len = 4096;
insert into a
select left(group_concat(x separator ', '),10),
count(*)
from b;
This will trim the GROUP_CONCAT result for you.
You can temporarily set group_concat_max_len if you need to, then set it back.
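For example, a small sketch of saving and restoring the session value around the insert (4096 is just the value used above):
set @old_gcml := @@session.group_concat_max_len;
set session group_concat_max_len = 4096;
insert into a select left(group_concat(x separator ', '), 10), count(*) from b;
set session group_concat_max_len = @old_gcml;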
I don't know MySQL very well, nor whether there is a good reason to do this in the first place, but you could keep a running total of the length and limit the GROUP_CONCAT() to rows where that total stays under a certain maximum. You'll still need to set group_concat_max_len high enough to handle the longest single value (or use CASE logic to substring values so they stay under the maximum length you want).
Something like this:
SELECT SUBSTRING(GROUP_CONCAT(col1 separator ', '), 1, 10)
FROM (SELECT *
      FROM (SELECT col1,
                   @lentot := COALESCE(@lentot, 0) + CHAR_LENGTH(col1) AS lentot
            FROM Table1
           ) sub
      WHERE lentot < 25
     ) sub2
Demo: SQL Fiddle
I don't know if it's SQL Fiddle being quirky or if there's a problem with the logic, but sometimes when running I get no output. Not big on MySQL so could definitely be me missing something. It doesn't seem like it should require 2 subqueries but filtering didn't work as expected unless it was nested like that.
Actually, a better way is to use DISTINCT.
I had a situation where I needed to add two new fields to an existing stored procedure. The values for those fields came from a LEFT JOIN, and because they could contain NULLs, a single concatenated value ended up duplicated more than a hundred times in some cases.
Because a group with that new field value contained many NULL values, GROUP_CONCAT exceeded the maximum length (in my case 16384).
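Applied to the tables from the question, that would look roughly like this (just a sketch; DISTINCT removes duplicate values but does not by itself guarantee the result stays under the limit):
insert into a select group_concat(distinct x separator ', '), count(*) from b;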

Remove a single character from a varchar field SQL Server 2008

I have a table with several varchar columns that are almost identical to primary keys I have in another table, with the exception of a period (.). I've looked at the replace function in T-SQL but the first argument isn't an expression. How can I remove all occurrences of a particular character with SQL? It seems like the correct answer might be to replace with a zero length string. Is that close?
To whoever felt the question didn't exhibit research effort: it was mainly due to a misunderstanding of the documentation itself.
You can update the table directly using REPLACE on the column values:
UPDATE myTable
SET myColumn = REPLACE(myColumn, '.', '')
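If you only want to touch rows that actually contain a period, you could add a WHERE clause; a small sketch using the hypothetical myTable/myColumn names from the snippet above:
UPDATE myTable
SET myColumn = REPLACE(myColumn, '.', '')
WHERE myColumn LIKE '%.%'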
Do you want to remove all instances of the . from the string? If so, you were right about REPLACE:
DECLARE @Example TABLE
(
    Value VARCHAR(100)
)
INSERT @Example (Value)
VALUES ('Test.Value'), ('An.otherT.est')
SELECT
    REPLACE(Value, '.', '')
FROM
    @Example
-- Replace only the first '.'
SELECT
    STUFF(Value, CHARINDEX('.', Value, 0), 1, '')
FROM
    @Example
Edit: making the example a little more useful; since I played around with it anyway, I might as well post it. :)
update your_table
set some_column = replace(some_column, '.', '')

Prevent auto increment on MySQL duplicate insert

Using MySQL 5.1.49, I'm trying to implement a tagging system.
The problem I have is with a table with two columns: id (autoincrement) and tag (unique varchar), using InnoDB.
When using the query INSERT IGNORE INTO tablename SET tag="whatever", the auto-increment id value increases even if the insert was ignored.
Normally this wouldn't be a problem, but I expect a lot of attempted duplicate inserts into this particular table, which means the id value for the next new row will jump way too much.
For example, I'll end up with a table with, say, 3 rows but bad ids:
1 | test
8 | testtext
678 | testtextt
Also, if I don't use INSERT IGNORE and just do a regular INSERT INTO and handle the error, the auto-increment field still increases, so the next real insert still gets a wrong auto-increment value.
Is there a way to stop the auto-increment when there's an attempt to INSERT a duplicate row?
As I understand it, in MySQL 4.1 this value wouldn't increment, but the last thing I want to do is end up doing a lot of SELECT statements in advance to check whether the tags exist, or worse yet, downgrade my MySQL version.
You could modify your INSERT to be something like this:
INSERT INTO tablename (tag)
SELECT $tag
FROM tablename
WHERE NOT EXISTS(
SELECT tag
FROM tablename
WHERE tag = $tag
)
LIMIT 1
Where $tag is the tag (properly quoted or as a placeholder of course) that you want to add if it isn't already there. This approach won't even trigger an INSERT (and the subsequent autoincrement wastage) if the tag is already there. You could probably come up with nicer SQL than that but the above should do the trick.
If your table is properly indexed then the extra SELECT for the existence check will be fast and the database is going to have to perform that check anyway.
This approach won't work for the first tag though. You could seed your tag table with a tag that you think will always end up being used or you could do a separate check for an empty table.
I just found this gem...
http://www.timrosenblatt.com/blog/2008/03/21/insert-where-not-exists/
INSERT INTO [table name] SELECT '[value1]', '[value2]' FROM DUAL
WHERE NOT EXISTS(
SELECT [column1] FROM [same table name]
WHERE [column1]='[value1]'
AND [column2]='[value2]' LIMIT 1
)
If affectedRows = 1 then it inserted; otherwise if affectedRows = 0 there was a duplicate.
The MySQL documentation for v 5.5 says:
"If you use INSERT IGNORE and the row is ignored, the AUTO_INCREMENT counter
is **not** incremented and LAST_INSERT_ID() returns 0,
which reflects that no row was inserted."
Ref: http://dev.mysql.com/doc/refman/5.5/en/information-functions.html#function_last-insert-id
Since version 5.1 InnoDB has configurable Auto-Increment Locking. See also http://dev.mysql.com/doc/refman/5.1/en/innodb-auto-increment-handling.html#innodb-auto-inc...
Workaround: use option innodb_autoinc_lock_mode=0 (traditional).
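innodb_autoinc_lock_mode is a startup option, so it has to go into the server configuration rather than being changed per session; roughly:
[mysqld]
innodb_autoinc_lock_mode = 0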
I found mu is too short's answer helpful, but limiting because it doesn't do inserts on an empty table. I found a simple modification did the trick:
INSERT INTO tablename (tag)
SELECT $tag
FROM (select 1) as a #this line is different from the other answer
WHERE NOT EXISTS(
SELECT tag
FROM tablename
WHERE tag = $tag
)
LIMIT 1
Replacing the table in the FROM clause with a "fake" table, (select 1) as a, allows that part to return a record, which allows the insert to take place. I'm running MySQL 5.5.37. Thanks mu for getting me most of the way there.
The accepted answer was useful, however I ran into a problem while using it: if your table has no entries, it does not work, because the SELECT reads from the given table. So instead I came up with the following, which inserts even if the table is blank. It also only requires you to name the table in two places and the insert variables in one place, so there is less to get wrong.
INSERT INTO database_name.table_name (a,b,c,d)
SELECT
i.*
FROM
(SELECT
$a AS a,
$b AS b,
$c AS c,
$d AS d
/*variables (properly escaped) to insert*/
) i
LEFT JOIN
database_name.table_name o ON i.a = o.a AND i.b = o.b /*condition to not insert for*/
WHERE
o.a IS NULL
LIMIT 1 /*Not needed as can only ever be one, just being sure*/
Hope you find it useful
You can always add ON DUPLICATE KEY UPDATE (not exactly what you asked for, but it seems to solve your problem).
From the comments, by #ravi
Whether the increment occurs or not depends on the
innodb_autoinc_lock_mode setting. If set to a non-zero value, the
auto-inc counter will increment even if the ON DUPLICATE KEY fires
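A rough sketch of what that looks like for the tag table from the question (the tag = tag assignment is a no-op, so nothing changes on a duplicate; the caveat above about innodb_autoinc_lock_mode still applies):
INSERT INTO tablename (tag) VALUES ('whatever')
ON DUPLICATE KEY UPDATE tag = tag;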
I had the same problem but didn't want to use innodb_autoinc_lock_mode = 0 since it felt like I was killing a fly with a howitzer.
To resolve this problem I ended up using a temporary table.
create temporary table mytable_temp like mytable;
Then I inserted the values with:
insert into mytable_temp values (null,'valA'),(null,'valB'),(null,'valC');
After that you simply do another insert but use "not in" to ignore duplicates.
insert into mytable (myRow) select mytable_temp.myRow from mytable_temp
where mytable_temp.myRow not in (select myRow from mytable);
I haven't tested this for performance, but it does the job and is easy to read. Granted this was only important because I was working with data that was constantly being updated so I couldn't ignore the gaps.
Modified the answer from mu is too short (simply remove one line).
As I am a newbie and cannot comment below his answer, I'll just post it here.
The query below works for the first tag:
INSERT INTO tablename (tag)
SELECT $tag
WHERE NOT EXISTS(
SELECT tag
FROM tablename
WHERE tag = $tag
)
I just put an extra statement after the insert/update query:
ALTER TABLE table_name AUTO_INCREMENT = 1
It then automatically picks up the highest primary key id plus 1.