I have a column named send which contains alphabetical values and numerical values (not alphanumeric ones), and I want to segregate the alphabetical values and get their counts.
Can you suggest a query? I tried '%[0-9]%' but was not able to segregate them.
You should really try to include some example data or a little bit of code showing what you're trying to do.
But, whatever, hopefully you can apply my example to your problem. First, I'll create a table with a column named "send", as you mentioned:
mysql> CREATE TABLE test (send VARCHAR(255));
Query OK, 0 rows affected (0.11 sec)
We'll insert into that a variety of values, some numeric and some non-numeric:
mysql> INSERT INTO test (send) VALUES (1), (-2), (3), ('a'), ('b'), ('ZZ'),
-> ('test'), (42), ('1.2'), (0), (0123), ('123');
Query OK, 12 rows affected (0.01 sec)
Records: 12 Duplicates: 0 Warnings: 0
Then we can build a query which uses REGEXP to determine if a value is numeric or not:
mysql> SELECT
-> COUNT(*) AS `Count`,
-> IF(1 = send REGEXP '^(-|\\+)?([0-9]+\\.[0-9]*|[0-9]*\\.[0-9]+|[0-9]+)$',
-> 'NUMERIC',
-> 'NON-NUMERIC') AS IsNumeric
-> FROM
-> test
-> GROUP BY
-> IsNumeric
-> ;
+-------+-------------+
| Count | IsNumeric   |
+-------+-------------+
|     4 | NON-NUMERIC |
|     8 | NUMERIC     |
+-------+-------------+
2 rows in set (0.00 sec)
Apply this to whatever it is that you're trying. The part which says "send REGEXP '^(-|\\+)?([0-9]+\\.[0-9]*|[0-9]*\\.[0-9]+|[0-9]+)$'" returns 1 if the value of send is a number.
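If what you actually want is the list of alphabetical values together with their counts, a minimal sketch along the same lines (reusing the test table and the same REGEXP, so adjust the names to your own schema) could be:
-- Keep only the rows whose send value is not numeric,
-- then count how often each distinct value appears.
SELECT send, COUNT(*) AS cnt
FROM test
WHERE send NOT REGEXP '^(-|\\+)?([0-9]+\\.[0-9]*|[0-9]*\\.[0-9]+|[0-9]+)$'
GROUP BY send;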
I have a mysql table which has a data structure as follows,
create table data(
....
name char(40) NULL,
...
)
But I could insert names which have more than 40 characters into the name field. Can someone explain the actual meaning of char(40)?
You cannot insert a string of more than 40 characters in a column defined with the type CHAR(40).
If you run MySQL in strict mode, you will get an error if you try to insert a longer string.
mysql> create table mytable ( c char(40) );
Query OK, 0 rows affected (0.01 sec)
mysql> insert into mytable (c) values ('Now is the time for all good men to come to the aid of their country.');
ERROR 1406 (22001): Data too long for column 'c' at row 1
If you run MySQL in non-strict mode, the insert will succeed, but only the first 40 characters of your string are stored in the column. The characters beyond 40 are lost, and you get no error.
mysql> set sql_mode='';
Query OK, 0 rows affected (0.00 sec)
mysql> insert into mytable (c) values ('Now is the time for all good men to come to the aid of their country.');
Query OK, 1 row affected, 1 warning (0.01 sec)
mysql> show warnings;
+---------+------+----------------------------------------+
| Level   | Code | Message                                |
+---------+------+----------------------------------------+
| Warning | 1265 | Data truncated for column 'c' at row 1 |
+---------+------+----------------------------------------+
1 row in set (0.00 sec)
mysql> select c from mytable;
+------------------------------------------+
| c                                        |
+------------------------------------------+
| Now is the time for all good men to come |
+------------------------------------------+
1 row in set (0.00 sec)
I recommend operating MySQL in strict mode (strict mode is the default since MySQL 5.7). I would prefer to get an error instead of losing data.
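As a quick sketch of how you might check and enable strict mode for your own session (the exact set of modes you combine with STRICT_TRANS_TABLES is up to you):
-- Show the current SQL mode, then turn on strict mode for this session.
SELECT @@SESSION.sql_mode;
SET SESSION sql_mode = 'STRICT_TRANS_TABLES,NO_ENGINE_SUBSTITUTION';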
I have a MySQL table with a JSON column. I want to update some rows in the JSON column to change a JSON value from a float to an integer, e.g. {"a": 20.0} should become {"a": 20}. It looks like MySQL considers these two values equivalent, so it never bothers to update the row.
Here is the state of my test table:
mysql> describe test;
+-------+------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+-------+------+------+-----+---------+-------+
| id    | int  | NO   | PRI | NULL    |       |
| val   | json | YES  |     | NULL    |       |
+-------+------+------+-----+---------+-------+
2 rows in set (0.00 sec)
mysql> select * from test;
+----+-------------+
| id | val         |
+----+-------------+
|  1 | {"a": 20.0} |
+----+-------------+
1 row in set (0.00 sec)
My aim is to change val to {"a": 20}
I've tried the following queries:
mysql> update test set val=JSON_OBJECT("a", 20) where id=1;
Query OK, 0 rows affected (0.00 sec)
Rows matched: 1 Changed: 0 Warnings: 0
(0 rows changed)
mysql> update test
set val=JSON_SET(
val,
"$.a",
FLOOR(
JSON_EXTRACT(val, "$.a")
)
)
where id=1;
Query OK, 0 rows affected (0.00 sec)
Rows matched: 1 Changed: 0 Warnings: 0
(0 rows changed)
mysql> insert into test (id, val) values (1, JSON_OBJECT("a", 20)) ON DUPLICATE KEY UPDATE id=VALUES(id), val=VALUES(val);
Query OK, 0 rows affected, 2 warnings (0.00 sec)
(0 rows affected)
It looks like it doesn't matter how I try to write it, whether I attempt to modify the existing value, or specify a whole new JSON_OBJECT. So I'm wondering if the reason is simply that MySQL considers the before & after values to be equivalent.
Is there any way around this?
(This does not address the original Question, but addresses a problem encountered in Answering it.)
Gross... 8.0 has a naughty history of all-too-quickly removing something after recently deprecating it. Beware. Here is the issue with VALUES from the Changelog for 8.0.20:
----- 2020-04-27 8.0.20 General Availability -----
The use of VALUES() to access new row values in INSERT ... ON DUPLICATE KEY UPDATE statements is now deprecated, and is subject to removal in a future MySQL release. Instead, you should use aliases for the new row and its columns as implemented in MySQL 8.0.19 and later.
For example, the statement shown here uses VALUES() to access new row values:
INSERT INTO t1 (a,b,c) VALUES (1,2,3),(4,5,6)
ON DUPLICATE KEY UPDATE c=VALUES(a)+VALUES(b);
Henceforth, you should instead use a statement similar to the following, which uses an alias for the new row:
INSERT INTO t1 (a,b,c) VALUES (1,2,3),(4,5,6) AS new
ON DUPLICATE KEY UPDATE c = new.a+new.b;
Alternatively, you can employ aliases for both the new row and each of its columns, as shown here:
INSERT INTO t1 (a,b,c) VALUES (1,2,3),(4,5,6) AS new(m,n,p)
ON DUPLICATE KEY UPDATE c = m+n;
For more information and examples, see INSERT ... ON DUPLICATE KEY UPDATE Statement.
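Applied to the table from the question, a sketch of the same upsert using the 8.0.19+ row alias instead of the deprecated VALUES() would look something like this (it avoids the deprecation warning, though, as the question suspects, MySQL may still treat the stored 20.0 and the new 20 as equivalent and again report 0 rows changed):
-- Row alias 'new' replaces the deprecated VALUES() references.
INSERT INTO test (id, val) VALUES (1, JSON_OBJECT("a", 20)) AS new
ON DUPLICATE KEY UPDATE id = new.id, val = new.val;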
I need to log in with a case-insensitive email ID. My email ID is stored in an encrypted format, and I am fetching it from the database with something like the following query:
$this->db->select('Name');
$this->db->from('users');
$this->db->where('emailId',"AES_ENCRYPT('{$emailId}','/*awshp$*/') ", FALSE);
$query = $this->db->get();
$result = $query->row();
return $result;
I tried using binary, but it did not help.
The logic is simple: when the user signs up, store the encrypted value of the email ID after converting it to lowercase, and on login convert the entered email ID to lowercase as well. That way, no matter which case the user types the email ID in, the encrypted strings will match.
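As a rough SQL sketch of that idea, assuming the users table and columns from the question (the literal email and the key here are just placeholders):
-- At sign-up: lowercase the email before encrypting and storing it.
INSERT INTO users (emailId, Name)
VALUES (AES_ENCRYPT(LOWER('User@Example.com'), 'your-secret-key'), 'Some Name');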
AES encryption is a two-way algorithm, meaning you can recover the original value, so you can also update existing records that don't conform to the format you want to test.
After you run the update in the database, just apply the suggestions made by Tim Biegeleisen and you should be good to go.
Demo for updating existing records
mysql> CREATE TABLE t1 (
-> email VARBINARY(256)
-> );
Query OK, 0 rows affected (0.31 sec)
mysql> INSERT INTO t1 (email) VALUES
-> (AES_ENCRYPT('MiXeDcAsEdEmAiL@gmail.com','salt')),
-> (AES_ENCRYPT('UPPERCASEDEMAIL@gmail.com','salt')),
-> (AES_ENCRYPT('lowercasedemail@gmail.com','salt'));
Query OK, 3 rows affected (0.09 sec)
Records: 3 Duplicates: 0 Warnings: 0
mysql> UPDATE t1 SET email = AES_ENCRYPT(LOWER(CAST(AES_DECRYPT(email,'salt') AS CHARACTER)),'salt');
Query OK, 2 rows affected (0.11 sec)
Rows matched: 3 Changed: 2 Warnings: 0
mysql> SELECT CAST(aes_decrypt(email,'salt') AS CHARACTER) lower_cased from t1;
+---------------------------+
| lower_cased               |
+---------------------------+
| mixedcasedemail@gmail.com |
| uppercasedemail@gmail.com |
| lowercasedemail@gmail.com |
+---------------------------+
3 rows in set (0.00 sec)
NB
Don't forget to change the update statement to match your column name and the value you use as a salt.
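Once every stored email is lowercased, the login lookup is simply the lowercased input encrypted with the same key; a sketch against the users table from the question (the literal email and the key are placeholders):
-- At login: lowercase the submitted email, encrypt it, and compare.
SELECT Name
FROM users
WHERE emailId = AES_ENCRYPT(LOWER('User@Example.com'), 'your-secret-key');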
Case
In our MySQL database the data is stored in combined JSON strings like this:
| ID | DATA |
| 100 | {var1str: "sometxt", var2double: 0,01, var3integer: 1, var4str: "another text"} |
| 101 | {var3integer: 5, var2double: 2,05, var1str: "txt", var4str: "more text"} |
Problem
Most of the DATA fields hold over 2,500 variables. The order of variables in the DATA string is random (as shown in the above example). Right now we only know how to extract data with the following query:
select
    ID,
    json_extract(DATA, '$.var1str'),
    json_extract(DATA, '$.var2double')
FROM table
With this query, only the values of var1str and var2double are returned as the result. The values of variables 3 and 4 are ignored. There is no overview of which variables are hiding in the DATA fields.
With almost 60,000 entries and over 3,000 possible unique variable names, I would like to create a query that loops through all 60,000 DATA fields and extracts every unique variable name found in there.
Solution?
The query I am looking for would give the following result:
var1str
var2double
var3integer
var4str
My knowledge of MySQL is very limited. Any direction given to get to this solution is much appreciated.
What version of MySQL are you using?
From MySQL 8.0.4 onward, the JSON_TABLE function is supported and can be useful in this case.
mysql> SELECT VERSION();
+-----------+
| VERSION() |
+-----------+
| 8.0.11    |
+-----------+
1 row in set (0.00 sec)
mysql> DROP TABLE IF EXISTS `table`;
Query OK, 0 rows affected (0.09 sec)
mysql> CREATE TABLE IF NOT EXISTS `table` (
-> `ID` BIGINT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
-> `DATA` JSON NOT NULL
-> ) AUTO_INCREMENT=100;
Query OK, 0 rows affected (0.00 sec)
mysql> INSERT INTO `table`
-> (`DATA`)
-> VALUES
-> ('{"var1str": "sometxt", "var2double": 0.01, "var3integer": 1, "var4str": "another text"}'),
-> ('{"var3integer": 5, "var2double": 2.05, "var1str": "txt", "var4str": "more text"}');
Query OK, 2 rows affected (0.00 sec)
Records: 2 Duplicates: 0 Warnings: 0
mysql> SELECT
-> DISTINCT `der`.`key`
-> FROM
-> `table`,
-> JSON_TABLE(
-> JSON_KEYS(`DATA`), '$[*]'
-> COLUMNS(
-> `key` VARCHAR(64) PATH "$"
-> )
-> ) `der`;
+-------------+
| key         |
+-------------+
| var1str     |
| var4str     |
| var2double  |
| var3integer |
+-------------+
4 rows in set (0.01 sec)
Be aware of Bug #90610, ERROR 1142 (42000) when using JSON_TABLE.
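As a small follow-up sketch: once you know which keys exist, you can extract them with '$.'-prefixed paths (the ->> operator is shorthand for JSON_UNQUOTE(JSON_EXTRACT(...))):
-- Pull two of the discovered variables out of DATA for every row.
SELECT ID,
       DATA->>'$.var1str'    AS var1str,
       DATA->>'$.var2double' AS var2double
FROM `table`;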
I am taking data from a CSV file and throwing it all into a temporary table, so everything is in string format.
Even the date fields are strings, so I need to convert them from strings to dates. All dates are in this format: 28/02/2013.
I used STR_TO_DATE for this, but I am having a problem.
Here is a snippet of my code.
INSERT INTO `invoice` (`DueDate`)
SELECT
STR_TO_DATE('','%d/%m/%Y')
FROM `upload_invoice`
There are of course more fields than this, but I am concentrating on the field that doesn't work.
With this command, if a date is invalid it should insert a NULL, but instead it generates an error:
#1411 - Incorrect datetime value: '' for function str_to_date
I understand what the error means: it is getting an empty field instead of a properly formatted date. But after reading the documentation, it should not be throwing an error; it should be inserting a NULL.
However if I use the SELECT statement without the INSERT it works.
I could use the following line, which actually works up to a point:
IF(`DueDate`!='',STR_TO_DATE(`DueDate`,'%d/%m/%Y'),null) as `DueDate`
That way STR_TO_DATE doesn't run if the field is empty. This works, but it can't catch a date which is not valid, e.g. if a date was 'ASDFADFAS'.
So then I tried
IF(TO_DAYS(STR_TO_DATE(`DueDate`,'%d/%m/%Y')) IS NOT NULL,STR_TO_DATE(`DueDate`,'%d/%m/%Y'),null) as `DueDate`
But this still gives the #1411 error on the if statement.
So my question is: why isn't STR_TO_DATE returning NULL on an incorrect date? I should not be getting the #1411 error.
This is not an exact duplicate of the other question, and that question did not have a satisfactory answer. I solved this a while ago and have added my solution, which is actually better than the one given in the other post, so I think it should stay.
An option you can try:
mysql> SELECT VERSION();
+-----------+
| VERSION() |
+-----------+
| 5.7.19    |
+-----------+
1 row in set (0.00 sec)
mysql> DROP TABLE IF EXISTS `upload_invoice`, `invoice`;
Query OK, 0 rows affected (0.00 sec)
mysql> CREATE TABLE IF NOT EXISTS `invoice` (
-> `id` BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
-> `DueDate` DATE
-> );
Query OK, 0 rows affected (0.00 sec)
mysql> CREATE TABLE IF NOT EXISTS `upload_invoice` (
-> `DueDate` VARCHAR(10)
-> );
Query OK, 0 rows affected (0.00 sec)
mysql> INSERT INTO `upload_invoice`
-> (`DueDate`)
-> VALUES
-> ('ASDFADFAS'), (NULL), (''),
-> ('28/02/2001'), ('30/02/2001');
Query OK, 5 rows affected (0.01 sec)
Records: 5 Duplicates: 0 Warnings: 0
mysql> INSERT INTO `invoice`
-> SELECT
-> NULL,
-> IF(`DueDate` REGEXP '[[:digit:]]{2}/[[:digit:]]{2}/[[:digit:]]{4}' AND
-> UNIX_TIMESTAMP(
-> STR_TO_DATE(`DueDate`, '%d/%m/%Y')
-> ) > 0,
-> STR_TO_DATE(`DueDate`, '%d/%m/%Y'),
-> NULL)
-> FROM `upload_invoice`;
Query OK, 5 rows affected (0.00 sec)
Records: 5 Duplicates: 0 Warnings: 0
mysql> SELECT `id`, `DueDate`
-> FROM `invoice`;
+----+------------+
| id | DueDate    |
+----+------------+
|  1 | NULL       |
|  2 | NULL       |
|  3 | NULL       |
|  4 | 2001-02-28 |
|  5 | NULL       |
+----+------------+
5 rows in set (0.00 sec)
See db-fiddle.
I forgot I posted this question, but I solved this problem a while ago like this
IF(`{$date}`!='',STR_TO_DATE(`{$date}`,'%d/%m/%Y'),null) as `{$date}`
Because the line is long and confusing, I made a function like this:
// Builds the conditional STR_TO_DATE expression for the given date column.
protected function strDate($date){
    return "IF(`{$date}`!='',STR_TO_DATE(`{$date}`,'%d/%m/%Y'),null) as `{$date}`";
}
INSERT INTO `invoice` (`DueDate`)
SELECT
{$this->strDate('DueDate')}
FROM `upload_invoice`
It seems like an eternity ago, but this is how I solved the issue.