MySQL SELECT with LIKE '%n' (matching the last letter) not working

I want MySQL to show me the rows of a table where a column's value ends with a specific letter:
SELECT * FROM myTable WHERE col LIKE '%n';
Nothing is displayed (0 rows returned).
But this one works:
SELECT * FROM myTable WHERE col LIKE 'L%';
So when matching the beginning of a string I get output, but when matching the end of a string I don't. I also tried it with other columns and it did not work.
Why?
The word it is looking for is London.
The table was created like this (found this sample on a webpage):
CREATE TABLE IF NOT EXISTS `company` (
`COMPANY_ID` varchar(6) NOT NULL DEFAULT '',
`COMPANY_NAME` varchar(25) DEFAULT NULL,
`COMPANY_CITY` varchar(25) DEFAULT NULL,
PRIMARY KEY (`COMPANY_ID`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;
--
-- Dumping data for table `company`
--
INSERT INTO `company` (`COMPANY_ID`, `COMPANY_NAME`, `COMPANY_CITY`) VALUES
('18', 'Order All', 'Boston\r'),
('15', 'Jack Hill Ltd', 'London\r'),
('16', 'Akas Foods', 'Delhi\r'),
('17', 'Foodies.', 'London\r'),
('19', 'sip-n-Bite.', 'New York\r');

Your sample data shows that you have a carriage return character (\r) as the last character in the string. If we eliminate that from the search, the remaining string does match.
mysql> SELECT * FROM company where TRIM('\r' from company_city) LIKE '%n';
+------------+---------------+--------------+
| COMPANY_ID | COMPANY_NAME  | COMPANY_CITY |
+------------+---------------+--------------+
| 18         | Order All     | Boston       |
| 15         | Jack Hill Ltd | London       |
| 17         | Foodies.      | London       |
+------------+---------------+--------------+
I recommend not storing trailing whitespace characters in your strings. Handle newlines and carriage returns in your application's presentation layer, not in the data.
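If the data is already in the table, one way to clean it up (a sketch, not part of the original answer) is a one-time UPDATE that strips the stray carriage returns:
UPDATE company
SET company_city = TRIM('\r' FROM company_city)
WHERE company_city LIKE '%\r';
After that, the original query SELECT * FROM company WHERE company_city LIKE '%n'; should match as expected.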

Related

MySQL group by fill gaps

I need to group by a field and fill missing information if any.
For example, we have a test table:
CREATE TEMPORARY TABLE `test` (
`name` VARCHAR(100), `description` VARCHAR(100), `email` VARCHAR(100)
);
For this table, we have the following records:
INSERT INTO `test` (`name`, `description`, `email`)
VALUES ('John', '', ''),
('John', 'Description #1', ''),
('John', 'Description #2', ''),
('John', '', 'john#example.com'),
('John', '', '');
I need to select all entries in this table grouped by name, filling in the gaps, such as description (in this case it should use 'Description #2', as it is the latest non-empty value for description). The same goes for email: it should return 'john#example.com'.
How should I select these values?
PS: the actual table has several columns, so it would be good not to have to modify the SELECT statement.
My current select is:
SELECT `name`, `description` FROM `test` GROUP BY `name`;
The problem is that it always uses the values from the first occurrence. I need to "merge" all values based on the latest non-empty insertion; each column may end up taking its value from a different row.
Expected output:
____________________________________________
| name | description | email |
--------------------------------------------
| John | Description #2 | john#example.com |
--------------------------------------------
Thanks.
You'll have to make explicit what "last" actually means. The records you inserted don't have any specific order, so you'll have to add either an auto-incrementing id or something like a created_at date.
Then, you can choose the right records using:
SELECT `name`, `description`
FROM `test`
GROUP BY `name`
HAVING `created_at` = MAX(`created_at`)
For the non-empty part, you'll have to filter them using WHERE description<>''.
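For instance, assuming an auto-incrementing id column has been added (a sketch only, not part of the original answer; it also assumes test is a regular table, since MySQL cannot reference a TEMPORARY table more than once in one query), the latest non-empty value per column could be picked with correlated subqueries:
SELECT t.name,
  (SELECT description FROM test
   WHERE name = t.name AND description <> ''
   ORDER BY id DESC LIMIT 1) AS description,
  (SELECT email FROM test
   WHERE name = t.name AND email <> ''
   ORDER BY id DESC LIMIT 1) AS email
FROM test t
GROUP BY t.name;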

data is inserting wrongly into the table

I have a table named quotationdetails with the following columns:
Field Name | Type
------------------------
Quotati_Id | bigint(20)
Fk_Rfq_Id | bigint(20)
Quotati_No | varchar(30)
Parent_Quotati_Id | bigint(20)
Fk_Client_Supplie_Id | int(11)
Is_Client_Supplie | bit(1)
and I want to insert data. The insert query is given below:
INSERT INTO quotationdetails (
Fk_Rfq_Id,
Quotati_No,
Parent_Quotati_Id,Fk_Client_Supplie_Id,
Is_Client_Supplie
) VALUES (
'15847',
(SELECT AUTO_INCREMENT FROM information_schema.TABLES WHERE TABLE_SCHEMA='qtn' AND
TABLE_NAME='quotationdetails'),
'15640', '1',
'0')
Everything works fine, except for one problem: the column named Is_Client_Supplie is populated wrongly, i.e. 1 is inserted instead of 0.
What am I doing wrong?
It is a bit field, not a string, so remove the apostrophes from '0'. You can do the same for Fk_Client_Supplie_Id and the other integer fields.
A bit field such as bit(3) can be assigned a binary value using the notation b'101', but when assigning 0 this notation is not necessary.
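As a quick illustration (a hypothetical table, not from the original post):
CREATE TABLE bit_demo (flags BIT(3));
INSERT INTO bit_demo VALUES (b'101'); -- binary literal, sets bits 1 and 3
INSERT INTO bit_demo VALUES (0);      -- a plain 0 needs no b'' notation
SELECT flags + 0 FROM bit_demo;       -- +0 casts the BIT value to an integer for display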
use
INSERT INTO quotationdetails (
Fk_Rfq_Id,
Quotati_No,
Parent_Quotati_Id,Fk_Client_Supplie_Id,
Is_Client_Supplie
) VALUES (
'15847',
(SELECT AUTO_INCREMENT FROM information_schema.TABLES WHERE TABLE_SCHEMA='qtn' AND
TABLE_NAME='quotationdetails'),
'15640', '1',
0)
No need for '0'; use 0 instead.

MySQL JOIN Three Tables Using Row Values of A Table

This got complicated really quickly and I'm beginning to question the database design.
The basic concept of the application is:
User accounts
Features
Access levels
So, users have different access levels for each of the features. Fairly basic and common application I would think.
Schema:
CREATE TABLE `user_accounts` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`user_login` varchar(100) COLLATE utf8_unicode_ci NOT NULL,
`user_password` varchar(60) COLLATE utf8_unicode_ci NOT NULL,
`user_fname` varchar(100) COLLATE utf8_unicode_ci NOT NULL,
`user_lname` varchar(100) COLLATE utf8_unicode_ci NOT NULL,
`user_group` varchar(32) COLLATE utf8_unicode_ci NOT NULL DEFAULT 'Default',
PRIMARY KEY (`id`),
UNIQUE KEY `user_login` (`user_login`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci ;
INSERT INTO `user_accounts` VALUES(1, 'email#example.com', 'secret', 'Example', 'Name', 'Admin');
INSERT INTO `user_accounts` VALUES(2, 'john#example.com', 'secret', 'John', 'Doe', 'Trainer');
INSERT INTO `user_accounts` VALUES(3, 'jane#example.com', 'secret', 'Jane', 'Doe', 'Default');
CREATE TABLE `user_access_meta` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`type` varchar(50) COLLATE utf8_unicode_ci NOT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `type` (`type`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
INSERT INTO `user_access_meta` VALUES(1, 'type_1');
INSERT INTO `user_access_meta` VALUES(2, 'type_2');
INSERT INTO `user_access_meta` VALUES(3, 'type_3');
INSERT INTO `user_access_meta` VALUES(4, 'type_4');
INSERT INTO `user_access_meta` VALUES(5, 'type_5');
INSERT INTO `user_access_meta` VALUES(6, 'type_6');
CREATE TABLE `user_access_levels` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`user_login` varchar(100) COLLATE utf8_unicode_ci NOT NULL,
`type` varchar(50) COLLATE utf8_unicode_ci NOT NULL,
`level` int(1) NOT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `user_login_2` (`user_login`,`type`),
KEY `user_login` (`user_login`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci ;
INSERT INTO `user_access_levels` VALUES(1, 'email#example.com', 'type_1', 1);
INSERT INTO `user_access_levels` VALUES(2, 'email#example.com', 'type_2', 1);
INSERT INTO `user_access_levels` VALUES(3, 'email#example.com', 'type_3', 0);
INSERT INTO `user_access_levels` VALUES(4, 'email#example.com', 'type_5', 2);
INSERT INTO `user_access_levels` VALUES(5, 'john#example.com', 'type_2', 1);
INSERT INTO `user_access_levels` VALUES(6, 'john#example.com', 'type_3', 1);
INSERT INTO `user_access_levels` VALUES(7, 'john#example.com', 'type_5', 3);
INSERT INTO `user_access_levels` VALUES(8, 'jane#example.com', 'type_4', 1);
These tables actually have a lot more fields and have foreign key constraints between them, but I've stripped them down for this example. They are also used individually for other purposes.
I've successfully been able to join all three tables together for a single user with this:
SELECT
ua.`user_fname`,
uam.`type`,
ual.`level`
FROM `user_access_meta` uam
LEFT JOIN `user_access_levels` ual
ON ual.`user_login` = 'email#example.com'
AND uam.`type` = ual.`type`
JOIN `user_accounts` ua
ON ua.`user_login` = 'email#example.com';
Output:
| USER_FNAME | TYPE | LEVEL |
--------------------------------
| Example | type_1 | 1 |
| Example | type_2 | 1 |
| Example | type_3 | 0 |
| Example | type_4 | (null) |
| Example | type_5 | 2 |
| Example | type_6 | (null) |
Even this isn't ideal, but it's all I could come up with and it serves its purpose.
Now, what I need to do is select all users including their access levels. It would look something like this:
| USER_FNAME | type_1 | type_2 | type_3 | type_4 | type_5 | type_6 |
--------------------------------------------------------------------------
| Example | 1 | 1 | 0 | (null) | 2 | (null) |
| John | (null) | 1 | 1 | (null) | 3 | (null) |
| Jane | (null) | (null) | (null) | 1 | (null) | (null) |
I feel this may not be the best design, but the reason I went with it is that I can easily add and remove features, or even temporarily disable them individually.
Should the design be rethought? Is it even possible to get the results I'm looking for with this design?
I've put this up on SQL Fiddle. http://sqlfiddle.com/#!2/bb313/2/0
I have a few suggestions on both your table design and how to get the data into the format that you want.
First, on the database design: the change I would advise is in the table user_access_levels. Alter your table to the following:
CREATE TABLE `user_access_levels` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`user_id` int(11) NOT NULL,
`type_id` int(11) NOT NULL,
`level` int(1) NOT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `user_id_2` (`user_id`,`type_id`),
KEY `user_id` (`user_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci ;
There is no need to store the user_login and type in this table when you can just store the user_id and the type_id. Use both of these as foreign keys to their respective tables.
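A sketch of what those foreign keys could look like (the constraint names here are assumptions, not part of the original answer):
ALTER TABLE user_access_levels
ADD CONSTRAINT fk_ual_user FOREIGN KEY (user_id) REFERENCES user_accounts (id),
ADD CONSTRAINT fk_ual_type FOREIGN KEY (type_id) REFERENCES user_access_meta (id);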
Then, to get the data into the format that you want: MySQL does not have a PIVOT function, so you will want to use a CASE expression with an aggregate function.
select ua.user_fname,
MIN(CASE WHEN uam.type = 'type_1' THEN ual.level END) type_1,
MIN(CASE WHEN uam.type = 'type_2' THEN ual.level END) type_2,
MIN(CASE WHEN uam.type = 'type_3' THEN ual.level END) type_3,
MIN(CASE WHEN uam.type = 'type_4' THEN ual.level END) type_4,
MIN(CASE WHEN uam.type = 'type_5' THEN ual.level END) type_5,
MIN(CASE WHEN uam.type = 'type_6' THEN ual.level END) type_6
FROM user_accounts ua
LEFT JOIN user_access_levels ual
ON ua.id = ual.user_id
LEFT JOIN user_access_meta uam
ON ual.type_id = uam.id
group by ua.user_fname
See a SQL Fiddle with a Demo
This version will work if you know ahead of time the type columns that you want to get the values for. But if it is unknown, then you can use prepared statements to generate this dynamically.
Here is a version of the query using prepared statements:
SET @sql = NULL;
SELECT
GROUP_CONCAT(DISTINCT
CONCAT(
'MIN(case when type = ''',
type,
''' then level end) AS ',
replace(type, ' ', '')
)
) INTO @sql
FROM user_access_meta;
SET @sql = CONCAT('SELECT ua.user_fname, ', @sql, ' FROM user_accounts ua
LEFT JOIN user_access_levels ual
ON ua.id = ual.user_id
LEFT JOIN user_access_meta uam
ON ual.type_id = uam.id
group by ua.user_fname');
PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;
See a SQL Fiddle with Demo
While I'm not familiar with the specifics of MySQL, it seems to me you are describing a pretty fundamental example of a pivot-table query. What you're looking for seems reasonable to me, so based on what you've shown here I wouldn't get too concerned about revisiting the data model. You may find it simpler to put the "level" back with the "type" table, per the old saw: "Normalize till it hurts, denormalize till it works." :)
Just my $0.02. Good luck.
I typically use a BIGINT column and use bit masking to set the values.
For example, level1 = 2, level2 = 4, level3 = 8, level4 = 16, etc.
Give someone level1 and level2 access:
update user set access_level = 2 | 4 -- bitwise OR combines the flags (2 | 4 = 6)
does someone have level2 access?
select 1 from user where access_level & 2 AND user_id = ? -- bitwise AND tests whether the level2 bit is set
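Building on that approach (a sketch, using the same hypothetical user table and flag values):
-- grant an additional level without touching the existing flags
update user set access_level = access_level | 8 where user_id = ?;
-- revoke a level by clearing its bit
update user set access_level = access_level & ~8 where user_id = ?;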

How to generate a dynamic sequence table in MySQL?

I'm trying to generate a sequence table in MySQL, so that I can get unique ids from last_insert_id.
The problem is that I need multiple sequences dynamically.
At first, I created a table:
CREATE TABLE `sequence` (
`label` char(30) CHARACTER SET latin1 NOT NULL,
`id` mediumint(9) NOT NULL DEFAULT '0',
PRIMARY KEY (`label`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8
And then I tried to get the number, using the example from http://dev.mysql.com/doc/refman/5.0/en/information-functions.html#function_last-insert-id
UPDATE sequence SET id = LAST_INSERT_ID(id + 1) WHERE label = 'test';
SELECT LAST_INSERT_ID();
After a while I realized that I also need to generate rows for new labels safely.
So I changed this schema into:
CREATE TABLE `sequence` (
`label` char(30) CHARACTER SET latin1 NOT NULL,
`id` mediumint(9) NOT NULL AUTO_INCREMENT,
PRIMARY KEY (`label`,`id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8
And I simply gave up on using a WHERE clause to update the id.
INSERT INTO sequence (label) values ( ? )
SELECT LAST_INSERT_ID()
Is this a proper way? I want to know if there is a better solution.
The MyISAM engine will do it for you -
Table definition:
CREATE TABLE `sequence` (
`label` char(30) CHARACTER SET latin1 NOT NULL,
`id` mediumint(9) NOT NULL AUTO_INCREMENT,
PRIMARY KEY (`label`,`id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8;
Populate table:
INSERT INTO sequence VALUES ('a', NULL); -- add some 'a' labels
INSERT INTO sequence VALUES ('a', NULL);
INSERT INTO sequence VALUES ('a', NULL);
INSERT INTO sequence VALUES ('b', NULL); -- add some 'b' labels
INSERT INTO sequence VALUES ('b', NULL);
INSERT INTO sequence VALUES ('a', NULL); -- add some 'a' labels
INSERT INTO sequence VALUES ('a', NULL);
Show result:
SELECT * FROM sequence;
+-------+----+
| label | id |
+-------+----+
| a | 1 |
| a | 2 |
| a | 3 |
| a | 4 |
| a | 5 |
| a | 6 |
| b | 1 |
| b | 2 |
+-------+----+
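To use this the way the question intends, insert a row for a label and read back the generated value (a sketch; with MyISAM, the AUTO_INCREMENT value on the second column of a composite primary key is computed per label prefix):
INSERT INTO sequence (label) VALUES ('b');
SELECT LAST_INSERT_ID(); -- should return 3, the next id within the 'b' sequence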

MySQL filter query with relation

I'm having the following problem with 2 MySQL tables that have a relation:
I can easily query table 1 (address) when I want a full list, or filter the result by name or email or such. But now I need to query table 1 and filter it based on the relational content of table 2 (interests). That is, I need to find a row (usually many rows) in table 1 only if one or more conditions are met in table 2.
Here are the tables:
CREATE TABLE IF NOT EXISTS `address` (
`id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`name` varchar(100) COLLATE utf8_unicode_ci NOT NULL,
`email` varchar(100) COLLATE utf8_unicode_ci NOT NULL,
`countryCode` char(2) COLLATE utf8_unicode_ci DEFAULT NULL,
`languageCode` char(2) COLLATE utf8_unicode_ci DEFAULT NULL,
`timestamp` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
PRIMARY KEY (`id`),
UNIQUE KEY `emailUnique` (`email`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
INSERT INTO `address` (`id`, `name`, `email`, `countryCode`, `languageCode`, `timestamp`) VALUES
(1, '', 'dummy#test.com', 'BE', 'nl', '2010-07-16 14:07:00'),
(2, '', 'test#somewhere.com', 'BE', 'fr', '2010-07-16 14:10:25');
CREATE TABLE IF NOT EXISTS `interests` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`address_id` int(11) unsigned NOT NULL,
`cat` char(2) COLLATE utf8_unicode_ci NOT NULL,
`subcat` char(2) COLLATE utf8_unicode_ci NOT NULL,
`timestamp` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
PRIMARY KEY (`id`),
KEY `address_id` (`address_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
INSERT INTO `interests` (`id`, `address_id`, `cat`, `subcat`, `timestamp`) VALUES
(1, 1, 'aa', 'xx', '2010-07-16 14:07:00'),
(2, 1, 'aa', 'yy', '2010-07-16 14:07:00'),
(3, 2, 'aa', 'xx', '2010-07-16 14:07:00'),
(4, 2, 'bb', 'zz', '2010-07-16 14:07:00'),
(5, 2, 'aa', 'yy', '2010-07-16 14:07:00');
ALTER TABLE `interests`
ADD CONSTRAINT `interests_ibfk_1` FOREIGN KEY (`address_id`) REFERENCES `address` (`id`) ON DELETE CASCADE ON UPDATE NO ACTION;
For example, I need to find the address(es) that have the interest cat=aa and subcat=xx. Or, as another example, I need the address(es) whose interests include both cat=aa/subcat=xx AND cat=aa/subcat=yy. Especially the latter is important, and one has to keep in mind that both the address and the interests tables will be long lists and that the number of cat/subcat combinations will vary. I'm working with reference queries through Zend_Db_Table (findDependentRowset) at the moment, but that solution is way too slow for address lists numbering in the 100s and even 1000s of hits.
Thank you for your help.
SELECT a.name FROM address a
INNER JOIN interests i ON (a.id = i.address_id)
WHERE i.cat = "aa" AND i.subcat IN ('xx', 'yy')
I added another row in your interests table, to demonstrate a different result set between the two examples:
INSERT INTO interests VALUES (6, 2, 'aa', 'vv', '2010-07-16 14:07:00');
Then you may want to try using correlated subqueries as follows:
SELECT *
FROM address a
WHERE EXISTS (SELECT id
FROM interests
WHERE address_id = a.id AND
(cat = 'aa' and subcat = 'xx'));
Result:
+----+------+--------------------+-------------+--------------+---------------------+
| id | name | email | countryCode | languageCode | timestamp |
+----+------+--------------------+-------------+--------------+---------------------+
| 1 | | dummy#test.com | BE | nl | 2010-07-16 14:07:00 |
| 2 | | test#somewhere.com | BE | fr | 2010-07-16 14:10:25 |
+----+------+--------------------+-------------+--------------+---------------------+
2 rows in set (0.00 sec)
For the second example, we're testing for the new row added previously in order not to have the same result as above:
SELECT *
FROM address a
WHERE EXISTS (SELECT id
FROM interests
WHERE address_id = a.id AND
(cat = 'aa' and subcat = 'xx')) AND
EXISTS (SELECT id
FROM interests
WHERE address_id = a.id AND
(cat = 'aa' and subcat = 'vv'));
Result:
+----+------+--------------------+-------------+--------------+---------------------+
| id | name | email | countryCode | languageCode | timestamp |
+----+------+--------------------+-------------+--------------+---------------------+
| 2 | | test#somewhere.com | BE | fr | 2010-07-16 14:10:25 |
+----+------+--------------------+-------------+--------------+---------------------+
1 row in set (0.00 sec)
Using correlated subqueries is easy and straightforward. However, keep in mind that this might not be the best approach in terms of performance, because each correlated subquery is executed once for every address in the outer query.
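If that becomes a bottleneck, one common alternative (a sketch, not part of the original answers) is a single JOIN with GROUP BY and HAVING, counting how many of the required cat/subcat pairs each address matches:
SELECT a.*
FROM address a
INNER JOIN interests i ON i.address_id = a.id
WHERE (i.cat = 'aa' AND i.subcat = 'xx')
   OR (i.cat = 'aa' AND i.subcat = 'yy')
GROUP BY a.id
HAVING COUNT(DISTINCT i.cat, i.subcat) = 2; -- both required interest pairs must be present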