What's wrong with my conditional insert SQL statement in MySQL?

There:
When I try to insert data into my table without any duplicated records, I tried both of the following SQL statements, but neither of them works for me. My version of MySQL is 5.1.33-community.
SELECT @nextID := IFNULL( MAX( N_KEY ), 0 ) + 1
FROM SYS_DICT
WHERE N_CAT = 1;
IF (SELECT COUNT(*) AS N_COUNT
FROM SYS_DICT
WHERE N_CAT=1 AND C_VALUE='welcome'
) <= 0
THEN
INSERT INTO SYS_DICT
VALUES( 1, @nextID, 1, 100, 'welcome');
END IF
or
SELECT @nextID := IFNULL( MAX( N_KEY ), 0 ) + 1
FROM SYS_DICT
WHERE N_CAT=1;
INSERT INTO SYS_DICT
VALUES ( 1, @nextID, 1, 100, 'welcome')
WHERE NOT EXISTS (
SELECT N_KEY
FROM SYS_DICT
WHERE N_CAT=1 AND C_VALUE='welcome'
);
Any hint?
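For reference, a minimal sketch of the single-statement pattern the second attempt seems to be aiming for: a plain INSERT has no WHERE clause, and IF ... END IF only works inside stored programs, but an INSERT ... SELECT can carry the NOT EXISTS check. Column names other than N_CAT, N_KEY and C_VALUE are guesses, and without a unique index two concurrent sessions could still race past the check.
-- Sketch only: replace N_TYPE and N_ORDER with the real column names.
INSERT INTO SYS_DICT (N_CAT, N_KEY, N_TYPE, N_ORDER, C_VALUE)
SELECT 1, nxt.next_id, 1, 100, 'welcome'
FROM (
SELECT IFNULL(MAX(N_KEY), 0) + 1 AS next_id
FROM SYS_DICT
WHERE N_CAT = 1
) AS nxt
WHERE NOT EXISTS (
SELECT 1
FROM SYS_DICT
WHERE N_CAT = 1 AND C_VALUE = 'welcome'
);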

Related

Count vowels and consonants in an array

I am trying to write a function which will return the number of vowels
and consonants. Using the IF statement the function compiles successfully,
however when I call it in the SELECT it shows the message:
"Conversion failed when converting the varchar value 'MAMAMIA' to
data type int."
I tried with the CASE statement, but there are too many syntax
errors and I think it is not the best method of solving the problem
using CASE ...
CREATE FUNCTION VOW_CONS(@ARRAY VARCHAR(20))
RETURNS INT
BEGIN
DECLARE @COUNTT INT;
DECLARE @COUNTT1 INT;
SET @COUNTT=0;
SET @COUNTT1=0;
WHILE (@ARRAY!=0)
BEGIN
IF(@ARRAY LIKE '[aeiouAEIOU]%')
SET @COUNTT=@COUNTT+1
ELSE SET @COUNTT1=@COUNTT1+1
/*
DECLARE @C INT;
SET @C=(CASE @SIR WHEN 'A' THEN @COUNTT=@COUNTT+1;
WHEN 'E' THEN @COUNTT=@COUNTT+1
WHEN 'I' THEN @COUNTT=@COUNTT+1
WHEN 'O' THEN @COUNTT=@COUNTT+1
WHEN 'U' THEN @COUNTT=@COUNTT+1
WHEN 'A' THEN @COUNTT=@COUNTT+1
WHEN ' ' THEN ' '
ELSE @COUNTT1=@COUNTT1+1
END)
*/
END
RETURN @COUNTT;
END
SELECT DBO.VOW_CONS('MAMAMIA')
Without knowing what version of SQL Server you are using, I am going to assume you are using the latest version, meaning you have access to TRANSLATE. Also I'm going to assume you do need access to both the number of vowels and consonants, so a table-value function seems the better method here. As such, you could do something like this:
CREATE FUNCTION dbo.CountVowelsAndConsonants (@String varchar(20))
RETURNS table
AS RETURN
SELECT DATALENGTH(@String) - DATALENGTH(REPLACE(TRANSLATE(@String,'aeiou','|||||'),'|','')) AS Vowels, --Pipe is a character that can't be in your string
DATALENGTH(@String) - DATALENGTH(REPLACE(TRANSLATE(@String,'bcdfghjklmnpqrstvwxyz','|||||||||||||||||||||'),'|','')) AS Consonants --Pipe is a character that can't be in your string
GO
And then you can use the function like so:
SELECT *
FROM (VALUES('MAMAMIA'),('Knowledge is power'))V(YourString)
CROSS APPLY dbo.CountVowelsAndConsonants(V.YourString);
Another option would be to split the string into its individual letters using a tally table, join that to a table of vowels/consonants and then get your counts.
Here is a working example, you'll have to update/change for your specific needs but should give you an idea on how it works.
DECLARE @string NVARCHAR(100)
SET @string = 'Knowledge is power';
--Here we build a table of all the letters setting a flag on which ones are vowels
DECLARE @VowelConsonants TABLE
(
[Letter] CHAR(1)
, [IsVowel] BIT
);
--We load it with the data
INSERT INTO @VowelConsonants (
[Letter]
, [IsVowel]
)
VALUES ( 'a', 1 ) , ( 'b', 0 ) , ( 'c', 0 ) , ( 'd', 0 ) , ( 'e', 1 ) , ( 'f', 0 ) , ( 'g', 0 ) , ( 'h', 0 ) , ( 'i', 1 ) , ( 'j', 0 ) , ( 'k', 0 ) , ( 'l', 0 ) , ( 'm', 0 ) , ( 'n', 0 ) , ( 'o', 1 ) , ( 'p', 0 ) , ( 'q', 0 ) , ( 'r', 0 ) , ( 's', 0 ) , ( 't', 0 ) , ( 'u', 1 ) , ( 'v', 0 ) , ( 'w', 0 ) , ( 'x', 0 ) , ( 'y', 0 ) , ( 'z', 0 );
--This tally table example gives 10,000 numbers
WITH
E1(N) AS (select 1 from (values (1),(1),(1),(1),(1),(1),(1),(1),(1),(1))dt(n)),
E2(N) AS (SELECT 1 FROM E1 a, E1 b), --10E+2 or 100 rows
E4(N) AS (SELECT 1 FROM E2 a, E2 b), --10E+4 or 10,000 rows max
cteTally(N) AS
(
SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) FROM E4
)
SELECT SUM( CASE WHEN [vc].[IsVowel] = 1 THEN 1
ELSE 0
END
) AS [VowelCount]
, SUM( CASE WHEN [vc].[IsVowel] = 0 THEN 1
ELSE 0
END
) AS [ConsonantCount]
FROM [cteTally] [t] --Select from tally cte
INNER JOIN @VowelConsonants [vc] --Join to VowelConsonants table on the letter.
ON LOWER([vc].[Letter]) = LOWER(SUBSTRING(@string, [t].[N], 1)) --Using the number from the tally table we can easily split out each letter using substring
WHERE [t].[N] <= LEN(@string);
Giving you results of:
VowelCount ConsonantCount
----------- --------------
6 10
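For completeness, the conversion error in the original function comes from the condition WHILE (@ARRAY != 0), which forces SQL Server to convert 'MAMAMIA' to an int (and the loop never changes @ARRAY anyway). A loop-based version would have to walk the string by position; here is a minimal sketch of that idea (the function name and the vowels-only return value are my own choices, not from the question):
CREATE FUNCTION dbo.VOW_CONS_LOOP (@Array VARCHAR(20))
RETURNS INT
AS
BEGIN
DECLARE @i INT = 1, @Vowels INT = 0;
WHILE @i <= LEN(@Array) -- iterate over positions, not over the string value itself
BEGIN
IF SUBSTRING(@Array, @i, 1) LIKE '[aeiouAEIOU]'
SET @Vowels = @Vowels + 1;
SET @i = @i + 1;
END
RETURN @Vowels; -- consonants = LEN(@Array) - @Vowels for letters-only input
END
GO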

Sql query refactor from mysql 5.6 to 8.0 (GROUP BY problem)

I get the error
SQL Error (1055): Expression #7 of SELECT list is not in GROUP BY clause and contains nonaggregated column 'ifu.amount' which is not functionally dependent on columns in GROUP BY clause; this is incompatible with sql_mode=only_full_group_by
after migrating to MySQL 8.0 from 5.6. I know it can easily be fixed by disabling the ONLY_FULL_GROUP_BY flag, but I want the query to be more compatible with MySQL 8.0. So the question is: if I add ifu.amount to the GROUP BY, will it work perfectly fine without missing any query results or anything? Right now, without GROUP BY ifu.amount, the MySQL code looks like:
select
`i`.`id` AS `institution_id`,
`i`.`name` AS `institution_name`,
`cr`.`check_date` AS `check_date`,
sum(
(
case when (`cr`.`status` = '1') then 1 else 0 end
)
) AS `can_accept`,
sum(
(
case when (`cr`.`status` = '0') then 1 else 0 end
)
) AS `cannot_accept`,(
sum(
(
case when (`cr`.`status` = '1') then 1 else 0 end
)
) + sum(
(
case when (`cr`.`status` = '0') then 1 else 0 end
)
)
) AS `suma`,
`ifu`.`amount` AS `amount`,
round(
(
(
(
(
sum(
(
case when (`cr`.`status` = '1') then 1 else 0 end
)
) * 100
) / (
sum(
(
case when (`cr`.`status` = '1') then 1 else 0 end
)
) + sum(
(
case when (`cr`.`status` = '0') then 1 else 0 end
)
)
)
) * `ifu`.`amount`
) * 0.01
),
2
) AS `financed_amount`
from
(
(
(
`check_results` `cr`
join `family_doctors` `fd` on((`fd`.`id` = `cr`.`doctor_id`))
)
join `institutions` `i` on((`i`.`id` = `fd`.`institution_id`))
)
join `institutions_funding` `ifu` on((`ifu`.`institution_id` = `i`.`id`))
)
where
(`cr`.`status` in (1, 0))
group by
`i`.`id`,
`i`.`name`,
`cr`.`check_date`
Thanks for help in advance!
Include amount in your group by clause.
where
(`cr`.`status` in (1, 0))
group by
`i`.`id`,
`i`.`name`,
`cr`.`check_date`,
`ifu`.`amount`
If amount is excluded from your GROUP BY clause, select an aggregate of it instead, which picks one amount per id/name/check-date group (MIN takes the lowest):
min(`ifu`.`amount`) as `amount`
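If you are on MySQL 5.7/8.0 and amount is the same for every row in a group anyway, another option is ANY_VALUE(), which keeps the GROUP BY unchanged and just tells the server you don't care which row's value is picked:
any_value(`ifu`.`amount`) AS `amount`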

MySQL: Why would a query run faster with literal conditions compared to variables

Not sure whether the actual query matters, but I have a MySQL stored procedure where I commented out all other parts of the proc except the following query...
INSERT INTO temp_attribution (`attribute_type`, `domain`, `id`, `name`, `score`, `rank`, `partner_match`, `person_match`, `sponsor_match`, `date_match`)
SELECT 'Campaign' AS attribute_type, domain, id, name, score, (@proc_counter := @proc_counter + 1) AS rank,
partner_match, person_match, sponsor_match, date_match
FROM (
SELECT m_c.domain, m_c.campaign_id AS id, m_c.name, m_c.client_id, m_c.sent_date,
proc_sponsors AS invoice_sponsor, bs.sponsor AS campaign_sponsor,
proc_email AS invoice_email, aes_decrypt(m_r.email, in_encrypt_key) as campaign_email,
if (m_c.client_id = proc_client_id COLLATE latin1_general_ci, 'Yes', 'No') AS partner_match,
if (aes_encrypt(proc_email, in_encrypt_key) = m_r.email, 'Exact Email', 'Email Domain') AS person_match,
if (LOCATE(CONVERT(bs.sponsor USING utf8mb4), proc_sponsors) > 0, 'Sponsor',
if (CONVERT(bs.vendor USING utf8mb4) = proc_vendor, 'Vendor', 'No') ) AS sponsor_match,
if (datediff(proc_invoice_date, m_c.sent_date) BETWEEN 0 AND 92, 'Within Three', 'Within Six') AS date_match,
(
if (m_c.client_id = proc_client_id COLLATE latin1_general_ci, 45, 10) + 30 +
if (LOCATE(CONVERT(bs.sponsor USING utf8mb4), proc_sponsors) > 0, 10,
if (CONVERT(bs.vendor USING utf8mb4) = proc_vendor, 5, 0) ) +
if (datediff(proc_invoice_date, m_c.sent_date) BETWEEN 0 AND 92, 15, 5)
) AS score
FROM campaign_table m_c
INNER JOIN recipient_table m_r ON m_c.domain = m_r.domain AND m_c.campaign_id = m_r.campaign_id
LEFT JOIN booking_sponsor bs ON m_c.domain = bs.domain AND m_c.campaign_id = bs.campaign_id
WHERE datediff(proc_invoice_date, m_c.sent_date) BETWEEN 0 AND 185
AND ( aes_encrypt(proc_email, in_encrypt_key) = m_r.email OR m_r.email_domain = proc_email_domain )
) T ORDER BY score DESC, sent_date DESC LIMIT 5;
The fields starting with 'proc_' are actually variables declared at the beginning of the procedure and this only takes 0.385 seconds to initialise whereas the entire proc takes 15 seconds.
On a separate query window, I copied the relevant query and substituted variables starting with 'proc_' to test speed and optimise, like so...
INSERT INTO temp_attribution (`attribute_type`, `domain`, `id`, `name`, `score`, `rank`, `partner_match`, `person_match`, `sponsor_match`, `date_match`)
SELECT 'Campaign' AS attribute_type, domain, id, name, score, (@proc_counter := @proc_counter + 1) AS rank,
partner_match, person_match, sponsor_match, date_match
FROM (
SELECT m_c.domain, m_c.campaign_id AS id, m_c.name, m_c.client_id, m_c.sent_date,
'VENDOR SPONSOR VALUE' AS invoice_sponsor, bs.sponsor AS campaign_sponsor,
'johnsmith@domain.com' AS invoice_email, aes_encrypt('johnsmith@domain.com', 'secret_key') as campaign_email,
if (m_c.client_id = m_c.client_id COLLATE latin1_general_ci, 'Yes', 'No') AS partner_match,
if (aes_encrypt('johnsmith@domain.com', 'secret_key'), 'Exact Email', 'Email Domain') AS person_match,
if (LOCATE(CONVERT(bs.sponsor USING utf8mb4), 'VENDOR SPONSOR VALUE') > 0, 'Sponsor',
if (CONVERT(bs.vendor USING utf8mb4) = 'VENDOR', 'Vendor', 'No') ) AS sponsor_match,
if (datediff('2016-10-14', m_c.sent_date) BETWEEN 0 AND 92, 'Within Three', 'Within Six') AS date_match,
(
if (m_c.client_id = m_c.client_id COLLATE latin1_general_ci, 45, 10) + 30 +
if (LOCATE(CONVERT(bs.sponsor USING utf8mb4), 'VENDOR SPONSOR VALUE') > 0, 10,
if (CONVERT(bs.vendor USING utf8mb4) = 'VENDOR', 5, 0) ) +
if (datediff('2016-10-14', m_c.sent_date) BETWEEN 0 AND 92, 15, 5)
) AS score
FROM campaign_table m_c
INNER JOIN recipient_table m_r ON m_c.domain = m_r.domain AND m_c.campaign_id = m_r.campaign_id
LEFT JOIN booking_sponsor bs ON m_c.domain = bs.domain AND m_c.campaign_id = bs.campaign_id
WHERE datediff('2016-10-14', m_c.sent_date) BETWEEN 0 AND 185
AND ( aes_encrypt('johnsmith@domain.com', 'secret_key') = m_r.email OR m_r.email_domain = 'domain.com' )
) T ORDER BY score DESC, sent_date DESC LIMIT 5;
Now, magically without doing anything else, the query runs within two seconds. How is that possible?
Figured it out. Some of the declared variable types were different from the columns they were compared against, so I guess MySQL could not compare them in the most efficient way possible.
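As an illustration (the real column definitions are not shown here), declaring the procedure variables with the same type, character set and collation as the columns they are compared against avoids the implicit conversion and lets MySQL use its indexes, which is presumably why the literal version ran fast:
-- Hypothetical declarations; match them to the actual column definitions.
DECLARE proc_client_id VARCHAR(36) CHARACTER SET latin1 COLLATE latin1_general_ci;
DECLARE proc_vendor VARCHAR(100) CHARACTER SET utf8mb4;
DECLARE proc_email_domain VARCHAR(255) CHARACTER SET utf8mb4;
DECLARE proc_invoice_date DATE;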

How to ignore next duplicated row?

I need your help!
I have a table:
CREATE TABLE `table` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`res` varchar(255) DEFAULT NULL,
`value` int(6) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=8 DEFAULT CHARSET=utf8;
-- Records of table
INSERT INTO `table` VALUES (1, 'gold', 44);
INSERT INTO `table` VALUES (2, 'gold', 44);
INSERT INTO `table` VALUES (3, 'gold', 45);
INSERT INTO `table` VALUES (4, 'gold', 46);
INSERT INTO `table` VALUES (5, 'gold', 44);
INSERT INTO `table` VALUES (6, 'gold', 44);
INSERT INTO `table` VALUES (7, 'gold', 44);
INSERT INTO `table` VALUES (8, 'gold', 47);
I need to make a SELECT request which ignores the next or previous duplicated rows, so that I receive data like this:
- gold:44 (ignored 1 record)
- gold:45
- gold:46
- gold:44 (ignored 2 records)
- gold:47
It does not matter which of the duplicated records is ignored (first, second, or last).
(I tried to use GROUP BY value or DISTINCT, but this way removes the other records with the same value.)
You can solve this with a gaps and islands solution.
- Normally that involves ROW_NUMBER(), which is not present in MySQL before 8.0
- The solution below mimics ROW_NUMBER() with variables and ORDER BY
Link to example : http://sqlfiddle.com/#!9/32e72/12
SELECT
MIN(id) AS id,
res,
value
FROM
(
SELECT
IF (@res = res AND @val = value, @row := @row + 1, @row := 1) AS val_ordinal,
id AS id,
res_ordinal AS res_ordinal,
@res := res AS res,
@val := value AS value
FROM
(
SELECT
IF (@res = res , @row := @row + 1, @row := 1) AS res_ordinal,
id AS id,
@res := res AS res,
@val := value AS value
FROM
`table`,
(
SELECT @row := 0, @res := '', @val := 0
)
AS initialiser
ORDER BY
res, id
)
AS sequenced_res_id,
(
SELECT @row := 0, @res := '', @val := 0
)
AS initialiser
ORDER BY
res, value, id
)
AS sequenced_res_val_id
GROUP BY
res,
value,
res_ordinal - val_ordinal
ORDER BY
MIN(id)
;
If I add res_ordinal, val_ordinal and res_ordinal - val_ordinal to your data, it can be seen that you can now differentiate between the two sets of 44
GROUP
INSERT INTO `table` VALUES ('1', 'gold', '44'); 1 - 1 = 0 (Gold, 44, 0)
INSERT INTO `table` VALUES ('2', 'gold', '44'); 2 - 2 = 0
INSERT INTO `table` VALUES ('3', 'gold', '45'); 3 - 1 = 2 (Gold, 45, 2)
INSERT INTO `table` VALUES ('4', 'gold', '46'); 4 - 1 = 3 (Gold, 46, 3)
INSERT INTO `table` VALUES ('5', 'gold', '44'); 5 - 3 = 2 (Gold, 44, 2)
INSERT INTO `table` VALUES ('6', 'gold', '44'); 6 - 4 = 2
INSERT INTO `table` VALUES ('7', 'gold', '44'); 7 - 5 = 2
INSERT INTO `table` VALUES ('8', 'gold', '47'); 8 - 1 = 7 (Gold, 47, 7)
NOTE: According to your data I could use id instead of making my own res_ordinal. Doing it this way, however, copes with gaps in the id sequence and with having multiple different resources. This means that in the following example the two golds are considered to be duplicates of each other...
1 Gold 44 1 - 1 = 0 (Gold, 44, 0)
2 Poop 45 1 - 1 = 0 (Poop, 45, 0)
3 Gold 44 2 - 2 = 0 (Gold, 44, 0) -- Duplicate
4 Gold 45 3 - 1 = 2 (Gold, 45, 2)
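For what it's worth, on MySQL 8.0+ the same gaps-and-islands idea can be written with real window functions instead of user variables; a minimal sketch against the table above:
SELECT MIN(id) AS id, res, value
FROM (
SELECT id, res, value,
ROW_NUMBER() OVER (PARTITION BY res ORDER BY id)
- ROW_NUMBER() OVER (PARTITION BY res, value ORDER BY id) AS grp
FROM `table`
) AS sequenced
GROUP BY res, value, grp
ORDER BY MIN(id);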
select t1.*
from `table` t1
where not exists ( select 1
from `table` t2
where t1.id = 1+t2.id
and t1.res = t2.res
and t1.value = t2.value
);
works fine (note that it relies on the duplicated rows having consecutive ids)
Use the DISTINCT clause to select unique rows like so:
SELECT DISTINCT res, value FROM table
Use Select DISTINCT res, value FROM table ... to avoid redundancy

Group by, with rank and sum - not getting correct output

I'm trying to sum a column with a rank function and group by month; my code is
select dbo.UpCase( REPLACE( p.Agent_name,'.',' '))as Agent_name, SUM(convert ( float ,
p.Amount))as amount,
RANK() over( order by SUM(convert ( float ,Amount )) desc ) as arank
from dbo.T_Client_Pc_Reg p
group by p.Agent_name ,p.Sale_status ,MONTH(Reg_date)
having [p].Sale_status='Activated'
Currently I'm getting the overall total of that column, not month-wise:
Name amount rank
a 100 1
b 80 2
c 50 3
For 'a', the amount 100 is the total amount up to now, but I want to get the current month's total amount, not previous months'.
Maybe you just need to add a WHERE clause? Here is a minor re-write that I think works generally better. Some setup in tempdb:
USE tempdb;
GO
CREATE TABLE dbo.T_Client_Pc_Reg
(
Agent_name VARCHAR(32),
Amount INT,
Sale_Status VARCHAR(32),
Reg_date DATETIME
);
INSERT dbo.T_Client_Pc_Reg
SELECT 'a', 50, 'Activated', GETDATE()
UNION ALL SELECT 'a', 50, 'Activated', GETDATE()
UNION ALL SELECT 'b', 20, 'Activated', GETDATE()
UNION ALL SELECT 'b', 20, 'Activated', GETDATE()
UNION ALL SELECT 'b', 20, 'Activated', GETDATE()
UNION ALL SELECT 'b', 20, 'Activated', GETDATE()
UNION ALL SELECT 'b', 20, 'NotActivated', GETDATE()
UNION ALL SELECT 'c', 25, 'Activated', GETDATE()
UNION ALL SELECT 'c', 25, 'Activated', GETDATE()
UNION ALL SELECT 'c', 25, 'Activated', GETDATE()-40;
Then the query:
SELECT
Agent_name = UPPER(REPLACE(Agent_name, '.', '')),
Amount = SUM(CONVERT(FLOAT, Amount)),
arank = RANK() OVER (ORDER BY SUM(CONVERT(FLOAT, Amount)) DESC)
FROM dbo.T_Client_Pc_Reg
WHERE Reg_date >= DATEADD(MONTH, DATEDIFF(MONTH, 0, CURRENT_TIMESTAMP), 0)
AND Reg_date < DATEADD(MONTH, DATEDIFF(MONTH, 0, CURRENT_TIMESTAMP) + 1, 0)
AND Sale_status = 'Activated'
GROUP BY UPPER(REPLACE(Agent_name, '.', ''))
ORDER BY arank;
Now cleanup:
USE tempdb;
GO
DROP TABLE dbo.T_Client_Pc_Reg;