mysqli Stored Proc Cross tab [duplicate] - mysql

Following this post: POST ABOUT CONCAT
My problem is that I have many rows concatenated into one row with GROUP_CONCAT. For example, if I have 10 rows with strings of around 50 characters each, my query shows me only 6-7 of those rows, or something like that.
I searched Stack Overflow and Google and found that I can change the GROUP_CONCAT maximum length with this command: SET group_concat_max_len := @@max_allowed_packet. What am I doing wrong?
EDIT:
When I run SHOW VARIABLES LIKE 'group_concat_max_len' it shows me 1024.
MySQL version 5.0.96-log. Table type: MyISAM. The tables themselves don't seem to have any limits; I tried selecting a simple varchar with 2000 characters and it looks fine.
I have 3 tables: 1st - Item with ItemID, 2nd - Descriptionpack with ItemID and DescriptionID, 3rd Description with DescriptionID.
SELECT DISTINCT
    Item.ItemID AS item,
    GROUP_CONCAT(Description.DescriptionID) AS description
FROM Item
LEFT OUTER JOIN descriptionpack
    ON Item.ItemID = descriptionpack.ItemID
LEFT OUTER JOIN description
    ON descriptionpack.descriptionID = description.descriptionID
GROUP BY item
EDIT2: I think I found the problem. I described my problem to my provider and they answered me with this:
I reviewed your question with our hosting team. You wouldn't be able
to change the global settings for that and other variables. However,
you should be able to set that variable on a per session basis by
setting it first, before other queries. Hope that helps.
So now the problem is, how to do that.

Presumably you're using GROUP_CONCAT(), not simple CONCAT().
The default value of group_concat_max_len is 1024, which is a pretty small limit if you're building up big long concatenations.
To change it, use this command. I've set the length in this example to 100,000. You could set it to anything you need.
SET SESSION group_concat_max_len = 100000;
The usual value for max_allowed_packet is one megabyte, which is likely more than you need.
group_concat_max_len itself has an effectively unlimited size. It's limited only by the unsigned word length of the platform: 2^32-1 on a 32-bit platform and 2^64-1 on a 64-bit platform.
If that still isn't enough for your application, it's time to take #eggyal's suggestion and rethink your approach.
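Since the hosting provider only allows per-session changes, a minimal sketch (reusing the query from the question) is to issue the SET in the same connection, right before the GROUP_CONCAT query:
-- Raise the limit for this session only (no SUPER privilege needed),
-- then run the aggregation over the same connection:
SET SESSION group_concat_max_len = 100000;
SELECT
    Item.ItemID AS item,
    GROUP_CONCAT(Description.DescriptionID) AS description
FROM Item
LEFT OUTER JOIN descriptionpack ON Item.ItemID = descriptionpack.ItemID
LEFT OUTER JOIN description ON descriptionpack.descriptionID = description.descriptionID
GROUP BY Item.ItemID;
With mysqli, both statements must go over the same connection (for example, two successive mysqli_query() calls on the same link), because the setting is scoped to that session.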

You need to change the group_concat_max_len default value in the config file below:
**my.cnf file (Linux) or my.ini file (Windows)**
Add the following line under the [mysqld] section:
[mysqld]
group_concat_max_len=15000
Note: after the change is done you need to restart your MySQL server.
my.cnf file paths in Linux:
1. /etc/my.cnf
2. /etc/mysql/my.cnf
3. $MYSQL_HOME/my.cnf
4. [datadir]/my.cnf
5. ~/.my.cnf
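Once the server has been restarted, a quick sanity check (just a sketch) is to confirm the new value actually took effect:
SHOW VARIABLES LIKE 'group_concat_max_len';
-- should now report 15000 instead of the old default of 1024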

Related

Alternative to MySQL's GROUP_CONCAT function

I'm retrieving the data with the MySQL function "GROUP_CONCAT()".
But when I checked the result of "GROUP_CONCAT()" function related column, it was missing some data.
When I googled the missing-record issue with the "GROUP_CONCAT()" function, the official MySQL site mentions the following:
There is a global variable called group_concat_max_len that sets the maximum result length in bytes for the GROUP_CONCAT() function; its default value is 1024.
Therefore it seems I have to increase that value with the following MySQL command:
SET GLOBAL group_concat_max_len = 1000000;
To set this value permanently, I would have to edit the MySQL server configuration file (my.cnf or my.ini) and restart the server.
But unfortunately I don't have permission to do so.
So can you please help me find some alternative solution to fix this issue?
Thanks a lot.
Use SET SESSION instead:
SET SESSION group_concat_max_len = 1000000;
Unlike SET GLOBAL, SET SESSION does not require the SUPER privilege.
Reference
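A quick sketch of verifying that the session-level change took effect in the same connection, before running the GROUP_CONCAT() query:
SET SESSION group_concat_max_len = 1000000;
SELECT @@SESSION.group_concat_max_len;  -- should return 1000000 for this connection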

phpMyadmin Query Execution Time

I tried to run the following query in my Cloud VPS cPanel phpMyadmin
SELECT bankcode, bankname
FROM newbankdetails
WHERE
type='DB' AND
bankcode IN ( SELECT drawingbankname
FROM dd1
WHERE entrydate BETWEEN '01/04/2014' AND '30/04/2014'
AND paymentmode='DD'
AND state='TAMIL NADU' )
It takes a very long time, nearly 5 to 6 hours, and then reports it is out of time, or keeps running for days without showing any error or any results.
Whereas it works perfectly on my local machine with XAMPP.
It only happens when working on large tables, around 12 GB and the like.
How can I speed it up so the result displays as instantly as it does in localhost XAMPP?
I think replacing IN with a LEFT JOIN can make more sense, like this:
SELECT
    nd.bankcode, nd.bankname
FROM newbankdetails nd
LEFT JOIN dd1
    ON nd.bankcode = dd1.drawingbankname
WHERE
    nd.type = 'DB'
    AND dd1.entrydate BETWEEN '01/04/2014' AND '30/04/2014'
    AND dd1.paymentmode = 'DD'
    AND dd1.state = 'TAMIL NADU'
I can't test it, but I think that with dd1.paymentmode = 'DD' the rows where dd1.paymentmode IS NULL (produced by the LEFT JOIN) are automatically removed.
First, you should use EXPLAIN (https://dev.mysql.com/doc/refman/5.0/en/using-explain.html).
That will give you information about what is slow.
Your problem seems to be related to the size of the table. On your localhost you may have a small table, but in production it is very large. Also put an index on the first table on nd.bankcode.
You should try to remove the subselect, and use a JOIN instead.
Also, you should put indexes on the columns you use for search.
Put an index on drawingbankname, entrydate, paymentmode, and state. Also put an index on nd.bankcode.
Column drawingbankname is VARCHAR; you should convert it to fixed CHAR.
Remove the subselect and use JOIN.
Use EXPLAIN after the changes to see progress.
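As a rough sketch of those suggestions, using the table and column names from the question (which indexes actually help should be confirmed with EXPLAIN):
-- Indexes on the join and filter columns (names taken from the question)
CREATE INDEX idx_dd1_drawingbankname ON dd1 (drawingbankname);
CREATE INDEX idx_dd1_filters ON dd1 (paymentmode, state, entrydate);
CREATE INDEX idx_newbankdetails_bankcode ON newbankdetails (bankcode);

-- Re-check the plan of the JOIN version of the query
EXPLAIN
SELECT nd.bankcode, nd.bankname
FROM newbankdetails nd
JOIN dd1 ON nd.bankcode = dd1.drawingbankname
WHERE nd.type = 'DB'
  AND dd1.paymentmode = 'DD'
  AND dd1.state = 'TAMIL NADU'
  AND dd1.entrydate BETWEEN '01/04/2014' AND '30/04/2014';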

MySQL doesn't work properly when query is very big

When the number of pIDs in the query is very big, I don't receive the expected results. More exactly, I receive nothing.
Note that the same query works OK on another server.
I think (but I'm not sure) that problem may be in MySQL configuration but don't know how to solve it.
Does anyone know how can I resolve this issue?
Query looks like this:
SELECT `filtID`,`pID` FROM (
(SELECT `filtID`,`pID` FROM `ftox_params_prod_links`
LEFT JOIN `ftox_params_values` USING(`fvID`)
WHERE (`filtID` IN (1,4,5,14,15,302,303,304,388,389,390)))
UNION (SELECT `filtID`,`pID` FROM `ftox_params_prod_values`
WHERE (`filtID` IN (1,4,5,14,15,302,303,304,388,389,390)))) AS `_T_`
WHERE (`pID` IN (173,174,175,176,177,178,179,180,181,182,183,
184,185,186,187,188,189,190,191,192,193,194,195,196,197,
198,199,200,201,202,203,204,205,206,207,208,209, ...
....................................................
...................Very much pIDs..................
....................................................
)) ORDER BY `filtID` ASC
MySQL has a limit on query size, defined by the max_allowed_packet configuration value in my.cnf.
Or the result of the query may be too big - try increasing the memory_limit PHP variable in php.ini.
Example:
memory_limit = 256M
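As a sketch on the MySQL side, you can check the current packet limit and, if you have sufficient privileges, raise it without editing my.cnf (the 64 MB value here is only an example; the global change applies to new connections):
SHOW VARIABLES LIKE 'max_allowed_packet';
SET GLOBAL max_allowed_packet = 67108864;  -- 64 MB; requires administrative privileges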

What is the maximum allowance for group_concat_max_len in MySQL?

I am using a group_concat to concatenate a lot of rows into one.
I set group_concat_max_len to 10000 using:
SET group_concat_max_len = 10000;
But even then, my output cells remain incomplete and end with ...
I tried setting group_concat_max_len = 20000 and even that didn't help.
I also tried setting group_concat_max_len to 99999999. It still doesn't complete my output text, and I checked: one of the group_concat results stops at a length of 230 characters and then shows ...
Is there any other way?
Check out this link: https://dev.mysql.com/doc/refman/5.6/en/server-system-variables.html#sysvar_group_concat_max_len
All the MySQL configuration variables are documented on that page, with details like the minimum, maximum, and default values, whether you can set them globally or per session, whether you can change them on a running instance or a restart is required, and other notes on usage.
The maximum value for group_concat_max_len is 18446744073709551615.
The group-concat string does not end with "..." If you try to group too much text, it just gets truncated. So I wonder if the problem is not with MySQL's settings, but with the display of your cells.
For 32bit systems, the maximum value is 4294967295
For 64 bit systems, the maximum value is 18446744073709551615.
You can set the variable for your current session using
SET SESSION group_concat_max_len=4294967295;
To set the variable for all new sessions (note that it will not survive a server restart unless it is also placed in the config file), use
SET GLOBAL group_concat_max_len=4294967295;
(see http://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_group_concat_max_len)
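Putting that together, a small sketch for a 64-bit server (on a 32-bit build you would use 4294967295 instead) is:
SET SESSION group_concat_max_len = 18446744073709551615;
SELECT @@SESSION.group_concat_max_len;  -- verify the new limit before rerunning the GROUP_CONCAT query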

MYSQL updates column with random but specific entry

MySQL update seems to update with a magic number of 2147483647 when I try to update with 0123456789.
Somewhere, MySQL seems to substitute one number for another on an INT column in any schema. Where do I even look for such an association? Details are below.
This update does what it is supposed to and enters 012345678 into the ContactPhone2 column for the appropriate row.
UPDATE `alacarte`.`customercontacts` SET `ContactPhone2`='012345678' WHERE `CustomerID`='cust-000004' and`ContactID`='1'
This update actually enters 2147483647 in the ContactPhone2 column on the appropriate row, far from 0123456789.
UPDATE `alacarte`.`customercontacts` SET `ContactPhone2`='0123456789' WHERE `CustomerID`='cust-000004' and`ContactID`='1'
Datatype for the ContactPhone2 is INT(10) with a default value of NULL and NO parameters set (OK, NN, AI, etc)
This is from the MYSQL general log.
For the 012345678 update.
130101 17:51:43 89 Query set autocommit=0
130101 17:51:44 89 Prepare UPDATE `alacarte`.`customercontacts` SET `ContactPhone2`='012345678' WHERE `CustomerID`='cust-000004' and`ContactID`='1'
89 Execute UPDATE `alacarte`.`customercontacts` SET `ContactPhone2`='012345678' WHERE `CustomerID`='cust-000004' and`ContactID`='1'
89 Query commit
89 Close stmt
And the log entry for the 0123456789 update.
130101 17:51:48 89 Query set autocommit=0
130101 17:51:49 89 Prepare UPDATE `alacarte`.`customercontacts` SET `ContactPhone2`='0123456789' WHERE `CustomerID`='cust-000004' and`ContactID`='1'
89 Execute UPDATE `alacarte`.`customercontacts` SET `ContactPhone2`='0123456789' WHERE `CustomerID`='cust-000004' and`ContactID`='1'
89 Query commit
89 Close stmt
Updating with 0123456780 works, so it is not the number of digits.
This happens on ANY column throughout the database with an INT(10+) type, but not on VARCHAR columns.
Even better, it does the same thing on another schema called thedesignedge, the old schema that has since been copied and renamed; it is still active in MySQL although unused.
There are NO TRIGGERS running on the column, and only one trigger running on the table, on the ContactID column. No errors are given either.
Queries have generally been made through MySQL Workbench, although I once tried entering the update query directly through the shell in a terminal and got the same results.
Somewhere, MySQL seems to substitute one number for another on an INT column in any schema. Where do I even look for such an association?
We have not done anything with caching or indexing yet, aside from whatever MySQL defaults to. We are running MySQL 5.5.29.
A "magic" number of 2147483647 is 2^31-1, which is the upper limit for an int. it means it thinks you put in a number that was too big.
Best option is going to be to use varchar to store your phone numbers. make sure you make it big enough to handle all your expected cases, and I strongly recommend formatting the phone number using a regular expression to clean the input for you. Here's a simple way to format a phone number in php if you only need (xxx) xxx-xxxx and not extensions or anything funny (like international numbers).
$phoneNumber = '1-(235) 555.1234';
$formatted = '';
if (preg_match('/1?[^0-9]*([02-9][0-9][0-9])[^0-9]*([0-9]{3})[^0-9]*([0-9]{4})/', $phoneNumber, $matches)) {
    $formatted = "($matches[1]) $matches[2]-$matches[3]";
} else {
    // phone number is invalid
}
The result will be $formatted = (235) 555-1234. The regular expression includes an optional 1 prefix that gets discarded and the first actual number cannot be 1.
Obviously, you should use this same regular expression to validate the phone number before you accept it if you use it to format it.
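If you go the VARCHAR route suggested above, a minimal sketch of the schema change (the length of 20 is an assumption, pick whatever covers your longest expected number) would be:
ALTER TABLE `alacarte`.`customercontacts`
    MODIFY `ContactPhone2` VARCHAR(20) DEFAULT NULL;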
You're probably overflowing your INT value. When MySQL overflows, by default it stores the largest value the data type supports. The largest value for a signed 32-bit integer is 2^31-1, which is 2147483647.
The values '012345678' and '0123456789' are okay. That is, they are within the range of an INT, and they insert fine. So I doubt those values are really causing the trouble.
I would look for some other SQL statement that's updating with a different value that exceeds the range. For example, someone may have tried to add their phone number with a phone extension.
You can enable strict mode, so that integer overflows cause an error instead of silently truncating the value. That'll tell you more clearly when it's happening.
SET SQL_MODE='STRICT_ALL_TABLES';
See http://dev.mysql.com/doc/refman/5.5/en/server-sql-mode.html
PS: You mention INT(10) as though the integer argument matters to the range of values that the data type supports. That's a common misconception, but it doesn't matter. An INT is always 32 bits. See What is the difference (when being applied to my code) between INT(10) and INT(12)?
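As a small illustration of that overflow behavior on a throwaway table (not one from the question): without strict mode the out-of-range value is clamped to 2147483647 with a warning, while with STRICT_ALL_TABLES the same statement fails with an error.
CREATE TABLE overflow_demo (n INT);              -- hypothetical test table
INSERT INTO overflow_demo VALUES (99999999999);  -- clamped to 2147483647, with a warning
SHOW WARNINGS;                                   -- "Out of range value for column 'n'"
SET SQL_MODE = 'STRICT_ALL_TABLES';
INSERT INTO overflow_demo VALUES (99999999999);  -- now rejected with an error instead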
What MySQL version are you using? I can't duplicate that.
Does SET ContactPhone2 = ROUND('0123456789') help? (Just changing to VARCHAR is better, presuming you want to keep leading zeroes.)