I'm trying to build JSON from the columns of a table using json_build_object, but it fails with the error "cannot pass more than 100 arguments to a function", even though I didn't think my argument count exceeded 100.
The code is as follows:
array_agg(json_build_object
(
'QuotaName',quota_name,
'QuotaId',quota_id,
'CellId',COALESCE(cell_id,0),
'ValidPanelistCountOtherMedias',COALESCE(valid_panelist_count,0) ,
'ValidPanelistCountMM',COALESCE(mm_valid_panelist_count,0) ,
'Gender',COALESCE(replace(replace(replace(gender,',',':'),']',''),'[',''),''),
'Occupation',COALESCE(replace(replace(replace(occupation_id,',',':'),']',''),'[',''),''),
'Industry',COALESCE(replace(replace(replace(industry_id,',',':'),']',''),'[',''),''),
'Prefecture',COALESCE(replace(replace(replace(prefecture_id,',',':'),']',''),'[',''),''),
'Age1',COALESCE(replace(replace(replace(age,',',':'),']',''),'[',''),''),
'Age2',COALESCE(replace(replace(replace(age2,',',':'),']',''),'[',''),''),
'MaritalStatus',COALESCE(replace(replace(replace(marital_status,',',':'),']',''),'[',''),''),
'HouseHoldIncome',COALESCE(replace(replace(replace(house_income_id,',',':'),']',''),'[',''),''),
'PersonalIncome',COALESCE(replace(replace(replace(personal_income_id,',',':'),']',''),'[',''),''),
'hasChild',COALESCE(replace(replace(replace(has_child,',',':'),']',''),'[',''),''),
'MediaId',COALESCE(replace(replace(replace(media_id,',',':'),']',''),'[',''),''),
'DeviceUsed',COALESCE(replace(replace(replace(device_type,',',':'),']',''),'[',''),''),
'PanelistStatus','',
'IR1', COALESCE(ir_1,1) ,
'IR2', COALESCE(ir_2,1) ,
'IR3', COALESCE(ir_3,1) ,
'Population',COALESCE(population,0),
'MainSurveySampleHopes', COALESCE(sample_hope_main_survey,0) ,
'ScreeningSurveySampleHopes', COALESCE(sample_hope_main_scr,0),
'ParticipateIntentionMM' ,COALESCE(participate_intention_mm,0) ,
'ParticipateIntentionOthers' ,COALESCE(participate_intention,0) ,
'AcquisitionRate', COALESCE(acquisition_rate,0) ,
'PCEnvironment', COALESCE(case when survey_type >3 then 1 else pc_env end,0) ,
'NetworkEnvironment',COALESCE(case when survey_type >3 then 1 else network_env end,0) ,
'PCEnvironmentMM',COALESCE(case when survey_type >3 then 1 else pc_env_mm end,0),
'NetworkEnvironmentMM',COALESCE(case when survey_type >3 then 1 else network_env_mm end,0) ,
'ControlQuotient',COALESCE(control_quotient,0)/100 ,
'ResponseofSCR24' , COALESCE(res_of_scr_24,0),
'ResponseofSCR48' ,COALESCE(res_of_scr_48,0) ,
'ResponseofSCR72' ,COALESCE(res_of_scr_72,0) ,
'ResponseofSCR168' ,COALESCE(res_of_scr_168,0),
'ResponseofMAIN24' ,COALESCE(res_of_main_24,0) ,
'ResponseofMAIN48' , COALESCE(res_of_main_48,0) ,
'ResponseofMAIN72' , COALESCE(res_of_main_72,0) ,
'ResponseofMAIN168' , COALESCE(res_of_main_168,0),
'ResponseofSCR24MM' ,COALESCE(res_of_scr_24_mm,0) ,
'ResponseofSCR48MM' , COALESCE(res_of_scr_48_mm,0),
'ResponseofSCR72MM' , COALESCE(res_of_scr_72_mm,0) ,
'ResponseofSCR168MM' ,COALESCE(res_of_scr_168_mm,0) ,
'ResponseofMAIN24MM' ,COALESCE(res_of_main_24_mm,0),
'ResponseofMAIN48MM' ,COALESCE(res_of_main_48_mm,0),
'ResponseofMAIN72MM' ,COALESCE(res_of_main_72_mm,0),
'ResponseofMAIN168MM' ,COALESCE(res_of_main_168_mm,0),
'ResponseofMAINIntegrationType',0.9,-- this value is based on answer_estimate_list_details_v3
'ParticipationIntention',COALESCE(participate_intention,0),
'MostRecentParticipation',COALESCE(most_recent_exclusions,0)
)
)
I had the exact same problem earlier today. After some research, I found that JSONB results can be concatenated. So you should use JSONB_BUILD_OBJECT instead of JSON_BUILD_OBJECT, then split things up so you have multiple JSONB_BUILD_OBJECT calls, combined with the || operator. (Each key/value pair counts as two arguments, so the 50-plus pairs above already exceed the 100-argument limit.) You'll also need JSONB_AGG to aggregate the results into an array.
JSONB_AGG(
JSONB_BUILD_OBJECT (
'QuotaName',quota_name,
'QuotaId',quota_id,
'CellId',COALESCE(cell_id,0),
'ValidPanelistCountOtherMedias',COALESCE(valid_panelist_count,0) ,
'ValidPanelistCountMM',COALESCE(mm_valid_panelist_count,0) ,
'Gender',COALESCE(replace(replace(replace(gender,',',':'),']',''),'[',''),''),
'Occupation',COALESCE(replace(replace(replace(occupation_id,',',':'),']',''),'[',''),''),
'Industry',COALESCE(replace(replace(replace(industry_id,',',':'),']',''),'[',''),''),
'Prefecture',COALESCE(replace(replace(replace(prefecture_id,',',':'),']',''),'[',''),''),
'Age1',COALESCE(replace(replace(replace(age,',',':'),']',''),'[',''),''),
'Age2',COALESCE(replace(replace(replace(age2,',',':'),']',''),'[',''),''),
'MaritalStatus',COALESCE(replace(replace(replace(marital_status,',',':'),']',''),'[',''),''),
'HouseHoldIncome',COALESCE(replace(replace(replace(house_income_id,',',':'),']',''),'[',''),''),
'PersonalIncome',COALESCE(replace(replace(replace(personal_income_id,',',':'),']',''),'[',''),''),
'hasChild',COALESCE(replace(replace(replace(has_child,',',':'),']',''),'[',''),''),
'MediaId',COALESCE(replace(replace(replace(media_id,',',':'),']',''),'[',''),''),
'DeviceUsed',COALESCE(replace(replace(replace(device_type,',',':'),']',''),'[',''),''),
'PanelistStatus','',
'IR1', COALESCE(ir_1,1) ,
'IR2', COALESCE(ir_2,1) ,
'IR3', COALESCE(ir_3,1) ,
'Population',COALESCE(population,0),
'MainSurveySampleHopes', COALESCE(sample_hope_main_survey,0) ,
'ScreeningSurveySampleHopes', COALESCE(sample_hope_main_scr,0),
'ParticipateIntentionMM' ,COALESCE(participate_intention_mm,0) ,
'ParticipateIntentionOthers' ,COALESCE(participate_intention,0) ,
'AcquisitionRate', COALESCE(acquisition_rate,0) ,
'PCEnvironment', COALESCE(case when survey_type >3 then 1 else pc_env end,0) ,
'NetworkEnvironment',COALESCE(case when survey_type >3 then 1 else network_env end,0) ,
'PCEnvironmentMM',COALESCE(case when survey_type >3 then 1 else pc_env_mm end,0),
'NetworkEnvironmentMM',COALESCE(case when survey_type >3 then 1 else network_env_mm end,0) ,
'ControlQuotient',COALESCE(control_quotient,0)/100 ,
'ResponseofSCR24' , COALESCE(res_of_scr_24,0),
'ResponseofSCR48' ,COALESCE(res_of_scr_48,0) ,
'ResponseofSCR72' ,COALESCE(res_of_scr_72,0) ,
'ResponseofSCR168' ,COALESCE(res_of_scr_168,0),
'ResponseofMAIN24' ,COALESCE(res_of_main_24,0) ,
'ResponseofMAIN48' , COALESCE(res_of_main_48,0) ,
'ResponseofMAIN72' , COALESCE(res_of_main_72,0) ,
'ResponseofMAIN168' , COALESCE(res_of_main_168,0),
'ResponseofSCR24MM' ,COALESCE(res_of_scr_24_mm,0) ,
'ResponseofSCR48MM' , COALESCE(res_of_scr_48_mm,0),
'ResponseofSCR72MM' , COALESCE(res_of_scr_72_mm,0) ,
'ResponseofSCR168MM' ,COALESCE(res_of_scr_168_mm,0) ,
'ResponseofMAIN24MM' ,COALESCE(res_of_main_24_mm,0),
'ResponseofMAIN48MM' ,COALESCE(res_of_main_48_mm,0),
'ResponseofMAIN72MM' ,COALESCE(res_of_main_72_mm,0),
'ResponseofMAIN168MM' ,COALESCE(res_of_main_168_mm,0)
) ||
JSONB_BUILD_OBJECT (
'ResponseofMAINIntegrationType',0.9,-- this value is based on answer_estimate_list_details_v3
'ParticipationIntention',COALESCE(participate_intention,0),
'MostRecentParticipation',COALESCE(most_recent_exclusions,0)
)
)
I got this from documentation here - https://www.postgresql.org/docs/current/functions-json.html#FUNCTIONS-JSONB-OP-TABLE
Look for "jsonb || jsonb"
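As a compact sketch of the same workaround (using a hypothetical table t with columns a, b, c), splitting one oversized call into two concatenated jsonb_build_object calls looks like this:

```sql
-- jsonb || jsonb merges two objects, so each jsonb_build_object call
-- stays under PostgreSQL's 100-argument limit
-- (t, a, b, c are hypothetical names for illustration)
SELECT jsonb_agg(
         jsonb_build_object('A', a, 'B', b)  -- first batch of key/value pairs
         || jsonb_build_object('C', c)       -- second batch, merged in
       )
FROM t;
```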
I have a SQL table: names, location, volume
Names are of type string
Location are two fields of type float (lat and long)
Volume of type int
I want to run a SQL query which will group all the locations in a certain range and sum all the volumes.
For instance, group all the locations from 1.001 to 2 degrees lat and 1.001 to 2 degrees long into one, with all their volumes summed; then from 2.001 to 3 degrees lat and long, and so on.
In short, I want to sum all the volumes in a geographical area whose size I can choose.
I do not care about the name; I only need the location (which could be any of the grouped ones, or an average) and the volume sum.
Here is a sample table:
CREATE TABLE IF NOT EXISTS `example` (
`name` varchar(12) NOT NULL,
`lat` float NOT NULL,
`lng` float NOT NULL,
`volume` int(11) NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
INSERT INTO `example` (`name`, `lat`, `lng`, `volume`) VALUES
("one", 1.005, 1.007, 2),
("two", 1.25, 1.907, 3),
("three", 2.065, 65.007, 2),
("four", 2.905, 65.1, 10),
("five", 12.3, 43.8, 5),
("six", 12.35, 43.2, 2);
For which the return query for an area of size one degree could be:
1.005, 1.007, 5
2.065, 65.007, 12
12.3, 43.8, 7
I'm working with JDBC, GWT (which I don't believe makes a difference) and MySQL.
If you are content with truncating at the decimal point, then use round() or truncate():
select truncate(latitude, 0) as lat0, truncate(longitude, 0) as long0, sum(volume)
from t
group by truncate(latitude, 0), truncate(longitude, 0)
A more general solution defines two variables for the precision:
set @LatPrecision = 0.25, @LongPrecision = 0.25;
select floor(latitude/@LatPrecision)*@LatPrecision,
floor(longitude/@LongPrecision)*@LongPrecision,
sum(volume)
from t
group by floor(latitude/@LatPrecision),
floor(longitude/@LongPrecision)
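Applied to the sample `example` table from the question (columns lat/lng, one-degree cells), the same idea would look roughly like this; AVG() is just one way to pick a representative location per cell:

```sql
-- MySQL: bucket points into 1-degree grid cells and sum their volumes
SET @CellSize = 1;
SELECT AVG(lat) AS lat, AVG(lng) AS lng, SUM(volume) AS volume
FROM example
GROUP BY FLOOR(lat/@CellSize), FLOOR(lng/@CellSize);
```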
Convert the latitude and longitude from float to int, and then group by the converted values. When a float such as 2.1 or 2.7 is converted, it becomes 2, so all values from 2.000 to 2.999 share the same converted value of 2. I come from SQL Server, so the SQL below is based on SQL Server:
select cast(lat as int), cast(lng as int), sum(volume)
from example
group by cast(lat as int), cast(lng as int)
Maybe I am super late to post this answer:
sqlfiddle demo
Code:
select round(x.lat,4), round(x.lng,4),
sum(x.volume)
from (
select
case when lat >= 1.00 and lng <2
then 'loc1' end loc1,
case when lat >= 2.00 and lng <3
then 'loc2' end loc2,
case when lat >= 3.00 and lng >10
then 'loc3' end loc3,
lat, lng,
volume
from example) as x
group by x.loc1, x.loc2, x.loc3
order by x.lat, x.lng asc
;
Results:
ROUND(X.LAT,4) ROUND(X.LNG,4) SUM(X.VOLUME)
1.005 1.007 5
2.065 65.007 12
12.3 43.8 7
I have a float column with numbers of different length and I'm trying to convert them to varchar.
Some values exceed bigint max size, so I can't do something like this
cast(cast(float_field as bigint) as varchar(100))
I've tried using decimal, but the numbers aren't all the same size, so this doesn't help either:
CONVERT(varchar(100), Cast(float_field as decimal(38, 0)))
Any help is appreciated.
UPDATE:
Sample value is 2.2000012095022E+26.
Try using the STR() function.
SELECT STR(float_field, 25, 5)
STR() Function
Another note: this pads on the left with spaces. If this is a problem, combine it with LTRIM:
SELECT LTRIM(STR(float_field, 25, 5))
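As a sketch with the sample value from the question (note that any digits beyond float's roughly 15 significant digits are not meaningful):

```sql
-- width 30 comfortably fits the 27-digit integer part of 2.2E+26
DECLARE @f float = 2.2000012095022E+26;
SELECT LTRIM(STR(@f, 30, 0));
```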
The only query bit I found that returns the EXACT same original number is
CONVERT (VARCHAR(50), float_field,128)
See http://www.connectsql.com/2011/04/normal-0-microsoftinternetexplorer4.html
The other solutions above will sometimes round or add digits at the end
UPDATE: As per the comments below and what I can see in https://msdn.microsoft.com/en-us/library/ms187928.aspx:
CONVERT (VARCHAR(50), float_field, 3)
should be used in newer SQL Server versions (Azure SQL Database, and starting with SQL Server 2016 RC3).
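For instance, a sketch with the question's sample value; style 3 uses up to 17 significant digits, enough to round-trip any double-precision float:

```sql
-- requires SQL Server 2016+ / Azure SQL Database
DECLARE @f float = 2.2000012095022E+26;
SELECT CONVERT(varchar(50), @f, 3);
```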
This is the solution I ended up using in SQL Server 2012 (since all the other suggestions had the drawback of truncating the fractional part, or some other drawback):
declare @float float = 1000000000.1234;
select format(@float, N'#.##############################');
output:
1000000000.1234
this has the further advantage (in my case) to make thousands separator and localization easy:
select format(@float, N'#,##0.##########', 'de-DE');
output:
1.000.000.000,1234
SELECT LTRIM(STR(float_field, 25, 0))
is the best way, so you do not add .0000 or any extra digits at the end of the value.
Convert into an integer first and then into a string:
cast((convert(int,b.tax_id)) as varchar(20))
Useful topic, thanks.
If, like me, you want to remove the trailing zeros, you can use this:
DECLARE @MyFloat [float];
SET @MyFloat = 1000109360.050;
SELECT REPLACE(RTRIM(REPLACE(REPLACE(RTRIM(LTRIM(REPLACE(STR(@MyFloat, 38, 16), '0', ' '))), ' ', '0'),'.',' ')),' ',',')
float only has a maximum precision of 15 digits. Digits after the 15th position are therefore essentially random, and conversion to bigint (max. 19 digits) or decimal does not help you.
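A quick sketch of that limit, at the magnitude of the question's sample value:

```sql
-- at this magnitude one unit in the last place of a float is ~3.4E+10,
-- so adding 1E+7 is lost entirely in rounding
DECLARE @a float = 2.2000012095022E+26;
SELECT CASE WHEN @a + 1E7 = @a THEN 'unchanged' ELSE 'changed' END;  -- 'unchanged'
```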
This can help without rounding
declare @test float(25)
declare @test1 decimal(10,5)
select @test = 34.0387597207
select @test
set @test1 = convert (decimal(10,5), @test)
select cast((@test1) as varchar(12))
Select LEFT(cast((@test1) as varchar(12)),LEN(cast((@test1) as varchar(12)))-1)
Try this one, should work:
cast((convert(bigint,b.tax_id)) as varchar(20))
select replace(myFloat, '', '')
from REPLACE() documentation:
Returns nvarchar if one of the input arguments is of the nvarchar data type; otherwise, REPLACE returns varchar.
Returns NULL if any one of the arguments is NULL.
tests:
null ==> [NULL]
1.11 ==> 1.11
1.10 ==> 1.1
1.00 ==> 1
0.00 ==> 0
-1.10 ==> -1.1
0.00001 ==> 1e-005
0.000011 ==> 1.1e-005
If you use a CLR function, you can convert the float to a string that looks just like the float, without all the extra 0's at the end.
CLR Function
[Microsoft.SqlServer.Server.SqlFunction(DataAccess = DataAccessKind.Read)]
[return: SqlFacet(MaxSize = 50)]
public static SqlString float_to_str(double Value, int TruncAfter)
{
string rtn1 = Value.ToString("R");
string rtn2 = Value.ToString("0." + new string('0', TruncAfter));
if (rtn1.Length < rtn2.Length) { return rtn1; } else { return rtn2; }
}
Example
create table #temp (value float)
insert into #temp values (0.73), (0), (0.63921), (-0.70945), (0.28), (0.72000002861023), (3.7), (-0.01), (0.86), (0.55489), (0.439999997615814)
select value,
dbo.float_to_str(value, 18) as converted,
case when value = cast(dbo.float_to_str(value, 18) as float) then 1 else 0 end as same
from #temp
drop table #temp
Output
value converted same
---------------------- -------------------------- -----------
0.73 0.73 1
0 0 1
0.63921 0.63921 1
-0.70945 -0.70945 1
0.28 0.28 1
0.72000002861023 0.72000002861023 1
3.7 3.7 1
-0.01 -0.01 1
0.86 0.86 1
0.55489 0.55489 1
0.439999997615814 0.439999997615814 1
Caveat
All converted strings are truncated at 18 decimal places, and there are no trailing zeros. 18 digits of precision is not a problem for us. And, 100% of our FP numbers (close to 100,000 values) look identical as string values as they do in the database as FP numbers.
Modified Axel's response a bit, as for certain cases it will produce undesirable results.
DECLARE @MyFloat [float];
SET @MyFloat = 1000109360.050;
SELECT REPLACE(RTRIM(REPLACE(REPLACE(RTRIM((REPLACE(CAST(CAST(@MyFloat AS DECIMAL(38,18)) AS VARCHAR(max)), '0', ' '))), ' ', '0'),'.',' ')),' ','.')
Select
cast(replace(convert(decimal(15,2),acs_daily_debit), '.', ',') as varchar(20))
from acs_balance_details
Based on molecular's answer:
DECLARE @F FLOAT = 1000000000.1234;
SELECT @F AS Original, CAST(FORMAT(@F, N'#.##############################') AS VARCHAR) AS Formatted;
SET @F = 823399066925.049
SELECT @F AS Original, CAST(@F AS VARCHAR) AS Formatted
UNION ALL SELECT @F AS Original, CONVERT(VARCHAR(128), @F, 128) AS Formatted
UNION ALL SELECT @F AS Original, CAST(FORMAT(@F, N'G') AS VARCHAR) AS Formatted;
SET @F = 0.502184537571209
SELECT @F AS Original, CAST(@F AS VARCHAR) AS Formatted
UNION ALL SELECT @F AS Original, CONVERT(VARCHAR(128), @F, 128) AS Formatted
UNION ALL SELECT @F AS Original, CAST(FORMAT(@F, N'G') AS VARCHAR) AS Formatted;
I just came across a similar situation and was surprised at the rounding issues of very large numbers presented within SSMS v17.9.1 / SQL Server 2017.
I am not suggesting I have a solution; however, I have observed that FORMAT presents a number which appears correct. I cannot claim this avoids further rounding issues or is useful within a complicated mathematical function.
The T-SQL code supplied should clearly demonstrate my observations while enabling others to test their own code and ideas should the need arise.
WITH Units AS
(
SELECT 1.0 AS [RaisedPower] , 'Ten' As UnitDescription
UNION ALL
SELECT 2.0 AS [RaisedPower] , 'Hundred' As UnitDescription
UNION ALL
SELECT 3.0 AS [RaisedPower] , 'Thousand' As UnitDescription
UNION ALL
SELECT 6.0 AS [RaisedPower] , 'Million' As UnitDescription
UNION ALL
SELECT 9.0 AS [RaisedPower] , 'Billion' As UnitDescription
UNION ALL
SELECT 12.0 AS [RaisedPower] , 'Trillion' As UnitDescription
UNION ALL
SELECT 15.0 AS [RaisedPower] , 'Quadrillion' As UnitDescription
UNION ALL
SELECT 18.0 AS [RaisedPower] , 'Quintillion' As UnitDescription
UNION ALL
SELECT 21.0 AS [RaisedPower] , 'Sextillion' As UnitDescription
UNION ALL
SELECT 24.0 AS [RaisedPower] , 'Septillion' As UnitDescription
UNION ALL
SELECT 27.0 AS [RaisedPower] , 'Octillion' As UnitDescription
UNION ALL
SELECT 30.0 AS [RaisedPower] , 'Nonillion' As UnitDescription
UNION ALL
SELECT 33.0 AS [RaisedPower] , 'Decillion' As UnitDescription
)
SELECT UnitDescription
, POWER( CAST(10.0 AS FLOAT(53)) , [RaisedPower] ) AS ReturnsFloat
, CAST( POWER( CAST(10.0 AS FLOAT(53)) , [RaisedPower] ) AS NUMERIC (38,0) ) AS RoundingIssues
, STR( CAST( POWER( CAST(10.0 AS FLOAT(53)) , [RaisedPower] ) AS NUMERIC (38,0) ) , CAST([RaisedPower] AS INT) + 2, 0) AS LessRoundingIssues
, FORMAT( POWER( CAST(10.0 AS FLOAT(53)) , [RaisedPower] ) , '0') AS NicelyFormatted
FROM Units
ORDER BY [RaisedPower]