cannot pass more than 100 arguments to a function when using json_build_object - json

I am trying to build JSON from the columns of a table, but PostgreSQL gives me the error "cannot pass more than 100 arguments to a function", even though I didn't think my argument count exceeded 100.
The code is as follows:
array_agg(json_build_object
(
'QuotaName',quota_name,
'QuotaId',quota_id,
'CellId',COALESCE(cell_id,0),
'ValidPanelistCountOtherMedias',COALESCE(valid_panelist_count,0) ,
'ValidPanelistCountMM',COALESCE(mm_valid_panelist_count,0) ,
'Gender',COALESCE(replace(replace(replace(gender,',',':'),']',''),'[',''),''),
'Occupation',COALESCE(replace(replace(replace(occupation_id,',',':'),']',''),'[',''),''),
'Industry',COALESCE(replace(replace(replace(industry_id,',',':'),']',''),'[',''),''),
'Prefecture',COALESCE(replace(replace(replace(prefecture_id,',',':'),']',''),'[',''),''),
'Age1',COALESCE(replace(replace(replace(age,',',':'),']',''),'[',''),''),
'Age2',COALESCE(replace(replace(replace(age2,',',':'),']',''),'[',''),''),
'MaritalStatus',COALESCE(replace(replace(replace(marital_status,',',':'),']',''),'[',''),''),
'HouseHoldIncome',COALESCE(replace(replace(replace(house_income_id,',',':'),']',''),'[',''),''),
'PersonalIncome',COALESCE(replace(replace(replace(personal_income_id,',',':'),']',''),'[',''),''),
'hasChild',COALESCE(replace(replace(replace(has_child,',',':'),']',''),'[',''),''),
'MediaId',COALESCE(replace(replace(replace(media_id,',',':'),']',''),'[',''),''),
'DeviceUsed',COALESCE(replace(replace(replace(device_type,',',':'),']',''),'[',''),''),
'PanelistStatus','',
'IR1', COALESCE(ir_1,1) ,
'IR2', COALESCE(ir_2,1) ,
'IR3', COALESCE(ir_3,1) ,
'Population',COALESCE(population,0),
'MainSurveySampleHopes', COALESCE(sample_hope_main_survey,0) ,
'ScreeningSurveySampleHopes', COALESCE(sample_hope_main_scr,0),
'ParticipateIntentionMM' ,COALESCE(participate_intention_mm,0) ,
'ParticipateIntentionOthers' ,COALESCE(participate_intention,0) ,
'AcquisitionRate', COALESCE(acquisition_rate,0) ,
'PCEnvironment', COALESCE(case when survey_type >3 then 1 else pc_env end,0) ,
'NetworkEnvironment',COALESCE(case when survey_type >3 then 1 else network_env end,0) ,
'PCEnvironmentMM',COALESCE(case when survey_type >3 then 1 else pc_env_mm end,0),
'NetworkEnvironmentMM',COALESCE(case when survey_type >3 then 1 else network_env_mm end,0) ,
'ControlQuotient',COALESCE(control_quotient,0)/100 ,
'ResponseofSCR24' , COALESCE(res_of_scr_24,0),
'ResponseofSCR48' ,COALESCE(res_of_scr_48,0) ,
'ResponseofSCR72' ,COALESCE(res_of_scr_72,0) ,
'ResponseofSCR168' ,COALESCE(res_of_scr_168,0),
'ResponseofMAIN24' ,COALESCE(res_of_main_24,0) ,
'ResponseofMAIN48' , COALESCE(res_of_main_48,0) ,
'ResponseofMAIN72' , COALESCE(res_of_main_72,0) ,
'ResponseofMAIN168' , COALESCE(res_of_main_168,0),
'ResponseofSCR24MM' ,COALESCE(res_of_scr_24_mm,0) ,
'ResponseofSCR48MM' , COALESCE(res_of_scr_48_mm,0),
'ResponseofSCR72MM' , COALESCE(res_of_scr_72_mm,0) ,
'ResponseofSCR168MM' ,COALESCE(res_of_scr_168_mm,0) ,
'ResponseofMAIN24MM' ,COALESCE(res_of_main_24_mm,0),
'ResponseofMAIN48MM' ,COALESCE(res_of_main_48_mm,0),
'ResponseofMAIN72MM' ,COALESCE(res_of_main_72_mm,0),
'ResponseofMAIN168MM' ,COALESCE(res_of_main_168_mm,0),
'ResponseofMAINIntegrationType',0.9, -- this value is based on answer_estimate_list_details_v3
'ParticipationIntention',COALESCE(participate_intention,0),
'MostRecentParticipation',COALESCE(most_recent_exclusions,0)
))

I had the exact same problem earlier today. Note that each key/value pair counts as two arguments, so the 51 pairs above add up to 102 arguments, which is just over the limit. After some research, I found that JSONB results can be concatenated. So you should use JSONB_BUILD_OBJECT instead of JSON_BUILD_OBJECT, split things up so you have multiple JSONB_BUILD_OBJECT calls combined with '||', and use JSONB_AGG to aggregate the results into an array.
JSONB_AGG(
JSONB_BUILD_OBJECT (
'QuotaName',quota_name,
'QuotaId',quota_id,
'CellId',COALESCE(cell_id,0),
'ValidPanelistCountOtherMedias',COALESCE(valid_panelist_count,0) ,
'ValidPanelistCountMM',COALESCE(mm_valid_panelist_count,0) ,
'Gender',COALESCE(replace(replace(replace(gender,',',':'),']',''),'[',''),''),
'Occupation',COALESCE(replace(replace(replace(occupation_id,',',':'),']',''),'[',''),''),
'Industry',COALESCE(replace(replace(replace(industry_id,',',':'),']',''),'[',''),''),
'Prefecture',COALESCE(replace(replace(replace(prefecture_id,',',':'),']',''),'[',''),''),
'Age1',COALESCE(replace(replace(replace(age,',',':'),']',''),'[',''),''),
'Age2',COALESCE(replace(replace(replace(age2,',',':'),']',''),'[',''),''),
'MaritalStatus',COALESCE(replace(replace(replace(marital_status,',',':'),']',''),'[',''),''),
'HouseHoldIncome',COALESCE(replace(replace(replace(house_income_id,',',':'),']',''),'[',''),''),
'PersonalIncome',COALESCE(replace(replace(replace(personal_income_id,',',':'),']',''),'[',''),''),
'hasChild',COALESCE(replace(replace(replace(has_child,',',':'),']',''),'[',''),''),
'MediaId',COALESCE(replace(replace(replace(media_id,',',':'),']',''),'[',''),''),
'DeviceUsed',COALESCE(replace(replace(replace(device_type,',',':'),']',''),'[',''),''),
'PanelistStatus','',
'IR1', COALESCE(ir_1,1) ,
'IR2', COALESCE(ir_2,1) ,
'IR3', COALESCE(ir_3,1) ,
'Population',COALESCE(population,0),
'MainSurveySampleHopes', COALESCE(sample_hope_main_survey,0) ,
'ScreeningSurveySampleHopes', COALESCE(sample_hope_main_scr,0),
'ParticipateIntentionMM' ,COALESCE(participate_intention_mm,0) ,
'ParticipateIntentionOthers' ,COALESCE(participate_intention,0) ,
'AcquisitionRate', COALESCE(acquisition_rate,0) ,
'PCEnvironment', COALESCE(case when survey_type >3 then 1 else pc_env end,0) ,
'NetworkEnvironment',COALESCE(case when survey_type >3 then 1 else network_env end,0) ,
'PCEnvironmentMM',COALESCE(case when survey_type >3 then 1 else pc_env_mm end,0),
'NetworkEnvironmentMM',COALESCE(case when survey_type >3 then 1 else network_env_mm end,0) ,
'ControlQuotient',COALESCE(control_quotient,0)/100 ,
'ResponseofSCR24' , COALESCE(res_of_scr_24,0),
'ResponseofSCR48' ,COALESCE(res_of_scr_48,0) ,
'ResponseofSCR72' ,COALESCE(res_of_scr_72,0) ,
'ResponseofSCR168' ,COALESCE(res_of_scr_168,0),
'ResponseofMAIN24' ,COALESCE(res_of_main_24,0) ,
'ResponseofMAIN48' , COALESCE(res_of_main_48,0) ,
'ResponseofMAIN72' , COALESCE(res_of_main_72,0) ,
'ResponseofMAIN168' , COALESCE(res_of_main_168,0),
'ResponseofSCR24MM' ,COALESCE(res_of_scr_24_mm,0) ,
'ResponseofSCR48MM' , COALESCE(res_of_scr_48_mm,0),
'ResponseofSCR72MM' , COALESCE(res_of_scr_72_mm,0) ,
'ResponseofSCR168MM' ,COALESCE(res_of_scr_168_mm,0) ,
'ResponseofMAIN24MM' ,COALESCE(res_of_main_24_mm,0),
'ResponseofMAIN48MM' ,COALESCE(res_of_main_48_mm,0),
'ResponseofMAIN72MM' ,COALESCE(res_of_main_72_mm,0),
'ResponseofMAIN168MM' ,COALESCE(res_of_main_168_mm,0)
) ||
JSONB_BUILD_OBJECT (
'ResponseofMAINIntegrationType',0.9, -- this value is based on answer_estimate_list_details_v3
'ParticipationIntention',COALESCE(participate_intention,0),
'MostRecentParticipation',COALESCE(most_recent_exclusions,0)
)
)
I got this from the documentation here - https://www.postgresql.org/docs/current/functions-json.html#FUNCTIONS-JSONB-OP-TABLE
Look for "jsonb || jsonb"
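As a minimal sketch of the same pattern, assuming a hypothetical table t with columns a, b and c; each JSONB_BUILD_OBJECT call stays under the 100-argument limit, and '||' merges the partial objects:
SELECT JSONB_AGG(
  JSONB_BUILD_OBJECT('A', a, 'B', b)
  || JSONB_BUILD_OBJECT('C', c)
) AS items
FROM t;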

Related

Delete Only First Zero in Column

ch.value contains the values 00, 01, 02, 03.
They must become 0, 1, 2, 3.
I'm using this code:
Concat_ws('_', ch.prop, Trim(LEADING '0' FROM ch.value))
This code does not work for the 00 case (the trim leaves an empty string); how can I fix it?
You can use the condition below (with an ELSE branch so values without a leading zero pass through unchanged):
Concat_ws('_', ch.prop, case when left(ch.value,1)='0' then substring(ch.value,2) else ch.value end)
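If ch.value always holds numeric text, a cast may be simpler still. A sketch, assuming MySQL, where casting a string to an integer drops the leading zeros:
Concat_ws('_', ch.prop, CAST(ch.value AS UNSIGNED))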

Mixed data sampling in R (midas_r)

I have the following code, where I have a weekly time series (variable x) which I would like to use to forecast my monthly time series (variable y).
So basically I want to forecast the current month's data (variable y) with either 1, 2, 3, or all 4 weeks (variable new_value) of the current month.
However, I am not sure whether I am using the correct lags (I think I am), and moreover I am not sure how to interpret the starting values in the midas_r function (start = list()).
Any help would be much appreciated.
######################
# MIDAS REGRESSION
####################
x <- structure(c(1.19, 1.24 , 1.67 , 1.67 , 1.55 , 1.67 , 1.39 , 2.01 , 2.14 , 1.71 , 1.59 , 1.49 , 1.68 , -0.37 , -0.44 , -7.87 , -7.79 , -31.22 , -31.05 , -30.47 , -35.53 , -25.48 , -25.9 , -19.03 , -16.33 ,
10.09 , 13.19 , 13.31 , 16.85 , 14.58 , 14.78 , 14.62 , 15.27 , 15.58 , 15.63 , 14.27 , 14.09 , 4.82 , 3.55 , 3.46 , 3.24 , 2.86 , 2.86 , 2.86 , 2.82),
.Tsp = c(2020, 2020.846154, 52), class = "ts")
x <- diff(x)
y <- structure(c(2.30, 2.64 , 2.77 , 2.83 , -43.91 , 12.32 , 26.68 , 12.06 , 10.08 , 12.01 , 4.71 , 3.85),
.Tsp = c(2020, 2020.91667, 12), class = "ts")
y <- diff(y)
trend <- c(1:length(y))
#RUNNING THE MIDAS REGRESSION
reg <- midas_r(y ~ mls(y, 1, 1) + mls(x, 4:7, m = 4, nealmon), start = list(x = c(1, 1, -1, -1)))
summary(reg)
hAh_test(reg)
#forecast(reg, newdata = list(y = c(NA), x =c(rep(NA, 4))))
#FORECAST WITH WEEKLY VALUES
reg <- midas_r(y ~ mls(y, 1, 1) + mls(x, 3:7, m = 4, nealmon), start = list(x = c(1, 1, -1, -1)))
new_value <- 2.52
#new_value <- c(2.52, 3.12)
forecast(reg, newdata = list(x = c(new_value, rep(NA, 4-length(new_value)))))

SUM time values MySQL [duplicate]

This question already has answers here:
Surpassing MySQL's TIME value limit of 838:59:59
(7 answers)
Closed 4 years ago.
I am trying to sum time values and display the result in the format hours:minutes:seconds, e.g. 100:30:10.
SEC_TO_TIME(SUM(TIME_TO_SEC(ActualHours))) AS Hours
But I'm having a problem because TIME's maximum value is 838:59:59.
So if the summed time is over this value it won't show correctly, i.e. if it equals 900 hours it will show as 838:59:59, which is wrong.
How do I display the total hours if the sum is over 838:59:59?
If I had to do this conversion in SQL, I would do something like this:
SELECT CONCAT( ( _secs_ DIV 3600)
, ':'
, RIGHT(CONCAT('0',( _secs_ DIV 60 ) MOD 60 ),2)
, ':'
, RIGHT(CONCAT('0',( _secs_ MOD 60)),2)
) AS `h:mm:ss`
We can just replace the _secs_ with the expression that returns the number of seconds we want to convert. Using the expression given in the question, we get something like this:
SELECT CONCAT( ( SUM(TIME_TO_SEC(ActualHours)) DIV 3600)
, ':'
, RIGHT(CONCAT('0',( SUM(TIME_TO_SEC(ActualHours)) DIV 60 ) MOD 60 ),2)
, ':'
, RIGHT(CONCAT('0',( SUM(TIME_TO_SEC(ActualHours)) MOD 60)),2)
) AS `h:mm:ss`
DEMONSTRATION
The syntax provided in this answer is valid in MySQL 5.6. As a demonstration, using a user-defined variable @_secs_ as the number-of-seconds expression:
Set the user-defined variable for the demonstration:
SELECT @_secs_ := ( 987 * 3600 ) + ( 5 * 60 ) + 7 ;
returns
@_secs_ := ( 987 * 3600 ) + ( 5 * 60 ) + 7
------------------------------------------
3553507
demonstrating the query pattern:
SELECT CONCAT( ( @_secs_ DIV 3600)
, ':'
, RIGHT(CONCAT('0',( @_secs_ DIV 60 ) MOD 60 ),2)
, ':'
, RIGHT(CONCAT('0',( @_secs_ MOD 60)),2)
) AS `hhh:mm:ss`
returns
hhh:mm:ss
---------
987:05:07
Here is one way we can do this:
SELECT
CONCAT(CAST(FLOOR(seconds / 3600) AS CHAR(50)), ':',
CAST(FLOOR(60*((seconds / 3600) - FLOOR(seconds / 3600))) AS CHAR(50)), ':',
CAST(seconds % 60 AS CHAR(50))) AS time
FROM yourTable;
For an input of 10,000,000 (ten million) seconds, this would generate:
2777:46:40
Demo
Use some simple math to build the time string from seconds; replace 35000 with your column.
SELECT CONCAT(FLOOR(35000/3600),':',FLOOR((35000%3600)/60),':',(35000%3600)%60)
A fiddle to play with
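A variant of the same math with zero-padded minutes and seconds, sketched with MySQL's LPAD (again using 35000 as a stand-in for your column):
SELECT CONCAT( FLOOR(35000/3600)
, ':'
, LPAD(FLOOR((35000%3600)/60),2,'0')
, ':'
, LPAD((35000%3600)%60,2,'0')
) AS `h:mm:ss`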

How to make field act like a table

Sorry, but I couldn't find/search for this specific "thing",
so:
I have the tables
users( id , name , kills , .... );
quests( id , name , description , enemy_id , n_to_kill , en_killed , user_id );
What I want to achieve is for every user to have his own separate quests data.
I don't understand how it's possible, but I think it should be(?)
example:
users:{ 1 , admin , 7 } { 2 , user , 11 } { 3 , john , 0 }
quests:{ 1 , "killall" , "kill all" , 0 , 100 , 0 , ??}
so as to store, for each user, how many quests he finished / killed x out of y.
Is it even possible?
Thanks for reading...
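For what it's worth, the usual relational approach here is not a separate quests table per user, but a single shared quests table plus a junction table that stores each user's per-quest progress. A sketch in standard SQL; the user_quests name and its columns are hypothetical:
CREATE TABLE user_quests (
  user_id   int REFERENCES users(id),
  quest_id  int REFERENCES quests(id),
  en_killed int DEFAULT 0,        -- this user's kill count for this quest
  finished  boolean DEFAULT false,
  PRIMARY KEY (user_id, quest_id)
);
-- "x out of y" for one user:
SELECT q.name, uq.en_killed, q.n_to_kill
FROM user_quests uq
JOIN quests q ON q.id = uq.quest_id
WHERE uq.user_id = 1;
With this design, the per-user columns (en_killed, user_id) move out of quests and into user_quests.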

Null rows in output of query

System:
Windows XP professional
Postgresql 8.4 running on locally stored database (port 5432)
I am using the following code to import a decent-sized file into a table in my database:
-- Create headers and column variable types
CREATE TABLE special_raw_emissions
(
ORG_ID int ,
COUNTY text ,
MID_UP_STREAM text ,
SITE_ID int ,
PRF_ID int ,
RN text ,
ACCOUNT text ,
CERTIFYING_COMPANY_ORGANIZATION text ,
LEASE_NAME text ,
LEASE_NUMBER text ,
SOURCE text ,
FACILITY_ID int ,
PROFILE text ,
LATITUDE float ,
LONGITUDE float ,
OIL_bbl_yr float ,
CASINGHEAD_GAS_scf_yr float ,
GAS_WELL_GAS_scf_yr float ,
CONDENSATE_bbl_yr float ,
PRODUCED_WATER_bbl_yr float ,
TOTAL_VOC_EMISSION_tpy_EXTRACTED_FROM_SE_TAB float ,
CONTROL_PRESENT boolean ,
CONTROL_TYPE text ,
CONTROL_TYPE_IF_OTHER_DESCRIBE text ,
NOX_CONTROL_EFFICIENCY_PCNT float ,
VOC_CONTROL_EFFICIENCY_PCNT float ,
VENTED_VOLUME_scf_yr float ,
BLOWDOWN_EVENTS int ,
OPERATING_HOURS_hrs_yr float ,
FUEL_CONSUMPTION_MMscf_yr float ,
PILOT_GAS_USED_MMscf_yr float ,
WASTE_GAS_COMBUSTED_MMscf_yr float ,
GAS_TREATED_MMscf_yr float ,
AVERAGE_DAILY_PRODUCTION_RATE_MMscf_day float ,
THROUGHPUT_bbl_yr float ,
SEPARATOR_PRESSURE_psig float ,
SEPARATOR_TEMPERATURE_deg_F float ,
GAS_GRAVITY float ,
MAXIMUM_DAILY_PRODUCTION_bbl_day text ,
SOURCE_ANNUAL_THROUGHPUT_bbl_yr float ,
ANNUAL_THROUGHPUT_bbl_yr float ,
MAXIMUM_DAILY_PRODUCTION_RATE__bbl_day float ,
SERIAL_NUMBER text ,
MAKE text ,
MODEL text ,
FUEL_TYPE text ,
MAXIMUM_DESIGN_CAPACITY text ,
BURN_TYPE text ,
CYCLE text ,
ENGINE_RATING text ,
ASSIST_TYPE text ,
AUTOMATIC_AIR_TO_FUEL_RATIO_CONTROLLER boolean ,
DESTRUCTION_EFFICIENCY text ,
SUBJECT_TO_MACT boolean ,
IF_YES_INDICATE_MAJOR_OR_AREA_SOURCE text ,
SOURCE_TYPE text ,
IF_CONDENSER_WHAT_IS_EFFICIENCY text ,
LIQUID_TYPE text ,
IS_HARC_51C_ACCEPTED_METHOD text ,
WOULD_YOU_LIKE_TO_USE_HARC text ,
SINGLE_OR_MULTIPLE_TANKS text ,
NUMBER_OF_TANKS int ,
CONFIGURATION_TYPE text ,
WORKING_AND_BREATHING_EMISS_CALC_METHOD text ,
FLASH_EMISS_CAL_METHOD text ,
FLASH_IF_OTHER_PLEASE_DESCRIBE text ,
IS_MONITORING_PROGRAM_VOLUNTARY int ,
AIR_ACTUATED_PNEUMATIC_VALVES_GAS int ,
AIR_ACTUATED_PNEUMATIC_VALVES_LIGHT_OIL int ,
CONNECTORS_GAS int ,
CONNECTORS_LIGHT_OIL int ,
FLANGES_GAS int ,
FLANGES_LIGHT_OIL int ,
GAS_ACTUATED_PNEUMATIC_VALVES_GAS int ,
GAS_ACTUATED_PNEUMATIC_VALVES_LIGHT_OIL int ,
IS_COMPLETION_OPTIONAL text ,
NONACTUATED_VALVES_GAS int ,
NONACTUATED_VALVES_LIGHT_OIL int ,
OPEN_ENDED_LINES_GAS int ,
OPEN_ENDED_LINES_LIGHT_OIL int ,
OTHER_GAS int ,
OTHER_LIGHT_OIL int ,
PUMP_SEALS_GAS int ,
PUMP_SEALS_LIGHT_OIL int ,
TOTAL_COMPONENTS int ,
TOTAL_PUMPS_AND_COMPRESSOR_SEALS text ,
TOTAL_UNCONTROLLED_RELIEF_VALVES text ,
GAS_ACTUATED_PNEUMATIC_VALVES_HEAVY_OIL int ,
AIR_ACTUATED_PNEUMATIC_VALVES_HEAVY_OIL int ,
NON_ACTUATED_VALVES_HEAVY_OIL int ,
PUMP_SEALS_HEAVY_OIL int ,
CONNECTORS_HEAVY_OIL int ,
FLANGES_HEAVY_OIL int ,
OPEN_ENDED_LINES_HEAVY_OIL int ,
OTHER_HEAVY_OIL int ,
GAS_ACTUATED_PNEUMATIC_VALVES_WATER_SLASH_OIL text ,
AIR_ACTUATED_PNEUMATIC_VALVES_WATER_SLASH_OIL int ,
NON_ACTUATED_VALVES_WATER_SLASH_OIL int ,
PUMP_SEALS_WATER_SLASH_OIL int ,
CONNECTORS_WATER_SLASH_OIL int ,
FLANGES_WATER_SLASH_OIL int ,
OPEN_ENDED_LINES_WATER_SLASH_OIL int ,
OTHER_WATER_SLASH_OIL text ,
VOC_Gas_Mole_Percent float ,
BENZENE_Gas_Mole_Percent float ,
ETHYBENZENE_Gas_Mole_Percent float ,
n_HEXANE_Gas_Mole_Percent float ,
TOLUENE_Gas_Mole_Percent float ,
XYLENE_S_Gas_Mole_Percent float ,
HAPs_Gas_Mole_Percent float ,
VOC_Liquid_Mole_Percent float ,
BENZENE_Liquid_Mole_Percent float ,
ETHYBENZENE_Liquid_Mole_Percent float ,
n_HEXANE_Liquid_Mole_Percent float ,
TOLUENE_Liquid_Mole_Percent float ,
XYLENE_S_Liquid_Mole_Percent float ,
HAPs_Liquid_Mole_Percent float ,
VOC_Control_Factor_PERC float ,
CH4_Emission_Factor_tonne_Btu float,
Engine_LF float ,
CO2_M1 float ,
CO2_M2 float ,
CH4_M1 float ,
CH4_M2 float ,
Source_Class text ,
Site_class text);
-- Import data into database, note that the delimiter is '~'.
COPY special_raw_emissions
FROM 'C:/PostgreSQL/special results/batch.csv'
WITH DELIMITER AS '~'
CSV;
I was running into some strange errors, so as a QA check I queried this table to see whether the data imported correctly; the query is shown below:
\o 'c:/postgresql/special_raw_emissions.csv'
select * from special_raw_emissions;
\o
My query returns all the data that was imported, but 'null rows' are randomly added. Shown below is an example of a 'null row'.
Data input:
155 Wise Midstream 8250 1
155 Wise Midstream 8250 1
4 Wise Upstream 7220 1
4 Wise Upstream 7220 1
95 Wise Midstream 7742 1
95 Wise Midstream 7742 1
7 Clay Upstream 1990 7
7 Cooke Upstream 1414 7
Data with null rows (the example shown below suggests a pattern; this is not the case in the larger output file):
7 Clay Upstream 1990 7
7 Cooke Upstream 1414 7
7 Cooke Upstream 1415 7
7 Cooke Upstream 1416 7
7 Cooke Upstream 3355 7
7 Cooke Upstream 3356 7
7 Cooke Upstream 1418 7
7 Cooke Upstream 3357 7
7 Cooke Upstream 1419 7
7 Cooke Upstream 7489 7
Like I said previously, these null rows are causing my queries to miss certain data, and I am losing information.
Any help or guidance is greatly appreciated!
The problem was solved in two steps:
1. Open the raw data in Excel, save the data with the appropriate delimiter ('~' in my case), and close the file.
2. Re-import the data into the database.
My speculation is that the raw data, which was created with another psql query, was somehow corrupted or had a line-ending character missing. Re-saving in Excel fixed the issue and allowed the import to work properly.
I still feel as though the problem is unsolved; I simply found a workaround.
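If re-exporting from Excel ever stops being an option, the spurious rows can also be found and removed directly. A sketch, assuming org_id is never legitimately NULL in this data (pick any column that is mandatory in every real row):
-- Count the suspect rows first:
SELECT count(*) FROM special_raw_emissions WHERE org_id IS NULL;
-- Then remove them:
DELETE FROM special_raw_emissions WHERE org_id IS NULL;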