Using JSON schema as column headers in dataframe - json
Ok, as per a previous question (here) I've now managed to read a load of JSON data into R and to get the data into a data frame. Here's the code:
getCall <- GET("http://long-url.com",
authenticate("myusername", "password"))
contJSON <- content(getCall)
contJSON <- sub("\n\r\n", "", contJSON)
df1 <- fromJSON(sprintf("[%s]", gsub("\n", ",", contJSON)), asText=TRUE)
df <- data.frame(matrix(unlist(df1), nrow=31, byrow=T))
Which gets me a data frame that looks as follows:-
head(df[,1:8])
X1 X2 X3 X4 X5 X6 X7 X8
1 2013-05-01 33682 11838 8023 3815 84 177.000000 177.000000
2 2013-05-02 32622 11626 7945 3681 58 210.000000 210.000000
3 2013-05-03 28467 11102 7786 3316 56 186.000000 186.000000
4 2013-05-04 20884 9031 6670 2361 51 7.000000 7.000000
5 2013-05-05 20481 8782 6390 2392 58 1.000000 1.000000
6 2013-05-06 25175 10019 7082 2937 62 24.000000 24.000000
However, there are no column names in my data frame. Calling names() on my JSON object returns NULL, so that doesn't give me anything useful.
I am wondering if there is any simple way (that might be repeatable on more general cases) to get the names for the column headers from the JSON schema.
I'm aware there are similar questions elsewhere on the site, but this one did not appear to be covered.
EDIT: As per the comment, here is the structure of the contJSON object.
"{\"metricDate\":\"2013-05-01\",\"pageCountTotal\":\"33682\",\"landCountTotal\":\"11838\",\"newLandCountTotal\":\"8023\",\"returnLandCountTotal\":\"3815\",\"spiderCountTotal\":\"84\",\"goalCountTotal\":\"177.000000\",\"callGoalCountTotal\":\"177.000000\",\"callCountTotal\":\"237.000000\",\"onlineGoalCountTotal\":\"0.000000\",\"conversionPerc\":\"1.50\",\"callConversionPerc\":\"74.68\"}\n{\"metricDate\":\"2013-05-02\",\"pageCountTotal\":\"32622\",\"landCountTotal\":\"11626\",\"newLandCountTotal\":\"7945\",\"returnLandCountTotal\":\"3681\",\"spiderCountTotal\":\"58\",\"goalCountTotal\":\"210.000000\",\"callGoalCountTotal\":\"210.000000\",\"callCountTotal\":\"297.000000\",\"onlineGoalCountTotal\":\"0.000000\",\"conversionPerc\":\"1.81\",\"callConversionPerc\":\"70.71\"}\n{\"metricDate\":\"2013-05-03\",\"pageCountTotal\":\"28467\",\"landCountTotal\":\"11102\",\"newLandCountTotal\":\"7786\",\"returnLandCountTotal\":\"3316\",\"spiderCountTotal\":\"56\",\"goalCountTotal\":\"186.000000\",\"callGoalCountTotal\":\"186.000000\",\"callCountTotal\":\"261.000000\",\"onlineGoalCountTotal\":\"0.000000\",\"conversionPerc\":\"1.68\",\"callConversionPerc\":\"71.26\"}\n{\"metricDate\":\"2013-05-04\",\"pageCountTotal\":\"20884\",\"landCountTotal\":\"9031\",\"newLandCountTotal\":\"6670\",\"returnLandCountTotal\":\"2361\",\"spiderCountTotal\":\"51\",\"goalCountTotal\":\"7.000000\",\"callGoalCountTotal\":\"7.000000\",\"callCountTotal\":\"44.000000\",\"onlineGoalCountTotal\":\"0.000000\",\"conversionPerc\":\"0.08\",\"callConversionPerc\":\"15.91\"}\n{\"metricDate\":\"2013-05-05\",\"pageCountTotal\":\"20481\",\"landCountTotal\":\"8782\",\"newLandCountTotal\":\"6390\",\"returnLandCountTotal\":\"2392\",\"spiderCountTotal\":\"58\",\"goalCountTotal\":\"1.000000\",\"callGoalCountTotal\":\"1.000000\",\"callCountTotal\":\"8.000000\",\"onlineGoalCountTotal\":\"0.000000\",\"conversionPerc\":\"0.01\",\"callConversionPerc\":\"12.50\"}\n{\"metricDate\":\"2013-05-06\",\"pageCountTotal\":\"25175\",\"landC
ountTotal\":\"10019\",\"newLandCountTotal\":\"7082\",\"returnLandCountTotal\":\"2937\",\"spiderCountTotal\":\"62\",\"goalCountTotal\":\"24.000000\",\"callGoalCountTotal\":\"24.000000\",\"callCountTotal\":\"47.000000\",\"onlineGoalCountTotal\":\"0.000000\",\"conversionPerc\":\"0.24\",\"callConversionPerc\":\"51.06\"}\n{\"metricDate\":\"2013-05-07\",\"pageCountTotal\":\"35892\",\"landCountTotal\":\"12615\",\"newLandCountTotal\":\"8391\",\"returnLandCountTotal\":\"4224\",\"spiderCountTotal\":\"62\",\"goalCountTotal\":\"239.000000\",\"callGoalCountTotal\":\"239.000000\",\"callCountTotal\":\"321.000000\",\"onlineGoalCountTotal\":\"0.000000\",\"conversionPerc\":\"1.89\",\"callConversionPerc\":\"74.45\"}\n{\"metricDate\":\"2013-05-08\",\"pageCountTotal\":\"34106\",\"landCountTotal\":\"12391\",\"newLandCountTotal\":\"8389\",\"returnLandCountTotal\":\"4002\",\"spiderCountTotal\":\"90\",\"goalCountTotal\":\"221.000000\",\"callGoalCountTotal\":\"221.000000\",\"callCountTotal\":\"295.000000\",\"onlineGoalCountTotal\":\"0.000000\",\"conversionPerc\":\"1.78\",\"callConversionPerc\":\"74.92\"}\n{\"metricDate\":\"2013-05-09\",\"pageCountTotal\":\"32721\",\"landCountTotal\":\"12447\",\"newLandCountTotal\":\"8541\",\"returnLandCountTotal\":\"3906\",\"spiderCountTotal\":\"54\",\"goalCountTotal\":\"207.000000\",\"callGoalCountTotal\":\"207.000000\",\"callCountTotal\":\"280.000000\",\"onlineGoalCountTotal\":\"0.000000\",\"conversionPerc\":\"1.66\",\"callConversionPerc\":\"73.93\"}\n{\"metricDate\":\"2013-05-10\",\"pageCountTotal\":\"29724\",\"landCountTotal\":\"11616\",\"newLandCountTotal\":\"8063\",\"returnLandCountTotal\":\"3553\",\"spiderCountTotal\":\"139\",\"goalCountTotal\":\"207.000000\",\"callGoalCountTotal\":\"207.000000\",\"callCountTotal\":\"301.000000\",\"onlineGoalCountTotal\":\"0.000000\",\"conversionPerc\":\"1.78\",\"callConversionPerc\":\"68.77\"}\n{\"metricDate\":\"2013-05-11\",\"pageCountTotal\":\"22061\",\"landCountTotal\":\"9660\",\"newLandCountTotal\":\"6971\",\"ret
urnLandCountTotal\":\"2689\",\"spiderCountTotal\":\"52\",\"goalCountTotal\":\"3.000000\",\"callGoalCountTotal\":\"3.000000\",\"callCountTotal\":\"40.000000\",\"onlineGoalCountTotal\":\"0.000000\",\"conversionPerc\":\"0.03\",\"callConversionPerc\":\"7.50\"}\n{\"metricDate\":\"2013-05-12\",\"pageCountTotal\":\"23341\",\"landCountTotal\":\"9935\",\"newLandCountTotal\":\"6960\",\"returnLandCountTotal\":\"2975\",\"spiderCountTotal\":\"45\",\"goalCountTotal\":\"0.000000\",\"callGoalCountTotal\":\"0.000000\",\"callCountTotal\":\"12.000000\",\"onlineGoalCountTotal\":\"0.000000\",\"conversionPerc\":\"0.00\",\"callConversionPerc\":\"0.00\"}\n{\"metricDate\":\"2013-05-13\",\"pageCountTotal\":\"36565\",\"landCountTotal\":\"13583\",\"newLandCountTotal\":\"9277\",\"returnLandCountTotal\":\"4306\",\"spiderCountTotal\":\"69\",\"goalCountTotal\":\"246.000000\",\"callGoalCountTotal\":\"246.000000\",\"callCountTotal\":\"324.000000\",\"onlineGoalCountTotal\":\"0.000000\",\"conversionPerc\":\"1.81\",\"callConversionPerc\":\"75.93\"}\n{\"metricDate\":\"2013-05-14\",\"pageCountTotal\":\"35260\",\"landCountTotal\":\"13797\",\"newLandCountTotal\":\"9375\",\"returnLandCountTotal\":\"4422\",\"spiderCountTotal\":\"59\",\"goalCountTotal\":\"212.000000\",\"callGoalCountTotal\":\"212.000000\",\"callCountTotal\":\"283.000000\",\"onlineGoalCountTotal\":\"0.000000\",\"conversionPerc\":\"1.54\",\"callConversionPerc\":\"74.91\"}\n{\"metricDate\":\"2013-05-15\",\"pageCountTotal\":\"35836\",\"landCountTotal\":\"13792\",\"newLandCountTotal\":\"9532\",\"returnLandCountTotal\":\"4260\",\"spiderCountTotal\":\"94\",\"goalCountTotal\":\"187.000000\",\"callGoalCountTotal\":\"187.000000\",\"callCountTotal\":\"258.000000\",\"onlineGoalCountTotal\":\"0.000000\",\"conversionPerc\":\"1.36\",\"callConversionPerc\":\"72.48\"}\n{\"metricDate\":\"2013-05-16\",\"pageCountTotal\":\"33136\",\"landCountTotal\":\"12821\",\"newLandCountTotal\":\"8755\",\"returnLandCountTotal\":\"4066\",\"spiderCountTotal\":\"65\",\"goalCount
Total\":\"192.000000\",\"callGoalCountTotal\":\"192.000000\",\"callCountTotal\":\"260.000000\",\"onlineGoalCountTotal\":\"0.000000\",\"conversionPerc\":\"1.50\",\"callConversionPerc\":\"73.85\"}\n{\"metricDate\":\"2013-05-17\",\"pageCountTotal\":\"29564\",\"landCountTotal\":\"11721\",\"newLandCountTotal\":\"8191\",\"returnLandCountTotal\":\"3530\",\"spiderCountTotal\":\"213\",\"goalCountTotal\":\"166.000000\",\"callGoalCountTotal\":\"166.000000\",\"callCountTotal\":\"222.000000\",\"onlineGoalCountTotal\":\"0.000000\",\"conversionPerc\":\"1.42\",\"callConversionPerc\":\"74.77\"}\n{\"metricDate\":\"2013-05-18\",\"pageCountTotal\":\"23686\",\"landCountTotal\":\"9916\",\"newLandCountTotal\":\"7335\",\"returnLandCountTotal\":\"2581\",\"spiderCountTotal\":\"56\",\"goalCountTotal\":\"5.000000\",\"callGoalCountTotal\":\"5.000000\",\"callCountTotal\":\"34.000000\",\"onlineGoalCountTotal\":\"0.000000\",\"conversionPerc\":\"0.05\",\"callConversionPerc\":\"14.71\"}\n{\"metricDate\":\"2013-05-19\",\"pageCountTotal\":\"23528\",\"landCountTotal\":\"9952\",\"newLandCountTotal\":\"7184\",\"returnLandCountTotal\":\"2768\",\"spiderCountTotal\":\"57\",\"goalCountTotal\":\"1.000000\",\"callGoalCountTotal\":\"1.000000\",\"callCountTotal\":\"14.000000\",\"onlineGoalCountTotal\":\"0.000000\",\"conversionPerc\":\"0.01\",\"callConversionPerc\":\"7.14\"}\n{\"metricDate\":\"2013-05-20\",\"pageCountTotal\":\"37391\",\"landCountTotal\":\"13488\",\"newLandCountTotal\":\"9024\",\"returnLandCountTotal\":\"4464\",\"spiderCountTotal\":\"69\",\"goalCountTotal\":\"227.000000\",\"callGoalCountTotal\":\"227.000000\",\"callCountTotal\":\"291.000000\",\"onlineGoalCountTotal\":\"0.000000\",\"conversionPerc\":\"1.68\",\"callConversionPerc\":\"78.01\"}\n{\"metricDate\":\"2013-05-21\",\"pageCountTotal\":\"36299\",\"landCountTotal\":\"13174\",\"newLandCountTotal\":\"8817\",\"returnLandCountTotal\":\"4357\",\"spiderCountTotal\":\"77\",\"goalCountTotal\":\"164.000000\",\"callGoalCountTotal\":\"164.000000\",\"call
CountTotal\":\"221.000000\",\"onlineGoalCountTotal\":\"0.000000\",\"conversionPerc\":\"1.24\",\"callConversionPerc\":\"74.21\"}\n{\"metricDate\":\"2013-05-22\",\"pageCountTotal\":\"34201\",\"landCountTotal\":\"12433\",\"newLandCountTotal\":\"8388\",\"returnLandCountTotal\":\"4045\",\"spiderCountTotal\":\"76\",\"goalCountTotal\":\"195.000000\",\"callGoalCountTotal\":\"195.000000\",\"callCountTotal\":\"262.000000\",\"onlineGoalCountTotal\":\"0.000000\",\"conversionPerc\":\"1.57\",\"callConversionPerc\":\"74.43\"}\n{\"metricDate\":\"2013-05-23\",\"pageCountTotal\":\"32951\",\"landCountTotal\":\"11611\",\"newLandCountTotal\":\"7757\",\"returnLandCountTotal\":\"3854\",\"spiderCountTotal\":\"68\",\"goalCountTotal\":\"167.000000\",\"callGoalCountTotal\":\"167.000000\",\"callCountTotal\":\"231.000000\",\"onlineGoalCountTotal\":\"0.000000\",\"conversionPerc\":\"1.44\",\"callConversionPerc\":\"72.29\"}\n{\"metricDate\":\"2013-05-24\",\"pageCountTotal\":\"28967\",\"landCountTotal\":\"10821\",\"newLandCountTotal\":\"7396\",\"returnLandCountTotal\":\"3425\",\"spiderCountTotal\":\"106\",\"goalCountTotal\":\"167.000000\",\"callGoalCountTotal\":\"167.000000\",\"callCountTotal\":\"203.000000\",\"onlineGoalCountTotal\":\"0.000000\",\"conversionPerc\":\"1.54\",\"callConversionPerc\":\"82.27\"}\n{\"metricDate\":\"2013-05-25\",\"pageCountTotal\":\"19741\",\"landCountTotal\":\"8393\",\"newLandCountTotal\":\"6168\",\"returnLandCountTotal\":\"2225\",\"spiderCountTotal\":\"78\",\"goalCountTotal\":\"0.000000\",\"callGoalCountTotal\":\"0.000000\",\"callCountTotal\":\"28.000000\",\"onlineGoalCountTotal\":\"0.000000\",\"conversionPerc\":\"0.00\",\"callConversionPerc\":\"0.00\"}\n{\"metricDate\":\"2013-05-26\",\"pageCountTotal\":\"19770\",\"landCountTotal\":\"8237\",\"newLandCountTotal\":\"6009\",\"returnLandCountTotal\":\"2228\",\"spiderCountTotal\":\"79\",\"goalCountTotal\":\"0.000000\",\"callGoalCountTotal\":\"0.000000\",\"callCountTotal\":\"8.000000\",\"onlineGoalCountTotal\":\"0.000000\",\"
conversionPerc\":\"0.00\",\"callConversionPerc\":\"0.00\"}\n{\"metricDate\":\"2013-05-27\",\"pageCountTotal\":\"26208\",\"landCountTotal\":\"9755\",\"newLandCountTotal\":\"6779\",\"returnLandCountTotal\":\"2976\",\"spiderCountTotal\":\"82\",\"goalCountTotal\":\"26.000000\",\"callGoalCountTotal\":\"26.000000\",\"callCountTotal\":\"40.000000\",\"onlineGoalCountTotal\":\"0.000000\",\"conversionPerc\":\"0.27\",\"callConversionPerc\":\"65.00\"}\n{\"metricDate\":\"2013-05-28\",\"pageCountTotal\":\"36980\",\"landCountTotal\":\"12463\",\"newLandCountTotal\":\"8226\",\"returnLandCountTotal\":\"4237\",\"spiderCountTotal\":\"132\",\"goalCountTotal\":\"208.000000\",\"callGoalCountTotal\":\"208.000000\",\"callCountTotal\":\"276.000000\",\"onlineGoalCountTotal\":\"0.000000\",\"conversionPerc\":\"1.67\",\"callConversionPerc\":\"75.36\"}\n{\"metricDate\":\"2013-05-29\",\"pageCountTotal\":\"34190\",\"landCountTotal\":\"12014\",\"newLandCountTotal\":\"8279\",\"returnLandCountTotal\":\"3735\",\"spiderCountTotal\":\"90\",\"goalCountTotal\":\"179.000000\",\"callGoalCountTotal\":\"179.000000\",\"callCountTotal\":\"235.000000\",\"onlineGoalCountTotal\":\"0.000000\",\"conversionPerc\":\"1.49\",\"callConversionPerc\":\"76.17\"}\n{\"metricDate\":\"2013-05-30\",\"pageCountTotal\":\"33867\",\"landCountTotal\":\"11965\",\"newLandCountTotal\":\"8231\",\"returnLandCountTotal\":\"3734\",\"spiderCountTotal\":\"63\",\"goalCountTotal\":\"160.000000\",\"callGoalCountTotal\":\"160.000000\",\"callCountTotal\":\"219.000000\",\"onlineGoalCountTotal\":\"0.000000\",\"conversionPerc\":\"1.34\",\"callConversionPerc\":\"73.06\"}\n{\"metricDate\":\"2013-05-31\",\"pageCountTotal\":\"27536\",\"landCountTotal\":\"10302\",\"newLandCountTotal\":\"7333\",\"returnLandCountTotal\":\"2969\",\"spiderCountTotal\":\"108\",\"goalCountTotal\":\"173.000000\",\"callGoalCountTotal\":\"173.000000\",\"callCountTotal\":\"226.000000\",\"onlineGoalCountTotal\":\"0.000000\",\"conversionPerc\":\"1.68\",\"callConversionPerc\":\"76.55\"
}\n\r\n"
One thing that works is to split on newlines, call fromJSON on each row, then recombine the result.
contJSON <- sub("\n\r\n", "", contJSON) #as before
rowJSON <- strsplit(contJSON, "\n")[[1]]
row <- lapply(rowJSON, fromJSON)
as.data.frame(do.call(rbind, row))
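An alternative sketch, assuming the jsonlite package is available: the response is effectively newline-delimited JSON (NDJSON), which jsonlite::stream_in can parse directly into a data frame, taking the column names from the JSON keys.

```r
library(jsonlite)

# contJSON is the raw response string, one JSON object per line,
# with the trailing "\n\r\n" already stripped as above
df <- stream_in(textConnection(contJSON))
names(df)  # column names come straight from the JSON field names
```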
Related
How can I merge/join multiple columns from two dataframes, depending on a matching pattern
I would like to merge two dataframes based on similar patterns in the chromosome column. I made various attempts with R and Bash, such as with "data.table", "tidyverse", and merge(). Could someone help me by providing alternative solutions in R, Bash, Python, Perl, etc.? I would like to merge based on the chromosome information and retain both counts/RXNs. NOTE: These two DFs are not aligned, and I am also curious what happens if some values are missing. Thanks and cheers!

DF1:

Chromosome;RXN;ID
1009250;q9hxn4;NA
1010820;p16256;NA
31783;p16588;"PNTOt4;PNTOt4pp"
203;3-DEHYDROQUINATE-DEHYDRATASE-RXN;"DHQTi;DQDH"

DF2:

Chromosome;Count1;Count2;Count3;Count4;Count5
203;1;31;1;0;0;0
1010820;152;7;0;11;4
1009250;5;0;0;17;0
31783;1;0;0;0;0;0

Expected result:

Chromosome;RXN;Count1;Count2;Count3;Count4;Count5
1009250;q9hxn4;5;0;0;17;0
1010820;p16256;152;7;0;11;4
31783;p16588;1;0;0;0;0
203;3-DEHYDROQUINATE-DEHYDRATASE-RXN;1;31;1;0;0;0
As bash was mentioned in the text body, I offer you an awk solution. The dataframes are in files df1 and df2:

$ awk '
BEGIN {
    FS=OFS=";"            # input and output field delimiters
}
NR==FNR {                 # process df1
    a[$1]=$2              # hash to an array; 1st field is the key, 2nd the value
    next                  # process next record
}
{                         # process df2
    $2=(a[$1] OFS $2)     # prepend RXN field to 2nd field of df2
}1' df1 df2               # 1 is the output command; mind the file order

The last two lines could perhaps be written more clearly:

...
{
    print $1, a[$1], $2, $3, $4, $5, $6
}' df1 df2

Output:

Chromosome;RXN;Count1;Count2;Count3;Count4;Count5
203;3-DEHYDROQUINATE-DEHYDRATASE-RXN;1;31;1;0;0;0
1010820;p16256;152;7;0;11;4
1009250;q9hxn4;5;0;0;17;0
31783;p16588;1;0;0;0;0;0

Output will be in the order of df2. Chromosomes present in df1 but not in df2 will not be included; chromosomes in df2 but not in df1 will be output with an empty RXN field. Also, if there are duplicate chromosomes in df1, the last one is used. This can be fixed if it is an issue.
If I understand your request correctly, this should do it in Python. I've made the Chromosome column into the index of each DataFrame.

import pandas as pd
from io import StringIO

txt1 = '''Chromosome;RXN;ID
1009250;q9hxn4;NA
1010820;p16256;NA
31783;p16588;"PNTOt4;PNTOt4pp"
203;3-DEHYDROQUINATE-DEHYDRATASE-RXN;"DHQTi;DQDH"'''

txt2 = """Chromosome;Count1;Count2;Count3;Count4;Count5;Count6
203;1;31;1;0;0;0
1010820;152;7;0;11;4
1009250;5;0;0;17;0
31783;1;0;0;0;0;0"""

df1 = pd.read_csv(StringIO(txt1), sep=';', index_col=0, header=0)
df2 = pd.read_csv(StringIO(txt2), sep=';', index_col=0, header=0)

DF1:

                                         RXN               ID
Chromosome
1009250                               q9hxn4              NaN
1010820                               p16256              NaN
31783                                 p16588  PNTOt4;PNTOt4pp
203         3-DEHYDROQUINATE-DEHYDRATASE-RXN       DHQTi;DQDH

DF2:

            Count1  Count2  Count3  Count4  Count5  Count6
Chromosome
203              1      31       1       0       0     0.0
1010820        152       7       0      11       4     NaN
1009250          5       0       0      17       0     NaN
31783            1       0       0       0       0     0.0

result = pd.concat([df1.sort_index(), df2.sort_index()], axis=1)
print(result)

                                         RXN               ID  Count1  Count2  Count3  Count4  Count5  Count6
Chromosome
203         3-DEHYDROQUINATE-DEHYDRATASE-RXN       DHQTi;DQDH       1      31       1       0       0     0.0
31783                                 p16588  PNTOt4;PNTOt4pp       1       0       0       0       0     0.0
1009250                               q9hxn4              NaN       5       0       0      17       0     NaN
1010820                               p16256              NaN     152       7       0      11       4     NaN

The concat command also handles mismatched indices by simply filling in NaN values for columns of df1 if df2 doesn't have the same index, and vice versa.
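Since the question also asked for R alternatives, here is a minimal sketch using base R's merge(); the inline data frames simply transcribe DF1 and DF2 from the question (the extra value in DF2's row 203 is ignored here).

```r
# Recreate the two data frames from the question (transcribed by hand)
df1 <- data.frame(
  Chromosome = c(1009250, 1010820, 31783, 203),
  RXN = c("q9hxn4", "p16256", "p16588", "3-DEHYDROQUINATE-DEHYDRATASE-RXN"),
  ID = c(NA, NA, "PNTOt4;PNTOt4pp", "DHQTi;DQDH"),
  stringsAsFactors = FALSE
)
df2 <- data.frame(
  Chromosome = c(203, 1010820, 1009250, 31783),
  Count1 = c(1, 152, 5, 1),
  Count2 = c(31, 7, 0, 0),
  Count3 = c(1, 0, 0, 0),
  Count4 = c(0, 11, 17, 0),
  Count5 = c(0, 4, 0, 0)
)

# merge() joins on the shared Chromosome column; all = TRUE keeps rows that
# appear in only one of the two data frames (their missing fields become NA)
merged <- merge(df1[, c("Chromosome", "RXN")], df2, by = "Chromosome", all = TRUE)
print(merged)
```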
problem with bootMer CI: upper and lower limits are identical
I'm having the hardest time generating confidence intervals for my glmer Poisson model. After following several very helpful tutorials (such as https://drewtyre.rbind.io/classes/nres803/week_12/lab_12/) as well as Stack Overflow posts, I keep getting very strange results: the upper and lower limits of the CI are identical. Here is a reproducible example containing a response variable called "production", a fixed effect called "Treatment_Num" and a random effect called "Genotype":

df1 <- data.frame(production=c(15,12,10,9,6,8,9,5,3,3,2,1,0,0,0,0),
                  Treatment_Num=c(1,1,1,1,2,2,2,2,3,3,3,3,4,4,4,4),
                  Genotype=c(1,1,2,2,1,1,2,2,1,1,2,2,1,1,2,2))

#run the glmer model
require(lme4)
df1_glmer <- glmer(production ~ Treatment_Num + (1|Genotype),
                   data = df1, family = poisson(link = "log"))

#make an empty data set to predict from, that contains the explanatory variables but no response
require(magrittr)
df_empty <- df1 %>%
  tidyr::expand(Treatment_Num, Genotype)

#create new column containing predictions
df_empty$PopPred <- predict(df1_glmer, newdata = df_empty, type = "response", re.form = ~0)

#function for bootMer
myFunc_df1_glmer <- function(mm) {
  predict(df1_glmer, newdata = df_empty, type = "response", re.form = ~0)
}

#run bootMer
merBoot_df1_glmer <- bootMer(df1_glmer, myFunc_df1_glmer, nsim = 10)

#get confidence intervals out of it
predCL <- t(apply(merBoot_df1_glmer$t, MARGIN = 2, FUN = quantile, probs = c(0.025, 0.975)))

#enter lower and upper limits of confidence interval into df_empty
df_empty$lci <- predCL[, 1]
df_empty$uci <- predCL[, 2]

#when viewing df_empty the problem becomes clear: the lci and uci are identical!
df_empty

Any insights you can give me will be much appreciated!
Ignore my comment! The issue is with the function you created to pass to bootMer(). You wrote:

myFunc_df1_glmer <- function(mm) {
  predict(df1_glmer, newdata = df_empty, type = "response", re.form = ~0)
}

The argument mm should be the fitted model object derived from the bootstrapped data. However, you don't pass this object to predict(), but rather the original model object. If you change the function to:

myFunc_df1_glmer <- function(mm) {
  predict(mm, newdata = df_empty, type = "response", re.form = ~0)
  #^^ pass in the object created by bootMer
}

then it works:

> df_empty
# A tibble: 8 x 5
  Treatment_Num Genotype PopPred   lci   uci
          <dbl>    <dbl>   <dbl> <dbl> <dbl>
1             1        1  12.9   9.63  15.7
2             1        2  12.9   9.63  15.7
3             2        1   5.09  3.87   5.89
4             2        2   5.09  3.87   5.89
5             3        1   2.01  1.20   2.46
6             3        2   2.01  1.20   2.46
7             4        1   0.796 0.361  1.14
8             4        2   0.796 0.361  1.14

As an aside: how many genotypes are in your actual data? If fewer than 5-7, you might do better using a straight-up glm() with genotype as a factor using sum-to-zero contrasts.
Undefined columns selected using panelvar package
Has anyone used panelvar in R? Currently I'm using the panelvar package and I'm getting this error:

Error in `[.data.frame`(data, , c(colnames(data)[panel_identifier], required_vars)) :
  undefined columns selected

My syntax currently is:

model1 <- pvargmm(
  dependent_vars = c("Change.."),
  lags = 2,
  exog_vars = c("Price"),
  transformation = "fd",
  data = base1,
  panel_identifier = c("id", "t"),
  steps = c("twostep"),
  system_instruments = FALSE,
  max_instr_dependent_vars = 99,
  min_instr_dependent_vars = 2L,
  collapse = FALSE)

I don't know why my panel_identifier is not working; it's pretty similar to the example given by the panelvar package, but it doesn't work. I want to point out that base1 is in data.frame format. Any ideas? Also, my data is structured like this:

> head(base1)
  id    t country    DDMMYY month month_text day Date_txt year    Price     Open
1  1 1296   China  1-4-2020     4        Apr   1   Apr 01 2020 12588.24 12614.82
2  1 1295   China 31-3-2020     3        Mar  31   Mar 31 2020 12614.82 12597.61
      High      Low Vol. Change..
1 12775.83 12570.32   NA  -0.0021
2 12737.28 12583.05   NA   0.0014

Thanks in advance!
Check the documentation of the package and the SSRN paper. For me it helped to ensure all entered formats are identical (you can check this with the str(base1) command). For example they write:

library(panelvar)
data("Dahlberg")
ex1_dahlberg_data <- pvargmm(dependent_vars = .......

When I look at it I get:

> str(Dahlberg)
'data.frame': 2385 obs. of 5 variables:
 $ id          : Factor w/ 265 levels "114","115","120",..: 1 1 1 1 1 1 1 1 1 2 ...
 $ year        : Factor w/ 9 levels "1979","1980",..: 1 2 3 4 5 6 7 8 9 1 ...
 $ expenditures: num 0.023 0.0266 0.0273 0.0289 0.0226 ...
 $ revenues    : num 0.0182 0.0209 0.0211 0.0234 0.018 ...
 $ grants      : num 0.00544 0.00573 0.00566 0.00589 0.00559 ...

For example, the input data must be a data.frame (in my case it had additional type specifications like tibble or data.table). I resolved it by casting as.data.frame() on it.
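A hypothetical sketch of that fix applied to the question's base1 (column names taken from the head(base1) output above; whether the identifiers need to be factors is an assumption based on the Dahlberg example):

```r
# Cast to a plain data.frame in case base1 is a tibble or data.table,
# and make the panel identifiers factors, mirroring str(Dahlberg)
base1 <- as.data.frame(base1)
base1$id <- as.factor(base1$id)
base1$t  <- as.factor(base1$t)
str(base1)  # verify the structure now matches the package's example data
```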
standard unambiguous format [R] MySQL imported data
OK, to set the scene: I have written a function to import multiple tables from MySQL (using RODBC) and run randomForest() on them. This function is run on multiple databases (as separate instances). In one particular database, and one particular table, the error "error in as.POSIXlt.character(x, tz,.....): character string not in a standard unambiguous format" is thrown. The function runs on around 150 tables across two databases without any issues except this one table. Here is a head() print from the table:

              MQLTime bar5 bar4 bar3 bar2 bar1 pat1 baXRC
1 2014-11-05 23:35:00  184   24    8   24   67  147  Flat
2 2014-11-05 23:57:00  203  184  204   67   51  147  Flat
3 2014-11-06 00:40:00  179  309   49  189   75   19  Flat
4 2014-11-06 00:46:00   28  192   60   49  152  147  Flat
5 2014-11-06 01:20:00  309   48    9   11   24   19  Flat
6 2014-11-06 01:31:00   24  177   64  152  188   19  Flat

And here is the function:

GenerateRF <- function(db, countstable, RFcutoff) {
  'load required libraries'
  library(RODBC)
  library(randomForest)
  library(caret)
  library(ff)
  library(stringi)

  'connection and data preparation'
  connection <- odbcConnect('TTODBC', uid='root', pwd='password', case="nochange")

  'import count table and check if RF is allowed to be built'
  query.str <- paste0('select * from ', db, '.', countstable, ' order by RowCount asc')
  row.counts <- sqlQuery(connection, query.str)

  'Operate only on tables that have >= RFcutoff'
  for (i in 1:nrow(row.counts)) {
    table.name <- as.character(row.counts[i,1])
    col.count <- as.numeric(row.counts[i,2])
    row.count <- as.numeric(row.counts[i,3])

    if (row.count >= 20) {

      'Delete old RFs and DFs for input pattern'
      if (file.exists(paste0(table.name, '_RF.Rdata')))
        file.remove(paste0(table.name, '_RF.Rdata'))
      if (file.exists(paste0(table.name, '_DF.Rdata')))
        file.remove(paste0(table.name, '_DF.Rdata'))

      'import and clean data'
      query.str2 <- paste0('select * from ', db, '.', table.name, ' order by mqltime asc')
      raw.data <- sqlQuery(connection, query.str2)

      'partition data into training/test sets'
      set.seed(489)
      index <- createDataPartition(raw.data$baXRC, p=0.66, list=FALSE, times=1)
      data.train <- raw.data[index,]
      data.test <- raw.data[-index,]

      'find optimal trees to grow (without outcome and dates)'
      data.mtry <- as.data.frame(tuneRF(data.train[, c(-1,-col.count)], data.train$baXRC,
                                        ntreetry=100, stepFactor=.5, improve=0.01,
                                        trace=TRUE, plot=TRUE, dobest=FALSE))
      best.mtry <- data.mtry[which(data.mtry[,2] == min(data.mtry[,2])), 1]

      'compress df'
      data.ff <- as.ffdf(data.train)

      'run RF. Originally set to 1000 trees but M1 dataset is too large for laptop. Maybe train at the lab?'
      data.rf <- randomForest(baXRC~., data=data.ff[,-1], mtry=best.mtry, ntree=500,
                              keep.forest=TRUE, importance=TRUE, proximity=FALSE)

      'generate and print variable importance plot'
      varImpPlot(data.rf, main = table.name)

      'predict on test data'
      data.test.pred <- as.data.frame(predict(data.rf, data.test, type="prob"))

      'get dates and name date column'
      data.test.dates <- data.frame(data.test[,1])
      colnames(data.test.dates) <- 'MQLTime'

      'attach dates to prediction df'
      data.test.res <- cbind(data.test.dates, data.test.pred)

      'force date coercion to attempt negating unambiguous format error'
      data.test.res$MQLTime <- format(data.test.res$MQLTime, format = "%Y-%m-%d %H:%M:%S")

      'delete row names, coerce to dataframe, generate row table name and export outcomes to MySQL'
      rownames(data.test.res) <- NULL
      data.test.res <- as.data.frame(data.test.res)
      root.table <- stri_sub(table.name, 0, -5)
      sqlUpdate(connection, data.test.res,
                tablename = paste0(db, '.', root.table, '_outcome'),
                index = "MQLTime")

      'save RF and test df/s for future use; save latest version of row_counts to MQL4 folder'
      save(data.rf, file = paste0("C:/Users/user/Documents/RF_test2/", table.name, '_RF.Rdata'))
      save(data.test, file = paste0("C:/Users/user/Documents/RF_test2/", table.name, '_DF.Rdata'))
      write.table(row.counts,
                  paste0("C:/Users/user/AppData/Roaming/MetaQuotes/Terminal/71FA4710ABEFC21F77A62A104A956F23/MQL4/Files/",
                         db, "_m1_rowcounts.csv"),
                  sep = ",", col.names = F, row.names = F, quote = F)

      'end of conditional block'
    }
    'end of for loop'
  }

  'close all connections to MySQL'
  odbcCloseAll()

  'clear workspace'
  rm(list=ls())

  'end of function'
}

At this line:

data.test.res$MQLTime <- format(data.test.res$MQLTime, format = "%Y-%m-%d %H:%M:%S")

I have tried coercing MQLTime using various functions including as.character(), as.POSIXct(), as.POSIXlt(), as.Date(), format(), and as.character(as.Date()), and have also tried "%y" vs "%Y" and "%OS" vs "%S". All variants seem to have no effect on the error, and the function is still able to run on all other tables. I have checked the table manually (which contains almost 1500 rows), and also in MySQL, looking for NULL dates or dates like "0000-00-00 00:00:00". Also, if I run the function line by line in the R terminal, this offending table is processed without any problems, which just confuses the hell out of me. I've exhausted all the functions/solutions I can think of (and also all those I could find through Dr. Google), so I am pleading for help here.

I should probably mention that the MQLTime column is stored as varchar() in MySQL. This was done to try and get around issues with type conversions between R and MySQL.

SHOW VARIABLES LIKE "%version%";
innodb_version, 5.6.19
protocol_version, 10
slave_type_conversions,
version, 5.6.19
version_comment, MySQL Community Server (GPL)
version_compile_machine, x86
version_compile_os, Win32

> sessionInfo()
R version 3.0.2 (2013-09-25)
Platform: i386-w64-mingw32/i386 (32-bit)

Edit: str() output on the data as imported from MySQL, showing MQLTime is already in POSIXct format:

> str(raw.data)
'data.frame': 1472 obs. of 8 variables:
 $ MQLTime: POSIXct, format: "2014-11-05 23:35:00" "2014-11-05 23:57:00" "2014-11-06 00:40:00" "2014-11-06 00:46:00" ...
 $ bar5   : int 184 203 179 28 309 24 156 48 309 437 ...
 $ bar4   : int 24 184 309 192 48 177 48 68 60 71 ...
 $ bar3   : int 8 204 49 60 9 64 68 27 192 147 ...
 $ bar2   : int 24 67 189 49 11 152 27 56 437 67 ...
 $ bar1   : int 67 51 75 152 24 188 56 147 71 0 ...
 $ pat1   : int 147 147 19 147 19 19 147 19 147 19 ...
 $ baXRC  : Factor w/ 3 levels "Down","Flat",..: 2 2 2 2 2 2 2 2 2 3 ...

So I have tried declaring stringsAsFactors = FALSE in the dataframe operations and this had no effect. Interestingly, if the offending table is removed from processing through an additional conditional statement in the first 'if' block, the function stops on the table immediately preceding the blocked table. If both the original and the new offending tables are removed from processing, then the function stops on the table immediately prior to them. I have never seen this sort of behavior before and it really has me stumped. I watched system resources during the function and they never seemed to max out. Could this be a problem with the 'for' loop and not necessarily date formats?
There appears to be some egg on my face. The table following the table where the function was stopping had a row with value '0000-00-00 00:00:00'. I added another statement in my MySQL function to remove these rows when pre-processing the tables. Thanks to those that had a look at this.
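For anyone hitting the same thing: one way to filter such rows is in the query itself. A sketch built from the question's query string (the WHERE clause is my addition, assuming the MySQL zero-date literal shown above):

```r
# Skip MySQL "zero dates" at import time rather than post-processing in R
query.str2 <- paste0("select * from ", db, ".", table.name,
                     " where MQLTime <> '0000-00-00 00:00:00'",
                     " order by MQLTime asc")
raw.data <- sqlQuery(connection, query.str2)
```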
json format to csv format conversion, using R
I have a json file as follows:

library(RCurl)
library(RJSONIO)

url <- 'http://www.pm25.in/api/querys/aqi_details.json?city=shijiazhuang&token=5j1znBVAsnSf5xQyNQyq'
web <- getURL(url)
raw <- fromJSON(web)

I want to convert it into a csv file like this:

aqi   area    co co_24h no2 no2_24h o3 o3_24h o3_8h o3_8h_24h pm10
142  石家庄 1.509  1.412  95      47  3    137    35        90  119

pm10_24h pm2_5 pm2_5_24h position_name primary_pollutant  quality so2
     195    80       108      化工学校     颗粒物(PM2.5) 轻度污染  33

so2_24h station_code           time_point
     32        1028A 2013-07-15T23:00:00Z

I used as.data.frame() and other functions, but it didn't work. How can I do this? Please help me, thanks.
There must be a more readable solution... The following replaces the NULLs with NAs, calls as.data.frame on each row, and combines the rows with rbind.

tmp <- lapply(raw, function(u)
  lapply(u, function(x) if (is.null(x)) NA else x)
)
tmp <- lapply(tmp, as.data.frame)
tmp <- do.call(rbind, tmp)
tmp
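The question asked for a CSV file; once tmp is a single data frame, base R's write.csv finishes the job (the output file name here is an assumption):

```r
# Export the combined data frame; row.names = FALSE drops R's row labels,
# and UTF-8 keeps the Chinese station/area names intact
write.csv(tmp, "aqi_details.csv", row.names = FALSE, fileEncoding = "UTF-8")
```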