Evaluate Multivariate Function on a grid for two of the variables

I should probably delete this question, since I found an answer elsewhere on Stack Overflow.
Using 'Vectorize' solved the problem.
outer(x,y,Vectorize(model))
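For anyone else hitting this: outer does not loop over pairs; it calls the function once with the full vectors of all grid combinations, so any function whose internals assume scalar arguments (like seq(tau_wall, 0, ...) inside mine) breaks. Vectorize wraps the function in mapply so it is invoked once per pair. Here is a minimal sketch with a stand-in scalar-only function (f is illustrative, not my pipe model):
f <- function(a, b) {
  # seq() requires a scalar 'from', just like tau_wall inside my model
  sum(seq(a, 0, length.out = 10)) + b
}
x <- seq(1, 5, 1)
y <- seq(1, 5, 1)
# outer(x, y, f)           # fails: 'from' must be of length 1
outer(x, y, Vectorize(f))  # works: f is called once per (a, b) pair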
I'll leave this up for a bit in case this helps anyone else. The issue I had is described below.
I am an occasional user of R, mostly for Scientific Programming. I want to evaluate a function on a grid of x,y values. Something similar to this simple example:
> x=seq(1,5,1)
> y=seq(1,5,1)
> model = function(a,b){a+b}
> (outer(x,y,model))
     [,1] [,2] [,3] [,4] [,5]
[1,]    2    3    4    5    6
[2,]    3    4    5    6    7
[3,]    4    5    6    7    8
[4,]    5    6    7    8    9
[5,]    6    7    8    9   10
The input and output of my actual function looks like this (the volumetric flow rate of a 'Cross' fluid in a circular pipe):
> SimpleCrossModelCircPipe(eta_0 = 176.45, lambda = 2.4526, m=0.83122, mesh=1000,
pipe_length=1.925, press_drop=20.3609, pipe_diameter=0.0413)
[1] 0.005635064
I want to evaluate the function on a grid of values for, say, press_drop and pipe_diameter. So, I define a function of these two variables:
> model = function(a,b){SimpleCrossModelCircPipe(eta_0 = 176.45, lambda = 2.4526,
m=0.83122, mesh=1000,pipe_length=1.925, press_drop=a, pipe_diameter=b)}
For just one pair of values this does work:
> model(1,2)
[1] 107249.8
But if I use it with 'outer', there is an error.
> (outer(x,y,model))
Error in seq.default(tau_wall, 0, length.out = mesh + 1) :
'from' must be of length 1
If I place a print statement on the local variable tau_wall, I can see that outer generates all 25 values before reaching the line that produces the error: tau_wall should have length 1, not 25.
(outer(x,y,model))
[1] 352.5289 705.0579 1057.5868 1410.1158 1762.6447 705.0579 1410.1158 2115.1736
[9] 2820.2315 3525.2894 1057.5868 2115.1736 3172.7605 4230.3473 5287.9341 1410.1158
[17] 2820.2315 4230.3473 5640.4630 7050.5788 1762.6447 3525.2894 5287.9341 7050.5788
[25] 8813.2235
Is there a simple fix to make this work?
Best,
Steve

ValueError; Quantreg with Intercept

I am working with the following dataframe, called test:
                    y  intercept                   x
0  -1.6168468132687293          1                  NA
1   1.5500031232431757          1                  NA
2   1.5952617833602785          1  1.5500031232431757
3   1.1390724309357498          1  1.5952617833602785
4   0.9950311340335872          1  1.1390724309357498
5   0.7095780139613861          1  0.9950311340335872
6   0.5962529862944801          1  0.7095780139613861
7   0.6555353674581792          1  0.5962529862944801
8   1.0008751093886736          1  0.6555353674581792
9   1.2648319050758074          1  1.0008751093886736
I am trying to apply the smf.quantreg() function to the dataframe as follows:
import statsmodels.formula.api as smf
Output_pre = smf.quantreg('y ~ intercept + x', test, missing='drop')
Output = Output_pre.fit(q=0.25)
The equivalent smf.ols() call works just fine. However, applying the smf.quantreg() function yields the following error:
ValueError: operands could not be broadcast together with shapes (3,) (2,)
Why is that so and how do I solve this issue?
I found out myself: by default, the formula interface adds an intercept. Hence the design matrix is not full rank when my explicit intercept column is included as well (the two columns are perfectly collinear). Dropping the redundant column from the formula, i.e. using 'y ~ x', resolves the error.

Difference in Difference in R (Callaway & Sant'Anna)

I'm trying to implement the DiD package by Callaway and Sant'Anna in my master's thesis, but I'm coming across errors when I run the DiD code and when I try to view the summary.
did1 <- att_gt(yname = "countgreen",
               gname = "signing_year",
               idname = "investorid",
               tname = "dealyear",
               data = panel8)
This code warns me that:
"Be aware that there are some small groups in your dataset.
Check groups: 2006,2007,2008,2011. Dropped 109 observations that had missing data.overlap condition violated for 2009 in time period 2001Not enough control units for group 2009 in time period 2001 to run specified regression"
This error is repeated several hundred times.
Does this mean I need to re-match my treatment firms to control firms using a 1:3 ratio (treat:control) rather than the 1:1 I used previously?
Then when I run this code:
summary(did1)
I get this message:
Error in Math.data.frame(list(`mpobj$group` = c(2009L, 2009L, 2009L, 2009L, : non-numeric variable(s) in data frame: mpobj$att
I'm really not too sure what this means.
Can anyone help troubleshoot?
Thanks,
Rory
I don't know the DiD package, but I can answer about the summary(did1) error.
If you do str(did1), you should see something like this:
'data.frame': 6 obs. of 7 variables:
$ cluster : int 1 2 3 4 5 6
$ price_scal : num -0.572 -0.132 0.891 1.091 -0.803 ...
$ hd_scal : num -0.778 0.63 0.181 -0.24 0.244 ...
$ ram_scal : num -0.6937 0.00479 0.46411 0.00653 -0.31204 ...
$ screen_scal: num -0.457 2.642 -0.195 2.642 -0.325 ...
$ ads_scal : num 0.315 -0.889 0.472 0.47 -0.822 ...
$ trend_scal : num -0.604 1.267 -0.459 -0.413 1.156 ...
But in your case you likely have one variable, mpobj$att, that is a factor or a character column instead of numeric.
Maybe fixing this will also make the DiD code run.
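A minimal sketch of that idea, assuming did1 really exposes an att component the way the error message (mpobj$att) suggests:
str(did1)  # inspect what att_gt() actually returned
# If att arrived as a factor or character column, coerce it to numeric.
# Going through as.character() first avoids turning factor levels into
# their underlying integer codes.
did1$att <- as.numeric(as.character(did1$att))
summary(did1)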

rjson::fromJSON returns only the first item

I have a sqlite database file with several columns. One of the columns has a JSON dictionary (with two keys) embedded in it. I want to extract the JSON column to a data frame in R that shows each key in a separate column.
I tried rjson::fromJSON, but it reads only the first item. Is there a trick that I'm missing?
Here's an example that mimics my problem:
> eg <- as.vector(c("{\"3x\": 20, \"6y\": 23}", "{\"3x\": 60, \"6y\": 50}"))
> fromJSON(eg)
$`3x`
[1] 20

$`6y`
[1] 23
The desired output is something like:
# a data frame for both variables
3x 6y
1 20 23
2 60 50
or,
# a data frame for each variable
3x
1 20
2 60
6y
1 23
2 50
What you are looking for is a combination of lapply and some application of rbind or related: fromJSON parses a single JSON string, so a vector of JSON strings has to be parsed element by element.
I'll extend your data a little, just to have more than 2 elements.
eg <- c("{\"3x\": 20, \"6y\": 23}",
"{\"3x\": 60, \"6y\": 50}",
"{\"3x\": 99, \"6y\": 72}")
library(jsonlite)
Using base R, we can do
do.call(rbind.data.frame, lapply(eg, fromJSON))
# X3x X6y
# 1 20 23
# 2 60 50
# 3 99 72
You might be tempted to do something like Reduce(rbind, lapply(eg, fromJSON)), but there is a notable difference: in the Reduce model, rbind is called N-1 times, where N is the number of elements in eg. That results in a LOT of copying of data; it might work alright with small N, but it scales horribly. With the do.call option, rbind is called exactly once.
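A quick way to convince yourself of the scaling claim (exact timings will vary by machine; it is the relative gap that matters as the input grows):
big <- rep(eg, 2000)  # 6000 JSON strings
system.time(do.call(rbind.data.frame, lapply(big, fromJSON)))  # rbind called once
system.time(Reduce(rbind, lapply(big, fromJSON)))              # rbind called 5999 times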
Notice that the column labels have been R-ized, since data.frame column names should not start with numbers. (It is possible, but generally discouraged.)
If you're confident that all substrings will have exactly the same elements, then you may be good here. If there's a chance that there will be a difference at some point, perhaps
eg <- c(eg, "{\"3x\": 99}")
then you'll notice that the base R solution no longer works by default.
do.call(rbind.data.frame, lapply(eg, fromJSON))
# Error in (function (..., deparse.level = 1, make.row.names = TRUE, stringsAsFactors = default.stringsAsFactors()) :
# numbers of columns of arguments do not match
There may be techniques to try to normalize the elements such that you can be assured of matches. However, if you're not averse to a tidyverse package:
library(dplyr)
eg2 <- bind_rows(lapply(eg, fromJSON))
eg2
# # A tibble: 4 × 2
# `3x` `6y`
# <int> <int>
# 1 20 23
# 2 60 50
# 3 99 72
# 4 99 NA
Though you cannot access the columns directly with the dollar method, you can still use [[ or backticks:
eg2$3x
# Error: unexpected numeric constant in "eg2$3"
eg2[["3x"]]
# [1] 20 60 99 99
eg2$`3x`
# [1] 20 60 99 99

Error in eval(expr, envir, enclos) while using Predict function

When I try to run predict() on the dataset, it keeps giving me an error:
Error in eval(expr, envir, enclos) : object 'LoanRange' not found
Here is part of the dataset:
       LoanRange    Loan.Type    N   WAFICO    WALTV WAOrigRev WAPTValue
1        0-99999 Conventional  109 722.5216 63.55385  6068.239 0.6031879
2        0-99999          FHA   30 696.6348 80.00100  7129.650 0.5623650
3        0-99999           VA   13 698.6986 74.40525  7838.894 0.4892977
4  100000-149999 Conventional  860 731.2333 68.25817  6438.330 0.5962638
5  100000-149999          FHA  285 673.2256 82.42225  8145.068 0.5211495
6  100000-149999           VA  125 704.1686 87.71306  8911.461 0.5020074
7  150000-199999 Conventional 1291 738.7164 70.08944  8125.979 0.6045117
8  150000-199999          FHA  403 672.0891 84.65318 10112.192 0.5199632
9  150000-199999           VA  195 694.1885 90.77495 10909.393 0.5250807
10 200000-249999 Conventional 1162 740.8614 70.65027  8832.563 0.6111419
11 200000-249999          FHA  348 667.6291 85.13457 11013.856 0.5374226
12 200000-249999           VA  221 702.9796 91.76759 11753.642 0.5078298
13 250000-299999 Conventional  948 742.0405 72.22742  9903.160 0.6106858
Following is the code used for predicting the count data N, after determining the overdispersion:
model2=glm(N~Loan.Type+WAFICO+WALTV+WAOrigRev+WAPTValue, family=quasipoisson(link = "log"), data = DF)
summary(model2)
This is what I have done to create a sequence of counts and use the predict function:
countaxis <- seq(0, 1500, 150)
Y <- predict(model2, list(N = countaxis, type = "response"))
At this step, I get the error -
Error in eval(expr, envir, enclos) : object 'LoanRange' not found
Can someone please point out where the problem is here?
Think about what exactly you are trying to predict. You are providing the predict function values of N (via countaxis), but in fact the way you set up your model, N is your response variable and the remaining variables are the predictors. That's why R is asking for LoanRange. It actually needs values for LoanRange, Loan.Type, ..., WAPTValue in order to predict N. So you need to feed predict inputs that let the model try to predict N.
For example, you could do something like this:
# create some fake data to predict N
newdata1 = data.frame(rbind(c("0-99999", "Conventional", 722.5216, 63.55385, 6068.239, 0.6031879),
                            c("150000-199999", "VA", 12.5216, 3.55385, 60.239, 0.0031879)))
colnames(newdata1) = c("LoanRange", "Loan.Type", "WAFICO", "WALTV", "WAOrigRev", "WAPTValue")
# ensure that numeric variables are indeed numeric and not factors
newdata1$WAFICO = as.numeric(as.character(newdata1$WAFICO))
newdata1$WALTV = as.numeric(as.character(newdata1$WALTV))
newdata1$WAPTValue = as.numeric(as.character(newdata1$WAPTValue))
newdata1$WAOrigRev = as.numeric(as.character(newdata1$WAOrigRev))
# make predictions - this will output values of N
predict(model2, newdata = newdata1, type = "response")
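As an aside, building newdata column by column avoids the factor/character conversion dance entirely; a sketch with the same illustrative values as above:
newdata1 <- data.frame(
  LoanRange = c("0-99999", "150000-199999"),
  Loan.Type = c("Conventional", "VA"),
  WAFICO    = c(722.5216, 12.5216),
  WALTV     = c(63.55385, 3.55385),
  WAOrigRev = c(6068.239, 60.239),
  WAPTValue = c(0.6031879, 0.0031879)
)
predict(model2, newdata = newdata1, type = "response")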

Is it possible to write a table to a file in JSON format in R?

I'm making word frequency tables with R, and the preferred output format would be a JSON file, something like:
{
  "word" : "dog",
  "frequency" : 12
}
Is there any way to save the table directly into this format? I've been using the write.csv() function and converting the output into JSON, but this is very complicated and time-consuming.
set.seed(1)
( tbl <- table(round(runif(100, 1, 5))) )
##  1  2  3  4  5
##  9 24 30 23 14
library(rjson)
sink("json.txt")
cat(toJSON(tbl))
sink()
file.show("json.txt")
## {"1":9,"2":24,"3":30,"4":23,"5":14}
or even better:
set.seed(1)
( tab <- table(letters[round(runif(100, 1, 26))]) )
## a b c d e f g h i j k l m n o p q r s t u v w x y z
## 1 2 4 3 2 5 4 3 5 3 9 4 7 2 2 2 5 5 5 6 5 3 7 3 2 1
sink("lets.txt")
cat(toJSON(tab))
sink()
file.show("lets.txt")
## {"a":1,"b":2,"c":4,"d":3,"e":2,"f":5,"g":4,"h":3,"i":5,"j":3,"k":9,"l":4,"m":7,"n":2,"o":2,"p":2,"q":5,"r":5,"s":5,"t":6,"u":5,"v":3,"w":7,"x":3,"y":2,"z":1}
Then validate it with http://www.jsonlint.com/ to get pretty formatting. If you have a multidimensional table, you'll have to work it out a bit...
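For instance, one way to handle a two-dimensional table is to serialize each row as a named list (mtcars is just a stand-in dataset here; pick whatever nesting your consumer expects):
tab2 <- table(mtcars$cyl, mtcars$gear)
# one JSON object per row label, keyed by the column labels
out <- setNames(
  lapply(seq_len(nrow(tab2)), function(i) as.list(tab2[i, ])),
  rownames(tab2)
)
cat(toJSON(out))
## {"4":{"3":1,"4":8,"5":2},"6":{"3":2,"4":4,"5":1},"8":{"3":12,"4":0,"5":2}}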
EDIT:
Oh, now I see, you want the dataset characteristics sink-ed to a JSON file. No problem, just give us some sample data and I'll work on the code a bit. Practically, you need to carry the data into the desired format, then convert it to JSON; a list should suffice. Give me a sec, I'll update my answer.
EDIT #2:
Well, time is relative... it's common knowledge... Here you go:
( dtf <- structure(list(word = structure(1:3, .Label = c("cat", "dog",
"mouse"), class = "factor"), frequency = c(12, 32, 18)), .Names = c("word",
"frequency"), row.names = c(NA, -3L), class = "data.frame") )
## word frequency
## 1 cat 12
## 2 dog 32
## 3 mouse 18
If dtf is a simple data frame (class data.frame), this will work; if it's not, coerce it! Long story short, you can do:
toJSON(as.data.frame(t(dtf)))
## [1] "{\"V1\":{\"word\":\"cat\",\"frequency\":\"12\"},\"V2\":{\"word\":\"dog\",\"frequency\":\"32\"},\"V3\":{\"word\":\"mouse\",\"frequency\":\"18\"}}"
I thought I'd need some melt for this one, but a simple t did the trick. Now you only need to deal with the column names after transposing the data.frame. t coerces data.frames to matrix, so you need to convert it back to a data.frame afterwards. I used as.data.frame, but you can also use toJSON(data.frame(t(dtf))) - you'll get X instead of V as a variable name. Alternatively, you can use a regexp to clean the JSON file (if needed), but that's lousy practice - try to work it out by preparing the data.frame.
I hope this helped a bit...
These days I would typically use the jsonlite package.
library("jsonlite")
toJSON(mydatatable, pretty = TRUE)
This turns the data table into a JSON array of key/value pair objects directly.
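For example, with the dtf data frame from the earlier answer (write_json is jsonlite's helper for writing straight to a file):
library(jsonlite)
toJSON(dtf, pretty = TRUE)
## [
##   {
##     "word": "cat",
##     "frequency": 12
##   },
##   ...
## ]
write_json(dtf, "word_freq.json", pretty = TRUE)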
RJSONIO is a package "that allows conversion to and from data in Javascript object notation (JSON) format". You can use it to export your object as a JSON file.
library(RJSONIO)
writeLines(toJSON(anobject), "afile.JSON")