Plotting Multiple Regression

We have to do multiple regression for a college project. We have all the data, but we don't know how to plot it or how to draw a regression line.
Our current programming

If you are new to R, try the visreg package. Suppose you have 3 explanatory variables in your model:
library(visreg)
lm.result <- lm(povred~lnenp+maj+pr)
Since you have 3 variables, visreg will produce 3 plots. In each plot the other two variables are held at their means. So you first want to define a 2x2 space for the plots:
par(mfrow=c(2,2))
Then just ask visreg to plot your lm result:
visreg(lm.result)
This is how the plots should look.

Related

Plotting multiple classifier ROC curves in one plot in JMP Pro

I have 4 sets of numeric continuous data that I used for regression analysis.
Now, I performed ROC analysis using the "Fit Y by X" option of JMP Pro for each set of data, which generated four separate ROC curves and AUC values in the JMP Pro program.
I obtained these curves using the following steps:
My question is, how can I generate all of these graphs in one plot?
When I generated the plots, they came out like the two at the top of the attached picture. I want something like the plot at the bottom of the picture.
For example, what steps are required in the software?

How can we define an RNN - LSTM neural network with multiple outputs for the input at time "t"?

I am trying to construct an RNN to predict the probability of a player playing the match, along with the runs scored and wickets taken by the player. I would use an LSTM so that performance in the current match would influence the player's future selection.
Architecture summary:
Input features: Match details - Venue, teams involved, team batting first
Input samples: Player roster of both teams.
Output:
Discrete: Binary: Did the player play.
Discrete: Wickets taken.
Continuous: Runs scored.
Continuous: Balls bowled.
Question:
Most often an RNN uses "Softmax" or "MSE" in the final layers to process "a" from the LSTM, providing only a single variable "Y" as output. But here there are four dependent variables (2 discrete and 2 continuous). Is it possible to stitch together all four as output variables?
If yes, how do we handle mix of continuous and discrete outputs with loss function?
(Though the output from the LSTM "a" has multiple features and carries the information to the next time-slot, we need multiple features at the output for training against the ground truth.)
You just do it. Without more detail on the software (if any) in use, it is hard to give more detail.
The output of the LSTM unit at every time step is one of the hidden layers of your network.
You can then feed it into 4 output layers:
1. Sigmoid.
2. I'd mess around with this a bit. Maybe 4x sigmoid (4 wickets to an innings, right?), or ReLU.
3, 4. Linear (squaring is also an option, or ReLU).
For training purposes, your loss function is the sum of your 4 individual losses.
If they were all MSE, you could concatenate your 4 outputs before calculating the loss.
But since the first is cross-entropy (for a decision sigmoid), you would calculate each loss separately and sum them.
You can still concatenate them afterwards to have an output vector.
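As a minimal sketch of the summed-loss idea, here is a plain NumPy calculation with made-up numbers for one player. The head names and values are illustrative assumptions, not from the original post: binary cross-entropy for the sigmoid head, squared error for the rest, summed into one training loss.

```python
import numpy as np

# Hypothetical per-head predictions and ground truth for one player
# (illustrative values only).
played_pred, played_true = 0.9, 1.0    # sigmoid head (did the player play)
wickets_pred, wickets_true = 2.3, 2.0  # count head (wickets taken)
runs_pred, runs_true = 41.0, 45.0      # linear head (runs scored)
balls_pred, balls_true = 30.0, 28.0    # linear head (balls bowled)

def bce(p, y, eps=1e-7):
    """Binary cross-entropy for the discrete decision head."""
    p = np.clip(p, eps, 1 - eps)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def mse(p, y):
    """Squared error for the continuous/count heads."""
    return (p - y) ** 2

# The total training loss is simply the sum of the four per-head losses.
total_loss = (bce(played_pred, played_true)
              + mse(wickets_pred, wickets_true)
              + mse(runs_pred, runs_true)
              + mse(balls_pred, balls_true))
print(total_loss)
```

In a real framework each head would get its own loss term the same way, and the optimizer would backpropagate through the sum.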

Obtaining multiple output in regression using deep learning

Given an RGB image of a hand and the 3D positions of the hand's keypoints as a dataset, I want to treat this as a regression problem in DL. In this case the input will be the RGB image, and the output should be the estimated 3D positions of the keypoints.
I have seen some info about regression, but most of it tries to estimate one single value. Is it possible to estimate multiple values (or outputs) all at once?
For now I have referred to this code. This guy is trying to estimate the age of a person in the image.
The output vector from a neural net can represent anything, as long as you define the loss function well. Say you want to detect the (x,y,z) coordinates of 10 keypoints; then just have a 30-element output vector, say (x1,y1,z1,x2,y2,z2,...,x10,y10,z10), where xi,yi,zi denote the coordinates of the ith keypoint. Basically, you can use any order you find convenient. Just be careful with your loss function. Say you want to calculate RMSE loss: you would have to extract the triples correctly and then calculate the RMSE loss for each keypoint. Or, if you are familiar with linear algebra, just reshape it into a 3x10 matrix correctly, have your ground truth also as a 3x10 matrix, and then just use
loss = tf.sqrt(tf.reduce_mean(tf.squared_difference(Y1, Y2)))
But once you have formulated your net you will have to stick to it.
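Here is a concrete NumPy sketch of the reshape-and-RMSE idea, using made-up prediction and ground-truth vectors. One detail worth noting: with the interleaved (x1,y1,z1,x2,...) ordering, a row-major reshape naturally gives a 10x3 matrix with one row per keypoint.

```python
import numpy as np

rng = np.random.default_rng(0)
# Flat 30-element vectors (x1,y1,z1,...,x10,y10,z10) -- illustrative data.
pred = rng.random(30)
true = rng.random(30)

# Reshape into 10 keypoints x 3 coordinates. reshape(10, 3) matches the
# interleaved (x_i, y_i, z_i) ordering described above.
pred_kp = pred.reshape(10, 3)
true_kp = true.reshape(10, 3)

# A single RMSE over all coordinates, i.e. the NumPy equivalent of
# tf.sqrt(tf.reduce_mean(tf.squared_difference(Y1, Y2))).
rmse_all = np.sqrt(np.mean((pred_kp - true_kp) ** 2))

# Or a per-keypoint RMSE, if you want to inspect each triple separately.
rmse_per_kp = np.sqrt(np.mean((pred_kp - true_kp) ** 2, axis=1))
print(rmse_all)
```

Because the mean runs over every element either way, the reshaped RMSE equals the RMSE of the flat vectors; the reshape only matters when you want per-keypoint errors.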

Is it possible to plot complex variable in wxMaxima or Octave?

For example, if I want to plot sin(z) where z is a complex variable, how would I achieve it in either Octave or Maxima?
I don't know about Octave, but here is a message about that, with some code you can try in Maxima: https://www.ma.utexas.edu/pipermail/maxima/2007/006644.html
There may be more specific information for wxMaxima -- you can try their user forum: https://sourceforge.net/p/wxmaxima/discussion/435775/
(referring to Octave 4.0.0)
How do you want to represent the output of the function? Plotting either the real or imaginary part of the output can be done fairly simply using a 3-dimensional graph, where the x and y axes are the real and imaginary components of z, and the vertical axis is either the real or imaginary value of sin(z). Producing those is fairly simple in Octave. Here's a link to a script you can save and run to show an example.
Simply change the g = exp(f) line to g = sin(f).
Octave-help mailing list example
Note that the imaginary part plot is commented out. Just switch the # between the different plot commands if you want to see that part.
Now, are you instead looking for options to map the Z plane (z=x+iy) to the W plane (w=u+iv) and represent closed contours mapped by w=sin(z)? In that case you'll need to do parametric plotting, as described on this FIT site. There is a link to his Matlab program at the bottom of the explanation that provides one method of using color coding to match z->w plane contour mapping.
Those m-files are written for Matlab, so a few things do not work, but the basic plotting is compatible with Octave 4.0.0 (the top-level ss13.m file will fail on calls to flops and imwrite).
But, if you put your desired function in myfun13.m for f, df and d2f, (sin(z), cos(z), -sin(z) respectively), then run cvplot13, you'll get color maps showing the correspondence between z and w planes.
wxMaxima has a plot3d that can do it. Since the expression to plot is in terms of x and y, I plotted the function's magnitude with abs(f(x+%i*y)):
plot3d(abs((x+%i*y-3)*(x+%i*y-5)*(x+%i*y-6)), [x,2,7], [y,-1,1], [grid,100,100], [z,0,5])$

Extract 3D coordinates from R PCA

I am trying to find a way to make 3D PCA visualization from R more portable.
I have run a PCA on 2D matrix using prcomp().
How do I export the 3D coordinates of data points, along with labels and colors (RGB) associated with each?
What's the practical difference between princomp() and prcomp()?
Any ideas on how to best view the 3D PCA plot using HTML5 and canvas?
Thanks!
Here is an example to work from:
pc <- prcomp(~ . - Species, data = iris, scale = TRUE)
The axis scores are extracted from component x; as such you can just write them out (you don't say how you want them exported) as CSV using:
write.csv(pc$x[, 1:3], "my_pc_scores.csv")
If you want to assign information to these scores (the colours and labels, which are not associated with the PCA itself but something you assign yourself), then add them to the matrix of scores and then export. In the example above there are three species with 50 observations each. If we want that information exported alongside the scores, then something like this will work:
scrs <- data.frame(pc$x[, 1:3], Species = iris$Species,
Colour = rep(c("red","green","black"), each = 50))
write.csv(scrs, "my_pc_scores2.csv")
scrs looks like this:
> head(scrs)
PC1 PC2 PC3 Species Colour
1 -2.257141 -0.4784238 0.12727962 setosa red
2 -2.074013 0.6718827 0.23382552 setosa red
3 -2.356335 0.3407664 -0.04405390 setosa red
4 -2.291707 0.5953999 -0.09098530 setosa red
5 -2.381863 -0.6446757 -0.01568565 setosa red
6 -2.068701 -1.4842053 -0.02687825 setosa red
Update: I missed the point about RGB. See ?rgb for ways of specifying this in R, but if all you want are the RGB strings then change the above to use something like
Colour = rep(c("#FF0000","#00FF00","#000000"), each = 50)
instead, where you specify the RGB strings you want.
The essential difference between princomp() and prcomp() is the algorithm used to calculate the PCA. princomp() uses an eigen decomposition of the covariance or correlation matrix, whilst prcomp() uses the singular value decomposition (SVD) of the raw data matrix. princomp() only handles data sets where there are at least as many samples (rows) as variables (columns). prcomp() can handle that type of data as well as data sets where there are more columns than rows. In addition, and perhaps of greater importance depending on the uses you have in mind, the SVD is preferred over the eigen decomposition for its better numerical accuracy.
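To see that the two routes agree, here is a NumPy sketch (not R, but the same linear algebra): an eigen decomposition of the covariance matrix, as princomp() does, versus an SVD of the centred data matrix, as prcomp() does, recovering the same variances and the same axes up to sign.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 4))
Xc = X - X.mean(axis=0)            # centre the columns, as PCA requires

# princomp-style: eigen decomposition of the covariance matrix
cov = np.cov(Xc, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]  # sort by decreasing variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# prcomp-style: SVD of the centred data matrix itself
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
svd_vals = s**2 / (Xc.shape[0] - 1)  # singular values -> variances

# Same variances, and the same principal axes up to a sign flip
assert np.allclose(eigvals, svd_vals)
assert np.allclose(np.abs(eigvecs), np.abs(Vt.T))
```

The numerical-accuracy point above is exactly why the SVD route avoids forming the covariance matrix at all: squaring the data to build it squares its condition number.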
I have tagged the Q with html5 and canvas in the hope specialists in those can help. If you don't get any responses, delete point 3 from your Q and start a new one specifically on the topic of displaying the PCs using canvas, referencing this one for detail.
You can find out about any R object by doing str(object_name). In this case:
m <- matrix(rnorm(50), nrow = 10)
res <- prcomp(m)
str(res)
If you look at the help page for prcomp by doing ?prcomp, you can discover that the scores are in res$x and the loadings are in res$rotation. These are labeled by PC already. There are no colors, unless you decide to assign some colors in the course of a plot. See the respective help pages to compare princomp with prcomp for a comparison between the two functions. Basically, the difference between them has to do with the method used behind the scenes. I can't help you with your last question.
You state that you perform PCA on a 2D matrix. If this is your data matrix, there is no way to get 3D PCs. Of course, it might be that your 2D matrix is a covariance matrix of the data; in that case you need to use princomp (not prcomp!) and explicitly pass the covariance matrix m like this:
princomp(covmat = m)
Passing the covariance matrix like:
princomp(m)
does not yield the correct result.