Calculating Polynomial Regression of Dynamic Dataset in Microsoft Access (similar to LINEST in Excel) - ms-access

I have a dataset in Microsoft Access for Amperage and Lumens (seen below in the first image). The list can have any number of rows; this example happens to have three, but it could be 10 or more. I want to calculate the polynomial regression so I can have another dataset table (second image) where the user inputs their "Lumen Targets" and the "Target Amperage" is populated automatically. I am open to suggestions on a different route for this. The overall form is for a top product with multiple configurations; each configuration has different sizes and amperage/lumen levels. Let me know if I need to clarify anything.

I'm not sure I fully understand what you're asking; however, if you want to calculate a linear regression (i.e. the slope of the line given the known x's and y's), you can call an Excel worksheet function. Make sure you have a reference to the 'Microsoft Excel 14.0 Object Library' set in VBA, and create a function to which you can pass your known values and which returns the slope.
Public Function Slope(x, y)
    ' Access has no WorksheetFunction of its own, so go through an Excel instance; Excel's SLOPE expects (known_y's, known_x's)
    Dim xl As New Excel.Application
    Slope = xl.WorksheetFunction.Slope(y, x)
    xl.Quit
End Function
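If it helps as a sanity check outside Access, the underlying calculation the form needs (fit a polynomial to lumens vs. amperage, then evaluate it at a target lumen value, much like Excel's LINEST/TREND) can be sketched in a few lines of Python; the sample values below are made up purely for illustration:
import numpy as np

lumens   = np.array([1200.0, 2500.0, 4000.0])    # known lumen levels (made-up values)
amperage = np.array([0.35, 0.70, 1.05])          # measured amperage at each level (made-up values)

coeffs = np.polyfit(lumens, amperage, deg=2)     # fit amperage = c2*L^2 + c1*L + c0
target_amperage = np.polyval(coeffs, 3000.0)     # evaluate at a user-entered "Lumen Target"
print(coeffs, target_amperage)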

Related

PointNet can't predict segmentation on custom point cloud

I'm currently working on my bachelor project and I'm using the PointNet deep neural network.
My project group and I have created a dataset of point clouds (each an unordered list of 3D coordinates) and segmentation files, but we can't train PointNet to predict segmentation with this dataset.
Each segmentation file is a list with the same number of rows as there are points in the corresponding point cloud, and each row is either a 1 or a 2, depending on whether the corresponding point belongs to segment 1 or segment 2.
When PointNet predicts, it outputs a list with one element per point, where each element is the segment that PointNet predicts the corresponding point belongs to.
When we run the benchmark dataset from the original PointNet implementation, the system runs and can predict segmentation, so we know the error is somewhere in our dataset, even though we have tried our best to make our dataset look like the original benchmark dataset.
The implemented PointNet uses PyTorch Conv2d, MaxPool2d and Linear layers. For calculating the loss, both nn.functional.nll_loss and nn.NLLLoss have been used. When using nn.NLLLoss, the weight parameter was set to a tensor of [1, 100] to combat potential class imbalance in the data.
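For reference, a minimal sketch (not our actual training code, with made-up shapes) of how the weighted loss is set up; note that NLLLoss expects log-probabilities and 0-based class indices, so the 1/2 segment labels are shifted down by one before the loss:
import torch
import torch.nn as nn
import torch.nn.functional as F

num_points, num_classes = 2048, 2
log_probs = F.log_softmax(torch.randn(num_points, num_classes), dim=1)  # per-point log-probabilities
labels = torch.randint(1, 3, (num_points,))                             # segment labels 1 or 2

criterion = nn.NLLLoss(weight=torch.tensor([1.0, 100.0]))               # per-class weights
loss = criterion(log_probs, labels - 1)                                 # shift labels 1/2 -> 0/1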
These are the things we have tried:
We have tried downsampling the point clouds, i.e. removing points using voxel downsampling.
We have tried downscaling and normalizing all values so they are between 0 and 1, using the formula (data - np.min(data)) / (np.max(data) - np.min(data)).
We have tried running a Euclidean clustering function on the data, so that each scanned object stands on its own.
We have tried replicating another dataset, which was created from the same raw data and which we know has worked before.
Images of the data files, with a description, can be found in the attached link.
Cheers everyone

Engineering Equation Solver - Functions

I need to calculate a function in EES.
Function: T(t) = ((T_surface - T_infinity) * e^(-b*t)) + T_infinity
t is time, with limits between 1 and 40 seconds. I need to calculate the value for every second from 1 to 40.
How can I write this function in EES?
If I understand the question correctly, no differential equation needs to be solved, and no integration, summation or the like has to be carried out. Only a calculation with variation of the variable t is required.
There are two possibilities. But first: EES is not case-sensitive, so T and t would be treated as the same variable. Keep one of them and give the other a different name; I prefer 'tau' for time.
Parametric table
Write the equation in the Equations window and add the parameters you have. Do not define 'tau'. For example:
T=((T_surface-T_infinity)*(exp(-b*tau)))+T_infinity
T_surface = 80[C]
T_infinity = 20[C]
b=3
Then open the parametric table and add the variables you want to see; you should include at least T and tau. Expand the table to 40 rows and enter the respective times, then press the green play button (top left in the table).
Duplicate function:
T_surface = 80[C]
$varinfo tau[] units='s'
b=0.1[1/s]
Duplicate i=1,40
tau[i] = i*1[s]
T[i]=((T_surface-T_infinity)*(exp(-b*tau[i] )))+T_infinity
End
T_infinity = 20[C]
$varinfo T[] units='C'
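For a quick cross-check outside EES, the same table can be generated with a few lines of Python using the values above (this is only a sketch to verify the numbers, not part of the EES solution):
import numpy as np

T_surface, T_infinity, b = 80.0, 20.0, 0.1      # [C], [C], [1/s], values from above
tau = np.arange(1, 41)                          # time in seconds, 1..40
T = (T_surface - T_infinity) * np.exp(-b * tau) + T_infinity
for t_s, T_val in zip(tau, T):
    print(f"{t_s:3d} s   {T_val:6.2f} C")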

How to plot a transfer function from a Cauer network

The picture below shows a Cauer network, which is a continued fraction network.
I have built the 3rd-order transfer function in Octave like this:
function uebertragung=G(R1,Tau1,R2,Tau2,R3,Tau3)
  # requires the control package: pkg load control
  s = tf("s");
  C1 = Tau1/R1;
  C2 = Tau2/R2;
  C3 = Tau3/R3;
  # --- 3rd-order transfer function --- #
  uebertragung = 1/((s*R1*C1)^3+5*(s*R2*C2)^2+6*s*R3*C3+1);
endfunction
R1, R2, R3, C1, C2, C3 are the 6 parameters my characteristic curve depends on.
I need to put these parameters into the transfer function, get a result, and plot the characteristic curve from the data.
The characteristic curve shows thermal impedance vs. time, like these two curves from an IGBT data sheet.
My problem is that I don't know how to handle transfer functions properly. I need data to plot the characteristic curve, but I don't know how to generate it from the transfer function.
Any tips are welcome. Do I have to do a Laplace transformation?
If you need further information, ask me and I will try to provide it.
From the data sheet, the equation they are using for their transient thermal impedance graph is the Foster chain step function response:
Z(t) = sum (R_i * (1-exp(-t/tau_i))) = sum (R_i * (1-exp(-t/(R_i*C_i))))
I verified that the stage R's and C's from the table next to the graph reproduce the plot you shared when plugged into that function.
The method for producing a step function response of an s-domain (Laplace domain) impedance function (Z) is to take the inverse Laplace transform of the product of the transfer function and 1/s (the Laplace domain form of a constant value step function). With the Foster model impedance function:
Z(s) = sum (R_i/(1+R_i*C_i*s))
that will produce the equation above.
Using the transfer function in Octave, you can use the Control package function step to calculate the transient response for you rather than performing the inverse Laplace transform yourself. So once you have Z(s), step(Z) will produce or plot the transient response. See help step for details. You can then adjust the plot (switch to log scale, set axes limits, etc) to look like one of the spec sheet plots.
Now, you want to do the same thing with a Cauer network model. It is important to realize that the R's and C's will not be the same for the two models. The Foster network is a decoupled model that has each primary complex pole isolated by layout, but the R's and C's are actually convolutions of the physical thermal resistances and capacitances in the real package. On the contrary, the Cauer model has R's and C's that match the physical package layers, and the poles in the s-domain transfer function will be complex products of the multiple layers.
So, however you are obtaining your R's and C's for the Cauer model, you can't just use the same values they have in their Foster model parameter table. They can be calculated from physical layer and material properties, however, assuming you have that information. Once you do have useful values, the procedure for going from Z(s) to the transient impedance function is the same for either network, and they should produce the same result.
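(As a side note, and only as a rough illustration with made-up layer data: each Cauer stage can be estimated from the layer geometry and material properties as R = d/(k*A) and C = rho*cp*d*A. A Python sketch of that estimate:)
import numpy as np

# Hypothetical, made-up layer stack: thickness d [m], conductivity k [W/(m*K)],
# density rho [kg/m^3], specific heat cp [J/(kg*K)]
layers = [
    # (d,      k,     rho,    cp)
    (200e-6, 148.0, 2330.0, 700.0),   # e.g. silicon die (illustrative numbers)
    (50e-6,   57.0, 7300.0, 230.0),   # e.g. solder layer (illustrative numbers)
]
A = 1e-4   # cross-section area, 1 cm^2 (made up)

for d, k, rho, cp in layers:
    R = d / (k * A)            # thermal resistance of the layer [K/W]
    C = rho * cp * d * A       # thermal capacitance of the layer [J/K]
    print(R, C)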
As an example, the following procedure should work in both Octave and Matlab to plot the Thermal impedance curve from the spec sheet data using the Foster Z(s) model as a starting point. For the Cauer model, just use a different Z(s) function.
(Note that Octave's step function has an issue where it inserts t = 0 entries into the time series output even when they aren't requested, which can cause errors when plotting on a log scale, so this example puts in a t = 0 node and then ignores it. I wanted to explain that so the line doesn't seem confusing.)
% in Octave, load the control package first: pkg load control
s = tf('s')
R1 = 8.5e-3; R2 = 2e-3;
tau1 = 151e-3; tau2 = 5.84e-3;
C1 = tau1/R1; C2 = tau2/R2;
input_imped = R1/(1+R1*C1*s)+R2/(1+R2*C2*s)
times = linspace(0, 10, 100000);
[Zvals,output_times] = step(input_imped, times);
loglog(output_times(2:end), Zvals(2:end));   % skip the t = 0 node (see note above)
xlim([.001 10]); ylim([0.0001, .1]);
grid;
xlabel('t [s]');
ylabel('Z_t_h_(_j_-_c_) [K/W] IGBT');
text(1,0.013 ,'Z_t_h_(_j_-_c_) IGBT');
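As a cross-check (not part of the Octave script above), the closed-form Foster step response from the first equation can also be evaluated directly, for example in Python, and it should overlay the step() result:
import numpy as np
import matplotlib.pyplot as plt

R   = np.array([8.5e-3, 2e-3])      # stage resistances [K/W], same values as above
tau = np.array([151e-3, 5.84e-3])   # stage time constants [s]

t = np.logspace(-3, 1, 1000)        # 1 ms .. 10 s
Z = np.sum(R[:, None] * (1 - np.exp(-t[None, :] / tau[:, None])), axis=0)

plt.loglog(t, Z)
plt.xlim(1e-3, 10); plt.ylim(1e-4, 0.1)
plt.xlabel('t [s]'); plt.ylabel('Z_th(j-c) [K/W]')
plt.grid(True); plt.show()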

How to get the predicted values in training data set for Least Squares Support Vector Regression

I would like to make a prediction using Least Squares Support Vector Machines for regression, as proposed by Suykens et al. I am using LS-SVMlab; you can find the MATLAB toolbox here. Let's say I have an independent variable X and a dependent variable Y, both of which are simulated. I am following the instructions in the tutorial.
>>X = linspace(-1,1,50)';
>>Y = (15*(X.^2-1).^2.*X.^4).*exp(-X)+normrnd(0,0.1,length(X),1);
>>type = 'function estimation';
>>[gam,sig2] = tunelssvm({X,Y,type,[],[],'RBF_kernel'},'simplex',...
      'leaveoneoutlssvm',{'mse'});
>>[alpha,b] = trainlssvm({X,Y,type,gam,sig2,'RBF_kernel'});
>>plotlssvm({X,Y,type,gam,sig2,'RBF_kernel'},{alpha,b});
The code above finds the best parameters using the simplex method and leave-one-out cross-validation, then trains the model and gives me the alphas (support values for all the data points in the training set) and the b coefficient. However, it does not give me the predictions for the variable Y; it only draws the plot. In some articles, I have seen plots like the one below.
As I said before, the LS-SVM toolbox does not give me the predicted values of Y; it only draws the plot, with no values in the workspace. How can I get these values and draw a graph of the predicted values together with the actual values?
There is one solution I can think of: using the X values in the training set, I re-run the trained model and get the predictions of Y with the simlssvm command, but that does not seem quite right to me. Is there another solution you can offer? Thanks in advance.
I am afraid you have answered your own question. The only way to obtain the prediction for the training points in LS-SVMLab is by simulating the training points after training your model.
[yp,alpha,b,gam,sig2,model] = lssvm(x,y,'f')
When you use this function, yp is the predicted value.
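For what it's worth, the fitted values at the training points follow directly from the LS-SVM model, yhat(x) = sum_j alpha_j * K(x, x_j) + b, which is essentially what simulating the training points does. A rough numpy sketch of that calculation (assuming the usual LS-SVMlab RBF convention K(x,z) = exp(-||x-z||^2 / sig2); this is an illustration, not toolbox code):
import numpy as np

def lssvm_predict(X_train, alpha, b, sig2, X_new):
    # squared distances between every new point and every training point
    d2 = np.sum((X_new[:, None, :] - X_train[None, :, :])**2, axis=2)
    K = np.exp(-d2 / sig2)          # RBF kernel matrix
    return K @ alpha + b            # yhat = K*alpha + b

# yp = lssvm_predict(X, alpha, b, sig2, X)   # predictions at the training points themselves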

SSRS: Trouble applying custom expression to Vertical Axis Chart Label

Using Visual Studio BI Dev Studio 2008.
I have a chart with a Y axis of numbers ranging from 0 to about 1500 (Values) and an X axis of dates (Category Group). The Y-axis numbers are integers representing minutes.
I want to convert the Y-axis minutes to hh:mm form, and I thought it would be simple to write a custom expression to do so. However, after going to Vertical Axis Properties -> Number -> Custom format, I am finding that the custom format expression will not evaluate most of the expressions I give it.
For example, I have tried
=(Fields!RealRunTimeMin.Value) * 2
=(Fields!RealRunTimeMin.Value) + 1000
But when I go to Preview the report, the y-Axis is in the same range (0 to 1500) rather than displaying 0-3000.
I have also tried
=CInt(Fields!RealRunTimeMin.Value) + 1000
But the chart remains unchanged. The only thing I seem able to do is convert the number to a string.
Any idea what I am doing wrong? Note: I'm not asking for the logic to format to hh:mm; rather, I am asking why all attempts to manipulate numbers in SSRS's axis labels seem to be defeating me.
Thanks in advance,
T
Expressions are not supported for this functionality. I realize the UI suggests they are, but that is a bug in the current product. You will need to perform the calculation at either the query or the dataset level.