How to plot a transfer function from a Cauer network - Octave

The picture below shows a Cauer network, which is a continued fraction network.
I have built the 3rd order transfer function in Octave like this:
function uebertragung = G(R1, Tau1, R2, Tau2, R3, Tau3)
  s = tf("s");
  C1 = Tau1/R1;
  C2 = Tau2/R2;
  C3 = Tau3/R3;
  # --- 3rd-order transfer function --- #
  uebertragung = 1/((s*R1*C1)^3 + 5*(s*R2*C2)^2 + 6*s*R3*C3 + 1);
endfunction
R1, R2, R3, C1, C2, C3 are the 6 parameters my characteristic curve depends on.
I need to put these parameters into the transfer function, get a result, and plot the characteristic curve from the data.
The characteristic curve shows thermal impedance vs. time, like the two curves from an IGBT data sheet.
My problem is that I don't know how to handle transfer functions properly. I need data to plot the characteristic curve, but I don't know how to generate it from the transfer function.
Any tips are welcome. Do I have to do a Laplace transformation?
If you need further information, ask me and I'll try to provide it.

From the data sheet, the equation they are using for their transient thermal impedance graph is the Foster chain step function response:
Z(t) = sum (R_i * (1-exp(-t/tau_i))) = sum (R_i * (1-exp(-t/(R_i*C_i))))
I verified that the stage R's and C's in the table next to the graph reproduce the plot you shared when plugged into that function.
The method for producing a step function response of an s-domain (Laplace domain) impedance function (Z) is to take the inverse Laplace transform of the product of the transfer function and 1/s (the Laplace domain form of a constant value step function). With the Foster model impedance function:
Z(s) = sum (R_i/(1+R_i*C_i*s))
that will produce the equation above.
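As a quick sanity check, that closed-form step response can be evaluated directly from the table values. Here is a minimal sketch (in Python/NumPy purely for illustration, using the same two-stage R and tau values as the Octave example further down):
import numpy as np
import matplotlib.pyplot as plt

# Foster stage values (same ones used in the Octave example below)
R = [8.5e-3, 2e-3]          # thermal resistances [K/W]
tau = [151e-3, 5.84e-3]     # time constants [s]

t = np.logspace(-3, 1, 500)
# Z(t) = sum_i R_i * (1 - exp(-t / tau_i))
Z = sum(Ri * (1 - np.exp(-t / taui)) for Ri, taui in zip(R, tau))

plt.loglog(t, Z)
plt.xlabel('t [s]')
plt.ylabel('Z_th(j-c) [K/W]')
plt.grid(True)
plt.show()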
Using the transfer function in Octave, you can use the Control package function step to calculate the transient response for you rather than performing the inverse Laplace transform yourself. So once you have Z(s), step(Z) will produce or plot the transient response. See help step for details. You can then adjust the plot (switch to log scale, set axes limits, etc) to look like one of the spec sheet plots.
Now, you want to do the same thing with a Cauer network model. It is important to realize that the R's and C's will not be the same for the two models. The Foster network is a decoupled model in which each RC stage contributes one isolated pole, but its R's and C's are really convolutions of the physical thermal resistances and capacitances in the real package. In contrast, the Cauer model has R's and C's that match the physical package layers, and the poles of its s-domain transfer function are complex products of the multiple layers.
So, however you are obtaining your R's and C's for the Cauer model, you can't just use the same values they have in their Foster model parameter table. They can be calculated from physical layer and material properties, however, assuming you have that information. Once you do have useful values, the procedure for going from Z(s) to the transient impedance function is the same for either network, and they should produce the same result.
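For the Cauer ladder, Z(s) can be built directly from the continued-fraction structure. Below is a minimal sketch (in Python/SymPy, purely for illustration) assuming the usual thermal Cauer topology of a capacitance to ambient at each node followed by a series resistance; the numerical values are placeholders, not the Foster table values:
import sympy as sp

s = sp.symbols('s')

def cauer_impedance(Rs, Cs):
    # Build the ladder from the case/ambient end back to the junction:
    # at each node, 1/(s*C) in parallel with (R + rest of the ladder).
    Z = sp.Integer(0)
    for R, C in zip(reversed(Rs), reversed(Cs)):
        Z = 1 / (s * C + 1 / (R + Z))
    return sp.simplify(Z)

# placeholder layer values -- the Cauer R's and C's must come from the
# physical layers, not from the Foster parameter table
Rs = [1e-3, 2e-3, 5e-3]
Cs = [0.5, 1.0, 2.0]
Z_cauer = cauer_impedance(Rs, Cs)
print(Z_cauer)
The resulting rational function can then be entered as Z(s) in the Octave procedure below in place of the Foster expression.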
As an example, the following procedure should work in both Octave and Matlab to plot the Thermal impedance curve from the spec sheet data using the Foster Z(s) model as a starting point. For the Cauer model, just use a different Z(s) function.
(Note that Octave has some issues in the step function that insert t = 0 entries into the time series output even when they aren't specified, which can cause errors when trying to plot on a log scale, so this example puts in a t = 0 point and then ignores it when plotting. This is mentioned so that line doesn't seem confusing.)
s = tf('s')
R1 = 8.5e-3; R2 = 2e-3;          # stage thermal resistances [K/W]
tau1 = 151e-3; tau2 = 5.84e-3;   # stage time constants [s]
C1 = tau1/R1; C2 = tau2/R2;
input_imped = R1/(1+R1*C1*s)+R2/(1+R2*C2*s)   # Foster Z(s)
times = linspace(0, 10, 100000);              # includes t = 0 (see note above)
[Zvals,output_times] = step(input_imped, times);
loglog(output_times(2:end), Zvals(2:end));    # skip the t = 0 point
xlim([.001 10]); ylim([0.0001, .1]);
grid;
xlabel('t [s]');
ylabel('Z_t_h_(_j_-_c_) [K/W] IGBT');
text(1,0.013 ,'Z_t_h_(_j_-_c_) IGBT');

Related

Can HuggingFace `Trainer` be customised for curriculum learning?

I have been looking for certain features in the HuggingFace transformer Trainer object (in particular Seq2SeqTrainer) and would like to know whether they exist and if so, how to implement them, or whether I would have to write my own training loop to enable them.
I am looking to apply Curriculum Learning to my training strategy, as well as to evaluate the model at regular intervals, and therefore would like to enable the following:
choose in which order the model sees training samples at each epoch (it seems that the data passed to the train_dataset argument is automatically shuffled by some internal code, and even if I managed to stop that, I would still need to pass differently ordered data at different epochs, as I may want to start training the model on easy samples for a few epochs and then pass a random shuffle of all data for later epochs)
run custom evaluation at integer multiples of a fixed number of steps. The standard compute_metrics argument of the Trainer takes a function to which the predictions and labels are passed*, and the user can decide how to generate the metrics from these. However, I'd like a finer level of control, for example changing the maximum sequence length for the tokenizer when doing evaluation as opposed to training, which would require including some explicit evaluation code inside compute_metrics that needs to access the trained model and the data from disk.
Can these two points be achieved by using the Trainer on a multi-GPU machine, or would I have to write my own training loop?
*The function often looks something like this, and I'm not sure it would work with the Trainer if it doesn't have this signature:
def compute_metrics(eval_pred):
    predictions, labels = eval_pred
    ...
You can pass a custom compute_metrics function to the Trainer, and control how often evaluation runs through the training arguments (eval_steps).
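As a minimal sketch of one way to get both behaviours with the stock Trainer (the Trainer internals vary between transformers versions, so treat this as a starting point rather than a drop-in solution; model, train_dataset and eval_dataset are assumed to be defined elsewhere):
from torch.utils.data import DataLoader, SequentialSampler
from transformers import Seq2SeqTrainer, Seq2SeqTrainingArguments

def compute_metrics(eval_pred):
    predictions, labels = eval_pred
    # ... decode predictions/labels and compute whatever metrics you need ...
    return {"my_metric": 0.0}  # placeholder value

class CurriculumTrainer(Seq2SeqTrainer):
    def get_train_dataloader(self):
        # Serve the training data in the order you prepared it instead of
        # the default random shuffle.
        return DataLoader(
            self.train_dataset,
            batch_size=self.args.per_device_train_batch_size,
            sampler=SequentialSampler(self.train_dataset),  # no shuffling
            collate_fn=self.data_collator,
        )

args = Seq2SeqTrainingArguments(
    output_dir="out",
    evaluation_strategy="steps",  # run evaluation every eval_steps steps
    eval_steps=500,
)

trainer = CurriculumTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,   # assumed: dataset already in curriculum order
    eval_dataset=eval_dataset,
    compute_metrics=compute_metrics,
)
For changing the curriculum between epochs (easy samples first, then a full shuffle), one simple option is to run successive trainer.train() phases on differently ordered datasets; anything finer-grained than that, or evaluation logic that needs the model itself, will likely push you towards a TrainerCallback or your own training loop.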

PointNet can't predict segmentation on custom point cloud

I'm currently working on my bachelor project and I'm using the PointNet deep neural network.
My project group and I have created a dataset of point clouds (an unordered list of x 3D coordinates each) and segmentation files, but we can't train PointNet to predict segmentation with the dataset.
Each segmentation file is a list containing the same number of rows as there are points in the corresponding point cloud, and each row is either a 1 or a 2, depending on whether the corresponding point belongs to segment 1 or 2.
When PointNet predicts, it outputs a list of x elements, where each element is the segment that PointNet predicts the corresponding point belongs to.
When we run the benchmark dataset from the original PointNet implementation, the system runs and can predict segmentation, so we know that the error is in the dataset somewhere, even though we have tried our best to have our dataset look like the original benchmark dataset.
The implemented PointNet uses PyTorch conv2d, maxpool2d and linear layers. For calculating the loss, both the nn.functional.nll_loss and the nn.NLLLoss functions have been used. When using nn.NLLLoss, the weight parameter was set to a tensor of [1, 100] to combat potential imbalance of the data.
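For reference, a minimal sketch (hypothetical shapes and values) of how nn.NLLLoss with class weights expects per-point predictions and labels to line up; note that NLLLoss expects 0-based class indices, so labels stored as 1/2 have to be shifted to 0/1:
import torch
import torch.nn as nn

num_points, num_classes = 2048, 2

# log-probabilities as a segmentation head would output them, one row per point
log_probs = torch.randn(num_points, num_classes).log_softmax(dim=1)

# labels stored as 1/2 shifted to the 0-based indices NLLLoss expects
labels = torch.randint(1, 3, (num_points,)) - 1

# per-class weights to counter class imbalance (as with the [1, 100] tensor above)
criterion = nn.NLLLoss(weight=torch.tensor([1.0, 100.0]))
loss = criterion(log_probs, labels)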
These are the things we have tried:
We have tried downsampling the point clouds, i.e. removing points using voxel downsampling
We have tried downscaling and normalizing all values so they are between 0 and 1, using the formula (data - np.min(data)) / (np.max(data) - np.min(data)) (an alternative unit-sphere normalization is sketched after this list)
We have tried running a Euclidean clustering function on the data, to have each scanned object by itself
We have tried replicating another dataset, created from the same raw data, which we know has worked before
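For what it's worth, a common alternative for PointNet inputs is to center each cloud and scale it into the unit sphere, as in the original PointNet pipeline. A minimal sketch, assuming an (N, 3) NumPy array per cloud:
import numpy as np

def normalize_point_cloud(points):
    # Center the cloud, then scale it so every point lies inside the unit sphere.
    centered = points - points.mean(axis=0)
    scale = np.max(np.linalg.norm(centered, axis=1))
    return centered / scale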
In the attached link, images of the datafiles with a description can be found.
Cheers everyone

Riding the wave: numerical schemes for hyperbolic PDEs, Lorena Barba lessons, assistance needed

I am a beginner Python user trying to get a feel for computer science. I've been learning by studying concepts/subjects I'm already familiar with, such as Computational Fluid Mechanics & Finite Element Analysis. I got my degree in mechanical engineering, so not much CS background.
I'm studying a series by Lorena Barba on Jupyter notebook viewer, Practical Numerical Methods, and I'm looking for some help, hopefully from someone familiar with CFD & FEA in general.
If you click on the link below and go to the following output line, you'll find what I have below. I'm really confused by this block of code, which operates within the function that is defined.
Anyway, if there is anyone out there with any suggestions on how to tackle learning Python, HELP.
In[9]
rho_hist = [rho0.copy()]
rho = rho0.copy()  # I'm confused by the role of this variable here
for n in range(nt):
    # Compute the flux.
    F = flux(rho, *args)
    # Advance in time using Lax-Friedrichs scheme.
    rho[1:-1] = (0.5 * (rho[:-2] + rho[2:]) -
                 dt / (2.0 * dx) * (F[2:] - F[:-2]))
    # Set the value at the first location.
    rho[0] = bc_values[0]
    # Set the value at the last location.
    rho[-1] = bc_values[1]
    # Record the time-step solution.
    rho_hist.append(rho.copy())
return rho_hist
http://nbviewer.jupyter.org/github/numerical-mooc/numerical-mooc/blob/master/lessons/03_wave/03_02_convectionSchemes.ipynb
The intent of the first two lines is to preserve rho0 and provide copies of it for the history (copy so that later changes in rho0 do not reflect back here) and as the initial value for the "working" variable rho that is used and modified during the computation.
The background is that Python list and array variables are always references to the object in question. Assigning the variable to another name copies the reference (the address of the object) but not the object itself, so both variables refer to the same memory area. Thus, not using .copy() will change rho0.
a = [1, 2, 3]
b = a
b[2] = 5
print(a)
#>>> [1, 2, 5]
Composite objects that themselves contain structured data objects will need a deepcopy to copy the data on all levels.
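A minimal sketch of the difference between a shallow copy and a deep copy:
import copy

nested = [[1, 2], [3, 4]]
shallow = list(nested)           # copies only the outer list
deep = copy.deepcopy(nested)     # copies all levels

nested[0][0] = 99
print(shallow[0][0])   # 99 -- the inner lists are still shared
print(deep[0][0])      # 1  -- fully independent copy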
See also: "Numpy array values changed without being asked?" and "How to pass a list as value and not as reference?"

Backpropagation on Two Layered Networks

I have been following Stanford's cs231n lectures and trying to complete the assignments on my own, sharing my solutions both on GitHub and my blog. But I'm having a hard time understanding how to model backpropagation. I mean, I can code modular forward and backward passes, but what bothers me is this: if I have the model below: Two Layered Neural Network
Let's assume that our loss function here is a softmax loss function. In my modular softmax_loss() function I am calculating the loss and the gradient with respect to the scores (dSoft = dL/dY). After that, when I'm following backwards, let's say for b2, db2 would be equal to dSoft*1, or dW2 would be equal to dSoft*dX2 (outputs of the ReLU gate). What's the chain rule here? Why isn't dSoft equal to 1? Because dL/dL would be 1?
The softmax loss function outputs a number given an input x.
What dSoft means is that you're computing the derivative of the function softmax(x) with respect to the input x. Then to calculate the derivative with respect to W of the last layer you use the chain rule, i.e. dL/dW = dsoftmax/dx * dx/dW. Note that x = W*x_prev + b, where x_prev is the input to the last node. Therefore dx/dW is just x_prev and dx/db is just 1, which means that dL/dW, or simply dW, is dsoftmax/dx * x_prev and dL/db, or simply db, is dsoftmax/dx * 1. Note that here dsoftmax/dx is the dSoft we defined earlier.
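As a minimal worked sketch of that chain rule for the last layer (NumPy, with hypothetical shapes), where dscores plays the role of dSoft:
import numpy as np

N, H, C = 4, 5, 3                        # batch size, hidden size, classes
x_prev = np.random.rand(N, H)            # ReLU outputs feeding the last layer
W2 = np.random.randn(H, C) * 0.01
b2 = np.zeros(C)
y = np.random.randint(0, C, size=N)      # hypothetical labels

scores = x_prev @ W2 + b2                # forward: x = W*x_prev + b

# softmax loss and its gradient w.r.t. the scores (this is dSoft = dL/dx)
shifted = scores - scores.max(axis=1, keepdims=True)
probs = np.exp(shifted) / np.exp(shifted).sum(axis=1, keepdims=True)
loss = -np.log(probs[np.arange(N), y]).mean()
dscores = probs.copy()
dscores[np.arange(N), y] -= 1
dscores /= N

# chain rule: dL/dW2 = x_prev^T * dL/dx,  dL/db2 = sum over the batch of dL/dx
dW2 = x_prev.T @ dscores
db2 = dscores.sum(axis=0)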

Support vector regression based GIS analysis

I'm new here and I really want some help. I have a dataset including geographical information (longitude, latitude...) and I want to predict some variables from this dataset with Support Vector Regression, but I don't know how to perform this task. I have the following inquiries:
Is there specific preprocessing I need to go through?
Does SVR treat a geographic dataset like a normal dataset, or are there specificities in terms of tools and treatment?
Any recommended predictive analytics tools (including SVR) that handle geographical data?
The solution given here is for the situation where you want to extract the independent variable from a raster based on the locations of the dependent variable.
But if you have all your dependent and independent data with their corresponding locations, you can simply use the svm function in R and then pass a new raster or vector dataset to your predict function for prediction. Alternatively, you can take the estimated coefficients into the raster calculator in GIS, multiply them by the corresponding independent variable rasters, and you will get your predicted raster.
You can simply do the following for spatial data in R.
First of all, support vector regression can be used to predict real values, and you can use library("e1071") in R to run this algorithm.
You can import your dataset as a CSV along with lat and long columns.
Transform your data.frame into a SpatialPointsDataFrame:
# required packages (sp for the spatial classes, raster for extract())
library(sp)
library(raster)

# Read data
dat <- read.csv(choose.files())
# Convert the data to SpatialPoints
dat_sp <- SpatialPoints(cbind(dat$x, dat$y))
# Add your geographical reference system
dat_crs <- CRS("+proj=utm +zone=39 +datum=WGS84")
# Create a SpatialPointsDataFrame for dat
dat_spdf <- SpatialPointsDataFrame(coords = dat_sp, data = dat, proj4string = dat_crs)
plot(dat_spdf, col = 'blue', cex = 1, pch = 16, axes = TRUE)
# Extract values at the point locations ('your_raster' stands for your raster layer)
dat_spdf$ref <- extract(your_raster, dat_spdf)
Then you can extract your data from a raster or whatever you have (your independent variable).
And finally, you can fit the model in R with the svm function from e1071, along the lines of:
svm(dependent ~ ., data = your_data)
But you really need to have an intuition about what SVR is and how to evaluate the results.
You can also show your result as a final raster map.
You can use a GIS toolbox package or the raster package for that.
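If you prefer Python over R, here is a minimal equivalent sketch with scikit-learn's SVR, treating longitude/latitude (plus any other predictors) as ordinary features; the column names are hypothetical:
import pandas as pd
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# hypothetical column names; replace with the ones in your CSV
df = pd.read_csv("data.csv")
X = df[["longitude", "latitude"]]     # add any other independent variables
y = df["target"]

# feature scaling matters for SVR, so wrap the model in a pipeline
model = make_pipeline(StandardScaler(), SVR(kernel="rbf"))
model.fit(X, y)
predictions = model.predict(X)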