Replace finite values with '1' in a 3-dimensional xarray

I am working on lake ice thickness in the northern hemisphere. My final data set is an xarray with dimensions [365, 360, 720] - (days, lat, lon) and a data variable 'icethickness'. This data variable has 3 kinds of values: a finite value for ice thickness, zero for water, and NaN for oceans.
I want to convert all the nonzero finite values of this xarray to 1 and keep the zeros and NaNs as they are.

You can use the xr.where function. The condition is False for both zeros and NaNs (any comparison with NaN is False), so both fall through unchanged, assuming the ice thickness values are strictly positive:
xr.where(data > 0, 1, data)
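For example, a minimal sketch with a toy 2x2 slice (the array here is illustrative, standing in for the (365, 360, 720) 'icethickness' variable):
import numpy as np
import xarray as xr

# Toy slice: positive = ice thickness, 0.0 = water, NaN = ocean.
data = xr.DataArray([[1.7, 0.0],
                     [np.nan, 0.3]], dims=("lat", "lon"))

result = xr.where(data > 0, 1, data)
# result is [[1, 0], [nan, 1]]: NaN > 0 evaluates to False, so NaNs
# take the `data` branch and are preserved, as are the zeros.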

How to structure observation_space for openai/gym

I am trying to apply RL to a simple game. Rather than processing the game's image, I decided to extract the values that make up the game's state.
Extraction was easy, but the main problem is how to structure observation_space.
My goal is to create my own python gym environment and do RL
(these values and counts change per game, but remain static once a game has started)
type1: between 100 and 200 four-element arrays [x1, x2, y1, y2]
type2: between 0 and 10 three-element arrays [x, y1, y2]
type3: between 0 and 10 two-element arrays [x, y]
(these values always exist but change dynamically during the game)
between 0 and 50 four-element arrays [x1, x2, y1, y2]
10 stat values
1 reward value
All values are int16 (-32,768 to +32,767), except the reward value, which is unsigned int16 (0 to 65,535).
int16 is the range the game is defined to hold, but in a real game the values do not exceed about -10,000 to 10,000, and the maximum reward is 5,000.
In this case, how am I supposed to structure observation_space?
Will it be effective if I just set it to the maximum possible size and fill it with 0s for non-existing values?
Or are there any best practices or examples that I can reference?
Thank you!
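One way to realize the padding idea the question describes is a Dict of fixed-size Box spaces, each sized to its maximum possible count, with unused rows zero-filled. This is only a sketch under that assumption, using the standard gym.spaces API; the block names and maximum counts are taken from the question, and the reward is not part of the observation space since it is returned separately by step():
import numpy as np
from gym import spaces

observation_space = spaces.Dict({
    "type1":   spaces.Box(low=-32768, high=32767, shape=(200, 4), dtype=np.int16),
    "type2":   spaces.Box(low=-32768, high=32767, shape=(10, 3),  dtype=np.int16),
    "type3":   spaces.Box(low=-32768, high=32767, shape=(10, 2),  dtype=np.int16),
    "dynamic": spaces.Box(low=-32768, high=32767, shape=(50, 4),  dtype=np.int16),
    "stats":   spaces.Box(low=-32768, high=32767, shape=(10,),    dtype=np.int16),
})
Since 0 can also be a legitimate coordinate, a common refinement is to add a per-block count (or a boolean mask) to the Dict so the agent can distinguish real rows from padding.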

Multi-label problem with intermediate labels

I am trying to create a model for the following problem:
id | input (diagnoses) | elapsed_days | output (medication)
1  | [2,3,4]           | 0            | [3,4]
1  | [4,5,6]           | 7            | [1]
1  | [2,3]             | 56           | [6,3]
2  | [6,5,9,10]        | 0            | [5,3,1]
Rather than a single label for the different codes over time, there are labels at each time period.
I am thinking that my architecture would be [input] -> [embedding for diagnoses] -> [append normalized elapsed days to embeddings]
-> [LSTM] -> [FFNs] -> [labels over time]
I am familiar with how to set this up if there were a single label per id. Given that there are labels for each row (i.e. multiple per id), should I be passing the hidden states of the LSTM through the FFN and then assigning the labels? I would really appreciate it if somebody could point me to a reference/blog/github/anything for this kind of problem, or suggest an alternative approach here.
Assuming that [6,3] is equal to [3, 6]:
You can use a Sigmoid activation with the binary cross-entropy loss function (nn.BCELoss class) instead of softmax cross-entropy (nn.CrossEntropyLoss class).
But then the ground truth can no longer be integer class indices as when using nn.CrossEntropyLoss; you need to make it a multi-hot encoding instead. For example, if the desired output is [6, 3] and the output layer has 10 nodes, y_true has to be [0, 0, 0, 1, 0, 0, 1, 0, 0, 0].
Depending on how you implement your data generator, this is one way to do it:
import torch

output = [3, 6]                # indices of the active labels
out_tensor = torch.zeros(10)   # one slot per possible label
out_tensor[output] = 1         # multi-hot target: 1 at positions 3 and 6
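Building on that, a short sketch of the loss computation with such multi-hot targets (the shapes are hypothetical; nn.BCEWithLogitsLoss is used here because it folds the Sigmoid into the loss, which is numerically safer than a separate nn.Sigmoid plus nn.BCELoss):
import torch
import torch.nn as nn

logits = torch.randn(2, 10)   # raw outputs for 2 rows, 10 medication codes
y_true = torch.zeros(2, 10)
y_true[0, [3, 4]] = 1         # row 1: medications [3, 4]
y_true[1, [1]] = 1            # row 2: medication [1]
loss = nn.BCEWithLogitsLoss()(logits, y_true)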
But if [6,3] is not equal to [3, 6], then more information about this is needed.
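As for the architecture part of the question: a common pattern for this kind of sequence labeling is to keep the LSTM output at every time step and pass each one through the same FFN, so that every row gets its own multi-label prediction. A minimal sketch under that reading, with all sizes hypothetical and the diagnosis embedding assumed to be already pooled per visit:
import torch
import torch.nn as nn

class PerStepTagger(nn.Module):
    def __init__(self, in_dim=17, hidden=32, n_meds=10):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden, batch_first=True)
        self.ffn = nn.Linear(hidden, n_meds)   # shared across time steps

    def forward(self, x):        # x: (batch, seq_len, in_dim)
        h, _ = self.lstm(x)      # h: (batch, seq_len, hidden), one state per row
        return self.ffn(h)       # logits: (batch, seq_len, n_meds)

# x would be the pooled diagnosis embedding (16 dims here) with the
# normalized elapsed days appended (16 + 1 = 17).
model = PerStepTagger()
logits = model(torch.randn(2, 3, 17))               # 2 ids, 3 visits each
targets = torch.randint(0, 2, (2, 3, 10)).float()   # multi-hot labels per visit
loss = nn.BCEWithLogitsLoss()(logits, targets)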

Is there some "scale invariant" substitute for the softmax function?

It is very common to use the softmax function for converting an array of values into an array of probabilities. In general, the function amplifies the probability of the greater values in the array.
However, this function is not scale invariant. Let us consider an example:
If we take an input of [1, 2, 3, 4, 1, 2, 3], the softmax of that is [0.024, 0.064, 0.175, 0.475, 0.024, 0.064, 0.175]. The output has most of its weight where the '4' was in the original input. That is, softmax highlights the largest values and suppresses values which are significantly below the maximum value. However, if the input were [0.1, 0.2, 0.3, 0.4, 0.1, 0.2, 0.3] (which sums to 1.6), the softmax would be [0.125, 0.138, 0.153, 0.169, 0.125, 0.138, 0.153]. This shows that for values between 0 and 1 softmax, in fact, de-emphasizes the maximum value (note that 0.169 is not only less than 0.475, it is also less than the initial proportion of 0.4/1.6 = 0.25).
I need a function that amplifies differences between values in an array, emphasizing the greatest values, and that is not so affected by the scale of the numbers in the array.
Can you suggest some function with these properties?
As Robert suggested in the comment, you can use temperature. Here is a toy realization in Python using numpy:
import numpy as np

def softmax(preds):
    exp_preds = np.exp(preds)
    sum_preds = np.sum(exp_preds)
    return exp_preds / sum_preds

def softmax_with_temperature(preds, temperature=0.5):
    # log-then-exp is equivalent to preds ** (1 / temperature), i.e. a
    # power transform; a common scale factor cancels out on normalization,
    # which is what makes this variant scale invariant (preds must be > 0).
    preds = np.log(preds) / temperature
    preds = np.exp(preds)
    sum_preds = np.sum(preds)
    return preds / sum_preds

def check_softmax_scalability():
    base_preds = [1, 2, 3, 4, 1, 2, 3]
    base_preds = np.asarray(base_preds).astype("float64")
    for i in range(1, 3):
        print('logits: ', base_preds * i,
              '\nsoftmax: ', softmax(base_preds * i),
              '\nwith temperature: ', softmax_with_temperature(base_preds * i))
Calling check_softmax_scalability() would print:
logits: [1. 2. 3. 4. 1. 2. 3.]
softmax: [0.02364054 0.06426166 0.1746813 0.474833 0.02364054 0.06426166
0.1746813 ]
with temperature: [0.02272727 0.09090909 0.20454545 0.36363636 0.02272727 0.09090909
0.20454545]
logits: [2. 4. 6. 8. 2. 4. 6.]
softmax: [0.00188892 0.01395733 0.10313151 0.76204449 0.00188892 0.01395733
0.10313151]
with temperature: [0.02272727 0.09090909 0.20454545 0.36363636 0.02272727 0.09090909
0.20454545]
But the scale invariance comes with a cost: as you increase temperature, the output values will come closer to each other. Increase it too much, and you will have an output that looks like a uniform distribution. In your case, you should pick a low value for temperature to emphasize the maximum value.
You can read more about how temperature works here.
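To illustrate the trade-off described above: since the log-then-exp is a power transform of the (positive) inputs, raising the temperature flattens the output toward uniform. A small self-contained demo (the specific temperatures are arbitrary):
import numpy as np

preds = np.asarray([1, 2, 3, 4, 1, 2, 3], dtype="float64")
for t in (0.5, 1.0, 5.0):
    p = preds ** (1.0 / t)   # same as np.exp(np.log(preds) / t)
    print(t, p / p.sum())
# At t = 5.0 every entry is close to 1/7 ~ 0.14 (nearly uniform), while
# t = 0.5 concentrates the mass on the maximum.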

Approximate function following interpolation

How do you approximate a function for interpolated points? I use the natural cubic spline to interpolate points as follows for n = 500 points:
t = [0; 3; 6; 9];
z = [0; 6; 6; 0];
plot(t, z, 'ro')
ti = linspace(0, 9, 500);
zn = natcubicspline(t, z, ti);  % user-supplied natural cubic spline routine
yn = line(ti, zn);
Is there any way to approximate a function for these n-many points (for large n)? Or are there ways to treat interpolated points as if they were a function, i.e. to find the gradient of the zn vector? Because zn is a vector of constants, this wouldn't necessarily be helpful.
Update: In particular, my data appears to form a 2nd degree polynomial, so I went ahead and used the following Matlab function to fit my data:
p = polyfit(transpose(ti),zn,2)
This yields coefficient estimates for a 2nd-degree polynomial. It does fit the data, but with high error values, and I have to multiply this coefficient vector by a vector [1 z z^2] to get the correct polynomial. Is there any way to streamline this?
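Two notes that may streamline this: polyfit returns coefficients in descending powers, so the matching vector is [z^2 z 1] rather than [1 z z^2], and both Matlab and NumPy provide polyval to do that multiplication for you; gradient gives a numerical derivative of the interpolated points. A sketch in Python/NumPy for illustration (the curve below is a hypothetical stand-in for zn):
import numpy as np

ti = np.linspace(0, 9, 500)
zn = -(ti - 4.5)**2 + 20.25   # stand-in for the interpolated spline values

p = np.polyfit(ti, zn, 2)     # coefficients in descending powers: [a, b, c]
fitted = np.polyval(p, ti)    # evaluates a*ti**2 + b*ti + c directly
slope = np.gradient(zn, ti)   # numerical gradient of the interpolated points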

How to determine the width of peaks and compute an FFT for every peak (and plot each in a separate graph)

I have acceleration data for the X-axis and a time vector for it. I determined the peaks above a threshold, and now I need to find the FFT for every peak.
As a result I have this:
Peak Value 1 = 458, index 1988
Peak Value 2 = 456, index 1990
Peak Value 3 = 450, index 12081
....
Peak Value 9 = 432, index 12151
To find these peaks I used the peakfinder script.
The command [peakLoc, peakMag] = peakfinder(x0,...) gives me location and magnitude of peaks.
Also I have the Time (from time data vector) for each peak.
So I suppose that I should take every peak, find its width (or some data points around the peak), and compute the FFT. Am I right? Could you help me with that?
I'm working in Octave and I'm new here :)
Code:
addpath("C:\\..path..");   % folder containing peakfinder.m (load cannot source an .m file)
d = dlmread("C:\\..path..\\acc2.csv", ";");
T = d(:,1);    % time vector
Ax = d(:,2);   % X-axis acceleration
[peakInd, peakVal] = peakfinder(Ax, 10, 430, 1);
peakTime = T(peakInd);
[sortVal, sortInd] = sort(peakVal, 'descend');
originInd = peakInd(sortInd);
for k = 1 : length(sortVal)
  fprintf(1, 'Peak #%d = %d, index %d\n', k, sortVal(k), originInd(k));
end
plot(T, Ax, 'b-', T(peakInd), Ax(peakInd), 'rv');
and here you can download the data http://www.filedropper.com/acc2
FFT
d = dlmread("C:\\..path..\\acc2.csv", ";");
T = d(:,1);
Ax = d(:,2);
% sampling frequency
Fs_a = 2000;
% length of the signal
Length_Ax = numel(Ax);
% number of lines of the Fourier spectrum
fft_L = Fs_a*2;
% an array of time samples
T_Ax = (0:Length_Ax-1)/Fs_a;
fft_Ax = abs(fft(Ax, fft_L));
fft_Ax = 2*fft_Ax./fft_L;
F = 0:Fs_a/fft_L:Fs_a/2-1/fft_L;
subplot(3,1,1);
plot(T,Ax);
title('Ax axis');
xlabel('time (s)');
ylabel('amplitude'); grid on;
subplot(3,1,2);
plot(F,fft_Ax(1:length(F)));
title('spectrum max Ax axis');
xlabel('frequency (Hz)');
ylabel('amplitude'); grid on;
It looks like you have two clusters of peaks, so I would plot the data over three plots: one of the whole timeseries, one zoomed in on the first cluster, and the last one zoomed in on the second cluster (note I have divided all your time values by 1e6 otherwise the tick labels get ugly):
figure
subplot(3,1,1)
plot(T/1e6,Ax,'b-',peakTime/1e6,peakVal,'rv');
subplot(3,1,2)
plot(T/1e6,Ax,'b-',peakTime(1:4)/1e6,peakVal(1:4),'rv');
axis([0.99*peakTime(1)/1e6 1.01*peakTime(4)/1e6 0.9*peakVal(1) 1.1*peakVal(4)])
subplot(3,1,3)
plot(T/1e6,Ax,'b-',peakTime(5:end)/1e6,peakVal(5:end),'rv');
axis([0.995*peakTime(5)/1e6 1.005*peakTime(end)/1e6 0.9*peakVal(5) 1.1*peakVal(end)])
I have set the axes around the extreme time and acceleration values, using some coefficients to have some "padding" around (the values of these coefficients were obtained through trial and error). This gives me the following plot; hopefully this is the sort of thing you are after. You can add x and y labels if you wish.
EDIT
Here's how I would do the FFT:
Fs = 2000;
L = length(Ax);
NFFT = 2^nextpow2(L); % Next power of 2 from length of Ax
Ax_FFT = fft(Ax,NFFT)/L;
f = Fs/2*linspace(0,1,NFFT/2+1);
% Plot single-sided amplitude spectrum.
figure
semilogx(f,2*abs(Ax_FFT(1:NFFT/2+1))) % using semilogx as huge DC component
title('Single-Sided Amplitude Spectrum of Ax')
xlabel('Frequency (Hz)')
ylabel('|Ax(f)|')
ylim([0 300])
giving the single-sided amplitude spectrum plot as the result.
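Finally, on the per-peak part of the question: one common approach (not shown above, and only a sketch) is to cut a short window around each detected peak, taper it, and FFT each segment separately. Written in Python/NumPy here for compactness; the same steps port directly to Octave, and the window half-length and peak indices are illustrative choices:
import numpy as np

Fs = 2000                     # sampling frequency, as in the question
Ax = np.random.randn(20000)   # stand-in for the acceleration signal
peak_idx = [1988, 12081]      # e.g. one representative index per cluster
half = 256                    # half window length in samples (a tuning choice)

for i in peak_idx:
    seg = Ax[max(i - half, 0):i + half]              # samples around the peak
    seg = seg * np.hanning(len(seg))                 # taper to reduce spectral leakage
    spec = 2 * np.abs(np.fft.rfft(seg)) / len(seg)   # one-sided amplitude spectrum
    freq = np.fft.rfftfreq(len(seg), d=1/Fs)         # frequency axis in Hz
    # plot(freq, spec) in its own subplot for this peak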