Why can't my LSTM autoencoder detect outliers? - deep-learning

I am trying to build an LSTM autoencoder for anomaly detection, but the model does not seem to work on my data.
I train on normal (healthy) data and validate on data that contains an abnormal interval. If the model worked, its reconstruction loss should be high around samples #200000~#500000. Unfortunately, when I feed the validation data into the model, the loss stays low in the abnormal interval.
Here is my training code. I would greatly appreciate any suggestions.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.preprocessing import MinMaxScaler
from keras import optimizers
from keras.models import Sequential
from keras.layers import LSTM, Dense
from keras.callbacks import EarlyStopping

# Fit the scaler on the healthy (normal) data only, then scale both sets
scaler = MinMaxScaler(feature_range=(0, 1))
scaler.fit(healthy_data)
data_scaled = scaler.transform(healthy_data)
data_broken_scaled = scaler.transform(broken_data)

# Reshape into (samples, timesteps, features) windows
timesteps = 32
dim = 1
data = data_scaled.reshape(-1, timesteps, dim)
data_broken = data_broken_scaled.reshape(-1, timesteps, dim)

lr = 0.0001
nadam = optimizers.Nadam(lr=lr)
model = Sequential()
model.add(LSTM(50, input_shape=(timesteps, dim), return_sequences=True))
model.add(Dense(dim))
model.compile(loss='mae', optimizer=nadam, metrics=['mse'])

EStop = EarlyStopping(monitor='val_loss', min_delta=0.001, patience=150,
                      verbose=2, mode='auto', restore_best_weights=True)
history = model.fit(data, data, validation_data=(data, data), epochs=3000,
                    batch_size=72, verbose=2, shuffle=False, callbacks=[EStop]).history

# Reconstruction error (MAE per window) on the abnormal validation data
pred_broken = model.predict(data_broken)
loss_broken = np.mean(np.abs(pred_broken - data_broken), axis=1)

fig, ax = plt.subplots(figsize=(20, 6), dpi=80, facecolor='w', edgecolor='k')
ax.plot(range(len(loss_broken)), loss_broken, '-', color='red', linewidth=1)

I think it may work better to detect the anomaly in the frequency domain: convert both the training and test data to the frequency domain via a Fourier transform, and I also recommend using time windows.
Most models perform better when they have enough representative training data. Your anomaly points should be labeled as 1 (in a pre-processing step), and the anomalies should show up as high-frequency content within your time windows.
In short, I think your training data may not yet be sufficiently representative of the anomalies.
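As a rough illustration of the frequency-domain idea (a minimal sketch, not the asker's pipeline; the window length, hop size, and Hann window are arbitrary choices here):

import numpy as np

def windowed_fft_features(signal, window=256, hop=128):
    """Magnitude spectra of overlapping time windows (assumes a 1-D signal)."""
    feats = []
    for start in range(0, len(signal) - window + 1, hop):
        chunk = signal[start:start + window]
        spectrum = np.abs(np.fft.rfft(chunk * np.hanning(window)))
        feats.append(spectrum)
    return np.array(feats)  # shape: (num_windows, window // 2 + 1)

# These frequency-domain features could then be fed to the autoencoder
# instead of the raw time-domain windows.
train_feats = windowed_fft_features(healthy_data.ravel())
test_feats = windowed_fft_features(broken_data.ravel())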

Related

Modify Random Forest Regression to predict multiple values in the future using past data

I am using Random Forest Regression on power vs. time data from an experiment that was run for a certain duration. Using that data, I want to predict the power trend in the future, with time as the input. The implemented code is below.
import pandas as pd
# Loading the Excel dataset
df = pd.read_excel('/content/drive/MyDrive/Colab Notebooks/Cleaned total data.xlsx', header=None,
                   names=["active_power", "current", "voltage"], usecols="A:C", skiprows=[0])
df = df.dropna()
The dataset consists of approximately 30 hours of power vs. time values.
Next, a Random Forest regressor is fitted on the training data. The R2 score achieved on the test data is 0.87.
import numpy as np
import joblib
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor

# Creating X and y
X = np.array(series[['time_h']]).reshape(-1, 1)
y = np.array(series['active_power'])

# Splitting dataset into training and testing
X_train2, X_test2, y_train2, y_test2 = train_test_split(X, y, test_size=0.15, random_state=1)

# Creating Random Forest model and fitting it on training data
forest = RandomForestRegressor(n_estimators=128, criterion='mse', random_state=1, n_jobs=-1)
forest_fit = forest.fit(X_train2, y_train2)

# Saving the model and checking the R2 score on test data
filename = 'random_forest.sav'
joblib.dump(forest, filename)
loaded_model = joblib.load(filename)
result = loaded_model.score(X_test2, y_test2)
print(result)
For future prediction, an array of time values covering 400 hours has been created to use as input to the model, since the power needs to be predicted for that duration.
# Creating a time array for future which will be used as input for future predictions
future_time2 = np.arange(len(series)*15)
future_time2 = future_time2*0.25/360
columns = ['time_hour']
dataframe = pd.DataFrame(data = future_time2, columns = columns)
future_times = dataframe[41006:].to_numpy()
future_times
When predictions are made for the future, the model only outputs a constant value over the entire 400-hour duration. The output prediction is as below.
# Predicting power for future
future_pred = loaded_model.predict(future_times)
future_pred
Could someone please suggest why the model is predicting the same value for the entire duration, and how to modify the code so that I get a trend of predictions with reasonable values rather than a single value?
Thank you.

Compare two segmentation map predictions

I am using consistency between two predicted segmentation maps on unlabeled data. For labeled data, I'm using nn.BCEWithLogitsLoss and Dice Loss.
I'm working on videos, which is why the output has 5 dimensions:
(batch_size, channels, frames, height, width)
I want to know how we can compare two predicted segmentation maps.
# gt_seg - ground truth segmentation map - (8, 1, 8, 112, 112)
# aug_gt_seg - augmented ground truth segmentation map - (8, 1, 8, 112, 112)
predicted_seg_1 = model(data, targets)                      # (8, 1, 8, 112, 112)
predicted_seg_2 = model(augmented_data, augmented_targets)  # (8, 1, 8, 112, 112)

# define criteria (DiceLoss is a custom/third-party implementation, not part of torch.nn)
seg_criterion_1 = nn.BCEWithLogitsLoss(size_average=True)
seg_criterion_2 = nn.DiceLoss()

# labeled losses
supervised_loss_1 = seg_criterion_1(predicted_seg_1, gt_seg)
supervised_loss_2 = seg_criterion_2(predicted_seg_1, gt_seg)

# consistency loss between the two predicted segmentation maps
if consistency_loss == "l2":
    consistency_criterion = nn.MSELoss()
    cons_loss = consistency_criterion(predicted_seg_1, predicted_seg_2)
elif consistency_loss == "l1":
    consistency_criterion = nn.L1Loss()
    cons_loss = consistency_criterion(predicted_seg_1, predicted_seg_2)

total_supervised_loss = supervised_loss_1 + supervised_loss_2
total_consistency_loss = cons_loss
Is this the right way to apply consistency between two predicted segmentation maps?
I'm mainly confused because of the definition on the PyTorch website: it describes a comparison of an input x with a target y. I thought my usage looked correct since I want both predicted segmentation maps to be similar, but the second segmentation map is not a target, which is why I'm confused. If this is valid, then every loss function could be applied in some way or another, which doesn't seem right to me. If it is the correct way to compare them, can it be extended to other segmentation-based losses such as Dice Loss, IoU Loss, etc.?
One more query regarding loss computation on labeled data:
# gt_seg - ground truth segmentation map
# aug_gt_seg - augmented ground truth segmentation map
predicted_seg_1 = model(data, targets)
predicted_seg_2 = model(augmented_data, augmented_targets)

# define criteria (DiceLoss is again a custom/third-party implementation)
seg_criterion_1 = nn.BCEWithLogitsLoss(size_average=True)
seg_criterion_2 = nn.DiceLoss()

# labeled losses
supervised_loss_1 = seg_criterion_1(predicted_seg_1, gt_seg)
supervised_loss_2 = seg_criterion_2(predicted_seg_1, gt_seg)

# augmented labeled losses
aug_supervised_loss_1 = seg_criterion_1(predicted_seg_2, aug_gt_seg)
aug_supervised_loss_2 = seg_criterion_2(predicted_seg_2, aug_gt_seg)

total_supervised_loss = supervised_loss_1 + supervised_loss_2 + aug_supervised_loss_1 + aug_supervised_loss_2
Is the calculation of total_supervised_loss correct? Can I apply loss.backward() on this?
Yes, this is a valid way to implement consistency loss. The nomenclature used by the PyTorch documentation lists one input as the target and the other as the prediction, but consider that L1, L2, Dice, and IoU losses are all symmetric (that is, Loss(a, b) = Loss(b, a)). So any of these functions will accomplish a form of consistency loss, with no regard for whether one input is actually a ground truth or "target".
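For illustration, a minimal sketch of such a symmetric consistency term between the two predictions (assuming the model outputs logits; the soft Dice function here is an illustrative stand-in, not the asker's DiceLoss):

import torch
import torch.nn.functional as F

def soft_dice(a, b, eps=1e-6):
    """Symmetric soft Dice between two probability maps: soft_dice(a, b) == soft_dice(b, a)."""
    inter = (a * b).sum(dim=(2, 3, 4))
    union = a.sum(dim=(2, 3, 4)) + b.sum(dim=(2, 3, 4))
    return 1.0 - ((2.0 * inter + eps) / (union + eps)).mean()

# Turn logits into probabilities before comparing them
p1 = torch.sigmoid(predicted_seg_1)
p2 = torch.sigmoid(predicted_seg_2)

cons_loss_l2 = F.mse_loss(p1, p2)    # L2-style consistency
cons_loss_dice = soft_dice(p1, p2)   # Dice-style consistency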

Are learning and cumulative reward good metrics to evaluate an RL model?

I am new to reinforcement learning.
I have a problem that I am applying DQN to. I plotted the cumulative reward curve while learning and taking actions; after 100 episodes it shows a lot of fluctuation, which does not tell me whether the agent has learnt anything.
However, instead of relying on the learning-time cumulative reward, I ran the model through the whole simulation without the learning step after each episode, and that shows the model is actually learning well. This extended the program runtime by quite a bit.
In addition, I have to extract the best model along the way, because the final model sometimes performs badly.
Any advice or explanation for this?
Try using the average return; it is usually a good metric for telling whether the agent is improving or not.
If you're using tf_agents you can do something like this:
...
import os
from tf_agents.utils import common
from tf_agents.policies import policy_saver

checkpoint_dir = os.path.join('./', 'checkpoint')
train_checkpointer = common.Checkpointer(
    ckpt_dir=checkpoint_dir,
    max_to_keep=1,
    agent=agent,
    policy=agent.policy,
    replay_buffer=replay_buffer,
    global_step=train_step
)

policy_dir = os.path.join('./', 'policy')
tf_policy_saver = policy_saver.PolicySaver(agent.policy)

def train_agent(n_iterations):
    best_AverageReturn = 0
    time_step = None
    policy_state = agent.collect_policy.get_initial_state(tf_env.batch_size)
    iterator = iter(dataset)
    for iteration in range(n_iterations):
        time_step, policy_state = collect_driver.run(time_step, policy_state)
        trajectories, buffer_info = next(iterator)
        train_loss = agent.train(trajectories)
        if iteration % 10 == 0:
            print("\r{} loss:{:.5f}".format(iteration, train_loss.loss.numpy()), end="")
        if iteration % 1000 == 0 and averageReturnMetric.result() > best_AverageReturn:
            best_AverageReturn = averageReturnMetric.result()
            train_checkpointer.save(train_step)
            tf_policy_saver.save(policy_dir)
Every 1000 iterations the train function evaluates the average return and creates a checkpoint if there is any improvement.
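The snippet above assumes averageReturnMetric already exists; a minimal sketch of how such a metric could be created and attached to the collect driver in tf_agents (names, buffer size, and step count are illustrative):

from tf_agents.metrics import tf_metrics
from tf_agents.drivers import dynamic_step_driver

# Average return over the last few collection episodes
averageReturnMetric = tf_metrics.AverageReturnMetric(buffer_size=10)

# Attach it as an observer so it is updated during collection
collect_driver = dynamic_step_driver.DynamicStepDriver(
    tf_env,
    agent.collect_policy,
    observers=[replay_buffer.add_batch, averageReturnMetric],
    num_steps=1,
)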

LSTM time series recursive prediction converges to the same value

I'm working on time series sequence prediction using an LSTM.
My goal is to use a window of the 25 past values to generate a prediction for the next 25 values. I'm doing that recursively:
I use the 25 known values to predict the next value, append it as a known value, then shift the window of 25 values and predict the next one again, until I have 25 newly generated values (or more).
I'm using Keras to implement the RNN.
Architecture:
from keras.models import Sequential
from keras.layers import LSTM, Dropout, Dense

regressor = Sequential()
regressor.add(LSTM(units=50, return_sequences=True, input_shape=(X_train.shape[1], 1)))
regressor.add(Dropout(0.1))
regressor.add(LSTM(units=50, return_sequences=True))
regressor.add(Dropout(0.1))
regressor.add(LSTM(units=50))
regressor.add(Dropout(0.1))
regressor.add(Dense(units=1))
regressor.compile(optimizer='rmsprop', loss='mean_squared_error')
regressor.fit(X_train, y_train, epochs=10, batch_size=32)
Problem:
The recursive prediction always converges to the same value, no matter what sequence comes before.
This is certainly not what I want; I expected the generated sequence to differ depending on what comes before it, and I'm wondering if someone has an idea about this behaviour and how to avoid it. Maybe I'm doing something wrong...
I tried different numbers of epochs, which didn't help much; actually, more epochs made it worse. Changing the batch size, the number of units, the number of layers, and the window size didn't help avoid this issue either.
I'm using MinMaxScaler for the data.
Edit:
scaling new inputs for testing:
dataset_test = sc.transform(dataset_test.reshape(-1, 1))
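For reference, a minimal sketch of the recursive generation loop described above (variable names are illustrative; it assumes the fitted regressor, the scaler sc, and at least 25 scaled test values):

import numpy as np

window = 25
# Start from the last `window` known (already scaled) values
current = dataset_test[:window].reshape(1, window, 1)
generated = []

for _ in range(25):  # generate 25 new values
    next_scaled = regressor.predict(current)[0, 0]
    generated.append(next_scaled)
    # Drop the oldest value, append the prediction, keep the window size fixed
    current = np.append(current[:, 1:, :], [[[next_scaled]]], axis=1)

# Back to the original scale
generated = sc.inverse_transform(np.array(generated).reshape(-1, 1))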

Merge several models (LSTMs) in TensorFlow

I know how to merge different models into one in Keras.
first_model = Sequential()
first_model.add(LSTM(output_dim, input_shape=(m, input_dim)))
second_model = Sequential()
second_model.add(LSTM(output_dim, input_shape=(n-m, input_dim)))
model = Sequential()
model.add(Merge([first_model, second_model], mode='concat'))
model.fit([X1,X2])
I am not sure how to do this in TensorFlow though.
I have two LSTM models and want to merge them (in the same way as in the Keras example above).
outputs_1, state_1 = tf.nn.dynamic_rnn(stacked_lstm_1, model_input_1)
outputs_2, state_2 = tf.nn.dynamic_rnn(stacked_lstm_2, model_input_2)
Any help would be much appreciated!
As was said in the comment, I believe the simplest way to do this is just to concatenate the outputs. The only complication that I've found is that, at least how I made my LSTM layers, they ended up with the exact same names for their weight tensors. This led to an error because TensorFlow thought the weights were already made when I tried to make the second layer. If you have this problem, you can solve it using a variable scope, which will apply to the names of the tensors in that LSTM layer:
with tf.variable_scope("LSTM_1"):
lstm_cells_1 = tf.contrib.rnn.MultiRNNCell(tf.contrib.rnn.LSTMCell(256))
output_1, state_1 = tf.nn.dynamic_rnn(lstm_cells_1, inputs_1)
last_output_1 = output_1[:, -1, :]
# I usually work with the last one; you can keep them all, if you want
with tf.variable_scope("LSTM_2"):
lstm_cells_2 = tf.contrib.rnn.MultiRNNCell(tf.contrib.rnn.LSTMCell(256))
output_2, state_2 = tf.nn.dynamic_rnn(lstm_cells_2, inputs_2)
last_output_2 = output_2[:, -1, :]
merged = tf.concat((last_output_1, last_output_2), axis=1)
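From there, the merged tensor can be fed into whatever head the task requires; for example (a small sketch in the same TF1 style, where num_classes is a placeholder for your output size):

# e.g. a dense layer on top of the concatenated LSTM outputs
logits = tf.layers.dense(merged, units=num_classes)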