Update several plots with slider in Python Dash

I am rather new to Dash (and have basic Python skills) and am working on a dashboard for educational purposes.
Students should be able to alter various variables via sliders, and the resulting changes to an economic model should be visible in several plots.
I am able to create a basic example with one plot being fed by several sliders.
However, I want some sliders to affect several plots simultaneously.
So far, I came up with the following:
import dash
from dash import dcc, html
from dash.dependencies import Input, Output
import pandas as pd
import plotly.graph_objects as go

app = dash.Dash(__name__)

a = 1
b = 1
c = 1

d = {
    'x' : [1, 2, 3, 4],
    'y1': [i * a for i in [1, 2, 3, 4]],
    'y2': [i * b for i in [1, 2, 3, 4]],
    'y3': [i * c for i in [1, 2, 3, 4]]
}
df = pd.DataFrame(data=d)

fig1 = go.Figure()
fig1.add_trace(go.Line(name="y1", x=df['x'], y=df['y1']))
fig1.add_trace(go.Line(name="y2", x=df['x'], y=df['y2']))
fig1.update_layout(
    transition_duration=500,
    xaxis_title="x",
    yaxis_title="y"
)
fig1.update_yaxes(range=[0, 20])
fig1.update_xaxes(range=[0, 6])

fig2 = go.Figure()
fig2.add_trace(go.Line(name="y1", x=df['x'], y=df['y1']))
fig2.update_layout(
    transition_duration=500,
    xaxis_title="x",
    yaxis_title="y"
)
fig2.update_yaxes(range=[0, 20])
fig2.update_xaxes(range=[0, 6])

fig3 = go.Figure()
fig3.add_trace(go.Line(name="y2", x=df['x'], y=df['y2']))
fig3.add_trace(go.Line(name="y3", x=df['x'], y=df['y3']))
fig3.update_layout(
    transition_duration=500,
    xaxis_title="x",
    yaxis_title="y"
)
fig3.update_yaxes(range=[0, 20])
fig3.update_xaxes(range=[0, 6])

app.layout = html.Div(children=[
    html.Div([
        html.Div([
            html.Div(children='Market A'),
            dcc.Graph(id='graph1', figure=fig1),
        ], className='col'),
        html.Div([
            html.Div(children='Market B'),
            dcc.Graph(id='graph2', figure=fig2),
        ], className='col'),
    ], className='row'),
    # New Div for all elements in the new 'row' of the page
    html.Div([
        html.Div(children='Market C'),
        dcc.Graph(id='graph3', figure=fig3),
        dcc.Slider(0, 10, step=None, value=a, id='a'),
        dcc.Slider(0, 10, step=None, value=b, id='b'),
        dcc.Slider(0, 10, step=None, value=c, id='c'),
    ], className='row'),
])

@app.callback(
    [Output('graph1', 'figure'),
     Output('graph2', 'figure'),
     Output('graph3', 'figure')],
    [Input('a', 'value'),   # first slider
     Input('b', 'value'),   # second slider
     Input('c', 'value')])  # third slider
def update_figure_money_market(a, b, c):
    d = {
        'x' : [1, 2, 3, 4],
        'y1': [i * a for i in [1, 2, 3, 4]],
        'y2': [i * b for i in [1, 2, 3, 4]],
        'y3': [i * c for i in [1, 2, 3, 4]]
    }
    df = pd.DataFrame(data=d)
    # note: the callback rebuilds the data but does not yet return updated figures

if __name__ == '__main__':
    app.run_server(debug=True)
So far I have not found a proper approach to this on the internet.
Any hint would be highly appreciated.

For Dash to produce several responsive plots, each graph needs its own callback with an update function for that graph; the same slider can then be listed as an Input in more than one callback.
# imports assumed by the callbacks below
import numpy as np
import pandas as pd
import plotly.graph_objects as go

# input and update of graph 1
@app.callback(
    Output('graph1', 'figure'),
    [Input('slider1', 'value'),
     Input('slider2', 'value')]
)
def update_graph1(slider1, slider2):
    X = np.array(range(1, 1001, 1))
    Y = X * slider1
    Y2 = X * slider2
    d = {'X': X, 'Y': Y, 'Y2': Y2}
    df = pd.DataFrame(data=d)
    fig = go.Figure()
    fig.add_trace(go.Line(name="Y", x=df['X'], y=df['Y']))
    fig.add_trace(go.Line(name="Y2", x=df['X'], y=df['Y2']))
    fig.update_layout(
        transition_duration=500,
        xaxis_title="X",
        yaxis_title="Y"
    )
    return fig

# input and update of second plot
@app.callback(
    Output('graph2', 'figure'),
    Input('slider2', 'value')
)
def update_graph2(slider2):
    X = np.array(range(1, 1001, 1))
    Y = X**0.5 * slider2
    d = {'X': X, 'Y': Y}
    df = pd.DataFrame(data=d)
    fig = go.Figure()
    fig.add_trace(go.Line(name="Y", x=df['X'], y=df['Y']))
    fig.update_layout(
        transition_duration=500,
        xaxis_title="X",
        yaxis_title="Y"
    )
    return fig

if __name__ == '__main__':
    app.run_server(debug=True)
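Alternatively, and closer to what the question attempted, a single callback can update all three figures at once by declaring several Outputs and returning one figure per Output. A minimal sketch, assuming the layout and component ids from the question ('graph1' to 'graph3' and sliders 'a', 'b', 'c'):
# one callback, several outputs: any slider change rebuilds all three figures
@app.callback(
    [Output('graph1', 'figure'),
     Output('graph2', 'figure'),
     Output('graph3', 'figure')],
    [Input('a', 'value'),
     Input('b', 'value'),
     Input('c', 'value')])
def update_all_graphs(a, b, c):
    x = [1, 2, 3, 4]
    df = pd.DataFrame({'x': x,
                       'y1': [i * a for i in x],
                       'y2': [i * b for i in x],
                       'y3': [i * c for i in x]})
    figs = []
    # same trace combinations as the three figures in the question
    for cols in (['y1', 'y2'], ['y1'], ['y2', 'y3']):
        fig = go.Figure()
        for col in cols:
            fig.add_trace(go.Scatter(mode='lines', name=col, x=df['x'], y=df[col]))
        fig.update_layout(transition_duration=500, xaxis_title="x", yaxis_title="y")
        fig.update_yaxes(range=[0, 20])
        fig.update_xaxes(range=[0, 6])
        figs.append(fig)
    return figs  # Dash unpacks the list into the three Outputs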

Related

Google Apps Script: How to merge two arrays conditionally?

I have a 2D array whose inner arrays each start with a title, and that title may or may not begin with "trailing". I would like to append the trailing data to the end of the matching parent data ("InterestExpense" or "IncomeTax"). How can I do that?
array = [['trailingInterestExpense', 4], ['InterestExpense', 1, 2, 3],
         ['trailingIncomeTax', 4], ['IncomeTax', 10, 20, 30]]
My expected result is:
array = [['InterestExpense', 1, 2, 3, 4], ['IncomeTax', 10, 20, 30, 40]]
From your expected value of array = [['InterestExpense', 1, 2, 3, 4],['IncomeTax', 10, 20, 30, 40]], I guessed your ['trailingIncomeTax', 4] might be ['trailingIncomeTax', 40]. If my understanding is correct, how about the following sample script?
Sample script:
const keys = ["InterestExpense", "IncomeTax"]; // This is from your question.
const array = [['trailingInterestExpense', 4], ['InterestExpense', 1, 2, 3], ['trailingIncomeTax', 40], ['IncomeTax', 10, 20, 30]]; // This is from your question.

const obj = array.reduce((o, [a, ...b]) => {
  const key = keys.find(k => a.includes(k));
  if (key) o[key] = o[key] ? [...o[key], ...b] : b;
  return o;
}, {});
const res = keys.map(k => [k, ...obj[k].sort((a, b) => a - b)]);
console.log(res) // [["InterestExpense",1,2,3,4],["IncomeTax",10,20,30,40]]
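In short, the reduce pass groups every row under whichever key its title contains, so 'trailingInterestExpense' and 'InterestExpense' both land in the 'InterestExpense' bucket and their value arrays are concatenated; the final map then rebuilds each row as [title, ...values] with the values sorted numerically.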
References:
reduce()
map()
I made a small change to @Tanaike's answer, adding array.sort((a, b) => (a[0].charCodeAt(0) - b[0].charCodeAt(0))) right after the array declaration, because in a real-world situation the trailing data can appear anywhere in the array, and depending on their index positions the output of @Tanaike's code differed.
function test() {
  const keys = ["InterestExpense", "IncomeTax"]; // This is from your question.
  var array = [['trailingInterestExpense', 0.4], ['InterestExpense', 1, 2, 3], ['trailingIncomeTax', 0.4], ['IncomeTax', 10, 20, 30]]; // This is from your question.
  // Added this line to rearrange trailing data to the end of the array
  array.sort((a, b) => (a[0].charCodeAt(0) - b[0].charCodeAt(0)));
  console.log(array)
  const obj = array.reduce((o, [a, ...b]) => {
    const key = keys.find(k => a.includes(k));
    if (key) o[key] = o[key] ? [...o[key], ...b] : b;
    return o;
  }, {});
  const res = keys.map(k => [k, ...obj[k]]);
  console.log(res) // [["InterestExpense",1,2,3,0.4],["IncomeTax",10,20,30,0.4]]
}

Model yields the same prediction for all images at inference

I am using transfer learning (EfficientNet-B0) to train a model, using the Adam optimizer and CrossEntropyLoss, for an image classification task.
The model reaches 96% validation accuracy and 99% training accuracy. However, inference fails: every image tested yields the same prediction.
I tried changing the learning rate to a very small value and making the batch size smaller.
What am I doing wrong?
Following are my training and inference code.
from efficientnet_pytorch import EfficientNet
from torch import nn
from torchvision import models

# using efficientnet model based transfer learning
class EffNet(nn.Module):
    def __init__(self, numClasses):
        super().__init__()  # added: an nn.Module subclass must call the parent constructor
        self.numClasses = numClasses
        self.effNet = {0: models.efficientnet_b0(pretrained=False, num_classes=self.numClasses),
                       1: models.efficientnet_b1(pretrained=False, num_classes=self.numClasses),
                       2: models.efficientnet_b2(pretrained=False, num_classes=self.numClasses),
                       3: models.efficientnet_b3(pretrained=False, num_classes=self.numClasses),
                       4: models.efficientnet_b4(pretrained=False, num_classes=self.numClasses),
                       5: models.efficientnet_b5(pretrained=False, num_classes=self.numClasses),
                       6: models.efficientnet_b6(pretrained=False, num_classes=self.numClasses),
                       7: models.efficientnet_b7(pretrained=False, num_classes=self.numClasses)
                       }
        # self.effNet = {0: EfficientNet.from_name(model_name='efficientnet-b0', num_classes=self.numClasses),
        #                1: EfficientNet.from_name(model_name='efficientnet-b1', num_classes=self.numClasses),
        #                2: EfficientNet.from_name(model_name='efficientnet-b2', num_classes=self.numClasses),
        #                3: EfficientNet.from_name(model_name='efficientnet-b3', num_classes=self.numClasses),
        #                4: EfficientNet.from_name(model_name='efficientnet-b4', num_classes=self.numClasses),
        #                5: EfficientNet.from_name(model_name='efficientnet-b5', num_classes=self.numClasses),
        #                6: EfficientNet.from_name(model_name='efficientnet-b6', num_classes=self.numClasses),
        #                7: EfficientNet.from_name(model_name='efficientnet-b7', num_classes=self.numClasses)
        #                }

    def getEffnetClassification(self, num_layers, fine_tune):
        effNet = self.effNet[num_layers]
        if fine_tune:
            print('[INFO]: Fine-tuning all layers...')
            for params in effNet.parameters():
                params.requires_grad = True
        elif not fine_tune:
            print('[INFO]: Freezing hidden layers...')
            for params in effNet.parameters():
                params.requires_grad = False
        # Change the final classification head (a fresh Linear layer is trainable by default).
        n_features = effNet.classifier[1].in_features
        effNet.classifier = nn.Linear(in_features=n_features, out_features=self.numClasses)
        return effNet

from torchvision import models, transforms
import sys
from efficientNet import EffNet

class Network():
    def __init__(self, num_classes):
        self.numClasses = num_classes

    def getNetwork(self, network_name, num_layers=2, fine_tune=False):
        if "efficient" in network_name:
            # network = EfficientNet.from_name(network_name)
            nw = EffNet(self.numClasses)
            print("In EffNet")
            print(num_layers)
            print(self.numClasses)
            print(fine_tune)
            network = nw.getEffnetClassification(num_layers=num_layers, fine_tune=fine_tune)
        elif "resnet" in network_name:
            nw = resNet()
            network = nw.getResnetClassification(num_layers)
        else:
            try:
                method = getattr(models, network_name)
            except AttributeError:
                raise NotImplementedError("Pytorch does not implement `{}`".format(network_name))
            network = method(pretrained=False)
        return network
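For context, a hypothetical usage of the two classes above (the argument values below are assumptions for illustration, not from the question):
# hypothetical wiring of the factory above
factory = Network(num_classes=10)
model = factory.getNetwork('efficientnet', num_layers=0, fine_tune=False)  # selects EfficientNet-B0
model = model.to(device)  # assumes `device` is defined as in the training code below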
Training code:
print("Printing Phase")
print(phase)
torch.cuda.empty_cache()
gc.collect()
bestValLoss = float('inf')
bestAcc = 0
bestLoss = float('inf')
total_step = len(self.dataLoader[phase])
losses = list()
acc = list()
valLosses = list()
valAcc = list()
datadict = {}
if(patience!=None):
earlystop = EarlyStopping(patience = patience,verbose = True)
for epoch in range(self.epochs):
startTime = datetime.now()
print('Epoch {}/{}'.format(epoch+1, self.epochs))
print('-' * 10)
for phase in ['train', 'val']:
if phase == 'train':
self.model.train()
else:
self.model.eval()
running_loss = 0.0
running_corrects = 0
total=0
for batch_id, (imgName, inputs, labels) in enumerate(self.dataLoader[phase]): # Change Here
print("BatchID")
print(batch_id)
inputs = inputs.to(device)
labels = labels.to(device)
self.optimizer.zero_grad()
outputs = self.model(inputs)
#print(outputs)
#print(labels)
loss = self.criterion(outputs, labels.long())
##loss = self.criterion(outputs.float(), labels.float()) # This was changed only for BCEWithLogitsLoss. if you change Loss Type change to line above
if phase == 'train':
loss.backward()
self.optimizer.step()
_, preds = torch.max(outputs, 1)
running_loss += loss.item() * inputs.size(0)
running_corrects += torch.sum(preds == labels.data)
total += labels.size(0)
epoch_loss = running_loss / len(self.dataLoader[phase])
epoch_acc = running_corrects.double() / total
print('{} loss: {:.4f}, acc: {:.4f}'.format(phase,
epoch_loss,
epoch_acc))
endTime = datetime.now()
print('{} loss: {:.4f}, acc: {:.4f}'.format(phase,
epoch_loss,
epoch_acc))
print('Time For Epoch {} :: {} seconds '.format(epoch, (endTime-startTime).total_seconds()))
losses.append(epoch_loss)
acc.append(epoch_acc.cpu().numpy().item())
print("Printing in Epoch")
print(losses)
print(acc)
if epoch_acc > bestAcc:
bestAcc = epoch_acc
bestModelWeights = copy.deepcopy(self.model.state_dict())
torch.save(bestModelWeights, self.path_model)
if epoch_loss < bestLoss:
bestLoss = epoch_loss
torch.save(self.model.state_dict(), self.path_model)
runningValAcc, runningValLoss = self.evalAndSave(bestValLoss, epoch)
valLosses.append(runningValLoss)
valAcc.append(runningValAcc.cpu().numpy().item())
print("Printing Val Data")
print(valLosses)
print(valAcc)
if runningValLoss < bestValLoss and patience:
earlystop(runningValLoss, self.model)
bestValLoss = runningValLoss
if earlystop.early_stop:
earlystop.save_checkpoint(runningValLoss, self.model)
print("Early Stopping")
break
print("Final Print in Trainer Before Return")
print("Training Losses")
print(losses)
print("Training Accuracy")
print(acc)
print("Validation Losses")
print(valLosses)
print("Validation Accuracy")
print(valAcc)
datadict["trainLoss"] = losses
datadict["trainAcc"] = acc
datadict["valLoss"] = valLosses
datadict["valAcc"] = valAcc
datadict["optimizer"] = self.optimizer
datadict["criterion"] = self.criterion
return self.model, datadict
Inference script:
with torch.no_grad():
    self.model.eval()
    # sm = nn.Softmax(dim=1)
    for batch_id, (imgPaths, inputs, labels) in enumerate(self.dataLoader):
        for imgPath, image, label in zip(imgPaths, inputs, labels):
            output = None
            predictedClassName = None
            predictions = None
            topclass = None
            topk = None
            new_record = pd.DataFrame()
            image = image.to(device)
            label = label.to(device)
            print("Printing Label")
            print(label)
            image_tensor = image.unsqueeze_(0)
            output = self.model(image_tensor.cuda())
            # print(output)
            predictions = torch.exp(output.data)
            # _, predictions = torch.max(output.data, 1)
            # predictions = sm(output)  # .softmax(output.data, dim=1)
            topk, topclass = torch.max(predictions, 1)
            print("Predictions")
            print(predictions)
            print("Checking")
            print(topk)
            print(topclass)
            correct += torch.sum(topk == label.long()).cpu().numpy()
            totalPredictions.extend(topk.cpu().numpy())
            # print("Predictions")
            # print(totalPredictions)
            totalGT.extend([label.cpu().numpy().item()])
            # print("Label")
            # print([label.cpu().numpy()])
            # print(label.cpu().numpy().item())
            # k = output_.item() == label.item()
            print("FileName")
            print(imgPath)
            className = self.dictClasses.get(label.item(), None)
            predictedClassName = self.dictClasses.get(topclass.item(), None)
            print("Predicted Class Names")
            print(predictedClassName)
            # print(predictedClassNameList)
            predictedClassNameList.append(predictedClassName)
            new_record = pd.DataFrame([[imgPath, topk.cpu().numpy().item(), predictedClassName]],
                                      columns=["FileName", "Confidence", "Classification"])
            dfInfoTable = pd.concat([dfInfoTable, new_record])
            cnt = cnt + 1
            if cnt % 100 == 0:
                path = validationFigureLoc + "\\" + "ValidationOuput.csv"
                dfInfoTable.to_csv(path, index=False, header=True)
                print(cnt)

PyTorch does not make initial weights random

I created a neural network that takes two 14x14-pixel greyscale images portraying digits (from the MNIST database) and returns 1 if the first digit is less than or equal to the second, and 0 otherwise. The code runs, but the initial weights are the same every time; they should be random.
Forcing the initial weights to be random by using the following line of code in the Net class does not help.
torch.nn.init.normal_(self.layer1.weight, mean=0.0, std=0.01)
Here is the code of the "main.py" file:
import os; os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"
import torch
import torch.nn as nn
from dlc_practical_prologue import *

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.layer1 = nn.Linear(2*14*14, 32)
        # torch.nn.init.normal_(self.layer1.weight, mean=0.0, std=0.01)
        # self.layer2 = nn.Linear(100, 100)
        # self.layer3 = nn.Linear(100, 100)
        self.layer2 = nn.Linear(32, 1)

    def forward(self, x):
        x = torch.relu(self.layer1(x))
        # x = torch.relu(self.layer2(x))
        # x = torch.relu(self.layer3(x))
        x = torch.sigmoid(self.layer2(x))
        return x

if __name__ == '__main__':
    # Data initialization
    N = 1000
    train_input, train_target, train_classes, _, _, _ = generate_pair_sets(N)
    _, _, _, test_input, test_target, test_classes = generate_pair_sets(N)
    train_input = train_input.view(-1, 2*14*14)
    test_input = test_input.view(-1, 2*14*14)
    train_target = train_target.view(-1, 1)
    test_target = test_target.view(-1, 1)

    # I convert the type to torch.float32
    train_input, train_target, train_classes, test_input, test_target, test_classes = \
        train_input.type(torch.float32), train_target.type(torch.float32), train_classes.type(torch.long), \
        test_input.type(torch.float32), test_target.type(torch.float32), test_classes.type(torch.long)

    # Create the neural network
    net = Net()

    # Training
    learning_rate = 0.01
    # Use MSELoss
    loss = nn.MSELoss()
    # Use Adam optimizer
    optimizer = torch.optim.Adam(net.parameters(), lr=learning_rate)
    EPOCHS = 50
    for param in net.parameters():
        print(param)
    for epoch in range(EPOCHS):
        target_predicted = net(train_input)
        l = loss(train_target, target_predicted)  # loss = nn.MSELoss()
        # l = loss(target_predicted, train_target)
        l.backward()
        optimizer.step()
        optimizer.zero_grad()
        # print(l)

    # Testing
    total = 1000
    correct = 0
    with torch.no_grad():
        correct = (test_target == net(test_input).round()).sum()
        print("Accuracy %.2f%%" % (correct / total * 100))
Here is the code for "dlc_practical_prologue.py":
import os; os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"
import torch
from torchvision import datasets
import argparse
import os
import urllib

######################################################################

parser = argparse.ArgumentParser(description='DLC prologue file for practical sessions.')
parser.add_argument('--full',
                    action='store_true', default=False,
                    help='Use the full set, can take ages (default False)')
parser.add_argument('--tiny',
                    action='store_true', default=False,
                    help='Use a very small set for quick checks (default False)')
parser.add_argument('--seed',
                    type=int, default=0,
                    help='Random seed (default 0, < 0 is no seeding)')
parser.add_argument('--cifar',
                    action='store_true', default=False,
                    help='Use the CIFAR data-set and not MNIST (default False)')
parser.add_argument('--data_dir',
                    type=str, default=None,
                    help='Where are the PyTorch data located (default $PYTORCH_DATA_DIR or \'./data\')')
# Timur's fix
parser.add_argument('-f', '--file',
                    help='quick hack for jupyter')
args = parser.parse_args()

if args.seed >= 0:
    torch.manual_seed(args.seed)

######################################################################
# The data

def convert_to_one_hot_labels(input, target):
    tmp = input.new_zeros(target.size(0), target.max() + 1)
    tmp.scatter_(1, target.view(-1, 1), 1.0)
    return tmp
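# Worked example (added for clarity, not in the original file):
#     convert_to_one_hot_labels(torch.empty(0), torch.tensor([0, 2, 1]))
# returns
#     tensor([[1., 0., 0.],
#             [0., 0., 1.],
#             [0., 1., 0.]])
# i.e. scatter_ writes a 1.0 into column target[i] of row i.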
def load_data(cifar=None, one_hot_labels=False, normalize=False, flatten=True):
    if args.data_dir is not None:
        data_dir = args.data_dir
    else:
        data_dir = os.environ.get('PYTORCH_DATA_DIR')
        if data_dir is None:
            data_dir = './data'

    if args.cifar or (cifar is not None and cifar):
        print('* Using CIFAR')
        cifar_train_set = datasets.CIFAR10(data_dir + '/cifar10/', train=True, download=True)
        cifar_test_set = datasets.CIFAR10(data_dir + '/cifar10/', train=False, download=True)
        train_input = torch.from_numpy(cifar_train_set.data)
        train_input = train_input.transpose(3, 1).transpose(2, 3).float()
        train_target = torch.tensor(cifar_train_set.targets, dtype=torch.int64)
        test_input = torch.from_numpy(cifar_test_set.data).float()
        test_input = test_input.transpose(3, 1).transpose(2, 3).float()
        test_target = torch.tensor(cifar_test_set.targets, dtype=torch.int64)
    else:
        print('* Using MNIST')
        ######################################################################
        # import torchvision
        # raw_folder = data_dir + '/mnist/raw/'
        # resources = [
        #     ("https://fleuret.org/dlc/data/train-images-idx3-ubyte.gz", "f68b3c2dcbeaaa9fbdd348bbdeb94873"),
        #     ("https://fleuret.org/dlc/data/train-labels-idx1-ubyte.gz", "d53e105ee54ea40749a09fcbcd1e9432"),
        #     ("https://fleuret.org/dlc/data/t10k-images-idx3-ubyte.gz", "9fb629c4189551a2d022fa330f9573f3"),
        #     ("https://fleuret.org/dlc/data/t10k-labels-idx1-ubyte.gz", "ec29112dd5afa0611ce80d1b7f02629c")
        # ]
        # os.makedirs(raw_folder, exist_ok=True)
        # # download files
        # for url, md5 in resources:
        #     filename = url.rpartition('/')[2]
        #     torchvision.datasets.utils.download_and_extract_archive(url, download_root=raw_folder, filename=filename, md5=md5)
        ######################################################################
        mnist_train_set = datasets.MNIST(data_dir + '/mnist/', train=True, download=True)
        mnist_test_set = datasets.MNIST(data_dir + '/mnist/', train=False, download=True)
        train_input = mnist_train_set.data.view(-1, 1, 28, 28).float()
        train_target = mnist_train_set.targets
        test_input = mnist_test_set.data.view(-1, 1, 28, 28).float()
        test_target = mnist_test_set.targets

    if flatten:
        train_input = train_input.clone().reshape(train_input.size(0), -1)
        test_input = test_input.clone().reshape(test_input.size(0), -1)

    if args.full:
        if args.tiny:
            raise ValueError('Cannot have both --full and --tiny')
    else:
        if args.tiny:
            print('** Reduce the data-set to the tiny setup')
            train_input = train_input.narrow(0, 0, 500)
            train_target = train_target.narrow(0, 0, 500)
            test_input = test_input.narrow(0, 0, 100)
            test_target = test_target.narrow(0, 0, 100)
        else:
            print('** Reduce the data-set (use --full for the full thing)')
            train_input = train_input.narrow(0, 0, 1000)
            train_target = train_target.narrow(0, 0, 1000)
            test_input = test_input.narrow(0, 0, 1000)
            test_target = test_target.narrow(0, 0, 1000)

    print('** Use {:d} train and {:d} test samples'.format(train_input.size(0), test_input.size(0)))

    if one_hot_labels:
        train_target = convert_to_one_hot_labels(train_input, train_target)
        test_target = convert_to_one_hot_labels(test_input, test_target)

    if normalize:
        mu, std = train_input.mean(), train_input.std()
        train_input.sub_(mu).div_(std)
        test_input.sub_(mu).div_(std)

    return train_input, train_target, test_input, test_target

######################################################################

def mnist_to_pairs(nb, input, target):
    input = torch.functional.F.avg_pool2d(input, kernel_size=2)
    a = torch.randperm(input.size(0))
    a = a[:2 * nb].view(nb, 2)
    input = torch.cat((input[a[:, 0]], input[a[:, 1]]), 1)
    classes = target[a]
    target = (classes[:, 0] <= classes[:, 1]).long()
    return input, target, classes
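# Note (added for clarity, not in the original file): avg_pool2d halves each 28x28
# digit to 14x14, randperm draws 2*nb distinct images, and cat stacks them pairwise
# along dim 1 into an (nb, 2, 14, 14) tensor; target is 1 when the first digit of a
# pair is <= the second, which is exactly what the question's network predicts.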
######################################################################

def generate_pair_sets(nb):
    if args.data_dir is not None:
        data_dir = args.data_dir
    else:
        data_dir = os.environ.get('PYTORCH_DATA_DIR')
        if data_dir is None:
            data_dir = './data'

    train_set = datasets.MNIST(data_dir + '/mnist/', train=True, download=True)
    train_input = train_set.data.view(-1, 1, 28, 28).float()
    train_target = train_set.targets

    test_set = datasets.MNIST(data_dir + '/mnist/', train=False, download=True)
    test_input = test_set.data.view(-1, 1, 28, 28).float()
    test_target = test_set.targets

    return mnist_to_pairs(nb, train_input, train_target) + \
           mnist_to_pairs(nb, test_input, test_target)

######################################################################
Note that I have to add the following line of code to run on Windows 10, while it is not necessary on Linux:
import os; os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"
Also, on Linux I always get the same initial weights.
Please, can you help me?
Correct me if I'm wrong here, but only the weights of the first layer should be the same each time you run this. The thing is, when you import dlc_practical_prologue.py there's this in it:
if args.seed >= 0:
    torch.manual_seed(args.seed)
which fires if the seed is >= 0 (the default is 0).
This should only initialize the first layer with the same weights for each run. Check if this is the case.
The solution was to delete the following lines from "dlc_practical_prologue.py":
if args.seed >= 0:
    torch.manual_seed(args.seed)
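To see the seeding effect in isolation, here is a minimal sketch (added for illustration, not from the original question) showing that torch.manual_seed pins down the initial weights of a fresh layer:
import torch
import torch.nn as nn

torch.manual_seed(0)
print(nn.Linear(4, 2).weight[0])  # prints the same values on every run of the script

torch.manual_seed(0)
print(nn.Linear(4, 2).weight[0])  # re-seeding reproduces exactly the same weights

print(nn.Linear(4, 2).weight[0])  # no re-seed: this layer gets different weights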

Coloring scatter plot points differently based on certain conditions

I have a scatter plot made using plotly.py, and I would like to color certain points in the scatter plot with a different color based on a certain condition. I have attached sample code below:
import plotly.plotly as py
import plotly.graph_objs as go
from plotly.offline import plot

data = [4.1, 2.1, 3.1, 4.4, 3.2, 4.1, 2.2]

trace_1 = go.Scatter(
    x = [1, 2, 3, 4, 5, 6, 7],
    y = data
)

layout = go.Layout(
    paper_bgcolor='rgb(255,255,255)',
    plot_bgcolor='rgb(229,229,229)',
    title = "Sample Plot",
    showlegend = False,
    xaxis = dict(
        mirror = True,
        showline = True,
        showticklabels = True,
        ticks = 'outside',
        gridcolor = 'rgb(255,255,255)',
    ),
    yaxis = dict(
        mirror = True,
        showline = True,
        showticklabels = False,
        gridcolor = 'rgb(255,255,255)',
    ),
    shapes = [{
        'type': 'line',
        'x0': 1,
        'y0': 4,
        'x1': len(data),
        'y1': 4,
        'name': 'First',
        'line': {
            'color': 'rgb(147, 19, 19)',
            'width': 1,
            'dash': 'longdash'
        }
    },
    {
        'type': 'line',
        'x0': 1,
        'y0': 3,
        'x1': len(data),
        'y1': 3,
        'line': {
            'color': 'rgb(147, 19, 19)',
            'width': 1,
            'dash': 'longdash'
        }
    }]
)

fig = dict(data = [trace_1], layout = layout)
plot(fig, filename = "test_plot.html")
Here's the output: [screenshot of the resulting scatter plot]
Here the long-dashed horizontal lines are at y = 4 and y = 3 respectively. As one can see, points 1, 2, 4, 6 and 7 lie outside the dashed lines. Is there a way to color those points differently, based on the condition (y < 3) or (y > 4)?
Here's a reference to something I found while searching for a solution:
Python Matplotlib scatter plot: Specify color points depending on conditions
How can I achieve this in plotly.py?
You can accomplish this by using a numeric array to specify the marker color. See https://plot.ly/python/line-and-scatter/#scatter-with-a-color-dimension.
Adapting your particular example to display red markers below 3, green markers above 4, and gray markers between 3 and 4:
import plotly.graph_objs as go
from plotly.offline import init_notebook_mode, iplot
init_notebook_mode()

data = [4.1, 2.1, 3.1, 4.4, 3.2, 4.1, 2.2]

color = [
    -1 if v < 3 else 1 if v > 4 else 0
    for v in data
]
colorscale = [[0, 'red'], [0.5, 'gray'], [1.0, 'green']]

trace_1 = go.Scatter(
    x = [1, 2, 3, 4, 5, 6, 7],
    y = data,
    marker = {'color': color,
              'colorscale': colorscale,
              'size': 10
              }
)

layout = go.Layout(
    paper_bgcolor='rgb(255,255,255)',
    plot_bgcolor='rgb(229,229,229)',
    title = "Sample Plot",
    showlegend = False,
    xaxis = dict(
        mirror = True,
        showline = True,
        showticklabels = True,
        ticks = 'outside',
        gridcolor = 'rgb(255,255,255)',
    ),
    yaxis = dict(
        mirror = True,
        showline = True,
        showticklabels = False,
        gridcolor = 'rgb(255,255,255)',
    ),
    shapes = [{
        'type': 'line',
        'x0': 1,
        'y0': 4,
        'x1': len(data),
        'y1': 4,
        'name': 'First',
        'line': {
            'color': 'rgb(147, 19, 19)',
            'width': 1,
            'dash': 'longdash'
        }
    },
    {
        'type': 'line',
        'x0': 1,
        'y0': 3,
        'x1': len(data),
        'y1': 3,
        'line': {
            'color': 'rgb(147, 19, 19)',
            'width': 1,
            'dash': 'longdash'
        }
    }]
)

fig = dict(data = [trace_1], layout = layout)
iplot(fig)
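As a side note, marker.color also accepts a plain list of color strings, so if no colorscale is needed the condition can map straight to named colors. A minimal variation on the trace above:
color = ['red' if v < 3 else 'green' if v > 4 else 'gray' for v in data]

trace_1 = go.Scatter(
    x = [1, 2, 3, 4, 5, 6, 7],
    y = data,
    mode = 'markers',
    marker = {'color': color, 'size': 10}
)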
Hope that helps!

Positioning a dropdown in plotly

I want to position a dropdown menu under the legend. However, depending on how big the plot is in terms of resolution, plotly changes the position of this dropdown (cf. the uploaded pictures).
The first picture shows the output as it appears in the little RStudio plot tab. The second shows how far the dropdown moves to the right when I switch to fullscreen.
How can I fix the position of the dropdown menu? Any solution is appreciated, whether it is HTML, R or anything else.
Below you can find the code I used to create the plots:
library(plotly)

x <- seq(-2 * pi, 2 * pi, length.out = 1000)
df <- data.frame(x, y1 = sin(x), y2 = cos(x), tan_h = tanh(x))

p <- plot_ly(df, x = ~x) %>%
  add_lines(y = ~y1, name = "sin") %>%
  add_lines(y = ~y2, name = "cos") %>%
  add_lines(y = ~tan_h, name = 'tanh', visible = FALSE) %>%
  layout(
    title = "Drop down menus - Styling",
    xaxis = list(domain = c(0.1, 1)),
    yaxis = list(title = "y"),
    updatemenus = list(
      list(
        y = 0.7,
        x = 1.1,
        buttons = list(
          list(method = "restyle",
               args = list("visible", list(TRUE, TRUE, FALSE)),
               label = "Cos"),
          list(method = "restyle",
               args = list("visible", list(TRUE, FALSE, TRUE)),
               label = "Tanh"),
          list(method = "restyle",
               args = list("visible", list(TRUE, TRUE, TRUE)),
               label = "All")
        ))
    )
  )
This question is old now, but for Python you can now position the dropdown via the plotly API, as per https://plotly.com/python/v3/dropdowns/#style-dropdown.
You can pass the arguments x, y, xanchor and yanchor to the updatemenus entry of the plotly graph_objects Figure's update_layout method:
import plotly.graph_objects as go

fig = go.Figure()
fig.update_layout(
    title="Demo",
    updatemenus=[go.layout.Updatemenu(
        active=0,
        buttons=[{"label": "Demo button...", "args": [{"showlegend": False}]}],
        x=0.3,
        xanchor='left',
        y=1.2,
        yanchor='top',
    )]
)
fig.show()
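Note that x and y here are normalized paper coordinates (0 to 1 spans the plotting area), and xanchor/yanchor choose which edge of the menu those coordinates refer to. The same attributes exist in the R updatemenus list above, since both bindings share the plotly.js schema, so setting xanchor and yanchor there should likewise stop the menu from drifting when the plot is resized.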