FiPy Simple Convection

I am trying to understand how FiPy works by working an example, in particular I would like to solve the following simple convection equation with periodic boundary:
$$\partial_t u + \partial_x u = 0$$
If initial data is given by $u(x, 0) = F(x)$, then the analytical solution is $u(x, t) = F(x - t)$. I do get a solution, but it is not correct.
What am I missing? Is there a better resource for understanding FiPy than the documentation? It is very sparse...
Here is my attempt
from fipy import *
import numpy as np
# Generate mesh
nx = 20
dx = 2*np.pi/nx
mesh = PeriodicGrid1D(nx=nx, dx=dx)
# Generate solution object with initial discontinuity
phi = CellVariable(name="solution variable", mesh=mesh)
phiAnalytical = CellVariable(name="analytical value", mesh=mesh)
phi.setValue(1.)
phi.setValue(0., where=mesh.x > 1.)
# Define the pde
D = [[-1.]]
eq = TransientTerm() == ConvectionTerm(coeff=D)
# Set discretization so analytical solution is exactly one cell translation
dt = 0.01*dx
steps = 2*int(dx/dt)
# Set the analytical value at the end of simulation
phiAnalytical.setValue(np.roll(phi.value, 1))
for step in range(steps):
    eq.solve(var=phi, dt=dt)
print(phi.allclose(phiAnalytical, atol=1e-1))

As addressed on the FiPy mailing list, FiPy is not great at handling convection-only PDEs (purely hyperbolic, with no diffusion), as it is missing higher-order convection schemes. It is better to use CLAWPACK for this class of problem; a minimal PyClaw sketch follows.
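For reference, here is a minimal PyClaw sketch of the same periodic advection problem (my own addition, not from the original answer; it assumes clawpack is installed and uses the v5-style API, which may vary between versions):
import numpy as np
from clawpack import pyclaw, riemann

# First-order Godunov solver for 1D advection with periodic boundaries
solver = pyclaw.ClawSolver1D(riemann.advection_1D)
solver.bc_lower[0] = pyclaw.BC.periodic
solver.bc_upper[0] = pyclaw.BC.periodic

nx = 20
x_dim = pyclaw.Dimension(0.0, 2*np.pi, nx, name='x')
domain = pyclaw.Domain([x_dim])
state = pyclaw.State(domain, 1)  # one equation: the scalar u
state.problem_data['u'] = 1.0    # advection speed

# Same initial discontinuity as the FiPy example
xc = state.grid.x.centers
state.q[0, :] = np.where(xc > 1.0, 0.0, 1.0)

claw = pyclaw.Controller()
claw.solution = pyclaw.Solution(state, domain)
claw.solver = solver
claw.tfinal = 2*np.pi/nx         # advect by exactly one cell width
claw.run()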
FiPy does have one second-order scheme that might help with this problem, the VanLeerConvectionTerm; see the FiPy examples for a demonstration of its use.
If the VanLeerConvectionTerm is used in the above problem, it does do a better job of preserving the shock.
import numpy as np
import fipy
# Generate mesh
nx = 20
dx = 2*np.pi/nx
mesh = fipy.PeriodicGrid1D(nx=nx, dx=dx)
# Generate solution object with initial discontinuity
phi = fipy.CellVariable(name="solution variable", mesh=mesh)
phiAnalytical = fipy.CellVariable(name="analytical value", mesh=mesh)
phi.setValue(1.)
phi.setValue(0., where=mesh.x > 1.)
# Define the pde
D = [[-1.]]
eq = fipy.TransientTerm() == fipy.VanLeerConvectionTerm(coeff=D)
# Set discretization so analytical solution is exactly one cell translation
dt = 0.01*dx
steps = 2*int(dx/dt)
# Set the analytical value at the end of simulation
phiAnalytical.setValue(np.roll(phi.value, 1))
viewer = fipy.Viewer(phi)
for step in range(steps):
    eq.solve(var=phi, dt=dt)
    viewer.plot()
raw_input('stopped')  # use input() under Python 3
print(phi.allclose(phiAnalytical, atol=1e-1))

Related

How can I solve "empty() received an invalid combination of arguments - got (tuple, dtype=NoneType, device=NoneType)" in MushroomRL?

I'm using MushroomRL for a deep reinforcement learning project, with a graph representation as the RL environment, where the number of nodes represents the number of actions. In my neural network the input is a single value, e.g. tensor([[5.]]), and the output Q has one entry per node, ten in total, e.g. tensor([[5972.4927, 8562.3330, 7443.6479, 7326.1587, 6615.2090, 6617.3145, 6911.8672, 8233.7930, 6821.0093, 7000.1182]]). MushroomRL is a new framework to me, and this is the code:
import numpy as np
import torch.nn.functional as F

if __name__ == '__main__':
    from mushroom_rl.core import Core
    from mushroom_rl.algorithms.value import TrueOnlineSARSALambda
    from mushroom_rl.policy import EpsGreedy
    from mushroom_rl.features import Features
    from mushroom_rl.features.tiles import Tiles
    from mushroom_rl.utils.dataset import compute_J
    from mushroom_rl.utils.parameters import LinearParameter, Parameter
    from mushroom_rl.approximators.parametric import TorchApproximator
    from mushroom_rl.algorithms.value import DQN

    # Set the seed
    np.random.seed(1)

    # Create the toy environment with default parameters
    # (graph_env, Network, optimizer and the *_size/*_frequency constants
    # are defined elsewhere in my project)
    #mdp = Environment.make('graph_env')
    mdp = graph_env()

    # Using an epsilon-greedy policy
    epsilon = Parameter(value=0.1)
    pi = EpsGreedy(epsilon=epsilon)

    # Policy
    epsilon = LinearParameter(value=1., threshold_value=.1, n=1000000)
    epsilon_test = Parameter(value=.05)
    epsilon_random = Parameter(value=1)
    pi = EpsGreedy(epsilon=epsilon_random)

    approximator_params = dict(
        network=Network,
        input_shape=(1,),
        output_shape=(1,),
        n_actions=mdp.info.action_space.n,
        optimizer=optimizer,
        loss=F.mse_loss
    )
    approximator = TorchApproximator

    algorithm_params = dict(
        batch_size=32,
        target_update_frequency=target_update_frequency // train_frequency,
        replay_memory=True,
        initial_replay_size=initial_replay_size,
        max_replay_size=max_replay_size
    )
    agent = DQN(mdp.info, pi, approximator,
                approximator_params=approximator_params,
                **algorithm_params)

    # Algorithm
    core = Core(agent, mdp)

    # RUN
    # Fill replay memory with random dataset
    print_epoch(0)
    core.learn(n_steps=initial_replay_size, n_steps_per_fit=initial_replay_size)

    # Evaluate initial policy
    pi.set_epsilon(epsilon_test)
    #mdp.set_episode_end(False)
    dataset = core.evaluate(n_steps=test_samples)
    scores.append(get_stats(dataset))

    for n_epoch in range(1, max_steps // evaluation_frequency + 1):
        print_epoch(n_epoch)
        print('- Learning:')
        # learning step
        pi.set_epsilon(epsilon)
        mdp.set_episode_end(True)
        core.learn(n_steps=evaluation_frequency, n_steps_per_fit=train_frequency)

        print('- Evaluation:')
        # evaluation step
        pi.set_epsilon(epsilon_test)
        mdp.set_episode_end(False)
        dataset = core.evaluate(n_steps=test_samples)
        scores.append(get_stats(dataset))
It gives me this error when I run the code:
TypeError: empty() received an invalid combination of arguments - got (tuple, dtype=NoneType, device=NoneType), but expected one of:
* (tuple of ints size, *, tuple of names names, torch.memory_format memory_format, torch.dtype dtype, torch.layout layout, torch.device device, bool pin_memory, bool requires_grad)
* (tuple of ints size, *, torch.memory_format memory_format, Tensor out, torch.dtype dtype, torch.layout layout, torch.device device, bool pin_memory, bool requires_grad)
I believe the problem is in this part of the code. Can anyone help me fix it?

How to normalize pytorch model output to be in range [0,1]

Let's say I have a model called UNet:
output = UNet(input)
The output is a batch of grayscale images with shape (batch_size, 1, 128, 128).
What I want to do is to normalize each image to be in range [0,1].
I did it like this:
for i in range(batch_size):
    output[i,:,:,:] = output[i,:,:,:]/torch.amax(output,dim=(1,2,3))[i]
Now every image in the output is normalized, but when I'm training such a model, PyTorch claims it cannot calculate the gradients in this procedure, and I understand why.
My question is: what is the right way to normalize an image without killing the backpropagation flow? Something like:
output = UNet(input)
output = output.normalize
output2 = some_model(output)
loss = ...
loss.backward()
optimizer.step()
My only option right now is adding a sigmoid activation at the end of the UNet, but I don't think that's a good idea.
Update: gen2 and disc are the UNet and discriminator models; est_bias is some output.
Update 2: code:
with torch.no_grad():
    est_bias_for_disc = gen2(input_img)
    est_bias_for_disc /= est_bias_for_disc.amax(dim=(1,2,3), keepdim=True)
disc_fake_hat = disc(est_bias_for_disc.detach())
disc_fake_loss = BCE(disc_fake_hat, torch.zeros_like(disc_fake_hat))
disc_real_hat = disc(bias_ref)
disc_real_loss = BCE(disc_real_hat, torch.ones_like(disc_real_hat))
disc_loss = (disc_fake_loss + disc_real_loss) / 2
if epoch <= epochs_till_gen2_stop:
    disc_loss.backward(retain_graph=True)  # Update gradients
    opt_disc.step()  # Update optimizer
Then there's separate training:
opt_gen2.zero_grad()
est_bias = gen2(input_img)
est_bias /= est_bias.amax(dim=(1,2,3), keepdim=True)
disc_fake = disc(est_bias)
ADV_loss = BCE(disc_fake, torch.ones_like(disc_fake))
gen2_loss = ADV_loss
gen2_loss.backward()
opt_gen2.step()
You can use the normalize function:
>>> import torch
>>> import torch.nn.functional as F
>>> x = torch.tensor([[3.,4.],[5.,6.],[7.,8.]])
>>> x = F.normalize(x, dim = 0)
>>> print(x)
tensor([[0.3293, 0.3714],
        [0.5488, 0.5571],
        [0.7683, 0.7428]])
This will give a differentiable tensor as long as the out argument is not used. Note that F.normalize performs Lp normalization (L2 by default) along a dimension, which is not the same as scaling each image into the [0,1] range.
You are overwriting the tensor's value because of the indexing on the batch dimension. Instead, you can perform the operation in vectorized form:
output = output / output.amax(dim=(1,2,3), keepdim=True)
The keepdim=True argument keeps the number of dimensions of torch.Tensor.amax's output equal to that of its input, so the division broadcasts one maximum over each image.
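For completeness, here is a minimal self-contained sketch (using a synthetic tensor rather than the OP's UNet) showing that this vectorized form keeps the autograd graph intact:
import torch

# Stand-in for the UNet output: batch of 4 single-channel 8x8 "images".
output = torch.rand(4, 1, 8, 8, requires_grad=True)

# Per-image max normalization; note this lands in [0, 1] only if the
# outputs are non-negative.
normalized = output / output.amax(dim=(1, 2, 3), keepdim=True)

loss = normalized.sum()
loss.backward()           # gradients flow through amax and the division
print(output.grad.shape)  # torch.Size([4, 1, 8, 8])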

How to solve Euler–Bernoulli beam equation in FiPy?

To understand how FiPy works, I want to solve the Euler–Bernoulli beam equation with fixed endpoints:
$$w''''(x) = q(x,t), \qquad w(0) = w(1) = 0, \qquad w'(0) = w'(1) = 0$$
For simplicity, let $q(x,t) = \sin(x)$.
How can I define and solve this in FiPy? In particular, how do I specify the source term $\sin(x)$ in terms of the equation's only independent variable, $x$?
from fipy import CellVariable, Grid1D, DiffusionTerm, ExplicitDiffusionTerm
from fipy.tools import numerix
nx = 50
dx = 1/nx
mesh = Grid1D(nx=nx, dx=dx)
w = CellVariable(name="deformation",mesh=mesh,value=0.0)
valueLeft = 0.0
valueRight = 0.0
w.constrain(valueLeft, mesh.facesLeft)
w.constrain(valueRight, mesh.facesRight)
w.faceGrad.constrain(valueLeft, mesh.facesLeft)
w.faceGrad.constrain(valueRight, mesh.facesRight)
# does not work:
eqX = DiffusionTerm((1.0, 1.0)) == numerix.sin(x)
eqX.solve(var=w)
Here is what seems to be a working version of your problem
from fipy import CellVariable, Grid1D, DiffusionTerm
from fipy.tools import numerix
from fipy.solvers.pysparse.linearPCGSolver import LinearPCGSolver
from fipy import Viewer
import numpy as np
L = 1.
nx = 500
dx = L / nx
mesh = Grid1D(nx=nx, dx=dx)
w = CellVariable(name="deformation",mesh=mesh,value=0.0)
valueLeft = 0.0
valueRight = 0.0
w.constrain(valueLeft, mesh.facesLeft)
w.constrain(valueRight, mesh.facesRight)
w.faceGrad.constrain(valueLeft, mesh.facesLeft)
w.faceGrad.constrain(valueRight, mesh.facesRight)
x = mesh.x
k_0 = 0
k_1 = -1
k_2 = 2 + np.cos(L) - 3 * np.sin(L)
k_3 = -1 + 2 * np.sin(L) - np.cos(L)
w_analytical = numerix.sin(x) + k_3 * x**3 + k_2 * x**2 + k_1 * x + k_0
w_analytical.name = 'analytical'
# does not work:
eqX = DiffusionTerm((1.0, 1.0)) == numerix.sin(x)
eqX.solve(var=w, solver=LinearPCGSolver(iterations=20000))
Viewer([w_analytical, w]).plot()
raw_input('stopped')  # use input() under Python 3
After running this, the FiPy solution seems to be quite close to the analytical result.
The important changes from the OP's implementation:
1. Using mesh.x, which is the correct way to refer to the spatial variable in FiPy equations.
2. Specifying the solver and the number of iterations. The problem seems to be slow to converge, so it needed a lot of iterations. In my experience, fourth-order spatial equations often need good preconditioners to converge quickly. You might try the Trilinos solver package with FiPy, as it has a wider range of available preconditioners (see the sketch after this list).
3. Using an explicit float for L to avoid integer maths in Python 2.7 (edit from comment).
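A minimal sketch of switching FiPy to the Trilinos suite (my own addition, not from the original answer; it assumes PyTrilinos is installed). FiPy reads the FIPY_SOLVERS environment variable when it is first imported, so it must be set beforehand:
import os

# Assumption: PyTrilinos is installed. FiPy chooses its solver suite at
# first import, so set the environment variable before importing fipy.
os.environ['FIPY_SOLVERS'] = 'trilinos'

from fipy import Grid1D, CellVariable, DiffusionTerm
# The beam problem above then proceeds unchanged, with the Trilinos
# solvers and their wider range of preconditioners available to solve().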

The built-in VGG16 network in MxNet is not working

I would like to test the pretrained built-in VGG16 network in MxNet. The experiment is to feed the network an image from ImageNet and then see whether the result is correct.
However, the results are always wrong! The network can't be that bad, so I must be doing something wrong.
from mxnet.gluon.model_zoo.vision import vgg16
from mxnet.image import color_normalize
import mxnet as mx
import numpy as np
import cv2

path = 'http://data.mxnet.io/models/imagenet-11k/'
data_dir = 'F:/Temps/Models_tmp/'
k = 'synset.txt'
#gluon.utils.download(path+k, data_dir+k)
img_dir = 'F:/Temps/DataSets/ImageNet/'
img = cv2.imread(img_dir + 'cat.jpg')
img = mx.nd.array(img)
img,_ = mx.image.center_crop(img,(224,224))
img = img/255
img = color_normalize(img,mean=mx.nd.array([0.485, 0.456, 0.406]),std=mx.nd.array([0.229, 0.224, 0.225]))
img = mx.nd.transpose(img, axes=(2, 0, 1))
img = img.expand_dims(axis=0)
with open(data_dir + 'synset.txt', 'r') as f:
    labels = [l.rstrip() for l in f]
aVGG = vgg16(pretrained=True, root='F:/Temps/Models_tmp/')
features = aVGG.forward(img)
features = mx.ndarray.softmax(features)
features = features.asnumpy()
features = np.squeeze(features)
a = np.argsort(features)[::-1]
for i in a[0:5]:
    print('probability=%f, class=%s' %(features[i], labels[i]))
The outputs from color_normalize don't seem right, since the absolute values of some numbers are greater than one.
This is my figure of a cat, downloaded from ImageNet.
These are my outputs:
probability=0.218258, class=n01519563 cassowary
probability=0.172373, class=n01519873 emu, Dromaius novaehollandiae, Emu novaehollandiae
probability=0.128973, class=n01521399 rhea, Rhea americana
probability=0.105253, class=n01518878 ostrich, Struthio camelus
probability=0.051424, class=n01517565 ratite, ratite bird, flightless bird
Reading your code:
path = 'http://data.mxnet.io/models/imagenet-11k/'
I think you might be using the synset for ImageNet 11k (11,000 classes) rather than the 1k (1,000 classes) that the pretrained gluon model was trained on. That would explain the mismatch.
The correct synset is here: http://data.mxnet.io/models/imagenet/synset.txt
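As a hedged sketch (the local file name here is arbitrary), you can fetch the matching 1k synset and check its size:
import mxnet as mx

# Download the 1000-class synset that matches gluon's pretrained VGG16.
mx.gluon.utils.download('http://data.mxnet.io/models/imagenet/synset.txt',
                        'synset_1k.txt')
with open('synset_1k.txt') as f:
    labels = [l.rstrip() for l in f]
print(len(labels))  # should print 1000, matching the network's output size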

Function approximator and q-learning

I am trying to implement Q-learning with an action-value approximation function. I am using openai-gym and the "MountainCar-v0" environment to test my algorithm. My problem is that it does not converge or find the goal at all.
Basically the approximator works like the following: you feed in the 2 features, position and velocity, and one of the 3 actions in a one-hot encoding: 0 -> [1,0,0], 1 -> [0,1,0] and 2 -> [0,0,1]. The output is the action-value approximation Q_approx(s,a) for that specific action.
I know that usually the input is the state (2 features) and the output layer contains 1 output for each action. The big difference that I see is that I run the feed-forward pass 3 times (once for each action) and take the max, while in the standard implementation you run it once and take the max over the outputs.
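For comparison, here is a minimal sketch of that standard layout (my own illustration, not my actual code): the state's 2 features go in, and a single forward pass returns one Q-value per action:
from keras.models import Sequential
from keras.layers import Dense

# State (position, velocity) in; one Q-value per action out.
standard_model = Sequential()
standard_model.add(Dense(20, activation="relu", input_dim=2))
standard_model.add(Dense(10, activation="relu"))
standard_model.add(Dense(3))  # Q(s, a) for actions 0, 1 and 2
standard_model.compile(optimizer="rmsprop", loss="mse")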
Maybe my implementation is just completely wrong and I am thinking about this incorrectly. I'll paste the code here; it is a mess, but I am just experimenting a bit:
import gym
import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Activation

env = gym.make('MountainCar-v0')
# The mean reward over 20 episodes
mean_rewards = np.zeros(20)
# Feature numpy holder
features = np.zeros(5)
# Q_a value holder
qa_vals = np.zeros(3)
one_hot = {
    0: np.asarray([1,0,0]),
    1: np.asarray([0,1,0]),
    2: np.asarray([0,0,1])
}
model = Sequential()
model.add(Dense(20, activation="relu", input_dim=5))
model.add(Dense(10, activation="relu"))
model.add(Dense(1))
model.compile(optimizer='rmsprop',
              loss='mse',
              metrics=['accuracy'])
epsilon_greedy = 0.1
discount = 0.9
batch_size = 16
# Experience replay containing features and target
experience = np.ones((10*300, 5+1))
# Buffer state (missing in my original paste, which raised NameErrors)
fill_index = 0
filled_once = False

# Ring buffer
def add_exp(features, target, index):
    global filled_once
    if index % experience.shape[0] == 0:
        index = 0
        filled_once = True
    experience[index, 0:5] = features
    experience[index, 5] = target
    index += 1
    return index

for e in range(0, 100000):
    obs = env.reset()
    old_obs = None
    new_obs = obs
    rewards = 0
    loss = 0
    for step in range(0, 300):  # renamed from i to avoid shadowing below
        if old_obs is not None:
            # Find q_a max for s_(t+1)
            features[0:2] = new_obs
            for i, pa in enumerate([0, 1, 2]):
                features[2:5] = one_hot[pa]
                qa_vals[i] = model.predict(features.reshape(-1, 5))
            rewards += reward
            target = reward + discount*np.max(qa_vals)
            features[0:2] = old_obs
            features[2:5] = one_hot[a]
            fill_index = add_exp(features, target, fill_index)
            # Find new action
            if np.random.random() < epsilon_greedy:
                a = env.action_space.sample()
            else:
                a = np.argmax(qa_vals)
        else:
            a = env.action_space.sample()
        obs, reward, done, info = env.step(a)
        old_obs = new_obs
        new_obs = obs
        if done:
            break
        if filled_once:
            samples_ids = np.random.choice(experience.shape[0], batch_size)
            loss += model.train_on_batch(experience[samples_ids, 0:5],
                                         experience[samples_ids, 5].reshape(-1))[0]
    mean_rewards[e % 20] = rewards
    print("e = {} and loss = {}".format(e, loss))
    if e % 50 == 0:
        print("e = {} and mean = {}".format(e, mean_rewards.mean()))
Thanks in advance!
There shouldn't be much difference between feeding the actions as inputs to your network and having a separate output per action. It does make a huge difference if your states are images, for example, because conv nets work very well with images and there would be no obvious way of integrating the actions into the input.
Have you tried the CartPole balancing environment? It is a better environment for testing whether your model is working correctly.
MountainCar is pretty hard. It has no reward until you reach the top, which often doesn't happen at all. The model will only start learning something useful once you get to the top once. If you are never getting to the top, you should probably spend more time on exploration; in other words, take more random actions, a lot more.
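One common way to act on that advice (a hypothetical sketch, not from the answer above) is to anneal epsilon from fully random towards mostly greedy:
import numpy as np

# Anneal epsilon from 1.0 (pure exploration) down to 0.1 over n_anneal steps.
epsilon_start, epsilon_end, n_anneal = 1.0, 0.1, 50000

def epsilon_at(step):
    frac = min(step / n_anneal, 1.0)
    return epsilon_start + frac * (epsilon_end - epsilon_start)

rng = np.random.default_rng(0)

def choose_action(qa_vals, step):
    if rng.random() < epsilon_at(step):
        return int(rng.integers(3))  # explore: random action
    return int(np.argmax(qa_vals))   # exploit: greedy action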