How to solve Euler–Bernoulli beam equation in FiPy?

To understand how FiPy works, I want to solve the Euler–Bernoulli beam equation with fixed endpoints:
$$w''''(x) = q(x,t), \quad w(0) = w(1) = 0, \quad w'(0) = w'(1) = 0.$$
For simplicity, let q(x,t) = sin(x).
How can I define and solve it in FiPy? In particular, how do I specify the source term sin(x) in terms of the equation's only independent variable?
from fipy import CellVariable, Grid1D, DiffusionTerm, ExplicitDiffusionTerm
from fipy.tools import numerix
nx = 50
dx = 1/nx
mesh = Grid1D(nx=nx, dx=dx)
w = CellVariable(name="deformation",mesh=mesh,value=0.0)
valueLeft = 0.0
valueRight = 0.0
w.constrain(valueLeft, mesh.facesLeft)
w.constrain(valueRight, mesh.facesRight)
w.faceGrad.constrain(valueLeft, mesh.facesLeft)
w.faceGrad.constrain(valueRight, mesh.facesRight)
# does not work:
eqX = DiffusionTerm((1.0, 1.0)) == numerix.sin(x)
eqX.solve(var=w)

Here is what seems to be a working version of your problem:
from fipy import CellVariable, Grid1D, DiffusionTerm
from fipy.tools import numerix
from fipy.solvers.pysparse.linearPCGSolver import LinearPCGSolver
from fipy import Viewer
import numpy as np
L = 1.
nx = 500
dx = L / nx
mesh = Grid1D(nx=nx, dx=dx)
w = CellVariable(name="deformation",mesh=mesh,value=0.0)
valueLeft = 0.0
valueRight = 0.0
w.constrain(valueLeft, mesh.facesLeft)
w.constrain(valueRight, mesh.facesRight)
w.faceGrad.constrain(valueLeft, mesh.facesLeft)
w.faceGrad.constrain(valueRight, mesh.facesRight)
x = mesh.x
k_0 = 0
k_1 = -1
k_2 = 2 + np.cos(L) - 3 * np.sin(L)
k_3 = -1 + 2 * np.sin(L) - np.cos(L)
w_analytical = numerix.sin(x) + k_3 * x**3 + k_2 * x**2 + k_1 * x + k_0
w_analytical.name = 'analytical'
# this now works: the source term is written in terms of mesh.x
eqX = DiffusionTerm((1.0, 1.0)) == numerix.sin(x)
eqX.solve(var=w, solver=LinearPCGSolver(iterations=20000))
Viewer([w_analytical, w]).plot()
raw_input('stopped')
After running this, the FiPy solution is quite close to the analytical result.
There are three important changes from the OP's implementation:
1. Using mesh.x, which is the correct way to refer to the spatial variable in FiPy equations.
2. Specifying the solver and the number of iterations. The problem seems to be slow to converge, so it needed a lot of iterations. In my experience, fourth-order spatial equations often need good preconditioners to converge quickly. You might try the Trilinos solver package with FiPy, as it has a wider range of available preconditioners.
3. Using an explicit float for L to avoid integer maths in Python 2.7 (edit from a comment).
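For completeness, here is where those constants come from (a derivation I am adding; it is not in the original answer). Since $(\sin x)'''' = \sin x$, the general solution of $w'''' = \sin x$ is
$$w(x) = \sin x + k_3 x^3 + k_2 x^2 + k_1 x + k_0.$$
The boundary conditions then give, with $L = 1$:
$$w(0) = 0 \Rightarrow k_0 = 0, \qquad w'(0) = 0 \Rightarrow k_1 = -1,$$
$$w(1) = w'(1) = 0 \Rightarrow k_2 = 2 + \cos 1 - 3\sin 1, \quad k_3 = -1 + 2\sin 1 - \cos 1,$$
which are exactly the values used for w_analytical in the code.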

Related

Predicting a simple autoregressive model with a fully connected network

The question is at the end; you can jump straight to it. I just wanted to share my process in case someone wants to give me general advice.
I started learning how to use LSTM layers and tried to build a simple predictor to the following AR model:
import random

class AR_model:
    def __init__(self, length=100):
        self.time = 0
        self.first_value = 0
        self.a1 = 0.6
        self.a2 = -0.5
        self.a3 = -0.2
        self.Xt = self.first_value
        self.Xt_minus_1 = 0
        self.Xt_minus_2 = 0
        self.length = length

    def __iter__(self):
        return self

    def __next__(self):  # raises StopIteration once the series ends
        if self.time == self.length:
            raise StopIteration
        new_value = self.a1 * self.Xt + \
                    self.a2 * self.Xt_minus_1 + \
                    self.a3 * self.Xt_minus_2 + \
                    random.uniform(0, 0.1)
        self.Xt_minus_2 = self.Xt_minus_1
        self.Xt_minus_1 = self.Xt
        self.Xt = new_value
        self.time += 1
        return new_value
which basically means the following series:
$$X_t = a_1 X_{t-1} + a_2 X_{t-2} + a_3 X_{t-3} + U_t,$$
where $a_1 = 0.6$, $a_2 = -0.5$, $a_3 = -0.2$ and $U_t \sim \mathrm{Uniform}(0, 0.1)$ i.i.d.
using the following forward method:
def forward(self, input):
    # input: [Batch x seq_length x input_size]
    x, _ = self.lstm(input)
    # x: [Batch x seq_length x hidden_state]
    x = x[:, -1, :]
    # keep only the last time step -> x: [Batch x hidden_state]
    x = self.linear(x)
    # x: [Batch x 1]
    return x
The best result looks OK:
[picture of results, 91 steps]
with the following hyper-parameters:
signal_count = 50
signal_length = 200
hidden_state = 200
learning_rate = 0.1
I also tried it on sine and triangle waves:
[sin wave, 20 steps]
[tri wave, 75 steps]
The tri wave might have worked with a deeper network, but I didn't bother to try.
Question 1
It makes sense that for a simple AR model such as
$$X_t = a_1 X_{t-1} + a_2 X_{t-2} + a_3 X_{t-3} + U_t,$$
where $a_1 = 0.6$, $a_2 = -0.5$, $a_3 = -0.2$ and $U_t \sim \mathrm{Uniform}(0, 0.1)$ i.i.d., it should be possible to get a good prediction with a simple one-layer fully connected network whose three inputs are the last three values of the AR series.
But I just get terrible results, even when I remove the noise from the AR model. Am I wrong to think this should work?
I didn't post my code because I think it's a conceptual problem. If someone asks, I will post it.
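For illustration (my addition, not part of the original question): a minimal sketch of the kind of three-input, one-layer predictor described above, reusing the AR_model iterator defined earlier. Because the AR(3) map is linear, a single linear layer should recover the coefficients almost exactly; the hyperparameters here are illustrative guesses.
import torch
import torch.nn as nn

# Generate a long realization of the AR(3) process defined above.
series = list(AR_model(length=5000))

# Training pairs: inputs (X_{t-3}, X_{t-2}, X_{t-1}) -> target X_t.
X = torch.tensor([series[i:i + 3] for i in range(len(series) - 3)],
                 dtype=torch.float32)
y = torch.tensor(series[3:], dtype=torch.float32).unsqueeze(1)

model = nn.Linear(3, 1)  # one fully connected layer, no hidden units
opt = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

for epoch in range(2000):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

# The learned weights should approach [a3, a2, a1] = [-0.2, -0.5, 0.6],
# with the bias absorbing the mean (0.05) of the Uniform(0, 0.1) noise.
print(model.weight.data, model.bias.data)
If a sketch like this converges on the same data while a bigger network does not, the problem is likely in the training setup rather than the concept.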
Question 2
For the above AR model, what simple predictor would you recommend, not necessarily based on deep learning?
Asking friends, I was recommended Kalman filters and Markov-based approaches; I haven't really checked them out yet.
Thank you for reading

How to plot a 2 dimensional density / heatmap in plotnine?

I am trying to recreate the following graph in plotnine. (The site is asking me for more details, but I don't want to distract from the example; I think it's pretty obvious what I'm trying to do.) I have been given a function by a colleague and I'm not interested in rewriting it. I want to take sm and plot it with plotnine instead of matplotlib. I plot lots of dataframes with plotnine, but I'm not sure how to use it in this case. I have tried to figure it out on my own and keep getting lost. I hope that someone more experienced can spot something simple that I'm overlooking.
import matplotlib.pyplot as plt

def getSuccess(y, x):
    return (y * (-x)) * .5 + .5

steps = 100
stepSize = 1 / steps
sm = []
for y in range(steps * 2 + 1):
    sm.append([getSuccess((y - steps) * stepSize, (x - steps) * stepSize)
               for x in range(steps * 2 + 1)])

plt.imshow(sm)
plt.ylim(-1, 1)
plt.colorbar()
plt.yticks([0, steps, steps * 2], [str(y) for y in [-1.0, 0.0, 1.0]])
plt.xticks([0, steps, steps * 2], [str(x) for x in [-1.0, 0.0, 1.0]])
plt.show()
You could try geom_raster.
I have taken your synthetic data sm and converted it into a DataFrame, since plotnine needs one. Melting stacks the 201 columns one after another, so df.index % 201 recovers the row (y) index for each value.
import pandas as pd
import numpy as np
from plotnine import *

df = pd.DataFrame(sm).melt()
df.rename(columns={'variable': 'x', 'value': 'density'}, inplace=True)
df.insert(1, 'y', df.index % 201)

p = (ggplot(df, aes('x', 'y'))
     + geom_raster(aes(fill='density'), interpolate=True)
     + labs(x=None, y=None)
     + scale_x_continuous(expand=(0, 0), breaks=[0, 100, 200], labels=[-1, 0, 1])
     + scale_y_continuous(expand=(0, 0), breaks=[0, 100, 200], labels=[-1, 0, 1])
     + theme_matplotlib()
     + theme(
         text=element_text(family="Calibri", size=9),
         legend_title=element_blank(),
         axis_ticks=element_blank(),
         legend_key_height=29.6,
         legend_key_width=6,
     )
    )

p.save(filename='C:\\Users\\BRB\\geom_raster.png', height=10, width=10, units='cm', dpi=400)
The result is:
[resulting geom_raster heatmap]
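As a side note (my addition, not from the original answer): the same long-format DataFrame can be built directly with NumPy instead of melt, which makes the x/y mapping explicit.
import numpy as np
import pandas as pd

# Row index of sm is y, column index is x; mgrid reproduces both in row-major order.
ys, xs = np.mgrid[0:201, 0:201]
df = pd.DataFrame({'x': xs.ravel(),
                   'y': ys.ravel(),
                   'density': np.asarray(sm).ravel()})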

Move an object towards a target in PyBullet

I'm pretty new to PyBullet and physics engines in general. My first step is trying to get one object to move towards another.
import pybullet as p
import time
import pybullet_data

DURATION = 10000

physicsClient = p.connect(p.GUI)  # or p.DIRECT for non-graphical version
p.setAdditionalSearchPath(pybullet_data.getDataPath())  # optionally
print("data path: %s " % pybullet_data.getDataPath())
p.setGravity(0, 0, -10)
planeId = p.loadURDF("plane.urdf")
cubeStartPos = [0, 0, 1]
cubeStartOrientation = p.getQuaternionFromEuler([0, 0, 0])
boxId = p.loadURDF("r2d2.urdf", cubeStartPos, cubeStartOrientation)
gemId = p.loadURDF("duck_vhacd.urdf", [2, 2, 1], p.getQuaternionFromEuler([0, 0, 0]))

for i in range(DURATION):
    p.stepSimulation()
    time.sleep(1. / 240.)
    gemPos, gemOrn = p.getBasePositionAndOrientation(gemId)
    cubePos, cubeOrn = p.getBasePositionAndOrientation(boxId)
    oid, lk, frac, pos, norm = p.rayTest(cubePos, gemPos)[0]
    # rt = p.rayTest(cubePos, gemPos)
    # print("rayTest: %s" % rt[0][1])
    print("rayTest: Norm: ")
    print(norm)
    p.applyExternalForce(objectUniqueId=boxId, linkIndex=-1, forceObj=pos,
                         posObj=gemPos, flags=p.WORLD_FRAME)
    print(cubePos, cubeOrn)

p.disconnect()
But this just gets R2 to wiggle a bit. How do I do this?
First of all, if you are moving a robot, you should do something a little more complicated, by providing commands to the joints of the robot. Here is an example.
Now, assuming that you are moving something less complicated by applying an external force, the simplest thing you can do is multiply the difference between the two positions by a factor alpha; this product is your force.
For your example this would be:
import numpy as np
import pybullet as p
import time
import pybullet_data

DURATION = 10000
ALPHA = 300

physicsClient = p.connect(p.GUI)  # or p.DIRECT for non-graphical version
p.setAdditionalSearchPath(pybullet_data.getDataPath())  # optionally
print("data path: %s " % pybullet_data.getDataPath())
p.setGravity(0, 0, -10)
planeId = p.loadURDF("plane.urdf")
cubeStartPos = [0, 0, 1]
cubeStartOrientation = p.getQuaternionFromEuler([0, 0, 0])
boxId = p.loadURDF("r2d2.urdf", cubeStartPos, cubeStartOrientation)
gemId = p.loadURDF("duck_vhacd.urdf", [2, 2, 1],
                   p.getQuaternionFromEuler([0, 0, 0]))

for i in range(DURATION):
    p.stepSimulation()
    time.sleep(1. / 240.)
    gemPos, gemOrn = p.getBasePositionAndOrientation(gemId)
    boxPos, boxOrn = p.getBasePositionAndOrientation(boxId)
    # Proportional control: the force points from the box toward the gem.
    force = ALPHA * (np.array(gemPos) - np.array(boxPos))
    p.applyExternalForce(objectUniqueId=boxId, linkIndex=-1,
                         forceObj=force, posObj=boxPos, flags=p.WORLD_FRAME)
    print('Applied force vector = {}'.format(force))
    print('Applied force magnitude = {}'.format(np.linalg.norm(force)))

p.disconnect()
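A possible refinement (my suggestion, not part of the original answer): a purely proportional force tends to overshoot the target and oscillate around it. Subtracting a term proportional to the base velocity turns this into a PD controller; BETA below is a hypothetical damping gain.
BETA = 50  # illustrative damping gain

# Inside the simulation loop, replacing the force computation above:
linVel, angVel = p.getBaseVelocity(boxId)
force = (ALPHA * (np.array(gemPos) - np.array(boxPos))
         - BETA * np.array(linVel))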

Function approximator and q-learning

I am trying to implement Q-learning with an action-value approximation function. I am using openai-gym and the "MountainCar-v0" environment to test my algorithm. My problem is that it does not converge or find the goal at all.
Basically, the approximator works as follows: you feed in the 2 features, position and velocity, plus one of the 3 actions in a one-hot encoding: 0 -> [1,0,0], 1 -> [0,1,0] and 2 -> [0,0,1]. The output is the action-value approximation Q_approx(s,a) for that specific action.
I know that usually the input is just the state (2 features) and the output layer contains one output per action. The big difference I see is that I run the forward pass 3 times (once per action) and take the max, while in the standard implementation you run it once and take the max over the outputs.
Maybe my implementation is just completely wrong and I am thinking about it wrong. I'll paste the code here; it is a mess, but I am just experimenting a bit:
import gym
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

env = gym.make('MountainCar-v0')

# The mean reward over 20 episodes
mean_rewards = np.zeros(20)
# Feature holder: 2 state features + 3 one-hot action features
features = np.zeros(5)
# Q_a value holder
qa_vals = np.zeros(3)
one_hot = {
    0: np.asarray([1, 0, 0]),
    1: np.asarray([0, 1, 0]),
    2: np.asarray([0, 0, 1])
}

model = Sequential()
model.add(Dense(20, activation="relu", input_dim=5))
model.add(Dense(10, activation="relu"))
model.add(Dense(1))
model.compile(optimizer='rmsprop',
              loss='mse',
              metrics=['accuracy'])

epsilon_greedy = 0.1
discount = 0.9
batch_size = 16

# Experience replay containing features and target
experience = np.ones((10 * 300, 5 + 1))
fill_index = 0        # write position in the ring buffer
filled_once = False   # becomes True once the buffer has wrapped

# Ring buffer
def add_exp(features, target, index):
    global filled_once
    if index == experience.shape[0]:  # wrap around once the buffer is full
        index = 0
        filled_once = True
    experience[index, 0:5] = features
    experience[index, 5] = target
    index += 1
    return index

for e in range(100000):
    obs = env.reset()
    old_obs = None
    new_obs = obs
    rewards = 0
    loss = 0
    for t in range(300):
        if old_obs is not None:
            # Find max_a Q(s_{t+1}, a) by running the net once per action
            features[0:2] = new_obs
            for j, pa in enumerate([0, 1, 2]):
                features[2:5] = one_hot[pa]
                qa_vals[j] = model.predict(features.reshape(-1, 5))
            rewards += reward
            target = reward + discount * np.max(qa_vals)
            features[0:2] = old_obs
            features[2:5] = one_hot[a]
            fill_index = add_exp(features, target, fill_index)
            # Find new action
            if np.random.random() < epsilon_greedy:
                a = env.action_space.sample()
            else:
                a = np.argmax(qa_vals)
        else:
            a = env.action_space.sample()
        obs, reward, done, info = env.step(a)
        old_obs = new_obs
        new_obs = obs
        if done:
            break
    if filled_once:
        samples_ids = np.random.choice(experience.shape[0], batch_size)
        loss += model.train_on_batch(experience[samples_ids, 0:5],
                                     experience[samples_ids, 5].reshape(-1))[0]
    mean_rewards[e % 20] = rewards
    print("e = {} and loss = {}".format(e, loss))
    if e % 50 == 0:
        print("e = {} and mean = {}".format(e, mean_rewards.mean()))
Thanks in advance!
There shouldn't be much difference between feeding the actions in as network inputs and having one network output per action. It does make a huge difference if your states are images, for example, because conv nets work very well on images and there would be no obvious way of integrating the actions into the input.
Have you tried the CartPole balancing environment? It is a better testbed for checking that your model works correctly.
MountainCar is pretty hard. There is no reward until you reach the top, which often doesn't happen at all. The model will only start learning something useful once you get to the top once. If you are never getting to the top, you should probably increase your time doing exploration; in other words, take more random actions. A lot more.
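For comparison (a sketch I am adding, not from the original answer), the standard architecture the answer alludes to, state in and one Q-value per action out, would look like this in Keras; the layer sizes simply mirror the question's network and are otherwise arbitrary:
from keras.models import Sequential
from keras.layers import Dense

# State in (position, velocity), one Q-value out per action.
std_model = Sequential()
std_model.add(Dense(20, activation="relu", input_dim=2))
std_model.add(Dense(10, activation="relu"))
std_model.add(Dense(3))  # Q(s, a) for a = 0, 1, 2 in a single forward pass
std_model.compile(optimizer='rmsprop', loss='mse')

# One prediction scores all three actions at once:
# q = std_model.predict(state.reshape(1, 2))[0]
# a = int(np.argmax(q))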

FiPy Simple Convection

I am trying to understand how FiPy works by working through an example; in particular, I would like to solve the following simple convection equation with periodic boundary conditions:
$$\partial_t u + \partial_x u = 0$$
If the initial data is given by $u(x, 0) = F(x)$, then the analytical solution is $u(x, t) = F(x - t)$. I do get a solution, but it is not correct.
What am I missing? Is there a better resource for understanding FiPy than the documentation? It is very sparse...
Here is my attempt:
from fipy import *
import numpy as np

# Generate mesh
nx = 20
dx = 2 * np.pi / nx
mesh = PeriodicGrid1D(nx=nx, dx=dx)

# Generate solution object with initial discontinuity
phi = CellVariable(name="solution variable", mesh=mesh)
phiAnalytical = CellVariable(name="analytical value", mesh=mesh)
phi.setValue(1.)
phi.setValue(0., where=mesh.x > 1.)

# Define the pde
D = [[-1.]]
eq = TransientTerm() == ConvectionTerm(coeff=D)

# Set discretization so analytical solution is exactly one cell translation
dt = 0.01 * dx
steps = 2 * int(dx / dt)

# Set the analytical value at the end of simulation
phiAnalytical.setValue(np.roll(phi.value, 1))

for step in range(steps):
    eq.solve(var=phi, dt=dt)

print(phi.allclose(phiAnalytical, atol=1e-1))
As addressed on the FiPy mailing list, FiPy is not great at handling convection-only PDEs (no diffusion, i.e. purely hyperbolic), as it is missing higher-order convection schemes. It is better to use CLAWPACK for this class of problem.
FiPy does have one second-order scheme that might help with this problem, the VanLeerConvectionTerm; see an example.
If the VanLeerConvectionTerm is used in the above problem, it does do a better job of preserving the shock.
import numpy as np
import fipy

# Generate mesh
nx = 20
dx = 2 * np.pi / nx
mesh = fipy.PeriodicGrid1D(nx=nx, dx=dx)

# Generate solution object with initial discontinuity
phi = fipy.CellVariable(name="solution variable", mesh=mesh)
phiAnalytical = fipy.CellVariable(name="analytical value", mesh=mesh)
phi.setValue(1.)
phi.setValue(0., where=mesh.x > 1.)

# Define the pde
D = [[-1.]]
eq = fipy.TransientTerm() == fipy.VanLeerConvectionTerm(coeff=D)

# Set discretization so analytical solution is exactly one cell translation
dt = 0.01 * dx
steps = 2 * int(dx / dt)

# Set the analytical value at the end of simulation
phiAnalytical.setValue(np.roll(phi.value, 1))

viewer = fipy.Viewer(phi)
for step in range(steps):
    eq.solve(var=phi, dt=dt)
    viewer.plot()

raw_input('stopped')
print(phi.allclose(phiAnalytical, atol=1e-1))
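For context (an addition of mine, not from the original answer): a modified-equation analysis shows why low-order convection schemes smear shocks. To leading order, the first-order upwind discretization of $u_t + c\,u_x = 0$ actually solves
$$u_t + c\,u_x = \frac{c\,\Delta x}{2}\left(1 - \frac{c\,\Delta t}{\Delta x}\right)u_{xx} + O(\Delta x^2),$$
that is, it carries an artificial diffusion proportional to $\Delta x$. Second-order schemes such as van Leer's limit this numerical diffusion while avoiding the oscillations of unlimited second-order schemes.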