Problems with quad when using lambdify - integration

I'm trying to solve these two integrals. I want to use a numerical approach because C_i will eventually become more complicated, and I want one method that works for all cases. Currently C_i is just a constant, and quad is not able to solve the integral. I'm assuming this is because it is a Heaviside function and quad is having trouble finding the bounds a and b. Please correct me if I'm approaching this incorrectly.
Equation 33
In [1]: import numpy as np
...: import scipy as sp
...: import sympy as smp
...: from sympy import DiracDelta
...: from sympy import Heaviside
In [2]: C_i = smp.Function('C_i')
In [3]: t, t0, x, v = smp.symbols('t, t0, x, v', positive=True)
In [4]: tot_l = 10
In [5]: C_fm = (1/tot_l)*v*smp.Integral(C_i(t0), (t0, (-x/v)+t, t))
In [6]: C_fm.doit()
Out[6]:
0.1*v*Integral(C_i(t0), (t0, t - x/v, t))
In [7]: C_fm.doit().simplify()
Out[7]:
0.1*v*Integral(C_i(t0), (t0, t - x/v, t))
In [8]: C_fms = C_fm.doit().simplify()
In [9]: t_arr = np.arange(0,1000,1)
In [10]: f_mean = smp.lambdify((x, v, t), C_fms, ['scipy', {'C_i': lambda e: 0.8}])
In [11]: try2 = f_mean(10, 0.1, t_arr)
Traceback (most recent call last):
File "/var/folders/rd/wzfh_5h110l121rmlxn61v440000gn/T/ipykernel_3164/3786931540.py", line 1, in <module>
try2 = f_mean(10, 0.1, t_arr)
File "<lambdifygenerated-1>", line 2, in _lambdifygenerated
return 0.1*v*quad(lambda t0: C_i(t0), t - x/v, t)[0]
File "/opt/anaconda3/lib/python3.9/site-packages/scipy/integrate/quadpack.py", line 348, in quad
flip, a, b = b < a, min(a, b), max(a, b)
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
Equation 34
In [12]: C_i = smp.Function('C_i')
In [13]: t, tao, x, v = smp.symbols('t, tao, x, v', positive=True)
In [14]: I2 = v*smp.Integral((C_i(t-tao))**2, (tao, 0, t))
In [15]: I2.doit()
Out[15]:
v*Integral(C_i(t - tao)**2, (tao, 0, t))
In [16]: I2.doit().simplify()
Out[16]:
v*Integral(C_i(t - tao)**2, (tao, 0, t))
In [17]: I2_s = I2.doit().simplify()
In [18]: tao_arr = np.arange(0,1000,1)
In [19]: I2_sf = smp.lambdify((v, tao), I2_s, ['scipy', {'C_i': lambda e: 0.8}])
In [20]: try2 = I2_sf(0.1, tao_arr)
Traceback (most recent call last):
File "/var/folders/rd/wzfh_5h110l121rmlxn61v440000gn/T/ipykernel_3164/4262383171.py", line 1, in <module>
try2 = I2_sf(0.1, tao_arr)
File "<lambdifygenerated-2>", line 2, in _lambdifygenerated
return v*quad(lambda tao: C_i(t - tao)**2, 0, t)[0]
File "/opt/anaconda3/lib/python3.9/site-packages/scipy/integrate/quadpack.py", line 351, in quad
retval = _quad(func, a, b, args, full_output, epsabs, epsrel, limit,
File "/opt/anaconda3/lib/python3.9/site-packages/scipy/integrate/quadpack.py", line 463, in _quad
return _quadpack._qagse(func,a,b,args,full_output,epsabs,epsrel,limit)
File "/opt/anaconda3/lib/python3.9/site-packages/sympy/core/expr.py", line 345, in __float__
raise TypeError("Cannot convert expression to float")
TypeError: Cannot convert expression to float

So you are passing an unevaluated Integral to lambdify, which in turn translates it into a call to scipy.integrate.quad.
It looks like the integrals can't be evaluated even with the doit and simplify calls. Have you actually looked at C_fms and I2_s? That's one of the first things I'd do when running code like this!
I've never used this exact approach. I have seen people lambdify just the integrand expression and then use that in quad directly; see the sketch at the end of this answer.
quad has specific requirements (check the docs!): the objective function must return a single number, and the bounds must also be numbers.
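For example, a call that satisfies both requirements (the constant integrand here is made up, purely for illustration):
from scipy.integrate import quad
# scalar bounds and a scalar-valued integrand: this is what quad can handle
result, abserr = quad(lambda t0: 0.8, 0.0, 5.0)
print(result)  # 4.0, i.e. the constant 0.8 over a width of 5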
In the first error, you are passing the array t_arr as the t bound, and you get the usual ambiguity error when quad checks which bound is bigger; that's the b < a test in the traceback. quad cannot use arrays as bounds.
I'm not sure why the second case avoids this problem; the bounds must be coming from somewhere else. But the error comes when quad calls the objective function and expects a float back. Instead the function returns a sympy expression, which sympy can't convert to a float. My guess is that some variable in the expression is still a sympy Symbol.
When diagnosing lambdify problems, it's a good idea to look at the generated code. One way is to ask for help on the function: help(I2_sf). But for that you need to be able to read and understand Python, including any numpy and scipy functions, which isn't always easy.
Have you tried sympy's own numeric integrator? Trying to combine sympy and numpy/scipy often causes problems like these.
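For example, here is a sketch of the lambdify-the-integrand approach for Equation 33: drive quad yourself, one scalar t at a time. This reuses the symbols from the question, with the constant 0.8 standing in for the eventual C_i:
import numpy as np
from scipy.integrate import quad
# lambdify just the integrand, not the whole unevaluated Integral
integrand = smp.lambdify(t0, C_i(t0), ['numpy', {'C_i': lambda e: 0.8}])
def f_mean(x, v, t):
    # scalar bounds and a scalar return value, as quad requires
    val, _ = quad(integrand, t - x/v, t)
    return (1/tot_l)*v*val
t_arr = np.arange(0, 1000, 1)
try2 = np.array([f_mean(10, 0.1, t) for t in t_arr])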

Related

too many values to unpack (expected 2) lda

I received the error "too many values to unpack (expected 2)" when running the code below. Can anyone help me? I have added more details.
import gensim
import gensim.corpora as corpora
dictionary = corpora.Dictionary(doc_clean)
doc_term_matrix = [dictionary.doc2bow(doc) for doc in doc_clean]
Lda = gensim.models.ldamodel.LdaModel
ldamodel = Lda(doc_term_matrix, num_topics=3, id2word = dictionary, passes=50, per_word_topics = True, eval_every = 1)
print(ldamodel.print_topics(num_topics=3, num_words=20))
for i in range(0, 46):
    for index, score in sorted(ldamodel[doc_term_matrix[i]], key=lambda tup: -1*tup[1]):
        print("subject", i)
        print("\n")
        print("Score: {}\t \nTopic: {}".format(score, ldamodel.print_topic(index, 6)))
Let's focus on the loop, since this is where the error is being raised, and take it one iteration at a time.
>>> import numpy as np # just so we can use np.shape()
>>> i = 0 # value in first loop
>>> x = sorted( ldamodel[doc_term_matrix[i]], key=lambda tup: -1*tup[1] )
>>> np.shape(x)
(3, 3, 2)
>>> for index, score in x:
... pass
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: too many values to unpack (expected 2)
Here is where your error is coming from. You are expecting each element to unpack into 2 values, but the result is a nested structure of shape (3, 3, 2), with no simple, inferable way to unpack it into two names. I don't personally have enough experience with this subject material to infer what you mean to be doing; I can only show you where your problem is coming from. Hope this helps!
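If I read the gensim API correctly, that (3, 3, 2) shape comes from creating the model with per_word_topics=True: indexing the model then returns a 3-tuple (topic distribution, word-topic assignments, per-word phi values) instead of a plain list of (topic, probability) pairs. A sketch of the loop with that tuple unpacked first, under that assumption:
for i in range(0, 46):
    # take the three lists apart before iterating
    doc_topics, word_topics, word_phis = ldamodel[doc_term_matrix[i]]
    for index, score in sorted(doc_topics, key=lambda tup: -tup[1]):
        print("subject", i)
        print("Score: {}\t \nTopic: {}".format(score, ldamodel.print_topic(index, 6)))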

When calling the convolution class, it raises an error

The call gdd.forward(x) raises an error, but why?
This code uses im2col to implement the convolution layer.
Traceback (most recent call last):
File "E:/PycharmProjects/untitled2/kk.py", line 61, in <module>
gdd.forward(x)
File "E:/PycharmProjects/untitled2/kk.py", line 46, in forward
FN,C,FH,FW=self.W.shape
ValueError: not enough values to unpack (expected 4, got 2)
import numpy as np

class Convolution:
    # convolution kernel size
    def __init__(self, W, b, stride=1, pad=0):
        self.W = W
        self.b = b
        self.stride = stride
        self.pad = pad
    def forward(self, x):
        FN, C, FH, FW = self.W.shape
        N, C, H, W = x.shape
        out_h = int(1 + (H + 2*self.pad - FH) / self.stride)
        out_w = int(1 + (W + 2*self.pad - FW) / self.stride)

e = np.array([[2,0,1],[0,1,2],[1,0,2]])
x = np.array([[1,2,3,0],[0,1,2,3],[3,0,1,2],[2,3,0,1]])
gdd = Convolution(e, 3, 1, 0)
gdd.forward(x)
"not enough values to unpack" means that the right-hand side yields only 2 values while you are expecting 4:
FN,C,FH,FW=self.W.shape
Your W is a 2-D array, so its shape has only two entries. Just get rid of two of the targets and you are good to go :)
BTW, I'm assuming you speak Chinese? I speak Chinese too, so if anything is unclear you can ask in Chinese.
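If you would rather keep forward() as written, another option (a sketch, assuming the layer is meant for batched 4-D data) is to reshape the kernel and the input into the (batch, channel, height, width) layout that the two unpacking lines expect:
import numpy as np
# one 3x3 kernel -> shape (FN=1, C=1, FH=3, FW=3)
e = np.array([[2, 0, 1], [0, 1, 2], [1, 0, 2]]).reshape(1, 1, 3, 3)
# one 4x4 input -> shape (N=1, C=1, H=4, W=4)
x = np.array([[1, 2, 3, 0], [0, 1, 2, 3], [3, 0, 1, 2], [2, 3, 0, 1]]).reshape(1, 1, 4, 4)
gdd = Convolution(e, 3, 1, 0)
gdd.forward(x)  # both shape unpackings now succeed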

TypeError: cannot unpack non-iterable AxesSubplot object [duplicate]

I wrote this code and I get an error from my subplot call. I don't know what is wrong in my code. Can you help me?
import pywt
import scipy.io.wavfile as wavfile
import matplotlib.pyplot as plt
rate,signal = wavfile.read('a0025.wav')
time = [x /rate for x in range(0,len(signal))]
tree = pywt.wavedec(data=signal[:1000], wavelet='db2', level=4, mode='symmetric')
print(len(tree))
newTree = [tree[0]*0, tree[1]*0, tree[2]*0, tree[3]*0, tree[4]]
recSignal = pywt.waverec(newTree,'db2')
fig, ax = plt.subplot(2, 1)
ax[0].plot(time[:1000], signal[:1000])
ax[0].set_xlabel('Czas [s]')
ax[0].set_ylabel('Amplituda')
ax[1].plot(time[:1000], recSignal[:1000])
ax[1].set_xlabel('Czas [s]')
ax[1].set_ylabel('Amplituda')
plt.show()
The error:
raise ValueError('Illegal argument(s) to subplot: %s' % (args,))
ValueError: Illegal argument(s) to subplot: (2, 1)
As the error clearly states, you passed an illegal argument to pyplot.subplot(). If you look at the documentation for that function, you'll see that it takes 3 arguments (which can be condensed into one): ax = plt.subplot(2, 1, 1) or ax = plt.subplot(211).
However, the function that you are looking for is plt.subplots() (note the s at the end), which generates both a figure and an array of subplots:
f, (ax1, ax2) = plt.subplots(1, 2, sharey=True)
ax1.plot(x, y)
ax1.set_title('Sharing Y axis')
ax2.scatter(x, y)
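Applied to the code in the question, the minimal fix is:
fig, ax = plt.subplots(2, 1)  # note the trailing s
ax[0].plot(time[:1000], signal[:1000])
ax[1].plot(time[:1000], recSignal[:1000])
plt.show()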
It seems that this bug is also in the documentation; see https://matplotlib.org/3.2.1/api/_as_gen/matplotlib.pyplot.subplots.html. They forgot the "s" in the third example, although the first two examples are correct. As it appears in the docs:
# using tuple unpacking for multiple Axes
fig, (ax1, ax2) = plt.subplot(1, 2)
fig, ((ax1, ax2), (ax3, ax4)) = plt.subplot(2, 2)
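With the trailing s restored, both lines work as the documentation intends:
fig, (ax1, ax2) = plt.subplots(1, 2)
fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2)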

Convert numbers from mathematica csv export to numpy complex array

I have exported data from Mathematica to a CSV file. The file structure looks as follows:
"x","y","Ex","Ey"
0.,0.,0.+0.*I,-3.0434726787506006*^-12+3.4234894344189825*^-12*I
0.,0.,0.+0.*I,-5.0434726787506006*^-12+10.4234894344189825*^-13*I
...
I'm reading in the data with pandas, but I get an error
import csv
import pandas as pd
import numpy as np
df=pd.read_csv('filename.csv')
df.columns=['x', 'y', 'Ex','Ey']
df['Ey'] = df['Ey'].str.replace('*^','E')
df['Ey'] = df['Ey'].str.replace('I','1j').apply(lambda x: np.complex(x))
Edit: I'm getting the following error in the second last line of my code:
Traceback (most recent call last):
File "plot.py", line 6, in <module>
df['Ey'] = df['Ey'].str.replace('*^','E')
File "/home/.../.local/lib/python2.7/site-packages/pandas/core/strings.py", line 1579, in replace
flags=flags)
File "/home/.../.local/lib/python2.7/site-packages/pandas/core/strings.py", line 424, in str_replace
regex = re.compile(pat, flags=flags)
File "/usr/lib/python2.7/re.py", line 194, in compile
return _compile(pattern, flags)
File "/usr/lib/python2.7/re.py", line 251, in _compile
raise error, v # invalid expression
sre_constants.error: nothing to repeat
When I write instead
df['Ey'] = df['Ey'].str.replace('*','E')
or
df['Ey'] = df['Ey'].str.replace('^','E')
I'm not getting an error. It seems like one can only give a single character to be replaced?
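For reference, str.replace treats the pattern as a regular expression by default, and '*^' is not a valid regex ('*' has nothing to repeat, which is exactly what the sre_constants error says). Escaping the pattern makes it literal; a sketch:
import re
# escape the pattern so '*' and '^' are matched literally
df['Ey'] = df['Ey'].str.replace(re.escape('*^'), 'E')
# newer pandas versions also accept: df['Ey'].str.replace('*^', 'E', regex=False)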
Why beat yourself up messing with ASCII-encoded floats?
Here is how to exchange complex arrays between Python and Mathematica using raw binary files.
In Mathematica:
cdat = RandomComplex[{0, 1 + I}, 5]
{0.0142816 + 0.0835513 I, 0.434109 + 0.977644 I,
0.579678 + 0.337286 I, 0.426271 + 0.166166 I, 0.363249 + 0.0867334 I}
f = OpenWrite["test", BinaryFormat -> True]
BinaryWrite[f, cdat, "Complex64"]
Close[f]
or:
Export["test", cdat, "Binary", "DataFormat" -> "Complex64"]
In Python:
import numpy as np
x = np.fromfile('test', np.complex64)
print(x)
[ 0.01428160+0.0835513j 0.43410850+0.97764391j 0.57967812+0.3372865j
0.42627081+0.16616575j 0.36324903+0.08673338j]
Going the other way:
y=np.array([[1+2j],[3+4j]],np.complex64)
y.tofile('test')
f = OpenRead["test", BinaryFormat -> True]
BinaryReadList[f, "Complex64"]
Close[f]
Note that this will be several orders of magnitude faster than exchanging data via CSV.

Fitting logistic regression with PyMC: ZeroProbability error

To teach myself PyMC I am trying to define a simple logistic regression, but I get a ZeroProbability error and do not understand exactly why this happens or how to avoid it.
Here is my code:
import pymc
import numpy as np
x = np.array([85, 95, 70, 65, 70, 90, 75, 85, 80, 85])
y = np.array([1., 1., 0., 0., 0., 1., 1., 0., 0., 1.])
w0 = pymc.Normal('w0', 0, 0.000001) # uninformative prior (any real number)
w1 = pymc.Normal('w1', 0, 0.000001) # uninformative prior (any real number)
@pymc.deterministic
def logistic(w0=w0, w1=w1, x=x):
    return 1.0 / (1. + np.exp(-(w0 + w1 * x)))
observed = pymc.Bernoulli('observed', logistic, value=y, observed=True)
And here is the trace back with the error message:
Traceback (most recent call last):
File "/Library/Python/2.7/site-packages/IPython/core/interactiveshell.py", line 2883, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-2-43ed68985dd1>", line 24, in <module>
observed = pymc.Bernoulli('observed', logistic, value=y, observed=True)
File "/usr/local/lib/python2.7/site-packages/pymc/distributions.py", line 318, in __init__
**arg_dict_out)
File "/usr/local/lib/python2.7/site-packages/pymc/PyMCObjects.py", line 772, in __init__
if not isinstance(self.logp, float):
File "/usr/local/lib/python2.7/site-packages/pymc/PyMCObjects.py", line 929, in get_logp
raise ZeroProbability(self.errmsg)
ZeroProbability: Stochastic observed's value is outside its support,
or it forbids its parents' current values.
I suspect np.exp is causing the trouble, since it returns inf when the linear combination becomes too large.
I know there are other ways to define a logistic regression using PyMC (here is one), but I am interested in knowing why this approach does not work, and how I can define the regression using the Bernoulli object instead of using bernoulli_like.
When you create your normal stochastic with pymc.Normal('w0', 0, 0.000001), PyMC2 initializes its value with a random draw from the prior distribution. Since your prior is so diffuse, this can be a value so unlikely that the posterior is effectively zero. To fix this, just request a reasonable initial value for your Normal:
w0 = pymc.Normal('w0', 0, 0.000001, value=0)
w1 = pymc.Normal('w1', 0, 0.000001, value=0)
Here is a notebook with a few more details.
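For completeness, a minimal sketch of running the model once the initial values are set (standard PyMC2 calls; the iteration counts are arbitrary):
mcmc = pymc.MCMC([w0, w1, logistic, observed])
mcmc.sample(iter=10000, burn=5000)
print(mcmc.trace('w0')[:].mean(), mcmc.trace('w1')[:].mean())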
You have to put some sort of bound on the probability returned by the logistic function.
Maybe something like
@pymc.deterministic
def logistic(w0=w0, w1=w1, x=x):
    tol = 1e-9
    res = 1.0 / (1. + np.exp(-(w0 + w1 * x)))
    return np.maximum(np.minimum(res, 1 - tol), tol)
I think you forgot the negative inside the exp() function, too.
@hahdawg's answer is good, but here's something else to consider.
For your uninformative priors on w0 and w1 I would first do an eyeball fit and then use uniforms with limits.
Obviously your w1 is going to be around 1/15 = .07, so a range like .04 to 1.2 might do it.
w0 is going to be in the range of -80/15 = -5.3, so something like -7 to -3 could do it.
I'm just saying this because exp can easily go bananas, so you have to be careful what you feed it.
If your inverse logit function comes out with a value too close to 0 or 1, logistic regression is guaranteed to break.
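A sketch of those priors, plugging in the eyeballed ranges above:
w0 = pymc.Uniform('w0', -7, -3)
w1 = pymc.Uniform('w1', 0.04, 1.2)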
Out of curiosity, are you using a thin argument in your call to sample? There was a bug related to that, and it may be the culprit here.
Besides, thinning is not worthwhile in any case.