Chrome extension randomly crashing

My Chrome extension randomly crashes for a small portion of users, and I can't replicate it or figure out what's wrong. Users get a notification "[Extension name] has crashed. Click this balloon to reload the extension.", which is the same behavior as when I click End Process on the extension from Chrome task manager, so it seems like the extension is somehow terminated. What are some possible reasons this could be happening? What steps can I take to investigate further? Anecdotally the crash seems to happen often when loading an html file bundled within the extension, but not always.
I got a crash dump from a user but didn't get very far interpreting it. Running it through minidump_stackwalk produced the output below. Is there any way to further decode this log?
Operating system: Mac OS X 10.12.6 16G2136
CPU: amd64
family 6 model 158 stepping 9
8 CPUs
GPU: UNKNOWN
Crash reason: EXC_BREAKPOINT / EXC_I386_BPT
Crash address: 0x10ec219fd
Process uptime: 147598 seconds
Thread 0 (crashed)
0 Google Chrome Framework + 0x7bfa9fd
rax = 0x0000000000000001 rdx = 0x0000000000000004
rcx = 0x00007f8a92fc0b04 rbx = 0x0000000000000000
rsi = 0x0000000000000002 rdi = 0x00007f8a8a80a400
rbp = 0x00007fff5af52f20 rsp = 0x00007fff5af52f20
r8 = 0x000000010ec219d0 r9 = 0x0000000000000000
r10 = 0x0000000000000001 r11 = 0x0000000000000006
r12 = 0x0000000000000000 r13 = 0x00007f8a92f5eb88
r14 = 0x000000010ec219d0 r15 = 0x00007f8a8a80a400
rip = 0x000000010ec219fd
Found by: given as instruction pointer in context
1 Google Chrome Framework + 0x7bfa931
rbp = 0x00007fff5af52f70 rsp = 0x00007fff5af52f30
rip = 0x000000010ec21931
Found by: previous frame's frame pointer
2 Google Chrome Framework + 0x7bf937b
rbp = 0x00007fff5af530e0 rsp = 0x00007fff5af52f80
rip = 0x000000010ec2037b
Found by: previous frame's frame pointer
3 Google Chrome Framework + 0x2fc49cf
rbp = 0x00007fff5af53190 rsp = 0x00007fff5af530f0
rip = 0x0000000109feb9cf
Found by: previous frame's frame pointer
4 Google Chrome Framework + 0x2fd4b9b
rbp = 0x00007fff5af53260 rsp = 0x00007fff5af531a0
rip = 0x0000000109ffbb9b
Found by: previous frame's frame pointer
5 Google Chrome Framework + 0x2fd493b
rbp = 0x00007fff5af532b0 rsp = 0x00007fff5af53270
rip = 0x0000000109ffb93b
Found by: previous frame's frame pointer
6 Google Chrome Framework + 0x302ec21
rbp = 0x00007fff5af532d0 rsp = 0x00007fff5af532c0
rip = 0x000000010a055c21
Found by: previous frame's frame pointer
7 Google Chrome Framework + 0x302942a
rbp = 0x00007fff5af532e0 rsp = 0x00007fff5af532e0
rip = 0x000000010a05042a
Found by: previous frame's frame pointer
8 Google Chrome Framework + 0x302e4ef
rbp = 0x00007fff5af53320 rsp = 0x00007fff5af532f0
rip = 0x000000010a0554ef
Found by: previous frame's frame pointer
9 CoreFoundation + 0xa7a31
rbp = 0x00007fff5af53330 rsp = 0x00007fff5af53330
rip = 0x00007fffc47e6a31
Found by: previous frame's frame pointer
10 CoreFoundation + 0x8892d
rbp = 0x00007fff5af53390 rsp = 0x00007fff5af53340
rip = 0x00007fffc47c792d
Found by: previous frame's frame pointer
11 CoreFoundation + 0x87e26
rbp = 0x00007fff5af54080 rsp = 0x00007fff5af533a0
rip = 0x00007fffc47c6e26
Found by: previous frame's frame pointer
12 CoreFoundation + 0x87824
rbp = 0x00007fff5af54110 rsp = 0x00007fff5af54090
rip = 0x00007fffc47c6824
Found by: previous frame's frame pointer
13 Foundation + 0x22ac2
rbp = 0x00007fff5af54150 rsp = 0x00007fff5af54120
rip = 0x00007fffc61dcac2
Found by: previous frame's frame pointer
14 Google Chrome Framework + 0x302f141
rbp = 0x00007fff5af54190 rsp = 0x00007fff5af54160
rip = 0x000000010a056141
Found by: previous frame's frame pointer
15 Google Chrome Framework + 0x302dea2
rbp = 0x00007fff5af541d0 rsp = 0x00007fff5af541a0
rip = 0x000000010a054ea2
Found by: previous frame's frame pointer
16 Google Chrome Framework + 0x2fd5123
rbp = 0x00007fff5af54200 rsp = 0x00007fff5af541e0
rip = 0x0000000109ffc123
Found by: previous frame's frame pointer
17 Google Chrome Framework + 0x2fab3b3
rbp = 0x00007fff5af54290 rsp = 0x00007fff5af54210
rip = 0x0000000109fd23b3
Found by: previous frame's frame pointer
18 Google Chrome Framework + 0x7c45bcf
rbp = 0x00007fff5af54340 rsp = 0x00007fff5af542a0
rip = 0x000000010ec6cbcf
Found by: previous frame's frame pointer
19 Google Chrome Framework + 0x2a3cb59
rbp = 0x00007fff5af543b0 rsp = 0x00007fff5af54350
rip = 0x0000000109a63b59
Found by: previous frame's frame pointer
20 Google Chrome Framework + 0x53d7e48
rbp = 0x00007fff5af54700 rsp = 0x00007fff5af543c0
rip = 0x000000010c3fee48
Found by: previous frame's frame pointer
21 Google Chrome Framework + 0x2a3c1a4
rbp = 0x00007fff5af54790 rsp = 0x00007fff5af54710
rip = 0x0000000109a631a4
Found by: previous frame's frame pointer
22 Google Chrome Framework + 0x342b
rbp = 0x00007fff5af54890 rsp = 0x00007fff5af547a0
rip = 0x000000010702a42b
Found by: previous frame's frame pointer
23 Google Chrome Helper (Renderer) + 0x182f
rbp = 0x00007fff5af548e0 rsp = 0x00007fff5af548a0
rip = 0x0000000104cac82f
Found by: previous frame's frame pointer
24 libdyld.dylib + 0x5235
rbp = 0x00007fff5af548f8 rsp = 0x00007fff5af548f0
rip = 0x00007fffd9fcc235
Found by: previous frame's frame pointer
25 libdyld.dylib + 0x5235
rbp = 0x00007fff5af548f8 rsp = 0x00007fff5af548f8
rip = 0x00007fffd9fcc235
Found by: stack scanning
[UPDATE] We can now reproduce this crash. It happens when we call chrome.tabs.update(tabId, {url: chrome.runtime.getURL("my_extension_page.html")}) several times in quick succession. The page crashes with error RESULT_CODE_INVALID_CMDLINE_URL. In contrast, if I set this extension page as the newtab page in the manifest and quickly open many tabs, the extension doesn't crash.
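Until the root cause is fixed, one mitigation sketch (the helper names requestTabUpdate and flushTabUpdates are our own inventions, not Chrome APIs) is to coalesce the rapid chrome.tabs.update() calls so each tab receives at most one navigation per flush interval, avoiding the back-to-back loads of the bundled page that appear to trigger the crash:

```javascript
// Sketch of a rate-limiting wrapper around chrome.tabs.update().
// We record only the most recent URL requested per tab and issue a
// single update per flush.
const pendingUpdates = new Map(); // tabId -> latest requested URL

function requestTabUpdate(tabId, url) {
  // A newer request supersedes any earlier one for the same tab.
  pendingUpdates.set(tabId, url);
}

function flushTabUpdates(updateFn) {
  // updateFn is injected so the logic is testable; in the extension it
  // would be (tabId, props) => chrome.tabs.update(tabId, props).
  for (const [tabId, url] of pendingUpdates) {
    updateFn(tabId, { url });
  }
  pendingUpdates.clear();
}

// In the extension's background page, e.g.:
// setInterval(() => flushTabUpdates((id, props) => chrome.tabs.update(id, props)), 250);
```

Whether a flush interval like 250 ms is actually enough to avoid RESULT_CODE_INVALID_CMDLINE_URL is something we would still need to verify.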


Plotting function with respect to a keyword argument leads to error in Julia

I have a function which involves bunch of integrals and complicated computations, like the following:
using HCubature
function func(v, x0, y0; rad=1.)
    L = hcubature(r->r[1]*v(x0+r[1]*cos(r[2]), y0+r[1]*sin(r[2])), [0., π/2], [rad, π])[1]
    R = hcubature(r->r[1]*v(x0+r[1]*cos(r[2]), y0+r[1]*sin(r[2])), [0., 0], [rad, π/2])[1]
    return L, R
end
The argument v is a function itself.
When I try to plot the function with respect to the keyword argument rad, I obtain error messages as follows:
x0_, y0_ = 0, 0
rad_ = 0.:0.1:9.
func_array_L = [func(v, x0_, y0_; rad = radius)[1] for radius in rad_]
func_array_R = [func(v, x0_, y0_; rad = radius)[2] for radius in rad_]
plot(rad_, func_array_L)
plot!(rad_, func_array_R)
The output begins with Internal error: encountered unexpected error in runtime: followed by a long list of directories, and then ends with the following:
MethodError: no method matching string(::Expr)
The applicable method may be too new: running in world age 3820, while current world is 26290.
Closest candidates are:
string(::Any...) at strings/io.jl:168 (method too new to be called from this world context.)
string(!Matched::String) at strings/substring.jl:152 (method too new to be called from this world context.)
string(!Matched::SubString{String}) at strings/substring.jl:153 (method too new to be called from this world context.)
...
I also tried other approaches, like declaring another function with rad as the only argument, but none of them worked. How can I fix this problem?
Indeed the error message is strange, but the reason is simple: you have not defined the function v. Define it first, and then everything should work as expected.
Additionally, note that you have a wrong case in using HCubature (the u should be lowercase). Also, for plotting to work you should first load a plotting package, e.g. with using Plots.
EDITS
A basic code that reproduces your problem is:
julia> using HCubature
julia> function func(v, x0, y0; rad=1.)
       L = hcubature(r->r[1]*v(x0+r[1]*cos(r[2]), y0+r[1]*sin(r[2])), [0., π/2], [rad, π])[1]
       R = hcubature(r->r[1]*v(x0+r[1]*cos(r[2]), y0+r[1]*sin(r[2])), [0., 0], [rad, π/2])[1]
       return L, R
       end
func (generic function with 1 method)
julia> v = (x,y) -> x
#27 (generic function with 1 method)
julia> x0_, y0_ = 0, 0
(0, 0)
julia> rad_ = 0.:0.1:9.
0.0:0.1:9.0
julia> func_array_L = [func(v, x0_, y0_; rad = radius)[1] for radius in rad_]
Internal error: encountered unexpected error in runtime:
MethodError(f=typeof(Base.string)(), args=(Expr(:<:, :t, :r),), world=0x0000000000000eec)
This seems to be a bug. I reported it here.
A workaround
Now, the way to solve it is to make v type stable. Here are three ways to do it.
Option 1: define it as const:
const v1 = v
and use a comprehension with v1 passed instead of v.
Option 2: wrap it in let block:
func_array_L = let v=v
    [func(v, x0_, y0_; rad = radius)[1] for radius in rad_]
end
Option 3: define a function with a name using v:
v2(x,y) = v(x,y)
and use a comprehension with v2 passed instead of v.
Alternatively, you could make x0_ or y0_ constant (fixing one of them is enough) to make it all work. E.g. this
func_array_L = [func(v, 1, y0_; rad = radius) for radius in rad_]
works as expected.
Additional notes
You hit a similar problem if you use map instead of a comprehension and pass map an anonymous function:
map(radius -> func(v, x0_, y0_; rad = radius)[1], rad_)
and a normal, named function produces the same error as well:
v3(radius) = func(v, x0_, y0_; rad = radius)[1]
map(v3, rad_)
but it starts to work if you create an internal function that gets introduced into a method table:
v3(radius) = (tmp(x...) = v(x...); func(tmp, x0_, y0_; rad = radius)[1])
and now map(v3, rad_) works as expected.

Solving two non-linear equations in Octave

I am trying to solve the following two equations using Octave:
eqn1 = (wp/Cwc)^(2*N) - (1/10^(0.1*Ap))-1 == 0;
eqn2 = (ws/Cwc)^(2*N) - (1/10^(0.1*As))-1 == 0;
I used the following code:
syms Cwc N
eqn1 = (wp/Cwc)^(2*N) - (1/10^(0.1*Ap))-1 == 0;
eqn2 = (ws/Cwc)^(2*N) - (1/10^(0.1*As))-1 == 0;
sol = solve(eqn1 ,eqn2, Cwc, N)
ws, wp, As, and Ap are given as 1.5708, 0.31416, 0.5, and 45, respectively.
But I am getting the following error:
error: Python exception: NotImplementedError: could not solve
126491*(pi*(3*10**N*sqrt(314311)*pi**(-N)/1223)**(1/N)/2)**(2*N) - 126495
occurred at line 7 of the Python code block:
d = sp.solve(eqs, *symbols, dict=True)
What should I do to solve this?
Edit:
I modified the equations a little bit.
pkg load symbolic
clear all
syms Cwc N
wp = 0.31416
ws = 1.5708
As = 45
Ap = 0.5
eqn2 = N - log10(((1/(10^(0.05*As)))^2)-1)/2*log10(ws/Cwc) == 0;
eqn1 = N - log10(((1/(10^(0.05*Ap)))^2)-1)/2*log10(wp/Cwc) == 0;
sol = solve(eqn1,eqn2,Cwc,N)
And now I am getting this error:
error: Python exception: AttributeError: MutableDenseMatrix has no attribute is_Relational.
occurred at line 3 of the Python code block:
if arg.is_Relational:
The structure of the equations, with unknowns in both the base and the exponent of the same term, strongly suggests there is no symbolic solution to be found. I gave a simplified system, (2/x)^y = 4, (3/x)^y = 5, to a couple of symbolic solvers, and neither got anything from it. So the only way to solve this is numerically (which makes sense, because the four known parameters you have are floating point numbers anyway). Octave's numeric solver for this purpose is fsolve. Example of usage:
function y = f(x)
    Cwc = x(1);
    N = x(2);
    ws = 1.5708;
    wp = 0.31416;
    As = 0.5;
    Ap = 45;
    y = [(wp/Cwc)^(2*N) - (1/10^(0.1*Ap))-1; (ws/Cwc)^(2*N) - (1/10^(0.1*As))-1];
endfunction
fsolve(@f, [1; 1])
(Here, [1; 1] is an initial guess.) The output is
0.31413
0.19796
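As a cross-check on the fsolve result, this particular system can also be reduced by hand: taking logarithms of both rearranged equations and dividing them eliminates N, leaving a linear equation in log(Cwc). A sketch in Python rather than Octave (the algebra carries over directly):

```python
import math

# Known parameters from the question.
ws, wp, As, Ap = 1.5708, 0.31416, 0.5, 45

# Rearranged system: (wp/Cwc)^(2N) = 1 + 10^(-0.1*Ap)
#                    (ws/Cwc)^(2N) = 1 + 10^(-0.1*As)
# Taking logs gives 2N*log(wp/Cwc) = A and 2N*log(ws/Cwc) = B:
A = math.log1p(10 ** (-0.1 * Ap))
B = math.log1p(10 ** (-0.1 * As))

# Dividing the two log equations eliminates N:
#   (log wp - log Cwc) / (log ws - log Cwc) = A / B =: k
k = A / B
log_c = (math.log(wp) - k * math.log(ws)) / (1 - k)
Cwc = math.exp(log_c)
N = B / (2 * (math.log(ws) - log_c))

print(Cwc, N)  # ~0.31413, ~0.19796, matching the fsolve output
```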

increase popen2 buffer size in GNU Octave

Is it possible to increase the buffer used in popen2 between Octave and the subprocess? It looks like the buffer is limited to approximately 66560 bytes. This snippet shows the problem:
## This works with s = 65
## but not with s = 66
s = 66;
[in, out, pid] = popen2 ("dd", {"if=/dev/urandom",
                                "bs=1K",
                                sprintf("count=%i", s)});
pause (1);
[vt, cnt] = fread(out);
assert (cnt, s * 1024);
waitpid (pid);
fclose (in);
fclose (out);
returns:
66+0 records in
66+0 records out
67584 bytes (68 kB) copied, 0.999523 s, 67.6 kB/s
error: ASSERT errors for: assert (cnt,s * 1024)
Location | Observed | Expected | Reason
() 66560 67584 Abs err 1024 exceeds tol 0
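The ~64 KiB figure matches a typical kernel pipe-buffer size, which also explains why the subprocess may block once the buffer fills. For comparison, here is a sketch of the same situation in Python rather than Octave: draining the pipe in a loop until EOF, instead of one delayed read, sidesteps the buffer limit entirely (in Octave you might similarly call fread repeatedly until it returns no more bytes).

```python
import subprocess
import sys

# Spawn a child that writes well over a typical 64 KiB pipe buffer.
n_bytes = 200 * 1024
proc = subprocess.Popen(
    [sys.executable, "-c",
     "import sys; sys.stdout.buffer.write(b'x' * %d)" % n_bytes],
    stdout=subprocess.PIPE)

# Read in a loop until EOF: the child can keep writing as we drain the
# pipe, so neither side blocks and no data is lost to the buffer limit.
chunks = []
while True:
    chunk = proc.stdout.read(65536)
    if not chunk:
        break
    chunks.append(chunk)
proc.wait()
data = b"".join(chunks)
print(len(data))  # 204800
```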

NVVM_ERROR_INVALID_OPTION when using the CUDA kernel with Numbapro api

I want to execute a CUDA kernel in Python using the NumbaPro API. I have this code:
import math
import numpy
from timeit import default_timer as timer  # needed for timer() below
from numbapro import jit, cuda, int32, float32
from matplotlib import pyplot
@cuda.jit('void(float32[:], float32[:], float32[:], float32[:], float32, float32, float32, int32)')
def calculate_velocity_field(X, Y, u_source, v_source, x_source, y_source, strength_source, N):
    start = cuda.blockIdx.x * cuda.blockDim.x + cuda.threadIdx.x
    end = N
    stride = cuda.gridDim.x * cuda.blockDim.x
    for i in range(start, end, stride):
        u_source[i] = strength_source/(2*math.pi) * (X[i]-x_source)/((X[i]-x_source)**2 + (Y[i]-y_source)**2)
        v_source[i] = strength_source/(2*math.pi) * (Y[i]-y_source)/((X[i]-x_source)**2 + (Y[i]-y_source)**2)

def main():
    N = 200                         # number of points in each direction
    x_start, x_end = -4.0, 4.0      # boundaries in the x-direction
    y_start, y_end = -2.0, 2.0      # boundaries in the y-direction
    x = numpy.linspace(x_start, x_end, N)  # creates a 1D-array with the x-coordinates
    y = numpy.linspace(y_start, y_end, N)  # creates a 1D-array with the y-coordinates
    X, Y = numpy.meshgrid(x, y)     # generates a mesh grid
    strength_source = 5.0           # source strength
    x_source, y_source = -1.0, 0.0  # location of the source
    start = timer()
    # calculate grid dimensions
    blockSize = 1024
    gridSize = int(math.ceil(float(N)/blockSize))
    # transfer memory to device
    X_d = cuda.to_device(X)
    Y_d = cuda.to_device(Y)
    u_source_d = cuda.device_array_like(X)
    v_source_d = cuda.device_array_like(Y)
    # launch kernel
    calculate_velocity_field[gridSize,blockSize](X_d,Y_d,u_source_d,v_source_d,x_source,y_source,strength_source,N)
    # transfer memory to host
    u_source = numpy.empty_like(X)
    v_source = numpy.empty_like(Y)
    u_source_d.to_host(u_source)
    v_source_d.to_host(v_source)
    elapsed_time = timer() - start
    print("Exec time with GPU %f s" % elapsed_time)

if __name__ == "__main__":
    main()
Is giving me this error:
NvvmError Traceback (most recent call last)
<ipython-input-17-85e4a6e56a14> in <module>()
----> 1 @cuda.jit('void(float32[:], float32[:], float32[:], float32[:], float32, float32, float32, int32)')
2 def calculate_velocity_field(X, Y, u_source, v_source, x_source, y_source, strength_source, N):
3 start = cuda.blockIdx.x * cuda.blockDim.x + cuda.threadIdx.x
4 end = N
5 stride = cuda.gridDim.x * cuda.blockDim.x
~/.anaconda3/lib/python3.4/site-packages/numba/cuda/decorators.py in kernel_jit(func)
89 # Force compilation for the current context
90 if bind:
---> 91 kernel.bind()
92
93 return kernel
~/.anaconda3/lib/python3.4/site-packages/numba/cuda/compiler.py in bind(self)
319 Force binding to current CUDA context
320 """
--> 321 self._func.get()
322
323 @property
~/.anaconda3/lib/python3.4/site-packages/numba/cuda/compiler.py in get(self)
254 cufunc = self.cache.get(device.id)
255 if cufunc is None:
--> 256 ptx = self.ptx.get()
257
258 # Link
~/.anaconda3/lib/python3.4/site-packages/numba/cuda/compiler.py in get(self)
226 arch = nvvm.get_arch_option(*cc)
227 ptx = nvvm.llvm_to_ptx(self.llvmir, opt=3, arch=arch,
--> 228 **self._extra_options)
229 self.cache[cc] = ptx
230 if config.DUMP_ASSEMBLY:
~/.anaconda3/lib/python3.4/site-packages/numba/cuda/cudadrv/nvvm.py in llvm_to_ptx(llvmir, **opts)
420 cu.add_module(llvmir.encode('utf8'))
421 cu.add_module(libdevice.get())
--> 422 ptx = cu.compile(**opts)
423 return ptx
424
~/.anaconda3/lib/python3.4/site-packages/numba/cuda/cudadrv/nvvm.py in compile(self, **options)
211 for x in opts])
212 err = self.driver.nvvmCompileProgram(self._handle, len(opts), c_opts)
--> 213 self._try_error(err, 'Failed to compile\n')
214
215 # get result
~/.anaconda3/lib/python3.4/site-packages/numba/cuda/cudadrv/nvvm.py in _try_error(self, err, msg)
229
230 def _try_error(self, err, msg):
--> 231 self.driver.check_error(err, "%s\n%s" % (msg, self.get_log()))
232
233 def get_log(self):
~/.anaconda3/lib/python3.4/site-packages/numba/cuda/cudadrv/nvvm.py in check_error(self, error, msg, exit)
118 sys.exit(1)
119 else:
--> 120 raise exc
121
122
NvvmError: Failed to compile
libnvvm : error: -arch=compute_52 is an unsupported option
NVVM_ERROR_INVALID_OPTION
I tried other NumbaPro examples and the same error occurs.
I don't know whether it's a bug in NumbaPro that doesn't support compute capability 5.2, or a problem with Nvidia's NVVM... any suggestions?
In theory it should be supported, but I don't know what is happening.
I'm using Linux with CUDA 7.0 and driver version 346.29
Finally I found a solution here
Solution 1:
conda update cudatoolkit
Fetching package metadata: ....
# All requested packages already installed.
# packages in environment at ~/.anaconda3:
#
cudatoolkit 6.0 p0
It looks like updating the CUDA toolkit this way doesn't update it to CUDA 7.0. A second approach can be taken:
Solution 2
conda install -c numba cudatoolkit
Fetching package metadata: ......
Solving package specifications: .
Package plan for installation in environment ~/.anaconda3:
The following packages will be downloaded:
package | build
---------------------------|-----------------
cudatoolkit-7.0 | 1 190.8 MB
The following packages will be UPDATED:
cudatoolkit: 6.0-p0 --> 7.0-1
Proceed ([y]/n)? y
Before:
In [4]: check_cuda()
------------------------------libraries detection-------------------------------
Finding cublas
located at ~/.anaconda3/lib/libcublas.so.6.0.37
trying to open library... ok
Finding cusparse
located at ~/.anaconda3/lib/libcusparse.so.6.0.37
trying to open library... ok
Finding cufft
located at ~/.anaconda3/lib/libcufft.so.6.0.37
trying to open library... ok
Finding curand
located at ~/.anaconda3/lib/libcurand.so.6.0.37
trying to open library... ok
Finding nvvm
located at ~/.anaconda3/lib/libnvvm.so.2.0.0
trying to open library... ok
finding libdevice for compute_20... ok
finding libdevice for compute_30... ok
finding libdevice for compute_35... ok
-------------------------------hardware detection-------------------------------
Found 1 CUDA devices
id 0 b'GeForce GTX 970' [SUPPORTED]
compute capability: 5.2
pci device id: 0
pci bus id: 7
Summary:
1/1 devices are supported
PASSED
Out[4]: True
After:
In [6]: check_cuda()
------------------------------libraries detection-------------------------------
Finding cublas
located at ~/.anaconda3/lib/libcublas.so.7.0.28
trying to open library... ok
Finding cusparse
located at ~/.anaconda3/lib/libcusparse.so.7.0.28
trying to open library... ok
Finding cufft
located at ~/.anaconda3/lib/libcufft.so.7.0.35
trying to open library... ok
Finding curand
located at ~/.anaconda3/lib/libcurand.so.7.0.28
trying to open library... ok
Finding nvvm
located at ~/.anaconda3/lib/libnvvm.so.3.0.0
trying to open library... ok
finding libdevice for compute_20... ok
finding libdevice for compute_30... ok
finding libdevice for compute_35... ok
-------------------------------hardware detection-------------------------------
Found 1 CUDA devices
id 0 b'GeForce GTX 970' [SUPPORTED]
compute capability: 5.2
pci device id: 0
pci bus id: 7
Summary:
1/1 devices are supported
PASSED
Out[6]: True

Octave is in an infinite loop?

It seems like my Octave session is in an infinite loop, or at least freezes up, when I run this code:
c = cos(pi/8)
s = sin(pi/8)
A = [c -s; s c]
xy = [1;0]
for i = 1:17
    xy = A * xy
    plot(xy(1), xy(2))
    hold on
endfor
When the code runs I am unable to close any of the windows and must force close the application.