I am testing (1e308)**2 and (1e308)*2 in Python. I expect that either
both yield overflow, or
both yield inf.
However, (1e308)**2 raises an OverflowError, while (1e308)*2 returns inf without raising any exception. Can anyone explain why they behave differently? Thanks.
In [98]: 1e308**2
---------------------------------------------------------------------------
OverflowError Traceback (most recent call last)
<ipython-input-98-29e84f241b45> in <module>()
----> 1 1e308**2
OverflowError: (34, 'Result too large')
In [99]: 1e308*2
Out[99]: inf
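For context on the asymmetry (a reproduction sketch, not part of the original question): CPython's float power delegates to the C library's pow() and converts a range error into an OverflowError, while ordinary float arithmetic follows IEEE 754 semantics, where overflow silently produces inf:

```python
import math

x = 1e308

# float ** checks the C library's range error and raises
try:
    y = x ** 2
except OverflowError as e:
    print("x ** 2 raised:", e)

# float * follows IEEE 754: overflow quietly becomes inf
print("x * 2 =", x * 2)
print("isinf:", math.isinf(x * 2))
```

So both computations overflow; they just report it through different mechanisms.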
I'm attempting to run a density functional theory geometry optimization with the following script:
xyzToIonposOpt water.xyz 15
However, this relatively simple command returns the following error:
error: fun(0): subscripts must be either integers 1 to (2^63)-1 or logicals
error: called from
fminsearch>nmsmax at line 275 column 8
fminsearch at line 165 column 25
.tmp.m at line 43 column 8
here is line 275 of fminsearch.m:
f(1) = dirn * fun (x, varargin{:});
and line 165
[x, exitflag, output] = nmsmax (fun, x0, options, varargin{:});
A similar issue is described here: https://octave.1599824.n4.nabble.com/Issue-with-fminsearch-function-td4693803.html It seems to be an issue with Octave. However, I am not certain how to work around it, as I am not sure where the fminsearch function is called.
Thanks for your help!
I'm writing a simple script that iterates some data and plots the result. It was working fine. Then I zoomed in on the plot. After doing that, every time I try to run it I get the error
error: set: "dataaspectratio" must be finite
whether I use plot() or pcolor(). From a search I found that I can check the data aspect ratio with daspect(), and the answer is [4 2 1], which looks finite to me. Even if I close and restart Octave, the error persists and won't let me plot anything, even a simple thing from the command line. Or a graph comes up with no y-axis. How can I fix this?
The full error trying to run my file logistic.m is:
logistic
error: set: "dataaspectratio" must be finite
error: called from
__plt__>__plt2vv__ at line 495 column 10
__plt__>__plt2__ at line 242 column 14
__plt__ at line 107 column 18
plot at line 223 column 10
logistic at line 8 column 1
error: set: "dataaspectratio" must be finite
Here's the full script that I used:
R = linspace(0, 4, 100);
for j = 1:100
  r = R(j);
  X = linspace(0, 1, 100);
  for i = 1:1000
    X = r*(X - X.*X);
  endfor
  plot(R, X);
  hold on;
endfor
Just now, after starting Octave again, this problem went away. A while later it came back. All I did was zoom into a plot that I made. The plot window still comes up the first time, but it's just a horizontal line with no axes. After that, the plot window doesn't even come up.
I have a large equation system to solve. The coefficients are stored in a sparse matrix CM of the dimension 320001 x 320001 elements, of which 18536032 are non-zero. The result vector B is 320001 elements long.
When executing
I=CM\B
I get the following error message:
Octave Error: SparseMatrix::solve numeric factorization failed
A brief look into the source code did not give me a clue.
Does anyone know what is causing that error?
BTW: when solving the same problem with a smaller matrix (e.g. 180001x180001) the program runs fine.
Johannes
Octave uses the UMFPACK library to solve sparse linear systems. Inspecting the source shows that this error message is emitted when the numeric factorization returns a negative error status. The list of error codes can be found in the UMFPACK user's guide. One of them indicates a lack of memory:
UMFPACK ERROR out of memory, (-1): Not enough memory. The ANSI C malloc or realloc routine failed.
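If the direct factorization really is running out of memory, one possible workaround (a sketch, not Octave's built-in remedy) is an iterative Krylov solver, which only needs matrix-vector products and never stores LU factors; in Octave the analogous functions are pcg and gmres. The idea in plain Python, on a toy symmetric positive definite system (all names here are illustrative):

```python
def matvec(x):
    """y = A @ x for the n x n tridiagonal matrix diag(-1, 2, -1)."""
    n = len(x)
    y = [0.0] * n
    for i in range(n):
        y[i] = 2.0 * x[i]
        if i > 0:
            y[i] -= x[i - 1]
        if i < n - 1:
            y[i] -= x[i + 1]
    return y

def cg(matvec, b, tol=1e-10, maxiter=10000):
    """Conjugate gradient: solves A x = b using only matvec products."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                      # residual b - A*x0, with x0 = 0
    p = r[:]
    rs = sum(v * v for v in r)
    for _ in range(maxiter):
        Ap = matvec(p)
        alpha = rs / sum(pi * ai for pi, ai in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * ai for ri, ai in zip(r, Ap)]
        rs_new = sum(v * v for v in r)
        if rs_new ** 0.5 < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

b = [1.0] * 50
x = cg(matvec, b)
residual = max(abs(ri - bi) for ri, bi in zip(matvec(x), b))
print("max residual:", residual)
```

The memory footprint is a handful of vectors instead of the fill-in of a sparse LU factorization, which is exactly what fails for the 320001 x 320001 case.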
My Nvidia Geforce 960M has 2 GB of dedicated graphics memory. But when I try and run the sample (CNTK-Samples-2-4\Examples\Image\TransferLearning), I get the following CUDA memory allocation error:
Traceback (most recent call last):
  File "TransferLearning.py", line 217, in <module>
    max_epochs, freeze=freeze_weights)
  File "TransferLearning.py", line 130, in train_model
    trainer.train_minibatch(data) # update model with it
  File "C:\Users\Dell\Anaconda3\lib\site-packages\cntk\train\trainer.py", line 181, in train_minibatch
    arguments, device)
  File "C:\Users\Dell\Anaconda3\lib\site-packages\cntk\cntk_py.py", line 2975, in train_minibatch_overload_for_minibatchdata
    return _cntk_py.Trainer_train_minibatch_overload_for_minibatchdata(self, *args)
RuntimeError: CUDA failure 2: out of memory ; GPU=0 ; hostname=DESKTOP-IA3HLGI ; expr=cudaMalloc((void**) &deviceBufferPtr, sizeof(AllocatedElemType) * AsMultipleOf(numElements, 2))
[CALL STACK]
> Microsoft::MSR::CNTK::CudaTimer:: Stop
- Microsoft::MSR::CNTK::CudaTimer:: Stop (x2)
- Microsoft::MSR::CNTK::GPUMatrix:: Resize
- Microsoft::MSR::CNTK::Matrix:: Resize
- std::enable_shared_from_this::enable_shared_from_this
- std::enable_shared_from_this:: shared_from_this (x3)
- CNTK::Internal:: UseSparseGradientAggregationInDataParallelSGD
- CNTK:: CreateTrainer
- CNTK::Trainer:: TotalNumberOfUnitsSeen
- CNTK::Trainer:: TrainMinibatch (x2)
- PyInit__cntk_py (x2)
Is there a way to run this sample using the GPU? Is there a configuration for CUDA/CNTK for memory? Do I need to change image size and/or batch size?
Thanks NiallJG, changing the batch size did the trick. I tried mb_size=30 and it works!
For a simple test case for my compiler project, I'm trying to divide 88 by 11, but when I call idivq my program throws a Floating Point Exception. Here is the relevant section of generated code where the exception occurs:
# push 88
movq $88,%r10
# push 11
movq $11,%r13
# \
movq %r10,%rax
idivq %r13
I have looked up examples of how to use div, and I thought I was following the same format, so I don't understand why I am getting an exception.
idiv concatenates rdx and rax before performing the division (that is, it is actually a 128-bit by 64-bit division). If you want to do single-word division, you must set rdx first: zero it for a non-negative dividend, or sign-extend rax into it. What you're getting is not a floating-point exception but the CPU's divide-error fault (which Linux happens to report as SIGFPE): whatever is left over in rdx makes the quotient too big to fit in the destination register.
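A minimal corrected sequence (a sketch following the register choices in the generated code above) sign-extends rax into rdx with cqto before dividing:

```asm
movq $88, %rax      # dividend in rax
movq $11, %r13      # divisor
cqto                # sign-extend rax into rdx:rax (clears rdx here, since rax >= 0)
idivq %r13          # quotient -> rax, remainder -> rdx
```

cqto (cqo in Intel syntax) is the idiomatic choice for signed division because it also handles negative dividends, where zeroing rdx would give a wrong result.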