Sage quadratic regression -- numbers too large?

I am trying to do a (quadratic) regression using Sage and the largest point I have is of this order of magnitude: (xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx, xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx) (number of digits shown).
When I run the code:
var('a,b,c')
model(x) = a*x^2+b*x+c
find_fit(data,model)
(yes, data is already defined)
The fit I get is [a == 1.0, b == 1.0, c == 1.0], which is not even close to correct. Is this because my numbers are too large, or is there some other reason?
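One likely culprit: find_fit starts every parameter at an initial guess of 1.0 by default, and with coordinates this large the underlying least-squares solver can fail to make any progress and simply return that guess unchanged. A sketch of a workaround (the names are mine, assuming data is a list of (x, y) pairs): rescale the data to order 1, fit, then undo the scaling on the coefficients.

var('a,b,c')
model(x) = a*x**2 + b*x + c
xs = max(abs(p[0]) for p in data)  # scale factors for x and y
ys = max(abs(p[1]) for p in data)
scaled = [(p[0]/xs, p[1]/ys) for p in data]
fit = find_fit(scaled, model, solution_dict=True)
# undo the scaling: y = ys*(A*(x/xs)^2 + B*(x/xs) + C)
a0 = fit[a]*ys/xs**2
b0 = fit[b]*ys/xs
c0 = fit[c]*ys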

How can I use scipy interp1d with N-D array for x without for loop

How can I use scipy.interpolate.interp1d when my x array is an N-D array, instead of a 1-D array, without using a loop?
The function f returned by interp1d then needs to be called on the result of numpy.percentile applied to one of the arrays.
I think there should be a way to do it with a list comprehension or lambda function, but I am still learning these tools.
(Note that this is different from my recent question here, because in that post I mixed up the x and y arrays, so the problem was not reproducible.)
Problem statement/example:
# a is y in interp1d docs
a = np.array([97,4809,4762,282,3879,17454,103,2376,40581,])
# b is x in interp1d docs
b = np.array([
[0.14,0.11,0.29,0.11,0.09,0.68,0.09,0.18,0.5,],
[0.32,0.25,0.67,0.25,0.21,1.56,1.60, 0.41,1.15,],]
)
Just trying this, below, fails with ValueError: x and y arrays must be equal in length along interpolation axis. The expected return is [array(97.), array(2376.)]. I am using the median here, but I will also need the 10th, 90th, etc. percentiles.
f = interpolate.interp1d(b, a, axis=0)
f(np.percentile(b, 50, axis=0))
However this, below, works and prints array(97.)
f = interpolate.interp1d(b[0,:], a, axis=0)
f(np.percentile(b[0,:], 50, axis=0))
A loop works, but I am wondering if there is a solution using list comprehensions, lambda functions, or some other technique.
out = []
for _i in range(b.shape[0]):
    _f = interpolate.interp1d(b[_i, :], a, axis=0)
    out.append(_f(np.percentile(b[_i, :], 50, axis=0)))
print(out)
# returns
# [array(97.), array(2376.)]
Efforts:
I understand I can loop through the b array with a list comprehension.
[b[i,:] for i in range(b.shape[0])]
# returns
# [array([0.14, 0.11, 0.29, 0.11, 0.09, 0.68, 0.09, 0.18, 0.5 ]),
# array([0.32, 0.25, 0.67, 0.25, 0.21, 1.56, 1.6 , 0.41, 1.15])]
And I also understand that I can use a list comprehension to create the scipy function f for each dimension in b:
[interpolate.interp1d(b[i, :], a, axis=0) for i in range(b.shape[0])]
# returns
# [<scipy.interpolate.interpolate.interp1d at 0x1b72e404360>,
# <scipy.interpolate.interpolate.interp1d at 0x1b72e404900>]
But I don't know how to combine these two list comprehensions to apply the np.percentile function.
Using Python 3.8.3, NumPy 1.18.5, SciPy 1.3.2
If you have large data arrays, you want to stay away from for loops, map, np.vectorize and comprehensions. They will all be slow. Instead, it's always better to use vectorized numpy or scipy operations whenever possible.
In this particular case, you can implement the vectorization pretty trivially yourself. interp1d defaults to a linear interpolation, which is very simple to code by hand. For a general interpolator, the first step would be to sort x and y, which is why scipy can't support multiple x for a given y. If the x rows all have different sort order, what do you do with the y?
Luckily, there are a couple of things you can do to make this much faster than having to build a full interpolator or argsort y multiple times. For example, start by argsorting x:
idx = b.argsort(axis=1)
idx is now an array such that b[np.arange(2)[:, None], idx] gives the sorted version of b along axis 1; also, a[idx] gives the corresponding y-values. Since you are taking the median (50th percentile), and the rows have an odd number of elements, the median x is just the middle of each sorted row, and the corresponding y is given by
a[idx[:, len(a) // 2]]
If you had an even number of elements, you would have to average the elements surrounding the middle:
i = len(a) // 2 - 1
a[idx[:, i:i + 2]].mean(axis=1)
You can reduce algorithmic complexity by using np.argpartition instead of a full-blown np.argsort to get the middle element(s).
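Putting the pieces together on the example arrays from the question, a runnable sketch of the argsort approach described above:

import numpy as np

a = np.array([97, 4809, 4762, 282, 3879, 17454, 103, 2376, 40581])
b = np.array([[0.14, 0.11, 0.29, 0.11, 0.09, 0.68, 0.09, 0.18, 0.50],
              [0.32, 0.25, 0.67, 0.25, 0.21, 1.56, 1.60, 0.41, 1.15]])

# argsort each row of x once; the median of an odd-length row is its middle
# element in sorted order, and indexing a with that position gives its y
idx = b.argsort(axis=1)
print(a[idx[:, len(a) // 2]])  # [  97 2376]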
interp1d and other interpolators from scipy.interpolate only support 1D x arrays. So you'll need to loop over the dimensions of x manually.
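If you do settle for looping, the two comprehensions from the question combine directly into one; this is functionally identical to the explicit loop above (a sketch using the names from the question):

out = [interpolate.interp1d(b[i, :], a, axis=0)(np.percentile(b[i, :], 50, axis=0))
       for i in range(b.shape[0])]
# [array(97.), array(2376.)]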

Why am I getting this error? I just want to plot my equations in Octave

%z = damping ratio (ratio of damping coefficients), z < 1
%wn = natural frequency in rad/sec
%wd = frequency of damped oscillations
%x_0 = amplitude
%phi = initial phase
%t = time
%%
z = 0.6943;
wn = 50;
wd = sqrt(1-(z^2))*wn;
x_0 = 42;
phi = pi/12;
t = linspace(0,100,1000);
x = x_0.*exp(-z*wn*t).*sin(phi+(wd*t));
plot(t,x);
error: operator *: nonconformant arguments (op1 is 1x1000, op2 is 1x1000)
error: called from
/home/koustubhjain/Documents/Damped_Oscialltion_(z<1).m at line 14 column 3
I am completely new to Octave/MATLAB; I just want to plot my equations and get a graph for them. Did I do something wrong with the multiplication? Please, someone help.
Also, the curve I am trying to plot should look like a sinusoid with decreasing amplitude; that's what my teacher told me. But if I replace the multiplication signs with .*, all I get is what looks like a straight line.
The curve decays to 0 almost immediately, and your range of t is far too wide to see it. Try plotting t from 0 to 0.5 instead of from 0 to 100, e.g. t = linspace(0, 0.5, 1000);, and you will see your curve.

mIoU for multi-class

I would like to understand how mIoU is calculated for multi-class classification. The formula for each class is
IoU = TP / (TP + FP + FN)  (per class; equivalently |A ∩ B| / |A ∪ B|)
and then the average over the classes gives the mIoU. However, I don't understand what happens for the classes that are not represented: the formula becomes a division by 0, so I ignore them and compute the average only over the represented classes.
The problem is that a wrong prediction lowers the score a lot, because it adds another class to the average. For instance: in semantic segmentation, the ground truth of an image contains 4 classes (0, 1, 2, 3), and 6 classes are represented over the dataset. The prediction also contains 4 classes (0, 1, 4, 5), but all the pixels labelled 2 and 3 in the ground truth are classified as 4 and 5 in the prediction. In this case, should we calculate the mIoU over 6 classes, even though 4 of them are totally wrong and their respective IoUs are 0? So the problem is that if even one pixel is predicted in a class that is not in the ground truth, we have to divide by a larger denominator, which lowers the score a lot.
Is this the correct way to compute the mIoU for multi-class problems (and for semantic segmentation)?
Instead of calculating the mIoU of each image and then taking the "mean" mIoU over all the images, I calculate the mIoU of the dataset as one big image. If a class is not in the image and is not predicted, I set its IoU equal to 1.
From scratch:
import numpy as np

def miou(gt, pred, nbr_mask):
    intersection = np.zeros(nbr_mask)  # int = (A and B)
    den = np.zeros(nbr_mask)  # den = A + B = (A or B) + (A and B)
    for i in range(len(gt)):
        height, width = gt[i].shape  # each image is height x width
        for j in range(height):
            for k in range(width):
                if pred[i][j][k] == gt[i][j][k]:
                    intersection[gt[i][j][k]] += 1
                den[pred[i][j][k]] += 1
                den[gt[i][j][k]] += 1
    mIoU = 0
    for i in range(nbr_mask):
        if den[i] != 0:
            # union = den - intersection, so IoU = int / (den - int)
            mIoU += intersection[i] / (den[i] - intersection[i])
        else:
            mIoU += 1  # class absent from both gt and pred counts as IoU 1
    mIoU = mIoU / nbr_mask
    return mIoU
With gt the array of ground-truth labels and pred the predictions for the associated images (they have to correspond in the array and be the same size).
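The triple loop above can also be vectorized with np.bincount, which produces the same per-class counts in one pass (a sketch with my names, assuming integer labels in [0, nbr_mask)):

import numpy as np

def miou_fast(gt, pred, nbr_mask):
    gt = np.asarray(gt).ravel()
    pred = np.asarray(pred).ravel()
    inter = np.bincount(gt[gt == pred], minlength=nbr_mask)
    den = np.bincount(gt, minlength=nbr_mask) + np.bincount(pred, minlength=nbr_mask)
    union = den - inter
    # classes absent from both gt and pred get IoU 1, as in the loop above
    ious = np.where(union > 0, inter / np.maximum(union, 1), 1.0)
    return ious.mean()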
Adding to the previous answer, here is a fast and efficient PyTorch GPU implementation of calculating the mIoU and classwise IoU for a batch of size (N, H, W) (both pred masks and labels), taken from the NeurIPS 2021 paper "Few-Shot Segmentation via Cycle-Consistent Transformer"; the GitHub repo is available here.
import torch

def intersectionAndUnionGPU(output, target, K, ignore_index=255):
    # 'K' classes, output and target sizes are N or N * L or N * H * W, each value in range 0 to K - 1.
    assert output.dim() in [1, 2, 3]
    assert output.shape == target.shape
    output = output.view(-1)
    target = target.view(-1)
    output[target == ignore_index] = ignore_index
    intersection = output[output == target]
    # torch.histc is only implemented for floating-point tensors, so cast the integer labels
    area_intersection = torch.histc(intersection.float(), bins=K, min=0, max=K - 1)
    area_output = torch.histc(output.float(), bins=K, min=0, max=K - 1)
    area_target = torch.histc(target.float(), bins=K, min=0, max=K - 1)
    area_union = area_output + area_target - area_intersection
    return area_intersection, area_union, area_target
Example usage:
import torch.nn.functional as F

output = torch.rand(4, 5, 224, 224)  # model output; batch size=4, channels=5, H,W=224
preds = F.softmax(output, dim=1).argmax(dim=1)  # (4, 224, 224)
labels = torch.randint(0, 5, (4, 224, 224))
i, u, _ = intersectionAndUnionGPU(preds, labels, 5)  # 5 is num_classes
classwise_IOU = i / u  # tensor of size (num_classes)
mIOU = i.sum() / u.sum()  # mean IOU; taking (i/u).mean() is wrong
Hope this helps everyone!
(A non-GPU implementation is available as well in the repo!)
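A typical usage pattern is to accumulate the per-class intersections and unions over the whole dataset and only divide at the end; here is a sketch with randomly generated stand-in batches (replace them with a real DataLoader):

num_classes = 5
total_i = torch.zeros(num_classes)
total_u = torch.zeros(num_classes)
# stand-in for a real dataset: two batches of random predictions and labels
batches = [(torch.randint(0, num_classes, (4, 224, 224)),
            torch.randint(0, num_classes, (4, 224, 224))) for _ in range(2)]
for preds, labels in batches:
    i, u, _ = intersectionAndUnionGPU(preds, labels, num_classes)
    total_i += i.cpu()
    total_u += u.cpu()
print(total_i / total_u)  # per-class IoU over the whole set
print(total_i.sum() / total_u.sum())  # overall score, as in the answer above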

"dimension too large" error when broadcasting to sparse matrix in octave

32-bit Octave has a limit on the maximum number of elements in an array. I have recompiled from source (following the script at https://github.com/calaba/octave-3.8.2-enable-64-ubuntu-14.04 ), and now have 64-bit indexing.
Nevertheless, when I attempt to perform elementwise multiplication using a broadcast function, I get error: out of memory or dimension too large for Octave's index type
Is this a bug, or an undocumented feature? If it's a bug, does anyone have a reasonably efficient workaround?
Minimal code to reproduce the problem:
function indexerror()
  % both of these are formed without error
  % a = zeros (2^32, 1, 'int8');
  % b = zeros (1024*1024*1024*3, 1, 'int8');
  % sizemax % returns 9223372036854775806

  nnz = 1000     % number of non-zero elements
  rowmax = 250000
  colmax = 100000

  irow = zeros(1,nnz);
  icol = zeros(1,nnz);
  for ind = 1:nnz
    irow(ind) = round(rowmax/nnz*ind);
    icol(ind) = round(colmax/nnz*ind);
  end
  sparseMat = sparse(irow,icol,1,rowmax,colmax);

  % column vector to be broadcast
  broad = 1:rowmax;
  broad = broad(:);

  % this gives "dimension too large" error
  toobig = bsxfun(@times,sparseMat,broad);

  % so does this
  toobig2 = sparse(repmat(broad,1,size(sparseMat,2)));
  mult = sparse( sparseMat .* toobig2 ); % never made it this far
end
EDIT:
Well, I have an inefficient workaround. It's slower than using bsxfun by a factor of 3 or so (depending on the details), but it's better than having to sort through the error in the libraries. Hope someone finds this useful some day.
% loop over rows, instead of using bsxfun
mult_loop = sparse([],[],[],rowmax,colmax);
for ind = 1:length(broad)
  mult_loop(ind,:) = broad(ind) * sparseMat(ind,:);
end
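For what it's worth, the same row scaling can be written as pre-multiplication by a sparse diagonal matrix (sparse(1:rowmax, 1:rowmax, broad) * sparseMat in Octave), which avoids broadcasting entirely and stays sparse throughout. Here is a sketch of that idea in SciPy, using the sizes from the question, purely for comparison:

import numpy as np
from scipy import sparse

rowmax, colmax, nnz = 250000, 100000, 1000
ind = np.arange(1, nnz + 1)
irow = np.round(rowmax / nnz * ind).astype(int) - 1  # 0-based indices
icol = np.round(colmax / nnz * ind).astype(int) - 1
S = sparse.csr_matrix((np.ones(nnz), (irow, icol)), shape=(rowmax, colmax))

broad = np.arange(1, rowmax + 1)  # the column vector being broadcast
mult = sparse.diags(broad) @ S    # scales row i by broad[i], stays sparse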
The unfortunate answer is that yes, this is a bug. Apparently bsxfun and repmat are returning full matrices rather than sparse ones. The bug has been filed here:
http://savannah.gnu.org/bugs/index.php?47175

Has anyone seen a programming puzzle similar to this?

"Suppose you want to build a solid panel out of rows of 4×1 and 6×1 Lego blocks. For structural strength, the spaces between the blocks must never line up in adjacent rows. As an example, the 18×3 panel shown below is not acceptable, because the spaces between the blocks in the top two rows line up.
There are 2 ways to build a 10×1 panel, 2 ways to build a 10×2 panel, 8 ways to build an 18×3 panel, and 7958 ways to build a 36×5 panel.
How many different ways are there to build a 64×10 panel? The answer will fit in a 64-bit signed integer. Write a program to calculate the answer. Your program should run very quickly – certainly, it should not take longer than one minute, even on an older machine. Let us know the value your program computes, how long it took your program to calculate that value, and on what kind of machine you ran it. Include the program's source code as an attachment."
I was recently given this programming puzzle and have been racking my brains trying to solve it. I wrote some code in C++, and I know the number is huge; my program ran for a few hours before I decided to stop it, because the requirement was one minute of run time even on a slow computer. Has anyone seen a puzzle similar to this? It has been a few weeks and I can't hand this in anymore, but it has really been bugging me that I couldn't solve it correctly. Any suggestions on algorithms to use, or possible ways to solve it that are "outside the box"?
What I resorted to was making a program that built each possible layer of 4×1 and 6×1 blocks to form a 64×1 layer. That turned out to be about 3300 different layers. Then I had my program run through and stack them into all possible 10-layer-high walls with no cracks lining up. As you can see, this solution would take a long, long, long time, so brute force does not seem effective within the time constraint. Any suggestions/insight would be greatly appreciated.
The main insight is this: when determining what's in row 3, you don't care about what's in row 1, just what's in row 2.
So let's call how to build a 64x1 layer a "row scenario". You say that there are about 3300 row scenarios. That's not so bad.
Let's compute a function:
f(s, r) = the number of ways to put row scenario number "s" into row "r", and legally fill all the rows above "r".
(I'm counting with row "1" at the top, and row "10" at the bottom)
STOP READING NOW IF YOU WANT TO AVOID SPOILERS.
Now clearly (numbering our rows from 1 to 10):
f(s, 1) = 1
for all values of "s".
Also, and this is where the insight comes in, (Using Mathematica-ish notation)
f(s, r) = Sum[ f(i, r-1) * fits(s, i) , {i, 1, 3328} ]
where "fits" is a function that takes two scenario numbers and returns "1" if you can legally stack those two rows on top of each other and "0" if you can't. This uses the insight because the number of legal ways to place scenario depends only on the number of ways to place scenarios above it that are compatible according to "fits".
Now, fits can be precomputed and stored in a 3328 by 3328 array of bytes. That's only about 10 Meg of memory. (Less if you get fancy and store it as a bit array)
The answer then is obviously just
Sum[ f(i, 10) , {i, 1, 3328} ]
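Here is a short Python sketch of exactly that recurrence (the names are mine; each row scenario is encoded as a bitmask of its internal crack positions, and fits is precomputed as a list of compatible scenario indices):

def gen_rows(width, blocks=(4, 6)):
    # enumerate every row tiling, encoded as a bitmask of crack positions
    rows = []
    def build(pos, mask):
        if pos == width:
            rows.append(mask)
            return
        for b in blocks:
            if pos + b < width:
                build(pos + b, mask | (1 << (pos + b)))
            elif pos + b == width:
                build(pos + b, mask)  # no crack at the panel's edge
    build(0, 0)
    return rows

def count_walls(width, height, blocks=(4, 6)):
    rows = gen_rows(width, blocks)
    n = len(rows)
    # fits[s] = scenarios sharing no crack with s, i.e. fits(s, i) == 1
    fits = [[i for i in range(n) if rows[s] & rows[i] == 0] for s in range(n)]
    counts = [1] * n  # f(s, 1) = 1 for every scenario s
    for _ in range(height - 1):  # f(s, r) = sum of f(i, r-1) over fitting i
        counts = [sum(counts[i] for i in fits[s]) for s in range(n)]
    return sum(counts)

print(count_walls(64, 10))  # 806844323190414, matching the Haskell run below

Building fits this way is quadratic in the number of scenarios (about 11 million bitmask tests), so it is not tuned for speed, but it shows the recurrence directly.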
Here is my answer. It's Haskell; among other things, you get bignums for free.
EDIT: It now actually solves the problem in a reasonable amount of time.
MORE EDITS: With a sparse matrix it takes half a second on my computer.
You compute each possible way to tile a row. Let's say there are N ways to tile a row. Make an NxN matrix. Element i,j is 1 if row i can appear next to row j, 0 otherwise. Start with a vector containing N 1s. Multiply the matrix by the vector a number of times equal to the height of the wall minus 1, then sum the resulting vector.
module Main where

import Data.Array.Unboxed
import Data.List
import System.Environment
import Text.Printf
import qualified Data.Foldable as F
import Data.Word
import Data.Bits

-- This records the index of the holes in a bit field
type Row = Word64

-- This generates the possible rows for given block sizes and row length
genRows :: [Int] -> Int -> [Row]
genRows xs n = map (permToRow 0 1) $ concatMap comboPerms $ combos xs n
  where
    combos [] 0 = return []
    combos [] _ = []  -- failure
    combos (x:xs) n =
      do c <- [0..(n `div` x)]
         rest <- combos xs (n - x*c)
         return (if c > 0 then (x, c):rest else rest)
    comboPerms [] = return []
    comboPerms bs =
      do (b, brest) <- choose bs
         rest <- comboPerms brest
         return (b:rest)
    choose bs = map (\(x, _) -> (x, remove x bs)) bs
    remove x (bc@(y, c):bs) =
      if x == y
        then if c > 1
               then (x, c - 1):bs
               else bs
        else bc:(remove x bs)
    remove _ [] = error "no item to remove"
    permToRow a _ [] = a
    permToRow a _ [_] = a
    permToRow a n (c:cs) = permToRow (a .|. m) m cs
      where m = n `shiftL` c

-- Test if two rows of blocks are compatible,
-- i.e. they do not have a hole in common
rowCompat :: Row -> Row -> Bool
rowCompat x y = x .&. y == 0

-- It's a sparse matrix with boolean entries
type Matrix = Array Int [Int]
type Vector = UArray Int Word64

-- Creates a matrix of row compatibilities
compatMatrix :: [Row] -> Matrix
compatMatrix rows = listArray (1, n) $ map elts [1..n]
  where
    elts :: Int -> [Int]
    elts i = [j | j <- [1..n], rowCompat (arows ! i) (arows ! j)]
    arows = listArray (1, n) rows :: UArray Int Row
    n = length rows

-- Multiply matrix by vector, O(N^2)
mulMatVec :: Matrix -> Vector -> Vector
mulMatVec m v = array (bounds v)
                [(i, sum [v ! j | j <- m ! i]) | i <- [1..n]]
  where n = snd $ bounds v

initVec :: Int -> Vector
initVec n = array (1, n) $ zip [1..n] (repeat 1)

main = do
  args <- getArgs
  if length args < 3
    then putStrLn "usage: blocks WIDTH HEIGHT [BLOCKSIZE...]"
    else do
      let (width:height:sizes) = map read args :: [Int]
      printf "Width: %i\nHeight %i\nBlock lengths: %s\n" width height
        $ intercalate ", " $ map show sizes
      let rows = genRows sizes width
      let rowc = length rows
      printf "Row tilings: %i\n" rowc
      if null rows
        then return ()
        else do
          let m = compatMatrix rows
          printf "Matrix density: %i/%i\n"
            (sum (map length (elems m))) (rowc^2)
          printf "Wall tilings: %i\n" $ sum $ elems
            $ iterate (mulMatVec m) (initVec (length rows))
            !! (height - 1)
And the results...
$ time ./a.out 64 10 4 6
Width: 64
Height 10
Block lengths: 4, 6
Row tilings: 3329
Matrix density: 37120/11082241
Wall tilings: 806844323190414
real 0m0.451s
user 0m0.423s
sys 0m0.012s
Okay, 500 ms, I can live with that.
I solved a similar problem for a programming contest, tiling a long hallway with tiles of various shapes. I used dynamic programming: given any panel, there is a way to construct it by laying down one row at a time. Each row can end in only finitely many shapes. So for each number of rows, and for each shape, I compute how many ways there are to make that row. (For the bottom row, there is exactly one way to make each shape.) Then the shape of each row determines which shapes the next row can take (i.e. the spaces never line up). This number is finite for each row, and because you have only two sizes of bricks, it is going to be small. So you wind up spending constant time per row, and the program finishes quickly.
To represent a shape, I would just make a list of 4s and 6s, then use that list as a key in a table that stores the number of ways to make that shape in row i, for each i.