I have the following exercise:
Make a function that takes two parameters, a NumPy matrix and a constant, uses repetition structures to multiply each element of the matrix by the constant, and returns the resulting matrix.
I did it just using repetition structures:
import numpy as np

np.random.seed(0)
matriz = np.random.randint(1, 30, (3, 4))
constante = 4

for i in matriz:
    print(i * constante)
Can anyone help me solve it properly?
Thanks.
The question (which is a terrible exercise, you'll see why in a minute) is trying to help you get used to for loops. The array you're working with has two dimensions, so you need to loop over row indices i and column indices j to access each of the entries in the matrix. You can get the number of rows and columns of the array with the .shape attribute, so you don't have to know the shape of the array beforehand.
Here's one solution that modifies the matrix in-place, meaning the original matrix gets overwritten.
>>> def scale_matrix(A, c):
...     for i in range(A.shape[0]):        # Loop over row indices.
...         for j in range(A.shape[1]):    # Loop over column indices.
...             A[i,j] = c * A[i,j]        # Multiply the entry and store the result in the same spot.
...     return A
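A quick sketch of how one might call it (this is just an illustrative session; numpy is imported so the snippet stands on its own):
>>> import numpy as np
>>> A = np.array([[1, 2],
...               [3, 4]])
>>> scale_matrix(A, 10)
array([[10, 20],
       [30, 40]])
>>> A                          # the original array was modified in place
array([[10, 20],
       [30, 40]])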
BUT
This is a futile exercise, because when you multiply a NumPy array by a constant, it multiplies each entry of the array. No need for looping. This way is also faster and much more readable.
>>> import numpy as np
>>> A = np.array([[1, 2],
...               [3, 4]])
>>> A * 2
array([[2, 4],
       [6, 8]])
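If you want to check the speed claim yourself, here is a rough timing sketch (loop_scale below is just an illustrative reimplementation of the looping version, and the array size is arbitrary; exact numbers depend on your machine):

import timeit
import numpy as np

A = np.random.randint(1, 30, (300, 400))

def loop_scale(A, c):
    B = A.copy()                       # work on a copy so each timing run starts from the same data
    for i in range(B.shape[0]):
        for j in range(B.shape[1]):
            B[i, j] = c * B[i, j]
    return B

print(timeit.timeit(lambda: loop_scale(A, 2), number=10))  # explicit loops
print(timeit.timeit(lambda: A * 2, number=10))             # vectorized multiplication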
R = [cos(pi/3) sin(pi/3); -sin(pi/3) cos(pi/3)]
[i,j]=round([1 1] * R)
returns
i =
  -0   1
error: element number 2 undefined in return list
whereas I want i = 0 and j = 1.
Is there a way to work around that? Or just Octave being stupid?
Octave is not being stupid; it's just that you expect the syntax [a,b] = [c,d] to result in 'destructuring', but that's not how Octave/MATLAB works. Instead, you are assigning a single output (a matrix) to two variables. Since you are not generating multiple outputs, there is no output to assign to the second variable you specified (i.e. j), so it is ignored.
Long story short, if you're after a 'destructuring' effect, you can convert your matrix to a cell, and then perform cell expansion to generate two outputs:
[i,j] = num2cell( round( [1 1] * R ) ){:}
Or, obviously, you can collect the output into a single object, and then assign to i, and j separately via that object:
IJ = round( [1 1] * R )
i = IJ(1)
j = IJ(2)
but presumably that's what you're trying to avoid.
Explanation:
The reason [a,b] = bla bla doesn't work is that, syntactically speaking, the [a,b] here isn't a normal matrix; it represents a list of variables you expect to assign return values to. If you have a function or operation that returns multiple outputs, then each output will be assigned to each of those variables in turn.
However, if you only pass a single output, and you specified multiple return variables, Octave will assign that single output to the first return variable, and ignore the rest. And since a matrix is a single object, it assigns this to i, and ignores j.
Converting the whole thing to a cell allows you to then index it via {:}, which returns all cells as a comma separated list (this can be used to pass multiple arguments into functions, for instance). You can see this if you just index without capturing - this results in 'two' answers, printed one after another:
num2cell( round( [1 1] * R ) ){:}
% ans = 0
% ans = 1
Note that many functions in MATLAB/Octave behave differently based on whether you call them with one or two output arguments. In other words, think of the number of output arguments with which you call a function as part of its signature! E.g., have a look at the ind2sub function:
[r] = ind2sub([3, 3], [2,8]) % returns 1D indices
% r = 2 8
[r, ~] = ind2sub([3, 3], [2,8]) % returns 2D indices
% r = 2 2
If destructuring worked the way you assumed on normal matrices, it would be impossible to know if one is attempting to call a function in "two-outputs" mode, or simply trying to call it in "one-output" mode and then destructure the output.
I have a special dataset, and a model can be trained on it with about a 1% error. I need to do hyperparameter tuning for MLPRegressor without splitting the training set, i.e. with cv = 1. Is this possible with GridSearchCV?
One of the options for the cv parameter is:
An iterable yielding (train, test) splits as arrays of indices.
So, if you have an X input matrix, a y target vector, an mlp estimator, and a params grid, you can do just a single train-test split.
import numpy as np
from sklearn.model_selection import train_test_split, GridSearchCV
indices = np.arange(len(X))
train_idx, test_idx = train_test_split(indices, test_size=0.2)
clf = GridSearchCV(mlp, params, cv=[(train_idx, test_idx)])
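For completeness, here is a minimal end-to-end sketch; the make_regression data, the MLPRegressor settings, and the params grid below are made-up placeholders, not recommendations:

import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.neural_network import MLPRegressor

X, y = make_regression(n_samples=200, n_features=5, random_state=0)  # stand-in data

mlp = MLPRegressor(max_iter=2000, random_state=0)
params = {'hidden_layer_sizes': [(10,), (50,)], 'alpha': [1e-4, 1e-3]}

indices = np.arange(len(X))
train_idx, test_idx = train_test_split(indices, test_size=0.2, random_state=0)

clf = GridSearchCV(mlp, params, cv=[(train_idx, test_idx)])  # a single predefined split
clf.fit(X, y)
print(clf.best_params_)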
But keep in mind that using a single split for a hyper-parameter sweep is bad practice; do not run too many search steps against it, or you will simply overfit to that one split.
I'm writing a function for Python 3.7.3 that tests if a number is a factor of another number.
I tried researching on the internet to find some idea about how to write a function that tests the validity of factoring for two unknown real numbers. I ended up stumbling upon the difference between factoring and divisibility, which intrigued me somewhat.
def is_factor(f, n):
    """This function returns if f, a real number, is a factor of another
    real number n."""
    while f * f <= n:
        if f % n == 0:
            f /= n
            #return True?
        else:
            f += 1  #return False?

print(is_factor(1, 15))
The function appears to run without errors, but Python returns None, and that's all. I expect the function to return True or False. There must be some logical error in the code. Any feedback is appreciated.
If you are dealing with integers, use:
def is_factor(f, n):
    return n % f == 0
If you are dealing with real numbers, the above code works but is very sensitive to floating point imprecision. Instead, you can divide n by f and see if you get back n after rounding to the nearest integer:
def is_factor(f, n, e):
    return abs(round(n / f) * f - n) < e
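A quick sanity check of both versions (the floating-point one is renamed is_factor_real here just to keep the two apart):

def is_factor(f, n):
    return n % f == 0

def is_factor_real(f, n, e):
    return abs(round(n / f) * f - n) < e

print(is_factor(3, 15))                 # True
print(is_factor(4, 15))                 # False
print(is_factor_real(2.5, 7.5, 1e-9))   # True: 7.5 / 2.5 is exactly 3
print(is_factor_real(0.7, 1.0, 1e-9))   # False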
Let's say I have some data z=[1,2,3,4]
I am trying to fit this data to a model which is known, so the exercise is simply to find the value of an unknown parameter D
My log likelihood function looks like this
l(D \mid z) = \sum_i \sqrt{z_i^2 + D^2}
I am trying to define this log-likelihood function; z is the data, which is a list, and theta is the parameter vector, which in this case is one-dimensional.
import scipy.optimize as op
import numpy as np
D_true = some given value
def f(z, theta):
    D = theta
    z2 = [x**2 for x in z]
    return np.sqrt(np.sum(z2 + D**2))

result = op.minimize(f, D_true, args=(z))
print result.x
But I am getting the error message "unsupported operand type(s) for ** or pow(): 'list' and 'int'",
pointing towards return np.sqrt(np.sum(z2 + D**2)).
Can anyone help me solve this issue?
As they say, "we have eyes, if we only but see." What I mean is, it's telling you where to look:
"But I am getting the error message unsupported operand type(s) for ** or pow(): 'list' and 'int'
and pointing towards return np.sqrt(np.sum(z2 + D**2))"
and it's even telling you the problem: you cannot add a list object to an int object. By the way you wrote it, I think you are assuming Python will broadcast, but it will not do that unless at least one of the objects is a numpy (ndarray) object.
You probably wanted the value D ** 2 to be added to each entry of z2. One option is to write
return np.sqrt(np.sum(z2 + D ** 2 * np.ones_like(z2)))
another option is
return np.sqrt(np.sum(np.array(z2) + D ** 2))
It's awkward to me to see lists being used with NumPy functions; you may consider working exclusively with NumPy arrays. For example, you wouldn't have run into this problem if you had opted to do that from the start.
If you had a numpy array, you can use ufuncs instead of list comprehensions.
z = np.arange(1,5)
z2 = z ** 2
is the same as
z2 = [x ** 2 for x in z]
but returns a numpy array instead of a list.
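Putting that advice together, here is a rough sketch of the whole objective written with NumPy arrays. The starting guess x0=[1.0] is made up, and note that scipy.optimize.minimize passes the parameter vector as the first argument of the objective, so theta comes first here:

import numpy as np
import scipy.optimize as op

z = np.array([1, 2, 3, 4])                    # data as a NumPy array from the start

def f(theta, z):
    D = theta[0]                              # one-dimensional parameter vector
    return np.sqrt(np.sum(z ** 2 + D ** 2))   # same expression as above, no list comprehension needed

result = op.minimize(f, x0=[1.0], args=(z,))
print(result.x)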
I think you should replace D = theta with D, = theta, since theta is a one-element sequence; this way we unpack it on the fly.
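A tiny illustration of the difference, with a made-up one-element theta:

theta = [3.0]
D = theta     # D is the whole sequence: [3.0]
D, = theta    # D is the single element: 3.0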
The use of the return statement has been bothering me since I started learning Python about a month ago (I have no programming background at all).
The function double() seems to work fine without me having to reassign the list passed as an argument: the elements processed by the function are doubled as planned, with no need to assign anything outside the function.
However, the function only_upper() requires me to assign the result back to the list passed as an argument in order to see its effect: I have to write t = only_upper(t) outside the function.
So my question is: why do these two seemingly similar functions produce different results from the use of return?
Please explain in terms as plain as possible, given my limited programming skill. Thank you for your input.
def double(x):
    for i in range(len(x)):
        x[i] = int(x[i]) * 2
    return x

x = [1, 2, 3]
print double(x)
def only_upper(t):
    res = []
    for s in t:
        if s.isupper():
            res.append(s)
    t = res
    return t

t = ['a', 'B', 'C']
t = only_upper(t)
print t
I am assuming that this is your first programming language, hence the trouble understanding the return statement found in the functions.
The return in our functions is a means for us to literally return the values we want from that given 'formula', a.k.a. the function. For example:
def calculate(x, y):
    multiply = x * y
    return multiply

print calculate(5, 5)
The function calculate defines the steps to be executed as a chunk. Then you ask yourself what value you want to get out of that chunk of steps. In my example, the function calculates the product of two values, hence it returns the multiplied value. This can be shortened to the following:
def calculate(x, y):
    return x * y

print calculate(5, 5)
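And the value handed back by return still has to be captured by the caller, which is exactly why your t = only_upper(t) line is needed. For example:

def calculate(x, y):
    return x * y

result = calculate(5, 5)   # capture the returned value
print(result)              # 25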