Solve two equations with two unknowns when the equations contain complex conjugates

There are two unknowns, the complex numbers X and Y.
There are sixteen constants, the complex numbers A_1, A_2, B_1, B_2, C_1, C_2, D_1, D_2, E_1, E_2, F_1, F_2, G_1, G_2, H_1, and H_2.
Equation (1): X*A_1 + Y*B_1 + X*Y*C_1 + conj(X)*D_1 + Y*conj(X)*E_1 + X*conj(X)*F_1 + X*Y*conj(X)*G_1 + H_1 = 0
Equation (2): X*A_2 + Y*B_2 + X*Y*C_2 + conj(Y)*D_2 + X*conj(Y)*E_2 + Y*conj(Y)*F_2 + X*Y*conj(Y)*G_2 + H_2 = 0
where,
conj(X)= complex conjugate of X and conj(Y)= complex conjugate of Y.
|X|<=1 and |Y|<=1
How can I solve for the two unknowns when the equations contain complex conjugates?
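One standard approach (a sketch, not from the question): write X = x_r + i*x_i and Y = y_r + i*y_i, split each equation into its real and imaginary parts, and solve the resulting four real equations in four real unknowns numerically. In Octave this can be handed to fsolve; the constant values below are placeholders, and the constraint |X| <= 1, |Y| <= 1 has to be checked on the returned solution, since fsolve itself is unconstrained.

% Hypothetical constants, packed as [A B C D E F G H] for each equation.
k1 = [0.3+0.1i, -0.2, 0.1i, 0.4, -0.1, 0.2, 0.05i, -0.3];
k2 = [0.1, 0.5-0.2i, -0.1, 0.2i, 0.3, -0.4, 0.1, 0.2];

function r = residuals (v, k1, k2)
  X = v(1) + 1i*v(2);               % rebuild the complex unknowns
  Y = v(3) + 1i*v(4);
  e1 = X*k1(1) + Y*k1(2) + X*Y*k1(3) + conj(X)*k1(4) ...
       + Y*conj(X)*k1(5) + X*conj(X)*k1(6) + X*Y*conj(X)*k1(7) + k1(8);
  e2 = X*k2(1) + Y*k2(2) + X*Y*k2(3) + conj(Y)*k2(4) ...
       + X*conj(Y)*k2(5) + Y*conj(Y)*k2(6) + X*Y*conj(Y)*k2(7) + k2(8);
  r = [real(e1); imag(e1); real(e2); imag(e2)];   % 4 real equations
endfunction

v0 = [0.1; 0.1; 0.1; 0.1];          % starting guess inside the unit disk
v  = fsolve (@(v) residuals (v, k1, k2), v0);
X  = v(1) + 1i*v(2)
Y  = v(3) + 1i*v(4)
% then check: abs(X) <= 1 && abs(Y) <= 1

Because the system is polynomial, fsolve may converge to different roots from different starting guesses, so it is worth trying several values of v0 and discarding any root outside the unit disk.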

Related

About the Quick Start for Deep Learning (Knet.jl) in the Julia Language

Knet.jl is a deep learning framework for the Julia language.
This question is about its quick-start tutorial:
https://denizyuret.github.io/Knet.jl/latest/tutorial/#Tutorial
ENV ["COLUMNS"] = 72
using Knet, MLDatasets, IterTools
struct Conv; w; b; f; end
(c :: Conv) (x) = c.f. (pool (conv4 (c.w, x). + C.b))
Conv (w1, w2, cx, cy, f = relu) = Conv (param (w1, w2, cx, cy), param0 (1,1, cy, 1), f);
The composite type Conv has three fields: w, b, and f.
The functor (c::Conv)(x) broadcasts the activation function f over the result using the dot syntax.
conv4(c.w, x) convolves the weight tensor w with the input x, and .+ adds the bias c.b.
I don't know what pool is doing to that matrix.
The result of pool(conv4(...)) is then passed through the relu activation function.
As for the last line, Conv(w1, w2, cx, cy, f=relu) = Conv(param(w1, w2, cx, cy), param0(1, 1, cy, 1), f);,
I don't know what it is trying to do.
That is how far my understanding goes.
What is pool doing in particular?
And why are there two param calls on the 5th line?
Actually, the layer does a convolution followed by max pooling:
Pooling layers reduce the dimensions of the data by combining the outputs of neuron clusters at one layer into a single neuron in the next layer. Local pooling combines small clusters, typically 2 x 2. Global pooling acts on all the neurons of the convolutional layer. There are two common types of pooling: max and average. Max pooling uses the maximum value of each cluster of neurons at the prior layer, while average pooling instead uses the average value. (source: https://en.wikipedia.org/wiki/Convolutional_neural_network#Pooling_layers)
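To make the pooling step concrete, here is 2 x 2 max pooling written out by hand (a minimal sketch in Octave, since the concept is language-independent; the input values are made up, and I am assuming Knet's pool defaults of a 2 x 2 window with max mode):

A = [1 3 2 1;
     4 6 5 2;
     7 2 9 1;
     3 1 4 8];
P = zeros(2, 2);
for i = 1:2
  for j = 1:2
    block = A(2*i-1:2*i, 2*j-1:2*j);   % one 2x2 cluster of outputs
    P(i, j) = max(block(:));           % keep only the maximum
  end
end
disp(P)   % -> [6 5; 7 9]: the output is half the size in each dimension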
There are two params on the 5th line because a convolutional layer has two trainable parameters: the kernel weights w and the bias b. The functions param and param0 initialize them with the correct sizes and mark them as trainable parameters that will be updated during optimization.
To learn about neural networks, I found these examples quite useful: linear regression and a simple feed-forward network (multilayer perceptron).

Using linear approximation to perform addition and subtraction | error barrier

I'm attempting my first solo project after taking an introductory course on machine learning: using linear approximation to predict the outcome of adding or subtracting two numbers.
I have 3 features: the first number, the operation (0 for subtraction, 1 for addition), and the second number.
So my input looks something like this:
3 0 1
4 1 2
3 0 3
With corresponding output like this:
2
6
0
I have (I think) successfully implemented the linear regression algorithm, since the squared error does gradually decrease; but with 100 training examples, with values ranging from 0 to 50, the squared error flattens out at around 685.6 after about 400 iterations.
[Graph: squared error vs. iterations]
To fix this, I have tried using a larger training set, getting rid of regularization, and normalizing the input values.
I know that one way to fix high bias is to add complexity to the model, but I want to maximize performance at this level of complexity first. Is it possible to go any further on this level?
My linear approximation code in Octave:
% Iterate
for i = 1:iter
  % hypothesis
  h = X * Theta;
  % copy of Theta with the bias entry zeroed, so it is not regularized
  regTheta = Theta;
  regTheta(1) = 0;
  % cost: squared error plus L2 penalty
  J(i, 2) = (1 / (2 * m)) * (sum((h - y) .^ 2) + lambda * sum(regTheta .^ 2));
  % gradient step: the regularization term belongs inside the gradient,
  % scaled by alpha / m, not added as a scalar
  Theta = Theta - (alpha / m) * (X' * (h - y) + lambda * regTheta);
end
Note: I'm using 0 for lambda, so regularization is effectively disabled.
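One observation that may help (not from the question, but derivable from the sample data, where 0 means subtraction and 1 means addition): the target is y = a - b when op = 0 and y = a + b when op = 1, i.e. y = a + (2*op - 1)*b. That is not linear in the raw features (a, op, b), but it becomes exactly linear once the interaction feature op*b is added. A sketch in Octave, using the three sample rows above plus one hypothetical extra row (5, 1, 1) -> 6 to make the system fully determined:

a  = [3; 4; 3; 5];  op = [0; 1; 0; 1];  b = [1; 2; 3; 1];
y  = [2; 6; 0; 6];                 % a - b when op = 0, a + b when op = 1
X  = [ones(4, 1), a, b, op .* b];  % bias, a, b, and the interaction op*b
Theta = X \ y;                     % least squares; here the fit is exact
disp(Theta')                       % -> [0 1 -1 2], i.e. y = a - b + 2*op*b
disp((X * Theta)')                 % reproduces y with zero error

In other words, the error floor you are seeing is model mis-specification, not an optimization problem; the interaction feature removes it without moving to a more complex model class.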

Comparing two functions based on Asymptotic notations

f(n) = 1 + 2 + 3 + · · · + n
g(n) = 3n^2 + n log n
Determine whether f = O(g), f = Ω(g), or f = Θ(g).
From my effort and understanding, one guess is that f = O(g), since g(n) has an n^2 term, which grows faster than n.
Another way: if both are divided by n, f(n) gives the constant 1 and g(n) gives n log n, which grows faster than the constant 1; so f = O(g).
Is that a correct answer?
What actually is the scaling property of Big-O?
How to prove: for any constant c > 0, cf(n) is O(f(n)).
My understanding so far:
cf(n) < (c + k)f(n) holds for all n > 0 and k > 0.
i. Constant factors are ignored.
ii. Only the powers and functions of n should be exploited.
It is this ignoring of constant factors that motivates such a notation, which proves f is O(f).
Is this explanation enough to prove the scaling property of Big-O?
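For the scaling property, a direct proof from the definition is short; here it is written out (in LaTeX notation, using M for the witness constant to avoid clashing with the c in the claim):

\textbf{Claim.} For any constant $c > 0$, $c\,f(n) = O(f(n))$.

\textbf{Proof.} Recall that $h(n) = O(f(n))$ iff there exist $M > 0$ and $n_0$
such that $|h(n)| \le M\,|f(n)|$ for all $n \ge n_0$.
Take $M = c$ and $n_0 = 1$. Then for all $n \ge n_0$,
\[
  |c\,f(n)| = c\,|f(n)| \le M\,|f(n)|,
\]
so the definition is satisfied and $c\,f(n) = O(f(n))$. $\blacksquare$

The inequality cf(n) < (c + k)f(n) from the question also works as a witness (take M = c + k), but choosing M = c directly makes the proof one line.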
As for comparing f and g: f(n) = O(g(n)) if there is a positive constant c such that
|f(n)| <= c*|g(n)| for all n >= n_0,
and since f(n) = n(n+1)/2, which grows like n^2,
and n^2 <= n^2 + n log n (ignoring the constant factors) for all n >= 1, then yes, your answer is right. (In fact f = Θ(g) is the tight statement: f(n) >= n^2/2 and g(n) <= 4n^2 for all n >= 1, so f is also Ω(g).)
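A quick numeric sanity check of that inequality (an Octave sketch; c = 1 and n up to 20 are arbitrary choices, and the logarithm base does not matter asymptotically):

n = 1:20;
f = n .* (n + 1) / 2;          % 1 + 2 + ... + n
g = 3 * n.^2 + n .* log(n);
all(f <= g)                    % -> 1: f(n) <= 1*g(n) for these n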

Plotting a 3D function with Octave

I am having a problem graphing a 3D function: when I enter data, I get a linear graph, and the values don't add up when I perform the calculations by hand. I believe the problem is related to how I am using matrices.
INITIAL_VALUE=999999;
INTEREST_RATE=0.1;
MONTHLY_INTEREST_RATE=INTEREST_RATE/12;
# ranges
down_payment=0.2*INITIAL_VALUE:0.1*INITIAL_VALUE:INITIAL_VALUE;
term=180:22.5:360;
[down_paymentn, termn] = meshgrid(down_payment, term);
# functions
principal=INITIAL_VALUE - down_payment;
figure(1);
plot(principal);
grid;
title("Principal (down payment)");
xlabel("down payment $");
ylabel("principal $ (amount borrowed)");
monthly_payment = (MONTHLY_INTEREST_RATE*(INITIAL_VALUE - down_paymentn))/(1 - (1 + MONTHLY_INTEREST_RATE)^-termn);
figure(2);
mesh(down_paymentn, termn, monthly_payment);
title("monthly payment (principal(down payment)) / term months");
xlabel("principal");
ylabel("term (months)");
zlabel("monthly payment");
The 2nd figure, like I said, doesn't plot like I expect. How can I change my formula for it to render properly?
I tried your script, and got the following error:
error: octave_base_value::array_value(): wrong type argument `complex matrix'
...
Your monthly_payment is a complex matrix (and it shouldn't be).
I guess the problem is the power operator ^. You should be using .^ for element-by-element operations.
From the documentation:
x ^ y
x ** y
Power operator. If x and y are both scalars, this operator returns x raised to the power y. If x is a scalar and y is a square matrix, the result is computed using an eigenvalue expansion. If x is a square matrix, the result is computed by repeated multiplication if y is an integer, and by an eigenvalue expansion if y is not an integer. An error results if both x and y are matrices.
The implementation of this operator needs to be improved.
x .^ y
x .** y
Element by element power operator. If both operands are matrices, the number of rows and columns must both agree.
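Applied to the script above, the offending line would become the following (a sketch, untested here; note the division also has to be element-wise, because once termn is a matrix, both the numerator and the denominator are matrices):

monthly_payment = (MONTHLY_INTEREST_RATE * (INITIAL_VALUE - down_paymentn)) ...
                  ./ (1 - (1 + MONTHLY_INTEREST_RATE) .^ (-termn));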

Mixing addition and subtraction with logical NOT

I found some exercises where you combine n-bit two's complement values in different ways and simplify the output where possible. (Their practice exercises use 16-bit values, but that's irrelevant.)
Eg:
!(!x&!y) == x|y
0 & y, negate the output == -1
I'm having no problem applying De Morgan's laws with the examples using AND, OR, and NOT but I am having difficulty using NOT with + and -
Eg:
!(!x+y) == x-y
!(y-1) == -y
How does NOT distribute?
Edit, responding to comments: I realize this is a bitwise NOT. My question is: in algebraic terms, how does it distribute? (There is an example on Wikipedia.)
With two's complement numbers, bitwise NOT is the same as negating the number and subtracting 1, so !x is equivalent to -x - 1, where x can be a single variable or an expression.
Starting with !(!x + y): !x is going to be -x - 1, so the expression is !(-x - 1 + y), which becomes -(-x - 1 + y) - 1, which simplifies to x - y.
And !(y - 1) becomes -(y - 1) - 1 = -y + 1 - 1 = -y.
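A quick numeric check of those two identities (an Octave sketch that models bitwise NOT through the -v - 1 identity above rather than actual bit operations; x = 7 and y = 3 are arbitrary):

NOT = @(v) -v - 1;        % two's complement: !v == -v - 1
x = 7;  y = 3;
NOT(NOT(x) + y)           % -> 4, which equals x - y
NOT(y - 1)                % -> -3, which equals -y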