I am using CVXPY with CPLEX as solver to solve a very simple Lp-regression problem.
The Python script I'm using is the following:
import cvxpy as cp
import numpy as np
import sys
m = 4
n = 2
p = float(sys.argv[1])
A = np.ones((m, n))
b = np.arange(m)
x = cp.Variable(n)
cost = cp.pnorm(A @ x - b, p)
prob = cp.Problem(cp.Minimize(cost))
prob.solve(solver='CPLEX', verbose=True, cplex_filename='model.lp')
If I look at the model.lp for p = 1, I see that CPLEX is solving a standard Linear Programming model, as I expected:
\ENCODING=ISO-8859-1
\Problem name:
Minimize
obj1: x1 + x2 + x3 + x4
Subject To
c1: - x1 + x5 + x6 <= 0
c2: - x2 + x5 + x6 <= 1
c3: - x3 + x5 + x6 <= 2
c4: - x4 + x5 + x6 <= 3
c5: - x1 - x5 - x6 <= 0
c6: - x2 - x5 - x6 <= -1
c7: - x3 - x5 - x6 <= -2
c8: - x4 - x5 - x6 <= -3
Bounds
x1 Free
x2 Free
x3 Free
x4 Free
x5 Free
x6 Free
End
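For reference, this LP is just the usual epigraph formulation of the 1-norm, which could be written out by hand in CVXPY roughly like this (a sketch, not the exact code CVXPY generates; x1..x4 in the LP file correspond to the auxiliary variable t here, and x5, x6 to the original x):
import cvxpy as cp
import numpy as np
m, n = 4, 2
A = np.ones((m, n))
b = np.arange(m)
x = cp.Variable(n)
t = cp.Variable(m)  # one epigraph variable per residual
constraints = [A @ x - b <= t, -(A @ x - b) <= t]
prob = cp.Problem(cp.Minimize(cp.sum(t)), constraints)
prob.solve(solver=cp.CPLEX)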
Also, for p = 2 I obtain a very simple model, I guess because the resulting problem is a standard least-squares problem:
\ENCODING=ISO-8859-1
\Problem name:
Minimize
obj1: x_0
Subject To
c1: - x_0 + soc_t_0 = 0
c2: - x_1 - x_2 + soc_x_1 = 0
c3: - x_1 - x_2 + soc_x_2 = -1
c4: - x_1 - x_2 + soc_x_3 = -2
c5: - x_1 - x_2 + soc_x_4 = -3
q1: [ - soc_t_0 ^2 + soc_x_1 ^2 + soc_x_2 ^2 + soc_x_3 ^2 + soc_x_4 ^2 ] <= 0
Bounds
x_0 Free
x_1 Free
x_2 Free
soc_x_1 Free
soc_x_2 Free
soc_x_3 Free
soc_x_4 Free
End
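Similarly, the p = 2 model is just the epigraph of the Euclidean norm, minimize x_0 subject to ||Ax - b||_2 <= x_0, i.e. a single second-order cone constraint. A hand-written CVXPY equivalent (again only a sketch):
import cvxpy as cp
import numpy as np
m, n = 4, 2
A = np.ones((m, n))
b = np.arange(m)
x = cp.Variable(n)
t = cp.Variable()  # the epigraph variable x_0 in the LP file
prob = cp.Problem(cp.Minimize(t), [cp.norm(A @ x - b, 2) <= t])
prob.solve(solver=cp.CPLEX)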
On the other hand, if I start to plug in other values for p, I get significantly larger and more complex models that I don't quite understand. I also observed that the number of constraints in the resulting model depends on the value of p. This is the one for p = 3:
\ENCODING=ISO-8859-1
\Problem name:
Minimize
obj1: x_0
Subject To
c1: - x_0 + x_7 + x_8 + x_9 + x_10 = 0
c2: x_1 + x_2 - x_3 <= 0
c3: x_1 + x_2 - x_4 <= 1
c4: x_1 + x_2 - x_5 <= 2
c5: x_1 + x_2 - x_6 <= 3
c6: - x_1 - x_2 - x_3 <= 0
c7: - x_1 - x_2 - x_4 <= -1
c8: - x_1 - x_2 - x_5 <= -2
c9: - x_1 - x_2 - x_6 <= -3
c10: - x_0 - x_11 + soc_t_9 = 0
c11: - x_0 + x_11 + soc_x_10 = 0
c12: - 2 x_3 + soc_x_11 = 0
c13: - x_0 - x_12 + soc_t_12 = 0
c14: - x_0 + x_12 + soc_x_13 = 0
c15: - 2 x_4 + soc_x_14 = 0
c16: - x_0 - x_13 + soc_t_15 = 0
c17: - x_0 + x_13 + soc_x_16 = 0
c18: - 2 x_5 + soc_x_17 = 0
c19: - x_0 - x_14 + soc_t_18 = 0
c20: - x_0 + x_14 + soc_x_19 = 0
c21: - 2 x_6 + soc_x_20 = 0
c22: - x_3 - x_7 + soc_t_21 = 0
c23: x_3 - x_7 + soc_x_22 = 0
c24: - 2 x_11 + soc_x_23 = 0
c25: - x_4 - x_8 + soc_t_24 = 0
c26: x_4 - x_8 + soc_x_25 = 0
c27: - 2 x_12 + soc_x_26 = 0
c28: - x_5 - x_9 + soc_t_27 = 0
c29: x_5 - x_9 + soc_x_28 = 0
c30: - 2 x_13 + soc_x_29 = 0
c31: - x_6 - x_10 + soc_t_30 = 0
c32: x_6 - x_10 + soc_x_31 = 0
c33: - 2 x_14 + soc_x_32 = 0
q1: [ - soc_t_9 ^2 + soc_x_10 ^2 + soc_x_11 ^2 ] <= 0
q2: [ - soc_t_12 ^2 + soc_x_13 ^2 + soc_x_14 ^2 ] <= 0
q3: [ - soc_t_15 ^2 + soc_x_16 ^2 + soc_x_17 ^2 ] <= 0
q4: [ - soc_t_18 ^2 + soc_x_19 ^2 + soc_x_20 ^2 ] <= 0
q5: [ - soc_t_21 ^2 + soc_x_22 ^2 + soc_x_23 ^2 ] <= 0
q6: [ - soc_t_24 ^2 + soc_x_25 ^2 + soc_x_26 ^2 ] <= 0
q7: [ - soc_t_27 ^2 + soc_x_28 ^2 + soc_x_29 ^2 ] <= 0
q8: [ - soc_t_30 ^2 + soc_x_31 ^2 + soc_x_32 ^2 ] <= 0
Bounds
x_0 Free
x_1 Free
x_2 Free
x_3 Free
x_4 Free
x_5 Free
x_6 Free
x_7 Free
x_8 Free
x_9 Free
x_10 Free
x_11 Free
x_12 Free
x_13 Free
x_14 Free
soc_x_10 Free
soc_x_11 Free
soc_x_13 Free
soc_x_14 Free
soc_x_16 Free
soc_x_17 Free
soc_x_19 Free
soc_x_20 Free
soc_x_22 Free
soc_x_23 Free
soc_x_25 Free
soc_x_26 Free
soc_x_28 Free
soc_x_29 Free
soc_x_31 Free
soc_x_32 Free
End
I guess this is because general Lp-regression problems can't be solved with the same techniques that can be used for p = 1, p = 2, or even p = infinity (as explained in the book "Convex Optimization").
Any idea of which technique is being applied here by CPLEX to obtain these models?
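Note that the reformulation happens inside CVXPY before the solver is called, so model.lp shows the conic problem that CVXPY hands to CPLEX rather than anything CPLEX builds itself. One way to inspect that canonicalized problem directly is get_problem_data; the snippet below is a sketch, and the exact layout of the returned data dict can differ between CVXPY versions:
import sys
import cvxpy as cp
import numpy as np
m, n = 4, 2
p = float(sys.argv[1])
A = np.ones((m, n))
b = np.arange(m)
x = cp.Variable(n)
prob = cp.Problem(cp.Minimize(cp.pnorm(A @ x - b, p)))
data, chain, inverse_data = prob.get_problem_data(cp.CPLEX)
print(data["dims"])  # e.g. the second-order cone sizes generated for this value of p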
I have the functional equation
B(2z^4 + 4z^6 + 9z^8 + 20z^{10} + 44z^{12} + 96z^{14}) = (B(z))^4
I tried to solve it using Maxima CAS:
(%i2) e: B(2*z^4 + 4*z^6 + 9*z^8 + 20*z^10 + 44*z^12 + 96*z^14) = (B(z))^4;
(%o2) B(96*z^14 + 44*z^12 + 20*z^10 + 9*z^8 + 4*z^6 + 2*z^4) = B(z)^4
(%i3) funcsolve (e,B(z));
expt: undefined: 0 to a negative exponent.
#0: rform(%r=[0,0])
#1: funcsol(%a=B(96*z^14+44*z^12+20*z^10+9*z^8+4*z^6+2*z^4) = B(z)^4,%f=B(z),l%=[])
#2: funcsolve(%a=B(96*z^14+44*z^12+20*z^10+9*z^8+4*z^6+2*z^4) = B(z)^4,%f=B(z))
#3: funcsolve(_l=[B(96*z^14+44*z^12+20*z^10+9*z^8+4*z^6+2*z^4) = B(z)^4,B(z)])
-- an error. To debug this try: debugmode(true);
Here is a simpler example:
define(f(z),z^2-1)
(%o3) f(z):=z^2-1
(%i4) f2:factor(f(f(z)))
(%o4) z^2*(z^2-2)
(%i5) e:B(f2) = B(z)^2
(%o5) B(z^2*(z^2-2)) = B(z)^2
(%i6) s:funcsolve(e,B(z))
expt: undefined: 0 to a negative exponent.
#0: rform(%r=[0,0])
#1: funcsol(%a=B(z^2*(z^2-2)) = B(z)^2,%f=B(z),l%=[])
#2: funcsolve(%a=B(z^2*(z^2-2)) = B(z)^2,%f=B(z))
#3: funcsolve(_l=[B(z^2*(z^2-2)) = B(z)^2,B(z)])
-- an error. To debug this try: debugmode(true);
How should I do it?
Is there other software or another method for it?
I have 4 differential equations (representing the orbit equations of planets)
x'[t] == px[t] + y[t]
y'[t] == py[t] - x[t]
px'[t] == py[t] - dVx[t]
py'[t] == -px[t] - dVy[t]
which I want to solve for x[t] and y[t] for any time t. The given variables are
x[0]==0
y[0]==0
px[0]==0
py[0]==2.0
\[Epsilon]==0.2
-dVx[t] == x[t] - (1 - \[Epsilon])*(x[t] + \[Epsilon])/((x[t] + \[Epsilon])^2 + y[t]^2)^(3/2) - \[Epsilon] (x[t] + \[Epsilon] - 1)/(((x[t] + \[Epsilon] - 1)^2 + y[t]^2)^(3/2))
-dVy[t] == y[t]*(1 - (1 - \[Epsilon])/((x[t] + \[Epsilon])^2 + y[t]^2)^(3/2) - \[Epsilon]/((x[t] + \[Epsilon] - 1)^2 + y[t]^2)^(3/2))
How could I get x, y for any time t and make a plot in the x,y plane? I tried it with NDSolve but I failed. My code is
In[49]:= -dVx[t] == x[t] - (1 - \[Epsilon])*(x[t] + \[Epsilon])/((x[t] + \[Epsilon])^2 + y[t]^2)^(3/2) - \[Epsilon] (x[t] + \[Epsilon] - 1)/(((x[t] + \[Epsilon] - 1)^2 + y[t]^2)^(3/2))
Out[49]= -dVx[t] == -(0.16/(0.04 + y[t]^2)^(3/2)) + 0.16/(0.64 + y[t]^2)^(3/2)
In[50]:= -dVy[t] == y[t]*(1 - (1 - \[Epsilon])/((x[t] + \[Epsilon])^2 + y[t]^2)^(3/2) - \[Epsilon]/((x[t] + \[Epsilon] - 1)^2 + y[t]^2)^(3/2))
Out[50]= -dVy[t] == y[t] (1 - 0.8/(0.04 + y[t]^2)^(3/2) - 0.2/(0.64 + y[t]^2)^(3/2))
In[56]:= DSolve[{x'[t] == px[t] + y[t], y'[t] == py[t] - x[t],
px'[t] == py[t] - dVx[t], py'[t] == -px[t] - dVy[t], px[0] == 0,
y[0] == 0, py[0] == 2.0, x[0] == 0, \[Epsilon] == 0.2}, {x[t],
y[t]}, t]
During evaluation of In[56]:= DSolve::dsfun: 0 cannot be used as a function.
Out[56]= DSolve[{Derivative[1][x][t] == px[t] + y[t],
Derivative[1][y][t] == 2., Derivative[1][px][t] == 2. - dVx[t],
Derivative[1][py][t] == -dVy[t] - px[t], px[0] == 0, y[0] == 0,
py[0] == 2., True, True}, {0, y[t]}, t]
I'm new to Mathematica; I'm glad for any help. I could use Python if that is easier.
Many syntax errors. Try this instead:
\[Epsilon] = 0.2;
dVx = -(x[t] - (1 - \[Epsilon])*(x[t] + \[Epsilon])/((x[t] + \[Epsilon])^2 + y[t]^2)^(3/2) - \[Epsilon] (x[t] + \[Epsilon] - 1)/(((x[t] + \[Epsilon] - 1)^2 + y[t]^2)^(3/2)));
dVy = -(y[t]*(1 - (1 - \[Epsilon])/((x[t] + \[Epsilon])^2 + y[t]^2)^(3/2) - \[Epsilon]/((x[t] + \[Epsilon] - 1)^2 + y[t]^2)^(3/2)));
NDSolve[{
x'[t] == px[t] + y[t],
y'[t] == py[t] - x[t],
px'[t] == py[t] - dVx,
py'[t] == -px[t] - dVy,
px[0] == 0,
y[0] == 0,
py[0] == 2,
x[0] == 0
}, {x[t], y[t], px[t], py[t]}, {t, 0, 1}]
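If Python is an option, here is a rough equivalent using SciPy, including the plot in the x,y plane (a sketch; the time span and tolerance are illustrative, not tuned):
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt
eps = 0.2
def rhs(t, state):
    x, y, px, py = state
    r1 = ((x + eps)**2 + y**2)**1.5      # distance to the first body, cubed
    r2 = ((x + eps - 1)**2 + y**2)**1.5  # distance to the second body, cubed
    # these are -dVx and -dVy exactly as given in the question
    mdVx = x - (1 - eps)*(x + eps)/r1 - eps*(x + eps - 1)/r2
    mdVy = y*(1 - (1 - eps)/r1 - eps/r2)
    return [px + y, py - x, py + mdVx, -px + mdVy]
sol = solve_ivp(rhs, (0, 1), [0, 0, 0, 2.0], dense_output=True, rtol=1e-9)
t = np.linspace(0, 1, 500)
x, y = sol.sol(t)[:2]
plt.plot(x, y)
plt.xlabel("x"); plt.ylabel("y")
plt.show()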
Hi, I am a junior in college and having trouble with my computer architecture classwork. Would anyone care to help and tell me if I got them right?
Question 1. Convert the truth table into a Boolean equation.
Question 2. Find the minimum SOP (sum of products).
Question 3. Use a K-map (Karnaugh map) to simplify.
You can simplify the original expression matching the given truth table just by using Karnaugh maps:
f(x,y,z) = ∑(1,3,4,6,7) = m1 + m3 + m4 + m6 + m7
= ¬x·¬y·z + ¬x·y·z + x·y·z + x·¬y·¬z + x·y·¬z //sum of minterms
f(x,y,z) = ∏(0,2,5) = M0 · M2 · M5
= (x + y + z)·(x + ¬y + z)·(¬x + y + ¬z) //product of maxterms
f(x,y,z) = x·y + ¬x·z + x·¬z //minimal DNF
= (x + z)·(¬x + y + ¬z) //minimal CNF
You would get the same result using the laws of Boolean algebra:
¬x·¬y·z + ¬x·y·z + x·y·z + x·y·¬z + x·¬y·¬z
¬x·(¬y·z + y·z) + x·(y·z + y·¬z + ¬y·¬z) //distributivity
¬x·(z·(¬y + y)) + x·(y·(z + ¬z) + ¬y·¬z) //distributivity
¬x·(z·( 1 )) + x·(y·( 1 ) + ¬y·¬z) //complementation
¬x·(z ) + x·(y + ¬y·¬z) //identity for ·
¬x·(z ) + x·(y + y·¬z + ¬y·¬z) //absorption
¬x·(z ) + x·(y + ¬z·(y + ¬y)) //distributivity
¬x·(z ) + x·(y + ¬z·( 1 )) //complementation
¬x·(z ) + x·(y + ¬z) //identity for ·
¬x·z + x·y + x·¬z //distributivity
¬x·z + x·y + x·¬z //minimal DNF
¬x·z + x·y + x·¬z
¬x·z + x·(y + ¬z) //distributivity
(¬x + x)·(¬x + (y + ¬z))·(z + x)·(z + (y + ¬z)) //distributivity
( 1 )·(¬x + y + ¬z )·(z + x)·(z + y + ¬z) //complementation
( 1 )·(¬x + y + ¬z )·(z + x)·(y + 1) //complementation
( 1 )·(¬x + y + ¬z )·(z + x)·(1) //annihilator for +
(¬x + y + ¬z )·(z + x) //identity for ·
(¬x + y + ¬z)·(x + z) //minimal CNF
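If you want to double-check the minimal forms, a brute-force comparison against the truth table is easy to script, for example in Python:
from itertools import product
minterms = {1, 3, 4, 6, 7}                                   # f(x,y,z) = sum(1,3,4,6,7)
for x, y, z in product((0, 1), repeat=3):
    truth = (4*x + 2*y + z) in minterms                      # value from the truth table
    dnf = bool((x and y) or (not x and z) or (x and not z))  # x·y + ¬x·z + x·¬z
    cnf = bool((x or z) and (not x or y or not z))           # (x + z)·(¬x + y + ¬z)
    assert truth == dnf == cnf
print("minimal DNF and CNF both match the truth table")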
How does one translate the following binary numbers to decimal? And yes, the decimal points are part of the binary values.
1) 101.011
2) .111
Each 1 corresponds to a power of 2; which power is used depends on the placement of the 1:
101.011
= 1*2^2 + 0*2^1 + 1*2^0 + 0*2^-1 + 1*2^-2 + 1*2^-3
= 1*4 + 1*1 + 1/4 + 1/8
= 5.375
.111
= 1*2^-1 + 1*2^-2 + 1*2^-3
= 1/2 + 1/4 + 1/8
= .875
If you don't like dealing with the decimal point you can always left shift by multiplying by a power of 2:
101.011 * 2^3 = 101011
Then convert that to decimal and, since you multiplied by 2^3 = 8, divide the result by 8 to get your answer. 101011 converts to 43 and 43/8 = 5.375.
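This trick is quick to check in Python, for instance:
print(int("101011", 2) / 2**3)  # 43 / 8 = 5.375
print(int("111", 2) / 2**3)     # 7 / 8 = 0.875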
1) 101.011
= 1*2^-3 + 1*2^-2 + 0*2^-1 + 1*2^0 + 0*2^1 + 1*2^2
= (1/8) + (1/4) + 0 + 1 + 0 + 4
= 5.375
2) .111
= 1*2^-3 + 1*2^-2 + 1*2^-1
= (1/8) + (1/4) + (1/2)
= .875
101.011 should be converted like below
(101) base2 = (2^0 + 2^2) = (1 + 4) = (5) base10
(.011) base2 = 0/2 + 1/4 + 1/8 = 3/8
So in total the decimal conversion would be
5 3/8 = 5.375
Fractional parts aside, this approach works on whole binary numbers (the digits after the point would use negative powers of 2 in the same way).
Here is a simple system
Let's take your binary number for example.
101011
Every position represents a power of 2, with the left-most position representing the highest power and the right-most representing 2^0. To visualize this, we can do the following.
1       0       1       0       1       1
2 ^ 5   2 ^ 4   2 ^ 3   2 ^ 2   2 ^ 1   2 ^ 0
We go by each position and do this math
1 * (2 ^ 5) + 0 * (2 ^ 4) + 1 * (2 ^ 3) + 0 * (2 ^ 2) + 1 * (2 ^ 1) + 1 * (2 ^ 0)
Doing the math gives us
(1 * 32) + (0 * 16) + (1 * 8) + (0 * 4) + (1 * 2) + (1 * 1) =
32 + 0 + 8 + 0 + 2 + 1 = 43
We get an answer of 43 this way.
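Putting the same place-value idea together with negative powers of 2 for the digits after the point, a small Python helper covers both of the original examples (a sketch; bin_to_dec is just an illustrative name):
def bin_to_dec(s):
    whole, _, frac = s.partition(".")
    value = sum(int(bit) * 2**i for i, bit in enumerate(reversed(whole)))   # whole part
    value += sum(int(bit) * 2**-(i + 1) for i, bit in enumerate(frac))      # fractional part
    return value
print(bin_to_dec("101.011"))  # 5.375
print(bin_to_dec(".111"))     # 0.875
print(bin_to_dec("101011"))   # 43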
As we all know, negative numbers in memory are usually represented as two's complement numbers, like this:
from x to ~x + 1
and to get back we don't do the obvious thing like
~([~x + 1] - 1)
but instead we do
~[~x + 1] + 1
Can someone explain why it always works? I think I can prove it for 1-bit, 2-bit, and 3-bit numbers and then use mathematical induction, but that doesn't help me understand how exactly it works.
Thanks!
That's the same thing anyway. That is, ~x + 1 == ~(x - 1). But let's put that aside for now.
f(x) = ~x + 1 is its own inverse. Proof:
~(~x + 1) + 1 =
(definition of subtraction: a - b = ~(~a + b))
x - 1 + 1 =
(you know this step)
x
Also, ~x + 1 == ~(x - 1). Why? Well,
~(x - 1) =
(definition of subtraction: a - b = ~(~a + b))
~(~(~x + 1)) =
(remove double negation)
~x + 1
And that (slightly unusual) definition of subtraction, a - b = ~(~a + b)?
~(~a + b) =
(use definition of two's complement, ~x = -x - 1)
-(~a + b) - 1 =
(move the 1)
-(~a + b + 1) =
(use definition of two's complement, ~x = -x - 1)
-(-a + b) =
(you know this step)
a - b
This is because if you increment ~x (assuming no overflow) and then convert back to x, you've incremented relative to ~x but decremented relative to x. The same thing applies vice versa: assuming your variable x has a specific value, every time you increment it, it decrements relative to ~x.
From a programmer's point of view, this is what you'd essentially witness.
Let short int x = 1 (0x0001)
then ~x = 65534 (0xFFFE)
~x + 1 = 65534 + 1 (0xFFFF)
~(~x+1) = 0 (0x0000)
~(~x+1) + 1 = 0 + 1 (0x0001)
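A quick way to convince yourself that these identities hold for every value, and not just x = 1, is to check them exhaustively on fixed-width integers, e.g. in Python over all 16-bit patterns:
BITS = 16
MASK = (1 << BITS) - 1
def neg(x):                              # two's complement negation: ~x + 1, kept to 16 bits
    return (~x + 1) & MASK
for x in range(1 << BITS):
    assert neg(neg(x)) == x              # ~(~x + 1) + 1 == x
    assert neg(x) == ~(x - 1) & MASK     # ~x + 1 == ~(x - 1)
print("both identities hold for all", 1 << BITS, "16-bit values")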