Understanding the output of a Pyomo model: number of solutions is zero?

I am using this code to create and solve a simple problem:
import pyomo.environ as pyo
from pyomo.core.expr.numeric_expr import LinearExpression

model = pyo.ConcreteModel()
model.nVars = pyo.Param(initialize=4)
model.N = pyo.RangeSet(model.nVars)
model.x = pyo.Var(model.N, within=pyo.Binary)
model.coefs = [1, 1, 3, 4]
model.linexp = LinearExpression(constant=0,
                                linear_coefs=model.coefs,
                                linear_vars=[model.x[i] for i in model.N])

def caprule(m):
    return m.linexp <= 50

model.capme = pyo.Constraint(rule=caprule)
model.obj = pyo.Objective(expr=model.linexp, sense=maximize)
results = SolverFactory('glpk', executable='/usr/bin/glpsol').solve(model)
results.write()
And this is the output:
# ==========================================================
# = Solver Results =
# ==========================================================
# ----------------------------------------------------------
# Problem Information
# ----------------------------------------------------------
Problem:
- Name: unknown
  Lower bound: 50.0
  Upper bound: 50.0
  Number of objectives: 1
  Number of constraints: 2
  Number of variables: 5
  Number of nonzeros: 5
  Sense: maximize
# ----------------------------------------------------------
# Solver Information
# ----------------------------------------------------------
Solver:
- Status: ok
  Termination condition: optimal
  Statistics:
    Branch and bound:
      Number of bounded subproblems: 0
      Number of created subproblems: 0
  Error rc: 0
  Time: 0.09727835655212402
# ----------------------------------------------------------
# Solution Information
# ----------------------------------------------------------
Solution:
- number of solutions: 0
  number of solutions displayed: 0
It says the number of solutions is 0, and yet it does solve the problem:
print(list(model.x[i]() for i in model.N))
Will output this:
[1.0, 1.0, 1.0, 1.0]
Which is a correct answer to the problem. What am I missing?

The interface between Pyomo and glpk sometimes (always?) seems to return 0 for the number of solutions. I'm assuming there is some issue with the generalized interface between the Pyomo core module and the various solvers it interfaces with. When I use the glpk and cbc solvers on this, both report the number of solutions as zero. Perhaps those solvers don't fill in that data element in the generalized interface; somebody with more experience in the data glob returned from the solver may know precisely. That said, the main thing to look at is the termination condition, which I've found to be always accurate. Here it reports optimal.
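For example, a minimal sketch (assuming the model from the question is already built and in scope) that gates on the termination condition rather than the reported solution count:

import pyomo.environ as pyo
from pyomo.opt import TerminationCondition

results = pyo.SolverFactory('glpk').solve(model)
if results.solver.termination_condition == TerminationCondition.optimal:
    # safe to read the solution even though "number of solutions" says 0
    print(pyo.value(model.obj))
    print([pyo.value(model.x[i]) for i in model.N])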
I suspect that you have some mixed code from another model in your example. When I fix a typo or two (you missed the pyo prefix on a few things), it solves fine and gives the correct objective value of 9. I'm not sure where the 50 in your output came from.
(slightly cleaned up) Code:
import pyomo.environ as pyo
from pyomo.core.expr.numeric_expr import LinearExpression

model = pyo.ConcreteModel()
model.nVars = pyo.Param(initialize=4)
model.N = pyo.RangeSet(model.nVars)
model.x = pyo.Var(model.N, within=pyo.Binary)
model.coefs = [1, 1, 3, 4]
model.linexp = LinearExpression(constant=0,
                                linear_coefs=model.coefs,
                                linear_vars=[model.x[i] for i in model.N])

def caprule(m):
    return m.linexp <= 50

model.capme = pyo.Constraint(rule=caprule)
model.obj = pyo.Objective(expr=model.linexp, sense=pyo.maximize)

solver = pyo.SolverFactory('glpk')  # , executable='/usr/bin/glpsol')
results = solver.solve(model)
print(results)
model.obj.pprint()
model.obj.display()
Output:
Problem:
- Name: unknown
  Lower bound: 9.0
  Upper bound: 9.0
  Number of objectives: 1
  Number of constraints: 2
  Number of variables: 5
  Number of nonzeros: 5
  Sense: maximize
Solver:
- Status: ok
  Termination condition: optimal
  Statistics:
    Branch and bound:
      Number of bounded subproblems: 0
      Number of created subproblems: 0
  Error rc: 0
  Time: 0.00797891616821289
Solution:
- number of solutions: 0
  number of solutions displayed: 0
obj : Size=1, Index=None, Active=True
    Key  : Active : Sense    : Expression
    None :   True : maximize : x[1] + x[2] + 3*x[3] + 4*x[4]
obj : Size=1, Index=None, Active=True
    Key  : Active : Value
    None :   True :   9.0


Problem while implementing the “Continual Learning Through Synaptic Intelligence” paper

I am trying to reproduce the results of the “Continual Learning Through Synaptic Intelligence” paper [1]. I tried implementing the algorithm as best as I could understand it after going through the paper many times. I also looked at its official implementation on GitHub, which is in TensorFlow 1.0, but could not understand much, as I don't have much familiarity with it.
Though I got some results, they are not as good as the paper's. I wanted to ask if anyone can help me find out where I am going wrong. Before going into coding details I want to discuss the pseudocode, so that I understand what is going wrong with my implementation.
Here is the kind of pseudocode that I have implemented. Please help me.
lambda = 1
xi = 1e-3
total_tasks = 5

model = NN(total_tasks)
## multi-headed linear model ([784(input) --> 256 --> 256 --> 2(output)] * 5, i.e. 5 separate heads)
## each output layer is a 2-neuron head (a separate head per task, 5 tasks total)
## the output is a vector of size 2 (for 2 classes)

prev_theta = model.theta(copy=True)  # updated at the end of each task
## model.theta() returns the list of shared parameters (layer1 and layer2, excluding the output layer)
## copy=True gives a copy of the parameters,
## so it doesn't affect the original params connected to the computational graph

omega_total = zeros_like(prev_theta)  ## capital Omega in the paper (per-parameter regularization strength)
omega = zeros_like(prev_theta)        ## small omega in the paper (per-parameter contribution to loss)

for task_num in range(total_tasks):
    optimizer = ADAM()  # created before every task (or reset)
    prev_theta_step = model.theta(copy=True)  # updated at the end of each step
    ## training for the task starts
    for epoch in range(10):
        for step in range(steps_per_epoch):
            X, Y = train_dataset[task_num].sample()
            ## X is a flattened image of size 784
            ## Y is a binary vector of size 2 ([0,1] or [1,0])
            Y_pred = model(X, task_num)  # model is multi-headed, task_num selects the head
            loss = CROSS_ENTROPY(Y_pred, Y)
            if task_num > 0:  ## reg_loss starts from the second task
                theta = model.theta()
                ## copy is not True here, so it returns the params connected to the computational graph
                reg_loss = torch.sum(omega_total * torch.square(theta - prev_theta))
                loss = loss + lambda * reg_loss
            optimizer.zero_grad()
            loss.backward()
            theta = model.theta(copy=True)
            grads = model.theta_grads()  ## grads of the shared parameters only
            omega = omega - grads * (theta - prev_theta_step)
            prev_theta_step = theta
            optimizer.step()
    ## training for the task complete; update the importance parameters
    theta = model.theta(copy=True)
    omega_total += relu(omega / ((theta - prev_theta)**2 + xi))
    prev_theta = theta
    omega = torch.zeros(theta_shape)
    ## evaluation code
    ...
    ...
    ...
    ## evaluation done
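For illustration, here is a minimal runnable PyTorch sketch (a hypothetical single-tensor toy, not the paper's code) of the small-omega accumulation; note that the gradient and the parameter delta are taken across the same optimizer step:

import torch

# Toy setup: one shared parameter tensor and a stand-in loss.
w = torch.nn.Parameter(torch.randn(4))
opt = torch.optim.SGD([w], lr=0.1)

omega = torch.zeros_like(w)     # small omega: running path integral
prev_w = w.detach().clone()     # parameter value before the upcoming step

for _ in range(100):
    loss = ((w - 2.0) ** 2).sum()   # stand-in task loss
    opt.zero_grad()
    loss.backward()
    grad = w.grad.detach().clone()  # gradient that drives this step
    opt.step()                      # w changes here
    omega -= grad * (w.detach() - prev_w)  # -g * delta_w for this step
    prev_w = w.detach().clone()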
I am also attaching the results I got. In the results, 'one' (blue) represents training without the regularization loss (lambda=0) and 'two' (green) represents training with it (lambda=1).
Thank you for reading so far. Kindly help me out.

Generator usage for log analysis

I have working Python code to analyze logs. The logs are at least 10 MB in size and can sometimes reach 250-300 MB, depending on failures and retries.
I use a generator that yields the big file in chunks; the chunk size is configurable, and I normally yield 1 or 2 MB at a time. So I analyze the logs in 1 MB chunks to verify some tests.
My problem is that using a generator brings up some edge cases. When analyzing a log I check for the subsequent appearance of certain patterns, as follows: only if all four of these lists are seen do I keep them for the next verification part of the code. The following four patterns can be seen in a log once or twice, not more.
listA
listB
listC
listD
If all of these occur in sequence, I keep them all to evaluate in the next step; otherwise I ignore them.
However, there is now a small change: some patterns (lists, as I use the regex findall method to find the patterns) can fall into the next chunk, completing the check there. In the following example there are three matching cases: chunks 3-4, chunks 5-6, and chunks 7-8 each create the condition to take into account.
---- chunk 1 -----
listA
listB
----- chunk 2 -----
nothing
----- chunk 3 -----
listA
listB
----- chunk 4 -----
listC
listD
----- chunk 5 -----
listA
----- chunk 6 -----
listB
listC
listD
---- chunk 7 ------
listA
listB
listC
----- chunk 8 ------
listD
---------------------
It usually does not happen like this; some patterns (B, C, D) are mostly seen together in the logs, but listA can appear 20, at most 30, rows earlier than the rest. Still, any scenario like the above can happen.
Please advise a good approach; I'm not sure what to use. I know the next() function can be used to check the next chunk. In that case, should I use any([listA, listB, listC, listD]), and if any of the patterns occurs, check the remaining ones one by one in the next chunk, like the following?
if any([listA, listB, listC, listD]):
    # check which of the patterns were not seen, keep them in a notSeen list,
    # then check them one by one in the next chunk
    next_chunk = next(gen_func(chunksize))
    isListA = re.findall(pattern, next_chunk)
Or maybe I am completely missing an easier approach for this little project. Please let me know your thoughts, as you may have experienced such a situation before.
I have used next_chunk = next(gen_func(chunksize)) and added the necessary if statements underneath to check only one next log piece, because I would arrange the log chunks suitably with a generator.
I shared only part of the code, as the rest is confidential:
import re, os

def __init__(self, logfile):
    self.read = self.ReadLog(logfile)
    self.search = self.SearchData(logfile)
    self.file = os.path.abspath(logfile)
    self.data = self.read.read_in_chunks
    r_search_combined, scan_result, r_complete, r_failed = [], [], [], []

def test123(self, r_reason: str, cyc: int, b_r):
    ''' Usage : verify_log.py
        --log ${LOGS_FOLDER}/log --reason r_low 1 <True | False>'''
    ret = False
    r_count = 2*int(cyc) if b_r.lower() == "true" else int(cyc)
    r_search_combined, scan_result, r_complete, r_failed = [], [], [], []
    result_pattern = self.search.r_scan_pattern()

    def check_patterns(chunk):
        search_cached = re.findall(self.search.r_search_cached, chunk)
        search_full = re.findall(self.search.r_search_full, chunk)
        scan_complete = re.findall(self.search.r_scan_complete, chunk)
        scan_result = re.findall(result_pattern, chunk)
        r_complete = re.findall(self.search.r_auth_complete, chunk)
        return search_cached, search_full, scan_complete, scan_result, r_complete

    with open(self.file) as rf:
        for idx, piece in enumerate(self.data(rf), start=1):
            is_failed = re.findall(self.search.r_failure, piece)
            if is_failed:
                print(f'general failure received : {is_failed}')
                r_failed.extend(is_failed)
            is_r_search_cached, is_r_search_full, is_scan_complete, is_scan, is_r_complete = check_patterns(piece)
            if (is_r_search_cached or is_r_search_full) and all([is_scan_complete, is_scan, is_r_complete]):
                if is_r_search_cached:
                    r_search_combined.extend(is_r_search_cached)
                if is_r_search_full:
                    r_search_combined.extend(is_r_search_full)
                scan_result.extend(is_scan)
                r_complete.extend(is_r_complete)
            elif (is_r_search_cached or is_r_search_full) and not any([is_scan, is_r_complete]):
                next_piece = next(self.data(rf))
                _, _, _, is_scan_next, is_r_complete_next = check_patterns(next_piece)
                if (is_r_search_cached or is_r_search_full) and all([is_scan_next, is_r_complete_next]):
                    r_search_combined.extend(is_r_search_cached)
                    r_search_combined.extend(is_r_search_full)
                    scan_result.extend(is_scan_next)
                    r_complete.extend(is_r_complete_next)
            elif (is_r_search_cached or is_r_search_full) and is_scan and not is_r_complete:
                next_piece = next(self.data(rf))
                _, _, _, _, is_r_complete_next = check_patterns(next_piece)
                if (is_r_search_cached or is_r_search_full) and all([is_scan, is_r_complete_next]):
                    r_search_combined.extend(is_r_search_cached)
                    r_search_combined.extend(is_r_search_full)
                    scan_result.extend(is_scan)
                    r_complete.extend(is_r_complete_next)
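For what it's worth, here is a boundary-safe alternative sketch (hypothetical pattern names, not the confidential code): instead of peeking at the next chunk with next(), keep a small tail of the previous chunk and prepend it to the current one, so a match that straddles a chunk boundary is found naturally:

import re

# stand-in regexes; the real patterns would go here
PATTERNS = {name: re.compile(name) for name in ('listA', 'listB', 'listC', 'listD')}

def scan_with_carry(f, chunk_size=1024 * 1024, carry_size=64 * 1024):
    """Yield (pattern_name, absolute_offset) for every match.

    The tail of the previous chunk is prepended to the current one, so a
    match spanning a chunk boundary is still seen in one string; carry_size
    must exceed the longest possible match.
    """
    carry = ''
    base = 0  # absolute file offset of window[0]
    while True:
        chunk = f.read(chunk_size)
        if not chunk:
            break
        window = carry + chunk
        for name, pat in PATTERNS.items():
            for m in pat.finditer(window):
                # matches ending inside the carried tail were already
                # reported while scanning the previous window
                if m.end() > len(carry):
                    yield name, base + m.start()
        carry = window[-carry_size:]
        base += len(window) - len(carry)

Sorting the yielded matches by offset then turns the listA -> listB -> listC -> listD ordering check into a simple linear scan, independent of where the chunk boundaries fell.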

Error in eval(expr, envir, enclos) while using Predict function

When I try to run predict() on the dataset, it keeps giving me this error:
Error in eval(expr, envir, enclos) : object 'LoanRange' not found
Here is part of the dataset:
LoanRange Loan.Type N WAFICO WALTV WAOrigRev WAPTValue
1 0-99999 Conventional 109 722.5216 63.55385 6068.239 0.6031879
2 0-99999 FHA 30 696.6348 80.00100 7129.650 0.5623650
3 0-99999 VA 13 698.6986 74.40525 7838.894 0.4892977
4 100000-149999 Conventional 860 731.2333 68.25817 6438.330 0.5962638
5 100000-149999 FHA 285 673.2256 82.42225 8145.068 0.5211495
6 100000-149999 VA 125 704.1686 87.71306 8911.461 0.5020074
7 150000-199999 Conventional 1291 738.7164 70.08944 8125.979 0.6045117
8 150000-199999 FHA 403 672.0891 84.65318 10112.192 0.5199632
9 150000-199999 VA 195 694.1885 90.77495 10909.393 0.5250807
10 200000-249999 Conventional 1162 740.8614 70.65027 8832.563 0.6111419
11 200000-249999 FHA 348 667.6291 85.13457 11013.856 0.5374226
12 200000-249999 VA 221 702.9796 91.76759 11753.642 0.5078298
13 250000-299999 Conventional 948 742.0405 72.22742 9903.160 0.6106858
Following is the code used for predicting the count data N after determining overdispersion:
model2=glm(N~Loan.Type+WAFICO+WALTV+WAOrigRev+WAPTValue, family=quasipoisson(link = "log"), data = DF)
summary(model2)
This is what I have done to create a sequence of counts and use the predict function:
countaxis <- seq(0, 1500, 150)
Y <- predict(model2, list(N=countaxis), type = "response")
At this step, I get the error:
Error in eval(expr, envir, enclos) : object 'LoanRange' not found
Can someone please point out where the problem is?
Think about what exactly you are trying to predict. You are providing the predict function values of N (via countaxis), but the way you set up your model, N is the response variable and the remaining variables are the predictors. That's why R is asking for LoanRange: it needs values for LoanRange, Loan.Type, ..., WAPTValue in order to predict N. So you need to feed predict inputs that let the model predict N.
For example, you could do something like this:
# create some fake data to predict N
newdata1 = data.frame(rbind(c("0-99999", "Conventional", 722.5216, 63.55385, 6068.239, 0.6031879),
                            c("150000-199999", "VA", 12.5216, 3.55385, 60.239, 0.0031879)))
colnames(newdata1) = c("LoanRange", "Loan.Type", "WAFICO", "WALTV", "WAOrigRev", "WAPTValue")
# ensure that numeric variables are indeed numeric and not factors
newdata1$WAFICO = as.numeric(as.character(newdata1$WAFICO))
newdata1$WALTV = as.numeric(as.character(newdata1$WALTV))
newdata1$WAPTValue = as.numeric(as.character(newdata1$WAPTValue))
newdata1$WAOrigRev = as.numeric(as.character(newdata1$WAOrigRev))
# make predictions - this will output values of N
predict(model2, newdata = newdata1, type = "response")

Database update weirdness

I am developing a game in Django in which the quantity of rebels updates after a turn change. Rebels are represented as integers from 0 to 100 in the (MySQL) database.
When I save, this happens:
print world.rebels
>>> 0
rebstab = 0
world.rebels += rebstab
world.save()
print world.rebels
>>> 0
However, when I use F() expressions (as I gather I should to prevent race conditions), this happens:
print world.rebels
>>> 0
rebstab = 0
world.rebels = F('rebels') + rebstab
world.save()
print world.rebels
>>> 100
What's going on?
I don't have any experience with using F() objects like this so this answer may not be useful, but having a look at the documentation it mentions:
In order to access the new value that has been saved in this way, the object will need to be reloaded:
reporter = Reporters.objects.get(pk=reporter.pk)
so you may have to actually query the DB again to see the updated value:
print world.rebels
>>> 0
rebstab = 0
world.rebels = F('rebels') + rebstab
world.save()
print World.objects.get(pk=world.pk).rebels
>>> 100
Edit: Also look at the example where you don't need to pull the objects into memory and can do everything at the DB level:
World.objects.filter(pk=...).update(rebels=F('rebels') + 1)
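Note that on Django 1.8+ you can also reload the instance in place with refresh_from_db() instead of re-querying by hand:

from django.db.models import F

world.rebels = F('rebels') + rebstab
world.save()
world.refresh_from_db()  # re-reads the row, so world.rebels is a plain integer again
print(world.rebels)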
I just figured out that the reason had nothing to do with the F() object itself, but rather with a conflict with a custom save method. See Django F() objects and custom saves weirdness for a more in-depth explanation.

What's the correct way to expand a [0,1] interval to [a,b]?

Many random-number generators return floating-point numbers between 0 and 1.
What's the best and correct way to get integers between a and b?
Divide the interval [0,1] into B-A+1 bins.
Example: A=2, B=5
[----+----+----+----]
0   1/4  1/2  3/4   1
The four bins map to 2, 3, 4, 5.
The problem with the formula
Int(Rnd() * (B-A+1)) + A
is that the Rnd() generation interval is closed on both sides: 0 and 1 are both possible outputs, and the formula yields 6 when Rnd() is exactly 1.
In a truly random distribution (not pseudo-random), an exact 1 has probability zero, so I think it is safe enough to program something like:
r = Rnd()
if r equals 1
    MyInt = B
else
    MyInt = Int(r * (B - A + 1)) + A
endif
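The same guard in Python, with a hypothetical rnd argument that is assumed to return floats in the closed interval [0, 1]:

import random

def rand_int_closed(a, b, rnd=random.random):
    """Map rnd() from the CLOSED interval [0, 1] to an integer in [a, b]."""
    r = rnd()
    if r == 1.0:
        return b  # the edge case the bare formula would map to b + 1
    return a + int(r * (b - a + 1))

(Python's own random.random() is half-open, so the guard never fires there; it only matters for generators that can return exactly 1.)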
Edit
Just a quick test in Mathematica:
Define our function:
f[a_, b_] := If[(r = RandomReal[]) == 1, b, IntegerPart[r (b - a + 1)] + a]
Build a table with 3×10^5 numbers in [1,100]:
table = SortBy[Tally[Table[f[1, 100], {300000}]], First]
Check minimum and maximum:
In[137]:= {Max[First /@ table], Min[First /@ table]}
Out[137]= {100, 1}
Let's see the distribution:
BarChart[Last /@ SortBy[Tally[Table[f[1, 100], {300000}]], First],
         ChartStyle -> "DarkRainbow"]
For a continuous value in [A, B]:
X = (Rand() * (B - A)) + A
Another way to look at it, where r is your random number in the range 0 to 1:
(1-r)a + rb
As for your additional requirement of the result being an integer, maybe (apart from using built-in casting) the modulus operator can help you out. Check out this question and its answer:
Expand a random range from 1–5 to 1–7
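As a sketch, the (1-r)a + rb form above is just linear interpolation (hypothetical helper name):

import random

def lerp(a, b, r):
    # maps r in [0, 1] onto [a, b]: r=0 gives a, r=1 gives b
    return (1 - r) * a + r * b

# e.g. lerp(2, 5, random.random()) is a float in [2.0, 5.0)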
Well, why not just look at how Python does it itself? Read random.py in your installation's lib directory.
After gutting it to only support the behavior of random.randint() (which is what you want) and removing all error checks for non-integer or out-of-bounds arguments, you get:
import random

def randint(start, stop):
    width = stop + 1 - start
    return start + int(random.random() * width)
Testing:
>>> l = []
>>> for i in range(2000000):
... l.append(randint(3,6))
...
>>> l.count(3)
499593
>>> l.count(4)
499359
>>> l.count(5)
501432
>>> l.count(6)
499616
>>>
Assuming r_a_b is the desired random number between a and b, and r_0_1 is a random number between 0 and 1, the following should work just fine:
r_a_b = (r_0_1 * (b-a)) + a