Python backtracking - recursive-backtracking

I have a basic problem in Python where I have to verify whether my backtracking code found any solutions (I have to find all sublists of the numbers 1 to n with the property |x[i] - x[i-1]| == m). How do I check whether there are any solutions? The potential solutions I find are printed rather than saved in memory, and I have to print a proper message if there are no solutions.

As I suggested in a comment, you might want to separate the computation from the I/O (printing) by creating a generator of your solutions to |x[i] - x[i-1]| == m.
Let's assume you defined a generator for yielding your solutions:
def mysolutions(...):
    ....
    # Something with 'yield', or maybe not.
    ....
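For concreteness, here is one possible shape for such a generator. The recursion details are an assumption, since the original backtracking code is not shown; it treats a solution as a sequence of at least two distinct numbers from 1 to n whose consecutive elements differ by exactly m:

def mysolutions(n, m):
    """Yield sequences of distinct numbers from 1..n in which
    consecutive elements differ by exactly m (an assumed reading
    of the problem statement)."""
    def extend(partial):
        if len(partial) >= 2:  # a single element has no pair to test
            yield list(partial)
        last = partial[-1] if partial else None
        for candidate in range(1, n + 1):
            if candidate in partial:
                continue
            if last is None or abs(candidate - last) == m:
                partial.append(candidate)
                yield from extend(partial)  # recurse deeper...
                partial.pop()               # ...then backtrack
    yield from extend([])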
Here is a decorator that you can use to check that an implemented generator function actually yields at least one value:
from functools import wraps
from itertools import chain

def checkGen(genfunc):
    """Decorator used to check that we have data in the generator."""
    @wraps(genfunc)
    def wrapper(*args, **kwargs):
        generator = genfunc(*args, **kwargs)
        try:
            first = next(generator)
        except StopIteration:
            raise RuntimeError("I had no value!")
        # Put the consumed first item back in front of the rest.
        return chain([first], generator)
    return wrapper
Using this decorator, you can now define your previous solution with:
@checkGen
def mysolutions(...):
    ....
Then, you can simply use it as is, keeping the computation separate from the I/O:
try:
    for solution in mysolutions(...):
        print(solution)  # Probably needs some formatting
except RuntimeError:
    print("I found no value (or there was some error somewhere...)")

Related

In Jupyter Notebook, how to show output within a function without using print

I want the code to show "123" without using the built-in print function, but it does not. What should I do?
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"

def myf():
    "123"

myf()
If I understood right, you just want to print the "123", right?
def myf():
    print("123")

myf()
If you want to receive the "123" as the result of your def, it would be something like this:
def myf():
    x = "123"
    return x

Z = myf()
print(Z)
123
You can use the display function:
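The snippet that went with this is not shown; a minimal sketch using IPython's display helper would be:

from IPython.display import display

def myf():
    display("123")  # display() renders output even from inside a function body

myf()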
Although, I don't think that's what you want. The settings you're enabling:
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
Only apply to returned values; that's why you don't see it printed. If you return the value and call the function a few times, you'll see them:
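For example (a sketch, assuming the setting from above is active in the notebook):

def myf():
    return "123"

myf()
myf()  # with ast_node_interactivity = "all", each returned value is echoed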
I don't have a direct answer to this, but I have some pointers to the problem.
Normal output is written either to stdout or stderr by some Python-provided method. However, the IPython feature of inspecting a value, by stating either a direct value ("123") or a variable (first line a = "123", second line a), uses a different mechanism. That output cannot be captured with the simple %%capture magic in Jupyter; it vanishes in the scope of the function definition.
I do agree that this would be useful; in machine learning we sometimes use dependency-inversion-like structures, where we modify functions instead of line-by-line code, and debugging gets hard because we cannot capture some of the outputs without injecting a print or display. However, not using display might have unwanted and hard-to-predict consequences, as some of the models can be rather verbose in what they write. Still, capturing some outputs from user-defined cells without extra prints and displays would be a nice feature.
Notice that sometimes print doesn't work but display does; print might not always understand how utilities in pandas or matplotlib work.
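For example, printing and displaying a pandas DataFrame differ (a sketch, assuming pandas is installed):

import pandas as pd
from IPython.display import display

df = pd.DataFrame({'a': [1, 2]})

def show(frame):
    print(frame)    # plain-text rendering via __str__
    display(frame)  # rich HTML table when run in Jupyter

show(df)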

Learning about constructors in Python

I have recently learnt about exception handling concepts. I am not able to understand why self.arg = arg doesn't work, whereas self.msg = arg works, in the code below.
class MyException(Exception):
    def __init__(self, arg):
        self.msg = arg

def check(key, value):
    print('Name={} Balance={}'.format(key, value))
    if value < 2000:
        raise MyException('Balance amt is less in the account of {}'.format(key))

bank = {'Raj': 5000, 'Vikas': 10000, 'Nishit': 500, 'John': 321211}
for k, v in bank.items():
    try:
        check(k, v)
    except MyException as obj:
        print(obj)
It should work. You're simply assigning the value of the parameter arg to the instance variable arg; there is no conflict there. It's a classic example that can be found in the documentation.
Maybe you mistyped, used a symbol from another alphabet, or dropped a space on that line? Without the error message, all I can do is guess. So try replacing msg in your working code with arg once again; it may fix the problem.
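For what it's worth, this minimal variant (a sketch with a single string argument) runs fine with arg:

class MyException(Exception):
    def __init__(self, arg):
        self.arg = arg  # parameter and attribute sharing a name is not a conflict

try:
    raise MyException('Balance amt is less in the account of Raj')
except MyException as obj:
    print(obj.arg)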
By the way, it would be easier to use f-strings instead of formatting:
f'Name={key} Balance={value}'
f'Balance amt is less in the account of {key}'
It's shorter, you don't have to worry about messing up the order or number of arguments, and the code is more readable.

How does Ruby's JSON.parse differ from Oj.load in terms of allocating memory/object IDs

This is my first question, and I have tried my best to find an answer. I have looked everywhere, in both the Oj docs and the Ruby JSON docs, and here, but haven't managed to find anything concrete.
Oj is a gem that serves to improve serialization/deserialization speeds and can be found at: https://github.com/ohler55/oj
I noticed this difference when I dumped and parsed a hash with a NaN in it twice and compared the two results, i.e.
# Create JSON dump
dump = JSON.dump({x: Float::NAN})
# Create first JSON load
json_load = JSON.parse(dump, allow_nan: true)
# Create second JSON load
json_load_2 = JSON.parse(dump, allow_nan: true)
# Create first Oj load
oj_load = Oj.load(dump, :mode => :compat)
# Create second Oj load
oj_load_2 = Oj.load(dump, :mode => :compat)

json_load == json_load_2 # Returns true
oj_load == oj_load_2 # Returns false
I always thought NaN could not be compared to NaN, so this confused me for a while, until I realised that the NaN in json_load and the one in json_load_2 share the same object ID, whereas those in oj_load and oj_load_2 do not.
Can anyone point me in the direction of where this memory allocation/object ID allocation occurs or how I can control that behaviour with OJ?
Thanks, and sorry if the answer is floating somewhere on the internet where I could not find it.
Additional info:
I am running Ruby 1.9.3.
Here's the output from my tests re object IDs:
puts Float::NAN.object_id; puts JSON.parse(%q({"x":NaN}), allow_nan: true)["x"].object_id; puts JSON.parse(%q({"x":NaN}), allow_nan: true)["x"].object_id
70129392082680
70129387898880
70129387898880
puts Float::NAN.object_id; puts Oj.load(%q({"x":NaN}), allow_nan: true)["x"].object_id; puts Oj.load(%q({"x":NaN}), allow_nan: true)["x"].object_id
70255410134280
70255410063100
70255410062620
Perhaps I am doing something wrong?
I believe that is a deep implementation detail. Oj does this:
if (ni->nan) {
    rnum = rb_float_new(0.0/0.0);
}
I can't find a Ruby equivalent for that (Float.new doesn't appear to exist), but it does create a new Float object every time, from an actual C NaN constructed on the spot, hence the different object_ids.
Whereas Ruby's JSON module uses (also in C) its own JSON::NaN Float object everywhere:
CNaN = rb_const_get(mJSON, rb_intern("NaN"));
That explains why you get different NaNs' object_ids with Oj and same with Ruby's JSON.
No matter what object_ids the resulting hashes have, the problem is with the NaNs: if they have the same object_id, the enclosing hashes are considered equal; if not, they are not.
According to the docs, Hash#== uses Object#== for values, which outputs true if and only if the argument is the same object (same object_id). That contradicts NaN's property of never being equal to itself.
Spectacular. Inheritance gone haywire.
One could probably modify Oj's C code (and even make a pull request with it) to use a constant, like Ruby's JSON module does. It's a subtle change, but it's in the spirit of being compat, I guess.
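As an aside (a Python sketch, unrelated to Oj's internals), the same identity-before-equality shortcut exists in Python's containers, which may help build intuition for the behaviour above:

nan = float('nan')
print(nan == nan)                          # False: NaN never equals itself
print({'x': nan} == {'x': nan})            # True: same object, identity short-circuits
print({'x': nan} == {'x': float('nan')})   # False: two distinct NaN objects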

Python practices: Is there a better way to check constructor parameters?

I find myself trying to convert constructor parameters to their right types very often in my Python programs. So far I've been using code similar to this, so I don't have to repeat the exception arguments:
class ClassWithThreads(object):
    def __init__(self, num_threads):
        try:
            self.num_threads = int(num_threads)
            if self.num_threads <= 0:
                raise ValueError()
        except ValueError:
            raise ValueError("invalid thread count")
Is this good practice? Or should I just not bother catching any exceptions on conversion and let them propagate to the caller, with the possible disadvantage of less meaningful and consistent error messages?
When I have a question like this, I go hunting in the standard library for code that I can model my code after. multiprocessing/pool.py has a class somewhat close to yours:
class Pool(object):
    def __init__(self, processes=None, initializer=None, initargs=(),
                 maxtasksperchild=None):
        ...
        if processes is None:
            try:
                processes = cpu_count()
            except NotImplementedError:
                processes = 1
        if processes < 1:
            raise ValueError("Number of processes must be at least 1")
        if initializer is not None and not hasattr(initializer, '__call__'):
            raise TypeError('initializer must be a callable')
Notice that it does not say
processes = int(processes)
It just assumes you sent it an integer, not a float or a string, or whatever.
It should be pretty obvious, but if you feel it is not, I think it suffices to just document it.
It does raise ValueError if processes < 1, and it does check that initializer, when given, is callable.
So, if we take multiprocessing.Pool as a model, your class should look like this:
class ClassWithThreads(object):
    def __init__(self, num_threads):
        self.num_threads = num_threads
        if self.num_threads < 1:
            raise ValueError('Number of threads must be at least 1')
"Wouldn't this approach possibly fail very unpredictably for some conditions?"
I think preemptive type checking generally goes against the grain of Python's (dynamic, duck-typing) design philosophy.

Duck typing gives Python programmers great expressive power and rapid code development, but (some might say) it is dangerous because it makes no attempt to catch type errors.

Some argue that logical errors are far more serious and frequent than type errors, and you need unit tests to catch those more serious errors anyway. So even if you do do preemptive type checking, it does not add much protection.

This debate lies in the realm of opinions, not facts, so it is not a resolvable argument. Which side of the fence you sit on may depend on your experience and your judgment of the likelihood of type errors; it may be biased by the languages you already know; it may depend on your problem domain.

You just have to decide for yourself.
PS. In a statically typed language, the type checks can be done at compile-time, thus not impeding the speed of the program. In Python, the type checks have to occur at run-time. This will slow the program down a bit, and maybe a lot if the checking occurs in a loop. As the program grows, so will the number of type checks. And unfortunately, many of those checks may be redundant. So if you really believe you need type checking, you probably should be using a statically-typed language.
PPS. There are decorators for type checking (one recipe for Python 2, another for Python 3). These would separate the type-checking code from the rest of the function, and allow you to more easily turn off type checking in the future if you so choose.
You could use a type checking decorator like this ActiveState recipe, or this other one for Python 3. They allow you to write code something like this:
#require("x", int, float)
#require("y", float)
def foo(x, y):
return x+y
that will raise an exception if the arguments are not of the required types. You could easily extend the decorators to check that the arguments have valid values as well.
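Those recipes are not reproduced here, but a minimal sketch of such a decorator (the name require and its behaviour are illustrative assumptions, not the recipes' actual code) could look like:

import functools
import inspect

def require(arg_name, *allowed_types):
    """Decorator: raise TypeError if the named argument is not of an allowed type."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            # Map positional and keyword arguments onto the parameter names.
            bound = inspect.signature(func).bind(*args, **kwargs)
            if arg_name in bound.arguments:
                value = bound.arguments[arg_name]
                if not isinstance(value, allowed_types):
                    raise TypeError('%s must be one of %r, got %s'
                                    % (arg_name, allowed_types, type(value).__name__))
            return func(*args, **kwargs)
        return wrapper
    return decorator

With that in place, foo(1, 2.0) returns 3.0, while foo(1, 2) raises TypeError because y is not a float.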
This is subjective, but here's a counter-argument:
>>> obj = ClassWithThreads("potato")
ValueError: invalid thread count
Wait, what? That should be a TypeError. I would do this:
if not isinstance(num_threads, int):
    raise TypeError("num_threads must be an integer")
if num_threads <= 0:
    raise ValueError("num_threads must be positive")
Okay, so this violates "duck typing" principles. But I wouldn't use duck typing for primitive objects like int.

Python equivalent of PHP's @

Is there a Python equivalent of PHP's error-suppression operator @?
@function_which_is_doomed_to_fail();
I've always used this block:
try:
    foo()
except:
    pass
But I know there has to be a better way.
Does anyone know how I can Pythonicify that code?
I think adding some context to that code would be appropriate:
for line in blkid:
    line = line.strip()
    partition = Partition()
    try:
        partition.identifier = re.search(r'^(/dev/[a-zA-Z0-9]+)', line).group(0)
    except:
        pass
    try:
        partition.label = re.search(r'LABEL="((?:[^"\\]|\\.)*)"', line).group(1)
    except:
        pass
    try:
        partition.uuid = re.search(r'UUID="((?:[^"\\]|\\.)*)"', line).group(1)
    except:
        pass
    try:
        partition.type = re.search(r'TYPE="((?:[^"\\]|\\.)*)"', line).group(1)
    except:
        pass
    partitions.add(partition)
What you are looking for is anti-pythonic, because:
The Zen of Python, by Tim Peters
Beautiful is better than ugly.
Explicit is better than implicit.
Simple is better than complex.
Complex is better than complicated.
Flat is better than nested.
Sparse is better than dense.
Readability counts.
Special cases aren't special enough to break the rules.
Although practicality beats purity.
Errors should never pass silently.
Unless explicitly silenced.
In the face of ambiguity, refuse the temptation to guess.
There should be one-- and preferably only one --obvious way to do it.
Although that way may not be obvious at first unless you're Dutch.
Now is better than never.
Although never is often better than right now.
If the implementation is hard to explain, it's a bad idea.
If the implementation is easy to explain, it may be a good idea.
Namespaces are one honking great idea -- let's do more of those!
In your case, I would use something like this:
match = re.search(r'^(/dev/[a-zA-Z0-9]+)', line)
if match:
    partition.identifier = match.group(0)
And you have 3 lines instead of 4.
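As an aside, on Python 3.8 and later the walrus operator condenses the same pattern to two lines:

if match := re.search(r'^(/dev/[a-zA-Z0-9]+)', line):
    partition.identifier = match.group(0)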
There is no better way. Silently ignoring errors is bad practice in any language, so it's naturally not Pythonic.
Building upon Gabi Purcanu's answer and your desire to condense to one-liners, you could encapsulate his solution into a function and reduce your example:
def cond_match(regexp, line, grp):
    match = re.search(regexp, line)
    if match:
        return match.group(grp)
    else:
        return None

for line in blkid:
    line = line.strip()
    partition = Partition()
    partition.identifier = cond_match(r'^(/dev/[a-zA-Z0-9]+)', line, 0)
    partition.label = cond_match(r'LABEL="((?:[^"\\]|\\.)*)"', line, 1)
    partition.uuid = cond_match(r'UUID="((?:[^"\\]|\\.)*)"', line, 1)
    partition.type = cond_match(r'TYPE="((?:[^"\\]|\\.)*)"', line, 1)
    partitions.add(partition)
Please don't ask for Python to be like PHP. You should always explicitly trap the most specific error you can; catching and ignoring all errors is bad practice because it can hide other problems and make bugs harder to find. In the case of REs, though, you should really check for the None value that re.search returns. For example, your code:
label = re.search(r'LABEL="((?:[^"\\]|\\.)*)"', line).group(1)
raises an AttributeError if there is no match, because re.search returns None when there is no match. But what if there was a match, but you had a typo in your code:
label = re.search(r'LABEL="((?:[^"\\]|\\.)*)"', line).roup(1)
This also raises an AttributeError, even if there was a match. But using the catch-all exception and ignoring it would mask that error from you; you would never match a label in that case, and you would never know it until you found it some other way, such as by eventually noticing that your code never matches a label (but hopefully you have unit tests for that case...).
For REs, the usual pattern is this:
matchobj = re.search(r'LABEL="((?:[^"\\]|\\.)*)"', line)
if matchobj:
    label = matchobj.group(1)
No need to try and catch an exception here since there would not be one. Except... when there was an exception caused by a similar typo.
Use data-driven design instead of repeating yourself. Naming the relevant group also makes it easier to avoid group indexing bugs:
_components = dict(
    identifier=re.compile(r'^(?P<value>/dev/[a-zA-Z0-9]+)'),
    label=re.compile(r'LABEL="(?P<value>(?:[^"\\]|\\.)*)"'),
    uuid=re.compile(r'UUID="(?P<value>(?:[^"\\]|\\.)*)"'),
    type=re.compile(r'TYPE="(?P<value>(?:[^"\\]|\\.)*)"'),
)

for line in blkid:
    line = line.strip()
    partition = Partition()
    for name, pattern in _components.items():
        match = pattern.search(line)
        value = match.group('value') if match else None
        setattr(partition, name, value)
    partitions.add(partition)
There is warnings control in Python - http://docs.python.org/library/warnings.html
After edit:
You probably want to check that the result is not None before trying to get the groups.
Also, use len() on match.groups() to see how many groups you got. "Pass"-ing over the error is definitely not the way to go.