Learning about constructors in Python

I have recently learnt about exception handling concepts. I am not able to understand why `self.arg = arg` doesn't work whereas `self.msg = arg` works in the code below.
class MyException(Exception):
    def __init__(self, arg):
        self.msg = arg

def check(key, value):
    print('Name={} Balance={}'.format(key, value))
    if value < 2000:
        raise MyException('Balance amt is less in the account of ' + key)

bank = {'Raj': 5000, 'Vikas': 10000, 'Nishit': 500, 'John': 321211}
for k, v in bank.items():
    try:
        check(k, v)
    except MyException as obj:
        print(obj)

It should work. You're simply assigning the value of the argument arg to the instance attribute arg; there is no conflict, and it is a classic pattern that can be found in the documentation.
Maybe you mistyped, used a character from another alphabet, or missed a space on that line? Without the error message all I can do is guess, so try replacing msg with arg in your working code once more; it may fix the problem.
By the way, it would be easier to use f-strings instead of formatting:
f'Name={key} Balance={value}'
f'Balance amt is less in the account of {key}'
It's shorter, you don't have to worry about messing up the order or number of arguments, and the code is more readable.
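For completeness, here is one runnable version of the code above using self.arg instead of self.msg; either attribute name works, because the name of the instance attribute is independent of the parameter name:

```python
class MyException(Exception):
    def __init__(self, arg):
        self.arg = arg  # works exactly the same as self.msg = arg

def check(key, value):
    print(f'Name={key} Balance={value}')
    if value < 2000:
        raise MyException(f'Balance amt is less in the account of {key}')

bank = {'Raj': 5000, 'Vikas': 10000, 'Nishit': 500, 'John': 321211}
for k, v in bank.items():
    try:
        check(k, v)
    except MyException as obj:
        print(obj)  # prints the message stored when the exception was raised
```

Note that printing the exception works even without calling super().__init__, because in CPython BaseException stores the constructor arguments in args.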


What mechanism works to show component ID in chisel3 elaboration

Chisel throws an exception with an elaboration error message. The following is an example of what my code produces.
chisel3.core.Binding$ExpectedHardwareException: data to be connected 'chisel3.core.Bool#81' must be hardware, not a bare Chisel type. Perhaps you forgot to wrap it in Wire(_) or IO(_)?
This exception message is interesting because the 81 after chisel3.core.Bool# looks like an ID, not a hashcode.
Indeed, the Data type extends the HasId trait, which has an _id field, and that _id field seems to generate a unique ID for each component.
I thought the Data type overrode toString to build a string of the form type#ID, but it does not. That is why $node in the code below should not be able to use the ID.
throw Binding.ExpectedHardwareException(s"$prefix'$node' must be hardware, " +
"not a bare Chisel type. Perhaps you forgot to wrap it in Wire(_) or IO(_)?")
Instead of toString, a toNamed method exists in Data. However, this method seems to be called to generate FIRRTL code, not to convert a component into a string.
Why can the Data type show its ID?
If it is not an ID but really a hashcode, then this question comes from my misunderstanding.
I think you should take a look at Chisel PR #985. It changes the way that Data's toString method is implemented. I'm not sure whether it answers your question directly, but it's possible this will make the meaning and location of the error clearer. If not, you should comment on it.
Scala classes come with a default toString method that is of the form className#hashCode.
As you noted, the chisel3.core.Bool#81 sure looks like it's using the _id rather than the hashCode. That's because in the most recently published version of Chisel (3.1.6), the hashcode was the id! You can see this if you inspect the source files at the tag for that version: https://github.com/freechipsproject/chisel3/blob/dc4200f8b622e637ec170dc0728c7887a7dbc566/chiselFrontend/src/main/scala/chisel3/internal/Builder.scala#L81
This is no longer the case on master, which is probably the source of the confusion! As Chick noted, we have just changed the .toString method to be more informative than the default; expect more informative representations in 3.2.0!

Python backtracking

I have a basic problem in Python where I have to verify whether my backtracking code found any solutions (I have to find all sublists of the numbers 1 to n with the property |x[i] - x[i-1]| == m). How do I check whether there is a solution? The potential solutions I find are just printed, not saved in memory, and I have to print a proper message if there are no solutions.
As I suggested in a comment, you might want to separate computation from I/O (printing) by creating a generator of your solutions of |x[i] - x[i-1]| == m.
Let's assume you defined a generator for yielding your solutions:
def mysolutions(...):
    ....
    # Something with 'yield', or maybe not.
    ....
Here is a decorator that you can use to check that a generator function yields at least one value:
from functools import wraps
from itertools import chain

def checkGen(genfunc):
    """Decorator used to check that we have data in the generator."""
    @wraps(genfunc)
    def wrapper(*args, **kwargs):
        generator = genfunc(*args, **kwargs)
        try:
            first = next(generator)
        except StopIteration:
            raise RuntimeError("I had no value!")
        return chain([first], generator)
    return wrapper
Using this decorator, you can now define your previous solution with:
@checkGen
def mysolutions(...):
    ....
Then, you can simply use it as is, keeping your I/O separate:
try:
    for solution in mysolutions(...):
        print(solution)  # Probably needs some formatting
except RuntimeError:
    print("I found no value (or there was some error somewhere...)")
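Putting the pieces together, here is a self-contained sketch. The generator shown is one hypothetical reading of the problem (length-k sequences of distinct numbers from 1..n whose consecutive differences are m in absolute value); adapt it to your exact definition of a solution:

```python
from functools import wraps
from itertools import chain

def checkGen(genfunc):
    """Decorator: raise RuntimeError if the wrapped generator yields nothing."""
    @wraps(genfunc)
    def wrapper(*args, **kwargs):
        gen = genfunc(*args, **kwargs)
        try:
            first = next(gen)
        except StopIteration:
            raise RuntimeError("I had no value!")
        # Put the consumed first value back in front of the rest
        return chain([first], gen)
    return wrapper

@checkGen
def mysolutions(n, m, k):
    """Yield length-k sequences of distinct numbers in 1..n with |x[i] - x[i-1]| == m."""
    def backtrack(seq):
        if len(seq) == k:
            yield list(seq)
            return
        for v in range(1, n + 1):
            # extend only with unused values at the required distance
            if v not in seq and (not seq or abs(v - seq[-1]) == m):
                seq.append(v)
                yield from backtrack(seq)
                seq.pop()
    yield from backtrack([])

try:
    for solution in mysolutions(4, 1, 4):
        print(solution)  # [1, 2, 3, 4] then [4, 3, 2, 1]
except RuntimeError:
    print("I found no value")
```

The computation stays in the generator, and the caller decides what to print, including the "no solutions" message.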

What does the following warning mean: 'side-effecting nullary methods are discouraged'?

I have a lot of nullary methods (methods with 0 parameters) in my Scala test file. Hence, instead of writing them as:
def fooBar() = //
I write them as:
def fooBar = //
I get the following warning when I do so:
Warning:(22, 7) side-effecting nullary methods are discouraged: suggest defining as `def fooBar()` instead
What is the meaning of this warning? I am using IntelliJ as my IDE and could not find much about this warning on the web.
EDIT
And, I forgot to mention, when I use the brackets, the warning does not appear.
The common convention for nullary methods is:
if the method has side effects, signify that with parentheses;
otherwise, if it is a pure, accessor-like method with no side effects, drop the parentheses.
You're breaking this convention, and the IDE warns you about it.
See also https://stackoverflow.com/a/7606214/298389
Does fooBar have side-effects?
It's simply stating a good practice to define a side-effecting method as such:
def fooBar() = ...
And non-side-effecting methods like this:
def fooBar = ...
Since the method call looks similar to accessing a val, it's good to differentiate when the method is doing more than just returning a value.

Python practices: Is there a better way to check constructor parameters?

I find myself trying to convert constructor parameters to their right types very often in my Python programs. So far I've been using code similar to this, so I don't have to repeat the exception arguments:
class ClassWithThreads(object):
    def __init__(self, num_threads):
        try:
            self.num_threads = int(num_threads)
            if self.num_threads <= 0:
                raise ValueError()
        except ValueError:
            raise ValueError("invalid thread count")
Is this a good practice? Should I just don't bother catching any exceptions on conversion and let them propagate to the caller, with the possible disadvantage of having less meaningful and consistent error messages?
When I have a question like this, I go hunting in the standard library for code that I can model my code after. multiprocessing/pool.py has a class somewhat close to yours:
class Pool(object):
    def __init__(self, processes=None, initializer=None, initargs=(),
                 maxtasksperchild=None):
        ...
        if processes is None:
            try:
                processes = cpu_count()
            except NotImplementedError:
                processes = 1
        if processes < 1:
            raise ValueError("Number of processes must be at least 1")
        if initializer is not None and not hasattr(initializer, '__call__'):
            raise TypeError('initializer must be a callable')
Notice that it does not say
processes = int(processes)
It just assumes you sent it an integer, not a float or a string, or whatever.
It should be pretty obvious, but if you feel it is not, I think it suffices to just document it.
It does raise ValueError if processes < 1, and it does check that initializer, when given, is callable.
So, if we take multiprocessing.Pool as a model, your class should look like this:
class ClassWithThreads(object):
    def __init__(self, num_threads):
        self.num_threads = num_threads
        if self.num_threads < 1:
            raise ValueError('Number of threads must be at least 1')
Wouldn't this approach possibly fail very unpredictably for some
conditions?
I think preemptive type checking generally goes against the grain of Python's (dynamic-, duck-typing) design philosophy.
Duck typing gives Python programmers great expressive power and rapid code development, but (some might say) is dangerous because it makes no attempt to catch type errors.
Some argue that logical errors are far more serious and frequent than type errors, and that you need unit tests to catch those more serious errors. So even if you do preemptive type checking, it does not add much protection.
This debate lies in the realm of opinion, not fact, so it is not a resolvable argument. Which side of the fence you sit on may depend on your experience and your judgment of the likelihood of type errors; it may be biased by the languages you already know, and it may depend on your problem domain.
You just have to decide for yourself.
PS. In a statically typed language, the type checks can be done at compile-time, thus not impeding the speed of the program. In Python, the type checks have to occur at run-time. This will slow the program down a bit, and maybe a lot if the checking occurs in a loop. As the program grows, so will the number of type checks. And unfortunately, many of those checks may be redundant. So if you really believe you need type checking, you probably should be using a statically-typed language.
PPS. There are decorators for type checking for (Python 2) and (Python 3). This would separate the type checking code from the rest of the function, and allow you to more easily turn off type checking in the future if you so choose.
You could use a type-checking decorator like this ActiveState recipe or this other one for Python 3. They allow you to write code something like this:
@require("x", int, float)
@require("y", float)
def foo(x, y):
    return x + y
that will raise an exception if the arguments are not of the required type. You could easily extend the decorators to check that the arguments have valid values as well.
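As a rough illustration only (this is not the code from the linked recipes; the signature-binding approach here is just one way to do it), such a decorator could be sketched like this:

```python
import functools
import inspect

def require(name, *allowed_types):
    """Sketch of a type-checking decorator: reject a call when the
    named argument is not an instance of one of the allowed types."""
    def decorator(func):
        sig = inspect.signature(func)
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            bound = sig.bind(*args, **kwargs)
            if name in bound.arguments:
                value = bound.arguments[name]
                if not isinstance(value, allowed_types):
                    raise TypeError(
                        f"{name} must be one of {allowed_types}, "
                        f"got {type(value).__name__}")
            return func(*args, **kwargs)
        return wrapper
    return decorator

@require("x", int, float)
@require("y", float)
def foo(x, y):
    return x + y
```

With this sketch, foo(1, 2.0) returns 3.0, while foo("a", 2.0) and foo(1, 2) raise TypeError.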
This is subjective, but here's a counter-argument:
>>> obj = ClassWithThreads("potato")
ValueError: invalid thread count
Wait, what? That should be a TypeError. I would do this:
if not isinstance(num_threads, int):
    raise TypeError("num_threads must be an integer")
if num_threads <= 0:
    raise ValueError("num_threads must be positive")
Okay, so this violates "duck typing" principles. But I wouldn't use duck typing for primitive objects like int.
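Put together, a constructor along those lines might look like this (one caveat to be aware of: in Python, bool is a subclass of int, so True would pass the isinstance check; tighten it if that matters to you):

```python
class ClassWithThreads(object):
    def __init__(self, num_threads):
        # Reject wrong types with TypeError, wrong values with ValueError
        if not isinstance(num_threads, int):
            raise TypeError("num_threads must be an integer")
        if num_threads <= 0:
            raise ValueError("num_threads must be positive")
        self.num_threads = num_threads
```

Now ClassWithThreads("potato") raises TypeError and ClassWithThreads(0) raises ValueError, which matches the usual meaning of those two exception types.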

Why do I get a "Too many input arguments" error when not passing any?

I am working on some simple object-oriented code in MATLAB. I am trying to call one of my class methods that has no input or output arguments in its definition.
Function definition:
function roll_dice
Function call:
obj.roll_dice;
When this is executed, MATLAB says:
??? Error using ==> roll_dice
Too many input arguments.
Error in ==> DiceSet>Diceset.Diceset at 11
obj.roll_dice;
(etc...)
Does anyone have any idea what could be causing this? Are there hidden automatic arguments that I'm unaware I'm passing?
When you make the call:
obj.roll_dice;
It is actually equivalent to:
roll_dice(obj);
So obj is the "secret" automatic argument being passed to roll_dice. If you rewrite the method roll_dice to accept a single input argument (even if you don't use it), things should work correctly.
Alternatively, if you know for sure that your method roll_dice is not going to perform any operations on the class object, you can declare it to be a static method as Dan suggests.
For more information on object-oriented programming in MATLAB, here's a link to the online documentation.
I believe you can also get around this by declaring roll_dice to be a static method.