I'm trying to add a variable to an instance of a class.
In the console, I get this error:
TypeError: __init__() missing 1 required positional argument: 'view'
And here is the code itself:
import sublime_plugin

class TestMe(sublime_plugin.EventListener):
    def __init__(self, view):
        self.view = view
        self.need_update = False

    def setme():
        need_update = True

    def on_activated(self, view):
        setme()
        if need_update == True:
            print("it works")
I spent all day trying different ways to resolve it. What am I doing wrong?
It looks like the core of your issue is that you're subclassing EventListener and not ViewEventListener.
The reason you're seeing this error is that the __init__ method of the EventListener class takes no arguments (with the exception of self, which is always present in class methods). When Sublime creates the instance, it doesn't pass in a view, and since your __init__ requires one, you get an error that you're missing a positional parameter.
This is because the events in EventListener all pass in the view that they apply to (if any), so the class doesn't associate with one specific view and thus one is not needed when the listener is created.
In contrast, ViewEventListener offers only a subset of the events that EventListener does, but its instances apply to a specific view, so its constructor is provided with the view that it applies to. In this case the events themselves don't have a view argument because the listener already knows what view it is associated with.
A modified version of your code that takes all of this into account would look like this:
import sublime_plugin

class TestMe(sublime_plugin.ViewEventListener):
    def __init__(self, view):
        super().__init__(view)
        self.need_update = False

    def setme(self):
        self.need_update = True

    def on_activated(self):
        self.setme()
        if self.need_update == True:
            print("it works")
Here the super class is ViewEventListener, for which Sublime will pass a view when it is created. This also invokes the super class version of the __init__ method instead of setting self.view to the view passed in, which allows it to do whatever other setup the default class needs to do (in this case none, but better safe than sorry).
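The pattern of delegating to the parent initializer can be seen in isolation with plain Python (a minimal sketch with made-up class names, independent of Sublime's API; a string stands in for the view object):

```python
class Base:
    def __init__(self, view):
        self.view = view  # the parent performs its own setup

class Child(Base):
    def __init__(self, view):
        super().__init__(view)    # let the parent do its setup first
        self.need_update = False  # then add our own state

c = Child("fake-view")
print(c.view, c.need_update)  # fake-view False
```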
Additionally the methods are adjusted a little bit, since in this case every view will have a unique instance of this class created for it:
setme takes a self argument so that it knows what instance it is being called for
on_activated does not take a view argument because it has access to self.view if it needs it
Calls to setme need to be prefixed with self. so that Python knows what we're trying to do (this also implicitly passes in the self argument)
All accesses for need_update are prefixed with self. so that each method accesses the version of the variable that is unique to its own instance.
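To see why the self. prefix matters, here is a standalone sketch (hypothetical class name) showing that assigning through self updates only that instance's copy of the flag:

```python
class Flag:
    def __init__(self):
        self.need_update = False  # per-instance attribute

    def setme(self):
        self.need_update = True   # updates only this instance

a = Flag()
b = Flag()
a.setme()
print(a.need_update, b.need_update)  # True False
```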
Due to the way pytest works, it is not possible (or recommended) to import other modules in a pytest test module. Instead, one should properly edit its conftest.py file.
Several times, I am put in a situation where I need to share constants/functions with several test modules. And fixtures are not as practical as functions. Even though they can be indirectly parametrized with the indirect parameter, there are still situations where it's not possible, or not simple, to use this approach.
For constants, I am in the following situation, here is an extract of my conftest.py:
import pytest

TARGET_NAME_1 = 'MY_OP4510'
TARGET_NAME_2 = 'MY_ML605'
TARGET_NAME_3 = 'TARGET_WITH_CHILD'
CONFIG_FILE_NAME = 'config.ini'

@pytest.fixture()
def target_name_1():
    """This fixture returns a target name"""
    return TARGET_NAME_1

@pytest.fixture()
def target_name_2():
    """This fixture returns a target name"""
    return TARGET_NAME_2

@pytest.fixture()
def target_name_3():
    """This fixture returns a target name"""
    return TARGET_NAME_3

@pytest.fixture()
def target_config_path():
    """This fixture returns the config path"""
    return CONFIG_FILE_NAME
Every time I have to add a constant, I have to add a fixture. This also increases the number of parameters the test functions will receive (while in this case I could use the autouse parameter, for some other fixtures that actually execute code I do not necessarily want to auto-use them, as they could prevent other test cases from working).
I am looking for a way to simplify this code. Would you have a good pattern/implementation to suggest?
I'd like to automate as much as possible the instantiation of an ILA directly from the Chisel code. This means instantiating a module that looks like this:
i_ila my_ila(
    .clk(clock),
    .probe0(a_signal_to_monitor),
    .probe1(another_signal_to_monitor),
    // and so on
);
I'm planning to store the signals that I want to monitor in a list of UInt so that at the end of module elaboration I can generate the instantiation code above, which I will copy/paste in the final Verilog code (or write a Python script that does that automatically).
First, is there a better way of doing this, perhaps at the level of FIRRTL?
Even if I go with this semi-manual approach, I need to know what would be the name of the signals in the final Verilog, which is not necessarily the name of the UInt vals in the code (and which, besides, I don't know how to get automatically without having to retype the name of the variable as a string somewhere). How can I get them?
I'd like to provide a more complete example, but I wanted to make sure to at least write something up. This also needs to be fleshed out as a proper example/tutorial on the website.
FIRRTL has robust support for tracking names of signals across built-in and custom transformations. This is a case where the infrastructure is all there, but it's very much a power user API. In short, you can create FIRRTL Annotations that will track Targets. You can then emit custom metadata files or use the normal FIRRTL annotation file (try the CLI option -foaf / --output-annotation-file).
An example FIRRTL Annotation that will emit a custom metadata file at the end of compilation:
// Example FIRRTL annotation with custom serialization
// FIRRTL will track the name of this signal through compilation
case class MyMetadataAnno(target: ReferenceTarget)
    extends SingleTargetAnnotation[ReferenceTarget]
    with CustomFileEmission {
  def duplicate(n: ReferenceTarget) = this.copy(n)

  // API for serializing a custom metadata file
  // Note that multiple of this annotation will collide, which is an error
  // not handled in this example
  protected def baseFileName(annotations: AnnotationSeq): String = "my_metadata"
  protected def suffix: Option[String] = Some(".txt")

  def getBytes: Iterable[Byte] =
    s"Annotated signal: ${target.serialize}".getBytes
}
The case class declaration and duplicate method are enough to track a single signal through compilation. The CustomFileEmission and related baseFileName, suffix, and getBytes methods define how to serialize my custom metadata file. As mentioned in the comment, as implemented in this example we can only have 1 instance of this MyMetadataAnno or they will try to write the same file which is an error. This can be handled by customizing the filename based on the Target, or writing a FIRRTL transform to aggregate multiple of this annotation into a single annotation.
We then need a way to create this annotation in Chisel:
def markSignal[T <: Data](x: T): T = {
  annotate(new ChiselAnnotation {
    // We can't call .toTarget until the end of Chisel elaboration
    // .toFirrtl is called by Chisel at the end of elaboration
    def toFirrtl = MyMetadataAnno(x.toTarget)
  })
  x
}
Now all we need to do is use this simple API in our Chisel:
// Simple example with a marked signal
class Foo extends MultiIOModule {
  val in = IO(Flipped(Decoupled(UInt(8.W))))
  val out = IO(Decoupled(UInt(8.W)))

  markSignal(out.valid)

  out <> in
}
This will result in writing the file my_metadata.txt to the target directory with the contents:
Annotated signal: ~Foo|Foo>out_valid
Note that this is special FIRRTL target syntax saying that out_valid is the annotated signal that lives in module Foo.
Complete code in an executable example:
https://scastie.scala-lang.org/moEiIqZPTRCR5mLQNrV3zA
Django version: 3.1.0, MySQL backend
I have a JSONField on my model:
class Employee(models.Model):
    address = models.JSONField(
        encoder=AddressEncoder,
        decoder=AddressDecoder,
        default=address_default,
    )
Then the encoder looks like this:
class AddressEncoder(DjangoJSONEncoder):
    def default(self, o):
        if isinstance(o, Address):
            return dataclasses.asdict(o)
        raise TypeError("An Address instance is required, got an {0}".format(type(o)))
Then the address_default looks like this:
def address_default():
    encoder = AddressEncoder()
    address = Address(...)
    return encoder.encode(address)
Currently I have set address_default to return a dict, although it should actually return an Address instance. When I change this so that address_default returns an instance of Address, an error is raised: TypeError: Object of type Address is not JSON serializable. However, in other parts of the code where the address is in fact an instance of Address, no errors are raised. So the custom AddressEncoder does not seem to be used on the value provided by address_default.
When the address attribute on Employee is set to e.g. a string, no error is thrown. This might have to do with what is explained in Why is Django not using my custom encoder class. The code in AddressEncoder is not executed.
Question:
What is the correct way to set up the address_default, and Encoder/Decoder so that the address attribute can be, and only be, an instance of Address?
I solved the problem. It had to do with my migrations: one of them contained a definition of the address field without the encoders, hence changing address_default to return a non-JSON-serializable object threw the corresponding error.
I had to manually find and change that migration so that the definition of the address field includes the custom encoder.
The check isinstance(self.address, Address) is then done in an overridden save() method on the Employee model.
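The isinstance guard in the overridden save() can be sketched without Django, using a stand-in Employee class and a hypothetical Address dataclass (the field names here are assumptions, not from the real model):

```python
import dataclasses

@dataclasses.dataclass
class Address:
    street: str
    city: str

class Employee:
    def __init__(self, address):
        self.address = address

    def save(self):
        # mirrors the overridden Model.save(): accept only Address instances
        if not isinstance(self.address, Address):
            raise TypeError(
                "An Address instance is required, got {0}".format(type(self.address)))
        # in the real model this would then call super().save(*args, **kwargs)

Employee(Address("Main St", "Springfield")).save()  # accepted
```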
We are working on a Top-Down-RPG-like multiplayer game for learning purposes (and fun!) with some friends. We already have some Entities in the game and inputs are working, but the network implementation gives us headaches :D
The Issues
When trying to convert with __dict__, some values will still contain a pygame.Surface, which I don't want to transfer and which causes errors when trying to JSON-ify them. Other objects that I would like to transfer in a simplified way, like Rectangle, cannot be converted automatically.
Already functional
Client-Server connection
Transferring JSON objects in both directions
Async networking and synchronized putting into a Queue
Situation
A new player connects to the server and wants to get the current game state with all objects.
Data-Structure
We use an "Entity-Component" based architecture, so we separated the game logic very strictly into "systems", while the data is stored in the "components" of each Entity. The Entity is a very simple container and has nothing more than an ID and a list of components. Example Entity (shortened for better readability):
Entity
|-- Component (Moveable)
|-- Component (Graphic)
| |- complex datatypes like pygame.SURFACE
| `- (...)
`- Component (Inventory)
We tried different approaches, but none seems to fit very well, or they feel "hacky".
pickle
Very Python-specific, so not easy to implement other clients in the future. And I've read about some security risks when creating objects from the network in the dynamic way pickle offers. It does not even solve the Surface/Rectangle issue.
__dict__
Still contains the references to the old objects, so a "cleanup" or "filter" for unwanted datatypes would also affect the originals. A deepcopy throws an exception:
...\Python\Python36\lib\copy.py", line 169, in deepcopy
rv = reductor(4)
TypeError: can't pickle pygame.Surface objects
Show some code
The method of the EntityManager class which should generate the snapshot of all Entities, including their components. This snapshot should be convertible to JSON without any errors - and, if possible, without much configuration in this core class.
class EntityManager:

    def generate_world_snapshot(self):
        """ Returns a dictionary with all Entities and their components to send
            this to the client. This function will probably generate a lot of data,
            but it's to send the whole current game state when a new player
            connects or when a complete refresh is required """
        # It should be possible to add more objects to the snapshot, so we
        # create our own Snapshot-Datastructure
        result = {'entities': {}}
        entities = self.get_all_entities()
        for e in entities:
            result['entities'][e.id] = deepcopy(e.__dict__)
            # Components are objects, but a dictionary is required for transfer
            cmp_obj_list = result['entities'][e.id]['components']
            # Empty the current list of components; it's going to be filled with
            # dictionaries of each cmp which are cleaned for the dump, because
            # of the errors directly converting the whole datastructure to JSON
            result['entities'][e.id]['components'] = {}
            for cmp in cmp_obj_list:
                cmp_copy = deepcopy(cmp)
                cmp_dict = cmp_copy.__dict__
                # Only list, dict, int, str, float and None will stay, while
                # other types are simply deleted, including their key
                # Lists and dictionaries will be cleaned up recursively as well
                cmp_dict = self.clean_complex_recursive(cmp_dict)
                result['entities'][e.id]['components'][type(cmp_copy).__name__] = cmp_dict
        logging.debug("EntityMgr: Entity#3: %s" % result['entities'][3])
        return result
Expectation and actual results
We can find a way to manually filter out the elements which we don't want. But as the list of components grows, we would have to put all the filter logic into this core class, which should not contain any component specializations.
Do we really have to put all the logic into the EntityManager for filtering the right objects? This does not feel good, as I would like to have all conversion to JSON done without any hardcoded configuration.
How can we convert all this complex data in the most generic way?
Thanks for reading so far and thank you very much for your help in advance!
Interesting articles which we were already working through, and which may be helpful for others with similar issues:
https://gafferongames.com/post/what_every_programmer_needs_to_know_about_game_networking/
http://code.activestate.com/recipes/408859/
https://docs.python.org/3/library/pickle.html
UPDATE: Solution - thx 2 sloth
We used a combination of the following architecture, which works really great so far and is also good to maintain!
Entity Manager now calls the get_state() function of the entity.
class EntityManager:

    def generate_world_snapshot(self):
        """ Returns a dictionary with all Entities and their components to send
            this to the client. This function will probably generate a lot of data,
            but it's to send the whole current game state when a new player
            connects or when a complete refresh is required """
        # It should be possible to add more objects to the snapshot, so we
        # create our own Snapshot-Datastructure
        result = {'entities': {}}
        entities = self.get_all_entities()
        for e in entities:
            result['entities'][e.id] = e.get_state()
        return result
The Entity has only some basic attributes to add to the state and forwards the get_state() call to all the Components:
class Entity:

    def get_state(self):
        state = {'name': self.name, 'id': self.id, 'components': {}}
        for cmp in self.components:
            state['components'][type(cmp).__name__] = cmp.get_state()
        return state
The components themselves now inherit their get_state() method from their new superclass Component, which simply handles all simple datatypes:
class Component:

    def __init__(self):
        logging.debug('generic component created')

    def get_state(self):
        state = {}
        for attr, value in self.__dict__.items():
            if value is None or isinstance(value, (str, int, float, bool)):
                state[attr] = value
            elif isinstance(value, (list, dict)):
                # logging.warn("Generating state: not supporting lists yet")
                pass
        return state


class GraphicComponent(Component):
    # (...)
Now every developer has the opportunity to override this method to create a more detailed get_state() for complex types directly in the component classes (like Graphic, Movement, Inventory, etc.) if it is required to save the state in a more accurate way - which is a huge win for maintaining the code in the future, having these code pieces in one class.
The next step is to implement the static method for creating the items from the state in the same class. This makes the whole thing work really smoothly.
Thank you so much sloth for your help.
Do we really have to put all the logic into the EntityManager for filtering the right objects?
No, you should use polymorphism.
You need a way to represent your game state in a form that can be shared between different systems; so maybe give your components a method that will return all of their state, and a factory method that allows you to create the component instances out of that very state.
(Python already has the __repr__ magic method, but you don't have to use it)
So instead of doing all the filtering in the entity manager, just let it call this new method on all components and let each component decide what the result will look like.
Something like this:
...
result = {'entities': {}}
entities = self.get_all_entities()
for e in entities:
    result['entities'][e.id] = {'components': {}}
    for cmp in e.components:
        result['entities'][e.id]['components'][type(cmp).__name__] = cmp.get_state()
...
And a component could implement it like this:
class GraphicComponent:

    def __init__(self, pos=...):
        self.image = ...
        self.rect = ...
        self.whatever = ...

    def get_state(self):
        return {'pos_x': self.rect.x, 'pos_y': self.rect.y, 'image': 'name_of_image.jpg'}

    @staticmethod
    def from_state(state):
        return GraphicComponent(pos=(state['pos_x'], state['pos_y']), ...)
And a client's EntityManager that receives the state from the server would iterate over the component list of each entity and call from_state to create the instances.
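Put together, the get_state()/from_state() pair gives a clean JSON round trip. A self-contained sketch with a made-up component (names and fields are illustrative, not from the game's actual code):

```python
import json

class MoveableComponent:
    def __init__(self, x=0, y=0):
        self.x = x
        self.y = y

    def get_state(self):
        # only plain, JSON-friendly values
        return {'x': self.x, 'y': self.y}

    @staticmethod
    def from_state(state):
        return MoveableComponent(state['x'], state['y'])

# round trip through JSON, as it would travel between server and client
payload = json.dumps(MoveableComponent(3, 7).get_state())
restored = MoveableComponent.from_state(json.loads(payload))
print(restored.x, restored.y)  # 3 7
```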
I have a couple of defined functions that I want to create buttons for in my GUI. A couple of these functions require one or two arguments (numbers), and that is what's causing problems for me. I have thought about a combination of a button and an entry: when I click the specific button (for one of my functions), an entry pops up where I type in a number. When I press enter, I want this number to be used as the argument for the function I have bound to my button, and then the function should be executed.
One function I want to bind to a button:
def move(power, tacho_units):
    MOTOR_CONTROL.cmd(5, power, tacho_units, speedreg=0, smoothstart=1, brake=0)
    is_ready(5)
We are working with Lego Mindstorms, so I'm pretty sure that, for example, the function above could be a bit confusing for some people.
from Tkinter import *

class App:
    def __init__(self, master):
        frame = Frame(master)
        frame.pack()
        self.button = Button(frame, text="Move", command=!_______!)
        self.button.pack(side=LEFT)

root = Tk()
app = App(root)
root.mainloop()
root.destroy()
Does someone have any suggestions/solutions for me? I would appreciate it if someone could help me. Do I create a function (that will open a new window with an entry) that I call when I click the Move button? The numbers (power and tacho_units in this function) that I type into the entry are what I want to be used for the function move when I press enter.
Typically, the way to pass arguments to a function associated with a widget is to use lambda or functools.partial (note: these aren't the only ways). Both of these are somewhat advanced topics, but if you just follow an example, it's fairly safe to use them without fully understanding them.
Here's an example using lambda:
b = tk.Button(..., command=lambda power=1, tacho_units="x": move(power, tacho_units))
While not technically correct, just think of a lambda as a "def" but without a name. It takes arguments and can call functions.
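Stripped of the Tkinter widget, the lambda trick can be tried on its own (move is replaced here by a stub with made-up values):

```python
def move(power, tacho_units):
    return "power=%s, tacho_units=%s" % (power, tacho_units)

# default arguments freeze the values at definition time, so the
# zero-argument call that Tkinter makes still works
callback = lambda power=1, tacho_units=360: move(power, tacho_units)
print(callback())  # power=1, tacho_units=360
```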
Here is the same thing, using functools.partial:
b = tk.Button(..., command=functools.partial(move, power=1, tacho_units="x"))
Note: you'll have to add an import statement for functools.
functools.partial in essence copies the function (in this case, move) and provides default values for the arguments. Thus, when you call it with no arguments (as tkinter does by default), the parameters will have these default values.
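Again with a stub in place of move (the values are arbitrary), functools.partial behaves like this:

```python
import functools

def move(power, tacho_units):
    return (power, tacho_units)

# partial pre-fills the keyword arguments, so a zero-argument
# call uses them as defaults
do_move = functools.partial(move, power=1, tacho_units=360)
print(do_move())         # (1, 360)
print(do_move(power=2))  # (2, 360) - the pre-filled values can be overridden
```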
HOWEVER...
Often it's easier to write a function to call your function. The purpose of this extra function is to gather the inputs -- presumably from other widgets -- and then call the final function. For example:
def do_move():
    power = power_input.get()
    tacho_units = tacho_input.get()
    move(power, tacho_units)

b = tk.Button(..., command=do_move)
Whether you use this third method depends on where the argument values come from. If you know the values at the time you create the widget, using lambda or functools.partial works because you can embed the arguments right there. If you're going to be getting the parameters from other widgets, the third form is preferable.
Use a lambda function to assign a function with arguments:
some_power = ... # set value
some_tacho_units = ... # set value
self.button = Button(frame, text="Move", command=lambda a=some_power, b=some_tacho_units: move(a, b))
or
self.button = Button(frame, text="Move", command=lambda:move(5, 10))