I have an SQLAlchemy declarative base class from which more complex classes are derived, but for which I also need instances that are "plain" instances of the base class. I understand that SQLAlchemy doesn't create __init__ methods by default, but the base class does have one. Nonetheless, PyCharm's linter doesn't seem to handle that the way I would expect: it complains that it doesn't recognize the initialization parameters when a subclass is instantiated.
If I'm understanding/using polymorphic identity properly, when I query the base class table I see anything that derives from the base class that matches the query. Rather than distinguish the "plain" instances by checking type or something, it feels like I should be putting them in a separate simple derived class that only introduces a new class name, table name and polymorphic identity name.
That's all background to explain why I have declarative definitions like those below for the base class and the "plain" subclass.
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class GenericFoo(Base):
    __tablename__ = 'generic_foo'
    __mapper_args__ = {'polymorphic_identity': 'generic_foo'}

    def __init__(self, name, color):
        self.name = name
        self.color = color

class SpecificFoo(GenericFoo):
    __tablename__ = 'specific_foo'
    __mapper_args__ = {'polymorphic_identity': 'specific_foo'}

    # If I uncomment this the linter complaint goes away.
    # def __init__(self, name, color):
    #     super(SpecificFoo, self).__init__(name, color)

# PyCharm flags "Unexpected argument" on this statement.
foo = SpecificFoo(name='bob', color='blue')
# - - - - - - - - - - - - - - - - - - - - - - - -
# "Standard" definitions for comparison:
# - - - - - - - - - - - - - - - - - - - - - - - -
class GenericBar(object):
    def __init__(self, name, color):
        self.name = name
        self.color = color

class SpecificBar(GenericBar):
    pass

# This is of course fine.
bar = SpecificBar(name='bob', color='blue')
As noted in the commented lines, if I omit the seemingly redundant __init__ method in SpecificFoo, PyCharm flags the foo instantiation for unrecognized arguments. If I add an __init__(self, name, color) method to SpecificFoo that just calls super(SpecificFoo, self).__init__(name, color), PyCharm is happy.
The code appears to execute without error, although I haven't tried anything that might exercise it much. I don't like adding a whole redundant method just to make the linter happy. But I'm concerned that PyCharm knows something I don't and there's an error here that will cause me grief later on. Any idea why PyCharm is flagging this, and if there's indeed a way to satisfy it (presumably without the redundant __init__ method)?
If you would like to solve this without a lot of effort, you can use type hints on more recent versions of Python.
When defining your class, simply place the type hint : db.Column directly in the declaration:
class GenericFoo(Base):
    __tablename__ = 'generic_foo'

    id: db.Column = db.Column(db.Integer, primary_key=True)
    name: db.Column = db.Column(db.String, unique=True, nullable=False)
If you do this for all tables, it does turn out a bit verbose, but it will also be fully functional for all linters, not just PyCharm.
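Adapted to the plain SQLAlchemy style of the question (rather than Flask-SQLAlchemy's db object), a rough sketch of the same idea might look like the following. The id, name and color column definitions are assumptions, since the original snippet omits them, the polymorphic bits are dropped for brevity, and whether this silences the specific subclass warning can depend on the PyCharm version:

from sqlalchemy import Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class GenericFoo(Base):
    __tablename__ = 'generic_foo'

    # Annotating each attribute with ": Column" gives linters an explicit type to key on.
    id: Column = Column(Integer, primary_key=True)
    name: Column = Column(String)
    color: Column = Column(String)

    def __init__(self, name, color):
        self.name = name
        self.color = color

On SQLAlchemy 2.0 and later, the Mapped[...] annotations provided by the ORM serve the same purpose more idiomatically.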
I'm trying to add a variable to an instance of a class.
In the console, I get this error:
TypeError: __init__() missing 1 required positional argument: 'view'
And here is the code itself:
import sublime_plugin

class TestMe(sublime_plugin.EventListener):
    def __init__(self, view):
        self.view = view
        self.need_update = False

    def setme():
        need_update = True

    def on_activated(self, view):
        setme()
        if need_update == True:
            print("it works")
I've spent all day trying to figure out different ways to resolve it. What am I doing wrong?
It looks like the core of your issue is that you're subclassing EventListener and not ViewEventListener.
The reason you're seeing this error is that the __init__ method of the EventListener class takes no arguments (with the exception of self, which is always present in instance methods). When Sublime creates the instance, it doesn't pass in a view, and since your __init__ requires one, you get an error that you're missing a positional argument.
This is because the events in EventListener all pass in the view that they apply to (if any), so the class doesn't associate with one specific view and thus one is not needed when the listener is created.
In contrast, ViewEventListener offers only a subset of the events that EventListener does, but its instances apply to a specific view, so its constructor is provided with the view that it applies to. In this case the events themselves don't have a view argument because the listener already knows what view it is associated with.
A modified version of your code that takes all of this into account would look like this:
import sublime_plugin

class TestMe(sublime_plugin.ViewEventListener):
    def __init__(self, view):
        super().__init__(view)
        self.need_update = False

    def setme(self):
        self.need_update = True

    def on_activated(self):
        self.setme()
        if self.need_update == True:
            print("it works")
Here the super class is ViewEventListener, for which Sublime will pass a view when it is created. This also invokes the super class version of the __init__ method instead of setting self.view to the view passed in, which allows it to do whatever other setup the default class needs to do (in this case none, but better safe than sorry).
Additionally the methods are adjusted a little bit, since in this case every view will have a unique instance of this class created for it:
setme takes a self argument so that it knows what instance it is being called for
on_activated does not take a view argument because it has access to self.view if it needs it
Calls to setme need to be prefixed with self. so that Python knows what we're trying to do (this also implicitly passes in the self argument)
All accesses for need_update are prefixed with self. so that each method accesses the version of the variable that is unique to its own instance.
I am trying to deploy an application to a production server, but testing has revealed a strange inconsistency in django-rest-framework (first with version 2.3.14, now upgraded to 3.0.1).
On my development machine the json response comes wrapped with some metadata:
{u'count': 2, u'previous': None, u'results': [json objects here]}
Whereas on the production machine only the 'results' array is returned.
Is there a setting to change this one way or the other?
Serializers are as follows:
class SampleSerializer(serializers.ModelSerializer):
    class Meta:
        model = Sample

class LibrarySerializer(serializers.ModelSerializer):
    sample = SampleSerializer()

    class Meta:
        model = Library
views.py
class PullLibraryView(generics.ListAPIView):
    serializer_class = LibrarySerializer
    filter_backends = (filters.DjangoFilterBackend,)

    def get_queryset(self, *args, **kwargs):
        slug = self.kwargs.get('submission_slug', '')
        return Library.objects.filter(sample__submission__submission_slug=slug)
This metadata is added to the response through the pagination serializer, which is part of the built-in pagination. The metadata will only be added if pagination is enabled, so you need to check your settings to make sure that pagination is enabled on your production machine.
Pagination is determined by your settings and by the paginate_by property on your views. Make sure your requests include the page_size parameter, which should force pagination on your views.
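As a concrete illustration (hedged: these setting names follow the DRF 2.x/3.0-era pagination API; DRF 3.1+ replaced them with DEFAULT_PAGINATION_CLASS and PAGE_SIZE), enabling pagination consistently on both machines would look roughly like this:

# settings.py -- keep this identical on development and production
REST_FRAMEWORK = {
    'PAGINATE_BY': 10,                 # wraps list responses in count/previous/next/results
    'PAGINATE_BY_PARAM': 'page_size',  # lets a request override the page size explicitly
}

# views.py -- or set it per view instead of globally
class PullLibraryView(generics.ListAPIView):
    serializer_class = LibrarySerializer
    filter_backends = (filters.DjangoFilterBackend,)
    paginate_by = 10

    def get_queryset(self, *args, **kwargs):
        slug = self.kwargs.get('submission_slug', '')
        return Library.objects.filter(sample__submission__submission_slug=slug)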
It just works when I simply inherit from Base in my class:
class User(Base):
    __tablename__ = 'users'

    id = Column(Integer, primary_key=True)
    name = Column(String)
    fullname = Column(String)
    password = Column(String)
And then I'm able to create the table using Base.metadata.create_all(engine).
My question is: how can SQLAlchemy know that I have already defined User (inherited from Base)?
Each declarative base class maintains a registry of descendant classes, which is filled in when the class definition is executed (in the __init__() method of the metaclass). But this registry is not actually used by create_all(). The MetaData instance (Base.metadata) is part of SQLAlchemy Core (not the ORM) and is responsible for maintaining the registry of tables. The MetaData instance remembers the table when it is created (and stored in the __table__ attribute of the class) from that same __init__() method of the declarative metaclass. You can get this list of tables through the Base.metadata.sorted_tables property.
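To see that registration happen, here is a small self-contained sketch (assuming an in-memory SQLite engine; the User class mirrors the question):

from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()
print(Base.metadata.tables)   # empty - nothing registered yet

class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    name = Column(String)

# Merely defining the class registered its Table with Base.metadata.
print(list(Base.metadata.tables))                        # ['users']
print(User.__table__ is Base.metadata.tables['users'])   # True

engine = create_engine('sqlite:///:memory:')
Base.metadata.create_all(engine)   # create_all() only consults the MetaData registry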
When you inherit from a class, you get a chain you can trace to find all of its subclasses. With the help of the trick given in "How to iterate through every class declaration, descended from a particular base class?" you can collect all the subclasses, as sketched below.
Hope this helps you solve your problem.
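A minimal sketch of that subclass-tracing trick (assuming the Base and User classes above; cls.__subclasses__() only returns direct subclasses, so deeper descendants need recursion):

def all_subclasses(cls):
    # Recursively collect every class descended from cls.
    result = []
    for sub in cls.__subclasses__():
        result.append(sub)
        result.extend(all_subclasses(sub))
    return result

print(all_subclasses(Base))   # e.g. [<class 'User'>] once User has been defined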
This is part of a project that involves working with TG2 against two databases; the one this model uses is MSSQL. Since the table I need to read/write is created and managed by a different application, I don't want TurboGears to overwrite or change the table, just work with the existing one, so I use SQLAlchemy's magical 'autoload' reflection (I also don't know every detail of this table's configuration in the MSSQL db).
Some of the reflection is done in model.__init__.py and not in the class (as some SQLAlchemy tutorials suggest) because of TG2's inner workings.
This is the error message I get (the table name in the db is SOMETABLE and in my app the class is Activities):
sqlalchemy.exc.ArgumentError: Mapper Mapper|Activities|SOMETABLE could not assemble any primary key columns for mapped table 'SOMETABLE'
This is the Activities class:
class Activities(DeclarativeBase2):
    __tablename__ = 'SOMETABLE'
    # tried the classic way I used in another place without tg, but it didn't work here -
    # the reflection should be outside the class
    #__table_args__ = {'autoload': True,
    #                  'autoload_with': engine2
    #                 }

    def __init__(self, **kw):
        for k, v in kw.items():
            setattr(self, k, v)
And this is the init_model method in model.__init__.py (where the reflection is called):
def init_model(engine1, engine2):
    """Call me before using any of the tables or classes in the model."""
    DBSession.configure(bind=engine1)
    DBSession2.configure(bind=engine2)
    # If you are using reflection to introspect your database and create
    # table objects for you, your tables must be defined and mapped inside
    # the init_model function, so that the engine is available. If you
    # use the model outside tg2, you need to make sure this is called before
    # you use the model.
    #
    # See the following example:
    metadata.bind = engine1
    metadata2.bind = engine2
    #metadata2 = MetaData(engine2)
    global t_reflected
    #
    t_reflected = Table("SOMETABLE", metadata2,
                        autoload=True, autoload_with=engine2)
    #
    mapper(Activities, t_reflected)
So I think I need to tell SQLAlchemy what the primary key is - but how do I do that while using reflection (I know which field is the primary key)?
EDIT: the working solution:
def init_model(engine1, engine2):
    """Call me before using any of the tables or classes in the model."""
    DBSession.configure(bind=engine1)
    DBSession2.configure(bind=engine2)
    # If you are using reflection to introspect your database and create
    # table objects for you, your tables must be defined and mapped inside
    # the init_model function, so that the engine is available. If you
    # use the model outside tg2, you need to make sure this is called before
    # you use the model.
    #
    # See the following example:
    metadata.bind = engine1
    metadata2.bind = engine2
    #metadata2 = MetaData(engine2)
    global t_reflected
    #
    t_reflected = Table("SOMETABLE", metadata2,
                        Column('EVENTCODE', String, primary_key=True),
                        autoload=True, autoload_with=engine2)  # adding the primary key column here didn't work
    #
    mapper(Activities, t_reflected, non_primary=True)  # notice the needed non_primary - for some reason I can't
    # do the whole mapping in one file and I have to do part in init_model and part in the model - quite annoying
Also, in the model I had to add the primary key column, making it:
class Activities(DeclarativeBase2):
    __tablename__ = 'SOMETABLE'
    # tried the classic way I used in another place without tg, but it didn't work here -
    # the reflection should be outside the class
    EVENTCODE = Column(String, primary_key=True)  # needed because the reflection couldn't find the primary key
Of course I also had to add various imports in model.__init__.py to make this work.
The strange thing is that it complained about not finding a primary key before it even connected to the db, while a standalone SQLAlchemy class (without TG2) doing the same thing didn't complain at all. Makes you wonder.
You can mix-and-match: see Overriding Reflected Columns in the documentation. In your case the code would look similar to this:
t_reflected = Table("SOMETABLE", metadata2,
Column('id', Integer, primary_key=True), # override reflected '???' column to have primary key
autoload=True, autoload_with=engine2)
edit-1: Model version: I also think that a declarative-only version should work, in which case you should not define the table t_reflected and should not map it manually using mapper(...), because declarative classes are automatically mapped:
class Activities(DeclarativeBase2):
    __table__ = Table('SOMETABLE', metadata2,
                      Column('EVENTCODE', Unicode, primary_key=True),
                      autoload=True, autoload_with=engine2,
                      )

    def __init__(self, **kw):
        for k, v in kw.items():
            setattr(self, k, v)
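As a hedged usage sketch (the EVENTCODE value is made up, and this assumes DBSession2 from the question is already configured against engine2), the declaratively mapped class is then queried like any other model:

# Query the reflected, declaratively mapped class through the second session.
activity = DBSession2.query(Activities).filter_by(EVENTCODE=u'ABC123').first()
print(activity.EVENTCODE if activity is not None else 'no matching row')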
I'd very much like to avoid state binding to entities when querying through session, and take advantage of class mapping without relying on:
session.query(SomeClass)
I have no need for transactions, eager/deferred loading, change tracking, or any of the other features offered. Essentially I want to manually bind a ResultProxy to the mapped class, and have a list of instances that do not have any references to SQLA (such as state).
I tried Query.instances, but it requires a session instance:
from sqlalchemy import create_engine, MetaData, Table, Column, Integer, String
from sqlalchemy.orm import mapper, Query

engine = create_engine('sqlite:///:memory:')
meta = MetaData()
meta.bind = engine

table = Table('table', meta,
              Column('id', Integer, primary_key=True),
              Column('field1', String(16), nullable=False),
              Column('field2', String(60)),
              Column('field3', String(20), nullable=False)
              )

class Table(object):
    pass

meta.create_all(checkfirst=True)

for i in range(10):
    table.insert().execute({'field1': 'field1' + str(i),
                            'field2': 'field2' + str(i * 2),
                            'field3': 'field3' + str(i * 4)})

mapper(Table, table)

query = Query((Table,))
query.instances(engine.text("SELECT * FROM table").execute())
Results in:
Traceback (most recent call last):
  File "sqlalchemy/orm/mapper.py", line 2507, in _instance_processor
    session_identity_map = context.session.identity_map
AttributeError: 'NoneType' object has no attribute 'identity_map'
I'm stuck at this point. I've looked through Query.instances, and the manual setup to replicate session seems very extensive. It requires Query, QueryContext, _MapperEntity and an elaborate choreography that would make most ballet companies blush.
tl;dr Want to use SQLAlchemy's query generation (anything that returns ResultProxy instance) and have results mapped to their respective classes while skipping anything to do with session, identity_map & Unit of Work.
You could expunge the objects, so they no longer maintain a session, but they would still have _sa attributes.
Seeing it's been a while since you asked, this probably won't serve you any more, but maybe it will help others.
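A minimal sketch of that expunge approach (assuming the Table class and engine from the question, and a throwaway Session; it still goes through a session once, it just detaches the results afterwards):

from sqlalchemy.orm import sessionmaker

Session = sessionmaker(bind=engine)
session = Session()

# Load instances through the ORM as usual...
rows = session.query(Table).all()

# ...then detach every loaded instance from the session.
session.expunge_all()
session.close()

# rows keep their loaded column values but no longer belong to a session;
# the SQLAlchemy instrumentation (_sa_instance_state) is still attached, as noted above.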