One entity in multiple schemas: how to switch schema at runtime - sqlalchemy

I use a database with multiple schemas (the number of schemas is dynamic and not known at launch). Each schema stores the same table definitions.
In the documentation, I have seen that I can set a schema option to the corresponding schema name. But how can that work when the table is present in several schemas? Is it possible to switch the schema name at runtime?
UPDATE 29-11-2013
I found this http://www.sqlalchemy.org/trac/wiki/UsageRecipes/EntityName and wrote the following routines to switch to the correct schema:
engine = create_engine('db_host', echo=True)
dbSession = scoped_session(sessionmaker(extension=ZopeTransactionExtension()))
dbSession.configure(bind=engine)
metadata = MetaData(engine)

aTable = Table('aTable', metadata,
               Column('foo', String, primary_key=True),
               Column('bar', String))

def map_class_to_some_table(cls, table, entity_name, **kw):
    newcls = type(entity_name, (cls, ), {})
    mapper(newcls, table, **kw)
    return newcls

class ObjectDefinition(object):
    def __init__(self):
        pass

class ObjectRepository():
    objectMapper = {}

    @classmethod
    def _getObjectMapper(cls, aSchema):
        if cls.objectMapper is None:
            cls.objectMapper = {}
        mapper = cls.objectMapper.get(aSchema)
        if mapper is None:
            logging.debug('Need to create a new mapper for schema %s' % aSchema)
            objectSchemaTable = aTable.tometadata(metadata, schema=aSchema)
            mapper = map_class_to_some_table(ObjectDefinition, objectSchemaTable, "object_%s" % aSchema)
            cls.objectMapper[aSchema] = mapper
        return mapper

    @staticmethod
    def loadAllObject(schema):
        return dbSession.query(ObjectRepository._getObjectMapper(schema)).all()
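The recipe above leans on Python's three-argument `type()` call to mint one mapped class per schema at runtime. A minimal, SQLAlchemy-free sketch of that mechanism (the function and class names here are illustrative, not part of the recipe):

```python
class ObjectDefinition:
    """Stand-in for the shared entity definition."""
    pass

def map_class_for_schema(base_cls, schema):
    # type(name, bases, dict) builds a brand-new subclass at runtime,
    # so each schema gets its own distinct class to hand to the mapper.
    return type("object_%s" % schema, (base_cls,), {})

ClassA = map_class_for_schema(ObjectDefinition, "schema_a")
ClassB = map_class_for_schema(ObjectDefinition, "schema_b")
```

Each call returns a distinct class, which is what lets the recipe keep one mapper per schema in the objectMapper dictionary.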


SQLALchemy update ARRAY column [duplicate]

I'm working on a project using Flask and a PostgreSQL database, with SQLAlchemy.
I have Group objects which have a list of User IDs who are members of the group. For some reason, when I try to add an ID to a group, it will not save properly.
If I try members.append(user_id), it doesn't seem to work at all. However, if I try members += [user_id], the ID shows up in the view listing all the groups, but if I restart the server, the added values are gone. The initial values, however, are still there.
Related code:
Adding group to the database initially:
db = SQLAlchemy(app)
# ...
g = Group(request.form['name'], user_id)
db.session.add(g)
db.session.commit()
The Group class:
from flask.ext.sqlalchemy import SQLAlchemy
from sqlalchemy.dialects.postgresql import ARRAY

class Group(db.Model):
    __tablename__ = "groups"
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(128))
    leader = db.Column(db.Integer)
    # list of the members in the group based on user id
    members = db.Column(ARRAY(db.Integer))

    def __init__(self, name, leader):
        self.name = name
        self.leader = leader
        self.members = [leader]

    def __repr__(self):
        return "Name: {}, Leader: {}, Members: {}".format(self.name, self.leader, self.members)

    def add_user(self, user_id):
        self.members += [user_id]
My test function for updating the Group:
def add_2_to_group():
    g = Group.query.all()[0]
    g.add_user(2)
    db.session.commit()
    return redirect(url_for('show_groups'))
Thanks for any help!
As you have mentioned, the ARRAY datatype in SQLAlchemy is treated as immutable: the ORM does not pick up in-place additions to the array once it has been initialised.
To solve this, create a MutableList class:
from sqlalchemy.ext.mutable import Mutable

class MutableList(Mutable, list):
    def append(self, value):
        list.append(self, value)
        self.changed()

    @classmethod
    def coerce(cls, key, value):
        if not isinstance(value, MutableList):
            if isinstance(value, list):
                return MutableList(value)
            return Mutable.coerce(key, value)
        else:
            return value
This snippet extends list and adds change tracking to it. You can now use the class above to declare a mutable array column like:
class Group(db.Model):
    ...
    members = db.Column(MutableList.as_mutable(ARRAY(db.Integer)))
    ...
You can use the flag_modified function to mark the property as having changed. In this example, you could change your add_user method to:
from sqlalchemy.orm.attributes import flag_modified

# ~~~
def add_user(self, user_id):
    self.members += [user_id]
    flag_modified(self, 'members')
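The change-notification idea behind Mutable can be sketched without SQLAlchemy; the names below are illustrative and not part of any library:

```python
class NotifyingList(list):
    """A list that calls a listener whenever it is mutated in place."""
    def __init__(self, iterable=(), on_change=lambda: None):
        super().__init__(iterable)
        self.on_change = on_change

    def append(self, value):
        super().append(value)
        self.on_change()  # the role MutableList.changed() plays for the ORM

events = []
members = NotifyingList([1], on_change=lambda: events.append("changed"))
members.append(2)
```

MutableList does the same thing, except the listener it fires is SQLAlchemy's own dirty-tracking, so the session knows to write the column back on commit.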
To anyone in the future: it turns out that arrays through SQLAlchemy are immutable, so once they're initialized in the database, they can't change size. There's probably a way to do this, but there are better ways to do what we're trying to do.
This is a hacky solution, but what you can do is:
Store the existing array temporarily
Set the column value to None
Set the column value to the existing temporary array
For example:
g = Group.query.all()[0]
temp_array = g.members
g.members = None
db.session.commit()
db.session.refresh(g)
g.members = temp_array
db.session.commit()
In my case, it was solved by building a new list and assigning that newly created list to the object's attribute. Instead of updating the existing list in place, the assignment gives the attribute a new reference, which is what makes the change visible.
Here, in the model (table question):
optional_id = sa.Column(sa.ARRAY(sa.Integer), nullable=True)
In the views:
option_list = list(question.optional_id if question.optional_id else [])
if option_list:
    question.optional_id.clear()
    option_list.append(obj.id)
    question.optional_id = option_list
else:
    question.optional_id = [obj.id]
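The reason rebinding works: change detection that hinges on attribute assignment only notices a new object, not in-place edits to the old one. A plain-Python illustration of the difference:

```python
old = [1, 2]
alias = old
alias.append(3)   # in-place mutation: still the very same list object
new = old + [4]   # building a new list yields a distinct object

# Assigning `question.optional_id = new` rebinds the attribute to a
# fresh reference, which is what the ORM's attribute hook observes.
```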

ValidationError from Composite key with marshmallow_sqlalchemy, sqlalchemy, marshmallow

I am making an API with Flask and I am using sqlalchemy/flask-sqlalchemy, marshmallow and marshmallow_sqlalchemy for handling the modeling of the database.
I am loading in the data for the Character table through the code below
character = {
    'name': raw_character['name'],
    'original_name': raw_character['original_name'],
    'alternative_name': raw_character['alternative_name'],
}
characters_serialized.append(character)

schema = CharacterSchema()
characters = schema.load(data=characters_serialized, many=True, session=db.session)
raw_character is json as seen below:
{
    "name": "Olaa",
    "original_name": "olå",
    "alternative_name": ["ol", "oå"]
}
The model itself is defined as a table for Character and a table representing the list of alternative names
class CharacterAlternativeName(db.Model):
    __tablename__ = "character_alternative_name"
    character_id = sa.Column(sa.Integer, sa.ForeignKey("character.id"), primary_key=True)
    alternative_name = sa.Column(sa.String, primary_key=True)

    def __repr__(self):
        return "<CharacterAlternativeName(alternative_name={self.alternative_name!r})>".format(self=self)

class Character(db.Model):
    __tablename__ = "character"
    id = sa.Column(sa.Integer, primary_key=True)
    name = sa.Column(sa.String)
    original_name = sa.Column(sa.String)
    alternative_name = relationship("CharacterAlternativeName")

    def __repr__(self):
        return "<Character(name={self.name!r})>".format(self=self)

class CharacterSchema(SQLAlchemySchema):
    class Meta:
        model = Character
        include_relationships = True
        load_instance = True  # Optional: deserialize to model instances

    id = auto_field()
    name = auto_field()
    original_name = auto_field()
    alternative_name = auto_field()
The problem I am facing is that it seems to struggle with the composite key in the CharacterAlternativeName table; when it tries to deserialize the values, it gives the following error message:
"marshmallow.exceptions.ValidationError: {0: {'alternative_name': {0: ["Could not deserialize related value 'ol'; expected a dictionary with keys ['character_id', 'alternative_name']"], 1: ["Could not deserialize related value 'oå'; expected a dictionary with keys ['character_id', 'alternative_name']"]}}}"
Which seems to suggest it struggles to create the composite key. Any ideas how to make the composite key work with sqlalchemy and marshmallow?
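The error message itself spells out the shape expected for each related value: a dictionary keyed by the composite-key columns, not a bare string. One way to pre-process the input into that shape (a sketch of what the message asks for, not a confirmed fix):

```python
raw_character = {
    "name": "Olaa",
    "original_name": "olå",
    "alternative_name": ["ol", "oå"],
}

# Wrap each bare string in the dict shape named by the error message
reshaped = [{"alternative_name": n} for n in raw_character["alternative_name"]]
```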

Is there a way to set the id of an existing instance as the value of a nested serializer in DRF?

I'm developing a chat application. I have a serializer like this:
class PersonalChatRoomSerializer(serializers.ModelSerializer):
    class Meta:
        model = PersonalChatRoom
        fields = '__all__'

    user_1 = UserSerializer(read_only=True)
    user_2 = UserSerializer()
The user_1 field is auto-populated, but the client should provide the user_2 field in order to create a personal chat room with another user.
My problem is that when creating a new chat room, the serializer tries to create a new user object from the input data, giving me validation errors. What I really want is for it to accept a user ID and set user_2 to an existing user instance from the database, and if the user is not found, simply return a validation error (the exact behavior of PrimaryKeyRelatedField when creating a new object).
I want my input data to look like this:
{
    'user_2': 1 // id of the user
}
And when I retrieve my PersonalChatRoom object, I want the serialized form of the user object for my user_2 field:
{
    ...,
    'user_2': {
        'username': ...,
        'the_rest_of_the_fields': ...
    }
}
How can I achieve this?
views.py
class GroupChatRoomViewSet(viewsets.ModelViewSet):
    permission_classes = [IsUserVerified, IsGroupOrIsAdminOrReadOnly]
    serializer_class = GroupChatRoomSerializer

    def get_queryset(self):
        return self.request.user.group_chat_rooms.all()

    def perform_create(self, serializer):
        return serializer.save(owner=self.request.user)
I finally figured out how to do it. I just needed to override the to_representation method and serialize the object there. Here is the code I ended up with:
class PersonalChatRoomSerializer(serializers.ModelSerializer):
    class Meta:
        model = PersonalChatRoom
        fields = '__all__'
        read_only_fields = ['user_1']

    def to_representation(self, chat_room):
        """Serialize user instances when outputting the results."""
        obj = super().to_representation(chat_room)
        for field in obj.keys():
            if field.startswith('user_'):
                obj[field] = UserSerializer(User.objects.get(pk=obj[field])).data
        return obj
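The pattern in to_representation can be shown without Django: take the already-serialized dictionary and swap every user ID for its nested representation. All names below are illustrative stand-ins for the serializer and user lookup:

```python
USERS = {1: {"username": "alice"}, 2: {"username": "bob"}}

def to_representation(row):
    out = dict(row)
    for field, value in row.items():
        if field.startswith("user_"):
            out[field] = USERS[value]  # expand the id into the nested object
    return out

room = {"id": 7, "user_1": 1, "user_2": 2}
expanded = to_representation(room)
```

The write path still accepts a plain ID; only the read path is rewritten, which is why the two representations can differ.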

django nested model json import

I am quite new to Django and I may be misunderstanding some concepts, but I cannot find a solution to what I am trying to do.
I have a multi-table model defined, with models, views, admin, serializers and urls. It is working perfectly to independently read and write all of them through the API.
The code looks something like this:
models.py
class Level1(MySQLNoCountModel):
    name = models.CharField()
    ...

class Level2(MySQLNoCountModel):
    level1 = models.ForeignKey(
        Level1,
        blank=False,
        null=True,
        on_delete=models.CASCADE
    )
    name = models.CharField()
    ...
serializers.py
class CreateLevel1Serializer(OrderedModelSerializer):
    name = serializers.CharField()

    def create(self, validated_data):
        obj, created = models.Level1.objects.update_or_create(
            name=validated_data['name'],
            defaults={}
        )

class CreateLevel2Serializer(OrderedModelSerializer):
    level1 = serializers.CharField()
    name = serializers.CharField()

    def validate_level1(self, value):
        try:
            return models.Level1.objects.get(name=value)
        except Exception:
            raise serializers.ValidationError(_('Invalid level1'))

    def create(self, validated_data):
        obj, created = models.Level2.objects.update_or_create(
            name=validated_data['name'],
            defaults={
                'level1': validated_data.get('level1', True),
            }
        )
With this, I can create new elements by sending two consecutive posts to the specific endpoints:
{
    "name": "name1"
}
{
    "level1": "name1",
    "name": "name2"
}
I am trying to do it in a single operation by inserting something like this:
{
    "name": "name1",
    "level2": [
        {
            "name": "name2"
        },
        {
            "name": "name3"
        }
    ]
}
I have tried to redefine the Level1 serializer like this, but it tries to create the Level2 before the Level1, resulting in a validation error.
class CreateLevel1Serializer(OrderedModelSerializer):
    name = serializers.CharField()
    level2 = CreateLevel2Serializer(many=True)
What is the correct approach for this?
I have found a way to do it (I don't know if it is the standard one). On creation of the Level1, we can call the Level2 serializer. Something like this:
class CreateLevel1Serializer(OrderedModelSerializer):
    name = serializers.CharField()

    def create(self, validated_data):
        obj, created = models.Level1.objects.update_or_create(
            name=validated_data['name'],
            defaults={}
        )
        for level2 in self.initial_data.get('level2', []):
            level2serializer = CreateLevel2Serializer(data=level2)
            r = level2serializer.is_valid()
            level2inst = level2serializer.save()
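The ordering problem this works around reduces to a toy sketch: the parent row must exist before any child row that references it, so the nested payload has to be processed parent-first. Everything below is illustrative, not DRF API:

```python
db = {"level1": [], "level2": []}

def create_level1(name):
    db["level1"].append({"name": name})

def create_level2(parent_name, name):
    # mirrors validate_level1: the referenced parent must already exist
    if not any(r["name"] == parent_name for r in db["level1"]):
        raise ValueError("Invalid level1")
    db["level2"].append({"level1": parent_name, "name": name})

payload = {"name": "name1", "level2": [{"name": "name2"}, {"name": "name3"}]}
create_level1(payload["name"])          # parent first
for child in payload.get("level2", []):
    create_level2(payload["name"], child["name"])
```

Declaring the nested serializer as a field (the first attempt) validates children before the parent exists; calling the child serializer inside the parent's create() enforces the order explicitly.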

implement mapping for MySQL year type in django

There is a YEAR data type in MySQL, but there is no corresponding field type in the Django ORM. Can I provide a custom field class to map a data type for one specific RDBMS? If so, how can I do that?
I managed to find the following solution, which satisfies my needs (models.py):
from django.core.exceptions import FieldError

class YearField(models.Field):
    def db_type(self, connection):
        if connection.settings_dict['ENGINE'] == 'django.db.backends.mysql':
            return 'year'
        else:
            raise FieldError
and then use the column datatype in models:
class Band(models.Model):
    """Music band"""
    class Meta:
        db_table = 'band'

    name = models.CharField(max_length=64)
    active_since = YearField()
    active_until = YearField(null=True)
    genres = models.ManyToManyField(Genre)

    def __unicode__(self):
        return self.name