I am trying to deploy an application to a production server, but testing has revealed a strange inconsistency in django-rest-framework (first with version 2.3.14, now upgraded to 3.0.1).
On my development machine the JSON response comes wrapped in some metadata:
{u'count': 2, u'previous': None, u'results': [json objects here]}
Whereas on the production machine only the 'results' array is returned.
Is there a setting to change this one way or the other?
Serializers are as follows:
class SampleSerializer(serializers.ModelSerializer):
    class Meta:
        model = Sample

class LibrarySerializer(serializers.ModelSerializer):
    sample = SampleSerializer()

    class Meta:
        model = Library
views.py
class PullLibraryView(generics.ListAPIView):
    serializer_class = LibrarySerializer
    filter_backends = (filters.DjangoFilterBackend,)

    def get_queryset(self, *args, **kwargs):
        slug = self.kwargs.get('submission_slug', '')
        return Library.objects.filter(sample__submission__submission_slug=slug)
This metadata is added to the response by the pagination serializer, which is part of the built-in pagination. The metadata is only added if pagination is enabled, so check your settings to make sure that pagination is enabled on your production machine as well.
Whether pagination happens is determined by your settings and by the paginate_by attribute on your views. Also make sure your requests include the page_size parameter, which should force pagination on your views.
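For reference, a minimal sketch of the relevant settings (this assumes DRF 2.x/3.0-style pagination settings; the page size value is a placeholder):
# settings.py - a sketch, assuming DRF 2.x/3.0-style pagination settings
REST_FRAMEWORK = {
    # A default page size enables pagination, wrapping results in count/next/previous metadata
    'PAGINATE_BY': 10,
    # Name of the query parameter that lets clients request (and force) a page size
    'PAGINATE_BY_PARAM': 'page_size',
}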
We are working on a top-down-RPG-like multiplayer game with some friends, for learning purposes (and fun!). We already have some entities in the game and inputs are working, but the network implementation gives us headaches :D
The Issues
When trying to convert with dict, some values will still contain a pygame.Surface, which I don't want to transfer, and which causes errors when trying to JSONify them. Other objects, like Rectangle, which I would like to transfer in a simplified way, cannot be converted automatically.
Already functional
Client-Server connection
Transfering JSON objects in both directions
Async networking and synchronized putting into a Queue
Situation
A new player connects to the server and wants to get the current game state with all objects.
Data-Structure
We use an "Entity-Component"-based architecture, so we separated the game logic very strictly into "systems", while the data is stored in the "components" of each entity. The entity is a very simple container and has nothing more than an ID and a list of components. Example entity (shortened for better readability):
Entity
|-- Component (Moveable)
|-- Component (Graphic)
| |- complex datatypes like pygame.Surface
| `- (...)
`- Component (Inventory)
We tried different approaches, but none of them seems to fit very well, or they feel "hacky".
pickle
Very close to Python, so not easy to implement clients in other languages in the future. And I've read about some security risks when creating items from the network in the dynamic way pickle offers. It does not even solve the Surface/Rectangle issue.
__dict__
Still contains the references to the old objects, so a "cleanup" or "filter" for unwanted datatypes would also affect the originals. A deepcopy throws an exception:
...\Python\Python36\lib\copy.py", line 169, in deepcopy
rv = reductor(4)
TypeError: can't pickle pygame.Surface objects
Show some code
The method of the EntityManager class which should generate the snapshot of all entities, including their components. This snapshot should be converted to JSON without any errors - and, if possible, without much configuration in this core class.
class EntityManager:
    def generate_world_snapshot(self):
        """ Returns a dictionary with all entities and their components to send
        this to the client. This function will probably generate a lot of data,
        but it's to send the whole current game state when a new player
        connects or when a complete refresh is required """
        # It should be possible to add more objects to the snapshot, so we
        # create our own snapshot datastructure
        result = {'entities': {}}
        entities = self.get_all_entities()
        for e in entities:
            result['entities'][e.id] = deepcopy(e.__dict__)
            # Components are objects, but a dictionary is required for transfer
            cmp_obj_list = result['entities'][e.id]['components']
            # Empty the current list of components; it's going to be filled with
            # dictionaries of each cmp, which are cleaned for the dump because
            # of the errors when directly converting the whole datastructure to JSON
            result['entities'][e.id]['components'] = {}
            for cmp in cmp_obj_list:
                cmp_copy = deepcopy(cmp)
                cmp_dict = cmp_copy.__dict__
                # Only list, dict, int, str, float and None will stay, while
                # other types are simply deleted, including their key.
                # Lists and dicts will be cleaned up recursively as well.
                cmp_dict = self.clean_complex_recursive(cmp_dict)
                result['entities'][e.id]['components'][type(cmp_copy).__name__] \
                    = cmp_dict
        logging.debug("EntityMgr: Entity#3: %s" % result['entities'][3])
        return result
Expectation and actual results
We can find a way to manually override the elements which we don't want. But as the list of components grows, we would have to put all the filter logic into this core class, which should not contain any component specializations.
Do we really have to put all the logic for filtering the right objects into the EntityManager? This does not feel good, as I would like to have all conversion to JSON done without any hardcoded configuration.
How can we convert all this complex data in the most generic way?
Thanks for reading so far and thank you very much for your help in advance!
Interesting articles which we have already worked through, and which may be helpful for others with similar issues:
https://gafferongames.com/post/what_every_programmer_needs_to_know_about_game_networking/
http://code.activestate.com/recipes/408859/
https://docs.python.org/3/library/pickle.html
UPDATE: Solution - thanks to sloth
We used a combination of the following, which works really great so far and is also easy to maintain!
The EntityManager now calls the get_state() function of each entity.
class EntityManager:
    def generate_world_snapshot(self):
        """ Returns a dictionary with all entities and their components to send
        this to the client. This function will probably generate a lot of data,
        but it's to send the whole current game state when a new player
        connects or when a complete refresh is required """
        # It should be possible to add more objects to the snapshot, so we
        # create our own snapshot datastructure
        result = {'entities': {}}
        entities = self.get_all_entities()
        for e in entities:
            result['entities'][e.id] = e.get_state()
        return result
The entity has only some basic attributes to add to the state and forwards the get_state() call to all of its components:
class Entity:
    def get_state(self):
        state = {'name': self.name, 'id': self.id, 'components': {}}
        for cmp in self.components:
            state['components'][type(cmp).__name__] = cmp.get_state()
        return state
The components themselves now inherit their get_state() method from their new Component superclass, which simply takes care of all the simple datatypes:
class Component:
    def __init__(self):
        logging.debug('generic component created')

    def get_state(self):
        state = {}
        for attr, value in self.__dict__.items():
            if value is None or isinstance(value, (str, int, float, bool)):
                state[attr] = value
            elif isinstance(value, (list, dict)):
                # logging.warn("Generating state: not supporting lists yet")
                pass
        return state

class GraphicComponent(Component):
    # (...)
Now every developer has the opportunity to override this method and create a more detailed get_state() function for complex types directly in the component classes (like Graphic, Movement, Inventory, etc.), if it is required to save the state in a more accurate way - which is a huge win for maintaining the code in the future, since these code pieces live in one class.
The next step is to implement the static method for creating the instances from the state in the same class (a generic sketch follows below). This makes the whole thing work really smoothly.
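As a sketch, a generic version of that factory could live on the Component superclass as a classmethod (this assumes every component can be constructed without arguments; the code is an illustration, not part of the original solution):
class Component:
    @classmethod
    def from_state(cls, state):
        # Create a bare instance and restore the simple attributes that
        # get_state() saved. Assumes cls() takes no required arguments.
        instance = cls()
        instance.__dict__.update(state)
        return instance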
Thank you so much sloth for your help.
Do we really have to put all the logic into the EntityManager for filtering the right objects?
No, you should use polymorphism.
You need a way to represent your game state in a form that can be shared between different systems; so maybe give your components a method that will return all of their state, and a factory method that allows you to create the component instances out of that very state.
(Python already has the __repr__ magic method, but you don't have to use it)
So instead of doing all the filtering in the entity manager, just let it call this new method on all components, and let each component decide what the result will look like.
Something like this:
...
result = {'entities': {}}
entities = self.get_all_entities()
for e in entities:
    result['entities'][e.id] = {'components': {}}
    for cmp in e.components:
        result['entities'][e.id]['components'][type(cmp).__name__] = cmp.get_state()
...
And a component could implement it like this:
class GraphicComponent:
    def __init__(self, pos=...):
        self.image = ...
        self.rect = ...
        self.whatever = ...

    def get_state(self):
        return {'pos_x': self.rect.x, 'pos_y': self.rect.y, 'image': 'name_of_image.jpg'}

    @staticmethod
    def from_state(state):
        return GraphicComponent(pos=(state['pos_x'], state['pos_y']), ...)
And a client's EntityManager that receives the state from the server would iterate over the component list of each entity and call from_state to create the instances, as sketched below.
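A minimal sketch of that client-side reconstruction (the COMPONENT_CLASSES registry and the apply_world_snapshot/add_entity names are illustrative, not part of the original code):
# Maps the component type names used in the snapshot back to their classes
COMPONENT_CLASSES = {'GraphicComponent': GraphicComponent}

class EntityManager:
    def apply_world_snapshot(self, snapshot):
        for entity_id, entity_state in snapshot['entities'].items():
            # Assumes Entity() initializes an empty components list
            entity = Entity()
            entity.id = entity_id
            for cmp_name, cmp_state in entity_state['components'].items():
                cmp_class = COMPONENT_CLASSES[cmp_name]
                entity.components.append(cmp_class.from_state(cmp_state))
            self.add_entity(entity)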
I have one database that I am specifically using for Django. I ran into a simple problem: for the user authentication at login I need to use data from another (MySQL) database and table. Both tables are on the same server. How can I do that? Thanks
You need to define both databases in your settings.py. You should read through the setting up multiple databases documentation; it's pretty straightforward. Once you have the databases configured and synchronized, you can manually select a database for a QuerySet as follows:
Model.objects.using('database').all()
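For example, a sketch of the two entries in settings.py (database names and credentials are placeholders):
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'django_db',
        'USER': 'dbuser',
        'PASSWORD': 'secret',
        'HOST': 'localhost',
    },
    # The pre-existing database that holds the user authentication data
    'legacy_users': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'auth_db',
        'USER': 'dbuser',
        'PASSWORD': 'secret',
        'HOST': 'localhost',
    },
}
With that in place, Model.objects.using('legacy_users').all() reads from the second database.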
Update:
To specify a connection for a model, you can define a router as follows.
class MyRouter(object):
    """
    This router object will take care of the database
    operations for models.
    """
    def db_for_read(self, model, **hints):
        if hasattr(model, 'connection_name'):
            return model.connection_name
        return None

    def db_for_write(self, model, **hints):
        if hasattr(model, 'connection_name'):
            return model.connection_name
        return None

    def allow_syncdb(self, db, model):
        if hasattr(model, 'connection_name'):
            return model.connection_name == db
        return db == 'default'
and then for your models in models.py you can do something like this:
class MyModel(models.Model):
    connection_name = "filename"
Update #2
You can set up the router anywhere you like. All you need to do is pass the router's path to the DATABASE_ROUTERS variable in the settings.py file. For the sake of cleanliness and organisation, I generally do this in a separate routers.py file under my application folder and then set DATABASE_ROUTERS, for example:
DATABASE_ROUTERS = ['myapp.routers.MyRouter',]
I'm using a CSS3 accordion effect, and I want to detect if a hacker makes a script to send a parallel request; i.e.: I have a login form and a registration form on the same page, but only one is visible because of CSS3. To access the page, the user agent must be HTML5 compatible. The trick I use is:
class Register(tornado.web.RequestHandler):
    def post(self):
        tt = self.get_argument("_xsrf") + str(time.time())
        rtime = float(tt.replace(self.get_argument("_xsrf"), ""))
        print rtime

class LoginHandler(BaseHandler):
    def post(self):
        tt = self.get_argument("_xsrf") + str(time.time())
        ltime = float(tt.replace(self.get_argument("_xsrf"), ""))
        print ltime
I've used the xsrf variable because it's unique for every user, to avoid making the server think that the request is coming from the same machine.
Now what I want: how do I take the difference between the time values, abs(ltime - rtime)? That is, how do I access rtime outside the class? I only know how to access a value outside a method. I want to perform this operation to detect whether the difference is small, in which case the user is using a script to make parallel requests to kill the server!
In other words (for general Python users), if I have:
class Product:
    def info(self):
        self.price = 1000

    def show(self):
        print self.price

>>> car = Product()
>>> car.info()
>>> car.show()
1000
but what if I have another
class User:
    pass
then how do I make a method that prints self.price? I've tried inheritance, but I got an error: AttributeError: User instance has no attribute 'price'. So are only methods inherited, not attributes?
It sounds like you need to understand model objects and patterns that use persistent storage of data. tornado.web.RequestHandler, and any object that you subclass from it, only exists for the duration of your request - from when the URL is received on the server to when data is sent back to the browser via self.write() or self.finish().
I would recommend you look at some of the Django or Flask tutorials for some basic ideas of how to build an MVC application in Python (there are no Tornado tutorials that cover this that I know of).
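As a minimal illustration of keeping state across requests - an in-memory dict keyed by the _xsrf token (the names and the 0.5s threshold are placeholders, and a real application would use a proper datastore rather than process memory):
import time
import tornado.web

# Module-level store: outlives individual request handlers within one process
last_request_time = {}

class Register(tornado.web.RequestHandler):
    def post(self):
        token = self.get_argument("_xsrf")
        now = time.time()
        previous = last_request_time.get(token)
        # Two requests from the same token arriving almost simultaneously
        # suggests a scripted parallel request
        if previous is not None and abs(now - previous) < 0.5:
            raise tornado.web.HTTPError(429)
        last_request_time[token] = now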
In Scrapy, I have my items specified in a certain order in items.py, and my spider has those items again in the same order. However, when I run the spider and save the results as a CSV, the column order from items.py or the spider is not maintained. How can I get the CSV to show columns in a specific order? Example code would be very much appreciated.
Thanks.
This is related to Modifying CSV export in scrapy.
The problem is that the exporter is instantiated without any keyword parameters, so keywords like EXPORT_FIELDS are ignored. The solution is the same: you need to subclass the CSV item exporter so that it passes the keyword parameters through.
Following the above recipe, I created a new file xyzzy/feedexport.py (change "xyzzy" to whatever your scrapy project is named):
"""
The standard CSVItemExporter class does not pass the kwargs through to the
CSV writer, resulting in EXPORT_FIELDS and EXPORT_ENCODING being ignored
(EXPORT_EMPTY is not used by CSV).
"""
from scrapy.conf import settings
from scrapy.contrib.exporter import CsvItemExporter

class CSVkwItemExporter(CsvItemExporter):
    def __init__(self, *args, **kwargs):
        kwargs['fields_to_export'] = settings.getlist('EXPORT_FIELDS') or None
        kwargs['encoding'] = settings.get('EXPORT_ENCODING', 'utf-8')
        super(CSVkwItemExporter, self).__init__(*args, **kwargs)
and then added it into xyzzy/settings.py:
FEED_EXPORTERS = {
    'csv': 'xyzzy.feedexport.CSVkwItemExporter'
}
Now the CSV exporter will honor the EXPORT_FIELDS setting - also add to xyzzy/settings.py:
# By specifying the fields to export, the CSV export honors the order
# rather than using a random order.
EXPORT_FIELDS = [
    'field1',
    'field2',
    'field3',
]
I don't know about the time you asked your question, but Scrapy now provides a fields_to_export attribute on the BaseItemExporter class, from which CsvItemExporter inherits.
As per version 0.22:
fields_to_export
A list with the names of the fields that will be exported, or None if you want to export all fields. Defaults to None. Some exporters (like CsvItemExporter) respect the order of the fields defined in this attribute.
See also the documentation for BaseItemExporter and CsvItemExporter on the Scrapy website.
In order to use this feature, though, you will have to create your own ItemPipeline, as detailed in this answer
In a Django application, I'm trying to access an existing MySQL database created with Hibernate (a Java ORM). I reverse engineered the model using:
$ manage.py inspectdb > models.py
This created a nice models file from the database, and many things were quite fine. But I can't find out how to properly access the boolean fields, which Hibernate mapped to columns of type BIT(1).
The inspectdb script by default creates these fields in the model as TextField and adds a comment saying that it couldn't reliably obtain the field type. I changed these to BooleanField and opened my model objects in the admin, but it doesn't work (the model objects always fetch a value of true for these fields). Using IntegerField doesn't work either (e.g. in the admin these fields show strange non-ASCII characters).
Any hints on doing this without changing the database? (I need the existing Hibernate mappings and Java application to keep working with the database.)
Further info: I left these fields as BooleanField and used the interactive shell to look at the fetched values. They are returned as '\x00' (when the Java/Hibernate value is false) and '\x01' (when true), instead of the Python boolean values True and False.
>>> u = AppUser.objects.all()[0]
>>> u.account_expired
'\x00'
>>> u.account_enabled
'\x01'
Where the model includes:
class AppUser(models.Model):
    account_expired = models.BooleanField()
    account_enabled = models.BooleanField(blank=True)
    # etc...
This is the detailed solution from Dmitry's suggestion:
My derived field class:
class MySQLBooleanField(models.BooleanField):
    __metaclass__ = models.SubfieldBase

    def to_python(self, value):
        if isinstance(value, bool):
            return value
        return bool(bytearray(value)[0])

    def get_db_prep_value(self, value):
        return '\x01' if value else '\x00'
Fields in my model:
account_enabled = MySQLBooleanField()
account_expired = MySQLBooleanField()
I had to deal with the same problem, but rather than subclassing the field, I extended the MySQL backend to understand the Hibernate way. It's only a few lines of code and has the advantage that DB introspection can be made to work correctly as well.
See it here.
hibernateboolsbackend / backends / mysql / base.py
# We want to import everything since we are basically subclassing the module.
from django.db.backends.mysql.base import *

django_conversions.update({
    FIELD_TYPE.BIT: lambda x: x != '\x00',
})

DatabaseIntrospection.data_types_reverse.update({
    FIELD_TYPE.BIT: 'BooleanField',
})
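To use it, the database ENGINE in settings.py would point at the subclassed backend instead of the stock MySQL one (a sketch, assuming the package layout shown above; name and credentials are placeholders):
DATABASES = {
    'default': {
        'ENGINE': 'hibernateboolsbackend.backends.mysql',
        'NAME': 'mydb',
        'USER': 'dbuser',
        'PASSWORD': 'secret',
    }
}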
The django-mysql package provides a BooleanField subclass called Bit1BooleanField that solves this:
from django.db.models import Model
from django_mysql.models import Bit1BooleanField

class AppUser(Model):
    bit1bool = Bit1BooleanField()
Easier than rolling your own, and tested on several Django and Python versions.
I guess the only way is to subclass, say, BooleanField, and override the to_python/get_prep_value functions so that the field works seamlessly with Django and your DB.
To make it work on Django 1.7.1 I had to change the to_python function, because it was not reading the data from the DB correctly:
def to_python(self, value):
    if value in (True, False):
        return value
    if value in ('t', 'True', '1', '\x01'):
        return True
    if value in ('f', 'False', '0', '\x00'):
        return False
In Python, the Boolean type is a subclass of integer, whereas in Java it is mapped to the BIT type. So in the DB, the datatype for a Python boolean field should be TINYINT instead of BIT. This is why the application behaves unexpectedly, returning '\x00' and '\x01' values. If you add a boolean field to a Django model and run makemigrations and then migrate, it will add a column of type TINYINT instead of BIT. So updating the column type to TINYINT should resolve the issue.
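A sketch of that column change as a Django migration (the app label, table and column names are placeholders; the exact ALTER statements depend on your schema):
from django.db import migrations

class Migration(migrations.Migration):
    dependencies = [
        ('myapp', '0001_initial'),  # placeholder dependency
    ]

    operations = [
        # Convert the Hibernate-created BIT(1) column to TINYINT(1) so that
        # Django's BooleanField can read it natively
        migrations.RunSQL(
            "ALTER TABLE app_user MODIFY account_enabled TINYINT(1) NOT NULL;",
            reverse_sql="ALTER TABLE app_user MODIFY account_enabled BIT(1) NOT NULL;",
        ),
    ]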