odoo: error creating new form view: Field(s) `arch` failed against a constraint: Invalid view definition - formview

I want to create a new form view associated with a new data model. I created a new menu item "menu1" that has a submenu "menus", and now I want to customize the action view. This is my code:
My xml file:
My data model:
from openerp.osv import fields, osv

class hr_cutomization(osv.osv):
    _inherit = "hr.employee"
    _columns = {
        'new_field_ID': fields.char('new field ID', size=11)
    }
    _defaults = {
        'new_field_ID': 0
    }

hr_cutomization()

class hr_newmodel(osv.osv):
    _name = "hr.newmodel"
    _columns = {
        'field1': fields.char('new field1', size=11),
        'field2': fields.char('new field2', size=11)
    }
    _defaults = {
        'field1': 0
    }

hr_newmodel()
When I update my module, I get this error:
ParseError: "ValidateError
Field(s) arch failed against a constraint: Invalid view definition
Error details:
Element '
What am I doing wrong in my code?

Just update your view action in your XML file, something like this:
<record id="new_action" model="ir.actions.act_window">
<field name="name">New</field>
<field name="type">ir.actions.act_window</field>
<field name="res_model">hr.newmodel</field>
<field name="view_type">form</field>
<field name="view_mode">form,tree</field>
<field name="view_id" ref="view_new_form"/>
</record>
Then update your .py file:
from openerp.osv import fields, osv

class hr_cutomization(osv.osv):
    _inherit = "hr.employee"
    _columns = {
        'new_field_ID': fields.char('new field ID', size=11)
    }
    _defaults = {
        'new_field_ID': '0'
    }

hr_cutomization()

class hr_newmodel(osv.osv):
    _name = "hr.newmodel"
    _columns = {
        'field1': fields.char('new field1', size=11),
        'field2': fields.char('new field2', size=11)
    }
    _defaults = {
        'field1': '0'
    }

hr_newmodel()
In your .py file you declare char fields, but your _defaults assign 0 (an integer); you must pass the default as a string, not an integer.
Also, since you are creating your module in OpenERP 7.0, add the version="7.0" attribute to the <form> tag of your view.
In Odoo 8.0 this is not needed.

I got the same error, and in my case it was caused by wrong indentation in my .py file. Make sure the indentation is correct, something like this:
from openerp.osv import fields, osv

class hr_cutomization(osv.osv):
    _inherit = "hr.employee"
    _columns = {
        'new_field_ID': fields.char('new field ID', size=11)
    }
    _defaults = {
        'new_field_ID': '0'
    }

hr_cutomization()

class hr_newmodel(osv.osv):
    _name = "hr.newmodel"
    _columns = {
        'field1': fields.char('new field1', size=11),
        'field2': fields.char('new field2', size=11)
    }
    _defaults = {
        'field1': '0'
    }

hr_newmodel()
I think that should work.


Is there a way to set the id of an existing instance as the value of a nested serializer in DRF?

I'm developing a chat application. I have a serializer like this:
class PersonalChatRoomSerializer(serializers.ModelSerializer):
    class Meta:
        model = PersonalChatRoom
        fields = '__all__'

    user_1 = UserSerializer(read_only=True)
    user_2 = UserSerializer()
The user_1 field is auto-populated, but the client should provide the user_2 field in order to create a personal chat room with another user.
My problem is that when creating a new chat room, the serializer tries to create a new user object from the input data, giving me validation errors. What I really want is for it to accept a user id and set user_2 to an existing user instance from the database, returning a validation error if the user is not found (the exact behavior of PrimaryKeyRelatedField when creating a new object).
I want my input data to look like this:
{
    'user_2': 1  // id of the user
}
And when I retrieve my PersonalChatRoom object, I want the serialized form of the user object for my user_2 field:
{
    ...,
    'user_2': {
        'username': ...,
        'the_rest_of_the_fields': ...
    }
}
How can I achieve this?
views.py
class GroupChatRoomViewSet(viewsets.ModelViewSet):
    permission_classes = [IsUserVerified, IsGroupOrIsAdminOrReadOnly]
    serializer_class = GroupChatRoomSerializer

    def get_queryset(self):
        return self.request.user.group_chat_rooms.all()

    def perform_create(self, serializer):
        return serializer.save(owner=self.request.user)
I finally figured out how to do it. I just needed to override the to_representation method and serialize the object there. Here is the code I ended up with:
class PersonalChatRoomSerializer(serializers.ModelSerializer):
    class Meta:
        model = PersonalChatRoom
        fields = '__all__'
        read_only_fields = ['user_1']

    def to_representation(self, chat_room):
        """Serialize user instances when outputting the results."""
        obj = super().to_representation(chat_room)
        for field in obj.keys():
            if field.startswith('user_'):
                obj[field] = UserSerializer(User.objects.get(pk=obj[field])).data
        return obj
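An alternative sketch that leans on PrimaryKeyRelatedField directly (assuming the same PersonalChatRoom, User and UserSerializer as above): the field validates on write that the given id belongs to an existing user, and to_representation swaps in the nested form on read.

class PersonalChatRoomSerializer(serializers.ModelSerializer):
    # accepts an existing user's pk on write; DRF itself raises a
    # validation error if no user with that pk exists
    user_2 = serializers.PrimaryKeyRelatedField(queryset=User.objects.all())

    class Meta:
        model = PersonalChatRoom
        fields = '__all__'
        read_only_fields = ['user_1']

    def to_representation(self, chat_room):
        obj = super().to_representation(chat_room)
        # replace the raw pk with the fully serialized user on read
        obj['user_2'] = UserSerializer(chat_room.user_2).data
        return obj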

django nested model json import

I am quite new to Django and I may be misunderstanding some concepts, but I cannot find a solution to what I am trying to do.
I have a multi-table model and have defined the models, views, admin, serializers and urls. Reading and writing each of them independently through the API works perfectly.
The code looks something like this:
models.py
class Level1(MySQLNoCountModel):
    name = models.CharField()
    ...

class Level2(MySQLNoCountModel):
    level1 = models.ForeignKey(
        Level1,
        blank=False,
        null=True,
        on_delete=models.CASCADE
    )
    name = models.CharField()
    ...
serializers.py
class CreateLevel1Serializer(OrderedModelSerializer):
    name = serializers.CharField()

    def create(self, validated_data):
        obj, created = models.Level1.objects.update_or_create(
            name=validated_data['name'],
            defaults={}
        )
        # DRF expects create() to return the instance
        return obj

class CreateLevel2Serializer(OrderedModelSerializer):
    level1 = serializers.CharField()
    name = serializers.CharField()

    def validate_level1(self, value):
        try:
            return models.Level1.objects.get(name=value)
        except Exception:
            raise serializers.ValidationError(_('Invalid level1'))

    def create(self, validated_data):
        obj, created = models.Level2.objects.update_or_create(
            name=validated_data['name'],
            defaults={
                'level1': validated_data.get('level1', True),
            }
        )
        return obj
With this I can create new elements by sending two consecutive posts to the specific endpoints:
{
    "name": "name1"
}

{
    "level1": "name1",
    "name": "name2"
}
I am trying to do it in a single operation by inserting something like this:
{
    "name": "name1",
    "level2": [
        {
            "name": "name2"
        },
        {
            "name": "name3"
        }
    ]
}
I have tried to redefine the Level1 serializer like this, but it tries to create the Level2 before the Level1, resulting in a validation error.
class CreateLevel1Serializer(OrderedModelSerializer):
    name = serializers.CharField()
    level2 = CreateLevel2Serializer(many=True)
What is the correct approach for this?
I have found a way to do it (I don't know if it is the standard one). On creation of the Level1 we can call the Level2 serializer, something like this:
class CreateLevel1Serializer(OrderedModelSerializer):
    name = serializers.CharField()

    def create(self, validated_data):
        obj, created = models.Level1.objects.update_or_create(
            name=validated_data['name'],
            defaults={}
        )
        # 'level2' is not a declared field, so read it from the raw request data
        request = self.context['request']
        for level2 in request.data.get('level2', []):
            # inject the parent's name so validate_level1 can resolve it
            level2['level1'] = validated_data['name']
            level2_serializer = CreateLevel2Serializer(data=level2)
            level2_serializer.is_valid(raise_exception=True)
            level2_serializer.save()
        return obj
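For comparison, here is a minimal sketch of the more conventional DRF pattern (assuming the same models; _Level2ChildSerializer is a hypothetical helper, not from the original code): declare the nested field, pop it off validated_data, and create the parent before the children, which removes the ordering problem entirely.

class _Level2ChildSerializer(serializers.Serializer):
    # only 'name' is needed; the parent link is filled in by the caller
    name = serializers.CharField()

class CreateLevel1Serializer(OrderedModelSerializer):
    name = serializers.CharField()
    level2 = _Level2ChildSerializer(many=True, required=False)

    def create(self, validated_data):
        level2_data = validated_data.pop('level2', [])
        obj, created = models.Level1.objects.update_or_create(
            name=validated_data['name'],
            defaults={}
        )
        # the parent now exists, so the children can safely point at it
        for item in level2_data:
            models.Level2.objects.update_or_create(
                name=item['name'],
                defaults={'level1': obj}
            )
        return obj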

How to filter a query by a list of ids in GraphQL using graphene-django?

I'm trying to perform a GraphQL query using Django and Graphene. To query a single object by id I did the following:
{
    samples(id: "U2FtcGxlU2V0VHlwZToxMjYw") {
        edges {
            nodes {
                name
            }
        }
    }
}
And it just works fine. Problems arise when I try to query with more than one id, like the following:
{
    samples(id_In: "U2FtcGxlU2V0VHlwZToxMjYw, U2FtcGxlU2V0VHlwZToxMjYx") {
        edges {
            nodes {
                name
            }
        }
    }
}
In the latter case I got the following error:
argument should be a bytes-like object or ASCII string, not 'list'
And this is a sketch of how I defined the Type and Query in graphene-django:
class SampleType(DjangoObjectType):
    class Meta:
        model = Sample
        filter_fields = {
            'id': ['exact', 'in'],
        }
        interfaces = (graphene.relay.Node,)

class Query(object):
    samples = DjangoFilterConnectionField(SampleType)

    def resolve_sample_sets(self, info, **kwargs):
        return Sample.objects.all()
GlobalIDMultipleChoiceFilter from graphene-django somewhat solves this issue, if you put "in" in the field name. You can create filters like:
from django_filters import FilterSet
from graphene_django.filter import GlobalIDMultipleChoiceFilter

class BookFilter(FilterSet):
    author = GlobalIDMultipleChoiceFilter()
and use it by
{
    books(author: ["<GlobalID1>", "<GlobalID2>"]) {
        edges {
            nodes {
                name
            }
        }
    }
}
Still not perfect, but the need for custom code is minimized.
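A minimal sketch of how such a filter might be wired into a relay query (BookNode and the Book model are assumed names, not from the original answer; the filterset_class argument is the same one used in the next answer):

import graphene
from graphene_django import DjangoObjectType
from graphene_django.filter import DjangoFilterConnectionField

class BookNode(DjangoObjectType):
    class Meta:
        model = Book
        interfaces = (graphene.relay.Node,)

class Query(graphene.ObjectType):
    # attach the custom FilterSet instead of using filter_fields
    books = DjangoFilterConnectionField(BookNode, filterset_class=BookFilter)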
You can easily use a Filter; just put this with your nodes:
class ReportFileFilter(FilterSet):
    id = GlobalIDMultipleChoiceFilter()
Then in your query just use:
class Query(graphene.ObjectType):
    all_report_files = DjangoFilterConnectionField(ReportFileNode, filterset_class=ReportFileFilter)
This is for the relay implementation of graphene-django.
None of the existing answers seemed to work for me as presented, but with some slight changes I managed to resolve my problem as follows:
You can create a custom FilterSet class for your object type and filter the field using GlobalIDMultipleChoiceFilter. For example:
from django_filters import FilterSet
from graphene_django.filter import GlobalIDFilter, GlobalIDMultipleChoiceFilter

class SampleFilter(FilterSet):
    id = GlobalIDFilter()
    id__in = GlobalIDMultipleChoiceFilter(field_name="id")

    class Meta:
        model = Sample
        fields = (
            "id__in",
            "id",
        )
Something I came across is that you cannot have filter_fields defined with this approach. Instead, you have to rely on the custom FilterSet class exclusively, making your object type effectively look like this:
from graphene import relay
from graphene_django import DjangoObjectType

class SampleType(DjangoObjectType):
    class Meta:
        model = Sample
        filterset_class = SampleFilter
        interfaces = (relay.Node,)
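With the FilterSet attached via filterset_class, the connection field itself needs no extra arguments (a sketch, reusing the Query shape from the question):

import graphene
from graphene_django.filter import DjangoFilterConnectionField

class Query(graphene.ObjectType):
    # SampleFilter is picked up from SampleType's Meta
    samples = DjangoFilterConnectionField(SampleType)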
I had trouble implementing the 'in' filter as well; it appears to be broken in graphene-django right now and does not work as expected. Here are the steps to make it work:
1. Remove the 'in' filter from your filter_fields.
2. Add an input value named 'id__in' to your DjangoFilterConnectionField and make it a list of IDs.
3. Rename your resolver to match the 'samples' field.
4. Handle filtering by 'id__in' in your resolver for the field. For you this will look as follows:
from base64 import b64decode

import graphene
from graphene_django import DjangoObjectType
from graphene_django.filter import DjangoFilterConnectionField

def get_pk_from_node_id(node_id: str):
    """Gets the pk from a relay node id."""
    model_with_pk = b64decode(node_id).decode('utf-8')
    model_name, pk = model_with_pk.split(":")
    return pk

class SampleType(DjangoObjectType):
    class Meta:
        model = Sample
        filter_fields = {
            'id': ['exact'],
        }
        interfaces = (graphene.relay.Node,)

class Query(object):
    samples = DjangoFilterConnectionField(SampleType, id__in=graphene.List(graphene.ID))

    def resolve_samples(self, info, **kwargs):
        # the filter_fields 'in' lookup seems not to work; this hack does
        id__in = kwargs.get('id__in')
        if id__in:
            node_ids = kwargs.pop('id__in')
            pk_list = [get_pk_from_node_id(node_id) for node_id in node_ids]
            return Sample._default_manager.filter(id__in=pk_list)
        return Sample._default_manager.all()
This will allow you to call the filter with the following API. Note the use of an actual array in the signature (I think this is a better API than sending a comma-separated string of values). This solution still allows you to add other filters to the request, and they will chain together correctly.
{
    samples(id_In: ["U2FtcGxlU2V0VHlwZToxMjYw", "U2FtcGxlU2V0VHlwZToxMjYx"]) {
        edges {
            nodes {
                name
            }
        }
    }
}
Another way is to tell the Relay filter of graphene_django to also deal with a list. This filter is registered in a mixin in graphene_django and applied to any filter you define.
So here is my solution:
from django.db.models import AutoField, ForeignKey, OneToOneField
from graphene_django.filter.filterset import (
    GlobalIDFilter,
    GrapheneFilterSetMixin,
)
from graphql_relay import from_global_id

class CustomGlobalIDFilter(GlobalIDFilter):
    """Allow __in lookup for IDs"""

    def filter(self, qs, value):
        if isinstance(value, list):
            value_lst = [from_global_id(v)[1] for v in value]
            return super(GlobalIDFilter, self).filter(qs, value_lst)
        else:
            return super().filter(qs, value)

# Fix the mixin defaults
GrapheneFilterSetMixin.FILTER_DEFAULTS.update({
    AutoField: {"filter_class": CustomGlobalIDFilter},
    OneToOneField: {"filter_class": CustomGlobalIDFilter},
    ForeignKey: {"filter_class": CustomGlobalIDFilter},
})

Scala - Processing XML to JSON Objects in Functional Style

I have an interesting problem. I want to process an XML file into individual JSON objects (JSONObject), but in a functional way.
I have a source XML file that contains property groups, each containing properties (see below).
The only way to tell if a group belongs to different products is by the id attribute.
<data>
    <products>
        <property_group classification="product_properties" id="1234">
            <property name="Name">Product1</property>
            <property name="Brand">Brand1</property>
            ...
        </property_group>
        <property_group classification="size_properties" id="1234">
            <property name="width">200cm</property>
            <property name="height">100cm</property>
            ...
        </property_group>
        ...
        <property_group classification="product_properties" id="5678">
            <property name="Name">Product2</property>
            <property name="Brand">Brand2</property>
            ...
        </property_group>
        <property_group classification="weight_properties" id="5678">
            <property name="kg">20</property>
            <property name="lbs">44</property>
            ...
        </property_group>
        ...
    </products>
</data>
My code for processing this XML file looks like this:
def createProducts(propertyGroups: NodeSeq): MutableList[JSONObject] = {
  val products = new MutableList[JSONObject]
  var productJSON = new JSONObject()
  var currentProductID = ""
  propertyGroups.foreach { propertyGroup =>
    // Get data from the current property_group
    val productID = getProductID(propertyGroup)
    val propertiesClassification = getClassification(propertyGroup)
    val properties = getProductAttributes(propertyGroup \\ "property")
    // Does this group relate to a new product?
    if (currentProductID != productID) {
      // Starting a new Product
      productJSON = new JSONObject()
      products += productJSON
      productJSON.put("product", productID)
      currentProductID = productID
    }
    // Add the properties to the product
    val propertiesJSON = new JSONObject()
    propertiesJSON.put(propertiesClassification, properties)
    productJSON.put(propertiesClassification, properties)
  }
  return products
}
Although this works and does what it is supposed to do, it is not 'functional style'. How do I change this from the imperative mindset to a functional style?
The JavaEE JSON API is not the most functional JSON library, but if you need to return JsonObjects and want to do it in a more functional style, you could do:
def createProducts(propertyGroups: NodeSeq): List[JSONObject] =
  propertyGroups
    // map to a (id, classification, properties) tuple
    .map { propertyGroup =>
      val productID = getProductID(propertyGroup)
      val propertiesClassification = getClassification(propertyGroup)
      val properties = getProductAttributes(propertyGroup \\ "property")
      (productID, propertiesClassification, properties)
    }
    // group by product ID
    .groupBy(_._1)
    // turn grouped properties into a json product
    .map { case (id, tuples) =>
      val product = new JSONObject()
      product.put("product", id)
      // fold over all the tuples and add properties to the product json
      tuples.foldLeft(product) { case (accProd, (_, classif, properties)) =>
        // not exactly sure what you want to do with your propertiesJSON
        val propertiesJSON = new JSONObject()
        propertiesJSON.put(classif, properties)
        accProd.put(classif, properties)
        accProd
      }
    }.toList

CsvBulkLoader import/update only existing objects

I'm using a simple CsvBulkLoader to bulk update dataobjects.
class OrderImporter extends CsvBulkLoader {
    public $delimiter = ';';
    public $enclosure = '"';
    public $hasHeaderRow = true;
    public $columnMap = array(
        'ID' => 'ID',
        'Bezahlt' => 'Payed',
        'Geandert' => 'NeedReview'
    );
}
My problem is that I don't want to create new objects for rows in the import file that don't match existing records; I only want to update the existing ones.
Is there a way to achieve this? Sadly I can't find anything in the docs.
I think you should have a look at CsvBulkLoader::processRecord(). This is where each line is processed. You could try this in your OrderImporter class (untested):
protected function processRecord($record, $columnMap, &$results, $preview = false) {
    // find existing object
    $existingObj = $this->findExistingObject($record, $columnMap);
    // only process the record if it matches an existing object
    return ($existingObj)
        ? parent::processRecord($record, $columnMap, $results, $preview)
        : false;
}
HTH, wmk
You need to set $duplicateChecks based on which fields already existing in your DB can be checked against the import file. If the IDs match, you can use:
public $duplicateChecks = array(
    'ID' => 'ID',
);
You should test the import on a dev server first, especially if you use a combination of fields, as the results can be different from what you're expecting.
See $duplicateChecks in the BulkLoader API: http://api.silverstripe.org/3.1/class-BulkLoader.html