Since a Many2one field only displays one field, I thought about writing a function to display two fields in the Many2one, like this:
def get_services(self, cr, uid, ids, context=None):
    cr.execute("""SELECT name, entity
                  FROM services WHERE id = 3""")
    values = cr.fetchall()
    for value__ in values:
        if value__:
            return {'value': {'service_id': value__[0] + " | " + value__[1]}}  # Example: "Service 1 | Google"
First of all, is it possible? Is there any module which does this, so I could look at it?
Then I call the function this way:
_columns = {
    'service_id': fields.function(get_services, type='many2one', obj='services_getservices_function', method=True, string='Service'),
}
I'm not getting any error, but the field doesn't display on the screen.
What you need is to override name_get on the service model.
see https://doc.openerp.com/trunk/server/api_models/#openerp.osv.orm.BaseModel.name_get
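As a sketch of that suggestion (not tested against a real OpenERP instance), a name_get override returns (id, display_name) pairs. The pure formatting logic looks like this, with the model and field names taken from the question:

```python
def service_name_get(records):
    """Build the (id, display_name) pairs a name_get override on the
    services model would return, combining 'name' and 'entity'."""
    return [(r['id'], "%s | %s" % (r['name'], r['entity'])) for r in records]

# Inside the services model (OpenERP 7 API) this would look roughly like:
#   def name_get(self, cr, uid, ids, context=None):
#       recs = self.browse(cr, uid, ids, context=context)
#       return [(r.id, "%s | %s" % (r.name, r.entity)) for r in recs]
```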
Solved.
I created another field which would contain the name plus the entity.
'name_plus_entity': fields.char('All', size=300),
Then I created an onchange function, so that whenever the field 'name' or the field 'entity' changed, the field 'name_plus_entity' would get: name + " | " + entity.
I also hid the field 'name_plus_entity' in the form XML.
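The onchange step above can be sketched like this (a minimal, untested sketch; the field names follow the answer above):

```python
def onchange_name_or_entity(name, entity):
    """What an on_change handler would return when 'name' or 'entity'
    changes: the new value for the hidden 'name_plus_entity' field."""
    return {'value': {'name_plus_entity': "%s | %s" % (name or '', entity or '')}}
```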
I have a spreadsheet of members designed as below:
My aim is to upload some columns and exclude others. In this case, I wish to upload only the name, age and email and exclude the others. I have been able to achieve this using the slice method as shown below:
def load_imported_members
  spreadsheet = open_spreadsheet
  spreadsheet.default_sheet = 'Worksheet'
  header = spreadsheet.row(1)
  (2..spreadsheet.last_row).map do |i|
    row = Hash[[header, spreadsheet.row(i)].transpose]
    member = Member.find_by_id(row["id"]) || Member.new
    member.attributes = row.to_hash.slice("id", "name", "age", "email")
    member
  end
end
The problem is that last_row considers all the rows up to the last one (13), and since there are validations on the form, the empty rows (which shouldn't be considered) cause errors due to missing data. Is there a way I can upload only specific columns as I have done, yet limit the import to only the rows that have data?
You might want to chain the map call off a reject filter. You may just need to change the map line to this (assuming the missing rows all look like those above):
(2..spreadsheet.last_row).reject { |i| spreadsheet.row(i)[0].nil? }.map do |i|
This assumes that blank cells come back as nil and that blank rows will always have all four desired fields blank, as shown in the image. The reject call tests whether spreadsheet.row(i)[0], the id column, is nil; if so, the row is dropped from the list handed to map.
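A self-contained sketch of that reject/map pattern (plain arrays stand in for Roo's spreadsheet.row(i), and the row data is made up for illustration):

```ruby
# Each row is an array, as Roo's spreadsheet.row(i) returns;
# blank rows come back as all-nil arrays.
rows = {
  2 => ["1", "Alice", 30, "alice@example.com"],
  3 => [nil, nil, nil, nil],  # blank row, should be skipped
  4 => ["2", "Bob", 25, "bob@example.com"]
}

# Drop rows whose id column is nil, then build attribute hashes.
members = (2..4).reject { |i| rows[i][0].nil? }.map do |i|
  { id: rows[i][0], name: rows[i][1], age: rows[i][2], email: rows[i][3] }
end
```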
Thanks for this question; I have learned some things from it. I have shortlisted your answer. [note: uses the 'roo' gem]
def load_imported_members(member)
  spreadsheet = open_spreadsheet(member)
  spreadsheet.each do |records|
    # For xlsx/xls, `records` is a row index into @spreadsheet; for CSV it is the row itself
    record = @spreadsheet ? Hash[[@header, @spreadsheet.row(records)].transpose] : Hash[records]
    attributes = {id: record['id'], name: record['name'], email: record['email'], age: record['age']}
    member_object = Member.new(attributes)
    if member_object.valid?
      if Member.find_by_id(attributes[:id])
        Member.find_by_id(attributes[:id]).update(attributes)
      else
        member_object.save
      end
    end
  end
end
You can parse your uploaded file using the Roo gem.
def self.open_spreadsheet(member)
  case File.extname(member.file.original_filename)
  when ".csv"
    Roo::CSV.new(member.file.expiring_url,
                 csv_options: {headers: true, skip_blanks: true,
                               header_converters: ->(header) { header.strip },
                               converters: ->(data) { data ? data.strip : nil }})
  when ".xlsx", ".xls"
    @spreadsheet = Roo::Spreadsheet.open(member.file.expiring_url)
    @header = @spreadsheet.row(1)
    (2..@spreadsheet.last_row)
  end
end
Here I have used the S3 expiring_url for the uploaded file. Hope this is helpful; I have not tested it, so apologies for any small errors.
If you have validations on name, email, and age, this should work for you.
I've been trying, and failing, to generate a Google BigQuery table using a schema that I build from text.
I have no problem defining the schema in script like this:
var fl = {fields: [{name: 'issue_id', type: 'STRING'},.....}
then assigning it as schema: fl
What I want to do is use an array as input to the field list (e.g. name, type) and dynamically build this list into a table schema. I can do this in text (simple string building) but I can't use a text string as a schema - it appears as a null.
There's probably a wildly simple solution but I've not found it yet. It needs to avoid any add-ons if at all possible.
Specific code information
This is the table definition, which requires a schema.
var table = {
tableReference: {
projectId: projectId,
datasetId: datasetId,
tableId: tableId
},
schema: fl
};
If I define fl as below, I don't have a problem. I'm using the expected syntax and it all works.
var fl = {fields: [{name: 'issue_id', type: 'STRING'},{name: 'internal_issue_id', type: 'STRING'}] };
However, if I define fl as below (fs is an array and I'm concatenating text from this array), I end up with fl as a string, which doesn't work here.
var fl = "{fields: [";
while (countRow < numRows) {
  fl = fl + "{name: '" + fs[countRow][0] + "', type: '" + fs[countRow][1] + "'},";
  countRow = countRow + 1;
}
fl = fl.substring(0, fl.length - 1) + "] }";
The string looks exactly like the originally defined variable, but of course is a string so I didn't really expect it to work without some sort of conversion - just like a date string usually needs conversion to be used in date calculations. Currently it appears as a null to the table definition.
I'm sure I'm not the first person to want to do this, and hoping there's a simple solution.
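One plausible fix (a sketch; fs is assumed to be the array of [name, type] pairs described in the question) is to build the schema as an object rather than a string, so no conversion is ever needed:

```javascript
// Build the fields array directly as objects instead of concatenating text.
var fs = [['issue_id', 'STRING'], ['internal_issue_id', 'STRING']];  // example data
var fl = {
  fields: fs.map(function (row) {
    return { name: row[0], type: row[1] };
  })
};
// fl can now be passed as `schema: fl` in the table definition.
```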
I am building a multi-step form input (a "wizard") where the user input parts of an entity over multiple form input views. At each step, I want to validate only the input data (not the entire entity).
My question is how to use error() with an array of field names.
The model has 12 fields with validation rules. I want to validate 3 of those in one controller action.
So, in this controller action, I get three inputs
$thedata = $this->request->data;
This results in:
['number' => '102','color' => 'blue','size' => 'large']
I then make an array of field names:
$thearray = array_keys($thedata);
This results in:
[
    (int) 0 => 'number',
    (int) 1 => 'color',
    (int) 2 => 'size'
]
Now I would like to check these three fields for errors.
$errors = $this->Items->newEntity($this->request->data)->errors($thearray);
This results in checking ALL 12 fields with validation defined, not just the three in the array, and it fails validation (it picks up all the errors in the entity).
If I define only ONE field to check it works:
$errors = $this->Items->newEntity($this->request->data)->errors('number');
This correctly validates only the field 'number' and produces the desired result.
However, passing an array of fields instead of a string with a single field name validates ALL fields requiring validation.
Also, I tried hard-coding an array as a parameter of errors():
$errors = $this->Items->newEntity($this->request->data)->errors(['number','color']);
That also checks all 12 fields in the table definition, not just these two.
So the question is, how do you prepare the array and pass it to the errors() method if you want to check only two or three specific fields?
Thanks in advance for any advice!
D
According to the docs, errors() can take a $field argument, but not an array. If you want to validate multiple fields without validating all of them, you could loop over $thearray:
$item = $this->Items->newEntity($this->request->data);
foreach ($thearray as $field) {
    $errors[$field] = $item->errors($field);
}
I have a function which returns json data as history from Version of reversion.models.
from django.http import HttpResponse
from reversion.models import Version
from django.contrib.admin.models import LogEntry
import json
def history_list(request):
    history_list = Version.objects.all().order_by('-revision__date_created')
    data = []
    for i in history_list:
        data.append({
            'date_time': str(i.revision.date_created),
            'user': str(i.revision.user),
            'object': i.object_repr,
            'field': i.revision.comment.split(' ')[-1],
            'new_value_field': str(i.field_dict),
            'type': i.content_type.name,
            'comment': i.revision.comment
        })
    data_ser = json.dumps(data)
    return HttpResponse(data_ser, content_type="application/json")
When I run the above snippet I get the output json as
[{"type": "fruits", "field": "colour", "object": "anyobject", "user": "anyuser", "new_value_field": "{'price': $23, 'weight': 2kgs, 'colour': 'red'}", "comment": "Changed colour."}]
In the function above,
'comment': i.revision.comment
returns "comment": "Changed colour." in the JSON, and colour is the field name, which I retrieve from the comment with
'field': i.revision.comment.split(' ')[-1]
But I assume getting the field name and value from field_dict is a better approach.
Problem: from the above JSON list I would like to filter out the new value and the old value of a field; from new_value_field, only the value of colour.
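One lightweight way to get old and new values per field (a sketch independent of django-reversion-compare: it simply diffs the field_dict contents of two consecutive versions, represented here as plain dicts):

```python
def diff_field_dicts(old_fields, new_fields):
    """Return {field: {'old': ..., 'new': ...}} for every field whose
    value differs between two versions' field_dict contents."""
    return {
        key: {'old': old_fields.get(key), 'new': value}
        for key, value in new_fields.items()
        if old_fields.get(key) != value
    }
```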
Getting the changed fields isn't as easy as checking the comment, as this can be overridden.
Django-reversion just takes care of storing each version, not comparing.
Your best option is to look at the django-reversion-compare module and its admin.py code.
The majority of the code in there is designed to produce a neat side-by-side HTML diff page, but the code should be able to be re-purposed to generate a list of changed fields per object (as there can be more than one changed field per version).
The code should* include a view independent way to get the changed fields at some point, but this should get you started:
from django.db import models  # needed for the isinstance check below
from reversion_compare.admin import CompareObjects
from reversion.revisions import default_revision_manager
def changed_fields(obj, version1, version2):
    """
    Create a generic html diff from the obj between version1 and version2:
    A diff of every changed field value.
    This method should be overwritten, to create a nice diff view
    coordinated with the model.
    """
    diff = []
    # Create a list of all normal fields and append many-to-many fields
    fields = [field for field in obj._meta.fields]
    concrete_model = obj._meta.concrete_model
    fields += concrete_model._meta.many_to_many
    # This gathers the related reverse ForeignKey fields, so we can do ManyToOne compares
    reverse_fields = []
    # From: http://stackoverflow.com/questions/19512187/django-list-all-reverse-relations-of-a-model
    changed_fields = []
    for field_name in obj._meta.get_all_field_names():
        f = getattr(
            obj._meta.get_field_by_name(field_name)[0],
            'field',
            None
        )
        if isinstance(f, models.ForeignKey) and f not in fields:
            reverse_fields.append(f.rel)
    fields += reverse_fields
    for field in fields:
        try:
            field_name = field.name
        except AttributeError:
            # is a reverse FK field
            field_name = field.field_name
        is_reversed = field in reverse_fields
        obj_compare = CompareObjects(field, field_name, obj, version1, version2, default_revision_manager, is_reversed)
        if obj_compare.changed():
            changed_fields.append(field)
    return changed_fields
This can then be called like so:
changed_fields(MyModel, history_list_item1, history_list_item2)
Where history_list_item1 and history_list_item2 correspond to various actual Version items.
*: Said as a contributor, I'll get right on it.
I have the following object structure, and for each record I need to display the attribute name and its value. For the following example, I need to display "Name" = xxx. The attribute names can be different for each JSON response; these are field names of a table, so I can't use hard-coded names.
How do I read the attribute value?
I tried var propname = DataInObj.DataSet[0].Record[0].properties[1] but it didn't work. Please help.
object
  Record
    attributes [0]
      Name: xxx
      Amount: 100
    attributes [1]
      Name: yyy
      Amount: 200
See this other post: Iterate over an object in Google Apps script
Code goes like this:
var dict = { "foo": "a", "bar": "b" };

function showProperties() {
  var keys = [];
  for (var k in dict) keys.push(k + ':' + dict[k]);
  Logger.log("total " + keys.length + "\n" + keys.join('\n'));
}