I have a Dash page that contains a table.
From a callback, I can update the info inside the table based on a datepicker or a filter from a dropdown.
@app.callback(
Output('table_web', 'data'),
[Input(component_id='datepickerrange', component_property='start_date'),
Input(component_id='datepickerrange', component_property='end_date'),
Input(component_id='my_button', component_property='n_clicks')],
[State(component_id='table_dropdown_web', component_property='value')]
)
My problem appears when I want to add another table that represents something else, and I want to filter that table with another callback, with its own inputs.
But I cannot figure out how to declare the two callbacks for the two tables so that the first callback addresses the first table and the second callback the second table.
I tried something like this:
@app.callback(
Output('table_teste', 'data_table1'),
[Input(component_id='datepickerrange', component_property='start_date'),
Input(component_id='datepickerrange', component_property='end_date'),
Input(component_id='my_button_2', component_property='n_clicks')],
)
def create_table_1(start_date, end_date, n):
# code
return data_table1
@app.callback(
Output('table_web_utm', 'data_table2'),
[Input('my_button', 'n_clicks')],
[State('my_txt_input', 'value')]
)
def create_table_2(n, value):
    # code
return data_table2
I figured this one out.
In case someone has the same problem:
I should have used "data" as the output property in the callbacks and returned it from the functions.
@app.callback(
Output('table_teste', 'data'),
[Input(component_id='datepickerrange', component_property='start_date'),
Input(component_id='datepickerrange', component_property='end_date'),
Input(component_id='my_button_2', component_property='n_clicks')],
)
def create_table_1(start_date, end_date, n):
# code
return data
@app.callback(
Output('table_web_utm', 'data'),
[Input('my_button', 'n_clicks')],
[State('my_txt_input', 'value')]
)
def create_table_2(n, value):
    # code
return data
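For completeness, a minimal layout sketch with the two tables (the ids match the callbacks above; the column names here are placeholders):
import dash
import dash_html_components as html
import dash_table

app = dash.Dash(__name__)

app.layout = html.Div([
    # first table, filled by create_table_1
    dash_table.DataTable(
        id='table_teste',
        columns=[{"name": c, "id": c} for c in ['col_a', 'col_b']],
    ),
    # second table, filled by create_table_2
    dash_table.DataTable(
        id='table_web_utm',
        columns=[{"name": c, "id": c} for c in ['col_c', 'col_d']],
    ),
])
Each callback then writes to the 'data' property of its own table id, so the two tables update independently.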
The long title also contains a mini-example because I couldn't explain well what I'm trying to do. Nonetheless, the similar-questions window led me to various implementations. But since I have read multiple times that this is bad design, I would like to ask whether what I'm trying to do is bad design, rather than asking how to do it. For this reason I will try to explain my use case with minimal functional code.
Suppose I have two classes, each of them with their own parameters:
class MyClass1:
def __init__(self,param1=1,param2=2):
self.param1=param1
self.param2=param2
class MyClass2:
def __init__(self,param3=3,param4=4):
self.param3=param3
self.param4=param4
I want to print param1...param4 as strings (i.e. "param1"..."param4") and not their values (i.e. 1...4).
Why? Two reasons in my case:
I have a GUI where the user is asked to select one of the class types (MyClass1, MyClass2) and is then asked to insert the values for the parameters of that class. The GUI then must show the parameter names ("param1", "param2" if MyClass1 was chosen) as a label next to the Entry widget that gets the value. Now, suppose the number of classes and parameters is very high, like 10 classes and 20 parameters per class. In order to minimize the written code and to make it flexible (add or remove parameters from classes without modifying the GUI code), I would like to cycle through all the parameters of the chosen class and create the relative widget for each of them, thus I need the parameter names in the form of strings. The real application I'm working on is even more complex, with parameters inside other objects of classes, but I used the simplest example. One solution would be to define every parameter as an object where param1.name="param1" and param1.value=1. Then in the GUI I would print param1.name. But this leads to a specific problem of my implementation, which is reason 2:
MyClass1...MyClassN will at some point be written to JSON. The JSON will be a huge file, and since it's a complex tree (the example is simple) I want to keep it as simple as possible. To explain why I don't like the solution above, suppose this situation:
class MyClass1:
    def __init__(self,param1,param2,combinations=[]):
self.param1=param1
self.param2=param2
self.combinations=combinations
Suppose param1 and param2 are now lists of variable size, and combinations is a list in which each element is one combination of param1 and param2 together with an output generated from some sort of calculation. Each element of the combinations list is an object SingleCombination, for example (metacode):
param1 = [1, 2]
param2 = [5, 6]
SingleCombination.param1 = 1
SingleCombination.param2 = 5
SingleCombination.output = 1 * 5
MyInst1.combinations.append(SingleCombination)
In my case I will further encapsulate param1 and param2 in an object called parameters, so every condition will have a nice tree with only two objects, parameters and output, and expanding the parameters node will show all the parameters with their values.
If I use jsonpickle to generate JSON from the situation above, it is nicely displayed, since the name of each node will be the name of the variable ("param1", "param2" as strings in the JSON). But if I do the trick at the end of situation (1), creating an object for paramN with paramN.name and paramN.value, the JSON tree becomes ugly and, above all, huge, because if I have a big number of conditions, every paramN contains two sub-elements. I wrote out the situation and displayed it with a JSON viewer; see the attached image.
I could pre-process the data structure before creating the JSON; the problem is that I use the JSON to recreate the data structure in another session of the program, so I need all the pieces of the data structure to be in the JSON.
So, from my requirements, it seems that the workaround to avoid printing the variable names creates some side effects on the JSON visualization that I don't know how to solve without changing the logic of my program...
If you use dataclasses, getting the field names is pretty straightforward:
from dataclasses import dataclass, fields
@dataclass
class MyClass1:
first:int = 4
>>> fields(MyClass1)
(Field(name='first',type=<class 'int'>,default=4,...),)
This way, you can iterate over the class fields and ask your user to fill them. Note that each field has a type, which you could use to, e.g., ask the user for several values, as in your example.
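For instance, a minimal tkinter sketch (assuming the Entry widgets mentioned in the question) that builds one label and one entry per field of the MyClass1 defined above:
import tkinter as tk
from dataclasses import fields

def build_inputs(master, cls):
    """Create a Label + Entry pair for every field of a dataclass."""
    entries = {}
    for row, field in enumerate(fields(cls)):
        tk.Label(master, text=field.name).grid(row=row, column=0)
        entry = tk.Entry(master)
        entry.grid(row=row, column=1)
        entries[field.name] = entry  # call entry.get() later to collect the value
    return entries

root = tk.Tk()
entries = build_inputs(root, MyClass1)  # one row: label "first" plus an empty entry
root.mainloop()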
You could add functions to extract the param names programmatically (_show_inputs below) from the class, and the values from instances (_json below):
def blossom(cls):
"""decorate a class with `_json` (classmethod) and `_show_inputs` (bound)"""
def _json(self):
        return json.dumps(self, cls=DataClassPropEncoder)
def _show_inputs(cls):
return {
field.name: field.type.__name__
for field in fields(cls)
}
cls._json = _json
cls._show_inputs = classmethod(_show_inputs)
return cls
NOTE 1: There's actually no need to decorate the classes with blossom. You could just use its internal functions programmatically.
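For example, the field names and types can be read directly:
from dataclasses import fields

# the same mapping _show_inputs builds, computed without the decorator
inputs = {f.name: f.type.__name__ for f in fields(MyClass1)}
print(inputs)  # {'first': 'int'} for the MyClass1 defined above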
Using a custom json encoder to dump the dataclass objects, including properties:
from dataclasses import asdict, is_dataclass
import json
class DataClassPropEncoder(json.JSONEncoder): # https://stackoverflow.com/a/51286749/7814595
def default(self, o):
if is_dataclass(o):
cls = type(o)
# inject instance properties
props = {
name: getattr(o, name)
for name, value in cls.__dict__.items() if isinstance(value, property)
}
return {
**props,
**asdict(o)
}
return super().default(o)
Finally, wrap the computations inside properties so they are
serialized as well when using the decorated class. Full code example:
from dataclasses import asdict
from dataclasses import dataclass
from dataclasses import fields
from dataclasses import is_dataclass
import json
from itertools import product
from typing import List
class DataClassPropEncoder(json.JSONEncoder): # https://stackoverflow.com/a/51286749/7814595
def default(self, o):
if is_dataclass(o):
cls = type(o)
props = {
name: getattr(o, name)
for name, value in cls.__dict__.items() if isinstance(value, property)
}
return {
**props,
**asdict(o)
}
return super().default(o)
def blossom(cls):
def _json(self):
        return json.dumps(self, cls=DataClassPropEncoder)
def _show_inputs(cls):
return {
field.name: field.type.__name__
for field in fields(cls)
}
cls._json = _json
cls._show_inputs = classmethod(_show_inputs)
return cls
@blossom
@dataclass
class MyClass1:
param1:int
param2:int
@blossom
@dataclass
class MyClass2:
param3: List[str]
param4: List[int]
    def _compute_single(self, values):  # TODO: implement this
return values[0]*values[1]
    @property
def combinations(self):
# TODO: cache if used more than once
# TODO: combinations might explode
field_names = []
field_values = []
cls = type(self)
for field in fields(cls):
field_names.append(field.name)
field_values.append(getattr(self, field.name))
results = []
for values in product(*field_values):
result = {
**{
field_names[idx]: value
for idx, value in enumerate(values)
},
"output": self._compute_single(values)
}
results.append(result)
return results
>>> print(f"MyClass1:\n{MyClass1._show_inputs()}")
MyClass1:
{'param1': 'int', 'param2': 'int'}
>>> print(f"MyClass2:\n{MyClass2._show_inputs()}")
MyClass2:
{'param3': 'List', 'param4': 'List'}
>>> obj_1 = MyClass1(3,4)
>>> print(f"obj_1:\n{obj_1._json()}")
obj_1:
{"param1": 3, "param2": 4}
>>> obj_2 = MyClass2(["first", "second"],[4,2])
>>> print(f"obj_2:\n{obj_2._json()}")
obj_2:
{"combinations": [{"param3": "first", "param4": 4, "output": "firstfirstfirstfirst"}, {"param3": "first", "param4": 2, "output": "firstfirst"}, {"param3": "second", "param4": 4, "output": "secondsecondsecondsecond"}, {"param3": "second", "param4": 2, "output": "secondsecond"}], "param3": ["first", "second"], "param4": [4, 2]}
NOTE 2: If you need to perform several computations per class, it might be a good idea to abstract away the pattern in the combinations property to avoid repeating code.
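One possible abstraction (a sketch of a hypothetical property factory, not part of the code above):
from dataclasses import fields
from itertools import product

def combinations_property(compute):
    """Build a property that enumerates every combination of the
    instance's field values and attaches compute(values) as "output"."""
    @property
    def prop(self):
        names = [f.name for f in fields(type(self))]
        values = [getattr(self, name) for name in names]
        return [
            {**dict(zip(names, combo)), "output": compute(combo)}
            for combo in product(*values)
        ]
    return prop
A class body would then declare, e.g., combinations = combinations_property(lambda values: values[0] * values[1]) instead of repeating the loop.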
NOTE 3: If you need access to the properties several times and not just once, you might want to consider caching their values to avoid re-computation.
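One way to cache (a sketch): memoize inside the property itself, which keeps it a real property, so the isinstance(value, property) check in the encoder above still picks it up (functools.cached_property would not pass that check):
@property
def combinations(self):
    # compute once, then reuse; _combinations_cache is a hypothetical attribute
    # and _build_combinations stands for the body of the property above
    if not hasattr(self, "_combinations_cache"):
        self._combinations_cache = self._build_combinations()
    return self._combinations_cache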
Once you have an instance of MyClass1 / MyClass2, you can call vars() or vars().keys() on it and it will give you the attribute names as strings. Unlike dir(), it will not show all the builtin attributes/methods starting with __.
class MyClass2:
def __init__(self,param3=3,param4=4):
self.param3=param3
self.param4=param4
instance_of_myclass2 = MyClass2(param3="what", param4="ever")
print(vars(instance_of_myclass2))
{'param3': 'what', 'param4': 'ever'}
print(vars(instance_of_myclass2).keys())
dict_keys(['param3', 'param4'])
dir(instance_of_myclass2)
['__class__', '__delattr__', '__dict__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__le__', '__lt__', '__module__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', 'param3', 'param4']
I have a Dash dashboard.
I use a callback to retrieve my data as a table:
@app.callback(
Output('kpi_id', 'children'),
[Input(component_id='my_button')]
)
def create_kpi(value):
df= df.loc[df['user_role'].isin(value)]
df_kpi_users = kpi(df)
data = df_kpi_users.to_dict("records")
return data
I use a list of names to declare the names of my columns
TABLE_COLUMNS = ['day', 'DAU', 'DAU_7']
Inside the layout declaration I have:
dash_table.DataTable(
id='kpi_id',
columns=[{"name": i, "id": i} for i in TABLE_COLUMNS],
My problem is: if I need to refer to the new table as a df to apply certain conditional formatting, like data_bars, all the examples I find call the df like this:
style_data_conditional=(
data_bars(df, 'DAU')
)
But I don't have a df declared, so how can I call my table?
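I was thinking of something like this, computing the df inside the callback and returning the styles as a second output targeting the table's data and style_data_conditional properties (just a sketch, I'm not sure it's the right approach; data_bars is the helper from the Dash docs):
@app.callback(
    [Output('kpi_id', 'data'),
     Output('kpi_id', 'style_data_conditional')],
    [Input(component_id='my_button', component_property='n_clicks')]
)
def create_kpi_with_bars(n_clicks):
    df_kpi_users = kpi(df)  # df built the same way as in my callback above
    data = df_kpi_users.to_dict("records")
    return data, data_bars(df_kpi_users, 'DAU')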
I have a list of 120 tables and I want to save a sample of the first 1000 and the last 1000 rows of each table into an individual CSV file per table.
How can this be done in Code Repositories or Code Authoring?
The following code saves one table to CSV; can anyone help with looping through a list of tables from a project folder and creating an individual CSV file for each table?
@transform(
my_input = Input('/path/to/input/dataset'),
my_output = Output('/path/to/output/dataset')
)
def compute_function(my_input, my_output):
my_output.write_dataframe(
my_input.dataframe(),
output_format = "csv",
options = {
"compression": "gzip"
}
)
Pseudocode:
list_of_tables = [table1, table2, table3, ..., table120]
for table in list_of_tables:
    table = table.limit(1000)
    table.write_dataframe(table.dataframe(), output_format="csv",
        options={"compression": "gzip"})
I was able to get it working for one table; how can I loop through a list of tables and generate the files?
The code for one table:
# to get the first and last rows
from transforms.api import transform_df, Input, Output
from pyspark.sql.functions import monotonically_increasing_id
from pyspark.sql.functions import col
table_name = 'stock'
@transform_df(
output=Output(f"foundry/sample/{table_name}_sample"),
my_input=Input(f"foundry/input/{table_name}"),
)
def compute_first_last_1000(my_input):
    first_stock_df = my_input.withColumn("index", monotonically_increasing_id())
    # ids are monotonically increasing but not consecutive, so order and take the ends
    first_stock_df = first_stock_df.orderBy(col("index")).limit(1000).drop("index")
    last_stock_df = my_input.withColumn("index", monotonically_increasing_id())
    last_stock_df = last_stock_df.orderBy(col("index").desc()).limit(1000).drop("index")
stock_df = first_stock_df.unionByName(last_stock_df)
return stock_df
# code to save as csv file
import csv

table_name = 'stock'
@transform(
output=Output(f"foundry/sample/{table_name}_sample_csv"),
my_input=Input(f"foundry/sample/{table_name}_sample"),
)
def my_compute_function(my_input, output):
df = my_input.dataframe()
with output.filesystem().open('stock.csv', 'w') as stream:
csv_writer = csv.writer(stream)
csv_writer.writerow(df.schema.names)
csv_writer.writerows(df.collect())
Your best strategy here would be to programmatically generate your transforms; you can also do a multi-output transform if you don't fancy creating that many transforms. Something like this (written live into the answer box, untested code, some syntax may be wrong):
# you can generate this programmatically
my_inputs = [
'/path/to/input/dataset1',
'/path/to/input/dataset2',
'/path/to/input/dataset3',
# ...
]
def make_transform(table_path):
    # a factory binds table_path so each generated transform keeps its own paths
    @transform_df(
        Output(table_path + '_out'),
        df=Input(table_path))
    def my_transform(df):
        # your logic here
        return df
    return my_transform

TRANSFORMS = [make_transform(table_path) for table_path in my_inputs]
If you need to read the table names rather than hard-coding them, you could use os.listdir or os.walk.
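For example (a sketch; the folder layout is hypothetical):
import os

# assuming a local folder whose entries are named after the tables
base_path = '/path/to/input'
my_inputs = [
    os.path.join(base_path, name)
    for name in os.listdir('table_names')
]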
I think the previous answer left out the part about exporting only the first and last N rows. If the table is converted to a dataframe df, then
dfoutput = df.head(N).append(df.tail(N))
or
dfoutput = df[:N].append(df[-N:])
I am using Odoo 10 and I have two models, OrderLine and Products.
OrderLine
class OrderLine(models.Model):
_name = 'order_line'
_description = 'Order Lines'
name = fields.Char()
    products = fields.Many2one('products', string='Products')
Products
class Products(models.Model):
_name = 'products'
_description = 'Products'
_sql_constraints = [
('uniq_poduct_code', 'unique(product_code)', 'Product Code already exists!')
]
name = fields.Char()
    product_code = fields.Char()
Now I am trying to create order lines from a CSV file, and in the CSV file the customer provides me the product code instead of the ID. How do I handle this so that we use the product code and the system automatically fills in the product associated with that product code?
Note:
product_code in the products table is unique, so there is no chance of duplication.
CSV template:
customer/account_number,customer/first_name,customer/last_name,customer/account_type,order/transaction_id,order/products/product_code,order/quantity,order/customer_id/id
Case 1: there are no products stored in the database with any of the product codes the customer is giving to you
If the product codes haven't been created yet in the database, you should have two CSV files (Products.csv and OrderLine.csv). The first one must have three columns (id, name and product_code). The second one must also have three columns (id, name and products/id). So you would only have to make up an XML ID under the id column in Products.csv and reference that XML ID from the respective row of the products/id column of the file OrderLine.csv.
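For example (hypothetical names and XML IDs):
Products.csv:
id,name,product_code
__import__.product_a,"Product A","AAA"
OrderLine.csv:
id,name,products/id
__import__.order_line_1,"OL A",__import__.product_a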
Case 2: the product codes the customer has given to you belong to existing products in the database
Now, the customer has given you product codes of products which already exist in the database. In this case you don't have to create a Products.csv file. You need to know the XML IDs of the products which have the product codes the customer gave to you. For that, you can go through the Odoo interface to the tree view of the model products (if this view doesn't exist, you must create it). Then select all records (click on the number 80 in the top right corner to show more records per page if you need to). Once all of them are selected, click on the More button and afterwards on Export. Select the columns product_code and name and proceed. Save the generated CSV file as Products.csv, for example. Open it and you'll see the XML IDs of all the exported products (if they didn't have XML IDs before, they will now: an export generates an XML ID for each exported record that doesn't have one). Now, I guess the customer has given you something like a file with columns Name of the order line, Product code, so replace the Product code column values with the respective XML IDs of the products you have just exported. In the end you should have one file to import, OrderLine.csv, with id, name and products/id columns.
Case 3: there are some product codes belonging to existing products stored in the database and there are some ones which still don't exist
In this case you will have to combine cases 1 and 2: first, export the products as described in case 2; then create a new file with the products whose codes don't exist yet, as described in case 1; then replace the product codes the customer gave you with the respective XML IDs, as described in case 2.
Note: this process will take a lot of time if you have thousands of records to import and you do the replacements manually. In that case it is practically mandatory to create a macro in your CSV editor which does the replacements (with search and replace). For example, with LibreOffice you can write macros in Python.
Example (Case 3)
The customer has given you a file of order lines, with two lines:
Name: OL A, Product Code: AAA
Name: OL B, Product Code: BBB
You export the products from the Odoo interface and you get a file with one line:
id,name,product_code
__export__.products_a,"Product A","AAA"
You look for matches of the product codes in both files, and do the replacements in a copy of the customer's file, so now you have this:
Name: OL A, Product Code: __export__.products_a
Name: OL B, Product Code: BBB
Then you create a new CSV Products.csv and put in it the products whose product codes don't exist yet:
id,name,product_code
__import__.products_b,"Product B","BBB"
Now apply the replacements again, comparing this new file with the one we had, and you will get this:
Name: OL A, Product Code: __export__.products_a
Name: OL B, Product Code: __import__.products_b
Convert this file to the right CSV format for Odoo, and save it as OrderLine.csv:
id,name,products/id
__import__.order_line_1,"OL A",__export__.products_a
__import__.order_line_2,"OL B",__import__.products_b
And finally, import the files, taking into account: import Products.csv before OrderLine.csv.
EDIT
I think it would be better to spend a bit of time programming a macro for your CSV editor (Excel, LibreOffice, OpenOffice or whatever), but if you're desperate and you need to do this only through Odoo, I came up with an awful workaround, but at least it should work too.
1. Create a new Char field named product_code in the order_line model (it will be there only temporarily).
2. Modify the ORM create method of this model:
@api.model
def create(self, vals):
product_id = False
product_code = vals.get('product_code', False)
if product_code:
product = self.env['products'].search([
('product_code', '=', product_code)
])
if product:
product_id = product[0].id
vals.update({
'products': product_id,
})
return super(OrderLine, self).create(vals)
3. Copy the file the customer sent you, rename the headers properly, and rename the column order/products/product_code to product_code. Import the CSV file. Each imported record will call the ORM create method of the order_line model.
After the import, the order lines in the database will be correctly related to the products.
When you've finished, remember to remove the code you added (and also drop the column product_code from the order_line table in the database, to remove junk).
Solution 1
You can create a transient model with the fields that you are using in the CSV, applying the idea of @forvas:
class ImportOrderLines(models.TransientModel):
_name = 'import.order.lines'
    product_code = fields.Char()
    @api.model
def create(self, vals):
product_id = False
product_code = vals.get('product_code', False)
if product_code:
product = self.env['products'].search([
('product_code', '=', product_code)
])
if product:
product_id = product[0].id
self.env['order_line'].create({
'products': product_id,
})
return False # you don't need to create the record in the transient model
You can go to the list view of this transient model and import like in any other model, with the base_import view.
Solution 2
You could create a wizard in order to import the CSV to create the Order Lines.
Check the following source code. You must assign the method import_order_lines to a button in the wizard.
import base64
import magic
import csv
from cStringIO import StringIO
import codecs
from openerp import models, fields, api, _
from openerp.exceptions import Warning
class ImportDefaultCodeWizard(models.TransientModel):
_name = 'import.default_code.wizard'
name = fields.Char(
string='File name',
)
file = fields.Binary(
string='ZIP file to import to Odoo',
required=True,
)
    @api.multi
def import_order_lines(self):
self.ensure_one()
content = base64.decodestring(self.file)
if codecs.BOM_UTF8 == content[:3]: # remove "byte order mark" (windows)
content = content[3:]
file_type = magic.from_buffer(content, mime=True)
if file_type == 'text/plain':
self._generate_order_line_from_csv(content)
return self._show_result_wizard()
raise Warning(
_('WRONG FILETYPE'),
_('You should send a CSV file')
)
def _show_result_wizard(self):
return {
'type': 'ir.actions.act_window',
'res_model': self._name,
'view_type': 'form',
'view_mode': 'form',
'target': 'new',
'context': self.env.context,
}
def _generate_order_line_from_csv(self, data):
try:
reader = csv.DictReader(StringIO(data))
except Exception:
raise Warning(
_('ERROR getting data from csv file'
'\nThere was some error trying to get the data from the csv file.'
'\nMake sure you are using the right format.'))
n = 1
for row in reader:
n += 1
self._validate_data(n, row)
default_code = row.get('default_code', False)
order_line = {
            'products': self._get_product_id(default_code),
# here you should add all the order line fields
}
try:
self.env['order_line'].create(order_line)
except Exception:
raise Warning(
_('The order line could not be created.'
'\nROW: %s') % n
)
def _validate_data(self, n, row):
csv_fields = [
'default_code',
]
""" here is where you should add the CSV fields in order to validate them
customer/account_number, customer/first_name, customer/last_name,
customer/account_type, order/transaction_id, order/products/product_code ,order/quantity, order/customer_id/id
"""
for key in row:
if key not in csv_fields:
raise Warning(_('ERROR\nThe file format is not right.'
'\nCheck the column names and the CSV format'
'\nKEY: %s' % key))
if row.get('default_code', False) == '':
raise Warning(
_('ERROR Validating data'),
_('The product code should be filled.'
'\nROW: %s') % n
)
    def _get_product_id(self, default_code):
        if default_code:
            product_obj = self.env['product.product'].search([
                ('default_code', '=', default_code),
            ])
            if len(product_obj) == 1:
                return product_obj.id
            raise Warning(
                _('ERROR Validating data'),
                _('No single product found for code: %s') % default_code
            )
        return False
You can search by product_code like so:
@api.model
def search_by_code(self, code):
    result = self.env['products'].search([('product_code', '=', code)])
    return result
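It can then be called like this (hypothetical code value):
product = self.env['products'].search_by_code('AAA')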
I have a function which returns JSON data as history from the Version model of reversion.models.
from django.http import HttpResponse
from reversion.models import Version
from django.contrib.admin.models import LogEntry
import json
def history_list(request):
history_list = Version.objects.all().order_by('-revision__date_created')
data = []
for i in history_list:
data.append({
'date_time': str(i.revision.date_created),
'user': str(i.revision.user),
'object': i.object_repr,
'field': i.revision.comment.split(' ')[-1],
'new_value_field': str(i.field_dict),
'type': i.content_type.name,
'comment': i.revision.comment
})
data_ser = json.dumps(data)
return HttpResponse(data_ser, content_type="application/json")
When I run the above snippet I get the output JSON as
[{"type": "fruits", "field": "colour", "object": "anyobject", "user": "anyuser", "new_value_field": "{'price': $23, 'weight': 2kgs, 'colour': 'red'}", "comment": "Changed colour."}]
From the function above,
'comment': i.revision.comment
returns JSON like "comment": "Changed colour.", where colour is the field, which I retrieve from the comment with
'field': i.revision.comment.split(' ')[-1]
But I assume getting the field name and value from field_dict is a better approach.
Problem: from the above JSON list I would like to extract the new value and the old value of the changed field; from new_value_field, only the value of colour.
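What I mean by using field_dict is something like this (a rough sketch, assuming two consecutive Version records of the same object):
def version_changes(newer, older):
    """Return {field: {'old': ..., 'new': ...}} for every field whose
    value differs between two versions of the same object."""
    changes = {}
    for field, new_value in newer.field_dict.items():
        old_value = older.field_dict.get(field)
        if old_value != new_value:
            changes[field] = {'old': old_value, 'new': new_value}
    return changes

# e.g. with versions = Version.objects.get_for_object(some_instance):
# version_changes(versions[0], versions[1])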
Getting the changed fields isn't as easy as checking the comment, as this can be overridden.
Django-reversion just takes care of storing each version, not comparing.
Your best option is to look at the django-reversion-compare module and its admin.py code.
The majority of the code in there is designed to produce a neat side-by-side HTML diff page, but the code should be able to be re-purposed to generate a list of changed fields per object (as there can be more than one changed field per version).
The code should* include a view-independent way to get the changed fields at some point, but this should get you started:
from django.db import models
from reversion_compare.admin import CompareObjects
from reversion.revisions import default_revision_manager
def changed_fields(obj, version1, version2):
"""
Create a generic html diff from the obj between version1 and version2:
A diff of every changes field values.
This method should be overwritten, to create a nice diff view
coordinated with the model.
"""
diff = []
# Create a list of all normal fields and append many-to-many fields
fields = [field for field in obj._meta.fields]
concrete_model = obj._meta.concrete_model
fields += concrete_model._meta.many_to_many
# This gathers the related reverse ForeignKey fields, so we can do ManyToOne compares
reverse_fields = []
# From: http://stackoverflow.com/questions/19512187/django-list-all-reverse-relations-of-a-model
changed_fields = []
for field_name in obj._meta.get_all_field_names():
f = getattr(
obj._meta.get_field_by_name(field_name)[0],
'field',
None
)
if isinstance(f, models.ForeignKey) and f not in fields:
reverse_fields.append(f.rel)
fields += reverse_fields
for field in fields:
try:
field_name = field.name
except:
# is a reverse FK field
field_name = field.field_name
is_reversed = field in reverse_fields
obj_compare = CompareObjects(field, field_name, obj, version1, version2, default_revision_manager, is_reversed)
if obj_compare.changed():
changed_fields.append(field)
return changed_fields
This can then be called like so:
changed_fields(MyModel, history_list_item1, history_list_item2)
Where history_list_item1 and history_list_item2 correspond to various actual Version items.
*: Said as a contributor, I'll get right on it.