Dataset 1 and dataset 2 have the same column names but different descriptions. In the transform for dataset 1, I want its dataset-specific descriptions to take precedence; in the transform for another dataset, I want that dataset's descriptions to take precedence instead. Is there a way to populate column descriptions that are dataset-specific?
For example, in the arguments of my_compute_function, is there a way to pass the name of the dataset whose descriptions should be given priority?
Column1, column description for dataset 1, {Dataset 1 name}
Column1, column description for dataset 2, {Dataset 2 name}
...
from transforms.api import transform, Input, Output

@transform(
    my_output=Output("/my/output"),
    my_input=Input("/my/input"),
)
def my_compute_function(my_input, my_output):
    my_output.write_dataframe(
        my_input.dataframe(),
        column_descriptions={
            "col_1": "col 1 description"
        },
        ???
    )
One way to do this is to provide an 'override dictionary' for all your datasets, in which dataset-specific descriptions take precedence.
i.e. you have:

from transforms.api import transform, Input, Output

GENERAL_DESCRIPTIONS = {
    "col_1": "my general description"
}

LOCAL_DESCRIPTIONS = {
    "/path/to/my/dataset": {
        "col_1": "my override description"
    }
}

@transform(
    my_output=Output("/path/to/my/dataset"),
    my_input=Input("/path/to/input"),
)
def my_compute_function(my_output, my_input):
    local_updates = LOCAL_DESCRIPTIONS.get(my_output.path, {})
    local_descriptions = GENERAL_DESCRIPTIONS.copy()
    local_descriptions.update(local_updates)
    my_output.write_dataframe(
        my_input.dataframe(),
        column_descriptions=local_descriptions
    )
This allows you to keep GENERAL_DESCRIPTIONS at the root of your module and override it at the top of each transformation's .py file with your 'local' descriptions. You could even place the 'local' descriptions above a group of transformations so you don't have to inspect each and every file to specify overrides.
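For instance, a minimal sketch of that layout (the descriptions.py module name and the helper function are illustrative, not part of the transforms API):

# descriptions.py -- hypothetical shared module at the root of your repo
GENERAL_DESCRIPTIONS = {
    "col_1": "my general description",
}

def descriptions_for(dataset_path, local_overrides):
    """Merge the general descriptions with any overrides for this dataset."""
    merged = GENERAL_DESCRIPTIONS.copy()
    merged.update(local_overrides.get(dataset_path, {}))
    return merged

Each transform file would then just pass column_descriptions=descriptions_for(my_output.path, LOCAL_DESCRIPTIONS) to write_dataframe.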
The most granular way to update the description dictionary is simply:
...
GENERAL_DESCRIPTIONS = {
    "col_1": "my general description"
}

LOCAL_DESCRIPTIONS = {
    "col_1": "my override description"
}
...

def my_compute_function(my_output, my_input):
    local_descriptions = GENERAL_DESCRIPTIONS.copy()
    local_descriptions.update(LOCAL_DESCRIPTIONS)
    ...
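Equivalently, on Python 3.5+ the copy-and-update pair can be collapsed into a single dict literal, with the right-hand entries winning:

local_descriptions = {**GENERAL_DESCRIPTIONS, **LOCAL_DESCRIPTIONS}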
Related
The long title also contains a mini-example, because I couldn't explain well what I'm trying to do. Nonetheless, the similar-questions window led me to various implementations. But since I have read multiple times that this is bad design, I would like to ask whether what I'm trying to do is bad design, rather than asking how to do it. For this reason I will try to explain my use case with minimal functional code.
Suppose I have two classes, each with its own parameters:
class MyClass1:
    def __init__(self, param1=1, param2=2):
        self.param1 = param1
        self.param2 = param2

class MyClass2:
    def __init__(self, param3=3, param4=4):
        self.param3 = param3
        self.param4 = param4
I want to print param1...param4 as strings (i.e. "param1"..."param4") and not their values (i.e. 1...4).
Why? Two reasons in my case:
1. I have a GUI where the user is asked to select one of the class types (MyClass1, MyClass2) and is then asked to insert values for the parameters of that class. The GUI must show the parameter names ("param1", "param2" if MyClass1 was chosen) as labels next to the Entry widgets that collect the values. Now suppose the number of classes and parameters is very high, like 10 classes and 20 parameters per class. To minimize the written code and keep it flexible (adding or removing parameters from classes without modifying the GUI code), I would like to cycle over all the parameters of the chosen class and create the corresponding widget for each one, so I need the paramX names in string form. The real application I'm working on is even more complex (parameters sit inside other class instances), but I used the simplest example. One solution would be to define every parameter as an object where param1.name = "param1" and param1.value = 1; the GUI would then print param1.name. But this leads to a specific problem in my implementation, which is reason 2:
2. MyClass1...MyClassN will at some point be printed to JSON. The JSON will be a huge file, and since it is a complex tree (the example is simple), I want to keep it as simple as possible. To explain why I don't like the solution above, suppose this situation:
class MyClass1:
    def __init__(self, param1, param2, combinations=[]):
        self.param1 = param1
        self.param2 = param2
        self.combinations = combinations
Suppose param1 and param2 are now lists of variable size, and combinations is a list in which each element is built from one combination of param1 and param2 and generates an output from some sort of calculation. Each element of the combinations list is a SingleCombination object, for example (metacode):

param1 = [1, 2]
param2 = [5, 6]
SingleCombination.param1 = 1
SingleCombination.param2 = 5
SingleCombination.output = 1 * 5
MyInst1.combinations.append(SingleCombination)

In my case I will further encapsulate param1 and param2 in an object called parameters, so every combination will have a nice tree with only two objects, parameters and output, and expanding the parameters node will show all the parameters with their values.
If I use jsonpickle to generate JSON from the situation above, it is displayed nicely, since each node's name is the name of the variable ("param1", "param2" appear as strings in the JSON). But if I do the trick from the end of situation (1), turning each paramN into an object with paramN.name and paramN.value, the JSON tree becomes ugly and, above all, huge, because with a large number of combinations every paramN contains two sub-elements. I wrote up the situation and displayed it with a JSON viewer; see the attached image.
I could pre-process the data structure before creating the JSON, but the problem is that I use the JSON to recreate the data structure in another session of the program, so I need all the pieces of the data structure to be in the JSON.
So, given my requirements, it seems the workaround that avoids printing the variable names has side effects on the JSON visualization that I don't know how to solve without changing the logic of my program...
If you use dataclasses, getting the field names is pretty straightforward:
from dataclasses import dataclass, fields

@dataclass
class MyClass1:
    first: int = 4

>>> fields(MyClass1)
(Field(name='first',type=<class 'int'>,default=4,...),)
This way, you can iterate over the class fields and ask your user to fill them. Note that each field has a type, which you could use e.g. to ask the user for several values, as in your example.
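For example, a minimal sketch of that iteration (input() stands in for whatever GUI widget you use, and it assumes the annotations are real type objects, which holds as written but not under `from __future__ import annotations`):

from dataclasses import dataclass, fields

@dataclass
class MyClass1:
    param1: int = 1
    param2: int = 2

kwargs = {}
for f in fields(MyClass1):
    # f.name is the "param1"-style string usable as a widget label
    raw = input(f"{f.name} ({f.type.__name__}): ")
    kwargs[f.name] = f.type(raw)  # naive cast; assumes simple scalar types

obj = MyClass1(**kwargs)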
You could add functions to extract the param names programmatically from the class (_show_inputs below) and the values from instances (_json below):
import json
from dataclasses import fields

def blossom(cls):
    """Decorate a class with `_json` (bound method) and `_show_inputs` (classmethod)."""
    def _json(self):
        return json.dumps(self, cls=DataClassPropEncoder)
    def _show_inputs(cls):
        return {
            field.name: field.type.__name__
            for field in fields(cls)
        }
    cls._json = _json
    cls._show_inputs = classmethod(_show_inputs)
    return cls
NOTE 1: There's actually no need to decorate the classes with blossom. You could just use its internal functions programmatically.
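For instance, the undecorated equivalent (a sketch; note that plain asdict skips property-based values, which is exactly what the custom encoder below adds):

from dataclasses import asdict, dataclass, fields
import json

@dataclass
class MyClass1:
    param1: int = 1
    param2: int = 2

inputs = {f.name: f.type.__name__ for f in fields(MyClass1)}
print(inputs)                               # {'param1': 'int', 'param2': 'int'}
print(json.dumps(asdict(MyClass1(3, 4))))   # {"param1": 3, "param2": 4}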
Using a custom json encoder to dump the dataclass objects, including properties:
import json
from dataclasses import asdict, is_dataclass

class DataClassPropEncoder(json.JSONEncoder):  # https://stackoverflow.com/a/51286749/7814595
    def default(self, o):
        if is_dataclass(o):
            cls = type(o)
            # inject instance properties
            props = {
                name: getattr(o, name)
                for name, value in cls.__dict__.items()
                if isinstance(value, property)
            }
            return {
                **props,
                **asdict(o)
            }
        return super().default(o)
Finally, wrap the computations inside properties so they are serialized as well when using the decorated class. Full code example:
from dataclasses import asdict
from dataclasses import dataclass
from dataclasses import fields
from dataclasses import is_dataclass
import json
from itertools import product
from typing import List

class DataClassPropEncoder(json.JSONEncoder):  # https://stackoverflow.com/a/51286749/7814595
    def default(self, o):
        if is_dataclass(o):
            cls = type(o)
            props = {
                name: getattr(o, name)
                for name, value in cls.__dict__.items()
                if isinstance(value, property)
            }
            return {
                **props,
                **asdict(o)
            }
        return super().default(o)

def blossom(cls):
    def _json(self):
        return json.dumps(self, cls=DataClassPropEncoder)
    def _show_inputs(cls):
        return {
            field.name: field.type.__name__
            for field in fields(cls)
        }
    cls._json = _json
    cls._show_inputs = classmethod(_show_inputs)
    return cls

@blossom
@dataclass
class MyClass1:
    param1: int
    param2: int

@blossom
@dataclass
class MyClass2:
    param3: List[str]
    param4: List[int]

    def _compute_single(self, values):  # TODO: implement this
        return values[0] * values[1]

    @property
    def combinations(self):
        # TODO: cache if used more than once
        # TODO: combinations might explode
        field_names = []
        field_values = []
        cls = type(self)
        for field in fields(cls):
            field_names.append(field.name)
            field_values.append(getattr(self, field.name))
        results = []
        for values in product(*field_values):
            result = {
                **{
                    field_names[idx]: value
                    for idx, value in enumerate(values)
                },
                "output": self._compute_single(values)
            }
            results.append(result)
        return results
>>> print(f"MyClass1:\n{MyClass1._show_inputs()}")
MyClass1:
{'param1': 'int', 'param2': 'int'}
>>> print(f"MyClass2:\n{MyClass2._show_inputs()}")
MyClass2:
{'param3': 'List', 'param4': 'List'}
>>> obj_1 = MyClass1(3,4)
>>> print(f"obj_1:\n{obj_1._json()}")
obj_1:
{"param1": 3, "param2": 4}
>>> obj_2 = MyClass2(["first", "second"], [4, 2])
>>> print(f"obj_2:\n{obj_2._json()}")
obj_2:
{"combinations": [{"param3": "first", "param4": 4, "output": "firstfirstfirstfirst"}, {"param3": "first", "param4": 2, "output": "firstfirst"}, {"param3": "second", "param4": 4, "output": "secondsecondsecondsecond"}, {"param3": "second", "param4": 2, "output": "secondsecond"}], "param3": ["first", "second"], "param4": [4, 2]}
NOTE 2: If you need to perform several computations per class, it might be a good idea to abstract away the pattern in the combinations property to avoid repeating code.
NOTE 3: If you need to access the properties several times and not just once, you might want to consider caching their values to avoid re-computation.
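One standard-library option for that caching (a sketch; functools.cached_property needs Python 3.8+, and since it is not a property instance, the DataClassPropEncoder above would need to check for it too if you want the cached value serialized):

from dataclasses import dataclass
from functools import cached_property

@dataclass
class Expensive:
    n: int

    @cached_property
    def squares(self):
        print("computing...")  # runs only on the first access
        return [i * i for i in range(self.n)]

e = Expensive(5)
e.squares  # prints "computing...", returns [0, 1, 4, 9, 16]
e.squares  # second access is served from the per-instance cache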
Once you have an instance of MyClass1 / MyClass2, you can call vars() (or vars().keys()) on it, and it will give you the attribute names as strings. Unlike dir(), it will not show all the built-in attributes/methods starting with __.
class MyClass2:
    def __init__(self, param3=3, param4=4):
        self.param3 = param3
        self.param4 = param4

instance_of_myclass2 = MyClass2(param3="what", param4="ever")

print(vars(instance_of_myclass2))
{'param3': 'what', 'param4': 'ever'}

print(vars(instance_of_myclass2).keys())
dict_keys(['param3', 'param4'])

dir(instance_of_myclass2)
['__class__', '__delattr__', '__dict__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__le__', '__lt__', '__module__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', 'param3', 'param4']
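vars() needs an instance to exist first. If you want the parameter names straight from the class (e.g. to build the GUI before the user has entered any values), a sketch using the standard-library inspect module:

import inspect

class MyClass2:
    def __init__(self, param3=3, param4=4):
        self.param3 = param3
        self.param4 = param4

params = [name for name in inspect.signature(MyClass2.__init__).parameters
          if name != 'self']
print(params)  # ['param3', 'param4']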
I have a list of 120 tables, and I want to save a sample of the first 1000 and last 1000 rows from each table into an individual CSV file per table.
How can this be done in Code Repositories or Code Authoring?
The following code saves one table to CSV. Can anyone help me loop through a list of tables from a project folder and create an individual CSV file for each table?
from transforms.api import transform, Input, Output

@transform(
    my_input=Input('/path/to/input/dataset'),
    my_output=Output('/path/to/output/dataset')
)
def compute_function(my_input, my_output):
    my_output.write_dataframe(
        my_input.dataframe(),
        output_format="csv",
        options={
            "compression": "gzip"
        }
    )
Pseudocode:

list_of_tables = [table1, table2, table3, ..., table120]
for table in list_of_tables:
    table = table.limit(1000)
    table.write_dataframe(table.dataframe(), output_format="csv",
                          options={"compression": "gzip"})
I was able to get it working for one table; how can I just loop through a list of tables and generate it?
The code for one table:
# to get the first and last 1000 rows
from transforms.api import transform_df, Input, Output
from pyspark.sql.functions import monotonically_increasing_id
from pyspark.sql.functions import col

table_name = 'stock'

@transform_df(
    output=Output(f"foundry/sample/{table_name}_sample"),
    my_input=Input(f"foundry/input/{table_name}"),
)
def compute_first_last_1000(my_input):
    first_stock_df = my_input.withColumn("index", monotonically_increasing_id())
    first_stock_df = first_stock_df.orderBy("index").filter(col("index") < 1000).drop("index")
    last_stock_df = my_input.withColumn("index", monotonically_increasing_id())
    # order descending so this keeps the last 1000 rows rather than the first 1000 again
    last_stock_df = last_stock_df.orderBy(col("index").desc()).limit(1000).drop("index")
    stock_df = first_stock_df.unionByName(last_stock_df)
    return stock_df
# code to save as csv file
import csv
from transforms.api import transform, Input, Output

table_name = 'stock'

@transform(
    output=Output(f"foundry/sample/{table_name}_sample_csv"),
    my_input=Input(f"foundry/sample/{table_name}_sample"),
)
def my_compute_function(my_input, output):
    df = my_input.dataframe()
    with output.filesystem().open('stock.csv', 'w') as stream:
        csv_writer = csv.writer(stream)
        csv_writer.writerow(df.schema.names)
        csv_writer.writerows(df.collect())
Your best strategy here would be to generate your transforms programmatically; you can also do a multi-output transform if you don't fancy creating 120 separate transforms. Something like this (written live into the answer box, untested code, some syntax may be wrong):
# you can generate this list programmatically
from transforms.api import transform_df, Input, Output

my_inputs = [
    '/path/to/input/dataset1',
    '/path/to/input/dataset2',
    '/path/to/input/dataset3',
    # ...
]

def make_transform(table_path):
    # a factory binds table_path for each generated transform, instead of
    # redefining (and overwriting) a single function inside the loop
    @transform_df(
        Output(table_path + '_out'),
        df=Input(table_path))
    def my_transform(df):
        # your logic here
        return df
    return my_transform

# collect the generated transforms so the pipeline can register them
TRANSFORMS = [make_transform(table_path) for table_path in my_inputs]
If you need to read the table names rather than hard-coding them, you could use os.listdir or os.walk.
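For example (a sketch only; os.listdir sees the local filesystem of the build, so this assumes the table names exist as entries in some checked-in folder, which may or may not match your setup):

import os

CONFIG_DIR = "table_configs"  # hypothetical folder with one entry per table
my_inputs = [
    "/path/to/input/" + name
    for name in sorted(os.listdir(CONFIG_DIR))
]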
I think the previous answer left out the part about exporting only the first and last N rows. If the table is converted to a dataframe df, then

dfoutput = df.head(N).append(df.tail(N))

or

dfoutput = df[:N].append(df[-N:])
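Note these snippets assume a pandas DataFrame; since DataFrame.append was deprecated and later removed, a pd.concat sketch of the same idea:

import pandas as pd

df = pd.DataFrame({"a": range(10)})
N = 2
dfoutput = pd.concat([df.head(N), df.tail(N)])  # first N and last N rows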
Below is code to convert a CSV file to JSON format in Python.
I have two fields, 'recommendation' and 'rating'. Based on the recommendation value I need to set the value of the rating field: if recommendation is 1 then rating = 1, and vice versa. With the answer I got, I'm getting output for only one record entry instead of all the records; I think it's being overridden. Do I need to create a separate list and append each record entry to it to get output for all records?
Here's the updated code:
import csv
import json
from collections import OrderedDict

def main(input_file):
    csv_rows = []
    with open(input_file, 'r') as csvfile:
        reader = csv.DictReader(csvfile, delimiter='|')
        title = reader.fieldnames
        for row in reader:
            entry = OrderedDict()
            for field in title:
                entry[field] = row[field]
            # problem line: this consumes the rest of the reader on the first
            # pass through the loop, so only one entry ends up in csv_rows
            [c.update({'RATING': c['RECOMMENDATIONS']}) for c in reader]
            csv_rows.append(entry)
    with open(json_file, 'w') as f:
        json.dump(csv_rows, f, sort_keys=True, indent=4, ensure_ascii=False)
        f.write('\n')
I want to create a nested format like the one below:

"rating": {
    "user_rating": {
        "rating": 1
    },
    "recommended": {
        "rating": 1
    }
}
After you've read the file in using the csv.DictReader, you'll have a list of dicts. Since you want to set the values now, it's a simple dict manipulation. There are several ways, of which one is:

[c.update({'rating': c['recommendation']}) for c in read_csvDictReader]

Hope that helps.
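Applied to your loop, a minimal sketch (the RATING/RECOMMENDATIONS field names and the '|' delimiter are taken from your snippet; the key point is to set the value per row, inside the loop):

import csv
import json
from collections import OrderedDict

def main(input_file, json_file):
    csv_rows = []
    with open(input_file, 'r') as csvfile:
        reader = csv.DictReader(csvfile, delimiter='|')
        for row in reader:
            entry = OrderedDict(row)
            entry['RATING'] = entry['RECOMMENDATIONS']  # one assignment per record
            csv_rows.append(entry)
    with open(json_file, 'w') as f:
        json.dump(csv_rows, f, sort_keys=True, indent=4, ensure_ascii=False)
        f.write('\n')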
I have a function which returns JSON data as history from Version of reversion.models.
from django.http import HttpResponse
from reversion.models import Version
from django.contrib.admin.models import LogEntry
import json

def history_list(request):
    history_list = Version.objects.all().order_by('-revision__date_created')
    data = []
    for i in history_list:
        data.append({
            'date_time': str(i.revision.date_created),
            'user': str(i.revision.user),
            'object': i.object_repr,
            'field': i.revision.comment.split(' ')[-1],
            'new_value_field': str(i.field_dict),
            'type': i.content_type.name,
            'comment': i.revision.comment
        })
    data_ser = json.dumps(data)
    return HttpResponse(data_ser, content_type="application/json")
When I run the above snippet, I get the output JSON as
[{"type": "fruits", "field": "colour", "object": "anyobject", "user": "anyuser", "new_value_field": "{'price': $23, 'weight': 2kgs, 'colour': 'red'}", "comment": "Changed colour."}]
From the function above,

'comment': i.revision.comment

returns the JSON as "comment": "Changed colour.", and colour is the field, which I retrieve from the comment with

'field': i.revision.comment.split(' ')[-1]

But I assume getting the field name and value from field_dict is a better approach.
Problem: from the above JSON list I would like to filter out new_value_field and old_value, and within new_value_field, only the value of colour.
Getting the changed fields isn't as easy as checking the comment, as this can be overridden.
Django-reversion just takes care of storing each version, not comparing.
Your best option is to look at the django-reversion-compare module and its admin.py code.
The majority of the code in there is designed to produce a neat side-by-side HTML diff page, but the code should be able to be re-purposed to generate a list of changed fields per object (as there can be more than one changed field per version).
The code should* include a view independent way to get the changed fields at some point, but this should get you started:
from django.db import models
from reversion_compare.admin import CompareObjects
from reversion.revisions import default_revision_manager

def changed_fields(obj, version1, version2):
    """
    Return the list of fields on obj whose values changed
    between version1 and version2.
    """
    # Create a list of all normal fields and append many-to-many fields
    fields = [field for field in obj._meta.fields]
    concrete_model = obj._meta.concrete_model
    fields += concrete_model._meta.many_to_many
    # This gathers the related reverse ForeignKey fields, so we can do ManyToOne compares
    reverse_fields = []
    # From: http://stackoverflow.com/questions/19512187/django-list-all-reverse-relations-of-a-model
    changed_fields = []
    for field_name in obj._meta.get_all_field_names():
        f = getattr(
            obj._meta.get_field_by_name(field_name)[0],
            'field',
            None
        )
        if isinstance(f, models.ForeignKey) and f not in fields:
            reverse_fields.append(f.rel)
    fields += reverse_fields
    for field in fields:
        try:
            field_name = field.name
        except AttributeError:
            # is a reverse FK field
            field_name = field.field_name
        is_reversed = field in reverse_fields
        obj_compare = CompareObjects(field, field_name, obj, version1, version2, default_revision_manager, is_reversed)
        if obj_compare.changed():
            changed_fields.append(field)
    return changed_fields
This can then be called like so:

changed_fields(MyModel, history_list_item1, history_list_item2)

where history_list_item1 and history_list_item2 correspond to actual Version items.
*: Said as a contributor, I'll get right on it.
Table structure:

db.define_table('parent',
    Field('name'), format='%(name)s')

db.define_table('children',
    Field('name'),
    Field('mother', 'reference parent'),
    Field('father', 'reference parent'))

db.children.mother.requires = IS_IN_DB(db, db.parent.id, '%(name)s')
db.children.father.requires = IS_IN_DB(db, db.parent.id, '%(name)s')
Controller:

grid = SQLFORM.grid(db.children, orderby=[db.children.id],
                    csv=True,
                    fields=[db.children.id, db.children.name, db.children.mother, db.children.father])
return dict(grid=grid)

Here the grid shows the proper values, i.e. the names of the mother and father from the parent table.
But when I try to export it via the CSV link, the resulting spreadsheet shows the ids, not the names of mother and father.
Please help!
The CSV download just gives you the raw database values without first applying each field's represent attribute. If you want the "represented" values of each field, you have two options. First, you can choose the TSV (tab-separated-values) download instead of CSV. Second, you can define a custom export class:
import cStringIO

class CSVExporter(object):
    file_ext = "csv"
    content_type = "text/csv"

    def __init__(self, rows):
        self.rows = rows

    def export(self):
        if self.rows:
            s = cStringIO.StringIO()
            self.rows.export_to_csv_file(s, represent=True)
            return s.getvalue()
        else:
            return ''

grid = SQLFORM.grid(db.mytable, exportclasses=dict(csv=(CSVExporter, 'CSV')))
The exportclasses argument is a dictionary of custom download types that can be used to override existing types or add new ones. Each item is a tuple including the exporter class and the label to be used for the download link in the UI.
We should probably add this as an option.
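The same pattern extends to other formats. For example, a sketch of a JSON download (assuming the DAL's Rows.as_list; default=str papers over dates and other non-JSON types):

import json

class JSONExporter(object):
    file_ext = "json"
    content_type = "application/json"

    def __init__(self, rows):
        self.rows = rows

    def export(self):
        if self.rows:
            return json.dumps(self.rows.as_list(), default=str)
        return '[]'

grid = SQLFORM.grid(db.mytable,
                    exportclasses=dict(json=(JSONExporter, 'JSON')))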