Applying a persistent filter to a periodically updated QTableView in Python

I have a custom model for a QTableView displaying data that I want to filter on a column.
The QTableView's updateAccountTable function is called periodically from a timer to show real-time data.
I added a QLineEdit to capture my filter value, and I apply my custom QSortFilterProxyModel to it.
I can see the data is filtered in the QTableView, until the next refresh, where the list is unfiltered again.
This obviously comes from the textChanged signal of the QLineEdit, which is not persistent, but I do not see how to make my QSortFilterProxyModel persist after the QTableView refresh.
Any idea how to do that?
Cheers
Stephane
Part of my code is:
# AccountTableView sorting override function
class mysortingproxy(QSortFilterProxyModel):
    def __init__(self, parent=None):
        super(mysortingproxy, self).__init__(parent)

    def lessThan(self, left: QModelIndex, right: QModelIndex) -> bool:
        leftData = self.sourceModel().data(left, Qt.UserRole)
        rightData = self.sourceModel().data(right, Qt.UserRole)
        return leftData < rightData


class MainUi(QMainWindow):
    # snip...

    def updateAccountTable(self):
        # Account table
        self.accountTableModel = AccountTableModel(self.data, self.scalping)
        proxyModel = mysortingproxy()  # without the sorting override: proxyModel = QSortFilterProxyModel()
        proxyModel.setFilterKeyColumn(0)  # first column
        proxyModel.setSourceModel(self.accountTableModel)
        self.tableView_Account.setModel(proxyModel)
        self.tableView_Account.setSortingEnabled(True)
        self.tableView_Account.verticalHeader().setVisible(False)
        # filter proxy model
        self.lineEdit_Find.textChanged.connect(proxyModel.setFilterRegExp)

Found it!
This only required reading the filter field each time before displaying my data and then applying the filter again.
The code became:
def updateAccountTable(self):
    # Account table
    self.accountTableModel = AccountTableModel(self.data, self.scalping)
    proxyModel = mysortingproxy()  # without the sorting override: proxyModel = QSortFilterProxyModel()
    proxyModel.setFilterKeyColumn(0)  # first column
    proxyModel.setSourceModel(self.accountTableModel)
    self.tableView_Account.setModel(proxyModel)
    self.tableView_Account.setSortingEnabled(True)
    self.tableView_Account.verticalHeader().setVisible(False)
    # filter proxy model
    self.lineEdit_Find.textChanged.connect(proxyModel.setFilterRegExp)
    self.crypto_find = self.lineEdit_Find.text()
    proxyModel.setFilterRegExp(self.crypto_find.upper())
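A variation worth considering, sketched below under the same widget and model names as above (the setupAccountTable helper is a hypothetical addition): create the proxy model once and only swap the source model on each timer tick, so the proxy, its filter, and the textChanged connection all survive every refresh without being re-applied.

def setupAccountTable(self):
    # called once, e.g. at the end of __init__
    self.accountProxyModel = mysortingproxy()
    self.accountProxyModel.setFilterKeyColumn(0)  # first column
    self.tableView_Account.setModel(self.accountProxyModel)
    self.tableView_Account.setSortingEnabled(True)
    self.tableView_Account.verticalHeader().setVisible(False)
    # connect the filter field once; the connection stays valid across refreshes
    self.lineEdit_Find.textChanged.connect(
        lambda text: self.accountProxyModel.setFilterRegExp(text.upper()))

def updateAccountTable(self):
    # timer slot: only the source model is replaced, the proxy keeps its filter
    self.accountTableModel = AccountTableModel(self.data, self.scalping)
    self.accountProxyModel.setSourceModel(self.accountTableModel)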

Related

Dash: error loading dependencies when adding an editable table

I'm trying to add an editable table to my Dash app and use the edited data in a following step.
First, I have a callback that is triggered by a button and receives a file; the function processes the file, and the output is the editable table plus some JSON data. The table shows on the screen.
Then, I want the user to make the necessary changes in the table and click another button.
The click triggers another callback that should receive the edited data from the table plus the JSON data from the previous callback, and output another JSON payload.
However, when I test the function, I get "error loading dependencies".
I'm using a very old version of Dash (0.43) and at the moment I can't update (many functions were deprecated and I can't change them all now).
import json

import dash
import dash_table
from dash.dependencies import Input, Output, State

@dash_application.callback(
    [Output('journey-data-store-raw', 'data'),
     Output('new-lable-table', 'children'),  # this goes to a div in the layout file
     Output('source-table-updated', 'data')],
    [Input('parser-process', 'n_clicks')],
    [State('upload-data', 'contents'),
     State('upload-data', 'filename'),
     State('decimal-selection', 'value')]
)
def source_data(_, content, name, decimal_selection):
    """Show the sources to allow the user to edit them"""
    if name:
        clean_data = get_clean_data(content, name, decimal_selection)
        clean_data.to_csv('clean_data_test.csv')
        return df_to_json(clean_data), dash_table.DataTable(**get_sources(clean_data)), json.dumps({'updated': True})
    else:
        print('deu ruim')  # "it went wrong"
        raise dash.exceptions.PreventUpdate
@dash_application.callback(
    [Output('journey-data-store', 'data'),
     Output('color-store', 'data')],
    [Input('run-analysis', 'n_clicks')],
    [State('sources-table', 'data'),
     State('journey-data-store-raw', 'data')]
)
def store_data(_, new_table, raw_data):
    """Stores the datafile and colors in a Store object"""
    i = 0
    for row in new_table:
        if row['new source labels'] != '':
            i = 1
            break
    if i > 0:
        # call function to parse the file
        # colors = get_colors(new_data)
        # return df_to_json(clean_data), json.dumps(colors)
        # the return is only a test; I'll develop the function later, I just want to
        # make the callback work
        return raw_data, json.dumps(get_colors(df_from_json(raw_data)))
    else:
        return raw_data, json.dumps(get_colors(df_from_json(raw_data)))
I tried to exclude the button and the sources-table from the callback, so it would trigger when the first callback is finished (when journey-data-store-raw is available), but that is not happening either.
I also tried running it in a private window.

How to receive data from a method of another class in another module

I am programming a case manager (administration system). To build it constructively, I program in separate modules to keep an overview. Some modules contain a class in which I build a small search engine, including its own functions. The main program is the case form itself. Obviously, when the search engine finds an entry, it should fill in the case form. I am able to call the search engine (and the search engine works too); however, I don't know how to return the results back to the main program/case form/module.
To give you a picture, I have added an image of the GUI, so you can see the case form and the search engine (which is a different module and class, inheriting tk.Toplevel).
The relevant code (case_form/main program):
import ReferenceSearch as rs  # Own module

def search_ref(self):
    # Function to call the search engine
    search_engine = rs.ReferenceSearch(self, self.csv_file.get(), self.references_list)
    # Receive data from search_engine and show it in case_form
    self.title_var.set(search_engine)  # DOES NOT WORK because search_engine is the actual
                                       # engine, not the data returned from its method
Relevant code in ReferenceSearch module:
class ReferenceSearch(tk.Toplevel):
    def __init__(self, parent, csv_file, references_list=[]):
        super().__init__()
        self.parent = parent
        self.csv_file = csv_file
        self.references_list = references_list
        self.ref_search_entry = ttk.Entry(self.search_frame)
        self.search_but = tk.Button(self.search_frame,
                                    text=" Search ",
                                    command=lambda: self.search_for_ref(self.ref_search_entry.get()))

    def search_for_ref(self, reference, csv_file="Cases.csv"):
        # Function to read a specific entry by reference
        if reference in self.references_list:
            with open(csv_file, "r", newline="") as file:
                reader = csv.DictReader(file, delimiter="|")
                for entry in reader:
                    if reference == entry["Reference"]:
                        data = entry["Title"]  # By example
                        return data
How do I receive the data from this method of the ReferenceSearch class and use it in the main module, the case_form?
Keep in mind that the ReferenceSearch module calls this function when the search button is pressed (not the case_form module). However, the data is needed in the case_form module.
Change the ReferenceSearch module contents to:
class ReferenceSearch(tk.Toplevel):
    def __init__(self, parent, csv_file, references_list=[]):
        super().__init__()
        self.data = ""
        self.parent = parent
        self.csv_file = csv_file
        self.references_list = references_list
        self.ref_search_entry = ttk.Entry(self.search_frame)
        self.search_but = tk.Button(self.search_frame,
                                    text=" Search ",
                                    command=lambda: self.search_for_ref(self.ref_search_entry.get()))

    def search_for_ref(self, reference, csv_file="Cases.csv"):
        # Function to read a specific entry by reference
        if reference in self.references_list:
            with open(csv_file, "r", newline="") as file:
                reader = csv.DictReader(file, delimiter="|")
                for entry in reader:
                    if reference == entry["Reference"]:
                        data = entry["Title"]  # By example
                        self.parent.title_var.set(data)
and case_form contents to:
import ReferenceSearch as rs

def search_ref(self):
    # Function to call the search engine; the engine writes the result
    # back into the form via self.parent.title_var
    search_engine = rs.ReferenceSearch(self, self.csv_file.get(), self.references_list)
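As a design note, a variation that keeps ReferenceSearch independent of the form's attribute names is to hand it a callback instead of reaching into the parent. The on_result parameter below is a hypothetical addition, not part of the original code; a minimal sketch under that assumption:

import csv
import tkinter as tk
from tkinter import ttk

class ReferenceSearch(tk.Toplevel):
    def __init__(self, parent, csv_file, references_list=None, on_result=None):
        super().__init__()
        self.parent = parent
        self.csv_file = csv_file
        self.references_list = references_list or []
        self.on_result = on_result  # callable supplied by the case form
        self.search_frame = ttk.Frame(self)
        self.search_frame.pack()
        self.ref_search_entry = ttk.Entry(self.search_frame)
        self.ref_search_entry.pack()
        self.search_but = tk.Button(self.search_frame,
                                    text=" Search ",
                                    command=lambda: self.search_for_ref(self.ref_search_entry.get()))
        self.search_but.pack()

    def search_for_ref(self, reference, csv_file="Cases.csv"):
        # Look up the reference and hand the result to the caller's callback
        if reference in self.references_list:
            with open(csv_file, "r", newline="") as file:
                reader = csv.DictReader(file, delimiter="|")
                for entry in reader:
                    if reference == entry["Reference"]:
                        if self.on_result is not None:
                            self.on_result(entry["Title"])
                        return

# In case_form, the form decides what happens with the result:
def search_ref(self):
    rs.ReferenceSearch(self, self.csv_file.get(), self.references_list,
                       on_result=self.title_var.set)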

Why must I use DataParallel when testing?

Train on the GPU, num_gpus is set to 1:
device_ids = list(range(num_gpus))
model = NestedUNet(opt.num_channel, 2).to(device)
model = nn.DataParallel(model, device_ids=device_ids)
Test on the CPU:
model = NestedUNet_Purn2(opt.num_channel, 2).to(dev)
device_ids = list(range(num_gpus))
model = torch.nn.DataParallel(model, device_ids=device_ids)
model_old = torch.load(path, map_location=dev)
pretrained_dict = model_old.state_dict()
model_dict = model.state_dict()
pretrained_dict = {k: v for k, v in pretrained_dict.items() if k in model_dict}
model_dict.update(pretrained_dict)
model.load_state_dict(model_dict)
This will get the correct result, but when I delete:
device_ids = list(range(num_gpus))
model = torch.nn.DataParallel(model, device_ids=device_ids)
the result is wrong.
nn.DataParallel wraps the model, where the actual model is assigned to the module attribute. That also means that the keys in the state dict have a module. prefix.
Let's look at a very simplified version with just one convolution to see the difference:
class NestedUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 16, kernel_size=3, padding=1)

model = NestedUNet()
model.state_dict().keys()  # => odict_keys(['conv1.weight', 'conv1.bias'])

# Wrap the model in DataParallel
model_dp = nn.DataParallel(model, device_ids=range(num_gpus))
model_dp.state_dict().keys()  # => odict_keys(['module.conv1.weight', 'module.conv1.bias'])
The state dict you saved with nn.DataParallel does not line up with the regular model's state dict. You are merging the current state dict with the loaded one, which means the loaded state is ignored, because the model does not have any attributes matching those keys; instead you are left with the randomly initialised model.
To avoid making that mistake, you shouldn't merge the state dicts, but rather apply the loaded one directly to the model, in which case there will be an error if the keys don't match:
RuntimeError: Error(s) in loading state_dict for NestedUNet:
Missing key(s) in state_dict: "conv1.weight", "conv1.bias".
Unexpected key(s) in state_dict: "module.conv1.weight", "module.conv1.bias".
To make the state dict that you have saved compatible, you can strip off the module. prefix:
pretrained_dict = {key.replace("module.", ""): value for key, value in pretrained_dict.items()}
model.load_state_dict(pretrained_dict)
You can also avoid this issue in the future by unwrapping the model from nn.DataParallel before saving its state, i.e. saving model.module.state_dict(). So you can always load the model first with its state and then later decide to put it into nn.DataParallel if you wanted to use multiple GPUs.
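As a concrete illustration of that last point, here is a minimal save/load sketch reusing the simplified NestedUNet from above (the checkpoint file name is just an example):

import torch
from torch import nn

# Training side: train with DataParallel, but save the unwrapped module's weights
model = NestedUNet()
model_dp = nn.DataParallel(model, device_ids=list(range(num_gpus)))
# ... training ...
torch.save(model_dp.module.state_dict(), "checkpoint.pth")

# Testing side: load directly into a plain model, no DataParallel required
model = NestedUNet()
model.load_state_dict(torch.load("checkpoint.pth", map_location="cpu"))
model.eval()  # switch to inference mode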
You trained your model using DataParallel and saved it. So, the model weights were stored with a module. prefix. Now, when you load without DataParallel, you basically are not loading any model weights (the model has random weights). As a result, the model predictions are wrong.
I am giving an example.
model = nn.Linear(2, 4)
model = torch.nn.DataParallel(model, device_ids=device_ids)
model.state_dict().keys() # => odict_keys(['module.weight', 'module.bias'])
On the other hand,
another_model = nn.Linear(2, 4)
another_model.state_dict().keys() # => odict_keys(['weight', 'bias'])
See the difference in the OrderedDict keys.
So, in your code, the following three lines work, but no model weights are actually loaded.
pretrained_dict = model_old.state_dict()
model_dict = model.state_dict()
pretrained_dict = {k: v for k, v in pretrained_dict.items() if k in model_dict}
Here, when you do not use DataParallel, model_dict has keys without the module. prefix, but pretrained_dict (taken from the saved DataParallel model) has them. So, essentially, the filtered pretrained_dict is empty when DataParallel is not used.
Solution: if you want to avoid using DataParallel, you can load the weights file, create a new OrderedDict without the module. prefix, and load it back.
Something like the following would work for your case without using DataParallel.
# original saved file with DataParallel
model_old = torch.load(path, map_location=dev)
# create a new OrderedDict that does not contain `module.`
from collections import OrderedDict
new_state_dict = OrderedDict()
for k, v in model_old.state_dict().items():
    name = k[7:]  # remove the `module.` prefix
    new_state_dict[name] = v
# load params
model.load_state_dict(new_state_dict)

Implementing MySQL "generated columns" on Django 1.8/1.9

I discovered the new generated columns functionality of MySQL 5.7 and wanted to replace some properties of my models with that kind of column. Here is a sample model:
class Ligne_commande(models.Model):
    Quantite = models.IntegerField()
    Prix = models.DecimalField(max_digits=8, decimal_places=3)
    Discount = models.DecimalField(max_digits=5, decimal_places=3, blank=True, null=True)

    @property
    def Prix_net(self):
        if self.Discount:
            return (1 - self.Discount) * self.Prix
        return self.Prix

    @property
    def Prix_total(self):
        return self.Quantite * self.Prix_net
I defined generated field classes as subclasses of Django fields (e.g. GeneratedDecimalField as a subclass of DecimalField). This worked in a read-only context, and Django migrations handle it correctly, except for one detail: MySQL's generated columns do not support forward references, and Django migrations do not respect the order in which fields are defined in a model, so the migration file must be edited to reorder the operations.
After that, trying to create or modify an instance returned the MySQL error 'error totally whack'. I suppose Django tries to write the generated field and MySQL doesn't like that. After taking a look at the Django code, I realized that, at the lowest level, Django uses the _meta.local_concrete_fields list and sends it to MySQL. Removing the generated fields from this list fixed the problem.
I encountered another problem: during the modification of an instance, generated fields don't reflect the changes made to the fields from which they are computed. If generated fields are used during instance modification, as in my case, this is problematic. To fix that point, I created a "generated field descriptor".
Here is the final code for all of this.
Creation of generated fields in the model, replacing the properties defined above:
Prix_net = mk_generated_field(models.DecimalField, max_digits=8, decimal_places=3,
                              sql_expr='if(Discount is null, Prix, (1.0 - Discount) * Prix)',
                              pyfunc=lambda x: x.Prix if not x.Discount else (1 - x.Discount) * x.Prix)
Prix_total = mk_generated_field(models.DecimalField, max_digits=10, decimal_places=2,
                                sql_expr='Prix_net * Quantite',
                                pyfunc=lambda x: x.Prix_net * x.Quantite)
Function that creates generated fields. Classes are dynamically created for simplicity:
import re

from django.db.models import fields

def mk_generated_field(field_klass, *args, sql_expr=None, pyfunc=None, **kwargs):
    assert issubclass(field_klass, fields.Field)
    assert sql_expr
    generated_name = 'Generated' + field_klass.__name__
    try:
        generated_klass = globals()[generated_name]
    except KeyError:
        globals()[generated_name] = generated_klass = type(generated_name, (field_klass,), {})

    def __init__(self, sql_expr, pyfunc=None, *args, **kwargs):
        self.sql_expr = sql_expr
        self.pyfunc = pyfunc
        self.is_generated = True  # mark the field
        # null must be True, otherwise the migration will ask for a default value
        kwargs.update(null=True, editable=False)
        super(generated_klass, self).__init__(*args, **kwargs)

    def db_type(self, connection):
        assert connection.settings_dict['ENGINE'] == 'django.db.backends.mysql'
        result = super(generated_klass, self).db_type(connection)
        # double any lone '%' because it would clash with Django's later string formatting
        sql_expr = re.sub('(?<!%)%(?!%)', '%%', self.sql_expr)
        result += ' GENERATED ALWAYS AS (%s)' % sql_expr
        return result

    def deconstruct(self):
        name, path, args, kwargs = super(generated_klass, self).deconstruct()
        kwargs.update(sql_expr=self.sql_expr)
        return name, path, args, kwargs

    generated_klass.__init__ = __init__
    generated_klass.db_type = db_type
    generated_klass.deconstruct = deconstruct
    return generated_klass(sql_expr, pyfunc, *args, **kwargs)
The function to register generated fields in a model. It must be called at Django start-up, for example in the ready() method of the application's AppConfig.
from django.utils.datastructures import ImmutableList

def register_generated_fields(model):
    local_concrete_fields = list(model._meta.local_concrete_fields[:])
    generated_fields = []
    for field in model._meta.fields:
        if hasattr(field, 'is_generated'):
            local_concrete_fields.remove(field)
            generated_fields.append(field)
            if field.pyfunc:
                setattr(model, field.name, GeneratedFieldDescriptor(field.pyfunc))
    if generated_fields:
        model._meta.local_concrete_fields = ImmutableList(local_concrete_fields)
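For reference, the registration call mentioned above could look like the following sketch in the app's AppConfig; the app name, module layout, and MyAppConfig class are placeholders, not part of the original code.

# myapp/apps.py -- hypothetical app name and module layout
from django.apps import AppConfig

class MyAppConfig(AppConfig):
    name = 'myapp'

    def ready(self):
        # register the generated fields once the app registry is loaded
        from .models import Ligne_commande
        from .generated_fields import register_generated_fields
        register_generated_fields(Ligne_commande)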
And the descriptor. Note that it is used only if a pyfunc is defined for the field.
class GeneratedFieldDescriptor(object):
    attr_prefix = '_GFD_'

    def __init__(self, pyfunc, name=None):
        self.pyfunc = pyfunc
        self.nickname = self.attr_prefix + (name or str(id(self)))

    def __get__(self, instance, owner):
        if instance is None:
            return self
        if hasattr(instance, self.nickname) and not instance.has_changed:
            return getattr(instance, self.nickname)
        return self.pyfunc(instance)

    def __set__(self, instance, value):
        setattr(instance, self.nickname, value)

    def __delete__(self, instance):
        delattr(instance, self.nickname)
Note the instance.has_changed attribute, which must tell whether the instance is being modified. I found a solution for this here.
I have done extensive tests with my application and it works fine, but I am far from using all of Django's functionality. My question is: could this setup clash with some use cases of Django?

PyQt: QComboBox displaying column "Name" but passing column "ID"

I've been trying very hard to get this working, but so far I haven't found the correct route.
I am using PyQt and querying a MySQL database, collecting from it a model with all the columns. Up to here it's all good.
I've created a combobox that displays the correct text using setModelColumn(1).
What I need now is for this combobox to send, on "activated", the unique ID of the selected record, so I am able to create a category relationship.
What exactly is the best way to do this? I feel I've arrived at a dead end; any help would be appreciated.
Best,
Cris
The best way would be to subclass QComboBox. You can't override the activated signal, but you can create a custom signal that is also emitted with the ID whenever activated is emitted. You can then connect to this signal and do your work. It will be something like this:
class MyComboBox(QtGui.QComboBox):
    activatedId = QtCore.pyqtSignal(int)  # correct this if your ID is not an int

    def __init__(self, parent=None):
        super(MyComboBox, self).__init__(parent)
        self.activated.connect(self.sendId)

    @QtCore.pyqtSlot(int)
    def sendId(self, index):
        model = self.model()
        uniqueIdColumn = 0  # if ID is elsewhere, adjust
        uniqueId = model.data(model.createIndex(index, uniqueIdColumn, 0), QtCore.Qt.DisplayRole)
        self.activatedId.emit(uniqueId)
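A possible usage sketch, assuming the SQL model has already been queried; the CategoryForm class and onCategoryChosen slot are illustrative names, not part of the original code:

class CategoryForm(QtGui.QWidget):
    def __init__(self, sqlModel, parent=None):
        super(CategoryForm, self).__init__(parent)
        self.combo = MyComboBox(self)
        self.combo.setModel(sqlModel)    # the model queried from MySQL
        self.combo.setModelColumn(1)     # display the "Name" column
        self.combo.activatedId.connect(self.onCategoryChosen)

    def onCategoryChosen(self, uniqueId):
        # uniqueId comes from column 0 of the activated row
        print("selected record id:", uniqueId)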
Edit
Here is a similar version without signals. It will return the uniqueId whenever you call sendId with an index of the combobox.
class MyComboBox(QtGui.QComboBox):
    def __init__(self, parent=None):
        super(MyComboBox, self).__init__(parent)

    def sendId(self, index):
        model = self.model()
        uniqueIdColumn = 0  # if ID is elsewhere, adjust
        uniqueId = model.data(model.createIndex(index, uniqueIdColumn, 0), QtCore.Qt.DisplayRole)
        return uniqueId