Qweb, Blocking the report - openerp-8

I need to block the report in the draft state: if the user clicks the print button to generate the PDF while the record is still in draft, it should raise a warning message.
Thanks in advance.

In general, a QWeb report can be printed in two ways:
HTML
PDF
Each time you call the report, a different method is invoked depending on the report type.
If you call the report as PDF, the get_pdf() method of the report module is called; if you call it as HTML, the get_html() method is called instead.
So in our case you have to override these two methods in your module and add something like the following.
Override the get_pdf() method of the report module:
from functools import partial

import lxml.html

from openerp import api, SUPERUSER_ID
from openerp.osv import osv
from openerp.tools.translate import _


class Report(osv.Model):
    _inherit = "report"
    _description = "Report"

    @api.v7
    def get_pdf(self, cr, uid, ids, report_name, html=None, data=None, context=None):
        """This method generates and returns the pdf version of a report."""
        # NOTE: this check assumes the ids passed in are sale.order ids,
        # i.e. that the report being printed is the sale order report.
        order_pool = self.pool.get('sale.order')
        for order in order_pool.browse(cr, uid, ids, context=context):
            if order.state == 'draft':
                raise osv.except_osv(
                    _("Warning!"),
                    _("Your printed report is in draft state!"))

        if context is None:
            context = {}

        if html is None:
            html = self.get_html(cr, uid, ids, report_name, data=data, context=context)

        html = html.decode('utf-8')  # Ensure the current document is utf-8 encoded.

        # Get the ir.actions.report.xml record we are working on.
        report = self._get_report_from_name(cr, uid, report_name)
        # Check if we have to save the report or if we have to get one from the db.
        save_in_attachment = self._check_attachment_use(cr, uid, ids, report)
        # Get the paperformat associated to the report, otherwise fallback on the company one.
        if not report.paperformat_id:
            user = self.pool['res.users'].browse(cr, uid, uid)
            paperformat = user.company_id.paperformat_id
        else:
            paperformat = report.paperformat_id

        # Preparing the minimal html pages
        css = ''  # Will contain local css
        headerhtml = []
        contenthtml = []
        footerhtml = []
        irconfig_obj = self.pool['ir.config_parameter']
        base_url = irconfig_obj.get_param(cr, SUPERUSER_ID, 'report.url') or irconfig_obj.get_param(cr, SUPERUSER_ID, 'web.base.url')

        # Minimal page renderer
        view_obj = self.pool['ir.ui.view']
        render_minimal = partial(view_obj.render, cr, uid, 'report.minimal_layout', context=context)

        # The received html report must be simplified. We convert it into an xml tree
        # in order to extract headers, bodies and footers.
        try:
            root = lxml.html.fromstring(html)
            match_klass = "//div[contains(concat(' ', normalize-space(@class), ' '), ' {} ')]"

            for node in root.xpath("//html/head/style"):
                css += node.text

            for node in root.xpath(match_klass.format('header')):
                body = lxml.html.tostring(node)
                header = render_minimal(dict(css=css, subst=True, body=body, base_url=base_url))
                headerhtml.append(header)

            for node in root.xpath(match_klass.format('footer')):
                body = lxml.html.tostring(node)
                footer = render_minimal(dict(css=css, subst=True, body=body, base_url=base_url))
                footerhtml.append(footer)

            for node in root.xpath(match_klass.format('page')):
                # Previously, we marked some reports to be saved in attachment via their ids, so we
                # must set a relation between report ids and report's content. We use the QWeb
                # branding in order to do so: searching after a node having a data-oe-model
                # attribute with the value of the current report model and read its oe-id attribute
                if ids and len(ids) == 1:
                    reportid = ids[0]
                else:
                    oemodelnode = node.find(".//*[@data-oe-model='%s']" % report.model)
                    if oemodelnode is not None:
                        reportid = oemodelnode.get('data-oe-id')
                        if reportid:
                            reportid = int(reportid)
                    else:
                        reportid = False

                # Extract the body
                body = lxml.html.tostring(node)
                reportcontent = render_minimal(dict(css=css, subst=False, body=body, base_url=base_url))
                contenthtml.append(tuple([reportid, reportcontent]))

        except lxml.etree.XMLSyntaxError:
            contenthtml = []
            contenthtml.append(html)
            save_in_attachment = {}  # Don't save this potentially malformed document

        # Get paperformat arguments set in the root html tag. They are prioritized over
        # paperformat-record arguments.
        specific_paperformat_args = {}
        for attribute in root.items():
            if attribute[0].startswith('data-report-'):
                specific_paperformat_args[attribute[0]] = attribute[1]

        # Run wkhtmltopdf process
        return self._run_wkhtmltopdf(
            cr, uid, headerhtml, footerhtml, contenthtml, context.get('landscape'),
            paperformat, specific_paperformat_args, save_in_attachment
        )
In the same way, you can override the get_html() method in your module and test it.
Here the code checks the sale order report action.
The above code was tested successfully on my side.
I hope this helps you. :)
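For completeness, a minimal sketch of the matching get_html() override might look like the following; it can live in the same Report class shown above. It reuses the same draft-state check and simply delegates the actual rendering to super(), and, like the get_pdf() override, it assumes the ids being printed belong to sale.order.

    @api.v7
    def get_html(self, cr, uid, ids, report_name, data=None, context=None):
        """Block the HTML version of the report for draft sale orders (sketch)."""
        order_pool = self.pool.get('sale.order')
        for order in order_pool.browse(cr, uid, ids, context=context):
            if order.state == 'draft':
                raise osv.except_osv(
                    _("Warning!"),
                    _("Your printed report is in draft state!"))
        # Delegate the actual HTML rendering to the standard implementation.
        return super(Report, self).get_html(
            cr, uid, ids, report_name, data=data, context=context)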

Related

Count the number of people having a property bounded by two numbers

The following code goes over the 10 pages of JSON returned by a GET request to the URL and checks how many records satisfy the condition that bloodPressureDiastole is between the specified limits. It does the job, but I was wondering if there is a better or cleaner way to achieve this in Python.
import urllib.request
import urllib.parse
import json

baseUrl = 'https://jsonmock.hackerrank.com/api/medical_records?page='
count = 0

for i in range(1, 11):
    url = baseUrl + str(i)
    f = urllib.request.urlopen(url)
    response = f.read().decode('utf-8')
    response = json.loads(response)
    lowerlimit = 110
    upperlimit = 120
    for elem in response['data']:
        bd = elem['vitals']['bloodPressureDiastole']
        if bd >= lowerlimit and bd <= upperlimit:
            count = count + 1

print(count)
There is no attribute access to the JSON content because json.loads gives you a dict object (see the translation scheme here). A dict provides access via the __getitem__ method (dict[key]) rather than __getattr__ (object.field), since keys may be any hashable objects, not only strings. Moreover, even strings cannot serve as attributes if they start with digits or collide with built-in dictionary methods.
Despite this, you can define your own custom class implementing the desired behaviour for acceptable key names. json.loads has an object_hook argument that accepts any callable (function or class) which takes a dict as its sole argument (not only the top-level one, but every dict in the JSON, recursively) and returns something in its place. If your JSON documents follow some template, you can define a class with predefined fields for the JSON content, and even with methods, in order to get a proper Python object as part of your domain logic.
For instance, let's implement the attribute access. I get the JSON content from the response.json method of requests, but it takes the same arguments as the json package. Comments in the code contain remarks about how to make your code more pythonic.
from collections import Counter

from requests import get


class CustomJSON(dict):
    def __getattr__(self, key):
        return self[key]

    def __setattr__(self, key, value):
        self[key] = value


LOWER_LIMIT = 110  # Constants should be in uppercase.
UPPER_LIMIT = 120

base_url = 'https://jsonmock.hackerrank.com/api/medical_records'
# It is better to use special tools for handling URLs
# in order to avoid possible exceptions in the future.
# By the way, your option could look clearer with f-strings,
# which can interpolate values from variables (and more) in place:
# url = f'https://jsonmock.hackerrank.com/api/medical_records?page={i}'

counter = Counter(normal_pressure=0)
# It might be left as it was. This option is useful
# in case you need to count any other information as well.

for page_number in range(1, 11):
    records = get(
        base_url, params={"page": page_number}  # query parameters go in params=, not data=
    ).json(object_hook=CustomJSON)
    # Python has a pile of libraries for handling url requests & responses.
    # urllib is a standard library rewritten from scratch for Python 3.
    # However, there is a more featured (connection pooling, redirections, proxies,
    # SSL verification &c.) & convenient third-party
    # (this is the only disadvantage) library: urllib3.
    # Based on it, requests provides an easier, more convenient & friendlier way
    # to work with url requests. So I highly recommend using it
    # unless you are aiming for complex connections & url processing.
    for elem in records.data:
        if LOWER_LIMIT <= elem.vitals.bloodPressureDiastole <= UPPER_LIMIT:
            counter["normal_pressure"] += 1

print(counter)
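As a quick illustration of what the object_hook gives you, the same attribute access also works with plain json.loads; the sample record below is made up:

import json

# Every dict decoded from the JSON becomes a CustomJSON instance.
sample = '{"data": [{"vitals": {"bloodPressureDiastole": 115}}]}'
records = json.loads(sample, object_hook=CustomJSON)
print(records.data[0].vitals.bloodPressureDiastole)  # -> 115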

Applying a persistent filter to a periodically updated qtableview in python

I have a custom model for a QTableView displaying data that I want to filter on a column.
The QTableView function updateAccountTable is refreshed periodically from a timer to show real-time data.
I have put a QLineEdit to enter my filter value, and I apply my custom QSortFilterProxyModel on it.
I can see the data is filtered in the QTableView, until the next refresh, where the list is unfiltered again.
This obviously comes from the textChanged signal of the QLineEdit, which is not persistent, but I do not see how to make my QSortFilterProxyModel persist after the QTableView refresh.
Any idea on how to do that?
Cheers
Stephane
Part of my code is:
# AccountTableView sorting override function
class mysortingproxy(QSortFilterProxyModel):
    def __init__(self, parent=None):
        super(mysortingproxy, self).__init__(parent)

    def lessThan(self, left: QModelIndex, right: QModelIndex) -> bool:
        leftData = self.sourceModel().data(left, Qt.UserRole)
        rightData = self.sourceModel().data(right, Qt.UserRole)
        return leftData < rightData


class MainUi(QMainWindow):
    # snip...

    def updateAccountTable(self):
        # Account table
        self.accountTableModel = AccountTableModel(self.data, self.scalping)
        proxyModel = mysortingproxy()  # if not sorting override: proxyModel = QSortFilterProxyModel()
        proxyModel.setFilterKeyColumn(0)  # first column
        proxyModel.setSourceModel(self.accountTableModel)
        self.tableView_Account.setModel(proxyModel)
        self.tableView_Account.setSortingEnabled(True)
        self.tableView_Account.verticalHeader().setVisible(False)
        # filter proxy model
        self.lineEdit_Find.textChanged.connect(proxyModel.setFilterRegExp)
Found it!
This actually only required reading the filter field each time before displaying my data and then applying the filter again.
The code became:
    def updateAccountTable(self):
        # Account table
        self.accountTableModel = AccountTableModel(self.data, self.scalping)
        proxyModel = mysortingproxy()  # if not sorting override: proxyModel = QSortFilterProxyModel()
        proxyModel.setFilterKeyColumn(0)  # first column
        proxyModel.setSourceModel(self.accountTableModel)
        self.tableView_Account.setModel(proxyModel)
        self.tableView_Account.setSortingEnabled(True)
        self.tableView_Account.verticalHeader().setVisible(False)
        # filter proxy model
        self.lineEdit_Find.textChanged.connect(proxyModel.setFilterRegExp)
        self.crypto_find = self.lineEdit_Find.text()
        proxyModel.setFilterRegExp(self.crypto_find.upper())
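A side note, not part of the original fix: because updateAccountTable recreates the proxy model and reconnects textChanged on every timer tick, duplicate signal connections also pile up over time. An alternative sketch (assuming PyQt5 and the same attribute names as above) is to build the proxy and the connection once, and only swap the source model on each refresh, so the filter persists by itself:

    def initAccountTable(self):
        # Called once, e.g. from __init__: proxy, view settings and signal wiring.
        self.accountProxyModel = mysortingproxy()
        self.accountProxyModel.setFilterKeyColumn(0)  # first column
        self.tableView_Account.setModel(self.accountProxyModel)
        self.tableView_Account.setSortingEnabled(True)
        self.tableView_Account.verticalHeader().setVisible(False)
        self.lineEdit_Find.textChanged.connect(
            lambda text: self.accountProxyModel.setFilterRegExp(text.upper()))

    def updateAccountTable(self):
        # Only the source model changes on each refresh; the proxy,
        # its filter string and the signal connection all persist.
        self.accountTableModel = AccountTableModel(self.data, self.scalping)
        self.accountProxyModel.setSourceModel(self.accountTableModel)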

control the empty data in django filter

I am coding the backend using Django. I am a beginner in Django. I used a filter to filter some POST requests from the HTML form. Here is the code.
@api_view(["POST"])
def program_search(request):
    Data_List = []
    MyFilter = CreateProgram.objects.filter(price__lte=request.POST['price'],
                                            days=request.POST['days']).values()
    ...
But if I send a request from the HTML form in which one field of data is null, the filter function can't handle it.
I hope you can make use of a simple if... clause to handle the situation
@api_view(["POST"])
def program_search(request):
    price = request.POST.get('price')
    days = request.POST.get("days")
    if price and days:
        qs = CreateProgram.objects.filter(price__lte=price, days=days)
    else:
        # in case of empty filter params from HTML, return an empty QuerySet
        qs = CreateProgram.objects.none()
    # `qs` variable holds your result
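If, instead of returning an empty QuerySet, you would rather filter by whichever fields were actually submitted, one common pattern is to build the filter keyword arguments dynamically. A sketch under the same assumptions (same model and POST field names as above):

@api_view(["POST"])
def program_search(request):
    # Apply only the filters whose values were actually provided.
    filters = {}
    price = request.POST.get('price')
    days = request.POST.get('days')
    if price:
        filters['price__lte'] = price
    if days:
        filters['days'] = days
    qs = CreateProgram.objects.filter(**filters)  # no kwargs means no filtering
    # `qs` holds your result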

How to dump all results of a API request when there is a page limit?

I am using an API to pull data from a URL; however, the API has a pagination limit. It goes like:
Page (default is 1; it is the page number you want to retrieve)
Per_page (default is 100; it is the maximum number of results returned in the response (max=500))
I have a script with which I can get the results of a page, or per page, but I want to automate it. I want to be able to loop through all the pages, or per_page (500), and load it into a JSON file.
Here is my code that can get 500 results per_page:
import json, pprint
import requests

url = "https://my_api.com/v1/users?per_page=500"
header = {"Authorization": "Bearer <my_api_token>"}

s = requests.Session()
s.proxies = {"http": "<my_proxies>", "https": "<my_proxies>"}
resp = s.get(url, headers=header, verify=False)
raw = resp.json()

for x in raw:
    print(x)
The output is 500, but is there a way to keep going and pull the results starting from where it left off? Or even go page by page and get all the data per page until there's no data in a page?
It would be helpful if you presented a sample response from your API.
If the API is equipped properly, there will be a next property in a given response that leads you to the next page.
You can then keep calling the API, recursively, with the link given in next. On the last page, there will be no next in the Link header.
resp.links["next"]["url"] will give you the URL to the next page.
For example, the GitHub API has next, last, first, and prev properties.
To put it into code, first, you need to turn your code into functions.
Given that there is a maximum of 500 results per page, it implies you are extracting a list of data of some sort from the API. Often, these data are returned in a list somewhere inside raw.
For now, let's assume you want to extract all elements inside a list at raw.get('data').
import requests

header = {"Authorization": "Bearer <my_api_token>"}
results_per_page = 500

def compose_url():
    return (
        "https://my_api.com/v1/users"
        + "?per_page="
        + str(results_per_page)
        + "&page_number="
        + "1"
    )

def get_result(url=None):
    if url is None:
        url_get = compose_url()
    else:
        url_get = url
    s = requests.Session()
    s.proxies = {"http": "<my_proxies>", "https": "<my_proxies>"}
    resp = s.get(url_get, headers=header, verify=False)
    # You may also want to check the status code
    if resp.status_code != 200:
        raise Exception(resp.status_code)
    raw = resp.json()  # of type dict
    data = raw.get("data")  # of type list
    if "next" not in resp.links:
        # We are at the last page, return data
        return data
    # Otherwise, recursively get results from the next url
    return data + get_result(resp.links["next"]["url"])  # concat lists

def main():
    # Driver function
    data = get_result()
    # Then you can print the data or save it to a file

if __name__ == "__main__":
    # Now run the driver function
    main()
However, if there isn't a proper Link header, I see 2 solutions:
(1) recursion and (2) loop.
I'll demonstrate recursion.
As you have mentioned, when there is pagination in API responses, i.e. when there is a limit on the maximum number of results per page, there is often a query parameter called page number, start index, or something of the sort to indicate which "page" you are querying, so we'll utilize the page_number parameter in the code.
The logic is:
Given an HTTP response, if there are fewer than 500 results, it means there are no more pages: return the results.
If there are 500 results in a given response, there is probably another page, so we advance page_number by 1, do a recursion (by calling the function itself), and concatenate with the previous results.
import requests

header = {"Authorization": "Bearer <my_api_token>"}
results_per_page = 500

def compose_url(results_per_page, current_page_number):
    return (
        "https://my_api.com/v1/users"
        + "?per_page="
        + str(results_per_page)
        + "&page_number="
        + str(current_page_number)
    )

def get_result(current_page_number):
    s = requests.Session()
    s.proxies = {"http": "<my_proxies>", "https": "<my_proxies>"}
    url = compose_url(results_per_page, current_page_number)
    resp = s.get(url, headers=header, verify=False)
    # You may also want to check the status code
    if resp.status_code != 200:
        raise Exception(resp.status_code)
    raw = resp.json()  # of type dict
    data = raw.get("data")  # of type list
    # If the length of data is smaller than results_per_page (500 of them),
    # that means there are no more pages
    if len(data) < results_per_page:
        return data
    # Otherwise, advance the page number and do a recursion
    return data + get_result(current_page_number + 1)  # concat lists

def main():
    # Driver function
    data = get_result(1)
    # Then you can print the data or save it to a file

if __name__ == "__main__":
    # Now run the driver function
    main()
If you truly want to store the raw responses, you can. However, you'll still need to check the number of results in a given response. The logic is similar. If a given raw contains 500 results, it means there is probably another page. We advance the page number by 1 and do a recursion.
Let's still assume raw.get('data') is the list whose length is the number of results.
Because JSON/dictionary files cannot be simply concatenated, you can store raw (which is a dictionary) of each page into a list of raws. You can then parse and synthesize the data in whatever way you want.
Use the following get_result function:
def get_result(current_page_number):
s = requests.Session()
s.proxies = {"http": "<my_proxies>", "https": "<my_proxies>"}
url = compose_url(results_per_page, current_page_number)
resp = s.get(url, headers=header, verify=False)
# You may also want to check the status code
if resp.status_code != 200:
raise Exception(resp.status_code)
raw = resp.json() # of type dict
data = raw.get("data") # of type list
if len(data) == results_per_page:
return [raw] + get_result(current_page_number + 1) # concat lists
return [raw] # convert raw into a list object on the fly
As for the loop method, the logic is similar to recursion. Essentially, you will call the get_result() function a number of times, collect the results, and break early when the furthest page contains fewer than 500 results (a rough sketch follows below).
If you know the total number of results in advance, you can simply run the loop a predetermined number of times.
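A rough sketch of the loop variant, under the same assumptions as the recursive version (a page_number query parameter and the results sitting under raw['data']):

def get_all_results():
    # Keep requesting pages until one comes back with fewer than 500 results.
    all_data = []
    current_page_number = 1
    while True:
        s = requests.Session()
        s.proxies = {"http": "<my_proxies>", "https": "<my_proxies>"}
        url = compose_url(results_per_page, current_page_number)
        resp = s.get(url, headers=header, verify=False)
        if resp.status_code != 200:
            raise Exception(resp.status_code)
        data = resp.json().get("data")
        all_data += data
        if len(data) < results_per_page:
            break  # the last (possibly partial) page has been reached
        current_page_number += 1
    return all_data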
Do you follow? Do you have any further questions?
(I'm a little confused by what you mean by "load it into a JSON file". Do you mean saving the final raw results into a JSON file? Or are you referring to the .json() method in resp.json()? In that case, you don't need import json to do resp.json(); the .json() method on resp is part of the requests module.)
As a bonus point, you can make your HTTP requests asynchronous, but this is slightly beyond the scope of your original question.
P.S. I'm happy to learn what other, perhaps more elegant, solutions people use.

Django TypeError not JSON serializable in request.session

I have a sort function on a project I'm working on, where users can create a sort query of all the assets they're working on. When they get the results of their query, I want them to be able to download a .csv of all the objects in the query.
However, when I try to store the query results in a session, I get an error that the results are not JSON serializable. If I don't try to store the query results then the sort runs fine, but then the export button won't work since the query results haven't been stored.
In my views:
def sort(request, project_id=1):
    thisuser = request.user
    project = Project.objects.get(id=project_id)
    if Project.objects.filter(Q(created_by=thisuser) | Q(access__give_access_to=thisuser), id=project_id).exists():
        permission = 1
    else:
        permission = None
    if Asset.objects.filter(project__id=project_id, unique_id=1):
        assets = 1
    else:
        assets = None
    if request.POST:
        if request.POST.get('date_start') and request.POST.get('date_end'):
            date_start = datetime.strptime(request.POST['date_start'], '%m/%d/%Y')
            date_end = datetime.strptime(request.POST['date_end'], '%m/%d/%Y')
            q_date = Q(date_produced__range=[date_start, date_end])
        else:
            q_date = Q(date_produced__isnull=False) | Q(date_produced__isnull=True)
        text_fields = {
            'asset_type': request.POST.get('asset_type'),
            'description': request.POST.get('description'),
            'master_status': request.POST.get('master_status'),
            'location': request.POST.get('location'),
            'file_location': request.POST.get('file_location'),
            'footage_format': request.POST.get('footage_format'),
            'footage_region': request.POST.get('footage_region'),
            'footage_type': request.POST.get('footage_type'),
            'footage_fps': request.POST.get('footage_fps'),
            'footage_url': request.POST.get('footage_url'),
            'stills_credit': request.POST.get('stills_credit'),
            'stills_url': request.POST.get('stills_url'),
            'music_format': request.POST.get('music_format'),
            'music_credit': request.POST.get('music_credit'),
            'music_url': request.POST.get('music_url'),
            'license_type': request.POST.get('license_type'),
            'source': request.POST.get('source'),
            'source_contact': request.POST.get('source_contact'),
            'source_email': request.POST.get('source_email'),
            'source_id': request.POST.get('source_id'),
            'source_phone': request.POST.get('source_phone'),
            'source_fax': request.POST.get('source_fax'),
            'source_address': request.POST.get('source_address'),
            'credit_language': request.POST.get('source_language'),
            'cost': request.POST.get('cost'),
            'cost_model': request.POST.get('cost_model'),
            'total_cost': request.POST.get('total_cost'),
            'notes': request.POST.get('notes')
        }
        boolean_fields = {
            'used_in_film': request.POST.get('used_in_film'),
            'footage_blackandwhite': request.POST.get('footage_blackandwhite'),
            'footage_color': request.POST.get('footage_color'),
            'footage_sepia': request.POST.get('footage_sepia'),
            'stills_blackandwhite': request.POST.get('stills_blackandwhite'),
            'stills_color': request.POST.get('stills_color'),
            'stills_sepia': request.POST.get('stills_sepia'),
            'license_obtained': request.POST.get('license_obtained')
        }
        q_objects = Q()
        for field, value in text_fields.iteritems():
            if value:
                q_objects = Q(**{field + '__contains': value})
        q_boolean = Q()
        for field, value in boolean_fields.iteritems():
            if value:
                q_boolean |= Q(**{field: True})
        query_results = Asset.objects.filter(q_date, q_objects, q_boolean)
        list(query_results)
        request.session['query_results'] = list(query_results)
        args = {'query_results': query_results, 'thisuser': thisuser, 'project': project, 'assets': assets}
        args.update(csrf(request))
        args['query_results'] = query_results
        return render_to_response('sort_results.html', args)
    else:
        args = {'thisuser': thisuser, 'project': project, 'assets': assets}
        args.update(csrf(request))
        return render_to_response('sort.html', args)
This is the line that causes it to fail: request.session['query_results'] = list(query_results). It also fails with request.session['query_results'] = query_results.
The reason for this error is that you are trying to assign a list of model instances to the session. A model instance cannot be serialized to JSON. If you want to pass a list of instances of the Asset model to the session, you can do it this way:
query_results = Asset.objects.values('id', 'name').filter(q_date, q_objects, q_boolean)
You can list the necessary model fields in values().
In that case you will have a list of dictionaries, not instances, and this list may be assigned to the session. But you cannot operate on these dictionaries like instances of the Asset class, i.e. you cannot call model methods and so on.
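An alternative sketch, not from the answer above: since you also need the results later for the CSV export, you can store only the primary keys (plain integers, so JSON serializable) in the session and rebuild the queryset in the export view. The export view name below is illustrative:

# Store only the primary keys, which are JSON serializable ...
request.session['query_results_ids'] = list(
    Asset.objects.filter(q_date, q_objects, q_boolean).values_list('id', flat=True))

# ... and rebuild the queryset where the .csv is generated (view name is illustrative):
def export_csv(request):
    ids = request.session.get('query_results_ids', [])
    query_results = Asset.objects.filter(id__in=ids)
    # build the .csv from query_results here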