With the Flask app below, I managed to display my.csv file in an HTML table. Now I'm trying to add a URL to each userId displayed in the HTML table (see Output), for example:
https://myURL.com/1
https://myURL.com/2 etc...
What would be the best way to achieve this, assuming that clicking on a URL in the userID column should bring me to an HTML page with more details specific to that ID?
app.py
from flask import Flask, render_template, request
import pandas as pd
import numpy as np

app = Flask(__name__)

@app.route('/example')
def dataframe():
    df = pd.read_csv("my.csv")
    return render_template("example.html", data=df.head(5).to_html())

if __name__ == "__main__":
    app.run()
example.html
<!DOCTYPE html>
<html>
  <head>
    <title>CSV Data</title>
  </head>
  <body>
    <h1>My stats</h1>
    {{ data | safe }}
  </body>
</html>
Output: http://127.0.0.1:5000/example
Thank you in advance!
Although you can render HTML from Pandas and then pass that HTML to Flask, the problem with that approach is that it leaves no easy way to edit the HTML afterwards. To get started with the task, I created some synthetic data in Pandas:
import pandas

df = pandas.DataFrame([{"userID": 4, "customers": 23},
                       {"userID": 3, "customers": 33},
                       {"userID": 1, "customers": 42},
                       {"userID": 5, "customers": 13}])
print(df.to_html())
To include URLs, a naive method would be to embed the links as strings in the Pandas dataframe before generating the HTML. The problem is that Pandas escapes the special HTML characters in strings:
import pandas

df = pandas.DataFrame([{"userID": "<a href='helo'>4</a>", "customers": 23},
                       {"userID": 3, "customers": 33},
                       {"userID": 1, "customers": 42},
                       {"userID": 5, "customers": 13}])
print(df.to_html())
This produces a cell that displays the literal string <a href='helo'>4</a> instead of a hyperlink.
To keep the hyperlink as HTML, tell Pandas not to escape special characters:
import pandas

df = pandas.DataFrame([{"userID": "<a href='helo'>4</a>", "customers": 23},
                       {"userID": 3, "customers": 33},
                       {"userID": 1, "customers": 42},
                       {"userID": 5, "customers": 13}])
print(df.to_html(escape=False))
The next task is to inject the links into the dataframe:
list_of_rows = []
for index, row in df.iterrows():
    new_row = {'customers': row['customers']}  # keep the old data
    uid = str(row['userID'])
    new_row['userID'] = "<a href='http://url.com/" + uid + "'>" + uid + "</a>"
    list_of_rows.append(new_row)
df_with_links = pandas.DataFrame(list_of_rows)
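As a side note, the same transformation can be written more compactly with Series.apply; a minimal sketch, assuming the same column names as above:

# Equivalent to the loop: wrap each userID in an anchor tag
df_with_links = df.copy()
df_with_links['userID'] = df['userID'].apply(
    lambda uid: "<a href='http://url.com/{0}'>{0}</a>".format(uid))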
Incorporating those techniques into your code,
@app.route('/example')
def dataframe():
    df = pd.read_csv("my.csv")
    list_of_rows = []
    for index, row in df.iterrows():
        new_row = {'customers': row['customers']}  # keep the old data
        uid = str(row['userID'])
        new_row['userID'] = "<a href='http://url.com/" + uid + "'>" + uid + "</a>"
        list_of_rows.append(new_row)
    df_with_links = pd.DataFrame(list_of_rows)
    return render_template("example.html", data=df_with_links.to_html(escape=False))
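To serve the per-ID detail page mentioned in the question, you would also need a route that captures the ID from the URL. A minimal sketch, assuming the links point at your own app (e.g. href='/user/<uid>') and a hypothetical user.html template:

@app.route('/user/<int:user_id>')
def user_detail(user_id):
    df = pd.read_csv("my.csv")
    # keep only the rows belonging to the clicked userID
    user_rows = df[df['userID'] == user_id]
    return render_template("user.html", data=user_rows.to_html())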
Related
I was wondering if there is a way to remove/replace null/empty square brackets in JSON or a pandas dataframe. I have tried to replace them after converting to string via .astype(str); that works, but it converts all values into strings and I cannot process them further with the same structure. I would appreciate any solution/recommendation. Thanks.
With the following toy dataframe:
import pandas as pd
df = pd.DataFrame({"col1": ["a", [1, 2, 3], [], "d"], "col2": ["e", [], "f", "g"]})
print(df)
# Output
        col1  col2
0          a     e
1  [1, 2, 3]    []
2         []     f
3          d     g
Here is one way to do it:
df = df.applymap(lambda x: pd.NA if isinstance(x, list) and not x else x)
print(df)
# Output
        col1  col2
0          a     e
1  [1, 2, 3]  <NA>
2       <NA>     f
3          d     g
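Note that in recent pandas versions (2.1+), DataFrame.applymap is deprecated in favour of the element-wise DataFrame.map, which takes the same function:

df = df.map(lambda x: pd.NA if isinstance(x, list) and not x else x)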
Good afternoon. I'm trying to find the top 10 IPs in access.log (the standard Apache server log).
There is code like this:
import argparse
import json
import re
from collections import defaultdict, Counter

parser = argparse.ArgumentParser(description='parser script')
parser.add_argument('-f', dest='logfile', action='store', default='access.log')
args = parser.parse_args()

regul_ip = r"^(?P<ips>.*?)"
regul_method = r"\"(?P<request_method>GET|POST|PUT|DELETE|HEAD)"

def req_by_method():
    dict_ip = defaultdict(lambda: {"GET": 0, "POST": 0, "PUT": 0, "DELETE": 0, "HEAD": 0})
    with open(args.logfile) as file:
        for index, line in enumerate(file.readlines()):
            try:
                ip = re.search(regul_ip, line).group()
                method = re.search(regul_method, line).groups()[0]
                return Counter(dict_ip).most_common(10)
            except AttributeError:
                pass
            dict_ip[ip][method] += 1
    print(json.dumps(dict_ip, indent=4))
    with open("final_log.json", "w") as jsonfile:
        json.dump(dict_ip, jsonfile, indent=5)
When the code is executed, I only get: []
How can I fix this code to make it work?
I also need to output to the final JSON file a set of lines with the "ip", "method", "status code", "url", and the duration of each request.
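A likely culprit: the return sits inside the try block, so it fires on the first successfully matched line, before any count has been incremented, which is why an empty list comes back. In addition, the non-greedy ^(?P<ips>.*?) can match zero characters, so it never captures the address. A minimal sketch of a fix, assuming the IP is the first whitespace-delimited token of each log line:

regul_ip = r"^(?P<ips>\S+)"

def req_by_method():
    dict_ip = defaultdict(lambda: {"GET": 0, "POST": 0, "PUT": 0, "DELETE": 0, "HEAD": 0})
    with open(args.logfile) as file:
        for line in file:
            ip_match = re.search(regul_ip, line)
            method_match = re.search(regul_method, line)
            if ip_match is None or method_match is None:
                continue  # skip lines that do not parse
            dict_ip[ip_match.group("ips")][method_match.group("request_method")] += 1
    # rank IPs by their total request count across all methods
    totals = Counter({ip: sum(counts.values()) for ip, counts in dict_ip.items()})
    return totals.most_common(10)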
I have the following issue:
I am trying to execute the code:
import requests
import json
url = 'https://data.gov.gr/api/v1/query/mdg_emvolio?date_from=2021-01-07&date_to=2021-01-14'
headers = {'Authorization': 'Token xxxxxxxxxxxxxx'}
response = requests.get(url, headers=headers)
json_object = json.loads(response.text)
json_formatted_str = json.dumps(json_object, indent=2)
print(json_formatted_str)
I want the output to be nicely formatted JSON, but the Greek characters do not appear correctly. I get the following output (a part of it):
{
  "area": "\u0391\u0399\u03a4\u03a9\u039b\u039f\u0391\u039a\u0391\u03a1\u039d\u0391\u039d\u0399\u0391\u03a3",
  "areaid": 701,
  "daydiff": 10,
  "daytotal": 60,
  "referencedate": "2021-01-07T00:00:00",
  "totaldistinctpersons": 210,
  "totalvaccinations": 210
},
On the other hand, if I use:
print(response.json())
I get correct Greek characters but (as expected) no nice formatting, for example:
{'area': 'ΑΙΤΩΛΟΑΚΑΡΝΑΝΙΑΣ', 'areaid': 701, 'daydiff': 10, 'daytotal': 60, 'referencedate': '2021-01-07T00:00:00', 'totaldistinctpersons': 210, 'totalvaccinations': 210},
Any ideas?
As suggested by JosefZ, I modified the code as follows:
import requests
import json
import pprint
url = 'https://data.gov.gr/api/v1/query/mdg_emvolio?date_from=2021-01-07&date_to=2021-01-14'
headers = {'Authorization': 'Token xxxxxxxxxxxxxx'}
response = requests.get(url, headers=headers)
pprint.pprint(response.json())
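For what it's worth, json.dumps can also keep both the indentation and readable Greek characters if you pass ensure_ascii=False:

json_formatted_str = json.dumps(response.json(), indent=2, ensure_ascii=False)
print(json_formatted_str)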
My Python code reads an Excel sheet and converts it into a JSON output file. I have a column in the Excel sheet whose values are either "Planned" or "Unplanned".
1) In the JSON output, I want "Planned" to be replaced with 1 and "Unplanned" to be replaced with 2, without changing anything in the Excel file.
2) In the output I don't want "data" to appear.
3) In the Excel file, my Start time column value looks like "2018-11-16 08:00:00". I want the output to be "2018-11-16T08:00:00Z". Currently I am getting a garbage value.
Below is my code.
import xlrd, json, time, pytz, requests
from os import sys
from datetime import datetime, timedelta
from collections import OrderedDict

def json_from_excel():
    excel_file = 'test.xlsx'
    jsonfile = open('ExceltoJSON.json', 'w')
    data = []
    datestr = str(datetime.now().date())
    loaddata = OrderedDict()
    workbook = xlrd.open_workbook(excel_file)
    worksheet = workbook.sheet_by_name('OMS-GX Data Extraction')
    sheet = workbook.sheet_by_index(0)
    for j in range(0, 6):
        for i in range(1, 40):
            temp = {}
            temp["requestedStart"] = sheet.cell_value(i, 0)     # Start Time
            temp["requestedComplete"] = sheet.cell_value(i, 1)  # End Time
            temp["location"] = sheet.cell_value(i, 3)           # Station
            temp["equipment"] = sheet.cell_value(i, 4)          # Device Name
            temp["switchOrderTypeID"] = sheet.cell_value(i, 5)  # Outage Type
            data.append(temp)
    loaddata['data'] = data
    json.dump(loaddata, jsonfile, indent=3, sort_keys=False)
    jsonfile.write('\n')
    return loaddata

if __name__ == '__main__':
    data = json_from_excel()
if __name__ == '__main__':
data = json_from_excel()
Below is my sample output:
{
   "data": [
      {
         "requestedStart": testtime,
         "requestedComplete": testtime,
         "location": "testlocation",
         "equipment": "testequipment",
         "switchOrderTypeID": "Planned"
      },
      {
         "requestedStart": testtime,
         "requestedComplete": testtime,
         "location": "testlocation",
         "equipment": "testequipment",
         "switchOrderTypeID": "Unplanned"
      }
   ]
}
Answer to the 1st question:
You may use conditional assignment (Planned maps to 1, Unplanned to 2):
temp["switchOrderTypeID"] = (1 if sheet.cell_value(i, 5) == "Planned" else 2)
Answer to the 2nd question:
Skip the loaddata wrapper and dump data directly; the output is then a JSON array without "data" as a key.
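For example, the last lines of json_from_excel would become:

json.dump(data, jsonfile, indent=3, sort_keys=False)
jsonfile.write('\n')
return data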
Answer to the 3rd question:
from dateutil.parser import parse
t = "2018-11-16 08:00:00"
parse(t).strftime("%Y-%m-%dT%H:%M:%SZ")
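This works when the cell is stored as text. If Excel stores the column as a real date, xlrd returns a float serial number instead (the "garbage value" mentioned), which can be converted via xlrd's date helper; a sketch, assuming the workbook and sheet objects from the question:

from xlrd import xldate_as_datetime

# convert the Excel serial number using the workbook's date mode
value = sheet.cell_value(i, 0)
temp["requestedStart"] = xldate_as_datetime(value, workbook.datemode).strftime("%Y-%m-%dT%H:%M:%SZ")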
The following is my JSON, whose values are encoded in base64.
response = {"response": [{"objcontent": [{
    "title": "Pressure",
    "rowkeys": ["lat", "lon", "Pressure"],
    "rowvalues": [
        ["WxsArK0NV0A=",
         "uaQCWFxSM0A=",
         "ncvggc7lcUA6MVVLnZiMQH6msaA+0yhANzLp2RsZhkBwobfXt9BXQKtxbnjV+IFARq3fVqOWiEBwyyvmt+V9QDGg7k8YUHpA4IZm9W/De0A="],
        ["WxsArK0NV0A=",
         "HqJT4w7RUkA=",
         "BfPox4I5ikCLVYxUxWqIQIFwlJFA+IVAJeQ6gBLyhEBB0QlkoGiCQDOkvnAZUm1AkGbWKEgza0A+FCkwH4phQHwSRSY+iVRAKcvC4pRliEA="],
        ["WxsArK0NV0A=",
         "G5rYdw0NXkA=",
         "C9dhhIVrg0B2hCvzOoKKQMrMWhll5o5AIujgxBB0ZkD8+EipfXx0QOXh0LLycH5ATdtxKqbtdkAw66X3l/VhQLqvZBbd13FAjKl2+8UUjUA="],
        ["WxsArK0NV0A=",
         "PTvsm55daEA=",
         "W+wyHC12dUCrvSLM1d6BQMfay0ZjbYpAjnk4Ecc8dkDH35pL429xQPTOwkF6Z41Aci5JATkXjUBQ6Wjlp3RQQFlpNGmsNHpAFf0DUor+dUA="]
    ]
}]}]}
I decoded the values and used them to draw a plot. The following is the code:
import base64
import struct
import numpy as np
import pylab as pl

for response_i in response['response']:
    for row in response_i['objcontent'][0]['rowvalues']:
        for item in row:
            decoded = base64.b64decode(item)
            if len(decoded) < 9:
                # a single packed 8-byte double (lat or lon)
                a = struct.unpack('d', decoded)
            else:
                # ten packed doubles (the Pressure values)
                a = struct.unpack('10d', decoded)
            last = np.array(a)
            pl.plot(last)
            pl.show()
But I would like to separate the values of each list. In rowkeys there are 3 elements ["lat", "lon", "Pressure"] and, accordingly, there are 3 values in each list of rowvalues.
My question is how I can separate the different values in rowvalues and add them to each group of rowkeys.
So, at the end I am supposed to have 3 lists which include all the decoded values:
'lat': [WxsArK0NV0A=,WxsArK0NV0A=,WxsArK0NV0A=,WxsArK0NV0A=]
'lon': [uaQCWFxSM0A=,HqJT4w7RUkA=,G5rYdw0NXkA=,PTvsm55daEA=]
'pressure': [ncvggc7lcUA6MVVLnZiMQH6msaA+0yhANzLp2RsZhkBwobfXt9BXQKtxbnjV+IFARq3fVqOWiEBwyyvmt+V9QDGg7k8YUHpA4IZm9W/De0A=, BfPox4I5ikCLVYxUxWqIQIFwlJFA+IVAJeQ6gBLyhEBB0QlkoGiCQDOkvnAZUm1AkGbWKEgza0A+FCkwH4phQHwSRSY+iVRAKcvC4pRliEA=, C9dhhIVrg0B2hCvzOoKKQMrMWhll5o5AIujgxBB0ZkD8+EipfXx0QOXh0LLycH5ATdtxKqbtdkAw66X3l/VhQLqvZBbd13FAjKl2+8UUjUA=, W+wyHC12dUCrvSLM1d6BQMfay0ZjbYpAjnk4Ecc8dkDH35pL429xQPTOwkF6Z41Aci5JATkXjUBQ6Wjlp3RQQFlpNGmsNHpAFf0DUor+dUA=]
One approach would be to manually sort the data, like so:
from collections import defaultdict

# group each column's values under its rowkeys name
d = defaultdict(list)
content = response['response'][0]['objcontent'][0]
for i, col_name in enumerate(content['rowkeys']):
    for row in content['rowvalues']:
        d[col_name].append(row[i])
defaultdict automatically creates a new list whenever a missing key is accessed, which makes the code slightly sleeker.
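To end up with the decoded numbers rather than the raw base64 strings, the struct unpacking from the question can be folded into the same loop; a minimal sketch:

import struct
from base64 import b64decode

d_decoded = defaultdict(list)
for i, col_name in enumerate(content['rowkeys']):
    for row in content['rowvalues']:
        raw = b64decode(row[i])
        # each value is a sequence of packed 8-byte doubles
        d_decoded[col_name].extend(struct.unpack('%dd' % (len(raw) // 8), raw))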
Another option would be to use pandas.DataFrame and load the data like so:
import pandas as pd

content = response['response'][0]['objcontent'][0]
df = pd.DataFrame(content['rowvalues'], columns=content['rowkeys'])
The neat thing about pandas is that it's quite expansive in its features; for example, once the cells have been decoded to numbers, you could plot your data from the previously created DataFrame like so:
df.plot()