Getting an image through HTML and processing it with Python code

I am trying to make a website that takes an image from the user and runs
face-landmark code (Python) to tell the user about their face shape, among other things.
How can I get the image through HTML, use the image file in Python code, and show the result back to the user? Is using Django the only way? I have tried to study Django in many ways, and most of the material I found did not directly help with the website I am planning. Thank you for reading.

You can use this code (Python 3) to embed the image directly in your HTML:
import base64

# Read the image bytes and encode them as a base64 data URI
data_uri = base64.b64encode(open('Graph.png', 'rb').read()).decode('utf-8')
img_tag = '<img src="data:image/png;base64,{0}">'.format(data_uri)
print(img_tag)
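Django is not the only way, though. As a rough sketch of the full round trip (upload, analyze, respond), a microframework like Flask is enough. Note that analyze_face() below is a hypothetical placeholder for your face-landmark code, and the response assumes a PNG upload:

import base64
from flask import Flask, request

app = Flask(__name__)

def analyze_face(image_bytes):
    # Hypothetical placeholder: plug your face-landmark model in here.
    return "oval"

@app.route('/')
def upload_form():
    # Minimal upload form served to the user
    return ('<form method="post" action="/analyze" enctype="multipart/form-data">'
            '<input type="file" name="photo">'
            '<input type="submit" value="Analyze">'
            '</form>')

@app.route('/analyze', methods=['POST'])
def analyze():
    data = request.files['photo'].read()               # raw bytes of the uploaded image
    shape = analyze_face(data)                         # run the (placeholder) analysis
    data_uri = base64.b64encode(data).decode('utf-8')  # embed the image back in the reply
    return '<p>Face shape: {0}</p><img src="data:image/png;base64,{1}">'.format(shape, data_uri)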

Related

How to open an atoti link in a Streamlit application with Python?

I'm trying to show an atoti link in a Streamlit application where I want to create some plots with atoti.
I tried the code below, but it shows something else in place of the link.
import pandas as pd
import streamlit as st

# `session` is an existing atoti session
new = pd.DataFrame()
new['Link'] = [session.link()]
st.dataframe(new)
st.write(new.to_html(escape=False, index=False), unsafe_allow_html=True)
The output is:

Link
0  Link(_path='', _session=<atoti.session.Session object at 0x000002B700293FA0>)

followed by:

Link
Link(_path='', _session=)

The expected link is http://localhost:53533.
Can anyone help me with this?
As documented here, Session.link() is only available in JupyterLab.
You could use f"http://localhost:{session.port}" instead.
It's also possible to configure the session to always use the same port.
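A short sketch of that workaround (it assumes Streamlit and a recent atoti; the exact Session constructor arguments may differ between atoti versions):

import atoti as tt
import streamlit as st

# Pin the port so the URL stays stable across restarts
# (constructor arguments may vary by atoti version).
session = tt.Session(port=53533)

# Build the link by hand instead of calling session.link(),
# which only renders inside JupyterLab.
st.markdown(f"[Open the atoti app](http://localhost:{session.port})")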

Selenium login to a page with Python 3.6: can't find element by name

Today I tried to write a bot for the ytpals.com webpage,
using the Python Selenium library.
The first thing I am trying to do is log in to the page with my YouTube channel ID.
But whatever I do, I have been unsuccessful at finding the 'channelid' element.
On top of that, the page sometimes doesn't load fully...
It worked for me on other pages to find an input form, but this page I can't understand.
Maybe someone has a better understanding than me and knows how to log in to this page?
My simple code:
import time
from selenium import webdriver
browser = webdriver.Firefox()
browser.get('https://www.ytpals.com/')
search = browser.find_element_by_name('channelid')
search.send_keys("testchannel")
time.sleep(5) # sleep for 5 seconds so you can see the results
browser.quit()
So I found a solution to my problem.
I downloaded Selenium IDE, and I can use it as a debugger. Such a great tool!
If someone needs it, here is the link:
https://www.seleniumhq.org/docs/02_selenium_ide.jsp
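Independently of Selenium IDE, the "page sometimes doesn't load fully" symptom is usually handled with an explicit wait rather than hoping the element is already present. A hedged sketch (it assumes the input really is named channelid and does not sit inside an iframe):

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

browser = webdriver.Firefox()
browser.get('https://www.ytpals.com/')

# Block for up to 15 seconds until the element appears in the DOM
search = WebDriverWait(browser, 15).until(
    EC.presence_of_element_located((By.NAME, 'channelid'))
)
search.send_keys('testchannel')
browser.quit()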

splinter nested <html> documents

I am working on some website automation. Currently, I am unable to access a nested HTML document with Splinter. Here's a sample website that demonstrates what I am dealing with: https://www.w3schools.com/html/tryit.asp?filename=tryhtml_elem_select
I am trying to get into the select element and choose the "saab" option. I am stuck on how to enter the second HTML document. I've read the documentation and saw nothing about it. I'm hoping there is a way to do this with Python.
Any thoughts?
Before Solution:
from splinter import Browser
exe = {"executable_path": "chromedriver.exe"}
browser = Browser("chrome",**exe, headless=False)
url = "https://www.w3schools.com/html/tryit.asp?filename=tryhtml_elem_select"
browser.visit(url)
# This is where I'm stuck. I cannot find a way to access the second (nested) html doc
innerframe = browser.find_by_name("iframeResult").first
innerframe.find_by_name("cars")[0]
Solution:
from splinter import Browser

exe = {"executable_path": "chromedriver.exe"}
browser = Browser("chrome", **exe, headless=False)
url = "https://www.w3schools.com/html/tryit.asp?filename=tryhtml_elem_select"
browser.visit(url)

# get_iframe switches into the nested document for the duration of the block
with browser.get_iframe("iframeResult") as iframe:
    cars = iframe.find_by_name("cars")
    cars.select("saab")
I figured out that these nested documents are called iframes. Once I learned the terminology, it wasn't too hard to work out how to interact with them; searching for "nested html documents" was not returning the results I needed to find the solution.
I hope this helps someone out in the future!
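For comparison, Splinter wraps Selenium, and the same iframe hop looks like this in plain Selenium (a sketch reusing the element names from above):

from selenium import webdriver
from selenium.webdriver.support.select import Select

driver = webdriver.Chrome()
driver.get("https://www.w3schools.com/html/tryit.asp?filename=tryhtml_elem_select")

driver.switch_to.frame("iframeResult")   # enter the nested document
Select(driver.find_element_by_name("cars")).select_by_value("saab")
driver.switch_to.default_content()       # hop back out when done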

How to fill out a web form and return the data without knowing the web form id/name in Python

I am currently trying to automatically submit information into the web forms on this website: https://coinomi.com/recovery-phrase-tool.html. Unfortunately, I do not know the names of the forms and can't seem to find them in the page's source code. I have tried to fill out the forms using the Python requests module, and also by passing the parameters through the URL before scraping it, but without the form names I can't do either.
If possible, I wanted to do this with the offline version of the website at https://github.com/Coinomi/bip39/blob/master/bip39-standalone.html so that it is more secure, but I barely know how to use regular web forms with the tools I have, let alone a local copy on my computer.
I am not sure exactly what you are looking for. However, here is some code that uses Selenium to fill in parts of the form you mention.
from selenium import webdriver
from selenium.webdriver.support.select import Select

browser = webdriver.Chrome('C:\\Users...\\chromedriver.exe')
browser.get('https://coinomi.com/recovery-phrase-tool.html')

# Example: fill a text box
recoveryPhrase = browser.find_element_by_id('phrase')
recoveryPhrase.send_keys('your answer')

# Example: select an option from a drop-down
numberOfWords = Select(browser.find_element_by_id('strength'))
numberOfWords.select_by_visible_text('24')

# Example: click a button
generateRandomMnemonic = browser.find_element_by_xpath('/html/body/div[1]/div[1]/div/form/div[4]/div/div/span/button')
generateRandomMnemonic.click()
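Since the title also asks about returning the data, one way to read a field's contents back out (a sketch reusing the 'phrase' id from the code above) is the element's value attribute:

# After the page has updated, read the generated mnemonic back out
phrase = browser.find_element_by_id('phrase').get_attribute('value')
print(phrase)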

What is the best way to scrape this webpage? fromJSON doesn't seem to work

I have been happily scraping parts of this mobile app website using getURL and fromJSON (the RCurl and jsonlite packages in R). For example, I have been using this code, and it is fairly straightforward:
CardURL = getURL("http://m.racingpost.com/#cards-horse/horse_id=901690&race_id=639116&r_date=2015-12-07")
CardDATA = fromJSON(CardURL)
CardDATA[["tab-card"]][["runners"]]
However, when I got to this particular part of the webpage, it doesn't work the same way as the other sections:
http://m.racingpost.com/#cards-horse/horse_id=901690&race_id=639116&r_date=2015-12-07
It returns text like 'your browser isn't optimised' rather than the actual text I want to scrape. What is the best way to scrape data like this?
Thanks.