how to automatically open a website when launching the dash? - plotly-dash

I am using plotly-dash with JupyterDash. I am wondering if I can automatically open a website when the app is run, and then launch the dashboard after app.run_server(mode='external', debug=True, port=8050).
The reason is that I have to log in to a website to connect to the data for the dashboard.
Thanks

Dash runs on Flask in the background, so I found a similar question for Flask that can be adapted for Dash (credit to both responders to that question at the time of writing this answer).
Here is an example of how you can adapt it for Dash:
import os
import webbrowser
from threading import Timer

import dash
from dash import dcc, html

app = dash.Dash(__name__)
app.layout = html.Div(
    [
        dcc.DatePickerRange(id='date-range')
    ]
)

def open_browser():
    # With debug=True the reloader runs the script twice;
    # this guard makes sure the browser is opened only once
    if not os.environ.get("WERKZEUG_RUN_MAIN"):
        webbrowser.open_new('http://127.0.0.1:1222/')

if __name__ == "__main__":
    Timer(1, open_browser).start()
    app.run_server(debug=True, port=1222)
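Since the stated reason is needing to log in to a data site first, the same Timer-plus-webbrowser pattern can open the login page and then the dashboard together. This is a sketch; the login URL is a placeholder for your data provider's page, and the opener parameter only exists to make the function easy to test:

```python
from threading import Timer
import webbrowser

LOGIN_URL = "https://example.com/login"   # placeholder: your data provider's login page
DASH_URL = "http://127.0.0.1:8050/"

def open_pages(opener=webbrowser.open_new_tab):
    """Open the login page first, then the dashboard, each in its own tab."""
    opened = []
    for url in (LOGIN_URL, DASH_URL):
        opener(url)
        opened.append(url)
    return opened

# Schedule it just before app.run_server(...):
# Timer(1, open_pages).start()
```

You still have to log in manually in the opened tab; this just saves you the navigation step.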

Related

Can we run appium test scripts through flask server on locally connected device

I am trying to run my Appium scripts from a button click in an HTML template, but I have searched everywhere and found no solution.
My HTML template is inside a templates folder in the Flask project directory. I take a path as input and want to pass that string to my Appium script, so that as soon as I click the Submit button on my template, it launches the test script on my locally connected phone with the path as a parameter.
Any kind of help will be appreciated.
Regards
I tried adding my test script's functions to the Flask route, but my Python script has multiple functions, and I would have to create a route for each one. I was expecting to run the whole Python script at once on the default Flask route ('/').
Following is my code for the flask_server.py file. As of now I am just getting the parameter and showing it in the next route, but instead I want to run my Appium script on the device on submit.
from flask import Flask, redirect, render_template, request, url_for
from Instagram.instagram_android_automation import InstagramAndroidAutomation

app = Flask(__name__)

@app.route('/dashboard/<name>')
def dashboard(name):
    return 'welcome %s' % name

@app.route('/login', methods=['POST', 'GET'])
def login():
    if request.method == 'POST':
        user = request.form['name']
        return redirect(url_for('dashboard', name=user))
    else:
        user = request.args.get('name')
        return render_template('login.html')

if __name__ == '__main__':
    app.run(debug=True)
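One way to avoid creating a route per function is to run the whole script as a child process from a single route. This is only a sketch: the script path and the convention of passing the user's path as a command-line argument are assumptions about how the Appium script is invoked.

```python
import subprocess
import sys

def run_automation(script_path, user_path):
    """Run a whole Python script in a child process, passing the
    user-supplied path as a command-line argument."""
    result = subprocess.run(
        [sys.executable, script_path, user_path],
        capture_output=True, text=True, timeout=300,
    )
    return result.returncode, result.stdout
```

In login(), after reading request.form['name'], you could call run_automation('Instagram/instagram_android_automation.py', user) and redirect once it finishes, provided the script is written to read its path from sys.argv.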

Can anyone show me how I can just extract and display the text in the image on Python

Can someone please show me how to extract only the text in the red square?
I have been fiddling around with Python and tried to extract it, with no success.
I am writing a script that asks you to enter an address, then fires up Firefox (or Chrome), goes to the Google website, and searches for the travel time and distance from the address already saved in the Python script. I just need the text in the red square to be displayed as plain text on the command screen.
Any help will be greatly appreciated. What I have tried so far is below; I just don't know how to access the element.
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
import time

print("Please type in address")
address = input()
driver = webdriver.Firefox()
url = r"https://www.google.com"
driver.get(url)
ara = driver.find_element_by_name("q")
ara.send_keys("Navigate to " + address + " from terror st keilor park" + Keys.RETURN)
x = driver.find_element_by_xpath("//div[@id='exp0']/div[1]/div/div[@class='BbbuR uc9Qxb uE1RRc']")
print(x.text)
Use WebDriverWait to wait for the element to be clickable, using the following XPath:
element = WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, '//div[@data-rre-id="exp0"]')))
print(element.text)
To execute the above code you need the following imports:
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
Look into using the Google Maps API to pull the data directly from Google in Python, without having to open a browser, take an image, and then process that image to read the text.
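For the Maps-API route, a request against Google's Distance Matrix web service can be built with the standard library alone. This is a sketch: the API key is a placeholder you have to obtain yourself, and the addresses are just examples.

```python
from urllib.parse import urlencode

def distance_matrix_url(origin, destination, api_key):
    """Build a Distance Matrix API request URL; fetching it returns
    travel time and distance as JSON, no browser involved."""
    base = "https://maps.googleapis.com/maps/api/distancematrix/json"
    query = urlencode({
        "origins": origin,
        "destinations": destination,
        "key": api_key,
    })
    return base + "?" + query

print(distance_matrix_url("terror st keilor park", "1 Example St", "YOUR_API_KEY"))
```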
Or, though this isn't recommended, you could send a request from Python code to retrieve the web page and parse the response.
Google "python requests examples" to learn how to make web requests in Python code.
Then google "python parse html" and learn to parse web pages with code to extract the information you're looking for.
There are plenty of ways to get the information you're looking for, but the easiest surely won't be using optical character recognition on an image. But if you're dead set on that, google "python optical character recognition".
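The request-then-parse route can be sketched with the standard library's HTMLParser. The HTML string below is a stand-in for a real response body, and the data-rre-id attribute matches the one used in the WebDriverWait answer above:

```python
from html.parser import HTMLParser

class DivTextExtractor(HTMLParser):
    """Collect the text inside a <div> carrying a given attribute value."""
    def __init__(self, attr, value):
        super().__init__()
        self.attr, self.value = attr, value
        self.depth = 0            # > 0 while inside the matching div
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if self.depth:
            if tag == "div":
                self.depth += 1
        elif tag == "div" and (self.attr, self.value) in attrs:
            self.depth = 1        # entered the div we are looking for

    def handle_endtag(self, tag):
        if self.depth and tag == "div":
            self.depth -= 1

    def handle_data(self, data):
        if self.depth and data.strip():
            self.chunks.append(data.strip())

html = '<div data-rre-id="exp0"><div>12 min</div><div>5.4 km</div></div>'
parser = DivTextExtractor("data-rre-id", "exp0")
parser.feed(html)
print(" ".join(parser.chunks))   # → 12 min 5.4 km
```

In practice the real page is JavaScript-rendered, so a plain request may not contain this markup at all; that is why the Selenium answers above go through a browser.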
I hope this helps :)

How to view hundreds of local html files efficiently by safari web-browser and Python3

I have used bokeh to generate 400 graphs and saved them into 400 html-files (file_1.html ... file_400.html) on the local drive of my Mac.
An example of the code I used to generate a graph and save it is below:
import numpy as np
from bokeh.plotting import figure, output_file, save
p = figure(plot_width=400, plot_height=400)
x = np.arange(1, 1000) # all 400 graphs have the same x
y1 = np.arange(1, 1000)*2 # different file can have different y
p.line(x, y1, line_width=2)
output_file('file_1.html')
save(p)
I need to view the 400 html-files one by one, and I am interested only in a zoomed-in view of each graph, meaning the last 100 points. Note that the curve in each graph has to be viewed by me personally (it requires my expertise), so I cannot use something like artificial intelligence to view the graphs for me.
What I can do now is:
1. open the folder containing these 400 html-files
2. double click one file so that it opens in the Safari web browser
3. click the zoom-in button provided by bokeh
4. find the area of the last 100 points and drag a rectangle with the mouse to zoom in
5. close this file
Then I repeat the above 5 steps another 399 times.
This approach is very time-consuming and boring.
Do you have better ways to go through all these files?
One preferred feature would be to open them all in one window, automatically zoomed in, so that I just need to hit the left-arrow and right-arrow keys on my keyboard to navigate through the graphs.
Looking forward to your help, and thanks!
This actually seems like a perfect use case for a little Bokeh server application you can run locally. You can put the code in a file app.py then run bokeh serve --show app.py at the command line.
import numpy as np
from bokeh.io import curdoc
from bokeh.models import Button, ColumnDataSource, TextInput
from bokeh.layouts import widgetbox, row
from bokeh.plotting import figure

current = 0
x = np.linspace(0, 20, 500)
y = np.sin(x)
source = ColumnDataSource(data=dict(x=x, y=y))
plot = figure(x_range=(10, 20), title="Plot 0")
plot.line('x', 'y', source=source)

def update_data(i):
    global current
    current = i
    # compute new data or load from file, etc
    source.data = dict(x=x, y=np.sin(x * (i + 1)))
    plot.title.text = "Plot %d" % i

def update_range(attr, old, new):
    plot.x_range.start = float(start.value)
    plot.x_range.end = float(end.value)

start = TextInput(title="start", value="10")
start.on_change('value', update_range)
end = TextInput(title="end", value="20")
end.on_change('value', update_range)

next = Button(label="next")
next.on_click(lambda: update_data(current + 1))
prev = Button(label="prev")
prev.on_click(lambda: update_data(current - 1))

curdoc().add_root(row(widgetbox(start, end, next, prev), plot))
This could be improved with some error handling and maybe some additional bells and whistles, but it is hopefully demonstrative. It yields an interactive app with next/prev buttons and editable range bounds.
Alright, let's see how we can do this. My first thought is that this could be accomplished through selenium. I'm going to assume that you haven't used it before. In short, it's a way to programmatically do things with a browser.
Let's get started with that! Install the python library
pip install selenium
You'll also need to install geckodriver (we'll use firefox in this example). If you're on osx you can install that with brew.
brew install geckodriver
Then we can start writing our script to open 400 tabs! It'll open all the figures that you have locally; I'll leave figuring out how to zoom up to you. The full documentation for selenium is available online.
(The script uses Python 3, as pathlib only exists in Python 3.)
from pathlib import Path
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.action_chains import ActionChains

html_path = Path.cwd()
browser = webdriver.Firefox()
for no in range(1, 401):
    # Open a new tab (Ctrl+T)
    (ActionChains(browser).key_down(Keys.CONTROL)
                          .send_keys('t')
                          .key_up(Keys.CONTROL)
                          .perform())
    file_path = html_path / f'file_{no}.html'
    browser.get('file://' + str(file_path))

Not able to select links from a module on a website using BeautifulSoup

I have built a scraper to extract links from a company's website (I have permission). However, when I add the URL where the jobs are posted, I am only able to retrieve some of the links. It seems that the jobs are stored in some kind of module that my scraper can't access.
html parbase section is the HTML name of the module I can't seem to access.
Question
Why is the scraper not able to pull the URLs for the job posts from the link I have provided below?
LINK TO JOB POSTINGS HERE: https://www.pwc.dk/da/karriere/ledige-stillinger.html
Code for scraper
import requests
from bs4 import BeautifulSoup

url = "http://www.pwc.dk/da/karriere/ledige-stillinger.html"
r = requests.get(url)
soup = BeautifulSoup(r.content, "html.parser")
links = soup.find_all("a")
for link in links:
    print("<a href='%s'>%s</a>" % (link.get("href"), link.text))
As the webpage is a javascript-heavy one, you need to use selenium to gatecrash. Install selenium and give this a try:
from selenium import webdriver
from bs4 import BeautifulSoup

driver = webdriver.Chrome()
driver.get("https://www.pwc.dk/da/karriere/ledige-stillinger.html")
soup = BeautifulSoup(driver.page_source, "lxml")
driver.quit()

for item in soup.select(".vbtitle a"):
    print(item.get("href"))

How to run a python script from html

I have been trying to run a Python script (rainbow.py) from an HTML button. I have no idea how to do this, and all the answers I have found make no sense to me. All I want is a simple explanation of how to run a Python script from a regular HTML page.
I am trying to do this on a Raspberry Pi.
You can make it through Flask, a Python Web Microframework:
HTML (index.html):
<a href="/something">Your button</a>
Python (app.py):
from flask import Flask, render_template

app = Flask(__name__)

@app.route('/')
def index():
    # render your html template
    return render_template('index.html')

@app.route('/something')
def do_something():
    # when the button is clicked,
    # the code below will be executed
    print('do something here')
    ...
    return 'done'  # return a response so Flask doesn't raise an error

if __name__ == '__main__':
    app.run()
Start the server:
$ python app.py
Then go to http://127.0.0.1:5000.
File structure:
templates\
    index.html
app.py
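To have the /something route actually run rainbow.py rather than just print, one option is to launch it as a child process. This is a sketch, assuming rainbow.py sits next to app.py; the route name is just an example:

```python
import subprocess
import sys
from flask import Flask

app = Flask(__name__)

@app.route('/run-rainbow')
def run_rainbow():
    # Launch rainbow.py with the same interpreter and return its output
    result = subprocess.run([sys.executable, "rainbow.py"],
                            capture_output=True, text=True)
    return "<pre>%s</pre>" % result.stdout

# app.run(debug=True)  # start the dev server when running app.py directly
```

Point the button's href at /run-rainbow. Note the request blocks until the script finishes, so long-running scripts would need a background thread or task queue.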