I have already made predictions on the test data, but now I want to save the output of the test data in CSV format to Google Drive. Can anyone help me?
test_preds = predict_dl(test_dl, model_resnet18)
test_preds = [p.item() for p in test_preds]
test_preds
Here is the code:
from google.colab import drive
drive.mount('/content/drive')
import numpy as np
import pandas as pd
# to_csv returns None, so don't assign it back to test_preds
pd.DataFrame(test_preds).to_csv('/content/drive/My Drive/test_preds.csv', index=False)
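A minimal sketch of the full save-and-verify round trip (a local path and made-up prediction values are used for illustration; once Drive is mounted, the `/content/drive/My Drive/...` path works the same way):

```python
import pandas as pd

# hypothetical predictions standing in for the model output
test_preds = [0.91, 0.13, 0.77]

# index=False avoids writing an extra unnamed index column
pd.DataFrame(test_preds, columns=["prediction"]).to_csv("test_preds.csv", index=False)

# read the file back to confirm its contents
df = pd.read_csv("test_preds.csv")
print(df["prediction"].tolist())
```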
I am using Plotly Dash with JupyterDash. I am wondering if I can automatically open a website when the app runs, launching the dashboard after app.run_server(mode='external', debug=True, port=8050).
The reason is that I have to log in to a website to connect to the data for the dashboard.
Thanks
Dash runs on Flask in the background, so I found a similar question for Flask which can be adapted for dash similarly (credit to both responders to that question at the time of writing this answer).
Here is an example on how you can adapt it for Dash:
import os
from threading import Timer
import webbrowser

import dash
from dash import html
from dash import dcc

app = dash.Dash(__name__)
app.layout = html.Div(
    [
        dcc.DatePickerRange(id='date-range')
    ]
)

def open_browser():
    # the reloader subprocess sets WERKZEUG_RUN_MAIN, so this fires only once
    if not os.environ.get("WERKZEUG_RUN_MAIN"):
        webbrowser.open_new('http://127.0.0.1:1222/')

if __name__ == "__main__":
    Timer(1, open_browser).start()
    app.run_server(debug=True, port=1222)
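The core of the trick is that `Timer` schedules the browser callback on a background thread just before the blocking `run_server` call. A stripped-down sketch of only that scheduling pattern, with the `webbrowser` call replaced by a hypothetical flag so it can run anywhere:

```python
import time
from threading import Timer

opened = []

def open_browser():
    # stand-in for webbrowser.open_new('http://127.0.0.1:1222/')
    opened.append("http://127.0.0.1:1222/")

# schedule the callback 0.2 s in the future, then block like run_server would
Timer(0.2, open_browser).start()
time.sleep(0.5)  # simulates the blocking app.run_server(...) call
print(opened)
```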
I have a dataset which consists of URLs of images on Flickr. For deep-learning applications I need the raw images, so I need Python code to extract the raw images from the relevant Flickr pages.
A sample link
But the images are corrupted. I would appreciate it very much if anyone could help me.
When I go to the Flickr page and try to download an image manually, the description of each image shows this sentence:
The owner has disabled downloading of their photos
I also tried the code provided in similar topics in Python, but it didn't work; other errors occurred.
import os
import requests

df4.pid = df4.pid.astype(str)
df4.index = range(len(df4.index))
directory = "E://MsSoftware/972/thesis/data set/data"

for i in range(len(df4)):
    r = requests.get(df4['url'][i], stream=True)
    # os.path.join avoids the missing path separator in directory + pid
    with open(os.path.join(directory, df4['pid'][i] + ".jpg"), 'wb') as f:
        f.write(r.content)
I expect to download the raw images of flickr but the actual output is the corrupted images that couldn't open.
There's nothing wrong with your code.
This, for example, works perfectly on macOS 10.13:
import os
import requests

def download(url):
    r = requests.get(url, stream=True)
    with open(os.path.basename(url), 'wb') as f:
        f.write(r.content)

download('https://live.staticflickr.com/1076/1378219626_9bc9a789fa.jpg')
It produces this result:
$ F=Desktop/1378219626_9bc9a789fa.jpg; identify $F; shasum -a 224 < $F
Desktop/1378219626_9bc9a789fa.jpg JPEG 360x262 360x262+0+0 8-bit sRGB 43022B 0.000u 0:00.000
b30521e5ae379eda34d1cad25f4eec5791b2cbc059f03d89dc1ac1b3 -
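One common cause of "corrupted" downloads is the server returning an HTML error page (for instance, for download-disabled photos) that then gets saved with a `.jpg` extension. A quick sanity check is to look at the file's magic bytes before writing it to disk; this sketch validates the payload instead of trusting the URL (the byte strings here are stand-ins for `r.content`):

```python
def looks_like_jpeg(data: bytes) -> bool:
    # every JPEG file starts with the SOI marker FF D8
    return data[:2] == b"\xff\xd8"

# stand-ins: a real payload would come from requests.get(url).content
good = b"\xff\xd8\xff\xe0" + b"\x00" * 10
bad = b"<html><body>The owner has disabled downloading</body></html>"

print(looks_like_jpeg(good))  # True
print(looks_like_jpeg(bad))   # False
```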
Can someone please show me how to extract only the text in the red square?
I have been fiddling around with Python and tried to extract it, with no success.
I am writing a script that asks you to enter an address, then fires up Firefox (or Chrome), goes to the Google website, and searches for the travel time and distance from an address already saved in the script. I just need the text in the red square to be displayed as plain text in the command screen.
Any help will be greatly appreciated, so far what I have tried is below, I just don't know how to access the element.
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
import time
print("Please type in address")
address = input()
driver = webdriver.Firefox()
url = r"https://www.google.com"
driver.get(url)
ara = driver.find_element_by_name("q")
ara.send_keys("Navigate to "+ address + " "+"from terror st keilor park" + Keys.RETURN)
x = driver.find_element_by_xpath("//div[@id='exp0']/div[1]/div/div[@class='BbbuR uc9Qxb uE1RRc']")
print(x.text)
Use WebDriverWait to wait for the element to be clickable, with the following XPath.
element = WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, '//div[@data-rre-id="exp0"]')))
print(element.text)
To execute the above code you need the following imports.
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
Look into using the Google Maps API to directly pull the data from Google in Python, without having to open a browser, take an image, and then process that image to read the text.
Or, this isn't recommended but may work, send a request through python code to retrieve the web page and parse the response.
Google "python requests examples" to learn how to make web requests in python code.
Then google "python parse html" and learn to parse web pages with code to extract the information you're looking for.
There are plenty of ways to get the information you're looking for, but the easiest sure won't be by using optical character recognition on an image. But if you're dead set on that, google "python optical character recognition".
I hope this helps :)
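For the requests-plus-parsing route suggested above, the standard library's `html.parser` is enough for simple extraction. A sketch pulling the text out of a hypothetical result `div` (the class name and page snippet are made up; a real page would need its actual markup inspected first):

```python
from html.parser import HTMLParser

class DivTextExtractor(HTMLParser):
    """Collects text found inside <div class='result'> elements."""
    def __init__(self):
        super().__init__()
        self.capturing = False
        self.texts = []

    def handle_starttag(self, tag, attrs):
        if tag == "div" and ("class", "result") in attrs:
            self.capturing = True

    def handle_endtag(self, tag):
        if tag == "div":
            self.capturing = False

    def handle_data(self, data):
        if self.capturing and data.strip():
            self.texts.append(data.strip())

# stand-in for a fetched page; a real one would come from requests.get(url).text
page = "<html><body><div class='result'>25 min (18.3 km)</div></body></html>"
parser = DivTextExtractor()
parser.feed(page)
print(parser.texts)
```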
I have used bokeh to generate 400 graphs and saved them into 400 html-files (file_1.html ... file_400.html) on my local drive of my Mac.
An example of the codes that I used to generate a graph and save it is below
import numpy as np
from bokeh.plotting import figure, output_file, save
p = figure(plot_width=400, plot_height=400)
x = np.arange(1, 1000) # all 400 graphs have the same x
y1 = np.arange(1, 1000)*2 # different file can have different y
p.line(x, y1, line_width=2)
output_file('file_1.html')
save(p)
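If regenerating the files is an option, each view can be pre-zoomed at save time: compute the x-span covering the last 100 points and pass it as `x_range` when constructing the figure, e.g. `figure(plot_width=400, plot_height=400, x_range=(start, end))`. A sketch of just the window computation, assuming (as stated) that every graph shares the same `x`:

```python
import numpy as np

x = np.arange(1, 1000)  # the same x used in the plotting code

# bounds of the window covering the last 100 points
start, end = int(x[-100]), int(x[-1])
print(start, end)

# these bounds would then be passed to bokeh:
# p = figure(plot_width=400, plot_height=400, x_range=(start, end))
```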
I need to view the 400 html-files one by one, and I am interested only in a zoomed-in view of each graph, meaning the last 100 points of each graph. Note that the curve in each graph has to be viewed by me (due to my expertise), so I cannot use things like artificial intelligence to view the graphs for me.
What I can do now, is:
open the folder containing these 400 html-files
double click one file then it will be opened with safari web-browser
click the zoom-in button defined by bokeh
find the area of the last 100 points and drag a rectangle by mouse to zoom-in
close this file
repeat the above 5 steps another 399 times.
This approach is very time-consuming and boring.
Do you have better ways to go through all these files?
One preferred feature is that I can open them all in a window, they are automatically zoomed-in, and I just need to hit the button of left-arrow and right-arrow on my keyboard to navigate through the graphs.
Looking forward to your help and thanks!
This actually seems like a perfect use case for a little Bokeh server application you can run locally. You can put the code in a file app.py then run bokeh serve --show app.py at the command line.
import numpy as np

from bokeh.io import curdoc
from bokeh.models import Button, ColumnDataSource, TextInput
from bokeh.layouts import widgetbox, row
from bokeh.plotting import figure

current = 0

x = np.linspace(0, 20, 500)
y = np.sin(x)
source = ColumnDataSource(data=dict(x=x, y=y))

plot = figure(x_range=(10, 20), title="Plot 0")
plot.line('x', 'y', source=source)

def update_data(i):
    global current
    current = i
    # compute new data or load from file, etc
    source.data = dict(x=x, y=np.sin(x*(i+1)))
    plot.title.text = "Plot %d" % i

def update_range(attr, old, new):
    plot.x_range.start = float(start.value)
    plot.x_range.end = float(end.value)

start = TextInput(title="start", value="10")
start.on_change('value', update_range)

end = TextInput(title="end", value="20")
end.on_change('value', update_range)

next = Button(label="next")
next.on_click(lambda: update_data(current+1))

prev = Button(label="prev")
prev.on_click(lambda: update_data(current-1))

curdoc().add_root(row(widgetbox(start, end, next, prev), plot))
This could be improved with some error handling and maybe some additional bells and whistles, but it is hopefully demonstrative. It yields an interactive app with next/prev buttons and editable range bounds.
Alright, let's see how we can do this. My first thought is, this could be accomplished through selenium. I'm going to assume that you haven't used it before. In short, it's a way to programmatically do things with a browser.
Let's get started with that! Install the python library
pip install selenium
You'll also need to install geckodriver (we'll use firefox in this example). If you're on osx you can install that with brew.
brew install geckodriver
Then we can start writing our script to open 400 tabs! It'll open all the figures that you have locally. I'll leave it up to you to figure out how to zoom; see the Selenium documentation for details.
(the script uses python 3, and pathlib only exists in python 3)
from pathlib import Path

from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.action_chains import ActionChains

html_path = Path.cwd()

browser = webdriver.Firefox()

for no in range(1, 401):
    # Ctrl+T opens a new tab
    (ActionChains(browser).key_down(Keys.CONTROL)
                          .send_keys('t')
                          .key_up(Keys.CONTROL)
                          .perform())
    file_path = html_path / f'file_{no}.html'
    browser.get('file://' + str(file_path))
To access and display Google Maps I used the following code:
import pylab
from io import BytesIO
from PIL import Image
from urllib.request import urlopen

url = "http://maps.googleapis.com/maps/api/staticmap?center=12.955232,77.579923&size=600x600&zoom=17&sensor=false"
buffer = BytesIO(urlopen(url).read())
image = Image.open(buffer)
pylab.imshow(image)
pylab.show()
but I am not able to find a way to add the traffic layer to this image.
You used to be able to load traffic into the static map by adding &layer=t.
Since the API version v2 came out, this property has been removed.
For now, there is no alternative, other than using another map provider.
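For reference, the removed `layer=t` switch was just another query-string field, so it could be appended programmatically. This sketch only shows the URL construction (the current Static Maps API no longer honors the parameter, as noted above):

```python
from urllib.parse import urlencode

params = {
    "center": "12.955232,77.579923",
    "size": "600x600",
    "zoom": "17",
    "sensor": "false",
    "layer": "t",  # the old traffic-layer switch, since removed from the API
}
# urlencode percent-escapes values (e.g. the comma in "center") for us
url = "http://maps.googleapis.com/maps/api/staticmap?" + urlencode(params)
print(url)
```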