I would like to apply Empirical Orthogonal Function (EOF) analysis to my lat/long/time/temperature dataset. The first problem I face is converting my .csv data into .nc (I need to obtain a three-dimensional dataset, but I failed).
Below are my code and what I get:
import pandas as pd
import xarray

# df has already been loaded from the .csv file
new_df = df[['TIME', 'LAT', 'LONG', 'Temperat']].copy()
print("DataFrame Shape:", new_df.shape)
display(new_df.head(5))

xr = xarray.Dataset.from_dataframe(new_df)
xr.to_netcdf('test.nc')
[image of the dataset]
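A minimal sketch of how the three dimensions could be obtained, assuming each row is uniquely identified by its TIME/LAT/LONG combination (the file name below is hypothetical): setting those columns as a MultiIndex before converting makes xarray promote each index level to a dimension, instead of producing a single generic "index" dimension.

import pandas as pd
import xarray

df = pd.read_csv('temperature.csv')  # hypothetical file name with TIME, LAT, LONG, Temperat columns

# Use TIME/LAT/LONG as the index so each level becomes a dimension
indexed = df.set_index(['TIME', 'LAT', 'LONG'])

ds = xarray.Dataset.from_dataframe(indexed)
print(ds)            # Temperat should now appear as a three-dimensional variable
ds.to_netcdf('test.nc')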
I use an API to get data from the web, and I successfully get the data! I want to save these data to an Excel file, but something must be wrong because the save fails, and there is no error message either. Can anyone tell me how to fix it? Thanks! Wish you a good life!
Here is my code:
import requests
import pandas as pd

Function_code = input("Please enter the dataset code")
Year_of_start_time = input("Please enter the start year")
Month_of_start_time = input("Please enter the start month")
Year_of_end_time = input("Please enter the end year")
Month_of_end_time = input("Please enter the end month")

url = "https://nstatdb.dgbas.gov.tw/dgbasAll/webMain.aspx?sdmx/"

# Pick the dimension string that matches the chosen dataset code
Dimension = ""
if Function_code == "A11010208010":
    Dimension = "1+2.1+2+3+4..M"
elif Function_code == "A093102020":
    Dimension = ".1+2+3+4+5+6+7+8+9+10+11+12+13+14+15+16..M"
elif Function_code == "A018203010":
    Dimension = "1+2+3.1+2+3+4+5+6+7+8+9+10+11+12+13+14+15+16+17+18+19+20..M"
elif Function_code == "A093005010":
    Dimension = "1+2+3.1+2+3+4+5+6+7+8..M"

# Request the data for the chosen period
r = requests.get(url + Function_code + "/" + Dimension
                 + "&startTime=" + Year_of_start_time + "-" + Month_of_start_time
                 + "&endTime=" + Year_of_end_time + "-" + Month_of_end_time)
print(r.text)

# Parse the JSON response and write it to an Excel file
list_of_dicts = r.json()
print(type(r))
print(type(list_of_dicts))
df = pd.DataFrame(list_of_dicts)
df.to_excel('list_of_dicts.xlsx')
I want to save the data I crawled from the web into an Excel file.
By the way, these data are three-dimensional.
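Because the data are three-dimensional, the JSON that comes back is presumably nested, so a plain pd.DataFrame(list_of_dicts) can end up with dict- or list-valued cells that do not map onto a flat sheet. A minimal sketch, assuming a hypothetical nested layout, of flattening the records with pandas.json_normalize before writing them out (to_excel also needs the openpyxl package installed):

import pandas as pd

# Hypothetical nested structure: one entry per series, each holding a list of observations
list_of_dicts = [
    {"code": "A093005010", "area": "1",
     "obs": [{"time": "2020-01", "value": 3.2},
             {"time": "2020-02", "value": 3.4}]},
]

# Flatten to one row per (series, time) observation, which fits a two-dimensional sheet
flat = pd.json_normalize(list_of_dicts, record_path="obs", meta=["code", "area"])
flat.to_excel("list_of_dicts.xlsx", index=False)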
I'm trying to scrape the URLs for the individual players from this website.
I've already tried doing this with bs4, and it just returns [] every time I try to find the table, so I switched to lxml to give this a try.
from urllib.request import urlopen
from lxml import etree

url = "https://www.espn.com/soccer/team/squad/_/id/359/arsenal"

# Parse the raw HTML returned by the server
tree = etree.HTML(urlopen(url).read())

# XPath copied from the browser's developer tools
table = tree.xpath('//*[@id="fittPageContainer"]/div[2]/div[5]/div[1]/div/article/div/section/div[5]/section/table/tbody/tr/td[1]/div/table/tbody/tr[1]/td/span')
print(table)
I expect some sort of output that I could use to get the links, but the code just returns empty square brackets.
I think this is what you want.
from selenium import webdriver

driver = webdriver.Firefox(executable_path=r'C:\files\geckodriver.exe')
driver.set_page_load_timeout(30)
driver.get("https://www.espn.com/soccer/team/squad/_/id/359/arsenal")

# Grab every anchor that has an href attribute and print its target
elems = driver.find_elements_by_xpath("//a[@href]")
for elem in elems:
    print(elem.get_attribute("href"))
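The squad table is built by JavaScript, which is presumably why the static bs4/lxml approach returned an empty list, while Selenium sees the rendered page. To keep only the player pages, the collected hrefs could be filtered; assuming ESPN player URLs contain "/soccer/player/", a quick filter might look like this:

player_links = [e.get_attribute("href") for e in elems
                if "/soccer/player/" in (e.get_attribute("href") or "")]
print(player_links)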
import datetime as dt
import pandas as pd
import requests
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
from matplotlib.finance import candlestick_ohlc  # lives in the separate mpl_finance package in newer matplotlib releases
dataLink ='http://api.huobi.com/staticmarket/btc_kline_015_json.js'
r = requests.get(dataLink) # r is a response object.
quotes = pd.DataFrame.from_records(r.json()) # fetches dataset
quotes[0] = pd.to_datetime(quotes[0].str[:-3], format='%Y%m%d%H%M%S')
#Naming columns
quotes.columns = ["Date","Open","High",'Low',"Close", "Vol"]
#Converting dates column to float values
quotes['Date'] = quotes['Date'].map(mdates.date2num)
#Making plot
fig = plt.figure()
fig.autofmt_xdate()
ax1 = plt.subplot2grid((6,1), (0,0), rowspan=6, colspan=1)
#Converts raw mdate numbers to dates
ax1.xaxis_date()
plt.xlabel("Date")
print(quotes)
#Making candlestick plot
candlestick_ohlc(ax1, quotes.values, width=1, colorup='g', colordown='k',
                 alpha=0.75)
plt.show()
I'm trying to plot a candlestick chart from JSON data provided by Huobi, but I can't sort the dates out and the plot looks horrible. Can you explain, in fairly simple terms that a novice might understand, what I am doing wrong? My code is above.
Thanks in advance.
You can put the fig.autofmt_xdate() at some point after calling the candlestick function; this will make the dates look nicer.
Concerning the plot itself, you may decide to make the bars a bit smaller, width=0.01, such that they won't overlap.
You may then also decide to zoom in a bit, to actually see what's going on in the chart, either interactively, or programmatically,
ax1.set_xlim(dt.datetime(2017, 4, 17, 8), dt.datetime(2017, 4, 18, 0))
This boiled down to a question of how wide to make the candlesticks given the granularity of the data as determined by the period & length parameters of the json feed. You just have to fiddle around with the width parameter in candlestick_ohlc() until the graph looks right...
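A minimal sketch pulling those suggestions together, assuming quotes is the DataFrame built above and that candlestick_ohlc comes from the mpl_finance package (the successor of matplotlib.finance):

import datetime as dt
import matplotlib.pyplot as plt
from mpl_finance import candlestick_ohlc

fig = plt.figure()
ax1 = plt.subplot2grid((6, 1), (0, 0), rowspan=6, colspan=1)
ax1.xaxis_date()      # interpret the x values as matplotlib date numbers
plt.xlabel("Date")

# quotes.values holds Date (as mdates floats), Open, High, Low, Close, Vol
candlestick_ohlc(ax1, quotes.values, width=0.01, colorup='g', colordown='k', alpha=0.75)

fig.autofmt_xdate()   # called after plotting, so the date labels come out nicely rotated
ax1.set_xlim(dt.datetime(2017, 4, 17, 8), dt.datetime(2017, 4, 18, 0))
plt.show()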
I'm a Python beginner (working only with Python 3 so far) and I'm trying to present some code using the curses library to my classmates.
I got the code from a python/curses tutorial and it runs without problems in Python 2. In Python 3 it doesn't, and I get the error in the title.
Searching through the questions already asked here, I found several solutions, but since I'm an absolute beginner at coding, I have no idea how to apply them to my specific code.
This is the code working in Python 2:
import curses
from urllib2 import urlopen
from HTMLParser import HTMLParser
from simplejson import loads
def get_new_joke():
    joke_json = loads(urlopen('http://api.icndb.com/jokes/random').read())
    return HTMLParser().unescape(joke_json['value']['joke']).encode('utf-8')
Using the new modules in Python 3:
import curses
import json
import urllib
from html.parser import HTMLParser
def get_new_joke():
    joke_json = loads(urlopen('http://api.icndb.com/jokes/random').read())
    return HTMLParser().unescape(joke_json['value']['joke']).encode('utf-8')
Furthermore, I tried to incorporate this solution into my code:
Python 3, let json object accept bytes or let urlopen output strings
response = urllib.request.urlopen('http://api.icndb.com/jokes/random')
str_response = response.read().decode('utf-8')
obj = json.loads(str_response)
I've been trying for hours now, but it keeps telling me that "json" is not defined.
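For reference, a minimal Python 3 sketch of get_new_joke, assuming the goal is simply to fetch and unescape the joke text: the functions have to be referenced through the modules that are actually imported, and html.unescape takes over from the deprecated HTMLParser().unescape:

import json
from urllib.request import urlopen
from html import unescape

def get_new_joke():
    # Read the raw bytes, decode them to a string, then parse the JSON
    raw = urlopen('http://api.icndb.com/jokes/random').read()
    joke_json = json.loads(raw.decode('utf-8'))
    return unescape(joke_json['value']['joke'])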