I am new to Spark and have a question about reading CSV files.
I am trying to read 2 CSV files into 2 separate DataFrames and take() 5 rows from each. However, only the last DataFrame's action is printed. Am I missing something? Why was the first DataFrame's action not printed?
# Read first CSV
file_location1 = "/FileStore/tables/airports.csv"
file_type1 = "csv"
# CSV options
infer_schema1 = "true"
first_row_is_header1 = "false"
delimiter1 = ","
# Load File 1
df1 = spark.read.format(file_type1) \
.option("inferSchema", infer_schema1) \
.option("header", first_row_is_header1) \
.option("sep", delimiter1) \
.load(file_location1)
df1.take(5)
# Read second CSV
file_location2 = "/FileStore/tables/Report.csv"
file_type2 = "csv"
# CSV options
infer_schema2 = "true"
first_row_is_header2 = "true"
delimiter2 = ","
# Load File 2
df2 = spark.read.format(file_type2) \
.option("inferSchema", infer_schema2) \
.option("header", first_row_is_header2) \
.option("sep", delimiter2) \
.load(file_location2)
df2.take(5)
Output: only the second DataFrame's output can be seen:
(https://i.stack.imgur.com/bO7GO.png)
df1:pyspark.sql.dataframe.DataFrame
_c0:integer
_c1:string
_c2:string
_c3:string
_c4:string
_c5:string
_c6:double
_c7:double
_c8:integer
_c9:string
_c10:string
_c11:string
_c12:string
_c13:string
df2:pyspark.sql.dataframe.DataFrame = [Parcel(s): string, Building Name: string ... 100 more fields]
Out[1]: [Row(Parcel(s)='0022/012', Building Name='580 NORTH POINT ST', Building Address='580 NORTH POINT ST', Postal Code=94133, Full.Address='POINT (-122.416746 37.806186)', Floor Area=24022, Property Type='Commercial', Property Type - Self Selected='Hotel', PIM Link='http://propertymap.sfplanning.org/?&search=0022/012', Year Built=1900, Energy Audit Due Date=datetime.datetime(2013, 4, 1, 0, 0), Energy Audit Status='Did Not Comply', Benchmark 2018 Status='Violation - Did Not Report', 2018 Reason for Exemption=None, Benchmark 2017 Status='Violation - Did Not Report', 2017 Reason for Exemption=None, Benchmark 2016 Status='Violation - Did Not Report', 2016 Reason for Exemption=None, Benchmark 2015 Status='Violation - Did Not Report', 2015 Reason for Exemption=None, Benchmark 2014 Status='Violation - Did Not Report', 2014 Reason for Exemption=None, Benchmark 2013 Status='Violation - Did Not Report', 2013 Reason for Exemption=None, Benchmark 2012 Status='Violation - Did Not Report', 2012 Reason for Exemption=None, Benchmark 2011 Status='Exempt', 2011 Reason for Exemption='SqFt Not Subject This Year', Benchmark 2010 Status='Exempt', 2010 Reason for Exemption='SqFt Not Subject This Year', 2018 ENERGY STAR Score=None, 2018 Site EUI (kBtu/ft2)=None, 2018 Source EUI (kBtu/ft2)=None, 2018 Percent Better than National Median Site EUI=None, 2018 Percent Better than National Median Source EUI=None, 2018 Total GHG Emissions (Metric Tons CO2e)=None, 2018 Total GHG Emissions Intensity (kgCO2e/ft2)=None, 2018 Weather Normalized Site EUI (kBtu/ft2)=None, 2018 Weather Normalized
Why is this happening?
The take operation does not print its result to your standard output.
Spark operations are lazy, so transformations are only evaluated once you ask for a final result (for example, printing the result to the screen or saving it to a file).
In your last line the notebook assumes you want to see the result, so it displays the value of the final expression: df2.take(5) is evaluated and five rows are read from the file. The earlier df1.take(5) also runs, but its return value is never displayed; for df1 you only see the inferred schema.
Try using show instead, e.g. df1.show(n=5); it prints the rows to standard output, so the preview appears no matter where the call sits in the cell.
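As a minimal sketch (reusing the df1 and df2 readers defined above), forcing both previews to appear from a single cell could look like this:
# show() writes the rows to stdout, so both previews are visible even in one cell
df1.show(5)
df2.show(5)
# in a Databricks notebook, display(df1) is another option and renders a richer table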
I think this is how Databricks is supposed to work; practically every notebook environment behaves this way and displays only the result of the last expression in a cell. So, if you want to display the results of both DataFrames, you should put them in two different cells. You have written the complete code in a single cell, which is not how a notebook is meant to be used: you write a few lines, verify that they run successfully, and then add more. So, to try it out, put the following in two separate cells and you will see results for both.
# Read first CSV
file_location1 = "/FileStore/tables/airports.csv"
file_type1 = "csv"
# CSV options
infer_schema1 = "true"
first_row_is_header1 = "false"
delimiter1 = ","
# Load File 1
df1 = spark.read.format(file_type1) \
.option("inferSchema", infer_schema1) \
.option("header", first_row_is_header1) \
.option("sep", delimiter1) \
.load(file_location1)
df1.take(5)
# Read second CSV
file_location2 = "/FileStore/tables/Report.csv"
file_type2 = "csv"
# CSV options
infer_schema2 = "true"
first_row_is_header2 = "true"
delimiter2 = ","
# Load File 2
df2 = spark.read.format(file_type2) \
.option("inferSchema", infer_schema2) \
.option("header", first_row_is_header2) \
.option("sep", delimiter2) \
.load(file_location2)
df2.take(5)
If you want to print it, you will have to use print() explicitly.
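For example, a minimal sketch of the end of the single-cell version with explicit prints (same DataFrames as above):
print(df1.take(5))  # prints the list of Row objects from the first DataFrame
print(df2.take(5))  # prints the list of Row objects from the second DataFrame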
I hope this helps.
Related
I am trying to get google finance JSON data into a dataframe.
I tried:
library(jsonlite)
dat1 <- fromJSON("http://www.google.com/finance/info?q=NSE:%20AAPL,MSFT,TSLA,AMZN,IBM")
dat1
However I get an error:
Error in feed_push_parser(readBin(con, raw(), n), reset = TRUE) :
parse error: trailing garbage
Thank you for any help.
I could not replicate your error with fromJSON due to proxy issues on my side, but the following works using httr:
require(jsonlite)
require(httr)
#Set your proxy setting if needed
#set_config(use_proxy(url='hostname',port= port,username="",password=""))
url.name = "http://www.google.com/finance/info?q=NSE:%20AAPL,MSFT,TSLA,AMZN,IBM"
url.get = GET(url.name)
#parsing the content as json results in similar error as you encountered
#url.content = content(url.get,type="application/json")
#Error in parseJSON(txt) : parse error: trailing garbage
# " : "0.57" ,"yld" : "2.46" } ,{ "id": "358464" ,"t" : "MSFT"
# (right here) ------^
#read content as html text
url.content = content(url.get, as="text")
#remove html tags
clean.text = gsub("<.*?>", "", url.content)
#remove residual text
clean.text = gsub("\\n|\\//","",clean.text)
DF = fromJSON(clean.text)
head(DF[,1:10],5)
# id t e l l_fix l_cur s ltt lt lt_dts
#1 22144 AAPL NASDAQ 92.51 92.51 92.51 1 4:00PM EDT May 11, 4:00PM EDT 2016-05-11T16:00:02Z
#2 358464 MSFT NASDAQ 51.05 51.05 51.05 1 4:00PM EDT May 11, 4:00PM EDT 2016-05-11T16:00:02Z
#3 12607212 TSLA NASDAQ 208.96 208.96 208.96 1 4:00PM EDT May 11, 4:00PM EDT 2016-05-11T16:00:02Z
#4 660463 AMZN NASDAQ 713.23 713.23 713.23 1 4:00PM EDT May 11, 4:00PM EDT 2016-05-11T16:00:02Z
#5 18241 IBM NYSE 148.95 148.95 148.95 2 6:59PM EDT May 11, 6:59PM EDT 2016-05-11T18:59:12Z
I got the below code from here. Let me know if this helps. On a side note, I would also recommend Netfonds. Netfonds is the only source I've found that provides intraday tick-level data for both historical prices and the open book. I posted some additional links below for pulling the Netfonds data if you're interested.
http://www.blackarbs.com/blog/3/22/2015/how-to-get-free-intraday-stock-data-from-netfonds
http://www.onestepremoved.com/free-stock-data/
import urllib
from datetime import date, datetime
""" googlefinance
This module provides a Python API for retrieving stock data from Google Finance.
"""
_month_dict = {
'Jan': 1,
'Feb': 2,
'Mar': 3,
'Apr': 4,
'May': 5,
'Jun': 6,
'Jul': 7,
'Aug': 8,
'Sep': 9,
'Oct': 10,
'Nov': 11,
'Dec': 12}
# Google doesn't like Python's user agent...
class FirefoxOpener(urllib.FancyURLopener):
    version = 'Mozilla/5.0 (X11; U; Linux i686) Gecko/20071127 Firefox/2.0.0.11'

def __request(symbol):
    url = 'http://google.com/finance/historical?q=%s&output=csv' % symbol
    opener = FirefoxOpener()
    return opener.open(url).read().strip().strip('"')

def get_historical_prices(symbol, start_date=None, end_date=None):
    """
    Get historical prices for the given ticker symbol.
    Returns a nested list. Fields are Date, Open, High, Low, Close, Volume.
    """
    price_data = [data.split(',') for data in __request(symbol).split('\n')[1:]]
    for quote in price_data:
        quote[0] = _format_date(quote[0])
    return price_data

def _format_date(datestr):
    """ Change datestr from google format ('20-Jul-12') to the format yahoo uses ('2012-07-20')
    """
    parts = datestr.split('-')
    day = int(parts[0])
    month = _month_dict[parts[1]]
    year = int('20' + parts[2])
    return date(year, month, day).strftime('%Y-%m-%d')
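Assuming the module above is saved as, say, googlefinance.py (note that it targets the Python 2 urllib API and the old Google Finance endpoint, which may no longer respond), a hypothetical usage sketch would be:
# 'AAPL' is just a placeholder symbol
rows = get_historical_prices('AAPL')
# each row is [Date, Open, High, Low, Close, Volume]
print(rows[:5])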
If the Google Finance endpoint returns newline-delimited JSON, the solution in R should be:
library(jsonlite)
dat1 <- stream_in(url("http://www.google.com/finance/info?q=NSE:%20AAPL,MSFT,TSLA,AMZN,IBM"))
But it seems the endpoint does not accept such requests (any more?):
HTTP status was '403 Forbidden'
I'm currently trying to construct a database of chemicals used in a university department, and their hazard classes. I then wish to output to a csv file. One step is to pull all the synonyms for the various chemicals from standard PDFs, such as this for gamma hexalactone:
sample PDF
At the moment, the code I'm using to extract the text just loses the greek characters which I need to transfer. It looks like this:
pdfReader = PyPDF2.PdfFileReader(inpathf)
txtObj = ''
for pageNum in range(0, pdfReader.numPages):
    pageObj = pdfReader.getPage(pageNum)
    txtObj += str(pageObj.extractText())
inpathf.close()
outputf.write(txtObj)
outputf.close()
return txtObj
Parameters are extracted from ~2000 PDFs and stored in a dictionary before being transferred to a csv file:
def Outfile_csv(outfile, dict1, length):
    outputfile = open((outfile) + '.csv', 'w', newline='')
    output_list = []
    outputWriter = csv.writer(outputfile)
    outputWriter.writerow(['PDF file', 'Name', 'Synonyms', 'CAS No.', 'H statements',
                           'TWA limits /ppm', 'STEL limits /ppm'])
    for r in range(0, length):
        output_list = []
        for s in range(0, 7):
            if s == 0 or s == 3:
                output_list.append(str((dict1[s][r])).encode('utf-8'))
            else:
                output_list.append(str(dict1[s][r]))
        outputWriter.writerow(output_list)
    outputfile.close()
I also can't write out to the CSV in cases where there are Greek characters - those data are simply not placed in the csv file. Many thanks for any help - a day of playing with codecs and the contents of Stack Exchange has not helped yet. I'm using Python 3.4 and Windows 8.
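For what it's worth, a minimal illustrative sketch (not the asker's code; the file name and rows are placeholders) of writing non-ASCII text to a CSV in Python 3 is to open the file with an explicit UTF-8 encoding and pass plain strings, not bytes, to the writer:
import csv

# 'synonyms.csv' is a placeholder; utf-8-sig adds a BOM so Excel on Windows detects the encoding
with open('synonyms.csv', 'w', newline='', encoding='utf-8-sig') as f:
    writer = csv.writer(f)
    writer.writerow(['Name', 'Synonyms'])
    writer.writerow(['gamma-hexalactone', 'γ-hexalactone'])  # Greek characters written as str, no .encode()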
I wanted to import a .txt file into R, but the format is really special and looks like JSON, and I don't know how to import it. Here is an example of my data:
{"datetime":"2015-07-08 09:10:00","subject":"MMM","sscore":"-0.2280","smean":"0.2593","svscore":"-0.2795","sdispersion":"0.375","svolume":"8","sbuzz":"0.6026","lastclose":"155.430000000","companyname":"3M Company"},{"datetime":"2015-07-07 09:10:00","subject":"MMM","sscore":"0.2977","smean":"0.2713","svscore":"-0.7436","sdispersion":"0.400","svolume":"5","sbuzz":"0.4895","lastclose":"155.080000000","companyname":"3M Company"},{"datetime":"2015-07-06 09:10:00","subject":"MMM","sscore":"-1.0057","smean":"0.2579","svscore":"-1.3796","sdispersion":"1.000","svolume":"1","sbuzz":"0.4531","lastclose":"155.380000000","companyname":"3M Company"}
To deal with this I used this code:
test1 <- read.csv("C:/Users/test1.txt", header=FALSE)
## Imported as 5 observations (the 5th is all empty) of 1700 variables
#(it should in fact be 40 observations of 11 variables). When I imported the
#.txt file, one line (the 5th obs) was empty, and the 4 lines of data of 11
#variables were placed next to each other.
# Get the different lines
part1=test1[1:10]
part2=test1[11:20]
part3=test1[21:30]
part4=test1[31:40]
...
## Remove the empty line (there was an empty line after each)
part1=part1[-5,]
part2=part2[-5,]
part3=part3[-5,]
...
## Rename the columns
names(part1)=c("Date Time","Subject","Sscore","Smean","Svscore","Sdispersion","Svolume","Sbuzz","Last close","Company name")
names(part2)=c("Date Time","Subject","Sscore","Smean","Svscore","Sdispersion","Svolume","Sbuzz","Last close","Company name")
names(part3)=c("Date Time","Subject","Sscore","Smean","Svscore","Sdispersion","Svolume","Sbuzz","Last close","Company name")
...
## Assemble data to have one dataset
data=rbind(part1,part2,part3,part4,part5,part6,part7,part8,part9,part10)
## Formate Date Time
times <- as.POSIXct(data$`Date Time`, format='{datetime:%Y-%m-%d %H:%M:%S')
data$`Date Time` <- times
## Keep only the Date
data$Date <- as.Date(times)
## Formate data - Remove text
data$Subject <- gsub("subject:", "", data$Subject)
data$Sscore <- gsub("sscore:", "", data$Sscore)
...
So my code works to reconstruct the data, but it is clumsy and long. I know there are better ways to do it, so if you could help me with that I would be very grateful.
There are many packages that read JSON, e.g. rjson, jsonlite, RJSONIO (they will turn up in a Google search) - just pick one and give it a go.
e.g.
library(jsonlite)
json.text <- '{"datetime":"2015-07-08 09:10:00","subject":"MMM","sscore":"-0.2280","smean":"0.2593","svscore":"-0.2795","sdispersion":"0.375","svolume":"8","sbuzz":"0.6026","lastclose":"155.430000000","companyname":"3M Company"},{"datetime":"2015-07-07 09:10:00","subject":"MMM","sscore":"0.2977","smean":"0.2713","svscore":"-0.7436","sdispersion":"0.400","svolume":"5","sbuzz":"0.4895","lastclose":"155.080000000","companyname":"3M Company"},{"datetime":"2015-07-06 09:10:00","subject":"MMM","sscore":"-1.0057","smean":"0.2579","svscore":"-1.3796","sdispersion":"1.000","svolume":"1","sbuzz":"0.4531","lastclose":"155.380000000","companyname":"3M Company"}'
x <- fromJSON(paste0('[', json.text, ']'))
datetime subject sscore smean svscore sdispersion svolume sbuzz lastclose companyname
1 2015-07-08 09:10:00 MMM -0.2280 0.2593 -0.2795 0.375 8 0.6026 155.430000000 3M Company
2 2015-07-07 09:10:00 MMM 0.2977 0.2713 -0.7436 0.400 5 0.4895 155.080000000 3M Company
3 2015-07-06 09:10:00 MMM -1.0057 0.2579 -1.3796 1.000 1 0.4531 155.380000000 3M Company
I pasted the '[' and ']' around your JSON because you have multiple JSON elements (the rows in the dataframe above), and for this to be well-formed JSON it needs to be an array, i.e. [ {...}, {...}, {...} ] rather than {...}, {...}, {...}.
I am using RMySQL to send data from R to a MySQL database. The problem is that the database does not receive any data... I am using doParallel() since I am running over 4500 iterations... could it be because I try to send the data to the database inside the pullSpread() function?
library(RMySQL)
library(doParallel)
library(stringr)
library(foreach)
makeCluster(detectCores()) # ANSWER = 4
cl <- makeCluster(4, type="SOCK") # also used PSOCK & FORK but receive the same problem
registerDoParallel(cl)
# Now use foreach() and %dopar% to pull data...
# the apply(t(stock1), 2, pullSpread) works but not "parallelized"
# I have also used clusterApply() but is unsuccessful
system.time(
foreach(a=t(stock1)) %dopar% pullSpread(a)
)
When I look in my working directory, all the files are copied successfully into .csv files as they should be, but when I check MySQL Workbench or even query the tables from R, they do not exist...
Here is the stock1 character vector and the pullSpread() function used...
# This list contains more than 4500 iterations.. so I am only posting a few
stock1<-c(
"SGMS.O","SGNL.O","SGNT.O",
"SGOC.O","SGRP.O", ...)
Important Dates needed for function:
Friday <- Sys.Date()-10
# Get Previous 5 days
Thursday <- Friday - 1
Wednesday <- Thursday -1
Tuesday <- Wednesday -1
Monday <- Tuesday -1
#Make Them readable for NetFonds
Friday <- format(Friday, "%Y%m%d")
Thursday<- format(Thursday, "%Y%m%d")
Wednesday<- format(Wednesday, "%Y%m%d")
Tuesday<- format(Tuesday, "%Y%m%d")
Monday<-format(Monday, "%Y%m%d")
Here is the pullSpread() function:
pullSpread = function (stock1){
  AAPL_FRI <- read.delim(header=TRUE, stringsAsFactor=FALSE,
                         paste(sep="",
                               "http://www.netfonds.no/quotes/posdump.php?date=",
                               Friday,"&paper=",stock1,"&csv_format=txt"))
  tryit <- try(AAPL_FRI[,c(1:7)])
  if(inherits(tryit, "try-error")){
    rm(AAPL_FRI)
  } else {
    AAPL_THURS <- read.delim(header=TRUE, stringsAsFactor=FALSE,
                             paste(sep="",
                                   "http://www.netfonds.no/quotes/posdump.php?date=",
                                   Thursday,"&paper=",stock1,"&csv_format=txt"))
    AAPL_WED <- read.delim(header=TRUE, stringsAsFactor=FALSE,
                           paste(sep="",
                                 "http://www.netfonds.no/quotes/posdump.php?date=",
                                 Wednesday,"&paper=",stock1,"&csv_format=txt"))
    AAPL_TUES <- read.delim(header=TRUE, stringsAsFactor=FALSE,
                            paste(sep="",
                                  "http://www.netfonds.no/quotes/posdump.php?date=",
                                  Tuesday,"&paper=",stock1,"&csv_format=txt"))
    AAPL_MON <- read.delim(header=TRUE, stringsAsFactor=FALSE,
                           paste(sep="",
                                 "http://www.netfonds.no/quotes/posdump.php?date=",
                                 Monday,"&paper=",stock1,"&csv_format=txt"))
    SERIES <- rbind(AAPL_MON,AAPL_TUES,AAPL_WED,AAPL_THURS,AAPL_FRI)
    #Write .CSV File
    write.csv(SERIES, paste(sep="",stock1,"_",Friday,".csv"), row.names=FALSE)
    dbWriteTable(con2, paste0("", str_sub(stock1, start = 1L, end = -3L), ""),
                 paste0("~/Desktop/R/",stock1,"_",Friday,".csv"), append=T)
  }
}
Retrieve last Friday using something like this:
Friday <- Sys.Date()
while(weekdays(Friday) != "Friday")
{
Friday <- Friday - 1
}
As a matter of good practice, when retrieving data from the internet, separate the act of downloading it from the act of processing it. That way, when the processing fails, you don't waste time and bandwidth redownloading things.
lastWeek <- format(Friday - 0:4, "%Y%m%d")
stockDatePairs <- expand.grid(Stock = stock1, Date = lastWeek)
urls <- with(
stockDatePairs,
paste0(
"http://www.netfonds.no/quotes/posdump.php?date=",
Date,
"&paper=",
Stock,
"&csv_format=txt"
)
)
for(url in urls)
{
# or whatever file name you want
download.file(url, paste0("data from ", make.names(url), ".txt"))
}
Make sure that you know which directory those files are being saved to. (Either provide an absolute path or set your working directory.)
Now try reading and rbinding those files.
If that works, then you can try doing things in parallel.
Also note that many online data services will limit the rate that you can download, unless you are paying for the service. So parallel downloads may just mean that you hit the limit quicker.
I'm trying to use PyAlgoTrade's event profiler.
However, I don't want to use data from Yahoo! Finance; I want to use my own, but I can't figure out how to parse in the CSV. It is in this format:
Timestamp Low Open Close High BTC_vol USD_vol [8] [9]
2013-11-23 00 800 860 847.666666 886.876543 853.833333 6195.334452 5248330 0
2013-11-24 00 745 847.5 815.01 860 831.255 10785.94131 8680720 0
The complete CSV is here
I want to do something like:
def main(plot):
    instruments = ["AA", "AES", "AIG"]
    feed = yahoofinance.build_feed(instruments, 2008, 2009, ".")
Then replace yahoofinance.build_feed(instruments, 2008, 2009, ".") with my CSV
I tried:
import csv
with open( 'FinexBTCDaily.csv', 'rb' ) as csvfile:
    data = csv.reader( csvfile )

def main( plot ):
    feed = data
But it throws an attribute error. Any ideas how to do this?
I suggest creating your own RowParser and Feed, which is much easier than it sounds; have a look here: yahoofeed
This also allows you to work with intraday data and to clean up the data if needed, like your timestamp.
Another possibility, of course, would be to parse your file and save it so that it looks like a Yahoo feed. In your case, you would have to adapt the columns and the timestamp.
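As a rough sketch of that second option (assuming pandas is available; the input column names are taken from the sample above, the separator is assumed to be whitespace as pasted, and the output follows the generic bar-feed format quoted later in this thread, with Adj Close simply copied from Close):
import pandas as pd

# column names assumed from the sample shown above; the last two columns are unnamed extras
cols = ['Date', 'Hour', 'Low', 'Open', 'Close', 'High', 'BTC_vol', 'USD_vol', 'col8', 'col9']
raw = pd.read_csv('FinexBTCDaily.csv', sep=r'\s+', skiprows=1, header=None, names=cols)

out = pd.DataFrame({
    'Date Time': raw['Date'] + ' 00:00:00',  # daily bars, so a fixed time-of-day is assumed
    'Open': raw['Open'],
    'High': raw['High'],
    'Low': raw['Low'],
    'Close': raw['Close'],
    'Volume': raw['BTC_vol'],
    'Adj Close': raw['Close'],               # no adjusted close available, so Close is reused
})
out.to_csv('FinexBTCDaily_generic.csv', index=False,
           columns=['Date Time', 'Open', 'High', 'Low', 'Close', 'Volume', 'Adj Close'])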
Step A: follow PyAlgoTrade doc on GenericBarFeed class
On this link see the addBarsFromCSV() in CSV section of the BarFeed class in v0.16
On this link see the addBarsFromCSV() in CSV section of the BarFeed class in v0.17
Note
- The CSV file must have the column names in the first row.
- It is ok if the Adj Close column is empty.
- When working with multiple instruments:
--- If all the instruments loaded are in the same timezone, then the timezone parameter may not be specified.
--- If any of the instruments loaded are in different timezones, then the timezone parameter should be set.
addBarsFromCSV( instrument, path, timezone = None )
Loads bars for a given instrument from a CSV formatted file. The instrument gets registered in the bar feed.
Parameters:
(string) instrument – Instrument identifier.
(string) path – The path to the CSV file.
(pytz) timezone – The timezone to use to localize bars. Check pyalgotrade.marketsession.
Next:
A BarFeed loads bars from CSV files that have the following format:
Date Time, Open, High, Low, Close, Volume, Adj Close
2013-01-01 13:59:00,13.51001,13.56,13.51,13.56789,273.88014126,13.51001
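To make Step A concrete, here is a minimal loading sketch (assuming a CSV already in the format above, saved e.g. as FinexBTCDaily_generic.csv, and a reasonably recent PyAlgoTrade where GenericBarFeed lives in pyalgotrade.barfeed.csvfeed):
from pyalgotrade.bar import Frequency
from pyalgotrade.barfeed import csvfeed

# daily bars; the instrument label and file name are placeholders
feed = csvfeed.GenericBarFeed(Frequency.DAY)
feed.addBarsFromCSV("BTC", "FinexBTCDaily_generic.csv")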
Step B: implement a documented CSV-file pre-formatting
Your CSV data will need a bit of sanitising before it can be used in PyAlgoTrade methods; however, it is doable, and you can create an easy transformer either by hand or with the powerful lambda-based converters facility of numpy.genfromtxt().
This sample code is intended for illustration purposes, to show immediately the power of converters for your own transformations, as CSV structures differ.
with open( getCsvFileNAME( ... ), "r" ) as aFH:
    numpy.genfromtxt( aFH,
                      skip_header = 1,   # Ref. pyalgotrade
                      delimiter   = ",",
                      # v v v v v v
                      # 2011.08.30,12:00,1791.20,1792.60,1787.60,1789.60,835
                      # 2011.08.30,13:00,1789.70,1794.30,1788.70,1792.60,550
                      # 2011.08.30,14:00,1792.70,1816.70,1790.20,1812.10,1222
                      # 2011.08.30,15:00,1812.20,1831.50,1811.90,1824.70,2373
                      # 2011.08.30,16:00,1824.80,1828.10,1813.70,1817.90,2215
                      converters = { 0: lambda aString: mPlotDATEs.date2num( datetime.datetime.strptime( aString, "%Y.%m.%d" ) ),  # date  -> as float ( 1.0, +++ )
                                     1: lambda aString: ( ( int( aString[0:2] ) * 60 + int( aString[3:] ) ) / 60. / 24. )          # HH:MM -> as float in < 0.0, 1.0 ), e.g. ( 15*60 + 00 ) / 60. / 24.
                                     }
                      )
You can use pyalgotrade.barfeed.addBarsFromSequence with list comprehension to feed in data from CSV row by row/bar by bar. Basically you create a bar from each row, pass OHLCV as init parameters and extra columns with additional data in a dictionary. You can try something like this (with all the required imports):
import numpy as np
import pandas as pd
from pyalgotrade.bar import BasicBar, Frequency
from pyalgotrade.barfeed import yahoofeed

data = pd.DataFrame(index=pd.date_range(start='2021-11-01', end='2021-11-05'),
                    columns=['Open', 'High', 'Low', 'Close', 'Adj Close', 'Volume',
                             'ExtraCol1', 'ExtraCol3', 'ExtraCol4', 'ExtraCol5'],
                    data=np.random.rand(5, 10))
feed = yahoofeed.Feed()
feed.addBarsFromSequence('instrumentID', data.index.map(lambda i:
    BasicBar(
        i,
        data.loc[i, 'Open'],
        data.loc[i, 'High'],
        data.loc[i, 'Low'],
        data.loc[i, 'Close'],
        data.loc[i, 'Volume'],
        data.loc[i, 'Adj Close'],
        Frequency.DAY,
        data.loc[i, 'ExtraCol1':].to_dict())
    ).values)
The input data frame was created with random values to make this example easier to reproduce, but the part where the bars are added to the feed should work the same for data frames from CSVs given that the valid column names are used.