Download a CSV zip file

I'm trying to download the following file. The code works just fine in RStudio when I run it in the console, but when I try to knit the R Markdown file (to either HTML or PDF), it gives an error. Why can't the markdown document download the zipped CSV file?
```
library(readr)  # read_csv() comes from readr

temp <- tempfile()
download.file("http://www.cms.gov/Research-Statistics-Data-and-Systems/Statistics-Trends-and-Reports/Medicare-Provider-Charge-Data/Downloads/Inpatient_Data_2013_CSV.zip", temp)
# read the named CSV straight out of the downloaded zip
temp2 <- unz(temp, "Medicare_Provider_Charge_Inpatient_DRG100_FY2013.csv")
medData <- read_csv(temp2)
```
It gives the following error (I had to remove the URLs from the error message because I don't have enough reputation points):
trying URL Quitting from lines 41-49 (medicare.Rmd) Error in
download.file("..", : cannot open URL '...' Calls: ...
withCallingHandlers -> withVisible -> eval -> eval -> download.file
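One thing worth checking: knitr runs each chunk in a fresh, non-interactive R session, so packages loaded in the console (like readr) must also be loaded in the document, and the default download method can differ from the interactive one. A hedged sketch of a more defensive chunk, where the explicit `method` and `mode` arguments are assumptions worth trying rather than a confirmed fix:
```
library(readr)  # must be loaded inside the Rmd, not just the console

temp <- tempfile(fileext = ".zip")
download.file(
  "http://www.cms.gov/Research-Statistics-Data-and-Systems/Statistics-Trends-and-Reports/Medicare-Provider-Charge-Data/Downloads/Inpatient_Data_2013_CSV.zip",
  destfile = temp,
  method = "libcurl",  # assumption: an explicit method sometimes behaves better under knitr
  mode = "wb"          # binary mode so the zip is not corrupted on Windows
)
medData <- read_csv(unz(temp, "Medicare_Provider_Charge_Inpatient_DRG100_FY2013.csv"))
unlink(temp)  # clean up the temporary file
```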


Unable to download NSE Bhavcopy zip file in R. HTTP status was '403 Forbidden'

I am trying to download a zip file in R (basically I am automating this). From the webpage below I can download the zip if I click on the zip file.
I can even download it if I right-click and save the file. However, I am not able to download it via R. So far I have tried download.file, RCurl, and the downloader package, but no success yet!
url <- "https://www.nseindia.com/content/historical/DERIVATIVES/2009/JAN/fo07JAN2009bhav.csv.zip"
temp <- tempfile(fileext = ".csv.zip")  # temporary destination file
download.file(url, temp)
> trying URL
> 'https://www.nseindia.com/content/historical/DERIVATIVES/2009/JAN/fo07JAN2009bhav.csv.zip'
> Error in download.file(url, temp) : cannot open URL
> 'https://www.nseindia.com/content/historical/DERIVATIVES/2009/JAN/fo07JAN2009bhav.csv.zip'
> In addition: Warning message: In download.file(url, temp) : cannot
> open URL
> 'https://www.nseindia.com/content/historical/DERIVATIVES/2009/JAN/fo07JAN2009bhav.csv.zip':
> HTTP status was '403 Forbidden'
Manual Download:
Open: https://www.nseindia.com/products/content/derivatives/equities/archieve_fo.htm
Select: Bhavcopy from the drop-down
Select date: 07-01-2009 (I need data from 2010-2015)
Click on the zip file, OR right-click on the zip and save it.
download.file doesn't work for the NSE Bhavcopy before 2016, hence use RSelenium.
Refer to the nser package https://cloud.r-project.org/web/packages/nser/index.html to download the F&O Bhavcopy.
library(nser)
library(RSelenium)
# Start a selenium server and browser
# For Google Chrome (Update Chrome to latest version)
driver <- rsDriver(browser = "chrome", port = 3163L, chromever = "91.0.4472.101")
remDr <- driver$client
# or for Firefox
driver <- rsDriver(browser = "firefox", port = 3799L)
# Download Equity Bhavcopy zip file
bhavfos("03012000", 2)
# Close the Browser
remDr$close()
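If driving a browser is overkill for your case, another avenue is sending browser-like request headers, since some servers return 403 to R's default user agent; whether NSE's archive accepts this for the pre-2016 files is not guaranteed. A minimal sketch with httr, where the User-Agent and Referer values are assumptions:
```
library(httr)

url <- "https://www.nseindia.com/content/historical/DERIVATIVES/2009/JAN/fo07JAN2009bhav.csv.zip"
resp <- GET(
  url,
  add_headers(
    `User-Agent` = "Mozilla/5.0",  # assumption: present ourselves as a browser
    Referer = "https://www.nseindia.com/"
  ),
  write_disk("fo07JAN2009bhav.csv.zip", overwrite = TRUE)
)
stop_for_status(resp)  # errors out if the server still answers 403
```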

LuaLaTeX using the fontspec package and luacode reading a JSON file

I've been using LaTeX for years, but I'm new to embedded Lua code (with LuaLaTeX). Below you can see a simplified example:
\begin{filecontents*}{data.json}
[
{"firstName":"Max", "lastName":"Möller"},
{"firstName":"Anna", "lastName":"Smith"}
]
\end{filecontents*}
\documentclass[11pt]{article}
\usepackage{fontspec}
%\setmainfont{Carlito}
\usepackage{tikz}
\usepackage{luacode}
\begin{document}
\begin{luacode}
require("lualibs")
local file = io.open('data.json','rb')
local jsonstring = file:read('*a')
file:close()
local jsondata = utilities.json.tolua(jsonstring)
tex.print('\\begin{tabular}{cc}')
for key, value in pairs(jsondata) do
tex.print(value["firstName"] .. ' & ' .. value["lastName"] .. '\\\\')
end
tex.print('\\hline\\end{tabular}')
\end{luacode}
\end{document}
When executing LuaLaTeX, the following error occurs:
LuaTeX error [\directlua]:6: attempt to index field 'json' (a nil value) [\directlua]:6: in main chunk. \end{luacode}
When commenting out the line \usepackage{fontspec}, the output is produced. Alternatively, the error can be avoided by commenting out utilities.json.tolua(jsonstring) and all following Lua code lines.
So the question is: how can I use both the fontspec package and JSON data without generating an error message? Apart from this I have another question: how do I enable German umlauts in the output of the Lua code (see the first "lastName" in the example: Möller)?
Ah, I'm using TeX Live 2015/Debian on Ubuntu 16.04.
Thank you,
Jerome

How do you parse JSON files that have incomplete lines?

I have a bunch of files in one directory, each with many entries like this:
{"DateTimeStamp":"2017-07-20T21:52:00.767-0400","Host":"Server","Code":"test101","use":"stats"}
I need to be able to read each file and form a data frame from the JSON entries. Sometimes the lines in a file may not be complete, and my script fails. How can I modify this script to account for incomplete lines in the files?
library(data.table)  # for rbindlist()

path <- "C:/JsonFiles"
# list.files() takes a regular expression, not a glob
filenames <- list.files(path, pattern = "Data", full.names = TRUE)
dflist <- lapply(filenames, function(i) {
  # wrap the lines in [...] so the whole file parses as one JSON array
  jsonlite::fromJSON(
    paste0("[", paste0(readLines(i), collapse = ","), "]"),
    flatten = TRUE
  )
})
mq <- rbindlist(dflist, use.names = TRUE, fill = TRUE)
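One way to make this tolerant of truncated lines is to validate each line before joining them: jsonlite::validate() returns TRUE only for complete JSON, so broken lines can be dropped (or logged). A hedged sketch along those lines; silently dropping bad lines is an assumption about the desired behaviour:
```
library(data.table)

path <- "C:/JsonFiles"
filenames <- list.files(path, pattern = "Data", full.names = TRUE)

dflist <- lapply(filenames, function(i) {
  lns <- readLines(i, warn = FALSE)
  # keep only lines that parse as complete JSON on their own
  ok <- vapply(lns, function(l) isTRUE(jsonlite::validate(l)), logical(1))
  jsonlite::fromJSON(
    paste0("[", paste0(lns[ok], collapse = ","), "]"),
    flatten = TRUE
  )
})
mq <- rbindlist(dflist, use.names = TRUE, fill = TRUE)
```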

Run R silently from command line, export results to JSON

How might I call an R script from the shell (e.g. from Node.js exec) and export results as JSON (e.g. back to Node.js)?
The R code below basically works. It reads data, fits a model, converts the parameter estimates to JSON, and prints them to stdout:
#!/usr/bin/Rscript --quiet --slave
install.packages("reshape2", repos="http://cran.rstudio.com/");  # melt() below comes from reshape2
install.packages("Hmisc", repos="http://cran.rstudio.com/");
install.packages("rjson", repos="http://cran.rstudio.com/");
library(rjson)
library(reshape2);
data = read.csv("/data/records.csv", header = TRUE, sep=",");
mylogit <- glm( y ~ x1 + x2 + x3, data=data, family="binomial");
params <- melt(mylogit$coefficients);
json <- toJSON(params);
json
Here's how I'd like to call it from Node...
var exec = require('child_process').exec;
exec('./model.R', function(err, stdout, stderr) {
var params = JSON.parse(stdout); // FAIL! Too much junk in stdout
});
Except the R process won't stop printing to stdout. I've tried --quiet --slave --silent which all help a little but not enough. Here's what's sent to stdout:
The downloaded binary packages are in
/var/folders/tq/frvmq0kx4m1gydw26pcxgm7w0000gn/T//Rtmpyk7GmN/downloaded_packages
The downloaded binary packages are in
/var/folders/tq/frvmq0kx4m1gydw26pcxgm7w0000gn/T//Rtmpyk7GmN/downloaded_packages
[1] "{\"value\":[4.04458733165933,0.253895751245782,-0.1142272181932,0.153106007464742,-0.00289013062471735,-0.00282580664375527,0.0970325223603164,-0.0906967639834928,0.117150317941983,0.046131890754108,6.48538603593323e-06,6.70646151749708e-06,-0.221173770066275,-0.232262366060079,0.163331098409235]}"
What's the best way to use R scripts on the command line?
Running R --silent --slave CMD BATCH model.R per the post below still results in a lot of extraneous text printed to model.Rout:
Run R script from command line
Those options only stop R's own system messages from printing; they won't stop other R functions from printing. Otherwise you'd stop your last line from printing, and you wouldn't get your JSON to stdout!
Those messages are coming from install.packages, so try:
install.packages(-whatever-, quiet=TRUE)
which claims to reduce the amount of output. If it reduces it to zero, job done.
If not, then you can redirect stdout with sink, or run things inside capture.output.
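For completeness, a hedged sketch of the whole script with those suggestions applied, plus one more detail visible in the captured output: auto-printing json adds the [1] prefix and escaped quotes, so the final line should use cat() instead. Moving install.packages() out of the script into a one-time setup step is an assumption about your deployment:
```
#!/usr/bin/Rscript --quiet --slave
# one-time setup, ideally run outside this script:
# install.packages(c("rjson", "reshape2"), repos = "http://cran.rstudio.com/", quiet = TRUE)

suppressPackageStartupMessages({
  library(rjson)
  library(reshape2)
})

data <- read.csv("/data/records.csv", header = TRUE, sep = ",")
mylogit <- glm(y ~ x1 + x2 + x3, data = data, family = "binomial")
params <- melt(mylogit$coefficients)

cat(toJSON(params))  # cat() writes the raw string; auto-printing would add [1] and quotes
```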

Read an Input.md file and output a .html file in Haskell

I have a question concerning some basic transformations in Haskell.
Basically, I have an input file named Input.md. It contains some markdown text that is read in my project file, and I want to write a few functions to do transformations on the text. After completing these transformations under a function called convertToHTML, I have to output the result as an .html file in the correct format.
module Main
  ( convertToHTML
  , main
  ) where

import System.Environment (getArgs)
import System.IO
import Data.Char (toLower, toUpper)

process :: String -> String
process s = head $ lines s

convertToHTML :: String -> String
convertToHTML str = do
  x <- str
  if (x == '#')
    then "<h1>"
    else return x
--convertToHTML x = map toUpper x

main = do
  args <- getArgs -- command line args
  let (infile, outfile) = (\(x:y:ys) -> (x, y)) args
  putStrLn $ "Input file: " ++ infile
  putStrLn $ "Output file: " ++ outfile
  contents <- readFile infile
  writeFile outfile $ convertToHTML contents
So,

1. How would I read through my input file and transform any line that starts with a # into an HTML tag?
2. How would I read through my input file once more and transform any WORD that is surrounded by single underscores (_word_) into another HTML tag?
3. How would I replace any character with an HTML string?

I tried using functions such as map, filter, and zipWith, but could not figure out how to iterate through the text and transform it. Please, if anybody has any suggestions. I've been working on this for 2 days straight and have a bunch of failed code to show for it.
I tried using functions such as map, filter, and zipWith, but could not figure out how to iterate through the text and transform it.
That's because they work on an appropriate collection of elements. And they don't really "iterate"; you simply have to feed them the appropriate data. Let's tackle the # problem as an example.
Our file is one giant String, and what we'd like is to have it nicely split into lines, so [String]. What could do that for us? I have no idea, so let's just search Hoogle for String -> [String].
Ah, there we go, the lines function! Its counterpart, unlines, is also going to be useful. Now we can write our line wrapper:
convertHeader :: String -> String
convertHeader [] = [] -- that prevents us from calling head on an empty line
convertHeader x = if head x == '#'
                    then "<h1>" ++ x ++ "</h1>"
                    else x
and so:
convertHeaders :: String -> String
convertHeaders = unlines . map convertHeader . lines
-- ^String ^[String] ^[String] ^String
As you can see, the function first converts the file to lines, maps convertHeader over each line, and then puts the file back together.
See it live on Ideone
Now try doing the same with words to replace your formatting patterns. As a bonus exercise, change convertHeader to count the number of #s in front of the line and output <h1>, <h2>, <h3>, and so on accordingly.