rbind fromJSON page: duplicate rowname error - json

I was trying to rbind some JSON data scraped from an API:
library(jsonlite)
pop_dat <- data.frame()
for (i in 1:3) {
  # Generate the url for each page
  url <- paste0('http://api.worldbank.org/v2/countries/all/indicators/SP.POP.TOTL?format=json&page=', i)
  # Get the json data from each page and transform it into a data frame
  dat <- as.data.frame(fromJSON(url)[2], flatten = TRUE, row.names = NULL)
  pop_dat <- rbind(pop_dat, dat)
}
However, it returns the following error:
Error in `row.names<-.data.frame`(`*tmp*`, value = value) :
duplicate 'row.names' are not allowed
In addition: Warning message:
non-unique values when setting 'row.names': ‘1’, ‘10’, ‘11’, ‘12’, ‘13’, ‘14’, ‘15’, ‘16’, ‘17’, ‘18’, ‘19’, ‘2’, ‘20’, ‘21’, ‘22’, ‘23’, ‘24’, ‘25’, ‘26’, ‘27’, ‘28’, ‘29’, ‘3’, ‘30’, ‘31’, ‘32’, ‘33’, ‘34’, ‘35’, ‘36’, ‘37’, ‘38’, ‘39’, ‘4’, ‘40’, ‘41’, ‘42’, ‘43’, ‘44’, ‘45’, ‘46’, ‘47’, ‘48’, ‘49’, ‘5’, ‘50’, ‘6’, ‘7’, ‘8’, ‘9’
Setting row.names to NULL doesn't work. I heard from someone that this is because some of the data are stored as lists here, which I don't quite understand.
I understand that there is an alternative package, WDI, to access this data, and it works well, but I want to know how to resolve the duplicate row names problem here in general, so that I can deal with similar situations where no alternative package is available.

I heard from someone it is due to the fact that some data are stored as lists...
This is correct. The solution is fairly simple, but I find it really easy to get tripped up by this. Right now you're using:
dat <- as.data.frame(fromJSON(url)[2],flatten = TRUE, row.names = NULL)
The problem comes from fromJSON(url)[2]. This should be fromJSON(url)[[2]] instead. According to the documentation, the key difference between [ and [[ is that a single bracket can select multiple elements, whereas [[ selects only one.
You can see how this works with some fake data.
foo <- list(
  a = rnorm(100),
  b = rnorm(100),
  c = rnorm(100)
)
With [, you can select multiple values inside this list.
foo[c("a", "b")]
length(foo["a"]) # Result is 1 not 100 like you might expect.
With [[ the results are different.
foo[[c("a", "b")]] # Raises a subscript error.
foo[["a"]] #This works.
length(foo[["a"]]) # Result is 100.
So, your answer will depend on which subset operator you're using. For your problem, you'll want to use [[ to select a single data.frame inside of the list. Then, you should be able to use rbind correctly.
final <- data.frame()
for (i in 1:10) {
  url <- paste0(
    'http://api.worldbank.org/v2/countries/all/indicators/SP.POP.TOTL?format=json&page=',
    i
  )
  res <- jsonlite::fromJSON(url, flatten = TRUE)[[2]]
  final <- rbind(final, res)
}
Alternative solution with lapply:
urls <- sprintf(
  'http://api.worldbank.org/v2/countries/all/indicators/SP.POP.TOTL?format=json&page=%s',
  1:10
)
resl <- lapply(urls, jsonlite::fromJSON, flatten = TRUE)
resl <- lapply(resl, "[[", 2)  # Select the second element from each list element.
resl <- do.call(rbind, resl)  # Pass all the list elements as arguments to rbind.
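As an aside, jsonlite also ships a helper for exactly this pagination pattern: rbind_pages() combines a list of data frames retrieved from consecutive API pages, even when their columns don't match exactly. A minimal sketch reusing the urls vector above:
pages <- lapply(urls, function(u) jsonlite::fromJSON(u, flatten = TRUE)[[2]])
final <- jsonlite::rbind_pages(pages)  # robust to pages with differing columns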

Related

How to pass multiple values as IDs to `rvest`

I want to extract a bunch of numbers from the website https://www.bcassessment.ca/ using PIDs (unique IDs). The list of sample PIDs is shown below:
PID <- c("012-215-023", "024-521-647", "025-891-669")
For these values, I opened the website manually, chose PID from the list of available search options, and searched for these numbers. The search redirected me to the following URLs:
URL <- c("https://www.bcassessment.ca//Property/Info/QTAwMDAwM1hIUA==",
"https://www.bcassessment.ca//Property/Info/QTAwMDAwNEJKMA==",
"https://www.bcassessment.ca//Property/Info/QTAwMDAwMUc5OA==")
Then for each of these URLs, I ran the code shown below, to extract the total value of the property:
library(rvest)

out <- c()
for (i in seq_along(URL)) {
  url <- URL[i]
  out[i] <- url %>%
    read_html() %>%
    html_nodes('span#lblTotalAssessedValue') %>%
    html_text()
}
which gives me the final result
[1] "$543,000" "$957,000" "$487,000"
The problem is that I have a list of PIDs (more than 50,000), and I cannot manually search each of them on the website to find the actual link and then run rvest to scrape it. How do you recommend automating this process so that I can provide only the PIDs and get the output prices?
Summary: for a list of known PIDs, I want to open https://www.bcassessment.ca/ and extract the most up-to-date price of each property, and I want it done automatically.
Test_PID
I've added a list of PIDs so you can check whether the code is working:
structure(list(P.I.D.. = c("004-050-541", "016-658-540", "016-657-861",
"016-657-764", "019-048-386", "025-528-360", "800-058-036", "025-728-954",
"028-445-783", "027-178-048", "028-445-571", "025-205-145", "015-752-798",
"026-041-308", "024-521-698", "027-541-631", "024-360-651", "028-445-040",
"025-851-411", "025-529-293", "024-138-436", "023-893-796", "018-496-768",
"025-758-721", "024-219-665", "024-359-866", "018-511-015", "026-724-979",
"023-894-253", "006-331-505", "025-961-012", "024-219-690", "027-309-878",
"028-445-716", "025-759-060", "017-692-733", "025-728-237", "028-447-221",
"023-894-202", "028-446-020", "026-827-611", "028-058-798", "017-574-412",
"023-893-591", "018-511-457", "025-960-199", "027-178-714", "027-674-941",
"027-874-826", "025-110-390", "028-071-336", "018-257-984", "023-923-393",
"026-367-203", "027-601-854", "003-773-922", "025-902-989", "018-060-641",
"025-530-003", "018-060-722", "025-960-423", "016-160-126", "009-301-461",
"025-960-580", "019-090-315", "023-464-283", "028-445-503", "006-395-708",
"028-446-674", "018-258-549", "023-247-398", "029-321-166", "024-519-871",
"023-154-161", "003-904-547", "004-640-357", "006-314-864", "025-960-521",
"013-326-783", "003-430-049", "027-490-084", "024-360-392", "028-054-474",
"026-076-179", "005-309-689", "024-613-509", "025-978-551", "012-215-066",
"024-034-002", "025-847-244", "024-222-038", "003-912-019", "024-845-264",
"006-186-254", "026-826-691", "026-826-712", "024-575-569", "028-572-581",
"026-197-774", "009-695-958", "016-089-120", "025-703-811", "024-576-671",
"026-460-751", "026-460-149", "003-794-181", "018-378-684", "023-916-745",
"003-497-721", "003-397-599", "024-982-211", "018-060-129", "018-061-231",
"017-765-714", "027-303-799", "028-565-312", "018-061-010", "006-338-232",
"023-680-024", "028-983-971", "028-092-490", "006-293-239", "018-061-257",
"028-092-376", "018-060-137", "004-302-664", "016-988-060", "003-371-166",
"027-325-342", "011-475-480", "018-060-200")), row.names = c(NA,
-131L), class = c("tbl_df", "tbl", "data.frame"))
P.S. The website I mentioned is a public website that anyone can open and use to look up the estimated price of a property, so I don't think there is any problem with scraping it, as it's a public database.
When you submit the PID through the form, it triggers the following call:
GET https://www.bcassessment.ca/Property/Search/GetByPid/012215023?PID=012215023&_=1619713418473
The call above has the following parameters:
012215023 is the PID from your input, without the dashes; it appears both as a path segment and as a query parameter.
1619713418473 is the current timestamp in milliseconds since 1970 (a Unix timestamp).
The result of the call above is a json response like this:
{
  "sEcho": 1,
  "aaData": [
    ["XXXXXXX", "XXXXXXXX", "XXXXXXXXXXXX", "200-027-615-115-48-0004", "QTAwMDAwM1hIUA=="]
  ]
}
The call above returns the response with a text/plain content type rather than application/json, so we have to parse it ourselves using jsonlite. Then we pick the last item of the aaData array value, which in this case is QTAwMDAwM1hIUA==, and build the resulting URL like the one in your post.
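To illustrate that parsing step on its own, here is a small sketch using the (redacted) sample response above:
resp <- '{"sEcho":1,"aaData":[["XXXXXXX","XXXXXXXX","XXXXXXXXXXXX","200-027-615-115-48-0004","QTAwMDAwM1hIUA=="]]}'
id <- jsonlite::fromJSON(resp)$aaData[1, 5]  # fromJSON simplifies aaData to a character matrix
id  # "QTAwMDAwM1hIUA=="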
The following code gets a list of PID and extracts the $ values for each one of these:
library(rvest)
library(httr)

getValueForPID <- function(pid) {
  # Strip the dashes from the PID
  pidNum <- gsub("-", "", pid)
  # Current Unix timestamp in milliseconds
  time <- as.numeric(as.POSIXct(Sys.time())) * 1000
  output <- content(GET(
    paste0("https://www.bcassessment.ca/Property/Search/GetByPid/", pidNum),
    query = list(
      "PID" = pidNum,
      "_" = format(time, digits = 13)
    )
  ), "text", encoding = "UTF-8")
  if (output == "found_no_results") {
    return("")
  }
  data <- jsonlite::fromJSON(output)
  # The property id is the last (5th) element of aaData
  id <- data$aaData[5]
  text <- paste0("https://www.bcassessment.ca/Property/Info/", id) %>%
    read_html() %>%
    html_nodes('span#lblTotalAssessedValue') %>%
    html_text()
  return(text)
}
PID <- c("004-050-541", "016-658-540", "016-657-861", "016-657-764", "019-048-386", "025-528-360", "800-058-036")
out <- c()
count <- 1
for (i in PID) {
print(i)
out[count] <- getValueForPID(i)
count <- count + 1
}
print(out)
Sample output:
[1] "$543,000" "$957,000" "$487,000"
Kaggle link: https://www.kaggle.com/bertrandmartel/bcassesment-pid

Embed html in Jupyter with R kernel [duplicate]

I just started using Jupyter with R, and I'm wondering if there's a good way to display HTML or LaTeX output.
Here's some example code that I wish worked:
library(xtable)
x <- runif(500, 1, 50)
y <- x + runif(500, -5, 5)
model <- lm(y ~ x)
print(xtable(model), type = 'html')
Instead of rendering the HTML, it just displays it as plaintext. Is there any way to change that behavior?
A combination of repr (for setting options) and IRdisplay will work for HTML. Others may know about LaTeX.
# Cell 1 ------------------------------------------------------------------
library(xtable)
library(IRdisplay)
library(repr)
data(tli)
tli.table <- xtable(tli[1:20, ])
digits(tli.table) <- matrix(0:4, nrow = 20, ncol = ncol(tli) + 1)
options(repr.vector.quote = FALSE)
display_html(paste(capture.output(print(head(tli.table), type = 'html')), collapse = "", sep = ""))
# Cell 2 ------------------------------------------------------------------
display_html("<span style='color:red; float:right'>hello</span>")
# Cell 3 ------------------------------------------------------------------
display_markdown("[this](http://google.com)")
# Cell 4 ------------------------------------------------------------------
display_png(file="shovel-512.png")
# Cell 5 ------------------------------------------------------------------
display_html("<table style='width:20%;border:1px solid blue'><tr><td style='text-align:right'>cell 1</td></tr></table>")
I found a simpler answer for the initial, simple use case.
If you call xtable without wrapping it in a call to print, then it totally works. E.g.,
library(xtable)
data(cars)
model <- lm(speed ~ ., data = cars)
xtable(model)
In Jupyter, you can use Markdown. Just be sure to change the Jupyter cell from a code cell to a Markdown cell. Once you have done this, simply place a double dollar sign ("$$") before and after the LaTeX you have, then run the cell.
The steps are as follows:
1. Create a Markdown cell.
2. $$ some LaTex $$
3. Press play button within Jupyter.
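For example, a Markdown cell containing the following line renders as a typeset equation:
$$ \hat{y} = \beta_0 + \beta_1 x $$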
Defining the following function in the session will make objects returned by xtable display as the HTML that xtable generates:
repr_html.xtable <- function(obj, ...) {
  paste(capture.output(print(obj, type = 'html')), collapse = "", sep = "")
}
library(xtable)
data(cars)
model <- lm(speed ~ ., data = cars)
xtable(model)
Without the repr_html.xtable function, the returned object, which is also of class data.frame, would be rich-displayed by the kernel's display system (as an HTML table) via repr::repr_html.data.frame.
Just don't print(...) the object :-)
Render/embed an HTML/LaTeX table in the Jupyter IR kernel
Some R packages, such as knitr, produce tables in HTML format, so if you want to put these tables in the notebook:
library(knitr)
library(kableExtra)
library(IRdisplay)  # the package that you need

# We create the table
dt <- mtcars[1:5, 1:6]
options(knitr.table.format = "html")
html_table <- kable(dt) %>%
  kable_styling("striped") %>%
  add_header_above(c(" " = 1, "Group 1" = 2, "Group 2" = 2, "Group 3" = 2))

# We put the table in our notebook
display_html(toString(html_table))
Or, for example, if you have a file:
display_latex(file = "your file path")
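IRdisplay can also render LaTeX passed inline as a string; a small sketch, assuming display_latex accepts a character string the same way display_html does:
display_latex("$$\\int_0^1 x^2 \\, dx = \\frac{1}{3}$$")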

Extract JSON data from the rows of an R data frame

I have a data frame where the values of the column Parameters are JSON data:
# Parameters
#1 {"a":0,"b":[10.2,11.5,22.1]}
#2 {"a":3,"b":[4.0,6.2,-3.3]}
...
I want to extract the parameters of each row and append them to the data frame as columns A, B1, B2 and B3.
How can I do it?
I would rather use dplyr if it is possible and efficient.
In your example data, each row contains a JSON object. This format is called JSON Lines (aka ndjson), and the jsonlite package has a special function, stream_in, to parse such data into a data frame:
# Example data
mydata <- data.frame(parameters = c(
  '{"a":0,"b":[10.2,11.5,22.1]}',
  '{"a":3,"b":[4.0,6.2,-3.3]}'
), stringsAsFactors = FALSE)

# Parse json lines
res <- jsonlite::stream_in(textConnection(mydata$parameters))

# Extract columns
a <- res$a
b1 <- sapply(res$b, "[", 1)
b2 <- sapply(res$b, "[", 2)
b3 <- sapply(res$b, "[", 3)
In your example the JSON structure is fairly simple, so the other suggestions work as well, but this solution will generalize to more complex JSON structures.
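To finish the job as described in the question, you can bind the extracted columns onto the original data frame; a short sketch reusing the variables above:
mydata <- cbind(mydata, A = a, B1 = b1, B2 = b2, B3 = b3)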
I had a similar problem: multiple variables in a data frame were JSON objects and a lot of them were NAs, but I did not want to remove the rows where NAs existed. I wrote a function which is passed a data frame, an id within the data frame (usually a record ID), and the name of the variable to parse, in quotes. The function creates two subsets, one for records which contain JSON objects and another to keep track of NA-value records for the same variable; it then joins those data frames and joins their combination to the original data frame, thereby replacing the former variable. Perhaps it will help you or someone else, as it has worked for me in a few cases now.
I haven't really cleaned it up too much, so I apologize if my variable names are a bit confusing, as this was a very ad-hoc function I wrote for work. I should also state that I used another poster's idea for replacing the former variable with the new variables created from the JSON object. You can find that here: Add (insert) a column between two columns in a data.frame
One last note: there is a package called tidyjson which would have offered a simpler solution, but it apparently cannot work with list-type JSON objects. At least, that's my interpretation.
library(jsonlite)
library(stringr)
library(dplyr)

parse_var <- function(df, id, var) {
  m <- df[, var]
  p <- m[-which(is.na(m))]
  n <- df[, id]
  key <- n[-which(is.na(df[, var]))]
  # Create a df for the rows which are NA
  key_na <- n[which(is.na(df[, var]))]
  q <- m[which(is.na(m))]
  parse_df_na <- data.frame(key_na, q, stringsAsFactors = FALSE)
  # Parse the JSON values and bind them together into a data frame
  p <- lapply(p, function(x) {
    fromJSON(x) %>% data.frame(stringsAsFactors = FALSE)
  }) %>% bind_rows()
  # Bind the record ids to the parsed data frame and name the columns appropriately
  parse_df <- data.frame(key, p, stringsAsFactors = FALSE)
  # The new variables begin with a capital 'X', so replace that with the former variable's name
  n <- names(parse_df) %>% str_replace('X', paste(var, ".", sep = ""))
  n <- n[2:length(n)]
  colnames(parse_df) <- c(id, n)
  # Join the data frame of NA JSON values with the parsed JSON values, then remove the NA column, q
  parse_df <- merge(parse_df, parse_df_na, by.x = id, by.y = 'key_na', all = TRUE)
  parse_df <- parse_df[, -which(names(parse_df) == 'q')]
  # Replace the variable being parsed with the newly parsed and named values
  new_df <- data.frame(append(df, parse_df[, -which(names(parse_df) == id)],
                              after = which(names(df) == var)),
                       stringsAsFactors = FALSE)
  new_df <- new_df[, -which(names(new_df) == var)]
  return(new_df)
}
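A hypothetical call, assuming a data frame df with a record-ID column record_id and a JSON column Parameters (both names are illustrative, not from the original post):
# df$Parameters holds JSON strings (or NA); record_id identifies each row
new_df <- parse_var(df, id = "record_id", var = "Parameters")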

Issues with readHTMLTable in R

I was trying to use readHTMLTable to store some data in a data frame in RStudio, but it keeps telling me could not find function "readHTMLTable". I don't understand what I did wrong. Can someone take a look at this and tell me how to fix it, or whether it works in your RStudio?
url <- 'http://www.cdc.gov/vhf/ebola/outbreaks/2014-west-africa/case-counts.html'
ebola <- getURL(url)
ebola <- readHTMLTable(ebola, stringAsFactors = F)
Error: could not find function "readHTMLTable"
You are reading the table in with R's default, which converts characters to factors. You can use stringsAsFactors = FALSE in readHTMLTable, and this will be passed to data.frame. Also, the table uses commas as thousands separators, which you will need to remove:
library(XML)
url1 <- 'http://en.wikipedia.org/wiki/List_of_Ebola_outbreaks'
df1 <- readHTMLTable(url1, which = 2, stringsAsFactors = FALSE)
df1$"Human death"
mySum <- sum(as.integer(gsub(",", "", df1$"Human death")))
> mySum
[1] 6910
The problem is that you didn't load the XML library:
library(XML)
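Putting it together for the original snippet (a sketch; getURL comes from RCurl and readHTMLTable from XML):
library(RCurl)
library(XML)

url <- 'http://www.cdc.gov/vhf/ebola/outbreaks/2014-west-africa/case-counts.html'
ebola <- getURL(url)
ebola <- readHTMLTable(ebola, stringsAsFactors = FALSE)  # note the spelling: stringsAsFactors, not stringAsFactors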

Substring in Data Frame R

I have data from a GPS log like this (the data sits in the rows of a data frame column):
{"mAccuracy":20.0,"mAltitude":0.0,"mBearing":0.0,"mElapsedRealtimeNanos":21677339000000,"mExtras":{"networkLocationSource":"cached","networkLocationType":"wifi","noGPSLocation":{"mAccuracy":20.0,"mAltitude":0.0,"mBearing":0.0,"mElapsedRealtimeNanos":21677339000000,"mHasAccuracy":true,"mHasAltitude":false,"mHasBearing":false,"mHasSpeed":false,"mIsFromMockProvider":false,"mLatitude":35.1811956,"mLongitude":126.9104909,"mProvider":"network","mSpeed":0.0,"mTime":1402801381486},"travelState":"stationary"},"mHasAccuracy":true,"mHasAltitude":false,"mHasBearing":false,"mHasSpeed":false,"mIsFromMockProvider":false,"mLatitude":35.1811956,"mLongitude":126.9104909,"mProvider":"network","mSpeed":0.0,"mTime":1402801381486,"timestamp":1402801665.512}
The problem is that I only need the latitude and longitude values, so I think I can use substring and sapply to apply it to all the data in the data frame.
But I am not sure this approach is elegant, because with substring (e.g. substr("abcdef", 2, 4)) I would need to count how many characters there are from the beginning of the string up to "mLatitude". Can anybody suggest a faster way to process this?
Thank you to @mnel for answering the question; it works, but I still have a problem.
From mnel's answer, I've created a function like this:
fgps <- function(x) {
  out <- fromJSON(x)
  c(out$mExtras$noGPSLocation$mLatitude,
    out$mExtras$noGPSLocation$mLongitude)
}
And this is my data:
gpsdata <- head(dfallgps[,4],2)
[1] "{\"mAccuracy\":23.128,\"mAltitude\":0.0,\"mBearing\":0.0,\"mElapsedRealtimeNanos\":76437488000000,\"mExtras\":{\"networkLocationSource\":\"cached\",\"networkLocationType\":\"wifi\",\"noGPSLocation\":{\"mAccuracy\":23.128,\"mAltitude\":0.0,\"mBearing\":0.0,\"mElapsedRealtimeNanos\":76437488000000,\"mHasAccuracy\":true,\"mHasAltitude\":false,\"mHasBearing\":false,\"mHasSpeed\":false,\"mIsFromMockProvider\":false,\"mLatitude\":35.1779956,\"mLongitude\":126.9089661,\"mProvider\":\"network\",\"mSpeed\":0.0,\"mTime\":1402894224187},\"travelState\":\"stationary\"},\"mHasAccuracy\":true,\"mHasAltitude\":false,\"mHasBearing\":false,\"mHasSpeed\":false,\"mIsFromMockProvider\":false,\"mLatitude\":35.1779956,\"mLongitude\":126.9089661,\"mProvider\":\"network\",\"mSpeed\":0.0,\"mTime\":1402894224187,\"timestamp\":1402894517.425}"
[2] "{\"mAccuracy\":1625.0,\"mAltitude\":0.0,\"mBearing\":0.0,\"mElapsedRealtimeNanos\":77069916000000,\"mExtras\":{\"networkLocationSource\":\"cached\",\"networkLocationType\":\"cell\",\"noGPSLocation\":{\"mAccuracy\":1625.0,\"mAltitude\":0.0,\"mBearing\":0.0,\"mElapsedRealtimeNanos\":77069916000000,\"mHasAccuracy\":true,\"mHasAltitude\":false,\"mHasBearing\":false,\"mHasSpeed\":false,\"mIsFromMockProvider\":false,\"mLatitude\":35.1811881,\"mLongitude\":126.9084072,\"mProvider\":\"network\",\"mSpeed\":0.0,\"mTime\":1402894857416},\"travelState\":\"stationary\"},\"mHasAccuracy\":true,\"mHasAltitude\":false,\"mHasBearing\":false,\"mHasSpeed\":false,\"mIsFromMockProvider\":false,\"mLatitude\":35.1811881,\"mLongitude\":126.9084072,\"mProvider\":\"network\",\"mSpeed\":0.0,\"mTime\":1402894857416,\"timestamp\":1402894857.519}"
When I run sapply, why does the output still show the full JSON data, and not just the result values?
sapply(gpsdata, function(gpsdata) fgps(gpsdata))
{"mAccuracy":23.128,"mAltitude":0.0,"mBearing":0.0,"mElapsedRealtimeNanos":76437488000000,"mExtras":{"networkLocationSource":"cached","networkLocationType":"wifi","noGPSLocation":{"mAccuracy":23.128,"mAltitude":0.0,"mBearing":0.0,"mElapsedRealtimeNanos":76437488000000,"mHasAccuracy":true,"mHasAltitude":false,"mHasBearing":false,"mHasSpeed":false,"mIsFromMockProvider":false,"mLatitude":35.1779956,"mLongitude":126.9089661,"mProvider":"network","mSpeed":0.0,"mTime":1402894224187},"travelState":"stationary"},"mHasAccuracy":true,"mHasAltitude":false,"mHasBearing":false,"mHasSpeed":false,"mIsFromMockProvider":false,"mLatitude":35.1779956,"mLongitude":126.9089661,"mProvider":"network","mSpeed":0.0,"mTime":1402894224187,"timestamp":1402894517.425}
[1,] 35.178
[2,] 126.909
{"mAccuracy":1625.0,"mAltitude":0.0,"mBearing":0.0,"mElapsedRealtimeNanos":77069916000000,"mExtras":{"networkLocationSource":"cached","networkLocationType":"cell","noGPSLocation":{"mAccuracy":1625.0,"mAltitude":0.0,"mBearing":0.0,"mElapsedRealtimeNanos":77069916000000,"mHasAccuracy":true,"mHasAltitude":false,"mHasBearing":false,"mHasSpeed":false,"mIsFromMockProvider":false,"mLatitude":35.1811881,"mLongitude":126.9084072,"mProvider":"network","mSpeed":0.0,"mTime":1402894857416},"travelState":"stationary"},"mHasAccuracy":true,"mHasAltitude":false,"mHasBearing":false,"mHasSpeed":false,"mIsFromMockProvider":false,"mLatitude":35.1811881,"mLongitude":126.9084072,"mProvider":"network","mSpeed":0.0,"mTime":1402894857416,"timestamp":1402894857.519}
[1,] 35.18119
[2,] 126.90841
I want the result to look like:
[1] 35.178 126.909
[2] 35.18119 126.90841
Thank you
It would appear that your data is in JSON format. Therefore, use RJSONIO::fromJSON to read it.
E.g.:
txt <- "{\"mAccuracy\":20.0,\"mAltitude\":0.0,\"mBearing\":0.0,\"mElapsedRealtimeNanos\":21677339000000,\"mExtras\":{\"networkLocationSource\":\"cached\",\"networkLocationType\":\"wifi\",\"noGPSLocation\":{\"mAccuracy\":20.0,\"mAltitude\":0.0,\"mBearing\":0.0,\"mElapsedRealtimeNanos\":21677339000000,\"mHasAccuracy\":true,\"mHasAltitude\":false,\"mHasBearing\":false,\"mHasSpeed\":false,\"mIsFromMockProvider\":false,\"mLatitude\":35.1811956,\"mLongitude\":126.9104909,\"mProvider\":\"network\",\"mSpeed\":0.0,\"mTime\":1402801381486},\"travelState\":\"stationary\"},\"mHasAccuracy\":true,\"mHasAltitude\":false,\"mHasBearing\":false,\"mHasSpeed\":false,\"mIsFromMockProvider\":false,\"mLatitude\":35.1811956,\"mLongitude\":126.9104909,\"mProvider\":\"network\",\"mSpeed\":0.0,\"mTime\":1402801381486,\"timestamp\":1402801665.512}"
Then process:
library(RJSONIO)
out <- fromJSON(txt)
out$mLongitude
#[1] 126.9105
out$mLatitude
#[1] 35.1812
# To process multiple values
tt <- rep(txt, 2)
myData <- lapply(tt, fromJSON)
latlong <- do.call(rbind, lapply(myData, `[`, c('mLatitude', 'mLongitude')))

# Or using rbindlist
library(data.table)
latlong <- rbindlist(lapply(myData, `[`, c('mLatitude', 'mLongitude')))
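As for the follow-up about the long JSON strings appearing in the sapply output: those are just names, because sapply uses a character input vector as names by default. A small sketch reusing fgps and gpsdata from the question:
res <- sapply(gpsdata, fgps, USE.NAMES = FALSE)  # drop the JSON-string names
t(res)  # one row per record: latitude, longitude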