Comparing words with the original file in R (JSON)

I have the original dataset in JSON format. Let's load it in R:
library("rjson")
setwd("mydir")
getwd()
json_data <- fromJSON(paste(readLines("N1.json"), collapse=""))
uu <- unlist(json_data)
uutext <- uu[names(uu) == "text"]
And I have another dataset, mydata2:
mydata2 <- read.csv("path to data/words")
I need to find which words from mydata2 are present in the messages in the JSON file, and then write those messages to a new document, "xyz.txt". How can I do this?
chalk indirect pick reaction team skip pumpkin surprise bless ignorance
1 time patient road extent decade cemetery staircase monarch bubble abbey
2 service conglomerate banish pan friendly position tight highlight rice disappear
3 write swear break tire jam neutral momentum requirement relationship matrix
4 inspire dose jump promote trace latest absolute adjust joystick habit
5 wrong behave claim dedicate threat sell particle statement teach lamb
6 eye tissue prescription problem secretion revenge barrel beard mechanism platform
7 forest kick face wisecrack uncertainty ratio complain doubt reflection realism
8 total fee debate hall soft smart sip ritual pill category
9 contain headline lump absorption superintendent digital increase key banner second
I mean:
chalk-1 number1, indirect-2 number2, …
Template:
Word1-1 number1-1; Word1-2 number1-2; …; Word1-10 number1-10
Word2-1 number2-1; Word2-2 number2-2; …; Word2-10 number2-10

Next time, please include real data. Here is a simplified model:
library(data.table)

word  <- c("test", "meh", "blah")
jsonF <- c("let's do test", "blah is right", "test blah", "test test")

outp <- list()
for (i in seq_along(word)) {
  # keep the messages that contain the current word (possibly add ignore.case = TRUE)
  outp[[i]] <- as.data.frame(grep(word[i], jsonF, value = TRUE, fixed = TRUE))
}
qq <- rbindlist(outp)
qq <- unique(qq)
print(qq)
1: let's do test
2: test blah
3: test test
4: blah is right
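Applied to your actual objects, a minimal sketch (assuming uutext holds the message texts and mydata2 has a column named word; adjust the column name to your file) that writes the matching messages to "xyz.txt":
# sketch: match each word against the JSON messages and save the hits
outp <- list()
for (i in seq_along(mydata2$word)) {
  outp[[i]] <- grep(mydata2$word[i], uutext, value = TRUE, fixed = TRUE)
}
matched <- unique(unlist(outp))
writeLines(matched, "xyz.txt")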
Edit: quick and dirty paste/collapse:
library(data.table)
x  <- LETTERS[1:10]
y  <- LETTERS[11:20]
df <- rbind(x, y)

L <- list()
for (i in 1:nrow(df)) {
  L[[i]] <- paste0(df[i, ], "-", seq(1, 10), " ", i, "-", seq(1, 10), collapse = "; ")
}
Fin <- cbind(L)
View(Fin)
Gives:
> Fin
L
[1,] "A-1 1-1; B-2 1-2; C-3 1-3; D-4 1-4; E-5 1-5; F-6 1-6; G-7 1-7; H-8 1-8; I-9 1-9; J-10 1-10"
[2,] "K-1 2-1; L-2 2-2; M-3 2-3; N-4 2-4; O-5 2-5; P-6 2-6; Q-7 2-7; R-8 2-8; S-9 2-9; T-10 2-10"


Contrast emmeans: post-hoc t-test as the average differences of the differences between baseline and treatment periods

I am using the lme4 package in R to fit linear mixed effect models (LMM). Essentially, all participants received two interventions (an active treatment and a placebo control), separated by a washout period. However, the order (sequence) in which they received the interventions differed.
An interaction term of intervention and visit was included in the LMM, with eight levels covering all combinations of intervention (2 levels: control and intervention) and visit (4 levels: visit 1 = baseline 1, visit 2, visit 3 = post-randomization baseline 2, visit 4).
My question is how to obtain the intervention effect from a post-hoc t-test as the average of the differences of the differences between interventions, i.e. between visits 1 and 2 and between visits 3 and 4. I also want to determine the effects of the intervention and the control compared to baseline.
Please see code below:
library(lme4)
library(emmeans)

model1 <- lmer(X ~ treatment_type:visit_code + (1|SID) + (1|SID:period),
               na.action = na.omit, data = data.x)
emm <- emmeans(model1, ~ treatment_type:visit_code)
The results of model 1 are:
emm
treatment_type visit_code emmean SE df lower.CL upper.CL
Control T0 -0.2915 0.167 26.0 -0.635 0.0520
Intervention T0 -0.1424 0.167 26.0 -0.486 0.2011
Control T1 -0.2335 0.167 26.0 -0.577 0.1100
Intervention T1 0.0884 0.167 26.0 -0.255 0.4319
Control T2 0.0441 0.167 26.0 -0.299 0.3876
Intervention T2 -0.2708 0.168 26.8 -0.616 0.0748
Control T3 0.1272 0.167 26.0 -0.216 0.4708
Intervention T3 0.0530 0.168 26.8 -0.293 0.3987
Degrees-of-freedom method: kenward-roger
Confidence level used: 0.95
I first created the contrast vectors:
# name vectors
Control.B1      <- c(1,0,0,0,0,0,0,0) # control baseline 1 (visit 1)
Intervention.B1 <- c(0,1,0,0,0,0,0,0) # intervention baseline 1 (visit 1)
Control.A2      <- c(0,0,1,0,0,0,0,0) # post control 1 (visit 2)
Intervention.A2 <- c(0,0,0,1,0,0,0,0) # post intervention 1 (visit 2)
Control.B3      <- c(0,0,0,0,1,0,0,0) # control baseline 2 (visit 3)
Intervention.B3 <- c(0,0,0,0,0,1,0,0) # intervention baseline 2 (visit 3)
Control.A4      <- c(0,0,0,0,0,0,1,0) # post control 2 (visit 4)
Intervention.A4 <- c(0,0,0,0,0,0,0,1) # post intervention 2 (visit 4)

Contbaseline <- (Control.B1 + Control.B3)/2           # average of control baseline visits
Intbaseline  <- (Intervention.B1 + Intervention.B3)/2 # average of intervention baseline visits
ControlAfter <- (Control.A2 + Control.A4)/2           # average of post-control visits
IntervAfter  <- (Intervention.A2 + Intervention.A4)/2 # average of post-intervention visits

Control.vs.Baseline      <- ControlAfter - Contbaseline
Intervention.vs.Baseline <- IntervAfter - Intbaseline
Control.vs.Intervention  <- Control.vs.Baseline - Intervention.vs.Baseline
The output of these is as follows:
> Control.vs.Baseline
[1] -0.5 0.0 0.5 0.0 -0.5 0.0 0.5 0.0
> Intervention.vs.Baseline
[1] 0.0 -0.5 0.0 0.5 0.0 -0.5 0.0 0.5
> Control.vs.Intervention
[1] -0.5 0.5 0.5 -0.5 -0.5 0.5 0.5 -0.5
Is this the correct way to obtain the average differences of the differences between baseline and treatment periods?
Many thanks in advance!
A two-period crossover is the same as a repeated 2x2 Latin square. My suggestion for future experiments of this kind is to structure the data accordingly, using variables for sequence (rows), period (columns), and treatment (assigned in the pattern (A, B) for the first sequence and (B, A) for the second sequence). The subjects are randomized to a sequence.
So with your data, you would need to add a variable sequence that has the level AB for those subjects who receive the treatment sequence A, A, B, B, and level BA for those who receive B, B, A, A (though I guess the 1st and 3rd visits are really baseline for everybody).
Since there are 4 visits, it helps keep things sorted if you recode them as two factors, trial and period, as follows:
visit trial period
1 base 1
2 test 1
3 base 2
4 test 2
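For concreteness, one possible way to create these variables, a sketch only, assuming visit_code uses the levels T0–T3 shown in the emmeans output (adapt it to your actual coding):
# hypothetical recoding of data.x; adjust to how visit_code is stored
data.x$trial  <- factor(ifelse(data.x$visit_code %in% c("T0", "T2"), "base", "test"))
data.x$period <- factor(ifelse(data.x$visit_code %in% c("T0", "T1"), 1, 2))
# sequence is a per-subject property: "AB" if the subject received the intervention
# in period 1 and the control in period 2, "BA" otherwise; derive it from your
# randomization list rather than guessing it from the data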
Then fit the model with formula
model2 <- lmer(X ~ (sequence + period + treatment_type) * trial +
(1|SID:sequence), ...etc...)
The parenthesized part is the standard model for a Latin square. Then the analysis can be done without custom contrasts as follows:
RG <- ref_grid(model2) # same really as emmeans() for all 4 factors
CHG <- contrast(RG, "consec", simple = "trial")
CHG <- update(CHG, by = NULL, infer = c(TRUE, FALSE))
CHG contains the differences from baseline (the trial differences) for each combination of the other three factors. The update() step removes the by variables saved from contrast(). Now we can get the marginal means and comparisons for each factor:
emmeans(CHG, consec ~ treatment_type)
emmeans(CHG, consec ~ period)
emmeans(CHG, consec ~ sequence)
These will be the same results you got the other way via custom contrasts. The one that was a difference of differences before is now handled by sequence. This works because in a 2x2 Latin square, the main effect of each factor is confounded with the two-way interaction of the other two factors.
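For completeness, a sketch that is not part of the original answer: the custom contrast vectors from the question would be applied to the original emm grid like this (assuming emmeans is loaded):
# apply the hand-built contrasts to the emmeans grid from model1
summary(contrast(emm, method = list(
  "Control vs Baseline"      = Control.vs.Baseline,
  "Intervention vs Baseline" = Intervention.vs.Baseline,
  "Control vs Intervention"  = Control.vs.Intervention
)), infer = c(TRUE, TRUE))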

Web-scraping in IBM Watson Studio Jupyter Notebook using BeautifulSoup not working

I'm looking to scrape data in an IBM Watson Studio Jupyter Notebook from this search result page:
https://www.aspc.co.uk/search/?PrimaryPropertyType=Rent&SortBy=PublishedDesc&LastUpdated=AddedAnytime&SearchTerm=&PropertyType=Residential&PriceMin=&PriceMax=&Bathrooms=&OrMoreBathrooms=true&Bedrooms=&OrMoreBedrooms=true&HasCentralHeating=false&HasGarage=false&HasDoubleGarage=false&HasGarden=false&IsNewBuild=false&IsDevelopment=false&IsParkingAvailable=false&IsPartExchangeConsidered=false&PublicRooms=&OrMorePublicRooms=true&IsHmoLicense=false&IsAllowPets=false&IsAllowSmoking=false&IsFullyFurnished=false&IsPartFurnished=false&IsUnfurnished=false&ExcludeUnderOffer=false&IncludeClosedProperties=true&ClosedDatesSearch=14&MapSearchType=EDITED&ResultView=LIST&ResultMode=NONE&AreaZoom=13&AreaCenter[lat]=57.14955426557916&AreaCenter[lng]=-2.0927401123046785&EditedZoom=13&EditedCenter[lat]=57.14955426557916&EditedCenter[lng]=-2.0927401123046785
I've tried BeautifulSoup and attempted Selenium (full disclosure: I am a beginner) over multiple variations of code. I've gone through dozens of questions on Stack Overflow, Medium articles, etc., and I cannot understand what I'm doing wrong.
The latest one I'm doing is:
from bs4 import BeautifulSoup

# response is the result of requests.get() on the search URL above
html_soup = BeautifulSoup(response.text, 'html.parser')
type(html_soup)

properties_containers = html_soup.find_all('div', class_='information-card property-card col ')
print(type(properties_containers))
print(len(properties_containers))
This returns 0.
<class 'bs4.element.ResultSet'>
0
Can someone please guide me in the right direction as to what I'm doing wrong/ missing?
The data you see is loaded via JavaScript. BeautifulSoup cannot execute it, but you can use the requests module to load the data directly from their API.
For example:
import json
import requests

url = 'https://www.aspc.co.uk/search/?PrimaryPropertyType=Rent&SortBy=PublishedDesc&LastUpdated=AddedAnytime&SearchTerm=&PropertyType=Residential&PriceMin=&PriceMax=&Bathrooms=&OrMoreBathrooms=true&Bedrooms=&OrMoreBedrooms=true&HasCentralHeating=false&HasGarage=false&HasDoubleGarage=false&HasGarden=false&IsNewBuild=false&IsDevelopment=false&IsParkingAvailable=false&IsPartExchangeConsidered=false&PublicRooms=&OrMorePublicRooms=true&IsHmoLicense=false&IsAllowPets=false&IsAllowSmoking=false&IsFullyFurnished=false&IsPartFurnished=false&IsUnfurnished=false&ExcludeUnderOffer=false&IncludeClosedProperties=true&ClosedDatesSearch=14&MapSearchType=EDITED&ResultView=LIST&ResultMode=NONE&AreaZoom=13&AreaCenter[lat]=57.14955426557916&AreaCenter[lng]=-2.0927401123046785&EditedZoom=13&EditedCenter[lat]=57.14955426557916&EditedCenter[lng]=-2.0927401123046785'
api_url = 'https://api.aspc.co.uk/Property/GetProperties?{}&Sort=PublishedDesc&Page=1&PageSize=12'

# reuse the query string from the search URL as parameters for the API
params = url.split('?')[-1]
data = requests.get(api_url.format(params)).json()

# uncomment this to see all data received from the server:
# print(json.dumps(data, indent=4))

# print some data to screen:
for property_ in data:
    print(property_['Location']['AddressLine1'])
    print(property_['CategorisationDescription'])
    print('Bedrooms:', property_["Bedrooms"])
    print('Bathrooms:', property_["Bathrooms"])
    print('PublicRooms:', property_["PublicRooms"])
    # .. etc.
    print('-' * 80)
Prints:
44 Roslin Place
Fully furnished 2 Bdrm 1st flr Flat. Hall. Lounge. Dining kitch. 2 Bdrms. Bathrm (CT band - C). Deposit 1 months rent. Parking. No pets. No smokers. Rent £550 p.m Entry by arr. Viewing contact solicitors. Landlord reg: 871287/100/26061. (EPC band - B).
Bedrooms: 2
Bathrooms: 1
PublicRooms: 1
--------------------------------------------------------------------------------
Second Floor Left, 173 Victoria Road
Unfurnished 1 Bdrm 2nd flr Flat. Hall. Lounge. Dining kitch. Bdrm. Bathrm (CT Band - A). Deposit 1 months rent. No pets. No smokers. Rent £375 p.m Immed entry. Viewing contact solicitors. Landlord reg: 1261711/100/09072. (EPC band - D).
Bedrooms: 1
Bathrooms: 1
PublicRooms: 1
--------------------------------------------------------------------------------
102 Bedford Road
Fully furnished 3 Bdrm 1st flr Flat. Hall. Lounge. Kitch. 3 Bdrms. Bathrm (CT band - B). Deposit 1 months rent. Garden. HMO License. No pets. No smokers. Rent £750 p.m Entry by arr. Viewing contact solicitors. Landlord reg: 49171/100/27130. (EPC band - D).
Bedrooms: 3
Bathrooms: 1
PublicRooms: 1
--------------------------------------------------------------------------------
... and so on.

NLTK Lesk issue

I am running a simple sentence-disambiguation test, but the synset returned by NLTK's Lesk implementation for the word 'cat' in the sentence "The cat likes milk" is 'kat.n.01', synsetid=3608870:
(n) kat, khat, qat, quat, cat, Arabian tea, African tea (the leaves of the shrub Catha edulis which are chewed like tobacco or used to make tea; has the effect of a euphoric stimulant) "in Yemen kat is used daily by 85% of adults"
This is a simple phrase, and yet the disambiguation fails.
The same happens for many words in a set containing more than one sentence; for example, in my test sentences I would expect 'dog' to be disambiguated as 'domestic dog', but Lesk gives me 'pawl' (a hinged catch that fits into a notch of a ratchet to move a wheel forward or prevent it from moving backward).
Is it related to the size of the training set, which in my test is only a few sentences?
Here is my test code:
from nltk.tag import PerceptronTagger
from nltk.wsd import lesk

def test_lesk():
    words = get_sample_words()      # my helper returning the tokenized test sentences
    print(words)
    tagger = PerceptronTagger()
    tags = tagger.tag(words)
    print(tags[:5])
    for word, tag in tags:
        pos = get_wordnet_pos(tag)  # my helper mapping POS tags to WordNet POS
        if pos is None:
            continue
        print("word=%s,tag=%s,pos=%s" % (word, tag, pos))
        synset = lesk(words, word, pos)
        if synset is None:
            print('No synsetid for word=%s' % word)
        else:
            print('word=%s, synsetname=%s, synsetid=%d' % (word, synset.name(), synset.offset()))

ANOVA mean test for Vector Valued Response

What code should I use in R for an ANOVA model when the response is vector-valued?
Suppose I have longitudinal data for 20 individuals, each measured at 10 time points, and a factor X with 3 levels, say 0, 1, 2. I need to test whether the levels differ significantly from each other, i.e. whether the mean vectors (vectors because each individual has 10 time points) for level 0, level 1 and level 2 are significantly different.
My sample data is:
Y
[1,] 9.759608 15.02230 17.70331
[2,] 9.596711 15.50542 18.49343
[3,] 11.298570 17.44781 19.48276
[4,] 8.519376 13.73086 17.05881
[5,] 10.232851 15.85302 19.87476
[6,] 10.888219 16.05568 20.12624
[7,] 9.688724 15.50494 18.82778
[8,] 10.309219 16.78230 18.80428
[9,] 9.620743 15.84582 19.32465
[10,] 10.418802 16.18098 17.94019
> treatment = c(0,1,1,2,0,2,1,1,0,1)
> treatment = factor(treatment)
> result = aov(Y ~ treatment)
Error in model.frame.default(formula = Y ~ treatment, drop.unused.levels = TRUE) :
object is not a matrix
Maybe it's just a problem with the type of the object. Try:
as.matrix(Y)
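To put that in context, a sketch that is not part of the original answer: with a matrix response, aov() fits a separate ANOVA per column, while manova() gives a single multivariate test on the mean vectors.
Y <- as.matrix(Y)                                # aov()/manova() need a numeric matrix response
treatment <- factor(c(0, 1, 1, 2, 0, 2, 1, 1, 0, 1))
summary(aov(Y ~ treatment))                      # one univariate ANOVA per time point
summary(manova(Y ~ treatment), test = "Pillai")  # single multivariate test across columns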

Scraping data from tables on multiple web pages in R (football players)

I'm working on a project for school where I need to collect the career statistics for individual NCAA football players. The data for each player is in this format.
http://www.sports-reference.com/cfb/players/ryan-aplin-1.html
I cannot find an aggregate of all players, so I need to go page by page and pull out the bottom row of each passing, scoring, rushing & receiving, etc. HTML table.
Players are categorized by their last name, with links for each letter of the alphabet here:
http://www.sports-reference.com/cfb/players/
For instance, every player whose last name begins with A is found here:
http://www.sports-reference.com/cfb/players/a-index.html
This is my first time really getting into data scraping, so I tried to find similar questions with answers. The closest I found was this question.
I believe I could use something very similar, where I switch the page number with the collected player's name. However, I'm not sure how to change it to look for the player name instead of the page number.
Samuel L. Ventura also gave a talk about data scraping for NFL data recently that can be found here.
EDIT:
Ben was really helpful and provided some great code. The first part works really well; however, when I attempt to run the second part I run into this:
> # unlist into a single character vector
> links <- unlist(links)
> # Go to each URL in the list and scrape all the data from the tables
> # this will take some time... don't interrupt it!
> all_tables <- lapply(links, readHTMLTable, stringsAsFactors = FALSE)
Error in UseMethod("xmlNamespaceDefinitions") :
no applicable method for 'xmlNamespaceDefinitions' applied to an object of class "NULL"
> # Put player names in the list so we know who the data belong to
> # extract names from the URLs to their stats page...
> toMatch <- c("http://www.sports-reference.com/cfb/players/", "-1.html")
> player_names <- unique (gsub(paste(toMatch,collapse="|"), "", links))
Error: cannot allocate vector of size 512 Kb
> # assign player names to list of tables
> names(all_tables) <- player_names
Error: object 'player_names' not found
> fix(inx_page)
Error in edit(name, file, title, editor) :
unexpected '<' occurred on line 1
use a command like
x <- edit()
to recover
In addition: Warning message:
In edit.default(name, file, title, editor = defaultEditor) :
deparse may be incomplete
This could be an error due to not having sufficient memory (only 4 GB on the computer I am currently using), although I do not understand this error:
> all_tables <- lapply(links, readHTMLTable, stringsAsFactors = FALSE)
Error in UseMethod("xmlNamespaceDefinitions") :
no applicable method for 'xmlNamespaceDefinitions' applied to an object of class "NULL"
Looking through my other datasets, my players really only go back to 2007. If there were some way to pull just the people from 2007 onwards, that might help shrink the data. If I had a list of people whose names I wanted to pull, could I just replace the lnk in
links[[i]] <- paste0("http://www.sports-reference.com", lnk)
with only the players that I need?
Here's how you can easily get all the data in all the tables on all the player pages...
First make a list of the URLs for all the players' pages...
require(RCurl); require(XML)
n <- length(letters)
# pre-allocate list to fill
links <- vector("list", length = n)
for(i in 1:n){
  print(i) # keep track of what the function is up to
  # get all html on each page of the a-z index pages
  inx_page <- htmlParse(getURI(paste0("http://www.sports-reference.com/cfb/players/", letters[i], "-index.html")))
  # scrape URLs for each player from each index page
  lnk <- unname(xpathSApply(inx_page, "//a/@href"))
  # skip first 63 and last 10 links as they are constant on each page
  lnk <- lnk[-c(1:63, (length(lnk)-10):length(lnk))]
  # only keep links that go to players (exclude schools)
  lnk <- lnk[grep("players", lnk)]
  # now we have a list of all the URLs to all the players on that index page,
  # but the URLs are incomplete, so let's complete them so we can use them from anywhere
  links[[i]] <- paste0("http://www.sports-reference.com", lnk)
}
# unlist into a single character vector
links <- unlist(links)
Now we have a vector of some 67,000 URLs (seems like a lot of players, can that be right?), so:
Second, scrape all the tables at each URL to get their data, like so:
# Go to each URL in the list and scrape all the data from the tables
# this will take some time... don't interrupt it!
# start edit1 here - just so you can see what's changed
# pre-allocate list
all_tables <- vector("list", length = length(links))
for(i in 1:length(links)){
  print(i)
  # error handling - skips to next URL if it gets an error
  result <- try(
    all_tables[[i]] <- readHTMLTable(links[i], stringsAsFactors = FALSE)
  ); if(class(result) == "try-error") next;
}
# end edit1 here
# Put player names in the list so we know who the data belong to
# extract names from the URLs to their stats page...
toMatch <- c("http://www.sports-reference.com/cfb/players/", "-1.html")
player_names <- unique(gsub(paste(toMatch, collapse = "|"), "", links))
# assign player names to list of tables
names(all_tables) <- player_names
The result looks like this (this is just a snippet of the output):
all_tables
$`neli-aasa`
$`neli-aasa`$defense
Year School Conf Class Pos Solo Ast Tot Loss Sk Int Yds Avg TD PD FR Yds TD FF
1 *2007 Utah MWC FR DL 2 1 3 0.0 0.0 0 0 0 0 0 0 0 0
2 *2010 Utah MWC SR DL 4 4 8 2.5 1.5 0 0 0 1 0 0 0 0
$`neli-aasa`$kick_ret
Year School Conf Class Pos Ret Yds Avg TD Ret Yds Avg TD
1 *2007 Utah MWC FR DL 0 0 0 0 0 0
2 *2010 Utah MWC SR DL 2 24 12.0 0 0 0 0
$`neli-aasa`$receiving
Year School Conf Class Pos Rec Yds Avg TD Att Yds Avg TD Plays Yds Avg TD
1 *2007 Utah MWC FR DL 1 41 41.0 0 0 0 0 1 41 41.0 0
2 *2010 Utah MWC SR DL 0 0 0 0 0 0 0 0 0
Finally, let's say we just want to look at the passing tables...
# just show passing tables
passing <- lapply(all_tables, function(i) i$passing)
# but lots of NULL in here, and not a convenient format, so...
passing <- do.call(rbind, passing)
And we end up with a data frame that is ready for further analyses (also just a snippet)...
Year School Conf Class Pos Cmp Att Pct Yds Y/A AY/A TD Int Rate
james-aaron 1978 Air Force Ind QB 28 56 50.0 316 5.6 3.6 1 3 92.6
jeff-aaron.1 2000 Alabama-Birmingham CUSA JR QB 100 182 54.9 1135 6.2 6.0 5 3 113.1
jeff-aaron.2 2001 Alabama-Birmingham CUSA SR QB 77 148 52.0 828 5.6 4.3 4 6 99.8
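As for the follow-up about restricting the scrape to a known list of players: one option, a sketch only, assuming you have a character vector of the name slugs used in the URLs (e.g. "ryan-aplin"), is to filter the links vector before the scraping loop:
# hypothetical filter: keep only URLs whose player slug is in wanted_players
wanted_players <- c("ryan-aplin")  # replace with your own list of name slugs
keep <- grepl(paste(wanted_players, collapse = "|"), links)
links <- links[keep]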