I'm having a bit of trouble with qiime2R at the moment. I'm following the tutorial and getting stuck here:
unwunifrac <- read_qza("unweighted_unifrac_pcoa_results.qza")
unwunifrac$data$Vectors %>%
  select(SampleID, PC1, PC2) %>%
  left_join(metadata) %>%
  left_join(shannon) %>%
  ggplot(aes(x=PC1, y=PC2, color=`SPT_No`, shape=`Geosmin`, size=shannon_entropy)) +
  geom_point(alpha=0.5) +
  theme_q2r() +
  scale_shape_manual(values=c(16,1), name="Geosmin") +
  scale_size_continuous(name="Geosmin") +
  scale_color_discrete(name="Location")
It is because my sample ID column is called sampleid in my metadata and Shannon vector, but it's called SampleID in the unweighted UniFrac .qza file.
Has anyone experienced this issue? How do I resolve it? Is it best to rename the columns?
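This is roughly what I had in mind, but I'm not sure whether renaming or spelling out the join keys is the better habit (just a sketch, assuming the columns only differ in name):

# Option 1: rename so all three tables use SampleID
metadata <- dplyr::rename(metadata, SampleID = sampleid)
shannon <- dplyr::rename(shannon, SampleID = sampleid)

# Option 2: keep the names and state the keys explicitly in each join
left_join(metadata, by = c("SampleID" = "sampleid"))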
Any help would be much appreciated.
Many thanks
The issue you are having is not clear. Could you please post something that reproduces the error message you are getting from qiime2R?
I cannot include plotly figures in an HTML report in R Markdown when the figures are generated from a function.
I can get ggplotly figures into an HTML report in R Markdown when plotting them directly, without any functions. For instance, knitting the example below works:
library(ggplot2)
library(plotly)

a <- ggplot(Data, aes_string(x = "Category", y = Phenotype)) + geom_violin() + geom_boxplot()
plot(a)
ggplotly(a)
However, when I wrap the plot in a function in order to generate graphs for several phenotypes, the figures don't appear in the HTML report.
Do_graph <- function(Data, Phenotype) {
  a <- ggplot(Data, aes_string(x = "Category", y = Phenotype)) + geom_violin() + geom_boxplot()
  plot(a)
  ggplotly(a)
}

Phenotypes <- c("A", "B", "C")
for (Phenotype in Phenotypes) { Do_graph(Data, Phenotype) }
Does anyone know how to solve this problem? I've seen some similar issues related to htmlwidgets, but I couldn't fix it. Thank you.
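For what it's worth, the suggestion I've seen most often for widgets created inside a loop is to collect the plotly objects in a list and render them all at once with htmltools::tagList(). A sketch of what I understand that to mean, assuming Do_graph() returns the ggplotly object (as it does above):

library(htmltools)

# Build one widget per phenotype, then emit them together;
# the tagList() call needs to be the visible result of the chunk.
plots <- lapply(Phenotypes, function(p) Do_graph(Data, p))
tagList(plots)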
I want to get the price from this page: https://www.coffeedesk.pl/product/16632/Espresso-Miesiaca-Lacava-Etiopia-Yirgacheffe-Rocko-Mountain-1Kg
My code:
url <-"https://www.coffeedesk.pl/product/16632/Espresso-Miesiaca-Lacava-Etiopia-Yirgacheffe-Rocko-Mountain-1Kg"
x <- xml2::read_html(url)
price<-x%>% html_node('span.product-price smaller-price') %>%
html_text()
but it returns NA. What can I do?
You have a space in your CSS selector where you need a period. Try html_node('span.product-price.smaller-price') in your code and see if that works.
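For example, the full call would look something like this (assuming the page still uses those class names on the price element):

price <- x %>%
  html_node("span.product-price.smaller-price") %>%
  html_text(trim = TRUE)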
I'm trying to use tidymodels to do exercise 6.2 in Applied Predictive Modeling and need to specify a PLS model. I tried using the code from this post, but I keep getting errors.
library(tidymodels)
library(plsmod)

pls_spec <- plsmod::pls(num_comp = tune()) %>%
  set_mode("regression") %>%
  set_engine("mixOmics")
My Lasso spec works fine:
lasso_spec <- linear_reg(penalty = 0.1, mixture = 1) %>%
  set_engine("glmnet")
Do I use glmnet for the PLS spec too?
Some additional research yielded my answer on the tidymodels site: https://www.tidymodels.org/find/parsnip/
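In case it helps anyone else: the model listed there for PLS is pls() from plsmod with mixOmics as the engine, and mixOmics is distributed through Bioconductor rather than CRAN, so it may need a separate install. A sketch of the spec the listing describes:

# mixOmics comes from Bioconductor:
# install.packages("BiocManager"); BiocManager::install("mixOmics")
library(tidymodels)
library(plsmod)

pls_spec <- pls(num_comp = tune()) %>%
  set_mode("regression") %>%
  set_engine("mixOmics")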
I'm trying to scrape an NCBI page (https://www.ncbi.nlm.nih.gov/protein/29436380) to obtain information about a protein. I need to access the gene_synonyms and GeneID fields. I have tried to find the relevant nodes with the SelectorGadget add-on in Chrome and with the inspector in Firefox, and I have tried this code:
require("dplyr")
require("rvest")
require("stringr")
GIwebPage <- read_html("https://www.ncbi.nlm.nih.gov/protein/29436380")
TestHTML <- GIwebPage %>% html_node("div.grid , div#maincontent.col.nine_col , div.sequence , pre.genebank , .feature") %>% html_text(trim = TRUE)
Then I try to find the relevant text but it is simply not there.
str_extract_all(TestHTML, pattern = "(synonym).{30}")
[[1]]
character(0)
str_extract_all(TestHTML, pattern = "(GeneID:).{30}")
[[1]]
character(0)
All I seem to be accessing is some of the text content of the column on the right.
str_extract_all(TestHTML, pattern = "(protein).{30}")
[[1]]
[1] "protein codes including ambiguities a"
[2] "protein sequence for myosin-9 (NP_00"
[3] "protein should not be confused with t"
[4] "protein, partial [Homo sapiens]gi|294"
[5] "protein codes including ambiguities a"
I have tried so many combinations of node selections with html_node() that I no longer know what to try. Is this content buried in some structure I can't see, or am I just not finding the right node to select?
Thanks a lot,
José.
The page loads this information dynamically; the underlying data is stored at another location.
Using the developer tools in your browser, look at the network requests: the information you are looking for is stored at the "viewer.fcgi" request; right-click it to copy the link.
See similar question/answers: R not accepting xpath query
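As an aside, instead of chasing the viewer.fcgi request, NCBI's E-utilities can return the same record as a plain-text GenPept flat file, which is easy to search with the kind of patterns you already wrote. A sketch (the regular expressions are assumptions about how the flat file labels those fields):

library(httr)
library(stringr)

# Fetch the GenPept flat file for this protein record via NCBI efetch,
# then pull the fields of interest out of the plain text.
rec <- GET(
  "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi",
  query = list(db = "protein", id = "29436380", rettype = "gp", retmode = "text")
)
txt <- content(rec, as = "text", encoding = "UTF-8")

str_extract_all(txt, "/gene_synonym=\"[^\"]+\"")
str_extract_all(txt, "GeneID:[0-9]+")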
Can anyone help me understand why the code below does not return any data for the selected table?
library('httr')
library('rvest')
url <- read_html("http://projects.worldbank.org/search?lang=en&searchTerm=&sectorcode_exact=AB")
table <- html_node(url, "table#f05v5-sorting-table.border-top2.border-allside.clearboth")
Thanks!
You are missing some steps. Your workflow should look like this:
dat_html <- read_html(
"http://projects.worldbank.org/search?lang=en&searchTerm=§orcode_exact=AB"
)
dat_nodes <- html_nodes(dat_html, xpath = "xxxx")
dat <- html_table(dat_nodes)
dat will be a list, so if you want a data frame, you could do something like:
dat_df <- as.data.frame(dat)
Or, if you like tibbles:
dat_tbl <- as_tibble(dat)
I cannot find the table you are interested in on that webpage, so you will have to replace "xxxx" with the XPath of the table you want.
To find the XPath while inspecting the page in Chrome or Chromium, right-click the node in the inspector window and choose Copy, then Copy XPath.