I have been looking around the internet and have not come up with a solution. Does anyone know how to jump from a CSS class or id referenced inside an HTML file to its declaration, opening a new buffer with point at or close to that declaration?
Edit
After more searching I decided to go with a "search open buffers for a string" approach.
;; CSS search open buffers
(defun search-open-css-buffers-for-region-or-word ()
  "Search open CSS buffers for the active region or the symbol at point."
  (interactive)
  (let ((search-term
         (if (region-active-p)
             (buffer-substring-no-properties (region-beginning) (region-end))
           (thing-at-point 'symbol))))
    (multi-occur (delq nil
                       (mapcar (lambda (buf)
                                 (when (string-match "\\.css$" (buffer-name buf))
                                   buf))
                               (buffer-list)))
                 search-term 5)))
(global-set-key (kbd "M-s-.") 'search-open-css-buffers-for-region-or-word)
It feels like this is a hack though.
There are a couple of things that make this non-generic:
depending on which HTML mode you use, you might need different ways of detecting what is at point
where are your CSS files? People have all sorts of different project structures
how do you want the output to look?
Having said that, here's an example, for nxml-mode (though it could easily be adapted).
(defun find-css-defun ()
  (interactive)
  (when (memq 'nxml-attribute-value
              (get-text-property (point) 'face))
    (let ((css-buffers
           (mapcar
            (lambda (file)
              (find-file-noselect file))
            (directory-files default-directory t ".*\\.css$"))))
      (multi-occur css-buffers
                   (thing-at-point 'symbol)))))
answer to the first point: we're using nxml, so that's how we detect whether we're on a CSS attribute value or not
answer to the second point: we just scan through all CSS files in the same directory as the HTML; css-buffers is a list of the opened CSS file buffers, and this is clearly adaptable
we use multi-occur to present the results
It works ok.
PS: it requires thing-at-point, which ships with Emacs by default these days.
I have a triply nested component (each in a different file) in Reagent + ShadowCLJS. When I edit and save the innermost component's file, the changes don't show immediately; they only appear after I edit and save the parent component.
For example, NAV is nested in DASHBOARD, which itself is nested in APP. Editing and saving DASHBOARD results in the changes showing in the browser, but editing and saving NAV does not, until DASHBOARD itself is modified; then NAV's changes show up in the browser.
Example code:
(ns app.core
  (:require [app.views.dashboard :as dash]))

(defn app []
  [dash/dashboard])

(ns app.views.dashboard
  (:require [app.components.nav :as nav]))

(defn dashboard []
  [:div
   [:div "Dashboard"]
   [nav/nav]])

(ns app.components.nav)

(defn nav []
  [:div "Navigation"])
Build configuration:
;;shadow-cljs.edn
...
{:app {:target :browser
       :modules {:main {:entries [app.core]}}}}
...
I tried un-nesting the namespaces so that the component files live next to each other in the same directory (while still being triply nested in the render tree). This also does not work.
I wrote about hot-reload in CLJS, maybe you can find an answer there.
Normally I'd expect your setup to just work, but I suspect that reagent/react decides to skip rendering at some point. They memoize some components and since CORE doesn't change, when touching NAV it may decide that nothing needs to be done.
You can force a full reload by setting :devtools {:reload-strategy :full} in your build config, which should address this problem. It may however become somewhat slow in larger builds.
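For reference, a minimal sketch of where that setting would go, reusing the :app build id from the config above (everything else elided):
;; shadow-cljs.edn (sketch)
{:builds
 {:app {:target :browser
        :modules {:main {:entries [app.core]}}
        :devtools {:reload-strategy :full}}}}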
After 5 hours of fiddling around, it turns out that if C is nested in B, which is nested in A (A (B (C))), not only do you have to require C inside B, you also have to require it inside A. So my solution was to also :require app.components.nav inside app.core.
I'm not very sure why this behaves like this, so if anyone else would like to respond with an explanation, that would be welcome!
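For reference, a minimal sketch of that workaround using the namespaces from the question; the extra require is the only change:
;; app/core.cljs -- also require the grandchild component
(ns app.core
  (:require [app.views.dashboard :as dash]
            [app.components.nav :as nav]))  ;; added so edits to nav are picked up

(defn app []
  [dash/dashboard])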
I'm working on a project that has:
Bootstrap
FontAwesome
Its own custom stylings
When I write something like this:
.some-el {
font-size: $f (and then wait)
}
Then it thinks for a second and then suggests something like this:
$focus [my variable]
$fa-font-display [implicitly imported from font-awesome]
$fa-font-path [implicitly imported from font-awesome]
$fa-css-prefix [implicitly imported from font-awesome]
$font_size_base [my variable - that I was looking for]
...
...
It gets better with time, since it remembers what I've used previously - so I guess this is something that would fix itself. But it would be awesome to be able to fix it myself right away.
This is just an example where the FontAwesome variables are a nuisance; other times it's the Bootstrap variables.
How can I define which SASS variables are suggested (and/or the order of the suggestions)?
Solution attempts
Googled a bunch.
Looked through the settings for 'Code Completion' and 'Code Style'
It's not possible. The only workaround I can think of is excluding from indexing the folder that holds the .css/.scss files you don't want completion from (Mark directory as / Excluded).
Related feature request: WEB-41257
I am working in ClojureScript and would like to serialize a massive EDN data structure (in particular: a large map) in the form of a text file (in the same way that JS objects are stored as .json files). Performance concerns are not an issue.
Is this possible, and if so, is there considered a standard/best practice way to do this?
Yes.
Use pr-str or clojure.pprint/pprint to write EDN and use clojure.edn/read-string to ingest EDN.
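A minimal round trip might look like this (in ClojureScript the EDN reader lives in cljs.reader; on the JVM you would use clojure.edn instead):
(ns example.edn-roundtrip
  (:require [cljs.reader :as edn]))

(def data {:name "massive map" :xs [1 2 3]})

(def as-text (pr-str data))              ;; a plain EDN string, ready to write out
(def restored (edn/read-string as-text)) ;; parse it back into Clojure data

(= data restored)                        ;; => true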
In ClojureScript you face the same challenges as JavaScript in accessing the filesystem from a browser. For example, saving a file from the browser can be a little tricky:
(defn save-file [filename t s]
  (if js/Blob
    (let [b (js/Blob. #js [s] #js {:type t})]
      (if js/window.navigator.msSaveBlob
        (js/window.navigator.msSaveBlob b filename)
        (let [link (js/document.createElement "a")]
          (aset link "download" filename)
          (if js/window.webkitURL
            (aset link "href" (js/window.webkitURL.createObjectURL b))
            (do
              (aset link "href" (js/window.URL.createObjectURL b))
              (aset link "onclick" (fn destroy-clicked [e]
                                     (.removeChild (.-body js/document) (.-target e))))
              (aset link "style" "display" "none")
              (.appendChild (.-body js/document) link)))
          (.click link))))
    (log/error "Browser does not support Blob")))
So it depends on the context of how you access the files, but so long as you can get/put strings, it's as easy as pr-str and edn/read-string.
It is very possible.
This approach gives you a URL string such as "blob:http://localhost:3000/4a6407c6-414e-4262-a194-28bd2b72be00", where your data will be available for download in the browser.
(defn download-as-edn [coll]
  (-> coll
      str
      vector
      clj->js
      (js/Blob. #js {:type "text/edn"})
      js/URL.createObjectURL))
Notice that Blob takes a sequence, so we pass it the EDN string inside a vector.
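A usage sketch (the data and filename here are just illustrative): hand the returned blob URL to an anchor element so it can be clicked and downloaded:
;; Illustrative only: wire the blob URL from download-as-edn to a clickable link.
(let [url  (download-as-edn {:a 1 :b [2 3]})
      link (js/document.createElement "a")]
  (set! (.-href link) url)
  (set! (.-download link) "data.edn")  ;; suggested filename for the download
  (.appendChild (.-body js/document) link)
  (.click link))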
I am writing an update to my rNOMADS package to include all the models on the NOMADS web site. To do this, I must search the HTML directory tree for each model. I do not know how deep this tree is, or how many branches it contains, beforehand. Therefore I am writing a simple web crawler to recursively search the top page for links, follow each link, and return the URLs of pages that have no more links. Such a page is the download page for model data. An example of a URL that must be searched is the one used in the code below.
I want to get the addresses of all web pages below this one.
I have attempted this code:
library(XML)

url <- "http://nomads.ncep.noaa.gov/cgi-bin/filter_cmcens.pl"

WebCrawler <- function(url) {
    doc <- htmlParse(url)
    links <- xpathSApply(doc, "//a/@href")
    free(doc)
    if(is.null(links)) { #If there are no links, this is the page we want, return it!
        return(url)
    } else {
        for(link in links) { #Call recursively on each link found
            print(link)
            return(WebCrawler(link))
        }
    }
}
However, I have not figured out a good way to return a list of all the "dead end" pages.
Instead, this code will only return one model page, not the whole list of them.
I could declare a global variable and have the URLS saved to that variable, but I am wondering if there is a better way to go about this. How should I go about constructing this function to give me a list of every single page?
Your error is in the recursion:
## THIS IS INCORRECT
for(link in links) { #Call recursively on each link found
    print(link)
    return(WebCrawler(link))    <~~~ Specifically this line
}
The return inside the loop means you never traverse the whole tree; you just dig deeper and deeper along a single branch:
        *
       / \
          \
           \
            \
             \
              *
You don't want to return the value of WebCrawler(link).
Rather you want to capture that value, then return the collection of values.
ret <- vector("list", length=length(links))
for(link in links) { #Call recursively on each link found
    print(link)
    ret[[link]] <- WebCrawler(link)    <~~~ Specifically this line
}
return(ret) # or return(unlist(ret))
Update:
It might be worth considering what you expect as the final output. You will get a deeply nested list. If you just want the end nodes, you can unlist(.., recursive=TRUE, use.names=FALSE), or you can even unlist as you go along, though that will probably slow you down. It might be worth benchmarking to be sure.
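Putting the pieces together, here is a minimal sketch of the corrected function (assuming the XML package from the question and that the extracted hrefs are absolute URLs), followed by the flattening step mentioned above:
library(XML)

WebCrawler <- function(url) {
    doc   <- htmlParse(url)
    links <- xpathSApply(doc, "//a/@href")
    free(doc)
    if (is.null(links)) {
        return(url)                         # dead end: no links, so this is a model page
    }
    ret <- vector("list", length = length(links))
    for (i in seq_along(links)) {
        ret[[i]] <- WebCrawler(links[[i]])  # collect each branch instead of returning early
    }
    ret
}

## Flatten the nested list into a plain character vector of end pages:
## end.pages <- unlist(WebCrawler(url), recursive = TRUE, use.names = FALSE)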
I am using knitr's knit() to convert my .Rhtml file to an .html file.
I am including the output of a chunk called Q1:
<!--begin.rcode Q1,echo=FALSE,fig.show="all",fig.align="center",warning=FALSE
end.rcode-->
Here is the chunk; it is basically a set of ggplot2 figures in a 2x2 layout.
library(ggplot2)
library(gridExtra)  # provides grid.arrange
myplot = list()
for (i in 1:4){
    x = 1:100
    y = sample(100,100)
    data = data.frame(x=x,y=y)
    myplot[[i]] = ggplot(data,aes(x=x,y=y))+geom_point()+labs(title="bla")
}
do.call(grid.arrange,c(myplot,list(nrow=2,ncol=2)))
Now, in the resulting HTML file, I would like to incorporate the following feature:
I would like clicking on the title of each plot to follow a link (e.g. to a database).
Is this somehow possible?
Thx
This doesn't completely answer your question, but it might get you or someone else started on a full answer.
Paul Murrell's gridSVG package (see also this useful pdf doc) allows one to add hyperlinks to grid-based SVG graphics. (In theory it should thus work with ggplot2; in practice I've only got it working with lattice.) The current issue of the R Journal includes a couple of articles ("What's in a name?" and "Debugging grid graphics" -- warning: pdfs) that might help you design dynamic searches for the name of the grob to which you'd like to add a link (as in the second line of code below).
library(gridSVG)
library(lattice)
xyplot(mpg~wt, data=mtcars, main = "Link to R-project home")
mainGrobName <- grep("main", grid.ls()[[1]], value=TRUE)
grid.hyperlink(mainGrobName, "http://www.r-project.org")
gridToSVG("HyperlinkExample.svg")