I started learning Ruby/Rails about a month ago, but haven't been able to find many resources specific to my issue.
I understand that in HTML/JS you can do something like:
let elements = document.getElementsByName('name')
Is there a way in Rails to get elements that share a class/id/name?
How can we interact with those elements? For example, if a div with a specific name already exists, can we append some data from our Rails application to that div instead of creating a new one?
Thank you in advance.
Unless I'm really missing the point, what you're asking for isn't possible.
DOM manipulation (using Javascript) is something that happens client-side, in the browser; the browser requests a page, the server responds with an HTML document, and then the browser builds the DOM and we go from there, running Javascript, and potentially inspecting and manipulating the DOM.
Ruby on Rails is server-side; in the above description, it is involved in the "the server responds" step, but there is no DOM at that point; it is simply generating an HTML document using models, views, and controllers.
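What you can do is handle this in the browser once Rails has rendered the page. A minimal client-side sketch (the /widgets.json endpoint and the shape of the JSON it returns are hypothetical - they stand in for any Rails action that renders JSON):

document.addEventListener('DOMContentLoaded', async function () {
  var target = document.getElementsByName('name')[0]; // an element the rendered page already contains
  if (target) {
    var response = await fetch('/widgets.json');      // ask the Rails app for data
    var data = await response.json();
    // append to the existing element instead of creating a new one
    target.insertAdjacentHTML('beforeend', '<p>' + data.message + '</p>');
  }
});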
I am currently trying to do this: once the webpage loads, find out if the URL matches a certain pattern (say www.wikipedia.com/*); if so, parse the HTML content of that webpage the way one can with BeautifulSoup, and check whether the page has a div with class foo and id boo. Any idea where I can write this code? That is, where do I get access to the URL, what do I need to listen to in order to know that the webpage has finished loading (so that I can then look at the URL and HTML content), and where and how can I parse the HTML?
I tried going through the code in src/chrome/browser/tab_contents, but I could not find a reasonable place to do all this.
Take a look at the following conceptual application layers which represent how Chromium displays web pages:
Image Source: https://docs.google.com/drawings/d/1gdSTfvLxbJDbX8oiWo5LTwAmXmdMQvjoUhYEhfhj0-k/edit
The different layers are described as:
WebKit: Rendering engine shared between Safari, Chromium, and all other WebKit-based browsers. The Port is a part of WebKit that integrates with platform dependent system services such as resource loading and graphics.
Glue: Converts WebKit types to Chromium types. This is our "WebKit embedding layer." It is the basis of two browsers, Chromium, and test_shell (which allows us to test WebKit).
Renderer / Render host: This is Chromium's "multi-process embedding layer." It proxies notifications and commands across the process boundary.
WebContents: A reusable component that is the main class of the Content module. It's easily embeddable to allow multiprocess rendering of HTML into a view. See the content module pages for more information.
Browser: Represents the browser window, it contains multiple WebContentses.
Tab Helpers: Individual objects that can be attached to a WebContents (via the WebContentsUserData mixin). The Browser attaches an assortment of them to the WebContentses that it holds (one for favicons, one for infobars, etc).
Since your goal is to access and interpret the HTML content of a web page by element and/or class, you can look to the rendering process which uses Blink:
The renderers use the Blink open-source layout engine for interpreting and laying out HTML.
Blink has a WebDocument class which allows you to access the HTML content and other properties of a web page:
WebDocument document = GetMainFrame()->GetDocument();
WebElement element = document.GetElementById(WebString::FromUTF8("example"));
// The document's URL is available via document.Url().
The cleanest approach would be via the Chrome remote debugging protocol.
Use the DOM domain's methods to get the root DOM node and then walk, search, or query the DOM.
This would also make testing simpler: you can implement the logic in your favourite scripting language using an existing client library (there are many), and once that works, reimplement it in C++.
If this for some reason has to be in-process within Chromium, then as a next step start a thread that connects to that endpoint and performs the operations.
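As a rough illustration of the scripting-language route, here is a sketch using the Node.js chrome-remote-interface client (one of many protocol client libraries); the URL and the div.foo#boo selector are just examples taken from the question:

const CDP = require('chrome-remote-interface');

CDP(async (client) => {
  const { Page, DOM } = client;
  await Page.enable();
  await DOM.enable();
  await Page.navigate({ url: 'https://en.wikipedia.org/wiki/Main_Page' });
  await Page.loadEventFired();                 // the page has finished loading

  const { root } = await DOM.getDocument();    // root node of the page's DOM
  const { nodeId } = await DOM.querySelector({ // look for <div class="foo" id="boo">
    nodeId: root.nodeId,
    selector: 'div.foo#boo'
  });
  if (nodeId) {
    const { outerHTML } = await DOM.getOuterHTML({ nodeId });
    console.log(outerHTML);
  }
  await client.close();
});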
You need to use a server-side library to parse the contents of the requested HTML page. In Java, for example, there is the library jsoup; there are alternatives for other server-side languages. The main problem you could run into is forbidden access due to security restrictions, but since you are not trying to access REST services or anything similar, only parsing pure HTML to find string patterns, it should be easy to do with jsoup. I worked on a project where similar things were programmed to access web pages and parse the response HTML string.
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

Document doc = Jsoup.connect("http://jsoup.org").get(); // fetch and parse the page
Element link = doc.select("a").first();                 // first <a> element on the page
String relHref = link.attr("href");     // == "/"
String absHref = link.attr("abs:href"); // "http://jsoup.org/"
See: https://jsoup.org/
I am running a Spring Boot application with Thymeleaf and ReactJS. All the HTML text is read from message.properties using th:text in the pages, but when I put th:text in a ReactJS HTML block, ReactJS complains about it.
render() {
  return (
    <input type="text" th:text="#{home.welcome}">
  )
}
The error is:
Namespace tags are not supported. ReactJSX is not XML.
Is there a workaround besides using dangerouslySetInnerHTML?
Thank you!
There is no sane workaround.
You are getting this error because Thymeleaf outputs XML, and JSX parsers do not parse XML.
You did this because JSX looks very, very similar to XML. But they are very, very different, and even if you somehow hacked Thymeleaf to strip namespaced attributes and managed to get a component to render, it would be merely a fleeting moment of duct-taped-together, jury-rigged code that will fall apart under further use.
This is a really, really bad idea because JSX is Javascript. You are generating Javascript on the fly. Just to name a few reasons this will not work in the long term:
This makes your components difficult if not impossible to test.
Reasoning about application state will be a nightmare as you will struggle to figure out if the source of a certain state is coming from Thymeleaf or JS.
Your application will completely grind to a halt if Thymeleaf outputs bad JS.
These problems will all get worse with time (Thyme?) as developers abuse the ease with which they can render server-side data to the client side, leading to an insane application architecture.
Do not do this. Just use Thymeleaf, or just use React.
Sample Alternative: I primarily work on a React application backed by a Java backend. So I understand how someone could stumble upon this hybrid and think it might be a good idea. You are likely already using Thymeleaf and are trying to figure out how you can avoid rewriting your servlets but still get the power of React.
We were in a similar boat two years ago, except with an aging JSP frontend, but the difference is negligible. What we did (and it works well) is use a JSP page to bootstrap the entire React application. There is now one JSP page that we render to the user. This JSP page outputs JSON into a single <script> tag that contains some initial startup data that we would otherwise have to fetch immediately. This contains resources, properties, and just plain data.
We then output another <script> that points to the location of a compiled JS module containing the entire standalone React application. This application loads the JSON data once when it starts up and then makes backend calls for the rest. In some places, we have to use JSP for these, which is less than ideal but still better than your solution. What we do is have the JSP pages output a single attribute containing JSON. In this way (and with some careful pruning by our XHR library) we get a poor man's data interchange layer built atop a JSP framework we don't have time to change.
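As a rough sketch of that bootstrap step (the element id and the data shape here are illustrative, not our actual code):

// The JSP page emits something like:
//   <script id="initial-data" type="application/json">{"properties":{"locale":"en"}}</script>
//   <script src="/assets/app.bundle.js"></script>
// and the React entry point reads the JSON once at startup:
var initialData = JSON.parse(
  document.getElementById('initial-data').textContent
);
// ...then hands it to the root component, e.g. <App initialData={initialData} />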
It is definitely not ideal, but it works well and we have benefited vastly from the many advantages of React. When we do have issues with this peculiar implementation, they are easy to isolate and resolve.
It is possible to wrap ReactJS apps in Thymeleaf. Think of it this way: if you have a static, persistent part (like some links, or even just displayed data), you could use Thymeleaf. If you have a complicated part (something that requires DOM repaints, shared data, updates from UI/sockets/whatever), you could use React.
If you need to pass state, you could use Redux or other methods.
You could have your backend send data via a REST API to the React part, and just render your simple parts as fragments or as whole chunks of plain HTML using Thymeleaf.
Remember, Thymeleaf is really just HTML, while React is a virtual DOM that renders as HTML, so it is actually fairly easy to migrate one to the other. You could write anything "static", or anything that does not respond much to the UI, in Thymeleaf/HTML. You could also just render those parts in React too, but without state.
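As a hedged sketch of the React side pulling data from such a REST endpoint (the /api/orders path and the data shape are assumptions, not anything defined by Spring or Thymeleaf):

import React, { useEffect, useState } from 'react';

export default function Orders() {
  const [orders, setOrders] = useState([]);

  useEffect(() => {
    // served by the Spring Boot backend as plain JSON
    fetch('/api/orders')
      .then((res) => res.json())
      .then(setOrders);
  }, []);

  return (
    <ul>
      {orders.map((o) => (
        <li key={o.id}>{o.name}</li>
      ))}
    </ul>
  );
}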
Thymeleaf 3 also allows you to render variables from your Java code into a separate JS file, so that is another option for passing values into JSX:
function showCode() {
  // Thymeleaf replaces the inlined expression at render time;
  // '12345' is the fallback value seen when the template is opened statically.
  var code = /*[[${code}]]*/ '12345';
  document.getElementById('code').innerHTML = code;
}
Now you can also use data- prefixed attributes (e.g. data-th-text="${message}").
https://www.thymeleaf.org/doc/tutorials/3.0/usingthymeleaf.html#support-for-html5-friendly-attribute-and-element-names
I have an HTML page that displays an SVG element (a business process diagram) using some JavaScript libraries. A String variable, say 'str', needs to be passed to the HTML function.
After reading this, I plan to use widgets. So far I understand that I need to copy all the scripts to Widgets: Test. For creating the hook, I write:
{{#widget:Test|str=UserTask_1}}
The problem is that UserTask_1 is a variable as well; it is different each time.
Can someone help me with how I can add this dynamic information to my hook? The hook is a hyperlink from a previous page. On the previous page, I send str=UserTask_1 through a JavaWiki Bot.
PS: I have come across SMW for the first time. Please excuse me if my language is not very technical at the moment.
Thanks.
I'd like to achieve the following and I'm looking for ideas. I have a document and I want to present/transform its content as a nice SAPUI5 application. My idea is the following: a split app with the paragraph titles in the master view (plus a search function on top) and the respective content in the detail view.
I'd like to know from you if
a) you might want to share your ideas and hints on alternatives.
b) this can be achieved within one single file (i.e. all the code for the split app and the document content in one HTML file), maybe using pure HTML code (XML also feasible) - against the background of easily handling a large amount of text available in HTML.
c) if you happen to have/know a reusable template.
Thanks in advance!
An interesting question. I went through a similar exercise once, re-presenting my site with UI5.
To your questions:
(a) I would think that the approach you suggest is a good one
(b) You can indeed include the whole app in a single file; I do that often by using script templates, even with XML Views. You can see some examples in my sapui5bin repository, in particular in the SinglePageExamples folder. Have a look at this HTML file, for example: https://github.com/qmacro/sapui5bin/blob/master/SinglePageExamples/SAP-Inside-Track-Sheffield-2014/end.html
What I would suggest is, rather than intermingle the document content and the app & view definitions, maintain the content of your document separately, for example, in XML or JSON, and use a client side model to load it in and bind the parts to the right places.
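As a minimal sketch of that suggestion (the content.json file name and its structure are assumptions, not something from the question):

// e.g. content.json = { "sections": [ { "title": "Introduction", "text": "..." }, ... ] }
var oModel = new sap.ui.model.json.JSONModel();
oModel.loadData("content.json");
// make the model available to the app so the master list and the detail view can bind to it,
// e.g. items="{doc>/sections}" on the master view's List
sap.ui.getCore().setModel(oModel, "doc");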
I just want to automate a web application, where that application parses an HTML page and pulls the inner text of HTML tags based on some condition. For example, say we have a span tag whose class="spanclass_1":
This is a span tag...
which has a particular class/id, and the app parses and pulls that span into it.
And the main pain point here is that I should not use the developer's code to automate that same HTML parsing.
I want to check that the parsing is done correctly, simply by using the parsed data which is shown in the UI.
Any help would be great.
Appreciating your time reading this.
(Note: the span tag is not shown.)
Thanks buddies.
Not enough details.
Is this HTML page just a file on the local filesystem, or is it a web page on the internet?
Do you have access to the pages? Can you modify them? If the answer is yes, then just add JavaScript to the page which extracts the data and posts it to a server (see the sketch below).
If the answer is no, then it depends on the language you use to program.
Find a good framework to parse HTML, load the page, parse it, and extract the data. Several situations are possible:
Worst scenario - the page is generated on the client side using JS.
Best scenario - the page is in XHTML mode (you are lucky: any XML parser will help you build a DOM and extract the data).
So-so - the page is in plain HTML format (try several HTML parsers to find the most suitable one for you).
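For the "add JavaScript to the page" option, a minimal sketch (the /collect endpoint is a placeholder; the class name comes from the question above):

// collect the inner text of every span with the given class
var texts = Array.prototype.map.call(
  document.querySelectorAll('span.spanclass_1'),
  function (el) { return el.textContent.trim(); }
);

// post the extracted data to a server for the automated check
fetch('/collect', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ texts: texts })
});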