Can Go capture a click event in an HTML document it is serving?

I am writing a program for managing an inventory. It serves up HTML based on records from a PostgreSQL database, or writes to the database using HTML forms.
Different functions (adding records, searching, etc.) are accessible via <a></a> tags or form submits, which in turn call functions registered with http.HandleFunc(); those functions generate queries, parse the results, and render them to HTML templates.
The search function renders query results to an HTML table. To keep the search results page usable and uncluttered, I intend to provide only the most relevant information there. However, since many more details are stored in the database, I need a way to access that information too. To do that, I wanted to make each table row clickable, displaying the details of the selected record in a status area at the bottom or side of the page, for instance.
I could try to follow the pattern that works for the other functions, that is, use <a></a> tags and http.HandleFunc() to render new content, but this isn't exactly what I want, for a couple of reasons.
First: there should be no need to navigate away from the search results page to view the additional details; a single record's full details are compact enough to render on the same page as the search results.
Second: I want the whole row clickable, not merely the text within a table cell, which is all the <a></a> tags get me.
Using the ID returned from the database in an attribute, as in <div id="search-result-row-id-{{.ID}}"></div>, I am able to work with individual records, but I have yet to find a way to capture a click in Go.
Before I run off and write this in JavaScript: does anyone know of a way to do this strictly in Go? I am not particularly averse to using the tried-and-true JS methods, but I am curious to see whether it can be done without them.

does anyone know of a way to do this strictly in Go?
As others have indicated in the comments, no, Go cannot capture the event in the browser.
For that you will need to use some JavaScript to send the request for more information to the server (where Go runs).
You could also push all the required information to the browser when you first serve the page and hide/show it based on CSS/JavaScript events, but again, that's just regular web development and has nothing to do with Go.
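To make the first approach concrete, here is a minimal sketch of the browser-side script the answer describes (written as TypeScript). The /record/details endpoint, the status-area element, and the reuse of the question's row-ID scheme are all assumptions for illustration; on the Go side you would register the endpoint with http.HandleFunc just like the other handlers.

    // Hedged sketch: capture clicks on rows rendered by the Go template and
    // fetch the record's details from a hypothetical Go handler registered
    // as http.HandleFunc("/record/details", ...).
    document.querySelectorAll<HTMLElement>("[id^='search-result-row-id-']")
        .forEach(row => {
            row.addEventListener("click", async () => {
                const id = row.id.replace("search-result-row-id-", "");
                const resp = await fetch(`/record/details?id=${encodeURIComponent(id)}`);
                const statusArea = document.getElementById("status-area"); // assumed element
                if (statusArea) statusArea.innerHTML = await resp.text();
            });
        });

Because the handler returns a ready-made HTML fragment, the Go templates stay in charge of all rendering; the script only moves the fragment into the page.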

Related

Send and receive data to and from a website using the TWebBrowser component in Delphi

I'm creating a VCL application with Delphi 10.3 and want to support some web functionality by having the user enter the ISBN of a book into a TEdit component, then passing this value to the search field on this website: https://isbnsearch.org, after which the website looks up the ISBN and displays the author of the book. I want to somehow access the information (i.e. the author) presented by the search result and use it in my application.
What code can I use for this? Any other feasible suggestions or approaches are acceptable.
When performing a search on that website, it simply loads a page with a specific URL query string...
https://isbnsearch.org/search?s=suess
The above example is when I search for "suess", so you can easily concatenate a search URL.
You can use any HTTP component, such as TIdHTTP, to load this search page, then use an HTML parser to scrape the page and read what you need. That is much, much easier than trying to read through TWebBrowser.
In the end, you won't actually display the HTML (I mean you can if you want to), but the idea is to read the data and display it in your own format.
On that specific page, start by locating the ul element with id searchresults; each li element within it contains an individual result. Unfortunately, this website uses pagination and only shows 10 results per page. To fetch further pages, request the same URL with an additional parameter: &p=2 for the 2nd page, &p=3 for the 3rd, and so on.
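As a rough illustration of that scraping approach, here is a sketch in TypeScript rather than Delphi (the idea is language-independent); the markup assumptions come from the description above and will break if the site changes.

    // Rough sketch (Node 18+): fetch a results page and pull out the <li>
    // entries inside <ul id="searchresults">. Regex extraction is fragile;
    // a real HTML parser is preferable, exactly as the answer suggests.
    async function searchIsbn(term: string, page = 1): Promise<string[]> {
        const url = `https://isbnsearch.org/search?s=${encodeURIComponent(term)}&p=${page}`;
        const html = await (await fetch(url)).text();
        const list = html.match(/<ul id="searchresults">([\s\S]*?)<\/ul>/);
        if (!list) return [];
        // Each entry is one result's raw HTML, ready for finer parsing.
        return list[1].match(/<li[\s\S]*?<\/li>/g) ?? [];
    }

    searchIsbn("suess").then(items => console.log(items.length, "results"));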
On the other hand, scraping is the worst way to acquire such information. What you should be doing is using a proper API, which gives you machine-friendly data. The service you are referencing doesn't appear to offer one, but here's an example of one that does:
https://openlibrary.org/dev/docs/api/books - this also appears to provide you MUCH more information than the one you're using.
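For contrast, here is roughly what an API lookup looks like (sketched in TypeScript; verify the endpoint and response fields against the Open Library docs before relying on them):

    // Hedged sketch: structured book data from the Open Library API instead
    // of scraped HTML. Field names should be checked against the docs.
    async function lookupIsbn(isbn: string) {
        const resp = await fetch(`https://openlibrary.org/isbn/${isbn}.json`);
        if (!resp.ok) throw new Error(`lookup failed: ${resp.status}`);
        const book = await resp.json();
        console.log(book.title); // structured fields, no HTML parsing needed
    }

    lookupIsbn("9780394800165"); // example ISBN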

HTML Form: Can submitted GET/POST parameters be suppressed using only HTML or CSS?

I am volunteering on a website-based project that is trying to make all pages fully operable JavaScript-free before adding any JavaScript for enhancements, and I was asked to investigate whether a particular scenario could be handled purely through HTML/CSS.
We have a form that is populated to help us filter a list of tickets displayed on the screen after a page update through a GET action, which itself works fine. The concern with the current implementation is that the URL cannot be made into a permanent link. The request, to keep the permanent link as minimal as possible, is to send GET parameters only for fields that are populated with something (so, suppressing GET parameters for fields that are blank) instead of having a different GET parameter for each form field on the page.
I have thought of several ways this could be done, most involving JavaScript (for example: create fields with IDs but no names, plus a hidden field with a name, and use JS to pull the data from the other fields into it), but also one that would be a POST action with a redirect back to the GET with a human-readable string that could be used permanently. The lead dev, however, would prefer not to go through the POST/redirect method if at all possible.
That being said, I'm trying to make sure I cover all my bases and ask experts their thoughts on this before I strongly push for the POST/redirect solution: Is there a way using only HTML & CSS to directly suppress GET parameters of a form for fields that are blank without using a POST/redirect?
No, suppressing blank fields from being submitted by an HTML form with method="GET" is not possible without using JavaScript, or without instead submitting the form with a POST method and using a server-side function to minimize it.
Which fields are submitted is defined by the HTML specification, and HTML and CSS alone cannot modify this behavior while keeping the browser compliant with the standards.
No, you cannot programmatically suppress any default browser behavior without using some kind of client scripting language, like JavaScript.
As a side note, you say "JavaScript for enhancements", but these days JavaScript is no longer a mere enhancement, and no one in the real world would expect a decent front end without it. I would suggest you simply use JavaScript.
I do not think you can avoid JavaScript here; you need it to pre-process the form before submission and eliminate unchanged/empty fields.
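For completeness, the pre-processing those answers allude to can be very small. A sketch in TypeScript (the form's ID is invented for the example); it relies on the standard rule that disabled controls are never included in a form submission:

    // On submit, disable empty fields so they never reach the query string.
    const form = document.querySelector<HTMLFormElement>("#ticket-filter"); // hypothetical id
    if (form) {
        form.addEventListener("submit", () => {
            for (const el of Array.from(form.elements)) {
                const field = el as HTMLInputElement;
                if (field.name && field.value === "") {
                    field.disabled = true; // disabled controls are not serialized
                }
            }
        });
    }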

URL Masking in .Net / HTML

I have a website in which I have many categories, many sub-categories within each one and many products within each of those. Since the URLs are very user-unfriendly (they contain a GUID!!!), I would like to use a method which I think is called URL Masking. For example, instead of going to catalogue.aspx?ItemID=12343435323434243534, they would go to notepads.htm, which would somehow display the same thing that catalogue.aspx?ItemID=12343435323434243534 displays.
I know I could do this by creating a file for each category / sub-category (individual products cannot be accessed individually as it is a wholesale site - customers cannot purchase directly from the site). This would be a lot of work as the server would have to update each relevant file whenever a category / sub-category / product visibility changes, or a description changes, a name changes... you get the idea...
I have tried using server-side includes, but that doesn't work when a .aspx file is specified in an HTML file.
I have also tried using an iframe set to 100% width / height and absolutely positioned left 0 and top 0. This works quite well, but I know there are reasons you should not use this method such as some search engines not coping with it well. I also notice that the title of the "parent" page (notepads.htm) is not the title set in the iframe (logically this is correct - but another issue I need to solve if I go ahead and use this method).
Can anyone suggest another way I could do this, or tell me whether I am going along the right lines by using iframes? Thanks.
PS If this is the wrong name for what I am trying to do then please let me know what it actually is so I can rename / retag it.
Look into URL rewriting. You can create a regular expression and map it to your true URL. For example,
http://mysite.com?product=banana
could map to
http://mysite.com?guid=lakjdsflkajkfj3lj3l4923892&asfd=9234983920894893
I believe you mean URL Rewriting.
IIS 7+ has a rewrite module built in that you can use for this kind of thing.
URL rewriters solve the problem you are describing - when someone requests page A, display page B - in a general way.
But yours is not a general requirement. You seem to have a finite GUID-to-shortname mapping requirement. This is the kind of thing you could or should set up in your app yourself, rather than inserting a new piece of machinery into your system.
Within a default .aspx page, you'd simply look up the short name from the URL in a persistent table stored somewhere, and then call Server.Transfer() to the GUID-named page associated with that short name.
It should be easy to prototype this.
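The same lookup-and-transfer idea, sketched outside ASP.NET for illustration (TypeScript on Node 18+; the table, route names, and rendering stub are invented for the example):

    import http from "node:http";

    // Hypothetical persistent shortname-to-GUID table (the answer suggests a
    // database lookup; a literal map keeps the sketch self-contained).
    const shortNames: Record<string, string> = {
        notepads: "12343435323434243534",
    };

    // Stand-in for the real catalogue rendering logic.
    function renderCatalogue(guid: string, res: http.ServerResponse) {
        res.writeHead(200, { "Content-Type": "text/html" });
        res.end(`<h1>Catalogue item ${guid}</h1>`);
    }

    http.createServer((req, res) => {
        // /notepads.htm -> "notepads"; the address bar keeps the friendly URL
        // while the server serves the GUID-keyed content, like Server.Transfer().
        const name = (req.url ?? "/").replace(/^\//, "").replace(/\.htm$/, "");
        const guid = shortNames[name];
        if (guid) {
            renderCatalogue(guid, res);
        } else {
            res.writeHead(404);
            res.end("Not found");
        }
    }).listen(8080);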

Methods to copy form values from one form to another?

I need to copy values from a form displayed in one HTML page (incoming.html) to another HTML page (outgoing.html) in a browser. What would be the best approach to this? I have tried using iMacros but have not been able to figure it out. I believe there can be a JavaScript solution to the above. I need the feature in a prototype, so efficiency does not matter.
The best option would be to use a server-side scripting language. Without one, you can pass values between two HTML pages in two ways.
First, store them in cookies using JavaScript. If cookies are not enabled, this would be a problem.
Second, pass the values through the URL, as PHP does; this method is not very secure if you're passing information that you do not want the user to view.
For example:
<a href="something.htm?Something=Passing some variables">
You can fetch them in the next page using Javascript and display them.
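A sketch of the URL approach on the receiving page, in TypeScript (the assumption is that the field name attributes match the parameter names):

    // On outgoing.html: read the values incoming.html put in the query
    // string and copy them into fields with matching name attributes.
    const params = new URLSearchParams(window.location.search);
    for (const [name, value] of params) {
        const field = document.querySelector<HTMLInputElement>(`[name="${CSS.escape(name)}"]`);
        if (field) field.value = value;
    }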

REST/Ajax deep linking compatibility - Anchor tags vs query string

So I'm working on a web app, and I want to filter search results.
A nice restful implementation might look like this:
1. mysite.com/clothes/men/hats+scarfs
But let's say we want to Ajax up the filtering, like the cool kids, while retaining deep linking; we might use the URL fragment (the part after the #) and parse it with JavaScript to show the correct listings:
2. mysite.com/clothes#/men/hats+scarfs
However, if someone clicks the first link with JS enabled, and then changes filters, we might get:
3. mysite.com/clothes/men/hats+scarfs#/women/shoes
Urk.
Similarly, if someone does not have JS enabled and clicks link 2, the JS will not parse the options and the correct listings will not be shown.
Are Ajax deep links and non-Ajax links incompatible? It would seem so, since servers cannot parse the # part of a URL; it is never sent to the server.
There's a monkeywrench being thrown into this issue by Google: A proposal for making Ajax crawlable. Google is including recommendations for url structure there that may give you ideas for your own application.
Here's the wrapup:
In summary, starting with a stateful URL such as http://example.com/dictionary.html#AJAX, it could be available to both crawlers and users as http://example.com/dictionary.html#!AJAX, which could be crawled as http://example.com/dictionary.html?_escaped_fragment_=AJAX, which in turn would be shown to users and accessed as http://example.com/dictionary.html#!AJAX
View Google's Presentation here (note: google docs presentation)
In general, I think it's useful to simply turn off JavaScript and CSS entirely and browse your website or web application to see what ends up getting exposed. Once you get a sense of what's visible, you will understand what most search engines see, and that in turn will show you what is and is not getting spidered.
If you go to mysite.com/clothes/men/hats+scarfs with JavaScript enabled, then your JavaScript should automatically rewrite that to mysite.com/clothes#men/hats+scarfs. When you click on a filter, the click should be handled by JavaScript, meaning you'll only change the hash rather than the entire URL (as you're going to have to return false anyway).
The problem you have is non-JS users arriving at your JS-enabled deep links, as the server can't see that part of the URL. Unfortunately, the only thing you can do is take them to mysite.com/clothes and make them start their journey again (as far as I'm aware). You'll need to try to ensure that when people link to the site, they use the hardcoded deep link rather than the hashed one.
I don't recommend ever using the query string, as you are sending data back to the server without direct relevance to the previously specified destination. That is a corruptible security hole, since malicious code can be manually added to the query string to attempt an XSS or buffer-overflow attack against your web server.
I believe REST was intended to work with absolute URIs without a query string, because then you're specifying only the location of a resource, and it is that location that is descriptive and semantically relevant, in addition to the possibility of the resource itself being equally relevant. Even if there is no resource at the specified path, you have still instantiated a potentially unique and descriptive location that can be processed accordingly.
Users entering the site via deep links
Nonsensical links (like /clothes/men/hats#women/shoes) can be avoided if you construct your Ajax initialisation code in such a way that users who enter the site on filtered pages (e.g. /clothes/women/shoes) are taken to the /clothes page before any Ajax filtering happens. For example, you might do something like this (using jQuery):
$("a.filter")
.each(function() {
var href = $(this).attr("href").replace("/clothes/", "/clothes#");
$(this).attr("href", href);
})
.click(function() {
update_filter($(this).attr("href").split("#")[1]);
});
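A complementary step, implied but not shown in the answer, is to apply any filter already present in the hash when the page loads, so pasted deep links take effect (sketch in TypeScript; update_filter is the same hypothetical routine as above):

    // On load, read the fragment and hand it to the filtering routine.
    declare function update_filter(path: string): void;

    window.addEventListener("DOMContentLoaded", () => {
        const filter = window.location.hash.slice(1); // drop the leading "#"
        if (filter) update_filter(filter);
    });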
Users without JavaScript
As you said in the question, there's no way for the server to know about the URL fragment so filtering would not be applied for users without JavaScript enabled if they were given a link to /clothes#filter.
However, even without filtering, these links could be made more meaningful for non-JS users by using the filter strings as IDs in your /clothes page. To prevent this from messing with the Ajax experience, the IDs would need to be changed (or the elements removed) with JavaScript before the Ajax links were initialised.
How practical this is depends on how many categories you have and what your /clothes page contains.