Convention for adding data to a clickable element - html

This question is perhaps too philosophical for SO but I wanted to give it a try.
We are using Google Tag Manager (GTM) to track website events. GTM listens for clicks and lets you filter based on, e.g., element class or ID.
Consider this scenario. We have several PDFs on a particular page and want to know how popular they are. Each PDF can either be downloaded or viewed online, and each PDF has a name. So I need to capture the following data:
whether the PDF was downloaded or viewed online
the name of the PDF
For the name we can presumably use the name or title attribute, but whether it was viewed online or downloaded is more custom.
I am going to ask our developers to edit the HTML so that I can configure GTM to listen for these clicks and pass the data back into our web reporting tool.
I can tell them to add a class to all of them, e.g. class="pdf_clicks". What would be a conventional means of adding the other two data points? A custom attribute? I know I can use title for the name. What about whether it was viewed online or downloaded? Is there a generic metadata-type attribute for this?
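For illustration, the kind of markup I imagine asking the developers to add (the data-* attribute names and file paths here are just made up by me, I don't know if this is the conventional approach) would be something like:

<!-- hypothetical markup; data-pdf-name and data-pdf-action are invented names -->
<a href="/files/annual-report.pdf"
   class="pdf_clicks"
   title="Annual Report"
   data-pdf-name="Annual Report"
   data-pdf-action="download">Download the report</a>

<a href="/viewer?file=annual-report.pdf"
   class="pdf_clicks"
   title="Annual Report"
   data-pdf-name="Annual Report"
   data-pdf-action="view">View online</a>

As far as I understand, GTM could then read these values with a DOM Element or custom JavaScript variable (in the DOM they show up on the element's dataset, e.g. element.dataset.pdfAction).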

Related

How to convert JavaScript, CSS, and HTML content into an interactive PDF or .h5p page

I have a webapp that lets users place dots on a sitemap and link them to images.
The web app uses Javascript, CSS, and HTML.
Phase 1
While the user is subscribed, he uses a rich set of functionalities to:
add dots on the sitemap and link them to images
edit the dots: move, delete, link multiple images, etc.
This is done via the website that hosts the webapp.
Phase 2
When the user ends the subscription, he gets a .zip file with the information that he created (the sitemap, images, links between the sitemap and the images, etc.).
The user can then connect to the website that hosts the webapp without signing in and get a subset of the functionality (e.g. he can only click on the dots and see the linked images, but he can no longer edit the dots or add images).
I want to change Phase 2.
Instead of interacting with the webapp on the website, I want to "freeze" the webapp into an interactive PDF or .h5p page that can be played independently of the webapp.
There are multiple reasons motivating this:
the webapp is complex, so interacting with it is more error-prone.
If the small subset of functionality needed for the final data, which boils down to showing the image when clicking on the hyperlink, can be done by browsing an .h5p file, then the risk of runtime errors is greatly reduced.
the interactive PDF or .h5p file can be viewed with a variety of tools, potentially even offline.
the end product can be redesigned to appear simpler.
My questions:
Is it possible to programmatically convert the JavaScript, CSS, and HTML content into an interactive PDF or .h5p page?
Every end product will be different (e.g. in the number of dots and their location on the sitemap), so having to manually create the .h5p page every time is not practical.
Are there mobile apps (e.g. on the App Store or Google Play) that can read .h5p content locally, e.g. when the device is offline?
Thanks
EDIT:
Oliver Tacke, thank you for replying.
Up to a few days ago, while looking for a solution to my problem, I had not heard about H5P at all.
When looking into H5P, I see that:
many comments related to H5P are a bit old, from roughly 5-6 years ago.
H5P is frequently talked about in the context of education (e.g. Moodle).
when I filed the question I could not even find a tag for 'h5p'.
I could not find forums for H5P in mainstream channels like Discourse or Slack.
So I want to know if I'm in the right direction at all.
Is H5P a new thing that just takes time to pick up, or is it something that started a while ago and dwindled down,
or maybe I'm wrong and it is currently more active than I think (I'm aware of h5p.org and I do see activity there).
Basically, I want to create interactive content that can work
ideally offline, or
online but with a mainstream browser/tool/website (i.e. without needing my special website)
In the design industry, I know there are interactive catalogues.
But I don't know whether users can download them and somehow (e.g. with an EPUB reader) read them.
Thanks
I don't know anything about creating PDFs programmatically, so I can only offer a partial answer for the H5P related part. Given the broad scope of your question, this may be acceptable as a comment.
H5P content follows a specification that is documented at https://h5p.org/documentation/developers/h5p-specification.
You would basically have to implement an H5P content type library (file) from the files that you are given by the service. I assume that the JavaScript and CSS files are always the same; if so, those could be reused directly (but potentially not legally). You would also have to add some more JavaScript that takes parameters and generates the HTML output that you get from the service. You would then have to model semantics.json to suit the parameters, and then you essentially have an H5P content type. You don't have to use the then-available form-based editor (which probably wouldn't make sense), but you could create the content.json file programmatically and put it into the H5P content file archive.
To create that file programmatically, you'd have to write a converter that identifies the parameters in the HTML file generated by that service and transforms them into the H5P semantics/content format. I am not sure whether it would make more sense to instead create an editor widget for H5P, so you wouldn't have to depend on the other service at all.
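To make that last step a bit more concrete, here is a minimal sketch (Node.js) of what such a converter could look like. Every file name and field name in it (export/dots.json, sitemap, hotspots, x, y, images) is an assumption for illustration; they would have to match whatever the service actually exports and whatever you define in your semantics.json.

// Hypothetical converter sketch: read the dots exported by the web app and
// write a content.json for a custom H5P content type. All field names are
// illustrative and must mirror the semantics.json of your library.
const fs = require('fs');

const exportedDots = JSON.parse(fs.readFileSync('export/dots.json', 'utf8'));

const content = {
  sitemap: { path: 'images/sitemap.png' },
  hotspots: exportedDots.map(dot => ({
    x: dot.x, // position on the sitemap, e.g. as a percentage
    y: dot.y,
    images: dot.linkedImages.map(img => ({ path: img }))
  }))
};

// content.json goes into the content/ folder of the .h5p archive
fs.writeFileSync('content/content.json', JSON.stringify(content, null, 2));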
There are currently no known mobile apps that allow you to load and run H5P content. They are on the roadmap of the H5P core team, but I wouldn't expect them to work on those any time soon. There's the Moodle app for the Moodle LMS that allows you to use H5P content offline, but the content needs to be fetched from a Moodle instance. There's Lumi, which allows you to run H5P content locally on Windows, macOS and Linux, but not on Android or iOS. However, Lumi also allows you to create single standalone HTML files from H5P content, containing all the content and logic ready to play, so that would allow offline use on Android and iOS.

Saving static HTML page generated with ReactJS

Background:
I need to allow users to create web pages for various products, with each page having a standard overall appearance. So basically, I will have a template, and based on the input data, an HTML page needs to be generated for each product. The input data will be submitted via a web form, after which the data should be merged with the template to produce the output.
I initially considered using a pure templating approach such as Nunjucks, but moved to ReactJS as I have prior experience with the latter.
Problem:
Once I display the output page (by adding the user input to the template file with placeholders), I get the desired output page displayed in the browser. But how can I now obtain the HTML code for this specific page?
When I tried to view the source code of the page, I see the contents of 'public/index.html' stating:
This HTML file is a template.
If you open it directly in the browser, you will see an empty page.
Expectedly, the same happens when I try to save (Save As...) the HTML page via the browser. I understand why the above happens.
But I cannot find a solution to my requirement. Can anyone tell me how I can download/save the static source code for the output page displayed in the browser?
I have read about possible solutions such as installing the 'React/Redux Development Extension', etc., but these would not work as a solution for external users (who cannot be expected to install these extensions to use my tool). I need a way to do this in a production environment.
p.s. Having read the "background" info of my task, do let me know if you can think of any better ways of approaching this.
Edit note:
My app is currently actually just a single page that accepts user data via a form and displays the output (in a full-screen dialog). I don't wish to have these output pages 'published' on the website; they are simply to be saved/downloaded for internal use. So simply being able to get the "source code" for the displayed view/page in the browser and saving it to a file would solve my problem. But I am not sure if there is a way to do this?
It's recommended that you use a well-known static site generator such as Gatsby or Next for your static sites, since "npx create-react-app my-app" is for single-page apps.
(ref: https://reactjs.org/docs/create-a-new-react-app.html#recommended-toolchains)
If I'm understanding correctly, you need to generate a new page link for each user. Each of your users will have their own link (http/https) to share with their users.
For example, a scheduling tool will need each user to create their own "booking page", which is a generated link (it could be on your domain, e.g. www.yourdomain.com/bookinguser1).
You'll need user profiles to store each user's custom page, a database, and so on. If you're not comfortable with that, I'd use something like an e-commerce tool that will do it for you.
You can open the developer tools (F12) and go to "Elements".
Then right-click on the <html> tag and choose "Edit as HTML".
Then copy everything (Ctrl + A).
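If you want to do the same thing programmatically, so your external users don't have to open the developer tools, one possible sketch using only standard browser APIs is below. It simply serializes the DOM as currently rendered and offers it as a download; external CSS and images are not bundled, so treat it as a starting point. (ReactDOMServer.renderToStaticMarkup from react-dom/server is another option if you would rather produce the markup from a component tree instead of the live DOM.)

// Sketch: serialize the currently rendered page and trigger a download.
// The default filename is arbitrary; adjust as needed.
function downloadRenderedHtml(filename) {
  const html = '<!DOCTYPE html>\n' + document.documentElement.outerHTML;
  const blob = new Blob([html], { type: 'text/html' });
  const url = URL.createObjectURL(blob);
  const a = document.createElement('a');
  a.href = url;
  a.download = filename || 'page.html';
  document.body.appendChild(a);
  a.click();
  a.remove();
  URL.revokeObjectURL(url);
}

You could wire this up to a "Download page" button in the full-screen dialog that shows the output.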

Add custom input form in MediaWiki homepage

Where can I put custom input form code on the MediaWiki homepage?
This is so I can modify it into fewer steps for a user to create a new page. The input form will be for entering the title of the new page.
Currently, when adding a page, the user has to search for the page, and if it doesn't exist, they are redirected to another page with a link to add the new page. After that, the built-in WikiEditor loads (I will also modify this to default to the VisualEditor extension I integrated instead of WikiEditor).
Any input would be greatly appreciated.
There are a number of extensions that can do what you want:
InputBox, which is bundled with recent versions of MediaWiki. It is used on Wikimedia wikis, and is thus probably very stable.
CreateBox, specifically for letting users create pages.
Create Page, a more general approach.
Semantic Forms, the most full-fledged, but also the most complex; it requires the Semantic MediaWiki extension.
You might also want to combine this with some boilerplate extension, e.g. Preloader.
As you are posting on SO, I assume that developing your own extension would also be an option. In that case, have a look at the parser functions manual: https://www.mediawiki.org/wiki/Manual:Parser_functions
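As an illustration of the first option, the bundled InputBox extension can be dropped straight into the wikitext of the homepage; a minimal sketch (the button label and width are just placeholder values) looks roughly like this:

<inputbox>
type=create
width=40
buttonlabel=Create new page
</inputbox>

The user types the title of the new page into the box, and submitting it takes them straight to the editor for that page.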
The file in which I can add/modify a custom input form on the MediaWiki homepage would be /rootWikiDir/skins/Vector.php.

Obtaining the URL for a Particular Page in DotNetNuke 7

I have created a page in DNN 7 and added the standard feedback module available at Codeplex to it. Now I want to link to this page using a hyperlink in the middle of another page (not from a menu).
I am able to see the URL for the feedback page via the admin pages, and it seems to be consistent. So the obvious way would be to use the HTML module and simply hard-code the URL. But something feels wrong about that. I thought of creating a simple module, encapsulating the hyperlink and surrounding text in a control, and using NavigateURL to obtain the URL for the feedback page. Unfortunately, I have not been able to figure out how to do that. I have seen a lot of information about getting the URL for other controls within the same module, and even using ModuleID, but nothing that would help me implement the code for getting the URL for a particular page at my level of experience.
Sorry about the long intro but I was wondering if it is good practice to hardcode the URL and if not how to programmatically obtain the URL for the feedback page.
TIA
The first argument to NavigateURL is TabId (pages are called tabs in the DNN API). To get the ID of the Feedback tab/page, you'll want to call a method off of the DotNetNuke.Entities.Tabs.TabController class; I'd suggest the static method TabController.GetTabByTabPath(portalId, tabPath, cultureCode), so something like this:
Globals.NavigateURL(TabController.GetTabByTabPath(this.PortalId, "//Feedback", string.Empty))
You're still hard-coding the path to the page here; you could add a setting that would let you pick the page, but that seems like a bit of overkill for a simple link. The main benefit of hard-coding the path but still using NavigateURL is that any changes you make to how URLs are generated (e.g. upgrading to the Advanced URL Provider that comes in DNN 7.1) will be picked up automatically.
Most folks don't worry much about programmatically generating links in HTML content.

Is there any way of making JSON data readable by a Google spider?

Is it possible to make JSON data readable by a Google spider?
Say, for instance, that I have a JSON feed that contains the data for an e-commerce site. This JSON data is used to populate a human-readable page in the user's browser. (I.e. the translation from JSON data to the displayed page is done inside the user's browser; not my choice, just what I've been given to work with: it's an old legacy CGI application, not an actual server-side scripting language.)
My concern is that the Google spiders will not be able to pick up / directly link to the item in question, so when a user clicks on it in Google they will be presented with an index page full of all the items rather than being linked directly to the item they clicked on.
Is there any way of "informing" the Google spider, via the JSON, that it should feed the user a different link?
While Google does crawl and index JavaScript in some circumstances, it's still best to serve "normal" (X)HTML content if at all possible. In this case, it would help to know the rest of the site's setup, in particular: is the JSON content just used to create a feed of links to the product pages (which have static content), or are all product pages also generated by JSON feeds? If the feed is only used to point to the actual product pages (which are static), then one way to make the product pages discoverable could be to create an HTML sitemap page or some other alternate form of navigation. An XML Sitemap file can also help, but I would recommend not using it as the sole way of making the product pages discoverable.
If all of the content is only accessible through JSON feeds, then I think you will have to make some bigger changes if you want that content to be accessible through search results.
One way to handle it could also be to use the new JavaScript crawling/indexing proposal, which basically would result in a headless browser being set up between your site and Google: http://code.google.com/web/ajaxcrawling/ (whether setting this up or revamping the rest of the site is easier is hard to say :-))
You should make a wrapper page in server-side code around the JSON data, and respond to requests with either the wrapper or the regular version depending on the User-Agent.
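A rough sketch of that idea, assuming a Node.js/Express front layer in place of whatever the legacy CGI application exposes (the bot pattern, URL, and renderProductHtml helper are all placeholders invented for illustration):

// Hypothetical sketch: serve pre-rendered HTML to crawlers and the regular
// JSON-driven page to everyone else, keyed off the User-Agent header.
const express = require('express');
const app = express();

const BOT_PATTERN = /googlebot|bingbot|slurp/i; // illustrative, not exhaustive

// Placeholder: in reality this would read the JSON feed and template it
// into plain, crawlable HTML on the server.
function renderProductHtml(id) {
  return '<html><body><h1>Product ' + id + '</h1></body></html>';
}

app.get('/product/:id', (req, res) => {
  const userAgent = req.get('User-Agent') || '';
  if (BOT_PATTERN.test(userAgent)) {
    res.send(renderProductHtml(req.params.id));
  } else {
    // Regular users get the page that builds itself from the JSON feed.
    res.sendFile(__dirname + '/public/product.html');
  }
});

app.listen(3000);

The key point is just that the decision happens on the server, so the crawler sees real HTML while ordinary visitors keep the existing client-side rendering.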