Adding URL links in table charts - html

We have a Superset table that displays data based on an SQL query. Currently, all the data is rendered in HTML div/span tags.
We need to open a link in a new tab on click of one of the columns. If we send the raw link in an anchor tag, it displays the literal markup <a href={{link}}></a>, because the Superset code wraps all the contents in a div/span tag.
Is there any way this can be done?

Since Superset uses df.to_html to render the pandas data frames to HTML on the Dashboards and Explore tabs, you can use HTML tags and other markup in your queries. For example, I developed a simple query that generates a table of charts with CSV download links. Check this out:
Write the query like this:
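For illustration, a minimal sketch of such a query, assuming a hypothetical table downloads with columns name and csv_url (the anchor markup passes straight through df.to_html into the rendered table):

SELECT
  name,
  -- the anchor below shows up as a clickable link in the rendered table
  '<a href="' || csv_url || '" target="_blank">Download CSV</a>' AS csv_download
FROM downloads;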
Try to explore it (click on the Explore button!):

As far as I know, you can't. All visualizations in Superset are based on d3. You might want to look for custom visualizations on their site.

Related

Dynamically produce html based on templates

I am trying to automate a workflow for creating HTML newsletters based on information stored in a spreadsheet.
Currently, I am using a newsletter drag-and-drop tool in which several pre-programmed blocks are available (e.g. full column block, 2 column block, etc.). When creating a newsletter, I drag and drop a block and fill in my content (e.g. uploading an image, inserting a URL). This is all well and good; however, since I have to create the same newsletter in 10 different languages, this process is quite time-consuming and prone to human error. While all newsletters are the same in terms of layout, the images and URLs differ.
To solve this issue, I would like to get rid of the drag and drop process, and instead automate the workflow in some other way.
One idea that I have already tried, but that doesn't seem like the perfect option to me, is to dynamically create the needed HTML files in Excel. Basically, the idea is to take the existing block template structure and put it into Excel with some formulas.
I could then copy and paste the links to the images (in a simple format, such as EN1.jpg, ES1.jpg, etc.), as well as the URLs (url.com, url.es).
This is an example block:
<img alt="" align="center" width="700" style="max-width:700px;" class="resetWidth" border="0" src="IMAGE" />
My final expected result is something like this:
I define the layout in a very quick manner (e.g. writing fullcolumn, halfcolumn, fullcolumn). The corresponding code is taken from the template. I then provide the attributes (image URL, link URL) in the form of a list or similar. The end result should then be 10 HTML files that I simply have to upload to the newsletter software.
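A minimal sketch of this kind of workflow, using the example block above as the template; all file names, languages and URLs are illustrative assumptions:

// generate_newsletters.js - fill a block template per language and write one HTML file each
const fs = require('fs');

// The block template from above, with IMAGE as the placeholder to replace.
const blockTemplate =
  '<img alt="" align="center" width="700" style="max-width:700px;" class="resetWidth" border="0" src="IMAGE" />';

// Hypothetical per-language attributes (these would come from the spreadsheet).
const languages = {
  en: { image: 'EN1.jpg', url: 'https://url.com' },
  es: { image: 'ES1.jpg', url: 'https://url.es' },
};

for (const [lang, attrs] of Object.entries(languages)) {
  // Fill in the image, wrap the block in the language-specific link, and write the file.
  const block = blockTemplate.replace('IMAGE', attrs.image);
  const html = '<a href="' + attrs.url + '">' + block + '</a>';
  fs.writeFileSync('newsletter_' + lang + '.html', html);
}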
I would appreciate it very much if anyone had any ideas on this.
Another option for translating the page is to do something like this: https://www.w3schools.com/howto/howto_google_translate.asp
It adds a selector for the languages to translate into.
As for automating the images, you could set up folders for each language and reuse the image names based on where you want them, so they would be placed in the correct location.
All you'll have to do is replace the images with the same file names and swap the default language on the Google Translator.
So with something like this, the HTML will stay the same with regard to the image names.
For the link variables, you may be able to write some JS or another language to take advantage of the
<html lang="">
attribute and, based on which lang is set, insert the corresponding set of links into the file.
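A minimal sketch of that idea; the link URLs and the anchor class name are assumptions:

// Read the language declared in <html lang="..."> and point the anchors at the matching URL.
var links = {
  en: 'https://url.com',
  es: 'https://url.es'
};
var lang = document.documentElement.lang || 'en';
// 'newsletter-link' is a hypothetical class used to mark the anchors to rewrite.
document.querySelectorAll('a.newsletter-link').forEach(function (a) {
  a.href = links[lang] || links.en;
});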

Openrefine cannot fetch html code inside accordion

I know that OpenRefine is not a perfect tool for web scraping, but I am looking for some help with the first step.
I cannot collect the full HTML with OpenRefine when I add a column by fetching the URL (https://profiles.health.ny.gov/hospital/view/103094). The results do not include any of the markup inside the accordion sections, such as services, bed types, etc.
Any idea how to get the full markup by fetching in OpenRefine?
I am trying to collect the information under "Administrative", whose XPath is "//div[4]/div/ul/li" ("div#AdministrativeBox.in.collapse").
This website loads its content dynamically using JavaScript. The information that interests you is not stored in the source code of the page, so OpenRefine cannot extract it.
However, there is a workaround. If you transform your URLs with the GREL formula value.replace('view', 'tab_overview'), you will get scrapable pages like this one.
Note that OpenRefine does not use XPath, but jsoup selectors. To get the elements of the "Administrative" block, you can use this GREL formula:
forEach(value.parseHtml().select('#AdministrativeBox li'), e, e.htmlText()).join(',')
Result:

How to best transfer a document to a SAPUI5 framework?

I'd like to achieve the following and I'm looking for ideas. I have a document and I want to represent/transform this content in/to a nice SAPUI5 framework. My idea is the following: a split app that has the paragraph titles in the master view (plus a search function on top) and the respective content in the detail view.
I'd like to know from you if
a) you might want to share your ideas and hints on alternatives.
b) this can be achieved within one single file (i.e. all the code for the split app and the document content in one HTML file), maybe using pure HTML code (XML is also feasible) - against the background of easily handling a large amount of text available in HTML.
c) if you happen to have/know a reusable template.
Thanks in advance!
An interesting question. I went through a similar exercise once, re-presenting my site with UI5.
To your questions:
(a) I would think that the approach you suggest is a good one
(b) You can indeed include the whole app in a single file; I do that often by using script templates, even with XML views. You can see some examples in my sapui5bin repository, in particular in the SinglePageExamples folder. Have a look at this HTML file for example: https://github.com/qmacro/sapui5bin/blob/master/SinglePageExamples/SAP-Inside-Track-Sheffield-2014/end.html
What I would suggest is, rather than intermingling the document content and the app & view definitions, maintain the content of your document separately, for example in XML or JSON, and use a client-side model to load it in and bind the parts to the right places.
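A minimal sketch of that suggestion; the file name content.json and its structure are assumptions:

// Load the document content from a separate JSON file into a client-side model.
sap.ui.require(["sap/ui/model/json/JSONModel"], function (JSONModel) {
  var oModel = new JSONModel();
  // e.g. content.json: { "sections": [ { "title": "Intro", "text": "..." } ] }
  oModel.loadData("content.json");
  // In a controller you would typically call this.getView().setModel(oModel);
  // the master list can then bind its items to {/sections} and show {title},
  // while the detail view binds to the selected section's {text}.
  sap.ui.getCore().setModel(oModel);
});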

Automate Web Applications - parsing HTML Data

I just want to automate a web application, where that application parses the HTML page and pulls the inner text of HTML tags based on some condition, for example if we have a span tag like the one given below, whose class="spanclass_1":
This is span tag...
which has a particular class, so that the app parses and pulls that span into it.
And the main pain point here is that I should not use the developer's code to automate that same HTML parsing.
I want to automate checking that the parsing is done correctly, simply by using the parsed data which is shown in the UI.
Any help would be great.
I appreciate you taking the time to read this.
(Note: the span tag markup is not shown.)
Thanks, buddies.
Not enough details.
Is this HTML page just a file in the local filesystem, or is it an internet web page?
Do you have access to the pages? Can you modify them? If the answer is yes, then just add JavaScript to the page which will extract the data and post it to a server (see the sketch after this answer).
If the answer is no, then it depends on the language you use to program.
Find a good framework to parse HTML: load the page, parse it, and extract the data. Several situations can arise:
Worst scenario - the page is generated on the client side using JS.
Best scenario - the page is in XHTML mode (you are lucky; any XML parser will help to build the DOM and extract the data).
So-so - the page is in plain HTML format (try several HTML parsers to find the one most suitable for you).
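A minimal sketch of the "add JavaScript to the page" idea mentioned above; the class name comes from the question, while the /collect endpoint is a hypothetical server URL:

// Collect the inner text of every span with the class from the question.
var texts = Array.prototype.map.call(
  document.querySelectorAll('span.spanclass_1'),
  function (el) { return el.textContent; }
);
// Post the extracted text to a (hypothetical) server endpoint for later comparison.
fetch('/collect', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify(texts)
});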

Pulling out some text from a giant HTML file using Nokogiri/xpath

I am scraping a website and am trying to pull out certain elements from the HTML. In the sites I am scraping, there are script tags with a bunch of info in them however, there is one part inside these tags that I am interested in. The line basically looks like:
'image':'http://ut5.example.com/t/231/3_b_643435.jpg',
With some stuff above and below it. Now, this is different for each page source, except for, obviously, the domain and some of the subfolders that store the images.
How would I go about looking through the source for this specific line and cutting out just the URL? I feel I would need to use regular expressions, as the URLs are dynamic.
The "gsub" method does something similar to what I want, with its ability to use /regex/. But I don't want to replace anything; I just want to find that URL in the source code using a /regex/ and copy it.
According to your comments, this is what you're looking for, I guess:
var regex = /http.+/;
Example http://jsfiddle.net/Km9ZB/
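A minimal usage sketch; the variable names are illustrative, and the narrower pattern below captures only the quoted image URL from lines like the one in the question:

// The page source contains a line such as 'image':'http://...jpg',
var source = "'image':'http://ut5.example.com/t/231/3_b_643435.jpg',";
// Capture just the URL between the quotes after 'image':
var match = source.match(/'image':'(http[^']+)'/);
if (match) {
  var imageUrl = match[1]; // http://ut5.example.com/t/231/3_b_643435.jpg
  console.log(imageUrl);
}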