I would like to add content to my Joomla! site by using an .xml feed that is offered by a company that I am an affiliate of.
The company has an .xml feed available so that affiliates can have the updates done automatically. I thought that I might be able to use the built-in newsreader, but the company's tech support quickly informed me that an RSS feed reader won't do the job. Though it seems to me that an RSS feed reader essentially parses XML?
Has anyone had any experience with, or advice on, having Joomla! display the results of an XML feed that is not RSS?
<?xml version="1.0" encoding="utf-8" ?>
<videos>
<item>
<title>Raja Mahal</title>
<categories>Movies</categories>
<genre>Drama, Action & Adventure</genre>
<description>A Zamindar’s son working as an ordinary mill worker gives shelter to an escaped convict. The convict, however, dupes his benefactor and goes to the Zamindar’s place posing as the heir to the property. <br/><br/></description>
<vid>52585</vid>
<keywords>Drama, crime, thriller, stunts, revenge, Krishna, Vijaya Lalitha, Krishnam Raju, Telugu Movies, 70s movies, K.V. Chalam, Jyothi Lakshmi, Rama Kameswara, </keywords>
<duration>136.10</duration>
<embed><object width="425" height="355"><param name="movie" value="http://www.rajshritelugu.com/players/affplayer.swf?blogid=A6D70264-037C-453B-8A01-1089F183E5A7_1070&flashpath=http://www.rajshritelugu.com/"></param><embed src="http://www.rajshritelugu.com/players/affplayer.swf?blogid=A6D70264-037C-453B-8A01-1089F183E5A7_1070&flashpath=http://www.rajshritelugu.com/" type="application/x-shockwave-flash" width="425" height="355"></embed></object></embed>
<thumbnail>http://rajshri-c-18.vo.llnwd.net/d1/content/Telugu/Movies/52585.jpg</thumbnail>
</item>
<item>
<title>Bezawada Bebbuli</title>
<categories>Movies</categories>
<genre>Drama, Action & Adventure</genre>
<description>A righteous lawyer is killed when the thug he wants to expose kills him. One of his sons grows up to become a cop while the younger one becomes a criminal. </description>
<vid>52579</vid>
<keywords>Drama, suspense, thriller, revenge, comedy, humour, Krishna, Sri Priya, Radhika, Sivaji Ganesan, Satyanaryana, Sutti Verabhadra Rao, Shyamala Gouri, Sowcar Janaki, Mada, Sakshi Ranga Rao</keywords>
<duration>112.09</duration>
<embed><object width="425" height="355"><param name="movie" value="http://www.rajshritelugu.com/players/affplayer.swf?blogid=C53B4659-1E82-4152-82A7-5FBF162BDB66_1070&flashpath=http://www.rajshritelugu.com/"></param><embed src="http://www.rajshritelugu.com/players/affplayer.swf?blogid=C53B4659-1E82-4152-82A7-5FBF162BDB66_1070&flashpath=http://www.rajshritelugu.com/" type="application/x-shockwave-flash" width="425" height="355"></embed></object></embed>
<thumbnail>http://rajshri-c-18.vo.llnwd.net/d1/content/Telugu/Movies/52579.jpg</thumbnail>
</item>
</videos>
This is the URL I got this XML file from:
http://www.rajshri.com/syndicate/?uid=1070&sig=b20aee5e1336fb1ffb4f520e67e89a75&lang=telugu&channel=movies
First off, an RSS reader does read XML. However, it reads XML files that have a specific structure (RSS). The file source you show above is not in the RSS structure, so an RSS reader would not be able to understand it. A more general XML parser would be able to read it for you, but you'd need to tell it what to do with the data (it wouldn't inherently know how you want the various elements placed on the page).
Joomla is built on PHP and has the capability to add in extensions and user-created code. Usually this conforms to the Model/View/Controller (MVC) design pattern, but if you create just one PHP page that fetches the XML, parses it with PHP's XML parser, and echoes out the content you want, you can install that into Joomla as a Component and have a menu item point to it, or install it as a Module and have it appear in the sidebar of another page.
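For instance, a minimal sketch of that one-page approach using PHP's built-in SimpleXML (the feed URL is the one from the question; error handling kept to a minimum):

<?php
// Fetch the vendor feed and echo each video as simple HTML.
// Note: the sample feed above contains raw '&' characters, which a strict
// XML parser will reject; the input may need tidying first.
$url = 'http://www.rajshri.com/syndicate/?uid=1070&sig=b20aee5e1336fb1ffb4f520e67e89a75&lang=telugu&channel=movies';
$xml = simplexml_load_string(file_get_contents($url));
if ($xml === false) {
    die('Could not parse the feed');
}
foreach ($xml->item as $item) {
    echo '<h3>' . htmlspecialchars((string) $item->title) . '</h3>';
    echo '<img src="' . htmlspecialchars((string) $item->thumbnail) . '" alt="" />';
    echo '<p>' . (string) $item->description . '</p>'; // description already carries its own <br/> markup
}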
If you don't want to deal with the internal workings of Joomla, you could have an outside script on your server (using PHP or another programming language) that captures the XML file from the remote server, parses it with its XML reader, and turns around and outputs the same content in an RSS-structured XML file. Then you could point Joomla's RSS reader at that external script that's acting as an interpreter of the data.
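A sketch of such an interpreter script, again in PHP; the channel title and link below are placeholders you would replace with your own:

<?php
// Fetch the vendor XML and re-emit it as minimal RSS 2.0 for Joomla's feed reader.
$source = simplexml_load_string(file_get_contents(
    'http://www.rajshri.com/syndicate/?uid=1070&sig=b20aee5e1336fb1ffb4f520e67e89a75&lang=telugu&channel=movies'));

header('Content-Type: application/rss+xml; charset=utf-8');
echo '<?xml version="1.0" encoding="utf-8"?>';
echo '<rss version="2.0"><channel>';
echo '<title>Video feed</title><link>http://example.com/</link><description>Republished vendor feed</description>';
foreach ($source->item as $item) {
    echo '<item>';
    echo '<title>' . htmlspecialchars((string) $item->title) . '</title>';
    echo '<description>' . htmlspecialchars((string) $item->description) . '</description>';
    echo '<guid isPermaLink="false">' . htmlspecialchars((string) $item->vid) . '</guid>';
    echo '</item>';
}
echo '</channel></rss>';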
Or if your goal is to allow users to download the file from your website and do something else with it, either put a link in an Article to the file on a remote server, or install an extension like Phoca Download, which would let your Joomla installation host the file itself, track the number of downloads, and set security on the file.
Create a folder in your website called "XMLFiles". Create a file in that folder called "Videos.xml" and place your XML in the file.
Pick a programming language (e.g. Perl), pick an XML parsing library (e.g. XML::LibXML), read the data in, extract the bits you want (e.g. with DOM or XPath), then generate some HTML (e.g. with a templating language like TT2).
You could generate static files or use a web framework like CGI::Application or Catalyst.
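For readers who would rather stay in PHP (the language Joomla itself runs on), the same recipe looks roughly like this, reading the local copy suggested above and picking nodes out with XPath:

<?php
// Parse the saved copy, select nodes with XPath, fill a trivial HTML template.
$doc = new DOMDocument();
$doc->load('XMLFiles/Videos.xml');
$xpath = new DOMXPath($doc);

foreach ($xpath->query('/videos/item') as $item) {
    $title    = $xpath->evaluate('string(title)', $item);
    $duration = $xpath->evaluate('string(duration)', $item);
    printf("<li>%s (%s min)</li>\n", htmlspecialchars($title), htmlspecialchars($duration));
}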
Related
I have an XML file and 3 XSL files that transform the same XML. I want to create a home page with three buttons, each of which redirects to one of the three transformations. How can I create a link to the XML file with a specific transformation?
Let's say I have : example.xml and t1.xsl, t2.xsl, t3.xsl and index.html with buttons t1, t2, t3. When I press the t1 button I want to get the XML file transformed by t1.xsl.
From your description ("home page, ...") I infer that all this should happen on the Web; in that case the answer will most likely involve the rules for configuring your Web server, so it's going to be a question about Apache, or IIS, or nginx, or Jetty, or whatever server is actually serving your documents.
There are many ways to achieve the goal; these are the first three or four that occur to me. For concreteness I will assume you are using Apache (many people do), know how to find and edit Apache configuration files, and can adapt relative references to suit your directory layout.
Assuming that what you want is precisely what @Stefan Hegny assumes you do not want.
You save three copies of the XML document. The one named example.1.xml begins
<?xml-stylesheet href="t1.xsl" type="text/xsl"?>
<example>
...
The one named example.2.xml begins
<?xml-stylesheet href="t2.xsl" type="text/xsl"?>
<example>
...
And similarly example.3.xml begins with a reference to t3.xsl.
The three buttons point to these three documents.
If example.xml is still changing, you will want to automate the process of updating the three near-identical copies of it whenever the master document changes; I use Make and sed for such tasks, myself.
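For example, a small Makefile along those lines, assuming the master example.xml itself carries the t1.xsl reference (as in the next approach); recipe lines must be indented with tabs:

# Keep the three near-identical copies in sync with the master document.
all: example.1.xml example.2.xml example.3.xml

example.1.xml: example.xml
	cp example.xml example.1.xml

example.2.xml: example.xml
	sed -e 's/t1\.xsl/t2.xsl/' example.xml > example.2.xml

example.3.xml: example.xml
	sed -e 's/t1\.xsl/t3.xsl/' example.xml > example.3.xml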
Another way to achieve the same thing, with a single copy of example.xml
Another way to achieve the same effect is to maintain a single copy of example.xml, with a reference to t1.xsl (so it looks like the example.1.xml described above), and tell your server
Whenever a user requests the URI example.1.xml, serve document example.xml.
Whenever a user requests the URI example.2.xml, run the command sed -e s/t1.xsl/t2.xsl/ < example.xml and send the result (stdout) to the client.
Whenever a user requests the URI example.3.xml, run the command sed -e s/t1.xsl/t3.xsl/ < example.xml and send the result (stdout) to the client.
In Apache, I use the Rewrite module to redirect these three URIs to a CGI script which inspects the URI by which it was called and runs the appropriate command.
The three buttons continue to point to the three URIs example.1.xml, example.2.xml, and example.3.xml.
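A sketch of that setup, using Apache's mod_rewrite and a small PHP script in place of a classic CGI (the script name serve-example.php is my invention, and the substitution is done with str_replace rather than sed):

# .htaccess: route the three virtual documents to one dispatcher script
RewriteEngine On
RewriteRule ^example\.([123])\.xml$ serve-example.php?variant=$1 [L]

<?php
// serve-example.php: emit example.xml, swapping in the requested stylesheet.
$variant = $_GET['variant'] ?? '1';
if (!in_array($variant, ['1', '2', '3'], true)) {
    $variant = '1';
}
header('Content-Type: application/xml');
echo str_replace('t1.xsl', "t{$variant}.xsl", file_get_contents('example.xml'));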
Running the stylesheet on the server
If the three displays must work even with browsers that don't support XSLT, then you want to run the stylesheet on the server.
Here, again, I use Rewrite to redirect the URIs to a CGI script, but instead of running sed, the CGI script runs xsltproc, or whatever XSLT processor is available on my server.
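If PHP is available on the server, its XSL extension can play the role of xsltproc; a sketch, reusing the variant-picking idea from above:

<?php
// Server-side transform: the browser receives finished HTML, so it needs no XSLT support.
$variant = $_GET['variant'] ?? '1';
if (!in_array($variant, ['1', '2', '3'], true)) {
    $variant = '1';
}

$xml = new DOMDocument();
$xml->load('example.xml');

$xsl = new DOMDocument();
$xsl->load("t{$variant}.xsl");

$proc = new XSLTProcessor();
$proc->importStylesheet($xsl);

header('Content-Type: text/html');
echo $proc->transformToXML($xml);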
Running the stylesheet in the browser
Another way to handle this requirement is to make index.xhtml be an XForms document, suited for a processor which supports the transform() extension function (e.g. XSLTForms). The document example.xml is referred to by an xf:instance element, and the three buttons invoke the three stylesheets on that instance. They might update an auxiliary instance, or they might simply cause different cases in an xf:switch to display. (If this mostly makes sense to you but you need more details, ask a question tagged XForms; if it doesn't make sense to you, then you probably don't know XForms and this is not the simplest path to the goal you describe.)
Some people would use JavaScript instead of XForms for this task, but browsers vary a lot in how they want their internal XSLT processor to be invoked, so unless you enjoy working around browser inconsistencies in JavaScript, you might not want to go that way, either.
When I view a website in my browser (for example https://www.homedepot.ca/en/home/p.725-inch-miter-saw-with-laser.1000748698.html), it contains information that is not in the source code.
For example, the source code of this page doesn't specify a product price:
<span itemprop="price">-</span>
<small>/
each</small>
However, when viewed in a browser, the tag does actually contain a price.
How can I retrieve the product's price from the source code?
Short answer: just by reading the source, you can't. The price is dynamically loaded from their servers (using JavaScript) after the page loads.
Using appropriate tools (such as the network tab in Chrome/Firefox's developer console) you can figure out where they retrieve the price from (in this case, a JSON document on their servers). However, even if you used that, there is no guarantee that it'll still work tomorrow - they can change their link or the format of the data at any moment.
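To illustrate the idea only (the endpoint URL below is made up; the real one has to be read out of the network tab, and may change or be protected at any time):

<?php
// Hypothetical example: fetch the JSON document the page loads and read the price.
$json = file_get_contents('https://www.homedepot.ca/api/product/1000748698/price'); // invented URL
$data = json_decode($json, true);
echo $data['price'] ?? 'price not found';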
A good place to get started on the technologies they use is reading up on
JavaScript
AJAX
JSON
If you are interested in retrieving information from their page programmatically, a good start would be to contact them to see if they have a public interface (API) you can use. These are usually more stable to use.
Hi guys, I am trying to download a document from an SWF link in iPaper.
Please guide me on how I can download the book.
Here is the link to the book, which I want to convert to PDF or Word and save:
http://en-gage.kaplan.co.uk/LMS/content/live_content_v2/acca/exam_kits/2014-15/p6_fa2014/iPaper.swf
Your kind guidance in this regard would be appreciated.
Regards,
Muneeb
First, open the book in your browser with network capturing turned on (in the developer tools).
You should open many pages at different locations, with and without zoom,
then look at the captured data.
You will see that for each new page you open, the browser asks for a new file (or files).
This means that there is a file for each page, and from that file your browser creates the image of the page. (Usually there is one file per page and it is some picture format, but I have also encountered base64-encoded pictures and a picture cut into four pieces.)
So we want to download and save all the files that contain the book's pages.
Now, usually there is a consistent pattern to the addresses of the files, with some incrementing number in it (we can see in the captured data the difference between consecutive files), and knowing the number of pages in the book we can guess the remaining addresses up to the end of the book (and, of course, download all the files programmatically in a for loop).
We could stop here.
But sometimes the addresses are a bit difficult to guess, or we want the process to be more automatic. Either way, we want to obtain the number of pages and all the page addresses programmatically.
So we have to check how the browser knows that stuff. Usually the browser downloads some files at the beginning, and one of them contains the number of pages in the book (and potentially their addresses). We just have to look in the captured data, find that file, and parse it in our program.
At the end there is the issue of security:
Some websites try to protect their data one way or another (usually using cookies or HTTP authentication). But if your browser can access the data, you just have to track how it does it and mimic it.
(If it is cookies, the server will respond at some point with a Set-Cookie: header. It could be that you have to log in to view the book, so you have to track this process too; usually it happens via POST messages and cookies. If it is HTTP authentication, you will see something like Authorization: Basic in the request headers.)
In your case the answer is simple:
(All the file names are relative to the main file's directory: "http://en-gage.kaplan.co.uk/LMS/content/live_content_v2/acca/exam_kits/2014-15/p6_fa2014/")
There is a "manifest.zip" file that contains a "pages.xml" file, which holds the number of files and links to them. We can see that for each page there is a thumbnail, a small picture, and a large picture, so we want just the large ones.
You just need a program that loops over those addresses (from Paper/Pages/491287/Zoom.jpg to Paper/Pages/491968/Zoom.jpg); a sketch follows.
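A sketch in PHP, assuming the IDs really do increment by one between the two addresses taken from pages.xml:

<?php
// Download every large page image into the current directory.
$base = 'http://en-gage.kaplan.co.uk/LMS/content/live_content_v2/acca/exam_kits/2014-15/p6_fa2014/';
for ($i = 491287; $i <= 491968; $i++) {
    $data = @file_get_contents("{$base}Paper/Pages/{$i}/Zoom.jpg");
    if ($data !== false) {
        file_put_contents(sprintf('page_%06d.jpg', $i), $data);
    }
    usleep(200000); // be polite to the server
}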
Finally, you can merge all the JPGs into a PDF.
The Problem
I have a 35mb PDF file with 130 pages that I need to put online so that people can print off different sections from it each week.
I host the PDF file on Amazon S3 now and have been told that the users don't like to have to wait on the whole file to download before they choose which pages they want to print.
I assume I am going to have to get creative and output the whole magazine to JPGs and get a neat viewer or find another service like ISSUU that doesn't suck.
The Requirements and Situation
I am given 130 single page PDF Files each week (All together this makes up The Magazine).
Users can browse the Magazine
Users can print a few pages.
Can Pay
Automated Process
Things I've tried
Google Docs Viewer - Get an Error, Sorry, we are unable to retrieve the document for viewing or you don't have permission to view the document.
ISSUU.com - They make my users log in to print. No way to automate the upload/conversion.
FlexPaper - Uses SWFTools (see next)
SWFTools - File is too complex error.
Hosting PDF File with an Image Preview of Cover - Users say having to download the whole file before viewing it is too slow. (I can't get new users. =()
Anyone have a solution to this? Or a fix for something I have tried already?
PDF documents can be optimized for downloading through the web; this process is known as PDF linearization. If you have control over the PDF files you are going to use, you could try to optimize them as linearized PDF files. There are many tools that can help you with this task; to name a few (a sample Ghostscript command follows the list):
Ghostscript (GPL)
Amyuni PDF Converter (Commercial, Windows only, usual disclaimer applies)
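With Ghostscript, for instance, a linearized ("fast web view") copy can be produced along these lines (file names are placeholders):

gs -sDEVICE=pdfwrite -dFastWebView=true -o magazine-linearized.pdf magazine.pdf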
Another option could be to split your file into sections and only deliver each section to its "owner". For the rest of the information, you can put in bookmarks linking to the other sections, so that they can also be retrieved if needed. For example:
If linearization alone is not enough and there is no natural way to split the file, you could split it by page numbers and create bookmarks like these:
-Pages 1-100
-Pages 101-200
-Pages 201-300
...
-Pages 901-1000
-All pages*
The last bookmark is for the ambitious guy that wants to have the whole thing by all means.
And of course you can combine the two approaches and deliver each section as a linearized PDF.
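If Ghostscript is already in the toolchain, it can do the page-range splitting as well; another sketch with placeholder names:

gs -sDEVICE=pdfwrite -dFirstPage=1 -dLastPage=100 -o pages-001-100.pdf magazine.pdf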
Blankasaurus,
Based on what you've tried, it looks like you are willing to prep the document(s), or I wouldn't suggest this. See if it'll meet your needs... Download ColdFusion and install it locally on your PC/VM. You can use CF's cfpdf tag to automatically create "thumbnails" (you can set the size) of each of the pages without much work. Then load them into your favorite gallery script with links to the individual PDFs. Convoluted, I know, but it shouldn't take more than 10 minutes once you get the gallery script working.
I would recommend splitting the PDF into pages and then using a web-based viewer to publish them online. FlexPaper has many open source tools, such as pdf2json and pdftoimage, to help out with the publishing. Have a look at our examples here:
http://flexpaper.devaldi.com/demo/
Looking at a random Wikipedia article like http://en.wikipedia.org/wiki/Impostor_syndrome, I see that there's no .html attached to the end of the address. In fact, if I do try to put a .html after it, Wikipedia tells me "Wikipedia does not have an article with this exact name." How come it doesn't need any file extension?
More a superuser question?
There is no law saying that an HTML file has to end in .html or .htm, and since the wiki generates pages from a database, there is really no file there anyway (except in a cache).
Not having .htm or .php is more sensible - why should you care what technology they use when you ask for a URL? It would be like having to put the operating system of the recipient at the end of their email address.
If you make a call to a website, the request probably looks like
www.example.com/siteA/index.html
This request just tells the webserver that you want to see a resource called index.html in siteA.
The website that runs on this server has to determine what you want to see and how the data is loaded.
index.html could be a file in the siteA directory,
or
it could be a row with the key "index.html" in the siteA table in your database.
So the part siteA/index.html is just a resource identifier. The grammar of this resource identifier is completely free and is determined per website.
URL rewriting is also common, to make URLs easier to read and remember.
For example, there could be a rewrite rule to accomplish the following:
if the user enters something like
www.example.com/download/demo.zip
rewrite it so your website sees it as:
www.example.com/download.php?file=demo.zip
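With Apache's mod_rewrite, that rule could look something like this in a .htaccess file:

RewriteEngine On
RewriteRule ^download/(.+)$ download.php?file=$1 [L,QSA]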
Wikipedia's servers map the URL to the page you want. .html is just a naming convention that today is mostly historical, from the period of static pages when URLs actually were the names of files on the server. In fact, there may be no file at all: the server queries the database and a web framework sends out the HTML on the fly.
Wikipedia is most likely using the Apache module mod_rewrite so that URLs do not have to map directly to file system paths.
See: http://en.wikipedia.org/wiki/Rewrite_engine#Web_frameworks
However, programming languages can also take control of the incoming URLs and return data depending on the structure of the link, according to some set of rules; for example, the Django web framework employs a URL dispatcher.
That's because Wikipedia uses MediaWiki's feature of URL shortening.
Actually, when you search for a page, it really loads a PHP file. Try searching for a word that doesn't exist, for example "Pazaz". The URL is http://en.wikipedia.org/w/index.php?title=Special%3ASearch&search=pazaz . Notice index.php in the URL.
To tell the truth, it's not really a MediaWiki feature; it's Apache. For further info see http://www.mediawiki.org/wiki/Manual:Short_URL .
URL routing is your answer. For example, in ASP.NET, see the excerpt below:
The ASP.NET MVC framework includes a flexible URL routing system that enables you to define URL mapping rules within your applications. The routing system has two main purposes:
Map incoming URLs to the application and route them so that the right Controller and Action method executes to process them
Construct outgoing URLs that can be used to call back to Controllers/Actions (for example: form posts, links, and AJAX calls)
I would suggest that sites like this use some sort of Model View Controller framework, similar to Ruby on Rails, where the URL 'directories' form part of a request/URL route...
In frameworks that are MVC-based, the URL 'directories' can dictate which View/Controller to use as well as what action should be taken with the data.
e.g.: shop.com/product/carrots
Here product is a view/controller and carrots is the data. The framework then analyses which action/route to take. The default could be viewing the product information and price of the carrots.
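As a toy illustration of that dispatch idea in PHP (not how Rails itself does it; the names are invented):

<?php
// Split the request path into controller + parameter, e.g. /product/carrots.
$path  = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);
$parts = array_values(array_filter(explode('/', $path)));

$controller = $parts[0] ?? 'home';
$param      = $parts[1] ?? null;

if ($controller === 'product' && $param !== null) {
    // e.g. look up the product and show its information and price
    echo 'Showing product page for: ' . htmlspecialchars($param);
}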