Why doesn't text I edited directly in the MySQL text table show up on the article page?
You can only see it if you open the article for editing; only then do you see the text that was edited in MySQL.
Is there a second table with the article text?
You should never, ever edit a revision's text directly in the database: doing so risks data corruption and defeats the point of keeping a revision for every page version/edit. The text table holds only the wikitext of a specific revision/page, not the parsed text. When you request a page, MediaWiki parses the wikitext to HTML and saves the result in the parser cache (parsing is an expensive task, so it would be very bad for performance to parse every page on every page view). When you request the page a second time, the content is served from the parser cache instead of reparsing the wikitext from the text table.
That's why you have to clear the parser cache if you change the wikitext in any way other than through the MediaWiki interface (if you edit a page in the interface, MediaWiki itself triggers a reparse of the page ;)). Next time you can do that with the URL parameter "action=purge" :)
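As a sketch, the purge request can also be issued from a script; the wiki URL and page title below are placeholders, but "action=purge" is the real MediaWiki parameter:

```python
from urllib.parse import urlencode

def purge_url(wiki_base, page_title):
    """Build the URL that asks MediaWiki to reparse a page,
    discarding the stale parser-cache entry."""
    return wiki_base + "/index.php?" + urlencode(
        {"title": page_title, "action": "purge"})

# Requesting this URL (with urllib.request, curl, or a browser)
# forces MediaWiki to reparse the wikitext:
print(purge_url("https://example.org/wiki", "Main_Page"))
# https://example.org/wiki/index.php?title=Main_Page&action=purge
```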
I solved it: the data is saved in the objectcache table; you only have to delete its content and it works.
Hi guys, I am trying to download a document from an iPaper SWF link.
Please guide me on how I can download the book.
Here is the link to the book, which I want to convert to PDF or Word and save:
http://en-gage.kaplan.co.uk/LMS/content/live_content_v2/acca/exam_kits/2014-15/p6_fa2014/iPaper.swf
Your kind guidance in this regard would be appreciated.
Regards,
Muneeb
First, open the book in your browser with network capturing enabled (in the developer tools).
You should open many pages at different locations, with and without zoom,
then look at the captured data.
You will see that for each new page you open, the browser asks for a new file (or files).
This means there is a file for each page, and from that file your browser creates the image of the page. (Usually there is one file per page and it is some picture format, but I have encountered base64-encoded pictures and a picture cut into four pieces.)
So we want to download and save all the files that contain the book's pages.
Now, usually there is a consistent pattern to the files' addresses with some incrementing number in it (visible in the captured data as the difference between consecutive files). Knowing the number of pages in the book, we can guess the remaining addresses up to the end of the book ourselves (and of course download all the files programmatically in a for loop),
and we could stop here.
But sometimes the addresses are a bit difficult to guess, or we want the process to be more automatic. Either way, we want to get the number of pages and all the page addresses programmatically.
So we have to check how the browser knows that. Usually the browser downloads some files at the beginning, and one of them contains the number of pages in the book (and possibly their addresses). We just have to find that file in the captured data and parse it in our program.
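The exact format of that file varies per site. As an illustration, if the manifest were XML with one element per page, parsing it could look like this (the tag and attribute names here are made up):

```python
import xml.etree.ElementTree as ET

# Hypothetical manifest snippet; the real file's structure will differ per site.
manifest = """
<pages count="3">
  <page large="Paper/Pages/100/Zoom.jpg"/>
  <page large="Paper/Pages/101/Zoom.jpg"/>
  <page large="Paper/Pages/102/Zoom.jpg"/>
</pages>
"""

root = ET.fromstring(manifest)
count = int(root.get("count"))                    # number of pages in the book
urls = [p.get("large") for p in root.findall("page")]  # one address per page
print(count, urls[0])
```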
At the end there is the issue of security:
Some websites try to protect their data one way or another (usually using cookies or HTTP authentication). But if your browser can access the data, you just have to track how it does it and mimic it.
(If it is cookies, the server will respond at some point with a Set-Cookie: header. It could be that you have to log in to view the book, so you have to track that process as well; usually it happens via POST messages and cookies. If it is HTTP authentication, you will see something like Authorization: Basic in the request headers.)
In your case the answer is simple:
(All file names are relative to the main file's directory: "http://en-gage.kaplan.co.uk/LMS/content/live_content_v2/acca/exam_kits/2014-15/p6_fa2014/")
There is a "manifest.zip" file that contains a "pages.xml" file, which holds the number of files and links to them. We can see that for each page there is a thumb, a small, and a large picture, so we want just the large ones.
You just need a program that loops over those addresses (from Paper/Pages/491287/Zoom.jpg to Paper/Pages/491968/Zoom.jpg).
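A minimal sketch of that loop in Python, using the page numbers above (only building the address list here; each URL can then be fetched, e.g. with urllib.request.urlretrieve):

```python
BASE = ("http://en-gage.kaplan.co.uk/LMS/content/live_content_v2/"
        "acca/exam_kits/2014-15/p6_fa2014/")

# One Zoom.jpg per page, numbered consecutively as listed in pages.xml;
# range() excludes the end, so add 1 to include page 491968.
page_urls = [BASE + "Paper/Pages/%d/Zoom.jpg" % n
             for n in range(491287, 491969)]

print(len(page_urls))  # 682
```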
Finally, you can merge all the JPGs into a PDF.
I am able to successfully create a folder within the root of my Box account via the v2 API. However, if I immediately issue a search request for it, I get no results. If I wait for some period of time (maybe 20 mins) and then issue the search request, the folder I created is returned.
Is there some caching going on on the Box side? If so, is there a way to invalidate the cache via the API, or some workaround for this?
Thanks!
What is going on is background processing of your file on the backend. Just like a new website won't show up in a Google search until Google has had time to 'learn' about it, Box's search engine has to process the file and add a text version of its contents to the search index. Exactly how long that takes depends on a lot of variables, including the size and format of the file.
You'll see pretty much the same behavior if you upload a large document to Box and then try to preview it immediately. Box goes off and does some magic to convert your file to a previewable format. Except in the case of the preview, the Box website gives you a little bit of feedback saying "Generating preview." The search bar doesn't tell you "adding new files to search index."
This is mostly because it is more important for Box to get your file and make sure we store it safely and let you know that Box has it. A few milliseconds later on we start working on processing your file for full text search and all the other processing that we do.
What is the simplest way to insert values into a MySQL database without reloading the page? In this particular example, I'd like to have a form with one input field; when the form is submitted, the user's input is inserted into a MySQL table, but the page is not reloaded.
You can use AJAX to send content to a server-side file (without reloading), and that file can insert row(s) into the database. Here's an example: http://www.9lessons.info/2009/08/vote-with-jquery-ajax-and-php.html There, the author creates a Digg-like vote button that inserts and updates rows without reloading the page. Check it out.
Look at jQuery's ajax() or the jquery.form() plugin.
This requires AJAX.
You CAN do this with plain JS, but jQuery makes your life a lot easier.
See this post for a good example:
Inserting into MySQL from PHP (jQuery/AJAX)
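The server-side file the AJAX call posts to just has to run an INSERT. A minimal sketch of that logic in Python, using sqlite3 as a stand-in for MySQL (a real setup would use a MySQL driver with a parameterized query in exactly the same way; the function and table names are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for the MySQL connection
conn.execute("CREATE TABLE entries (value TEXT)")

def handle_submit(user_input):
    """What the AJAX endpoint does with the posted field:
    a parameterized INSERT, so the input cannot inject SQL."""
    conn.execute("INSERT INTO entries (value) VALUES (?)", (user_input,))
    conn.commit()
    return "ok"  # the response the page's JS receives; no page reload involved

handle_submit("hello from the form")
print(conn.execute("SELECT value FROM entries").fetchone()[0])
```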
I was wondering if it's possible to include hashes of external files within an HTML file. This would basically serve two purposes:
Including unencrypted content in encrypted pages. (The hashes would ensure the integrity of the data.)
Allowing more caching for resources that are used on multiple pages.
Let's focus on the second case and clarify it with a made-up example:
<script type="text/javascript" src="jQuery-1.5.1.min.js" hash-md5="b04a3bccd23ddeb7982143707a63ccf9"></script>
Browsers could then download and cache the file on first use. For every following page that uses the same hash, it would be clear that the cached version can be used. This technique would work independently of file origin, file type, and transmission protocol, and without hitting the server even once to know that a file is already cached locally.
My question is: Is such a mechanism available in HTML?
The following example just clarifies the idea further and does not add new information.
A library included in two unrelated pages would lead to the following steps:
User navigates to page A for the first time
Browser loads page A and looks for external files (images, scripts, …)
Browser finds page A includes a script with hash b04a3bccd23ddeb7982143707a63ccf9
Browser checks its cache and finds no file with that hash
Browser downloads the file from the given URL (gives a file on page A's domain)
Browser calculates hash and compares it with the hash as stated on page A
Browser adds the file to its cache, keyed by the hash. If the calculated hash had not matched the given hash, the file would have been rejected with an error message
Browser executes file.
At some point later in time:
User navigates to page B for the first time
Browser loads page B and looks for external files (images, scripts, …)
Browser finds page B includes a script with hash b04a3bccd23ddeb7982143707a63ccf9
Browser checks its cache and finds a file with that hash
Browser loads the file from its cache. The browser did not care about the URL given on page B pointing to the file. It also did not matter how the file's content found its way into the cache: protocol, connection encryption, and source are ignored. No connection to any server was made to load the file for page B
Browser executes file.
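The cache behaviour the steps above describe is essentially a dictionary keyed by content hash rather than by URL. A sketch of that lookup logic (nothing in HTML actually implements this; the function name is made up):

```python
import hashlib

cache = {}  # hash -> file contents; the URL plays no role as a key

def load_script(declared_hash, fetch):
    """Return cached content for declared_hash, or fetch, verify, and cache it."""
    if declared_hash in cache:
        return cache[declared_hash]  # no server contact at all
    content = fetch()                # download from the URL the page gives
    if hashlib.md5(content).hexdigest() != declared_hash:
        raise ValueError("integrity check failed")  # reject mismatching file
    cache[declared_hash] = content
    return content

body = b"alert('hi');"
h = hashlib.md5(body).hexdigest()
first = load_script(h, lambda: body)          # page A: downloads and verifies
second = load_script(h, lambda: b"different") # page B: fetch never called, cache hit
print(first == second == body)
```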
It's basically the kernel of a good idea, but I don't think there's anything in HTML to support it. You might be able to kludge something together with JavaScript, I suppose.
It is not necessary, and it is not a new idea at all.
You can do this, using your example, omitting the "type" attribute (for brevity):
<script src="jQuery-1.5.1.min.js?b04a3bccd23ddeb7982143707a63ccf9">
This has been practised for a long time on quite a few sites, using the file's timestamp instead of MD5; Rails supports it too: see here (look for "timestamp"), or here for an example with PHP.
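A sketch of the timestamp variant: the query string is generated from the file's modification time, so the URL changes whenever the file changes and the browser fetches a fresh copy (the helper name is made up):

```python
import os

def busted_src(path, mtime=None):
    """Append the file's mtime as a query string, so the URL (and thus
    the cache key) changes whenever the file changes."""
    if mtime is None:
        mtime = int(os.path.getmtime(path))
    return "%s?%d" % (path, mtime)

# With a fixed timestamp for illustration:
print(busted_src("jQuery-1.5.1.min.js", mtime=1300000000))
# jQuery-1.5.1.min.js?1300000000
```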
Also see How to set up caching for css/js static properly
I have some static websites. By static I mean all the pages are simple HTML without JavaScript (all the data is hard-coded).
I have a server side program that creates dynamic data that I'd like to insert into my static sites. By dynamic I mean the data changes very often.
How should I do this?
Here is a scenario: on the server side, my program generates a current timestamp every millisecond. When a user opens one of my static sites, the page gets the current timestamp from the server and renders it.
I'd like it to work with search engines, so I can't use JavaScript.
It's not possible to change the HTML structure client-side without JavaScript, so the solution is to add a server-side handler for files with the .htm and .html extensions.
About JS: please note that most spiders (if not all) won't be able to see data rendered by JavaScript, since most of them analyze the plain HTML returned by the server.
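A minimal sketch of such a server-side handler in Python: the static page carries a placeholder, and the handler substitutes the fresh value before the HTML ever reaches the client, so crawlers see the final text (the placeholder and function name are made up):

```python
import time

# The "static" page, stored on disk, with a placeholder for the dynamic value.
TEMPLATE = "<html><body>Server time: <!--TIMESTAMP--></body></html>"

def render(template, now_ms=None):
    """Replace the placeholder with the current timestamp in milliseconds."""
    if now_ms is None:
        now_ms = int(time.time() * 1000)
    return template.replace("<!--TIMESTAMP-->", str(now_ms))

# With a fixed value for illustration:
print(render(TEMPLATE, now_ms=1300000000000))
# <html><body>Server time: 1300000000000</body></html>
```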