When using the MediaWiki software, how do you disable the popup saying "Your edit was saved." from appearing?
This feature was created after A/B testing, to give users feedback that their edit has indeed been saved. The module providing this functionality used to be loaded unconditionally; now it is only loaded if a cookie was set in EditPage.php, which triggers the label/popup on the next page load with action=view (the default action). The HTML also used to be a static snippet; now it is generated by the JavaScript module mediawiki.action.view.postEdit.js.
There are two ways to get rid of it:
As a server administrator (you'll have to remember to re-apply this hack after upgrading): remove the line that loads the module mediawiki.action.view.postEdit; this may also save you a few bytes of bandwidth.
As a site administrator: hide the message by adding the following to MediaWiki:Common.css:
.postedit {
display: none;
}
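If you would rather remove the element outright via site JavaScript instead of hiding it with CSS, a rough sketch for MediaWiki:Common.js is below. It only assumes the .postedit class used above and that jQuery is available (as it is on a standard MediaWiki install); treat it as illustrative rather than the official way.
// Sketch for MediaWiki:Common.js: remove the post-edit confirmation.
// The popup is injected by JavaScript after the page loads, so poll briefly.
$( function () {
    var timer = setInterval( function () {
        var popup = document.querySelector( '.postedit' );
        if ( popup ) {
            popup.parentNode.removeChild( popup );
            clearInterval( timer );
        }
    }, 250 );
    // Give up after a few seconds if no popup ever appears.
    setTimeout( function () {
        clearInterval( timer );
    }, 5000 );
} );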
Related
Robot Framework Version 2.8.3
Selenium2Library Version 1.4
The problem I am facing concerns the controls used in the application under test.
Unlike the conventional approach of giving controls IDs, my application has been developed using CSS classes.
For example, a button is coded as:
where "btn-do-login" is defined in the CSS file.
After entering the IDs into the username and password fields, I write Click Element btn-do-login
The keyword clicks the element but does not submit the data to the host, as the Submit Form keyword would.
Also worth mentioning: the application does not have any form in it. Instead there is a div that references a CSS class.
Following is the entire hierarchy:
<div class="login-form">
<div class="form-element-username"> … </div>
<div class="form-element-password"> … </div>
<div id="btn-do-login" class="wbutton-login"> … </div>
</div>
Any help on how to Post data to the host is appreciated.
Also, please note that even when I manually enter the credentials on the page opened by WebDriver and try to submit, I still get the JavaScript error and the page is not submitted. To log in manually I have to close the browser instance opened by WebDriver and open a new instance by hand.
And lastly, I just wanted to ask whether Selenium2Library supports HTML5?
This is what I have done till now.
Login With Valid Credentials
    Input Text    ${id_login_email}    ${country}
    Input Text    ${id_login_password}    ${PASSWD}
    Click Element    btn-do-login
The variables are defined in separate Python files and imported as Variables in the Settings table.
Thanks in advance.
--Raj Sarodaya
After quite a bit of trial and error I was able to make this work.
The problem with submitting was that the JavaScript files were not loading, because my system was behind a proxy and I had not set the proxy IP in the browser.
For many other websites the proxy was not required, unlike in this case, so it took me some time to track it down.
:P
When you type in an invalid address, Chrome displays a grey page that says "Oops! Google Chrome could not find X. Did you mean Y?"
Because this is not an HTTP page but rather one of the browser's built-in things, I can't put a content script in it and can't control it, so my extension is frozen until the user manually goes to another page.
Since the extension is supposed to be able to control the browser on its own, it's very important that anytime this page opens, it automatically goes back to a page I do have content script access to, and then displays a message instead.
Is this impossible?
You can use the chrome.webNavigation.onErrorOccurred event to detect such errors, and redirect to a different page if you want. Unless you've got an extremely good reason to do so, I strongly recommend against implementing such a feature, because it might break the user's expectations of how the browser behaves.
Nevertheless, sample code:
chrome.webNavigation.onErrorOccurred.addListener(function(details) {
  if (details.frameId === 0) {
    // Only react to errors in the main frame, not in sub-frames
    chrome.tabs.update(details.tabId, {
      url: chrome.runtime.getURL('error.html?error=' + encodeURIComponent(details.error))
    });
  }
});
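Note that listening to chrome.webNavigation events requires the "webNavigation" permission in the manifest. On the receiving end, error.html is a hypothetical page bundled with the extension (its name and the error query parameter come from the snippet above); a minimal script for it could read the error back out of the URL:
// Hypothetical script referenced from error.html: show the error code
// that the background script put into the query string.
document.addEventListener('DOMContentLoaded', function () {
  var params = new URLSearchParams(window.location.search);
  var error = params.get('error') || 'unknown error';
  document.body.textContent = 'Navigation failed: ' + error;
});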
According to the docs the only pages an extension can override are:
The bookmarks manager
The history
The new-tab
So, an extension can't change/control/affect the behaviour of the browser regarding the "Oops!..." page.
Is it possible to have a print option that bypasses the print dialog?
I am working on a closed system and would like to be able to pre-define the print dialog settings and process the print job as soon as I click the button.
From what I am reading, the way to do this varies for each browser. For example, IE would use ActiveX, while Chrome / Firefox would require extensions. Based on this, it appears I'll have to write an application in C++ that can handle parameters passed by the browser to auto-print with proper formatting (for labels), and then rewrite it as an extension for Chrome / Firefox. The end result being that users on our closed system will have to download / install these features depending on which browser they use.
I'm hoping there is another way to go about this, but it most likely runs into browser security restrictions.
I ended up implementing a custom application that works very similarly to the Nexus Mod Manager. I wrote a C# application that registers a custom application URI scheme. Here's how it works:
User clicks "Print" on the website.
Website links the user to "CustomURL://Print/{ID}"
Application is launched by Windows via the custom URI scheme.
Application communicates with the pre-configured server to confirm the print request and in my case get the actual print command.
The application then uses the C# RawPrinterHelper class to send commands directly to the printer.
This approach required an initial download from the user, and a single security prompt from Windows when launching the application for the first time. I also implemented some JavaScript magic to detect whether the print job was handled or not; if it wasn't, it asks the user to download the application.
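The answer doesn't spell out the exact "JavaScript magic", but a common approach to detecting whether a custom URI scheme was handled looks roughly like the sketch below; the CustomURL://Print link, the onMissing callback, and the two-second timeout are illustrative, not part of the original implementation.
// Illustrative sketch: try to launch the protocol handler and fall back
// to a download prompt if nothing seems to handle it.
function printViaCustomScheme(id, onMissing) {
    var handled = false;
    // If an external application launches, the browser window loses focus.
    window.addEventListener('blur', function onBlur() {
        handled = true;
        window.removeEventListener('blur', onBlur);
    });
    // Navigate a hidden iframe so the current page stays put.
    var frame = document.createElement('iframe');
    frame.style.display = 'none';
    frame.src = 'CustomURL://Print/' + encodeURIComponent(id);
    document.body.appendChild(frame);
    // If focus never left the window, assume the helper app is missing.
    setTimeout(function () {
        document.body.removeChild(frame);
        if (!handled) {
            onMissing();
        }
    }, 2000);
}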
I know this is a late reply, but here's a solution I'm using. I have only used this with IE, and have not tested it with any other browser.
The Sub Print below effectively replaces the default print function.
<script language='VBScript'>
Sub Print()
OLECMDID_PRINT = 6
OLECMDEXECOPT_DONTPROMPTUSER = 2
OLECMDEXECOPT_PROMPTUSER = 1
call WB.ExecWB(OLECMDID_PRINT, OLECMDEXECOPT_DONTPROMPTUSER,1)
End Sub
document.write "<object ID='WB' WIDTH=0 HEIGHT=0 CLASSID='CLSID:8856F961-340A-11D0-A96B-00C04FD705A2'></object>"
</script>
Then use JavaScript's window.print(), tied to a hyperlink or a button, to execute the print command.
If you want to print automatically when the page loads, put the code below near the closing </body> tag.
<script type="text/javascript">
window.onload=function(){self.print();}
</script>
I am writing this answer for the Firefox browser.
Open File > Page Setup
Make all the headers and footers blank
Set the margins to 0 (zero)
In the address bar of Firefox, type about:config
Search for print.always_print_silent and double click it
Change it from false to true
This lets you skip the print pop-up box, as well as the step where you have to click OK, so the correctly sized slip is printed automatically.
If print.always_print_silent does not come up
Right click on a blank area of the preference window
Select new > Boolean
Enter "print.always_print_silent" as the name (without quotes)
Click OK
Select true for the value
You may also want to check what is listed for print.print_printer
You may have to choose Generic/Text Only (or whatever your receipt printer might be named)
The general answer is: no, you cannot do this in the general case, but there are some cases where you might be able to.
Check
http://justtalkaboutweb.com/2008/05/09/javascript-print-bypass-printer-dialog-in-ie-and-firefox/
If you were allowed to do such a thing, it would be a security issue, since a malicious script could silently send print jobs to the visitor's printer.
I found an awesome Firefox add-on which solves this issue: try the Seamless Printing add-on for Firefox, which prints from a web application without showing a print dialog.
Open Firefox
Search for the add-on named Seamless Printing and install it
After successful installation the print dialog is bypassed whenever the user prints.
I was able to solve the problem with this library: html2pdf.js (https://github.com/eKoopmans/html2pdf.js)
Assuming you have access to it, you could do something like this (taken from the GitHub repository):
var element = document.getElementById('element-to-print');
html2pdf(element);
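If you need more control over the output, html2pdf.js also exposes a chainable worker API; the options below are illustrative (check the project's README for the authoritative list):
// Illustrative use of the html2pdf.js worker API with a few common options.
var element = document.getElementById('element-to-print');
html2pdf().set({
    margin: 10,                      // page margin
    filename: 'report.pdf',          // name of the downloaded file
    html2canvas: { scale: 2 },       // render at higher resolution
    jsPDF: { unit: 'mm', format: 'a4', orientation: 'portrait' }
}).from(element).save();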
Does anyone know of an extension for Firefox, or a script or some other mechanism, that can monitor one or more local files? Firefox would then auto-refresh or otherwise update its canvas when it detected a change (of timestamp) in the file(s).
For editing CSS, it would be ideal if just the CSS could be reloaded, rather than a full HTML re-render.
Effectively it would enable similar behaviour to Firebug with its dynamic HTML/CSS editing, only through external files.
Live.js
From the website:
How?
Just include Live.js and it will monitor the current page including local CSS and Javascript by sending consecutive HEAD requests to the server. Changes to CSS will be applied dynamically and HTML or Javascript changes will reload the page. Try it!
Where?
Live.js works in Firefox, Chrome, Safari, Opera and IE6+ until proven otherwise. Live.js is independent of the development framework or language you use, whether it be Ruby, Handcraft, Python, Django, NET, Java, Php, Drupal, Joomla or what-have-you.
It has the huge benefit of working with IETester, dynamically refreshing each open IE tab.
Try it out by adding the following to your <head>
<script type="text/javascript" src="http://livejs.com/live.js"></script>
Have a look at FileWatcher extension:
https://addons.mozilla.org/en-US/firefox/addon/filewatcher/
it's a WebExtension, so it works with the latest Firefox
it has a native app (to be installed locally) that monitors watched files for changes using native OS calls (no polling!) and notifies the WebExtension to let it reload the web page
reload is driven by rules: a rule contains the page URL (with regular expression support) and its included/excluded local source files
open source: https://github.com/coolsoft-ita/filewatcher
DISCLAIMER: I'm the author of the extension ;)
I would recommend Live.js.
It has the following advantages and disadvantages:
Advantages:
1. Easy setup
2. Works seamlessly across browsers (Live.js works in Firefox, Chrome, Safari, Opera and IE6+)
3. Doesn't add an irritating refresh interval to the browser, which matters when you want to debug while designing
4. Only refreshes when you save changes (Ctrl + S)
5. Can directly save CSS etc. from Firebug (I have not used that feature, but their site http://livejs.com/ says it is supported)
Disadvantages:
1. It will not work over the file protocol, e.g. file:///C:/Users/Admin/Desktop/livejs/live.html
2. You need a server to run it, e.g. http://localhost
3. You have to remove it when deploying to staging/production
4. It isn't served from a CDN: I tried linking directly to http://livejs.com/live.js, but it doesn't work; you have to download it and keep a local copy.
Xrefresh with firebug.
Firefox has an extension called mozRepl.
Emacs can plug into this, with moz-reload-on-save-mode.
When it's set up, saving the file forces a refresh of the browser window.
There are some IDE's that contain this ability (They'll have a pane within them or some other means to auto-refresh a page on save).
If you want to do this yourself a quick hack is to set the meta refresh on the page to a low value - one or two seconds.
<!-- Will refresh the page content every second -->
<meta http-equiv="refresh" content="1" />
You could just place a javascript interval on your page, have it query a local script which checks the last date modified of the css file, and refreshes it if it changed.
jQuery Example:
var modTime = 0;
setInterval(function(){
    // isModified.php should return JSON like {"time": <last-modified timestamp>}
    $.post("isModified.php", {"file":"main.css", "time":modTime}, function(rst) {
        if (rst.time != modTime) {
            modTime = rst.time;
            // Reload the stylesheet by swapping the <link> tag,
            // using the new timestamp as a cache-buster
            $("head link[rel='stylesheet']:eq(0)").remove();
            $("head").prepend($(document.createElement("link")).attr({
                "rel": "stylesheet",
                "href": "main.css?t=" + rst.time
            }));
        }
    }, "json");
}, 5000);
Browsersync can do this from the server side / outside of the browser.
This can achieve more repeatable results, with less manual clicking.
This will serve a page and refresh on change
cd static_content
browser-sync start --server --files .
It also allows a scripting mode.
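For the scripting mode, Browsersync exposes a Node API; a rough equivalent of the CLI command above (directory and file globs are illustrative) might look like this:
// Rough Node-API equivalent of `browser-sync start --server --files .`
// Run with: node reload.js (after `npm install browser-sync`)
const browserSync = require('browser-sync').create();

browserSync.init({
    server: './static_content',        // serve this directory
    files: ['./static_content/**/*']   // reload when anything in it changes
});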
This is certainly hacky, but if you want to work locally without making any external request (to live.js, for example) or running any local server, I think this might be useful. It is not specific to web development; you can adopt a similar strategy for any other workflow.
You will need two tiny tools (which are present in almost all distribution repos): inotify-tools and xdotool.
First get the ID of your Firefox and your editor window using xdotool.
$ xdotool search --name "Mozilla Firefox"
60817411
60817836
$ xdotool search --name "Pluma" # Pluma is my editor
94371842
Depending on the number of processes running, you will get one or more window IDs. Use xdotool windowactivate <ID> to find out which one you want (the focus changes to the respective window).
Use inotifywait -e close_write to monitor changes to your local file; when you save the file in your editor, switch focus to the browser, reload with xdotool key CTRL+R, and switch focus back to your editor. This is so fast you will barely notice it.
Also, inotifywait exits on change, so you might have to do it in a loop. Here is a minimum working example (in Bash in your working directory).
while /usr/bin/true
do
inotifywait -e close_write index.html;
xdotool windowactivate 60817411; # Switch to Firefox
xdotool key CTRL+R; # Reload Firefox
xdotool windowactivate 94371842 # Switch back to Pluma
done
You can use inotifywait to watch for the entire directory or some selected files in your directory.
You can write a script to automate this easily.
This works on Linux (I've tested this on Void Linux.)
You can use live.js with a tampermonkey script to avoid having to include https://livejs.com/live.js in your HTML file.
// ==UserScript==
// @name         Auto reload
// @author       weirane
// @version      0.1
// @match        http://127.0.0.1/*
// @grant        none
// ==/UserScript==
(function() {
'use strict';
if (Number(window.location.port) === 8000) {
const script = document.createElement('script');
script.src = 'https://livejs.com/live.js';
document.body.appendChild(script);
}
})();
With this tampermonkey script, the live.js script will be automatically inserted to pages whose address matches http://127.0.0.1:8000/*. You can change the port according to your need.
I think you can solve it by making ajax requests at a set interval. You can request the CSS file and, if you don't get a "not modified" response, delete your stylesheet and load it again. For dynamic files, make a request, store the response, and then compare each subsequent response to the latest one.
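A minimal sketch of that idea, polling the stylesheet's Last-Modified header with a HEAD request (the file name and interval are placeholders):
// Poll main.css and swap the <link> tag whenever Last-Modified changes.
var lastModified = null;
setInterval(function () {
    fetch('main.css', { method: 'HEAD', cache: 'no-store' })
        .then(function (response) {
            var modified = response.headers.get('Last-Modified');
            if (lastModified && modified !== lastModified) {
                var oldLink = document.querySelector("link[rel='stylesheet']");
                var newLink = document.createElement('link');
                newLink.rel = 'stylesheet';
                newLink.href = 'main.css?t=' + Date.now();  // cache-buster
                oldLink.parentNode.replaceChild(newLink, oldLink);
            }
            lastModified = modified;
        });
}, 2000);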
My theme's custom options panel has the following code...
/* initialize the site options */
if(get_option('permalink_structure')==""){update_option('permalink_structure', '/%postname%/');}
This checks the permalink setting; the WordPress default is "", which triggers the site.com/?p=x handler. This way, if the user has not yet changed permalinks from the default, my script sets the structure to post name for them. Or at least that's what I thought...
However, I've had a few folks who have my template tell me that upon first install, they were getting 404 errors on pages.
Apparently, the workaround is to physically navigate to the Permalinks page and just click "Save Changes" (even though, when you first hit this page, the permalink already appears correctly in the "custom" field).
Anyone know why this happens? Is there perhaps another setting in the db that determines the permalink behavior, in addition to what happens when update_option() is called as in the above code?
Well, this probably happens because you're only updating the permalink_structure value in the database, while .htaccess remains the same; that's why the mod_rewrite rules aren't in place and users are getting 404 errors on pages.
I believe WordPress also adds rewrite rules to .htaccess in order to enable permalinks when you click "Save Changes" in the admin panel. Let me dig in and find out what WP is doing exactly.
EDIT.
Ok, here is the code that is doing what you're trying to accomplish:
<?php
if (get_option('permalink_structure') == "")
{
    // Include the files responsible for updating .htaccess
    require_once(ABSPATH . 'wp-admin/includes/misc.php');
    require_once(ABSPATH . 'wp-admin/includes/file.php');

    // Make sure we are using the global rewrite object
    // (needed if this snippet runs inside a function or hook callback)
    global $wp_rewrite;

    // Prepare the WordPress rewrite object in case it hasn't been initialized yet
    if (empty($wp_rewrite) || !($wp_rewrite instanceof WP_Rewrite))
    {
        $wp_rewrite = new WP_Rewrite();
    }

    // Update the permalink structure
    $permalink_structure = '/%postname%/';
    $wp_rewrite->set_permalink_structure($permalink_structure);

    // Recreate rewrite rules and write them to .htaccess
    $wp_rewrite->flush_rules();
}
wp_rewrite does not appear to have any effect. Users still have to manually click "Save Options" on the permalinks screen.
I suppose I will run Firebug on that page during the update to see what's getting set that update_option() is apparently missing.
This would appear to be a bug in update_option() when the option being updated is permalink_structure.
Anyone disagree?