Make summaries mandatory in MediaWiki?

When editing/moving/deleting pages or blocking/unblocking users, I would like the summary field on each page that performs one of the above functions to be mandatory, so that the motive behind an action can be determined more easily.

Requiring summaries is well known to drive contributions away, but you can enforce it via custom JavaScript, or soft-force it by setting forceeditsummary in $wgDefaultUserOptions:
// in LocalSettings.php:
$wgDefaultUserOptions['forceeditsummary'] = 1; // set the key; assigning a whole new array would clobber the other defaults
With this option on, after hitting Save page without an edit summary, you have to hit Save page again to get the edit saved. A reminder to fill in the edit summary is shown at the top of the page, but the second save goes through anyway.
As I said above, experience shows that many edits get lost with this setting. If you think that people forget to save twice, rather than simply refusing to fill in edit summaries, you can use MediaWiki stylesheets to make the warning more visible, with something flashy like
#mw-missingsummary {
    background-color: #FFFFCC;
    color: #000000;
    border: 3px double #CC0000;
    margin: 0 0 1em;
    padding: 0.5em 1em;
}
Note that the default settings apply only to unregistered users and to users who have not customized their preferences. Registered users can override the value at any time: Preferences → Editing → Prompt me when entering a blank edit summary.

I just implemented this functionality in a project I work on. It works fine and plays well with all the built-in features, from very early MediaWiki versions up to the current master branch (2015-12-13).
I put this in my LocalSettings.php:
function forceEditSummary( $editor, $text, $section, &$error, $summary ) {
    // Override the value computed so far from the wpIgnoreBlankSummary form
    // variable, the forceeditsummary user option, and whether the page is the
    // editor's own user or talk page:
    $editor->allowBlankSummary = false;
    return true; // continue processing
}
$wgHooks['EditFilter'][] = 'forceEditSummary';
The MediaWiki:Missingsummary and MediaWiki:Missingcommentheader¹ messages should be edited so that they tell the truth (the second submit will no longer go through).
I used the EditFilter hook to set the allowBlankSummary member of the editor (an EditPage object), overriding any earlier decision on whether the edit may go through with a blank summary. See the source code of the EditPage class for details of the original settings.
For more options (and supplementary styling giving emphasis to the missingsummary message), read Nemo’s answer.
¹ Where is the missingcommentheader message used? I see it in the code but I don't know when $editor->section == 'new'. Probably a feature of MediaWiki I never use…

You might try this extension: https://www.mediawiki.org/wiki/Extension:SummaryRequired.
What can this extension do?
This extension forces a user to enter a comment.

Related

How do I keep a user from double clicking a link in an email?

When users request a password reset, they get an email with a link to generate a password reset code. This link is valid for 24 hours and can be re-used within the 24 hours to generate a new code if the first is lost or forgotten. When users double click the link, two codes are getting generated, leading to user confusion about which to use (the second code invalidates the first code with the way it has been developed).
Since the link in the email is just an HTML a tag, I'm not sure how I can keep users from double clicking the link.
This sounds like the XY problem. Your actual issue is that users get confused when two visits in quick succession invalidate a code that was just generated, not the fact that the link can be clicked twice.
From a security point of view, these kinds of links should really be single-use, and the user should request a new e-mail if they want to perform the action again. Assuming re-use is something you're forced to support, I believe the best compromise would be to limit code generation to a time frame, so visits within, let's say, 5-10 seconds would result in the same code being shown to the user, based on the server's time.
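For illustration, a minimal server-side sketch of that time-window compromise in PHP; the table and column names (reset_codes, user_id, created_at) are hypothetical, and the 10-second window is just an example:
<?php
// Sketch: re-use a recently issued reset code instead of generating a new
// one, so a double click shows the same code twice. Assumes a PDO connection
// and a reset_codes(user_id, code, created_at) table; all names hypothetical.
function getResetCode(PDO $db, int $userId): string {
    $stmt = $db->prepare(
        'SELECT code FROM reset_codes
         WHERE user_id = ? AND created_at > (NOW() - INTERVAL 10 SECOND)'
    );
    $stmt->execute([$userId]);
    $existing = $stmt->fetchColumn();
    if ($existing !== false) {
        return $existing; // second click within the window: same code again
    }
    $code = bin2hex(random_bytes(16)); // fresh code otherwise
    $db->prepare('INSERT INTO reset_codes (user_id, code, created_at)
                  VALUES (?, ?, NOW())')
       ->execute([$userId, $code]);
    return $code;
}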
Implementing any CSS-based solution for this that would work across every e-mail client out there is challenging enough (if at all possible), and I doubt any self-respecting e-mail client is going to let you run any sort of JavaScript to intercept the event.
The following works in a modern browser on an actual web page, but this is not just a bad idea, it's also probably not going to work if you try to use it in an e-mail. I'm providing it here just for the sake of completeness, showing that it's somewhat possible, but please do not rely on this to fix the underlying issue.
<style>a:focus { pointer-events: none }</style>
<p>This is some text, <a href="https://example.com">here's a link</a> you can't double click, by the way.</p>

HTML signatures: how to prevent user editing

I have made some .html signatures for one of our clients, and there are two things they want that I can't figure out how to achieve.
First, the client wants the text inside the signature to not be editable by the user, to prevent them from accidentally changing something in the signature when sending an e-mail. Is this somehow possible?
Second, they said they can resize the images in the signature using the mouse. I need to prevent this too, so that they cannot accidentally deform, enlarge or shrink the logo. How can I do it? I tried setting width and height attributes on the images, but that doesn't prevent them from resizing at will.
Any help or orientation will be much appreciated.
Thanks
Maybe you can create an image containing the entire signature, so it won't be editable.
EDIT
Take a look at this link; maybe you will be able to add the signature after the send button is clicked.
How to modify email before sending
There are a few options:
1.) If you use an Exchange server, you can set a Group Policy to add a signature at the server level, and then another to remove signature-access permissions for all users. This gives you 100% control over all outgoing messages. It is meant for disclaimers, so in a long email chain the signatures may wind up at the bottom of the chain, not under each message. See for more info: http://www.howto-outlook.com/howto/corporatesignatures.htm
2.) Another option is to run a script. This steers away from Group Policy, but I believe it would require action at the user level for each person, which may be an issue in a larger company. See here for more info: http://www.edugeek.net/blogs/thescarfedone/1016-centrally-managing-signatures-outlook-owa-free-way.html
3.) The last option I know of is to make the signatures folder read-only and insert the signature file directly on each person's computer. This is a very manual, time-consuming process and certainly not scalable. See here for more details: https://support.office.com/en-us/article/Copy-email-signatures-to-another-computer-4e03286f-2246-4d7d-ae95-a4cc1992595a

Chrome extension attach properties to each tab

In a chrome extension, I've created tab properties that I'm trying to store with each tab. What is the best way to do this? I've looked into using localStorage, but it seems like there could be an easier way. The data is by no means permanent, it only exists as long as the tab does.
There's definitely no need to use localStorage. Even without your note that the "data is by no means permanent", one already knows that tab IDs are unique only within a browser session, so the data is inherently non-persistent.
The best method to implement it is to maintain a hash of tab properties:
chrome.tabs.onCreated (optional, add initial info to tab hash)
chrome.tabs.onUpdated - (Add/) Update tab hash (URL is available)
chrome.tabs.onRemoved - Remove hash entry
Tab objects are not expensive: All properties are primitives (booleans, numbers, strings).
For instance (background page only):
var tabStore = {}; // <-- Collection of tabs
chrome.tabs.onUpdated.addListener(function(tabId, changeInfo, tab) {
    tabStore[tabId] = tab;
});
chrome.tabs.onRemoved.addListener(function(tabId) {
    delete tabStore[tabId];
});
IMPORTANT ADDENDUM to Rob W's answer.
Make sure to also listen for tabs.onReplaced, and update the tabStore accordingly.
chrome.tabs.onReplaced.addListener(function(addedTabId, removedTabId) {
    tabStore[addedTabId] = tabStore[removedTabId];
    delete tabStore[removedTabId];
});
Chrome can change the id of a tab under the hood without warning or signs. As far as I know, the only place that this happens is the Google "instant search" when you type in a search into the address bar. It may be an edge case, but if you don't track this, it could end up being a very insidious problem.

Creating "are you sure?" popup window by using html only

Assume I have an HTML form that contains a submit input. I want to create an "are you sure?" popup window that appears when the user clicks the submit button.
My question is: is there any way to create it using "only" HTML, without JavaScript or anything else?
HTML only is possible, but not without a postback
A scenario that could work without JavaScript:
You have your form with a submit button.
The user clicks the button and submits the form.
You display an "are you sure?" form (containing Yes and No buttons, as well as hidden fields carrying the data from the first form, so the required action can still be performed on the original data).
On Yes, functionality executes the action and goes back to wherever required.
This would be completely JavaScript free, but it would require several postbacks.
This kind of thing is usually done on the client with a Javascript confirm() function (here's a simple example) or lately with a more user friendly modal dialog provided by many different client libraries or their plugins.
When to choose the script-free version?
If you know your clients are going to be very basic ones (i.e. the vast majority of your users will access your application using clients like Opera Mini that cannot run scripts at all). But in all other cases it's much better to do this using Javascript. It will be faster, easier to develop and much more user friendly. Not to mention that it will put less strain on your server, since certain parts will execute on the client without the need for any server processing.
No, there isn't. Despite the new features in HTML 5, HTML is still a markup language, not a programming language. In order to express dynamic behavior (such as an "are you sure?" box), you need to use a programming language.
Javascript would be the most obvious choice for this, but you could also do it with frameworks that can get you around writing Javascript by hand (for example ASP.NET).
Edit: Actually it appears that it would theoretically be possible to do this without Javascript or other frameworks. As I just learned, HTML 5 + CSS 3 seems to be Turing complete. But this is hardly relevant to this question.
It's possible to ask for a confirmation, but it will not be in a "popup window". The creation of the "popup window" requires javascript/other language.
It will be:
Request (first form)
POST
Response (confirmation form)
POST
Response (outcome message)
You can create a form with all hidden elements containing the data from the first form and a "Yes" and "No" button below the "Are you sure?" text. You can use PHP sessions to avoid the hidden form elements. If there is a lot of data or confidential data or you do not want to re-validate the data from the second form, use sessions. Make sure you validate the data from either form before using it.
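For illustration, a minimal PHP sketch of the session-based variant; the page layout and field names are made up:
<?php
// Sketch: stash the submitted data in the session and show an
// "Are you sure?" step. All field and page names are hypothetical.
session_start();

if ($_SERVER['REQUEST_METHOD'] === 'POST' && !isset($_POST['confirm'])) {
    $_SESSION['pending'] = $_POST; // keeps the data out of hidden fields
    echo '<p>Are you sure?</p>
          <form method="post">
            <button name="confirm" value="yes">Yes</button>
            <button name="confirm" value="no">No</button>
          </form>';
} elseif (($_POST['confirm'] ?? '') === 'yes' && isset($_SESSION['pending'])) {
    $data = $_SESSION['pending'];
    unset($_SESSION['pending']);
    // re-validate $data here, then perform the actual action
    echo 'Done.';
} else {
    unset($_SESSION['pending']);
    echo 'Cancelled.';
}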
I know I'm like .. 10 years late. But for anyone still wondering I thought I could be of some help!
What I did for this exact problem was make sure I had multiple "divs" in my code. For me specifically, I had two main ones.
First, one with id="main", and another with id="popup" whose Visible property is initially set to false.
Then, on whichever event you're looking for (a button click, for example), you simply set main.Visible = false and popup.Visible = true; the buttons in your popup (yes, no, cancel, confirm, etc.) do the exact same thing, but in reverse!
The most important thing is to make sure your divs have the runat="server" attribute so that you can access them in your C# code-behind (this is an ASP.NET Web Forms technique).
Hope this was helpful! :)

How to prevent robots from automatically filling up a form?

I'm trying to come up with a good enough anti-spamming mechanism to prevent automatically generated input. I've read that techniques like captcha and 1+1=? questions work well, but they also present an extra step impeding the free, quick use of the application (I'm not looking for anything like that, please).
I've tried setting some hidden fields in all of my forms, with display: none;
However, I'm certain a script can be configured to trace that form field id and simply not fill it.
Do you implement/know of a good anti automatic-form-filling-robots method? Is there something that can be done seamlessly with HTML AND/OR server side processing, and be (almost) bulletproof? (without JS as one could simply disable it).
I'm trying not to rely on sessions for this (i.e. counting how many times a button is clicked to prevent overloads).
I actually find that a simple Honey Pot field works well. Most bots fill in every form field they see, hoping to get around required field validators.
http://haacked.com/archive/2007/09/11/honeypot-captcha.aspx
If you create a text box, hide it in javascript, then verify that the value is blank on the server, this weeds out 99% of robots out there, and doesn't cause 99% of your users any frustration at all. The remaining 1% that have javascript disabled will still see the text box, but you can add a message like "Leave this field blank" for those such cases (if you care about them at all).
(Also, noting that if you do style="display:none" on the field, then it's way too easy for a robot to just see that and discard the field, which is why I prefer the javascript approach).
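For illustration, a minimal sketch of this in PHP; the trap field name "website" and the messages are made up, and the field is hidden with a line of JavaScript rather than inline CSS, as suggested above:
<?php
// Honeypot sketch: reject any submission where the trap field is filled.
// The field name "website" is hypothetical.
if ($_SERVER['REQUEST_METHOD'] === 'POST') {
    if (!empty($_POST['website'])) {
        http_response_code(400);
        exit('Invalid submission.'); // a bot filled the trap
    }
    // ... process the legitimate submission ...
}
?>
<form method="post">
    <!-- real fields here -->
    <label id="hp" for="website">Leave this field blank</label>
    <input type="text" id="website" name="website" autocomplete="off">
    <script>
        // Hidden via JS, not inline CSS, so bots that skip display:none
        // still see (and fill) the field.
        document.getElementById('website').style.display = 'none';
        document.getElementById('hp').style.display = 'none';
    </script>
    <input type="submit" value="Send">
</form>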
An easy-to-implement but not fool-proof (especially against targeted attacks) way of fighting spam is tracking the time between form submit and page load.
Bots request a page, parse the page and submit the form. This is fast.
Humans type in a URL, load the page, wait for it to load fully, scroll down, read the content, decide whether to comment/fill in the form, take time to fill it in, and submit.
The difference in time can be subtle, and tracking this time without cookies requires some kind of server-side storage, which may impact performance.
Also, you need to tweak the threshold time.
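If cookies are acceptable, a PHP session keeps the bookkeeping simple; this is a rough sketch, and the 5-second threshold is an arbitrary starting point:
<?php
// Sketch: compare form-load time and submit time via the session.
session_start();

if ($_SERVER['REQUEST_METHOD'] === 'POST') {
    $elapsed = time() - ($_SESSION['form_loaded_at'] ?? 0);
    if ($elapsed < 5) { // tweak this threshold for your forms
        exit('Form submitted too quickly; please try again.');
    }
    // ... process the submission ...
} else {
    $_SESSION['form_loaded_at'] = time(); // remember when the form was served
    // ... render the form ...
}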
What if the bot does not find any form at all?
3 examples:
Insert your form using AJAX
If you are OK with users who have JS disabled not being able to see or submit the form, you can notify them and have them enable JavaScript first using a noscript statement:
<noscript>
<p class="error">
ERROR: The form could not be loaded. Please enable JavaScript in your browser to fully enjoy our services.
</p>
</noscript>
Create a form.html and place your form inside a <div id="formContainer"> element.
Inside the page where you need to call that form use an empty <div id="dynamicForm"></div> and this jQuery: $("#dynamicForm").load("form.html #formContainer");
Build your form entirely using JS
// THE FORM
var $form = $("<form/>", {
    appendTo : $("#formContainer"),
    "class"  : "myForm",
    submit   : AJAXSubmitForm
});
// EMAIL INPUT
$("<input/>", {
    name        : "Email", // Needed for serialization
    placeholder : "Your Email",
    appendTo    : $form,
    on : { // Yes, the jQuery on() method
        input : function() {
            console.log( this.value );
        }
    }
});
// MESSAGE TEXTAREA
$("<textarea/>", {
    name        : "Message", // Needed for serialization
    placeholder : "Your message",
    appendTo    : $form
});
// SUBMIT BUTTON
$("<input/>", {
    type     : "submit",
    value    : "Send",
    name     : "submit",
    appendTo : $form
});

function AJAXSubmitForm(event) {
    event.preventDefault(); // Prevent default form submission
    // do AJAX instead:
    var serializedData = $(this).serialize();
    alert( serializedData );
    $.ajax({
        url: '/mail.php',
        type: "POST",
        data: serializedData,
        success: function (data) {
            // log the data sent back from PHP
            console.log( data );
        }
    });
}
.myForm input,
.myForm textarea {
    font: 14px/1 sans-serif;
    box-sizing: border-box;
    display: block;
    width: 100%;
    padding: 8px;
    margin-bottom: 12px;
}
.myForm textarea {
    resize: vertical;
    min-height: 120px;
}
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<div id="formContainer"></div>
Bot-bait input
Bots like (really like) saucy input elements like:
<input
    type="text"
    name="email"
    id="email"
    placeholder="Your email"
    autocomplete="nope"
    tabindex="-1">
They will be happy to enter some value such as
`dsaZusil#kddGDHsj.com`
After using the above HTML you can also use CSS to not display the input:
input[name=email] { /* bait input */
    /* do not use display:none or visibility:hidden;
       that will not fool the bot */
    position: absolute;
    left: -2000px;
}
Now that the input is not visible to the user, expect in PHP that $_POST["email"] is empty (without any value); otherwise, don't submit the form.
Finally, all you need to do is create another input like
<input name="sender" type="text" placeholder="Your email"> after (!) the "bot-bait" input for the actual user email address.
Acknowledgments:
Developer.Mozilla - Turning off form autocompletion
StackOverflow - Ignore Tabindex
What I did was to use a hidden field, put the timestamp in it, and then compare it to the timestamp on the server using PHP.
If the form was submitted faster than 15 seconds after loading (depending on how big your forms are), it was a bot.
Hope this helps.
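A sketch of that idea in PHP; signing the timestamp with an HMAC (the secret here is a placeholder) keeps a bot from simply back-dating the hidden field:
<?php
// Sketch: hidden timestamp + signature, checked on submit.
$secret = 'change-me'; // placeholder secret

if ($_SERVER['REQUEST_METHOD'] === 'POST') {
    $ts  = $_POST['ts']  ?? '';
    $sig = $_POST['sig'] ?? '';
    $genuine = hash_equals(hash_hmac('sha256', $ts, $secret), $sig);
    if (!$genuine || (time() - (int)$ts) < 15) {
        exit('Rejected.'); // forged timestamp, or filled in under 15 seconds
    }
    // ... process the submission ...
}

$ts  = time();
$sig = hash_hmac('sha256', (string)$ts, $secret);
echo '<form method="post">
        <input type="hidden" name="ts" value="' . $ts . '">
        <input type="hidden" name="sig" value="' . $sig . '">
        <!-- real fields -->
        <input type="submit">
      </form>';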
A very effective way to virtually eliminate spam is to have a text field that has text in it such as "Remove this text in order to submit the form!" and that text must be removed in order to submit the form.
Upon form validation, if the text field contains the original text, or any random text for that matter, do not submit the form. Bots can read form names and automatically fill in Name and Email fields but do not know if they have to actually remove text from a certain field in order to submit.
I implemented this method on our corporate website and it totally eliminated the spam we were getting on a daily basis. It really works!
How about creating a text input box the same color as the background, which must remain blank? This gets around bots that detect display:none.
http://recaptcha.net/
reCAPTCHA is a free antibot service that helps digitize books
It was acquired by Google (in 2009):
https://www.google.com/recaptcha
https://developers.google.com/recaptcha/
Also see
https://en.wikipedia.org/wiki/ReCAPTCHA
https://en.wikipedia.org/wiki/CAPTCHA for more general information
Many of those spam bots are just server-side scripts that prowl the web. You can combat many of them by using some JavaScript to manipulate the form request before it's sent (i.e., setting an additional field based on some client variable). This isn't a full solution and can cause problems (e.g., for users without JavaScript, on mobile devices, etc.), but it can be part of your attack plan.
Here is a trivial example...
<script>
function checkForm()
{
    // When a user submits the form, the secretField's value is changed
    $('input[name=secretField]').val('goodValueEqualsGoodClient');
    return true;
}
</script>
<form id="cheese" onsubmit="return checkForm()">
    <input type="text" name="burger">
    <!-- Check that this value isn't the default value in your php script -->
    <input type="hidden" name="secretField" value="badValueEqualsBadClient">
    <input type="submit">
</form>
Somewhere in your php script...
<?php
if ($_REQUEST['secretField'] != 'goodValueEqualsGoodClient')
{
    die('you are a bad client, go away pls.');
}
?>
Also, captchas are great, and really the best defense against spam.
I'm surprised no one has mentioned this method yet:
On your page, include a small, hidden image.
Place a cookie when serving this image.
When processing the form submission, check for the cookie.
Pros:
convenient for user and developer
seems to be reliable
no JavaScript
Cons:
adds one HTTP request
requires cookies to be enabled on the client
For instance, this method is used by the WordPress plugin Cookies for Comments.
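A rough sketch of the two halves in PHP (file names and the cookie name are hypothetical):
<?php
// pixel.php - serving the hidden image drops the cookie.
setcookie('seen_pixel', '1', time() + 3600, '/');
header('Content-Type: image/gif');
echo base64_decode('R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7'); // 1x1 transparent GIF

// In the form handler, reject submissions without the cookie:
// if (($_COOKIE['seen_pixel'] ?? '') !== '1') {
//     exit('Rejected: the page (and its image) was never actually loaded.');
// }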
With the emergence of headless browsers (like PhantomJS) which can emulate anything, you can't assume that:
spam bots do not use JavaScript,
you can track mouse events to detect bots,
they won't see that a field is visually hidden,
they won't wait a given time before submitting.
If that used to be true, it is no longer.
If you want a user-friendly solution, just give the bots a beautiful "I am a spammer" submit button:
<input type="submit" name="ignore" value="I am a spammer!" />
<input type="image" name="accept" value="submit.png" alt="I am not a spammer" />
Of course you can play with two input[type=image] buttons, changing the order after each load, the text alternatives, the content of the images (and their size), or the name of the buttons; this will require some server-side work.
<input type="image" name="random125454548" value="random125454548.png"
alt="I perfectly understand that clicking on this link will send the
e-mail to the expected person" />
<input type="image" name="random125452548" value="random125452548.png"
alt="I really want to cancel the submission of this form" />
For accessibility reasons, you have to provide a correct textual alternative, but I think that a long sentence is better for screen-reader users than being treated as a bot.
Additional note: these examples illustrate that understanding English (or any language) and having to make a simple choice is harder for a spambot than waiting 10 seconds, handling CSS or JavaScript, knowing that a field is hidden, emulating mouse movement, or emulating keyboard typing.
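A server-side sketch of the randomized button names in PHP; all names and images here are illustrative. Note that PHP converts the name.x / name.y coordinates of an image input to name_x / name_y in $_POST:
<?php
// Sketch: randomize the accept/cancel button names per request and remember
// the real one in the session.
session_start();

if ($_SERVER['REQUEST_METHOD'] === 'POST') {
    $accept = $_SESSION['accept_name'] ?? '';
    if ($accept !== '' && isset($_POST[$accept . '_x'])) {
        // ... the genuine button was clicked, process the submission ...
    } else {
        exit('Rejected.'); // wrong button, or no button at all
    }
}

$accept = 'random' . bin2hex(random_bytes(4));
$ignore = 'random' . bin2hex(random_bytes(4));
$_SESSION['accept_name'] = $accept;

$buttons = [
    "<input type=\"image\" name=\"$accept\" src=\"submit.png\" alt=\"I am not a spammer\">",
    "<input type=\"image\" name=\"$ignore\" src=\"cancel.png\" alt=\"I am a spammer!\">",
];
shuffle($buttons); // randomize the display order too
echo implode(' ', $buttons);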
A very simple way is to provide some fields like <textarea style="display:none;" name="input"></textarea> and discard all replies that have this filled in.
Another approach is to generate the whole form (or just the field names) using Javascript; few bots can run it.
Anyway, you won't do much against live "bots" from Taiwan or India who are paid $0.03 per posted link and make their living that way.
I have a simple approach to stopping spammers which is 100% effective, at least in my experience, and avoids the use of reCAPTCHA and similar approaches. I went from close to 100 spams per day on one of my sites' html forms to zero for the last 5 years once I implemented this approach.
It works by taking advantage of the e-mail ALIAS capabilities of most html form handling scripts (I use FormMail.pl), along with a graphic submission "code", which is easily created in the most simple of graphics programs. One such graphic includes the code M19P17nH and the prompt "Please enter the code at left".
This particular example uses a random sequence of letters and numbers, but I tend to use non-English versions of words familiar to my visitors (e.g. "pnofrtay"). Note that the prompt for the form field is built into the graphic, rather than appearing on the form. Thus, to a robot, that form field presents no clue as to its purpose.
The only real trick here is to make sure that your form html assigns this code to the "recipient" variable. Then, in your mail program, make sure that each such code you use is set as an e-mail alias, which points to whatever e-mail addresses you want to use. Since there is no prompt of any kind on the form for a robot to read and no e-mail addresses, it has no idea what to put in the blank form field. If it puts nothing in the form field or anything except acceptable codes, the form submission fails with a "bad recipient" error. You can use a different graphic on different forms, although it isn't really necessary in my experience.
Of course, a human being can solve this problem in a flash, without all the problems associated with reCAPTCHA and similar, more elegant, schemes. If a human spammer does respond to the recipient failure and programs the image code into the robot, you can change it easily, once you realize that the robot has been hard-coded to respond. In five years of using this approach, I've never had a spam from any of the forms on which I use it nor have I ever had a complaint from any human user of the forms. I'm certain that this could be beaten with OCR capability in the robot, but I've never had it happen on any of my sites which use html forms. I have also used "spam traps" (hidden "come hither" html code which points to my anti-spam policies) to good effect, but they were only about 90% effective.
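The answer relies on FormMail.pl's alias mechanism; a PHP equivalent of the same idea might look like this sketch (the codes and addresses are examples):
<?php
// Sketch: the graphic shows a code; only valid codes map to a real recipient.
$aliases = [
    'M19P17nH' => 'sales@example.com',
    'pnofrtay' => 'support@example.com',
];

$code = trim($_POST['recipient'] ?? '');
if (!isset($aliases[$code])) {
    exit('Bad recipient.'); // a robot has no clue what to put in the field
}
mail($aliases[$code], 'Web form submission', $_POST['message'] ?? '');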
Another option instead of doing random letters and numbers like many websites do, is to do random pictures of recognizable objects. Then ask the user to type in either what color something in the picture is, or what the object itself is.
All in all, every solution is going to have its advantages and disadvantages. You are going to have to find a happy median between too hard for users to pass the antispam mechanism and the number of spam bots that can get through.
The easy way I found to do this is to put a field with a pre-filled value and ask the user to clear the text in this field, since bots only fill fields in. If the field is not empty, it means the user is not human, and the form won't be posted. It serves the same purpose as a captcha code.
I'm thinking of many things here:
using JS (although you don't want it) to track mouse move, key press, mouse click
getting the referral URL (which in this case should be one from the same domain) ... the normal user must navigate through the website before reaching the contact form: PHP: How to get referrer URL? (a minimal check is sketched after this list)
using a $_SESSION variable to acquire the IP and check the form submit against that list of IPs
Fill one text field with some dummy text that you can check on the server side to see whether it has been overwritten
Check the browser version: http://chrisschuld.com/projects/browser-php-detecting-a-users-browser-from-php.html ... It's clear that a bot won't use a browser but just a script.
Use AJAX to send the fields one by one and check the difference in time between submissions
Use a fake page before/after the form, just to send another input
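As a sketch of the referrer idea from the list above (the Referer header can be spoofed or stripped, so treat it as one weak signal among several; the domain is a placeholder):
<?php
// Sketch: require that the form was reached from a page on the same domain.
$referer = $_SERVER['HTTP_REFERER'] ?? '';
$host    = parse_url($referer, PHP_URL_HOST);

if ($host !== 'www.example.com') { // your own domain here
    exit('Please navigate to the contact form from within the website.');
}
// ... process the submission ...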
I've added a time check to my forms. The forms will not be submitted if filled in less than 3 seconds and this was working great for me especially for the long forms. Here's the form check function that I call on the submit button
var timeStart;
// Bind on page load and record the FIRST interaction with the form;
// binding inside the submit handler would leave timeStart undefined.
$("input").bind('click keyup', function () {
    if (!timeStart) {
        timeStart = new Date().getTime();
    }
});
function formCheck() {
    var timediff = Math.round((new Date().getTime() - timeStart) / 1000);
    if (!timeStart || timediff < 3) {
        // throw a warning or don't submit the form
    }
    else submit(); // some submit function
}
Decided to add another answer, sorry.
We use a combination of two:
Honeypot field with name="email" (already mentioned in other answers); just be sure to use a sophisticated way to hide it, like moving it off-screen, because bots can detect display:none
A hidden field that is set by JavaScript when the user clicks (or focuses if you want to be TAB-friendly) on a required field (wasn't mentioned in other answers)
The 2nd option can even protect from headless-browser spam (using PhantomJS or Selenium), because even JavaScript bots don't bother actually clicking textboxes.
Blocks 99% of bots.
PS. Make sure to use the focus trick only on fields that are not filled by password managers like LastPass or 1Password.
For the same reasons, mark your honeypot with autocomplete="false" tabindex="-1"
The best solution I've found to avoid getting spammed by bots is using a very trivial question or field on your form.
Try adding a field like these :
Copy "hello" in the box aside
1+1 = ?
Copy the website name in the box
These tricks require the user to understand what must be entered on the form, making it much harder to be the target of massive bot form-filling.
EDIT
The downside of this method, as you stated in your question, is the extra step for the user to validate the form.
But, in my opinion, it is far simpler than a captcha, and the overhead when filling the form is not more than 5 seconds, which seems acceptable from the user's point of view.
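A minimal PHP sketch of this idea; storing the expected answer in the session keeps it out of the form markup (the questions here are just the examples from the list):
<?php
// Sketch: ask one trivial question and remember the answer server side.
session_start();

if ($_SERVER['REQUEST_METHOD'] === 'POST') {
    $given    = strtolower(trim($_POST['challenge'] ?? ''));
    $expected = $_SESSION['challenge_answer'] ?? null;
    if ($expected === null || $given !== $expected) {
        exit('Wrong answer, please try again.');
    }
    // ... process the submission ...
}

$questions = [
    'Copy "hello" into the box' => 'hello',
    '1+1 = ?'                   => '2',
];
$question = array_rand($questions); // array_rand returns a random key
$_SESSION['challenge_answer'] = $questions[$question];
echo '<label>' . htmlspecialchars($question) . ' <input type="text" name="challenge"></label>';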
It's just an idea; I used it in my application and it works well.
You can create a cookie on mouse movement with JavaScript or jQuery, and on the server side check whether the cookie exists. Because only humans have a mouse, the cookie can only be created by them.
The cookie can be a timestamp or a token that can be validated.
In my experience, if the form is just a "contact" form, you don't need special measures. Spam gets decently filtered by webmail services (you can track webform requests via server scripts to see what effectively reaches your email; of course I assume you have a good webmail service :D)
Btw I'm trying not to rely on sessions for this (like, counting how many times a button is clicked to prevent overloads).
I don't think that's good. Indeed, what I want to achieve is receiving emails from users that perform some particular action, because those are the users I'm interested in (for example, users that looked at the "CV" page and used the proper contact form). So if the user does something I want, I start tracking their session and set a cookie (I always set the session cookie, but when I don't start a session it is just a fake cookie made to make the user believe they have a session). If the user does something unwanted, I don't bother keeping a session for them, so there is no overload, etc.
It would also be nice if advertising services offered some kind of API (maybe that already exists) to see if the user "looked at the ad". It is likely that users looking at ads are real users, and even if they are not, at least you get one view anyway, so nothing is lost. (And trust me, ad controls are more sophisticated than anything you can do alone.)
Actually the trap with display: none works like a charm. It helps to move the CSS declaration to a file containing any global style sheets, which would force spam bots to load those as well (a direct style="display:none;" declaration could likely be interpreted by a spam bot, as could a local style declaration within the document itself).
This combined with other countermeasures should make it moot for any spam bots to unload their junk (I have a guest book secured with a variety of measures, and so far they have fallen for my primary traps - however, should any bot bypass those, there are others ready to trigger).
What I'm using is a combination of fake form fields (also described as invalid fields in case a browser is used that doesn't handle CSS in general or display: none in particular), sanity checks (i.e. is the format of the input valid?), time stamping (both too-fast and too-slow submissions), MySQL (for implementing blacklists based on e-mail and IP addresses as well as flood filters), DNSBLs (e.g. the SBL+XBL from Spamhaus), text analysis (e.g. words that are a strong indication of spam) and verification e-mails (to determine whether or not the e-mail address provided is valid).
One note on verification mails: This step is entirely optional, but when one chooses to implement it, this process must be as easy-to-use as possible (that is, it should boil down to clicking a link contained in the e-mail) and cause the e-mail address in question to be whitelisted for a certain period of time so that subsequent verifications are avoided in case that user wants to make additional posts.
I use a method where there is a hidden textbox. Since bots parse the website, they will probably fill it in. Then I check whether it is empty; if it is not, the website rejects the submission.
Add email verification. The user receives an email and needs to click a link. Otherwise, discard the post after some time.
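A rough PHP sketch of this flow; the table, columns, and URL are hypothetical:
<?php
// Sketch: hold the post until the link in the verification mail is clicked.
function queuePost(PDO $db, string $email, string $body): void {
    $token = bin2hex(random_bytes(16));
    $db->prepare('INSERT INTO pending_posts (email, body, token, created_at)
                  VALUES (?, ?, ?, NOW())')
       ->execute([$email, $body, $token]);
    mail($email, 'Confirm your post',
         "Click to publish: https://example.com/verify.php?token=$token");
}

// verify.php would then publish the matching post, e.g.:
// UPDATE pending_posts SET published = 1
// WHERE token = ? AND created_at > NOW() - INTERVAL 1 DAY
// Unverified posts older than a day can be purged by a cron job.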
You can try to cheat spam robots by adding the correct action attribute only after JavaScript validation.
If the robot has JavaScript disabled, it can never submit the form correctly.
HTML
<form id="form01" action="false-action.php">
//your inputs
<button>SUBMIT</button>
</form>
JAVASCRIPT
$('#form01 button').click(function (e) {
    e.preventDefault();
    // your validations, and if everything is OK:
    $('#form01').attr('action', 'correct-action.php');
    document.getElementById('form01').submit();
});
Since .attr() takes effect immediately, you can call submit() right after setting the action.