How to personalize a link to user's IP address - html

I am wondering how big of a job it is to be able to do the following:
I want to have certain web pages on a site only accessible by specific computers. Of course, I can always provide usernames and passwords, but I want it restricted even further.
To illustrate:
I give User-A access to page-A.
I give User-A a username and password to access page-A.
User-A tries to share the username and password with a friend (User-B).
User-B tries to access page-A with User-A's credentials, but it does not work because User-B needs to be on User-A's computer to do so.
I know that this is possible to accomplish, since financial institutions employ this kind of security, but can I implement it on my own? If so, how?
--Edit--
Yani mentioned that filtering by IP would not be wise, since user IP addresses often change. My question now turns to the use of sessions or localStorage/Web Storage to control access to certain webpages.
What local data would you need to pull? I would imagine that a database would be required to store computer data for future reference by the system.

IP addresses are, for most home users, only temporary. The ISP will change them every few weeks or months, unless the user has a static IP (which usually costs more).
In addition, a user can take his laptop to a coffee shop and immediately log in from a different IP.
Therefore, IP address filtering is a good idea only if you want to geo-block users (country, state, etc.), but, in my honest opinion, it is not a good idea for authenticating a user over a long period of time.
You may just need to implement a cookie/session/localStorage check with JavaScript or a server-side technology such as PHP, which will be browser- and computer-specific.
Cookies + IP Address
Combining cookies/localStorage ALONG WITH the IP address can actually be a good idea as a second level of security (e.g. when the IP changes, showing an alert such as 'It seems you are logging in from a different IP address, please answer a security question...').
Also, when a user logs in from a different browser on the same computer (and the same IP), you can ask an extra verification question.
You can even implement an IP address history, such as Gmail's.
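A minimal server-side sketch of that second-level check, assuming PHP sessions (the session field name and redirect URL here are illustrative, not a prescribed design):
<?php
// Sketch: flag a login from a new IP for extra verification.
session_start();

$currentIp = $_SERVER['REMOTE_ADDR'];

if (isset($_SESSION['last_ip']) && $_SESSION['last_ip'] !== $currentIp) {
    // IP changed since the last visit: ask a security question before
    // treating the user as authenticated.
    header('Location: /security-question.php'); // illustrative URL
    exit;
}
$_SESSION['last_ip'] = $currentIp;
?>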
However, if you had to choose only one of the methods, I'd definitely go with cookies/localStorage.
Examples of how to set and get local data in JavaScript.
With localStorage (HTML5):
localStorage.setItem('userAuthenticated', '1'); // store a flag for this browser
localStorage.getItem('userAuthenticated');      // returns '1', or null if never set
With cookies:
function setCookie(cname, cvalue, exdays) {
    var d = new Date();
    d.setTime(d.getTime() + (exdays * 24 * 60 * 60 * 1000));
    var expires = "expires=" + d.toUTCString(); // toGMTString() is deprecated
    document.cookie = cname + "=" + cvalue + "; " + expires;
}

function getCookie(cname) {
    var name = cname + "=";
    var ca = document.cookie.split(';'); // all cookies for this page
    for (var i = 0; i < ca.length; i++) {
        var c = ca[i].trim();
        if (c.indexOf(name) == 0) return c.substring(name.length, c.length);
    }
    return ""; // not found
}
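For example (a sketch; the cookie name and 30-day lifetime are arbitrary, and the value is just a flag, not a credential):
// Illustrative usage: remember this browser for 30 days.
setCookie('userAuthenticated', '1', 30);

// Later, on page load:
if (getCookie('userAuthenticated') === '1') {
    // Known browser: skip the extra verification question.
}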
Hope this helps!

You probably will need to use .htaccess to do this. Try this and see if it works for what you are trying to do.
.htaccess: how to restrict access to a single file by IP?

Related

Catch 22? Blocked by CORS policy: Same server, internal/external IP, no SSL

My apologies if this is a duplicate. I can find a million results about CORS policy issues, but not about this specific one:
I developed a simple "speed test" site for my users (wfh employees of my company) to access. It tests speeds across the public net to different datacenters we utilize, and via the users' VPN connection to one of our DCs.
There are more complicated elements, but for a basic round-trip "ping" I have an extremely simple PHP script on the server that contains:
<?php
header('Access-Control-Allow-Origin: *');
header('Access-Control-Allow-Headers: *');
if (isset($_GET['simple']) && $_GET['simple'] == '1')
    die('{ }');
?>
It is called like this:
$.ajax({
    type: 'GET',
    url: sURL,
    data: { ignore: (pingCounter.start = new Date().getTime()) },
    dataType: 'text',
    timeout: iTimeout
})
.done(function(ret) {
    pingCounter.end = new Date().getTime();
    [...] (additional code omitted for brevity)
(I know this has additional overhead other than the raw round-trip network traffic timing, but I don't need sub-ms accuracy. I just need to be able to tell users "the problem is on your end" or "ah yes, the problem is the latency between your house and this particular DC".)
The same server running that PHP code is addressable at the following URLs at the DC wherein our VPN server lies:
http://speedtest-int.mycompany.com/ping.php
http://speedtest-ext.mycompany.com/ping.php
Public DNS resolves like this:
speedtest-ext.mycompany.com IN A 1.1.1.1 (Actual public IP redacted)
speedtest-int.mycompany.com IN A 10.1.1.1 (Actual internal IP redacted)
If I access either URL from my browser directly, it loads fine (which is to say it responds with { }).
When loading via the JS snippet above, the call to http://speedtest-ext.mycompany.com/ping.php works fine.
The call to http://speedtest-int.mycompany.com/ping.php fails with "The request client is not a secure context and the resource is in more-private address space 'private'".
Fair enough, the solution is to add Access-Control-Allow-Private-Network: *, right?
EXCEPT that apparently can only be used with SSL:
https://developer.chrome.com/blog/private-network-access-update/
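(For reference, handling that preflight server-side might look like this sketch. The header name, and the expected value of true rather than *, come from the Chrome Private Network Access draft linked above, and the browser only honors it over HTTPS:)
<?php
// Sketch: answer Chrome's Private Network Access preflight.
// Only honored by the browser when served over HTTPS.
if ($_SERVER['REQUEST_METHOD'] === 'OPTIONS') {
    header('Access-Control-Allow-Origin: *');
    header('Access-Control-Allow-Headers: *');
    header('Access-Control-Allow-Private-Network: true');
    exit;
}
?>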
I have a self-signed cert on there, but that obviously fails by policy for that reason.
I could just get a LetsEncrypt cert for multiple subdomains. EXCEPT it will never validate the URL http://speedtest-int.mycompany.com because the LetsEncrypt servers won't be able to reach that to validate ownership, as it's a private IP.
I have no control over most of my users' machines, so I can't necessarily install trusted internal certs or change browser options. Most users use Chrome.
So is my solution to buy a UCC or wildcard cert?
I feel like I'm in a catch-22, and I don't want to spend however-much on a UCC cert for an internal app that will be very very very occasionally used by one of our 25 home-based employees when I want to prove that their home "internet is bad" and not the corp network.
Thanks in advance; I'm sure there's a stupidly obvious solution I'm not seeing.
(I'm considering pushing a /32 route to my VPN users for another real public IP to be used in place of the internal IP. Then I can have the "internal" test run against an otherwise publicly accessible IP which could be validated by LetsEncrypt, but VPN users would hit it via the VPN. Is that silly?)
Edit: If anyone is curious -- or it helps to clarify my goal here -- this is the output when accessing the speedtest page:
http://s.co.tt/wp-content/uploads/2021/12/Internal_Speedtest_Example-Redacted.png
It repeats for 20 cycles (or until stopped) and runs each element a varying number of times per cycle, collecting the average time for each. It ain't pretty, but it work(ed).

MediaWiki - Require confirmed emails before allowing read?

I'm trying to set up a MediaWiki for university students. Using the EmailDomainCheck extension, I prevent anyone except those with a university-based email from creating accounts. Using $wgEmailConfirmToEdit, I can require that an email is confirmed before the user can edit pages. However, as it is, a user can use a fake email from the correct domain to create an account. With the account they can view all pages (even though they cannot edit them). I do not want to grant them read access unless the email has been confirmed. Is this possible? Note, I want all confirmed emails of the correct domain to be automatically accepted; it should not require manual approval of account creation.
You could try the following, as outlined in the documentation:
# Disable for everyone.
$wgGroupPermissions['*']['read'] = false;
# Disable for users, too: by default 'user' is allowed to read, even if '*' is not.
$wgGroupPermissions['user']['read'] = false;
# Make it so users with confirmed email addresses are in the group.
$wgAutopromote['emailconfirmed'] = APCOND_EMAILCONFIRMED;
# Hide group from user list.
$wgImplicitGroups[] = 'emailconfirmed';
# Finally, set it to true for the desired group.
$wgGroupPermissions['emailconfirmed']['read'] = true;
As Jenny Shoars has mentioned, you may wish to whitelist some pages such as:
$wgWhitelistRead = array("Main_Page", "Special:CreateAccount", "Special:ConfirmEmail");
so that non-registered users can still create accounts and the like.
In theory,
$wgGroupPermissions['*']['read'] = false;
$wgGroupPermissions['emailconfirmed']['read'] = true;
should work. In practice, MediaWiki is almost always used with an "everyone can read" or "you can read iff you are logged in" setup, and other configurations are not very well tested, so if the wiki had some highly sensitive private information I wouldn't do this; but I imagine for a university website that's not the case.
Alternatively, it should not be too hard to integrate an email confirmation step into account creation, but you'd have to write the code for that. EmailAuth (which does a similar check during login) might give you an idea of how that would look.
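As a rough, untested sketch of that idea (this assumes the LocalUserCreated hook and the User::sendConfirmationMail() method; check them against your MediaWiki version):
# Sketch only: mail the confirmation link as soon as a (non-auto-created)
# account exists; combined with the read restrictions above, the user
# cannot read pages until the address is confirmed.
$wgHooks['LocalUserCreated'][] = function ( $user, $autocreated ) {
    if ( !$autocreated ) {
        $user->sendConfirmationMail();
    }
};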

Html - single page - staying logged in

I have an HTML page with a load of JavaScript that changes between views.
Some views require the person to be logged in, and consequently prompt for it.
How can I note, using the JavaScript, that the person has successfully logged in, in a way that is not a security issue but means the person does not have to repeatedly log in for each view? I do not want to keep going back to the server each time.
Edit:
To explain more. Here are the problems I see.
Let's say I have the following in my JavaScript:
var isLoggedIn = true;
var userEmail = "myemail@mysite.com";
Anyone can hack my code to change these values and then get another person's info. That is not good. So instead of isLoggedIn, do I need something like a hashed password stored in the JavaScript:
var userHashedPassword = "shfasjfhajshfalshfla";
But everywhere I read, they say you should not keep any password stuff in memory for any length of time.
So what variables do I keep, and where? The user will be constantly flicking between non-user-specific divs and user-based divs, and I do not want them to have to constantly log in each time.
Edit 2:
This is what I am presently doing, but am not happy with.
There is a page/div with 3 radio buttons: Vacant Games (does not require user information), My Games (requires knowledge of the user and must be signed in), and My Old Games (also requires logged-in status).
When first going on the page it defaults on vacant games, and gets the info from the server, which does not require login.
In two variables in the JavaScript I have:
var g_Email = "";
var g_PasswordEncrypted = "";
Note these are both zero-length strings.
If the user wants to view their games, they click the My Games radio button. The code checks whether g_Email and g_PasswordEncrypted are zero-length strings; if they are, it goes to a div where they need to log in.
When the user submits their login info, it goes to the server, which checks their details and sends back an OK message, along with all the info (My Games) that the user was requesting.
So if the login was a success, then
g_Email = "myemail#mysite.com";
g_PasswordEncrypted = "this is and encrypted version of the password";
If there is any failure in login, these two are instead set to "".
Then when the user navigates to any page that requires login, it checks to see if these two strings are filled. If they are, it will not go to a login page when you request information like My Games.
Instead it just sends the info in these strings to the server, along with the My Games request. The server still checks that the email and encrypted password are valid before sending back the info, but on the client side, the user has not had to repeatedly input this info each time.
If there is any failure in the server request, it just sends back an error message (I am using ajax) in the callback function, which knows to set the g_Email and g_PasswordEncrypted to "" if there is anything wrong. (In the latter case, the client side knows it has to re-request the login details because these two strings are "").
The thing I do not like is that I am keeping the encrypted password on the client machine. If the user walks away from their machine, someone can open up the debugger in something like Chrome, extract these details, and then use them from their own machine later.
If the JavaScript loads content for each view from the server, then it is for the server to know whether the current session belongs to a logged-in user or not. If the user is not logged in, the server responds with a prompt to log in; otherwise it sends the content of the view.
If the JavaScript builds content for the views from data that was already received from the server, then it should use some variable keeping the state of the user (logged/not_logged). Depending on that value, the JavaScript will either show a prompt to log in or display the required content of the view.
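For the first approach, a minimal server-side sketch in PHP (the session field and the JSON shape are illustrative, not from the question):
<?php
// Sketch: gate a view's data on a server-side session, so no password
// material has to live in client-side JavaScript variables.
session_start();
header('Content-Type: application/json');

if (empty($_SESSION['userId'])) {
    // Not logged in: the client-side JavaScript shows the login view.
    http_response_code(401);
    echo json_encode(['loggedIn' => false]);
    exit;
}

// Logged in: return the requested view's data ("games" here stands in
// for the "My Games" data from the question).
echo json_encode(['loggedIn' => true, 'games' => []]);
?>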

How to make MediaWiki account registrations only allow unique emails?

How can I set it so that MediaWiki will not allow a single email address to create multiple accounts? A spambot just created 5 accounts with a single email.
I've looked for configuration settings or extensions, but haven't been able to find one.
A related issue to this is the annoying creation of spam accounts with usernames similar to JameeiohpbrxvlsHeadlon.
Spam prevention measures work quite well, in that no actual spam articles are created, only spam accounts. I have TorBlock, ConfirmEdit and SimpleAntiSpam installed to prevent spam accounts from being created, but this appears to fail.
This is not likely to be a very effective anti-spam strategy. Most spambots smart enough to register accounts with valid e-mail addresses are likely also smart enough to try a new address if the registration fails.
Personally, I've found the most effective anti-spam solution for small wikis to be ConfirmEdit with QuestyCaptcha. Just configure ConfirmEdit to require a CAPTCHA for account creation, so that you won't get spam accounts. The questions don't need to be hard to answer — indeed, they can be absolutely trivial, as long as they're unique to your site.
That said, you could do what you suggest by writing an AbortNewAccount hook to look up the user's e-mail address in the database and fail if you find a match, something like this (untested!):
$wgHooks['AbortNewAccount'][] = 'disallowDuplicateEmails';

function disallowDuplicateEmails( $user, &$message ) {
    $email = $user->getEmail();
    if ( !$email ) return true; // allow empty e-mail

    $dbr = wfGetDB( DB_SLAVE );
    $name = $dbr->selectField( 'user', 'user_name',
        array( 'user_email' => $email ),
        __METHOD__ );

    if ( $name !== false ) {
        $message = wfMessage( 'signup-dup-email', $email, $name )->text();
        return false;
    }
    return true; // no match
}
You'll also need to create the system message page MediaWiki:signup-dup-email, with content something like this:
The e-mail address <tt>$1</tt> is already used by [[User:$2|$2]].
Note that there are at least two potential issues with such a check:
1. It can allow people to "fish" for e-mail addresses of your users (something that MediaWiki normally treats as private information) by trying to register a new account with an address they suspect might belong to an existing user. This could be somewhat mitigated by omitting the username from the error message, but that would still leak the information that someone is using the address.
2. The code above doesn't check whether the address has been confirmed or not (and checking that would rather defeat its purpose, unless you also require all users to confirm their e-mail address), and so a malicious person could prevent someone else from registering with their e-mail address by creating a dummy account with the same address.
However, getting around the check would actually be rather easy, since a) it checks for an exact match, so e.g. just changing the capitalization of the host name would be enough to make the check pass, and b) it doesn't prevent users from changing their e-mail address to whatever they want after registering, anyway. Both of these holes could be blocked, but it would require more effort.
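For instance, hole a) could be narrowed by normalizing addresses before the lookup, along these lines (untested, like the rest):
// Sketch: compare addresses case-insensitively. Strictly, only the domain
// part is case-insensitive, but lowercasing the whole address is the usual
// practical choice; the stored user_email values would need the same
// normalization for the lookup to match.
function normalizeEmail( $email ) {
    return strtolower( trim( $email ) );
}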

Retrieving information from a web page

My application is meant to speed up the retrieval of phone call information from our telephone system.
The best way to get this information is to create a new search on the telephone system's web interface and export the results to an Excel spreadsheet which my application then imports into a DataSet.
To get the export, from the login screen, the process goes as follows:
Log in
Navigate to Reports Page
Click "Extension Detail" link
Select "Extensions" CheckBox
Select the extensions (typically all the ones currently being used) from the listbox
Specify date range
Click on Export button
It's not a big job to do it manually every day, but, for reliability, it would be great if I could make my application do this automatically the first time it starts every day.
Since more than 1 person in the company is going to use this application, having a Windows Service do it would be even better.
I don't know if it'll help, but the system is Datatex Topaz Next Generation telephone management system: http://www.datatex.co.za/downloads/index.html#TNG
Can anyone give me a basic idea how to do this?
Also, can anyone post links (in comments if need be) to pages where I can learn more about how to do this?
I have done something similar to fetch info from a website. I cannot give you an exact answer, but the idea is to send the login info to the page as form values. If the site relies on cookies, you can use this cookie-aware WebClient:
public class CookieAwareWebClient : WebClient
{
    private CookieContainer cookieContainer = new CookieContainer();

    protected override WebRequest GetWebRequest(Uri address)
    {
        WebRequest request = base.GetWebRequest(address);
        if (request is HttpWebRequest)
        {
            // Share one cookie container across requests, so the session
            // cookie received at login is sent on later requests.
            (request as HttpWebRequest).CookieContainer = cookieContainer;
        }
        return request;
    }
}
You should be aware that some sites rely on a session ID being passed, so the first thing I did was to fetch the session ID from the page:
var client = new CookieAwareWebClient();
client.Encoding = Encoding.UTF8;
var indexHtml = client.DownloadString(*index page url*);
string sessionID = fetchSessionID(indexHtml);
Then I had to log in to the page, which you can do by uploading form values. You can see the specific form elements with "view source", but you have to know a little HTML to do so.
var values = new NameValueCollection();
values.Add("sessionid", sessionID); //Fetched session id
values.Add("brugerid", args[0]); //Username in my case
values.Add("adgangskode", args[1]); //Password in my case
values.Add("login", "Login"); //The login button
//Logging in
client.UploadValues(*url to login*, values); //If all goes perfect, I'm logged in now
And then I could download the page I needed. In your case you may use DownloadFile(...) if the file always has the same URL (something like Export.aspx?From=2010-10-10&To=2010-11-11) or UploadValues(...), where you specify the values as before but save the result.
string html = client.DownloadString(*url*);
It seems you have a lot more steps than I did, but the principle is the same. To see what values you send to the site to log in etc., you can use a program such as Fiddler (Windows), which can capture the network activity. Essentially you just do exactly the same thing, but watch out for things like the session ID, which is temporary.
The best idea is really to use some native way to fetch the data, but if you don't have access to the code, database, etc., you have to do it the ugly way. You may also need an HTML parser to extract the data (oops, you don't, because you export to a file). And last but not least, keep in mind that pages can change, and there is great potential for the login, parsing, etc. to fail.
Please ask if you are uncertain about what is going on.
ADDITION
The CookieAwareWebClient is not my code:
http://code.google.com/p/gardens/source/browse/Montrics/Physical.MyPyramid/CookieAwareWebClient.cs?r=26
Using CookieContainer with WebClient class
I also found some relevant threads:
What's a good tool to screen-scrape with Javascript support?
http://forums.asp.net/t/1475637.aspx
With an HTTP client, you need to do the following:
Log in, using cookies or HTTP authentication
Request a page
Submit form data
This means that you need some class or component in your program that can handle HTTP, cookies, authentication, and forms. With this, you make the same requests a user would.