Can you somehow use the base-tag to pass parameters? - html

I've written some custom debugging code for a large framework: by adding ?debug to any URL I get some custom server data. Whenever I click a link, the ?debug disappears, of course. Can I keep it there somehow? My idea was to use the base tag:
if (isset($_GET['debug'])) {
    echo '<base href="/images/">';
}
But it doesn't seem to support parameters. Is there something similar?

Assuming you're using Apache you could just use mod_rewrite:
RewriteEngine on
# Only apply when the query string does not already contain 'debug'
RewriteCond %{QUERY_STRING} !debug
# Internally append ('query string append') the extra parameter
RewriteRule (.*) $1?debug [QSA]
To limit this behaviour to your computer only, add an extra condition in between:
# Only trigger the rule if the remote IP address exactly matches the string
RewriteCond %{REMOTE_ADDR} =192.168.1.1
And replace the IP with your own.
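Putting it all together, the block would look something like this (a sketch for an .htaccess or vhost config; 192.168.1.1 is a placeholder for your own address):
RewriteEngine on
# Only apply when the query string does not already contain 'debug'
RewriteCond %{QUERY_STRING} !debug
# Only trigger the rule if the remote IP address exactly matches the string
RewriteCond %{REMOTE_ADDR} =192.168.1.1
# Internally append ('query string append') the extra parameter
RewriteRule (.*) $1?debug [QSA]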

I think you have two options. The "easiest" would be to add a session variable on the server side indicating that all pages returned should be in debug mode. This brings its own side effects, reliance on the session being one of them.
The better option is to add the debug query string to all links on the page. This can be done on the server side when the page is rendered, but probably the best way would be to use something like jQuery to automatically add it to all the links (as described here: jQuery: Append querystring to all links).
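For instance, a minimal jQuery sketch of that approach (hedged: it assumes no link already carries its own ?debug parameter):
$(function () {
    // Only propagate the flag when the current page was loaded with ?debug
    if (window.location.search.indexOf('debug') !== -1) {
        $('a[href]').attr('href', function (i, href) {
            // Use '?' or '&' depending on whether a query string already exists
            return href + (href.indexOf('?') === -1 ? '?debug' : '&debug');
        });
    }
});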


NGINX remove .html extension

So I found an answer for removing the .html extension on my page, which works fine with this code:
server {
    listen 80;
    server_name _;
    root /var/www/html/;
    index index.html;

    if (!-f "${request_filename}index.html") {
        rewrite ^/(.*)/$ /$1 permanent;
    }
    if ($request_uri ~* "/index.html") {
        rewrite (?i)^(.*)index\.html$ $1 permanent;
    }
    if ($request_uri ~* ".html") {
        rewrite (?i)^(.*)/(.*)\.html $1/$2 permanent;
    }

    location / {
        try_files $uri.html $uri $uri/ /index.html;
    }
}
But if I open mypage.com it redirects me to mypage.com/index
Wouldn't this be fixed by declaring index.html as index? Any help is appreciated.
The "Holy Grail" Solution for Removing ".html" in NGINX:
UPDATED ANSWER: This question piqued my curiosity, and I went on another, more in-depth search for a "holy grail" solution for .html redirects in NGINX. Here is the link to the answer I found, since I didn't come up with it myself: https://stackoverflow.com/a/32966347/4175718
However, I'll give an example and explain how it works. Here is the code:
location / {
    if ($request_uri ~ ^/(.*)\.html(\?|$)) {
        return 302 /$1;
    }
    try_files $uri $uri.html $uri/ =404;
}
What's happening here is a pretty ingenious use of the if directive. NGINX runs a regex on the $request_uri portion of incoming requests. The regex checks whether the URI has an .html extension and then stores the extension-less portion of the URI in the capture variable $1.
From the docs, since it took me a while to figure out where the $1 came from:
Regular expressions can contain captures that are made available for later reuse in the $1..$9 variables.
The regex both checks for the existence of unwanted .html requests and effectively sanitizes the URI so that it does not include the extension. Then, using a simple return statement, the request is redirected to the sanitized URI that is now stored in $1.
The best part about this, as original author cnst explains, is that
Due to the fact that $request_uri is always constant per request, and is not affected by other rewrites, it won't, in fact, form any infinite loops.
Unlike the rewrites, which operate on any .html request (including the invisible internal redirect to /index.html), this solution only operates on external URIs that are visible to the user.
What does "try_files" do?
You will still need the try_files directive, as otherwise NGINX will have no idea what to do with the newly sanitized extension-less URIs. The try_files directive shown above will first try the new URL by itself, then try it with the ".html" extension, then try it as a directory name.
The NGINX docs also explain how try_files works. The directive in their example is ordered differently from the one above, so the explanation below does not line up perfectly:
NGINX will first append .html to the end of the URI and try to serve it. If it finds an appropriate .html file, it will return that file and will maintain the extension-less URI. If it cannot find an appropriate .html file, it will try the URI without any extension, then the URI as a directory, and then finally return a 404 error.
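That description corresponds to an ordering like this one (a sketch matching the docs' wording rather than the exact directive above):
try_files $uri.html $uri $uri/ =404;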
UPDATE: What does the regex do?
The above answer touches on the use of regular expressions, but here is a more specific explanation for those who are still curious. The following regular expression (regex) is used:
^/(.*)\.html(\?|$)
This breaks down as:
^: indicates beginning of line.
/: match the character "/" literally. Forward slashes do NOT need to be escaped in NGINX.
(.*): capturing group: match any character an unlimited number of times
\.: match the character "." literally. This must be escaped with a backslash.
html: match the string "html" literally.
(\?|$): match a literal "?" or the end of the string. This is done to avoid mishandling file names with something after ".html".
The capturing group (.*) is what contains the non-".html" portion of the URL. This can later be referenced with the variable $1. NGINX is then configured to re-try the request (return 302 /$1;) and the try_files directive internally re-appends the ".html" extension so the file can be located.
UPDATE: Retaining the query string
To retain query strings and arguments passed to a .html page, the return statement can be changed to:
return 302 /$1$is_args$args;
This should allow requests such as /index.html?test to redirect to /index?test instead of just /index.
Note that this is considered safe usage of the `if` directive.
From the NGINX page If Is Evil:
The only 100% safe things which may be done inside if in a location context are:
return ...;
rewrite ... last;
Also, note that you may swap out the '302' redirect for a '301'.
A 301 redirect is permanent, and is cached by web browsers and search engines. If your goal is to permanently remove the .html extension from pages that are already indexed by a search engine, you will want to use a 301 redirect. However, if you are testing on a live site, it is best practice to start with a 302 and only move to a 301 when you are absolutely confident your configuration is working correctly.
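For example, once you are confident the configuration works, the return line above would simply become:
return 301 /$1$is_args$args;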
This has often come up for me as well. Due to the configuration at work, location blocks are iffy at best and the / and .php blocks are locked down, which means that most of these solutions don't work for me.
So here is one that I simplified from the accepted answer above.
rewrite ^/(.*)\.html /$1/ permanent;
It works great for CMSs where the underlying framework generates the pages.

Getting last page visited

I have a web site with some static web pages (webSiteA), which has a link to another web application (webAppB).
webAppB must know if the client was redirected from webSiteA. What are my options here?
One option I am thinking about is to create the link with a query string on webSiteA, and webAppB can check for that.
webSiteA is just a static html web site created using some web designer, and will be in http.
I guess webAppB could also check the last URL visited by using the referrer, or check the IP for webSiteA.
Are there any other options that may be considered a better way to do this? How safe is either of the methods above? How easy is it to spoof these?
The basic option is to use the referer.
You say website A is static and you don't need to enforce strong security. In that case, the referer is really the only option.
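For example, a minimal PHP sketch of a referer check (the hostname is a placeholder; remember this header is client-supplied and trivially spoofable):
// The Referer header may be absent or forged, so treat this as a hint only.
$ref = isset($_SERVER['HTTP_REFERER']) ? $_SERVER['HTTP_REFERER'] : '';
$cameFromSiteA = (strpos($ref, 'http://website-a.example/') === 0);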
If you need proof that the user visited site A, you can do something like this:
Put a link like
/redirect.php?url=http://site-b/...
In redirect.php you add a parameter to the URL that uniquely identifies the client, for example:
http://site-b/...?t=identifier
where identifier can be something like
$identifier = md5($_SERVER['REMOTE_ADDR'] . $_SERVER['HTTP_USER_AGENT'] . $secret_string);
On website B, you check whether the identifier corresponds to the client's footprint. This gives you proof that cannot be forged without knowing the secret string.
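A minimal PHP sketch of both halves, assuming the two sites share the same $secret_string (redirect.php and the t parameter come from the answer above; the whitelist warning is an extra precaution):
// Website A (redirect.php): stamp the outgoing URL with the client's footprint.
$secret_string = 'change-me'; // assumption: configured identically on both sites
$identifier = md5($_SERVER['REMOTE_ADDR'] . $_SERVER['HTTP_USER_AGENT'] . $secret_string);
// In real code, validate $_GET['url'] against a whitelist to avoid an open redirect.
header('Location: ' . $_GET['url'] . '?t=' . $identifier);
exit;

// Website B: recompute the footprint and compare it to the t parameter.
$expected = md5($_SERVER['REMOTE_ADDR'] . $_SERVER['HTTP_USER_AGENT'] . $secret_string);
if (isset($_GET['t']) && hash_equals($expected, $_GET['t'])) { // hash_equals needs PHP 5.6+
    // The visitor came through website A's redirect script.
}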

What kind of example URL can I use that will immediately cause a request to fail?

What is the "official" URL I should use if I want to indicate just a resource that fails as soon as possible?
I don't want to use www.example.com, since it's an actual site that accepts and responds to requests, and I don't want something that takes forever and fails from a timeout (as typing a random, private IP address can lead to).
I thought about writing an invalid address or just some random text, but I figured it wouldn't look as nice and clear as "www.example.com" does.
If you want an invalid IP, try using 0.0.0.0.
The first octet of an IP address cannot be 0 (the 0.0.0.0/8 block is reserved), so 0.0.0.0 to 0.255.255.255 will be invalid as a destination.
For more info, see this question: what is a good invalid IP address to use for unit tests?
https://www.rfc-editor.org/rfc/rfc5735:
192.0.2.0/24 - This block is assigned as "TEST-NET-1" for use in documentation and example code. It is often used in conjunction with domain names example.com or example.net in vendor and protocol documentation. As described in [RFC5737], addresses within this block do not legitimately appear on the public Internet and can be used without any coordination with IANA or an Internet registry. See [RFC1166].
Use .invalid, as per RFC 6761:
The domain "invalid." and any names falling within ".invalid." are special [...] Users MAY assume that queries for "invalid" names will always return NXDOMAIN responses.
So a request for https://foo.invalid/bar will always fail, assuming well-behaved DNS.
Related question: What is a guaranteed-unresolvable (but valid) URL?
If it's in a browser, then about: is fairly useless. But it would be better if your service returned the correct HTTP status code, e.g. 200 = OK, 404 = not found, etc.
http://en.wikipedia.org/wiki/List_of_HTTP_status_codes

Can a URL have multiple parts of subdomain to it?

I have a domain name abc.mydomain.com
This is a https URL ( http redirects to the https version )
However, I now need to be able to handle www.abc.mydomain.com to redirect to abc.mydomain.com
How can I do this? Is it a webserver-level redirect, or something to be done at DNS resolution?
I know my URL already has "abc" as its subdomain and I don't need a "www"; however, we noticed that "www.news.google.com" resolves to "news.google.com", hence wondering if I can achieve that too.
Thank you!
In short, yes.
DNS works on a hierarchy: the DNS server for .com delegates down to the nameserver for your domain, which can delegate further or just answer the requests. Getting the name to resolve needs to be your first step.
If you use BIND-style zone files, you can add a wildcard record like this (where 123.45.67.89 is your webserver's IP address):
* IN A 123.45.67.89
Then you also need your webserver to map that hostname to the right virtual host and perform the redirect as desired.
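On the webserver side, a hedged NGINX sketch of that redirect (hostnames are the ones from the question; for HTTPS, the certificate must also cover the www-prefixed name):
server {
    listen 80;
    listen 443 ssl;
    server_name www.abc.mydomain.com;
    # assumption: ssl_certificate / ssl_certificate_key are configured and the
    # certificate lists www.abc.mydomain.com as a subject alternative name
    return 301 https://abc.mydomain.com$request_uri;
}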

Switch to SSL using a relative URL

I would like to create a relative link that switches the current protocol from http to https. The last place I worked had something set up on the server so that you could make that happen, but I don't remember much about it and I never knew how it worked.
The rationale for this is that I wouldn't need to hardcode server names in files that need to move in between production and development environments.
Is there a way for this to work in IIS 6.0?
Edit:
I am using .NET, but the "link" I'm creating will not be dynamically generated. If you really want the nitty gritty details, I am using a redirect macro in Umbraco that requires a URL to be passed in.
Here's a simple solution in VB.NET:
Imports System.Web.HttpContext
Public Shared Sub SetSSL(Optional ByVal bEnable As Boolean = False)
If bEnable Then
If Not Current.Request.IsSecureConnection Then
Dim strHTTPS As String = "https://www.mysite.com"
Current.Response.Clear()
Current.Response.Status = "301 Moved Permanently"
Current.Response.AddHeader("Location", strHTTPS & Current.Request.RawUrl)
Current.Response.End()
End If
Else
If Current.Request.IsSecureConnection Then
Dim strHTTP As String = "http://www.mysite.com"
Current.Response.Clear()
Current.Response.Status = "301 Moved Permanently"
Current.Response.AddHeader("Location", strHTTP & Current.Request.RawUrl)
Current.Response.End()
End If
End If
End Sub
Usage:
'Enable SSL
SetSSL(True)
'Disable SSL
SetSSL(False)
You could add this to the Page_Load of each of your pages. Or you could do what I did: create a list of folders or pages that you want secured in your global.asax and set the SSL state accordingly in the Application_BeginRequest method. This works with relative links, and the HTTP or HTTPS status of a page will always be what you tell it to be in the code.
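As an illustration, here is a hedged sketch of that Global.asax approach (the folder list is hypothetical, and SetSSL is assumed to be reachable from Global.asax):
Sub Application_BeginRequest(ByVal sender As Object, ByVal e As EventArgs)
    ' Hypothetical list of folders that must always be served over HTTPS
    Dim securePaths As String() = {"/quote/", "/account/"}
    Dim mustSecure As Boolean = False
    For Each p As String In securePaths
        If Request.RawUrl.ToLower().StartsWith(p) Then mustSecure = True
    Next
    ' Redirect to HTTPS for secured paths, and back to HTTP otherwise
    SetSSL(mustSecure)
End Sub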
I have this code in place on several websites. But as an example, if you go to https://www.techinsurance.com you'll notice it automatically redirects to http because the home page doesn't need to be secured. And the reverse will happen if you try to hit a page that needs to be secured such as http://www.techinsurance.com/quote/login.aspx
You may notice that I'm using 301 (permanent) redirects. The side benefit here is that search engines will update their index based on a 301 redirect code.
Which language/framework are you using?
You should be able to create your own function that takes the relative page, deduces the host and URL from the HttpRequest object and the Server object (again, depending on the language or framework), and then simply redirects to that URL with https as the prefix.
Here is a good CodeProject article on doing this by specifying certain directories and files that you want to use SSL. It will automatically switch these to and from https based on your needs.
I've used this for a project, and it works really well.
This is the same answer I gave here:
Yes you can. I recommend this free, open-source DLL that lets you designate which pages and folders need SSL and which don't:
http://www.codeproject.com/KB/web-security/WebPageSecurity_v2.aspx
So you can set up a page to be secure in your web.config like this:
<secureWebPages encryptedUri="www.example.com" unencryptedUri="www.example.com" mode="RemoteOnly">
  <files>
    <add path="/MustBeSecure.aspx" secure="Secure" />
  </files>
</secureWebPages>
We ended up buying ISAPI Rewrite to perform redirects at the web server level for certain URLs. That's not quite the answer I was looking for when I asked the question, but it's what works for us.