Reserved Characters in URI - cross-browser

Is a URI written like Method 1 likely to cause problems on certain browsers vs Method 2? If so, on which ones? Can someone point to a source?
Method 1
test.dev/mypage?attributes[]=1&attributes[]=2&attributes[]=3
Method 2
test.dev/mypage?attributes%5B%5D=1&attributes%5B%5D=2&attributes%5B%5D=3

See RFC 3986 (which obsoletes the older W3C URI memo at https://www.w3.org/Addressing/URL/uri-spec.html):
https://www.rfc-editor.org/rfc/rfc3986
Of the characters []=&, the square brackets [ and ] are in fact reserved (gen-delims) and are only permitted unencoded in the host part of a URI (for IPv6 literals), so Method 2 is the strictly valid form. In practice, though, every major browser and server accepts the unencoded brackets of Method 1, and both forms decode to the same query string.
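As a quick sanity check that the two forms are interchangeable once decoded, here is a small sketch using Python's standard library (the `attributes[]` array convention itself comes from the question, not from any RFC):

```python
from urllib.parse import parse_qs

raw = "attributes[]=1&attributes[]=2&attributes[]=3"
encoded = "attributes%5B%5D=1&attributes%5B%5D=2&attributes%5B%5D=3"

# parse_qs percent-decodes before splitting, so both forms
# yield the same multi-valued parameter.
print(parse_qs(raw))      # {'attributes[]': ['1', '2', '3']}
print(parse_qs(encoded))  # {'attributes[]': ['1', '2', '3']}
```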

Valid Lighthouse URL patterns

I can see that the Lighthouse CLI accepts a --blocked-url-patterns argument, but I can't find any definition of what constitutes a valid pattern, only that * is supported.
Is * literally the only supported pattern-ish character?
In this Google Lighthouse test file, blockedUrlPatterns is an array of strings containing various patterns:
blockedUrlPatterns: ['http://*.evil.com', '.jpg', '.woff2'],
I realise this is coming a little (two years) late, but for anyone else who wants to pass blocking patterns via the CLI: you need to pass each pattern as its own parameter:
lighthouse https://example.com --blocked-url-patterns='http://*.evil.com' --blocked-url-patterns='*.jpg'
This creates the array Lighthouse requires, which you can see in the generated report if you search for blockedUrlPatterns:
"blockedUrlPatterns":["http://*.evil.com","*.jpg"]

On a JSON REST API, how to report warnings for non-critical errors or re-attempts

[I mention the sources I've looked at below]
I am designing an API and I found over and over again that warnings would be helpful, these were the 3 most common use cases:
To convey that a non-critical issue occurred when doing a POST:
(For example, data not matching some validation was ignored and the
resource was created anyway)
To convey that a side-effect occurred: (For example, an associated resource that was valid yesterday, is now expired and was removed - the main resource however is still valid)
To convey that an action was already performed: (For example, when POSTing payment data on an item that has already been paid for)
In all 3 instances the response to the verb is successful, and the relevant response is giving back the resource (in case 1 the resource was created, in case 2 the resource is still there, in case 3 the resource is now paid for).
In all 3 instances I feel like I should be informing the Client but I could not find the standard way of doing this.
Adding the warnings to the body would require changing the resource model, which is something I'd rather avoid.
A standardized "_warnings" key that is not part of the model per se is something I'm partial to (but I haven't seen implementations of it).
Ideally, I feel the information should be part of the headers, but the only similar approach I found was the Warning header, which is deprecated (see below).
Things I've looked at:
Previous (relevant) question: https://softwareengineering.stackexchange.com/questions/315556/warnings-in-a-rest-api-as-not-critical-errors
Reason why it's not 100% relevant: that question focuses on the appropriate status code, and on whether warnings should be returned at all, rather than on the resources themselves and where to put the warnings in the body.
Standard warning header: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Warning
Reason why it's not relevant: the header has behaviour associated with it (caching), and it is on the verge of changing (it has been deprecated).
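For what it's worth, the non-standard "_warnings" key mentioned above could look like this. This is purely illustrative (no spec defines such a key, and the field names here are made up), the idea being that clients that don't know about it can simply ignore it:

```python
import json

# Hypothetical response body: the resource itself, plus a
# non-standard "_warnings" array that clients may ignore.
response = {
    "id": 42,
    "status": "paid",
    "_warnings": [
        {
            "code": "already-paid",
            "message": "Item 42 was already paid for; no new charge was made.",
        }
    ],
}

body = json.dumps(response)
print(body)
```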

Mac OS X - Accept self-signed multi domain SSL Certificate

I've got a self-signed certificate for multiple domains (let's say my.foo.bar.com & yours.foo.bar.com) that I've imported using Keychain Access but Chrome will still not accept it, prompting me for verification at the beginning of each browsing session per domain.
The certificate was generated using the X509v3 Subject Alternative Name extension to validate multiple domains. If I navigate to the site before importing the certificate, I get a different warning message than after importing it. Attached below is an image of the two errors (the top one being the error before importing).
Is there any way to accept a self-signed multi-domain certificate? I only get warnings in Chrome, btw. FF and Safari work great (except those browsers suck ;) )
UPDATE: I tried generating the cert both with the openssl cli and the xca GUI
The problem is that you're trying to use too broad a wildcard (* or *.com).
The specifications (RFC 6125 and RFC 2818 Section 3.1) talk about "left-most" labels, which implies there should be more than one label:
1. The client SHOULD NOT attempt to match a presented identifier in
which the wildcard character comprises a label other than the
left-most label (e.g., do not match bar.*.example.net).
2. If the wildcard character is the only character of the left-most
label in the presented identifier, the client SHOULD NOT compare
against anything but the left-most label of the reference
identifier (e.g., *.example.com would match foo.example.com but
not bar.foo.example.com or example.com).
I'm not sure whether there's a specification to say how many minimum labels there should be, but the Chromium code indicates that there must be at least 2 dots:
We required at least 3 components (i.e. 2 dots) as a basic protection
against too-broad wild-carding.
This is indeed to prevent overly broad cases like *.com. It may seem inconvenient, but CAs make mistakes once in a while, and having a measure that stops a potential rogue certificate issued for *.com from working isn't necessarily a bad thing.
If I remember correctly, some implementations go further than this and keep a list of domains (e.g. .co.uk) for which a second-level wildcard would also be too broad.
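The matching rules quoted above can be sketched in a few lines. This is an illustration of the left-most-label and at-least-two-dots rules only, not a complete TLS hostname checker:

```python
def wildcard_matches(pattern: str, hostname: str) -> bool:
    """Simplified sketch of RFC 6125-style wildcard matching.

    Only a single '*' as the entire left-most label is honoured,
    and (like Chromium) the pattern must contain at least two dots.
    """
    if pattern.count(".") < 2:           # reject *.com-style patterns
        return False
    p_labels = pattern.split(".")
    h_labels = hostname.split(".")
    if p_labels[0] != "*":
        return pattern == hostname       # no wildcard: exact match only
    # '*' matches exactly one label, so label counts must agree
    return len(p_labels) == len(h_labels) and p_labels[1:] == h_labels[1:]

print(wildcard_matches("*.example.com", "foo.example.com"))      # True
print(wildcard_matches("*.example.com", "bar.foo.example.com"))  # False
print(wildcard_matches("*.com", "example.com"))                  # False
```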
Regarding your second example: "CN:bar.com, SANs: DNS:my.foo.bar.com, DNS:yours.foo.bar.com". This certificate should be valid for my.foo.bar.com and yours.foo.bar.com but not bar.com. The CN is only a fallback solution when no SANs are present. If there are any SANs, the CN should be ignored (although some implementations will be more tolerant).
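Since you mention generating the cert with the openssl CLI: on OpenSSL 1.1.1 or later, a self-signed cert with SANs for both hostnames can be produced in one command via -addext (the filenames and hostnames here are just placeholders):

```shell
# Generate a self-signed cert whose SANs cover both hostnames.
# Browsers ignore the CN when SANs are present, so the SAN list
# is what actually matters.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout server.key -out server.crt -days 365 \
  -subj "/CN=foo.bar.com" \
  -addext "subjectAltName=DNS:my.foo.bar.com,DNS:yours.foo.bar.com"
```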

What kind of example url I can use that will immediately cause a request to fail?

What is the "official" url I should use if I want to indicate just a resource that fails as soon as possible?
I don't want to use www.example.com, since it's an actual site that accepts and responds to requests, and I don't want something that takes forever and fails with a timeout (as typing a random private IP address can).
I thought about writing an invalid address or just some random text, but I figured it wouldn't look as nice and clear as "www.example.com" does.
If you want an invalid IP, try using 0.0.0.0.
The 0.0.0.0/8 block (0.0.0.0 to 0.255.255.255) is reserved and invalid as a destination address, so requests to it fail immediately.
For more info, see this question: what is a good invalid IP address to use for unit tests?
https://www.rfc-editor.org/rfc/rfc5735:
192.0.2.0/24 - This block is assigned as "TEST-NET-1" for use in documentation and example code. It is often used in conjunction with domain names example.com or example.net in vendor and protocol documentation. As described in [RFC5737], addresses within this block do not legitimately appear on the public Internet and can be used without any coordination with IANA or an Internet registry. See [RFC1166].
Use .invalid, as per RFC 6761:
The domain "invalid." and any names falling within ".invalid." are special [...] Users MAY assume that queries for "invalid" names will always return NXDOMAIN responses.
So a request for https://foo.invalid/bar will always fail, assuming well-behaved DNS.
Related question: What is a guaranteed-unresolvable (but valid) URL?
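You can see the guarantee in action with a quick resolution check; assuming a well-behaved resolver, the lookup fails without ever making a connection attempt:

```python
import socket

# Per RFC 6761, queries for names under ".invalid" must return
# NXDOMAIN, so name resolution should always fail immediately.
try:
    socket.getaddrinfo("foo.invalid", 443)
    resolved = True
except socket.gaierror:
    resolved = False

print(resolved)  # False
```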
If it's in a browser, then about: is fairly useless. But it would be better if your service returned the correct HTTP status code - e.g. 200 = good, 404 = not found, etc.
http://en.wikipedia.org/wiki/List_of_HTTP_status_codes

is link / href with just parameters (starting with question mark) valid?

Is this link valid?
<a href="?lang=en">eng</a>
I know the browsers treat it as expected and I know the empty link would be ok too - but is it ok to specify just the parameters?
I am curious because question mark ("?") is only a convention by most HTTP servers (AFAIK), though I admit it is a prevailing one.
So, to recap:
will all browsers interpret this correctly?
is this in RFC?
can I expect some trouble using this?
UPDATE: the intended action on click is to redirect to the same page, but with different GET parameters ("lang=en" in the above example).
Yes, it is.
You can find it in RFC 1808 - Relative Uniform Resource Locators:
Within an object with a well-defined base URL of
Base: <URL:http://a/b/c/d;p?q#f>
the relative URLs would be resolved as follows:
5.1. Normal Examples
?y = <URL:http://a/b/c/d;p?y>
RFC 3986 - Uniform Resource Identifier (URI): Generic Syntax restates the same, and adds more details, including the grammar:
relative-ref  = relative-part [ "?" query ] [ "#" fragment ]
relative-part = "//" authority path-abempty
              / path-absolute
              / path-noscheme
              / path-empty        ; zero characters
Now, that is not to say all browsers implement it according to the standard, but it looks like this should be safe.
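You can reproduce the RFC's ?y example with Python's urllib.parse, which implements RFC 3986 reference resolution:

```python
from urllib.parse import urljoin

# Base URL from RFC 3986 §5.4; resolving a query-only relative
# reference keeps the path and replaces the query and fragment.
base = "http://a/b/c/d;p?q#f"
print(urljoin(base, "?y"))  # http://a/b/c/d;p?y
```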
Yes - and it will hit the current URL with the parameters you are passing.
This is very convenient when you want to make sure you stay within the current page/form boundary and keep hitting the same ActionMethod (or whatever is listening) with different parameters.