How can I share links to my IPFS content if HTTP gateway URLs are blocked by some social media sites and firewall vendors?

IPFS is a distributed filesystem to which anyone can add content and from which anyone can retrieve it. Content is indexed by CID (content hash), so each file gets a deterministic, immutable identifier.
IPFS gateways make content from the network available to web clients. Because they serve HTML content, they have been abused in the past for phishing attacks. In response, several security vendors added some of the gateway hosts to their URL filters. 😕
Because the offending content lives at subdomains rather than the root, it would be proper to treat these URLs like GitHub Pages or similar and apply the block to the subdomain, not the full domain. We are pursuing inclusion of some of our gateways in the public suffix list, which would help security vendors know to give IPFS gateway URLs the correct treatment.
In the meantime, gateway URLs may be blocked when posted to social media or accessed through some security firewalls.
Is there a better way to share this content?

My workaround for now is to use IPFS gateways that are already on the public suffix list. At the moment that means
*.dweb.link
But there is a long list of public gateways here, so check them if you need to use another one. Sometimes it can take a while for your content to become available on a new gateway, but often the second request is faster. You can also run a gateway under a new name that hasn't been hit by the content filters yet.
If your original URL looks like
https://bafybeiemxf5abjwjbikoz4mc3a3dla6ual3jsgpdr4cjr3oz3evfyavhwq.ipfs.w3s.link/wiki/Vincent_van_Gogh.html
You'll want to change the gateway part of the hostname from w3s.link to dweb.link, so it looks like:
https://bafybeiemxf5abjwjbikoz4mc3a3dla6ual3jsgpdr4cjr3oz3evfyavhwq.ipfs.dweb.link/wiki/Vincent_van_Gogh.html
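Since this is just a hostname swap, it is easy to script. Here is a minimal sketch (assuming the subdomain-style <cid>.ipfs.<gateway> URL form shown above; the function name is just for illustration):

function swapGateway(originalUrl, newGateway) {
    // Keep the CID and the "ipfs" label, replace only the gateway part of the hostname.
    const url = new URL(originalUrl);
    const [cid, ipfsLabel] = url.hostname.split(".");      // e.g. ["bafybei...", "ipfs", "w3s", "link"]
    url.hostname = `${cid}.${ipfsLabel}.${newGateway}`;    // e.g. "bafybei....ipfs.dweb.link"
    return url.toString();
}

// Example: rewrite the w3s.link URL above to use dweb.link instead.
console.log(swapGateway("https://bafybeiemxf5abjwjbikoz4mc3a3dla6ual3jsgpdr4cjr3oz3evfyavhwq.ipfs.w3s.link/wiki/Vincent_van_Gogh.html", "dweb.link"));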

Related

Same origin policy by default [duplicate]

I have a Grunt process which initiates an instance of express.js server. This was working absolutely fine up until just now when it started serving a blank page with the following appearing in the error log in the developer's console in Chrome (latest version):
XMLHttpRequest cannot load https://www.example.com/
No 'Access-Control-Allow-Origin' header is present on the requested
resource. Origin 'http://localhost:4300' is therefore not allowed access.
What is stopping me from accessing the page?
tl;dr — When you want to read data, (mostly) using client-side JS, from a different server you need the server with the data to grant explicit permission to the code that wants the data.
There's a summary at the end and headings in the answer to make it easier to find the relevant parts. Reading everything is recommended, though, as it provides useful background on the why, which makes it easier to see how the how applies in different circumstances.
About the Same Origin Policy
This is the Same Origin Policy. It is a security feature implemented by browsers.
Your particular case is showing how it is implemented for XMLHttpRequest (and you'll get identical results if you were to use fetch), but it also applies to other things (such as images loaded onto a <canvas> or documents loaded into an <iframe>), just with slightly different implementations.
The standard scenario that demonstrates the need for the SOP can be demonstrated with three characters:
Alice is a person with a web browser
Bob runs a website (https://www.example.com/ in your example)
Mallory runs a website (http://localhost:4300 in your example)
Alice is logged into Bob's site and has some confidential data there. Perhaps it is a company intranet (accessible only to browsers on the LAN), or her online banking (accessible only with a cookie you get after entering a username and password).
Alice visits Mallory's website which has some JavaScript that causes Alice's browser to make an HTTP request to Bob's website (from her IP address with her cookies, etc). This could be as simple as using XMLHttpRequest and reading the responseText.
The browser's Same Origin Policy prevents that JavaScript from reading the data returned by Bob's website (which Bob and Alice don't want Mallory to access). (Note that you can, for example, display an image using an <img> element across origins because the content of the image is not exposed to JavaScript (or Mallory) … unless you throw canvas into the mix in which case you will generate a same-origin violation error).
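To make the <canvas> caveat concrete, here is a small sketch (the image URL is a placeholder): drawing a cross-origin image into a canvas is allowed, but reading the pixels back afterwards is blocked.

// Drawing a cross-origin image "taints" the canvas; reading it back then throws.
const img = new Image();
img.src = "https://www.example.com/photo.png";   // placeholder cross-origin image
img.onload = () => {
    const canvas = document.createElement("canvas");
    const ctx = canvas.getContext("2d");
    ctx.drawImage(img, 0, 0);                    // fine: displaying/drawing is allowed
    try {
        ctx.getImageData(0, 0, 1, 1);            // blocked: pixel data is not exposed to JavaScript
    } catch (err) {
        console.log(err.name);                   // "SecurityError"
    }
};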
Why the Same Origin Policy applies when you don't think it should
For any given URL it is possible that the SOP is not needed. A couple of common scenarios where this is the case are:
Alice, Bob, and Mallory are the same person.
Bob is providing entirely public information
… but the browser has no way of knowing if either of the above is true, so trust is not automatic and the SOP is applied. Permission has to be granted explicitly before the browser will give the data it has received from Bob to some other website.
Why the Same Origin Policy applies to JavaScript in a web page but little else
Outside the web page
Browser extensions*, the Network tab in browser developer tools, and applications like Postman are installed software. They aren't passing data from one website to the JavaScript belonging to a different website just because you visited that different website. Installing software usually takes a more conscious choice.
There isn't a third party (Mallory) who is considered a risk.
* Browser extensions do need to be written carefully to avoid cross-origin issues. See the Chrome documentation for example.
Inside the webpage
Most of the time, there isn't a great deal of information leakage when just showing something on a webpage.
If you use an <img> element to load an image, then it gets shown on the page, but very little information is exposed to Mallory. JavaScript can't read the image (unless you use a crossOrigin attribute to explicitly request permission with CORS), so it can't copy it to her server.
That said, some information does leak so, to quote Domenic Denicola (of Google):
The web's fundamental security model is the same origin policy. We have several legacy exceptions to that rule from before that security model was in place, with script tags being one of the most egregious and most dangerous. (See the various "JSONP" attacks.)
Many years ago, perhaps with the introduction of XHR or web fonts (I can't recall precisely), we drew a line in the sand, and said no new web platform features would break the same origin policy. The existing features need to be grandfathered in and subject to carefully-honed and oft-exploited exceptions, for the sake of not breaking the web, but we certainly can't add any more holes to our security policy.
This is why you need CORS permission to load fonts across origins.
Why you can display data on the page without reading it with JS
There are a number of circumstances where Mallory's site can cause a browser to fetch data from a third party and display it (e.g. by adding an <img> element to display an image). It isn't possible for Mallory's JavaScript to read the data in that resource though, only Alice's browser and Bob's server can do that, so it is still secure.
CORS
The Access-Control-Allow-Origin HTTP response header referred to in the error message is part of the CORS standard which allows Bob to explicitly grant permission to Mallory's site to access the data via Alice's browser.
A basic implementation would just include:
Access-Control-Allow-Origin: *
… in the response headers to permit any website to read the data.
Access-Control-Allow-Origin: http://example.com
… would allow only a specific site to access it, and Bob can dynamically generate that based on the Origin request header to permit multiple, but not all, sites to access it.
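For illustration, a minimal hand-rolled sketch of that dynamic approach (assuming an Express app; the allow-list and origins are hypothetical, and in practice the CORS middleware mentioned below does this for you):

const ALLOWED_ORIGINS = ["https://mallory.example", "https://partner.example"]; // hypothetical allow-list

app.use((req, res, next) => {
    const origin = req.headers.origin;                      // sent by the browser on cross-origin requests
    if (origin && ALLOWED_ORIGINS.includes(origin)) {
        res.header("Access-Control-Allow-Origin", origin);  // echo back only trusted origins
        res.header("Vary", "Origin");                       // keep caches from mixing up per-origin responses
    }
    next();
});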
The specifics of how Bob sets that response header depend on Bob's HTTP server and/or server-side programming language. Users of Node.js/Express.js should use the well-documented CORS middleware. Users of other platforms should take a look at this collection of guides for various common configurations that might help.
NB: Some requests are complex and send a preflight OPTIONS request that the server will have to respond to before the browser will send the GET/POST/PUT/Whatever request that the JS wants to make. Implementations of CORS that only add Access-Control-Allow-Origin to specific URLs often get tripped up by this.
Obviously granting permission via CORS is something Bob would do only if either:
The data was not private or
Mallory was trusted
How do I add these headers?
It depends on your server-side environment.
If you can, use a library designed to handle CORS as they will present you with simple options instead of having to deal with everything manually.
Enable-Cors.org has a list of documentation for specific platforms and frameworks that you might find useful.
But I'm not Bob!
There is no standard mechanism for Mallory to add this header because it has to come from Bob's website, which she does not control.
If Bob is running a public API then there might be a mechanism to turn on CORS (perhaps by formatting the request in a certain way, or a config option after logging into a Developer Portal site for Bob's site). This will have to be a mechanism implemented by Bob though. Mallory could read the documentation on Bob's site to see if something is available, or she could talk to Bob and ask him to implement CORS.
Error messages which mention "Response for preflight"
Some cross-origin requests are preflighted.
This happens when (roughly speaking) you try to make a cross-origin request that:
Includes credentials like cookies
Couldn't be generated with a regular HTML form (e.g. has custom headers or a Content-Type that you couldn't use in a form's enctype).
If you are correctly doing something that needs a preflight
In these cases the rest of this answer still applies, but you also need to make sure the server can listen for the preflight request (which will be OPTIONS, not the GET, POST, or whatever you were trying to send) and respond to it with the right Access-Control-Allow-Origin header, as well as Access-Control-Allow-Methods and Access-Control-Allow-Headers to allow your specific HTTP methods or headers.
If you are triggering a preflight by mistake
Sometimes people make mistakes when trying to construct Ajax requests, and sometimes these trigger the need for a preflight. If the API is designed to allow cross-origin requests but doesn't require anything that would need a preflight, then this can break access.
Common mistakes that trigger this include:
trying to put Access-Control-Allow-Origin and other CORS response headers on the request. These don't belong on the request, don't do anything helpful (what would be the point of a permissions system where you could grant yourself permission?), and must appear only on the response.
trying to put a Content-Type: application/json header on a GET request, which has no request body for the header to describe (typically when the author confuses Content-Type and Accept).
In either of these cases, removing the extra request header will often be enough to avoid the need for a preflight (which will solve the problem when communicating with APIs that support simple requests but not preflighted requests).
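As a concrete sketch of the second mistake (the API URL is made up), the first call below triggers a preflight while the second stays a simple request:

// Triggers a preflight: a GET has no body, so Content-Type doesn't belong on it.
fetch("https://api.example.com/items", {
    headers: { "Content-Type": "application/json" },
});

// Stays a simple request: a plain GET with no custom headers.
fetch("https://api.example.com/items");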
Opaque responses (no-cors mode)
Sometimes you need to make an HTTP request, but you don't need to read the response. e.g. if you are posting a log message to the server for recording.
If you are using the fetch API (rather than XMLHttpRequest), then you can configure it to not try to use CORS.
Note that this won't let you do anything that you require CORS to do. You will not be able to read the response. You will not be able to make a request that requires a preflight.
It will let you make a simple request, not see the response, and not fill the Developer Console with error messages.
How to do it is explained by the Chrome error message given when you make a request using fetch and don't get permission to view the response with CORS:
Access to fetch at 'https://example.com/' from origin 'https://example.net' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled.
Thus:
fetch("http://example.com", { mode: "no-cors" });
Alternatives to CORS
JSONP
Bob could also provide the data using a hack like JSONP which is how people did cross-origin Ajax before CORS came along.
It works by presenting the data in the form of a JavaScript program that injects the data into Mallory's page.
It requires that Mallory trust Bob not to provide malicious code.
Note the common theme: The site providing the data has to tell the browser that it is OK for a third-party site to access the data it is sending to the browser.
Since JSONP works by appending a <script> element to load the data in the form of a JavaScript program that calls a function already in the page, attempting to use the JSONP technique on a URL that returns JSON will fail — typically with a CORB error — because JSON is not JavaScript.
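For reference, a minimal sketch of the JSONP mechanics (the endpoint and callback name are made up; Bob's server has to wrap its data in a call to the named function):

// Mallory's page defines the callback that Bob's script will call.
function handleData(data) {
    console.log(data.message);
}

// Appending a <script> element asks Bob's server for JavaScript, not JSON.
const script = document.createElement("script");
script.src = "https://www.example.com/data?callback=handleData"; // hypothetical JSONP endpoint
document.body.appendChild(script);

// Bob's response would look like: handleData({"message": "hello"});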
Move the two resources to a single Origin
If the HTML document the JS runs in and the URL being requested are on the same origin (sharing the same scheme, hostname, and port) then the Same Origin Policy grants permission by default. CORS is not needed.
A Proxy
Mallory could use server-side code to fetch the data (which she could then pass from her server to Alice's browser through HTTP as usual).
It will either:
add CORS headers
convert the response to JSONP
exist on the same origin as the HTML document
That server-side code could be written & hosted by a third party (such as CORS Anywhere). Note the privacy implications of this: The third party can monitor who proxies what across their servers.
Bob wouldn't need to grant any permissions for that to happen.
There are no security implications here since that is just between Mallory and Bob. There is no way for Bob to think that Mallory is Alice and to provide Mallory with data that should be kept confidential between Alice and Bob.
Consequently, Mallory can only use this technique to read public data.
Do note, however, that taking content from someone else's website and displaying it on your own might be a violation of copyright and open you up to legal action.
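For illustration, here is a minimal sketch of the first option, a proxy on Mallory's own origin that adds the CORS header (assuming Express and Node 18+ for the built-in fetch; no caching or error handling, so not production-grade):

// Mallory's server fetches Bob's public data and relays it with a CORS header.
app.get("/proxy", async (req, res) => {
    const upstream = await fetch("https://www.example.com/data");  // Bob's public resource
    const body = await upstream.text();
    res.header("Access-Control-Allow-Origin", "*");                // now Mallory's page may read it
    res.type(upstream.headers.get("content-type") || "text/plain");
    res.send(body);
});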
Writing something other than a web app
As noted in the section "Why the Same Origin Policy applies to JavaScript in a web page but little else", you can avoid the SOP by not writing JavaScript in a webpage.
That doesn't mean you can't continue to use JavaScript and HTML, but you could distribute it using some other mechanism, such as Node-WebKit or PhoneGap.
Browser extensions
It is possible for a browser extension to inject the CORS headers in the response before the Same Origin Policy is applied.
These can be useful for development but are not practical for a production site (asking every user of your site to install a browser extension that disables a security feature of their browser is unreasonable).
They also tend to work only with simple requests (failing when handling preflight OPTIONS requests).
Having a proper development environment with a local development server is usually a better approach.
Other security risks
Note that SOP / CORS do not mitigate XSS, CSRF, or SQL Injection attacks which need to be handled independently.
Summary
There is nothing you can do in your client-side code that will enable CORS access to someone else's server.
If you control the server the request is being made to: Add CORS permissions to it.
If you are friendly with the person who controls it: Get them to add CORS permissions to it.
If it is a public service:
Read their API documentation to see what they say about accessing it with client-side JavaScript:
They might tell you to use specific URLs
They might support JSONP
They might not support cross-origin access from client-side code at all (this might be a deliberate decision on security grounds, especially if you have to pass a personalized API Key in each request).
Make sure you aren't triggering a preflight request you don't need. The API might grant permission for simple requests but not preflighted requests.
If none of the above apply: Get the browser to talk to your server instead, and then have your server fetch the data from the other server and pass it on. (There are also third-party hosted services that attach CORS headers to publicly accessible resources that you could use).
The target server must allow the cross-origin request. To allow it in Express, simply handle the HTTP OPTIONS request:
app.options('/url...', function (req, res, next) {
    res.header('Access-Control-Allow-Origin', '*');
    res.header('Access-Control-Allow-Methods', 'POST');
    res.header('Access-Control-Allow-Headers', 'accept, content-type');
    res.header('Access-Control-Max-Age', '1728000');
    return res.sendStatus(200);
});
As this isn't mentioned in the accepted answer.
This is not the case for this exact question, but it might help others searching for this problem.
This is something you can do in your client-side code to prevent CORS errors in some cases.
You can make use of Simple Requests.
In order to perform a 'Simple Request', the request needs to meet several conditions, e.g. only the POST, GET and HEAD methods are allowed, as well as only certain headers (you can find all the conditions here).
If your client code does not explicitly set the affected headers (e.g. "Accept") to a fixed value in the request, it may happen that some clients set these headers automatically to "non-standard" values, causing the server not to accept it as a Simple Request - which will give you a CORS error.
This is happening because of CORS. CORS stands for Cross-Origin Resource Sharing. In simple words, this error occurs when we try to access a resource on one domain from another domain.
Read more about it here: CORS error with jquery
To fix this, if you have access to the other domain, you will have to set the Access-Control-Allow-Origin header on the server. You can enable this for all requests/domains or for a specific domain.
How to get a cross-origin resource sharing (CORS) post request working
These links may help
The other causes of this CORS issue weren't elaborated on further.
I'm currently having this issue for a different reason.
My front end was also returning the 'Access-Control-Allow-Origin' header error.
It turned out I had simply pointed to the wrong URL, so the header was never reflected properly (while I kept presuming it was): localhost (front end) -> call to non-secured http (it was supposed to be https). Make sure the API endpoint the front end points to uses the correct protocol.
I got the same error in the Chrome console.
My problem was that I was trying to reach the site using http:// instead of https://. So there was nothing to fix; I just had to go to the same site using https.
This bug cost me 2 days. I checked my server log, and the preflight OPTIONS request/response between the browser (Chrome/Edge) and the server was OK. The main reason is that the server's GET/POST/PUT/DELETE response for the XMLHttpRequest must also have the following header:
access-control-allow-origin: origin
"origin" is in the request header (Browser will add it to request for you). for example:
Origin: http://localhost:4221
You can add a response header like the following to accept all origins:
access-control-allow-origin: *
or a response header for a specific origin, like:
access-control-allow-origin: http://localhost:4221
The error message in browsers ("...the requested resource") is not easy to understand.
Note that:
CORS works fine with localhost; a different port means a different origin.
If you get the error message, check the CORS configuration on the server side.
In most hosting services, just add this in the .htaccess of the target server folder:
Header set Access-Control-Allow-Origin 'https://your.site.folder'
I had the same issue. In my case I fixed it by adding an additional timestamp parameter to my URL, even though the server I was accessing didn't require it.
Example yoururl.com/yourdocument?timestamp=1234567
Note: I used an epoch timestamp
"Get" request with appending headers transform to "Options" request. So Cors policy problems occur. You have to implement "Options" request to your server. Cors Policy about server side and you need to allow Cors Policy on your server side. For Nodejs server:details
app.use(cors)
For Java to integrate with Angular:details
#CrossOrigin(origins = "http://localhost:4200")
You should enable CORS to get it working.

Iframes and Same-Origin-Policy and reverse proxy hack

I have been reading up on iframes with a different domain than the parent document and I am slightly confused.
I understand that if the Iframe is from the same domain as its parent document, the parent document can access the iframe's document. It seems like I could circumvent this with the following hack:
I set up a web server at mydomain.com
I serve the original page from mydomain.com/index.html
I set up a proxy on my webserver for mydomain.com/othersite -> site2.com
Add <iframe src="mydomain.com/othersite"> to the mydomain.com/index page
This seems like it would circumvent the same origin policy and the user would be none the wiser. Is there something I am missing?
Yes, there is something you are missing.
The Same Origin Policy secures the client-side of website access.
If you set up mydomain.com/othersite to be proxied to site2.com, the browser would not send the user's cookies for site2.com to your site at mydomain.com. All you would get are the cookies your site had set on mydomain.com for that user. That is, all you would be attacking is your own mydomain.com session with site2.com, not the user's session with site2.com (as your reverse proxy effectively makes mydomain.com the client of this connection).
If there was a way to circumvent the Same Origin Policy this would have to be something client-side in order to have the browser send cookies to your domain.
I realise I've concentrated on cookies here; however, cookies are an easy-to-grasp example of the client-side state that the Same Origin Policy protects. Your approach would allow you to manipulate the DOM of site2.com, but it would not be in the context of your visitor's access to site2.com; it would be in the context of your own access to site2.com - nothing the visitor accesses could be changed unless they trusted your site enough to log into the proxied version of site2.com directly.
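To see why, consider what a minimal proxy route might look like (a hypothetical Express sketch, assuming Node 18+ for the built-in fetch): the outbound request to site2.com is made by your server, not by Alice's browser, so her site2.com cookies are never part of it.

// The browser sends only its mydomain.com cookies to this route; the proxy's own
// request to site2.com carries no cookies at all unless you add some yourself.
app.get("/othersite/*", async (req, res) => {
    const upstream = await fetch("https://site2.com/" + req.params[0]);
    res.send(await upstream.text());
});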

Magento Multi-Store Setup / Store Codes Setting

I'm running Magento on a shared server with a single IP. I originally set it up as a single store with no plans to do multi-stores. Do I need to have store codes trailing each domain in Magento to get this to work correctly? They will all check out at the main store URL. I have done this in the past and it has worked fine for me, but I was using store codes, and with this instance I am not.
Will it completely jack up my SEO?
So I have store1.com (main store) and store2.com which needs to checkout at store1.com
Any help or link to a how to would be great. Have not been able to find a straight forward answer.
Your proposed setup of having store1.com and store2.com with a shared checkout URL of store1.com will work with a bit of work from yourself, but it's not clean or ideal in my opinion. Magento will append an SID every time it switches domain to try and reload the customer's session data (the URLs will have ?SID=something). You would also need to change the checkout URL in your templates to use only the one domain, which would require hard-coding the full URL to the checkout and cart pages in the store2.com templates.
Personally I would simply have separate checkouts for each domain, which is supported straight out of the box in Magento without really doing anything. Why the need to have the checkout always under one domain? If it's because of SSL and single-IP limitations, then buy a UCC SSL certificate for multiple domains and have all the domains required to run on the server set up as SANs on the certificate. Cheap and simple. This way there is no need for store codes in URLs or SIDs in domain switching, and the user will always stay on the same domain without any funny switching business or complications.
As a customer I would also be a little surprised to shop on one domain and then check out on another these days, especially if one of the domains is international, and this will ultimately affect your conversion rate.
You seem to be familiar with store views, so once you have set up your secondary store view, simply go into the admin and override the base URLs for the secondary domain. Point the store2.com domain to the same IP address you are using for store1.com. Set up a vhost on the server so store2.com effectively replicates the vhost for store1.com. You can use vhost directives so that Magento initiates the correct store view for the relevant domain name in your new vhost:
SetEnv MAGE_RUN_CODE yourstorecode
SetEnv MAGE_RUN_TYPE store
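Put together, the vhost for the secondary domain might look roughly like this (the paths and store code are placeholders for your own setup):

<VirtualHost *:80>
    ServerName store2.com
    DocumentRoot /var/www/magento
    SetEnv MAGE_RUN_CODE yourstorecode
    SetEnv MAGE_RUN_TYPE store
</VirtualHost>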
You should now be able to have multiple sites/domains running on 1 magento instance each with an individual checkout URL. e.g. store1.com/checkout/onepage/ and store2.com/checkout/onepage/.
By using a UCC SSL certificate, the SSL will be valid for both domains and not cause you issues so no need for multiple IPs.

Is there a way, aside from SSL, to allow secure input on webpages?

I want to set up a project page on GitHub, so that it acts as a live site.
The site would require an API sid & token (both just long strings of text) that, in a self-hosted environment, the user would just add to the config file.
If I host this through GitHub project pages, users will supply their sid/token through a form. The page with the form will need to be served over SSL so that the sid/token aren't transferred as cleartext. The problem is that GitHub project pages don't allow SSL.
So, if I can find another secure way to take input through a form aside from using SSL, then I can host this whole thing as a hosted service through GitHub project pages.
The project would be open source, so I don't expect any sort of encoding/hashing scheme to work, since the methods would be public.
The sid/token are being used in curl calls to an API which is sent over SSL. Perhaps there's a way to direct the form input directly to that SSL URL instead of having it go through the non-SSL GitHub project page...
Any ideas?
You can just give the action attribute of the form the HTTPS URL of the target script, if that's possible.
You could also use some kind of Challenge-Response encryption/hashing scheme using Javascript. The algorithm for that would be something like this:
Server generates unique, random token, saves it and sends it to the client along with the form HTML.
On the client side, Javascript intercepts the form submission and hashes the sensitive form data with the server-generated token as a salt.
Server can now check whether the hash is equal to its own calculated hash value
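A rough sketch of step 2 on the client side (using the Web Crypto API; the form field names and the way the token is delivered are made up, and, as explained below, an active attacker can simply strip this script out):

// Hash the sensitive field with the server-supplied token as a salt before submitting.
async function hashWithToken(value, token) {
    const bytes = new TextEncoder().encode(token + ":" + value);
    const digest = await crypto.subtle.digest("SHA-256", bytes);
    return Array.from(new Uint8Array(digest))
        .map((b) => b.toString(16).padStart(2, "0"))
        .join("");
}

document.querySelector("form").addEventListener("submit", async (event) => {
    event.preventDefault();
    const form = event.target;
    form.elements.token_hash.value = await hashWithToken(
        form.elements.api_token.value,    // hypothetical field names
        form.dataset.serverToken          // token the server embedded in the page
    );
    form.elements.api_token.value = "";   // don't send the raw value
    form.submit();                        // submit() skips this handler, so no loop
});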
HOWEVER
A man-in-the-middle attacker with the ability to modify traffic (for example through ARP poisoning, DHCP or DNS spoofing) could always strip all your client-side protection mechanisms from the served HTML. Have a look at SSLStrip for a tool that rewrites HTTPS URLs to insecure HTTP URLs on the fly. The challenge-response could be defeated with something like this:
Save token sent by the server, remove the Javascript from the HTML form.
As the form submission is not intercepted now, we get the raw input data.
Hash the data using the same algorithm that the Javascript would have performed.
Thank you for all the fish.
You see, an intercepting attacker can probably defeat any defense mechanism you try to make up.

How do I get the text in the address field of the browser to change when the user navigates within and away from the page?

This is somewhat of a newbie question I'm sure and I hope the community will excuse me for not knowing this (or not knowing the appropriate search terms to resolve my question).
So, this is the deal: I'm running a small webpage with a small amount of visitors. I've written the whole page in HTML and CSS myself and I host it in my private DropBox (http://dl.dropbox.com/u/3394117/Hemsida/Psykofil/Index.html).
I've bought the domain name "www.psykofil.org" from Loopia (www.loopia.se) and I've directed this domain to the index.html file referenced to above.
Now, this is what I want to happen: I have three different places you can go to on the page (you choose where to go through a menu on the left). When one of these links is clicked, it takes the user to another .html file. What I would like to happen here is that this is reflected in the address field, so when he or she clicks on "x", it should say www.psykofil.org/x at the top. Also, when he or she navigates away from the webpage through a hyperlink, I would like the address field to update to show the new location. Right now, no matter what the user does, it always says www.psykofil.org in the address field.
I probably should mention that my options (freely translated from swedish) when I go to the configuration of my domain name at Loopia is the following:
DNS
Parking
Forwarding (the one I'm currently using)
Send to an external URL
(Unavailable because I don't have a web hotel with Loopia) Point to another domain in the account.
(Unavailable because I don't have a web hotel with Loopia) Own homefolder for webpage.
That's because your page is inside a <frameset>, so the address bar will never update.
You say "I've directed this domain to the index.html file referenced to above." It sounds like you've set up 'domain forwarding.' Framesets are often the 'trick' hosts use to keep the same URL - embedding the pages you're 'forwarding' to in a frameset. It's called "domain masking." See http://www.hostingmultipledomainnames.com/domainforwarding.htm for a description of how it works.
If you upload your actual html files to your site root, that should do the trick. If you're not sure how to do that and you're a new webmaster, you may want to be in touch with your web host's support. Otherwise, if you want to have that domain, but keep your files in your dropbox account, your options I believe get complicated (things like reverse proxies).
UPDATED:
Typically, when people create a website, they do three things: register a domain, buy a web hosting account, and then associate their domain with their hosting account. You've done the first part, and have found a clever way of managing the second part, but you haven't done the third part.
The process is like this:
You register your domain. I.e., you pay $10-30 a year for the exclusive right to a given domain name. Registering the domain means that when people type 'http://mysite.com' into their browser, your domain will come up. However, it's just a placeholder - there isn't any real content there. All your files and images need to be uploaded to a server in order for people to see them.
You purchase a web hosting account. Or in your case, you upload your files to a publicly-accessible server, which has the advantage of being free. You then upload all your content.
This is the part you're missing. You now need to associate your domain name with your hosting account. This typically happens without your intervention when you purchase both your domain name and your web hosting account through one company.
However, if you acquire them separately, you need to do two things:
a. Log in to your domain registrar and point the domain name to your server for your web hosting account. This is a signal to the Internet - hey, when you type in the domain name 'http://ssss.com', go to this server.
b. Log in to your web hosting account and "park" the domain at your account. This may be hard to understand at first, but basically, just telling the Internet to go to this or that server when typing in your domain name isn't very useful.
If that's all we needed to do, I could just register http://my-amazon.com and point my domain to Amazon.com. Then people could surf Amazon.com as http://my-amazon.com and I could get rich from selling this now incredibly popular domain.
But that doesn't work. In order for me to actually browse the web hosting account through my domain name, I need to "add" the domain name to my hosting account. Dropbox doesn't let you do that. It's a file-sharing system, which you've cleverly used as a web host. However, you'll never be able to log into Dropbox and park your domain there, because that's not what they do.
Summary: You can think of this process like a pass in basketball. You can throw the ball by sending the user to a server, but the server has to catch it. In order to catch the ball, the server needs to know it's coming.
Your domain registrar is 'faking' this process by adding one page to its own server, which links to "http://dl.dropbox.com/yourpage/etc/etc/Index.html". This way, your domain registrar doesn't have to worry about hosting all your content and the headaches of technical support and server space.
The downside is, you don't have a webhost that allows you to park a domain at the moment. The upside is you're saving about $60-100 per year (it might be more or less in Sweden), which is what a basic "shared" hosting account would cost.
You can decide whether having distinct webpages (http://psykofil.org/contact.html etc.) is worth it for you, or whether you're fine for now with the very low-cost solution that isn't perfect but at least allows people to access your site. What you've come up with is actually pretty cool, but it does have some limitations.
Finally: If you do want to go ahead and buy server space so you can host your site, it will be less of a headache to buy it through Loopia, if the price and service are good. Typically, you are given the option when making the purchase of linking your account to your already-registered domain name. Then all you need to do is use an FTP program like FileZilla to upload your content to your account, and you're done.
It seems your host is "masking" the URL, meaning the actual index.html page located at "www.psykofil.org" is in fact loading your index page hosted on Dropbox into an "iframe", hence your main URL does not change as the visitor navigates.
Solution: upload your files to your main host and replace the default index file (the one that loads the Dropbox index page in an iframe) with your own pages.
I believe it's because you're using frames. Were you to simply link to the other html page (i.e. the About page) then the address bar would update.