A good practice for setting custom domains on Heroku

Basically, Heroku gives you a default domain: XXX.herokuapp.com.
On my side, I have a set of REST APIs that I would like to serve on a domain named api.myDomain.com.
At the same time, I have my HTML files (the web view) served by the same server as the REST API.
It's similar to this: embedding static files on the server in a dist folder.
I expect the domain serving those HTML/JS files to be www.myDomain.com.
I thought about putting them on a separate server dedicated to static files, but a single-page application needs to be backed by a server so that refreshing works (F5 on a client-side route must fall back to index.html); that's why I chose to use the same server as the REST APIs.
An alternative would be to dedicate another server to serving only the static files, independent of the REST APIs' server.
How do I handle both domains (api and www) while the sources live on the same server?
Or should I completely rethink the strategy?
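For context, the setup described above usually boils down to something like the following (a minimal sketch assuming a Flask-style server; the endpoint and folder names are illustrative):

    import os
    from flask import Flask, jsonify, send_from_directory

    app = Flask(__name__, static_folder=None)
    DIST_DIR = os.path.join(os.path.dirname(__file__), "dist")

    @app.route("/api/items")                 # hypothetical REST endpoint
    def items():
        return jsonify([])

    @app.route("/", defaults={"path": "index.html"})
    @app.route("/<path:path>")
    def spa(path):
        if os.path.isfile(os.path.join(DIST_DIR, path)):
            return send_from_directory(DIST_DIR, path)
        # Unknown routes fall back to index.html so a refresh (F5) on a
        # client-side route still loads the single-page application.
        return send_from_directory(DIST_DIR, "index.html")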

You can just point both domains at your app by setting them up in your app settings, but I'm pretty sure that would mean api.myDomain.com/dist would show your static files, and www.myDomain.com would also expose your API endpoints.
https://devcenter.heroku.com/articles/custom-domains
Another way would be to handle it in your application code itself, but we don't really know what your code looks like right now.
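For example, if a single app has to answer on both domains, one option is to branch on the incoming Host header. This is only a hedged sketch in the spirit of the Flask example in the question above (the domain names and the /api/ prefix are assumptions):

    from flask import Flask, abort, request

    app = Flask(__name__)                  # or the app from the sketch in the question
    API_HOST = "api.mydomain.com"          # illustrative domain names
    WWW_HOST = "www.mydomain.com"

    @app.before_request
    def split_by_host():
        # Serve API paths only on the api domain and everything else
        # only on the www domain; any other combination gets a 404.
        is_api_path = request.path.startswith("/api/")
        if request.host == API_HOST and not is_api_path:
            abort(404)
        if request.host == WWW_HOST and is_api_path:
            abort(404)

Both custom domains would still be attached to the same Heroku app through the custom-domains settings linked above.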

Related

How to disable parent directory access in web file browsing without web server

I am writing a command line application that produces an index.html with links to other generated HTML files, but also some links to filesystem subdirectories, for example a link labeled "Invoices" that points to a local subdirectory.
The intention for sharing this content is for the user to zip up the directory tree and send it to other parties for review. However, some users might think to use ngrok, or screen sharing, to let other people browse the content through their own machine. With ngrok they would be running a web server and might be able to configure it to protect against this, but with screen sharing that would not be possible. (Consider the case where a user leaves their web browser open to the remote user and steps away, not realizing that the remote user can now examine their entire filesystem.)
The problem is the "Parent Directory" links: using those links, the others could navigate above the intended directory root and browse the entire filesystem.
The directories linked to can have arbitrary numbers and levels of subdirectories, so hard-coding links on custom pages would probably be prohibitively complicated.
There is no web server involved here; the files are displayed by just opening index.html in a web browser, so .htaccess is not a solution. Also, I don't want to disable navigation entirely; I only want to limit its upper bound.
Is there a way to prevent this access?
If there is no web server involved at all, there is no way to prevent that behaviour.
Edit:
You could of course write a browser plugin that limits access to the parent directory using JavaScript, but every client would have to install that plugin.

How to block access to a static site?

I will host a static site (just a few pages actually) on Netlify, a cloud hosting provider. It would hold my notes and may contain sensitive code and API keys. I want it set up so that only I can access this site from the internet and no one else. How can I block access to the static site for others?
Alternatively, if I do the same with Github Pages, is it possible to restrict access there?
You need an access control mechanism to protect your notes.
If you are running the web server doing the hosting, most web server programs (Apache and nginx are the two most popular) have built-in access control mechanisms; see the link given by Carsten H, or see Access Control with Apache or How to Set Up Password Authentication with Nginx (DigitalOcean guide).
If you are using Github Pages, it is possible to do access control, but it is a bit trickier. You can create a Github OAuth application and ask people to authenticate with it. The app will ask for their username and check whether that username is on a list of allowed Github users (probably just your own Github username). If it is, the static content is served; otherwise the user is sent to a 403 Forbidden page.
Also see the github-heroku-attack-rabbits project page for details of how to create the Flask app mentioned above (using flask-dance to authenticate users via your Github OAuth app); a minimal sketch of this approach follows the notes below. The Flask app can be hosted for free on Heroku.
Two more things to note regarding public/private repos:
If you are using Github Pages, the repository containing your notes will need to be private, otherwise the contents of your notes will be in a public repository (even if the Github Pages static page has an access control layer).
Just because a repo is private does NOT mean its Github Pages page is private. By default, a private repo's Github Pages page is accessible/readable by the public. It is up to you to put an access control mechanism in place to protect the page.
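As a rough sketch of that approach (not the linked project's actual code; the client ID, secret, allowed username, and notes folder are placeholders):

    from flask import Flask, abort, redirect, send_from_directory, url_for
    from flask_dance.contrib.github import make_github_blueprint, github

    app = Flask(__name__)
    app.secret_key = "replace-with-a-random-secret"        # required for sessions
    github_bp = make_github_blueprint(
        client_id="YOUR_GITHUB_OAUTH_CLIENT_ID",           # placeholder
        client_secret="YOUR_GITHUB_OAUTH_CLIENT_SECRET",   # placeholder
    )
    app.register_blueprint(github_bp, url_prefix="/login")

    ALLOWED_USERS = {"your-github-username"}               # placeholder allowlist

    @app.route("/", defaults={"filename": "index.html"})
    @app.route("/<path:filename>")
    def notes(filename):
        if not github.authorized:
            return redirect(url_for("github.login"))       # start the OAuth flow
        user = github.get("/user")
        if not user.ok or user.json().get("login") not in ALLOWED_USERS:
            abort(403)                                      # not on the allowlist
        return send_from_directory("notes", filename)       # serve the protected notes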
You can try the encryption route. Here, the name staticrypt really says it all (I have a demo here). It allows you to set a password for each page of your website. It uses AES-256 encryption, so as far as I am concerned, a long password should suffice.
If you don't share the password, you will be the only one able to view the page.
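To illustrate the general idea only (this is not staticrypt's actual code, just a sketch of password-based AES-256 encryption at build time, using the third-party cryptography package):

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM
    from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

    def encrypt_page(html: bytes, password: str) -> bytes:
        salt = os.urandom(16)
        key = PBKDF2HMAC(
            algorithm=hashes.SHA256(), length=32, salt=salt, iterations=600_000
        ).derive(password.encode())                 # password -> 256-bit key
        nonce = os.urandom(12)
        ciphertext = AESGCM(key).encrypt(nonce, html, None)
        # A page built this way ships salt + nonce + ciphertext plus a small
        # script that prompts for the password and decrypts in the browser.
        return salt + nonce + ciphertext

    encrypted = encrypt_page(open("notes.html", "rb").read(), "a long passphrase")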
These are actually two questions, and it is good practice to ask them individually.
This is a frequently asked question and depends on your server; e.g. for Apache you can edit your .htaccess following these instructions.
You need to create a private repository by checking the private repository option during repository creation.

Most Streamlined Way to use Basic Authentication with Web Application and CDN

I have a site whose pre-production environments use HTTP basic authentication to prevent unauthorized access. Recently, we've added a CDN (AWS CloudFront), and we intend to use basic authentication (FWIW, via Lambda@Edge) for those pre-production CDN environments as well.
While we've already implemented basic authentication on the web application (we're able to access the site after authentication), and have rudimentarily implemented basic authentication on the CDN (we're able to, say, access an image directly, after authentication), we're having trouble combining the two.
The web application includes images in the normal ways (e.g., via HTML and CSS includes). For instance, my site, https://www.example.com, has the following in its HTML:
<img src="https://cdn-files.example.com/foob.png" />
Using Chrome, when hitting the web application, I get a double-challenge (one for the app's domain and one for the CDN, each in turn), and the image loads.
Using Firefox, I get a single challenge, and the page loads, but the image fails to load (that request's response is 401).
Question 1: (Most streamlined option.) Is it possible, through the right configuration settings, to get the browser to pass through the credentials from the app's domain to the CDN domain? If so, what configurations are needed?
If not:
Question 2: (Less streamlined: Double-challenge.) What's the right combination of configurations (presumably, headers, etc.) to get the images, etc., to load on the web app?
I would prefer not to embed the credentials in the URLs, if at all possible.
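Since the question doesn't show the Lambda@Edge function itself, this is roughly what such a basic-auth viewer-request handler looks like (a minimal Python sketch; the credentials and realm are placeholders):

    import base64

    EXPECTED = "Basic " + base64.b64encode(b"user:secret").decode()   # placeholder credentials

    def handler(event, context):
        request = event["Records"][0]["cf"]["request"]
        headers = request.get("headers", {})
        auth = headers.get("authorization", [{}])[0].get("value", "")
        if auth == EXPECTED:
            return request                    # authorized: forward the request to the origin
        return {
            "status": "401",
            "statusDescription": "Unauthorized",
            "headers": {
                "www-authenticate": [
                    {"key": "WWW-Authenticate", "value": 'Basic realm="Pre-production"'}
                ]
            },
        }

The open question above is then how the browser passes those credentials along on the cross-origin image requests, which is the behavior that currently differs between Chrome and Firefox.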

Website Configuration on Google Cloud Storage without error pages

I have a single page application that I am (trying) to host on Google Cloud Storage.
The application is at index.html, and the application handles routing using Angular's html5Mode. (e.g.: routes like example.com/this and example.com/that are handled by the js application in index.html)
Using Google Cloud Storage's website configuration this is all well and good, except that routes that are accessed directly ("example.com/this") will 404 as they obviously do not map to a file.
I have set my 404 page to be my index page, but what I really need in order to run a single-page application in html5Mode is for such routes ("example.com/this") not to return a 404 status; they should simply be handled by index.html and return a success status (200).
Is this possible?
Setting index.html to your 404 page from the website configuration seems to do the trick now.
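For reference, one way to apply that configuration with the google-cloud-storage Python client (the bucket name is illustrative; index.html serves as both the main page and the "not found" page):

    from google.cloud import storage

    client = storage.Client()
    bucket = client.get_bucket("www.example.com")   # bucket backing the static site
    bucket.configure_website(main_page_suffix="index.html", not_found_page="index.html")
    bucket.patch()                                   # persist the website configuration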
Unfortunately, this isn't really possible. Google Cloud Storage's web mapping is pretty simple, and you can't create arbitrary rules based on patterns and the like.
You might want to consider either disabling html5mode or forcing a hashbang with html5mode. See this answer for more on that option.

Possible to load external jpg and serve as local url? Redirect w/o .htaccess?

I'm hosting images for client websites. I want them to be able to link to the images locally, i.e. www.myclient.com/clip1.jpg, but have the image actually loaded from www.mysite.com/clip1.jpg. The idea is to provide security/anonymity so the client doesn't have to reveal that they are using my service (through the images on my site).
Can this be done without editing .htaccess?
If you don't want to reveal where the final origin is, then the image has to come from the server that you want it to appear to come from. A redirect will reveal the real origin.
You can proxy the images with Apache directives, the equivalent for whatever non-Apache server is in use, or a server-side script (written in any language of your choice that is supported by the server).
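As a rough illustration of the server-side-script option (a Flask/requests sketch with hypothetical names), the client's site would fetch the image from your host and re-serve it under its own domain:

    import requests
    from flask import Flask, Response, abort

    app = Flask(__name__)
    ORIGIN = "https://www.mysite.com"   # the real image host

    @app.route("/<name>.jpg")
    def proxied_image(name):
        upstream = requests.get(f"{ORIGIN}/{name}.jpg", timeout=10)
        if upstream.status_code != 200:
            abort(404)                   # hide upstream errors behind a plain 404
        return Response(upstream.content, mimetype="image/jpeg")

Visitors only ever see www.myclient.com URLs, so the real origin stays hidden, at the cost of the client's server carrying the image traffic.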
Just copying the images would probably be the most efficient approach though.