How to prevent people from accessing OpenShift web apps? - openshift

I just uploaded a Wildfly web application to my free OpenShift account so my team members can check it out. It's a work in progress so I don't want people to know and access it via the web. If I want someone to use it then I'll send them the URL "XXX-XXX.rhcloud.com"
Is there a way to prevent people from knowing and accessing my web application on OpenShift?

You can use Basic authentication so that anyone must provide a login and password before accessing your content.
In OpenShift there is an environment variable called $OPENSHIFT_REPO_DIR, which holds the path of your working directory, i.e. /var/lib/openshift/myuserlongid/app-root/runtime/repo/
I created a new environment variable called SECURE holding the folder path:
rhc set-env SECURE=/var/lib/openshift/myuserlongid/app-root/data --app myappname
Finally, I connect to my app over SSH
rhc ssh myappname
and create the .htpasswd file:
htpasswd -c $SECURE/.htpasswd <username>
Note: The -c option to htpasswd creates a new file, overwriting any existing htpasswd file. If your intention is to add a new user to an existing htpasswd file, simply drop the -c option.
Then reference it from the .htaccess file:
AuthType Basic
AuthName "Authentication"
AuthUserFile ${SECURE}/.htpasswd
Require valid-user

I am not sure whether you can configure OpenShift so that the URL is private, but I am sure you can work around it. Instead of hosting your app at "XXX-XXX.rhcloud.com", you can set the root URL of your app to "XXX-XXX.rhcloud.com/some_hash" (for example: XXX-XXX.rhcloud.com/2d6541ff807c289fc686ad64f10509e0e74ba0be22b0462aa0ac3a7a54dd20073101ddd5843144b9a9ee83d0ba882f35d49527e3e762162f76cfd04d355411f1)
When it comes to people finding your website through search engines, you can block crawlers with a robots.txt file or a noindex meta tag.
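For reference, a minimal robots.txt that asks all crawlers to stay away from the whole site looks like this (note that robots.txt only deters well-behaved crawlers; it does not prevent anyone from accessing the URL, and it publicly reveals the paths it lists):

```
# robots.txt, served from the site root
User-agent: *
Disallow: /
```

The per-page alternative is a meta tag inside each page's <head>:

```html
<meta name="robots" content="noindex">
```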

Related

how to host a private html webpage?

I want to host a private web page that only the intended recipient can see: invisible to everyone else, not indexed by search engines, and not shown in public search results.
I have my own domain and have a paid premium hosting plan.
And the web page consists of just index.html, a css and js file that's all.
So how can I make it private, so that anyone else who tries to open it gets an access-denied error of some kind, and only the person I share the URL with can access it?
If you’re using a Linux distribution like Ubuntu, make sure you have apache2 and apache2-utils installed. Then create a new user with the following command:
sudo htpasswd -c /etc/apache2/.htpasswd new_user
If you want to create another user, just leave out the -c flag, so the command looks like this:
sudo htpasswd /etc/apache2/.htpasswd second_new_user
In both cases, you’ll be prompted to enter a password for each user.
However, this method applies to your global apache2 configuration. The same process works for a single website: make sure you have an .htaccess file in your project, then repeat the command with the path changed, so it looks like this.
sudo htpasswd -c /var/www/my_website/.htpasswd new_user
Enter the password and then configure .htaccess like this
AuthType Basic
AuthName "Restricted Content"
AuthUserFile /var/www/my_website/.htpasswd
Require valid-user
If you stick to the global apache2 authentication, just change the third line to AuthUserFile /etc/apache2/.htpasswd
A URL is a unique identifier of a shared resource.
To deny access to people who have the URL but were never meant to have it, you need to set up a password up front and send it only to those you think should access your web page.
Some web servers, such as Apache, let you set up such protection easily. The answer from Matej Bunček shows how.

How to download or list all files on a website directory

I have a pdf link like www.xxx.org/content/a.pdf, and I know that there are many pdf files in the www.xxx.org/content/ directory, but I don't have the filename list. When I access www.xxx.org/content/ in a browser, it redirects to www.xxx.org/home.html.
I tried to use wget like "wget -c -r -np -nd --accept=pdf -U NoSuchBrowser/1.0 www.xxx.org/content", but it returns nothing.
So does anyone know how to download or list all the files in the www.xxx.org/content/ directory?
If the site www.xxx.org blocks directory listing (for example via its .htaccess or server configuration), you can't list the files over HTTP.
Try using File Transfer Protocol instead: if you have FTP access to the server, you can list and download all the files that way. Find the absolute server path corresponding to "www.xxx.org/content/" and fetch that directory over FTP.
WARNING: This may be illegal without permission from the website owner. Get permission from the web site first before using a tool like this on a web site. This can create a Denial of Service (DoS) on a web site if not properly configured (or if not able to handle your requests). It can also cost the web site owner money if they have to pay for bandwidth.
You can use tools like dirb or dirbuster to search a web site for folders/files using a wordlist. You can get a wordlist file by searching for a "dictionary file" online.
http://dirb.sourceforge.net/
https://sectools.org/tool/dirbuster/
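If some page on the site does link to the PDFs, you can collect those links yourself instead of brute-forcing names with a wordlist. Here is a minimal Python sketch; the sample HTML below is a stand-in for a page you would actually fetch from the site:

```python
from html.parser import HTMLParser

class PdfLinkCollector(HTMLParser):
    """Collects href values that end in .pdf from <a> tags."""
    def __init__(self):
        super().__init__()
        self.pdf_links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value and value.lower().endswith(".pdf"):
                    self.pdf_links.append(value)

# Stand-in for a page fetched from the site; in practice you would
# download it, e.g. with urllib.request.urlopen(url).read().decode().
sample_html = """
<html><body>
  <a href="/content/a.pdf">Report A</a>
  <a href="/home.html">Home</a>
  <a href="/content/b.PDF">Report B</a>
</body></html>
"""

collector = PdfLinkCollector()
collector.feed(sample_html)
print(collector.pdf_links)  # ['/content/a.pdf', '/content/b.PDF']
```

This only finds files that are actually linked from somewhere; files that nothing links to can only be found by guessing names, which is what dirb and DirBuster do.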

Subdirectories in openshift project cannot be found

I built a site using a php OpenShift project, and accessing the root directory via HTTP works fine. However, all the subdirectories give me a 404 Not Found, like this one: http://test.toppagedesign.com/sites/
I checked with ssh, and /app-root/repo/sites and app-deployments/current/repo/sites/ both exist.
EDIT
Added a directory called php and now I have 503 errors for everything...
EDIT 2
I deleted the php directory, now the 503 errors are gone. However, I do still get 404 errors for the subdirectory.
Here is my directory tree: http://pastebin.com/hzPCsCua
And I do use git to deploy my project.
php is one of the alternate document roots that you can use; please see the March 2014 release blog post about this (https://www.openshift.com/blogs/openshift-online-march-2014-release-blog)
As for the sub-directories not working, can you ssh into your server and use the "tree" command to post the directory/file structure of your project? Also are you using Git to deploy your project or editing files directly on the server?
You need to have an index.php or index.html file in any directory that you want to work like app-domain.rhcloud.com/sites; if you just have sub-directories, how would it know what to show? Also, indexing (showing a folder's contents) is not enabled, for security reasons, and I believe there is no way to enable it.
This sounds like it could be a problem with how you are serving your static content.
I recently created a new sample app for OpenShift that includes:
a basic static folder
an .htaccess file (for serving assets in production)
support for using php's local server to handle the static content (in your dev environments)
Composer and Silex - a great starting point for most new PHP apps
You can serve the project locally if you have PHP-5.4 (or better), available in your dev environment:
php -S localhost:8080 -t static app.php
For a more advanced project that is built on the same foundation, take a look at this PHP+MongoDB mapping example. I wrote up a blog post with some notes on my process for composing that app as well.
Hope these examples help!

mercurial ssl access allow pull BUT require authentication for push

I have set up a mercurial server through SSL. In the apache config file I have set up an authentication using a mysql database.
I would like everyone to be able to pull from the repository without credentials, but restrict the push right to authenticated users. The way it is done now either everyone is authenticated both for pull and push, or nobody is.
My apache configuration is this:
<Location /hg/repo>
AuthType Basic
AuthName "Repository Access"
AuthBasicAuthoritative Off
AuthUserFile /dev/null
AuthMySQL On
AuthMySQL_Authoritative On
AuthMySQL_Host localhost
AuthMySQL_DB repo
AuthMySQL_User repo
AuthMySQL_Password_Table users_auth_external
AuthMySQL_Group_Table users_auth_external
AuthMySQL_Username_Field username
AuthMySQL_Password_Field passwd
AuthMySQL_Group_Field groups
AuthMySQL_Encryption_Types SHA1Sum
Require group pink-image
<LimitExcept GET>
Require valid-user
</LimitExcept>
</Location>
hg also requires authentication for the SSL pull, regardless of the LimitExcept directive.
Is there a way to limit the authentication only for pushing to the repository?
Plain HTTP access would not be sufficient, because developers check out the code through HTTPS.
SSH access is not possible because some of the developers have the ssh port forbidden by the firewall.
One of the solutions would be if hg would remember the https credentials.
Thank You for reading the question.
The authentication directives should be wrapped inside the exception rule, so that credentials are only required for non-read requests (Mercurial pushes over HTTP(S) use POST, while pulls use GET):
<Location /hg/repo>
<LimitExcept GET>
AuthType Basic
AuthName "Repository Access"
AuthBasicAuthoritative Off
AuthUserFile /dev/null
AuthMySQL On
AuthMySQL_Authoritative On
AuthMySQL_Host localhost
AuthMySQL_DB repo
AuthMySQL_User repo
AuthMySQL_Password_Table users_auth_external
AuthMySQL_Group_Table users_auth_external
AuthMySQL_Username_Field username
AuthMySQL_Password_Field passwd
AuthMySQL_Group_Field groups
AuthMySQL_Encryption_Types SHA1Sum
Require group pink-image
</LimitExcept>
</Location>
One of the solutions would be if hg would remember the https credentials.
It can remember the credentials for push and pull. Look under the auth section of hg help config if you don't mind adding the details to one of the config files (either the user's config or the repository clone's hgrc).
This means putting the password in a config file, which you might not like, so you could instead use the Mercurial Keyring Extension, which stores the password more securely.
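As a sketch, an [auth] section in the clone's .hg/hgrc would look like the following; the hostname, username, and password here are placeholders:

```
[auth]
repo.prefix = hgserver.example.com/hg/repo
repo.username = alice
repo.password = secret
repo.schemes = https
```

The `repo.` part is just an arbitrary label grouping the keys; with this in place, hg stops prompting for the matching URLs.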
It turns out automatic credentials are not enough. The repository ought to be accessible through the web interface, but the same config pops up an authentication dialog in the browser, which makes the web interface unusable.

Can't seem to get ACL to work with hgweb.wsgi

I have hgweb.wsgi set up on an Ubuntu server under apache2. Furthermore, I have basic auth using the apache2 htpasswd approach. This all works nicely. However, we want to control what each user has access to, and ACL seems to be the best approach. So inside the repo's .hg folder I've created an hgrc and modified it according to the documentation for getting ACL up and running (I've also enabled the extension). The problem is I get no indication that the hgrc is used at all. Even if I add [ui] debug = true, I still get nothing from the remote client. Sadly, I'm not quite sure how to go about debugging this, so any help would be much appreciated.
To make sure that a .hg/hgrc file in a repository is being consulted add something noticable to the [web] section like:
[web]
description = Got this from the hgrc
style = coal
name = RENAMED
If you don't see those in the web interface, your .hg/hgrc isn't being consulted, and the most common reason for that is permissions. Remember that .hg/hgrc has to be owned by a user or group that is trusted by the webserver user (usually apache or www-data or similar). If apache is running under the user apache, then chown the .hg/hgrc file over to apache; root ownership won't do, and the htpasswd user is irrelevant.
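If changing ownership isn't practical, Mercurial can instead be told to trust the file's owner explicitly, in a config file the webserver user already trusts (e.g. /etc/mercurial/hgrc). A sketch, assuming the webserver runs as www-data:

```
[trusted]
users = www-data, root
groups = www-data
```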
If that file is being consulted, then you need to start poking around in the Apache error logs. Turning on debug and verbose will put more messages into the Apache error log, not into the remote client's output.