I've tried looking it up and am now coming back here to see if I can get my question answered. Why should I make it so that others cannot view my indexes? Is there a security reason for this?
I'm just beginning in web development, so I could definitely use any help/info you all can provide.
It's often considered a security best practice to hide directory listings. You may accidentally upload files to your docroot that you don't want to share with the world. Without knowing the URL, nobody would be able to access them. While this is a very thin layer of security, it can be helpful.
There are certainly times when you may want a directory listing, such as download directories. It's up to you to decide what is useful to you. If you don't need it, don't use it. If you do, use it.
I've been asked to create a few different folders with the same name but with different capitalizations. The idea behind this is to allow for errors in capitalization when someone types in a specific url. They want to do something like this:
www.website.com/youtube
www.website.com/Youtube
www.website.com/youTube
www.website.com/YouTube
I believe this is bad practice for many reasons, mainly that it seems confusing and unnecessary, and any updates to these pages will have to be done four times over. I've also noticed that VSCode won't let me create these directories from within the editor, and my computer, a Windows machine, won't let me do it from within the file manager either.
I've also seen that this can create problems with Git, which (on case-insensitive filesystems) won't treat the files as separate if they differ only in capitalization.
So really my questions are:
1.) Is there a way to do this?
2.) If so, is it a bad practice?
3.) If it's a bad practice, why?
I'd like to do it for them if possible, but not if there are some unforeseen consequences that I'm not aware of. Any insight would be appreciated.
Thanks in advance.
edit: Just to be clear, we already have www.website.com/youtube but a few users have reported that their browser autocorrects the 'youtube' section of the url to have the Y or the T capitalized. From what I see now, to accomplish this we must do something on the server side, of which my knowledge is limited. All I know for sure is that it is a Linux server.
To start with, the sane solution would be to redirect those routes to the proper one, which is not an uncommon task. I don't know what your infrastructure looks like, so the ease of doing so is unknown.
1.) Is there a way to do this?
Assuming that your server runs Linux/BSD (anything with a case-sensitive filesystem, i.e. not Windows on NTFS), yes. You can keep one folder as the source of truth and create symlinks for the variants. Or, again, you could make the routes case-insensitive on whatever server you're using.
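For the symlink route, a quick sketch, using the folder names from the question (run from the site's document root on the server):

```shell
# One real directory as the source of truth, with symlinked case variants.
# -s: symbolic, -f: replace an existing link, -n: don't descend into one.
mkdir -p youtube
ln -sfn youtube Youtube
ln -sfn youtube YouTube
ln -sfn youtube youTube
```

Note this only works on a case-sensitive filesystem; on Windows or default macOS volumes the variant names all collide with the real directory.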
2.) If so, is it a bad practice?
Cloning the same information and making the same updates repeatedly is terrible practice. Making symlinks on the server is slightly less bad but still pretty bad practice, as that's cluttering up your directory tree with unnecessary nonsense.
3.) If it's a bad practice, why?
The idea isn't bad practice: you can make routes case-insensitive on most modern server configurations. The suggested implementations (duplicate folders, symlinks) are pretty bad. But without knowing what your stack looks like, we can't provide much more information on how to do it.
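As one hedged example: if the Linux server happens to run Apache, mod_speling can do the case-insensitive matching for you, redirecting miscapitalized URLs to the real path so one folder serves all four spellings:

```apache
# Sketch, assuming Apache with mod_speling loaded. CheckCaseOnly limits
# the correction to capitalization mistakes (no fuzzy spelling fixes).
<IfModule speling_module>
    CheckSpelling On
    CheckCaseOnly On
</IfModule>
```

Other servers (nginx, etc.) have their own mechanisms for the same redirect.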
It's the first time that I have had to set up a MediaWiki site and I am relatively new to it as a whole.
My problem is that users can upload files (only images, in my case) without selecting one of the licenses in the drop-down box on the Special:Upload page, even though I have defined licenses under MediaWiki:Licenses, so there are licenses to select. I would like the upload warning function to tell users that a license is required to upload a file (the same way you are warned that the name is too short, and so on). The user should have to pick a license before being able to continue.
I have searched around quite a bit and it doesn't seem as easy as changing a variable somewhere. If the solution is something I should have known, I apologize for posting a stupid question...
Well, I've not checked again, but IIRC this is not really possible with MediaWiki:Licenses alone. You can, however, easily set MediaWiki:Licenses so that the default/fallback "license"/tag is something clearly bad like {{DELETEME}}, and then set up an AbuseFilter rule to prevent the upload in question. AbuseFilter is very useful, so I recommend trying it anyway.
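A rough sketch of what such a rule could look like in the AbuseFilter rule language (treat this as an assumption to verify against your AbuseFilter version; {{DELETEME}} is the made-up fallback tag from above):

```
action == "upload" &
new_wikitext irlike "\{\{DELETEME\}\}"
```

Set the filter's action to "disallow" so the upload is blocked with a message telling the user to pick a license.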
Alternatively, you can install the UploadWizard, which is very cumbersome to configure but is quite good at enforcing Wikimedia-specific habits like template/copyright paranoia. ;-) Then you can set $wgUploadNavigationUrl to point to UploadWizard, or even restrict Special:Upload visits to privileged users.
I was not able to come up with a proper title for this question, but I will try to explain it here.
If I know that there are many files in this directory:
https://mail.google.com/mail/e/
like
https://mail.google.com/mail/e/320
https://mail.google.com/mail/e/ezweb_ne_jp.059
...
and many more.
But I cannot list them, since directory listing is denied, as in the case above.
Is there any way to list all such files, or any workaround?
No, it is not possible. As far as I know, you cannot get a list of all files or directories unless directory listing is enabled; there is no direct or easy way to do it.
If you really want to find the list of files, you will have to crawl the site and collect all the links that point into that directory. However, this method is not guaranteed to give you a list of all files and directories, as there may be many files that are not linked from anywhere and are just present there, files that are only served in specific cases, etc. Also, if the site sees a lot of traffic from your user or IP, it might just ban you from the website.
No. You can't.
At best you could spider the entire web and look for urls that refer to this directory, but unless you can look into the directory in browse mode, it is impossible to get a listing of what's in there.
In real world terms, I've got a cardboard box here, and I'll tell you there's one jelly bean in there. Tell me what else is in there. Think this is possible?
If directory listing is disabled by the server, then you cannot get a list of the files.
You can try brute force: generate all possible filenames and request each one.
When you receive HTTP 200, the filename exists.
But don't try this on public web sites; you may get blacklisted or something like that.
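For illustration only, a Python sketch of that brute force: `candidate_names` enumerates short lowercase-alphanumeric names, and `file_exists` probes one of them with a plain GET (the base URL would be made up by you; actually running the probe needs network access and the site owner's permission).

```python
# Illustrative sketch only: enumerate candidate names and probe each one.
import itertools
import string
import urllib.error
import urllib.request

def candidate_names(length):
    """Yield every lowercase-alphanumeric name of the given length."""
    alphabet = string.ascii_lowercase + string.digits
    for combo in itertools.product(alphabet, repeat=length):
        yield "".join(combo)

def file_exists(base_url, name):
    """True if GET base_url + name returns HTTP 200 (not called here)."""
    try:
        with urllib.request.urlopen(base_url + name, timeout=5) as resp:
            return resp.status == 200
    except urllib.error.URLError:
        return False
```

Note the search space explodes fast: even six lowercase-alphanumeric characters is 36^6 ≈ 2.2 billion candidates, which is why this is impractical against anything but very short, guessable names.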
I need to decide on naming conventions for a new website.
I can use mod_rewrite at will.
My favourite solution would be to work with no file extension at all.
www.exampledomain.com/language/pagename
this would lead to "pagename" being treated as a directory. I would have to take that into account when using relative links.
Are there any other pitfalls I need to be aware of when doing this?
Is this legal, or are resources supposed to have a "name.prefix" structure?
Do you know of any clients that can't deal with this and start looking for /index.htm or .html?
Can you think of any SEO problems to be expected?
Unless you have a very good reason to add an extension, drop it.
are resources supposed to have a "name.prefix" structure?
Not that I know of. Normally not. Resources are just a concept. A custom resource format may require a particular extension; others won't. It depends.
As for SEO, the shorter a link is, the better: it increases the relative weight of the keywords. An extension makes links longer by four characters or more.
Do you know of any clients that can't deal with this and start looking for /index.htm or .html?
A problem may arise if you decide to support multiple entry points.
www.exampledomain.com
www.exampledomain.com/index.html
www.exampledomain.com/index.htm
www.exampledomain.com/index
These are all different urls to search engines. Some people will link to you with the shortest name, others will use another version. Ultimately there will be different inbound links pointing to your site's start page, which is essentially the same page. Search engines will detect this and see it as content duplication. Consequently, your page rank will be divided between the several url versions, and all except one will likely be dropped from their index entirely. To deal with this, decide on one "true" url and have the others issue a 301 redirect (Moved Permanently) to the "correct" url.
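On Apache, for instance, that 301 can be a one-liner with mod_alias (a sketch, assuming "/" is the canonical URL you picked):

```apache
# Send /index, /index.htm and /index.html permanently to the bare "/".
RedirectMatch permanent "^/index(\.html?)?$" "/"
```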
Dropping extensions actually has the significant benefit of not tying you to a specific language. If your URLs are http://example.com/page.php and you switch to another language, you'll either lose the existing URLs (bad!) or have to fake the PHP extension (clunky).
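Since the question says mod_rewrite is available, the extension-less scheme can be mapped internally in a few lines; the two-letter language segment and the .php target here are illustrative assumptions, not something from the question:

```apache
RewriteEngine On
# Only rewrite when the extension-less path isn't a real file or directory.
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^([a-z]{2})/([a-z0-9-]+)$ /$1/$2.php [L]
```

Because the rewrite is internal, switching languages later only means changing the target of the rule, never the public URLs.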
On my website, I have several html files I do not link off the main portal page. Without other people linking to them, is it possible for Jimmy Evil Hacker to find them?
If anyone accesses the pages with advanced options turned on in their Google toolbar, then the address will be sent to Google. This is the only reason I can figure out why some pages I have are on Google.
So, the answer is yes. Ensure you have a robots.txt or even .htaccess or something.
Hidden pages are REALLY hard to find.
First, be absolutely sure that your web server does not return any default index pages ever. Use the following everywhere in your configuration and .htaccess files. There's probably something similar for IIS.
Options -Indexes
Second, make sure the file name isn't a dictionary word -- the odds of guessing a non-dictionary word are astronomically small. Non-zero: there's a theoretical possibility that someone, somewhere might patiently guess every possible file name until they find yours. [I hate these theoretical attacks. Yes, they exist. No, they'll never happen in your lifetime, unless you've given someone a reason to search for your hidden content.]
You're talking about security through obscurity (google it), and it's never a good idea to rely on it.
Yes, it is.
It's unlikely they will be found, but still a possibility.
The term "security through obscurity" comes to mind