How do HTTP and HTTPS relate to sitemaps and site sources?

I am new to web development / SEO and I'm stuck hard on the following point:
We have a sitemap file to help search engine robots index our pages correctly.
A sitemap may only contain URLs from the directory it lives in. For example, http://www.example.com/sitemap.xml can only list URLs that live under that same location. But how do the transfer protocols (http/https) relate to that directory, if a protocol is just a way of transferring data? I don't have two different folders of sources on my web server, one for http and one for https. And indexing shouldn't change when the protocol in the URL changes. I have the same question about the www subdomain. I know the problem is my misunderstanding of web basics.

Clients (such as search engine indexing bots and browsers) make HTTP requests to servers, which provide a response.
A URL is how a specific resource is located. It will specify the scheme/protocol, hostname, and path (and optionally a few other things).
A URL might specify HTTP or HTTPS (the latter adding an encryption layer).
The hostname portion of a URL might include www in the name or it might not.
When the server receives the request it will run some code to determine how to respond to it. A common and simple approach for that code is to match the path portion of the URL to part of the directory structure of the file system of the computer running the HTTP server software. It may, or may not, use different directories as the root for this depending on the hostname and protocol.
This means that you might have an HTTP server providing both HTTP and HTTPS and mapping www.example.com and example.com onto the same directory resulting in (at least) four different URLs all mapping onto any given file.
Best practice is to pick one of those as the canonical URL, with preference given to HTTPS; the arguments for or against including the www mostly revolve around convenience and how cookies set for the primary hostname will be handled on other subdomains.
When writing absolute URLs (e.g. in sitemaps, emails and business cards), use the canonical URL.
It is generally recommended that the server be configured to issue 301 Redirects from the non-canonical URLs to the canonical equivalent.
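For example, if you picked https://www.example.com as the canonical form and happened to be running nginx, a minimal sketch of those redirects could look like the following (hostnames, ports and certificate paths are placeholders, not a drop-in config):

server {
    listen 80;
    server_name example.com www.example.com;
    # Send all plain-HTTP traffic to the canonical HTTPS hostname.
    return 301 https://www.example.com$request_uri;
}

server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate     /etc/ssl/example.pem;   # placeholder certificate paths
    ssl_certificate_key /etc/ssl/example.key;
    # HTTPS, but the non-canonical (bare) hostname: redirect to the www form.
    return 301 https://www.example.com$request_uri;
}

server {
    listen 443 ssl;
    server_name www.example.com;
    ssl_certificate     /etc/ssl/example.pem;
    ssl_certificate_key /etc/ssl/example.key;
    # The canonical host is the only one that actually serves content,
    # and the only form of the URLs that should appear in the sitemap.
    root /var/www/example;
}

With something like this in place, http://example.com/sitemap.xml, http://www.example.com/sitemap.xml and https://example.com/sitemap.xml all end up at the single canonical https://www.example.com/sitemap.xml, even though there is only one set of files on disk.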

Related

What's the difference between http://domain/path and http://domain/path/ in a URL

I serve a React page under the 127.0.0.1/react/ subdirectory with my gateway. It can be viewed in a browser at 127.0.0.1/react/. But if I enter 127.0.0.1/react, I get my Vue page served under 127.0.0.1, which then fails to match any routes.
There is another example: https://www.curseforge.com/minecraft/mc-mods.
https://www.curseforge.com/minecraft/mc-mods is okay, while https://www.curseforge.com/minecraft/mc-mods/ returns 404 Not Found. What's the difference?
Ordinary users might treat them as the same URL and expect both of them to reach the page. So how should I make both of them accessible?
The server can return whatever it wants for any path, and doesn't need to follow any particular standard conventions. However, most servers do follow some norms:
/react/ usually ends up fetching the index file (usually index.html unless configured otherwise) from the react folder under the web root. This is returned to the client transparently... the client isn't redirected.
/react is a request for a file named react under the main root. However, in the absence of such a file, and in the presence of a folder with that name, it's common to redirect the client to /react/.
How you do this depends entirely on your server configuration. You didn't tell us the server, so we can't point you in the right direction.
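If the gateway happened to be nginx, the conventional behaviour described above could be sketched roughly like this (the paths and names are assumptions, not your actual setup):

# Redirect the bare path to the trailing-slash form...
location = /react {
    return 301 /react/;
}

# ...which is then served from the react folder under the web root.
location /react/ {
    root /var/www/html;    # assumes the React build lives at /var/www/html/react/
    index index.html;
}

With that, both 127.0.0.1/react and 127.0.0.1/react/ end up loading the React app's index.html instead of falling through to the Vue site at /.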

AMP: why are files with .amp.html extensions not displayed on Linux hosting?

I recently converted all the web pages of my website to AMP. I renamed them all to .amp.html. I took care to test each page with the AMP tester: https://ampbyexample.com/playground/
I also bought a domain name that points to HTTPS, on Linux hosting at GoDaddy. The problem is that when I upload the files with the .amp.html extension, nothing is displayed on the domain name. On the other hand, when I simply rename all the files to .html, the website is displayed. My question is: why are files with the .amp.html extension not displayed?
The problem comes down to webserver configuration, and likely has two issues.
The first is that you're probably expecting a default document to appear when you don't request a specific one. For example, http://example.com/... the path here is just /, but a web server will commonly load index.html from disk. Chances are, your web server is not configured to load index.amp.html from disk.
The second issue may come down to a bad MIME type configuration. It's important that text/html; charset=utf-8 be sent as the Content-Type response header value for your HTML files.
If you have control over your webserver, you can reconfigure it yourself. You didn't tell us what server you're using, so we can't tell you specifically how to do that. If you don't have control over your webserver, you'll have to take it up with your hosting provider... GoDaddy. Or, just name things .html and you'll be fine!
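If it turned out you could run nginx (shared Linux hosting at GoDaddy is typically Apache instead, where DirectoryIndex and the MIME type settings play the same roles), the idea would look something like this sketch, not a drop-in config:

server {
    listen 80;
    server_name example.com;               # placeholder domain
    root /var/www/html;

    # Look for index.amp.html first when a bare directory URL (e.g. "/") is requested.
    index index.amp.html index.html;

    # MIME types are keyed on the final extension, so *.amp.html files already
    # get text/html from the standard types file; charset appends "; charset=utf-8".
    include /etc/nginx/mime.types;
    charset utf-8;
}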

Which robots.txt for forwarded subdomain?

In theory I have two subdomains set up in my hosting:
subdomain1.mydomain.com
subdomain2.mydomain.com
subdomain2 has a CNAME record pointing to an external service.
mydomain.com has a robots.txt that allows indexing everything.
Because of the CNAME record, subdomain2.mydomain.com serves the external service's robots.txt, which allows indexing nothing.
If I set up a forward from subdomain1.mydomain.com to subdomain2.mydomain.com, which robots.txt would be used if accessing a link to subdomain1.mydomain.com? Does the domain forward work in the same way as a CNAME record when it comes to robots.txt?
This depends on your server setup.
Take the following config, for example:
server {
    server_name subdomainA.example.com;
    listen 80;
    return 302 http://subdomainB.example.com$request_uri;
}
In this case, we're redirecting everything from subdomainA.example.com to subdomainB.example.com. This will include your robots.txt file.
However, if your configuration is set up to only redirect certain parts, your robots.txt file will only be redirected if it's on your list. This would be the case if you were redirecting only, say, /someFolder.
Note that if you don't return a 302 but instead serve the content directly (e.g. subdomainA and subdomainB are separate server blocks that happen to serve the same content), then each subdomain's robots.txt is determined by whatever file sits in its own root directory.
So, if I'm understanding your config correctly, subdomain1 will use the robots.txt from subdomain2.
The challenge you're running into is that you're looking at things from the standpoint of whatever software you're trying to configure, but search engines and other robots only see the document they load from a URL (just like any other user with a web browser would). That is, search engines will try to load http://subdomain1.mydomain.com/robots.txt and http://subdomain2.mydomain.com/robots.txt, and it's up to you (through configuring whatever software your server is running) to ensure that those URLs are in fact serving what you want.
A CNAME is just an alias at the DNS level: it tells the client which other name to look up in order to find the IP address to connect to. A robot will follow it when resolving the name to find the "real" IP, but it has no further bearing on what the GET /robots.txt request returns once the robot connects to the server.
In terms of "forwarding", that term can mean different things, so you'd need to know what a browser or robot would receive when it requested the page. If it's doing a 301 or 302 redirection to send the client to another URL, you'll probably get different results from different search engines on how they may honor that, particularly if it's being redirected to an entirely different domain. I probably would try to avoid it, just because a lot of robots are poorly written. Some search engines have tools to help you determine how their crawlers are reading your robots.txt URLs, such as Google's tool.
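As a concrete illustration of "it's up to you what each hostname serves": if the forward were done in nginx, you could keep a dedicated robots.txt on subdomain1 while redirecting everything else, along these lines (the directory paths are placeholders):

server {
    listen 80;
    server_name subdomain1.mydomain.com;

    # Answer /robots.txt locally, so crawlers hitting subdomain1 see exactly
    # the rules you intend for it...
    location = /robots.txt {
        root /var/www/subdomain1;      # assumed directory containing robots.txt
    }

    # ...and forward every other request on to subdomain2.
    location / {
        return 302 http://subdomain2.mydomain.com$request_uri;
    }
}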

HTML link trailing slash [duplicate]

The w3schools documentation says:
Without a trailing slash on subfolder addresses, you might generate two requests to the server. Many servers will automatically add a trailing slash to the address, and then create a new request.
It is not clear what exactly this means. What difference does it make to add a trailing slash to the URLs in href attributes, and is there a best practice regarding trailing slashes?
These are two different URLs:
http://example.com/foo
http://example.com/foo/
Often, but not always, requesting the first URL will trigger the server to reply with a 301 Permanent Redirect to the second URL. The browser will then have to make a second request to the second URL.
This is most commonly the case when the URL is mapped on to a directory on the server's file system and the index.html (or other directory index) is being loaded.
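For example, with a plain static nginx setup like the sketch below (names assumed), a request for http://example.com/foo where foo is a directory on disk gets an automatic 301 redirect to /foo/, and the browser's second request then serves /foo/index.html:

server {
    listen 80;
    server_name example.com;
    # Assumes /var/www/html/foo/index.html exists on disk; requesting /foo
    # (no slash) triggers the built-in redirect to /foo/ described above.
    root /var/www/html;
    index index.html;
}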
Servers where the content is being dynamically generated (e.g. with an MVC framework like Perl's Catalyst) are less likely to do this. In that case you often have to be even more careful with where you link to because relative URLs will resolve differently from the two URLs.
Fundamentally, http://example.com/foo and http://example.com/foo/ are two entirely different URLs. Ultimately what's important is how the server serving those URLs will respond when queried for those URLs. And it's entirely up to the server what to do. .../foo may return a file while .../foo/ may return a directory listing. Or both may return a directory listing. Or a file. Or the same file. Or a random new response each and every time.
What W3S is pointing out is that many servers are by default configured to return a redirect response to the canonical version ending in a slash. Meaning, if you're requesting .../foo from that server, it will redirect you to .../foo/, which then causes your client to do a second request to .../foo/. Why or how or when a server may issue this redirect is entirely up to each server, and whether it's really such a popular practice is questionable (as is everything by W3S).
The important thing is that you point your URLs where you mean to point them. Is .../foo the correct URL because it's a file? Or is .../foo/ the correct URL because it's the root of a (virtual) directory? You decide, you make sure your server behaves appropriately.

difference between http and www

Pardon me for asking a very basic question.
I have hosted a page on the site collinfo.annauniv.edu.
The page opens fine when I enter the address as http://collinfo.annauniv.edu.
But when I enter www.collinfo.annauniv.edu, my browser shows a 404 error.
What difference does http make here compared to www?
The www. before your domain is actually a subdomain. It's essentially the same thing as help.microsoft.com or orders.amazon.com.
With that in mind, there are a few things that could be happening:
1) Your DNS records do not include the appropriate A Record for the www subdomain.
In this case, you'll need to set up an A record that points to your web site's IP address. If you don't know how to do this, your web host should be able to help.
2) Your server is not configured to handle the www subdomain.
If you're using the Apache web server, it needs to be configured to serve your web site when the user enters www before your domain. Again, your web host can set this up for you.
It all comes down to a misconfiguration issue. If you don't have experience administering web servers, you may want to give your web host a holler.
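For what it's worth, the web-server side of point 2) usually amounts to making the www name an alias of the bare domain. In nginx terms (the Apache equivalent is ServerAlias in the virtual host), a sketch might look like this, with the document root being a guess:

server {
    listen 80;
    # Serve the same site whether or not the visitor types the www prefix.
    server_name collinfo.annauniv.edu www.collinfo.annauniv.edu;
    root /var/www/collinfo;                # placeholder document root
}

This only helps once DNS actually resolves www.collinfo.annauniv.edu (point 1), of course.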
www comes from the (rather) old days when a domain offered several services, of which the web was not always the main one. For instance:
www.domain.tld for web
mail.domain.tld for mail
ftp.domain.tld for ftp
domain.tld for web
but this is a convention - any subdomain may point to anything actually.
This is more a question of DNS declaration and/or web-server configuration; in this case it is probably that the web-server configuration does not serve the same pages for www.domain and domain (since you get a 404 rather than a DNS error).
The author / administrator of collinfo.annauniv.edu either forgot to create a DNS entry for www.collinfo.annauniv.edu or did not create a virtual host (web-server side) for it that points to the same pages as collinfo.annauniv.edu.
HTTP is a protocol.
http://collinfo.annauniv.edu
Is the address of a resource which can be retrieved using HTTP.
annauniv.edu is the domain in your case.
collinfo is the subdomain.
www.collinfo would also be a subdomain, but it is not set up to serve this site, which is why you get HTTP 404 Not Found.
A subdomain can be anything; www is commonly used because it stands for World Wide Web.
WWW is a subdomain
HTTP is a protocol (language)
Whether you type the protocol in the browser or not, the browser will assume the request is HTTP and will usually add http:// for you.
WWW however is just an alternative subdivision of the domain name, the same as in:
www.domain.com
site.domain.com
sub1.domain.com
sub2.domain.com
.....
etc.domain.com
In most cases the www subdomain will point to the same "page" as the main domain, which is usually called the "index" page, such as index.html or index.php. In most cases the index page is hidden in the browser's address bar unless you specifically type it in, such as http://www.yahoo.com/index.html. But you have to understand that if you have full control of your web server you can change these: www doesn't have to point to the same page, and you can call your main page home.html instead of index.html and instruct your web server to serve that page by default.
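Renaming the default document like that is typically a one-line change in the web-server configuration. In nginx, for instance, it would be something along these lines (a sketch with placeholder names, not your host's actual setup):

server {
    listen 80;
    server_name www.example.com example.com;
    root /var/www/html;
    # Serve home.html instead of index.html when a bare directory URL is requested.
    index home.html;
}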
But things like HTTP are not so easily changed, since HTTP is the main language of the web and most browsers use it as the primary means of talking to web servers.
Peace!