How to remove a subdomain from Google's index when it serves the same content as the main domain

Can anyone tell me how I can remove a subdomain from Google's index when it serves the same content as the main domain?
Let's say my domain is www.myweb.com and my subdomain is cdn.myweb.com. The document root of the subdomain is the same as the main domain's, so I can't use robots.txt to stop Google from indexing the subdomain, as that would stop the main domain's links from being indexed too.
I searched Google, Bing and Stack Overflow, but I couldn't find a good answer to this question. Has anyone solved it?

You can use a dynamic robots.txt for this purpose.
Something like this...
httpd.conf (in an .htaccess file, drop the leading slash from the pattern):
RewriteEngine On
# Serve robots.txt through PHP so its content can vary by hostname
RewriteRule ^/robots\.txt$ /var/www/myweb/robots.php [L]
robots.php:
<?php
// Emit a blocking robots.txt for the CDN hostname, and the regular
// robots.txt for every other hostname sharing this document root.
header('Content-Type: text/plain');
if ($_SERVER['HTTP_HOST'] == 'cdn.myweb.com') {
    echo "User-agent: *\n";
    echo "Disallow: /\n";
} else {
    include("./robots.txt");
}
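One caveat, and a hedged addition: robots.txt only stops crawling, so pages already indexed under the CDN hostname may linger. A small sketch (same cdn.myweb.com assumption) that sends a noindex header from a file included by every page:
<?php
// Sketch, not a drop-in: place this in a bootstrap file included by
// every page. Pages requested via the CDN hostname get a noindex
// header; pages on the main domain are unaffected.
if (isset($_SERVER['HTTP_HOST']) && $_SERVER['HTTP_HOST'] === 'cdn.myweb.com') {
    header('X-Robots-Tag: noindex, nofollow');
}
Note that Google has to be able to crawl a page to see this header, so while deindexing you may want to send the header without the robots.txt Disallow in place.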

I'm using nginx and have multiple subdomains in the same server block. I'd like the www subdomain to be included in Google's index, and the rest of the subdomains to be excluded.
First, in my server block of the nginx config, I added the following to serve 2 different files at /robots.txt depending on the domain.
location = /robots.txt {
    # Serve the permissive file only on the www host
    if ($host = 'www.example.com') {
        rewrite ^/robots\.txt$ /robots.www.txt last;
    }
}
Then in my site's root directory, I have the following 2 files:
robots.txt, which blocks crawling and is the default for all subdomains:
# Do not crawl subdomain
User-agent: *
Disallow: /
robots.www.txt, which allows crawling of the whole site and is only served at www.example.com/robots.txt:
User-agent: *
Disallow:
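To sanity-check which file each hostname actually serves, here's a quick PHP sketch (the second hostname and the server IP 203.0.113.10 are placeholders) that fetches /robots.txt with an overridden Host header:
<?php
// Fetch /robots.txt from the same server under different Host headers
// and print what each virtual host actually serves.
foreach (['www.example.com', 'blog.example.com'] as $host) {
    $ctx = stream_context_create(['http' => ['header' => "Host: $host\r\n"]]);
    echo "--- $host ---\n";
    echo file_get_contents('http://203.0.113.10/robots.txt', false, $ctx);
}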

The first thing to do is add the robots.txt, but in my case, since my pages were already indexed under the CDN subdomain, it was too late for robots.txt alone. The best way I found was to go to Google Webmaster Tools, add my CDN domain (cdn.mysite.com), then go to Google Index -> Remove URLs and remove the / URL. It took a few days to take effect.

Related

Which robots.txt for forwarded subdomain?

In theory I have two subdomains set up in my hosting:
subdomain1.mydomain.com
subdomain2.mydomain.com
subdomain2 has a CNAME record pointing to an external service.
mydomain.com has a robots.txt that allows indexing everything.
subdomain2.mydomain.com has a robots.txt that allows indexing nothing due to the CNAME record.
If I set up a forward from subdomain1.mydomain.com to subdomain2.mydomain.com, which robots.txt would be used if accessing a link to subdomain1.mydomain.com? Does the domain forward work in the same way as a CNAME record when it comes to robots.txt?
This depends on your server setup.
Take the following config, for example:
server {
    server_name subdomainA.example.com;
    listen 80;
    return 302 http://subdomainB.example.com$request_uri;
}
In this case, we're redirecting everything from subdomainA.example.com to subdomainB.example.com. This will include your robots.txt file.
However, if your configuration is set up to only redirect certain parts, your robots.txt file will only be redirected if it's on your list. This would be the case if you were redirecting only, say, /someFolder.
Note that if you don't return a 302 but just use a different root (e.g. subdomainA and subdomainB are different subdomains but serve the same content), your robots.txt content will be determined by the root directory.
So, if I'm understanding your config correctly, subdomain1 will use the robots.txt from subdomain2.
The challenge you're running into is you're looking at things from the standpoint of whatever software you're trying to configure, but search engines and other robots only see the document they load from a URL (just like any other user with a web browser would). That is, search engines will try to load http://subdomain1.mydomain.com/robots.txt and http://subdomain2.mydomain.com/robots.txt, and it's up to you (through configuring whatever software your server is running) to ensure that those are in fact serving what you want.
A CNAME is just a DNS-level redirection: it tells a client which name to look up to find the IP address it should connect to for a domain. A robot will use it when resolving the name to find the "real" IP to connect to, but it has no further bearing on what the GET /robots.txt request returns once the robot connects to the server.
In terms of "forwarding", that term can mean different things, so you'd need to know what a browser or robot would receive when it requested the page. If it's doing a 301 or 302 redirection to send the client to another URL, you'll probably get different results from different search engines on how they may honor that, particularly if it's being redirected to an entirely different domain. I probably would try to avoid it, just because a lot of robots are poorly written. Some search engines have tools to help you determine how their crawlers are reading your robots.txt URLs, such as Google's tool.
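For a quick self-check of what a crawler actually receives at each robots.txt URL (including any redirect), something like this PHP sketch works; the hostnames are from the question, and follow_location=0 keeps the redirect visible rather than following it:
<?php
// Request each robots.txt without following redirects, then print the
// status line and body a robot would see. $http_response_header is
// populated automatically by PHP's http stream wrapper.
$ctx = stream_context_create(['http' => ['follow_location' => 0, 'ignore_errors' => true]]);
foreach (['subdomain1', 'subdomain2'] as $sub) {
    $url = "http://$sub.mydomain.com/robots.txt";
    $body = file_get_contents($url, false, $ctx);
    echo "--- $url ---\n" . $http_response_header[0] . "\n" . $body . "\n";
}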

Content Delivery Network configuration - CakePHP

I want to use CloudFront to serve images and CSS for my CakePHP website. I would like to just host the files on my own host and use CloudFront to speed up delivery of those files, but I don't know how to proceed.
So far I have created a distribution on CloudFront with my origin domain and CNAME and deployed it.
Origin Domain: example.com, CNAME: cdn.example.com
I added the CNAME for my domain:
cdn.mydomain.com with destination xx.cloudfront.net
Do I need to update the links in my HTML to that CNAME? For example, if the stylesheet was http://example.com/app/webroot/css/style.css, do I change it to http://cdn.example.com/app/webroot/css/style.css?
You can go through and update your links to point to the CDN; you would have to do this for every image and CSS/JS file that you are serving from your CDN.
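If you'd rather not hand-edit every tag, a small helper can prefix asset paths with the CDN host. This is a generic PHP sketch, not a CakePHP API; cdn_url() and the hostname are assumptions:
<?php
// Hypothetical helper: build asset URLs on the CDN hostname so each
// template only needs one small change per asset reference.
function cdn_url($path) {
    return 'http://cdn.example.com/' . ltrim($path, '/');
}
// Usage in a layout:
echo '<link rel="stylesheet" href="' . cdn_url('app/webroot/css/style.css') . '">';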
Another option would be to create a redirect in your .htaccess, perhaps something like this:
# Redirect local CSS requests to the CDN copy (note the slash before $1)
RewriteRule ^css/(.*)$ http://cdn.mydomain.com/css/$1 [R=301,NC,L]
I'm no .htaccess wizard, so don't just copy and paste and expect it to work, but it should give you an idea of what you could do.

Should a subdomain be accessible as subfolder?

I have a subdomain for my website created in cPanel and I've noticed that in addition to being able to access the content through this URL:
subdomain.example.com
It can also be accessed via:
example.com/subdomain
Questions:
Is that normal?
Is there any way to only allow access to the content through, well, the subdomain?
There's nothing wrong with it as long as you either don't send users to the directory, or the applications and pages in that directory can handle being served from two different URLs (e.g. they use only relative URLs).
If you want to block the directory, then try this htaccess directive:
RewriteRule ^subdomain/ - [L,R=404]
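If .htaccess isn't an option (or you want a belt-and-braces check in the application itself), here's a hedged PHP equivalent, assuming a front controller that every request passes through:
<?php
// Refuse requests that reach the subdomain's files via
// example.com/subdomain instead of subdomain.example.com.
if (strpos($_SERVER['REQUEST_URI'], '/subdomain/') === 0
        && $_SERVER['HTTP_HOST'] !== 'subdomain.example.com') {
    http_response_code(404);
    exit;
}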

Cloudfront Custom Origin Is Causing Duplicate Content Issues

I am using CloudFront to serve images, css and js files for my website using the custom origin option with subdomains CNAMEd to my account. It works pretty well.
Main site: www.mainsite.com
static1.mainsite.com
static2.mainsite.com
Sample page: www.mainsite.com/summary/page1.htm
This page calls an image from static1.mainsite.com/images/image1.jpg
If CloudFront has not already cached the image, it gets the image from www.mainsite.com/images/image1.jpg
This all works fine.
The problem is that Google Alerts has reported the page as being found at both:
http://www.mainsite.com/summary/page1.htm
http://static1.mainsite.com/summary/page1.htm
The page should only be accessible from the www. site. Pages should not be accessible from the CNAME domains.
I have tried putting a mod_rewrite rule in the .htaccess file, and I have also tried putting an exit() in the main script file.
But when CloudFront does not find the static1 version of the file in its cache, it fetches it from the main site and then caches it.
Questions then are:
1. What am I missing here?
2. How do I prevent my site from serving pages, instead of just static components, to CloudFront?
3. How do I delete the pages from CloudFront? Just let them expire?
Thanks for your help.
Joe
[I know this thread is old, but I'm answering it for people like me who see it months later.]
From what I've read and seen, CloudFront does not consistently identify itself in requests. But you can get around this problem by overriding robots.txt at the CloudFront distribution.
1) Create a new S3 bucket that only contains one file: robots.txt. That will be the robots.txt for your CloudFront domain.
2) Go to your distribution settings in the AWS Console and click Create Origin. Add the bucket.
3) Go to Behaviors and click Create Behavior:
Path Pattern: robots.txt
Origin: (your new bucket)
4) Set the robots.txt behavior at a higher precedence (lower number).
5) Go to invalidations and invalidate /robots.txt.
Now abc123.cloudfront.net/robots.txt will be served from the bucket and everything else will be served from your domain. You can choose to allow/disallow crawling at either level independently.
Another domain/subdomain will also work in place of a bucket, but why go to the trouble?
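If you'd rather script step 5 than click through the console, something like this should work with the AWS SDK for PHP v3 (a sketch: EXAMPLE_DIST_ID is a placeholder, and I'm assuming the SDK is installed via Composer):
<?php
// Invalidate the cached /robots.txt so CloudFront fetches the new one
// from the S3 origin on the next request.
require 'vendor/autoload.php';

$client = new Aws\CloudFront\CloudFrontClient([
    'region'  => 'us-east-1',   // CloudFront API calls go to us-east-1
    'version' => 'latest',
]);

$client->createInvalidation([
    'DistributionId'    => 'EXAMPLE_DIST_ID',
    'InvalidationBatch' => [
        'CallerReference' => 'robots-' . time(),  // any unique string
        'Paths' => ['Quantity' => 1, 'Items' => ['/robots.txt']],
    ],
]);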
You need to add a robots.txt file and tell crawlers not to index any content under static1.mainsite.com.
In CloudFront you can control the hostname with which CloudFront accesses your server. I suggest giving CloudFront a specific hostname that is different from your regular website hostname. That way you can detect a request to that hostname and serve a robots.txt which disallows everything (unlike your regular website's robots.txt).
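Concretely, this is the same dynamic-robots.txt pattern as in the first answer above, just keyed to a dedicated origin hostname. A sketch, with cdn-origin.mainsite.com as an assumed hostname configured only in the CloudFront origin settings:
<?php
// robots.php: block crawlers when the request came in via the
// CloudFront origin hostname; otherwise serve the normal rules.
header('Content-Type: text/plain');
if ($_SERVER['HTTP_HOST'] === 'cdn-origin.mainsite.com') {
    echo "User-agent: *\nDisallow: /\n";
} else {
    readfile(__DIR__ . '/robots.txt');
}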

difference between http and www

Pardon me for asking a very basic question.
I have hosted a page on the site collinfo.annauniv.edu.
The page opens fine when I enter the address as http://collinfo.annauniv.edu.
But when I enter www.collinfo.annauniv.edu, my browser shows a 404 error.
What difference does http make here in place of www?
The www. before your domain is actually a subdomain. It's essentially the same thing as help.microsoft.com or orders.amazon.com.
With that in mind, there are a few things that could be happening:
1) Your DNS records do not include the appropriate A record for the www subdomain.
In this case, you'll need to set up an A record that points to your web site's IP address. If you don't know how to do this, your web host should be able to help.
2) Your server is not configured to handle the www subdomain.
If you're using the Apache web server, it needs to be configured to show your web site when the user enters www before your domain. Again, your web host can set this up for you.
It all comes down to a misconfiguration issue. If you don't have experience administering web servers, you may want to give your web host a holler.
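To tell the two cases apart, you can check whether the www name resolves at all. PHP's built-in dns_get_record() is enough for a quick look:
<?php
// Case (1) check: an empty result means there is no A record for the
// www name; a non-empty result points at case (2), a server-side
// virtual-host misconfiguration.
$records = dns_get_record('www.collinfo.annauniv.edu', DNS_A);
print_r($records);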
www comes from the (rather) old days when a domain had several sub-services, of which the web was not always the main one. For instance:
www.domain.tld for web
mail.domain.tld for mail
ftp.domain.tld for FTP
domain.tld for web
But this is only a convention; any subdomain may point to anything.
This is more a question of DNS declaration and/or web-server configuration; in this case the web-server configuration probably does not serve the same pages for www.domain and domain (since you get a 404).
The author/administrator of collinfo.annauniv.edu either forgot to create a DNS entry for www.collinfo.annauniv.edu or did not create a virtual host (web-server side) for it that would point to the same pages as collinfo.annauniv.edu.
HTTP is a protocol.
http://collinfo.annauniv.edu
is the address of a resource which can be retrieved using HTTP.
annauniv.edu is the domain in your case.
collinfo is the subdomain.
www.collinfo is also considered a subdomain, but it does not exist. That's why you get HTTP 404 Not Found.
A subdomain can be anything; www is usually used because it stands for World Wide Web.
WWW is a subdomain
HTTP is a protocol (language)
Whether you specify HTTP in the browser or not, the browser will always assume the request is of the "http" type and will usually add http:// for you.
WWW, however, is just an alternative subdivision of the domain name, the same as in:
www.domain.com
site.domain.com
sub1.domain.com
sub2.domain.com
.....
etc.domain.com
In most cases the WWW subdomain will point to the same "page" as the main domain, which is usually called the "index" page, such as index.html or index.php. In most cases the index page is hidden in the browser's address bar unless you specifically type it in, such as http://www.yahoo.com/index.html. But if you have full control of your web server you can change these defaults, so that WWW doesn't point to the same page, or so that your main page is called "home.html" instead of "index.html" and the web server points browsers to that page by default.
Things like HTTP are not so easily changed, since HTTP is the main language of the web and most browsers use it as the primary means of accessing web servers.
Peace!