Moz Q&A is closed.
After more than 13 years, and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we’re not completely removing the content - many posts will still be possible to view - we have locked both new posts and new replies. More details here.
Rel canonical tag back to the same page the tag is on?
-
Very simple,
Why would a website (and I have seen tons doing this) link the rel canonical tag back to the same page the tag is on?
Example: somepage.htm has a canonical tag linking to somepage.htm
I thought the idea of this tag was to tell Google that if 2 pages are similar, this page is the original, it's this page which should be indexed, and the page carrying the tag should pass all PR to the original.
Maybe I'm wrong, and someone can help me understand this.
-
For all practical purposes, Google doesn't seem to index pages where it recognizes the canonical as legitimate. You won't find them in a "site:" query, "cache:" command, etc. Google may call that a "filter", but once it's reached that point, the URL is as good as de-indexed. There may be subtle, technical distinctions, but the end result is virtually the same.
-
Not quite. Canonical (per Matt Cutts) is considered a hint as to what the real page is. It doesn't stop the duplicate page from being crawled or indexed (a page that isn't indexed will not show up anywhere in Google for any query); it prevents the duplicate page from winning the duplicate race (i.e., if you don't pick a winner, Google will pick one for you).
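The "hint" mechanic described above can be sketched as a toy model in Python. This is purely illustrative (it is not Google's actual algorithm, and the tie-break rule is an invented assumption): duplicates form a group, the hinted canonical wins if it's valid, and otherwise the engine picks its own winner.

```python
def pick_winner(duplicate_group, hinted_canonical=None):
    """Return the URL that gets indexed for a group of duplicate pages.

    Toy model only: the canonical is a hint, so it wins when it is a
    valid member of the group; with no hint, the engine chooses for you
    (here, arbitrarily, the shortest URL).
    """
    if hinted_canonical in duplicate_group:
        return hinted_canonical
    # No (valid) hint: the engine falls back to its own choice.
    return min(duplicate_group, key=len)

group = ["http://example.com/page", "http://example.com/page?ref=1"]
print(pick_winner(group, "http://example.com/page"))  # the hinted canonical wins
print(pick_winner(group))                             # no hint: engine picks one for you
```

Either way one URL from the group "wins"; the canonical tag just lets you choose which.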
-
Thanks Tom (and everyone else for the replies),
So if someone linked to a page with a query string, Google wouldn't index that page, because the canonical tag is pointing to a URL which doesn't have that query string in it?
I like the scraped part as well; that in itself makes it worthwhile.
-
Newegg.com uses this because they have affiliates, searches and numerous other things that affect their query strings.
Remember that ANY change to the query string is seen as a new page. So
domain.com?page=a&link=1
domain.com?page=a&link=2
are considered separate pages, even if they return the same content.
Canonical is used to determine which duplicate page "wins" the index race. All other versions are considered duplicate and, thus, devalued.
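The query-string point above can be made concrete with a short Python sketch. The allow-list of "content" parameters is a hypothetical example (not Newegg's actual setup): parameters that change the content are kept, while affiliate/tracking parameters are dropped, so every variant collapses to one canonical URL.

```python
from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

# Hypothetical allow-list: query parameters that actually change the content.
# Anything else (affiliate IDs, tracking links, etc.) is stripped.
CONTENT_PARAMS = {"page"}

def canonical_url(url):
    """Map every query-string variant of a page to one canonical URL."""
    parts = urlparse(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query) if k in CONTENT_PARAMS]
    return urlunparse(parts._replace(query=urlencode(sorted(kept))))

# The two "separate pages" from the example above differ only in a
# tracking parameter, so they normalize to the same canonical URL:
a = canonical_url("http://domain.com/?page=a&link=1")
b = canonical_url("http://domain.com/?page=a&link=2")
print(a == b)  # True
print(a)       # http://domain.com/?page=a
```

With the canonical pointing at the normalized URL, the variants are treated as duplicates of it rather than competing pages.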
-
There are a couple of reasons why people might want to do this (and why I do it with all my websites).
First of all, the page/site might be scraped and replicated by a bot, particularly if it's an authority domain. Having your canonicals in place to begin with will help reduce the chance of your content being seen as duplicate, should a bot scrape your site.
Another reason would be if a website might generate additional versions of the page through queries, e.g. www.domain.com/page.php?query2. Having a self-referring canonical will also tell Google that you want to rank the URL without any other queries, which can help prevent any of those query variants appearing in the Google index and/or SERPs.
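A self-referring canonical like this can be generated server-side by emitting the page's own URL with the query string stripped. Here is a minimal Python sketch of that idea (the function name and URLs are illustrative, not from any particular CMS):

```python
from urllib.parse import urlparse, urlunparse

def canonical_tag(requested_url):
    """Build a self-referring canonical tag: the page's own URL
    with any query string (and fragment) stripped."""
    parts = urlparse(requested_url)
    clean = urlunparse((parts.scheme, parts.netloc, parts.path, "", "", ""))
    return '<link rel="canonical" href="%s">' % clean

# Any query-string variant of the page declares the clean URL as canonical:
print(canonical_tag("http://www.domain.com/page.php?query2"))
# <link rel="canonical" href="http://www.domain.com/page.php">
```

Because every variant (with or without a query string) emits the same tag, the clean URL is consistently presented as the one to index.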
-
Hi,
I am not an expert, so please do not take my answer very seriously. What you mention, making a canonical tag point to the same URL, looks fine. In my understanding, canonical tags were created to tell the search engines that a page is the right one, even if the system you are using creates addresses that could look like duplicate content. For example, if you are using a Content Management System like WordPress or Joomla, you could have the following:
-
http://domain.com/date/month/page1 and so on.
Search engines (again, I am not sure, I am just a newbie) could think all these pages are duplicate content, and could penalize you for this. But if you indicate with the canonical tag that the right URL is http://domain.com/page1, then you are safe.
I hope somebody with more experience could help you better,
Best Regards,
Daniel
Related Questions
-
Why are http and https pages showing different domain/page authorities?
My website www.aquatell.com was recently moved to the Shopify platform. We chose to use the http domain, because we didn't want to change too much, too quickly by moving to https. Only our shopping cart is using https protocol. We noticed however, that https versions of our non-cart pages were being indexed, so we created canonical tags to point the https version of a page to the http version. What's got me puzzled though, is when I use open site explorer to look at domain/page authority values, I get different scores for the http vs. https version. And the https version is always better. Example: http://www.aquatell.com DA = 21 and https://www.aquatell.com DA = 27. Can somebody please help me make sense of this? Thanks,
On-Page Optimization | | Aquatell1 -
Duplicate page titles and hreflang tags
Moz is flagging a lot of pages on our site which have duplicate page titles. 99% of these are international pages with hreflang tags in the sitemap. Do I need to worry about this? I assumed that it wasn't an issue given the use of hreflang. And if that's the case, why is Moz flagging them as an issue? Thanks.
On-Page Optimization | | ahyde0 -
Does Rel=canonical affect google shopping feed?
I have a client who gets a good portion of their sales (~40%) from Google Product Feeds, and for those they want each (Product X Quantity) to have its own SKU, as they often get 3 listings in a given Google Shopping query, i.e. 2, 4, 8 units of a given product. However, we are worried about this creating duplicate content on the search side. Do you know if we could use rel=canonical on the site without messing with their Google Shopping results? The crux of the issue is that they want the products to appear distinct for the product feed, and unified for the web so as not to dilute. Thoughts?
On-Page Optimization | | VISISEEKINC0 -
Missing meta descriptions on indexed pages, portfolio, tags, author and archive pages. I am using SEO all in one, any advice?
I am having a few problems that I can't seem to work out. I am fairly new to this, and any help would be greatly appreciated 🙂
1. I am missing a lot of meta description tags. I have installed "All in One SEO" but there seems to be no option to add meta descriptions in portfolio posts. I have also written meta descriptions for 'tags' and whilst I can see them in WP they don't seem to be activated.
2. The blog has pages indexed by WP, called Part 2 (/page/2), Part 3 (/page/3) etc. How do I solve this issue of meta descriptions and indexed pages?
3. There is also a page for myself, the author, that has multiple indexes for all the blog posts I have written, and I can't edit these archives to add meta descriptions. This also applies to the month archives for the blog.
4. Also, SEOmoz tells me that I have too many links on my blog page (also indexed) and their consequent tags. This also applies to the author pages (myself). How do I fix this?
Thanks for your help 🙂 Regards, Nadia
On-Page Optimization | | PHDAustralia680 -
Is there a SEO penalty for multi links on same page going to same destination page?
Hi, Just a quick note. I hope you are able to assist. To cut a long story short, on the page below (http://www.bookbluemountains.com.au/ -> Features Specials & Packages, middle column) we have 3 links per special going to the same page:
1. The header is linked
2. Click on image link - currently with a nofollow
3. 'More info' under the description paragraph is linked too - currently with a nofollow
Two arguments are as follows:
1. The reason we do not follow all 3 links is to reduce the number of links, which may appear spammy to Google.
2. Counter argument: The point above has some validity. However, using nofollow is basically telling the search engines that the webmaster "does not trust or doesn't take responsibility" for what is behind the link, something you don't want to do within your own website. There is no penalty as such for having too many links; the search engines will generally not worry after a certain number... nothing that would concern this business, though. I would suggest changing the nofollow links a.s.a.p.
Could you please advise your thoughts. Many thanks, Dave Upton [long signature removed by staff]
On-Page Optimization | | daveupton0 -
Creating New Pages Versus Improving Existing Pages
What are some things to consider or things to evaluate when deciding whether you should focus resources on creating new pages (to cover more related topics) versus improving existing pages (adding more useful information, etc.)?
On-Page Optimization | | SparkplugDigital0 -
Page speed tools
Working on reducing page load time, since that is one of the ranking factors that Google uses. I've been using the Page Speed Firefox plugin (requires Firebug), which is free. Pretty happy with it, but wondering if others have pointers to good tools for this task. Thanks...
On-Page Optimization | | scanlin0 -
Avoiding "Duplicate Page Title" and "Duplicate Page Content" - Best Practices?
We have a website with a searchable database of recipes. You can search the database using an online form with dropdown options for:
- Course (starter, main, salad, etc)
- Cooking Method (fry, bake, boil, steam, etc)
- Preparation Time (Under 30 min, 30 min to 1 hour, Over 1 hour)
Here are some examples of how URLs may look when searching for a recipe:
- find-a-recipe.php?course=starter
- find-a-recipe.php?course=main&preperation-time=30min+to+1+hour
- find-a-recipe.php?cooking-method=fry&preperation-time=over+1+hour
There is also pagination of search results, so the URL could also have the variable "start", e.g. find-a-recipe.php?course=salad&start=30. There can be any combination of these variables, meaning there are hundreds of possible search-results URL variations. This all works well on the site; however, it gives multiple "Duplicate Page Title" and "Duplicate Page Content" errors when crawled by SEOmoz. I've searched online and found several possible solutions for this, such as:
- Setting a canonical tag
- Adding these URL variables to Google Webmasters to tell Google to ignore them
- Changing the title tag in the head dynamically based on what URL variables are present
However, I am not sure which of these would be best. As far as I can tell, the canonical tag should be used when you have the same page available at two separate URLs, but this isn't the case here as the search results are always different. Adding these URL variables to Google Webmasters won't fix the problem in other search engines, and we will presumably continue to get these errors in our SEOmoz crawl reports. Changing the title tag each time can lead to very long title tags, and it doesn't address the problem of duplicate page content. I had hoped there would be a standard solution for problems like this, as I imagine others will have come across this before, but I cannot find the ideal solution. Any help would be much appreciated. Kind Regards
On-Page Optimization | | smaavie5