404 or 410 status code after deleting a real estate listing
-
Hi there,
We manage a website that generates overview and detail pages of listings for several real estate agents.
When these listings are sold, they are removed from the overview and their detail pages are taken down. Those pages now show up as 404 (not found) errors in the crawl error report in Google Search Console. Would changing these 404s to 410s solve the problem? And if not, what fix would take care of it?
-
Good answer, Dirk.
I like your idea of adding valuable, relevant content to the pages, good thinking.
Personally, I'd rather let Google know these pages were removed intentionally and not due to errors, so 410 rather than leaving them as 404s.
One thing to be mindful of, though, is how much crawl budget you're willing to give to these pages. If we're talking about a lot of pages in bulk, I'd be worried about how much crawl budget they'd eat up over time. As you point out, they'd likely drop in rank anyway due to the loss of internal links, so maybe the cost to the crawl budget isn't worth it?
Another solution (building on your idea, Dirk) would be to automate the process: when a listing is marked as sold, it is removed, other properties in the same area are added (as you suggest), and then some time later (a month or two?) a 410 status is set.
The other option would be to 301 the old pages back to the area page for those properties (perhaps with something like a Bootstrap message saying the property is sold but others in the area are available). This would pass link equity back to that page. But, of course, you'd be telling Google that the page had permanently moved, which isn't quite the case.
-
The answer from Kristen is correct. However, changing 404 to 410 will just make these pages appear as 410s in Search Console. The fact that they appear is not a problem in itself; Google simply wants to notify you that the pages return a 4xx status. If this is intentional (as in your case), you can ignore these messages and mark them as fixed.
In your case you could also consider another option: remove the pages from the listings but keep them published (with status 200). Update each page to indicate that the original property is sold, but list some other (similar) properties as an alternative. This way, if there are external pages linking to the property page, the link value doesn't get lost, and if people accidentally land on the page they still find content that could be interesting to them. (And since you remove the navigation links to these pages, they become orphans, so there is little chance they will rank very highly in Google.)
Dirk
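Taken together, the two answers sketch a lifecycle for sold listings: keep serving a 200 page with a "sold" notice and similar properties for a while, then switch to a 410 (or a 301 to the area page if you prefer consolidating link value). Below is a minimal sketch of that flow, assuming a Flask app; get_listing() and find_similar() are hypothetical helpers, not part of any real system.

```python
# Minimal sketch of the lifecycle described in the answers above.
# Assumes Flask; get_listing() and find_similar() are hypothetical helpers.
from datetime import datetime, timedelta

from flask import Flask, abort, render_template

app = Flask(__name__)

GRACE_PERIOD = timedelta(days=60)  # "a month or two" before switching to 410

@app.route("/listings/<listing_id>")
def listing_page(listing_id):
    listing = get_listing(listing_id)  # hypothetical data-access helper
    if listing is None:
        abort(404)  # never existed, so a plain 404 is accurate
    if not listing.sold:
        return render_template("listing.html", listing=listing)
    if datetime.utcnow() - listing.sold_at < GRACE_PERIOD:
        # Recently sold: serve a 200 "sold" page with similar properties,
        # so inbound link value and stray visitors aren't wasted.
        similar = find_similar(listing)  # hypothetical helper
        return render_template("sold.html", listing=listing, similar=similar)
    abort(410)  # deliberately gone: tells Google the removal is intentional
```

The 410 at the end is what distinguishes an intentional removal from a broken link; if you would rather consolidate link value, replace it with a 301 to the relevant area page, with the caveat raised above that the page hasn't literally moved.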
Related Questions
-
Hreflang: mixing with/without country code for same language
Hello, I would like to display 3 different English versions of my website: one for the UK, one for Canada, and one for other English-speaking users. For a given page it would look like this:
<link rel="alternate" href="https://xxx.com/en-gb/" hreflang="en-GB" /> (English content with £ prices)
<link rel="alternate" href="https://xxx.com/en-ca/" hreflang="en-CA" /> (English content with $CA prices)
<link rel="alternate" href="https://xxx.com/en/" hreflang="en" /> (English content without currency)
I wonder if I can mix the hreflang without a country code with the hreflangs with country codes for the two other, country-specific versions, or if the version without a country code will be shown whatever the country, even for the countries I specified. In other words, does hreflang="en" take precedence over hreflang="en-CA" and hreflang="en-GB" when they are tagged together on the same page? Thank you
Intermediate & Advanced SEO | AlexisH
-
410 or 301 after URL update?
Hi there, A site I'm working on at the moment has a thousand "not found" errors in Google Search Console (and of course, I'm sure there are thousands more it's not showing us!). A lot of them seem to come from a URL change. The damage has been done, the URLs have been changed and I can't stop that... but as you can imagine, I'm keen to fix as many as humanly possible. I don't want to go mad with 301s, but for external links in, this seems like the best solution? On the other hand, Google is reading internal links that simply aren't there anymore. Is it better to hunt down the new page and 301 to it anyway? Or should I 410 and grit my teeth while Google crawls and recrawls it, warning me that this page really doesn't exist? Essentially I guess I'm asking: how many 301s are too many before they affect our DA? And what's the best solution for dealing with mass 404 errors, many of which aren't linked from any other pages anymore? Thanks for any insights 🙂
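One hedged way to implement the split being weighed here: 301 only the old URLs that still earn external links, and 410 the rest. A sketch assuming a Flask app; the paths in REDIRECTS are invented for illustration.

```python
# Sketch only: 301 the old URLs with inbound external links, 410 the rest.
# Assumes Flask; the example paths below are invented.
from flask import Flask, redirect, request

app = Flask(__name__)

# Old path -> new path, maintained only for URLs with external links in.
REDIRECTS = {
    "/old-page": "/new-page",
    "/old-section/old-post": "/new-section/new-post",
}

@app.errorhandler(404)
def missing(error):
    target = REDIRECTS.get(request.path)
    if target is not None:
        return redirect(target, code=301)  # worth preserving: permanent redirect
    return "Gone", 410  # removed on purpose; let Google drop it from the index
```

This keeps the number of 301s proportional to the URLs that actually matter, rather than redirecting every dead internal link.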
Intermediate & Advanced SEO | Fubra
-
Site structure: Any issues with 404'd parent folders?
Is there any issue with a 404'd parent folder in a URL? There are no links to the parent folder, and a parent folder page never existed. For example, say I have the following pages with content:
/famous-dogs/lassie/
/famous-dogs/snoopy/
/famous-dogs/scooby-doo/
But I never created (and maybe never plan to create) a general /famous-dogs/ page. Sitemaps.xml does not link to it, nor does any page on my site. Are there any concerns with doing this? Am I missing out on any sort of value that might pass to a parent folder?
Intermediate & Advanced SEO | dsbud
-
Should I delete 100s of weak posts from my website?
I run this website: http://knowledgeweighsnothing.com/ It was initially built to get traffic from Facebook. The vast majority of the 1300+ posts are shorter, curation-style posts: basically I would find excellent sources of information, do a short post highlighting the information, and then link to the original source (then post to FB and hey presto, 1000s of visitors coming through the website). Traffic from FB was so amazing at the time that, really stupidly, these posts were written with no regard for search engine rankings. When Facebook reach dropped right off, I started writing full original content posts to gain more traffic from search engines. I am now getting more and more traffic from Google, but there's still lots to improve. I am concerned that the shortest/weakest posts on the website are holding things back to some degree. I am considering going through the website and deleting the very weakest older posts based on their quality/backlinks and PA. This will probably run into the 100s of posts. Is it detrimental to delete so many weak posts from a website? Any and all advice on how to proceed would be gratefully received.
Intermediate & Advanced SEO | xpers
-
Repeatedly target a rolling list of keywords, or is that cannibalization? The biggest confusion in SEO I've found
Also suggesting this as a WBF topic. I've read and researched with no luck here, and would love a Moz staff reply too! Is it better to blog repeatedly on the same topic (writing multiple blogs around the topic of "content marketing", for example, in hopes Google sees you as an authority on the topic over time), or is this keyword cannibalization? Or is it better to have one powerful and comprehensive page on a topic, if that makes sense? Thanks!
Intermediate & Advanced SEO | RickyShockley
-
Avoiding Duplicate Content with Used Car Listings Database: Robots.txt vs Noindex vs Hash URLs (Help!)
Hi Guys, We have developed a plugin that allows us to display used vehicle listings from a centralized, third-party database. The functionality works similarly to autotrader.com or cargurus.com, and there are two primary components:
1. Vehicle Listings Pages: the page where the user can use various filters to narrow the vehicle listings and find the vehicle they want.
2. Vehicle Details Pages: the page where the user actually views the details of a given vehicle. It is served up via Ajax, in a dialog box on the Vehicle Listings Pages. Example functionality: http://screencast.com/t/kArKm4tBo
The Vehicle Listings pages (#1) we do want indexed and ranking. These pages have additional content besides the vehicle listings themselves, those listings are randomized or sliced/diced in different and unique ways, and they're updated twice per day.
We do not want #2, the Vehicle Details pages, indexed, as these pages appear and disappear all the time based on dealer inventory, and don't have much value in the SERPs. Additionally, other sites such as autotrader.com, Yahoo Autos, and others draw from this same database, so we're worried about duplicate content. For instance, entering a snippet of dealer-provided content for one specific listing that Google indexed yielded 8,200+ results: Example Google query.
We did not originally think that Google would even be able to index these pages, as they are served up via Ajax. However, it seems we were wrong, as Google has already begun indexing them. Not only is duplicate content an issue, but these pages are not meant for visitors to navigate to directly! If a user were to navigate to one of these URLs directly from the SERPs, they would see a page that isn't styled right. Now we have to determine the right solution to keep these pages out of the index: robots.txt, noindex meta tags, or hash (#) internal links.
Robots.txt advantages:
- Super easy to implement.
- Conserves crawl budget for large sites.
- Ensures the crawler doesn't get stuck. After all, if our website only has 500 pages that we really want indexed and ranked, and vehicle details pages constitute another 1,000,000,000 pages, it doesn't seem to make sense to make Googlebot crawl all of those pages.
Robots.txt disadvantages:
- Doesn't prevent pages from being indexed, as we've seen, probably because there are internal links to these pages. We could nofollow these internal links, thereby minimizing indexation, but this would mean 10-25 nofollowed internal links on each Vehicle Listings page (will Google think we're PageRank sculpting?).
Noindex advantages:
- Does prevent vehicle details pages from being indexed.
- Allows ALL pages to be crawled (advantage?).
Noindex disadvantages:
- Difficult to implement: the vehicle details pages are served via Ajax, so they have no <head> tag for a meta robots element. The solution would have to involve the X-Robots-Tag HTTP header and Apache, sending a noindex header based on querystring variables, similar to this Stack Overflow solution. This means the plugin functionality is no longer self-contained, and some hosts may not allow these types of Apache rewrites (as I understand it).
- Forces (or rather allows) Googlebot to crawl hundreds of thousands of noindexed pages. I say "forces" because of the crawl budget required. The crawler could get stuck/lost in so many pages, and may not like crawling a site with 1,000,000,000 pages, 99.9% of which are noindexed.
- Cannot be used in conjunction with robots.txt. After all, the crawler never reads the noindex meta tag if it is blocked by robots.txt.
Hash (#) URL advantages:
- By using hash (#) URLs for the links from Vehicle Listings pages to Vehicle Details pages (such as "Contact Seller" buttons), coupled with JavaScript, the crawler won't be able to follow/crawl these links.
- Best of both worlds: the crawl budget isn't overtaxed by thousands of noindexed pages, and the internal links that got the robots.txt-disallowed pages indexed are gone.
- Accomplishes the same thing as nofollowing these links, but without looking like PageRank sculpting (?).
- Does not require complex Apache stuff.
Hash (#) URL disadvantages:
- Is Google suspicious of sites with (some) internal links structured like this, since it can't crawl/follow them?
Initially, we implemented robots.txt, the "sledgehammer solution." We figured that we'd have a happier crawler this way, as it wouldn't have to crawl zillions of partially duplicate vehicle details pages, and we wanted it to be as if these pages didn't even exist. However, Google seems to be indexing many of these pages anyway, probably based on internal links pointing to them. We could nofollow the links pointing to these pages, but we don't want it to look like we're PageRank sculpting or something like that.
If we implement noindex on these pages (and doing so is a difficult task in itself), then we will be certain these pages aren't indexed. However, to do so we will have to remove the robots.txt disallow, in order to let the crawler read the noindex tag on these pages. Intuitively, it doesn't make sense to me to make Googlebot crawl zillions of vehicle details pages, all of which are noindexed, and it could easily get stuck/lost/etc. It seems like a waste of resources, and in some shadowy way bad for SEO.
My developers are pushing for the third solution, the hash URLs. This works on all hosts and keeps all functionality self-contained in the plugin (unlike noindex), and it conserves crawl budget while keeping the vehicle details pages out of the index (unlike robots.txt). But I don't want Google to slap us 6-12 months from now because it doesn't like links structured this way.
Any thoughts or advice you guys have would be hugely appreciated, as I've been going in circles, circles, circles on this for a couple of days now. Also, I can provide a test site URL if you'd like to see the functionality in action.
Intermediate & Advanced SEO | browndoginteractive
-
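Regarding the noindex option in the question above: since the Ajax-served details pages have no <head> for a meta robots tag, the header-based route is X-Robots-Tag. The question describes setting it with Apache rewrites keyed on query strings; purely as an illustration of the same idea in application code, here is a sketch assuming Flask, with an invented ?view=details URL convention.

```python
# Illustration only: send X-Robots-Tag on vehicle-details responses so
# Google can crawl them but won't index them. The ?view=details query
# parameter is an invented convention, not the plugin's actual URL scheme.
from flask import Flask, request

app = Flask(__name__)

@app.after_request
def noindex_details_pages(response):
    if request.args.get("view") == "details":
        response.headers["X-Robots-Tag"] = "noindex, follow"
    return response
```

As the question itself notes, this only works if the same pages are not also blocked in robots.txt, since a blocked crawler never sees the header.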
eCommerce product listed in multiple places: best SEO practice?
We have an eCommerce site we built for a customer, and products are allowed to appear in more than one product category within the site. Now, I know this is a bad idea from a duplicate content point of view, but we are going to allow the customer to select which of the multiple categories a product appears in will be its default category. This gives us a way of defining the default URL for a product. So am I correct in thinking that on all the other URLs where the product appears, we should add a rel canonical pointing to the default URL to stop duplicate content? Is this the best way?
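A minimal sketch of the canonical setup described here, assuming hypothetical product and category objects; the domain and URL pattern are placeholders.

```python
# Sketch: every category URL that renders this product points its canonical
# at the merchant-selected default-category URL. Names are placeholders.
def canonical_url(product):
    return f"https://example.com/{product.default_category.slug}/{product.slug}/"

# In the product template, regardless of which category URL served the page:
#   <link rel="canonical" href="{{ canonical_url(product) }}">
```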
Intermediate & Advanced SEO | spiralsites
-
Is 404'ing a page enough to remove it from Google's index?
We set some pages to 404 status about 7 months ago, but they are still showing in Google's index (as 404s). Is there anything else I need to do to remove them?
Intermediate & Advanced SEO | nicole.healthline