Cleaning Up a Spammy Domain vs. Starting Fresh with a New Domain
-
Hi - can you give me your opinion, please? If you look at murrayroofing.com, you'll see the high Spam Score, and the fact that our domain has been placed on some spammy sites over the years. Would it be better and faster for ranking in the Google SERPs to create a fresh new domain? My theory is that we would spin our wheels trying to get unlisted from a lot of those spammy linking sites, and that it would be faster to see results with a fresh new domain than by trying to clean up the current spammy domain. Thanks in advance - you guys have been awesome!!
-
Disavowing has nothing to do with traffic.
Disavowing is all about spam signals from spammy links. That and only that.
-
Thanks again for all the advice - truly appreciated.
What are your thoughts on disavowing murrayroofing.com with Google, so that when it sends traffic to the new murrayroofingllc.com, Google will hopefully ignore it? Can you see our account in Moz? You can see the old domain is sending traffic, since it is listed on the spammy sites.
-
You are always welcome.
If you have more questions, you can always hit me up on Twitter: @DigitalSpaceman
-
Thank you!!
-
Hard to say who is putting you on those websites, or why.
The only way to truly get rid of those backlinks is to reach out to the owners of those websites. You'd obviously have to find someone who speaks the language.
Now, what you can do though is this:
- Disavow all those crappy links - that tells Google to ignore them, which should bring down your site's spam signals (a sketch of the disavow file format is below);
- Block traffic by IP, geolocation and/or hostname/referrer - that'll keep out the actual unrelated traffic.
That should clean things up pretty well.
Of course, that requires full control and ownership of that domain and website code. If you can't get that - again, my suggestion is just to part ways.
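For reference, here is a minimal sketch of what a Google disavow file looks like - it's a plain text file you upload through Search Console's disavow tool. The domains below are hypothetical placeholders, not the actual sites linking to murrayroofing.com:

```
# Disavow file for murrayroofing.com
# Lines starting with "#" are comments and are ignored by Google.

# "domain:" disavows every link from that entire domain
domain:spammy-directory.example
domain:link-farm.example

# A bare URL disavows links from that single page only
https://spam-listings.example/roofing/murray-roofing
```
-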
This is awesome info! Thank you. What are your thoughts on trying to get backlinks removed from sites in China, where we have no way to contact the owners? None of the wording on those sites is in our language, and it seems like it would be impossible to get removed from some of them. Additional thoughts greatly appreciated. In Analytics we see more traffic from China than from the US.
I'm convinced a competitor may be listing us on these sites - or one of those SEO guys who get really pissed when we turn them down. Could they be out there putting our domain on listing sites?
-
Yeah, your suggestion makes sense.
Keep the old one while the new one is ranking up.
Now, here is the perfect scenario for you: keep working on the new site, and get full ownership of the old one. Then, through IP blocks, Cloudflare, removing all the spammy backlinks, etc., get rid of all or most of the spammy traffic and signals. And then redirect (a sketch of that redirect is below).
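A minimal sketch of that last step, assuming the old domain runs on Apache and you have full control of its .htaccess (the domain names are the ones from this thread; the rest is a standard catch-all 301):

```
# .htaccess on the old domain: 301-redirect every request to the new domain,
# preserving the requested path
RewriteEngine On
RewriteCond %{HTTP_HOST} ^(www\.)?murrayroofing\.com$ [NC]
RewriteRule ^(.*)$ https://murrayroofingllc.com/$1 [R=301,L]
```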
-
Thank you again!
I should have been more clear: the old website gets traffic that does convert. If it loaded in under 10 seconds, I'm sure a lot more would convert - it has a super high bounce rate due to the slooooow loading. But we do get valid leads from it every week. Not a lot - maybe 5 a week - but our jobs are large-dollar jobs.
What are your thoughts on running both sites separately? We could go in, make sure they are not duplicates, and assign a different address and phone number to the old site. But this seems black hat. We would not be doing it to get both sites to rank, just so we don't lose the traffic - then in a year or so we'd get rid of it. What are your thoughts?
-
"... maybe a lot of traffic will convert. "
WILL convert? so it's not converting now? If so, it's kind of optimistic that will change, no?
Since you don't own old domain, you can't really reliably do anything about it anyway.
At this point, I would say not to forward at all, start from scratch.
-
Thank you. Yes, some of the traffic - maybe a lot of traffic will convert. The problem is old printed directories and other places where we can't update the domain. We get a lot of business from a printed catalog that won't change for a year or more.
I will look at the suggestions you made about IP limitations. The other issue is that we don't own the original domain, so we have to ask the owner - who is also our IT guy - to change settings. This is another reason we bought the new domain.
Again thank you!
-
There are a couple of ways you can go about it.
-
Is any of the traffic going to the old spammy domain any good? Does it convert? If not, then don't worry about redirecting - there wouldn't be any point; you'd only pass along spam signals.
-
If there is some good traffic, then set up IP limitations, hostname limitations, etc. That can be done in .htaccess or on the server itself. There are other, more elaborate ways to filter out spam traffic as well, but that depends on how familiar you or your IT guy are with them. One of the simplest solutions is to route all traffic through Cloudflare - it has quite good spam filtering, and it's free.
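As a rough sketch of what that .htaccess filtering could look like, assuming Apache 2.4 with overrides enabled - the referrer domains and the IP range are placeholders, not sites from your actual logs:

```
# Return 403 Forbidden for requests referred by known spam domains
RewriteEngine On
RewriteCond %{HTTP_REFERER} spammy-directory\.example [NC,OR]
RewriteCond %{HTTP_REFERER} link-farm\.example [NC]
RewriteRule .* - [F,L]

# Deny a specific IP range outright (203.0.113.0/24 is a placeholder range)
<RequireAll>
    Require all granted
    Require not ip 203.0.113.0/24
</RequireAll>
```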
Hope this helps.
-
Thank you - we're talking about murrayroofingllc.com in particular. We are not sure how to forward the old domain to the new one - we know how, we just don't know if we should. The reason we developed murrayroofingllc.com is that murrayroofing.com had a high spam score, and we got advice from this thread to go for a new domain.
Now the concern is: if we forward all the traffic from murrayroofing.com to murrayroofingllc.com, will the new domain murrayroofingllc.com be negatively affected by the spammy traffic? Somehow murrayroofing.com got on some spam sites, and we get a ton of spammy traffic from China. We don't want this traffic, and there is no way to ask those sites in China to remove our website from their spam listings.
All thoughts are welcome here.
-
Ta, Larry.
OK - nothing much of substance there. That said, since the site is ranking at all, it's worth trying to improve, as that is usually an easier and faster route to page 1.
Had a look at the Murray Roofing site - it has not been optimised for the customer queries a roofing contractor would seek to rank for. As it seems you are keen to start afresh, you can do both in parallel; no harm to either.
That said, I would suggest you also look at your Google My Business setup - you're effectively a local play. Getting reviews and appearing in the local search pack for "roofing contractors Omaha" etc. is what we would consider a client priority.
All the best - go get them.
-
Only for a few terms, and we are in positions 49 and 50 for them.
-
Hi
Is the current site ranking for any terms of value?
-
Hi there,
Yes, absolutely get a new domain. If you look at DA, it's only 15 (not too bad in some cases). But if you look at the backlink profile, you'll see that most of the links are from listing sites - Homestead, YellowPages, ezlocal, etc. You could replicate that profile in a day of work. And, as you said, the spam score will only bring trouble.
Hope this helps.