How much does dirty HTML/CSS etc. impact SEO?
-
Good Morning!
I have been trying to clean up this website, and half the time I can't even edit our content without breaking the WYSIWYG editor. Which leads me to the next question: how much, if at all, is this impacting our SEO? To my knowledge it isn't directly causing any broken pages for the viewer, but it still concerns me.
I found this post on Moz from last year:
http://outdoorsrank.com/community/q/how-much-impact-does-bad-html-coding-really-have-on-seo
We have a slightly different set of code problems but still wanted to revisit this question and see if anything has changed.
I also can't imagine that all this broken/extra code is helping our pages load properly.
Thanks everybody!
-
Invalid code does not equate to slow code. Google used to purposely leave the html and body tags unclosed to save bandwidth and loading time. I took the question as meaning W3C-valid code.
-
Bad code can definitely affect SEO rankings for a website.
It really depends on what exactly is wrong with the code. If it's causing problems with page load speed, that could certainly be a problem. HTML and CSS errors from poorly written or outdated code don't really matter for SEO and rankings on their own, unless those errors cause other issues, such as a poor user experience that increases bounce rates, or duplicate pages. HTML errors like a missing H1 tag can also hurt on-page SEO, which affects rankings. So in my honest opinion, bad code can definitely affect SEO rankings for a website.
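As a rough illustration of those on-page checks (response time, page title, H1 count), here is a minimal Python sketch. It assumes the third-party requests package is installed and uses a placeholder URL, not anyone's actual site.

# Rough on-page sanity check: response time, title, and H1 count.
# Assumes the third-party `requests` package; the URL below is a placeholder.
from html.parser import HTMLParser
import requests

class HeadingParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.h1_count = 0
        self.title_parts = []
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        if tag == "h1":
            self.h1_count += 1
        elif tag == "title":
            self._in_title = True

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title_parts.append(data)

url = "https://www.example.com/"
response = requests.get(url, timeout=10)

parser = HeadingParser()
parser.feed(response.text)

print(f"Status: {response.status_code}")
print(f"Response time: {response.elapsed.total_seconds():.2f}s")
print(f"Title: {''.join(parser.title_parts).strip() or 'MISSING'}")
print(f"H1 tags found: {parser.h1_count}")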
You can run a free page speed test with a results breakdown here => http://www.webpagetest.org
Definitely clean up the on-page HTML SEO factors, usability issues, and page speed issues. Hope that helps. Have a great day, Joe
-
I love him...
-
As long as links are not broken and resources are not returning 404 errors, my understanding is that rankings are not affected at all. The only thing I would make sure of is that Fetch as Google returns the page looking like it should. Here is a video from the authority on the matter.
Related Questions
-
SEO on dynamic website
Hi. I am hoping you can advise. I have a client in one of my training groups whose site is a golf booking engine where all pages are dynamically created based on the parameters used in their website search. They want to know what is the best thing to do for SEO. They have some landing pages that Google can see, but there is only a small bit of text at the top and the rest of the page is dynamically created. I have advised that they should create landing pages for each of their locations and clubs and use canonicals to handle what Google indexes. Is this the right advice, or should they noindex? Thanks, S
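As an illustration only of the canonical approach described above — the URL parameters and landing-page paths here are hypothetical, not the client's actual structure — a small Python sketch that maps a parameterised search URL to the static landing page it would declare as canonical:

# Sketch: map a parameterised search URL to its location landing page
# and emit the canonical link tag. URL structure here is hypothetical.
from urllib.parse import urlparse, parse_qs

def canonical_for_search_url(url: str) -> str:
    """Return the canonical landing-page URL for a dynamic search URL."""
    parsed = urlparse(url)
    params = parse_qs(parsed.query)
    location = params.get("location", [""])[0].lower().replace(" ", "-")
    if location:
        return f"https://{parsed.netloc}/golf-breaks/{location}/"
    return f"https://{parsed.netloc}/"

search_url = "https://www.example.com/search?location=Algarve&players=4&date=2024-06-01"
canonical = canonical_for_search_url(search_url)
print(f'<link rel="canonical" href="{canonical}" />')
# -> <link rel="canonical" href="https://www.example.com/golf-breaks/algarve/" />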
Intermediate & Advanced SEO | | bedynamic0 -
Default Wordpress 301 Redirects of JS and CSS files. Bad for SEO & How to Fix?
Hi there: We are developers with some digital marketing expertise, but a current issue has us perplexed. An outside SEO firm has asked us to clean up a large number of 301 redirects. Most of these are 'default' WordPress behavior that relates to calling the latest version of a JS or CSS file. For instance, a JS file is called with this: https://websitexyz.com/wp-includes/js/wp-embed.min.js?ver=4.9.1 but ultimately redirects to this: https://websitexyz.com/wp-includes/js/wp-embed.min.js. We are being asked to prevent the redirect from happening by, presumably, calling the ultimate file to begin with. The issue is that, as far as we know, there's no easy way to alter WP behavior to call the ultimate file to begin with. Does anyone have any thoughts on this? Thanks.
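The usual WordPress-side route is to filter the version query string off enqueued scripts and styles rather than chase each redirect, but before changing anything it's worth confirming which asset URLs actually redirect. A rough audit sketch in Python — assuming the requests package and a placeholder page URL:

# Audit sketch: list script/style URLs on a page and flag ones that redirect.
# Assumes the `requests` package; the page URL is a placeholder.
from html.parser import HTMLParser
from urllib.parse import urljoin
import requests

class AssetParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.assets = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "script" and a.get("src"):
            self.assets.append(a["src"])
        elif tag == "link" and "stylesheet" in (a.get("rel") or "") and a.get("href"):
            self.assets.append(a["href"])

page_url = "https://www.example.com/"
html = requests.get(page_url, timeout=10).text

parser = AssetParser()
parser.feed(html)

for asset in parser.assets:
    asset_url = urljoin(page_url, asset)
    resp = requests.get(asset_url, allow_redirects=False, timeout=10)
    if resp.status_code in (301, 302):
        print(f"{resp.status_code}: {asset_url} -> {resp.headers.get('Location')}")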
Intermediate & Advanced SEO | | Daaveey0 -
What does Disallow: /french-wines/?* actually do - robots.txt
Hello Mozzers - Just wondering what this robots.txt instruction means: Disallow: /french-wines/?* Does it stop Googlebot crawling and indexing URLs in that "French Wines" folder - specifically the URLs that include a question mark? Would it stop the crawling of deeper folders - e.g. /french-wines/rhone-region/ that include a question mark in their URL? I think this has been done to block URLs containing query strings. Thanks, Luke
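For what it's worth, under Google's documented robots.txt handling a Disallow rule is a prefix match against the path plus query string, with * matching any run of characters. A rough Python sketch (not a full robots.txt parser, just that one rule applied to some sample paths) shows what this particular pattern would and wouldn't block:

# Rough check of which URLs "Disallow: /french-wines/?*" would match,
# assuming Google's documented semantics: prefix match, with * as a wildcard.
import re

def rule_to_regex(rule: str) -> re.Pattern:
    # Escape everything, then turn the escaped * back into ".*"; anchor at the start.
    pattern = re.escape(rule).replace(r"\*", ".*")
    return re.compile("^" + pattern)

disallow = rule_to_regex("/french-wines/?*")

samples = [
    "/french-wines/?page=2",              # blocked: query string directly on the folder
    "/french-wines/rhone-region/",        # not blocked: no "?" straight after the folder
    "/french-wines/rhone-region/?page=2", # not blocked by THIS rule (would need /french-wines/*?*)
]
for path in samples:
    verdict = "blocked" if disallow.match(path) else "allowed"
    print(f"{verdict}: {path}")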
Intermediate & Advanced SEO | | McTaggart0 -
6 .htaccess Rewrites: Remove index.html, Remove .html, Force non-www, Force Trailing Slash
Some information about my website environment:
1. I have a static webpage in the root.
2. WordPress is installed in the sub-directory www.domain.com/blog/
3. I have two .htaccess files, one in the root and one in the WordPress folder.

What I want to do:
1. Redirect www to non-www on all URLs
2. Remove index.html from the URL
3. Remove the .html extension / 301 redirect to the URL without the .html extension
4. Add a trailing slash to the static webpages / 301 redirect from the non-trailing-slash version
5. Force a trailing slash on the WordPress pages / 301 redirect from the non-trailing-slash version

Some examples:
domain.tld/index.html >> domain.tld/
domain.tld/file.html >> domain.tld/file/
domain.tld/file.html/ >> domain.tld/file/
domain.tld/wordpress/post-name >> domain.tld/wordpress/post-name/

My code in the ROOT .htaccess is:

<IfModule mod_rewrite.c>
Options +FollowSymLinks -MultiViews
RewriteEngine On
RewriteBase /

# removing trailing slash
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)/$ $1 [R=301,L]

# www to non
RewriteCond %{HTTP_HOST} ^www.(([a-z0-9_]+.)?domain.com)$ [NC]
RewriteRule .? http://%1%{REQUEST_URI} [R=301,L]

# html
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^([^.]+)$ $1.html [NC,L]

# index redirect
RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /index.html\ HTTP/
RewriteRule ^index.html$ http://domain.com/ [R=301,L]
RewriteCond %{THE_REQUEST} .html
RewriteRule ^(.*).html$ /$1 [R=301,L]
</IfModule>

The above code does:
1. Redirect www to non-www
2. Remove the trailing slash at the end (if it exists)
3. Remove index.html
4. Remove all .html
5. Redirect 301 to the filename, but doesn't add a trailing slash at the end
Intermediate & Advanced SEO | | NeatIT0 -
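One way to sanity-check rules like the ones above after deployment is to request each example URL and confirm it resolves to the expected target in a single 301 hop. A rough sketch in Python, assuming the requests package (domain.tld is the question's placeholder domain):

# Verify the example mappings above resolve as single 301 hops.
# Assumes the `requests` package; domain.tld is a placeholder domain.
import requests

expected = {
    "http://domain.tld/index.html": "http://domain.tld/",
    "http://domain.tld/file.html": "http://domain.tld/file/",
    "http://domain.tld/file.html/": "http://domain.tld/file/",
    "http://domain.tld/wordpress/post-name": "http://domain.tld/wordpress/post-name/",
}

for source, target in expected.items():
    resp = requests.get(source, timeout=10)  # follows redirects by default
    single_301 = len(resp.history) == 1 and resp.history[0].status_code == 301
    status = "OK" if resp.url == target and single_301 else "CHECK"
    print(f"{status}: {source} -> {resp.url} ({len(resp.history)} hop(s))")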
SEO impact difference between a URL Rewrite and 301 redirect
Hi guys and girls! Just putting a new site live: we changed the URLs from one thing to another, and I created a 301 file redirecting the URLs like for like. The developer installing it has created a different file with lines like:
RewriteRule ^page/ http://www.site/page [R=301,L]
RewriteRule ^/page/ http://www.site/page [R=301,L]
What's the difference? The page redirects, but is there a difference between the 301 redirect and this URL rewrite in terms of SEO and link value?
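For reference, a RewriteRule carrying the R=301 flag sends an external 301 response just like any other 301 mechanism; the thing to confirm is that the old URL really answers with a 301 and a Location header rather than a 200 (which would mean an internal rewrite quietly serving the same content on two URLs). A minimal Python sketch, assuming the requests package and a placeholder URL:

# Check whether an old URL answers with a 301 (external redirect)
# or a 200 (internal rewrite). Assumes `requests`; the URL is a placeholder.
import requests

old_url = "http://www.site.com/old-page/"
resp = requests.get(old_url, allow_redirects=False, timeout=10)

if resp.status_code == 301:
    print(f"301 redirect -> {resp.headers.get('Location')}")
elif resp.status_code == 200:
    print("200 OK: the old URL is being rewritten internally, not redirected")
else:
    print(f"Unexpected status: {resp.status_code}")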
Intermediate & Advanced SEO | | shloy23-2945840 -
/%category%/%postname%/ Permalink structure
Most everyone seems to agree that /%category%/%postname%/ is the best blog permalink structure. I'm thinking of changing my structure to that, because right now it's structured by date, which is bad. But almost all of my posts are assigned to more than one category. Won't this create duplicate pages?
Intermediate & Advanced SEO | | UnderRugSwept0 -
301 redirect from .html to non .html?
Previously our site was using this as our URL structure: www.site.com/page.html. A few months ago we updated our URL structure to this: www.site.com/page, and we're not using the .html. I've read over this guide and don't see anywhere that it discusses this: http://www.seomoz.org/learn-seo/redirection. I've currently got a programmer looking into it, but am always a bit wary of their workarounds, as I'd previously had them cause more problems than they fixed. Here is the solution he is looking to do: "The way that I am doing the redirect is fine. The problem is where to put the code. The issue is that the files are .html files that need to be redirected to the same URL without a .html on them. I can see if I can add that to the 404 redirect page, if there is one in there, and see if that does the trick. That way, if there is no page that exists without the .html, then it will still be a 404 page. However, if it is there, then it will work as normal. I will see what I can find and get back." Any help would be greatly appreciated. Thanks, BJ
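Whichever server-side approach the programmer settles on, two things are worth verifying afterwards: each old .html URL returns a single 301 to its extensionless twin, and an extensionless URL with no real page behind it still returns a genuine 404 rather than a 200. A rough Python sketch with placeholder URLs, assuming the requests package:

# Post-migration checks: .html URLs 301 to the extensionless version,
# and a nonexistent URL still returns a real 404.
# Assumes the `requests` package; www.site.com is a placeholder.
import requests

old_pages = ["http://www.site.com/page.html", "http://www.site.com/about.html"]
for old in old_pages:
    resp = requests.get(old, allow_redirects=False, timeout=10)
    location = resp.headers.get("Location", "")
    expected = old[: -len(".html")]
    ok = resp.status_code == 301 and location.rstrip("/") == expected.rstrip("/")
    print(f"{'OK' if ok else 'CHECK'}: {old} -> {resp.status_code} {location}")

missing = requests.get("http://www.site.com/no-such-page", timeout=10)
print(f"Nonexistent page: {missing.status_code} (should be 404, not 200)")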
Intermediate & Advanced SEO | | seointern0 -
Does capitalization matter for SEO?
Two places capitalization comes into play: (1) on-page use (title, H1, body text, img alt text, etc.); (2) external anchor text. I didn't think it mattered from Google's point of view for on-page usage (is this correct?), but I notice that Open Site Explorer's 'anchor text distribution' tab shows different counts for the same keyword if it's capitalized in different ways (e.g. seomoz.org is listed separately from SEOmoz.org). Is that just OSE, or does Google treat the keyword/phrase differently based on its capitalization, too? And if so, should I be creating external links to my site with the 'regular' and 'Capitalized' versions of my key phrases?
Intermediate & Advanced SEO | | scanlin1