Moz Q&A is closed.
After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we're not completely removing the content - many posts will remain viewable - we have locked both new posts and new replies. More details here.
Why crawl error "title missing or empty" when there is already "title and meta description" in place?
-
I've been getting 73 "title missing or empty" warnings from the SEOmoz crawl diagnostics.
This is weird, as I've installed the Yoast WordPress SEO plugin and all posts do have a title and meta description. So why these results? Can anyone explain what's happening? Thanks!!
Here are some of the links that are listed with "title missing or empty". Almost all our blog posts were listed there.
http://www.gan4hire.com/blog/2011/are-you-here-for-good/
-
I see. Thanks so much for the effort to explain in detail.
So, is it because of the Yoast WordPress SEO plugin I'm using? Are you using that for your site? Do you have this problem? I ask because I just installed it prior to the crawl. I was using All in One SEO earlier and the crawl didn't come back with this error.
Google and Bing seem to have no problem getting my title, though. Should I fix it or just ignore the problem?
Thanks so much again!
-
Jason,
Go in and turn off your Twitter and G+1 plugins, then re-run the crawl. My guess is you will then see title tags through any Moz tool. If so, you can choose a different widget or move its placement. (When you deactivate the plugins, make sure you clear the cache before running the crawl.)
Hope it helps
-
Thanks Alan,
I like a little mystery hunt
-
Well picked up, Sha.
Impressed with your level of detail.
-
Hi Jason,
There is obviously something going on with this that is affecting what some crawlers are seeing on your pages.
I ran the Screaming Frog tool and it shows that the majority of your pages have empty titles, even though I can see titles loading in the browser.
On checking your code, I see that you are using the pragma directive meta element, but it actually appears below the title element in the code.
Example from your code:
<head>
<title>Are You Socially Awkward? | Branding Blog | The Bullet</title>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
So I ran the page through the W3C Markup Validation Service and it also indicates that it sees no character encoding declaration:
No character encoding declared at document level: no character encoding information was found within the document, either in an HTML meta element or an XML declaration.
So, I believe the issue here may be related to the fact that the pragma directive should appear as close as possible to the top of the head element, i.e. before the title element.
The following is from the W3.org documentation on declaring character encoding. You will see that it specifically states that the pragma directive is required for XHTML 1.x documents served as HTML, as yours is:
For XHTML syntax, you should, of course, have " />" after the content attribute, rather than just ">".
The encoding of the document is specified just after charset=. In this case the specified encoding is the Unicode encoding, UTF-8.
The pragma directive should be used for pages written in HTML 4.01. It should also be used for XHTML 1.x documents served as HTML, since the HTML parser will not pick up encoding information from the XML declaration.
In HTML5 you can either use this approach for declaring the encoding, or the newly specified meta charset attribute, but not both in the same page. The encoding declaration should also fit within the first 1024 bytes of the document, so you should generally put it immediately after the opening tag of the head element.
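Putting that together, a corrected head for your example page would look something like this (trimmed to just the two elements in question):
<head>
<!-- charset declaration first, within the first 1024 bytes of the document -->
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
<title>Are You Socially Awkward? | Branding Blog | The Bullet</title>
</head>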
Hope that helps,
Sha
-
Cool. Thanks for the reminder, Keri. I thought the help desk would reply to this thread.
Sure, I'll post more information back on this thread once I get the answer.
-
Thanks for taking a look at the site. I hope the next crawl, which will run next week, comes back clean. Will update you guys.
-
That's an interesting one. I'd email that to the help desk at help@seomoz.org to let them know about it. If there's some kind of cause of it that would be helpful for others to know, it'd be great if you could post more information back on this thread.
-
I just did a crawl on your site using Bing's toolkit, and I did not find any errors concerning titles.
In fact, your site has the best score I have ever gotten from a WordPress site. Usually a WordPress site is a mess, especially with unnecessary 301s.
I found only 2 HTML errors, 1 unnecessary redirect, and multiple H1s.
Wait until the next crawl; it may come good.
Related Questions
-
Expired domain 404 crawl error
I recently purchased an expired domain at auction, and after I started my new site on it I am noticing 500+ "not found" errors in Google Webmaster Tools, which are generated from the previous owner's content. Should I use a redirection plugin to redirect those nonexistent posts to new post(s) on my site? Should I use a 301 redirect? Or should I leave them as they are without taking further action? Please advise.
Technical SEO | Taswirh1
-
How Does Google's "index" find the location of pages in the "page directory" to return?
This is my understanding of how Google's search works, and I am unsure about one thing in particular:
1. Google continuously crawls websites and stores each page it finds (let's call this the "page directory")
2. Google's "page directory" is a cache, so it isn't the "live" version of the page
3. Google has separate storage called "the index" which contains all the keywords searched; these keywords in "the index" point to the pages in the "page directory" that contain the same keywords
4. When someone searches a keyword, that keyword is accessed in the "index" and returns all relevant pages in the "page directory"
5. These returned pages are given ranks based on the algorithm
The one part I'm unsure of is how Google's "index" knows the location of relevant pages in the "page directory". The keyword entries in the "index" point to the "page directory" somehow. I'm thinking each page has a URL in the "page directory", and the entries in the "index" contain these URLs. Since Google's "page directory" is a cache, would the URLs be the same as the live website (and would the keywords in the "index" point to these URLs)? For example, if a webpage is found at www.website.com/page1, would the "page directory" store this page under that URL in Google's cache? The reason I want to discuss this is to understand the effects of changing a page's URL by understanding how the search process works better.
Technical SEO | reidsteven750
-
Rel="Follow"? What the &#@? does that mean?
I've written a guest blog post for a site. In the link back to my site they've put a rel="follow" attribute. Is that valid HTML? I've Googled it but the answers are inconclusive, to say the least.
Technical SEO | Jeepster0
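For what it's worth, rel="follow" is not a registered link type in HTML - links are followed by default, so no rel value is needed to get a followed link. A quick sketch with a hypothetical URL:
<!-- Followed by default; no rel needed (rel="follow" is not in the spec) -->
<a href="http://example.com/">My site</a>
<!-- The recognized opposite, if you ever want a link NOT followed -->
<a href="http://example.com/" rel="nofollow">My site</a>
-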
"nofollow pages" or "duplicate content"?
We have a huge site with lots of geographical pages in this structure:
domain.com/country/resort/hotel
domain.com/country/resort/hotel/facts
domain.com/country/resort/hotel/images
domain.com/country/resort/hotel/excursions
domain.com/country/resort/hotel/maps
domain.com/country/resort/hotel/car-rental
The problem is that the text on e.g. /excursions is often exactly the same on .../alcudia/hotel-sea-club/excursion and .../alcudia/hotel-beach-club/excursion. The two hotels offer the same excursions, and the intro text on the pages is exactly the same throughout the entire site. This is also a problem on the /images and /car-rental pages. I think in most cases the only difference between these pages is the title, description and H1. These pages do not attract a lot of visits through search engines. But to avoid them being flagged as duplicate content (we have more than 4,000 of these pages - /excursions, /maps, /car-rental, /images), do I add a nofollow tag to them, do I block them in robots.txt, or should I just leave them and live with them being flagged as duplicate content? I'm waiting for our web team to add a function to insert a geographical name in the text, so I could add e.g. #HOTELNAME# in the text and thereby avoid the duplicate text. Right now we have intros like "When you visit the hotel ..." instead of "When you visit Alcudia Sea Club". But until the web team has fixed these GEO tags, what should I do? What would you do, and why?
Technical SEO | alsvik0
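A common pattern for thin variant pages like these is a meta robots tag rather than a robots.txt block, since it removes the pages from the index while still letting crawlers follow their links. A minimal sketch using one of the paths from the question:
<!-- In the head of domain.com/country/resort/hotel/excursions -->
<!-- noindex keeps the page out of the index; follow still lets crawlers pass through its links -->
<meta name="robots" content="noindex, follow" />
-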
Use webmaster tools "change of address" when doing rel=canonical
We are doing a "soft migration" of a website (actually it is a merger of two websites). We are using cross-site rel=canonical tags instead of 301s for the first 60-90 days. These have been done on a page-by-page basis for an entire site. Google states that a "change of address" should be done in Webmaster Tools for a site migration with 301s. Should this also be done when we are doing this soft move?
Technical SEO | EugeneF0
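For reference, a cross-site canonical is written exactly like a same-site one; a minimal sketch with hypothetical domains:
<!-- In the head of a page on the site being migrated away from -->
<link rel="canonical" href="http://www.surviving-site.example/merged-page/" />
-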
How to block "print" pages from indexing
I have a fairly large FAQ section and every article has a "print" button. Unfortunately, this is creating a page for every article, which is muddying up the index - especially on my own site using Google Custom Search. Can you recommend a way to block this from happening?
Example article: http://www.knottyboy.com/lore/idx.php/11/183/Maintenance-of-Mature-Locks-6-months-/article/How-do-I-get-sand-out-of-my-dreads.html
Example "print" page: http://www.knottyboy.com/lore/article.php?id=052&action=print
Technical SEO | dreadmichael0
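Going by the example URLs, the print variants are all served through /lore/article.php while the articles themselves live under /lore/idx.php, so a robots.txt rule could target them - with the caveat that robots.txt stops crawling, not indexing, so a meta noindex on the print template is the more thorough fix. A sketch, assuming article.php only ever serves print views:
User-agent: *
# Block the print variants served via article.php; the articles themselves use idx.php
Disallow: /lore/article.php
-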
Crawling image folders / crawl allowance
We recently removed /img and /imgp from our robots.txt file, thus allowing Googlebot to crawl our image folders. Not sure why we had these blocked in the first place, but we opened them up in response to an email from Google Product Search about not being able to crawl images - which can/has hurt our traffic from Google Shopping. My question is: will allowing Google to crawl our image files eat up our 'crawl allowance'? We wouldn't want Google to not crawl/index certain pages, and ding our organic traffic, because more of our allotted crawl bandwidth is getting chewed up crawling image files. Outside of the non-detailed crawl stat graphs from Webmaster Tools, what's the best way to check how frequently/deeply our site is getting crawled? Thanks all!
Technical SEO | evoNick0
-
301 Redirect "wildcard" question
I have been looking at the SEOmoz redirect guide for some advice but I can't seem to find the answer: http://www.seomoz.org/learn-seo/redirection
I have lots of URLs from a previous version of the site that look like the following:
sitename.com/-c-25.html?sort=2d&page=1
sitename.com/-c-25.html?sort=3a&page=1
etc. I want to write a redirect so that whenever a URL containing "-c-25.html" is requested, it redirects to a specified page, regardless of what comes after the question mark. These URLs were created by our previous ecommerce software. The 'c' is for category, and each page of the category created a different URL. I want to do this so I can redirect all of these URLs to the appropriate new category page in a single redirect. Thanks for any help.
Technical SEO | craigycraig0
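A sketch of one way to do this with Apache mod_rewrite in an .htaccess file, assuming a hypothetical target page: the rule's pattern matches only the path, so any query string is irrelevant to the match, and the trailing ? on the target discards it from the redirect.
RewriteEngine On
# Any request for /-c-25.html, whatever follows the ?, gets a 301 to the new category page
RewriteRule ^-c-25\.html$ /new-category-page/? [R=301,L]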