After performing SEO audits for more than 10 years, I've reviewed a lot of websites. In the past few years, I've noticed an unfortunate trend: the same common mistakes keep turning up, and I end up having to point them out in nearly every SEO audit I've completed in the past year or so. They are what I'd call common website and web design mistakes; they are website issues, not search engine optimization issues. Regardless, these issues affect a website's traffic and visibility in the search engine results, so I have to point them out during the technical SEO audits that I do for clients.

I call these mistakes common because 8 out of 10 technical SEO audits that I have performed in the past two years include at least 80 percent of these issues. I don't mind that websites have these issues before they come to me for a search engine optimization audit; I can chalk it up to job security as an SEO consultant. However, let's review the 8 common mistakes that I come across during SEO audits. Check to see if your website is making any of them. If it is, using OnCrawl will help you identify and fix these issues. I've detailed each issue below, explained why it's important, and described how OnCrawl can help you fix it.

1. Bad HTTP to HTTPS Migration
2. Duplicate Content Issues
3. Mega Menus and Internal Links
4. Site Structure Issues
5. Orphaned Content and Pages
6. Link Issues, Bad and Low Quality Links
7. Over Optimization or No Optimization
8. Incorrect Usage of Code or Directives

Common Mistake: Bad HTTP to HTTPS Migration

I see this mistake not only during SEO audits, but just by browsing the web. The main indicator is the broken lock symbol in the browser: if you visit a website that shows a broken lock symbol, the site has this issue. When a website is moved from HTTP to HTTPS (from the non-secure version to the secure, SSL version), every reference and internal link on the domain has to be changed to HTTPS. That means that if the website's source code references an image (all sites have images), it should be loading that image via HTTPS and not HTTP. If it references HTTP like this:

<img src="http://www.oncrawl.com/images/logo.png" alt="OnCrawl logo" />

the page where that code appears will show a broken lock symbol. The URL for OnCrawl.com should be HTTPS, not HTTP as shown above. Another related issue is internal links and how pages link to each other on the website. A website that has been properly migrated from HTTP to HTTPS will always link from one page to another using an HTTPS URL in the link, not HTTP. If the links on the site look like this:

<a href="http://www.oncrawl.com/support/">Contact Support</a>

and point to the HTTP URL, then most likely a 301 Permanent Redirect will occur, redirecting visitors to https://www.oncrawl.com/support/, which is the HTTPS version of that page. When you crawl the website using OnCrawl, OnCrawl will report a LOT of redirects. That is not good. A proper website migration from HTTP to HTTPS will result in no redirects on the website when it's crawled.
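
For comparison, here is roughly what those same two references should look like after a clean migration; the file paths are simply the ones used in the examples above and are only illustrative:

<img src="https://www.oncrawl.com/images/logo.png" alt="OnCrawl logo" />
<a href="https://www.oncrawl.com/support/">Contact Support</a>

Every internal reference on the page loads over HTTPS, so the browser keeps the lock intact and the crawler never hits a redirect.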

How to Fix This Common Mistake

Generally speaking, all of your site's internal links and all references to your full site URL in the source code should point to the HTTPS version of your site, not the HTTP version. If you're using WordPress, you can search the entire database for your site's HTTP URL (which includes the domain name) and replace it with the HTTPS version. You can do the same in other CMSs that store content in a database. For more information, I recently covered this in a previous OnCrawl article titled "Ensuring Your Domain Migration Goes Smoothly" (https://www.oncrawl.com/technical-seo/domain-migration/).
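
If you're comfortable on the command line, one way to do that database search and replace on a WordPress site is with WP-CLI's search-replace command. As a sketch, assuming your domain is example.com (swap in your own), you could preview the changes first and then run them for real:

# Preview what would change without touching the database
wp search-replace 'http://www.example.com' 'https://www.example.com' --dry-run

# Run the actual replacement once the preview looks right
wp search-replace 'http://www.example.com' 'https://www.example.com'

Whatever tool you use, back up the database before running a replacement like this.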

Common Mistake: Duplicate Content Issues

Duplicate content issues are fairly common, and it's actually very difficult to make your website 100 percent free of duplicate content. If you have a main navigation on your site, it will appear on all of the site's pages, so it's normal to have at least some duplicate content. However, one of the common mistakes I run across is when a site has a paragraph of text (or even more!) that appears on every page of the site. For example, you might have a paragraph of text about your company in the site's footer. That doesn't really cause a huge problem unless there are pages on your site with less content than that footer paragraph. If the paragraph in the footer is 200 words, and the rest of the page has fewer than 200 words (which is common), then the footer text will outweigh the other content on the page. That becomes a problem when it happens across many pages of your site.

One website I audited had a sidebar showing recent testimonials from customers. It scrolled through the testimonials, so visitors only saw one testimonial at a time. That wouldn't generally be a problem, but the source code of the page contained over 30 testimonials that it "scrolled through". The text of all 30 testimonials loaded on every page of the site, literally over 3,500 words of content. No page on the site had more than 3,500 words of its own, so every single page on the site carried the same 3,500 words of content. We removed the testimonials from the sidebar and the site's rankings and traffic skyrocketed.

There is technically no penalty for duplicate content on your site unless the duplication is so bad that it warrants a spam penalty from Google. That is fairly rare; if you're copying entire pages and/or websites, then you probably already know about the duplicate content. Most sites don't duplicate their content on purpose, but duplication can still cause crawling issues. A site can easily use up its crawl budget by wasting it on duplicate content. It's important to steer the search engine crawlers toward crawling the pages that are unique and have the best content.

How to Fix This Common Mistake

Use OnCrawl's Duplicate Content feature in the Crawl Report to learn how much duplicate content you may have on your site. If the site has duplicate title tags, duplicate meta description tags, and duplicate heading tags, for example, you may want to investigate whether those pages are actually duplicates. Some WordPress sites will generate duplicate pages through tag, category, and date-based archive pages.
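
As a sketch of one common cleanup, if an archive page is essentially duplicating another page, you can point search engines at the preferred URL with a canonical tag, or keep the archive out of the index entirely with a robots meta tag. The URLs below are placeholders, not a recommendation for any specific site:

<link rel="canonical" href="https://www.example.com/preferred-page/" />
<meta name="robots" content="noindex, follow" />

The canonical tag consolidates duplicate signals onto the preferred page, while noindex, follow keeps the archive out of the index but still lets crawlers follow its links.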

Common Mistake: Mega Menus

How many links does the website have in its navigation? I've literally seen hundreds of links (200 to 300 of them) in one site's main navigation. Not only is it overwhelming for site visitors trying to find what they're looking for, it's not very good for SEO in general. It's a mega menu when a site has over 10 main drop-down items, each of those drop-down items has more than 10 items of its own, and some even have sub-menu items. That ultimately means you have over 100 pages all linking to each other: 100 pages linking directly to 100 pages.

Sites like this can easily be (and should be) re-evaluated, because the site structure isn't categorized very well. One site I performed an SEO audit on was an e-commerce site with over 300,000 product pages. The website sells office furniture, but also church furniture and exercise equipment. You could literally be on a page about church pews and then navigate directly to a page about yoga mats. I don't know anyone buying church pews who would also be buying yoga mats at the same time.

This is an extreme example, but the main navigation has drop-down menus allowing someone to navigate directly to (the site links directly to) pages on all of those topics. Think of a website as several different "topical" areas or mini sites within one larger website. There's a home page, but that home page should only link to the main categories. Those category pages should then link to pages within their category; from there, visitors navigate to a subcategory and then down to a product page. Visitors should have to click through to another category before they can reach an off-topic page.

Linking 200 pages to 200 other pages in the navigation is never good. If a site has a mega menu, it needs to be fixed, and fixing it usually means addressing the overall site structure issues that come along with it.

How to Fix This Common Mistake

There are several things you need to review in order to start getting rid of a site's mega menu. Take a look at the internal site structure, review topic silos (if they even exist on the site), and review your content clusters. Like pages should link to like pages (pages on the same topic).

Common Mistake: Site Structure Issues

This is actually very similar to the mega menu issue. If a site has a mega menu, then its structure isn't set up in an ideal way. As I mentioned earlier, the site structure should be set up so that like topics are grouped together. It's pretty rare that a page should be linked under a subtopic, and then under a category, and still have hundreds of internal links pointing to it. Are there orphaned pages on the site? After a crawl, does the OnCrawl report identify orphaned pages that get traffic and rank but aren't part of the site structure?

How to Fix This Common Mistake

Take a look at the internal popularity reports that OnCrawl creates: review the page groups by depth, the Inrank distribution, and the Inrank flow. Does the site have a mega menu on its home page? Does the mega menu link to the subcategory pages? It shouldn't. There should be a small number of category pages linked in the main menu on the site's home page.

Are there clear topics or sections of the site? This will require a manual review; you should be able to tell fairly easily whether there's a mega menu linking to every page on the site or whether there are distinct sections. Another area to review is the Links section of the OnCrawl report.

Note the average number of inlinks per page and the average number of internal followed outlinks per page. There should generally be more incoming links to a page than outgoing links from it. If those numbers are very close, or there are more outgoing links than incoming links, then there could be a site structure problem. Are there pages on the site with fewer than 10 follow links? Are there orphaned pages that aren't part of the site structure?

Common Mistake: Orphaned Content and Pages

Orphaned pages are more common than you think. Orphaned pages are simply that: pages the crawler couldn't find a link to when it started at the site's home page and went from link to link on the site. OnCrawl finds them by analyzing Google Analytics data, Google Search Console data, and (hopefully) your site's log file data; it identifies pages that aren't linked on the site and flags them as orphaned pages. They're more common than you might expect because we tend to redesign our websites and leave old pages on the web server without linking to them. It could be an old version of the site, or it could be content that wasn't appropriate to include in the new version. Nonetheless, those pages still exist on the server even though they're not linked from the current public-facing site. This happens, and orphaned pages can have lots of traffic and they can rank well.

How to Fix This Common Mistake

There are several ways you can fix this problem. In fact, if you provide OnCrawl with the right data, such as Google Analytics, Majestic, Google Search Console, and log file data, it identifies orphaned pages very well. So, provide OnCrawl with all of that data if you have access to it. Log files can be especially important, as they can reveal other data, such as 404 errors and redirects, which will help you recover lost traffic and rankings.

To fix this common mistake, identify the orphaned pages that are important, then link to them properly on the site. If other websites link to those orphaned pages, you can identify them through Majestic.com link data. Once orphaned pages are linked back into the site structure, they can properly pass Inrank and PageRank on to other pages on the site, helping the site rank.

Common Mistake: Link Issues, Bad and Low Quality Links

Take a look at the Majestic Trust Flow and Citation Flow numbers for the domain. Generally speaking, the Trust Flow should be higher than the Citation Flow by at least 10 points. If the Citation Flow is higher than the Trust Flow, you may have low quality links pointing to the website. You generally want good, quality links pointing to your website rather than a lot of lower quality, untrusted sites linking to it. It's common to have some lower quality links, as we cannot control everyone who links to our site. However, we can control some of the links.

How to Fix This Common Mistake

Take the time to review all of the links to your website. Also, review the list of pages that have links. Are there any linked pages that aren't currently resolving? Maybe there's an old page that doesn't exist anymore but still has links pointing to it? Either bring back that content on the same URL or 301 redirect that URL to another related page on the same topic; that way you'll recover those links.
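
As a quick sketch, on an Apache server a single line in the .htaccess file is usually enough to redirect a retired URL to a related page; the paths and domain here are placeholders:

# Send the old, no-longer-existing URL to its closest live replacement
Redirect 301 /old-product-page/ https://www.example.com/new-product-page/

Other servers and CMSs have their own ways to set up 301 redirects, but the goal is the same: the old URL that has earned links should answer with a permanent redirect to the most relevant live page.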

Common Mistake: Over Optimization or No Optimization

Another common mistake that I see all the time is on-page optimization that appears to have been done over 10 years ago, or SEO that still follows the standards of 10 years ago. Current SEO best practices aren't being followed, and usually this involves keyword stuffing, poorly written meta data such as title tags or meta description tags, or even the improper use of heading tags.

How to Fix This Common Mistake

It's pretty easy to fix this common mistake: learn the best practices for writing title tags and meta data, and use acceptable HTML coding practices. Ordered lists, bulleted lists, the use of bold and italics, and the right font sizes and colors are a must nowadays. Cleaning up HTML code and using proper markup such as Schema.org code and JSON-LD can go a long way toward getting better rankings and more search engine traffic.
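
As an example of the kind of markup I mean, here's a minimal Schema.org Organization block in JSON-LD; the company name, URL, and logo path are placeholders you'd swap for your own, and the script usually goes in the page's head:

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Company",
  "url": "https://www.example.com/",
  "logo": "https://www.example.com/images/logo.png"
}
</script>

Keep the markup limited to what actually appears on the page, which also helps you avoid the structured data problems covered in the next mistake.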

Common Mistake: Incorrect Usage of Code or Directives

Using too much Schema.org code, or using inappropriate or incorrect Schema.org code, can result in a manual action from Google. While structured data is a good thing, it must be implemented correctly or there can be severe consequences; it can backfire on you. Many sites don't use any structured markup at all, and that's simply a missed opportunity nowadays.

Other related issues involve incorrect implementation of code or even conflicting directives on pages. For example, if you're using a canonical tag on a page, don't disallow the search engines from crawling that page in the robots.txt file. If they can't crawl the page, then they can't see the canonical tag.
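
Here's a sketch of that exact conflict, with placeholder paths. The robots.txt file blocks the page:

User-agent: *
Disallow: /old-category/duplicate-page/

while the blocked page itself declares a canonical that no crawler will ever get to read:

<link rel="canonical" href="https://www.example.com/new-category/preferred-page/" />

Pick one signal: if you want the canonical to consolidate those pages, the page carrying it has to stay crawlable.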

Other incorrect implementation of code or conflicting directives that I commonly see are:

  • incorrect rel next/prev usage (see the sketch after this list)
  • incorrect canonical tag directives
  • incorrect robots.txt directives
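
For the first item on that list, correct rel next/prev usage simply means that each page in a paginated series points at its immediate neighbors from the head of the page. A sketch for page 3 of a series, with placeholder URLs:

<link rel="prev" href="https://www.example.com/articles/page/2/" />
<link rel="next" href="https://www.example.com/articles/page/4/" />

Each page references only the page directly before it and the page directly after it, not the first page of the series.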

How to Fix This Common Mistake

Honestly, to be able to spot and fix these issues, you have to understand what each of these directives does and why it's used. You don't have to be a coder to understand Schema.org code, but you do need a basic understanding of why and when it's used. I'm not a coder, but I can copy and paste the code, and there are plenty of code generators out there to help you get it right.

In Conclusion

We all make mistakes, and luckily all of these mistakes are very fixable, which can lead to a better optimized site that is much more search engine friendly and visitor friendly. A proper SEO audit, even one you do yourself on your own website, will almost certainly reveal one or more of these common mistakes. I personally have audited my own sites and found issues I wasn't aware of, but was able to fix them once tools such as OnCrawl pointed them out to me. And still, we have to pick our battles, as some of the issues found may not be as important as others.