Over The Top SEO

OTT Blog

How Duplicate Content Hurts Your SERPs

Paul Hudson July 8, 2016

It goes without saying that SEO has changed over the years. In its halcyon days, the processes available to boost page rankings and run campaigns were far simpler and easier to operate. Since then, Google has clamped down ever harder on manipulative SEO in order to maintain the accuracy and authenticity of its search results, making content one of the most important ranking factors.

What was previously a matter of simple techniques has become a delicate process, requiring both greater investment and greater caution. Those running campaigns in the hope that their pages will reach the top of Google and Bing results now invest ever larger sums to achieve the SERP positions they desire.

This leads to the subject of the dangers of SEO in the modern sense. Specifically, we'll be looking at duplicate content and the extent to which it can damage your campaign and jeopardize your progress. While you might assume that any duplicate content poses an immediate danger to a campaign, you may be surprised by how Google actually defines and deals with such content.

So let's be clear and define right away what Google considers duplicate content and how it can affect your SERPs.

Duplicate content is an identical body of text that appears in several different locations on the internet. There are a few reasons why this causes difficulties for a search engine like Google. For one, duplicated content makes it harder for the engine to decide which of the two or more results should be listed at all, and which should rank above the others.

It's easy to see how serious this is for a search engine. An engine's accuracy directly determines its integrity and reputation, so duplication must be handled accurately and consistently. As the algorithms behind engines like Google have evolved over the years, their ability to rank duplicate content correctly has improved, translating into more significant penalties for those who rely on such content in their SEO strategies.

So what are the common causes of duplicate content? At first glance it's tempting to assume that any duplication is deliberate, arising from things like specific product descriptions or the terms and conditions of a service. These bodies of text can be precise and legal in nature, requiring that they be copied as is, with no alterations.

Duplication can, however, occur through no fault of your own. This is a particular issue for online shops, where every variation of a product, such as the color of a shirt, may generate a separate, unique URL. If a popular product has a number of variations, a search engine can index each variant URL as a distinct page, creating duplication across your own site.
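One common way to handle variant URLs of this kind is the canonical link element, which tells search engines which URL you consider the primary version. A minimal sketch, assuming a hypothetical shop where the shirt's color variants live at query-string URLs (example.com and the paths are illustrative, not from this article):

```html
<!-- Served in the <head> of every variant page, -->
<!-- e.g. https://example.com/shirt?color=blue and https://example.com/shirt?color=red -->
<!-- The href points at the single version you want indexed. -->
<link rel="canonical" href="https://example.com/shirt">
```

With this in place, the variant pages consolidate their signals onto the one canonical URL instead of competing with each other as duplicates.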

You will also find that once your content and sites reach a certain level of popularity and view counts, "scrapers" become an issue. These are automated processes, or sometimes manual actions, in which entire bodies of content are taken, often without consent, and rehosted elsewhere for a business purpose. This can produce a large amount of duplication without the original author ever agreeing to it.

You do have options for minimizing duplication, such as excluding pages from SERPs or setting a preferred domain. These can prevent the more basic occurrences of duplication.
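Excluding a page from SERPs can be done with a robots meta tag. A minimal sketch (the printer-friendly page is a hypothetical example, not one from this article):

```html
<!-- Placed in the <head> of a page you do not want in search results, -->
<!-- e.g. a printer-friendly duplicate of an article. -->
<meta name="robots" content="noindex">
```

The page can still be crawled and linked to; it is simply kept out of the index, so it never competes with the original in the rankings.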

So how damaging is duplication, exactly? The answer is complicated.

The simple answer is that duplicated content as defined above carries little to no penalty. This refers to content that is a word-for-word duplicate. Engines such as Google have techniques to detect this content and categorize it appropriately, but they tend not to penalize it.

The real danger comes from what is termed "boilerplate" content. This type of duplicate content is typically produced at a lower quality level and tends toward high-volume work that is sloppily copied and reworded from original articles. As engines like Google have evolved, so has their ability to spot the similarities between articles and identify it. Boilerplate content will be penalized and will damage your SERPs if it is used heavily.

This is a natural evolution of page ranking systems, done in an attempt to push them toward genuine engagement and higher-quality content. It has the side effect of raising the cost of SEO work while encouraging more genuine content.

The point to keep in mind, therefore, is this: duplication is not a significant issue, and certainly nothing to be concerned about, as long as it is not a key part of your SEO strategy – it won't improve your SERP results, but it won't penalize you either.

The real danger lies in making significant use of boilerplate content. Search engines want communities to create engaging, original work and will reward it heavily. While a truly original piece may cost a significant amount, it will be better rewarded by search engines and have a far more positive impact on your SERPs. Techniques still exist that can circumvent this, but the goal for providers like Google is to steadily clamp down on low-quality duplicated content and move toward original work that drives true engagement.