Evolution of SEO

Why SEO?

Since the advent of the Internet and search engines, there has been interest in ranking at the top of search results. As the “tricks” to win each search engine's approval were discovered, the engines changed their evaluation criteria to offer a better user experience.

Search engines like Google were not initially able to read the exact content of each web page, so they used “bots” that looked for patterns, assigning a domain or a given page a value for each search.

Therefore, early SEO approaches were crude, inelegant, and even abusive; they are known today as “Black Hat SEO”.

History of the Internet

Back in 1991 the first websites were born, and in a short time, cyberspace was full of them, with an urgent need to organize and classify so much digital content.

Initially the “World Wide Web” (www) was indexed manually. That changed in June 1993, when Matthew Gray, a young student at MIT in Massachusetts, developed the first search robot, known as the “World Wide Web Wanderer” or, more familiarly, Wandex. Shortly afterwards, Gray would also create one of the first web servers in history, www.mit.edu.

It did not take even a year for other similar systems to appear, such as Aliweb (still in operation) and, later, WebCrawler, which turned out to be the first full-text search engine.

It was also in 1993 that six students from Stanford University developed Architext, with the intention of compiling the vast amount of information on the web. Shortly thereafter, they secured further funding for the project, and in October 1995 Excite was officially born, revolutionizing the categorization of information by launching a portal that selected and offered access to different information sources based on their subject matter. Practically at the same time, other providers of this type of service, such as AltaVista and Yahoo, responded with their own versions.

Evolution of SEO - History of web search engines

However, if there is a year in which SEO can be said to have been born, it is 1996, when Sergey Brin and Larry Page created the BackRub algorithm and search engine. Only a year later the “Google” domain appeared, using the same algorithm, and in a short time it took over the market, becoming the most used search engine in the world by 2004.

Google updates

Since the emergence of the search engine par excellence, there have always been computer scientists trying to decipher the secrets of its algorithm and developing techniques (sometimes unethical) to boost one domain or another to the top of its rankings. For this reason, Google's engineers have strived to improve the algorithm so that it offers the best web pages to users, without interference from these SEO techniques.

During the first years of the 21st century, Google made various updates to its algorithm, but it was not until 2011 that it introduced its first major comprehensive update, which changed the rules of SEO completely by penalizing websites with link farms and very low-quality content. This update was called Panda.

Since then Google has released several updates every few months. As a reference, we detail the most significant ones in different articles, along with the names assigned to them (sometimes officially, sometimes unofficially):

  • Caffeine (2010)
  • Panda (2011)
  • Penguin (2012)
  • Venice (2012)
  • Pirate (2012)
  • Hummingbird (2013)
  • Pigeon (2014)
  • HTTPS/SSL (2014)
  • Mobilegeddon (2015)
  • RankBrain (2015)
  • Possum (2016)
  • Intrusive Interstitials Update (2017)
  • Mobile Page Speed Update (2018)
  • Medic (2018)
  • BERT (2019)
  • Carnage (2020)
  • Core Web Vitals (2021)

SEO against updates

SEO techniques must evolve along with the search engine algorithm. Initially, search engines were limited to reading and counting words. So a document mentioning “chicken recipes” 30 times would logically be indexed in the cookery library, and in the chicken section.

But… how do you choose which page is best for the user out of the whole library?

Search engines use several methods to choose one page over another, but one of the most important criteria, at least in the origins of search engines, was keywords.

How do Keywords work?

When a user performs a query in a search engine, the words they type are considered “keywords”, and the results are displayed according to those words. It is very likely that if we search for the word “banana” the results we are shown will be different than if we search for “plantain”. This is because some pages have been ranked better for one word than for another, even if both refer to the same element.

Number of keywords in SEO

Initially, search engines considered that, following the previous example, a page mentioning “chicken recipes” 30 times was more relevant (and would therefore rank higher) than a similar page mentioning “chicken recipes” only 15 times. By the same token, a page mentioning “chicken recipes” 60 times would rank even higher.
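This early ranking logic can be sketched as a toy function. This is a hypothetical illustration of the “more mentions, higher rank” idea described above, not any search engine's actual code; the page names and tokenization are assumptions:

```python
import re

def count_phrase(text: str, phrase: str) -> int:
    """Count occurrences of a phrase in a text, ignoring case and punctuation."""
    words = re.findall(r"[a-z0-9']+", text.lower())
    kw = phrase.lower().split()
    n = len(kw)
    return sum(1 for i in range(len(words) - n + 1) if words[i:i + n] == kw)

def rank_pages(pages: dict[str, str], query: str) -> list[str]:
    """Naive early-search-engine ranking: more query mentions = higher position."""
    return sorted(pages, key=lambda p: count_phrase(pages[p], query), reverse=True)

pages = {
    "page_a": "chicken recipes " * 15,  # mentions the phrase 15 times
    "page_b": "chicken recipes " * 30,  # mentions it 30 times, so it ranks first
}
print(rank_pages(pages, "chicken recipes"))  # → ['page_b', 'page_a']
```

A raw count like this is exactly what keyword stuffing exploits, which is why later sections move on to density and, eventually, penalties.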

Word density

On the other hand, there is the concept of keyword density. It is logical to think that a text on the history of America may mention Christopher Columbus hundreds of times across its many pages, while a short biography of Columbus may contain “Christopher Columbus” only 13 times. However, being a shorter document clearly focused on this person, the biography will be of greater interest to a user searching for these keywords, and will therefore be placed higher on the search engine results page.

Keyword density can be calculated with a simple formula:

KW Density = Number of keyword words ÷ Total words.
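The formula can be expressed as a small helper that counts keyword words (occurrences multiplied by the number of words in the phrase) against the total word count, matching the arithmetic used later in this article. The regex tokenization is a simplifying assumption:

```python
import re

def keyword_density(text: str, keyword: str) -> float:
    """Keyword density = (occurrences x words in the keyword) / total words."""
    words = re.findall(r"[a-z0-9']+", text.lower())
    kw = keyword.lower().split()
    n = len(kw)
    if not words:
        return 0.0
    hits = sum(1 for i in range(len(words) - n + 1) if words[i:i + n] == kw)
    return hits * n / len(words)

sample = "Good chicken recipes are chicken recipes worth sharing"
# 2 occurrences x 2 words = 4 keyword words out of 8 total -> 50%
print(f"{keyword_density(sample, 'chicken recipes'):.0%}")
```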

Black hat techniques with Keywords

This all started with the first SEO techniques designed to “trick” Google, known as “black-hat SEO”.

Webmasters dedicated themselves to making endless pages full of text that included the keywords many times, even when it did not enrich the article and the user would never actually read it. This technique is known as “keyword stuffing”.

“Keyword stuffing” example:

When someone wants to prepare chicken recipes, one should know that some chicken recipes, are not like other chicken recipes, because there are very good chicken recipes, and other regular chicken recipes as well as bad chicken recipes. Our chicken recipes are the best of all chicken recipes, and that is why we consider ourselves to be the best chicken recipe website that anyone who wants to prepare chicken recipes can access when looking for chicken recipes.

This gibberish of 77 words, in which “chicken recipes” appears 11 times (2 words × 11 = 22 keyword words), turns out to have a keyword density of 22 ÷ 77 ≈ 28.6%.

That is to say, nearly 29% of the content consists of keywords.

Imagine that our page tries to offer a good user experience and starts with a proper recipe of about 400 words that naturally uses “chicken recipes” 3 or 4 times (a density of around 2%), but then appends the previous paragraph to raise this density (roughly 30 keyword words ÷ 477 total words, about 6%), surpassing any other recipe that had not used this technique.

Algorithm update: “Keyword Stuffing”

Before the various evolutions of the search engines, this paragraph, very difficult to understand and giving the user very little real information, would have been among the first results on Google.

Fortunately for users, since 2018 Google has applied a “keyword stuffing” factor to prevent this abuse by some SEOs, penalizing these kinds of actions and encouraging web developers to create better content for the user instead of relying on “tricks” of little use to the consumer.

This was quite a game changer, with websites that had been ranking for a long time among the top results of the SERP dropping dramatically to the last positions.

Normally, “black hat” techniques seek to exploit flaws in the search engine's algorithm, and Google's job with each of its updates is to prevent this deception and give users the best possible page for their searches.