A Brief History of Search Engine Optimization

Search engine optimization, commonly abbreviated as SEO, is the art and science of making web pages attractive to internet search engines. Some internet businesses consider search engine optimization to be a subset of search engine marketing.

In the mid-1990s, webmasters and search engine content providers started optimizing websites to rank higher in internet searches. Back then, a webmaster only had to submit a URL to the search engine of their choice, and the search engine would send out a web crawler. The crawler downloaded the page to the search engine's server and extracted its links for indexing. Once the page was stored, a second program, called an indexer, pulled additional information from it and determined the weight of specific words. When this was complete, the page was ranked.

The internet was growing exponentially, and it did not take long for people to understand how important a high search engine ranking had become.

At first, search engines relied on information that webmasters themselves provided about their pages to index and rank them. Webmasters quickly began abusing the system, forcing search engines to develop more sophisticated and complex ranking methods. The search engines came to measure several factors: the domain name, text within the title, URL directories, term frequency, HTML tags, on-page keyword proximity, alt attributes for images, on-page keyword adjacency, text within NOFRAMES tags, web content development, sitemaps, and on-page keyword sequence.

Google came up with a new concept for evaluating pages on the internet called PageRank. PageRank scores a web page based on the quantity and quality of its incoming links. The method was so successful that Google began to enjoy a steady stream of praise and positive buzz.
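To make the idea concrete, here is a minimal sketch of the iterative PageRank calculation in Python. The damping factor of 0.85 comes from the original PageRank paper, but the three-page link graph and the function name are invented purely for illustration; this is a simplified sketch, not Google's production algorithm.

    # Minimal PageRank sketch: each page repeatedly passes a share of its
    # score along its outgoing links. The graph below is a made-up example.
    def pagerank(links, damping=0.85, iterations=50):
        """links maps each page to the list of pages it links to."""
        pages = list(links)
        n = len(pages)
        rank = {page: 1.0 / n for page in pages}  # start from a uniform score

        for _ in range(iterations):
            new_rank = {page: (1.0 - damping) / n for page in pages}
            for page, outgoing in links.items():
                if not outgoing:
                    continue  # dangling pages are simply skipped in this sketch
                share = damping * rank[page] / len(outgoing)
                for target in outgoing:
                    new_rank[target] += share  # every link passes on a share
            rank = new_rank
        return rank

    # Page "a" ends up ranked highest because both other pages link to it.
    print(pagerank({"a": ["b"], "b": ["a", "c"], "c": ["a"]}))

The key property the sketch illustrates is that a link from a highly ranked page is worth more than a link from an obscure one, which is exactly what made PageRank harder to game than raw keyword counts.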

To deter abuse by webmasters, many search engines, such as Google, Microsoft, Yahoo, and Ask.com, will not disclose the algorithms they employ when ranking pages. The signals used in SEO today typically include the following: keywords in the title, link popularity, keywords in links pointing to the page, PageRank (Google), keywords that appear in the visible text, links from the top page to the inner pages, and a punch line placed at the top of the page.

Registering a webpage or website with a search engine is generally a simple task. For example, all Google requires is a link from a site that is already indexed, and its web crawlers will visit the site and begin to spider its contents. Within a few days of registration, the main search engine spiders will begin to index the website.

Some search engines will guarantee spidering and indexing for a small fee; they do not, however, guarantee a particular rank. Webmasters who do not want web crawlers to index specific files and directories can use the standard robots.txt file, placed in the site's root directory. Even so, web crawlers will occasionally still crawl a page the webmaster has asked them not to index.
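As an illustration, a minimal robots.txt might look like the following; the directory and file paths here are placeholders, not paths from any real site:

    User-agent: *
    Disallow: /private/
    Disallow: /drafts/draft-page.html

    User-agent: BadBot
    Disallow: /

Compliant crawlers fetch this file before anything else and skip the paths listed under Disallow, but because the convention is voluntary, it is a request rather than an enforcement mechanism, which is why a disallowed page can still occasionally be crawled.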
