What Are Search Engine Spiders? Part I


What are search engine spiders? It's a question I am frequently asked. They are also called crawlers, robots, agents and web-bots, among other names, but they are all the same thing. Search engine spiders are software programs that seek out and record data from web pages on the World Wide Web for inclusion in search engine databases, which are then indexed on the keywords that people use when searching for information.
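To make that definition concrete, here is a minimal sketch of what a spider does: fetch a page, record its text, follow its links, and repeat. This is an illustration only, using Python's built-in HTML parser and a small hypothetical in-memory "web" (the `example.com` URLs and page contents are made up) instead of real HTTP requests.

```python
from html.parser import HTMLParser

# Hypothetical in-memory "web": URL -> HTML. A real spider would
# fetch these pages over HTTP instead.
PAGES = {
    "http://example.com/": '<a href="http://example.com/about">About</a> Welcome!',
    "http://example.com/about": "We sell widgets.",
}

class LinkParser(HTMLParser):
    """Collects href values from <a> tags plus the page's visible text."""
    def __init__(self):
        super().__init__()
        self.links, self.text = [], []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

    def handle_data(self, data):
        self.text.append(data.strip())

def crawl(start):
    """Breadth-first crawl: visit each page once, record its text, follow links."""
    queue, seen, index = [start], set(), {}
    while queue:
        url = queue.pop(0)
        if url in seen or url not in PAGES:
            continue
        seen.add(url)
        parser = LinkParser()
        parser.feed(PAGES[url])
        index[url] = " ".join(t for t in parser.text if t)
        queue.extend(parser.links)
    return index

index = crawl("http://example.com/")
```

After the crawl, `index` holds the recorded text of both pages, keyed by URL: exactly the kind of raw material a search engine database is built from.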

So what are spiders for? Their primary purpose, at least originally, was to make a list of all the web pages that could be found on the web, using the word 'web' in the true sense: the pages and URLs that search engines can resolve, as opposed to the internet itself, which is the railroad track these sites use to communicate and connect with each other.

Most people involved in internet marketing have a rough idea of what spiders do in relation to search engines like Google, so before we get at all technical, let's have a look at what spiders can and cannot do: what is spider food, and what is arachnicide!

The vast majority of spiders only see HTML tags and text. They can assign importance to specific parts of your text according to the HTML tags you are using. For example, they determine that anything inside <h1> </h1> tags is very important, and if those happen to be your keywords, your site will benefit. However, they are illiterate where JavaScript is involved: they can neither speak nor read it, so they ignore it, just as they are blind to frames.
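The passage above can be demonstrated with a short sketch of a text-only spider's view of a page. Again this is only an approximation, built on Python's stdlib parser; the sample page and its wording are invented for illustration. Note how the heading text is singled out and the script content never makes it into the recorded text.

```python
from html.parser import HTMLParser

class SpiderView(HTMLParser):
    """Approximates what a text-only spider sees: keeps visible text,
    notes <h1> content specially, and skips everything inside <script>."""
    def __init__(self):
        super().__init__()
        self.in_script = False
        self.in_h1 = False
        self.headings, self.text = [], []

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            self.in_script = True
        elif tag == "h1":
            self.in_h1 = True

    def handle_endtag(self, tag):
        if tag == "script":
            self.in_script = False
        elif tag == "h1":
            self.in_h1 = False

    def handle_data(self, data):
        if self.in_script:
            return  # the spider is "illiterate" here: script text is ignored
        data = data.strip()
        if not data:
            return
        self.text.append(data)
        if self.in_h1:
            self.headings.append(data)  # <h1> content gets extra weight

page = """
<h1>Discount Widgets</h1>
<p>Buy widgets online.</p>
<script>document.write("Invisible to spiders");</script>
"""
viewer = SpiderView()
viewer.feed(page)
# viewer.headings now holds the <h1> keywords; the script's text is absent
# from viewer.text entirely.
```

If your keywords live in the heading, the spider records them with extra weight; if they live inside JavaScript, as far as the spider is concerned they do not exist.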

