Google learns about content on the web through a process called crawling, performed by Googlebot, which discovers new and updated pages (as well as dead links) and adds them to the Google index. Googlebot processes each page it crawls and compiles an index of the words it finds along with their position on the page, and it also takes into account information in content tags and ALT attributes. When you run a search, Google pulls from this index the content most relevant to your query, ranked by relevancy, which is determined by over 200 factors.
The above is a simplified version of this Google page: http://www.google.com/support/webmasters/bin/answer.py?hl=en&answer=70897#3
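To make the indexing idea concrete, here is a toy sketch in Python of an inverted index that records each word's position on a page. This is only an illustration of the concept; it is not how Google actually builds its index, and the page contents are made up.

```python
from collections import defaultdict

def build_index(pages):
    """Toy inverted index: word -> list of (page, position) entries."""
    index = defaultdict(list)
    for url, text in pages.items():
        for position, word in enumerate(text.lower().split()):
            index[word].append((url, position))
    return index

# Two tiny made-up "pages" and a one-word lookup.
pages = {
    "example.com/a": "fresh coffee beans roasted daily",
    "example.com/b": "coffee shop open daily",
}
index = build_index(pages)
print(index["coffee"])   # [('example.com/a', 1), ('example.com/b', 0)]
```

A query then becomes a lookup in this structure, and ranking decides which of the matching pages is most relevant.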
So when you develop a website, how can this information be utilized?
Ensure that your website can be crawled so that Google attempts to index it. While building the pages, include the necessary keywords and give the content the qualities that favour good SEO, which leads to higher rankings and therefore higher traffic. High traffic is the clearest sign of success, but watch out for factors like bounce rate, since a high bounce rate indicates that visitors found the page irrelevant to their search. A quick crawlability check is sketched below.
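For the "can it be crawled" part, here is a small sketch using Python's standard urllib.robotparser to check whether particular URLs are allowed for Googlebot by your robots.txt. The domain and paths are placeholders, not real ones from this post.

```python
from urllib import robotparser

# Placeholder site; substitute your own domain and paths.
SITE = "http://www.example.com"

rp = robotparser.RobotFileParser()
rp.set_url(SITE + "/robots.txt")
rp.read()  # fetches and parses robots.txt

for path in ["/", "/blog/seo-basics.html", "/private/drafts.html"]:
    allowed = rp.can_fetch("Googlebot", SITE + path)
    print(path, "->", "crawlable" if allowed else "blocked by robots.txt")
```

If a page you want indexed shows up as blocked, fix the robots.txt rules before worrying about keywords or content.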
Follow the Webmaster Tools quick-start guide to learn more about optimizing your pages:
http://www.google.com/webmasters/edu/quickstartguide/index.html
Here is a simple image that explains all of these things:
Infographic by PPC Blog