Team | Twitter followers | Facebook fans | Total
New York Yankees | 272,665 | 1,249,555 | 1,522,220
Boston Red Sox | 10,127 | 1,052,128 | 1,062,255
Philadelphia Phillies | 561,223 | 337,405 | 898,628
Chicago Cubs | 4,595 | 537,492 | 542,087
San Francisco Giants | 16,742 | 321,164 | 337,906
St. Louis Cardinals | 3,976 | 305,230 | 309,206
Minnesota Twins | 8,046 | 272,034 | 280,080
Detroit Tigers | 11,493 | 248,167 | 259,660
Atlanta Braves | 14,626 | 242,171 | 256,797
Chicago White Sox | 1,524 | 226,825 | 228,349
Oakland Athletics | 6,365 | 189,187 | 195,552
Los Angeles Dodgers | 18,371 | 171,098 | 189,469
New York Mets | 6,916 | 181,531 | 188,447
Milwaukee Brewers | 1,653 | 169,004 | 170,657
Cleveland Indians | 2,360 | 147,130 | 149,490
Seattle Mariners | 6,669 | 135,240 | 141,909
Houston Astros | 1,336 | 125,677 | 127,013
Texas Rangers | 6,149 | 118,638 | 124,787
Kansas City Royals | 7,361 | 112,616 | 119,977
Tampa Bay Rays | 4,677 | 107,636 | 112,313
Colorado Rockies | 1,237 | 109,960 | 111,197
Cincinnati Reds | 10,910 | 98,632 | 109,542
Los Angeles Angels | 4,701 | 98,922 | 103,623
Toronto Blue Jays | 4,775 | 98,513 | 103,288
Baltimore Orioles | 3,657 | 92,042 | 95,699
Florida Marlins | 1,824 | 81,382 | 83,206
San Diego Padres | 3,996 | 77,636 | 81,632
Pittsburgh Pirates | 2,636 | 68,085 | 70,721
Arizona Diamondbacks | 2,885 | 63,118 | 66,003
Washington Nationals | 2,165 | 42,411 | 44,576
Friday, May 28, 2010
100 Most Visited Websites in India
Tuesday, May 25, 2010
How Google Works
If you aren’t interested in learning how Google creates the index and the database of documents that it accesses when processing a query, skip this description. I adapted the following overview from Chris Sherman and Gary Price’s wonderful description of How Search Engines Work in Chapter 2 of The Invisible Web (CyberAge Books, 2001).
Google runs on a distributed network of thousands of low-cost computers and can therefore carry out fast parallel processing. Parallel processing is a method of computation in which many calculations can be performed simultaneously, significantly speeding up data processing. Google has three distinct parts:
* Googlebot, a web crawler that finds and fetches web pages.
* The indexer that sorts every word on every page and stores the resulting index of words in a huge database.
* The query processor, which compares your search query to the index and recommends the documents that it considers most relevant.
Let’s take a closer look at each part.
1. Googlebot, Google’s Web Crawler
Googlebot is Google’s web crawling robot, which finds and retrieves pages on the web and hands them off to the Google indexer. It’s easy to imagine Googlebot as a little spider scurrying across the strands of cyberspace, but in reality Googlebot doesn’t traverse the web at all. It functions much like your web browser, by sending a request to a web server for a web page, downloading the entire page, then handing it off to Google’s indexer.
Googlebot consists of many computers requesting and fetching pages much more quickly than you can with your web browser. In fact, Googlebot can request thousands of different pages simultaneously. To avoid overwhelming web servers, or crowding out requests from human users, Googlebot deliberately makes requests of each individual web server more slowly than it’s capable of doing.
Googlebot finds pages in two ways: through an add URL form, www.google.com/addurl.html, and through finding links by crawling the web.
[Screenshot: Google's Add URL form]
Unfortunately, spammers figured out how to create automated bots that bombarded the add URL form with millions of URLs pointing to commercial propaganda. Google rejects those URLs submitted through its Add URL form that it suspects are trying to deceive users by employing tactics such as including hidden text or links on a page, stuffing a page with irrelevant words, cloaking (aka bait and switch), using sneaky redirects, creating doorways, domains, or sub-domains with substantially similar content, sending automated queries to Google, and linking to bad neighbors. So now the Add URL form also has a test: it displays some squiggly letters designed to fool automated “letter-guessers”; it asks you to enter the letters you see — something like an eye-chart test to stop spambots.
When Googlebot fetches a page, it culls all the links appearing on the page and adds them to a queue for subsequent crawling. Googlebot tends to encounter little spam because most web authors link only to what they believe are high-quality pages. By harvesting links from every page it encounters, Googlebot can quickly build a list of links that can cover broad reaches of the web. This technique, known as deep crawling, also allows Googlebot to probe deep within individual sites. Because of their massive scale, deep crawls can reach almost every page in the web. Because the web is vast, this can take some time, so some pages may be crawled only once a month.
Although its function is simple, Googlebot must be programmed to handle several challenges. First, since Googlebot sends out simultaneous requests for thousands of pages, the queue of “visit soon” URLs must be constantly examined and compared with URLs already in Google’s index. Duplicates in the queue must be eliminated to prevent Googlebot from fetching the same page again. Googlebot must determine how often to revisit a page. On the one hand, it’s a waste of resources to re-index an unchanged page. On the other hand, Google wants to re-index changed pages to deliver up-to-date results.
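To make these bookkeeping challenges concrete, here is a minimal sketch (in Python, with invented names; it is not how Googlebot is actually implemented) of a "visit soon" queue that drops duplicate URLs and throttles requests to each individual web server:

```python
import time
from collections import deque
from urllib.parse import urlparse

class CrawlFrontier:
    """Toy 'visit soon' queue: skips duplicate URLs and rate-limits each host."""

    def __init__(self, already_indexed, min_delay_seconds=10.0):
        self.queue = deque()
        self.seen = set(already_indexed)      # URLs already indexed or already queued
        self.last_fetch = {}                  # host -> timestamp of last request
        self.min_delay = min_delay_seconds    # politeness delay per host

    def add(self, url):
        if url not in self.seen:              # eliminate duplicates before queueing
            self.seen.add(url)
            self.queue.append(url)

    def next_url(self):
        """Return the next URL whose host is ready to be crawled again, if any."""
        for _ in range(len(self.queue)):
            url = self.queue.popleft()
            host = urlparse(url).netloc
            if time.time() - self.last_fetch.get(host, 0.0) >= self.min_delay:
                self.last_fetch[host] = time.time()
                return url
            self.queue.append(url)            # host not ready yet: try it again later
        return None

# Example: links harvested from a fetched page feed back into the frontier.
frontier = CrawlFrontier(already_indexed={"http://example.com/"})
for link in ["http://example.com/", "http://example.com/a", "http://example.org/b"]:
    frontier.add(link)
print(frontier.next_url())   # http://example.com/a (the duplicate was skipped)
```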
To keep the index current, Google continuously recrawls popular, frequently changing web pages at a rate roughly proportional to how often the pages change. Such crawls keep the index current and are known as fresh crawls. Newspaper pages are downloaded daily; pages with stock quotes are downloaded much more frequently. Of course, fresh crawls return fewer pages than the deep crawl. The combination of the two types of crawls allows Google to both make efficient use of its resources and keep its index reasonably current.
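A very rough way to picture fresh-crawl scheduling (my own simplification, not Google's actual policy) is to set the revisit interval in proportion to how often a page has been observed to change:

```python
def next_crawl_delay(observed_changes, observation_days, min_hours=1, max_hours=24 * 30):
    """Crude schedule: revisit at roughly the page's average change interval.
    A page that changed 30 times in 30 days is revisited about daily; one that
    never changed is pushed toward the monthly deep-crawl cadence."""
    if observed_changes == 0:
        return max_hours
    avg_change_interval_hours = (observation_days * 24) / observed_changes
    return max(min_hours, min(max_hours, avg_change_interval_hours))

print(next_crawl_delay(observed_changes=30, observation_days=30))  # 24.0 hours (about daily)
print(next_crawl_delay(observed_changes=0, observation_days=30))   # 720 hours (about monthly)
```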
2. Google’s Indexer
Googlebot gives the indexer the full text of the pages it finds. These pages are stored in Google’s index database. This index is sorted alphabetically by search term, with each index entry storing a list of documents in which the term appears and the location within the text where it occurs. This data structure allows rapid access to documents that contain user query terms.
To improve search performance, Google ignores (doesn’t index) common words called stop words (such as the, is, on, or, of, how, why, as well as certain single digits and single letters). Stop words are so common that they do little to narrow a search, and therefore they can safely be discarded. The indexer also ignores some punctuation and multiple spaces, as well as converting all letters to lowercase, to improve Google’s performance.
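For illustration, a toy inverted index along these lines (a sketch only, not Google's real data structure) could record, for each non-stop-word term, the documents and word positions where it appears:

```python
import re
from collections import defaultdict

STOP_WORDS = {"the", "is", "on", "or", "of", "how", "why", "a", "an"}  # illustrative subset

def build_index(documents):
    """documents: dict of doc_id -> text. Returns term -> list of (doc_id, position)."""
    index = defaultdict(list)
    for doc_id, text in documents.items():
        words = re.findall(r"[a-z0-9]+", text.lower())      # lowercase, drop punctuation
        for position, word in enumerate(words):
            if word not in STOP_WORDS and len(word) > 1:    # skip stop words and single characters
                index[word].append((doc_id, position))
    return dict(sorted(index.items()))                       # sorted alphabetically by term

docs = {1: "How Google works", 2: "Google indexes the full text of the web"}
index = build_index(docs)
print(index["google"])   # [(1, 1), (2, 0)]
```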
3. Google’s Query Processor
The query processor has several parts, including the user interface (search box), the “engine” that evaluates queries and matches them to relevant documents, and the results formatter.
PageRank is Google’s system for ranking web pages. A page with a higher PageRank is deemed more important and is more likely to be listed above a page with a lower PageRank.
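The core PageRank idea can be sketched with a toy power iteration on a tiny link graph (the damping value and convergence handling here are simplifications, not the production formula):

```python
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to. Returns page -> rank."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            targets = outgoing if outgoing else pages      # dangling page: spread its rank evenly
            share = damping * rank[page] / len(targets)
            for target in targets:
                new_rank[target] += share                  # each link passes on a share of rank
        rank = new_rank
    return rank

graph = {"home": ["about", "blog"], "about": ["home"], "blog": ["home", "about"]}
for page, score in sorted(pagerank(graph).items(), key=lambda kv: -kv[1]):
    print(f"{page}: {score:.3f}")                          # "home" ends up ranked highest
```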
Google considers over a hundred factors in computing a PageRank and determining which documents are most relevant to a query, including the popularity of the page, the position and size of the search terms within the page, and the proximity of the search terms to one another on the page. A patent application discusses other factors that Google considers when ranking a page. Visit SEOmoz.org’s report for an interpretation of the concepts and the practical applications contained in Google’s patent application.
Google also applies machine-learning techniques to improve its performance automatically by learning relationships and associations within the stored data. For example, the spelling-correcting system uses such techniques to figure out likely alternative spellings. Google closely guards the formulas it uses to calculate relevance; they’re tweaked to improve quality and performance, and to outwit the latest devious techniques used by spammers.
Indexing the full text of the web allows Google to go beyond simply matching single search terms. Google gives more priority to pages that have search terms near each other and in the same order as the query. Google can also match multi-word phrases and sentences. Since Google indexes HTML code in addition to the text on the page, users can restrict searches on the basis of where query words appear, e.g., in the title, in the URL, in the body, and in links to the page, options offered by Google’s Advanced Search Form and Using Search Operators (Advanced Operators).
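As a toy illustration of proximity and phrase matching (invented scoring, not Google's), given the word positions of two query terms in a document, one could award a bonus when they appear adjacent and in query order:

```python
def proximity_bonus(positions_by_term, window=3):
    """positions_by_term: per-document word positions of each query term, in query order.
    Returns a crude bonus: 2 if the terms are adjacent and in order (phrase match),
    1 if merely within `window` words of each other, 0 otherwise."""
    best = 0
    first, second = positions_by_term
    for p1 in first:
        for p2 in second:
            if p2 == p1 + 1:
                return 2                      # exact in-order adjacency
            if abs(p2 - p1) <= window:
                best = max(best, 1)           # near each other, any order
    return best

# Positions of two query terms in two hypothetical documents:
doc_a = [[4], [5]]      # terms adjacent and in order
doc_b = [[0], [9]]      # terms far apart
print(proximity_bonus(doc_a), proximity_bonus(doc_b))   # 2 0
```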
Let’s see how Google processes a query.
1. The web server sends the query to the index servers. The content inside the index servers is similar to the index in the back of a book: it tells which pages contain the words that match any particular query term.
2. The query travels to the doc servers, which actually retrieve the stored documents. Snippets are generated to describe each search result.
3. The search results are returned to the user in a fraction of a second.
Copyright © 2003 Google Inc. Used with permission.
For more information on how Google works, take a look at the following articles.
* Google’s page on Google’s Technology, www.google.com/technology/.
* How does Google collect and rank results?, www.google.com/newsletter/librarian/librarian_2005_12/article1.html.
* Google’s PageRank Algorithm and How it Works, www.iprcom.com/papers/pagerank/
* Google’s PageRank Explained and How to Make the Most of It, www.webworkshop.net/pagerank.html
tags (keywords): crawling, google, PageRank, queries, results, spider, stop words, technology, URLs
Labels: crawling, google, PageRank, queries, results, spider, stop words, technology, URLs
Tuesday, May 18, 2010
How to calculate conversions
Conversions (1-per-click) count a conversion for every AdWords ad click that results in a conversion within 30 days. This means that if more than one conversion happens following a single ad click, conversions after the first will not count.
Another way to say this is that conversions (1-per-click) will count at most one conversion per click. These metrics are useful for measuring conversions approximating unique customer acquisitions (e.g. leads).
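To make the counting rule concrete, here is a small sketch (with made-up data structures, not the AdWords reporting API) that credits at most one conversion per ad click within the 30-day window:

```python
from datetime import datetime, timedelta

def conversions_one_per_click(clicks, conversions, window_days=30):
    """clicks: dict of click_id -> click time. conversions: list of (click_id, time).
    Counts at most one conversion per click, and only within `window_days` of the click."""
    window = timedelta(days=window_days)
    credited_clicks = set()
    for click_id, conv_time in conversions:
        click_time = clicks.get(click_id)
        if click_time is None:
            continue                                   # conversion without a tracked click
        if click_id in credited_clicks:
            continue                                   # later conversions on this click don't count
        if timedelta(0) <= conv_time - click_time <= window:
            credited_clicks.add(click_id)
    return len(credited_clicks)

clicks = {"c1": datetime(2010, 5, 1)}
conversions = [("c1", datetime(2010, 5, 2)),   # counted
               ("c1", datetime(2010, 5, 3))]   # same click: not counted again
print(conversions_one_per_click(clicks, conversions))   # 1
```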
Monday, May 17, 2010
Sunday, May 16, 2010
Top Google criteria to rank a website
You must be wondering why your website is not listed in search results even after you have created a beautiful, colorful, and attractive website. Here are the reasons why Google may not be ranking your website.
Factors to consider:
Server location.
Domain age.
Domain history.
Domain keywords.
On-page keywords.
Number of times the keyword is used.
Percentage of keyword occurrences relative to total page words (see the sketch after this list).
Backlinks.
Keywords in those backlinks.
Keywords associated with your chosen keyword.
Number of pages.
Plus whatever is said above or below this post.
Inbound links.
Parseability.
Title tag and description.
Relevant on-page content.
Keyword on the page.
Keyword in the alt tag of an image.
Keyword in white text on a white page.
Keyword in the meta keywords area.
Keyword repeated hundreds of times in a list on the page.
Relevant outbound links.
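As a quick illustration of the keyword-percentage item above (a rough sketch; the tokenization is my own choice and no particular density is "correct"), keyword density can be computed like this:

```python
import re

def keyword_density(page_text, keyword):
    """Percentage of words on the page that match the keyword (case-insensitive)."""
    words = re.findall(r"[a-z0-9]+", page_text.lower())
    if not words:
        return 0.0
    matches = sum(1 for w in words if w == keyword.lower())
    return 100.0 * matches / len(words)

text = "SEO tips: good SEO content beats keyword stuffing"
print(round(keyword_density(text, "SEO"), 1))   # 25.0 (2 of 8 words)
```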
Friday, May 14, 2010
How safe are the top social-networking sites for teens? We take them for a test run.
Nowadays, teenagers use social networks often; this is a very good sign because it shows that many people are aware of the internet.
However, social networking sites have also become a hub for antisocial activities.
Please refer to the links below:
How safe are the top social-networking sites for teens? We take them for a test run.
http://online.wsj.com/public/article/SB115333833014811453-LjMFsXTCUjSigIarp2FhC0Y_TSs_20060822.html?mod=tff_main_tff_top
http://www.businessweek.com/the_thread/blogspotting/archives/2007/04/international_a.html
Labels: Facebook, hi5, MySpace, Orkut, social networking, social networking sites, Twitter
Thursday, May 13, 2010
Tuesday, May 11, 2010
Website Analysis & Internet Marketing tool
This is a good website for generating reports on your own website and on competitors' websites. I have been using it, and I also recommend it to new users.
Click the link to view the website
http://www.woorank.com/
Subdomains or subfolders: which is better for SEO?
Subdomains or Subfolders? What's better for a blog/forum/etc.: a subdomain (e.g., http://jobs.searchenginejournal.com/) or a subfolder (e.g., http://www.seomoz.org/blog/)?
Subdomains and subfolders both have their advantages, especially when setting up blogs.
For blogs, I prefer a subfolder (http://www.seomoz.org/blog/) because the link juice which is sent to that blog is going to be naturally distributed to that main domain, and other subfolders under the domain.
Furthermore, the forum/blog will, by default, point its logo, home page, and other links back to the subfolder. If you set this up with a subdomain, by default the links in the forum/blog itself will all point back to the subdomain. So, with a subfolder, both the inbound and internal linking structure favor the entire site.
With a subdomain, the forum or blog will be listed as a separate entity in the Google search results, which is good for owning the results and one’s reputation management. However, Google and other engines will generally not list more than two of these subdomains in the search results, unless those subdomains can prove to Google that they are independent and relevant entities.
I would like to reference Vanessa Fox, an ex-Googler and contributor to Search Engine Land:
Google is no longer treating subdomains (blog.widgets.com versus widgets.com) independently, instead attaching some association between them. The ranking algorithms have been tweaked so that pages from multiple subdomains have a much higher relevance bar to clear in order to be shown.
It’s not that the “two page limit” now means from any domain and its associated subdomains in total. It’s simply a bit harder than it used to be for multiple subdomains to rank in a set of 10 results. If multiple subdomains are highly relevant for a query, it’s still possible for all of them to rank well.
Home Depot is one site that has cleared the relevancy bar at Google with subdomains at HomeDepot.com that are actually marketed as individual sites. Take careers.homedepot.com and look into its backlinks: even if this subdomain were on a whole different domain, like HomeDepotJobs.com, it would probably rank just as highly.
So, in conclusion, if you’d like to build the equity of one web site or entity, I suggest using a subfolder. If you’d like to build an entire new entity with its own equity, launch a subdomain.
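As a small illustration of the structural difference (a sketch with example URLs, not an SEO tool), you can classify where a blog lives relative to its main domain:

```python
from urllib.parse import urlparse

def blog_setup(blog_url, main_domain):
    """Classify a blog URL as a 'subfolder' or 'subdomain' of main_domain (e.g. 'seomoz.org')."""
    parsed = urlparse(blog_url)
    host = parsed.netloc.lower()
    if host in (main_domain, "www." + main_domain):
        return "subfolder" if parsed.path.strip("/") else "root"
    if host.endswith("." + main_domain):
        return "subdomain"
    return "separate domain"

print(blog_setup("http://www.seomoz.org/blog/", "seomoz.org"))                         # subfolder
print(blog_setup("http://jobs.searchenginejournal.com/", "searchenginejournal.com"))   # subdomain
```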
Read more: http://www.searchenginejournal.com/subdomains-or-subfolders-which-are-better-for-seo/6849/#ixzz0nbhEtn1W
Labels: domains, subdomains, subfolders, website domains
Friday, May 7, 2010
Thursday, May 6, 2010
Free twitter follow me buttons
Here are some brand new Twitter "Follow me" buttons for users to put on their websites, blogs, emails, and more! Show everyone you're on Twitter with these creative buttons.
To use them, you will need to save them to your computer. Right-click on the image, select "Save As", and save it to a directory on your computer.
http://www.mytweetspace.com/followmebuttons.php