Wednesday, April 6, 2011

White Hat vs Black Hat SEO


What are white hat and black hat SEO, and where do we draw the line between them? In short, white hat describes techniques that stay within the guidelines of the search engines, while black hat (also known as spamdexing) covers the unethical methods used to achieve top rankings (which are usually short-lived).


Matt’s description: “White hat SEOs adhere to the letter of the search engine guidelines (Google, Yahoo, MSN) and black hat SEOs will use any method they can to promote their site while trying to avoid getting banned. Gray hats are somewhere in between these two extremes.” – Matt Cutts


Well, Matt has actually answered both questions. There are gray areas between these two forms of SEO, so there isn't a definite border between them.


It is easy to describe what white hat SEO is, but I thought it would help some of you if I listed black hat activities. Not that I want you to adopt them, of course; rather, you should steer clear of them. Such activities could get your site banned from some search engines. I am not going to explain every trick in the black book, as there are too many.


    * Keyword Stuffing – this is basically what the name says: inserting or hiding keywords in a page to increase the density of those keywords. Search engines are wising up to this trick and can tell when keywords are being injected into a page.
    * Meta Tag Stuffing – repeating unrelated keywords in the meta tags.
    * Scraper Sites – also known as "made for AdSense" sites. They scrape results or information from search engines or news sites, for example. This automatically updated data provides fresh content for the site. These sites are always full of adverts, which is how this type of site got its name.
    * Hidden Links – hiding links where visitors can't see them, in order to gain ranking benefit from search engines or traffic from accidental clicks.
    * Mirror Sites – multiple sites that all contain the same content, in the hope that search engines will rank more than one of the URLs highly for the same keywords.
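To make the hidden-links trick above concrete, here is a hypothetical snippet (the URL and anchor text are made up) of the kind of markup spammers use. The CSS makes the link invisible to visitors while crawlers can still see it in the source:

```html
<!-- Hypothetical hidden-link spam: invisible to visitors,
     but present in the HTML that crawlers read.
     This is exactly the kind of trick that can get a site banned. -->
<div style="display: none;">
  <a href="http://example.com/spam-page">cheap widgets best widgets buy widgets</a>
</div>
```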


Some others that I have not explained are URL redirects, cloaking, blog spamming, spam blogs (splogs), referrer log spamming, doorway pages, link farms and many more…


WOW, there seem to be more black hat tactics than ethical SEO methods. My advice to you is this: stick to the search engines' guidelines, don't tarnish your name with shifty SEO tricks, and you will do just fine. What you should aim for is a site with quality content or something helpful that people will enjoy and recommend to friends. Keep it real!

Source link - Page Strength

Monday, April 4, 2011

Robots.txt Vs Robots Meta Tag

Robots.txt



A robots.txt file restricts access to your site by search engine robots that crawl the web. These bots are automated, and before they access pages of a site, they check to see if a robots.txt file exists that prevents them from accessing certain pages. (All respectable robots will respect the directives in a robots.txt file, although some may interpret them differently. However, a robots.txt is not enforceable, and some spammers and other troublemakers may ignore it. For this reason, we recommend password protecting confidential information.)
You need a robots.txt file only if your site includes content that you don't want search engines to index. If you want search engines to index everything in your site, you don't need a robots.txt file (not even an empty one).
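For example, a minimal robots.txt that blocks all compliant crawlers from one directory (the /private/ path is just a placeholder) looks like this:

```
User-agent: *
Disallow: /private/
```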


While Google won't crawl or index the content of pages blocked by robots.txt, we may still index the URLs if we find them on other pages on the web. As a result, the URL of the page and, potentially, other publicly available information such as anchor text in links to the site, or the title from the Open Directory Project (www.dmoz.org), can appear in Google search results.


In order to use a robots.txt file, you'll need to have access to the root of your domain (if you're not sure, check with your web host). If you don't have access to the root of a domain, you can restrict access using the robots meta tag.
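If you want to check how a given robots.txt will be interpreted, Python's standard library includes a parser for the format. A small sketch, using made-up rules and URLs:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt rules: block /private/ for all crawlers.
rules = """\
User-agent: *
Disallow: /private/
""".splitlines()

parser = RobotFileParser()
parser.parse(rules)

# A compliant crawler would skip the blocked directory...
print(parser.can_fetch("*", "https://example.com/private/page.html"))  # False
# ...but may still fetch everything else.
print(parser.can_fetch("*", "https://example.com/index.html"))  # True
```

Remember that, as noted above, this only tells you what a well-behaved robot will do; robots.txt is not enforceable against spammers.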


Robots Meta Tag


When we see the noindex meta tag on a page, Google will completely drop the page from our search results, even if other pages link to it. Other search engines, however, may interpret this directive differently. As a result, a link to the page can still appear in their search results.
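For reference, the noindex robots meta tag goes in the head section of the page you want dropped from results:

```html
<head>
  <!-- Tells compliant crawlers not to include this page in their index -->
  <meta name="robots" content="noindex">
</head>
```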


Note that because we have to crawl your page in order to see the noindex meta tag, there's a small chance that Googlebot won't see and respect the noindex meta tag. If your page is still appearing in results, it's probably because we haven't crawled your site since you added the tag. (Also, if you've used your robots.txt file to block this page, we won't be able to see the tag either.)


If the content is currently in our index, we will remove it after the next time we crawl it. To expedite removal, use the URL removal request tool in Google Webmaster Tools.


source link : Google webmaster central