Archives for 2005

Do the engines discount links from the same domain versus links from a different domain?

Not now. Not ever. To be sure, just follow the money.

Big sites involve big dollars, and it is big dollars that make the world go ’round, so you can bet that the now-public Google will have joined the other publicly financed engines in a gentle but certain catering to the American greenback.

Consider: If my name is Bill and I build a 10,000 page website to support my software business, I expect to have a high PageRank at Google and I expect to be able to control the Link Reputation of my home page using my thousands of internal links. Now suppose some disgruntled open-source weenie links to me with the text "Windows is Evil" and gets ranked ahead of me for my own product name? I would have good reason to be upset.

So, you can bet that at the very next social event for young billionaires, Bill will corner Larry and get it fixed.

But seriously, if internal links are significantly discounted relative to external links, then small sites always gain an advantage over large sites. This is very bad. In general, big sites actually do deserve to rank better than small sites, external linking being more or less equal, which internal links will accomplish automatically.

If you rely instead solely on external links, what you will find is that the first to get a top rank will continue to keep top rank because it is top ranked pages that get most of the links. This would make it even harder for a large site to displace a small site that happens to get top ranking.

And finally, just go look at some search results with OptiLink. Big sites have a clear advantage. Do they have more external links than small sites? Only some of the time. It still appears that links from any source are sufficient, so they might as well be your own.

Is a MiniNet an effective strategy for ranking against a million competitors?

The simple answer is yes. If that’s all you need to know, stop now and get back to work! 😉 But if you want to know why…

I am often asked variations of this question and the answer is always yes. Huh? Much like The Hitchhiker’s Guide, it is the question itself that is wrong, leading always to the same unhelpful answer.

The capacity to rank depends on the total number of pages plus the strategy used to link those pages together. Megasite, MiniNet, Blog — doesn’t matter — pages are pages, and the way they are organized into domains simply does not matter. It is the linking that matters.

Once you understand how to do the linking to make best use of the pages you do have, then you can get to the "right" question — the one the Vogons (almost) destroyed to build an intergalactic bypass. 😉 Fortunately I caught it just in time.

For any ranking task, the real question is "how many pages do I need?"

Pages are the ultimate source of ranking power. Smart linking allows you to get the best use of that power. If you are not ranking where you want to, you must either use what you have more effectively (via linking) or increase the raw power you have available (via more pages). Most ranking solutions involve some of both.

So back to those MiniNets…
Michael Campbell’s network structures are some of the best at using ranking power, so they are indeed a good place to start for most purposes (of course, there are always exceptions), leaving us only the real question of "how many pages". That is what OptiLink is designed to help answer. By examining the quantity and quality of linking employed by top ranking pages, you can estimate what you will need to build to be top-ranked yourself.

Can I use Blogs alone to get good rankings?

Blog or not blog is not the real issue — it’s all about pages. Blogging software just happens to be a readily available content management tool that works fairly well from a linking perspective.

Content (pages) is what ultimately creates Google PageRank and provides places to create links to other pages. The more pages the better, and blogs happen to be pretty decent at creating pages from content. The Mastering PageRank video shows a diagram of why that is.

There are examples of folks making money online with nothing but blogs — just as there are examples of folks making money online completely without the use of blogs. Success is about pages plus linking.

Is too much nofollow a bad thing?

Some webmasters worry that pages with lots of incoming links and very few outgoing links, or lots of nofollow links, or some other pattern that looks like nofollow is being used to game PageRank, will be detected and penalized.

Certainly: Not yet. Probably: Not ever.

One of my clients ranks #4 out of 3.2 million results and has religiously expunged nearly all off-site links to get there. This was done with the (classical?) Javascript Dynamic Link rather than the newer nofollow link because the site in question predates nofollow. A nofollow implementation should work as well.

Moreover, blogs that allow commenting, and that have nofollow enabled on comments, will look like PageRank is being gamed, when in fact, it is completely automated. This will become more common rather than less so, leading me to conclude that filtering on the use of nofollow is a non-starter.

The Google Top Three

Google depends primarily on three characteristics to rank pages. They are:

  • Page Title
    Having the search term in the title of the page you want to rank is key to getting ranked for that term. If it is a multi-word term, don’t break up the term with additional words. For example, ranking for Miami Vacation is more easily done with a title like Best Miami Vacation Packages than with Miami and Orlando Vacations. It is only the title of the page you are trying to rank that matters — the titles of linking pages and other pages on your site are not considered.
  • Inbound Link text
    The link text that refers to a page is very important in ranking the page. The link text is the text that occurs between the <a> and </a> tags in HTML. This will generally be displayed as blue underlined clickable text in a user’s browser. The alt text in images does not seem to be used by the engines; only the link text counts.
  • PageRank
    This is a feature only at Google, at least until the year 2011, and is a major factor in ranking. It is also fairly involved to manipulate and is the slowest changing aspect of ranking.

By the way…
The other major engines, MSN and Yahoo, cannot use PageRank, but they do have other link "topology" based schemes. The simplest of these is "Link Popularity". We can be pretty sure that what MSN and Yahoo actually use is more advanced than this, but it does appear to be way easier to "game" than PageRank. That said, if you optimize for PageRank, you will often do what needs to be done to rank at the other engines as well.

The real meaning of Google’s Many Patents

Google has been keeping the patent office busy the last couple years, and reading all those patents has been keeping quite a few SEO gurus busy as well. But figuring out what all those patents are really about is actually pretty easy. In fact, you don’t even have to read them.

Google is acquiring and hoarding Intellectual Property (IP) as a means to create a barrier to entry. This is a marketing and business idea, not an engineering and technical one. By patenting everything under the sun, they tie up core algorithms so that would-be competitors are blocked from using them to build search services. Google is not the first to use this practice — Intel in particular is known for it.

Google also profits from the added advantage that it keeps SEOs confused and busy reading long-winded material that isn’t actually being used, and probably won’t be.

That said, some of these "disclosures" are worth a look and the ideas should at least be tested against the index to see if there is any sign of them being implemented. Don’t count on it. Measure for it. For any one of these patents it’s a far better bet that they are just sitting on it, and not actually using it.

How to write the nofollow attribute

A question came up from one of the owners of the Mastering PageRank video concerning the way I wrote the nofollow attribute. There is a general issue here that should be answered, so I’ll do so here.

The way a browser or a spider processes HTML will create an internal data structure that has all the information from the page, but the order of attributes will NOT be preserved, nor even recognized. So, as an example, writing an <a> tag with the href first and the rel="nofollow" second is no different than giving the attributes in the other order.

In fact, this is required by the HTML and XML specifications: attribute order is not significant.

The way I usually write nofollow links is to place it as the first attribute, like so: <a rel="nofollow" href=… because this allows me to easily find it in the source when checking my pages for errors. But that’s just me.
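To see that attribute order really doesn’t matter, here is a minimal sketch using Python’s standard html.parser module (the example URL is hypothetical): a parser produces the same set of attributes no matter which order they were written in.

```python
from html.parser import HTMLParser

# Collect the attributes of every <a> tag encountered while parsing.
class LinkParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            # attrs arrives as (name, value) pairs; order carries no
            # meaning, so a dict captures everything that matters.
            self.links.append(dict(attrs))

# The same link written with the attributes in both orders:
href_first = '<a href="http://example.com/" rel="nofollow">link</a>'
rel_first  = '<a rel="nofollow" href="http://example.com/">link</a>'

p1, p2 = LinkParser(), LinkParser()
p1.feed(href_first)
p2.feed(rel_first)

print(p1.links[0] == p2.links[0])  # True: identical either way
```

Either ordering yields the same internal data structure, which is exactly what a spider ends up working with.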

Relative vs. Absolute links

Rumored to have come from an advanced SEO seminar is this latest SEO myth stemming from a complete lack of understanding of the underlying technology. As the story is told, Absolute Links pass PageRank but Relative Links do not. Horse-hockey!

There are actually three varieties of links. An absolute link is one that includes complete domain name and path information, like http://www.windrosesoftware.com/index.html. A domain relative link is one lacking a domain name, but including absolute path information, such as /site/index.php. The final form is the path relative link, which lacks both the domain name and the leading ‘/’. For example, site/index.php.

A Search Engine Spider, just like your desktop browser, is simply an HTTP/HTML client program. It makes a request via HTTP of a web server and processes the HTML text that is returned as a result. This is the entirety of the interaction with the server. All that is left is to process the HTML locally.
To resolve the links in the document, the spider/browser has to take two steps.

First, the "base URL" for the document must be determined. By default, this will be the absolute URL of the document itself. However, the <base> tag can be used in a document to override the base URL used for path relative links found within the document. All browsers and spiders must look for this tag and modify the base URL for the document appropriately before doing any link processing.

Second, with the base URL in hand, each link must be "canonicalized". What that twenty dollar word means is "to put into standard form", which in the case of URLs is the same as saying "make all URLs absolute".

  • Absolute URLs obviously don’t change at all, they are already canonical;
  • Domain relative URLs get the domain added; and
  • Path relative URLs get the entire base URL added as a prefix.
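The three cases above can be sketched with Python’s urllib.parse.urljoin, which performs exactly this kind of canonicalization (the base URL below is hypothetical — normally the document’s own absolute URL, unless a base-tag override applies):

```python
from urllib.parse import urljoin

# Hypothetical base URL of the document being parsed.
base = "http://www.windrosesoftware.com/site/page.html"

# The three varieties of link, all pointing at the same resource:
absolute        = "http://www.windrosesoftware.com/site/index.php"
domain_relative = "/site/index.php"
path_relative   = "index.php"

for link in (absolute, domain_relative, path_relative):
    # Every variety canonicalizes to the same absolute URL.
    print(urljoin(base, link))
```

All three print http://www.windrosesoftware.com/site/index.php, so by the time the downstream algorithms see a link, its original form is indistinguishable.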

So why does it have to be this way? Because spiders deal in "pages", not "sites", there is no way to process non-canonicalized URLs. You can either process absolute URLs or carry around the base URL separately — a relative URL is not meaningful in isolation from the document where it is found. This is so fundamental to the task of parsing HTML that the only sensible place for the search engines to canonicalize URLs is in the software that does the spidering of pages. Once done, URLs of any variety will be identical.

Moreover, even absolute URLs have problems, owing to what I personally consider a bug in the HTTP specification, so even absolute URLs are not the basis of indexing within search engines. Google uses what the founders called a "docID" to uniquely identify the pages added to the Google index.

Somewhere early in the Google machine, all links are transformed from references via (absolute) URL to references involving the docID. For good technical reasons, the other engines will be similarly organized, so that the (original) form of a URL will cease to be known to the algorithms downstream of the spidering application.

Can linking to a non-related site hurt our PageRank?

All links divert PageRank based on the number of links on the page where the new link appears. The topic of the pages involved in the linking is of no importance to PageRank at all. PageRank only considers the linking structure. As far as PageRank is concerned, the pages could be blank.
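As a rough sketch (omitting the damping factor in the full PageRank formula), the rank a page passes through each link depends only on a count, never on topic:

```python
# Sketch of PageRank flow, assuming the classic model: a page divides
# the rank it passes evenly among ALL of its outbound links. Nothing
# about any target page's topic appears anywhere in the computation.
# (The damping factor of the full formula is omitted for simplicity.)
def rank_passed_per_link(page_rank: float, outbound_links: int) -> float:
    return page_rank / outbound_links

# Adding one more link to a page with 3 existing links dilutes what
# each link passes — but whether that link is "related" costs nothing:
print(rank_passed_per_link(6.0, 3))  # 2.0 per link with 3 links
print(rank_passed_per_link(6.0, 4))  # 1.5 per link with 4 links
```

In other words, an off-topic link costs exactly as much as an on-topic one: one share of the page’s outbound rank.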