Google and other search engines are notoriously secretive, and for good reason: if everyone fully understood the inner workings of their systems, those systems would be much easier to game. That’s why, whenever a ‘black hat’ SEO appears to have found a loophole, it tends to be shut down fast and hard in the next algorithm update.

Accordingly, link crawling has become more sophisticated in the decades since Google launched its search engine. This has led to a number of misconceptions around how links are crawled.


Misconception #1: All backlinks are created equal

Link building is a respected discipline, and one that is tough to master. It is therefore unsurprising that many have attempted to take shortcuts through tactics such as reciprocal links, paid links, and low-quality directory links.

Backlinking is believed to help websites establish online trust and rank higher in search engine results pages. But the wrong kind of backlinking won’t help. You need links from contextually relevant, high-authority sites that deliver clear value to readers (that’s where a PR-led SEO agency can help out!).

If you’re running a PR, SEO, or video company, a link from Definition will be worth more than a link from a company that sells vegan dog treats, for example, even if both websites have the same perceived level of authority – the logic being that we are marketing experts and would only link to sites and brands we respect.

Quick note: ‘nofollow’ links – those carrying the rel="nofollow" attribute, which tells search engines not to pass any link equity – are still valuable if they come from relevant sources. After all, if human beings can still click on them and reach your site, they’re definitely worth having. They also help maintain a natural link profile: if all of your site’s links are ‘followed’, it looks a bit suspicious, whereas a smattering of nofollow links is only natural.
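
For reference, here’s what a nofollow link looks like in HTML (the URL and anchor text are placeholders):

<a href="https://example.com/your-page/" rel="nofollow">descriptive anchor text</a>

Remove the rel="nofollow" attribute and the link is ‘followed’ by default.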


Misconception #2: Different versions of the same link will all be crawled

Imagine a scenario where you’re linking to the same page multiple times in a single blog post. You’d assume that Google will crawl all of them, right?

This is typically not the case. We’re not 100% certain on this, but experiments suggest Google will see that the links point to the same page and will only register the anchor text from the first one it comes across. It’s fine (as long as it’s not ridiculous) to have multiple links to the same page, but this rule means your first link is the most important – so make sure it’s anchor-text optimised.

Note: this applies to anchor text only – it does not mean Google doesn’t pass PageRank via each link.
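
To illustrate (hypothetical page and anchor text): in the snippet below, the ‘first link found’ behaviour suggests only the first anchor text would be registered for /seo-services/.

<!-- First link found: ‘PR-led SEO services’ is likely the anchor text Google registers -->
<p>We offer <a href="/seo-services/">PR-led SEO services</a> to B2B brands.</p>

<!-- Second link to the same page: this anchor text may be ignored, though PageRank can still pass -->
<p>Want to know more? <a href="/seo-services/">Read on</a>.</p>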


Misconception #3: Anchor text doesn’t matter

Following on from the last point, Google takes into consideration not just the optimised anchor text of a link, but also the text surrounding the link, especially if it is descriptive and includes semantically related phrases.

The anchor text of internal links on your site is just as important as the anchor text of links from third-party sites. Google hates it when the anchor text is non-descript, e.g. ‘Click here’. ‘Here’ tells it nothing about what the destination page is about.
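
A quick before-and-after in HTML (the URL is a placeholder):

<!-- Weak: the anchor text says nothing about the destination page -->
<a href="https://example.com/b2b-pr-guide/">Click here</a> to read our guide.

<!-- Better: descriptive anchor text, supported by relevant surrounding copy -->
Read our <a href="https://example.com/b2b-pr-guide/">guide to B2B PR strategy</a> for practical tips.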

Misconception #4: Search engines treat JavaScript links the same way as standard links

Google only crawls anchor tags with href attributes (i.e. <a href="…">). It won’t reliably follow links that rely on a JavaScript onclick handler, or elements like a <div> made clickable with JavaScript (a div has no href attribute for Google to follow). If these are being used for navigation, they could ruin your internal linking and the distribution of PageRank.
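
For example (hypothetical URLs and class names), the first link below is crawlable; the second and third likely are not:

<!-- Crawlable: a standard anchor tag with an href attribute -->
<a href="https://example.com/services/">Our services</a>

<!-- Risky: no href – navigation only happens via JavaScript -->
<span onclick="window.location='https://example.com/services/'">Our services</span>

<!-- Risky: a div styled to act as a link, again with no href for Google to follow -->
<div class="link" data-url="https://example.com/services/">Our services</div>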


Misconception #5: Multiple redirects don’t matter

It’s important to make sure you don’t have long redirect chains for the same URL. Google’s crawler will only follow a limited number of redirect hops – go much beyond five and it may simply give up. So focus on minimising your redirect chains as far as possible: every redirect risks shedding a little link equity anyway and slows down page load, so it makes sense as a best-practice exercise.
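
To illustrate with hypothetical URLs, a chain like this forces the crawler through three hops:

http://example.com/old-page
  → https://example.com/old-page
  → https://example.com/old-page/
  → https://example.com/new-page/

Updating the original redirect so http://example.com/old-page points straight at https://example.com/new-page/ collapses the chain to a single hop.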


Don’t run before you can crawl. Talk to Luke Budka, director and SEO chief at Definition, about your search marketing needs.