Find Easy Links with Chase the Footprint

Finding backlink opportunities by searching for common footprints is a fairly basic tactic when it comes to link building. It’s simply the process of searching for frequently occurring phrases on websites that offer you the opportunity to gain a link if you were to leave a comment, submit a guest post or add your site to their web directory.

One of the main ways I use footprints is to look for websites where I can have a client’s product reviewed, run a giveaway or find a potential opportunity for a link via a guest blog. By simply searching for phrases such as “Submit a Guest Post” in combination with your keywords you can find lots of sites in your vertical that offer guest blogging opportunities.

You can take your link prospecting further by using Boolean operators and wildcards in your footprint searches to return more advanced results. For example, a search for Apples AND Pears will return results where the words “apples” and “pears” both appear on the same web page, but not necessarily in the same phrase.
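To make that concrete, a few hypothetical footprint queries combining quoted phrases, operators and wildcards might look like this (swap in your own keywords):

    "submit a guest post" (baking OR "cake decorating")
    intitle:"write for us" gardening
    "add * to our directory" "home improvement"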

Another way to find guest post opportunities is to follow your competitors’ footprints. A lot of guest bloggers are quite lazy and will use the same author byline again and again, for example “John Smith writes on behalf of Big Boy Business”. This means that all you have to do is type that phrase into Google in quotation marks and you’ll find most of their guest blogs. Quite often these sites will have fairly low submission criteria.

Chase The Footprint is a new tool designed by Dan Bochichio to help link builders find opportunities by searching for common footprints. Simply input your keyword phrases into the box, then from the drop-down menu choose to search for wikis, sponsorship opportunities, forums, blogs or guest blogs.

If you combine Chase the Footprint with the SEOQuake toolbar you can export the results into a CSV file and then manipulate them in Excel or Google Docs. Dan has also provided a JavaScript bookmarklet you can use to export your search results into a spreadsheet.

The tool is very new and Dan is open to suggestions for improvements; to suggest a feature or report a bug you can contact him via his SEO website. I have already suggested that it would be useful to be able to search different instances of Google, such as the UK and Australia, to help link builders cast their nets far and wide. Happy link building!

Improving the Impact of Links in Old Content

It’s a well-known fact in link building circles that links in old content simply aren’t as good as links in new content.

Taking some inspiration from a recent Whiteboard Friday, I decided to test this theory. Cyrus Shepard of SEOMoz went through the theory that a link to your website from some old content does not pass as much “link juice” as a link from a new page; you can see the video below.

What do we mean by an “old page”? From a technical, Google point of view, we’re talking about a page that has previously been crawled and indexed by Google. It has stale content: content that hasn’t been updated in a long time. It was written and it just stayed that way; there are no new blog comments, and it has sat for two or three years exactly as it was written. And it has old links: all of the links the page has were earned months or years ago, and there are no new links coming in. That’s what we’re talking about when we talk about an old page. If a page doesn’t meet these criteria, then it’s a new page.

Tim Grice of SEOWizz did a study in March 2011 showing that links on old pages just weren’t worth the effort. Over a five-week period Tim monitored the changes in search rankings from three kinds of link: site-wide text links in sidebars or footers, links inserted into indexed static pages with a PageRank of 1 or more, and finally links inserted within entirely new content in fresh blog posts.

New Links in Old Content

Source: Links In Old, Crawled Content Don’t Pass Weight

As you can clearly see, the rankings for the “old content links” barely changed at all over the period, whereas the links within the new content rose quickly.

This got me thinking, especially about Cyrus’s statement that, in order for Google to consider a page new again, you would need to make a significant change to it or build some links to the old content. But exactly how big a change would you need to make to a page?

The Experiment

I decided to build links to some of my test sites using the same principles that Tim used: the old pages were already indexed by Google and had not had any new links built to them recently.

  • Link type 1 was my control: I only added the link into the existing text, with exact match anchor text.
  • Link type 2: I inserted the link and added one new paragraph to the content.
  • Link type 3: I inserted the link and added two new paragraphs.
  • Link type 4: I inserted the link and built 5 social bookmarks to the old page.

[Chart: ranking changes for link types 1–4 over the test period]

No surprise that the link-only and link-plus-one-paragraph pages saw very little change in rankings, but after seeing the strong performance of link types 3 and 4 so early in the experiment I decided to add a further test page where I edited two paragraphs of text and built 5 social bookmarks.

[Chart: ranking change for the additional test page with two edited paragraphs and 5 social bookmarks]

Not only did that page climb the rankings rapidly, it also stabilised at its new ranking better than the social-bookmarks-only pages did.

Conclusions

By no means was this a completely controlled, perfect scientific experiment: there was a new Panda update during the test period, and the content on the test pages wasn’t all exactly the same. But as you can clearly see, simply adding a link to a piece of old content and editing just a small amount of text has less value than adding the link, editing some text on the page and building a few links to the old page.

This flags to the spiders that the page is relevant again and prompts them to recrawl it, which in turn means the bots will follow the links within that text once more. In an ideal world it is preferable to build links within new content, which shows how important it is to continue with content-based link building methods such as blogger outreach and guest blogging. But if you are building links to old content, for example through broken link building, it’s worthwhile taking the time to add some more value to the old pages.

Are .Edu and .Gov Links Really Worth it?

Since the dawn of Google, every SEO forum has been debating the power of high-authority links such as .edu or .gov links. There are literally dozens of articles out there on the web, and every few months a new debate begins about how much these links actually affect your rankings.

It’s a common belief in some SEO circles that a link from a .edu or .gov site is the best type of link you can ever get to improve your rankings. This belief comes from the fact that it is not easy to obtain control of these domains on the open market, as you have to be an educational or government establishment.

So does Google actually give more or less weighting to certain top-level domains (TLDs)? Well, yes, they can and do; look at the case of .co.cc domains. But in the case of .gov or .edu links they claim not to, as Googler JohnMu stated:

In general, I would like to add that no, backlinks from .EDU domains generally do not get “additional credibility from Google.” Because of that, the whole topic of working especially hard to talk webmasters of these domains into linking to your sites seems a bit problematic…

Some in the world of SEO will scream conspiracy, claiming Google doesn’t want to let out the secret recipe, but let’s look at these types of links from the proper angle. Google may actually be telling the truth: their algorithm may not give these TLDs a considerably higher weighting than a .com or .co.uk, because it is all actually based on PageRank. Remember, that thing Larry Page invented means links from web pages that are themselves well linked carry more value than links from pages that aren’t.
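To make the PageRank point concrete, here is a minimal sketch of the idea on a toy three-page web (illustrative only, not Google’s actual algorithm; the domain names are made up):

    # Toy PageRank via power iteration: authority flows from well-linked pages,
    # regardless of what TLD a page sits on.
    links = {
        "news.example.edu": ["smallblog.example.com"],
        "bigcorp.example.com": ["smallblog.example.com", "news.example.edu"],
        "smallblog.example.com": ["news.example.edu"],
    }
    damping = 0.85
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}

    for _ in range(50):
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            share = damping * rank[page] / len(outlinks)
            for target in outlinks:
                new_rank[target] += share
        rank = new_rank

    print(rank)  # the pages with strong inbound links score highest, whatever the TLD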

Below is a video from 2010 by Google’s Head of Web Spam, Matt Cutts, that also confirms this:

These types of web pages are often well linked to and many have been around for years. So what I am saying is that despite their “perceived” authority due to their offline status as an institute of learning, it is actually the quality of the pages linking into these sites (blue-chip corporations, major hospitals and large news resources such as the Guardian and the BBC) that makes them an authority online, not the TLD.

How to get .Edu or .Gov Links?

With the right amount of time, hustle or money you can get a backlink on just about any site you want. Building relationships, investing in the right tools and good content will allow you to get these links easily.

From past experience I have used broken link building on .gov or .edu sites, whereby you look for broken links on a resource page and suggest the webmaster replaces each one with a link to your own content.

Outreach also works quite well for gaining .edu links, as most universities now provide their students and faculty with blogging platforms and subdomains, so it is quite easy to email them content relevant to their blog or studies. If you really wanted to invest a significant amount of time and resources, you could find a piece of research they produced and reference it in a piece of your own content, e.g. an infographic; it is highly likely that you will obtain a link back, as they will naturally want to share it with their peers.

Invite an academic to write an article on your blog, or even to come and speak to your workforce or at an industry conference you are running; chances are they will link to you, as they will wish to reference their engagements. As you can see, you are only limited by your own imagination as to how you can obtain these types of links, but by offering useful resources for government or academic webmasters to link to, you will have a much higher success rate.

So What is their Value?

If you were to ask me what I look for in a link, I value backlinks by the number of visitors, nay, the number of pre-qualified visitors, that the link can send me. What I mean by that is if I could get a few hundred visitors to my site from a backlink, visitors who are motivated to buy my product or subscribe to my mailing list, I would spend more time and money obtaining those links than chasing links from universities and government sites.

So do .edu or .gov links help get you better rankings? Yes, but they are no better than links from any other well-linked domain, and remember, after all, SEO is not just about rankings!

How to use Scrapebox for Link Building not Spamming

Scrapebox is well known in the SEO community as a Grey Hat, Black Hat, Yellow Polka Dotted Hat link building tool that is mainly used by blog comment spammers. If you have ever spent any time reading blogs you will have seen the stereotypical spam comments. They usually say things such as “Great Blog Post thanks for sharing” with a keyword-rich anchor text link to a site selling fake Ugg boots.

I know a lot of my regular readers will have a heart attack at the recommendation of using Scrapebox as a “White Hat” link building tool. A lot of people in the SEO community hate the thought of automated link building, and the mere mention of a tool such as Scrapebox makes their skin crawl. I can already imagine several people ready to jump down to the comments and tell me that tools like this are ruining the internet…

Well “Soapbox White Hatters” I’m going to show you a way that you can actually use Scrapebox to make the internet a better place… in fact a safer place for all!

So what is this Scrapebox Link building technique?

This link building technique utilises some of the free plugins that you can get for Scrapebox. The main tactic is to find a compromised or malware-infected site and open a dialogue with the site owner, in an attempt to gain a link either via a guest post or by suggesting the site owner replaces broken links with links of your own.

Scrapebox currently costs $97 (there are a few coupons on the net for $57 if you search around), and for the amount of time and money this tool will save you it is more than worth the investment. Scrapebox allows you to harvest thousands of URLs from Google and Bing in no time at all, and by entering your own custom footprints, e.g. “submit * guest post” [keyword], you will quickly find lots of guest blogging opportunities for your niche. You can also import .txt files with lots of different search terms to put your harvesting on steroids.
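If you want to build that .txt file of search terms programmatically, a minimal Python sketch like this one will generate every footprint and keyword combination (the footprints and keywords shown are just examples):

    from itertools import product

    # Example footprints and keywords; substitute your own niche terms.
    footprints = ['"submit * guest post"', '"write for us"', 'intitle:"guest post guidelines"']
    keywords = ["seo", "link building", "content marketing"]

    # Write one combined query per line, ready to import into Scrapebox.
    with open("footprint_queries.txt", "w") as f:
        for footprint, keyword in product(footprints, keywords):
            f.write(f'{footprint} "{keyword}"\n')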

The first free plugin you will need is the Malware and Phishing Filter. Once you have installed this plugin it allows you to check a list of sites from Scrapebox and find those that have been compromised by some form of malware. If you have Google Webmaster Tools set up on your websites then Google will normally inform you when a site has been infected by malware; sadly, many bloggers and small business owners rarely check their sites for malware, and not everyone knows how to set up Google Webmaster Tools.

Import your list of scraped URLs into the malware checker and run it. This will flag up any site that has been compromised by some form of malware. You now want to export all of these bad URLs and, using Open Site Explorer, check the PA/DA of the pages. I start with the sites with the highest authority and then work down my list.
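If you would rather script the malware check yourself, Google’s Safe Browsing Lookup API offers a similar lookup. Below is a minimal Python sketch; it assumes you have registered for an API key, and you should check the current documentation for batch-size limits and supported threat types.

    import requests

    API_KEY = "your-safe-browsing-api-key"  # assumption: you have registered for one
    ENDPOINT = f"https://safebrowsing.googleapis.com/v4/threatMatches:find?key={API_KEY}"

    def flagged_urls(urls):
        """Return the subset of urls that Google's Safe Browsing list flags."""
        payload = {
            "client": {"clientId": "linkbuilding-research", "clientVersion": "1.0"},
            "threatInfo": {
                "threatTypes": ["MALWARE", "SOCIAL_ENGINEERING"],
                "platformTypes": ["ANY_PLATFORM"],
                "threatEntryTypes": ["URL"],
                "threatEntries": [{"url": u} for u in urls],
            },
        }
        response = requests.post(ENDPOINT, json=payload, timeout=30)
        response.raise_for_status()
        matches = response.json().get("matches", [])
        return {m["threat"]["url"] for m in matches}

    with open("harvested_urls.txt") as f:  # assumed export file name
        urls = [line.strip() for line in f if line.strip()]
    print(flagged_urls(urls))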

You can run the list through the Scrapebox Whois tool, or use Scrapebox itself to check the contact page for any email addresses. You do not want to visit these sites yourself, as there is a risk that your computer may be infected by a virus.
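The WHOIS lookup can also be scripted; here is a minimal sketch using the python-whois package (assumptions: the package is installed, the domain is hypothetical, and many registrant records won’t actually expose an email):

    import whois  # pip install python-whois

    # Pull whatever contact emails the public whois record exposes.
    for domain in ["infected-site-example.com"]:  # hypothetical domain
        try:
            record = whois.whois(domain)
            print(domain, record.emails)
        except Exception as exc:  # lookups fail often; don't let one stop the run
            print(domain, "lookup failed:", exc)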

Now you want to send an email to the webmaster informing them of the malware issue on their site and send them a link to some helpful blog posts on how to fix malware infected sites. (If you haven’t checked out John Doherty’s blog post on SEOMoz about outreach email then make sure you do!)

You obviously do not want to ask for a link at this point. Depending on the quality of the site it might be worth using your hustle to track down alternative contact details too such as phone number, Twitter Handle, LinkedIn profile etc.

I have had a very good success rate in contacting webmasters using this technique, and quite often I find that they are very grateful to you for pointing out the problem on their website. Now that you have a dialogue with the site owner, I will leave it to your imagination as to what approach you use next to obtain the link. But this is a good time to check the site for broken links or pitch a guest blog, as the webmaster will probably have to recover the content on the site. I have even had a few webmasters offer me the chance to buy their sites for a small fee, as they no longer have the time or inclination to fix their site and keep it up to date!

So there you have it: one way in which you can use a notorious spam tool to speed up your link building research and to help make the web a safer place.

More ScrapeBox Goodies

I’m going to cover a couple of other tasks I like to use ScrapeBox for in my day-to-day role, and hopefully I can show you some great ways to save time and speed up those monotonous processes. There are a lot of extra applications for ScrapeBox, and I am going to leave it to your own judgement as to the “ethical” use of this tool.

How to find Blogs to Guest Post on using ScrapeBox

I know guest blogging has been getting a lot of stick recently, and quite rightly so. Some link builders have been really abusing this great tactic over the past few months; but I am sure you won’t be doing that, will you?

In this guide I am going to run through some screenshots so you can see how easy it is to use. I hope you are pretty up to speed with your advanced Google search operators, because you’re going to be dusting them off once you get your hands on ScrapeBox.

In this guide I am suggesting ways to find guest post prospects but you could just as easily use this method to find blogs to place infographics, videos or whatever other outreach projects you are working on.

  1. Add in your main keyword; in this case I am looking for SEO blogs to write for.
  2. Select “Custom Footprint” and add in your different footprints.
  3. Select the search engine(s) you want to scrape, e.g. Bing, Google, Yahoo.
  4. Click the “Start Harvesting” button and go and grab a fresh cup of coffee.
  5. Once you have finished harvesting your URLs, remove the duplicates (you can also use ScrapeBox to look up the PageRank of each domain).
  6. Export the list into a CSV file; a scripted version of steps 5 and 6 is sketched after this list.
  7. Do some basic prospecting to qualify your targets, e.g. web design, PageRank or mozRank, RSS subscribers, social media presence.
I am not going to cover best practices for pitching your guest posts in this article, but if you want some good pointers on outreach I suggest you read my blogger outreach interview.

How to use ScrapeBox to Check for Broken Links

Another ScrapeBox tool you will find quite handy is the bulk URL checker, especially if you have a big list of URLs to check on a regular basis.

Oh, and did I mention you don’t even need to buy a license for this tool? It’s completely free!

All you have to do is open the tool and import a text file containing the list of pages you want to check. Then upload a second text file containing your own URLs. There are two options here: you can either check that a link to a specific URL is live, or just check that a link to the domain is live. It only takes a few minutes to check your list, and then you can export the failed links and check with the webmaster to see why a link may have been removed.
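If you would rather script this check than use ScrapeBox, here is a minimal Python sketch using requests and BeautifulSoup (the file name and target domain are placeholders):

    import requests
    from bs4 import BeautifulSoup

    TARGET_DOMAIN = "yoursite.example.com"  # the domain whose links you're verifying

    with open("pages_to_check.txt") as f:
        pages = [line.strip() for line in f if line.strip()]

    for page in pages:
        try:
            html = requests.get(page, timeout=15).text
        except requests.RequestException as exc:
            print(f"FAILED TO FETCH {page}: {exc}")
            continue
        # Collect every anchor href on the page and look for our domain.
        hrefs = [a.get("href", "") for a in BeautifulSoup(html, "html.parser").find_all("a")]
        status = "live" if any(TARGET_DOMAIN in h for h in hrefs) else "MISSING"
        print(f"{status}: link to {TARGET_DOMAIN} on {page}")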

How to use ScrapeBox to Scrape Google Image Search

Do you suffer from your images being stolen by webmasters without attribution? Well, you can use Scrapebox to search Google for your image and return the URLs. The best way to do this is to make sure your image file names contain a set of random letters and numbers that will make them easy to find, e.g. dog-photo-xc345.jpg.

You can then do a quick Google image search with ScrapeBox for “dog-photo-xc345.jpg” and the URLs will be revealed. I would then load this list into ScrapeBox and use the WHOIS lookup tool to find the contact information for the domain owner, then reach out asking that they provide a link to your site for fair use of your image.
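Generating those unique, searchable file names is easy to script; here is a minimal sketch that renames every JPEG in a folder with a short random token before you upload them (the folder name is an assumption):

    import uuid
    from pathlib import Path

    # Append a short random token to each image so it is easy to find later
    # with an exact-match search, e.g. dog-photo-4f3a9c.jpg.
    for image in Path("images").glob("*.jpg"):
        token = uuid.uuid4().hex[:6]
        image.rename(image.with_name(f"{image.stem}-{token}{image.suffix}"))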

“Haters gonna hate… Scrapers gonna Scrape…”

As I mentioned at the start of this post there are lots of other great uses for ScrapeBox such as this article on ScrapeBox Keyword Research and also some additional ScrapeBox Tricks and Tips by Dan Bochichio.

If you have any more Scrapebox tips and tricks drop them in the comments below.

Use Content Curation to Drive More Traffic

For the last couple of weeks I have been using a content curation service called Scoop.it to generate lots of referral traffic to my blog posts and I’ve decided to share what I have learned so far and also show how content curation can help you to grow your social network too.

What is Content Curation?

If you have a good understanding of the new social web landscape then there is no doubt that the phrase “content is king” comes up again and again. Content is the currency of the internet, and by sharing your own great content and other people’s great content you will grow your social network and be held in high regard by your followers as the go-to source or expert in your field.

By following all the latest news sources in your niche you can soon find yourself overrun with numerous RSS feeds, tweets, Google+ updates and Facebook shares flashing before your eyes every day. This is where content curation comes in; quite simply, content curation is the process of filtering out the best content that you find and sharing it with your networks.

What is Scoop.it?

Scoop.it is a content curation service, but rather than have me rattle on about it you can watch this brief video below.

Scoop.it has the latest news delivered to you and allows you to re-share it with your social network. Another great aspect of Scoop.it is that other people can suggest content to be added to your pages too. Scoop.it has a free entry-level profile which allows you to set up 5 pages and get used to the interface; if you want to curate information in more niches or see analytics data then you need to upgrade to a paid membership.

How to Use Scoop.it

The key to being a great content curator is picking a niche in which to share your information and sticking to it vigilantly. The narrower the niche you decide to curate content in, the better. If you set up a Scoop.it page about knitting patterns, the majority of your regular followers are unlikely to be interested in your curated pictures of kittens.

First things first, go to Scoop.it and sign up with either your Twitter or Facebook Profile.

You will then see a screen similar to this one where you fill in the name, description and keywords for your new page.

Pro Tip: Use Google’s Keyword Tool to find keywords that people looking for your content may use. If you want to learn more you can read my blog post on using the Google Keyword Tool.

Now that your page is created you want to set up your news sources. Simply enter your keywords into the search box; these keywords will be checked regularly against Google, Digg and YouTube for the latest content in your niche. Next, click on Advanced Options. This is where knowing your niche comes into its own:

As you can see from the above image, you can add various personalised news sources such as RSS feeds, Twitter accounts and lists, Google News Search, Google Blog Search and OPML files from Google Reader. You want to add all the best curators and thought leaders in your niche to this list, and the best blog feeds too.

Pro Tip: Use a blog curator such as AllTop to grab your feeds, and set up a Twitter list of interesting people that you can add to easily, so you don’t have to keep adding them to your Scoop.it sources one by one.

Now you are ready to start your new career as a content curator. After about an hour Scoop.it will have scraped your RSS feeds and Twitter sources and searched Google for new content based on your keywords. Simply clicking “Scoop.it” will scoop the news to your page; from here you can share it with your Twitter and Facebook accounts, add tags to make the scoop easier to find, and change the text or images. If you don’t like a suggested article, simply click discard and the page is removed.

Pro Tip: Install the Scoop.it app (it’s free) and add the Scoop.it bookmarklet to make it easier to scoop content on the fly.

If you see a piece of content on another Scooper’s page you can “rescoop” it by clicking the arrows that look like a refresh button. It is also common etiquette to thank (thumbs up) your fellow content curator when you rescoop their find.

Generating Traffic to your Site with Scoop.it

There are two ways to get traffic from Scoop.it. The first is obviously to add your own blog posts and photos to your page. This will have limited results, just like running any website, until you grow your following. Building a following takes time and may require weeks of curating and sharing great content, following other people on Scoop.it and commenting on other people’s scoops. If you are anything like me this looks a lot like hard work, but by doing it I have noticed I am sharing lots more content with my Twitter followers and growing my follower count.

So the second and quicker way, and I’m sure all the link builders have spotted this already, is to suggest your content to other users.

There are people on Scoop.it who already receive hundreds and hundreds of views per day to their pages; in one of the examples above they have had over 130k views in just a few months. So by suggesting your own content for their page you have a chance that your post will be accepted, and a good percentage of their fellow Scoopers will come flooding to your site and re-scoop your page to their followers and other social networks too.

To start you need to find who the influencers are in your niche. This is easy to do by searching for your keyword in the search bar at the top or by browsing the topics based on popularity and current trends.

You can then quickly research the curator as their profiles often contain links to their other social profiles e.g. Twitter or Facebook.

From this example you can clearly see links to this Scoop.it user’s LinkedIn, Twitter and Facebook accounts.

As any experienced link builder will tell you, it’s important to build a relationship with a social media influencer first rather than bombarding them with requests out of the blue. By engaging with them on other social networks and, where possible, finding their website and contact information, you can then begin to approach them with suggestions for their Scoop.it pages.

My favourite tool at the moment for researching potential link targets is Follower Wonk. It is a great way to learn more about who your influencer influences. It will also help you discover whether they have any “thought leaders” within their network, so you can gauge whether or not your suggested content will go viral if it is shared by them too.

My last blog post, on automating Google+ with your other social media accounts, was curated on a very popular Google+ Scoop.it page. Over the next 48 hours I received about two hundred visitors from this page and from two other pages that re-scooped my blog post. I also received a 10% increase over normal in traffic from Twitter and Facebook during this period.

But, isn’t Content Curation Bad for my SEO?

In a post-Panda world I can understand why people might worry about “duplicate content”, but the thing about content curation services such as Scoop.it is that you never republish the whole web page. The original page is also linked back to from Scoop.it, confirming the content’s origins, and although most of the links on Scoop.it are no-followed to prevent spamming, a link is still a link.

Many businesses forget that SEO is not just about links or chasing the number one spot in the SERPs, but about growing and diversifying the traffic to your website. By having diverse traffic sources you will be able to continue growing your business online for a long time to come, no matter what happens with the next big “algo” change.

If you have had any positive or negative experiences with curation services such as Scoop.it please leave a comment below.