Getting the most out of your link building efforts only makes sense. What use are a bunch of links if they are not working properly?
Link building remains one of the most important parts of marketing and optimizing web sites, but as SEOs we can often get distracted by quantity rather than quality. Link building efforts for search engine optimization purposes should always rely on clean links that can be crawled by search engine bots. A “clean crawlable link” is one that is not blocked by any of the following: a robots noindex/nofollow meta tag, a JavaScript redirect, a robots.txt disallow rule, or a rel=”nofollow” attribute.
If yours is a relatively new site, and you rely heavily on link requests, article submissions or other high-labor, low-return ways of gaining lesser quality links, you need to be sure that each one is performing to its full potential. It’s very important to make sure the links you acquire work properly for both users and search engines.
A few things to check for include:
1. The robots meta tag. Look at the page source code. If there is no robots tag, that’s fine.
If there is a robots meta tag, it should simply look like this:
<meta name=”robots” content=”index, follow”>
If the robots meta tag looks like this:
<meta name=”robots” content=”noindex, nofollow”>
OR
<meta name=”robots” content=”index, nofollow”>
then you have a problem – the “nofollow” directive tells search engines not to follow any of the links on the page, so those links will be useless for SEO purposes.
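If you are checking more than a handful of pages, this test is easy to script. Here is a minimal sketch using only Python’s standard library; the class and function names are my own, and the page HTML is assumed to have been downloaded already:

```python
from html.parser import HTMLParser

class RobotsMetaChecker(HTMLParser):
    """Collects the content of any <meta name="robots"> tag on a page."""
    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name", "").lower() == "robots":
            self.directives.append((attrs.get("content") or "").lower())

def page_is_followable(html):
    """Return False if a robots meta tag contains 'nofollow'
    (or 'none', which is shorthand for 'noindex, nofollow')."""
    checker = RobotsMetaChecker()
    checker.feed(html)
    return not any("nofollow" in d or "none" in d for d in checker.directives)
```

A page with no robots meta tag at all passes the check, matching the rule above: no tag is fine, only an explicit nofollow is a problem.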
2. A JavaScript redirect of links from a desired page. Position your cursor over the link and look at the URL that appears in the status bar at the bottom of the browser. If it shows the correct link URL, it’s probably all right.
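Hovering works for spot checks, but the same idea can be roughly approximated in code: flag hrefs that use a javascript: scheme, or that bury the real destination inside a query parameter, which is how many tracking redirects work. This is only a heuristic sketch – the parameter names checked are common conventions I’ve assumed, not any standard:

```python
from urllib.parse import urlparse, parse_qs

def looks_like_clean_link(href):
    """Heuristic check: flag javascript: hrefs and redirect-style URLs
    that stuff the real destination into a query parameter."""
    if href.strip().lower().startswith("javascript:"):
        return False
    query = parse_qs(urlparse(href).query)
    # Parameter names commonly used by redirect scripts (an assumption)
    for param in ("url", "redirect", "goto", "dest"):
        if param in query:
            return False
    return True
```

A link that fails this check is not necessarily bad, but it deserves the manual hover-and-click inspection described above.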
3. Any robots.txt blocking. To check whether search engine spiders are allowed to crawl the page hosting your link, add “/robots.txt” to the end of the site’s domain and view the file.
For example: articleshares.com/robots.txt
Here you can see:
User-agent: *
Disallow: /administrator/
Disallow: /cache/
Disallow: /components/
Disallow: /editor/
Disallow: /help/
Disallow: /images/
Disallow: /includes/
Disallow: /language/
Disallow: /mambots/
Disallow: /media/
Disallow: /modules/
Disallow: /templates/
Disallow: /installation/
The “disallow” instruction simply tells search engine spiders not to crawl the designated directories. In the case of some article sharing sites, any articles located in one of these directories may have links back to your site, but those links provide no SEO benefit. Readers can still click on them and arrive at the indicated destination, so they are still good for traffic.
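Python’s standard library includes urllib.robotparser, which applies robots.txt rules the same way a well-behaved crawler would, so you can test a specific article path against rules like the ones above. A small sketch (the robots.txt text is passed in directly here, so you would fetch the file yourself first):

```python
from urllib.robotparser import RobotFileParser

def path_is_crawlable(robots_txt, path, user_agent="*"):
    """Check a path against robots.txt rules.

    robots_txt is the raw text of the file; no network fetch happens here.
    """
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return parser.can_fetch(user_agent, path)
```

Run your link’s page path through this with the host’s robots.txt and you immediately know whether spiders are allowed to reach it.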
4. Nofollow tags. You can check for these by viewing the page source, or by right-clicking the link in a browser such as Firefox (or, in older browsers like MSIE, via “Properties”) and inspecting its attributes. Look for the attribute:
rel=”nofollow”
If that’s there, then again, the link is not very valuable for SEO. If it’s not there at all, or has another value besides “nofollow”, then you are good to go.
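Inspecting links one at a time gets tedious on pages with many outbound links. The standard-library HTML parser can pull out just the links search engines will actually follow; this is a minimal sketch with illustrative names of my own:

```python
from html.parser import HTMLParser

class FollowedLinkCollector(HTMLParser):
    """Collects hrefs of <a> tags that do NOT carry rel="nofollow"."""
    def __init__(self):
        super().__init__()
        self.followed = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and "href" in attrs:
            # rel can hold several space-separated tokens, e.g. "nofollow noopener"
            rel = (attrs.get("rel") or "").lower().split()
            if "nofollow" not in rel:
                self.followed.append(attrs["href"])

def followed_links(html):
    collector = FollowedLinkCollector()
    collector.feed(html)
    return collector.followed
```

If your link shows up in the result, it carries no nofollow attribute and passes this check.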
Link building that is based on forms or submissions is often no-followed after the sharing site discovers SEOs are populating the site with their content. Link building that is the result of creating and promoting content worth linking to is of much higher value. This is why it is important to check link sources to determine their value for SEO benefit. Don’t throw out the baby with the bathwater… remember, links are for users too, not just search engines!