Tuesday, November 13, 2012

Establishing Ownership of Your Content: The Rules Are Changing

I was sketching out two marketing plans over the holidays for a couple of new clients and decided it was time to incorporate some of the research data and results I've collected during the latter part of 2011. Generally I'd spend more time testing things on my own sites first, but I'm confident enough in the results of my basic testing that I've decided to put the ideas into live production.

There are two basic, interrelated concepts I've been working on: content length, and establishing ownership of new content in a way that minimizes the chance of your content being flagged as duplicate and improves your page authority and SERP positions.

The web is all about content; it's basically one large article directory. The task for a search engine is to provide an efficient indexing system so we can connect with the information we're looking for in the fewest possible steps.
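To make that idea concrete, here is a minimal Python sketch of an inverted index, the classic data structure behind search-engine lookups: each word maps to the set of documents containing it, so a query becomes a fast set intersection instead of a scan of every document. The three documents are hypothetical, and this illustrates the general technique, not any particular engine's implementation.

from collections import defaultdict

# Hypothetical document collection: id -> text.
docs = {
    1: "establishing ownership of your content",
    2: "duplicate content and article directories",
    3: "page rank and site authority",
}

# Build the inverted index: word -> set of document ids containing it.
index = defaultdict(set)
for doc_id, text in docs.items():
    for word in text.lower().split():
        index[word].add(doc_id)

def search(query):
    """Return the ids of documents containing every word in the query."""
    word_sets = [index[w] for w in query.lower().split()]
    return set.intersection(*word_sets) if word_sets else set()

print(search("duplicate content"))  # -> {2}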

In the old days, when we bought our "Encyclopedia Britannica", we'd flip to the front to find a broad index of content, then flip to the back to try to find a specific piece of content. It was, and still is, a pain trying to find something specific in a large hardcopy publication.

Obviously, search engines automate that task pretty well on the web by indexing billions of documents and serving up the most relevant ones in a few milliseconds.

However, Google has taken it all a few steps further. With the advent of its PageRank algorithm a few years back, Google demonstrated its capacity for collecting multiple sources of information and building actionable data profiles. Google has since added to its profile arsenal by recording the specific surfing habits of its users and the websites on which they land. By combining the personal data it records about us with the data taken from a website (via analytics, or simply from standard Google searches), Google can now match us with content deemed even more relevant to our needs.
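Since the paragraph above leans on PageRank, a quick sketch of the underlying idea may help: a page's score is fed by the scores of the pages that link to it, divided across their outbound links. The four-page link graph below is hypothetical, and this classroom-style power iteration illustrates the published algorithm in miniature, not Google's production system.

# Hypothetical link graph: page -> pages it links out to.
links = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}
pages = list(links)
rank = {p: 1.0 / len(pages) for p in pages}   # start with equal scores
damping = 0.85                                # standard damping factor

for _ in range(50):  # iterate until the scores settle
    new_rank = {}
    for p in pages:
        # Each page linking to p passes on a share of its own rank.
        inbound = sum(rank[q] / len(links[q]) for q in pages if p in links[q])
        new_rank[p] = (1 - damping) / len(pages) + damping * inbound
    rank = new_rank

print(sorted(rank.items(), key=lambda kv: -kv[1]))  # C should rank highest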

So Google has become a very intelligent content indexing system, delivering more and more "personalized" results based on our surfing habits, our demographic and the performance of the websites to which we are referred.

Duplicate Content

It is no secret to any webmaster that one of the main technology hurdles for Google is duplicate content. But why should Google care about duplication if it's large enough and fast enough to index pretty much everything on the web? Well, actually it isn't (large enough or fast enough). And therein lies the problem.

Google needs to know the source of published content. As the author of a piece of content, I should have precedence over everyone else who publishes it. Google needs to know who owns the content so it can give preference and prominence to the source, and not to someone who has merely replicated it for their own self-interest or gain. It's one of the most critical yardsticks Google has to judge us by. If it gets the source wrong, all its other measurements will produce a false or negatively weighted outcome. It can't reward quality content fairly if it doesn't know who authored it.
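One concrete signal a publisher can give here is the rel="canonical" link element, which Google has supported for some years as a pointer to the original version of a page. The following stdlib-only Python sketch reads that declaration out of a syndicated copy; the page markup and the example.com URL are made up for illustration.

from html.parser import HTMLParser

class CanonicalFinder(HTMLParser):
    """Pull the rel="canonical" href, if any, out of an HTML page."""
    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        attr_map = dict(attrs)
        if tag == "link" and attr_map.get("rel") == "canonical":
            self.canonical = attr_map.get("href")

# Hypothetical syndicated copy that points back to the original source.
page = """<html><head>
<link rel="canonical" href="http://example.com/original-article"/>
</head><body>Re-published copy of the article...</body></html>"""

finder = CanonicalFinder()
finder.feed(page)
print(finder.canonical)  # -> http://example.com/original-article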

Unsurprisingly, this isn't something we hear Google making a big deal about. Why? Because it doesn't have, and never will have, a perfect working solution. But it's clear from some of the algorithm and policy changes during 2011 that Google is working hard to improve its chances of determining the true source of content.

The first step in a series of new steps was for Google to make a basic assumption about Article Directories. Article Directories contain a lot of content and fared well under the old system of ranking. We all know by now that some of the key directories, EzineArticles for example, have taken a major hit under Google's new system of ranking. In a certain sense the hit has been more about sending a message than about cleaning the web of duplicate content. In a way, Google has behaved like a newly elected government: when you're trying to introduce a new way of thinking, it sometimes helps to make a few high-profile personnel changes. So Google has basically announced to the world that duplicate content is on its radar: learn the new rules or face the axe.

When you look closely at the results of Panda, it's fairly easy to work backwards and reverse-engineer the thought processes involved. Article Directories contain primarily duplicate content, but not entirely, so Google must have factored other information into its decision to devalue them. If you look at the whole scenario, it can give you valuable clues as to where things are headed. There are two clear problems with Article Directories and the type of content they provide a home for:

1 - Duplication. People create content, often for their own sites, then use multiple article directories to re-publish that same content, whether to gain backlinks, attract direct traffic or appeal to niche re-publishers of content (syndicators). Either way, the content is duplicated, and the Article Directories are the catalyst for making that happen.

When you look at everything else contained in an Article Directory (all the non-duplicate content), you see the second problem:

2 - Poor quality content.

When you search an article directory for something unique, what you'll often find is something that doesn't read too well. In many cases that is because it has been mechanically spun from previous content. So in terms of value to the searcher, it's even less useful than the original, which has already been tagged as a dupe.
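To see why spun copies complicate duplicate detection, it helps to look at a classic near-duplicate measure: w-shingling with Jaccard similarity, where two texts are compared by the word sequences they share. The snippet below is a simplified illustration of that general technique (the two sentences are invented), not a claim about how Google actually scores duplication.

def shingles(text, w=4):
    """Break text into overlapping w-word sequences (shingles)."""
    words = text.lower().split()
    return {tuple(words[i:i + w]) for i in range(len(words) - w + 1)}

def jaccard(a, b):
    """Share of shingles the two texts have in common."""
    return len(a & b) / len(a | b) if a | b else 0.0

original  = "the quick brown fox jumps over the lazy dog near the river bank"
spun_copy = "the quick brown fox jumps over the lazy dog by the river bank"

score = jaccard(shingles(original), shingles(spun_copy))
print(f"similarity: {score:.2f}")  # scores near 1.0 suggest a near-duplicate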

So clearly the Article Directories, and the way in which they operate, are not going to garner sympathy from Google, which has taken on the task of improving the quality of the web.

So where does this leave us with regard to content publishing? What are the rules, and how do we play the game?

Google can't announce the new rules yet, because it hasn't finished writing them. In a way, Google is just like an intelligent marketer trying to optimize his own business: it makes changes, tests the results, realigns its approach based on the gathered data, then tests again. To stay at the top of its game, this process has to be perpetual.

How does that affect you, or how will it? First off, you must not hide behind "well, it's worked for me for the last 5 years, so it must be OK", or stick your head in the sand and do nothing. iFrame cloaking, IP cloaking/switching, XRumer backlinking and the like all worked for a while, and have since been marginalized (or are far along that path) by the Google team. So you need to take a look at your approach to publishing content. Even if you don't use article directories or don't provide a mechanism for people to republish your content, the new rules are still going to affect you. The good news is that if you're smart, some good opportunities will start to appear.

There's a new system of ranking search results being worked out right now, one which combines Site Authority and PageRank with the newly collected data that Google has at its disposal.

So how exactly does it work?

I'll be going into detail on how you can structure your content to achieve what I term a "High Google Credit Score" in part 2 of this article, to be published soon. Or visit my website at http://www.webdesigndoorcounty.com/spn.html and request part 2 via email.



