Monday, June 22, 2009

Note for the future: Jordan Hall Bullet Cash Method

Jordan Hall, "Bullet Cash Method", www.bulletcashmethod.com. An associate of Mike Filsaime.

A 24-minute video of boasting, claims about what his method is NOT, scarcity tactics ("this video is not here for long!", "this video is only for my friends!"), and no slider or fast-forward button. Also, no substance -- a "cat in the bag": you are just supposed to believe that this guy is genuine after tons of others who were not.

On the other hand, I guess that's a sales funnel in action. If somebody has spent 24 minutes listening, he is more likely to take the next step. Something to think about.



Tuesday, August 09, 2005

Google AdWords costs: US vs. British Commonwealth

I expected traffic targeted to the US plus the Commonwealth to be cheaper than traffic targeted to just the US and Canada. That seems to be more or less the case. I tried several keywords in two campaigns: one targeted to the US and Canada, and another to the US plus several British Commonwealth countries, including Australia and New Zealand.

The cost-per-click estimates showed that some of the words are expected to cost the same ("career advancement" $0.11, "job promotion" $0.18), while some turned out to be more expensive when targeting the US and Canada only ("career development" $0.19 US vs. $0.16 CW, "pay raise" $0.15 vs. $0.14).

I am still puzzled by Google's insistence on "relevance". They never explain how it correlates with the price you pay. Some Internet experts state that more relevant keywords are cheaper, but so far I have not been able to confirm that. I'll talk about that later.

Thursday, July 21, 2005

How to submit an HTML form by email

Stupid, stupid, stupid, stupid Blogger.com... It just killed my post because it contained a PHP script...

Anyway... How do you submit an HTML form by email without exposing your email address to spammers? For reference, your email address should not appear anywhere in the HTML code, or it can be harvested. I tried several ready-to-use scripts, and all -- all! -- of them expose your email address. Ridiculous, isn't it?

Long story short, here is a PHP script you put on your site:

<?php
$recipient = "your@email.com";
$subject = "WriterProSite Submission";

// Special fields understood by this script:
// 'redirect' -- where to send the browser after submission
// 'subject'  -- subject line override

if ( isset($_POST['subject']) ) {
    $subject = "[" . $subject . "] " . $_POST['subject'];
}

// Build the message body from every submitted field
$message = "";
foreach ($_POST as $key => $val) {
    $message .= $key . ":\r\n" . $val . "\r\n\r\n";
}
$message .= "Submitted on " . date("D M j, Y G:i:s T") . "\r\n";
$message .= "Submitted from the IP address " . $_SERVER['REMOTE_ADDR'] . "\r\n";
$message .= "User agent " . $_SERVER['HTTP_USER_AGENT'] . "\r\n";
if ( isset($_SERVER['HTTP_REFERER']) ) {
    $message .= "Submitted from the page " . $_SERVER['HTTP_REFERER'] . "\r\n";
}

mail($recipient, $subject, $message,
    "From: visitor@{$_SERVER['SERVER_NAME']}\r\n" .
    "Return-Path: visitor@{$_SERVER['SERVER_NAME']}\r\n" .
    "X-Mailer: PHP/" . phpversion());

// Redirect the browser: to the 'redirect' field if given,
// otherwise back to the page the form was submitted from
$target = "/";
if ( isset($_POST['redirect']) ) {
    $target = $_POST['redirect'];
} elseif ( isset($_SERVER['HTTP_REFERER']) ) {
    $target = $_SERVER['HTTP_REFERER'];
}
header("Location: " . $target);
?>

And then you simply put an HTML form wherever you want. A form like this:

<form method="post" action="send.php">
Name: <input type="text" name="name" size="40"><br>
Email: <input type="text" name="email" size="40"><br>
Subject: <input type="text" name="subject" size="70"><br>
Comments: <textarea cols="60" rows="5" name="comments"></textarea>

<input type="submit" value="Submit"> <input type="reset">
</form>

Sunday, June 26, 2005

The Future of SEO

Search Engine Optimization is the Holy Theory of modern webmasters, and why shouldn’t it be? It promises what we all strive to get – traffic. But there could be a dark cloud or two on the horizon for both SEO and search engines as they exist today. Let’s see.

Meta-Tags

First came the Meta-Tags. Back in the mid-nineties, every decent webmaster knew that it was not enough to put your webpage on the Internet; you should also include meta-tags, so that search engines like AltaVista could index your pages properly and show them to your prospective customers. There was even a theory, quite a substantiated one, that search engines might ignore your site altogether unless you included good meta-tags with a description of the content and proper keywords.

“You want your keywords to be relevant,” the Internet Wise Men were teaching all across the Net, “you want highly targeted traffic, you don’t want your visitors to feel cheated.” Well, as it happened, many webmasters did not care that much about targeted traffic. Really, what would you prefer: 10 clicks per day for “the best car wash in the world”, or 10,000 clicks per day for “really hot girls”? Add to the equation that you sold impressions of other people’s ads on your site – a common practice in the mid-90s – and the problem does not present the slightest challenge.

Content keywords

Yahoo and the other search engines of the time answered with the “keyword spamming” concept. Search engines started to look into the content and check that the proposed keywords were actually present, while still often relying on meta-tag keywords for search queries. You see, there were two major theories at the time: the theory of bad webmasters screwing up the search engines, and the not-so-popular theory that the search engines were idiots to rely on information provided by webmasters specifically for them. Clearly the first theory won at the time. You may say that the Meta-Tag Keywords Era was replaced by the Content Keywords Era. But they were still keywords provided by webmasters, so not surprisingly it did not improve the search results much. After all, if you think about it, there are kinds of car washes that may make the keywords “really hot girls” relevant. The engines changed, the webmasters adapted, the engines stayed screwed. At least until Google came along.

Incoming links

Google’s revolutionary idea was to leave the content alone and look at the network topology instead. If you consider the Internet’s sites and links as a huge graph, you can see that there are a lot of sites that are almost unconnected to the rest of the Internet, some sites that have a lot of outgoing links, and some that have a lot of incoming links pointing to them. “You should be good if many webmasters chose to link to you,” the guys at Google decided. And they were right… at the time. While the Internet’s topology was not yet distorted by the profit to be made from incoming links, it really was a pretty good metric. At least, that’s how Google beat out the rest of the search engines and came out #1 as we know it today. However, a strange thing happened: the theory of reciprocal links was born.
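
To get a feel for the idea, here is a toy sketch – not Google’s production system, of course, just the textbook power-iteration form of PageRank over a made-up four-site graph (all the domain names are invented for the example): every page repeatedly passes a share of its score along its outgoing links, and pages that collect links from well-linked pages end up with the highest scores.

<?php
// Toy PageRank: each page starts with an equal score, then repeatedly
// passes a share of it along its outgoing links.

// Hypothetical mini-web: page => list of pages it links to
$links = array(
    'a.example' => array('b.example', 'c.example'),
    'b.example' => array('c.example'),
    'c.example' => array('a.example'),
    'd.example' => array('c.example'),
);

$pages = array_keys($links);
$n     = count($pages);
$damp  = 0.85;                                // damping factor from the original PageRank paper
$rank  = array_fill_keys($pages, 1.0 / $n);

for ($i = 0; $i < 50; $i++) {
    $next = array_fill_keys($pages, (1.0 - $damp) / $n);
    foreach ($links as $page => $outgoing) {
        $share = $rank[$page] / count($outgoing);
        foreach ($outgoing as $target) {
            $next[$target] += $damp * $share; // every incoming link passes on a share of rank
        }
    }
    $rank = $next;
}

arsort($rank);                                // highest-scored page first
foreach ($rank as $page => $score) {
    printf("%-12s %.3f\n", $page, $score);
}
?>

Run it and c.example, the page everybody links to, comes out on top – which is exactly the effect webmasters then set out to exploit.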

Webmasters realized that incoming links were now even more important than meta-tags. In fact, Google seems to ignore meta-tags altogether, and who can blame them? After all, if you want keywords, look into the content; the keywords should occur there more often than the rest of the text. From the Google Age point of view, a “keywords” meta-tag is a special device to trick search engines, nothing more. Which, frankly, it really is – see the theory of the meta-tags being wrong, above. Anyway, once the Theory of Incoming Links was established, the solution for webmasters became evident – reciprocal links. It really is evident: if incoming links are important, you link to me, I link to you, and we both get a boost on the search engines.

If you have noticed, the relevance of Google search results has become significantly poorer lately. Guess why? Could it be related to the fact that there are now automated solutions for making reciprocal links? Again, the search engines faced a dilemma of two theories: bad webmasters who screw the poor search engines, and the key premise – that you can judge a site by its incoming links – not being quite right.
Again, the theory of bad guys won. Google implemented a lot of sophisticated algorithms to recognize “non-legitimate” links: “link farms” (websites with a lot of unrelated outgoing links), “link exchanges” (pretty much the same on a reciprocal basis), and lately “clusters of reciprocal links with the same anchor texts” (when a webmaster requests all reciprocal links with exactly the same anchor text). For reference, I do not work for Google. If I did, they would fire me for such a public article, although in their place I would consider trying to hire me based on this article instead. :-)

The Story repeats itself

Now, reciprocal links are not dead yet, but you see, webmasters and search engines are in a Red Queen Race against each other over reciprocal links. So we can expect that more and more reciprocal links will be automatically recognized and discounted or banned. Actually, the fact that Google bans whole sites because of suspicious links shows its recognition that such links can screw it up. Otherwise it would just ignore these techniques, like it ignores meta-tag keywords today.
Anyway, let’s look at the history again.
Granted, Google is pretty good at hunting down wrong links, but it has based its strategy on a faulty premise all along.

Linked Articles

And, meanwhile, webmasters adapt. Lately, I have started to notice an upsurge of the Shared Content Theory. Consider, for example, the report “How to Quickly & Easily Get Dozens of Quality Links to Your Website Each Month, Without Lifting a Finger” from Internet marketing expert Jason Potash. Essentially, the pitch goes: “A lot of webmasters are hungry for fresh content. If you offer articles for free, they take your article and publish it on their sites, and you get a lot of incoming links that cannot be recognized as reciprocal, because they are not!”

By the way, in the mid-term – a few years – it’s likely to be the name of the game. “Would you build your house on quicksand?” the report asks. “You get your traffic with tricks, but they can stop working at any time. Wouldn’t it be nice to have a plan B, one that will not sink overnight? Ignore Google U.S. patent application #20050071741 at your own peril!”

Yes, true, we would all like to have a stable plan B that would bring stable traffic for years. And in the mid-term, the solution may be right. The problem is that once it becomes popular, it is also going to be abused and worn out, just like reciprocal links. And the reason is that it’s not the webmasters, it’s Google who built their house on quicksand.

Red Queen Race

Now, that may look like a bit too forward a statement. Let me explain. The term Red Queen Race comes from the excellent book “The Red Queen” by Matt Ridley, which is named after the Red Queen from Lewis Carroll’s “Through the Looking-Glass”, who had to run as fast as she could just to stay in place. The topic of the book is the co-evolution of the sexes, as well as of organisms and parasites. You may wonder how that is related to Google and search engines, but let me explain, and it will become clear.

You see, wherever there is a lot of some resource, be it food, energy, or web traffic, there will be actors that try to get that resource while ignoring the rules laid out by its owner. Say, bacteria find a huge source of food. They get in and start to feast and multiply in huge numbers. Granted, the place gets pretty dirty and toxic with time, and eventually either the food runs out or the place becomes so toxic that the surviving bacteria have to search for new sources of food. In fact, it becomes toxic faster than the bacteria’s own waste can accumulate. It looks as if the food itself reacts to prevent the bacteria from eating it. And, actually, it does. Because this food is our bodies, and that’s exactly what happens when we get sick.

To avoid being evicted from the food, the bacteria evolve and adapt. In turn, our bodies also evolve and adapt to fight the new kinds of bacteria. And the bacteria have to evolve and adapt again. The point is that the race never ends, and it becomes more and more expensive for both sides, but as long as the food is still worth the effort – and in the world of bacteria, food is always worth the effort – the race goes on.

Now, you have the resource – web traffic; the owners of the resource – search engines; and a huge number of small, replicating units interested in the resource – websites and webmasters. Do you see any similarity?

Enforced web site evolution

Now, can Google or any other search engine break the circle of the Red Queen Race and cut off the webmasters looking for a shortcut to popularity? Theoretically speaking, they can. Let’s consider the analogy again. When a bacterium survives, adapts and multiplies, it’s not because the bacterium is smart. It’s just that this particular bacterium accidentally mutated into a form that is resistant to the latest antibiotic, that’s all. A lot of other bacteria mutated too, but were not so lucky and got killed. It’s a numbers game: a lot of mutations happen, and the ones that satisfy the survival criteria multiply. That’s it.

Now we get to the difference between bacteria and webmasters. A bacterium’s survival depends on its ability to work around the body’s self-defense. Hence the Red Queen Race. As long as Google measures websites by incoming links, working around Google’s restrictions remains the most efficient survival strategy, and hence websites and search engines get locked in a Red Queen Race as well, without any reasonable exit.

However, the search engines hold the criteria of survival for the websites. Literally. It’s their algorithms that select the sites that will get the traffic. You see? Webmasters are not bad guys after all; they just evolve and adapt to whatever survival criteria are imposed on them by the search engines. It’s not webmasters who trick search engines into selecting them. It’s search engines that force webmasters into odd behavior that the search engines don’t really want!

So, what do the search engines really want? They want Internet users to come to their site, type in a question or a few keywords, and get exactly the information they really wanted. That’s the core of their business. Either they provide relevant links or they are dead. That’s how Yahoo beat AltaVista. That’s how Google beat Yahoo. As long as they are able to find the relevant information, people come to them, they get traffic, and so they can sell advertising as well as become a fat source of Internet food – web traffic.

And here is the problem the search engines face. Internet users are not looking for meta-tag keywords. Internet users are not exactly looking for content keywords either. And, surprise, surprise, Internet users are not looking for a lot of incoming links either. Really, think about what a lot of incoming links means. It’s either a fat cat with a large affiliate program, or a trickster like those the search engines are complaining about, or the result of a popularity contest. And people are normally looking for authoritative information, good services, or merchandise. Or they are looking for other people. In other words, nothing that is indicated by a lot of incoming links.

You see, the central premise of Google is that people link to good content. However, that’s simply not true. People link for a lot of different reasons: because somebody pays them, because somebody pays them if their visitors buy something on the other site, because the other site places their links in return, because somebody pissed them off and they give a link that they hope their visitors will not follow, accompanied by a less than rosy review – and only a few links are placed by webmasters because they believe the referred site to be genuinely relevant and useful to their visitors.

So, again, it’s not about websites tricking Google into bumping up their rank; it’s about Google using the wrong search criteria, which forces websites to provide things that Google does not really want but demands from them for their survival. Imagine that Google found the ideal algorithm, one that finds the content that people really need. What would happen that very same moment? Two things:
Is your site safe?

If you rely on reciprocal links, articles and other similar techniques today, is your site safe tomorrow? Is Google likely to break the game?

Not exactly. First, it’s hard. Have you ever tried to find something on the Internet, only to find yourself stuck, unable to distinguish genuine, true information from junk? You bet! Of course, search engines are not expected to distinguish good, well-founded material from an opinion piece based mostly on the author’s imagination. That’s our job. But even the ability to distinguish an informational article on the subject from completely irrelevant stuff is not easy. And the relevance of information is what the search engine service is about (after advertising, of course). They tried to do that with meta-tag keywords, they tried to do that with content keywords, they are now trying to do that by measuring incoming links, but none of these are really good indicators, at least not after the search engines started to use them.

Not that Google does not try. Do you use the Google Toolbar? Does it have PageRank activated? If you are reading this article, that is likely the case. What the PageRank feature does is report the URL of each page that you visit to Google. Well, not so much report it as request the page rank in order to show it to you, but – and that’s a very important “but” – in the process Google learns every page that you ever visited. Technically, it is not associated with your name, and Google does not really need your name, but it has your IP address, or it may have the equivalent of a cookie. It means that Google may have a huge log like:

www.pancakes.com/ 12:51:35 userX
www.munchies.com/ 12:51:44 userY
www.pancakes.com/recipes.html 12:52:24 userX
www.munchies.com/map.htm 12:54:12 userY
www.munchies.com/order 12:54:34 userY
www.pancakes.com/oatmeal.html 12:57:45 userX

www.stupidstuff.com/ 19:31:15 userX

What does it mean? Look at the entries for userX. They show that he spent:

49 seconds on www.pancakes.com
5 minutes 21 seconds on www.pancakes.com/recipes.html
and either 6 hours 33 minutes 30 seconds, or an unknown amount of time
before closing the browser, on www.pancakes.com/oatmeal.html

It also shows that each of these pages was visited once, and the whole log shows how many times a page was visited by all users who have the Google Toolbar and PageRank activated. Now you have a criterion that does not depend on links! It’s still not perfect, but it gives a lot of control over the page rank to the people who really matter to Google – Internet users looking for stuff on search engines.
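
Just to make the arithmetic concrete, here is a rough sketch of how such a log could be processed – my guess at the kind of analysis it allows, not anything Google has described: group the hits by user, then take the gap between consecutive hits as the time spent on the earlier page.

<?php
// Sample log from above: url, time, user
$log = array(
    array('www.pancakes.com/',             '12:51:35', 'userX'),
    array('www.munchies.com/',             '12:51:44', 'userY'),
    array('www.pancakes.com/recipes.html', '12:52:24', 'userX'),
    array('www.munchies.com/map.htm',      '12:54:12', 'userY'),
    array('www.munchies.com/order',        '12:54:34', 'userY'),
    array('www.pancakes.com/oatmeal.html', '12:57:45', 'userX'),
    array('www.stupidstuff.com/',          '19:31:15', 'userX'),
);

// Group the hits by user, keeping the URL and a Unix timestamp for each hit
$byUser = array();
foreach ($log as $entry) {
    list($url, $time, $user) = $entry;
    $byUser[$user][] = array($url, strtotime($time));
}

foreach ($byUser as $user => $hits) {
    echo $user . ":\n";
    for ($i = 0; $i < count($hits); $i++) {
        if ($i + 1 < count($hits)) {
            // time on this page = next hit minus this hit; a gap of several
            // hours probably just means the browser was closed in between
            $seconds = $hits[$i + 1][1] - $hits[$i][1];
            printf("  %-35s %d sec\n", $hits[$i][0], $seconds);
        } else {
            printf("  %-35s (unknown -- last recorded hit)\n", $hits[$i][0]);
        }
    }
}
?>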

Locked in the game

Did I scare you? Well, you can still relax a bit. You see, the current methods will continue to work for some time. Actually, with some modifications, like swapping articles instead of links, they will keep working for as long as the search engines choose to stay in the Red Queen Race. Now, are they going to stay in it for long?

And here comes the Google U.S. patent application #20050071741 that was presented as a major scarecrow by the report mentioned above. You see, the whole patent is about using incoming links as a page rank indicator. You don’t patent stuff to use it for a couple of years. You patent something that you want to use for decades and that you don’t want your competition to use. See? It looks like Google is really committed to using PageRank and incoming links as its popularity criterion.

So here we are. As long as Google relies on incoming links, the Red Queen Race is on, and your methods may require some tweaking from time to time, but overall they will work. First, you will move from reciprocal links to content sharing, then a lot of idiots will start swapping content like they swap links now, then Google will start to recognize duplicates of the same article on different sites, then something else will pop up as a new technique. For example, people may start to write content in exchange for a single link. Small articles may be worth it; after all, people pay tens and sometimes hundreds of dollars for paid links. Whatever it will be, it will continue. You just cannot fend off all the bacteria as long as you are alive and are a juicy piece of food for them.

So, what should I do now?

Read this article. Understand the big picture. Understand that it’s not likely to change dramatically overnight. And continue to use the techniques that you are using now, like reciprocal links. Just keep in mind that you are in the Red Queen Race, and you are not a mindless bacterium – you don’t have to wait for a lucky mutation in order to adapt. So adapt. Vary the anchor text in your reciprocal links. Try to get a higher-PR site to link to you.

Use new techniques like writing articles to share with other sites. By the way, this article is under a Creative Commons license; you can use it on your site as long as you give credit to me in the form of the link below – yes, I am using this technique myself. And be ready to adapt again, and again, and again. It’s not so bad to run as fast as you can and still stay in one place. It certainly beats failing. And you don’t really have to get ahead of the search engine; you only have to get ahead of your fellow webmasters. That’s what a high rank on the search engines is all about. So even staying in place in a race against the search engines may mean getting quite far ahead in your web-based business. Which is what you really want, right?

---
(CC) Ely Asher, 2005, owner of the site Disinfect Your Mind and the blog eBizPromotion: Just Numbers.
You can freely distribute and publish this article on your site, in print, or in other media, including for commercial use, as long as you give credit to the author as it is done above, including the links. For the complete terms of the license, see the Creative Commons Attribution-NoDerivs 2.5 License.

Saturday, June 18, 2005

Sticky notes efficiency

"Sticky notes" is a simple image that looks like local pop-up, although it is a part of the actual HTML page. I was wondering if "sticky notes" help with the click-through rates. It seems to be the case. In the same pop-under campaign that is already mentioned in previous posts (see the actual page here), I noticed the following results:

                      Sticky   No sticky   Total
Total hits:             3437        1473    4672
Click-throughs:           18           3      21
% of click-throughs:   0.52%       0.20%   0.45%
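
For the record, the percentages are simply click-throughs divided by total hits; here is a quick check with the numbers from the table above:

<?php
// Click-through rate = click-throughs / total hits, for each column of the table
$data = array(
    'Sticky'    => array('hits' => 3437, 'clicks' => 18),
    'No sticky' => array('hits' => 1473, 'clicks' => 3),
    'Total'     => array('hits' => 4672, 'clicks' => 21),
);

foreach ($data as $label => $col) {
    printf("%-10s %.2f%%\n", $label, 100 * $col['clicks'] / $col['hits']);
}
?>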

Friday, June 10, 2005

Site on free targeted traffic and publicity

I don't often promote things on this blog, but I really liked this site. Their newsletter gave me some cool ideas.

For example, I had known about the (CC) Creative Commons license for quite a while, and I was kind of a fan of Lessig's ideas, but until George McKenzie from Free-Targeted-Traffic.com mentioned it in his 'Traffic Pulse' newsletter, I did not even think about putting it on my blogs.

Thank you, George!

Thursday, June 09, 2005

Sale Ends Today!!! - How do they do that?

You have probably seen sites like this:

Buy before it's too late!!! The sale ends on <current date>

How do they do that? It's so simple it's not even funny. Just a little script like this:


<p style="text-align: center; font-weight: bold;">Buy before it's too late!!! The sale end on
<script language="JavaScript">
// (CC) Ely Asher, http://ebizpromotion.blogspot.com
var mon=new Array("January","February","March","April",
"May","June","July","August","September","October",
"November","December");
var d=new Date();
var y=d.getYear();
if (y < 2000) y = y + 1900;
// for some reason they always trying to fix Y2K bug,
// although I am not sure I ever saw
// a Javascript engine that have it
document.write(mon[d.getMonth()]+" "+d.getDate()+", "+y);
</script> !!!</p>


Simple, straightforward, and so-o-o obvious... Amazingly, it seems to work.

Actually, most of them use a much longer script, probably as a result of borrowing from each other, but the result is the same -- the sale always ends today.

By the way, this script is under a CC license, free to grab; just give credit -- the line that is in the code right now will do fine.
