Posts Tagged ‘Google’

Excluding Search Engines from Geo-Targeting Techniques

Saturday, May 23rd, 2009

I’ve spent the last few days transforming MainStWeb.com into a true local search engine, powered by the Praized API and, overall, I’m pretty pleased with the result.

One of the custom things I’ve added is IP-based geolocation. This means that every visitor gets a custom home-page based on their IP address. Very cool because it makes the site instantly relevant no matter where in North America you happen to be.

My only concern is that Google happens to be in Mountain View, California. Obviously, I don’t want MainStWeb’s indexed pages to be skewed towards Mountain View. The obvious fix is to programmatically exclude search engines from the geo-targeting algorithm.
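For what it’s worth, here’s a minimal sketch of what that exclusion might look like. It’s in Python because the post doesn’t say what MainStWeb runs on, and both the bot user-agent list and the `geolocate_ip` helper are assumptions for illustration, not anything specific to the site:

```python
import re

# Common crawler user-agent fragments (not exhaustive; an assumption for illustration).
BOT_PATTERN = re.compile(r"googlebot|slurp|msnbot|bingbot", re.IGNORECASE)

# Generic, non-localized home page data.
DEFAULT_LOCATION = {"city": None, "region": None}

def location_for_request(user_agent, ip_address, geolocate_ip):
    """Return a location for personalization, or a neutral default for crawlers.

    `geolocate_ip` is a hypothetical callable wrapping whatever IP-to-location
    service the site uses; it is not part of any particular library.
    """
    if user_agent and BOT_PATTERN.search(user_agent):
        # Search engine bots get the generic home page so the index
        # isn't skewed toward the crawler's data-center location.
        return DEFAULT_LOCATION
    return geolocate_ip(ip_address)
```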

Except that I also know that Google frowns upon efforts to present a different experience to the GoogleBot than to real people.

So what to do? Would the search engine exclusion be considered acceptable in the eyes of Matt Cutts, et al?

Best. Hack. Ever. (making Google and Yahoo play nice)

Thursday, February 26th, 2009

That’s perhaps overstating things a bit, but it’s certainly one of the most useful web hacks I’ve used – and use regularly.

The Problem:
Google search feeds cannot be pulled into Yahoo! Pipes.

Pipes’ slogan “Re-Wire the Web” apparently doesn’t apply to Google. I don’t know if this is a restriction imposed by Yahoo, Google, or both, but it’s very annoying if you’re trying to do some heavy-duty monitoring via Google search and want to be able to manipulate the feed before it reaches Google Reader, for example.

The Solution:
Google FeedBurner. If you run the Google News/Blog Search feeds through FeedBurner and then pull the FeedBurner URLs into Y!Pipes, all is well. You can manipulate the Google data any way you like before consuming it.
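For illustration, here’s a rough Python sketch of the kind of pre-processing this unlocks once the FeedBurner-proxied feed is available; the feed URL is a placeholder, and the keyword filter simply stands in for whatever manipulation you’d actually wire up in Pipes:

```python
import feedparser  # pip install feedparser

# Placeholder URL: substitute the FeedBurner address you get after burning
# your Google News/Blog Search feed.
FEED_URL = "http://feeds.feedburner.com/example-google-news-monitor"

def filtered_entries(url, keyword):
    """Pull the FeedBurner-proxied feed and keep only entries mentioning `keyword`.

    This mimics the sort of filtering step you'd build in Yahoo! Pipes
    before handing the result to a feed reader.
    """
    feed = feedparser.parse(url)
    return [e for e in feed.entries
            if keyword.lower() in e.get("title", "").lower()]

if __name__ == "__main__":
    for entry in filtered_entries(FEED_URL, "local search"):
        print(entry.title, entry.link)
```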

Google Enabling Poor UI Design?

Monday, April 14th, 2008

I’ve been seeing coverage of Google’s decision to enable form crawling by GoogleBot. Has Google essentially given web-devs an excuse for poor form-driven site navigation?

You used to be able to declare, with relative confidence, that what’s good for the bot is good for the user. In this case, form-driven navigation is difficult or unreliable for users with disabilities or low dexterity, and many forms also cause problems for users on mobile browsers.

That the G-Bot was also restricted from this type of navigation gave devs and designers the motivation required to avoid form-based nav. I fear that the minority of users affected by this type of nav will not be high enough on the audience priority list without the Bot among their number.

Question: Will we see a resurgence of poor navigation design due to Google-Bot’s new powers?

Index first, rank later!

Friday, August 3rd, 2007

Part of any comprehensive web strategy is a healthy dose of SEO considerations. There are two major components to SEO – yet one of them is often ignored or treated as an afterthought.

Many SEO consultants and pundits will chatter on incessantly about how to improve your search engine rankings. Thousands of dollars are spent on what are largely single-digit incremental improvements in rankings. Improving your site’s rankings, however, is not nearly as important as making sure that each and every possible page of your site is in the index – and that nothing is in the index that shouldn’t be. It may seem basic, but it’s often overlooked that a page can’t rank unless it’s in the index.

There are two basic approaches to making sure that Google, Yahoo, MSN, etc. can find every page on your site. The first is a solid and comprehensive navigation architecture with all of the appropriate links. A crawler should be able to navigate your site the same way that your visitors do. In cases where this isn’t possible (search results, mashups, form submissions), it’s advisable to have an XML sitemap that conforms to the spec agreed upon by the major engines. This sitemap gives the search engine bots direct access to the full list of available pages on your site. Even if there are tens of thousands of dynamically generated pages possible on your site, these can be included in the sitemap.xml file, provided each has a unique URL.
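As a rough illustration, here’s a minimal Python sketch of generating such a sitemap from a list of page URLs. The URLs are hypothetical, and a real implementation would also want optional tags like &lt;lastmod&gt; and to respect the spec’s 50,000-URLs-per-file limit:

```python
from xml.sax.saxutils import escape

def build_sitemap(urls):
    """Build a minimal sitemap.xml body from an iterable of absolute page URLs.

    Follows the sitemaps.org protocol agreed upon by the major engines;
    optional tags such as <lastmod> and <changefreq> are omitted for brevity.
    """
    lines = ['<?xml version="1.0" encoding="UTF-8"?>',
             '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">']
    for url in urls:
        lines.append("  <url><loc>%s</loc></url>" % escape(url))
    lines.append("</urlset>")
    return "\n".join(lines)

if __name__ == "__main__":
    # Hypothetical dynamically generated pages, each with a unique URL.
    pages = ["http://www.example.com/listing/%d" % i for i in range(1, 4)]
    print(build_sitemap(pages))
```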

The Strategy:

There are no ‘silver-bullet’ SEO techniques that will take you from page 10 to the top of page 1… but if your pages are not in the index, they’ll never rank at all.

Google: Our Customers Hate Our Product

Tuesday, July 24th, 2007

The official Google AdSense blog announced yesterday that they’d discovered what was preventing a number of their customers from successfully signing into their service using Firefox… Turns out a little add-on called AdBlock Plus was interfering with the login page. Did you catch that? A significant number of AdSense customers use ad blocking software. That in itself is not terribly surprising. What is surprising, and a little foolish I think, is AdSense announcing it to the world.

Is Google really the new Yellow Pages?!?

Wednesday, September 20th, 2006

Robert Scoble seems to think so, though given that I’m smack-dab in the middle of Kelsey’s DDC Conference, I suspect his statement will meet with some resistance.

Google makes a change that just makes sense

Monday, September 18th, 2006

Matt Cutts posted an article this morning about a change in the way Google handles URL queries. The change makes absolute sense – I’d always kinda wondered why Google didn’t do this in the first place.

Beyond explaining the change, Matt makes some very good, and oft-forgotten, points about what he calls N users and M users. N users are the relatively small number of Google power-users who understand and use the myriad of special queries available on Google – info:, inurl:, site:, etc. The rest are M users – a larger number by several orders of magnitude.

At work, we continually remind ourselves and each other that 98% of the people in our office are N users. Similarly, our friends are likely at least M+ users – somewhere between M and N. Everyone who works on the web in a professional way must maintain a circle of contacts who are clear M users, whom they can call on to ask “Does this make sense to you?”. The challenge is, of course, that people learn, and so M user testers have a shelf life – their usefulness expires as they learn how to “do the web.” My parents’ generation is the best source of M testers, but they are learning too. And, of course, the skill set of the M user is not a constant. A few years ago, proficiency with word processor software was limited to N users – whereas now, these skills (if basic) are ubiquitous at the M level.

The only way to discern the M/N split in skills is to pay close attention to your feedback. Assume that the vast majority of feedback, say 80% or more, is from M users. Those are the users to whom your site/app must make sense. The other 20% of N users will adapt. It is far too tempting to dismiss the feedback that says “I didn’t understand” or “I’m confused by…” as coming from a minority that didn’t try hard enough.

Congratulations to Google (and Matt) for a) recognizing the problem and b) risking the wrath of their vocal N users in an effort to improve the experience of their M users.