I read one of the “href considered harmful” arguments, a couple of them actually, and got to thinking a bit about how web apps are built. Having an application that runs entirely as one “page”, with the page changing as various user inputs are posted back to it, isn’t that hard to do, and I think you could design a pretty capable framework that worked entirely that way.
The problem is that GoogleBot and all the other web crawlers indexing our content can’t see anything that isn’t reachable from an href. I put together a quick demo page to show what I’m talking about – the Cacheable Demo page doesn’t show you much when you first click on it, but when you make selections from the combo box, text appears on the page.
A URL can be like a jump right into the middle of your program – something that makes no sense without the context of the session the link was generated from – but it can also identify a piece of information. Hiding the identifier of that piece of information inside form fields that are POSTed to the page essentially hides the information from most of the Internet, since search engines will never find it.
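The distinction can be sketched in a few lines. This is not the demo page’s actual code – the article ids, paths, and handler names below are invented for illustration – but it shows why the same piece of content is crawlable in one style and invisible in the other:

```python
# Minimal sketch: the same content served two ways. A crawler that only
# follows hrefs can reach the GET version but never the POST version.
from urllib.parse import parse_qs

# Hypothetical content store, keyed by id.
ARTICLES = {"42": "Why crawlable URLs matter."}

def crawlable_link(article_id):
    """The id lives in the URL itself, so the URL alone identifies
    the content - anything that can follow links can reach it."""
    return f'<a href="/articles/{article_id}">article {article_id}</a>'

def postback_form(article_id):
    """The id is buried in a hidden field that must be POSTed back;
    without that exact form submission, the content is invisible."""
    return (f'<form method="POST" action="/page">'
            f'<input type="hidden" name="id" value="{article_id}">'
            f'<input type="submit" value="Show"></form>')

def handle_get(path):
    # e.g. GET /articles/42 - the URL is the identifier.
    article_id = path.rsplit("/", 1)[-1]
    return ARTICLES.get(article_id, "not found")

def handle_post(body):
    # e.g. POST body "id=42" - the identifier never appears in any href.
    article_id = parse_qs(body).get("id", [""])[0]
    return ARTICLES.get(article_id, "not found")
```

Both handlers return the same article text; the difference is purely whether the identifier is visible in a link or hidden in the request body.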
This isn’t something new obviously but the only really workable solution I can think of is instead of the search engines trying to ?reverse engineer? my site to figure out how to get to the content, wouldn’t it be nice if GoogleBot could call a web service that I supply, that describes the content on my site?