HTTP Compression != scalability

“This is becoming a broken record. Every couple of months some web site that hasn't properly prepared for the amount of bandwidth consumed by having a popular RSS feed loudly complains and the usual suspects complain that RSS is broken. This time the culprit is Weblogs @ ASP.NET and their mistake was not providing HTTP compression to clients speaking HTTP 1.0. This meant that they couldn't get the benefits of HTTP compression when talking to popular aggregators like Straw, SharpReader, FeedDemon and RSS Bandit. No wonder their bandwidth usage was so high.”


That's Dare Obasanjo, one of the authors of RSS Bandit. 


So if Weblogs @ ASP.NET had supported HTTP compression for every client, would all their problems have gone away?  Of course not.  It would have helped - bought them some extra time - but even if HTTP compression cut the required bandwidth by 80% (it doesn't, but suppose it did), that would only let you serve five times as many readers before hitting the same wall.  There's still a limit, because the whole model is every client polling a single server.  And IMHO it's a limit that doesn't need to be there at all.
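
To make the compression point concrete, here's a minimal sketch - in Python, purely for illustration, not the actual Weblogs @ ASP.NET stack, and the feed body and port are placeholders - of a server that gzips the feed for any client advertising gzip support in Accept-Encoding, regardless of whether it speaks HTTP 1.0 or 1.1:

```python
# Illustrative sketch only: compress the feed for any client that says it
# accepts gzip, keyed off the Accept-Encoding header rather than the
# client's HTTP version. Feed body and port are placeholders.
import gzip
from http.server import BaseHTTPRequestHandler, HTTPServer

FEED = b"<rss version='2.0'><channel><title>Example feed</title></channel></rss>"

class FeedHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = FEED
        # Decide based on what the client actually advertised,
        # not on whether it spoke HTTP/1.0 or HTTP/1.1.
        accepts_gzip = "gzip" in self.headers.get("Accept-Encoding", "")
        self.send_response(200)
        self.send_header("Content-Type", "application/rss+xml")
        if accepts_gzip:
            body = gzip.compress(body)
            self.send_header("Content-Encoding", "gzip")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8080), FeedHandler).serve_forever()
```

Even done right, though, compression only shrinks each response; every aggregator is still polling the same machine.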


Here's my complaint about the way RSS works:  It requires that if you're popular, you pay the bandwidth cost for your popularity.  Yes, the web works that way, and it's a pain in the butt.  Ever been slashdotted?  Popularity is a curse for a small website.


Now, ever heard of a Usenet newsgroup being slashdotted?


Of course not.  A million people could be reading the same newsgroup at the same time, and because the load is spread over a huge number of servers, that's fine.  Each reader polls their own local news server, and an article only has to travel between servers once, so no single machine foots the bill.  If I post something funny in rec.pets.cats, I don't have to worry about my bandwidth being chewed up when people start linking to it.
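
For comparison, here's roughly what reading news looks like from the client side - a Python sketch using the standard nntplib module (removed from the standard library in recent Python releases; "news.example.com" stands in for whatever server your ISP or host runs).  The point is that the reader talks to its own news server, which fetched the article once from its peers, so the original poster's machine never sees the traffic:

```python
# Sketch of the Usenet reading model: the client polls *its own* news
# server, not the machine of whoever posted the article.
# "news.example.com" is a placeholder for your local server.
import nntplib

with nntplib.NNTP("news.example.com") as server:
    # Select the group; the server reports how many articles it carries.
    resp, count, first, last, name = server.group("rec.pets.cats")
    print(f"{name}: {count} articles")

    # Fetch overview data for the most recent ten articles.
    resp, overviews = server.over((max(first, last - 9), last))
    for number, fields in overviews:
        print(number, fields.get("subject", ""))
```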


I'm not saying “HTTP is broken, replace HTTP”, just that for this sort of information distribution, HTTP isn't the best solution.


Anyone want to fund my development of a server-based syndication protocol based loosely on NNTP?  :)