
Roland's blog

PulpFiction - Vaporware :-) or the next big RSS reader for Mac OS X?

Submitted by Roland on Mon, 2004-04-19 15:29

Can't wait until May 15 to try this out! Looks like a great RSS reader but not a blog writer. Would be great if that was added but if not, I'm happy to use ecto.

From NSLog(); - PulpFiction:


PulpFiction: I've mentioned this application several times. I've displayed its icons. I've joked about it. I've hinted at it. It's been listed on the Freshly Squeezed Software site since before the introduction of Rock Star. I could claim I've been trying to build a buzz, but in reality, we've just been taking our time getting it done.

Originally intended to be released October 24 (does that date ring a bell?), I can now announce that PulpFiction should be out on May 15. Beta testing will begin May 1. The price will be set sometime between those two dates. Almost all of our features are done - we're bug-squashing and internally beta testing.

Many of you have ventured a guess at what PulpFiction is and does. Some of you have come close. Nobody's nailed it completely, and, oddly enough, none of you have ever noticed this button on my blog. It's been there for at least six months.

I seldom speak of NetNewsWire on my blog even though I've got eight dozen feeds attached to this blog and about 100 subscriptions. I've got all sorts of friends, tech blogs, etc. to keep up with. And now I'll admit it: I don't like NNW. It doesn't work as I want it to work. It offers no permanence, no filters, and no real built-in browser. It's got a Weblog editor I couldn't care less about and a price tag of $39. I've paid it, and I've used it for months. I appreciate that it suits some people, but I am no longer one of them. Brent does amazing work. So does Freshly Squeezed Software.

I'm a big fan of "if you don't like it, do something better." I believe that PulpFiction is that "something better." It's better for me, anyway, and it may be better for you. Yes, PulpFiction is an aggregator of XML feeds. Its top ten features...


Dvorak - WiFi signal on public property becomes public domain

Submitted by Roland on Mon, 2004-04-19 14:23

Seems reasonable to me. I am not sure about the "Canadian model" Dvorak describes, in which using unprotected 802.11 connections counts as bandwidth theft. I need to find a Canadian lawyer to see if that's actually true! Leave a comment if you know whether Dvorak is right about the Canadian model.

From The Looming Legal Threat to Wi-Fi:


Let me jump in and propose a simple, logical public policy. Law enforcement doesn't need to get involved whenever some guy in a doughnut shop poaches a nearby Wi-Fi connection to check his e-mail, thinking he's on the shop's network. This shouldn't be a crime, even if he's intentionally poaching. We must put the burden of responsibility on the broadcaster, not the end user. It has to be made clear that people sending open connections all over town should be responsible for them.

Here's what I propose: Once a wireless signal leaves private property, it becomes public domain. If the person transmitting the signal wants it protected, then encryption is up to him or her. If someone beams an Internet connection into my home and I happen to lock onto the signal, he is trespassing on me, not the other way around. Public policy must reflect this logic. Keep it out of my house if you don't want me using it. Keep it out of my car. Keep it away from me in public places.

This policy makes sense because it lets anyone who wants to provide open access do so without hassle or fear. Groups in San Francisco and Seattle are openly promoting free 802.11 connectivity. Many coffee shops, restaurants, and community groups now provide free wireless access, and directories of these hot spots are easy to find online.

This ubiquity of access is to be encouraged as in the public interest. But it can't happen if the law doesn't make the person transmitting the 802.11 signal responsible, instead of blaming any roaming users who are simply grabbing open connections. If this means that a corporate network is wide open to hackers, because the company doesn't bother encrypting the signal it broadcasts all over town, then so be it.

We must not follow the Canadian model that views using unprotected 802.11 connections as bandwidth theft. My computer grabs wireless signals impinging on my house more often than it grabs my own 802.11 connection. It just does. Agencies shouldn't be required to sort this out; it would be a law enforcement nightmare. In fact, it's in the public interest to discourage law enforcement intervention in this area, or I could be arrested for accidentally connecting to another person's router, when I didn't want to connect to it in the first place. That's ridiculous. I'm sure that no cops want to get involved in this mess either.


Spike - Cross platform Zero configuration Sharing

Submitted by Roland on Mon, 2004-04-19 14:02

Excellent! Must try this! And it would be very lovely to have a Linux version someday, in addition to the existing Mac OS X and Windows 2000/XP versions.

From Porchdog Software:


When you share a Spike clipboard, you see a clipping as soon as it is copied on the source machine. You can immediately drag that clipping into your own document on your own machine, and save valuable time.

Spike is easy to use. Spike uses native copy and paste features to create visual clipboards containing thumbnail images of each clipping. The images are scalable, so you can identify the text or image you’re looking for without opening a file. And you can create multiple clipboards to organize the clippings so that they are easy to find.

Spike is secure. All data is encrypted automatically, and you can password-protect access to your clipboard.

Spike requires no configuration. Just start Spike up and it finds all of the other shared Spike clipboards on your network. Simply click on a remote clipboard and visually inspect the clippings. Double click on a clipping to load it, then go to your favorite application and paste. Or drag the clipping from Spike to your application. Collaboration has never been this easy!


All we need is PubSub to track the RSS conversations

Submitted by Roland on Mon, 2004-04-19 12:37

Nico Macdonald nails it. In the near future, all websites will have RSS (or Atom, or some such syndication format) feeds carrying the metadata he seeks. That, plus the capabilities of PubSub, gives you the ability to track the debate crucial to modern society. PubSub TODAY, unlike the much better marketed Technorati, lets you see when anybody in the RSS-generating world (mostly blogs for now, though more and more traditional publications like the BBC and the New York Times are generating RSS) references a topic by name or by URL. This is superior to TrackBack because it doesn't require any software on the publishing end. TrackBack is a good concept and was needed in the pre-PubSub days; now, with the advent of PubSub, it is no longer crucial. Somebody just has to fund :-) PubSub (easy for me to say since I don't have the money to do so!) so they can track all blogs and RSS-generating sites, not just the subset they cover now!

The future of Weblogging | The Register


High quality and informed debate about current affairs is crucial for any modern society. For most publishers Web-based discussion tools have failed to create such discussion, though there are exceptions such as The Economist and the Wall Street Journal. Webloggers already link extensively to, and comment on, articles published online (though some publications impede this by hiding all their story information from non-subscribers or by obscuring the story URL by adding in user, session, or page element information) and often create the most vigorous discussion about them. However, unlike Web-based discussion postings, Weblog-type debate is distributed and hard to get oversight of.

If online publishers, and particularly newspaper and current affairs publishers, syndicated the meta information on every article they published (title, author, date, introduction, and so on), readers could more easily find, review and organise those that were of interest to them. As writers they might choose to post a Weblog commenting on particular articles.
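What syndicating that per-article metadata might look like in practice: a minimal sketch that builds an RSS 2.0 item carrying the fields listed above (title, author, date, introduction). The link URL is an invented example, not a real address.

```python
# Build a minimal RSS 2.0 <item> carrying the article metadata listed
# above: title, author, date, and introduction (description).
# The link URL is an invented example.
import xml.etree.ElementTree as ET

item = ET.Element("item")
ET.SubElement(item, "title").text = "The future of Weblogging"
ET.SubElement(item, "author").text = "Nico Macdonald"
ET.SubElement(item, "pubDate").text = "Mon, 19 Apr 2004 12:00:00 GMT"
ET.SubElement(item, "description").text = (
    "Why distributed Weblog debate needs syndicated article metadata."
)
ET.SubElement(item, "link").text = "http://example.com/future-of-weblogging"

print(ET.tostring(item, encoding="unicode"))
```

With every article carrying an item like this, an aggregator can match Weblog commentary to the article it discusses by title or link alone.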

If publishers then used the ‘track back’ model to list an appropriately edited selection of these comments, in the context of each article, readers could follow the developing discussion and commentary. Tied to reputation management and good presentational tools, this would be likely to facilitate a greater awareness of new ideas and a more engaged (and possibly more informative) debate about them. And for the beleaguered publishing industry it would create greater engagement with its current readers, and may open up new audiences as well.

In the absence of large numbers of publishers taking up such a challenge it may be possible to achieve these ends in another fashion. Many services already aggregate Weblog links to individual Web pages and could present these to readers in a ‘browsing assistant’ window that refreshed with each change of page. A similar model was pioneered by the now defunct Third Voice, whose browser plug-in used a meta-server to allow readers to write on Web pages.

This idea is not new, and was a prominent request in the pre-BloggerCon discussion. It has even been implemented in a limited way with the Technorati Anywhere! bookmarklet. If it could be realised it would at least break open the small and slightly incestuous circles into which the blogerati have settled, allowing their ideas and those of the blogging masses to spread more widely. And it would break open the out-dated model of knowledge development and discussion still being peddled by the unduly smug proprietors of the fourth estate.


There is no RSS Bubble

Submitted by Roland on Mon, 2004-04-19 12:07

I agree with Steve Gillmor. There is no RSS Bubble. If anything, RSS, syndication, and aggregation have been under-hyped and under-utilized!

From The RSS Bubble:


So let's talk about the RSS Bubble. Between NewsGator on the Tablet PC and NetNewsWire on the Mac, I can capture and retain full-text feeds and RSS enclosures, view them with embedded browsers and media players, apply add-in capabilities to publish and auto-subscribe via drag-and-drop or right-click menu commands, and incorporate XML Web services such as search, reputation filtering, and conversation mining.

The emancipation of Web authoring has already democratized information publishing. In turn, RSS provides a mechanism to liberate the other end of the pub sub collaboration. When subscribers can harness the aggregated authority of the feeds and items they and their peers pay attention to, the resulting data will drive the next generation of Net business models.

This self-correcting feedback loop may have the added benefit of reducing the violent fluctuations we experience between boom and bust. We're already seeing this leveling effect in the Sun/Microsoft deal, with blog postings by James Gosling and Eric Rudder counteracting the noise from the media and analysts spaces.

So Christopher, you go your way and I'll go mine. I'm just calling it as I see it. I've yet to see anyone move to RSS and then abandon it. If that's a Bubble, bring it on. But if hype turns out to be true, is it hype?


McLaws announces Visual Blogger 2004

Submitted by Roland on Sun, 2004-04-18 23:28

Very cool. In the future you will be able to blog from every app you use, whether it's your development environment (like Visual Studio or Eclipse), your word processor, your spreadsheet, or...

From Scobleizer: Microsoft Geek Blogger:


Robert McLaws is working on Visual Blogger 2004. This is going to be an awesome way to edit a weblog. McLaws was telling me about the Visual Studio integration. Here's how it'll work. Highlight a few lines of code in Visual Studio that you want to blog about and right click. Choose "blog this" and Visual Blogger 2004 will open up, with the lines of code already in the editor, and will let you publish those lines of code to your blog with comments.
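A hedged sketch of what a "blog this" command might do behind the scenes: wrap the highlighted code in a pre block and post it through the MetaWeblog XML-RPC API that weblog editors of this era commonly spoke. The endpoint, blog id, and credentials below are invented examples; nothing here is Visual Blogger's actual implementation.

```python
# Sketch of a "blog this" command: escape the highlighted source
# lines, wrap them in <pre>, and build a MetaWeblog post struct.
# Endpoint, blog id, and credentials are invented placeholders.
import xmlrpc.client
from html import escape

def blog_this(code_lines, comment):
    """Build a post struct from highlighted source lines plus a comment."""
    body = comment + "<pre>" + escape("\n".join(code_lines)) + "</pre>"
    return {"title": "Code snippet", "description": body}

def publish(post):
    # Not called here: would send the post to a (hypothetical) server.
    server = xmlrpc.client.ServerProxy("http://example.com/xmlrpc")
    return server.metaWeblog.newPost("1", "user", "password", post, True)

print(blog_this(["for (int i = 0; i < 10; i++) {", "}"], "A loop: ")["description"])
```

The interesting part is entirely in the editor integration; the publishing side is just one XML-RPC call.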


Great domain specific (for lawyers) example of how to use an RSS reader

Submitted by Roland on Sun, 2004-04-18 00:56

Ernie nails it. We need more examples like this for other domains.

From Ernie The Attorney: Get your legal news quickly & easily with a news reader:


Have you heard the term XML and RSS but not known what people are talking about? Have you heard of 'News Readers' or 'News Aggregators' but not really understood why people were so excited about them? Well, rather than just point you to an article on the topic, I'm going to show you how you can discover the power of News Readers for yourself.


Use your digital camera to record prices while shopping

Submitted by Roland on Sun, 2004-04-18 00:42

I do this with my Canon S400 digital camera all the time, and once I get a camera phone (July, for my birthday, hopefully), I definitely will do it with my camera phone.

From Reiter's Camera Phone Report: Detroit Free Press computer columnist discusses camera phone apps, moblogs:


One of the executives showed me his camera phone photos of computer printers. He was in the market for a new printer and took photos of printers -- with the price displayed -- as a way to jog his memory.


Richard Akerman's Digital Photography Sharing and Printing revisited

Submitted by Roland on Fri, 2004-04-16 16:48

Prompted by a comment from Richard on my last post, I revisited his digital photography sharing and printing guide, and come to think of it, I was wrong: I couldn't find anything in his guide that's specifically out of date with respect to Panther! My apologies to Richard! The only thing you might want to add is a mention of EasyBatchPhoto for Mac OS X, which does lossless JPEG rotation.

Richard Akerman: Digital Photography, Sharing, and Printing

Submitted by Roland on Fri, 2004-04-16 12:45

A wee bit out of date (not updated for Panther for instance) but useful nonetheless.

From Digital Photography, Sharing, and Printing via Darren Barefoot:


This site gathers together all the knowledge I have accumulated about digital photography. I see the same questions showing up in discussions again and again (the FAQ phenomenon) so I have created this page to provide a useful reference point, particularly for Canadians.

Keep in mind that when I started writing this, 3 Megapixels and 256 Megabytes of storage was state-of-the-art.


Google Local subtle mechanism for metadata collection?

Submitted by Roland on Thu, 2004-04-15 16:48

Hmmm. Google remains a company to watch and monitor.

From EDventure :: Google Local:


But second, consider Google’s AdWords system a subtle mechanism for metadata collection. Right now, you can specify geographic targeting. Someday soon, perhaps, you’ll be able to specify targeting by opening hours, or by language spoken, or by other criteria. For now, that information is used only for targeting rather than displayed…

But just as Google is implicitly, if transparently, planning to collect huge amounts of e-mail, it’s also beginning to collect metadata about businesses. And it has the market presence to make such a collection interesting. For now, the information provided by AdWords advertisers is an interesting database; someday, perhaps, it could support a variety of open APIs. (Take a look at SMBmeta, courtesy of Dan Bricklin.)

The best analogy, perhaps, is to Wal-Mart’s efforts to get its suppliers to use RF-ID, faltering though they may be. In the long run, suppliers will adopt Wal-Mart’s standards, and other large customers will likely start to use those standards too. Here are some scenarios: Currently, most “commerce” searches are for products and the establishments that sell them. But unless you’re ordering online, those two searches are generally separate. There are few listings for what’s on sale at an individual store. But soon, it could make sense for a store to make limited access to its inventories available online, so that people could know exactly where to buy things.

And, of course, Google could sell anonymous data about those queries to merchants who wanted to stay in stock or pre-order based on what looks hot, or to manufacturers, fashion mavens, and so on.

While right now Google is collecting information through AdWords for targeting, there’s no reason it couldn’t start using advertiser-entered data for display as well, as it already does with data feeds in Froogle. Some companies may start sending these new kinds of feeds expressly, while others might fill out a slightly more complex, domain-specific form when they advertise. Then hotels could start to compete on the basis of their swimming pool hours.


Jeffrey Veen, like me, only buys CDs direct from artists at their gigs

Submitted by Roland on Wed, 2004-04-14 20:53

Downloading music and TV programs 'illegally' is very similar to the illegal drug scene of the 60s. Everybody does it but nobody admits it. Yes, I have had a brief flirtation with downloading music and video, but as I have stated many times before on this blog, I no longer download 'illegal' music or video. In fact I don't even bother to buy CDs or DVDs except directly from artists at their gigs. After reading about how badly artists are treated and buying over 600 CDs in my life, I really don't miss it!

Life is too short to worry about the fat-cat and self-serving MPAA and RIAA. And there is plenty of other stuff to do (like raise my beautiful son, eat great food, and take advantage of legal downloads)!

From Jeffrey Veen: How I stopped buying CDs and started loving music:


So is this bad? Am I hurting the artists by "stealing" their music? I was talking to Jenny Conlee, accordion player for the Decemberists, at their last San Francisco show about this. She said she would much prefer to sell stuff at the gigs, rather than through stores -- though it doesn't scale as well. She told me that if you bought their album at the mall, they might see about $.80. Amazon nets them just over a buck. But at the gigs, where they sell CDs for just $10, the band keeps half. Not to mention the percentage of the door take.
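The economics Conlee describes can be checked with quick arithmetic, using the figures as quoted above (Amazon's "just over a buck" is taken as $1.05 here, an assumption):

```python
# Per-CD revenue to the band under each channel, using the figures
# Jenny Conlee quotes above (approximate, as reported).
channels = {
    "mall retail": 0.80,       # ~ $0.80 per album through a store
    "Amazon": 1.05,            # "just over a buck" (assumed $1.05)
    "gig sales": 10.00 * 0.5,  # $10 CD at the show, band keeps half
}
for name, take in sorted(channels.items(), key=lambda kv: -kv[1]):
    print(f"{name}: ${take:.2f} per CD")
```

By these numbers a gig sale pays the band roughly five to six times what a retail or Amazon sale does, before counting the door take.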


TrailBlazer - Visual browser history

Submitted by Roland on Wed, 2004-04-14 18:57

(Via email from Brian Fisher) - Very cool! Need more innovation like this!

From MacWarriors TrailBlazer:


TrailBlazer solves the problem of getting back to a web page you've been to before, but didn't have the forethought to bookmark. The current solution provided by most web browsers, the history menu, is just a list of titles and web addresses that aren't memorable enough to be useful.

The actual solution used by most people is to retrace their own steps through different links until they find the page they are looking for. Our software provides the user with a graphical representation of the steps they took from page to page, so that they can simply click to their final destination page.


Joe Gregorio: Building an Atom-Powered Wiki in Python

Submitted by Roland on Wed, 2004-04-14 18:05

Very cool hack!

From An Atom-Powered Wiki:


Using Python and the XPath facilities of libxml2, it was straightforward to build an AtomAPI implementation for a wiki. There isn't even very much code: atom.cgi is just 146 lines of code, while atomfeed.cgi is just 122 lines.

This is just a basic client that does the minimum to support the AtomAPI. In a future article, the server's HTTP handling can be enhanced to provide significant performance boosts by using the full capabilities of HTTP. In addition, SOAP-enabling the server will require some changes. After that we can add the ability to edit the wiki's templates.
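The core extraction step the article describes can be sketched in a few lines. This uses the standard library's xml.etree rather than libxml2 (an assumption for portability; the original atom.cgi used libxml2's XPath API), and the namespace shown is Atom 1.0, whereas the 2004-era AtomAPI used the Atom 0.3 namespace:

```python
# Minimal sketch of pulling title and content out of an AtomAPI
# entry document. Uses xml.etree instead of libxml2 (an assumption),
# and the Atom 1.0 namespace for illustration.
import xml.etree.ElementTree as ET

ATOM_NS = {"atom": "http://www.w3.org/2005/Atom"}

ENTRY = """<entry xmlns="http://www.w3.org/2005/Atom">
  <title>FrontPage</title>
  <content type="text">Welcome to the wiki.</content>
</entry>"""

def parse_entry(xml_text):
    """Extract (title, content) from an Atom entry document."""
    root = ET.fromstring(xml_text)
    title = root.find("atom:title", ATOM_NS).text
    content = root.find("atom:content", ATOM_NS).text
    return title, content

print(parse_entry(ENTRY))
```

With parsing this small, most of a wiki server's line count goes to HTTP plumbing rather than XML handling, which fits the tiny totals quoted above.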


Amazon's A9 Toolbar with Diary Feature = stealth blogging feature?

Submitted by Roland on Wed, 2004-04-14 17:58

Is A9's toolbar diary feature a stealth blogging system or merely a killer feature?

From > Company > What's New & Cool:


Web Search: Search the web and's Search Inside the Book™ results. You can also do searches on, the Internet Movie Database, and Google, and look up words in a dictionary and thesaurus.

Search Highlighter: The toolbar will automatically highlight your search terms in a light yellow. By using the highlighter menu, you can see how many times your search terms appear on the page, and jump to each occurrence of a specific word. Hint: You don't have to do a search to use the highlighter. Just type one or more words in the search box and click the highlight button.

Your History: Keep track of your last sites visited (on any computer) and your most recent searches. It will keep track of the Web pages you recently visited--even if you switch computers.

Diary: This is the newest and (we think) coolest feature of the toolbar. You can take notes on any web page, and reference them whenever you visit that page, on any computer that you use. Your entries are automatically saved whenever you stop typing or when you go to another page.

Site Info: See information about the website you are visiting, including related links, site statistics (including traffic rank), sites linking to this site, and user ranking. Select from the menu to go to the site's page on where you can get more information and write a review about the site.

Pop-up Blocker: Stop those annoying pop-up ads.


LinkRanks - Yet another cool PubSub Feature

Submitted by Roland on Tue, 2004-04-13 17:08

Very cool. The link below is an attempt to establish a conversation thread around a topic using a common URN.

From PubSub: About Link Ranks:


Link ranks are our way of measuring the strength, persistence, and vitality of links appearing in weblogs. When PubSub reads a new weblog entry, we pull out any URIs we find and attach them to the entry in a separate field. This allows our users to include domain names or linked file types when creating subscriptions.

From this set of URIs, it's easy to find the most popular domains. Link ranks take one more step and calculate scores for each linking site; domains are then scored based on the values of the sites that link to them. The theory is basically that these are the links you're most likely to click on, if you read a weblog at random.

Unlike Google's PageRank system, link ranks are not iterative. Rather, we base link ranks on a simple formula that only looks at local links - links which are within one or two steps of any target site. Also, it's important to note that we only look at links which are in weblog entries - we don't read any of the other links on the page, like the side bars or blogrolls.

The intent of this system is not to measure the strength of any particular domain, but rather the relative likelihood that you'd find and follow a link to that domain. As such, the links are what's really important, not the pages themselves.

To calculate link ranks, we generate a link score for each domain. Link scores are calculated in three steps: first, we find a point value for every site that links to other sites. Second, we use the point values to generate link scores for each domain. Finally, we weight the daily scores over a fixed period to arrive at an aggregate score for the site - this ensures that more recent links are given more value than links from several days ago.
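The three steps above can be sketched in code. PubSub does not publish its exact point formula or decay weights, so the one-point-per-linking-site rule and the 0.5 daily decay below are illustrative assumptions, not the real system:

```python
# Hedged sketch of the three-step LinkRanks scoring described above.
# The point value (1.0 per linking site) and the decay factor are
# invented for illustration; PubSub's real formula is not published.
from collections import defaultdict

def daily_scores(entries):
    """Steps 1-2: turn (linking_site, target_domain) pairs into
    per-domain scores for one day, one assumed point per link."""
    scores = defaultdict(float)
    for linking_site, target in entries:
        scores[target] += 1.0
    return scores

def aggregate(days, decay=0.5):
    """Step 3: weight daily scores so recent links count more.
    days is a newest-first list of daily score dicts."""
    total = defaultdict(float)
    for age, day in enumerate(days):
        weight = decay ** age
        for domain, score in day.items():
            total[domain] += weight * score
    return dict(total)

today = daily_scores([("blogA", "example.com"), ("blogB", "example.com")])
yesterday = daily_scores([("blogC", "example.com")])
print(aggregate([today, yesterday]))  # two fresh links outweigh one stale one
```

The recency weighting in the last step is what makes the rank a measure of current conversation rather than accumulated history.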


