Archive for the ‘Uncategorized’ Category

GLUtil on Hackage

August 31st, 2012

The GLUtil Haskell package I wrote about before is now available on Hackage.


Apollo’s Platform Appeal

May 22nd, 2007

Pete Lacey provides a fair analysis of Adobe’s Apollo. It’s a helpful post because it isn’t really trying to push opinion about Apollo one way or the other, even if it may inflame advocates of some of the technologies given somewhat gruff treatment. All told, the analysis sounds right to me, but the development story still leaves me cold. If this is a packaged browser designed to normalize a web-based platform, are the critics really wrong, morally as well as technically, for calling it a browser extension? I’m just not seeing it yet, I suppose. Ease of deployment is a worthy feature, but the Apollo deployment mechanism doesn’t seem any easier than navigating to a URL in a browser. So, if it’s harder to deploy than a traditional browser-based app, it has to offer some combination of a better development story and better technology.

The technology advantage seems focused on offline access, a problem that sounds like it may soon be answered by the very “browser extensions” whose label has already been (incorrectly) applied to Apollo. There is also fairly vigorous debate about just how useful offline access really is, so by itself it is not a killer feature, in my opinion. (Note: I am ready to be proved wrong on that.) The development advantage is similarly less than overwhelming. Rather than the promise of a new language, or even a new API that streamlines the typical workflows of RIA architects, we get languages and technologies we’ve seen before on top of a rationalized runtime environment. The unfortunate part is that platform normalization is a remarkably un-sexy feature that is actually far more important than over-inflated promises about new languages or libraries. Well, it’s unfortunate for Adobe, but I don’t know that such pragmatic concerns will lure developers still flush with the possibilities opened up by the eradication of replication and distribution costs.

Here’s my analogy for this situation: video game consoles (e.g., PlayStation 3, Xbox 360, Nintendo Wii) are introduced with a huge splash centering on new capabilities, but provide real, long-lasting value by defining a platform. What’s interesting about the video game console life cycle is that consoles gain initial adoption by providing something new, something flashy: better graphics, better sound, a new controller. Such intuitively appealing benefits target consumers, whom developers rush to serve with games for the new systems. The developers, meanwhile, benefit tremendously from having compatibility testing largely eliminated compared to PC game development. After the spark and sizzle have faded, consumers are left with a system that developers can become more and more familiar with, leading to better resource utilization and less time spent figuring out how to get “Hello World” to run. Developers are comfortable, and consumers get games that just work without any fiddling: it’s a good thing.

Apollo has the potential to be a developer-friendly environment by virtue of its stability and availability (assuming those things work out). But if deployment is harder than loading a web page (i.e., the video game console costs a bit too much), and there’s nothing to grab consumer interest, what is going to drive adoption?


I Knew I Was Afraid of URIs for a Reason

January 18th, 2007

Netscape validates my fear of URIs (link thanks to Dare). There are a couple of issues here.

First, how does something like this affect visions of the unfolding web? On the one hand, storing the same data in two live sources (not counting backups here) is commonly taken to be an inefficient and dangerous practice. If the data needs to be changed, you should only have to change it in one place (i.e., DRY). On the other hand, repeatedly pulling the exact same document down over the network seems fairly inefficient itself. Not only is it inefficient, it also makes the system less robust: it requires a network connection between two points, and it takes a dependency on a data source that may not be under the control of the consuming application’s developer. A change in the data source could violate assumptions the developer has made, leading to unpredictable changes in the application’s behavior. I think we can safely say that DRY is a good practice to follow in one’s own work, but is there a distributive law of DRY? I don’t know.

Second, as a commenter on the Netscape blog post linked above points out, if the URI were used simply as an identifier rather than as a link to a live resource, this would not be as big a problem. I genuinely dislike the way URL strings, protocol-specifying prefixes and all, are overloaded to serve as abstract, unique identifiers. Either usage leads directly to confusion: if I use a link like http://www.arcadianvisions.com/my_format effectively as a namespace, without actually creating something like a DTD and making it accessible at that location, someone looking for information on the format will be misled by my identifying string. Conversely, if I do store a format specification at that location, someone might take a hard dependency on it and be forever reliant on my web hosting and directory layout.
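
To make that distinction concrete, here is a small Haskell sketch (my own illustration, not anything from the Netscape post, and the names are made up) that separates the two usages at the type level, so a string meant only as a name can’t be silently handed to code that fetches things:

```haskell
-- A URI string used purely as a name: compared for equality, never fetched.
newtype FormatIdentifier = FormatIdentifier String
  deriving (Eq, Show)

-- A URI string we actually intend to dereference: using it implies a
-- network dependency on someone else's hosting.
newtype LiveResource = LiveResource String
  deriving (Eq, Show)

-- The namespace-style usage: this never touches the network.
myFormat :: FormatIdentifier
myFormat = FormatIdentifier "http://www.arcadianvisions.com/my_format"

-- Only a LiveResource can be fetched; passing a FormatIdentifier here is a
-- type error. The body is a stub so the sketch stays self-contained.
fetch :: LiveResource -> IO String
fetch (LiveResource url) = return ("<pretend we downloaded " ++ url ++ ">")

main :: IO ()
main = do
  print myFormat
  putStrLn =<< fetch (LiveResource "http://example.com/spec.dtd")
```

This doesn’t solve the hosting problem, but it does force the “is this just a name, or is it a live dependency?” question to be answered explicitly.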

In this case the data is static, so an identifier (one that does not actually locate a resource on the web) is sufficient, as long as everyone includes their own copy of the referenced data with their application. But does this violation of DRY invite disaster? If a critical flaw were found in the format, how could we respond? My current view is that we should be pulling the data from the net repeatedly, but that a more data-centric localization scheme is needed.


Software Widgets All Over the Place

August 31st, 2006

Nick Carr discusses the idea (brought up by Chris Anderson) of wrapping bits of data with an appropriate application and embedding that bundle directly into a web page. Bruno Pedro extends the basic idea with the notion that not all of the data need be included in the bundle. Extend it far enough, though, and I think you end up back at the Web as it is today. Sure, pages will be more dynamic, and Ajax-like techniques make mashups possible, but is it a revolution?

Does it make sense to have lots of little spreadsheets, a different one on each page? Let’s say computers are really fast and have tons of memory, so efficiency isn’t an issue. What about the user? Will all these apps have different UIs because they come from different vendors? Will each of these AppWidgets contain only the subset of functionality the embedder thought would be useful for the data being presented? If we say “No, never!” to these issues, do we end up with ActiveX?

Let’s say we’ve learned from the past and can do embedded apps without any of the obvious pitfalls. Is the web ready for the idea of apps whose data is pulled from disparate sources? The existence of mashups says yes, but how widespread is reliable, successful interop? The problem of data exchange has a few key aspects. Syndication via RSS or Atom addresses the packaging issue in a manner that seems more manageable than SOAP, but each data linkage still creates a tenuous dependency. There are a few really stable sources of information, particularly large companies, but the dream is to tie together information from many small sources as well as from the big guys. If every individual is a source of data, does that mean we each have to worry about compatibility-breaking changes to our own personal APIs?


What Role do Traits Play?

August 31st, 2006

Over at ONLamp, chromatic has a great review of Roles in Perl 6. I’m very excited to see more languages take up Traits or Roles, as it truly speaks to how many of us write software today, or at least how we’re trying to write software despite language constraints. While dynamically typed languages serve their purposes well, I always have to look away when I write my own class-based dispatch in a method (particularly for operators), or sprinkle respond_to? checks through the code, when I’d be more than happy for the compiler to do this for me.
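
Haskell’s type classes scratch a similar itch, so here is a minimal sketch (my own example, not taken from chromatic’s article) of letting the compiler do that check for you:

```haskell
-- A role-like bundle of behavior: anything Describable knows how to
-- describe itself.
class Describable a where
  describe :: a -> String

data Circle = Circle Double
data Square = Square Double

instance Describable Circle where
  describe (Circle r) = "a circle of radius " ++ show r

instance Describable Square where
  describe (Square s) = "a square with side length " ++ show s

-- The constraint replaces a runtime respond_to?-style check: calling
-- announce on a type with no Describable instance is a compile-time error.
announce :: Describable a => a -> String
announce x = "This is " ++ describe x

main :: IO ()
main = do
  putStrLn (announce (Circle 1.0))
  putStrLn (announce (Square 2.0))
```

The point isn’t the toy shapes; it’s that a forgotten instance is caught at compile time rather than surfacing as a missing-method error at runtime.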

Of course, the limitations of inheritance-based polymorphism are all around us (as discussed in the article above), and attempts at making life better have had mixed results (interfaces, for example). It seems as though some kind of soft typing combined with type inference will be what we’re all using in a few years (if we aren’t already). The next question, then, is whether programmers will actually include optional type specifications, or whether everyone will just leave everything as the most generic type possible. I’d hate to think the future will be C#/Java/WhateverStaticallyTypedLanguage where everything is of type object.
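
For what it’s worth, that optional-annotation style already feels natural in Haskell today. A tiny sketch (my own, with made-up example functions) of leaving most types to inference and annotating only where it documents intent:

```haskell
-- No signatures written here, yet GHC infers precise types rather than
-- falling back to an "everything is object" catch-all:
--   double    :: Num a => a -> a
--   firstWord :: String -> String
double x = x * 2
firstWord = head . words

-- An optional signature is still welcome where it documents intent:
average :: [Double] -> Double
average xs = sum xs / fromIntegral (length xs)

main :: IO ()
main = do
  print (double (21 :: Int))
  putStrLn (firstWord "hello world")
  print (average [1, 2, 3, 4])
```

Inference keeps the unannotated definitions precisely typed, so omitting signatures doesn’t degrade everything to a catch-all type.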
