Wednesday, April 15, 2009

Zope Interfaces and Python Abstract Base Classes

Jim Fulton (the architect of Zope 2, Zope 3, zc.buildout, and so on) and I talked briefly on the first day of the sprints about whether we could take advantage of the Python 2.6/3.0 Abstract Base Classes (ABCs), either replacing Zope interfaces or somehow using them to our advantage.

So I don't misrepresent him, I'll state my conclusions. Ask Jim for his.

Unfortunately, I came to the conclusion that ABCs, as of their current implementation, are anemic and largely uninteresting from zope.component's perspective. Some points:

  • The basic ABC mechanism simply allows hooks into isinstance and issubclass, so that you can ask, for instance, ``isinstance(x, Y)``, and ``Y.__instancecheck__(x)`` is consulted, if the method exists. Jim said we could make zope.interface interfaces implement these hooks. That didn't seem very interesting to me: to me, ``isinstance(x, Y)`` is a very different semantic question than ``Y.providedBy(x)`` and the difference is valuable. Same for ``issubclass(X, Y)`` versus ``Y.implementedBy(X)``. I'd rather not have zope.interface muddy that water.
  • The abc module allows you to create abstract base classes. To declare that a class is a concrete implementation of the ABC, you can either have the concrete class subclass the ABC; or, you can call the ABC's ``register`` method. In the current implementation, the registration is one-way, stored on the ABC. Therefore, you can ask an ABC, via various internals, what its "subclasses" are; but you cannot ask a class or instance about the ABCs it subclasses or is an instance of. zope.component needs to be able to ask what a class implements, or what an instance provides.
  • The zope.interface implementation is highly optimized, and relies on caching results for speed. The abc module is not geared for speed.
  • I wondered if we could at least leverage the default Python ABCs, describing mappings and sequences, to create interfaces from them, deprecating the ones in zope.interface.common. Jim seemed uninterested in that, saying that he hadn't found ours useful. I have found them useful--as a shortcut to describing a contract based on a mapping or a sequence, usually--so that still seems like a possible win to me.
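The one-way registration described above is easy to demonstrate. This is a minimal sketch: the Mapping ABC here is a stand-in I made up for illustration, not the one the standard library provides.

```python
import abc

class Mapping(abc.ABC):
    # A stand-in ABC for illustration; not the standard library's.
    @abc.abstractmethod
    def __getitem__(self, key):
        raise NotImplementedError

class MyDict:
    def __init__(self, data=None):
        self.data = data or {}
    def __getitem__(self, key):
        return self.data[key]

# Registration is one-way: the record lives on the ABC.
Mapping.register(MyDict)

# isinstance/issubclass now consult the ABC's registry...
assert isinstance(MyDict(), Mapping)
assert issubclass(MyDict, Mapping)

# ...but MyDict itself is untouched: its MRO does not mention Mapping,
# so you cannot ask the class or an instance what ABCs it was
# registered against.
assert Mapping not in MyDict.__mro__
```

That last assertion is exactly the gap from zope.component's perspective: there is no analogue of asking what a class implements or what an instance provides.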

Jim pointed out that zope.interface scribbled its metadata on classes, which some people found distasteful, and at least the abc module didn't do that. This characteristic of ABCs might please some people. A global data structure might be another way for zope.interface to do its work while answering that concern, if that ever became something we wanted to solve.

So in conclusion, the basic mechanism and the module were not really of interest to me; I felt that the library of ABCs might at least be interesting, but Jim disagreed.

Too bad. But hey, maybe we can throw out the Zope testrunner in favor of nose! :-) Jim proposed that idea, and thought it might be doable, with a little extension trick to make nose support Zope layers....

Back from PyCon 2009...a week ago.

I came back about a week ago from PyCon 2009...and went straight into getting our house ready to offer for sale, and then performing Anima Mundi by Mark Scearce in Raleigh, NC with my wife, Karyn, and some old and new friends. Good stuff, but exhausting.

I have quite a few notes from PyCon. My first mission there was to announce some of Launchpad's recent open-source work, in particular lazr.restful. I am excited about that, and have things to say, but I'm going to wait to blog on that just a bit longer. The docs need some work, at the least.

While, of course, the Django contingent at PyCon was very large, I was pleasantly surprised that the Zope/Plone community had a good showing. There were a few talks from the community, and BoFs included "I'm not ashamed to be a Zope programmer," "I love Zope," "I hate Zope" (the same crowd attended both the love and hate variants, I'm told), and a generic "Zope" BoF. The sprinting was enthusiastic and lively, and some cross pollination with some of the other non-Django frameworks also added some excitement and interest. The huge international Plone community using and sharing generic Zope libraries also has increased energy behind the Zope libraries.

Perhaps the most interesting generic Zope conversation I had was an attempt to identify what unifies the "Zope" projects. Thanks to efforts to bring Zope 3 libraries to Zope 2 and Plone, there is arguably more of a common theme across projects than in the past. There was some consensus that the following two ideas unify Zope projects.

  • Zope provides an unusual degree of low-level pluggability and interchangeability thanks to a contract-based component system.
  • Zope usually uses a graph-traversal approach to convert URLs to code; this has the advantage that converting code back to URLs (walking back up the graph) is arguably easier than with approaches that lack a natural reciprocal.

(For thoughts on the second point, see my old blog post.)

I have several specific PyCon reports that I'll post separately. All in all, I very much enjoyed the conference, though much more for the conversations and the sprints than the talks.

Speaking at OSCON 2009: Launchpad Foundations

My speaking proposal for OSCON 2009 was accepted: "Launchpad Foundations: Learning to Leverage a Component Architecture". Here's the quick blurb:

Study gains and losses in how Launchpad, a collaboration web service for the open-source community, used a Python component library from Zope 3 to help manage a large project. Discuss when the approach might be appropriate. Code examples include automatic REST web service generation. Demonstrate how the component architecture might be leveraged in popular frameworks such as Django.

Go to the details page for the full blurb. It should be interesting and fun to prepare. I'm interested in seeing if I can fit in references to some of the work that the pypefitters guys have been doing as well. WSGI plus zope.component seem like a pretty good pluggable base combination for web applications to me.

Saturday, February 28, 2009

Getting VMWare Faster

I like Macs, and OS X. I like Ubuntu too, and with my job at Canonical, I develop with it.

I've been developing in Ubuntu in a VMWare Fusion image, for a variety of reasons. I recently got really tired of the slow speed I was experiencing, though. I decided to investigate how I could keep my Mac/Linux story portable but make it faster, without buying a new computer.

Random Googling to the rescue!

The most persistent advice I read was to get an external disk. I also read that using a reasonably fast external disk over a relatively slow connection like USB 2 might not do so well. I also read...a whole bunch of other things.

Here's what I did:

  • I already was using my FireWire 800 port for some other drives, so I decided to go for broke with an eSATA Express Card adapter.
  • I got a 500GB 5200RPM 2.5" disk drive in a Rocketfish enclosure with eSATA and USB connections.
  • After connecting everything up, I built a new VMWare image on the external drive. I used Ubuntu 64-bit, because I read that VMWare could take advantage of some 64-bit opcodes. The 64-bit Ubuntu image is named "AMD64" but I read that it works fine on Intel 64-bit hardware. My experience bears this out.
  • I gave the image 1.5G of my 3G RAM.
  • I configured the image to pre-allocate the necessary space on the hard-drive because VMWare Fusion mentions that this might speed things up.
  • I made sure that the 3D graphics acceleration was not turned on for the image, since Linux can't use it anyway.

The results were gratifying! The image certainly feels snappier. More concretely, running a (presumably representative) subset of the Launchpad test suite took between 20 and 30 minutes on the old image, and between two and three minutes on the new one. <happy sigh>

Now the next question would be "which changes made the biggest difference?" I'd love to know--but probably not enough to actually do the experiment. Instead, I'll use the faster speed to help in this coming week's sprint to abstract a REST webservice framework!

Monday, February 16, 2009

Oh, the farmer and the cowman should be friends: URI parsing with Routes versus graph traversal

(This post's title alludes to a song from the musical Oklahoma, in case you were wondering.)

I, like many web application developers, am impressed with the Routes model for mapping a URI to application code (as in RoR, or any number of Python versions). I plan to use it for "hobby" work, and I'm advocating it at my job.

For many web applications, it seems to work as well or better than the other approach to web application URI parsing with which I'm familiar, graph traversal. In the graph traversal approach I know, you typically divide the URI path by slashes into individual path elements. For example, "/musical_theater/rodgers_and_hammerstein/oklahoma" becomes ["", "musical_theater", "rodgers_and_hammerstein", "oklahoma"]. Then you start with a given graph node and use each path element as input to traverse the graph. For instance, repoze.bfg strictly uses __getitem__ to traverse the graph, so the example URI above might equate to root_object["musical_theater"]["rodgers_and_hammerstein"]["oklahoma"].
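A toy version of that traversal loop, assuming only that nodes support __getitem__ (plain nested dicts stand in for model objects here):

```python
def traverse(root, path):
    """Split a URI path on slashes and apply __getitem__ repeatedly."""
    node = root
    for segment in path.strip("/").split("/"):
        if segment:  # skip empty elements from leading/trailing slashes
            node = node[segment]
    return node

# Hypothetical graph; any objects supporting __getitem__ would do.
root = {
    "musical_theater": {
        "rodgers_and_hammerstein": {"oklahoma": "Oh, What a Beautiful Mornin'"}
    }
}
result = traverse(root, "/musical_theater/rodgers_and_hammerstein/oklahoma")
assert result == "Oh, What a Beautiful Mornin'"
```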

The Routes model is particularly nice for web sites publishing square, non-hierarchical data. If you don't have a graph to traverse, then you have to do something else!

Moreover, I buy into the argument that Routes encourages you to think about your URI space separately from your model. This fits in well with REST philosophies, in particular if you regard your URIs as a significant aspect of your user interface.

In defense of graph traversal, I generally have found that traversing model objects has resulted in reasonable URIs. Also, one could traverse a graph of abstract traversal controllers instead of models (and in fact, at my job, that is what the code of Launchpad does, as of this writing).

But typically, graph traversal does tend to mix model and URI in a way that can force "model" objects into a system when all you really want is a URI.

For instance, in Zope sites that I have designed, I have frequently felt awkward about the top-level design--the part of the design in which you are arranging top-level access to your models. This part of the website functionality often does not map naturally to model objects. In Zope using the ZODB, the nodes in the traversed graph are usually persistent objects, and so the top-level objects have a "model" feel; and yet they are usually just scaffolding until you get into the meat--the real models--of the application.

As another example, URIs in which path elements are really query-string-like filters on a view rather than true graph traversal are possible, but not as natural with graph traversal systems. For example, consider this URI fragment from a real-estate search site: ,new_homes_lt/38.652833,38.976488,-85.838055,-85.455951_xy/10_zm/. (No, I'm not planning on moving to Indiana.) That URI reads well, and follows typical REST advice to move information into the URI. It's doable with graph traversal approaches, but is not really traversing a graph.

Graph traversal has some strengths as well, though.

An obvious one is when you have a graph to traverse. Perhaps you have a CMS in which documents can be arranged into arbitrarily nested folders. Or perhaps you have some concept of "projects" that can contain other projects, to an arbitrary depth.

Of course, in the same way that graph traversal can be made to handle pure-URI stories, such as with Launchpad's abstract traversal controllers, Routes can handle graph traversal. But I argue that graph traversal is more natural to, um, traversing graphs.

In particular, if you have graph nodes that can be dynamically created that have different traversal rules, as in the CMS example above, then defining how to traverse per graph node can be more natural and cleaner than specifying the rules in a routes file and a single controller.

Also, when a routes system starts to make heavy use of regular expressions--say, a rule that specifies anything beyond static strings, a controller, an id, a view, and a "catch all" for the rest of the URI--simple graph traversal approaches can be much easier to express and understand. (Examples of relatively simple traversal approaches are the Launchpad navigation traversers, or the repoze.bfg __getitem__ approach.)

So, they both have applicability. Maybe we can combine the two approaches when it makes sense. The farmer and the cowman should be friends. (You get to decide which approach is the farmer and which is the cowboy, though see the postscript.)

For some projects, Routes or graph traversal alone might fit the bill perfectly. I do tend to guess that Routes is the better general-purpose approach. But for some applications--if they present a complex data structure, for instance, and especially one in which one or more aspects of the site can be presented as a graph--then maybe you ought to have Routes for the top of your site, which then can defer to graph traversal for certain parts of your site that make sense.

megrok.trails goes down this road, but not quite the way I'm thinking of at the moment. It fits Routes-style traversal within a larger context of graph traversal. I'd like to turn that inside out: when appropriate, have a Routes mapping with a wildcard that consumes the entire tail end of a URI, and then sends this to an intermediate controller, which uses graph traversal on the wildcard part of the URI to find the "real" controller. Routes is entirely in charge initially, and explicitly defers to graph traversal if so requested.

I wouldn't be surprised to learn if such a thing existed for Routes. It would be pretty easy to code up. I'd like to use something like it.
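To make the inside-out idea concrete, here is a rough sketch. Everything in it is hypothetical--the tuple-based route table and the TRAVERSE sentinel are inventions for illustration, not the actual Routes API:

```python
TRAVERSE = object()  # sentinel: defer the tail of the URI to graph traversal

def make_dispatcher(routes, graph_root):
    """routes: ordered (prefix, handler) pairs. A handler of TRAVERSE
    means 'consume the remainder of the path via __getitem__ traversal
    from graph_root'; otherwise the handler is called with the path."""
    def dispatch(path):
        for prefix, handler in routes:
            if path.startswith(prefix):
                if handler is TRAVERSE:
                    node = graph_root
                    for segment in path[len(prefix):].strip("/").split("/"):
                        if segment:
                            node = node[segment]
                    return node
                return handler(path)
        raise LookupError(path)
    return dispatch

# Flat routes own the top of the site; /docs/... falls through to a graph.
site = make_dispatcher(
    [("/about", lambda path: "about page"),
     ("/docs/", TRAVERSE)],
    {"guides": {"install": "installation guide"}})

assert site("/about") == "about page"
assert site("/docs/guides/install") == "installation guide"
```

Routes is entirely in charge at the top, and the graph only sees the wildcard tail, which is the division of labor I described above.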

Postscript: For what it's worth, I'm struck by an overwhelming desire to relate the farmer, making fences, to Routes, making nice, simple URI rules; and to relate the cowman, herding free-range cattle, to graph traversal, letting you walk over arbitrary model graphs. But metaphors like that sometimes get people up in arms, because the Routes people might want to be the rough-and-tumble cowboys, and the graph traversal people might want to be the practical and pragmatic farmers. So forget I said anything like that.

Don't use __*__ in Python unless you are hacking Python

As an aside, I've essentially finished the REST book I was reading, so could theoretically launch into blogging about that; and I have been doing a lot of reading and exploring of the Inform 7 ideas I introduced earlier. But those are too big and daunting to write about quite yet.

When starting the Pylons book today, I noticed that their Routes library uses the __*__ pattern for some of their API (__before__ and __after__ at the least, it seems).

The same kind of pattern sometimes appears in the Zope code base. zope.location uses __parent__ for back pointers. The component registry defined in zope.component uses __bases__ on instances, which is especially confusing because __bases__ has a special Python meaning on classes.

Why, for goodness sake?

The __*__ pattern is explicitly claimed by the language for internal bits. Here's the pertinent bit from the language reference:

System-defined names. These names are defined by the interpreter and its implementation (including the standard library); applications should not expect to define additional names using this convention. The set of names of this class defined by Python may be extended in future versions.

That seems pretty clear. The pertinent section of PEP 8 is pretty clear too:

__double_leading_and_trailing_underscore__: "magic" objects or attributes that live in user-controlled namespaces. E.g. __init__, __import__ or __file__. Never invent such names; only use them as documented.

There are plenty of other naming conventions that a framework can claim for internal bits. The ZODB, for instance, even though it was first written a pretty long time ago for the web world, uses prefixes like _p_ to signify its own internal bits. That conveys the same kind of "I'm magic" idea, but does not step on the language's toes unnecessarily.
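For instance, a framework can mark its metadata with a short prefix of its own. The _fw_ name below is a hypothetical example of such a prefix, alongside the dunder form the language reserves:

```python
class Document:
    """Hypothetical framework object."""
    def __init__(self, parent=None):
        # A back pointer like zope.location's __parent__, but under a
        # framework-claimed prefix rather than a Python-reserved name.
        self._fw_parent = parent

root = Document()
child = Document(parent=root)
assert child._fw_parent is root
```

The attribute is just as discoverable and just as obviously "magic," but there is no chance of colliding with a name a future Python might define.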

Apparently, there is a defense for older code that uses __*__: I read that Guido's initial style post from which PEP 8 evolved said that it was OK to claim __*__ names under special circumstances. The post is not at its old location any more, but thanks to the Wayback Machine we can see the evidence. Guido used to say this:

__double_leading_and_trailing_underscore__: "magic" objects or attributes that live in user-controlled namespaces, e.g. __init__, __import__ or __file__. Sometimes these are defined by the user to trigger certain magic behavior (e.g. operator overloading); sometimes these are inserted by the infrastructure for its own use or for debugging purposes. Since the infrastructure (loosely defined as the Python interpreter and the standard library) may decide to grow its list of magic attributes in future versions, user code should generally refrain from using this convention for its own use. User code that aspires to become part of the infrastructure could combine this with a short prefix inside the underscores, e.g. __bobo_magic_attr__.

OK. That was a bit of a waffle. But it's not there anymore, and in any case, some or all of the uses of the __*__ convention I've already listed have no particular need for claiming to be Python-level infrastructure.

How about we stop using __*__ now, unless we are hacking Python itself?

Monday, February 9, 2009

My Claim: "MTV" Is Silly.

repoze.bfg's creator, Chris McDonough, in an informative and corrective comment to my last blog post, among other things disagreed with me about my assertion that "repoze.bfg doesn't provide a model story; it provides a traversal story."

I'm afraid I didn't communicate my point well enough. He might still disagree, and I might still fail to make my points clearly, but I found this the most interesting observation I made, so let me try again.

  • I think applying Model-View-Controller to client-side application frameworks makes sense. MVC actually makes sense, with a reasonably clear delineation of responsibilities, when you look at Cocoa, for instance, or even in some of the more recent JS frameworks (Sproutcore, for instance).
  • All of the reasonably recent web frameworks out there with which I am currently even vaguely familiar (Zope 3, Django, RoR, bfg, Pylons) have a model and something responsible for rendering. The rendering usually goes out to a template, but not always. Using a template isn't baked in as a requirement, and usually that's regarded as an advantage and a flexibility. "Render however you want! Use whatever library you want!" So, the heart of the system is Model-View. The Controller isn't there, and the Template is an implementation detail of the View. "Model-Template-View" or "MTV" just seems silly to me. I'd prefer it if everyone acknowledged that our web frameworks usually just have "MV," and move on.
  • bfg doesn't care what it is traversing. Pretty much, give it things that it can use the __getitem__ protocol on, and then when it has consumed the path, it'll adapt to a view class. The __getitem__ bits could be a model...or not! What if your data model didn't jibe with your URL model? That's completely reasonable, and the routes guys have plenty of examples in their apps because of how they think about URLs. What if you like the __getitem__ pattern for your URLs, but your URL story is different from your model? You might build a true MVC system with bfg: pure data-driven models; the bfg traversal system used exclusively over "controller" objects that handle traversal and maybe request (i.e., form) parsing; and views that adapt the controllers to *only* render. Maybe the controllers even optionally have WADL-like contracts based on request inputs.

So, my point was actually not that bfg was cheating in any way--certainly, for instance, nothing like Zope 2. To recap, then.

  1. I find the "MTV" term to be specious generally, whatever the web framework. That's just a criticism of the term, as web framework marketing has adopted it in the past few years. I'd love for it to retire.
  2. Interestingly, I don't think bfg is truly tied to the "MTV" model. It doesn't care what it traverses. The MTV model works fine, but a story like what I described, in which the models are maintained separately from the URL space, and the traversed objects are traversal "controllers," would also work well. Then thinking about any additional responsibility of the traversed objects is an interesting exercise, especially in light of REST-ian approaches.

So, yes, Chris is right, from one perspective, bfg is as much MTV as anybody else. That's fine. I'm just railing against the term, and saying that bfg can be used for more than just "MTV".

Saturday, February 7, 2009


As mentioned earlier, I've spent time looking at other frameworks lately.

I've spent more time on Chris McDonough's repoze.bfg than any of the others so far. This is probably because, as discussed below, it's very minimal. It's also documented well. Finally, it follows a few standard old Zope patterns that don't require much thought for me to process. Given all that, I can understand it quickly, and so am enticed to spend more time to read and think a bit about its design.

I looked at it again because of this recent bit of marketing: Chris makes his point, which is valid; and he's selling to his design's goals, which is the point of this kind of presentation.

As an aside, I'm a "can't we all just get along" kind of guy, so the fact that the trade-offs of repoze.bfg's design are not discussed in comparison with the other frameworks bothers me, even though having done so would have made the piece much worse marketing and much harder-to-read communication. (A repoze.bfg design tradeoff example: if you always need authentication and authorization for your web apps, you'll need to plug in and understand more WSGI middleware, and then the given comparison is not as pertinent, at least for Grok and Django.) I actually wouldn't be surprised if repoze.bfg still would do very well in the chosen metric, if the set up actually did include authorization and authentication middleware in the profile; and if it didn't discard the webob.Response. That would have been mildly more interesting to me. But, whatever, it's marketing, and I get Chris' point.

The name of the framework, "bfg" is funny on multiple levels. The level that sticks with me is that the F[*&^%$#] G[un] is really not that B[ig]. As the documentation points out, this is a very minimal framework:

Minimalism: repoze.bfg provides only the very basics: URL to code mapping, templating, and security. There is not much more to the framework than these pieces: you are expected to provide the rest.

That's nice for a "pay for what you eat" story, as the documentation says elsewhere. But it's also insufficient for any website I've ever made. There are at least some suggested patterns to follow elsewhere within the repoze meta-project: repoze.who and repoze.what are available for authentication and authorization, for instance.

But it is a framework that wants more guidance, more "rails," more framework, to get some basics done. What about web form helpers: maybe we ought to use Ian Bicking's stuff. Or what about REST helpers? You might be able to write some interesting adapters from a generic RESTful view to a CRUD-ish interface, like the patterns I've seen in Rails. But it's not there now (and the documentation states that it is an active goal to hide the zope component architecture, which could have helped with this).

While it's nice to be lightweight, I think filling in those gaps would make a more appealing sales pitch. Maybe it's in the plans to build related libraries and integrate them in "building with repoze.bfg" tutorials, or maybe it's antithetical to Chris' goals, who knows.

Of the three features that the framework provides, the view and templating story is the least interesting to me. It seems very similar to the Zope 3/Grok story. I intend to check out Ian Bicking's webob library, and I hope to use chameleon at work and for hobby projects, but that's the extent of it.

The security story is very similar to Grok's. They both forego framework-level security checks during traversal, I believe, while they differ in the last step: repoze.bfg security-protects the last traversed object within the view code, as I understand it, while Grok security-protects the view. For what it is worth, I prefer the repoze.bfg approach.

The traversal story is the most interesting to me. When I first heard the repoze.bfg traversal plan of "__getitem__ over the model, period," as opposed to the more flexible standard traversal story of Grok/Zope 3, I was skeptical, but the more I think about it the more I like it. The traversal story in Zope 3 has always been a pain to use for me, and while there might be a more powerfully flexible way to alleviate that pain than the repoze.bfg approach (and I think Grok might have tackled this already), the __getitem__ simplicity still is appealing.

In that vein, I find that the assertion that the repoze.bfg code is not MVC but "MTV" (Model-Template-View) like Django doesn't feel right. repoze.bfg doesn't provide a model story; it provides a traversal story. As such, you could be traversing over models or controllers; the code doesn't care which. This is "TV" (Traversable-View); or "[MC]V," from a regex perspective on MVC; or some other odd acronym. Not MTV.

In any case, while I might explore using repoze.bfg on some hobby projects, primarily to get my hands dirty with WSGI and get a better handle on a couple of Ian Bicking's libraries, I won't be working with this at work, and I have a higher personal priority to get some time with Django. I'll probably continue to follow repoze.bfg's development from a distance for now...and be glad that I don't have to write any marketing myself.

Saturday, January 31, 2009

Interactive Fiction, Declarative Domain Specific Languages, and Web Frameworks: Introduction

This is the first part--just the introduction, really--to what I intend to be a small series.

I've liked interactive fiction pretty well since I was a child. If "interactive fiction" doesn't mean anything to you, you might be familiar with the old Infocom games of the 1980s: Zork, Planetfall, The Hitchhiker's Guide to the Galaxy, Leather Goddesses of Phobos, and so on. That's interactive fiction.

Even though interactive fiction hasn't had much of a commercial impact since the 1980s, the genre has been alive and well, and actually doing amazing things, ever since. Shareware and freeware drive it now.

If you're willing to give it a try, Emily Short is considered one of the current masters. Galatea, for instance, might be an eye-opener: free, only 10 minutes long to play, and not a puzzle per se, but pretty amazing. Give it a try. You also might enjoy checking out the experience of blueful. I intend to try out Lost Pig with my five-year-old. There are also plenty of long fantasy, sci-fi and mystery games out there to be found, still usually free; and free software to play the games for Windows, Mac, Linux, Palm, iPhone (search the App Store for Frotz), and plenty of others. This is really cool stuff.

So how do you write interactive fiction? Well, that's cool too. What's going on there right now fascinates me, both because it inspires me to want to give it a try, and because it has a different perspective on similar issues that I've been observing with web frameworks.

Later in this series: Inform 7 versus TADS, domain specific languages and design, pondering declarative goal-directed programming for web applications, RoR and Hobo, and various other things that I will make up as I go along.

Buildbot and Amazon Web Services Elastic Compute Cloud ("AWS EC2")

If you might care about Buildbot, you probably already know about it: "a system to automate the compile/test cycle required by most software projects to validate code changes," as the site puts it. It's written in Python with the Twisted framework.

At Canonical, I've been doing some work with it, and, lately, on it. My first accepted git branch added some bzr helpers. That was cool, but small. The one that was accepted this week is a bit more interesting, I think.

I've made it possible to hook up one or more on-demand, or "latent" slaves to the buildbot master. In the current single concrete implementation of the abstract class, for example, a latent slave always claims to be connected and ready to perform builds, even though it does not really have a remote connection to a machine ready to do work. But when a latent slave gets a build request, it instantiates an Amazon Web Services ("AWS") Elastic Compute Cloud ("EC2") virtual machine, using the image (and therefore operating system) of your choice to run the tests. It uses the nice AWS Python library, boto, to make the magic happen.

Implementing the same thing for other similar cloud computing services should be pretty easy. The module now has an AbstractLatentBuildSlave class. All you have to do is subclass it and implement one method to start a virtual machine, and one to stop it. It's described in the pertinent section of the documentation.
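The shape of that subclassing contract looks roughly like this. Note that this is a sketch with a stand-in base class and made-up method names; check the buildbot documentation for the real class and signatures.

```python
class AbstractLatentSlave:
    """Stand-in for buildbot's AbstractLatentBuildSlave (names may differ)."""
    def start_instance(self):
        raise NotImplementedError
    def stop_instance(self):
        raise NotImplementedError

class FakeCloudSlave(AbstractLatentSlave):
    """Pretend cloud slave: 'boots' a VM on demand and tears it down."""
    def __init__(self):
        self.running = False
    def start_instance(self):
        # In the EC2 implementation, this is where boto would start a
        # virtual machine from the configured image.
        self.running = True
    def stop_instance(self):
        # ...and where the virtual machine would be shut down.
        self.running = False

slave = FakeCloudSlave()
slave.start_instance()   # a build request arrives; bring the VM up
assert slave.running
slave.stop_instance()    # builds are done; shut the VM down
assert not slave.running
```

The master sees the latent slave as always available; the start/stop pair is the only cloud-specific code a new backend needs.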

I figure the new feature is probably only interesting for a relatively small subset of buildbot users. But it's still pretty cool, and a significantly different variation on what was there before. This is supposed to be released in 0.7.10, which is slated to come out RSN, as I understand it.

If you are curious enough about it to want to poke around, the git master branch is here. You can find the EC2 latent slave in buildbot/ The documentation also has extensive additions to try to help you get started.

Looks like I'll be doing a bit more buildbot work in February. Cool.


I feel like I've been playing a whole lot of catch-up lately.

Mark Ramm's September '08 blog posts about how Django could learn from Zope 2's mistakes made one point (actually from the second post) that struck home strongly: more innovation happens elsewhere than within one given community.  You have to pay attention to it and be a part of it.

I have been a part of some cool, and uniquely valuable, stuff working on and with Zope 3.  I've also tried to keep abreast of new technologies, studying Dojo, and Programming Collective Intelligence, and SproutCore, and Objective C, and so on.  Notice that another web framework is not in that list.  Maybe that's because I was working for a company, Zope Corporation, in which working with other frameworks was not really an option (not that I minded, mind you; I like Zope 3).

But employer policies and positions are not really a good excuse.  I should have been actively studying the other web frameworks anyway.

Now that I'm in a work environment with more opportunity for cross-pollination (working for Canonical) I feel like I'm trying to swim out of a backwater to catch up with the rest of the web developer world.  It's daunting and stimulating.  Ideally I can find a way to integrate the best of my past with what the rest of the world is doing.  I've been a part of some innovation too, and I believe some chunk of it is worth bringing forward.  But I need to catch up.

I've been saving up my notes on REST while I read Leonard Richardson's excellent O'Reilly book about it, hopefully for a series of posts.  It also is an interesting, if somewhat dated, view into the world of Ruby on Rails.  In the alt-Zope world, I've been looking into what Tres Seaver, Chris McDonough and friends have been doing with repoze, especially repoze.bfg.  (I've already been somewhat familiar with Grok, but that's so close to Zope 3 that studying it really doesn't go too far in the way of cross-pollination.)  Obviously I need to spend some quality time with Django (I've done just a bit so far) and I'm impressed enough with Mark Ramm's presentations that I figure I ought to spend some time with TurboGears.

Like I said, daunting. And stimulating.

But meanwhile...I've also been looking at what interactive fiction has been up to since the last time I looked!  And that's what I intend to blog about next: the declarative domain specific language in Inform 7, and maybe how it relates to this crazy web developer biz.

Saturday, January 17, 2009

Learning How to Blog

It's been a while--over two years--since I blogged. The reasons are simple: I'm busy with work and family, and perhaps I associated blogging too much with professional or scholarly publishing. The second point had two corollaries: my posts were often too big, both for engaging readers and for finding time to write; and I felt insecure and shy about saying much. I'm still busy, and I still want my posts to have some value, but I'm hoping I can get back to blogging. Why?
  • I have stuff to say!
  • I'm working from home now for a distributed company (Canonical), so blogging is a way to share (non-proprietary) ideas in the company.
  • Twitter and Facebook have shown me lately the value of making connections with distributed friends and acquaintances.
  • It's a reason, and a way, to work out ideas gradually.
  • I now understand blogging as being (at least potentially) more social--more of a conversation than the higher stakes of a formal presentation or publication.
  • Assuming I don't perform the blogging equivalent of stripping in front of a religious monument, it is conceivably good publicity, if that ever helps. Or, maybe, there's no such thing as bad press? In any case, my boss wants me to take a more public position in the open-source community, in part because I'm driving some of the open-sourcing work for my team. (BTW, I may be going to PyCon! See anybody I know there?)
What do I have to say? Well, off the top of my head:
  • I'm working with the primary author of the O'Reilly REST book, and I have some nascent thoughts about REST and what I'm learning from him that I'd like to work out.
  • I gave a talk at a company conference this past October about my take on why, and for whom, the Zope interface and component libraries might matter. It would be nice to make that a bit more general and share it.
  • As I said, I'm working from home now, and I really enjoy it. One reason I enjoy it is the processes that my company, and my project, have for helping a distributed team work together. I'd like to share them, because I think the whole experience is great.
  • I've done some more open-source work lately: adding the capability for buildbot to launch AWS-EC2-based build slaves on demand, adding support for the bzr revision control system to buildbot, adding cookie helpers to testbrowser, and packaging and releasing some software developed by other folks in my company. I'd like to talk about each of them a bit.
So, there's plenty of reasons to blog. How can I make my blogging experience better? What can I learn from the last time I tried this?
  • As I said before, treat this as a conversation. I usually don't have time to prepare something that approaches being authoritative, so admit it, move on, and be ready to listen.
  • That said, remember ye olde paper-writing days from school: outlines RULE. Write an outline, at least an informal one, first.
  • If the outline starts to look big, keep it broad-brush and DIVIDE IT UP across several posts! The outline will help the posts make sense together, and focusing on one or two bullet points per post will keep the entries shorter and less imposing for readers--and a smaller, easier-to-schedule block of time for me to write.
  • Let my light(ness) shine! I have occasionally been reasonably amusing in my life, and, as a reader, I find that posts with a bit of lightness and levity are easier for me to read. Being lighthearted probably puts me in a better frame of mind when I write, too. "A spoonful of sugar helps the medicine go down"?
So, that's the current theory, anyway, for teaching myself how to blog. Let's see how it goes.