Tuesday 22 December 2009

The Joy of Haskell

About a year ago, I spent the best part of a day playing with Haskell, which was long enough to intrigue me as to its possibilities but not long enough to develop any real understanding of the language. Since then, my functional programming urges have lain dormant - until last week, when I decided to dust off my copy of Real World Haskell and spend a bit longer with the language.

So far, it has been an enjoyable experience, albeit a demanding one at times. I'm very impressed by the elegance and expressive power of Haskell. I've just been reading about currying and function composition and can sense a broadening of my mental horizons somewhat akin to that which I felt on first encountering the UNIX tools philosophy. It was a revelation then to realise just how much more flexible UNIX could be than rival operating systems, thanks to the idea of having many small, simple utilities, controlled via command line flags and connected via pipes. I'm beginning to feel that Haskell offers analogous benefits, compared with the other languages that I've been using for so many years.

That said, I'm encountering the odd thing that makes me cringe. A few of the functions seem to have rather ugly names - putStrLn being a prime example. But I guess I can learn to live with that.

Real World Haskell seems to be pretty good. My only significant quibble so far concerns the exercises, which could do a better job of reinforcing and building on the concepts introduced in the text.

It's too early to say whether my programming habits will be irrevocably changed by all this; I suppose it depends to a large extent on how much more time I'm prepared to spend on Haskell. I sense I'm on the verge of 'getting it', so who knows?...

Sunday 20 December 2009

Appropriate complexity in Java and Python

I've blogged previously that the notion of appropriate complexity is important in determining how easy it will be to learn a programming language. Ideally, small, simple problems should be solvable with small, simple programs, and program size or complexity should not need to increase substantially if there is a small increase in the size or complexity of the problem - always assuming, of course, that we remain within the 'comfort zone' of the language.

Let's consider how well Java meets this requirement by looking at the traditional first program: one that prints "Hello, World!" on the console. If we were dogmatic about exploiting Java's object-oriented nature, then such a program would have to look something like this:
public class Greeting
{
  public void display()
  {
    System.out.println("Hello, World!");
  }

  public static void main(String[] args)
  {
    Greeting greeting = new Greeting();
    greeting.display();
  }
}
A complete novice who is already nervous about the challenge of learning programming will probably regard this code with dismay and wonder just how much more difficult writing a non-trivial program is likely to be.

One way of minimising the complexity is to develop and execute code within an environment that hides much of the 'scaffolding' - BlueJ, for example. Alternatively, we could simplify things by adopting a largely procedural style:
public class Greeting
{
  public static void main(String[] args)
  {
    System.out.println("Hello, World!");
  }
}
But even this simpler code is replete with potentially confusing details. The novice will no doubt be able to identify the single line of code that causes a message to be printed on the console, but will wonder
  • Why the line appears inside something called main
  • What purpose is served by String[] args
  • What the words public, static and void signify
  • What a class is, and why the code has to be enclosed within it
How does one deal with such questions? Is it better to explain every detail of the program, running the risk of confusing or intimidating the less experienced, less confident students? Or is it better to defer explanations until later, running the risk that students will not be confident about writing their first few programs because they don't understand everything that they are expected to type? Both approaches clearly have drawbacks.

Python suffers from no such problems, offering us what is surely the simplest version possible of the traditional first program:
print 'Hello, World!'
(The Python 3 version includes parentheses, of course, because print is a built-in function rather than a statement in Python 3.)
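For completeness, the Python 3 version is still a single, self-explanatory line:
print('Hello, World!')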

Saturday 19 December 2009

Paul Cusick

Got an early present in the post the other day: Paul Cusick's Focal Point CD. A nice touch was the handwritten postcard from Paul himself, wishing me a Happy Christmas!

I discovered Paul via last.fm and recommend that you check out his work if you are a fan of contemporary rock music - particularly if you like the 'modern prog rock' sound of bands such as Porcupine Tree or The Pineapple Thief.

Focal Point by Paul Cusick: http://music.paulcusick.co.uk/album/focal-point-album

Minimal surprise in Java and Python

I've blogged previously about the idea that a programming language will be easier to learn if it causes minimal surprise to the learner. Let's consider a couple of examples here.

Integer Division


In common with other popular languages like C, C++, Java, C# and Ruby, Python 2 performs integer division when both operands are integers - so evaluating 1/2 will yield 0, for example. This is an unwelcome surprise for most programming novices, who naturally expect a language to implement the normal rules of arithmetic. In our experience of teaching Python 2 (and before that, Java and C++), most students make mistakes with integer division more than once before they learn to cope with this counterintuitive behaviour.

Python 3 changes this so that dividing two integers with / returns a float value. The // operator can be used to obtain the old, floor-division behaviour:
>>> 1 / 2
0.5
>>> 1 // 2
0
For some programmers whose expectations have been shaped by exposure to the C family of languages, this change has been the source of much anguish ever since it was first proposed in PEP 238 back in 2001 (as can be seen in comp.lang.python newsgroup discussion on the topic), but it removes what was clearly a significant hurdle for those new to programming.

Console I/O


In another post, I described the asymmetry in console I/O that exists in Python 2 as an example of redundancy, but it can also be viewed as a source of surprise. And there's something else, about console input specifically, that can catch Python newbies off guard.

For a long time, Python has had two built-in functions that read from the console: input and raw_input.

The first of these reads a string of characters and attempts to evaluate them, such that a sequence of digits yields an integer value, characters enclosed in quotes yield a string, etc. Unfortunately, this is less useful than it sounds. Consider the following simple Python 2 program:
# hello.py - a program that greets you

name = input('Who are you? ')
print 'Hello', name
Here are two attempts to run this program:
$ python hello.py
Who are you? nick
Traceback (most recent call last):
  File "hello.py", line 3, in <module>
    name = input('Who are you? ')
  File "<string>", line 1, in <module>
NameError: name 'nick' is not defined

$ python hello.py
Who are you? max
Hello <built-in function max>
In both cases, the student running the program has forgotten that string input should be enclosed in quotes, with the result that Python treats the inputs as names of objects in the global namespace. The first attempt fails because there is no object named nick, but the second attempt succeeds because Python has a built-in function named max. Both results are surprising, even baffling, to programming novices.

The less surprising option in Python 2 is to use raw_input instead of input. The raw_input function returns console input as a string object and allows the programmer to decide exactly how this string should be handled. The string can be left alone in cases where text is expected (as in hello.py) or it can be converted explicitly to the required type:
number = float(raw_input('Enter a value: '))
We have seen many cases where students have used input in their Python 2 programs, having failed to recognise that raw_input is a safer, less confusing alternative (in spite of our attempts to explain this most carefully). Python 3 prevents such confusion by providing a single function named input, equivalent to the raw_input function of earlier versions. Evaluation of the input string is still possible, but it must be done explicitly, with code like eval(input()).
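For comparison, here is hello.py rewritten for Python 3, where input behaves like the old raw_input and print is a function:
# hello.py - a program that greets you (Python 3 version)

name = input('Who are you? ')
print('Hello', name)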

Friday 18 December 2009

Separation of concerns in Java and Python

I've blogged previously about the idea that separation of concerns is important in making a programming language easy to learn. Dealing with very different concerns using the same language feature is a bad idea because the meaning of symbols then becomes highly context-dependent, making code harder to read. As an example, let's consider the ways in which text and binary data are handled by Java and Python.

Java got this right from Day One, more or less. In Java, the byte and char types are distinctly different, the latter being a 16-bit type supporting Unicode. A slight wrinkle in JDK 1.0 was the fact that I/O was based solely around the byte-oriented InputStream and OutputStream class hierarchies, but this was fixed in JDK 1.1 via the introduction of parallel, text-oriented class hierarchies based on Reader and Writer.

With Python, things have been rather different. Python predates both Java and the finalised Unicode 1.0 standard, so for a long time it represented text solely as strings of 8-bit characters. It must have seemed a convenient economy at that time to also use strings as a means of representing sequences of bytes read from a binary file, but this in effect tied the str type to an 8-bit character representation. When it eventually became necessary (in 2000) for Python to support Unicode, the only way that this could be achieved without breaking existing code was to introduce a new unicode type for Unicode-based strings.

The upshot of all this was that Python now had two overlapping types: str, to represent ASCII text and binary data; and unicode, to represent Unicode text.

Aside from the problem of not keeping the different concerns of text and binary data separate, this solution also demonstrates undesirable redundancy; ideally, there should be just one way of representing text.

Perhaps the most significant improvement made in Python 3 is the reimplementation of str as a Unicode-based type and the inclusion of an entirely separate bytes type for representing byte sequences. Nothing is lost in this more orthogonal scheme, thanks to the provision of an encode method in str that converts a string to the equivalent bytes object and a decode method in bytes that converts a byte sequence to the equivalent string - given an appropriate codec in each case, of course.
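A brief interactive session shows the round trip, using UTF-8 as the codec:
>>> data = 'café'.encode('utf-8')
>>> data
b'caf\xc3\xa9'
>>> data.decode('utf-8')
'café'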

Thursday 17 December 2009

Minimal redundancy in Java and Python (2)

Following on from Part 1 of this discussion, let's consider some more examples of minimal redundancy (or the lack thereof) in the Java and Python programming languages.

Integer Representation


The C & C++ languages support eight different primitive integer representations: ten if you include char and unsigned char. Java has only four primitive integer types (lacking the unsigned types of C & C++) but also has wrapper classes for each of these, plus a BigInteger class for representing arbitrary-precision integers - giving a grand total of nine representations. There is too much choice here for the novice programmer, for whom considerations of optimisation and efficient memory usage are unimportant compared with the business of learning the fundamentals of programming.

Python has historically provided only two representations: an int type corresponding to a fixed-precision machine integer (implemented using C's long); and a long type similar to Java's BigInteger. Whilst two is certainly a big improvement on nine, there is still unnecessary redundancy here. The distinction between arbitrary-precision and fixed-length integer representations and the notion that CPU support makes the latter more efficient are worthy of discussion, certainly, but I would argue that such issues fall outside the scope of an introductory programming course.

Python 3 removes the redundancy completely by having a single integer type, complementing its single floating-point type.
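A quick session with the Python 3 interpreter shows the single int type coping happily with values far beyond any fixed-size representation:
>>> 2 ** 100
1267650600228229401496703205376
>>> type(2 ** 100)
<class 'int'>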

Console I/O


It is common in introductory programming courses to use the console for input and output. Simple output has always been straightforward in Java, courtesy of System.out.println, but the language has yet to acquire a correspondingly simple System.in.readln method. Java SE 6 was the first release to improve console I/O significantly, via a Console class providing printf and readLine methods.

Python has also suffered from a lack of symmetry in its support for console I/O, albeit of a more subtle kind. Consider, for example, a simple program to greet its user by name:
name = raw_input('Who are you? ')
print 'Hello', name
Can you spot the conceptual redundancy in this code? Console input is handled with a function call, but console output involves a print statement. Two distinctly different concepts are being used here, when one would do. Fortunately, Python 3 corrects the design oversight by making print a function.

I have to admit, I wasn't particularly convinced that this was a problem until a student asked me for help one day during a lab class. The exercise involved writing a program to convert a temperature from the Celsius scale to the Fahrenheit scale, and his solution, written for Python 2.5, looked something like this:
ctemp = float(raw_input('Enter temperature in Celsius: '))
ftemp = 9.0*ctemp / 5 + 32
print(ctemp, 'deg. C', 'is', ftemp, 'deg. F')
His question was quite simple: "Can you tell me how to get rid of the brackets and commas in the output?" Having written a function call to handle data input, he must have temporarily forgotten the syntax for printing and assumed, reasonably enough, that it would be handled symmetrically by Python, i.e. as a function call; instead, he ended up constructing a tuple by accident and passing it to the print statement! He might have realised his error, but for the fact that we hadn't yet discussed tuples at that point in the course... Thankfully, such confusion cannot arise in Python 3.
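Indeed, run under Python 3 (with raw_input changed to input), the very same print call does exactly what he intended - here with illustrative values plugged in for the two variables:
>>> ctemp, ftemp = 25.0, 77.0
>>> print(ctemp, 'deg. C', 'is', ftemp, 'deg. F')
25.0 deg. C is 77.0 deg. F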

You may have noticed the occurrence of raw_input in the preceding code examples and perhaps wondered why the more obvious-sounding built-in function input isn't being used instead. It turns out that the latter can cause nasty surprises for novices, as I'll discuss in another post. For now, I'll simply point out that this is another example of redundancy in Python. The input function is entirely unnecessary, given that its behaviour can be duplicated by combining another built-in function, eval, with raw_input. Python 3 fixes things by having a single function called input that behaves just like the old raw_input function.
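To spell out the redundancy: in Python 2, the following two lines are equivalent, which is why the first buys us nothing:
value = input('Enter a value: ')
value = eval(raw_input('Enter a value: '))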

Object-Oriented Programming


There is no redundancy to speak of in Java's object model. True, it gives us concrete classes, abstract classes and interfaces, but these things are distinctly different from each other. (The fact that Java forbids multiple inheritance means that a purely abstract class is not equivalent to an interface, for example.)

Things have not been so clear-cut in Python, due to the way in which Python's object model has evolved. Before Python 2.2 arrived in December 2001, there were significant differences between Python's built-in types and user-defined classes, such that it was impossible to create a user-defined subclass of a built-in type. Version 2.2 went a long way towards healing the class/type split by introducing new-style classes alongside the existing classic classes, and by turning most of the built-in types into these new-style classes. Since version 2.2, it has therefore been possible to begin class definitions in two ways:
# Old-style
class Foo:
    ...

# New-style
class Foo(object):
    ...
Students often fail to recognise that these two examples define different kinds of class. This can be a particular problem for those coming to Python from Java, where class Foo and class Foo extends Object are entirely equivalent ways of beginning a class definition.

Unfortunately, many of the books and online tutorials on Python published since the release of version 2.2 have done little to clarify the distinction between old- and new-style classes or provide adequate guidance to novice programmers on which kind of class should be used. Some (e.g., Norton et al, Lutz) have ignored new-style classes altogether whilst others (e.g., Mount et al) have acknowledged their existence but used old-style classes almost exclusively in example code. In some cases (e.g., Hetland) there is a balanced discussion of the two class types and a few titles (e.g., Chun, Telles) have encouraged a more modern approach by concentrating almost exclusively on new-style classes. This lack of consistency can be very confusing for students.

Python 3 solves the problem by removing old-style classes entirely, leaving us with a single object model based on new-style classes. In Python 3, class definitions start off much as they did before, but with the essential difference that object is implicitly a superclass (as is the case in Java and C#). Thus, there is no longer any difference between the two styles of class definition shown above.
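A quick way to confirm this is to inspect the method resolution order of a Python 3 class defined without an explicit superclass:
>>> class Foo:
...     pass
...
>>> Foo.__mro__
(<class '__main__.Foo'>, <class 'object'>)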

Minimal redundancy in Java and Python (1)

I've blogged previously about the idea that minimal redundancy is important in making a programming language easy to learn; in essence, a language should avoid providing multiple, redundant ways of achieving the same goal. In this post, I'll present an example of how Java and Python compare when evaluated according to this principle.

Let's consider the concept of iteration through a sequence of things. In Java, the syntax required to implement this concept varies considerably, depending on the type of sequence being handled and the version of Java that we happen to be using. For example, if we wish to iterate through the characters of a string, printing each one on the console, we would accomplish this with code such as
for (int i = 0; i < message.length(); ++i) {
  System.out.println(message.charAt(i));
}
The equivalent approach to indexing characters in an array is a little different, with length being an attribute of the array rather than a method call and square brackets being used in place of the charAt method:
for (int i = 0; i < messageChars.length; ++i) {
  System.out.println(messageChars[i]);
}
The equivalent for loop for iterating through a Vector of objects such as integers is again subtly different; indeed, it can be written in two ways - either using the old elementAt method that predates the Collections framework introduced in JDK 1.2, or using the get method that was added later to bring the Vector class into line with other containers from the Collections framework:
for (int i = 0; i < vec.size(); ++i) {
  System.out.println(vec.elementAt(i));
}

for (int i = 0; i < vec.size(); ++i) {
  System.out.println(vec.get(i));
}
Of course, it is also possible to enumerate items in a Vector one at a time rather than indexing them, but yet again there is more than one approach to this. For example, we can use the Enumeration interface that predates JDK 1.2 or we can use the Iterator interface introduced with the Collections framework:
Enumeration<Integer> enumerator = vec.elements();
while (enumerator.hasMoreElements()) {
  System.out.println(enumerator.nextElement());
}

Iterator<Integer> iterator = vec.iterator();
while (iterator.hasNext()) {
  System.out.println(iterator.next());
}
Admittedly, JDK 1.5 helped the Java programmer by introducing a 'for each' loop syntax that greatly simplifies the Vector enumeration examples above and can even be used to enumerate array elements:
for (int number : vec) {
  System.out.println(number);
}

for (char character : messageChars) {
  System.out.println(character);
}
Useful though this undoubtedly is, it has yet to displace the older approaches as the obvious way to do things; those approaches remain perfectly valid Java and continue to be described in textbooks and online tutorials.

Now consider a text file. This can be regarded as a sequence of lines, and Java provides a couple of obvious ways of reading such a sequence - but, unfortunately, the syntax is different yet again from the examples shown earlier:
// Before JDK 1.5
BufferedReader inputFile = new BufferedReader(new FileReader("foo.txt"));
String line = inputFile.readLine();
while (line != null) {
  System.out.println(line);
  line = inputFile.readLine();
}

// Since JDK 1.5
Scanner inputFile = new Scanner(new File("foo.txt"));
while (inputFile.hasNextLine()) {
  System.out.println(inputFile.nextLine());
}

We can summarise the preceding arguments by saying that characters in a string, objects in a container and lines in a file are all examples of a sequence of things, but Java requires different iteration syntax in each case. To make matters worse, in two of the three cases there are multiple options available!

Python is vastly simpler in comparison, supporting each of these scenarios with one obvious, consistent syntax:
for character in string:
    print character

for number in numbers:
    print number

input_file = open('foo.txt')
for line in input_file:
    print line,
True, we could also write these examples using while loops, but that would be a very obviously inferior way of doing things - so much so that I've never seen it advocated in any book or online tutorial on Python programming.

Wednesday 16 December 2009

What makes a programming language easy to learn?

This is something that I've been thinking about for a while.  I can't claim to have any deep insights or to have conducted any rigorous research, but intuition plus the anecdotal evidence from over twelve years of teaching programming to undergraduates leads me to suggest four principles that a language should follow:
  • Minimal redundancy
  • Separation of concerns
  • Minimal surprise
  • Appropriate complexity
Minimal redundancy essentially means that a language should avoid having many similar ways of achieving a particular goal - interestingly, the opposite of the well-known Perl mantra "There's More Than One Way To Do It". Having unnecessary, redundant features means that there is more for students to learn, for little material benefit. Teaching only a subset of the language won't help matters, because students will inevitably come across the features you omit via other sources such as textbooks, websites found via Google, etc - and this creates uncertainty as to which is the 'best' way to do something.

Separation of concerns is the counterbalance to minimal redundancy.  Whereas it is desirable for a language to keep syntax and features to a minimum, this should not be done to the extent that very different concerns are dealt with using the same syntax or language feature. In essence, a language should avoid overloading its syntax with different, context-dependent meanings if possible, because this can cause unnecessary confusion, make code harder to read, etc.

Minimal surprise means that, where possible, a language should behave in a manner consistent with the normal expectations of the novice programmer.  Surprising behaviour forces the learner to expend extra mental effort to process and understand what they are seeing; furthermore, it can have a detrimental effect on programmer confidence.

Appropriate complexity means that small, simple tasks should be achievable with small, simple programs; moreover, small increases in the size and complexity of a task should result in correspondingly small increases in program size and complexity.

In forthcoming blog posts, I'll expand on each of these principles and give examples of how well Python conforms to them in comparison with another language that is used in higher education :)

Tuesday 15 December 2009

System usage monitoring with Django

I haven't blogged about Django in a long while - partly because I've been too busy doing other things, such as using it to create a real web application (as opposed to the personal web apps and teaching apps that I'd previously written).

The real web application in question is a system to analyse usage of a commercial virtual learning environment (VLE) deployed at a higher education institution.  This VLE is large and heavily used, as the following stats from the 2008-9 academic year indicate:

  • 30,000+ users
  • 258 million hits against the web server
  • 1.9 million logins from students
  • An average of 6,200 logins per day (peaking at 20,000)

Our system is able to determine the type of student that has logged into the VLE.  It also uses built-in knowledge of the local network plus GeoIP data to identify where logins are coming from.  Information on the time and location of each login, and the type of student, is stored in a database, and Django is used to provide a web-based interface to this database.  The results of queries are presented to the user in various forms - typically as tabular data (paginated if necessary) plus a chart of some sort if appropriate.  We use matplotlib to generate pie charts, bar charts and time series charts, plus the Google Chart API to produce maps showing the countries from which students are accessing the VLE when they are away from university.
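To give a flavour of the Django side of things, here is a minimal sketch of the sort of model that might sit behind the login data - the model and field names below are purely illustrative, not the application's actual schema:
# models.py - illustrative sketch only; names are invented for this post
from django.db import models

class LoginEvent(models.Model):
    # One record per login to the VLE
    timestamp = models.DateTimeField()
    student_type = models.CharField(max_length=50)   # e.g. undergraduate, postgraduate
    country_code = models.CharField(max_length=2)    # from the GeoIP lookup
    on_campus = models.BooleanField(default=False)   # derived from knowledge of the local network

    class Meta:
        ordering = ['timestamp']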

In future blog posts, I'll go into more detail about how it all works...

Saturday 12 December 2009

The one that wasn't a game...

Following on from yesterday's post about the recent graphics programming assignment done by my first-year students, it is worth noting that one of our students was adventurous enough to implement something other than a game.

Alex Hawdon wrote a very interesting program that invokes p0f to analyse incoming attempts to establish TCP connections with your machine, interprets IP addresses using a GeoIP database and then serves up the results as a KML file over HTTP - the latter then being monitored via Google Earth, of course.  The net result: a 3D, real-time visualisation of attempted hacks against your machine.

Friday 11 December 2009

Game Demos

Yesterday was the demonstration session for our open-ended graphics programming coursework.  The students seemed to enjoy themselves - even to the point of turning up in suitably festive clothing:


The aim of the whole exercise was to develop something in Python that involved dynamic graphics. We didn't require it to be a game, but almost everyone chose this option - probably because we devoted several lectures and one of the lab classes to explaining how Pygame could be used for this!

There were some very impressive submissions.  Ian Kernick (pictured above) and Sean Watson - the self-styled 'S & I Productions' - gave us Nazi Zombies: a third-person shooter with suitably gory graphics and sound effects.  So polished was this piece of work that it even had its own movie intro!  One notable feature was the layering of sprites to create a pseudo-3D effect in which the player could move behind or in front of the zombies.

Zombies were also the main enemy faced by the player in Alex Hawes' game, but this time the player was moved around using an XBox 360 game controller.  Ed Worthy used a similar control mechanism in his Geometry Wars clone - all the more impressive for being apparently implemented from scratch in a marathon all-night programming session right before the deadline!  I'd love to see what he could have done with better time management...

There was plenty of other good work on display.  Julie Tillier showed us a very complete and playable PacMan-style maze game and Minos Galanakis demoed a ballistics game in which the player controlled the angle and speed of a projectile fired from a cannon.  Projectile motion was governed by realistic physics and the player's score was synchronised with a high-score table stored externally on an FTP server.

It was common for students to take their inspiration from existing, often very familiar games.  In addition to those examples mentioned above, we saw a decent effort at implementing a Guitar Hero clone (driven by key presses rather than the authentic guitar-shaped controller, regrettably) and also a Frogger-style game called Intoxication, in which the protagonist was a student evading heavy traffic to drink pints of beer that appeared randomly at the roadside!  Hmm...

We rather liked an unusual-looking game from Stefan Piazza and John Lau, called Bit Defence.  It had colourful, distinctive vector-style graphics and was notable for being a two-player game, with multiple levels and power-ups.  The player-controlled sprites had to orbit around a central disc-shaped zone that they were defending, as well as rotating so that they were always facing outwards.  A lot of thought had clearly gone into how this motion could be smoothly and efficiently animated.


Saturday 5 December 2009

Python's Moratorium

Jesse Noller has posted a thorough and well-reasoned explanation of Python's recently-announced moratorium on new language features.  I'm surprised that people are grumbling about the moratorium, but it is good to see a well-argued response to the critics.

I'm very keen for us to standardise on Python 3.x in our teaching at Leeds, but the (understandably) slow pace of transition to the new syntax for some of the large third-party packages and frameworks that we use (e.g., Pygame, wxPython, etc) has meant that we've been reluctant up until now to make the commitment.  My hope is that we'll see greater enthusiasm for making the '2 to 3' transition, now that Python is not going to be a moving target for the next couple of years.

Of course, this is a community where we should all pitch in and help to make change happen - so I'm not going to sit back and wait for others to do all the work.  I just have to figure out where I can make an effective and useful contribution...

Thursday 26 November 2009

Eclipse on Karmic

Couldn't figure out why an install of Eclipse SDK 3.5.1 onto my now Karmic-enabled laptop wasn't running correctly, and then a few minutes of googling led me to this very useful tip.

Banishing the Drifting Pointer

A well-known problem with Dell Latitude laptops such as the D800 concerns the mouse pointer, which can start drifting across the screen of its own accord, for no apparent reason.

The cause of the problem is the small thumbstick embedded in the keyboard. Under Windows XP, I had applied the simple fix of disabling the stick in the mouse preferences dialog, but when I upgraded my D800 to Ubuntu 9.10 the other day, the problem reappeared - and there was no handy option in System Preferences to correct the issue.

Some blogs advocate the drastic step of cutting cables inside the machine, but fortunately I came across a less scary solution in a comment on another blog entry.

Adapting this to my own system, I ended up creating a file /etc/hal/fdi/preprobe/disable.fdi containing the following:
<?xml version="1.0" encoding="UTF-8"?>
<deviceinfo version="0.2">
  <device>
    <match key="input.product" string="DualPoint Stick">
      <merge key="info.ignore" type="bool">true</merge>
    </match>
  </device>
</deviceinfo>

Wednesday 25 November 2009

God Bless The Pretty Things

My signed copy of Boo Hewerdine's new album arrived in the post today, and it's a corker - his best since Anon, I think.

Tuesday 17 November 2009

Watershed

I've used Linux at home for many years now, starting with SuSE 5-point-something and moving eventually to Ubuntu at around the time of the 'Gutsy' release. I've tended to upgrade every 12-18 months, and each time I've installed from scratch. The install experience has improved gradually with time, each successive upgrade causing fewer problems, but this weekend was something of a watershed: my first completely problem-free Linux install. Everything just worked - even sound, traditionally the thing that has always caused me grief in the past.

Tuesday 10 November 2009

A new beginning

Managing a blog on my own server was becoming a chore for various reasons, so I've decided to experiment with an externally-hosted solution. This is also an opportunity to try out a different service (Blogger rather than Wordpress). So far so good...