Giles Bowkett turned up in my Reddit feed today with a thought-provoking article: Programmer Interviews: Two Warning Signs. He says:

“But the ability to locate reference materials isn’t an important criterion in hiring programmers; it’s an important criterion in hiring librarians.”

As I commented over there, about a nanosecond before I realised it’d make a decent post – my first in about ten months:

Au contraire :) I feel that the ability to locate the right kind of reference materials in the right situation is something that is sadly lacking in even some otherwise wonderful coders. In fact, one of my interview questions in the late 90s was “What do you read?”

Even today, it seems that many coders think themselves too busy to actually read up on the right ways to deal with certain challenges, preferring instead to use their ‘initiative’ and fudge around the problem rather than seeking out prior art.

I know, I know – I’m quoting myself, on a comment I made on someone else’s blog. I’m pretty certain this contravenes some basic blogger’s code of conduct that I’d have read if only I’d found the time (hey, I’m too busy. See above). But my point remains – too many coders struggle for way too long by themselves before reaching for the great white Google search box in the sky. And when they do reach for that search box, they use it poorly. Very poorly. I see too few people even use double quotes to indicate that words should occur in sequence, and many give up after one or two searches, whereas I’ll try up to a dozen rephrasings before I’m satisfied that I’ve exhausted the possibilities.

So really all I’m saying is this: the world’s full of people who are cleverer than you and who post to their blogs more often than you do. Use their effort, not yours, since the chances are good that whatever you’re doing is a solved problem. The next interview I conduct, I’m going to confront the hapless code monkey in front of me with an obscure ADO.NET issue and a laptop running Firefox and see how long he takes to winkle out the solution. Then I might show him some code and ask him what’s wrong with it, and finally I’ll ask him what flavour doughnut filling he takes (custard is a no-hire).

And Mr. Bowkett, I’d never dream of posting something as potentially career-threatening as this (scroll down to the blue photo, which is probably SFW but not for small children), but props to you for having the brass knackers to do so.

Does anyone else think that C# 3.0’s var keyword, with its concomitant type inference (see the LINQ Preview if you’ve no idea what I’m rattling on about), is in some circumstances pointlessly clever syntactic sugar? Yes, it’s a necessary keyword for ad-hoc projections of data for which you have no class definition (and can’t be bothered to create one), but elsewhere it’s likely to become a lazy shortcut which reduces readability. One of the first LINQ preview’s hands-on labs has this piece of code:

static void ObjectQuery()
{
    var people = new List<Person>() {
        new Person { Age=12, Name="Bob" },
        new Person { Age=18, Name="Cindy" },
        new Person { Age=13 }
    };

    var teenagers =
        from p in people
        where p.Age > 12 && p.Age < 20
        select p;

    Console.WriteLine("Result:");
    foreach(var val in teenagers)
    {
        Console.WriteLine("> Name = {0}, Age = {1}", val.Name, val.Age);
    }
}

It’s the foreach(var val in teenagers) that gets me – what’s wrong with foreach(Person teenager in teenagers)? OK, so it’s a trivial example, but it seems to me that since var will be used as a lazy way of avoiding thinking about what kind of object you’re really instantiating under the covers, it’ll become an easy shortcut, and it’s just less readable than the explicitly-typed version.

And how about this (From the C# 3.0 Language Enhancements Guide, also in the LINQ May preview)?

var x = 7;
var s = "This is a string.";
var d = 99.99;
var numbers = new int[] { 0, 1, 2, 3, 4, 5, 6 };

Console.WriteLine("x: " + x);
Console.WriteLine("s: " + s);
Console.WriteLine("d: " + d);
Console.WriteLine("numbers: ");
foreach (int n in numbers) Console.WriteLine(n);

Of course, no-one in their right mind would write var x = 7 unless they had no idea that C# was a strongly typed language whose type system confers certain compile-time benefits (a particularly insular JavaScript programmer, perhaps). So why allow it? It’s perfectly reasonable to allow var in the case where it’s required: i.e. when the compiler has constructed an anonymous class to deal with a projection of data. In nearly all other circumstances, however, this lazy shortcut will allow mediocre programmers to believe that they need to think even less about data typing than they did before, and in the process add an extra barrier to maintenance. Perhaps the compiler (or FxCop) could be set to enforce readability where the type of var is expressible?
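
For contrast, here’s the one case where var genuinely earns its keep – a projection onto an anonymous type, where there’s no type name to write even if you wanted to. This is my own sketch rather than lab code, reusing the Person list from the earlier example:

static void AnonymousProjection(List<Person> people)
{
    // var is unavoidable here: select new { ... } builds an anonymous type,
    // so there is no class name available to declare.
    var teenagerSummaries =
        from p in people
        where p.Age > 12 && p.Age < 20
        select new { p.Name, YearsUntilTwenty = 20 - p.Age };

    foreach (var summary in teenagerSummaries)
    {
        Console.WriteLine("{0} has {1} years of teenage angst left",
            summary.Name, summary.YearsUntilTwenty);
    }
}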

It’s not the fact that you can query XML documents in memory using a SQL-like syntax (hey, I already had XPath, thanks all the same).

It’s the fact that you can now create an XML document, for use in an example, where the C# code structure used to create the document mirrors almost exactly that of the resulting output document.

This is brought about by the ancient magic of variable parameter lists (params arrays, in C# terms) and well thought-out constructor overloads, and whoever’s responsible, I’d like to buy him a drink and look after his cat while he’s on holiday. This technique sits right between ugly raw DOM hacking (of which, I confess, I have done much) and beautifully-generated but hard-to-keep-in-sync XML Data Binding.

This is from the LINQ hands-on labs:

public static XDocument CreateDocument()
{
    // create the document all at once
    return new XDocument(
        new XDeclaration("1.0", null, null),
        new XElement("organizer",
            new XElement("contacts",
                new XElement("contact",
                    new XAttribute("category", "home"),
                    new XElement("name", "John Smith")),
                new XElement("contact",
                    new XAttribute("category", "home"),
                    new XElement("name", "Sally Peters")),
                new XElement("contact",
                    new XAttribute("category", "work"),
                    new XElement("name", "Jim Anderson")))));
}

Wonderful, n’est-ce pas? The only shame is that the “X” classes are a little divorced from the rest of the XML namespaces; their primary purpose is to provide something for LINQ to talk to. So if I do want to use XPath, I’ll have to .Save this doc into a memory stream and reload it. Sigh…
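
Something like this round trip, I suspect (a sketch only – I’m assuming the preview’s XDocument keeps a Save overload that takes a TextWriter, and I’ve used a StringWriter in place of a raw memory stream):

// needs System.IO, System.Xml and the preview's XLinq namespace
public static void QueryWithXPath()
{
    // Serialise the new-style document to text...
    XDocument doc = CreateDocument();
    StringWriter writer = new StringWriter();
    doc.Save(writer);

    // ...then reload it into the classic DOM so XPath works.
    XmlDocument legacy = new XmlDocument();
    legacy.LoadXml(writer.ToString());
    XmlNode workContact = legacy.SelectSingleNode(
        "/organizer/contacts/contact[@category='work']/name");
    Console.WriteLine(workContact.InnerText);   // Jim Anderson
}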

I’ve got Jeff Atwood to thank for an awful lot of things he posts over at Coding Horror, but this post in particular has made my day. I’ve often wondered why it is that the Find/Replace RegEx engine built into the VS.NET IDE uses a wilfully different RegEx flavour to the .NET framework classes. I use The Regulator for my regular expression building/testing, but I’ve always found it confusing and irritating that I can’t take a regex from The Regulator and plug it into a standard VS.NET IDE Find/Replace.

Now at least I know why, as it seems the VS.NET Lead Program Manager thinks enough of Jeff’s blog to shed some light on the matter:

It is a very oddball regex syntax, and as best we can tell it comes from Visual C++ 2.0. We did want to add additional support for .NET 2.0-style regular expressions in the Visual Studio 2005 release, but unfortunately due to time pressures it didn’t make the final list of features. We were able to make a number of bug fixes to the existing engine though, to give some improvement over VS 2003.

We do keep this on our list of things we want to fix. Ideally at some point we’ll actually build in a nifty little extensibility point so you can wire up any regex engine you want for searches.

Extensibility point schmextensibility point. Just plug in the framework’s regular expression classes (but keep the great shortcuts like :q). Thanks :)
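
To see how far apart the two flavours are, compare the same search in each (my example, not Jeff’s – and my reconstruction of the old IDE syntax, which used < and > for word boundaries, {} for tagging and :b for whitespace):

using System;
using System.Text.RegularExpressions;

class RegexFlavourDemo
{
    static void Main()
    {
        // .NET flavour: \b for word boundaries, () for capture groups, \s and
        // [0-9] for character classes. The IDE's old flavour would express the
        // same search as something like <{[a-z]+}>:b*=:b*{[0-9]+} -- nearly
        // unreadable if you've only ever seen the framework syntax.
        Match m = Regex.Match("total = 42;", @"\b([a-z]+)\b\s*=\s*([0-9]+)");
        if (m.Success)
        {
            Console.WriteLine(m.Groups[1].Value); // total
            Console.WriteLine(m.Groups[2].Value); // 42
        }
    }
}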

I’ve been having a bit of an odd experience with some backup software that came with an external hard drive I bought for the purpose. The software in question is Dantz (now EMC) Retrospect Express 6.5, which I’d have to recommend as a great solution for personal data or small networks. Except a couple of days ago, I’d have greeted its mention with a hollow laugh.

It was easy to blame Retrospect.

The problem started a couple of months ago after a hard drive crash (my second this year – I’m starting to feel cursed). I meticulously reinstalled XP and restored data from backups. I prefer not to “ghost” system disks as I regard XP reinstalls as an opportunity for spring cleaning which should not be missed.

When I started to use Retrospect for incremental backups as normal, I discovered the problem. The external drive to which I was backing up would grind and grind for days (yes, days) before finally starting the backup process. You couldn’t kill the Retrospect process, either (not even with taskkill /im retrospect.exe /f). The only way to halt the interminable vibration transmitted through my desk to my mouse hand was to wrench the USB plug from the drive. Backups starting from scratch worked normally, so my only workaround was to simply back up everything in my list of folders which I must not lose. This inevitably led to a decrease in the frequency of my backups (daily became weekly), excepting my source control database, which is small relative to everything else and, while slow, could easily complete in under half an hour.

I’d been researching this problem for two hours of every week since the crash, and I’d been getting nowhere. Today I started to get a “bad block” warning from a second machine, accompanied by a wonderful scratchy samba beat in sync with the drive light. Uh-oh, I thought. Impending hard drive death (it’s like a sixth sense now). I couldn’t put it off any longer – I simply had to fix the problem.

But where to start? Try putting together a Google search for “my backup never finishes using Dantz (now EMC) Retrospect 6.5 on an external USB drive and I’m about to embed one of my extremities into a solid object” and you’ll be sifting through results until Jeff Atwood writes a boring blog post. It’s easy to Google hard errors like “Windows Delayed Write Failed” – just put the wording in quotes and review the possibilities. It’s less easy when a piece of software just sits there quietly shortening the service life of one of your USB devices. I have a small cache of words I trot out for Google to consume in these situations: “(hangs OR crashes OR freezes)” for simple lockups and “(grinds OR thrashes)” for hard drive activity.

In the end, when you’ve been using these kinds of search combinations for weeks with no luck, you resort to brute force searching. And this is what I did, trawling the Dantz/EMC support forums post by post. By about page 18 of posts I had an answer, and the blame wasn’t going in the expected direction.

System Restore. Great, isn’t it? Sits there, quietly monitoring everything, making sure nothing untoward can happen to your system. Including untoward things like backups, it seems. This is why even taskkill didn’t work – it seems taskkill defers to Microsoft’s own processes where System Restore is involved. This is why I’d been risking the life of my external HD by pulling the cable out: you couldn’t even log out or shut down. System Restore is only useful on system drives. Yet by default, XP monitors every “fixed” drive you have in your system (I know, I know, I’d been sticking it out with 2000 until late last year). Why should this be the case? Why can’t XP ask you about each drive, instead of assuming that big dumpster full of ISOs, RARs and RBFs you’ve got hanging off your USB bus needs watching like a hawk?

So in a backup situation to an external drive, System Restore is the last thing you want turned on. Right-click My Computer, hit Properties/System Restore, and turn it off on a per-drive basis – which in my experience means any drive you don’t add/remove programs to/from with Windows Installer.
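
If you’d rather script the change than click through the dialog (and this is a paper sketch – I haven’t run it in anger), XP exposes System Restore through WMI: the SystemRestore class in root\default has Enable and Disable methods that take a drive path. The E:\ below is a hypothetical drive letter for the external disk:

using System;
using System.Management;  // add a reference to System.Management.dll

class DisableMonitoring
{
    static void Main()
    {
        // Sketch: bind to the SystemRestore WMI class in root\default...
        ManagementScope scope = new ManagementScope(@"\\.\root\default");
        ManagementPath path = new ManagementPath("SystemRestore");
        using (ManagementClass restore = new ManagementClass(scope, path, null))
        {
            // ...and turn off monitoring for the external drive only.
            uint result = (uint)restore.InvokeMethod("Disable", new object[] { @"E:\" });
            Console.WriteLine("Disable returned {0}", result);
        }
    }
}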

I’m happy to say Retrospect is right back up there in my estimation. And MS’s position in my estimation hasn’t changed a great deal.
