A typical DBA concern in larger shops is that once stored procedures no longer figure in the data access strategy (or at least, are used where they are genuinely needed rather than as a mandate), they have no way of knowing what kind of NH-generated SQL might cause problems in production. A while ago I bought Ayende's NHibernate Profiler (hereafter NHProf) partly as a way to alleviate these concerns.

As I'm sure everyone knows by now, NHProf is a nice tool which helps you spot when you're doing something boneheaded with NHibernate and gives you handy suggestions on how to fix it. It'll also show you every SQL statement you're throwing at your database, nicely formatted for your convenience, and it groups those statements into individual ISessions. So if I have three integration tests based on Northwind that look like this (InTransactionalSession is just shorthand for using(ISession) ... using(ITransaction)):

[Test]
public void CanLoadCustomer()
{
    InTransactionalSession(session =>
    {
        var customer = session.Load<Customer>("ALFKI");
        Assert.That(customer.CompanyName, Is.Not.Null & Is.Not.Empty);
    });
}

[Test]
public void CanGetCustomersByRegion()
{
    InTransactionalSession(session =>
    {
        IList<Customer> customers = session.CreateQuery("from Customer c where c.Region = 'OR'")
            .List<Customer>();
        Assert.That(customers.Count, Is.GreaterThan(0));
    });
}

private const string TestCustomerId = "FLURB";
[Test]
public void CanDoRoundTripPersistenceTest()
{
    InTransactionalSession(session =>
        session.Delete(
            string.Format("from Customer c where c.CustomerId = '{0}'",
                TestCustomerId)));

    InTransactionalSession(session =>
        new PersistenceSpecification<Customer>(session)
            .CheckProperty(c => c.CustomerId, TestCustomerId)
            .CheckProperty(c => c.CompanyName, "Flurb's Jelly")
            .CheckProperty(c => c.ContactName, "Mr. Flurby")
            .CheckProperty(c => c.Region, "OR")
            .VerifyTheMappings());
}

I can see four InTransactionalSession calls in those three tests, and the output from NHProf gives us what we might expect:

NHProf Numbered Sessions

Perfectly accurate, but if I had a hundred tests I'd struggle to spot which test had caused a problem, and I'd lose time tying a session back to a test (ok, not too much time – NHProf has a stack-trace tab that lets you double-click to jump back to your code – but I like "at a glance"). Post-NHProf v1.0, Ayende asked for feedback on what should go into future releases. Since he'd already covered showing DB query plans, I thought I'd take a punt on named sessions.

A week or two back Ayende mailed me to say he’d added support (great customer service!). Now all you’ve got to do is call

NHibernateProfiler.RenameSessionInProfiler(session, sessionName);

and the next session that NHProf hears about will take that name. Combining this with a stack trace gives me exactly what I wanted in just a couple of overloads:

private string lastTestMethodName;
private int sameMethodCount;

protected void InTransactionalSession(Action<ISession> action)
{
    // Name the session after the calling test method, with a counter
    // so multiple sessions within one test remain distinguishable.
    string currentTestMethodName = new StackTrace().GetFrames()[1].GetMethod().Name;
    sameMethodCount = currentTestMethodName == lastTestMethodName ? sameMethodCount + 1 : 1;

    string methodName = string.Format("{0} #{1}", currentTestMethodName, sameMethodCount);
    InTransactionalSession(methodName, action);

    lastTestMethodName = currentTestMethodName;
}

protected void InTransactionalSession(string sessionName, Action<ISession> doDataAccess)
{
    using(ISession session = SessionFactory.OpenSession())
    using(ITransaction tx = session.BeginTransaction())
    {
        NHibernateProfiler.RenameSessionInProfiler(session, sessionName);
        doDataAccess(session);
        if(session.Transaction.IsActive)
            tx.Commit();
    }
}

This results in something that lets me tie sessions to methods at a glance:

NHProf Named Sessions

At the moment this is great for people new to NH who want a quick way of demonstrating concepts, but it's also handy for running a batch of tests at once and seeing at a glance where your problem areas are. Cheers, Ayende.

In my last post, I bemoaned the fact that WCF and ReST are still strange bedfellows and that the ReST starter kit, while promising, was in no way ready for prime time. Although a new preview release was uploaded last October, the license is still for preview software and requires that you upgrade to a commercial version if and when one is released. In any case, this post is mostly about why I ended up not using it (way back in July last year), and why I went instead with a relative unknown that turned out to be so much more suitable – OpenRasta.

WCF’s support for ReST

WCF is designed as a transport-agnostic support framework for services based on Operations (which have a natural mapping to methods) and Contracts (which have a natural mapping to behaviour-free data transfer objects). This means that your design is inherently RPC in nature. Initially, we thought this’d be a good fit. We knew we needed ReST (or at least, POX over HTTP). This being the Syndication arm of NHS Choices, we also thought that in order to conform to government standards we might need SOAP as well. WCF seemed to fit the bill – we could write our operations as needed and just host a couple of endpoints: one SOAP, one ReST. To start with, the development model also seemed simple and familiar.
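
A concrete sketch of that shape (the service and DTO names here are hypothetical, not from our codebase): an Operation per method, a Contract per behaviour-free DTO.

    using System.Runtime.Serialization;
    using System.ServiceModel;
    using System.ServiceModel.Web;

    [ServiceContract]
    public interface ISyndicationService
    {
        // An Operation maps naturally onto a method call - RPC in shape,
        // whatever the transport underneath.
        [OperationContract]
        [WebGet(UriTemplate = "articles/{id}")]
        ArticleDto GetArticle(string id);
    }

    // A Contract maps naturally onto a behaviour-free data transfer object.
    [DataContract]
    public class ArticleDto
    {
        [DataMember]
        public string Title { get; set; }

        [DataMember]
        public string Body { get; set; }
    }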

Well, we had articles to syndicate, we had images, we had videos and audio clips, and we knew we needed various representations of each thing we were syndicating (JSON for digital set-tops and handset consumers, XHTML for a discovery channel). It's here that WCF started to let us down: out of the box, it doesn't support multiple representations.

For a while we used the open source library WcfRestContrib, which goes a long way towards providing some kind of content negotiation for WCF: based on the HTTP Accept header, it will switch content types using its own formatters. Even this has a limitation, though: the set of representations is fixed and has to apply to all URIs, and a JSON representation of a video, for example, makes almost no sense. At this point it started to become clear that we were asking too much of a service-based technology with strong RPC overtones. While we could have extended WCF, the learning curve was steep and unpalatable. In addition, because we were primarily focussed on the usability of our ReST URI set, our "Operations" had become so ReST-centric that we wouldn't really have had the mooted "SOAP-in-parallel" benefit anyway.

Stumbling across OpenRasta

The comments in this StackOverflow post put me onto OpenRasta. Two things put me off in the middle of last year, though: there wasn't much documentation, and there appeared to be only a single committer. A number of things encouraged and intrigued me, however. The fluent configuration gave me warm fuzzies immediately, both because it met our conneg requirements instantly and because it was vastly easier to read than WCF's XML configuration:

    public void Configure()
    {
        using (OpenRastaConfiguration.Manual)
        {
            ResourceSpace.Has.ResourcesOfType<Customer>()
                .AtUri("/customer/{customerId}")
                .HandledBy<CustomerHandler>()
                .AsXmlDataContract()
                .And.AsJsonDataContract();
        }
    }

Score! Two representations that are attached specifically to that URI, with some class called CustomerHandler providing some kind of output. Wondering what kind of output that could be, I was won over by the absolute simplicity in the handlers:

    public class CustomerHandler
    {
        public Customer Get(int customerId)
        {
            return CustomerRepository.Get(customerId);
        }
    }

This simplicity is entirely brought about by OpenRasta's built-in, convention-based approach to method matching. No configuration files were harmed during the mapping of this URI to its method; OpenRasta simply looks for a method whose name starts with "Get" and whose parameter is named the same as the token in the squiggly braces of the URI, and it ends up returning a POCO Customer object. Another beautiful thing is that handlers don't need to inherit from any special type of object, leaving your one shot at inheritance still available to you.

Note: this attention to preserving individual developer productivity appears again and again in OpenRasta. You’ll go to do something that you thought you might have to write (creating URIs for other resources, or HTTP digest authentication, for example) and you’ll find it baked into the framework.

So if that handler could return a straightforward POCO, what was responsible for wrapping it up in the format the HTTP GET had asked for with its Accept header? The answer lies with codecs, and while OpenRasta ships with most of the ones you'd need (for (X)HTML, XML and JSON), the extensibility model again wins with its simplicity – you implement a couple of interfaces, with one method each, to handle encoding and decoding.
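
As a sketch of how small that surface is, a custom writer might look like this – the interface and attribute names are recalled from OpenRasta's codec API, so treat the exact signatures as assumptions to check against the source:

    using System.IO;
    using OpenRasta.Codecs;
    using OpenRasta.Web;

    // A hypothetical CSV representation for Customer resources; [MediaType]
    // declares which Accept values this codec can satisfy.
    [MediaType("text/csv")]
    public class CustomerCsvCodec : IMediaTypeWriter
    {
        public object Configuration { get; set; }

        public void WriteTo(object entity, IHttpEntity response, string[] codecParameters)
        {
            var customer = (Customer)entity;
            using (var writer = new StreamWriter(response.Stream))
            {
                writer.WriteLine("CustomerId,CompanyName");
                writer.WriteLine("{0},{1}", customer.CustomerId, customer.CompanyName);
            }
        }
    }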

I've stated many things I like about OpenRasta, but I've barely scratched the surface. I've not mentioned extending the pipeline (which meant we were able to bolt API key-based resource governance right in) or even the core developmental nicety that is OpenRasta's built-in support for IoC containers. Windsor and Ninject support are there out of the box, though there's a basic internal implementation that'll serve 80% of small to medium-sized projects anyway. When you start to put together HTML pages, you can do that with the familiar ASPX markup model – though even that has had a spruce-up and extends the ASP.NET build provider in some useful ways.

In Summary

Why we found WCF isn’t a great fit for ReST

  • WCF is designed as a transport-agnostic support framework for services based on Operations and Contracts. This means that your design is inherently RPC in nature. For simple cases, this might be enough.
  • ReST is an afterthought in WCF (hey, we *can* do this quite easily, and we'll service the needs of 80% of developers, who can continue to think of methods in services as the One True Way). However, ReST is so different to, say, SOAP that it forces you to structure your WCF app in such a way that it can barely reap any of the rewards of transport agnosticism anyway.
  • Content negotiation is not a given in WCF. It requires third-party extensions such as WcfRestContrib, and even then you cannot negotiate content on a per-URI basis: I can't say that /videos/1 has only a text/html representation while /conditions/cancer/introduction has both text/html and application/json – one fixed set of representations has to apply to all URIs.

Why we found that OpenRasta is

  • OpenRasta – like HTTP since the mid-'90s – focuses on resources and their representations, not RPC.
  • A resource is just a POCO which is addressable by one or more URIs.
  • A representation is just an encoding of the resource (JSON, XML, byte stream), negotiated on what you said you'd Accept in your HTTP headers.
  • A URI can have as many or as few representations available as it requires.

WCF is designed around the concepts of Services and Contracts. HTTP – and by extension ReST – is not: it is about resources and representations of those resources. OpenRasta understands this and has been designed from the ground up to support ReST-based architectures simply and elegantly. It's a natural fit with HTTP, where WCF is an awkward mismatch.
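
To illustrate with the earlier /videos example: in OpenRasta the representations hang off each resource registration, so giving videos a single HTML representation while condition pages negotiate between HTML and JSON is just configuration. A sketch, with hypothetical resource and handler names (RenderedByAspx is recalled from the fluent API rather than guaranteed):

    using (OpenRastaConfiguration.Manual)
    {
        // Videos only ever render as a web page...
        ResourceSpace.Has.ResourcesOfType<Video>()
            .AtUri("/videos/{id}")
            .HandledBy<VideoHandler>()
            .RenderedByAspx("~/Views/VideoView.aspx");

        // ...while condition pages negotiate between XHTML and JSON.
        ResourceSpace.Has.ResourcesOfType<ConditionSection>()
            .AtUri("/conditions/{condition}/{section}")
            .HandledBy<ConditionHandler>()
            .RenderedByAspx("~/Views/ConditionView.aspx")
            .And.AsJsonDataContract();
    }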

I needed two WCF books from Safari Online, plus WcfRestContrib, to even begin to implement what we needed in WCF – and it still fell short of what we wanted to achieve while compromising the design at the same time. OpenRasta not only freed us from WCF's overbearing, config-heavy complexity; its elegant MVC model (with IoC at the core) made our code easy to write, easy to read and, above all, a pleasure to maintain.

And if a project is only to have a single committer, better it be a “self-proclaimed, egotistical doofus” who also happens to be a borderline genius.

I've been wanting to work with WCF for a fairly long time. I suppose I could have flirted with burnout and done it on top of my daily paying activities, but I like to think my health's improved by not doing that kind of thing. In any case, a paying client suddenly has a requirement for a REST-based set of services, and I thought it'd be nice to try this with WCF for a couple of reasons:

  1. I always liked REST (die, SOAP, die) and have rolled my own more times than I’d care to mention
  2. WCF has an almost shiny, nearly-new REST Starter Kit

I've seen enough of WCF to know that I like the separation of contract from implementation, and I thought I'd be able to get a good idea of whether the kit did enough of the following to make the effort invested in a "starter kit" with no go-live license worthwhile:

  • Authorization / API key implementation (check – one sample project has an interceptor-based solution)
  • Serialization to XML and JSON (check – it’s baked right into the kit when you implement a collection service base)
  • Error handling and presentation (half a check – nicely RESTful, with a WebProtocolException that lets you map what just happened onto the most appropriate HTTP status code – see the sketch after this list – but I would like more control over what gets rendered back)
  • Documentation/discoverability (half a check – there’s a nice .../help URL based on standard WCF contract/member attributes but there’s no WADL. Did we need it anyway?)
  • General syndication support (check, thank you SyndicationItem)
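
On the error-handling point above, usage looks roughly like this. The entity type and repository call are illustrative stand-ins, and WebProtocolException's exact overloads (it lives in Microsoft.ServiceModel.Web) should be checked against the kit:

    // Translate a missing entity into a proper 404 rather than a generic 500.
    public ItemInfo GetItem(string id)
    {
        ItemInfo item = repository.Find(id);
        if (item == null)
            throw new WebProtocolException(HttpStatusCode.NotFound);
        return item;
    }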

In any case it all looks terribly promising. The templates are good, the screencasts are good … hang on, wait – Aaron Skonnard’s screencasts are mucking about with a Service.svc.base.cs file I don’t have, and that’s how he’s customising the output XML. That’s not in Preview 2. Suddenly I find that this stuff’s moved into Microsoft.ServiceModel.Web and I can’t edit it any more. So the customisation story must have moved. Then I discover that the way to customise is apparently to move the source code into your project and change the namespace so it doesn’t conflict with the existing ServiceModel. Woah. That’s not even as nice an “extensibility point” as we had in Preview 1… oh well. So I’m stuck with ItemInfoList and ItemInfo in my output XML. Suddenly the lack of the aforementioned go-live license doesn’t seem so bad. I wouldn’t be able to use this as it is now.

So this leaves me in quite an odd place – clamouring for half of a very promising toolkit that just isn’t quite finished and which I wouldn’t be legally allowed to use anyway. However, the source code appears to be all there. And we need a REST strategy really soon. Not to denigrate the excellent effort from the MS REST team, but it’s at times like this you wish the effort you’d just put in had been directed at an open source project – at least you’d have somewhere to go, even if you had to pedal that last mile yourself. I’ve been titillated and subsequently frustrated – I’m not allowed to touch, as the chaperone’s an army of lawyers.

Looks like I’ll be rolling my own again.

EDIT: Hmm, I might have been a bit hasty – there’s plenty in WCF 3.5 as it stands to build REST services. Just not quite as easily. I’ll miss WebProtocolException and I’ll miss having the zero-config approach. I might also miss the interceptor-based auth and the in-built help generation. But it’s still doable, if just somewhat harder…
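
For the record, a rough sketch of that plain WCF 3.5 approach, using only what ships in System.ServiceModel.Web (the service and repository here are hypothetical):

    using System;
    using System.ServiceModel;
    using System.ServiceModel.Web;

    [ServiceContract]
    public class CustomerService
    {
        // WebGet maps this operation onto GET customers/{id}; WebServiceHost
        // below wires up the WebHttpBinding with no config file at all.
        [OperationContract]
        [WebGet(UriTemplate = "customers/{id}", ResponseFormat = WebMessageFormat.Xml)]
        public Customer GetCustomer(string id)
        {
            return CustomerRepository.Get(id);
        }
    }

    class Program
    {
        static void Main()
        {
            var host = new WebServiceHost(typeof(CustomerService),
                new Uri("http://localhost:8080/"));
            host.Open();
            Console.WriteLine("Listening; press Enter to quit.");
            Console.ReadLine();
            host.Close();
        }
    }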

Does anyone else think that C# 3.0's var keyword and its concomitant type inference (see the LINQ Preview if you've no idea what I'm rattling on about) is in some circumstances pointlessly clever syntactic sugar? Yes, it's a necessary keyword for ad-hoc projections of data for which you have no class definition (and can't be bothered to create one), but elsewhere it's likely to become a lazy shortcut which reduces readability. One of the first LINQ preview's hands-on labs has this piece of code:

static void ObjectQuery()
{
    var people = new List<Person> {
        new Person { Age=12, Name="Bob" },
        new Person { Age=18, Name="Cindy" },
        new Person { Age=13 }
    };

    var teenagers =
        from p in people
        where p.Age > 12 && p.Age < 20
        select p;

    Console.WriteLine("Result:");
    foreach(var val in teenagers)
    {
        Console.WriteLine("> Name = {0}, Age = {1}", val.Name, val.Age);
    }
}

It's the foreach(var val in teenagers) that gets me – what's wrong with foreach(Person teenager in teenagers)? Ok, so it's a trivial example, but since var offers a lazy way of avoiding thinking about what kind of object you're really dealing with, it'll become an easy shortcut, and it's simply less readable than the explicit form.

And how about this (from the C# 3.0 Language Enhancements Guide, also in the LINQ May preview)?

var x = 7;
var s = "This is a string.";
var d = 99.99;
var numbers = new int[] { 0, 1, 2, 3, 4, 5, 6 };

Console.WriteLine("x: " + x);
Console.WriteLine("s: " + s);
Console.WriteLine("d: " + d);
Console.WriteLine("numbers: ");
foreach (int n in numbers) Console.WriteLine(n);

Of course, no-one in their right mind would write var x = 7 unless they had no idea that C# is a strongly-typed language whose type system can confer certain compile-time benefits (a particularly insular JavaScript programmer, perhaps). So why allow it? It's perfectly reasonable to allow var where it's required: i.e. when the compiler has constructed an anonymous class to deal with a projection of data. In nearly all other circumstances, however, this lazy shortcut will allow mediocre programmers to believe they need to think even less about data typing than they did before, and in the process add an extra barrier to maintenance. Perhaps the compiler (or FxCop) could be set to enforce readability where the type of var is expressible?
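
For contrast – and reusing the people list from the earlier example – here's the one case where var genuinely earns its keep: a projection to an anonymous type that has no name you could write out.

    // The compiler generates an anonymous class for this projection,
    // so var is the only way to declare the result.
    var namesAndFlags =
        from p in people
        select new { p.Name, IsTeenager = p.Age > 12 && p.Age < 20 };

    foreach (var entry in namesAndFlags)
        Console.WriteLine("{0} (teenager: {1})", entry.Name, entry.IsTeenager);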

It’s not the fact that you can query XML documents in-memory using a SQL-like syntax (hey, I already had XPath, thanks all the same).

It’s the fact that you can now create an XML document, for use in an example, where the C# code structure used to create the document mirrors almost exactly that of the resulting output document.

This is brought about by the ancient magic of variable parameter lists and well thought-out constructor overloads, and whoever’s responsible, I’d like to buy him a drink and look after his cat while he’s on holiday. This technique sits right between ugly raw DOM hacking (of which, I confess, I have done much) and beautifully-generated but hard-to-keep-in-sync XML Data Binding.

This is from the LINQ hands-on labs:

public static XDocument CreateDocument()
{
    // create the document all at once
    return new XDocument(
        new XDeclaration("1.0", null, null),
        new XElement("organizer",
            new XElement("contacts",
                new XElement("contact",
                    new XAttribute("category", "home"),
                    new XElement("name", "John Smith")),
                new XElement("contact",
                    new XAttribute("category", "home"),
                    new XElement("name", "Sally Peters")),
                new XElement("contact",
                    new XAttribute("category", "work"),
                    new XElement("name", "Jim Anderson")))));
}

Wonderful, n'est-ce pas? The only shame is that the "X" classes are a little divorced from the rest of the XML namespaces; their primary purpose is to provide something for LINQ to talk to. So if I do want to use XPath, I'll have to .Save this doc into a memory stream and reload it. Sigh…
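
At least the round trip is only a few lines. A minimal sketch:

    using System;
    using System.IO;
    using System.Xml;

    public static void QueryWithXPath()
    {
        // Save the XDocument into a MemoryStream, reload it as a classic
        // XmlDocument, and XPath works again.
        using (var stream = new MemoryStream())
        {
            using (XmlWriter writer = XmlWriter.Create(stream))
                CreateDocument().Save(writer);

            stream.Position = 0;
            var legacyDoc = new XmlDocument();
            legacyDoc.Load(stream);

            XmlNodeList workNames = legacyDoc.SelectNodes(
                "/organizer/contacts/contact[@category='work']/name");
            foreach (XmlNode name in workNames)
                Console.WriteLine(name.InnerText); // Jim Anderson
        }
    }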
