In my last post, I bemoaned the fact that WCF and ReST are still strange bedfellows and that the ReST starter kit, while promising, was still in no way ready for prime time. Although a new preview release was uploaded last October, the license is still for preview software and requires that you upgrade to a commercial version at the time such a thing might be released. In any case, this post is mostly about why I ended up not using it (way back in July last year), and why I ended up going with a relative unknown that turned out to be so much more suitable – OpenRasta.

WCF’s support for ReST

WCF is designed as a transport-agnostic support framework for services based on Operations (which have a natural mapping to methods) and Contracts (which have a natural mapping to behaviour-free data transfer objects). This means that your design is inherently RPC in nature. Initially, we thought this’d be a good fit. We knew we needed ReST (or at least, POX over HTTP). This being the Syndication arm of NHS Choices, we also thought that in order to conform to government standards we might need SOAP as well. WCF seemed to fit the bill – we could write our operations as needed and just host a couple of endpoints: one SOAP, one ReST. To start with, the development model also seemed simple and familiar.

Well, we had articles to syndicate, we had images, we had some videos and audio clips. And it’s at this point that WCF started to let us down. We also knew we needed various representations of each thing we were syndicating (JSON for digital set-tops and handset consumers, XHTML for a discovery channel), and WCF out of the box didn’t support multiple representations.

For a while we used the open source library WcfRestContrib, which goes a long way to providing some kind of content negotiation for WCF. Based on the HTTP Accept header, WcfRestContrib will switch content type using its own formatters. Even this has a limitation, though. For example, a JSON representation of a video makes almost no sense, and yet we could only supply a fixed set of representations which had to apply to all URIs. At this point it started to become clear that we were asking too much of a service-based technology with strong RPC overtones. While we could have extended WCF, the learning curve was steep and unpalatable. In addition, because we were primarily focussed on the usability of our ReST URI set, our “Operations” had become so ReST-centric we wouldn’t really even have had the mooted “SOAP-in-parallel” benefit anyway.

Stumbling across OpenRasta

The comments in this StackOverflow post put me onto OpenRasta. Two things put me off in the middle of last year, though: there wasn’t much documentation and there appeared to be only a single committer. However, a number of things encouraged and intrigued me. The fluent configuration gave me warm fuzzies immediately, both because it met our conneg requirements instantly and because it was vastly easier to read than the WCF XML configuration:

        public void Configure()
        {
            using (OpenRastaConfiguration.Manual)
            {
                ResourceSpace.Has.ResourcesOfType<Customer>()
                    .AtUri("/customer/{customerId}")
                    .HandledBy<CustomerHandler>()
                    .AsXmlDataContract()
                    .And.AsJsonDataContract();
            }
        }

Score! Two representations that are attached specifically to that URI, with some class called CustomerHandler providing some kind of output. Wondering what kind of output that could be, I was won over by the absolute simplicity in the handlers:

    public class CustomerHandler
    {
        public Customer Get(int customerId)
        {
            return CustomerRepository.Get(customerId);
        }
    }

This simplicity is entirely brought about by OpenRasta’s built in, convention-based approach to method matching. No configuration files were harmed during the mapping of this URI to its method; OpenRasta simply looks for a method starting with “Get” and with a parameter called the same thing as you put in the squiggly braces in the URI, and ends up returning a POCO Customer object. Another beautiful thing was that no handlers need to inherit from any special kind of object, leaving your one shot at inheritance still available to you.
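The same conventions extend to the other verbs. As a sketch only (the Post method, the repository calls and the OperationResult usage here illustrate OpenRasta’s conventions rather than being lifted from our codebase):

```csharp
public class CustomerHandler
{
    public Customer Get(int customerId)
    {
        return CustomerRepository.Get(customerId);
    }

    // An HTTP POST to the same URI is matched by name to this method;
    // OpenRasta decodes the request body into the Customer parameter
    public OperationResult Post(Customer customer)
    {
        CustomerRepository.Save(customer);

        // Tell the framework to respond 201 Created, with the new
        // resource as the entity body
        return new OperationResult.Created { ResponseResource = customer };
    }
}
```

Returning an OperationResult rather than a bare POCO is how you take control of the status code when you need to; returning the POCO directly gets you a 200 by convention.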

Note: this attention to preserving individual developer productivity appears again and again in OpenRasta. You’ll go to do something that you thought you might have to write (creating URIs for other resources, or HTTP digest authentication, for example) and you’ll find it baked into the framework.

So if that handler could return a straightforward POCO, what was responsible for wrapping that up in the format that the HTTP GET had asked for with its Accept header? The answer lies with codecs, and while OpenRasta ships with most that you’d need (for (X)HTML, XML and JSON), again the extensibility model wins with its simplicity – you simply implement a couple of interfaces with one method each to handle encoding and decoding.
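To give a flavour of that extensibility, here’s a sketch of a write-only codec. The vCard media type and the Customer fields are made up for illustration, and you should check the OpenRasta source for the exact interface signatures:

```csharp
// Advertises which media type this codec produces, so conneg can pick it
[MediaType("text/x-vcard")]
public class CustomerVCardCodec : IMediaTypeWriter
{
    public object Configuration { get; set; }

    // Called when the Accept header negotiation lands on text/x-vcard
    public void WriteTo(object entity, IHttpEntity response, string[] codecParameters)
    {
        var customer = (Customer)entity;
        using (var writer = new StreamWriter(response.Stream))
        {
            writer.WriteLine("BEGIN:VCARD");
            writer.WriteLine("FN:" + customer.Name);
            writer.WriteLine("END:VCARD");
        }
    }
}
```

Wiring it up is then a one-liner in the fluent configuration – something along the lines of `.TranscodedBy<CustomerVCardCodec>()` on the resource registration.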

I’ve stated many things I like about OpenRasta but I’ve barely scratched the surface. I’ve not mentioned extending the pipeline (which meant we were able to bolt API key-based resource governance right in) or even the core developmental nicety that is OpenRasta’s built in support for IoC containers. Windsor and Ninject support are there out of the box, though there’s a basic implementation that’ll serve 80% of small to medium-sized projects anyway. When you start to put together HTML pages, you can do that with the familiar ASPX markup model – though even that has had a spruce-up and has extended the ASP.NET build provider in some useful ways.
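The IoC support means handlers simply declare their dependencies and get on with it. A sketch (ICustomerRepository and the concrete implementation are hypothetical names, not part of OpenRasta):

```csharp
public class CustomerHandler
{
    readonly ICustomerRepository _repository;

    // OpenRasta resolves constructor arguments from whichever
    // container you've plugged in (or its built-in one)
    public CustomerHandler(ICustomerRepository repository)
    {
        _repository = repository;
    }

    public Customer Get(int customerId)
    {
        return _repository.Get(customerId);
    }
}
```

Registration lives alongside the resource configuration – something like `ResourceSpace.Uses.CustomDependency<ICustomerRepository, SqlCustomerRepository>(DependencyLifetime.Singleton);` if you’re using the built-in container.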

In Summary

Why we found WCF isn’t a great fit for ReST

  • WCF is designed as a transport-agnostic support framework for services based on Operations and Contracts. This means that your design is inherently RPC in nature. For simple cases, this might be enough.
  • ReST is an afterthought in WCF (hey, we *can* do this quite easily and we’ll service the needs of 80% of developers and they can continue to think about methods in services as the One True Way). However, given that it’s so different to, say, SOAP, it requires that you structure your WCF app in such a way that it can barely reap any of the rewards of transport agnosticism anyway.
  • Content negotiation is not a given in WCF. It requires use of third party extensions such as WcfRestContrib, and even then you cannot negotiate content on a per-URI basis. For example, I can’t say that at /videos/1 I will have a text/html representation and at /conditions/cancer/introduction I will have text/html and application/json. Even with WcfRestContrib you will only have a fixed set of representations which will have to apply to all URIs.

Why we found that OpenRasta is a great fit for ReST

  • OpenRasta – like HTTP since the mid-90s – focuses on resources and their representations, not RPC.
  • A resource is just a POCO which is addressable by one or more URIs.
  • A representation is just an encoding of the resource (JSON, XML, byte stream) negotiated on what you said you’d Accept in your HTTP headers.
  • A URI can have as many or as few representations available as it requires.
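Those bullets fall straight out of the configuration. A sketch of the per-URI negotiation WCF couldn’t give us (the resource and handler class names here are hypothetical):

```csharp
using (OpenRastaConfiguration.Manual)
{
    // HTML only – a JSON representation of a video makes no sense
    ResourceSpace.Has.ResourcesOfType<Video>()
        .AtUri("/videos/{id}")
        .HandledBy<VideoHandler>()
        .RenderedByAspx("~/Views/Video.aspx");

    // HTML and JSON, on this URI alone
    ResourceSpace.Has.ResourcesOfType<Article>()
        .AtUri("/conditions/{condition}/{section}")
        .HandledBy<ArticleHandler>()
        .RenderedByAspx("~/Views/Article.aspx")
        .And.AsJsonDataContract();
}
```

Each resource carries exactly the representations that make sense for it, and nothing leaks across URIs.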

WCF is designed around the concepts of Services and Contracts. HTTP, and by extension ReST, are not – they are about resources and representations of those resources. OpenRasta understands this and has been designed from the ground up to support ReST-based architectures simply and elegantly – it’s a natural fit with HTTP, whereas WCF is an awkward mismatch.

I needed two WCF books from Safari Online and WcfRestContrib to even begin to implement what we needed in WCF and it still fell short of what we wanted to achieve and compromised the design at the same time. OpenRasta not only freed us from WCF’s overbearing and config-heavy complexity – its elegant MVC model (with IoC at the core) made our code easy to write, easy to read, and above all a pleasure to maintain.

And if a project is only to have a single committer, better it be a “self-proclaimed, egotistical doofus” who also happens to be a borderline genius.

I’ve been having a play with Ubiquity today. It shows a lot of promise and is a nice thing to use – both from a command user point of view and the point of view of a command author. It’s great that they’ve baked jQuery right in too. That should keep me coming back.

As a small learning exercise, I’ve made a search command for the site that I currently work on – NHS Choices. Visit the Ubiquity Commands page to subscribe to the command (you’ll need to have previously installed Ubiquity).

Håkon Wium Lie, the original proposer of CSS, answers questions from Slashdotters about the origins of the language, why it’s not progressing as fast as web designers would like, and why he lies about the pronunciation of his first name.

If you have even a passing interest in CSS, this is a good read. Of particular interest is the answer to the question:

> by Dolda2000

> If you were allowed (perhaps by court order, which wouldn’t be
> unthinkable) to force Microsoft to do one (1) change in Internet
> Explorer, what would that be?

I would force them to support one (1) single web page before shipping IE7, namely Acid2. By using a tiny amount of resources to get Acid2 right, Microsoft can save web designers and users endless amounts of frustration in the future. It would also be an honorable thing to do.

However, in answer to another question further down, he tells us why this dream scenario will never happen:

It’s quite clear that Microsoft has the resources and talent to support CSS2 fully in IE and that plenty of people have reminded them why this is important. So, why don’t they do it? The fundamental reason, I believe, is that standards don’t benefit monopolists. Accepted, well-functioning, standards lower the barrier of entry to a market, and is therefore a threat to a monopolist.

From that perspective, it makes sense to leave CSS2 half-implemented. You can claim support (and many journalists will believe you), and you also ensure that no-one can use the unimplemented (or worse: buggily implemented) features of the standard. The only way to change the equation is to remind Microsoft how embarrassing it is to offer a sub-standard browser. And to use better browsers.

So there you have it. IE7 might help a little – and frankly it would be a relief just to be able to use the years-old child and attribute selectors, even if we have to wait a few more years until IE7’s penetration is high enough for that to be safe – but IE as a browser is going to drag its feet because MS doesn’t want the web to compete with Windows as a platform. So we as web developers must continue to use ASP.NET 2.0 with Firefox, Firebug, the Web Developer Toolbar, CSSVista, and all the other nifty little tools which are growing into the space which MS steadfastly refuses to occupy. And all the while, we must embarrass MS into some semblance of standards compliance.

Just think about it for a moment though – as an ASP.NET web developer, wouldn’t you love to be able to ditch the “code for Firefox, fix for IE” mentality, and have a fully integrated AJAX IDE where you could debug your JavaScript in an integrated manner in Visual Studio and not have to worry about a separate browser for CSS? Wouldn’t it be nice if Visual Studio was your CSS IDE and you could see your changes live and be certain that your layout would render the same in any browser?

The nice folks at SiteVista have released a great free product – CSSVista – that allows you to edit CSS and see the results in Firefox and IE at the same time. This has quickly overtaken Chris Pederick’s Web Developer Toolbar as my most-used CSS tool. It’s not without its quirks and it’s astonishingly basic (it’s v0.1 after all!). However, it can handle relative paths to images and still display them (something Web Developer Toolbar can’t do). You get a split screen view of IE and Firefox at the same time, and this gives you instant visual feedback as you edit CSS.

CSSVista Screenshot

SiteVista make their money from serving you the results of multiple browser CSS testing, so have a look at their paid-for service while you’re there. I’ll be subscribing to their blog to get news about v0.2 :)

I don’t use many (actually, I try not to use any) of the built-in styling properties of ASP.NET server controls.  The only property that means a thing to me is CssClass.  However, there are a few properties I find I always have to set to avoid crummy markup.  Chief amongst these is the GridView’s GridLines property – if you don’t set this to ‘None’ and CellSpacing to -1, CSS styling of grid borders isn’t possible as inline styles will override any of your stylings.

I don’t really theme sites either, largely because of the aforementioned bad markup – the presentational properties result in a slew of inline styles applied to many elements which bloats the HTML. Since I’m not using them for anything else, a single ‘Default’ theme’s skin files actually work very well as a “policy” style document.  I always set these two properties on any GridView – now I can set them once in a ‘Default’ theme and every GridView I use will inherit the policy. Not precisely what themes were intended for, but pretty useful nonetheless (at least until ASP.NET Control Adapters arrive later this month) :)
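For the record, the whole “policy” skin amounts to no more than this – a default (SkinID-less) skin entry, which ASP.NET applies to every GridView on the site (file path shown per the usual App_Themes convention):

```
<%-- App_Themes/Default/Default.skin --%>
<asp:GridView runat="server" GridLines="None" CellSpacing="-1" />
```

Because the skin has no SkinID, no per-control opt-in is needed – every GridView picks it up automatically once the theme is set in web.config.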
