Monday 6 August 2012

New blog

I don't blog often, but when I do it's now at http://blog.semeosis.com. There I've written about some of the things I've looked into over the last couple of years, including:

  • The Star Schema Benchmark for testing database performance against star schemas (in particular, getting it working with SQL Server)
  • Erlang (in particular an intro for people from a C#/Java background)
  • Setting up emacs for writing Standard ML (which is to F# what C is to C++/ObjC)
  • Talks I've given on things like NoSQL and CQRS with links to recordings and slidedecks
  • Who knows what other things by the time you're reading this

As can be seen from these topics, my focus is far less on the world of Microsoft and .NET than it was, hence the move. Of course, I'll be leaving this content up. It still seems to get a fair number of visitors, and for some reason that post on how uninstalling AVG screwed up Exchange on a Small Business Server still seems to be helping people out.

Monday 25 January 2010

Thomas Kuhn, Paradigms, NoSQL and the RDBMS

It’s been a long time since I last blogged, and this may be the last time I blog here as I feel the title no longer adequately reflects my interests but…

As I prepare my talk for #DDD8, on NoSQL generally and CouchDB in particular, the applicability of Kuhn’s concept of the paradigm to the current NoSQL/RDBMS situation becomes more and more interesting to me.

I’ll start by stating, in brief, what is at least in some quarters considered self-evident truth:

  • The RDBMS has been the norm for storing data from applications for a long time;
  • RDBMS technologies (Oracle in particular, but the others to differing degrees as well) have become too complex and have suffered from feature-bloat;
  • The problems which the RDBMS can’t solve (at least not elegantly) have grown in number and importance;
  • Technology (in particular hardware) has changed dramatically;
  • The problems which we want an RDBMS to solve have changed dramatically also.

OK, that said, let’s briefly look at what Kuhn says in his seminal book ‘The Structure of Scientific Revolutions’ (my copy is in my parents’ attic so I’m going here from memory and old uni essays on my HDD).

Science doesn’t build one discovery upon the next in a neat fashion. An initial paradigm arises in response to particular problems which it is seen to be helpful in solving.

“Paradigms gain their status because they are more successful than their competitors in solving a few problems that the group of practitioners have come to recognise as acute. To be more successful is not, however, to be either completely successful with a single problem or notably successful with any large number.” (Kuhn 1962:23)

Once this has happened it achieves a level of dominance (hegemony perhaps) over thinking within that community, and further problems are defined in its terms. In order to accommodate the solving of new, different problems the theory grows, becoming increasingly unwieldy. For some problems, however, embellishing the model to resolve them is not possible. Over time such anomalies grow in number and in importance. Eventually ‘rebels’ propose counter-theories. Kuhn describes this period as one of crisis and of revolution. An alternative candidate will only supplant the existing paradigm if it accommodates the previously accrued anomalies better and also provides a new impetus for scientific study.

So, to make clear the parallels I’m suggesting between Kuhn’s theory of paradigms and the situation with storing data, I’ll go over my initial bullet points in the light of what I’ve just said about Kuhn:

  • The RDBMS model (and in particular its expression in technologies like Oracle & SQL Server) has held a dominant paradigmatic (hegemonic) status for a long time now;
  • In order to answer new problems the RDBMS technologies (and to an extent, though I know less of them, the models that support them) have become bloated as features are added;
  • A growing number of problems which the RDBMS cannot solve (at least in ways that are considered reasonable) have become increasingly important and visible;
  • Technology changing is a bit like the laws of the universe changing for a physicist. No wonder we’re churning paradigms in IT. The move from Aristotelian to Newtonian physics was in large part due to the problems that were being posed and the inability of the former paradigm to answer them effectively. That has happened for us too, but for us the universe also changed: processor speed, the size (and cost) of available memory, and so on have all changed dramatically, and all have a massive bearing on the utility of any theory.

I think that we are in a period of crisis when it comes to how we store data. I don’t mean this in a bad way. Periods of crisis are incredibly creative periods where new models are considered and experimented with. I don’t think that we will necessarily see the emergence of a single replacement paradigm either. Through the recognition that different problems require different solutions I think that we may see a number of ‘new’ paradigms emerge which don’t so much compete as complement each other (network-oriented solutions like Neo4j and document databases like CouchDB, for example). Nor am I suggesting that the RDBMS model will go away. I do think, though, that the big-vendor bloat that has occurred, with all the associated costs this brings, may mean that lighter-weight RDBMS implementations emerge. That said, they may just wither and die: problems like the object-relational impedance mismatch are unlikely to disappear soon, I suspect, and ORMs just add another layer of bloat to the RDBMS. In fact ORMs remind me of the ‘doing the wrong thing righter just makes you wronger’ quote from Dr. Russell Ackoff that I love.

Next up (but probably not here): ‘agile/lean as liberal modes of governance – the applicability of Foucault to project management in IT’, or ‘changing the enterprise using wars of position – a guide to Gramsci for agents of change’, depending on what I feel like.

Kuhn, Thomas S. (1962) The Structure of Scientific Revolutions. Chicago: University of Chicago Press

Saturday 28 March 2009

Building a Reusable Builder: An internal DSL in C#

Sitting and enjoying the OpenSpace Coding Day at Conchango, and in particular enjoying Ian Cooper’s talk on internal DSLs in C#, I got to thinking how it might be quite simple to create a reusable builder object.

I’ve done some posts on this sort of thing before, and have spent quite a bit of time recently at work constructing the beginnings of a Language Workbench (I say beginnings as I’m following YAGNI and there’s only one DSL that uses it so far).

Anyway, here’s what I’ve just sketched out at the back of the class:

using System;
using System.Linq.Expressions;
using System.Reflection;

public class Builder<T>
{
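    // Starts a fluent build over an existing instance.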
    public BuildingInProgress<T> With(T targetObject)
    {
        return new BuildingInProgress<T>(targetObject);
    }
}

public class BuildingInProgress<T>
{
    private readonly T _target;

    public BuildingInProgress(T targetObject)
    {
        _target = targetObject;
    }

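    // Sets the property identified by the expression and returns this, so calls can be chained.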
    public BuildingInProgress<T> Set<TValue>(Expression<Func<T, TValue>> property, TValue value)
    {
        PropertyInfo info = ExpressionHelpers.GetPropertyInfo(property);
        info.SetValue(_target, value, null);
        return this;
    }
}

public static class ExpressionHelpers
{
    public static string GetMemberName<TSubject, TMember>(Expression<Func<TSubject, TMember>> subjectMember)
    {
        PropertyInfo propertyInfo = getMemberExpressionMember(subjectMember.Body);
        return propertyInfo.Name;
    }

    public static PropertyInfo GetPropertyInfo<TType,TMember>(Expression<Func<TType,TMember>> property)
    {
        return getMemberExpressionMember(property.Body);
    }

    private static PropertyInfo getMemberExpressionMember(Expression expression)
    {
        if (expression == null) return null;

        switch (expression.NodeType)
        {
            case ExpressionType.MemberAccess:
                {
                    MemberExpression memberExpression = (MemberExpression)expression;
                    return (PropertyInfo)memberExpression.Member;
                }
            default:
                throw new InvalidOperationException(string.Format("The Expression Type was not expected: {0}", expression.NodeType));
        }
    }
}

And that’s basically as far as I have got in ten minutes (whilst paying attention as well of course ;). Obviously it has weaknesses; the static reflection would need much building out to handle more than simple MemberExpressions, for example.
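
To give a flavour of that building out, here’s a minimal sketch (untested) of teaching getMemberExpressionMember to unwrap the Convert node that the compiler inserts when, say, a value-typed property is accessed through a lambda typed against object:

private static PropertyInfo getMemberExpressionMember(Expression expression)
{
    if (expression == null) return null;

    switch (expression.NodeType)
    {
        case ExpressionType.Convert:
        case ExpressionType.ConvertChecked:
            // The compiler wraps value-typed members in a boxing conversion;
            // recurse into the operand to find the real member access.
            return getMemberExpressionMember(((UnaryExpression)expression).Operand);
        case ExpressionType.MemberAccess:
            return (PropertyInfo)((MemberExpression)expression).Member;
        default:
            throw new InvalidOperationException(string.Format("The Expression Type was not expected: {0}", expression.NodeType));
    }
}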

Here is a really trivial example of its intended use:

using NUnit.Framework;

[TestFixture]
public class BuilderFixture
{
    [Test]
    public void Where_SetIsCalled_Then_AValueShouldBeSet()
    {
        const string testValue = "testValue";
        const int otherTestValue = 12;
        Foo foo = new Foo();
        Builder<Foo> fooBuilder = new Builder<Foo>();
        fooBuilder.With(foo)
            .Set(f => f.Bar, testValue)
            .Set(f => f.Baz, otherTestValue);
        Assert.AreEqual(testValue, foo.Bar);
        Assert.AreEqual(otherTestValue, foo.Baz);
    }
}

public class Foo
{
    public string Bar { get; set; }
    public int Baz { get; set; }
}

Thursday 22 January 2009

Trying to Bind Form Elements in the View to a Dictionary in the Model with ASP.NET MVC

I've got a class that has, instead of statically typed properties, a dictionary that is used to store different values depending on what is needed for any given instance.

public class Baz
{
    public Baz()
    {
        Foo = new Dictionary<string, string> { { "Bar", null } };
    }

    public IDictionary<string, string> Foo { get; set; }
}


I'm using ASP.NET MVC and I want to bind the object to the View. The View is dynamically generated based on a template object, like this:


<% foreach (Field field in ViewData.Model.Template.Fields)
    {%>

        <%= Html.Encode(field.DisplayText) %>
        <%= Html.TextBox(string.Format("Foo[{0}]", field.Name)) %>
 <% } %>


In the above, field.Name will correspond to the Key of a KeyValuePair placed into the dictionary of the Foo instance during its manufacture.

I am using the DefaultModelBinder, and if I look at the ASP.NET MVC (Beta) source code, and in particular the BindModelCore method, I see that the first thing it does is find out whether the type being dealt with is a Dictionary type; if it is, the UpdateDictionary method gets called. The first line of UpdateDictionary assigns the result of a call to CreateSubPropertyName, passing in the BindingContext.ModelName as the prefix and the string 'index' as the property name.

Now I can't see how this works. The result of this code seems to be that, where the ModelName is foo, a key of 'foo.index' is looked for in the route data, query string and form values (DefaultValueProvider.GetValue). This isn't found unless the ids of the form elements that tie in to the Dictionary are all 'foo.index' (yep, the same id for multiple elements - which already smells very wrong). That means altering my View code to look like this:


 <% foreach (Field field in ViewData.Model.Template.Fields)
    {%>
        <%= Html.Encode(field.DisplayText) %>
        <%= Html.TextBox("Foo.index")) %>
 <% } %>


And, giving each the name of foo.index, I get back the values of each as the attempted values and progress into the code which iterates through these values (now stored in a string[] called 'indicies'). The first line of code in the foreach loop (still in DefaultModelBinder.UpdateDictionary) calls CreateSubIndexName, passing in the ModelName and one of the attempted values; so if the attempted value was 'test', it will return foo[test]. Now this key is formatted how I'd want, but it uses the attempted value as the key, which is not what I want.

The very next thing that happens is a call to DefaultModelBinder.BindProperty, which calls CreateSubPropertyName passing in the ParentContext.ModelName, which right now is 'foo[test]', and the string 'key'. This results in a 'newName' of 'foo[test].key'. The consequence is that DefaultValueProvider.GetValue is called and looks for an id called 'foo[test].key'; unsurprisingly this doesn't exist, as I had to give all of the dictionary-bound form elements an id of 'foo.index'. When the null value is returned, an exception results and the attempt to bind the elements stops.

What I'd like is for the form element id to read Foo[Bar] (assuming no prefix is required, otherwise Baz.Foo[Bar]), for the binding process to use this to get the attempted value, and then to try to update the KeyValuePair in the dictionary Foo where the key is Bar with the attempted value. Is that too much to ask?

Now it's entirely possible (probable?) that I've not understood how to make the UpdateDictionary method tick, and Googling has revealed nothing, so I'd be appreciative of any information anyone reading this might be able to impart. Otherwise, if I get a chance, I may look to implement my own ModelBinder that has this behaviour (but I'd rather not).
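
For the record, if I do end up writing one, the shape I have in mind is something like the sketch below. This is untested, the class name is mine, and it assumes the value provider can be enumerated as a dictionary of ValueProviderResult (true of the Beta as far as I can tell; the IModelBinder signature has been shifting between releases):

// Untested sketch: find keys shaped like Foo[Bar] in the value provider and
// copy each attempted value straight into a new dictionary.
public class StringDictionaryModelBinder : IModelBinder
{
    public object BindModel(ControllerContext controllerContext, ModelBindingContext bindingContext)
    {
        IDictionary<string, string> result = new Dictionary<string, string>();
        string prefix = bindingContext.ModelName + "[";

        foreach (KeyValuePair<string, ValueProviderResult> entry in bindingContext.ValueProvider)
        {
            if (!entry.Key.StartsWith(prefix, StringComparison.OrdinalIgnoreCase)) continue;

            // "Foo[Bar]" -> "Bar"
            string key = entry.Key.Substring(prefix.Length).TrimEnd(']');
            result[key] = entry.Value.AttemptedValue;
        }

        return result;
    }
}

It would then be registered against the dictionary type in Global.asax.cs with something like:

ModelBinders.Binders.Add(typeof(IDictionary<string, string>), new StringDictionaryModelBinder());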

Monday 19 January 2009

Refactored: Fluent Test Data Builders Using Generic Extension Methods and Lambda Expressions

Working further with the Fluent Test Data Builder I was working on yesterday has led me to refactor what I had, and the result is, I think, even nicer!

I've added a new method SetValue, and changed from getting the Member.Name to getting the PropertyInfo from the Member. The other significant change is that I'm now wrapping the SetLength and SetValue method bodies in try/catch blocks so that I can intercept a TargetInvocationException and throw the InnerException in its place; this helps to ensure that when I use the ExpectedExceptionAttribute I am catching the exception I really expect. The drawback with this is the lost stack trace, but I'm content to live with that for now.

public static T SetLength<T>(this T obj, Expression<Func<T, string>> expression, int length)
{
    try
    {
        PropertyInfo member = getMemberExpressionMember(expression.Body);
        string currentValue = expression.Compile().Invoke(obj) ?? string.Empty;
        string newValue = currentValue.PadRight(length);

        member.SetValue(obj, newValue, null);

        return obj;
    }
    catch (TargetInvocationException tie)
    {
        if (tie.InnerException != null) throw tie.InnerException;
        throw;
    }
}
 
public static TObj SetValue<TObj, TVal>(this TObj obj, Expression<Func<TObj, TVal>> expression, TVal value)
{
    try
    {
        PropertyInfo member = getMemberExpressionMember(expression.Body);
        member.SetValue(obj, value, null);
        return obj;
    }
    catch (TargetInvocationException tie)
    {
        if (tie.InnerException != null) throw tie.InnerException;
        throw;
    }
}
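
For completeness, the getMemberExpressionMember helper referenced above is yesterday's getMemberExpressionMemberName reworked to return the PropertyInfo itself rather than just its name:

private static PropertyInfo getMemberExpressionMember(Expression expression)
{
    if (expression == null) return null;

    switch (expression.NodeType)
    {
        case ExpressionType.MemberAccess:
            {
                MemberExpression memberExpression = (MemberExpression)expression;
                return (PropertyInfo)memberExpression.Member;
            }
        default:
            throw new InvalidOperationException(string.Format("The Expression Type was not expected: {0}", expression.NodeType));
    }
}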

So how am I making use of this fluent interface? Well the methods above are two of those I have written, and probably the most interesting, but I have others too. These fall into two main categories:

  1. setting default valid values;
  2. creating default valid objects.

For setting default valid values, these are done as ExtensionMethods on the type I'm interested in. For example:

public static Extract WithDefaultTitle(this Extract extract)
{
    extract.QualificationTitle = "Mr";
    return extract;
}


For creating default valid objects, this is also done with an ExtensionMethod on the type that I'm interested in. As my need for different default starting points increases then I can easily write more. Here's an example:


public static Extract AsValid(this Extract extract)
{
    return extract
        .WithDefaultCode()
        .WithDefaultTitle()
        .SetValue(e => e.Id, 111);
}


With these methods available to me I can combine them with my SetValue and SetLength methods to alter the state of any given property, so that I can test the implications of the value set in a degree of isolation (or at least with consistency). In particular I am using this to test that the behaviour to validate my object works correctly given different combinations of property values on the object under test. Here's an example of this:


[Test]
[ExpectedException(typeof(InvalidOperationException), "An Id must be present.")]
public void NullIdShouldThrowAnInvalidOperationExceptionOnValidate()
{
    Extract extract = new Extract()
        .AsValid()
        .SetValue(e => e.Id, null);
    extract.Validate();
}

[Test]
public void A_SurnameWithLessThan35CharactersShouldBePaddedTo35CharactersInFormattedProperties()
{
    Extract extract = new Extract()
        .AsValid()
        .SetLength(e => e.Surname, 34);

    Assert.AreEqual(34, extract.Surname.Length);
    Assert.AreEqual(35, extract.FormattedProperties["Surname"].Length);
}

Personally I'm happy with how this is progressing, but we'll see how it goes as I begin to push the current envelope a bit more.

Sunday 18 January 2009

Fluent Test Data Builders Using Generic Extension Methods and Lambda Expressions

Whilst writing some extension methods this evening to make creating objects in unit tests simpler, I found myself writing methods that looked sufficiently similar to each other that I felt the code lacked a little DRYness.

The code being written has to output a data extract in a fixed-field-length formatted text file. As such the length of the formatted fields is very important, so I had created some methods to initialise the properties of my object with given lengths so that I could ensure the desired behaviour during formatting (such as trimming, padding, exception throwing, etc.). These methods looked much like this:

public static DataExtract WithSetLengthSurname(this DataExtract dataExtract, int length) 
{ 
    dataExtract.Surname = string.Empty.PadRight(length); 
    return dataExtract; 
}
 
public static DataExtract WithSetLengthForenames(this DataExtract dataExtract, int length) 
{ 
    dataExtract.Forenames = string.Empty.PadRight(length); 
    return dataExtract; 
} 
 

As you can see these methods are very similar. What I felt I wanted was to keep this fluent interface for creating my DataExtract object, but to make the whole thing more generic.


This is what I came up with:


public static T SetLength<T>(this T obj, Expression<Func<T,string>> expression, int length) 
{ 
    string memberName = getMemberExpressionMemberName(expression.Body); 
    string currentValue = expression.Compile().Invoke(obj) ?? string.Empty; 
    string newValue = currentValue.PadRight(length); 
    obj.GetType().GetProperty(memberName).SetValue(obj, newValue, null); 
    return obj; 
} 
 
private static string getMemberExpressionMemberName(Expression expression)
{
    if (expression == null) return string.Empty;

    switch (expression.NodeType)
    {
        case ExpressionType.MemberAccess:
            {
                MemberExpression memberExpression = (MemberExpression)expression;
                return memberExpression.Member.Name;
            }
        default:
            throw new InvalidOperationException(string.Format("The Expression Type was not expected: {0}", expression.NodeType));
    }
}

With this code then I can now make a call like this:


DataExtract extract = new DataExtract().SetLength((e) => e.Forenames, 10);

Now it's not unreasonable to look at this and wonder what has been gained; after all, I could have written this:


DataExtract extract = new DataExtract { Forenames = "".PadRight(10) };

I suppose for me it comes into its own when combined with other extension methods as part of a wider-ranging Fluent Interface based test object builder, like this:


CandidateExtract extract = new CandidateExtract() 
    .SetLength((e) => e.Surname, 36) 
    .WithValidQualificationCode() 
    .WithValidQualificationTitle() 
    .SetLength((e) => e.Forenames, 36);

As can be seen in the above, some property-specific methods still exist because the behaviour each implements is sufficiently unique, but where the behaviour is very similar (such as setting a string to a default length) this approach seems to me to provide a number of benefits:


  • the number of methods appearing in Intellisense drops, making it easier to find what you want quickly;
  • the number of builder methods to be written drops (in my case significantly);
  • by using a fluent interface, readability is enhanced.

Saturday 6 December 2008

WCF With The Castle Windsor Facility

Towards the end of a rather busy Saturday of coding in the office I decided to take on exposing some services at the boundary of a system I am working on. We're using the Castle project's Windsor container for our IoC on all of our new projects, so I figured it would make sense to do a short spike into the WCF facility that ships with it to see whether it would be worth using going forwards. The short answer is that I think it is.

It proved very simple to get going. I'd recommend anyone looking to use this facility gets the latest version of the source code in the trunk before starting and has a look at the demo project in there as this proved very helpful.

I quickly defined an interface and set it up as a contract (as per normal with WCF), then created a class that implemented the interface.
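
For illustration, the contract looked something like this (the operation and the BarDto type are hypothetical, purely to show the shape; only the service names match what follows):

[ServiceContract]
public interface IBarEnterpriseService
{
    [OperationContract]
    BarDto GetBar(int id);
}

public class BarEnterpriseService : IBarEnterpriseService
{
    public BarDto GetBar(int id)
    {
        // The real implementation delegates to the domain service; elided here.
        throw new NotImplementedException();
    }
}

At this point, in the Global.asax.cs, I configured my Windsor container mappings like this: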


ServiceDebugBehavior returnFaultsAndMex =
    new ServiceDebugBehavior
        {
            IncludeExceptionDetailInFaults = true,
            HttpHelpPageEnabled = true
        };

ServiceMetadataBehavior metadata =
    new ServiceMetadataBehavior { HttpGetEnabled = true };

container = new WindsorContainer()
    .AddFacility<WcfFacility>()
    .Register(
        Component.For<INHibernateSessionManager>()
            .Named("NHibernateSessionManager")
            .ImplementedBy<NHibernateSessionManager>(),
        Component.For<IBarRepositoryFactory>()
            .Named("BarRepositoryFactory")
            .ImplementedBy<BarRepositoryFactory>()
            .DependsOn(Property.ForKey("sessionFactoryConfigPath")
                .Eq(Path.Combine(
                    Path.GetDirectoryName(GetType().Assembly.Location),
                    "Hibernate.cfg.xml"))),
        Component.For<BarService>()
            .Named("BarDomainService")
            .ImplementedBy<BarService>(),
        Component.For<IBarEnterpriseService>()
            .Named("BarEnterpriseService")
            .ImplementedBy<BarEnterpriseService>());

Next up was the .svc file:


<%@ ServiceHost Service="BarEnterpriseService"
    Factory="Castle.Facilities.WcfIntegration.DefaultServiceHostFactory, Castle.Facilities.WcfIntegration" %>

Then, with a bit of work in the web.config file and a press of F5, I can navigate to the .svc and get the MEX page:


<system.serviceModel>
  <services>
    <service name="Foo.Bar.EnterpriseServices.BarEnterpriseService" behaviorConfiguration="ReturnFaultsAndMex">
      <endpoint contract="Foo.Bar.EnterpriseServiceContracts.IBarEnterpriseService"
                binding="basicHttpBinding"/>
    </service>
  </services>
  <behaviors>
    <serviceBehaviors>
      <behavior name="ReturnFaultsAndMex">
        <serviceDebug includeExceptionDetailInFaults="true" />
        <serviceMetadata httpGetEnabled="true" />
      </behavior>
    </serviceBehaviors>
  </behaviors>
</system.serviceModel>

This is great, and very nicely integrates the whole WCF experience into my IoC-centric application. I do still have a couple of areas where I have questions. In the Global.asax file I included details of two behaviours, one to return error details and one to enable the MEX service; that code was lifted from the sample in the Castle trunk. I still needed to add these behaviours explicitly into the web.config though: present or removed, the code-based registrations seem to have no effect, and I find the same to be the case with the demo app.
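
One thought as I write this up: the code above creates the two behaviour objects but never actually registers them with the container, and my understanding (an assumption I've yet to verify) is that the facility only applies IServiceBehavior components it finds registered. If that's right, something like this might make the web.config behaviour entries unnecessary:

container = new WindsorContainer()
    .AddFacility<WcfFacility>()
    .Register(
        // Assumption: the facility applies IServiceBehavior components found
        // in the container to the services it hosts.
        Component.For<IServiceBehavior>().Instance(returnFaultsAndMex),
        Component.For<IServiceBehavior>().Instance(metadata),
        Component.For<IBarEnterpriseService>()
            .Named("BarEnterpriseService")
            .ImplementedBy<BarEnterpriseService>());

If I get the chance to test that I'll report back.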