The great thing about art is that it is all about the end product. The reality of the journey is obscured through layers and layers of oils, acrylics or whatever medium. Some critics will analyze technique, but the truth is all evidence of rework has been removed. All you have is the final product and, for better or for worse, your opinion about it. If only development were like this.

However, it is not. We developers live in a world where the core of our creation is exposed. From the shiny façade right down to the bowels of the implementation, every aspect of it is open to inspection and criticism.

So what does all of this have to do with the topic at hand? Quite a lot, actually. You see, most developers are perfectionists, or at least like the idea of being one. So we read. And read. And read. We are expected to know business, business process, the latest tech and tomorrow's tech. We have to foster an intimate relationship with knowledge, not only to sate our own curiosity, but also to survive in the game.

There is, however, an unseen danger in reading, more specifically in high level philosophical texts on design, which, like art, are inherently subjective. For the sake of brevity I will refer to these texts as "books". The danger is that we very seldom get to ponder the points we read. Sure, we process and understand, but we seldom question or extrapolate to real world scenarios. As such, more often than not the book smart developer is seen handing their mind and creativity over to the book's author. The book smart developer spends more time memorizing the book verbatim than trying to process the "essence" of its message.

Technical books tend to be accurate. Sure, there might be a bug here or there, but by and large they are spot on. And they have to be. This brings me to my first point on why book smart developers are bad developers. Technical books are an exercise in academia. Being subject to industry peers, they have to be right on the money. They drill down into their subject matter ad nauseam, and are not allowed to waver in their opinion lest they relinquish the position of the "authoritative resource". The problem now is that the book smart developer has placed substantial investment in reading the text, both in time and money.

Investment has a strange psychological side effect: once someone has invested in something, they don't want to back out. There are many studies on this, one of the biggest examples being the subprime crisis in the US, and anyone who has studied mathematics will be able to tell you about Gambler's Ruin. As such, the book smart developer tends to adopt the attitude of "this is the way" and in doing so becomes a disciple of the author.

Onto point number two. I am not sure if anyone noticed, but during this electronic age, everyone forgot to upgrade books. The technology predates pretty much everything else and hasn't evolved much. I am not talking about a new eReading device here; I am talking about the communication style. It is one way. Half duplex. And it blows. One-way communication is opinionated, un-engaging and dictatorial. Couple this with the book smart developer, and you have the ultimate mind control device. Heck, you might as well give away a free packet of Kool-Aid with every purchase. But we can't blame books. They get one shot to get it right. Unless they push their message across in one reading, they have failed. They are medium constrained. Book authors get this; sadly, book smart developers don't. You see, in every opinion there is some give and take; this normally comes in the form of negotiation until an amicable understanding is in place. Usually in any form of negotiation, the initiator has to overstate their case, and then whittle it down to a point where both parties are happy. This principle applies to teaching as well as bargaining. Books do not negotiate, nor do they care for your level of understanding of the subject matter. If the book smart developer has missed the point, well, let's just say the book doesn't care.

Onto point number three: on completion of a new book, the book smart developer has an innate need to decorate every conversation with buzzwords and terminology. While buzzwords may go down well at a tech convention to make you look good, they do very little in the way of educating others. As developers in this fast-paced environment, we have a civic duty to disseminate newly acquired information to our peers. Lexically relevant buzzwords are completely counterproductive in this regard. If the person you are educating knows the buzzword, then typically they already know the subject matter. Again, it blocks the "essence" of the message. Buzzwords are all-encompassing summaries; if you are using buzzwords whilst explaining something, you are failing at communicating. Book smart developers tend to use buzzwords to shroud their lack of understanding of the subject matter. When challenged on these topics, you will see them characteristically make a dash for the reference material.

Point number four is more of a side effect of being a book smart developer. Book smart developers quickly lose what I think is the most important skill a developer has: the ability to infer and adapt. The book smart developer is the guy who is constantly trying to get the problem to fit the mould, instead of taking what he knows and adjusting it to fit the problem domain. Martin Fowler groupies tend to slide into this category quite nicely. He is the guy that tells the product designer it can't be done (removing time and money from the triangle). Early in my career, while working for an investment company, our rather insightful CEO gave us a compelling presentation, during which he put forward a statement that has since become my mantra. It goes something like this: "Who makes the f&^king rules?" Just because the book says do X, Y then Z doesn't mean you can't slip W in there somewhere. If development were so cast in stone, why haven't we seen stability in it, as in a discipline like accounting, which hasn't seen innovation in centuries?

Book smart developers fit into two categories. There are those that talk the talk and those who walk the walk. Both are doomed to fail.

The talkers, however, are the lesser of the two evils. They are the ones who claim "we must be agile" and then go on to tell you they run a six month sprint. They are the ones who cry at meetings "we must be domain driven", then go back to their desks to carry on working on cross cutting layers. Basically, they have missed the boat. Unless the book explicitly states it, it isn't so.

The walkers are a more nefarious bunch. They are the types that implement a pattern with surgical precision. They are an ardent, hard working bunch who are passionate about code, but do not give them small problems to solve. You see, if you study anything enough, it becomes a science. And if you don't believe me, find a group of joggers and ask them "Which is the best way to tie my laces?" Likewise, take a small problem that should normally take a few hours to do. Give it to the walker, and tell them they have a week to do it. They will take the week, and deliver the most convoluted, over engineered piece of code imaginable. It will work, and it will be one hundred percent academically correct. Extrapolate this to an entire system and, well, you get the picture: a "perfect", un-maintainable and un-scalable mess.

Sadly, as developers we have all at some point fallen foul of BSD syndrome (my first coined buzzword; someone open a Wikipedia entry). So what do you do?

Well, a good start is diversifying opinions. Take the age old debate of stored procedures vs. in-code queries. Truth be told, they both have their place. Don't argue the philosophy of each approach, but rather the practicality of each.

Acknowledge that no solution is perfect. Like art, there is always one more brush stroke to put in place. The truly accomplished artist knows when to put the brush down. Don't over analyze (over being the operative word here).

The cake is a lie. As developers, Greenfield development is our holy grail. However, these sorts of projects are few and far between. While most texts revolve around Greenfield development, broadly speaking very few teach how to retrofit their approach to existing, monolithic systems. Ironically, these systems are the ones that need the most help. As such, it is pointless trying to implement an approach down to the letter. "SCRUM" doesn't fail, "Agile" doesn't fail. It is the change management, lack of understanding and unwillingness to modify the process that fail.

Stop being arrogant. You are a developer; we are arrogant. Stop it. Saying "I don't know" is a great way to learn new stuff. Remember, while your mouth is moving you are not listening. This opens up a fantastic learning technique to substitute for books; I call it conversation. More information can be gleaned in ten minutes of engagement than an hour of reading. Engage in forums, Twitter and all the rest.

There are no rules, just good and bad decisions. Don't let a book dictate policy to you. Take the good bits; leave out the ones that don't work for you. However, always remain cognizant of your own ability. If you have no experience in the field, I would say lean on the text a little more.

The world is not black and white, so embrace the shades of grey. As humans we have a built-in mechanism (heuristics) that clamps fuzzy concepts to our closest understanding, and this kind of goes against the natural order of things. We need to stop being so pedantic in our thinking and more fluid in our decision making. As stated before, a book does not know of your specific situation; or, worse still, your heuristics have incorrectly interpreted the book's message.

I have actually experienced this. A developer had implemented a processing routine in code that took several hours to run; it was done in code because "that's where business logic goes". It could have been implemented on the database to run in a matter of minutes, but the individual was adamant his method was right, and technically he was correct. Except there was a problem: the business case was failing, and there were not enough hours in the day to process the job. It is this dogmatic thinking that leads to bad systems. Sadly, that naïve young developer was me, and I am going to chalk that one up to experience, thank you very much.

My last tip for developer Nirvana is to have no investment in your code. Having the mindset of "I have done this once, I can do it better if I do it again" goes a long way to achieving this. Being able to scrap a few hundred lines of code at will is almost liberating. It also falls in line with the Agile/SCRUM/Lean/TDD/Buzzword paradigm in that it makes refactoring second nature (and I mean proper refactoring, where you decouple dependencies, not simply "right click -> extract method"). It also opens your mind up to different approaches, and you tend to embrace change a lot more easily.

The Entity Framework is easy to slide into for Greenfield applications; isn't everything when you have a clean slate to work with? But what about retrofitting it into an existing application? One of the challenges here is the connection string itself. In a legacy system (by legacy I mean one that does not use EF 🙂) we have the connection string stored in the connectionStrings section of the App or Web config file. We certainly do not want to introduce another connection string just for our EDMX, as that gives us more points of maintenance. We need to be able to build up one of those funny EDMX connection strings from an existing regular connection string. And no, string concatenation is NOT the answer here.

Here is the code:

Imports System.Configuration
Imports System.Data.EntityClient
Imports System.Data.Metadata.Edm
Imports System.Data.Objects
Imports System.Data.SqlClient
Imports System.Reflection

Namespace Repository
    Public Class ConnectionFactory
        Public Shared Function GetEDMXConnectionString(Of T As ObjectContext)(ByVal connectionName As String) As EntityConnection
            ' Pull the plain connection string from the existing config entry
            Dim connectionString = ConfigurationManager.ConnectionStrings(connectionName).ConnectionString
            Dim dbConnection As New SqlConnection(connectionString)
            ' Locate the CSDL/SSDL/MSL resources embedded in the context's assembly
            Dim resourceArray As String() = {"res://*/"}
            Dim assemblyList As Assembly() = {GetType(T).Assembly}
            Dim metaData As New MetadataWorkspace(resourceArray, assemblyList)
            ' Combine the metadata and the stock SQL connection into an EntityConnection
            Dim edmxConnection As New EntityConnection(metaData, dbConnection)
            Return edmxConnection
        End Function
    End Class
End Namespace

There are a few things to look at here.

  1. We use a generic type of our ObjectContext. This is needed to get the assembly for the metadata.
  2. We create a stock SQL connection.
  3. We create a MetadataWorkspace.
  4. We then create an EntityConnection, passing in the metadata and the SQL connection.

You can then pass the resulting EntityConnection straight into your ObjectContext constructor.
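As a quick sketch of that last step (the context name MyEntities and the "Legacy" connection name here are hypothetical, not from the original code):

```vbnet
' Build the EntityConnection from the existing plain connection string entry
Dim connection As EntityConnection =
    ConnectionFactory.GetEDMXConnectionString(Of MyEntities)("Legacy")

' Hand it straight to the generated context's constructor
Using context As New MyEntities(connection)
    ' Query as usual; the context now rides on the legacy connection string
    Dim customers = context.Customers.ToList()
End Using
```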

That's right, I have skipped part 3 for now; we will return to it. Let's take a step back and look at some inheritance and splitting scenarios that the Entity Framework supports. Relational inheritance is a way of describing your relational data in an OO form, where the rules of polymorphism hold true for your model. Splitting is a way of classifying your data so that it maps onto sets of entities in your model.

Inheritance Scenarios


TPT: Table per Type

I have a 1-1 table relationship in my database that represents a base class – sub class structure

  • Person is a contact
  • Business is a contact
How to
  • Drag the 2 entities onto the designer
  • Link them with the inheritance tool


  • Use the context menus.

This is the model first design default behaviour.

  • Polymorphic rules apply to loading. Querying the base type will cast to subtypes along with their respective data.
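To illustrate (the Contact, Person and Business names follow the example above; the MyEntities context name is an assumption), a polymorphic TPT query might look like:

```vbnet
Using context As New MyEntities()
    ' Querying the base type returns Person and Business instances together
    For Each contact In context.Contacts
        Console.WriteLine(contact.GetType().Name)
    Next

    ' OfType narrows the hierarchy down to a single subtype
    Dim businesses = context.Contacts.OfType(Of Business)().ToList()
End Using
```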
TPH: Table per Hierarchy

I have a single table in the database where sets of columns identify different inheritance structures, and there is a condition column (lookup or code) that identifies the entity type.


A user table that has a lookup on user type that varies as the user passes through my user lifecycle workflow (i.e. Anonymous -> Registered -> Made Purchase -> VIP); as they do so, more methods and options are opened up.

How To
  • Drag an entity onto the designer and link it to the table.
  • Mark this entity as abstract
  • Delete non common properties including the keying column.
  • Create another entity that derives from the base entity
  • Under the mapping set a condition/s on the switch column/s
  • Add the additional columns unique to the child entity.
  • Polymorphic rules apply to loading. Querying the base type will cast to subtypes along with their respective data.
TPC: Table per Concrete Class

I have multiple identical tables that represent an inheritance structure. Conceptually they are the same, but structurally there is additional information that is mutually exclusive (e.g. relationships).

  • Employee -> CurrentEmployee that maps to Employees table with relationships to Manager and Subordinates
  • Employee -> PreviousEmployee that maps to the PreviousEmployee table. It is merely a record of prior employees.
How To
  • This is not supported by the designer.
  • On the designer, create the two entities with their default mappings
  • View the designer in XML
  • Copy one of your entity elements from the conceptual (CSDL) layer and paste it.
  • Mark it as abstract and name it as the base type.
  • Delete all the common properties from the original entity and set its base type to your new base type.
  • For the other entity, mark its base type as your base type.
  • What this does is retain the mapping to the original table for the second entity, whilst classifying the conceptual entity as that of the base type.
  • Polymorphic rules apply to loading. Querying the base type will cast to subtypes along with their respective data.

Splitting Scenarios

Vertical Splitting

We have two tables with a 1 – 1 relationship that we want to represent as a single entity. This is like TPT inheritance mapping without the inheritance structure.


I have a User table with a 1 – 1 relationship with a Location table, since in my application a user is associated with a single location. I want to represent the User entity with the user's location information in it.

How To
  • Add both entities onto the designer.
  • Select and cut the Location properties from the Location entity
  • Paste them into the User Entity
  • Delete the Location Entity.
  • When prompted if you want to remove the Location entity from the Store layer, click No
  • Go to the mapping details on the User entity and add the Location table to the mappings list. This will automatically populate the property mappings.
  • When selecting, this transparently creates an inner join
  • Because this is NOT inheritance, polymorphic loads do not apply.
Horizontal Splitting

I have 2 similar tables with the same column names and I want only one entity. However, I still want to distinguish between the two via a Boolean flag.


I have two tables that represent batches of data. Processed and Unprocessed. I want to easily remove a row from the Unprocessed table and insert it into the Processed table when I have finished processing it.

How To
  • Add the two entities to the designer
  • Delete the Processed entity
  • When prompted if you want to remove its data from the Store Layer, click No
  • Select the Unprocessed entity and add the Processed table to its mappings
  • Right click the Unprocessed entity and add a new scalar property called “IsProcessed” of type Boolean
  • Go to the EDMX XML, as this next step is not supported by the designer
  • In the mapping layer, under the Unprocessed ENTITY (the one we kept), you will see 2 mapping elements, one for the Processed table and one for the Unprocessed table. Under the Processed TABLE element, add a condition element
    <Condition Name="IsProcessed" Value="true" />
  • Under the Unprocessed TABLE element, add a condition element
    <Condition Name="IsProcessed" Value="false" />
  • Selecting creates a union over the two tables
  • When changing the Boolean flag between conditions, the row is transparently deleted from the one table and inserted into the other.
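Under these mappings, finishing a batch becomes a simple property flip. A sketch (the MyEntities context and Batch entity names are assumptions based on the example above):

```vbnet
Using context As New MyEntities()
    ' Grab a batch that still maps to the Unprocessed table
    Dim batch = context.Batches.First(Function(b) Not b.IsProcessed)

    ' Flipping the discriminator moves the row between the two tables
    batch.IsProcessed = True

    ' EF deletes from Unprocessed and inserts into Processed transparently
    context.SaveChanges()
End Using
```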
Table Splitting

I have one table but I want to create a navigation property to some other columns on the same entity.


I have a Person entity with a picture column on it. I do not want to load this image each time I query the entity. Rather, I want to have a Picture property that I can navigate to, deferring the loading of the image.

How To
  • Add the entity to the designer
  • Copy and paste the entity
  • Rename the second entity
  • Pluralize the second entity's Entity Set Name from its properties.
  • Delete all unwanted properties from the second entity
  • Set the mapping on the second entity to the same Store as the first.
  • Delete opposite properties on the first entity
  • Add an association element to the designer
  • Set the multiplicity to 1 – 1
  • Double click on the association entity
  • Set the Principal to the second entity
  • Setting the context's ContextOptions.LazyLoadingEnabled = False will cause the entity to NOT load the navigation property EVEN when it is navigated to. It will instead return Nothing.
  • With lazy loading enabled this will query the navigation property when it is called, as per expected lazy load behaviour
  • Using the Include(string path) method on the query allows you to force the loading of the navigation property upfront.
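A quick sketch of the three loading behaviours (the Person/Picture names follow the example above; the MyEntities context name is hypothetical):

```vbnet
Using context As New MyEntities()
    ' Eager: Include forces the Picture to load in the same query
    Dim withPicture = context.People.Include("Picture").First()

    ' Lazy: Picture is fetched with a second query on first access
    context.ContextOptions.LazyLoadingEnabled = True
    Dim person = context.People.First()
    Dim picture = person.Picture   ' triggers the deferred load here

    ' Lazy loading off: the navigation property just returns Nothing
    context.ContextOptions.LazyLoadingEnabled = False
End Using
```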

Whilst playing with the Entity Framework, I thought it would be pretty cool if the auto generated code would create and implement an interface along with the standard data context and entity classes. This would make unit testing with a mocking framework really easy, and allow you to write tests without having to access a data store. Usually I write a wrapping repository to mock out, but this is a little tiresome. Since the Entity Framework is really flexible, this shouldn't be too much trouble. So let's give it a bash.

I used the POCO class T4 templates as a basis. You can read more about them here. I am no T4 expert, but the code is intuitive enough to make sense of it, and more importantly to make some modifications to it.

Making the Changes

We first create a new class library project and add a folder for our EDMX and supporting classes in the repo.

01-New Project

After this we create a standard EDMX

02-Add New EDMX

I am going to play with a basic invoicing model


I am also going to be doing this in design first mode, but it is no different if you are generating off an existing database. If we take a look at the generated code behind, we can see a couple of things.

04-Designer Code

We see that our container inherits from the ObjectContext base class, we also see that outside of this class is a region for all the generated entities. These entities by default derive from the EntityObject class. The objective of the POCO T4 template is to decouple this using dynamic proxies and some black voodoo magic. Also if we take a look at the properties for the ObjectContext we notice that the Code Generation Strategy is set to Default.

05-Code Generation Strategy

What we are going to do next is to tell the EDMX to use our own custom code generation strategy. Right clicking on the design surface, we select “Add Code Generation Item..” from the context menu.

06-Add Code Generation Item

If you have installed the POCO generator extension then you will see it in the dialog box. Select this.

07-POCO Entity Generator

What this does is add two T4 templates to the project: one for generating the new ObjectContext and the other for generating the entities. These .TT files have the generated code in .CS files behind them. You can explore them, as well as the template that generated them. To help in viewing the T4 templates, I recommend downloading a copy of the Tangible T4 Editor; this gives you some nice highlighting and some IntelliSense on the T4 template code.

08-Template Files

Next, we need to copy the .Context.TT file and paste it into the folder; this will give us a file to work with for the interface. Rename it to something more appropriate like "". We now dive into this new file and make some changes so that it generates an interface off the EDMX file.

10-Interface Definition

We prefix an "I" onto the container name, and implement the IDisposable interface on our interface. Also make sure to change "class" to "interface". We then remove all the unnecessary stuff, like the constructor definitions.

11- Property Definition

For the property definitions, we change them to a return type of IObjectSet. This is so we can mock out the return types for any LINQ / lambda queries against the interface. Also make sure we only implement the GET portion of each property, and get rid of all the implementation around the property so that only the definition is left.

Onto the function imports. We do a similar thing: reduce them to definitions and remove the implementation.

12-Function Imports

For the VB.Net version of the POCO class generator there is a bug in the function import code; I blogged on this here. You will have to code around it. To be fair, I haven't tested the ObjectResult return type, but if you can't instantiate it then you will have to do something similar to the properties and find an appropriate interface to substitute.

13-Generated Interface

If all goes well, on saving the template we get a nice clean interface generated against our context entities.

We also need to add some of the functionality found in the ObjectContext, like SaveChanges, so it may be invoked through the interface. Add what you need.

18-Additional Context Methods

The next task is now modifying the generator for the ObjectContext so that it implements our interface.

14-Context Implementing Interface

We do this by modifying the definition of the ObjectContext to implement the interface.

We also change the property definitions to return IObjectSet. Note that for VB.Net, since you need to implement interfaces explicitly, you will also need to add the Implements clause on the end of each property.

15-Properties Implement IObjectset

We can now inspect the new generated ObjectContext.

16-New Generated Context

Testing it Out

Let's write some implementation code to test. Typically one writes some form of service that uses the data context. We also want to unit test the service by mocking out the repository.

Let's create a basic service.

17-Create Service

Some things to note here. We use constructor dependency injection to allow us to inject our own flavor of repository. We also have a method called "CreateInvoice" that receives some information (via a front end or wherever) and creates the header and associated entities for an invoice. You should be able to eyeball the implementation.

We now need a little helper class that implements the IObjectSet interface so we can return it. We also don't want it to be completely dumb, as it would be nice if we could base our assertions on this object to verify the integrity of the data the service method is producing. So we base it on a List class. This was pulled off MSDN.

19-Fake Object Set

We call this FakeObjectSet appropriately. Notice it is a generic.
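For reference, a minimal FakeObjectSet along the lines of the MSDN sample might look like this (a sketch; the version in the screenshot may differ, and it assumes Imports for System.Data.Objects, System.Linq and System.Linq.Expressions):

```vbnet
Public Class FakeObjectSet(Of T As Class)
    Implements IObjectSet(Of T)

    ' Backing list so tests can seed data up front and inspect it afterwards
    Private ReadOnly _data As List(Of T)
    Private ReadOnly _query As IQueryable(Of T)

    Public Sub New(ByVal testData As IEnumerable(Of T))
        _data = New List(Of T)(testData)
        _query = _data.AsQueryable()
    End Sub

    Public Sub AddObject(ByVal entity As T) Implements IObjectSet(Of T).AddObject
        _data.Add(entity)
    End Sub

    Public Sub DeleteObject(ByVal entity As T) Implements IObjectSet(Of T).DeleteObject
        _data.Remove(entity)
    End Sub

    Public Sub Attach(ByVal entity As T) Implements IObjectSet(Of T).Attach
        _data.Add(entity)
    End Sub

    Public Sub Detach(ByVal entity As T) Implements IObjectSet(Of T).Detach
        _data.Remove(entity)
    End Sub

    ' IQueryable plumbing simply delegates to the in-memory list
    Public ReadOnly Property ElementType As Type Implements IQueryable.ElementType
        Get
            Return _query.ElementType
        End Get
    End Property

    Public ReadOnly Property Expression As Expression Implements IQueryable.Expression
        Get
            Return _query.Expression
        End Get
    End Property

    Public ReadOnly Property Provider As IQueryProvider Implements IQueryable.Provider
        Get
            Return _query.Provider
        End Get
    End Property

    Public Function GetEnumerator() As IEnumerator(Of T) Implements IEnumerable(Of T).GetEnumerator
        Return _data.GetEnumerator()
    End Function

    Private Function GetEnumeratorNonGeneric() As IEnumerator Implements IEnumerable.GetEnumerator
        Return _data.GetEnumerator()
    End Function
End Class
```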

We also add Rhino Mocks as our mocking framework.

20- Add Rhino Mocks

We then create a fancy mocking test to verify the behaviour of the service method. I am not going to worry about philosophies of mocking vs. stubbing here. This is just an example of a test.

21- Create fancy mock test

Some interesting points here are:

We generate our FakeObjectSets with fake data relevant to the test.

We use the mocking framework to return this data to the service.

The Repo is mocked.

We can query the repo after the fact to verify the data in it. We compare this to our input data to ensure all business rules have been applied.
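Pulling those points together, a test of this shape might look as follows (a sketch only: the IInvoiceEntities interface, InvoiceService class and CreateInvoice signature are assumptions, since the actual code lives in the screenshots):

```vbnet
<TestMethod()>
Public Sub CreateInvoice_AddsAnInvoiceToTheRepo()
    ' Arrange: a fake set seeded with no data, and a mocked repository
    Dim invoices As New FakeObjectSet(Of Invoice)(New List(Of Invoice))
    Dim repo = MockRepository.GenerateMock(Of IInvoiceEntities)()
    repo.Stub(Function(r) r.Invoices).Return(invoices)

    Dim service As New InvoiceService(repo)

    ' Act: drive the service method under test
    service.CreateInvoice("ACME", 42)

    ' Assert: query the fake set after the fact to verify the data
    Assert.AreEqual(1, invoices.Count())
End Sub
```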

We then run the test.

22- Verify Results

Green lights baby…green lights.

To get started with the Entity Framework, follow my multi-part tutorial.

Windows Live Writer

Posted: August 6, 2010 in Tools

This blog post is kind of experimental and informational. I decided to try out Windows Live Writer which is a blog posting tool by Microsoft. It is meant to work seamlessly with most popular blogging engines, and gives you the nice rich UI experience as well as a few extra goodies. Let me run through some of the features.

Editing and Formatting

Formatting is done via a pretty intuitive toolbar at the top. Most common layout features are available.


The three tabs at the bottom let you see the final result without having to post. This is a pretty neat feature.


What it does is download your theme from your blog, and then set up a small render prototype for you. Also, because you are working on your local machine, all rich copy and paste tooling works. This is great for images when coupled with the snipping tool that ships with Windows 7.

The context menu docked on the right gives you some nice formatting options for the currently selected element. For images it gives you some nice pixel effects, like drop shadow and the regular suspects.


There are a couple of elements available to you:

  • Hyperlink
  • Picture
  • Photo Album
  • Table
  • Maps
  • Tags
  • Video

Plus an option to add some additional plugins. Maps looks interesting, so I am going to give that a try.

Map picture

Not too sure I like the pushpin graphic, but hey it gets the job done and is pretty easy to do.

It also handles tagging and categories. All in all, I would say pretty neat. I will be using it from now on, I reckon.

OK, I give up. Is someone able to explain to me why the VB POCO generator T4 template refuses to generate my function imports? Not only that, but it doesn't even generate the region for the function imports.
This pic shows the T4 code clearly defining the function import region:

And this pic clearly shows the code NOT being generated.

Any ideas?

UPDATE: Found the problem

To answer my own question, it seems that yes, there is a bug. Looking at the template, I came across this nugget:

If edmFunction.ReturnParameter Is Nothing Then
  Continue For
End If

Essentially what this does is check the return type of the function import and skip the rest of the iteration if it has no return type. Seems logical? NO.

You see, in VB, procedures that don't return a value are Subs. Since the generator can only emit Functions, any function imports that don't return a collection will not be generated. This is an issue for non-query stored procedures.
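To make the distinction concrete, here is the shape of code the template would need to emit for a non-query versus a query proc (the proc and entity names are hypothetical):

```vbnet
' A proc with no result set has no ReturnParameter, so it must become a Sub
Public Sub ArchiveOldBatches(ByVal cutoff As Date)
    MyBase.ExecuteFunction("ArchiveOldBatches", New ObjectParameter("cutoff", cutoff))
End Sub

' A proc that returns rows becomes a Function returning ObjectResult(Of T)
Public Function GetOpenInvoices() As ObjectResult(Of Invoice)
    Return MyBase.ExecuteFunction(Of Invoice)("GetOpenInvoices")
End Function
```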

You have a couple of options:

  1. Ensure your procs ALWAYS return something. Not always feasible.
  2. Modify the T4 template to handle Subs

I will try option 2.

Keep you posted

UPDATE: Found a potential solution

OK, I did my workaround and replaced the "Function Imports" section in the T4 template with the following:

region.Begin("Function Imports")
For Each edmFunction As EdmFunction In container.FunctionImports
    Dim parameters As IEnumerable(Of FunctionImportParameter) = FunctionImportParameter.Create(edmFunction.Parameters, code, ef)
    Dim paramList As String = String.Join(", ", parameters.Select(Function(p) "ByVal " & p.FunctionParameterName & " As " & p.FunctionParameterType).ToArray())
    If edmFunction.ReturnParameter Is Nothing Then
        ' No return type: emit a Sub instead of a Function
<#=Accessibility.ForMethod(edmFunction)#> Sub <#=code.Escape(edmFunction)#>(<#=paramList#>)
        For Each parameter As FunctionImportParameter In parameters
            If Not parameter.NeedsLocalVariable Then
                Continue For
            End If
Dim <#=parameter.LocalVariableName#> As ObjectParameter
If <#=If(parameter.IsNullableOfT, parameter.FunctionParameterName & ".HasValue", parameter.FunctionParameterName & " IsNot Nothing")#> Then
    <#=parameter.LocalVariableName#> = New ObjectParameter("<#=parameter.EsqlParameterName#>", <#=parameter.FunctionParameterName#>)
Else
    <#=parameter.LocalVariableName#> = New ObjectParameter("<#=parameter.EsqlParameterName#>", GetType(<#=parameter.RawClrTypeName#>))
End If
        Next
MyBase.ExecuteFunction("<#=edmFunction.Name#>"<#=code.StringBefore(", ", String.Join(", ", parameters.Select(Function(p) p.ExecuteParameterName).ToArray()))#>)
End Sub
    Else
        ' Has a return type: emit a Function returning ObjectResult(Of T)
        Dim returnTypeElement As String = code.Escape(ef.GetElementType(edmFunction.ReturnParameter.TypeUsage))
<#=Accessibility.ForMethod(edmFunction)#> Function <#=code.Escape(edmFunction)#>(<#=paramList#>) As ObjectResult(Of <#=returnTypeElement#>)
        For Each parameter As FunctionImportParameter In parameters
            If Not parameter.NeedsLocalVariable Then
                Continue For
            End If
Dim <#=parameter.LocalVariableName#> As ObjectParameter
If <#=If(parameter.IsNullableOfT, parameter.FunctionParameterName & ".HasValue", parameter.FunctionParameterName & " IsNot Nothing")#> Then
    <#=parameter.LocalVariableName#> = New ObjectParameter("<#=parameter.EsqlParameterName#>", <#=parameter.FunctionParameterName#>)
Else
    <#=parameter.LocalVariableName#> = New ObjectParameter("<#=parameter.EsqlParameterName#>", GetType(<#=parameter.RawClrTypeName#>))
End If
        Next
Return MyBase.ExecuteFunction(Of <#=returnTypeElement#>)("<#=edmFunction.Name#>"<#=code.StringBefore(", ", String.Join(", ", parameters.Select(Function(p) p.ExecuteParameterName).ToArray()))#>)
End Function
    End If
Next
region.End()

It seems to build and generate what seems to be a decent Sub. Please note, this is UNTESTED.

I recently stumbled onto this neat presentation on Continuous Deployment.

“Continuous Deployment – Introducing the Continuous Deployment concept with background about testing, monitoring, tools and culture requirements.”