Posts Tagged ‘ADO.NET Entity Framework’

That’s right. I have skipped part 3 for now. We will return to that. Let’s take a step back and look at some inheritance and splitting scenarios that the Entity Framework supports. Relational inheritance is a way of describing your relational data in an OO form, where the rules of polymorphism hold true for your model. Splitting is a way of classifying your data so that it maps onto sets of entities in your model.

Inheritance Scenarios


TPT: Table per Type

I have a 1-1 table relationship in my database that represents a base class / subclass structure.

  • Person is a contact
  • Business is a contact
How To
  • Drag the 2 entities onto the designer
  • Link them with the inheritance tool, or use the context menus.

This is the default behaviour for model first design.

  • Polymorphic rules apply to loading. Querying the base type will cast to subtypes along with their respective data.
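Loading from a TPT model then looks something like this (a sketch only: the context and entity names are assumptions, not code generated from this model):

```csharp
using System.Collections.Generic;
using System.Linq;

// Sketch: "ContactsEntities", "Contacts", "Person" and "Business"
// are illustrative names based on the example above.
using (var context = new ContactsEntities())
{
    // Each Contact materializes as its concrete subtype; EF joins in
    // the Person or Business table for the subtype-specific columns.
    foreach (Contact contact in context.Contacts)
    {
        if (contact is Person)
        {
            // subtype properties are available after a cast
        }
    }

    // Or restrict the query to a single subtype up front:
    List<Person> people = context.Contacts.OfType<Person>().ToList();
}
```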
TPH: Table per Hierarchy

I have a single table in the database where sets of columns identify different inheritance structures and there is a condition column (lookup or code) that identifies the entity type.


A User table that has a lookup on user type that varies as the user passes through my user lifecycle workflow (e.g.: Anonymous -> Registered -> Made Purchase -> VIP); as they do so, more methods and options open up.

How To
  • Drag an entity onto the designer and link it to the table.
  • Mark this entity as abstract
  • Delete the non-common properties, including the condition column (a column used in a mapping condition cannot also be mapped as a scalar property).
  • Create another entity that derives from the base entity
  • Under the mapping details, set a condition (or conditions) on the discriminator column(s)
  • Add the additional columns unique to the child entity.
  • Polymorphic rules apply to loading. Querying the base type will cast to subtypes along with their respective data.
TPC: Table per Concrete Class

I have multiple identical tables that represent an inheritance structure. Conceptually they are the same, but structurally there is additional information that is mutually exclusive (e.g. relationships).

  • Employee -> CurrentEmployee that maps to Employees table with relationships to Manager and Subordinates
  • Employee -> PreviousEmployee that maps to the PreviousEmployee table. It is merely a record of prior employees.
How To
  • This is not supported by the designer.
  • On the designer, create the two entities with their default mappings
  • View the designer in XML
  • Copy one of your entity elements from the CSDL layer and paste it.
  • Mark it as abstract and name it as the base type.
  • Delete all the common properties from the original entity and set its base type to your new base type.
  • For the other entity, mark its base type as your base type.
  • This retains the original table mapping for the second entity whilst classifying the conceptual entity as a subtype of the base type.
  • Polymorphic rules apply to loading. Querying the base type will cast to subtypes along with their respective data.
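After the XML surgery, the CSDL fragment looks roughly like this (names are illustrative and attributes abbreviated; the mapping fragments for the two tables are unchanged):

```xml
<!-- Hand-added abstract base type -->
<EntityType Name="Employee" Abstract="true">
  <Key>
    <PropertyRef Name="Id" />
  </Key>
  <Property Name="Id" Type="Int32" Nullable="false" />
  <Property Name="Name" Type="String" />
</EntityType>

<!-- Concrete types keep their original table mappings -->
<EntityType Name="CurrentEmployee" BaseType="Model.Employee">
  <!-- plus the Manager / Subordinates navigation properties -->
</EntityType>
<EntityType Name="PreviousEmployee" BaseType="Model.Employee" />
```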

Splitting Scenarios

Vertical Splitting

We have two tables with a 1 – 1 relationship that we want to represent in a single entity. This is like TPT inheritance mapping without the inheritance structure.


I have a User table with a 1 – 1 relationship with a Location table, since in my application a user is associated with a single location. I want to represent the User Entity with the user’s location information in it.

How To
  • Add both entities onto the designer.
  • Select and cut the Location properties from the Location Entity
  • Paste them into the User Entity
  • Delete the Location Entity.
  • When prompted if you want to remove the Location entity from the Store layer, click No
  • Go to the mapping details on the User entity and add the Location table to the mappings list. This will automatically populate the property mappings.
  • When selecting, this transparently creates an inner join between the two tables
  • Because this is NOT inheritance, polymorphic loads do not apply.
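Once mapped, querying is transparent. A sketch (the context, entity and property names are assumptions):

```csharp
using System.Linq;

using (var context = new MyEntities())
{
    // One entity, two tables: EF generates a single SELECT with an
    // inner join between the User and Location tables.
    var user = context.Users.First(u => u.Id == 1);
    var city = user.City;   // a column that physically lives in Location
}
```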
Horizontal Splitting

I have 2 similar tables with the same column names and I want only one entity. However, I still want to distinguish between the two via a Boolean flag.


I have two tables that represent batches of data. Processed and Unprocessed. I want to easily remove a row from the Unprocessed table and insert it into the Processed table when I have finished processing it.

How To
  • Add the two entities to the designer
  • Delete the Processed entity
  • When prompted if you want to remove its data from the Store Layer, click No
  • Select the Unprocessed entity and add the Processed table to its mappings
  • Right click the Unprocessed entity and add a new scalar property called “IsProcessed” of type Boolean
  • Go to the EDMX XML, as this next step is not supported by the designer
  • In the mapping layer, under the Unprocessed ENTITY you will see 2 mapping fragments, one for the Processed table and one for the Unprocessed table. Under the Processed TABLE element, add a condition element
    <Condition Name="IsProcessed" Value="true" />
  • Under the Unprocessed TABLE element, add a condition element
    <Condition Name="IsProcessed" Value="false" />
  • Selecting from this entity transparently creates a UNION over the two tables
  • When changing the Boolean flag between conditions, the row is deleted from the one table and inserted into the other transparently.
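With the conditions in place, moving a row between the tables is just a property change. A sketch (the context and entity names are assumptions):

```csharp
using System.Linq;

using (var context = new MyEntities())
{
    // Reading spans both tables (EF generates a UNION); the condition
    // column surfaces as the IsProcessed flag.
    var batch = context.Batches.First(b => !b.IsProcessed);

    // Flipping the flag makes EF DELETE the row from the Unprocessed
    // table and INSERT it into the Processed table on SaveChanges().
    batch.IsProcessed = true;
    context.SaveChanges();
}
```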
Table Splitting

I have one table, but I want to expose a subset of its columns through a navigation property on the same entity.


I have a Person entity with a picture column on it. I do not want to load this image each time I query the entity. Rather, I want a Picture property that I can navigate to, deferring the loading of the image.

How To
  • Add the entity to the designer
  • Copy and paste the entity
  • Rename the second entity
  • Pluralize the second entity’s Entity Set Name from its properties.
  • Delete all unwanted properties from the second entity
  • Set the mapping on the second entity to the same Store as the first.
  • Delete the corresponding properties from the first entity
  • Add an association element to the designer
  • Set the multiplicity to 1 – 1
  • Double click on the association entity
  • Set the Principal to the second entity
  • Setting the context’s ContextOptions.LazyLoadingEnabled = false will cause the entity to NOT load the navigation property EVEN when it is navigated to. It will instead return null.
  • With lazy loading enabled this will query the navigation property when it is called, as per expected lazy load behaviour
  • Using the Include(string path) method on the query allows you to force the loading of the navigation property upfront.
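The three loading behaviours can be sketched as follows (context, entity and navigation property names are assumptions):

```csharp
using System.Linq;

using (var context = new MyEntities())
{
    // Deferred: the picture columns are not selected here.
    var person = context.People.First(p => p.Id == 1);

    // With lazy loading enabled, touching the navigation property
    // issues a second query for the split-off columns. With it
    // disabled, this returns null instead.
    var picture = person.Picture;

    // Or force the picture columns to load up front with Include.
    var eager = context.People.Include("Picture").First(p => p.Id == 1);
}
```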

Whilst playing with the Entity Framework, I thought it would be pretty cool if the auto generated code would create and implement an interface along with the standard data context and entity classes. This would make unit testing with a mocking framework really easy, and allow you to write tests without having to access a data store. Usually I write a wrapping repository to mock out, but this is a little tiresome. Since the Entity Framework is really flexible, this shouldn’t be too much trouble. So let’s give it a bash.

I used the POCO class T4 templates as a basis. You can read more about them here. I am no T4 expert, but the code is intuitive enough to make sense of it, and more importantly to make some modifications to it.

Making the Changes

We first create a new class library project and add a folder in it for our EDMX and supporting classes.

01-New Project

After this we create a standard EDMX

02-Add New EDMX

I am going to play with a basic invoicing model


I am also going to be doing this in design first mode, but it is no different if you are generating off an existing database. If we take a look at the generated code behind we can see a couple of things.

04-Designer Code

We see that our container inherits from the ObjectContext base class. Outside of this class is a region for all the generated entities, which by default derive from the EntityObject class. The objective of the POCO T4 template is to decouple this using dynamic proxies and some black voodoo magic. Also, if we take a look at the properties of the ObjectContext, we notice that the Code Generation Strategy is set to Default.

05-Code Generation Strategy

What we are going to do next is to tell the EDMX to use our own custom code generation strategy. Right clicking on the design surface, we select “Add Code Generation Item..” from the context menu.

06-Add Code Generation Item

If you have installed the POCO generator extension then you will see it in the dialog box. Select this.

07-POCO Entity Generator

This adds two T4 templates to the project: one for generating the new ObjectContext, the other for generating the entities. These .TT files have the generated code behind them in .CS files. You can explore them, as well as the template that generated them. To help in viewing the T4 templates, I recommend downloading a copy of the Tangible T4 Editor, which gives you some nice highlighting and some intellisense on the T4 template code.

08-Template Files

Next, we need to copy the .Context.TT file and paste it into the folder; this gives us a file to work with for the interface. Rename it to something more appropriate. We now dive into this new file and make some changes so that it generates an interface off the EDMX file.

10-Interface Definition

We prefix an “I” onto the container name, and implement the IDisposable interface on the interface. Also make sure to change “class” to “interface”. We then remove all the unnecessary stuff like the constructor definitions.

11- Property Definition

For the property definitions we change the return type to IObjectSet. This is so we can mock out the return types for any LINQ / lambda queries against the interface. Make sure we only implement the GET portion of the property, and get rid of all the implementation around the property so that only the definition is left.

Onto the function imports. We do a similar thing: reduce them to definitions and remove the implementation.

12-Function Imports

For the VB.Net version of the POCO class generator there is a bug in the function import code; I blogged on this here. You will have to code around it. To be fair, I haven’t tested the ObjectResult return type, but if you can’t instantiate it then you will have to do something similar to the properties and find an appropriate interface to substitute.

13-Generated Interface

If all goes well, on saving the template we get a nice clean interface generated against our context entities.

We also need to add some of the functionality found in the ObjectContext, like SaveChanges, so it may be invoked through the interface. Add what you need.

18-Additional Context Methods
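For my invoicing model, the interface the template emits should look roughly like this (a hedged sketch; “IInvoicingEntities” and the set names are illustrative, not taken from the generated code):

```csharp
using System;
using System.Data.Objects;

// Illustrative shape of the generated interface.
public interface IInvoicingEntities : IDisposable
{
    // IObjectSet (not ObjectSet) so tests can substitute a fake.
    IObjectSet<Customer> Customers { get; }
    IObjectSet<Invoice> Invoices { get; }
    IObjectSet<InvoiceLine> InvoiceLines { get; }

    // Hand-added ObjectContext functionality, e.g.:
    int SaveChanges();
}
```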

The next task is now modifying the generator for the ObjectContext so that it implements our interface.

14-Context Implementing Interface

We do this by modifying the definition of the ObjectContext to implement the interface.

We also change the property definitions to return IObjectSet. Note that for VB.Net, since you need to implement interfaces explicitly, you will also need to add the Implements clause on the end of the property.

15-Properties Implement IObjectset

We can now inspect the new generated ObjectContext.

16-New Generated Context

Testing it Out

Let’s write some implementation code to test. Typically one writes some form of service that consumes the data context. We also want to unit test the service by mocking out the repository.

Let’s create a basic service.

17-Create Service

Some things to note here. We use constructor dependency injection to allow us to inject our own flavor of repository. We also have a method called “CreateInvoice” that receives some information (via a front end or wherever) and creates the header and associative entities for an invoice. You should be able to eyeball the implementation.
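A sketch of such a service (the interface and entity names are assumptions based on the invoicing model, not the actual code in the screenshot):

```csharp
using System;
using System.Collections.Generic;

public class InvoicingService
{
    private readonly IInvoicingEntities _repository;

    // Constructor injection: production code passes the real context,
    // tests pass a mock of the generated interface.
    public InvoicingService(IInvoicingEntities repository)
    {
        _repository = repository;
    }

    public void CreateInvoice(int customerId, IEnumerable<InvoiceLine> lines)
    {
        // Create the header and its associative entities.
        var invoice = new Invoice { CustomerId = customerId, Date = DateTime.Now };
        foreach (var line in lines)
        {
            invoice.InvoiceLines.Add(line);
        }

        _repository.Invoices.AddObject(invoice);
        _repository.SaveChanges();
    }
}
```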

We now need a little helper class that implements the IObjectSet interface so we can return it. We also don’t want it to be completely dumb: it would be nice if we could base our assertions off this object to verify the integrity of the data the service method is producing. So we base it on a List class. This was pulled off MSDN.

19-Fake Object Set

We call this FakeObjectSet appropriately. Notice it is a generic.
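The shape of the class, along the lines of the MSDN sample (a sketch, not the exact code in the screenshot):

```csharp
using System;
using System.Collections;
using System.Collections.Generic;
using System.Data.Objects;
using System.Linq;
using System.Linq.Expressions;

// A List-backed IObjectSet<T>: dumb enough to be predictable,
// smart enough to assert against after the service has run.
public class FakeObjectSet<T> : IObjectSet<T> where T : class
{
    private readonly List<T> _data = new List<T>();

    public void AddObject(T entity) { _data.Add(entity); }
    public void Attach(T entity) { _data.Add(entity); }
    public void DeleteObject(T entity) { _data.Remove(entity); }
    public void Detach(T entity) { _data.Remove(entity); }

    // IQueryable members delegate to the in-memory list so that
    // LINQ queries in the code under test just work.
    public Type ElementType { get { return typeof(T); } }
    public Expression Expression { get { return _data.AsQueryable().Expression; } }
    public IQueryProvider Provider { get { return _data.AsQueryable().Provider; } }

    public IEnumerator<T> GetEnumerator() { return _data.GetEnumerator(); }
    IEnumerator IEnumerable.GetEnumerator() { return _data.GetEnumerator(); }
}
```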

We also add Rhino Mocks as our mocking framework.

20- Add Rhino Mocks

We then create a fancy mocking test to verify the behaviour of the service method. I am not going to worry about philosophies of mocking vs. stubbing here. This is just an example of a test.

21- Create fancy mock test

Some interesting points here are:

We generate our FakeObjectSets with fake data relevant to the test.

We use the mocking framework to return this data to the service.

The Repo is mocked.

We can query the repo after the fact to verify the data in it. We compare this to our input data to ensure all business rules have been applied.
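Put together, the test looks something like this (a hedged sketch against the service and FakeObjectSet from the screenshots; the names and the NUnit/Rhino Mocks attributes are assumptions):

```csharp
using System.Linq;
using NUnit.Framework;
using Rhino.Mocks;

[TestFixture]
public class InvoicingServiceTests
{
    [Test]
    public void CreateInvoice_CreatesHeaderAndLines()
    {
        // Arrange: a fake set to capture output, and a mocked repository.
        var invoices = new FakeObjectSet<Invoice>();
        var repo = MockRepository.GenerateMock<IInvoicingEntities>();
        repo.Stub(r => r.Invoices).Return(invoices);
        var service = new InvoicingService(repo);

        // Act
        service.CreateInvoice(42, new[] { new InvoiceLine { Quantity = 1 } });

        // Assert: query the fake set after the fact to verify the data,
        // and check the context was asked to persist it.
        Assert.AreEqual(1, invoices.Count());
        Assert.AreEqual(42, invoices.First().CustomerId);
        repo.AssertWasCalled(r => r.SaveChanges());
    }
}
```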

We then run the test.

22- Verify Results

Green lights baby…green lights.

For getting started with the Entity Framework, follow my multi-part tutorial.

Ok, I give up. Is someone able to explain to me why the VB POCO generator T4 template refuses to generate my function imports? Not only that, but it doesn’t even generate the region for the function imports.
This pic shows the T4 code clearly defining the function import region

And this pic clearly shows the code NOT being generated.

Any ideas?

UPDATE: Found the problem

To answer my question, it seems that yes, there is a bug. Looking at the template I came across this nugget:

If edmFunction.ReturnParameter Is Nothing Then
  Continue For
End If

Essentially, this checks the return type of the function import and skips the rest of the iteration if the function import has no return type. Seems logical? NO.

You see, in VB, functions that don’t return a value are Subs. Since the generator can only handle Functions, any function imports that don’t return a collection will not be generated. This is an issue for non-query functions.

You’ve got a couple of options:

  1. Ensure your procs ALWAYS return something. Not always feasible.
  2. Modify the T4 template to handle Subs

I will try option 2.

Keep you posted

UPDATE: Found a potential solution

Ok, I did my workaround and replaced the “Function Imports” section in the T4 template with the following (T4 control delimiters omitted, as in the original template excerpt):

region.Begin("Function Imports")
For Each edmFunction As EdmFunction In container.FunctionImports
    Dim parameters As IEnumerable(Of FunctionImportParameter) = FunctionImportParameter.Create(edmFunction.Parameters, code, ef)
    Dim paramList As String = String.Join(", ", parameters.Select(Function(p) "ByVal " & p.FunctionParameterName & " As " & p.FunctionParameterType).ToArray())
    If edmFunction.ReturnParameter Is Nothing Then
        ' The import has no return type, so emit a Sub
<#=Accessibility.ForMethod(edmFunction)#> Sub <#=code.Escape(edmFunction)#>(<#=paramList#>)
        For Each parameter As FunctionImportParameter In parameters
            If Not parameter.NeedsLocalVariable Then
                Continue For
            End If
    Dim <#=parameter.LocalVariableName#> As ObjectParameter
    If <#=If(parameter.IsNullableOfT, parameter.FunctionParameterName & ".HasValue", parameter.FunctionParameterName & " IsNot Nothing")#> Then
        <#=parameter.LocalVariableName#> = New ObjectParameter("<#=parameter.EsqlParameterName#>", <#=parameter.FunctionParameterName#>)
    Else
        <#=parameter.LocalVariableName#> = New ObjectParameter("<#=parameter.EsqlParameterName#>", GetType(<#=parameter.RawClrTypeName#>))
    End If
        Next
    MyBase.ExecuteFunction("<#=edmFunction.Name#>"<#=code.StringBefore(", ", String.Join(", ", parameters.Select(Function(p) p.ExecuteParameterName).ToArray()))#>)
End Sub
    Else
        ' The import returns a collection, so emit a Function as before
        Dim returnTypeElement As String = code.Escape(ef.GetElementType(edmFunction.ReturnParameter.TypeUsage))
<#=Accessibility.ForMethod(edmFunction)#> Function <#=code.Escape(edmFunction)#>(<#=paramList#>) As ObjectResult(Of <#=returnTypeElement#>)
        For Each parameter As FunctionImportParameter In parameters
            If Not parameter.NeedsLocalVariable Then
                Continue For
            End If
    Dim <#=parameter.LocalVariableName#> As ObjectParameter
    If <#=If(parameter.IsNullableOfT, parameter.FunctionParameterName & ".HasValue", parameter.FunctionParameterName & " IsNot Nothing")#> Then
        <#=parameter.LocalVariableName#> = New ObjectParameter("<#=parameter.EsqlParameterName#>", <#=parameter.FunctionParameterName#>)
    Else
        <#=parameter.LocalVariableName#> = New ObjectParameter("<#=parameter.EsqlParameterName#>", GetType(<#=parameter.RawClrTypeName#>))
    End If
        Next
    Return MyBase.ExecuteFunction(Of <#=returnTypeElement#>)("<#=edmFunction.Name#>"<#=code.StringBefore(", ", String.Join(", ", parameters.Select(Function(p) p.ExecuteParameterName).ToArray()))#>)
End Function
    End If

It builds and generates what seems to be a decent Sub. Please note, this is UNTESTED.

In the previous post we looked at the architecture of the Entity Framework. Carrying on from there we will see how to setup a basic Model First Design Scenario.

Common Tasks

For any operation against the Entity Framework, you will need to instantiate a data context. Since this object is bound to external resources it is important to clean it up after use. Wrapping the instantiation in a “using” block should take care of this.
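A minimal sketch of this pattern (“MyEntities” is a placeholder for your generated context name):

```csharp
// The using block guarantees Dispose() is called on the context,
// even if an exception is thrown inside the block.
using (var context = new MyEntities())
{
    // perform queries and updates here
} // Dispose() runs here, releasing the underlying connection
```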

A Simple Designer Scenario

In this example we will add an entity to the designer in model first mode. We can then add some properties. We will also run through some of the options available to you as we go.

It is a good idea to keep all your EF Models in a class library off your main project. This will afford you the opportunity to reuse the model between multiple projects.

To add an EF Model

  • Right click the project -> Add New Item
  • Select ADO .NET Entity Data Model
  • Give your model a name

  • Select “Empty Model” from the wizard

This will create an EDMX file and add it to your project. You will also notice some references being added to the project.

You should now be presented with the designer surface. With the design surface selected, explore the properties of the data context by hitting F4.

Let’s run through some of the more interesting properties here, bearing in mind that we are going to model first and then generate our database off the model. This means we need to provide the Entity Framework with some working knowledge about our database.

  • Code Generation Strategy: Allows us to use our own T4 templates for code generation.
  • Connection String: Notice this is blanked out, this will be populated once we bind the model to a database.
  • Database Schema Name: Notice in this image it is bold, meaning it has been changed. The default is “dbo”, however it is convenient to partition off your system areas. This will add every generated entity in the model to the database schema specified.
  • Lazy Load Enabled: This allows us to use deferred loading of entities.
  • Metadata Artifact Processing: We will go into this in more detail, but for now we can say it relates to the special way entity framework handles its connection strings.
  • Namespace: This is the code namespace the model will be stored under
  • Pluralize New Objects: This tries to cleverly pluralize the entity collections and database object names. So Company would have a collection name of Companies.
  • Transform Related Text Template on Save: This specifies whether to invoke the T4 code generation when the EDMX is saved.
  • Validate on Build: Verifies that the model is valid and there are no unbound entities or properties.

Once we are happy with the data context we can begin creating entities. We can either drag an entity from the toolbox onto the design surface, or we can use the context menus from the design surface. Either method is fine, but the context menu gives you a neat dialog to fill out: right click -> Add -> Entity…

  • Properties
    • Entity Name: The name of your entity (singular)
    • Base Type: Relates to inheritance, which we will cover later on.
    • Entity Set: The collection name for your entity. This gets auto populated, but you can change it if you feel the need.
  • Key Property
    • Create Key Property: Tell the designer to create a property the entity can key off. Every entity MUST have a key. Composite keys can be defined later in the designer.
    • Property Name and Property Type: If generating a single primary key, you can set the type and name of it here

Once the entity has been created we can start adding properties to it. Right click on the properties section in the entity -> Add -> Scalar. Give your property a name. You can get more detail on your property by hitting F4 while it is selected:

Here I have created a Company entity and given it a few properties. Inspecting one of these we see:

  • Concurrency Mode: If set, this property will form part of the WHERE clause of the generated update statement, along with the primary key. This is similar to the concurrency model in the old DataAdapters, except you have much finer control in the Entity Framework. By way of example, let’s look at two SQL statements:

A) No Concurrency Control:

    UPDATE Company SET RegistrationNumber = '9999' WHERE Id = 3

B) With Concurrency Control:

    UPDATE Company SET RegistrationNumber = '9999' WHERE Id = 3 AND RegistrationNumber = '123'

In example A there is no guarantee that the record has not been modified since we read it. The only way we could detect a concurrency issue is if the row was deleted, in which case ROWS AFFECTED will return 0.

In example B, because the original value(s) of the row participate in locating the record, we can now detect whether a concurrency violation has taken place since our last read, i.e. the record has been changed, since no match will be made. If ROWS AFFECTED returns 0, a concurrency exception is thrown, and the application must offer up some form of conflict resolution.

  • Default Value: This is the default code value. Note: this does NOT set the database default value for a property. It will only initialize the object’s property value.
  • Entity Key: Specifies whether this property participates in the entities key.
  • Fixed Length: This is for database generation; it will make the property either char or varchar if a string.
  • Max Length: This is for database generation; it will set the field length in the database.
  • Nullable: Affects both the entity and the database object.
  • StoreGeneratedPattern: Allows the database to set the value of this property, this is primarily used for auto increment values.
  • Type: the CLR type of this property
  • Unicode: if the type is a string, specifies whether the property supports an extended character set (nvarchar vs varchar)

We now have a basic entity model to query, although at this stage it is not bound to a database.


LINQ to Entities

Since the Entity Framework supports the LINQ query interface, we have all the LINQ operators available to us. The following link provides several querying examples that should cover most development scenarios.

It is important to note, however, that it is never a good idea to have overly complicated database queries in a LINQ statement, lest we lose SQL pre-compilation optimizations. For these cases it is probably better to wrap the query logic in a stored proc and expose it via the Entity Framework. Here is an example of a simple query layout that one might find in a service class that handles paging.

Some things to note here:

  • The data context instance is wrapped in a using statement so it is nicely cleaned up after use.
  • We ToList() the query; in essence this executes the query and enumerates the results into a List object.
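A hedged sketch of such a paging method (“MyEntities” and “Companies” are placeholder names, not from an actual model):

```csharp
using System.Collections.Generic;
using System.Linq;

public List<Company> GetCompanies(int pageIndex, int pageSize)
{
    using (var context = new MyEntities())
    {
        return context.Companies
                      .OrderBy(c => c.Name)        // Skip/Take need a stable order
                      .Skip(pageIndex * pageSize)
                      .Take(pageSize)
                      .ToList();                   // the query executes here
    }
}
```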
Lazy Loading

Entity Framework supports deferred loading; that is, a query will not be executed until it is used via iteration or some other operation. This gives us the ability to logically supplement the query as we pass it through our code. We could also create a base query (say, limiting the results to the current user ID) and then build upon this. Lazy loading may be switched on and off via the DataContext.
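Composing a deferred query might look like this (a sketch; the context, entity and property names are assumptions):

```csharp
using System.Linq;

int currentUserId = 7;    // example values for the sketch
bool unpaidOnly = true;

using (var context = new MyEntities())
{
    // Base query limited to the current user: no SQL executed yet.
    IQueryable<Order> query = context.Orders
                                     .Where(o => o.UserId == currentUserId);

    // Supplement the query as it passes through our code.
    if (unpaidOnly)
    {
        query = query.Where(o => !o.IsPaid);   // still deferred
    }

    var results = query.ToList();   // a single SQL query runs here
}
```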

Loading a Network Graph

It is also possible, when querying an entity, to return related entities via a navigation property. In this case we instruct the data context to also load a “path”. This will load the queried entities as well as all related entities in the graph. This is useful for caching or master / detail scenarios. We do this by querying off the data context collection using the “Include” method:

Context.Companies.Include("Employees").Where(x => x.Name == "My Start-up");

In this example we load the company “My Start-up” and all its Employees.

Loading from Inheritance

It is possible to define an inheritance relationship between entities (covered later on). However when querying, it is important to note that principles of polymorphism apply. When querying a base type, all sub types that match the base type query are returned too. Although returned as an enumeration of the base type, the underlying CLR type that they will be instantiated as is the sub type. This has some profound implications on the data loading from SQL too. Any properties that extend the base type will also be loaded from their respective mapped types. If this behaviour is undesirable, then perhaps one should consider implementing a splitting scenario over an inheritance structure.

In the next part we will look into generating the database off the model as well as Inserting, Deleting and Updating.

Recently I decided to roll up my sleeves and demystify the workings of the Entity Framework. I had taken a look at it a while back but found the tooling a little premature, so I put it back on the shelf and carried on bashing away with LinQ to SQL which to my mind “did everything that Entity Framework does”. Boy was I wrong!

This is the first of a multi part series that will take us from EF Noobs to Pirate Ninja EF Ubersoldata. But first. The basics.

What is EF?

Simply put, the Entity Framework is an object relational mapper. That is, it takes database objects like tables, sprocs, views, etc. and maps them into a nice, easy to use object model that can be called in code. If done correctly, this should take the nightmare out of managing the translation of SQL types and structures into CLR types. Entity Framework offers this and SO much more.


The basic architecture of the Entity Framework is similar to most ORMs. The DataContext is a class that holds all the metadata about the model as well as the entity definitions. The DataContext is broken up into three layers that aid the transformation from the data layer into the conceptual layer: the Conceptual Layer, the Storage Layer and the Mapping Layer. Any data changes (CRUD, etc.) made in the Conceptual Layer are passed through the Mapping Layer and mapped onto SQL commands, generated dynamically or called via procs. This gives us a powerful mechanism to perform some advanced operations with the complexity abstracted from the developer. These include:

  • Inheritance scenarios
  • Data splitting scenarios
  • Data composition scenarios
  • Parent / Child awareness in the conceptual layer
  • Renaming and remapping of database entities.
  • Strongly typed set based operations in code

CSDL: Conceptual Layer

The conceptual layer is a layer whose functionality maps closely onto your application’s functionality. Moreover, since we can have multiple data contexts, it gives us the ability to map our conceptual layer closely to our problem domain and remove the nuances of having to handle database specific issues in our code. We can target multiple EDMX files at different parts of the system.

SSDL: Store Layer

The Storage layer maps directly onto the database structure. Since this layer has semantic knowledge of the database, it can perform code generation for most database scenarios at runtime. You may map most database entities in the Store layer and bubble them up to the Conceptual Layer. This includes:

  • Tables
  • Views
  • Stored Procs
  • Functions

In some more advanced scenarios you are permitted to define SQL commands in your store and treat them as first class Store elements.

MSDL: Mapping Layer

The Mapping Layer ties the Conceptual and Store layers together and handles translations between the two. The existence of this layer allows us some complex binding and mapping scenarios (these will be outlined later on). There are several options when mapping entities:

  • Procs to CLR methods
  • CRUD through auto generated SQL
  • CRUD though stored proc to entity mapping

The EDMX and Code Generation

When a new entity data model is created, what in fact happens is that the designer tool generates an XML file (.EDMX). We can explore this file by opening the EDMX file in an XML editor. From there you can clearly see the three sections: CSDL, SSDL and MSDL. There is also a fourth section for the designer to track the graphical objects on the design surface.

Using T4 template code generation, the XML is parsed and the code behind is generated. This code behind file includes the classes for the three layers as well as the data context. Since T4 code generation is used, we can supply a custom template if we want to modify or extend the generated classes. Out of the box the generator creates a Table per Type inheritance strategy with table mappings and all entity classes deriving from the EntityObject class. We will see how to change these behaviors later on.

Model First vs Data First

The Visual Studio 2010 tooling provides a rich modelling environment. We are given two options to create the data model.

  • Model First
  • Generate from Existing Database

Model first allows us to play with the designer and then generate a database off the model. It abstracts some of the normalization concerns like associative entities. This is a very natural way for a developer to work. From here we can specify some database options like the schema namespace, properties that enlist in concurrency checking, etc. At this stage all your work resides in the conceptual layer, but it holds enough information to generate a new database script. Use this method when designing a new system from scratch, as it employs some best practices that are often overlooked. Once generated, the designer populates its store layer and mapping layer to map onto the newly created database.

Generate from existing database does exactly what it says: it will go through the database, extract database objects and build the model from the store layer upwards. Once complete, you may remove entities from the conceptual layer and you will be prompted as to whether you would like to remove them from the mapping and store layers accordingly. In some cases you want items to exist in the store layer but be mapped/remapped to one or more entities in the conceptual layer. You may also choose to update the model from the database as it changes over time.

In part 2 we will be diving into some code and basic examples.