Jun 20, 2007

Introducing the Entity Framework

Databases tend to be a necessary evil for most developers. They want a way to persist data, but they do not want to deal with the minutiae of writing code against a database. This is not a new phenomenon. I have seen commercial packages going back to the eighties that purported to make working with the database completely seamless. Whether they are called “persistence layers”, “object-relational tools” or even “data abstraction layers”, many of these tools are aimed at speed to market or at rapid application development (RAD). There is no single best solution here. Depending on your project or enterprise, the requirements will often dictate the right tool for the job.

When Microsoft announced it was going to release a new technology called the “Entity Framework”, the news was met with interested skepticism by most of the development community. Immediately there were comparisons to popular object-relational tools (NHibernate, LLBLGen Pro, et al.). What got lost in the haze of comparisons was that the Entity Framework is a completely different animal. The most important point in this article is that the Entity Framework is not meant to solve the same problems that those other tools are trying to solve.

In this article I will introduce you to this new technology and try to explain the “why” and “where” of the Entity Framework. I am purposely skimping on the “how”, because the details of the implementation are in a constant state of flux in response to customer and community feedback.

For this article I am using the Entity Framework as it exists in the Orcas Beta 1 version delivered by Microsoft in April of 2007.

The Problem…

Teams build software, not developers. Different members of a team think of data differently. For example:

  • Developers think in class diagrams
  • Analysts think in OR diagrams
  • DBAs think in ER diagrams

There is an impedance mismatch between these different models. Developing a common language for them is one approach to solving the mismatch, but each of these groups continues to think about a project’s data in its own way.
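
To make the mismatch concrete, here is a minimal sketch in C#; the Customer and Order types, their property names and the table definitions in the comments are hypothetical examples supplied for illustration, not anything defined by the Entity Framework.

    using System;
    using System.Collections.Generic;

    // The developer's view: a customer is an object that holds a typed
    // collection of related orders, navigated by object reference.
    public class Customer
    {
        public Customer() { Orders = new List<Order>(); }

        public int CustomerId { get; set; }
        public string Name { get; set; }
        public List<Order> Orders { get; set; }
    }

    public class Order
    {
        public int OrderId { get; set; }
        public DateTime OrderDate { get; set; }
    }

    // The DBA's view of the same data: two normalized tables related by
    // a foreign key rather than an object reference, roughly:
    //
    //   CREATE TABLE Customers (CustomerId INT PRIMARY KEY,
    //                           Name NVARCHAR(100));
    //   CREATE TABLE Orders    (OrderId INT PRIMARY KEY,
    //                           CustomerId INT REFERENCES Customers,
    //                           OrderDate DATETIME);

Neither view is wrong; they simply encode the same facts in shapes that do not line up, which is exactly what makes translating between them tedious.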

The mismatch of data models continues outside the realm of software development teams. A larger set of tools and technologies each tries to develop its own model for consuming data. If we look at how our data is consumed, only some of it comes in the form of software projects. Data is mined, reported on, warehoused and exposed through interoperability points, and each of these points has its own idea of what the data looks like. This means that either the database schema is exposed up through the middle tier or application models are pushed down into it. Developing a common dictionary for data models within an organization is difficult because there is no common grammar that works across the different fiefdoms.
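
As a hypothetical sketch of the first case (the connection string, table and column names here are my own, not from any particular product), this is what a middle tier looks like when the storage schema leaks up into it: the code is written directly against table and column names and hands untyped rows back to its callers.

    using System.Data;
    using System.Data.SqlClient;

    // When the schema is exposed up through the middle tier, code at this
    // layer depends directly on the database's table and column names.
    public static class CustomerData
    {
        public static DataTable GetActiveCustomers(string connectionString)
        {
            using (SqlConnection connection = new SqlConnection(connectionString))
            using (SqlDataAdapter adapter = new SqlDataAdapter(
                "SELECT CustomerId, Name FROM Customers WHERE IsActive = 1",
                connection))
            {
                DataTable customers = new DataTable();
                // Callers receive untyped rows keyed by column name,
                // not domain objects.
                adapter.Fill(customers);
                return customers;
            }
        }
    }

Renaming a column in the database now breaks this method and every caller that reads the resulting rows; pushing an application model down into the database instead simply moves the same coupling in the other direction.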
