Concurrent Unit Tests with Service Locator

My talk at Microsoft Summit created a nice discussion with some of the participants about writing isolated unit tests when using the Service Locator.

It started from the part where I was showing how the AppBoot helps with dependency management by enforcing consistency in how Dependency Injection is done. With the AppBoot we make sure that Dependency Injection is used and that it is done only through the constructor. The question that started the discussion was:

Does this mean that I will have constructor parameters for everything? Including utility services like ILog? If so, it means that I will pollute the constructors with these details and… things may get overcomplicated

My answer was that for logging or other similar utilities we could make static helpers that make them easier to call. Such a helper would wrap a Service Locator, so we do not take a hard dependency on the logging implementation or library. Something like this:

public static class Logger
{
  public static void Error(string headline, string message)
  {
    ILogSrv log = ServiceLocator.Current.GetInstance<ILogSrv>();
    log.WriteTrace(new Trace(headline, message, Severity.Error));
  }
  
  public static void Warning(string headline, string message)  
  {
    ILogSrv log = ServiceLocator.Current.GetInstance<ILogSrv>();
    ...
  }
  
  public static void Trace(string functionName, string message)
  { 
    ILogSrv log = ServiceLocator.Current.GetInstance<ILogSrv>();
    ...
  }
    
  public static void Debug(string message, object[] variables)
  { 
    ILogSrv log = ServiceLocator.Current.GetInstance<ILogSrv>();
    ...
  } 
} 

This makes my class code depend on some static functions (Logger.Error()), but that seems a good compromise as long as the things underneath remain as simple as in the above snippet.

Now, if we want to write some unit tests in isolation, we would like to use a stub for the ILogSrv interface, and we can do that with a setup like this:

[TestClass]
public class UnitTests
{
 private Mock<IServiceLocator> slStub;

 [TestInitialize]
 public void TestInitialize()
 {
   slStub = new Mock<IServiceLocator>();
   ServiceLocator.SetLocatorProvider(() => slStub.Object);
 }

 [TestMethod]
 public void PlaceNewOrder_FromPriorityCustomer_AddedOnTopOfTheQueue()
 {
   Mock<ILogSrv> dummyLog = new Mock<ILogSrv>();

   slStub.Setup(l => l.GetInstance<ILogSrv>()).Returns(dummyLog.Object);

   ...
 }
 ...
}

This code configures ServiceLocator.Current to return an instance which gives the dummy ILogSrv when needed. Therefore, the production code will use a dummy ILogSrv, which probably does nothing on WriteTrace().

For logging this may be just fine. It is unlikely that we would need different stub configurations for ILogSrv in different tests. However, things may not be as easy for other services that are resolved through ServiceLocator.Current. We might want different stubs in different test scenarios. Something like this:

// --- production code ---

public class UnderTest
{
  public bool IsOdd()
  {
    IService service = ServiceLocator.Current.GetInstance<IService>();
    int number = service.Foo();
    return number%2 == 1;
  }
}

// --- test code ---

private Mock<IServiceLocator> slStub = new Mock<IServiceLocator>();
ServiceLocator.SetLocatorProvider(() => slStub.Object);

[TestMethod]
public void IsOdd_ServiceReturns5_True()
{
   Mock<IService> stub = new Mock<IService>();
   stub.Setup(m => m.Foo()).Returns(5);

   slStub.Setup(sl => sl.GetInstance<IService>()).Returns(stub.Object);
   ...
}

[TestMethod]
public void IsOdd_ServiceReturns4_False()
{
   Mock<IService> stub = new Mock<IService>();
   stub.Setup(m => m.Foo()).Returns(4);

   slStub.Setup(sl => sl.GetInstance<IService>()).Returns(stub.Object);
   ...
}

Because our production code depends on statics (it uses ServiceLocator.Current to get its instance), we will run into trouble when these tests are run in parallel. Think of the following scenario: Test1 sets up the slStub to return its setup for the IService stub. Then, on a different thread, Test2 overwrites this setup and runs. After that, when the code exercised by Test1 gets the IService instance through the static ServiceLocator.Current, it will receive the Test2 setup, hence the surprising failure.

By default, MSTest or VSTest will run tests from different test classes in parallel, so if we have more test classes which do different setups using ServiceLocator.SetLocatorProvider(), we will run into the nasty situation where our tests sometimes fail on the CI server or on our machine.

So, what should we do?

One option is to avoid the dependency on statics and to get the service locator through Dependency Injection, via the constructor. This would make the above example look like this:

// --- production code ---

public class UnderTest
{
  private IServiceLocator sl;
  public UnderTest()
  {
     sl = ServiceLocator.Current;
  }

  public UnderTest(IServiceLocator serviceLocator)
  {
     this.sl = serviceLocator;
  }

  public bool IsOdd()
  {
     IService service = sl.GetInstance<IService>();
     int number = service.Foo();
     return number%2 == 1;
  }
}

// --- test code ---

[TestMethod]
public void IsOdd_ServiceReturns5_True()
{
   Mock<IService> stub = new Mock<IService>();
   stub.Setup(m => m.Foo()).Returns(5);

   Mock<IServiceLocator> slStub = new Mock<IServiceLocator>();
   slStub.Setup(sl => sl.GetInstance<IService>()).Returns(stub.Object);

   var target = new UnderTest(slStub.Object);
   ...
}

[TestMethod]
public void IsOdd_ServiceReturns4_False()
{
   Mock<IService> stub = new Mock<IService>();
   stub.Setup(m => m.Foo()).Returns(4);

   Mock<IServiceLocator> slStub = new Mock<IServiceLocator>();
   slStub.Setup(sl => sl.GetInstance<IService>()).Returns(stub.Object);

   var target = new UnderTest(slStub.Object);
   ...
}

This is a good solution and I favour it in most cases. Sometimes, as in the above snippet, I add a parameterless constructor that is used in the production code and one which receives the ServiceLocator as a parameter for my unit test code.

The other option, which is the answer to the question at the start of the post, looks a bit more magical :). It fits the cases when we need and want to keep the simplicity the static caller brings. Here, we keep the production code as is and we make the unit tests safe to run in parallel. We can do this by creating one stub of the IServiceLocator for each thread and storing it in a thread-static field. We can do it with a ServiceLocatorDoubleStorage class that wraps the thread-static field and gives the tests a clean way to set it up and access it.

public static class ServiceLocatorDoubleStorage
{
    [ThreadStatic]
    private static IServiceLocator current;

    public static IServiceLocator Current
    {
        get { return current; }
    }

    public static void SetInstance(IServiceLocator sl)
    {
        current = sl;
    }

    public static void Cleanup()
    {
        SetInstance(null);
    }
}

Now, the unit tests will use ServiceLocatorDoubleStorage.SetInstance() instead of ServiceLocator.SetLocatorProvider(). So the test code from the above sample becomes:

[TestClass]
public class UnitTest
{
  [AssemblyInitialize]
  public static void AssemblyInit(TestContext context)
  {  // the production code will get it through 
     //   ServiceLocator.Current, so this is needed
     ServiceLocator.SetLocatorProvider(
          () => ServiceLocatorDoubleStorage.Current);
  }

  private Mock<IServiceLocator> slStub;

  [TestInitialize]
  public void TestInitialize()
  {
     slStub = new Mock<IServiceLocator>();
     ServiceLocatorDoubleStorage.SetInstance(slStub.Object);
  }

  [TestMethod]
  public void IsOdd_ServiceReturns5_True()
  {
    Mock<IService> stub = new Mock<IService>();
    stub.Setup(m => m.Foo()).Returns(5);

    slStub.Setup(sl => sl.GetInstance<IService>()).Returns(stub.Object);
    ...
  }
  ...
}

With this, each time a new thread is used by the testing framework, the test on it will first set its own stub of the ServiceLocator and only then run. This ensures that the ServiceLocator stubs, even though they are static resources, are not shared among tests running on different threads. In the code samples from my Code Design Training, on GitHub here, you can find a fully functional example that shows how this can be used and how it runs in parallel.
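
Since the storage also exposes Cleanup(), a [TestCleanup] method can reset the thread's stub after each test, so a test never sees a stub left behind by a previous test on the same thread:

  [TestCleanup]
  public void TestCleanup()
  {
     ServiceLocatorDoubleStorage.Cleanup();
  }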

To conclude, I would say that Dependency Injection and Service Locator should be used together. I strongly push towards using Dependency Injection in most cases because it makes the dependencies clearer and easier to manage, but there definitely are cases where Service Locator is needed or makes more sense. In either case, writing isolated unit tests should be easy and may be a good check of our design and dependencies.

This is discussed and explained in detail in my Code Design Training.

Speaker at Microsoft Summit

Last week I had the opportunity to speak for the first time at the Microsoft Summit.

It was a nice and pleasant experience. I talked about how we can achieve high quality code design by enforcing consistency, with the support of an application infrastructure, in a large and complex enterprise system.

I think I did one of my best presentations. I placed my story in the context of re-architecting and re-implementing a legacy system using modern techniques and technologies. I focused my talk on the challenge of building, testing and deploying for on-premises while at the same time being able to easily migrate to Azure and leverage the advantages of PaaS. The focus went on implementing a loosely coupled design, so we can replace certain components when migrating to the cloud.

At the end I had some very interesting questions, which turned into good suggestions for me too. One was about Dependency Injection vs Service Locator, and how I would unit test when Service Locator is chosen. I will detail this in my next technical blog post. It's a good idea for a technical topic, so thanks!

Another question was whether I am thinking of making the design practices that I have presented available to a broader audience through other means than my Code Design training, like writing on MSDN for example. I will definitely put some time and thought into this. Maybe besides articles on MSDN, making my course available on an online platform like Pluralsight would also be an idea worth investing in. Thanks for this suggestion as well!

The overall conference was a successful event in my opinion. I liked the mixture of business and technology. I also liked the area with the partners' stands, where you could see demos of how technology can optimise businesses. I was pleasantly surprised to see that our friends from the ConSix startup had a stand presenting their product.

All in all it was a nice week. Thank you Microsoft for inviting me! I’m looking forward to the next editions.

I have uploaded my slides over here if you’d like to take a look before the recordings are made available.


Crosscutting Concerns

The Crosscutting Concerns are the areas in which high-impact mistakes are most often made when designing an application. There are common causes that lead to this, and there are common practices that can help avoid such mistakes. In this post I will talk about the way I usually address most of the crosscutting concerns at the start of a project, when there are usually many other, more urgent things to think of.

The crosscutting concerns are the features of the design that may apply across all layers. This functionality typically supports operations that affect the entire application. If we think of a layered architecture, components from several layers (Presentation Layer, Business Layer or Data Access Layer) usually use components that address a concern in a common way, or that ensure a critical non-functional requirement is going to be met.

The crosscutting concerns are very much related to the quality requirements (also named non-functional requirements or quality attributes). In fact the crosscutting concerns should derive from these requirements. They are usually implemented in a separate area of the code and specifically address these requirements. For example:

  • Logging concern derives from Diagnostics, Monitoring or Reliability requirements
  • Authentication and Authorization concern derives from Security requirements
  • Caching concern derives from Performance requirements
  • Error Handling concern derives from Diagnostics, Availability and Robustness requirements
  • Localization concern derives from Multi-language and Globalization requirements
  • Communication concern derives from Scalability, Availability or Integration requirements

One of the most common mistakes with the crosscutting concerns is that we tend to neglect their importance at the start of the project, and we consider them poorly or not at all. Then, when we already have a ton of code written, somewhere in the second half of the project, for various reasons these quality requirements become pressing again. But now, because of all the code we have, the cost of addressing them in a consistent and robust manner becomes very high. To add consistent and useful logging, for example, would now mean going back through all the code and all the tested functionality and changing it to call a logging function. This may be costly. The same goes for authorization, localization and many others.

The challenge comes from the fact that the crosscutting components are like support code for the components that implement the functional requirements. They should be done before we start developing functionality. But this is often not a good idea. At the start of the project it is good to start implementing the functionality, so we can get feedback and show progress, not to spend too much time on things that may be postponed.

The key is to address them at the very beginning, but not to implement them. We should just identify them and design only the most important aspects. We should postpone making any time-consuming decision. I'm not saying to make uninformed decisions because of lack of time and change them later; I am saying to design in a way that allows postponing these decisions.

Let's take logging, for example. We can easily define the logging interface by looking at the monitoring and diagnostics requirements and considering the complexity of the application. It will be something like this:

public static class Log
{
  public static void Error(string headline, string message)
  { }
  
  public static void Warning(string headline, string message)  
  { }
  
  public static void Trace(string functionName, string message)
  { }
    
  public static void Debug(string message, object[] variables)
  { }
}

For the start, the functions can do nothing. They can be left unimplemented. Later, we can come back to this and invest time to decide which logging framework we should use. We will think about the configurability of the logging, whether we should log in the database or not, whether we should send some logs by email, whether we should integrate with a monitoring tool and which one, at a later time. Until then, all the code that implements the functional use cases has support for logging, and if we make a clear and simple logging strategy then we will have meaningful calls to these functions.

Maybe in a few sprints, we'll come up with a simple implementation of this, which will write logs to a text file, to help us in debugging. Then later, we could integrate the framework we know best to write the logs to a CSV file so we can easily filter and search. By the time we get near the deployment to different test environments, we will know more about the logging the system needs, and it will be easier to make a good decision on how to address this concern's implementation better. However, in all this time the code we are writing calls the logging interface, so we don't need to go back and search for meaningful places to insert these calls.
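
As an illustration, that first throwaway version could be as small as the sketch below. The file path and the line format are placeholder choices of mine, not part of the design:

using System;
using System.IO;

public static class Log
{
  // assumption: a hardcoded relative path is good enough for this first version
  private static readonly string logFile = "app.log";

  public static void Error(string headline, string message)
  {
    Write("ERROR", headline, message);
  }

  public static void Warning(string headline, string message)
  {
    Write("WARNING", headline, message);
  }

  public static void Trace(string functionName, string message)
  {
    Write("TRACE", functionName, message);
  }

  public static void Debug(string message, object[] variables)
  {
    Write("DEBUG", message, string.Join(", ", variables));
  }

  private static void Write(string severity, string headline, string message)
  {
    // one line per trace; trivial to replace with a real framework later
    File.AppendAllText(logFile,
        $"{DateTime.UtcNow:O} [{severity}] {headline}: {message}{Environment.NewLine}");
  }
}

When the real logging framework is chosen, only the bodies of these functions change; the many call sites spread through the use cases do not.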

The same practice can be applied to all of the crosscutting concerns. I also show it in a previous post, Localization Concern, which covers the localization aspects in detail.

So, the idea is that at the start of the project we should abstract each component that addresses a crosscutting concern. We can do this by thinking about how the other layers will interact with it, and define the abstraction from their perspective. The other layers should depend on this abstraction only. We can assure this by hiding (or encapsulating) the implementation of the crosscutting concern. By encapsulating the implementation we have the flexibility to change it or to replace the frameworks we use without a significant impact on the other code. In most of the cases the dependencies will be like in this diagram, where the ApplicationComponents do not know about the Log4Net library, which is just an implementation detail.

As a conclusion, following this approach, we can have the interfaces (abstractions) of the crosscutting concern components very early in the project and have them called from all the use cases. With this we postpone the real implementation of these concerns and we limit the cost of adding it later. The time to define these abstractions is usually short. It goes from a few hours to a few days, depending on the size and complexity of the application, but it always pays off compared to tackling this for the first time in the second half of the project.

This topic is discussed in more detail in my Code Design Training.

DRY vs Coupling

While reviewing my previous post, another great discussion that may arise from paying attention to your references came to my mind: Don't Repeat Yourself (DRY) vs Coupling. Each time you add a new reference, it means that you want to call code from the other assembly; therefore, you are coupling the two assemblies.

We have always been told that we should not tolerate duplication. We should always DRY our code. We have even become obsessed with it, and each time we see a few lines of code that resemble each other we immediately extract them into a routine and have it called from the two places. What we often don't realize is that we've just coupled those two places. DRY comes at a cost, and that cost is coupling.

A loosely coupled design is another thing we desire. Lack of coupling means that the elements of our code are better isolated from each other and from change. In general, the looser the coupling, the better the design, because it offers less resistance to change. And change is certain!

On the other hand, of course duplication is bad. It is one of the primary enemies of a well-designed system. It usually adds additional work, additional risk and additional unnecessary complexity. It is even more problematic when the code is copied and pasted and then small changes are made to it. The code is not identical, but it has teeny-tiny, hard-to-observe variations, like an equals operator changed into a not-equals, or a greater-or-equals changed into a strict greater-than. The code is not identical, and the commonality is not abstracted. In such a context, abstracting the commonality into an interface and making all the callers depend on the abstraction, and not on the implementation, which should be well encapsulated / hidden, is the key to a good design. Here we couple them together, we pay the coupling cost, but with clear benefits.

Don't tolerate coupling without benefits! Don't pay a cost if it doesn't bring you something back. If by DRYing stuff out you don't make things simpler but rather more complex, then something may be wrong in your design. In this example, if we have created a correct abstraction, which truly represents the commonality, then changes to its implementation should not trigger changes in its callers, and changes in the callers will not trigger changes in the implementation.

Coming back to my previous post, where I talked about the benefits that come from monitoring our references, which is just another way of managing the dependencies in our code, we can have this discussion on the example there, too.

The application there is well isolated from the frameworks it uses. It has several types of UI. One was in WPF as a desktop app, and one was just a console app. It may have had a web UI as well. Another thing that I emphasised there was that we might have some rules that say which references are allowed and which aren't. Here is the diagram of the references:

[diagram: DependenciesGraph1]

When we are working on the WpfApplication we might find ourselves rewriting some of the code that we had already written in the ConsoleApplication. The first thing that comes to mind would be to reference the assembly and reuse the code. But we can't. It's against the rules, because we want the different UIs to be independent. Making references among them would mean that the WPF app needs the console UI or, even stranger, that the web UI needs the desktop UI. So, we are left with two options:

  1. duplicate the code in both assemblies
  2. create a new assembly (CommonUI) and put the code there

Option 2 reduces duplication, but it still creates coupling. Now all the UI assemblies will reference this common thing, and if it is not well abstracted then we may have indirect dependencies among the UIs. A change in WPF may trigger a change in the common assembly, which will trigger a change in the console or in the web UI. Tricky! We should see if it pays off. If it is just about some helpers, it might be better to tolerate the duplication and not increase the coupling. On the other hand, if it is something that has to be presented in a common way in all the UIs of our application, then abstracting and encapsulating it in a common assembly might make our design better.

This is also a good example of the idea that we should DRY as much as possible within a bounded context, but we should not DRY across different contexts, because we will couple them together with little benefit.

In the end it is again about making the correct tradeoffs and realising that each time we make a decision we are also paying a cost. Dan North puts this very nicely in a talk that I like very much, called Decisions, Decisions.

This is discussed in more detail in my Code Design Training, when talking about programming principles.

Using ReSharper for Assembly References Diagrams

A few posts back I talked about how we can use the assembly references to enforce consistency and separation of concerns (here and here are the old posts). I argue there that if we derive from the architecture the assemblies of our application, the kind of code (responsibility) each one holds and how they can reference each other, then monitoring this can become a useful tool to assure that the critical architectural aspects are followed by the implementation.

In this post I will show how I verify the assembly references and the tool that I use. It is usually the first thing I do when I start a review on a new project. I find it an efficient way to spot code design smells, questionable design decisions or implementation "shortcuts", which may hurt the project really badly in the long run.

Let's dive into details. My favorite tool for this is ReSharper. More exactly, the Project Dependency Diagram. It can be generated from the ReSharper | Architecture menu. What I like the most about it is that in ReSharper 9 it is augmented with the Type Dependency Diagram, and now you can easily drill down into any reference to see the dependencies among the classes, and even further to the lines of code that make that reference needed. Now, when you see a reference that shouldn't be there, you can easily find the code that created it and reason about it. (In ReSharper 8, I was using the Optimize References view to drill down to the lines of code that made a reference needed.)

I'm not going to explain in detail how the tool works; you can read about it on the ReSharper blog. Let's look at an example instead.

[diagram: DependenciesGraph1]

Here I have generated the reference diagram for a demo project from my Code Design Training (code available on GitHub here). A few things to notice as architectural principles to follow for this project:

  • The application has several UI clients (a WPF app and a console app for now). They cannot have dependencies (references) on one another, because we want them independent.
  • The application consists of several modules. Each module has its own assemblies and they cannot have dependencies (references) on each other. The modules interact only through the Contracts assembly, which has only pure interfaces (service contracts) and DTOs (data contracts).
    • This applies the DIP (see here) and makes the implementation of the modules encapsulated and abstracted through the types in the Contracts assembly.
    • The UI gets the functionality implemented by the modules only through the contracts. The UI cannot have direct references to the implementation of the modules, nor to the Data Access. This enforces that the application logic does not depend on the UI, but the other way around (again applying DIP).

Any new reference which does not obey the above architectural principles will easily be found when we regenerate the diagram from code.

If we go into more detail, we see other development patterns.

[diagram: DependenciesGraph2]

Each module has a .Services assembly which implements or consumes Contracts. The module assemblies may reference and use the DataAccess or the Common assembly from the Infrastructure. These are not necessarily rules as strict as the ones above, but more like conventions which create consistency in how a module is structured. The reference diagram can help a lot to see how these evolve.

Look again at the diagrams. Which reference do you think is strange and might be wrong? The Sales module references the DataAccess. This is fine. It needs to use the IRepository / IUnitOfWork to access data. But one of the Sales module assemblies is referenced back by the DataAccess. This is wrong. We would want the Infrastructure assemblies not to be affected when the implementation of any of the modules changes, because if they are, their change may trigger changes in the other modules as well. So we would have a wave of changes which starts from one module and propagates to other modules. If you look at the first diagram, this reference looks like it creates a circular dependency, which may be an even stronger smell that something is wrong. If we right-click the reference we can open the Type Dependency Diagram.

[diagram: DependenciesGraph33]

Here we see that SalesEntities, from the DataAccess, is the class that created this reference. If I hold the cursor on the dependency arrow I get all the classes it depends on. This class is the Entity Framework DbContext for the Sales module. It should not be here, but the DataAccess needed it to new it up. (In fact, this is a TODO that I have postponed for a while in my demo project.) To fix this 'wrong' reference we have to invert the dependency. So we should create an abstraction. We can make an IDbContextFactory interface in the DataAccess, move the SalesEntities into one of the Sales module assemblies and implement the factory interface there to new it up.
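
A minimal sketch of that inversion could look like the code below. Only SalesEntities and the IDbContextFactory idea come from the project; the rest of the shape is an assumption for illustration:

using System.Data.Entity;

// --- DataAccess assembly: owns only the abstraction ---
public interface IDbContextFactory
{
  DbContext Create();
}

// --- Sales module assembly: owns the domain context and the factory ---
public class SalesEntities : DbContext
{
  // the DbSets for the Sales domain entities go here
}

public class SalesDbContextFactory : IDbContextFactory
{
  public DbContext Create()
  {
    return new SalesEntities();
  }
}

At startup the factory implementation would be registered in the container, so the DataAccess can new up the context without referencing the Sales module.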

This is a good example of how this tool can help us find code at the wrong level of abstraction by spotting wrong dependencies. SalesEntities is a high level concept class. It describes the domain of the Sales module. It was wrongly placed into a lower level mechanism implementation, which implements data access.

If you can spread this code review practice among your team, more benefits will follow. Each time a new reference or a new assembly appears, the team will challenge whether it fits the rules and the reasoning behind those rules. It will get you towards contextual consistency: this is how we build and structure things in this project. It may not apply to other contexts, but it makes sense in ours and we know why. Consistency is the primary mechanism for simplicity at scale. Reviewing dependencies easily and fast, shared idioms and guiding principles help create and sustain consistency. Once you have consistency, difference is data! There has to be a good reason why things that break consistency in a particular spot are tolerated.

To conclude, the tool that we use to generate the references diagram is not the most important thing here. I like ReSharper, but you can get almost the same from the architecture tools in Visual Studio Enterprise / Ultimate. What is important is to use a tool that can generate useful dependency diagrams from the code, and to constantly monitor them. The entire team should be doing this. By reviewing these dependencies regularly you make sure that the critical architectural principles and requirements are still met.


I’ve quit my job. Hire me!

Last Friday marked an important milestone for me: it was my last day working for ISDC. After 10 years with ISDC, I have decided to end my job there. I think it is a good moment in my life and in my career to try something else, to try to work as an independent programmer / software architect. For those who are interested, I'd like to tell you what I've been doing in recent years and give a hint of what will come next.

ISDC

I started in the .NET Department as a junior developer and I left as a software architect. A long way. I had the opportunity to work with most of the Microsoft technologies and to switch projects and contexts quite often. This helped me a lot in learning and expanding my experience. I also found at ISDC great people to work with and to learn from. I found the right models and good mentors. I am grateful for all that. I have also given a lot back. I pushed for doing things the right way, close to the highest level of the industry standards. I pushed for quality and I had an important contribution in increasing the technical quality delivered by the .NET teams. I remember being characterised as the quality guy that not only talks the talk, but actually walks the walk. I was the one that introduced Unit Testing in the .NET teams and worked hard to make it part of the development process and a common practice in all of the teams. I was also one of the key members in some of the most difficult and important projects we had.

In the last years I have focused on starting projects. It starts with envisioning the technical solution that fits the requirements, and continues with working closely with the project management to build the right team and to define and implement the strategy that may lead to reaching the project goals in budget and time. In the beginning I was also leading the development of the application infrastructure that sets the project on the right path from the technical perspective. I think this defines quite well my software architect role for a project at ISDC. It is not the same as the architect role in a product company or as the Enterprise Architect. It focuses on one project. It involves making difficult decisions, making tradeoffs and explaining to all the stakeholders the consequences of their choices. It's an experience worth having.

The Future

So where to next? I like this graphic from The Future of Work by Jacob Morgan. It shows nicely how I’ll try to work next. I find myself standing right where the half-grey half-green guy is (though ISDC did some of the things better than this illustrates).

[graphic: The Evolution of the Employee]

As an independent I would like to use my experience to help various teams or companies start projects on the right track or get out of a nasty situation related to software development. I think that I can bring a lot of value by coming in, working with the team to set things on the right path, making sure that the team can take it over and be independent of me, and then gradually stepping out. I don't see myself staying for a very long time in a project. I may stay close, help whenever something new comes up, but if I do my work well the team should do fine without me after a while. Sometimes my involvement may be only to give training or coaching on certain topics or technologies.

Besides my previous experience, there are two more things that make me believe I can do this. First, I'm not entirely new to it. Since 2013 I have worked only part time. This year I had a three days/week job, so I had two days to work as an independent consultant. I've already helped some teams in different companies with training, coaching or starting new projects.

The other thing is that I'm not alone. I am part of iQuarc. This gives me a great deal of assurance. I know I will always find the kind of support, advice or help that I need from my colleagues. At the core, we all have the same level of expertise, and at the same time we have complementary skills. We share the same values, principles and passion. Together we make a great team and we can successfully respond to many kinds of requests.

So, what will change? Now, I have all the time for this. I’m all in. I’m available for hire. Here’s a brief summary of what I can do for you:

  • Custom software development. I love writing code, so I'd also like writing code for you. I can take small or big tasks. It doesn't really matter. I can work in a team or by myself, remote or on-site, as necessary. My experience is with C# and related technologies / frameworks.
  • Software architecture. I have experience in designing small and large applications. I can design the entire project, not only the technical part. This may include a complete strategy from requirements to deployment, with the needed staff for each phase of the project.
  • Reviews. If you need an external party to review your code or your design, I'll be happy to do so. I can do code reviews at different levels, from looking at the big picture, at the way the code is structured and the way dependencies go, down to lower-level details of how classes and functions are written or tested. When reviewing code or design I can focus on specific quality requirements like security, performance, maintainability, scalability or others that may be of interest to you, or I can do a more general check.
  • Training and coaching. In the past years I have developed and given two standard trainings: Code Design and Unit Testing, in which I address a wide range of subjects about coding. I can visit your company to deliver lectures and workshops. Besides these topics I could easily spin up workshops on others, like estimations or time management, depending on your needs. I am also a strong believer in learning on the job, so I could join your team just to coach you on a specific technique or a specific issue, working on your project. We can also pair while doing so.
  • Development process. Over the years I have experienced different ways of organising software development teams. If you need help with this, I can do so. We can see together whether Scrum fits your context or not and how to tweak it. I can also help with Continuous Integration, Continuous Delivery, TFS, Git etc.
  • Round table. Sometimes people simply want to have a meeting with someone to validate certain topics or ideas. I’m happy to visit you for a meeting with you and your team where we can discuss your questions, sketch together on a whiteboard, look at code, etc. in an ad hoc fashion.

This list isn’t exhaustive, so if you have other ideas for how you think I may be able to help you, please contact me.


Localization Concern

Localization (also known as internationalization) is one of the concerns that is most of the time overlooked when we design an application. We almost never find it in the requirements, and if we do, or if we ask about it, we usually postpone thinking about it and we underestimate the effort of adding it later. In this post I will summarize a few key aspects which take little time to take into consideration in our design, and which can save a lot of effort in the long term.

Localization Service

One of the first things that I do is to define what I call the Localization Service. It is nothing more than a simple interface, with one or two simple methods:

public interface ILocalizationService
{
     string GetText(string localizationKey);
     string Format<T>(T value);
}

Notice that the functions do not take as input parameters the language or culture code. The implementation will take care of taking them from the current user. The interface stays simple.

At the beginning of the project I don't do more than this. I just put in a trivial implementation that doesn't do much, and I postpone the rest of the decisions. Now, when screens are built, we can just call this interface, and later we'll make a real implementation of it. We already have a big gain: when we build the localization we won't need to go through all the screens and modify them to translate the texts. The localization service is called from the beginning.

To make it a simple call, we can have a static wrapper:

public static class Localizer
{
  public static string GetText(string localizationKey)
  {
    var s = ServiceLocator.Current.GetInstance<ILocalizationService>();
    return s.GetText(localizationKey);
  }

  // same for Format<T>(..)
}

We can decide later whether the translations are stored in resource files, in the database or somewhere else. For now we can just pick the quickest implementation (hardcoded in a dictionary maybe) and move on. We can change it later without modifying the existing screens.
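
Such a quick implementation could be as small as the sketch below. The sample key follows the conventions described in the next section; everything else is an illustrative assumption:

using System.Collections.Generic;
using System.Globalization;

public class InMemoryLocalizationService : ILocalizationService
{
  // hardcoded translations, to be replaced by resource files or a database later
  private static readonly Dictionary<string, string> texts =
      new Dictionary<string, string>
      {
          { "Person.Button.ChangeAddress", "Change address" },
      };

  public string GetText(string localizationKey)
  {
    string text;
    // falling back to the key makes missing translations visible on the screens
    return texts.TryGetValue(localizationKey, out text) ? text : localizationKey;
  }

  public string Format<T>(T value)
  {
    return value == null
        ? string.Empty
        : string.Format(CultureInfo.CurrentUICulture, "{0}", value);
  }
}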

Localization Key

The next thing to consider is a set of conventions for building the localization keys. We are going to have many texts, and it will make a big difference to have consistent and meaningful keys rather than randomly written strings.

To do this I usually try to define some categories for the translated strings. Then for each category we can define conventions or patterns for how we will create the keys. In most applications we'll have something similar to the following:

  • Labels on UI elements
    • these are specific texts that appear on different screens. Things like buttons, menus, options, labels, etc
    • Pattern:  <EntityName>.<ControlType>.<LabelKey>
    • Example: Person.Button.ChangeAddress
  • Specific messages or text
    • These are texts that are specific to a functionality or a screen
    • Pattern:   <MessageType>.<Functionality>.<MessageKey>
    • Example: Message.ManagePersons.ConfirmEditAddress
  • Standard (or generic) labels or messages
    • these are texts that appear on different screens of the application
    • Pattern:   <MessageType>.<MessageKey>
    • Example: ErrorMessage.UnknownError
  • Metadata
    • These are names of business entities or their properties that need to be displayed. Usually these are column names in list screens or labels in edit screens
    • Pattern: <EntityType>.<Property>
    • Example: Person.Name

With such categories and conventions in place, we get many benefits in debugging, managing translations and even in translating texts.

If the application screens are built from templates (like when all the list or edit screens are similar and are built around one business entity), we could later go even further and write generic code which builds the localization key based on the type of the screen and the type of the entity, and which automatically calls the localization service. For example, in a Razor view we could write an HTML helper like:


// usage
@Html.LabelForEx(m => m.Subject);

// implementation
public static MvcHtmlString LabelForEx<TModel, TValue>(this HtmlHelper<TModel> html, Expression<Func<TModel, TValue>> expression)
{
  string entityName = ModelMetadata.GetEntityName(expression, html.ViewData);
  string propName = ModelMetadata.GetPropName(expression, html.ViewData);

  string localizationKey = $"{entityName}.{propName}";
  string text = Localizer.GetText(localizationKey);

  return html.LabelFor(expression, text);
}

I think that these two, the localization service interface and the conventions for the localization keys, are the aspects that should be addressed by the design at the beginning of the project. Next I will go through two other important aspects of localization: Managing Translations and Data Localization.

Managing Translations

One of the aspects that is usually ignored when designing for localization is the process of entering and managing the translated texts in different languages: translating the application.

This process can be difficult if the person doing the translation does not have the context of the text she is translating. Usually a word-by-word translation in a table does not work well. It can be even more difficult if she does not get fast feedback on her changes in the application screens. Emailing the translations to the developers and waiting for a new release can be very annoying. This difficult process can be even more costly if it was postponed until the last moment and it happens a few weeks before the release into production, when usually there are many other urgent things.

The conventions for the localization keys can play an important role in this. They can give some context, and if the person translating the application can upload the translations into the app and get fast feedback, that is usually good enough. This means that we need to design and implement some functionality to upload and show the translated text, to avoid a painful process.

Another approach that works well is to implement, inside the application, the functionality to translate it. For a web app, the person doing the translation accesses the application in "translate mode", and when she hovers the mouse over a text, a floating div with an input is shown where she can enter the translated text. The text is saved into the database and the page is reloaded with the translation in it.

Even if this sounds difficult to implement, it is not, and for an application that has a large variety in the texts it displays and needs to be translated into many languages, it is worth the effort and makes the translation changes easy.

Data Localization

Data Localization is about keeping some of the properties of the business entities in more languages. Imagine that your e-commerce app gets used in France and it would be better to have a translation for the name and the description of your products. For instance, for the mouse product you will need to store its name in French: souris d'ordinateur.

One solution to implement this is to create a translation table for each table that has columns which should be kept in more languages. This allows us to add more languages over time.

[diagram: DataLocalization]

The columns of the Products table keep the data in the default language (or language agnostic), and the Products_Trans table keeps the translations in a specific language. It has only the columns that need translations: Name and Description.
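
Mapped with Entity Framework, the two tables could look like the sketch below. The entity and column names follow the tables above; the exact shape of the keys is an assumption:

// maps to the Products table: data in the default language
public class Product
{
  public int Id { get; set; }
  public string Name { get; set; }
  public string Description { get; set; }
}

// maps to the Products_Trans table: one row per product per language
public class Product_Trans
{
  public int ProductId { get; set; }
  public string LanguageCode { get; set; } // e.g. "fr"
  public string Name { get; set; }
  public string Description { get; set; }
}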

If we add this functionality later in our project, we need to go back to all the existing screens and change them to read data not from one table (Products) alone, but to also join it with the translations table (Products_Trans). This may be very costly, because changing tens of screens built months ago may put our project in jeopardy.

The alternative is to build a generic mechanism that automatically does the join behind the scenes, based on some conventions and metadata. If we use Entity Framework and LINQ, and we have the data access made through a single central point as I've described in the Separating Data Access Concern post, then this can be achieved.

We need to rely on some conventions:

  • the translation tables and the EF mapped entities have the same name, with a _Trans suffix
  • the translated columns have the same name as the ones in the main table
  • some catalog gives the entity names (tables) for which there is a translation table

With this, and by knowing that all the LINQ queries go through our one Repository and UnitOfWork implementation as described in the above post, we can intercept the lambda expression of each query, parse it, and recreate it with the join to the translation table.

To implement this, we make all the IQueryable instances our Repository returns wrappers over the ones returned by EF.

private IQueryable<T> GetEntitiesInternal<T>(bool localized) where T : class
{
   DbSet<T> dbSet = context.Set<T>();

   return localized ?
            new DataLocalizationQueryable<T>(dbSet, this.cultureProvider)
            : dbSet;
}

The DataLocalizationQueryable wrapper uses a visitor to go through the lambda expression, and for each member assignment node from the Select statement which needs to be translated, it gets the value from the related property of the translation entity. Here is a code snippet that gives an idea of how the wrapper is implemented:

public class DataLocalizationQueryable<T> : IOrderedQueryable<T>
{
  private IQueryable<T> query;
  private ICultureProvider cultureProvider;
  private ExpressionVisitor transVisitor;

  public DataLocalizationQueryable(IQueryable<T> query, ICultureProvider cultureProvider)
  {
    ...
    transVisitor = new DataLocalizationExpressionVisitor(this.cultureProvider.GetCurrentUICulture());
    this.Provider = new DataLocalizationQueryProvider(query.Provider, this.transVisitor, cultureProvider);
  }

  public IEnumerator<T> GetEnumerator()
  {
    return query.Provider.CreateQuery<T>(
        this.transVisitor.Visit(query.Expression)).GetEnumerator();
  }
  ...
}

class DataLocalizationQueryProvider : IQueryProvider
{
  private IQueryProvider efProvider;
  private ExpressionVisitor visitor;
  private readonly ICultureProvider cultureProvider;

  public IQueryable<TElement> CreateQuery<TElement>(Expression expression)
  {
    return new DataLocalizationQueryable<TElement>(
           efProvider.CreateQuery<TElement>(expression), cultureProvider);
  }
  ...
}

class DataLocalizationExpressionVisitor : ExpressionVisitor
{
  private const string suffix = "_Trans";
  private const string langCodePropName = "LanguageCode";
  private readonly CultureInfo currentCulture;

  public DataLocalizationExpressionVisitor(CultureInfo currentCulture)
  { ... }

  protected override MemberAssignment VisitMemberAssignment(MemberAssignment node)
  { ... }
  ...
}

Even if modifying lambda expressions at runtime isn't a trivial task, we do it only once, as an extension to the data access, and we avoid going back and modifying tens of screens.

With this, we have covered the most common aspects of localization, and we've seen that if we give it some thought when we design our application, we can easily avoid high costs and painful processes in the long run.

This topic is discussed in more detail in my Code Design Training.

Dependency Inversion and Assembly References

In my last posts I have talked about using assembly references to preserve critical design aspects. In Enforce Consistency with Assembly References I talk about how we can use references to outline the allowed dependencies in code and how to use a references diagram to discover code at wrong levels of abstraction. In Separating Data Access Concern I show how we can enforce separation of concerns by using references, and I detail this with the data access example. In this post I will talk about the relation between assembly references, in the context of the above posts, and the Dependency Inversion Principle (DIP).

When we reference another assembly we take a dependency on it. If assembly A references assembly B it means that A depends on B. Taking this to the data access example, it means that a business logic assembly depends on the data access assembly.

[diagram: BL-DA]

This seems to be in contradiction with DIP, which says that "High level modules should not depend on low level modules". The business logic is the high level module, and the data access is just details about how we get and store data. The contradiction may be even clearer if we refer to The Clean Architecture of Uncle Bob, where he points out that the application should not depend on frameworks.

Let’s look more closely into DIP, and let’s focus on the INVERSION word. DIP doesn’t only say that we invert the dependency, but more importantly we do it by inverting the ownership of the contract (the interface).

[diagram: DIP]

After DIP is applied, as the above diagram shows, the contract is owned by the high level layer, and no longer by the low level layer. The essence of DIP is that the changes to the contract should be driven by the high level modules, not by the low level ones. When the contract ownership is inverted, the dependency is also inverted, because now the low level module depends on the high level one by complying with its contract.

In our example, the business logic assemblies depend on the DataAccess assembly because the IRepository and IUnitOfWork interfaces are placed in the DataAccess. If we moved them into the business logic assemblies, then we would invert the reference. Even more, we could then have more DataAccess assemblies with different implementations, one with Entity Framework, one with NHibernate, and at application startup we could choose which one to use for that specific deployment or configuration.
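
A minimal sketch of this inversion, with an illustrative shape for IRepository (the real interface in the project may differ):

using System.Data.Entity;
using System.Linq;

// owned by the business logic: the contract it needs for data access
public interface IRepository
{
  IQueryable<T> GetEntities<T>() where T : class;
}

// one possible low level implementation, based on Entity Framework
public class EfRepository : IRepository
{
  private readonly DbContext context;

  public EfRepository(DbContext context)
  {
    this.context = context;
  }

  public IQueryable<T> GetEntities<T>() where T : class
  {
    return context.Set<T>();
  }
}

An NHibernate-based assembly would implement the same contract, and the composition root would pick one of them at startup.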

However, this is not practical. We may have more business logic assemblies that need to access data, so which one should contain these interfaces? To solve it, we could make an assembly with only the data access interfaces. With this, we would also keep the possibility of having more data access implementations. But do we really need more data access implementations? In most cases we don't. So it isn't worth separating them.

Now, coming back to the initial question: if we keep the data access interfaces in the DataAccess assembly and the rest of the assemblies reference it, are we following DIP?

YES, as long as these interfaces change ONLY based on the needs of the business logic modules and not because of implementation details, we follow DIP. From a logical separation point of view they are owned by the business logic layer, and the data access implementation depends on them. For practical reasons they are placed in the same assembly as the implementation, because it isn't worth creating one with only the interfaces for this case.

[diagram: DIP-DA]

As long as the implementation details and specifics do not leak into these interfaces, they represent correct abstractions and the implementation remains well encapsulated.

Following the same reasoning, I sometimes create a Contracts assembly, which contains the underlying abstractions of the application. These are abstract concepts that are specific to the application and not to one module. They are the truths that do not vary when the details are changed. I may have more functional modules, which have assemblies that implement or consume these contracts.

[diagram: Modules]

This figure shows this by outlining that the functional modules do not reference one another; they all reference only the Contracts assembly. If you go deep into the DIP description in Uncle Bob's paper here, you will find this approach very similar to the Button-Lamp example from the Finding the Underlying Abstraction section.

This topic is discussed in more detail in my Code Design Training, when talking about SOLID principles.

Enforce Consistency with Assembly References

In this post I'll describe some key aspects that I consider when designing the assemblies that build a system.

When we structure our code into assemblies (generally named binaries, libraries or packages on platforms other than .NET) we are reasoning about three main things:

  • Deployment: different assemblies are deployed on different containers. Some assemblies end up on the UI client, some on the application server and some on third party servers we may use;
  • Separate Concerns: we put code that addresses similar concerns in one assembly and we separate it from the code that addresses different concerns. This may translate into encapsulating the implementation of one functional area into a module and offering it through an abstract interface to others. It may also translate into separating the data access concern from the business logic;
  • Assure Consistency: we want certain things to always be done in the same way throughout the entire application, so we define an assembly that will be reused by the other assemblies across our application

Another important aspect of referencing assemblies (or linking binaries) is that we cannot have circular references. This may play an important role in managing dependencies. It may help us to avoid circular dependencies among modules or components if we carefully map them to assemblies.

Each time I design the main structure of a new application I do it with all of the above in mind. Used well, these can bring huge advantages in managing the growth of the application. Even if it is a rich desktop UI client which has the business logic and a local database in the same process, so the entire application deploys in one container, I will still have more assemblies, because I want to use the other advantages. I want to use assembly references to enforce separation of concerns and to enforce consistency. These are the most important tools to manage the complexity of the application, which is critical in large applications developed by more people or even more teams.

When the initial assembly structure is defined we have: the assemblies (or assembly types) and the rules of how they may reference each other. This should satisfy the deployment requirements, and it should reflect the concerns that must be separated and the things that must be done consistently. I usually put it in a simple diagram with boxes for assemblies and arrows for allowed references. Where there are no arrows, there can be no references. This diagram not only helps to explain and verify the design, but it can also be used when reviewing the implementation. If in code we see references that are not in the diagram, it may be a fault in the implementation (encapsulation or abstraction leaking, code at the wrong level of abstraction, etc.) or it may be a case which was not handled by the design, so the diagram needs to be adjusted.

Let’s dive into details, by looking at some examples.

Logging is a cross-cutting concern, which in most cases we implement by using a third party library. We look for a library (Log4Net for example) which has a high degree of configurability and can log to files, to databases or send the traces to a web service. In all the places where we write a log trace, we want to specify in the same way the type, the criticality, the priority and the message. We want to use Log4Net in the same way everywhere in our app. Consistency is important. When something needs to be changed, we want to be able to do the change following the same recipe in all the places where we log.

We can easily enforce this by wrapping the external library in one of our assemblies. Our assembly defines the Log interface which we'll use in the application. This interface shapes the logging library to our application's specific needs. All the configuration and tweaking is now done in one single place: our Logging assembly which implements the Log interface. It is the only one that may reference Log4Net. The rest of the application's code doesn't even know that Log4Net is used.
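
A minimal sketch of such a wrapper, assuming an ILogSrv contract shaped for the application (the interface and the Severity enum are illustrative):

public enum Severity { Error, Warning, Info }

// the application-wide logging contract, defined by us
public interface ILogSrv
{
  void Write(string headline, string message, Severity severity);
}

// the only class in the application that knows about Log4Net
internal class Log4NetLogSrv : ILogSrv
{
  private static readonly log4net.ILog log =
      log4net.LogManager.GetLogger("Application");

  public void Write(string headline, string message, Severity severity)
  {
    string trace = headline + ": " + message;
    switch (severity)
    {
      case Severity.Error:   log.Error(trace); break;
      case Severity.Warning: log.Warn(trace);  break;
      default:               log.Info(trace);  break;
    }
  }
}

Replacing Log4Net with another library means rewriting only this class, while every caller keeps using ILogSrv.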

 

In general, any external library exposes a very generic API and is very extendable, so it fits many kinds of applications. The more applications the library fits, the more successful the library is. When we plug such a library into our application, we need to tweak it to our specifics and we need to use it in the same way in all the cases.

Even if wrapping it is a very simple solution, it is very powerful. It isolates the change. If something needs to be changed in how the external library is configured, we no longer need to go through the entire application where it was used. It is directly used in only one place: our wrapper assembly. Even more, when we need to replace the external library or to upgrade it to a new version, the changes are again isolated in our wrapper. We can isolate in our wrapper all the concerns of communicating with the external library, which may include concerns about communication with external systems, security concerns, error handling and so on.

 

An example of using assembly references to enforce separation of concerns is separating the data access implementation.

[diagram: DataAccess]

In this example the only assembly that can make a connection to the database is the DataAccess assembly. It implements all the data access concerns and offers an abstract interface to the layers above. Even more, it does not contain the data model classes, so the business logic (validations or business flows) is kept outside. For more details on how this could be implemented you can refer to my previous post: Separating Data Access Concern.

In the end, here is a simplified references diagram.

[diagram: References]

Here we can see that we do not have references between the assemblies that implement the business logic, the Functional Modules. They communicate only through abstract interfaces placed in the Contracts assembly. The Contracts assembly contains only interfaces and DTOs. No logic. With this we make sure that we will not create dependencies between implementation details of the functional modules. The functional modules can access data through the DataAccess assembly, but they cannot go directly to the database. They don't have any UI logic, since they do not have references to UI framework assemblies (like System.Web or System.Windows). The UI assembly gets the functionality and data only through the abstract interfaces from the Contracts assembly. It can't do data access otherwise. All are linked through Dependency Injection, which is abstracted by the AppBoot assembly.
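
To make the Contracts idea concrete, here is a hypothetical example of what such an assembly contains, and nothing more:

// a service contract: a pure interface, no logic
public interface IOrderingService
{
  OrderDto PlaceOrder(OrderDto order);
}

// a data contract: a plain DTO, no behavior
public class OrderDto
{
  public int Id { get; set; }
  public string CustomerName { get; set; }
  public decimal Total { get; set; }
}

The implementation of IOrderingService lives in one of the functional modules and reaches its consumers only through Dependency Injection.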

To conclude, even if you didn't have all of this in mind when you created the assemblies of your app, I think it is worth the effort to draw such a diagram at any moment, because it will show opportunities to bring more order, more clarity and better ways to manage the size of your project.

This design approach is discussed in detail in my Code Design Training.

Reflecting on IT Camp 2015


At this time last week, I was getting ready to go on stage at the fifth edition of IT Camp. I was starting to feel butterflies in my stomach. Even if it was the third time in a row that I was speaking here, I was getting nervous. I remember a friend telling me, before my talk, that this is a good sign. That it means that I care, even if I am comfortable with the talk and the topic. I guess it is true. I do care a lot about these opportunities. I always prepare them carefully and I do my best to say something meaningful to the audience, to have something that sticks in their mind, to make a small impact.

This year I talked about refactoring. I presented a pattern of refactoring that showed how we can get to lower coupling and to a better separation of concerns by trying to increase the cohesion of our classes. I hope that I have managed to convey that refactoring is part of software development and that we cannot code without refactoring. I would be glad to now see everyone from the audience with their hand up when asked if they refactor, knowing that refactoring is not a way to compensate for their mistakes, is not something that developers like and managers hate, but is part of what we do. It is normal for every good developer. At the end I was asked the question I always get when I talk about good practices: "How do I convince my manager that this is right? That this is something that is needed?" This time I answered that we should try to translate it into non-technical terms by using metaphors, and I pointed to a short video, that I value a lot, where Ward Cunningham presents the Debt Metaphor. An answer maybe inspired by Dan North, whom I recently met at Craft Conference.

What I love the most about IT Camp is that it manages to create this great learning and experience sharing atmosphere. Even if it is at its fifth edition, the enthusiasm is everywhere. It's like the holiday we were waiting for. People are eager to comment on the sessions, to share the good and the bad things from their work. I always get back to work with higher energy and with a revitalized belief that small things matter and that we all can make a difference, even when we feel too far from decision making. IT Camp has proven that if you stick to strong principles, and if you can learn from the past editions, you can constantly improve and you can keep high standards once you get there.

If I were to pick only two things that characterized this edition from a content perspective, I would say security and a track appealing to managers.

Security topics were very present. There were many experts on security among the speakers, and it was the subject of many discussions during the breaks or over the beers at the end of the day. I think that creating awareness of security is much needed in the IT industry of Cluj. We need this. There are too many cases where, under delivery pressure, we stop thinking about security after we are done with the login screen, and we take huge risks for us and for our customers without even knowing. Putting it at the top of the agenda at one of the most relevant conferences in our area helps.

From the business track, which targeted CxOs and managers in general, I hope to see more managers attending developers' conferences and meetups. I believe that, in general, we need to make a better team with management. We need to understand each other and to really work together to build a common strategy that leads us to achieving common goals. We need to close the gap between these worlds. I think that if managers come to developer events and if we get closer to the business, this gap may shrink. And I believe that community events can play an important role in this.

Mihai, Tudy and the whole organizing team, THANK YOU for doing it again! I'm looking forward to the next edition.

UPDATE: You can see my slides on SlideShare and the code demo on GitHub.
