AmbientContext Services – NuGet packages

Having been using the AmbientContext pattern for a number of ambient services, I finally got around to creating a few GitHub repositories, some AppVeyor CI builds and some NuGet packages so that I can consume them in other projects without copy/paste.

The base implementation of the abstract AmbientService can be found here

https://github.com/nrjohnstone/AmbientContext

and the NuGet package is here if you want to start implementing some of your own:

https://www.nuget.org/packages/AmbientContext/


I also created two packages for the most common cross-cutting concerns I implement using the AmbientService pattern, namely DateTime resolving and logging.

https://github.com/nrjohnstone/AmbientContext.DateTimeService

https://github.com/nrjohnstone/AmbientContext.LogService.Serilog

with the respective NuGet packages here:

https://www.nuget.org/packages/AmbientContext.DateTimeService/

https://www.nuget.org/packages/AmbientContext.LogService.Serilog/
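
To give an idea of what implementing your own ambient service on top of the base package looks like, here is a minimal sketch that follows the same shape as the AmbientDateTimeService from the original Ambient Context post further down this page. The IGuidProvider and GuidProvider names are made up purely for illustration.

using System;

public interface IGuidProvider
{
    Guid NewGuid();
}

public class GuidProvider : IGuidProvider
{
    public Guid NewGuid() => Guid.NewGuid();
}

// Inherit from the package's AmbientService<T>, supply a sensible default,
// and delegate the interface members to Instance.
public class AmbientGuidService : AmbientService<IGuidProvider>, IGuidProvider
{
    protected override IGuidProvider DefaultCreate()
    {
        return new GuidProvider();
    }

    public Guid NewGuid() => Instance.NewGuid();
}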

If anyone finds them useful, drop me a line and let me know.


RESTful Microservices

Last week I was able to attend a workshop in London on implementing microservices using the concept of RESTful communication.

Jim Webber, the presenter, is an articulate character, which combines well with his relaxed delivery style and engaging banter. Being a colonial, an antipodean, and probably the loudest person in the room meant that I was a prime target for some of that banter, but it was all good-natured and kept everyone focused.

The workshop was held in the well-designed “CodeNode” premises owned by SkillsMatter, and covered a good range of topics, the majority from the RESTful point of view, but some around microservices in general.

Given that I have already been making a tentative journey down the microservices route over the past 6 months, some of the more interesting takeaways I got from the workshop were:

Microservices Definition

Microservices are often misunderstood as an architectural style, but the term is more correctly thought of as a delivery pattern, since it broadly encompasses the practices that lead to a responsive delivery model.

A microservice is not n number of lines of code, or a process that can be rewritten in x number of sprints… it’s not about size at all, but rather about context, specifically the business context. A boundary drawn around a business context is far more likely to remain stable than one drawn along other lines.

Organizational Maturity

Any problem that is solved with a single process (a monolith) will always ramp up in complexity when you start breaking it apart into multiple processes. The complexity goes up even further when those processes run on different hosts, and further still when you involve multiple instances of the same process (fault tolerance / load balancing).

Distributed systems are hard to build and hard to debug.

If your organization does not value or see the need for the following:

  • Local governance and decision making within teams
  • Automated testing
  • Centralized logging
  • Automated delivery (CI, deployment)
  • Monitoring and alerting

then you should definitely not be entering the world of microservices; otherwise you are likely to reap all of the drawbacks of many small moving parts and none of the benefits.

REST does events

Given that a lot of my early coding days were in the realms of real-time control systems, state machines and I/O-driven systems, I must admit I had this myth in my head as well: that REST may be great for CRUD operations but is probably not that useful for event-driven systems.

Jim did a great job of challenging this myth, especially with regards to “real time” events. One of the first things he said was that if you need sub-millisecond, or even sub-tens-of-milliseconds, responses then REST is definitely not for you. But the thing you need to challenge yourself on is: do you really need that?

Most processes within an enterprise system are quite happy knowing about events half a second, 1 second, 5 seconds or even a few minutes later. It really depends on the problem you are trying to solve.

If the problem you are trying to solve is more about throughput than latency, then exposing events as a RESTful endpoint that clients must poll allows you to leverage the same infrastructure the web does. This means things like caches, proxies, reverse proxies etc. can take a large amount of load off a service.

The example in particular exposed the events using the AtomPub protocol, with one endpoint for the most recent events and other endpoints representing archived events. How the events are archived is an implementation detail, but for our purposes let’s say the recent events endpoint only supplies the most recent 10 events.

The great thing about events is that they never change, which makes them excellent candidates for caching. This means that once a client has retrieved an archived event resource, that resource can be cached for a year, and all other requests for that resource will hit the cache rather than your endpoint.

Also, again depending on the problem being solved and its ability to tolerate latency, you could cache your “current” event resource for something like 5 or 10 seconds, meaning that if you have a large consumer load, only the first consumer every 10 seconds will hit your service and generate “work” while all the other requests will hit the cached copy of that resource.
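
To make that concrete, here is a rough sketch of my own (not from the workshop material) of what those cache headers could look like in ASP.NET Web API 2; the EventFeedBuilder type and the routes are hypothetical, the interesting part is the Cache-Control values.

using System;
using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Web.Http;

// Stub standing in for whatever builds the actual AtomPub feed documents.
public static class EventFeedBuilder
{
    public static object BuildCurrentFeed() => new { };
    public static object BuildArchiveFeed(int page) => new { page };
}

public class EventsController : ApiController
{
    // "Current" feed: cacheable for a few seconds, so under heavy load only the
    // first consumer in each window actually generates work on the service.
    [HttpGet, Route("events/current")]
    public HttpResponseMessage GetCurrent()
    {
        var response = Request.CreateResponse(HttpStatusCode.OK, EventFeedBuilder.BuildCurrentFeed());
        response.Headers.CacheControl = new CacheControlHeaderValue
        {
            Public = true,
            MaxAge = TimeSpan.FromSeconds(10)
        };
        return response;
    }

    // Archived feeds: the events in them never change, so cache aggressively.
    [HttpGet, Route("events/archive/{page:int}")]
    public HttpResponseMessage GetArchive(int page)
    {
        var response = Request.CreateResponse(HttpStatusCode.OK, EventFeedBuilder.BuildArchiveFeed(page));
        response.Headers.CacheControl = new CacheControlHeaderValue
        {
            Public = true,
            MaxAge = TimeSpan.FromDays(365)
        };
        return response;
    }
}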

There was lots of other good stuff, for example using ETags for cheap checks on whether a resource has changed, but the demonstration of how you can do events using REST and still handle a large volume of traffic was particularly thought provoking.

Hypermedia links / HATEOAS

This was also an excellent discussion point; the use of HATEOAS (Hypermedia As The Engine Of Application State) has changed the way I think about this particular concept.

Originally I viewed HATEOAS as extra work for little benefit, but once you understand what it actually gives you, it is powerful.

Essentially by providing hypermedia links within resources you are providing a list of valid transitions from the current state of the resource. You are also removing the need for the client to hard code any knowledge of what your API looks like, allowing you the freedom to refactor parts of it without breaking the clients.

All of this assumes that people follow the rules, of course. For example, if someone ignores your hyperlinks and still hard codes their path to “/foobar/wibble/5”, and you decide to refactor the API so that the foobar/wibble resource is provided from a completely different host, then they will break and discussions will need to be had. However, within an organization, for internal consumption, these rules should (in theory 😛) be easier to follow.
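
To make the idea of valid transitions a little more concrete, here is a small sketch of my own (not Jim’s material; the order domain and link relations are invented): the server only advertises the transitions that are valid from the resource’s current state, and clients follow those links instead of hard coding paths.

using System.Collections.Generic;

public class Link
{
    public string Rel { get; set; }   // what the transition means, e.g. "cancel"
    public string Href { get; set; }  // where the client goes to perform it
}

public class Order
{
    public int Id { get; set; }
    public string Status { get; set; }
}

public class OrderResource
{
    public int Id { get; set; }
    public string Status { get; set; }
    public List<Link> Links { get; } = new List<Link>();
}

public static class OrderResourceMapper
{
    public static OrderResource ToResource(Order order)
    {
        var resource = new OrderResource { Id = order.Id, Status = order.Status };
        resource.Links.Add(new Link { Rel = "self", Href = $"/orders/{order.Id}" });

        // Only transitions that are valid for the current state are advertised.
        if (order.Status == "placed")
        {
            resource.Links.Add(new Link { Rel = "payment", Href = $"/orders/{order.Id}/payment" });
            resource.Links.Add(new Link { Rel = "cancel", Href = $"/orders/{order.Id}/cancel" });
        }

        return resource;
    }
}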

Summary

The workshop was an excellent 3 days well spent, and I was also lucky in that the other people attending were engaging and interesting, so we had some great conversations over coffee breaks… and some fairly decent table tennis skills were on display as well.

The next session for this workshop is being held in November

https://skillsmatter.com/courses/541-fast-track-to-restful-to-microservices


TDD Workshops @ Coolblue

It’s been a bit over 2 months now since I joined Coolblue, and I’ve been lucky enough to have already made some good friends who share a like-minded passion for Software Craftsmanship.

Devon, the lead for one of the back office teams, is one such friend, and between the two of us and some others we are striving to ensure that the place we work is an environment where other developers can learn and master their craft and, hopefully, where we can learn new things as well.

One of the things we felt was lacking was a routine TDD workshop, where those of us with experience in TDD and using it daily can help and guide others who want to learn this skill and how to apply it on a daily basis.

To that end, we had our first TDD workshop last Friday, which was framed as an introduction-style session using a single keyboard and pair programming in the Randori style. This allows the other attendees to watch and absorb what the pair is doing, while still letting everyone have a turn if they like.

The kata itself was the good old FizzBuzz, which I was worried might not prove enough of a challenge to last us an hour. That worry proved unfounded, however, as some good discussions occurred during the session, which meant we pretty much finished on time with the implementation done, although there was still some scope for refactoring.

One of the more interesting questions that came up was:

  • Why call the instance being tested “sut” or “target”? Surely we need a more descriptive name, like the name of the class being tested?

I think this question leads back to the idea that test code and production code are the same thing. While they are equally important, test code has a different value proposition than production code.

With production code we want qualities such as readability balanced with ease of change and the ability to understand the overall process of the system.

Test code values readability as well, but there it is the main focus. The difference shows up when someone is reading a test: if you don’t have a good naming convention indicating what each TestFixture is testing, you add cognitive overload, ranging from having to read the name of the TestFixture (best case) to having to read the whole test and figure out which of the “new” statements is the Subject Under Test (worst case).

By sticking to the convention of calling the thing being tested the “sut”, in every test fixture, no matter which class is being tested, all you need to think about when reading a test is that the “sut” is the thing under test. That, coupled with an informative test name, decreases the time it takes to grok a test.
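
As a small sketch of the convention in practice, using the FizzBuzz kata from the session (the class and test names here are illustrative rather than the exact code we wrote):

using FluentAssertions;
using Xunit;

// Minimal kata implementation so the test compiles.
public class FizzBuzz
{
    public string Translate(int number)
    {
        if (number % 15 == 0) return "FizzBuzz";
        if (number % 3 == 0) return "Fizz";
        if (number % 5 == 0) return "Buzz";
        return number.ToString();
    }
}

public class FizzBuzzTests
{
    [Fact]
    public void Translate_should_return_Fizz_when_number_is_divisible_by_three()
    {
        // Whichever fixture you are reading, "sut" is always the thing under test.
        var sut = new FizzBuzz();

        string result = sut.Translate(3);

        result.Should().Be("Fizz");
    }
}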


Given that the kata workshop went well, the idea I have been floating for the next workshop is more of a round-robin style pairing session, where there is a keyboard for every 2 people and every 5 minutes everyone moves 1 seat to the right. This means the people pairing constantly alternate, and when you move to a new keyboard you are potentially adding to the code and tests of N people who came before you!


Akka.NET and DI Testing

Having had a few solid months of Akka.NET under my belt, and now using it to write some production microservices, I wanted to quickly note down some of the more useful aspects that come into play when testing actors.

In every project I’ve worked on over the last few years, I’ve implemented IoC with a composition root and, most importantly, unit tests that verify you can resolve the composition root class.

This works extremely well when introducing new dependencies as you have a failing test that nags you about not being able to resolve the composition root class until you go and add a registration for your new dependency and abstraction.
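
For reference, such a test can be as simple as the following sketch; it assumes Autofac (which I use elsewhere in these posts), and the CompositionRoot and ApplicationModule names are placeholders.

using Autofac;
using FluentAssertions;
using Xunit;

public class CompositionRootTests
{
    [Fact]
    public void CompositionRoot_should_be_resolvable()
    {
        var builder = new ContainerBuilder();
        builder.RegisterModule(new ApplicationModule()); // registers CompositionRoot and its dependencies
        var container = builder.Build();

        // Fails as soon as someone adds a constructor dependency without a registration.
        var compositionRoot = container.Resolve<CompositionRoot>();

        compositionRoot.Should().NotBeNull();
    }
}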

Things are not as clear cut when using Actors however.

Resolving the composition root is fine, but with Akka.NET you also need to create an actor system and the root level actors, which is something I’ve generally avoided doing as part of the composition root and have instead moved into the “run” logic of the main application class.

This means that not only do you need to test your main application composition root (which I’ve found with the actor model to have a very small footprint, since most of the logic is contained in actors), you also need to test the DI capability of your actor system.

At this point I implemented more targeted tests using the Akka.NET DI resolver to verify that I can resolve an instance of each actor in the system. This is important because not all actors are created immediately on startup. Some actors are single-run workers, started by a parent actor to fulfill a specific task, so resolving the root actor system will not exercise these ones.

In implementing resolve tests for each actor, I quickly found that the syntax I thought would work was not actually running the constructors.

Here is one of the original tests

public class ActorResolveTests
{
    private readonly Akka.Actor.ActorSystem _actorSystem;

    public ActorResolveTests()
    {
        _actorSystem = Akka.Actor.ActorSystem.Create("TestActorSystem");
        SetupAutoFacDependencyResolver();
    }

    internal void SetupAutoFacDependencyResolver()
    {
        var builder = new ContainerBuilder();
        builder.RegisterModule(new ActorSystemModule(_actorSystem));
        var container = builder.Build();

        // Constructing the resolver also registers it with the actor system.
        var dependencyResolver = new AutoFacDependencyResolver(
            container, _actorSystem);
    }

    [Fact]
    public void RandomImageProvider_should_be_resolvable()
    {
        // act
        var actorRef = _actorSystem.ActorOf(
            _actorSystem.DI().Props<RandomImageProvider>());

        // assert
        actorRef.Should().NotBeNull();
    }
}

In the above test fixture (xUnit 2) we basically create an actor system, configure an AutoFac container using a module that has all of the required registrations, and then the test itself attempts to create an actor reference using the DI().Props call.

The problem with this approach is that the actor system creates the actors ASYNCHRONOUSLY, which means you will always get back a valid IActorRef instance that points to the location where the future actor will be.

For unit testing, asynchronous behaviour like this is less than ideal, and I would be loath to put a Thread.Sleep in the test, as that would result in a brittle test that fails at some point due to differences in machine speed and CPU usage.

A much better way to exercise the DI container that builds your actors is to call the NewActor() method on the Props object itself, as follows

public class ActorResolveTests : TestKit
{
    private readonly Akka.Actor.ActorSystem _actorSystem;

    public ActorResolveTests()
    {
        _actorSystem = Akka.Actor.ActorSystem.Create("TestActorSystem");
        SetupAutoFacDependencyResolver();
    }

    internal void SetupAutoFacDependencyResolver()
    {
        var builder = new ContainerBuilder();
        builder.RegisterModule(new ActorSystemModule(_actorSystem));
        var container = builder.Build();
        var dependencyResolver = new AutoFacDependencyResolver(
            container, _actorSystem);
    }

    [Fact]
    public void RandomImageProvider_should_be_resolvable()
    {
        // act
        // NewActor() runs the actor's constructor synchronously, so a missing
        // registration fails here rather than asynchronously inside the actor system.
        var actorRef =
            _actorSystem.DI().Props<RandomImageProvider>().NewActor();

        // assert
        actorRef.Should().NotBeNull();
    }
}

From here, it’s not too far to add some reflection that finds all types inheriting from the Akka.NET actor base types and then generates a DI resolvability test for each of them.
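
One possible shape for that is sketched below; it assumes the resolver exposes a non-generic Create(Type) overload and uses xUnit’s MemberData to generate a test case per actor type, so treat it as a starting point rather than the exact tests I have in place.

using System;
using System.Collections.Generic;
using System.Linq;
using Akka.Actor;
using Akka.DI.AutoFac;
using Autofac;
using FluentAssertions;
using Xunit;

public class AllActorsResolveTests
{
    public static IEnumerable<object[]> ActorTypes()
    {
        // Find every concrete actor type in the assembly that holds our actors.
        return typeof(RandomImageProvider).Assembly
            .GetTypes()
            .Where(t => typeof(ActorBase).IsAssignableFrom(t) && !t.IsAbstract)
            .Select(t => new object[] { t });
    }

    [Theory]
    [MemberData(nameof(ActorTypes))]
    public void Actor_should_be_resolvable(Type actorType)
    {
        var actorSystem = ActorSystem.Create("TestActorSystem");
        var builder = new ContainerBuilder();
        builder.RegisterModule(new ActorSystemModule(actorSystem));
        var container = builder.Build();
        var resolver = new AutoFacDependencyResolver(container, actorSystem);

        // Build the Props through the DI resolver and run the constructor synchronously.
        var actor = resolver.Create(actorType).NewActor();

        actor.Should().NotBeNull();
    }
}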

The benefit of these types of tests, especially for child actors that are created at run time in response to a message, is that you know immediately whether they are going to be resolvable, rather than relying on someone running the application up and following the manual testing path that gets them to that point.


Ambient Context

Everyone sane agrees that dependencies for classes should come in via a single constructor on the class requiring the dependencies.

Anyone sane should also agree that the use of the static keyword is anathema to loosely coupled, testable code and should be avoided at all costs.

But…

It’s always felt a bit wrong to have to pollute a class’s constructor with an ILogger interface, and yet in pursuit of the constructor-only injection pattern this is what you need to do.

The ILogger dependency is one of those things that a class does not need directly to achieve its responsibility, and yet if we want to log then we need to pass that dependency in.

These types of dependencies are best described under the heading of “cross-cutting concerns”; that is, they are dependencies that cut across the entire structure of an application.

A number of years back I bumped into a pattern called Ambient Context, I think from Mark Seemann’s old .NET blog and his excellent book on DI, but I could never get past the static cling associated with the pattern.

Then I had the good fortune to have another senior developer, John Fahey, join our team, and after working together for more than a year he has come up with a rather nice implementation of the Ambient Context pattern.

I’m sure he will recognize my input into the collaborative design process as valuable, as it was mainly along the lines of “what, are you on this static bandwagon again!?” and “how will you test that?”, but after some back and forth with my feedback and challenges to write non-brittle unit tests that leave no state behind, I think the result is quite good.

Behind the scenes it relies on setting a static creational method that is used, each time the ambient context is called, to return an instance of the required concrete class, as well as allowing any instance of the Ambient Context to have its internal instance set directly. The creational method can be set at the composition root, or, if an acceptable default already exists, supplied as an override in the implementing class.

The base class is generic and looks like this

public abstract class AmbientService<T> where T : class
{
    public delegate T CreateDelegate();

    private T _instance;

    // Set this once, typically at the composition root, to control how instances are created.
    public static CreateDelegate Create { get; set; }

    // Implementing classes can override this to supply a sensible default.
    protected virtual T DefaultCreate()
    {
        return null;
    }

    public bool InDesignMode { get; set; }

    static AmbientService()
    {
        Create = () => null;
    }

    public T Instance
    {
        get
        {
            if (_instance == null && !InDesignMode)
            {
                // Try the static Create delegate first, fall back to the default,
                // and fail loudly if neither produces an instance.
                if (Create != null) _instance = Create();
                if (_instance == null)
                {
                    _instance = DefaultCreate();
                    if (_instance == null)
                    {
                        NoCreate();
                    }
                }
            }
            return _instance;
        }
        set
        {
            if (value == null) throw new ArgumentNullException("value");
            _instance = value;
        }
    }

    private static T NoCreate()
    {
        string message = $"Create not setup for AmbientService<{typeof(T).Name}>";
        throw new Exception(message);
    }
}

And here is a cut-down example of a date/time service with a default implementation that abstracts access to the god-awful static .NET DateTime API

public class AmbientDateTimeService : AmbientService<IDateTimeService>, IDateTimeService
{
    protected override IDateTimeService DefaultCreate()
    {
        return new DateTimeService();
    }

    public DateTime Now => Instance.Now;
}

and the wrapper class, even though it should be fairly obvious …

public class DateTimeService : IDateTimeService
{
    public DateTime Now => DateTime.Now;
}

Using an Ambient Context in a class Foo that requires some date/time calculations is as simple as newing up the ambient context like so…

public class Foo
{
    public AmbientDateTimeService DateTimeService { get; } = new AmbientDateTimeService();

    public void Bar()
    {
        DateTime start = DateTimeService.Now;
        // and some other stuff
    }
}
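
If you would rather control creation from the composition root than rely on DefaultCreate, you can point the static Create delegate at your container. A minimal sketch, assuming Autofac and a made-up wiring helper:

using Autofac;

public static class AmbientServiceWiring
{
    public static void WireAmbientServices(IContainer container)
    {
        // Every AmbientDateTimeService created after this point resolves its
        // instance through the container instead of DefaultCreate().
        AmbientDateTimeService.Create = () => container.Resolve<IDateTimeService>();
    }
}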

“How will I ever test this?” I hear you scream, as you notice that there is no setter on the property holding the ambient context… but remember, not only can you set the static Create method used by all AmbientDateTimeService instances, you can also set the internal instance used by each individual AmbientDateTimeService.

So testing is as simple as injecting a mock of IDateTimeService into the instance of AmbientDateTimeService on the subject under test.

[TestFixture]
public class FooTests
{
    [Test]
    public void TestSomething()
    {
        var sut = new Foo();
        var mockDateTimeService = new Mock<IDateTimeService>().Object;
        sut.DateTimeService.Instance = mockDateTimeService;
        // rest of test method
    }
}

There is no static cling, since each instance of Foo has its own instance of AmbientDateTimeService whose internal instance is being set to a mock.
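
For completeness, a slightly fuller version of that fixture might look like the following (Moq syntax; since Bar doesn’t return anything observable, the verification is only illustrative):

[TestFixture]
public class FooTests
{
    [Test]
    public void Bar_should_read_the_ambient_date_time()
    {
        var fixedNow = new DateTime(2016, 1, 1);
        var mockDateTimeService = new Mock<IDateTimeService>();
        mockDateTimeService.Setup(m => m.Now).Returns(fixedNow);

        var sut = new Foo();
        sut.DateTimeService.Instance = mockDateTimeService.Object;

        sut.Bar();

        // Foo never touched the static DateTime API, only our mock.
        mockDateTimeService.VerifyGet(m => m.Now, Times.Once());
    }
}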

In all cases where a class has a dependency that is part of its single responsibility, that dependency should come in via the constructor. But for those few cases where the dependency is a cross-cutting concern, such as logging, an ambient context implemented in this manner is an elegant solution.

Some of the things I use Ambient Context for these days are logging, date/time access and resource manager access.


Canterbury Software Summit

I had the opportunity to attend the Canterbury Software Summit for 2015 yesterday, which was a very polished and well-run event.

http://softwaresummit.co.nz/

For me the highlight of the technical stream was Ben Amor, from Xero, who presented a piece on addressing technical debt. It was a shame that the allotted time for the talk was only 30 minutes, as I felt that given 60 minutes Ben could have gone into more depth and really knocked it out of the park.

It was good to hear someone else mention the Mikado Method, which if you recall is something that I have brought into practice at my workplace and is an excellent formal method for attacking complex software changes.

I visited various booths, and as an example of just how small New Zealand is, I bumped into someone whose name I had seen in comment blocks all over the place at my previous place of employment, NZAS, but whom I had never met as they left before I started in my dev role there.

The Software Summit is an event Canterbury should be proud of and a big thanks to all the sponsors who contributed and made it happen, as well as the hard working team that organized and ran the event.

After the talks had finished and the closing speech had been given, I bumped into a couple of budding software developers, Ray and John, who knew me from various .NET User Group meetings, and they wanted to know if they could ask me some questions about things they had heard me talking about over the last few months.

The main question we talked about comes back to the good old OOP encapsulation rule, and it’s still a bit disappointing that institutions teaching code are still banging on about “private everything” and “inheritance” when these are some of the concepts that cause the most pain if not really grasped.

In short, the question was around “If I need to make sure I have everything private apart from my public methods, how do I write a unit test to assert something?”

Not a bad question, and a lot of the confusion can generally be cleared up by understanding the Single Responsibility Principle from SOLID.

If you have a private field/property on a class, that a public method changes and there is no way to assert the state change through the public API, then you probably have a class with multiple responsibilities. Instead of raising the visibility to public to access the state, what you probably need to do is understand the responsibility that the field/property represents and move it to a new class, along with any of the methods that manipulate its state. Then modify your original class to maintain its current behavior by using this new class and its behavior.

This means you now have a smaller class where the previously hidden state is part of its public API; it is more likely to have a single responsibility and you can write unit tests against it.
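
A contrived before/after sketch of that refactoring (every name here is made up for illustration):

namespace Before
{
    public class Order { }

    public class OrderProcessor
    {
        private int _failedOrders; // hidden state: no way to assert on it from a test

        public void Process(Order order)
        {
            // ... processing that sometimes fails ...
            _failedOrders++;
        }
    }
}

namespace After
{
    public class Order { }

    // The hidden responsibility now lives in its own small class, where the
    // state is part of the public API and trivially unit testable.
    public class FailedOrderCount
    {
        public int Value { get; private set; }

        public void Increment() => Value++;
    }

    public class OrderProcessor
    {
        private readonly FailedOrderCount _failedOrders = new FailedOrderCount();

        public void Process(Order order)
        {
            // ... processing that sometimes fails ...
            _failedOrders.Increment();
        }
    }
}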

I really think our Polytechs and Universities should place more emphasis, or at least equal emphasis, on SOLID while they are teaching OOP, especially with regards to inheritance.

A Pluralsight course I was watching recently that discussed the Liskov Substitution Principle from SOLID had a nice way of describing inheritance. Rather than using the traditional “is a” approach, you should really think of it as “is a substitute for the base class”.

If it looks like a duck, walks like a duck, but can’t fly, then maybe it’s not a substitute for a duck after all, maybe it’s actually an animal that doesn’t implement the IFlyingAnimal interface.
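
Put into (made up) code, the fix is not to inherit from Duck and throw from Fly(), but simply not to implement the flying contract at all:

public interface IAnimal
{
    void Walk();
}

public interface IFlyingAnimal : IAnimal
{
    void Fly();
}

public class Duck : IFlyingAnimal
{
    public void Walk() { /* waddle */ }
    public void Fly() { /* flap */ }
}

// Walks like a duck, but is not a substitute for one: there is no Fly() to violate.
public class Penguin : IAnimal
{
    public void Walk() { /* waddle */ }
}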


Mikado Method

A few months back I stumbled onto an interesting book proposing a pragmatic, formalized approach for tackling complex changes in software engineering.

The book is called The Mikado Method, written by Ola Ellnestam and Daniel Brolund, and is available from Manning here: http://www.manning.com/books/the-mikado-method.

After reading some of the sample chapters I was impressed enough to get it added to the technical library at work, and over the following weeks I read it and started putting it into practice.

How often do you have that experience of starting a piece of work, let’s say adding some functionality that will require changes to existing classes to make the new functionality tidier to implement, and ending up after some hours (or days) with a broken mess? Once you chase that rabbit a certain distance down the hole, there is no going back…

It’s a common feeling: you start to change something and can feel the complexity ratcheting up, but you’ve invested 30 minutes so far and you are sure that given another 30 you’ll have it done and dusted… 30 minutes later, maybe just another hour and all of this will be compiling and good to go… and before you know it you are committed, in for a penny in for a pound, and you need to fight your way to the end.

This is where the Mikado Method shines.

The basic premise is that when you are tackling a goal of a given size/complexity you want to do it in a methodical, systematic way, rather than chasing the compiler all over town.

It teaches that most of developing a system is not about the code you cut, but rather about learning the system, the domain and the technology in use. Making the actual code changes accounts for only a fraction of the total development time.

Essentially, the Mikado Method boils down to the following high-level concepts:

  • Set a Goal
  • Experiment
  • Visualize
  • Undo

Setting a goal – consists of thinking about the future state you are trying to arrive at and clearly stating it as a goal. Goals serve as the starting point for a change and the success criteria for it. They are also the basis of your experiments.

Experiment – If you are lucky enough to work with a statically typed language (showing my bias there!) then this phase generally consists of trying the step required to complete the goal and observing which parts of the system break. A simple example might be wanting to extract a method out of a class into a new class of its own.

Visualize – This is where you write down the goal, and any resulting breakages then become prerequisite goals of the goal being experimented on. Following on from the example above, you may find that extracting the method results in some undefined fields in the new method, since they were on the old class, and perhaps there were also some other private methods that this method depended on. Writing these down as goals might give you one goal of “pass all fields into target method as parameters” and a second of “move xyz private method to new class”.

Undo – The key part of the Mikado Method is this: once you have recorded the results of your experiment on your Mikado Graph, revert the code to its previously working state. That’s right, GIT RESET HARD. If you have been focused in an experimental mindset, you shouldn’t have that many changes to lose, and they were breaking changes anyway; plus you now know a lot more about how the system will respond to the particular change being made.

Eventually you will find the “leaf” nodes of the Mikado graph, where you make a change and nothing breaks: the system compiles and everything works. One such change might be “pass all fields into target method as parameters”. At this point these changes are checked in as a commit, the goal is ticked on the Mikado graph and the next goal is attempted.
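
To make the extract-method example a bit more concrete, here is a contrived sketch (all names invented):

// The experiment: move CalculateShipping out of OrderService into a new
// ShippingCalculator class.
public class OrderService
{
    private readonly decimal _flatRate = 5.0m;

    public decimal CalculateShipping(Order order)
    {
        return IsRural(order) ? _flatRate * 2 : _flatRate;
    }

    private bool IsRural(Order order)
    {
        return order.PostCode.StartsWith("9");
    }
}

public class Order
{
    public string PostCode { get; set; }
}

// Naively pasting CalculateShipping into a new ShippingCalculator breaks the
// build: _flatRate and IsRural don't exist there. Those breakages become the
// prerequisite goals on the graph ("pass the rate in as a parameter", "move
// IsRural across"), the experiment is reverted, and each goal is then tackled
// as its own small, working commit.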

I’ve used the Mikado Method for at least the last 4-5 sprints and I’m definitely sold on it as a way of approaching changes in a systematic, incremental fashion.

A sample graph looks something like this one

Some of the other benefits of using a Mikado Graph are

  • For goals that are long term and will require multiple sprints to achieve, it allows small chunks to be experimented on in each sprint and any successful goals can be checked in and committed, since a goal is only achieved if the system continues to work. This allows potentially expensive refactorings and code changes to be done incrementally rather than chewing up an entire sprint or two with no features being worked on.
  • Having the Mikado Graph visible lets everyone see the progress being made towards the goals, and potentially allows multiple people or teams to work towards a common goal
  • By writing all of the breaking changes and results of experiments down as Mikado goals, your head is free of clutter and can concentrate on the current goal at hand without trying to keep the entire state of the change in your head at once
  • If you get hit by a bus (a favorite metaphor in our team since we work near an extremely busy/dangerous road) then someone else will have a visual idea of where you were heading
  • Complex changes are broken down into bite-sized commits which make for easy code reviewing, since you only commit goals that don’t break the system. There is nothing worse than getting a commit from someone who started off with a goal, then just smashed away at it for hours or days on end and checked everything in once it compiled again. Ouch.

All of this led to me running a workshop today to present the Mikado Method and how to go about applying it, using a real work example that we are currently undertaking: upgrading a certain logical area of our projects from older versions of .NET (3.0-3.5) to 4.5. This may sound simple, but to put things in perspective we are talking about a solution with > 100 projects and lots of dependencies to navigate around.

The slides I used for my presentation are fairly brief, but you can view them here.

https://docs.google.com/presentation/d/1vxxqm1EAFaQ9noJVqXR_FsGGObCy3mATHT7oOyE7zEM/edit?usp=sharing

The slides and this blog post don’t do it justice; instead, spend the money (it’s around $45 USD) and get the book added to your technical library today. It’s worth it.

Reference:

Another blog post about the Mikado Method from 2011

https://theholyjava.wordpress.com/2011/04/28/what-ive-learned-from-nearly-failing-to-refactor-hudson/
