XP Days Ukraine 2017

I recently spoke at XP Days 2017 in Ukraine and had a great time.

Mikalai and the team at XP Injection did a fantastic job of organizing this event and gathered together some great speakers covering a wide raft of topics. While most of the talks were in Russian, there was still at least one talk in English in each time slot, more than catering for the non-Russian-speaking attendees.

The schedule had my talk on Ports & Adapters opening immediately after the keynote from Venkat, competing against another English talk by Noam Almog on Rapid Development with Microservices.

The various scenarios playing out in my head didn’t help with the nerves either. If the keynote speech was awesome (which it was) then I’d have people with high expectations at my talk; if the keynote sucked, then I’d have disillusioned people at my talk. If everyone was still high on microservices cocaine then I’d have a very small audience, but if people really wanted to know more about ports and adapters then I’d have a full room, and judging by the number of chairs that would be around 200 people.

Thankfully, the keynote talk by Venkat had a strong focus on what I’ve always referred to as designing for testability, something the ports and adapters pattern is extremely good for, and a reasonable number of people showed up to my talk. Not a full room, thankfully, but still enough to make it one of the larger talks I’ve given so far.

The video stream of my talk is available on YouTube here for those who have asked for it, and the full playlist of the talks is here.

One of the features the conference organizers provided was an area set aside in the foyer where, after a speaker finished their talk, people could come and have more lengthy discussions around the topic.

This area worked really well, and after I had finished speaking I spent an enjoyable hour and a half with various people discussing ports and adapters, scribbling on whiteboards and doing some more on-the-fly coding examples!

For those of you who have never attended an XP Days event run by XP Injection, definitely keep it on your radar for 2018 since it’s certain they will continue to get better each year and continue to have excellent topics relevant to most people.

Posted in Uncategorized

Ports & Adapters – Software Architecture

Alistair Cockburn’s description of a software architecture that divides the “inside” and “outside” of an application was something I first ran into quite a while ago, and at first I didn’t quite grok why it was better than my current experiences with N-Tier.

However, with time, experience and quite often pain, perspectives can shift, and over the last couple of years things fell into place. It has become almost exclusively my weapon of choice when building a new application.

As with anything that differs from the mass of opinion, it can be quite difficult to explain to fellow developers and teams how to implement the pattern. This can be due to any number of problems, for example sheer wrong-headedness, but most often it is simply because, to see the benefits of an architectural style such as ports and adapters, you really need to have your A game already in terms of understanding the following:

  • System/Application/Domain (Business Logic) Boundaries
  • Dependency Inversion (by this I mean understanding what is meant by high level and low level, not Dependency Injection; see Fowler)
  • Abstractions (this is not adding an interface to every class and having the interface live with the implementation!)
  • Coupling (be comfortable with statics and concrete classes where appropriate; not everything needs to be an instance and, see above, not everything needs to be an abstraction)
  • Unit testing (again, this is not having a matching test fixture for every class; if you’re a TDD practitioner, most of the classes that exist when you’re done are refactorings and implementation details of your original starting point. This is especially true if you use outside-in development appropriately; see Seemann. For what it’s worth, outside-in development is what I believe Kent Beck generally meant when he brought forth the TDD concept)

But I’m off on a tangent (as usual).

I have often been asked to provide an easy .NET example people can use to familiarize themselves with Ports and Adapters, and it has been on my list of things to get to… The only problem with my list of things “to get to” is that it’s quite long, and my sons are pretty good at nerd sniping me with things like “let’s write a game in Scratch” or, more recently, “let’s write a Minecraft mod”.

Now that I’ve sat down and reflected over the last few intense months of starting a bunch of projects at Coolblue for an integration project, all of which use a Ports and Adapters style architecture, I thought it might be time to “take the show on the road” so to speak.

So I started putting a presentation together and then figured I might as well find a forum other than my work colleagues; after all, there is only so much they can take of me muttering on about Mikado methods, boundaries, idempotency, concurrency and the other bees in my bonnet.

I’ll be talking at the XP Days event in Ukraine, and this gave me the motivation needed to start creating a git repo with a small sample .NET application demonstrating a simple Book Ordering domain, and various examples of ports and adapters being wired up.

There are examples of adapters that use an IoC container internally, which is nice if you have some complicated composition going on that you don’t want to do by hand, and there are examples of upstream ports that invoke various use cases from the domain, such as allowing a message on a RabbitMq queue to invoke the logic for placing a book order request.
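To give a feel for the upstream port idea, here is a rough sketch in Python rather than .NET (the names `PlaceBookOrder` and `QueueMessageAdapter` are mine for illustration, not taken from the repo): an inbound messaging adapter translates a raw queue message into a call on a domain use case, so the domain never knows the message came from RabbitMQ.

```python
import json

class PlaceBookOrder:
    """Domain use case: the 'port' that the outside world drives."""
    def __init__(self, orders):
        self.orders = orders  # any store with append(); a stand-in for persistence

    def execute(self, title, quantity):
        if quantity < 1:
            raise ValueError("quantity must be positive")
        self.orders.append({"title": title, "quantity": quantity})

class QueueMessageAdapter:
    """Inbound adapter: knows about the wire format, not the business rules."""
    def __init__(self, use_case):
        self.use_case = use_case

    def on_message(self, raw_body):
        # e.g. the body of a RabbitMQ delivery, handed to us by the consumer
        payload = json.loads(raw_body)
        self.use_case.execute(payload["title"], payload["quantity"])

orders = []
adapter = QueueMessageAdapter(PlaceBookOrder(orders))
adapter.on_message('{"title": "Growing OO Software", "quantity": 2}')
```

Swapping RabbitMQ for, say, an HTTP endpoint means writing another small adapter; the use case is untouched.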

The example also focuses on the situation where you might have common business logic but have to support multiple different hardware configurations for clients. That is not something we really do at Coolblue, since we are an internal software company, but it also demonstrates the ability to write a second adapter to migrate from, say, a document database to a relational database, or from direct calls to a database to REST calls via a more suitable service.

The code is available here and is by no means finished as an example yet. There is also an extensive disclaimer about UNDERSTANDING the example and not copy/pasting things out of it into production code. The example focuses directly on ports and adapters and ignores millions of other things that are critical to any MVP going into battle.

Also, if anyone wants me to come and run the presentation through at a user group I’d be only too happy.


Posted in Article

Behind The Scenes @ Coolblue

So, now that the dust has settled and I’ve got some other commitments out of the way, I can write a bit about the Behind The Scenes event that we hosted at Coolblue a few weeks back.

Originally the event was planned for December last year (2016), and I was approached to see if I would like to speak on a topic for the Behind The Scenes. A colleague of mine, Pat, was going to be the other speaker, so we got together to see if we could thrash out a common theme for our talks.

Pat was keen to talk about some of the new tech angles we are using at Coolblue around logging and, being quite the thrill-seeker (in my opinion!), was going to do some live coding demos as well. With his topic in mind, the ELK stack and Serilog / structured logging, it was easy for me to come up with a talk that matched.

People that have worked with me over the previous years know that I have a particular penchant for TDD and refactoring legacy software, and those that have worked with me since I found the book “The Mikado Method” around the end of 2014 will also know how passionate I am about applying the method and encouraging / mentoring anyone else who is keen to give it a go.

So what was my topic? Well, given that the original push for the ELK stack and structured logging with Serilog came from a number of awesome people at Coolblue, I figured I would talk about how the team I have been working with for the last 4-5 months used the Mikado Method to refactor an application from using Log4Net with email appenders to using Serilog via Redis, so logs could be viewed in our ELK stack.

The details of how we prepare for a Coolblue Behind The Scenes talk can be found on Pat’s excellent blog post here.

My slides for the talk that I gave after Pat’s can be found here.

The slides by themselves are not that enlightening, so I also managed to record the video for my talk (originally we were going to record both talks, but due to technical gremlins in the works Pat unfortunately missed out). Once I have the green light that I can share the recording, I’ll make it available to those who have asked for it.

In the upcoming weeks (or when I get a round tuit) I am hoping to have a blog post written up about the very large refactoring undertaking that has been going on within the current team I’m embedded in. This involved taking an existing solution with tangled dependencies and some hard-to-reason-about code, and using the Mikado Method over a number of sprints (still ongoing) to move the solution architecture to a hexagonal implementation, namely Ports and Adapters.

I’ve used Ports and Adapters on a number of projects and it never fails to make it obvious for developers to figure out where infrastructure code goes (adapters), how it gets used in the domain (ports) and, most importantly, where the interfaces live (domain) to ensure that the dependencies / references are pointing in the right direction.
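To make that dependency direction concrete, here is a minimal sketch (Python for brevity, and all the names are my own, not from any project): the port interface lives with the domain, and the infrastructure adapter implements it, so references always point inwards towards the domain.

```python
from abc import ABC, abstractmethod

# --- domain: owns the port definition ---
class OrderRepository(ABC):
    @abstractmethod
    def save(self, order): ...

class OrderService:
    def __init__(self, repository: OrderRepository):
        self.repository = repository

    def place(self, order):
        # the domain talks only to its own abstraction,
        # never to a concrete infrastructure class
        self.repository.save(order)

# --- infrastructure: depends on the domain, never the other way around ---
class InMemoryOrderRepository(OrderRepository):
    def __init__(self):
        self.saved = []

    def save(self, order):
        self.saved.append(order)

repo = InMemoryOrderRepository()
OrderService(repo).place("order-42")
```

A database-backed or REST-backed adapter would implement the same `OrderRepository` port; the domain code wouldn't change.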

Something I find lacking is good reference / template examples of how people have implemented their adapters and then wired them into their application, so I also hope to come up with a small repo on GitHub that demonstrates some of the patterns I’ve found to be useful when implementing this architectural style.

Posted in Article

Hot Swap Katas

For our Software Craftsmanship session at Coolblue this week I decided to run a style of kata that I had developed in New Zealand but had not had the opportunity to run here.

Our Software Craftsmanship sessions had until recently always been focused on C#, after all that’s what most of us back office developers are using, but recently I’ve started inviting the front end developers to our sessions as well.

Last week was our first session where we had JavaScript, PHP and C# all being used for kata practice, which was great to see, especially since I haven’t used PHP for about 10 years and was amazed that it actually has a test framework, runner etc…

So, the type of exercise I ran for this week’s session was what I like to call a “Hot Swap” kata, and the actual exercise was to implement a Sudoku move validator.

The basic premise is that everyone brings a running test written in whatever language they are comfortable with. Everyone starts implementing the same kata, then after a 10 minute interval everyone has to stop, mid keystroke, and immediately swap to the next keyboard on their right.

You are allowed to ask some basic questions of the person who has just been “bus factored”, like “how do I run your tests in language XYZ”, but you can’t talk about implementation details or what they were thinking; you need to figure that out from the tests.

The thing I like best about this type of kata is that it really highlights a few things:

Up Front Design

As much as I love TDD, this exercise highlights that you need to do some up front design, scoped appropriately to the user story at hand. There are a lot of times where I see people pick up a story and immediately start slinging code. This is OK if you’re doing the Mikado Method (another passion of mine) and intend to throw all the code away, recording your experimentation results, but sometimes just taking 30 minutes to an hour to think outside-in about how the story is going to be implemented can lead to insights that are not immediately obvious.

Outside In / Top Down

It highlights how, with TDD, you want to try to start from the outside boundary of your feature, as this makes it easier to understand when you swap keyboards because the tests will often describe overall expected behavior better. Quite often in TDD the biggest mistake is starting from the bottom with the smallest detail, building that first and then working your way up.

Test Naming / Design

TEST NAMING / DESIGN!!! The main thing this exercise highlights is how critical the naming of a test is! Nothing will slow you down more, or potentially mislead you, than test names that are too vague or generic.

In one of the implementations during our kata I ran into some parameterized tests that were passing in both the value to be passed to the method AND the asserted result as a boolean. The test was attempting to verify whether a number was within the range allowed in a Sudoku cell, but passing all the values into a test named “CanPutNumber_ShouldAllowValidNumbers” is losing the opportunity to have some more specific tests.

I would always suggest in these situations where you have a parameterized test that also passes in the assert value, you actually have multiple requirements that are better represented as individual tests.
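As an illustrative sketch of the point (in Python here; the kata itself was done in several languages, and the names are mine): rather than one generic parameterized test that also takes the expected boolean, each requirement gets its own descriptively named test, with the two boundary tests at the end being the obvious candidates to collapse into a single parameterized test.

```python
def can_put_number(number):
    """Sudoku cell rule: only the values 1..9 are valid."""
    return 1 <= number <= 9

def test_can_put_number_rejects_zero():
    assert not can_put_number(0)

def test_can_put_number_rejects_ten():
    assert not can_put_number(10)

def test_can_put_number_allows_a_mid_range_value():
    assert can_put_number(5)

def test_can_put_number_allows_the_lower_bound():
    assert can_put_number(1)

def test_can_put_number_allows_the_upper_bound():
    assert can_put_number(9)
```

Each test name now states a requirement, so the person inheriting the keyboard can read the spec straight off the test list.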

The very last test I might make parameterized, with the two values 1 and 9.


In conclusion, the kata worked out really well. Not only did we have Python in the mix this time, which I used quite extensively a few years ago but was amazed how much I had forgotten, but we also had the difference in IDEs (Visual Studio versus Rider) and Mac laptops, which was quite “educational” as well.

If you run code katas, give this one a try and let me know how you find it !



Posted in Article

The Clean Coder

Recently I had the privilege of attending an Agile Software Craftsmanship workshop given by perhaps one of the most influential people in my coding career, Robert C. Martin, or as he is better known, Uncle Bob.

While the workshop itself was definitely aimed at newer people to our trade, or those hovering around the edges of the Software Craftsmanship movement, I still found good value in attending.

A wide range of topics was covered (the programme for the next course can be seen here), but the chance to ask Uncle Bob some of the more curly questions that crop up over a career of working with various personalities was too good to miss.

So, after the first day of the workshop I asked Bob what he had planned for tea that night… When he replied that he had nothing special on I suggested we head out to a local London pub, down some ales with a meal and have a good old yarn.

What an opportunity!

So after a few pints of Guinness and a great meal, the following topics were among those discussed:

Single Responsibility Principle

This is one of my pet peeves at the moment. Now, I think the SOLID principles are a great set of guidelines when it comes to building maintainable software, but the one I think gets misinterpreted a lot, and by extension does the most damage, is SRP.

The original definition, and the one I use almost exclusively when discussing SOLID, is Single Reason For Change, but at some point the words changed and “Responsibility” crept in.

The problem with “Responsibility” is that it’s a very loaded English word. Taken to its OCD limit, it means you can’t use something like the Repository pattern, because having Insert, Update, Delete and Get methods on the same class is very clearly way more than a single responsibility (!)

The damage this does, especially when coupled with the crack cocaine of dependency injection containers, is clear when you have to understand one of these projects and find yourself wading through a quagmire of interfaces and classes, each of which may only have a single method, and a single implementation … and a single class that actually uses the damn thing !

Two other measures of software quality that are often forgotten while rabidly frothing at the mouth about Single Responsibility are Coupling and Cohesion. Breaking up related elements that belong together, such as splitting the public methods on a Repository for updating and inserting records into FoobarInserter and FoobarUpdater, more often than not results in shotgun surgery when a change has to be made.


Well, Bob agreed that the original definition he coined, Single Reason For Change, was not about “responsibilities” at all, but more about the code only being affected by a single actor.

So if you had code in a class that was impacted by changing requirements from, let’s say the publishing department due to the formatting of some data, and then also changing requirements from the legal department due to changes in law, then this would mean a class has more than a single actor that will cause it to change.

This doesn’t mean, however, that we just shovel everything into a single class per actor, and as with anything there are no black and white rules. However, before you are tempted to split something out into a separate class, ask yourself the following:

What is the reason for change that would cause me to alter this class and no other class, versus the reason for change that would cause me to alter both this class and the class I am separating its “responsibility” from?

In the case of the Repository versus Database Commands it might go something like this:

If I have a FooRepository, then the worst case scenario when the database table changes is that I need to edit a bunch of public methods in the same class, each of which has its own single reason for change at a more granular abstraction level. I have a single class to find.

Now suppose instead I have a FooInsertCommand, FooUpdateCommand, FooDeleteCommand and whatever other implementations there might be. At best, the person has organized them into a namespace following a naming convention… I hope. In this scenario, if the database table changes, I am reminded of Shalloway’s Law, in that I can be sure I will only find N-1 of the places that I need to change the first time around.
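To make the comparison concrete, here is a toy sketch of the cohesive option (Python for brevity, names mine): one Repository whose public methods each have their own, more granular reason for change, but which gives you exactly one class to find when the underlying table changes.

```python
class FooRepository:
    """All persistence operations for Foo live together in one cohesive class."""
    def __init__(self):
        self._rows = {}  # stand-in for the actual database table

    def insert(self, key, value):
        self._rows[key] = value

    def update(self, key, value):
        if key not in self._rows:
            raise KeyError(key)
        self._rows[key] = value

    def delete(self, key):
        self._rows.pop(key, None)

    def get(self, key):
        return self._rows.get(key)

repo = FooRepository()
repo.insert("a", 1)
repo.update("a", 2)
```

The command-per-class alternative would scatter these four methods across FooInsertCommand, FooUpdateCommand, FooDeleteCommand and FooGetQuery, trading cohesion for a "responsibility" count of one each.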

Using “Responsibility” in SRP also, ipso facto, leads to a place where you cannot have any sort of API classes, since they would naturally violate this.

Enough of my rantings for now, I will follow up with some more discussions on other software topics that Bob and myself discussed in future blog posts.

Posted in Article

Advanced CQRS

A few weeks ago I had the opportunity to attend Greg Young’s “Advanced CQRS and DDD” workshop in London.

While some of the material Greg covered in the workshop was familiar, mainly around the concept of asynchronous messaging and its associated headaches, some of it was new ground.

Over the three-day workshop we evolved a scenario that modeled a restaurant business process consisting of the following basic workflow:

  1. Waiter takes the order from customer and gives it to the cook
  2. The cook adds ingredients and cooks the order and gives it to the assistant manager
  3. The assistant manager adds the prices for the meal items and gives it to the cashier
  4. The cashier takes payment from the customer and marks the order as paid

The first implementation involved basically creating a document that represented an order, creating classes to represent the actors mentioned above and having each of the instances call the required method on another instance.

Tightly coupled, synchronous and hard to change the business process if required.

Once we had this in place we evolved the system using a common interface (IOrderHandler) and a series of decorators that added abilities to each of the underlying “actors”. These decorators involved things like having a concurrent queue to receive “messages” on, having a thread to process messages asynchronously, and some decorators that injected various “faults” into the system, such as messages being dropped or sent multiple times.
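A rough reconstruction of that idea (the workshop code was C#; this Python sketch and its names are my own): every actor shares the same handle() shape, and a decorator adds a queue in front of any handler without the handler knowing about it.

```python
from collections import deque

class Cook:
    """A plain 'actor': handles an order synchronously."""
    def __init__(self):
        self.cooked = []

    def handle(self, order):
        self.cooked.append(order)

class QueuedHandler:
    """Decorator: same handle() shape, but buffers messages for later processing."""
    def __init__(self, inner):
        self.inner = inner
        self.queue = deque()

    def handle(self, order):
        # called by the publisher; returns immediately
        self.queue.append(order)

    def pump(self):
        # in the real thing a worker thread would run this loop
        while self.queue:
            self.inner.handle(self.queue.popleft())

cook = Cook()
queued = QueuedHandler(cook)
queued.handle("order-1")
queued.handle("order-2")
queued.pump()
```

Fault-injecting decorators (dropping or duplicating messages) follow the same shape: wrap any handler, expose handle(), misbehave inside.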

At a certain point when you are moving towards maximal decoupling, it also becomes apparent that your actual “business process” is becoming more and more abstracted and hard to grok.

Enter the “Process Manager” pattern. In this case we created a new actor called the “Midget”, whose job it was to handle the routing of messages from one actor to another, essentially representing the business process in a single place. Think about that for a second: if you have a single class representing the flow of a process (via messaging), you also have a single place to understand that flow, and also the option of having alternative business processes if required.

We actually went down the route of implementing an alternative business process, in the form of the waiter flagging a customer as “dodgy”, which in turn changed the business process to the following (by this point we had refactored to use a publish/subscribe message bus):

  1. Waiter takes the order from the customer, creates a “Dodgy Customer Midget” to handle the business process
  2. Dodgy Customer Midget places a command on the message bus “CalculateOrder”
  3. Dodgy Customer Midget waits for the event “OrderCalculated”
  4. Meanwhile, the assistant manager reacts to the command “CalculateOrder”, does his work and then sends an event “OrderCalculated” with the order itself as payload
  5. Dodgy Customer Midget reacts to the event “OrderCalculated” …

etc… until the business process of making the dodgy customer pay first, BEFORE his meal is cooked, has occurred.
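A minimal sketch of the process manager (the "Midget" name follows the workshop, the routing logic below is my own simplification): because the routing lives in one class, the dodgy-customer flow of price first, then pay, then cook can be read top to bottom in a single place.

```python
class DodgyCustomerMidget:
    """Process manager for one order: routes events to the next command."""
    def __init__(self, bus):
        self.bus = bus
        self.log = []  # events this midget has seen, handy for debugging

    def handle(self, event, order):
        self.log.append(event)
        if event == "OrderPlaced":
            self.bus.send("CalculateOrder", order)
        elif event == "OrderCalculated":
            self.bus.send("TakePayment", order)   # dodgy customers pay BEFORE cooking
        elif event == "OrderPaid":
            self.bus.send("CookOrder", order)

class FakeBus:
    """Records commands instead of dispatching them, for demonstration."""
    def __init__(self):
        self.sent = []

    def send(self, command, order):
        self.sent.append(command)

bus = FakeBus()
midget = DodgyCustomerMidget(bus)
midget.handle("OrderPlaced", {"id": 1})
midget.handle("OrderCalculated", {"id": 1})
midget.handle("OrderPaid", {"id": 1})
```

A regular-customer midget would be a sibling class with a different routing order; the actors themselves never change.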

Another very interesting concept we discussed and implemented was avoiding the deserialization of the message / payload by writing a wrapper class that can manipulate the structure directly. The reason for this is the following:

Say a process called “A” sends a message with two fields called “Name” and “Title” to process “B”.

Process “B” then deserializes the message into its own structure, which only has the “Name” property. At this point you have lost information from the original message. When it comes time for “B” to send that message on, let’s say to “C”, then when “C” receives the message and deserializes it, even if “C” has a structure to accept “Name” and “Title”, the information for Title is lost.

By keeping the raw message itself, in our case a JSON payload, and instead writing a wrapper to manipulate the structure directly (in this case using Newtonsoft and JObject calls) you can change the bits of the message you know about, while leaving the rest untouched.
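A small illustration of the idea using Python's json module in place of Newtonsoft's JObject (the field names mirror the example above): the wrapper edits only the fields it knows about and leaves everything else in the payload untouched.

```python
import json

class MessageWrapper:
    """Manipulates the raw document directly instead of mapping it to a type."""
    def __init__(self, raw):
        self._doc = json.loads(raw)   # keep the WHOLE document, known fields or not

    def set_name(self, name):
        self._doc["Name"] = name      # touch only what we know about

    def to_raw(self):
        return json.dumps(self._doc)

raw_in = '{"Name": "Alice", "Title": "Dr"}'
msg = MessageWrapper(raw_in)
msg.set_name("Bob")
raw_out = msg.to_raw()  # "Title" survives even though this process never modelled it
```

When the message is forwarded on, downstream consumers that do understand "Title" still receive it intact.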

I put together a very small demo that highlights the concept being used between method calls that simulate the reply/request of messages in a distributed system, available on GitHub here -> decoupled-messaging-demo

There were some topics we didn’t go into a lot of detail on during the course, which was a bit unfortunate, but luckily Greg had nothing to do on Monday night, so I suggested we head out and grab a beer and some food at a local pub. Greg knows London extremely well and had a great place in mind.

Over the course of a few beers and some food, I managed to ask a lot more of the hairy questions I had regarding DDD, eventually consistent databases and CQRS in general. Between the technical talk I also found out that we had some common ground around both being huge fans of Ice Hockey (as well as both having played in the past) and having experience with Boxing and martial arts. Big thanks to Greg for his time and the great conversation over beers that night !

If you are heading into the DDD territory and are also considering CQRS / Event sourcing then I would definitely recommend his course at Skills Matter once you have the basics embedded.


Posted in Article

.NET Pathfinder / Architect

A few weeks ago I made a decision to move into a new role at Coolblue.

During my career in New Zealand I’ve avoided the often loaded “Architect” roles within various companies and offers, due to the nature of the work. The role is usually one of typical ivory-towerishness, well depicted by Uncle Bob in one of his earlier videos on Software Architecture, and discussed in various articles.

I’ve always believed, similar to Uncle Bob (or perhaps because of him?), that Software Architecture is about the intent of the software, not about the given framework xyz used to accomplish the goal. All too often the emphasis is placed on choosing NServiceBus over RabbitMQ, or Oracle over SQL Server. I’m not saying that these are not important decisions, but rather that in terms of agile, and what makes you go fast, it’s the intent of the software you are writing that is your “Architecture”.

Here at Coolblue, the role of Pathfinder has a wide variety of tasks and responsibilities, but the main attraction was being able to have influence over what I believe the role of the Pathfinder should be.

In summary, a Pathfinder should be an exemplary professional, skilled in the way of Software Craftsmanship, who is available to any and all teams that require him. He should spread best practices among the teams in terms of TDD, Clean Test Design, SOLID, Fault Tolerant Design and DDD (to name a few), but should also have an eye on the future, making sure that we have the correct technologies on the radar for investment.

But mainly, he should be coding. He should actively work in a team for a reasonable proportion of his time, helping the team of the moment to achieve success in their stories, with a focus on helping teams avoid technical debt or clean it up where it has been sown.


Posted in Article

TDD Katas vs Real World

Quite often when we are running TDD workshops there are new people who come along to find out what it is all about and struggle to understand the why.

Some of the comments I hear most are that the exercises we work through for the katas are “too simple”, or that some of the decisions we make seem too arbitrary to be useful in real life programming.

For example, when doing the string calculator example from Roy Osherove’s site, a comment was made when someone wanted to introduce code and refactor the existing tests we had built up so they didn’t depend on “,” being the default separator. The general TDD advice was that we shouldn’t do this because it wasn’t part of the MVP as we knew it. The comment was along the usual lines of “but I’m an experienced software developer and I can see we might need it down the track in case someone removes the support for comma as the default separator”.

If someone decides to remove the support for comma as the default separator, however, then this is a change to functionality that should cause a test to break somewhere, and I am OK with this. The one improvement we could perhaps make to the tests is to refactor all of the logic for creating the input string into a common helper method that concatenates the list of numbers to be summed with the comma.
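A sketch of that refactoring (in Python rather than whatever language the session used; a minimal `add` is included just to make the sketch self-contained): the tests build their input through one helper, so if the default separator ever changed there would be a single place to update.

```python
def add(numbers):
    """String calculator under test: sums comma-separated numbers; '' is 0."""
    if not numbers:
        return 0
    return sum(int(n) for n in numbers.split(","))

DEFAULT_SEPARATOR = ","

def calculator_input(*numbers):
    """Test helper: the ONLY place that knows about the default separator."""
    return DEFAULT_SEPARATOR.join(str(n) for n in numbers)

def test_sums_two_numbers():
    assert add(calculator_input(1, 2)) == 3

def test_sums_many_numbers():
    assert add(calculator_input(1, 2, 3, 4)) == 10
```

If comma support were removed, the helper (and the tests through it) would break loudly in one place instead of requiring edits to every literal `"1,2"` string.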

Anyway, moving on, most of the questions like this tend to come around because the problems and code we are solving for katas are small, and deliberately so as they are the kind of thing you want to sit down and practice for 10-20 minutes in the morning before you start for the day, or practice in a 1 hour session with a team. This makes them hard to relate to the real world of user stories and features.

One of the intuition pumps I encourage people to use when thinking about TDD with katas is to try to relate each step of a kata to a feature or user story in the real world. When you scale things up in this manner, some of the decisions you make during a kata for a single “step” start to make a lot more sense if you imagine that “step” was a user story that was going to cost you a day or two of development.

So, if you are running TDD katas, or starting to practice them yourself, or if you are one of those people who likes to ask questions like this when you attend a TDD workshop, just remember it really is like a karate kata: it is about practicing things like Red, Green, Refactor, keyboard skills, thinking patterns, test naming and test design, to name a few.

Challenge yourself to see the kata as a feature that is being developed over a number of sprints and each step being a user story or number of user stories that are worth days of developer time, rather than a single test or two during a kata.

Posted in Article

AmbientContext Services – Nuget packages

I finally got to the point where I have been using the AmbientContext pattern for a number of ambient services, so I made a few GitHub repositories, some AppVeyor CI builds and some NuGet packages so I could consume them in other projects without copy/paste.

The base implementation of the abstract AmbientService can be found here


and the NuGet package is here if you want to start implementing some of your own



I also created two packages for the most common cross cutting concerns I implement using the AmbientService pattern, namely DateTime resolving and Logging.
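As a rough illustration of the pattern those packages implement (the packages themselves are C#; this Python rendition and its `AmbientClock` name are mine): an ambient service with a sensible default that tests can swap out and restore, without injecting a clock through every constructor.

```python
from datetime import datetime, timezone

class AmbientClock:
    """Ambient context for 'now': defaults to the real clock."""
    _current = staticmethod(lambda: datetime.now(timezone.utc))

    @classmethod
    def now(cls):
        return cls._current()

    @classmethod
    def set(cls, provider):
        # tests substitute a fixed provider here
        cls._current = staticmethod(provider)

    @classmethod
    def reset(cls):
        cls._current = staticmethod(lambda: datetime.now(timezone.utc))

# production code just calls AmbientClock.now(); a test pins the time:
fixed = datetime(2017, 1, 1, tzinfo=timezone.utc)
AmbientClock.set(lambda: fixed)
pinned = AmbientClock.now()
AmbientClock.reset()
```

The logging variant follows the same shape: a static accessor with a default implementation and a seam for tests.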



with the respective NuGet packages being here:



If anyone finds them useful, drop me a line and let me know.

Posted in Code

RESTful Microservices

Last week I was able to attend a workshop in London on implementing microservices using the concept of RESTful communication.

Jim Webber, the presenter, is an articulate character, which combines well with his relaxed delivery style and engaging banter. Being a colonial, an antipodean and probably the loudest person in the room meant that I was a prime target for some of his banter, but it was all good natured and kept everyone focused.

The workshop was held in the well-designed “CodeNode” premises owned by Skills Matter, and covered a good range of topics, the majority from the RESTful point of view, but some of them around microservices in general.

Given that I have already been making a tentative journey down the microservices route over the past 6 months, some of the more interesting takeaways I got from the workshop were:

Microservices Definition

Microservices is often misunderstood as an architectural style but is actually more correctly thought of as a delivery pattern, since the term broadly encompasses the practices that will lead to a responsive delivery model.

A microservice is not n lines of code, or a process that can be rewritten in x sprints… it’s not about size at all, but rather about context, specifically the business context. By focusing on business contexts as the boundary of a microservice, that boundary is more likely to be relatively stable than something that has been divided into microservices along different lines.

Organizational Maturity

Any problem that is solved with a single process (a monolith) is always going to ramp up in complexity when you start breaking it apart into multiple processes. This goes up even further when the multiple processes are running on different hosts, and even more if you are involving multiple instances of the same process (fault tolerance / load balancing).

Distributed systems are hard to build and hard to debug.

If your organization does not value or see the need for the following:

  • Local governance and decision making within teams
  • Automated testing
  • Centralized logging
  • Automated delivery (CI, deployment)
  • Monitoring and alerting

then you should definitely not be entering the world of microservices, otherwise you are more likely to reap all of the drawbacks of many small moving parts but none of the benefits.

REST does events

Given that a lot of my early coding days were in the realms of real time control systems, state machines and I/O driven systems, I must admit I had this myth in my head as well: that REST may be great for CRUD operations but is probably not that useful for “event” driven systems.

Jim did a great job of challenging this myth, especially with regards to “real time” events. One of the first things he said is that if you need sub-millisecond, or sub tens of milliseconds, responses then REST is definitely not for you. But the thing you need to challenge yourself with is: do you really need that?

Most processes within an enterprise system are quite happy knowing about events half a second, 1 second, 5 seconds or even a few minutes later. It really depends on the problem you are trying to solve.

If the problem you are trying to solve is more about throughput than latency, then exposing events as a RESTful endpoint that clients must poll allows you to leverage the same infrastructure the web does. This means things like caching, proxies, reverse proxies etc… can all take a large amount of load off a service.

The particular example exposed the events using the AtomPub protocol, having one endpoint for the most recent events and providing other endpoints that represent archived events. The archiving of the events was an implementation detail, but for our purposes let’s say the recent events endpoint only supplied the most recent 10 events.

The great thing about events is that they never change, which makes them excellent candidates for caching. This means that once a client has retrieved an archived event resource, that resource can be cached for a year, and then all other requests for that resource will hit the cache rather than your end point.

Also, again depending on the problem being solved and its ability to tolerate latency, you could cache your “current” event resource for something like 5 or 10 seconds, meaning that if you have a large consumer load, only the first consumer every 10 seconds will hit your service and generate “work”, while all the other requests will hit the cached copy of that resource.
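A toy model of that throughput argument (the endpoints, numbers and class names are all illustrative, not from the workshop): a cache honouring a max-age absorbs the repeat requests, so only one consumer per window actually generates work on the service.

```python
import time

class EventFeedService:
    """The origin service; each call to recent_events() is real 'work'."""
    def __init__(self):
        self.hits = 0

    def recent_events(self):
        self.hits += 1
        return {"events": ["e1", "e2"], "max_age": 10}  # cacheable for 10 seconds

class Cache:
    """Stand-in for an HTTP cache respecting max-age."""
    def __init__(self, service, clock=time.monotonic):
        self.service, self.clock = service, clock
        self.value, self.expires = None, 0.0

    def get(self):
        if self.value is None or self.clock() >= self.expires:
            self.value = self.service.recent_events()
            self.expires = self.clock() + self.value["max_age"]
        return self.value

service = EventFeedService()
cache = Cache(service)
for _ in range(100):       # 100 consumers polling inside one 10-second window
    cache.get()
```

Archived event pages, which never change, would simply carry a far longer max-age and almost never reach the origin at all.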

There was lots of other good stuff, for example about using ETags for cheap checks on whether a resource has changed, but the demonstration of how you can do events using REST and handle a large volume of traffic was very thought provoking.

Hypermedia links / HATEOAS

This was also an excellent discussion point: the use of HATEOAS (Hypermedia As The Engine Of Application State) has changed the way I think about this particular concept.

Originally I viewed HATEOAS as extra work for little benefit, but once you understand what it is actually giving you, it is powerful.

Essentially by providing hypermedia links within resources you are providing a list of valid transitions from the current state of the resource. You are also removing the need for the client to hard code any knowledge of what your API looks like, allowing you the freedom to refactor parts of it without breaking the clients.
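An illustrative shape of such a representation (the resource, fields and paths are my own invention): each state of the resource carries only the transitions that are currently valid, so clients follow links rather than hard-coding paths.

```python
def order_representation(order_id, state):
    """Builds a resource whose _links list the valid transitions from `state`."""
    links = {"self": f"/orders/{order_id}"}
    if state == "placed":
        links["payment"] = f"/orders/{order_id}/payment"  # paying is a valid next step
        links["cancel"] = f"/orders/{order_id}/cancel"    # so is cancelling
    elif state == "paid":
        links["receipt"] = f"/orders/{order_id}/receipt"  # cancelling is no longer offered
    return {"id": order_id, "state": state, "_links": links}

placed = order_representation(5, "placed")
paid = order_representation(5, "paid")
```

A client that only ever follows `_links` never notices if the payment resource later moves to a different path or host.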

All of this presumes that people follow the rules, of course. For example, if someone ignores your hyperlinks and still hard codes their path to “/foobar/wibble/5”, and you decide to refactor the API so that the foobar/wibble resource is provided from a completely different host, then they will break and discussions will need to be had. However, within an organization, for internal consumption, these rules should (in theory 😛) be easier to follow.


The workshop was 3 days excellently spent, and I was also lucky that the other people attending were engaging and interesting, so we had some great conversations over coffee breaks… and some fairly decent table tennis skills were on display as well.

The next session for this workshop is being held in November.

Posted in Article, Uncategorized