Last week I attended a workshop in London on implementing microservices using RESTful communication.
Jim Webber, the presenter, is an articulate character, and that articulacy combines well with his relaxed delivery style and engaging banter. Being a colonial antipodean and probably the loudest person in the room meant I was a prime target for some of that banter, but it was all good-natured and kept everyone focused.
The workshop was held at “CodeNode”, SkillsMatter’s well-designed premises, and covered a good range of topics, the majority from the RESTful point of view and some around microservices in general.
Given that I have already been making a tentative journey down the microservices route over the past six months, some of the more interesting takeaways I got from the workshop were:
Microservices are often misunderstood as an architectural style, but are more correctly thought of as a delivery pattern, since the term broadly encompasses the practices that lead to a responsive delivery model.
A microservice is not n lines of code, or a process that can be rewritten in x sprints… it’s not about size at all, but about context, specifically the business context. A microservice whose boundary is a business context is more likely to remain stable than one carved up along other lines.
Any problem that is solved with a single process (a monolith) is always going to ramp up in complexity when you start breaking it apart into multiple processes; the complexity rises further when those processes run on different hosts, and further still when multiple instances of the same process are involved (fault tolerance / load balancing).
Distributed systems are hard to build and hard to debug.
If your organization does not value or see the need for the following:
- Local governance and decision making within teams
- Automated testing
- Centralized logging
- Automated delivery (CI, deployment)
- Monitoring and alerting
then you should definitely not be entering the world of microservices; otherwise you are likely to reap all of the drawbacks of many small moving parts and none of the benefits.
REST does events
Given that a lot of my early coding days were in the realms of real-time control systems, state machines and I/O-driven systems, I must admit I had this myth in my head as well: that REST may be great for CRUD operations but is probably not that useful for “event”-driven systems.
Jim did a great job of challenging this myth, especially with regards to “real-time” events. One of the first things he said is that if you need sub-millisecond, or even sub-tens-of-millisecond, responses then REST is definitely not for you. But the thing you need to challenge yourself with is: do you really need that?
Most processes within an enterprise system are quite happy knowing about events half a second, 1 second, 5 seconds or even a few minutes later. It really depends on the problem you are trying to solve.
If the problem you are trying to solve is more about throughput than latency, then exposing events as a RESTful endpoint that clients must poll allows you to leverage the same infrastructure the web does. This means things like caches, proxies and reverse proxies can take a large amount of load off a service.
The example in particular exposed the events using the AtomPub protocol, with one endpoint for the most recent events and other endpoints representing archived events. The archiving of the events is an implementation detail, but for our purposes let’s say the recent-events endpoint only supplies the most recent 10 events.
The great thing about events is that they never change, which makes them excellent candidates for caching. Once a client has retrieved an archived event resource, that resource can be cached for a year, and all subsequent requests for it will hit the cache rather than your endpoint.
Also, again depending on the problem being solved and its tolerance for latency, you could cache your “current” event resource for something like 5 or 10 seconds, meaning that under a large consumer load only the first request every 10 seconds hits your service and generates “work”, while all other requests are served the cached copy of that resource.
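To make that concrete, here is a minimal sketch of the two-endpoint feed in Python using Flask. The workshop example used AtomPub; this sketch serves plain JSON for brevity, and the endpoint paths, page size and in-memory event store are all my own invention for illustration:

```python
from flask import Flask, jsonify

app = Flask(__name__)

# In-memory stand-in for an event store; the names, paths and
# page size below are illustrative, not from the workshop.
EVENTS = [{"id": i, "type": "order-placed"} for i in range(1, 101)]
PAGE_SIZE = 10

@app.route("/events/recent")
def recent_events():
    # The "working" end of the feed: cacheable for only a few seconds,
    # so under heavy polling at most one request per 10 seconds
    # actually reaches the service.
    prev_page = (len(EVENTS) - PAGE_SIZE) // PAGE_SIZE
    resp = jsonify(events=EVENTS[-PAGE_SIZE:],
                   links={"prev-archive": f"/events/archive/{prev_page}"})
    resp.headers["Cache-Control"] = "public, max-age=10"
    return resp

@app.route("/events/archive/<int:page>")
def archived_events(page):
    # Archived events never change, so they can be cached for a year;
    # shared caches and reverse proxies absorb the repeat requests.
    start = (page - 1) * PAGE_SIZE
    links = {"prev-archive": f"/events/archive/{page - 1}"} if page > 1 else {}
    resp = jsonify(events=EVENTS[start:start + PAGE_SIZE], links=links)
    resp.headers["Cache-Control"] = "public, max-age=31536000, immutable"
    return resp
```

The only real difference between the two endpoints is the Cache-Control header: the recent feed absorbs polling for ten seconds at a time, while the archive pages can sit in shared caches essentially forever.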
There was lots of other good stuff, for example using ETags for cheap checks on whether a resource has changed, but the demonstration of how you can do events using REST and handle a large volume of traffic was very thought provoking.
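For completeness, here is a minimal sketch of that ETag check, again using Flask; the resource and the hashing scheme are my own illustration rather than the workshop’s example:

```python
import hashlib

from flask import Flask, jsonify, request

app = Flask(__name__)

# A hypothetical mutable resource; in a real service this would
# come from a data store.
PROFILE = {"name": "wibble", "version": 3}

@app.route("/profile")
def get_profile():
    resp = jsonify(PROFILE)
    # Derive the ETag from the response body, so it changes
    # whenever the resource does.
    resp.set_etag(hashlib.sha1(resp.get_data()).hexdigest())
    # make_conditional() compares the client's If-None-Match header
    # against the ETag and returns a bodyless 304 Not Modified on a hit.
    return resp.make_conditional(request)
```

The cheapness comes from the 304 path: the client re-validates with a header rather than re-downloading the whole resource.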
Hypermedia links / HATEOAS
This was also an excellent discussion point: the use of HATEOAS (Hypermedia As The Engine Of Application State) has changed the way I think about this particular concept.
Originally I viewed HATEOAS as extra work for little benefit, but once you understand what it actually gives you, it is powerful.
Essentially, by providing hypermedia links within resources you are providing a list of valid transitions from the current state of the resource. You are also removing the need for the client to hard-code any knowledge of what your API looks like, allowing you the freedom to refactor parts of it without breaking the clients.
All of this assumes that people follow the rules, of course. For example, if someone ignores your hyperlinks and still hard-codes their path to “/foobar/wibble/5”, and you decide to refactor the API so that the foobar/wibble resource is served from a completely different host, then they will break and discussions will need to be had. However, within an organization, for internal consumption, these rules should (in theory 😛) be easier to follow.
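Here is a minimal sketch of the idea, once more in Flask; the order states, link relations and paths are invented for illustration:

```python
from flask import Flask, jsonify

app = Flask(__name__)

# Hypothetical order store; the states, link relations and paths
# below are illustrative only.
ORDERS = {5: {"state": "placed"}}

# The valid transitions out of each state of the resource.
TRANSITIONS = {
    "placed": {"pay": "/orders/{id}/payment",
               "cancel": "/orders/{id}/cancellation"},
    "paid": {"track": "/orders/{id}/tracking"},
    "cancelled": {},
}

@app.route("/orders/<int:order_id>")
def get_order(order_id):
    order = ORDERS[order_id]
    # Clients never hard-code these paths; they discover the valid
    # next steps from the links embedded in the response itself.
    links = {rel: href.format(id=order_id)
             for rel, href in TRANSITIONS[order["state"]].items()}
    return jsonify(state=order["state"], links=links)
```

Because a well-behaved client only follows the relations in `links`, moving the payment resource to a different host is a one-line change on the server rather than a breaking change for every client.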
The workshop was an excellent three days, and I was also lucky that the other people attending were engaging and interesting, so we had some great conversations over the coffee breaks… and some fairly decent table tennis skills were on display as well.
The next session of this workshop is being held in November.