Friday, 2 May 2014

Watch 'Crazy House' on a Friday, after 3 pm, at your work.

If you are lucky, while working in an open space, you may sometimes see people running back and forth, bumping into each other and behaving exactly like headless chickens - especially after 3 pm on a Friday ;) The reason you may see that somewhat peculiar picture is either somebody's birthday (i.e. cookies and doughnuts) or a production issue ;) Check your mailbox, and if there is no email about doughnuts, it has to be an issue on prod!


Usually, managers refrain from communicating a problem to the business before getting some initial knowledge about it. However, do not be misled by the lack of email. That state will not last long. That is actually the time when managers start to create an Asterix and Obelix 'Crazy House' cartoon atmosphere and harass developers, L3 or any other support team. Of course, they think they are doing their best to behave in a calm and professional way, but subconsciously they do what all people do when they are jeopardized - they cluster into harassment groups behind the back of the (un)lucky developer, creating totally unnecessary pressure.

Then there is time for the usual cannonade of questions: where are we, what do we know, what is the risk, what is happening, what are our options, etc. You may have an ironic answer to each of them, but you must keep a poker face.



In fact, calmness is your blessing. The more calm and methodical your approach, the sooner you will be at home sipping whisky. So calm down and follow the steps below:
  1. Ask for a little bit of context and clarification on what they think all that fuss is about.
  2. Do not easily believe what the 'headless chickens' are saying ;) Check it yourself. If they knew what was going on, they would not be harassing you ;) Simply ask for the evidence based on which they raised the alert. It might be a monitoring screenshot, a DB query, logs, whatever. You may see something they have skipped. Also, it is pretty common that people misinterpret what they see and hit the 'panic alarm' straight away. It happens; you have to live with that.
    You may also ask a couple of context-enriching questions like 'who said that' and 'based on what premises do you think so'.
  3. If you really can smell the issue, do an initial investigation and try to estimate:
    - impact
    - risk
    - trends - it would be marvelous if you could apply some trend checking here.
    If possible, check whether the same or a similar situation has happened in the past. Perhaps it is regular behaviour.
  4. If something similar did happen in the past, try to dig out the knowledge about that issue and its remediation. A JIRA ticket, a conversation with other peers on the project - basically anybody or anything might be helpful.
  5. If not, then it is a genuine issue, which has to be investigated.
  6. If you finally come across the solution and it involves a fix on prod, try to mitigate its risk by making the smallest possible change.
    Remember: small change = small impact (according to the stable system definition).
    Assess the minimal, optimal and maximal downtime of the system vs. your SLAs. Communicate it clearly to your business stakeholders, prod services and managers.
    Also, do not let them suggest a solution, unless it has a reasonable basis.
  7. Test your solution on a prod-like environment (including starting and stopping your app) and write down the full sequence of steps. Do not let anybody add/change/remove any of the steps before agreeing them with devs and prod services and testing them properly.
    It is better to have a broken system where you know what is going on, rather than a possibly fixed one where nobody is entirely sure if and how it is going to work.





Saturday, 22 March 2014

Growing applications handled by micro services approach

Let's have a look at the characteristics of the systems we are building. In general, clients spend only about 10% of their money on building an application; the remaining 90% is the cost of ownership. Our systems are getting less maintainable. They grow in size, and from time to time we witness IT failures like 'the biggest IT failure ever seen'. Also, for some reason, there is a tendency to rewrite the entire application every 2 to 5 years.

The above observations and concerns lead us to one of the great bugaboos of software applications over the years: their size. The problem is that we simply tend to build systems that are too big! Although the above statements seem quite obvious, we still make the same mistakes over and over again.


Let's think for a while and ask ourselves a question: what do we do when a given class is doing too many things? We simply go and refactor it, so that it sticks to the SRP principle. Second question: what do we do when our system is doing too much? Basically, we keep adding new features. It is extremely counter-intuitive, but that is what we do. In fact, I am not even surprised, as the most common way of teaching TDD is to apply it together with the SOLID principles at the unit level. Of course that is valuable, but it can be extended to all levels of abstraction. When we have a tangled application, it is our duty to apply the SRP principle at the system level. Such an approach may lead us, among other places, to a micro services architecture.


Micro services characteristics

There is no official definition of micro services, but they do have their own, specific characteristics. Micro services might be treated as the result of applying refactoring, with the SRP principle in mind, at the system level. They enable easier testing and re-use of services. You may also treat them as a fine-grained SOA approach. Moreover, micro services create real options, as you may adopt different (right) tools for different jobs (the Unix analogy). Exploiting the micro services style, we get independent deployment and scaling, easier testing and change (a side effect of small systems!) and a very important option - flexibility.

Architect and his job

Let's step back for a second and describe what an architect does. An architect is the 'chief builder' of a system. His role is to satisfy clients' needs. In addition, he should do 'just enough architecture and design upfront'.

An architect's work might be compared to city planning. All major requirements have to be envisaged in the plan and embraced accordingly. For instance, there has to be a place for industrial and housing zones, possibly with sub-zones for light and heavy industry, too. I am sure people would also appreciate a nice city centre with greenery and leisure bits. On the flip side, it would not be a good idea to build a playground for children in the middle of the industrial zone instead of the housing area. What is more, such a city should have shared utilities like electricity, waterworks and gas pipelines.



All these decisions, and many others, must be made upfront. However, they cannot restrain the zones themselves or impose constraints that are too tight. That is why an architect should do 'just enough design upfront'.

Getting maximum flexibility with upfront design
'Just enough' is in fact a very fuzzy quantifier of 'upfront design', so in real life we need more concrete guidelines. We should start by asking a question: how can we provide just enough upfront design in order to provide flexibility in our architecture?

With the city planning example in mind, we may conclude that such a high-level plan is actually something business people call a capability map. In simple terms, it is a diagram which maps business needs and requirements onto a set of services talking to each other, where - in our city planning example - the services are the zones and the interfaces between services are the utilities.

In order to get a flexible design, two apparently contradictory, but in fact complementary, concepts are used: evolutionary architecture and emergent design.


In a nutshell, evolutionary architecture is about 'all the stuff that is hard to change later' ;) - usually the interfaces and contracts between components/systems. That is why one of the core rules of evolutionary architecture suggests deferring decisions as late as possible (aka keeping your options open).
On the other hand, there is the concept of emergent design. It is the idea of deriving a system design through a continuous process of adding incremental changes.
Here comes another, tricky question: how should we use these techniques? Luckily, the rule itself is fairly simple. We apply emergent design at the level of bounded contexts/services (the zones) and evolutionary architecture at the level of the gaps between bounded contexts, i.e. the interfaces - see the sketch below.
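To make that rule concrete, here is a minimal, hypothetical sketch (all names are invented for illustration): the interface is the 'gap' governed by evolutionary architecture, while everything behind it is free to emerge and change.

import java.math.BigDecimal;

// The published contract between two bounded contexts - the part that is
// hard to change later, so it is kept narrow and intention revealing.
public interface BillingGateway {

 // A coarse-grained operation; no internal billing types leak across the gap.
 String charge(String orderId, BigDecimal amount);
}

// Everything behind the contract may emerge and be refactored freely.
class AcmeBillingGateway implements BillingGateway {

 @Override
 public String charge(String orderId, BigDecimal amount) {
  // Internal design evolves here without breaking consumers of the contract.
  return "receipt-for-" + orderId;
 }
}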

Hexagonal architecture

Also, while designing the general architecture and the communication between components/services, it is very important not to obstruct the natural, creative flow of thought with distorted diagrams.



We should be using the right tools from the very beginning - the hexagonal architecture approach is one of them. It gives us an extremely clear view of what should happen in the core of a given application and how that application should communicate with external systems. Apart from that, hexagonal architecture introduces a more generic way of looking at a system. Contrary to the common approach, we are able to freely express more than two integration points in our model (i.e. more than just UI and DB integrations).
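As a rough, hypothetical sketch of the idea (all names are mine, not a canonical API): the core defines ports, adapters plug into them, and nothing limits us to just a UI and a DB adapter.

// Port: an interface defined and owned by the application core.
interface FareRepository {
 int fareFor(String route);
}

// The core depends only on its ports, never on concrete technology.
public class FareCalculator {

 private final FareRepository fares;

 public FareCalculator(FareRepository fares) {
  this.fares = fares;
 }

 public int quote(String route, int passengers) {
  return fares.fareFor(route) * passengers;
 }
}

// Adapter: one of many possible integration points (DB, REST, queue, file...).
class InMemoryFareRepository implements FareRepository {

 @Override
 public int fareFor(String route) {
  return 42; // hard-coded for the sketch
 }
}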



Incidentally, if you imagined a system described in terms of hexagonal architecture and sticking to the SRP principle, you would instantly get a handy tool for depicting micro services and making SRP fully visible at the system level! Apart from that, such a micro service also conforms to Bovine's conjecture: "objects should be no bigger than one's head", which in our case applies to the whole system.


Integration

Assuming we have micro services in place, we can start introducing well-defined interfaces (contracts) between them. They will give us the power of communicating with the external world, e.g. with other micro services.

There are two very important attributes of decent software:
  • high cohesion
  • loose coupling
High cohesion relates to a single service. Basically, it means a service should have a single reason to change - that is where SRP comes in. Loose coupling, in turn, describes a component's dependencies on other components. These should be as loose as possible.

Over the years in software design we have gone through many integration patterns, and anti-patterns as well. It is worth going through the majority of them and pointing out their characteristics.

Data oriented integration (dependency on DB level):
  • no loose coupling
  • hard to reason about
  • brittle approach
  • difficult to maintain and change

Procedure oriented integration (RPC, CORBA, WSDL-binding, JAXB, Java Serialization):
  • method calls across boundaries
  • sharing serialized object (tight dependency)
  • adding methods with different input parameters doing similar things
  • above leads to 'God classes' syndrome 
  • above leads to very difficult change
  • above leads to 'objects explosion'
  • above leads to SRP and interface segregation principle (ISP) violation

Document oriented integration (JMS, messaging in general):
  • asynchronous (decoupled)
  • allows additive changes without breaking existing clients
  • adding a field to a document does not break consumers
  • renaming a field by adding a new field - no breaks
  • requires middleware

Resource oriented integration (REST, not merely HTTP):
  • language agnostic 
  • expose state
  • noun oriented, not verb oriented like RPC

When we talk about integration, it would be hard to forget about Postel's law: "Be conservative in what you do, be liberal in what you accept from others". It might be translated into a slightly more practical hint: "only bind to what you need, to reduce breaking on service consumption".
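As a hedged illustration of 'only bind to what you need' (the tolerant reader idea), a consumer can deserialize just the fields it actually uses and ignore the rest. The sketch below uses Jackson; the class and field names are invented:

import com.fasterxml.jackson.databind.DeserializationFeature;
import com.fasterxml.jackson.databind.ObjectMapper;

// The consumer's view of the producer's document: only the two fields this
// service needs. Fields added by the producer later are simply ignored.
public class OrderSummary {

 public String orderId;
 public String status;

 public static void main(String[] args) throws Exception {
  String payload = "{\"orderId\":\"42\",\"status\":\"SHIPPED\",\"extra\":\"ignored\"}";

  ObjectMapper mapper = new ObjectMapper()
    // Bind only to what we need; unknown properties are fine.
    .configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);

  OrderSummary summary = mapper.readValue(payload, OrderSummary.class);
  System.out.println(summary.orderId + " is " + summary.status);
 }
}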

As we can see, there are many integration styles. However, the RESTful approach looks like the most promising one. It is a fairly lightweight concept, it provides the desired component properties like loose coupling, and it does not repeat the errors of the past.

Summary
I believe I have outlined the main issues in software design, prodding us towards a better understanding of the nature of the problem and towards finding better solutions. Searching for the right design is a continuous process and it starts from the very beginning. Collecting and defining clients' requirements, the time architects spend on thinking through the use cases and satisfying the client-defined criteria, developers correctly modelling and implementing ideas - none of these can be neglected. All these actions are crucial in the process of building anti-fragile and well-designed software.

Monday, 13 January 2014

Class naming convention and information theory

How many times have you seen classes named with one of these suffixes: Manager, Util, Helper, Common, Impl, etc.? How many times have you seen columns in the same DB table all called *_id? Probably far too many.

Have you ever thought about the informativeness of the above suffixes? Doesn't such a suffix lose its point of passing a piece of information to other developers when it is overused? To be frank, it looks like noise - a message blurred by useless words. The more often those funny words occur in a code base, the less information they carry. That is actually what Shannon said when he was laying the keystones of information theory.

In fact, it is very important to give a proper name to a class, a DB column, etc. On the other hand, it is also very important not to overuse the same words all over again in names. It not only gets less readable, but also adds a kind of boilerplate with no informational value at all. There is of course some value in pissing other developers off, but I guess it is a rather doubtful pleasure ;)
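A contrived before/after sketch (all names invented) of how much information a name can carry:

// Before: 'Manager' could mean anything, so the suffix carries no information.
class DataManager {
 void process(Object data) {
  // what does this actually do?
 }
}

// After: the name alone documents the single responsibility.
class InvoicePdfRenderer {
 byte[] render(String invoiceXml) {
  return new byte[0]; // rendering logic elided in the sketch
 }
}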





Sunday, 3 November 2013

Finally's block case against thread and daemon

Thread vs Daemon
There is a fundamental difference between a JVM thread and a daemon. The easiest way to see it is to read Oracle's documentation on the setDaemon method in the Thread class. Let's see what the JavaDoc says:
" (setDaemon method) Marks this thread as either a daemon thread or a user thread. The Java Virtual Machine exits when the only threads running are all daemon threads."

I guess it is pretty obvious. When there are no 'user threads' - or simply threads - spawned by a given JVM, the JVM exits.

Finally block
However, there is one more gotcha there: the problem of finally block execution. Now, let's analyze Oracle's documentation devoted to the finally block:
"The finally block always executes when the try block exits. This ensures that the finally block is executed even if an unexpected exception occurs. But finally is useful for more than just exception handling — it allows the programmer to avoid having cleanup code accidentally bypassed by a returncontinue, or break. Putting cleanup code in a finally block is always a good practice, even when no exceptions are anticipated."

Just below that general definition, there is an interesting note:
"Note: If the JVM exits while the try or catch code is being executed, then the finally block may not execute. Likewise, if the thread executing the try or catch code is interrupted or killed, the finally block may not execute even though the application as a whole continues."

Nice. I think I am getting slightly confused now. Moreover, Oracle gives another important note at the end of the page, saying that the finally block prevents resource leaks:
"Important: The finally block is a key tool for preventing resource leaks. When closing a file or otherwise recovering resources, place the code in a finally block to ensure that resource is always recovered."

Question
So my question is: how does the finally block behave when a daemon and a thread are interrupted?

Experiment
I prepared two fairly simple examples. There is a SampleProcess class, which is intended to be run in the background, i.e. that is what the newly spawned thread or daemon will do. It will be kicked off by the ProcessLauncher object from either the ThreadRunner or the DaemonRunner class.

package common;

public class SampleProcess {

 public void start() {
  try {
   System.out.println("Sample process is going to sleep.");
   // A long sleep gives the launcher time to interrupt this thread.
   sleep(5000);
  } finally {
   // For a daemon, the JVM may exit before this finally block completes.
   sleep(1500);
   System.out.println("Sample process is running finally block.");
  }
 }

 private void sleep(long timeToSleep) {
  try {
   Thread.sleep(timeToSleep);
  } catch (InterruptedException e1) {
   System.out.println("Sample process is running catch block for time to sleep = " + timeToSleep);
  }
 }
}
package common;

public class ProcessLauncher {

 public void launchAsThread() {
  this.launch("Thread", false);
 }

 public void launchAsDaemon() {
  this.launch("Daemon", true);
 }

 private void launch(String processName, boolean isDaemon) {

  Thread process = new Thread(){

   @Override
   public void run() {
    new SampleProcess().start();
   }
  };
  // The only difference between the two runners is the daemon flag.
  process.setDaemon(isDaemon);
  process.start();
  System.out.println(processName + " runner is going to interrupt sample process.");
  process.interrupt();
 }
}
package daemon;

import common.ProcessLauncher;

public class DaemonRunner {

 public static void main(String[] args) {
  new ProcessLauncher().launchAsDaemon();
 }
}
package thread;

import common.ProcessLauncher;

public class ThreadRunner {

 public static void main(String[] args) {
  new ProcessLauncher().launchAsThread();
 }
}

Results
And here are the results for the thread and the daemon, respectively:
Thread runner is going to interrupt sample process.
Sample process is going to sleep.
Sample process is running catch block for time to sleep = 5000
Sample process is running finally block.

Process finished with exit code 0
Daemon runner is going to interrupt sample process.
Sample process is going to sleep.
Sample process is running catch block for time to sleep = 5000

Process finished with exit code 0

Conclusion
The important conclusion is the fact that daemon threads are somewhat unsafe and misleading. They DO NOT ALWAYS run the finally block, which is supposed to protect us from resource leaks. When the JVM halts, it only waits for 'user threads' to finish. Any remaining daemon threads are simply stopped. Finally blocks are not run - the JVM just stops.



That is why daemon threads should be used with utmost care and prudence. Also, bear in mind that using daemons for tasks related to resource management (I/O) is walking on thin ice.
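If cleanup really has to happen on JVM exit even when daemons are dropped, one option worth knowing (my addition, not part of the experiment above) is a shutdown hook. It runs on orderly JVM shutdown, though not on Runtime.halt() or a forced kill:

public class ShutdownHookRunner {

 public static void main(String[] args) {
  // Registered hooks run when the JVM shuts down normally, even if
  // daemon threads are abandoned mid-flight.
  Runtime.getRuntime().addShutdownHook(new Thread() {

   @Override
   public void run() {
    System.out.println("Releasing resources in a shutdown hook.");
   }
  });

  System.out.println("Main thread finished.");
 }
}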

You can find the entire example on my GitHub account.

Saturday, 7 September 2013

Real options come into being in real life

About two weeks ago, I came back from my holidays, where my brother, a friend of ours and I spent time wandering in the mountains. We love hiking and we have many friends who share our passion. With such a hobby, it is almost obvious that there is a constant, informal competition between all of us. When we meet with our friends, we often spend time discussing different tracks, unexplored routes and places, transition times between places, etc.

We know that the three of us make quite a strong and reliable team, where everybody can count on each other in a difficult situation. We have proven that plenty of times and we know what we can expect from each other. That is why we decided to break our friends' records. We aspired to establish three records: the longest track, the longest time spent walking in the mountains, and the biggest total elevation crossed during an entire trip.
We chose a track and prepared our gear in the evening. We got up at midnight and went out. At 6 am, we were sitting on the highest peak of our route, delighting in the sunrise at the crack of dawn. We took a vast number of photos, made a GoPro movie and went on, chasing our records. Nothing foreshadowed our breakdown.

We started to feel the lack of physical power and motivation at about the 16th hour of our wandering. All of us knew that feeling. We were trying not to talk about it, as it might have depressed the others even more quickly. However, the truth was that we had eaten all our food, snacks, sweets, fruit and recovery gels. We had also drunk all the water, isotonic drinks and tea we had with us. Each of us was motivating the others to walk, while being on his last legs at the same time. We were suffering badly from the lack of mental and physical power.

The interesting thing is that when you are fed up with everything and you hate your decision about breaking records, when you miss your bed and a glass of water, you can transform that internal anger and find some more mental strength. It helped us keep pushing forward.

With one hour left to our stop, a shelter, we met a very tall guy, standing at a crossing of tracks and checking something on a map. He asked us how long it would take to go down following the track he was pointing to.
We just looked at each other, silently agreeing that the tall guy was completely lost in the mountains. However, he was talking so fast, and was so convinced that he was in a totally different location, that we gave up. We did not have enough energy to argue with him. We waited until he finished and told him: "You are in the wrong valley, mate. You have about four hours to get to the place you would like to be and an additional one and a half to come down". We tried to persuade him that the only choice he had was to follow the route we would be going down. The tall guy told us that he could do that path in about three hours, as he was training for ultra marathon running. We said okay, you might be right, but bear in mind that the sun will set in thirty minutes at most, you have no food, you have water in only a third of your camel bag, and that is not all. The most dangerous thing was a bear, which lived in the neighbourhood and approached the tracks from time to time. The tall guy admitted it was pointless to go to the location he wanted to reach and, bearing in mind the sunset, the lack of food and an unwanted handshake with the bear, he decided to take our advice. He decided to run down along the route we pointed out to him.

Before he left, we gave him the option of coming down with us through the forest, as we knew that track very well and we had head lights. Also, we said that we had to stay for twenty minutes in the hostel to buy some water and at least one Snickers per person.

Initially, he was reluctant and decided to run down himself. However, when we reached the hostel, he was waiting for us, having changed his mind.
He found our option useful and decided to exercise it. All of us went down chatting and laughing. At the end, he offered us a lift home, which we took as a miracle, as we were extremely exhausted.

It was the other day that I realized we had offered the tall guy a real option. It had all of an option's properties:

  • value
  • expiry date
  • event transforming an option to a commitment

An option's value is not that easy to estimate. But when you spare a while thinking about it, you will probably come to the conclusion that you do not need to know its exact value. Humans are good at comparing and prioritizing, but they tend not to give correct absolute values. Thus, comparing is a very good technique for evaluating options. It was obvious that the only other choice the tall guy had was to spend a night in the mountains, hugging the bear (if lucky enough). So a basic comparison of two options gave him a fairly good understanding of his priorities. He chose to go down safely.

The expiry date was actually determined by the last point he knew for sure we would be heading to - the hostel.

The transformation moment was the moment when the option stopped being something that might happen in the future and became a commitment to follow three exhausted guys. He knew perfectly well why he exercised that option. It was better for the tall guy to go with us than to spend a night in the mountains.

The conclusion is simple: real options are everywhere. They surround us. The only thing we need to do is to start thinking about them and seeing them.
The most important lesson learnt for me is the fact that even in a very difficult situation, in unfavourable conditions and on one's last legs, one can still come up with a real option which might be exercised.

Sunday, 11 August 2013

Wrestling with Singleton (Logger) - round 3

Singletons have many faces. One of them is the logging mechanism. Before we have a deeper look into the entire case, let's go straight to a simple example, to set some context.
package com.korczak.oskar.refactoring.singleton.logger.before;

import com.korczak.oskar.refactoring.singleton.logger.Engine;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.testng.annotations.BeforeMethod;
import org.testng.annotations.Test;

import static org.mockito.Mockito.verify;
import static org.mockito.MockitoAnnotations.initMocks;

// A TestNG test: mocks are initialized manually in setUp().
public class CarTest {

 @Mock private Engine engine;
 @InjectMocks private Car car;

 @BeforeMethod
 public void setUp() {
  initMocks(this);
 }

 @Test
 public void shouldRunEngine() {
  car.start();

  verify(engine).run();
 }

 @Test
 public void shouldStopEngine() {
  car.stop();

  verify(engine).stop();
 }
}
package com.korczak.oskar.refactoring.singleton.logger.before;

import com.korczak.oskar.refactoring.singleton.logger.Engine;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class Car {

 private Logger carLogger = LoggerFactory.getLogger(Car.class);

 private Engine engine;

 public Car(Engine engine){
  this.engine = engine;
 }

 public void start(){
  engine.run();
  carLogger.info("Engine is running.");
 }

 public void stop(){
  engine.stop();
  carLogger.info("Engine is stopped.");
 }
}
We can also assume that the above snippet is actually part of a, so called, fairly decent application, which is logging everything everywhere, so to speak.

What can you tell about such an application?
Is there anything in particular that stems from our imaginary application?
It may look like a somewhat silly task, but unfortunately it has very deep implications in terms of design, solving production issues, sometimes performance, and overall usage and approach. Quite a few things are tangled up with logging as such.

First of all, let's decompose the above snippet, paying attention to some basic facts:
  • Static logger is a singleton
  • Breach of unit test definition i.e. system under test is writing to file/console
  • Lack of readability to some extent
  • A bit of noise in business logic
Now, let's discuss the above points a bit further and infer some more useful information.




Logger is a Singleton
The logger is created via a static call and a Singleton is returned. However, it is not that bad, as we are only writing to the Singleton and never reading from it. It means we are not exploiting the global state introduced into our JVM by the Singleton. On the other hand, it is worth bearing in mind that a static call means no seams. There are two direct implications of that:
  • untested production code, as there are no seams
  • messages will be logged anyway while running the tests

Unit test definition breach
That is how we come to the second fact, related to unit tests.
Michael Feathers defines unit tests as:

A test is not a unit test if:

  • It talks to the database
  • It communicates across the network
  • It touches the file system
  • It can't run at the same time as any of your other unit tests
  • You have to do special things to your environment (such as editing config files) to run it.

I suppose we can agree that the above test is an integration test, rather than a unit sort of thing. Feathers's definition sticks to what people understand by the term unit test.
Why don't we finally admit that so-called unit tests which log information to a file/console/whatever are in fact decent integration tests. Full stop.

What is the business related reason?
Going further, one can focus on the business reason in this whole tangled situation and answer a question: is there any business related reason to log particular information in a particular place?
The most common answer would be that by logging we mean sending a message from the application to whoever is browsing the logs. Usually, there are two main groups of recipients: the L1/L2 and L3/testers/developers teams.
The L1/L2 guys would like to see only the nasty things happening in a given application, e.g. errors, alerts and perhaps warnings. On the other hand, L3/testers/developers would like to see a bit more, i.e. states generated by the application during its processing, etc. It is worth reiterating that none of them is a business person.

When we dig a bit deeper and ask what the underlying principle of logging is, we will probably come to the two points below:
  • log the state of a processed unit
  • say 'I was here' or 'I have chosen that path in your application logic'
These things tell us where we are and are often called diagnostic reasons/purposes. They share a common feature: none of them is driven by any business requirement. Let's be honest, business people do not read logs. They can only read spreadsheets, but you probably knew that already, I suppose.

Readability and noise
Unfortunately, logging somewhat negatively affects the overall readability of business logic. It adds additional noise and clutters the codebase when it is overused.

How might it be done?
The easiest way is to imagine an event driven system, where an event comes in, is processed and some final response is sent. As was pointed out, logging would be used for diagnostic purposes. If so, I am questioning the logger usage almost entirely. Why don't we change our application's design so that it can monitor, diagnose and track the state of all incoming events itself?
I can already hear the answers: 'it is too much effort', 'we don't need it' or, even better, 'our project is different'. ;)
Partially, it might be true and I agree that in some trivial cases it is pointless to build error-aware applications - say, for your younger sister's diary application - although it is still worth at least considering in larger projects.
Also, don't get me wrong: if there is a clear business drive for having something logged, just do it. The only thing I am asking you to remember is the fact that logging is a sort of contract between the application and whoever reads the logs. If there is a business need for having some log entry, simply test it.

When you have such a self aware system in place and an issue occurs, you may ask your application for some useful 'diagnostic information' (including states) on demand, via, say, a REST API or whatever.
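A minimal sketch of such an 'events state memory' (all names are mine; a real one would bound its size and sit behind, say, a REST endpoint):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Instead of logging 'I was here', the application records the last known
// state of each processed event and serves it on demand.
public class EventStateMemory {

 private final Map<String, String> lastStateByEventId = new ConcurrentHashMap<String, String>();

 public void record(String eventId, String state) {
  lastStateByEventId.put(eventId, state);
 }

 // What a diagnostic endpoint like GET /diagnostics/events/{id} would return.
 public String stateOf(String eventId) {
  String state = lastStateByEventId.get(eventId);
  return state != null ? state : "UNKNOWN";
 }
}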

Implementing that sort of approach is in fact a win-win situation for all project stakeholders. Not only are L3 and developers able to deal with errors in processing, but testers and the L1 and L2 teams are able to do so as well. Even BAs start analyzing and solving some burning questions themselves.
In the long run, it is less work for busy developers, and everybody can leverage and use the diagnostic information. What is even more important, the knowledge and the responsibility are shared across the entire team and stakeholders. It only requires two things:
1. Stop logging everything in the log files
2. Add a sort of issues memory or events state memory to your system
The first step here is to start testing logging in your applications. Try to make the logger a collaborator of your class, not a static call, and use it only where it is really necessary.
Testing logging enforces thinking about every single message which might potentially be logged. The additional effort spent on writing a test for the logger should actually guide an aware developer through all the above concerns, helping to understand the business reason behind a particular piece of code.
package com.korczak.oskar.refactoring.singleton.logger.after;

import com.korczak.oskar.refactoring.singleton.logger.Engine;
import org.mockito.ArgumentCaptor;
import org.mockito.Captor;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.slf4j.Logger;
import org.testng.annotations.BeforeMethod;
import org.testng.annotations.Test;

import static org.mockito.Mockito.verify;
import static org.mockito.MockitoAnnotations.initMocks;
import static org.testng.AssertJUnit.assertEquals;

public class DecentCarTest {

 @Mock private Engine engine;
 @Mock private Logger decentCarLogger;
 @InjectMocks private DecentCar decentCar;

 // initMocks(this) also initializes the @Captor field.
 @Captor private ArgumentCaptor<String> argument;

 @BeforeMethod
 public void initializeMocks() {
  initMocks(this);
 }

 @Test
 public void shouldRunEngineAndLogMessage() {
  decentCar.start();

  verify(engine).run();
  verify(decentCarLogger).info(argument.capture());
  assertEquals("Engine is running.", argument.getValue());
 }

 @Test
 public void shouldStopEngineAndLogMessage() {
  decentCar.stop();

  verify(engine).stop();
  verify(decentCarLogger).info(argument.capture());
  assertEquals("Engine is stopped.", argument.getValue());
 }
}

package com.korczak.oskar.refactoring.singleton.logger.after;

import com.korczak.oskar.refactoring.singleton.logger.Engine;
import org.slf4j.Logger;

public class DecentCar {

 private Engine engine;
 private Logger decentCarLogger;

 public DecentCar(Engine engine, Logger decentCarLogger){
  this.engine = engine;
  this.decentCarLogger = decentCarLogger;
 }

 public void start(){
  engine.run();
  decentCarLogger.info("Engine is running.");
 }

 public void stop(){
  engine.stop();
  decentCarLogger.info("Engine is stopped.");
 }
}

You can find the entire example on my GitHub account.

Round three is finished.

Friday, 21 June 2013

Wrestling with Singleton - round 2

Here comes the time for round 2 of our fight with the Singleton. This time, we will try to get rid of the unwanted, hard-wired dependency by applying Michael Feathers's trick.



During the first round, we used a simple wrapping mechanism, enabling the developer to disjoin the Singleton from the class under test. Although it is not complicated, it does amend production code in a few places. What Michael Feathers suggests in his book "Working Effectively with Legacy Code" is a slightly different approach. It also changes production code, but only in the Singleton class itself. One may say it is a less invasive way of dealing with that sort of problem.
Anyway, let's start in the same place we started last time:

public class PropertiesCache {

 private static PropertiesCache instance = new PropertiesCache();

 private PropertiesCache() {

 }

 public static PropertiesCache getInstance() {
  return instance;
 }

 public boolean overrideWith(File fileProperties) {
  return someWeirdComplicatedFilePropertiesLogic(fileProperties);
 }

 private boolean someWeirdComplicatedFilePropertiesLogic(File fileProperties) {
  if (fileProperties.length() % 2 == 0) {
   return true;
  }
  return false;
 }
}

public class SamplePropertiesCacheUsage {

 public boolean overrideExistingCachePropertiesWith(File fileProperties){
  PropertiesCache cachedProperties = PropertiesCache.getInstance();
  return cachedProperties.overrideWith(fileProperties);
 }
}
I added a static setter to the PropertiesCache class using the IntelliJ action Code - Generate - Setter. The second move is a manual change of the constructor's modifier: from private to protected.
public class PropertiesCache {

 private static PropertiesCache instance = new PropertiesCache();

 protected PropertiesCache() {

 }

 public static PropertiesCache getInstance() {
  return instance;
 }

 public static void setInstance(PropertiesCache instance) {
  PropertiesCache.instance = instance;
 }

 public boolean overrideWith(File fileProperties) {
  return someWeirdComplicatedFilePropertiesLogic(fileProperties);
 }

 private boolean someWeirdComplicatedFilePropertiesLogic(File fileProperties) {
  if (fileProperties.length() % 2 == 0) {
   return true;
  }
  return false;
 }
}
    
Now, I created two classes inheriting from the Singleton. They stub the overrideWith method. As you can see, there is also a simple but valuable test created.
public class StubbedForTruePropertiesCache extends PropertiesCache {

 @Override
 public boolean overrideWith(File fileProperties) {
  return true;
 }
}

public class StubbedForFalsePropertiesCache extends PropertiesCache {

 @Override
 public boolean overrideWith(File fileProperties) {
  return false;
 }
}

public class SamplePropertiesCacheUsageTest {

 private File dummyFileProperties;
 private SamplePropertiesCacheUsage propertiesCache;

 @BeforeMethod
 public void setUp() {
  dummyFileProperties = new File("");
  propertiesCache = new SamplePropertiesCacheUsage();
 }

 @Test
 public void shouldReturnTrueDueToWeirdInternalSingletonLogic() {
  PropertiesCache.setInstance(new StubbedForTruePropertiesCache());

  boolean result = propertiesCache.overrideExistingCachePropertiesWith(dummyFileProperties);

  assertThat(result, is(equalTo(TRUE)));
 }

 @Test
 public void shouldReturnFalseDueToWeirdInternalSingletonLogic() {
  PropertiesCache.setInstance(new StubbedForFalsePropertiesCache());

  boolean result = propertiesCache.overrideExistingCachePropertiesWith(dummyFileProperties);

  assertThat(result, is(equalTo(FALSE)));
 }
}
That's all. We have relaxed the coupling between the Singleton and the system under test. We have tests. The design is also improved a bit. We have reached our goal.
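One caveat worth adding (my addition, not part of the original exercise): the static setter makes the stub leak into every test that runs afterwards, so it is safer to restore the original instance around each test, e.g. with TestNG:

// Hypothetical additions to SamplePropertiesCacheUsageTest: remember and
// restore the real Singleton, so the stub does not leak into other tests.
private PropertiesCache originalInstance;

@BeforeMethod
public void rememberOriginalInstance() {
 originalInstance = PropertiesCache.getInstance();
}

@AfterMethod
public void restoreOriginalInstance() {
 PropertiesCache.setInstance(originalInstance);
}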

As previously, you can find the entire refactoring exercise on my GitHub account.

Round two is finished.

Sunday, 26 May 2013

Wrestling with Singleton - Round 1

How many times have you dealt with Singletons in your codebase? To be frank, it has always been a problem to properly understand the nature of the Singleton, its usage and refactoring methods. The Singleton as such is not an embodied evil. It is rather the usage that developers think they "design".

In order to fully understand the problem, let's have a quick look at the Gang of Four (GoF) Singleton definition:
"Ensure a class only has one instance, and provide a global point of access to it."
The big hoo-ha focuses on the second part of the above definition: "... provide a global point of access to it (to a single object)". In GoF's implementation, they provided a global point of access by taking advantage of a static getInstance() method. While it perfectly well fulfils the assumptions and the leading concept of the Singleton definition, it also introduces "an extra, unwanted feature", i.e. a global state visible to every class.

Well, some pesky guy with a devil-may-care attitude may say: so what! Apparently nothing, however I can bet that such a smart alec has never written a single line of unit test, especially in legacy code. The thing is, people invented the Singleton pattern to maintain a single instance of an object across an entire set of object graphs, aka an application. Providing a global access point in a correct way is slightly more tricky to materialize than just using a static getInstance() method. However, it is still feasible to do it the right way.

Have you ever wondered why nobody finds fault with Spring or with any other Dependency Injection (DI) framework? People do not moan about DI libraries, even though there is a way to make an object a singleton. Now, that sounds odd! In fact, the answer is hidden in the lowercase singleton. Spring is able to create, maintain and inject a singleton object without exposing it as global state. It is worth noticing that Spring deals with the Singleton problem correctly. It not only meets GoF's definition without adding any unnecessary burden, i.e. no static getInstance() method, but also provides the desired inversion of control. It is the XML configuration which enables us to mark a bean as a singleton, and that is it. If you want to use it, you have to inject it, like any other bean, via a constructor, setter or field. A DI framework, by its construction, promotes testability by enforcing the concept of a seam.

If you are a bit of a nosy person, you should ask this sort of question: is this the only correct way I can use singletons? Obviously, the answer is: no, it is not. The reason why a DI framework makes better use of singletons is the fact that it combines a single instance of some class with dependency injection.
If you do not want to use Spring for some reason, or it is simply overkill for your solution, then there are at least two ways you can choose. You can use either the 'wrap the Singleton' or the 'inherit from Singleton' approach. In this article, I will focus on the former. In a nutshell, it is a poor man's dependency injection going along with the Singleton. Incidentally, it is quite a powerful technique when it comes to legacy code refactoring. Let's have a look at a model GoF implementation of the Singleton pattern and its usage in sample legacy code:


     
public class PropertiesCache {

	private static PropertiesCache instance = new PropertiesCache();

	private PropertiesCache() {

	}

	public static PropertiesCache getInstance() {
		return instance;
	}

	public void overrideWith(File fileProperties) {
		// some logic comes here
	}
}

public class SamplePropertiesCacheUsage {

	public void overrideExistingCachePropertiesWith(File fileProperties){
		PropertiesCache cachedProperties = PropertiesCache.getInstance();
		cachedProperties.overrideWith(fileProperties);
	}
}
    

It is a very simple and extremely common scenario, which shows the tight coupling between the SamplePropertiesCacheUsage and Singleton classes. Bear in mind that the Singleton might be quite substantial in size, as it is a properties cache, after all. Moreover, some cunning developer before you might have armed the Singleton with quite a few "handy methods" for loading properties from a file, merging them from different sources, applying precedence policies on top of that, etc. Generally speaking, nothing pleasant, and it is you who has to wrestle with that code now.



Let's assume that our goal is to get rid of that tight dependency on the Singleton. A second, more implicit assumption is that our IDE will only slightly change the Singleton call in our production code.

Okay, let's get started. The first thing we should do is to look for a test for SamplePropertiesCacheUsage. Wait a second - we are about to start digging in legacy code, so do not even bother looking for any test. It might have been quite difficult for a developer to write such a test, anyway. As a matter of fact, we quickly find that we have to refactor using the IDE's built-in methods.

In my IntelliJ IDE it will be a few-step process. First of all, let's extract a private method called getInstance(), encapsulating the Singleton's static call. This method is not static any more.
     
public class SamplePropertiesCacheUsage {

	public void overrideExistingCachePropertiesWith(File fileProperties){
		PropertiesCache cachedProperties = getInstance();
		cachedProperties.overrideWith(fileProperties);
	}

	private PropertiesCache getInstance() {
		return PropertiesCache.getInstance();
	}
}

Our next step will be to extract a PropertiesCacheWrapper class with a public getInstance() method from the SamplePropertiesCacheUsage Singleton client.
     
public class SamplePropertiesCacheUsage {

	private PropertiesCacheWrapper propertiesCacheWrapper = new PropertiesCacheWrapper();

	public void overrideExistingCachePropertiesWith(File fileProperties){
		PropertiesCache cachedProperties = propertiesCacheWrapper.getInstance();
		cachedProperties.overrideWith(fileProperties);
	}

	private PropertiesCache getInstance() {
		return propertiesCacheWrapper.getInstance();
	}
}

public class PropertiesCacheWrapper {

	public PropertiesCacheWrapper() {
	}

	public PropertiesCache getInstance() {
		return PropertiesCache.getInstance();
	}
}

Now, it is time to initialize the propertiesCacheWrapper field in the constructor. You may also need to manually delete the inlined initialization of the propertiesCacheWrapper field.
This is actually the moment when the injection of PropertiesCacheWrapper happens.


public class SamplePropertiesCacheUsage {

	private PropertiesCacheWrapper propertiesCacheWrapper;

	public SamplePropertiesCacheUsage(PropertiesCacheWrapper aPropertiesCacheWrapper) {
		propertiesCacheWrapper = aPropertiesCacheWrapper;
	}

	public void overrideExistingCachePropertiesWith(File fileProperties){
		PropertiesCache cachedProperties = propertiesCacheWrapper.getInstance();
		cachedProperties.overrideWith(fileProperties);
	}

	private PropertiesCache getInstance() {
		return propertiesCacheWrapper.getInstance();
	}
}
    

As a last step, we may delete the getInstance() method from SamplePropertiesCacheUsage, as it is no longer used.
public class SamplePropertiesCacheUsage {

	private PropertiesCacheWrapper propertiesCacheWrapper;

	public SamplePropertiesCacheUsage(PropertiesCacheWrapper aPropertiesCacheWrapper) {
		propertiesCacheWrapper = aPropertiesCacheWrapper;
	}

	public void overrideExistingCachePropertiesWith(File fileProperties){
		PropertiesCache cachedProperties = propertiesCacheWrapper.getInstance();
		cachedProperties.overrideWith(fileProperties);
	}
}
    


Let's have a look at what happened. Now we have the Singleton invocation wrapped in a separate class. What is more, the SamplePropertiesCacheUsage class has a constructor-type seam, which is used to inject PropertiesCacheWrapper. The code is now at least testable, so we are able to write a test for the SamplePropertiesCacheUsage class.

// A TestNG test: mocks are initialized manually in initializeMocks().
public class SamplePropertiesCacheUsageTest {

	@Mock private PropertiesCache cachedProperties;
	@Mock private PropertiesCacheWrapper propertiesCacheWrapper;
	@Mock private File file;

	@BeforeMethod
	public void initializeMocks() {
		initMocks(this);
		given(propertiesCacheWrapper.getInstance()).willReturn(cachedProperties);
	}

	@Test
	public void shouldOverrideExistingPropertiesWithFileProperties() {
		SamplePropertiesCacheUsage samplePropertiesCacheUsage = new SamplePropertiesCacheUsage(propertiesCacheWrapper);

		samplePropertiesCacheUsage.overrideExistingCachePropertiesWith(file);

		verify(cachedProperties).overrideWith(file);
	}
}

Everything looks good now. We have a unit test describing the SamplePropertiesCacheUsage class, which was previously using a static call to the Singleton class. We also got rid of the tight dependency on the Singleton.

You can find the entire refactoring exercise on my GitHub account.

Round one is finished.