Showing posts with label Code Quality.

Monday, 13 January 2014

Class naming convention and information theory

How many times have you seen classes named with one of these suffixes: Manager, Util, Helper, Common, Impl, etc.? How many times have you seen columns in the same DB table all called *_id? Probably far too many.

Have you ever thought about how informative those suffixes really are? Doesn't a word lose its point of passing a piece of information to other developers when it is overused? Frankly, it becomes noise: a message blurred by useless words. The more often those funny words occur in a codebase, the less information they carry. That is, in essence, what Shannon said when he was building the keystones of information theory.

In fact, it is very important to give a proper name to a class, a DB column, and so on. On the other hand, it is just as important not to overuse the same words in names over and over again. The result is not only less readable, it is also a kind of boilerplate with no informational value at all. There is, of course, some value in annoying other developers, but I guess that is a rather doubtful pleasure ;)
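Shannon's point can even be made concrete. The self-information of a token is -log2(p): the more often a suffix appears, the fewer bits it carries. Below is a tiny illustration; the suffix counts are invented for the sake of the example:

```java
import java.util.Map;

public class SuffixInformation {

    // Self-information in bits: -log2(p). Rare tokens carry more information.
    static double selfInformationBits(long occurrences, long totalNames) {
        double p = (double) occurrences / totalNames;
        return -Math.log(p) / Math.log(2);
    }

    public static void main(String[] args) {
        long totalNames = 1000;
        // Invented counts from an imaginary 1000-class codebase.
        Map<String, Long> suffixCounts = Map.of(
                "Manager", 400L,
                "Helper", 150L,
                "Ledger", 2L);
        suffixCounts.forEach((suffix, count) -> System.out.printf(
                "%s appears %d times -> %.2f bits%n",
                suffix, count, selfInformationBits(count, totalNames)));
    }
}
```

A suffix slapped on 400 of 1000 classes carries barely more than one bit, while a rare, specific name carries several.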





Sunday, 11 August 2013

Wrestling with Singleton (Logger) - round 3

Singletons have many faces. One of them is the logging mechanism. Before we take a deeper look at the entire case, let's go straight to a simple example to set some context.
package com.korczak.oskar.refactoring.singleton.logger.before;

import com.korczak.oskar.refactoring.singleton.logger.Engine;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.testng.annotations.BeforeMethod;
import org.testng.annotations.Test;

import static org.mockito.Mockito.verify;
import static org.mockito.MockitoAnnotations.initMocks;

public class CarTest {

 @Mock private Engine engine;
 @InjectMocks private Car car;

 @BeforeMethod
 public void setUp() {
  initMocks(this);
 }

 @Test
 public void shouldRunEngine() {
  car.start();

  verify(engine).run();
 }

 @Test
 public void shouldStopEngine() {
  car.stop();

  verify(engine).stop();
 }
}
package com.korczak.oskar.refactoring.singleton.logger.before;

import com.korczak.oskar.refactoring.singleton.logger.Engine;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class Car {

 private Logger carLogger = LoggerFactory.getLogger(Car.class);

 private Engine engine;

 public Car(Engine engine){
  this.engine = engine;
 }

 public void start(){
  engine.run();
  carLogger.info("Engine is running.");
 }

 public void stop(){
  engine.stop();
  carLogger.info("Engine is stopped.");
 }
}
We can also assume that the above snippet is part of a fairly decent application which, so to speak, logs everything everywhere.

What can you tell about such an application?
Is there anything in particular that stems from our imaginary application?
It may look like a somewhat silly exercise, but it has surprisingly deep implications for design, solving production issues, sometimes performance, and the overall usage and approach. Quite a few things are tangled up with logging as such.

First of all, let's decompose the above snippet and note some basic facts:
  • The static logger is a Singleton
  • A breach of the unit test definition, i.e. the system under test writes to a file/console
  • Reduced readability, to some extent
  • A bit of noise in the business logic
Now, let's discuss the above points a bit further and infer some more useful information.




Logger is a Singleton
The logger is created via a static call and a Singleton is returned. However, it is not that bad, as we only write to the Singleton and never read from it. It means we are not exploiting the global state introduced into our JVM by the Singleton. On the other hand, it is worth bearing in mind that a static call means no seams. There are two direct implications of that:
  • untested production code, as there are no seams
  • messages will be logged anyway while running the tests

Unit test definition breach
That is how we arrive at the second fact, related to unit tests.
Michael Feathers defines unit tests as: 

A test is not a unit test if:

  • It talks to the database
  • It communicates across the network
  • It touches the file system
  • It can't run at the same time as any of your other unit tests
  • You have to do special things to your environment (such as editing config files) to run it.

    I suppose we can agree that the above test is an integration test rather than a unit test. Feathers's definition sticks to what people actually understand by the term unit test.
    Why don't we finally admit that so-called unit tests which log information to a file or console are in fact decent integration tests? Full stop.

    What is the business-related reason?
    Going further, one can focus on the business reason in this whole tangled situation and answer a question: is there any business-related reason to log particular information in a particular place?
    The most common answer would be that by logging we mean sending a message from the application to whoever is browsing the logs. Usually there are two main groups of recipients: the L1/L2 and L3/testers/developers teams.
    L1/L2 folks want to see only the nasty things happening in a given application, e.g. errors, alerts and perhaps warnings. On the other hand, L3/testers/developers would like to see a bit more, i.e. the states generated by the application during its processing. It is worth stressing that none of them is a business person.

    When we dig a bit deeper and ask what the underlying principle of logging is, we will probably come to the two points below:
    • log the state of a processed unit
    • say 'I was here' or 'I have chosen this path in your application logic'
    These things tell us where we are and are often called diagnostic reasons/purposes. What they share is that neither of them is driven by any business requirement. Let's be honest, business people do not read logs. They can only read spreadsheets, but you probably knew that already.

    Readability and noise
    Unfortunately, logging somewhat hurts the overall readability of the business logic. When overused, it adds noise and clutters the codebase.

    How might it be done?
    The easiest way is to imagine an event-driven system, where an event comes in, is processed, and some final response is sent. As pointed out, logging would be used for diagnostic purposes. If so, I am questioning the logger usage almost entirely. Why don't we change our application's design so that it can monitor, diagnose and track the state of all incoming events itself?
    I can already hear the answers: 'it is too much effort', 'we don't need it' or, better still, 'our project is different'. ;)
    Partially, that might be true, and I agree that in some trivial cases it is pointless to build an error-aware application for your younger sister's diary, although it is still worth at least considering in larger projects.
    Also, don't get me wrong: if there is a clear business drive for having something logged, just do it. The only thing I am asking you to remember is that logging is a sort of contract between the application and whoever reads the logs. If there is a business need for having some log entry, simply test it.

    When you have a self-aware system in place and an issue occurs, you can ask your application for useful 'diagnostic information' (including states) on demand, via, say, a REST API or whatever.

    Implementing that sort of approach is in fact a win-win situation for all project stakeholders. Not only are L3 and developers able to deal with errors in processing, but testers and the L1/L2 teams are too. Even BAs can start analyzing and solving some burning questions themselves.
    In the long run, it is less work for busy developers, and everybody can leverage the diagnostic information. What is even more important, the knowledge and the responsibility are shared across the entire team and stakeholders. It only requires two things:
    1. Stop logging everything to the log files
    2. Add a sort of issue memory or event-state memory to your system
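    The 'event-state memory' from point 2 could be as simple as a bounded in-memory buffer of recent diagnostic events, served on demand instead of being written to a file. A minimal, hypothetical sketch (class and method names are mine, not from the example project):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

// Hypothetical sketch: keeps the last N diagnostic events in memory so they
// can be served on demand (e.g. via a REST endpoint) instead of logged to file.
public class DiagnosticMemory {

    private final int capacity;
    private final Deque<String> events = new ArrayDeque<>();

    public DiagnosticMemory(int capacity) {
        this.capacity = capacity;
    }

    public synchronized void record(String event) {
        if (events.size() == capacity) {
            events.removeFirst(); // drop the oldest entry
        }
        events.addLast(event);
    }

    public synchronized List<String> recentEvents() {
        return List.copyOf(events);
    }
}
```

    A real system would store richer event objects and expose them through whatever diagnostic endpoint fits the architecture; the bounded buffer is the essential idea.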
    The first step here is to start testing the logging in your applications. Try to make the logger a collaborator of your class rather than a static call, and use it only where it is really necessary.
    Testing the logging forces you to think about every single message which might potentially be logged. The additional effort spent writing a test for the logger should guide an aware developer through all the above concerns, helping them understand the business reason behind a particular piece of code.
    package com.korczak.oskar.refactoring.singleton.logger.after;
    
    import com.korczak.oskar.refactoring.singleton.logger.Engine;
    import org.mockito.ArgumentCaptor;
    import org.mockito.Captor;
    import org.mockito.InjectMocks;
    import org.mockito.Mock;
    import org.slf4j.Logger;
    import org.testng.annotations.BeforeMethod;
    import org.testng.annotations.Test;
    
    import static org.mockito.Mockito.verify;
    import static org.mockito.MockitoAnnotations.initMocks;
    import static org.testng.AssertJUnit.assertEquals;
    
    public class DecentCarTest {
    
     @Mock private Engine engine;
     @Mock private Logger decentCarLogger;
     @InjectMocks private DecentCar decentCar;
    
     @Captor private ArgumentCaptor<String> argument;
    
     @BeforeMethod
     public void initializeMocks() {
      initMocks(this);
     }
    
     @Test
     public void shouldRunEngineAndLogMessage() {
      decentCar.start();
    
      verify(engine).run();
      verify(decentCarLogger).info(argument.capture());
      assertEquals("Engine is running.", argument.getValue());
     }
    
     @Test
     public void shouldStopEngineAndLogMessage() {
      decentCar.stop();
    
      verify(engine).stop();
      verify(decentCarLogger).info(argument.capture());
      assertEquals("Engine is stopped.", argument.getValue());
     }
    }
    
    package com.korczak.oskar.refactoring.singleton.logger.after;
    
    import com.korczak.oskar.refactoring.singleton.logger.Engine;
    import org.slf4j.Logger;
    
    public class DecentCar {
    
     private Engine engine;
     private Logger decentCarLogger;
    
     public DecentCar(Engine engine, Logger decentCarLogger){
      this.engine = engine;
      this.decentCarLogger = decentCarLogger;
     }
    
     public void start(){
      engine.run();
      decentCarLogger.info("Engine is running.");
     }
    
     public void stop(){
      engine.stop();
      decentCarLogger.info("Engine is stopped.");
     }
    }
    

    You can find the entire example on my GitHub account.

    Round three is finished.

    Friday, 21 June 2013

    Wrestling with Singleton - round 2

    Here comes round 2 of our fight with the Singleton. This time we will try to get rid of the unwanted, hard-wired dependency by applying Michael Feathers's trick.



    During the first round, we used a simple wrapping mechanism enabling the developer to disjoin the Singleton from the class under test. Although it is not complicated, it does amend production code in a few places. What Michael Feathers suggests in his book "Working Effectively with Legacy Code" is a slightly different approach. It also changes production code, but only in the Singleton class itself. One may say it is a less invasive way of dealing with this sort of problem.
    Anyway, let's start in the same place we started last time:

    public class PropertiesCache {
    
     private static PropertiesCache instance = new PropertiesCache();
    
     private PropertiesCache() {
    
     }
    
     public static PropertiesCache getInstance() {
      return instance;
     }
    
     public boolean overrideWith(File fileProperties) {
      return someWeirdComplicatedFilePropertiesLogic(fileProperties);
     }
    
     private boolean someWeirdComplicatedFilePropertiesLogic(File fileProperties) {
      if (fileProperties.length() % 2 == 0) {
       return true;
      }
      return false;
     }
    }
    
    public class SamplePropertiesCacheUsage {
    
     public boolean overrideExistingCachePropertiesWith(File fileProperties){
      PropertiesCache cachedProperties = PropertiesCache.getInstance();
      return cachedProperties.overrideWith(fileProperties);
     }
    }
    I added a static setter to the PropertiesCache class using the IntelliJ action Code | Generate | Setter. The second move is a manual change of the constructor's modifier from private to protected.
    public class PropertiesCache {
    
     private static PropertiesCache instance = new PropertiesCache();
    
     protected PropertiesCache() {
    
     }
    
     public static PropertiesCache getInstance() {
      return instance;
     }
    
     public static void setInstance(PropertiesCache instance) {
      PropertiesCache.instance = instance;
     }
    
     public boolean overrideWith(File fileProperties) {
      return someWeirdComplicatedFilePropertiesLogic(fileProperties);
     }
    
     private boolean someWeirdComplicatedFilePropertiesLogic(File fileProperties) {
      if (fileProperties.length() % 2 == 0) {
       return true;
      }
      return false;
     }
    }
    
    Now, I created two classes inheriting from the Singleton. They stub the overrideWith method. As you can see, there is also a simple but valuable test.
    public class StubbedForTruePropertiesCache extends PropertiesCache {
    
     @Override
     public boolean overrideWith(File fileProperties) {
      return true;
     }
    }
    
    public class StubbedForFalsePropertiesCache extends PropertiesCache {
    
     @Override
     public boolean overrideWith(File fileProperties) {
      return false;
     }
    }
    
    import java.io.File;
    
    import org.testng.annotations.BeforeMethod;
    import org.testng.annotations.Test;
    
    import static java.lang.Boolean.FALSE;
    import static java.lang.Boolean.TRUE;
    import static org.hamcrest.CoreMatchers.equalTo;
    import static org.hamcrest.CoreMatchers.is;
    import static org.hamcrest.MatcherAssert.assertThat;
    
    public class SamplePropertiesCacheUsageTest {
    
     private File dummyFileProperties;
     private SamplePropertiesCacheUsage propertiesCache;
    
     @BeforeMethod
     public void setUp() {
      dummyFileProperties = new File("");
      propertiesCache = new SamplePropertiesCacheUsage();
     }
    
     @Test
     public void shouldReturnTrueDueToWeirdInternalSingletonLogic() {
      PropertiesCache.setInstance(new StubbedForTruePropertiesCache());
    
      boolean result = propertiesCache.overrideExistingCachePropertiesWith(dummyFileProperties);
    
      assertThat(result, is(equalTo(TRUE)));
     }
    
     @Test
     public void shouldReturnFalseDueToWeirdInternalSingletonLogic() {
      PropertiesCache.setInstance(new StubbedForFalsePropertiesCache());
    
      boolean result = propertiesCache.overrideExistingCachePropertiesWith(dummyFileProperties);
    
      assertThat(result, is(equalTo(FALSE)));
     }
    }
    That's all. We have relaxed the coupling between the Singleton and the system under test. We have tests. The design is also improved a bit. We have reached our goal.

    As previously, you can find the entire refactoring exercise on my GitHub account.

    Round two is finished.

    Sunday, 26 May 2013

    Wrestling with Singleton - Round 1

    How many times have you dealt with Singletons in your codebase? To be frank, it has always been a problem to properly understand the nature of the Singleton, its usage and its refactoring methods. The Singleton as such is not evil incarnate. It is rather the usage that developers think they "design".

    In order to fully understand the problem, let's have a quick look at the Gang of Four (GoF) Singleton definition:
    "Ensure a class only has one instance, and provide a global point of access to it."
    The big hoo-ha centres on the second part of the above definition: "... provide a global point of access to it (to a single object)". In GoF's implementation, the global point of access is provided by the static getInstance() method. While it perfectly fulfils the assumptions and leading concept of the Singleton definition, it also introduces "an extra, unwanted feature", i.e. global state visible to every class.

    Well, some pesky guy with a devil-may-care attitude may say: so what! Apparently nothing; however, I can bet that such a smart alec has never written a single line of unit test, especially in legacy code. The thing is, people invented the Singleton pattern to maintain a single instance of an object across an entire graph of objects, a.k.a. an application. Providing a global access point in a correct way is slightly trickier to materialize than just using a static getInstance() method. However, it is still feasible to do it the right way.

    Have you ever wondered why nobody finds fault with Spring or any other Dependency Injection (DI) framework? People do not moan about DI libraries, even though they provide a way to make an object a singleton. Now, that sounds odd! In fact, the answer is hidden in the lowercase singleton. Spring is able to create, maintain and inject a singleton object without exposing it as global state. It is worth noticing that Spring deals with the Singleton problem correctly. It not only meets GoF's definition without adding any unnecessary burden, i.e. no static getInstance() method, but also provides the desired inversion of control. It is the XML configuration which enables us to mark a bean as a singleton, and that is it. If you want to use it, you have to inject it like any other bean: via constructor, setter or field. A DI framework, by its construction, promotes testability by enforcing the concept of a seam.
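    The same lowercase-singleton effect can be achieved without any framework: create the instance once in a composition root and inject it everywhere. A minimal sketch (the collaborator classes here are hypothetical, purely for illustration):

```java
// A singleton by construction: exactly one instance is created in the
// composition root and handed to every collaborator that needs it.
public class CompositionRoot {

    // Hypothetical collaborators for illustration.
    static class PropertiesCache {
        private int hits;
        int recordHit() { return ++hits; }
    }

    static class ServiceA {
        private final PropertiesCache cache;
        ServiceA(PropertiesCache cache) { this.cache = cache; }
        int touch() { return cache.recordHit(); }
    }

    static class ServiceB {
        private final PropertiesCache cache;
        ServiceB(PropertiesCache cache) { this.cache = cache; }
        int touch() { return cache.recordHit(); }
    }

    public static void main(String[] args) {
        PropertiesCache sharedCache = new PropertiesCache(); // the only instance
        ServiceA a = new ServiceA(sharedCache);
        ServiceB b = new ServiceB(sharedCache);
        a.touch();
        System.out.println(b.touch()); // both services see the same shared cache
    }
}
```

    No static getInstance(), no global state: whoever wants the cache receives it through a constructor seam, exactly as a DI container would do it.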

    If you are a bit of a nosy person, you should ask this sort of question: is this the only correct way I can use singletons? Obviously, the answer is: no, it is not. The reason why a DI framework makes better use of singletons is that it combines a single instance of some class with dependency injection.
    If you do not want to use Spring for some reason, or it is simply overkill for your solution, then there are at least two ways you can go. You can either use the 'wrap the Singleton' or the 'inherit from Singleton' approach. In this article, I will focus on the former. In a nutshell, it is a poor man's dependency injection going along with the Singleton. Incidentally, it is quite a powerful technique when it comes to legacy code refactoring. Let's have a look at a model GoF implementation of the Singleton pattern and its usage in sample legacy code:


     
       public class PropertiesCache {
    
    	private static PropertiesCache instance = new PropertiesCache();
    
    	private PropertiesCache() {
    
    	}
    
    	public static PropertiesCache getInstance() {
    		return instance;
    	}
    
    	public void overrideWith(File fileProperties) {
    		// some logic comes here
    	}
     }
    
     
    public class SamplePropertiesCacheUsage {
    
    	public void overrideExistingCachePropertiesWith(File fileProperties){
    		PropertiesCache cachedProperties = PropertiesCache.getInstance();
    		cachedProperties.overrideWith(fileProperties);
    	}
    }
    

    It is a very simple and extremely common scenario, which shows the tight coupling between the SamplePropertiesCacheUsage and Singleton classes. Bear in mind that the Singleton might be quite substantial in size, as it is a properties cache, after all. Moreover, some cunning developer before you might have armed the Singleton with quite a few "handy methods" for loading properties from files, merging them from different sources, applying precedence policies on top of that, etc. Generally speaking, nothing pleasant, and it is you who has to wrestle with that code now.



    Let's assume that our goal is to get rid of that tight dependency on the Singleton. A second, more implicit assumption is that the refactoring will only slightly change the Singleton call in our production code.

    Okay, let's get started. The first thing we should do is look for a test for SamplePropertiesCacheUsage. Wait a second: we are digging in legacy code, so do not even bother looking for any test. It would have been quite difficult for a developer to write such a test anyway. As a matter of fact, we quickly find that we have to refactor using the IDE's built-in methods.

    In my IntelliJ IDE it is a few-step process. First of all, let's extract a private method called getInstance(), encapsulating the Singleton's static call. This method is not static any more.
     
    public class SamplePropertiesCacheUsage {
    
    	public void overrideExistingCachePropertiesWith(File fileProperties){
    		PropertiesCache cachedProperties = getInstance();
    		cachedProperties.overrideWith(fileProperties);
    	}
    
    	private PropertiesCache getInstance() {
    		return PropertiesCache.getInstance();
    	}
    }
    

    Our next step is to extract a PropertiesCacheWrapper class with a public getInstance() method from the SamplePropertiesCacheUsage Singleton client.
     
    public class SamplePropertiesCacheUsage {
    
    	private PropertiesCacheWrapper propertiesCacheWrapper = new PropertiesCacheWrapper();
    
    	public void overrideExistingCachePropertiesWith(File fileProperties){
    		PropertiesCache cachedProperties = propertiesCacheWrapper.getInstance();
    		cachedProperties.overrideWith(fileProperties);
    	}
    
    	private PropertiesCache getInstance() {
    		return propertiesCacheWrapper.getInstance();
    	}
    }
    
    
     
    public class PropertiesCacheWrapper {
    	public PropertiesCacheWrapper() {
    	}
    
    	public PropertiesCache getInstance() {
    		return PropertiesCache.getInstance();
    	}
    }

    Now it is time to initialize the propertiesCacheWrapper field in the constructor. You may also need to manually delete the inlined initialization of the propertiesCacheWrapper field.
    This is actually the moment when the injection of PropertiesCacheWrapper happens.


    public class SamplePropertiesCacheUsage {
    
    	private PropertiesCacheWrapper propertiesCacheWrapper;
    
    	public SamplePropertiesCacheUsage(PropertiesCacheWrapper aPropertiesCacheWrapper) {
    		propertiesCacheWrapper = aPropertiesCacheWrapper;
    	}
    
    	public void overrideExistingCachePropertiesWith(File fileProperties){
    		PropertiesCache cachedProperties = propertiesCacheWrapper.getInstance();
    		cachedProperties.overrideWith(fileProperties);
    	}
    
    	private PropertiesCache getInstance() {
    		return propertiesCacheWrapper.getInstance();
    	}
    }
    

    As a last step, we may delete the getInstance() method from SamplePropertiesCacheUsage, as it is no longer used.
    public class SamplePropertiesCacheUsage {
    
    	private PropertiesCacheWrapper propertiesCacheWrapper;
    
    	public SamplePropertiesCacheUsage(PropertiesCacheWrapper aPropertiesCacheWrapper) {
    		propertiesCacheWrapper = aPropertiesCacheWrapper;
    	}
    
    	public void overrideExistingCachePropertiesWith(File fileProperties){
    		PropertiesCache cachedProperties = propertiesCacheWrapper.getInstance();
    		cachedProperties.overrideWith(fileProperties);
    	}
    }
    


    Let's have a look at what happened. We now have the Singleton invocation wrapped in a separate class. What is more, the SamplePropertiesCacheUsage class has a constructor seam, which is used to inject PropertiesCacheWrapper. The code is now at least testable, so we are able to write a test for the SamplePropertiesCacheUsage class.

    import java.io.File;
    
    import org.mockito.Mock;
    import org.testng.annotations.BeforeMethod;
    import org.testng.annotations.Test;
    
    import static org.mockito.BDDMockito.given;
    import static org.mockito.Mockito.verify;
    import static org.mockito.MockitoAnnotations.initMocks;
    public class SamplePropertiesCacheUsageTest {
    
    	@Mock private PropertiesCache cachedProperties;
    	@Mock private PropertiesCacheWrapper propertiesCacheWrapper;
    	@Mock private File file;
    
    	@BeforeMethod
    	public void initializeMocks() {
    		initMocks(this);
    		given(propertiesCacheWrapper.getInstance()).willReturn(cachedProperties);
    	}
    
    	@Test
    	public void shouldOverrideExistingPropertiesWithFileProperties() {
    		SamplePropertiesCacheUsage samplePropertiesCacheUsage = new SamplePropertiesCacheUsage(propertiesCacheWrapper);
    
    		samplePropertiesCacheUsage.overrideExistingCachePropertiesWith(file);
    
    		verify(cachedProperties).overrideWith(file);
    	}
    }
    
    

    Everything looks good now. We have a unit test describing the SamplePropertiesCacheUsage class, which previously used a static call to the Singleton class. We have also got rid of the tight dependency on the Singleton.

    You can find the entire refactoring exercise on my GitHub account.

    Round one is finished.

    Sunday, 7 April 2013

    Definition of Decent Failure

    Being a software developer and failing seem to be inseparable. Is that right or wrong? Is it something we should worry about? Is it really inscribed into a developer's everyday life? Does it affect only us, developers? What does failing mean? When is it good to fail? Can we always fail?

    Well, one may say that my questions look more like the existential considerations of philosophy students than a topic for a blog post. By and large, that would be a correct statement; however, there is a subtle thread connecting software development and failure which leads me to these questions.

    Life means development
    The Eastern style of explanation will typically do two things:
    • use an outside-in approach (deduction)
    • try to perceive life from the perspective of what it can bring to us.
    There is one thing we should constantly try to do in life: self-development. One of the most expressive and direct examples of such a philosophy were the Japanese warriors. They spent whole days mastering their skills, so that they were better prepared to serve their masters and for inevitable situations like life, fight or death.

    Development means progress
    Going further, that self-development clearly meant progress. It was progress not only in terms of physical skills, but also in terms of mental strength, comprehension of the world and the sense of life.



    Progress means series of experiments
    Samurais knew perfectly well that their progress depended on hard and tough training, topped with dealing with unusual and unpredictable situations. These peculiar situations were in fact devoted to testing and verifying the theories, processes, phenomena and hypotheses they were taught during their lives.
    The most valuable warriors could turn all they had learnt into a series of experiments, so that they could prove themselves and their knowledge. It means that the most distinguished samurais were experimenters.

    Series of experiments mean experience
    However, not only that. Samurais were also experienced and educated people in many respects.
    There is an interesting intuition about experience among people, namely that it comes with old age and grey hair. Aged warriors were considered privileged people, following at least the rules below:
    • there are many paths leading to the same place
    • choose the right path (minimize the risk of losing life, health, family, etc.)
    These two rather obvious statements are barely prerequisites to the most instructive parts of each experiment, i.e. a path and a result. Therefore, from the very beginning samurais were made sensitive to two observations:
    • each valuable experiment has a result
    • an experiment without any result is pure waste.
    It means that every single action should have an effect. An effort without any result is a waste of time, energy and resources.

    Experience means failures
    Going further, every result can have one of two values: true or false. Incidentally, it is worth thinking about an experiment in terms of a test. A test is a scenario which exercises some idea, question or system and evaluates to a result.
    One may ask which result value is better: true or false? Hmm ... that is actually a very good question. Let's ponder it for a while, then.
    A result which is true shows you that the hypothesis you had in mind is correct. It also shows that the path you chose was not too severe compared to your current level of experience. It confirms your beliefs. On the other hand, a result equal to false shows you much more. It tells you that your assumption was wrong. In the samurais' world, every situation in which they failed without causing too much damage to themselves, their family and other people or things was invaluable. Why? Because by failing they did a recon of an area they were unsure of, hopefully at low cost (e.g. a few bruises or broken ribs). By failing in a relatively safe way, they lowered the risk of entering unknown territory in the future.



    Fail early, fail often, fail fast with decent failure
    Samurais were able to see the value in discovering limitations and risky areas. They greatly appreciated early intelligence about the impact of the actions they took. That is why they mastered their skills on a daily basis: tough and repeatable training, body memory, permanent checks of their mental and physical limitations, fights with bokkens instead of katanas, self-defence without a weapon and against a weapon, etc. All of these were tests in an isolated context, aiming to train for and solve difficult and unusual situations, as well as common manoeuvres happening on a battlefield. They treated their body as a system which needs to be under constant test. By looking at the problem from different angles and considering many aspects of each perspective, they hardened their minds, bodies and hearts for the ultimate test: real life.

    All in all, failing seems to be a good thing. However, we have to bear in mind that we cannot just fail. It has to be a controlled failure, something I call a decent failure. In fact, it is a regular failure with the three properties below:
    • early - saving time; don't go through the whole process to get feedback
    • often - frequent feedback
    • fast - a short feedback loop

    Be okay with the decent failure
    Having a decent-failure mechanism in place is actually a great success! A decent failure is the best thing you can experience, as it shows you all your limitations and pins down the problem in a safe and quick manner.

    Sunday, 10 February 2013

    Awaitility - testing asynchronous calls in Java

    Asynchronicity plays an important role in the systems we develop nowadays. Below are examples where we use asynchronous operations almost daily:
    • writing to a cache with an asynchronous write to a database behind the scenes, or the other way round
    • JMS queues or topics, which are asynchronous in themselves
    • retrieving data from external systems in an asynchronous way
    Asynchronous operations are often used for caching data in order to decrease the number of expensive calls to a database, web service, etc. There are also variations of this caching approach. For instance, one may send time-consuming or algorithmically complicated operations to a JMS queue (an asynchronous system). Later on, when the result of such an intensive operation becomes available, it can simply be cached in local in-memory storage. The next time a user tries to fetch the same data, it will be fast and easy to retrieve the previously calculated result from the cache. Such an approach lets you build very fast, scalable, flexible and user-friendly applications/GUIs.
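    The compute-asynchronously-then-cache pattern described above can be sketched with plain JDK executors; the class name and the 'expensive computation' below are invented for illustration:

```java
import java.util.concurrent.*;

// Illustrative sketch: expensive results are computed asynchronously and
// cached, so later reads become fast in-memory lookups.
public class AsyncResultCache {

    private final ConcurrentMap<String, Future<Integer>> cache = new ConcurrentHashMap<>();
    private final ExecutorService executor = Executors.newFixedThreadPool(4);

    // Start the computation once per key; subsequent calls reuse the same Future.
    public Future<Integer> resultFor(String key) {
        return cache.computeIfAbsent(key, k -> executor.submit(() -> expensiveComputation(k)));
    }

    private int expensiveComputation(String key) throws InterruptedException {
        Thread.sleep(50); // stand-in for a slow algorithm or remote call
        return key.length();
    }

    public void shutdown() {
        executor.shutdown();
    }
}
```

    In a real system the result would land in a proper cache with eviction, and the work might travel through a JMS queue rather than a local thread pool; the shape of the flow is the same.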
    However, there is one caveat here: it is quite difficult to test such an asynchronous system. If we do it from scratch, we will need to provide thread handlers and timeouts and generally deal with concurrency, which makes the whole test more obscure and less focused on the business principles ruling the system. That, in turn, means we have found a niche, and basically a need, for a library to handle the problem.
    The answer to our headache is Awaitility: a simple and powerful Java library for testing asynchronous calls. Moreover, it has a concise and expressive DSL for defining expectations.




    Now, let's see Awaitility in action. 
    I wrote a simple application, which is available on GitHub. The basic idea is that there is a DelayedFileCreator class, which is responsible for creating a file on the filesystem. It also mimics an expensive, time-consuming calculation by means of a time delay. The important thing is the result of that operation. Asynchronous calls of this sort either return a state or cause a change of state somewhere else - in our case, a newly created file on the filesystem. 


    package asynchronousexample;
    
    import java.io.File;
    import java.io.IOException;
    
    public class DelayedFileCreator implements Runnable {
    
     private static final int THREE_SECONDS = 3000;
    
     private File file;
     private Timer timer;
    
     public DelayedFileCreator(Timer aTimer, File aFile) {
      timer = aTimer;
      file = aFile;
     }
    
      @Override
      public void run() {
       sleepBeforeCreatingFile(); // mimics the expensive, time-consuming calculation
       createNewFile();           // the observable result of the asynchronous call
      }
    
     private void sleepBeforeCreatingFile() {
      try {
       timer.sleep(THREE_SECONDS);
      } catch (InterruptedException ie) {
       throw new RuntimeException(ie);
      }
     }
    
     private void createNewFile() {
      try {
       file.createNewFile();
      } catch (IOException ioe) {
       throw new RuntimeException(ioe);
      }
     }
    }
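    The Timer collaborator is not reproduced in the post. Judging by its use, it is presumably a thin seam over Thread.sleep, introduced so that unit tests can mock the delay away. A guessed minimal version (an assumption; check the GitHub repository for the real one):

    ```java
    // Assumed helper (not shown in the post): wrapping Thread.sleep in an
    // instance method gives tests a seam to mock, so they do not actually wait.
    class Timer {

        public void sleep(long millis) throws InterruptedException {
            Thread.sleep(millis);
        }
    }
    ```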
    
     
    Ideally, we would like to write a test which invokes the overridden run() method, waits until it is done and asserts the result of the asynchronous operation. Awaitility hits the nail on the head here. The example below shows an integration test that uses the nice, readable DSL to assert the result.


    package integration;
    
    import asynchronousexample.AsynchronousTaskLauncher;
    import asynchronousexample.DelayedFileCreator;
    import asynchronousexample.Timer;
    import org.testng.annotations.BeforeTest;
    import org.testng.annotations.Test;
    
    import java.io.File;
    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    
    import static com.jayway.awaitility.Awaitility.with;
    import static com.jayway.awaitility.Duration.ONE_HUNDRED_MILLISECONDS;
    import static com.jayway.awaitility.Duration.TEN_SECONDS;
    import static com.jayway.awaitility.Duration.TWO_HUNDRED_MILLISECONDS;
    import static org.hamcrest.Matchers.equalTo;
    
    public class CreateFileAsynchronouslyIntegrationTest {
    
     private static final int THREAD_POOL_SIZE = 3;
     private static final String FILENAME = "sample.txt";
    
     @BeforeTest
     public void deleteFileFromFileSystem() {
      File file = new File(FILENAME);
      if (file.exists()) {
       file.delete();
      }
     }
    
     @Test
     public void shouldAsynchronouslyWriteFileOnDisk() throws Exception {
      AsynchronousTaskLauncher launcher = prepareAsynchronousTaskLauncher();
      Runnable delayedFileCreatorTask = prepareDelayedFileCreatorWith(FILENAME);
    
      launcher.launch(delayedFileCreatorTask);
    
      with().pollDelay(ONE_HUNDRED_MILLISECONDS)
        .and().with().pollInterval(TWO_HUNDRED_MILLISECONDS)
        .and().with().timeout(TEN_SECONDS)
        .await("file creation")
        .until(fileIsCreatedOnDisk(FILENAME), equalTo(true));
     }
    
     private Runnable prepareDelayedFileCreatorWith(String filename) {
      Timer timer = new Timer();
      File file = new File(filename);
      return new DelayedFileCreator(timer, file);
     }
    
     private AsynchronousTaskLauncher prepareAsynchronousTaskLauncher() {
      ExecutorService executorService = Executors.newFixedThreadPool(THREAD_POOL_SIZE);
      return new AsynchronousTaskLauncher(executorService);
     }
    
      private Callable<Boolean> fileIsCreatedOnDisk(final String filename) {
       return new Callable<Boolean>() {
       @Override
       public Boolean call() throws Exception {
        File file = new File(filename);
        return file.exists();
       }
      };
     }
    }
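    The AsynchronousTaskLauncher is also not listed in the post. Given that it is constructed with an ExecutorService, it most likely just submits the task and returns immediately. A plausible sketch (again an assumption, not the author's code):

    ```java
    import java.util.concurrent.ExecutorService;

    // Assumed implementation: fire-and-forget submission to the thread pool,
    // so launch() returns before the task has finished running.
    class AsynchronousTaskLauncher {

        private final ExecutorService executorService;

        public AsynchronousTaskLauncher(ExecutorService executorService) {
            this.executorService = executorService;
        }

        public void launch(Runnable task) {
            executorService.submit(task);
        }
    }
    ```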
    
    
     
    The whole beauty lies in the readability and expressiveness of Awaitility. We do not have to take care of thread handling, concurrency aspects etc. Everything is done by Awaitility. 

    I really encourage you to use this small but very handy library to test your asynchronous calls. 
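    For contrast, here is roughly the hand-rolled polling loop that Awaitility saves you from writing, using only the JDK (a simplified sketch, without Awaitility's poll delay, deadlock detection or exception handling):

    ```java
    import java.util.function.BooleanSupplier;

    // A naive await().until(...) equivalent: poll the condition until it holds
    // or the timeout elapses. Returns true if the condition was met in time.
    class NaivePoller {

        static boolean awaitUntil(BooleanSupplier condition, long timeoutMillis, long pollIntervalMillis) {
            long deadline = System.currentTimeMillis() + timeoutMillis;
            while (!condition.getAsBoolean()) {
                if (System.currentTimeMillis() > deadline) {
                    return false; // timed out, condition never held
                }
                try {
                    Thread.sleep(pollIntervalMillis);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return false;
                }
            }
            return true;
        }
    }
    ```

    Everything this class does by hand - deadlines, sleeping, interrupt handling - is what the Awaitility DSL hides behind pollInterval(), timeout() and until().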

    Saturday, 10 November 2012

    Which game are you playing?

    A few days ago, I spoke with a friend about a pretty "funny" situation. He told me that developers on his project are playing a game, which my friend called Pair Programming vs. Code Review. If developers like each other, they play Pair Programming. If they do not like each other, they play Code Review.

    As you can see, those developers found a problem where none existed. That is a rare and undesired skill, which often manifests itself in pseudo-pragmatic teams that are probably not sticking to the 5th principle of the Agile Manifesto: 
    "Build projects around motivated individuals. 
    Give them the environment and support they need, and trust them to get the job done."
    Generally, this is called over-engineering! But not only that: they are losing time on guerrilla warfare rather than focusing on helping each other. They also seem to have no respect for each other. 

    The question is: should it be this way? Of course not!
    I would like to be clear about Pair Programming and Code Review, so let's start with some definitions.

    Scratching Tiger, Hidden Dragon - Pair Programming

    One developer types code at the keyboard. He works at the tactical level. The other looks for mistakes and thinks strategically about whether the code is being written right and whether the right code is being written. 

    You can also describe pair programming in a more holistic and Tao-oriented way, as a yin-yang, or as unity and harmony arising from diversity. There are many approaches explaining the principles of pair programming, like coder and observer or driver and navigator, however I am always keen on the scratching tiger and hidden dragon parallel. 



    In other words, it is a conjunction of opposite features, like active and passive, linear and non-linear, or specific and general, in a collective action such as solving a problem: programming, in this case.
    To be fairly honest, one can extend and generalize this approach to any sort of task you have to do. 

    The overall idea is simple and easy to grasp. Let's look at pair programming from two different perspectives: tiger and dragon.

    Scratching Tiger:

    1. The active person, writing the code, almost scratching it out
    2. Keeps focusing on syntax and details
    3. Uses the linear part of the brain, the logical (left) hemisphere, to solve arising problems


    Hidden Dragon:
    1. The passive person, paying attention to code quality and the fulfilment of acceptance criteria, almost like a dragon flying in the sky and observing what the tiger does
    2. Can see the whole solution and has an overall idea of subsequent tasks and steps
    3. Uses the non-linear part of the brain, the abstract (right) hemisphere, to imagine the flow and grasp the bigger picture

    You would probably agree that the above description is somewhat static and, frankly, a bit boring. You are right, but the key here is movement and the synergy you can get from it.

    How can we get the movement?

    Again, this is simple. You need a proper space: a comfortable desk, two chairs and two keyboards. These things are really important, so do not neglect any of them, especially the two keyboards. These prerequisites enable a higher level of collaboration between developers. Apart from that, they do not lose time and ideas while swapping keyboards.

    And now comes the most important part. The developers create a collective mind, buzzing and boiling with new, fresh ideas. The worst thing you can do is not switch between the Tiger and Dragon roles. You have to switch frequently, freely and without hesitation. You cannot be afraid of your partner saying "leave that keyboard" etc. When you see the Tiger make a typo, initialize a loop the wrong way or skip a refactoring that could be done, simply take over and use your own keyboard to fix it. You need to change perspectives very often and switch your brain from linear to non-linear mode and vice versa. That is the only way you can solve difficult problems. 



    Why pairing on solving tasks?

    Mainly because of four reasons:
    • Readability - two people take care of readable code; both of them have to understand what was written
    • Code quality - 40% to 60% of defects found (Steve McConnell, "Code Complete")
    • Bus factor - knowledge sharing and mitigating the risk of a single point of failure
    • Pressure - a group shields developers from external pressure much better than a single developer can protect himself. What's more, in stressful situations people tend to cluster to face the difficulty together.

    Okay, so when is it worth pairing? 


    It is definitely most expedient to pair on crucial and strategically important tasks. It is also useful to pair on both very hard, non-trivial implementations and tasks of medium difficulty. There is no point in pairing on relatively easy tasks, as it does not add much value compared to the effort.

    Always remember to rotate pairs, so nobody gets the chance to grow too used to their partner; efficiency decreases otherwise. Moreover, it is also good not to pair two newbies, as there is no master-apprentice relation, which negatively affects the pairing process. And the people definitely need to like each other.


    Code Review - No junk in a trunk


    It is a sort of semi-formal walkthrough of the code, aiming to:

    1. share knowledge (mitigate the bus factor)
    2. check the design from a high-level perspective
    3. check that at least the SOLID principles are applied
    4. check the definition of done (DoD)
    5. increase resilience before holidays

    When can you perform code review?

    Of course after development and testing, but before merging to master or trunk, so that there is no junk in the trunk.



    The programmer or pair who developed a solution walks some other team member through the new code. They start by telling the story and the acceptance criteria, then they explain the applied design and show working, green CI system tests. There is also a place for Q&A and potentially some code review remarks or changes. Code review is able to discover between 20% and 40% of defects (Steve McConnell, "Code Complete").

    Summary

    Code Review and Pair Programming are two complementary techniques that improve the general quality of the provided solution. Apart from that, they mitigate the risk associated with the bus factor through knowledge sharing.
    Both of them create a useful unity in the development cycle. 
    You may even use the scratching tiger, hidden dragon parallel here as well: Pair Programming would be the tiger, and code review the dragon, looking at the solution from a distance. 

    It is much cheaper (sic!) to let developers be tigers and dragons than to give them a wigging for losing time on crap.

    Rarely do you have to choose between these two approaches. If you must, choose wisely and pick pairing. If your management is trying to be clumsily lean-oriented and tells you that you should not pair because it is pure waste, take at least code review. Remember, half pairing is still better than no pairing at all. If time is that valuable, it is always good to discuss the design, start the trickiest bit together, and then split up and finish separately what you started. 
    However, the "pure waste" argument does not hold true in the first place: you write robust and clean solutions faster while pairing and reviewing than without.