Friday, 21 June 2013

Wrestling with Singleton - round 2

Time for round 2 of our fight with Singleton. This time, we will try to get rid of the unwanted, hard-wired dependency by applying a trick from Michael Feathers.



During the first round, we used a simple wrapping mechanism that enabled the developer to disjoin the Singleton from the class under test. Although it is not complicated, it amends production code in a few places. What Michael Feathers suggests in his book "Working Effectively with Legacy Code" is a slightly different approach. It also changes production code, but only in the Singleton class itself. One may say it is a less invasive way of dealing with this sort of problem.
Anyway, let's start in the same place we started last time:

public class PropertiesCache {

 private static PropertiesCache instance = new PropertiesCache();

 private PropertiesCache() {

 }

 public static PropertiesCache getInstance() {
  return instance;
 }

 public boolean overrideWith(File fileProperties) {
  return someWeirdComplicatedFilePropertiesLogic(fileProperties);
 }

 private boolean someWeirdComplicatedFilePropertiesLogic(File fileProperties) {
  if (fileProperties.length() % 2 == 0) {
   return true;
  }
  return false;
 }
}
public class SamplePropertiesCacheUsage {

 public boolean overrideExistingCachePropertiesWith(File fileProperties){
  PropertiesCache cachedProperties = PropertiesCache.getInstance();
  return cachedProperties.overrideWith(fileProperties);
 }
}
I added a static setter to the PropertiesCache class using the IntelliJ action Code - Generate - Setter. The second move is a manual change of the constructor's modifier from private to protected.
public class PropertiesCache {

 private static PropertiesCache instance = new PropertiesCache();

 protected PropertiesCache() {

 }

 public static PropertiesCache getInstance() {
  return instance;
 }

 public static void setInstance(PropertiesCache instance) {
  PropertiesCache.instance = instance;
 }

 public boolean overrideWith(File fileProperties) {
  return someWeirdComplicatedFilePropertiesLogic(fileProperties);
 }

 private boolean someWeirdComplicatedFilePropertiesLogic(File fileProperties) {
  if (fileProperties.length() % 2 == 0) {
   return true;
  }
  return false;
 }
}
Now, I created two classes inheriting from the Singleton. They stub the overrideWith method. As you can see, there is also a simple but valuable test.
public class StubbedForTruePropertiesCache extends PropertiesCache {

 @Override
 public boolean overrideWith(File fileProperties) {
  return true;
 }
}
public class StubbedForFalsePropertiesCache extends PropertiesCache {

 @Override
 public boolean overrideWith(File fileProperties) {
  return false;
 }
}
public class SamplePropertiesCacheUsageTest {

 private File dummyFileProperties;
 private SamplePropertiesCacheUsage propertiesCache;

 @BeforeMethod
 public void setUp() {
  dummyFileProperties = new File("");
  propertiesCache = new SamplePropertiesCacheUsage();
 }

 @Test
 public void shouldReturnTrueDueToWeirdInternalSingletonLogic() {
  PropertiesCache.setInstance(new StubbedForTruePropertiesCache());

  boolean result = propertiesCache.overrideExistingCachePropertiesWith(dummyFileProperties);

  assertThat(result, is(equalTo(TRUE)));
 }

 @Test
 public void shouldReturnFalseDueToWeirdInternalSingletonLogic() {
  PropertiesCache.setInstance(new StubbedForFalsePropertiesCache());

  boolean result = propertiesCache.overrideExistingCachePropertiesWith(dummyFileProperties);

  assertThat(result, is(equalTo(FALSE)));
 }
}
That's all. We have relaxed the coupling between the Singleton and the system under test. We have tests. The design is also improved a bit. We reached our goal.
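One caveat worth adding: setInstance swaps the instance globally, so a stub installed in one test leaks into every test that runs afterwards. A minimal sketch of a defensive reset (the resetInstance method and the PropertiesCacheDemo stand-in class are my additions for illustration, not part of the code above):

```java
// Self-contained sketch; PropertiesCacheDemo stands in for the post's
// PropertiesCache, and resetInstance() is a hypothetical addition.
public class SingletonResetSketch {

 public static class PropertiesCacheDemo {
  private static PropertiesCacheDemo instance = new PropertiesCacheDemo();

  protected PropertiesCacheDemo() {
  }

  public static PropertiesCacheDemo getInstance() {
   return instance;
  }

  public static void setInstance(PropertiesCacheDemo newInstance) {
   instance = newInstance;
  }

  // Hypothetical helper: call it after each test to restore the real instance.
  public static void resetInstance() {
   instance = new PropertiesCacheDemo();
  }

  public boolean overrideWith(String properties) {
   return properties.length() % 2 == 0;
  }
 }

 public static class StubbedTrueCache extends PropertiesCacheDemo {
  @Override
  public boolean overrideWith(String properties) {
   return true;
  }
 }

 public static void main(String[] args) {
  PropertiesCacheDemo.setInstance(new StubbedTrueCache());
  boolean stubbed = PropertiesCacheDemo.getInstance().overrideWith("odd");
  PropertiesCacheDemo.resetInstance();
  boolean real = PropertiesCacheDemo.getInstance().overrideWith("odd");
  System.out.println(stubbed + " " + real); // prints "true false"
 }
}
```

Calling such a reset from an @AfterMethod hook keeps each test isolated from the others.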

As previously, you can find the entire refactoring exercise on my GitHub account.

Round two is finished.

Sunday, 26 May 2013

Wrestling with Singleton - Round 1

How many times have you dealt with Singletons in your codebase? To be frank, it has always been a problem to properly understand the nature of Singleton, its usage and refactoring methods. Singleton as such is not embodied evil. The problem is rather the usage that developers think they "design".

In order to fully understand the problem, let's have a quick look at the Gang of Four (GoF) Singleton definition:
"Ensure a class only has one instance, and provide a global point of access to it."
The big hoo-ha focuses on the second part of the above definition: "... provide a global point of access to it (to a single object)". In the GoF implementation, the global point of access is provided by taking advantage of a static getInstance() method. While this perfectly fulfills the assumptions and the leading concept of the Singleton definition, it also introduces an extra, unwanted feature: a global state visible to every class.

Well, some pesky guy with a devil-may-care attitude may say: so what! Apparently nothing; however, I can bet that such a smart alec has never written a single line of unit test, especially in legacy code. The thing is, people invented the Singleton pattern to maintain a single instance of an object across the entire object graph, aka an application. Providing a global access point in a correct way is slightly trickier to materialize than just using a static getInstance() method. However, it is still feasible to do it the right way.

Have you ever wondered why nobody finds fault with Spring or with any other Dependency Injection (DI) framework? People do not moan about DI libraries, even though they offer a way to make an object a singleton. Now, that sounds odd! In fact, the answer is hidden in the lowercase singleton. Spring is able to create, maintain and inject a singleton object without exposing it as global state. It is worth noticing that Spring deals with the Singleton problem correctly. It not only meets the GoF definition without adding any unnecessary burden (no static getInstance() method), but also provides the desired inversion of control. It is the XML configuration that enables us to mark a bean as a singleton, and that is it. If you want to use it, you have to inject it like any other bean: via constructor, setter or field. A DI framework, by its construction, promotes testability by enforcing the concept of a seam.
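For illustration only, the kind of XML configuration meant here might look roughly like this (the bean ids are made up for the example; singleton is in fact Spring's default bean scope):

```xml
<!-- hypothetical bean definitions; "singleton" is Spring's default scope anyway -->
<bean id="propertiesCache" class="com.example.PropertiesCache" scope="singleton"/>

<!-- the client gets the singleton injected instead of calling a static getInstance() -->
<bean id="propertiesCacheClient" class="com.example.SamplePropertiesCacheUsage">
    <constructor-arg ref="propertiesCache"/>
</bean>
```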

If you are a bit nosy, you should ask this sort of question: is this the only correct way I can use singletons? Obviously, the answer is: no, it is not. The reason why a DI framework makes better use of singletons is that it combines a single instance of some class with dependency injection.
If you do not want to use Spring for some reason, or it is simply overkill for your solution, then there are at least two ways you can choose. You can use either the 'wrap the Singleton' or the 'inherit from Singleton' approach. In this article, I will focus on the former. In a nutshell, it is a poor man's dependency injection going along with the Singleton. Incidentally, it is quite a powerful technique when it comes to legacy code refactoring. Let's have a look at a model GoF implementation of the Singleton pattern and its usage in sample legacy code:


 
   public class PropertiesCache {

	private static PropertiesCache instance = new PropertiesCache();

	private PropertiesCache() {

	}

	public static PropertiesCache getInstance() {
		return instance;
	}

	public void overrideWith(File fileProperties) {
		// some logic comes here
	}
 }
 
public class SamplePropertiesCacheUsage {

	public void overrideExistingCachePropertiesWith(File fileProperties){
		PropertiesCache cachedProperties = PropertiesCache.getInstance();
		cachedProperties.overrideWith(fileProperties);
	}
}

It is a very simple and extremely common scenario, which shows the tight coupling between the SamplePropertiesCacheUsage and Singleton classes. Bear in mind that the Singleton might be quite substantial in size, as it is a properties cache, after all. Moreover, some cunning developer before you might have armed the Singleton with quite a few "handy methods" for loading properties from a file, merging them from different sources, applying precedence policies on top of that, etc. Generally speaking, nothing pleasant, and it is you who has to wrestle with that code now.



Let's assume that our goal is to get rid of that tight dependency on the Singleton. A second, more implicit assumption is that our IDE will only slightly change the Singleton call in our production code.

Okay, let's get started. The first thing we should do is look for a test for SamplePropertiesCacheUsage. Wait a second, we are going to start digging in legacy code, so do not even bother looking for any test. It would have been quite difficult for a developer to write such a test anyway. As a matter of fact, we quickly find that we have to refactor using the IDE's built-in methods.

In my IntelliJ IDE it is a few-step process. First of all, let's extract a private method called getInstance(), encapsulating the Singleton static call. This method is not static any more.
 
public class SamplePropertiesCacheUsage {

	public void overrideExistingCachePropertiesWith(File fileProperties){
		PropertiesCache cachedProperties = getInstance();
		cachedProperties.overrideWith(fileProperties);
	}

	private PropertiesCache getInstance() {
		return PropertiesCache.getInstance();
	}
}

Our next step is to extract a PropertiesCacheWrapper class, with a public getInstance() method, from the SamplePropertiesCacheUsage Singleton client.
 
public class SamplePropertiesCacheUsage {

	private PropertiesCacheWrapper propertiesCacheWrapper = new PropertiesCacheWrapper();

	public void overrideExistingCachePropertiesWith(File fileProperties){
		PropertiesCache cachedProperties = propertiesCacheWrapper.getInstance();
		cachedProperties.overrideWith(fileProperties);
	}

	private PropertiesCache getInstance() {
		return propertiesCacheWrapper.getInstance();
	}
}

 
public class PropertiesCacheWrapper {
	public PropertiesCacheWrapper() {
	}

	public PropertiesCache getInstance() {
		return PropertiesCache.getInstance();
	}
}

Now it is time to initialize the propertiesCacheWrapper field in the constructor. You may also need to manually delete the inlined initialization of the propertiesCacheWrapper field.
This is actually the moment when the injection of PropertiesCacheWrapper happens.


public class SamplePropertiesCacheUsage {

	private PropertiesCacheWrapper propertiesCacheWrapper;

	public SamplePropertiesCacheUsage(PropertiesCacheWrapper aPropertiesCacheWrapper) {
		propertiesCacheWrapper = aPropertiesCacheWrapper;
	}

	public void overrideExistingCachePropertiesWith(File fileProperties){
		PropertiesCache cachedProperties = propertiesCacheWrapper.getInstance();
		cachedProperties.overrideWith(fileProperties);
	}

	private PropertiesCache getInstance() {
		return propertiesCacheWrapper.getInstance();
	}
}

As a last step, we may delete the getInstance() method from SamplePropertiesCacheUsage, as it is no longer used.
public class SamplePropertiesCacheUsage {

	private PropertiesCacheWrapper propertiesCacheWrapper;

	public SamplePropertiesCacheUsage(PropertiesCacheWrapper aPropertiesCacheWrapper) {
		propertiesCacheWrapper = aPropertiesCacheWrapper;
	}

	public void overrideExistingCachePropertiesWith(File fileProperties){
		PropertiesCache cachedProperties = propertiesCacheWrapper.getInstance();
		cachedProperties.overrideWith(fileProperties);
	}
}


Let's have a look at what happened. We now have the Singleton invocation wrapped in a separate class. What is more, the SamplePropertiesCacheUsage class has a constructor seam, which is used to inject PropertiesCacheWrapper. The code is now at least testable, so we are able to write a test for the SamplePropertiesCacheUsage class.

public class SamplePropertiesCacheUsageTest {

	@Mock private PropertiesCache cachedProperties;
	@Mock private PropertiesCacheWrapper propertiesCacheWrapper;
	@Mock private File file;

	@BeforeMethod
	public void initializeMocks() {
		initMocks(this);
		given(propertiesCacheWrapper.getInstance()).willReturn(cachedProperties);
	}

	@Test
	public void shouldOverrideExistingPropertiesWithFileProperties() {
		SamplePropertiesCacheUsage samplePropertiesCacheUsage = new SamplePropertiesCacheUsage(propertiesCacheWrapper);

		samplePropertiesCacheUsage.overrideExistingCachePropertiesWith(file);

		verify(cachedProperties).overrideWith(file);
	}
}


Everything looks good now. We have a unit test describing the SamplePropertiesCacheUsage class, which previously used a static call to the Singleton class. We also got rid of the tight dependency on the Singleton.

You can find the entire refactoring exercise on my GitHub account.

Round one is finished.

Sunday, 7 April 2013

Definition of Decent Failure

Being a software developer and failing seem to be inseparable. Is it right or wrong? Is it something we should worry about? Is it really inscribed into a developer's everyday life? Does it affect only us, developers? What does failing mean? When is it good to fail? Can we always fail?

Well, one may say that my questions look more like the existential considerations of philosophy students than a topic for a blog post. By and large, that would be a correct statement; however, there is a subtle thread connecting software development and failure which leads me to these questions.

Life means development
The Eastern style of explanation will typically do two things:
  • use an outside-in approach (deduction)
  • try to perceive life from the perspective of what it can bring to us.
There is one thing we should constantly try to do in life: self-development. One of the most expressive and direct examples of such a philosophy were the Japanese warriors. They spent whole days mastering their skills, so that they were better prepared to serve their masters and for inevitable situations: life, fight or death.

Development means progress
Going further, that self-development clearly meant progress. It was progress not only in terms of physical skills, but also in terms of mental strength, comprehension of the world and the sense of life.



Progress means a series of experiments
Samurais knew perfectly well that their progress depended on hard and tough training, topped with dealing with unusual and unpredictable situations. These peculiar situations were in fact devoted to testing and verifying the theories, processes, phenomena and hypotheses they were taught during their lives.
The most valuable warriors could turn all they had learnt into a series of experiments, so that they could prove themselves and their knowledge. It means the most distinguished samurais were experimenters.

A series of experiments means experience
However, not only that. Samurais were also experienced and educated people in many respects.
There is an interesting intuition among people relating experience to old age and grey hair. Aged warriors were considered privileged people, following at least the rules below:
  • there are many paths leading to the same place
  • choose the right path (minimize the risk of losing life, health, family, etc.)
These two rather obvious statements are merely prerequisites to the most instructive parts of each experiment, i.e. a path and a result. Therefore, from the very beginning samurais were made sensitive to two observations:
  • each valuable experiment has a result
  • an experiment without any result is pure waste.
It means that every single action should have an effect. An effort without any result is a waste of time, energy and resources.

Experience means failures
Going further, every result can have one of two values: true or false. Incidentally, it is worth thinking about an experiment in terms of a test. A test is a scenario which exercises some idea, question or system and evaluates to a result.
One may ask which result value is better: true or false? Hmm ... that's actually a very good question. Let's ponder it for a while, then.
A result which is true shows you that the hypothesis you have in mind is correct. It also shows that the path you chose was not too severe compared to your current level of experience. It confirms your beliefs. On the other hand, a result equal to false shows you much more. It tells you that your assumption was wrong. In the samurais' world, every situation in which they failed without causing too much damage to themselves, their family or other people or things was invaluable. Why? Because by failing they did a recon of an area they were unsure of, hopefully at low cost (e.g. a few bruises or broken ribs). By failing in a relatively safe way, they lowered the risk of entering unknown areas in the future.



Fail early, fail often, fail fast with decent failure
Samurais were able to see the value in discovering limitations and risky areas. They greatly appreciated early intelligence about the impact of the actions they took. That is why they mastered their skills on a daily basis: tough and repeatable training, body memory, permanent checks of their mental and physical limitations, fights with bokkens instead of katanas, self-defense without a weapon and against a weapon, etc. All of these were tests in an isolated context, aiming to train for and solve difficult and unusual situations, as well as common manoeuvres happening on a battlefield. They treated their body as a system which needs to be under constant test. By looking at the problem from different angles and considering many aspects of each perspective, they hardened their minds, bodies and hearts for the ultimate test: real life.

All in all, failing seems to be a good thing. However, we have to bear in mind that we cannot just fail. It has to be a controlled failure, something I call a decent failure. In fact, it is a regular failure holding the three properties below:
  • early - saving time; don't go through the whole process to get feedback
  • often - frequent feedback
  • fast - short feedback loop

Be okay with the decent failure
Having a decent-failure mechanism in place is actually a great success! A decent failure is the best thing you can experience, as it shows you all your limitations and pins down the problem in a safe and quick manner.

Friday, 8 March 2013

German General von Manstein about Agile

The fifth principle of the Agile Manifesto says:

"Build projects around motivated individuals. 
Give them the environment and support they need, 
and trust them to get the job done."


Before you build a project, or to be more specific a team, you have to choose 'motivated individuals'. The above rule is just a general guideline, not telling you anything about how to do it. So the question of how to choose motivated people remains open and is somewhat worrisome, as we still do not have a simple and clear protocol for it.
To be fairly honest, it is always hard to give one neat and concise rule describing what to do. Instead, I will quote a rule from the German general von Manstein of the German Officer Corps, to be used as a tool for choosing motivated employees.
Von Manstein used to divide his soldiers into four groups:

  1. Lazy and stupid - leave them alone, as they are doing no harm
  2. Hardworking and intelligent - they make excellent staff officers; they can ensure that every detail is properly introduced and considered
  3. Lazy and intelligent - these are the people suited for the highest office
  4. Hardworking and stupid - they are a menace and should be fired at once, as they create and add irrelevant work for everybody, aka "cannon fodder"



The rule is rigid and acute, not leaving any space for people in between.
It is not the most detailed approach; however, we can leverage its features anyway. It can be treated more as a filter than a simple rule. It gives us some view of who we should avoid. It also tells us about the rough activities and positions people may hold.
It is not the best rule for choosing 'motivated individuals', but it is definitely the very first step on that path.


The Pareto Principle
It's also worth knowing the Pareto Principle, which says:
"80% of your results will come from 20% of your efforts".
It means that you need to work hard to identify the 20% of things you do which generate 80% of what you produce. In other words, this principle teaches us how to be lazy and not fall into the activity trap. Be honest with yourself: nobody cares how busy you are, they only care about what you produce. You don't have to work hard. It's better to decide what NOT to do, so that you have more time for the valuable 20% of your core activities, which gives you 80% of the return on your time investment (ROI).

Self Assessment
So what you can do now is assess your position against the von Manstein and Pareto principles and see where you are. Sometimes it might be a very painful process, but it is definitely worth doing. If you find yourself too low in the hierarchy and are willing to pull yourself up a bit, try the clues below:

  • Don’t try to keep all people happy all the time
  • Have a work plan managing your time
  • Practice saying “no”
  • Don't get stuck in one square, say Smart and Hardworking. Try to improve yourself and change your character, so that you become eligible for another square: Smart and Lazy. Train your laziness in an intelligent and smart way.

Sunday, 10 February 2013

Awaitility - testing asynchronous calls in Java

Asynchronicity plays an important role in the systems we develop nowadays. Below are examples where we use asynchronous operations almost daily:
  • writing to a cache with an asynchronous write to the database behind the scenes, or vice versa
  • JMS queues or topics, which are asynchronous themselves
  • retrieving data from external systems in an asynchronous way
Asynchronous operations are often used for caching data in order to decrease the number of expensive calls to a database, web service, etc. There are also variations of the above caching approach. For instance, one may send time-consuming or algorithmically complicated operations to a JMS queue (an asynchronous system). Later on, when the results of such an intensive operation become available, they can simply be cached in local in-memory storage. The next time a user tries to fetch the same data, it will be fast and easy to retrieve the previously calculated results from the cache. Such an approach gives you the ability to build very fast, scalable, flexible and user-friendly applications/GUIs.
However, there is one caveat here. It is quite difficult to test such an asynchronous system. If we want to do it from scratch, we need to provide thread handlers and timeouts, and generally deal with concurrency, which makes the whole test more obscure and less focused on the business principles ruling the system. If the above is true, it means we have found a niche and, basically, a need for a library to handle our problem.
The answer to our headache is Awaitility. It's a quite simple and powerful Java-based library for testing asynchronous calls. Moreover, it has a concise and expressive DSL for defining expectations.




Now, let's see Awaitility in action.
I wrote a simple application, which is available on GitHub. The basic idea is that there is a DelayedFileCreator class, which is responsible for creating a file on the filesystem. It also mimics expensive, time-consuming calculations with a time delay. The important thing is the result of that operation. This sort of asynchronous call either returns a state or causes a change of state somewhere else; in our case it is a newly created file on the filesystem.


package asynchronousexample;

import java.io.File;
import java.io.IOException;

public class DelayedFileCreator implements Runnable {

 private static final int THREE_SECONDS = 3000;

 private File file;
 private Timer timer;

 public DelayedFileCreator(Timer aTimer, File aFile) {
  timer = aTimer;
  file = aFile;
 }

 @Override
 public void run() {
  sleepBeforeCreatingFile();
  createNewFile();
 }

 private void sleepBeforeCreatingFile() {
  try {
   timer.sleep(THREE_SECONDS);
  } catch (InterruptedException ie) {
   throw new RuntimeException(ie);
  }
 }

 private void createNewFile() {
  try {
   file.createNewFile();
  } catch (IOException ioe) {
   throw new RuntimeException(ioe);
  }
 }
}
 
Ideally, we would like to be able to write a test which invokes the overridden run() method, waits until it is done and asserts the result of the asynchronous operation. Awaitility hits the nail on the head. The example below shows an integration test, using a nice and readable DSL, to assert the result.


package integration;

import asynchronousexample.AsynchronousTaskLauncher;
import asynchronousexample.DelayedFileCreator;
import asynchronousexample.Timer;
import org.testng.annotations.BeforeTest;
import org.testng.annotations.Test;

import java.io.File;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import static com.jayway.awaitility.Awaitility.with;
import static com.jayway.awaitility.Duration.ONE_HUNDRED_MILLISECONDS;
import static com.jayway.awaitility.Duration.TEN_SECONDS;
import static com.jayway.awaitility.Duration.TWO_HUNDRED_MILLISECONDS;
import static org.hamcrest.Matchers.equalTo;

public class CreateFileAsynchronouslyIntegrationTest {

 private static final int THREAD_POOL_SIZE = 3;
 private static final String FILENAME = "sample.txt";

 @BeforeTest
 public void deleteFileFromFileSystem() {
  File file = new File(FILENAME);
  if (file.exists()) {
   file.delete();
  }
 }

 @Test
 public void shouldAsynchronouslyWriteFileOnDisk() throws Exception {
  AsynchronousTaskLauncher launcher = prepareAsynchronousTaskLauncher();
  Runnable delayedFileCreatorTask = prepareDelayedFileCreatorWith(FILENAME);

  launcher.launch(delayedFileCreatorTask);

  with().pollDelay(ONE_HUNDRED_MILLISECONDS)
    .and().with().pollInterval(TWO_HUNDRED_MILLISECONDS)
    .and().with().timeout(TEN_SECONDS)
    .await("file creation")
    .until(fileIsCreatedOnDisk(FILENAME), equalTo(true));
 }

 private Runnable prepareDelayedFileCreatorWith(String filename) {
  Timer timer = new Timer();
  File file = new File(filename);
  return new DelayedFileCreator(timer, file);
 }

 private AsynchronousTaskLauncher prepareAsynchronousTaskLauncher() {
  ExecutorService executorService = Executors.newFixedThreadPool(THREAD_POOL_SIZE);
  return new AsynchronousTaskLauncher(executorService);
 }

 private Callable<Boolean> fileIsCreatedOnDisk(final String filename) {
  return new Callable<Boolean>() {
   public Boolean call() throws Exception {
    File file = new File(filename);
    return file.exists();
   }
  };
 }
}
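The AsynchronousTaskLauncher referenced above is not listed in the post; judging from how the test uses it, a minimal sketch (my reconstruction, the version on GitHub may differ) could be a thin wrapper over an ExecutorService:

```java
import java.util.concurrent.ExecutorService;

// Minimal reconstruction inferred from the test above.
public class AsynchronousTaskLauncher {

 private final ExecutorService executorService;

 public AsynchronousTaskLauncher(ExecutorService anExecutorService) {
  executorService = anExecutorService;
 }

 // Fire and forget: the task runs on a pool thread while the test polls for its effect.
 public void launch(Runnable task) {
  executorService.submit(task);
 }
}
```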

 
The whole beauty lies in the readability and expressiveness of Awaitility. We do not have to take care of thread handling, concurrency aspects, etc. Everything is done by Awaitility.

I really encourage you to use this small but very handy library to test your asynchronous calls.

Tuesday, 15 January 2013

The A Team has A players

I think we all know that university programmes do not conform well to industry requirements. I would even be tempted to state that computer science and IT are the subjects most vulnerable in terms of coherence between what is taught and what is required. Basically, teaching too much science can hurt industry skills. What I mean is the proportion of industry to science skills (I/S). To my mind, it should be close to 1:1. In fact, we often observe a much higher contribution from science compared to industry. My guess, based on my private research, would be that nowadays the ratio floats somewhere between 1:4 and 2:3.

To be clear, I have nothing against science. I have even been extremely keen on science since my childhood. What's more, I have always seen huge value in being open-minded and having analytical skills.
On the other hand, some knowledge of current, top industry technologies, approaches and tricks is of utmost importance. These are your tools. You will be using them on a daily basis. You have to know how to use them!

Okay, rather than grumbling and ranting, let's go straight to a definition of the set of requirements and skills necessary in industry. Incidentally, these might be necessary in science as well.
The requirements below were initially defined and put together by Wojtek Seliga, and then enriched in a few places by me:

Basic programming skills:
- java.util.concurrent package
- GC (how it works, types)
- Socket programming and threads
- TCP/IP, HTTP (Basic Auth, Cookies, Session)
- Scalability - how to build scalable apps
- Performance - how and when to pay attention to performance
- Transactions - types of transactions
- CAP rule

Java Core:
- interface, class
- composition, inheritance
- collections
- complex types, algorithms
- hashCode, search complexity, insertion complexity
- concurrency (thread, monitor, synchronization different approaches, semaphore, cyclic barrier, latch)
- streams
- immutability (thread safe, object pools)
- reflection, AOP, byte code manipulation, generation, dynamic proxy - it explains how dependency injections frameworks, testing libraries work
- Web technologies stack: Struts, Filter, Servlet, Server socket (socket bind accept)
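
To put some code behind one item from the list above, immutability: a class whose state cannot change after construction is inherently thread-safe and can be shared or pooled freely. A small illustrative sketch (the Money class is my example, not from the original list):

```java
// Illustrative immutable value class: final class, final fields, no setters.
public final class Money {

 private final String currency;
 private final long amountInCents;

 public Money(String aCurrency, long anAmountInCents) {
  currency = aCurrency;
  amountInCents = anAmountInCents;
 }

 // "Mutators" return a new instance instead of changing this one.
 public Money add(Money other) {
  if (!currency.equals(other.currency)) {
   throw new IllegalArgumentException("Currency mismatch");
  }
  return new Money(currency, amountInCents + other.amountInCents);
 }

 public long getAmountInCents() {
  return amountInCents;
 }

 public String getCurrency() {
  return currency;
 }
}
```

Note that add() never touches the original object, so no synchronization is needed when such values are shared between threads.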

Libraries:
- JDK - a lot of classes; on the other hand, not so many that you cannot scroll through and eyeball them
- Guava - very good library
- Apache Utils
- Joda-Time - don't use Date and Calendar
- Spring, Nano, Pico, Guice - as dependency injection frameworks

Tools:
- know some IDE (keyboard shortcuts, IDE shouldn't slow you down)
- debugger (not System.out.println all over the place)
- profiler (bottleneck, deadlock)
- internet traffic analyzer (WireShark, Fiddler), FireBug

Books:
- Effective Java, 2nd edition (Joshua Bloch) - after reading it you think you know everything
- Java Concurrency in Practice - after reading it you think you know nothing


Personality:
- intelligent, smart, active
- willing to change and go out from his comfort zone
- they look for new technologies and experience; A players are not afraid to suggest something new, because even if a new idea doesn't work, they learn from their own mistakes
- they have high self-esteem (often justified)
- they can often find a job
- pragmatic
- there is some 'public track' of them in the internet (forums, blogs, conferences, twitter etc.)


So these are the things and topics required by companies from the people they hire. Now think: how many of these were you taught at university, and how many of them do you know at a decent level?

Good Recruiter
We were talking about the proper set of skills for a new joiner. Now let's think about the person I call a good recruiter, and let's analyze his approach to a candidate. A good recruiter is somebody who is able to quickly and honestly estimate the level of a developer. So how do they think?
First of all, small companies pay attention to knowledge. Secondly, the most important language in IT is English. It is a sort of must nowadays, and at least a good command of English is essential. The candidate should also have a correct financial self-esteem and knowledge of market trends in his line of work. A good recruiter should also check the skills which a developer will be using all the time at work, i.e.:
- naming variables and methods,
- IDE knowledge, 
- way of writing tests
- refactoring

These are things a new developer will come across on a daily basis, and this is the sort of thing that should be checked. Please, give me a break with intelligence tests and questions about a bulb falling from a skyscraper, etc. It's all pointless! Check real skills which will be used in real life, not some artificially invented questions meant to show off to the candidate.



The best boss is the most stupid person in the company
There is also the concept of A players. Companies should hire only A and A+ players.
A players have justifiably high self-esteem. That's why the best boss is the most stupid boss, as he knows that only by hiring people better than himself will he be able to build a valuable company.
B players are a bit scared. They don't know what others will say. They are a bit lost. A B player hires people weaker than himself.
C players - misery.

People tend to blindly repeat that developers are the most precious asset of a company.
If so, why do companies raise a precious developer's salary only when he is quitting?
If a developer is so important, stop your most significant project and put your best people (A and A+) on interviewing, because you can lose a great candidate who is waiting for his turn out there. He will pay off a hundred times over anyway.

What we see in reality is that companies send just about anyone to interview the new, fresh blood coming into the company. Is that the right move? Is that how it should be done? Definitely not.
If a company truly believes that developers are its most important asset, it should use its best people to recruit others.

Remember: The A team has only A players.

Monday, 10 December 2012

Code Retreat in ThoughtWorks 2012

This time I took part in a Code Retreat organized by the London branch of ThoughtWorks, and it was awesome!!



I decided to spend half the day writing code in languages other than Java. I chose Groovy, Ruby and C#. I have to say that all the sessions went very well.
Okay, so let me describe them briefly:

#session 1: Groovy with no constraints - just a warm-up in a language similar to Java.


#session 2: Java and baby steps - this one was very good. A quite nice and neat design was emerging from the code, driven by our tests.


#session 3: Ruby and no more than 4 lines of code per method - actually, 4 lines of code per method wasn't hard to achieve in Ruby. To be fairly honest, the longest method we had, had literally 2 lines. What's more, the Ruby guy I was pairing with showed me some magic in Vim. I thought I knew Vim quite well, but it appears I am barely a novice :P


#session 4: Java and immutable objects - this was actually quite an interesting one as well. Immutable objects affected our design in ways we did not expect at the very beginning.


#session 5: Java and no primitives - again another interesting design emerged.


#session 6: C# and tell don't ask (our choice) - it was actually quite a challenge to use only procedures.


All in all, it was a very educational, eye-opening and enlightening day. Although there were a couple of tricks I learned, I think the most valuable one relates to the way of copying & pasting code. If you really need to do it, do it as below:

- copy & paste the code
- use question marks to denote parts of code you have to amend
- replace question marks with valid code starting from the top

Just to make things clear, let's have a look at the example below:


"Any live cell with fewer than two live neighbours dies, as if caused by under-population."

@Test
public void liveCellWithFewerThanTwoLiveNeighboursDies() {
	int oneNeighbour = 1;
	Cell liveCell = new Cell(ALIVE, oneNeighbour);

	liveCell.tick();

	assertThat(liveCell.getState(), is(equalTo(DEAD)));
}

And here you decide to copy the above test in order to tweak it slightly, so that you can get another valid test.

"Any live cell with more than three live neighbours dies, as if by overcrowding."
@Test
public void liveCellWith???NeighboursDies() {
	int ???Neighbour??? = ???;
	Cell liveCell = new Cell(ALIVE, ???Neighbour???);

	liveCell.tick();

	assertThat(liveCell.getState(), is(equalTo(DEAD)));
}

And here is the result:

@Test
public void liveCellWithMoreThanThreeLiveNeighboursDies() {
	int fiveNeighbours = 5;
	Cell liveCell = new Cell(ALIVE, fiveNeighbours);

	liveCell.tick();

	assertThat(liveCell.getState(), is(equalTo(DEAD)));
}

As simple as that, but it helps you avoid silly errors.
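
For completeness, the Cell class those tests exercise is not shown in the post; a minimal sketch consistent with the two rules quoted above (my reconstruction, with ALIVE/DEAD modelled as an enum) might be:

```java
// Minimal reconstruction consistent with the tests above; the real kata code surely evolved further.
public class Cell {

	public enum State { ALIVE, DEAD }

	private State state;
	private final int liveNeighbours;

	public Cell(State aState, int aLiveNeighbours) {
		state = aState;
		liveNeighbours = aLiveNeighbours;
	}

	// Applies the under-population and overcrowding rules from the kata.
	public void tick() {
		if (state == State.ALIVE && (liveNeighbours < 2 || liveNeighbours > 3)) {
			state = State.DEAD;
		}
	}

	public State getState() {
		return state;
	}
}
```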