Singletons have many faces. One of them is the logging mechanism. Before we take a deeper look into the entire case, let's go straight to a simple example to set some context.
package com.korczak.oskar.refactoring.singleton.logger.before;

import com.korczak.oskar.refactoring.singleton.logger.Engine;

import org.junit.runner.RunWith;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.mockito.runners.MockitoJUnitRunner;
import org.testng.annotations.BeforeMethod;
import org.testng.annotations.Test;

import static org.mockito.Mockito.verify;
import static org.mockito.MockitoAnnotations.initMocks;

@RunWith(MockitoJUnitRunner.class)
public class CarTest {

    @Mock
    private Engine engine;

    @InjectMocks
    private Car car;

    @BeforeMethod
    public void setUp() {
        initMocks(this);
    }

    @Test
    public void shouldRunEngine() {
        car.start();
        verify(engine).run();
    }

    @Test
    public void shouldStopEngine() {
        car.stop();
        verify(engine).stop();
    }
}
package com.korczak.oskar.refactoring.singleton.logger.before;

import com.korczak.oskar.refactoring.singleton.logger.Engine;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class Car {

    private Logger carLogger = LoggerFactory.getLogger(Car.class);

    private Engine engine;

    public Car(Engine engine) {
        this.engine = engine;
    }

    public void start() {
        engine.run();
        carLogger.info("Engine is running.");
    }

    public void stop() {
        engine.stop();
        carLogger.info("Engine is stopped.");
    }
}

We can also assume that the above snippet is part of a, so called, fairly decent application, which logs everything everywhere, so to speak.
What can you tell about such an application?
Is there anything in particular that stands out in our imaginary application?
It may look like a somewhat silly task, but unfortunately it has very deep implications in terms of design, solving production issues, sometimes performance, and the overall usage and approach. Quite a few things are tangled up with logging as such.
First of all, let's decompose the above snippet with attention to some basic facts:
- The static logger is a Singleton
- Breach of the unit test definition, i.e. the system under test writes to a file/console
- Lack of readability, to some extent
- A bit of noise in the business logic
Now, let's discuss the above points a bit further and infer some more useful information.
Logger is a Singleton
The logger is created via a static call and a Singleton is returned. However, it is not that bad, as we are only writing to the Singleton and never reading from it. That means we are not exploiting the global state introduced into our JVM by the Singleton. On the other hand, it is worth bearing in mind that a static call means no seams. There are two direct implications of that:
- untested production code, as there are no seams
- the message will be logged anyway while the tests run

Unit test definition breach

That is how we came to the second fact, related to unit tests.

Michael Feathers defines unit tests as:

A test is not a unit test if:
- It talks to the database
- It communicates across the network
- It touches the file system
- It can't run at the same time as any of your other unit tests
- You have to do special things to your environment (such as editing config files) to run it

I suppose we can agree that the above test is an integration test rather than a unit one. Feathers's definition sticks to what people commonly understand by the term unit test.
Why don't we finally admit that so-called unit tests which log information to a file/console are, in fact, decent integration tests. Full stop.
What is the business-related reason?
Going further, one can focus on the business reason in that whole tangled situation and answer a question: is there any business-related reason to log particular information in a particular place?
The most common answer would be that by logging we mean sending a message from the application to whoever is browsing the logs. Usually, there would be two main groups of recipients: the L1/L2 and L3/testers/developers teams.
The L1/L2 guys would like to see only the nasty things happening in a given application, e.g. errors, alerts and perhaps warnings. On the other hand, L3/testers/developers would like to see a bit more, i.e. the states generated by the application during its processing, etc. It is worth reiterating that none of them is a business person.
When we dig a bit deeper and ask what the underlying principle of logging is, we will probably come to the two points below:
- log the state of a processed unit
- say 'I was here' or 'I have chosen that path in your application logic'
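As a rough illustration of those two principles, consider the sketch below. The class, method names and messages are made up for this example, and the log lines are collected in a plain list (instead of going through a real logger) purely so the snippet stays self-contained:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical processing class illustrating the two logging principles.
class OrderProcessor {

    // Collected here instead of a real logger, so the example is self-contained.
    final List<String> logLines = new ArrayList<>();

    void process(String orderId, int amount) {
        // Principle 2: 'I was here'.
        logLines.add("Entered process() for order " + orderId);
        if (amount > 1000) {
            // Principle 2, second flavour: 'I have chosen that path'.
            logLines.add("Chosen the high-value path");
        }
        // Principle 1: log the state of the processed unit.
        logLines.add("Order " + orderId + " state: amount=" + amount);
    }
}

public class LoggingPrinciples {
    public static void main(String[] args) {
        OrderProcessor processor = new OrderProcessor();
        processor.process("A-1", 2000);
        processor.logLines.forEach(System.out::println);
    }
}
```

Notice that none of the three messages carries a business requirement; all of them exist purely for whoever will later browse the logs.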
Readability and noise
Unfortunately, logging negatively affects the overall readability of the business logic to some extent. It adds noise and clutters the codebase when it is overused.
How might it be done?
The easiest way might be to imagine an event-driven system, where an event comes in, is processed, and some final response is sent out. As was pointed out, logging would be used for diagnostic purposes. If so, I am questioning the logger usage almost entirely. Why don't we change our application's design so that it can monitor, diagnose and track the state of all incoming events itself?
I can already hear the answers: 'it is too much effort', 'we don't need it' or, even better, 'our project is different'. ;)
Partially, that might be true, and I agree it is pointless in trivial cases to build an error-aware application for your younger sister's diary, although it is still worth at least considering in larger projects.
Also, don't get me wrong: if there is a clear business drive for having something logged, just do it. The only thing I am asking you to remember is that logging is a sort of contract between the application and whoever is reading the logs. If there is a business need for having some log entry, simply test it.
When you have a self-aware system in place and an issue occurs, you may ask your application for some useful 'diagnostic information' (including states) on demand, via, say, a REST API or whatever.
Implementing that sort of approach is in fact a win-win situation for all project stakeholders. Not only are L3 and the developers able to deal with errors in processing, but the testers and the L1/L2 teams are able to do that too. Even BAs can start analyzing and solving some burning questions themselves.
In the long run, it is less work for busy developers, and everybody can leverage and use the diagnostic information. What's even more important, the knowledge and the responsibility are shared across the entire team and stakeholders. It only requires two things:
- Stop logging everything to the log files
- Add a sort of issues memory or events state memory to your system
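The second point could be sketched as below. This is a minimal, assumption-laden illustration: the class and method names (`EventStateMemory`, `record`, `diagnose`) are hypothetical, not from the article's repository, and the REST layer mentioned earlier is left out; only the on-demand query is shown:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// A minimal 'events state memory': the application records the state of each
// processed event here instead of writing it to a log file.
class EventStateMemory {

    private final Map<String, String> lastState = new ConcurrentHashMap<>();

    // Called by the application at each processing step.
    void record(String eventId, String state) {
        lastState.put(eventId, state);
    }

    // Queried on demand (e.g. behind a REST endpoint) when someone investigates an issue.
    String diagnose(String eventId) {
        return lastState.getOrDefault(eventId, "unknown event: " + eventId);
    }
}

public class SelfAwareSystemSketch {
    public static void main(String[] args) {
        EventStateMemory memory = new EventStateMemory();
        memory.record("order-42", "VALIDATED");
        memory.record("order-42", "FAILED: missing account");
        System.out.println(memory.diagnose("order-42")); // prints "FAILED: missing account"
    }
}
```

With something like this in place, a tester or an L1/L2 engineer can ask the system itself what happened to a given event, instead of asking a developer to grep the logs.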
The first step here is to start testing the logging in your applications. Try to make the logger a collaborator of your class, not a static call, and use it only where it is really necessary.
Testing logging enforces thinking about every single message that might potentially be logged. The additional effort spent on writing a test for the logger should guide an aware developer through all of the above concerns, helping them understand the business reason behind a particular piece of code.
package com.korczak.oskar.refactoring.singleton.logger.after;

import com.korczak.oskar.refactoring.singleton.logger.Engine;

import org.junit.runner.RunWith;
import org.mockito.ArgumentCaptor;
import org.mockito.Captor;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.mockito.runners.MockitoJUnitRunner;
import org.slf4j.Logger;
import org.testng.annotations.BeforeMethod;
import org.testng.annotations.Test;

import static org.mockito.Mockito.verify;
import static org.mockito.MockitoAnnotations.initMocks;
import static org.testng.AssertJUnit.assertEquals;

@RunWith(MockitoJUnitRunner.class)
public class DecentCarTest {

    @Mock
    private Engine engine;

    @Mock
    private Logger decentCarLogger;

    @InjectMocks
    private DecentCar decentCar;

    @Captor
    private ArgumentCaptor<String> argument;

    @BeforeMethod
    public void initializeMocks() {
        initMocks(this);
        argument = ArgumentCaptor.forClass(String.class);
    }

    @Test
    public void shouldRunEngineAndLogMessage() {
        decentCar.start();

        verify(engine).run();
        verify(decentCarLogger).info(argument.capture());
        assertEquals("Engine is running.", argument.getValue());
    }

    @Test
    public void shouldStopEngineAndLogMessage() {
        decentCar.stop();

        verify(engine).stop();
        verify(decentCarLogger).info(argument.capture());
        assertEquals("Engine is stopped.", argument.getValue());
    }
}
package com.korczak.oskar.refactoring.singleton.logger.after;

import com.korczak.oskar.refactoring.singleton.logger.Engine;

import org.slf4j.Logger;

public class DecentCar {

    private Engine engine;
    private Logger decentCarLogger;

    public DecentCar(Engine engine, Logger decentCarLogger) {
        this.engine = engine;
        this.decentCarLogger = decentCarLogger;
    }

    public void start() {
        engine.run();
        decentCarLogger.info("Engine is running.");
    }

    public void stop() {
        engine.stop();
        decentCarLogger.info("Engine is stopped.");
    }
}
You can find the entire example on my GitHub account.
Round three is finished.