Continuous integration is the software development practice of regularly integrating code changes into a shared source repository. This usually happens at least once or several times a day (depending on the number of code commits), and the approach favors committing small changes frequently over committing large changes rarely. Every commit triggers a build in which tests are run, making it possible to detect whether the changes have broken anything.
Here are a few best practices for the continuous integration process that will make your testing easier.
1. Perform integration testing before unit testing – This probably sounds unconventional, since most of us have been trained to believe that the later in the development cycle you discover a defect, the costlier it is to resolve. By that logic, it is always better to finish all the details before moving on to the “significant matters” such as integration testing.
The problem is that this claim is rooted in the waterfall development model, in which you did not move on to the next phase until the current one was complete. Once you move to agile development, however, the idea is no longer relevant. Agile gives you the flexibility to change the business logic as needed as you move along.
2. Avoid testing business logic with integration testing – That is the main purpose of unit tests. Mixing unit tests with integration tests can have severe consequences for the time it takes to run your test suite. Unit tests are usually very quick, so they are run for every build generated in the CI environment.
Because they verify the basic correctness of the code, running them often is crucial for identifying errors in the business logic early, so that the developer who introduced the bug can fix it immediately. Integration tests, by contrast, take much longer to run, so they should not be part of every build cycle but of something closer to a daily build.
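To illustrate why unit tests are cheap enough to run on every build, here is a minimal sketch in Python using pytest; the article does not prescribe a language or framework, so both, along with the apply_discount function, are assumptions:

```python
# Hypothetical business-logic function, used for illustration only.
def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


# Unit tests for that logic: no network, database, or file system,
# so they run in milliseconds and can safely execute on every commit.
import pytest

def test_apply_discount():
    assert apply_discount(100.0, 25) == 75.0

def test_apply_discount_rejects_bad_percent():
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)
```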
3. Understand the difference between integration testing and unit testing – There are several clear indicators that help you differentiate an integration test from a unit test (a code sketch contrasting the two follows this list):
Encapsulation – Unit tests are well encapsulated and do not use external resources; integration tests, on the other hand, depend on external components or infrastructure such as the network, a database, or the file system.
Complexity – Unit tests target small, distinct pieces of the code, so they are usually easy to write. Integration tests are more complex, frequently requiring tooling and the setup of supporting infrastructure.
Test failure – When a unit test fails, it is a clear indication of a bug in the business logic of the code. When an integration test fails, however, there is usually no need to inspect the code that implements the business logic; the unit tests should already have flushed out bugs at that level. It is far more likely that something has changed in the environment and needs to be addressed.
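To make those indicators concrete, here is a hedged Python/pytest sketch that checks the same hypothetical business rule twice: once as an encapsulated unit test, and once as an integration test that touches a real external resource (a SQLite database on the file system). All names are invented for illustration:

```python
import sqlite3

def is_valid_username(name: str) -> bool:
    """Business logic: usernames are 3-20 alphanumeric characters."""
    return 3 <= len(name) <= 20 and name.isalnum()

# Unit test: fully encapsulated, no external resources. A failure here
# points directly at a bug in the business logic.
def test_is_valid_username():
    assert is_valid_username("alice")
    assert not is_valid_username("a!")

# Integration test: exercises a real database on the file system (tmp_path
# is pytest's built-in temporary-directory fixture). A failure here is more
# likely to mean something changed in the environment.
def test_username_survives_database_roundtrip(tmp_path):
    conn = sqlite3.connect(tmp_path / "users.db")
    try:
        conn.execute("CREATE TABLE users (name TEXT NOT NULL)")
        conn.execute("INSERT INTO users VALUES (?)", ("alice",))
        conn.commit()
        (stored,) = conn.execute("SELECT name FROM users").fetchone()
    finally:
        conn.close()
    assert is_valid_username(stored)
```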
4. Maintain your test suites separately – Integration tests must not be run together with unit tests. Developers working on the business logic should be able to run the unit tests and get near-immediate feedback, ensuring they have not broken anything before committing code. If the test suite takes too long and they cannot wait for it to finish before committing, they are likely to stop running tests altogether (integration and unit tests alike). That also means the unit tests are no longer properly maintained, which can eventually put you in a position where the effort required to bring the test suite back in step with the code causes real delays in delivery.
By maintaining your test suites separately, your developers can comfortably run the quick unit tests during development and before committing any code. The lengthy, tedious integration tests should be set aside for the build server, in a separate test suite that runs less frequently.
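One common way to achieve that separation, again assuming pytest, is to tag the slow tests with a custom marker so developers can exclude them locally while the build server runs the full suite. The marker name and test contents below are illustrative:

```python
# conftest.py – register the custom marker so pytest does not warn about it.
def pytest_configure(config):
    config.addinivalue_line(
        "markers", "integration: slow tests that touch external resources"
    )

# test_payments.py
import pytest

def test_fee_calculation():
    # Fast unit test: part of every per-commit run.
    assert round(100 * 0.029 + 0.30, 2) == 3.20

@pytest.mark.integration
def test_payment_gateway_handshake():
    # Placeholder for a slow test against a sandbox API (hypothetical).
    ...

# Developers run only the fast tests before committing:
#   pytest -m "not integration"
# The build server runs everything on its slower cadence:
#   pytest
```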
5. Log extensively – A unit test has a narrow, well-defined scope and usually exercises a very small piece of your application, so when it fails it is usually easy to understand why and fix the issue. Integration tests are quite different: their scope may span several software modules, not to mention various devices and hardware components, in any given functional flow. When an integration test fails, then, it can be much harder to pinpoint the cause.
Extensive logging is often the only way to analyze a failure and discover where the problem lies. Be aware, however, that extensive logging can have a substantial impact on performance, so it should be enabled only when needed. Use a capable logging framework that can be controlled through flags, allowing minimal logging during normal production use and progressively more detail when a problem is being investigated.
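As a minimal sketch with Python's standard logging module, verbosity can be driven by a flag so production stays quiet by default; the environment-variable name is an assumption:

```python
import logging
import os

# Default to WARNING in production; raise verbosity through an environment
# variable (hypothetical name) only while investigating a failure.
level_name = os.environ.get("APP_LOG_LEVEL", "WARNING")
logging.basicConfig(
    level=getattr(logging, level_name.upper(), logging.WARNING),
    format="%(asctime)s %(name)s %(levelname)s %(message)s",
)
log = logging.getLogger("integration.checkout")

log.debug("request payload: %s", {"cart_id": 42})  # detailed, off by default
log.warning("upstream service responded slowly")   # recorded in production too
```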
6. Do not halt at integration testing – Testing does not end with how your software components work with one another, or even with third-party components. Your software will ultimately be deployed into a full production ecosystem that may involve virtualization tools, load balancers, DNS servers, proxy servers, databases, mail servers, and much more. Your clients’ user experience depends not only on your application itself but on how it is deployed in your production environment and how it interacts with all of those external components. So once you have validated your high-level architecture with integration testing, make sure you also run system tests that accurately simulate your production environment.
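As a closing sketch, a system-level smoke test goes through the deployed stack (DNS, proxy, load balancer, application, database) rather than through in-process components. This one uses only Python's standard library; the URL and the health-check payload are placeholders:

```python
import json
import urllib.request

# Hypothetical staging environment that mirrors production routing.
BASE_URL = "https://staging.example.com"

def test_health_endpoint_through_full_stack():
    # The request travels the same DNS/proxy/load-balancer path as real traffic.
    with urllib.request.urlopen(f"{BASE_URL}/health", timeout=10) as resp:
        assert resp.status == 200
        body = json.loads(resp.read())
        # The health check reports whether the app can reach its own
        # dependencies (database, mail server, and so on).
        assert body.get("database") == "ok"
```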