Using shared Jenkins libraries for CI/CD pipelines

Context

While working in one of many product teams at a large company, I gained some insight into the different ways Jenkins is used for CI/CD pipelines. A commonly encountered issue is whether to use shared libraries to support the creation of build and deployment pipelines. Each team is responsible for its respective products, including the associated CI/CD pipelines to build and deploy them. As a result, these teams follow different approaches to whether and how to use shared libraries for creating these pipelines. I would like to share some thoughts on how to make an informed decision on this topic.

What is this all about?

As the name shared library might indicate, it is a means to extract common functionality that can be shared between multiple projects. That is, multiple Jenkinsfiles may use the same function, which is implemented in exactly one place. So this is about applying the DRY principle.
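To give a rough idea of what this looks like in practice, here is a minimal sketch. The file and function names are made up for illustration; in a shared library, each file in the vars/ directory becomes a global variable callable from a Jenkinsfile:

```groovy
// vars/sayHello.groovy in the shared library repository (hypothetical name).
// The file name becomes the function name available in Jenkinsfiles.
def call(String name = 'world') {
    echo "Hello, ${name}!"
}
```

A Jenkinsfile that has loaded the library can then simply call `sayHello('team')`, and the implementation lives in exactly one place.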

In this short article I am not going to explain how to set up a shared library. If you are interested in these details, please have a look at the official documentation.

Why would I need a shared library at all?

Spoiler: If you use Jenkins for just a single project then you may just want to stop reading here … unless you are interested in the topic, of course 😉

As stated above, shared libraries extract reusable functionality into a single place. Therefore, their usage is potentially a good thing for

  • companies that want to establish a common build process for multiple products

  • teams that are responsible for multiple projects, wanting to be able to tweak their builds in an efficient way

Especially in the second case I experienced some resentment against centralizing common functionality. The argument is usually simplicity: If I have everything inside my Jenkinsfile then I have everything I need to understand my build/deploy process and I can quickly tweak it if necessary.

While there is some truth in this argument, I would reply that, depending on what the pipeline is intended to do, that Jenkinsfile might become very large. This, in turn, reduces my ability to understand it.

Besides that, I think that a developer who is keen to understand the build process will not be deterred by the need to peek into a shared library. Conversely, a developer not interested in the build process will avoid meddling with it either way.
Another thought to consider: copying and pasting similarly structured Jenkinsfiles sounds simple, but it quickly becomes error-prone with a larger number of projects, especially when changes have to be applied throughout.

How to use?

As with most challenging topics, there is no "one size fits all" solution. However, there are ways that appear better than others. These will be covered here, based on different scenarios.

Not enough projects

If you just have very few projects at hand (maybe just one) then it might really be overkill to set up a shared library. Unless, of course, you plan to extend your portfolio.

Differing projects

If you have multiple projects which have little in common (especially concerning the way they are built/deployed), then the possibility of extracting commonalities is restricted. You may not be able to factor out larger workflows or complete build steps. However, it may be possible to have a shared library serving as a "toolbox", providing low-level functions which can be used to simplify your actual pipeline steps.
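A "toolbox" function of this kind might look like the following sketch. The name and the Maven flags are assumptions, not prescriptions; the point is that the helper wraps one low-level concern so all pipelines invoke it the same way:

```groovy
// vars/runMaven.groovy -- a low-level "toolbox" helper (hypothetical name).
// Wraps the mvn invocation so every pipeline uses the same base flags.
def call(String goals) {
    sh "mvn -B -ntp ${goals}"
}
```

A Jenkinsfile step could then call `runMaven('clean verify')` while keeping full control over its own stage structure.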

Similar projects

If your project portfolio consists of multiple similar projects (regarding build/deployment), you may be able to factor out entire build steps into your shared library. I call this the "workflow" approach.

This way, each of your pipeline steps may even consist of only a single function call. You might also use your shared library to create your pipeline steps altogether.

This scenario allows for the most options but also requires you to choose one of them.

The "toolbox" approach is easiest to understand and maintain. It is easy to use simple functions while implementing your pipeline steps and it is also easy to maintain these simple functions.
However, this approach does not let you control the build process. Thus, you might end up with very different looking Jenkinsfiles that do similar things in different ways.

The "workflow" approach makes the Jenkinsfiles look similar, as they delegate entire pipeline steps into the shared library. Its advantage: you may tweak your build/deploy process in one place. Its disadvantage: Maintaining the shared library is more difficult as it requires deeper knowledge of the build process itself.
You may create all of your pipeline steps from a single entrypoint, which will drastically simplify your Jenkinsfile. However, this approach comes with a caveat: it may prevent you from introducing extra pipeline steps which might be needed only for specific cases. Providing the implementation of common pipeline steps through the shared library while allowing for additional steps in the Jenkinsfile yields more flexibility here. You should check your requirements on which approach seems more fitting for you.
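The single-entrypoint variant can be sketched as follows. Defining a declarative pipeline inside a shared library's vars/ script is a documented Jenkins pattern; the file name, parameters and commands below are assumptions for illustration:

```groovy
// vars/standardPipeline.groovy -- generates the whole pipeline (hypothetical).
// Projects pass a small config map instead of writing their own stages.
def call(Map config = [:]) {
    pipeline {
        agent any
        stages {
            stage('Build') {
                steps {
                    sh config.buildCommand ?: 'mvn -B verify'
                }
            }
            stage('Deploy') {
                steps {
                    sh config.deployCommand ?: './deploy.sh'
                }
            }
        }
    }
}
```

A project's Jenkinsfile then shrinks to little more than `standardPipeline(buildCommand: 'mvn -B -Pprod verify')` — which illustrates both the simplification and the caveat: a project needing an extra stage has no obvious place to put it.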

You might also go for a somewhat hybrid approach. Based on a layer of simple functions ("toolbox"), you may place further layers of more complex functionality on top.
This allows for the most flexibility in implementing Jenkinsfiles which, again, may not be that similar to each other. Maintaining such a shared library can also be challenging, as the layering of functionality is crucial and must be done in a consistent and understandable way.

Hints for improving maintainability:

  • Start with a small set of public functions which are intended to be used from a Jenkinsfile and enlarge the capabilities as required

  • Do not write big monolithic functions but split them into smaller ones, which are easier to understand

  • Group functions into files based on a clear common context

  • Mark helper functions as nonpublic (e.g. by prefixing them with an underscore and documenting this convention). That way you are free to alter your implementation without breaking the public API
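The underscore convention from the last hint might look like this sketch (all names hypothetical). Groovy has no real visibility enforcement in vars/ scripts, so the prefix is purely a documented signal:

```groovy
// vars/deployApp.groovy (hypothetical)
// Public entry point -- part of the library's API, kept stable.
def call(String targetEnv) {
    _validateEnv(targetEnv)        // underscore marks an internal helper
    sh "./deploy.sh ${targetEnv}"  // hypothetical deployment script
}

// Internal helper; may change without notice, do not call from Jenkinsfiles.
def _validateEnv(String targetEnv) {
    if (!(targetEnv in ['staging', 'prod'])) {
        error "Unknown environment: ${targetEnv}"
    }
}
```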

What should I be aware of?

If you decide to give shared libraries a chance then you should take care of some important aspects, such as the following:

Choose your strategy

Think about which strategy you want to support with your library. Whether it be a simple toolbox or a more sophisticated approach:

  • Analyze your project portfolio to check which strategies are feasible

  • Consider the opinions and capabilities of your team members - they will have to use it

  • Consider who will be responsible for maintaining the shared library

  • Maintain your chosen strategy in order to prevent chaos

A side note on maintaining your strategy: Naturally, this hint only works as long as the situation does not change. If the number of your projects increases, then you should indeed reconsider your strategy decision. Just remember to implement changes in a controlled fashion and not in an ad-hoc style. A reduction of projects is usually no reason to change your chosen strategy if your implemented solution is clean and understandable.

Have good documentation

It is of vital importance to document your shared library so that it can be easily used by the whole team. Documentation should cover all relevant aspects:

  • How to add the shared library to a new project?

  • What is the intended usage strategy?

  • What does the API of the shared library look like and how am I supposed to use it?

Be consistent in your implementation

Using a shared library can be a pain in the ass if it is just a dump of incoherent functions of different granularity. To make it better, ensure that your library is consistent:

  • functionality is grouped by topic

  • domain-agnostic and domain-specific functionality is separately packaged

  • functions packaged together (e.g. within the same script) have a similar granularity

Think about migration

Whatever your shared library might look like: if it is going to be used, then sooner or later feature requests or change requests will arise. As the point of having a shared library is to serve as a foundation for multiple projects, you need a migration strategy for this case. Especially if you want or need to introduce breaking changes, it might not be feasible to update the affected projects all at once.

One approach which has worked for me is the following:

Use semantic versioning for your shared library. That way you can identify breaking changes by increasing the major version.

Use branch names identified by the major version to manage different versions of your shared library. This allows you to support multiple versions of your shared library simultaneously. For example, start with branch '1' and, when introducing a breaking change, spawn a new branch '2'. Users of the shared library may then reference the library branch, which makes minor changes immediately available to them without extra work.

Tag each version of your shared library. Naturally, the major version of each tag will match the name of the branch you are working on. When working on branch '1' you may have tags '1.0.0', '1.1.0' for example. Tagging allows the more cautious users of the library to refer to tags instead of branches. Thus they may even control the application of minor changes to their pipelines.
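The branch-per-major-version scheme can be sketched with plain git commands. Everything here runs in a throwaway repository; the file name and commit contents are hypothetical:

```shell
set -e
repo=$(mktemp -d)                 # throwaway repository for the sketch
cd "$repo"
git init -q
git config user.email "ci@example.com"
git config user.name "ci"

mkdir vars
echo 'def call() { echo "build" }' > vars/stageBuild.groovy
git add -A
git commit -qm "initial library version"

git branch -M 1                   # major-version branch '1'
git tag 1.0.0                     # first stable tag on branch '1'

git commit -qm "non-breaking addition" --allow-empty
git tag 1.1.0                     # minor release, same branch

git checkout -qb 2                # breaking change: spawn branch '2'
git commit -qm "breaking change" --allow-empty
git tag 2.0.0

git tag --list                    # lists 1.0.0, 1.1.0 and 2.0.0
```

Cautious consumers pin `@Library('mylib@1.1.0')`, while others track `@Library('mylib@1')` and receive minor updates automatically.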

When implementing features or bugfixes you should use separate branches of your shared library. That way you can safely test your changes without breaking production code.

Have release notes describing relevant changes and provide a migration guide where necessary.

Some code snippets at last

Referring a shared library in the Jenkinsfile:

@Library('mylib@1') _

pipeline {
  //...
}

This will make the functionality of mylib available for usage within the pipeline. Here the version 1 is explicitly referenced, which might be a branch or tag name. The trailing underscore is needed because the annotation must be attached to some element; by convention an otherwise unused underscore symbol is used.

The "workflow" approach may appear somehow like this in a Jenkinsfile:

//...
stage('Build') {
  steps {
    stageBuild()
  }
}

//...

post {
  always {
    stagePost()
  }
}
//...

You define the steps and their order, but the implementation is delegated to single function calls into the shared library. These functions can create sub-steps if necessary.
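The library side of such a call might look like the following sketch. The file name and the concrete steps (Maven build, JUnit report path) are assumptions chosen for illustration:

```groovy
// vars/stageBuild.groovy -- library side of the "workflow" approach (hypothetical).
// The Jenkinsfile only calls stageBuild(); the actual steps live here.
def call() {
    sh 'mvn -B -ntp clean verify'           // compile and run the tests
    junit 'target/surefire-reports/*.xml'   // publish the test results
}
```

Changing the build for all projects now means changing this one file — the central advantage (and the central responsibility) of the approach.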

The "toolbox" approach is more explicit and uses only helper functions to simplify the implementation residing within the Jenkinsfile:

//...
stage('Build') {
  steps {
    buildMavenProject()
    shareTestReport()
  }
}

//...

post {
  success {
    //...
  }

  failure {
    sendMail(recipient, 'the build failed...')
  }
}
//...

About the author
Andreas Senft is an experienced developer with a focus on backend development and DevOps
