Folker Bernitt

No Plugin CI/CD Builds

TL;DR

Avoid over-reliance on CI/CD plugins that obscure processes and hinder local reproducibility. Instead, use scripts and Makefiles to automate builds, tests, and linting, ensuring consistency across local and CI environments. Plugins should complement, not replace, developer workflows, handling tasks like environment provisioning, secrets management, or caching. Scripted automation simplifies switching CI tools and maintains the fast feedback cycles that are critical for developer productivity.

CI and plugins

A continuous integration pipeline that automatically lints, builds, and tests code changes is part of most, if not all, teams' workflows. There are many tools and products in this ecosystem that support teams in managing their CI setup.

Some of them, e.g., Jenkins, come with a plethora of plugins or, in the case of GitHub Actions, a myriad of reusable actions. The goal is to enable sophisticated CI pipelines and make them more accessible and usable. Many teams make extensive use of them, sometimes with an unintended consequence: developers can no longer run builds, linters, or tests locally, as they don't have the plugins or actions available in their development environment.

While this lack of local reproducibility already poses challenges for most developers, for teams that leverage trunk-based development and try to avoid a pull request-based workflow, both the problems and the stakes are even higher. Keeping the trunk green the majority of the time is nigh impossible if one cannot easily run the tests locally.

Script your build automation

Before the rise of fully integrated CI products, projects were often built with a simple call to make or make test, or with shell scripts. These approaches provide both automation of repetitive tasks and executable documentation. One main benefit of this approach is that the automation can be reused in the CI build as well. The automation goes beyond running one-liner build commands: it can also leverage dockerized build toolchains, spawn test containers, or extend all the way to fully automated deployment. Running parts of the automation before committing or pushing changes can easily be integrated into the workflow by leveraging tools like pre-commit.
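
As a minimal sketch, such a script could wrap the test run in a dockerized toolchain. The image name and Maven commands below are illustrative assumptions, not a prescription; a Makefile with build, test, and lint targets works just as well:

    #!/usr/bin/env bash
    # scripts/test.sh - run the test suite inside the dockerized build toolchain,
    # so developers and CI runners use exactly the same tool versions.
    set -euo pipefail

    # Illustrative toolchain image; pin whatever the project actually uses.
    TOOLCHAIN_IMAGE="maven:3-eclipse-temurin-17"

    docker run --rm \
      -v "$(pwd)":/workspace \
      -w /workspace \
      "$TOOLCHAIN_IMAGE" \
      mvn verify

Companion scripts such as scripts/build.sh and scripts/lint.sh can follow the same pattern, and a pre-commit hook can invoke them before every commit.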

CI servers should orchestrate, not obscure

These scripts and Makefiles can be executed by the CI runners as well. The various stages of a pipeline, like build, test, or lint, end up being simple calls to the very same scripts that developers use locally. Of course, that is not the whole truth: toolchains need to be downloaded, test environments set up, and credentials injected. These are part of the automated orchestration, and some of them are already handled by the scripts themselves (e.g., downloading and running a toolchain docker container).
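
On GitHub Actions, for example, the pipeline then shrinks to a thin orchestration layer around those scripts. The workflow below is a sketch; the script paths are the illustrative ones from above:

    name: ci
    on: push

    jobs:
      verify:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          # Each stage is just a call to the same scripts developers run locally.
          - name: Lint
            run: ./scripts/lint.sh
          - name: Test
            run: ./scripts/test.sh
          - name: Build
            run: ./scripts/build.sh

The only CI-specific parts that remain are the checkout step and the runner selection.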

Deviations are small and mostly limited to interactions with the environment; if statements or context-specific includes (e.g., scripts/env.ci.sh vs. scripts/env.dev.sh) are sufficient and, in my experience, rarely if ever become complex.
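
A sketch of such an include at the top of the build scripts; the file names match the ones above, and relying on the CI variable (which most CI systems export) is an assumption about the setup:

    # Load context-specific settings: CI runners usually export CI=true,
    # developer machines do not.
    if [ "${CI:-false}" = "true" ]; then
      source "$(dirname "$0")/env.ci.sh"   # e.g., service endpoints, headless mode
    else
      source "$(dirname "$0")/env.dev.sh"  # e.g., local ports, developer defaults
    fi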

CI plugins and complex GitHub Actions often obscure the underlying processes, making it challenging for developers to run specific tests or linters locally. Even simple plugins that merely wrap the linter execution lead to situations where a developer first needs to understand what the plugin does and reproduce its setup before they can, as a one-off, run the linter locally. While using plugins might seem convenient at first and help teams build momentum quickly, in my experience the long-term costs outweigh the initial benefits. Having one way that is reused everywhere not only ensures consistency but also codifies how things are run.

Which plugins or actions are valuable then?

There are situations where plugins and reusable actions have their merits. When they do, they typically do not interfere with the developer workflow but rather complement it.

A few examples:

  • CI environment provisioning: Installing tools that are necessary for the scripts to work, especially when the toolchain is not dockerized.
  • Authentication and Secrets Management: The CI environment needs access to credentials for authentication and authorization. The means to obtain them often differ from those on a developer machine.
  • Caching dependencies: Caching dependencies speeds up the pipeline, e.g., via actions/cache. The caches are typically already present in a local dev environment (e.g., Maven's ~/.m2/ repository) but matter for CI runners that start from a clean state (see the sketch further below).
  • Monitoring & Analytics: Plugins or actions that integrate with monitoring systems can provide useful build insights:
    • New Relic or Datadog plugins to report build performance metrics.
    • GitHub Actions that send build notifications to Slack or Microsoft Teams.
  • Notifications: Notify developers about failed builds via Slack, Teams, or other channels.

In all of these examples, developers do not depend on the plugin or action in their day-to-day development work.
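
For the caching example, the commonly documented actions/cache pattern for a Maven project looks roughly like the step below; the paths and keys depend on the toolchain and are assumptions here:

    - name: Cache Maven dependencies
      uses: actions/cache@v4
      with:
        path: ~/.m2/repository
        key: ${{ runner.os }}-maven-${{ hashFiles('**/pom.xml') }}
        restore-keys: ${{ runner.os }}-maven-

The build scripts themselves stay untouched; the step only makes a clean CI runner faster.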

Switching CI environments

One positive - but unintended - side effect of a scripted approach is that it becomes significantly easier to switch between CI environments, as the CI runners mostly just execute scripts - a capability all vendors' solutions have in common. Only the pipelines themselves would need to be remodeled. In cases where teams rely on plugins, these would also need to be available in the other CI tool for it to be a viable option.

Conclusion

A scripted approach ensures that developers on their dev machines and the CI runner execute the very same automation, making builds easier to reproduce and reason about. While plugins and sophisticated actions might initially look like a shortcut, fast feedback cycles for developers matter more in the long run.
