The term “Golden Path” refers to a standard or preferred method of achieving a specific outcome. It is often viewed as the most efficient, maintainable, or reliable route to reach a goal, providing a benchmark for best practices. In software development, the Golden Path often refers to a standardized approach to writing, testing, and deploying code.
In the initial phase, we’ll walk through each component of this Golden Path manually, examining every aspect in detail. This hands-on approach will give us a thorough understanding of all the components involved. In subsequent stages, we plan to automate the creation of a new service, following the principles outlined in the Golden Path, using Backstage, an open-source developer portal platform.
While this guide uses a Node.js template as an illustrative example, an array of other templates catering to different requirements will also become available in the future. This diversity of templates keeps our Golden Path methodology versatile and adaptable to a wide range of projects and objectives.
In Practice
In practice, our Golden Path is a set of steps and practices that optimize the development and deployment process. Here is a detailed breakdown of what it entails:
Complete source code for a specific language: The foundation of our Golden Path is source code written for Node.js version 18. All our coding and development tasks will target this specific runtime version, providing a uniform base for our project.
Automatic version detection: An integral part of our Golden Path is the use of the package.json file to automatically detect the version of our software. The package.json file is a vital tool in Node.js projects, as it stores information about the project, including the current version. This allows us to keep track of our software’s version without manual intervention, enhancing efficiency.
Kubernetes and YAML best practices: To ensure that our Kubernetes configuration and YAML files adhere to best practices, we incorporate kube linting and static code analysis into our Golden Path. Kube linting is the process of checking Kubernetes configurations for errors, while static code analysis is a method of debugging by examining the code without running the program. Together, these practices help maintain the quality and reliability of our code.
A default pipeline for building, testing, and deploying: Our Golden Path includes a default pipeline that is designed to streamline the process of building, testing, and deploying images. This pipeline provides a structured and standard approach for these critical tasks, ensuring that they are carried out in a manner consistent with our organization’s standards.
Packaging standardization: To standardize packaging, we use Helm, a package manager for Kubernetes, and archive our packages in ChartMuseum, a Helm chart repository. Helm helps us manage our Kubernetes applications, while ChartMuseum provides a place to store and share our charts.
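To make the packaging step concrete, here is a hypothetical Chart.yaml for such a service; the names and version numbers are placeholders, not our actual chart.

```yaml
# Hypothetical Chart.yaml for a Helm-packaged Node.js service;
# all names and versions are illustrative.
apiVersion: v2
name: my-node-service
description: Helm chart for the Node.js sample service
type: application
version: 0.1.0        # chart version, bumped when packaging changes
appVersion: "1.2.3"   # application version, taken from package.json
```

A chart like this would be bundled with `helm package` and then uploaded to the ChartMuseum instance so other environments can pull it.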
Proxy to the npm registry: Our Golden Path also includes a proxy to the npm registry, which stores packages of reusable JavaScript code. The proxy caches packages so we don’t pull the same ones from the public registry repeatedly, reducing redundancy and improving the speed and efficiency of our development process.
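Pointing npm at such a proxy typically takes a single line in an .npmrc file; the URL below is a placeholder for whatever internal mirror (e.g. a Nexus or Verdaccio instance) is in use.

```ini
# Hypothetical .npmrc routing installs through an internal proxy registry;
# the URL is a placeholder, not a real endpoint.
registry=https://npm-proxy.internal.example.com/repository/npm-all/
```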
Our tasks during this phase will include a comprehensive array of exercises, such as:
Updating our source code and creating a new git commit on a non-main branch: This step demonstrates the good practice of using separate branches for individual features and fixes, keeping the main branch stable.
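The branching workflow above can be sketched with plain git commands. The snippet below builds a throwaway repository so it is self-contained; the repository location, branch name, and file names are illustrative only.

```shell
# Self-contained sketch of the non-main-branch workflow in a throwaway repo;
# paths, branch, and commit messages are placeholders.
rm -rf /tmp/golden-path-demo
mkdir -p /tmp/golden-path-demo && cd /tmp/golden-path-demo
git init --quiet .
git config user.email "demo@example.com"
git config user.name "Demo"
git commit --allow-empty -m "initial commit" --quiet

git checkout -b feature/update-readme     # never commit work directly to main
echo "A small change" >> README.md
git add README.md
git commit -m "docs: update README" --quiet
git branch --show-current                  # prints the feature branch name
```

In a real repository the final step would be `git push --set-upstream origin feature/update-readme`, which is what triggers the webhook delivery discussed next.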
Testing the automatically created GitHub webhooks: GitHub webhooks provide us with a way to automatically trigger certain actions whenever specific events happen within our repository. We will test these webhooks to ensure they function as expected.
Debugging and testing the Tekton EventListeners: Tekton EventListeners react to external events and kick off tasks accordingly. We will run tests and perform debugging to ensure they behave as expected.
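For orientation, an EventListener wiring a GitHub push event to a trigger might look like the following sketch; all resource names here are placeholders, not our actual manifests.

```yaml
# Illustrative Tekton EventListener connecting GitHub push events to a
# TriggerBinding and TriggerTemplate; every name below is a placeholder.
apiVersion: triggers.tekton.dev/v1beta1
kind: EventListener
metadata:
  name: github-listener
spec:
  serviceAccountName: tekton-triggers-sa
  triggers:
    - name: github-push
      interceptors:
        - ref:
            name: github
          params:
            - name: eventTypes
              value: ["push"]
      bindings:
        - ref: github-push-binding
      template:
        ref: build-and-deploy-template
```

Tekton runs each EventListener as a deployment (conventionally prefixed `el-`), so a first debugging step is usually reading that pod's logs with kubectl.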
Initiating and executing a PipelineRun: A pipeline run represents the process of running the entire pipeline (from start to finish) to verify the smooth functioning and compatibility of all the integrated elements.
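A PipelineRun can also be started by hand, which is useful while debugging. The sketch below is illustrative; the pipeline name, parameters, and workspace are assumptions rather than our actual definitions.

```yaml
# Illustrative PipelineRun starting the pipeline manually;
# pipeline, parameter, and workspace names are placeholders.
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: node-service-run-
spec:
  pipelineRef:
    name: node-service-pipeline
  params:
    - name: git-url
      value: https://github.com/example-org/example-service.git
    - name: git-revision
      value: feature/update-readme
  workspaces:
    - name: shared-workspace
      volumeClaimTemplate:
        spec:
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 1Gi
```

Because this uses `generateName`, it must be submitted with `kubectl create -f` rather than `kubectl apply -f`, so each submission yields a uniquely named run.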
Investigating our CI/CD namespace: We’ll also examine our Continuous Integration/Continuous Deployment (CI/CD) namespace. Namespaces are a Kubernetes mechanism for compartmentalizing different environments within a cluster.
Defining the pipeline
The focus of our pipeline will be:
Cloning the source code from Git: We’ll procure an exact copy of the source code from our Git repository.
Identifying runtime and gathering metadata about our application version: We’ll determine the runtime environment and extract specific metadata about the version of our application.
Implementing basic out-of-the-box best practices validations with Kube-linter: Kube-linter is a static analysis tool that checks Kubernetes YAML files and Helm charts to ensure the applications represented in them adhere to best practices. We’ll use it to run some basic checks on our configuration.
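kube-linter can be tuned through a configuration file. The sketch below is a hypothetical .kube-linter.yaml that opts into a handful of its built-in checks; the selection is illustrative, not our project's actual policy.

```yaml
# Hypothetical .kube-linter.yaml selecting a few built-in checks;
# the check names are real kube-linter checks, the selection is illustrative.
checks:
  addAllBuiltIn: false
  include:
    - "no-read-only-root-fs"
    - "unset-cpu-requirements"
    - "unset-memory-requirements"
    - "run-as-non-root"
    - "latest-tag"
```

Running `kube-linter lint deploy/ --config .kube-linter.yaml` would then validate the manifests in a `deploy/` directory against exactly these checks.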
Building the image with Kaniko: Kaniko is a tool used to build container images from a Dockerfile, inside a container or Kubernetes cluster. We’ll use Kaniko to construct our application image.
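A Tekton task step invoking Kaniko typically looks like the sketch below; the image destination, workspace name, and parameter are placeholders rather than our actual task definition.

```yaml
# Illustrative Tekton task step building and pushing an image with Kaniko;
# registry, workspace, and parameter names are placeholders.
steps:
  - name: build-and-push
    image: gcr.io/kaniko-project/executor:latest
    args:
      - --dockerfile=Dockerfile
      - --context=$(workspaces.source.path)
      - --destination=registry.example.com/example-service:$(params.IMAGE_TAG)
```

Because Kaniko runs entirely in userspace, this step needs no privileged Docker daemon inside the cluster.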
Executing the application unit tests: These tests are designed to verify the correct operation of individual units of source code.
Deploying to our project development namespace: Finally, we’ll deploy our tested and validated code to our project development namespace.
Because we are working on a non-main branch, the pipeline stops at this stage; we will not promote this application version to any further stages, such as production.
Upon successful completion of these steps, we will be able to see the application deployed and accessible through a public ingress endpoint.
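The public endpoint mentioned above would be defined by a Kubernetes Ingress resource along the lines of this sketch; the hostname, service name, and port are placeholders.

```yaml
# Illustrative Ingress exposing the deployed service publicly;
# host, service name, and port are placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-service
spec:
  rules:
    - host: example-service.dev.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-service
                port:
                  number: 8080
```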
Next, we plan to merge the new feature from our branch back into the main branch and then push it. Consequently, all the steps detailed above will be executed once again, but this time, the application could potentially be promoted to a new stage.
This brings us to the heart of GitOps. We will raise a pull request against a new environment, updating it with the version of the newly built application. This demonstrates the core GitOps principle: Git is the single source of truth, and all changes to the system are implemented through controlled Git operations.
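In practice, the change such a pull request carries is often tiny. The sketch below shows a hypothetical values-file diff in an environment repository where only the image tag moves forward; repository and tag values are placeholders.

```yaml
# Illustrative fragment of an environment repository's values file;
# a GitOps pull request would bump only the image tag.
image:
  repository: registry.example.com/example-service
  tag: "1.2.4"   # updated by the pipeline from the previous release tag
```

Once the pull request is reviewed and merged, the cluster's GitOps tooling reconciles the environment to match the new desired state.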