Send all Git untracked files and files in binaries (see the sketch below). If the expiry time is not defined, it defaults to the instance-wide setting (30 days by default; forever on GitLab.com). You can use the Keep button on the job page to override expiration and keep artifacts forever. After their expiry, artifacts are deleted hourly by default (via a cron job), and are no longer accessible. Examples of parsable values for the expiry time include durations written in natural language, such as 3 mins 4 sec, 2 hrs 20 min, or 6 mos 1 day.
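A minimal sketch combining these options; the binaries/ path is illustrative:

```yaml
job:
  artifacts:
    untracked: true
    paths:
      - binaries/
    expire_in: 1 week
```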
The reports keyword, which requires GitLab Runner, is used for collecting test reports from jobs and exposing them in GitLab's UI (merge requests and pipeline views). Read how to use this with JUnit reports. NOTE: If you also want the ability to browse the report output files, include the artifacts:paths keyword. Although JUnit was originally developed in Java, there are many third-party ports for other languages, such as JavaScript, Python, and Ruby.
See JUnit test reports for more details and examples. The collected JUnit reports will be uploaded to GitLab as an artifact and will be automatically shown in merge requests.
NOTE: If the JUnit tool you use exports to multiple XML files, you can specify multiple test report paths within a single job and they will be automatically concatenated into a single file, as in the sketch below.
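A sketch with two JUnit report paths; the job script and file names are illustrative:

```yaml
rspec:
  stage: test
  script:
    - bundle exec rspec
  artifacts:
    reports:
      junit:
        - rspec-unit.xml
        - rspec-integration.xml
```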
Each report is collected as an artifact, uploaded to GitLab, and shown automatically in the UI:

- codequality: collects Code Quality issues; the report is shown in merge requests.
- sast: collects SAST vulnerabilities; the report is shown in merge requests and the pipeline view, and provides data for security dashboards.
- Dependency Scanning: the collected report is shown in merge requests and the pipeline view, and provides data for security dashboards.
- Container Scanning: the collected report is shown in merge requests and the pipeline view, and provides data for security dashboards.
- dast: collects DAST vulnerabilities; the report is shown in merge requests and the pipeline view, and provides data for security dashboards.
- License Compliance: the collected report is shown in merge requests and the pipeline view, and provides data for security dashboards.
- performance: collects Performance metrics; the report is shown in merge requests.
- metrics: collects Metrics; the report is shown in merge requests.
By default, all artifacts from all previous stages are passed, but you can use the dependencies parameter to define a limited list of jobs (or no jobs) to fetch artifacts from. To use this feature, define dependencies in the context of the job and pass a list of all previous jobs from which the artifacts should be downloaded.
You can only define jobs from stages that are executed before the current one. An error is shown if you define jobs from the current stage or later ones.
Defining an empty array will skip downloading any artifacts for that job. The status of the previous job is not considered when using dependencies, so if it failed or is a manual job that was not run, no error occurs. In the following example, we define two jobs with artifacts: build:osx and build:linux.
When test:osx is executed, the artifacts from build:osx will be downloaded and extracted in the context of the build. The same happens for test:linux and artifacts from build:linux. The deploy job will download artifacts from all previous jobs because of the stage precedence:
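A sketch of such a configuration; the make targets are illustrative:

```yaml
build:osx:
  stage: build
  script: make build:osx
  artifacts:
    paths:
      - binaries/

build:linux:
  stage: build
  script: make build:linux
  artifacts:
    paths:
      - binaries/

test:osx:
  stage: test
  script: make test:osx
  dependencies:
    - build:osx

test:linux:
  stage: test
  script: make test:linux
  dependencies:
    - build:linux

deploy:
  stage: deploy
  script: make deploy
```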
If the artifacts of the job that is set as a dependency have expired or been erased, then the dependent job will fail. The needs: keyword enables executing jobs out-of-order, allowing you to implement a directed acyclic graph in your .gitlab-ci.yml. This lets you run some jobs without waiting for other ones, disregarding stage ordering, so you can have multiple stages running concurrently, as in the sketch below.
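A sketch matching the description that follows; the scripts are illustrative:

```yaml
linux:build:
  stage: build
  script: make linux

mac:build:
  stage: build
  script: make mac

linux:rspec:
  stage: test
  needs: ["linux:build"]
  script: rake rspec

linux:rubocop:
  stage: test
  needs: ["linux:build"]
  script: rake rubocop

mac:rspec:
  stage: test
  needs: ["mac:build"]
  script: rake rspec

mac:rubocop:
  stage: test
  needs: ["mac:build"]
  script: rake rubocop

production:
  stage: deploy
  script: make deploy
```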
Linux path: the linux:rspec and linux:rubocop jobs will run as soon as the linux:build job finishes, without waiting for mac:build to finish. The production job will be executed as soon as all previous jobs finish; in this case: linux:build, linux:rspec, linux:rubocop, mac:build, mac:rspec, mac:rubocop. For the coverage keyword, regular expressions are the only valid kind of value expected here. You must escape special characters if you want to match them literally, for example:
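A sketch, assuming the test output prints a line such as "Code coverage: 84.5" (the regular expression is illustrative):

```yaml
job1:
  script: rspec
  coverage: '/Code coverage: \d+\.\d+/'
```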
When a job fails and has retry configured, it is processed again, up to the number of times specified by the retry keyword.
If retry is set to 2 and a job succeeds on the second run (first retry), it is not retried again. By default, a job is retried on all failure cases. To have better control over which failures to retry, retry can be a hash with the keys max (the maximum number of retries) and when (the failure cases to retry). To retry only runner system failures at maximum two times:
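A sketch of this configuration; the script is illustrative:

```yaml
test:
  script: rspec
  retry:
    max: 2
    when: runner_system_failure
```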
If there is another failure, other than a runner system failure, the job is not retried. To retry on multiple failure cases, when can also be an array of failures:
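A sketch retrying on two failure cases; stuck_or_timeout_failure is shown as one possible case:

```yaml
test:
  script: rspec
  retry:
    max: 2
    when:
      - runner_system_failure
      - stuck_or_timeout_failure
```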
The job-level timeout can exceed the project-level timeout, but can not exceed the Runner-specific timeout. For parallel, the value has to be greater than or equal to two (2) and less than or equal to 50. This creates N instances of the same job that run in parallel. Marking a job to be run in parallel requires only a simple addition to your configuration file:
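A minimal sketch, using parallel: 3 to match the three-way split described below:

```yaml
rspec:
  script: rspec
  parallel: 3
```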
TIP: Parallelize test suites across parallel jobs. Different languages have different tools to facilitate this. You can then navigate to the Jobs tab of a new pipeline build and see your RSpec job split into three separate jobs. Introduced in GitLab Premium. When a job created from a trigger definition is started by GitLab, a downstream pipeline gets created.
Learn more about multi-project pipelines. The simplest way to configure a downstream trigger is to use the trigger keyword with a full path to a downstream project. It is also possible to configure the branch name that GitLab uses to create the downstream pipeline, to mirror the status from a triggered pipeline, and to mirror the status from an upstream pipeline, as sketched below.
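Sketches of these variants; the project paths and branch name are illustrative:

```yaml
# Simple form: trigger a downstream pipeline in another project.
staging:
  stage: deploy
  trigger: my/deployment

# Specify the branch for the downstream pipeline, and mirror the
# downstream pipeline's status into this job with strategy: depend.
staging-stable:
  stage: deploy
  trigger:
    project: my/deployment
    branch: stable
    strategy: depend

# Mirror the status from an upstream pipeline into a bridge job.
upstream_bridge:
  stage: test
  needs:
    pipeline: other/project
```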
Defaults to false. This value is used only if the automatic cancellation of redundant pipelines feature is enabled. When enabled, a running pipeline on the same branch is canceled when a newer pipeline starts. TIP: Set jobs as interruptible if they can be safely canceled once started (for instance, a build job). In the example above, a new pipeline run will cause an existing running pipeline to be canceled if only interruptible jobs are running or pending, and not canceled once an uninterruptible job has started. NOTE: Once an uninterruptible job is running, the pipeline will never be canceled, regardless of the final job's state.
Using the include keyword, you can allow the inclusion of external YAML files. Note that YAML anchors do not work across included files; you must only refer to aliases in the same file. Instead of using YAML anchors, you can use the extends keyword. NOTE: The configuration is a snapshot in time and persisted in the database. Any changes to the referenced files are reflected only when a new pipeline is created. You can only use files that are currently tracked by Git on the same branch your configuration file is on. In other words, when using include:local, make sure that both .gitlab-ci.yml and the local file are on the same branch.
All nested includes will be executed in the scope of the same project, so it is possible to use local, project, remote, or template includes. To include files from another private project on the same GitLab instance, use include:file:
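A sketch, with an illustrative project path and file name:

```yaml
include:
  - project: my-group/my-project
    file: '/templates/.gitlab-ci-template.yml'
```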
You can also specify ref, with the default being the HEAD of the project (see the sketches below). All nested includes will be executed in the scope of the target project, so it is possible to use local (relative to the target project), project, remote, or template includes. All nested includes will be executed only with the permission of the user, so it is possible to use project, remote, or template includes. The remote file must be publicly accessible through a simple GET request, as authentication schemas in the remote URL are not supported.
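Sketches of both forms; the paths, ref, and URL are illustrative:

```yaml
# include:project with an explicit ref (defaults to the project's HEAD):
include:
  - project: my-group/my-project
    ref: main
    file: '/templates/.gitlab-ci-template.yml'
```

```yaml
# include:remote with a publicly accessible URL:
include:
  - remote: 'https://example.com/templates/.gitlab-ci-template.yml'
```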
All nested includes will be executed without context, as a public user, so only another remote include, a public project, or a template is allowed. Nested includes allow you to compose a set of includes.
A total of 100 includes is allowed. Duplicate includes are considered a configuration error. A hard limit of 30 seconds is set for resolving all files. You can include your extra YAML files either as a single string or as an array of multiple values.
The following examples are all valid: a single string with the include:local method implied; a single string with the include method specified explicitly; an array with include:remote as the single item; and an array with multiple include methods specified explicitly:
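Sketches of each form; the paths and URLs are illustrative:

```yaml
# Single string, include:local implied:
include: '/templates/.gitlab-ci-template.yml'
```

```yaml
# Single string, method specified explicitly:
include:
  remote: 'https://example.com/templates/.gitlab-ci-template.yml'
```

```yaml
# Array with include:remote as the single item:
include:
  - remote: 'https://example.com/templates/.gitlab-ci-template.yml'
```

```yaml
# Array with multiple methods specified explicitly:
include:
  - local: '/templates/.gitlab-ci-template.yml'
  - template: Auto-DevOps.gitlab-ci.yml
```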
In the following example, the content of .gitlab-ci.yml is merged with the content of the included files. The example shows specific YAML-defined variables and details of the production job from an include file being customized in .gitlab-ci.yml. The merging lets you extend and override dictionary mappings, but you cannot add or modify items in an included array. For example, to add an additional item to the production job script, you must repeat the existing script items:
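A sketch of repeating the existing items to append one; the included template URL and the install_dependencies and deploy entries are assumed to come from the included file, and notify_owner is the added item:

```yaml
include: 'https://example.com/autodevops-template.yml'

production:
  script:
    - install_dependencies
    - deploy
    - notify_owner
```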
The examples below show how includes can be nested from different sources, using a combination of different methods. In this example, .gitlab-ci.yml includes nested files from several sources. The extends keyword is an alternative to using YAML anchors and is a little more flexible and readable:
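A minimal sketch of extends; the job contents are illustrative:

```yaml
.tests:
  stage: test
  only:
    refs:
      - branches

rspec:
  extends: .tests
  script: rake rspec
```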
In the example above, the rspec job inherits from the .tests template job. GitLab will perform a reverse deep merge based on the keys and merge the rspec contents into the .tests configuration. The maximum nesting level that is supported is eleven. The following example has two levels of inheritance:
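A sketch with two levels of inheritance; the job names and contents are illustrative:

```yaml
.tests:
  only:
    - pushes

.rspec:
  extends: .tests
  script: rake rspec

rspec 1:
  variables:
    RSPEC_SUITE: '1'
  extends: .rspec
```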
The algorithm used for the merge is "closest scope wins", so keys from the last member always shadow anything defined on other levels. extends also works together with include. For example, if you have a local file included.yml:
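A sketch of the two files; the busybox image is an illustrative choice:

```yaml
# Content of included.yml:
.template:
  script:
    - echo Hello!
```

Then, in .gitlab-ci.yml:

```yaml
include: included.yml

useTemplate:
  image: busybox
  extends: .template
```

This will run a job called useTemplate that runs echo Hello!, as defined in the .template job, and uses the busybox image, as defined in the local job.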
pages is a special job that uploads static content to GitLab Pages. It has a special syntax, so two requirements must be met: the static content must be placed under a public/ directory, and artifacts with a path to the public/ directory must be defined. Read more in the GitLab Pages user documentation. For variables, integers (as well as strings) are legal values, but floats are not legal and cannot be used. Variables can be set globally and per-job. When the variables keyword is used on a job level, it overrides the global YAML variables and predefined ones. They are stored in the Git repository and are meant to store non-sensitive project configuration, for example:
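A sketch of defining a global variable; the value is illustrative:

```yaml
variables:
  DATABASE_URL: 'postgres://postgres@postgres/my_database'
```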
These variables can later be used in all executed commands and scripts. The YAML-defined variables are also set to all created service containers, allowing you to fine-tune them. Apart from the user-defined variables you can set in .gitlab-ci.yml, there are also variables set up by the Runner itself and predefined variables provided by GitLab. Learn more about variables and their priority. You can also set the GIT_STRATEGY used for fetching the repository; this feature may change or be removed completely in future releases. If left unspecified, the default from the project settings will be used.
There are three possible values: clone, fetch, and none. clone is the slowest option; it clones the repository from scratch for every job, ensuring that the local working copy is always pristine. For environment:action, the start value (the default) indicates that the job starts the environment; the deployment is created after the job starts.
The prepare action indicates that the job is only preparing the environment and does not trigger deployments; read more about preparing environments. The stop action stops an environment; for more detail, read Stop an environment. Use auto_stop_in to give an environment a lifetime: when an environment expires, GitLab automatically stops it. Possible inputs: a period of time written in natural language, for example 1 day. Every time the review app is deployed, that lifetime is also reset to 1 day. Related topics: Environments auto-stop documentation. Use the kubernetes keyword to configure deployments to a Kubernetes cluster that is associated with your project. Example of environment:kubernetes:

```yaml
deploy:
  stage: deploy
  script: make deploy-app
  environment:
    name: production
    kubernetes:
      namespace: production
```

This configuration sets up the deploy job to deploy to the production environment, using the production Kubernetes namespace.
Additional details : Kubernetes configuration is not supported for Kubernetes clusters that are managed by GitLab. To follow progress on support for GitLab-managed clusters, see the relevant issue. Related topics : Available settings for kubernetes.
The common use case is to create dynamic environments for branches and use them as Review Apps. Use extends to reuse configuration sections in your jobs. Possible inputs: the name of another job in the pipeline.
A list (array) of names of other jobs in the pipeline. When creating the pipeline, GitLab performs a reverse deep merge based on the keys and merges the extending job's contents into the configuration it extends. The extends keyword supports up to eleven levels of inheritance, but you should avoid using more than three levels. Related topics: Reuse configuration sections by using extends. Use extends to reuse configuration from included configuration files. Example of extends, in which the rspec job reuses the configuration of a .tests template job:
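A minimal sketch; the image and script are illustrative:

```yaml
.tests:
  stage: test
  image: ruby

rspec:
  extends: .tests
  script: rake rspec
```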
image:name is the name of the Docker image that the job runs in; similar to image: used by itself. For image:entrypoint, when the Docker container is created, the entrypoint is translated to the Docker --entrypoint option. Use inherit: to control inheritance of globally-defined defaults and variables. Possible inputs: true (default) or false to enable or disable the inheritance of all default keywords, or a list of specific default keywords to inherit. Example of inherit:default:

```yaml
default:
  retry: 2
  image: ruby
  interruptible: true

job1:
  script: echo "This job inherits only the two listed default keywords. It does not inherit 'interruptible'."
  inherit:
    default:
      - retry
      - image
```
For inherit:variables, possible inputs are true (default) or false to enable or disable the inheritance of all global variables, or a list of specific variables to inherit. Use interruptible if a job should be canceled when a newer pipeline starts before the job completes. This keyword is used with the automatic cancellation of redundant pipelines feature. When enabled, a running job with interruptible: true can be cancelled when a new pipeline starts on the same branch. Example of interruptible:

```yaml
stages:
  - stage1
  - stage2
  - stage3

step-1:
  stage: stage1
  script:
    - echo "Can be canceled."
  interruptible: true

step-2:
  stage: stage2
  script:
    - echo "Can not be canceled."

step-3:
  stage: stage3
  script:
    - echo "Because step-2 can not be canceled, this step can never be canceled, even though it's set as interruptible."
  interruptible: true
```

In this example, a new pipeline run causes an existing running pipeline to be: canceled, if only step-1 is running or pending; not canceled, after step-2 starts. Additional details: Only set interruptible: true if the job can be safely canceled after it has started, like a build job. To completely cancel a running pipeline, all jobs must have interruptible: true, or interruptible: false jobs must not have started.
Jobs in multiple stages can run concurrently. Possible inputs: an array of jobs, or an empty array ([]) to set the job to start as soon as the pipeline is created. Example of needs:

```yaml
linux:build:
  stage: build
  script: echo "Building linux..."

mac:build:
  stage: build
  script: echo "Building mac..."

linux:rspec:
  stage: test
  needs: ["linux:build"]
  script: echo "Running rspec on linux..."

mac:rspec:
  stage: test
  needs: ["mac:build"]
  script: echo "Running rspec on mac..."

production:
  stage: deploy
  script: echo "Running production..."
```

Linux path: The linux:rspec job runs as soon as the linux:build job finishes, without waiting for mac:build to finish. The production job runs as soon as all previous jobs finish: linux:build, linux:rspec, mac:build, mac:rspec.
Additional details: The maximum number of jobs that a single job can have in the needs: array is limited. For GitLab.com, see our infrastructure issue for more information. For self-managed instances, the default limit is 50; this limit can be changed. If needs: refers to a job that uses the parallel keyword, it depends on all jobs created in parallel, not just one job. It also downloads artifacts from all the parallel jobs by default. If the artifacts have the same name, they overwrite each other and only the last one downloaded is saved.
This feature is enabled on GitLab.com and on self-managed GitLab. When a job uses needs, it no longer downloads all artifacts from previous stages by default, because jobs with needs can start before earlier stages complete.
With needs you can only download artifacts from the jobs listed in the needs: configuration. Use artifacts: true (default) or artifacts: false to control when artifacts are downloaded in jobs that use needs. Must be used with needs:job. Possible inputs: true (default) or false. Use needs:project to download artifacts from up to five jobs in other pipelines.
The artifacts are downloaded from the latest successful pipeline for the specified ref. If there is a pipeline running for the specified ref, a job with needs:project does not wait for the pipeline to complete.
Instead, the job downloads the artifact from the latest pipeline that completed successfully. Possible inputs : needs:project : A full project path, including namespace and group.
If the project is in the same group or namespace, you can omit them from the project: keyword. Concurrent pipelines running on the same ref could override the artifacts. When using needs:project to download artifacts from another pipeline, the job does not wait for the needed job to complete. Directed acyclic graph behavior is limited to jobs in the same pipeline. Make sure that the needed job in the other pipeline completes before the job that needs it tries to download the artifacts.
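A sketch of needs:project; the project path, job name, and ref are illustrative:

```yaml
build_job:
  stage: build
  script:
    - ls -lhR
  needs:
    - project: my-group/my-project
      job: build-1
      ref: main
      artifacts: true
```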
Related topics : To download artifacts between parent-child pipelines , use needs:pipeline:job. A child pipeline can download artifacts from a job in its parent pipeline or another child pipeline in the same parent-child pipeline hierarchy. Possible inputs : needs:pipeline : A pipeline ID. Must be a pipeline present in the same parent-child pipeline hierarchy.
Example of needs:pipeline:job: In the parent pipeline, a job creates an artifact and triggers a child pipeline, passing the parent pipeline's ID to the child as a variable. The child pipeline can use that variable in needs:pipeline to download artifacts from the parent pipeline.
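A sketch of the parent and child configuration; the file and job names are illustrative:

```yaml
# Parent pipeline (.gitlab-ci.yml):
create-artifact:
  stage: build
  script: echo "sample artifact" > artifact.txt
  artifacts:
    paths: [artifact.txt]

child-pipeline:
  stage: test
  trigger:
    include: child.yml
    strategy: depend
  variables:
    PARENT_PIPELINE_ID: $CI_PIPELINE_ID
```

```yaml
# Child pipeline (child.yml):
use-artifact:
  script: cat artifact.txt
  needs:
    - pipeline: $PARENT_PIPELINE_ID
      job: create-artifact
```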
To download artifacts from a job in the current pipeline, use needs. To need a job that sometimes does not exist in the pipeline, add optional: true to the needs configuration. If not defined, optional: false is the default. Jobs that use rules , only , or except , might not always exist in a pipeline. When the pipeline is created, GitLab checks the needs relationships before starting it.
Without optional: true, a needs relationship that points to a job that does not exist stops the pipeline from starting and causes a pipeline error similar to: 'job1' job needs 'job2' job, but it was not added to the pipeline. Keyword type: Job keyword.
Possible inputs: job: (the job to make optional). Example of needs:optional, sketched below:
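A sketch, assuming the build job runs only on the default branch (the rule is illustrative):

```yaml
build:
  stage: build
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH

rspec:
  stage: test
  needs:
    - job: build
      optional: true
```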
When the branch is not the default branch, the build job does not exist in the pipeline. The rspec job runs immediately (similar to needs: []) because its needs relationship to the build job is optional. Use needs:pipeline to mirror the pipeline status from an upstream pipeline to a bridge job: the latest pipeline status from the default branch is replicated to the bridge job. Possible inputs: a full project path, including namespace and group. If you add the job keyword to needs:pipeline, the job no longer mirrors the pipeline status; the behavior changes to needs:pipeline:job. You can use only and except to control when to add jobs to pipelines. Use only to define when a job runs. Use except to define when a job does not run.
Four keywords can be used with only and except: refs, variables, changes, and kubernetes. See specify when jobs run with only and except for more details and examples. Possible inputs: an array including any number of branch names (for example main or my-feature-branch) or keywords such as merge_requests, which enables merge request pipelines, merged results pipelines, and merge trains. Add except: schedules to prevent jobs with only: branches from running on scheduled pipelines.
For example, the following two job configurations have the same behavior:

```yaml
job1:
  script: echo
  only:
    - branches

job2:
  script: echo
  only:
    refs:
      - branches
```

If a job does not use only, except, or rules, then only is set to branches and tags by default. For only:changes and except:changes, possible inputs are an array including any number of: paths to files; wildcard paths to files in the root directory, or all directories, wrapped in double quotes.
If you use only: changes with other refs, jobs ignore the changes and always run. If you use except: changes with other refs, jobs ignore the changes and never run. Related topics : only: changes and except: changes examples.
Use changes with new branches or tags without pipelines for merge requests. Use changes with scheduled pipelines.
Keyword type: Job-specific. Possible inputs: the kubernetes strategy accepts only the active keyword. Example of only:kubernetes:

```yaml
deploy:
  only:
    kubernetes: active
```

In this example, the deploy job runs only when the Kubernetes service is active in the project.
The content is then published as a website. Keyword type: Job name. Example of pages (a typical configuration):

```yaml
pages:
  stage: deploy
  script:
    - mkdir .public
    - cp -r * .public
    - mv .public public
  artifacts:
    paths:
      - public
```

The stage: deploy ensures that this job runs only after all jobs with stage: test complete successfully. To trigger a job from a webhook of another project, you need to add the following webhook URL for Push and Tag events (change the project ID, ref, and token):
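The URL takes this general form, where the host, project ID (9), ref (master), and token are illustrative: https://gitlab.example.com/api/v4/projects/9/ref/master/trigger/pipeline?token=TOKEN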
You can pass any number of arbitrary variables in the trigger API call, and they will be available in GitLab CI/CD so that they can be used in your .gitlab-ci.yml. The parameter is of the form variables[key]=value, for example variables[RUN_NIGHTLY_BUILD]=true. This information is also exposed in the UI; note that values are only viewable by Owners and Maintainers. Using trigger variables can prove useful for a variety of reasons.
Whether you craft a script or just run cURL directly, you can trigger jobs in conjunction with cron. The example described triggers a job on the master branch of the project with ID 9 every night.
Old triggers, created before GitLab 9.0, are displayed as legacy. Triggers with the legacy label do not have an associated user and only have access to the current project.