The Ocular Manual
About This Guide
Ocular is built atop Kubernetes, providing a simplified API that allows for easy configuration of static code scanning across a large number of software assets.
Ocular allows the core aspects of code scanning to be configured via containers. This enables security engineers to tailor scanning closely to the needs of their organization. Ocular additionally ships with a set of default integrations for common use cases.
This document is meant to be a guide for users and implementers, to help them understand the concepts behind Ocular and our implementation. It should help you better understand Ocular’s behavior and some of the design decisions behind it.
This document is NOT intended to be a tutorial overview. For that, see the Getting Started Guide for an easy introduction to Ocular.
Source Code Availability
Source code is available at crashappsec/ocular; it will be made publicly available at the time of our public launch. See the installation guide for how to download and install Ocular into your Kubernetes cluster.
Basic Concepts
Ocular is intended to be a dedicated code scanning orchestration system, decoupled from the continuous integration/continuous deployment (CI/CD) pipelines responsible for code deployment. This separation lets security engineers avoid disrupting developer workflows and provides a testbed for security tooling. Ocular allows executions to run on a regular schedule or ad hoc, and to cover a large set of software assets.
The system is architected to provide a flexible and configurable framework, enabling users to define:
- Targets: The specific assets or artifacts to be analyzed.
- Scanners: The tools and processes utilized for scanning.
- Result Storage: The designated locations for storing and managing scan outputs.
- Enumeration: The means by which you determine which targets to analyze.
The most basic unit of execution is a container image, which allows for a high level of customization, so long as those containers conform to the standard specified later in this manual.
Use of Kubernetes
Ocular is built atop Kubernetes very transparently. We encourage users to use any existing Kubernetes tooling to help them monitor or configure scans. If you see a section in this guide prefixed with ‘[K8s]:’, it describes the Kubernetes implementation behind that section.
Authentication
Ocular uses the Kubernetes API access system to authenticate users. Currently, the API only supports token reviews. The easiest way to get a bearer token is to create a service account token. An Ocular install from Helm comes bundled with two service accounts: ocular-admin, with full access to Ocular, and ocular-operator, with read access to resources and the ability to trigger pipelines and searches.
The command below will generate a token for the ocular-admin user that is valid for 24 hours:
kubectl create token ocular-admin --duration 24h
This token should then be used in the Authorization header, prefixed with Bearer.
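For example, assuming the API is exposed at https://ocular.example.com (a placeholder host), the token can be generated and used like this:
# Generate a short-lived token for the bundled admin service account
# and use it as a bearer token. The host below is a placeholder;
# substitute the address where your Ocular API is exposed.
TOKEN="$(kubectl create token ocular-admin --duration 24h)"
curl -H "Authorization: Bearer ${TOKEN}" \
  https://ocular.example.com/api/v1/downloaders/git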
Permissions
Ocular checks the service account token against the Kubernetes API, meaning the token must be authenticated and have the appropriate RBAC permissions. The table below shows the permissions needed for each endpoint. If the endpoint is not listed below, no permissions are needed.
Method | Resource(s) | K8s Permissions |
---|---|---|
GET | downloader, crawler, uploader, profile | get on configmap |
POST, DELETE | downloader, crawler, uploader, profile, secret | update on configmap |
GET | secret | get on secret |
POST, DELETE | secret | update on secret |
GET | pipelines, searches | get on jobs, list on jobs |
POST | pipelines, searches | create on jobs |
DELETE | pipelines, searches | delete on jobs |
GET | scheduled-searches | get on jobs, list on cronjobs |
POST | scheduled-searches | create on cronjobs |
DELETE | scheduled-searches | delete on cronjobs |
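As a sketch of how these map to standard Kubernetes RBAC, the commands below create a role for a service account that can only view and trigger pipelines and searches. The namespace, role, and account names are placeholders and should be adapted to your install:
# Placeholder namespace and names; adjust to match your installation.
NS=ocular
# Grant the jobs permissions listed above for pipelines and searches.
kubectl -n "$NS" create role pipeline-trigger \
  --verb=get,list,create --resource=jobs
kubectl -n "$NS" create serviceaccount scan-bot
kubectl -n "$NS" create rolebinding pipeline-trigger-scan-bot \
  --role=pipeline-trigger --serviceaccount="$NS:scan-bot"
# Mint a token for the new service account (see the authentication section).
kubectl -n "$NS" create token scan-bot --duration 24h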
API Usage
The API supports both JSON and YAML for encoding. The following table shows the Content-Type and Accept headers for each:
Encoding | Content Type Header | Accept Header |
---|---|---|
YAML | application/x-yaml | application/yaml |
JSON | application/json | application/json |
An OpenAPI JSON spec and a Swagger UI are served at the endpoints /api/swagger/openapi.json and /api/swagger, respectively. These are only enabled when Ocular is run in development mode. This can be enabled by setting OCULAR_ENVIRONMENT to the value development, or by setting environment: development in your Helm values.
The API also exposes 2 metadata endpoints that may be useful:
- /health, which should always respond with {"success": true, "response": "ok"}
- /version, which responds with the API version, build time and git commit SHA (e.g. {"version": "v0.0.1", "buildTime": "2025-07-11T12:51:34Z", "commit": "95bb78a16cd2e01c1dc8f267141e073943b8af67"})
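For example, with a placeholder host, the metadata endpoints can be exercised with curl; the Accept header follows the encoding table above:
# Placeholder host; substitute your Ocular API address.
OCULAR_URL=https://ocular.example.com
# Liveness check, returned as JSON.
curl -H "Accept: application/json" "$OCULAR_URL/health"
# Build information, requested as YAML instead.
curl -H "Accept: application/yaml" "$OCULAR_URL/version"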
Resource Definitions
Ocular is configured via the definition of a few “resources”, which are static definitions in YAML (or JSON) format and define either a container image to run, secrets, or some configuration options for the scanning process.
These resources are then used to configure the scanning process and are read during the execution of container images.
Containers
The system allows customization of many aspects of the application through container definitions. The container definition template is shared between many resources and is used to define the container image to run, the pull policy, and any secrets that should be mounted into the container. To avoid duplicating documentation, the container definition is documented once below. There are two main types of container definitions in the system:
- User Container: A standard container definition that is used to define a container image to run.
- User Container With Parameters: A superset of the User Container definition that additionally declares a set of parameters which can be passed to the container when it is invoked.
The following resources use the User Container definition:
- Downloaders: Used to define a container image that will download the target content to a specific directory in the container filesystem.
- Scanner (part of a Profile): Used to define a container image that will scan the target content and produce artifacts.
The following resources use the User Container With Parameters definition:
- Uploaders: Used to define a container image that will upload the artifacts produced by the scanners to a specific location.
- Crawlers: Used to define a container image that will enumerate targets to scan and call the API to start scans for those targets.
User Container
# Example definition of a user container.
# Container Image URI
image: "myorg.domain/crawler:latest"
# Pull policy for the container image.
# See https://kubernetes.io/docs/concepts/containers/images/#updating-images
imagePullPolicy: IfNotPresent
# Command and args to execute when the container is run.
# See https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#define-a-command-and-arguments-when-you-create-a-pod
command: ["python3", "crawler.py"]
args:
- "--verbose"
- "--folder=./"
# Set environment variables for the container.
# These are set in the container environment when the container is run.
# NOTE: Environment variables should not contain sensitive information.
# Instead, use secrets to mount sensitive information into the container.
env:
- name: LOG_LEVEL
value: "debug"
# Secrets to mount when the container is executed.
# The 'name' field will be used to identify the secret in the system.
# If a secret is marked 'required', the system will ensure that the secret exists before allowing the container to run.
# Secrets can be mounted as environment variables or as files in a specific directory.
# The 'mountType' field specifies how the secret should be mounted (either `envVar` for environment variables or `file` for files).
# The 'mountTarget' field specifies the target location for the secret, a file path for `file` mountType or an environment variable name for `envVar` mountType.
secrets:
- name: token
mountType: envVar
mountTarget: MY_TOKEN
User Container with Parameters
# A User Container With Parameters definition is the same as a User Container definition,
# with the addition of a `parameters` section.
# The other sections are the same as the User Container definition
# and are documented above.
# Container Image URI
image: "myorg.domain/crawler:latest"
# Pull policy for the container image.
imagePullPolicy: IfNotPresent
# Command and args to execute when the container is run.
command: ["python3", "main.py"]
args:
- "--verbose"
env:
- name: LOG_LEVEL
value: "debug"
secrets:
- name: token
mountType: envVar
mountTarget: MY_TOKEN
# Parameter definitions for the container.
# These are provided when the container is invoked,
# For uploaders, it is expected to be defined in the profile that uses the uploader.
# For crawlers, it is expected to be defined when the crawler is invoked or scheduled to run.
parameters:
# Parameter names should be in uppercase and use underscores.
# This is due to the fact that the parameters are passed as environment variables to the container.
# The parameter names will be converted to uppercase and underscores will be used instead of spaces.
MY_REQUIRED_PARAMETER:
description: The name of the GitHub organization to crawl.
required: true # If true, The container will not be allowed to run without this parameter being provided.
MY_OPTIONAL_PARAMETER:
description: My description of the parameter.
required: false # If false, The 'default' value will be used if the parameter is not provided.
default: "default_value"
Downloaders
Downloaders are container images (defined by the user) that are used to write a target to disk for scanning. The container images are expected to read the target identifier and optional target version from environment variables and then write the target to the current working directory. Downloaders are defined via the API and can then be run by referencing them when creating a new pipeline (see the pipelines section for more details on how to execute a pipeline).
A Downloader definition is the same as a User Container definition. To view all options, see the section linked above; the example below is provided for reference and does not include all options.
# Example definition of a downloader.
# It can be set by calling the endpoint `POST /api/v1/downloaders/{name}`
# with the body containing the definition in YAML or JSON format (depending on Content-Type header).
# The name of the downloader is set by the `name` path parameter and is used to identify the downloader in the system.
# Container Image URI
image: "myorg.domain/downloader:latest"
# Pull policy for the container image.
imagePullPolicy: Always
# Command and args to execute when the downloader is run.
command: ["./download"]
args:
- "--path"
- "."
# Set environment variables for the downloader.
env:
- name: GIT_CONFIG
value: "/creds/aws-config"
# Secrets to mount when the downloader is executed.
secrets:
- id: downloader-github-token
mountType: envVar
mountTarget: GITHUB_TOKEN
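To illustrate the contract described above, here is a minimal sketch of a downloader entrypoint that clones a git target into the current working directory. The OCULAR_TARGET_IDENTIFIER and OCULAR_TARGET_VERSION variable names are assumptions made for this example; check the definition of a bundled downloader (e.g. /api/v1/downloaders/git) for the exact environment variables your installation provides.
#!/usr/bin/env sh
# Minimal downloader sketch. NOTE: the two OCULAR_TARGET_* variable
# names below are assumed for illustration, not taken from the API.
set -eu
repo="${OCULAR_TARGET_IDENTIFIER:?target identifier not set}"
ref="${OCULAR_TARGET_VERSION:-}"
# Ocular expects the target content in the current working directory.
git clone "$repo" .
if [ -n "$ref" ]; then
  git checkout "$ref"
fi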
Profiles
A profile is a collection of scanners (containers), a list of artifacts that each scanner produces, and a list of uploader names that will be used to upload the artifacts produced by the scanners. A profile is executed as part of a pipeline: a target is downloaded to the current working directory, the scanners are executed in parallel, and the uploaders are run afterwards. Uploaders are defined separately and can be reused across different profiles; see the uploaders section for more information on how to define an uploader. For more information on how to execute a profile, see the pipelines section. Profiles are intended to separate the types of scans and allow different sets of scans to be triggered based on the target type, target source, or other criteria.
A profile has 3 components:
- Scanners: A list of container images that will be executed in parallel to scan the target content. Each entry has the same schema as a User Container definition.
- Artifacts: A list of artifacts that the scanners will produce. Each artifact is a file path relative to the ‘results’ directory in the container filesystem. The path to the results directory is provided as the OCULAR_RESULTS_DIR environment variable.
- Uploaders: A list of uploader names and parameters that will be used to upload the artifacts produced by the scanners. See the uploaders section for more information on how to define an uploader.
# Example definition of a profile.
# It can be set by calling the endpoint `POST /api/v1/profiles/{name}`
# with the body containing the definition in YAML or JSON format (depending on Content-Type header).
# The name of the profile is set by the `name` path parameter and is used to identify the profile in the system.
# Each item of the scanners list is a User Container definition.
# See /docs/definitions/CONTAINERS.md for the full list of options and schema definition.
# An example is provided below and does not include all options.
scanners:
- image: "myorg.domain/scanner:latest"
imagePullPolicy: IfNotPresent
command: ["/bin/sh", "-c"]
args:
- "python3 --verbose scanner.py --results-dir=$OCULAR_RESULTS_DIR/report.json"
env:
- name: LOG_LEVEL
value: "debug"
secrets:
- name: token
mountType: envVar
mountTarget: MY_TOKEN
- image: "myorg.domain/another-scanner:latest"
imagePullPolicy: IfNotPresent
command: ["/bin/bash", "-c"]
args:
- "./my-scanner.sh --output=$OCULAR_RESULTS_DIR/output.txt"
secrets:
- name: my-config
mountType: file
mountTarget: /etc/config.yaml
# List of artifacts that the scanners will produce.
# These are file paths relative to the 'results' directory in the container filesystem.
# The path to the results directory is provided as the `OCULAR_RESULTS_DIR` environment variable.
artifacts:
- "report.json"
- "output.txt"
# List of uploaders to use for uploading the artifacts produced by the scanners.
# Each item is a map with the uploader name and any parameters that should be passed to the uploader.
# The uploader must be defined in the system, or the profile will fail to be created.
# Additionally, all required parameters for the uploader must be provided or the profile will fail to be created.
# To view the parameters that can be passed to an uploader, check the definitions from the endpoint `/api/v1/uploaders/{uploader_name}`
# for the specific uploader.
uploaders:
- name: "my-uploader"
parameters:
PARAM1: "value1"
PARAM2: "value2"
- name: "another-uploader"
parameters:
CUSTOM_PARAM: "custom_value"
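As a sketch of the scanner side of this contract, the entrypoint below shows the minimum a scanner needs to do: read the target from the current working directory (where the downloader placed it) and write an artifact (here output.txt, matching the profile above) into the documented OCULAR_RESULTS_DIR.
#!/usr/bin/env sh
# Minimal scanner sketch: write an artifact declared in the profile
# into the results directory so the uploaders can pick it up.
set -eu
results="${OCULAR_RESULTS_DIR:?results directory not provided}"
# Stand-in for a real scanning tool: record which files were scanned.
find . -type f > "$results/output.txt"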
Uploaders
Uploaders are container images that are used to process or upload data to another system. The container images are expected to read the files to upload from the paths given to them as command-line arguments and then perform the upload operation on those files. Uploaders are defined via the API and can then be run by referencing them in a profile definition (see the profiles section for more details on how to define a profile).
An Uploader definition is the same as a User Container with Parameters definition. See the section linked above for the full list of options and schema definition. The example below is provided for reference and does not include all options.
# Example definition of an uploader.
# It can be set by calling the endpoint `POST /api/v1/uploaders/{name}`
# with the body containing the definition in YAML or JSON format (depending on Content-Type header).
# The name of the uploader is set by the `name` path parameter and is used to identify the uploader in the system.
# Container Image URI
image: "myorg.domain/uploader:latest"
# Pull policy for the container image.
imagePullPolicy: IfNotPresent
# Command and args to execute when the uploader is run.
command: ["ruby", "uploader.rb"]
args:
- "--quiet"
- "-f"
- "./"
# Set environment variables for the uploader.
env:
- name: AWS_CONFIG
value: "/creds/aws-config"
# Secrets to mount when the uploader is executed.
secrets:
- id: uploader-aws-config
mountType: file
mountTarget: "/creds/aws-config"
# Parameter definitions for the uploader.
# These are provided by the profile that references this uploader.
parameters:
  BUCKET:
    description: The name of the bucket to upload the artifacts to
    required: true
  SUBFOLDER:
    description: Subfolder to place the uploaded files into
    required: false
    default: "scan-results"
Crawlers
Crawlers are container images that are used to enumerate targets to scan. The container is expected to gather a set of targets to scan and then call the API to start scans for those targets. The container will be given an authenticated token for the API, allowing it to start scans or other crawlers. Crawlers can be run on a schedule or on demand, and can be configured to receive a set of parameters when invoked. For more information on how to execute a crawler, see the searches section.
A Crawler definition is the same as a User Container With Parameters definition. See the section linked above for the full list of options and schema definition. The example below is provided for reference and does not include all options.
# Example definition of a crawler.
# It can be set by calling the endpoint `POST /api/v1/crawlers/{name}`
# with the body containing the definition in YAML or JSON format (depending on Content-Type header).
# The name of the crawler is set by the `name` path parameter and is used to identify the crawler in the system.
# Container Image URI
image: "myorg.domain/crawler:latest"
# Pull policy for the container image.
imagePullPolicy: IfNotPresent
# Command and args to execute when the crawler is run.
command: ["python3", "crawler.py"]
args:
- "--verbose"
- "--scan-folder=./"
# Set environment variables for the crawler.
env:
- name: LOG_LEVEL
value: "debug"
# Secrets to mount when the crawler is executed.
secrets:
- id: crawler-github-token
mountType: envVar
mountTarget: GITHUB_TOKEN
# Parameter definitions for the crawler.
# These are provided when the crawler is invoked,
# or scheduled to run.
parameters:
DOWNLOADER:
description: The name of the downloader to use for created pipelines
required: false
default: "git"
GITHUB_ORGS:
description: The name of the GitHub organization to crawl.
required: true
PROFILE_NAME:
description: The name of the profile to use for created pipelines
required: true
SLEEP_DURATION:
description: How long to sleep in between pipeline creations (e.g. 1m30s)
required: false
default: "1m30s"
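Putting the crawler contract together, the sketch below starts one pipeline per repository in a hard-coded list by calling the API directly. It uses the OCULAR_API_BASE_URL and OCULAR_SERVICE_ACCOUNT_TOKEN_PATH environment variables documented in the searches section, plus the parameters defined above. Whether the base URL already includes the API prefix is an assumption to verify against your install, and a real crawler would enumerate targets from GitHub rather than a fixed list.
#!/usr/bin/env sh
# Minimal crawler sketch: POST one pipeline request per target.
set -eu
token="$(cat "$OCULAR_SERVICE_ACCOUNT_TOKEN_PATH")"
# Stand-in for real enumeration (e.g. listing a GitHub organization).
targets="https://github.com/myorg/repo-one
https://github.com/myorg/repo-two"
for repo in $targets; do
  curl -sS -X POST \
    -H "Authorization: Bearer ${token}" \
    -H "Content-Type: application/json" \
    -d "{\"target\": {\"identifier\": \"${repo}\", \"downloader\": \"${DOWNLOADER:-git}\"}, \"profileName\": \"${PROFILE_NAME}\"}" \
    "${OCULAR_API_BASE_URL}/api/v1/pipelines"  # assumes the base URL is the server root
  # SLEEP_DURATION is a duration string (e.g. 1m30s); a real crawler
  # should parse it, here we simply pause for a fixed interval.
  sleep 90
done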
Secrets
Secrets contain sensitive information that is required by containers at runtime. Secrets can be mounted as environment variables or as files at a specific path. A secret associates data with a name in the API.
Secrets are referenced by their name in any user container or user container with parameters definition. See the containers section for more details on how to define a user container or user container with parameters.
# Example definition of a secret.
# It can be set by calling the endpoint `POST /api/v1/secrets/{name}`
# It will take the raw data from the request body and store it in the system.
# meaning this comment block would exist in the secret :)
my-secret-value
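For example, a GitHub token could be stored under the name crawler-github-token with curl; the host is a placeholder and ${TOKEN} is a bearer token from the authentication section. The raw request body becomes the secret value.
# Store the contents of github-token.txt as the secret's value.
curl -X POST \
  -H "Authorization: Bearer ${TOKEN}" \
  --data-binary @github-token.txt \
  https://ocular.example.com/api/v1/secrets/crawler-github-token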
Executions
Executions represent the actual execution of the containers, either to complete a scan (pipelines) or to find targets (searches). All executions happen in one or more Kubernetes jobs (or cron jobs).
Execution Status
An execution status has one of the following values:
Status | Description |
---|---|
NotRan | The job has not been run |
Pending | The job is waiting to be scheduled |
Running | The job is currently running |
Success | The job completed successfully |
Failure | The job completed with a non-zero exit code |
Cancelled | The job was cancelled externally |
Error | The job was not able to be run due to an error |
Pipelines
Pipelines are the core of the Ocular system. They are used to download target content, run scanners on it, and upload the results to third-party systems. When triggering a pipeline, the user provides a target identifier (e.g. a URL or a git repository), an optional target version, the name of the downloader to use, and a profile to run. The pipeline will then execute the following steps:
- Download: The pipeline will run the specified downloader with the target identifier and version set as environment variables. The downloader is expected to download the target content to its current working directory. Once the container exits (with code 0), the pipeline will proceed to the next step.
- Scan: The pipeline will run the scanners specified by the provided profile, which are run in parallel. Each scanner will be executed in its own container (but still on the same pod), with the current working directory set to the directory where the downloader wrote the target content. The scanners should produce artifacts and write them to the artifacts directory in the container filesystem (the path is given as the OCULAR_ARTIFACTS_DIR environment variable).
- Upload: Once all scanners have completed, the pipeline will extract the artifacts (listed in the profile) and run the uploaders in parallel. The uploaders will be passed the paths of the artifacts produced by the scanners as command-line arguments. The uploaders are expected to upload the artifacts to a specific location (e.g. a database, cloud storage, etc.).
- Complete: Once all uploaders have completed, the pipeline will be considered complete.
Currently, there is no feedback mechanism for the pipeline execution, so the user will need to check the API status of the pipeline execution to see if it was successful or not.
A pipeline request is a simple YAML or JSON object that contains the following fields:
# Example pipeline request
# It can be sent to the endpoint `POST /api/v1/pipelines`
# with the body containing the request in YAML or JSON format (depending on Content-Type header).
# Target identifier (e.g. a URL or a git repository)
# It is up to the downloader to interpret this identifier and download the target content.
target:
identifier: "http://github.com/myorg/myrepo"
downloader: "my-custom-git-downloader" # Name of the downloader to use for this pipeline
# version: "v1.0.0" # Optional version of the target, if applicable
profileName: "my-custom-profile" # Name of the profile to use for this pipeline
[K8s]: A pipeline is executed via two Kubernetes jobs. One is the scan job, which has the downloader as an init container and the ‘scanners’ section of the profile as the main containers. The other is the upload job, where the uploaders are the main containers. We transfer the artifact files between the two using a sidecar container.
The response to a pipeline request is the state of the pipeline execution. This can be queried via the endpoint GET /api/v1/pipelines/{pipeline_id}.
id: "12345678-1234-1234-1234-123456789012" # Unique identifier of the pipeline execution
scanStatus: "Pending" # Current state of the scan aspect (steps 1 & 2) of the pipeline
# The state will start as "Pending" and will change to "Running" when the downloader container begins its execution.
uploadStatus: "NotRan" # Current state of the uploader aspect (step 3) of the pipeline
# The status will start as `NotRan` until the scan completes, then transition to `Pending`
profile: "my-custom-profile" # Name of the profile used for this pipeline
target:
identifier: "http://github.com/myorg/myrepo"
downloader: "my-custom-git-downloader" # Name of the downloader to use for this pipeline
# version: "v1.0.0" # Optional version of the target, if was provided
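A sketch of triggering a pipeline and checking its state with curl; the host is a placeholder, ${TOKEN} is a bearer token from the authentication section, and the request body is the YAML document shown above saved as pipeline.yaml.
# Submit the pipeline request.
curl -sS -X POST \
  -H "Authorization: Bearer ${TOKEN}" \
  -H "Content-Type: application/x-yaml" \
  --data-binary @pipeline.yaml \
  https://ocular.example.com/api/v1/pipelines
# Check the execution state using the id returned above.
curl -sS -H "Authorization: Bearer ${TOKEN}" \
  https://ocular.example.com/api/v1/pipelines/12345678-1234-1234-1234-123456789012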
Searches
Searches are used to find targets that can be scanned by the pipeline.
They are typically used to discover new targets or to find targets that match certain criteria.
Searches are executed by running a crawler, which is a container image that is expected to gather a set of targets to scan and call the API to start scans for those targets.
The container will be given an authenticated token to the API, allowing it to call the API to start scans or other crawlers.
The token is located in a file mounted at the path specified by the environment variable OCULAR_SERVICE_ACCOUNT_TOKEN_PATH. The API base URL is also provided as an environment variable (OCULAR_API_BASE_URL).
The search will execute the following steps:
- Run Crawler: The search will run the specified crawler with the parameters provided in the request. The crawler is expected to gather a set of targets to scan and call the API to start scans for those targets.
- Start Pipelines: Once the crawler has gathered the targets, it should call the API to start pipelines for each target. The pipelines will execute as normal (see pipelines for more details). NOTE: crawlers should space out the pipeline creation to avoid overwhelming the system with too many pipelines at once. (A solution to this is actively being worked on, but currently the crawler should implement its own throttling logic.)
- Complete: Once the crawler has completed, the search will be considered complete.
Currently, there is no feedback mechanism for the search execution, so the user will need to check the API status of the search execution to see if it was successful or not.
A search request is a simple YAML or JSON object that contains the following fields:
# Example search request
# It can be sent to the endpoint `POST /api/v1/searches`
# with the body containing the request in YAML or JSON format (depending on Content-Type header).
crawlerName: "my-custom-crawler" # Name of the crawler to run
parameters: # Parameters to pass to the crawler
GITHUB_ORGS: "myorg" # Example parameter, the crawler should define its own parameters
SLEEP_DURATION: "1m30s" # Example parameter, the crawler should define its own parameters
If the search is scheduled, the request should use the endpoint POST /api/v1/scheduled/searches with the addition of the schedule field, which is a cron expression that defines when the search should be executed.
# Example scheduled search request
# It can be sent to the endpoint `POST /api/v1/scheduled/searches`
# with the body containing the request in YAML or JSON format (depending on Content-Type header).
crawlerName: "my-nightly-crawler" # Name of the crawler to run
parameters: # Parameters to pass to the crawler
GITHUB_ORGS: "myorg" # Example parameter, the crawler should define its own parameters
SLEEP_DURATION: "1m30s" # Example parameter, the crawler should define its own parameters
schedule: "0 0 * * *" # Cron expression for the schedule (e.g. every day at midnight)
The response to a search request is the state of the search execution. This can be queried via the endpoint GET /api/v1/searches/{id}, or GET /api/v1/scheduled/searches/{id} for scheduled searches.
id: "12345678-1234-1234-1234-123456789012" # Unique identifier of the pipeline execution
status: "Pending" # Current state of the pipeline execution (e.g. Pending, Running, Completed, Failed)
# The status will not be included if the response is a schedule response
crawler: "my-custom-profile" # Name of the profile used for this pipeline
parameters: # Parameters passed to the crawler
GITHUB_ORGS: "myorg" # Example parameter, the crawler should define its own parameters
SLEEP_DURATION: "1m30s" # Example parameter, the crawler should define its own parameters
# schedule: "0 0 * * *" # Cron expression for the schedule (if was scheduled search)
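A sketch of triggering the searches above with curl; the host is a placeholder, ${TOKEN} is a bearer token from the authentication section, and the request bodies are the YAML documents shown earlier saved to files.
# One-off search.
curl -sS -X POST \
  -H "Authorization: Bearer ${TOKEN}" \
  -H "Content-Type: application/x-yaml" \
  --data-binary @search.yaml \
  https://ocular.example.com/api/v1/searches
# Scheduled search (body includes the cron `schedule` field).
curl -sS -X POST \
  -H "Authorization: Bearer ${TOKEN}" \
  -H "Content-Type: application/x-yaml" \
  --data-binary @scheduled-search.yaml \
  https://ocular.example.com/api/v1/scheduled/searches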
Default Integrations
Ocular comes bundled with a set of default integrations meant to cover many common use cases. Currently, Ocular bundles default uploaders, downloaders, and crawlers alongside the Helm chart installation.
The source code of these implementations can be found in the GitHub repository crashappsec/ocular-default-integrations.
For the most up-to-date documentation on each of these, be sure to get the definition of the integration using the endpoint /api/v1/{integration}/{name}. For example, the latest documentation of the git downloader can be viewed at /api/v1/downloaders/git.
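For example, with a placeholder host and a bearer token from the authentication section:
# Fetch the definition (and documentation) of the bundled git downloader.
curl -sS \
  -H "Authorization: Bearer ${TOKEN}" \
  -H "Accept: application/yaml" \
  https://ocular.example.com/api/v1/downloaders/git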
Default Uploaders
S3
The S3 uploader will upload the artifact files to an S3 bucket. It will put the files in a folder named after the pipeline ID of the scan.
Parameters:
- BUCKET (Required): The name of the S3 bucket to upload to
- SUBFOLDER: Subfolder to place the resulting files into; used as a prefix before the pipeline ID
- REGION: The name of the AWS region to use
Secrets:
- uploader-awsconfig: The AWS config file for the SDK to use
Webhook
The Webhook uploader will send the contents of the artifact as the body of an HTTP request.
Parameters:
- METHOD (Required): The HTTP method to use for the request
- URL (Required): The URL to use for the request
Default Downloaders
Git
Git downloader will interpret the target identifier as a git URL and will clone the repository to the local filesystem. It will use the target version as the ref to check out, or the default branch if no version is specified.
Secrets:
- downloader-gitconfig: The gitconfig file to use. Will be mounted to /etc/gitconfig in the container
Docker
The Docker downloader will interpret the target identifier as a container image URI and will write the image locally as target.tar in the target directory. It will use the target version as the image tag to pull down, or the latest tag if no version is specified.
Secrets:
- downloader-dockerconfig: The docker config file to use. Will be mounted to /root/.docker/config.json, and that path will be set as the value for DOCKER_CONFIG
NPM
The NPM downloader will interpret the target identifier as an npm package name, and will download the package tar.gz and unpack it into the target directory. It will use the target version as the package version to pull down, or the latest version if no version is specified.
PyPi
The PyPi downloader will interpret the target identifier as a PyPi package name, and will download all of the package's files (.whl, source files, etc.) to the target directory. It will use the target version as the package version to pull down, or the latest version if no version is specified.
S3
The S3 downloader will interpret the target identifier as an S3 bucket name and will copy the bucket contents to the local filesystem. It will use the target version as the object version to pull down, or the latest version if no version is specified.
Secrets:
- downloader-awsconfig: The AWS config file to use. Will be mounted to /root/.aws/config in the container.
GCS
The GCS downloader will interpret the target identifier as a GCS bucket name and will copy the bucket contents to the local filesystem. It will use the target version as the object version to pull down, or the latest version if no version is specified.
Secrets:
- gcs-application-credentials: The Google Cloud credentials to use. Will be set as the environment variable GOOGLE_APPLICATION_CREDENTIALS in the container.
Default Crawlers
GitHub
GitHub crawler will crawl GitHub organizations for all repositories and create pipelines for each repository found.
Parameters:
- PROFILE_NAME (Required): The name of the profile to use for created pipelines
- SLEEP_DURATION: How long to sleep in between pipeline creations (e.g. 1m30s), defaults to 2m
- DOWNLOADER: The name of the downloader to use for created pipelines, defaults to git
- GITHUB_ORGS: The names of the GitHub organizations to crawl (comma separated)
Secrets:
- crawler-github-token: The GitHub token to use for authentication. Will be set as the environment variable GITHUB_TOKEN in the container.
GitLab
The GitLab crawler will crawl GitLab groups for all repositories and create pipelines for each repository found. If no groups are provided, it will crawl the entire GitLab instance.
Parameters:
- GITLAB_GROUPS: Comma-separated list of GitLab groups to crawl
- PROFILE_NAME (Required): The name of the profile to use for created pipelines
- SLEEP_DURATION: How long to sleep in between pipeline creations (e.g. 1m30s), defaults to 2m
- DOWNLOADER: The name of the downloader to use for created pipelines, defaults to git
- GITLAB_URL: The URL of the GitLab instance, defaults to https://gitlab.com
Secrets:
- crawler-gitlab-token: The GitLab token to use for authentication. Will be set as the environment variable GITLAB_TOKEN in the container.
Debugging Techniques
Since Ocular is built using Kubernetes, we encourage you to use Kubernetes tooling to debug.
A K8s client may be useful for looking at logs of jobs, or viewing events for why an image might not be pulled or failed to start.
Ocular is open-source to allow users as much control over their scanning as possible.
Below are some common kubectl commands that may come in handy:
# Get all logs for a pipeline
# this assumes $PIPELINE_ID is set to the ID of the pipeline
kubectl logs --all-containers --prefix \
  --selector "id=$PIPELINE_ID"
# Get events for pipeline
# this assumes $PIPELINE_ID is set to the ID of the pipeline
for job in scan upload; do
kubectl events --for \
"pod/$(kubectl get pods
-l id=$PIPELINE_ID,resource=$job-execution \
--no-headers -o custom-columns=":metadata.name")"
done