Enumerating targets using searches

Searches are jobs that Ocular runs to enumerate targets and launch pipelines over them. A Search is a single execution of a crawler. There is also a CronSearch resource, which runs a Search on a cron schedule.

In this example, we will implement a simple use case: scanning all GitHub repositories that are part of the Crash Override GitHub organization, crashappsec.

Step 1. Selecting a Crawler

Ocular comes bundled with a default GitHub crawler, which supports enumerating all GitHub repositories that are part of a list of GitHub organizations given via the parameter GITHUB_ORGS. The crawler also requires the parameter PROFILE, which names the profile to use for the pipelines it creates. This crawler (like the other default crawlers) additionally accepts optional parameters that control the pipelines it creates:

  • DOWNLOADER_OVERRIDE: Overrides the downloader chosen by the crawler (e.g. for the GitHub and GitLab crawlers it is ocular-defaults-git). This parameter is required for the static crawler.
  • PIPELINE_TTL: The TTL of the pipelines created. This is parsed as a duration (e.g. 10s, 30m, 2h45m) and is the amount of time to wait after a pipeline finishes before cleaning it up. If unset, no TTL is applied.
  • SLEEP_DURATION: How long to sleep between creating new pipelines.
  • SCAN_SERVICE_ACCOUNT: The service account to set for the scan jobs of the pipelines.
  • UPLOAD_SERVICE_ACCOUNT: The service account to set for the upload jobs of the pipelines.

(You can see these parameters by inspecting the definition of the GitHub crawler with the command kubectl describe crawler ocular-defaults-github.) For more information on the default crawlers, see the default integrations section of the manual.

NOTE: The GitHub crawler supports using an authenticated token by setting the key github-token in the secret crawler-secrets in the same namespace.
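For example, a secret with that key could be created as follows. This is a sketch: the namespace ocular and the token value are placeholders you should adjust for your installation.

```shell
# Create (or update) the crawler-secrets secret holding a GitHub token.
# "ocular" and the token value are placeholders for your installation.
kubectl create secret generic crawler-secrets \
  --namespace ocular \
  --from-literal=github-token="<your-github-token>" \
  --dry-run=client -o yaml | kubectl apply -f -
```

The --dry-run=client piped into kubectl apply makes the command idempotent, so re-running it updates the token in place.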

Let's now start the search. We will need to supply:

  • The crawler name (in this case ocular-defaults-github)
  • The parameters

The snippet below shows an example manifest applied with kubectl. We set the org to our crashappsec org and the profile to the one we created in the quick start guide, example-profile. We omit DOWNLOADER_OVERRIDE, so the crawler uses its default Git downloader (ocular-defaults-git).

cat <<EOF | kubectl create -f -
apiVersion: ocular.crashoverride.run/v1beta1
kind: Search
metadata:
  generateName: example-search-
spec:
  crawlerRef:
    name: ocular-defaults-github
    parameters:
      - name: "GITHUB_ORGS"
        value: "crashappsec"
      - name: "PROFILE"
        value: "example-profile"
  ttlSecondsAfterFinished: 30
EOF

This will start the search as a Kubernetes job. After some time you will see pipelines begin to be created for each repository in the organization. By default the GitHub crawler starts them 2 minutes apart, but this can be customized with the SLEEP_DURATION parameter.
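To follow progress, you can list the Search resources and watch pipelines appear as the crawler enumerates repositories. This is a sketch: the plural resource names below are assumptions, so check kubectl api-resources for the exact names in your installation.

```shell
# List searches in the current namespace (plural name is an assumption).
kubectl get searches

# Watch pipelines being created by the crawler as it enumerates repos.
kubectl get pipelines --watch
```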

If we wanted to run this search every night at midnight, we can instead use the CronSearch resource.

This resource accepts a schedule string, a cron expression specifying when to run the search (in this case 0 0 * * *), and a template for the search to run at that time.

cat <<EOF | kubectl apply -f -
apiVersion: ocular.crashoverride.run/v1beta1
kind: CronSearch
metadata:
  name: example-cronsearch
spec:
  schedule: "0 0 * * *" # every day at midnight
  searchTemplate:
    spec:
      crawlerRef:
        name: ocular-defaults-github
        parameters:
          - name: "GITHUB_ORGS"
            value: "crashappsec"
          - name: "PROFILE"
            value: "example-profile"
      ttlSecondsAfterFinished: 30
EOF

Now, every night at midnight, the search will run the GitHub crawler with the parameters set above.
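You can confirm the schedule was registered by inspecting the resource. As above, this is a sketch and the plural resource name is an assumption:

```shell
# Confirm the CronSearch exists and inspect its schedule and template.
kubectl get cronsearches
kubectl describe cronsearch example-cronsearch
```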

Summary

In this guide, you’ve learned how to:

  1. Start a search and enumerate scan targets
  2. Schedule a search