Supported languages

Bearer supports the following language and framework combinations. When you scan a codebase, Bearer detects each file's language from its extension and applies the corresponding rules. For example, if your application is composed of Ruby and JavaScript code, Bearer automatically applies the right rule set to each part.

Language                  Frameworks              Bearer OSS   Bearer Pro   Cross-file Analysis
Java                      Spring                  ✓            ✓            ✓
Python                    Django                  ✓            ✓            ✓
C#                        .Net                    –            ✓            Alpha
Ruby                      Ruby on Rails           ✓            ✓            ✓
JavaScript / TypeScript   Express, React          ✓            ✓            ✓
PHP                       Symfony                 ✓            ✓            ✓
Go                        Gorilla                 ✓            ✓            ✓
Kotlin                    Android, Spring, Ktor   –            ✓            ✓
Elixir                    Phoenix                 –            ✓            ✓
VB.Net                    .Net                    –            ✓            ✓

Legend:

  • Bearer OSS: Available in the open-source CLI version
  • Bearer Pro: Available in the commercial Pro version
  • Cross-file Analysis: Advanced interprocedural and inter-file analysis capabilities (Pro only)

Language and framework support

Bearer CLI works across a variety of languages, including:

  • Dynamically typed languages such as Ruby or JavaScript.
  • Optionally typed languages such as TypeScript.
  • Statically typed languages such as Java.

You can find the complete list of security rules, and the vulnerabilities they cover, for each language supported by Bearer CLI in the rules section.

Framework support

Bearer CLI supports the majority of frameworks out of the box, requiring only core language support to perform its analysis. However, certain frameworks need specialized rules; these are listed in the Frameworks column of the table above. If you notice gaps in support for a particular framework, please submit an issue with relevant details and examples.

What is Cross-file Analysis?

Cross-file analysis, also known as interprocedural and inter-file analysis, is an advanced static analysis technique that traces data flow and control flow across function and file boundaries within an entire codebase. Unlike traditional SAST tools that analyze code at the function or single-file level, cross-file analysis provides a holistic view of how data moves through your application.

How Cross-file Analysis Works

Modern applications are built with modular architectures where code is distributed across multiple files, modules, and components. A vulnerability might originate in one file, flow through several functions across different modules, and manifest as a security issue in a completely different part of the codebase.

Cross-file analysis follows the complete data flow path by:

  1. Tracing data origins: Identifying where sensitive data enters your application (user input, database queries, API calls, etc.)
  2. Following transformations: Tracking how data is processed, transformed, and passed between functions and files
  3. Detecting sinks: Identifying where data ends up in potentially dangerous operations (database queries, file operations, external API calls, etc.)
  4. Analyzing control flow: Understanding the execution paths and conditions that affect data flow
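
To make these steps concrete, here is a minimal sketch of a flow that a single-file scan would miss. The file names, the `fakeDatabase` stub, and the Express-style request shape are all hypothetical, purely for illustration:

```typescript
// --- routes.ts: the source file ---
import { buildUserQuery } from "./queries";
import { runQuery } from "./db";

export function handleRequest(req: { query: { name: string } }) {
  const name = req.query.name;        // 1. Source: untrusted user input enters here
  if (name.length > 0) {              // 4. Control flow: taint only flows on this branch
    const sql = buildUserQuery(name); // 2. Transformation: crosses a file boundary
    runQuery(sql);                    // 3. Heads toward the sink defined in db.ts
  }
}

// --- queries.ts: the transformation file ---
export function buildUserQuery(name: string): string {
  // String concatenation preserves the taint on the input
  return `SELECT * FROM users WHERE name = '${name}'`;
}

// --- db.ts: the sink file ---
// Scanned in isolation, nothing in this file looks user-controlled.
const fakeDatabase = { raw: (sql: string) => console.log("executing:", sql) };

export function runQuery(sql: string) {
  return fakeDatabase.raw(sql); // Sink: raw SQL execution, injectable if sql is tainted
}
```

A scanner that only looks at db.ts sees a function executing an opaque string. Only by connecting routes.ts → queries.ts → db.ts does the SQL injection become visible.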

How do we evaluate language support?

The development of a robust Static Application Security Testing (SAST) tool hinges on two crucial performance metrics: recall and precision. A modern, efficient SAST solution aims to minimize the false positive rate to avoid inundating developers with irrelevant findings. At the same time, it must not overlook real vulnerabilities, or it risks giving a false sense of security.
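
For reference, here are the standard definitions of these two metrics, with purely illustrative counts (not Bearer benchmark figures):

```typescript
// Illustrative counts only; these are not Bearer benchmark results.
const truePositives = 90;  // real vulnerabilities the tool reported
const falsePositives = 10; // reported findings that turned out to be noise
const falseNegatives = 30; // real vulnerabilities the tool missed

// Precision: of everything reported, how much was real? High precision = low noise.
const precision = truePositives / (truePositives + falsePositives); // 0.9

// Recall: of everything real, how much was reported? High recall = few misses.
const recall = truePositives / (truePositives + falseNegatives); // 0.75

console.log({ precision, recall });
```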

The methodology we employ for testing our software is instrumental in achieving this level of confidence. Although a few benchmarking projects exist for SAST, they often fall short of providing a comprehensive assessment. Coding styles, and the vulnerabilities they can introduce, are as varied as developers themselves, so a well-rounded test suite must represent how code is actually written. Relying solely on benchmarking projects is insufficient.

Given this, as part of our language release procedure, we rely heavily on open-source projects to evaluate the quality of our support. Our engineering team has written an in-depth post detailing our approach, which we strongly recommend you review here.

How does Bearer precision compare to solutions like Semgrep, Snyk, Checkmarx or SonarQube?

Establishing a high degree of accuracy for a SAST tool is challenging, and comparing tools is an even more complex task. Using our in-house toolkit, we have benchmarked Bearer against other well-established solutions on the market. The comprehensive results of our comparative analysis, along with an open data set, are available here.