Amazon Web Services

Bearer offers native integration with Amazon Web Services. It pulls Datastores from your account, checks Data Security best practices, and evaluates what data is stored inside them.
Our integration has two levels of features:
  1. Standard Integration: read-only, with no access to data
  2. Advanced Integration: deploys an on-premises data classification engine to classify data held in Datastores
In both cases, Bearer NEVER sends confidential data to its SaaS platform. Anything sensitive stays on your AWS resources; only metadata and configuration data are sent back to us.

What data is retrieved from the AWS API?

Bearer supports the following AWS Datastore services:
  • RDS Service
  • S3
  • DynamoDB
An inventory entry is created for each service detected as follows:
  • RDS: one datastore per cluster
  • S3: one datastore per bucket
  • DynamoDB: one datastore per table
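The same inventory can be previewed from your own shell with the AWS CLI. The commands below are standard AWS CLI calls (not part of Bearer) that list the resources which would each become an inventory entry:

```shell
# List RDS clusters -- one Bearer datastore per cluster
aws rds describe-db-clusters --query 'DBClusters[].DBClusterIdentifier'

# List S3 buckets -- one datastore per bucket
aws s3api list-buckets --query 'Buckets[].Name'

# List DynamoDB tables -- one datastore per table
aws dynamodb list-tables --query 'TableNames'
```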
Bearer determines each Datastore's Processing Location from the AWS account's region; this value cannot be changed manually. You can retrieve the automatic mapping we perform.
Bearer automatically pulls all technical and configuration data, which is available in the Properties tab.
The following security measures are automatically assessed:
  1. Access Control
  2. Identity Management
  3. Backups
  4. Encryption
  5. Logs
Relevant items are displayed in the Security Measures tab of each retrieved Datastore.

Standard Integration

The standard installation is read-only and never accesses your data directly. It pulls helpful information about the datastores, like their locations and metadata, to make connections with other resources in the Bearer dashboard.
The standard integration is a prerequisite for the Advanced Integration.

Install AWS integration

  1. Go to Settings > Integrations > Cloud Platform.
  2. Select AWS.
  3. Click Add AWS Account and give the AWS account a name (to be displayed on the Bearer dashboard).
  4. Click Create Account and read the instructions.
  5. Click Launch Stack; this sends you to the AWS Console to complete the rest of the procedure through a CloudFormation template.
  6. After the process has completed on the AWS side, Bearer displays the account as Complete.
Bearer supports multiple AWS accounts. You may configure as many as your organization requires through the same process.
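If you prefer to verify the CloudFormation side from a terminal, you can check the stack's status with the AWS CLI. The stack name below is a placeholder; use the name shown in your CloudFormation console:

```shell
# Show the deployment status of the Bearer integration stack
# (replace <stack-name> with the actual stack name from your console)
aws cloudformation describe-stacks \
  --stack-name <stack-name> \
  --query 'Stacks[0].StackStatus'
# A healthy deployment reports CREATE_COMPLETE
```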

Retrieve AWS Datastores

You can filter the inventory by origin: each configured AWS account appears in the inventory's Origin filter option.

Advanced Integration

For each datastore, you have the option to enable our "AWS advanced integration." This extra step is required to populate the detections in the Data processing tab.
Pressing the Launch Stack button will launch a pre-populated Create Stack form in CloudFormation on AWS.
Our CloudFormation stacks are made up of three substacks: Region, VPC, and Database.
Once the stack template is configured for a region, it automatically applies to all VPCs and databases in that region. RegionStackName and VPCStackName, if applicable, are filled in automatically by the template.
Follow the prompts to complete the stack configuration on AWS. For database authentication, we support IAM or Secrets Manager.
When using IAM, make sure to configure the database user following the AWS guide. We use bearer_extractor as an example, but you may use any user name as long as it is set in the DatabaseUser field. For Secrets Manager, put the secret's ARN in the SecretArn field.
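As a sketch of how IAM database authentication works (the hostname, port, and region below are placeholders, not Bearer-specific values), the database user connects with a short-lived token rather than a stored password:

```shell
# Generate a short-lived IAM auth token for the database user
# (hostname, port, and region are placeholder values)
aws rds generate-db-auth-token \
  --hostname mydb.cluster-abc123.eu-west-1.rds.amazonaws.com \
  --port 5432 \
  --username bearer_extractor \
  --region eu-west-1
```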
Complete the form, acknowledge the capability requirements, and finally press the Create stack button to finish building the stack.
Great work! Once complete, we'll begin scanning for data processing; results can take about 10 minutes to populate in your dashboard. You can check the connection by confirming that the Launch Stack button now displays as enabled.
Once the scan and processing are complete, your dashboard should look something like this:
Currently, we only support RDS datastores; support for S3 and DynamoDB is coming soon.

Deploy at scale with Terraform

We also support AWS integration configuration through Terraform. For help getting started, contact support. Here is an example configuration:
resource "aws_cloudformation_stack" "bearer-extractor" {
  name         = "bearer-advanced-rds-terraform-${module.db.primary.id}" # any name you like
  template_url = "https://bearer-aws-extractor.s3.eu-west-1.amazonaws.com/cloudformation/combined_rds_advanced.yaml"

  capabilities = [
    "CAPABILITY_NAMED_IAM",
    "CAPABILITY_AUTO_EXPAND"
  ]

  parameters = {
    RegionStackName             = ""
    VpcStackName                = ""
    SecretArn                   = ""
    CrossAccountRoleName        = "cross-account-bearer-role"
    VpcId                       = module.vpc.vpc_id
    SubnetIds                   = join(",", module.vpc.private_subnets)
    DatabaseEngine              = module.db.primary.engine
    DatabaseArn                 = module.db.primary.arn
    DatabaseResourceId          = module.db.primary.resource_id
    DatabaseHost                = module.db.primary.address
    DatabasePort                = module.db.primary.port
    DatabaseName                = module.db.primary.db_name
    DatabaseUser                = "bearer_extractor"
    ExternalId                  = "XXXXX-XXXX-XXX-XXXXX"
    DatabaseSecurityGroupId     = module.db.sg.id
    UpdateDatabaseSecurityGroup = true

    # Analyzer part
    CreateAnalyzer              = true
    AnalyzerInputExpirationDays = 1
    AnalyzerLogRetentionDays    = 7
    CreateS3Endpoint            = true
    CreateSecretManagerEndpoint = true
    ExportExpirationDays        = 7
    ExtractorLogRetentionDays   = 7
  }
}
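The resource above is applied with the usual Terraform workflow; nothing Bearer-specific is required beyond the configuration itself:

```shell
terraform init    # download the AWS provider
terraform plan    # preview the CloudFormation stack to be created
terraform apply   # create the stack in your AWS account
```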