# Aembit Documentation - Full Content > This file contains the full content of Aembit's documentation for machine learning model training and context. > Each document includes its title, URL, description, and complete content. # Get Started ============= # Conceptual overview URL: https://docs.aembit.io/get-started/concepts/ Description: This page provides a high-level conceptual overview of Aembit and its components import { CardGrid } from '@astrojs/starlight/components'; import ImageLinkCard from "/src/components/ImageLinkCard.astro"; This topic explains how Aembit operates behind the scenes (at a high level) to provide secure, seamless access between workloads. Use the links in each section to dive deeper into specific topics related to how Aembit works or start configuring and using those features. ## Aembit as an identity broker Aembit operates conceptually as an identity broker. It acts as an intermediary, facilitating secure access requests initiated by a [Client Workload](/astro/get-started/concepts/client-workloads) (like an application or script) attempting to connect to a target [Server Workload](/astro/get-started/concepts/server-workloads) (like an API or database). These workloads may operate across security boundaries or reside in different compute environments. For example, a Client Workload in AWS accessing a Server Workload in Azure. By centralizing the brokering function, Aembit helps you simplify the management of trust relationships and Access Policies across your disparate security boundaries and environments. ## Aembit's architecture Aembit consists of two cooperating systems: [Aembit Edge](#aembit-edge) and [Aembit Cloud](#aembit-cloud). Aembit Edge communicates with Aembit Cloud to handle authentication and authorization of access between your workloads. Separating the control plane and the data plane enables you to centralize policy management in the cloud while keeping the enforcement mechanism close to the workloads in your environments. The interception model employed by Aembit Edge is key to enabling the "No-Code Auth" capability. ### Aembit Edge [Aembit Edge](#aembit-edge) acts as the **data plane** or interception point and runs alongside Client Workloads in your infrastructure (such as a Kubernetes cluster). The primary function of Aembit Edge is to intercept outbound network requests from Client Workloads destined for target Server Workloads. Upon interception, Aembit Edge sends requests from Client Workloads to Aembit Cloud which handles the authentication and authorization of that request. If Aembit Cloud approves access, then Aembit Edge does the following: 1. Receives a credential from Aembit Cloud. 1. Injects the credential into the original request "just-in-time." 1. Forwards the modified request to the intended target Server Workload. Aembit Edge also sends detailed access event logs to Aembit Cloud for auditing purposes. --- ### Aembit Cloud [Aembit Cloud](/astro/get-started/concepts/aembit-cloud) acts as the **control plane** and receives requests intercepted by Aembit Edge. Aembit Cloud determines whether to authorize Client Workload requests and what credential to deliver. The primary functions of Aembit Cloud are to: 1. Evaluate access requests. 1. Authenticate Client Workloads and attest their identities through a [Trust Provider](/astro/get-started/concepts/trust-providers). 1. 
Enforce [Access Policies](/astro/get-started/concepts/access-policies) (including [Access Conditions](/astro/get-started/concepts/access-conditions) such as GeoIP or time). 1. Interact with external [Credential Providers](/astro/get-started/concepts/credential-providers) to obtain and issue necessary credentials. 1. Communicate access decisions to Aembit Edge. You can [administer Aembit Cloud](/astro/get-started/concepts/administration) through your unique, and isolated Aembit Tenant to define access rules, configure trust and credential sources, and monitor access events. Aembit Cloud logs all Access Authorization Events so you can [audit and report](/astro/get-started/concepts/audit-report) metadata related to access control. :::tip[Aembit only logs metadata] Crucially, Aembit functions purely as a control plane; it doesn't process or log any actual data from your workloads, only metadata related to access control. ::: --- ## Access Policies Aembit uses [Access Policies](/astro/get-started/concepts/access-policies) to control which Client Workloads can access which Server Workloads and under what conditions. Access Policies evaluate the following components when making access decisions: - **Client Workloads** - Any non-human entity that initiates an access request to consume a service or resource provided by a Server Workload. - **Trust Providers** - Attest to workload identities and provide information about the environment in which they operate with high reliability and trustworthiness. - **Access Conditions** - Criteria Aembit checks when evaluating an Access Policy to determine whether to grant a Client Workload access to a target Server Workload. - **Credential Providers** - Systems that provide access credentials, such as OAuth tokens, service account tokens, API keys, or username-and-password pairs. - **Server Workloads** - Software applications that serve requests from Client Workloads such as third-party SaaS APIs, API gateways, databases, and data warehouses. For a simplified illustration of the Access Policy evaluation flow, see [Evaluation flow: how Aembit grants access](/astro/get-started/how-aembit-works#evaluation-flow-how-aembit-grants-access). If a request meets all requirements, Aembit allows the connection and injects the credential. If any step fails, Aembit denies the request and logs the reason. --- ## Trust Providers Instead of Client Workloads managing and presenting a long-lived secret for authentication, Aembit uses [Trust Providers](/astro/get-started/concepts/trust-providers) to cryptographically verify the identity of Client Workloads attempting to access target Server Workloads. Trust Providers verify a Client Workload's identity using evidence obtained directly from its runtime environment—also known as workload attestation. Aembit integrates with many Trust Providers to support attestation across different environments: - AWS - Azure - Kubernetes - CI/CD platforms - Aembit Agent Controller in Kerberos environments Trust Providers supply cryptographically signed evidence, such as platform identity documents or tokens, about the Client Workload to Aembit Cloud. Aembit Cloud then validates this evidence to confirm the workload's identity before proceeding with access policy evaluation. Upon successful attestation, Aembit Cloud gains high confidence in the Client Workload's identity without relying on a shared secret. 
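To make this concrete, the following is a minimal sketch of how a Trust Provider for a Kubernetes environment might be declared with the Aembit Terraform Provider (introduced later on this page). The resource type and attribute names are illustrative assumptions; refer to the provider's registry documentation for the actual schema.

```hcl
# Illustrative sketch only: a Trust Provider that attests Client Workloads
# based on Kubernetes service account evidence. The resource and attribute
# names are assumptions and may differ from the provider's actual schema.
resource "aembit_trust_provider" "k8s_payments" {
  name      = "Payments cluster service accounts"
  is_active = true

  # Match rules describing the signed evidence (a service account token)
  # that Aembit Cloud should accept from the workload's runtime environment.
  kubernetes_service_account = {
    namespace            = "payments"
    service_account_name = "payments-api"
  }
}
```

Because the evidence is issued and signed by the platform itself (here, the Kubernetes API server), the Client Workload never has to hold or present a long-lived secret to prove its identity.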
--- ## Access Conditions Aembit uses [Access Conditions](/astro/get-started/concepts/access-conditions) to provide a mechanism for adding dynamic, context-aware constraints to Access Policies—similar to Multi-Factor Authentication (MFA) for human identities. Access Conditions allow Access Policies to incorporate rapidly changing environmental or operational factors into the access decision. For example: - **Time** - restrictions based on the time of day or day of the week - **GeoIP** - geographic location of the requesting workload During [Access Policy evaluation](/astro/get-started/how-aembit-works#evaluation-flow-how-aembit-grants-access), after Aembit Cloud matches the Client and Server Workloads to an Access Policy *and* it verifies the Client Workload's identity, Aembit Cloud explicitly evaluates all associated Access Conditions. Only if all Access Conditions pass, along with the Client Workload's identity check, does the Access Policy grant access and trigger the Credential Provider. Aembit also integrates with external security posture management tools, such as Wiz or CrowdStrike. This allows Access Policies to enforce conditions such as "Aembit only grants access if Wiz reports a healthy security posture for that Client Workload." --- ## Credential Providers Aembit uses [Credential Providers](/astro/get-started/concepts/credential-providers) to facilitate secure authentication between workloads. Credential Providers generate and manage the credentials needed for a Client Workload to authenticate to a Server Workload when an Access Policy grants a Client Workload access. Credential Providers abstract away the complexity of different authentication mechanisms and credential types, providing a consistent interface for workload-to-workload authentication regardless of the underlying systems. When an Access Policy evaluation succeeds, Aembit Cloud triggers the Credential Provider to generate the appropriate credentials for the specific authentication mechanism that the target Server Workload requires. This interaction is what allows a Client Workload to authenticate to a Server Workload without storing or managing long-lived credentials. This design limits exposure and prevents credential sprawl. Aembit supports many types of Credential Providers to accommodate different authentication requirements: - **Basic Authentication** - For systems requiring username/password authentication - **OAuth 2.0** - For modern API authentication flows - **API Key** - For services using API key-based authentication - **Certificate-Based** - For systems requiring mutual TLS authentication - **Cloud Provider Credentials** - For accessing cloud services (AWS, Azure, GCP) through Workload Identity Federation (WIF) - **SAML** - For enterprise federated authentication scenarios - **Kubernetes Tokens** - For Kubernetes-based workloads You can also set up Credential Providers for external secrets management systems like HashiCorp Vault, AWS Secrets Manager, or Azure Key Vault to retrieve sensitive authentication material when needed. To provide **credential lifecycle management** capabilities, Aembit offers [Credential Provider integrations](/astro/user-guide/access-policies/credential-providers/integrations/) with services like GitLab to create, rotate, and delete access credentials on your behalf. --- ## Observability Aembit logs every access request (Access Authorization Events) and administrative change.
These logs help you understand what's happening, troubleshoot problems, and meet compliance goals and requirements. Aembit logs the following: - Each request's source, destination, and decision. - The specific policy that allowed or blocked access. - Details about which Trust Provider verified an identity. - What credential Aembit delivered (or why it didn't). You can view this information in your Aembit Tenant UI or export it to external log systems for long-term storage and analysis by setting up a [Log Stream](/astro/user-guide/administration/log-streams/). See [Audit and report](/astro/get-started/concepts/audit-report) for more information. --- ## Administration Administration in Aembit provides a comprehensive framework for managing security policies, credentials, and access controls across your organization, and for controlling and monitoring how your users access and use Aembit. You administer Aembit through your unique, dedicated environment—your [Aembit Tenant](#about-aembit-tenants). Aembit's Administration UI provides centralized management of all Aembit's primary components, including Access Policies. Additionally, you can configure and manage advanced Aembit Edge Component features such as TLS Decrypt, PKI-based TLS, proxy steering methods, and more. Aembit's administration system follows a Role-Based Access Control (RBAC) model, allowing you to delegate specific administrative responsibilities while maintaining the principle of least privilege. Aembit's administration capabilities include: - **Admin Dashboard** - A central interface providing visibility into system status, recent activities, and security alerts. - **Users** - Management of human users who interact with the Aembit administrative interface. - **Roles** - Predefined and custom sets of responsibilities that you can assign to your users to control their administrative access. - **Permissions** - Granular controls that define what actions your users can perform within your Aembit Tenant. - **Discovery** - Tools for identifying and cataloging workloads across your infrastructure. - **Resource Sets** - Logical groupings of resources that help organize and manage access at scale across your environment. - **Log Streams** - Configuration for sending security and audit logs to external monitoring systems. - **Identity Providers** - Integration with external identity systems for authenticating administrators. - **Sign-On Policies** - Rules governing how administrators authenticate to the Aembit system. ### About Aembit Tenants Aembit Tenants serve as isolated, dedicated environments within Aembit that provide complete separation of administrative domains and security configurations. Each tenant operates independently with its own set of: - **Administrative Users** - Users who manage the tenant have no access to other tenants. - **Resources** - All workloads, policies, and configurations are tenant-specific. - **Security Boundaries** - Complete isolation makes sure configurations in one tenant can't affect others. --- ## Aembit Terraform Provider Aembit supports scalable, repeatable infrastructure-as-code (IaC) workflows through the [Aembit Terraform Provider](https://registry.terraform.io/providers/Aembit/aembit/latest). Terraform gives you the ability to: - Codify access policies and workload identity configuration. - Version control changes to your identity and access infrastructure. - Apply changes consistently across staging, production, and multicloud environments.
- Automate onboarding for new workloads, trust providers, and credential integrations. This helps reduce manual steps, eliminate configuration drift, and ensure your access policies are reproducible and reviewable. The Aembit Terraform Provider supports all core Aembit resources: | Resource Type | Terraform Support | |-----------------------|------------------------------| | Trust Providers | ✅ Create and configure | | Client Workloads | ✅ Manage identity matching | | Server Workloads | ✅ Define endpoints, auth | | Credential Providers | ✅ Integrate secrets/tokens | | Access Policies | ✅ Authorize workload access | | Access Conditions | ✅ Enforce dynamic controls | | Resource Sets | ✅ Segment environments | | Roles & Permissions | ✅ Assign fine-grained access| This full coverage enables you to declare your Aembit configuration as code, just like cloud resources or Kubernetes objects. --- ## Additional resources - [Access Policies](/astro/get-started/concepts/access-policies) - [Audit and report](/astro/get-started/concepts/audit-report) - [Administering Aembit](/astro/get-started/concepts/administration) - [Scaling with Terraform](/astro/get-started/concepts/scaling-terraform) --- # Access Conditions overview URL: https://docs.aembit.io/get-started/concepts/access-conditions/ Description: Description of Access Conditions and how they work --- # Access Policies overview URL: https://docs.aembit.io/get-started/concepts/access-policies/ Description: Description of Access Policies, their components, and how they work --- # Aembit administration URL: https://docs.aembit.io/get-started/concepts/administration/ Description: Discover Aembit's administration capabilities This page provides an overview of all administrative capabilities available in your Aembit Tenant. ## Admin dashboard The Admin dashboard serves as your command center for monitoring the health and activity of your Aembit deployment. It provides real-time visibility into workload connections, credential usage, and potential security issues. This visibility allows you to identify and address operational concerns. The [Admin dashboard](/astro/user-guide/administration/admin-dashboard/) provides: - Summary metrics for configured workloads and entities - Workload event history with severity indicators - Client and Server Workloads connection metrics - Credential usage analytics - Application protocol distribution - Access condition failure monitoring ## User management User management in Aembit allows you to control who can access your Aembit Tenant and what actions they can perform. This capability is essential for implementing the principle of least privilege and making sure you have proper separation of duties within your organization. [User management](/astro/user-guide/administration/users/) features include: - [Add users](/astro/user-guide/administration/users/add-user) with specific roles and contact information - Configure external authentication options - Manage user credentials and access rights ## Roles and permissions Aembit's role-based access control system allows you to create customized roles with precise permissions. This enables you to delegate administrative responsibilities without granting excessive privileges. This granular approach to access control helps maintain security while supporting collaborative administration.
[Role-based access control](/astro/user-guide/administration/roles/) provides: - [Create specialized roles](/astro/user-guide/administration/roles/add-roles) beyond default SuperAdmin and Auditor - Configure granular permissions for each role - Integrate with Resource Sets for multi-tenancy ## Identity providers Identity provider integration allows you to leverage your existing identity infrastructure with Aembit. By connecting your corporate identity provider, you can enforce consistent authentication policies across your organization. This integration simplifies user management through automatic provisioning and role mapping. [Identity provider integration](/astro/user-guide/administration/identity-providers/) enables: - Connect with [SAML 2.0 providers](/astro/user-guide/administration/identity-providers/creating-identity-providers) (Okta, Google, Microsoft Entra ID) - Enable Single Sign-On (SSO) authentication - Configure [SSO automatic user creation](/astro/user-guide/administration/identity-providers/automatic-user-creation) for new users ## Resource Sets Resource Sets provide powerful multi-tenancy capabilities, allowing you to segment your Aembit environment for different teams, applications, or business units. This isolation makes sure administrators can only manage resources within their assigned domains. It supports organizational boundaries while maintaining centralized oversight. [Resource Sets](/astro/user-guide/administration/resource-sets/) allow you to: - [Create isolated resource groups](/astro/user-guide/administration/resource-sets/creating-resource-sets) - [Add workloads and resources](/astro/user-guide/administration/resource-sets/adding-resources-to-resource-set) to specific sets - [Assign roles](/astro/user-guide/administration/resource-sets/assign-roles) for managing each Resource Set - [Deploy Resource Sets](/astro/user-guide/administration/resource-sets/deploying-resource-sets) using specific methods ## Log streams Log streams extend Aembit's audit and monitoring capabilities by forwarding logs to external systems. This enables long-term storage, analysis, and compliance reporting. The integration with your existing security monitoring infrastructure allows Aembit activity to become part of your organization's overall security operations. [Log streams](/astro/user-guide/administration/log-streams/) allow you to: - Forward logs to [AWS S3 buckets](/astro/user-guide/administration/log-streams/aws-s3) - Export logs to [Google Cloud Storage](/astro/user-guide/administration/log-streams/gcs-bucket) - Configure multiple stream types for different log categories ## Sign-on policy Sign-on policy controls how administrators authenticate to the Aembit platform. This central configuration point allows you to enforce strong authentication requirements. It makes sure that access to this privileged system follows your organization's security standards.
The [Sign-on policy](/astro/user-guide/administration/sign-on-policy/) page allows you to: - Configure SSO enforcement requirements - Set up multi-factor authentication policies - Manage authentication grace periods --- # About Aembit Cloud URL: https://docs.aembit.io/get-started/concepts/aembit-cloud/ Description: A high-level overview of Aembit Cloud's purpose and its features --- # About Aembit Edge URL: https://docs.aembit.io/get-started/concepts/aembit-edge/ Description: A high-level overview of Aembit Edge's purpose and its features ## Agent Controller The Agent Controller is a key Aembit Edge Component that establishes secure communication between the Agent Proxy and Aembit Cloud. Agent Controllers establish trust from within your network on behalf of other Edge Components. --- # Auditing and reporting URL: https://docs.aembit.io/get-started/concepts/audit-report/ Description: A high-level overview of Aembit's auditing and reporting capabilities --- # Client Workloads URL: https://docs.aembit.io/get-started/concepts/client-workloads/ Description: Client Workloads overview --- # Credential Providers overview URL: https://docs.aembit.io/get-started/concepts/credential-providers/ Description: Overview of Credential Providers and their role in Aembit --- # Scaling Aembit with Terraform URL: https://docs.aembit.io/get-started/concepts/scaling-terraform/ Description: Description of how to scale with the Aembit Terraform provider Aembit supports scalable, repeatable infrastructure-as-code workflows through its official **Terraform provider**. By managing Aembit resources declaratively in code, you can automate onboarding, ensure consistent policies across environments, and scale access controls alongside your infrastructure. This guide explains how the Aembit Terraform Provider works and how to use it to scale Aembit in production environments. ## Why Use Terraform with Aembit? Terraform gives you the ability to: - **Codify access policies and workload identity configuration** - **Version control changes** to your identity and access infrastructure - **Apply changes consistently** across staging, production, and multicloud environments - **Automate onboarding** for new workloads, trust providers, and credential integrations This helps reduce manual steps, eliminate configuration drift, and ensure your access policies are reproducible and reviewable. ## What Can You Manage? The Aembit Terraform Provider supports all core Aembit resources: | Resource Type | Terraform Support | |----------------------|-----------------------------| | Trust Providers | ✅ Create and configure | | Client Workloads | ✅ Manage identity matching | | Server Workloads | ✅ Define endpoints, auth | | Credential Providers | ✅ Integrate secrets/tokens | | Access Policies | ✅ Authorize workload access| | Access Conditions | ✅ Enforce dynamic controls | | Resource Sets | ✅ Segment environments | | Roles & Permissions | ✅ Assign fine-grained access| This full coverage enables you to declare your Aembit configuration as code, just like cloud resources or Kubernetes objects. ## How the Terraform Provider Works 1. **Authenticate** with your Aembit tenant by providing an access token. 2. **Declare resources** like workloads, policies, and credential providers in `.tf` files. 3. **Run `terraform apply`** to push the desired state to Aembit. 4. Aembit **provisions or updates** the corresponding resources in your tenant. 
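As a hedged illustration of step 2, a declared resource might look roughly like the sketch below; the resource type and attribute names are assumptions rather than the provider's confirmed schema, so consult the [provider's registry documentation](https://registry.terraform.io/providers/Aembit/aembit/latest) for the actual attributes. The example provider block that follows corresponds to step 1 (authenticating to your tenant).

```hcl
# Illustrative sketch of step 2 ("Declare resources"): a Server Workload that
# describes an API endpoint Client Workloads need to reach. Attribute names
# are assumptions; check the provider documentation for the actual schema.
resource "aembit_server_workload" "billing_api" {
  name      = "Billing API" # hypothetical workload name
  is_active = true

  service_endpoint = {
    host         = "api.billing.example.com" # hypothetical hostname
    port         = 443
    app_protocol = "HTTP"
    tls          = true
  }
}
```

Running `terraform plan` before `terraform apply` (step 3) lets you review the changes Aembit would make to your tenant before pushing them.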
Example provider block: ```hcl provider "aembit" { token = var.aembit_api_token tenant_id = var.aembit_tenant_id } ``` --- # Server Workloads overview URL: https://docs.aembit.io/get-started/concepts/server-workloads/ Description: Overview of Server Workloads and their role in Aembit --- # Trust Providers overview URL: https://docs.aembit.io/get-started/concepts/trust-providers/ Description: Overview of Trust Providers and their role in Aembit --- # Proof of Concept URL: https://docs.aembit.io/get-started/customer-poc/ Description: This page provides customers with the steps needed to evaluate Aembit's platform and product ## Introduction This page describes the steps for a customer to evaluate Aembit's Workload Credential Management, which includes the following configuration steps: - Configure Aembit Cloud - Create an Access Condition - Configure and test Aembit Edge Components ## Aembit Cloud Configuration Steps To configure your Aembit Cloud Tenant, you need to log into your tenant and then perform the following steps: - create a Trust Provider - create a Client Workload - create a Server Workload - create a Credential Provider - create an Access Policy ### Create a Trust Provider To create a **Trust Provider**, you need to specify the type of Trust Provider. In this example, you will set the Trust Provider type to *GCP Identity Token* by following the steps listed below. 1. Click on the **Trust Provider** link in the left navigation pane. 2. Click on the **New** button to create a new Trust Provider. You should see the Trust Provider page appear. ![New Trust Provider Page](../../../assets/images/poc_create_trust_provider.png) 3. Enter information in the following fields: - **Name** - Name of the Trust Provider. - **Description** - Optional text description of the Trust Provider. - **Trust Provider** - Select **GCP Identity Token** from the drop-down menu for this field. 4. In the Match Rules section, click on **New Rule**. Two new fields appear: **Attribute** and **Value**. - In the **Attribute** dropdown menu, select **email**. - In the **Value** field, enter the email address of the GCP Cloud Run Job Service Account. 5. Copy the `Client ID`, which is available after you click **Save**. This value needs to be placed into the `{aembit-tp-ClientId}` variable below. ### Create a Client Workload Once you have created a Trust Provider, you need to now create a Client Workload. Perform the steps listed below. 1. Click on the **Client Workloads** link in the left navigation pane. 2. Click on the **New** button to create a new Client Workload. You should see a Client Workload page appear. ![New Client Workload Page](../../../assets/images/poc_create_client_workload.png) 3. Enter information in the following fields: - **Name** - Name of the Client Workload. - **Description** - Optional text description of the Client Workload. - **Client Identification** - Enter the following values: - In the **Client Identification** drop-down menu, select **GCP Identity Token**. - In the **Value** fields, enter the email address of the GCP Cloud Run Job Service Account. ### Create a Server Workload Next, you need to create a Server Workload. Perform the steps listed below. 1. Click on the **Server Workloads** link in the left navigation pane. 2. Click on the **New** button to create a new Server Workload. You should see a Server Workload page appear. ![New Server Workloads Page](../../../assets/images/poc_create_server_workload.png) 3. 
Enter information in the following fields: - **Name** - Name of the Server Workload. - **Description** - Optional text description of the Server Workload. - **Service Endpoint Information** - Enter the following values: - Set the **Host** value to `secretmanager.googleapis.com`. - Select `HTTP` from the Application Protocol drop-down menu. - Check the **Port** and **Port Forward** TLS checkboxes. Ensure that both ports are set to `443`. - For Authentication, select `HTTP Authentication` with the `Bearer` Authentication Scheme. ### Create a Credential Provider After creating the Server Workload, you must now create a Credential Provider by performing the steps listed below. 1. Click on the **Credential Providers** link in the left navigation pane. 2. Click on the **New** button to create a new Credential Provider. You should see a Credential Provider page appear. ![New Credential Providers Page](../../../assets/images/poc_create_credential_provider.png) 3. Enter information in the following fields: - **Name** - Name of the Credential Provider. - **Description** - Optional text description of the Credential Provider. - **Credential Type**: This needs to be *Google Workload Identity Federation*. - In the **OIDC Issuer URL** field, copy the OIDC Issuer URL and enter it into a GCP Workload Identity Federation Pool Provider (of type *OpenID Connect (OIDC)*) - In the **Audience** field, copy the default audience of the GCP Workload Identity Pool Provider into the Aembit Credential Provider **Audience** field (without `https`, so it should start with `//iam.googleapis.com/projects`) - Within the GCP Workload Identity Pool Provider, set the **Attribute Mapping** ***OIDC 1*** value to `assertion.tenant` - Configure your GCP Workload Identity Pool with a Connected Service Account and set the Aembit Credential Provider **Service Account Email** to the email address of the Connected Service Account. **Notes** - The GCP Cloud Run Job Service Account and Workload Identity Federation Pool Connected Service Account do not need to be the same Service Account. - Use the **Verify** button to ensure the Credential Provider is working properly before proceeding. ### Create an Access Policy Next, you need to create an Access Policy. Because you have already created a Trust Provider, Client Workload, Server Workload, and Credential Provider, you simply need to perform the following step: 1. Create an **Access Policy** for all of the above Aembit elements. - When adding the element of each type, use the **Existing** button and select the previously created element. - When all elements have been added, click **Save & Activate** to finalize the Access Policy. ### Create an Access Condition (Optional) The final step you may choose to perform is to create an Access Condition that restricts the time a credential may be retrieved. Perform the steps listed below. If you choose to create an Access Condition, this Access Condition is added to the Access Policy. 1. Click on the **Access Conditions** link in the left navigation pane. 2. Click on the **New Access Condition** button to create a new Access Condition. You should see an Access Condition page appear. ![New Access Conditions Page](../../../assets/images/poc_create_access_condition.png) 3. Enter information in the following fields: - **Name** - Name of the Access Condition. - **Description** - Optional text description of the Access Condition. - **Integration** - The type of integration for the Access Condition. For now, select *Aembit Time Condition*.
- Select the Time Zone that should be used for the Time Condition (the values and specific days and times for which access will be allowed). **Note** If the credential request occurs outside of these defined periods, the request is rejected as Unauthorized. ## Aembit Edge Configuration and Testing **Note:** The following steps are based on using GCP Cloud Shell and should be adjusted to your development environment. 1. Go to the folder that contains your GCP Cloud Run Job source code. 2. To build and deploy to GCP Cloud Run, execute the following command (adjusting configuration values as needed): `gcloud run jobs deploy {job-name} --source . --tasks {tasks} --max-retries {retries} --region {region} --set-env-vars "AEMBIT_CLIENT_ID={aembit-tp-ClientId}"` **Note** This step relates to the previous step (Step #1). 3. To execute the GCP Cloud Run Job, the following command can be used, or you can go into the GCP Console and click **Execute** within the specific GCP Cloud Run Job. `gcloud run jobs execute {job-name} --region {region}` 4. Review Authorization Events. Now that the Cloud Run Job has executed, you can go into your Aembit Tenant, open the Reporting section, and select **Access Authorization Events**. **Note:** It can take from 1-3 minutes for events to show up in this view. --- # How Aembit works URL: https://docs.aembit.io/get-started/how-aembit-works/ Description: Aembit's core concepts and how they work import { CardGrid } from '@astrojs/starlight/components'; import ImageLinkCard from "/src/components/ImageLinkCard.astro"; This topic provides an overview of how Aembit works, including its core architecture, key components, and the flow of access requests. It also covers the main components of Aembit, including Workloads, Access Policies, Trust Providers, Access Conditions, and Credential Providers. Use the links in each section to dive deeper into specific topics related to how Aembit works or start configuring and using those features. ## The core problem Aembit solves Most organizations secure workload access using static, long-lived secrets (API keys, passwords, tokens) that are: - Difficult to securely distribute and store - Prone to leakage and theft - Hard to rotate regularly - A significant security risk when compromised Aembit shifts this paradigm by **managing access instead of secrets**, using verifiable workload identity to secure communications without requiring applications to store credentials. ## Core architecture Aembit consists of two main components: ### Aembit Edge: The data plane The **data plane** deployed in your environment that: - Intercepts traffic from your applications - Gathers identity information - Injects credentials for seamless authentication - Forwards traffic to target services --- ### Aembit Cloud: The control plane The **control plane** that: - Makes policy decisions - Verifies workload identities - Issues credentials - Provides administration tools - Captures audit logs --- ## Evaluation flow: How Aembit grants access To understand how Aembit works, knowing how Aembit makes decisions when evaluating Access Policies is useful.
The following diagram is a simplified illustration of the Access Policy evaluation flow: ```d2 title: Access Policy evaluation flow title: { label: "Simplified Aembit Access Policy Evaluation Flow" near: top-center shape: text style.font-size: 58 style.underline: true } Client_Workload: { shape: rectangle label: "Client Workload\n(App, Script)" } Server_Workload: { shape: rectangle label: "Server Workload\n(API, DB, SaaS)" } Aembit_Edge: { shape: rectangle label: "Aembit Edge" } Client_Environment: { shape: cloud label: "Client's Runtime\nEnvironment\n(AWS, Azure, K8s)" } Aembit_Cloud: { shape: rectangle label: "Aembit Cloud" style.stroke: "#666666" Access_Policy: { label: "Access Policy" tooltip: "Defines WHO (Client) can access WHAT (Server) under WHICH conditions" style.stroke-dash: 4 Trust_Provider: { shape: hexagon label: "Trust Provider" tooltip: "Verifies Client Workload identity via environment attestation (AWS IMDS, K8s Token, SPIFFE)" } Access_Conditions: { shape: hexagon label: "Access Conditions" tooltip: "Evaluates dynamic context (Time, Geo, Security Posture) before granting access" } Credential_Provider: { shape: hexagon label: "Credential Provider" tooltip: "Generates/Retrieves ACCESS credentials (OAuth, STS, API Key) for the Server Workload" } } AuthZ_Log: { shape: cylinder style.stroke: "teal" label: "Authorization Log" tooltip: "Records detailed audit trail of every access attempt and policy decision" } } # Flow of Access Request Evaluation Client_Workload -> Aembit_Edge: "1. Initiates Request to Server" { style.animated: true } Aembit_Edge -> Client_Environment: "2a. Get Identity Evidence" { style.animated: true } Client_Environment -> Aembit_Edge: "2b. Return Identity Evidence" { style.animated: true } Aembit_Edge -> Aembit_Cloud.Access_Policy: "3. Send Request to\nAccess Policy" { style.animated: true } Aembit_Cloud.Access_Policy.Trust_Provider -> Aembit_Cloud.Access_Policy.Access_Conditions: "4. Verified Identity -> Evaluate Access Conditions" { style.animated: true } Aembit_Cloud.Access_Policy.Access_Conditions -> Aembit_Cloud.Access_Policy.Credential_Provider: "5. Conditions Met -> Request Access Credential" { style.animated: true } Aembit_Cloud.Access_Policy -> Aembit_Edge: "6. Return\nAccess Credential" { style.animated: true } Aembit_Edge -> Server_Workload: "7. Inject Credential\n& Forward Request" { style.animated: true } # Logging occurs in parallel Aembit_Cloud.Access_Policy -> Aembit_Cloud.AuthZ_Log:"Logs Access Authorization Events" { style.stroke: "teal" style.stroke-dash: 2 } ``` Access Policies follow a multi-step process to decide whether to allow a Client Workload to access a Server Workload: 1. **Request Initiation & Interception** - A Client Workload attempts to connect to a Server Workload. 1. **Identify the Workloads** - Aembit Edge observes the Client Workload's identity using metadata from your environment—such as Kubernetes service account names, VM identity tokens, or cloud-specific signals. 1. **Match request to an Access Policy** - Aembit Cloud compares the request to existing Access Policies. If no policy matches both workloads, Aembit denies the request. 1. **Verify Identity with Trust Providers** (optional) - Aembit checks with a Trust Provider (like AWS, Azure, or Kubernetes) to verify the Client Workload's identity. This process removes the need for long-lived secrets by leveraging native cloud or orchestration signals. 1. 
**Evaluate Access Conditions** (optional) - If the request matches a policy, Aembit checks whether it satisfies any extra conditions. For example, it might require the workload to run in a specific region or during certain hours. 1. **Retrieve Credentials from a Credential Provider** - When the request passes all checks, Aembit contacts the Credential Provider to retrieve the appropriate credential—such as an API key or OAuth token. 1. **Inject the Credential** - Aembit Edge injects the credential directly into the request, typically using an HTTP header. The Client Workload never sees or stores the credential. ## Key components Aembit is built around several key components that work together to provide secure workload access: ### Workloads: Non-human identities Aembit uses the term **workload** to refer to any application, service, or job that initiates requests to access other services or resources. Aembit organizes identities using two workload types: - **Client Workloads** - Applications, services, or jobs initiating requests - **Server Workloads** - Target APIs, databases, or services receiving requests --- ### Access Policies: Defining access rules Central to Aembit's authorization model, Access Policies connect: - A Client Workload (who wants access) - A Server Workload (what they want to access) - A Trust Provider (how to verify the client's identity) - Access Conditions (when/where/under what circumstances access is allowed) - A Credential Provider (what credentials to issue) Access Policies are the rules that define whether a specific Client Workload can access a specific Server Workload. --- ### Trust Providers: Verifying identity Trust Providers verify Client Workload identity through environmental attestation rather than shared secrets. They support many platforms: - AWS, Azure, and GCP environments - Kubernetes clusters - CI/CD pipelines (GitHub Actions, GitLab) - On-premises systems via Kerberos This "secretless" approach means your applications don't need to store authentication credentials for their own identity. --- ### Access Conditions: Adding context Access Conditions add contextual requirements to Access Policies, such as: - Time restrictions - Geographic location - Security posture (via integrations with tools like Wiz or CrowdStrike) This provides Multi-Factor Authentication (MFA)-like assurance for non-human access. --- ### Credential Providers: Managing credentials Credential Providers obtain or generate the authentication credentials needed to access target services: - Temporary cloud provider tokens - OAuth tokens - JWTs - API keys Aembit prioritizes short-lived, just-in-time credentials whenever possible. --- ## Auditing logs: Tracking access requests Aembit provides detailed audit logs of all access requests (called **Access Authorization Events**) and of administrative changes. --- ## Administrative capabilities: Managing Aembit Aembit includes features for enterprise management: - **Resource Sets**: Logically group resources by environment or team - **Role-Based Access Control (RBAC)**: Control administrative access within the platform - **Identity Providers**: Integrate with existing identity systems for user management - **Audit Logging**: Track policy evaluations and administrative changes - **Log Streams**: Export logs to external systems for analysis - And more...
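The Terraform provider described in the next section offers a compact way to see how these components fit together: an Access Policy references a Client Workload, Trust Provider, Access Conditions, Credential Provider, and Server Workload. The sketch below is illustrative only; the resource type and attribute names are assumptions and may not match the provider's actual schema.

```hcl
# Illustrative sketch: an Access Policy wiring the key components together.
# In a real configuration each value would reference another aembit_* resource
# (for example, aembit_client_workload.reporting_job.id); literal placeholder
# IDs are used here only to keep the sketch self-contained.
resource "aembit_access_policy" "reporting_to_warehouse" {
  name      = "reporting-job to data-warehouse"
  is_active = true

  client_workload     = "client-workload-id"     # WHO wants access
  trust_providers     = ["trust-provider-id"]    # HOW identity is verified
  access_conditions   = ["access-condition-id"]  # WHEN/WHERE access is allowed
  credential_provider = "credential-provider-id" # WHAT credential is issued
  server_workload     = "server-workload-id"     # WHICH service is being accessed
}
```

If any referenced check fails at request time (identity attestation, an Access Condition, or credential retrieval), Aembit denies the request and records the reason as an Access Authorization Event.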
--- ## Aembit Terraform Provider: Scaling Aembit Aembit provides a comprehensive Terraform provider for managing configurations as code, enabling: - Automated, consistent deployment - Version-controlled configurations - Integration with DevOps workflows --- ## Additional resources - [Conceptual overview](/astro/get-started/concepts/) - [Access Policies](/astro/get-started/concepts/access-policies) - [Audit and report](/astro/get-started/concepts/audit-report) - [Administering Aembit](/astro/get-started/concepts/administration) - [Scaling with Terraform](/astro/get-started/concepts/scaling-terraform) --- # Quickstart: Access Policy enhancements URL: https://docs.aembit.io/get-started/quickstart/quickstart-access-policy/ Description: Enhancing the Aembit quickstart guide to set up a Trust Provider, Access Conditions, and reporting After completing the [Quickstart guide](/astro/get-started/quickstart/quickstart-core) and setting up your sandbox environment, it's time to enhance your Access Policies by integrating Trust Providers, Access Conditions, and reporting. These features enhance your workload security, giving you finer control over how you grant access within your sandbox environment and providing you with insights about those interactions. To build upon your quickstart foundation, you will complete practical steps to implement the following features: - [Trust Provider](#configure-a-trust-provider) - This verifies workload identities, making sure only authenticated workloads can securely interact with your resources. - [Access Conditions](#configure-access-conditions) - Enforce detailed rules, such as time-based or geo-based restrictions, to tailor access policies to your needs. - [Reporting](#reporting) - Tools to help you monitor and analyze workload interactions in your sandbox environment, providing insights into policy effectiveness and system health. With these enhancements, Aembit empowers you to make the most of your sandbox setup and prepare for more advanced scenarios. ## Before you begin You must have completed the following *before* starting this guide: - [Aembit quickstart guide](/astro/get-started/quickstart/quickstart-core) and its prerequisites. ## Configure a Trust Provider Trust Providers allow Aembit to verify workload identities without relying on traditional credentials or secrets. By using third-party systems for authentication, Trust Providers ensure that only verified workloads can securely interact with your resources. These steps use Docker Desktop Kubernetes deployments. 1. From your Aembit tenant, go to **Access Policies** and select the Access Policy you created in the quickstart guide. 1. In the top right corner, click **Edit**. 1. In the **Trust Provider** section, hover over **+**, then click **New**. - **Name** - QuickStart Kubernetes Trust Provider (or another user-friendly name). - **Trust Provider** - Kubernetes Service Account 1. In the **Match Rules** section, click **+ New Rule**, then enter the following values: - **Attribute** - `kubernetes.io { namespace }`. - **Value** - `aembit-quickstart`. 1. Select **Upload Public Key**. 1. Browse for the `.pub` file or copy its contents and paste them into the **Public Key** field. Obtain the public key specific to your environment. Use the following locations for your operating system: - **Windows** - `%USERPROFILE%\AppData\Local\Docker\pki\sa.pub` - **macOS** - `~/Library/Containers/com.docker.docker/pki/sa.pub` ![Configuring Trust Provider](../../../../assets/images/quickstart_trust_provider.png) 1.
Click **Save**. You'll be taken back to the **Access Policies** page. 1. Click **Save** in the top right corner to save the Access Policy. Stay on this page to [Configure Access Conditions](#configure-access-conditions). By associating this Trust Provider with an Access Policy, Aembit validates workload identities based on the rules you defined. For example, Aembit automatically authenticates Kubernetes service accounts running in the `aembit-quickstart` namespace and denies accounts from all other namespaces. This makes sure that only workloads within that namespace can access your sensitive resources. Aembit supports a wide variety of Trust Providers tailored for different environments: - [Kubernetes Service Account](/astro/user-guide/access-policies/trust-providers/kubernetes-service-account-trust-provider) - [AWS roles](/astro/user-guide/access-policies/trust-providers/aws-role-trust-provider) - [Azure Metadata Service](/astro/user-guide/access-policies/trust-providers/azure-metadata-service-trust-provider) This flexibility allows you to seamlessly integrate Trust Providers that align with your existing infrastructure. For more details on Trust Providers, including advanced configurations and other types, see [Trust Provider Overview](/astro/user-guide/access-policies/trust-providers/add-trust-provider) and related sub-pages. ## Configure Access Conditions Access Conditions allow you to define specific rules to control when and how credentials are issued to workloads. These conditions strengthen security by ensuring access is granted only when it aligns with your organization's policies. :::note[Paid feature] Access Conditions are a paid feature. To enable this feature, contact [Aembit Support](https://aembit.io/support/). ::: 1. In the top right corner of the **Access Policies** page, click **Edit**. 1. In the **Access Condition** section, hover over **+**, then click **New**. - **Name** - QuickStart Time Condition (or another user-friendly name). - **Integration** - Aembit Time Condition 1. In the **Conditions** section, select the appropriate timezone for your condition. 1. Click the **+** icon next to each day you want to include in your Time Condition configuration, such as Monday from 8 AM to 5 PM. :::tip[Include your current time] Make sure your current time falls within the time period you set so the condition remains in effect while following this guide. ::: 1. Click **Save**. You'll be taken back to the **Access Policies** page. 1. Click **Save** in the top right corner to save the Access Policy. ![Configuring Access Condition](../../../../assets/images/quickstart_access_condition.png) With this configuration, Aembit grants access to the workloads you specified only during the days and timeframes you defined. If the conditional access check fails, Aembit denies access and displays an error message on the Client Workload. Aembit logs this action and detailed information about the failure, including the `accessConditions` field with an `Unauthorized` result, which you can find in the associated logs. In the next section, [Reporting](#reporting), you'll see how to review these logs. Aembit also supports other types of Conditional Access configurations, such as [GeoIP restrictions](/astro/user-guide/access-policies/access-conditions/aembit-geoip) and integrations with third-party vendors such as [CrowdStrike](/astro/user-guide/access-policies/access-conditions/crowdstrike).
These options allow you to build comprehensive and flexible access policies suited to your organization's needs. For more details on Access Conditions, see [Access Conditions Overview](/astro/user-guide/access-policies/access-conditions/) and explore related sub-pages to configure additional types. ## Reporting Reporting is crucial for maintaining security and operational efficiency. It provides a clear view of access attempts, policy evaluations, and credential usage, enabling you to identify potential issues and maintain compliance. To access the Reporting Dashboard, in your Aembit tenant, select **Reporting** from the left nav menu. By default, you will see the **Access Authorization Events** page, where you can review event details related to workload access attempts. In the top ribbon menu, there are three key reporting categories: - **Access Authorization Events** - View event logs for all access attempts. Each event is broken down into its evaluation stages, showing which policies were applied, whether they succeeded, and the reason for any failures. - **Audit Logs** - Track system changes, such as user actions, configuration updates, or policy changes. - **Workload Events** - Monitor events generated from the traffic between Client Workloads and Server Workloads. These events provide detailed information about all requests and responses, helping you analyze workload interactions comprehensively. ![Reporting Dashboard](../../../../assets/images/quickstart_reporting_dashboard.png) You also have filters available to narrow down your view by **Timespan**, **Severity**, and **Event Type**. These filters help you analyze events more efficiently, focusing on specific time periods or issues that require your attention. For now, you'll look at **Access Authorization Events**, as they provide essential insight into how Aembit evaluates access requests. ### Access Authorization Events Whenever a Client Workload attempts to access a Server Workload, Aembit generates Access Authorization Events. These events capture access attempts, log how access was evaluated, and display the outcome (granted or denied). The process is divided into three stages: - **Access Request** - Captures initial request details, including source, target, and transport protocol. - **Access Authorization** - Evaluates the request against Access Policies, detailing results from Trust Providers, Access Conditions, and Credential Providers. - **Access Credential** - Shows how credentials were retrieved and injected, or explains any failure reasons. To review these stages, follow these steps: 1. **Filter by Request** - In the filtering options, locate the **Event Type** and select **Request**. Then, click on an event in the list to inspect it. ![Access Request Event](../../../../assets/images/quickstart_reporting_access_request.png) This event provides key details about the connection attempt. It shows when the request happened, where it's coming from, and which workload made the request. For the quickstart, you should see: - **Target Host** - `aembit-quickstart-server.aembit-quickstart.svc.cluster.local` - **Service Account** - `aembit-quickstart-client` Both should match what you configured in the Access Policy. 1. **Filter by Authorization** - Change the **Event Type** filter to **Authorization** and select an event from the list. ![Access Authorization Event](../../../../assets/images/quickstart_reporting_access_auth.png) This event shows how access was evaluated against the Access Policy.
It displays the result (**Authorized** or **Unauthorized**) and highlights key components that Aembit checked. For the quickstart sandbox environment, you'll see that Aembit successfully: - Identified the Client Workload, Server Workload, and Access Policy. - Attested the Trust Provider. - Verified the Access Condition. - Identified the Credential Provider. When Aembit successfully identifies and verifies these components, Aembit grants access to that Client Workload. 1. **Filter by Credential** - Change the **Event Type** filter to **Credential** and select an event from the list. ![Access Credential Event](../../../../assets/images/quickstart_reporting_access_credential.png) This event tracks how Aembit retrieves credentials to enable access. It shows whether the credential retrieval was successful and which Credential Provider was used. For the quickstart sandbox environment, you'll see that Aembit successfully: - Identified the Client Workload, Server Workload, and Access Policy. - Retrieved the Credential Provider, verifying that the Client Workload had the required credentials for secure access. At this stage, everything is in place—the request was successfully authorized, credentials were securely retrieved, and the Client Workload can now access the Server Workload. For more detailed insights into Access Credential Events and other reports, visit the [Reporting](/astro/user-guide/audit-report/) page. These pages provide further guidance on using filters, understanding event data, and troubleshooting potential issues. :::tip[Quickstart completed!] Congratulations on completing the quickstart! You now have a solid foundation in Aembit's key capabilities. But this is just the beginning—Aembit has much more to offer! Our full documentation provides in-depth guides and advanced techniques to help you expand your access policies and strengthen workload identity management. ::: For your next steps, you can either try configuring Aembit with your real client workloads or explore additional possibilities to tailor it to your needs. In both cases, see the following resources: - **Server Workload Cookbook** - Offers ready-to-use recipes for popular APIs and services. Explore guides such as [Salesforce REST](/astro/user-guide/access-policies/server-workloads/guides/salesforce-rest) and [GitHub REST](/astro/user-guide/access-policies/server-workloads/guides/github-rest) to learn how to authorize secure access to these resources. - **Exploring Deployment Models** - Aembit supports diverse deployment environments beyond Kubernetes. For detailed examples and guidance, visit the [Support Matrix](/astro/reference/support-matrix) and explore related sub-pages to learn about configuring deployments for specific environments like [Virtual Machines](/astro/user-guide/deploy-install/virtual-machine/), [AWS Lambda Containers](/astro/user-guide/deploy-install/serverless/aws-lambda-container), and more. Check out these guides and more to optimize your workloads with confidence! --- # Quickstart: Aembit core setup URL: https://docs.aembit.io/get-started/quickstart/quickstart-core/ Description: Aembit's quickstart guide - practical experience automating and securing access between workloads import { Tabs, TabItem } from '@astrojs/starlight/components'; import { Steps } from '@astrojs/starlight/components'; Aembit is a cloud-native, non-human identity and access management platform that provides secure, seamless access management for workloads across diverse environments. 
It simplifies how organizations control and authorize access between client and Server Workloads, ensuring that only the right workloads can access critical resources at the right time. Aembit shifts the focus away from long-term credential management by enabling automated, secure access management for workloads connecting to services. By concentrating on managing access rather than secrets, Aembit provides a flexible and security-first approach to non-human identity across a wide range of infrastructures. ## In this guide This quickstart guide provides a practical introduction to Aembit's capabilities. Here's what you'll do: 1. Set up a sandbox environment with pre-configured client and Server Workloads using Docker Desktop with Kubernetes. 1. Deploy workloads and configure a secure Access Policy between the client and server. 1. Gain practical experience managing automated, secure access between workloads. **Estimated Time to Complete**: ~15 minutes (if prerequisites are already installed). By completing this quickstart guide, you will have practical experience creating an example of Aembit's capabilities—ensuring quick results as you implement access management in a real-world environment. Once you are comfortable with these foundational steps, Aembit offers the flexibility to manage access for more complex and scalable workloads across a range of infrastructure setups. ## Before you begin Before starting Aembit's quickstart guide, you must complete the following prerequisites: 1. [Sign up with Aembit](#sign-up-with-aembit) so you can access your Aembit tenant at `https://<tenant-id>.aembit.io`. 1. [Install Docker Desktop and enable Kubernetes](#install-docker-desktop-and-enable-kubernetes). 1. [Install Helm](#install-helm). :::note The Aembit quickstart guide doesn't require complex network configurations, such as a static external IP, outbound connection adjustments, or firewall rule changes. These prerequisites are designed to work securely and seamlessly within your local environment. ::: ### Sign up with Aembit Visit the [Sign Up page](https://useast2.aembit.io/signup) to create an account and set up your tenant for accessing the platform. A tenant in Aembit is your organization's dedicated workspace within the platform. It isolates your workloads, access policies, and configurations, ensuring secure and efficient management of your environment. Your tenant ID is a unique identifier for your workspace and is used to access your tenant at `https://<tenant-id>.aembit.io`. Look for a welcome email from Aembit. It may take a few minutes; check your Junk or Spam folders if you do not see it. ### Install Docker Desktop and enable Kubernetes Docker Desktop includes Docker Engine and Kubernetes, making it easy to manage your containerized applications. 1. Download and install Docker Desktop from the [official Docker website](https://docs.docker.com/get-started/get-docker/) for your operating system. Once installed, open Docker Desktop. 1. Enable Kubernetes by going to **Settings -> Kubernetes** in Docker Desktop and toggling the **Enable Kubernetes** switch to the **On** position.
![Enable Kubernetes in Docker](../../../../assets/images/quickstart_enable_kubernetes.png) :::tip[Security best practice] If you get errors or warnings about permissions on your `~/.kube/config` file being too permissive, tighten up the file's permissions by running the following command: ```shell chmod 600 ~/.kube/config ``` Locking down permissions on your `~/.kube/config` file is a security best practice since the config file contains sensitive credentials for accessing Kubernetes clusters. ::: ### Install Helm Helm deploys the pre-configured sandbox Client and Server Workloads for this quickstart guide. A basic understanding of [Helm commands](https://helm.sh/docs/helm/) will be helpful for deploying the sandbox workloads. Follow the instructions for your operating system to install Helm: **Windows** 1. Download the [latest Helm version](https://github.com/helm/helm/releases) for Windows. 1. Run the installer and follow the on-screen instructions. 1. Once installed, open a Command Prompt or PowerShell terminal and verify the installation by running: ```cmd helm version ``` **Expected Output:** ```cmd version.BuildInfo{Version:"v3.x.x", GitCommit:"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx", GitTreeState:"clean", GoVersion:"go1.x.x"} ``` **macOS** 1. Use Homebrew to install Helm: ```shell brew install helm ``` 1. Verify the installation: ```shell helm version ``` **Expected Output:** ```shell version.BuildInfo{Version:"v3.x.x", GitCommit:"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx", GitTreeState:"clean", GoVersion:"go1.x.x"} ``` **Linux** 1. Download and install the latest Helm binary: ```shell curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 chmod 700 get_helm.sh ./get_helm.sh ``` 1. Verify the installation: ```shell helm version ``` **Expected Output:** ```shell version.BuildInfo{Version:"v3.x.x", GitCommit:"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx", GitTreeState:"clean", GoVersion:"go1.x.x"} ``` With these prerequisites complete, you are ready to deploy the sandbox workloads and configure secure access between workloads. ## Deploying workloads Make sure that your environment is ready for deployment by verifying the following: - [Docker Desktop is installed and Kubernetes enabled](#install-docker-desktop-and-enable-kubernetes). - [Helm is installed and configured correctly](#install-helm). With these steps in place, you are ready to deploy the workloads. ### Install applications 1. From your terminal, add the Aembit Helm chart repo by running: ```shell helm repo add aembit https://helm.aembit.io ``` 1. Deploy both the Client and Server Workloads: ```shell helm install aembit-quickstart aembit/quickstart \ -n aembit-quickstart \ --create-namespace ``` ### Verify deployments After deploying the applications, verify that everything is running correctly using the following commands: 1. Check the Helm release status: ```shell helm status aembit-quickstart -n aembit-quickstart ``` **Expected Output:** ```shell {3,4} NAME: aembit-quickstart LAST DEPLOYED: Wed Jan 01 10:00:00 2025 NAMESPACE: aembit-quickstart STATUS: deployed REVISION: 1 TEST SUITE: None ``` 1.
List all resources in the namespace: ```shell kubectl get all -n aembit-quickstart ``` **Expected Output:** ```shell {2,3,5,6} NAME READY STATUS RESTARTS AGE pod/aembit-quickstart-client-abcdef 1/1 Running 0 1m pod/aembit-quickstart-server-abcdef 1/1 Running 0 1m NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/aembit-quickstart-client NodePort 10.109.109.55 8080:30080/TCP 1m service/aembit-quickstart-server NodePort 10.109.104.236 9090:30090/TCP 1m ``` These outputs help you confirm that the workloads and services have been deployed correctly and are functioning as expected. ### Interacting with the applications In this section, you are going to interact with the pre-configured applications. This interaction will demonstrate that the Client Workload can connect to the Server Workload but lacks the credentials to authenticate to it. 1. With the Client and Server Workloads running, open the [**Client Workload**](http://localhost:30080). 1. Click the **Get Data** button. **You will receive a failure response** since you have not deployed Aembit Edge, nor has Aembit injected the necessary credentials for the Client Workload to access the Server Workload yet. ![Failure Message - Client Workload](../../../../assets/images/quickstart_client_workload_unauthorized.png) In the next sections, you will deploy Aembit Edge so that Aembit automatically acquires and injects the credential on behalf of the Client Workload, allowing it to access the Server Workload. ## Deploying Aembit Edge With your workloads deployed, it's time to integrate Aembit Edge into your system. Aembit Edge consists of components that customers install within their environment. These components form the core of Aembit's Workload IAM functionality. Proceed with deploying Aembit Edge into your environment. ### Create a new Agent Controller The Agent Controller is a helper component that facilitates the registration of other Aembit Edge Components. 1. In your Aembit tenant, go to **Edge Components** from the left nav menu. 1. From the top ribbon menu, select **Deploy Aembit Edge**. 1. Select **Kubernetes** from the list of **Environments**. ![Navigate to Deploy Aembit Edge Page](../../../../assets/images/quickstart_navigate_deploy_aembit_edge_page.png) 1. In the **Prepare Edge Components** section, click **New Agent Controller**. You will see the Agent Controller setup page displayed. 1. Enter a name, such as `Quickstart Agent Controller` (or another user-friendly name). 1. Add an optional description for the controller. 1. For now, ignore the [Trust Provider](/astro/user-guide/access-policies/trust-providers/) section, as you don't need it for this quickstart guide. ![Create a New Agent Controller](../../../../assets/images/quickstart_create_new_agent_controller.png) 1. Click **Save**. Once saved, your newly created Agent Controller will be automatically selected in the list of available Agent Controllers. This reveals the **Install Aembit Edge Helm Chart** section. ### Deploy the Aembit Edge As part of Aembit Edge, the Agent Proxy is automatically injected within the Client Workload pod. It manages workload identity and securely injects credentials for communication with Server Workloads. 1. In the **Install Aembit Edge Helm Chart** section, make sure that the Agent Controller you just created is selected in the dropdown menu. 1. In the **New Agent Controller** section, click **Generate Code** to generate a Device Code.
The Device Code is a temporary one-time-use code, valid for 15 minutes, that you use during installation to authenticate the Agent Controller with your Tenant. Make sure you complete the next steps promptly before the code expires. ![Deploy Aembit Edge](../../../../assets/images/quickstart_deploy_aembit_edge.png) 1. Since you already [installed the Aembit Helm repo](#install-applications), go ahead and install the Aembit Helm chart. *From your terminal*, run the following command, making sure to replace: - `<tenant-id>` with your tenant ID (find this in the Aembit website URL: `<tenant-id>.aembit.io`) - `<device-code>` with the code you generated in the Aembit web UI ```shell {4} helm install aembit aembit/aembit \ --create-namespace \ -n aembit \ --set tenant=<tenant-id>,agentController.deviceCode=<device-code> ``` :::tip To reduce errors, copy the command from the Aembit Web UI for this step, as it populates your `<tenant-id>` and `<device-code>` for you. ![Deploy Aembit Edge Generate Code button](../../../../assets/images/deploy_aembit_edge-generate-code.png) ::: Aembit Edge is now deployed in your Kubernetes cluster! 1. Check the current state of the quickstart Client pod to confirm it is running without the Agent Proxy container. The **`READY`** column for the `pod/aembit-quickstart-client-abcdef` should display **`1/1`**, indicating only the Client Workload container is running. ```shell kubectl get all -n aembit-quickstart ``` **Expected Output:** ```shell {2} NAME READY STATUS RESTARTS AGE pod/aembit-quickstart-client-abcdef 1/1 Running 0 1m pod/aembit-quickstart-server-abcdef 1/1 Running 0 1m ``` 1. Restart the quickstart Client pod to include the Agent Proxy in the deployment: ```shell kubectl delete pods -l app=aembit-quickstart-client -n aembit-quickstart --grace-period=0 --force ``` 1. After the pod restarts, verify that the `aembit-quickstart-client` pod now includes two containers: the Client Workload container and the Agent Proxy container. Check the pod's state again; the **`READY`** column for the `aembit-quickstart-client` pod should now display **`2/2`**, indicating that both containers are running successfully. ```shell kubectl get all -n aembit-quickstart ``` **Expected Output:** ```shell {2} NAME READY STATUS RESTARTS AGE pod/aembit-quickstart-client-abcdef 2/2 Running 0 1m pod/aembit-quickstart-server-abcdef 1/1 Running 0 1m ``` This step confirms that the Agent Proxy has been injected within the Client pod, enabling Aembit to securely manage credentials for communication between Client and Server Workloads. ## Configuring an Access Policy [Access Policies](/astro/user-guide/access-policies/) define the conditions for granting Client Workloads access to Server Workloads. Aembit evaluates access by verifying that the workloads match the Access Policy, the Client's identity is authenticated by a Trust Provider, and all conditions are met. In this quickstart guide, we have omitted configuring a Trust Provider to simplify your first walkthrough. However, Trust Providers are a critical component in securing all production deployments. They enable Aembit to authenticate workloads without provisioning long-lived credentials or secrets, ensuring that only trusted workloads are authenticated and authorized. Once authorized, Aembit delivers the necessary credentials to the Agent Proxy, which uses them to authenticate the Client Workload to the Server Workload. :::note[About Client Workload credentials] Credentials are never released to the Client Workload.
Instead, they are injected directly into the traffic destined for the Server Workload, ensuring secure communication. ::: 1. From your Aembit tenant, click **Access Policies** in the left nav menu. 1. Click **+ New** to open the Create New Access Policy page. ![Create Access Policy](../../../../assets/images/quickstart_create_access_policy.png) Follow the steps in the following sections to configure each part of the Access Policy. ### Configure a Client Workload [Client Workloads](/astro/user-guide/access-policies/client-workloads/) are software applications that access services provided by Server Workloads. These could be custom apps, CI/CD pipelines, or scripts running without user intervention. 1. On the Access Policy page, hover over the Client Workload section, click **New**, and on the Client Workload page: - **Name** - Quickstart Client (or another user-friendly name). - **Client Identification** - Kubernetes Pod Name Prefix - **Value** - `aembit-quickstart-client` 1. Click **Save**. ![Configuring Client Workload](../../../../assets/images/quickstart_client_workload.png) ### Configure a Server Workload [Server Workloads](/astro/user-guide/access-policies/server-workloads/guides/) serve requests from Client Workloads and can include APIs, gateways, databases, and more. The configuration settings define the Service Endpoint and Authentication methods, specifying the networking details and how requests are authenticated. 1. On the Access Policy page, hover over the Server Workload section, click **New**, and on the Server Workload page: - **Name** - Quickstart Server (or another user-friendly name). - **Host** - `aembit-quickstart-server.aembit-quickstart.svc.cluster.local` - **Application Protocol** - HTTP - **Transport Protocol** - TCP - **Port** - 9090 - **Forward to Port** - 9090 - **Authentication Method** - HTTP Authentication - **Authentication Scheme** - Bearer 1. Click **Save**. ![Configuring Server Workload](../../../../assets/images/quickstart_server_workload.png) ### Configure a Credential Provider [Credential Providers](/astro/user-guide/access-policies/credential-providers/) supply the access credentials, such as OAuth tokens or API keys, that allow Client Workloads to authenticate with Server Workloads. Aembit can also request and manage tokens from third-party services. :::tip[Security Best Practice] In this quickstart guide, you are using the API Key option for simplicity. However, Aembit recommends using short-lived credentials whenever possible to enhance security and reduce exposure to risks associated with long-lived credentials. ::: 1. From your web browser, go to the [sandbox Server Workload](http://localhost:30090). 1. Click **Generate API Key**. This generates a unique API key you will use later in this section. :::caution[Generating more than one key] Avoid clicking the button multiple times, as only one API key (the last generated) remains active at a time. Copy the API key immediately after creating it, as it will be used in the next step. ::: 1. Copy the API key. ![Copy API Key - Server Workload](../../../../assets/images/quickstart_server_workload_copy_api_key.png) 1. Back on the Access Policy page in the Aembit UI, hover over the Credential Provider section, click **New**, and on the Credential Provider page: - **Name** - Quickstart API Key (or another user-friendly name) - **Credential Type** - API Key - **API Key** - Paste the API key you generated from the Server Workload 1. Click **Save**.
![Configuring Credential Provider](../../../../assets/images/quickstart_credential_provider.png) ### Finalizing the Access Policy Once you have configured the Client Workload, Server Workload, and Credential Provider, click **Save & Activate** to complete the process and activate the policy. ## Testing the Access Policy To test your newly configured Access Policy, go to the [sandbox Client Workload](http://localhost:30080) and click **Get Data**. Since the Access Policy has been activated and Aembit Edge injected the necessary credential into the request, you should see a successful response. ![Success Message - Client Workload](../../../../assets/images/quickstart_client_workload_success.png) Congratulations! You've created an Access Policy that's securing access between workloads! With just a few steps, you have deployed workloads, configured an Access Policy, and successfully authenticated requests—all without the complexity of manual credential management. This quickstart guide covers just the foundation of what Aembit has to offer. Aembit supports powerful capabilities for scaling, securing, and managing workload identity across many environments, ensuring security and efficiency as your needs grow. #### Troubleshoot If you encounter any issues or don't see a successful response, the Aembit Web UI has a useful **Troubleshooter** that can help you identify potential problems: 1. Go to **Access Policies** and select the Access Policy you created for this quickstart guide. 1. Click **Troubleshoot** in the top corner. This brings up the Troubleshooter with your Access Policy's Client and Server Workloads already populated. ![Aembit Help Troubleshooter page](../../../../assets/images/quickstart_troubleshooting.png) 1. Inspect and make sure that the **Access Policy Checks**, **Client Workload Checks**, **Credential Provider Checks**, and **Server Workload Checks** are **Active** (they have green checks). ![Aembit Help Troubleshooter page](../../../../assets/images/quickstart_troubleshooting_sw_checks.png) 1. For any sections that aren't Active, go back to the respective section in the quickstart guide and double-check your configurations. Also, make sure all the [Prerequisites](#before-you-begin) are complete. The Troubleshooter helps diagnose potential issues with your configuration. For more details, visit the [Troubleshooter Tool](/astro/user-guide/troubleshooting/tenant-configuration) page. Still need help? Please [submit a support request](https://aembit.io/support/) to Aembit's support team. ## What's next? Now that you've completed the basics, it's time to explore additional features and capabilities to get the most out of Aembit. See [Quickstart: Access Policy enhancements](/astro/get-started/quickstart/quickstart-access-policy) to learn how to: - **Configure Trust Providers** to enhance workload identity verification and strengthen access control. - **Set Up Access Conditions** to enforce time-based, geo-based, or custom rules for workload access. - **Navigate Reporting Tools** to review access events, track policy usage, and analyze workload behavior. Following the [Quickstart: Access Policy enhancements](/astro/get-started/quickstart/quickstart-access-policy) page helps you expand beyond the quickstart, guiding you toward features that enhance security, visibility, and scalability.
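If you're done experimenting and want to remove the local sandbox from your cluster, a minimal cleanup sketch (assuming the release names and namespaces used earlier in this guide, `aembit-quickstart` and `aembit`) looks like this:

```shell
# Remove the sandbox client and server workloads
helm uninstall aembit-quickstart -n aembit-quickstart
kubectl delete namespace aembit-quickstart

# Remove the Aembit Edge components
helm uninstall aembit -n aembit
kubectl delete namespace aembit
```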
--- # Aembit security posture URL: https://docs.aembit.io/get-started/security-posture/ Description: Description of how Aembit approaches, implements, and maintains security This section covers Aembit's approach to security, providing information about the security architecture, compliance, and threat modeling. The following pages provide in-depth information about Aembit's security posture: - [Security Architecture](/astro/get-started/security-posture/architecture) - Details of Aembit's security architecture design - [Security Compliance](/astro/get-started/security-posture/security-compliance) - Information about Aembit's compliance with security standards - [Threat Model](/astro/get-started/security-posture/threat-model) - Analysis of potential threats and how Aembit addresses them --- # Aembit software architecture URL: https://docs.aembit.io/get-started/security-posture/architecture/ Description: Explanation and illustration of Aembit's software architecture :::tip[Work in progress] This page is a work in progress. It may contain incomplete or unverified information. Please check back later for updates. If you have feedback or suggestions, please fill out the Aembit New Docs Feedback Survey. Thanks for your patience! ::: --- # Security compliance URL: https://docs.aembit.io/get-started/security-posture/security-compliance/ Description: Overview of Aembit's security posture and compliance :::tip[Work in progress] This page is a work in progress. It may contain incomplete or unverified information. Please check back later for updates. If you have feedback or suggestions, please fill out the Aembit New Docs Feedback Survey. Thanks for your patience! ::: --- # Aembit in your threat model URL: https://docs.aembit.io/get-started/security-posture/threat-model/ Description: How and where Aembit fits into your threat model :::tip[Work in progress] This page is a work in progress. It may contain incomplete or unverified information. Please check back later for updates. If you have feedback or suggestions, please fill out the Aembit New Docs Feedback Survey. Thanks for your patience! ::: --- # Aembit tutorials overview URL: https://docs.aembit.io/get-started/tutorials/ Description: Learn about how to configure, deploy, and scale Aembit This section provides tutorials to help you learn how to deploy, configure, and scale Aembit in different environments. These tutorials offer step-by-step guidance for common implementation scenarios.
The following tutorials are available: - [Kubernetes Tutorial](/astro/get-started/tutorials/tutorial-k8s) - Learn how to deploy and configure Aembit in Kubernetes environments - [Terraform Tutorial](/astro/get-started/tutorials/tutorial-terraform) - Learn how to automate Aembit configuration using Terraform - [Virtual Machines Tutorial](/astro/get-started/tutorials/tutorial-vms) - Learn how to deploy and configure Aembit on virtual machines --- # Tutorial - Deploying on Kubernetes URL: https://docs.aembit.io/get-started/tutorials/tutorial-k8s/ Description: Tutorial explaining how to deploy Aembit Edge Components on Kubernetes --- # Tutorial - Scaling with the Aembit Terraform provider URL: https://docs.aembit.io/get-started/tutorials/tutorial-terraform/ Description: Tutorial explaining how to scale Aembit with the Aembit Terraform provider --- # Tutorial - Deploying on virtual machines URL: https://docs.aembit.io/get-started/tutorials/tutorial-vms/ Description: Tutorial explaining how to deploy Aembit Edge Components on virtual machines --- # Aembit use cases URL: https://docs.aembit.io/get-started/use-cases/ Description: This page describes common use cases for Aembit This section covers common use cases for Aembit, demonstrating how Workload Identity and Access Management can be applied in various scenarios. The following pages detail specific use cases: - [CI/CD Pipelines](/astro/get-started/use-cases/ci-cd) - Securing CI/CD pipelines with Aembit - [Credentials Management](/astro/get-started/use-cases/credentials) - Managing access credentials across workloads - [Microservices Security](/astro/get-started/use-cases/microservices-security) - Securing communication between microservices - [Multicloud Environments](/astro/get-started/use-cases/multicloud) - Managing access across multiple cloud environments - [Third-Party Access](/astro/get-started/use-cases/third-party-access) - Securely connecting to third-party services --- # Using Aembit in CI/CD environments URL: https://docs.aembit.io/get-started/use-cases/ci-cd/ Description: How Aembit secures NHI access in CI/CD environments :::tip[Work in progress] This page is a work in progress. It may contain incomplete or unverified information. Please check back later for updates. If you have feedback or suggestions, please fill out the Aembit New Docs Feedback Survey. Thanks for your patience! ::: --- # Using Aembit to manage credentials URL: https://docs.aembit.io/get-started/use-cases/credentials/ Description: How Aembit enables you to centrally manage and control credentials in your environments :::tip[Work in progress] This page is a work in progress. It may contain incomplete or unverified information. Please check back later for updates. If you have feedback or suggestions, please fill out the Aembit New Docs Feedback Survey. Thanks for your patience! ::: --- # Using Aembit to secure your microservices URL: https://docs.aembit.io/get-started/use-cases/microservices-security/ Description: How Aembit secures NHI access between microservices :::tip[Work in progress] This page is a work in progress. It may contain incomplete or unverified information. Please check back later for updates. If you have feedback or suggestions, please fill out the Aembit New Docs Feedback Survey. Thanks for your patience! ::: --- # Using Aembit in multicloud environments URL: https://docs.aembit.io/get-started/use-cases/multicloud/ Description: How Aembit secures NHI access in multicloud environments :::tip[Work in progress] This page is a work in progress. 
It may contain incomplete or unverified information. Please check back later for updates. If you have feedback or suggestions, please fill out the Aembit New Docs Feedback Survey. Thanks for your patience! ::: --- # Using Aembit to secure third-party access URL: https://docs.aembit.io/get-started/use-cases/third-party-access/ Description: How Aembit secures third-party access to your environment :::tip[Work in progress] This page is a work in progress. It may contain incomplete or unverified information. Please check back later for updates. If you have feedback or suggestions, please fill out the Aembit New Docs Feedback Survey. Thanks for your patience! ::: --- # User Guide/deploy Install =========================== # About the Aembit Agent Controller URL: https://docs.aembit.io/user-guide/deploy-install/about-agent-controller/ Description: About the Aembit Agent Controller and how it works :::tip[Work in progress] This page is a work in progress. It may contain incomplete or unverified information. Please check back later for updates. If you have feedback or suggestions, please fill out the Aembit New Docs Feedback Survey. Thanks for your patience! ::: --- # About the Aembit Agent Proxy URL: https://docs.aembit.io/user-guide/deploy-install/about-agent-proxy/ Description: About the Aembit Agent Proxy and how it works :::tip[Work in progress] This page is a work in progress. It may contain incomplete or unverified information. Please check back later for updates. If you have feedback or suggestions, please fill out the Aembit New Docs Feedback Survey. Thanks for your patience! ::: --- # About Colocating Aembit Edge Components URL: https://docs.aembit.io/user-guide/deploy-install/about-colocating-edge-components/ Description: Considerations and best practices if colocating Aembit Edge Components Deploying Aembit's Edge Components is all about balancing security, scalability, and operational simplicity. Ideally, the Agent Controller and Agent Proxy should run on separate machines. However, in some situations—perhaps for a test environment or because of infrastructure limitations—you may have no choice but to colocate them. If that's the case, understanding the risks and following best practices can help you minimize issues. ## Why Aembit recommends separating Edge Components Keeping Agent Controller and Agent Proxy on separate machines is the best way to make sure they remain resilient and secure. Colocating Edge Components introduces a single point of failure, which can disrupt both traffic interception (Proxy) and trust anchor services (Controller) at the same time. Security is another major concern. Agent Controller and Agent Proxy serve distinct roles, and combining them on one machine increases the potential impact of a compromise. If an attacker breaches the host, they gain access to both components, expanding their reach. Colocation also limits your ability to scale efficiently. The Agent Proxy may require more CPU or memory during high traffic periods, and colocating it with the Agent Controller makes it harder to allocate additional resources where needed. Lastly, colocation can complicate your network design. The Agent Proxy must sit in a position to intercept workload traffic, while the Agent Controller belongs in a more secure, isolated network segment. Finding a placement that serves both roles effectively can be challenging. ## When colocation might be the right choice While Aembit recommends separate deployments, there may be times when colocation is your only option. 
In smaller test environments, proof-of-concept setups, or resource-constrained scenarios, colocating the Agent Controller and Proxy may be acceptable. When this happens, taking steps to mitigate risk is essential. ## Best Practices for colocating Edge Components If you must colocate, follow these guidelines to reduce risk and maintain performance: - **Harden the host machine**: Apply stricter security controls, such as enhanced monitoring, restricted access, and regular audits. - **Allocate sufficient resources**: Ensure the host has enough CPU, memory, and network bandwidth to support both components without performance degradation. - **Plan for recovery**: Develop clear recovery procedures to minimize downtime if the colocated host fails. - **Carefully design your network**: Ensure the Agent Proxy can intercept workload traffic while maintaining secure access to the Agent Controller's trust anchor services. ## Making the best decision for your environment Colocating Aembit's Edge Components should be a last resort, not a first choice. When separation isn't possible, understanding the risks and applying best practices can help you maintain a secure and stable deployment. By taking these steps, you can make sure your environment remains resilient, even in less-than-ideal circumstances. --- # Advanced deployment options URL: https://docs.aembit.io/user-guide/deploy-install/advanced-options/ Description: Advanced deployment options for Aembit deployments This section covers advanced deployment options for Aembit Edge Components, providing more sophisticated configuration capabilities for your Aembit deployment. The following pages provide information about advanced deployment options: - [Aembit Edge Prometheus-Compatible Metrics](/astro/user-guide/deploy-install/advanced-options/aembit-edge-prometheus-compatible-metrics) - [Changing Agent Log Levels](/astro/user-guide/deploy-install/advanced-options/changing-agent-log-levels) - [Trusting Private CAs](/astro/user-guide/deploy-install/advanced-options/trusting-private-cas) ### TLS Decrypt - [About TLS Decrypt](/astro/user-guide/deploy-install/advanced-options/tls-decrypt) - Overview of the TLS Decrypt feature - [About TLS Decrypt Standalone CA](/astro/user-guide/deploy-install/advanced-options/tls-decrypt/about-tls-decrypt-standalone-ca) - [Configure TLS Decrypt](/astro/user-guide/deploy-install/advanced-options/tls-decrypt/configure-tls-decrypt) - [Configure TLS Decrypt Standalone CA](/astro/user-guide/deploy-install/advanced-options/tls-decrypt/configure-tls-decrypt-standalone-ca) --- # Aembit Edge Prometheus-compatible metrics URL: https://docs.aembit.io/user-guide/deploy-install/advanced-options/aembit-edge-prometheus-compatible-metrics/ Description: How to view Aembit Edge Prometheus-compatible metrics Many organizations rely on data collection and visualization tools to monitor components in their environment. These tools help users quickly detect potential issues and troubleshoot them. Aembit Edge Components expose various Prometheus-compatible metrics so you have greater visibility into each of these components (Agent Controller, Agent Proxy, Agent Injector). ## Prometheus configuration Aembit exposes Prometheus-compatible metrics in several different deployment models, including Kubernetes and Virtual Machines.
The installation and configuration steps for both of these deployment models are described below, but please note that you may select any observability tool you wish, as long as it can scrape Prometheus-compatible metrics. ### Configuring Prometheus (Kubernetes) The steps described below show an example of how you can configure a "vanilla" Prometheus instance in a Kubernetes cluster. Depending on your own Kubernetes cluster configuration, you may need to perform a different set of steps to configure Prometheus for your cluster. 1. Open a terminal window in your environment and run the command shown below. `kubectl edit configmap prometheus-server` 2. Edit the `prometheus.yaml` configuration file by adding the following code snippet before the `kubernetes-pods` section: ```yaml - honor_labels: true job_name: kubernetes-pods-aembit kubernetes_sd_configs: - role: pod relabel_configs: - action: keep regex: true source_labels: - __meta_kubernetes_pod_annotation_aembit_io_metrics_scrape - action: replace regex: (.+) source_labels: - __meta_kubernetes_pod_annotation_aembit_io_metrics_path target_label: __metrics_path__ - action: replace regex: (\d+);(([A-Fa-f0-9]{1,4}::?){1,7}[A-Fa-f0-9]{1,4}) replacement: "[$2]:$1" source_labels: - __meta_kubernetes_pod_annotation_aembit_io_metrics_port - __meta_kubernetes_pod_ip target_label: __address__ - action: replace regex: (\d+);((([0-9]+?)(\.|$)){4}) replacement: $2:$1 source_labels: - __meta_kubernetes_pod_annotation_aembit_io_metrics_port - __meta_kubernetes_pod_ip target_label: __address__ - action: labelmap regex: __meta_kubernetes_pod_label_(.+) - action: replace source_labels: - __meta_kubernetes_namespace target_label: namespace - action: replace source_labels: - __meta_kubernetes_pod_name target_label: pod - action: drop regex: Pending|Succeeded|Failed|Completed source_labels: - __meta_kubernetes_pod_phase - action: replace source_labels: - __meta_kubernetes_pod_node_name target_label: node ``` The example code block shown above allows for the automatic detection of Aembit annotations so Prometheus can automatically scrape Agent Proxy metrics. 3. Save your changes in the `prometheus.yaml` configuration file. #### Kubernetes Annotations Agent Controller and Agent Proxy come with standard Prometheus annotations, enabling Prometheus to automatically discover and scrape metrics from these Aembit Edge Components. Since the Agent Proxy runs as part of the Client Workload, which may already expose Prometheus metrics and have its own annotations, a new set of annotations was introduced. These annotations can be added to Client Workload pods without conflicting with existing annotations. The following annotations have been introduced, which are automatically added to the Client Workload where the Agent Proxy is injected: - `aembit.io/metrics-scrape` - Default value is `true`. - `aembit.io/metrics-path` - Default value is `/metrics`. - `aembit.io/metrics-port` - Default value is `9099`. This is the default port the Agent Proxy uses to expose metrics. You may override these annotations, such as `aembit.io/metrics-port`, to adjust the metrics port on the Agent Proxy, as shown in the sketch below.
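The following sketch is illustrative only: the annotation names and default values come from the list above, while the Deployment name, labels, image, and the overridden port value are hypothetical placeholders.

```yaml
# Hypothetical Client Workload Deployment overriding the Agent Proxy metrics annotations
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-client-workload              # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-client-workload
  template:
    metadata:
      labels:
        app: my-client-workload
      annotations:
        aembit.io/metrics-scrape: "true"    # keep scraping enabled (default)
        aembit.io/metrics-path: "/metrics"  # default metrics path
        aembit.io/metrics-port: "9199"      # example override of the default port (9099)
    spec:
      containers:
        - name: app
          image: registry.example.com/my-app:latest   # placeholder image
```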
#### Helm Variables The following Helm variables control whether metrics are enabled or disabled: - `agentController.metrics.enabled` - `agentInjector.metrics.enabled` - `agentProxy.metrics.enabled` ### Configuring Prometheus (Virtual Machine) You need to configure which Virtual Machines you want to scrape for metrics and data by editing the `/etc/prometheus/prometheus.yml` YAML file and replacing `example.vm.local:<port>` with the Agent Controller and Agent Proxy VM hostname, and the port number on which the metrics servers are listening. For Agent Controller, set the port number to **9090**. For Agent Proxy, set the port number to **9099**. ```yaml scrape_configs: - job_name: 'vm-monitoring' static_configs: - targets: ['example.vm.local:<port>'] ``` #### Virtual Machine Environment Variables These environment variables can be passed to the Agent Controller installer to manage the metrics functionality. - **AEMBIT_METRICS_ENABLED** - available for both Agent Controller and Agent Proxy - **AEMBIT_METRICS_PORT** - available only for Agent Proxy ## Aembit Edge Prometheus Metrics Aembit Edge Components expose Prometheus-compatible metrics that can be viewed using an observability tool that is capable of scraping these types of metrics. The sections below list the various Prometheus-compatible metrics that Aembit Edge Components expose, along with the labels you can use to filter results and drill down into specific data. ### Agent Proxy Metrics The Agent Proxy Prometheus-compatible metrics listed below may be viewed in a dashboard. - `aembit_agent_proxy_incoming_connections_total` - The total number of incoming connections (connections established from a Client Workload to the Agent Proxy). - labels: - `application_protocol`: `http`, `snowflake`, `postgres`, `redshift`, `mysql`, `redis`, `unspecified` - `resource_set_id` (optional): `` - `client_workload_id` (optional): `` - `server_workload_id` (optional): `` - `aembit_agent_proxy_active_incoming_connections` - The number of active incoming connections (connections established from a Client Workload to the Agent Proxy). - labels: - `application_protocol`: `http`, `snowflake`, `postgres`, `redshift`, `mysql`, `redis`, `unspecified` - `resource_set_id` (optional): `` - `client_workload_id` (optional): `` - `server_workload_id` (optional): `` - `aembit_agent_proxy_credentials_injections_total` - The total number of credentials injected by Agent Proxy. - labels: - `application_protocol`: `http`, `snowflake`, `postgres`, `redshift`, `mysql`, `redis`, `unspecified` - success: `success`, `failure`. - `aembit_agent_proxy_token_expiration_unix_timestamp` - The expiration timestamp for Aembit Agent Proxy Token (to access Aembit Cloud). - `aembit_agent_proxy_aembit_cloud_connection_status` - The current connection status between Agent Proxy and Aembit Cloud. If the connection is up, the result is "1" (Connected). If the status is down, the result is "0" (Disconnected). - `aembit_agent_proxy_credentials_cached_entries_total` - The total number of unexpired credentials currently cached by Agent Proxy. - labels: - `resource_set_id` (optional): `` - `aembit_agent_proxy_directives_cached_entries_total` - The total number of unexpired directives currently cached by Agent Proxy. - labels: - `resource_set_id` (optional): `` - `version` - The Agent Proxy version. - labels: - component: `aembit_agent_proxy` - version: `version: ` - `process_cpu_second_total` - The amount of CPU seconds used by the Agent Proxy.
This value could be more than the wall clock time if Agent Proxy used more than one core. This metric is useful in conjunction with `machine_cpu_cores` to calculate CPU % usage. - labels: - component: `aembit_agent_proxy` - hostname: `hostname: ` - `machine_cpu_cores` - The number of CPU cores available to Agent Proxy. - labels: - component: `aembit_agent_proxy` - hostname: `hostname: ` - `process_memory_usage_bytes` - The amount of memory (in bytes) used by Agent Proxy. - labels: - component: `aembit_agent_proxy` - hostname: `hostname: ` ### Agent Controller Metrics The Agent Controller Prometheus-compatible metrics listed below may be viewed in a dashboard. - `aembit_agent_controller_token_expiration_unix_timestamp` - The expiration timestamp for Aembit Agent Controller Token (to access Aembit Cloud). - `aembit_agent_controller_access_token_requests_total` - The number of Agent Controller requests to get an access token (for Agent Controller use). - label - Result: `success`, `failure` - `Agent_Controller_Id`: `` - `aembit_agent_controller_proxy_token_requests_total` - The number of Agent Proxy requests received by the Agent Controller to get an access token. - labels - Result: `success`, `failure` - `Agent_Controller_Id` (optional): `` - `aembit_agent_controller_registration_status` - The Agent Controller registration status. Status can be either: `0` (Not Registered) or `1` (Registered). - labels - `Agent_Controller_Id` (optional): `` - `version` - The Agent Controller version. - labels - component: `aembit_agent_controller` - version: `` ### Agent Injector metrics The Agent Injector Prometheus-compatible metrics listed below may be viewed in a dashboard. - `aembit_injector_pods_seen_total` - The number of pods processed by the Agent Injector. - `aembit_injector_pods_injection_total` - The number of pods into which Aembit Edge Components were injected. - label - `success`: "success" or "failure" --- # Agent Controller High Availability URL: https://docs.aembit.io/user-guide/deploy-install/advanced-options/agent-controller/agent-controller-high-availability/ Description: How to install and configure Agent Controllers in a high availability configuration The Agent Controller is a critical Aembit Edge Component that facilitates Agent Proxy registration. Ensuring the continuous availability of the Agent Controller is vital for the uninterrupted operation of Agent Proxies. As a result, for any production deployment, it's essential to install and configure the Agent Controller in a high availability configuration. [Three key principles](https://en.wikipedia.org/wiki/High_availability#Principles) must be addressed to achieve high availability for the Agent Controller: - Elimination of single points of failure - Ensuring reliable crossover - Failure detection ## Remove single points of failure Having one Agent Controller instance can be a single point of failure. To mitigate this, multiple Agent Controller instances should be operational within an environment, providing redundancy and eliminating this risk. To deploy multiple instances, repeat the [Agent Controller installation procedure](/astro/user-guide/deploy-install/virtual-machine/). Trust Provider-based registration of the Agent Controller simplifies launching multiple instances, as it removes the need to generate a new device code for each instance. When employing this method, you can use the same Agent Controller ID while installing additional instances for the same logical Agent Controller.
If you opt for the device code registration method, you must create a separate Agent Controller entry for each deployed instance in your tenant. ## Ensure reliable crossover For effective traffic routing to multiple Agent Controller instances, use a load balancer. It's critical that the load balancer itself is configured for high availability to avoid becoming a single point of failure. To accommodate the technical requirement of load balancing HTTPS (encrypted) traffic between Agent Proxies and Agent Controllers, a TCP load balancer (Layer 4) is necessary. Choose a TCP load balancer that aligns with your company's preferences and standards. ## Failure detection Monitoring of both Agent Controllers and load balancers is necessary to quickly detect any failures. Establish a manual or automated procedure for failure remediation upon detection. The health status of an Agent Controller can be checked through an `HTTP GET` request to the /health endpoint on port 80. A healthy Agent Controller will return an HTTP Response code of `200`. ## Transport Layer Security (TLS) When Transport Layer Security (TLS) is configured on Agent Controllers behind a load balancer, it is crucial for the certificates on these Agent Controllers to include the domain names associated with the load balancer. This ensures that SSL/TLS termination at the Agent Controllers presents a certificate valid for the domain names clients use to connect. ### Agent Controller health endpoint Swagger documentation ```yaml openapi: 3.0.0 info: title: Agent Controller Health Check API version: 1.0.0 paths: /health: get: summary: Agent Controller Health Check Endpoint description: Returns the health status of the Agent Controller. responses: '200': description: Healthy - the Agent Controller is functioning properly. content: application/json: schema: type: object properties: status: type: string example: "Healthy" version: type: string example: "1.9.696" gitSHA: type: string example: "b16139605d32ce60db0a5682de8ee3b579c6e885" host: type: string example: "hostname" '401': description: Unhealthy - the Agent Controller is not registered yet or can't register. content: application/json: schema: type: object properties: status: type: string example: "Unregistered" version: type: string example: "1.9.696" gitSHA: type: string example: "b16139605d32ce60db0a5682de8ee3b579c6e885" host: type: string example: "hostname" ``` :::note A newly deployed Agent Controller may take up to 10 seconds to register and attain a healthy state. ::: --- # Configure Agent Controller TLS with Aembit's PKI URL: https://docs.aembit.io/user-guide/deploy-install/advanced-options/agent-controller/configure-aembit-pki-agent-controller-tls/ Description: How to configure Agent Controller TLS with Aembit's PKI in Kubernetes and Virtual Machine deployments Aembit provides the ability for you to use Agent Controller Transport Layer Security (TLS) certificates for secure Agent Proxy to Agent Controller communication in Kubernetes environments, and on Virtual Machine deployments, using Aembit's PKI. ## Configure Agent Controller TLS with Aembit's PKI in Kubernetes If you have a Kubernetes deployment and would like to use Aembit's PKI, there are two configuration options, described below. ### Automatic TLS configuration If you *aren't already* using a custom PKI, install the latest Aembit Helm Chart. By default, Agent Controllers are automatically configured to accept TLS communication from Agent Proxy. 
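For instance, assuming the `aembit` release name and namespace used in the quickstart earlier in this documentation, updating an existing deployment to the latest chart might look like:

```shell
# Refresh the Aembit Helm repo and upgrade the existing release to the latest chart,
# keeping the values (tenant, Agent Controller settings) from the previous install
helm repo update aembit
helm upgrade aembit aembit/aembit -n aembit --reuse-values
```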
### Preserve existing custom configuration If you have already configured a custom PKI-based Agent Controller TLS, no additional steps are necessary, as your configuration will be preserved. ## Configure Aembit's PKI-based Agent Controller for VM deployments If you are using a Virtual Machine, Agent Controller will not know which hostname Agent Proxy should use to communicate with Agent Controller. This requires you to manually configure Agent Controller to enable TLS communication between Agent Proxy and Agent Controller. ### Aembit Tenant configuration 1. Log into your Aembit Tenant, and go to **Edge Components -> Agent Controllers**. 1. Select or create a new Agent Controller. 1. In **Allowed TLS Hostname (Optional)**, enter the FQDN (Ex: `my-subdomain.my-domain.com`), subdomain, or wildcard domain (Ex: `*.example.com`) to use for the Aembit Managed TLS certificate. :::note The allowed TLS hostname is unique to each Agent Controller on which you configure it. ::: 1. Click **Save**. ### Manual configuration If you have not already configured Aembit's PKI, perform the steps listed below. 1. Install Agent Controller on your Virtual Machine, and set the `AEMBIT_MANAGED_TLS_HOSTNAME` environment variable to the hostname that Agent Proxy uses to communicate with Agent Controller. When set, Agent Controller retrieves the certificate for the hostname from Aembit Cloud, enabling TLS communication between Agent Proxy and Agent Controller. 2. Configure Agent Proxy's Virtual Machines to trust the Aembit Tenant Root Certificate Authority (CA). ## Confirming TLS Status When you have configured Agent Controller TLS, you can verify the status of Agent Controller TLS by performing the following steps: 1. Log into your Aembit tenant. 2. Click on the **Edge Components** link in the left navigation pane. You are directed to the Edge Components dashboard. ![Edge Components Agent Controller Status Page](../../../../../../assets/images/agent_controller_tls_status_page.png) 3. By default, the **Agent Controllers** tab is selected. You should see a list of your configured Agent Controllers. 4. Verify TLS is active by confirming the colored status indicator in the TLS column for the Agent Controller. :::note If the TLS status is not colored, this means TLS is not configured for Agent Controller. ::: ## Agent Controller TLS Support Matrix The table below lists the various Agent Controller TLS deployment models, denoting whether the configuration process is manual or automatic. | Agent Controller Deployment Model | Customer Based PKI | Aembit Based PKI | |------------------------------------|--------------------|----------------------------| | Kubernetes | Manual | Automatic | | Virtual Machine | Manual | Manual | | ECS | Not Supported | Automatic | ## Automatic TLS Certificate Rotation Aembit-managed certificates are automatically rotated by the Agent Controller, with no manual steps or extra configuration required. --- # Configure a custom PKI-based Agent Controller TLS URL: https://docs.aembit.io/user-guide/deploy-install/advanced-options/agent-controller/configure-customer-pki-agent-controller-tls/ Description: How to configure a custom PKI-based Agent Controller TLS in Kubernetes and Virtual Machine deployments import { Steps } from '@astrojs/starlight/components'; Aembit provides the ability for you to use your own PKI-based Transport Layer Security (TLS) for secure Agent Proxy to Agent Controller communication in Kubernetes environments, and on Virtual Machine deployments.
## Prerequisites * Access to a Certificate Authority such as HashiCorp Vault or Microsoft Active Directory Certification Authority. * A TLS PEM Certificate and Key file pair that is configured for the hostname of the Agent Controller. * On Kubernetes, the hostname will be `aembit-agent-controller.<namespace>.svc.cluster.local` where `<namespace>` is the namespace where the Aembit Helm chart is installed. * On Virtual Machines, the hostname is going to depend on your network and DNS configuration. Please use the FQDN or PQDN hostname which will be used by Agent Proxy instances to access the Agent Controller. * The TLS PEM Certificate file should contain both the Agent Controller certificate and chain to the Root CA. * Self-signed certificates are not supported by the Agent Proxy for Agent Controller TLS communication. ## Kubernetes environment configuration The Aembit Agent Controller requires that the TLS certificate and key be available in a [Kubernetes TLS Secret](https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_secret_tls/). Therefore, there are two steps to complete this configuration. 1. Create a Kubernetes TLS Secret using the `kubectl create secret tls` command or similar method. For example: ```shell kubectl create secret tls NAME --cert=path/to/cert/file --key=path/to/key/file ``` 2. In the Aembit Helm chart installation file, set the `agentController.tls.secretName` value equal to the name of the secret created in step #1. :::note Both prior steps assume that the TLS Secret and Aembit Helm chart are installed into the same namespace. ::: If you don't have your own CA, you may consider [Kubernetes cert-manager](https://github.com/cert-manager/cert-manager) to create and maintain certificates and keys in your Kubernetes environment. ## Virtual machine environment configuration When installing the Agent Controller on a Virtual Machine, there are two installation parameters that must be specified: - `TLS_PEM_PATH` - `TLS_KEY_PATH` For example, the Agent Controller installation command line could be specified like: ```shell sudo TLS_PEM_PATH=/path/to/tls.crt TLS_KEY_PATH=/path/to/tls.key AEMBIT_TENANT_ID=tenant AEMBIT_AGENT_CONTROLLER_ID=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee ./install ``` ## Rotating custom PKI Agent Controller TLS certificates Regular certificate rotation is essential to ensure that certificates remain valid and do not expire unexpectedly. By routinely updating certificates before their expiration, you prevent service disruptions and maintain secure communication. In the Aembit environment, Agent Controller stores TLS certificate and key files in the `/opt/aembit/edge/agent_controller` directory. ### Update TLS Certificate To update your TLS certificate and key, perform the steps described below. 1. Replace the existing TLS certificate and key files in the `/opt/aembit/edge/agent_controller` directory with your new certificate and key files. 2. Ensure the ownership of these new files matches the original settings (user `aembit_agent_controller`, group `aembit`). ```shell sudo chown aembit_agent_controller:aembit /opt/aembit/edge/agent_controller/tls.crt sudo chown aembit_agent_controller:aembit /opt/aembit/edge/agent_controller/tls.key ``` 3. Verify the file permissions match the original settings. ```shell $: /opt/aembit/edge/agent_controller# ls -l -r-------- 1 aembit_agent_controller aembit ....... tls.crt -r-------- 1 aembit_agent_controller aembit ....... tls.key ``` 4.
After you have replaced the files and adjusted the permissions, restart the Agent Controller service to apply these changes. ```shell sudo systemctl restart aembit_agent_controller ``` 5. You can verify that the TLS certificate and key were properly rotated by checking the following log message: ```shell $: journalctl --namespace aembit_agent_controller | grep "Tls certificate sync background process" [INF] (Aembit.AgentController.Business.Services.BackgroundServices.TlsSyncUpService) ``` - If TLS is configured successfully, you will see the following message displayed: *Tls certificate sync background process is active.* - If TLS is not configured successfully, you will see the following message displayed: *Tls certificate sync background process will not run because Tls is not enabled.* --- # How to create an Agent Controller URL: https://docs.aembit.io/user-guide/deploy-install/advanced-options/agent-controller/create-agent-controller/ import { Steps } from '@astrojs/starlight/components'; The Agent Controller is a helper component that facilitates the registration of other Aembit Edge Components. This page details how to create a new Agent Controller in your Aembit Tenant. ## Create an Agent Controller \{#create-an-ac\} To create an Agent Controller in your Aembit Tenant, follow these steps: 1. Log into your Aembit Tenant, and go to **Edge Components -> Agent Controllers**. ![New in Agent Controllers section](../../../../../../assets/images/agent_controller_create_entry_point_ac.png) 1. Click **+ New**, which displays the **Agent Controller** flyout. 1. Fill out the following fields: - **Name**: Choose a user-friendly name for your controller. - **Description (optional)**: Add a brief description to help identify its purpose. - **Trust Provider**: Select an existing Trust Provider from the dropdown menu. If you don't have a Trust Provider set up, refer to [Add Trust Provider](/astro/user-guide/access-policies/trust-providers/add-trust-provider) to create one. :::note Trust Providers enable identity attestation during workload registration. Associating your Agent Controller with a Trust Provider accomplishes this for you. This makes sure there is secure, verified communication between components. Aembit recommends configuring a Trust Provider as part of your setup. ::: - **Allowed TLS Hostname (Optional)** - Enter the FQDN (Ex: `my-subdomain.my-domain.com`), subdomain, or wildcard domain (Ex: `*.example.com`) to include in the [Aembit Managed TLS](/astro/user-guide/deploy-install/advanced-options/agent-controller/configure-aembit-pki-agent-controller-tls) certificate. This restricts the certificate to only be valid when Agent Proxies attempt to access Agent Controller using this specific domain name. The allowed TLS hostname is unique to each Agent Controller on which you configure it. 1. Click **Save**. Once you save it, your newly created Agent Controller appears in the list of available Agent Controllers. --- # How to shutdown Agent Proxy using HTTP URL: https://docs.aembit.io/user-guide/deploy-install/advanced-options/agent-proxy/agent-proxy-shutdown/ Description: How to shut down the Agent Proxy using HTTP ## Introduction In certain scenarios, it may be necessary to manually shut down the Agent Proxy when the main container has exited, but the sidecar process continues to run. Instead of terminating the entire job, which may then appear as a cancelled job, Aembit offers a way to gracefully shut down the Agent Proxy sidecar so the job can complete normally.
## Agent Proxy Shutdown The Agent Proxy can be shut down by sending an HTTP `POST` request to its `/quit` endpoint. ### Example Command An example command using `curl` (where `<port>` is the Agent Proxy's `AEMBIT_SERVICE_PORT`, `51234` by default): ```shell curl -X POST localhost:<port>/quit ``` When the Agent Proxy is properly configured to receive this request, it will flush any remaining events to the backend before exiting gracefully. ## Configuration Flags The behavior of the Agent Proxy can be controlled through specific environment variables outlined below: `AEMBIT_ENABLE_HTTP_SHUTDOWN` Environment Variable This variable controls whether the Agent Proxy supports the `/quit` endpoint. - **Default Value**: `false` - **Accepted Values**: `false` or `true` `AEMBIT_SERVICE_PORT` Environment Variable This variable specifies the port on which the Agent Proxy responds to the diagnostic and configuration endpoint `/quit`. - **Default Value**: `51234` - **Accepted Values**: an integer number in the range 1 to 65535 (inclusive) ### Accessibility and Security Considerations :::note Handler endpoints, including `/quit`, are only accessible via `localhost` or `127.0.0.1`. This setting is non-configurable to ensure security. ::: :::caution The `/quit` handler should only be enabled in fully trusted environments. When enabled, any application with network access to `127.0.0.1` can send a request to shut down the Agent Proxy. ::: ## Recommended Environments :::note The `/quit` handler is intended for use primarily within **Kubernetes** environments. ::: --- # Agent Proxy termination strategy URL: https://docs.aembit.io/user-guide/deploy-install/advanced-options/agent-proxy/agent-proxy-termination-strategy/ Description: Learn about Agent Proxy's termination strategies across different environments and how to configure the AEMBIT_SIGTERM_STRATEGY variable Agent Proxy must be able to serve Client Workload traffic throughout the entire lifecycle of the Client Workload. When both the Client Workload and Agent Proxy receive a termination signal (`SIGTERM`), the Agent Proxy attempts to continue operating and serving traffic until the Client Workload exits. Agent Proxy runs in distinct environments, such as Virtual Machines, Kubernetes, and ECS Fargate, where workload lifecycles can differ. To handle these variations, Agent Proxy uses different termination strategies. ## Configuration You can configure the termination strategy by setting the `AEMBIT_SIGTERM_STRATEGY` environment variable. The supported values are: - `immediate` – Exits immediately upon receiving `SIGTERM`. - `sigkill` – Ignores the `SIGTERM` signal and waits for a `SIGKILL`. ## Default termination strategies The following table lists the default termination strategy for each environment. You can override the default behavior using the `AEMBIT_SIGTERM_STRATEGY` environment variable. | Environment | Default Termination Strategy | |---------------------------|------------------------------| | Kubernetes | `sigkill` | | AWS ECS Fargate | `sigkill` | | Virtual Machine (Linux) | `immediate` | | Virtual Machine (Windows) | N/A | | Virtual Appliance | `immediate` | | Docker-compose on VMs | `sigkill` | | AWS Lambda container | `immediate` | --- # AWS Relational Database Service (RDS) Certificates URL: https://docs.aembit.io/user-guide/deploy-install/advanced-options/agent-proxy/aws-rds/ Description: How to install AWS RDS Certificate to Agent Proxy to make it trust the AWS RDS Certificate :::note MySQL, PostgreSQL, and Redshift in AWS use a TLS certificate issued from an AWS root certificate authority that's not publicly trusted.
You must follow the steps on this page when attempting to connect to MySQL, PostgreSQL, and Redshift in AWS. ::: To install all the possible CA Certificates for AWS RDS databases, follow the instructions and use the following commands: 1. Transition to a root session so that you have root access. ```shell sudo su ``` 2. Run the following commands to download the CA certificate bundle from AWS, split it into a set of `.crt` files, and then update the local trust store with all these files. ```shell apt update ; apt install -y ca-certificates curl rm -f /tmp/global-bundle.pem curl "https://truststore.pki.rds.amazonaws.com/global/global-bundle.pem" -o /tmp/global-bundle.pem csplit -s -z -f /usr/local/share/ca-certificates/aws-rds /tmp/global-bundle.pem '/-----BEGIN CERTIFICATE-----/' '{*}' for file in /usr/local/share/ca-certificates/aws-rds*; do mv -- "$file" "${file%}.crt"; done update-ca-certificates ``` 3. After running these commands, you should see output similar to the following: ```shell Updating certificates in /etc/ssl/certs... 118 added, 0 removed; done. ``` 4. Ensure you exit your root session. ```shell exit ``` :::note Make sure to follow the preceding instructions for each virtual machine running Client Workloads that needs access to AWS MySQL, PostgreSQL, or Redshift. ::: --- # How to configure explicit steering URL: https://docs.aembit.io/user-guide/deploy-install/advanced-options/agent-proxy/explicit-steering/ Description: How to use the Explicit Steering feature to direct specific traffic to the Agent Proxy The Explicit Steering feature enables you to route and direct specific traffic in a Kubernetes deployment to the Agent Proxy. :::note By default, in a Kubernetes deployment, all traffic in a given pod is sent to the Agent Proxy. ::: ## Configure Explicit Steering To configure explicit steering in your Kubernetes cluster, follow the steps described on the [Kubernetes Deployment](/astro/user-guide/deploy-install/kubernetes/kubernetes) page in the Aembit Technical Documentation and set the `aembit.io/steering-mode` annotation to `explicit`. Once you have set the steering mode to `explicit`, each Client Workload that wants to use Aembit will need to be configured to use Agent Proxy as its HTTP proxy. The default port used for explicit steering is `8000`. If this conflicts with a port that the Client Workload uses, you can override the explicit port number via the `AEMBIT_HTTP_SERVER_PORT` environment variable. ## Examples The following examples show different Client Workload applications using Agent Proxy as an HTTP proxy. ### Example Client Workload using `curl` with `-x` to specify an HTTP proxy ```sh curl -x localhost:8000 myserverworkload ``` ### Example Client Workload using HashiCorp Vault CLI (Vault CLI implicitly uses VAULT_HTTP_PROXY) ```shell export VAULT_HTTP_PROXY="http://localhost:8000" vault token lookup ``` ### Example Client Workload written in Go (Go's HTTP client implicitly uses HTTPS_PROXY) ```shell export HTTPS_PROXY=localhost:8000 ./run_go_app [...]
``` --- # About traffic steering methods URL: https://docs.aembit.io/user-guide/deploy-install/advanced-options/agent-proxy/steering/ Description: How different traffic steering methods work and how to configure them for various deployment models Traffic steering is the process of directing network traffic from Client Workloads to an Agent Proxy, which inspects and modifies this traffic. Selecting the appropriate steering method depends on factors such as the deployment model, protocol compatibility, and the level of control required over traffic management. Certain deployment models offer flexibility, allowing you to select the steering method that best suits your needs. In other cases, the deployment model dictates the steering method. ## Conceptual overview Traffic steering methods determine how network traffic from Client Workloads reaches the Agent Proxy. Three primary methods exist: - **Transparent Steering**: Automatically redirects all TCP traffic without client configuration. - **Selective Transparent Steering**: Automatically redirects TCP traffic only for specified hostnames without client configuration. - **Explicit Steering**: Requires explicit client-side configuration to route traffic. ## Method comparison and protocol support | Deployment Model | Transparent Steering | Selective Transparent Steering | Explicit Steering | | --------------------- | -------------------- | ------------------------------ | ----------------- | | Kubernetes (K8S) | ✅ (default) | ❌ | ✅ | | Virtual Machines (VM) | ✅ (default) | ✅ | ❌ | | Elastic Container Service (ECS) Fargate | ❌ | ❌ | ✅ (default) | | AWS Lambda Extension | ❌ | ❌ | ✅ (default) | | Virtual Appliance | ❌ | ❌ | ✅ (default) | **Protocol Support:** - **Transparent Steering:** All supported protocols. - **Selective Transparent Steering:** All supported protocols. - **Explicit Steering:** HTTP-based protocols only. ## Technical details and configuration ### Transparent steering Transparent Steering automatically redirects all TCP traffic using `iptables` without requiring any client-side awareness. It's straightforward, minimizing configuration overhead. Transparent Steering is the default method for Kubernetes (K8S) and Virtual Machine (VM) deployments and doesn't require additional configuration. ### Selective transparent steering Selective Transparent Steering redirects TCP traffic only for specified hostnames, providing precise control without explicit client configuration. - Turned off by default. - Available exclusively for virtual machines. - Enable this by setting the environment variable `AEMBIT_STEERING_ALLOWED_HOSTS` during installation: ```shell AEMBIT_STEERING_ALLOWED_HOSTS=graph.microsoft.com,vault.mydomain [...] ./install ``` For further information, see the [Agent Proxy Virtual Machine Installation Guide](/astro/user-guide/deploy-install/virtual-machine/linux/agent-proxy-install-linux). ### Explicit steering Explicit Steering directs Client Workload traffic based on explicit client-side configuration. It's the default steering method for the Elastic Container Service (ECS) Fargate, AWS Lambda Extension, and Virtual Appliance deployment models. Explicit Steering is also an optional configuration for Kubernetes deployments.
In Kubernetes, enable explicit steering by setting the `aembit.io/steering-mode` annotation on a Client Workload: ```yaml aembit.io/steering-mode: explicit ``` For Kubernetes-specific installation details and annotation configurations, refer to the [Kubernetes Installation Guide](/astro/user-guide/deploy-install/kubernetes/kubernetes). #### Explicit steering port configuration Agent Proxy listens on port `8000` for traffic sent using explicit steering. If this conflicts with an existing application port, override it using the `AEMBIT_HTTP_SERVER_PORT` environment variable. #### Explicit steering examples Many ways exist to configure Client Workloads to use explicit steering. Common methods include setting environment variables such as `HTTP_PROXY` or `HTTPS_PROXY`. However, specific applications might provide their own explicit configuration methods to route traffic via a proxy. The following are examples: - **Go applications:** - Using the `HTTPS_PROXY` environment variable, widely recognized by many HTTP libraries: ```shell export HTTPS_PROXY=localhost:8000 ./run_go_app [...] ``` - **Using `curl` command:** - Explicitly specifying proxy configuration via a command-line argument: ```shell curl -x localhost:8000 myserverworkload ``` - **HashiCorp Vault CLI:** - Configuring the HashiCorp Vault-specific environment variable to route traffic via the proxy: ```shell export VAULT_HTTP_PROXY="http://localhost:8000" vault token lookup ``` --- # How to change Edge Component log levels URL: https://docs.aembit.io/user-guide/deploy-install/advanced-options/changing-agent-log-levels/ Description: How to change the log levels of Aembit's Edge Components import { Tabs, TabItem } from '@astrojs/starlight/components'; Sometimes, you'll want to use a different logging value than an Agent Controller's or Agent Proxy's default, for example when troubleshooting a problem with your agent or when trying out a new feature. The following sections detail how to change the log level of your: - [Agent Controller](#change-agent-controller-log-level) - [Agent Proxy](#change-agent-proxy-log-level) :::note The process to change your Agent Controller's or Agent Proxy's log level differs depending on your chosen deployment type. Make sure to use the correct tab in the sections to change your log levels. ::: See [Log level reference](/astro/reference/edge-components/agent-log-level-reference) for complete details about each agent's log levels. ## Change Agent Controller log level Use the following tabs to change your Agent Controller's log level using the `AEMBIT_LOG_LEVEL` environment variable: 1. Log into your Agent Controller. 1. Open the Aembit Agent Controller service file at `/etc/systemd/system/aembit_agent_controller.service`. You may have to open this as root using `sudo`. 1. Under `[Service]`, update or add `Environment=AEMBIT_LOG_LEVEL=`, and set the log level you want. For example: ```txt ins="" [Service] // /etc/systemd/system/aembit_agent_controller.service ... User=aembit_agent_controller Restart=always Environment=AEMBIT_TENANT_ID=abc123 Environment=AEMBIT_DEVICE_CODE= Environment=AEMBIT_AGENT_CONTROLLER_ID=A12345 Environment=ASPNETCORE_URLS=http://+:5000,http://+:9090 Environment=AEMBIT_LOG_LEVEL= StandardOutput=journal StandardError=journal ... ``` 1. Reload the Aembit Agent Controller config: ```shell systemctl daemon-reload ``` 1.
Restart the Aembit Agent Controller service: ```shell systemctl restart aembit_agent_controller.service ``` {/* @hhewett note: I'll be adding the rest of the ways to change env vars at a later time. see https://aembit.atlassian.net/browse/ATD-423 for more info */} ## Change Agent Proxy log level Use the following tabs to change your Agent Proxy's log level using the `AEMBIT_LOG_LEVEL` environment variable: 1. Log into your Agent Proxy. 1. Open the Aembit Agent Proxy service file at `/etc/systemd/system/aembit_agent_proxy.service`. You may have to open this as root using `sudo`. 1. Under `[Service]`, update or add `Environment=AEMBIT_LOG_LEVEL=`, and set the log level you want. For example: ```txt [Service] ... User=aembit_agent_proxy Restart=always StandardOutput=journal StandardError=journal TimeoutStopSec=20 Nice=-20 LimitNOFILE=65535 Environment=AEMBIT_SIGTERM_STRATEGY=immediate Environment=AEMBIT_AGENT_CONTROLLER=https://my-proxy-service:5000 Environment=AEMBIT_DOCKER_CONTAINER_CIDR= Environment=CLIENT_WORKLOAD_ID= Environment=AEMBIT_AGENT_PROXY_DEPLOYMENT_MODEL=vm Environment=AEMBIT_SERVICE_PORT=51234 // highlight-next-line Environment=AEMBIT_LOG_LEVEL= ... ``` 1. Reload the Aembit Agent Proxy config: ```shell systemctl daemon-reload ``` 1. Restart the Aembit Agent Proxy service: ```shell systemctl restart aembit_agent_proxy.service ``` {/* @hhewett note: I'll be adding the rest of the ways to change env vars at a later time. see https://aembit.atlassian.net/browse/ATD-423 for more info */} --- # About TLS Decrypt URL: https://docs.aembit.io/user-guide/deploy-install/advanced-options/tls-decrypt/ Description: Overview of how TLS Decrypt works TLS Decrypt allows the Aembit Agent Proxy to decrypt and manage encrypted traffic between your Client and Server Workloads, enabling Workload IAM functionality. To configure TLS Decrypt, see [Configure TLS Decrypt](/astro/user-guide/deploy-install/advanced-options/tls-decrypt/configure-tls-decrypt). One of the most important aspects of TLS decryption is the way in which you manage keys and certificates. Aembit has implemented the following set of security measures to make sure TLS decryption is secure in your Aembit environment: - Aembit stores private keys for certificates used in TLS decryption in Agent Proxy memory only; they are never persisted. - The private key that Aembit uses for the TLS Decrypt CA is securely stored and kept in Aembit Cloud. - The default lifetime for a TLS decryption certificate is 1 day. - TLS certificates are only generated for the target host. Wildcards are explicitly **not** used. - The certificate hostname can only match the hostnames that are in your Server Workloads. - A certificate is only issued if it meets the requirements of the Access Policy, which include Client Workload and Server Workload identification, Trust Provider attestation, and successful validation of conditional access checks. - Each Aembit Tenant has a unique Root CA, making sure TLS decryption certificates issued by one tenant aren't trusted by Client Workloads configured to trust the Root CA of a different tenant. :::caution Since Aembit issues each tenant its own Root CA, Aembit recommends setting up separate tenants for environments with distinct security boundaries. By configuring separate tenants, each environment remains securely isolated. This prevents potential risks where an actor uses a certificate issued in one environment (with lower safeguards) to attack another environment with stricter safeguards.
::: ## Example workflow When a Client Workload first attempts to establish a connection to a Server Workload, Agent Proxy intercepts the connection, generates a key pair and Certificate Signing Request (CSR), and then requests a certificate for TLS decryption from Aembit Cloud. This certificate is then cached and reused for subsequent connections until a [configurable percentage of its lifetime](/astro/user-guide/deploy-install/advanced-options/tls-decrypt/configure-tls-decrypt#change-your-leaf-certificate-lifetime) has elapsed, optimizing performance while maintaining security. Once Aembit Cloud evaluates the request and authorizes the Client Workload to access the Server Workload, Aembit Cloud issues a certificate that the Agent Proxy can use to decrypt the TLS connection and permit that access. ## Decryption scope Aembit Agent Proxy only decrypts connections when it evaluates and matches the associated Access Policy, and the Server Workload for this Access Policy has the TLS Decrypt flag enabled. Because of these restrictions, the Agent Proxy only decrypts the connection when it: - Identifies the Client Workload - Identifies the Server Workload - Finds the associated Access Policy - Attests the Client Workload - Passes Conditional Access checks - Confirms the Server Workload has the TLS Decrypt flag enabled If any of these conditions aren't met, Aembit leaves the connection intact and doesn't decrypt it. ## Standalone CA for TLS Decrypt Instead of using your Aembit Tenant's CA, you have the option to define and use your own Standalone CA. See [About Standalone CA for TLS Decrypt](/astro/user-guide/deploy-install/advanced-options/tls-decrypt/about-tls-decrypt-standalone-ca) to learn more. To set up a Standalone CA, see [How to configure a Standalone CA](/astro/user-guide/deploy-install/advanced-options/tls-decrypt/configure-tls-decrypt-standalone-ca). --- # About Standalone CA for TLS Decrypt URL: https://docs.aembit.io/user-guide/deploy-install/advanced-options/tls-decrypt/about-tls-decrypt-standalone-ca/ Description: How to configure TLS Decrypt with a Standalone CA Standalone Certificate Authorities (CAs) function as dedicated, isolated entities that grant you more granular control over managing [TLS Decrypt](/astro/user-guide/deploy-install/advanced-options/tls-decrypt/). With Standalone CAs, you can create, assign, and manage unique CAs that are independent from Aembit's default CAs to precisely manage TLS traffic. You can assign Standalone CAs to specific resources (such as Client Workloads or [Resource Sets](/astro/user-guide/administration/resource-sets/)) rather than tying those resources to your Aembit configuration at the Tenant level. To set up a Standalone CA, see [How to configure Standalone CA for TLS Decrypt](/astro/user-guide/deploy-install/advanced-options/tls-decrypt/configure-tls-decrypt-standalone-ca). ## Important terminology **Trust model**: A set of rules and configurations that define which CAs are trusted within a given context. In the context of Aembit's TLS Decrypt feature, a trust model determines whether Aembit uses a Tenant-level CA, a Standalone CA, or both to validate TLS certificates. **Trust boundary**: The defined scope within which a CA is trusted. By assigning a Standalone CA to a Resource Set, you create a distinct trust boundary that isolates that Resource Set's workloads from other environments.
## How Standalone CAs work Standalone CAs provide a decentralized approach to certificate management by allowing individual resources to define their own trusted CAs rather than relying on a single Tenant-wide CA. After you create and assign a Standalone CA to a Resource Set, it establishes a distinct trust boundary, making sure that workloads in separate Resource Sets operate independently. This isolation makes sure that different Resource Sets don't rely on the same root certificate. It also reduces the risk of unintended certificate exposure by limiting each CA's visibility and application to its defined scope. Additionally, assigning a Standalone CA directly to a Client Workload overrides any Resource Set or Tenant-level CA, providing a way to enforce unique trust requirements for workloads that require separate security controls. If you don't assign a Standalone CA to a Client Workload or its associated Resource Set, Aembit automatically falls back to the Tenant-level CA. This fallback makes sure workloads can still establish trusted TLS connections even if no Standalone CA is explicitly configured, maintaining continuity in certificate management. ## Standalone CA assignment You have two options when assigning a Standalone CA: - **Assign to a Resource Set**: Assigning a Standalone CA to a Resource Set isolates its trust model and establishes a shared trust boundary for all workloads within that set. This makes sure that only workloads within that Resource Set rely on the selected CA. - **Assign to a Client Workload**: By explicitly assigning a Standalone CA to a Client Workload, you can override the Tenant-level CA or the Standalone CA set at the Resource Set level. This assignment takes precedence over the Resource Set's CA, giving you fine-grained control over TLS decryption behavior on individual Client Workloads. This layered structure allows you to establish both broad certificate policies via Resource Sets and targeted overrides for specific Client Workloads. ## How Aembit chooses which CA to use Aembit resolves certificate authorities during the TLS Decrypt process from most to least restrictive: 1. **Client Workload Level**: Aembit first checks for a Standalone CA assigned directly to the requesting Client Workload. 1. **Resource Set Level**: If Aembit doesn't find a workload-specific CA, it checks for a CA assigned to the workload's Resource Set. 1. **Tenant Level**: If you haven't assigned a Standalone CA at either level, Aembit defaults to using the Tenant-level CA. This hierarchical approach allows targeted overrides for specific workloads while preserving the broader certificate structure across your infrastructure.
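The resolution order above amounts to a simple fallback chain. The following shell pseudologic is illustrative only (the variables are hypothetical and not part of any Aembit CLI or API); it mirrors the documented Client Workload → Resource Set → Tenant precedence:

```shell
# Illustrative pseudologic of the documented CA resolution order (hypothetical variables)
resolve_ca() {
  if [ -n "$CLIENT_WORKLOAD_CA" ]; then
    echo "$CLIENT_WORKLOAD_CA"       # 1. Standalone CA assigned directly to the Client Workload
  elif [ -n "$RESOURCE_SET_CA" ]; then
    echo "$RESOURCE_SET_CA"          # 2. Standalone CA assigned to the workload's Resource Set
  else
    echo "$TENANT_CA"                # 3. Fall back to the Tenant-level CA
  fi
}
```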
**Complex example**: Imagine an organization that operates multiple environments for development, staging, and production, each managed within its own Resource Set. In this setup, the production Resource Set has a Standalone CA configured to enforce stricter security controls. A critical backend API within this Resource Set also has a Standalone CA assigned directly to it, designed to meet its unique certificate requirements. Meanwhile, the development and staging Resource Sets have no Standalone CAs assigned. If a workload in the production Resource Set attempts to establish a TLS connection: - Aembit first checks for a Standalone CA assigned directly to that workload. Since the backend API has its own CA, Aembit uses it. - If the workload didn't have a workload-specific CA, Aembit would default to the production Resource Set's Standalone CA. - If no Standalone CA were assigned to either the workload or the Resource Set, Aembit would instead use the Tenant-level CA. Meanwhile, workloads in the development and staging Resource Sets would skip the first two steps, defaulting directly to the Tenant-level CA since no Standalone CAs are defined. This hierarchy allows the organization to enforce stricter security controls for critical services while maintaining simpler certificate management in less sensitive environments.
## Best practices for Standalone CAs - **Use Standalone CAs for Critical Resources**: For sensitive services requiring stricter control, Standalone CAs improve isolation and minimize certificate sprawl. - **Define Clear Certificate Lifetimes**: Setting appropriate expiration periods reduces exposure to outdated certificates. - **Audit and Monitor CA Usage**: Periodically review CA associations to maintain secure and predictable TLS decryption behavior. - **Keep organization consistent**: Consistency matters for predictable TLS decryption behavior, so align Standalone CA assignments with your infrastructure's organizational structure. - **Simplify where you can**: While scoping CAs narrowly can reduce exposure, consolidating similar workloads under a shared Resource Set can simplify certificate management. ## Scoping Standalone CAs too tightly While tightly scoped Standalone CAs improve security and isolation, they can increase operational complexity. Managing multiple narrowly scoped CAs requires careful tracking of certificate rotations and renewals. Frequent resource movement across environments may lead to mismatched CA associations, disrupting communication. Additionally, troubleshooting becomes more complex when multiple isolated trust boundaries exist. Balance security with operational efficiency when defining CA scopes. ## Standalone CA behavior When managing Standalone CAs, it's crucial to understand how Resource Sets influence their behavior. Resource Sets define the scope within which a Standalone CA is trusted, which directly impacts both certificate visibility and Client Workload associations. ### In Resource Sets - **Consider trust boundary establishment**: Assigning a Standalone CA to a Resource Set creates a distinct trust boundary, with all Client Workloads in that Resource Set inheriting the assigned CA unless overridden. - **Plan for certificate isolation**: Maintain unique Standalone CAs for different Resource Sets to prevent certificate trust from extending across unrelated workloads. - **Beware of resource portability risks**: Moving workloads between Resource Sets may break certificate trust unless the new Resource Set shares the same Standalone CA or you reconfigure it. ### In Client Workloads - **Use targeted overrides strategically**: Assign a Standalone CA directly to a Client Workload to override the Resource Set's CA only when workloads have distinct security requirements. - **Watch for inconsistent trust models**: Carefully coordinate workload-level CA assignments to avoid creating fragmented trust models and certificate mismatches. - **Remember the tenant-level fallback**: If you don't assign a Standalone CA to either the Resource Set or Client Workload, Aembit defaults to using the Tenant-level CA. By thoughtfully aligning Standalone CA assignments with your Resource Sets and workload structure, you can achieve stronger security without adding unnecessary complexity. 
## Additional resources - [About TLS Decrypt](/astro/user-guide/deploy-install/advanced-options/tls-decrypt/) - [Configure a Standalone CA](/astro/user-guide/deploy-install/advanced-options/tls-decrypt/configure-tls-decrypt-standalone-ca) --- # Configure TLS Decrypt URL: https://docs.aembit.io/user-guide/deploy-install/advanced-options/tls-decrypt/configure-tls-decrypt/ Description: How to configure TLS Decrypt when using HTTPS or Redis over TLS import { Tabs, TabItem } from '@astrojs/starlight/components'; When your Client Workload uses Transport Layer Security (TLS) (such as HTTPS or Redis with TLS) to communicate with the Server Workload, you must enable [TLS Decrypt](/astro/user-guide/deploy-install/advanced-options/tls-decrypt/) in your Aembit Tenant. TLS Decrypt allows the Aembit Agent Proxy to decrypt and manage encrypted traffic between your Client and Server Workloads, enabling Workload IAM functionality. To configure TLS Decrypt, you must configure your Client Workloads to trust your Aembit Tenant Root Certificate Authorities (CAs) so they can establish TLS connections with your Server Workload. To do this, you must: - [Get your Aembit Tenant Root CA](#get-your-aembit-tenant-root-ca). - [Add the root CA to the root store](#add-your-aembit-tenant-root-ca-to-a-trusted-root-store) on your Client Workloads. - You also have the option to [change your Leaf Certificate Lifetime](#change-your-leaf-certificate-lifetime) (default 1 day). ## Prerequisites To configure TLS Decrypt, you must have the following: - A Server Workload with TLS enabled (see [Enable Server Workload TLS](/astro/user-guide/access-policies/server-workloads/server-workload-enable-tls)). - Your Aembit Tenant Root CA. - TLS version 1.2+ on your Client and Server Workloads (Agent Proxy requirement). :::note If your Client Workloads support TLS version 1.3, then Agent Proxy uses TLS version 1.3. ::: ## Get your Aembit Tenant Root CA To get your Aembit Tenant Root CA, perform the following steps: 1. Log in to your Aembit tenant. 1. In the left sidebar menu, go to **Edge Components**. 1. In the top ribbon menu, click **TLS Decrypt**. ![TLS Decrypt Page](../../../../../../assets/images/tls_decrypt.png) 1. Click Download your Aembit Tenant Root CA certificate. Alternatively, you may download the Aembit Tenant Root CA directly by going to the following URL, making sure to replace `` with your actual Aembit tenant ID: ```shell https://.aembit.io/api/v1/root-ca ``` ## Add your Aembit Tenant Root CA to a trusted root store Different operating systems and application frameworks have different methods for adding root certificates to their associated root store. Most Client Workloads use the system root store. This isn't always the case, however, so make sure to consult your operating system's documentation. You must install your Aembit Tenant Root CA on your Client Workload container or Virtual Machine (VM). Install your Aembit Tenant Root CA either during workload build/provisioning time, or at runtime, as long as the Client Workload processes trust the Aembit Tenant Root CA. Select a tab for your operating system, distribution, and specific application to see the steps for adding your Aembit Tenant Root CA to your root store: For Debian/Ubuntu Linux, you must include the Aembit Tenant Root CA in your Client Workload container image: 1. [Get your Aembit Tenant Root CA](#get-your-aembit-tenant-root-ca) and save it to `/.crt`. 2.
Add the following commands to your `Dockerfile` to include the root CA: ```dockerfile RUN apt-get update && apt-get install -y ca-certificates COPY /.crt /usr/local/share/ca-certificates RUN update-ca-certificates ``` Alternatively, on a Debian/Ubuntu virtual machine, run: ```shell sudo apt-get update && sudo apt-get install -y ca-certificates sudo wget https://.aembit.io/api/v1/root-ca \ -O /usr/local/share/ca-certificates/.crt sudo update-ca-certificates ``` On a RHEL-based system, run: ```shell sudo yum update -y && sudo yum install -y ca-certificates sudo wget https://.aembit.io/api/v1/root-ca \ -O /etc/pki/ca-trust/source/anchors/.crt sudo update-ca-trust ``` On Windows, run the following in PowerShell: ```powershell Invoke-WebRequest ` -Uri https://.aembit.io/api/v1/root-ca ` -Outfile .cer Import-Certificate ` -FilePath .cer ` -CertStoreLocation Cert:\LocalMachine\Root ``` Node.js uses its own certificate store, distinct from the system's certificate store (such as `/etc/ssl/certs/ca-certificates.crt` on Ubuntu/Debian and `/etc/pki/tls/certs/` on RedHat), to manage and validate trusted root CAs. To include additional trusted root certificates, use the environment variable [NODE_EXTRA_CA_CERTS](https://nodejs.org/api/cli.html#node_extra_ca_certsfile): 1. [Get your Aembit Tenant Root CA](#get-your-aembit-tenant-root-ca). 1. Set the `NODE_EXTRA_CA_CERTS` environment variable accordingly. For Python-based applications, [get your Aembit Tenant Root CA](#get-your-aembit-tenant-root-ca), then follow the section that applies to you: #### Using the Python `requests` library Configure the environment variable `REQUESTS_CA_BUNDLE` to point to a bundle of trusted certificates, including the Aembit Tenant Root CA. For more details, refer to the [requests advanced user guide](https://requests.readthedocs.io/en/latest/user/advanced/). #### Using the Python `httpx` package Configure the environment variable `SSL_CERT_FILE` to include the Aembit Tenant Root CA. For additional information, see [PEP 476](https://peps.python.org/pep-0476/). Please contact Aembit support if you need instructions for a different distribution or trust root store location. ## Change your leaf certificate lifetime The default lifetime of leaf certificates for your Aembit Tenant Root CA is **1 day**. To change this value, follow these steps: 1. Log in to your Aembit tenant. 1. In the left sidebar menu, go to **Edge Components**. 1. In the top ribbon menu, click **TLS Decrypt**. 1. Under **Leaf Certificate Lifetime**, select the desired value (`1 hour`, `1 day`, or `1 week`) from the dropdown menu. 1. Click **Save**. 1. (Optional) To apply the changes to existing leaf certificates, you must either: - Restart the associated Agent Proxy. See [Verifying your leaf certificate lifetime](#verifying-your-leaf-certificate-lifetime). - Wait for existing certificates to expire. :::tip[Security best practice] Changing the lifetime duration for leaf certificates doesn't require reinstallation of the root CA certificate in any location where it's already installed. The root CA certificate itself remains unchanged, and this modification only affects the validity period of newly issued leaf certificates. That said, it's important to remember that **existing certificates retain their original expiration dates**. This means you'll need to restart the associated Agent Proxy to fully transition to the shorter lifetime. This is especially important for more drastic decreases like going from one week to one hour.
::: ### Verifying your leaf certificate lifetime [After changing your leaf certificate lifetime](#change-your-leaf-certificate-lifetime), verify the change by viewing the details of the certificate using the following commands: 1. Log in to the Agent Proxy whose leaf certificate lifetime you updated. 1. Restart the Agent Proxy. 1. Run the following command to create a test TLS connection from the Agent Proxy to a Server Workload. The hostname must be in a Server Workload associated with the Access Policy for that Agent Proxy. ```shell openssl s_client -connect : ``` 1. Inspect the output and look for the `Server certificate` section. Copy the contents of the certificate (highlighted in the following example): ```txt {2-8} Server certificate -----BEGIN CERTIFICATE----- MjUwMjA1MjI1MDIxWhcNMzUwMjAzMjI1MDIxWjBrMSUwIwYDVQQDDBxBZW1iaXQg ... ... omitted for brevity ... 0ApHb7jB+YkL59eG9WOdCUqjQjBAA= -----END CERTIFICATE----- subject-CN - my.service.com ``` 1. View and inspect the detailed contents of the certificate by echoing the certificate you just copied into the `openssl x509 -text` command: ```shell echo "" | openssl x509 -text ``` You should see output similar to the following: ```shell {7-9} Certificate: Data: Version: 3 (0x2) Serial Number: 1234567890 (0x12345fe4) Signature Algorithm: ecdsa-with-SHA384 Issuer: CN = Aembit Tenant 1a2b3c Issuing CA, O = Aembit Inc, C = US, emailAddress = support@aembit.io Validity Not Before: Feb 10 13:25:42 2025 GMT Not After : Feb 11 13:30:42 2025 GMT Subject: CN = my.service.com ... ... omitted for brevity ... ``` Notice that the highlighted `Validity` section has the new lifetime representing the leaf certificate lifetime you selected. Aembit intentionally adds five minutes to the `Not Before` time to account for clock skew between different systems. --- # How to configure a Standalone CA URL: https://docs.aembit.io/user-guide/deploy-install/advanced-options/tls-decrypt/configure-tls-decrypt-standalone-ca/ Description: How to configure Standalone CA for TLS Decrypt To configure a [Standalone CA](/astro/user-guide/deploy-install/advanced-options/tls-decrypt/configure-tls-decrypt-standalone-ca), you must first [create a Standalone CA](#how-to-create-a-standalone-ca), then assign it to your desired resources: - [Resource Set](#assign-a-standalone-ca-to-a-resource-set) - [Client Workload](#assign-a-standalone-ca-to-a-client-workload) :::tip[Paid feature] Standalone CAs are a paid feature. Please contact your Aembit representative for more information about pricing and implementation. ::: ## Prerequisites - [Aembit Role](/astro/user-guide/administration/roles/) with the following **Read/Write** permissions: - `Standalone Certificate Authorities` - `Client Workloads` - `Resource Sets` :::note[Optional] If you've never configured a Standalone CA for TLS Decrypt before, Aembit recommends that you read [Standalone CA behavior](/astro/user-guide/deploy-install/advanced-options/tls-decrypt/about-tls-decrypt-standalone-ca#standalone-ca-behavior) to familiarize yourself with how Standalone CAs interact with Resource Sets. ::: ## How to create a Standalone CA Follow these steps to create a Standalone CA: 1. Log into your Aembit Tenant, and go to **Edge Components -> TLS Decrypt**. 1. In the top right corner, select the **Resource Set** where you want your Standalone CA to reside. ![TLS Decrypt screen with Standalone Certificate Authorities list](../../../../../../assets/images/tls_decrypt-standalone-ca.png) 1.
In the **Standalone Certificate Authorities** section, click **+ New**. This displays the **Standalone Certificate Authority** pop out window: ![New Standalone Certificate Authority pop out window](../../../../../../assets/images/tls_decrypt-standalone-ca-new.png) 1. Enter a **Name** and optional **Description**. 1. Select the lifetime you desire from the **Leaf Certificate Lifetime options** dropdown. 1. Click **Save**. Aembit displays your new Standalone CA in the **Standalone Certificate Authorities** table. ## Assign a Standalone CA to a Resource Set 1. Log into your Aembit Tenant, and go to **Administration -> Resource Sets**. 1. Click the **Resource Set** to which you want to assign a Standalone CA, then click **Edit**. Or follow [Create a new Resource Set](/astro/user-guide/administration/resource-sets/creating-resource-sets) to create one. ![Edit Resource Set screen with Standalone Certificate Authority section](../../../../../../assets/images/resource-set-standalone-ca.png) :::note If you don't see the Standalone CA that you want to assign, the Standalone CA may reside in a different Resource Set. ::: 1. In the **Standalone Certificate Authority** section, select the Standalone CA you want to assign to the Resource Set. 1. Click **Save**. ## Assign a Standalone CA to a Client Workload 1. Log into your Aembit Tenant, and go to **Client Workloads**. 1. In the top right corner, select the **Resource Set** where the Standalone CA you want to assign resides. :::caution It's crucial that you select the correct Resource Set, or you may not see your Standalone CA when assigning it. Or worse, you may assign the wrong Standalone CA to your Client Workload. ::: 1. Select the Client Workload you want to assign the Standalone CA to, then click **Edit**. ![Edit Client Workload screen with Standalone Certificate Authority](../../../../../../assets/images/cw-standalone-ca.png) 1. In the **Standalone Certificate Authority** section, select the Standalone CA you want to assign to the Client Workload. 1. Click **Save**. ## Additional resources - [About Standalone CA for TLS Decrypt](/astro/user-guide/deploy-install/advanced-options/tls-decrypt/about-tls-decrypt-standalone-ca) - [About TLS Decrypt](/astro/user-guide/deploy-install/advanced-options/tls-decrypt/) --- # How to trust certificates issued by private CAs URL: https://docs.aembit.io/user-guide/deploy-install/advanced-options/trusting-private-cas/ Description: How to configure Aembit Edge components to trust certificates issued by private CAs There are scenarios where Server Workloads are secured with certificates issued by private Certificate Authorities (CAs), which are not publicly trusted. The Aembit Agent Proxy, by default, does not trust certificates issued by such private CAs and will not connect to these workloads. This article describes the steps required to configure Edge Components to establish trust with these certificate authorities. ## Adding Private CA ### Kubernetes To have your private CAs trusted, pass them as the `agentProxy.trustedCertificates` parameter in the Aembit Helm chart. This parameter should be a base64-encoded list of PEM-encoded certificates. The resulting Helm command will look like this (please remember to replace your tenant ID and other parameters): ```shell helm install aembit aembit/aembit ` --create-namespace -n aembit ` --set tenant=TENANT,agentController.deviceCode=123456,agentProxy.trustedCertificates=LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0....
``` ### Elastic Container Service (ECS) To trust private CAs, pass them as a variable to the Aembit ECS Terraform module. This variable should be a base64-encoded list of PEM-encoded certificates. ```hcl module "aembit-ecs" { source = "Aembit/ecs/aembit" version = "1.12.0" ... aembit_trusted_ca_certs = "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0...." } ``` ### Virtual machine The Agent Proxy automatically trusts all certificates installed in the system's trust root certificate store. Below are the steps to add them to the appropriate system trust root certificate store. #### Debian/Ubuntu-based VM Place your private CA certificate in `/usr/local/share/ca-certificates/`, ensuring the file contains PEM-encoded certificate(s) and that the file extension is `.crt`. Then, execute the following commands: ```shell sudo apt-get update && sudo apt-get install -y ca-certificates sudo update-ca-certificates ``` ## Disabling TLS verification In rare circumstances, Server Workloads could be secured with certificates that would normally be rejected by full TLS verification. For example, a Server Workload may have a certificate with a mismatch between the service's Fully Qualified Domain Name (FQDN) and its Common Name (CN) or Subject Alternative Name (SAN). Aembit allows the disabling of TLS verification for specific Server Workloads. :warning: Please exercise extreme caution with this configuration. Using certificates that are rejected by full TLS verification and disabling TLS verification are considered poor security practices. To disable TLS verification, toggle the **Forward TLS Verification** option to "None" within the **Server Workload** settings. ![Forward TLS Verification](../../../../../assets/images/forward_tls_verification.png) --- # Aembit Components and Packages URL: https://docs.aembit.io/user-guide/deploy-install/components-packages/ Description: Comparison of Aembit components and packages :::tip[Work in progress] This page is a work in progress. It may contain incomplete or unverified information. Please check back later for updates. If you have feedback or suggestions, please fill out the Aembit New Docs Feedback Survey. Thanks for your patience! ::: --- # Edge Component container image best practices URL: https://docs.aembit.io/user-guide/deploy-install/container-image-best-practices/ Description: Best practices for deploying official Aembit container images Aembit built its official [Aembit container images](https://hub.docker.com/u/aembit) to streamline the deployment process. Aembit provides a [Helm chart for Kubernetes](/astro/user-guide/deploy-install/kubernetes/kubernetes) and a [Terraform module for ECS](/astro/user-guide/deploy-install/serverless/aws-ecs-fargate) that ease deployment in containerized environments. If these are incompatible with your deployment environment, you may run into issues as you hand craft a Kubernetes configuration or an ECS task definition. The details on this page help you as you follow your own path. ## Container user IDs Some container images declare a specific user ID that the containerized application expects to run as. The following table lists Aembit container images and their expected user IDs: | Container Image | User ID | |-----------------------|---------| | `aembit_agent_proxy` | `65534` | | `aembit_sidecar_init` | `26248` | You shouldn't need to specify these user IDs unless you define a pod-level `securityContext/runAsUser` attribute in a Kubernetes deployment or extend the container image in a way that changes the default user ID. 
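To check whether a Client Workload Deployment sets such a pod-level attribute, you can query the pod template with `kubectl`. This is a sketch only; `your-application` is the example Deployment name used elsewhere in this documentation, so substitute your own names. A non-empty result means the override behavior described next applies to you.

```shell
# Print the pod-level runAsUser (empty output means no pod-level value is set)
kubectl get deployment your-application -n <namespace> \
  -o jsonpath='{.spec.template.spec.securityContext.runAsUser}'
```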
If you've specified the wrong user for either the `aembit-agent-proxy` container or the `aembit-init-iptable` container, you'll see a log error message such as: ``` sudo: you do not exist in the passwd database ``` Since the v1.22 release of the Aembit Helm chart, the injected container definitions include `securityContext/runAsUser` attributes that will override any such pod-level attribute. Since the v1.22 release of the Aembit Agent Injector, you will see a warning message when such a mismatch occurs: ``` The injected container (...) is unlikely to run correctly because it will run as UID ... where UID ... is expected. ``` If you see this warning, you must specify the `securityContext/runAsUser` attribute for each Aembit container that you inject into any Client Workload pod that sets a pod-level `securityContext/runAsUser` attribute. ## Client Workload user IDs Transparent Steering relies on the user ID of the process initiating a network connection to exempt the Agent Proxy outbound connections. Therefore, any Client Workload that runs under the `65534` UID (commonly named `nobody`) will also be exempt from Transparent Steering. ## Write-accessible filesystem The `aembit_agent_proxy` container image depends on being able to write to the root filesystem to download your tenant's CA certificate and add it to the trusted certificate bundle. If you disable writing to the root filesystem, Agent Proxy logs an error message similar to the following: ``` Error when fetching token. Will attempt to refresh in 16 seconds. Error: error sending request ... invalid peer certificate: UnknownIssuer ``` ECS and Kubernetes use slightly different letter casing for the same setting: * `readonlyRootFilesystem` on ECS * `readOnlyRootFilesystem` on Kubernetes
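To confirm how the injected Agent Proxy container ended up configured in a running pod, you can inspect the pod spec with `kubectl`. This is a sketch under the assumption that the injected container is named `aembit-agent-proxy` (as referenced above); pod and namespace names are placeholders.

```shell
# Show the effective securityContext of the injected Agent Proxy container
kubectl get pod <pod-name> -n <namespace> \
  -o jsonpath='{.spec.containers[?(@.name=="aembit-agent-proxy")].securityContext}'

# Check whether a read-only root filesystem is enforced anywhere in the pod spec
kubectl get pod <pod-name> -n <namespace> -o yaml | grep -i readonlyrootfilesystem
```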
To further customize your deployments, see the available [optional configurations](#optional-configurations). ## Prerequisites 1. Make sure you run all commands from your local terminal with `kubectl` configured for your cluster. 1. Verify that you have set your current context in Kubernetes correctly: ```shell kubectl config current-context ``` If the context output is incorrect, set it correctly by running: ```shell mark="" kubectl config use-context ``` ## Step 1 - Prepare Edge Components 1. Log into your Aembit tenant and go to **Edge Components -> Deploy Aembit Edge**. 1. In the **Prepare Edge Components** section, [create a new Agent Controller](/astro/user-guide/deploy-install/about-agent-controller) or select an existing one. ![Deploy Aembit Edge Page](../../../../../assets/images/deploy_aembit_edge.png) 1. If the Agent Controller you selected already has a Trust Provider configured, skip ahead to the next section. Otherwise, click **Generate Code**. This creates a temporary `` that Aembit uses to authorize your Agent Controller. ## Step 2 - Install Aembit Edge Helm chart Follow the steps in the **Install Aembit Edge Helm Chart** section: 1. Add the Aembit Helm repository to your local Helm configuration by running: ```shell helm repo add aembit https://helm.aembit.io ``` 1. Install the Aembit Helm chart by running the following command, making sure to replace: * `` with your tenant ID (Find this in the Aembit website URL: `.aembit.io`) * `` with the code you generated in the Aembit web UI if your Agent Controller doesn't have a Trust Provider configured. Also, this is the time to add extra [Helm configuration options](#optional-configurations) to the installation that fit your needs. ```shell {4} ins="" ins="" helm install aembit aembit/aembit \ -n aembit \ --create-namespace \ --set tenant=,agentController.DeviceCode= ``` :::tip To reduce errors, copy the command from the Aembit Web UI for this step, as it populates your `` and `` for you. ![Deploy Aembit Edge Generate Code button](../../../../../assets/images/deploy_aembit_edge-generate-code.png) ::: ## Step 3 - Annotate Client Workloads For Aembit Edge to manage your Client Workloads, you must annotate them with `aembit.io/agent-inject: "enabled"` so that the Aembit Agent Proxy can intercept network requests from them. To add this annotation to your Client Workloads, you can: * Modify your Client Workload's Helm chart by adding the following annotation in the deployment template and applying the changes: ```yaml template: metadata: annotations: aembit.io/agent-inject: "enabled" ``` * If using ArgoCD, update your GitOps repository with the annotation and sync the changes. * Directly modify your deployment YAML files to include the annotation in the pod template metadata section and apply your changes: ```yaml apiVersion: apps/v1 kind: Deployment metadata: name: your-application spec: template: metadata: annotations: aembit.io/agent-inject: "enabled" ``` ## Upgrade the Aembit Edge Helm chart To stay up to date with the latest features and improvements, follow these steps to update and upgrade the Aembit Edge Helm chart: 1. From your local terminal with `kubectl` configured for your cluster, update the Aembit Helm chart repo: ```shell helm repo update aembit ``` 1. Upgrade the Helm chart: ```shell helm upgrade aembit aembit/aembit -n aembit ``` ## Optional configurations The following sections contain optional configurations that you can use to customize your Kubernetes deployments.
### Agent Proxy native sidecar configuration For Kubernetes versions `1.29` and higher, Aembit supports init-container-based Client Workloads by starting the Agent Proxy as part of the pod's init containers (as a native sidecar). To enable native sidecar configurations, do the following: 1. Make sure you add the [required Client Workload annotation](#step-3---annotate-client-workloads). 1. Set the Helm chart value `agentProxy.nativeSidecar=true` during chart installation by adding the following flag: ```shell --set agentProxy.nativeSidecar=true ``` ### Edge Component environment variables The Edge Components you deploy as part of this process have environment variables that you can configure to customize your deployment further. See [Edge Component environment variables reference](/astro/reference/edge-components/edge-component-env-vars) for all available configuration options. ### Aembit Edge Component configurations The Aembit Helm Chart includes configurations that control the behavior of Aembit Edge Components (both Agent Controller and Agent Proxy). See [Helm chart config options](/astro/reference/edge-components/helm-chart-config-options) for all available configuration options. ### Resource Set deployment To deploy a Resource Set using Kubernetes, you need to add the `aembit.io/resource-set-id` annotation to your Client Workload deployment and specify the proper Resource Set ID. For example: ```yaml aembit.io/resource-set-id: f251f0c5-5681-42f0-a374-fef98d9a5005 ``` Once you add the annotation, Aembit Edge injects this Resource Set ID into the Agent Proxy. This configuration enables the Agent Proxy to support Client Workloads in this Resource Set. For more information, see [Resource Sets overview](/astro/user-guide/administration/resource-sets/). ### Delaying pod startup until Agent Proxy has registered By default, Agent Proxy allows Client Workload pods to enter the [`Running`](https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-phase) state as soon as proxying ports become available, even if registration with Aembit Cloud isn't yet complete. While in this pre-registration state, Agent Proxy operates in Passthrough mode and can't inject credentials into Client Workloads. As a result, you may have to retry application requests. To delay the Client Workload pod startup until registration completes, set the `AEMBIT_PASS_THROUGH_TRAFFIC_BEFORE_REGISTRATION` Agent Proxy environment variable to `false`. This causes the `postStart` lifecycle hook to wait until Agent Proxy has registered with the Aembit Cloud service before entering the [`Running`](https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-phase) state. If registration fails to complete within 120 seconds (due to misconfiguration or connectivity issues), the pod fails to start and eventually enters a `CrashLoopBackOff` state. To override how long the Client Workload pods wait during `postStart`, set the Agent Proxy `AEMBIT_POST_START_MAX_WAIT_SEC` environment variable to specify the maximum wait time in seconds. :::important[Important limitation] Due to a [known Kubernetes issue](https://github.com/kubernetes/kubernetes/issues/116032), pod deletion doesn't correctly interrupt the `postStart` hook. As a result, deleting a pod that's waiting for Agent Proxy registration takes the full `AEMBIT_POST_START_MAX_WAIT_SEC` duration, even if you've set the pod's `terminationGracePeriodSeconds` to a lower value.
::: See [Edge Component environment variables reference](/astro/reference/edge-components/edge-component-env-vars) for a description of the `AEMBIT_PASS_THROUGH_TRAFFIC_BEFORE_REGISTRATION` and `AEMBIT_POST_START_MAX_WAIT_SEC` configuration options. --- # Aembit Edge on serverless services URL: https://docs.aembit.io/user-guide/deploy-install/serverless/ Description: Guides and topics about deploying Aembit Edge Components on serverless services, functions, and CI/CD This section covers how to deploy Aembit Edge components in serverless environments to enable secure, identity-based access between workloads. Serverless deployments remove the need to manage underlying infrastructure, providing more scalable and flexible deployment options. The following pages provide information about deploying Aembit Edge on various serverless platforms: - [AWS ECS Fargate](/astro/user-guide/deploy-install/serverless/aws-ecs-fargate) - Deploy Aembit Edge on AWS ECS Fargate - [AWS EKS Fargate](/astro/user-guide/deploy-install/serverless/aws-eks-fargate) - Deploy Aembit Edge on AWS EKS Fargate - [AWS Lambda Container](/astro/user-guide/deploy-install/serverless/aws-lambda-container) - Deploy Aembit Edge in AWS Lambda containers - [GitHub Actions](/astro/user-guide/deploy-install/serverless/github-actions) - Use Aembit Edge with GitHub Actions - [GitLab Jobs](/astro/user-guide/deploy-install/serverless/gitlab-jobs) - Use Aembit Edge with GitLab CI/CD jobs --- # Deploying to AWS ECS Fargate URL: https://docs.aembit.io/user-guide/deploy-install/serverless/aws-ecs-fargate/ Description: How to deploy Aembit Edge Components in an ECS Fargate environment import { Steps } from '@astrojs/starlight/components'; Aembit provides different deployment options that you can use to deploy Aembit Edge Components in your environment. Each of these options provides similar features and functionality. The steps for each of these options, however, are specific to the deployment option you select. This page describes the process to deploy Aembit Edge Components to ECS Fargate using Terraform. To deploy Aembit Edge Components to ECS Fargate, you must follow these steps: 1. [Add a Trust Provider](#step-1---add-a-trust-provider) 1. [Add an Agent Controller](#step-2---add-an-agent-controller) 1. [Modify and deploy Terraform configuration](#step-3---modify-and-deploy-terraform-configuration) To further customize your deployments, see the available [optional configurations](#optional-configurations). ## Before you begin 1. Ensure that Terraform has valid AWS credentials to deploy resources. Terraform doesn't require the AWS CLI but can use its credentials if available. Terraform automatically looks for credentials in environment variables, AWS credentials files, IAM roles, and other sources. For details on configuring authentication, refer to the [AWS Provider Authentication Guide](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#authentication-and-configuration). 1. Verify that you have initialized Terraform and that you have the required permissions to execute the deployment. Go to your Terraform deployment directory for the Client Workload and run the following command: ```shell terraform plan ``` The command should complete without errors. ## Step 1 - Add a Trust Provider You need to create a Trust Provider, or use an existing one, to enable the Agent Controller (created in the next step) to authenticate with Aembit Cloud.
This Trust Provider relies on the AWS Role associated with your application for authentication. 1. Log into your Aembit tenant and go to **Edge Components --> Trust Providers**. 1. Click **+ New**, revealing the **Trust Provider** pop out. 1. Enter a **Name** and optional **Description**. 1. Select **AWS Role** as the **Trust Provider**. 1. Under **Match Rules**, click **+ New Rule** and set the following: 1. **Attribute** - Select **accountId** 1. **Value** - Enter the AWS account ID (without dashes) where your Client Workload is running 1. Click **Save**. ![Add Trust Provider UI](../../../../../assets/images/create-trust-provider-ecs-fargate.png) ## Step 2 - Add an Agent Controller 1. Log into your Aembit tenant and go to **Edge Components --> Agent Controllers**. 1. Click **+ New**, revealing the **Agent Controller** pop out. 1. Enter a **Name** and optional **Description**. 1. Select the **Trust Provider** you created in [Step 1](#step-1---add-a-trust-provider). 1. Click **Save**. ![Add Agent Controller UI](../../../../../assets/images/create-agent-controller-ecs-fargate.png) ## Step 3 - Modify and deploy Terraform configuration 1. Add the Aembit Edge ECS Module to your Terraform code, using the following configuration: ```hcl module "aembit-ecs" { source = "Aembit/ecs/aembit" version = "1.x.y" # Find the latest version at https://registry.terraform.io/modules/Aembit/ecs/aembit/latest aembit_tenantid = "" aembit_agent_controller_id = "" ecs_cluster = "" ecs_vpc_id = "" ecs_subnets = ["","",""] ecs_security_groups = [""] } ``` :::note For additional configuration options, see [Optional configurations](#optional-configurations). ::: 1. Add the Aembit Agent Proxy container definition to your Client Workload Task Definitions. The following code sample shows an example of this by injecting `jsondecode(module.aembit-ecs.agent_proxy_container)` as the first container of the Task definition for your Client Workload. ```hcl resource "aws_ecs_task_definition" "workload_task" { family = "workload_task" container_definitions = jsonencode([ jsondecode(module.aembit-ecs.agent_proxy_container), { name = "workload" ... ``` 1. Add the environment variables required for explicit steering to your Client Workload Task Definitions. For example: ```hcl environment = [ {"name": "http_proxy", "value": module.aembit-ecs.aembit_http_proxy}, {"name": "https_proxy", "value": module.aembit-ecs.aembit_https_proxy} ] ``` 1. Execute `terraform init` to download the Aembit ECS Fargate module. 1. With your Terraform code updated as described, run `terraform apply` or your typical Terraform configuration scripts to deploy Aembit Edge into your AWS ECS Client Workloads. ## Optional configurations The following table lists the configurable variables of the module and their default values. *All variables are required unless marked optional*. | Parameter | Description | Default | |---------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------| | `aembit_tenantid` | The Aembit TenantID with which to associate this installation and Client Workloads. | not set | | `aembit_agent_controller_id` | The Aembit Agent Controller ID with which to associate this installation.
| not set | | `aembit_trusted_ca_certs` | (**Optional**) Additional CA Certificates that the Aembit AgentProxy should trust for Server Workload connectivity. | not set | | `ecs_cluster` | The AWS ECS Cluster into which the Aembit Agent Controller should be deployed. | not set | | `ecs_vpc_id` | The AWS VPC which the Aembit Agent Controller will be configured for network connectivity. This must be the same VPC as your Client Workload ECS Tasks. | not set | | `ecs_subnets` | The subnets which the Aembit Agent Controller and Agent Proxy containers can utilize for connectivity between Proxy and Controller and Aembit Cloud. | not set | | `ecs_security_groups` | The security group which will be assigned to the AgentController service. This security group must allow inbound HTTP access from the AgentProxy containers running in your Client Workload ECS Tasks. | not set | | `agent_controller_task_role_arn` | The AWS IAM Task Role to use for the Aembit AgentController Service container. This role is used for AgentController registration with the Aembit Cloud Service. | `arn:aws:iam::${data.aws_caller_identity.current.account_id}:role/ecsTaskExecutionRole` | | `agent_controller_execution_role_arn` | The AWS IAM Task Execution Role used by Amazon ECS and Fargate agents for the Aembit AgentController Service. | `arn:aws:iam::${data.aws_caller_identity.current.account_id}:role/ecsTaskExecutionRole` | | `log_group_name` | (**Optional**) Specifies the name of an optional log group to create and send logs to for components created by this module. You can set this value to `null`. | `/aembit/edge` | | `agent_controller_image` | The container image to use for the AgentController installation. | not set | | `agent_proxy_image` | The container image to use for the AgentProxy installation. | not set | | `aembit_stack` | The Aembit Stack which hosts the specified Tenant. | `useast2.aembit.io` | | `ecs_task_prefix` | Prefix to include in front of the Agent Controller ECS Task Definitions to ensure uniqueness. | `aembit_` | | `ecs_service_prefix` | Prefix to include in front of the Agent Controller Service Name to ensure uniqueness. | `aembit_` | | `ecs_private_dns_domain` | The Private DNS TLD that will be configured and used in the specified AWS VPC for AgentProxy to AgentController connectivity. | `aembit.local` | | `agent_proxy_resource_set_id` | Associates Agent Proxy with a specific [Resource Set](/astro/user-guide/administration/resource-sets/) | not set | --- # AWS EKS Fargate URL: https://docs.aembit.io/user-guide/deploy-install/serverless/aws-eks-fargate/ Description: Aembit Edge Component deployment considerations in an EKS Fargate environment Aembit provides different deployment options that you can use to deploy Aembit Edge Components in your environment. Each of these options provides similar features and functionality. The steps for each of these options, however, are specific to the deployment option you select. This page describes the extra considerations that apply to AWS EKS Fargate that differ from the standard Kubernetes deployment. AWS Elastic Kubernetes Service (EKS) Fargate is a serverless Kubernetes solution, where EKS automatically provisions and scales the compute capacity for pods. To schedule pods on Fargate in your EKS cluster, instead of on EC2 instances that you manage, you must define a [Fargate profile](https://docs.aws.amazon.com/eks/latest/userguide/fargate-profile.html). 
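For illustration, one way to define such a profile is with an `eksctl` cluster config. The following is a hypothetical sketch; the cluster name, region, profile name, and namespace are placeholders you would adapt to your environment:

```yaml
# Hypothetical eksctl ClusterConfig fragment -- adjust all names to your environment
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster    # placeholder cluster name
  region: us-east-2   # placeholder region
fargateProfiles:
  - name: aembit-workloads
    selectors:
      # Pods created in this namespace are scheduled on Fargate
      - namespace: aembit
```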
Fargate profiles provide a selector based on `namespace` and (optionally) `labels`; pods that match the selector are scheduled on Fargate. ## Deployment considerations In general, the same deployment steps should be undertaken as described in the [Kubernetes](/astro/user-guide/deploy-install/kubernetes/kubernetes) page. However, you must use a namespace that matches the Fargate profile selector so that Aembit schedules Edge Components on Fargate with the Client Workload. You must provide this namespace when deploying the Aembit Edge Helm Chart. For example: ```shell {2} helm install aembit aembit/aembit \ -n \ --create-namespace \ --set ... ``` ## Limitations You must use the [Explicit Steering](/astro/user-guide/deploy-install/advanced-options/agent-proxy/explicit-steering#configure-explicit-steering) feature when deploying in AWS EKS Fargate. This is a limitation of the AWS Fargate serverless environment, which intentionally restricts network configuration, preventing advanced networking features like transparent steering. --- # AWS Lambda Container URL: https://docs.aembit.io/user-guide/deploy-install/serverless/aws-lambda-container/ Description: This page describes the steps required to deploy Aembit Edge Components in an AWS Lambda container environment. Aembit provides several different deployment options that you can use to deploy Aembit Edge Components in your environment. Each of these options provides similar features and functionality; however, the steps for each of these options are specific to the deployment option you select. This page describes the process to deploy Aembit Edge Components to an [AWS Lambda container](https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-lambda-functions-with-container-images.html) environment. ## Deploy Aembit Edge Components ### Topology Aembit Agent Proxies for AWS Lambda containers are deployed within Lambda Containers. They are packaged as [AWS Lambda Extensions](https://docs.aws.amazon.com/lambda/latest/dg/lambda-extensions.html) and are automatically launched by the AWS Lambda Runtime. The deployed Lambda function must connect to an Amazon Virtual Private Cloud (VPC) with access to both the Agent Controller and the Internet. ### VPC For each AWS region hosting your Lambda containers, you must create a VPC (or use an existing one). All Lambda containers in each AWS account/region that include Aembit components must connect to a corresponding VPC in the same region. This VPC must provide: - Access to the Agent Controller. - Access to the Internet. Agent Controllers can either operate directly within this VPC or be located elsewhere, as long as they remain accessible from this VPC. AWS Lambda containers are automatically placed within a VPC's private network. To enable Internet access, traffic from the VPC must pass through a NAT located in the public network. For more information, consult the [Connecting outbound networking to resources in a VPC](https://docs.aws.amazon.com/lambda/latest/dg/configuration-vpc.html#vpc-internet) documentation. ### Agent Controller Deploy the Agent Controller either on a [Virtual Machine](/astro/user-guide/deploy-install/virtual-machine/) or within your [Kubernetes Cluster](/astro/user-guide/deploy-install/kubernetes/kubernetes). ### Lambda Container Packaging The Aembit Edge Components are distributed as part of the Aembit AWS Lambda Extension. All Lambda extensions are incorporated into Lambda containers at build time.
Include the following commands in your Dockerfile to add the extension to your AWS Lambda container image: ```docker COPY --from=aembit/aembit_aws_lambda_extension: /extension/ /opt/extensions ``` Remember to replace `` with the current version available on [DockerHub](https://hub.docker.com/r/aembit/aembit_aws_lambda_extension/tags). ### Lambda Container Deployment Deploy or update your Lambda container: - Specify additional environment variables for your Lambda function. For Agent Controllers with TLS configured: ```shell AEMBIT_AGENT_CONTROLLER=https://:5443 ``` For Agent Controllers without TLS: ```shell AEMBIT_AGENT_CONTROLLER=http://:5000 ``` - Specify `http_proxy` and/or `https_proxy` environment variables to direct HTTP and/or HTTPS traffic through Aembit: ```shell http_proxy=http://localhost:8000 https_proxy=http://localhost:8000 ``` Additional environment variables can be configured to set the Agent Proxy log level, among other settings. Please refer to the list of [available Agent Proxy environment variables](/astro/user-guide/deploy-install/virtual-machine/linux/agent-proxy-install-linux). ## Client Workload identification The most convenient way to identify Lambda container Client Workloads is using [AWS Lambda ARN Client Workload Identification](/astro/user-guide/access-policies/client-workloads/identification/aws-lambda-arn). :::note Please note that if you plan to work with a specific version of a Lambda function or aliases (as opposed to the latest version), you will need to use Qualified ARNs. For more details, refer to the Client Workload Identification article linked above. ::: Alternatively, you can use [Aembit Client ID](/astro/user-guide/access-policies/client-workloads/identification/aembit-client-id) by setting the `CLIENT_WORKLOAD_ID` environment variable. ## Trust Providers The only Trust Provider available for Lambda containers Client Workloads is [AWS Role Trust Provider](/astro/user-guide/access-policies/trust-providers/aws-role-trust-provider). Please refer to the [Lambda Container Support section](/astro/user-guide/access-policies/trust-providers/aws-role-trust-provider#lambda-container-support) for more details about the configuration. ## Resource Set Deployment To deploy a Resource Set using an AWS Lambda Container, you need to specify the `AEMBIT_RESOURCE_SET_ID` environment variable in your Client Workload. This configuration enables the Agent Proxy to support Client Workloads in this Resource Set. ## Lambda Container lifecycle and Workload Events Lambda Containers are paused immediately after the completion of the Lambda function. As a result, workload events may not have enough time to be sent by the Aembit Agent Proxy to Aembit Cloud. These events will be retained by the Aembit Agent Proxy and sent either at the next Lambda function invocation or during the container shutdown process. As a result, it may take longer than in other environments for these workload events to become available in your tenant. ## Configuring TLS Decrypt To utilize TLS decryption in your AWS Lambda container, download and trust the tenant certificate within your AWS Lambda container. Considering that the Lambda container's filesystem is configured to be read-only, Aembit recommends including this step in your build pipeline. Refer to the [Configure TLS Decrypt](/astro/user-guide/deploy-install/advanced-options/tls-decrypt/configure-tls-decrypt) page for comprehensive instructions on configuring TLS Decrypt. 
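As an illustration, a build-time step for trusting the tenant certificate might look like the following Dockerfile sketch. The base image tag and certificate filename are placeholders, and the sketch assumes an Amazon Linux-based Lambda image that provides `update-ca-trust`; if your base image differs, use its own trust-store mechanism instead.

```docker
# Hypothetical example -- base image and certificate filename are placeholders
FROM public.ecr.aws/lambda/python:3.12

# Add the downloaded Aembit tenant certificate to the image's trusted CAs at build time,
# because the Lambda container filesystem is read-only at run time
COPY aembit_tenant_ca.pem /etc/pki/ca-trust/source/anchors/aembit_tenant_ca.pem
RUN update-ca-trust extract

# ... the rest of your function image (application code, Aembit extension COPY, handler) ...
```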
## Performance Starting and shutting down the Aembit Agent Proxy normally takes several seconds, which increases the execution time of your Lambda function by a corresponding amount. ## Limitations Aembit currently supports only the following protocols in AWS Lambda container environments: - HTTP - HTTPS - Snowflake ## Supported phases The Aembit AWS Lambda Extension supports Client Workload identification and credential injection during the following Lambda container lifecycle phases: - [INIT phase](https://docs.aws.amazon.com/lambda/latest/dg/lambda-runtime-environment.html#runtimes-lifecycle-invoke) Supported for internal extensions, function inits, and external extensions executed after the Aembit extension. - [INVOKE phase](https://docs.aws.amazon.com/lambda/latest/dg/lambda-runtime-environment.html#runtimes-lifecycle-ib) Fully supported. --- # GitHub Actions URL: https://docs.aembit.io/user-guide/deploy-install/serverless/github-actions/ Description: This page describes the steps required to deploy Aembit Edge Components in a GitHub environment. Aembit provides several different deployment options you can use to deploy Aembit Edge Components in your environment. Each of these options provides similar features and functionality; however, the steps for each of these options are specific to the deployment option you select. This page describes the process to utilize the Aembit Edge Agent in [GitHub Actions](https://docs.github.com/en/actions/learn-github-actions/understanding-github-actions). ## Configure a Serverless Access Policy To configure your Aembit tenant to support GitHub Actions as a Client Workload: 1. Configure your **Client Workload** to identify the Aembit Agent runtime environment with one or more of the following Client Identification options. * [GitHub ID Token Repository](/astro/user-guide/access-policies/client-workloads/identification/github-id-token-repository) * [GitHub ID Token Subject](/astro/user-guide/access-policies/client-workloads/identification/github-id-token-subject) :::note Please keep the following in mind as you go through the steps: - For **Step 2**: Make sure to copy the provided Client ID and (where appropriate) Audience values for configuration of the Agent command line parameters. - For **Step 3**: Any Credential Provider type can be used; however, some Credential Provider types may require specifying the `--credential_names` parameter when running the Aembit Agent. - For **Step 4**: Any Server Workload type can be used. The `--server_workload_host` and `--server_workload_port` parameters will need to match the values specified. ::: 2. Configure your **Trust Provider** type to [**GitHub Action ID Token**](/astro/user-guide/access-policies/trust-providers/github-trust-provider) to identify and attest the Aembit Agent runtime environment. 3. Configure your **Credential Provider** to specify the credential values that you want to be available in the Serverless runtime environment. 4. Configure your **Server Workload** to specify the service endpoint host and port that you want to utilize in the Serverless runtime environment. 5. Configure your **Access Policy** referencing the Aembit entities from steps 1 - 4, and then click **Save & Activate**. ## Configure for use with a Custom Resource Set To configure GitHub Actions to work with a Custom Resource Set: 1. Open your existing GitHub Actions configuration file. 2.
Go to your Aembit tenant, click on the **Trust Providers** link in the left navigation pane and locate your GitHub Trust Provider in the Custom Resource Set you are working with. 3. In your GitHub Actions configuration file, go to the `env` section for the action step and add both the `AEMBIT_CLIENT_ID` and `AEMBIT_RESOURCE_SET_ID` values. In the example below, notice that the `AEMBIT_CLIENT_ID` and `AEMBIT_RESOURCE_SET_ID` values have been added in the `steps` section. ```yaml jobs: sample: steps: - name: Sample env: AEMBIT_CLIENT_ID: aembit:stack:tenant:identity:github_idtoken:uuid AEMBIT_RESOURCE_SET_ID: 585677c8-9g2a-7zx8-604b-e02e64af11e4 ``` 4. Verify both the `AEMBIT_CLIENT_ID` and `AEMBIT_RESOURCE_SET_ID` environment variables match the values in your Resource Set and Trust Provider in your Aembit tenant. 5. Commit your changes to your GitHub Actions configuration file. ## Deploy the Serverless Script 1. Retrieve the latest available Aembit Agent release. The latest release can be found on the [Agent Releases](https://releases.aembit.io/agent/index.html) page. 2. Include the Aembit Agent within your Serverless environment. This can be accomplished by bundling it within an image or retrieving it dynamically as appropriate for your workload. 3. Configure your Serverless script to call the Aembit Agent with the proper parameters. The example below shows the configuration for **GitHub Actions**. ```yaml # The id-token permissions value must be set to write for retrieval of the GitHub OIDC Identity Token permissions: id-token: write ... jobs: sample: steps: - name: Sample env: # Copy the Client ID value from your Trust Provider to this value AEMBIT_CLIENT_ID: aembit:stack:tenant:identity:github_idtoken:uuid run: | $(./aembit credentials get --client_id $AEMBIT_CLIENT_ID --server_workload_host oauth.sample.com --server_workload_port 443) echo OAuth Token $TOKEN ``` :warning: In the configuration file, replace the `AEMBIT_CLIENT_ID` placeholder with the Client ID value generated on your Trust Provider, and set the Server Workload Host and Server Workload Port values to match your target workload. ## Verify Aembit Agent To verify the Aembit Agent: 1. When downloading the Aembit Agent from the [Agent Releases](https://releases.aembit.io/agent/index.html) page, also download the matching `SHA256SUMS` and `SHA256SUMS.sig` files. 2. Use the `gpg` and `shasum` commands (or similar) to perform a signature/hash verification against the [Aembit Keybase Key](https://keybase.io/aembit). For example: ```shell curl https://keybase.io/aembit/pgp_keys.asc | gpg --import gpg --verify aembit_1.13.0_SHA256SUMS.sig aembit_1.13.0_SHA256SUMS grep $(shasum -a 256 aembit_1.13.0_linux_amd64.zip) aembit_1.13.0_SHA256SUMS ``` --- # GitLab Jobs URL: https://docs.aembit.io/user-guide/deploy-install/serverless/gitlab-jobs/ Description: This page describes the steps required to deploy Aembit Edge Components in a GitLab environment. Aembit provides several different deployment options you can use to deploy Aembit Edge Components in your environment. Each of these options provides similar features and functionality; however, the steps for each of these options are specific to the deployment option you select. This page describes the process to utilize the Aembit Edge Agent in [GitLab Jobs](https://docs.gitlab.com/ee/ci/jobs/). ## Configure a Serverless Access Policy To configure your Aembit Tenant to support GitLab Jobs as a Client Workload: 1.
Configure your **Client Workload** to identify the Aembit Agent runtime environment with one or more of the following Client Identification options. * [GitLab ID Token Namespace Path](/astro/user-guide/access-policies/client-workloads/identification/gitlab-id-token-namespace-path) * [GitLab ID Token Project Path](/astro/user-guide/access-policies/client-workloads/identification/gitlab-id-token-project-path) * [GitLab ID Token Ref Path](/astro/user-guide/access-policies/client-workloads/identification/gitlab-id-token-ref-path) * [GitLab ID Token Subject](/astro/user-guide/access-policies/client-workloads/identification/gitlab-id-token-subject) :::note Please keep the following in mind as you go through the steps: - For **Step 2**: Make sure to copy the provided Client ID and (where appropriate) Audience values for configuration of the Agent command line parameters. - For **Step 3**: Any Credential Provider type can be used. Some may require specifying the `--credential_names` parameter when running the Aembit Agent. - For **Step 4**: Any Server Workload type can be used. The `--server_workload_host` and `--server_workload_port` parameters will need to match the values specified. ::: 2. Configure your **Trust Provider** type to [**GitLab Job ID Token**](/astro/user-guide/access-policies/trust-providers/gitlab-trust-provider) to identify and attest the Aembit Agent runtime environment. 3. Configure your **Credential Provider** to specify the credential values that you want to be available in the Serverless runtime environment. 4. Configure your **Server Workload** to specify the service endpoint host and port that you want to utilize in the Serverless runtime environment. 5. Configure your **Access Policy** and then click **Save & Activate**. ## Configure for use with a Custom Resource Set To configure a GitLab Job to work with a Custom Resource Set: 1. Open your existing GitLab CI configuration file. 2. Go to your Aembit tenant, click on the **Trust Providers** link in the left navigation pane and locate your GitLab Trust Provider in the Custom Resource Set you are working with. 3. In your `.gitlab-ci.yml` file, either: - update the `AEMBIT_CLIENT_ID` and add the `AEMBIT_RESOURCE_SET_ID` environment variables if you are moving to a custom Resource Set; or - add both `AEMBIT_CLIENT_ID` and `AEMBIT_RESOURCE_SET_ID` environment variables if you are just getting started with enabling your workload to use Aembit. In the example below, you see the `AEMBIT_CLIENT_ID` and `AEMBIT_RESOURCE_SET_ID` environment variables have been added to the `variables` section. ```yaml variables: AEMBIT_CLIENT_ID: aembit:stack:tenant:identity:gitlab_idtoken:uuid AEMBIT_RESOURCE_SET_ID: bd886157-ba1d-54x86-9f26-3095b0515278 ``` 4. Verify these environment variables match the values in your Resource Set and Trust Provider in your Aembit tenant. 5. Commit your changes to the GitLab CI configuration file, `.gitlab-ci.yml`. ## Deploy the Serverless Script 1. Retrieve the latest available Aembit Agent release. The latest release can be found on the [Agent Releases](https://releases.aembit.io/agent/index.html) page. 2. Include the Aembit Agent within your Serverless environment. This can be accomplished by bundling it within an image or retrieving it dynamically as appropriate for your workload. 3. Configure your Serverless script to call the Aembit Agent with the proper parameters. The example below shows the configuration for GitLab Jobs.
```yaml sample: variables: # Copy the Client ID value from your Trust Provider to this value AEMBIT_CLIENT_ID: aembit:stack:tenant:identity:gitlab_idtoken:uuid id_tokens: GITLAB_OIDC_TOKEN: # Copy the Audience value from your Trust Provider to this value aud: https://tenant.id.stack.aembit.io script: # Following are samples for OAuth Client Credentials flow, API Key, and Username/Password Credential Provider Types # Please update the --server_workload_host and --server_workload_port values to match your target workloads - $(./aembit credentials get --client_id $AEMBIT_CLIENT_ID --id_token $GITLAB_OIDC_TOKEN --server_workload_host oauth.sample.com --server_workload_port 443) - echo OAuth Token $TOKEN - $(./aembit credentials get --client_id $AEMBIT_CLIENT_ID --id_token $GITLAB_OIDC_TOKEN --server_workload_host apikey.sample.com --server_workload_port 443 --credential_names APIKEY) - echo API Key Example $APIKEY - $(./aembit credentials get --client_id $AEMBIT_CLIENT_ID --id_token $GITLAB_OIDC_TOKEN --server_workload_host password.sample.com --server_workload_port 443 --credential_names USERNAME,PASSWORD) - echo Username Password Example $USERNAME -- $PASSWORD ``` :warning: Update the configuration file as follows: - Replace the AEMBIT CLIENT ID and `aud` placeholders with the values of Client ID and Audience generated on your Trust Provider. - Set the Server Workload Host and Server Workload Port values to your desired values. ## Verify Aembit Agent To verify the Aembit Agent: 1. When downloading the Aembit Agent from the [Agent Releases](https://releases.aembit.io/agent/index.html) page, also download the matching `SHA256SUMS` and `SHA256SUMS.sig` file. 2. Use the `gpg` and `shasum` commands (or similar) to perform a signature/hash verification against the [Aembit Keybase Key](https://keybase.io/aembit). For example: ```shell curl https://keybase.io/aembit/pgp_keys.asc | gpg --import gpg --verify aembit_1.13.0_SHA256SUMS.sig aembit_1.13.0_SHA256SUMS grep $(shasum -a 256 aembit_1.13.0_linux_amd64.zip) aembit_1.13.0_SHA256SUMS ``` --- # Aembit Edge on virtual appliances URL: https://docs.aembit.io/user-guide/deploy-install/virtual-appliances/ Description: Guides and topics about deploying Aembit Edge Components on virtual appliances This section covers how to deploy Aembit Edge Components on virtual appliances. Virtual appliances provide a pre-configured environment for running Aembit Edge, simplifying the deployment process. The following pages provide information about deploying Aembit Edge on virtual appliances: - [Virtual Appliance](/astro/user-guide/deploy-install/virtual-appliances/virtual-appliance) - Guide for deploying Aembit Edge using virtual appliances --- # Virtual Appliance URL: https://docs.aembit.io/user-guide/deploy-install/virtual-appliances/virtual-appliance/ Description: This page describes the steps required to deploy the Aembit Edge components as a virtual appliance. :::note This feature is available as a limited beta only. Please contact your Aembit representative for more information. ::: # The Aembit Edge Components can be deployed as a virtual appliance. This allows more than one Client Workload to use the same set of Edge Components. Aembit provides an OVA file suitable for deployment on a [VMWare ESXi](https://www.vmware.com/products/cloud-infrastructure/esxi-and-esx) server. ## Limitations The virtual appliance deployment model is limited in the following ways: 1. Only explicit steering is supported. 2. Only HTTP(S) and Snowflake traffic is supported. 3. 
Client Workloads may only be identified by source IP. 4. No Trust Providers are currently compatible. 5. Of the current Access Conditions, only the **Aembit Time Condition** is compatible. ## Deployment Instructions For VM-creation details for your specific ESXi version, consult the [vSphere Documentation](https://docs.vmware.com/en/VMware-vSphere/index.html). 1. Download the virtual appliance OVA from the [Virtual Appliance Releases](https://releases.aembit.io/edge_virtual_appliance/index.html). 2. Upload the OVA to your ESXi server. 3. Create a new virtual machine, entering the appropriate configuration values. See the [Configurations](#configurations) section below for details. 4. Deploy the virtual machine. 5. Log into the virtual machine. For login details, please contact your Aembit representative. :::danger Immediately update the `aembit_edge` user password by running the `passwd` command and supplying a new password. ::: ### Device Code Expiration In the event your device code expires before installation is complete, please contact your Aembit representative for assistance. ## Configurations There are two fields that must first be populated for a virtual appliance deployment to succeed: 1. `AEMBIT_TENANT_ID` 2. `AEMBIT_DEVICE_CODE` The virtual appliance deployment uses a subset of the virtual machine deployment options. See the [virtual machine deployment](/astro/user-guide/deploy-install/virtual-machine/) page for a detailed discussion of these options. ## Usage Configure the proxy settings of your Client Workloads to send traffic to the virtual appliance. For more information on configuring the proxy settings of your Client Workload, see [Explicit Steering](/astro/user-guide/deploy-install/advanced-options/agent-proxy/explicit-steering#configure-explicit-steering). --- # Deploying Aembit Edge on VMs URL: https://docs.aembit.io/user-guide/deploy-install/virtual-machine/ Description: Guides and topics about deploying Aembit Edge Components on virtual machines (VMs) You can run Aembit Edge Components on virtual machines (VMs) to enable secure, identity-based access between workloads. When deploying on VMs, you install Agent Controller and Agent Proxy directly onto each machine. After installation, you must register Agent Proxy with an Agent Controller configured with a [Trust Provider](/astro/user-guide/access-policies/trust-providers/) or with your Aembit Tenant using a one-time Device Code. Once deployed, the Agent Proxy intercepts workload traffic, injects credentials, and enforces access policies—without requiring application changes. This section provides installation guides for deploying Aembit Edge Components on VMs in Linux and Windows environments. :::note Aembit recommends deploying Agent Controller and Agent Proxy on standalone VMs and not collocating them on the same VM. See [About Colocating Aembit Edge Components](/astro/user-guide/deploy-install/about-colocating-edge-components) for more info.
::: ## By operating system The following sections provide installation guides by Linux and Windows operating systems: ### Linux installation guides \{#deploy-vm-install-linux\} - [Agent Controller](/astro/user-guide/deploy-install/virtual-machine/linux/agent-controller-install-linux) - [Agent Proxy](/astro/user-guide/deploy-install/virtual-machine/linux/agent-proxy-install-linux) - [Agent Proxy on SELinux or RHEL](/astro/user-guide/deploy-install/virtual-machine/linux/agent-proxy-selinux-config) ### Windows installation guides \{#deploy-vm-install-windows\} - [Agent Controller](/astro/user-guide/deploy-install/virtual-machine/windows/agent-controller-install-windows) - [Agent Proxy](/astro/user-guide/deploy-install/virtual-machine/windows/agent-proxy-install-windows) ## By Edge Component The following sections provide installation guides by Aembit Edge Components ### Agent Controller \{#deploy-vm-install-ac\} - [Linux](/astro/user-guide/deploy-install/virtual-machine/linux/agent-controller-install-linux) - [Windows](/astro/user-guide/deploy-install/virtual-machine/windows/agent-controller-install-windows) ### Agent Proxy \{#deploy-vm-install-ap\} - [Linux](/astro/user-guide/deploy-install/virtual-machine/linux/agent-proxy-install-linux) - [SELinux or RHEL](/astro/user-guide/deploy-install/virtual-machine/linux/agent-proxy-selinux-config) - [Windows](/astro/user-guide/deploy-install/virtual-machine/windows/agent-proxy-install-windows) --- # How to set up Agent Controller on Linux URL: https://docs.aembit.io/user-guide/deploy-install/virtual-machine/linux/agent-controller-install-linux/ Description: How to set up Aembit Agent Controller on Linux import { Steps } from '@astrojs/starlight/components'; Aembit provides many different deployment options you can use to deploy Aembit Edge Components in your environment. Each of these options provide similar features and functionality. The steps for each of these options, however, are specific to the deployment option you select. This page describes the process to deploy Agent Controller to a Linux virtual machine (VM). :::note Aembit recommends deploying Agent Controller and Agent Proxy on standalone VMs and not collocating them on the same VM. See [About Colocating Aembit Edge Components](/astro/user-guide/deploy-install/about-colocating-edge-components) for more info. ::: ## Supported versions Use the following table to make sure that Aembit supports the operating system and platform you're deploying to your VM: | Operating system | Edge Component versions | |---------------------|-----------------------------| | Ubuntu 20.04 LTS | Agent Controller v1.12.878+ | | Ubuntu 22.04 LTS | Agent Controller v1.12.878+ | | Red Hat 8.9 \* | Agent Controller v1.12.878+ | \* See [How to configure Agent Proxy on SELinux or RHEL](/astro/user-guide/deploy-install/virtual-machine/linux/agent-proxy-selinux-config) for more info. ## Install Agent Controller To install Agent Controller, follow these steps: 1. Download the latest [Agent Controller Release](https://releases.aembit.io/agent_controller/index.html). 1. Log on to the remote host with your user: ```shell ssh -i @ ``` 1. Download Agent Controller using the correct ``: ```shell wget https://releases.aembit.io/agent_controller//linux/amd64/aembit_agent_controller_linux_amd64_.tar.gz ``` 1. Unpack the archive: ```shell tar xf aembit_agent_controller_linux_amd64..tar.gz ``` 1. Go to the unpacked directory: ```shell cd aembit_agent_controller_linux_amd64 ``` 1. 
Run the installer to enable Trust Provider-based Agent Controller registration, making sure to replace `` and `` with the values from your Aembit Tenant: ```shell sudo AEMBIT_TENANT_ID= AEMBIT_AGENT_CONTROLLER_ID= ./install ``` Optionally, add any other [Agent Controller environment variables reference](/astro/reference/edge-components/edge-component-env-vars#agent-controller-environment-variables) in the format `ENV_VAR_NAME=myvalue`. :::tip[Trust Providers] If you don't already have a Trust Provider, see [Add Trust Provider](/astro/user-guide/access-policies/trust-providers/add-trust-provider). Popular Trust Providers: - [AWS Metadata Service](/astro/user-guide/access-policies/trust-providers/aws-metadata-service-trust-provider) - [Azure Metadata Service](/astro/user-guide/access-policies/trust-providers/azure-metadata-service-trust-provider) ::: :::caution[Using Device Codes] Aembit doesn't recommend using device codes because they don't give you the same level of control and flexibility in attestation as Trust Providers. However, Kerberos attestation may sometimes require Agent Controllers using a device code. Device codes are a suitable fallback mechanism in environments without a Trust Provider or in non-prod scenarios such as a proof of concept, lab, or demo environment. To use a device code, you must generate a device code in the Aembit website UI and replace `AEMBIT_AGENT_CONTROLLER_ID` with the `AEMBIT_DEVICE_CODE` environmental variable in the preceding command. ::: ### Agent Controller environment variables For a list of all available environment variables for configuring the Agent Controller installer, see [Agent Controller environment variables reference](/astro/reference/edge-components/edge-component-env-vars#agent-controller-environment-variables). :::tip[Security Best Practice] Make sure the Agent Controller can accept connections on port 5000 from Agent Proxies (update your security groups if needed). Because access to Agent Controller is sensitive, *your Agent Controller's port should not be open to the Internet*. ::: ### Uninstall Agent Controller Run the following command to uninstall the previously installed Agent Controller. ```shell sudo ./uninstall ``` --- # How to set up Agent Proxy on a Linux VM URL: https://docs.aembit.io/user-guide/deploy-install/virtual-machine/linux/agent-proxy-install-linux/ Description: How to set up Aembit Agent Proxy on a Linux virtual machine (VM) import { Steps } from '@astrojs/starlight/components'; Aembit provides many different deployment options you can use to deploy Aembit Edge Components in your environment. Each of these options provide similar features and functionality. The steps for each of these options, however, are specific to the deployment option you select. This page describes the process to deploy Agent Proxy to a Linux virtual machine (VM). :::note Aembit recommends deploying Agent Controller and Agent Proxy on standalone VMs and not collocating them on the same VM. See [About Colocating Aembit Edge Components](/astro/user-guide/deploy-install/about-colocating-edge-components) for more info. 
::: ## Supported versions Use the following table to make sure that Aembit supports the operating system and platform you're deploying to your VM: | Operating system | Edge Component versions | |---------------------|-------------------------| | Ubuntu 20.04 LTS | Agent Proxy v1.11.1551+ | | Ubuntu 22.04 LTS | Agent Proxy v1.11.1551+ | | Red Hat 8.9 \* | Agent Proxy v1.11.1551+ | \* See [How to configure Agent Proxy on SELinux or RHEL](/astro/user-guide/deploy-install/virtual-machine/linux/agent-proxy-selinux-config) for more info. ## Install Agent Proxy To install Agent Proxy on Linux, follow these steps: 1. Download the latest [Agent Proxy Release](https://releases.aembit.io/agent_proxy/index.html). 1. Log on to the VM with your username: ```shell ins="" ins="" ins="" ssh -i @ ``` 1. Download the latest released version of Agent Proxy. Make sure to include the `` in the command: ```shell ins="" wget https://releases.aembit.io/agent_proxy//linux/amd64/aembit_agent_proxy_linux_amd64_.tar.gz ``` 1. Unpack the archive using the correct *version number* in the command: ```shell ins="" tar xf aembit_agent_proxy_linux_amd64_.tar.gz ``` 1. Navigate to the unpacked directory: ```shell ins="" cd aembit_agent_proxy_linux_amd64_ ``` 1. Run the Agent Proxy installer, making sure to replace `` address: ```shell ins="" sudo AEMBIT_AGENT_CONTROLLER=http://:5000 ./install ``` Optionally, add any other [Agent Proxy environment variables reference](/astro/reference/edge-components/edge-component-env-vars#agent-proxy-environment-variables) in the format `ENV_VAR_NAME=myvalue`. 1. (Optional) You may optionally use the additional installation environment variable `AEMBIT_DOCKER_CONTAINER_CIDR`. This variable may be set to the CIDR block of the Docker container bridge network to allow handling workloads running in containers on your VM. Your Client Workloads running on your virtual machine should now be able to access server workloads. :::note If you are running Aembit in AWS, you may use the Agent Controller Private IP DNS name as Agent Controller Host (for example, `ip-172-31-3-73.us-west-1.compute.internal`). ::: ## Agent Proxy environment variables For a list of all available environment variables for configuring the Agent Proxy installer, see [Agent Proxy environment variables reference](/astro/reference/edge-components/edge-component-env-vars#agent-proxy-environment-variables). ## Uninstall Agent Proxy Run the following command to uninstall Agent Proxy from Linux VMs: ```shell sudo ./uninstall ``` ## Access Agent Proxy logs To access logs on your Agent Proxy, select the following tab for your operating system: Linux handles Agent Proxy logs with `journald`. To access Agent Proxy logs, run: ```shell journalctl --namespace aembit_agent_proxy ``` Older versions of `journald` do not support namespaces. If the preceding command does not work, you can use the following command: ```shell journalctl --unit aembit_agent_proxy ``` For more information about Agent Proxy log levels, see [Agent Proxy log level reference](/astro/reference/edge-components/agent-log-level-reference#agent-proxy-log-levels) ## Optional configurations The following sections describe optional configurations you can use to customize your Agent Proxy installation: ### Configuring AWS RDS certificates To install all the possible CA Certificates for AWS Relational Database Service (RDS) databases, see [AWS RDS Certificates](/astro/user-guide/deploy-install/advanced-options/agent-proxy/aws-rds). 
### Configuring TLS Decrypt To use TLS decryption on your virtual machine, download the Aembit CA certificate and add it to your trusted CAs. See [About TLS Decrypt](/astro/user-guide/deploy-install/advanced-options/tls-decrypt/) for detailed instructions on how to use and configure TLS decryption on your virtual machine. ### Resource Set deployment If you want to deploy a Resource Set using the Agent Proxy Virtual Machine Installer, you need to specify the `AEMBIT_RESOURCE_SET_ID` environment variable during the Agent Proxy installation. See [Edge Component environment variables reference](/astro/reference/edge-components/edge-component-env-vars) for details. This configuration enables the Agent Proxy to support Client Workloads in this Resource Set. For more info, see [Resource Sets overview](/astro/user-guide/administration/resource-sets/). --- # How to configure Agent Proxy on SELinux or RHEL URL: https://docs.aembit.io/user-guide/deploy-install/virtual-machine/linux/agent-proxy-selinux-config/ Description: How configure Agent Proxy on SELinux or RedHat Enterprise Linux (RHEL) import { Steps } from '@astrojs/starlight/components'; Security Enhanced Linux (SELinux) is a mandatory-access security tool that enables administrators to strictly define how processes are able to interact with system resources like files, directories, and sockets. For a thorough introduction to SELinux, see the [RedHat SELinux page](https://www.redhat.com/en/topics/linux/what-is-selinux) and the [SELinux Wiki](https://selinuxproject.org/page/Main_Page). For SELinux users on RedHat Enterprise Linux, Aembit Edge Components ship with SELinux rules (`.te`) files when deployed to VM environments. Use `.te` files to create a custom SELinux policy. On this page: - How to [create a custom SELinux policy](#create-an-selinux-policy) for Edge Components deployed on a RHEL 8 or RHEL 9 VM. - How to [update your Edge Component's policy](#selinux-policy-updates) in case SELinux raises violations. - How to [migrate your existing Edge Component policy](#edge-component-version-upgrades) when updating the installed version your Edge Component. ## Create an SELinux Policy To configure SELinux to work with Aembit Edge Components, perform the following steps: :::note These steps assume you've already installed an Edge Component on your RHEL virtual machine. If you haven't deployed Aembit Edge Components, please see the [Virtual Machine guide](/astro/user-guide/deploy-install/virtual-machine/) to get started. ::: :::note Aembit recommends switching SELinux to permissive mode before installing a new SELinux policy. You can do this temporarily (until the system reboots) by executing `sudo setenforce 0`. To make the change persistent, you can: 1. Modify `/etc/selinux/config` and set the `SELINUX=` line to `SELINUX=permissive`. 1. Reboot the machine. ::: 1. Install the requisite SELinux packages. ```shell sudo dnf install -y selinux-policy-devel rpm-build ``` 1. Create a new directory to contain the SELinux policy files. ```shell mkdir ~/edge_component_policy cd ~/edge_component_policy ``` :::note The following steps use Agent Proxy as the example application. If you're installing a policy for Agent Controller, replace occurrences of `proxy` with `controller` in script and/or directory names. ::: 1. Use the `selinux/generate_selinux_policy.sh` script inside your Edge Component installer bundle to generate a new SELinux policy for the Edge Component. 
```shell # cwd: ~/edge_component_policy sudo /selinux/generate_selinux_policy.sh # e.g sudo /home/user/aembit_agent_proxy_linux_amd64_1.19.2326/selinux/generate_selinux_policy.sh ``` :::note Your current working directory should now contain a number of new files, including: - `aembit_agent_proxy.te` - `aembit_agent_proxy.if` - `aembit_agent_proxy.fc` - `aembit_agent_proxy.sh` ::: 1. Copy the `.te` file for your RedHat version, located in the Edge Component installer bundle's `selinux` directory, into the directory with the newly generated policy files. :::note[Intended behavior] This step replaces the generated `aembit_agent_proxy.te` file in your working directory with the one provided in the Edge Component installer bundle. ::: ```shell # cwd: ~/edge_component_policy sudo cp /selinux//aembit_agent_proxy.te . # e.g sudo cp /home/user/aembit_agent_proxy_linux_amd64_1.19.2326/selinux/RHEL_9.3/aembit_agent_proxy.te . ``` 1. Install the policy using the generated `aembit_agent_proxy.sh` shell script. ```shell # cwd: ~/edge_component_policy sudo ./aembit_agent_proxy.sh ``` 1. Restart the Edge Component for the policy to take effect. ```shell sudo systemctl restart aembit_agent_proxy # or sudo systemctl restart aembit_agent_controller ``` 1. Verify Agent Proxy is now running under SELinux. ```shell ps -efZ | grep aembit_agent_proxy # Sample output: # system_u:system_r:aembit_agent_proxy_t:s0 [...] /opt/aembit/edge/agent_proxy//bin/aembit_agent_proxy # ^^^^^^^^ ^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^ - SELinux-generated user, role, and type for the Agent Proxy binary ``` After completing the preceding steps, the Edge Component run under SELinux. :::note The rules included in the Edge Component installer packages are configured to support common workloads with default installation parameters. We recommend running Edge Components for 1-2 days in permissive mode in case SELinux raises violations due to custom configurations or unexpected workload interactions. See the [policy update section](#selinux-policy-updates) to learn how to update your Edge Component policy. ::: ## SELinux policy updates SELinux may report violations if an Edge Component is run with non-default installation options or with unique workloads. If this occurs, follow these steps to update the SELinux policy and allow the Edge Component to access the needed resources. :::note Aembit recommends running SELinux in permissive mode while performing the following steps. ::: :::note The following steps use Agent Proxy as the example application. If you're updating a policy for Agent Controller, replace occurrences of `proxy` with `controller` in script and/or directory names. ::: 1. Change to the directory where you initially generated the SELinux policy files for your Edge Component (if you followed along from the [previous section](#create-an-selinux-policy), this was `~/edge_component_policy`). ```shell cd ~/edge_component_policy ``` 1. Update the rules (`.te`) file to account for new violations by running the previously generated installation script with the `--update` flag. ```shell sudo ./aembit_agent_proxy.sh --update ``` 1. Restart the Edge Component for the policy updates to take effect. ```shell sudo systemctl restart aembit_agent_proxy ``` :::note It can be useful in some situations to check for violations without committing to a policy update. 
You can do this with the `ausearch` tool: ```shell # get the last time at which the policy was updated last_update_time=`ls -l --time-style="+%x %T" aembit_agent_proxy.te | awk '{ printf "%s %s", $6, $7 }'` # query SELinux for violations ausearch --start $last_update_time -m avc --raw -se aembit_agent_proxy ``` ::: ## Edge Component version upgrades When installing a new version of an Edge Component that's monitored by SELinux, you may choose to re-use your existing rules (`.te`) file from a previous policy installation, or you can install a new policy from scratch using the `.te` file provided in the Edge Component's installation bundle. Both options lead to a fully functioning SELinux policy. * To create a new policy using the rules (`.te`) file provided in the new Edge Component's installer bundle, follow the steps outlined in the [policy creation](#create-an-selinux-policy) section. * To create a new policy using your existing rules (`.te`) file, follow the steps in the [policy creation](#create-an-selinux-policy) section, but use your previous `.te` file instead of the supplied one in the Edge Component's installation bundle. --- # How to set up Agent Controller on Windows Server URL: https://docs.aembit.io/user-guide/deploy-install/virtual-machine/windows/agent-controller-install-windows/ Description: How to set up Aembit Agent Controller on Windows Server import { Tabs, TabItem } from '@astrojs/starlight/components'; import { Steps } from '@astrojs/starlight/components'; Aembit provides many different deployment options you can use to deploy Aembit Edge Components in your environment. Each of these options provide similar features and functionality. The steps for each of these options, however, are specific to the deployment option you select. This page describes the process to deploy Agent Controller to a Windows Server virtual machine (VM). :::note Aembit recommends deploying Agent Controller and Agent Proxy on standalone VMs and not collocating them on the same VM. See [About Colocating Aembit Edge Components](/astro/user-guide/deploy-install/about-colocating-edge-components) for more info. ::: To install Agent Controller on Windows Server, Aembit provides a Windows installer file (`.msi`).
See [Installation details](#installation-details) for more information about what it does. Aembit supports three primary configurations when you install Agent Controller on Windows Server: - A single Windows Server. - A single Windows Server with Kerberos attestation enabled. See [Kerberos Trust Provider](/astro/user-guide/access-policies/trust-providers/kerberos-trust-provider). - Multiple Windows Servers in a [high availability (HA) configuration](/astro/user-guide/deploy-install/advanced-options/agent-controller/agent-controller-high-availability) using an Active Directory [Group Managed Service Account (gMSA)](https://learn.microsoft.com/en-us/windows-server/identity/ad-ds/manage/group-managed-service-accounts/group-managed-service-accounts/group-managed-service-accounts-overview). Using a gMSA reduces the operational difficulty in managing secrets across multiple Agent Controller hosts. ## Supported versions Use the following table to make sure that Aembit supports the operating system and platform you're deploying to your VM: | Operating system | Edge Component versions | |---------------------|-----------------------------------------------------------| | Windows Server 2019 | Agent Controller v1.21.2101+ | | Windows Server 2022 | Agent Controller v1.21.2101+ | ## Prerequisites Before you install Agent Controller on Windows Server, you must have the following: - Network and system access to download and install software on the Windows Server host. - If installing with Kerberos attestation enabled: - Your Agent Controller Windows Server host joined to an Active Directory (AD) domain. ## Install Agent Controller on Windows Server To install an Aembit Agent Controller on Windows Server: 1. Download the latest release version of the Agent Controller installer from the [Agent Controller releases page](https://releases.aembit.io/agent_controller/index.html), making sure to replace the instances of `` with the latest version in the following command. Note that downloading directly via a browser may result in unexpected behavior. ```powershell ins="" Invoke-WebRequest ` -Uri https://releases.aembit.io/agent_controller//windows/amd64/aembit_agent_controller_windows_amd64_.tar.gz ` -Outfile aembit_agent_controller.msi ``` Next, follow the installation steps in the appropriate tab: 2. Install Agent Controller using the following command. Make sure to replace `` with your Aembit Tenant ID and `` with the ID of the Agent Controller you are configuring. ```powershell ins="" ins="" msiexec /i aembit_agent_controller.msi /l*v installer.log ` AEMBIT_TENANT_ID= ` AEMBIT_AGENT_CONTROLLER_ID= ``` 2. Install the Agent Controller, using the following command. Make sure to replace `` with your Aembit Tenant ID and `` with the ID of the Agent Controller you are configuring. ```powershell ins="" ins="" msiexec /i aembit_agent_controller.msi /l*v installer.log ` AEMBIT_AGENT_CONTROLLER_ID= ` AEMBIT_TENANT_ID= ` AEMBIT_KERBEROS_ATTESTATION_ENABLED=true ``` 2. Make sure to add the [Kerberos Trust Provider](/astro/user-guide/access-policies/trust-providers/kerberos-trust-provider) in your Aembit Tenant. 2. When installing the Agent Proxy, make sure the `AEMBIT_AGENT_CONTROLLER` value uses the DNS name of the Agent Controller service principal. 2. Install Agent Controller, using the following command. Run the `.msi` installer to enable Trust Provider-based Agent Controller registration, making sure to replace `` and `` with the values from your Aembit Tenant. 
To install Agent Controller on Windows Server using a gMSA, you must also set the `SERVICE_LOGON_ACCOUNT` environment variable using [Down-Level Logon Name format](https://learn.microsoft.com/en-us/windows/win32/secauthn/user-name-formats#down-level-logon-name) `SERVICE_LOGON_ACCOUNT=\\`. ```powershell ins="" ins="" ins="" ins="" msiexec /i aembit_agent_controller.msi /l*v installer.log ` AEMBIT_AGENT_CONTROLLER_ID= ` AEMBIT_TENANT_ID= ` AEMBIT_KERBEROS_ATTESTATION_ENABLED=true ` SERVICE_LOGON_ACCOUNT=\$ ``` :::info If the account supplied in `SERVICE_LOGON_ACCOUNT` is not valid, you will receive the following message: > An error occurred while applying security settings. \<`SERVICE_LOGON_ACCOUNT` value> is not a valid user or group. > This could be a problem with the package, or a problem connecting to a domain controller on the network. > Check your network connection and click Retry, or Cancel to end the install. ::: 2. When installing the Agent Proxy, make sure to set the `AEMBIT_AGENT_CONTROLLER` value as the DNS name component of the gMSA service principal. 2. Make sure to add the [Kerberos Trust Provider](/astro/user-guide/access-policies/trust-providers/kerberos-trust-provider) in your Aembit Tenant. ### Agent Controller environment variables For a list of all available environment variables for configuring the Agent Controller installer, see [Agent Controller environment variables reference](/astro/reference/edge-components/edge-component-env-vars#agent-controller-environment-variables). :::tip[Security Best Practice] Make sure the Agent Controller can accept connections on port 5000 from Agent Proxies (update your security groups if needed). Because access to Agent Controller is sensitive, *your Agent Controller's port should not be open to the Internet*. ::: ### (Optional) Verify the service account By default, the Agent Controller service runs as the [`LocalService` account](https://learn.microsoft.com/en-us/windows/win32/services/localservice-account). To verify that the Agent Controller service is running as the expected service account, use the following PowerShell command: ```powershell (Get-WmiObject Win32_Service -Filter "Name='AembitAgentController'").StartName ``` If you don't see the **Aembit Agent Controller** service running or if it's running as a different user, [uninstall Agent Controller](#uninstall-agent-controller) and retry these instructions. ## Uninstall Agent Controller To uninstall Agent Controller from your Windows Server, use Windows built-in **Add/Remove Programs** feature like you'd normally uninstall any other program or app from Windows. ## Limitations Agent Controller on Windows has the following limitations: - **Changing the service logon account after installation isn't supported**: If you need to change to a different Windows service account, you must uninstall and reinstall the Agent Controller on your Windows Server host. - **Changing the TLS strategy may not work as expected**: Because of the way Aembit stores and preserves parameters, changing from a TLS configuration using customer certificates to a configuration using Aembit-managed certificates may not work as expected. To remediate: 1. Uninstall the Agent Controller. 2. Delete the `C:\ProgramData\Aembit\AgentController` directory and its contents. 3. Reinstall the Agent Controller. 
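The preceding remediation can be scripted; the following PowerShell sketch is illustrative only. It assumes the original `aembit_agent_controller.msi` installer is still in your working directory, uses `msiexec /x` as the command-line equivalent of Add/Remove Programs, and the Tenant ID and Agent Controller ID values are placeholders.

```powershell
# Sketch only -- placeholder values; msiexec returns immediately, so wait for each
# step to finish (for example, via Start-Process -Wait) before running the next one.
$TenantId          = "your-tenant-id"             # replace with your Aembit Tenant ID
$AgentControllerId = "your-agent-controller-id"   # replace with your Agent Controller ID

# 1. Uninstall the existing Agent Controller
msiexec /x aembit_agent_controller.msi /l*v uninstall.log /quiet

# 2. Remove the preserved configuration so stale TLS parameters aren't reused
Remove-Item -Recurse -Force 'C:\ProgramData\Aembit\AgentController'

# 3. Reinstall with the desired TLS configuration
msiexec /i aembit_agent_controller.msi /l*v installer.log `
  AEMBIT_TENANT_ID=$TenantId `
  AEMBIT_AGENT_CONTROLLER_ID=$AgentControllerId
```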
## Installation details | **Attribute** | **Value** | |---------------------|-----------------------------------------------------------------------| | **Service name** | `AembitAgentController` | | **Binary location** | `C:\Program Files\Aembit\AgentController\aembit_agent_controller.exe` | | **Log files** | `C:\ProgramData\Aembit\AgentController\Logs` | ## Additional resources - [Kerberos Trust Provider](/astro/user-guide/access-policies/trust-providers/kerberos-trust-provider) --- # How to set up Agent Proxy on Windows Server URL: https://docs.aembit.io/user-guide/deploy-install/virtual-machine/windows/agent-proxy-install-windows/ Description: How to set up Aembit Agent Proxy on Windows Server import { Steps } from '@astrojs/starlight/components'; Aembit provides many different deployment options you can use to deploy Aembit Edge Components in your environment. Each of these options provide similar features and functionality. The steps for each of these options, however, are specific to the deployment option you select. This page describes the process to deploy Agent Proxy to a Windows Server virtual machine (VM). :::note Aembit recommends deploying Agent Controller on a standalone virtual machine and not collocating it with Agent Proxy. Although it's possible to deploy both components on the same virtual machine, this isn't recommended because you could end up managing many Agent Controllers. ::: ## Supported versions Use the following table to make sure that Aembit supports the operating system and platform you're deploying to your VM: | Operating system | Edge Component versions | |---------------------|-------------------------| | Windows Server 2019 | Agent Proxy v1.20.2559+ | | Windows Server 2022 | Agent Proxy v1.20.2559+ | ## Install Agent Proxy To install Agent Proxy on Windows Server, follow these steps: 1. Download the latest [Agent Proxy Release](https://releases.aembit.io/agent_proxy/index.html) using the following PowerShell command. Note that downloading directly via a browser may result in unexpected behavior. ```powershell ins="" ins="" Invoke-WebRequest -Uri -Outfile aembit_agent_proxy_windows_amd64_.msi ``` 1. Install Agent Proxy using `msiexec`: Optionally, append any [Agent Proxy environment variables](#agent-proxy-environment-variables) in the following format separated by spaces: `ENV_VAR_NAME=myvalue ENV_VAR_NAME=myvalue` ```powershell ins="" ins="" msiexec /i aembit_agent_proxy_windows_amd64_.msi /l*v install.log ``` 1. Configure an explicit proxy on your Windows Server VM. Common methods include Group Policy Objects (GPO), Proxy Auto-Configuration (PAC) files, system-level proxy settings, and many others. Since HTTP proxy configurations may have specific requirements, consult your IT administrator to determine the most appropriate method for your environment. :::note[Windows privileges] If you're configuring an explicit proxy in a managed environment (like Windows Active Directory), you may need elevated administrator privileges to do so. Check with your IT administrator to make sure you have the privileges you need to configure an explicit proxy. Otherwise, you shouldn't need elevated privileges when configuring an explicit proxy directly on your Client Workload. ::: :::note[Troubleshooting] If you encounter the following error during installation or while upgrading, you may have one or more malformed environment variable values: > There is a problem with this Windows Installer package. A program run as part of the setup did not finish as expected. 
Contact your support personnel or package vendor. Check to make sure your environment variables are set correctly. See [Agent Proxy environment variables](/astro/reference/edge-components/edge-component-env-vars) for details. ::: :::note If you are running Aembit in AWS, you may use the Agent Controller Private IP DNS name as the Agent Controller Host (for example, `ip-172-31-3-73.us-west-1.compute.internal`). ::: ### Agent Proxy environment variables For a list of all available environment variables for configuring the Agent Proxy installer, see [Agent Proxy environment variables reference](/astro/reference/edge-components/edge-component-env-vars#agent-proxy-environment-variables). ### Uninstall Agent Proxy To uninstall Agent Proxy from Windows Server VMs, follow these steps: 1. As an administrator, open the Command Prompt or PowerShell. 1. Run the following command to uninstall Agent Proxy: ``` msiexec /uninstall aembit_agent_proxy_windows_amd64_<version>.msi /l*v uninstall.log /quiet ``` :::note[Removing logs] Uninstalling Agent Proxy does not remove logs. If desired, delete logs from `C:\ProgramData\Aembit\AgentProxy\Logs`. ::: ## Access Agent Proxy logs Agent Proxy writes logs to `C:\ProgramData\Aembit\AgentProxy\Logs\log`. For more information about Agent Proxy log levels, see [Agent Proxy log level reference](/astro/reference/edge-components/agent-log-level-reference#agent-proxy-log-levels). ## Optional configurations The following sections describe optional configurations you can use to customize your Agent Proxy installation: ### Configuring AWS RDS certificates To install all the possible CA Certificates for AWS Relational Database Service (RDS) databases, see [AWS RDS Certificates](/astro/user-guide/deploy-install/advanced-options/agent-proxy/aws-rds). ### Configuring TLS Decrypt To use TLS decryption on your virtual machine, download the Aembit CA certificate and add it to your trusted CAs. See [About TLS Decrypt](/astro/user-guide/deploy-install/advanced-options/tls-decrypt/) for detailed instructions on how to use and configure TLS decryption on your virtual machine. ### Resource Set deployment If you want to deploy a Resource Set using the Agent Proxy Virtual Machine Installer, you need to specify the `AEMBIT_RESOURCE_SET_ID` environment variable during the Agent Proxy installation. See [Edge Component environment variables reference](/astro/reference/edge-components/edge-component-env-vars) for details. This configuration enables the Agent Proxy to support Client Workloads in this Resource Set. For more info, see [Resource Sets overview](/astro/user-guide/administration/resource-sets/). --- # User Guide/discovery ====================== # Managing discovered workloads URL: https://docs.aembit.io/user-guide/discovery/managing-discovered-workloads/ Description: How to manage workloads found through Aembit Discovery :::note This is a beta feature and may be subject to changes. ::: This section explains how to manage discovered workloads—view their details, convert them to managed, ignore them, and restore them if needed. Once Aembit has completed the discovery process, you can find your discovered workloads in the **Discovered tab**. Use the dropdown in the top-right corner to filter workloads by state: - **Discovered**: Workloads Aembit has found but aren't yet managed. - **Ignored**: Workloads marked as irrelevant, which no longer appear in the main list. Following that, Aembit displays a table of all discovered workloads.
The table includes the following columns: - **Name**: The name of the workload. - **Client Workload Identifiers** (on the Client Workload page): The identification type for Client Workloads (for example, Kubernetes service account name). - **Host/Port/Protocol** (on the Server Workload page): The service endpoint details for Server Workloads. - **Source**: Indicates where Aembit discovered the workload. - **Activity**: Displays the activity of the workload. ![Discovered Workload Page](../../../../assets/images/discovery_unmanaged_workloads.png) ## View workload details 1. Click any row in the **Discovered** list to go to the details page for that specific workload. 1. On this page, information that Aembit fetches from the source auto-populates the fields, including **display name**, **identification type**, **service endpoint details**, and more. On the right side of the page, Aembit displays the associated metadata for the workload. 1. (Optional) If you need more detailed data, click **View JSON** to access the full JSON data associated with the workload. This allows you to inspect all the metadata and relevant details for the workload in its raw format. ![Details Page of Discovered Workload](../../../../assets/images/discovery_workload_details.png) ## Convert a discovered workload to managed After [reviewing a workload's details](#view-workload-details) and deciding to manage it, follow these steps to convert that workload to **managed**: 1. On the workload you want to convert, click **Manage**. This opens the workload in **edit mode**, allowing you to make any necessary changes to its configuration or settings. 1. Once you're satisfied with the details, click **Save** to complete the management process. Once saved, the workload moves from the **Discovered tab** to the **Managed tab**, where you can use it in Access Policies. You can then return to the **Managed tab** to create and apply Access Policies for the workload. ## Ignore a discovered workload If you find a workload unnecessary or irrelevant and no longer want to see it in the **Discovered tab**, follow these steps: 1. Select the workload you want to ignore. 1. Click **Ignore**. Aembit moves the workload to the **Ignored** list, removing it from the **Discovered** list. This helps keep your Discovered list focused on relevant workloads. To restore a workload to discovered, see the [next section](#restore-ignored-workloads). ## Restore ignored workloads To view ignored workloads, switch the dropdown list in the top-right corner of the page to **Ignored**. This displays all workloads that you've ignored. To restore workloads to the **Discovered tab**, you can either: - Go to a workload's details page, and click **Restore**. - Select the checkbox for one or more workloads from the **Ignored** list, and click **Restore** at the top-right side of the page. This action moves workloads back to the **Discovered tab**, making them eligible for management again. ![Restore Ignored Workloads](../../../../assets/images/discovery_restore_ignored_workloads.png) --- # Discovery Sources overview URL: https://docs.aembit.io/user-guide/discovery/sources/ Description: Available Discovery Sources in Aembit :::note This is a beta feature and may be subject to changes. ::: In this section, you can explore all the Discovery Sources from which Aembit gathers data for discovery. The following list includes the existing Discovery Sources, each representing a different way Aembit collects workload information.
- [Aembit Edge](/astro/user-guide/discovery/sources/aembit-edge) - [Wiz](/astro/user-guide/discovery/sources/wiz) --- # Aembit Edge Discovery Source URL: https://docs.aembit.io/user-guide/discovery/sources/aembit-edge/ Description: How Aembit discovers workloads using the Aembit Edge Discovery Source :::note This is a beta feature and may be subject to changes. ::: This page explains how Aembit Edge discovers workloads. Aembit Edge enables efficient workload discovery within your environments, helping you maintain visibility and manage access across your infrastructure. **Edge Discovery** identifies workloads in [environments](/astro/reference/edge-component-supported-versions) where you've deployed **Aembit Edge**. By collecting communication event data, Aembit Edge helps identify workloads and categorize them as either **Managed** or **Discovered** based on predefined criteria. To perform **Edge Discovery**, deploy **Aembit Edge** to your desired environments. **Aembit Edge** automatically collects event data about workload communication and uses this data to categorize workloads: Aembit tracks and manages workloads that meet the predefined criteria, while it marks others as **Discovered** for further review. Aembit Edge helps simplify the management of workloads by automatically identifying which workloads are active and how they're interacting, providing a comprehensive view of your infrastructure. ### How to perform Edge Discovery 1. **Deploy Aembit Edge** to your environment. - Ensure you set up your environment to support Aembit Edge. This involves configuring the necessary infrastructure and permissions for the Edge Components. 1. **Ensure your environment generates event data.** - Aembit Edge relies on event data from your environment to detect workloads and monitor their interactions. Make sure your environment is actively generating the necessary data for discovery. 1. **Wait for the system to collect the data and categorize the workloads.** - Aembit Edge automatically starts collecting the event data and categorizes workloads as either **Managed** or **Discovered**, depending on whether they meet predefined criteria. 1. **Log out and log back into the Aembit Tenant to trigger the discovery process and refresh the workload data.** - Logging out and back in makes sure that the system updates with the most recent data and categorization of workloads. Once discovery is complete, you can view the workloads that Aembit discovered and categorized as **Discovered** in the **Client Workloads** or **Server Workloads** sections. After completing these steps, you'll have improved visibility into the workloads operating in your environment. To interact with or manage the discovered workloads, visit [Interacting with Discovered Workloads](/astro/user-guide/discovery/managing-discovered-workloads) for more details. --- # Wiz Discovery Source URL: https://docs.aembit.io/user-guide/discovery/sources/wiz/ Description: How Aembit discovers workloads using the Wiz Discovery Source :::note This is a beta feature and may be subject to changes. To enable Discovery with Wiz, contact Aembit by completing the [Contact Us form](https://aembit.io/contact/). ::: This page explains how Aembit uses the Wiz Discovery Source to identify workloads in your cloud environments.
The [Wiz Discovery Integration](/astro/user-guide/administration/discovery/integrations/wiz) allows Aembit to pull workload data from your Wiz tenant through the Wiz Integration API. Once integrated, Aembit automatically fetches workload data from your Wiz tenant and imports it as discovered workloads—draft entities you can review and optionally manage within Aembit. **Wiz Discovery** simplifies the process of discovering workloads in cloud environments by seamlessly syncing data from Wiz into Aembit. This integration provides Aembit with a comprehensive, up-to-date view of your workloads, enabling you to apply Access Policies and make informed decisions about managing your cloud resources. ### How to perform Wiz discovery 1. **Configure the Wiz Integration**: Follow the [Wiz Discovery Integration](/astro/user-guide/administration/discovery/integrations/wiz) guide to configure the integration. This step makes sure that Aembit can securely connect to your Wiz environment and begin syncing data. 2. **Sync the Data**: After saving the integration, Aembit starts syncing data from Wiz. The initial sync may take longer than subsequent syncs, as it pulls in all relevant workload data from Wiz. 3. **Review Discovered Workloads**: After syncing, Aembit displays the discovered workloads in the **Discovered** tab. These workloads aren't yet managed by Aembit, so you can review them and categorize them according to your security and Access Policies. :::note After the initial sync, Aembit compares future syncs to the previously retrieved data. If you add new workloads in Wiz, Aembit won't detect them until they become available in the Wiz environment. ::: By following these steps, Aembit fetches and syncs the latest workload data from your Wiz environment. This streamlines the process of managing workloads in the cloud. After syncing, Aembit categorizes the workloads as discovered and displays them for further review. You can then choose to manage them, apply Access Policies, or take other appropriate actions. To interact with or manage the discovered workloads, visit [Interacting with Discovered Workloads](/astro/user-guide/discovery/managing-discovered-workloads) for more details. --- # User Guide/access Policies ============================ # Access Conditions URL: https://docs.aembit.io/user-guide/access-policies/access-conditions/ Description: This document provides a high-level description of Access Conditions Access Conditions are rules and conditions used to evaluate an Access Policy and determine whether a Client Workload should be granted access to a service (e.g. a Server Workload). Whenever a request is received for access to an Access Policy and/or Credential, these Access Conditions validate and verify the request. If validation passes, the request is granted; however, if validation fails, the request is denied. In order for an Access Condition to validate and verify a request, an existing integration must already be established, and an Access Policy must already have been created. --- # Access Conditions for GeoIP Restriction URL: https://docs.aembit.io/user-guide/access-policies/access-conditions/aembit-geoip/ Description: This document provides a description on how to set up and configure an Access Condition for a GeoIP Restriction. # You may configure an Access Condition to enable GeoIP restrictions. This can be useful if you would like to only grant access to Client Workloads from specific locations.
A GeoIP restriction ensures any request received from a locale that is not already specified will be blocked. For example, if you would like to allow requests from a specific country or region, you may simply add an Access Condition for that region or area. ## Creating a GeoIP Access Condition To create a GeoIP Restriction Access Condition, perform the steps listed below. 1. Log into your Aembit tenant using your login credentials. 2. When your credentials have been authenticated and you are logged into your tenant, you are directed to the main dashboard page. Click on **Access Conditions** in the left navigation pane. You will see a list of existing Access Conditions. ![Access Conditions List](../../../../../assets/images/access-conditions-existing-list.png) 3. Click on the **New Access Condition** button. An Access Condition dialog window appears. ![Access Condition Dialog Window - Empty](../../../../../assets/images/access-conditions-empty-dialog-window.png) 4. In the Access Condition dialog window, enter information in the following fields: - **Name** - Name of the Access Condition. - **Description** - An optional text description of the Access Condition. - **Integration** - A drop-down menu that enables you to select the type of integration you would like to create. Select **Aembit GeoIP Condition** from the drop-down menu. ![Access Condition Dialog Window - GeoIP Selected](../../../../../assets/images/access-conditions-geoip-selected.png) 5. In the Conditions -> Location section, click on the **Country** drop-down menu to select the country you would like to use for your Access Condition. 6. After selecting a **Country** from the drop-down menu, you will see an expanded drop-down menu where you may select a **Subdivision** you want to use for that country. A Subdivision may be a region, state, province, or other territory that you would like to use for further Access Condition scoping. ![Access Condition Dialog Window - Country and Subdivision Selected](../../../../../assets/images/access-conditions-geoip-country-selected.png) :::note You may select more than one Subdivision for a country by clicking on the **+** icon. ::: 7. Click **Save**. Your new Aembit GeoIP Access Condition now appears on the main Access Conditions page. ![Access Conditions List With GeoIP Listed](../../../../../assets/images/access-conditions-list-with-geoip.png) ## GeoIP Accuracy Limitations and Best Practices for Cloud Data Centers When configuring GeoIP-based access conditions, it is important to know the limitations in geolocation accuracy, especially for workloads hosted in cloud data centers such as AWS, Azure, Google Cloud, and others. Due to the dynamic and shared nature of cloud infrastructure, geolocation services often provide lower confidence levels for specific subdivisions (e.g., states, provinces) or cities for cloud-based IP addresses. As a result, Aembit recommends customers limit GeoIP conditions to the country level for workloads in cloud data centers. This approach ensures more reliable geolocation data while still providing geographic-based access control. Using subdivisions or cities for cloud-hosted workloads can result in access failures if the geolocation confidence falls below acceptable thresholds. --- # Aembit Time Condition URL: https://docs.aembit.io/user-guide/access-policies/access-conditions/aembit-time-condition/ Description: This page describes how to create an Access Condition for a specific Time Condition. 
## Introduction One type of Access Condition you may create in your Aembit tenant is a Time Condition. This is especially useful if you would like to only grant access to Client Workloads during specific periods of time (days/hours). The section below describes the required steps to set up and configure a Time Condition Access Condition. ## Creating a Time Condition Access Condition To create a Time Condition Access Condition, perform the steps below. 1. Log into your Aembit tenant using your login credentials. 2. When your credentials have been authenticated and you are logged into your tenant, you are directed to the main dashboard page. Click on **Access Conditions** in the left navigation pane. You will see a list of existing Access Conditions (in this example, no Access Conditions have been created). ![Access Conditions List - Blank](../../../../../assets/images/access_conditions_blank.png) 3. Click on the **New Access Condition** button. An Access Condition dialog window appears. ![Access Condition Dialog Window - Empty](../../../../../assets/images/access-condition-time-condition-dialog-window.png) 4. In the Access Condition dialog window, enter information in the following fields: - **Name** - Name of the Access Condition. - **Description** - An optional text description of the Access Condition. - **Integration** - A drop-down menu that enables you to select the type of integration you would like to create. Select **Aembit Time Condition** from the drop-down menu. ![Access Condition Dialog Window - Time Condition Selected](../../../../../assets/images/access-condition-time-condition-integration-selected.png) 5. In the Conditions section, click on the **Timezone** drop-down menu to select the timezone you would like to use for your Access Condition. 6. Click on the **+** icon next to each day you would like to use in your Time Condition configuration. :::note At least one time condition is required. ::: ![Access Condition Dialog Window - Time Condition Completed](../../../../../assets/images/access-condition-dialog-window-time-condition-completed.png) 7. Click **Save**. Your new Aembit Time Condition Access Condition now appears on the main Access Conditions page. ![Access Condition Main Page - Time Condition Listed](../../../../../assets/images/access-condition-main-page-new-time-condition.png) --- # Access Condition for CrowdStrike URL: https://docs.aembit.io/user-guide/access-policies/access-conditions/crowdstrike/ Description: This page describes how to create an Access Condition for a CrowdStrike integration. ## Introduction If you have an existing CrowdStrike integration and would like to create an Access Condition for this integration, you may create this Access Condition using your Aembit tenant. The section below describes the required steps to set up and configure an Access Condition for a CrowdStrike integration. ## Creating an Access Condition for a CrowdStrike Integration To create an Access Condition for a CrowdStrike integration, perform the steps listed below. 1. Log into your Aembit tenant using your login credentials. 2. When your credentials have been authenticated and you are logged into your tenant, you are directed to the main dashboard page. Click on **Access Conditions** in the left navigation pane. You will see a list of existing Access Conditions (in this example, no Access Conditions have been created). ![Access Conditions List - Blank](../../../../../assets/images/access_conditions_blank.png) 3. Click on the **New Access Condition** button.
An Access Condition dialog window appears. ![Access Condition Dialog Window - Empty](../../../../../assets/images/access-condition-time-condition-dialog-window.png) 4. In the Access Condition dialog window, enter information in the following fields: - **Name** - Name of the Access Condition. - **Description** - An optional text description of the Access Condition. - **Integration** - A drop-down menu that enables you to select the type of integration you would like to create. Select your CrowdStrike integration from the drop-down menu. In this example, **Test CrowdStrike Condition** is selected. ![Access Condition Dialog Window - CrowdStrike Selected](../../../../../assets/images/access-condition-dialog-window-test-crowdstrike-selected.png) 5. In the **Conditions** section, click on the desired toggle buttons you would like to set for the Access Condition. There are three different toggle buttons you may select: - **Restrict Reduced Functionality Mode** - This toggle ensures the CrowdStrike Agent on the Host reports that it is not in Reduced Functionality Mode. - **Match Hostname** - This toggle ensures the HostName reported by the CrowdStrike Agent matches the HostName retrieved by the Aembit Agent Proxy. - **Match Serial Number** - This toggle ensures the Host Serial Number reported by the CrowdStrike Agent matches the Host Serial Number retrieved by the Aembit Agent Proxy. 6. In the **Conditions - Time** section, enter the time period you would like to use to restrict Client Workloads that were last seen prior to the specified time span. 7. When you are finished, click **Save**. Your new Access Condition for the CrowdStrike integration appears on the main Access Conditions page. ![Access Conditions Main Page - CrowdStrike Listed](../../../../../assets/images/access-condition-main-page-crowdstrike-listed.png) --- # Access Condition integrations overview URL: https://docs.aembit.io/user-guide/access-policies/access-conditions/integrations/ Description: Overview of Access Condition integrations and how they work This section covers Access Condition integrations, which allow Aembit to leverage external security platforms to enhance access decisions based on security context. Access Condition integrations allow you to use security information from third-party platforms when evaluating access requests. This enables you to make more informed access decisions based on security posture, compliance status, and other contextual factors. The following Access Condition integrations are available: - [CrowdStrike Integration](/astro/user-guide/access-policies/access-conditions/integrations/crowdstrike) - Use security information from CrowdStrike to inform access decisions - [Wiz Integration](/astro/user-guide/access-policies/access-conditions/integrations/wiz) - Use security information from Wiz to inform access decisions --- # CrowdStrike Integration URL: https://docs.aembit.io/user-guide/access-policies/access-conditions/integrations/crowdstrike/ Description: This page describes how to integrate CrowdStrike with Aembit. :::note The CrowdStrike Integration feature is a paid feature. To enable CrowdStrike integration, please contact Aembit by completing the [Contact Us form](https://aembit.io/contact/). ::: # CrowdStrike is a cybersecurity platform that provides cloud workload and endpoint security, threat intelligence, and cyberattack response services to businesses and enterprises.
While Aembit provides workload identity and access management, integrating with a 3rd party service, such as CrowdStrike, enables businesses to prevent Server Workload access from Client Workloads that do not meet an expected state. If the Client Workload environment is not in this state, workload access will not be authorized. :::note A specific expected state is defined as a configured set of conditions as defined in one or more Aembit access condition rules. For example, in CrowdStrike, this may be when an agent is operating in Reduced Functionality Mode. ::: ## CrowdStrike Falcon Sensor The CrowdStrike Falcon Sensor is a lightweight, real-time, threat intelligence application installed on client endpoints that reviews processes and programs to detect suspicious activity or anomalies. To integrate CrowdStrike Falcon with Aembit Cloud, you will need to: - create a new API key - create a new CrowdStrike integration ### Create a new CrowdStrike OAuth2 API Client To create a new CrowdStrike OAuth2 API Client: 1. Generate an API key from the CrowdStrike website (for example `https://falcon.us-2.crowdstrike.com/api-clients-and-keys/clients` ). Note that URLs may change over time, therefore, you should always use the latest URLs listed on the CrowdStrike site. 2. In the Create API Client dialog, enter the following information: - Name - Description (optional) ![Create a new CrowdStrike OAuth2 API Client](../../../../../../assets/images/create_api_key.png) 3. Click on the **Hosts** checkbox in the Read column to enable the Hosts -> Read permission. 4. Click the **Create** button to generate your new API client. 5. You will see a dialog appear with the following information: - Client ID - Secret - Base URL :::note It is important that you copy this information and store it in a safe location. You will need this information later when you configure your CrowdStrike integration in your Aembit cloud tenant. ::: ![API Client Created](../../../../../../assets/images/api_client_created.png) 6. Once you have copied the API client information, click **Done** to close the dialog. Now that you have created your new API client, you will need to add this information to your Aembit Cloud tenant by following the steps described below. ### Create a new CrowdStrike -> Aembit integration To integrate CrowdStrike with your Aembit Cloud tenant: 1. Sign into your Aembit Cloud tenant. 2. Click on the **Access Conditions** page in the left navigation page. You should see a list of existing Access Conditions. In this example, there are no existing access conditions. ![Access Conditions page](../../../../../../assets/images/access_conditions_blank.png) 3. Click on the **Create an Integration** button. The main Integrations page is displayed. ![Integrations Page](../../../../../../assets/images/integrations_page.png) 4. Select the **CrowdStrike** Integration tile. 5. On the Aembit Integrations page, configure your CrowdStrike Integration by entering the values you just copied in the fields below. - **Name** - The name of the Integration you want to create. - **Description (optional)** - An optional text description for the Integration. - **Endpoint** - The *Base URL* value taken from the values you copied when generating your CrowdStrike API key. - **Oauth Token Configuration information:** - **Token Endpoint** - The endpoint for your token. The value entered should be: *BaseURL + “/oauth2/token"* - **Client ID** - The *Client ID* value taken from the values you copied when generating your CrowdStrike API key. 
- **Client Secret** - The *Client Secret* value taken from the values you copied when generating your CrowdStrike API key. :::note You can retrieve the correct BaseURL by referring to your [API Client page](https://falcon.us-2.crowdstrike.com/api-clients-and-keys/clients), and additionally, in the [BaseURLs](https://falcon.us-2.crowdstrike.com/documentation/page/a2a7fc0e/crowdstrike-oauth2-based-apis#k9578c40) section of the CrowdStrike API documentation. ::: ![Integration Example](../../../../../../assets/images/integration_example.png) 7. Click the **Save** button when finished. Your CrowdStrike Integration is saved and will then appear on the Integrations page. --- # Wiz Integration URL: https://docs.aembit.io/user-guide/access-policies/access-conditions/integrations/wiz/ Description: This page describes how to integrate Wiz with Aembit. :::note The Wiz Integration feature is a paid feature. To use the Wiz Integration feature, please contact Aembit by completing the [Contact Us form](https://aembit.io/contact/). ::: # The Wiz Cloud Security Platform provides a security analysis service, including inventory enumeration and asset information for identification of customer assets and vulnerabilities. In particular, Wiz provides an Integration API which can be accessed via an OAuth2 Client Credentials Flow and can return an Inventory result set on demand, including Kubernetes Clusters, Deployments, and Vulnerabilities. ## Wiz Integration API To integrate Wiz with Aembit, you must already have a Wiz API client set up and configured. When setting up your Wiz API client, make sure you request the following information from your Wiz account representative (you will need this information later when integrating with Aembit): - OAuth2 Endpoint URL - Client ID - Client secret - Audience (this is required and the value is expected to be `wiz-api`) ## Kubernetes/Helm/Agent Proxy Configuration For the Wiz integration to work correctly, Aembit needs to receive a unique Provider ID that can be compared/matched against the Kubernetes Clusters returned by the Wiz Integration API. For example, in an AWS EKS Cluster, the output should look similar to the example below: `arn:aws:eks:region-code:111122223333:cluster/my-cluster` To use this sample value, update your Aembit Edge Helm Chart deployment with the following parameter values: - **name:** agentProxy.env.KUBERNETES_PROVIDER_ID - **value:** arn:aws:eks:region-code:111122223333:cluster/my-cluster These parameters instruct the Aembit Edge components to configure the Agent Proxy containers with an environment variable named `KUBERNETES_PROVIDER_ID` with the value indicated. :::note This Wiz integration supports Agent Proxy versions 1.8.1203 and higher. ::: ### Create a new Wiz -> Aembit integration Once you have set up your Wiz API client and are ready to integrate Wiz with your Aembit Cloud tenant, follow the steps listed below. 1. Sign into your Aembit tenant. 2. Click on the **Access Conditions** page in the left navigation page. You should see a list of existing Access Conditions. In this example, there are no existing access conditions. ![Access Conditions page](../../../../../../assets/images/access_conditions_blank.png) 3. At the top of the page, in the *Access Conditions* tab, select **Integrations**, and then select **New**. An Integrations page appears showing the types of integrations you can create. Currently, there are two integration types available: Wiz or CrowdStrike. 
![Main Integrations Page](../../../../../../assets/images/integrations_page.png) 4. Select the **Wiz Integration API** tile. You will see the *Wiz Integration* page. ![Wiz Integration Page](../../../../../../assets/images/wiz_integration_page.png) 5. On this page, enter the following values from your Wiz API client (these are the values you saved earlier when creating your Wiz API client). - **Name** - The name of the Integration you want to create. - **Description (optional)** - An optional text description for the Integration. - **Endpoint** - The *Base URL* value taken from the values you copied when creating your Wiz API key. - **Sync Frequency**- The amount of time (interval) between synchronization attempts. This value can be between 5 minutes up to 1 hour. - **Oauth Token Configuration information:** - **Token Endpoint** - The endpoint for your token. - **Client ID** - The *Client ID* value. - **Client Secret** - The *Client Secret* value. - **Audience** - This value should be set to `wiz-api` as recommended by the Wiz Integration API documentation. 7. Click the **Save** button when finished. Your Integration is saved and will then appear on the Integrations page. :::note After the next sync attempt, the status will be updated to show success/failure details. ::: --- # Access Condition for Wiz URL: https://docs.aembit.io/user-guide/access-policies/access-conditions/wiz/ Description: This page describes how to create an Access Condition for a Wiz integration. ## Introduction If you have an existing Wiz integration and would like to create an Access Condition for this integration, you may create this Access Condition using your Aembit tenant. The section below describes the required steps to set up and configure an Access Condition for a Wiz integration. ## Creating an Access Condition for a Wiz Integration To create an Access Condition for a Wiz integration, perform the steps listed below. 1. Log into your Aembit tenant using your login credentials. 2. When your credentials have been authenticated and you are logged into your tenant, you are directed to the main dashboard page. Click on **Access Conditions** in the left navigation pane. You will see a list of existing Access Conditions (in this example, no Access Conditions have been created) ![Access Conditions - Existing Access Conditions](../../../../../assets/images/access_conditions_wiz_existing_access_conditions.png) 3. Click on the **New Access Condition** button. An Access Condition dialog window appears. ![Access Conditions Dialog Window - Empty](../../../../../assets/images/access_conditions_wiz_dialog_window_empty.png) 4. In the Access Condition dialog window, enter information in the following fields: - **Name** - Name of the Access Condition. - **Description** - An optional text description of the Access Condition. - **Integration** - A drop-down menu that enables you to select the type of integration you would like to create. Select your existing Wiz integration from the drop-down menu. 5. In the **Conditions** section, click on the **Container Cluster Connected** toggle if you want to block Client Workloads that Wiz reports are not container cluster connected. 6. In the **Conditions - Time** section, enter the duration of time you would like to use for restricting Client Workloads in Kubernetes Clusters that have not been seen recently. :::note If you would like to have a full day as the time duration, Aembit recommends using 26 hours to handle the different system synchronizations. 
::: ![Access Conditions Dialog Window - Filled Out](../../../../../assets/images/access_conditions_wiz_dialog_window_wiz_selected_filled_out.png) 7. When finished, Click **Save**. Your new Access Condition for the Wiz integration will appear on the main Access Conditions page. ![Access Conditions List With New Wiz Access Condition](../../../../../assets/images/access_conditions_wiz_success_listed.png) --- # Access Policy advanced options URL: https://docs.aembit.io/user-guide/access-policies/advanced-options/ Description: Advanced options for Aembit Access Policies This section covers advanced options for Access Policies in Aembit, providing more sophisticated ways to configure and automate your access policies. The following pages provide information about advanced Access Policy options: - [Scaling Aembit with Terraform](/astro/user-guide/access-policies/advanced-options/terraform) --- # Scaling Aembit with Terraform URL: https://docs.aembit.io/user-guide/access-policies/advanced-options/terraform/ Description: Information and guides about scaling Aembit with Terraform :::tip[Work in progress] This page is a work in progress. It may contain incomplete or unverified information. Please check back later for updates. If you have feedback or suggestions, please fill out the Aembit New Docs Feedback Survey. Thanks for your patience! ::: --- # Configuration with Terraform URL: https://docs.aembit.io/user-guide/access-policies/advanced-options/terraform/terraform-configuration/ Description: How to use the Aembit Terraform Provider to configure Aembit Cloud resources import { Steps } from '@astrojs/starlight/components'; Aembit has released a Terraform Provider in the [Terraform Registry](https://registry.terraform.io/providers/Aembit/aembit/latest) that enables users to configure Aembit Cloud resources in an automated manner. ## Configuration Configuring the Aembit Terraform provider requires two steps: 1. Create or update the Terraform configuration to include the Aembit provider. 2. Specify the Aembit provider authentication configuration. a. Aembit recommends using Aembit integrated authentication for dynamic retrieval of the Aembit API Access Token. This can be done by specifying the Aembit Edge SDK Client ID from an appropriately configured Aembit Trust Provider. b. For development and testing purposes, users can specify an Aembit Tenant ID and Token for short-term access. Additional details for how to perform each of these steps can be found in the [Provider Documentation](https://registry.terraform.io/providers/Aembit/aembit/latest/docs) section of the Aembit Terraform provider page. ## Resources and Data Sources The Aembit [Terraform Provider](https://registry.terraform.io/providers/Aembit/aembit/latest) enables users to create, update, import, and delete Aembit Cloud resources using terraform manually or via CI/CD workflows. For example, users can configure GitHub Actions or Terraform Workspaces to utilize the Aembit Terraform provider and manage Aembit Cloud resources on demand to best serve their Workload purposes. Detailed instructions for using the Aembit Terraform Provider can be found in the [Terraform Registry](https://registry.terraform.io/providers/Aembit/aembit/latest/docs). 
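The Terraform Registry documentation linked above covers the provider block and its authentication attributes. As a rough orientation only, once that configuration is in place the day-to-day workflow is the standard Terraform CLI cycle; the sketch below shows nothing Aembit-specific beyond the comments.

```powershell
# Standard Terraform workflow once the Aembit provider is configured
# (see the Provider Documentation link above for the provider block itself).
terraform init      # downloads the Aembit provider from the Terraform Registry
terraform validate  # checks the configuration for syntax and internal consistency
terraform plan      # previews the Aembit Cloud resources to be created or changed
terraform apply     # applies the changes to your Aembit Tenant
```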
--- # Client Workloads URL: https://docs.aembit.io/user-guide/access-policies/client-workloads/ Description: This document provides a high-level description of Client Workloads This section covers Client Workloads in Aembit, which are the applications or services that need to access Server Workloads using credentials managed by Aembit. The following pages provide information about Client Workload identification methods: - [Aembit Client ID](/astro/user-guide/access-policies/client-workloads/identification/aembit-client-id) - [AWS Lambda ARN](/astro/user-guide/access-policies/client-workloads/identification/aws-lambda-arn) - [Multiple Client Workload IDs](/astro/user-guide/access-policies/client-workloads/identification/client-workload-multiple-ids) - [GitHub ID Token Repository](/astro/user-guide/access-policies/client-workloads/identification/github-id-token-repository) - [GitHub ID Token Subject](/astro/user-guide/access-policies/client-workloads/identification/github-id-token-subject) - [GitLab ID Token Namespace Path](/astro/user-guide/access-policies/client-workloads/identification/gitlab-id-token-namespace-path) - [GitLab ID Token Project Path](/astro/user-guide/access-policies/client-workloads/identification/gitlab-id-token-project-path) - [GitLab ID Token Ref Path](/astro/user-guide/access-policies/client-workloads/identification/gitlab-id-token-ref-path) - [GitLab ID Token Subject](/astro/user-guide/access-policies/client-workloads/identification/gitlab-id-token-subject) - [Hostname](/astro/user-guide/access-policies/client-workloads/identification/hostname) - [Kubernetes Pod Name Prefix](/astro/user-guide/access-policies/client-workloads/identification/kubernetes-pod-name-prefix) - [Kubernetes Pod Name](/astro/user-guide/access-policies/client-workloads/identification/kubernetes-pod-name) - [Process Name](/astro/user-guide/access-policies/client-workloads/identification/process-name) --- # Client Workload Identification URL: https://docs.aembit.io/user-guide/access-policies/client-workloads/identification/ Description: This document provides a high-level description of Client Workload Identification This section covers the various methods available for identifying Client Workloads in Aembit. Proper identification is crucial for establishing trust and ensuring secure access between workloads. 
The following pages describe the different identification methods available: - [Aembit Client ID](/astro/user-guide/access-policies/client-workloads/identification/aembit-client-id) - [AWS Lambda ARN](/astro/user-guide/access-policies/client-workloads/identification/aws-lambda-arn) - [Multiple Client Workload IDs](/astro/user-guide/access-policies/client-workloads/identification/client-workload-multiple-ids) - [GitHub ID Token Repository](/astro/user-guide/access-policies/client-workloads/identification/github-id-token-repository) - [GitHub ID Token Subject](/astro/user-guide/access-policies/client-workloads/identification/github-id-token-subject) - [GitLab ID Token Namespace Path](/astro/user-guide/access-policies/client-workloads/identification/gitlab-id-token-namespace-path) - [GitLab ID Token Project Path](/astro/user-guide/access-policies/client-workloads/identification/gitlab-id-token-project-path) - [GitLab ID Token Ref Path](/astro/user-guide/access-policies/client-workloads/identification/gitlab-id-token-ref-path) - [GitLab ID Token Subject](/astro/user-guide/access-policies/client-workloads/identification/gitlab-id-token-subject) - [Hostname](/astro/user-guide/access-policies/client-workloads/identification/hostname) - [Kubernetes Pod Name Prefix](/astro/user-guide/access-policies/client-workloads/identification/kubernetes-pod-name-prefix) - [Kubernetes Pod Name](/astro/user-guide/access-policies/client-workloads/identification/kubernetes-pod-name) - [Process Name](/astro/user-guide/access-policies/client-workloads/identification/process-name) --- # Aembit Client ID URL: https://docs.aembit.io/user-guide/access-policies/client-workloads/identification/aembit-client-id/ Description: This document outlines the Aembit Client ID method for identifying Client Workloads. # The Aembit Client ID method serves as a fallback for Client Workload identification when other suitable methods are unavailable. With this method, Aembit Cloud generates a unique ID, which is then provisioned to the Client Workload. ## Applicable Deployment Type This method is suitable for Aembit Edge-based deployments. ## Configuration ### Aembit Cloud 1. Create a new Client Workload. 2. Choose "Aembit Client ID" for client identification. 3. Complete the remaining fields. 4. Copy the newly generated ID. 5. Save the Client Workload. ![Aembit Client ID](../../../../../../assets/images/client_identification_aembit_client_id.png) ### Client Workload #### Virtual Machine Deployment During Agent Proxy installation, specify the `CLIENT_WORKLOAD_ID` environment variable. ```shell CLIENT_WORKLOAD_ID=<client workload id> AEMBIT_TENANT_ID=<tenant id> AEMBIT_AGENT_CONTROLLER_ID=<agent controller id> ./install ``` #### Kubernetes Add the `aembit.io/agent-inject` annotation to your Client Workload. See the example below:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
      annotations:
        aembit.io/agent-inject: "enabled"
        aembit.io/client-id: "7e75e718-7634-480b-9f7b-a07bb5a4f11d"
```
--- # AWS Lambda ARN URL: https://docs.aembit.io/user-guide/access-policies/client-workloads/identification/aws-lambda-arn/ Description: This document outlines how to identify Client Workloads using AWS Lambda ARN for AWS Lambda container deployments. # The AWS Lambda ARN Client Workload identification method is applicable only to AWS Lambda container deployments. Aembit utilizes the native AWS identifier (Lambda ARN) to identify and distinguish Client Workloads.
## Applicable Deployment Type This method is suitable for Aembit Edge-based deployments. ## Configuration ### Aembit Cloud 1. Create a new Client Workload. 2. Choose **AWS Lambda ARN** for client identification. 3. In the **Value** field, enter the AWS Lambda ARN. It should be provided in the following format: `arn:aws:lambda:<aws-region>:<acct-id>:function:<function-name>` ### Using Versions When working with AWS Lambda ARN, it’s crucial to understand the two types of ARNs: Qualified ARN and Unqualified ARN. Each serves a specific purpose, and understanding their differences is key. For detailed information, refer to the official [AWS Documentation](https://docs.aws.amazon.com/lambda/latest/dg/configuration-versions.html#versioning-versions-using). **Unqualified ARN**: Used for the latest version of a Lambda function. Example: `arn:aws:lambda:aws-region:acct-id:function:helloworld` **Qualified ARN**: Used for a specific version of a Lambda function or [aliases](https://docs.aws.amazon.com/lambda/latest/dg/configuration-aliases.html). Example: `arn:aws:lambda:aws-region:acct-id:function:helloworld:42` If you need to work with a Qualified ARN, you must create a Client Workload that uses a wildcard to handle multiple versions. For instance: `arn:aws:lambda:aws-region:acct-id:function:helloworld:*`. ### Finding the AWS Lambda ARN You can find the list of Lambda functions via the AWS CLI by executing: `aws lambda list-functions --region us-east-2` This command will return all the Lambda-related information, including the Lambda ARN, which is available under the `FunctionArn` field. --- # Client Workload multiple identifiers URL: https://docs.aembit.io/user-guide/access-policies/client-workloads/identification/client-workload-multiple-ids/ Description: This page describes how to use multiple client identifiers for identifying Client Workloads. In less complex environments, you may uniquely identify Client Workloads using a single client identifier, such as: - Kubernetes Pod name prefix - Process name - Source IP In complex environments that span multiple clouds, networks, and Kubernetes clusters, relying on a single client identifier may no longer be sufficient. For example, you might encounter multiple Kubernetes clusters with pods sharing the same name prefix. Different Virtual Machines might run processes with identical names, or multiple Virtual Machines might share the same private IP. To ensure Client Workloads are uniquely identified, and to enable the creation of accurate Access Policies targeting the correct workloads, Aembit recommends employing multiple client identifiers. ## Configuration Client Workload configurations support the addition of multiple identifiers. ![Client Workload multiple identifiers](../../../../../../assets/images/client-workload-multiple-ids.png) For example, effective combinations could include: - Hostname and Process name - Kubernetes namespace and Kubernetes Pod prefix These combinations facilitate precise identification (such as using Kubernetes Pod prefix and Process name) while ensuring global uniqueness within your organization by incorporating additional identifiers, whether secondary, tertiary, or beyond. --- # GitHub ID Token Repository URL: https://docs.aembit.io/user-guide/access-policies/client-workloads/identification/github-id-token-repository/ Description: This page describes how the GitHub ID Token Repository method identifies Client Workloads in Aembit.
This Client Workload identification method is specifically designed for [GitHub Action deployments](/astro/user-guide/deploy-install/serverless/github-actions). **The GitHub ID Token Repository** identification method allows you to identify GitHub workflows based on their repository origin. Aembit achieves this using the **repository** claim within the OIDC token issued by GitHub Actions. ## Applicable Deployment Type This method is suitable for GitHub-based CI/CD Workflow deployments. ## Configuration ### Aembit Cloud 1. Create a new Client Workload. 2. Choose **GitHub ID Token Repository** for client identification. 3. Identify the repository where your workflow is located. Copy this full repository name and use it in the **Value** field according to the format below. - **Format** - `{organization}/{repository}` for organization-owned repositories or `{account}/{repository}` for user-owned repositories. - **Example** - user123/another-project ### Finding the GitHub ID Token Repository: - Navigate to your project on GitHub. - Locate the repository name displayed at the top left corner of the page, in the format mentioned above. ![Repository name on GitHub](../../../../../../assets/images/github_repository.png) --- # GitHub ID Token Subject URL: https://docs.aembit.io/user-guide/access-policies/client-workloads/identification/github-id-token-subject/ Description: This page describes how the GitHub ID Token Subject method identifies Client Workloads in Aembit. This Client Workload identification method is specifically designed for [GitHub Action deployments](/astro/user-guide/deploy-install/serverless/github-actions). **The GitHub ID Token Subject** identification method allows you to identify GitHub workflows based on their repository and triggering event. Aembit achieves this using the **subject** claim within the OIDC token issued by GitHub Actions. ## Applicable Deployment Type This method is suitable for GitHub-based CI/CD Workflow deployments. ## Configuration ### Aembit Cloud 1. Create a new Client Workload. 2. Choose **GitHub ID Token Subject** for client identification. 3. Construct a subject manually using the format specified below and use it in the **Value** field. The GitHub ID Token Subject method provides advanced workflow identification capabilities by allowing you to identify Client Workloads based on repository origin, triggering events (like pull requests), branches, and more. The following example is for a pull request triggered workflow: - **Format** - repo:`{orgName}/{repoName}`:pull_request - **Example** - repo:my-org/my-repo:pull_request For more subject claims and examples, refer to the [GitHub OIDC Token Documentation](https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/about-security-hardening-with-openid-connect#example-subject-claims). ### Finding the GitHub ID Token Subject: You can reconstruct the subject claim as follows: 1. Identify the repository: Navigate to your project on GitHub. Locate the repository name displayed at the top left corner of the page. 2. Determine filtering criteria: Choose the specific element you want to use for precise workflow selection: a deployment environment (e.g., "production"), a triggering event (e.g., "pull_request" or "push"), or a specific branch or tag name. 3. Combine the information: Assemble the subject using the format: `repo:{organization}/{repository}:{filter}`, where `{filter}` is the filtering criterion you chose in step 2 (for example, `pull_request`). Alternatively, you can inspect the GitHub OIDC token to extract the **subject** claim. For further details, please contact Aembit.
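The page above notes that you can inspect the GitHub OIDC token directly to find the **subject** claim. The sketch below shows one way to do that inspection, assuming you already hold the raw token in a `$token` variable (for example, captured temporarily for debugging); it performs plain base64url decoding of the JWT payload and does not validate the token's signature.

```powershell
# Minimal sketch: decode the payload of a JWT held in $token to read its claims.
# This is generic base64url decoding; it does not verify the token's signature
# and should only be used for debugging.
$payload = $token.Split('.')[1]
$padded  = $payload.Replace('-', '+').Replace('_', '/')
switch ($padded.Length % 4) { 2 { $padded += '==' } 3 { $padded += '=' } }
$claims = [Text.Encoding]::UTF8.GetString([Convert]::FromBase64String($padded)) | ConvertFrom-Json
$claims.sub          # for example: repo:my-org/my-repo:pull_request
$claims.repository   # for example: my-org/my-repo
```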
--- # GitLab ID Token Namespace Path URL: https://docs.aembit.io/user-guide/access-policies/client-workloads/identification/gitlab-id-token-namespace-path/ Description: This page describes how the GitLab ID Token Namespace Path method identifies Client Workloads in Aembit. # This Client Workload identification method is specifically designed for [GitLab Jobs deployments](/astro/user-guide/deploy-install/serverless/gitlab-jobs). **The GitLab ID Token Namespace Path** identification method allows you to identify GitLab jobs based on their project owner. Aembit utilizes the **namespace_path** claim within the OIDC token issued by GitLab. ## Applicable Deployment Type This method is suitable for GitLab-based CI/CD Workflow deployments. ## Configuration ### Aembit Cloud 1. Create a new Client Workload. 2. Choose **GitLab ID Token Namespace Path** for client identification. 3. Determine whether your workflow resides under a GitLab group or your user account. Copy the group name or username and use it in the **Value** field. - **Format** - The group or username - **Example** - my-group ### Finding the GitLab ID Token Namespace Path: - Navigate to **Projects** on GitLab. - If the project is group-owned, go to the **All** tab and locate your project. The Namespace Path is displayed before the slash (/) in the project name. - If the project is user-based, enter your GitLab username in the **Value** field. ![GitLab Namespace Path](../../../../../../assets/images/gitlab_path.png) --- # GitLab ID Token Project Path URL: https://docs.aembit.io/user-guide/access-policies/client-workloads/identification/gitlab-id-token-project-path/ Description: This page describes how the GitLab ID Token Project Path method identifies Client Workloads in Aembit. # This Client Workload identification method is specifically designed for [GitLab Jobs deployments](/astro/user-guide/deploy-install/serverless/gitlab-jobs). **The GitLab ID Token Project Path** identification method allows you to identify GitLab jobs based on their project location. Aembit utilizes the **project_path** claim within the OIDC token issued by GitLab. ## Applicable Deployment Type This method is suitable for GitLab-based CI/CD Workflow deployments. ## Configuration ### Aembit Cloud 1. Create a new Client Workload. 2. Choose **GitLab ID Token Project Path** for client identification. 3. Identify the project where your workflow is located. Copy the full project path and use it in the **Value** field according to the format below. - **Format** - `{group}/{project}` - **Example** - my-group/my-project ### Finding the GitLab ID Token Project Path: - Navigate to the **Projects** on GitLab and go to the **All** tab. Locate your project and copy the full displayed project path in the format specified above. ![GitLab Project Path](../../../../../../assets/images/gitlab_path.png) --- # GitLab ID Token Ref Path URL: https://docs.aembit.io/user-guide/access-policies/client-workloads/identification/gitlab-id-token-ref-path/ Description: This page describes how the GitLab ID Token Ref Path method identifies Client Workloads in Aembit. # This Client Workload identification method is specifically designed for [GitLab Jobs deployments](/astro/user-guide/deploy-install/serverless/gitlab-jobs). **The GitLab ID Token Ref Path** identification method allows you to identify GitLab jobs based on the triggering branch or tag name. Aembit utilizes the **ref_path** claim within the OIDC token issued by GitLab. 
Combine this method with additional Client Workload identification methods, such as project path for repository identification. ## Applicable Deployment Type This method is suitable for GitLab-based CI/CD Workflow deployments. ## Configuration ### Aembit Cloud 1. Create a new Client Workload. 2. Choose **GitLab ID Token Ref Path** for client identification. 3. Construct a ref path manually using the format specified below and use it in the **Value** field. - **Format** - `refs/{type}/{name}`, where `{type}` can be either `heads` for branches or `tags` for tags, and `{name}` is the branch name or tag name used in the reference. - **Example** - refs/heads/feature-branch-1 ### Finding the GitLab ID Token Ref Path: You can reconstruct the ref path claim as follows: 1. Determine ref type: Identify whether the workflow was triggered by a branch (then ref_type is heads) or a tag (ref_type is tags). 2. Get the ref: Find the specific branch name (e.g., main) or tag name (e.g., v1.1.5). Check your workflow configuration or, if accessible, the GitLab UI for triggering event details. 3. Combine the information: Assemble the ref path using the format: `refs/{type}/{name}`. Alternatively, you can inspect the GitLab OIDC token to extract the **ref_path** claim. For further details, please contact Aembit. --- # GitLab ID Token Subject URL: https://docs.aembit.io/user-guide/access-policies/client-workloads/identification/gitlab-id-token-subject/ Description: This page describes how the GitLab ID Token Subject method identifies Client Workloads in Aembit. # This Client Workload identification method is specifically designed for [GitLab Jobs deployments](/astro/user-guide/deploy-install/serverless/gitlab-jobs). **The GitLab ID Token Subject** identification method allows you to identify GitLab jobs based on their group, project, and triggering branch or tag. Aembit achieves this using the **subject** claim within the OIDC token issued by GitLab. Combine this method with additional Client Workload identification techniques, such as project path and reference identification. ## Applicable Deployment Type This method is suitable for GitLab-based CI/CD Workflow deployments. ## Configuration ### Aembit Cloud 1. Create a new Client Workload. 2. Choose **GitLab ID Token Subject** for client identification. 3. Construct a subject manually using the format specified below and use it in the **Value** field. - **Format** - `project_path:{group}/{project}:ref_type:{type}:ref:{branch_name}`, where `type` can be either `branch` (for a branch-triggered workflow) or `tag` (for a tag-triggered workflow). - **Example** - project_path:my-group/my-project:ref_type:branch:ref:feature-branch-1 ### Finding the GitLab ID Token Subject: You can reconstruct the subject claim as follows: 1. Identify the project path: Navigate to **Projects** on GitLab and go to the **All** tab. Locate your project and copy the full displayed project path (e.g., my-group/my-project). 2. Determine ref type: Identify whether the workflow was triggered by a branch (then ref_type is branch) or a tag (ref_type is tag). 3. Get the ref: Find the specific branch name (e.g., main) or tag name (e.g., v1.2.0). Check your workflow configuration or, if accessible, the GitLab UI for triggering event details. 4. Combine the information: Assemble the subject using the format: `project_path:{group}/{project}:ref_type:{type}:ref:{branch_name}`. Alternatively, you can inspect the GitLab OIDC token to extract the **subject** claim. For further details, please contact Aembit.
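As a small illustration of step 4 above, the sketch below assembles the expected subject value from its parts using the format documented on this page; the group, project, and branch names are hypothetical placeholders.

```powershell
# Illustrative only: build the expected GitLab ID Token Subject from its parts.
$group   = 'my-group'
$project = 'my-project'
$refType = 'branch'            # 'branch' for branch-triggered jobs, 'tag' for tag-triggered jobs
$refName = 'feature-branch-1'
"project_path:${group}/${project}:ref_type:${refType}:ref:${refName}"
# Output: project_path:my-group/my-project:ref_type:branch:ref:feature-branch-1
```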
---

# Hostname

URL: https://docs.aembit.io/user-guide/access-policies/client-workloads/identification/hostname/

Description: This document describes how the Hostname method identifies Client Workloads in Aembit for Virtual Machine deployments.

The Hostname Client Workload identification method is applicable to Virtual Machine deployments and utilizes the hostname of the machine (which you can retrieve with the `hostname` command) to identify and distinguish Client Workloads.

## Applicable Deployment Type

This method is suitable for Aembit Edge-based deployments.

## Configuration

### Aembit Cloud

1. Create a new Client Workload.
2. Choose **Hostname** for client identification.
3. In the **Value** field, enter the hostname of the virtual machine where the Client Workload is running.

### Finding the Hostname

- Open a terminal on your Linux VM.
- Use the `hostname -f` command to retrieve its hostname.

Alternatively, you can often find the hostname in the Virtual Machine's configuration settings or system information.

### Uniqueness

Ensure the hostname is unique within your organization to avoid unintentionally matching other Virtual Machines. If necessary, consider combining Hostname with other client identifiers. Please consult the [Client Workload multiple identifiers](/astro/user-guide/access-policies/client-workloads/identification/client-workload-multiple-ids) documentation to enhance uniqueness.

---

# Kubernetes Pod Name

URL: https://docs.aembit.io/user-guide/access-policies/client-workloads/identification/kubernetes-pod-name/

Description: This document describes how the Kubernetes Pod Name method identifies Client Workloads in Aembit.

In Kubernetes environments, each pod is assigned a unique name within its namespace. The Kubernetes Pod Name identification method allows you to target a specific individual pod by specifying its exact name. This is particularly useful for managing access for standalone pods that are not part of a deployment or for pods with unique names that need to be individually managed.

## Applicable Deployment Type

This method is suitable for Edge-based deployments.

## Configuration

### Aembit Cloud

1. Create a new Client Workload.
2. Choose **Kubernetes Pod Name** for client identification.
3. In the **Value** field, enter the desired pod name.

#### Finding the Pod Name:

- Use the `kubectl get pods` command to list all pods in your cluster.
- Identify the specific pod you want to target and note its exact name.
- Use this exact name as the **Value** in the Client Workload configuration.

---

# Kubernetes Pod Name Prefix

URL: https://docs.aembit.io/user-guide/access-policies/client-workloads/identification/kubernetes-pod-name-prefix/

Description: This document describes how the Kubernetes Pod Name Prefix method identifies Client Workloads in Aembit.

In Kubernetes environments, pods are often dynamically created and assigned unique names. The Kubernetes Pod Name Prefix identification method allows you to target a group of pods belonging to the same deployment by specifying the common prefix of their names. This is particularly useful for managing access for deployments with multiple replicas or deployments that are frequently scaled up or down.

## Applicable Deployment Type

This method is suitable for Edge-based deployments.

## Configuration

### Aembit Cloud

1. Create a new Client Workload.
2. Choose **Kubernetes Pod Name Prefix** for client identification.
3. In the **Value** field, enter the desired pod name prefix.
This is typically the name of your deployment. #### Finding the Pod Name Prefix: - Use the `kubectl get pods` command to list all pods in your cluster. - Identify the pods belonging to your target deployment. Their names will share a common prefix. - Use this common prefix as the Value in the Client Workload configuration. #### Uniqueness Ensure that the chosen prefix is unique enough to avoid unintentionally matching pods from other deployments. Please consult the [Client Workload multiple identifiers](/astro/user-guide/access-policies/client-workloads/identification/client-workload-multiple-ids) documentation to enhance uniqueness. --- # Process Name URL: https://docs.aembit.io/user-guide/access-policies/client-workloads/identification/process-name/ Description: This document describes how the Process Name method identifies Client Workloads in Aembit for Virtual Machine deployments. # The Process Name Client Workload identification method is applicable to Virtual Machine deployments and utilizes the name of the process associated with the Client Workload to identify and distinguish it from other workloads. ## Applicable Deployment Type This method is suitable for Aembit Edge-based deployments. ## Configuration ### Aembit Cloud 1. Create a new Client Workload. 2. Choose **Process Name** for client identification. 3. In the **Value** field, enter the exact name of the process that represents the Client Workload. ### Finding the Process Name - Open a terminal on your Linux VM. - Use system monitoring tools, or commands like `ps` or `top` on the virtual machine, to list running processes and identify the relevant process name. Alternatively, you can often find the process name in the Client Workload's configuration files or documentation. ### Uniqueness Process name identification is inherently not unique, as processes with the same name could exist on multiple virtual machines. To enhance uniqueness, consider combining Process Name with other client identifiers, such as Hostname. For more information on using multiple identifiers effectively, please consult the [Client Workload multiple identifiers](/astro/user-guide/access-policies/client-workloads/identification/client-workload-multiple-ids) documentation to enhance uniqueness. --- # Credential Providers URL: https://docs.aembit.io/user-guide/access-policies/credential-providers/ Description: This document provides a high-level description of Credential Providers This section covers Credential Providers in Aembit, which are used to provide access credentials to Client Workloads so they can access Server Workloads securely. 
The following pages provide information about different Credential Provider types and how to configure them: - [Aembit Access Token](/astro/user-guide/access-policies/credential-providers/aembit-access-token) - [API Key](/astro/user-guide/access-policies/credential-providers/api-key) - [AWS Security Token Service Federation](/astro/user-guide/access-policies/credential-providers/aws-security-token-service-federation) - [AWS SigV4](/astro/user-guide/access-policies/credential-providers/aws-sigv4) - [Azure Entra Workload Identity Federation](/astro/user-guide/access-policies/credential-providers/azure-entra-workload-identity-federation) - [Google GCP Workload Identity Federation](/astro/user-guide/access-policies/credential-providers/google-workload-identity-federation) - [JSON Web Token (JWT)](/astro/user-guide/access-policies/credential-providers/json-web-token) - [Managed GitLab Account](/astro/user-guide/access-policies/credential-providers/managed-gitlab-account) - [OAuth 2.0 Authorization Code](/astro/user-guide/access-policies/credential-providers/oauth-authorization-code) - [OAuth 2.0 Client Credentials](/astro/user-guide/access-policies/credential-providers/oauth-client-credentials) - [Username Password](/astro/user-guide/access-policies/credential-providers/username-password) - [Vault Client Token](/astro/user-guide/access-policies/credential-providers/vault-client-token) ### Advanced Options - [Multiple Credential Providers](/astro/user-guide/access-policies/credential-providers/advanced-options/multiple-credential-providers) - [Dynamic Claims](/astro/user-guide/access-policies/credential-providers/advanced-options/dynamic-claims) - [Multiple Credential Providers Terraform](/astro/user-guide/access-policies/credential-providers/advanced-options/multiple-credential-providers-terraform) ### Integrations - [GitLab Service Account](/astro/user-guide/access-policies/credential-providers/integrations/gitlab) --- # Configure Dynamic Claims URL: https://docs.aembit.io/user-guide/access-policies/credential-providers/advanced-options/dynamic-claims/ Description: This page describes the dynamic claims feature. It is important to note that dynamic claims are currently supported when working with Vault, and enables you to make a configuration dynamic in nature (allowing workloads to specify workload-specific claim values outside of the Aembit Tenant UI). When working with Vault Client Token Credential Providers for your Aembit tenant, you have the option to enable the dynamic claims feature. With this feature, you can set either a subject claim, or a custom claim, with either literal strings or dynamic values. ### Minimum Versions To use the dynamic claims feature, the Agent Injector also needs to be updated to the new minimum version/image (currently 1.9.142) so the new `aembit.io/agent-configmap` annotation works properly. ### Literal strings Literal strings can be placed verbatim into the target claim with no modification or adjustment necessary. ### Dynamic values Aembit Cloud communicates dynamic claim requests to the Agent Proxy following a series of steps which are described below. 1. The template is sent to Agent Proxy. 2. Agent Proxy collects all necessary information and then sends this information to Aembit Cloud. 3. Aembit Cloud replaces template variables with the values provided by Agent Proxy. The sections below describe how you can support Vault with Aembit dynamic claims. 
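
As a rough orientation before those sections, here is a minimal sketch of the kind of Vault JWT/OIDC role that these bound types map to. It assumes the JWT auth method is already enabled at `auth/jwt`, and the role name, audience, subject, policy, and claim values shown are placeholders, not values from the Aembit documentation:

```shell
# Minimal sketch only: role name, audience, subject, and policy are placeholders,
# and the JWT auth method is assumed to be enabled at auth/jwt.
vault write auth/jwt/role/aembit-dynamic-claims \
    role_type="jwt" \
    user_claim="sub" \
    bound_audiences="example-audience" \
    bound_subject="example-subject" \
    token_policies="example-policy" \
    token_ttl="1h"
# Generically bound claims can be added to the same role through the bound_claims
# parameter (a map of claim names to expected values).
```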
## Configuring Vault (HashiCorp) Cloud

To enable dynamic claims, you must first configure your HashiCorp Vault instance, since dynamic claims are only applicable to Vault Client Token Credential Providers. Because dynamic claims support is intended for the `Vault Client Token` Credential Provider type, Vault must also be configured to support a matching set of values.

Vault OIDC roles, which are used to log into Vault as part of the Vault client token retrieval, support one or more of the following three bound types:

- bound_subject
- bound_audiences
- generically bound claims

For more detailed information on configuring Vault Cloud, please see the [HashiCorp Vault](https://developer.hashicorp.com/vault/docs/auth/jwt#configuration) technical documentation.

## Client Workload Configuration

If you need to use values from a ConfigMap as dynamic claims, you need to configure the `aembit.io/agent-configmap` annotation for the Client Workload. For the latest release, you can add this new annotation to a deployment similar to the screenshot shown below.

![Dynamic Claims Kubernetes Annotation](../../../../../../assets/images/dynamic_claims_kubernetes_annotation.png)

Here is an example Client Workload YAML with this annotation:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
      annotations:
        aembit.io/agent-inject: "enabled"
        aembit.io/agent-configmap: '["foo1:bar1", "foo2:bar2"]'
```

The Agent Proxy supports Kubernetes ConfigMaps and specific environment variables in dynamic claims. The following templates are currently supported:

- `k8s.configmap.*.*` - Make sure to specify the *CONFIGMAP* and *VALUE* (represented by `*.*`).
- `os.environment.*.*` - Make sure to specify *K8S_POD_NAME* or *CLIENT_WORKLOAD_ID* (represented by `*.*`).

## Client Workload (Kubernetes) Annotations

In order for the Client Workload to retrieve and use ConfigMap values, the Client Workload must be annotated properly, as shown in the annotation example above.

## Confirm Aembit Authentication to Vault

If the Client Workload is able to successfully connect to Vault, this confirms that Aembit authenticated to Vault with the configured and properly injected dynamic claims.

---

# Configure multiple Credential Providers

URL: https://docs.aembit.io/user-guide/access-policies/credential-providers/advanced-options/multiple-credential-providers/

Description: How to configure multiple Credential Providers to map to an Access Policy

import { Steps } from '@astrojs/starlight/components';

While you may only ever need to add a single Credential Provider to an Access Policy, there are use cases where you may need to add multiple Credential Providers to a single Access Policy. To ensure you can perform this task, Aembit has enabled multiple Credential Provider addition and Access Policy mapping functionality.

:::note
This functionality is currently limited to use with JSON Web Token (JWT) Credential Providers and Snowflake or HTTP Server Workloads.
:::

## Adding Multiple Credential Providers to an Access Policy

To add multiple Credential Providers to an Access Policy, follow the steps described below. 1. Log into your Aembit tenant. 2.
Once you are logged into your tenant, click on the **Access Policy** link in the left navigation pane. You should see the Access Policy Page displayed. ![Access Policy Main Page](../../../../../../assets/images/multiple_credential_providers_access_policy_main_page.png) 3. Before adding a Credential Provider to an Access Policy, you need to perform the following tasks: - Add an Access Policy - Add a Client Workload - Add a Server Workload :::note If the Server Workload Application Protocol is **NOT** Snowflake or HTTP, then you are unable to add multiple Credential Providers to an Access Policy. ::: 4. Once you have added your Access Policy, Client Workload, and Server Workload, you should now add Credential Providers to the Access Policy by dragging your mouse over the Credential Provider **+** button and selecting either **New** or **Existing**. ![New Credential Provider Plus Icon](../../../../../../assets/images/multiple_credential_providers_access_policies_new_credential_provider_plus.png) You have 2 options: - **Existing** - This selection enables you to choose one of your existing Credential Providers. - **New** - This selection enables you to create a new Credential Provider. Once you make your selection, the Credential Providers dialog window appears. 5. If you selected **Use Existing**, select the Credential Provider you would like to use for the Access Policy from the list displayed. 6. If you selected **New**, proceed to add your new Credential Provider by completing the fields as prompted in the dialog window. You should see the following fields displayed: - **Name** - Credential Provider name - **Description** - Text description - **Credential Type** - Select **JSON Web Token** ![Credential Provider Dialog Window With JWT](../../../../../../assets/images/multiple_credential_providers_dialog_box_jwt_complete.png) When **JSON Web Token** is selected, the following additional fields appear, enabling you to enter your Snowflake credentials: - **Token Configuration** - **Snowflake Account ID** - **Username** - **Snowflake Alter User Command** For more information on how to retrieve this information for your Credential Provider, please see the [JSON Web Token](/astro/user-guide/access-policies/credential-providers/json-web-token) page. 7. When finished, click **Save** to save your Credential Provider. 8. When you return to the Access Policy page, you see the first Credential Provider listed in the Credential Providers column. ![Access Policy Page With First Credential Provider Added](../../../../../../assets/images/multiple_credential_providers_access_policy_main_page_jwt_token_added.png) 9. Now that you have your first Credential Provider added to the Access Policy, repeat steps 4 - 7 to add additional Credential Providers by navigating to the Credential Provider column and dragging your mouse over the **+** button. :::note When you add additional Credential Providers to an Access Policy, a dialog window appears stating that when you perform this action, you must also map the Credential Provider to the Access Policy. ::: ![Credential Mapping Dialog Window For Multiple Credential Providers](../../../../../../assets/images/multiple_credential_providers_credential_mapping_dialog_window.png) Click **Continue** to add additional Credential Providers to the Access Policy. 
### Mapping JWT Credential Providers to a Snowflake Server Workload Access Policy

After adding at least two (2) JWT Credential Providers to an Access Policy, you must then map these Credential Providers to your Access Policy. To map JWT Credential Providers to an Access Policy, follow the steps described below.

1. On the Access Policy page, in the Credential Providers column, you should see a box with the total number of Credential Providers that have been added.

   ![Access Policies Main Page With Credential Providers](../../../../../../assets/images/multiple_credential_providers_access_policies_main_page_with_two_credential_providers.png)

   :::note
   Notice how the box specifies the **Total** number of Credential Providers for the Access Policy (2) and a red line notation with "unmapped." This means the Credential Providers for the Access Policy are currently "unmapped" and need to be mapped before the Access Policy can be saved.
   :::

2. Click on the arrow button to open the Credential Provider Mappings dialog window. In the Credential Provider Mappings dialog window, you see the Credential Providers currently added to the Access Policy with information about each Credential Provider.

   ![Credential Mapping Page With Credential Providers Unmapped](../../../../../../assets/images/multiple_credential_providers_credential_mapping_page.png)

3. Notice that there is a red "!" icon. This denotes that the Credential Provider currently has no mappings. Hover over the Credential Provider and you see a down arrow appear. Click on the down arrow to open the Credential Provider mapping menu.

   ![Credential Provider Mappings Dropdown](../../../../../../assets/images/multiple_credential_providers_mapping_page_credential_provider_dropdown.png)

4. In this menu, add any Snowflake usernames you would like added to the Credential Providers. This means that if the username is included in the connection request from the Client Workload, this Credential Provider will be used for credential injection. Repeat this process as many times as needed for all of your policy-associated Credential Providers.

5. Click **Save** when you are finished adding your mapping values.

   :::note
   When you add Snowflake usernames to a Credential Provider and click **Save**, the red "!" icon turns into a green check mark. You also see these usernames displayed in the Values column.
   :::

6. When you return to the Access Policies page, notice that you now see a green "All Mapped" notation in the box for the Credential Providers you just mapped.

   ![Access Policy Page With Green All Mapped](../../../../../../assets/images/multiple_credential_providers_access_policies_main_page_after_save_activate_selected.png)

7. Click **Save** to save your selections. If you would like to save, and then also activate the credential mapping, click **Save & Activate**.

Now, when you return to the Access Policy page, if you hover over the Access Policy, you see the Credential Providers that are mapped to that Access Policy.

### Mapping JWT Credential Providers to an HTTP Server Workload Access Policy

After adding at least two (2) JWT Credential Providers to an Access Policy, you may then map these Credential Providers to your Access Policy. To map Credential Providers to an Access Policy, follow the steps described below.

1. On the Access Policy page, in the Credential Providers column, you should see a box with the total number of Credential Providers that have been added.

   ![Access Policy Main Page With HTTP Server Workload and Unmapped Credential Providers](../../../../../../assets/images/multiple_credential_providers_access_policy_main_page_http_unmapped.png)

2. Click on the arrow button to open the Credential Provider Mappings dialog window. In the Credential Provider Mappings dialog window, you see the Credential Providers currently added to the Access Policy with information about each Credential Provider.

   ![Credential Mapping Page With JWT Credential Provider Type](../../../../../../assets/images/multiple_credential_providers_credential_mapping_page.png)

3. Notice that there is a red "!" icon. This denotes that the Credential Providers currently have no mappings. Hover over the Credential Provider and you see a down arrow appear. Click on the down arrow to open the Credential Provider menu.

   ![Credential Provider Menu HTTP Mapping](../../../../../../assets/images/multiple_credential_providers_credential_provider_mappings_mapping_type_http.png)

4. In this menu, add the HTTP Header or HTTP Body values you would like used for the Credential Provider mapping. This means that if these HTTP values are included in the connection request from the Client Workload, this Credential Provider will be used for credential injection. Repeat this process as many times as needed for all of your policy-associated Credential Providers.

   ![Credential Provider Mapping Dialog With HTTP Header and HTTP Body](../../../../../../assets/images/multiple_credential_providers_credential_provider_mappings_dialog_http.png)

   - If you would like to use **HTTP Header** values for your credential mapping, you will see the following dropdown menu:

     ![Credential Provider Mapping - HTTP Header](../../../../../../assets/images/multiple_credential_providers_credential_provider_mapping_http_header.png)

   - If you would like to use **HTTP Body** values for your credential mapping, you will see the following dropdown menu:

     ![Credential Provider Mapping - HTTP Body](../../../../../../assets/images/multiple_credential_providers_credential_provider_mappings_http_body.png)

5. Click **Save** when you are finished adding your mapping values. You are directed back to the Credential Provider Mapping page where you see the values you entered for the HTTP Header and HTTP Body fields.

   ![Credential Provider Mappings Page With Mapped HTTP Values](../../../../../../assets/images/multiple_credential_providers_credential_provider_mapping_http_mapped_values.png)

   :::note
   When you add HTTP mapping values for a Credential Provider and click **Save**, the red "!" icon turns into a green check mark. You also see these values displayed in the Values column.
   :::

6. When you return to the Access Policies page, notice that you now see a green "All Mapped" notation in the box for the Credential Providers you just mapped.

   ![Access Policy Main Page - HTTP Mapped Credential Providers](../../../../../../assets/images/multiple_credential_providers_access_policy_main_page_http_credential_providers_mapped.png)

7. Click **Save** to save your selections. If you would like to save, and then also activate the credential mapping, click **Save & Activate**.

Now, when you return to the Access Policy page, if you hover over the Access Policy, you see the Credential Providers that are mapped to that Access Policy.
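
To make the mapping concrete, here is a hedged sketch of two client requests whose HTTP Header and HTTP Body values would each select a different mapped Credential Provider. The endpoint, header name, and body field are hypothetical and assume you configured those values as mapping values in the steps above:

```shell
# Hypothetical request routed through Aembit Edge. If the header value "team-a"
# is configured as an HTTP Header mapping value, Aembit injects the Credential
# Provider mapped to that value.
curl -H "X-Account: team-a" https://api.example.com/v1/reports

# Hypothetical request where a JSON body field (for example the path $.account)
# is configured as an HTTP Body mapping value instead.
curl -H "Content-Type: application/json" \
     -d '{"account": "team-b", "query": "usage"}' \
     https://api.example.com/v1/reports
```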
--- # Configure multiple Credential Providers with Aembit's Terraform Provider URL: https://docs.aembit.io/user-guide/access-policies/credential-providers/advanced-options/multiple-credential-providers-terraform/ Description: How to configure multiple Credential Providers to map to an Aembit Terraform Provider import { Steps } from '@astrojs/starlight/components'; Aembit supports users who would like to use the Aembit Terraform Provider to manage their Aembit resources, while also supporting single and multiple Credential Providers per Access Policy. The Aembit Terraform Provider enables you to perform Create, Read, Update and Delete (CRUD) operations on these Aembit resources using Terraform directly, or via a CI/CD workflow. :::note These instructions assume you already have configured the Aembit Terraform Provider. If you have not already performed this configuration, please refer to the [Configuration with Terraform](/astro/user-guide/access-policies/advanced-options/terraform/terraform-configuration) page to configure the Aembit Terraform Provider before continuing on this page. ::: ## Configure an Access Policy with multiple Credential providers To configure your Aembit Access Policies with multiple Credential Providers with the `AccountName` mapping type: 1. Go to your Terraform configuration file(s). 1. In your configuration file, locate the `resource "aembit_access_policy"` section(s). They should look like the example shown below. ```hcl {12} resource "aembit_access_policy" "test_policy" { name = "TF First Policy" is_active = true client_workload = aembit_client_workload.first_client.id trust_providers = [ aembit_trust_provider.azure1.id, aembit_trust_provider.azure2.id ] access_conditions = [ aembit_access_condition.wiz.id ] credential_provider = aembit_credential_provider.<*resource_name*>.id, server_workload = aembit_server_workload.first_server.id } ``` In the preceding example, notice in the highlighted line for `credential_provider`. Because there is only one Credential Provider configured, this signifies that only one Credential Provider is currently configured for the Access Policy. 1. To add additional Credential Providers to your configuration, go to the `aembit_access_policy` resource in your Terraform configuration file that you want to update and locate the `credential_provider` line. 1. Change the `credential_provider` property to `credential_providers` so you may add multiple Credential Providers. 1. Add your Credential Providers to this section using the following format: ```hcl credential_providers = [{ credential_provider_id = aembit_credential_provider.<*resource1_name*>.id, mapping_type = "AccountName", account_name = "account_name_1" }, { credential_provider_id = aembit_credential_provider.<*resource2_name*>.id, mapping_type = "AccountName", account_name = "account_name_2" }, { credential_provider_id = aembit_credential_provider.<*resource3_name*>.id, mapping_type = "AccountName", account_name = "account_name_3" }] } ``` Where: - `credential_provider_id` - The Credential Provider ID. - `mapping_type` - The Credential Provider mapping type. - `account_name` - The account name to trigger on for using this Credential Provider if the `mapping_type` value is `AccountName`. 1. When you have finished adding all of your Credential Providers to the Aembit Terraform Provider configuration file, your `aembit_access_policy` resource section should look similar to the example shown below. 
```hcl resource "aembit_access_policy" "multi_cp_second_policy" { is_active = true name = "TF Multi CP Second Policy" client_workload = aembit_client_workload.second_client.id credential_providers = [{ credential_provider_id = aembit_credential_provider.<*resource1_name*>.id, mapping_type = "AccountName", account_name = "account_name_1" }, { credential_provider_id = aembit_credential_provider..id, mapping_type = "AccountName", account_name = "account_name_2" }, { credential_provider_id = aembit_credential_provider..id, mapping_type = "AccountName", account_name = "account_name_3" }] server_workload = aembit_server_workload.first_server.id } ``` ### Multiple Credential Provider examples The following examples use `HttpHeader` and `HttpBody` Mapping Types to show multiple Credential Providers: #### HttpHeader Example ```hcl resource "aembit_access_policy" "multi_cp_httpheader" { is_active = true name = "TF Multi CP HTTP Header" client_workload = aembit_client_workload.first_client.id credential_providers = [{ credential_provider_id = aembit_credential_provider.<*resource1_name*>.id, mapping_type = "HttpHeader", header_name = "X-Sample-Header-name-1", header_value = "X-Sample-Header-value-1" }, { credential_provider_id = aembit_credential_provider.<*resource2_name*>.id, mapping_type = "HttpHeader", header_name = "X-Sample-Header-name-2", header_value = "X-Sample-Header-value-2" }] server_workload = aembit_server_workload.first_server.id } ``` Where: - `credential_provider_id` - The Credential Provider ID. - `mapping_type` - The Credential Provider mapping type. - `header_name` - The HTTP Header name for which a matching value will trigger this Credential Provider to be used. - `header_value` - The HTTP Header value for which a matching value will trigger this Credential Provider to be used. #### HttpBody Example ```hcl resource "aembit_access_policy" "multi_cp_httpbody" { is_active = true name = "TF Multi CP HTTP Body" client_workload = aembit_client_workload.first_client.id credential_providers = [{ credential_provider_id = aembit_credential_provider.<*resource1_name*>.id, mapping_type = "HttpBody", httpbody_field_path = "x_sample_httpbody_field_path_1", httpbody_field_value = "x_sample_httpbody_field_value_1" }, { credential_provider_id = aembit_credential_provider.<*resource2_name*>.id, mapping_type = "HttpBody", httpbody_field_path = "x_sample_httpbody_field_path_2", httpbody_field_value = "x_sample_httpbody_field_value_2" }] server_workload = aembit_server_workload.first_server.id } ``` Where: - `credential_provider_id` - The Credential Provider ID. - `mapping_type` - The Credential Provider mapping type. - `httpbody_field_path` - The JSON path to a value that triggers this Credential Provider to be used. Note that the `HttpBody` mapping type requires JSON HTTP body content, and this parameter must be specified in JSON path notation. - `httpbody_field_value` - The JSON path to a value which triggers this Credential Provider to be used. :::note In these two examples, you can see that different fields need to be configured, based on the `mapping_type` specified in the configuration file. 
::: --- # Configure an Aembit Access Token Credential Provider URL: https://docs.aembit.io/user-guide/access-policies/credential-providers/aembit-access-token/ Description: How to create and use an Aembit Access Token Credential Provider import { Steps } from '@astrojs/starlight/components'; The Aembit Access Token Credential Provider generates access tokens for authenticating applications and services to the Aembit API. ## Create an Aembit Access Token Credential Provider To configure an Aembit Access Token Credential Provider, follow the steps described below. 1. Log into your Aembit tenant. The main Dashboard page is displayed. 2. On the Dashboard page, select the **Credential Providers** tab in the left navigation pane. You are directed to the Credential Providers page where you will see a list of existing Credential Providers. ![Credential Providers Main Page - Empty](../../../../../assets/images/credential_providers_main_page_empty.png) 3. Click **New** to open the Credential Providers dialog window. ![Credential Providers - Dialog Window Empty](../../../../../assets/images/credential_providers_aembit_access_token_dialog_window_empty.png) 4. In the Credential Providers dialog window, enter the following information: - **Name** - Name of the Credential Provider. - **Description** - An optional text description of the Credential Provider. - **Credential Type** - A dropdown menu that enables you to configure the Credential Provider type. Select **Aembit Access Token**. - **Audience** - Auto-generated by Aembit, this is a specific endpoint used for authentication within the Aembit API. - **Role** - A dropdown menu allowing you to select the appropriate user role for access, such as Super Admin, Auditor, or New Role. Be sure to choose a role with the appropriate permissions that align with your Client Workload's needs. We recommend following the least privilege principle, assigning the role with the minimum permissions required to perform the task. - **Lifetime** - The duration for which the generated access token remains valid. ![Credential Provider Dialog Window](../../../../../assets/images/credential_providers_aembit_access_token_dialog_window_completed.png) 5. When you have finished entering information about your new Aembit Access Token Credential Provider, click **Save**. 6. You are directed back to the Credential Providers page where you will see your new Aembit Access Token Credential Provider. ![Credential Providers Page With New Aembit Access Token Credential Provider](../../../../../assets/images/credential_providers_aembit_access_token_main_page_with_new_credential_provider.png) --- # Configure an API Key Credential Provider URL: https://docs.aembit.io/user-guide/access-policies/credential-providers/api-key/ Description: How to create and use an API Key Credential Provider import { Steps } from '@astrojs/starlight/components'; The Application Programming Interface (API) Key credential provider is designed for scenarios where authentication is accomplished using a static API Key. An API Key is a secret used by workloads to identify themselves when making calls to an API. This API key acts as a security mechanism for controlling access to APIs. ## Credential Provider configuration To configure an API Key Credential Provider, follow the steps outlined below. 1. Log into your Aembit tenant. 2. Once you are logged into your tenant, select the **Credential Providers** tab in the left navigation pane. 
You are directed to the Credential Providers page displaying a list of existing Credential Providers. In this example, there are no existing Credential Providers. ![Credential Providers - Main Page Empty](../../../../../assets/images/credential_providers_main_page_empty.png) 3. Click **New** to open the Credential Providers dialog window. ![Credential Providers - Dialog Window Empty](../../../../../assets/images/credential_providers_api_key_dialog_window_empty.png) 4. In the Credential Providers dialog window, enter the following information: - **Name** - Name of the Credential Provider. - **Description** - An optional text description of the Credential Provider. - **Credential Type** - A dropdown menu that enables you to configure the Credential Provider type. Select **API Key**. - **API Key** - The authentication key used to access the server workload. API keys are commonly generated by the system or service provider. ![Credential Providers - Dialog Window Completed](../../../../../assets/images/credential_providers_api_key_dialog_window_completed.png) 5. Click **Save** when finished. You will be directed back to the Credential Providers page, where you will see your newly created Credential Provider. ![Credential Providers - Main Page With New Credential Provider](../../../../../assets/images/credential_providers_api_key_main_page_with_new_credential_provider.png) --- # Configure an AWS STS Federation Credential Provider URL: https://docs.aembit.io/user-guide/access-policies/credential-providers/aws-security-token-service-federation/ Description: How to add and use the AWS Security Token Service (STS) Federation Credential Provider with Server Workloads import { Steps } from '@astrojs/starlight/components'; AWS offers the AWS Security Token Service (STS), a web service designed to facilitate the request of temporary, restricted-privilege credentials for users. Aembit's Credential Provider for AWS STS broadly supports AWS services that use the SigV4 and SigV4a authentication protocol depending if requests are for regional services or global/multi-region services respectively. See [How Aembit uses AWS SigV4 and SigV4a](/astro/user-guide/access-policies/credential-providers/aws-sigv4) for information about SigV4/4a and how Aembit handles SigV4/4a requests. ## Credential Provider configuration To configure an AWS Security Token Service Federation Credential Provider, follow these steps: 1. Log into your Aembit tenant. 2. In the left nav menu, go to **Credential Providers**. Aembit directs you to the Credential Providers page displaying a list of existing Credential Providers. In this example, there are no existing Credential Providers. ![Credential Providers - Main Page Empty](../../../../../assets/images/credential_providers_main_page_empty.png) 3. Click **New**. This opens the Credential Providers dialog window. ![Credential Providers - Dialog Window Empty](../../../../../assets/images/credential_providers_aws_sts_dialog_window_empty.png) 4. In the Credential Providers dialog window, enter the following information: - **Name** - Name of the Credential Provider. - **Description** - An optional text description of the Credential Provider. - **Credential Type** - A dropdown menu that enables you to configure the Credential Provider type. Select **AWS Security Token Service Federation**. - **OIDC Issuer URL** - OpenID Connect (OIDC) Issuer URL, auto-generated by Aembit, is a dedicated endpoint for OIDC authentication within AWS. 
- **AWS IAM Role Arn** - Enter your AWS IAM Role in ARN format, Aembit associates this ARN with the AWS STS credentials request. - **Aembit IdP Token Audience** - This read-only field specifies the `aud` (Audience) claim value which Aembit uses in the JWT Access Token when requesting credentials from AWS STS. - **Lifetime (seconds)** - Specify the duration for which AWS STS credentials remain valid, ranging from 900 seconds (15 minutes) to a maximum of 129,600 seconds (36 hours). ![Credential Providers - Dialog Window Completed](../../../../../assets/images/credential_providers_aws_sts_dialog_window_completed.png) 5. Click **Save** when finished. Aembit directs you back to the **Credential Providers** page, where you'll see your newly created Credential Provider. ![Credential Providers - Main Page With New Credential Provider](../../../../../assets/images/credential_providers_aws_sts_main_page_with_new_credential_provider.png) ## AWS Identity Provider configuration \{#aws-idp-config\} To use the AWS STS Credential Provider, you must configure the AWS Identity Provider and assign it with an IAM role: 1. Within the AWS Console, go to **IAM** > **Identity providers** and select **Add provider**. 2. On the Configure provider screen, complete the steps and fill out the values specified: - **Provider type**- Select **OpenID Connect**. - **Provider URL**- Paste in the **OIDC Issuer URL** from the Credential Provider fields. - Click **Get thumbprint** to configure the AWS Identity Provider trust relationship. - **Audience**: Paste in the **Aembit IdP Token Audience** from the Credential Provider fields. - Click **Add provider**. 3. Within the AWS Console, go to **IAM** > **Identity providers** and select the Identity Provider you just created. 4. Click **Assign role** and choose **Use an existing role**. --- # How Aembit uses AWS SigV4 and SigV4a URL: https://docs.aembit.io/user-guide/access-policies/credential-providers/aws-sigv4/ Description: How Aembit's Credential Provider for AWS STS works with the AWS SigV4 and Sigv4a request signing protocols AWS Signature Version 4 (SigV4) and Signature Version 4a (SigV4a) are AWS request signing protocols that Aembit uses to sign HTTP requests from Client Workloads to AWS services when using credentials obtained from Aembit’s [AWS Security Token Service (STS) Credential Provider](/astro/user-guide/access-policies/credential-providers/aws-security-token-service-federation). During authentication, SigV4 makes sure a request is authentic, unaltered in transit, and not replayed. ## SigV4 versions SigV4 has two versions: - [SigV4](https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html) is AWS’s standard signing process. It requires that you specify the exact AWS region where you're sending a request (such as `us-east-1`, `us-east-2`). AWS scopes the signing key and signature to that specific region. AWS requires a new signature if you send the same request to a service in a different region. For most requests to [AWS regional endpoints](https://docs.aws.amazon.com/general/latest/gr/rande.html#regional-endpoints), AWS uses SigV4. - [SigV4a](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_sigv.html#how-sigv4a-works) extends SigV4 to support multi-region AWS services, in cases where you route a request across multiple AWS regions. Instead of specifying a single region in the signature, SigV4a uses a region wildcard (*), allowing the signature to be valid across all AWS regions. 
For requests to [AWS global service endpoints](https://docs.aws.amazon.com/general/latest/gr/rande.html#global-endpoints) or any service that supports cross-region requests, AWS requires SigV4a. ## SigV4 version selection Aembit automatically determines whether to use SigV4 or SigV4a when a Client Workload uses an AWS STS Credential Provider to access AWS services. It works like this: - When a Server Workload’s hostname includes a region (such as `us-east-1` or `us-east-2`), Aembit uses SigV4, scoping the signature to only that region. - When the Server Workload’s hostname doesn't include a region (S3 Multi-Region Access Points or other global AWS services), Aembit uses SigV4a, which allows the signature to work across AWS regions. Aembit performs this selection automatically based on the hostname structure, following AWS’s standard endpoint formats. You don't need to make configuration changes to benefit from this and your existing AWS STS Credential Providers automatically gain support for SigV4a where applicable. ## Workload identity and service access separation in AWS When working with Aembit Trust Providers and Credential Providers in AWS environments, it's important to understand the roles each of these play. Aembit uses Trust Providers to verify who a workload is, and Credential Providers to control what AWS services that workload can access. 1. Trust Providers (like the [AWS Role Trust Provider](/astro/user-guide/access-policies/trust-providers/aws-role-trust-provider)) verify who a workload is by confirming the AWS environment it’s running in and the IAM Role it's using. 1. Once Aembit verifies the workload’s identity, the [AWS STS Credential Provider](/astro/user-guide/access-policies/credential-providers/aws-security-token-service-federation) retrieves temporary AWS credentials for the workload, tied to the IAM Role verified by the Trust Provider. 1. When the workload makes API requests to AWS services like S3, Lambda, or SQS, Aembit’s Agent Proxy automatically signs those requests using AWS SigV4 for regional services, or SigV4a for global or multi-region services. This clear separation makes sure that: - Only attested workloads receive AWS credentials. - Aembit secures All AWS service access using temporary credentials, eliminating the need for long-lived secrets. - Aembit automatically applies the correct SigV4 or SigV4a signing process based on the destination service and hostname. --- # Configure an Azure Entra WIF Credential Provider URL: https://docs.aembit.io/user-guide/access-policies/credential-providers/azure-entra-workload-identity-federation/ Description: This page describes the Azure Entra Workload Identity Federation (WIF) Credential Provider and its usage with Server Workloads. import { Steps } from '@astrojs/starlight/components'; Aembit's Credential Provider for Microsoft Azure Entra Workload Identity Federation (WIF) enables you to automatically obtain credentials through Aembit as a third-party federated Identity Provider (IdP). This allows you to securely authenticate with Azure Entra to access your Azure Entra registered applications and managed identities. For example, to assign API permissions or app roles to you registered applications or managed identities. You can configure the Azure Entra Credential Provider using the [Aembit web UI](#configure-a-credential-provider-for-azure-entra) or through the [Aembit Terraform provider](#configure-azure-entra-using-the-aembit-terraform-provider). 
## Prerequisites To configure an Azure Entra Credential Provider, you must have and do the following: - Ability to access and manage your Aembit tenant. - Ability to access and manage either of the following: - [Microsoft Entra registered application](https://learn.microsoft.com/en-us/entra/identity-platform/quickstart-register-app) - [Microsoft Managed Identity](https://learn.microsoft.com/en-us/entra/identity/managed-identities-azure-resources/overview) - You request only one resource per Azure Entra Credential Provider

  Azure's architecture requires that you request only one resource per Azure Entra Credential Provider. For example, when you need to access both [Microsoft Graph](http://graph.microsoft.com/) and [Azure Management](http://management.azure.com/), you must configure the following:

  - **Two distinct Credential Providers**:
    - One requesting the `https://graph.microsoft.com/.default` scope
    - Another requesting the `https://management.azure.com/.default` scope
  - **Two distinct Server Workloads**:
    - One for `graph.microsoft.com`
    - Another for `management.azure.com`
  - **In your Access Policies, map each Credential Provider to its respective Server Workload**.
- Terraform only: - You have Terraform installed. - You have the [Aembit Terraform Provider](https://registry.terraform.io/providers/Aembit/aembit/latest) configured. ## Configure a Credential Provider for Azure Entra This section explains how to configure an Azure Entra Credential Provider in the Aembit web UI that requests a single Azure Entra resource. These steps assume you already have a Microsoft Entra registered application (see [Prerequisites](#prerequisites)). You must configure the Aembit Credential Provider at the same time as the Azure Entra registered application credential. :::tip It's best to have your Azure Entra registered application open in the Azure Entra Portal in a different browser window alongside the Aembit web UI while configuring the Credential Provider. ::: ## Create a Credential Provider 1. Log in to your Aembit tenant, and in the left sidebar menu, go to **Credential Providers**. 1. Click **+ New**, which reveals the **Credential Provider** page. 1. Enter a **Name** and optional **Description**. 1. In the **Credential Type** dropdown, select **Azure Entra Identity Federation**, revealing new fields. ![Aembit web UI Credential Provider page](../../../../../assets/images/azure-entra-aembit-credential-provider.png) Before filling out these fields, you must add the credential for your Azure Entra registered application in the Azure Entra Portal first. Keep the Aembit web UI open while you work on the next section. ## Add a credential for your Azure Entra registered app In the Azure Entra Portal, create a new credential for your registered application: 1. In your Azure Entra Portal, go to **App registrations** and select your registered application from the list. 1. Go to **Manage --> Certificates & secrets** and select the **Federated Credentials** tab. 1. Click **Add credential**, to reveal the **Add a credential** page and fill out the following sections (for quick reference, see the [mappings](#azure-entra-and-credential-provider-ui-value-mappings) section): 1. For **Connect your account**: - **Federated credential scenario** - Select **Other issuer** - **Issuer** - From the Aembit Credential Provider page, copy and paste the **OIDC Issuer URL** - **Type** - Select **Explicit subject identifier** - **Value** - Enter the desired value (this must match the **JWT Token Subject** value on the Aembit Credential Provider page) 1. For **Credential details**: - **Name** - Enter the desired name - **Audience** - Use the default value or optionally change it to the desired value (this must match the **Audience** value on the Aembit Credential Provider page) Your Aembit Credential Provider UI and Entra registered application credential should look similar to the following example: ![Aembit web UI and Azure Entra registered app credential mappings](../../../../../assets/images/azure-entra-registered-app-credential-value-mappings.png) 1. Click **Add** and your new credential shows up on the **Federated credentials** tab in Azure Entra. 1. While still on your registered application, go to the **Overview** section. Keep the Azure Entra Portal open to use it in the next section. ## Complete the Credential Provider in the Aembit web UI Go back to the Aembit web UI, and complete the **Credential Provider** page: 1. For **JWT Token Scope**, enter the scope of the resource you want to request. For example, for Microsoft Graph, use `https://graph.microsoft.com/.default`. 1. 
Use the info from your Azure Entra registered application's **Overview** page to complete the remaining fields for the Aembit Credential Provider (for quick reference, see the [mappings](#azure-entra-and-credential-provider-ui-value-mappings) section): 1. **Azure Tenant ID** - copy and paste the **Directory (tenant) ID**. 1. **Azure Client ID** - copy and paste the **Application (client) ID**. ![Azure Entra registered application overview page](../../../../../assets/images/azure-entra-registered-app-values.png) 1. Click **Save**. Your Azure Entra Credential Provider now displays in your list of Credential Providers in the Aembit web UI. ## Verify the connection To verify the connection between your Aembit Credential Provider and your Azure Entra registered application: 1. On the **Credential Providers** page, select the Credential Provider you just created. 1. Click **Verify**. After a few moments you should see a green banner display a "Verified Successfully" message. If you don't receive a "Verified Successfully" message, go back through the values in your Credential Provider in the Aembit UI and the credential in your Azure Entra registered application to make sure they're correct. You're now ready to use your Credential Provider for Azure Entra Workload Identity Federation with your Server Workloads in an Aembit Access Policy! ## Configure Azure Entra using the Aembit Terraform provider To configure an Azure Entra Credential Provider using the [Aembit Terraform Provider](https://registry.terraform.io/providers/Aembit/aembit/latest), follow the steps in this section. :::tip[OIDC Issuer URL] When using the Aembit Terraform Provider, you won't have the OIDC Issuer URL the Azure credential requires until *after* you apply the Terraform configuration successfully. Make sure you leave the Azure Entra **Add a credential** page open until after you have successfully applied the Terraform configuration. Then copy the value for `oidc_issuer` from the applied Terraform configuration to the **Issuer** field in the **Add a credential** page. ::: 1. Follow the steps to [Add a credential for your Azure Entra registered app](#add-a-credential-for-your-azure-entra-registered-app). Leaving the **Issuer** blank and stopping before you add the new credential. Keep this page open as you'll need some values from it. 1. Create a new Terraform configuration file (such as `azure-wif.tf`) with the following structure: ```hcl provider "aembit" { } resource "aembit_credential_provider" "azureEntra" { name = "" is_active = true azure_entra_workload_identity = { audience = "" subject = "" scope = "" azure_tenant = "" client_id = "" } } ```
Example Terraform resource file for Microsoft Graph ```hcl provider "aembit" { } resource "aembit_credential_provider" "azureEntra" { name = "Azure Entra WIF" is_active = true azure_entra_workload_identity = { audience = "api://AzureADTokenExchange" subject = "aembit:federation:test" scope = "https://graph.microsoft.com/.default" azure_tenant = "7f492ad1-25ec-4bfe-9c3a-84b517de8f2c" client_id = "3d845691-7abc-4def-a123-456789abcdef" } } ```
1. Apply the Terraform configuration: ```shell terraform apply ``` 1. After the Terraform apply completes successfully, the Aembit Terraform provider generates an OIDC Issuer URL as the value for `oidc_issuer`. Run the following command to obtain the value for `oidc_issuer`: ```shell terraform state show aembit_credential_provider.azureEntra ``` 1. Copy the URL from `oidc_issuer` and return to the Azure Portal's **Add a credential** page. 1. Paste the URL from `oidc_issuer` into the **Issuer** field. 1. Click **Add** and your new credential shows up on the **Federated credentials** tab in Azure Entra.
You're now ready to use your Credential Provider for Azure Entra Workload Identity Federation with your Server Workloads in an Aembit Access Policy! ## Azure Entra and Credential Provider UI value mappings The following table shows how the different value in Azure Entra from your registered application map to the required values to the Aembit Credential Provider web UI and Terraform provider: | Aembit Credential Provider value | Azure Entra credential value | Azure UI location | Terraform value | |----------------------------------|------------------------------|---------------------------|-----------------| | OIDC Issuer URL | Account Issuer | Registered app credential | Auto-populated | | Audience | Credential Audience | Registered app credential | `audience` | | JWT Token Subject | Account Value | Registered app credential | `subject` | | Azure Tenant ID | Directory (tenant) ID | Your app's Overview | `azure_tenant` | | Azure Client ID | Application (client) ID | Your app's Overview | `client_id` | --- # Configure a Google GCP WIF Credential Provider URL: https://docs.aembit.io/user-guide/access-policies/credential-providers/google-workload-identity-federation/ Description: How to create a Google GCP Workload Identity Federation (WIF) Credential Provider import { Steps } from '@astrojs/starlight/components'; Aembit offers the Google Workload Identity Federation (WIF) Credential Provider to integrate with Google GCP Services. This provider allows your Client Workloads to securely authenticate with GCP and obtain short-lived security tokens for accessing GCP services and resources. ## Credential Provider configuration To configure a Google Workload Identity Federation Credential Provider, follow the steps outlined below. 1. Log into your Aembit tenant. 2. Once you are logged into your tenant, click on the **Credential Providers** tab in the left navigation pane. You are directed to the Credential Providers page displaying a list of existing Credential Providers. In this example, there are no existing Credential Providers. ![Credential Providers - Main Page Empty](../../../../../assets/images/credential_providers_main_page_empty.png) 3. Click on the **New** button to open the Credential Providers dialog window. ![Credential Providers - Dialog Window Empty](../../../../../assets/images/credential_providers_google_wif_dialog_window_empty.png) 4. In the Credential Providers dialog window, enter the following information: - **Name** - Name of the Credential Provider. - **Description** - An optional text description of the Credential Provider. - **Credential Type** - A dropdown menu that enables you to configure the Credential Provider type. Select **Google Workload Identity Federation**. - **OIDC Issuer URL** - OpenID Connect (OIDC) Issuer URL, auto-generated by Aembit, is a dedicated endpoint for OIDC authentication with Google Cloud. - **Audience** - This field specifies the `aud` (Audience) claim that must be present in the OIDC token when requesting credentials from Google Cloud. The value should match either: - **Default** - Full canonical resource name of the Workload Identity Pool Provider (used if "Default audience" was chosen during setup). - **Allowed Audiences** - A value included in the configured allowed audiences list, if defined. :::caution If the default audience was chosen during provider creation, provide the value previously copied from Google Cloud Console, **excluding** the http prefix (e.g., //iam.googleapis...). 
::: - **Service Account Email** - A Service Account represents a Google Cloud service identity, each service account has a unique email address (e.g., `service-account-name@project-id.iam.gserviceaccount.com`) that serves as its identifier. This email is used for granting permissions and enabling interactions with other services. - **Lifetime (seconds)** - Specify the duration for which credentials remain valid, to a maximum of 1 hour (3,600 seconds). ![Credential Providers - Dialog Window Completed](../../../../../assets/images/credential_providers_google_wif_dialog_window_completed.png) 5. Click **Save** when finished. You will be directed back to the Credential Providers page, where you will see your newly created Credential Provider. ![Credential Providers - Main Page With New Credential Provider](../../../../../assets/images/credential_providers_google_wif_main_page_with_new_credential_provider.png) :::note For detailed examples on configuring the Workload Identity Federation, please refer to the respective Server Workloads' credential provider configuration sections, such as the [GCP Bigquery](/astro/user-guide/access-policies/server-workloads/guides/gcp-bigquery#credential-provider-configuration-1) example. ::: --- # Credential Provider integrations overview URL: https://docs.aembit.io/user-guide/access-policies/credential-providers/integrations/ Description: An overview of what Credential Provider integrations are and how they work Aembit Credential Provider Integrations associate a third-party system (such as GitLab) with your Credential Providers to perform credential lifecycle management on your behalf. Credential Providers that use Credential Provider Integrations are responsible for maintaining an always-available credential value, which Aembit injects as part of an Access Policy. Aembit's credential lifecycle management capabilities include creating, rotating, and deleting tokens. ## List of Credential Provider Integrations - [GitLab Service Account](#how-the-gitlab-service-account-integration-works) ## How Credential Provider Integrations work In general, Credential Provider Integrations use the following process: 1. When you initially create Credential Provider, Aembit creates the third-party account or credential or both and securely stores it in Aembit's database. 1. Once 80% of the configured Credential Provider's **Lifetime** expires, Aembit rotates the third-party credential and securely stores the updated credential in Aembit's database. 1. When properly requested and authorized, Aembit provides the third-party credential from Aembit's database to the associated Agent Proxy. If the injected credential fails, Agent Proxy continues to log the existing Workload Events to indicate the failure but doesn't generate a notification or take explicit action. For example, if you delete a credential on your third-party system, then the Workload fails until Aembit successfully rotates the credential. 1. When you delete a Credential Provider, Aembit deletes the third-party account and credential. :::note[Deleting integrations] You can't delete a Credential Provider Integration until you delete all its associated Credential Providers. You can't change the association between a Credential Provider Integration and a Credential Provider after you create it. 
::: ## How the GitLab Service Account integration works This [GitLab Service Account](/astro/user-guide/access-policies/credential-providers/integrations/gitlab) integration uses your GitLab administrator account to connect with your GitLab instance and control credential lifecycle management for each Managed GitLab Account Credential Provider. When creating a [Managed GitLab Account Credential Provider](/astro/user-guide/access-policies/credential-providers/managed-gitlab-account), you scope it to only access specific GitLab Projects or GitLab Groups. Each provider creates an additional, separate GitLab service account that manages credentials on your behalf. This approach gives you fine-grained control over your GitLab workloads' credential lifecycle management. ### GitLab subscriptions Depending on the type of [GitLab plan](https://docs.gitlab.com/subscriptions/choosing_subscription/) you have, there are different ways to set up your GitLab Service Account integration. - For [GitLab.com plans](/astro/user-guide/access-policies/credential-providers/integrations/gitlab), you must use `https://gitlab.com` when creating the integration. - For [GitLab Dedicated or Self-Managed plans](/astro/user-guide/access-policies/credential-providers/integrations/gitlab-dedicated-self), you must use the URL of your GitLab Dedicated or Self-Managed instance. See [GitLab's plans](https://docs.gitlab.com/subscriptions/choosing_subscription/) for details about GitLab subscription types. :::important The distinction between the different GitLab plans requires you to use different API calls when creating the GitLab Service Account integration. ::: ### Process flow At a high level, the GitLab Service Account Credential Provider Integration works like this: 1. You initially connect Aembit to GitLab using your GitLab administrator account. 1. You create a Credential Provider with Managed GitLab Account integration. 1. Aembit creates a service account for each Credential Provider with your specified access scope. 1. Aembit securely stores credentials in its database. 1. Aembit automatically rotates credentials before expiration. 1. When requested and authorized, Aembit provides credentials to the Agent Proxy. ## Additional resources - [GitLab.com integration](/astro/user-guide/access-policies/credential-providers/integrations/gitlab) - [GitLab Dedicated/Self-Managed integration](/astro/user-guide/access-policies/credential-providers/integrations/gitlab-dedicated-self) --- # Create a GitLab Service Account Integration for a GitLab.com plan URL: https://docs.aembit.io/user-guide/access-policies/credential-providers/integrations/gitlab/ Description: How to create a GitLab Service Account Credential Provider Integration using a GitLab.com plan The GitLab service account Credential Provider Integration allows you to create a [Managed GitLab Account Credential Provider](/astro/user-guide/access-policies/credential-providers/managed-gitlab-account), which provides credential lifecycle management and rotation capabilities for secure authentication between your GitLab instances and other Client Workloads. This page details everything you need to create a GitLab Service Account Credential Provider Integration. This integration requires the use of two types of GitLab accounts: - A top-level group GitLab account with the `Owner` role that performs the initial authorization for Aembit to start communicating with GitLab. - A GitLab *service account*, owned by the top-level group, that performs credential lifecycle management for the Managed GitLab Account Credential Provider.
See [How the GitLab Service Account integration works](/astro/user-guide/access-policies/credential-providers/integrations/#how-the-gitlab-service-account-integration-works) for more details. ## Prerequisites - `Owner` role access to [GitLab Admin area](https://docs.gitlab.com/administration/admin_area/) and [REST API](https://docs.gitlab.com/api/rest/) - A [GitLab Personal Access Token (PAT)](https://docs.gitlab.com/user/profile/personal_access_tokens/) for your [GitLab service account](https://docs.gitlab.com/user/profile/service_accounts/) with the `Owner` role as well as `api` and `self_rotate` [scopes](https://docs.gitlab.com/user/profile/personal_access_tokens/#personal-access-token-scopes) ## Configure a GitLab service account integration To create a GitLab service account integration, follow these steps: 1. Log into your Aembit Tenant, and go to **Credential Providers -> Integrations** in the left sidebar. ![Credential Provider - Integrations tab](../../../../../../assets/images/cp-integrations-page.png) 1. (Optional) In the top right corner, select the [Resource Set](/astro/user-guide/administration/resource-sets/) in which you want this Credential Provider Integration to reside. 1. Click **+ New**, which displays the **Integration** pop out menu. 1. Select **GitLab Service Account**. 1. Fill out the following fields on the **GitLab Service Account** form: - **Display Name**: Enter a unique name for this integration. - **Description**: (Optional) Enter a description. - **Token Endpoint URL**: Enter `https://gitlab.com`, indicating that you're using a GitLab.com plan. See [GitLab subscriptions](/astro/user-guide/access-policies/credential-providers/integrations/#gitlab-subscriptions) for more details. - **Personal Access Token**: Enter the GitLab Personal Access Token that's associated with your service account owned by a top-level group, which must have `api` and `self_rotate` scopes. If you don't already have a GitLab service account with a PAT, see [Create a GitLab service account and PAT](#create-a-gitlab-service-account-and-pat). The form should look similar to the following screenshot: ![Completed GitLab Service Account Credential Provider Integration](../../../../../../assets/images/cp-integration-gitlab-sa.png) 1. Click **Save**. Aembit displays the new integration in the list of Credential Provider Integrations. :::note As soon as you successfully create the integration, Aembit rotates the token for the GitLab service account and regularly rotates it as long as the Credential Provider Integration exists. ::: ## Create a GitLab service account and PAT The service account you use for the GitLab Service Account integration must be owned by a top-level group and have access to GitLab APIs. To create a GitLab service account and PAT, follow these steps: :::note GitLab doesn't provide a way to create service accounts from the Admin area UI, so you must use the API to create the service account. See [GitLab issue #509870](https://gitlab.com/gitlab-org/gitlab/-/issues/509870) for more details. ::: 1. *From your terminal*, enter the following command to create the GitLab service account you want to associate with the integration. Make sure to replace `<your_access_token>` with your GitLab API access token and `<top_level_group_id>` with your numeric top-level group ID. For `name` and `username`, you can use the same value for both or follow whatever naming convention you prefer.
```shell
curl --header "PRIVATE-TOKEN: <your_access_token>" \
  -X POST "https://gitlab.com/api/v4/groups/<top_level_group_id>/service_accounts" \
  --data "name=<service_account_name>" \
  --data "username=<service_account_username>"
```
If successful, the response should look similar to the following:
```shell
{"id":12345678,"username":"<service_account_username>","name":"<service_account_name>","email":"<service_account_email>"}
```
Record the `id` as you'll need it in the next step. 1. Create a PAT for the GitLab service account you just created. Make sure to replace `<your_access_token>` with your GitLab API access token, `<top_level_group_id>` with your numeric top-level group ID, and `<service_account_id>` with the `id` you recorded from the previous step:
```shell
curl --header "PRIVATE-TOKEN: <your_access_token>" \
  -X POST "https://gitlab.com/api/v4/groups/<top_level_group_id>/service_accounts/<service_account_id>/personal_access_tokens" \
  --data "name=<token_name>" \
  --data "scopes[]=api" \
  --data "scopes[]=self_rotate"
```
If successful, the response should look similar to the following:
```shell
{"id":1234,"name":"<token_name>","revoked":false,"created_at":"2025-03-21T20:18:23.333Z","description":null,"scopes":["api","self_rotate"],"user_id":<service_account_id>,"last_used_at":null,"active":true,"expires_at":"2025-03-31","token":"<personal_access_token>"}
```
Record the `token` value as you'll need it in the final step. 1. Add the new service account you just created to your top-level group. Make sure to replace `<your_access_token>` with your GitLab API access token, `<top_level_group_id>` with your numeric top-level group ID, and `<service_account_id>` with the `id` you recorded earlier:
```shell
# access_level=50 grants the service account the Owner role in the group
curl --header "PRIVATE-TOKEN: <your_access_token>" \
  -X POST "https://gitlab.com/api/v4/groups/<top_level_group_id>/members" \
  --data "user_id=<service_account_id>" \
  --data "access_level=50"
```
## Additional resources - [Managed GitLab Account](/astro/user-guide/access-policies/credential-providers/managed-gitlab-account) - [Credential Provider Integrations overview](/astro/user-guide/access-policies/credential-providers/integrations/) - [GitLab Dedicated/Self-Managed integration](/astro/user-guide/access-policies/credential-providers/integrations/gitlab-dedicated-self) --- # Create a GitLab Service Account Integration for a Dedicated/Self-Managed instance URL: https://docs.aembit.io/user-guide/access-policies/credential-providers/integrations/gitlab-dedicated-self/ Description: How to create a GitLab Service Account Credential Provider Integration using a GitLab Dedicated or Self-Managed instance The GitLab service account Credential Provider Integration allows you to create a [Managed GitLab Account Credential Provider](/astro/user-guide/access-policies/credential-providers/managed-gitlab-account), which provides credential lifecycle management and rotation capabilities for secure authentication between your GitLab instances and other Client Workloads. This page details everything you need to create a GitLab Service Account Credential Provider Integration. This integration requires the use of two types of GitLab accounts: - A GitLab administrator account that performs the initial authorization for Aembit to start communicating with GitLab. - A GitLab *service account* that performs credential lifecycle management for the Managed GitLab Account Credential Provider. See [How the GitLab Service Account integration works](/astro/user-guide/access-policies/credential-providers/integrations/#how-the-gitlab-service-account-integration-works) for more details.
## Prerequisites - Administrator access to [GitLab Admin area](https://docs.gitlab.com/administration/admin_area/) and the GitLab [REST API](https://docs.gitlab.com/api/rest/) - A [GitLab Personal Access Token (PAT)](https://docs.gitlab.com/user/profile/personal_access_tokens/) for your [GitLab service account](https://docs.gitlab.com/user/profile/service_accounts/) with `api` and `self_rotate` [scopes](https://docs.gitlab.com/user/profile/personal_access_tokens/#personal-access-token-scopes) - The URL of your GitLab Dedicated or GitLab Self-Managed instance (see [GitLab's plans](https://docs.gitlab.com/subscriptions/choosing_subscription/) for details)
For example: `gitlab_tenant_name.gitlab-dedicated.com` or `https://gitlab.my-company.com` ## Configure a GitLab service account integration To create a GitLab service account integration, follow these steps: 1. Log into your Aembit Tenant, and go to **Credential Providers -> Integrations** in the left sidebar. ![Credential Provider - Integrations tab](../../../../../../assets/images/cp-integrations-page.png) 1. (Optional) In the top right corner, select the [Resource Set](/astro/user-guide/administration/resource-sets/) that you want this Credential Provider Integration to reside. 1. Click **+ New**, which displays the **Integration** pop out menu. 1. Select **GitLab Service Account**. 1. Fill out the following fields on the **GitLab Service Account** form: - **Display Name**: Enter a unique name for this integration. - **Description**: (Optional) Enter a description. - **Token Endpoint URL**: Enter the URL of your GitLab Dedicated or GitLab Self-Managed instance. See [GitLab subscriptions](/astro/user-guide/access-policies/credential-providers/integrations/#gitlab-subscriptions) for more details. - **Top Level Group ID**: n/a
Aembit disables this field when using GitLab Dedicated or Self-Managed instance URLs. - **Personal Access Token**: Enter the GitLab Personal Access Token that's associated with your instance-level administrator service account, which must have `api` and `self_rotate` scopes. If you don't already have a GitLab service account with a PAT, see [Create a GitLab service account PAT](#create-a-gitlab-service-account-pat). The form should look similar to the following screenshot: ![Completed GitLab Service Account Credential Provider Integration](../../../../../../assets/images/cp-integration-gitlab-sa.png) 1. Click **Save**. Aembit displays the new integration in the list of Credential Provider Integrations. :::note As soon as you successfully create the integration, Aembit rotates the token for the GitLab service account and regularly rotates it as long as the Credential Provider Integration exists. ::: ## Create a GitLab service account PAT To create a GitLab service account PAT, you must have *Administrator* access to your GitLab Admin area and GitLab APIs. This process has two main parts: 1. [Create a PAT for your GitLab administrator account](#create-a-gitlab-administrator-account-pat) using the *GitLab UI*. 1. [Create a GitLab service account and PAT](#create-a-gitlab-service-account-and-pat) using both the *GitLab API* and *GitLab UI*. ### Create a GitLab administrator account PAT To create a PAT for your GitLab administrator account, follow these steps: 1. Log into your GitLab Admin area with an administrator user account. 1. See [Create a personal access token](https://docs.gitlab.com/user/profile/personal_access_tokens/#create-a-personal-access-token) in the GitLab docs to create a PAT for your *administrator user account* (not the service account). 1. Keep the GitLab Admin area UI open, as you need it in the next step. ### Create a GitLab service account and PAT To create a GitLab service account and PAT, follow these steps: :::note GitLab doesn't provide a way to create service accounts from the Admin area UI, so you must use the API to create the service account. See [GitLab issue #509870](https://gitlab.com/gitlab-org/gitlab/-/issues/509870) for more details. ::: 1. *From your terminal*, enter the following command to create the GitLab service account you want to associate with the integration. Make sure to replace `<your_access_token>` with your GitLab API access token and `<gitlab_instance_url>` with your GitLab instance URL. For `<service_account_name>` and `<service_account_username>`, you can use the same value for both or follow whatever naming convention you prefer.
```shell
curl --header "PRIVATE-TOKEN: <your_access_token>" \
  -X POST "<gitlab_instance_url>/api/v4/service_accounts" \
  --data "name=<service_account_name>" \
  --data "username=<service_account_username>"
```
1. *From your GitLab Admin area*, go to **Admin area -> Users** and select the service account you just created. 1. Go to **Access Level**, and change the **Access Level** from **Regular** to **Administrator**. 1. *Back in your terminal*, create a PAT for the GitLab service account you just made **Administrator**.
Make sure to replace `<your_access_token>` with your GitLab API access token, `<gitlab_instance_url>` with your GitLab instance URL, `<service_account_id>` with the numeric ID of the service account you just created (shown in the Admin area **Users** list), and `<token_name>` with the same value you used to create the service account:
```shell
curl --header "PRIVATE-TOKEN: <your_access_token>" \
  -X POST "<gitlab_instance_url>/api/v4/users/<service_account_id>/personal_access_tokens" \
  --data "scopes[]=api" \
  --data "scopes[]=self_rotate" \
  --data "name=<token_name>"
```
If successful, the response should look similar to the following:
```shell
{"id":1234,"name":"token_name","revoked":false,"created_at":"2025-03-21T20:18:23.333Z","description":null,"scopes":["api","self_rotate"],"user_id":36,"last_used_at":null,"active":true,"expires_at":"2025-03-31","token":"your_token"}
```
1. Record the `token` value in the response and use it as the **Personal Access Token** in the [Configure a GitLab service account integration](#configure-a-gitlab-service-account-integration) section. ## Additional resources - [Managed GitLab Account](/astro/user-guide/access-policies/credential-providers/managed-gitlab-account) - [Credential Provider Integrations overview](/astro/user-guide/access-policies/credential-providers/integrations/) - [GitLab.com integration](/astro/user-guide/access-policies/credential-providers/integrations/gitlab) --- # Configure a JSON Web Token (JWT) Credential Provider URL: https://docs.aembit.io/user-guide/access-policies/credential-providers/json-web-token/ Description: How to create and use a JSON Web Token (JWT) Credential Provider import { Steps } from '@astrojs/starlight/components'; A JSON Web Token (JWT), defined by the open standard [RFC 7519](https://datatracker.ietf.org/doc/html/rfc7519), is a compact and self-contained method for securely transmitting information as a JSON object between parties. It is important to note that Aembit's current support for JWT generation is specifically tailored for Snowflake. ## Credential Provider configuration To configure a JSON Web Token (JWT) Credential Provider, follow the steps outlined below. 1. Log into your Aembit tenant. 2. Once you are logged into your tenant, click on the **Credential Providers** tab in the left navigation pane. You are directed to the Credential Providers page displaying a list of existing Credential Providers. In this example, there are no existing Credential Providers. ![Credential Providers - Main Page Empty](../../../../../assets/images/credential_providers_main_page_empty.png) 3. Click on the **New** button to open the Credential Providers dialog window. ![Credential Providers - Dialog Window Empty](../../../../../assets/images/credential_providers_jwt_dialog_window_empty.png) 4. In the Credential Providers dialog window, enter the following information: - **Name** - Name of the Credential Provider. - **Description** - An optional text description of the Credential Provider. - **Credential Type** - A dropdown menu that enables you to configure the Credential Provider type. Select **JSON Web Token (JWT)**. - **Token Configuration** - By default, this field is pre-selected as **Snowflake Key Pair Authentication** for connecting to Snowflake. - **Snowflake Account ID** - Use this field to input the Snowflake Locator, a unique identifier that distinguishes a Snowflake account within the organization. - **Username** - The username is your access credential for Snowflake, allowing authentication to access a Server Workload. It is your unique Snowflake username associated with the account. - **Snowflake Alter User Command** - After saving the Credential Provider, an auto-generated SQL command is produced in this field.
This command incorporates a public key, which is essential for establishing trust between your Snowflake account and the JWT tokens issued by Aembit. To execute this command on your Snowflake account, utilize a Snowflake-compatible tool of your choice. ![Credential Providers - Dialog Window Completed](../../../../../assets/images/credential_providers_jwt_dialog_window_completed.png) 5. Click **Save** when finished. You will be directed back to the Credential Providers page, where you will see your newly created Credential Provider. ![Credential Providers - Main Page With New Credential Provider](../../../../../assets/images/credential_providers_jwt_main_page_with_new_credential_provider.png) --- # Configure a Managed GitLab Account Credential Provider URL: https://docs.aembit.io/user-guide/access-policies/credential-providers/managed-gitlab-account/ Description: How to create and use a Managed GitLab Account Credential Provider import { Steps } from '@astrojs/starlight/components'; The Manage GitLab Account Credential Provider uses the [GitLab Service Account Credential Provider Integration](/astro/user-guide/access-policies/credential-providers/integrations/#how-the-gitlab-service-account-integration-works) to allow you to manage the credential lifecycle of your GitLab service accounts. ## Prerequisites You must have the following to create a Managed GitLab Account Credential Provider: - A completed [GitLab Service Account Credential Provider Integration](/astro/user-guide/access-policies/credential-providers/integrations/gitlab) ## Create a Managed GitLab account Credential Provider To create a Managed GitLab Account Credential Provider, follow these steps: 1. Log into your Aembit Tenant, and go to **Credential Providers** in the left sidebar. 1. (Optional) In the top right corner, select the [Resource Set](/astro/user-guide/administration/resource-sets/) that you want this Credential Provider to reside. 1. Click **+ New**, which displays the Credential Provider pop out menu. 1. Enter a **Name** and optional **Description**. 1. Under **Credential Type**, select **Managed GitLab Account**, revealing more fields. 1. Fill out the remaining fields: 1. **Select GitLab Integration**: Select a GitLab Service Account integration you've already configured. :::note If the **Select GitLab Integration** dropdown menu is empty, you either: - May not have any GitLab Service Account integrations configured yet. See [GitLab Service Account](/astro/user-guide/access-policies/credential-providers/integrations/gitlab) to create one. - May need to change Resource Sets. ::: 1. **GitLab Group IDs or Paths**: Enter the [group ID](https://docs.gitlab.com/user/group/#access-a-group-by-using-the-group-id) or [group path](https://docs.gitlab.com/user/namespace/#determine-which-type-of-namespace-youre-in). If entering more than one, separate them with commas (for example: `parent-group/subgroup,34,56`). 1. **GitLab Project IDs or Paths**: Enter the [project ID](https://docs.gitlab.com/user/project/working_with_projects/#access-a-project-by-using-the-project-id) or project path. If entering more than one, separate them with commas (`my-project.345678,my-other-project`). 1. **Access Level**: Enter the [GitLab Access Level](https://docs.gitlab.com/api/access_requests/#valid-access-levels) you want your GitLab service account to have. 1. 
**Scope**: Enter the [GitLab Personal Access Token (PAT) Scopes](https://docs.gitlab.com/user/profile/personal_access_tokens/#personal-access-token-scopes) you want the GitLab service account to have. When entering more than one, separate them with spaces (for example: `api read_user k8s_proxy`). 1. **Lifetime**: Enter the number of days you want credentials to remain active. The form should look similar to the following screenshot: ![Completed Manage GitLab Account Credential Provider form](../../../../../assets/images/cp-managed-gitlab-account.png) 1. Click **Save**. Aembit displays the new Credential Provider in the list of Credential Providers. ## Verify the Credential Provider To verify that you successfully created the Managed GitLab Account Credential Provider and it's communicating with GitLab: 1. In your Aembit Tenant, go to **Credential Providers**. 1. (Optional) In the top right corner, select the [Resource Set](/astro/user-guide/administration/resource-sets/) that your Credential Provider resides. 1. Select your newly created Credential Provider. Scroll down to see all the details provided by GitLab for this Service Account. You should see something similar to the following screenshot: ![Completed Managed GitLab Account Credential Provider with 'Ready' badge](../../../../../assets/images/cp-integration-gitlab-sa-ready.png) ### (Optional) Verify in the GitLab Admin area To verify that the Managed GitLab Account Credential Provider successfully creates service account in GitLab: 1. Log into your *administrator* GitLab account associated with your GitLab Service Account integration. 1. Go to **Admin area -> Overview -> Users**. 1. Select the service account formatted like this: `Aembit__managed_service_account`. 1. On the **Account** tab, verify that the **Username** and **ID** match the values shown in the Credential Provider in the Aembit UI. Similar to the following screenshot: ![GitLab Admin area UI - Groups and projects tab on service account](../../../../../assets/images/cp-integration-gitlab-sa-gl-account.png) 1. On the **Groups and projects** tab, verify that the groups, projects, and access levels match what you entered in the Managed GitLab Account form. GitLab displays these in a table showing Groups with their associated Projects and Access Levels. Similar to the following screenshot: ![GitLab Admin area UI - Accounts tab on service account](../../../../../assets/images/cp-integration-gitlab-sa-gl-groups-projects.png) --- # Configure OAuth 2.0 Authorization Code Credential Provider URL: https://docs.aembit.io/user-guide/access-policies/credential-providers/oauth-authorization-code/ Description: How to create and use an OAuth 2.0 Authorization Code Credential Provider import { Steps } from '@astrojs/starlight/components'; Many organizations require access to 3rd party SaaS services that have short-lived access tokens generated on demand for authentication to APIs that these 3rd party services provide. Some critical SaaS services that organizations may use, and need Credential Provider support, include: - Atlassian - GitLab - Slack - Google Workspace - PagerDuty Configuring an OAuth 2.0 Authorization Code Credential Provider requires a few steps, including: 1. Create and configure the Credential Provider. 1. Create and configure the 3rd party Application (examples provided in the Server Workload pages). 1. Authorize the Credential Provider to complete the integration. The sections below describe how you can configure an OAuth 2.0 Authorization Code Credential Provider. 
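Under the hood, the authorization step amounts to a standard OAuth 2.0 code-for-token exchange against the configured Token URL ([RFC 6749, section 4.1](https://datatracker.ietf.org/doc/html/rfc6749#section-4.1)). You don't run this exchange yourself; Aembit performs an exchange along these lines after you click **Authorize**. The sketch below is only illustrative, and every value in it is a placeholder:
```shell
# Illustrative only: the generic code-for-token request defined by RFC 6749.
# Aembit performs an equivalent exchange against the Token URL you configure;
# <token_url>, <authorization_code>, and the other values are placeholders.
curl -X POST "<token_url>" \
  --data "grant_type=authorization_code" \
  --data "code=<authorization_code>" \
  --data "redirect_uri=<aembit_callback_url>" \
  --data "client_id=<client_id>" \
  --data "client_secret=<client_secret>"
```
The Callback URL, Client ID, Client Secret, and Token URL fields described below map onto this request, which is why the 3rd party application must be registered with the Aembit Callback URL as its redirect URI. (When **PKCE Required** is checked, a `code_verifier` is also included in the exchange.)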
For detailed examples on configuring the 3rd party applications, please refer to the respective Server Workload pages, such as the [Atlassian](/astro/user-guide/access-policies/server-workloads/guides/atlassian#oauth-20-authorization-code) example. ## Credential Provider configuration To configure an OAuth 2.0 Authorization Code Credential Provider, follow the steps outlined below. 1. Log into your Aembit Tenant. 2. Once you are logged into your tenant, click on the **Credential Providers** tab in the left navigation pane. You are directed to the Credential Providers page displaying a list of existing Credential Providers. In this example, there are no existing Credential Providers. ![Credential Providers - Main Page Empty](../../../../../assets/images/credential_providers_main_page_empty.png) 3. Click on the **New** button to open the Credential Providers dialog window. ![Credential Providers - Dialog Window Empty](../../../../../assets/images/credential_providers_auth_code_dialog_window_empty.png) 4. In the Credential Providers dialog window, enter the following information: - **Name** - Name of the Credential Provider. - **Description** - An optional text description of the Credential Provider. - **Credential Type** - A dropdown menu that enables you to configure the Credential Provider type. Select **OAuth 2.0 Authorization Code**. - **Callback URL** - An auto-generated Callback URL from Aembit Admin. - **Client ID** - The Client ID associated with the Credential Provider. - **Client Secret** - The Client Secret associated with the Credential Provider. - **Scopes** - The list of scopes for the Credential Provider. This should be a list of individual scopes separated by spaces. - **OAuth URL** - The base URL for all OAuth-related requests. Use the **URL Discovery** button next to this field to automatically populate the Authorization URL and Token URL if the correct OAuth URL is provided. - **Authorization URL** - The endpoint where user is redirected to authenticate and authorize access to your application. - **Token URL** - The URL where the authorization code is exchanged for an access token. - **PKCE Required** - Configure Aembit to use PKCE for the 3rd party OAuth integration (recommended). - **Lifetime** - The lifetime of the retrieved credential. Aembit uses this to send notification reminders to the user prior to the authorization expiring. ![Credential Providers - Dialog Window Completed](../../../../../assets/images/credential_providers_auth_code_dialog_window_completed.png) 5. Click **Save** when finished. You will be directed back to the Credential Providers page, where you will see your newly created Credential Provider. ![Credential Providers - Main Page With New Credential Provider](../../../../../assets/images/credential_providers_auth_code_main_page_with_new_credential_provider.png) --- # Configure an OAuth 2.0 Client Credentials Credential Provider URL: https://docs.aembit.io/user-guide/access-policies/credential-providers/oauth-client-credentials/ Description: How to create and use an OAuth 2.0 Client Credentials Credential Provider import { Steps } from '@astrojs/starlight/components'; The OAuth 2.0 Client Credentials Flow, described in [OAuth 2.0 RFC 6749 (section 4.4)](https://datatracker.ietf.org/doc/html/rfc6749#section-4.4), is a method in which an application can obtain an access token by using its unique credentials such as client ID and client secret. 
This process is typically used when an application needs to authenticate itself, without requiring user input, to access protected resources. ## Credential Provider configuration To configure an OAuth 2.0 Client Credentials Credential Provider, follow the steps outlined below. 1. Log into your Aembit tenant. 2. Once you are logged into your tenant, click on the **Credential Providers** tab in the left navigation pane. You are directed to the Credential Providers page displaying a list of existing Credential Providers. In this example, there are no existing Credential Providers. ![Credential Providers - Main Page Empty](../../../../../assets/images/credential_providers_main_page_empty.png) 3. Click on the **New** button to open the Credential Providers dialog window. ![Credential Providers - Dialog Window Empty](../../../../../assets/images/credential_providers_oauth_clientcreds_dialog_window_empty.png) 4. In the Credential Providers dialog window, enter the following information: - **Name** - Name of the Credential Provider. - **Description** - An optional text description of the Credential Provider. - **Credential Type** - A dropdown menu that enables you to configure the Credential Provider type. Select **OAuth 2.0 Client Credentials**. - **Token Endpoint Url** - The Token Endpoint URL is the designated location where an application can obtain an access token through the OAuth 2.0 Client Credentials Flow. - **Client Id** - The Client ID is a unique identifier assigned to your application upon registration. You can find your application's Client ID in the respective section provided by the OAuth Server. - **Client Secret** - The Client Secret is a secret that is only known to the client (application) and the Authorization Server. It is used for secure authentication between the client and the Authorization Server. - **Scopes (optional)** - OAuth 2.0 allows clients to specify the level of access they require while seeking authorization. Typically, scopes are documented by the server to inform clients about the access required for specific actions. - **Credential Style** - A set of options that allows you to choose how the credentials are sent to the authorization server when requesting an access token. You can select one of the following options: - **Authorization Header** - The credentials are included in the request's Authorization header as a Base64-encoded string. This is the most common and secure method. - **POST Body** - The credentials are sent in the body of the POST request as form parameters. This method is less common and may be required by certain servers that don't support the Authorization header. Make sure to review your Server Workload documentation to determine what is considered the credential style in that specific context. ![Credential Providers - Dialog Window Completed](../../../../../assets/images/credential_providers_oauth_clientcreds_dialog_window_completed.png) 5. Click **Save** when finished. You will be directed back to the Credential Providers page, where you will see your newly created Credential Provider. 
![Credential Providers - Main Page With New Credential Provider](../../../../../assets/images/credential_providers_oauth_clientcreds_main_page_with_new_credential_provider.png) --- # Configure a Username & Password Credential Provider URL: https://docs.aembit.io/user-guide/access-policies/credential-providers/username-password/ Description: How to create and use a Username & Password Credential Provider import { Steps } from '@astrojs/starlight/components'; The Username & Password credential provider is tailored for Server Workloads requiring username and password authentication, such as databases and Server Workloads utilizing HTTP Basic authentication. ## Credential Provider configuration To configure a Username & Password Credential Provider, follow the steps outlined below. 1. Log into your Aembit tenant. 2. Once you are logged into your tenant, click on the **Credential Providers** tab in the left navigation pane. You are directed to the Credential Providers page displaying a list of existing Credential Providers. In this example, there are no existing Credential Providers. ![Credential Providers - Main Page Empty](../../../../../assets/images/credential_providers_main_page_empty.png) 3. Click on the **New** button to open the Credential Providers dialog window. ![Credential Providers - Dialog Window Empty](../../../../../assets/images/credential_providers_username_password_dialog_window_empty.png) 4. In the Credential Providers dialog window, enter the following information: - **Name** - Name of the Credential Provider. - **Description** - An optional text description of the Credential Provider. - **Credential Type** - A dropdown menu that enables you to configure the Credential Provider type. Select **Username & Password**. - **Username** - The username serves as the access credential associated with the account or system, allowing authentication for accessing the Server Workload. Depending on the context, the **Username** could take various forms: - **Email Address** - Use the full email address associated with the account. - **Master User** - In certain systems, this might be a master user account that has privileged access. - **Account Username** - This could be a specific username assigned to the account for authentication purposes. Please make sure to review your Server Workload documentation to determine what is considered a username in that specific context. - **Password** - The corresponding password for the provided username. Please refer to the specific Server Workload documentation for accurate configuration details. ![Credential Providers - Dialog Window Completed](../../../../../assets/images/credential_providers_username_password_dialog_window_completed.png) 5. Click **Save** when finished. You will be directed back to the Credential Providers page, where you will see your newly created Credential Provider. ![Credential Providers - Main Page With New Credential Provider](../../../../../assets/images/credential_providers_username_password_main_page_with_new_credential_provider.png) --- # Configure a HashiCorp Vault Client Token Credential Provider URL: https://docs.aembit.io/user-guide/access-policies/credential-providers/vault-client-token/ Description: How to configure a Credential Provider for HashiCorp Vault Client Token import { Steps } from '@astrojs/starlight/components'; Aembit's Credential Provider for HashiCorp Vault (or just Vault) enables you to integrate Aembit with your Vault services. 
This Credential Provider allows your Client Workloads to securely authenticate with Vault using OpenID Connect (OIDC) and obtain short-lived JSON Web Tokens (JWTs) for accessing Vault resources. - **OIDC Issuer URL** - OpenID Connect (OIDC) Issuer URL, auto-generated by Aembit, is a dedicated endpoint for OIDC authentication within HashiCorp Vault. ## Accessing Vault on private networks By default, Aembit handles authentication through Aembit Cloud for Vaults accessible from the cloud. For Vault instances on private networks, enable **Private Network Access** during configuration to allow your colocated Agent Proxy to handle authentication directly. Note that your Vault Server Workload must still be accessible from the network edge.
![Private Network Access enabled](../../../../../assets/images/cp_vault_private_network_access.png)
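Whether Aembit Cloud or your colocated Agent Proxy performs the authentication, the call made to Vault is a login against a JWT/OIDC auth method using a token issued from the Credential Provider's OIDC Issuer URL. The following is a minimal sketch of that request, assuming a JWT auth method mounted at an Authentication Path of `jwt` and a Vault role named `aembit-role`; those names and all other values are placeholders, not values from your deployment:
```shell
# Illustrative sketch of a Vault JWT login (HashiCorp Vault JWT/OIDC auth method).
# <vault_host>, <port>, <namespace>, and the role name are placeholders that map to
# the Host, Port, Namespace, Authentication Path, and Role fields described below.
curl -X POST "https://<vault_host>:<port>/v1/auth/jwt/login" \
  --header "X-Vault-Namespace: <namespace>" \
  --data '{"role": "aembit-role", "jwt": "<oidc_token_issued_by_aembit>"}'
```
Vault responds with a short-lived client token, which Aembit then injects into the Client Workload's request to Vault.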
:::caution When enabling Vault Private Network Access, you must use Agent Proxy `v1.21.2670` or higher. ::: ## Configure a Vault Credential Provider To configure a Vault Credential Provider, follow these steps: 1. Log in to your Aembit tenant, and in the left sidebar menu, go to **Credential Providers**. 1. Click **+ New**, which reveals the **Credential Provider** page. 1. In the Credential Providers dialog window, enter the following information: 1. Enter a **Name** and optional **Description**. 1. In the **Credential Type** dropdown, select **Vault Client Token**, revealing new fields. 1. In the **JSON Web Token (JWT)** section, enter a Vault-compatible **Subject** value. If you [configured Vault Roles](/astro/user-guide/access-policies/server-workloads/guides/hashicorp-vault#configure-vault-role) with `bound_subject`, the **Subject** value needs to match the `bound_subject` value exactly. 1. Define any **Custom Claims** you may have by clicking **+ New Claim**, and entering the **Claim Name** and **Value** for each custom claim you add. 1. Enter the remaining details in the **Vault Authentication** section: - **Host** - Hostname of your Vault Server. - **Port** - The port to access the Vault service. Optionally, you may check the **TLS** checkbox to require TLS connections to your Vault service. - **Authentication Path** - The path to your OIDC authentication configuration in the Vault service. - **Role** - The access credential associated with the Vault **Authentication Path**. - **Namespace** - The environment namespace of the Vault service. - **Forwarding Configuration** - Specify how Aembit should forward requests between Vault clusters or servers. This setting ensures Aembit's request handling aligns with your Vault cluster's forwarding configuration. See Vault configuration parameters for more details about request forwarding in Vault. For more info, see the [Vault configuration parameters](https://developer.hashicorp.com/vault/docs/configuration) in the official HashiCorp Vault docs. - **Private Network Access** - Check this if your Vault exists in a private network, or a network that's accessible only from your Edge deployment (for example, when Vault is behind a regional load-balancer). Otherwise, leave it unchecked for Vaults that are accessible from the cloud. See [Accessing Vault on private networks](#accessing-vault-on-private-networks). :::caution When enabling Vault Private Network Access, you must use Agent Proxy `v1.21.2670` or higher. ::: ![Credential Providers - Dialog Window Completed](../../../../../assets/images/cp_vault_complete.png) 1. Click **Save**. Aembit displays your new Vault Credential Provider on the **Credential Providers** page. --- # Server Workloads URL: https://docs.aembit.io/user-guide/access-policies/server-workloads/ Description: This document provides a high-level description of Server Workloads ## Server Workloads by category \{#server-workloads-by-category\} The following sections break down the Server Workloads by category. Click on the links below to learn more about each category and its respective Server Workloads. 
### AI and machine learning \{#ai-and-machine-learning\} - [Claude](/astro/user-guide/access-policies/server-workloads/guides/claude) - [Gemini](/astro/user-guide/access-policies/server-workloads/guides/gemini) - [OpenAI](/astro/user-guide/access-policies/server-workloads/guides/openai) ### CI/CD \{#ci-cd\} - [GitHub REST](/astro/user-guide/access-policies/server-workloads/guides/github-rest) - [GitLab REST](/astro/user-guide/access-policies/server-workloads/guides/gitlab-rest) - [SauceLabs](/astro/user-guide/access-policies/server-workloads/guides/saucelabs) ### Cloud platforms and services \{#cloud-platforms-and-services\} - [Apigee](/astro/user-guide/access-policies/server-workloads/guides/apigee) - [Microsoft Graph](/astro/user-guide/access-policies/server-workloads/guides/microsoft-graph) ### CRM \{#crm\} - [Salesforce REST](/astro/user-guide/access-policies/server-workloads/guides/salesforce-rest) ### Data analytics \{#data-analytics\} - [AWS Redshift](/astro/user-guide/access-policies/server-workloads/guides/aws-redshift) - [Databricks](/astro/user-guide/access-policies/server-workloads/guides/databricks) - [GCP BigQuery](/astro/user-guide/access-policies/server-workloads/guides/gcp-bigquery) - [Looker Studio](/astro/user-guide/access-policies/server-workloads/guides/looker-studio) - [Snowflake](/astro/user-guide/access-policies/server-workloads/guides/snowflake) ### Databases \{#databases\} - [AWS MySQL](/astro/user-guide/access-policies/server-workloads/guides/aws-mysql) - [AWS PostgreSQL](/astro/user-guide/access-policies/server-workloads/guides/aws-postgres) - [Local MySQL](/astro/user-guide/access-policies/server-workloads/guides/local-mysql) - [Local PostgreSQL](/astro/user-guide/access-policies/server-workloads/guides/local-postgres) - [Local Redis](/astro/user-guide/access-policies/server-workloads/guides/local-redis) ### Financial services \{#financial-services\} - [PayPal](/astro/user-guide/access-policies/server-workloads/guides/paypal) - [Stripe](/astro/user-guide/access-policies/server-workloads/guides/stripe) ### IT tooling \{#it-tooling\} - [PagerDuty](/astro/user-guide/access-policies/server-workloads/guides/pagerduty) ### Productivity \{#productivity\} - [Atlassian](/astro/user-guide/access-policies/server-workloads/guides/atlassian) - [Box](/astro/user-guide/access-policies/server-workloads/guides/box) - [Freshsales](/astro/user-guide/access-policies/server-workloads/guides/freshsales) - [Google Drive](/astro/user-guide/access-policies/server-workloads/guides/google-drive) - [Slack](/astro/user-guide/access-policies/server-workloads/guides/slack) ### Security \{#security\} - [Aembit](/astro/user-guide/access-policies/server-workloads/guides/aembit) - [Beyond Identity](/astro/user-guide/access-policies/server-workloads/guides/beyond-identity) - [GitGuardian](/astro/user-guide/access-policies/server-workloads/guides/gitguardian) - [HashiCorp Vault](/astro/user-guide/access-policies/server-workloads/guides/hashicorp-vault) - [KMS](/astro/user-guide/access-policies/server-workloads/guides/kms) - [Okta](/astro/user-guide/access-policies/server-workloads/guides/okta) - [Snyk](/astro/user-guide/access-policies/server-workloads/guides/snyk) --- # Server Workloads URL: https://docs.aembit.io/user-guide/access-policies/server-workloads/guides/ Description: This document provides a high-level description of Server Workloads ## Server Workloads by category \{#server-workloads-by-category\} The following sections break down the Server Workloads by category. 
Click on the links below to learn more about each category and its respective Server Workloads. ### AI and machine learning \{#ai-and-machine-learning\} - [Claude](/astro/user-guide/access-policies/server-workloads/guides/claude) - [Gemini](/astro/user-guide/access-policies/server-workloads/guides/gemini) - [OpenAI](/astro/user-guide/access-policies/server-workloads/guides/openai) ### CI/CD \{#ci-cd\} - [GitHub REST](/astro/user-guide/access-policies/server-workloads/guides/github-rest) - [GitLab REST](/astro/user-guide/access-policies/server-workloads/guides/gitlab-rest) - [SauceLabs](/astro/user-guide/access-policies/server-workloads/guides/saucelabs) ### Cloud platforms and services \{#cloud-platforms-and-services\} - [Apigee](/astro/user-guide/access-policies/server-workloads/guides/apigee) - [Microsoft Graph](/astro/user-guide/access-policies/server-workloads/guides/microsoft-graph) ### CRM \{#crm\} - [Salesforce REST](/astro/user-guide/access-policies/server-workloads/guides/salesforce-rest) ### Data analytics \{#data-analytics\} - [AWS Redshift](/astro/user-guide/access-policies/server-workloads/guides/aws-redshift) - [Databricks](/astro/user-guide/access-policies/server-workloads/guides/databricks) - [GCP BigQuery](/astro/user-guide/access-policies/server-workloads/guides/gcp-bigquery) - [Looker Studio](/astro/user-guide/access-policies/server-workloads/guides/looker-studio) - [Snowflake](/astro/user-guide/access-policies/server-workloads/guides/snowflake) ### Databases \{#databases\} - [AWS MySQL](/astro/user-guide/access-policies/server-workloads/guides/aws-mysql) - [AWS PostgreSQL](/astro/user-guide/access-policies/server-workloads/guides/aws-postgres) - [Local MySQL](/astro/user-guide/access-policies/server-workloads/guides/local-mysql) - [Local PostgreSQL](/astro/user-guide/access-policies/server-workloads/guides/local-postgres) - [Local Redis](/astro/user-guide/access-policies/server-workloads/guides/local-redis) ### Financial services \{#financial-services\} - [PayPal](/astro/user-guide/access-policies/server-workloads/guides/paypal) - [Stripe](/astro/user-guide/access-policies/server-workloads/guides/stripe) ### IT tooling \{#it-tooling\} - [PagerDuty](/astro/user-guide/access-policies/server-workloads/guides/pagerduty) ### Productivity \{#productivity\} - [Atlassian](/astro/user-guide/access-policies/server-workloads/guides/atlassian) - [Box](/astro/user-guide/access-policies/server-workloads/guides/box) - [Freshsales](/astro/user-guide/access-policies/server-workloads/guides/freshsales) - [Google Drive](/astro/user-guide/access-policies/server-workloads/guides/google-drive) - [Slack](/astro/user-guide/access-policies/server-workloads/guides/slack) ### Security \{#security\} - [Aembit](/astro/user-guide/access-policies/server-workloads/guides/aembit) - [Beyond Identity](/astro/user-guide/access-policies/server-workloads/guides/beyond-identity) - [GitGuardian](/astro/user-guide/access-policies/server-workloads/guides/gitguardian) - [HashiCorp Vault](/astro/user-guide/access-policies/server-workloads/guides/hashicorp-vault) - [KMS](/astro/user-guide/access-policies/server-workloads/guides/kms) - [Okta](/astro/user-guide/access-policies/server-workloads/guides/okta) - [Snyk](/astro/user-guide/access-policies/server-workloads/guides/snyk) --- # Aembit API URL: https://docs.aembit.io/user-guide/access-policies/server-workloads/guides/aembit/ Description: This page describes how to configure Aembit to enable a Client Workload to authenticate and interact with the Aembit API. 
[Aembit](https://aembit.io/) is a Workload Identity and Access Management (IAM) Platform for managing access between workloads—Workload IAM. The Aembit API enables Client Workloads, such as CI/CD tools, to authenticate and interact with Aembit without relying on long-lived secrets. This secret-less authentication is achieved through workload attestation via a Trust Provider. By configuring Client Workloads with the appropriate trust and credential components, Aembit ensures secure, role-based access to your tenant's API resources. On this page you can find the Aembit configuration required to work with the Aembit service as a Server Workload using the REST API. :::note[Prerequisites] Before proceeding with the configuration, make sure you have configured your Aembit tenant. For more detailed information on how to use the Aembit API, please refer to the [official Aembit documentation](/astro/api-guide/). ::: ## Credential Provider configuration 1. Create a new Credential Provider. - **Name** - Choose a user-friendly name. - **Credential Type** - [Aembit Access Token](/astro/user-guide/access-policies/credential-providers/aembit-access-token) - **Audience** - Auto-generated by Aembit, this is a tenant-specific server hostname used for authentication and connectivity with the Aembit API. Copy this value for use in the configuration that follows. - **Role** - Choose a role with the appropriate permissions that align with your Client Workload's needs. We recommend following the principle of least privilege, assigning the minimum necessary permissions for the task. If needed, you can [create new custom roles](/astro/user-guide/administration/roles/add-roles). - **Lifetime** - Specify the duration for which the generated access token remains valid. ## Server Workload configuration 1. Create a new Server Workload. - **Name** - Choose a user-friendly name. 2. Configure the service endpoint: - **Host** - Enter the previously copied audience value. - **Application Protocol** - HTTP - **Port** - 443 with TLS - **Forward to Port** - 443 with TLS - **Authentication method** - HTTP Authentication - **Authentication scheme** - Bearer ## Access Policy This page covers the configuration of the Server Workload and Credential Provider, which are tailored to different types of Server Workloads. To complete the setup, you will need to create an access policy for a Client Workload to access the Aembit Server Workload and associate it with the Credential Provider, Trust Provider, and any optional Access Conditions. ## Client Workload configuration Aembit now handles the credentials required to access the Aembit API as a Server Workload, eliminating the need for you to manage them directly. You can safely remove any previously used credentials from the Client Workload. ## Required features - The [TLS Decrypt](/astro/user-guide/deploy-install/advanced-options/tls-decrypt/configure-tls-decrypt) feature is required if the Client Workload uses the Agent Proxy to access the Aembit API. --- # Apigee URL: https://docs.aembit.io/user-guide/access-policies/server-workloads/guides/apigee/ Description: This page describes how to configure Aembit to work with the Apigee Server Workload. [Google Apigee](https://cloud.google.com/apigee?hl=en) is a full lifecycle API management platform that enables organizations to design, secure, deploy, monitor, and scale APIs.
With its comprehensive set of features and scalable architecture, Google Apigee empowers developers to build efficient, reliable, and secure APIs that drive business growth. Below you can find the Aembit configuration required to work with the Google Apigee service as a Server Workload using the REST APIs. Aembit supports multiple authentication/authorization methods for Apigee. This page describes scenarios where the Credential Provider is configured for Apigee via: - [OAuth 2.0 Authorization Code (3LO)](/astro/user-guide/access-policies/server-workloads/guides/apigee#oauth-20-authorization-code) - [API Key](/astro/user-guide/access-policies/server-workloads/guides/apigee#api-key) :::note[Prerequisites] Before proceeding with the configuration, ensure you have the following: - An active Google Cloud account - An existing API Proxy (API Key Method) - App set up in the Google Apigee platform If you have not created a proxy before, you can follow the steps in the next section. For more information on creating an API Proxy, please refer to the [official Google documentation](https://cloud.google.com/apigee/docs/api-platform/get-started/get-started). ::: ## OAuth 2.0 Authorization Code ### Server Workload Configuration 1. Create a new Server Workload. - **Name** - Choose a user-friendly name. 2. Configure the Service endpoint: - **Host** - `apigee.googleapis.com` - **Application Protocol** - HTTP - **Port** - 443 with TLS - **Forward to Port** - 443 with TLS - **Authentication method** - HTTP Authentication - **Authentication scheme** - Bearer ### Credential Provider Configuration 1. Sign in to the Google Cloud Console and navigate to the [Credentials](https://console.cloud.google.com/apis/credentials) page. Ensure you are working within a GCP project for which you have authorization. 2. On the **Credentials** dashboard, click **Create Credentials** located in the top left corner and select the **OAuth client ID** option. ![Create OAuth client ID](../../../../../../assets/images/gcp_create_oauth_client_id.png) 3. If there is no configured Consent Screen for your project, you will see a **Configure Consent Screen** button on the page you are directed to. Click the button to continue. ![Configure Consent Screen](../../../../../../assets/images/gcp_no_consent_screen.png) 4. Choose **User Type** and click **Create**. - Provide a name for your app. - Choose a user support email from the dropdown menu. - App logo and app domain fields are optional. - Enter at least one email for the Developer contact information field. - Click **Save and Continue**. - You may skip the Scopes step by clicking **Save and Continue** once again. - In the **Summary** step, review the details of your app and click **Back to Dashboard**. 5. Navigate back to the [Credentials](https://console.cloud.google.com/apis/credentials) page, click **Create Credentials**, and select the **OAuth client ID** option again. - Choose **Web Application** for Application Type. - Provide a name for your web client. - Switch to the Aembit UI to create a new Credential Provider, selecting the OAuth 2.0 Authorization Code credential type. After setting up the Credential Provider, copy the auto-generated **Callback URL**. - Return to Google Cloud Console and paste the copied URL into the **Authorized redirect URIs** field. - Click **Create**. 6. A pop-up window will appear. Copy both the **Client ID** and the **Client Secret**. Store them for later use in the tenant configuration. 7. Edit the existing Credential Provider created in the previous steps.
- **Name** - Choose a user-friendly name. - **Credential Type** - [OAuth 2.0 Authorization Code](/astro/user-guide/access-policies/credential-providers/oauth-authorization-code) - **Callback URL (Read-Only)** - An auto-generated Callback URL from Aembit Admin. - **Client Id** - Provide the Client ID copied from Google. - **Client Secret** - Provide the Secret copied from Google. - **Scopes** - Enter the scopes you will use for Apigee (e.g. `https://www.googleapis.com/auth/cloud-platform`) A full list of GCP Scopes can be found at [OAuth 2.0 Scopes for Google APIs](https://developers.google.com/identity/protocols/oauth2/scopes). - **OAuth URL** - `https://accounts.google.com` Click on **URL Discovery** to populate the Authorization and Token URL fields, which can be left as populated. - **PKCE Required** - Off - **Lifetime** - 1 year (A Google Cloud Platform project with an OAuth consent screen configured for an external user type and a publishing status of Testing is issued a refresh token expiring in 7 days).
Google does not specify a refresh token lifetime for projects configured with the internal user type; this value is recommended by Aembit. For more information, refer to the [official Google documentation](https://developers.google.com/identity/protocols/oauth2#expiration). 8. Click **Save** to save your changes to the Credential Provider. 9. In the Aembit UI, click the **Authorize** button. You will be directed to a page where you can choose your Google account first. Then click **Allow** to complete the OAuth 2.0 Authorization Code flow. You will see a success page and will be redirected to Aembit automatically. You can also verify your flow is complete by checking the **State** value in the Credential Provider. After completion, it should be in a **Ready** state. ![Credential Provider - Ready State](../../../../../../assets/images/credential_providers_auth_code_status_ready.png) :::caution Once the set lifetime ends, the retrieved credential will expire and no longer be active. Aembit will notify you before this happens. Please ensure you reauthorize your credential before it expires. ::: ## API Key ### Create Apigee API Proxy :::note The provided steps below outline a basic configuration for creating an Apigee API proxy. Keep in mind that Apigee supports various customizations not detailed in these instructions. ::: 1. Navigate to the [Apigee UI in Cloud console](https://console.cloud.google.com/apigee) and sign in with your Google Cloud account. 2. In the left navigation pane, select **API Proxies** under the Proxy development section. 3. On the **API Proxies** dashboard, click **Create** in the top left corner. ![Create API Proxy](../../../../../../assets/images/apigee_create_api_proxy.png) 4. You will be prompted to choose a proxy type; keep the default **Reverse proxy** option and provide any other required information. 5. Once you have configured your proxy, deploy it to make the API proxy active. ### Server Workload Configuration To locate the environment group hostname for your proxy in the Apigee UI, follow these steps: - Navigate to the [Apigee UI](https://apigee.google.com/) and sign in with your Google Cloud account. - In the Apigee UI, go to **Management > Environments > Groups**. - Identify the row displaying the environment where your proxy is deployed. - Copy the endpoint for later use in the tenant configuration. 1. Create a new Server Workload. - **Name** - Choose a user-friendly name. 2. Configure the service endpoint: - **Host** - `<environment_group_hostname>` (Provide the endpoint copied from the Apigee UI) - **Application Protocol** - HTTP - **Port** - 443 with TLS - **Forward to Port** - 443 with TLS - **Authentication method** - API Key - **Authentication scheme** - Query Parameter - **Query Parameter** - apikey ### Credential Provider Configuration 1. Navigate to the [Apigee UI in Cloud console](https://console.cloud.google.com/apigee) and sign in with your Google Cloud account. 2. In the left navigation pane, select **Apps** to access a list of your applications. 3. Click on the name of the app to view its details. 4. Within the **Credentials** section, click the icon to **Copy to clipboard** next to **Key** and securely store the key for later use in the tenant configuration. ![Copy Apigee API Key](../../../../../../assets/images/apigee_api_key.png) 5. Create a new Credential Provider. - **Name** - Choose a user-friendly name. - **Credential Type** - [API Key](/astro/user-guide/access-policies/credential-providers/api-key) - **API Key** - Provide the key copied from Google Cloud Apigee console.
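To make the query-parameter injection concrete, here is a hedged sketch of what the traffic looks like once the Access Policy is in place. The hostname and proxy base path are placeholders; only the `apikey` query parameter name comes from the Server Workload configuration above:
```shell
# What the Client Workload sends: a plain request with no credentials
# (hostname and path are placeholders, not values from your deployment).
curl "https://<environment_group_hostname>/<api_proxy_base_path>/resource"

# Roughly what the target receives after Aembit authorizes the request and
# Agent Proxy appends the API key from the Credential Provider:
#   https://<environment_group_hostname>/<api_proxy_base_path>/resource?apikey=<key_from_credential_provider>
```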
## Client Workload Configuration Aembit now handles the credentials required to access the Server Workload, eliminating the need for you to manage them directly. You can safely remove any previously used credentials from the Client Workload. If you access the Server Workload through an SDK or library, it is possible that the SDK/library may still require credentials to be present for initialization purposes. In this scenario, you can provide placeholder credentials. Aembit will overwrite these placeholder credentials with the appropriate ones during the access process. ## Access Policy - Create an Access Policy for a Client Workload to access the Apigee Server Workload. Assign the newly created Credential Provider to this Access Policy. ## Required Features - You will need to configure the [TLS Decrypt](/astro/user-guide/deploy-install/advanced-options/tls-decrypt/configure-tls-decrypt) feature to work with the Apigee Server Workload. --- # Atlassian URL: https://docs.aembit.io/user-guide/access-policies/server-workloads/guides/atlassian/ Description: This page describes how to configure Aembit to work with the Atlassian Server Workload. # [Atlassian](https://www.atlassian.com/) is a cloud-based service offering that facilitates collaborative work and project management for teams by providing a suite of tools, which include: - Jira for project tracking - Confluence for document collaboration - Bitbucket for version control; and - other integrated applications Below you can find the Aembit configuration required to work with the Atlassian Cloud service as a Server Workload using the Atlassian REST APIs. Aembit supports multiple authentication/authorization methods for Atlassian. This page describes scenarios where the Credential Provider is configured for Atlassian via: - [OAuth 2.0 Authorization Code (3LO)](/astro/user-guide/access-policies/server-workloads/guides/atlassian#oauth-20-authorization-code) - [API Key](/astro/user-guide/access-policies/server-workloads/guides/atlassian#api-key) :::note[Prerequisites] Before proceeding with the configuration, you will need to have an Atlassian tenant and related Atlassian Developer account. ::: ## OAuth 2.0 Authorization Code ### Server Workload Configuration 1. Create a new Server Workload. - **Name** - Choose a user-friendly name. 2. Configure the service endpoint: - **Host** - `.atlassian.net` - **Application Protocol** - HTTP - **Port** - 443 with TLS - **Forward to Port** - 443 with TLS - **Authentication method** - HTTP Authentication - **Authentication scheme** - Bearer ### Credential Provider Configuration 1. Log into to the [Atlassian Developer Console](https://developer.atlassian.com/console/myapps/). 2. Click on **Create** and select the **OAuth 2.0 integration** option. ![Create an App](../../../../../../assets/images/atlassian_developer_console_create_app.png) 3. Provide a name for your app, check the agreement box, and click **Create** . 4. In the left pane, select **Authorization**, and then click **Add** under the Action column. 5. Switch to the Aembit UI to create a new Credential Provider, selecting the OAuth 2.0 Authorization Code credential type. After setting up the Credential Provider, copy the auto-generated **Callback URL**. 6. Return to Atlassian and paste the copied URL into the **Callback URL** field. 7. In the left pane, select **Permissions**, and then click **Add** under the Action column of the API that best suits your project needs. 
After clicking **Add**, it will change to **Configure**; click **Configure** to edit. ![Atlassian Scopes](../../../../../../assets/images/atlassian_permissions.png) 8. On the redirected page, click **Edit Scopes**, add the necessary scopes for your application, and then click **Save**. Copy the **Code** version of all selected scopes and save this information for future use. 9. In the left pane, select **Settings**, scroll down to the **Authentication details** section, and copy both the **Client ID** and the **Secret**. Store them for later use in the tenant configuration. ![Copy Client ID and Client Secret](../../../../../../assets/images/atlassian_copy_client_id_and_secret.png) 10. Edit the existing Credential Provider created in the previous steps. - **Name** - Choose a user-friendly name. - **Credential Type** - [OAuth 2.0 Authorization Code](/astro/user-guide/access-policies/credential-providers/oauth-authorization-code) - **Callback URL (Read-Only)** - An auto-generated Callback URL from Aembit Admin. - **Client Id** - Provide the Client ID copied from Atlassian. - **Client Secret** - Provide the Secret copied from Atlassian. - **Scopes** - Enter the scopes you use, space delimited. You must include the `offline_access` scope required for the refresh token (e.g., `offline_access read:jira-work read:servicedesk-request`). - **OAuth URL** - `https://auth.atlassian.com` Click on **URL Discovery** to populate the Authorization and Token URL fields, which can be left as populated. - **PKCE Required** - Off (PKCE is not supported by Atlassian, so leave this field unchecked). - **Lifetime** - 1 year (Absolute expiry time according to Atlassian).
For more information on rotating the refresh token, please refer to the [official Atlassian documentation](https://developer.atlassian.com/cloud/jira/platform/oauth-2-3lo-apps/#use-a-refresh-token-to-get-another-access-token-and-refresh-token-pair). 11. Click **Save** to save your changes on the Credential Provider. 12. In the Aembit UI, click the **Authorize** button. You are directed to a page where you can review the access request. Click **Accept** to complete the OAuth 2.0 Authorization Code flow. You should see a success page and be redirected to Aembit automatically. You can also verify that your flow is complete by checking the **State** value in the Credential Provider. After completion, it should be in a **Ready** state. ![Credential Provider - Ready State](../../../../../../assets/images/credential_providers_auth_code_status_ready.png) :::caution Once the set lifetime ends, the retrieved credential will expire and no longer be active. Aembit will notify you before this happens. Please ensure you reauthorize your credential before it expires. ::: ## API Key :::note This section is labeled as API Key because, while it requires a username (your Atlassian email) and password, the password is actually an API key. Atlassian uses HTTP Basic Authentication, and we use the Username & Password Credential Provider in the Aembit UI to implement this method. ::: ### Server Workload Configuration 1. Create a new Server Workload. - **Name** - Choose a user-friendly name. 2. Configure the service endpoint: - **Host** - `<your-domain>.atlassian.net` - **Application Protocol** - HTTP - **Port** - 443 with TLS - **Forward to Port** - 443 with TLS - **Authentication method** - HTTP Authentication - **Authentication scheme** - Basic ### Credential Provider Configuration 1. Sign in to your Atlassian account. 2. Navigate to the [Atlassian account - API Tokens](https://id.atlassian.com/manage-profile/security/api-tokens) page. 3. Click on **Create API token**. 4. In the dialog that appears, enter a memorable and concise label for your token, and then click **Create**. ![Create Atlassian API token](../../../../../../assets/images/atlassian_api_tokens.png) 5. Click **Copy to clipboard** and securely store the token for later use in the configuration on the tenant. For more information on how to store your API token, please refer to the [official Atlassian documentation](https://support.atlassian.com/atlassian-account/docs/manage-api-tokens-for-your-atlassian-account/). 6. Create a new Credential Provider. - **Name** - Choose a user-friendly name. - **Credential Type** - [Username & Password](/astro/user-guide/access-policies/credential-providers/username-password) - **Username** - Your email address for the Atlassian account used to create the token. - **Password** - Provide the token copied from Atlassian. ## Client Workload Configuration Aembit now handles the credentials required to access the Server Workload, eliminating the need for you to manage them directly. You can safely remove any previously used credentials from the Client Workload. If you access the Server Workload through an SDK or library, it is possible that the SDK/library may still require credentials to be present for initialization purposes. In this scenario, you can provide placeholder credentials. Aembit will overwrite these placeholder credentials with the appropriate ones during the access process. ## Access Policy - Create an Access Policy for a Client Workload to access the Atlassian Server Workload.
Assign the newly created Credential Provider to this Access Policy. ## Required Features - You will need to configure the [TLS Decrypt](/astro/user-guide/deploy-install/advanced-options/tls-decrypt/configure-tls-decrypt) feature to work with the Atlassian Server Workload. --- # Amazon RDS for MySQL URL: https://docs.aembit.io/user-guide/access-policies/server-workloads/guides/aws-mysql/ Description: This page describes how to configure Aembit to work with the Amazon RDS for MySQL Server Workload. # [Amazon RDS for MySQL](https://aws.amazon.com/rds/mysql/) is a robust and fully managed relational database service provided by Amazon Web Services, specifically tailored to streamline the deployment, administration, and scalability of MySQL databases in the cloud. Below you can find the Aembit configuration required to work with AWS RDS for MySQL as a Server Workload using MySQL-compatible CLI, application, or a library. ## Prerequisites Before proceeding with the configuration, ensure you have an AWS tenant (or [sign up](https://portal.aws.amazon.com/billing/signup#/start/email) for one) and an Amazon RDS for MySQL database. If you have not created a database before, you can follow the steps in the next section. For more information on creating an Amazon RDS DB instance, please refer to the [official Amazon documentation](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Tutorials.WebServerDB.CreateDBInstance.html). ### Create Amazon RDS MySQL Database 1. Sign in to the AWS Management Console and navigate to the [Amazon RDS console](https://console.aws.amazon.com/rds/). 2. In the left navigation pane, select **Databases**, and then click **Create Database** in the top right corner. ![AWS RDS Create Database](../../../../../../assets/images/aws_rds_create_database.png) 3. Configure the database according to your preferences. Below are key choices: - Under **Engine options**, choose **MySQL** for the engine type. - Under **Engine options**, select a version from the **8.0.x** series. - Under **Settings**, enter a name for the **DB cluster identifier**; this will be used in the endpoint. - In **Settings**, expand the **Credentials Settings** section. Use the **Master username** and **master password** as Credential Provider details. You can either auto-generate a password or type your own. Save this information for future use. :::note In this example, we are using the master username and password for demonstration purposes; however, it is advisable to create a dedicated user with appropriate privileges for enhanced security. ::: - In **Connectivity**, find the **Publicly Accessible** option and set it to **Yes**. :warning: Setting the **Publicly Accessible** option to **Yes** is done here purely for demonstration purposes. In normal circumstances, it is recommended to keep the RDS instance not publicly accessible for enhanced security. - In **Connectivity**, ensure the **VPC security group (firewall)** configuration is in place to allow client workload/agent proxy communication. - In **Connectivity**, expand the **Additional Configuration** section and verify the **Database Port** is set to 3306. - In **Database authentication**, select **Password authentication**. - In **Additional configuration**, specify an **Initial database name**. 4. After making all of your selections, click **Create Database**. ## Server Workload Configuration To retrieve the connection information for a DB instance in the AWS Management Console: 1. 
Sign in to the AWS Management Console and navigate to the [Amazon RDS console](https://console.aws.amazon.com/rds/). 2. In the left navigation pane, select **Databases** to view a list of your DB instances. 3. Click on the name of the DB instance to view its details. 4. Navigate to the **Connectivity & security** tab and copy the endpoint. ![AWS RDS Database Endpoint](../../../../../../assets/images/aws_mysql_endpoint.png) 5. Create a new Server Workload. - **Name** - Choose a user-friendly name. 6. Configure the service endpoint: - **Host** - `...rds.amazonaws.com` (Provide the endpoint copied from AWS) - **Application Protocol** - MySQL - **Port** - 3306 - **Forward to Port** - 3306 with TLS - **Forward TLS Verification** - Full - **Authentication method** - Password Authentication - **Authentication scheme** - Password ## Credential Provider Configuration 1. Create a new Credential Provider. - **Name** - Choose a user-friendly name. - **Credential Type** - [Username & Password](/astro/user-guide/access-policies/credential-providers/username-password) - **Username** - Provide the login ID for the master user of your DB cluster. - **Password** - Provide the master password of your DB cluster. ## Client Workload Configuration Aembit now handles the credentials required to access the Server Workload, eliminating the need for you to manage them directly. You can safely remove any previously used credentials from the Client Workload. If you access the Server Workload through an SDK or library, it is possible that the SDK/library may still require credentials to be present for initialization purposes. In this scenario, you can provide placeholder credentials. Aembit will overwrite these placeholder credentials with the appropriate ones during the access process. ## Access Policy - Create an access policy for a Client Workload to access the Amazon RDS for MySQL Server Workload and assign the newly created Credential Provider to it. --- # Amazon RDS for PostgreSQL URL: https://docs.aembit.io/user-guide/access-policies/server-workloads/guides/aws-postgres/ Description: This page describes how to configure Aembit to work with the Amazon RDS for PostgreSQL Server Workload. # [Amazon RDS for PostgreSQL](https://aws.amazon.com/rds/postgresql) is a fully managed relational database service provided by Amazon Web Services, offering a scalable and efficient solution for deploying, managing, and scaling PostgreSQL databases in the cloud. Below you can find the Aembit configuration required to work with AWS RDS for PostgreSQL as a Server Workload using a PostgreSQL-compatible CLI, application, or library. ## Prerequisites Before proceeding with the configuration, ensure you have an AWS tenant (or [sign up](https://portal.aws.amazon.com/billing/signup#/start/email) for one) and an Amazon RDS for PostgreSQL database. If you have not created a database before, you can follow the steps in the next section. For more information on creating an Amazon RDS DB instance, please refer to the [official Amazon documentation](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Tutorials.WebServerDB.CreateDBInstance.html). ### Create Amazon RDS PostgreSQL Database 1. Sign in to the AWS Management Console and navigate to the [Amazon RDS console](https://console.aws.amazon.com/rds/). 2. In the left navigation pane, select **Databases**, and then click **Create Database** in the top right corner. ![AWS RDS Create Database](../../../../../../assets/images/aws_rds_create_database.png) 3.
Configure the database according to your preferences. Below are key choices: - Under **Engine options**, choose **PostgreSQL** for the engine type. - Under **Engine options**, select a version **16** or from the **15** series. - Under **Settings**, enter a name for the **DB cluster identifier**; this will be used in the endpoint. - In **Settings**, expand the **Credentials Settings** section. Use the **Master username** and **master password** as Credential Provider details. You can either auto-generate a password or type your own. Save this information for future use. :::note In this example, we are using the master username and password for demonstration purposes; however, it is advisable to create a dedicated user with appropriate privileges for enhanced security. ::: - In **Connectivity**, find the **Publicly Accessible** option and set it to **Yes**. :warning: Setting the **Publicly Accessible** option to **Yes** is done here purely for demonstration purposes. In normal circumstances, it is recommended to keep the RDS instance not publicly accessible for enhanced security. - In **Connectivity**, ensure the **VPC security group (firewall)** configuration is in place to allow client workload/agent proxy communication. - In **Connectivity**, expand the **Additional Configuration** section and verify the **Database Port** is set to 5432. - In **Database authentication**, select **Password authentication**. - In **Additional configuration**, specify an **Initial database name**. 4. After making all of your selections, click **Create Database**. ## Server Workload Configuration To retrieve the connection information for a DB instance in the AWS Management Console: 1. Sign in to the AWS Management Console and navigate to the [Amazon RDS console](https://console.aws.amazon.com/rds/). 2. In the left navigation pane, select **Databases** to view a list of your DB instances. 3. Click on the name of the DB instance to view its details. 4. Navigate to the **Connectivity & security** tab and copy the endpoint. ![AWS RDS Database Endpoint](../../../../../../assets/images/aws_postgres_endpoint.png) 5. Create a new Server Workload. - **Name** - Choose a user-friendly name. 6. Configure the service endpoint: - **Host** - `...rds.amazonaws.com` (Provide the endpoint copied from AWS) - **Application Protocol** - Postgres - **Port** - 5432 - **Forward to Port** - 5432 with TLS - **Forward TLS Verification** - Full - **Authentication method** - Password Authentication - **Authentication scheme** - Password ## Credential Provider Configuration 1. Create a new Credential Provider. - **Name** - Choose a user-friendly name. - **Credential Type** - [Username & Password](/astro/user-guide/access-policies/credential-providers/username-password) - **Username** - Provide login ID for the master user of your DB cluster. - **Password** - Provide the Master password of your DB cluster. ## Client Workload Configuration Aembit now handles the credentials required to access the Server Workload, eliminating the need for you to manage them directly. You can safely remove any previously used credentials from the Client Workload. If you access the Server Workload through an SDK or library, it is possible that the SDK/library may still require credentials to be present for initialization purposes. In this scenario, you can provide placeholder credentials. Aembit will overwrite these placeholder credentials with the appropriate ones during the access process. 
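As an illustration of the placeholder-credential scenario described above, the sketch below connects with the standard `psql` client using dummy credentials; Aembit replaces them with the master username and password from the Credential Provider during the access process. The endpoint, database, user, and password values are placeholders, not values from your environment.

```shell
# Minimal sketch: connect with placeholder credentials; Aembit injects the
# real master username and password when the request is intercepted.
# The endpoint, database, user, and password below are placeholders.
PGPASSWORD=placeholder psql \
  -h your-db-identifier.abc123example.us-east-1.rds.amazonaws.com \
  -p 5432 \
  -U placeholder_user \
  -d your_database \
  -c "SELECT version();"
```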
## Access Policy - Create an access policy for a Client Workload to access the Amazon RDS for PostgreSQL Server Workload and assign the newly created Credential Provider to it. --- # Amazon Redshift URL: https://docs.aembit.io/user-guide/access-policies/server-workloads/guides/aws-redshift/ Description: This page describes how to configure Aembit to work with the Amazon Redshift Server Workload. # [Amazon Redshift](https://aws.amazon.com/redshift/) is a high-performance, fully managed cloud data warehouse designed for rapid query execution and storage of petabyte-scale datasets. This high-performance solution combines speed and scalability, making it ideal for businesses seeking efficient and flexible analytics capabilities in the cloud. Below you can find the Aembit configuration required to work with Amazon Redshift as a Server Workload using the AWS or SQL-compatible CLI, application, or a library. ## Prerequisites Before proceeding with the configuration, ensure you have an AWS tenant (or [sign up](https://portal.aws.amazon.com/billing/signup#/start/email) for one) and an Amazon Redshift managed cluster. If you have not created a cluster before, you can follow the steps in the next section. For more information on creating Amazon Redshift resources, please refer to the [official Amazon documentation](https://docs.aws.amazon.com/redshift/latest/mgmt/overview.html). ### Create a cluster with Amazon Redshift 1. Sign in to the AWS Management Console and navigate to the [Amazon Redshift console](https://console.aws.amazon.com/redshiftv2) and choose **Clusters** in the navigation pane. ![Amazon Redshift Clusters](../../../../../../assets/images/aws_redshift_clusters.png) 2. Click on **Create Cluster** and configure the cluster according to your preferences. Below are key choices: - Under **Cluster configuration**, enter a name for the **cluster identifier**; this will be used in the endpoint. - In **Database configurations**, set an **Admin user name**, and either auto-generate or provide an **Admin password**. Save this information for future use. :::note In this example, we are using the `admin` username and password for demonstration purposes; however, it is advisable to create a dedicated user with appropriate privileges for enhanced security. ::: - In **Additional configuration**, you may turn off **Use defaults** and customize settings further. - In **Network and security**, find the **Publicly Accessible** option and check the box for **Turn on Publicly accessible**. :warning: Setting the **Publicly Accessible** option to **Yes** is done here purely for demonstration purposes. In normal circumstances, it is recommended to keep the instances not publicly accessible for enhanced security. - In **Network and security**, ensure the **VPC security group (firewall)** configuration is in place to allow Client Workload/Agent Proxy communication. - In **Database configurations**, specify a **Database name** and verify the **Database Port** is set to 5439. 3. After making all of your selections, click **Create cluster**. ## Server Workload Configuration To retrieve the connection information for a cluster in the Amazon Redshift Console: 1. Sign in to the AWS Management Console and navigate to the [Amazon Redshift console](https://console.aws.amazon.com/redshiftv2). 2. In the left navigation pane, select **Clusters** to view your clusters. 3. Click on the name of the cluster to view details. 4. In **General Information** copy the endpoint (excluding port and database name). 
![Amazon Redshift Cluster Endpoint](../../../../../../assets/images/aws_redshift_cluster_endpoint.png) 5. Create a new Server Workload. - **Name** - Choose a user-friendly name. 6. Configure the service endpoint: - **Host** - `...redshift.amazonaws.com` (Provide the endpoint copied from AWS) - **Application Protocol** - Redshift - **Port** - 5439 - **Forward to Port** - 5439 - **Authentication method** - Password Authentication - **Authentication scheme** - Password ## Credential Provider Configuration 1. Create a new Credential Provider. - **Name** - Choose a user-friendly name. - **Credential Type** - [Username & Password](/astro/user-guide/access-policies/credential-providers/username-password) - **Username** - Provide the login ID for the admin user of your cluster. - **Password** - Provide the admin password of your cluster. ## Client Workload Configuration Aembit now handles the credentials required to access the Server Workload, eliminating the need for you to manage them directly. You can safely remove any previously used credentials from the Client Workload. If you access the Server Workload through an SDK or library, it is possible that the SDK/library may still require credentials to be present for initialization purposes. In this scenario, you can provide placeholder credentials. Aembit will overwrite these placeholder credentials with the appropriate ones during the access process. ## Access Policy - Create an access policy for a Client Workload to access the Amazon Redshift Server Workload and assign the newly created Credential Provider to it. --- # Beyond Identity URL: https://docs.aembit.io/user-guide/access-policies/server-workloads/guides/beyond-identity/ Description: This page describes how to configure Aembit to work with the Beyond Identity Server Workload. # [Beyond Identity](https://www.beyondidentity.com/) is a passwordless authentication service designed to bolster security measures for various applications and platforms. The Beyond Identity API serves as a developer-friendly interface, enabling seamless integration of advanced cryptographic techniques to eliminate reliance on traditional passwords. Below you can find the Aembit configuration required to work with the Beyond Identity service as a Server Workload using the Beyond Identity API. ## Prerequisites Before proceeding with the configuration, ensure you have the following: - A Beyond Identity tenant. - An app configured in your Beyond Identity tenant. This can either be a custom application you set up or the built-in **Beyond Identity Management API app**. If you have not configured an app yet, follow the steps outlined in the next section or refer to the [official Beyond Identity documentation](https://developer.beyondidentity.com/docs/add-an-application) for more detailed instructions. ### Add a new app in Beyond Identity 1. Log in to the [Beyond Identity Admin Console](https://console-us.beyondidentity.com/login). 2. Navigate to the left pane, select **Apps**, and then click on **Add an application** in the top-right corner. ![Beyond Identity Add an App](../../../../../../assets/images/beyond_identity_add_app.png) 3. Configure the app based on your preferences. Below are key choices: - Enter a name for the **Display Name**. - Choose **OAuth2** for the Protocol under **Client Configuration**. - Choose **Confidential** for the Client Type. - Choose **Disabled** for PKCE. - Choose **Client Secret Basic** for the Token Endpoint Auth Method. - Select **Client Credentials** for the Grant Type.
- Optionally, choose the scopes you intend to use in the **Token Configuration** section under **Allowed Scopes**. 4. After making your selections, click **Submit** to save the new app. ## Server Workload Configuration 1. Create a new Server Workload. - **Name** - Choose a user-friendly name. 2. Configure the service endpoint: - **Host** - `api-us.beyondidentity.com` - **Application Protocol** - HTTP - **Port** - 443 with TLS - **Forward to Port** - 443 with TLS - **Authentication method** - HTTP Authentication - **Authentication scheme** - Bearer ## Credential Provider Configuration 1. Log in to the [Beyond Identity Admin Console](https://console-us.beyondidentity.com/login). 2. Navigate to the left pane and select **Apps** to access a list of your applications within your realm. 3. Choose your pre-configured application or use the default **Beyond Identity Management API** app. 4. In the External Protocol tab, copy the **Token Endpoint**. From the Client Configuration section, also copy both the **Client ID** and **Client Secret**. Keep these details stored for later use in the tenant configuration. ![App Details | Copy Token Endpoint, Client ID and Tenant ID](../../../../../../assets/images/beyond_identity_app_details.png) 5. Create a new Credential Provider. - **Name** - Choose a user-friendly name. - **Credential Type** - [OAuth 2.0 Client Credentials](/astro/user-guide/access-policies/credential-providers/oauth-client-credentials) - **Token endpoint** - Provide the token endpoint copied from Beyond Identity. - **Client ID** - Provide the client ID copied from Beyond Identity. - **Client Secret** - Provide the client secret copied from Beyond Identity. - **Scopes** - Enter the scopes you use, space delimited. (You can find scopes in the App details, Token Configuration section under **Allowed Scopes**) ## Client Workload Configuration Aembit now handles the credentials required to access the Server Workload, eliminating the need for you to manage them directly. You can safely remove any previously used credentials from the Client Workload. If you access the Server Workload through an SDK or library, it is possible that the SDK/library may still require credentials to be present for initialization purposes. In this scenario, you can provide placeholder credentials. Aembit will overwrite these placeholder credentials with the appropriate ones during the access process. ## Access Policy - Create an Access Policy for a Client Workload to access the Beyond Identity Server Workload. Assign the newly created Credential Provider to this Access Policy. ## Required Features - You will need to configure the [TLS Decrypt](/astro/user-guide/deploy-install/advanced-options/tls-decrypt/configure-tls-decrypt) feature to work with the Beyond Identity Server Workload. --- # Box URL: https://docs.aembit.io/user-guide/access-policies/server-workloads/guides/box/ Description: This page describes how to configure Aembit to work with the Box Server Workload. # [Box](https://www.box.com/en-gb/home) is a cloud content management and file sharing service designed to help businesses securely store, manage, and share files online. The Box API provides developers with tools to integrate Box's content management features into their own applications, enabling efficient collaboration and secure file handling. Below you can find the Aembit configuration required to work with the Box service as a Server Workload using the Box API. 
## Prerequisites Before proceeding with the configuration, ensure you have the following: - Box tenant. - A custom authorized application using Server Authentication in the Box tenant. If you have not created an app yet, follow the steps outlined in the next section or refer to the [official Box Developer documentation](https://developer.box.com/guides/authentication/client-credentials/) for more detailed instructions. - 2FA enabled on your Box tenant to view and copy the application's client secret. ### Create New App In Box 1. Log in to the [Box Developer Console](https://app.box.com/developers/console). 2. Navigate to the left pane, select **My Apps**, and then click on **Create New App** in the top-right corner. ![Box Create New App](../../../../../../assets/images/box_create_app.png) 3. Choose **Custom App**. A pop-up window will appear. Fill in the name and optional description field, choose the purpose, and then click **Next** to proceed. 4. Select **Server Authentication (Client Credentials Grant)** as the authentication method and click **Create App**. 5. Before the application can be used, a Box Admin must authorize it within the Box Admin Console. Navigate to the **Authorization** tab and click **Review and Submit** to send the request. A pop-up window will appear. Fill in the description field and click **Submit** to send. After your admin [authorizes the app](/astro/user-guide/access-policies/server-workloads/guides/box#authorize-app-as-an-admin), the Authorization Status and Enablement Status should both be green. ![Box Authorized App](../../../../../../assets/images/box_authorized_app.png) 6. Go back to the **Configuration** tab and scroll down to the **Application Scopes** section. Choose the scopes that best suit your project needs and click **Save Changes** in the top-right corner. ### Authorize App As an Admin 1. Navigate to the [Admin Console](https://app.box.com/master). 2. In the left panel, click on **Apps**, and then in the right panel, click on **Custom Apps Manager** in the ribbon list to view a list of your Server Authentication Apps. 3. Click the 3-dot-icon of the app that requires authorization. 4. Choose **Authorize App** from the drop-down menu. ![Box Authorize App as Admin](../../../../../../assets/images/box_authorize_app_as_admin.png) 5. A pop-up window will appear. Click **Authorize** to proceed. ## Server Workload Configuration 1. Create a new Server Workload. - **Name** - Choose a user-friendly name. 2. Configure the service endpoint: - **Host** - `api.box.com` - **Application Protocol** - HTTP - **Port** - 443 with TLS - **Forward to Port** - 443 with TLS - **Authentication method** - HTTP Authentication - **Authentication scheme** - Bearer ## Credential Provider Configuration 1. Log in to the [Box Developer Console](https://app.box.com/developers/console). 2. Navigate to the left pane, select **My Apps**, and then click on the name of the app to view details. 3. In the General Settings tab, copy the **Enterprise ID**. ![General Settings | Copy Enterprise ID](../../../../../../assets/images/box_copy_enterprise_id.png) 4. In the Configuration tab, scroll down to the **OAuth 2.0 Credentials** section. Click **Fetch Client Secret** and then copy both the **Client ID** and **Client Secret**. Keep these details stored for later use in the tenant configuration. ![Configuration | Copy Client ID and Tenant ID](../../../../../../assets/images/box_copy_client_id_secret.png) 5. Create a new Credential Provider. - **Name** - Choose a user-friendly name. 
- **Credential Type** - [OAuth 2.0 Client Credentials](/astro/user-guide/access-policies/credential-providers/oauth-client-credentials) - **Token endpoint** - `https://api.box.com/oauth2/token` - **Client ID** - Provide the client ID copied from Box. - **Client Secret** - Provide the client secret copied from Box. - **Scopes** - You can leave this field **empty**, as Box will default to your selected scopes on the Developer Console, or specify the scopes, such as `root_readonly`. For more detailed information for scopes, you can refer to the [official Box Developer documentation](https://developer.box.com/guides/api-calls/permissions-and-errors/scopes/#scopes-oauth-2-authorization). - **Credential Style** - POST Body **Additional Parameters** :::note The following parameters are used to authenticate as the application's **Service Account**. To authenticate as a **Managed User**, refer to the [official Box Developer documentation](https://developer.box.com/guides/authentication/client-credentials/) for additional configuration steps. For security purposes, we recommend using the service account option and collaborating your service account on just the content it needs to access. ::: - **Name** - box_subject_type - **Value** - enterprise - **Name** - box_subject_id - **Value** - Provide the enterprise ID copied from Box. ## Client Workload Configuration Aembit now handles the credentials required to access the Server Workload, eliminating the need for you to manage them directly. You can safely remove any previously used credentials from the Client Workload. If you access the Server Workload through an SDK or library, it is possible that the SDK/library may still require credentials to be present for initialization purposes. In this scenario, you can provide placeholder credentials. Aembit will overwrite these placeholder credentials with the appropriate ones during the access process. ## Access Policy - Create an Access Policy for a Client Workload to access the Box Server Workload. Assign the newly created Credential Provider to this Access Policy. ## Required Features - You will need to configure the [TLS Decrypt](/astro/user-guide/deploy-install/advanced-options/tls-decrypt/configure-tls-decrypt) feature to work with the Box Server Workload. --- # Claude URL: https://docs.aembit.io/user-guide/access-policies/server-workloads/guides/claude/ Description: This page describes how to configure Aembit to work with the Claude Server Workload. # [Claude](https://www.anthropic.com/api) is an artificial intelligence platform from Anthropic that allows developers to embed advanced language models into their applications. It supports tasks like natural language understanding and conversation generation, enhancing software functionality and user experience. Below you can find the Aembit configuration required to work with the Claude service as a Server Workload using the Claude API and Anthropic’s Client SDKs. ## Prerequisites Before proceeding with the configuration, ensure you have an Anthropic account and API key. If you have not already generated a key, follow the instructions below. For more details about Claude API, refer to the [official Claude API documentation](https://docs.anthropic.com/en/api/getting-started). ### Create API Key 1. Sign in to your Anthropic account. 2. Navigate to the [API Keys](https://console.anthropic.com/settings/keys) page by clicking the **Get API Keys** button from the dashboard menu. 
![Anthropic Console Dashboard](../../../../../../assets/images/claude_api_dashboard.png) 3. Click the **Create key** button in the top right corner of the page. 4. A pop-up window will appear. Fill in the name field, then click **Create Key** to proceed. ![Create API key](../../../../../../assets/images/claude_api_create_key.png) 5. Click **Copy** and securely store the key for later use in the configuration on the tenant. ![Copy API key](../../../../../../assets/images/claude_api_copy_key.png) ## Server Workload Configuration 1. Create a new Server Workload. - **Name** - Choose a user-friendly name. 2. Configure the service endpoint: - **Host** - `api.anthropic.com` - **Application Protocol** - HTTP - **Port** - 443 with TLS - **Forward to Port** - 443 with TLS - **Authentication method** - HTTP Authentication - **Authentication scheme** - Header - **Header** - x-api-key ## Credential Provider Configuration 1. Create a new Credential Provider. - **Name** - Choose a user-friendly name. - **Credential Type** - [API Key](/astro/user-guide/access-policies/credential-providers/api-key) - **API Key** - Paste the key copied from Anthropic Console. ## Client Workload Configuration Aembit now handles the credentials required to access the Server Workload, eliminating the need for you to manage them directly. You can safely remove any previously used credentials from the Client Workload. If you access the Server Workload through an SDK or library, it is possible that the SDK/library may still require credentials to be present for initialization purposes. In this scenario, you can provide placeholder credentials. Aembit will overwrite these placeholder credentials with the appropriate ones during the access process. ## Access Policy - Create an Access Policy for a Client Workload to access the Claude API Server Workload. Assign the newly created Credential Provider to this Access Policy. ## Required Features - You will need to configure the [TLS Decrypt](/astro/user-guide/deploy-install/advanced-options/tls-decrypt/configure-tls-decrypt) feature to work with the Claude API Server Workload. :::note If you are using the SDK, you will need to configure the `SSL_CERT_FILE` environment variable and point it to a file containing the tenant root CA. The specific commands may vary depending on how your application is launched. Below command lines are examples for the Python SDK: ```shell wget https://.aembit.io/api/v1/root-ca -O tenant.crt SSL_CERT_FILE=./tenant.crt python3 ./your_app.py ``` ::: --- # Databricks URL: https://docs.aembit.io/user-guide/access-policies/server-workloads/guides/databricks/ Description: This page describes how to configure Aembit to work with the Databricks Server Workload. # [Databricks](https://www.databricks.com/) is a unified data analytics platform built on Apache Spark, designed for scalable big data processing and machine learning. It provides tools for data engineering, data science, and analytics, enabling efficient handling of complex data workloads. Below you can find the Aembit configuration required to work with the Databricks service as a Server Workload using the Databricks REST API. Aembit supports multiple authentication/authorization methods for Databricks. 
This page describes scenarios where the Credential Provider is configured for Databricks via: - [OAuth 2.0 Authorization Code (3LO)](/astro/user-guide/access-policies/server-workloads/guides/databricks#oauth-20-authorization-code) - [OAuth 2.0 Client Credentials](/astro/user-guide/access-policies/server-workloads/guides/databricks#oauth-20-client-credentials) - [API Key](/astro/user-guide/access-policies/server-workloads/guides/databricks#api-key) :::note[Prerequisites] Before proceeding with the configuration, ensure you have the following: - Databricks tenant. - Workspace in the Databricks tenant. If you have not created a workspace before, you can follow the steps outlined in the subsequent sections or refer to the [official Databricks documentation](https://docs.databricks.com/en/getting-started/onboarding-account.html) for more detailed instructions. ::: ## Create a Workspace in Databricks :::note The following steps outline the process for creating a workspace in Databricks on AWS. If you are using Google Cloud Platform (GCP) or Microsoft Azure, you can find the corresponding steps by changing the platform option in the top right corner of the Databricks documentation. ::: 1. Sign in to the [Databricks Console](https://accounts.cloud.databricks.com/) and navigate to the **Workspaces** page. 2. Click **Create workspace** located in the top right corner, select the **Quickstart** option, and then click **Next**. ![Databricks Create Workspace](../../../../../../assets/images/databricks_create_workspace.png) 3. In the next step, provide a name for your workspace, choose the AWS region, and then click **Start Quickstart**. This redirects you to the AWS Console. 4. In the AWS Console, you may change the pre-generated stack name if desired. Scroll down, check the acknowledgment box, and then click **Create stack**. The stack creation process may take some time. Once the creation is successfully completed, you receive a confirmation email from Databricks. You can then switch back to the Databricks console. If you do not see your workspace in the list, please refresh the page. 5. Click on the name of the workspace to view details. In the URL field, copy the part after the prefix (e.g., `abc12345` in `https://abc12345.cloud.databricks.com`). This is your Databricks instance name, and is used in future steps. 6. Click **Open Workspace** located in the top right corner to proceed with the next steps in the workspace setup. ![Databricks Workspace Details](../../../../../../assets/images/databricks_workspace_details.png) ## OAuth 2.0 Authorization Code ### Server Workload Configuration 1. Create a new Server Workload. - **Name** - Choose a user-friendly name. 2. Configure the service endpoint: - **Host** - `.cloud.databricks.com` (Use the Databricks instance name copied in step 5 of the workspace creation process) - **Application Protocol** - HTTP - **Port** - 443 with TLS - **Forward to Port** - 443 with TLS - **Authentication method** - HTTP Authentication - **Authentication scheme** - Bearer ### Credential Provider Configuration 1. In your Databricks account console, select **Settings** from the left-hand menu. 2. Navigate to the **App Connections** section in the top menu. 3. Click the **Add Connection** button in the top right corner. ![Databricks Add Connection](../../../../../../assets/images/databricks_app_creation.png) 4. Enter the **name** of your app. 5. Switch to the Aembit UI to create a new Credential Provider, selecting the OAuth 2.0 Authorization Code credential type. 
After setting up the Credential Provider, copy the auto-generated **Callback URL**. 6. Return to Databricks and paste the copied **Callback URL** into the **Redirect URLs** field. 7. Select the scopes for your application based on your specific needs. :::note To avoid potential issues, do **not** set the **Access Token TTL** to less than 10 minutes. ::: 8. Once all selections are made, click **Add**. 9. A pop-up window appears. Copy both the **Client ID** and **Client Secret**, and securely store these details for later use in your tenant configuration. ![Databricks App Client Id and Client Secret](../../../../../../assets/images/databricks_app_clientid_and_secret.png) 10. Edit the existing Credential Provider created in the previous steps. - **Name** - Choose a user-friendly name. - **Credential Type** - [OAuth 2.0 Authorization Code](/astro/user-guide/access-policies/credential-providers/oauth-authorization-code) - **Callback URL (Read-Only)** - An auto-generated Callback URL from Aembit Admin. - **Client Id** - Provide the client ID copied from Databricks. - **Client Secret** - Provide the client secret copied from Databricks. - **Scopes** - `all-apis offline_access` or `sql offline_access`, depending on your scope selection in the Databricks UI. For more details on scopes and custom OAuth applications, please refer to the [official Databricks documentation](https://docs.databricks.com/en/integrations/enable-disable-oauth.html#enable-custom-app-ui). - **OAuth URL** - - For a **workspace-level** OAuth URL, use: `https://<instance-name>.cloud.databricks.com/oidc` (Use the Databricks instance name copied in step 5 of the workspace creation process) - For an **account-level** OAuth URL, use: `https://accounts.cloud.databricks.com/oidc/accounts/<account-id>` - In your Databricks account, click on your username in the upper right corner, and in the dropdown menu, copy the Account ID value and use it in the URL above. ![Databricks Account ID](../../../../../../assets/images/databricks_account_id.png) :::tip These two URLs correspond to different levels of OAuth authorization. The level determines the scope of the authorization code: - **Account-Level**: Use this URL if you need to call both account-level and workspace-level REST APIs across all accounts and workspaces that your Databricks user account has access to. - **Workspace-Level**: Use this URL if you only need to call REST APIs within a single workspace that your user account has access to. For more detailed information about these two different levels, please refer to the [official Databricks documentation](https://docs.databricks.com/en/dev-tools/auth/oauth-u2m.html#step-2-generate-an-authorization-code). ::: Click on **URL Discovery** to populate the Authorization and Token URL fields, which can be left as populated. - **PKCE Required** - On - **Lifetime** - 1 year (Databricks does not specify a refresh token lifetime; this value is recommended by Aembit.) 11. Click **Save** to save your changes on the Credential Provider. 12. In the Aembit UI, click the **Authorize** button. You are directed to a page where you can review the access request. Click **Authorize** to complete the OAuth 2.0 Authorization Code flow. You should see a success page and then be redirected to Aembit automatically. You can also verify your flow is complete by checking the **State** value in the Credential Provider. After completion, it should be **Ready**.
![Credential Provider - Ready State](../../../../../../assets/images/credential_providers_auth_code_status_ready.png) :::caution Once the set lifetime ends, the retrieved credential will expire and no longer be active. Aembit will notify you before this happens. Please ensure you reauthorize the credential before it expires. ::: ## OAuth 2.0 Client Credentials ### Server Workload Configuration 1. Create a new Server Workload. - **Name** - Choose a user-friendly name. 2. Configure the service endpoint: - **Host** - `<instance-name>.cloud.databricks.com` (Use the Databricks instance name copied in step 5 of the workspace creation process) - **Application Protocol** - HTTP - **Port** - 443 with TLS - **Forward to Port** - 443 with TLS - **Authentication method** - HTTP Authentication - **Authentication scheme** - Bearer ### Credential Provider Configuration 1. In your Databricks workspace, click your username in the top right corner, and select **Settings** from the dropdown menu. 2. In the left-hand menu, navigate to **Identity and access**. 3. Next to **Service principals**, click **Manage**. ![Databricks Service principals](../../../../../../assets/images/databricks_service_principals.png) 4. Click the **Add service principal** button. 5. If you do not already have a service principal, click **Add New**; otherwise, select the desired service principal from the list and click **Add**. 6. Click on the name of the service principal to view its details. 7. Navigate to the **Permissions** tab and click the **Grant access** button. 8. In the pop-up window, select the User, Group, or Service Principal and assign their role, then click **Save**. 9. Navigate to the **Secrets** tab and click the **Generate secret** button. 10. A pop-up window appears. Copy both the **Client ID** and **Client Secret**, and store these details securely for later use in the tenant configuration. ![Service principals Client ID and Client Secret](../../../../../../assets/images/databricks_service_principal_clientid_secret.png) 11. Create a new Credential Provider. - **Name** - Choose a user-friendly name. - **Credential Type** - [OAuth 2.0 Client Credentials](/astro/user-guide/access-policies/credential-providers/oauth-client-credentials) - **Token endpoint** - - For a **workspace-level** endpoint URL, use: `https://<instance-name>.cloud.databricks.com/oidc/v1/token` (Use the Databricks instance name copied in step 5 of the workspace creation process) - For an **account-level** endpoint URL, use: `https://accounts.cloud.databricks.com/oidc/accounts/<account-id>/v1/token` - In your Databricks account, click on your username in the upper right corner, and in the dropdown menu, copy the Account ID value and use it in the URL above. ![Databricks Account ID](../../../../../../assets/images/databricks_account_id.png) :::tip These two URLs correspond to different levels of OAuth authorization. The level determines the scope of the issued token: - **Account-Level**: Use this URL if you need to call both account-level and workspace-level REST APIs across all accounts and workspaces that your Databricks user account has access to. - **Workspace-Level**: Use this URL if you only need to call REST APIs within a single workspace that your user account has access to. For more detailed information about these two different levels, please refer to the [official Databricks documentation](https://docs.databricks.com/en/dev-tools/auth/oauth-m2m.html#manually-generate-and-use-access-tokens-for-oauth-m2m-authentication). ::: - **Client ID** - Provide the client ID copied from Databricks.
- **Client Secret** - Provide the client secret copied from Databricks. - **Scopes** - `all-apis` - **Credential Style** - Authorization Header ## API Key ### Server Workload Configuration 1. Create a new Server Workload. - **Name** - Choose a user-friendly name. 2. Configure the service endpoint: - **Host** - `.cloud.databricks.com` (Use the Databricks instance name copied in step 5 of the workspace creation process) - **Application Protocol** - HTTP - **Port** - 443 with TLS - **Forward to Port** - 443 with TLS - **Authentication method** - HTTP Authentication - **Authentication scheme** - Bearer ### Credential Provider Configuration 1. In your Databricks workspace, click on your username in the top right corner, and select **Settings** from the dropdown menu. ![Databricks Workspace Navigate Settings](../../../../../../assets/images/databricks_workspace_navigate_settings.png) 2. In the left-hand menu, navigate to the **Developer** section. 3. Next to **Access tokens**, click **Manage**. 4. Click the **Generate new token** button. 5. Optionally, provide a comment and set a lifetime for your token, then click **Generate**. 6. Click **Copy to clipboard** and securely store the token for later use in the configuration on the tenant. ![Databricks API Key](../../../../../../assets/images/databricks_api_key.png) 7. Create a new Credential Provider. - **Name** - Choose a user-friendly name. - **Credential Type** - [API Key](/astro/user-guide/access-policies/credential-providers/api-key) - **API Key** - Paste the token copied from Databricks. ## Client Workload Configuration Aembit now handles the credentials required to access the Server Workload, eliminating the need for you to manage them directly. You can safely remove any previously used credentials from the Client Workload. If you access the Server Workload through an SDK or library, it is possible that the SDK/library may still require credentials to be present for initialization purposes. In this scenario, you can provide placeholder credentials. Aembit will overwrite these placeholder credentials with the appropriate ones during the access process. ## Access Policy - Create an Access Policy for a Client Workload to access the Databricks Server Workload. Assign the newly created Credential Provider to this Access Policy. ## Required Features - You will need to configure the [TLS Decrypt](/astro/user-guide/deploy-install/advanced-options/tls-decrypt/configure-tls-decrypt) feature to work with the Databricks Server Workload. --- # Freshsales URL: https://docs.aembit.io/user-guide/access-policies/server-workloads/guides/freshsales/ Description: This page describes how to configure Aembit to work with the Freshsales Server Workload. # [Freshsales](https://www.freshworks.com/crm/sales/) is a customer relationship management platform that helps businesses manage their sales processes. It offers features like lead tracking, email integration, and sales analytics to streamline workflows and improve customer interactions. Below you can find the Aembit configuration required to work with the Freshsales service as a Server Workload using the REST API. ## Prerequisites Before proceeding with the configuration, you will need to have a Freshsales or Freshsales Suite tenant (or [sign up](https://www.freshworks.com/crm/signup/) for one). ## Server Workload Configuration 1. Create a new Server Workload. - **Name** - Choose a user-friendly name. 2. 
Configure the service endpoint: - **Host** - `.myfreshworks.com` - **Application Protocol** - HTTP - **Port** - 443 with TLS - **Forward to Port** - 443 with TLS - **Authentication method** - HTTP Authentication - **Authentication scheme** - Header - **Header** - Authorization ## Credential Provider Configuration 1. Sign into your Freshsales account. 2. In the upper-right corner of the page, click your profile photo, then click **Settings**. ![Freshsales Dashboard](../../../../../../assets/images/freshsales_dashboard.png) 3. Click on the **API Settings** tab. 4. Click **Copy** and securely store the API key for later use in the configuration on the tenant. ![Copy Freshsales CRM API Key](../../../../../../assets/images/freshsales_settings_api_key.png) 5. Create a new Credential Provider. - **Name** - Choose a user-friendly name. - **Credential Type** - [API Key](/astro/user-guide/access-policies/credential-providers/api-key) - **API Key** - Provide the key copied from Freshsales. ## Client Workload Configuration Aembit now handles the credentials required to access the Server Workload, eliminating the need for you to manage them directly. You can safely remove any previously used credentials from the Client Workload. If you access the Server Workload through an SDK or library, it is possible that the SDK/library may still require credentials to be present for initialization purposes. In this scenario, you can provide placeholder credentials. Aembit will overwrite these placeholder credentials with the appropriate ones during the access process. ## Access Policy - Create an access policy for a Client Workload to access the Freshsales Server Workload and assign the newly created Credential Provider to it. ## Required Features - You will need to configure the [TLS Decrypt](/astro/user-guide/deploy-install/advanced-options/tls-decrypt/configure-tls-decrypt) feature to work with the Freshsales Server Workload. --- # GCP BigQuery URL: https://docs.aembit.io/user-guide/access-policies/server-workloads/guides/gcp-bigquery/ Description: This page describes how to configure Aembit to work with the GCP BigQuery Server Workload. # [Google BigQuery](https://cloud.google.com/bigquery?hl=en), part of Google Cloud Platform, is a data warehousing solution designed for storing, querying, and analyzing large datasets. It offers scalability, SQL-based querying, and integrations with other GCP services and third-party tools. Below you can find the Aembit configuration required to work with the GCP BigQuery service as a Server Workload using the BigQuery REST API. Aembit supports multiple authentication/authorization methods for BigQuery. This page describes scenarios where the Credential Provider is configured for BigQuery via: - [OAuth 2.0 Authorization Code (3LO)](/astro/user-guide/access-policies/server-workloads/guides/gcp-bigquery#oauth-20-authorization-code) - [Google Workload Identity Federation](/astro/user-guide/access-policies/server-workloads/guides/gcp-bigquery#google-workload-identity-federation) :::note[Prerequisites] Before proceeding with the configuration, ensure you have the following: - An active Google Cloud account - A GCP project with BigQuery enabled - Data available for querying in BigQuery ::: ## OAuth 2.0 Authorization Code ### Server Workload Configuration 1. Create a new Server Workload. - **Name** - Choose a user-friendly name. 2. 
Configure the Service endpoint: - **Host** - `bigquery.googleapis.com` - **Application Protocol** - HTTP - **Port** - 443 with TLS - **Forward to Port** - 443 with TLS - **Authentication method** - HTTP Authentication - **Authentication scheme** - Bearer ### Credential Provider Configuration 1. Sign in to the Google Cloud Console and navigate to the [Credentials](https://console.cloud.google.com/apis/credentials) page. Ensure that you are working within a GCP project for which you have authorization. 2. On the **Credentials** dashboard, click **Create Credentials** located in the top left corner and select the **OAuth client ID** option. ![Create OAuth client ID](../../../../../../assets/images/gcp_create_oauth_client_id.png) 3. If there is no configured Consent Screen for your project, you will see a **Configure Consent Screen** button on the directed page. Click the button to continue. ![Configure Consent Screen](../../../../../../assets/images/gcp_no_consent_screen.png) 4. Choose **User Type** and click **Create**. - Provide a name for your app. - Choose a user support email from the dropdown menu. - App logo and app domain fields are optional. - Enter at least one email for the Developer contact information field. - Click **Save and Continue**. - You may skip the Scopes step by clicking **Save and Continue** once again. - In the **Summary** step, review the details of your app and click **Back to Dashboard**. 5. Navigate back to the [Credentials](https://console.cloud.google.com/apis/credentials) page, click **Create Credentials**, and select the **OAuth client ID** option again. - Choose **Web Application** for Application Type. - Provide a name for your web client. - Switch to the Aembit UI to create a new Credential Provider, selecting the OAuth 2.0 Authorization Code credential type. After setting up the Credential Provider, copy the auto-generated **Callback URL**. - Return to Google Cloud Console and paste the copied URL into the **Authorized redirect URIs** field. - Click **Create**. 6. A pop-up window will appear. Copy both the **Client ID** and the **Client Secret**. Store them for later use in the tenant configuration. 7. Edit the existing Credential Provider created in the previous steps. - **Name** - Choose a user-friendly name. - **Credential Type** - [OAuth 2.0 Authorization Code](/astro/user-guide/access-policies/credential-providers/oauth-authorization-code) - **Callback URL (Read-Only)** - An auto-generated Callback URL from Aembit Admin. - **Client Id** - Provide the Client ID copied from Google. - **Client Secret** - Provide the Secret copied from Google. - **Scopes** - Enter the scopes you will use for BigQuery (e.g. `https://www.googleapis.com/auth/bigquery`). A full list of GCP Scopes can be found at [OAuth 2.0 Scopes for Google APIs](https://developers.google.com/identity/protocols/oauth2/scopes). - **OAuth URL** - `https://accounts.google.com` Click on **URL Discovery** to populate the Authorization and Token URL fields, which can be left as populated. - **PKCE Required** - Off - **Lifetime** - 1 year (A Google Cloud Platform project with an OAuth consent screen configured for an external user type and a publishing status of Testing is issued a refresh token expiring in 7 days).
Google does not specify a refresh token lifetime when the internal user type is selected; this value is recommended by Aembit. For more information, refer to the [official Google documentation](https://developers.google.com/identity/protocols/oauth2#expiration). 8. Click **Save** to save your changes on the Credential Provider. 9. In the Aembit UI, click the **Authorize** button. You will be directed to a page where you can choose your Google account first. Then click **Allow** to complete the OAuth 2.0 Authorization Code flow. You will see a success page and will be redirected to Aembit automatically. You can also verify your flow is complete by checking the **State** value in the Credential Provider. After completion, it should be in a **Ready** state. ![Credential Provider - Ready State](../../../../../../assets/images/credential_providers_auth_code_status_ready.png) :::caution Once the set lifetime ends, the retrieved credential will expire and no longer be active. Aembit will notify you before this happens. Please ensure you reauthorize your credential before it expires. ::: ## Google Workload Identity Federation ### Server Workload Configuration 1. Create a new Server Workload. - **Name** - Choose a user-friendly name. 2. Configure the Service endpoint: - **Host** - `bigquery.googleapis.com` - **Application Protocol** - HTTP - **Port** - 443 with TLS - **Forward to Port** - 443 with TLS - **Authentication method** - HTTP Authentication - **Authentication scheme** - Bearer ### Credential Provider Configuration 1. Sign in to the Google Cloud Console and navigate to [Service Accounts](https://console.cloud.google.com/iam-admin/serviceaccounts). Ensure that you are working within a GCP project for which you have authorization. 2. On the **Service Accounts** dashboard, click **Create Service Account** located in the top left corner. ![Create Service Account](../../../../../../assets/images/gcp_bigquery_create_service_account.png) - Provide a name for your service account. The ID will be generated based on the account name, but you have the option to edit it. The description field is optional. - Click the icon next to **Email address** to copy it and store the address for later. - Click **Done**. ![Create Service Account Details](../../../../../../assets/images/gcp_bigquery_create_service_account_details.png) 3. In the left navigation pane, select **IAM** to access a list of permissions for your project. 4. Click the **Grant Access** button in the ribbon list in the middle of the page. ![Grant Access to Service Account in IAM](../../../../../../assets/images/gcp_bigquery_grant_access_to_service_acc.png) 5. In the opened dialog, click **New Principals**, start typing your service account name, and select it from the search results. 6. Assign roles to your service account by clicking the dropdown icon and selecting the GCP role that best suits your project needs, then click **Save**. ![Set Role to Service Account](../../../../../../assets/images/gcp_bigquery_set_role_to_service_account.png) 7. In the left navigation pane, select [Workload Identity Federation](https://console.cloud.google.com/iam-admin/workload-identity-pools). If this is your first time on this page, click **Get Started**; otherwise, click **Create Pool**. ![Create Identity Pool](../../../../../../assets/images/gcp_bigquery_create_pool.png) - Specify a name for your identity pool. The ID will be generated based on the pool name, but you can edit it if needed. The description field is optional; proceed by clicking **Continue**.
- Next, add a provider to the pool. Select **OpenID Connect (OIDC)** as the provider option and specify a name for your provider. Again, the ID will be auto-generated, but you can edit it. - For the **Issuer (URL)** field, switch to the Aembit UI to create a new Credential Provider, selecting the Google Workload Identity Federation credential type. After setting up the Credential Provider, copy the auto-generated **Issuer URL**, then paste it into the field. - If you choose to leave the Audiences option set to Default audience, click the **Copy to clipboard** icon next to the auto-generated value and store the value for later use in the tenant configuration, then proceed by clicking **Continue**. ![Add Provider](../../../../../../assets/images/gcp_bigquery_add_provider.png) - Specify the provider attribute **assertion.tenant** in the **OIDC 1** field and click **Save**. 8. To access resources, pool identities must be granted access to a service account. Within the GCP workload identity pool you just created, click the **Grant Access** button located in the top ribbon list. - In the opened dialog, choose the **Grant access using Service Account impersonation** option. - Then, choose the **Service Account** that you created from the dropdown menu. - For **Attribute name**, choose **subject** from the dropdown menu. - For **Attribute value**, provide your Aembit Tenant ID. You can find your tenant ID in the URL you use. For example, if the URL is `https://xyz.aembit.io`, then `xyz` is your tenant ID. - Proceed by clicking **Save**. ![Grant Access to Service Account in Pool Identity](../../../../../../assets/images/gcp_bigquery_grant_access_pool_identity.png) 9. Edit the existing Credential Provider created in the previous steps. - **Name** - Choose a user-friendly name. - **Credential Type** - [Google Workload Identity Federation](/astro/user-guide/access-policies/credential-providers/google-workload-identity-federation) - **OIDC Issuer URL (Read-Only)** - An auto-generated OpenID Connect (OIDC) Issuer URL from Aembit Admin. - **Audience** - Provide the audience value for the provider. The value should match either: **Default** - Full canonical resource name of the Workload Identity Pool Provider (used if "Default audience" was chosen during setup). **Allowed Audiences** - A value included in the configured allowed audiences list, if defined. - **Service Account Email** - Provide the service account email that was previously copied from Google Cloud Console during service account creation (e.g., `service-account-name@project-id.iam.gserviceaccount.com`). - **Lifetime** - Specify the duration for which the credentials will remain valid. :::caution If the default audience was chosen during provider creation, provide the value previously copied from Google Cloud Console, the part **after** the prefix (e.g., //iam.googleapis...). ::: ## Client Workload Configuration Aembit now handles the credentials required to access the Server Workload, eliminating the need for you to manage them directly. You can safely remove any previously used credentials from the Client Workload. If you access the Server Workload through an SDK or library, it is possible that the SDK/library may still require credentials to be present for initialization purposes. In this scenario, you can provide placeholder credentials. Aembit will overwrite these placeholder credentials with the appropriate ones during the access process. ## Access Policy - Create an Access Policy for a Client Workload to access the GCP BigQuery Server Workload.
Assign the newly created Credential Provider to this Access Policy. ## Required Features - You will need to configure the [TLS Decrypt](/astro/user-guide/deploy-install/advanced-options/tls-decrypt/configure-tls-decrypt) feature to work with the GCP BigQuery Server Workload. --- # Gemini (Google) URL: https://docs.aembit.io/user-guide/access-policies/server-workloads/guides/gemini/ Description: This page describes how to configure Aembit to work with the Gemini Server Workload # [Gemini](https://ai.google.dev/) is an AI platform that allows developers to integrate multimodal capabilities into their applications, including text, images, audio, and video processing. It supports tasks such as natural language processing, content generation, and data analysis. Below you can find the Aembit configuration required to work with the Google Gemini service as a Server Workload using the REST API. ## Prerequisites Before proceeding with the configuration, ensure you have a Google account and an API key. If you have not already created a key, follow the instructions below. For more details about the Gemini API, refer to the [official Gemini API documentation](https://ai.google.dev/gemini-api/docs/api-key). ### Create API Key 1. Navigate to the [API Keys](https://aistudio.google.com/app/apikey) page and sign in to your Google account. 2. Click the **Create API key** button in the middle of the page. ![Google AI Studio | Get API Keys](../../../../../../assets/images/gemini_get_api_key.png) 3. Click the **Got it** button on the Safety Setting Reminder pop-up window. 4. If you do not already have a project in Google Cloud, click **Create API key in new project**. Otherwise, select from your projects and click **Create API key in existing project**. ![Create API key](../../../../../../assets/images/gemini_create_api_key.png) 5. Click **Copy** and securely store the key for later use in your tenant configuration. ![Copy API key](../../../../../../assets/images/gemini_copy_api_key.png) ## Server Workload Configuration 1. Create a new Server Workload. - **Name** - Choose a user-friendly name. 2. Configure the service endpoint: - **Host** - `generativelanguage.googleapis.com` - **Application Protocol** - HTTP - **Port** - 443 with TLS - **Forward to Port** - 443 with TLS - **Authentication method** - HTTP Authentication - **Authentication scheme** - Header - **Header** - x-goog-api-key ## Credential Provider Configuration 1. Create a new Credential Provider. - **Name** - Choose a user-friendly name. - **Credential Type** - [API Key](/astro/user-guide/access-policies/credential-providers/api-key) - **API Key** - Paste the key copied from Google AI Studio. ## Client Workload Configuration Aembit now handles the credentials required to access the Server Workload, eliminating the need for you to manage them directly. You can safely remove any previously used credentials from the Client Workload. If you access the Server Workload through an SDK or library, it is possible that the SDK/library may still require credentials to be present for initialization purposes. In this scenario, you can provide placeholder credentials. Aembit will overwrite these placeholder credentials with the appropriate ones during the access process. ## Access Policy - Create an Access Policy for a Client Workload to access the Gemini Server Workload. Assign the newly created Credential Provider to this Access Policy. 
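With the policy in place, a Client Workload can call the Gemini REST API while sending only a placeholder value in the `x-goog-api-key` header; Aembit Edge replaces it with the key stored in the Credential Provider during the access process. A minimal sketch (the model name and prompt are illustrative, and `placeholder` is a hypothetical stand-in value):

```shell
# Call the Gemini API through Aembit Edge; the x-goog-api-key value below is a
# placeholder that Aembit replaces with the real API key from the Credential Provider.
curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent" \
  -H "Content-Type: application/json" \
  -H "x-goog-api-key: placeholder" \
  -d '{"contents": [{"parts": [{"text": "Hello from Aembit"}]}]}'
```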
## Required Features - You will need to configure the [TLS Decrypt](/astro/user-guide/deploy-install/advanced-options/tls-decrypt/configure-tls-decrypt) feature to work with the Gemini Server Workload. --- # GitGuardian URL: https://docs.aembit.io/user-guide/access-policies/server-workloads/guides/gitguardian/ Description: This page describes how to configure Aembit to work with the GitGuardian Server Workload. # [GitGuardian](https://www.gitguardian.com/) is a cybersecurity platform dedicated to safeguarding sensitive information within source code repositories. It specializes in identifying and protecting against potential data leaks, ensuring that organizations maintain the confidentiality of their critical data. Below you can find the Aembit configuration required to work with the GitGuardian service as a Server Workload using the GitGuardian API. ## Prerequisites Before proceeding with the configuration, you will need to have a GitGuardian tenant (or [sign up](https://dashboard.gitguardian.com/auth/signup) for one). ## Server Workload Configuration 1. Create a new Server Workload. - **Name** - Choose a user-friendly name. 2. Configure the service endpoint: - **Host** - `api.gitguardian.com` - **Application Protocol** - HTTP - **Port** - 443 with TLS - **Forward to Port** - 443 with TLS - **Authentication method** - API Key - **Authentication scheme** - Header - **Header** - Authorization ## Credential Provider Configuration 1. Navigate to the [GitGuardian Dashboard](https://dashboard.gitguardian.com/) and sign in with your account. 2. On the left navigation pane, choose **API** and then go to **Personal access tokens** in the second left pane to access details. 3. Click on **Create Token** in the top right corner. 4. Provide a name, choose an expiration time, select scopes based on your preferences, and then click **Create token** at the bottom of the modal. ![Create GitGuardian API Personal Access token](../../../../../../assets/images/gitguardian_key.png) 5. Make sure to copy your new personal access token at this stage, as it will not be visible again. For more information on authentication, please refer to the [official GitGuardian API documentation](https://api.gitguardian.com/docs#section/Authentication). 6. Create a new Credential Provider. - **Name** - Choose a user-friendly name. - **Credential Type** - [API Key](/astro/user-guide/access-policies/credential-providers/api-key) - **API Key** - Provide the key copied from GitGuardian and use the format `Token api-key`, replacing `api-key` with your API key. ## Client Workload Configuration Aembit now handles the credentials required to access the Server Workload, eliminating the need for you to manage them directly. You can safely remove any previously used credentials from the Client Workload. If you access the Server Workload through an SDK or library, it is possible that the SDK/library may still require credentials to be present for initialization purposes. In this scenario, you can provide placeholder credentials. Aembit will overwrite these placeholder credentials with the appropriate ones during the access process. ## Access Policy - Create an access policy for a Client Workload to access the GitGuardian Server Workload and assign the newly created Credential Provider to it. ## Required Features - You will need to configure the [TLS Decrypt](/astro/user-guide/deploy-install/advanced-options/tls-decrypt/configure-tls-decrypt) feature to work with the GitGuardian Server Workload. 
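Once the Access Policy and TLS Decrypt are in place, a Client Workload can call the GitGuardian API with a placeholder token; Aembit Edge injects the real personal access token into the `Authorization` header in transit. A minimal sketch (the health-check endpoint is used purely as an illustration, and `placeholder` is a hypothetical stand-in value):

```shell
# Query the GitGuardian API health endpoint; the Authorization value below is a
# placeholder that Aembit replaces with "Token <api-key>" from the Credential Provider.
curl "https://api.gitguardian.com/v1/health" \
  -H "Authorization: Token placeholder"
```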
--- # GitHub REST URL: https://docs.aembit.io/user-guide/access-policies/server-workloads/guides/github-rest/ Description: This page describes how to configure Aembit to work with the GitHub REST API Server Workload. # [GitHub](https://github.com/) is a cloud-based platform for code hosting and version control using Git. Its REST API enables programmatic interaction with GitHub's features, allowing for custom tool development and automation. Below you can find the Aembit configuration required to work with the GitHub service as a Server Workload using the GitHub REST API. Aembit supports multiple authentication/authorization methods for GitHub. This page describes scenarios where the Credential Provider is configured for GitHub via: - [OAuth 2.0 Authorization Code (3LO)](#oauth-20-authorization-code) - [API Key](/astro/user-guide/access-policies/server-workloads/guides/github-rest#api-key) :::note[Prerequisites] Before proceeding with the configuration, ensure you have the following: - A GitHub account - A personal access token (API Key Method) - A GitHub app (OAuth 2.0 Authorization Code Method) If you have not created a token or an app before, you can follow the steps outlined in the subsequent sections. For detailed information on authenticating with different flows, please refer to the [official GitHub documentation](https://docs.github.com/en/rest/authentication/authenticating-to-the-rest-api?apiVersion=2022-11-28). ::: ## OAuth 2.0 Authorization Code ### Server Workload Configuration 1. Create a new Server Workload. - **Name** - Choose a user-friendly name. 2. Configure the service endpoint: - **Host** - `api.github.com` - **Application Protocol** - HTTP - **Port** - 443 with TLS - **Forward to Port** - 443 with TLS - **Authentication method** - HTTP Authentication - **Authentication scheme** - Bearer ### Credential Provider Configuration 1. Sign in to your GitHub account. 2. In the upper-right corner of any page, click your profile photo, then click **Settings**. 3. Navigate to **Developer settings** in the left-hand menu, and choose **GitHub Apps**. 4. On the right side, click on the **New GitHub App** button. ![Create New Github App](../../../../../../assets/images/github_create_github_app.png) 5. Provide a name for your app and, optionally, a description. 6. For the **Homepage URL**, enter the full URL of your Aembit tenant (e.g., `https://xyz.aembit.io`). 7. Switch to the Aembit UI to create a new Credential Provider, selecting the OAuth 2.0 Authorization Code credential type. After setting up the Credential Provider, copy the auto-generated **Callback URL**. 8. Return to GitHub and under **Callback URL**, paste the copied URL. 9. Check the **Request user authorization** box and uncheck the **Webhook** option. 10. Under the **Permissions** section, expand the drop-down menus and select the permissions (scopes) for your application depending on your needs. 11. Choose the installation area for this app, then click on **Create GitHub App**. 12. Copy the **Client ID**, then click **Generate a new client secret**, and copy the **Client Secret**. Securely store these values for later use in the configuration on the tenant. ![GitHub App Copy Client ID and Client Secret](../../../../../../assets/images/github_app_copy_clientid_and_secret.png) 13. Edit the existing Credential Provider created in the previous steps. - **Name** - Choose a user-friendly name.
- **Credential Type** - [OAuth 2.0 Authorization Code](/astro/user-guide/access-policies/credential-providers/oauth-authorization-code) - **Callback URL (Read-Only)** - An auto-generated Callback URL from Aembit Admin. - **Client Id** - Provide the Client ID copied from GitHub. - **Client Secret** - Provide the Secret copied from GitHub. - **Scopes** - You can effectively leave this field empty by entering a single whitespace character, as GitHub will default to the scopes selected for the app. - **OAuth URL** - `https://github.com` - **Authorization URL** - `https://github.com/login/oauth/authorize` - **Token URL** - `https://github.com/login/oauth/access_token` - **PKCE Required** - Off (PKCE is not supported by GitHub, so leave this field unchecked). - **Lifetime** - 6 Months 14. Click **Save** to save your changes on the Credential Provider. 15. In the Aembit UI, click the **Authorize** button. You will be directed to a page where you can review the access request. Click **Authorize** to complete the OAuth 2.0 Authorization Code flow. You should see a success page and be redirected to Aembit automatically. You can also verify your flow is complete by checking the **State** value in the Credential Provider. After completion, it should be in a **Ready** state. ![Credential Provider - Ready State](../../../../../../assets/images/credential_providers_auth_code_status_ready.png) :::caution Once the set lifetime ends, the retrieved credential will expire and no longer be active. Aembit will notify you before this happens. Please ensure you reauthorize your credential before it expires. ::: ## API Key ### Server Workload Configuration 1. Create a new Server Workload. - **Name** - Choose a user-friendly name. 2. Configure the service endpoint: - **Host** - `api.github.com` - **Application Protocol** - HTTP - **Port** - 443 with TLS - **Forward to Port** - 443 with TLS - **Authentication method** - HTTP Authentication - **Authentication scheme** - Bearer ### Credential Provider Configuration 1. Sign in to your GitHub account. 2. In the upper-right corner of any page, click your profile photo, then click **Settings**. 3. Navigate to **Developer settings** in the left-hand menu. 4. Under **Personal access tokens**, choose **Fine-grained tokens**. 5. On the right side, click on the **Generate new token** button. ![Generate new fine-grained token](../../../../../../assets/images/github_rest_create_fine_grained_token.png) 6. Provide a name, expiration date, and description for your token. Choose the resource owner and repository access type. 7. Under the **Permissions** section, expand the drop-down menu and select the permissions (scopes) for your application depending on your needs. 8. After making all of your selections, click on **Generate Token**. 9. Click **Copy to clipboard** and securely store the token for later use in the configuration on the tenant. ![Copy fine-grained token](../../../../../../assets/images/github_rest_copy_fine_grained_token.png) :::note The following configuration steps also work with classic personal access tokens; however, fine-grained tokens are recommended as they offer more granular permissions and improved security. ::: 10. Create a new Credential Provider. - **Name** - Choose a user-friendly name. - **Credential Type** - [API Key](/astro/user-guide/access-policies/credential-providers/api-key) - **API Key** - Paste the token copied from GitHub.
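As the following sections describe, the Client Workload itself no longer needs the real token; it can send a placeholder Bearer value that Aembit Edge swaps for the fine-grained token at request time. A minimal sketch (the endpoint is illustrative, and `placeholder` is a hypothetical stand-in value):

```shell
# List repositories for the authenticated user; the Bearer value below is a placeholder
# that Aembit replaces with the fine-grained token from the Credential Provider.
curl "https://api.github.com/user/repos" \
  -H "Accept: application/vnd.github+json" \
  -H "Authorization: Bearer placeholder"
```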
## Client Workload Configuration Aembit now handles the credentials required to access the Server Workload, eliminating the need for you to manage them directly. You can safely remove any previously used credentials from the Client Workload. If you access the Server Workload through an SDK or library, it is possible that the SDK/library may still require credentials to be present for initialization purposes. In this scenario, you can provide placeholder credentials. Aembit will overwrite these placeholder credentials with the appropriate ones during the access process. ## Access Policy - Create an Access Policy for a Client Workload to access the GitHub REST API Server Workload. Assign the newly created Credential Provider to this Access Policy. ## Required Features - You will need to configure the [TLS Decrypt](/astro/user-guide/deploy-install/advanced-options/tls-decrypt/configure-tls-decrypt) feature to work with the GitHub REST API Server Workload. --- # GitLab REST URL: https://docs.aembit.io/user-guide/access-policies/server-workloads/guides/gitlab-rest/ Description: This page describes how to configure Aembit to work with the GitLab REST API Server Workload. # [GitLab](https://gitlab.com/) is a cloud-based DevOps lifecycle tool that provides a Git repository manager with features like CI/CD, issue tracking, and more. Its REST API allows for programmatic access to these features, enabling the development of custom tools and automation. Below you can find the Aembit configuration required to work with the GitLab service as a Server Workload using the GitLab REST API. :::note[Prerequisites] Before proceeding with the configuration, you must have a GitLab tenant (or [sign up](https://gitlab.com/users/sign_up) for one) and a user, group, or instance level owned application. If you have not generated an application yet, follow the configuration steps below. For detailed information on how to create a new application, please refer to the [official GitLab documentation](https://docs.gitlab.com/ee/integration/oauth_provider.html). ::: ## Server Workload Configuration 1. Create a new Server Workload. - **Name** - Choose a user-friendly name. 2. Configure the service endpoint: - **Host** - `gitlab.com` - **Application Protocol** - HTTP - **Port** - 443 with TLS - **Forward to Port** - 443 with TLS - **Authentication method** - HTTP Authentication - **Authentication scheme** - Bearer ## Credential Provider Configuration 1. Sign in to your GitLab account. 2. In the upper-left corner of any page, click your profile photo, then click **Edit Profile**. 3. Navigate to **Applications** in the left-hand menu. 4. On the right side, click on the **Add new application** button. ![Gitlab Add new application](../../../../../../assets/images/gitlab_create_app.png) 5. Provide a name for your app. 6. Switch to the Aembit UI to create a new Credential Provider, selecting the OAuth 2.0 Authorization Code credential type. After setting up the Credential Provider, copy the auto-generated **Callback URL**. 7. Return to GitLab and paste the copied URL into the **Redirect URI** field. 8. Check the **Confidential** box, and select the scopes for your application depending on your needs. 9. After making all of your selections, click on **Save application**. 10. On the directed page, copy the **Application ID**, **Secret** and **Scopes**, and store them for later use in the tenant configuration. ![Gitlab New application](../../../../../../assets/images/gitlab_created_app.png) 11. 
Edit the existing Credential Provider created in the previous steps. - **Name** - Choose a user-friendly name. - **Credential Type** - [OAuth 2.0 Authorization Code](/astro/user-guide/access-policies/credential-providers/oauth-authorization-code) - **Callback URL (Read-Only)** - An auto-generated Callback URL from Aembit Admin. - **Client Id** - Provide the Application ID copied from GitLab. - **Client Secret** - Provide the Secret copied from GitLab. - **Scopes** - Enter the scopes you use, space-delimited (e.g. `read_api read_user read_repository`). - **OAuth URL** - `https://gitlab.com` Click on **URL Discovery** to populate the Authorization and Token URL fields, which can be left as populated. - **PKCE Required** - On - **Lifetime** - 1 year (GitLab does not specify a refresh token lifetime; this value is recommended by Aembit.) 12. Click **Save** to save your changes on the Credential Provider. 13. In Aembit UI, click the **Authorize** button. You are directed to a page where you can review the access request. Click **Authorize** to complete the OAuth 2.0 Authorization Code flow. You should see a success page and be redirected to Aembit automatically. You can also verify your flow is complete by checking the **State** value in the Credential Provider. After completion, it should be in a **Ready** state. ![Credential Provider - Ready State](../../../../../../assets/images/credential_providers_auth_code_status_ready.png) :::caution Once the set lifetime ends, the retrieved credential will expire and no longer be active. Aembit will notify you before this happens. Please ensure you reauthorize your credential before it expires. ::: ## Client Workload Configuration Aembit now handles the credentials required to access the Server Workload, eliminating the need for you to manage them directly. You can safely remove any previously used credentials from the Client Workload. If you access the Server Workload through an SDK or library, it is possible that the SDK/library may still require credentials to be present for initialization purposes. In this scenario, you can provide placeholder credentials. Aembit will overwrite these placeholder credentials with the appropriate ones during the access process. ## Access Policy - Create an Access Policy for a Client Workload to access the GitLab REST API Server Workload. Assign the newly created Credential Provider to this Access Policy. ## Required Features - You will need to configure the [TLS Decrypt](/astro/user-guide/deploy-install/advanced-options/tls-decrypt/configure-tls-decrypt) feature to work with the GitLab REST API Server Workload. --- # Google Drive URL: https://docs.aembit.io/user-guide/access-policies/server-workloads/guides/google-drive/ Description: This page describes how to configure Aembit to work with the Google Drive Server Workload. # [Google Drive](https://www.google.com/drive/), part of Google Workspace, is a cloud-based storage solution designed for storing, sharing, and collaborating on files. Below you can find the Aembit configuration required to work with the Google Drive service as a Server Workload using the Google Drive API. :::note[Prerequisites] Before proceeding with the configuration, ensure you have the following: - An active Google Cloud account - A GCP project with Google Drive enabled ::: ### Server Workload Configuration 1. Create a new Server Workload. - **Name** - Choose a user-friendly name. 2. 
Configure the Service endpoint: - **Host** - `www.googleapis.com` - **Application Protocol** - HTTP - **Port** - 443 with TLS - **Forward to Port** - 443 with TLS - **Authentication method** - HTTP Authentication - **Authentication scheme** - Bearer ### Credential Provider Configuration 1. Sign in to the Google Cloud Console and navigate to the [Credentials](https://console.cloud.google.com/apis/credentials) page. Ensure you are working within a GCP project for which you have authorization. 2. On the **Credentials** dashboard, click **Create Credentials** located in the top left corner and select the **OAuth client ID** option. ![Create OAuth client ID](../../../../../../assets/images/gcp_create_oauth_client_id.png) 3. If there is no configured Consent Screen for your project, you will see a **Configure Consent Screen** button on the directed page. Click the button to continue. ![Configure Consent Screen](../../../../../../assets/images/gcp_no_consent_screen.png) 4. Choose **User Type** and click **Create**. - Provide a name for your app. - Choose a user support email from the dropdown menu. - App logo and app domain fields are optional. - Enter at least one email for the Developer contact information field. - Click **Save and Continue**. - You may skip the Scopes step by clicking **Save and Continue** once again. - In the **Summary** step, review the details of your app and click **Back to Dashboard**. 5. Navigate back to the [Credentials](https://console.cloud.google.com/apis/credentials) page, click **Create Credentials**, and select the **OAuth client ID** option again. - Choose **Web Application** for Application Type. - Provide a name for your web client. - Switch to the Aembit UI to create a new Credential Provider, selecting the OAuth 2.0 Authorization Code credential type. After setting up the Credential Provider, copy the auto-generated **Callback URL**. - Return to Google Cloud Console and paste the copied URL into the **Authorized redirect URIs** field. - Click **Create**. 6. A pop-up window appears. Copy both the **Client ID** and the **Client Secret**. Store them for later use in the tenant configuration. 7. Edit the existing Credential Provider created in the previous steps. - **Name** - Choose a user-friendly name. - **Credential Type** - [OAuth 2.0 Authorization Code](/astro/user-guide/access-policies/credential-providers/oauth-authorization-code) - **Callback URL (Read-Only)** - An auto-generated Callback URL from Aembit Admin. - **Client Id** - Provide the Client ID copied from Google. - **Client Secret** - Provide the Secret copied from Google. - **Scopes** - Enter the scopes you will use for Google Drive (e.g. `https://www.googleapis.com/auth/drive`). A full list of GCP Scopes can be found at [OAuth 2.0 Scopes for Google APIs](https://developers.google.com/identity/protocols/oauth2/scopes#drive). - **OAuth URL** - `https://accounts.google.com` Click on **URL Discovery** to populate the Authorization and Token URL fields, which can be left as populated. - **PKCE Required** - Off - **Lifetime** - 1 year (This value is recommended by Aembit. For more information, please refer to the [official Google documentation](https://developers.google.com/identity/protocols/oauth2#expiration).) 8. Click **Save** to save your changes on the Credential Provider. 9. In the Aembit UI, click the **Authorize** button. You are directed to a page where you can choose your Google account first. Then click **Allow** to complete the OAuth 2.0 Authorization Code flow.
You should see a success page and be redirected to Aembit automatically. You can also verify your flow is complete by checking the **State** value in the Credential Provider. After completion, it should be in a **Ready** state. ![Credential Provider - Ready State](../../../../../../assets/images/credential_providers_auth_code_status_ready.png) :::caution Once the set lifetime ends, the retrieved credential will expire and no longer be active. Aembit will notify you before this happens. Please ensure you reauthorize your credential before it expires. ::: ## Client Workload Configuration Aembit now handles the credentials required to access the Server Workload, eliminating the need for you to manage them directly. You can safely remove any previously used credentials from the Client Workload. If you access the Server Workload through an SDK or library, it is possible that the SDK/library may still require credentials to be present for initialization purposes. In this scenario, you can provide placeholder credentials. Aembit will overwrite these placeholder credentials with the appropriate ones during the access process. ## Access Policy - Create an Access Policy for a Client Workload to access the Google Drive Server Workload. Assign the newly created Credential Provider to this Access Policy. ## Required Features - You will need to configure the [TLS Decrypt](/astro/user-guide/deploy-install/advanced-options/tls-decrypt/configure-tls-decrypt) feature to work with the Google Drive Server Workload. --- # HashiCorp Vault URL: https://docs.aembit.io/user-guide/access-policies/server-workloads/guides/hashicorp-vault/ Description: This page describes how to configure Aembit to work with the HashiCorp Vault Server Workload. # [HashiCorp Vault](https://www.vaultproject.io/) is a secrets management platform designed to secure, store, and control access to sensitive data and cryptographic keys. Below you can find the Aembit configuration required to work with the HashiCorp Vault service as a Server Workload using the Vault CLI or HTTP API. ## Prerequisites Before proceeding with the configuration, ensure you have the following: - A Vault cluster (self-hosted or HCP tenant). - An OIDC authentication method enabled in your Vault cluster. If you have not already set this up, follow the steps outlined in the next section or refer to the [official HashiCorp Vault documentation](https://developer.hashicorp.com/vault/tutorials/auth-methods/oidc-auth) for more detailed instructions. ### Configure Vault 1. Log in to your Vault cluster. 2. In the left pane, select **Authentication methods**, and then click on **Enable new method** in the top-right corner. 3. Choose the **OIDC** radio button and click **Next**. 4. Choose a name for the **Path**. HashiCorp recommends the `oidc/` format. Then click on **Enable Method**. 5. On the Configuration page, configure OIDC according to your preferences. Below are key choices: - For the **OIDC discovery URL** field, navigate to the Aembit UI, create a new Credential Provider, choose **Vault Client Token**, and copy the auto-generated Issuer URL. Paste it into Vault's **OIDC discovery URL** field. Make sure not to include a slash at the end of the URL. - If you do not set a **Default Role** for the Vault Authentication method, make sure to include a role name for configuration in the Aembit Credential Provider. 6. After making all your configurations, click **Save**.
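If you prefer to configure the auth method from the CLI instead of the UI, the equivalent setup is a short sequence of commands. A minimal sketch, assuming the `oidc/` path and placeholder environment variables for the Aembit Issuer URL and default role (adjust to your environment):

```shell
# Enable the OIDC auth method at the recommended path
vault auth enable -path=oidc oidc

# Point the method at the Aembit-issued OIDC discovery URL (no trailing slash)
# and optionally set a default role for logins against this method.
vault write auth/oidc/config \
  oidc_discovery_url="$AEMBIT_ISSUER" \
  default_role="$ROLE_NAME"
```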
### Configure Vault Role After completing the configuration on Vault, creating a Vault Role for the associated Vault Authentication Method is essential. To do this, navigate to the Vault CLI shell icon (>_) to open a command shell, and within the terminal, execute the following command:

```shell
vault write auth/$AUTH_PATH/role/$ROLE_NAME \
  bound_audiences="$AEMBIT_ISSUER" \
  user_claim="$USER_CLAIM" \
  token_policies="$POLICY_VALUE" \
  role_type="jwt"
```

:warning: Before running the command, ensure you have replaced the variables (e.g. `$AUTH_PATH`, `$ROLE_NAME`, etc.) with your desired values and `$AEMBIT_ISSUER` with the Issuer URL copied from the Aembit Credential Provider. ## Credential Provider Configuration 1. Create a new Credential Provider. - **Name** - Choose a user-friendly name. - **Credential Type** - [Vault Client Token](/astro/user-guide/access-policies/credential-providers/vault-client-token) JSON WEB TOKEN (JWT) - **Subject** - Test (In this example, 'Test' is used as a value, but this field can accept any Vault-compatible subject value.) - **Issuer (Read-Only)** - An auto-generated OpenID Connect (OIDC) Issuer URL from Aembit Edge, used during Vault method configuration. CUSTOM CLAIMS - **Claim Name** - vault_user - **Value** - empty (In this example, 'empty' is used as a value, but this field can accept any string input.) VAULT AUTHENTICATION - **Host** - Hostname of your Vault Cluster (e.g. `vault-cluster-public-vault-xyz.abc.hashicorp.cloud`) - **Port** - 8200 with TLS is recommended. Please use the configuration that matches your Vault cluster. - **Authentication Path** - Provide the path name of your OIDC Authentication method (e.g. oidc/path). - **Role** - If you did not set the **Default Role** previously, a role name must be provided here; otherwise optional. - **Namespace** - (Optional) Provide the **namespace** used in Vault. You can find it in the bottom left corner of the page. - **Forwarding Configuration** - No Forwarding (default) ### Configuration-Specific Fields Depending on your Vault Role configuration, ensure that the Credential Provider includes the following values: - **Subject** - If using a `bound_subject` configuration for your Vault Role, this value must match that configuration. CUSTOM CLAIMS - **Claim Name** - aud - **Value** - This value should match the configuration in your Vault role's `bound_audiences` setting. ## Server Workload Configuration 1. Create a new Server Workload. - **Name** - Choose a user-friendly name. 2. Configure the Service endpoint: - **Host** - Hostname of your Vault Cluster (e.g. `vault-cluster-public-vault-xyz.abc.hashicorp.cloud`) - **Application Protocol** - HTTP - **Port** - 8200 with TLS is recommended. Please use the configuration that matches your Vault cluster. - **Forward to Port** - 8200 with TLS is recommended. Please use the configuration that matches your Vault cluster. - **Authentication method** - HTTP Authentication - **Authentication scheme** - Header - **Header** - X-Vault-Token ## Client Workload Configuration Aembit now handles the credentials required to access the Server Workload, eliminating the need for you to manage them directly. You can safely remove any previously used credentials from the Client Workload. If you access the Server Workload through an SDK or library, it is possible that the SDK/library may still require credentials to be present for initialization purposes. In this scenario, you can provide placeholder credentials.
Aembit will overwrite these placeholder credentials with the appropriate ones during the access process. ## Access Policy - Create an access policy for a Client Workload to access the HashiCorp Vault Server Workload and assign the newly created Credential Provider to it. ## Required Features - You will need to configure the [TLS Decrypt](/astro/user-guide/deploy-install/advanced-options/tls-decrypt/configure-tls-decrypt) feature to work with the HashiCorp Vault Server Workload. --- # AWS Key Management Service (KMS) URL: https://docs.aembit.io/user-guide/access-policies/server-workloads/guides/kms/ Description: This page describes how to configure Aembit to work with the AWS KMS server workload. # [Amazon Key Management Service](https://aws.amazon.com/kms/) is a service that enables you to create and control the encryption keys used to secure your data. This service integrates seamlessly with other AWS services, allowing you to easily encrypt and decrypt data, manage access to keys, and audit key usage. Below you can find the Aembit configuration required to work with AWS KMS as a Server Workload using the AWS CLI, AWS SDK, or other HTTP-based client. ## Prerequisites - You will need an AWS IAM role configured to access AWS KMS resources. ## Server Workload Configuration 1. Create a new Server Workload. - **Name** - Choose a user-friendly name. 2. Configure the service endpoint: - **Host** - `kms.us-east-1.amazonaws.com` (substitute **us-east-1** with your preferred region) - **Application Protocol** - HTTP - **Port** - 443 with TLS - **Forward to Port** - 443 with TLS - **Authentication method** - HTTP Authentication - **Authentication scheme** - AWS Signature v4 ## Credential Provider Configuration 1. Create a new Credential Provider. - **Name** - Choose a user-friendly name. - **Credential Type** - [AWS Security Token Service Federation](/astro/user-guide/access-policies/credential-providers/aws-security-token-service-federation) - **OIDC Issuer URL** - Copy and securely store for later use in AWS Identity Provider configuration. - **AWS IAM Role Arn** - Provide the IAM Role Arn. - **Aembit IdP Token Audience** - Copy and securely store for later use in AWS Identity Provider configuration. 2. Create an AWS IAM Role to access KMS and trust Aembit. - Within the AWS Console, go to **IAM** > **Identity providers** and select **Add provider**. - On the Configure provider screen, complete the steps and fill out the values specified: - **Provider type** - Select **OpenID Connect**. - **Provider URL** - Paste in the **OIDC Issuer URL** from the previous steps. - Click **Get thumbprint** to configure the AWS Identity Provider trust relationship. - **Audience** - Paste in the **Aembit IdP Token Audience** from the previous steps. - Click **Add provider**. - Within the AWS Console, go to **IAM** > **Identity providers** and select the Identity Provider you just created. - Click the **Assign role** button and choose **Use an existing role**. ## Client Workload Configuration Aembit now handles the credentials required to access the Server Workload, eliminating the need for you to manage them directly. You can safely remove any previously used credentials from the Client Workload. If you access the Server Workload through an SDK or library, it is possible that the SDK/library may still require credentials to be present for initialization purposes. In this scenario, you can provide placeholder credentials.
Aembit will overwrite these placeholder credentials with the appropriate ones during the access process. ## Access Policy - Create an access policy for a Client Workload to access the KMS Server Workload and assign the newly created Credential Provider to it. ## Required Features - You will need to configure the [TLS Decrypt](/astro/user-guide/deploy-install/advanced-options/tls-decrypt/configure-tls-decrypt) feature to work with the KMS Server Workload. - If you are using [AWS CLI](https://aws.amazon.com/cli/) to access KMS, you will need to set the environment variable `AWS_CA_BUNDLE` to point to the certificate used by the TLS Decrypt feature above. --- # Local MySQL URL: https://docs.aembit.io/user-guide/access-policies/server-workloads/guides/local-mysql/ Description: This page describes how to configure Aembit to work with the local MySQL Server Workload. # [MySQL](https://www.mysql.com/) is a powerful and widely-used open-source relational database management system, commonly used for local development environments and applications of various scales, while providing a solid foundation for efficient data storage, retrieval, and management. Below you can find the Aembit configuration required to work with MySQL as a Server Workload using the MySQL-compatible CLI, application, or a library. ## Prerequisites Before proceeding with the configuration, ensure you have access to a Kubernetes cluster. Modify the example YAML file according to your specific configurations, and then deploy it to your Kubernetes cluster. ### Example MySQL YAML File :::note This example does not use TLS and is shown here for demonstration purposes only. It is strongly recommended to use TLS in production settings. :::

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - image: mysql:5.7.44
          name: mysql
          args: ["--ssl=0"]
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: ""
            - name: MYSQL_DATABASE
              value:
          ports:
            - containerPort: 3306
              name: mysql
---
# Service
apiVersion: v1
kind: Service
metadata:
  name: mysql
  annotations:
spec:
  type: NodePort
  ports:
    - name: mysql
      port: 3306
      targetPort: 3306
  selector:
    app: mysql
```

:warning: Before running the command, ensure you have replaced the master password and database name in the configuration file with your desired values. Use the following command to deploy this file to your Kubernetes cluster. `kubectl apply -f ./mysql.yaml` ## Server Workload Configuration 1. Create a new Server Workload. - **Name** - Choose a user-friendly name. 2. Configure the service endpoint: - **Host** - `mysql.default.svc.cluster.local` - **Application Protocol** - MySQL - **Port** - 3306 - **Forward to Port** - 3306 - **Authentication method** - Password Authentication - **Authentication scheme** - Password ## Credential Provider Configuration 1. Create a new Credential Provider. - **Name** - Choose a user-friendly name. - **Credential Type** - [Username & Password](/astro/user-guide/access-policies/credential-providers/username-password) - **Username** - Provide the database login ID for the MySQL master user. - **Password** - Provide the master password associated with the MySQL database credentials. ## Client Workload Configuration Aembit now handles the credentials required to access the Server Workload, eliminating the need for you to manage them directly. You can safely remove any previously used credentials from the Client Workload.
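For example, a client pod in the same cluster can connect to the service endpoint with a placeholder password, and Aembit Edge supplies the real credential from the Credential Provider during the access process. A minimal sketch, assuming the example Deployment above and a hypothetical `placeholder` value:

```shell
# Connect through the in-cluster service; the password below is a placeholder
# that Aembit replaces with the master password from the Credential Provider.
mysql -h mysql.default.svc.cluster.local -P 3306 -u root -pplaceholder -e "SELECT 1;"
```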
If you access the Server Workload through an SDK or library, it is possible that the SDK/library may still require credentials to be present for initialization purposes. In this scenario, you can provide placeholder credentials. Aembit will overwrite these placeholder credentials with the appropriate ones during the access process. ## Access Policy - Create an access policy for a Client Workload to access the MySQL Server Workload and assign the newly created Credential Provider to it. --- # Local PostgreSQL URL: https://docs.aembit.io/user-guide/access-policies/server-workloads/guides/local-postgres/ Description: This page describes how to configure Aembit to work with the local PostgreSQL Server Workload. # [PostgreSQL](https://www.postgresql.org/) stands out as a dynamic and versatile relational database service, delivering scalability and efficiency. This solution facilitates the effortless deployment, administration, and scaling of PostgreSQL databases in diverse cloud settings. Below you can find the Aembit configuration required to work with PostgreSQL as a Server Workload using a PostgreSQL-compatible CLI, application, or library. ## Prerequisites Before proceeding with the configuration, ensure you have access to a Kubernetes cluster. Modify the example YAML file according to your specific configurations, and then deploy it to your Kubernetes cluster. ### Example Postgres YAML File :::note This example does not use TLS and is shown here for demonstration purposes only. It is strongly recommended to use TLS in production settings. :::

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgresql
spec:
  selector:
    matchLabels:
      app: postgresql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: postgresql
    spec:
      containers:
        - image: postgres:16.0
          name: postgresql
          env:
            - name: POSTGRES_DB
              value:
            - name: POSTGRES_USER
              value:
            - name: POSTGRES_PASSWORD
              value: ""
          ports:
            - containerPort: 5432
              name: postgresql
---
# Service
apiVersion: v1
kind: Service
metadata:
  name: postgresql
  annotations:
spec:
  type: NodePort
  ports:
    - name: postgresql
      port: 5432
      targetPort: 5432
  selector:
    app: postgresql
```

:warning: Before running the command, ensure you have replaced the master user name, master password and database name in the configuration file with your desired values. Use the following command to deploy this file to your Kubernetes cluster. `kubectl apply -f ./postgres.yaml` ## Server Workload Configuration 1. Create a new Server Workload. - **Name** - Choose a user-friendly name. 2. Configure the service endpoint: - **Host** - `postgresql.default.svc.cluster.local` (this must match the name of the Service in the example YAML file above) - **Application Protocol** - Postgres - **Port** - 5432 - **Forward to Port** - 5432 - **Authentication method** - Password Authentication - **Authentication scheme** - Password ## Credential Provider Configuration 1. Create a new Credential Provider. - **Name** - Choose a user-friendly name. - **Credential Type** - [Username & Password](/astro/user-guide/access-policies/credential-providers/username-password) - **Username** - Provide the database login ID for the PostgreSQL master user. - **Password** - Provide the master password associated with the PostgreSQL database credentials. ## Client Workload Configuration Aembit now handles the credentials required to access the Server Workload, eliminating the need for you to manage them directly. You can safely remove any previously used credentials from the Client Workload.
If you access the Server Workload through an SDK or library, it is possible that the SDK/library may still require credentials to be present for initialization purposes. In this scenario, you can provide placeholder credentials. Aembit will overwrite these placeholder credentials with the appropriate ones during the access process. ## Access Policy - Create an access policy for a Client Workload to access the PostgreSQL Server Workload and assign the newly created Credential Provider to it. --- # Local Redis URL: https://docs.aembit.io/user-guide/access-policies/server-workloads/guides/local-redis/ Description: This page describes how to configure Aembit to work with the local Redis Server Workload. # [Redis](https://redis.io/), known as an advanced key-value store, offers a fast and efficient solution for managing data in-memory. Redis' speed and simplicity make it the preferred choice for applications requiring rapid access to cached information, real-time analytics, and message brokering. Redis supports a variety of data structures, including strings, hashes, lists, sets, and more, allowing users to model and manipulate data based on their specific requirements. Below you can find the Aembit configuration required to work with Redis as a Server Workload using a Redis-compatible CLI, application, or library. ## Prerequisites Before proceeding with the configuration, ensure you have access to a Kubernetes cluster. Modify the example YAML file according to your specific configurations, and then deploy it to your Kubernetes cluster. ### Example Redis YAML File :::note This example does not use TLS and is shown here for demonstration purposes only. It is strongly recommended to use TLS in production settings. :::

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
        - name: redis
          image: redis
          imagePullPolicy: Always
          ports:
            - containerPort: 6379
              name: redis
          env:
            - name: MASTER
              value: "true"
            - name: REDIS_USER
              value: ""
            - name: REDIS_PASSWORD
              value: ""
---
# Service
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  type: NodePort
  selector:
    app: redis
  ports:
    - port: 6379
      targetPort: 6379
```

:warning: Before running the command, ensure you have replaced the master user name and master password in the configuration file with your desired values. Use the following command to deploy this file to your Kubernetes cluster. `kubectl apply -f ./redis.yaml` ## Server Workload Configuration 1. Create a new Server Workload. - **Name** - Choose a user-friendly name. 2. Configure the service endpoint: - **Host** - `redis.default.svc.cluster.local` - **Application Protocol** - Redis - **Port** - 6379 - **Forward to Port** - 6379 - **Authentication method** - Password Authentication - **Authentication scheme** - Password ## Credential Provider Configuration 1. Create a new Credential Provider. - **Name** - Choose a user-friendly name. - **Credential Type** - [Username & Password](/astro/user-guide/access-policies/credential-providers/username-password) - **Username** - Provide the login ID for the Redis master user. - **Password** - Provide the master password associated with the Redis credentials. ## Client Workload Configuration Aembit now handles the credentials required to access the Server Workload, eliminating the need for you to manage them directly. You can safely remove any previously used credentials from the Client Workload.
If you access the Server Workload through an SDK or library, it is possible that the SDK/library may still require credentials to be present for initialization purposes. In this scenario, you can provide placeholder credentials. Aembit will overwrite these placeholder credentials with the appropriate ones during the access process. ## Access Policy - Create an access policy for a Client Workload to access the Redis Server Workload and assign the newly created Credential Provider to it. --- # Looker Studio URL: https://docs.aembit.io/user-guide/access-policies/server-workloads/guides/looker-studio/ Description: This page describes how to configure Aembit to work with the Looker Studio Server Workload. # [Looker Studio](https://lookerstudio.google.com/), part of Google Cloud Platform, is a data visualization tool designed for creating and managing reports and dashboards. It enables users to connect to various data sources, transforming raw data into interactive visual insights. Below you can find the Aembit configuration required to work with the Looker Studio service as a Server Workload using the Looker Studio API. :::note[Prerequisites] Before proceeding with the configuration, ensure you have the following: - An active Google Cloud account - A GCP project with [Looker Studio API](https://console.cloud.google.com/apis/library/datastudio.googleapis.com) enabled - Looker Studio assets (e.g., reports or data sources) available ::: ## Server Workload Configuration 1. Create a new Server Workload. - **Name** - Choose a user-friendly name. 2. Configure the Service endpoint: - **Host** - `datastudio.googleapis.com` - **Application Protocol** - HTTP - **Port** - 443 with TLS - **Forward to Port** - 443 with TLS - **Authentication method** - HTTP Authentication - **Authentication scheme** - Bearer ## Credential Provider Configuration 1. Sign in to the Google Cloud Console and navigate to the [Credentials](https://console.cloud.google.com/apis/credentials) page. Ensure you are working within a GCP project for which you have authorization. 2. On the **Credentials** dashboard, click **Create Credentials** located in the top left corner and select the **OAuth client ID** option. ![Create OAuth client ID](../../../../../../assets/images/gcp_create_oauth_client_id.png) 3. If there is no configured Consent Screen for your project, you will see a **Configure Consent Screen** button on the directed page. Click the button to continue. ![Configure Consent Screen](../../../../../../assets/images/gcp_no_consent_screen.png) 4. Choose **User Type** and click **Create**. - Provide a name for your app. - Choose a user support email from the dropdown menu. - App logo and app domain fields are optional. - Enter at least one email for the Developer contact information field. - Click **Save and Continue**. - You may skip the Scopes step by clicking **Save and Continue** once again. - In the **Summary** step, review the details of your app and click **Back to Dashboard**. 5. Navigate back to the [Credentials](https://console.cloud.google.com/apis/credentials) page, click **Create Credentials**, and select the **OAuth client ID** option again. - Choose **Web Application** for Application Type. - Provide a name for your web client. - Switch to the Aembit UI to create a new Credential Provider, selecting the OAuth 2.0 Authorization Code credential type. After setting up the Credential Provider, copy the auto-generated **Callback URL**.
- Return to Google Cloud Console and paste the copied URL into the **Authorized redirect URIs** field. - Click **Create**. 6. A pop-up window will appear. Copy both the **Client ID** and the **Client Secret**. Store them for later use in the tenant configuration. 7. Edit the existing Credential Provider created in the previous steps. - **Name** - Choose a user-friendly name. - **Credential Type** - [OAuth 2.0 Authorization Code](/astro/user-guide/access-policies/credential-providers/oauth-authorization-code) - **Callback URL (Read-Only)** - An auto-generated Callback URL from Aembit Admin. - **Client Id** - Provide the Client ID copied from Google. - **Client Secret** - Provide the Secret copied from Google. - **Scopes** - Enter the scopes you will use for Looker Studio (e.g. `https://www.googleapis.com/auth/datastudio`). Detailed information about scopes can be found in the [official Looker Studio documentation](https://developers.google.com/looker-studio/integrate/api#authorize-app). - **OAuth URL** - `https://accounts.google.com` Click on **URL Discovery** to populate the Authorization and Token URL fields; you can leave these fields as populated. - **PKCE Required** - Off - **Lifetime** - 1 year (This value is recommended by Aembit. For more information, please refer to the [official Google documentation](https://developers.google.com/identity/protocols/oauth2#expiration).) 8. Click **Save** to save your changes on the Credential Provider. 9. In the Aembit UI, click the **Authorize** button. You are directed to a page where you can review the access request. Click **Authorize** to complete the OAuth 2.0 Authorization Code flow. You should see a success page and then be redirected to Aembit automatically. You can also verify your flow is complete by checking the **State** value in the Credential Provider. After completion, it should be **Ready**. ![Credential Provider - Ready State](../../../../../../assets/images/credential_providers_auth_code_status_ready.png) :::caution Once the set lifetime ends, the retrieved credential expires and will no longer work. Aembit will notify you before this happens. Please ensure you reauthorize the credential before it expires. ::: ## Client Workload Configuration Aembit now handles the credentials required to access the Server Workload, eliminating the need for you to manage them directly. You can safely remove any previously used credentials from the Client Workload. If you access the Server Workload through an SDK or library, it is possible that the SDK/library may still require credentials to be present for initialization purposes. In this scenario, you can provide placeholder credentials. Aembit will overwrite these placeholder credentials with the appropriate ones during the access process. ## Access Policy - Create an Access Policy for a Client Workload to access the Looker Studio Server Workload. Assign the newly created Credential Provider to this Access Policy. ## Required Features - You will need to configure the [TLS Decrypt](/astro/user-guide/deploy-install/advanced-options/tls-decrypt/configure-tls-decrypt) feature to work with the Looker Studio Server Workload. --- # Microsoft Graph URL: https://docs.aembit.io/user-guide/access-policies/server-workloads/guides/microsoft-graph/ Description: This page describes how to configure Aembit to work with the Microsoft Graph Server Workload.
# [Microsoft Graph API](https://developer.microsoft.com/en-us/graph) is a comprehensive cloud-based service that empowers developers to build applications that integrate seamlessly with Microsoft 365. This service serves as a unified endpoint to access various Microsoft 365 services and data, offering a range of functionalities for communication, collaboration, and productivity. Below you can find the Aembit configuration required to work with the Microsoft service as a Server Workload using the Microsoft Graph REST API. ## Prerequisites Before proceeding with the configuration, ensure you have the following: - Microsoft Azure tenant. - A registered and consent-granted application on Microsoft Entra ID (previously Azure Active Directory). If you haven't set up an app yet, follow the steps in the next section. ### Microsoft Entra ID (Azure Active Directory) App Registration 1. Log in to the [Microsoft Azure Portal](https://portal.azure.com/#home). 2. Navigate to **Microsoft Entra ID** (Azure Active Directory). 3. On the left panel, click on **App registrations**, and then from the right part, click on **New registration** in the ribbon list. 4. Choose a user-friendly name, select the **Accounts in this organizational directory only** option, and then click **Register**. Your application is now registered with Microsoft Entra ID (Azure Active Directory). ![Register an application](../../../../../../assets/images/microsoft_register_app.png) 5. To set API Permissions, on the left panel, click on **API Permissions**, and then on the right part, click on **Add a permission**. In the opened dialog, click on **Microsoft Graph** and then click **Application permissions**. :::note The current configuration with Microsoft Graph **_only_** works for the Application permission type. For more details on permissions and types, please refer to the [official Microsoft article](https://learn.microsoft.com/en-us/graph/permissions-overview?tabs=http). ::: ![Set API Permissions](../../../../../../assets/images/microsoft_set_permission.png) 6. Select the permissions your workload needs. Since there are many permissions to choose from, it may help to search for the ones you want. Then, click on **Add permissions**. 7. Under Configured Permissions, click on **Grant admin consent for…**, and then click **Yes**. ![Grant Admin Consent](../../../../../../assets/images/microsoft_grant_consent.png) Before an app accesses your organization's data, you need to grant specific permissions. The level of access depends on the permissions. In Microsoft Entra ID (Azure Active Directory), Application Administrator, Cloud Application Administrator, and Global Administrator are [built-in roles](https://learn.microsoft.com/en-us/entra/identity/role-based-access-control/permissions-reference) with the ability to manage admin consent request policies. If the button is disabled for you, please contact your Administrator. Note that only users with the appropriate privileges can perform this step. For more information on granting tenant-wide admin consent, refer to the [official Microsoft article](https://learn.microsoft.com/en-us/entra/identity/enterprise-apps/grant-admin-consent?pivots=portal). ## Server Workload Configuration 1. Create a new Server Workload. - **Name** - Choose a user-friendly name. 2. 
Configure the service endpoint: - **Host** - `graph.microsoft.com` - **Application Protocol** - HTTP - **Port** - 80 - **Forward to Port** - 443 with TLS - **Authentication method** - HTTP Authentication - **Authentication scheme** - Bearer ## Credential Provider Configuration 1. Log in to the [Microsoft Azure Portal](https://portal.azure.com/#home). 2. Navigate to **Microsoft Entra ID** (Azure Active Directory) and on the left panel click on **App registrations**. 3. Select your application. 4. In the Overview section, copy both the **Application (client) ID** and the **Directory (tenant) ID**. Store them for later use in the tenant configuration. ![Overview | Copy Client ID and Tenant ID](../../../../../../assets/images/microsoft_overview_workload.png) 5. Under Manage, navigate to **Certificates & secrets**. In the Client Secrets tab, if there is no existing secret, please create a new secret and make sure to save it immediately after creation. If there is an existing one, please provide the stored secret in the following steps. ![Copy Client Secret](../../../../../../assets/images/microsoft_client_secret.png) 6. Create a new Credential Provider. - **Name** - Choose a user-friendly name. - **Credential Type** - [OAuth 2.0 Client Credentials](/astro/user-guide/access-policies/credential-providers/oauth-client-credentials) - **Token endpoint** - `https://login.microsoftonline.com/{Your-Tenant-Id}/oauth2/v2.0/token`, replacing `{Your-Tenant-Id}` with the Directory (tenant) ID copied earlier. - **Client ID** - Provide the client ID copied from Azure. - **Client Secret** - Provide the client secret copied from Azure. - **Scopes** - `https://graph.microsoft.com/.default` ## Client Workload Configuration Aembit now handles the credentials required to access the Server Workload, eliminating the need for you to manage them directly. You can safely remove any previously used credentials from the Client Workload. If you access the Server Workload through an SDK or library, it is possible that the SDK/library may still require credentials to be present for initialization purposes. In this scenario, you can provide placeholder credentials. Aembit will overwrite these placeholder credentials with the appropriate ones during the access process. ## Access Policy - Create an Access Policy for a Client Workload to access the Microsoft Server Workload. Assign the newly created Credential Provider to this Access Policy. --- # Okta URL: https://docs.aembit.io/user-guide/access-policies/server-workloads/guides/okta/ Description: This page describes how to configure Aembit to work with the Okta Server Workload. # [Okta](https://www.okta.com/) is a cloud-based Identity and Access Management (IAM) platform that offers tools for user authentication, access control, and security, helping streamline identity management and improve user experiences across applications and devices. Below you can find the Aembit configuration required to work with the Okta Workforce Identity Cloud service as a Server Workload using the Core Okta API. ## Prerequisites Before proceeding with the configuration, you must have an Okta Workforce Identity Cloud organization (tenant). ## Server Workload Configuration To retrieve the connection information in the Okta Admin Console: - Click on your username in the upper-right corner of the Admin Console. The domain appears in the dropdown menu; copy the domain. ![Okta Endpoint](../../../../../../assets/images/okta_endpoint.png) 1. Create a new Server Workload. - **Name** - Choose a user-friendly name. 2.
Configure the service endpoint: - **Host** - `.okta.com` (Provide the domain copied from Okta) - **Application Protocol** - HTTP - **Port** - 443 with TLS - **Forward to Port** - 443 with TLS - **Authentication method** - API Key - **Authentication scheme** - Header - **Header** - Authorization ## Credential Provider Configuration 1. Sign in to your Okta organization as a user with administrator privileges. 2. In the left navigation pane, select **Security**, then click on **API**. 3. Navigate to the **Tokens** tab in the ribbon list. 4. Click **Create Token**, name your token, and then click **Create Token**. 5. Click the **Copy to Clipboard** icon to securely store the token for later use in the tenant configuration. For detailed information on API tokens, please refer to the [official Okta documentation](https://developer.okta.com/docs/guides/create-an-api-token/main/). ![Copy API Token](../../../../../../assets/images/okta_copy_api_token.png) 6. Create a new Credential Provider. - **Name** - Choose a user-friendly name. - **Credential Type** - [API Key](/astro/user-guide/access-policies/credential-providers/api-key) - **API Key** - Provide the key copied from Okta and use the format `SSWS api-token`, replacing `api-token` with your API token. ## Client Workload Configuration Aembit now handles the credentials required to access the Server Workload, eliminating the need for you to manage them directly. You can safely remove any previously used credentials from the Client Workload. If you access the Server Workload through an SDK or library, it is possible that the SDK/library may still require credentials to be present for initialization purposes. In this scenario, you can provide placeholder credentials. Aembit will overwrite these placeholder credentials with the appropriate ones during the access process. ## Access Policy - Create an Access Policy for a Client Workload to access the Okta Server Workload. Assign the newly created Credential Provider to this Access Policy. ## Required Features - You will need to configure the [TLS Decrypt](/astro/user-guide/deploy-install/advanced-options/tls-decrypt/configure-tls-decrypt) feature to work with the Okta Server Workload. --- # ChatGPT (OpenAI) URL: https://docs.aembit.io/user-guide/access-policies/server-workloads/guides/openai/ Description: This page describes how to configure Aembit to work with the OpenAI Server Workload. # [OpenAI](https://platform.openai.com/) is an artificial intelligence platform that allows developers to integrate advanced language models into their applications. It supports diverse tasks such as text completion, summarization, and sentiment analysis, enhancing software functionality and user experience. Below you can find the Aembit configuration required to work with the OpenAI service as a Server Workload using the OpenAI API. ## Prerequisites Before proceeding with the configuration, ensure you have an OpenAI account and API key. If you have not already generated a key, follow the instructions below. For more details on API key authentication, refer to the [official OpenAI API documentation](https://platform.openai.com/docs/api-reference/api-keys). ### Create Project API Key 1. Sign in to your OpenAI account. 2. Navigate to the [API Keys](https://platform.openai.com/api-keys) page from the left menu. 3. Click the **Create new secret key** button in the middle of the page. 4. A pop-up window will appear. Choose the owner and project (if not multiple, the default project is selected).
Then, fill in either the optional name field or service account ID, depending on the owner selection. - If the owner is selected as **You**, under the **Permissions** section, select the permissions (scopes) for your application according to your needs. - Click on **Create secret key** to proceed. ![Create secret key](../../../../../../assets/images/openai_api_create_secret_key.png) 5. Click **Copy** and securely store the key for later use in the configuration on the tenant. ![Copy secret key](../../../../../../assets/images/openai_api_copy_secret_key.png) :::note The following configuration steps will also work with **user API keys**; however, **project API keys** are recommended as they offer more granular control over the resources. ::: ## Server Workload Configuration 1. Create a new Server Workload. - **Name** - Choose a user-friendly name. 2. Configure the service endpoint: - **Host** - `api.openai.com` - **Application Protocol** - HTTP - **Port** - 443 with TLS - **Forward to Port** - 443 with TLS - **Authentication method** - HTTP Authentication - **Authentication scheme** - Bearer ## Credential Provider Configuration 1. Create a new Credential Provider. - **Name** - Choose a user-friendly name. - **Credential Type** - [API Key](/astro/user-guide/access-policies/credential-providers/api-key) - **API Key** - Paste the key copied from OpenAI Platform. ## Client Workload Configuration Aembit now handles the credentials required to access the Server Workload, eliminating the need for you to manage them directly. You can safely remove any previously used credentials from the Client Workload. If you access the Server Workload through an SDK or library, it is possible that the SDK/library may still require credentials to be present for initialization purposes. In this scenario, you can provide placeholder credentials. Aembit will overwrite these placeholder credentials with the appropriate ones during the access process. ## Access Policy - Create an Access Policy for a Client Workload to access the OpenAI Server Workload. Assign the newly created Credential Provider to this Access Policy. ## Required Features - You will need to configure the [TLS Decrypt](/astro/user-guide/deploy-install/advanced-options/tls-decrypt/configure-tls-decrypt) feature to work with the OpenAI API Server Workload. --- # PagerDuty URL: https://docs.aembit.io/user-guide/access-policies/server-workloads/guides/pagerduty/ Description: This page describes how to configure Aembit to work with the PagerDuty Server Workload. # [PagerDuty](https://www.pagerduty.com/) is a digital operations management platform that helps businesses improve their incident response process. It allows teams to centralize their monitoring tools and manage incidents in real-time, reducing downtime and improving service reliability. Below you can find the Aembit configuration required to work with the PagerDuty service as a Server Workload using the PagerDuty API. Aembit supports multiple authentication/authorization methods for PagerDuty. This page describes scenarios where the Credential Provider is configured for PagerDuty via: - [OAuth 2.0 Authorization Code (3LO)](/astro/user-guide/access-policies/server-workloads/guides/pagerduty#oauth-20-authorization-code) - [OAuth 2.0 Client Credentials](/astro/user-guide/access-policies/server-workloads/guides/pagerduty#oauth-20-client-credentials) :::note[Prerequisites] Before proceeding with the configuration, ensure you have the following: - PagerDuty tenant. 
- Registered app in the PagerDuty tenant. If you have not registered an app before, you can follow the steps outlined in the subsequent sections or refer to the [official PagerDuty Developer documentation](https://developer.pagerduty.com/docs/dd91fbd09a1a1-register-an-app) for more detailed instructions. ::: ## OAuth 2.0 Authorization Code ### Server Workload Configuration 1. Create a new Server Workload. - **Name** - Choose a user-friendly name. 2. Configure the service endpoint: - **Host** - `api.pagerduty.com` - **Application Protocol** - HTTP - **Port** - 443 with TLS - **Forward to Port** - 443 with TLS - **Authentication method** - HTTP Authentication - **Authentication scheme** - Bearer ### Credential Provider Configuration 1. Log in to your [PagerDuty account](https://identity.pagerduty.com/global/authn/authentication/PagerDutyGlobalLogin/enter_email). 2. Navigate to the top menu, select **Integrations**, and then click on **App Registration**. ![PagerDuty Dashboard Navigation](../../../../../../assets/images/pagerduty_dashboard_navigation.png) 3. Click the **New App** button in the top right corner of the page. ![PagerDuty New App](../../../../../../assets/images/pagerduty_new_app.png) 4. Fill in the name and description fields, choose **OAuth 2.0**, and then click **Next** to proceed. 5. Select **Scoped OAuth** as the authorization method. 6. Switch to the Aembit UI to create a new Credential Provider, selecting the OAuth 2.0 Authorization Code credential type. After setting up the Credential Provider, copy the auto-generated **Callback URL**. 7. Return to PagerDuty, click **Add Redirect URL**, and paste the copied **Callback URL** into the field. 8. Choose the permissions (scopes) for your application based on your needs. 9. Before registering your app, scroll down and click **Copy to clipboard** to store your selected permission scopes for later use in the tenant configuration. ![PagerDuty Copy Scopes](../../../../../../assets/images/pagerduty_copy_scopes.png) 10. After making all of your selections, click on **Register App**. 11. A pop-up window appears. Copy both the Client ID and Client Secret, and store these details securely for later use in the tenant configuration. ![PagerDuty Copy Client ID and Secret](../../../../../../assets/images/pagerduty_copy_client_id_and_secret.png) 12. Edit the existing Credential Provider created in the previous steps. - **Name** - Choose a user-friendly name. - **Credential Type** - [OAuth 2.0 Authorization Code](/astro/user-guide/access-policies/credential-providers/oauth-authorization-code) - **Callback URL (Read-Only)** - An auto-generated Callback URL from Aembit Admin. - **Client Id** - Provide the client ID copied from PagerDuty. - **Client Secret** - Provide the client secret copied from PagerDuty. - **Scopes** - Enter the scopes you use, space delimited. (e.g. `incidents.read abilities.read`). - **OAuth URL** - `https://identity.pagerduty.com/global/oauth/anonymous/.well-known/openid-configuration` Click on **URL Discovery** to populate the Authorization and Token URL fields. These fields need to be updated to the following values: - **Authorization URL** - `https://identity.pagerduty.com/oauth/authorize` - **Token URL** - `https://identity.pagerduty.com/oauth/token` - **PKCE Required** - On - **Lifetime** - 1 year (PagerDuty does not specify a refresh token lifetime; this value is recommended by Aembit.) 13. Click **Save** to save your changes on the Credential Provider. 14. In the Aembit UI, click the **Authorize** button.
You are then directed to a page where you can review the access request. Click **Accept** to complete the OAuth 2.0 Authorization Code flow. You should see a success page and be redirected to Aembit automatically. You can also verify that your flow is complete by checking the **State** value in the Credential Provider. After completion, it should be in a **Ready** state. ![Credential Provider - Ready State](../../../../../../assets/images/credential_providers_auth_code_status_ready.png) :::caution Once the set lifetime ends, the retrieved credential expires and will no longer be active. Aembit will notify you before this happens. Please ensure you reauthorize your credential before it expires. ::: ## OAuth 2.0 Client Credentials ### Server Workload Configuration 1. Create a new Server Workload. - **Name** - Choose a user-friendly name. 2. Configure the service endpoint: - **Host** - `api.pagerduty.com` - **Application Protocol** - HTTP - **Port** - 443 with TLS - **Forward to Port** - 443 with TLS - **Authentication method** - HTTP Authentication - **Authentication scheme** - Bearer ### Credential Provider Configuration 1. Log in to your [PagerDuty account](https://identity.pagerduty.com/global/authn/authentication/PagerDutyGlobalLogin/enter_email). 2. Navigate to the top menu, select **Integrations**, and then click on **App Registration**. ![PagerDuty Dashboard Navigation](../../../../../../assets/images/pagerduty_dashboard_navigation.png) 3. Click the **New App** button in the top right corner of the page. ![PagerDuty New App](../../../../../../assets/images/pagerduty_new_app.png) 4. Fill in the name and description fields, choose **OAuth 2.0**, and then click **Next** to proceed. 5. Select **Scoped OAuth** as the authorization method and choose the permissions (scopes) for your application based on your needs. 6. Before registering your app, scroll down and click **Copy to clipboard** to store your selected permission scopes for later use in the tenant configuration. ![PagerDuty Copy Scopes](../../../../../../assets/images/pagerduty_copy_scopes.png) 7. After making all of your selections, click on **Register App**. 8. A pop-up window appears. Copy both the Client ID and Client Secret, and store these details securely for later use in the tenant configuration. ![PagerDuty Copy Client ID and Secret](../../../../../../assets/images/pagerduty_copy_client_id_and_secret.png) 9. Create a new Credential Provider. - **Name** - Choose a user-friendly name. - **Credential Type** - [OAuth 2.0 Client Credentials](/astro/user-guide/access-policies/credential-providers/oauth-client-credentials) - **Token endpoint** - `https://identity.pagerduty.com/oauth/token` - **Client ID** - Provide the client ID copied from PagerDuty. - **Client Secret** - Provide the client secret copied from PagerDuty. - **Scopes** - Enter the scopes you use, space delimited. Must include the `as_account-` scope that identifies the PagerDuty account, using the format `{REGION}.{SUBDOMAIN}` (e.g. `as_account-us.dev-aembit incidents.read abilities.read`). For more detailed information, you can refer to the [official PagerDuty Developer Documentation](https://developer.pagerduty.com/docs/e518101fde5f3-obtaining-an-app-o-auth-token). - **Credential Style** - POST Body ## Client Workload Configuration Aembit now handles the credentials required to access the Server Workload, eliminating the need for you to manage them directly. You can safely remove any previously used credentials from the Client Workload.
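For example, assuming the Client Workload's traffic is routed through Aembit Edge, it can call the PagerDuty REST API without attaching an `Authorization` header; Aembit obtains the OAuth token and injects it in transit. A minimal sketch (the endpoint shown is only an illustration):

```shell
# Minimal sketch: list incidents without managing a PagerDuty token yourself.
# Aembit Edge intercepts the request and injects the Bearer token it obtained
# from the Credential Provider.
curl "https://api.pagerduty.com/incidents" \
  -H "Accept: application/json"
```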
If you access the Server Workload through an SDK or library, it is possible that the SDK/library may still require credentials to be present for initialization purposes. In this scenario, you can provide placeholder credentials. Aembit will overwrite these placeholder credentials with the appropriate ones during the access process. ## Access Policy - Create an Access Policy for a Client Workload to access the PagerDuty Server Workload. Assign the newly created Credential Provider to this Access Policy. ## Required Features - You will need to configure the [TLS Decrypt](/astro/user-guide/deploy-install/advanced-options/tls-decrypt/configure-tls-decrypt) feature to work with the PagerDuty Server Workload. --- # PayPal URL: https://docs.aembit.io/user-guide/access-policies/server-workloads/guides/paypal/ Description: This page describes how to configure Aembit to work with the PayPal Server Workload. # [PayPal](https://www.paypal.com/) is an online payment platform that allows individuals and businesses to send and receive payments securely. PayPal supports various payment methods, including credit cards, debit cards, and bank transfers. Below you can find the Aembit configuration required to work with the PayPal service as a Server Workload using the PayPal REST API. ## Prerequisites Before proceeding with the configuration, you will need to have a PayPal Developer tenant (or [sign up](https://www.paypal.com/signin) for one). ## Server Workload Configuration 1. Create a new Server Workload. - **Name** - Choose a user-friendly name. 2. Configure the service endpoint: - **Host** - `api-m.sandbox.paypal.com` (Sandbox) or `api-m.paypal.com` (Live) - **Application Protocol** - HTTP - **Port** - 443 with TLS - **Forward to Port** - 443 with TLS - **Authentication method** - HTTP Authentication - **Authentication scheme** - Bearer ## Credential Provider Configuration 1. Log into the [PayPal Developer Dashboard](https://developer.paypal.com/dashboard/) using your PayPal account credentials. 2. Navigate to the [Apps & Credentials](https://developer.paypal.com/dashboard/applications/) page from the top menu. 3. Ensure you are in the correct mode (Sandbox mode for test data or Live mode for production data). 4. Locate the **Default Application** under the REST API apps list. 5. Click the copy buttons next to the **Client ID** and **Client Secret** values to copy them. Store these details securely for later use in the tenant configuration. ![Copy Client ID and Secret](../../../../../../assets/images/paypal_copy_client_id_and_secret.png) 6. Create a new Credential Provider. - **Name** - Choose a user-friendly name. - **Credential Type** - [OAuth 2.0 Client Credentials](/astro/user-guide/access-policies/credential-providers/oauth-client-credentials) - **Token endpoint** - `https://api-m.sandbox.paypal.com/v1/oauth2/token` (Sandbox) or `https://api-m.paypal.com/v1/oauth2/token` (Live) - **Client ID** - Provide the client ID copied from PayPal. - **Client Secret** - Provide the client secret copied from PayPal. - **Scopes** - You can leave this field **empty**, as PayPal will default to the necessary scopes, or specify the required scopes based on your needs, such as `https://uri.paypal.com/services/invoicing`. For more detailed information, you can refer to the [official PayPal Developer Documentation](https://developer.paypal.com/api/rest/authentication/). 
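Optionally, you can sanity-check the copied Client ID and Client Secret by requesting a token directly from the PayPal token endpoint. This is not part of the Aembit configuration (Aembit performs this token exchange for you); the sandbox endpoint and placeholder values below are illustrative:

```shell
# Optional sanity check (not part of the Aembit configuration):
# exchange the copied Client ID and Client Secret for an access token.
# Replace CLIENT_ID and CLIENT_SECRET with the values copied from PayPal.
curl -X POST "https://api-m.sandbox.paypal.com/v1/oauth2/token" \
  -u "CLIENT_ID:CLIENT_SECRET" \
  -d "grant_type=client_credentials"
```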
## Client Workload Configuration Aembit now handles the credentials required to access the Server Workload, eliminating the need for you to manage them directly. You can safely remove any previously used credentials from the Client Workload. If you access the Server Workload through an SDK or library, it is possible that the SDK/library may still require credentials to be present for initialization purposes. In this scenario, you can provide placeholder credentials. Aembit will overwrite these placeholder credentials with the appropriate ones during the access process. ## Access Policy - Create an access policy for a Client Workload to access the PayPal Server Workload and assign the newly created Credential Provider to it. ## Required Features - You will need to configure the [TLS Decrypt](/astro/user-guide/deploy-install/advanced-options/tls-decrypt/configure-tls-decrypt) feature to work with the PayPal Server Workload. --- # Salesforce REST URL: https://docs.aembit.io/user-guide/access-policies/server-workloads/guides/salesforce-rest/ Description: This page describes how to configure Aembit to work with the Salesforce REST Server Workload. # [Salesforce](https://www.salesforce.com/) is a cloud-based platform that helps businesses manage customer relationships, sales, and services. It supports integration with various tools and offers customization to fit different business needs. Below you can find the Aembit configuration required to work with the Salesforce service as a Server Workload using the Salesforce REST API. :::note[Prerequisites] Before proceeding with the configuration, ensure you have the following: - Salesforce account. - A connected app on Salesforce. If you have not set up an app yet, follow the steps in the next section. ::: ### Salesforce App Configuration 1. Log in to your [Salesforce account](https://login.salesforce.com/). 2. In the upper-right corner of any page, click the cog icon and then click **Setup**. ![Salesforce Setup](../../../../../../assets/images/salesforce_dashboard_to_setup.png) 3. In the search box at the top of the Setup page, type **App Manager** and select it from the search results. 4. Click the **New Connected App** button in the top-right corner of the page. ![New Connected App](../../../../../../assets/images/salesforce_new_connected_app.png) 5. Configure the app based on your preferences. Below are key choices: - Provide a name for your connected app. The API Name will be auto-generated based on the app name, but you can edit it if needed. Enter a valid email address in the **Contact Email** field. The other fields under the Basic Information section are optional. - Check the **Enable OAuth Settings** box. - Enter a placeholder URL such as `https://aembit.io` in the Callback URL field to pass the required check. (This field will not be used for the Client Credentials Flow.) - Select the necessary **OAuth Scopes** for your application based on your needs. - Uncheck the **Proof Key for Code Exchange**, **Require Secret for Web Server Flow**, and **Require Secret for Refresh Token Flow** boxes. - Check the **Enable Client Credentials Flow** box. When the pop-up window appears, click **OK** to proceed. - Scroll down and click **Save**. - Click **Continue** to complete the app creation process. ![Configure Connected App](../../../../../../assets/images/salesforce_configure_connected_app.png) :::note Salesforce requires an execution user to be designated, allowing the platform to generate access tokens for the chosen user. ::: 6. 
On the Connected App Detail page of your newly created app, click the **Manage** button, and then click **Edit Policies**. ![App Details to Manage](../../../../../../assets/images/salesforce_app_details_to_manage.png) 7. In the Client Credentials Flow section, click the magnifying glass icon next to the **Run As** field. 8. Select the user you want to designate from the pop-up window and click **Save**. ![Assign User to App](../../../../../../assets/images/salesforce_assign_user_to_app.png) For detailed information on the OAuth 2.0 Client Credentials Flow on Salesforce, please refer to the [official Salesforce documentation](https://help.salesforce.com/s/articleView?id=sf.remoteaccess_oauth_client_credentials_flow.htm&type=5). ## Server Workload Configuration To retrieve connection information in Salesforce: - Click on your profile photo in the upper-right corner of any page. The endpoint appears in the dropdown menu under your username; copy the endpoint. ![Salesforce endpoint](../../../../../../assets/images/salesforce_domain.png) 1. Create a new Server Workload. - **Name** - Choose a user-friendly name. 2. Configure the service endpoint: - **Host** - `.my.salesforce.com` (Provide the endpoint copied from Salesforce) - **Application Protocol** - HTTP - **Port** - 443 - **Forward to Port** - 443 with TLS - **Authentication method** - HTTP Authentication - **Authentication scheme** - Bearer ## Credential Provider Configuration 1. Log in to your [Salesforce account](https://login.salesforce.com/). 2. In the upper-right corner of any page, click the cog icon and then click **Setup**. ![Salesforce Dashboard to Setup](../../../../../../assets/images/salesforce_dashboard_to_setup.png) 3. In the search box at the top of the Setup page, type **App Manager** and select it from the search results to view your newly created app. ![Salesforce Setup to App Manager](../../../../../../assets/images/salesforce_setup_to_app_manager.png) 4. Scroll down the list, find your app, click the icon at the end of the row, and select **View** from the dropdown menu. ![App List](../../../../../../assets/images/salesforce_view_app_from_list.png) 5. Click the **Manage Consumer Details** button. Salesforce will ask you to verify your identity. ![Manage Consumer Details](../../../../../../assets/images/salesforce_app_details_to_client_credentials.png) 6. After verifying your identity, on the opened page, copy both the **Consumer Key** and **Consumer Secret**, and store these details securely for later use in the tenant configuration. ![Copy Consumer Key and Secret](../../../../../../assets/images/salesforce_consumer_key_and_secret.png) 7. Create a new Credential Provider. - **Name** - Choose a user-friendly name. - **Credential Type** - [OAuth 2.0 Client Credentials](/astro/user-guide/access-policies/credential-providers/oauth-client-credentials) - **Token endpoint** - `https://.my.salesforce.com/services/oauth2/token` - **Client ID** - Provide the Consumer Key copied from Salesforce. - **Client Secret** - Provide the Consumer Secret copied from Salesforce. - **Scopes** - You can leave this field empty, as Salesforce will default to your selected scopes for the app. - **Credential Style** - Authorization Header ## Client Workload Configuration Aembit now handles the credentials required to access the Server Workload, eliminating the need for you to manage them directly. You can safely remove any previously used credentials from the Client Workload. 
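For example, assuming the Client Workload's traffic is routed through Aembit Edge, it can call the Salesforce REST API without attaching an `Authorization` header; Aembit injects the Bearer token it obtains through the Client Credentials Flow. A minimal sketch, where the My Domain host and API version are placeholders:

```shell
# Minimal sketch: query available Salesforce objects without managing a token.
# Replace YOUR_DOMAIN with your My Domain name and adjust the API version as needed.
curl "https://YOUR_DOMAIN.my.salesforce.com/services/data/v59.0/sobjects/" \
  -H "Accept: application/json"
```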
If you access the Server Workload through an SDK or library, it is possible that the SDK/library may still require credentials to be present for initialization purposes. In this scenario, you can provide placeholder credentials. Aembit will overwrite these placeholder credentials with the appropriate ones during the access process. ## Access Policy - Create an Access Policy for a Client Workload to access the Salesforce Server Workload. Assign the newly created Credential Provider to this Access Policy. --- # Sauce Labs URL: https://docs.aembit.io/user-guide/access-policies/server-workloads/guides/saucelabs/ Description: This page describes how to configure Aembit to work with the Sauce Labs Server Workload. # [Sauce Labs](https://saucelabs.com/) is a comprehensive cloud-based testing platform designed to facilitate the automation and execution of web and mobile application tests. It supports a wide range of browsers, operating systems, and devices, ensuring thorough and efficient testing processes. Below you can find the Aembit configuration required to work with Sauce Labs as a Server Workload using the Sauce REST APIs. :::note[Prerequisites] Before proceeding with the configuration, you will need to have a Sauce Labs tenant. ::: ## Server Workload Configuration 1. Create a new Server Workload. - **Name** - Choose a user-friendly name. 2. Configure the service endpoint: - **Host** - Use the appropriate endpoint for your data center: - `api.us-west-1.saucelabs.com` for US West - `api.us-east-4.saucelabs.com` for US East - `api.eu-central-1.saucelabs.com` for Europe - **Application Protocol** - HTTP - **Port** - 443 with TLS - **Forward to Port** - 443 with TLS - **Authentication method** - HTTP Authentication - **Authentication scheme** - Basic ## Credential Provider Configuration 1. Sign in to your Sauce Labs account. 2. In the upper-right corner of any page, click the user icon and select **User Settings**. ![Sauce Labs Dashboard to User Settings](../../../../../../assets/images/saucelabs_dashbaoard_to_usersettings.png) 3. Under User Information, copy the **User Name**. Scroll down the page and under the Access Key section, click **Copy to clipboard** to copy the **Access Key**. Securely store both values for later use in the tenant configuration. For more information on authentication, please refer to the [official Sauce Labs documentation](https://docs.saucelabs.com/dev/api/#authentication). ![Sauce Labs Username and Access Key](../../../../../../assets/images/saucelabs_username_and_accesskey.png) 4. Create a new Credential Provider. - **Name** - Choose a user-friendly name. - **Credential Type** - [Username & Password](/astro/user-guide/access-policies/credential-providers/username-password) - **Username** - Provide the User Name copied from Sauce Labs. - **Password** - Provide the Access Key copied from Sauce Labs. ## Client Workload Configuration Aembit now handles the credentials required to access the Server Workload, eliminating the need for you to manage them directly. You can safely remove any previously used credentials from the Client Workload. If you access the Server Workload through an SDK or library, it is possible that the SDK/library may still require credentials to be present for initialization purposes. In this scenario, you can provide placeholder credentials. Aembit will overwrite these placeholder credentials with the appropriate ones during the access process.
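For instance, if a test runner or script insists on supplying Basic authentication values, you can pass placeholders and let Aembit replace them in transit. The data center host and endpoint path below are illustrative only; use the region and API resource that match your account:

```shell
# Minimal sketch: call the Sauce Labs REST API with placeholder Basic credentials.
# Aembit Edge replaces them with the real User Name and Access Key.
# The path shown is illustrative -- consult the Sauce Labs API reference for the
# resource you actually need.
curl -u "placeholder:placeholder" \
  "https://api.us-west-1.saucelabs.com/rest/v1/users/YOUR_USERNAME"
```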
## Access Policy - Create an Access Policy for a Client Workload to access the Sauce Labs Server Workload. Assign the newly created Credential Provider to this Access Policy. ## Required Features - You will need to configure the [TLS Decrypt](/astro/user-guide/deploy-install/advanced-options/tls-decrypt/configure-tls-decrypt) feature to work with the Sauce Labs Server Workload. --- # Slack URL: https://docs.aembit.io/user-guide/access-policies/server-workloads/guides/slack/ Description: This page describes how to configure Aembit to work with the Slack Server Workload. # [Slack](https://slack.com/) is a cloud-based collaboration platform designed to enhance communication and teamwork within organizations. Slack offers channels for structured discussions, direct messaging, and efficient file sharing. With support for diverse app integrations, Slack serves as a centralized hub for streamlined workflows and improved team collaboration. Below you can find the Aembit configuration required to work with the Slack service as a Server Workload using the Slack apps and APIs. Aembit supports multiple authentication/authorization methods for Slack. This page describes scenarios where the Credential Provider is configured for Slack via: - [OAuth 2.0 Authorization Code (3LO)](/astro/user-guide/access-policies/server-workloads/guides/slack#oauth-20-authorization-code) - [API Key](/astro/user-guide/access-policies/server-workloads/guides/slack#api-key) :::note[Prerequisites] Before proceeding with the configuration, ensure you have a Slack workspace and a Slack App with the necessary scopes. If you have not set up a Slack App yet, follow the steps under the Credential Provider configuration in the flow you will use. For detailed information on Slack Apps, please refer to the [official Slack documentation](https://api.slack.com/start/apps). ::: ## OAuth 2.0 Authorization Code ### Server Workload Configuration 1. Create a new Server Workload. - **Name** - Choose a user-friendly name. 2. Configure the service endpoint: - **Host** - `slack.com` - **Application Protocol** - HTTP - **Port** - 443 with TLS - **Forward to Port** - 443 with TLS - **Authentication method** - HTTP Authentication - **Authentication scheme** - Bearer ### Credential Provider Configuration 1. Sign in to your Slack account. 2. Navigate to the [Slack - Your Apps](https://api.slack.com/apps) page. 3. Click on **Create an App**. ![Create an Slack App](../../../../../../assets/images/slack_create_an_app.png) 4. In the dialog that appears, choose **From Scratch**. Enter an App Name and select a workspace to develop your app in. 5. Click **Create** to proceed. 6. After the app is created, navigate to your app's main page. Scroll down to the **App Credentials** section, and copy both the **Client ID** and the **Client Secret**. Store them for later use in the tenant configuration. 7. Scroll up to the **Add features and functionality** section, and click **Permissions**. 8. Switch to the Aembit UI to create a new Credential Provider, selecting the OAuth 2.0 Authorization Code credential type. After setting up the Credential Provider, copy the auto-generated **Callback URL**. 9. Return to Slack, under **Redirect URLs**, click **Add New Redirect URL**, paste in the URL, click **Add**, and then click **Save URLs**. 10. In the **Scopes** section, under the **Bot Token Scopes**, click **Add an OAuth Scope** to add the necessary scopes for your application. 11. 
Scroll up to the **Advanced token security via token rotation** section, and click **Opt In**. ![Add Bot Token Scopes](../../../../../../assets/images/slack_add_bot_token_scopes.png) 12. Edit the existing Credential Provider created in the previous steps. - **Name** - Choose a user-friendly name. - **Credential Type** - [OAuth 2.0 Authorization Code](/astro/user-guide/access-policies/credential-providers/oauth-authorization-code) - **Callback URL (Read-Only)** - An auto-generated Callback URL from Aembit Admin. - **Client Id** - Provide the Client ID copied from Slack. - **Client Secret** - Provide the Secret copied from Slack. - **Scopes** - Enter the scopes you use, space delimited. A full list of Slack Scopes can be found in the [official Slack documentation](https://api.slack.com/scopes?filter=granular_bot). - **OAuth URL** - `https://slack.com` Click on **URL Discovery** to populate the Authorization and Token URL fields. These fields will need to be updated to the following values: - **Authorization URL** - `https://slack.com/oauth/v2/authorize` - **Token URL** - `https://slack.com/api/oauth.v2.access` - **PKCE Required** - Off (PKCE is not supported by Slack, so leave this field unchecked). - **Lifetime** - 1 year (Slack does not specify a refresh token lifetime; this value is recommended by Aembit.) 13. Click **Save** to save your changes on the Credential Provider. 14. In Aembit UI, click the **Authorize** button. You will be directed to a page where you can review the access request. Click **Allow** to complete the OAuth 2.0 Authorization Code flow. You will see a success page and will be redirected to Aembit automatically. You can also verify your flow is complete by checking the **State** value in the Credential Provider. After completion, it should be in a **Ready** state. ![Credential Provider - Ready State](../../../../../../assets/images/credential_providers_auth_code_status_ready.png) :::caution Once the set lifetime ends, the retrieved credential will expire and no longer be active. Aembit will notify you before this happens. Please ensure you reauthorize your credential before it expires. ::: ## API Key ### Server Workload Configuration 1. Create a new Server Workload. - **Name** - Choose a user-friendly name. 2. Configure the service endpoint: - **Host** - `slack.com` - **Application Protocol** - HTTP - **Port** - 443 with TLS - **Forward to Port** - 443 with TLS - **Authentication method** - HTTP Authentication - **Authentication scheme** - Bearer ### Credential Provider Configuration 1. Sign in to your Slack account. 2. Navigate to the [Slack - Your Apps](https://api.slack.com/apps) page. 3. Click on **Create an App**. ![Create a Slack App](../../../../../../assets/images/slack_create_an_app.png) 4. In the dialog that appears, choose either **From Scratch** or **From App Manifest**. 5. Depending on your selection, enter an App Name and select a workspace to develop your app in. 6. Click **Create** to proceed. 7. After the app is created, navigate to your app's main page. Select and customize the necessary tools for your app under the **Add features and functionality** section. 8. Proceed to the installation section and click **Install to Workspace**. You will be redirected to a page where you can choose a channel for your app's functionalities. After choosing, click **Allow**. ![Install an app to workspace](../../../../../../assets/images/slack_install_app_to_workspace.png) 9. Select the **OAuth & Permissions** link from the left menu. 10. 
Click **Copy** to securely store the token for later use in the tenant configuration. For detailed information on OAuth tokens, please refer to the [official Slack documentation](https://api.slack.com/authentication/oauth-v2). ![Copy OAuth Token](../../../../../../assets/images/slack_copy_oauth_token.png) 11. Create a new Credential Provider. - **Name** - Choose a user-friendly name. - **Credential Type** - [API Key](/astro/user-guide/access-policies/credential-providers/api-key) - **API Key** - Paste the token copied from Slack. ## Client Workload Configuration Aembit now handles the credentials required to access the Server Workload, eliminating the need for you to manage them directly. You can safely remove any previously used credentials from the Client Workload. If you access the Server Workload through an SDK or library, it is possible that the SDK/library may still require credentials to be present for initialization purposes. In this scenario, you can provide placeholder credentials. Aembit will overwrite these placeholder credentials with the appropriate ones during the access process. ## Access Policy - Create an Access Policy for a Client Workload to access the Slack Server Workload. Assign the newly created Credential Provider to this Access Policy. ## Required Features - You will need to configure the [TLS Decrypt](/astro/user-guide/deploy-install/advanced-options/tls-decrypt/configure-tls-decrypt) feature to work with the Slack Server Workload. --- # Snowflake URL: https://docs.aembit.io/user-guide/access-policies/server-workloads/guides/snowflake/ Description: This page describes how to configure Aembit to work with the Snowflake Server Workload. # [Snowflake](https://www.snowflake.com/) is a cloud-based data platform that revolutionizes the way organizations handle and analyze data. Snowflake's architecture allows for seamless and scalable data storage and processing, making it a powerful solution for modern data analytics and warehousing needs. In the sections below, you can find the required Aembit configuration needed to work with the Snowflake service as a Server Workload. This page describes scenarios where the Client Workload accesses Snowflake via: - the [Snowflake Driver/Connector](/astro/user-guide/access-policies/server-workloads/guides/snowflake#snowflake-via-driverconnector) embedded in Client Workload. - the [Snowflake SQL Rest API](/astro/user-guide/access-policies/server-workloads/guides/snowflake#snowflake-sql-rest-api). ## Prerequisites Before proceeding with the configuration, you must have a Snowflake tenant (or [sign up](https://signup.snowflake.com/) for one). ## Snowflake via Driver/Connector This section of the guide is tailored to scenarios where the Client Workload interacts with Snowflake through the [Snowflake Driver/Connector](https://docs.snowflake.com/en/developer-guide/drivers) embedded in the Client Workload. ### Snowflake key-pair authentication Snowflake key-pair authentication, when applied to workloads, involves using a public-private key pair for secure, automated authentication. Aembit generates and securely stores a private key, while the corresponding public key is registered with Snowflake. This setup allows Aembit to authenticate with Snowflake, leveraging the robust security of asymmetric encryption, without relying on conventional user-based passwords. 
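In practice, registering the public key is a single SQL statement that you run against your Snowflake account; Aembit provides the exact command when you save the Credential Provider (step 5 below). As an illustration only, executing it with the `snowsql` CLI might look like the following sketch, where the account, user names, and key value are placeholders:

```shell
# Illustrative sketch only -- run the exact SQL command that Aembit provides.
# ACCOUNT_LOCATOR, ADMIN_USER, WORKLOAD_USER, and the key value are placeholders.
snowsql -a ACCOUNT_LOCATOR -u ADMIN_USER -q \
  "ALTER USER WORKLOAD_USER SET RSA_PUBLIC_KEY='MIIBIjANBgkq...';"
```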
For more information on key-pair authentication and key-pair rotation, please refer to the [official Snowflake documentation](https://docs.snowflake.com/en/user-guide/key-pair-auth#configuring-key-pair-rotation). #### Server Workload Configuration 1. Create a new Server Workload. - **Name** - Choose a user-friendly name. 2. Configure the service endpoint: - **Host** - `-.snowflakecomputing.com` - **Application Protocol** - Snowflake - **Port** - 443 with TLS - **Forward to Port** - 443 with TLS - **Authentication method** - JWT Token Authentication - **Authentication scheme** - Snowflake JWT #### Credential provider configuration 1. Sign into your Snowflake account. 2. Click in the bottom left corner and copy the Locator value for use in the Aembit Snowflake Account ID field. ![Copy Locator Value](../../../../../../assets/images/snowflake_locator_value.png) 3. Create a new Credential Provider. - **Name** - Choose a user-friendly name. - **Credential Type** - [JSON Web Token (JWT)](/astro/user-guide/access-policies/credential-providers/json-web-token) - **Token Configuration** - Snowflake Key Pair Authentication - **Snowflake Account ID** - Your Snowflake Locator value that you copied from the previous step. - **Username** - Your username for the Snowflake account. 4. Click **Save**. ![Snowflake JWT Credentials on Aembit Edge UI](../../../../../../assets/images/snowflake_JWT_credentials.png) 5. After saving the Credential Provider, view the newly created provider and copy the provided SQL command. This command needs to be executed against your Snowflake account. You can use any tool of your choice that supports Snowflake to execute this command. ### Snowflake username/password authentication :::note Aembit will be deprecating Snowflake username/password authentication to match Snowflake's updated MFA security guidance. ::: Username/password authentication in Snowflake involves using a traditional credential-based approach for access control. Users or workloads are assigned a unique username and a corresponding password. When accessing Snowflake, the username and password are used to verify identity. Username/password authentication in Snowflake is considered less secure than key pair authentication and is typically used when key pair methods are not feasible. #### Server Workload Configuration 1. Create a new Server Workload. - Name: Choose a user-friendly name. 2. Configure the service endpoint: - **Host** - `-.snowflakecomputing.com` - **Application Protocol** - Snowflake - **Port** - 443 with TLS - **Forward to Port** - 443 with TLS - **Authentication method** - Password Authentication - **Authentication scheme** - Password #### Credential Provider Configuration 1. Create a new Credential Provider. - **Name** - Choose a user-friendly name. - **Credential Type** - [Username & Password](/astro/user-guide/access-policies/credential-providers/username-password) - **Username** - Your username for the Snowflake account. - **Password** - Your password for the account. ## Snowflake SQL REST API This section focuses on scenarios where the Client Workload interacts with Snowflake through the [Snowflake SQL REST API](https://docs.snowflake.com/en/developer-guide/sql-api/). The Snowflake SQL REST API offers a flexible REST API for accessing and modifying data within a Snowflake database. ### Server Workload Configuration 1. Create a new Server Workload. - **Name** - Choose a user-friendly name. 2. 
Configure the service endpoint: - **Host** - `-.snowflakecomputing.com` - **Application Protocol** - HTTP - **Port** - 443 with TLS - **Forward to Port** - 443 with TLS - **Authentication method** - HTTP Authentication - **Authentication scheme** - Bearer **Static HTTP Headers** - **Key** - X-Snowflake-Authorization-Token-Type - **Value** - KEYPAIR_JWT ### Credential provider configuration 1. Sign into your Snowflake account. 2. Click in the bottom left corner and copy the Locator value for use in the Aembit Snowflake Account ID field. ![Copy Locator Value](../../../../../../assets/images/snowflake_locator_value.png) 3. Create a new Credential Provider. - **Name** - Choose a user-friendly name. - **Credential Type** - [JSON Web Token (JWT)](/astro/user-guide/access-policies/credential-providers/json-web-token) - **Token Configuration** - Snowflake Key Pair Authentication - **Snowflake Account ID** - Your Snowflake Locator value that you copied from the previous step. - **Username** - Your username for the Snowflake account. 4. Click **Save**. ![Snowflake JWT Credentials on Aembit Edge UI](../../../../../../assets/images/snowflake_JWT_credentials.png) 5. After saving the Credential Provider, view the newly created provider and copy the provided SQL command. This command needs to be executed against your Snowflake account. You can use any tool of your choice that supports Snowflake to execute this command. ## Client Workload Configuration Aembit now handles the credentials required to access the Server Workload, eliminating the need for you to manage them directly. You can safely remove any previously used credentials from the Client Workload. If you access the Server Workload through an SDK or library, it is possible that the SDK/library may still require credentials to be present for initialization purposes. In this scenario, you can provide placeholder credentials. Aembit will overwrite these placeholder credentials with the appropriate ones during the access process. ## Access Policy - Create an Access Policy for a Client Workload to access the Snowflake Server Workload. Assign the newly created Credential Provider to this Access Policy. ## Required Features - You will need to configure the [TLS Decrypt](/astro/user-guide/deploy-install/advanced-options/tls-decrypt/configure-tls-decrypt) feature to work with the Snowflake Server Workload. :::caution As of Snowflake SDK 2.1.0, proxy settings must be explicitly specified within the connection string. In prior versions, the SDK automatically utilized proxy configurations based on environment variables such as `http_proxy` or `https_proxy`. For instance, if you are deploying the SDK within an ECS environment, it is essential to include the following parameters in your connection string: ```shell USEPROXY=true;PROXYHOST=localhost;PROXYPORT=8000 ``` ::: --- # Snyk URL: https://docs.aembit.io/user-guide/access-policies/server-workloads/guides/snyk/ Description: This page describes how to configure Aembit to work with the Snyk Server Workload. # [Snyk](https://snyk.io/) is a security platform designed to help organizations find and fix vulnerabilities in their code, dependencies, containers, and infrastructure as code. It integrates into development workflows to maintain security throughout the software development lifecycle. Below you can find the Aembit configuration required to work with the Snyk service as a Server Workload using the Snyk API. 
:::note[Prerequisites] Before proceeding with the configuration, you need to have a Snyk tenant and an authorized Snyk App. If you have not created an app before, you can follow the steps outlined in the subsequent sections. For detailed information on how to create a Snyk App using the Snyk API or other methods, please refer to the [official Snyk documentation](https://docs.snyk.io/snyk-api/snyk-apps/create-a-snyk-app-using-the-snyk-api). ::: ## Server Workload Configuration 1. Create a new Server Workload. - **Name** - Choose a user-friendly name. 2. Configure the service endpoint: - **Host** - `api.snyk.io` - **Application Protocol** - HTTP - **Port** - 443 with TLS - **Forward to Port** - 443 with TLS - **Authentication method** - HTTP Authentication - **Authentication scheme** - Bearer ## Credential Provider Configuration 1. Sign in to your Snyk account. 2. In the lower-left corner of any page, click your profile name, then click **Account Settings**. 3. On the **General** page, click to reveal your **Key**. 4. Copy the **Key** and securely store it for later use in the app creation process using the Snyk API. ![Snyk Copy Key](../../../../../../assets/images/snyk_get_auth_token.png) 5. Navigate to **Settings** in the left-hand menu, and choose **General**. 6. Copy the **Organization ID** and securely store it for later use in the app creation process using the Snyk API. ![Snyk Copy Organization ID](../../../../../../assets/images/snyk_get_organization_id.png) 7. Switch to the Aembit UI to create a new Credential Provider, selecting the OAuth 2.0 Authorization Code credential type. After setting it up, copy the auto-generated **Callback URL**. 8. Create a Snyk App: To create a Snyk App, execute the following `curl` command. Make sure to replace the placeholders with the appropriate values: - `REPLACE_WITH_API_TOKEN`: This is the token you retrieved in Step 4. - `REPLACE_WITH_APP_NAME`: Provide a friendly name for your app that will perform OAuth with Snyk, such as "Aembit." - `REPLACE_WITH_CALLBACK_URL`: Use the callback URL obtained in the previous step. - `REPLACE_WITH_SCOPES`: Add the necessary scopes for your app. It is crucial to include the `org.read` scope, which is required for the refresh token. For a comprehensive list of available scopes, refer to the [official Snyk documentation](https://docs.snyk.io/snyk-api/snyk-apps/scopes-to-request). - `REPLACE_WITH_YOUR_ORGID`: This is the organization ID you retrieved in Step 6.
```shell
curl -X POST \
  -H "Content-Type: application/vnd.api+json" \
  -H "Authorization: token REPLACE_WITH_API_TOKEN" \
  -d '{"data": { "attributes": {"name": "REPLACE_WITH_APP_NAME", "redirect_uris": ["REPLACE_WITH_CALLBACK_URL"], "scopes": ["REPLACE_WITH_SCOPES"], "context": "user"}, "type": "app"}}' \
  "https://api.snyk.io/rest/orgs/REPLACE_WITH_YOUR_ORGID/apps/creations?version=2024-01-04"
```
The response includes important configuration details, such as the **clientId** and **clientSecret**, which are essential for completing the authorization of your Snyk App. 9. Edit the existing Credential Provider created in the previous steps. - **Name** - Choose a user-friendly name. - **Credential Type** - [OAuth 2.0 Authorization Code](/astro/user-guide/access-policies/credential-providers/oauth-authorization-code) - **Callback URL (Read-Only)** - An auto-generated Callback URL from Aembit Admin. - **Client Id** - Provide the `clientId` from the response of the `curl` command. - **Client Secret** - Provide the `clientSecret` from the response of the `curl` command. - **Scopes** - Enter the scopes you use, space delimited. (e.g.
`org.read org.project.read org.project.snapshot.read`) - **OAuth URL** - `https://snyk.io/` - **Authorization URL** - `https://app.snyk.io/oauth2/authorize` - **Token URL** - `https://api.snyk.io/oauth2/token` - **PKCE Required** - On - **Lifetime** - 1 year (Snyk does not specify a refresh token lifetime; this value is recommended by Aembit.) 10. Click **Save** to save your changes on the Credential Provider. 11. In the Aembit UI, click the **Authorize** button. You are directed to a page where you can review the access request. Click **Authorize** to complete the OAuth 2.0 Authorization Code flow. You should see a success page and then be redirected to Aembit automatically. You can also verify your flow is complete by checking the **State** value in the Credential Provider. After completion, it should be **Ready**. ![Credential Provider - Ready State](../../../../../../assets/images/credential_providers_auth_code_status_ready.png) :::caution Once the set lifetime ends, the retrieved credential will expire and will not work anymore. Aembit will notify you before this happens. Please ensure you reauthorize the credential before it expires. ::: ## Client Workload Configuration Aembit now handles the credentials required to access the Server Workload, eliminating the need for you to manage them directly. You can safely remove any previously used credentials from the Client Workload. If you access the Server Workload through an SDK or library, it is possible that the SDK/library may still require credentials to be present for initialization purposes. In this scenario, you can provide placeholder credentials. Aembit will overwrite these placeholder credentials with the appropriate ones during the access process. ## Access Policy - Create an access policy for a Client Workload to access the Snyk Server Workload and assign the newly created Credential Provider to it. ## Required Features - You will need to configure the [TLS Decrypt](/astro/user-guide/deploy-install/advanced-options/tls-decrypt/configure-tls-decrypt) feature to work with the Snyk Server Workload. --- # Stripe URL: https://docs.aembit.io/user-guide/access-policies/server-workloads/guides/stripe/ Description: This page describes how to configure Aembit to work with the Stripe Server Workload. # [Stripe](https://stripe.com/) is a digital payment processing service that allows businesses to accept and process payments online. Stripe supports various payment methods, including credit cards, and provides tools for managing subscriptions and recurring payments. Below you can find the Aembit configuration required to work with the Stripe service as a Server Workload using the Stripe SDK or other HTTP-based client. ## Prerequisites Before proceeding with the configuration, you will need to have a Stripe tenant (or [sign up](https://dashboard.stripe.com/register) for one). ## Server Workload Configuration 1. Create a new Server Workload. - **Name** - Choose a user-friendly name. 2. Configure the service endpoint: - **Host** - `api.stripe.com` - **Application Protocol** - HTTP - **Port** - 443 with TLS - **Forward to Port** - 443 with TLS - **Authentication method** - HTTP Authentication - **Authentication scheme** - Bearer ## Credential Provider Configuration 1. Sign into your Stripe account. 2. Go to the Developers section. 3. Click on the API keys tab. 4. Ensure you are in the correct mode (Test mode for Stripe test data or Live mode for live production data). 
![Create Stripe API token](../../../../../../assets/images/stripe_keys.png) 5. You can either reveal and copy the Standard keys' secret key or, for additional security, create and copy a Restricted key. Please read more about this in the [official Stripe documentation](https://stripe.com/docs/keys). 6. Create a new Credential Provider. - **Name** - Choose a user-friendly name. - **Credential Type** - [API Key](/astro/user-guide/access-policies/credential-providers/api-key) - **API Key** - Provide the key copied from Stripe. ## Client Workload Configuration Aembit now handles the credentials required to access the Server Workload, eliminating the need for you to manage them directly. You can safely remove any previously used credentials from the Client Workload. If you access the Server Workload through an SDK or library, it is possible that the SDK/library may still require credentials to be present for initialization purposes. In this scenario, you can provide placeholder credentials. Aembit will overwrite these placeholder credentials with the appropriate ones during the access process. ## Access Policy - Create an access policy for a Client Workload to access the Stripe Server Workload and assign the newly created Credential Provider to it. ## Required Features - You will need to configure the [TLS Decrypt](/astro/user-guide/deploy-install/advanced-options/tls-decrypt/configure-tls-decrypt) feature to work with the Stripe Server Workload. --- # Enable TLS on a Server Workload URL: https://docs.aembit.io/user-guide/access-policies/server-workloads/server-workload-enable-tls/ Description: How to enable TLS on a Server Workload import { Steps } from '@astrojs/starlight/components'; To enable TLS on traffic to your Server Workloads, do the following: 1. Log into your Aembit tenant. 1. In the left sidebar menu, go to **Server Workloads**. 1. Create a new Server Workload or select an existing Server Workload from the list and click **Edit**. 1. Under **Service Endpoint** in the **Port** field, check the **TLS** checkbox. ![TLS Decrypt Page](../../../../../assets/images/enable_tls_decrypt.png) 1. Click **Save**. --- # Trust Providers URL: https://docs.aembit.io/user-guide/access-policies/trust-providers/ Description: This document provides a high-level description of Trust Providers This section covers Trust Providers in Aembit, which are used to verify the identity of Client Workloads based on their infrastructure or identity context. 
The following pages provide information about different Trust Provider types and how to configure them: - [Add Trust Provider](/astro/user-guide/access-policies/trust-providers/add-trust-provider) - [AWS Metadata Service Trust Provider](/astro/user-guide/access-policies/trust-providers/aws-metadata-service-trust-provider) - [AWS Role Trust Provider](/astro/user-guide/access-policies/trust-providers/aws-role-trust-provider) - [Azure Metadata Service Trust Provider](/astro/user-guide/access-policies/trust-providers/azure-metadata-service-trust-provider) - [GCP Identity Token Trust Provider](/astro/user-guide/access-policies/trust-providers/gcp-identity-token-trust-provider) - [GitHub Trust Provider](/astro/user-guide/access-policies/trust-providers/github-trust-provider) - [GitLab Trust Provider](/astro/user-guide/access-policies/trust-providers/gitlab-trust-provider) - [Kerberos Trust Provider](/astro/user-guide/access-policies/trust-providers/kerberos-trust-provider) - [Kubernetes Service Account Trust Provider](/astro/user-guide/access-policies/trust-providers/kubernetes-service-account-trust-provider) - [Terraform Cloud Identity Token Trust Provider](/astro/user-guide/access-policies/trust-providers/terraform-cloud-identity-token-trust-provider) --- # Add Trust Provider URL: https://docs.aembit.io/user-guide/access-policies/trust-providers/add-trust-provider/ Description: This document describes the steps required to configure a Trust Provider for Client Workload identity attestation. Trust Providers enable Aembit to authenticate without provisioning credentials or other secrets. Trust Providers are third-party systems or services that can attest identities with identity documents, tokens, or other cryptographically signed evidence. Client Workload identity attestation is a core functionality to ensure only trusted Client Workloads can access the Server Workloads. ## Configure Trust Provider If you are getting started with Aembit, configuring trust providers is optional; however, it is critical to secure all production deployments. 1. Click the **Trust Providers** tab. 2. Click **+ New** to create a new Trust Provider. 3. Give the Trust Provider a name and optional description. 4. Choose the appropriate Trust Provider type based on your Client Workloads' environment. 5. Follow the instructions for the Trust Provider based on your selection. - [AWS Role Trust Provider](/astro/user-guide/access-policies/trust-providers/aws-role-trust-provider) - [AWS Metadata Service Trust Provider](/astro/user-guide/access-policies/trust-providers/aws-metadata-service-trust-provider) - [Azure Metadata Service trust provider](/astro/user-guide/access-policies/trust-providers/azure-metadata-service-trust-provider) - [Kerberos trust provider](/astro/user-guide/access-policies/trust-providers/kerberos-trust-provider) - [Kubernetes Service account trust provider](/astro/user-guide/access-policies/trust-providers/kerberos-trust-provider) 6. Configure one or more **match rules** (specific to your Trust Provider type). :::note Aembit recommends making matching criteria as tight as possible. ::: 7. Click **Save**. ## Client Workload Identity Attestation You must associate one or more Trust Providers with the existing Access Policy for Aembit to use Client Workload identity attestation. 1. Choose one of the existing **Access Policies**. 2. Click **Edit**. 3. Add an existing, or create a new Trust Provider. 
![Associate Trust Provider to Policy](../../../../../assets/images/associate_trust_provider_to_policy.png) ## Agent Controller Identity Attestation You must associate a Trust Provider with Agent Controller in order for Aembit to use Agent Controller for identity attestation. 1. Click the **Edge Components** tab. 2. Select one of the existing **Agent Controllers**. 3. Click **Edit**. 4. Choose from the dropdown one of the existing **Trust Providers**. ![Agent Controller Trust Provider Page](../../../../../assets/images/agent_controller_trust_provider.png) --- # AWS Metadata Service trust provider URL: https://docs.aembit.io/user-guide/access-policies/trust-providers/aws-metadata-service-trust-provider/ Description: This page describes the steps required to configure an AWS Metadata Service Trust Provider. # The AWS Metadata Service Trust Provider supports attestation of Client Workloads and Agent Controller identities in [AWS](https://aws.amazon.com/) environments (running either directly on EC2 instances or on managed [AWS EKS](https://aws.amazon.com/eks/)). The AWS Metadata Service Trust Provider relies on the [AWS Metadata Service](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html) for instance identity document. ## Match rules The following match rules are available for this Trust Provider type: - accountId - architecture - availabilityZone - billingProducts - imageId - instanceId - instanceType - kernelId - marketplaceProductCodes - pendingTime - privateIP - region - version Please refer to the [AWS documentation](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-identity-documents.html) for a detailed description of match rule fields available in the identity document. ## Additional configurations Aembit requires one of AWS's public certificates to verify the identity document signature. Please download the certificate from the [AWS public certificate page](https://docs.aws.amazon.com/es_en/AWSEC2/latest/UserGuide/regions-certs.html) for the region where your Client Workloads are located. Please use certificates under the RSA tabs on the AWS documentation page and paste the appropriate certificate into **Certificate** field on the **Trust Provider** page. --- # AWS Role Trust Provider URL: https://docs.aembit.io/user-guide/access-policies/trust-providers/aws-role-trust-provider/ Description: This page describes the steps needed to configure the AWS Role Trust Provider. # The AWS Role Trust Provider supports attestation within the AWS environment. Aembit Edge Components can currently be deployed in several AWS services that support AWS Role Trust Provider attestation: - EC2 instances with an attached IAM role - AWS Role instances - ECS Fargate containers - Lambda containers ## Match rules The following match rules are available for this Trust Provider type: - accountId - assumedRole - roleArn - username For a description of the match rule fields available in the AWS Role Trust Provider, please refer to the [AWS documentation](https://docs.aws.amazon.com/STS/latest/APIReference/API_GetCallerIdentity.html). ## AWS Role support Aembit supports AWS Role-Based Trust Providers by enabling you to create a new Trust Provider using the Aembit Tenant UI. Follow the steps below to create the AWS Role Trust Provider. 1. On the Trust Providers page, click on the **New** button to open the Trust Providers dialog window. 2. 
In the dialog window, enter the following information: - **Name**: The name of the Trust Provider - **Description**: An optional text description for the Trust Provider - **Trust Provider**: A drop-down menu that lists the different Trust Provider types 3. Select **AWS Role** from the Trust Provider drop-down menu. 4. Click on the **Match Rules** link to open an instance of the Match Rules drop-down menu. - If you use the `roleARN` value, make sure it is in the following format: `arn:aws:sts:::assumed-role//` - If you use the `username` value, make sure it is in the following format: `:` :::note The username value refers to the `AccessKeyId` field in Amazon's [IAM Roles for Amazon EC2](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html#instance-metadata-security-credentials) documentation. ::: ![Trust Provider Dialog Window - Complete](../../../../../assets/images/trust_providers_new_trust_provider_dialog_window_complete.png) 5. Click **Save** when finished. Your new EC2 Trust Provider will appear on the main Trust Providers page. ## ECS Fargate container support You must assign an AWS IAM role with `AmazonECSTaskExecutionRolePolicy` permission to your ECS tasks. :::note There are multiple ways to perform the following steps (e.g. UI, API, CDK, Terraform, etc.). The steps below are one approach; however, select the way that is most appropriate for your organization. ::: 1. Check the existence of AWS IAM ecsTaskExecutionRole. Please refer to the [AWS documentation](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_execution_IAM_role.html#procedure_check_execution_role) for more information. 2. Create AWS IAM `ecsTaskExecutionRole` if this is missing. Please refer to the [AWS documentation](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_execution_IAM_role.html#create-task-execution-role) for more information. 2. Retrieve the ARN of `ecsTaskExecutionRole` role. This should look like ``arn:aws:iam:::role/ecsTaskExecutionRole`` 3. Assign this role in your ECS task definition by setting the task role and task execute role fields. ![ECS Role Trust Provider Page](../../../../../assets/images/ecs_task_role.png) ## Lambda Container Support If you are using this Trust Provider for attestation of workloads running in a Lambda container environment, you may utilize the following match rules: - accountId - roleArn The Lambda Container **roleArn** is structured as follows: ``` arn:aws:sts:::assumed-role// ``` --- # Azure Metadata Service trust provider URL: https://docs.aembit.io/user-guide/access-policies/trust-providers/azure-metadata-service-trust-provider/ Description: This page describes the steps required to configure the Azure Metadata Service Trust Provider. # The Azure Metadata Service Trust Provider supports attestation of Client Workloads and Agent Controller identities in an [Azure](https://azure.microsoft.com/) environment. The Azure Metadata Service Trust Provider relies on the [Azure Metadata Service](https://learn.microsoft.com/en-us/azure/virtual-machines/instance-metadata-service?tabs=linux) for instance identity document. ## Match rules The following match rules are available for this Trust Provider type: - sku - subscriptionId - vmId Please refer to the [Azure documentation](https://learn.microsoft.com/en-us/azure/virtual-machines/instance-metadata-service?tabs=linux#attested-data) for a detailed description of match rule fields available in the identity document. 
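To see the values your VM would present for these match rules, you can query the Azure Instance Metadata Service (IMDS) from inside the VM before configuring the Trust Provider. The following sketch is illustrative only; the API version shown is one of several supported versions, and `jq` is assumed to be available for readability:

```shell
# Run from the Azure VM that hosts the Client Workload or Agent Controller.
# IMDS is reachable only from inside the VM and requires the Metadata header.

# Inspect the compute metadata for the subscriptionId, vmId, and sku values
# you may want to reference in match rules.
curl -s -H "Metadata: true" \
  "http://169.254.169.254/metadata/instance/compute?api-version=2021-02-01" \
  | jq '{subscriptionId, vmId, sku}'

# Retrieve the signed attested document that backs the identity evidence.
curl -s -H "Metadata: true" \
  "http://169.254.169.254/metadata/attested/document?api-version=2021-02-01"
```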
--- # GCP Identity Token Trust Provider URL: https://docs.aembit.io/user-guide/access-policies/trust-providers/gcp-identity-token-trust-provider/ Description: This page describes the steps required to configure the GCP Identity Token Trust Provider. # The GCP Identity Token Trust Provider verifies the identities of workloads running within Google Cloud Platform (GCP) by validating identity tokens issued by GCP. These tokens carry metadata, such as the email associated with the service account or user executing the operation, ensuring secure and authenticated access to GCP resources. ## Match rules The following match rule is available for this Trust Provider type: | Data | Description | Example | |-------------------------------------------------|-----------------------------------------------------------|------------------------------------------------| | email | The email associated with the GCP service account or user | user@example.com | For additional information about GCP Identity Tokens, please refer to [Google Cloud Identity](https://cloud.google.com/docs/authentication/get-id-token) technical documentation. --- # GitHub Trust Provider URL: https://docs.aembit.io/user-guide/access-policies/trust-providers/github-trust-provider/ Description: This page outlines the steps required to configure the GitHub Trust Provider. # The GitHub Trust Provider supports attestation of Client Workloads identities in a [GitHub Actions](https://github.com/features/actions) environment. The GitHub Trust Provider relies on OIDC (OpenID Connect) tokens issued by GitHub. These tokens contain verifiable information about the workflow, its origin, and the triggering actor. ## Match rules The following match rules are available for this Trust Provider type: | Data | Description | Example | |-------------------------------------------------|-----------------------------------------------------------|------------------------------------------------| | actor | The GitHub account name that initiated the workflow run | user123 | | repository | The repository where the workflow is running. It can be in the format `{organization}/{repository}` for organization-owned repositories or `{account}/{repository}` for user-owned repositories.
For additional information about [Repository Ownership](https://docs.github.com/en/repositories/creating-and-managing-repositories/about-repositories#about-repository-ownership). | MyOrganization/test-project, user123/another-project |
| workflow | The name of the GitHub Action workflow. For additional information about [Workflows](https://docs.github.com/en/actions/using-workflows/about-workflows). | build-and-test |

For additional information about GitHub ID Token claims, please refer to the [GitHub OIDC Token Documentation](https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/about-security-hardening-with-openid-connect#understanding-the-oidc-token).

---

# GitLab Trust Provider

URL: https://docs.aembit.io/user-guide/access-policies/trust-providers/gitlab-trust-provider/

Description: This page outlines the steps required to configure the GitLab Trust Provider.

The GitLab Trust Provider supports attestation of Client Workload identities in a [GitLab Jobs](https://docs.gitlab.com/ee/ci/jobs/) environment. The GitLab Trust Provider relies on OIDC (OpenID Connect) tokens issued by GitLab. These tokens contain verifiable information about the job, its origin within the project, and the associated pipeline.

## Match rules

The following match rules are available for this Trust Provider type:

| Data | Description | Example |
|------|-------------|---------|
| namespace_path | The group or user namespace (by path) where the repository resides. | my-group |
| project_path | The repository from where the workflow is running, using the format `{group}/{project}`. | my-group/my-project |
| ref_path | The fully qualified reference (branch or tag) that triggered the job. ([Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/119075) in GitLab 16.0.) | refs/heads/feature-branch-1, refs/tags/v1.2.0 |
| subject | The repository and Git reference from where the workflow is running. The format is `project_path:{group}/{project}:ref_type:{type}:ref:{branch_name}`, where `type` can be either `branch` (for a branch-triggered workflow) or `tag` (for a tag-triggered workflow). | project_path:my-group/my-project:ref_type:branch:ref:feature-branch-1, project_path:my-group/my-project:ref_type:tag:ref:v2.0.1
| For additional information about GitLab ID Token claims, please refer to [GitLab Token Payload](https://docs.gitlab.com/ee/ci/secrets/id_token_authentication.html#token-payload). :::note When using GitLab Dedicated, ensure the OIDC Endpoint is properly configured; otherwise use `https://gitlab.com`. ::: --- # Kerberos Trust Provider URL: https://docs.aembit.io/user-guide/access-policies/trust-providers/kerberos-trust-provider/ Description: How to configure a Kerberos Trust Provider The Kerberos Trust Provider enables the attestation of Client Workloads running on virtual machines (VMs) joined to Active Directory (AD). This attestation method is specifically designed for on-premise deployments where alternative attestation methods, such as AWS or Azure metadata service Trust Providers, aren't available. This Trust Provider is unique because it relies on attestation provided by an Aembit component, rather than attestation from a third-party system. In this scenario, the Aembit Agent Controller acts as the attesting system. It authenticates a client (specifically, Agent Proxy) via Kerberos and attests to the client's identity. The client's identity information is then signed by the Aembit Agent Controller and validated by Aembit Cloud as part of the access policy evaluation process. ## Prerequisites Many prerequisites are necessary, particularly regarding domain users and principals. This page outlines Aembit's current recommendations for a secure and scalable deployment. Kerberos based attestation is available only for [Virtual Machine Deployments](/astro/user-guide/deploy-install/virtual-machine/). ### Join your Edge Components to AD domain - You must join Agent Controller VMs to AD before you install Agent Controller on them. - You must join Client Workload VMs to AD before installing Agent Proxy. ### Domain users and service principals - You must create a user in AD named `aembit_ac` for Agent Controllers. This user doesn't need any specific permissions in AD. - You must create a service principal for the Agent Controller under the `aembit_ac` AD user. - For testing purposes, create a service principal `HTTP/`. - For production purposes, see [High Availability](#high-availability). - Agent Controllers on Windows Server in high availability (HA) configurations, must set the `SERVICE_LOGON_USER` environment variable to an AD user in [Down-Level Logon Name format](https://learn.microsoft.com/en-us/windows/win32/secauthn/user-name-formats#down-level-logon-name) (for example: `SERVICE_LOGON_USER=\$`). ### Network access - Agent Controller VMs don't need access to the Domain Controller. - Client Workload VMs must have access to the Domain Controller to acquire tickets. ### Keytabs - Agent Controller - Agent Controller Linux VMs require a keytab file for the Agent Controller AD user. - You can place the keytab file on the VM before or after the Agent Controller installation. - The Agent Controller Linux user must have read/write permissions on the keytab file (`aembit_agent_controller`). If you place a keytab file before you install the Agent Controller, Aembit recommends creating a Linux group `aembit` and a Linux user `aembit_agent_controller`, and making this file accessible by this Linux user/group. - If your organization has mandatory AD password rotation, make sure you have a configuration in place for keytab renewal. See [Agent Controller keytab rotation](#agent-controller-keytab-rotation-for-high-availability-deployment) for more information. 
- Agent Proxy - The Agent Proxy on the Client Workload machine uses the host keytab file. - The Agent Proxy uses the [sAMAccountName](https://learn.microsoft.com/en-us/windows/win32/ad/naming-propertes#samaccountname) principal from the host keytab. - The host keytab can have Linux root:root ownership. ## Kerberos Trust Provider match rules The Kerberos Trust Provider supports the following match rules: - Principal - Realm/Domain - Source IP :::warning Important When matching on Principal or Realm/Domain, see [Kerberos Principal formatting](#kerberos-principal-formatting) for guidance. ::: | Data | Description | Example | |-----------|--------------------------------------------------------|----------------------------------| | Principal | The Agent Proxy's VM principal | `IP-172-31-35-14$@EXAMPLE.COM` | | Realm | The realm of the Client Workload VM principal | `EXAMPLE.COM` | | Domain | The NetBIOS domain of the Client Workload VM principal | `example` | | Source IP | The Network Source IP address of the Client request | `192.168.1.100` | ### Associated Agent Controllers During the configuration of the Kerberos Trust Provider, you must specify the list of Agent Controllers responsible for providing attestation. Aembit trusts only the attestation information signed by specified Agent Controllers by a Kerberos Trust Provider entry. ### Kerberos Principal formatting Aembit supports Agent Controller on Windows VMs to improve management of the Aembit Edge Components. This is especially true for [Agent Controller high availability configurations](/astro/user-guide/deploy-install/advanced-options/agent-controller/agent-controller-high-availability) that use Windows [Group Managed Service Accounts (gMSA)](https://learn.microsoft.com/en-us/windows-server/identity/ad-ds/manage/group-managed-service-accounts/group-managed-service-accounts/group-managed-service-accounts-overview) to manage multiple Agent Controllers. The challenge is that Windows and Linux systems treat AD differently, in that Linux treats it purely as Kerberos and Windows treats it natively like AD. This results in different naming and formatting for the Kerberos Principal value that Aembit uses in Kerberos tokens which it exchanges for AD authentication. The following table details all the combinations you can encounter based on the OS installed on Agent Controller and Agent Proxy: | OS combination | Principal format | |-------------------------------------------------------------|-------------------------------------------| | **Linux** Agent Controller +
**Linux** Agent Proxy | `@` |
| **Linux** Agent Controller + **Windows** Agent Proxy | `@` |
| **Windows** Agent Controller + **Linux** Agent Proxy | `\` |
| **Windows** Agent Controller +
**Windows** Agent Proxy | `\` | As part of the Kerberos Trust Provider attestation process and to address this challenge, Aembit Cloud automatically parses the attested Kerberos Principal value and *verifies either the realm or the domain* from the value for you. ## Enable Kerberos attestation By default, Aembit disables Kerberos attestation on both Agent Controller and Agent Proxy. Follow the applicable sections to enable Kerberos attestation on Aembit Edge Components: ### Agent Controller on Windows Server To enable Kerberos attestation for [Agent Controller on a Windows Server VM](/astro/user-guide/deploy-install/virtual-machine/windows/agent-controller-install-windows), you must set the following environment variables: ```shell AEMBIT_KERBEROS_ATTESTATION_ENABLED=true SERVICE_LOGON_USER=\$ ``` ### Agent Controller on Linux To enable Kerberos attestation for [Agent Controller on a Linux VM](/astro/user-guide/deploy-install/virtual-machine/linux/agent-controller-install-linux), you must set the following environment variables: ```shell AEMBIT_KERBEROS_ATTESTATION_ENABLED=true KRB5_KTNAME= ``` ### Agent Proxy Similarly, the Agent Proxy installer requires the following environment variable (in addition to the standard variables provided during [installation](/astro/user-guide/deploy-install/virtual-machine/linux/agent-proxy-install-linux)): ```shell AEMBIT_KERBEROS_ATTESTATION_ENABLED=true AEMBIT_PRIVILEGED_KEYTAB=true ``` ## TLS The contents of the communication between Agent Proxy and Agent Controller is sensitive. In a production deployment, you may configure Agent Controller TLS to secure communication between these two components using either a Customer's PKI or Aembit's PKI. Please see the following pages for more information on using a PKI in your configuration: - [Configure a Customer's PKI Agent Controller TLS](/astro/user-guide/deploy-install/advanced-options/agent-controller/configure-customer-pki-agent-controller-tls) - [Configure Aembit's PKI Agent Controller TLS](/astro/user-guide/deploy-install/advanced-options/agent-controller/configure-aembit-pki-agent-controller-tls) ## High availability Given the critical role of attestation in evaluating an Access Policy, Aembit strongly encourages configuring multiple Agent Controllers in a high availability architecture. To learn how, see [Agent Controller High Availability](/astro/user-guide/deploy-install/advanced-options/agent-controller/agent-controller-high-availability). The following are the additional steps you must perform for Kerberos attestation to work in a highly available configuration: - You don't need to join the load balancer to your domain. - You must create a service principal `HTTP/` under the Aembit Agent Controller Active Directory user. - You don't need to create principals for individual Agent Controller VMs. - You must place the keytab for the Agent Controller AD user (including the load-balancer service principal) on all Agent Controller VMs. - If you operate multiple Agent Controller clusters running behind one or more load balancers, you must add each load balancer FQDN as the service principal under Agent Controller AD account. ## Agent Controller keytab rotation for high availability deployment Standard best practice recommends the periodic rotation of all keytabs. Considering that Aembit shares the keytab representing an Agent Controller's identity across multiple Agent Controller machines, the common method of keytab rotation on Linux (using SSSD) isn't feasible. 
Your organization must have a centrally orchestrated keytab rotation, where the Agent Controller AD user keytab is rotated centrally and then pushed to all Agent Controller Virtual Machines. Note that the entity performing the keytab rotation needs the appropriate permissions in AD to change the Agent Controller password during new-keytab creation. --- # Kubernetes Service Account trust provider URL: https://docs.aembit.io/user-guide/access-policies/trust-providers/kubernetes-service-account-trust-provider/ Description: This page describes the steps required to configure the Kubernetes Service Account Trust Provider. # The Kubernetes Service Account Trust Provider supports attestation of Client Workloads and Agent Controller identities in a Kubernetes environment (either self-hosted or managed by cloud providers - [AWS EKS](https://aws.amazon.com/eks/), [Azure AKS](https://azure.microsoft.com/en-us/products/kubernetes-service), [GCP GKE](https://cloud.google.com/kubernetes-engine?hl=en)). ## Match rules The following match rules are available for this Trust Provider type: - iss - kubernetes.io \{ namespace \} - kubernetes.io \{ pod \{ name \} \} - kubernetes.io \{ serviceaccount \{ name \} \} - sub | Data | Description | Example | |-----------------------------------------------|---------------------------------|------------------------------------------------| | iss | Kubernetes Cluster Issuer URL | https://kubernetes.default.svc.cluster.local | | kubernetes.io \{ namespace \} | Pod namespace | default | | kubernetes.io \{ pod \{ name \} \} | Pod name | example-app | | kubernetes.io \{ serviceaccount \{ name \} \} | Service Account name | default | | sub | Service Account token subject | system:serviceaccount:default:default | ## Additional configurations Aembit requires a Kubernetes cluster public key to validate the Service Account token used by this trusted provider. The majority of cloud providers expose an OIDC endpoint that enables automatic retrieval of the Kubernetes cluster public key. :::note There are multiple ways to retrieve the OIDC endpoint (via UI, CLI, API, etc.) The steps below use the CLI approach; however, select the way that is most appropriate for your organization. ::: ### AWS EKS - Ensure your AWS CLI is installed, configured, and authenticated. - Execute the following command: ```shell aws eks describe-cluster --name \ --query "cluster.identity.oidc.issuer" --output text ``` - Paste the response in **OIDC Endpoint** field. ### GCP GKE - Ensure your GCP CLI is installed, configured, and authenticated. - Execute the following command: ```shell gcloud container clusters describe \ --region=\ --format="value(selfLink)" ``` - Paste the response in **OIDC Endpoint** field. ## Azure AKS - Ensure your Azure CLI is installed, configured, and authenticated. - Execute the following command: ```shell az aks show --resource-group \ --name \ --query "oidcIssuerProfile.issuerUrl" -o tsv ``` - Paste the response in **OIDC Endpoint** field. --- # Terraform Cloud Identity Token Trust Provider URL: https://docs.aembit.io/user-guide/access-policies/trust-providers/terraform-cloud-identity-token-trust-provider/ Description: This page describes the steps required to configure the Terraform Cloud Identity Token Trust Provider. # The Terraform Cloud Identity Token Trust Provider verifies the identities of Client Workloads within Terraform Cloud using identity tokens. 
These tokens include metadata such as organization, project, and workspace details, ensuring secure and authenticated access to resources. ## Match rules The following match rules are available for this Trust Provider type: | Data | Description | Example | |-------------------------------------------------|-----------------------------------------------------------|------------------------------------------------| | terraform_organization_id | The Terraform organization that is executing the run. | org-abcdefghijklmno | | terraform_project_id | The specific project within the Terraform organization that is running the operation. | prj-abcdefghijklmno | | terraform_workspace_id | The ID associated with the Terraform workspace where the run is being conducted. | ws-abcdefghijklmno | For additional information about Terraform Cloud Identity Token, please refer to [Terraform Workload Identity](https://developer.hashicorp.com/terraform/cloud-docs/workspaces/dynamic-provider-credentials/workload-identity-tokens). --- # User Guide/audit Report ========================= # Access Authorization Events URL: https://docs.aembit.io/user-guide/audit-report/access-authorization-events/ Description: This page describes how users can review access authorization event information in Aembit Reporting. An access authorization event is an event that Aembit generates that occurs whenever an Edge Component requests access to a Server Workload. When Aembit receives an access request, the generated events include detailed information, providing a granular view of the processing steps to evaluate the request against an existing Access Policy. Once Aembit Cloud processes the request and the evaluation is complete, a result is generated that specifies if access is granted or denied (success or failure). Having the ability to view information about these access authorization events enables you to not only troubleshoot issues, but also have a historical records of these events. You may also use these logs to perform threat detection analysis to ensure malicious actors and workloads don't gain access to your resources. ## Event types The three different types of access authorization events that you may view in the Aembit Reporting dashboard include: - Access Request - Access Authorization - Access Credential ## Access Request events An `access.request` event captures the request and associated metadata. An example of an `access.request` event type is shown below. ```json { "meta": { "clientIP": "1.2.3.4", "timestamp": "2024-09-14T20:29:11.0689334Z", "eventType": "access.request", "eventId": "5b788a92-accd-49a1-851f-171f7c20d355", "resourceSetId": "ffffffff-ffff-ffff-ffff-ffffffffffff", "contextId": "4e876ace-d1b0-4095-ac22-f9c0fb7e676a", "severity": "Info" }, "clientRequest": { "version": "1.0.0", "network": { "sourceIP": "10.0.0.15", "sourcePort": 53134, "transportProtocol": "TCP", "proxyPort": 8080, "targetHost": "server.domain.com", "targetPort": 80 } } } ``` ## Access Authorization events In an `access.authorization` event, you can view detailed information about the steps Aembit Cloud Control Plane undertakes to evaluate an Access Policy. Information shown in an access authorization event includes event metadata, the outcome of the evaluation, and details about the identified Client Workload, Server Workload, Access Policy, Trust Providers, Access Conditions and Credential Provider. The following example shows the type of data you should expect to see in an access authorization event. 
```json
{
  "meta": {
    "clientIP": "1.2.3.4",
    "timestamp": "2024-09-14T20:29:11.0689334Z",
    "eventType": "access.authorization",
    "eventId": "5b788a92-accd-49a1-851f-171f7c20d355",
    "resourceSetId": "ffffffff-ffff-ffff-ffff-ffffffffffff",
    "contextId": "4e876ace-d1b0-4095-ac22-f9c0fb7e676a",
    "severity": "Info"
  },
  "outcome": {
    "result": "Unauthorized",
    "reason": "Attestation failed"
  },
  "clientWorkload": {
    "id": "7c466803-9dd4-4388-9e45-420c57a0432c",
    "name": "Test Client",
    "result": "Identified"
  },
  "serverWorkload": {
    "id": "49183921-55ab-4856-a8fc-a032af695e0d",
    "name": "Test Server",
    "result": "Identified"
  },
  "accessPolicy": {
    "id": "dd987f8c-34fb-43e2-9d43-89d862e6b7ec",
    "name": "Test Access Policy",
    "result": "Identified"
  },
  "trustProviders": [
    {
      "id": "24462228-14c1-41a4-8b23-9be789b48452",
      "name": "Kerberos",
      "result": "Attested"
    },
    {
      "id": "c0bd6c06-71ce-4a87-b03c-4c64cb311896",
      "name": "AWS Production",
      "result": "Unauthorized",
      "reason": "InvalidSignature"
    },
    {
      "id": "5f0c2962-2af4-4b5f-97c0-9046b37198a9",
      "name": "Kubernetes",
      "result": "Unauthorized",
      "reason": "MatchRuleFailed",
      "attribute": "serviceNameUID",
      "expectedValue": "foo",
      "actualValue": "bar"
    }
  ],
  "accessConditions": [],
  "credentialProvider": {
    "id": "bb7927f8-060c-4486-9a5e-bcbe1efc53d6",
    "name": "Production PostgreSQL",
    "result": "Identified",
    "maxAge": 60
  }
}
```

### Authorization Failure

If an authorization request fails during evaluation, Aembit returns a `reason` property in the `trustProviders` and/or `accessConditions` elements, notifying you that a failure occurred and why. You can use this information to diagnose and troubleshoot the issue.

Several different `reason` values can be returned with a failure, including:

- **NoDataFound** - Attestation didn't succeed because the necessary data wasn't available.
- **InvalidSignature** - The cryptographic verification check failed.
- **MatchRuleFailed** - The match rules for the Trust Provider weren't satisfied.
- **ConditionFailed** - The Access Condition check failed.

In the example shown above, notice that two of the Trust Provider checks failed. The `AWS Production` Trust Provider (`c0bd6c06-71ce-4a87-b03c-4c64cb311896`) failed its cryptographic check with the reason `InvalidSignature`, and the `Kubernetes` Trust Provider (`5f0c2962-2af4-4b5f-97c0-9046b37198a9`) failed with the reason `MatchRuleFailed`: the check was looking for the `serviceNameUID` attribute with the expected value `foo` but found `bar`. Now that you know why the failures occurred, you can troubleshoot the issue.

### Access Credential

The `access.credential` event type shows the result of the Edge Controller's attempt to retrieve the required credential when requested. If the request was successful, the Edge Controller acquires credentials for the Server Workload via the Credential Provider and the event will specify the result as `Retrieved`. The example below shows what you should expect to see in an `access.credential` event.
```json { "meta": { "clientIP": "1.2.3.4", "timestamp": "2024-09-14T20:29:11.0689334Z", "eventType": "access.credential", "eventId": "5b788a92-accd-49a1-851f-171f7c20d355", "resourceSetId": "ffffffff-ffff-ffff-ffff-ffffffffffff", "contextId": "4e876ace-d1b0-4095-ac22-f9c0fb7e676a", "severity": "Info" }, "outcome": { "result": "Authorized", }, "clientWorkload": { "id": "7c466803-9dd4-4388-9e45-420c57a0432c", "name": "Test Client", "result": "Identified", }, "serverWorkload": { "id": "49183921-55ab-4856-a8fc-a032af695e0d", "name": "Test Server", "result": "Identified" }, "accessPolicy": { "id": "dd987f8c-34fb-43e2-9d43-89d862e6b7ec", "name": "Test Access Policy", "result": "Identified" } "trustProviders": [{ "id": "49183921-55ab-4856-a8fc-a032af695e0d", "name": "Kerberos", "result": "Attested" }], "accessConditions": [], "credentialProvider": { "id": "bb7927f8-060c-4486-9a5e-bcbe1efc53d6", "name": "Production PostgreSQL", "result": "Retrieved", "maxAge": 60, } } ``` ### Credential Failure If an credential request fails during the check, a `reason` property value is returned with the `credentialProvider` entity notifying you that a failure has occurred, and providing you a reason for the failure. By providing you a reason for the failure, you can then use this information to diagnose and troubleshoot the issue. There may be several reasons why credential request fails. Some of these reasons may be: - **Token expired** - The requested token has expired. - **Request failed with BadRequest** - There was a communication error with the credential provider. Note: This will include the encountered HTTP status code. - **Aembit Internal Error** - There was an internal Aembit error during the credential retrieval. - **Unknown error** - An unexpected error occurred during credential retrieval and is being investigated by Aembit support. With this information, you can determine the reason for the failure and then troubleshoot the issue. ## Retrieving Access Authorization Event Data To retrieve detailed information about access authorization events, perform the steps described below. 1. Log into your Aembit tenant. 2. Once you are logged in, click on the **Reporting** link in the left navigation pane. You are directed to the Reporting Dashboard page. :::note By default, when you navigate to the Reporting Dashboard, access authorization events data is displayed. ::: ![Reporting Main Dashboard](../../../../assets/images/reporting-auth-events-dashboard-main.png) You will see two dropdown menus at the top of the page that enable you to filter and sort the results displayed: - **Timespan** - The period of time you would like to have event data displayed. - **Severity** - The level of importance of the event. :::note When the **Access Authorization Events** page loads, the page loads with default display filters of: **Timespan = 24 Hours**, and **Severity = All**. ::: 3. Select the period of time you would like to view by clicking on the **Timespan** dropdown menu. Options are: - 1 hour, 3 hours, 6 hours, 12 hours, or 24 hours. 4. Select the severity level of the results you would like to view by clicking on the **Severity** dropdown menu. Options are: - Error, Warning, Info, or All 5. Once you have selected your filtering options, the table displays access authorization event information based on these selections. ### Viewing Access Authorization event data When you select an access authorization event from the dashboard, you can expand the view to display detailed data for that event. 
Depending on the event type, different data is displayed. The sections below show example of the type of data that may be displayed with an event. ### Access authorization event example If you would like to review detailed information about an access authorization event, click on the event. This expands the view for that event, revealing both a summary of the event with quick links to each entity, and detailed JSON output, including event metadata. Depending on the type of access authorization event, the information presented in the expanded view will be specific to that event type. For example, if you review an example below shows an event where Trust Provider attestation failed. #### Trust Provider attestation failure example In the following example, you can see detailed information about an access authorization event that shows a failure because the Trust Provider couldn't be attested. ![Trust Provider Failed Attestation Event](../../../../assets/images/reporting-auth-event-failed-trust-provider.png) In the left side of the information panel, you see the following information displayed: - **Timestamp** - The time when the event was recorded. - **Client IP** - The client IP address that made the access authorization request. This will typically be a network egress IP from your edge environment. - **Context ID** - ID used to associate the relevant access authorization events together. - **Event Type** - The type of event that was recorded. - **Client Workload** - The identified Client Workload ID. - **Server Workload** - The identified Server Workload ID. :::note Each of these entities has a quick link, enabling you to go directly to that entity. ::: In the right side of the information panel, you see the more granular, detailed information displayed about each of these entities, including: - **Meta** - Metadata associated with the event. - Information includes `clientIP`, `timestamp`, `eventType`, `contextId`, `directiveId`, and `severity`. - **Outcome** - The result of the access authorization request. - Options are `Authorized`, `Unauthorized`, or `Error`. - **Client Workload** - The Client Workload used in the access authorization request. - Detailed information includes `id`, `name`, `result`, and `matches`. Note that the `matches` value is optional, and is only rendered if multiple Client Workloads are matched. - **Server Workload** - The Server Workload used in the access authorization request. - Detailed information about the Server Workload includes `id`, `name`, `result`, and `matches`. - Note that the `matches` value is optional, and is only rendered if multiple Server Workloads are matched. - **Access Policy** - The Access Policy used to evaluate the access authorization request. - Information includes `id`, `name`, `result`, and `matches`. - **Trust Providers** - The Trust Providers assigned to the Access Policy at the time of evaluation. - Information in the JSON response includes `id`, `name`, `result`, `attribute`, `expectedValue`, and `actualValue`. - The `reason`, `attribute`, `expectedValue` and `actualValue` fields are all optional; however, in the case of Trust Provider attestation failure, you will see the `reason` field populated. - If a `reason` value is returned, refer to the [Authorization Failure](#authorization-failure) section on this page for more information. - **Access Conditions** - The Access Conditions assigned to the Access Policy at the time of evaluation. 
- Information in the JSON response includes `id`, `name`, `result`, `attribute`, `expectedValue`, and `actualValue`. - The `reason`, `attribute`, `expectedValue` and `actualValue` fields will only be returned if there is a failure, and the reason for the failure is `ConditionFailed`. - **Credential Provider** - The Credential Provider used in the access authorization request. - Detailed information includes `id,` `name`, `result`, and `maxAge` values. - If a failure occurs during credential retrieval, then a `reason` value will also be included. :::note If a `reason` value is returned, refer to the [Credential Failure](#credential-failure) section on this page for more information. ::: --- # How to review Audit Logs URL: https://docs.aembit.io/user-guide/audit-report/audit-logs/ Description: How to review Audit Log information in the Reporting Dashboard ## Introduction Your Aembit tenant includes the ability for you to review detailed audit log information so you can troubleshoot any issues encountered in your environment. Having this data readily available can assist you in diagnosing any issues that may arise, while also providing you with detailed information about these events. ## Retrieving audit log data To retrieve event information from audit logs, perform the following steps: 1. Log into your Aembit tenant with your user credentials. 2. Click the **Reporting** link in the left navigation pane. You are then directed to the Reporting Dashboard page. 3. Click the **Audit Logs** tab at the top of the page, Aembit displays the **Audit Logs** dashboard. ![Audit Logs Main Page](../../../../assets/images/reporting-audit-logs-main-page.png) 4. At the top of the page, you see the following dropdown menus: - **Timespan** - The period of time you would like to have audit logs data displayed. The default display value is **30 Days**. - **Category** - The type of event information you want displayed. The default display value is **All**. - **Severity** - The level of importance of the event. The default display value is **All**. 4. Select the period of time you would like to view by clicking on the **Timespan** dropdown menu. Options are: - 1 Day, 15 Days, 30 Days, 3 Months, 6 Months, 1 Year, or All 5. Select the event information you would like to view by clicking on the **Category** dropdown menu. Options are: - AccessConditions, AccessPolicies, AgentControllers, Agents, Authentication, CredentialProvider, IdentityProviders, Integrations, Log Streams, Resource Sets, Roles, Tenant, TrustProvider, Users, Workloads, or All 6. Select the severity level of the results you would like to view by clicking on the **Severity** dropdown menu. Options are: - Alert, Warn, Info, or All 7. Once you have selected your filtering options, the table displays the audit log information based on these selections. ### Audit logs reporting example If you would like to review detailed audit log information for an event, select the event. This expands the window for that event, enabling you to see both a summary of the event (on the left side of the information panel), and detailed JSON output (on the right side of the information panel). The following example shows audit log information for an event where Trust Provider attestation failed. ![Audit Logs Reporting Example](../../../../assets/images/reporting-audit-log-attestation.png) In the left side of the information panel, you see a summary of the event information displayed, including: - **Timestamp** - The time the event was recorded. 
- **Actor** - The entity responsible for the request. - **Category** - The category of the event. - **Activity** - The type of request being made. - **Target** - The identifier of the entity that you are running the activity against. For example, if you are editing a Credential Provider, the target is the name of the Credential Provider. - **Result** - The result of the event. - **Client IP** - The IP address of the user or workload that executed the action that is recorded by the audit log. - **Browser** - The browser used by the client. - **Operating System** - The operating system used by the client. - **User Agent** - The User-Agent HTTP header included in the API request that generated the audit log activity. In the right side of the information panel, you see the more granular, detailed information, including: - **ExternalID** - The external ID of the audit log. - **Resource Set ID** - The Resource Set ID of the entity affected by the audit log generating activity. - **Category** - The category of the event in the audit log. - **Actor** - The entity responsible for the request. - **Activity** - The type of request being made. - **Target** - The target entity of the action represented by the audit log record. - **Client** - The metadata for the Client (e.g. browser) environment. - **Outcome** - The verdict of the request. - **Trust Provider** - The Trust Provider used in the request. Note that this value is only applicable for Trust Provider attestation based authentication (e.g. Agent Controller attested authentication or Proxyless authentication). - **Severity** - The severity of the event. - **Created At** - The time the request was made. --- # User Guide/administration =========================== # Admin dashboard overview URL: https://docs.aembit.io/user-guide/administration/admin-dashboard/ Description: This page describes the different views and dashboards on the Aembit Admin Dashboard When logging into your Aembit tenant, you are immediately shown the Admin dashboard, which displays detailed workload and operational information. Whether you want to see the number of Client Workloads requesting access to Server Workloads over the last 24 hours, or view the number of credentials requests recorded over a 24-hour period for a specific usage type, the Admin Dashboard provides you quick access to these views so you can glean insight into your Aembit environment's performance. ## The Admin Dashboard To view the Admin Dashboard. 1. Log into your Aembit tenant with your user credentials. 2. Once you are logged in, you are directed to the Admin Dashboard, where you see data displayed in various panels. ![Admin Dashboard Main Page](../../../../../assets/images/admin_dashboard_main.png) You should see the following tiles: - Summary - Workload Events - Client Workloads (Managed) - Server Workloads (Managed) - Credentials (Usage By Type) - Workload Connections (Managed) - Access Conditions (Most Access Conditions Failures) ### Summary + Workload Events #### Summary The **Summary** panel displays the number of configured workloads and entities in your Aembit environment, including the number of entities that are currently inactive. - Client Workloads - Trust Providers - Access Conditions - Credential Providers - Server Workloads ![Admin Dashboard - Summary](../../../../../assets/images/admin-dashboard-summary.png) :::note When you click on one of these panels, the **Summary** tab opens the dashboard page for that resource with a list of existing configurations. 
::: #### Workload Events The **Workload Events** panel displays the number of Workload Events recorded over the last 6 hours. This historical data can be very useful in measuring how many workload events occurred over a set period of time. With this data, you can optimize your Aembit environment; this includes the workload event severity so users can quickly identify connectivity issues. ![Admin Dashboard - Workload Events](../../../../../assets/images/admin-dashboard-workload-events.png) If you select the **Refresh** button, you can refresh the results to view newly received events, enabling you to view the latest event records and make any necessary changes if needed to ensure your Aembit environment is operating efficiently. ### Client Workloads (Managed) The **Client Workloads (Managed)** panel displays the number of managed Client Workloads that attempted to access Server Workloads over the last 24 hours, sorted from top to bottom based on the number of Client Workload connections. This information can be helpful in determining which Client Workloads are accessing Server Workloads in your Aembit environment and identifying the most active Client Workloads. ![Managed Client Workloads](../../../../../assets/images/admin-dashboard-managed-client-workload-tile.png) ### Server Workloads (Managed) The **Server Workloads (Managed)** panel displays the number of managed Server Workload connections that were recorded over the last 24 hours, sorted from top to bottom based on the number of requests received for the Server Workload. This information can be helpful in determining which Server Workloads are being accessed in your Aembit environment and identifying the most active Server Workloads. ![Managed Server Workloads](../../../../../assets/images/admin-dashboard-managed-server-workload-tile.png) ### Credential (Usage By Type) The **Credential (Usage By Type)** panel displays a pie chart showing the total number of credential types that were issued in the past 24 hours. This information can be helpful in determining which credential types are most frequently being used. Aembit encourages the use of short-lived credentials wherever possible. By identifying the usage level of different credential types, this chart can be helpful when transitioning from long-lived to short-lived credentials. ![Credential Provider Usage By Type](../../../../../assets/images/admin-dashboard-credential-provider-usage-by-type-1--tile.png) ### Workload Connections (Managed) / Application Protocol The **Workload Connections** panel displays the number of managed Workload Connections that were recorded over the last 24 hours, sorted from top to bottom based on the type of application protocol used in the request. ![Workload Connections By Application Protocol](../../../../../assets/images/admin-dashboard-app-protocol-pie-tile-1.png) ### Access Policies (Most Access Condition Failures) The **Access Policies (Most Access Condition Failures)** panel displays the number of Access Condition failures based on Access Policies. In this chart, you can see that Aembit was able to identify Client Workloads and Server Workloads on Access Policies, but the Access Condition fails and these workloads can therefore not be attested, enabling you to identify how many attestations are failing because of Access Conditions. In the example shown below, notice that for the VM1 - Production Instance, the most Access Condition failures occurred for Microsoft Graph API and Redshift DB - Ohio. 
![Access Policy Failures](<../../../../../assets/images/admin-dashboard-access-policies-most-access condition-failures.png>) --- # Discovery overview URL: https://docs.aembit.io/user-guide/administration/discovery/ **Discovery** serves as the central control board for managing integrations related to the [Discovery](/astro/user-guide/discovery/) process. :::note This is a beta feature and may be subject to change. To enable Discovery with Wiz, contact Aembit by completing the [Contact Us form](https://aembit.io/contact/). ::: After you've contacted Aembit to enable Discovery in your Aembit Tenant, you can configure an integration to find workloads in your environment. Once you configure an integration, Aembit uses it to discover workloads. After discovering your workloads, Aembit displays them in either the **Client Workload** or **Server Workload** tab as **Discovered**. For detailed instructions on managing discovered workloads, refer to [Interacting with Discovered Workloads](/astro/user-guide/discovery/managing-discovered-workloads). ## Using the discovery tab On the **Discovery** tab, the **New** option appears in the top-right corner. Click **New** to create and configure new integrations. ![Discovery Tab Layout](../../../../../assets/images/administration_discovery_main_page.png) Below that, Aembit displays the **Integrations** list, which shows existing integrations in a table. Each row in the table shows key details such as: - **Name:** The name of the integration. - **Type:** The type of integration. - **Last Successful Sync:** The date and time of the last successful synchronization. - **Sync Status:** Indicates the synchronization status. To interact with an integration, you can either: - Hover over the row in the **Integrations List**, where a three-dot icon appears at the right end of the row. Clicking this icon opens a menu where you can: - **View details:** See more information about the integration. - **Edit:** Modify the integration's configuration. - **Delete:** Remove the integration. - **Change active status:** Activate or deactivate the integration. - Or, click directly on the integration row, which opens a **details page** where you can view, edit, or delete the integration. Additionally, you can hover over the **Name** column to see the **ID** of the integration, which you can copy for reference. ## Related resources For more information about Discovery, see the following related pages: - [Discovery Overview](/astro/user-guide/discovery/) - Learn about the Discovery feature in Aembit - [Managing Discovered Workloads](/astro/user-guide/discovery/managing-discovered-workloads) - Learn how to work with discovered workloads - [Discovery Sources](/astro/user-guide/discovery/sources/) - Learn about the different Discovery Sources available in Aembit --- # Create a Wiz Discovery Integration URL: https://docs.aembit.io/user-guide/administration/discovery/integrations/wiz/ Description: How to create a Wiz Discovery Integration :::note This is a beta feature and may be subject to change. ::: This page describes how to create a new Wiz integration for [Discovery](/astro/user-guide/discovery/). ## Prerequisites Before you begin, you must have access to the following: - **Wiz Account**: A Wiz account. ## Set up a service account in Wiz 1. Sign in to your **Wiz account**. 1. Go to **Settings -> Integrations**. 1. Click **+ Add Integration** in the top-right corner.
![Adding Integration on Wiz](../../../../../../assets/images/discovery_wiz_add_integration.png) 1. Search for **Aembit** and click the **Aembit integration**. ![Wiz - Searching for Aembit Integration](../../../../../../assets/images/discovery_wiz_search_aembit.png) 1. Provide a name for your integration (for example, **Aembit Discovery integration**). 1. Click **Add integration** in the bottom bar. ![Complete Integration](../../../../../../assets/images/discovery_new_aembit_integration.png) 1. Open a new browser window or **copy** the following details from Wiz, as you'll need them in the [next section](#configure-wiz-discovery): - **API Endpoint URL** - **Token URL** - **Client ID** - **Client Secret** ![Wiz Integration Details](../../../../../../assets/images/discovery_wiz_integration_details.png) ## Configure Wiz Discovery Follow these steps to configure the Wiz integration in your Aembit Tenant: 1. Log into your Aembit Tenant, and go to **Administration -> Discovery**. 1. Click **+ New**. 1. Select **Wiz integration** from the available options. 1. Using the details from the final step in the [previous section](#set-up-a-service-account-in-wiz), fill in the integration details: - **Name**: The name of the Integration. For example, **Wiz Discovery**. - **Description**: An optional text description for the Integration. - **Endpoint**: Paste the **API Endpoint URL** you copied earlier. - **Sync Frequency**: Choose the sync frequency from the dropdown menu. - **OAuth Token Endpoint**: Paste the **Token URL** from the previous step. - **Client ID**: Paste the **Client ID** you copied earlier. - **Client Secret**: Paste the **Client Secret** you copied earlier. - **Audience**: Enter `wiz-api`. ![Wiz Integration Configuration on Aembit](../../../../../../assets/images/discovery_aembit_new_integration.png) 1. Click **Save**. --- # Identity Providers overview URL: https://docs.aembit.io/user-guide/administration/identity-providers/ Description: Description of what Identity Providers are and how they work in the Aembit UI :::tip[Paid feature] The Identity Providers feature requires a paid subscription. To activate this feature, please contact Aembit by submitting the [Contact Us form](https://aembit.io/contact/). ::: The Identity Providers feature allows you to offer alternate authentication methods when users sign in to your Aembit tenant. The default authentication method is an email and password, with the option to [enable and require MFA](/astro/user-guide/administration/sign-on-policy/#require-multi-factor-authentication-for-native-sign-in). Requiring your users to remember and manually enter a username and password every time they sign in to your Aembit tenant is tedious, error-prone, and insecure long-term. To improve the user experience and provide more secure authentication methods, set up SAML Single Sign-On (SSO) by integrating an external SAML-capable Identity Provider (IdP) such as Okta. To enforce the exclusive use of SSO and prevent your users from authenticating with their username and password, enable [Require Single Sign On](/astro/user-guide/administration/sign-on-policy/#require-single-sign-on). :::tip The Identity Providers feature is only available on the following subscription plans: - Teams plan - Enterprise plan To enable Identity Providers, please contact Aembit by completing the [Contact Us form](https://aembit.io/contact/).
::: ## SAML overview SAML 2.0 (Security Assertion Markup Language) is an open standard created to provide cross-domain Single Sign-On (SSO) authentication. SSO allows a user to authenticate in one system (the [Identity Provider](#saml-identity-provider)) and gain access to a different system (the [Service Provider](#service-provider)) by providing proof of their authentication from the IdP. ### SAML Identity Provider The SAML Identity Provider (IdP) enables SSO user authentication where Aembit acts as the Service Provider. Common SAML Identity Providers include Okta, Google, Microsoft Entra ID, and many others. ### Service Provider The Service Provider receives the authentication information from the IdP, implicitly trusts it, and provides access to the service or resource. Aembit acts as a Service Provider that accepts external Identity Provider data. ## Aembit SSO authentication process The following occurs during the SSO authentication process on your Aembit tenant: 1. A user selects the option to authenticate through an IdP on the Aembit tenant login page. 1. Aembit redirects the user to the IdP's login page. 1. The IdP prompts the user to authenticate. 1. If the IdP authentication is successful, the IdP redirects the user back to your Aembit tenant. 1. Aembit logs the user in through the successful SSO authentication. ## About automatic user creation When you enable the automatic user creation feature, Aembit automatically generates new user accounts on your behalf when your users go through the [SSO authentication process](#aembit-sso-authentication-process). This automation not only saves time and resources by reducing or eliminating the manual effort needed to manage user accounts but also minimizes errors associated with manual account management. This feature also provides granular control over which user roles Aembit assigns to the new users it creates. The automatic user creation feature works by extracting certain SAML attributes from the SAML response the IdP sends after successful authentication. It's important to know, however, that not all IdPs configure their SAML attributes the same way. Different IdPs use distinct attribute names to pass user group claim information. To alleviate these inconsistencies, Aembit allows you to map your IdP's SAML attributes to the user roles available in your Aembit tenant. See [Configure automatic user creation](/astro/user-guide/administration/identity-providers/automatic-user-creation) for details. ### How automatic user creation works During the SSO authentication process, Aembit verifies the incoming SAML response message; if no user account exists for that user, Aembit initiates the automatic user creation process. Aembit requires an email address to uniquely identify users of your Aembit tenant. If it can, Aembit populates the first and last name of the users it automatically creates; if not, Aembit just uses the user's email address. Aembit looks for the following elements in the SAML response when creating the new user account: - A `NameID` element containing the user's email address. If the `NameID` element isn't present or its value isn't a valid email address, Aembit searches for the `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress` claim instead. If Aembit finds neither, the automatic user creation process stops.
If this happens, you must [configure automatic user creation](/astro/user-guide/administration/identity-providers/automatic-user-creation). - Both the `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname` and `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/surname` claims, which Aembit uses to populate a user's first and last names, respectively. If these claims aren't present, Aembit populates a user's first and last names with their email address. - An `AttributeStatement` element with at least one `Attribute` child element with an attribute value matching the configuration data entered on the **Mappings** tab of the **Identity Provider** page. This match is necessary to determine which roles Aembit assigns to the new user account. If Aembit doesn't find a matching attribute value, Aembit won't create the new user account. ## Additional resources The following pages provide more information about working with Identity Providers: - [Automatic User Creation](/astro/user-guide/administration/identity-providers/automatic-user-creation) - Configure automatic user creation with Identity Providers - [Creating Identity Providers](/astro/user-guide/administration/identity-providers/creating-identity-providers) - Set up new Identity Providers in your Aembit tenant --- # How to configure Single Sign On automatic user creation URL: https://docs.aembit.io/user-guide/administration/identity-providers/automatic-user-creation/ Description: How to configure SSO automatic user creation through an identity provider import { Steps } from '@astrojs/starlight/components'; [Automatic user creation](/astro/user-guide/administration/identity-providers/#about-automatic-user-creation) automatically generates new user accounts on your behalf when your users go through the SSO authentication process. This feature provides granular control over which user roles Aembit assigns to the new users it creates. For more details, see [how automatic user creation works](/astro/user-guide/administration/identity-providers/#how-automatic-user-creation-works). ## Prerequisites To enable automatic user creation in your Aembit tenant, you must have the following: - A Teams or Enterprise subscription plan. - Your Identity Provider's (IdP) SAML group claim information attribute names and values. ## Map IdP SAML attributes to Aembit user roles To map the group information sent from your Identity Provider to the roles available in your tenant, follow these steps: 1. Log in to your Aembit tenant. 1. In the left sidebar menu, go to **Administration --> Identity Providers**. 1. Create a new Identity Provider or edit an existing one, and then select the **Mappings** tab. ![Identity Provider Mappings](../../../../../assets/images/identity_providers_mappings.png) 1. Click **Edit** if not already in edit mode. 1. Click **New**, which adds a new row to the **Role Assignments** table. 1. In the **SAML Attribute Name** column, use the dropdown to select an existing attribute name or click "+" to add a new one. Make sure the values correspond to the groups defined in your Identity Provider. 1. In the **SAML Attribute Value** column, use the dropdown to select an existing attribute value or click "+" to add a new one. Make sure the values correspond to the groups defined in your Identity Provider. :::tip Refer to your Identity Provider's configuration documentation for the correct attribute names and values.
For example, Azure uses the predefined claim name `http://schemas.microsoft.com/ws/2008/06/identity/claims/groups`, while other Identity Providers may allow customization of the claim name for group information. ::: 1. In the **Aembit Roles** column, use the dropdown to select one or more Aembit roles. ![Aembit Administration Page - Identity Providers Role Mapping](../../../../../assets/images/identity_providers_mappings.png) 1. If needed, repeat the previous four steps. 1. Click **Save**. --- # How to create an Identity Provider URL: https://docs.aembit.io/user-guide/administration/identity-providers/creating-identity-providers/ Description: How to create an Identity Provider import { Steps } from '@astrojs/starlight/components'; Configuring an external Identity Provider (IdP) allows you to offer alternate authentication methods for how users sign in to your Aembit tenant, for example, Single Sign-On (SSO) instead of the default authentication method of an email and password. When you configure a SAML-capable IdP in your Aembit tenant, you must enter either your IdP's SAML **Metadata URL** or **Metadata XML** information. If either of these items is present, Aembit provides the following in the Aembit UI: - Aembit SP Entity ID - Aembit SSO URL (for IdP-initiated SSO) ## Configure an external Identity Provider To configure an external SAML-capable IdP to work with the Aembit IdP, follow these steps: 1. Log into your Aembit tenant. 1. In the left sidebar menu, go to **Administration --> Identity Providers**. ![An empty Identity Providers page](../../../../../assets/images/aembit_admin_page_identity_providers.png) 1. Click **New** to reveal the **Identity Provider** page. ![Adding an Identity Provider page](../../../../../assets/images/identity_providers_main_page.png) 1. On the **Details** tab, fill out the following fields: - **Name:** The name of the SAML Identity Provider (for example, Okta SSO) - **Description:** An optional description of the SAML Identity Provider - **Identity Provider Type:** The type of SAML Identity Provider. Aembit only supports SAML 2.0. - Depending on your Identity Provider, either enter the Metadata URL in the **Metadata URL** field or use the **Metadata XML** field to upload an XML file with the Identity Provider metadata information: - **Metadata URL:** The URL where Aembit can retrieve SAML metadata for a specific SAML-capable Identity Provider. - **Metadata XML:** Some Identity Providers may not provide a publicly accessible Metadata URL. In these cases, the Identity Provider configuration may offer an option to download the metadata information in XML form. :::note Use the **Aembit SP Entity ID** and **Aembit SSO URL** fields to configure your external Identity Provider. ::: 1. Optionally, on the **Mappings** tab of the **Identity Provider** page, you can specify mapping information between group claims configured in your Identity Provider and user roles available in your tenant. Adding this information enables automatic user creation based on the information in SAML response messages sent by your Identity Provider. See [Configure automatic user creation](/astro/user-guide/administration/identity-providers/automatic-user-creation#map-idp-saml-attributes-to-aembit-user-roles) for more information. 1. Click **Save**. Aembit lists the newly created SAML IdP on the **Identity Provider** page.
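If you configured the IdP with a **Metadata URL**, you can quickly confirm that the URL is publicly reachable and returns metadata before testing sign-in. The following is a minimal, illustrative check; the URL shown is a hypothetical Okta example and should be replaced with your own IdP's Metadata URL.

```shell
# Confirm the IdP's SAML Metadata URL is reachable and returns metadata XML.
# The URL below is a hypothetical placeholder; substitute your IdP's Metadata URL.
curl -sf "https://example.okta.com/app/exk0000000000000000/sso/saml/metadata" -o idp-metadata.xml \
  && head -c 300 idp-metadata.xml
```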
Now, when your users log in to your Aembit Tenant, the login UI displays the available SAML SSO options similar to the following screenshot: ![Updated Login Page With Okta](../../../../../assets/images/updated_login_with_sso.png) --- # Log Stream overview URL: https://docs.aembit.io/user-guide/administration/log-streams/ Description: Description of what Log Streams are and how to capture and archive log information The Log Streams feature enables you to set up a process to forward audit logs, workload events, and access authorization events from your Aembit tenant to an AWS S3 or Google Cloud Storage bucket. This in turn enables you to perform more detailed data analysis and processing outside of your Aembit tenant. :::note This is currently a paid feature. To enable this feature, please reach out to [Aembit Support](https://aembit.io/support/). ::: The following pages provide information about configuring Log Streams for different cloud storage services: - [AWS S3](/astro/user-guide/administration/log-streams/aws-s3) - Configure Log Streams to send logs to AWS S3 buckets - [GCS Bucket](/astro/user-guide/administration/log-streams/gcs-bucket) - Configure Log Streams to send logs to Google Cloud Storage buckets --- # Create an AWS S3 Log Stream URL: https://docs.aembit.io/user-guide/administration/log-streams/aws-s3/ Description: This page describes how to create a new Log Stream to an AWS S3 Bucket import { Steps } from '@astrojs/starlight/components'; To create a new Log Stream to an AWS S3 Bucket, follow these steps: 1. Log into your Aembit tenant. 1. Once you are logged in, click on the **Administration** tab in the left navigation pane. You will be redirected to the Administration Overview page. 1. Select **Log Streams** from the top navigation bar. The Log Streams page appears, displaying all existing Log Streams. ![Log Streams Main Page](../../../../../assets/images/log_streams_main_screen.png) 1. Click **+ New**, which displays the Log Streams pop-out window. ![Log Streams - AWS S3](../../../../../assets/images/log_streams_window.png) 1. Enter the following information in the window: - **Name:** The name of the new Log Stream you want to create. - **Description:** A text description for the new Log Stream. - **Stream Type:** The types of events you would like to associate with the Log Stream. Options are: `Access Authorization Events`, `Audit Logs`, and `Workload Events`. - **Destination Type:** The type of service to which you want your Log Stream information forwarded for further analysis. Select **AWS S3 using Bucket Policy** from the drop-down menu. For more detailed information on how to create an AWS S3 Bucket, please refer to the [Amazon AWS S3](https://docs.aws.amazon.com/AmazonS3/latest/userguide/creating-bucket.html) technical documentation. 1. Add your information for the AWS S3 Bucket in the following fields: - **S3 Bucket Region** - **S3 Bucket Name** - **S3 Path Prefix** You will also notice a generated Bucket Resource Policy in the **Destination Bucket Policy** field. Make sure you apply this policy to the Destination Bucket. 1. Click **Save**. Your new Log Stream will be saved, and then displayed on the main Log Streams page.
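If you manage the destination bucket with the AWS CLI, one way to apply the generated policy and later confirm that logs are arriving is sketched below. This is a minimal, illustrative example that assumes hypothetical names: a bucket called `my-aembit-log-bucket`, the generated policy saved locally as `destination-bucket-policy.json`, and an S3 Path Prefix of `aembit-logs/`.

```shell
# Apply the generated Destination Bucket Policy to the destination bucket (names are examples).
aws s3api put-bucket-policy \
  --bucket my-aembit-log-bucket \
  --policy file://destination-bucket-policy.json

# After the Log Stream has been running for a while, confirm that log objects
# are arriving under the configured S3 Path Prefix.
aws s3 ls s3://my-aembit-log-bucket/aembit-logs/ --recursive | tail
```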
--- # Create a Google Cloud Storage Bucket Log Stream URL: https://docs.aembit.io/user-guide/administration/log-streams/gcs-bucket/ Description: This page describes how to create a new Log Stream to a Google Cloud Storage (GCS) Bucket import { Steps } from '@astrojs/starlight/components'; ## Prerequisites Before creating a new Google Cloud Storage (GCS) Bucket Log Stream, make sure you have set up and configured: - [Google Cloud Storage Bucket](https://cloud.google.com/storage/docs/creating-buckets) - [IAM Service Account](https://cloud.google.com/iam/docs/service-accounts-create) - [Workload Identity Federation](https://cloud.google.com/iam/docs/workload-identity-federation-with-other-providers) ## Create a new Google Cloud Storage Bucket Log Stream To create a new Log Stream for a Google Cloud Storage (GCS) Bucket, follow these steps: 1. Log into your Aembit tenant. 1. Once you are logged in, click the **Administration** tab in the left navigation pane. You will be redirected to the Administration Overview page. 1. Select **Log Streams** from the top navigation bar. The Log Streams page appears, displaying all existing Log Streams. ![Log Streams Main Page - Empty](../../../../../assets/images/gcs_log_streams_main_page_empty.png) 1. Click **New**, which displays the Log Streams pop-out window. ![Log Streams Dialog Window - Empty](../../../../../assets/images/gcs_log_streams_dialog_window_empty.png) 1. Enter the following information in the window: - **Name:** The name of the new Log Stream you want to create. - **Description:** A text description for the new Log Stream. - **Stream Type:** The types of events you would like to associate with the Log Stream. Options are: `Access Authorization Events`, `Audit Logs`, and `Workload Events`. - **Destination Type:** The type of service to which you want your Log Stream information forwarded for further analysis. Select **GCS Bucket using Workload Identity Federation** from the drop-down menu. 1. Add your information for the Google Cloud Storage Bucket in the following fields: - **Bucket Name** - Name of the bucket. - **Audience** - A user-defined value set on the Provider Details screen. Aembit matches any audience value specified for the provider, which can be either the default audience or a custom value. - **Service Account Email** - The email address of the Service Account (set at the time of Service Account creation). - **Token Lifetime** - The amount of time that the token remains active. 1. When you are finished entering this information, click **Save**. Your new GCS Bucket Log Stream will be saved, and then displayed on the main Log Streams page. ![Log Streams Main Page With GCS Bucket Log Stream Added](../../../../../assets/images/gcs_log_streams_log_stream_list_with_gcs_bucket.png) --- # Resource Sets overview URL: https://docs.aembit.io/user-guide/administration/resource-sets/ Description: Description of what Resource Sets are and how they work In complex environments, managing access to sensitive resources requires granular control. Aembit's Resource Sets are an advanced feature that extends Aembit's existing Role-Based Access Control (RBAC) capabilities, providing fine-grained permissions and roles within your Aembit tenant. This feature enables you to define and manage logical and isolated sets of resources.
Resources include Client Workloads, Server Workloads, deployed Agent Proxy instances, and their associated operational events, such as Audit Logs, Access Authorization Events, and Workload Events. Each Resource Set acts as a mini-environment or sub-tenant, enabling segmentation of security boundaries to best secure your environment. This segmentation allows roles to be specifically tailored for your Resource Sets, thereby ensuring that users and workloads have access limited to the resources necessary for their designated tasks. This approach not only enhances security by adhering to the principle of least privilege (PoLP) but also supports complex operational and organizational configurations. ### Configuration Resource Sets primarily govern Access Policies and their associated entities. The following list contains all available Access Policy entities: - Client Workloads - Trust Providers - Access Conditions - Integrations - Credential Providers - Server Workloads The resources you configure can then operate independently of similar or identical resources in other Resource Sets, enabling numerous configuration and control options. To ensure this separation, Aembit administrators can configure user-assigned roles associated with specific Resource Sets and assign users to these roles. This logical association enables support for numerous advanced permission sets as best suited to your organization's security needs. Aembit generates Audit Logs for all configuration updates, separates them into their respective Resource Sets, and ensures they're only visible to users with the appropriate permissions. ### Deployment You can specify a Resource Set association when deploying an Aembit Agent Proxy or using the Aembit Agent. This enables all operational activity to execute within the bounds of that Resource Set. ### Reporting Aembit segments its comprehensive event logging, which includes Audit Logs, Access Authorization Events, and Workload Events, into the associated Resource Set and restricts access to these events to authorized users. This separation ensures that event data is not only logically isolated but also subject to stringent access controls, restricting visibility to authorized users within each specific Resource Set. Resource Sets empower you to enforce the principle of least privilege. PoLP makes sure that your users can only view configuration details and operational results for the environments and workloads under their direct responsibility. Moreover, this approach facilitates compliance by providing clear audit trails within defined security boundaries, and it simplifies troubleshooting by limiting the scope of event analysis to relevant resource contexts.
## Additional resources The following pages provide more information about working with Resource Sets: - [Creating Resource Sets](/astro/user-guide/administration/resource-sets/creating-resource-sets) - Learn how to create Resource Sets - [Adding Resources to Resource Sets](/astro/user-guide/administration/resource-sets/adding-resources-to-resource-set) - Add resources to your Resource Sets - [Assign Roles](/astro/user-guide/administration/resource-sets/assign-roles) - Assign roles to your Resource Sets - [Deploying Resource Sets](/astro/user-guide/administration/resource-sets/deploying-resource-sets) - Deploy your Resource Sets --- # How to add a resource to a Resource Set URL: https://docs.aembit.io/user-guide/administration/resource-sets/adding-resources-to-resource-set/ Description: How to add resources to a Resource Set ## Adding Resources to a Resource Set Now that you have created a Resource Set and assigned roles for managing the Resource Set, you can begin adding resources to the new Resource Set by following the steps listed below. 1. In your Aembit tenant, click on the Dashboard tab in the left navigation pane. You should see your main Dashboard page. ![Dashboard - Default Resource Set](../../../../../assets/images/administration_resource_sets_dashboard_default_resource_set.png) 2. In the top right corner, click on the Resource Selector button next to the **Getting Started** button. A drop-down will appear with a list of Resource Sets. :::note When you log into your Aembit tenant, Aembit selects the **Default** Resource Set by default. ::: 3. Select the Resource Set to which you would like to add new resources. In this example, *DevOps Team Resource Set* is selected. ![Main Dashboard - DevOps Team Resource Set Selected](../../../../../assets/images/administration_resource_sets_dashboard_devops_team_resource_set_selected.png) 4. To select the type of resource you would like to create, either click the tile at the bottom of the page for that resource or click its tab in the left navigation pane. In this example, the **Client Workload** resource has been selected. 5. The Client Workload Dialog window will then appear. Notice the label in the top-right corner of the window designating that this resource will be included in the *DevOps Team Resource Set*. ![Client Workload Dialog Window With DevOps Team Resource Set Selected](../../../../../assets/images/administration_resource_sets_new_client_workload_devops_team_resource_set.png) 6. Enter all information required for adding the new Client Workload to the *DevOps Team Resource Set* in this dialog window. Click **Save** when finished. 7. Repeat these steps for any other resources you would like to add to the Resource Set. --- # How to assign a role to a Resource Set URL: https://docs.aembit.io/user-guide/administration/resource-sets/assign-roles/ Description: How to assign a role for a Resource Set ## About Resource Set Roles and Permissions While a Resource Set is a collection of individual resources grouped together, within that same Resource Set you also need to assign users a specific role and permissions for that role. When configuring Resource Sets, consider the following: - Roles should be assigned to users based on their responsibilities for managing the Resource Set. When thinking of roles and role assignments, consider the role assignment from a resource-first perspective. - Permissions should be granted for each Role to ensure the user can perform their required tasks.
Permissions in a role work with the Resource Set association to enable access to specific Resource Set entities as configured. ## Assigning A Role To assign roles within a Resource Set, follow the steps listed below. 1. Log into your Aembit tenant. 2. Once you are logged into your tenant, click on the **Administration** tab in the left navigation pane. You will be directed to the main Administration page. 3. Click on the **Resource Sets** tab to display any existing Resource Sets. In this example, there are no Resource Sets. ![Administration Page - Resource Sets Empty](../../../../../assets/images/administration_resource_sets_main_page_empty.png) 4. Click on the **New** button to open the Resource Sets dialog window. 5. In the Resource Sets dialog window, select the **Roles** tab. You have the option to either add a new role or select from existing roles. - To add a new role, click the **Create New** tab, check the **Create Admin** checkbox for the new role, and enter a name for the new role you would like to create. ![Create New Role - DevOps Admin User](../../../../../assets/images/administration_resources_role_assignments_new_role_devops_admin_user.png) - To select an existing role, click the **Select Existing** tab. A drop-down menu appears listing all existing roles. Select the role, or roles, you would like to use from the drop-down menu and click **Save**. :::note You may select multiple roles for a Resource Set. ::: ![Resource Sets - Select an Existing Role](../../../../../assets/images/administration_resource_sets_roles_select_existing.png) 6. Once you are finished assigning roles to the Resource Set, click **Save**. Your new Resource Set will be saved and then displayed on the main Resource Sets page with the correct roles assigned. ![Resource Set Main Page - Test Resource 3](../../../../../assets/images/administration_resource_sets_new_resource_set_3.png) --- # How to create a Resource Set URL: https://docs.aembit.io/user-guide/administration/resource-sets/creating-resource-sets/ Description: How to create a Resource Set ## Create a new Resource Set To create a new Resource Set in your Aembit tenant, follow the steps listed below. 1. Log into your Aembit tenant. 2. Once you are logged into your tenant, click on the **Administration** tab in the left navigation pane. You will be directed to the main Administration page. 3. Click on the **Resource Sets** tab to display any existing Resource Sets. In this example, there are no Resource Sets. ![Administration Page - Resource Sets Empty](../../../../../assets/images/administration_resource_sets_main_page_empty.png) 4. Click on the **New** button to open the Resource Sets dialog window. 5. In the Resource Sets dialog window, select the **Details** tab. You will be prompted to enter the following information: - **Name** - The name of the Resource Set. In this example, the name of the new Resource Set is *DevOps Team Resource Set*. - **Description** - An optional text description for the Resource Set. ![Resource Set - DevOps Team Resource Set Example](../../../../../assets/images/administration_resource_sets_new_resource_set_devops_example.png) 6. Click **Save** when finished. Your new *DevOps Team Resource Set* will be saved and then displayed on the main Resource Sets page.
![Resource Sets Main Page With DevOps Team Resource Set](../../../../../assets/images/administration_resource_sets_main_page_with_devops_resource_set.png) --- # How to deploy a Resource Set URL: https://docs.aembit.io/user-guide/administration/resource-sets/deploying-resource-sets/ Description: How to deploy a Resource Set Once a Resource Set has been created, and roles and responsibilities have been assigned, the Agent Proxy component needs to be configured and deployed to work with the specific `AEMBIT_RESOURCE_SET_ID` environment variable. All Aembit deployment mechanisms are supported, including: - Kubernetes - Terraform ECS Module - Agent Proxy VM Installer - AWS Lambda ### Kubernetes Deployment To deploy a Resource Set using Kubernetes, you need to add the `aembit.io/resource-set-id` annotation to your Client Workload deployments. For more information on how to deploy Resource Sets using Kubernetes, please see the [Kubernetes Deployment](/astro/user-guide/deploy-install/kubernetes/kubernetes) page. ### Terraform ECS Module Deployment To deploy a Resource Set using the Terraform ECS Module, you need to provide the `AEMBIT_RESOURCE_SET_ID` environment variable in the Client Workload ECS Task. For more detailed information on how to deploy a Resource Set using the Terraform ECS Module, please see the [Terraform Configuration](/astro/user-guide/access-policies/advanced-options/terraform/terraform-configuration#resources-and-data-sources) page. ### Agent Proxy VM Installer Deployment To deploy a Resource Set using the Agent Proxy Virtual Machine Installer, you need to specify the `AEMBIT_RESOURCE_SET_ID` environment variable during the Agent Proxy installation. For more information on how to deploy a Resource Set using the Agent Proxy Virtual Machine Installer, please see the [Virtual Machine Installation](/astro/user-guide/deploy-install/virtual-machine/) page. ### AWS Lambda Container Deployment To deploy a Resource Set using an AWS Lambda Container, you need to specify the `AEMBIT_RESOURCE_SET_ID` environment variable to your Client Workload. For more information on how to deploy a Resource Set using an AWS Lambda Container, please see the [AWS Lambda](/astro/user-guide/deploy-install/serverless/aws-lambda-container) deployment page. --- # Roles overview URL: https://docs.aembit.io/user-guide/administration/roles/ Description: Description of Aembit roles and how they work When working in your Aembit environment, you may find it necessary to assign specific roles and permissions for groups so they only have access to certain resources that they are required to manage in order to perform their tasks. By creating roles and assigning permissions to that role, you can enhance your overall security profile by ensuring each role, with its assigned permissions, only has the access required. Your Aembit tenant enables you to create new roles within your organization, assign Resource Sets for a role, and set permissions for the role. The following pages provide more information about managing roles in your Aembit tenant: - [Adding Roles](/astro/user-guide/administration/roles/add-roles) - Learn how to add roles to your Aembit tenant --- # Add a role URL: https://docs.aembit.io/user-guide/administration/roles/add-roles/ Description: How to create a new Role in the Aembit Tenant ## Adding a New Role To add a new role to your Aembit tenant and environment, follow the steps listed below. 1. Log into your Aembit tenant. 2. 
Once you are logged into your tenant, click on the **Administration** tab in the left navigation pane. You will see the Administration page displayed. 3. Click on the **Roles** tab at the top of the page. This opens the Roles page and displays a list of existing roles. ![Roles Page](../../../../../assets/images/administration_roles_main_page.png) :::note By default, your Aembit tenant includes both the SuperAdmin and Auditor roles. ::: 4. Click on the **New** button to open the Roles dialog window. ![Roles Dialog Window - Empty](../../../../../assets/images/administration_roles_add_new_role_dialog_window.png) 5. In the Roles dialog window, enter the following information: - **Name** - The name of the Role. - **Description** - An optional text description of the Role. - **Resource Set Assignment(s)** - A drop-down menu that enables you to assign existing Resource Sets to the Role. - **Permissions** - A drop-down menu that enables you to select an existing permission set based on the type of Role you would like to create. When you select from this list, the radio buttons in the Permissions section are auto-filled with the default permissions for that role. ![Roles Dialog Window - Completed](../../../../../assets/images/administration_roles_dialog_window_completed.png) In this example, the SuperAdmin role has been selected. Note that the radio buttons have been automatically filled in with the default permissions for that Role. 6. Click **Save** when finished. The new Role you just created will appear on the Roles page. ![Roles Page - New Role Added](../../../../../assets/images/administration_roles_main_page_with_new_role.png) --- # Sign-On Policy overview URL: https://docs.aembit.io/user-guide/administration/sign-on-policy/ Description: Description of what Sign-On Policies are and how they work Use the Sign-On Policy page to control how users log in to your Aembit tenant. The settings on this page allow you to customize the login experience and security level according to your organization's needs. The Sign-On Policy page offers two key options to enhance security and streamline the authentication process: ## Require Single Sign-On Note the following requirement for using Single Sign-On (SSO): :::tip[Paid feature] This option is available only to tenants with the Identity Providers feature enabled. ::: This option mandates that users authenticate through a Single Sign-On provider. This not only simplifies the login process but also enhances security by centralizing authentication management. When you turn on the require SSO option, your users with the system Super Admin role can always use the native sign-in option (username and password). ## Require multi-factor authentication for native sign-in This option enforces the use of multi-factor authentication (MFA) for users logging in directly through Aembit's native sign-in method. When enabled, users must provide an MFA code in addition to their password. This markedly increases security by adding an extra layer of protection against unauthorized access. Aembit provides users with a 24-hour grace period once you require them to authenticate with MFA. The grace period resets for any users that update their accounts (for example, due to a password reset or account unlocking activity). After this period, Aembit locks accounts without MFA configured. ## Required permissions Access to the policy settings on this page requires the **Sign-On Policy** permission.
--- # Users overview URL: https://docs.aembit.io/user-guide/administration/users/ Description: This page provides a high-level description of users When you are working in your Aembit environment, you may find it necessary to add new users to your organization's Aembit tenant so they can be added to groups, manage resources, and be assigned certain roles within your organization. In your Aembit tenant, adding a user entails creating a new user in the tenant UI, and then assigning specific roles to that user. Once the user has been added and a role has been assigned, that user can then manage resources. The following pages provide more information about managing users in your Aembit tenant: - [Adding Users](/astro/user-guide/administration/users/add-user) - Learn how to add users to your Aembit tenant --- # How to add a user URL: https://docs.aembit.io/user-guide/administration/users/add-user/ Description: How to add a user to your Aembit tenant To add a user to your Aembit tenant, perform the steps listed below. 1. Log into your Aembit tenant. 2. Once you are logged into your tenant, click on the **Administration** tab in the left navigation pane. 3. On the Administration page, click on the **Users** tab at the top of the page. The Users page will be displayed with a list of existing users. ![Users Page - Empty](../../../../../assets/images/administration_users_main_page_empty.png) 4. Click on the **New** button to open the Users dialog window. 5. In the Users dialog window, enter the following information: - **First Name** - First name of the user - **Last Name** - Last name of the user - **Email** - The email address associated with the user - **Country Code (optional)** - The country code associated with the user - **Phone Number (optional)** - The phone number associated with the user - **Role Assignments** - A drop-down menu that enables you to select specific role assignments for the user from a list of available roles ![Users Dialog Window - Completed](../../../../../assets/images/administration_users_dialog_window_completed.png) 6. When finished, click **Save**. The user is then added to the Users page. ![Users Page - New User Added](../../../../../assets/images/administration_users_main_page_with_new_user.png) --- # User Guide/troubleshooting ============================ # Agent Controller Health URL: https://docs.aembit.io/user-guide/troubleshooting/agent-controller-health/ Description: This page describes steps for troubleshooting issues with Agent Controller health. ### Potential culprit The Agent Controller is a critical Aembit Edge Component that facilitates Agent Proxy registration. For any production deployment, it's essential to install and configure the [Agent Controller in a high availability configuration](/astro/user-guide/deploy-install/advanced-options/agent-controller/agent-controller-high-availability) and enable health monitoring. It is common to skip the high availability configuration and monitoring for proof-of-concept deployments. This oversight may lead to issues if the Agent Controller enters an unhealthy state. Several common causes can lead to this situation: - The Agent Controller was configured to use Trust Provider-based registration, and the Trust Provider was misconfigured (either originally or mistakenly altered afterward). - The Agent Controller was configured to use a device code, and an expired or incorrect device code was used.
In both scenarios, the Agent Controller will be unable to register, leading to the Agent Proxy's inability to register and retrieve credentials from Aembit Cloud. ### Troubleshooting Steps #### Agent Controller Deployed on Virtual Machine To check the health of the Agent Controller, query the [Agent Controller Health endpoint](/astro/user-guide/deploy-install/advanced-options/agent-controller/agent-controller-high-availability#agent-controller-health-endpoint-swagger-documentation). Execute the following command to assess the health of the Agent Controller, replacing `<agent-controller-host>` with the host where the Agent Controller is running:
```shell
curl http://<agent-controller-host>:5000/health
```
#### Agent Controller Deployed on Kubernetes Execute the following command to assess the health of the Agent Controller:
```shell
kubectl get pods -n aembit -l aembit.io/component=agent-controller
```
#### Resolving issues If the Agent Controller is not healthy: - Check the Trust Provider configuration if it was deployed via Trust Provider-based registration. - If the Agent Controller was deployed with device code registration, generate a new device code and redeploy the Agent Controller. --- # Agent Proxy Connectivity URL: https://docs.aembit.io/user-guide/troubleshooting/agent-proxy-connectivity/ Description: This page describes steps for investigating and troubleshooting issues with Agent Proxy connectivity. ### Potential culprit If the Aembit Agent Proxy cannot establish a connection either to the Agent Controller or to Aembit Cloud, Agent Proxy will not be able to receive directives and credentials from Aembit Cloud. If you do not see Workload Events for your Client Workload and Server Workload pair, a connectivity issue could be the culprit. Use your preferred method to access the terminal of the Virtual Machine or container where the Aembit Agent Proxy is running. :::note If your Client Workload is running in Kubernetes, the Aembit Agent Proxy will be added as a sidecar to the Client Workload container, and you can access it by executing the following command (replace `<client-workload-pod>` with your pod name):
```shell
kubectl exec -it <client-workload-pod> -c aembit-agent-proxy -- bash
```
::: ### Troubleshooting steps The next step is to check connectivity to the Agent Controller and Aembit Cloud by executing these commands, replacing `<agent-controller-host>` and `<tenant>` with your own values (install telnet first if necessary):
```shell
telnet <agent-controller-host> 5000
telnet <tenant>.aembit.io 443
```
If either DNS resolution or TCP connectivity fails, please check your DNS and firewall setup to allow the Aembit Agent Proxy to establish these connections. --- # Agent Proxy Debug Network Tracing URL: https://docs.aembit.io/user-guide/troubleshooting/agent-proxy-debug-network-tracing/ Description: This page describes how you can utilize the Agent Proxy Debug Network Tracing feature to capture and record network traffic in a Virtual Machine deployment. Agent Proxy has the ability to capture a rolling window of the most recent network traffic on your host's network devices, a feature referred to as Debug Network Tracing. When enabled, Agent Proxy: - writes a packet capture file (`.pcap`) to the local disk whenever it encounters certain errors (currently limited to TLS "certificate unknown" occurrences). - writes a `.pcap` file with the most recent network packets for all devices when receiving `POST /write-pcap-file` on the HTTP service endpoint (defaults to `localhost:51234` unless configured otherwise); see the example request after this list.
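Because the on-demand capture is exposed over HTTP, you can trigger it directly from the host running Agent Proxy once the feature is enabled (see the configuration steps below). A minimal example, assuming the default `localhost:51234` endpoint:

```shell
# Ask Agent Proxy to write a .pcap file containing the most recent packets for all devices.
# Assumes the default HTTP service endpoint; adjust the port if you configured it differently.
curl -X POST http://localhost:51234/write-pcap-file
```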
With this information, you can review the captured network traffic to locate the error and perform remediation steps to resolve the issue. :::note Debug Network Tracing is "off" by default; therefore, you must enable this feature directly. ::: ## Configuring Debug Network Tracing for Agent Proxy Configuring Agent Proxy to capture network traffic information requires you to perform the steps listed below. 1. Go to the [Virtual Machine installation](/astro/user-guide/deploy-install/virtual-machine/) page in the Aembit technical documentation. 2. Follow the steps described in the [Agent Proxy Installation](/astro/user-guide/deploy-install/virtual-machine/linux/agent-proxy-install-linux) section to install Agent Proxy. 3. When installing Agent Proxy, supply the following environment variable to the Agent Proxy VM installer: `AEMBIT_DEBUG_MAX_CAPTURED_PACKETS_PER_DEVICE=<N>`, where `N` is the number of packets you would like Agent Proxy to capture, which also determines the size of the rolling window. For example, if you set `N` to `2000`, Agent Proxy will monitor and keep a history of the last 2000 network packets for each IPv4 device. Your command should look like the example shown below.
```shell
sudo AEMBIT_AGENT_CONTROLLER=http://<agent-controller-host>:5000 AEMBIT_DEBUG_MAX_CAPTURED_PACKETS_PER_DEVICE=2000 [...] ./install
```
4. Agent Proxy debug network tracing is now enabled, and you are able to review network traffic on your devices. --- # Tenant Configuration URL: https://docs.aembit.io/user-guide/troubleshooting/tenant-configuration/ Description: This page describes steps for troubleshooting an Aembit Tenant misconfiguration. ### Troubleshooter Tool Several common misconfigurations can occur. Aembit provides a troubleshooter tool that can detect such misconfigurations. 1. Sign into your Aembit cloud tenant. 2. Click on the **Help** link in the left navigation pane. 3. You will be directed to the **Troubleshooter** tool. ![Troubleshooter](../../../../assets/images/troubleshooter.png) 4. Choose the appropriate Client Workload and Server Workload. 5. Click the **Analyze** button. You will be presented with a view showing various checks that were performed: - Access Policy Checks - Client Workload Checks - Trust Provider Checks - Access Condition Checks - Server Workload Checks ![Client Workload Checks](../../../../assets/images/troubleshooter_clientworkload_checks.png) ![Access Conditions Checks](../../../../assets/images/troubleshooter_accessconditions_checks.png) ![Server Workload Checks](../../../../assets/images/troubleshooter_serverworkload_checks.png) The checks can be in several states: - A green checkbox icon indicates that the check successfully passed. - A blue information icon presents general information. - A yellow exclamation icon indicates that additional configuration may be considered; however, the current configuration is supported and operational. - A red cross icon indicates that such a configuration will prevent the Client Workload from successfully authenticating to the Server Workload. Such a misconfiguration will have an action item on the right indicating how to rectify the issue. ### Credential Provider Verification Some Credential Providers, like OAuth 2.0 Client Credentials, allow for the verification of credentials. 1. Sign into your Aembit cloud tenant. 2. Click on the **Credential Providers** link in the left navigation pane. 3. Click on a Credential Provider. 4. Click the **Verify** button. You will be notified whether the verification succeeded or failed.
In the case of verification failure, please check the Credential Provider's details for accuracy. --- # Checking Tenant Health URL: https://docs.aembit.io/user-guide/troubleshooting/tenant-health-check/ Description: This page describes how to check the health of the Aembit Cloud components. When working with Aembit for your environment workloads, you may find it useful to occasionally check the health of the Aembit Cloud Service and associated components. The following services may be checked for current health and status: - Aembit Status Page - API/Management Plane - Edge Controller - Identity Provider ### Aembit Status Page The Aembit Service Status Page displays the current status of the Aembit Service, including any incidents that have been logged by the service. You may find this useful if you would like to verify that the service is up and running before working with your Aembit tenant. #### Checking the Health of the Aembit Service To check the current status of the Aembit service: 1. Navigate to the Aembit Status Page by opening a browser and going to the following web address: https://status.aembit.io/ 2. On this page, you may review the current status of the Aembit service, including the current status of the Management Portal and Control Plane, in addition to a 90-day record of any reported incidents. ![Aembit Status Page](../../../../assets/images/aembit_status_page.png) :::note If you would like to view historical uptime data beyond 90 days, click on the **View historical uptime** link. When you click on this link, you will see an Aembit Historical Data page where you can choose between historical data from either the Management Portal or Control Plane. ![Aembit Historical Data Page](../../../../assets/images/aembit_status_historical_uptime_data.png) ::: :::tip You may automatically receive Aembit service status updates by clicking on the **Subscribe to Updates** button in the top-right corner of the Status page and entering your email address. ::: ### API/Management Plane The API/Management Plane is a programmatic interface that enables you to perform many of the same actions and tasks you can perform in your Aembit tenant. While the Aembit tenant allows you to perform these tasks in a user interface, sometimes you may wish to perform some of these actions programmatically, especially if you wish to perform batch operations or write scripts to perform these tasks. Monitoring the API/Management Plane can be useful in ensuring the endpoints that control these actions are operational and working properly. #### Checking the Health of the API/Management Plane To check the health of the API/Management Plane, follow the steps described below. 1. Log into your Aembit tenant. 2. On the main dashboard page, hover over your name in the bottom left corner of the dashboard. You should see a **Profile** link appear. 3. Click on the **Profile** link to open the User Profile dialog window. ![User Profile Dialog Window](../../../../assets/images/user_profile_dialog_window.png) 4. In the User Profile dialog window, copy the **API Base Url** value. 5. Execute an API call to the Aembit server by appending `api/v1/health` to the **API Base Url** value you copied from the User Profile dialog window. Where: - `api` is the service you are calling - `v1` is the API version - `health` is the resource you are calling 6. You should receive a `200` HTTP status code if your tenant is operating correctly (referred to as "healthy"). An example of a successful tenant health check response is shown below.
`{"status":"Healthy","version":"===version===","gitSHA":"===sha===","host":"===tenant===.aembit.io","tenant":"===tenant==="}` ### Agent Controller Agent Controller communicates its health status to Aembit Cloud every 60 seconds (similar to a "heartbeat" request), enabling you to monitor the real-time health status of Agent Controller. When reviewing the health status of Agent Controller, there are four different connection states: - **Healthy** - The Agent Controller is registered and the connection status is healthy (green). - **Registered** - This state is only visible if Kerberos is enabled. Agent Controller is registered, but it is not ready to provide Kerberos attestation yet. - **Unregistered** - The Agent Controller is not registered with a Device Code or Trust Provider (yellow). - **Registered and Not Connected** - The Agent Controller is registered and healthy, but the connection is down (yellow). :::note If Agent Controller is in an "inactive" state, Agent Controller status will be displayed with a gray icon in the **Status** column. ::: #### Checking the Health of the Agent Controller in the Aembit Tenant To check the health of the Agent Controller in your Aembit Tenant: 1. Log into the Aembit tenant with your user credentials. 2. Click on the **Edge Components** link in the left navigation pane. You will see the Edge Components Dashboard displayed. :::note By default, the Agent Controllers dashboard is displayed. ::: ![Agent Controller Dashboard](../../../../assets/images/agent_controller_health_status_check.png) 3. From the list of Agent Controllers, locate the Agent Controller whose health you want to check and scroll to the **Status** column. 4. Hover over the **Status** icon to see when the last health check was performed. ### Edge Controller The Edge Controller is a component within the Aembit Cloud infrastructure that provides endpoints that enable you to generate application events and retrieve configuration information, policies, and credentials. Verifying that the Edge Controller and its endpoints are operating correctly helps ensure that application events and other configuration information are captured, logged, and retrievable by users. #### Checking the Health of the Edge Controller To check the health of the Edge Controller: 1. Download the `health.proto` file from the [gRPC Health Proto GitHub repository](https://github.com/grpc/grpc/blob/master/src/proto/grpc/health/v1/health.proto). 2. Use the [gRPCurl](https://github.com/fullstorydev/grpcurl) command line tool to verify the Edge Controller is running. For example, if you run this command with Docker, the command should look like this: `docker run --rm -v $PWD:/app fullstorydev/grpcurl -v -import-path=/app -proto health.proto tenant.ec.useast2.aembit.io:443 grpc.health.v1.Health/Check` ### Identity Provider An Identity Provider is a system that stores, manages, and verifies digital identities for users or entities connected to a network or system so a user may be authenticated to use a service. In the Aembit framework, the Identity Provider authenticates users and grants them access to various Aembit services. Monitoring the health of the Identity Provider ensures authentication and identity verification services are running correctly, and that users can be authenticated properly before being granted access to Aembit services.
#### Checking the Health of the Identity Provider If you would like to check the current health of your Identity Provider, the steps are very similar to those you followed to check the API/Management Plane and are described below. 1. In your Aembit tenant, select the Sign In with Email option. 2. Notice that when you select this option, your browser address bar shows a Fully Qualified Domain Name (FQDN) containing your base URL (for example, https://tenant.id.useast2.aembit.io). 3. Append `api/v1/health` to the FQDN in the address bar, as in the example shown below. `https://tenant.id.useast2.aembit.io/api/v1/health` Where: - `https://tenant.id.useast2.aembit.io` is the base URL - `api` is the service being called - `v1` is the API version - `health` is the resource being called 4. After pressing Enter, you should receive an output message confirming that the Identity Provider is in a "healthy" state. `{"status":"Healthy","version":"===version===","gitSHA":"===sha===","host":"===tenant===.aembit.io","tenant":"===tenant==="}`
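You can also run this health check from the command line. The following is a minimal example using curl, where `tenant` is a placeholder for your tenant name (adjust the region subdomain if yours differs):

```shell
# Query the Identity Provider health endpoint; a healthy tenant returns HTTP 200 and a "Healthy" status.
curl -s "https://tenant.id.useast2.aembit.io/api/v1/health"
```

---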