Amazon Bedrock Models

A drag-and-drop component for integrating Amazon Bedrock LLMs into your workflow. Simply configure the required parameters and connect the inputs/outputs to other components.

Amazon Bedrock Component

Amazon Bedrock component interface and configuration

AWS Credentials Required: This component requires valid AWS credentials with appropriate permissions to access Amazon Bedrock services. Ensure you have the necessary IAM roles and policies configured before using this component in production.

Component Inputs

  • Input: Text input for the model

    Example: "Explain the concept of quantum computing in simple terms."

  • System Message: System prompt to guide model behavior

    Example: "You are a helpful assistant who explains complex topics in simple language."

  • Stream: Toggle for streaming responses

    Example: true (for real-time token streaming) or false (for complete response)

  • Model ID: The Bedrock model identifier

    Example: "anthropic.claude-3-haiku-20240307", "amazon.titan-text-express-v1"

  • AWS Access Key ID: Your AWS access key

    Example: "AKIAIOSFODNN7EXAMPLE"

  • AWS Secret Access Key: Your AWS secret key

    Example: "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"

  • AWS Session Token: Optional temporary session token

    Example: "AQoEXAMPLEH4aoAH0gNCAPyJxz4BlCFFxWNE1OPTgk5TthT..."

  • Credentials Profile Name: AWS credentials profile (optional)

    Example: "bedrock-profile"

  • Region Name: AWS region where Bedrock is deployed

    Example: "us-east-1", "us-west-2"
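Taken together, the inputs above can be captured in a single configuration object. The sketch below is illustrative only — the field names mirror the input labels but are assumptions, not the component's actual API — and it adds a minimal pre-flight check that the required fields are present before the component is run.

```javascript
// Illustrative configuration object mirroring the component inputs.
// Field names are assumptions based on the input labels above.
const bedrockConfig = {
  input: "Explain the concept of quantum computing in simple terms.",
  systemMessage: "You are a helpful assistant who explains complex topics in simple language.",
  stream: false,
  modelId: "anthropic.claude-3-haiku-20240307",
  regionName: "us-east-1",
  // Explicit keys can be omitted when a credentials profile is used instead.
  credentialsProfileName: "bedrock-profile",
};

// Minimal validation: Model ID and Region Name are always required;
// authentication must come from explicit keys or a credentials profile.
function validateConfig(cfg) {
  const errors = [];
  if (!cfg.modelId) errors.push("Model ID is required");
  if (!cfg.regionName) errors.push("Region Name is required");
  const hasKeys = cfg.awsAccessKeyId && cfg.awsSecretAccessKey;
  if (!hasKeys && !cfg.credentialsProfileName) {
    errors.push("Provide AWS access keys or a credentials profile");
  }
  return errors;
}
```

A configuration that passes `validateConfig` can still fail at runtime (expired keys, missing IAM permissions), so workflow-level error handling is still needed.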

Component Outputs

  • Text: Generated text output

    Example: "Quantum computing is a type of computing that uses quantum mechanics to process information..."

  • Language Model: Model information and metadata

    Example: model_id: anthropic.claude-3-haiku-20240307, usage: {input_tokens: 40, output_tokens: 120, total_tokens: 160}

Additional Settings

Model Kwargs

Key-value pairs for additional model parameters

{ "temperature": 0.7, "max_tokens": 500, "top_p": 0.9 }
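These key-value pairs are typically merged into the model's request body. The sketch below shows one way this could look for a Claude model on Bedrock; the surrounding body layout follows Anthropic's messages format, but treat the exact field names as an assumption and confirm them against the target model's documentation.

```javascript
// Sketch: fold Model Kwargs into a Claude-style request body for Bedrock.
// The wrapper fields (anthropic_version, system, messages) are assumptions
// based on Anthropic's messages format; other providers use different bodies.
function buildRequestBody(input, systemMessage, modelKwargs) {
  return JSON.stringify({
    anthropic_version: "bedrock-2023-05-31",
    system: systemMessage,
    messages: [{ role: "user", content: input }],
    ...modelKwargs, // e.g. temperature, max_tokens, top_p
  });
}

const body = buildRequestBody(
  "Explain quantum computing in simple terms.",
  "You are a helpful assistant.",
  { temperature: 0.7, max_tokens: 500, top_p: 0.9 }
);
```

Because the kwargs are spread into the body last, a value set there (for example `temperature`) takes effect without the component needing a dedicated input for every tunable parameter.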

Endpoint URL

Custom endpoint URL (optional)

Default: AWS regional endpoint based on Region Name
Example: "https://bedrock-runtime.us-east-1.amazonaws.com"
Use case: For VPC endpoints or custom AWS endpoint configurations
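When no custom URL is supplied, the default endpoint follows directly from the Region Name, as in the example above. A small helper makes that derivation explicit (the URL pattern matches the regional endpoints shown in this document):

```javascript
// Derive the default Bedrock runtime endpoint for a region.
// Matches the regional endpoint pattern used in the examples above.
function defaultEndpoint(regionName) {
  return `https://bedrock-runtime.${regionName}.amazonaws.com`;
}
```

Only override this with a custom Endpoint URL when routing through a VPC endpoint or a non-standard AWS configuration.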

Supported Models

Anthropic Claude Models

High-performance models from Anthropic available via Bedrock

- anthropic.claude-3-sonnet-20240229
- anthropic.claude-3-haiku-20240307
- anthropic.claude-3-opus-20240229
- anthropic.claude-instant-v1

Amazon Titan Models

Amazon's proprietary LLMs

- amazon.titan-text-express-v1
- amazon.titan-text-lite-v1

Other Provider Models

Additional models available through Bedrock

- meta.llama2-13b-chat-v1
- meta.llama2-70b-chat-v1
- cohere.command-text-v14
- ai21.j2-ultra-v1
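As the lists above show, every model ID follows a "provider.model-name" pattern, so the provider can be read off the prefix. This is handy in multi-model workflows that branch on provider; the helper below is an illustrative sketch, not part of the component's API:

```javascript
// Extract the provider prefix from a Bedrock model ID
// (IDs follow the "provider.model-name" pattern shown above).
function providerOf(modelId) {
  return modelId.split(".")[0];
}
```

For example, a workflow could use this to pick the right request-body format before invoking the model.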

Implementation Example

// Basic configuration with environment variables
const bedrockClient = {
  modelId: "anthropic.claude-3-haiku-20240307",
  regionName: "us-east-1",
  // AWS credentials from environment variables:
  // AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are used automatically
};

// Advanced configuration
const advancedBedrockClient = {
  modelId: "anthropic.claude-3-sonnet-20240229",
  regionName: "us-west-2",
  awsAccessKeyId: process.env.AWS_ACCESS_KEY_ID,
  awsSecretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
  awsSessionToken: process.env.AWS_SESSION_TOKEN,
  stream: true,
  modelKwargs: {
    temperature: 0.5,
    max_tokens: 2000,
    top_p: 0.9
  },
  endpointUrl: "https://bedrock-runtime.us-west-2.amazonaws.com"
};

// Usage example
async function generateResponse(input) {
  const response = await bedrockComponent.generate({
    input: input,
    systemMessage: "You are a helpful assistant that explains complex concepts clearly.",
    modelId: "anthropic.claude-3-haiku-20240307"
  });
  return response.text;
}

Use Cases

  • Enterprise Applications: Build AI solutions that leverage AWS security and compliance standards
  • Multi-Model Workflows: Create applications that can switch between different model providers
  • Content Generation: Generate articles, summaries, and creative content
  • Conversational Agents: Build chatbots and virtual assistants using streaming responses
  • AWS Integration: Integrate with other AWS services in a unified environment

Best Practices

  • Use environment variables for AWS credentials
  • Test with small inputs first to validate your setup
  • Monitor token usage and costs through AWS billing dashboard
  • Use appropriate regional endpoints to minimize latency
  • Implement error handling in workflows to manage API failures
  • Consider using AWS IAM roles instead of access keys for production deployments
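The credential recommendations above amount to a resolution order: prefer environment variables (including an optional session token for temporary credentials), fall back to a named profile, and surface a clear error otherwise so the workflow's error handling can react. A sketch of that order, with illustrative field names:

```javascript
// Sketch of a credential-resolution order reflecting the practices above:
// environment variables first, then a named profile, then an explicit error.
// Field names are illustrative, not the component's actual API.
function resolveCredentials(env, profileName) {
  if (env.AWS_ACCESS_KEY_ID && env.AWS_SECRET_ACCESS_KEY) {
    return {
      source: "environment",
      awsAccessKeyId: env.AWS_ACCESS_KEY_ID,
      awsSecretAccessKey: env.AWS_SECRET_ACCESS_KEY,
      awsSessionToken: env.AWS_SESSION_TOKEN, // optional temporary token
    };
  }
  if (profileName) {
    return { source: "profile", credentialsProfileName: profileName };
  }
  return { source: "none", error: "No AWS credentials found" };
}
```

In production, an IAM role attached to the compute environment removes the need for long-lived keys entirely; the environment branch then carries the role's temporary credentials, session token included.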