Azure OpenAI Embeddings
Generate embeddings using Azure OpenAI's enterprise-grade models. The component offers secure Azure integration, flexible deployment options, and explicit API versioning for reliable production use.

Azure OpenAI Embeddings component interface and configuration
Azure Deployment Notice: Before using this component, ensure you have deployed the appropriate embedding model in your Azure OpenAI resource. The deployment name you configure must correspond to a deployment of the selected model, or API calls will fail.
Component Inputs
- Model: The embedding model to use
Example: "text-embedding-ada-002", "text-embedding-3-small", "text-embedding-3-large"
- Azure Endpoint: Your Azure OpenAI resource endpoint URL
Example: "https://your-resource.openai.azure.com"
- Deployment Name: The name of your Azure OpenAI model deployment
Example: "embedding-deployment"
- API Key: Your Azure OpenAI API key
Example: "1234567890abcdef1234567890abcdef"
- API Version: The Azure OpenAI API version to use
Example: "2023-05-15", "2024-02-15" (Default: "2023-05-15")
- Dimensions: Optional override for output vector dimensions (supported by the text-embedding-3 models only)
Example: 1536, 3072 (depends on model capabilities)
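Taken together, the inputs above describe a single configuration object. The TypeScript interface below is only a sketch of that shape; the field names follow the implementation example later on this page and are assumptions, not a published type definition.
// Sketch of the configuration shape implied by the inputs above (assumed field names)
interface AzureOpenAIEmbeddingsConfig {
  model: "text-embedding-ada-002" | "text-embedding-3-small" | "text-embedding-3-large";
  azureEndpoint: string;   // e.g. "https://your-resource.openai.azure.com"
  deploymentName: string;  // must point to a deployment of the selected model
  apiKey: string;
  apiVersion?: string;     // defaults to "2023-05-15"
  dimensions?: number;     // optional override, text-embedding-3 models only
}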
Component Outputs
- Embeddings: Vector representations of the input text
Example: [0.018, -0.027, 0.082, ...]
- Usage Information: Tokens used in the request
Example: prompt_tokens: 125, total_tokens: 125
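The outputs can be sketched the same way. The result shape below is an assumption based on the fields listed above, not a guaranteed contract.
// Sketch of the result shape implied by the outputs above (assumed field names)
interface EmbeddingResult {
  embeddings: number[][];    // one vector per input text, e.g. [0.018, -0.027, 0.082, ...]
  usage: {
    prompt_tokens: number;   // e.g. 125
    total_tokens: number;    // e.g. 125
  };
}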
Model Comparison
text-embedding-3-large
Highest-quality model with the strongest retrieval performance, ideal for mission-critical applications
Dimensions: 3072
Performance: Premium
Languages: Excellent multilingual support
Ideal for: Enterprise semantic search and critical retrieval tasks
text-embedding-3-small
Balanced model offering good performance with lower compute requirements
Dimensions: 1536
Performance: Good
Languages: Strong multilingual support
Ideal for: Most commercial applications with balanced cost-performance needs
text-embedding-ada-002
Legacy model maintained for backward compatibility
Dimensions: 1536
Performance: Baseline
Languages: English-focused
Ideal for: Legacy systems and compatibility with existing vector databases
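When a pipeline needs to select a model programmatically, the comparison above collapses into a couple of lookup tables. The maps below are a sketch only; the dimension values are the native sizes listed above, and the priority helper is an assumed convenience, not part of the component.
// Native output dimensions for each model, per the comparison above
const MODEL_DIMENSIONS: Record<string, number> = {
  "text-embedding-3-large": 3072,
  "text-embedding-3-small": 1536,
  "text-embedding-ada-002": 1536
};
// Sketch: map a rough priority to a model choice (assumed helper)
const MODEL_BY_PRIORITY: Record<"quality" | "cost" | "legacy", string> = {
  quality: "text-embedding-3-large",  // premium performance, 3072 dimensions
  cost: "text-embedding-3-small",     // balanced cost-performance, 1536 dimensions
  legacy: "text-embedding-ada-002"    // backward compatibility, 1536 dimensions
};
const chosenModel = MODEL_BY_PRIORITY["cost"];
const nativeDimensions = MODEL_DIMENSIONS[chosenModel]; // 1536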
Implementation Example
// Basic configuration
const embedder = new AzureOpenAIEmbeddor({
  model: "text-embedding-ada-002",
  azureEndpoint: process.env.AZURE_OPENAI_ENDPOINT,
  deploymentName: process.env.AZURE_OPENAI_EMBEDDING_DEPLOYMENT,
  apiKey: process.env.AZURE_OPENAI_API_KEY
});

// Advanced configuration
const advancedEmbedder = new AzureOpenAIEmbeddor({
  model: "text-embedding-3-large",
  azureEndpoint: process.env.AZURE_OPENAI_ENDPOINT,
  deploymentName: "embedding-3-large-deployment",
  apiKey: process.env.AZURE_OPENAI_API_KEY,
  apiVersion: "2024-02-15",
  dimensions: 1536
});

// Generate embeddings
const result = await embedder.embed({
  input: "Your text to embed"
});

// Batch processing
const batchResult = await embedder.embedBatch({
  inputs: [
    "First text to embed",
    "Second text to embed"
  ]
});

console.log(result.embeddings);
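Once vectors come back, a common next step is ranking texts against each other by cosine similarity. The snippet below assumes `batchResult.embeddings` is an array of numeric vectors, as in the batch example above; the similarity function itself is standard and not specific to this component.
// Cosine similarity between two embedding vectors of equal length
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
// Example: compare the two batch results against each other
const [first, second] = batchResult.embeddings;
console.log(cosineSimilarity(first, second));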
Use Cases
- Enterprise RAG Systems: Build retrieval-augmented generation with Azure compliance
- Regulatory-Compliant Search: Create searchable document bases that comply with industry regulations
- Azure AI Integration: Seamlessly integrate with other Azure AI services
- Private Cloud Deployments: Implement in environments with strict data privacy requirements
- Enterprise Knowledge Bases: Create corporate knowledge stores with vector search capabilities
Best Practices
- Use managed identities for authentication in production environments
- Set up quota alerts to monitor embedding API usage
- Implement caching strategies to reduce redundant embedding generation (see the sketch after this list)
- Consider regional data residency requirements when deploying Azure OpenAI
- Use the latest API versions for access to the newest features and models
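As a starting point for the caching recommendation above, the wrapper below keeps an in-memory map keyed by input text. It is only a sketch around the hypothetical embedder from the implementation example and assumes its result exposes an embeddings array; a production cache would typically hash the input and persist to a shared store such as Redis.
// Sketch: in-memory cache around the embedder from the implementation example
const embeddingCache = new Map<string, number[]>();

async function embedWithCache(text: string): Promise<number[]> {
  const cached = embeddingCache.get(text);
  if (cached) return cached;                      // skip the API call entirely

  const { embeddings } = await embedder.embed({ input: text });
  const vector = embeddings[0];                   // assumes one vector per input
  embeddingCache.set(text, vector);
  return vector;
}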