1. Azure Connectors
These Azure connectors let your workflow integrate with Microsoft Azure's cloud services to retrieve, process, and use data in your RAG applications.
1.1 Azure Cosmos DB Loader

Azure Cosmos DB Connector Interface
Description
The Azure Cosmos DB Loader connects to Microsoft's globally distributed, multi-model database service to retrieve documents. It allows you to query and extract JSON documents stored in Cosmos DB collections for use within your workflow.
Use Cases
- Retrieving product catalogs for e-commerce recommendation systems
- Accessing user profile data for personalized experiences
- Loading business documents for semantic search applications
- Integrating with NoSQL data stores that require low-latency access
- Real-time analytics on document-based datasets
Inputs
- Cosmos DB URI: The endpoint URL for your Cosmos DB instance (required)
Example: https://your-cosmosdb-account.documents.azure.com:443/
- Primary Key: The primary access key for authentication (required)
Example: aBcD1234567890XyZaBcD1234567890XyZ==
- Database Name: Name of the database containing the documents (required)
Example: customer-database
- Container Name: Container/collection where the documents are stored (required)
Example: customer-profiles
- SQL Query: SQL-like query to filter documents (optional)
Example: SELECT * FROM c WHERE c.category = 'electronics' AND c.price > 1000
Outputs
JSON formatted data that can be processed by subsequent nodes in your workflow.
Example Output:
[ { "id": "item-1234", "name": "Wireless Headphones", "category": "electronics", "price": 89.99, "description": "Noise cancelling wireless headphones with 20hr battery life" }, { "id": "item-5678", "name": "Bluetooth Speaker", "category": "electronics", "price": 129.99, "description": "Waterproof bluetooth speaker with 360° sound" } ]
Implementation Notes
- For optimal performance, use targeted queries instead of retrieving entire collections
- Throughput costs are based on Request Unit (RU) consumption; write targeted queries to minimize RU usage
- Consider adding pagination for large result sets by using the OFFSET and LIMIT clauses
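The notes above can be sketched in code. The following is a minimal illustration, assuming the `azure-cosmos` Python SDK; the endpoint, key, and the database/container names from the examples above are placeholders, and the SDK import is kept local so the pagination helper works without the package installed.

```python
def paginated_query(base_query: str, page: int, page_size: int) -> str:
    """Append Cosmos DB OFFSET/LIMIT pagination to a base SQL query."""
    offset = page * page_size
    return f"{base_query} OFFSET {offset} LIMIT {page_size}"

def load_documents(endpoint: str, key: str, page: int = 0, page_size: int = 100):
    """Fetch one page of filtered documents (hypothetical wiring)."""
    from azure.cosmos import CosmosClient  # local import: optional dependency
    client = CosmosClient(endpoint, credential=key)
    container = (client.get_database_client("customer-database")
                       .get_container_client("customer-profiles"))
    query = paginated_query(
        "SELECT * FROM c WHERE c.category = 'electronics' AND c.price > 1000",
        page, page_size)
    # Cross-partition queries must be enabled when the filter does not
    # pin down a single partition key value.
    return list(container.query_items(query, enable_cross_partition_query=True))
```

Paginating at the query level keeps each request's RU charge bounded, instead of paying for a full collection scan on every call.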
1.2 Azure Blob Loader
Description
The Azure Blob Loader fetches files from Azure Blob Storage. It can retrieve documents, images, videos, logs, and any other file types stored in Azure's object storage service, making them available for processing in your workflow.

Azure Blob Storage Connector Interface
Use Cases
- Loading PDF documents, reports, or manuals for knowledge extraction
- Processing images or media files for content analysis
- Accessing log files for troubleshooting or analytics
- Retrieving configuration files or datasets for model training
- Handling large files that exceed traditional database size limits
Inputs
- Connection String: Azure Storage account connection string (required)
Example: DefaultEndpointsProtocol=https;AccountName=mystorageaccount;AccountKey=myAccountKey==;EndpointSuffix=core.windows.net
- Container: Storage container name where files are stored (required)
Example: documents
- Blob File Name: Path or name of the specific file to load (required)
Example: reports/annual-report-2023.pdf
Outputs
Raw file content or transformed data depending on file type.
For text files, the content is returned as plain text. Binary files such as PDFs are returned as raw content and typically require a downstream extraction step.
Implementation Notes
- Use folder paths in blob names to organize content (e.g., "finance/reports/q2-2023.pdf")
- Consider using Shared Access Signatures (SAS) for more granular permissions
- For large files, the connector will stream content to avoid memory issues
- Binary files (like PDFs) will typically need to be paired with a document processing node
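A minimal sketch of the fetch-and-decode behavior described above, assuming the `azure-storage-blob` SDK; the extension list and the decision to decode by extension are illustrative assumptions, not the connector's actual rules, and the SDK import is local so the helper runs without the package.

```python
import os

# Illustrative set of extensions treated as plain text.
TEXT_EXTENSIONS = {".txt", ".csv", ".json", ".md", ".log", ".xml"}

def is_text_blob(blob_name: str) -> bool:
    """Guess text vs. binary content from the file extension."""
    return os.path.splitext(blob_name)[1].lower() in TEXT_EXTENSIONS

def load_blob(connection_string: str, container: str, blob_name: str):
    """Return blob content as str for text files, raw bytes otherwise."""
    from azure.storage.blob import BlobServiceClient  # optional dependency
    service = BlobServiceClient.from_connection_string(connection_string)
    blob = service.get_blob_client(container=container, blob=blob_name)
    data = blob.download_blob().readall()  # SDK streams chunks internally
    return data.decode("utf-8") if is_text_blob(blob_name) else data
```

Returning bytes for binary types leaves extraction to a dedicated document-processing node, matching the note above.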
1.3 Azure SQL Loader
Description
The Azure SQL Loader connects to Azure SQL Database or SQL Server instances hosted on Azure. It enables querying structured relational data using standard SQL syntax, making it ideal for retrieving well-organized tabular information.

Azure SQL Connector Interface
Use Cases
- Extracting customer records for personalized interactions
- Querying product information for e-commerce applications
- Retrieving financial or transactional data for analysis
- Accessing historical data for trend analysis in reports
- Integrating with existing enterprise SQL-based systems
Inputs
- Database URI: Connection string for your Azure SQL database (required)
Example: Server=mysqlserver.database.windows.net,1433;Database=mydb;User ID=myuser;Password=mypassword;Encrypt=true
- Table Name: Database table to query (required)
Example: Customers
- SQL Query: SQL query to extract the specific data (required)
Example: SELECT CustomerID, Name, Email, LastPurchaseDate FROM Customers WHERE Region = 'West' AND LastPurchaseDate > '2023-01-01'
Outputs
JSON array of records or tabular data representation.
Example Output:
[ { "CustomerID": 1001, "Name": "Acme Corporation", "Email": "contact@acme.com", "LastPurchaseDate": "2023-06-15" }, { "CustomerID": 1042, "Name": "TechSolutions Inc", "Email": "info@techsolutions.com", "LastPurchaseDate": "2023-08-22" } ]
Implementation Notes
- Use parameterized queries to prevent SQL injection when working with dynamic inputs
- Consider using indexed columns in WHERE clauses for better performance
- Limit the number of columns and rows returned to optimize performance
- For large result sets, use TOP or OFFSET/FETCH clauses for pagination
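The parameterization and pagination notes above can be sketched together. This assumes `pyodbc` as the driver (the connector's actual driver is not specified here); note that pyodbc expects an ODBC-style connection string with a `Driver=` clause, slightly different from the ADO.NET-style example shown earlier. The environment variable name is a placeholder.

```python
import os

def paginate(query: str, page: int, page_size: int) -> str:
    """Add OFFSET/FETCH pagination; the query must include an ORDER BY."""
    return (f"{query} OFFSET {page * page_size} ROWS "
            f"FETCH NEXT {page_size} ROWS ONLY")

def fetch_customers(region: str, since: str, page: int = 0):
    """Run the example query with bound parameters, not string formatting."""
    import pyodbc  # local import: the helper above needs no driver
    conn = pyodbc.connect(os.environ["AZURE_SQL_CONNECTION_STRING"])
    query = paginate(
        "SELECT CustomerID, Name, Email, LastPurchaseDate FROM Customers "
        "WHERE Region = ? AND LastPurchaseDate > ? ORDER BY CustomerID",
        page, 50)
    with conn.cursor() as cur:
        cur.execute(query, region, since)  # ? placeholders bind safely
        return cur.fetchall()
```

The `?` placeholders are what prevents SQL injection when `region` or `since` come from dynamic workflow inputs.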
1.4 Azure Container Loader

Azure Container Loader Interface
Description
The Azure Container Loader enables you to work with Azure Container Instances or Azure Container Registry. It can pull container metadata and images, or execute commands against containerized applications.
Use Cases
- Retrieving data from containerized microservices
- Integrating with container-based data processing systems
- Accessing containerized databases or data services
- Running specialized data extraction tools in isolated containers
Inputs
- Connection String: Azure Container connection or endpoint (required)
Example: https://mycontainerregistry.azurecr.io
- Container: Container instance or image name (required)
Example: data-processor:latest
Outputs
Container metadata, execution results, or data streams depending on the configured operation.
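As a rough illustration of resolving the Container input, the helper below splits a `repository:tag` reference like the example above; the registry call is a hypothetical wiring assuming the `azure-containerregistry` and `azure-identity` SDKs, and is not the connector's actual implementation.

```python
def parse_image_ref(ref: str) -> tuple:
    """Split 'data-processor:latest' into (repository, tag); default to 'latest'."""
    repo, sep, tag = ref.partition(":")
    return repo, tag if sep else "latest"

def list_tags(endpoint: str, repository: str):
    """List tags for a repository (requires Azure credentials)."""
    from azure.containerregistry import ContainerRegistryClient  # assumed SDK
    from azure.identity import DefaultAzureCredential
    client = ContainerRegistryClient(endpoint, DefaultAzureCredential())
    return [t.name for t in client.list_tag_properties(repository)]
```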
Authentication & Security
- Ensure proper authentication and access permissions are configured in your Azure services
- Use managed identities where possible for enhanced security
- Store sensitive connection strings and keys in environment variables or a secure vault
- Implement the principle of least privilege when configuring service access
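One way to follow the environment-variable and vault guidance above is a small resolver like this sketch: it reads the environment first and falls back to Azure Key Vault, assuming the `azure-keyvault-secrets` and `azure-identity` SDKs. The vault URL and secret names are placeholders you would supply.

```python
import os
from typing import Optional

def get_secret(name: str, vault_url: Optional[str] = None) -> str:
    """Resolve a secret from the environment, then from Key Vault."""
    value = os.environ.get(name)
    if value is not None:
        return value
    if vault_url is None:
        raise KeyError(f"secret {name!r} not set and no vault configured")
    # DefaultAzureCredential picks up managed identities automatically.
    from azure.identity import DefaultAzureCredential
    from azure.keyvault.secrets import SecretClient
    client = SecretClient(vault_url=vault_url, credential=DefaultAzureCredential())
    # Key Vault secret names use hyphens rather than underscores.
    return client.get_secret(name.replace("_", "-")).value
```

Keeping the lookup in one place means connection strings never appear in workflow definitions or logs.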
Best Practices
- Obtain connection strings from Azure Portal's service-specific "Connection Strings" or "Access Keys" sections
- Test connections with minimal permissions first before expanding access
- Use query timeouts to prevent the workflow from hanging on long-running operations
- Consider implementing retry logic for transient Azure service failures
- Monitor Azure resource usage to optimize costs and performance
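The retry recommendation above can be implemented generically. This is a minimal sketch of exponential backoff around any connector call; the attempt count and delays are illustrative defaults, not values the platform prescribes.

```python
import time

def with_retries(operation, attempts: int = 3, base_delay: float = 0.5):
    """Call operation(); on failure, retry with exponential backoff."""
    for attempt in range(attempts):
        try:
            return operation()
        except Exception:
            if attempt == attempts - 1:
                raise  # retries exhausted: surface the last error
            time.sleep(base_delay * (2 ** attempt))  # 0.5s, 1s, 2s, ...
```

In practice you would catch only transient error types (throttling, timeouts) rather than every `Exception`, so genuine configuration errors fail fast.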
Workflow Integration Tips
- Connect Azure connectors to transformer nodes to format data before processing
- Use filter nodes after connectors to remove unnecessary data
- Consider caching results for frequently used but rarely changed datasets
- Chain multiple Azure connectors to create comprehensive data pipelines
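The caching tip above can be sketched as a small time-based cache keyed by the query, placed in front of a connector call; the TTL value is illustrative, and how long results stay fresh depends on your data.

```python
import time

class TTLCache:
    """Cache connector results for rarely-changing datasets."""

    def __init__(self, ttl_seconds: float = 300.0):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (expiry timestamp, value)

    def get_or_load(self, key, loader):
        """Return the cached value for key, or call loader() and cache it."""
        now = time.monotonic()
        hit = self._store.get(key)
        if hit is not None and hit[0] > now:
            return hit[1]  # still fresh: skip the connector round trip
        value = loader()
        self._store[key] = (now + self.ttl, value)
        return value
```

Wrapping an expensive Cosmos DB or SQL query this way avoids paying RU or query costs on every workflow run for data that changes infrequently.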