
Mastering Drupal Site Audits with AI: Using Audit Export for Automated Configuration Documentation and QA
Configuration management in Drupal has always been powerful, but documenting and auditing that configuration? That's traditionally been a manual slog. You'd export configs, open spreadsheets, cross-reference module versions, and hope you didn't miss anything important before that big migration or security review.
The Audit Export module changes this equation. Released in early 2024 and significantly updated in late 2025, it provides structured, machine-readable audit data that you can feed directly into AI systems for analysis and documentation. Combined with Drupal's Tool API, you can build workflows where AI agents query your site's configuration, identify issues, and generate documentation, without anyone manually exporting CSV files.
This guide walks through setting up Audit Export, connecting it to AI systems via Tool API, and building automated QA workflows that actually work in production environments.
Prerequisites
Before diving in, make sure you have:
- Drupal 10.3 or Drupal 11 installed and running
- Composer for dependency management
- Drush 12 installed and configured
- Command-line access to your Drupal environment
- Basic familiarity with Drupal's configuration management system
- An AI service account (OpenAI API, Claude, or similar) for the AI integration steps
You'll also want a development or staging environment to test these workflows before running them on production. Audits can be resource-intensive, and you don't want your first test run happening during peak traffic.
Step 1: Install and Configure Audit Export
Start by adding the module to your project:
composer require 'drupal/audit_export:^1.0@beta'
drush en audit_export -y
After installation, navigate to /admin/config/system/audit-export to configure which audits to enable. The module provides audits for:
- Entity types and bundles
- Fields and field configurations
- User roles and permissions
- Menus and menu links
- Taxonomy vocabularies and terms
- Views configurations
- Blocks and block placements
- Cron jobs and services
Select the audits relevant to your documentation needs. For a full site audit before migration, enable everything. For ongoing security monitoring, focus on roles, permissions, and services.
Configure the cron settings if you want automated audits. The module can run audits on a schedule and store results for later retrieval, useful for tracking configuration drift over time.
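Tracking drift means comparing two stored audit runs. As a minimal sketch (plain Python, independent of the module; it assumes each export has been flattened into a simple key-to-value mapping, which is an assumption about the report format), a diff between two snapshots might look like this:

```python
import json

def diff_audits(old: dict, new: dict) -> dict:
    """Compare two flat audit snapshots and report configuration drift."""
    added = {k: new[k] for k in new.keys() - old.keys()}
    removed = {k: old[k] for k in old.keys() - new.keys()}
    changed = {
        k: {"old": old[k], "new": new[k]}
        for k in old.keys() & new.keys()
        if old[k] != new[k]
    }
    return {"added": added, "removed": removed, "changed": changed}

# Two hypothetical snapshots of a roles audit, taken a week apart:
old = {"editor": ["edit any article content"],
       "viewer": ["access content"]}
new = {"editor": ["edit any article content", "delete any article content"],
       "admin": ["administer site configuration"]}

print(json.dumps(diff_audits(old, new), indent=2))
```

Scheduled against successive cron-stored reports, a check like this turns the audit history into an early-warning signal for unexpected configuration changes.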
Step 2: Enable Tool API Integration for AI Access
The real power comes from the Tool API integration added in the November 2025 release. This exposes every audit as a typed tool that external systems can invoke programmatically:
drush en tool_api -y
drush en audit_export_tool -y
With these enabled, your audits become API endpoints. AI agents, CI/CD pipelines, and monitoring tools can now:
- List all available audits
- Run specific audits on demand
- Retrieve results with pagination
- Export data without touching the admin UI
If you're planning to connect MCP-compatible AI clients (like some LLM-powered assistants), also enable the MCP module:
drush en mcp -y
This makes your audit tools discoverable to AI systems that support the Model Context Protocol, letting them query your site's configuration as part of their reasoning process.
Step 3: Run Your First Audit
Test the setup with Drush before building automation:
# Run all configured audits
drush audit-export:run

# Check what reports are available
drush audit-export:list

# Export results to CSV
drush audit-export:export --format=csv --destination=/tmp/audit-reports
Examine the output. You'll see structured data covering your configured audit types (entity inventories, permission matrices, view configurations, and more). This is the raw material your AI systems will analyze.
For a quick verification, look at the roles audit output. It should list every role on your site with its associated permissions. If you see data that looks accurate, you're ready to connect AI analysis.
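As a rough illustration (the column names here are hypothetical, not the module's exact schema), the exported roles data might look like:

```csv
role,label,permission
administrator,Administrator,"administer site configuration"
editor,Editor,"edit any article content"
anonymous,Anonymous user,"access content"
```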
Step 4: Connect AI Analysis via Tool API
Here's where automation gets interesting. Rather than manually exporting CSVs and uploading them somewhere, you can have AI systems call the Tool API directly.
The Tool API exposes endpoints like:
- audit_export.list_audits - Returns available audit types
- audit_export.run_audit - Executes a specific audit
- audit_export.get_report - Retrieves stored results
An AI agent can call these tools, receive JSON responses, and perform analysis without human intervention.
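As a concrete illustration, a response from the listing endpoint might look something like this (the field names are hypothetical; consult the module's documentation for the real schema):

```json
{
  "tools": [
    {"name": "audit_export.run_audit", "description": "Executes a specific audit"},
    {"name": "audit_export.get_report", "description": "Retrieves stored results"}
  ]
}
```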
For a practical implementation, create a simple integration script that bridges Audit Export and your AI service:
<?php

namespace Drupal\my_module;

use GuzzleHttp\ClientInterface;

/**
 * Bridges Audit Export results to an external AI service.
 *
 * Namespace, class name, and the audit runner's type are placeholders;
 * adjust them to match your module and the services you inject.
 */
class AuditAnalyzer {

  protected $auditRunner;
  protected ClientInterface $httpClient;
  protected string $aiEndpoint;
  protected string $aiApiKey;

  public function __construct($audit_runner, ClientInterface $http_client, string $ai_endpoint, string $ai_api_key) {
    $this->auditRunner = $audit_runner;
    $this->httpClient = $http_client;
    $this->aiEndpoint = $ai_endpoint;
    $this->aiApiKey = $ai_api_key;
  }

  /**
   * Run audit and send to AI for analysis.
   */
  public function analyzeAudit(string $audit_type): array {
    // Run the audit.
    $results = $this->auditRunner->run($audit_type);

    // Prepare prompt for AI analysis.
    $prompt = $this->buildAnalysisPrompt($audit_type, $results);

    // Send to AI service.
    $response = $this->httpClient->post($this->aiEndpoint, [
      'headers' => [
        'Authorization' => 'Bearer ' . $this->aiApiKey,
        'Content-Type' => 'application/json',
      ],
      'json' => [
        'model' => 'gpt-4',
        'messages' => [
          [
            'role' => 'system',
            'content' => 'You are a Drupal security and configuration analyst. Analyze the provided audit data and identify issues, risks, and recommendations.',
          ],
          [
            'role' => 'user',
            'content' => $prompt,
          ],
        ],
        'temperature' => 0.3,
      ],
    ]);

    return json_decode((string) $response->getBody(), TRUE);
  }

  protected function buildAnalysisPrompt(string $type, array $results): string {
    $json = json_encode($results, JSON_PRETTY_PRINT);
    return "Analyze this Drupal {$type} audit data. Identify:\n" .
      "1. Security concerns (especially permission issues)\n" .
      "2. Configuration anomalies\n" .
      "3. Maintenance recommendations\n" .
      "4. Documentation notes\n\n" .
      "Audit data:\n{$json}";
  }

}

Register this service in your module's services.yml and inject the appropriate dependencies. The key insight here is that you're not building complex AI logic in Drupal; you're structuring the data and prompts so that external AI services can do useful work.
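A registration might look like the following sketch. The module name, class name, and especially the audit runner service ID are placeholders, not names defined by Audit Export; substitute whatever your module actually uses:

```yaml
# my_module.services.yml (all names here are placeholders)
services:
  my_module.audit_analyzer:
    class: Drupal\my_module\AuditAnalyzer
    arguments:
      - '@audit_export.runner'   # assumed audit runner service ID
      - '@http_client'
      - '%my_module.ai_endpoint%'
      - '%my_module.ai_api_key%'
```

Injecting the endpoint and key as container parameters (rather than hard-coding them) keeps credentials out of the class and lets each environment override them.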
Step 5: Build Automated QA Workflows
With the pieces in place, create workflows that run automatically and flag issues requiring attention.
A typical CI/CD integration looks like this:
# .gitlab-ci.yml example
audit_check:
  stage: test
  script:
    - drush audit-export:run
    - drush audit-export:export --format=csv --destination=./audit-output
    - php scripts/analyze-audit.php ./audit-output
  artifacts:
    paths:
      - audit-output/
      - audit-analysis.json
    expire_in: 30 days
  rules:
    - if: $CI_PIPELINE_SOURCE == "schedule"
    - if: $CI_COMMIT_BRANCH == "main"

The analyze-audit.php script sends the exported data to your AI service and parses the response for critical issues. If the AI identifies high-risk findings (like anonymous users having admin permissions), the script exits with a non-zero code and fails the pipeline.
For ongoing monitoring rather than deployment gates, configure Audit Export's cron automation combined with its remote posting feature. Point the remote endpoint at a webhook that triggers your AI analysis pipeline. This gives you continuous configuration monitoring without manual intervention.
Step 6: Generate Documentation from Audit Data
Beyond QA, audit data works well for automated documentation. We've found that structuring prompts for specific documentation outputs produces much better results than asking for general summaries.
Create documentation templates that your AI system fills in:
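As one illustrative sketch (the slot names are examples to adapt, not a format the module prescribes), a template for content type documentation might look like this, with bracketed slots for the AI to fill from the audit data:

```text
## Content type: [machine_name]

**Purpose:** [one-sentence summary inferred from fields and usage]

**Fields:**
| Field | Type | Required | Notes |
|-------|------|----------|-------|
| [field_name] | [field_type] | [yes/no] | [inferred purpose] |

**Who can create/edit:** [roles with the relevant permissions]
**Related views:** [views that list or display this type]
```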
This approach produces surprisingly coherent documentation. The AI sees the raw configuration data and translates it into human-readable descriptions. Our experience shows that you'll want to review the output for accuracy—AI occasionally misinterprets field purposes or role intentions—but it handles 80% of the documentation grunt work.

Common Mistakes to Avoid
Running full audits on production during peak hours. Audit operations query the database extensively. Schedule them during low-traffic periods or run against a database replica.

Sending sensitive data to third-party AI services without review. Your permission audit contains your entire security model. Your user audit might include email addresses. Scrub sensitive fields before sending data externally, or use a self-hosted AI model for compliance-sensitive sites.

Treating AI output as authoritative. AI analysis surfaces patterns and anomalies, but it doesn't understand your business context. An AI might flag a custom permission as "unusual" when it's actually intentional. Always have a human review AI-generated findings before taking action.

Skipping the schema understanding step. Audit Export outputs follow specific structures. Spend time understanding the data format before building integrations. Parsing errors and incorrect assumptions cause subtle bugs that surface later.

Not versioning your prompts. As you refine the prompts you send to AI services, keep them in version control alongside your code. Prompt changes affect output quality, and you'll want to track what changed when documentation suddenly looks different.

Testing and Verification
After setting up your workflow, verify each component.

Test audit generation:
drush audit-export:run --audit=entity_types
drush audit-export:list

Confirm that reports are created and contain expected data.

Test Tool API access:
curl -X POST https://your-site.com/api/tool/audit_export.list_audits \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -H "Content-Type: application/json"

You should receive a JSON list of available audits.

Test AI integration:
Run your analysis script against a known configuration and verify the AI response makes sense. Compare against manual review to calibrate trust levels.

Test CI/CD integration:
Trigger a manual pipeline run and check that artifacts are created and analysis completes without errors.

Test documentation generation:
Generate docs for a section of your site you know well. Read through the output and note any inaccuracies. Use these to refine your prompts.

Where This Fits in the Broader Ecosystem
Audit Export isn't the only AI-adjacent module worth knowing. Several others complement this workflow:

AI SEO Analyzer generates stored SEO reports for nodes, useful when your documentation needs to cover content quality alongside configuration. Combining its node-level analysis with Audit Export's structural data gives a complete picture.

Analyze AI Content Security Audit scans content against security guidelines. While Audit Export covers configuration, this module handles the content layer—useful for sites where user-generated content needs monitoring.

Config Auto Export detects configuration changes and can POST them to external services. Pair it with Audit Export to capture both the current state (audits) and changes over time (auto-export events).

Teams we work with report that the combination of these tools—Audit Export for structure, AI SEO for content quality, and Config Auto Export for change tracking—provides solid coverage for most documentation and QA needs.

Summary
Automated site audits with AI analysis represent a significant improvement over manual configuration reviews. By combining Audit Export's structured data with Tool API integration and AI services, you can build workflows that continuously monitor configuration health, flag issues automatically, and generate documentation that stays current.

The technical setup requires some investment—installing modules, configuring Tool API access, writing integration code, and refining prompts—but the payoff is substantial. Configuration drift gets caught early. Documentation stays accurate. Security reviews happen continuously rather than quarterly.

We recommend starting with a single audit type (roles and permissions make a good first choice given their security implications), building a complete workflow around it, and expanding from there. This incremental approach lets you refine your prompts and validation logic before scaling to full-site audits.

If you're planning to implement AI-powered configuration auditing for your Drupal sites, the setup process involves decisions about data flow, AI provider selection, and integration architecture that affect long-term maintainability. We can help you evaluate these options and design a workflow that fits your team's security requirements and operational patterns.
