Core Principles
1. Explainability
What it means: Every automation action is understandable, traceable, and justifiable.
How we implement it:
Bot Execution History & Audit Trails
- Every bot execution is recorded with timestamps, status, and detailed activity logs
- Capture of exact actions taken, errors encountered, and outcomes
- Searchable execution history for forensic analysis and debugging
- Complete visibility into which bot performed what action, when, and with what credentials
Example: When autobotAI enriches a security finding with threat intelligence, the execution log shows:
- The exact threat and vulnerability data sources queried (e.g., VirusTotal, Trivy, Crowdstrike, Azure Defender, AbuseIPDB)
- The decision criteria applied
- Why the bot classified the threat as high/medium/low risk
- What remediations were recommended and why
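The log entry described above could be assembled with a structure like this minimal sketch (the function and field names are illustrative assumptions, not autobotAI's actual log schema):

```python
from datetime import datetime, timezone

def build_enrichment_log(finding_id, sources, risk, rationale, remediations):
    """Assemble an illustrative execution-log entry for a threat-enrichment bot."""
    return {
        "finding_id": finding_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "sources_queried": sources,           # e.g. VirusTotal, AbuseIPDB
        "risk_classification": risk,          # high / medium / low
        "decision_rationale": rationale,      # why the bot chose this rating
        "recommended_remediations": remediations,
    }

entry = build_enrichment_log(
    finding_id="finding-1234",
    sources=["VirusTotal", "AbuseIPDB"],
    risk="high",
    rationale="IP flagged malicious by 2 of 2 sources",
    remediations=["block IP at firewall", "open incident ticket"],
)
```

Capturing the rationale and sources alongside the outcome is what makes the classification explainable after the fact.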
Visual Workflow Builder
- All automation logic is built visually, making workflows transparent
- Non-technical stakeholders can review what automation will do before execution
- Each step in the workflow is labeled with its purpose and expected output
- Conditional logic is clearly shown (if/then decision points)
AI Assistant and Agent Reasoning
- When autobotAI's AI Assistant generates a workflow, or when multi-agent workflows execute, the reasoning is captured
- Users see the suggested steps and can understand why each step was recommended
- Generated workflows, whether built from agent nodes, generative AI nodes, or deterministic action nodes, can be edited or rejected before deployment
- Context about the workflow's intended use case is documented in the bot's title, description, use case type, and business-operation severity
Contextual Summaries for Approvals
- When approvals are required, LLM-driven summaries explain the proposed action in plain language
- Technical details are abstracted for human decision-makers; when low-level details are needed, they can be retrieved from the workflow execution logs
- Key risks and implications are highlighted
- Evidence supporting the recommendation is provided
How to use this: Review bot execution logs to understand what happened. Check audit trails to prove compliance. Enable stakeholders to review workflow logic before deployment.
2. Accountability
What it means: Clear ownership and responsibility for every automated action and decision.
How we implement it:
Human-in-the-Loop Approvals
- Critical operations require human authorization before execution
- Approval workflows capture who approved what, when, and their justification
- Escalation paths ensure decisions are made by appropriate authority levels
- No automation can execute critical actions without documented human sign-off
Example in Security Operations:
- Incident response playbooks trigger for detected threats but pause for human approval
- Security analyst reviews threat context and chooses to remediate
- Action is logged with analyst name, approval timestamp, and decision rationale
- If the security analyst or remediation task owner rejects the recommendation, alternative steps are taken, such as adding details to the risk register
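The approve-or-reject flow above can be sketched as follows; the record shape and the risk-register fallback are assumptions for illustration, not autobotAI's implementation:

```python
from datetime import datetime, timezone

risk_register = []  # hypothetical store for accepted-but-unremediated risks

def record_decision(action, approver, approved, rationale):
    """Log who approved what, when, and why; rejected actions
    fall through to the risk register instead of executing."""
    decision = {
        "action": action,
        "approver": approver,
        "approved": approved,
        "rationale": rationale,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    if not approved:
        risk_register.append({"action": action, "reason": rationale})
    return decision

d = record_decision("quarantine host-42", "analyst.jane", False,
                    "host is a production DB; schedule maintenance window first")
```

The key property is that every path, approval or rejection, leaves a documented trace tied to a named human.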
Role-Based Access Control
- Workspace isolation ensures only authorized personnel can create, modify, or approve workflows
- Permission boundaries control who can execute automations with sensitive access
- Different roles have different approval thresholds (e.g., low-risk: auto-approve; high-risk: requires manager)
- Access logs show who modified workflows and when
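The tiered approval thresholds mentioned above might look like this sketch (the policy table values are assumptions, not autobotAI defaults):

```python
# Illustrative role-based approval thresholds:
# low-risk auto-approves, high-risk requires a manager.
APPROVAL_POLICY = {
    "low": None,          # auto-approve
    "medium": "analyst",
    "high": "manager",
}

def required_approver(risk_level):
    """Return the role that must approve an action, or None for auto-approval."""
    try:
        return APPROVAL_POLICY[risk_level]
    except KeyError:
        # Unknown risk levels fail closed: escalate to the strictest role
        return "manager"
```

Failing closed on unknown risk levels keeps the accountability chain intact even when a workflow emits an unexpected classification.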
Audit Trail for Compliance
- Every action taken by automation is logged with full context
- User who triggered the automation (manual or scheduled)
- Exact resources affected and changes made
- Outcome (success/failure) and any errors encountered
- Complete chain of custody for sensitive operations
Governance Structure
- Policy enforcement happens before automation execution
- Workflows should be reviewed and approved by Center of Excellence governance committees
- Business rules and security policies are enforced at each decision point
- Violations trigger escalation and halt the workflow
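Pre-execution policy enforcement, as described above, can be sketched as a check that runs before each step; the rule and step shapes are hypothetical:

```python
def evaluate_policies(step, policies):
    """Return the names of policies the step violates; a non-empty
    result halts the workflow and triggers escalation."""
    return [name for name, check in policies.items() if not check(step)]

# Hypothetical rule: destructive steps must carry an approval reference
policies = {
    "destructive-needs-approval":
        lambda s: (not s.get("destructive")) or s.get("approval_id") is not None,
}

print(evaluate_policies({"id": "s1", "destructive": False}, policies))  # passes
print(evaluate_policies({"id": "s2", "destructive": True}, policies))   # violation halts the run
```

Evaluating business rules at each decision point, rather than once at deployment, is what stops a drifting workflow mid-run.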
How to use this: Assign clear approvers for each automation type. Review approval logs for accountability. Use role-based access to control who can execute sensitive automations.
3. Reproducibility
What it means: Automation produces consistent, verifiable results every time it runs.
How we implement it:
Version-Controlled Bot Library
- Every bot workflow is versioned and tracked
- Previous versions can be restored with the bot import/export feature
- Templates ensure standardized approaches to common tasks
Standardized Workflow Templates in Library
- Common security and compliance tasks use proven, tested templates
- Parameters are documented and validated
- Customer's CoE team can define and design workflows following best practices and organizational policies
- New bots inherit standards from templates
Documented Execution Parameters
- Each workflow has defined inputs, environment variables, and configuration
- Expected behaviors under different conditions are documented
- Edge cases and error handling are specified upfront
- Assumptions about data quality and system state are recorded
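Declaring inputs and defaults up front, as described above, could be sketched like this (the parameter spec and names are illustrative assumptions):

```python
# Hypothetical parameter spec for a workflow: declaring inputs up front
# makes each run's configuration explicit, checkable, and reproducible.
PARAM_SPEC = {
    "region":      {"type": str,  "required": True},
    "dry_run":     {"type": bool, "required": False, "default": True},
    "max_targets": {"type": int,  "required": False, "default": 10},
}

def validate_params(params, spec):
    """Fill defaults and reject missing or mistyped inputs before execution."""
    resolved = {}
    for name, rule in spec.items():
        if name in params:
            if not isinstance(params[name], rule["type"]):
                raise TypeError(f"{name} must be {rule['type'].__name__}")
            resolved[name] = params[name]
        elif rule["required"]:
            raise ValueError(f"missing required parameter: {name}")
        else:
            resolved[name] = rule["default"]
    return resolved

resolved = validate_params({"region": "us-east-1"}, PARAM_SPEC)
```

Because every run resolves to a fully specified parameter set, two runs with the same inputs are directly comparable.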
Execution History & Replay Capability
- Past executions can be reviewed to validate consistency
- Parameters used in past runs are stored and comparable
- Failed executions can be debugged by replaying with same conditions
- Trend analysis on the dashboard shows whether bot behavior is consistent over time
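Comparing the stored parameters of past runs, as described above, can be as simple as this sketch (the snapshot shapes are hypothetical):

```python
def diff_runs(run_a, run_b):
    """Compare the stored parameters of two executions to spot drift."""
    keys = set(run_a) | set(run_b)
    return {k: (run_a.get(k), run_b.get(k))
            for k in keys if run_a.get(k) != run_b.get(k)}

# Hypothetical parameter snapshots stored from two past executions
monday  = {"region": "us-east-1", "dry_run": False, "max_targets": 10}
tuesday = {"region": "us-east-1", "dry_run": True,  "max_targets": 10}

print(diff_runs(monday, tuesday))  # only the changed parameter surfaces
```

An empty diff is evidence the runs were configured identically, which is the precondition for expecting identical behavior.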
How to use this: Use workflow templates and the AI Assistant for new automations. Document workflow parameters. Review the tools provided to agent nodes (MCP, API, full-code, CLI, or query-based nodes) and execution history to spot inconsistencies between the defined use case and the operational flow.
4. Fairness
What it means: Automation produces equitable, unbiased outcomes across different users, environments, and contexts.
How we implement it:
Multi-Source Data Validation
- Threat intelligence checks multiple sources (VirusTotal, AbuseIPDB, internal feeds like MISP) to avoid single-source bias
- Compliance rules are validated against multiple frameworks (SOC 2, GDPR, RBI, and other industry-specific standards)
- Decisions incorporate diverse data perspectives before conclusions are drawn
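The multi-source quorum idea above can be sketched as follows; real integrations would query each service's API, and the quorum value is an assumption:

```python
def aggregate_verdicts(verdicts, quorum=2):
    """Classify an indicator as malicious only if enough independent
    sources agree, avoiding reliance on any single feed."""
    hits = [src for src, malicious in verdicts.items() if malicious]
    return {"malicious": len(hits) >= quorum, "agreeing_sources": hits}

# Hypothetical cached lookup results for one indicator
verdicts = {"VirusTotal": True, "AbuseIPDB": True, "MISP": False}
result = aggregate_verdicts(verdicts)
```

Requiring agreement across feeds means one noisy or biased source cannot single-handedly condemn an entity.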
Contextual Decision-Making
- AI evaluators and agentic nodes analyze rich context before making recommendations
- Decisions are not based on single factors but on a comprehensive assessment designed by the customer
- Edge cases and special circumstances are considered, with verification and a human in the loop
- Similar situations receive similar treatment
Testing Across Diverse Environments
- Workflows are validated across different customer environments and use cases
- Bot performance is monitored for consistency across different data patterns
- Fairness metrics track whether automation treats all entities equally
- Results are disaggregated by customer type, asset type, and use case to spot disparities
Bias Detection in Agentic and GenAI powered Workflows
- AI Assistant generated workflows should be reviewed for potential bias before deployment
- System prompts for AI agent nodes, GenAI nodes, and approval/notification nodes can be tuned by the customer to prevent bias
- Historical outcomes can be analyzed by workflow to spot patterns of unfair treatment
- Sensitive decision points are marked for additional human review
- When users override AI recommendations, feedback loops capture the override in agent session memory and long-term memory
Human Review of AI Recommendations
- AI-generated recommendations are not auto-approved without human oversight
- Critical decisions require explicit human confirmation
- Users can override AI suggestions with documented reasoning
- Override patterns are analyzed to improve AI fairness
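Analyzing override patterns, as described above, might look like this sketch over a hypothetical decision log:

```python
from collections import Counter

def override_rates(decisions):
    """Compute per-workflow override rates from logged human decisions."""
    totals, overrides = Counter(), Counter()
    for d in decisions:
        totals[d["workflow"]] += 1
        if d["overridden"]:
            overrides[d["workflow"]] += 1
    return {wf: overrides[wf] / totals[wf] for wf in totals}

# Hypothetical decision-log entries
log = [
    {"workflow": "phishing-triage", "overridden": False},
    {"workflow": "phishing-triage", "overridden": True},
    {"workflow": "iam-cleanup", "overridden": False},
]
rates = override_rates(log)  # a high rate flags a workflow for fairness review
```

A workflow whose recommendations are frequently overridden is a candidate for prompt tuning or a mandatory human review step.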
How to use this: Enable multi-source validation for critical decisions. Review AI-generated workflows before deployment. Monitor execution outcomes for bias patterns. Document and analyze human overrides.
5. Human-Centric
What it means: AI augments and empowers human decision-makers rather than replacing the security operation owner's judgment.
How we implement it:
Mandatory Approval Nodes
- Critical security and compliance decisions require human approval
- Approvers receive rich context to make informed decisions
- Approval is mandatory—not a rubber stamp, but an active decision point
- Easy rejection paths ensure humans can override automation, and the workflow can update the risk register based on the provided reason
AI Suggestions Require Human Validation
- autobotAI proposes actions, humans decide whether to execute
- Workflows can be designed to ensure the AI cannot autonomously make high-impact decisions
- Humans see what the AI recommends and why, then decide
- If a human disagrees with the AI, the human decision is documented and details are captured in session memory and long-term memory
Override Capabilities
- At every decision point in a workflow, humans can pause and review.
- Approved overrides are logged and explained.
- Humans can manually execute different actions than recommended.
- System learns from overrides to improve future recommendations.
Context-Aware Notifications
- Security operation owners and analysts receive alerts with pre-digested context
- Relevant information is highlighted to support quick decision-making
- Historical context is available for informed decisions
- Notification fatigue is minimized by intelligent filtering
Analyst Empowerment
- Workflows are designed to reduce analyst cognitive load
- Routine investigations are automated; complex decisions stay with the remediation and response owner
- Analysts spend time on high-value analysis, not data gathering and execution of remediation steps.
- Customizable workflows let analysts define their preferred approach.
How to use this: Design workflows with approval gates for critical actions. Ensure AI recommendations include reasoning. Allow easy human overrides, such as a review in the code repo or an approval message in Slack/MS Teams. Monitor how often humans override the AI to improve recommendations, maintaining a risk register.
6. Security
What it means: Protecting customer data, workflow integrity, and automation security throughout the platform.
How we implement it:
Zero-Trust Architecture
- Workspace isolation ensures customer data is never shared.
- Each customer's workflows and data are siloed.
- Cross-workspace access is prevented at infrastructure level.
- Service-to-service authentication is enforced using serverless identities.
Data Privacy Framework with self-hosted autobotAI workspace
- Customer data remains in customer workspaces.
- autobotAI does not access customer data except as authorized.
- Data is encrypted in transit and at rest.
- Customers control data retention and deletion.
Encryption Standards
- All data in transit uses TLS 1.2 or higher.
- Data at rest is encrypted using industry-standard algorithms.
- Encryption keys are managed securely and rotated regularly using the provider's default key-rotation mechanism.
- Sensitive credentials are tokenized and encrypted.
Secure AI Integration
- Third-party LLM calls (Amazon Bedrock, OpenAI) use secure, encrypted channels.
- Prompts and responses do not contain customer PII unless explicitly needed; this can be additionally controlled through the LLM API provider's guardrails.
- Customer data is not used to train third-party models.
- API calls are authenticated and authorized using each AI provider's authentication mechanism.
Permission-Based Execution
- Bots / Agentic workflows execute only with the minimum permissions required
- Credentials are scoped to specific resources and time periods.
- Bot / Agentic workflow actions are restricted by role and policy.
- Failed authorization attempts are logged.
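The scoped, time-limited credentials described above could be checked like this sketch (the credential shape and resource names are illustrative assumptions):

```python
from datetime import datetime, timedelta, timezone

def is_action_allowed(credential, resource, action, now=None):
    """Allow an action only if the credential's scope covers the resource
    and action, and the credential has not expired (least privilege)."""
    now = now or datetime.now(timezone.utc)
    return (resource in credential["resources"]
            and action in credential["actions"]
            and now < credential["expires_at"])

# Hypothetical scoped credential issued to a remediation bot
cred = {
    "resources": ["s3://audit-logs"],
    "actions": ["read"],
    "expires_at": datetime.now(timezone.utc) + timedelta(hours=1),
}
print(is_action_allowed(cred, "s3://audit-logs", "read"))    # True
print(is_action_allowed(cred, "s3://audit-logs", "delete"))  # False: out of scope
```

Denied checks would additionally be logged, so failed authorization attempts leave a trace for review.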
Audit Logging for Security
- All security-relevant actions are logged with context.
- Login attempts, permission changes, and sensitive actions are recorded.
- Logs are immutable and stored securely.
- Log retention meets compliance requirements.
How to use this: Deploy autobotAI in isolated workspaces. Use role-based access controls. Enable encryption for all data. Monitor security audit logs regularly.
7. Compliance
What it means: Ensuring automation aligns with regulatory requirements and customer-specific compliance needs.
How we implement it:
Compliance Insights Dashboard
- Real-time visibility into which resources are compliant
- Violations in IT resources (cloud and container environments) are detected and flagged automatically
- Remediation recommendations are generated with evidence
- Compliance status trends are tracked over time
Automated Compliance Checks
- Workflows automatically scan for common compliance issues (e.g., unencrypted data, overprivileged access)
- Policies are encoded into automation rules
- Checks run continuously without manual intervention
- Results feed into compliance reporting
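A single encoded check of the kind listed above might look like this sketch; the resource shape is illustrative, and a real check would read cloud provider APIs:

```python
def check_encryption_at_rest(resources):
    """Flag resources whose encryption-at-rest setting is disabled
    or missing; the IDs returned feed into compliance reporting."""
    return [r["id"] for r in resources if not r.get("encrypted", False)]

# Hypothetical resource inventory
inventory = [
    {"id": "bucket-logs", "encrypted": True},
    {"id": "bucket-exports", "encrypted": False},
    {"id": "volume-db"},  # missing flag is treated as non-compliant
]
print(check_encryption_at_rest(inventory))  # ['bucket-exports', 'volume-db']
```

Treating an absent flag as a violation fails closed, which matches how continuous compliance checks should behave.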
Regulatory Framework Support
- autobotAI supports major compliance frameworks (SOC 2, HIPAA, GDPR, RBI, etc.)
- Workflows can be configured to enforce specific regulatory requirements
- Evidence collection is automated to support audit readiness
- Compliance documentation is generated automatically
Evidence Collection for Audits
- Compliance checks generate detailed evidence
- Remediation actions are fully documented
- Audit trails show who authorized each compliance action
- Reports can be exported for external auditors
AI Guardrails for Compliance
- Content filtering and advanced reasoning provided by the LLM provider can prevent AI-generated workflows from violating compliance rules.
- Safety checks with agent system prompts ensure recommendations don't breach policies.
- Sensitive data handling is validated before execution.
- autobotAI can integrate with partner compliance tools such as Wiz, CrowdStrike, and Trend Micro to automate violation remediation.
Customer-Specific Requirements
- Workflows can be customized to match customer compliance needs
- Policy definitions are flexible and customer-configurable
- Compliance rules can be added by integrating additional external posture management solutions like CSPM, KSPM, AISPM etc.
- Override processes for exceptions are documented and controlled
Data Retention & Deletion
- Customer controls data retention policies.
- Optional automated deletion enforces retention limits.
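Automated retention enforcement could be sketched as follows; the record shape and window are assumptions for illustration:

```python
from datetime import datetime, timedelta, timezone

def apply_retention(records, retention_days, now=None):
    """Split records into kept and deleted sets based on a
    customer-configured retention window."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=retention_days)
    kept = [r for r in records if r["created_at"] >= cutoff]
    deleted = [r for r in records if r["created_at"] < cutoff]
    return kept, deleted

now = datetime.now(timezone.utc)
records = [
    {"id": 1, "created_at": now - timedelta(days=400)},
    {"id": 2, "created_at": now - timedelta(days=10)},
]
kept, deleted = apply_retention(records, retention_days=365, now=now)
```

Because the customer sets `retention_days`, the same mechanism serves both long-retention compliance regimes and aggressive data-minimization policies.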
How to use this: Configure compliance policies relevant to your industry. Enable automated compliance checks. Review compliance insights regularly. Maintain audit-ready documentation.