Anti-Patterns: What NOT to Do in Responsible AI
While implementing responsible AI, avoid these common anti-patterns that undermine transparency, accountability, and fairness:
Anti-Pattern 1: AI Without Humans in the Loop
What it is: Automations that make critical decisions without human approval.
Why it's wrong:
- No human judgment on edge cases
- No accountability when things go wrong
- Cannot adapt to special circumstances
- Violates human-centric principle
Example of anti-pattern:
Bot automatically:
- Disables user accounts
- Deletes resources
- Modifies security policies
- Changes access levels

All without review or approval from the resource owner or security team.
How to do it RIGHT:
- ✓ Add approval nodes for critical actions
- ✓ Provide rich context for human decision-makers
- ✓ Allow easy override/customization by approvers
- ✓ Log who approved what and why (a minimal sketch follows this list)
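A minimal sketch of an approval node with an audit trail. The names here (`CriticalAction`, `request_approval`, the in-memory `AUDIT_LOG`) are illustrative assumptions, not autobotAI's actual API:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical audit log; a real deployment would persist this to durable storage.
AUDIT_LOG: list[dict] = []

@dataclass
class CriticalAction:
    action: str            # e.g. "disable_user_account"
    target: str            # e.g. "user:jane.doe"
    context: dict = field(default_factory=dict)  # rich context for the approver

def request_approval(action: CriticalAction, approver: str, approved: bool, reason: str) -> bool:
    """Record who approved (or rejected) what, and why, before anything executes."""
    AUDIT_LOG.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "action": action.action,
        "target": action.target,
        "context": action.context,
        "approver": approver,
        "approved": approved,
        "reason": reason,
    })
    return approved

def disable_account(target: str) -> None:
    print(f"Disabling {target}")  # placeholder for the real side effect

action = CriticalAction(
    action="disable_user_account",
    target="user:jane.doe",
    context={"alert": "impossible travel", "risk": "medium"},
)

# The bot only executes after an explicit human decision has been logged.
if request_approval(action, approver="sec-oncall@example.com",
                    approved=True, reason="Confirmed with user: account compromised"):
    disable_account(action.target)
```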
Anti-Pattern 2: No Approval for AI Recommendations
What it is: AI recommendations that auto-execute without human validation.
Why it's wrong:
- No control over AI decisions
- AI hallucinations or errors propagate
- Cannot verify appropriateness for specific context
- Violates human oversight principle
Example of anti-pattern:
AI suggests remediation steps that auto-execute:
- Disables the wrong user account and grants highly sensitive permissions without any approval process
- Patches code without developer review of the pull request
- Applies an overly broad security group without risk-register exception handling and approvals
- As part of incident response, blocks egress traffic without checking whether the IP or domain belongs to a valid customer or partner
- Rotates an encryption key or access key without updating the application, causing an application outage or data loss
- Violates the business risk appetite or compliance rules, potentially causing business disruption
How to do it RIGHT:
- ✓ AI suggests, resource owner and/or security team approve
- ✓ Humans review before execution
- ✓ Approval requests include full risk details, plus execution logs and evidence from past runs
- ✓ Humans can customize or reject recommendations (see the sketch after this list)
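One way to wire this up, sketched with hypothetical names (a `Recommendation` that carries risk details and prior-run evidence, and a `review` step where a human can approve, customize, or reject before anything runs; none of this is a real autobotAI interface):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Recommendation:
    summary: str                       # what the AI proposes
    steps: list[str]                   # concrete remediation steps
    risk_details: dict                 # blast radius, affected systems, rollback plan
    past_evidence: list[str] = field(default_factory=list)  # logs from similar past runs

def review(rec: Recommendation, decision: str,
           edited_steps: Optional[list[str]] = None) -> Optional[list[str]]:
    """Human decision point: 'approve', 'customize', or 'reject'. Returns steps to run, or None."""
    if decision == "approve":
        return rec.steps
    if decision == "customize" and edited_steps is not None:
        return edited_steps
    return None  # rejected: nothing executes

rec = Recommendation(
    summary="Block egress to suspicious domain",
    steps=["add firewall rule: deny egress to bad.example.net"],
    risk_details={"blast_radius": "one VPC", "rollback": "remove rule"},
    past_evidence=["2024-11-02: same rule applied, no customer impact"],
)

# The analyst narrows the rule instead of approving it as-is.
to_run = review(rec, "customize",
                edited_steps=["add firewall rule: deny egress to bad.example.net from prod subnet only"])
if to_run:
    for step in to_run:
        print("EXECUTE:", step)   # placeholder for the real executor
```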
Anti-Pattern 3: Ignoring Edge Cases
What it is: Automations that fail or behave unexpectedly on edge cases.
Why it's wrong:
- Real-world data contains exceptions
- Edge cases may be the most sensitive/critical
- Cascading failures from unhandled exceptions
- Violates reliability principle
Example of anti-pattern:
Bot assumes:
- All users have standard resource permissions, and it overrides break-glass user permissions (breaking incident recovery)
- All compliance rules apply everywhere (ignoring environment types: production, development, staging, sandbox, testing, etc.)
- Similar resources behave identically (failing on variants)
- The agentic workflow can recommend actions without considering the business and affected environment's risk appetite, or without highlighting its assumptions and other validation parameters
How to do it RIGHT:
- ✓ Test automation with edge cases before deployment
- ✓ Document assumptions and limitations
- ✓ Handle exceptions gracefully and use agent memory to improve how the process handles them over time
- ✓ Escalate unusual cases for human review (a sketch follows this list)
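A sketch of graceful exception handling with escalation. The helpers and account/environment names (`escalate_to_human`, `BREAKGLASS_ACCOUNTS`, and so on) are assumptions made for illustration:

```python
BREAKGLASS_ACCOUNTS = {"breakglass-admin"}          # assumed list of emergency accounts
KNOWN_ENVIRONMENTS = {"production", "development", "staging", "sandbox"}

def escalate_to_human(user: str, env: str, reason: str) -> None:
    print(f"ESCALATE: {user} in {env}: {reason}")   # placeholder for a ticket or notification

def revoke_standing_access(user: str, env: str) -> None:
    # Edge cases the bot must not "assume away":
    if user in BREAKGLASS_ACCOUNTS:
        escalate_to_human(user, env, "break-glass account: never auto-revoke")
        return
    if env not in KNOWN_ENVIRONMENTS:
        escalate_to_human(user, env, "unknown environment: assumptions do not hold")
        return
    if env != "production":
        escalate_to_human(user, env, "non-production rules differ: needs human review")
        return
    print(f"Revoking standing access for {user} in {env}")  # the routine, well-tested path

revoke_standing_access("breakglass-admin", "production")   # escalated, not revoked
revoke_standing_access("jane.doe", "sandbox")               # escalated for review
revoke_standing_access("jane.doe", "production")            # routine path
```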
Anti-Pattern 4: Hidden Bias in Data Selection
What it is: Out-of-context agent memory data that systematically over- or under-represents certain groups.
Why it's wrong:
- The agent learns biased patterns from its long-term memory
- Recommendations systematically unfair to certain groups
- Violates fairness principle
- Creates compliance risk, or automatically grants exceptions that expand the attack surface
Example of anti-pattern:
Memory bias in detection:
- Detection focuses only on successful, critical-severity attacks and misses subtle ones, because a team member suppressed the same medium-severity alert the last four times
- IoCs from only one geography are flagged as suspicious, because an analyst granted exceptions for other regions in the past, so memory never flags risk from those regions
- A single automation workflow tries to handle many threat vectors, and incident-response memory from one threat vector bleeds into decisions about another
How to do it RIGHT:
- ✓ Use diverse, representative data for agent memory wherever critical decisions are made
- ✓ Create a multi-agent workflow per threat vector, each with a defined process flow, to limit memory bias across threat-vector scenarios
- ✓ Monitor for disparities in recommendations (see the sketch after this list)
- ✓ Document known limitations and biases.
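A minimal way to monitor for disparities is to compare flag rates across a dimension such as region, as in this sketch (the events, regions, and threshold are made up for illustration):

```python
from collections import defaultdict

def flag_rates(events: list[dict]) -> dict[str, float]:
    """Fraction of events flagged as suspicious, per region."""
    totals, flagged = defaultdict(int), defaultdict(int)
    for e in events:
        totals[e["region"]] += 1
        flagged[e["region"]] += int(e["flagged"])
    return {region: flagged[region] / totals[region] for region in totals}

def disparity_alert(rates: dict[str, float], max_ratio: float = 3.0) -> bool:
    """Alert when one region is flagged far more often than another."""
    lo, hi = min(rates.values()), max(rates.values())
    return (lo > 0 and hi / lo > max_ratio) or (lo == 0 and hi > 0)

events = [
    {"region": "EU", "flagged": True}, {"region": "EU", "flagged": True},
    {"region": "EU", "flagged": False},
    {"region": "APAC", "flagged": False}, {"region": "APAC", "flagged": False},
    {"region": "APAC", "flagged": False},
]

rates = flag_rates(events)   # EU flagged ~67% of the time, APAC never
if disparity_alert(rates):
    print("Review agent memory: one region is never flagged, another almost always is")
```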
Anti-Pattern 5: Over-Trusting AI Confidence Scores
What it is: Treating AI confidence scores as absolute truth without validation.
Why it's wrong:
- AI models can be confident while wrong
- Confidence scores reflect the data collected from logs and the system prompt provided; they may not match real-world accuracy in a specific, time-sensitive situation
- Over-confidence in wrong decisions
- Violates accountability principle
Example of anti-pattern:
AI says "This is definitely malicious" with 95% confidence:
- But it was never tested against similar false positives
- It does not surface the assumptions (or assumption scores) it relied on to reach that confidence
- No validation of confidence calibration
- No human review for high-impact decisions
How to do it RIGHT:
- ✓ Validate confidence scores against real outcomes (see the calibration sketch after this list)
- ✓ Set thresholds that require human review
- ✓ Monitor for overconfidence patterns
- ✓ Update confidence scoring if it drifts
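A simple calibration check compares stated confidence against observed outcomes. This sketch assumes you have historical (confidence, was_correct) pairs available; the threshold and tolerance values are illustrative:

```python
def calibration_check(history: list[tuple[float, bool]],
                      threshold: float = 0.9, tolerance: float = 0.1) -> bool:
    """True if high-confidence verdicts are roughly as accurate as they claim to be."""
    high = [(conf, ok) for conf, ok in history if conf >= threshold]
    if not high:
        return True  # nothing to judge yet
    stated = sum(conf for conf, _ in high) / len(high)      # average claimed confidence
    observed = sum(ok for _, ok in high) / len(high)         # fraction actually correct
    return stated - observed <= tolerance

# Historical verdicts: (model confidence, whether the verdict later proved correct)
history = [(0.95, True), (0.95, False), (0.96, False), (0.97, True), (0.95, False)]

if not calibration_check(history):
    print("Overconfident: require human review above this threshold and recalibrate scoring")
```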
Anti-Pattern 6: Forgetting to Monitor Post-Deployment
What it is: Deploying automation and then not tracking whether it continues to work correctly.
Why it's wrong:
- Bias/drift goes undetected
- Compliance violations go unnoticed
- Violates monitoring principle
Example of anti-pattern:
After deployment:
- The agent granted temporary database access to a user who claimed they only needed read-only access to inspect table structures, but never monitored the user's activity logs afterward; the user then ran queries against PII and PCI data, indicating a possible insider threat
- No monitoring of bot accuracy
- No tracking of approval rates
- No alerts for unusual behavior
- No periodic optimization
How to do it RIGHT:
- ✓ Monitor key metrics continuously (a sketch follows this list)
- ✓ Alert on anomalies (configure a separate bot that runs after response and remediation activities to monitor behavior)
- ✓ Periodic reviews of bot performance
- ✓ Retraining/updates based on production data
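A sketch of continuous post-deployment monitoring over a sliding window. The metric names, window size, and thresholds are made-up assumptions for illustration:

```python
from collections import deque

class BotMonitor:
    """Tracks recent outcomes and raises alerts when bot behavior drifts."""

    def __init__(self, window: int = 100, min_approval_rate: float = 0.7,
                 min_accuracy: float = 0.85):
        self.approvals = deque(maxlen=window)   # 1 = approved, 0 = rejected by a human
        self.verdicts = deque(maxlen=window)    # 1 = correct, 0 = later found wrong
        self.min_approval_rate = min_approval_rate
        self.min_accuracy = min_accuracy

    def record(self, approved: bool, correct: bool) -> None:
        self.approvals.append(int(approved))
        self.verdicts.append(int(correct))

    def alerts(self) -> list[str]:
        out = []
        if self.approvals and sum(self.approvals) / len(self.approvals) < self.min_approval_rate:
            out.append("Approval rate dropped: humans are rejecting the bot's suggestions")
        if self.verdicts and sum(self.verdicts) / len(self.verdicts) < self.min_accuracy:
            out.append("Accuracy dropped: retrain or retune before further rollout")
        return out

monitor = BotMonitor(window=10)
for _ in range(6):
    monitor.record(approved=True, correct=True)
for _ in range(4):
    monitor.record(approved=False, correct=False)
print(monitor.alerts())   # both alerts fire for this window
```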
Summary: Responsible AI Anti-Patterns
| Anti-Pattern | Principle Violated | Impact | Fix |
|---|---|---|---|
| No human approval | Human-Centric | Loss of control | Add approval nodes |
| Auto-executing recommendations | Human Oversight | Wrong decisions propagate | Require human approval |
| Unhandled edge cases | Reliability | Cascading failures | Test edge cases |
| Memory bias | Fairness | Systematic bias | Use-case-centric workflows |
| Over-trusting confidence | Accountability | False confidence | Validate scores |
| No post-deployment monitoring | Monitoring | Silent failures | Monitor continuously |
Avoiding Anti-Patterns: Responsible AI Checklist
Before deploying any automation, verify:
- Explainability: Can a human explain why each decision was made?
- Auditability: Is there a complete log of all actions and decisions?
- Human Oversight: Are critical decisions reviewed by humans?
- Fairness: Is automation tested to ensure consistent treatment?
- Accountability: Can we trace every action back to an approval?
- Transparency: Is automation logic visible to stakeholders?
- Security: Are all data and credentials properly protected?
- Compliance: Does automation follow regulatory requirements?
- Testing: Has automation been tested on edge cases?
- Monitoring: Are we tracking performance post-deployment?
If any checkbox is empty, identify the anti-pattern and fix it before deployment; a simple pre-deployment gate is sketched below.
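One way to make the checklist enforceable is to encode it as a pre-deployment gate, as in this sketch (the keys mirror the checklist items above; the rest is hypothetical):

```python
CHECKLIST = {
    "explainability": True,
    "auditability": True,
    "human_oversight": True,
    "fairness": False,        # e.g. disparity monitoring not yet configured
    "accountability": True,
    "transparency": True,
    "security": True,
    "compliance": True,
    "testing": True,
    "monitoring": True,
}

def ready_to_deploy(checklist: dict[str, bool]) -> bool:
    """Block deployment while any checklist item remains unchecked."""
    missing = [item for item, done in checklist.items() if not done]
    for item in missing:
        print(f"BLOCKED: '{item}' unchecked; identify the anti-pattern and fix it first")
    return not missing

if ready_to_deploy(CHECKLIST):
    print("Deploy")
```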
Support & Questions
For questions about autobotAI's Responsible AI practices:
- Security questions: hello@autobot.live
- Compliance questions: hello@autobot.live
- Technical questions: hello@autobot.live
We're committed to building AI automation that is ethical, transparent, and trustworthy. Your feedback helps us improve.