---
description: 'Microsoft Responsible AI principles and practices for developing trustworthy AI systems that uphold societal values'
applyTo: '**'
---
# Responsible AI Development
## Introduction

Responsible AI is a set of practices we follow to make sure that AI systems are trustworthy and uphold societal values. This instruction file aligns with Microsoft's Responsible AI principles to guide the development of AI-powered features and AI-assisted development practices.

> "We believe AI can amplify human ingenuity and productivity, but it also must be developed responsibly. Microsoft has been on a responsible AI journey since 2017, when we defined our principles for responsible AI and established an Office of Responsible AI to drive our adoption of these principles."
>
> — Microsoft Responsible AI
## Microsoft's Six Responsible AI Principles

These principles guide how we develop, deploy, and use AI systems:
### 1. Fairness

**Principle:** AI systems should treat all people fairly.

**What this means:**
- AI systems should not discriminate against individuals or groups
- Systems should provide equitable treatment across diverse populations
- Bias in training data, algorithms, and outcomes should be actively mitigated
- Fair representation should be ensured in data and decision-making
**In Practice:**
- Review AI-generated code for biased assumptions or stereotypes
- Use diverse and representative test data
- Evaluate AI suggestions for fairness across different user groups
- Question AI outputs that might disadvantage certain populations
- Document fairness considerations in design decisions
**Example:**

```typescript
// Good: Fair, inclusive user model
interface UserProfile {
  id: string;
  displayName: string;
  preferredLanguage: string;
  accessibilityPreferences?: AccessibilitySettings;
  // Only include attributes that are functionally necessary
}
```

```typescript
// Avoid: Making assumptions based on demographics
interface UserProfile {
  id: string;
  name: string;
  age: number;
  gender: "male" | "female"; // Limited options, may exclude users
  // Demographic data should only be collected if necessary
}
```
### 2. Reliability and Safety

**Principle:** AI systems should perform reliably and safely.

**What this means:**
- Systems should operate consistently under expected conditions
- Systems should handle errors gracefully and fail safely
- Potential risks and harms should be identified and mitigated
- Systems should be thoroughly tested before deployment
- Safety should be prioritized over feature velocity
**In Practice:**
- Validate all AI-generated code through rigorous testing
- Implement proper error handling and fallback mechanisms
- Test edge cases and failure scenarios
- Monitor AI system behavior in production
- Establish safety review processes for critical systems
**Example:**

```typescript
// Good: Reliable error handling
async function processUserData(data: unknown): Promise<Result> {
  try {
    // Validate input
    if (!isValidUserData(data)) {
      return {
        success: false,
        error: "Invalid user data format",
      };
    }

    // Process with error handling
    const result = await safeProcessing(data);
    return { success: true, data: result };
  } catch (error) {
    logger.error("Processing failed", { error, data });
    return {
      success: false,
      error: "Processing failed. Please try again.",
    };
  }
}
```
### 3. Privacy and Security

**Principle:** AI systems should be secure and respect privacy.

**What this means:**
- Personal data should be protected and used appropriately
- User consent should be obtained for data usage
- Security best practices should be implemented throughout the system
- Data minimization principles should be followed
- Encryption and access controls should protect sensitive information
**In Practice:**
- Never include sensitive data in AI prompts or training
- Implement proper authentication and authorization
- Sanitize and validate all inputs
- Encrypt data in transit and at rest
- Follow privacy regulations (GDPR, CCPA, etc.)
- Use secure coding practices for AI-generated code
**Example:**

```typescript
// Good: Privacy-respecting data handling
interface UserDataRequest {
  userId: string; // Only include necessary identifiers
  requestedFields: Array<"profile" | "preferences">; // Explicit consent
}

async function getUserData(request: UserDataRequest): Promise<UserData> {
  // Check authorization
  if (!(await isAuthorized(request.userId, request.requestedFields))) {
    throw new UnauthorizedError("Insufficient permissions");
  }

  // Audit data access
  await auditLog.record({
    action: "user_data_access",
    userId: request.userId,
    fields: request.requestedFields,
    timestamp: new Date(),
  });

  // Return only requested fields
  return fetchUserData(request.userId, request.requestedFields);
}

// Avoid: Exposing sensitive data
// Never log or expose PII in error messages or responses
```
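The "avoid" rule above is easier to enforce with a redaction layer in front of the logger. Below is a minimal sketch, assuming a flat payload object; the `redactForLogging` helper and its field list are illustrative assumptions, not an established API.

```typescript
// Illustrative sketch: strip common PII fields before a payload reaches a log sink.
// The field list is an assumption for the example; extend it for your data model.
const PII_FIELDS = new Set(["email", "phone", "ssn", "address", "fullName"]);

function redactForLogging(
  payload: Record<string, unknown>,
): Record<string, unknown> {
  const safe: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(payload)) {
    safe[key] = PII_FIELDS.has(key) ? "[REDACTED]" : value;
  }
  return safe;
}

// Usage: logger.error("Request failed", redactForLogging(userRecord));
```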
### 4. Inclusiveness

**Principle:** AI systems should empower everyone and engage people.

**What this means:**
- Systems should be accessible to people with diverse abilities
- Design should consider diverse user needs and contexts
- Systems should not create barriers to access or participation
- Inclusive design should be integrated from the start
- Multiple interaction modalities should be supported
**In Practice:**
- Follow accessibility standards (WCAG 2.2 Level AA)
- Support assistive technologies (screen readers, voice access)
- Use inclusive language in code, comments, and documentation
- Test with diverse user groups and scenarios
- Consider international and cultural contexts
**Example:**

```tsx
// Good: Inclusive, accessible component
interface ButtonProps {
  label: string; // Required visible label
  onClick: () => void;
  ariaLabel?: string; // Additional context for screen readers
  disabled?: boolean;
  loading?: boolean;
  size?: "small" | "medium" | "large";
}

function AccessibleButton({
  label,
  onClick,
  ariaLabel,
  disabled = false,
  loading = false,
  size = "medium",
}: ButtonProps) {
  return (
    <button
      onClick={onClick}
      disabled={disabled || loading}
      aria-label={ariaLabel || label}
      aria-busy={loading}
      className={`btn btn-${size}`}
    >
      {loading ? (
        <>
          <span className="sr-only">Loading...</span>
          <LoadingSpinner aria-hidden="true" />
        </>
      ) : (
        label
      )}
    </button>
  );
}
```
### 5. Transparency

**Principle:** AI systems should be understandable.

**What this means:**
- Users should understand when they're interacting with AI
- System behavior and limitations should be clearly communicated
- Decisions made by AI should be explainable when possible
- Documentation should clarify AI capabilities and constraints
- Users should have visibility into how their data is used
**In Practice:**
- Document when and how AI tools are used in development
- Explain AI-generated code and decisions
- Provide clear error messages and explanations
- Disclose AI usage to end users when appropriate
- Make AI behavior predictable and consistent
**Example:**

```typescript
// Good: Transparent AI usage

/**
 * This function uses AI to suggest code completions.
 *
 * Privacy Note: Your code context is sent to our AI service for processing.
 * We do not store or train on your code. See our privacy policy for details.
 *
 * Limitations:
 * - Suggestions may not always be correct or optimal
 * - Review all suggestions before accepting
 * - Security-critical code should be manually reviewed
 *
 * @param context - The code context for generating suggestions
 * @returns AI-generated code suggestions
 */
async function getAISuggestions(context: CodeContext): Promise<Suggestion[]> {
  // Implementation with clear error handling
}

// User-facing disclosure
const aiAssistanceNotice = `
AI Assistant is active. Suggestions are powered by AI and should be reviewed.
[Learn more about how we use AI]
`;
```
### 6. Accountability

**Principle:** People should be accountable for AI systems.

**What this means:**
- Humans should be responsible for AI system outcomes
- Clear ownership and governance should be established
- Mechanisms for recourse and appeal should exist
- AI systems should be monitored and audited
- Responsibility cannot be abdicated to AI systems
**In Practice:**
- Maintain human oversight of AI-generated code
- Establish clear review and approval processes
- Document decisions and their rationale
- Implement audit trails for AI-assisted development
- Take responsibility for all code you commit
- Provide mechanisms for users to report issues
**Example:**

```typescript
// Good: Accountable AI usage with audit trail
interface AICodeReview {
  timestamp: Date;
  reviewer: string; // Human reviewer
  aiTool: string; // AI tool used
  suggestion: string; // AI suggestion
  decision: "accepted" | "modified" | "rejected";
  rationale: string; // Why this decision was made
  modifications?: string; // How suggestion was modified
  approvedBy: string; // Final human approval
}

async function reviewAICodeSuggestion(
  suggestion: AISuggestion,
  reviewer: User,
): Promise<AICodeReview> {
  // Human review process
  const decision = await humanReview(suggestion);

  // Audit trail
  const review: AICodeReview = {
    timestamp: new Date(),
    reviewer: reviewer.id,
    aiTool: "GitHub Copilot",
    suggestion: suggestion.code,
    decision: decision.action,
    rationale: decision.reason,
    modifications: decision.modifiedCode,
    approvedBy: reviewer.id,
  };

  await auditLog.record(review);
  return review;
}
```
## Implementing Responsible AI in Development

### Development Workflow Integration
**1. Design Phase:**

- Consider all six Responsible AI principles
- Identify potential risks and harms
- Plan for fairness, privacy, and inclusiveness
- Document design decisions and trade-offs
**2. Implementation Phase:**
- Follow secure coding practices
- Use inclusive language and accessible design
- Validate AI-generated code against principles
- Implement proper error handling and logging
- Add transparency through documentation
**3. Testing Phase:**

- Test for fairness across user groups (see the fairness-check sketch after this list)
- Verify security and privacy protections
- Validate accessibility compliance
- Test reliability and edge cases
- Ensure explainability of AI behavior
**4. Review Phase:**
- Human review of all AI-generated code
- Security and privacy review for sensitive code
- Fairness and bias review
- Documentation review for transparency
- Establish accountability for decisions
**5. Deployment Phase:**
- Monitor system performance and safety
- Collect user feedback
- Maintain audit trails
- Establish incident response procedures
- Plan for ongoing monitoring and updates
**6. Monitoring and Maintenance:**
- Regular fairness audits
- Security and privacy assessments
- Performance and reliability monitoring
- User feedback analysis
- Continuous improvement processes
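As a concrete illustration of the fairness testing called for in the Testing and Monitoring phases, the sketch below compares positive-outcome rates across user groups and flags large gaps. The `Outcome` shape, the group labels, and the 0.8 threshold (a common "four-fifths" heuristic) are assumptions for this example, not a complete fairness methodology.

```typescript
// Minimal demographic-parity style check: compare positive-outcome rates
// across groups and flag large gaps. Real audits need far more nuance.
interface Outcome {
  group: string; // e.g. a coarse, consented cohort label
  approved: boolean;
}

function approvalRateByGroup(outcomes: Outcome[]): Map<string, number> {
  const counts = new Map<string, { approved: number; total: number }>();
  for (const o of outcomes) {
    const c = counts.get(o.group) ?? { approved: 0, total: 0 };
    c.total += 1;
    if (o.approved) c.approved += 1;
    counts.set(o.group, c);
  }
  return new Map([...counts].map(([g, c]) => [g, c.approved / c.total]));
}

function passesParityCheck(outcomes: Outcome[], minRatio = 0.8): boolean {
  const rates = [...approvalRateByGroup(outcomes).values()];
  if (rates.length < 2) return true; // nothing to compare
  return Math.min(...rates) / Math.max(...rates) >= minRatio;
}
```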
### AI-Assisted Development Checklist

Before accepting AI-generated code, verify the items below (a structured record sketch follows the checklist):
**Fairness:**

- [ ] Does the code treat all users fairly?
- [ ] Are there biased assumptions or stereotypes?
- [ ] Is test data diverse and representative?
- [ ] Would this work equitably for all user groups?

**Reliability and Safety:**

- [ ] Has the code been thoroughly tested?
- [ ] Is error handling comprehensive?
- [ ] Are edge cases handled safely?
- [ ] Could this fail in a way that harms users?

**Privacy and Security:**

- [ ] Is sensitive data properly protected?
- [ ] Are inputs validated and sanitized?
- [ ] Is authentication/authorization correct?
- [ ] Does this follow security best practices?

**Inclusiveness:**

- [ ] Is the code accessible to all users?
- [ ] Does it support assistive technologies?
- [ ] Is the language inclusive and neutral?
- [ ] Are diverse use cases considered?

**Transparency:**

- [ ] Is the code well-documented?
- [ ] Are AI-assisted parts clearly marked?
- [ ] Can users understand what's happening?
- [ ] Are limitations clearly stated?

**Accountability:**

- [ ] Do I understand this code completely?
- [ ] Am I prepared to maintain this code?
- [ ] Is there an audit trail for this decision?
- [ ] Have I taken responsibility for this code?
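One way to make this checklist enforceable is to capture it as a structured record attached to each AI-assisted change, so reviews leave an audit trail. The shape below is a hypothetical sketch, not an established schema.

```typescript
// Hypothetical record capturing the checklist outcome for one AI-assisted change.
interface ResponsibleAIChecklist {
  changeId: string;
  fairness: boolean;
  reliabilityAndSafety: boolean;
  privacyAndSecurity: boolean;
  inclusiveness: boolean;
  transparency: boolean;
  accountability: boolean;
  notes?: string; // rationale for any item that needed discussion
}

// A change is ready only when every principle has been explicitly verified.
function checklistComplete(c: ResponsibleAIChecklist): boolean {
  return (
    c.fairness &&
    c.reliabilityAndSafety &&
    c.privacyAndSecurity &&
    c.inclusiveness &&
    c.transparency &&
    c.accountability
  );
}
```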
## Governance and Process

### Responsible AI Review Board

For significant AI features or systems, establish a review process:

**Review Triggers** (illustrated in the sketch after these lists):
- New AI-powered features or capabilities
- Changes to AI algorithms or models
- Use of AI in critical or sensitive contexts
- AI systems that make automated decisions
- AI handling personal or sensitive data
**Review Team:**
- Technical experts (security, privacy, accessibility)
- Domain experts (legal, compliance, ethics)
- User advocates or representatives
- Product and engineering leadership
**Review Process:**
- Submit proposal with Responsible AI assessment
- Review team evaluates against six principles
- Identify risks and mitigation strategies
- Document decisions and approvals
- Establish monitoring and review schedule
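To show how the review triggers above might be wired into tooling, here is a hypothetical helper that flags a change for Responsible AI review; the flag names are assumptions and would typically come from PR labels or a change manifest.

```typescript
// Hypothetical change metadata; in practice these flags might come from
// PR labels, a change manifest, or a questionnaire in the PR template.
interface ChangeMetadata {
  introducesAIFeature: boolean;
  modifiesAIModelOrAlgorithm: boolean;
  usedInCriticalContext: boolean;
  makesAutomatedDecisions: boolean;
  handlesPersonalData: boolean;
}

// Any single trigger is enough to require a Responsible AI review.
function requiresResponsibleAIReview(change: ChangeMetadata): boolean {
  return (
    change.introducesAIFeature ||
    change.modifiesAIModelOrAlgorithm ||
    change.usedInCriticalContext ||
    change.makesAutomatedDecisions ||
    change.handlesPersonalData
  );
}
```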
### Incident Response

When AI systems cause harm or violate principles, follow the steps below (an incident-record sketch follows the list):

**1. Immediate Response:**
- Stop or limit the problematic behavior
- Notify affected users
- Document the incident thoroughly
- Escalate to appropriate stakeholders
**2. Investigation:**
- Determine root cause
- Assess scope and impact
- Identify violated principles
- Review related systems for similar issues
**3. Remediation:**
- Fix the immediate problem
- Implement preventive measures
- Update processes and guidelines
- Communicate with stakeholders
**4. Learning:**
- Document lessons learned
- Update training and guidelines
- Share knowledge with team
- Improve review processes
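To keep the four steps above traceable, each event can be captured in a single incident record that is updated as the response progresses. The fields below are an illustrative assumption, not a prescribed format.

```typescript
// Hypothetical incident record spanning response, investigation,
// remediation, and learning in one auditable object.
interface ResponsibleAIIncident {
  id: string;
  detectedAt: Date;
  description: string;
  affectedUsersNotified: boolean;
  rootCause?: string; // filled in during investigation
  violatedPrinciples: string[]; // e.g. ["Fairness", "Privacy and Security"]
  remediation?: string; // filled in during remediation
  lessonsLearned?: string; // filled in during the learning step
  status: "open" | "investigating" | "remediated" | "closed";
}
```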
## Microsoft Responsible AI Resources

### Official Guidelines

- **Microsoft Responsible AI**: Core principles and approach
- **Responsible AI Standard**: Detailed implementation guidance
- **Responsible AI Dashboard**: Tools for implementing responsible AI
- **Azure AI Services Documentation**: Service-specific guidance

### Training and Certification

- **Microsoft Learn: Responsible AI**: Free training courses
- **Responsible AI for Developers**: Technical implementation
- **AI Business School**: Business perspective on responsible AI

### Tools and Frameworks

- **Fairlearn**: Toolkit for assessing and improving fairness
- **InterpretML**: Model interpretability and explainability
- **Error Analysis**: Deep-dive error analysis for AI systems
- **Responsible AI Toolbox**: Comprehensive toolkit
## Summary

Responsible AI development is not optional: it is essential for building trustworthy AI systems that uphold societal values and serve all people fairly. By integrating Microsoft's six Responsible AI principles into our development practices, we ensure that:
- **Fairness**: AI treats all people equitably
- **Reliability and Safety**: AI systems work consistently and safely
- **Privacy and Security**: AI protects user data and system security
- **Inclusiveness**: AI empowers everyone
- **Transparency**: AI behavior is understandable
- **Accountability**: Humans remain responsible for AI systems
### Key Takeaways

- **Humans must remain in control**: AI assists, humans decide
- **All six principles matter**: No principle is more important than the others
- **Integrate from the start**: Responsible AI is not an afterthought
- **Continuous vigilance**: Monitor and improve AI systems over time
- **Shared responsibility**: Everyone on the team owns responsible AI
### Remember

> "Responsible AI is a journey, not a destination. It requires ongoing commitment, learning, and adaptation as technology and society evolve."

As developers using AI tools like GitHub Copilot, we have a responsibility to ensure that AI amplifies human ingenuity while upholding the values that make technology beneficial for everyone.

For more information, see the Microsoft Responsible AI resources above.