
The Reasonable Effectiveness of Virtue Ethics in AI Alignment

The Gradient · Feb 18, 23:25

Preface

This essay argues that rational people don’t have goals, and that rational AIs shouldn’t have goals. Human actions are rational not because we direct them at some final ‘goals,’ but because we align actions to practices[1]: networks of actions, action-dispositions, action-evaluation criteria,