Paper Details

Published: 2025/09/16

Container Title: 1st International Online Conference of the Journal Philosophies

As artificial intelligence (AI) systems continue to outperform humans in an increasing range of specialised tasks, a fundamental question emerges at the intersection of philosophy, cognitive science, and engineering: should we aim to build AIs that think like humans, or should we embrace non-human-like architectures that may be more efficient or powerful, even if they diverge radically from biological intelligence? This paper draws on a compelling analogy from the history of aviation: aeroplanes, while inspired by birds, do not fly like birds. Instead of flapping wings or mimicking avian anatomy, engineers developed fixed-wing aircraft governed by aerodynamic principles that enabled superior performance. This decoupling of function from biological form invites us to ask whether intelligence, like flight, can be achieved without replicating the mechanisms of the human brain. We explore this analogy through three main lenses. First, we consider the philosophical implications: What does it mean for an entity to be intelligent if it does not share our cognitive processes? Can we meaningfully compare forms of intelligence realised in radically different substrates? Second, we examine the engineering trade-offs between AIs modelled on human cognition (e.g., neural–symbolic systems or cognitive architectures) and those designed for performance alone (e.g., deep learning models). Finally, we explore the ethical consequences of diverging from human-like thinking in AI systems: if AIs do not think like us, how can we ensure alignment, predictability, and shared moral frameworks? By critically evaluating these questions, this paper advocates a pragmatic and pluralistic approach to AI design: one that values human-like understanding where it is useful (e.g., for interpretability or human–AI interaction) but also recognises the potential of novel architectures unconstrained by biological precedent.
Intelligence may ultimately be a broader concept than the human example suggests, and embracing this plurality may be key to building robust and beneficial AI systems.