I find persuasive the view, articulated by scholars including Yuval Noah Harari, that the most profound transformation will not be automation itself, but the emergence of AI as a powerful agent shaping human decisions, beliefs, and knowledge systems. AI will increasingly participate in writing, designing, negotiating, teaching, and even mediating social and political discourse. In this sense, AI will not only produce information but also influence how reality is interpreted.
We are also likely to see AI systems becoming more interactive, multimodal, and embedded
in everyday decision-making. They will act as cognitive partners in research, design, and
professional work. However, as AI becomes more persuasive and conversational, the risk is
not only dependency, but a gradual erosion of human agency. People may defer judgment to
systems that appear knowledgeable and coherent, even when those systems lack genuine
understanding.
Another major shift will be from humans searching for information to AI increasingly determining what information humans see. This has implications for education, research, and public knowledge production, where exposure to ideas may come to be mediated by algorithmic systems rather than by active inquiry.
Frontier AI systems can already generate policy proposals, legal arguments, financial
frameworks, and even religious narratives.
Yet they do so without experience, embodiment, or moral accountability. They do not bear
consequences, exercise judgment, or understand context in human terms. Scaling such
systems into decision-making environments without sufficient human oversight risks
gradually delegating responsibility to entities that optimize patterns rather than values.
The central challenge of the coming decades will therefore be not only technical but epistemic: ensuring that fluency does not replace reasoning, that coherence is not mistaken for truth, and that human judgment remains central to how knowledge and decisions are produced.