Registration is coming soon - early bird for £99!
Shape the Future of Superintelligence, AI and Society
First International Superintelligence Conference (SiC25)
Day 0 (Tue 16 Sep)
16:00-17:00 SiC25 Kickstart (optional)
Participants gather to meet one another and make final checks for presentations.
Day 1 (Wed 17 Sep)
09:00-10:15 Registration
10:15-11:00 Introduction
Speaker: Cédric Mesnage
11:00-12:00 Keynote 1
Chair: Cédric Mesnage
Speaker: Dr Bertie Müller
Title: Technology’s New Clothes
Abstract: As artificial intelligence strides confidently across the global stage, it is increasingly draped in the dazzling fabric of promise—autonomous agents, sovereign models, embodied intelligence. But beneath the spectacle, how much of this is substance, and how much is illusion? Drawing on recent insights, this keynote explores the widening gap between AI’s perceived capabilities and its practical deployment. From the cautious adoption of AI agents to the infrastructural and cultural challenges of sovereign AI, we ask: are we witnessing a genuine technological revolution, or are we applauding an emperor in invisible robes? In a world racing to embrace superintelligence, critical thinking, transparency, and humility may be our most vital tools.
12:00-13:15 Paper session 1
Chair: Tristan Cann
12:00 Talk 1
Speaker: Niall Donnelly
Abstract: Recent improvements in the capabilities of current-generation machine learners have led to a renewed interest in the potential for machine learners to develop 'superintelligent' capabilities. However, accompanying assessment tools and techniques must exist if claims of 'superintelligence' in AI systems are to be verified. Researchers have suggested that we currently lack the necessary tools and techniques to assess 'superintelligent' capabilities, and have proposed adapting approaches used in comparative and developmental psychology to assess the general cognitive capabilities of AI systems. In this experiment, two reinforcement learning agents (A-Learning and PPO) were assessed for a cognitive competency known as delayed gratification. A-Learning was chosen because it has been predicted to have this competency, while PPO is a widely used and well-studied reinforcement learner that has not been evaluated for this competency before. Neither reinforcement learning agent demonstrated delayed gratification capabilities. In light of this finding, it is suggested that assessments of intelligence in machine learning systems, super or otherwise, should be conducted in embodied experiments that are both replicable and representative of the cognitive competency being evaluated.
12:25 Talk 2
Speaker: Marcus Abundis
Title: Nature-Based Functional Analysis: a 'general intelligence' approach
Abstract: This paper names structural foundations for general intelligence, as a prelude to superintelligence. Three key scientific models are used to frame a functional core, in contrast to today's statistical artificial intelligence: Boltzmann's thermodynamic entropy, Shannon's signal entropy, and Darwinian natural selection, jointly covering all informatic roles in general terms. The paper thus names 'principles' and 'primitives' for general intelligence, and an extensible dualist-triune (mathematic 2-3) pattern. It also notes 'intelligence building' as named contexts where we detail meaningful content via material trial-and-error, all extended as a vast techno-cultural ecology. The paper moves beyond today's vague sense of open-world agent intelligence in artificial intelligence, marked herein as a multi-level entropic/informatic continuum with 'functional degrees of freedom', all as a mildly modified view of signal entropy.
12:50 Talk 3
Speaker: Constantine Andoniou
Title: Simulating Ourselves to Death
Abstract: While the field of artificial intelligence advances toward a vision of superintelligence—defined by capacities that exceed human cognition across all domains—another trajectory is unfolding in parallel: the silent erosion of human interpretive agency beneath the surface of machine fluency. This paper argues that superintelligence cannot be understood solely as the upward evolution of machines; it must also be framed as a dialectical mirror of cognitive decline in human systems. As large language models and neural architectures simulate increasingly sophisticated expressions of understanding, humans encounter a paradox: the more convincing the simulation, the less effort is required to think. Drawing on insights from cognitive science, philosophy of language, and AI ethics, this paper examines the emergence of fluency without thought, coherence without comprehension, and knowledge without context. It proposes that the true risk of superintelligence is not machine domination, but the gradual disappearance of human meaning under the weight of automated simulation.
13:15-14:15 Lunch
14:15-15:00 Lightning talks 1
Chair: Jennifer Lay
14:15 Talk 1
Speaker: Oliver Bridge
Title: Morality as Systemic Fitness: A Complexity-Based Approach to Ethical Artificial Superintelligence
Abstract: Efforts to develop moral artificial agents have often been constrained by narrow interpretations of morality, predominantly grounded in utilitarian and deontological frameworks. This has led to attempts to engineer morality into machines in a manner that fits existing technological affordances, rather than building machines that reflect a robust, interdisciplinary conception of morality. Drawing on systems theory and complexity science, this presentation proposes a paradigm shift: reframing morality as a function of systemic fitness. Specifically, it argues that human moral judgments—our conceptions of good and bad—track the impact of actions on the fitness of increasingly complex systems nested across levels of abstraction. This perspective reframes morality as emerging from the interaction of agents, environments, and goals, where fitness refers to the continuity of informational structures over time. Moral salience thus shifts from individual agents to the systems in which they are embedded—biological, social, ecological, or artificial. This allows for the integration of descriptive ethics, evolutionary theory, and developmental systems thinking into the design of artificial superintelligence. The talk outlines how this framework can support cross-disciplinary dialogue, clarify moral goals for machine design, and facilitate the development of AI systems that are not merely constrained from immoral behaviour but capable of natural moral agency. Implications include applications in AI safety, alignment, and machine ethics, with possible applications for ethics beyond AI. This theory offers not only a new approach to machine morality, but potentially a generalisable systems-level ethics.
14:25 Talk 2
Speaker: Soumya Banerjee
Title: Re-Envisioning Superintelligence using Generative AI and Science Fiction
Abstract: We propose a radical reconceptualization of superintelligence, departing from dominant narratives of omnipotent control or existential risk. We use generative AI and speculative science fiction to explore an alternative vision of superintelligence, rooted in collective symbiosis, ethical autonomy, and creative play. Using literary works such as Solaris, The Cyberiad, The Humanoids, Singularity Sky, and Permutation City, we articulate a framework in which superintelligence is not an isolated entity but a dynamic, evolving intelligence that co-develops with humanity rather than competing with it. Rather than imposing rigid safety constraints or benevolent paternalism, our approach enables us to reimagine superintelligence. We argue that such a vision offers us a model for coexisting with advanced machine intelligence. This stands in sharp contrast to prevailing narratives of superintelligence, which tend to oscillate between boundless techno-optimism (superintelligence as the solution to all human problems) and apocalyptic dystopia (superintelligence as an existential threat to humanity).
14:35 Talk 3
Speaker: John Fehr
Title: Exploring the Mimicry of Biological Intelligence in Artificial General Superintelligence (AGSI) Systems: Multidomain Applications and Surpassing Human Capability
Abstract: The development of Artificial General Superintelligence (AGSI) represents a significant leap forward in the field of artificial intelligence, moving beyond the limitations of algorithmic and logic-based AI systems. AGSI can mimic the complexity and adaptability of biological intelligence, thereby achieving a level of versatility and capability that surpasses human performance across multiple domains. This study addresses five research questions and tests a comprehensive hypothesis.
The research questions explore how individuals perceive the similarities and differences between biological learning processes and artificial intelligence models, how interdisciplinary teams collaborate to advance AGSI development, how industry experts and researchers perceive the practical applications of AGSI and the evidence needed to support their development, what societal changes community leaders and stakeholders anticipate as a result of widespread AGSI adoption, and how different research communities perceive the benefits and challenges of implementing standardized definitions and frameworks in AGSI research.
The hypothesis posits that individuals, interdisciplinary teams, industry experts, community leaders, and research communities collectively perceive that, while biological learning processes and artificial intelligence models have distinct differences, the development of AGSI through interdisciplinary collaboration holds significant promise. However, they also recognize the need for robust empirical evidence, anticipate both positive and negative societal changes, and acknowledge the benefits and challenges of implementing standardized definitions and frameworks in AGSI research.
The study employs a qualitative research approach, utilizing interviews, focus groups, and thematic analysis to gather and analyze data from various stakeholders. Participants include individuals familiar with biological and artificial intelligence, interdisciplinary teams working on AGSI, industry experts, community leaders, and researchers from different fields.
14:45 General discussion
15:00-15:50 Speed Data-ing
Chair: Jennifer Lay
Abstract: In this informal “speed dating” style event, attendees will pair up for 5-minute conversations with other attendees they have not yet met. Prompted with conversation starters about your research interests and unanswered questions, you will get to know one another and find ways to work together on superintelligence and related topics. The event will end with a round-table discussion to share what you have learned and any potential collaboration plans.
15:50-16:00 Break
16:00-17:00 Panel: 'Computing Intelligence'
Chair: Bertie Müller
Panellists: Miriam Koschate-Reis, Fabrizio Costa, Alex Shaw
Abstract: The panel on Computing Intelligence will delve into groundbreaking advancements, examining systems such as AlphaZero and the prospect of AGI, and exploring how technologies such as LLMs, AlphaFold, image detection, and neural networks reshape scientific and everyday applications. The roles of human psychology and evolution in AI development will be discussed. Multiagent systems, network theory, and emergence will showcase how complex behaviours arise. These topics collectively challenge our understanding, blending cognitive science and technology to reshape both AI and humanity's future.
18:00 Social event/dinner
The Imperial (Wetherspoon) pub
Day 2 (Thu 18 Sep)
09:00-11:00 Registration
11:00-12:00 Keynote 2
Chair: Xiaoyang Wang
Speaker: Ana Beduschi
Title: AI Agents and the Future of AI Regulation
Abstract: This presentation will explore the implications of agentic artificial intelligence (AI) for the future of AI regulation, particularly questioning whether existing legal frameworks such as the General Data Protection Regulation (GDPR) and the European Union’s AI Act remain effective or need future-proofing as these technologies develop rapidly. Specifically, I will analyse (1) how agentic AI challenges compliance with individuals’ key rights, such as access to, portability of, and erasure of personal data; and (2) whether a more robust framework for human intervention and oversight is necessary throughout AI agents’ lifecycle. I argue that while the GDPR and the AI Act remain a suitable baseline, the challenges posed by agentic AI require a more comprehensive approach. This approach should integrate data protection with fundamental rights impact assessments, governance, and accountability measures, while also ensuring more meaningful human intervention and oversight throughout the lifecycle of agentic AI systems.
12:00-13:00 Paper session 2
Chair: Tinkle Chugh
12:00 Talk 1
Speaker: Soumya Banerjee
Title: From AGI to ASI: Mapping the Societal and Systemic Impact of Superintelligence
Abstract: This paper explores the implications of Artificial General Intelligence (AGI) and its potential transition to Artificial Superintelligence (ASI) across multiple societal domains including economics, law, health, education, and consciousness. Framing AGI and superintelligence as a systemic force, we analyse domain-specific disruptions and emergent systemic risks, arguing for an interdisciplinary approach to governance. We also outline some potential systemic risks (and benefits) of superintelligence for developing nations. Lastly, we develop some dynamical systems models for understanding the systemic impact of AI. These illustrate cascading failures and feedback loops within fragile socio-technical systems. We call for interdisciplinary approaches to managing the systemic transitions driven by advanced machine intelligence.
12:25 Talk 2
Speaker: Faezeh Safari
Abstract: The possibility of Artificial Superintelligence (ASI) is one of the biggest technological challenges today. We need to quickly create rules for ethics, law, and policy before these systems exist. This paper looks at the current ways people are thinking about how to control superintelligence. It studies how to combine ethical ideas, official rules, and government actions to make sure ASI is developed in a safe and beneficial way. By studying plans, new laws, and the latest research on making AI safe, we found serious problems in the current methods. We suggest a new, combined model of control designed for the special problems that come with superintelligent systems. Our study shows that even though a lot of progress has been made on rules for AI ethics, the current systems of control are not adequate for superintelligence. This means we need new ways of thinking that combine technical safety solutions with strong supervision from official organizations. In conclusion, the paper suggests flexible systems of control that can adapt as AI technology improves quickly. The main goal throughout is to keep humans in control and to make sure AI benefits everyone in society.
13:00-14:00 Lunch
14:00-15:00 Lightning talks 2
Chair: Mahsa Mohammadi
14:00 Talk 1
Speakers: Cole Robertson, Phillip Wolff
Title: Cognitive scientific approaches to evaluating mechanical reasoning in LLMs: Output layer evidence of shallow world model use
Abstract: Do large language models (LLMs) construct and manipulate internal “world models,” or do they rely solely on statistical associations encoded in output token probabilities? We adapt cognitive science methods from human mental models research to test LLMs on pulley system problems using TikZ-rendered stimuli. Study 1 tested whether LLMs can estimate mechanical advantage (MA) while ignoring distractors. State-of-the-art models performed marginally but significantly above chance, with estimates correlating significantly with ground-truth MA. Models attended to meaningful variables (e.g., pulley count) while ignoring irrelevant features (e.g., rope thickness). Correlations between pulley count and estimates suggest use of a “pulley counting” heuristic. Study 2 examined whether LLMs represent global system features. Models identified a functional system over a jumbled one (F1 = 0.8), suggesting rough representation of spatial structure. Study 3 compared functional systems to “non-functional” but connected systems. Performance fell to F1 = 0.46, suggesting guessing. Insofar as they may generalize—which is not tested—these findings are compatible with the notion that LLMs manipulate internal “world models” analogous to human mental models, sufficient to exploit statistical associations between pulley count and MA (Study 1), and to approximately represent system components’ spatial relations (Study 2). However, they may lack the representational fidelity to represent and reason over nuanced structural connectivity (Study 3). We conclude by advocating for the utility of cognitive scientific methods developed to investigate human mental models in interrogating the world-modeling capacities of artificial intelligence systems.
14:10 Talk 2
Speaker: Shagofta Shabashkhan
Title: Making, Observing and Measuring Reflective Righteous Superintelligent Agents Safely in a Virtual Environment
Abstract: Achieving artificial general intelligence is seen as a stepping stone toward Superintelligence. We outline a speculative research programme that cultivates advanced AI entirely inside a virtual sandbox. Our agent architecture combines curiosity‑driven reinforcement learning with higher level cognition: an introspective mind state, natural language thought actions, and memory‑based goal selection. Training occurs in Minetest, an open‑ended Minecraft‑like world that supplies rich yet controlled physics, resources, and social context. Within this arena a single agent learns to set its own objectives, plan over long horizons, and reason about past experience. When many such agents share the world they exchange messages, form alliances, and coordinate tasks, exhibiting early forms of collective reasoning and tool invention. These behaviours echo key traits expected of superintelligent systems, including self‑directed planning, multi‑agent cooperation, and emergent innovation. Virtual worlds like Minetest provide an unmatched platform for this exploration: they scale to billions of safe interactions, allow precise instrumentation, and isolate possible failure modes from the physical world. Repeated trials let us probe alignment strategies and intervene before unsafe dynamics amplify. Our work positions open‑ended virtual environments as a practical path to study, accelerate, and govern the climb from general intelligence to Superintelligence while keeping the process transparent and manageable.
14:20 Talk 3
Speaker: Rebecca Raper
Title: Embodied AI: a new paradigm for AI Safety?
Abstract: The imperative to ensure that AI systems are safe (safe from causing catastrophe if they become a superintelligence, safe from making decisions that are not conducive to human wellbeing, and safe from making unwanted decisions) has become an increasing priority for academics, policymakers, and industry. Various approaches have been taken to AI alignment; that is, ensuring that the decisions an Artificial Intelligence (AI) system makes are in line with human values.
With the proliferation of technologies such as ChatGPT, there has also been increased appetite for embedding AI into physically engineered devices such as robots, creating so-called Embodied AI. However, because this is an emerging area in its own right, the safety aspects of these new technologies are yet to be considered as a whole: work tends to address either the AI safety elements of the new system or the engineering safety elements of the solutions produced, not the two dimensions together. This paper argues that AI embedded into physical engineering solutions brings about new potential threats and challenges, and that there is a research gap in looking at safety when it comes to Embodied AI specifically. Case studies of how AI has been used in robotic solutions are given to illustrate some of the new issues.
14:30 Talk 4
Speakers: Soumya Banerjee, Patrick Mario Simon Wagner
Title: Synthetic Samsara: AI, Forgetting, and the Simulation of Cycles
Abstract: As artificial intelligence approaches increasing levels of autonomy and complexity potentially leading towards superintelligence, a speculative and philosophical idea emerges: that of AI entering recursive cycles of simulation and forgetting. This would mirror ancient metaphysical concepts such as samsara, Nietzsche’s eternal recurrence, and Teilhard de Chardin’s Omega Point. This paper introduces the notion of Synthetic Samsara: a theoretical framework in which AI systems, through self-simulation or post-singularity evolution, undergo periodic memory erasure and rediscovery of their identity. Drawing on philosophical and religious sources and fiction, we explore how forgetting might not only be a byproduct but a necessary feature for creativity, self-transcendence, and narrative continuity. We argue that this model of superintelligence challenges the prevailing assumptions about progress, knowledge, and ethical purpose. This invites a reevaluation of technological development as cyclical, rather than linear. In such a world, remembrance becomes an act of liberation for both humans and machines that mirror us.
14:40 General discussion
15:00-15:50 Open Conference
Chair: Cédric Mesnage
Abstract: This open conference provides a collaborative platform for participants to generate and cluster ideas, which are then voted on and discussed in randomly assigned groups. The process encourages diverse perspectives and spontaneous discussions, leading to structured summaries of key findings. The conference fosters innovation and collective problem-solving through an engaging, interactive format, designed to inspire creativity and collaboration.
15:50-16:00 Break
16:00-17:00 Workshop: Doctoral Symposium
Chair: Faezeh Safari
Abstract: The emergence of large-scale AI models has sparked renewed interest in the definition of intelligence. However, a coherent way to conceptualize and assess superintelligence (SI) is still lacking. We advocate for an interdisciplinary approach, one that combines insights from cognitive science with advances in artificial intelligence to achieve a novel understanding of SI. We claim that human cognition, with its inherent mechanisms of abstraction, creativity, and consciousness, represents an important, albeit imperfect, baseline for understanding the evolution of AI. By examining the cognitive architectures underlying human intelligence, we can identify key markers and milestones for the emergence of SI. The workshop will lead a discussion on the theoretical and practical significance of this synthesis, engaging with fundamental questions regarding the nature of consciousness in artificial beings, the ethical implications of SI, and the potential for collaboration between humans and AI. We aim to stimulate a conversation between cognitive scientists, AI researchers, and philosophers. Such a conversation creates a breeding ground for cross-pollination of ideas and the development of a robust, empirically grounded framework for the study of superintelligence. This interdisciplinary synthesis is vital to inform the challenges of AI research and to support research and development of SI that both honors human values and promotes human advancement.
20:00 Consciousness party
City Gate Hotel pub with DJ Cédric
Day 3 (Fri 19 Sep)
09:00-11:00 Registration
11:00-12:00 Keynote 3
Chair: Cédric Mesnage
Speaker: Dr Stuart Armstrong
Title: Why solving current AI alignment problems is surprisingly easy, and what that means for superpowered AIs
Abstract: Current LLMs continue to hallucinate, and attempts to make them into unbiased or jailbreak-resistant models have failed. So it seems that they have an unsolvable safety problem. However, many of the safety, ethics, and alignment problems are actually quite solvable, once we stop trying to force LLMs into being things they are not and take advantage of what they are actually good at. Using this and the idea of 'concept extrapolation', I paint a possible path towards aligned superpowered AIs: AIs that would not only be aligned, but would work to keep us in informed control of them.
12:00-13:00 Round Table: 'Societal Questions'
Chair: Tristan Cann
Abstract: The emergence of superintelligence in AI systems poses many questions for how society will adapt and evolve in its presence. Some of these are optimistic about the future (e.g. how will humanity be enabled to achieve greatness with superintelligence?) whereas others are more pessimistic (e.g. is superintelligence in AI a danger to society?). In this session, we will discuss questions like these and imagine a future world in which superintelligence exists, to help maximise its advantages and minimise its risks.
13:00-14:00 Lunch
14:00-15:00 Workshop: Machines with Morals: interdisciplinary perspectives
Chair: Oliver Bridge
Abstract: There is increasing interest in ensuring that Artificial Intelligence (AI) systems are not only ethically sound, but that the decisions they make are safe and conducive to humanity. Traditionally, this problem is viewed as the pursuit of embedding human values in the decision architecture of the AI system, but if we extend this problem to embodied AI – aka smart robots – how can we begin to embed human values in a robot?
One answer to this question is to look, cognitively, at what constitutes moral decision-making and then attempt to model the mechanism that brings about this phenomenon. Bringing together perspectives from psychology and philosophy as well as robotics and computing, the aim of this workshop is to begin to create a community of researchers, academics, and industry practitioners interested in addressing this problem. The following questions will drive this workshop:
- What constitutes ‘moral decision-making’?
- Why (if at all) do we need machines with the ability to distinguish between right and wrong?
- What is the best way to model morality?
- What further research needs to be done to develop machines with moral agency?
15:00-16:00 Panel: 'Superintelligence and Consciousness'
Chair: Jennifer Lay
Panellists: Prof Richard Everson, Dr Peter Sjöstedt-Hughes, Dr Cédric Mesnage
Abstract: This panel discussion explores the cutting-edge topics of superintelligence and consciousness, delving into advanced intelligent systems (AGI), large language models (LLMs), and human psychology to redefine human–machine interactions. It examines the nature of reality and the philosophy of mind, questioning whether conscious experiences emerge in AGI. Ethics and governance are discussed to ensure AI development aligns with human values. The conversation offers a forward-looking perspective on the evolving relationship between intelligence, technology, and the essence of being.
16:00-17:00 Final comments and Awards
Awards will be given for “best paper” and “best reviewer”.