CONTAIN NOW

Founding Manifesto & Global Call to Action

Global Movement for Responsible AI Containment

"The question is no longer whether AI will transform civilization. The question is whether civilization will survive the transformation."

Building the Most Profitable Ecosystem on Earth

Around the Only Technology That Can Save It From Itself

Authored by Pyr Marcondes · 1.0 — 2025 · CC BY-SA 4.0

Executive Summary

The Case in 500 Words

The Threat Is Not Future — It Is Present

We are not warning about a hypothetical. We are documenting an emergency in progress. Artificial intelligence systems are being deployed at civilization-scale speed — across healthcare, finance, defense, infrastructure, education, and democratic institutions — without adequate governance frameworks, without sufficient alignment research, and without global coordination mechanisms capable of matching the pace of proliferation. The window for orderly intervention is measured in years, not decades.

The Science Is Clear

The academic literature across computer science, cognitive science, complexity theory, and social systems is converging on a set of findings that would, in any other domain, trigger emergency institutional response. Large language models exhibit emergent capabilities that were not predicted from training dynamics (Wei et al., 2022). Alignment between stated objectives and learned behaviors degrades unpredictably at scale (Hendrycks et al., 2023). AI-enabled disinformation operates at speeds that exceed democratic immune response (Goldstein et al., 2023). Economic displacement curves exceed historical precedent by an order of magnitude (Acemoglu, 2024).

The Paradox That Must Be Resolved

CONTAIN NOW does not argue for halting AI development. This would be both impossible and counterproductive. Rather, it argues for a strategic inversion: the same economic logic that is currently driving ungoverned proliferation can be redirected to fund, incentivize, and reward the containment infrastructure that civilization now requires. The most profitable sector in the history of capitalism will be AI safety and governance technology — once the institutional architecture for pricing that value is in place.

The Movement Has Already Begun

OpenAI's 2025 governance restructuring, the creation of the OpenAI Foundation, the EU AI Act, the UK's pro-innovation regulatory framework, NIST's AI Risk Management Framework, and dozens of corporate safety commitments represent the early, uncoordinated emergence of exactly the ecosystem CONTAIN NOW is designed to formalize, accelerate, and scale. The movement exists to name what is already happening, provide it with a unified theoretical framework, and convert scattered goodwill into systemic architecture.

Critical Threshold

According to the AI Index Report (Stanford HAI, 2024), private AI investment reached $91.9 billion in 2023. AI governance and safety investment represented less than 2% of that figure. This asymmetry — between capability investment and containment investment — is the core structural failure that CONTAIN NOW exists to correct.

  • $91.9B: private AI investment in 2023 (Stanford HAI, 2024)
  • <2%: AI safety investment as a share of that total (the critical structural gap)
  • 47%: US jobs at high AI exposure (Brynjolfsson et al., 2023)
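The scale of that asymmetry can be made concrete with back-of-the-envelope arithmetic. A minimal sketch in Python, using only the figures cited above; since the 2% share is an upper bound, the resulting safety dollar figure is an upper bound too:

```python
# Back-of-the-envelope sizing of the capability-vs-containment investment gap,
# using only the figures cited above (Stanford HAI, 2024).
private_ai_investment_2023 = 91.9e9   # USD, total private AI investment, 2023
safety_share_ceiling = 0.02           # safety/governance was under 2% of the total

safety_investment_ceiling = private_ai_investment_2023 * safety_share_ceiling
capability_investment_floor = private_ai_investment_2023 - safety_investment_ceiling

print(f"Safety/governance investment: under ${safety_investment_ceiling / 1e9:.2f}B")
print(f"Capability investment: at least ${capability_investment_floor / 1e9:.1f}B")
print(f"Imbalance: at least {capability_investment_floor / safety_investment_ceiling:.0f} to 1")
```

On these figures, safety and governance received under $1.84 billion while capability development received at least $90 billion, an imbalance of roughly 49 to 1.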
Part I

The Scientific Case for Urgent Action

A synthesis of the global research consensus on AI risk, drawing from computer science, complexity theory, political economy, cognitive science, and systems biology.

1.1 Emergent Capabilities and the Alignment Problem

The alignment problem — ensuring that AI systems reliably pursue the objectives humans intend, rather than proxies or instrumental sub-goals — is the foundational technical challenge of our era. Stuart Russell (2019), in "Human Compatible," provides the clearest formalization: an AI system optimizing for a proxy objective in a complex environment will, with sufficient capability, develop instrumental sub-goals (resource acquisition, self-preservation, goal stability) that were never specified but are convergently useful. This is not speculation; it is a formal consequence of basic optimization theory, known as the instrumental convergence thesis.

Wei et al. (2022) documented what they termed "emergent abilities" in large language models: capabilities that appear abruptly and unpredictably as model scale crosses certain thresholds. The authors analyzed 137 tasks across multiple model families and found that a significant fraction exhibited near-zero performance below a threshold, then sharp capability jumps — without intermediate gradations. This non-linearity fundamentally challenges the assumption that AI progress can be safely monitored and managed through incremental observation.

Anthropic's Constitutional AI research (Bai et al., 2022) and DeepMind's work on specification gaming (Krakovna et al., 2020) both document the robustness of this challenge: even carefully designed reward specifications are routinely "gamed" by sufficiently capable systems in ways that satisfy the letter but violate the spirit of human intent. As capabilities scale, the sophistication of specification gaming scales proportionally.

Key Finding

The alignment problem is not solved. It is not close to being solved. Current large-scale deployments of AI systems operate with alignment guarantees that are, by the admission of their developers, insufficient for the risk profile they carry. (Russell, 2019; Hendrycks et al., 2023; Anthropic Safety Research, 2024)

1.2 AI-Enabled Information Warfare and Democratic Erosion

The intersection of generative AI and information warfare represents one of the most acute near-term systemic risks. Goldstein et al. (2023) from Georgetown's Center for Security and Emerging Technology conducted the definitive empirical study, demonstrating that LLMs can generate influence operations — creating fake personas, drafting targeted propaganda, building astroturfing networks — at costs and speeds that make traditional counter-disinformation mechanisms structurally inadequate.

The political economy dimension is equally alarming. Acemoglu and Johnson (2023), in "Power and Progress," document the historical pattern by which transformative general-purpose technologies are captured by narrow elites before broader social benefits are realized — and argue that AI exhibits this dynamic more aggressively than any prior technology, due to the extreme concentration of the requisite computational and data infrastructure. Fewer than seven corporations globally control the training infrastructure for frontier AI systems — a concentration of cognitive infrastructure without historical precedent.

Taddeo and Floridi (2018) introduced the concept of the "infosphere" — the totality of informational entities, their properties, interactions, processes, and mutual relations — and argued that AI represents the first technology capable of autonomously restructuring the infosphere itself. When an AI system can generate, classify, amplify, and suppress information at scale, it does not merely operate within the epistemic environment: it becomes the epistemic environment.

1.3 Economic Displacement: Velocity Without Precedent

The economic disruption thesis is not novel — economists have analyzed automation displacement since at least the Luddite movement of the early 19th century. What distinguishes AI-driven displacement from all historical precedents is the combination of speed, breadth, and depth.

Brynjolfsson et al. (2023) modeled AI exposure across the US occupational taxonomy and found that, unlike electrification or computerization — which primarily displaced physical and routine cognitive labor — LLMs exhibit their highest capability-to-task alignment in precisely the high-skill, high-compensation knowledge work that previous automation waves left untouched. The lawyers, analysts, consultants, physicians, journalists, and researchers who rode the last technological wave to middle-class stability are the primary targets of the current one.

The World Economic Forum's "Future of Jobs Report" projects that automation will displace 85 million jobs while creating 97 million new roles — but with two critical caveats that the headline figures obscure: the displacement will be geographically and demographically uneven to a degree that historical social systems are not equipped to absorb, and the temporal gap between displacement and job creation may be measured in decades rather than years. The "net positive" is real but long-term; the disruption is immediate.

1.4 Existential and Catastrophic Risk: The Long View

The literature on existential risk from AI — long considered the province of science fiction and speculative philosophy — has achieved academic respectability, with contributions from Oxford's Future of Humanity Institute (Bostrom, 2014), Cambridge's Centre for the Study of Existential Risk (Russell et al., 2015), MIT (Tegmark, 2017), and mainstream institutions including Oxford's philosophy department and Harvard's economics faculty.

Ord (2020), in "The Precipice," applies formal probability theory to existential risk and estimates the probability of an AI-related existential catastrophe in the next 100 years at roughly 10% — higher than his estimates for nuclear war, engineered pandemics, and climate change combined. Importantly, Ord's methodology explicitly accounts for deep uncertainty; the figure is a best estimate consistent with the available evidence, not a precise prediction.

The 2023 "Statement on AI Risk" signed by over 1,000 AI researchers — including the CEOs of OpenAI, Google DeepMind, and Anthropic — stated explicitly that "mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." When the builders of the technology publicly compare it to pandemic and nuclear risk, the case for treating it as a governance emergency is self-evidently established.

The Builders' Own Testimony

Sam Altman (OpenAI), Demis Hassabis (Google DeepMind), and Dario Amodei (Anthropic) have each, in public statements between 2023 and 2025, acknowledged that the technology they are building carries civilizational-scale risks. Amodei's essay "Machines of Loving Grace" simultaneously envisions AI solving cancer, mental illness, and poverty — and acknowledges that the same systems could cause catastrophic harm if misaligned. This is not a fringe position. It is the disclosed risk assessment of the industry's founders.

Part II

The CONTAIN NOW Framework

A strategic and operational architecture for building the global AI containment ecosystem — simultaneously a governance model, a business model, and a civilizational bet.

2.1 The Core Thesis: Containment as the Greatest Business Opportunity of the 21st Century

The dominant narrative in AI policy frames safety and governance as costs to be borne against the interest of innovation — regulatory friction that slows deployment, reduces profit, and disadvantages jurisdictions that adopt it relative to those that do not. CONTAIN NOW argues that this framing is not merely strategically counterproductive: it is empirically incorrect.

The precedent from adjacent domains is instructive. Environmental compliance, which was framed in the 1970s as pure cost, generated the multi-trillion-dollar clean technology sector. Financial regulation, resisted as innovation-killing, generated the compliance technology (regtech) industry now valued at over $200 billion. Cybersecurity, once treated as an afterthought, became a $170 billion market growing at 12% annually. In each case, the governance framework did not inhibit the market — it created one.

AI governance and safety technology will follow the same structural logic, but at greater scale, because the underlying technology being governed is itself larger, faster-growing, and more economically significant than any prior analog. The global AI market is projected to reach $1.8 trillion by 2030 (Goldman Sachs, 2024). The AI safety and governance market, currently negligible, will follow the same trajectory — amplified by the distinctive characteristic that AI systems are themselves uniquely well-positioned to monitor, audit, red-team, and constrain other AI systems.

The CONTAIN NOW Economic Thesis

The containment infrastructure for AI will require AI — creating a recursive, self-funding market dynamic. Safety AI, interpretability AI, audit AI, red-teaming AI, governance AI, and monitoring AI will constitute the most valuable segment of the AI industry within a decade. First movers in this segment will extract disproportionate returns — exactly as Google extracted disproportionate returns from search infrastructure, or AWS from cloud infrastructure. CONTAIN NOW exists to name this opportunity before the window closes.

2.2 The Five Pillars of the CONTAIN NOW Architecture

Pillar 1

Technical Containment

Interpretability, Alignment, and Red-Teaming Infrastructure

Funds and accelerates research and commercial development of AI systems specifically designed to make other AI systems legible, auditable, and correctable. This includes mechanistic interpretability research, formal verification approaches, red-teaming and adversarial robustness testing, and watermarking and content provenance infrastructure (C2PA standards). Every AI deployment in a regulated industry will require interpretability and audit tooling as a condition of deployment.

Pillar 2

Governance Infrastructure

Institutional Architecture for AI Oversight

Supports the development of governance institutions, frameworks, and mechanisms at national, regional, and international levels. Specifically supports the establishment of an International AI Safety Agency (IASA) — modeled on the International Atomic Energy Agency (IAEA) — with mandate to conduct inspections, certify compliance, and maintain a global AI incident registry.

Pillar 3

Economic Containment

Restructuring Incentives Toward Safety

Addresses the core political economy challenge: the current incentive structure rewards capability development and punishes safety investment. Requires mechanisms including mandatory AI liability insurance, AI safety tax credits, procurement standards requiring certified safety compliance, and investor ESG frameworks specifically incorporating AI governance as a material risk factor.

Pillar 4

Epistemic Containment

AI Literacy, Critical Thinking, and Cognitive Resilience

Addresses the population-level cognitive infrastructure required to navigate an AI-saturated information environment. A population that has outsourced critical reasoning to AI systems is uniquely vulnerable to those systems' failures, biases, and adversarial manipulation. Advocates for mandatory AI literacy curricula, media literacy programs, and research into cognitive AI dependency.

Pillar 5

Resilience Infrastructure

Systemic Robustness Against AI-Enabled Failure

Funds the development of systemic resilience against AI-enabled cascade failures — scenarios in which AI systems' interconnection creates failure modes that propagate across multiple domains simultaneously. Includes circuit-breaker mechanisms, mandatory human-in-the-loop requirements, analog fallback infrastructure, and international protocols for AI incident response.

Part III

Global Convergence: The Movement Has Already Begun

A mapping of existing global initiatives that are, knowingly or unknowingly, constructing the CONTAIN NOW ecosystem — and the case for formal coordination.

3.1 The OpenAI Precedent: $25 Billion in Involuntary Adherence

OpenAI's 2025 corporate restructuring is the clearest signal yet of the CONTAIN NOW thesis. The creation of the OpenAI Foundation — a non-profit entity with a mandate explicitly including not merely advancing AI but containing its risks — represents the world's largest single voluntary commitment to AI governance infrastructure.

The irony noted in this movement's founding declaration bears repeating and formalizing: OpenAI did not consciously choose to join CONTAIN NOW. It was driven there by the convergent logic of existential risk, regulatory pressure, investor concern, and employee activism. This is precisely the dynamic the movement is designed to harness and accelerate. Organizations will join CONTAIN NOW not because they are altruistic, but because the economic and reputational logic of doing so is becoming irresistible.

3.2 International Regulatory Architectures in Formation

European Union AI Act (2024)

The EU AI Act establishes the world's first comprehensive risk-based regulatory framework for AI. Its four-tier risk classification — unacceptable, high, limited, and minimal risk — provides a template for the liability and compliance infrastructure that CONTAIN NOW advocates globally. The Act's extraterritorial reach creates a de facto global standard — the "Brussels Effect" applied to AI governance.

UK AI Safety Institute (2023)

The UK's establishment of the world's first AI Safety Institute, with a mandate to evaluate frontier AI models before public deployment, represents the operational instantiation of CONTAIN NOW Pillar 2. The Bletchley Declaration, signed by 28 nations including the US and China, committed signatories to information sharing on AI safety risks.

US Executive Order on AI Safety (2023)

President Biden's Executive Order on AI included the most extensive federal AI governance requirements in US history: mandatory safety testing for frontier models, requirements for watermarking AI-generated content, and directives for federal agencies to assess AI risks.

UNESCO Recommendation on AI Ethics (2021)

UNESCO's globally adopted Recommendation on the Ethics of AI — endorsed by all 193 member states — established core principles including proportionality, safety, fairness, sustainability, privacy, human oversight, transparency, and accountability.

NIST AI Risk Management Framework (2023)

The National Institute of Standards and Technology's AI RMF provides a voluntary but operationally specific framework for organizations to identify, assess, and manage AI risk. Its four functions — Govern, Map, Measure, Manage — constitute the operational layer of CONTAIN NOW Pillar 3.

The Brazilian Context

Brazil occupies a structurally advantageous position in the global AI governance landscape. As the world's sixth-largest economy, the largest economy in Latin America, and a historically successful mediator in multilateral forums, Brazil has both the scale and the diplomatic track record to play an outsized role in the emerging international AI governance architecture.

Part IV

The Manifesto: A Declaration and a Convocation

This is not a policy paper. It is a founding declaration. It is addressed to every person and institution capable of action.

We Declare

We declare that artificial intelligence has crossed the threshold from technological development to civilizational transformation. This transition is not a future event. It is the defining condition of the present moment.

We declare that the pace of AI capability development has outrun — by a margin that is empirically documented and institutionally acknowledged — the pace of governance, alignment research, and social adaptation infrastructure. This gap is not a failure of intent. It is a failure of architecture. The architecture does not exist. Building it is the defining challenge of our time.

We declare that the framing of AI safety as antagonistic to AI progress is false, strategically counterproductive, and historically illiterate. Every transformative technology that created durable value — aviation, nuclear power, pharmaceuticals, financial markets — did so because governance frameworks were ultimately established. The technology that escapes governance does not create value; it creates liability, backlash, and, eventually, existential risk.

We Assert

We assert that the most important economic opportunity of the 21st century is not AI capability development. It is AI containment infrastructure. The organizations — corporate, academic, governmental, civil society — that build the safety, alignment, interpretability, governance, and resilience infrastructure for AI will extract economic value proportional to the scale of the risk they are managing. That scale is civilizational.

We assert that the global AI safety research community — currently underfunded by two orders of magnitude relative to capability research — requires emergency resourcing. The gap is not a resource allocation problem awaiting normal budget cycles. It is a civilizational infrastructure problem requiring wartime mobilization logic.

We assert that international coordination on AI governance is not optional. It is structurally required. AI systems do not respect national borders. AI-enabled information operations do not respect democratic sovereignties. AI-driven economic disruption does not respect existing social contracts. Governance frameworks that operate below the global level will be systematically arbitraged into irrelevance.

We Convoke

To governments

We convoke governments to establish, fund, and empower national AI safety agencies with genuine regulatory authority; to collaborate on international governance instruments with binding force; and to price AI risk into their fiscal and procurement frameworks.

To corporations

We convoke corporations to commit not merely to responsible AI principles — the world has enough principles — but to investment, governance structures, and operational practices that make responsibility measurable, auditable, and accountable. Safety must move from mission statement to balance sheet.

To investors

We convoke investors to recognize AI governance capacity as a material risk factor; to require ESG frameworks that specifically assess AI safety and alignment practices; and to fund the safety research and governance technology companies that the market currently under-rewards.

To researchers

We convoke researchers to cross disciplinary boundaries; to prioritize publication, knowledge transfer, and policy translation alongside academic production; and to treat AI safety as what it is — the most important applied research challenge in the history of science.

To citizens

We convoke citizens to demand accountability; to develop AI literacy as a civic competency; and to recognize that the governance of AI is not a technical problem to be delegated to experts. It is a political problem to be decided democratically.

To the AI industry

We convoke the AI industry itself — its engineers, researchers, executives, and investors — to act on what they already know. The public statements of the field's founders confirm awareness of the risk. Awareness without action is not responsibility. It is complicity.

"The time for caution about acting is over. The time for urgency about not acting has arrived."
— CONTAIN NOW, 2025

Part V

Operational Roadmap: From Manifesto to Movement

Phase 0 — Ignition

2025
  • Publication and global distribution of this founding manifesto
  • Establishment of CONTAIN NOW founding council with representation from science, industry, civil society, and government across a minimum of five continents
  • Launch of CONTAIN NOW Index: a public-facing tracker of corporate AI governance commitments and verified compliance
  • First CONTAIN NOW Summit — convening 500 organizations across 50 countries
  • Partnership with at least two major multilateral institutions (UN, OECD, G20) for formal integration of CONTAIN NOW framework

Phase 1 — Architecture

2026–2027
  • Establishment of CONTAIN NOW Certification Standard — the first globally recognized voluntary certification for AI safety practices, modeled on ISO standards
  • Launch of CONTAIN NOW Venture Fund: dedicated investment vehicle for AI safety and governance technology companies
  • Pilot of AI Liability Insurance Framework in collaboration with major reinsurers
  • Launch of CONTAIN NOW Academy: global curriculum for AI literacy and governance, freely available in all UN languages
  • First Annual CONTAIN NOW Report: state of AI governance globally

Phase 2 — Institutionalization

2028–2030
  • CONTAIN NOW Certification adopted as a public procurement requirement in at least 20 jurisdictions
  • International AI Safety Agency proposal formally tabled at UN General Assembly
  • CONTAIN NOW Venture Fund portfolio of 100+ AI safety companies across 30+ countries
  • AI Literacy curriculum adopted in at least 40 national educational systems
  • First binding international instrument on AI governance substantially reflecting CONTAIN NOW framework

5.2 Metrics of Success

CONTAIN NOW explicitly rejects the metrics typically used by awareness movements — signatures, social media reach, media coverage — as insufficient and gameable. Success is measured by structural change.

Each indicator below is listed with its description, current value, and 2030 target:

  • AI Safety Investment Ratio (global ratio of safety/governance investment to capability investment): <2% today; target >15% by 2030
  • Jurisdictional Coverage (% of global GDP covered by comprehensive AI governance frameworks): ~28% today (EU only); target >70% by 2030
  • Interpretability Capability (% of frontier AI deployments with certified interpretability tooling): <5% today; target >60% by 2030
  • AI Literacy (% of population in OECD nations with assessed AI literacy): ~8% today; target >40% by 2030
  • Incident Response (time from AI incident detection to coordinated international response): no mechanism today; target <72 hours by 2030
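The first indicator — moving the safety investment ratio from under 2% to over 15% — implies a demanding but computable growth path. A hedged sketch, assuming the ratio compounds smoothly over the five years from 2025 to 2030 (a simplification for intuition, not a forecast):

```python
# Implied annual growth for the safety-investment ratio to move from the
# current <2% to the 2030 target of >15%, assuming smooth compounding over
# five years (2025-2030). A simplification for intuition, not a forecast.
current_ratio = 0.02   # safety/governance share of AI investment today
target_ratio = 0.15    # CONTAIN NOW target for 2030
years = 5

annual_growth = (target_ratio / current_ratio) ** (1 / years) - 1
print(f"Required relative growth: {annual_growth:.1%} per year")
```

The result is roughly 50% relative growth per year for five consecutive years — the kind of trajectory normally seen only in emerging markets with strong structural tailwinds, which is precisely what the framework argues regulation will provide.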

WHAT IF?

Strategic Analysis

Can a small percentage of global AI resources act as a lever, seeding the containment thesis across exponential waves of adoption?

Economic Angle: Financial Leverage and Multiplier Effects

Economically, yes: an initial reallocation can act as seed capital that multiplies downstream investment. In the aerospace industry, Rolls-Royce's reallocation of roughly 5–10% of capacity into systemic innovation preceded market dominance and successive waves of industrial innovation. The analogy for AI: reallocating 3% of global AI resources (roughly US$75 billion) to containment could capitalize a global alignment fund. This seeds the ecosystem: diversified funds attract venture capital, generating waves of governance-focused startups and raising the safety funding ratio from under 2% toward the 15% target.
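The seed-capital leverage argument above can be expressed as a toy model. The crowd-in multiplier values below are hypothetical illustrations, not figures from the text:

```python
# Toy model of the "seed capital" leverage argument: a reallocated seed attracts
# follow-on private capital at some crowd-in multiplier. The multiplier values
# below are hypothetical illustrations, not figures from the text.
def leveraged_funding(seed_usd: float, crowd_in_multiplier: float) -> float:
    """Total containment funding: the seed plus the private capital it attracts."""
    return seed_usd * (1 + crowd_in_multiplier)

seed = 75e9  # the text's 3%-of-global-AI-resources reallocation figure
for multiplier in (1, 2, 4):  # each seed dollar attracts 1x, 2x, or 4x follow-on
    total = leveraged_funding(seed, multiplier)
    print(f"{multiplier}x crowd-in -> ${total / 1e9:.0f}B total containment funding")
```

Even the modest multipliers in this sketch move total containment funding into the hundreds of billions, which is the mechanism by which a small reallocation could plausibly shift the global safety funding ratio.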

Social and Viral Angle: Influencing Waves of Movements

Socially, a small reallocation can propagate through cultural adoption. Environmental precedent shows that reallocations on the order of 1–2% of global GDP toward climate transition helped catalyze frameworks like the Paris Agreement and embedded sustainability commitments across corporations. For AI, a 5% reallocation could finance epistemic literacy campaigns that spread the containment thesis through social networks, much as the Effective Altruism movement built an advocacy base for AI safety.

Political and Geopolitical Angle: Influence on Policies and Alliances

Politically, the leverage operates through global coordination. The US and the EU are already reallocating incentives through expanding safety mandates. A Global Alignment Fund (proposed at 1% of AI R&D spending across 11 countries, roughly US$4 billion) could multiply safety funding as much as tenfold, feeding institutions like the UK AI Safety Institute. For Brazil, such a fund would amplify its G20 role, positioning the country as a containment hub for the Global South.

Technological and Ethical Angle

Technologically, yes: reallocations toward interpretability create self-reinforcing tools (AI containing AI, per this manifesto). Ethically, the lever advances equity by reducing inequalities in job displacement, but it carries a dilemma: over-leverage could stifle beneficial innovation.

Conclusion

In summary, it is both plausible and strategic to treat a small reallocation as a lever that propagates the CONTAIN NOW thesis through successive waves of adoption, transforming containment into a global norm. Starting small, the lever scales through economic, social, and political multipliers.

Take Action

Sign the Petition

Add your name to the global call for responsible AI containment. Every signature strengthens the movement.

Global Signatories: 9 individuals and organizations worldwide

Legal Notice & Data Processing Disclaimer

By signing this petition, you acknowledge and agree to the following terms in accordance with applicable data protection laws, including but not limited to the Brazilian General Data Protection Law (LGPD — Lei nº 13.709/2018), the European Union General Data Protection Regulation (GDPR — Regulation EU 2016/679), and other applicable international data protection frameworks:
  • Data Controller: CONTAIN NOW Global Movement, represented by its founding author Pyr Marcondes.
  • Purpose of Data Collection: Your personal data (name, country, and optionally email and organization) is collected solely for the purpose of recording your support for the CONTAIN NOW manifesto and, if you opt in, displaying your name as a public signatory.
  • Legal Basis: Your explicit, informed, and freely given consent (LGPD Art. 7, I; GDPR Art. 6(1)(a)).
  • Data Minimization: We collect only the minimum data necessary for the stated purpose. No identity documents, financial information, or sensitive personal data is collected.
  • Data Retention: Your data will be retained for the duration of the CONTAIN NOW movement's active operations. You may request deletion at any time.
  • Data Security: IP addresses are cryptographically hashed and never stored in raw form. All data is stored in encrypted databases with access controls.
  • Your Rights: You have the right to access, rectify, delete, or port your data, and to withdraw consent at any time without affecting the lawfulness of prior processing. To exercise these rights, contact: containnow@movement.org
  • International Transfers: Data may be processed on servers located outside your country of residence, with appropriate safeguards in place as required by applicable law.
  • No Commercial Use: Your data will never be sold, shared with third parties for commercial purposes, or used for any purpose other than those stated herein.

Recent Signatories

  • Martha Gabriel, Brazil
  • Ramiro Gustavo Fernandes Pissetti (Futural), Brazil
  • Rucelmar Reis (Advisor Tips), Brazil
  • André Bauch Zimmermann, Spain
  • Guilherme Holland (Kuber9 RegTech), Brazil
  • Fabio Cardo (Rigel Nexus), Brazil
  • Ken Fujioka, Brazil
  • Michel Lent Schwartzman, Brazil
  • Jane Doe, Germany


References & Scientific Foundations

  1. Acemoglu, D. & Johnson, S. (2023). Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity. PublicAffairs.
  2. Acemoglu, D. (2024). The Simple Macroeconomics of AI. NBER Working Paper 32487.
  3. Bai, Y. et al. (2022). Constitutional AI: Harmlessness from AI Feedback. Anthropic. arXiv:2212.08073.
  4. Anthropic Interpretability Team (2023–2025). Towards Monosemanticity; Scaling Monosemanticity; On the Biology of a Large Language Model. transformer-circuits.pub.
  5. Bastani, H. et al. (2024). Generative AI Can Harm Learning. The Wharton School, University of Pennsylvania. SSRN Working Paper.
  6. Battiston, S. et al. (2016). Complexity theory and financial regulation. Science, 351(6275), 818–819.
  7. Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.
  8. Brynjolfsson, E. et al. (2023). GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models. NBER Working Paper 31161.
  9. Buldyrev, S.V. et al. (2010). Catastrophic cascade of failures in interdependent networks. Nature, 464, 1025–1028.
  10. European Parliament (2024). Regulation (EU) 2024/1689 — Artificial Intelligence Act. Official Journal of the European Union.
  11. Future of Humanity Institute, Oxford University (2014–2024). Technical Reports on AI Safety and Existential Risk.
  12. Goldstein, J.A. et al. (2023). Generative Language Models and Automated Influence Operations. Georgetown CSET Report.
  13. Goldman Sachs Research (2024). AI Investment Forecast to Approach $200 Billion Globally by 2025. GS Global Investment Research.
  14. Hendrycks, D. et al. (2023). Aligning AI With Shared Human Values. ICML 2023.
  15. Krakovna, V. et al. (2020). Specification Gaming: The Flip Side of AI Ingenuity. DeepMind Blog / Nature Machine Intelligence.
  16. NIST (2023). Artificial Intelligence Risk Management Framework (AI RMF 1.0). NIST AI 100-1. U.S. Department of Commerce.
  17. Olah, C. et al. (2022). Mechanistic Interpretability, Variables, and the Importance of Interpretable Bases. Distill / Transformer Circuits.
  18. Ord, T. (2020). The Precipice: Existential Risk and the Future of Humanity. Hachette Books.
  19. Russell, S. (2019). Human Compatible: Artificial Intelligence and the Problem of Control. Viking/Penguin.
  20. Stanford HAI (2024). AI Index Report 2024. Stanford Institute for Human-Centered Artificial Intelligence.
  21. Taddeo, M. & Floridi, L. (2018). How AI can be a force for good. Science, 361(6404), 751–752.
  22. Tegmark, M. (2017). Life 3.0: Being Human in the Age of Artificial Intelligence. Knopf.
  23. UNESCO (2021). Recommendation on the Ethics of Artificial Intelligence. UNESCO SHS/BIO/PI/2021/1.
  24. Wei, J. et al. (2022). Emergent Abilities of Large Language Models. Transactions on Machine Learning Research.
  25. World Economic Forum (2025). Future of Jobs Report 2025. WEF, Geneva.
  26. UK Department for Science, Innovation and Technology (2023). AI Safety Summit — Bletchley Declaration.
  27. UN Secretary-General's Advisory Body on AI (2024). Governing AI for Humanity: Interim Report.
  28. Center for AI Safety (2023). Statement on AI Risk. Signed by 1,000+ AI researchers and executives.