A multidisciplinary evaluation of AI-accelerated inequalities
An exploration of key developments across critical industries
that inform trends in labor displacement, AI policymaking,
education, media literacy, and techno-nationalism.
Education
AI adoption in education is creating stark winners and losers, as institutions and individuals make strategic choices that prioritize self-interest over collective benefit, often resulting in systemic inefficiencies and entrenched disparities across school districts.
Cluely
NextGenAI
Infrastructure
Personalization

Cluely AI and academic integrity
Built by now-former Columbia students who were suspended, Cluely’s covert exam-assistance tools are disproportionately adopted by affluent students. With $5.3M in venture funding, it enables a "pay-to-win" dynamic that mirrors broader wealth gaps, raising ethical concerns about meritocracy in an AI-driven world.


OpenAI’s NextGenAI Consortium
OpenAI’s $50M partnership with Ivy League schools provides elite institutions with cutting-edge AI models and compute resources, accelerating research and curriculum development. Meanwhile, underfunded universities lack access to these tools, widening the gap between "AI-powered" and "AI-deprived" institutions.


Trump's AI coordination failure
A new executive order encourages AI literacy training in K-12 schools, but wealthier districts deploy NVIDIA-powered tutors, while under-resourced schools struggle with basic connectivity. Critics argue the policy risks deepening geographic and economic divides, as 54% of rural schools lack infrastructure for AI integration.


School districts face a dilemma
Schools face competing incentives: adopting AI tutors boosts rankings but displaces 15% of teaching jobs, while abstaining risks student competitiveness. This tension leaves a significant portion of districts in a stalemate, prioritizing short-term stability over long-term gains.
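The districts' incentive structure can be sketched as a two-player payoff game. The payoff numbers below are illustrative assumptions, not figures from this report; they encode only the qualitative claims above (ranking gains from adoption, job-displacement costs, competitiveness risk from abstaining):

```python
# Hypothetical payoffs for one district, given the rival district's choice.
# Illustrative values only: adoption yields a ranking edge, but both sides
# adopting absorbs job-displacement costs with no relative advantage.
PAYOFFS = {
    # (own_choice, rival_choice): own_payoff
    ("adopt", "adopt"): 1,      # both gain rankings, both absorb job losses
    ("adopt", "abstain"): 3,    # competitive edge outweighs displacement
    ("abstain", "adopt"): 0,    # students fall behind, rankings slip
    ("abstain", "abstain"): 2,  # stable staffing, no relative disadvantage
}

def best_response(rival_choice: str) -> str:
    """Return the choice that maximizes a district's own payoff."""
    return max(("adopt", "abstain"), key=lambda c: PAYOFFS[(c, rival_choice)])

# "Adopt" is the best response to either rival choice (a dominant strategy)...
assert best_response("adopt") == "adopt"
assert best_response("abstain") == "adopt"

# ...yet mutual adoption pays less than mutual abstention: the
# prisoner's-dilemma structure behind the districts' stalemate.
assert PAYOFFS[("adopt", "adopt")] < PAYOFFS[("abstain", "abstain")]
```

Under these assumed payoffs, each district's locally rational move is to adopt, even though all districts would collectively prefer the status quo, which is the coordination failure the section describes.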
Analysis
The education landscape reveals AI's paradoxical impact: while promising democratized learning, it creates compounding advantages for resource-rich institutions. Elite universities receive $50M in consortium funding while underserved schools lack basic connectivity. Tools like Cluely create academic integrity divides along socioeconomic lines. This cascading inequality suggests a future where AI literacy becomes a critical determinant of social mobility, potentially cementing existing hierarchies through technological means rather than dismantling them.
Financial engineering
Coupled with increasingly advanced foundational AI models, proprietary automated trading algorithms that run independently of human intervention are generating millions in profits per hour. Wealth-building mechanisms are no longer seen as dependent on proportional human employment.
Jane Street
Hudson River Trading
Citadel
AI profit dividend

Jane Street’s $20b earnings report
With revenue nearly doubling from $10.6 billion to $20.5 billion, Jane Street shows how lean teams running ML-leveraged ETF trading bots can outperform legacy institutions like Citigroup and Bank of America.

Hudson River Trading’s 10% hold
Hudson River Trading now controls 10% of US stock-trading volume with just 1,100 employees, illustrating how lean, tech-driven teams are capturing outsized market share and profits through high-frequency, AI-powered strategies.


GPU arms race
Top quant firms are racing to acquire NVIDIA H100 GPUs, leveraging their massive parallelism to shave nanoseconds off trades and gain asymmetric price-discovery speed over smaller rivals and regulators.


Redistribution proposals
Proposals for digital value-added tax (VAT) or an “AI dividend” aim to redistribute the vast wealth generated by AI trading, but critics warn that such reactive, post-facto policies may arrive too late to counter entrenched inequality.
Analysis
AI-driven financial systems reveal an economic dilemma where productivity gains diverge from traditional employment-based economic multipliers. The concentration of algorithmic trading capabilities creates asymmetric market advantages while potentially reducing taxable wage income. This scenario presents a coordination challenge wherein locally rational optimization decisions by firms may collectively yield suboptimal societal outcomes, a classic example of how technological efficiency and wealth creation can operate independently from broader economic distribution mechanisms.
Government
Advanced AI is reshaping governance by accelerating policy implementation through algorithmic decision-making. This unprecedented privatization of government functions is enabling hyper-efficient execution of policies while raising questions about democratic oversight, accountability, and technocratic governance.
Palantir
Anduril
Peter Thiel
DOGE


Palantir's Government OS
Palantir secured a $30M contract to develop ImmigrationOS for ICE, providing "real-time visibility into self-deportation" and streamlining the "immigration lifecycle from identification to removal." Simultaneously, its AIP Tariff Scenario Planner helps businesses navigate Trump's new global tariffs, showcasing AI's dual role in policy execution.


Techno-Nationalism and DOGE
DOGE reflects a strategic tension between minimizing government and maximizing technological sovereignty. Behind this initiative lies a Silicon Valley power shift where venture capitalists like Thiel have transitioned from funding disruptors to becoming government insiders, implementing an unprecedented experiment in AI-driven governance with minimal oversight.


OpenAI & Anduril dual-use dilemma
The strategic alliance between consumer AI company OpenAI and defense firm Anduril reveals the collapse of traditional boundaries between civilian and military technologies. This convergence accelerates AI capabilities in counter-drone operations while raising unresolved ethical questions about autonomous targeting decisions.


Twitter as policy incubator
Musk and other key opinion leaders' reliance on viral posts to shape policies on Ukraine, Israel, and global tariffs mirrors a shift toward crowdsourced governance, prioritizing platform-aligned narratives over systemic analysis. Critics warn of opacity in algorithmic amplification driving arbitrary decisions.
Analysis
The convergence of private AI systems with government functions creates profound information asymmetries: Palantir's "real-time visibility" tools, Anduril's defense technologies, and DOGE's budget analytics concentrate decision-making power within proprietary algorithms inaccessible to citizens. Meanwhile, platform-driven policy formation (Twitter) bypasses traditional democratic processes, creating governance inequalities where Silicon Valley insiders shape public policy with minimal oversight. This technocratic shift establishes a two-tier governance system where those controlling AI infrastructure increasingly dictate policy outcomes.
Startups
AI is redefining startup economics through hyper-lean teams and automated development, creating a new paradigm where technical scalability often outpaces traditional growth constraints.
Cursor
Replit
SF Compute
Anthropic
Midjourney
ARR


Programming copilots
Tools like Cursor and Replit enable 10-person teams to generate 95% of code via AI, slashing development cycles from months to days while maintaining 40% lower error rates than manual coding. Startups now prototype full-stack apps in hours rather than weeks, with Y Combinator reporting 25% of its 2025 cohort relying entirely on AI-generated codebases.


San Francisco Compute Company
SF Compute Company’s "Airbnb for GPUs" model offers H100 clusters at $0.57/hr, roughly 80% cheaper than AWS, allowing bootstrapped startups to access enterprise-grade AI infrastructure without long-term contracts. This shift disrupts the $27B cloud GPU market, enabling lean teams to compete with tech giants in model training.


Vibe coding and junior developers
While Anthropic predicts 90% AI-generated code by 2026, junior developer hiring dropped 63% as startups prioritize "vibe coders" or non-technical founders using natural language to ship production apps. This creates a bifurcated job market: 80% of entry-level coding roles now require AI orchestration and adjacent skills rather than traditional programming.


ARR per employee
Midjourney’s $20M/employee benchmark (vs. Big Tech’s $0.5M) exemplifies the AI productivity multiplier. Startups now target $1M+ ARR/employee through AI-automated workflows, with 78% of seed-stage ventures operating sub-50 person teams despite $10M+ revenues.
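The benchmark is simple arithmetic. A minimal sketch, where the headcounts and revenue totals are illustrative assumptions chosen only to reproduce the per-employee ratios cited above ($20M vs. $0.5M):

```python
def arr_per_employee(arr_usd: float, headcount: int) -> float:
    """Annual recurring revenue per employee, in dollars."""
    return arr_usd / headcount

# Illustrative inputs (not sourced figures) matching the section's ratios:
lean_startup = arr_per_employee(200e6, 10)     # $20M per employee
big_tech = arr_per_employee(50e9, 100_000)     # $0.5M per employee

multiplier = lean_startup / big_tech
print(f"AI productivity multiplier: {multiplier:.0f}x")  # prints "AI productivity multiplier: 40x"
```

On these assumptions the per-employee gap works out to a 40x multiplier, which is why sub-50-person teams can post eight-figure revenues.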
Analysis
AI-driven productivity gains are redefining organizational efficiency ratios: startups now achieve $1M+ ARR per employee versus traditional $200-350K benchmarks. This capital efficiency enables resource-constrained ventures to compete with established firms, as evidenced by 10-person teams shipping products via 95% AI-assisted code and accessing compute at 80% lower costs. However, this productivity risks accelerating labor displacement, with studies projecting 1-3 million jobs ultimately displaced as organizations prioritize automation over traditional headcount scaling.
Slowdown
Global efforts to decelerate AI advancement reveal a strategic tension between innovation and control, as policymakers, industries, and activists grapple with existential risks, labor displacement, and infrastructural vulnerabilities.
Pause AI
Job protectionism
Cloud computing
Open-source LLMs


Pause AI Movement
Pause AI protests across 13 countries demand binding treaties to halt advanced AI development, citing existential risks and unchecked corporate power. Their calls echo earlier binding international restrictions on hazardous technologies, such as the Montreal Protocol, but face skepticism over enforceability in a fragmented regulatory landscape.


Korean Lawyer Unions
The Korean Bar Association banned AI legal services like Continental Aju, citing threats to professional dignity and jobs. This mirrors broader resistance in regulated professions, where 41% of firms now restrict AI tools to preserve traditional roles.


Cloud privacy
64% of enterprises now limit cloud AI adoption due to data leakage risks (e.g., South Korea's national DeepSeek ban). This has spurred $12B in annual investments in on-premise alternatives, slowing AI integration in critical sectors.


Open-source LLMs
Locally run models like LLaMA and Mistral saw 230% adoption growth in 2024, enabling privacy-first AI while circumventing cloud regulations. This “guerrilla AI” movement challenges centralized control but risks fragmenting safety standards.
Analysis
The AI slowdown movement faces a fundamental timing challenge: regulatory and protest efforts move at bureaucratic speeds while AI capabilities advance exponentially. This mismatch creates coordination failures where individual actors (countries, professions, enterprises) implement isolated barriers without addressing the global nature of AI development. Meanwhile, the technical community responds with decentralized solutions (230% growth in open-source models) that bypass centralized controls altogether. This fragmentation creates a self-reinforcing cycle where uneven regulation drives innovation toward less-restricted spaces, ultimately accelerating rather than controlling AI development.