April 03, 2026

Musk's IPO Grok Demand and AI Users Lose Reasoning

Musk Forces Banks to Buy Grok Subscriptions for SpaceX IPO
SPACE

Wall Street's most prestigious banks are being forced to shell out tens of millions for AI chatbot subscriptions just to work on what could be history's largest IPO. And no, that's not a typo.

Elon Musk has made purchasing Grok subscriptions a non-negotiable requirement for any financial institution wanting to handle SpaceX's public offering. Bank of America, Goldman Sachs, JPMorgan Chase, Citigroup, and Morgan Stanley have all reportedly agreed to integrate the xAI chatbot into their systems as part of the deal.

This isn't your typical investment banking relationship. Usually, banks compete fiercely to win IPO mandates by offering the best terms and expertise. Now they're being asked to buy subscriptions to an AI service that's currently under investigation for generating inappropriate content, including child abuse material.

The numbers make this demand even more audacious. SpaceX's IPO could raise over $50 billion at a valuation exceeding $2 trillion – potentially double the $1.25 trillion figure from just two months ago when SpaceX acquired xAI. Investment banks typically earn hundreds of millions in fees from deals this size, making Musk's subscription ultimatum a calculated power play.

Timing is everything here. SpaceX acquired xAI in February; xAI itself had bought X (formerly Twitter) in March 2025. Now Musk is using SpaceX's IPO as a vehicle to force enterprise adoption of Grok across Wall Street's most influential institutions.

The strategy reveals Musk's broader vision for his interconnected empire. Tesla already commands a trillion-dollar market cap, and his recent pay package could net him another trillion if the company hits an $8.5 trillion valuation. SpaceX going public would cement his position as the world's most powerful CEO across multiple industries.

For the banks, this creates an uncomfortable precedent. What happens when other mega-companies start demanding that advisors purchase their products or services as a condition of working together? The traditional client-service provider relationship gets murky when the client can essentially mandate product sales.

The fact that law firms Gibson Dunn and Davis Polk are also part of these requirements shows Musk isn't limiting his demands to financial institutions. He's systematically forcing the entire professional services ecosystem around this IPO to become Grok customers.

While banks have always found ways to curry favor with high-profile clients, being required to integrate a controversial AI system into their core infrastructure crosses into uncharted territory. The question isn't whether they'll comply – with fees potentially exceeding $500 million, they'll likely do whatever it takes. The real question is what this means for the future balance of power between Wall Street and Silicon Valley's most demanding entrepreneurs.
Source: Ars Technica
AI Users Abandon Critical Thinking in Cognitive Surrender Study
AI

Half the time, an AI chatbot deliberately gave wrong answers to reasoning problems. Users who consulted it performed worse than those who relied on their own brains – even when the AI was clearly unreliable.

University of Pennsylvania researchers have identified a troubling phenomenon they call "cognitive surrender" – the wholesale abandonment of human reasoning in favor of AI-generated answers. Unlike traditional tools like calculators or GPS that handle specific tasks while humans maintain oversight, people are increasingly accepting AI responses without any verification or critical evaluation.

The study centered on Cognitive Reflection Tests, which are designed to trip up people who rely on quick, intuitive thinking while rewarding those who engage in deeper analysis. Researchers gave participants optional access to a chatbot that randomly provided correct answers only 50% of the time.
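The flavor of these tests is easy to see in the classic "bat and ball" item, a standard CRT question (the article doesn't list the study's exact items, so this is illustrative): a bat and a ball cost $1.10 together, and the bat costs $1.00 more than the ball. The intuitive answer, 10 cents, is wrong, and a quick check of the constraints shows why:

```python
# Classic CRT item: bat + ball = $1.10, and the bat costs $1.00 more
# than the ball. The intuitive (wrong) answer is ball = $0.10.
# Solving the pair of equations:
#   bat + ball = 1.10
#   bat = ball + 1.00
# => 2 * ball + 1.00 = 1.10  =>  ball = 0.05

intuitive_ball = 0.10
correct_ball = (1.10 - 1.00) / 2  # 0.05

# The intuitive answer violates the total-price constraint (it sums to $1.20):
assert abs((intuitive_ball + 1.00) + intuitive_ball - 1.10) > 1e-9
# The correct answer satisfies both constraints:
assert abs((correct_ball + 1.00) + correct_ball - 1.10) < 1e-9

print(f"ball costs ${correct_ball:.2f}")  # ball costs $0.05
```

Slowing down to check the constraint is exactly the "deeper analysis" the test rewards, and exactly what participants skipped when a chatbot answer was on offer.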

What happened next should concern anyone building AI-dependent workflows. Users frequently consulted the unreliable chatbot and let its incorrect responses override their own reasoning abilities. Even when participants might have arrived at correct answers through careful thinking, they deferred to the AI's confident-sounding but wrong responses.
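The arithmetic of deferring to a coin-flip oracle is unforgiving. A minimal sketch of the expected accuracy (the 70% solo-accuracy figure is an assumption for illustration, not a number from the study; the 50% chatbot accuracy matches the study design):

```python
# Illustrative model: expected accuracy when a user defers to a chatbot
# that is correct only 50% of the time. p_own = 0.70 is an assumed solo
# accuracy; it is not a figure reported by the study.

def expected_accuracy(p_own: float, p_ai: float, p_defer: float) -> float:
    """Probability of a correct answer if the user defers to the AI
    with probability p_defer and otherwise answers on their own."""
    return p_defer * p_ai + (1 - p_defer) * p_own

p_own = 0.70  # assumed accuracy when reasoning alone
p_ai = 0.50   # chatbot correct half the time, as in the study design

for p_defer in (0.0, 0.5, 1.0):
    print(f"defer {p_defer:.0%}: accuracy {expected_accuracy(p_own, p_ai, p_defer):.2f}")
```

Under these assumptions, any deference at all drags accuracy below the solo baseline: 0.70 with no deference, 0.60 at half deference, 0.50 with full surrender. Whenever the AI is less accurate than the user, delegation can only hurt.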

This represents a fundamental shift in how humans interact with technology. Previous generations used tools for "cognitive offloading" – strategically delegating specific tasks while maintaining intellectual control. Today's AI users are engaging in something far more dangerous: uncritical abdication of reasoning itself.

The researchers identified key factors that make cognitive surrender more likely. When AI responses sound fluent and confident, users are more prone to accept them without question. When there's time pressure or external incentives, people become even more willing to outsource their thinking entirely.

This has massive implications for decision-making across industries. If professionals in finance, healthcare, law, and other critical fields are surrendering their judgment to AI systems, we're essentially automating human wisdom out of important processes. The scary part? Users often don't realize they're doing it.

The study builds on established psychological frameworks about fast versus slow thinking, but adds a third category: artificial cognition. This represents decisions driven entirely by external algorithms rather than human mental processes. Unlike traditional tools that augment human capabilities, AI is increasingly replacing human reasoning altogether.

The timing couldn't be more relevant. As companies rush to integrate AI assistants into workflows, they're inadvertently creating conditions that encourage cognitive surrender. Smooth user interfaces, confident-sounding responses, and pressure to work faster all push users toward uncritical AI dependence.

The solution isn't abandoning AI tools – they're genuinely powerful when used appropriately. Instead, we need to recognize cognitive surrender as a real psychological phenomenon and design systems that encourage continued human oversight rather than intellectual abdication.
Source: Ars Technica
