CFTC’s Selig Points to Blockchain as Tool for AI Content Verification
Michael Selig, chair of the US Commodity Futures Trading Commission, said blockchain could play a key role in verifying AI-generated content, contending the technology can help distinguish authentic media from synthetic outputs as concerns over misinformation grow.
During an appearance on The Pomp Podcast on Thursday, Selig was asked by host Anthony Pompliano about the use of AI-generated memes and images in markets, and whether intent matters or whether such content should be restricted altogether. He told Pompliano:
The private markets have solutions — blockchain technology is a great one. If you can timestamp things and make sure there’s an identifier for each meme or AI generated posts, you can verify if it’s real or generated by AI… Having these technologies here in the US is critical.
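The mechanism Selig describes, an identifier and a timestamp attached to each piece of content, can be sketched briefly. The following is a minimal illustration only, with all names hypothetical: the content's bytes are hashed into a stable identifier, paired with a timestamp at registration, and later rechecked against the bytes being verified. A real system would anchor the record on a blockchain rather than hold it in memory.

```python
import hashlib
import time

def fingerprint(content: bytes) -> str:
    """Derive a stable identifier for a piece of media: the SHA-256 of its bytes."""
    return hashlib.sha256(content).hexdigest()

def make_record(content: bytes, source: str) -> dict:
    """Bundle the identifier with a timestamp and a claimed source.
    In practice this record (or its hash) would be anchored on-chain."""
    return {
        "id": fingerprint(content),
        "source": source,
        "timestamp": int(time.time()),
    }

def verify(content: bytes, record: dict) -> bool:
    """Check that the content still matches the identifier registered earlier."""
    return fingerprint(content) == record["id"]

meme = b"example meme bytes"
record = make_record(meme, "original-poster")
print(verify(meme, record))       # unmodified content matches its record
print(verify(b"altered", record)) # any alteration breaks the match
```

Because the identifier is a hash of the content itself, even a one-byte edit produces a different fingerprint, which is what makes the timestamped record useful for distinguishing an original from a later modification.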
He said regulators are focused on maintaining US leadership in crypto, adding that “you can’t have AI without blockchain.”
As autonomous trading becomes more prevalent in financial markets, authorities are being pressed to distinguish between automated tools and fully autonomous agents, and to decide how the latter should be regulated. Asked how regulators are approaching AI agents, Selig responded:
I’m concerned that we over-regulate and strangle some of the technology here in the US… I’m taking a very much minimum effective dose of regulation approach, where we’re… making sure that we’re regulating the actors… and not the software developers. The software developers are the ones building the tools, but they’re not actually engaging in the financial transactions.
Selig said the CFTC is assessing how AI models are used in markets, emphasizing that enforcement should focus on participants engaging in financial activity.
Related: AI and stablecoins are winning despite 2026 crypto market slump
Blockchain and proof-of-personhood tools emerge for AI verification
A central challenge amid the surge in artificial intelligence use is distinguishing real content from synthetic media. Selig's comments reflect a broader push among policymakers and developers to use blockchain for content verification and provenance.
One approach is proof-of-personhood systems, which aim to confirm that an account belongs to a real, unique human rather than a bot. The most prominent example is Sam Altman’s World, whose World ID protocol allows users to prove their humanity without revealing personal data. The system uses encrypted biometric iris scans stored on the user’s device, though it has drawn criticism over privacy risks and potential coercion.
In March, World launched AgentKit, a toolkit that allows AI agents to prove they are linked to a verified human while interacting with online services. It integrates proof-of-personhood credentials with the x402 micropayments protocol developed by Coinbase and Cloudflare, enabling agents to pay for access while presenting cryptographic proof of human backing.
Ethereum co-founder Vitalik Buterin has proposed using cryptography and blockchain to make online systems more verifiable, including through zero-knowledge proofs and onchain timestamps that could help validate how content is generated and distributed without exposing sensitive data.
The proposals come as US policymakers weigh broader AI regulation. On March 20, the Trump administration released a national framework calling for a unified federal approach, warning that a patchwork of state laws could hinder innovation and competitiveness.
Magazine: Agent wastes 14 hours of scammers’ time, LLMs ‘poisoned’ by Iran: AI Eye