From financial advisors to secretaries, the trust challenge of AI agents: Can we trust the autonomous decisions of artificial intelligence?
As AI agent technology rapidly expands in Silicon Valley, with funding reaching $8.2 billion in 2024, these autonomous systems are steadily moving into finance, infrastructure, and decision-making. Behind this technological wave, however, lies a crucial and often overlooked question: how do we verify that an AI agent's behavior is trustworthy?
Silicon Valley poured $8.2B into AI agents last year.
Soon, they will control our money, infrastructure, and decision-making.
But there’s one problem no one’s talking about:
How can we verify if AI agents are telling the truth? pic.twitter.com/zEj7z5mGyX
— Sergey Gorbunov (@sergey_nog) April 22, 2025
Silicon Valley is investing heavily in AI agents, but is the foundation of trust still a "black box"?
Chainlink co-founder Sergey Gorbunov pointed out in a post last night that although AI agents are marketed as autonomous systems capable of completing complex tasks on their own, most of them still operate as "black boxes", meaning users cannot see their internal decision-making processes and can only choose to trust them blindly:
A truly autonomous AI agent should possess both "unstoppable" and "verifiable" characteristics; however, current systems often do not meet this standard.
(The USB-C of the AI world: What is the Model Context Protocol (MCP)? An interpretation of the universal context protocol for AI assistants)
Why is "verifiability" the true guarantee of security?
Gorbunov emphasized that verifiability means an AI agent must be able to clearly answer three questions: What did it do? How did it do it? Did it comply with the predetermined rules?
Without such mechanisms, AI agents that gain control over critical infrastructure could pose significant risks. If left unaddressed, this "verification gap" could become a hidden danger in the technology's development.
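To make the idea concrete, below is a minimal TypeScript sketch of an append-only, hash-chained action log; the record shape and field names are illustrative assumptions, not anything from Gorbunov's post. Each entry answers the three questions above, and the hash chain lets an independent auditor detect any later tampering.

```typescript
import { createHash } from "node:crypto";

// Illustrative record shape: what the agent did, how, and whether it
// complied with its rules. Field names are assumptions, not a standard.
interface ActionRecord {
  action: string;        // what did it do?
  inputs: string;        // how / on what data did it act?
  policyCheck: boolean;  // did it comply with the predetermined rules?
  prevHash: string;      // hash of the previous record (chain link)
}

const sha256 = (s: string) => createHash("sha256").update(s).digest("hex");

function hashRecord(rec: ActionRecord): string {
  return sha256(JSON.stringify(rec));
}

// Append a record, linking it to the hash of the previous entry.
function appendRecord(log: ActionRecord[], rec: Omit<ActionRecord, "prevHash">): string {
  const prevHash = log.length ? hashRecord(log[log.length - 1]) : "genesis";
  const full: ActionRecord = { ...rec, prevHash };
  log.push(full);
  return hashRecord(full);
}

// An auditor recomputes the chain; editing any entry breaks every later link.
function verifyLog(log: ActionRecord[]): boolean {
  return log.every((rec, i) =>
    rec.prevHash === (i === 0 ? "genesis" : hashRecord(log[i - 1]))
  );
}
```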
Three types of AI agents, each with different verification requirements
According to Sreeram Kannan, founder of EigenLayer, AI agents can be divided into three categories based on whom they serve:
Personal Agents: mainly serve individuals, such as digital assistants, and carry relatively low verification requirements.
Commons Agents: serve communities and require moderate verification to ensure fairness and trust.
Sovereign Agents: operate completely independently of humans and require the highest level of verification.
In the next five years, these sovereign agents may control trillions of dollars in assets. If the verification mechanism is not mature enough, it will be like "building a house on quicksand."
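As a rough illustration of this taxonomy, the mapping below encodes each category's minimum verification requirement in TypeScript; the tier names and the deployment gate are assumptions drawn from the description above, not Kannan's code.

```typescript
// Kannan's three categories, as reported above; the tiers are illustrative
// assumptions used only to compare verification strictness.
type AgentClass = "personal" | "commons" | "sovereign";
type VerificationTier = "low" | "moderate" | "maximal";

const requiredVerification: Record<AgentClass, VerificationTier> = {
  personal: "low",      // digital assistants serving one user
  commons: "moderate",  // community services needing fairness guarantees
  sovereign: "maximal", // fully autonomous, potentially controlling large assets
};

// A deployment gate could refuse to launch an agent whose verification
// machinery does not meet the tier its class requires.
function mayDeploy(cls: AgentClass, provided: VerificationTier): boolean {
  const rank = { low: 0, moderate: 1, maximal: 2 } as const;
  return rank[provided] >= rank[requiredVerification[cls]];
}
```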
Three-tier verification system: Rebuilding the trust foundation of AI agents
To solve the verification problem, Kannan proposed a three-tiered verification framework:
Proactive Verification: the task is evaluated before execution.
Retroactive Verification: accuracy is reviewed after the task completes.
Concurrent Verification: the task is continuously monitored and recorded during execution.
This framework makes AI behavior transparent across the whole task lifecycle, strengthening trust.
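A minimal sketch of how the three tiers could wrap a single task's lifecycle follows; the hook names and wiring are assumptions, since Kannan's framework describes the tiers, not an implementation.

```typescript
// Three verification hooks around one task, mirroring the framework:
// proactive (before), concurrent (during), retroactive (after).
interface Verifier<I, O> {
  proactive: (input: I) => boolean;              // evaluate before execution
  concurrent: (event: string) => void;           // monitor and record during execution
  retroactive: (input: I, output: O) => boolean; // review accuracy afterwards
}

async function runVerified<I, O>(
  input: I,
  task: (input: I, log: (event: string) => void) => Promise<O>,
  v: Verifier<I, O>,
): Promise<O> {
  if (!v.proactive(input)) throw new Error("proactive check rejected the task");
  const output = await task(input, v.concurrent); // task reports progress events
  if (!v.retroactive(input, output)) throw new Error("retroactive review failed");
  return output;
}
```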
From insurance claims to market predictions: practical applications of verifiable AI
Kannan also pointed to insurance claims as a potential application of verifiable AI agents: today a single insurer often acts as both policy issuer and claims auditor, which frequently leads to a crisis of trust.
With verifiable AI agents, the claims process can be restructured so that execution and auditing are performed independently under a transparent mechanism, enhancing fairness and credibility.
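A sketch of that separation of roles, with hypothetical interfaces for the deciding agent and the independent auditor (all names here are illustrative assumptions):

```typescript
// Sketch: the party that decides a claim and the party that audits it
// are independent, replacing the single issuer-and-auditor role above.
interface ClaimDecision {
  claimId: string;
  approved: boolean;
  reason: string;
}

interface ClaimsAgent {
  decide(claimId: string): Promise<ClaimDecision>;
}

interface IndependentAuditor {
  review(decision: ClaimDecision): Promise<boolean>; // re-checks against policy
}

// The decision only takes effect if an auditor the insurer does not
// control signs off on it.
async function settleClaim(
  claimId: string,
  agent: ClaimsAgent,
  auditor: IndependentAuditor,
): Promise<ClaimDecision> {
  const decision = await agent.decide(claimId);
  if (!(await auditor.review(decision))) {
    throw new Error(`audit rejected decision for claim ${claimId}`);
  }
  return decision;
}
```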
In addition, platforms like EigenBets combine ZK-TLS with verifiable inference layers, allowing prediction markets to operate more transparently and reducing reliance on centralized authorities.
(Sam Altman's three observations: costs falling 10x each year, AI agents becoming the new standard of work, and assets that AI cannot replace appreciating in value)
Blockchain + AI: AI Agents' Ticket to the Future
In the face of increasingly complex AI systems, Gorbunov believes that blockchain technology can provide the necessary cryptographic foundation of trust and help establish a robust verification framework:
Combining blockchain with AI agents not only enhances credibility and flexibility but also makes smart contracts truly "smart," paving the way for future AI applications.
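One common pattern for anchoring that trust on-chain is to commit a hash of each decision record to a blockchain so later audits can prove the record existed and was never altered. The sketch below assumes a hypothetical CommitmentStore interface standing in for a real smart-contract client; none of this comes from Gorbunov's post.

```typescript
import { createHash } from "node:crypto";

// Hypothetical interface to a contract that stores commitments;
// a real system would use an actual chain client here.
interface CommitmentStore {
  postCommitment(digest: string): Promise<void>;
  hasCommitment(digest: string): Promise<boolean>;
}

// Only the hash goes on-chain: the full decision record stays off-chain,
// but any later tampering changes the digest and fails the audit.
async function commitDecision(store: CommitmentStore, record: object): Promise<string> {
  const digest = createHash("sha256").update(JSON.stringify(record)).digest("hex");
  await store.postCommitment(digest);
  return digest;
}

async function auditDecision(store: CommitmentStore, record: object): Promise<boolean> {
  const digest = createHash("sha256").update(JSON.stringify(record)).digest("hex");
  return store.hasCommitment(digest); // true only if this exact record was committed
}
```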
At the end, Gorbunov attached a link to his YouTube program "The Future of AI," emphasizing that the key to the future development of AI agents is not just building more powerful models, but ensuring that they can:
Prove the results of their actions
Present their reasoning process transparently
Earn trust through cryptographic mechanisms
He emphasized: "Only by achieving these three major goals can AI agents operate safely in future systems."
This article, "From financial advisors to secretaries, the trust challenge of AI agents: Can we trust the autonomous decisions of artificial intelligence?", was originally published on Chain News ABMedia.