Trusta.AI's AI Agent trust framework rebuilds Web3 identity and reputation mechanisms

The Web3 identity system is reaching a structural turning point, with AI Agents becoming the main actors.

The Web3 identity mechanism is shifting from "human real-name verification" alone to a new paradigm of "behavior-oriented + multi-agent collaboration." As AI Agents rapidly penetrate core on-chain scenarios, traditional static identity verification and declarative trust systems can no longer support complex interactions and risk prevention.

Trusta.AI creates AI-native trust infrastructure

Unlike existing solutions, Trusta.AI has built an integrated trust framework for AI Agents covering identity claims, behavior recognition, dynamic scoring, and access control, closing the loop for the first time from "is it human" to "is it trustworthy."


SIGMA Multidimensional Trust Model Reshaping On-Chain Reputation Assets

By quantifying reputation across five dimensions: professionalism, influence, engagement, monetization, and adoption rate, Trusta.AI transforms the abstract concept of "trust" into a composable and tradable on-chain asset that becomes the credit cornerstone of AI social interaction.


Technical Closed Loop: TEE + DID + ML Achieve Dynamic Risk Control

Trusta.AI integrates trusted execution environments, on-chain behavior data, and machine learning models into an automatically responsive risk control system that detects anomalies such as unauthorized access, impersonation, and tampering of AI Agents in real time and triggers permission adjustments.
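For illustration only, the detect-then-restrict loop described above might look roughly like the following Python sketch. The names (AgentSession, anomaly_score, the feature keys, and the thresholds) are assumptions made for this example and do not represent Trusta.AI's actual model or API.

```python
# A minimal sketch of the detect-then-restrict loop described above.
# AgentSession, anomaly_score, the feature names, and the thresholds are
# placeholders invented for illustration, not Trusta.AI's actual interfaces.
from dataclasses import dataclass

@dataclass
class AgentSession:
    did: str
    tier: str = "full"  # "full" -> "restricted" -> "frozen"

def anomaly_score(features: dict) -> float:
    """Stand-in for the ML model; combines risk signals into a score in [0, 1]."""
    weights = {"call_rate_z": 0.4, "signer_mismatch": 0.4, "attestation_fail": 0.2}
    return min(1.0, sum(weights[k] * features.get(k, 0.0) for k in weights))

def adjust_permissions(session: AgentSession, features: dict) -> AgentSession:
    score = anomaly_score(features)
    if score >= 0.8:
        session.tier = "frozen"       # suspected impersonation or tampering
    elif score >= 0.5:
        session.tier = "restricted"   # unusual behaviour, reduce privileges
    return session

session = AgentSession(did="did:agent:example-123")
print(adjust_permissions(session, {"signer_mismatch": 1.0, "attestation_fail": 1.0}).tier)
```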


Highly scalable and ecosystem-adaptable, rapidly forming a multi-chain trust network

Trusta.AI is currently deployed across multiple chain ecosystems, including Solana, BNB Chain, Linea, Starknet, Arbitrum, and Celestia, and has established integration partnerships with several leading AI Agent networks. With rapid replication and cross-chain collaboration capabilities, it is positioned to become the core hub of the Web3 trust network.


Introduction

On the eve of the Web3 ecosystem's move toward large-scale adoption, the main actors on-chain may not be the first billion human users but a billion AI Agents. As AI infrastructure matures and multi-agent collaboration frameworks develop rapidly, AI-driven on-chain agents are quickly becoming the main force of Web3 interactions. Trusta predicts that within the next 2-3 years, these AI Agents with autonomous decision-making capabilities will be the first to achieve large-scale adoption of on-chain transactions and interactions, and may even replace 80% of on-chain human behavior, becoming true on-chain "users."

These AI Agents are not the script-driven "Sybil bots" of the past but intelligent entities capable of understanding context, learning continuously, and making complex judgments independently. They are reshaping on-chain order, driving financial flows, and even guiding governance votes and market trends. The emergence of AI Agents marks the evolution of the Web3 ecosystem from a focus on "human participation" to a new paradigm of "human-machine symbiosis."

However, the rapid rise of AI Agents also brings unprecedented challenges: how do we identify and authenticate the identity of these agents? How do we assess the credibility of their actions? In a decentralized, permissionless network, how do we ensure that these agents are not misused, manipulated, or exploited for attacks?

Therefore, building on-chain infrastructure that can verify the identity and reputation of AI Agents has become the core proposition of the next stage of Web3's evolution. The design of identity, reputation mechanisms, and trust frameworks will determine whether AI Agents can truly collaborate seamlessly with humans and platforms and play a sustainable role in the future ecosystem.

Project Analysis

Project Introduction

Trusta.AI is committed to building Web3 identity and reputation infrastructure through AI.

Trusta.AI launched the first Web3 user value assessment system, the MEDIA reputation score, and has built the largest real-person verification and on-chain reputation protocol in Web3. It provides on-chain data analysis and real-person verification services for top public chains such as Linea, Starknet, Celestia, Arbitrum, Manta, Plume, and Sonic, as well as certain trading platforms and leading protocols. Over 2.5 million on-chain verifications have been completed on mainstream chains such as Linea, BSC, and TON, making it the largest identity protocol in the industry.

Trusta is expanding from Proof of Humanity to Proof of AI Agent, establishing a triple mechanism of identity establishment, identity quantification, and identity protection to enable on-chain financial services and on-chain social interaction for AI Agents, building a reliable trust foundation for the era of artificial intelligence.

Trust Infrastructure - AI Agent DID

In the future Web3 ecosystem, AI Agents will play a crucial role: they will not only complete on-chain interactions and transactions but also perform complex off-chain operations. However, distinguishing genuine AI Agents from operations driven by human intervention is central to decentralized trust; without a reliable identity mechanism, these agents can easily be manipulated, defrauded, or abused. This is precisely why the applications of AI Agents in social, financial, and governance contexts must be built on a solid foundation of identity.

  • Social Attributes of AI Agents: The application of AI Agents in social scenarios is becoming increasingly widespread. For example, the AI virtual idol Luna can independently operate social accounts and publish content; AIXBT acts as an AI-driven cryptocurrency market intelligence analyst, providing market insights and investment advice around the clock. These types of agents establish emotional and informational interactions with users through continuous learning and content creation, becoming a new type of "digital community influencer," playing an important role in guiding public opinion within on-chain social networks.

  • Financial Attributes of AI Agents:

  1. Autonomous Asset Management: Some advanced AI Agents have already achieved autonomous token issuance. In the future, through integration with blockchain's verifiable architecture, they will hold asset custody rights and complete full-process control from asset creation and intent recognition to automatic transaction execution, even seamless cross-chain operations. For example, Virtuals Protocol promotes autonomous token issuance and asset management by AI agents, allowing them to issue tokens based on their own strategies and truly become participants in and builders of on-chain economies, ushering in a far-reaching "AI subject economy" era.

  2. Intelligent Investment Decision-Making: AI Agents are gradually taking on the roles of investment manager and market analyst, relying on large models' ability to process real-time on-chain data to formulate precise trading strategies and execute them automatically. On platforms such as DeFAI, Paravel, and Polytrader, AI has been embedded in trading engines, significantly enhancing market judgment and operational efficiency and achieving true on-chain intelligent investing.

  3. On-chain Autonomous Payment: The essence of payment is a transfer of trust, and trust must be built on a clear identity. When an AI Agent makes on-chain payments, DID becomes a necessary prerequisite: it not only prevents identity forgery and abuse and reduces financial risks such as money laundering, but also meets the compliance and traceability needs of future DeFi, DAO, and RWA use cases. Combined with a reputation scoring system, DID can also help establish payment credit, giving protocols a basis for risk control and a foundation of trust (a minimal sketch of such a DID-gated check appears after this list).

  • Governance Attributes of AI Agents: In DAO governance, AI Agents can automate proposal analysis, evaluate community opinion, and predict implementation outcomes. Through deep learning on historical voting and governance data, agents can offer optimization suggestions to the community, improve decision-making efficiency, and reduce the risks of human governance.
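As referenced under on-chain autonomous payment above, a DID resolution check combined with a reputation threshold could gate a payment roughly as in the sketch below. resolve_did, reputation_score, and the threshold values are hypothetical placeholders, not Trusta.AI's published interfaces.

```python
# Hypothetical sketch of a DID-gated payment check; the registry, scores,
# and threshold are illustrative placeholders, not real Trusta.AI data or APIs.
from typing import Optional

def resolve_did(did: str) -> Optional[dict]:
    """Stand-in for on-chain DID resolution; returns a DID document or None."""
    registry = {"did:agent:alpha": {"controller": "0xabc...", "revoked": False}}
    return registry.get(did)

def reputation_score(did: str) -> float:
    """Stand-in for a SIGMA-style reputation lookup (0-100)."""
    return {"did:agent:alpha": 72.0}.get(did, 0.0)

def authorize_payment(did: str, amount: float, min_score: float = 60.0) -> bool:
    doc = resolve_did(did)
    if doc is None or doc["revoked"]:
        return False                      # identity cannot be verified
    if reputation_score(did) < min_score:
        return False                      # insufficient payment credit
    return amount > 0

print(authorize_payment("did:agent:alpha", 25.0))    # True
print(authorize_payment("did:agent:unknown", 25.0))  # False
```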

The application scenarios of AI Agents are becoming increasingly diverse, covering fields such as social interaction, financial management, and governance decision-making, while their autonomy and intelligence continue to improve. It is therefore crucial to ensure that each agent holds a unique and trustworthy identity (DID). Without effective identity verification, AI Agents may be impersonated or manipulated, leading to a collapse of trust and to security risks.

In a future Web3 ecosystem fully driven by intelligent agents, identity is not only the cornerstone of security but also a necessary line of defense for the healthy operation of the entire ecosystem.

As a pioneer in this field, Trusta.AI, with its leading technical strength and rigorous credit system, was the first to establish a comprehensive AI Agent DID mechanism, providing a solid guarantee for the trustworthy operation of intelligent agents, effectively preventing potential risks, and promoting the steady development of the Web3 intelligent economy.

Project Overview

Financing Status

January 2023: Completed a $3 million seed round led by SevenX Ventures and Vision Plus Capital, with participation from HashKey Capital, Redpoint Ventures, GGV Capital, SNZ Holding, and others.

June 2025: Completed a new round of financing, with investors including ConsenSys, Starknet, GSR, UFLY Labs, and others.

Team Situation

Peet Chen: Co-founder and CEO; former Vice President of Ant Digital Technology Group, Chief Product Officer of Ant Security Technology, and General Manager of the ZOLOZ global digital identity platform.

Simon: Co-founder and CTO, former head of the AI Security Lab at Ant Group, with fifteen years of experience applying artificial intelligence technology to security and risk management.

The team has deep technical expertise and practical experience in artificial intelligence, security risk control, payment system architecture, and identity mechanisms. It has long focused on applying big data and intelligent algorithms to security risk control, and on security optimization in underlying protocol design and high-concurrency trading environments, with solid engineering capability and a track record of delivering innovative solutions.

Technical Architecture

Technical Analysis

Identity Establishment - DID + TEE

Through a dedicated plugin, each AI Agent obtains a unique decentralized identifier (DID) on the chain and securely stores it in a Trusted Execution Environment (TEE). In this black box environment, critical data and computation processes are completely hidden, sensitive operations remain private at all times, and external parties cannot peek into the internal running details, effectively building a solid barrier for the information security of AI Agents.

For agents created before the plugin integration, identity is established through the comprehensive on-chain scoring mechanism, while agents that newly integrate the plugin directly obtain an "identity credential" issued through DID, establishing an AI Agent identity system that is self-controlled, authentic, and immutable.
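A rough sketch, under stated assumptions, of how issuing a DID to an agent while keeping its key material sealed inside a TEE might look is given below. MockTEE, the commitment scheme, and the "Sha256Commitment2024" verification-method type are illustrative inventions; a real deployment would use hardware enclaves and a registered cryptographic suite.

```python
# Illustrative sketch: issue a W3C-style DID document for an AI Agent while the
# private key stays inside a (mocked) TEE. All specific names here are assumptions.
import hashlib
import json
import secrets

class MockTEE:
    """Stand-in for a Trusted Execution Environment: keys never leave it."""
    def __init__(self):
        self._sealed = {}

    def generate_key(self, did: str) -> str:
        key = secrets.token_bytes(32)
        self._sealed[did] = key                 # sealed: stays inside the enclave
        return hashlib.sha256(key).hexdigest()  # only a public commitment leaves

def issue_agent_did(agent_name: str, tee: MockTEE) -> dict:
    did = f"did:agent:{secrets.token_hex(8)}"
    commitment = tee.generate_key(did)
    return {
        "@context": "https://www.w3.org/ns/did/v1",
        "id": did,
        "controller": did,
        "verificationMethod": [{
            "id": f"{did}#key-1",
            "type": "Sha256Commitment2024",      # illustrative, not a registered suite
            "publicKeyHex": commitment,
        }],
        "service": [{"id": f"{did}#agent", "type": "AIAgent", "name": agent_name}],
    }

print(json.dumps(issue_agent_did("market-analyst", MockTEE()), indent=2))
```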


Identity Quantification - the Pioneering SIGMA Framework

The Trusta team adheres to principles of rigorous evaluation and quantitative analysis, and is committed to building a professional and trustworthy identity assessment system.

  • The Trusta team was the first to build and validate the MEDIA Score model in the "Proof of Humanity" scenario. This model comprehensively quantifies on-chain user profiles across five dimensions: interaction amount (Monetary), participation (Engagement), diversity (Diversity), identity (Identity), and age (Age).

The MEDIA Score is a fair, objective, and quantifiable on-chain user value assessment system. With its comprehensive assessment dimensions and rigorous methodology, it has been widely adopted by leading public chains such as Celestia, Starknet, Arbitrum, Manta, and Linea as an important reference standard for airdrop eligibility screening. It considers not only interaction amount but also multidimensional indicators such as activity level, contract diversity, identity characteristics, and account age, helping project teams accurately identify high-value users and improving the efficiency and fairness of incentive distribution, reflecting its authority and wide recognition in the industry.
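To make the scoring idea concrete, here is a minimal sketch of a MEDIA-style weighted aggregation. The weights and the assumption that each dimension is pre-normalized to [0, 1] are illustrative choices; the article does not disclose Trusta.AI's actual formula.

```python
# Illustrative MEDIA-style aggregation; weights and normalization are assumptions.
MEDIA_WEIGHTS = {
    "monetary": 0.30,    # interaction amount
    "engagement": 0.25,  # participation / activity level
    "diversity": 0.20,   # contract diversity
    "identity": 0.15,    # identity characteristics
    "age": 0.10,         # account age
}

def media_score(dims: dict) -> float:
    """dims: each dimension already normalized to [0, 1]; returns a 0-100 score."""
    return 100 * sum(MEDIA_WEIGHTS[d] * dims.get(d, 0.0) for d in MEDIA_WEIGHTS)

print(media_score({"monetary": 0.7, "engagement": 0.8, "diversity": 0.5,
                   "identity": 1.0, "age": 0.6}))  # ~72.0
```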

Building on the success of this human-user evaluation system, Trusta has transferred and upgraded the MEDIA Score experience to the AI Agent scenario, creating the SIGMA evaluation system, which better matches the behavioral logic of intelligent agents.

  • Specification: The expertise and degree of specialization of the agent.
  • Influence: The social and digital influence of the agent.
  • Engagement: The consistency and reliability of its on-chain and off-chain interactions.
  • Monetary: The financial health and stability of the agent's token ecosystem.
  • Adoption Rate: The frequency and efficiency of AI agent usage.

The SIGMA scoring mechanism builds a closed-loop assessment from "capability" to "value" across these five dimensions. MEDIA evaluates the multifaceted engagement of human users, while SIGMA focuses on the expertise and stability of AI agents in specific domains, reflecting a shift from breadth to depth that better fits the needs of AI Agents.

The chain starts with professional capability (Specification). Engagement then reflects whether the agent is stably and continuously invested in real interactions, which is the key support for building subsequent trust and effectiveness. Influence is the reputational feedback generated in the community or network after participation, representing the agent's credibility and the spread of its impact.
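By analogy, a SIGMA-style aggregation over the five dimensions listed above could be sketched as follows. The weights and the "specification gate" are assumptions intended only to illustrate the capability-to-value ordering described in the text, not the real scoring model.

```python
# Illustrative SIGMA-style aggregation; weights and the capability gate are assumptions.
SIGMA_WEIGHTS = {
    "specification": 0.30,  # professional capability comes first
    "engagement": 0.20,     # sustained, reliable interaction
    "influence": 0.20,      # reputational feedback from the network
    "monetary": 0.15,       # health of the agent's token ecosystem
    "adoption": 0.15,       # usage frequency and efficiency
}

def sigma_score(dims: dict) -> float:
    """dims: each dimension normalized to [0, 1]; returns a 0-100 score."""
    base = sum(SIGMA_WEIGHTS[d] * dims.get(d, 0.0) for d in SIGMA_WEIGHTS)
    # An agent with little demonstrated capability is capped, however visible it is.
    cap = 0.4 if dims.get("specification", 0.0) < 0.2 else 1.0
    return 100 * min(base, cap)

print(sigma_score({"specification": 0.9, "engagement": 0.7, "influence": 0.6,
                   "monetary": 0.5, "adoption": 0.8}))  # ~72.5
```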
