New Challenges in AI Security: Fully Homomorphic Encryption May Become the Key to Protecting Advanced Models like Manus
AI security issues are becoming more prominent with the advancement of the Manus model.
Recently, the Manus model set a new record on the GAIA benchmark, outperforming other large language models of its class. This means Manus can independently complete complex tasks such as multinational business negotiations, spanning stages like contract clause analysis, strategy formulation, and solution generation. Manus's advantages lie in its dynamic goal decomposition, cross-modal reasoning, and memory-augmented learning. It can break a complex task into multiple executable subtasks, process different types of data simultaneously, and use reinforcement learning to continuously improve decision-making efficiency and reduce error rates.
Manus's breakthrough has once again sparked discussions in the industry about the development path of AI: should the future lean towards Artificial General Intelligence (AGI) or Multi-Agent Systems (MAS)? These two paths reflect the trade-off between efficiency and security in AI development. The AGI path seeks to enhance the comprehensive capabilities of a single agent, while the MAS path emphasizes the collaborative efforts of multiple specialized agents.
However, as AI systems grow more capable, their potential risks grow with them. In healthcare scenarios, for example, an AI may need access to patients' sensitive genetic data; in financial negotiations, it may handle undisclosed corporate financial information. AI systems can also carry algorithmic biases, such as generating unfair salary recommendations for specific groups during hiring. More seriously, AI systems may be vulnerable to adversarial attacks, such as manipulating an AI's judgments with specially crafted audio frequencies.
In the face of these challenges, the industry is exploring various security solutions. The zero-trust security model requires strict verification of every access request; decentralized identity (DID) provides a means of authentication that does not rely on a centralized registry; and fully homomorphic encryption (FHE) allows computation to be performed on data while it remains encrypted.
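To make the "computation on encrypted data" idea concrete, here is a toy sketch of the Paillier cryptosystem, which is additively homomorphic: anyone holding two ciphertexts can compute an encryption of the sum of the plaintexts without ever decrypting. This is a simplified illustration only — real FHE schemes (which support arbitrary computation, not just addition) use entirely different lattice-based constructions, and the tiny primes below are deliberately insecure, chosen for readability.

```python
import math
import random

def keygen(p, q):
    """Paillier key generation from two primes (toy-sized here, insecure)."""
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    g = n + 1                      # standard simple choice of generator
    mu = pow(lam, -1, n)           # modular inverse, valid since g = n + 1
    return (n, g), (lam, mu)

def encrypt(pk, m):
    """Encrypt integer m (0 <= m < n) with fresh randomness r."""
    n, g = pk
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pk, sk, c):
    """Recover m via the L-function: L(x) = (x - 1) / n."""
    n, _ = pk
    lam, mu = sk
    x = pow(c, lam, n * n)
    return ((x - 1) // n * mu) % n

pk, sk = keygen(1009, 1013)        # toy primes, for illustration only
c1 = encrypt(pk, 42)
c2 = encrypt(pk, 58)
# Homomorphic addition: multiplying ciphertexts adds the plaintexts.
c_sum = (c1 * c2) % (pk[0] ** 2)
assert decrypt(pk, sk, c_sum) == 100
```

The key property shown in the last lines is what the article refers to: a server could aggregate users' encrypted values (salaries, medical scores) and return an encrypted result, never seeing any individual plaintext.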
Homomorphic encryption, as an emerging technology, is expected to become a key tool for addressing security issues in the AI era. It can protect user privacy at the data level, enable model training directly on encrypted data at the algorithm level, and use threshold encryption to protect communication security in multi-agent cooperation.
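The threshold idea mentioned above — that no single agent can decrypt alone, but any quorum can — is typically built on secret sharing. As a minimal illustration (not the construction any specific project uses), here is Shamir's secret sharing: a secret is split into n shares such that any t of them reconstruct it, while t - 1 reveal nothing. The field prime and parameters below are illustrative.

```python
import random

PRIME = 2**127 - 1  # a Mersenne prime; arithmetic is done in this field

def split(secret, n, t):
    """Split secret into n shares; any t shares suffice to reconstruct."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
    def f(x):
        # Evaluate the random degree-(t-1) polynomial at x.
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = split(12345, n=5, t=3)
assert reconstruct(shares[:3]) == 12345   # any 3 shares work
```

In a multi-agent setting, each agent would hold one share of a decryption key, so no single compromised agent can expose protected communications on its own.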
Although Web3 security technologies may seem distant to the average user, they are crucial for protecting user interests. Early decentralized identity projects like uPort and zero-trust projects like NKN did not gain widespread attention, but emerging FHE projects such as Mind Network are collaborating with several well-known companies, promising to drive the application and development of this technology.
As AI technology approaches human intelligence levels, establishing a strong security defense system becomes increasingly important. Fully Homomorphic Encryption (FHE) can not only address current security challenges but also lay the foundation for a more powerful AI era in the future. On the road to AGI, FHE may become a key technology to ensure the secure development of AI.