AI and DePIN Intersection: The Rise of Decentralized GPU Networks and Comparison with Mainstream Projects
Since 2023, AI and DePIN have been hot topics in the Web3 space, with market capitalizations of roughly $30 billion and $23 billion, respectively. This article explores the intersection of the two and examines the development of protocols in this field.
In the AI technology stack, DePIN networks give AI practical utility by supplying computing resources. Demand from large technology companies has created a GPU shortage, leaving other developers without enough GPUs for their workloads. Developers often fall back on centralized cloud providers, but the inflexible, long-term contracts required for high-performance hardware lead to inefficiency.
DePIN offers a more flexible and cost-effective alternative, using token rewards to incentivize resource contributions that align with network goals. In AI, DePIN crowdsources GPU resources from individual owners and data centers into a unified supply for users who need hardware access. These networks give developers who need compute customizable, on-demand access, while offering GPU owners additional income.
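The token-incentive mechanism described above can be sketched as a simple pro-rata payout. This is an illustrative toy model, not any specific protocol's reward formula; the function name and epoch-pool framing are assumptions for the example.

```python
# Illustrative sketch (not any specific DePIN protocol): distribute a fixed
# epoch reward pool among GPU providers in proportion to verified GPU-hours.

def distribute_rewards(epoch_pool: float, contributions: dict[str, float]) -> dict[str, float]:
    """contributions maps provider id -> verified GPU-hours this epoch."""
    total = sum(contributions.values())
    if total == 0:
        return {p: 0.0 for p in contributions}
    return {p: epoch_pool * hours / total for p, hours in contributions.items()}

rewards = distribute_rewards(1000.0, {"alice": 30.0, "bob": 10.0})
print(rewards)  # {'alice': 750.0, 'bob': 250.0}
```

Real networks layer reputation, uptime checks, and slashing on top of this basic proportional split.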
Overview of AI DePIN Networks
Render is a pioneer of P2P GPU-compute networks. It initially focused on rendering graphics for content creation and later expanded to AI computing tasks ranging from neural radiance fields (NeRF) to generative AI.
Highlights:
Founded by OTOY, a cloud graphics company with Oscar-winning technology.
Its GPU network has been used by major players in the entertainment industry such as Paramount Pictures, PUBG, and Star Trek.
Collaborates with Stability AI and Endeavor to integrate their AI models with 3D content-rendering workflows using Render's GPUs.
Has approved multiple compute clients, integrating more GPUs from DePIN networks.
Akash positions itself as a "supercloud" alternative to traditional platforms, supporting storage, GPU, and CPU computing. Using developer-friendly tools such as the Akash container platform and Kubernetes-managed compute nodes, it deploys software seamlessly across environments and can run any cloud-native application.
Highlights:
Supports a wide range of computing tasks, from general-purpose computing to web hosting.
AkashML lets its GPU network run more than 15,000 models from Hugging Face, with which it has integrated.
Akash hosts notable applications such as a chatbot for Mistral AI's LLM, Stability AI's SDXL text-to-image model, and Thumper AI's new foundation model AT-1.
Platforms for building the metaverse, deploying AI, and running federated learning are leveraging its Supercloud.
io.net provides access to distributed GPU cloud clusters designed specifically for AI and ML use cases, aggregating GPUs from data centers, crypto miners, and other decentralized networks. The company was previously a quantitative trading firm; it pivoted to its current business after high-performance GPU prices rose sharply.
Highlights:
Its IO-SDK is compatible with frameworks such as PyTorch and TensorFlow, and its multi-layer architecture scales automatically and dynamically with computing needs.
Supports creating three different types of clusters, which can be launched within two minutes.
Strong collaborative efforts to integrate the GPUs of other DePIN networks, including Render, Filecoin, Aethir, and Exabits.
Gensyn offers GPU computing power focused on machine learning and deep learning. It claims a more efficient verification mechanism than existing methods by combining proof-of-learning for validation, a graph-based pinpoint protocol for re-running disputed verification work, and Truebit-style incentive games involving staking and slashing of compute providers.
Highlights:
An equivalent V100 GPU costs an estimated $0.40/hour, a significant cost saving.
Through proof stacking, pre-trained base models can be fine-tuned for more specific tasks.
These foundational models will be decentralized, globally owned, and will provide additional functionalities beyond the hardware computing network.
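The Truebit-style staking-and-slashing game mentioned above can be illustrated with a deliberately minimal toy: a solver posts a stake and submits a result, a verifier re-runs the task, and a mismatch forfeits the stake. This is a sketch of the general incentive pattern, not Gensyn's actual protocol; all names here are invented for the example.

```python
# Toy model of a Truebit-style incentive game (illustrative only):
# a solver stakes tokens and submits a result; a verifier re-runs the
# task, and a mismatch slashes the solver's stake.

def settle(stake: float, solver_result: int, recomputed: int) -> tuple[float, bool]:
    """Returns (stake returned to solver, whether the result was accepted)."""
    if solver_result == recomputed:
        return stake, True   # honest: stake returned, result accepted
    return 0.0, False        # caught cheating: stake slashed

task = lambda x: x * x       # the computation being outsourced

honest = settle(100.0, task(7), task(7))
cheat = settle(100.0, 48, task(7))
print(honest, cheat)  # (100.0, True) (0.0, False)
```

The economic idea is that re-running only disputed work keeps verification cheap while the threat of slashing keeps solvers honest.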
Aethir specializes in enterprise-grade GPUs and focuses on compute-intensive fields, primarily artificial intelligence, machine learning (ML), and cloud gaming. Containers in its network act as virtual endpoints for executing cloud-based applications, moving workloads off local devices for a low-latency experience. To ensure high-quality service, it relocates GPUs closer to data sources based on demand and location.
Highlights:
In addition to artificial intelligence and cloud gaming, Aethir has also expanded into cloud phone services and partnered with APhone to launch a decentralized cloud smartphone.
Established extensive partnerships with large Web2 companies such as NVIDIA, Super Micro, HPE, Foxconn, and Well Link.
Has multiple Web3 partners, such as CARV, Magic Eden, Sequence, and Impossible Finance.
Phala Network serves as the execution layer for Web3 AI solutions. Its blockchain is a trustless cloud-computing solution that addresses privacy through trusted execution environments (TEEs). Its execution layer is not a compute layer for AI models; instead, it lets AI agents be controlled by on-chain smart contracts.
Highlights:
Acts as a co-processor protocol for verifiable computation while enabling AI agents to use on-chain resources.
Its AI agent contracts can access top large language models such as OpenAI, Llama, Claude, and Hugging Face through Redpill.
Future plans include zk-proofs, multi-party computation (MPC), fully homomorphic encryption (FHE), and other proof systems.
Will later support H100s and other TEE-capable GPUs to boost computing power.
Project Comparison
| | Render | Akash | io.net | Gensyn | Aethir | Phala |
|---|---|---|---|---|---|---|
| Hardware | GPU & CPU | GPU & CPU | GPU & CPU | GPU | GPU | CPU |
| Business Focus | Graphics rendering and AI | Cloud computing, rendering, and AI | AI | AI | AI, cloud gaming, and telecom | On-chain AI execution |
| AI Task Type | Inference | Both | Both | Training | Training | Execution |
| Work Pricing | Performance-based pricing | Reverse auction | Market pricing | Market pricing | Bidding system | Stake-based |
| Blockchain | Solana | Cosmos | Solana | Gensyn | Arbitrum | Polkadot |
| Data Privacy | Encryption & hashing | mTLS authentication | Data encryption | Secure mapping | Encryption | TEE |
| Work Fee | 0.5–5% per job | 20% USDC, 4% AKT | 2% USDC, 0.25% reserve fee | Low fee | 20% per session | Proportional to staked amount |
| Security | Render Proof | Proof of Stake | Proof of Computation | Proof of Stake | Render Capability Proof | Inherited from relay chain |
| Completion Proof | – | – | Time-lock proof | Proof of learning | Proof of rendering work | TEE proof |
| Quality Assurance | Dispute resolution | – | – | Verifiers and whistleblowers | Checker nodes | Remote attestation |
| GPU Cluster | No | Yes | Yes | Yes | Yes | No |
Why These Capabilities Matter
Availability of Clusters and Parallel Computing
Distributed computing frameworks implement GPU clusters, providing more efficient training without compromising model accuracy while also improving scalability. Training more complex AI models demands substantial computing power, which often relies on distributed computing. For perspective, OpenAI's GPT-4 reportedly has over 1.8 trillion parameters and was trained over 3-4 months on roughly 25,000 Nvidia A100 GPUs across 128 clusters.
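A quick back-of-the-envelope calculation puts the cited training scale in perspective. The day counts below are assumptions derived from the "3-4 months" figure in the text.

```python
# Rough estimate of total GPU-hours for the GPT-4 training run cited above:
# 25,000 A100 GPUs running continuously for roughly 3-4 months.

gpus = 25_000
days_low, days_high = 90, 120              # ~3 to ~4 months (assumed)
hours_low = gpus * days_low * 24
hours_high = gpus * days_high * 24
print(f"{hours_low / 1e6:.0f}-{hours_high / 1e6:.0f} million GPU-hours")
# 54-72 million GPU-hours
```

Numbers of this magnitude are why aggregating idle GPUs across a decentralized network is only meaningful if they can be coordinated into clusters, as the next paragraph discusses.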
Previously, Render and Akash offered only single-purpose GPUs, which may have limited market demand for their GPUs. Most key projects have now integrated clusters for parallel computing: io.net collaborates with projects like Render, Filecoin, and Aethir to incorporate more GPUs into its network, and successfully deployed over 3,800 clusters in the first quarter of 2024. Although Render does not support clusters, it works similarly by breaking a single job across multiple nodes that process different frame ranges simultaneously. Phala currently supports only CPUs but allows CPU workers to be clustered.
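The frame-splitting approach attributed to Render above can be sketched as a simple scheduling function: divide a frame range into disjoint chunks, one per node. This is illustrative scheduling logic under that description, not Render's actual implementation.

```python
# Sketch of frame-range splitting: divide frames [first, last] across
# `nodes` workers so each renders a disjoint, near-equal chunk in parallel.

def split_frames(first: int, last: int, nodes: int) -> list[range]:
    total = last - first + 1
    base, extra = divmod(total, nodes)     # spread the remainder evenly
    chunks, start = [], first
    for i in range(nodes):
        size = base + (1 if i < extra else 0)
        chunks.append(range(start, start + size))
        start += size
    return chunks

print(split_frames(1, 10, 3))  # [range(1, 5), range(5, 8), range(8, 11)]
```

Because frames are independent, this embarrassingly parallel split needs no inter-node communication, unlike distributed model training.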
Incorporating a cluster framework into an AI workflow network is important, but the number and type of cluster GPUs required to meet AI developers' needs is a separate issue, discussed in later sections.
Data Privacy
Developing AI models requires the use of large datasets, which may come from various sources and take on different forms. Sensitive datasets such as personal medical records and user financial data may face the risk of being exposed to model providers. Samsung has internally prohibited the use of ChatGPT due to concerns that uploading sensitive code to the platform could infringe on privacy, and Microsoft's 38TB private data leak incident further highlights the importance of taking adequate security measures when using AI. Therefore, having various data privacy methods is crucial for returning data control to the data providers.
Most of the projects covered use some form of data encryption to protect data privacy, ensuring that transmission from data providers to model providers (the data receivers) in the network is protected. Render uses encryption and hashing when publishing rendering results back to the network, io.net and Gensyn adopt forms of data encryption, and Akash uses mTLS authentication so that only tenant-selected providers receive data.
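The "encryption and hashing" pattern described above boils down to pairing a result with an integrity tag the requester can check. Here is a minimal standard-library sketch of that idea, not any project's actual scheme; the shared key is assumed to be exchanged out of band.

```python
# Minimal integrity-check sketch: a node publishes a result together with
# a keyed hash (HMAC-SHA256) so the requester can verify it was not
# tampered with in transit. Illustrative only.

import hashlib
import hmac

def publish(result: bytes, shared_key: bytes) -> tuple[bytes, str]:
    tag = hmac.new(shared_key, result, hashlib.sha256).hexdigest()
    return result, tag

def verify(result: bytes, tag: str, shared_key: bytes) -> bool:
    expected = hmac.new(shared_key, result, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)  # constant-time comparison

data, tag = publish(b"rendered-frame-0042", b"shared-secret")
print(verify(data, tag, b"shared-secret"))         # True
print(verify(b"tampered-data", tag, b"shared-secret"))  # False
```

A keyed hash only guarantees integrity and origin; confidentiality still requires encrypting the payload itself, which is where the FHE and TEE approaches below go further.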
However, io.net recently partnered with Mind Network to launch fully homomorphic encryption (FHE), allowing encrypted data to be processed without the need for prior decryption. By enabling data to be securely transmitted for training purposes without revealing identities and data content, this innovation can better ensure data privacy compared to existing encryption technologies.
Phala Network introduces TEEs, secure areas within the main processor of connected devices. This isolation prevents external processes from accessing or modifying data, regardless of their permission level, even for individuals with physical access to the machine. Beyond TEEs, it also incorporates zk-proofs in its zkDCAP verifier and jtee command-line interface for programs integrated with the RiscZero zkVM.
Proof of Completed Computation and Quality Checks
The GPUs these projects provide power a wide range of services, from graphics rendering to AI computation, so the final quality of a task may not always meet the user's standards. A proof of completion indicates that the specific GPU the user rented actually ran the required service, and quality checks benefit users requesting such work.
Once computation finishes, Gensyn and Aethir each generate a proof that the work was done, while io.net's proof shows that the rented GPU's performance was fully utilized without issues. Gensyn and Aethir both run quality checks on completed computations: Gensyn uses validators that re-run parts of the generated proofs to cross-check them, with whistleblowers acting as an extra layer of verification over the validators, while Aethir uses checker nodes to assess service quality and penalizes substandard service. Render recommends a dispute-resolution process; if the review committee finds problems with a node, that node is penalized. When Phala completes work, a TEE proof is generated to verify that the AI agent performed the required operations on-chain.
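The validator re-run idea described above can be sketched as a random spot check: re-compute a sample of submitted outputs and flag any mismatches. This is an illustrative simplification, not any project's exact scheme; the function and variable names are invented for the example.

```python
# Illustrative spot-check: a validator re-runs a random sample of
# submitted task outputs and returns the inputs whose claimed results
# do not match recomputation.

import random

def spot_check(submissions: dict[int, int], task, sample_size: int, seed: int = 0) -> list[int]:
    """submissions maps task input -> claimed output; returns failing inputs."""
    rng = random.Random(seed)  # seeded here only for reproducibility
    sample = rng.sample(sorted(submissions), min(sample_size, len(submissions)))
    return [x for x in sample if task(x) != submissions[x]]

claimed = {2: 4, 3: 9, 5: 26}   # 5 -> 26 is wrong (5*5 = 25)
bad = spot_check(claimed, lambda x: x * x, sample_size=3)
print(bad)  # [5]
```

Sampling keeps verification far cheaper than re-running every task, at the cost of only probabilistic detection; pairing it with staking and slashing makes cheating unprofitable in expectation.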
Hardware Statistics
| | Render | Akash | io.net | Gensyn | Aethir | Phala |
|---|---|---|---|---|---|---|