AI and Blockchain Integration: Reshaping the Value of the Industrial Chain and Future Development Trends

The Evolution of the AI Industry and the Prospects of Integration with Blockchain

The artificial intelligence industry has made significant progress recently and is seen as a key driving force behind the Fourth Industrial Revolution. The emergence of large language models has significantly improved work efficiency across various sectors, with Boston Consulting Group estimating that GPT has enhanced overall productivity in the United States by about 20%. At the same time, the generalization capabilities of large models are considered a new paradigm in software design, differing from the precise coding methods of the past. Modern software design increasingly adopts the more generalized framework of large models, which can support a wider range of modal inputs and outputs. Deep learning technology has ushered in a new wave of prosperity for the AI industry, and this trend is gradually spreading to the cryptocurrency industry.

This report will delve into the development history of the AI industry, the classification of its technologies, and the impact of deep learning on the industry. We will analyze the current status and development trends of the upstream and downstream segments of the deep learning industry chain, including GPUs, cloud computing, data sources, and edge devices. We will also explore the intrinsic connection between cryptocurrency and the AI industry and map out the crypto-related AI industry chain.

Development History of the AI Industry

Since the AI industry began in the 1950s, academia and industry have developed various schools of thought for realizing artificial intelligence at different times and across different disciplinary backgrounds.

Modern artificial intelligence technology is mainly described by the term "machine learning," whose core idea is to let machines improve their performance on specific tasks by iterating repeatedly over data. The main steps are feeding data into an algorithm, training a model on that data, testing and deploying the model, and finally using the model to perform automated prediction tasks.

Currently, there are three main schools of thought in machine learning: connectionism, symbolism, and behaviorism, which mimic the human nervous system, human reasoning, and human behavior, respectively. Among them, connectionism, represented by neural networks, currently dominates and underpins what is known as deep learning. A neural network consists of an input layer, an output layer, and multiple hidden layers. When the number of layers and neurons is large enough, it can fit complex, general-purpose tasks. By continuously feeding in data to adjust the neurons' parameters, the network eventually converges toward an optimal state; this depth of stacked layers and neurons is also the origin of the term "deep."
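To make this idea concrete, the following is a minimal sketch of a tiny feedforward network (input layer, one hidden layer, output layer) trained by repeatedly adjusting its parameters on toy data. The layer sizes, learning rate, and XOR-style data are arbitrary illustrative choices, not taken from any model discussed in this article.

```python
import numpy as np

# Toy feedforward network: input layer -> hidden layer -> output layer.
# Layer sizes, learning rate, and the XOR-style toy data are illustrative choices.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)               # targets (XOR)

W1 = rng.normal(0, 1, (2, 8))   # input -> hidden weights
b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1))   # hidden -> output weights
b2 = np.zeros(1)
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    # Forward pass: data flows through the hidden layer to the output layer.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: adjust parameters to reduce the squared error.
    err = out - y
    grad_out = err * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ grad_out
    b2 -= lr * grad_out.sum(axis=0)
    W1 -= lr * X.T @ grad_h
    b1 -= lr * grad_h.sum(axis=0)

print(out.round(2))  # after training, outputs should approach the XOR targets (0, 1, 1, 0)
```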

Deep learning based on neural networks has itself gone through multiple iterations and evolutions, from the earliest neural networks to feedforward networks, RNNs, CNNs, and GANs, and finally to modern large models such as GPT that use the Transformer architecture. The Transformer is one evolutionary direction of neural networks: it encodes data from various modalities (such as audio, video, and images) into corresponding numerical representations, which are then fed into the network, enabling it to fit any type of data and achieve multimodal processing.
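To make the mechanism slightly more concrete, below is a minimal numpy sketch of the scaled dot-product self-attention at the heart of the Transformer, applied to a toy sequence of already-embedded tokens. The sequence length, embedding size, and random weights are illustrative assumptions only; a real Transformer stacks many such layers with multiple heads, feedforward blocks, and normalization.

```python
import numpy as np

# Minimal scaled dot-product self-attention (the core Transformer operation).
# Sequence length, embedding size, and random weights are illustrative.
rng = np.random.default_rng(1)
seq_len, d_model = 4, 8
tokens = rng.normal(size=(seq_len, d_model))   # toy "embedded" input tokens

Wq = rng.normal(size=(d_model, d_model))
Wk = rng.normal(size=(d_model, d_model))
Wv = rng.normal(size=(d_model, d_model))

Q, K, V = tokens @ Wq, tokens @ Wk, tokens @ Wv

scores = Q @ K.T / np.sqrt(d_model)                      # pairwise token similarities
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)           # softmax over each row
attended = weights @ V                                   # each position mixes information from all others

print(attended.shape)  # (4, 8): one context-aware vector per token
```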


The development of AI has gone through three waves of technological change:

The first wave occurred in the 1960s, driven by symbolist techniques, which addressed problems of general natural language processing and human-computer dialogue. During the same period, expert systems were born, such as the DENDRAL chemistry expert system developed at Stanford University.

The second wave occurred in the 1990s, with the proposal of Bayesian networks and behavior-based robotics, marking the birth of behaviorism. In 1997, IBM's Deep Blue defeated chess champion Garry Kasparov, which is regarded as a milestone in artificial intelligence.

The third wave began in 2006. The concept of deep learning was proposed, using artificial neural networks as the architecture to perform representation learning on data. Subsequently, deep learning algorithms continuously evolved, from RNNs and GANs to Transformers and Stable Diffusion, shaping this wave of technology and marking the heyday of connectionism.

During this period, several landmark events occurred:

  • In 2011, IBM's Watson defeated human contestants on the quiz show "Jeopardy!".
  • In 2014, Goodfellow proposed the GAN (Generative Adversarial Network).
  • In 2015, Hinton, LeCun, and Bengio published a landmark review of deep learning in the journal Nature, which drew an enormous response. In the same year, OpenAI was founded.
  • In 2016, AlphaGo defeated Go world champion Lee Sedol.
  • In 2017, Google proposed the Transformer algorithm, and large-scale language models began to emerge.
  • In 2018, OpenAI released GPT, and DeepMind released AlphaFold.
  • In 2019, OpenAI released GPT-2.
  • In 2020, OpenAI released GPT-3.
  • In late 2022, OpenAI launched ChatGPT, which reached one hundred million users within about two months; the GPT-4-based version followed in 2023.


Deep Learning Industry Chain

Current large language models mainly adopt deep learning methods based on neural networks. Models represented by GPT have sparked a new wave of artificial intelligence enthusiasm, with a large number of players entering this field, leading to a surge in market demand for data and computing power. This section will explore the composition of the industrial chain of deep learning algorithms, as well as the current status, supply-demand relationships, and future development of the upstream and downstream.

The training of Transformer-based large language models (LLMs) such as GPT is mainly divided into three steps:

  1. Pre-training: Input a large amount of data to find the optimal parameters for the neurons. This process is the most computationally intensive and requires repeated iterations to try various parameters.

  2. Fine-tuning: Use a small amount of high-quality data for training to improve the quality of model output.

  3. Reinforcement Learning: Establish a "reward model" to evaluate the output quality of the large model, and use this model to automatically iterate the parameters of the large model. Sometimes human participation in evaluation is also needed.

In short, pre-training requires a large amount of data and consumes the most GPU computing power; fine-tuning needs high-quality data to improve parameters; reinforcement learning iteratively adjusts parameters through a reward model to improve output quality.

The three main factors affecting the performance of large models are the number of parameters, the amount and quality of data, and computing power. Together, these determine the quality of the results and the generalization ability of large models. Assuming the number of parameters is p and the amount of data is n (measured in tokens), the required amount of computation can be estimated using empirical rules of thumb, which in turn gives an estimate of the necessary computing power and training time.

Computing power is usually measured in FLOPs, where one FLOP represents a single floating-point operation. As a rule of thumb, pre-training a large model requires approximately 6np FLOPs, while inference (processing input data and producing the model's output) requires approximately 2np FLOPs.
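As a rough illustration of how these rules of thumb are applied, the sketch below estimates training compute and single-chip wall-clock time for a hypothetical model. The parameter count, token count, per-chip throughput, and utilization figure are all assumed values for demonstration, not measurements of any real training run.

```python
# Back-of-the-envelope estimate using the 6np / 2np rules of thumb.
# All numbers below are assumptions for illustration, not measured figures.
params = 10e9            # p: hypothetical model with 10 billion parameters
tokens = 200e9           # n: hypothetical training set of 200 billion tokens
chip_flops = 312e12      # assumed per-chip FP16 throughput (FLOPs per second)
utilization = 0.4        # assumed fraction of peak throughput actually achieved

train_flops = 6 * tokens * params   # ~6np FLOPs for one pre-training run
infer_flops = 2 * tokens * params   # ~2np FLOPs to push the same token count through inference

seconds = train_flops / (chip_flops * utilization)
print(f"training compute: {train_flops:.2e} FLOPs")
print(f"single-chip time: {seconds / 86400:.0f} days")
```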

Early AI training mainly used CPUs, which were gradually replaced by GPUs such as Nvidia's A100 and H100. GPUs far exceed CPUs in energy efficiency for this workload, performing floating-point calculations primarily through their Tensor Core modules. A chip's computing power is usually quoted in FLOPS at FP16/FP32 precision.

Taking GPT-3 as an example: it has 175 billion parameters and a training data volume of 180 billion tokens (approximately 570 GB). A single pre-training run requires about 3.15×10^22 FLOPs, equivalent to roughly 584 days on a single Nvidia A100 SXM chip. Given that GPT-4's parameter count is reportedly about 10 times that of GPT-3, with the data volume also increased roughly 10-fold, it may require over 100 times the chip computing power.

In large model training, data storage is also a challenge. GPU memory is relatively limited; the A100, for example, has 80 GB and cannot hold all the data and model parameters at once. Chip bandwidth therefore matters: the speed at which data moves between storage and memory. When training across multiple GPUs, the data transfer rate between chips is also critical.
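To give a sense of why an 80 GB card fills up quickly, here is a hedged back-of-the-envelope sketch of the memory needed just for weights and optimizer state. The parameter count and the bytes-per-parameter assumptions (FP16 weights plus Adam-style optimizer state) are illustrative simplifications that ignore activations, gradients held in flight, and framework overhead.

```python
# Rough memory estimate for training, ignoring activations and framework overhead.
# Parameter count and byte costs per parameter are illustrative assumptions.
params = 175e9                 # hypothetical 175B-parameter model
bytes_weights_fp16 = 2         # FP16 weight
bytes_optimizer = 12           # e.g. FP32 master copy + Adam moment estimates (assumed)

total_bytes = params * (bytes_weights_fp16 + bytes_optimizer)
total_gb = total_bytes / 1e9
a100_memory_gb = 80

print(f"~{total_gb:,.0f} GB needed vs {a100_memory_gb} GB per A100")
print(f"=> at least {total_gb / a100_memory_gb:.0f} cards just to hold the model state")
```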


The deep learning industry chain mainly includes the following several links:

  1. Hardware GPU providers: Currently, Nvidia is in an absolute leading position. The academic community mainly uses consumer-grade GPUs (such as the RTX series), while the industrial sector mainly uses professional chips such as the H100 and A100. Google also has its own self-developed TPU chips.

  2. Cloud service providers: Provide flexible computing power and hosted training solutions for AI companies with limited funding. They mainly fall into three categories: traditional cloud vendors (such as AWS, Google Cloud, and Azure), vertical AI cloud computing platforms (such as CoreWeave and Lambda), and inference-as-a-service providers (such as Together.ai and Fireworks.ai).

  3. Training data source providers: Companies like Google and Reddit, which have access to a large amount of data or high-quality data, are gaining attention for providing substantial data for large models. There are also specialized data annotation companies that provide data for models in specific domains.

  4. Database providers: AI training and inference tasks mainly use "vector databases" for efficient storage and indexing of massive high-dimensional vector data. Major players include Chroma, Zilliz, Pinecone, Weaviate, etc. (a minimal similarity-search sketch follows this list).

  5. Edge devices: Provide cooling and power supply support for GPU clusters, including energy supplies (such as geothermal, hydrogen, and nuclear energy) and cooling systems (such as liquid cooling technology).

  6. Applications: Currently, AI applications are mainly concentrated in the fields of search, Q&A, etc., with retention rates and activity levels generally lower than traditional internet applications. Applications are mainly divided into three categories: those aimed at professional consumers, enterprises, and ordinary consumers.
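As promised in item 4 above, here is a minimal numpy sketch of the core operation a vector database performs: storing embedding vectors and retrieving those closest to a query by cosine similarity. The dimensions and random vectors are illustrative only; real systems such as Chroma or Pinecone add approximate nearest-neighbour indexes, persistence, and metadata filtering on top of this idea.

```python
import numpy as np

# Minimal vector store: keep embeddings, return the most similar ones to a query.
# Dimensions and random vectors are illustrative stand-ins for real embeddings.
rng = np.random.default_rng(2)
dim, num_docs = 64, 1000
doc_vectors = rng.normal(size=(num_docs, dim))        # pretend document embeddings
doc_vectors /= np.linalg.norm(doc_vectors, axis=1, keepdims=True)

def top_k(query: np.ndarray, k: int = 3) -> np.ndarray:
    """Return indices of the k stored vectors most similar to the query."""
    q = query / np.linalg.norm(query)
    scores = doc_vectors @ q                          # cosine similarity (unit vectors)
    return np.argsort(scores)[::-1][:k]

query_vec = rng.normal(size=dim)
print(top_k(query_vec))                               # indices of the 3 nearest documents
```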


The Relationship Between Cryptocurrency and AI

The core of Blockchain technology is decentralization and trustlessness. Bitcoin created a trustless value transfer system, while Ethereum further realized a decentralized, trustless smart contract platform. Essentially, the Blockchain network is a value network, where each transaction is a value conversion based on the underlying token.

In the traditional internet, value is reflected in stock prices through indicators such as the P/E ratio. In a blockchain network, the native token (such as ETH) embodies the multidimensional value of the network. It can not only yield staking rewards but also serve as a medium of value exchange, a store of value, and a consumable for network activities.

Tokenomics defines the relative value of the ecosystem's settlement asset (the native token). Although it is not possible to price each dimension separately, the token price comprehensively reflects this multidimensional value. Once tokens are issued for the network and put into circulation, value capture far exceeding that of traditional equity can be achieved.

The charm of token economics lies in its ability to assign value to any function or idea. It redefines and discovers value, which is crucial for various industries, including AI. In the AI industry, issuing tokens can reshape the value across different segments of the industry chain, motivating more people to delve into niche tracks. Tokens can also provide additional value for infrastructure, promoting the formation of the "fat protocols, thin applications" paradigm.

The immutability and trustless features of blockchain technology can also bring real value to the AI industry. For example, it allows models to use data while protecting user privacy; idle GPU computing power can be allocated through a global network to rediscover residual value.

Overall, token economics helps to reshape and discover value, while decentralized ledgers can address trust issues and allow value to flow globally. This combination brings new possibilities to the AI industry.


Overview of the AI Industry Chain in the Cryptocurrency Sector

  1. GPU supply side: Key projects include Render, Golem, etc. Render, as a relatively mature project, mainly targets video rendering tasks rather than large-model workloads. A GPU cloud computing power marketplace can serve not only AI model training and inference but also traditional rendering tasks, reducing reliance on a single market and the associated risk.

  2. Hardware bandwidth: Projects like Meson Network aim to establish a global bandwidth-sharing network. However, shared bandwidth may be a false need for large-model training, since the latency introduced by geographic dispersion can significantly reduce training efficiency.

  3. Data: Main projects include EpiK Protocol, Synesis One, Masa, etc. Compared to traditional data companies, Blockchain data providers have advantages in data collection, can price personal data, and incentivize users to contribute data through tokens.

  4. ZKML: Projects such as Zama and TruthGPT apply zero-knowledge proofs and related privacy-preserving cryptography to enable private computation and training. In addition, some general-purpose ZK projects such as Axiom and Risc Zero are also worth watching.

  5. AI applications: Currently, the focus is mainly on combining traditional Blockchain applications with automation and generalization capabilities. AI Agents (such as Fetch.AI), acting as a bridge between users and various applications, are expected to benefit first.

  6. AI Blockchain: like Tensor,
