The essence of a large language model is to force a self-consistent system of values out of its existing input data. Hallucinations can be seen as a natural byproduct and extension of that drive toward self-consistency. Many new scientific discoveries arise precisely when an 'anomaly' in the natural world cannot be explained by existing theory and cannot be reconciled with it, forcing the old theory to be abandoned. This roughly explains why, despite being trained on so much data, no large language model has yet spontaneously made a new scientific discovery: the model itself has no ability to judge right from wrong.