Nostr: Getting out of the social graph's digital quandary
Metadata Leakage
Nostr's NIP-04 direct messages are known to leak metadata. This seems like an obvious flaw and has been pointed out many times. After all, if anyone can see who you message, how often, when, the size of your messages, and the other people you mention, and can correlate all of that across multiple conversations, how private are your communications?
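To make the leak concrete, here is a minimal sketch (with placeholder key values) of what any relay or passive observer learns from a NIP-04 kind-4 event. The content is encrypted, but every other field is plaintext:

```python
import json

# A NIP-04 encrypted DM event as it travels over the wire.
# Placeholder hex keys; the ciphertext here is illustrative only.
event = {
    "kind": 4,                                  # NIP-04 encrypted DM
    "pubkey": "sender_pubkey_hex",              # sender (visible)
    "created_at": 1691850000,                   # timestamp (visible)
    "tags": [["p", "recipient_pubkey_hex"]],    # recipient (visible)
    "content": "bXkgc2VjcmV0...?iv=...",        # encrypted payload (hidden)
}

def leaked_metadata(ev):
    """Everything an observer learns without decrypting the content."""
    return {
        "sender": ev["pubkey"],
        "recipient": next(t[1] for t in ev["tags"] if t[0] == "p"),
        "timestamp": ev["created_at"],
        "ciphertext_size": len(ev["content"]),  # correlates with message length
    }

print(json.dumps(leaked_metadata(event), indent=2))
```

Collect enough of these events and you can reconstruct who talks to whom, when, and how much, without ever breaking the encryption.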
A common rebuttal from those who "get" Nostr (myself included) is "it's not a bug, it's a feature." It's reminiscent of the early days of the internet, when security was barely a concern and social platforms thrived on endless variants of anonymous confession apps. How fun it would be to show your friends how often you DM with whom! Most conversations don't really need to be confidential, so we might as well turn it into a game. Nostr is, above all, entertaining.
Seriously, though, metadata leaks are a problem. In one sense, Nostr's direct messages are a huge improvement over traditional DMs (the platform can no longer denounce you to the FBI), but in another they're a huge step back (anyone can denounce you to the FBI). I fully believe we will solve this for direct messages, but it may be harder to solve for other data types within Nostr.
Social content
One use case for Nostr that I've been thinking about for the past few months is a web of trust for reviews and recommendations. The same Sybil attacks that let bots threaten social networks are used as marketing tools by unscrupulous sellers. Purchased reviews and platform complicity have destroyed the credibility of online product reviews, just as keyword stuffing ruined Google search results. Proof-of-work is useless against this attack because the problem is not volume but false credibility. The right tool against false credibility is a web of trust: verifiable trustworthiness tied to the end user's own social graph.
This is a huge opportunity for Nostr, and I'm very excited about it. Imagine you're wondering whether Vibrating Restructuring Strikers (VRSF) can give you visible abs in under six days. There are over 4,000 five-star reviews on Amazon, and all the one-star reviews are full of typos and illogical statements. So it must work, and make you smarter too! Sadly, those visible abs are an illusion sold to you by Big Gym. Now imagine being able to ask three friends who were also duped what they think: you'll probably get a lower average rating, and you'll certainly be more confident that VRSF's vibrating foam isn't worth the cost.
This query can be made for any product, service, or cultural experience. And you're not limited to polling your entire social graph: you can easily curate a list of foodies to help you choose a restaurant, or of trusted bookworms to help you decide which book to read next.
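The mechanism above can be sketched in a few lines. This is a hypothetical toy model, not any existing Nostr implementation: a breadth-first walk of your follow graph finds reviewers within a few hops, and their ratings are averaged with closer friends weighted more heavily. Sybil accounts outside your graph contribute nothing:

```python
from collections import deque

# Toy follow graph and review set; all names and ratings are hypothetical.
follows = {
    "me": ["alice", "bob"],
    "alice": ["carol"],
    "bob": ["dave"],
}
reviews = {  # reviewer -> star rating for some product
    "alice": 2, "carol": 2, "dave": 1, "bot1": 5, "bot2": 5,
}

def trusted_rating(root, follows, reviews, max_hops=2):
    """Average the ratings of reviewers within max_hops of root,
    weighting each by 1/distance so closer friends count more."""
    dist = {root: 0}
    queue = deque([root])
    while queue:
        u = queue.popleft()
        if dist[u] >= max_hops:
            continue
        for v in follows.get(u, []):
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    total = weight = 0.0
    for who, stars in reviews.items():
        if who in dist and dist[who] > 0:  # skip self and strangers/bots
            w = 1.0 / dist[who]
            total += w * stars
            weight += w
    return total / weight if weight else None

print(trusted_rating("me", follows, reviews))  # 1.75: bot1/bot2 are ignored
```

The 4,000 five-star bot reviews simply never enter the computation, because no path in your follow graph reaches them.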
Right now, Big Tech cannot do that because Facebook doesn't share its social graph with Google, and Google doesn't share its business data with Facebook. But if an open database of people and businesses exists on Nostr, anyone can recombine these silos of data in new and interesting ways.
But let's consider the cons.
An open social graph coupled with testimonials means you can not only ask your friends what they think of a given product, but also:
The last one is especially interesting because it means you can find reasonable answers to some interesting questions:
This is exactly the kind of social experiment Facebook has historically drawn heavy criticism for. Democratizing the data doesn't make such analysis any less of an invasion of individual privacy, especially since complex analysis is computationally intensive and its results can be kept private. To be clear, this problem goes far beyond combining social information with public reviews; it's just one example of many similar problems that can arise from open databases of user behavior.
To put it bluntly, we risk handing a surveillance panopticon over to would-be overlords without reservation, just as the walled gardens of the past were managed and monetized by manipulating opinions and interests.
**How do we solve this?**
So what should we do? I want a rating system based on my social graph, but not at the expense of our collective privacy. We need to keep this threat in mind as we build Nostr for novel use cases. Perhaps zero-knowledge proofs can help here, or perhaps we can address it simply by reconfiguring data custody. In the future, users could post to a small set of trusted relays that won't forward their data, similar to @fiatjaf's NIP-29 chat proposal. Those relays could then support richer query interfaces, so that questions get answered without revealing too much. An interesting aspect of this approach is that it may push relays toward the PDS model used by BlueSky. Not all data needs to be treated the same way, which gives us flexibility in implementing these heuristics. Just as a note can be broadcast to everyone or sent to a single person or group, some reviews or other activity might be made public only to those who have authenticated in some way.
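One way such a richer query interface might look, as a purely hypothetical sketch (no such relay API exists today): the relay holds individual reviews privately and answers only aggregate questions, refusing to answer when the group is so small that an answer would deanonymize the raters. A simple k-anonymity floor illustrates the idea:

```python
# Hypothetical trusted-relay sketch: individual reviews stay private;
# only aggregates over sufficiently large groups are released.
K_THRESHOLD = 3  # assumed minimum number of raters before answering

class PrivateReviewRelay:
    def __init__(self):
        self._reviews = {}  # (item, reviewer) -> stars; never exposed raw

    def submit(self, item, reviewer, stars):
        self._reviews[(item, reviewer)] = stars

    def query_average(self, item, contacts):
        """Mean rating among `contacts`, or None if fewer than
        K_THRESHOLD of them rated the item (too easy to deanonymize)."""
        ratings = [s for (i, who), s in self._reviews.items()
                   if i == item and who in contacts]
        if len(ratings) < K_THRESHOLD:
            return None  # refuse: not enough raters to hide behind
        return sum(ratings) / len(ratings)

relay = PrivateReviewRelay()
for who, stars in [("alice", 2), ("bob", 1), ("carol", 3), ("mallory", 5)]:
    relay.submit("vrsf-foam", who, stars)

print(relay.query_average("vrsf-foam", {"alice", "bob", "carol"}))  # 2.0
print(relay.query_average("vrsf-foam", {"alice"}))                  # None
```

A real design would need rate limiting and protection against intersection attacks (asking about overlapping contact sets to isolate one person), but the shape is the point: the relay answers the question without forwarding or exposing the underlying events.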