Two Subnets, One Model
Inside the Yanez-Bitmind
Deepfake Detection Partnership
Deepfake fraud: in a matter of months, this crime went from sci-fi speculation to a constant, pressing problem. But this week, two of Bittensor’s most battle-hardened subnets announced a partnership designed to take it on. So could the combined forces of SN54 and SN34 break new ground in the fight against fraud?
@macrozack / editor, taotimes.ai

This Monday, two subnets teamed up, promising to train a new model to fight fraud.
SN54 (Yanez) and SN34 (Bitmind) formalized a partnership to build a deepfake detection model, fine-tuned for faces and purpose-built for identity verification.
It’s one of those combinations that you wouldn’t necessarily have predicted, but that makes total sense once you see it. And having worked with both founders - SN34’s Ken Miyachi and SN54’s Jose Caldera - I find it even more promising.
Few people worldwide know as much about deepfakes as Ken, and few can match Jose on the latest techniques in financial crime. I’ve seen their commitment, knowledge, and experience first-hand; they’re two of the ecosystem’s best.
But that’s not what gives this partnership such potential. It’s that neither team could tackle this problem effectively alone.
By training a joint model - and one targeting an enterprise market that is bleeding money - Yanez and Bitmind can be the first to get results in an unaddressed dark zone of financial crime.
Bitmind's proven AI content detection infrastructure, paired with Yanez's two decades of biometric expertise and proprietary face data: together, as partnerships go, this is one of the network’s most promising. Here’s why.
Facing the size of fraud
The numbers are stark. Deepfake fraud attempts in financial services have grown over 2,000% in the past three years. Identity fraud attempts using deepfakes, 3,000% - in 2023 alone. In the same year, attacks bypassing biometric authentication climbed 704%.
By 2024, the average cost of a single deepfake-related incident for a business sat just under $500,000 - so it’s not surprising to learn that generative AI fraud losses in the US are projected to reach $40 billion by 2027.
Increasingly, the attack surface is face-based. Fraudsters use face-swap deepfakes and virtual cameras to defeat the biometric liveness checks that guard onboarding, KYC, and account access flows. What’s more, the cryptocurrency sector is the most exposed industry - accounting for 88% of all detected deepfake fraud cases. Financial services follow close behind.
Growing problem, clear cost. And to complete the trinity: existing solutions are falling short. Today’s detection tools were built for general AI-generated content - broad classifiers trained on broad datasets. They were never optimized for the specific, high-stakes problem of biometric-grade face verification, so their accuracy drops when confronted with face-swap attacks in real-world identity workflows.
That’s the exact gap that Yanez and Bitmind are closing in on.
Joining forces
The collaboration has deeper roots than the announcement suggests.
Prior to formal partnership, team Yanez had been integrating Bitmind's detection model into its pipeline for several months. They found a clear technical fit: Bitmind’s strong general-purpose detection model hadn’t been trained on the kind of biometric face data that Yanez had spent years curating - but was perfect for it.
The early work involved testing Yanez's synthetic identity algorithms against Bitmind's detection capabilities, a kind of adversarial sparring that revealed how much better the model could become with domain-specific training data. Conversations between the teams deepened at Token2049 in Singapore, where Jose met Ken in person and began mapping out what a joint effort could look like.
"We had the data and the domain expertise in biometrics, and they had the detection model and the infrastructure," said Jose Caldera, founder of Yanez. "Once we started stress-testing our synthetic identities against their model, the opportunity was obvious. We could build something together that would be genuinely hard to replicate."

The sum of both subnets
On SN34, Bitmind has established itself as one of Bittensor's most proven subnets. Beyond its consistent top-20 price ranking, SN34’s AI-generated content detection model is already serving enterprise clients, and its infrastructure provides the foundation on which the joint model is being built.
Yanez, operating SN54, brings a portfolio of biometric patents, a proprietary dataset of face images, and a team with over 20 years of collective experience in identity security across multiple companies. Its existing client base includes identity verification providers who already use Yanez's synthetic identity images to test and harden their own systems. The market is finally beginning to take note: SN54 has grown over 50% in the past three months.
The joint model takes Bitmind's original detection architecture and fine-tunes it using Yanez's face data - narrowing the focus from general AI content to the specific image types and attack vectors that appear during KYC-style identity verification. It is a deliberately narrow model for a deliberately high-value problem.
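To make the idea concrete, here is a minimal, illustrative sketch of that kind of domain fine-tuning - not Bitmind or Yanez code, and the data here is synthetic. It keeps a general-purpose backbone frozen and trains only a new binary "real vs. deepfake" head on face-specific embeddings, which is one common way to specialize a broad detector for a narrow task:

```python
import numpy as np

# Illustrative sketch only (not the actual Bitmind/Yanez pipeline):
# specialize a frozen general-purpose detector by training a new
# binary classification head on domain-specific face embeddings.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def finetune_head(embeddings, labels, lr=0.1, epochs=200):
    """Train a logistic-regression head on frozen backbone embeddings."""
    n, d = embeddings.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        p = sigmoid(embeddings @ w + b)   # predicted P(deepfake)
        grad = p - labels                 # gradient of BCE loss w.r.t. logit
        w -= lr * (embeddings.T @ grad) / n
        b -= lr * grad.mean()
    return w, b

# Synthetic stand-in for face embeddings: two separable clusters,
# one for genuine faces, one for face-swap fakes.
real = rng.normal(-1.0, 0.5, size=(100, 8))
fake = rng.normal(+1.0, 0.5, size=(100, 8))
X = np.vstack([real, fake])
y = np.concatenate([np.zeros(100), np.ones(100)])

w, b = finetune_head(X, y)
acc = ((sigmoid(X @ w + b) > 0.5) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

The design choice this illustrates is the one the article describes: rather than retraining a broad classifier from scratch, the heavy general-purpose representation is reused and only the final decision layer is adapted to the narrow, high-value KYC face-verification problem.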
"The general-purpose detection problem and the biometric verification problem look similar from the outside, but they require very different training approaches," said Ken Miyachi, founder of Bitmind. "This partnership lets us go deep on a use case where accuracy actually matters - where the cost of a false negative is measured in millions."
The bigger picture for Bittensor
For the Bittensor ecosystem, the partnership represents something worth paying attention to: two subnets with complementary capabilities choosing collaboration over competition to reach an enterprise market that decentralized AI has not yet served at this level.
This is a joint product built from shared infrastructure and shared data, aimed at customers who are already spending money to solve the problem it addresses. The model's target market - identity verification providers, banks, fintech platforms, crypto exchanges - has a measurable demand curve and an urgent pain point.
It is also a proof point for the subnet model itself. If Bittensor's architecture is meant to produce specialized intelligence networks, then the natural next step is those networks combining their specializations. The Yanez–Bitmind partnership is an early example of what that looks like in practice.

The longer arc: proof of humanhood
Deepfake detection is one component of a larger system that Yanez is building toward: a decentralized proof of unique humanhood layer.
Generative AI means that the internet has lost a reliable mechanism for verifying whether the entity on the other side of a transaction, a vote, or a conversation is a real human being. And as frontier models improve, the problem worsens - rapidly.
What’s more, two of its use cases are particularly relevant to Bittensor.
The first is sybil resistance for Web3 infrastructure - DAO voting, airdrops, and reward systems all suffer from the one-person-many-wallets problem, and protocol-level proof of uniqueness is the cleanest solution.
The second is agentic payments: as AI agents take on more of the internet's financial plumbing, the systems that process payments will need cryptographically sound verification that a human authorized a transaction. The face deepfake detection model being built with Bitmind is a foundational piece of that architecture.
Tomorrow’s targets
Model development is underway. The teams have committed to publishing technical updates and benchmark results as the work progresses. Further details on Yanez's proof of unique humanhood product will follow as it moves toward release.
For now, the signal is clear enough: two established subnets have found a way to build something together that is worth more than what either could build apart. In an ecosystem that sometimes struggles to articulate its value beyond emissions and incentive mechanics, that’s a story worth sharing.
Follow @yanez__ai and @BitMindAI for the latest updates on the model’s development

Bringing TAO to the world.
