You Built It. Now Get Paid. Ethereum’s Devansh Mehta on Fixing Open Source Funding


Devansh Mehta, AI x Public Goods Governance Lead at the Ethereum Foundation, shares how his journey from grant writing burnout led to reimagining public goods funding through web3 and AI. From quadratic funding to deep funding, he explains how tools like dependency graphs and machine learning competitions can help allocate resources based on real impact—without applications, pitch decks, or chasing grants. This interview dives into the evolution of decentralized funding models and the potential for open-source contributors to get paid automatically for the value they create.

What if getting funded didn’t require a pitch deck—or even applying at all? Devansh Mehta, the Ethereum Foundation’s AI x Public Goods Governance Lead, is working on exactly that. We caught up after ETHSF to talk about how blockchain and machine learning might help rethink how open source work gets funded using web3 innovation—and how to make sure impact doesn’t get lost just because no one’s shouting about it.

What do you find most exciting about blockchain, and how has your thinking about it developed over time?

Early on, what got me excited was the idea of blockchain as a public database.

That alone lets you do a lot of interesting things. One obvious use case is sending money—and that’s still the main thing blockchains are used for. Another is the idea of a smart contract.



The best explanation I’ve heard of a smart contract is a vending machine: you put money in, and you get something out. It works exactly like that—you put something in, and based on code and rules, something comes back out. No middlemen needed.
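The vending-machine idea can be sketched as a tiny state machine. This is a hypothetical illustration in Python (a real Ethereum contract would be written in a language like Solidity or Vyper); the class name, price, and units are made up for the example:

```python
class VendingContract:
    """Toy model of a smart contract: deterministic rules, no middleman.

    Anyone can call `buy`; the code alone decides whether to dispense.
    """
    PRICE = 2  # hypothetical price in some unit of value

    def __init__(self, stock):
        self.stock = stock    # items available to dispense
        self.balance = 0      # value the "contract" has collected

    def buy(self, payment):
        # Underpaid or sold out: refund everything, dispense nothing.
        if payment < self.PRICE or self.stock == 0:
            return payment, None
        self.stock -= 1
        self.balance += self.PRICE
        return payment - self.PRICE, "item"  # change plus the item

machine = VendingContract(stock=1)
change, item = machine.buy(payment=3)  # enough money in, item comes out
```

The point is that the outcome depends only on the code and the machine’s state, not on anyone’s approval.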

As I spent more time in the space, I started thinking more about the architecture. What I find really interesting about blockchain is that it flips the typical relationship between front-end and back-end. In most apps, the back-end is private—you control it, and only the front-ends you authorize can talk to it.

With blockchain, the back-end is public—anyone in the world can build a front-end that interacts with it. That level of openness is pretty unique and opens up so many creative possibilities.

But what I’m perhaps most passionate about—and what really kept me in web3—is thinking about how it can solve the double selling of impact.

I come from the nonprofit world, and I saw that people would make an impact once and then get a marketing team to sell that same story to as many funders as possible. And they’d walk away with a lot of the money. Whereas people who kept trying to actually create new impact—but didn’t put effort into marketing—would fall short.

So, the idea of saying, “Okay, I created this impact, I recorded it on a public database, someone bought it, and now I have to create the next impact to earn again,” got me excited.

How was your journey through web3, and what led you to the Ethereum Foundation?

It started with quadratic funding (QF), which is a great entry point. I used to be a grant writer—I've written tons of applications and got pretty burnt out.

With traditional grants, it’s all or nothing. You put in so much effort and might walk away with nothing.

Quadratic funding was different.

You apply, get accepted into a round, and then rally support—your friends, your community, maybe some visibility on Twitter Spaces. You get a few direct contributions and then matching funds based on how many people supported your project. I started out getting maybe $500, $1,000.
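The matching step follows the standard quadratic funding formula: a project’s ideal total is the square of the sum of the square roots of its contributions, and the matching pool is split in proportion to the gap between that ideal and the direct contributions. A minimal sketch, with made-up project names and amounts:

```python
import math

def qf_match(projects, pool):
    """Split a matching pool using the quadratic funding formula.

    For each project, the ideal total is (sum of sqrt(contribution))^2;
    the raw match is that ideal minus the direct contributions, then
    everything is scaled so the matches sum to the pool.
    """
    raw = {
        name: sum(math.sqrt(c) for c in contribs) ** 2 - sum(contribs)
        for name, contribs in projects.items()
    }
    total = sum(raw.values())
    return {name: pool * r / total for name, r in raw.items()}

# Same $100 of direct support, very different matches:
rounds = {
    "broad-support": [1.0] * 100,  # 100 donors giving $1 each
    "single-whale":  [100.0],      # 1 donor giving $100
}
match = qf_match(rounds, pool=1_000)
```

This is what makes rallying many small supporters worthwhile: the broadly supported project takes the whole pool here, while the single-donor project gets no match at all.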

From there, it snowballed.

I started working with Arbitrum DAO, one of the biggest decentralized autonomous organizations (DAOs) out there. They’ve put around a billion dollars into their “vending machine,” and the only way to access it is through a token holder vote.

I found that super interesting, so I began writing proposals—trying to get funding through that process. That’s how I got involved in online governance and started thinking more seriously about how communities coordinate and make decisions together.

Over time, I started to see a broader pattern.

A lot of what I was drawn to had to do with solving market failures—externalities, where people create more value than they can capture; principal–agent problems, where decision-makers act in their own interest instead of the group’s. And then there’s information asymmetry—where the best people for a role or a grant might not even know it exists.

That’s when I also started to tweet.

I was posting every day, sharing ideas and surfacing opportunities. It was my way of trying to reduce that asymmetry and make the ecosystem a bit more transparent. At some point, I tweeted about being interested in AI, governance, and quantifying impact—especially in relation to externalities.

Like, say you’ve created a software library that everyone uses but no one pays for. You’ve clearly created value, but how do you measure that? Could we use AI to estimate how much someone deserves based on the value they’ve created? That’s the first step—just trying to make that invisible value visible.

After that tweet, Vitalik reached out.

He said he’d been thinking about a similar mechanism—where different AI models submit predictions about the value of a contribution, and a jury reviews just a sample. Based on that, you select the most accurate model and use it to score the rest.

That’s what led to me joining the Ethereum Foundation.

I started working on that mechanism, and eventually, they asked me to lead it—running competitions that apply this approach to different challenges across Ethereum: funding predictions, repo importance, bot detection, and misinformation. The structure is always the same—many AI models submit answers, and we reward the ones that best align with a trusted signal.

Open source software is a textbook example of a “public good”. How do you think about public goods, and what are the challenges to funding them?

Exactly. Open source creates a lot of value, but the people building it often don’t capture much of it—and that’s really the core issue with public goods.

The way I define them is simple: value creation minus value capture.

If you're creating more than you're capturing, that gap should be filled through public goods funding. And if you're capturing more than you create—like a company that pollutes while still turning a profit—you should probably be taxed to make up the difference.

But a lot of the conversation has started to move beyond the term “public goods.” Asking “How valuable is this for the world?” is really hard to answer. That’s why more people are thinking in terms of dependency graphs instead.

A dependency graph asks a different question: “For this project’s success, how important was this other project?” That’s much easier to answer.

It gives you a way to trace actual relationships and dependencies between projects.

That framing matters. Imagine a city where everything is a private good.

No public roads, no parks, no sanitation—just things you pay for directly. It’s not a place most people would want to live. I’m happy to give up part of my income to make sure shared infrastructure exists.

And the same goes for an ecosystem.

If you only fund projects that earn revenue, you lose the basic building blocks that others depend on. A shop needs a road. An app needs core infrastructure.

Most things we interact with are built on top of tools that don’t get paid directly. Dependency graphs help make that visible, and they also help with accountability.

There’s some fatigue around the term “public goods.” It can feel like if you don’t have a revenue model, you just label yourself that and expect funding. But if you can say, “There are 20 projects that rely on us,” that’s clearer. It’s a better way to show value.

Is this what you’re trying to achieve with deep funding, the model you’re working on?

Yes! The core problem deep funding addresses is how to fund projects that create value without generating revenue. In a corporate setting, you have cost centers and profit centers. Twitter’s ad team is a profit center—it makes money and is easy to measure.

But something like Community Notes is a cost center. It improves the product for everyone, but it doesn’t directly generate revenue.

Still, executives can allocate budget to it from the ad team, even if that team objects.

But that kind of internal reallocation doesn’t exist in open source. Companies like Google earn revenue from user-facing services but rely on a stack of open source repos that go unfunded. There’s no automatic way to redirect revenue from profit to cost centers.

Deep funding is one way to fix that. It creates a dependency graph—basically, a map of which projects rely on which open source repos. Then, when revenue comes in, it’s distributed across the graph through smart contracts.
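The flow of revenue through such a graph can be sketched as a recursive split: each project keeps a share of what reaches it and passes the rest to its dependencies according to their weights. This is not the actual deep funding implementation—just a toy model with invented project names, weights, and a hypothetical 50% keep rate, assuming an acyclic graph:

```python
def distribute(revenue, weights, dependencies, keep=0.5):
    """Flow revenue through a weighted dependency graph.

    `dependencies[p]` lists what project p depends on; `weights[(p, d)]`
    is the assigned importance of dependency d to p (weights for each p
    sum to 1). A project with dependencies keeps `keep` of what reaches
    it and passes the rest down; a leaf keeps everything.
    """
    received = {}

    def flow(project, amount):
        deps_of = dependencies.get(project, [])
        if not deps_of:  # leaf of the graph: nothing further to fund
            received[project] = received.get(project, 0.0) + amount
            return
        received[project] = received.get(project, 0.0) + amount * keep
        for dep in deps_of:
            flow(dep, amount * (1 - keep) * weights[(project, dep)])

    flow("app", revenue)
    return received

# A revenue-earning app on top of two libraries, one of which has
# its own dependency. All names and numbers are illustrative.
deps = {"app": ["lib-a", "lib-b"], "lib-a": ["lib-c"]}
w = {("app", "lib-a"): 0.7, ("app", "lib-b"): 0.3, ("lib-a", "lib-c"): 1.0}
shares = distribute(1000.0, w, deps)
```

Every dollar of the app’s revenue ends up somewhere in the graph, including with `lib-c`, which the app never touches directly.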

It’s all programmatic. Once the graph and weights are set, the money flows in automatically unless governance steps in to change it. But the hard part is setting the weights: how much should each repo earn?

To solve that, we run a machine learning competition.

Different people submit models that predict how important each repo is. A jury scores a subset of the repos, which we use to pick the best-performing model. That model’s predictions are then used to assign weights to the whole graph.
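The selection step above reduces to comparing each model’s predictions against the jury’s sample and keeping the closest one. A minimal sketch, using mean absolute error as one possible scoring rule (the model names, repo names, and scores are invented):

```python
def pick_model(predictions, jury_scores):
    """Choose the model whose predictions best match a jury-scored sample.

    `predictions[model][repo]` is that model's weight for a repo;
    `jury_scores` covers only a small sample of repos. The winner's
    predictions are then applied to the whole graph.
    """
    def error(model):
        preds = predictions[model]
        return sum(abs(preds[r] - s) for r, s in jury_scores.items()) / len(jury_scores)

    return min(predictions, key=error)

predictions = {
    "model-a": {"repo1": 0.9, "repo2": 0.2, "repo3": 0.5},
    "model-b": {"repo1": 0.4, "repo2": 0.1, "repo3": 0.6},
}
jury = {"repo1": 0.85, "repo2": 0.25}  # jury only reviews a sample
best = pick_model(predictions, jury)
```

The jury never has to score all 5,000 repos—only enough of a sample to tell the models apart.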

When an app earns revenue, funds are distributed across the graph, reaching both the projects that generate value directly and the ones supporting them in the background.

In the context of web3, we’ve had about one new decentralized funding system every two years. In 2019, quadratic funding.

In 2021, retrospective public goods funding (RetroPGF) came up. In 2023, there was Protocol Guild—a collective where people put funds in, and everyone gets an equal share. And now, in 2025, we’re working on deep funding.

Deep funding is modular by design. Instead of building everything in-house, we work with teams already solving key pieces of the problem. For example, we worked with Open Source Observer to create a dependency graph from GitHub repos.

For voting, we used Pairwise—they’d built an interface for jurors to evaluate repos. And for distribution, we relied on Drips, an infrastructure for sending money to open source projects over time.

By plugging in the best tools for each part, it becomes way faster to spin up new funding mechanisms.

That’s also the goal of Allo Protocol—enabling faster experimentation. Actually, if you have an idea for a funding mechanism, post it on their forum. They’ll help you test it!

Have you come across anything surprising through this work yet?

For sure! web3.js was one. Many people think it’s a core part of the Ethereum ecosystem—it used to be everywhere, one of the main tools developers relied on. But when we built the dependency graph, it turned out it’s barely used anymore.

Tools like Viem and Viper have taken over. web3.js still showed up at first, but once we looked at the data, the usage didn’t hold up, and it eventually dropped out.

That kind of shift is easy to miss if you're going off reputation. With a graph, you can see what’s still in use and what’s not. That’s not to say web3.js wasn’t important—it clearly was. And if you’re funding based on past impact, it still makes sense to include it. But if you’re focused on what’s active today, the graph shows where things have moved.

What’s your take on the current web3 funding mechanism landscape overall?

Right now, the most common funding mechanism in web3 is still, unfortunately, centralized grant-giving, meaning that a foundation decides who gets funding. I say “unfortunately” because a lot of it depends on how much work you put into outreach—talking to people, staying visible, building relationships. But the best developers often aren’t interested in that.

They want to focus on building, not selling themselves.

There’s also a power dynamic. When you're applying to a foundation, there’s often a feeling that you need to win someone over to get funded.

That’s why something like quadratic funding is exciting—it shifts power away from a few individuals. If the community believes in your work, you get funded. It’s more bottom-up.

But even QF has its limits. One of the big ones is what I call the “crying babies get fed” problem: only the people who apply and promote themselves get money. So having at least one funding mechanism that doesn’t rely on applications feels really important.

That’s where RetroPGF has a lot of potential. It started with badgeholder voting—basically giving a small group of people the power to decide how funds get distributed. But if that group is just five people, it’s not much different from a foundation.

Optimism later scaled that to 500 badgeholders, which spreads things out a bit, but it still depends on people applying and making their case.

What I’m most excited about is where RetroPGF is heading: a version that doesn’t require applications at all. Instead of waiting for people to ask for funding, you look at what’s actually being used—who’s creating value, what others are building on—and you fund based on that.

That’s the ideal outcome: you make something useful, it helps others, and the value comes back to you automatically. No forms, no pitches, no chasing grants.

Whether that scales depends on a few things, though.

The first challenge is Goodhart’s Law: once a metric becomes a target, it stops being a good metric. If we rely too much on metrics, people will try to game them. So we’ll need to keep experimenting, updating what we measure, and making sure the results reflect real impact.

There’s also the retroactive angle. Most people want to fund what’s coming next—not what has already happened. That might make RetroPGF harder to scale unless the feedback loops are faster.

Like—if someone does something valuable this month, they get funded next month. That kind of rhythm might make it feel more relevant.

How do you think about all this from the broader Ethereum perspective—and what are you hoping to achieve in your role?

The main metric I’ve set for myself is how many machine learning competitions we run.

We just wrapped one on predicting how much funding a project would receive in a grant round—$20,000 in prizes across six winners, which was cool. Now we’ve got a few more running: one on detecting Sybil wallets, one on ranking open source repos by importance, and another on forecasting future funding. They’re all part of a broader push to run fun, useful ML competitions that make funding smarter.

The main focus right now is deepfunding.org—a live competition to predict the relative importance of 5,000 open source repos across 15,000 edges. Since many repos are used in multiple places, the challenge is assigning weights across those connections.

You can join as a model submitter or a juror—check it out!