Structural challenges to address in CLR

This post by Alexey Akhunov makes some interesting criticisms of CLR and Gitcoin Grants. It’s worth discussing them here.

Paraphrasing his criticisms:

Donations should not go to recipients immediately because they can be recycled in the same round. This is addressed directly via MACI, since funds are only distributed when the round ends.

Sybil attacks are a problem without a scalable solution. Some, however, believe BrightID actually is a scalable solution.

CLR doesn’t actually create collective intelligence; it’s just a popularity contest with people donating to what they already know about.

This is an important criticism. As I see it there are two main problems:

  1. As @auryn suggests, a lot of this can be resolved with better recipient curation. Keeping out non-public goods, spam, etc.

  2. A structural problem: certain types of projects, by virtue of having a more direct communication channel with the people they benefit (and therefore a built-in platform to ask for donations), have an advantage over projects with a less direct connection to the people they benefit.
    It gets worse. Often, the latter type of project is infrastructure, tooling, or middleware. Those projects benefit everybody, but they directly benefit only developers. Users of applications created by those developers also benefit, but indirectly (how many DeFi bros know that ethers.js exists?). And because there are many more users of applications than developers, even if a dev tooling project delivers the same overall value to the community as, say, a podcast, the valuation of the podcast will be split among all its listeners, while the valuation of the dev tooling will be concentrated among the smaller group of developers. The podcast will therefore get a larger CLR match, resulting in a misallocation of funds.


Is it necessarily a misallocation?
I wonder if this viewpoint is simply an expression of the dissonance between our subjective opinion of how funds should be allocated and how they are actually allocated?

Put another way. Perhaps we are just wrong about how the funds should be allocated. The fact that a podcast gets more of a match than a software library is the result of the podcast providing more value to more people.


That doesn’t seem right; there will always be asymmetries in these kinds of things. For example, a podcast could hire a marketing team to boost its popularity (even via propaganda or other manipulation) out of proportion to the extra value those additional listeners actually receive.

I guess the hope would be that if the podcast uses some open-source library, it would funnel some of its CLR proceeds to that library; but in reality that might not happen very often.

A podcast is itself a distribution mechanism (agreeing with point 2); that’s why a podcast can earn revenue from advertising, but a software library can’t.

(Aside: Should podcasts that are already monetizing, via advertising for example, be given the same benefits in the CLR as podcasts without ads? You’d hope that the community would fund them proportionately less for this, but would it in reality?)


Totally agree with that, value is always subjective.

The second part of the article is wrong about almost everything.

collective of 100 thousand individual donors donating to 10 thousand projects will not have a very large collective intelligence, in fact it will probably make very dumb decisions, allocating most money to people they already know

Of course it will, because good things require maintenance and continuous funding.

At the end he’s suggesting to

replace an equation with human intelligence, via introduction of intermediaries

in the system that was specifically designed to get rid of “experts”.

In my opinion QF as a mechanism works exactly as intended, and will become even more effective with better curation and competition between instances.


I fully agree that value is subjective and the ability to appropriately account for that subjectivity is a key goal of CLR.

However, my point is that differences in salient information available to users of different types of public goods projects creates structural advantages for some of those types. My example scenario above assumes explicitly that the podcast and the dev tooling deliver “the same overall value to the community”, and that the podcast receives more funding even though that is the case.

Toy Model

I created a toy model to illustrate this structural issue.

Imagine a world where…

  • there are 100 app users, 10 app developers, and 1 library developer
  • each app dev builds/maintains 1 app --> there are 10 apps
  • each library dev builds/maintains 1 library --> there is one library
  • All 10 apps use that library
  • Each user likes 5 apps, equally
  • Neither library devs nor app devs use any apps

Individuals have the following preferences (you can think of these as reservation prices / willingness to pay if the apps were paid):

  • All app devs value the library at $1 per app
  • All users value each app at $10. They don’t know about the library and its importance in enabling the apps they like.

Therefore, true value added is as follows:

  • The library adds $1 of value per app to each user, or $5 total for each user (since each user likes 5 apps)
  • Each app dev adds $9 of value to each user --> combined, all app devs add $45 to each user
  • In total, the library adds $500 in value to the world
  • Each app adds $450 in value to the world, a total of $4500

Note that the value delivered to users by the library dev is 10% of the total value they receive.
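The value accounting above can be sketched in a few lines (a toy check, using only the parameters stated in the model):

```python
# Value accounting for the toy model above.
N_USERS = 100         # app users
N_APPS = 10           # one app per app dev
APPS_PER_USER = 5     # each user likes 5 apps, equally
APP_VALUE = 10        # $ value of each app to a user
LIB_SHARE = 1         # $ of that value enabled by the library, per app

lib_per_user = LIB_SHARE * APPS_PER_USER                # $5 per user
dev_per_user = (APP_VALUE - LIB_SHARE) * APPS_PER_USER  # $45 per user

lib_total = lib_per_user * N_USERS                   # $500 from the library
apps_total = dev_per_user * N_USERS                  # $4,500 from the apps
value_per_app = apps_total / N_APPS                  # $450 per app
lib_fraction = lib_total / (lib_total + apps_total)  # 0.10, i.e. 10%
```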

Begin QF

Now, let’s run this world through a round of QF.

When users don’t know about the library

Assume that all of the users and all 10 devs contribute to the QF, and that contributing parties donate their entire willingness to pay. Crucially, for this simulation, app users don’t know anything about the library that their apps rely on.

Under these assumptions, with a CLR budget of $1,000, the funding falls out as follows:

  • total net funding per app: $548.82
  • total net library funding: $519.04
  • share of total funding going to the library: 8%

8% is less than the portion of the total value that the library creates. Additionally, creating apps is now more lucrative than creating a library, even though the library is more valuable to the world.
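As a rough cross-check, a standard QF/CLR match computation reproduces this scenario approximately. This is a sketch under assumed mechanics, not the exact model on GitHub: each app receives $10 from each of its 50 users (100 users × 5 liked apps / 10 apps), each app dev donates the $50 of their app's value that the library enables (50 users × $1), and raw matches are scaled proportionally into the $1,000 pool. The figures land close to, not exactly on, the ones above.

```python
from math import sqrt

def clr_match(contribs):
    """Raw CLR/QF match for one project: (sum of sqrt(c))^2 - sum(c)."""
    return sum(sqrt(c) for c in contribs) ** 2 - sum(contribs)

BUDGET = 1000  # matching pool

# Users don't know about the library:
app_contribs = [10] * 50  # 50 users per app, donating $10 each
lib_contribs = [50] * 10  # 10 app devs; assumed $50 each (50 users x $1)

raw_app = clr_match(app_contribs)  # 24,500 per app
raw_lib = clr_match(lib_contribs)  # 4,500
scale = BUDGET / (10 * raw_app + raw_lib)  # fit matches into the pool

app_funding = sum(app_contribs) + raw_app * scale  # ~$598 gross per app
net_app = app_funding - 50  # ~$548 net of the dev's library donation
lib_funding = sum(lib_contribs) + raw_lib * scale  # ~$518
lib_share = lib_funding / (10 * app_funding + lib_funding)  # ~8%
```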

When users do know about the library

Now, if we allow app users to know the full extent of the value of the library, and have them donate accordingly, we get a much different result:

  • total net funding per app: $531.82
  • total net library funding: $681.82
  • share of total funding going to the library: 11.4%
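A standard QF/CLR match computation lands close to these figures as well. Again a sketch under assumed mechanics, not the exact model on GitHub: each user gives $9 to each liked app and $1 per liked app to the library, the app devs no longer donate, and raw matches are scaled proportionally into the $1,000 pool.

```python
from math import sqrt

def clr_match(contribs):
    """Raw CLR/QF match for one project: (sum of sqrt(c))^2 - sum(c)."""
    return sum(sqrt(c) for c in contribs) ** 2 - sum(contribs)

BUDGET = 1000  # matching pool

# Users know about the library and split donations 9:1:
app_contribs = [9] * 50   # 50 users per app, donating $9 each
lib_contribs = [5] * 100  # all 100 users donating $5 each ($1 per liked app)

raw_app = clr_match(app_contribs)  # 22,050 per app
raw_lib = clr_match(lib_contribs)  # 49,500
scale = BUDGET / (10 * raw_app + raw_lib)  # fit matches into the pool

app_funding = sum(app_contribs) + raw_app * scale  # ~$532 per app
lib_funding = sum(lib_contribs) + raw_lib * scale  # ~$683
lib_share = lib_funding / (10 * app_funding + lib_funding)  # ~11.4%
```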

Now that the value of the library is known to all, we get a better result: the library is more reasonably funded.


This is just one small sliver of the total parameter space, so results may vary depending on different setups.

I hope this illustrates how CLR may have a structural inefficiency when it comes to end-user knowledge of down-stack value.

The problem you describe here is not specific to CLR, it’s a general problem of knowledge distribution.

I don’t think that it requires any changes to the CLR mechanism itself. However, the situation can be improved with:

  • Better education & marketing.
  • A separate matching pool that funds only infrastructure projects.

True, the problem of knowledge distribution is not unique to CLR / QF. However, note that in an equivalent setup with only private goods (e.g. apps are purchased by users and each app uses a paid API service), it does not matter whether the users know about the API service. The money flows from the users to the apps to the API service.

The structural issue with CLR is that this flow of money is distorted by the CLR mechanism, and so the funding allocation changes depending on whether the money is coming directly from users or via the apps.

In the first model scenario I previously described, the app devs contributed directly to the library and the users contributed only to apps. In the second, the users split their contributions (9:1) between apps and the library, and the app devs did not contribute to the library (since their app funding declined).

In other words, the amount contributed to the library is the same in both scenarios. The only difference is the distribution of the contributions (concentrated in the first and wide in the second), which via QF matching results in a significant difference in the total funding allocated.
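This is the whole effect in miniature. Assuming the library's $500 of contributions arrives as $50 from each of the 10 app devs in the first scenario and $5 from each of the 100 users in the second, the quantity QF matching rewards differs by 10x for the same total:

```python
from math import sqrt

def qf_weight(contribs):
    # (sum of sqrt(c))^2: the quantity QF matching rewards
    return sum(sqrt(c) for c in contribs) ** 2

concentrated = [50] * 10  # scenario 1: 10 app devs x $50 = $500
wide = [5] * 100          # scenario 2: 100 users  x $5  = $500

qf_weight(concentrated)   # 5,000
qf_weight(wide)           # 50,000 -- 10x the weight for the same total
```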

I’ve put the model up on github for anybody who wants to take a closer look or play around with my assumptions:

I agree 100% with the second one. I don’t think we need to change CLR; I think we need to acknowledge this structural issue and address it directly.

It is not enough to rely on better education and marketing. We need to find a way to create separate matching pools wherever we encounter these structural differences. Infrastructure vs. end-user applications, and media vs. almost anything else, are the two primary examples that come to mind. I am sure there are others.
