Should the subgraph be added to the monorepo?

In this post, I propose to add the clrfund subgraph as a package within the monorepo (as opposed to its own repo or within the clrfund-deployer repo).

The main argument is that a subgraph, even though it is not necessarily a strict dependency, benefits clrfund as a whole and should therefore be treated as one of the core packages.

Perceived benefits of a subgraph to clrfund

  • Creates a canonical data model that…
    • Can automatically expand to index new rounds and (with the use of a clrfund factory factory, eg as planned for clrfund-deployer) even new community instances of clrfund
    • Anybody can access, which opens up possibilities for all manner of new apps (alternative contribution UIs?) or other use cases (QF data analysis?) to be built permissionlessly by the community
  • Allows us to more easily build dashboards tracking clrfund data (contributions, matching funds, etc.) over time and across instances. This will be useful for several reasons:
    • for clrfund devs/contributors in evaluating and making decisions related to clrfund’s growth and success, and
    • for the clrfund project to promote its success to attract additional funding and devs/contributors
    • community members wishing to view aggregated (and detailed!) information about what’s happening in clrfund
  • Typically enables faster contract data retrieval by apps
    • note that the current clrfund app would likely need to be refactored to take advantage of these benefits; a decision on if/how/when to do that is outside the scope of this post
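To make the "canonical data model" point concrete, here is a hypothetical sketch of what a couple of subgraph entities might look like. All entity and field names below are invented for illustration; the actual schema would be defined by whoever builds the subgraph:

```graphql
# Hypothetical schema.graphql fragment for a clrfund subgraph.
# Entity and field names are illustrative only, not the real schema.
type FundingRound @entity {
  id: ID!                      # round contract address
  factory: Bytes!              # factory that deployed the round
  nativeToken: Bytes!
  startTime: BigInt!
  contributions: [Contribution!]! @derivedFrom(field: "round")
}

type Contribution @entity {
  id: ID!                      # txHash-logIndex
  round: FundingRound!
  contributor: Bytes!
  amount: BigInt!
  timestamp: BigInt!
}
```

A schema like this is what would let dashboards and third-party apps query contributions across rounds (and, with a factory-of-factories, across instances) without touching the contracts directly.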

I’m quite confident that other benefits exist that I have not articulated here. I’m also confident that there are some drawbacks I’m not aware of. Please chime in on this thread with your thoughts on these and a reaction to the below recommendation.

Spencer’s Recommendation

@Daodesigner is currently working on developing a clrfund subgraph. My recommendation is to create a new subgraph package in the monorepo for this work in progress and ultimately for the completed subgraph code to live.

5 Likes

Any reason why it can’t live in its own repo under the clrfund GitHub org? I’m not denying that the subgraph could be useful to someone, but putting everything in one monorepo is not a good idea: it forces everyone to coordinate around a single repository and creates a lot of noise.

I’d rather split clrfund/monorepo into two parts, contracts and frontend.

Nope, will just do that instead, makes deploying easier anyway. :+1:

I’d rather split clrfund/monorepo into two parts, contracts and frontend.

Yeah, I’ve been wondering about this as well.
There are pros and cons to both approaches.

We initially went with a monorepo approach so as to minimize the need to coordinate dependencies between repos.
I think that’s still a valuable goal, but it definitely makes the project more opinionated.

That said, I don’t have a strong preference one way or the other, so I’m happy to go with whatever you guys prefer.

1 Like

Monoliths have their own problems, but I wouldn’t say one way is clearly better than the other. IMO it doesn’t hurt to have multiple repos since we are growing and there are now multiple teams.

We are small enough that the maintenance overhead of using lerna and managing overlapping deps isn’t worth the benefits right now and is just slowing us down; I’d rather keep up the momentum.

Also, the subgraph(s) are a good project for first-time contributors to add useful features within a manageable blast radius: no chance to affect prod repos with TVL.

Frontend depends on ABI definitions. Other than that, contracts and the frontend have very little in common.

Actually, monorepo was handy during early development when I worked simultaneously on contracts and frontend, but now the contracts and frontend are mostly worked on separately and there’s not much benefit in keeping them together. It’s not much of a burden either.

2 Likes

What’s your estimate of the workload to split the monorepo into two separate repos, frontend and contracts?

Perhaps this could also serve as a fresh start for the roadmap, issues, PRs, etc., since there has been some recent confusion and disagreement, which seems in large part due to fragmented communications (really glad to see the forum being used more now :smiley:). It would be great for us all to be back on the same page and aligned around the roadmap and our next set of milestones.

1 Like

Not much. But, as I said, it’s a non-issue.

1 Like

Glad to see so much discussion on this! As OP I will just say that I don’t have a strong preference either way and will support what the devs recommend. Ultimately the objective of this post was to find a good, caring home for the subgraph and I think either approach will do.

1 Like

Will take point in splitting up the monorepo, have created repos for smart-contracts and front-end.

The smart contracts package will be moved to the new repository and the old front end will be deprecated in favor of the vue-app package the ETH2 CLR team is actively maintaining, which I will move to the new front-end repo (with their blessing). The contribution guidelines will be added to both. This way the teams actively maintaining/developing features for each will have their own hub to coordinate from (yay we’re growing!).

We have an internal deadline to deploy a round with all the upgrades the teams have been working on by next week, and it seems the monorepo structure is no longer tenable anyway.

1 Like

I’m fairly indifferent re: keeping the monorepo intact vs. splitting out the contracts & frontend into separate repos. As you folks mentioned, there are clearly tradeoffs to each approach.

At this point in time, I personally like the developer experience of having everything in one repo & scripts that run the entire app. On the other hand, I agree that as the project continues to mature, the contracts will likely require fewer changes. Also if we intend to make this project easily forkable, in the long-term I could see value in decoupling the FE & e.g. having multiple FE templates, like one repo in React, one repo in Vue, given how opinionated individual teams can be about their FE stack.

This plan I do have concerns about. From my perspective, our FE repo was never intended to become the canonical clr.fund repo. We decided to fork the repo primarily because we knew we had specific requirements unique to our instance we plan to run (e.g. the assumption of a single funding round existing, tight control of the optimistic recipient registry flow, logic to send applicant data to a Google Spreadsheet for our team to review, pages & content specifically tailored to the Eth2 funding round). While we always intended to merge back features that the clr.fund community find useful (e.g. expanded recipient profile pages, cart features & round information UX improvements), we figured a wholesale adoption of our repo as the canonical app would likely not be in the best interest of clr.fund.

I’m open to how you folks want to move forward - if you’re set on this approach, I’m happy to bless your adoption of our code to be moved to the new FE repo. I just want to raise these considerations so we’re all aware.

From our perspective, the easiest approach would be to:

  1. our team continues to build off the Eth2 FE & repo
  2. we launch our funding round (current goal is September)
  3. afterwards, we merge back the useful generic features into the upstream clr.fund repo

I want to be cognizant of any blockers this may place on other folks (e.g. if you’re waiting on specific features/components we’ve implemented), so please raise issues if you suspect that approach will slow you down in any way or if you have any general concerns. Cheers.

2 Likes

Guys, I wanted to touch base with you to find out where you stand with this subgraph proposal. We believe that this could benefit us.

Recently, we (ETH2 CLR team) deployed a new instance of the clr contracts on the Arbitrum testnet and ran into the following issue.

The issue:

Currently, from the FE, we are calling queryFilter to fetch events. Under the hood, this makes an eth_getLogs call that fetches all the logs from the genesis block and then filters them by the event you want. Obviously, this is expensive, and the queries are taking a long time to resolve and/or returning server errors.
One possible solution is to limit the block range in which you do the search, but we think this is probably only a short-term fix.
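For reference, the range-limiting workaround can be sketched as below. This is an illustrative TypeScript sketch, not code from the clrfund app: `queryFn` stands in for whatever actually fetches the logs, e.g. an ethers `contract.queryFilter(filter, from, to)` call, and the chunk size would be tuned to the provider’s limits.

```typescript
// Sketch of the block-range workaround: split a large range into
// provider-friendly chunks and query each chunk separately.

type Range = { fromBlock: number; toBlock: number };

// Split [fromBlock, toBlock] into inclusive sub-ranges of at most `size` blocks.
function chunkBlockRange(fromBlock: number, toBlock: number, size: number): Range[] {
  const ranges: Range[] = [];
  for (let start = fromBlock; start <= toBlock; start += size) {
    ranges.push({ fromBlock: start, toBlock: Math.min(start + size - 1, toBlock) });
  }
  return ranges;
}

// Fetch events chunk by chunk, concatenating the results.
// `queryFn` stands in for e.g. (from, to) => contract.queryFilter(filter, from, to)
async function queryEventsChunked<T>(
  fromBlock: number,
  toBlock: number,
  size: number,
  queryFn: (from: number, to: number) => Promise<T[]>
): Promise<T[]> {
  const results: T[] = [];
  for (const r of chunkBlockRange(fromBlock, toBlock, size)) {
    results.push(...(await queryFn(r.fromBlock, r.toBlock)));
  }
  return results;
}
```

Note this only mitigates the problem: it still walks the whole history, just in provider-sized bites, which is part of why a subgraph looks like the better long-term answer.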

More context:

We have tested this against the Arbitrum testnet RPC, Alchemy, and Infura. Alchemy has a 2k block range limit for eth_getLogs. Despite the varying limits each RPC has, all of them take a long time to resolve these types of queries.


Based on your experience @spengrah @Daodesigner or anyone else:

  • Do you have any recommendations on how to solve this particular issue?
  • Is there a “best practice” in how we should fetch events?
  • Is the subgraph a good solution for this?

On the assumption that we agree the subgraph approach is the correct one, we would need some orientation, and probably coordination with you, on how to implement it:

  • Have you already built a subgraph that we could use?
    • If you are currently in the process, is there anything we could collaborate on or help with?
  • If not, should we start creating a subgraph ourselves and then pull it upstream from clrfund?

Thanks in advance for any suggestions or feedback you can give us!

1 Like

Hey, doing sprint planning tomorrow; ping me so we can coordinate on this. I’m assuming you ran into a hard wall doing eth_getLogs on Arbitrum. There’s no way around it right now, which is why I pivoted to the subgraph. This is what I was trying to explain to some of the other ETH2 folks: we need a canonical UI to integrate the subgraph back into, or we end up doing double work.

2 Likes

FWIW, at this point I am very much of the opinion that we should treat the monorepo as a monorepo and put everything in there.

So far as I’m aware right now, that means:

  • contracts
  • app
  • documentation
  • deployer app
  • subgraph

I’ve already started a branch to add in documentation, although it’s pretty bare bones right now.

If we have other branches that are more ready, then let’s start merging them back in so that people can more easily build on top of them.

2 Likes

Agreed @auryn! Also thanks @Daodesigner.

At this point my thinking on the canonical UI is converging with @Daodesigner’s earlier proposal: to use the Eth2 CLR FE we’ve been building on & roll that into the monorepo.

I mentioned previously that for various reasons we thought it’d make sense to keep our fork independent. Since then, a few major pieces have taken shape:

  • It’s becoming increasingly clear that a subgraph integration looks to be a necessary piece in order to deploy a functioning app on Arbitrum
  • We’ve reached a decision to bring the clr.fund subgraph into the monorepo
  • Our Eth2 CLR team has made some solid progress on WalletConnect integration & the updated BrightID flow, both of which I think all clr.fund instances would benefit from
  • There’s been hardly any parallel development on the frontend of the monorepo

I suggest we chat through some of the specifics (perhaps on the community call tomorrow?), but I think the majority of the changes we’ve made in our Eth2 CLR fork could be incorporated into the monorepo.

There’s a handful of additions we’ve made that you likely don’t want (which I mentioned in this thread above), but I’m thinking it might be easiest to merge everything upstream and then rip those individual pieces out, rather than focusing on the specific features we do want to merge in. In terms of the Eth2-specific content, I think that could be altered fairly easily to be more generic for any clr.fund instance.

Please let me know what you folks think of this approach. I think the biggest immediate win would be the ability for @Daodesigner & @pettinarip to collaborate closely on the subgraph so we avoid duplicate work. If you’re open to that @Daodesigner, I think we could all benefit greatly from your knowledge in this area. Again, I welcome input on this!

4 Likes

Great conversation on this so far… I think these points are well stated. The subgraph seems to be fairly essential given the limitations of the Arbitrum endpoints, and of course the less we repeat work the better. Utilizing the monorepo to house everything, including the Eth2 FE, sounds like the best way to accomplish this.

While I can’t speak to some of the specifics (though happy to see if I can help think through them tomorrow), I’m on board with this overall direction. I struggle to find a good argument against all of clr.fund working from the same front end framework.

Yeah, I agree. This seems like the most reasonable path forward at this point.

What would this actually look like? I assume it’s something like:

  1. we merge back into a new branch in clrfund/monorepo
  2. rip out all of the things that need to be ripped out and change any ETH2-specific things to be generic/clrfund-specific
  3. merge it back into develop.

Not sure how that affects the ETH2 fork moving forward though. Would you just rebase your changes on the updated develop branch?

1 Like

Ya I think the approach you outlined @auryn makes sense to me.

In terms of where we’re at now:

@pettinarip has been making solid progress on integrating a GraphQL client to query the subgraph thanks to the work of @Daodesigner to build out the clr.fund subgraph. Pablo will push up his latest progress here: https://github.com/ethereum/clrfund/pull/306

In terms of next steps, we’re currently aiming to take an iterative approach:

  1. Move the subgraph files into the monorepo codebase in Pablo’s PR
  2. Add the GraphQL client to the frontend (using graphql-request)
  3. Replace the frontend queryFilter queries with GraphQL - these are the ones that use the RPC method eth_getLogs under the hood & have been causing the issues on Arbitrum
  4. Deploy a new instance of contracts to Arbitrum
  5. Run a graph node & web app locally to see if we can connect to Arbitrum & confirm the web app works
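As a concrete illustration of step 3, a queryFilter call becomes a single POST to the subgraph endpoint. This is a hypothetical sketch: the entity and field names (`contributions`, `contributor`, `amount`, etc.) are invented for illustration and would need to match the actual subgraph schema; the body can be sent with graphql-request or plain fetch.

```typescript
// Hypothetical sketch of replacing an ethers queryFilter call for
// Contribution events with a subgraph query. Field names are
// illustrative, not the actual clrfund subgraph schema.

interface GraphQLRequest {
  query: string;
  variables: { round: string; first: number };
}

// Build the request body for fetching a round's contributions.
function buildContributionsQuery(roundAddress: string, first = 100): GraphQLRequest {
  const query = `
    query Contributions($round: String!, $first: Int!) {
      contributions(where: { round: $round }, first: $first) {
        id
        contributor
        amount
        timestamp
      }
    }`;
  // Subgraph IDs/addresses are conventionally lowercased.
  return { query, variables: { round: roundAddress.toLowerCase(), first } };
}

// Sending it is then one POST, e.g. with fetch (SUBGRAPH_URL is assumed):
//   const res = await fetch(SUBGRAPH_URL, {
//     method: "POST",
//     headers: { "Content-Type": "application/json" },
//     body: JSON.stringify(buildContributionsQuery(roundAddress)),
//   });
//   const { data } = await res.json();
```

The key difference from queryFilter is that the graph node has already indexed the events, so the query cost no longer depends on how many blocks exist on Arbitrum.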

Once we can get that 1st iteration up & running, we’re thinking we can start to explore:

  • caching optimization with vue-query (which we may actually want to upgrade to Vue 3 before doing)
  • replacing other frontend queries with GraphQL (it might make sense to completely refactor the code to rely entirely on the subgraph for reading data but that is a larger undertaking)

How’s that sound to you folks? Anything you’d like to get involved with re: this approach?

1 Like

This all sounds great to me!