Content-Addressing: 2025 In Review

It's hard to believe that it was 2025 only two weeks ago, but all the same we'd like to wrap the year up tidily and look back at what happened in content addressing leading up to 2026!

"Content addressing?" you say. "Is there enough going on around content addressing to write a year in review post?" Content addressing has many uses, but two salient ones include trusting that you're getting the data you really want and ensuring that data can be independently verified without relying on the power of a centralized authority. It's easy to see how those two features are key to facing today's challenges. Over the past decade, the IPFS community has been at the forefront of making content addressing practical and accessible. Today, thousands of projects build on it, from decentralized websites and scientific data repositories to verifiable archives and supply chains.

The IPFS project started out with a more integrated, full-stack approach that included P2P networking, but it has gradually evolved into a suite of technologies that work well together yet make sense independently of one another.

Modularity

Closer to home, about a year ago we said that we wanted to focus on making the IPFS technology suite more modular, adopting the principle that it should operate more in line with the good old-fashioned tenets of Unix philosophy: small tools strung together to assemble great power (if not always with great responsibility). For all that it may be accepted wisdom that the best-laid plans of mice and men often go awry, looking back over 2025 we're delighted to see that this one panned out.

In truth, the community was already ahead of us here, quietly shipping tight, purpose-built libraries for CAR, IPLD, and other primitives in the IPFS family. Together, over the past year, we’ve made real progress on modularity through community-wide efforts – spanning standards work, new specifications, and rethinking how our core libraries are structured.

And dawg, what better way to show off modularity than by giving you a year-in-review post (about Rust IPLD) inside this year-in-review post? Check out how we got a 50% speed improvement in the Python wrapper around the Rust lib, among other great boosts, by migrating off the old libipld to more modular implementations.
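
If you've never poked at these primitives directly, here's a minimal sketch of the kind of work such a library does: deterministically encode a value as DAG-CBOR and derive its CID. It assumes the third-party `dag_cbor` package, and the CID assembly is hand-rolled for illustration rather than lifted from any particular library.

```python
# A minimal sketch of content addressing with DAG-CBOR, assuming the
# third-party `dag_cbor` package (pip install dag-cbor); the CID assembly
# below is hand-rolled for illustration, not taken from any library.
import base64
import hashlib

import dag_cbor  # deterministic CBOR encoding per the DAG-CBOR spec

def dag_cbor_cid(obj) -> str:
    """Encode `obj` as DAG-CBOR and return its CIDv1 as a base32 string."""
    data = dag_cbor.encode(obj)          # canonical bytes: same obj, same bytes
    digest = hashlib.sha256(data).digest()
    # CIDv1 = <version><codec><multihash>; 0x71 = dag-cbor, 0x12/0x20 = sha2-256, 32 bytes
    cid_bytes = bytes([0x01, 0x71, 0x12, 0x20]) + digest
    # multibase base32 (lowercase, no padding), hence the leading "b"
    return "b" + base64.b32encode(cid_bytes).decode().lower().rstrip("=")

print(dag_cbor_cid({"hello": "content addressing"}))
```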

This year also saw lots of action in the DASL space. If you don't know DASL, it's the part of the IPFS family that's laser-focused on adoption, interoperability, and web-style systems that need to be resilient in the face of dubious code, open-ended systems, and — gasp — potentially high volumes of users. DASL is all about modularity and tiny specs that mesh together like small Unix tools. (Read our introduction to DASL from earlier this year.) In addition to simple subsets of CIDs, CAR, and DAG-CBOR (aka DRISL), DASL also supports HTTP retrieval (RASL), packaging metadata (MASL), and bigger data (BDASL). And before you ask: yes, we do have a resident acronym expert on staff. 
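
To make the "tiny specs" point concrete, here's a rough sketch, in plain Python, of what checking a CID against the DASL subset amounts to (CIDv1, base32-lower, sha-256, raw or DAG-CBOR). This is our paraphrase of the rules, not an official validator.

```python
# A rough sketch of checking whether a CID string falls in the DASL subset
# (CIDv1, base32-lower, sha-256, raw or DAG-CBOR); a paraphrase of the
# rules for illustration, not an official validator.
import base64

def is_dasl_cid(cid: str) -> bool:
    if not cid.startswith("b"):            # multibase prefix: base32-lower
        return False
    b32 = cid[1:].upper()
    b32 += "=" * (-len(b32) % 8)           # restore padding for the stdlib decoder
    try:
        raw = base64.b32decode(b32)
    except Exception:
        return False
    return (
        len(raw) == 36                     # 4 header bytes + 32-byte digest
        and raw[0] == 0x01                 # CID version 1
        and raw[1] in (0x55, 0x71)         # codec: raw or dag-cbor
        and raw[2:4] == b"\x12\x20"        # multihash: sha2-256, 32 bytes
    )

# A dummy CIDv1 (dag-cbor codec, all-zero digest) passes the shape check.
assert is_dasl_cid("b" + base64.b32encode(
    bytes([0x01, 0x71, 0x12, 0x20]) + bytes(32)).decode().lower().rstrip("="))
```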

We're driving interoperability between implementations thanks to the amazing test suite that the Hypha Worker Co-Operative built based on an IPFS Foundation grant. (They wrote about the testing work on this very blog.) Watching the results evolve week over week has been cause for celebration: where initially there was red everywhere — because no one writes perfectly interoperable code without a test suite — green is now growing fast, with increasing alignment across the board. That very interoperability has made it possible for us to submit an Internet Draft lovingly titled The tag-42 profile of CBOR (after the CBOR tag for CIDs) to the IETF. The draft covers DASL CIDs and DRISL, and is particularly interesting in the context of ongoing discussions to standardize higher-level parts of the AT protocol at the IETF, like the "repository sync" operated over (DRISL-encoded) personal data servers by relays polling them for recent changes.
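
For the curious, the tag in question looks like this on the wire. Here's a sketch using the generic `cbor2` package rather than a DRISL implementation (which enforces stricter rules, such as map key ordering): a CID travels as tag 42 wrapping a byte string that starts with a 0x00 multibase-identity prefix.

```python
# How a CID rides inside CBOR under tag 42; sketched with the generic
# `cbor2` package (pip install cbor2) to make the wire format visible.
# A real DRISL/DAG-CBOR codec enforces stricter rules than this.
from cbor2 import CBORTag, dumps, loads

cid_bytes = bytes([0x01, 0x71, 0x12, 0x20]) + bytes(32)   # a dummy CIDv1

# Tag 42 wraps a byte string holding 0x00 (the multibase "identity" prefix)
# followed by the binary CID.
encoded = dumps({"link": CBORTag(42, b"\x00" + cid_bytes)})
print(encoded.hex())          # contains d82a: 0xd8 0x2a is "tag(42)" in CBOR

decoded = loads(encoded)
assert decoded["link"].tag == 42
```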

We hold monthly virtual meetings for the content addressing community, alternating between CID Congresses (presentations and discussions) and DASLing groups (hands-on working sessions). They’re always on the 3rd Thursday of each month – subscribe to the CID Congress Luma calendar to join.

Ecosystem Tooling

One of the joys of working on a truly open ecosystem is that you get genuine surprises. In July, a new IPFS client that none of us had ever heard of, written in Python and identifying itself as P2Pd, launched and within only a few days skyrocketed to power 15% of the IPFS public network (Amino); it has since stabilized near 10%.

The Python community has also produced a number of geospatial tools that rely on or connect with IPFS technology. One example is the ORCESTRA Campaign, which stands for "Organized Convection and EarthCARE Studies over the Tropical Atlantic". (And if, like us, you can't resist a good acronym for breakfast, don't sleep on their subprojects: CELLO, CLARINET, or PICCOLO.) The whole project is too complex to describe in full here, but it's a fascinating collaboration. The University of Maryland's EASIER (Efficient, Accessible, and Sustainable Infrastructure for Extracting Reliable) Data Initiative has also released ipfs-stac v0.2.0, a pivotal tool for onboarding and interfacing with geospatial data via STAC APIs on IPFS.
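
We won't guess at the ipfs-stac API here, but the underlying idea is easy to sketch: a STAC item whose asset hrefs are `ipfs://` URIs can be resolved through any gateway (or a local node). The item below follows the standard STAC layout; the CID and gateway URL are placeholders.

```python
# Not the ipfs-stac API (which we won't guess at here), just a sketch of
# the underlying idea: STAC assets addressed by ipfs:// URIs resolve
# through any gateway. The CID and gateway below are placeholders.
GATEWAY = "https://ipfs.io/ipfs/"   # substitute your own gateway or local node

stac_item = {
    "type": "Feature",
    "id": "example-scene",
    "assets": {
        "data": {"href": "ipfs://<cid-of-the-asset>", "type": "image/tiff"},
    },
}

def resolve_assets(item: dict) -> dict:
    """Map ipfs:// asset hrefs to plain HTTP URLs, leaving others untouched."""
    urls = {}
    for name, asset in item["assets"].items():
        href = asset["href"]
        if href.startswith("ipfs://"):
            urls[name] = GATEWAY + href[len("ipfs://"):]
        else:
            urls[name] = href
    return urls

print(resolve_assets(stac_item))
```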

Of note is their straightforward approach to tooling reuse: they prepare data to work with IPFS and integrate directly with Kubo, as can be seen in their codebase. We've long been interested in IPFS tooling for geospatial work, and for scientific data in general, and this is as good an example as they come.
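
As a flavor of what that integration looks like, here's a minimal sketch of handing a file to a local Kubo node over its HTTP RPC. It assumes the `requests` package and a node running with default settings on port 5001.

```python
# A minimal sketch of handing data to a local Kubo node over its HTTP RPC
# (the /api/v0/add endpoint), assuming the `requests` package and a node
# running with default settings.
import requests

def add_to_kubo(path: str) -> str:
    """Add a file to the local Kubo node and return the resulting CID."""
    with open(path, "rb") as f:
        resp = requests.post(
            "http://127.0.0.1:5001/api/v0/add",
            params={"cid-version": 1},   # ask for a CIDv1 rather than legacy v0
            files={"file": f},
        )
    resp.raise_for_status()
    return resp.json()["Hash"]

# print(add_to_kubo("scene.tif"))
```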

One thing that content addressing is useful for is data management, notably data provenance and attaching verifiable attestations to data sets for governance or compliance purposes. Within that space, we've been impressed with EQTYLab's product suite that uses IPFS primitives for precisely those purposes. It simply looks slick and eminently usable.
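
To be clear, what follows is a toy illustration of the pattern, not EQTYLab's actual format: the attestation names the exact bytes it covers by CID, and because the attestation is itself content-addressed, tampering with either one is detectable.

```python
# A toy illustration of content-addressed provenance (not EQTYLab's format):
# the attestation pins the exact dataset bytes by CID, and is itself
# content-addressed, so changes to either are detectable.
import base64
import hashlib
import json

def raw_cid(data: bytes) -> str:
    """CIDv1 (raw codec, sha2-256) for a blob of bytes, base32-encoded."""
    digest = hashlib.sha256(data).digest()
    return "b" + base64.b32encode(
        bytes([0x01, 0x55, 0x12, 0x20]) + digest
    ).decode().lower().rstrip("=")

dataset = b"temperature,humidity\n21.3,0.44\n"
attestation = {
    "subject": raw_cid(dataset),     # pins the exact dataset bytes
    "claim": "passed-qa",            # placeholder claim
    "issuer": "did:example:lab",     # placeholder identifier
}
# The attestation is content-addressed too, so it can be referenced
# (and chained) just like the data it describes.
print("attestation CID:", raw_cid(json.dumps(attestation, sort_keys=True).encode()))
```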

And lest we forget: Bluesky blew past 40 million users this year, the growing AT ecosystem has over 400 apps with daily activity, and the community has shipped many libraries for all major languages to work with AT data and protocol components. Not too shabby for a content-addressed social network. (See our write-up of AT and the Eurosky event from November.)

Performance and Usability

It's been a great year for Kubo, which shipped seven major releases up to the latest and shiniest v0.39. But the number of releases is less impressive than what was in them, and if it feels like you're breathing dust right now, it might be from Kubo's radical performance improvements. The DHT system was rebuilt from the ground up, and the new "sweep" provider can efficiently provide hundreds of thousands of CIDs without tickling your memory or risking open warfare with your ISP. This joins Bitswap work that has demonstrated 50-95% bandwidth savings and 80-98% message volume reduction in testing. Even better, those CIDs can now be served directly from your node to browsers using AutoTLS, which automatically sets up the certificate needed to make Secure WebSocket connections work in many places they previously could not. Conversely, Kubo can now fetch over HTTP, so you can use battle-tested HTTP infrastructure to serve content to IPFS networks.
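
If you want a taste of the HTTP side without any IPFS software at all, here's a stdlib-only sketch of the trustless-gateway pattern: request the raw block behind a CID and check that its sha-256 digest matches the one embedded in the CID. It assumes a base32, sha-256 CIDv1 and elides error handling.

```python
# Verified retrieval over plain HTTP, sketched with the trustless-gateway
# pattern: fetch the raw block for a CID and check its sha-256 against the
# digest embedded in the CID. Assumes a base32, sha-256 CIDv1; error
# handling is elided.
import base64
import hashlib
import urllib.request

def fetch_verified(gateway: str, cid: str) -> bytes:
    """Fetch a raw block for `cid` and verify it before returning it."""
    req = urllib.request.Request(
        f"{gateway}/ipfs/{cid}",
        headers={"Accept": "application/vnd.ipld.raw"},  # ask for the raw block
    )
    with urllib.request.urlopen(req) as resp:
        data = resp.read()
    b32 = cid[1:].upper()
    b32 += "=" * (-len(b32) % 8)
    expected = base64.b32decode(b32)[-32:]   # the digest is the CID's tail
    if hashlib.sha256(data).digest() != expected:
        raise ValueError("block does not match its CID; refusing to trust it")
    return data

# data = fetch_verified("https://ipfs.io", "<base32-cidv1>")
```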

Helia scored a big win by shipping verified fetch, a drop-in replacement for the classic Fetch API that verifies data for you. In turn, verified fetch powers the mighty Service Worker Gateway, a key component that will allow us to phase out HTTP gateways entirely very soon. This makes IPFS all the more usable in the browser, without end users needing to manually install anything.

Iroh too had a rocking year, with no fewer than 19 releases (and there I was thinking Rust made shipping hard…) and over 4,500 new GitHub stars inside of 2025. They added support for many protocols, including live audio/video (working with Streamplace!), and many of those protocols, like gossip and blobs, compile to WASM and run in the browser. The community growing around Iroh is nothing short of amazing, having brought us TypeScript bindings, Alt-Sendme (with 4,500 stars of its own in Q4 alone!), the high-performance end-to-end testing platform Endform, the wallet Fedi, and Strada, a collaborative suite for creative teams that need high-speed access to massive media content.

And the standards appreciators among you have some meaty, perhaps even gamey from long maturation, specs to sink your teeth into: UnixFS, the format used by most IPFS systems that expose some form of file system abstraction, and the Kademlia DHT, which describes how DHT-based IPFS networks make it possible for nodes to find content from one another.
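
And if reading a DHT spec sounds dry, the core idea fits in a few lines of Python: Kademlia measures distance between IDs by XOR, and a lookup repeatedly narrows in on the peers closest to a target key. The peer IDs below are stand-ins, not real libp2p identities.

```python
# The heart of Kademlia in a few lines, to give a feel for what the DHT
# spec pins down: distance between IDs is their XOR, and lookups move
# toward the peers closest to the target key. Peer IDs are stand-ins.
import hashlib

def xor_distance(a: bytes, b: bytes) -> int:
    return int.from_bytes(a, "big") ^ int.from_bytes(b, "big")

def closest_peers(target: bytes, peers: list[bytes], k: int = 3) -> list[bytes]:
    """Return the k peers whose IDs are XOR-closest to the target key."""
    return sorted(peers, key=lambda p: xor_distance(p, target))[:k]

peers = [hashlib.sha256(f"peer-{i}".encode()).digest() for i in range(20)]
key = hashlib.sha256(b"some content").digest()
for p in closest_peers(key, peers):
    print(p.hex()[:16], xor_distance(p, key).bit_length())
```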

We also have the CID Profiles specification almost, almost finished. CIDs have a lot of options, which is great whenever you need to take a Swiss Army chainsaw to your content identifiers, but it can make it challenging for two people to generate the same CIDs (and therefore to verify content), since they need to be using exactly the same options. Profiles solve this by pinning down a complete set of options under a single name, so that parties that need to talk can simply agree on a profile.
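
Here's a sketch of the failure mode and the fix, with illustrative (not spec) names: the same bytes hashed under different option choices yield different CIDs, while a shared profile pins every knob so both sides agree.

```python
# A sketch of what a CID profile amounts to in practice: every option
# pinned down once, so two parties derive identical CIDs from identical
# bytes. Field names here are illustrative, not the spec's.
import base64
import hashlib
from dataclasses import dataclass

@dataclass(frozen=True)
class CidProfile:
    version: int = 1
    codec: int = 0x55        # 0x55 = raw, 0x71 = dag-cbor
    hash_code: int = 0x12    # sha2-256

    def cid_for(self, data: bytes) -> str:
        digest = hashlib.sha256(data).digest()
        raw = bytes([self.version, self.codec, self.hash_code, len(digest)]) + digest
        return "b" + base64.b32encode(raw).decode().lower().rstrip("=")

blob = b"the same bytes"
# Different option choices: different CIDs for identical content...
assert CidProfile(codec=0x55).cid_for(blob) != CidProfile(codec=0x71).cid_for(blob)
# ...whereas two parties sharing one profile always agree.
assert CidProfile().cid_for(blob) == CidProfile().cid_for(blob)
```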

Events With A Wide Community

One of our areas of focus this year (and we're not about to stop!) was finding out how our stack, and content addressing in general, can be used to solve problems people face across the board, from syncing data faster to helping save democracy with more governable protocols (which, yes, content addressing does help with). We do this by meeting people where they are and learning about the problems they face across many domains and walks of life.

This included our own Hashberg event in Berlin, of course, but also so much more. We did attend a number of web3 events, such as the excellent ProtocolBerg, LabWeek, and DevConnect, but we devoted more energy to connecting with the wider world. This included hacking heavies like the Local-First Conference, FOSDEM, and the Web Engines HackFest, as well as more ecosystem-oriented events like Eurosky, DecidimFest, the Cypherpunk Camp, and classics like Re:publica or MozFest. We also hopped over to Japan to see how content addressing might work with the Originator Profile. And we rubbed elbows with research and standards communities at the Dagstuhl Seminar, the Public AI Retreat, and of course the IETF.

We also went well outside of tech and into the real world at RightsCon, the French AI Summit and its FreeOurFeeds side event, the IGF (notably its workshop on social media infrastructure), the Summit on European Digital Sovereignty, and the UNDP's event on governance innovation. It's been a whirlwind of a year, and we've learned a lot that continues to inform our work.

We'll be announcing more in 2026 but you can already catch us speaking at FOSDEM in early February (both Mosh and Robin), as well as at the AT Proto meetup on the Friday prior, and ATmosphereConf in March. And look for Volker speaking on An Open, Decentralized Network for Metadata (in German) at FOSSGIS Göttingen!

Looking Ahead

As we ride into the reddened sunrise of 2026, looking ominously stylish as four horsepeople are wont to, we already have a batch of goodies we've been preparing.

We want to extend the capabilities of our current content-addressing stack, especially for large data. Watch out for exciting announcements in the pipeline around geo data and verifiable range requests that work over vanilla HTTP. We're also continuing our partnership with the always-brilliant Igalia and hope to bring a number of improvements to ye aulde browsers, notably for streaming verification.

We've also been talking with our friends at Streamplace about collaborating on specs for a usable subset of C2PA and deterministic MPEG-4 containers, so that you can watch content-addressable videos about content addressing. Another potential collaboration, with secure chat providers and others who'd like to align on web-app containers, may also happen. It's still early days; we'll be sure to keep you posted as soon as there's even just a public description of the problem that covers more than me being a tease about it.

Overall, we'll be bringing more of the same. We'll keep working on modularization, interoperability, and adoption. We'll keep investing in test suites and implementations as needed. We'll keep pushing the IPFS family of technologies forward until it's so consistently easy to use that you stop noticing it entirely, until it's so straightforward you need not think about anything other than the specific problem you wish to solve.

Finally, the most important thing that we look forward to in 2026 is your participation. Ok, ok, I know, this sounds disgustingly trite, like an LLM in obsequious mode wrote it for Christmas, but we actually mean it. We might well be a pigheaded, cantankerous bunch, but we're your pigheaded, cantankerous bunch. Everything we did in 2025 was to make things better for you, and it was always informed by what we heard from people or observed in the wild. Next year will be no different — but for that to work, we need to hear from you! There are many ways to reach out: post on the forum, hit @ipfs.tech up on the Bluesky, open an issue on the relevant repo, come talk to us at an in-person event, or join any of the meetings on the IPFS Calendar that strike your fancy. The rumors are true: we do bite; but we only bite the bad people, so come talk!