The Arc Community Hub

Event Replay: Technical Insights on Arc Testnet Reliability

Posted Dec 19, 2025 | Views 311
# Arc
# Testnet

Speakers

Adi Seredinschi
Principal Product Manager @ Circle
Corey Cooper
Senior Manager, Developer Relations and Ecosystem Marketing @ Circle

Corey Cooper is a technologist with experience spanning development, founding and scaling products, solutions engineering, and developer relations. He blends vision, creativity, and strong business acumen with deep technical expertise to drive go-lives, product launches, and high-impact developer experiences. Corey is known for his ability to navigate across roles, lead cross-functional initiatives, and deliver scalable, well-crafted solutions that support both product growth and customer success.


SUMMARY

Principal Product Manager Adi Seredinschi and Sr. DevRel Manager Corey Cooper walk through the reliability work behind the latest Technical Insights blog and break down how Arc achieves predictable performance and consistent block times.

They cover:
→ How the team tested Reth and the Malachite consensus engine
→ The tools used to push Arc to its limits: Quake, Guzzler, and Spammer
→ How Arc delivers stable ~500ms block cadence across validators

Deep dive. Engineering tradeoffs. Transparent look into how Arc is built.


TRANSCRIPT

Livestream kickoff: ARC community and testnet reliability

Sam: Alright. I see people starting to join us.

Sam: Yeah. Shout out to Thompson. Yes. You were there. Good to see you. Good to see you.

Sam: Yeah. Super, super excited for the stream. Once again, hey, everyone. Welcome. Great to see so many of you here today.

Sam: This livestream is really for the ARC community, and I wanna do this because it’s all about reliability—specifically, the real work behind the technical insights on the ARC testnet reliability blog we published recently.

Sam: Today, I don’t want this to be like a marketing walkthrough. It’s a chance to go behind the blog and hear directly from the people who made the decisions, ran the tests, hit the edge cases, and ultimately got the ARC public testnet to a place where performance is predictable and block times are consistent.

Sam: We’ll walk through the blog with Adi, who authored it, and Corey will help pull together all the threads—digging into the deeper technical choices, the trade-offs that were made, and the lessons learned along the way.

Sam: We’ll also talk a little bit about how ARC was tested before opening things up publicly, and what that tells you as a builder about what you can expect from the network.

Quick intros: Corey and Adi

Sam: Before we get started, let me make sure we go through some quick intros.

Sam: Corey is our senior manager of DevRel here at Circle. We like to call him a technologist who’s worked across development, product, solutions engineering, and developer relations. He’s known for bridging deep technical detail with real-world execution—especially when it comes to launches, go-lives, and building high-impact developer experiences.

Sam: Adi, who was the one behind this blog, is a principal product manager at Circle working on ARC. Before Circle, he led product at Informal Systems on Malachite, a flexible BFT consensus engine in Rust. And before that, he focused on improving the efficiencies of replicated systems as a researcher at EPFL.

Sam: If you’re curious how to say his name: it’s Adi, pronounced like “buddy” without the “b.” So, Adi. I love it.

Sam: We want this to be a conversation, not a lecture. So please drop comments. I see there’s tons of comments and GMs already going. Drop your comments. We’ll try to get to some of the questions and give you some answers before we jump off.

Sam: With that, I’m gonna go ahead and hop off. Let’s let Adi start the conversation and talk about the reliability work and some of the problems we’re trying to solve. Team, take it away.

Why reliability matters for money movement (deterministic finality, 24/7)

Corey: Thank you. Thanks. Thanks. Yeah. This is round two, Adi, of our conversations together. Always a pleasure to talk to you—learn a lot. And it’s exciting to build on infrastructure that you and your team are working on to help make developers’ experiences better as it relates to money movement.

Corey: In our last talk, we talked about how important deterministic finality is to financial institutions, to individuals and businesses who are moving money around the world 24/7, 365 days a year.

Corey: Now we’re well into roughly two months of public testnet. As we move to mainnet, one of the most important things is the transition of this infrastructure.

Corey: Does it work and run like a Ferrari 24/7, 365 days a year? And if it does have any bumps in the road, can it regain and recalibrate really quickly, rebound, and get back to humming like a Ferrari—pushing out transactions through every block?

Corey: I’d like to discuss your most recent blog post about the technical insights and learnings you all discovered by pressure testing this infrastructure—why that’s important, doing testnet—and how CTOs, developers, businesses, and even consumers should think about it.

Corey: First thing: why are these technical insights—testing testnet reliability—important for developers?

Progressive openness: feedback, open sourcing, node operators

Adi: Yeah. Where do I start?

Adi: First of all, great to be here again. Thanks, Corey, and thanks, Sam, for facilitating as well.

Adi: Why did we write this blog and why did we do this work? Because the key property we want to ensure is reliability and predictability of the infrastructure of the network. It has to run smoothly—like you said—like a Ferrari. We use many analogies: it should work like an atomic clock, just ticking twice a second—two blocks per second—and 500 milliseconds each.

Adi: The reason we took time and sat down and said, okay, let’s also tell people about what we’ve done and what happened behind the scenes, is because we’d like to progressively open things up over time as we slowly march towards mainnet.

Adi: We want to open more and more development and conversation with different builders. We want to progressively open the feedback loop so we can adapt the infrastructure. Maybe some features or design decisions are not the right ones, or maybe we overemphasize certain properties of the network—we want to hear that from builders.

Adi: As the network matures and as we gain confidence it’s ready, we’re also going to strive to open source the code. That’s another step in welcoming feedback—more eyes on the code, the stack, the design decisions, and so on.

Adi: Hopefully people launch their own nodes to operate on ARC—not as validators, but as node operators and indexers, whatever kind of infrastructure people want to run. Validators will likely continue to be permissioned, but we do want to allow people to run permissionless nodes.

Adi: All of it is in the grander vision of progressively opening up more and more of the insights, the code, the stack, the design decisions, and so on.

ARC architecture: consensus, execution, and the translation layer

Corey: Yeah. Okay. In the blog, you talked about three components to ARC. You have your consensus engine, which is Malachite. You have your execution layer, which is Reth. But there’s this middleware—this translation layer—that translates from consensus to Reth.

Corey: Can you talk about the tools you designed to test this architecture—consensus, middleware, execution? Why did you have to custom build these tools, and what are the three tools and their purposes?

Adi: Yeah. Great question. There’s a pretty simple picture—two boxes. And you’re right: there is a translation layer in the middle. It’s not explicitly called out in that picture. It doesn’t really belong in the consensus box, but today it logically sits there because the Engine API is a very opinionated interface designed for the EVM and execution clients like Reth. We could not put the translation layer inside the execution binary, so it sits in what we call “consensus” in that simplified diagram.
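[Editor's note] A minimal sketch of the translation-layer idea Adi describes: a finalized consensus decision is mapped onto Engine-API-style calls to the execution client. The class and method names here loosely mirror Ethereum's Engine API shape (`new_payload`, `forkchoice_updated`) but are simplified stand-ins, not Arc's actual code.

```python
class EngineAPIStub:
    """Minimal stand-in for an execution client's Engine API endpoint."""
    def __init__(self):
        self.head = None
        self.payloads = []

    def new_payload(self, payload):
        # Execution client validates and executes the block payload.
        self.payloads.append(payload)
        return "VALID"

    def forkchoice_updated(self, head_hash):
        # Execution client advances its canonical head.
        self.head = head_hash
        return {"status": "VALID"}

def on_block_finalized(engine, consensus_block):
    """Translate a finalized consensus block into Engine API calls."""
    payload = {"hash": consensus_block["hash"], "txs": consensus_block["txs"]}
    status = engine.new_payload(payload)
    if status == "VALID":
        engine.forkchoice_updated(consensus_block["hash"])
    return status

engine = EngineAPIStub()
block = {"hash": "0xabc", "txs": ["tx1", "tx2"]}
print(on_block_finalized(engine, block))  # VALID
print(engine.head)                        # 0xabc
```

The point of the indirection: consensus never needs to know EVM details, and the execution client never needs to know consensus details; the adapter owns the opinionated interface between them.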

Adi: On testing: the tools we built are quite customized. There aren’t a lot of off-the-shelf tools you can use to test the whole stack of distributed nodes around the world with latency, failures, and conditions that normally appear in practice.

Adi: There are some tools for the EVM—EVM is a bit more advanced than other ecosystems. I used to work in Cosmos, and in Cosmos we built tools for CometBFT. A lot of that experience inspired what we did for Malachite and ARC.

Adi: We built a suite of tools to test CometBFT-based networks, and we did similar things for Malachite and ARC. The first tool we talk about in the blog is Quake.

Tooling: Quake (orchestrate networks + inject failures)

Adi: Quake is a name we use internally. The purpose is to orchestrate networks—to spin them up.

Adi: It works with AWS or other cloud providers. You spin up a network, configure who the validators are, their voting power, where they’re placed on the map of the world, and the precise config parameters—monikers, buffer sizes, and so on. You can go arbitrarily low level.

Adi: Quake also lets you parameterize latency between nodes. Once you deploy, you can introduce latency spikes, drop packets, kill nodes and restart them, and so on.

Adi: We also test upgrades with Quake—having a private devnet running, introducing a breaking change, and deploying it in a coordinated upgrade fashion.
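[Editor's note] To make the Quake description concrete, here is a hypothetical sketch of the kind of declarative devnet spec such an orchestrator might consume: validators with voting power and placement, pairwise latency injection, and scheduled faults. All field names are illustrative assumptions, not Quake's real schema.

```python
# Illustrative devnet spec for a Quake-style orchestrator (hypothetical schema).
devnet = {
    "validators": [
        {"moniker": "val-us-east", "region": "us-east-1",      "voting_power": 10},
        {"moniker": "val-eu-west", "region": "eu-west-1",      "voting_power": 10},
        {"moniker": "val-ap-se",   "region": "ap-southeast-1", "voting_power": 10},
    ],
    # Pairwise latency injection, in milliseconds.
    "latency_ms": [
        {"from": "val-us-east", "to": "val-eu-west", "ms": 80},
        {"from": "val-eu-west", "to": "val-ap-se",   "ms": 160},
    ],
    # Scheduled faults: kill a node mid-run, then restart it.
    "faults": [
        {"at_block": 1000, "action": "kill",    "target": "val-eu-west"},
        {"at_block": 1050, "action": "restart", "target": "val-eu-west"},
    ],
}

total_power = sum(v["voting_power"] for v in devnet["validators"])
print(total_power)  # 30
```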

Corey: So would it be accurate to say Quake helps you simulate breaking changes, restarts—scenarios that could take the network down—and how it rebounds, and what happens to data, developer transactions, and history after disruption?

Adi: Yeah. That’s quite accurate. You can spin up a network with Quake and almost arbitrarily mess with it.

Adi: The goal is: if you disrupt the network, ideally the user should not notice. Like Gmail: servers go down all the time, but you don’t notice because the system masks failures.

Adi: One failure that’s hard to mask without side effects is if the node that’s about to propose the block goes down at the critical moment. Then you see a blip: instead of the next block being produced in 500 milliseconds, it might take one second because the designated proposer went down and the next in line takes over.
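[Editor's note] The proposer-failure "blip" Adi describes can be modeled in a few lines: if the round-robin proposer for a height is down, a round timeout fires and the next validator in line produces the block, stretching that one block's latency. The numbers (500 ms cadence, 500 ms round timeout) and the simple round-robin are illustrative assumptions.

```python
# Toy model: latency of one block under a round-robin proposer schedule.
def block_latency_ms(height, validators, down, cadence_ms=500, timeout_ms=500):
    proposer = validators[height % len(validators)]
    if proposer in down:
        # Round 0 times out; the next proposer in line produces the block.
        return cadence_ms + timeout_ms
    return cadence_ms

vals = ["v0", "v1", "v2", "v3"]
print(block_latency_ms(4, vals, down=set()))    # 500  (v0 is up)
print(block_latency_ms(5, vals, down={"v1"}))   # 1000 (v1 missed its slot)
```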

Tooling: Spammer (fill blocks with transactions, stress p2p + validator buffers)

Corey: That’s amazing. I hope developers are listening—pressure testing to make sure you get “four nines” uptime so every transaction goes through sub-second and the end-user experience doesn’t change.

Corey: The other tool was Spammer, where you’re able to fill up block capacity and simulate what happens. Can you talk about that tool and what it tests?

Adi: Yep. Spammer is pretty cool. You want to spam the network—but in a specific way.

Adi: One issue we encountered early is: when blocks were produced really fast (we were not throttling to 500ms yet), it’s hard to fill blocks. If you produce blocks every 100–300ms, you need a huge amount of parallel workload creation to send enough transactions.

Adi: The faster your chain, the harder it is to spam. So we built Spammer to make it more predictable to fully fill blocks—in bytes.

Adi: We want to fill the buffers, fill block size (bytes), stress block parts dissemination around the network, and in general the p2p network validators use to communicate and agree on blocks.

Adi: To do that, we pre-fund wallets ahead of time (in the genesis file). Spammer includes machinery to pre-fund accounts, buffer them in a channel, and then workers pick the next account and send a transaction, and so on.

Adi: Transactions don’t need to be sophisticated, but they do need to take space.
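[Editor's note] The worker pattern Adi outlines — pre-funded accounts queued in a channel, workers pulling the next account and firing a space-consuming transaction — can be sketched as below. `send` is a stand-in appended to a list; a real tool would sign and submit via JSON-RPC, and the account list would come from the genesis file.

```python
import queue
import threading

def spam(accounts, num_workers, sent):
    funded = queue.Queue()
    for acct in accounts:          # accounts pre-funded via the genesis file
        funded.put(acct)

    def worker():
        while True:
            try:
                acct = funded.get_nowait()
            except queue.Empty:
                return             # channel drained; worker exits
            # Padded calldata so each tx "takes space" in the block (bytes).
            tx = {"from": acct, "data": "00" * 512}
            sent.append(tx)        # stand-in for signing + broadcasting

    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

sent = []
spam([f"acct{i}" for i in range(100)], num_workers=8, sent=sent)
print(len(sent))  # 100
```

The channel-plus-workers shape is what makes the load predictable: throughput scales with worker count rather than with how fast any single sender can sign.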

Corey: Were there things you learned when you pressure tested with Spammer—gas fees changing, anything weird?

Adi: It was frustrating initially because we weren’t able to hit gas limits. It’s not easy to exhaust gas in a block with simple transactions.

Adi: Spammer wasn’t for the gas limit. Spammer was to abuse the p2p network—send many legitimate-looking transactions and try to tip over the network.

Adi: A nice outcome would be validator buffers fill, a validator becomes overwhelmed, misses their slot, or starts lagging and can’t participate in voting.

Live interruption + Q&A: validator failures, recovery, and sync

Adi: You still there, Corey? Just double-checking.

Corey: Like I’m frozen. Do you hear me?

Adi: Yeah. I can hear you. You’re frozen.

Corey: Let me jump off and come right back in.

Adi: Cool. I’ll give you half a minute. Please keep the questions coming.

Adi: One question: what mechanisms are in place to detect and recover from validator failure or downtime during testnet load spikes?

Adi: The purpose of BFT consensus is to mask failures. If a validator goes down, others take its place. As long as validators holding more than two thirds of the voting power are up and running correctly, the network continues.

Adi: Detect and recover are more at lower levels. Nodes retry sending votes so peers catch up. There’s also a sync protocol: if a node goes down and restarts (we run nodes in Kubernetes, with automatic restarts), it catches up by asking peers for what it missed—votes/proofs used to finalize blocks—and applies them.

Adi: The network can go down if more than one third of validators are down, because you lose the two-thirds threshold needed to form quorum. Recovery is typically restarting nodes and diagnosing what went wrong—corrupt DB, partitions, misconfig, overflowed buffers, and so on.
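[Editor's note] The liveness condition Adi states reduces to a one-line check: blocks keep finalizing while live validators hold strictly more than two thirds of the total voting power; losing more than one third halts the network. A minimal sketch, using equal voting power for simplicity:

```python
def can_form_quorum(voting_power, up):
    """True if live validators hold strictly more than 2/3 of total power."""
    total = sum(voting_power.values())
    live = sum(p for v, p in voting_power.items() if v in up)
    return 3 * live > 2 * total   # integer form of live/total > 2/3

power = {"v0": 1, "v1": 1, "v2": 1, "v3": 1, "v4": 1, "v5": 1}
print(can_form_quorum(power, up={"v0", "v1", "v2", "v3", "v4"}))  # True  (5/6 live)
print(can_form_quorum(power, up={"v0", "v1", "v2", "v3"}))        # False (4/6 live)
```

Note the strict inequality: with 6 equal validators, exactly 4 live is not enough, which is why losing "more than one third" is the halting condition.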

Tooling: Guzzler (gas exhaustion / worst-case execution time)

Corey: Can you talk about Guzzler, and how it helps you monitor what happens when you pressure test as it relates to gas?

Adi: Yeah. Good segue from Spammer.

Adi: Spammer loads the pipes and stresses p2p. Guzzler complements that by creating a big transaction that takes a long time to execute—essentially consuming all the gas available in a block with a loop.

Adi: The goal is to see: if a user deploys something that fills the block 100% in terms of gas, what happens? Do validators need 2 seconds to execute? Are there limits or rough edges we haven’t explored?

Adi: Guzzler is a contract used to pressure test utilization of one of the most precious resources: gas. By itself, I don’t recall a specific regression, but in combination with Quake and Spammer we definitely surfaced issues.
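[Editor's note] What a Guzzler-style contract does can be captured with back-of-the-envelope arithmetic: size a loop so one transaction consumes the block's entire gas budget. The gas numbers below (30M block gas limit, 21k base transaction cost, ~200 gas per loop iteration) are generic EVM-style illustrations, not Arc's actual parameters.

```python
def iterations_to_fill_block(block_gas_limit, base_tx_gas, gas_per_iteration):
    """How many loop iterations exhaust the remaining gas in one block."""
    return (block_gas_limit - base_tx_gas) // gas_per_iteration

# e.g. a 30M-gas block, 21k base tx cost, ~200 gas burned per iteration
iters = iterations_to_fill_block(30_000_000, 21_000, 200)
print(iters)  # 149895
```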

What you learned: peering, persistent peers, sync efficiency, and write-ahead logs

Adi: One big lesson: peer topology matters a lot. Validators should peer directly with each other. If traffic routes through extra full nodes, you introduce extra hops and latency. We realized validators should use persistent peers—explicitly connect to other validators and stay connected.

Adi: Another lesson: the sync protocol was suboptimal. Catching up one block at a time is slow. It’s more efficient to stream ranges, but you still need to handle gaps and retries robustly.

Adi: We also had lessons around the write-ahead log. It prevents equivocation when a validator crashes and restarts mid-round. If you vote on one proposal, crash, forget, and then vote on a conflicting proposal, that’s slashing territory. The write-ahead log persists votes so after restart you don’t sign conflicting messages.
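[Editor's note] The write-ahead-log idea Adi describes can be sketched as: persist each vote before signing, and after a restart refuse to sign a conflicting vote for the same height and round. This in-memory dict is a stand-in; a real implementation writes to durable storage and reloads it on startup.

```python
class VoteWAL:
    """Sketch of a vote write-ahead log that prevents equivocation."""
    def __init__(self):
        self.log = {}  # (height, round) -> block hash already voted for

    def try_vote(self, height, rnd, block_hash):
        key = (height, rnd)
        prev = self.log.get(key)
        if prev is not None and prev != block_hash:
            return False            # would equivocate: refuse to double-sign
        self.log[key] = block_hash  # persist *before* signing/broadcasting
        return True

wal = VoteWAL()
print(wal.try_vote(100, 0, "0xaaa"))  # True
# ...node crashes and restarts; the WAL is reloaded from disk...
print(wal.try_vote(100, 0, "0xbbb"))  # False: conflicting vote blocked
print(wal.try_vote(100, 0, "0xaaa"))  # True: re-sending the same vote is fine
```

The ordering is the whole point: write-then-sign means a crash between the two steps can only lose a vote, never produce two conflicting signatures.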

Q&A: tested TPS threshold

Corey: There’s a question about TPS goals. Can you share the TPS threshold you tested?

Adi: TPS is meaningful from the consensus layer perspective. Currently, we can do around 3,000 transactions per second reliably: about 1,500 transactions per block times two blocks per second.

Adi: You can try to push beyond that, but you start to hit limits on how fast transactions reach validators. Also, in public testnet today, RPC has DoS protection, so it may be harder to push to that level until there are more RPC providers and users can push directly to their own nodes.
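[Editor's note] The throughput figure Adi quotes follows directly from block capacity and cadence:

```python
tx_per_block = 1_500
blocks_per_second = 2            # one block every 500 ms
print(tx_per_block * blocks_per_second)  # 3000 TPS
```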

Predictable 500ms block times: stable blocks for settlement UX

Corey: Toward the end of the blog, you talked about stable block times—submitting and settling blocks every 500 milliseconds. Can you talk about the design and how you achieve that consistently so banks and institutions have predictable settlement times?

Adi: Today the network has 10 validators spread across four geographic regions. With a relatively small setup, blocks can be produced fast—around 300–350ms on average.

Adi: The protocol underneath is responsive—so as soon as the network allows it, it produces the next block without waiting.

Adi: We decided to throttle to make block times predictable: finish producing the block, ship it, the explorer sees it, you get confirmation—then we “take a breath” to fill the 500ms budget. If we finish a block in 300–400ms, we wait the remainder and then proceed.
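[Editor's note] The throttling Adi describes is a pacing loop: produce the block as fast as the network allows, then sleep the remainder of the 500 ms budget so cadence stays constant. A minimal sketch, with `produce_block` as a stand-in and the timing values scaled down for a quick run:

```python
import time

def paced_block_loop(produce_block, num_blocks, budget_s=0.5):
    """Produce blocks on a fixed cadence, padding fast blocks with sleep."""
    timestamps = []
    for _ in range(num_blocks):
        start = time.monotonic()
        produce_block()                      # e.g. finishes in 300-400 ms
        elapsed = time.monotonic() - start
        if elapsed < budget_s:
            time.sleep(budget_s - elapsed)   # "take a breath" to fill the budget
        timestamps.append(time.monotonic())
    return timestamps

# Simulate 50 ms block production against a 100 ms budget.
ts = paced_block_loop(lambda: time.sleep(0.05), num_blocks=3, budget_s=0.1)
gaps = [b - a for a, b in zip(ts, ts[1:])]
print(min(gaps) >= 0.095)  # True: cadence never dips below the budget
```

Because `sleep` only ever pads the fast case, variance in production time is absorbed into the idle slice and downstream consumers see a steady interval.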

Developer impact: USDC, Gateway, and CCTP experience on ARC

Corey: Predictable latency is perfect for developers building systems and wanting to know exactly how long settlement takes.

Corey: Developers listening: Circle Gateway on ARC is a great example. Depositing funds into the Gateway contract happens within 500ms. It’s an amazing experience to deposit your USDC and immediately have it interoperable with supported chains.

Corey: And with CCTP: if ARC is your source chain, it’s so fast—500ms settlement—that you can move liquidity from chain A to chain B and not even have to use fast transfer functionality in some cases.

Corey: These decisions impact DevEx and the end user experience. This reliability work matters.

Closing and sign-off

Corey: Cool. Cool. Well, I think we’re at time. It’s been a great interview, and I appreciate your time, Adi. It’s always great talking to you.

Corey: We will answer questions through content and on social. Thanks to everybody who attended. We really appreciate you taking time to listen to us.

Adi: Alright. Thank you. Alright. We’re signing off. Thanks.

Sam: Yeah. Thanks everybody for joining. I appreciate all the questions that have come in. Glad that Adi was able to answer some of them live.

Sam: Until next time—this might be the last one for the month, but who knows? We may pop up again soon. Talk to you all. See you in Discord, in the chats, and we’ll be around. Thank you again. Bye.
