Akim Mamedov is a tech leader whose career has taken him through decentralized systems, high-performance computing, and complex data engineering. He was previously at Fluence Labs and Superdao. In his work, Akim has pushed forward what he calls the “peer-to-peer revolution,” which promises to upend the architecture of the internet, and he’s done it while working on large-scale data pipelines, modern frameworks, and cutting-edge blockchain integrations. In this interview, he gave us an earful of thoughts on these subjects and several more.
1. The Fluence Labs project involved a custom language called Aqua for executing distributed computations. How did it differ from other distributed computing paradigms you’ve worked with, and what unique challenges did it present?
At Fluence Labs, I helped to create a protocol for data transfer between nodes. The protocol's logic was defined in a custom language called Aqua. Aqua was created specifically to orchestrate data transfers and code execution across a peer-to-peer (P2P) network. When you need to run a massive calculation – like a comprehensive annual financial report at a large bank – and the work has to be spread across several nodes to make the whole thing go faster, any device in a decentralized network can nominally take on part of the task, even a small, browser-based P2P node.
But Aqua helps you do that in a system where not every node is reliable: it fails gracefully and lets you recover and keep making progress without too much interruption. From what I understand, when Aqua was being created, the model was first conceived in human terms – the kind of terms a person might use to give commands to another person – and only afterwards were those commands translated into instructions for a virtual machine to interpret.
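The underlying pattern can be sketched roughly in TypeScript. Aqua expresses it natively in its own syntax, so the snippet below is only an analogy, and the `Peer` interface and `runChunk` call are hypothetical, purely for illustration: fan the work out across unreliable peers and hand a failed chunk to another peer instead of aborting the whole computation.

```typescript
// Rough TypeScript analogy of the orchestration pattern Aqua expresses natively.
// The Peer interface and runChunk call are hypothetical, purely for illustration.
interface Peer {
  id: string;
  // Ask this peer to process one chunk of the larger computation.
  runChunk(chunk: number[]): Promise<number>;
}

// Fan a big job out across unreliable peers; if a peer fails, hand its
// chunk to another peer instead of aborting the whole computation.
async function fanOut(chunks: number[][], peers: Peer[]): Promise<number[]> {
  const results: number[] = [];
  for (const [i, chunk] of chunks.entries()) {
    let done = false;
    // Try peers in turn, starting from a different one for each chunk.
    for (let attempt = 0; attempt < peers.length && !done; attempt++) {
      const peer = peers[(i + attempt) % peers.length];
      try {
        results[i] = await peer.runChunk(chunk);
        done = true;
      } catch {
        // Peer unreachable or failed: fall through and retry on the next peer.
      }
    }
    if (!done) throw new Error(`No peer could process chunk ${i}`);
  }
  return results;
}
```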
2. You achieved a 0.1-second delay in trade copying at Solution Fund, supporting over 10,000 users and multiple traders. Can you walk us through the architecture that enabled such high-speed performance?
We achieved a delay of 0.1 seconds in our Copy Trading system for several important reasons:
- Choice of low-level language: I used Go (Golang) to process our trading signals. This cut our event processing time to a few milliseconds, because Go is faster than most of the languages we could have chosen (PHP or Python, for example) while still letting us get the work done.
- WebSocket use: Initially, we kept track of what traders were doing by sending REST requests to the exchange. Once we figured out that we only needed to know about spot orders, we set up a persistent WebSocket connection to Binance (see the sketch after this list). This “always-on” connection gave us the information we needed far faster than polling over REST.
- Proximity: We placed our servers near Binance’s servers in the Netherlands. By doing this, we noticeably reduced the delay that would otherwise have happened when signals traveled across the internet.
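The production services were written in Go, but the “always-on” idea can be sketched roughly in TypeScript using Binance's public spot user data stream; the handler logic below is purely illustrative.

```typescript
// Rough TypeScript sketch of the "always-on" connection idea (the production
// services were written in Go). Uses Binance's public spot user data stream.
import WebSocket from "ws";

const API = "https://api.binance.com";
const API_KEY = process.env.BINANCE_API_KEY ?? "";

async function main() {
  // 1. Obtain a listenKey over REST once; it identifies our user data stream.
  const res = await fetch(`${API}/api/v3/userDataStream`, {
    method: "POST",
    headers: { "X-MBX-APIKEY": API_KEY },
  });
  const { listenKey } = (await res.json()) as { listenKey: string };

  // 2. Keep a persistent WebSocket open instead of polling with REST calls.
  const ws = new WebSocket(`wss://stream.binance.com:9443/ws/${listenKey}`);

  ws.on("message", (raw) => {
    const event = JSON.parse(raw.toString());
    // executionReport events describe order updates (fills, cancels, etc.).
    if (event.e === "executionReport") {
      // In the real system this is where the copy-trading fan-out would start.
      console.log(`order update: ${event.s} ${event.S} ${event.X}`);
    }
  });

  // 3. Binance expires idle listenKeys, so send a keep-alive every ~30 minutes.
  setInterval(() => {
    fetch(`${API}/api/v3/userDataStream?listenKey=${listenKey}`, {
      method: "PUT",
      headers: { "X-MBX-APIKEY": API_KEY },
    }).catch(console.error);
  }, 30 * 60 * 1000);
}

main().catch(console.error);
```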
3. At Superdao, you processed millions of social network accounts and linked them with web3 data. What unexpected challenges did you face at this scale, and how did you overcome them?
At Superdao, I primarily worked on scraping Twitter (although we scraped several other social networks as well). I was surprised by how difficult it was to get around Twitter's scraping protection, which I had first encountered back in 2009. But I kept at it, because I had also learned that there is always… something. Eventually, I found a library that could make direct requests to Twitter without getting us blocked. Even so, the library took around 10 seconds to parse about 10-15 pages, if I remember correctly.
After tuning things up a bit, I configured a Kafka pipeline and launched 10-20 Docker containers running Node.js scraper instances in parallel. We fed the scraper instances the crypto wallets we had found on-chain, and they returned the data for the Twitter accounts we had passed along with them.
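A minimal sketch of that kind of pipeline, assuming hypothetical topic names and a placeholder `scrapeTwitterAccount` helper (the real pipeline had more moving parts):

```typescript
// Minimal sketch of the scraping pipeline, assuming hypothetical topic names
// and a placeholder scrapeTwitterAccount() helper; the real pipeline differed.
import { Kafka } from "kafkajs";

const kafka = new Kafka({ clientId: "twitter-scraper", brokers: ["kafka:9092"] });
const consumer = kafka.consumer({ groupId: "scrapers" });
const producer = kafka.producer();

// Placeholder for the actual scraping library call.
async function scrapeTwitterAccount(handle: string): Promise<unknown> {
  return { handle, scrapedAt: new Date().toISOString() };
}

async function run() {
  await consumer.connect();
  await producer.connect();
  // Each container consumes wallet/handle pairs discovered on-chain...
  await consumer.subscribe({ topic: "wallets-to-scrape", fromBeginning: false });

  await consumer.run({
    eachMessage: async ({ message }) => {
      const { wallet, twitterHandle } = JSON.parse(message.value!.toString());
      const profile = await scrapeTwitterAccount(twitterHandle);
      // ...and publishes the scraped profile back, keyed by wallet.
      await producer.send({
        topic: "scraped-profiles",
        messages: [{ key: wallet, value: JSON.stringify(profile) }],
      });
    },
  });
}

run().catch(console.error);
```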
4. You mentioned consolidating four separate peer packages at Fluence Labs into a single package, cutting setup time by 100%. What was your approach, and how did you ensure nothing broke during the transition?
We started with four JavaScript packages: one for the core API, two for browser and Node.js integrations, and another for integration tests. To simplify matters, I unified everything around the Core API: I moved all the tests into the Core API package and consolidated the CI/CD there as well. That made the Core API much more cohesive, but at that stage there were still several environment-specific packages that users had to install separately.
I also used Vite builds together with Node.js conditional exports, so the correct bundle is resolved automatically for each environment (Node.js or the browser) without requiring users to explicitly choose between different packages.
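The exports map in package.json can look roughly like this (the package and file names are illustrative, not the actual Fluence layout); Node picks the "node" entry, while bundlers such as Vite resolve the "browser" condition:

```json
{
  "name": "@example/js-client",
  "type": "module",
  "exports": {
    ".": {
      "node": "./dist/node/index.js",
      "browser": "./dist/browser/index.js",
      "default": "./dist/browser/index.js"
    }
  }
}
```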
Installation became much simpler and easier: we ended up with a single, intelligent package that just works across environments, which significantly improved the developer experience.
5. You’ve worked with various blockchains—Solana, Ethereum, Bitcoin, Polygon—across different projects. How has your approach to blockchain architecture evolved, particularly around scalability and user experience?
Take Ethereum, for example. It has long suffered from expensive transactions because of network congestion when many transactions are attempted during peak usage. MEV bots, which are sometimes considered a necessary evil, have been flooding the Ethereum network and contributing heavily to its congestion problems.
Layer 2 solutions such as Arbitrum have been addressing these issues, mainly by using optimistic rollups. They process transactions much more quickly and cheaply than the Layer 1 Ethereum mainnet while largely inheriting the security of that mainnet.
In recent initiatives, I have depended more on L2 solutions for the dual benefits of scalability and user experience. Although L2 integrations can be tricky and involve some unfamiliar steps, they have proven to be a worthwhile strategy for keeping fees low and transactions fast, all without sacrificing security.
6. At Monro Art, you implemented a CRM that boosted staff performance by 60%. Which specific workflows saw the biggest transformation, and how did you measure that improvement?
Monro Art is a company that creates tailored, handmade portraits from photos. It's a business that epitomizes small-scale, local manufacturing, and its process consists of a few steps: hand-drawing the portrait, framing it, keeping the client constantly informed about the portrait's status, and finally shipping it.
Most of these processes were managed by hand before the CRM—phone calls to frame makers, unending exchanges between designers and clients, and the passing of packages for delivery.
Our CRM was built to centralize and automate that work. It tracked the frame inventory and automatically notified suppliers the moment stock ran low. When a new order came in, designers received instant push notifications instead of relying on manual handoffs. The CRM also sent automatic messages to clients to keep them updated as a project moved through its stages, and it brought all forms of communication (Telegram and WhatsApp) together in a single space for more straightforward correspondence.
We also integrated with a delivery service's API to arrange shipments in advance of when they were needed, cutting down on delays. Reducing administrative overhead directly improved the productivity of order fulfillment: the staff can now handle 50–70% more orders without any increase in personnel, which also means we're not adding a lot of new costs.
7. From your perspective, what guiding principles foster innovation within teams, especially when delving into emerging technologies?
When working in a company, I usually try to occupy a middle role between manager and software architect. While paying close attention to team management, I also build features, fix bugs, and do some of the architecture work myself. Keeping my hard skills sharp helps me stay on the same page with the team.
All of this helps me design an efficient team structure for different kinds of work. For example, if a business has a well-defined plan to build something, I focus early on processes and hiring, because the idea is settled and the job now is to deliver innovation with solid quality and speed. Many start-ups, on the other hand, don't have a well-defined idea, and the project concept is likely to change soon. In that case I talk with the business side as much as possible and focus more on writing code and less on formal processes, trading some polish for faster delivery. That lets the business test many different hypotheses in a shorter span of time.
I believe this approach is highly adaptable and plays a crucial role in innovative projects and breakthrough ideas.
8. Can you share a moment when you had to drastically optimize a data processing pipeline under tight constraints? Which strategies worked best, and what might you do differently now?
When I was working at a trading company, we had terabytes of market data that was processed and shown to end users. At some point, management calculated the monthly infrastructure spend and asked us to reduce it by at least 15-20%.
I raised my hand and offered to lead the optimization effort. We used PostgreSQL as a data mart for the relevant market data, and I decided to focus on that part of the system. Initially, PostgreSQL simply piled up historical data, and the backend services that needed it queried it for every client request. Reading directly from the historical tables wasn't efficient, so we had to keep 5-6 read replicas, which was expensive.
I proposed building a couple of precomputed materialized views in PostgreSQL. We had a lot of time-series data, so I suggested using the TimescaleDB extension, which handles this case well. In particular, TimescaleDB turns materialized views into continuous aggregates that are kept up to date as new data arrives. This improvement had a tremendous impact on system performance and gave us a significant boost in read throughput. Once we confirmed that the read workload had dropped, we kept only one database instance, saving the money we had been paying for the additional replicas while keeping the same level of performance.
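As a rough sketch of the idea – assuming a hypothetical hypertable `ticks(ts, symbol, price)`; the names and intervals are illustrative only – a continuous aggregate can be created from a Node.js service like this:

```typescript
// Hypothetical sketch of a TimescaleDB continuous aggregate, assuming a
// hypertable `ticks(ts, symbol, price)`; names and intervals are illustrative.
import { Client } from "pg";

async function createCandleAggregate() {
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect();

  // Precompute 1-minute OHLC candles from raw ticks as a continuous aggregate.
  await client.query(`
    CREATE MATERIALIZED VIEW candles_1m
    WITH (timescaledb.continuous) AS
    SELECT time_bucket('1 minute', ts) AS bucket,
           symbol,
           first(price, ts) AS open,
           max(price)       AS high,
           min(price)       AS low,
           last(price, ts)  AS close
    FROM ticks
    GROUP BY bucket, symbol
    WITH NO DATA;
  `);

  // Refresh the aggregate automatically as new ticks arrive.
  await client.query(`
    SELECT add_continuous_aggregate_policy('candles_1m',
      start_offset      => INTERVAL '1 hour',
      end_offset        => INTERVAL '1 minute',
      schedule_interval => INTERVAL '1 minute');
  `);

  await client.end();
}

createCandleAggregate().catch(console.error);
```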
9. Merging blockchain infrastructures with traditional web services can be complex. Where do you see the most friction, and how do you mitigate risks for both developers and end users?
When developing an application that combines web2 principles with web3 architecture, certain challenges arise. For example, if we build a simple backend application for sending and receiving crypto that acts like a wallet, how do we ensure that the application stays secure and fast?
First, let's talk about application speed. Web2 users are accustomed to a fast browser experience: when someone hits the “sign in” button, they expect to see their profile page after waiting a second or two. Those expectations don't translate directly to blockchain, because a transaction in a traditional database is not the same thing as a transaction on a blockchain node. A blockchain usually executes transactions more slowly, since you have to wait for the next block to be mined. To mitigate this, I would use L2 networks to speed things up.
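As a minimal sketch with ethers.js, assuming a placeholder L2 RPC URL, key handling, and amount, submitting a transfer and waiting for confirmation looks like this:

```typescript
// Minimal sketch (ethers v6): submitting a transfer through an L2 RPC endpoint
// and waiting for confirmation. The RPC URL, key handling and amount are placeholders.
import { ethers } from "ethers";

async function sendOnL2(to: string) {
  // An L2 endpoint (e.g. an Arbitrum RPC URL) instead of Ethereum mainnet.
  const provider = new ethers.JsonRpcProvider(process.env.L2_RPC_URL);
  const wallet = new ethers.Wallet(process.env.PRIVATE_KEY!, provider);

  const started = Date.now();
  const tx = await wallet.sendTransaction({
    to,
    value: ethers.parseEther("0.01"),
  });

  // On an L2, inclusion is typically fast enough to keep a web2-like UX.
  const receipt = await tx.wait();
  console.log(`confirmed in block ${receipt?.blockNumber} after ${Date.now() - started} ms`);
}

// Placeholder recipient address, for illustration only.
sendOnL2("0x0000000000000000000000000000000000000000").catch(console.error);
```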
Another aspect is security. Going back to the example: how do we securely store user keys and credentials in the database while making sure our wallet application still upholds web3 guarantees about decentralization and security? Solving this usually requires a bit of cryptography for authentication and asset management. One good way to prove your trustworthiness is to open-source at least part of your backend code, which is not always feasible.
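One common pattern, sketched here with Node's built-in crypto and purely for illustration, is to encrypt each user key under a master key (ideally held in a KMS or HSM, not an environment variable) before it ever touches the database:

```typescript
// Illustrative only: encrypting a user's private key with AES-256-GCM under a
// master key before storing it. In production the master key would live in a
// KMS/HSM rather than an environment variable.
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

const MASTER_KEY = Buffer.from(process.env.MASTER_KEY_HEX ?? "", "hex"); // 32 bytes

export function encryptKey(plainKey: string): string {
  const iv = randomBytes(12); // GCM works best with a 12-byte nonce
  const cipher = createCipheriv("aes-256-gcm", MASTER_KEY, iv);
  const ciphertext = Buffer.concat([cipher.update(plainKey, "utf8"), cipher.final()]);
  const tag = cipher.getAuthTag();
  // Store iv + auth tag + ciphertext together (e.g. in one column).
  return Buffer.concat([iv, tag, ciphertext]).toString("base64");
}

export function decryptKey(stored: string): string {
  const buf = Buffer.from(stored, "base64");
  const iv = buf.subarray(0, 12);
  const tag = buf.subarray(12, 28);
  const ciphertext = buf.subarray(28);
  const decipher = createDecipheriv("aes-256-gcm", MASTER_KEY, iv);
  decipher.setAuthTag(tag);
  return Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString("utf8");
}
```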
These two key aspects require careful consideration in each specific project. And that's just the beginning; in reality there are many more friction points like these.
10. Testing distributed systems in a blockchain environment can be challenging. How do you balance speed, coverage, and reliability when designing your testing framework?
I had a project where I needed to test around 40 smart contracts, each containing roughly 200-300 lines of code. The contract logic was all about economics and calculations, so even reading it was challenging. The contracts lived on the Polygon chain, and I used the Foundry framework for smart contract testing.
I was assigned to write a large number of tests and cover at least 95% of the contract functionality.
From the beginning, my focus was on coverage and resilience; otherwise the tests would be fragile and hard to reproduce. After writing the first couple of tests, I realised that even a single test took around 30-40 seconds to execute. That performance was definitely unacceptable, because I was planning to write more than 100 tests, and spending over an hour running all of them would be far too long.
To optimize the testing, I did the following things:
– Saving common state between tests. At that point I had two separate tests, and each of them had to set up the environment, run the actual scenario, and then tear the blockchain environment down to prepare for the next test. Instead of doing that, I refactored the code and moved the common setup logic into separate functions. Then I wrapped the tests into suites: before running an entire suite, I set up the test conditions once and saved the completed setup. After each test, I rolled the environment back to that cached state and ran the next test (see the sketch after this list).
– Another approach I took was simply to parallelize the execution of tests that didn't depend on each other.
– The last optimization was simple but useful for my scenarios: instead of checking the blockchain state with a separate query to the node after running a test scenario, I listened for the events emitted by the same transaction that executed the scenario.
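In Foundry itself the state caching comes down to shared setUp functions and snapshot cheatcodes; as a rough TypeScript analogy against a local dev node (anvil and hardhat both expose evm_snapshot / evm_revert), the pattern looks like this:

```typescript
// Rough analogy of the state-caching idea, expressed in TypeScript against a
// local dev node (anvil/hardhat expose evm_snapshot / evm_revert). The real
// project used Foundry's setUp functions and cheatcodes in Solidity.
import { ethers } from "ethers";

const provider = new ethers.JsonRpcProvider("http://127.0.0.1:8545");

async function expensiveSetup() {
  // Deploy contracts, fund accounts, seed state, etc. (omitted here).
}

async function runSuite(tests: Array<() => Promise<void>>) {
  await expensiveSetup();
  for (const test of tests) {
    // Cache the fully prepared state once per test...
    const snapshotId: string = await provider.send("evm_snapshot", []);
    await test();
    // ...and roll back to it instead of rebuilding the environment.
    await provider.send("evm_revert", [snapshotId]);
  }
}

runSuite([
  async () => console.log("test 1: scenario against the cached state"),
  async () => console.log("test 2: another independent scenario"),
]).catch(console.error);
```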
All these optimizations allowed me to build fast and resilient tests.