Last Tuesday I sourced three investment opportunities, screened two founder decks, built a competitive landscape analysis for a portfolio company’s Series A narrative, published an SEO-optimised research brief, and prepped outreach packages for four KOL partnerships across two markets. It was a normal day.
I don’t have a team. I have workflows.
I’m a solo operator running across three roles: Chief Product Officer at TecStation, where I lead the commercialisation of institutional MPC custody infrastructure; active angel investor deploying approximately $3 million across 30-plus Seed and Series A companies through AngelList; and founder of CODA WEB3 CREATIVE, where I built a proprietary NFT infrastructure platform from architecture to deployment. Before all of this, I ran a 30-person cross-border team at Blue Island Fund and managed nine-figure marketing budgets at Baidu.
Today I operate with a fraction of the headcount I once managed — and higher throughput. The difference is AI.
Not AI as a buzzword. AI as an operating system for a one-person company.
The workflows, specifically
Let me be concrete, because most “I use AI in my work” articles stay vague. Here’s what my operating stack actually looks like across real workstreams.
Investment sourcing and screening. When I evaluate a Seed-stage AI infrastructure company, I don’t start with the pitch deck. I start with a structured research pipeline. I use LLM-based tools to generate a competitive landscape scan — every funded competitor in the category, their last round, their product positioning, their disclosed metrics. What used to take an analyst two days of desk research now takes 40 minutes. I then run the founder’s claims against the landscape: is their “moat” real, or is it a feature that three better-funded competitors already ship? The AI doesn’t make the investment decision. But it compresses the time between “this looks interesting” and “here’s my conviction, backed by data” from a week to an afternoon.
For my robotics and aerospace portfolio companies, the evaluation is different — I’m assessing capital intensity, timeline to commercial deployment, and technical defensibility. I use AI to synthesise patent filings, academic papers, and market reports into a structured brief that I can review in 20 minutes instead of spending three hours reading source material. The judgement is still mine. The synthesis is not.
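The claim-checking step of that screening pipeline can be sketched in code. This is a minimal illustration with hypothetical company names and features, not my actual tooling; in practice the landscape data would come from an LLM-assisted research pass, while the comparison itself is deterministic:

```python
from dataclasses import dataclass, field

@dataclass
class Competitor:
    name: str
    last_round: str
    shipped_features: set[str] = field(default_factory=set)

def check_moat_claims(claims: set[str], landscape: list[Competitor]) -> dict[str, list[str]]:
    """For each claimed differentiator, list competitors that already ship it.
    An empty list means the claim survived the landscape check."""
    return {
        claim: [c.name for c in landscape if claim in c.shipped_features]
        for claim in claims
    }

# Hypothetical landscape data, the kind an LLM research pass would produce.
landscape = [
    Competitor("CompA", "Series B", {"mpc-signing", "policy-engine"}),
    Competitor("CompB", "Seed", {"mpc-signing"}),
]
report = check_moat_claims({"mpc-signing", "air-gapped-recovery"}, landscape)
# "mpc-signing" is already shipped by two funded competitors;
# "air-gapped-recovery" survives as a potential differentiator.
```

The point of structuring it this way is that the judgement call ("is this moat real?") sits on top of a mechanical check that runs the same way for every deal.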
SEO architecture across 12 parallel workstreams. This one surprises people. I build and manage SEO strategies across multiple projects simultaneously — product pages, thought leadership content, technical documentation, investor-facing materials. Each workstream has its own keyword architecture, content calendar, and performance targets. I use AI to generate the initial keyword maps, draft the structural outlines, and produce first-pass content that I then edit for accuracy and voice. Without AI, 12 parallel SEO workstreams would require a dedicated content team of three or four people. I run it as one person, spending roughly 30% of my time on editorial review and strategic direction rather than production.
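The keyword-map step can be sketched as a small grouping function. The keywords, topics, and volumes below are made up for illustration; the real inputs would come from keyword research tooling, with AI drafting the outlines per cluster:

```python
from collections import defaultdict

def build_keyword_map(keywords: list[tuple[str, str, int]]) -> dict[str, list[str]]:
    """Group (keyword, topic, monthly_volume) tuples into per-topic clusters,
    highest-volume first, so each cluster seeds one content outline."""
    clusters: dict[str, list[tuple[str, int]]] = defaultdict(list)
    for kw, topic, vol in keywords:
        clusters[topic].append((kw, vol))
    return {
        topic: [kw for kw, _ in sorted(kws, key=lambda pair: -pair[1])]
        for topic, kws in clusters.items()
    }

# Hypothetical keyword data for two of the parallel workstreams.
kw_map = build_keyword_map([
    ("mpc custody", "custody", 1200),
    ("institutional crypto custody", "custody", 2400),
    ("nft minting api", "nft-infra", 900),
])
```

Running 12 workstreams through one fixed structure like this is what keeps the editorial review step, rather than production, the only place human time is spent.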
KOL discovery and outreach preparation. At Blue Island Fund, I activated more than 200 KOL and media partnerships across Greater China, Southeast Asia, South Korea, and the Middle East. I learned that every market’s influencer ecosystem has different norms, different platforms, different engagement patterns. Today, when I need to build a KOL outreach list for a new market, I use AI to scan platform-specific data, identify relevant creators by audience overlap and content fit, and draft personalised outreach briefs that account for cultural and platform differences. The manual version of this process — the version I ran with a team of 30 at Blue Island — took weeks per market. The AI-assisted version takes days for initial mapping, with my time focused on relationship quality and strategic selection rather than data gathering.
Product evaluation and documentation pipelines. At TecStation, I lead a product that sits at the intersection of cryptography, enterprise security, and regulatory compliance. That means constant documentation: technical specifications, integration guides, security architecture reviews, client-facing materials. I’ve built repeatable documentation pipelines where AI generates structured first drafts from my technical notes and product decisions. I review, correct, and refine — but I’m editing, not writing from scratch. For a CPO managing enterprise infrastructure, this is the difference between documentation being a bottleneck and documentation being a competitive advantage. Our onboarding materials are more thorough than those of competitors with larger teams, because the production cost per document is near zero and my time goes into accuracy.
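The notes-to-draft step can be sketched as a template render. The titles and section contents here are invented examples; in the real pipeline an LLM pass expands these structured notes into prose, and a plain template stands in for that step:

```python
def draft_integration_guide(notes: dict) -> str:
    """Render structured product notes into a first-draft document skeleton.
    In practice an LLM expands each bullet into reviewed prose; the fixed
    structure is what makes every draft land in the same reviewable shape."""
    lines = [f"# {notes['title']}"]
    for heading, bullet_points in notes["sections"].items():
        lines.append(f"## {heading}")
        lines.extend(f"- {point}" for point in bullet_points)
    return "\n".join(lines)

# Hypothetical notes, the kind captured during a product decision.
draft = draft_integration_guide({
    "title": "Custody API Integration Guide",
    "sections": {
        "Authentication": ["API keys are scoped per client"],
        "Key Ceremony": ["MPC shares are generated client-side"],
    },
})
```

Because every draft arrives in the same skeleton, review time goes into checking accuracy rather than wrangling structure.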
Where human judgement still sits
I want to be precise about the boundary, because overstating AI capability is as dangerous as ignoring it.
AI handles synthesis, first-draft production, structural analysis, pattern recognition across large datasets, and repetitive formatting tasks. It does not handle founder evaluation — reading the person across the table, deciding whether they’re honest about what they don’t know. It does not handle product strategy — the decision about which feature to kill, which market to enter, which integration to prioritise. It does not handle relationship management — knowing when a KOL partnership is performative versus genuinely aligned.
In my investment work, AI can tell me that a robotics company’s capital intensity profile looks similar to three comparable companies that failed to reach commercial deployment. It cannot tell me whether this specific founder has the resilience and resourcefulness to beat those odds. That assessment comes from pattern recognition that I’ve built over 20 years of operating, not from a language model.
The operating principle is simple: AI does the work that scales linearly with volume. I do the work that requires judgement, taste, or trust.
The compound effect most people miss
The real advantage of AI-assisted solo operation isn’t doing the same work faster. It’s doing work you wouldn’t attempt at all.
Before AI, I would not have tried to run 12 parallel SEO workstreams. The production overhead would have been absurd for one person. I would not have screened 30-plus investment opportunities in a year alongside a full-time CPO role — the research burden alone would have consumed all my evenings and weekends. I would not have maintained the documentation quality I maintain at TecStation without a dedicated technical writing function.
AI didn’t make me faster at my existing job. It expanded the definition of what one person’s job can be.
This is the compound effect: each workflow you systematise with AI frees up decision-making bandwidth, which you reinvest into the next workflow. After 18 months of building these systems, I operate across venture investing, enterprise product management, and content strategy with a throughput that would have required five or six people three years ago. Not because AI replaced those people. Because it replaced the prep work that sat between each decision point.
What I’d tell operators who want to start
First, don’t start with the tool. Start with the workflow. Map out the work you do repeatedly — the research, the drafting, the formatting, the data gathering. Identify where you spend time on production versus judgement. AI replaces production. It doesn’t replace judgement. If you can’t separate the two in a given workflow, AI won’t help you there yet.
Second, build pipelines, not prompts. A one-off ChatGPT query is useful. A repeatable system — where the same structured input produces the same structured output every time, and you only intervene at the quality-control step — is transformative. My investment screening pipeline runs the same way every time. My SEO content pipeline runs the same way every time. The consistency is what creates the leverage.
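The difference between a prompt and a pipeline can be shown in a few lines. This is a generic sketch, not any specific vendor's API: `model` is a placeholder for whatever LLM client you use, and the template text is illustrative. The structure is fixed; only the fields vary:

```python
def run_pipeline(template: str, fields: dict, model=None) -> str:
    """Same structured input -> same structured prompt, every run.
    Only the field values change between runs, never the structure,
    so output quality problems are systematic and fixable."""
    prompt = template.format(**fields)
    draft = model(prompt) if model else prompt  # stub: echo the prompt when no client is wired in
    if not draft.strip():  # quality-control gate: fail loudly into manual review
        raise ValueError("empty draft; send to manual review")
    return draft

# A fixed screening template; the fields are the only per-deal input.
SCREEN_TEMPLATE = (
    "Company: {company}\nStage: {stage}\n"
    "Task: list every funded competitor, their last round, and disclosed metrics."
)
prompt = run_pipeline(SCREEN_TEMPLATE, {"company": "ExampleCo", "stage": "Seed"})
```

A one-off prompt improves one output; a template like this improves every future output, because any fix to the structure compounds across runs.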
Third, be honest about quality. AI first drafts are 60 to 70% of the way there. If your work requires 95% accuracy — and in institutional custody infrastructure, it absolutely does — you need to budget real time for review. The trap is treating AI output as finished product. It’s not. It’s structured raw material that still needs a human with domain expertise to shape it.
Fourth, protect your decision-making energy. The whole point of this system is to arrive at the decision point with better information, faster. If you spend your freed-up time on more production work instead of higher-quality decisions, you’ve missed the point.
The solo operator era is not about doing everything yourself
I want to push back against the romanticisation of the “solopreneur” narrative. I’m not advocating that everyone should fire their team and buy a ChatGPT subscription. What I’m saying is that the minimum viable team for ambitious, multi-domain work has fundamentally changed.
I ran a 30-person team at Blue Island Fund. I managed budgets above RMB 100 million at Baidu with large functional teams around me. I know what well-resourced operations look like. The reason I operate solo now isn’t ideology — it’s because the tools have caught up to the point where one experienced operator with the right systems can match the output cadence of a small team, while maintaining tighter quality control and faster decision cycles.
The question for every operator, founder, and investor should no longer be “how many people do I need?” It should be “what’s the smallest team that can execute at the quality level this work demands?” For a growing number of workstreams, that answer is one — plus AI.