
Tomasz Tunguz
This was so fun, Mario. Thanks for having me on the show to talk about everything going on in the market!

Mario Gabriele 🦊 · Jul 22, 8:22 PM
Our latest episode with Tomasz Tunguz is live!
The Decade of Data
@ttunguz has spent almost two decades turning data into investment insights. After backing Looker, Expensify, and Monte Carlo at Redpoint Ventures, he launched @Theoryvc in 2022 with a bold vision: build an "investing corporation" where researchers, engineers, and operators sit alongside investors, creating real-time market maps and in-house AI tooling. His debut fund closed at $238 million, followed just 19 months later by a $450 million second fund. Centered on data, AI, and crypto infrastructure, Theory operates at the heart of today's most consequential technological shifts. We explore how data is reshaping venture capital, why traditional investment models are being disrupted, and what it takes to build a firm that doesn't just predict the future but actively helps create it.
Listen now:
• YouTube:
• Spotify:
• Apple:
A big thank you to the incredible sponsors that make the podcast possible:
✨ Brex — The banking solution for startups:
✨ Generalist+ — Essential intelligence for modern investors and technologists:
We explore:
→ How Theory’s “investing corporation” model works
→ Why crypto exchanges could create a viable path to public markets for small-cap software companies
→ The looming power crunch—why data centers could consume 15% of U.S. electricity within five years
→ Stablecoins’ rapid ascent as major banks route 5‑10% of U.S. dollars through them
→ Why Ethereum faces an existential challenge similar to AWS losing ground to Azure in the AI era
→ Why Tomasz believes today’s handful of agents will become 100+ digital co‑workers by year‑end
→ Why Meta is betting billions on AR glasses to change how we interact with machines
→ How Theory Ventures uses AI to accelerate market research, deal analysis, and investment decisions
…And much more!
OpenAI receives on average 1 query per American per day.
Google receives about 4 queries per American per day.
Roughly 50% of Google search queries now carry AI Overviews, which means at least 60% of US searches are now AI-assisted: of the 5 daily queries per American (4 on Google, 1 on OpenAI), the 2 Google queries with AI Overviews plus the OpenAI query make 3 of 5.
It's taken a bit longer than I expected for this to happen. In 2024, I predicted that 50% of consumer search would be AI-enabled.
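The arithmetic behind that 60% figure can be checked in a few lines, using the per-capita query counts above:

```python
# Estimated AI share of US search queries, per American per day.
openai_queries = 1.0          # OpenAI: ~1 query per American per day
google_queries = 4.0          # Google: ~4 queries per American per day
ai_overview_share = 0.5       # ~50% of Google searches show AI Overviews

ai_queries = openai_queries + google_queries * ai_overview_share
total_queries = openai_queries + google_queries

ai_share = ai_queries / total_queries
print(f"AI share of searches: {ai_share:.0%}")  # → 60%
```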
But AI has arrived in search.
If Google search patterns are any indication, there's a power law in search behavior. SparkToro's analysis of Google search behavior shows the top third of Americans who search execute upwards of 80% of all searches, which means AI use isn't evenly distributed - like the future.
Websites & businesses are starting to feel the impacts. The Economist's piece "AI is killing the web. Can anything save it?" captures the zeitgeist in a headline.
A supermajority of Americans now search with AI. The second-order effects from changing search patterns arrive in the second half of this year, & more people will be asking, "What Happened to My Traffic?"
AI is a new distribution channel & those who seize it will gain market share.
- William Gibson saw much further into the future!
- This is based on a midpoint analysis of the SparkToro chart, is a very simple analysis, & has some error as a result.

In working with AI, I stop before typing anything into the box to ask myself a question: what do I expect from the AI?
2x2 to the rescue! Which box am I in?
On one axis, how much context I provide: not very much to quite a bit. On the other, whether I should watch the AI or let it run.
If I provide very little information & let the system run - 'research Forward Deployed Engineer trends' - I get throwaway results: broad overviews without relevant detail.
Running the same project with a series of short questions produces an iterative conversation that succeeds - an Exploration.
“Which companies have implemented Forward Deployed Engineers (FDEs)? What are the typical backgrounds of FDEs? Which types of contract structures & businesses lend themselves to this work?”
When I have a very low tolerance for mistakes, I provide extensive context & work iteratively with the AI. For blog posts or financial analysis, I share everything (current drafts, previous writings, detailed requirements) then proceed sentence by sentence.
Letting an agent run freely requires defining everything upfront. I rarely succeed here because the upfront work demands tremendous clarity - exact goals, comprehensive information, & detailed task lists with validation criteria - an outline.
These prompts end up looking like the product requirements documents I wrote as a product manager.
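The 2x2 above can be sketched as a tiny lookup table. The quadrant labels are my own shorthand for the four modes described, not a standard taxonomy:

```python
# Hypothetical mapping of the two axes (context provided, autonomy granted) to a working mode.
QUADRANTS = {
    # (rich_context, let_it_run): mode
    (False, False): "Exploration (iterative short questions)",
    (False, True):  "Throwaway (broad overviews without relevant detail)",
    (True,  False): "Precision (share everything, proceed sentence by sentence)",
    (True,  True):  "Agent (PRD-style spec upfront, run to completion)",
}

def working_mode(rich_context: bool, let_it_run: bool) -> str:
    """Answer 'what do I expect from the AI?' before typing into the box."""
    return QUADRANTS[(rich_context, let_it_run)]

print(working_mode(False, True))  # a vague prompt left to run lands in the throwaway box
```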
The answer to ‘what do I expect?’ will get easier as AI systems access more of my information & improve at selecting relevant data. As I get better at articulating what I actually want, the collaboration improves.
I aim to move many more of my questions out of the top left bucket - how I was trained with Google search - into the other three quadrants.
I also expect this habit will help me work with people better.

That little black box in the middle is machine learning code.
I remember reading Google’s 2015 Hidden Technical Debt in ML paper & thinking how little of a machine learning application was actual machine learning.
The vast majority was infrastructure, data management, & operational complexity.
With the dawn of AI, it seemed large language models would subsume these boxes. The promise was simplicity: drop in an LLM & watch it handle everything from customer service to code generation. No more complex pipelines or brittle integrations.
But in building internal applications, we’ve observed a similar dynamic with AI.
Agents need lots of context, like a human: how is the CRM structured, what do we enter into each field. But input is expensive for the Hungry, Hungry AI model.
Reducing cost means writing deterministic software to replace the reasoning of AI.
For example, automating email management means writing tools to create Asana tasks & update the CRM.
As the number of tools grows beyond ten or fifteen, tool calling no longer works reliably. Time to spin up a classical machine learning model to select tools.
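A tool-selection model can be as simple as a text classifier that routes the request before the LLM is ever called. This is a sketch, not the actual implementation; the tool names and examples are invented:

```python
from collections import Counter

# Toy training data: past requests labeled with the tool that handled them.
EXAMPLES = [
    ("create a task for the design review", "asana_create_task"),
    ("add a follow-up task for next week", "asana_create_task"),
    ("update the contact's email in the crm", "crm_update_record"),
    ("log this call against the account record", "crm_update_record"),
    ("draft a reply to this email thread", "email_draft_reply"),
    ("respond to the intro email", "email_draft_reply"),
]

def _bow(text: str) -> Counter:
    """Bag-of-words representation of a request."""
    return Counter(text.lower().split())

def select_tool(request: str) -> str:
    """Pick the tool whose labeled examples share the most words with the request."""
    words = _bow(request)
    scores = Counter()
    for example, tool in EXAMPLES:
        scores[tool] += sum((words & _bow(example)).values())
    return scores.most_common(1)[0][0]

print(select_tool("please create a task to follow up"))  # → asana_create_task
```

In production this would be a trained classifier over embeddings rather than word overlap, but the shape is the same: a cheap deterministic step replaces an expensive LLM tool-calling decision.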
Then there’s watching the system with observability, evaluating whether it’s performant, & routing to the right model. In addition, there’s a whole category of software around making sure the AI does what it’s supposed to.
Guardrails prevent inappropriate responses. Rate limiting stops costs from spiraling out of control when a system goes haywire.
Information retrieval (RAG - retrieval augmented generation) is essential for any production system. In my email app, I use a LanceDB vector database to find all emails from a particular sender & match their tone.
There are other techniques for knowledge management around graph RAG & specialized vector databases.
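The core of that retrieval step is nearest-neighbor search over embeddings. A minimal sketch in plain Python (the real app uses LanceDB; the tiny 3-d vectors and email names here are illustrative stand-ins for model embeddings):

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Hypothetical pre-computed embeddings of stored emails.
emails = {
    "intro from Jane": [0.9, 0.1, 0.0],
    "invoice reminder": [0.1, 0.9, 0.2],
    "Jane's follow-up": [0.8, 0.2, 0.1],
}

def top_k(query_vec, k=2):
    """Return the k stored emails most similar to the query embedding."""
    ranked = sorted(emails, key=lambda e: cosine(query_vec, emails[e]), reverse=True)
    return ranked[:k]

print(top_k([1.0, 0.1, 0.0]))  # → ['intro from Jane', "Jane's follow-up"]
```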
More recently, memory has become much more important. The command line interfaces for AI tools save conversation history as markdown files.
When I publish charts, I want the Theory Ventures caption at the bottom right, a particular font, colors, & styles. Those are now all saved within .gemini or .claude files in a series of cascading directories.
The original simplicity of large language models has been subsumed by enterprise-grade production complexity.
This isn’t identical to the previous generation of machine learning systems, but it follows a clear parallel. What appeared to be a simple “AI magic box” turns out to be an iceberg, with most of the engineering work hidden beneath the surface.


If 2025 is the year of agents, then 2026 will surely belong to agent managers.
Agent managers are people who can manage teams of AI agents. How many can one person successfully manage?
I can barely manage 4 AI agents at once. They ask for clarification, request permission, issue web searches—all requiring my attention. Sometimes a task takes 30 seconds. Other times, 30 minutes. I lose track of which agent is doing what & half the work gets thrown away because they misinterpret instructions.
This isn’t a skill problem. It’s a tooling problem.
Physical robots offer clues about robot-manager productivity. MIT published an analysis in 2020 suggesting the average robot replaced 3.3 human jobs. In 2024, Amazon reported its pick, pack, & ship robots replaced 24 workers.
But there’s a critical difference : AI is non-deterministic. AI agents interpret instructions. They improvise. They occasionally ignore directions entirely. A Roomba can only dream of the creative freedom to ignore your living room & decide the garage needs attention instead.
Management theory often guides teams to a span of control of 7 people.
Speaking with some better agent managers, I've learned they use an agent inbox, a project management tool for requesting AI work & evaluating it. In software engineering, GitHub's pull requests or Linear tickets serve this purpose.
Very productive AI software engineers manage 10-15 agents by specifying 10-15 tasks in detail, sending them to an AI, waiting until completion & then reviewing the work. Half of the work is thrown away, & restarted with an improved prompt.
The agent inbox isn’t popular - yet. It’s not broadly available.
But I suspect it will become an essential part of the productivity stack for future agent managers because it’s the only way to keep track of the work that can come in at any time.
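What such an inbox might track can be sketched as a data structure. The field names here are my invention, not any existing product's schema:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class AgentTask:
    """One row per task handed to an agent, so work finishing at
    unpredictable times (30 seconds or 30 minutes) isn't lost."""
    agent: str
    prompt: str
    status: str = "running"   # running | needs_input | done | discarded
    submitted_at: datetime = field(default_factory=datetime.now)
    result: Optional[str] = None

inbox = [
    AgentTask("researcher", "summarize FDE hiring trends"),
    AgentTask("coder", "refactor the CRM sync job"),
]

def needs_attention(tasks):
    """The manager reviews only tasks waiting on a human."""
    return [t for t in tasks if t.status in ("needs_input", "done")]

inbox[0].status = "done"
print([t.agent for t in needs_attention(inbox)])  # → ['researcher']
```

The point of the structure is the filter: with 10-15 agents running, the manager polls one queue of human-blocking items instead of babysitting each agent.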
If ARR per employee is the new vanity metric for startups, then agents managed per person may become the vanity productivity metric of a worker.
In 12 months, how many agents do you think you could manage? 10? 50? 100? Could you manage an agent that manages other agents?

For the last decade, the biggest line item in any startup’s R&D budget was predictable talent. But AI is pushing its way onto the P&L.
How much should a startup spend on AI as a percentage of its research and development spend?
10%? 30%? 60%?
There are three factors to consider. First, the average salary for a software engineer in Silicon Valley. Second is the total cost of AI used by that engineer. Cursor is now at $200 per month for their Ultra Plan & reviews of Devin suggest $500 per month. Third, the number of agents an engineer can manage.
A first pass: (first image)
But the subscription costs are probably low. Over the last few days I’ve been playing around extensively with AI coding agents and I racked up a bill of $1,000 within the span of five days! 😳😅
So let’s update the table and assume another $1000 per month per engineer.
So for a typical startup, an estimate of 10 to 15% of total R&D expense today might conceivably be used for AI.
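A back-of-envelope version of that estimate, using the tool prices above. The $200k salary is my assumption, not a figure from the post:

```python
# AI spend as a share of per-engineer R&D cost (salaries only).
salary_annual = 200_000    # assumed Silicon Valley engineer salary, excl. benefits
cursor_monthly = 200       # Cursor Ultra plan
devin_monthly = 500        # reported Devin pricing
usage_monthly = 1_000      # observed overage from heavy agent use

ai_annual = 12 * (cursor_monthly + devin_monthly + usage_monthly)
share = ai_annual / (salary_annual + ai_annual)
print(f"AI share of per-engineer R&D: {share:.0%}")  # → 9%
```

With a single set of subscriptions per engineer this lands just under 10%; running multiple agents per engineer pushes it into the 10-15% range.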
The variants will be much broader in practice as we all learn to use AI better and it penetrates more of the organization. Smaller companies that are AI native from the outset are likely to have significantly higher ratios.
If you’re interested in participating in an anonymous survey, I’ll publish the results if the sample size is large enough to be statistically significant.
Survey is here :
This is a grossly simplified model where we are only reviewing salaries, not including benefits, hardware, dev & test infrastructure, etc.
This is an estimate based on discounted personal experience vibe coding.


When you query AI, it gathers relevant information to answer you.
But, how much information does the model need?
Conversations with practitioners revealed their intuition: the input was ~20x larger than the output.
But my experiments with the Gemini command line interface, which outputs detailed token statistics, revealed it’s much higher.
300x on average & up to 4000x.
Here’s why this high input-to-output ratio matters for anyone building with AI:
Cost Management is All About the Input. With API calls priced per token, a 300:1 ratio means costs are dictated by the context, not the answer. This pricing dynamic holds true across all major models.
On OpenAI’s pricing page, output tokens for GPT-4.1 are 4x as expensive as input tokens. But when the input is 300x more voluminous, the input costs are still 98% of the total bill.
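That ~98% figure follows directly from the ratio:

```python
# Input vs output share of the bill at a 300:1 token ratio,
# with output tokens priced at 4x input tokens (GPT-4.1-style pricing).
input_tokens = 300
output_tokens = 1
output_price_multiple = 4

input_cost = input_tokens * 1                    # in units of the input-token price
output_cost = output_tokens * output_price_multiple
input_share = input_cost / (input_cost + output_cost)
print(f"Input share of total cost: {input_share:.1%}")  # → 98.7%
```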
Latency is a Function of Context Size. An important factor determining how long a user waits for an answer is the time it takes the model to process the input.
It Redefines the Engineering Challenge. This observation proves that the core challenge of building with LLMs isn’t just prompting. It’s context engineering.
The critical task is building efficient data retrieval & context - crafting pipelines that can find the best information and distilling it into the smallest possible token footprint.
Caching Becomes Mission-Critical. If 99% of tokens are in the input, building a robust caching layer for frequently retrieved documents or common query contexts moves from a “nice-to-have” to a core architectural requirement for building a cost-effective & scalable product.
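A minimal sketch of such a caching layer, memoizing retrieved context by query key. `functools.lru_cache` stands in for what would be a shared cache (e.g. Redis) in production, and the retrieval function is a hypothetical placeholder:

```python
from functools import lru_cache

CALLS = {"count": 0}  # instrumentation: how many real retrievals happened

@lru_cache(maxsize=1024)
def retrieve_context(query: str) -> str:
    """Stand-in for an expensive retrieval step (vector search, doc fetch)."""
    CALLS["count"] += 1
    return f"context for: {query}"

retrieve_context("q3 pipeline summary")
retrieve_context("q3 pipeline summary")   # served from cache, no second retrieval
print(CALLS["count"])  # → 1
```

When 99% of tokens are input, every cache hit saves nearly the entire cost of a call.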
For developers, this means focusing on input optimization is a critical lever for controlling costs, reducing latency, and ultimately, building a successful AI-powered product.




Yesterday, Figma filed its beautifully designed S-1.
It reveals a product-led growth (PLG) business with a remarkable trajectory. Figma’s collaborative design tool platform disrupted the design market long-dominated by Adobe.
Here’s how the two companies stack up on key metrics for their most recent fiscal year [see attached image]:
Figma is about 3% the size of Adobe but growing 4x faster. The gross margins are identical. Figma’s 132% Net Dollar Retention is top decile.
The data also shows Figma’s Research & Development spend nearly equals Sales & Marketing spend.
This is the PLG model at its best. Figma’s product is its primary marketing engine. Its collaborative nature fosters viral, bottoms-up adoption, leading to a best-in-class sales efficiency of 1.0. For every dollar spent on sales & marketing in 2023, Figma generated a dollar of new gross profit in 2024. Adobe’s blended bottoms-up & sales-led model yields a more typical 0.39.
The S-1 also highlights risks. The most significant is competition from AI products. While Figma is investing heavily in AI, the technology lowers the barrier for new entrants. Figma’s defense is its expanding platform—with products like FigJam, Dev Mode, & now Slides, Sites, & Make.
These new product categories have driven many PLG AI software companies to tens & hundreds of millions in ARR in record time.
Given its high growth & unique business model, how should the market value Figma? We can use a linear regression based on public SaaS companies to predict its forward revenue multiple. The model shows a modest correlation between revenue growth & valuation multiples (R² = 0.23).
Figma, with its 48% growth, would be the fastest-growing software company in this cohort setting aside NVIDIA. A compelling case can be made that Figma should command a higher-than-predicted valuation. Its combination of hyper-growth, best-in-class sales efficiency, & a passionate, self-propagating user base is rare.
Applying our model’s predicted 19.9x multiple to estimated forward revenue yields an estimated IPO valuation of approximately $21B - a premium to the $20B Adobe offered for the company in 2022.
The S-1 tells the story of a category-defining company that built a collaborative design product, developed a phenomenal PLG motion, & is pushing actively into AI.
The $1.0 billion termination fee from Adobe was received in December 2023 and recorded as “Other income, net” in Fiscal Year 2024 (ending January 31, 2024). The large stock-based compensation charge of nearly $900 million is related to an employee tender offer in May 2024. Both of these are removed in the non-GAAP data cited above.
By taking Figma’s 48.3% trailing twelve-month growth rate & discounting it by 15% (to account for a natural growth slowdown), the model produces a forward growth estimate of 41.1%. This would imply forward revenue of about $1.1b.
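The footnote's arithmetic, reconstructed. The trailing revenue figure of roughly $749M is my assumption (Figma's reported fiscal-year revenue); the growth rate, discount, and multiple are from the post:

```python
# Reconstructing the forward-revenue & valuation estimate from the footnote.
trailing_growth = 0.483        # 48.3% trailing-twelve-month growth
discount = 0.15                # natural growth slowdown
trailing_revenue_m = 749       # assumption: reported trailing revenue, $M
predicted_multiple = 19.9      # forward revenue multiple from the regression

forward_growth = trailing_growth * (1 - discount)
forward_revenue_m = trailing_revenue_m * (1 + forward_growth)
valuation_b = predicted_multiple * forward_revenue_m / 1000

print(f"forward growth: {forward_growth:.2%}")        # ≈ 41.1%
print(f"forward revenue: ${forward_revenue_m:,.0f}M")  # ≈ $1.1B
print(f"implied valuation: ${valuation_b:.0f}B")       # → $21B
```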


