
Frank Downing
Yes.
I’ve heard “browsers are the ultimate distribution for agents”. I think this is backwards.
Agents will use browsers as a tool, among others, to accomplish tasks.
You will think of the agent as the product, not the browser.
You pay for ChatGPT Agent, which spins up browsers to accomplish tasks as needed. You can watch and jump in, but you don’t have to.
Same direction for Perplexity’s Comet. It meets users where they are today, but over time the agent will drive more and more and you will care less and less to see how the sausage is made.

qw · 22 Jul, 03:12
websites replace most desktop applications (last 2 decades)
then agents replace most websites (next 2 decades)
Good data points on the importance of "context engineering":
Input tokens may be cheaper than output tokens, but context-heavy tasks (like coding) can require 300-400x more input tokens of context than output tokens, making context ~98% of total LLM usage costs.
Latency also grows with larger context size.
This underscores the importance of providing the right context at the right time when building AI applications, and, I assume, leaves a lot of room for competitive differentiation in AI-native SaaS apps.

Tomasz Tunguz · 9 Jul, 01:36
When you query AI, it gathers relevant information to answer you.
But, how much information does the model need?
Conversations with practitioners revealed their intuition: the input was ~20x larger than the output.
But my experiments with the Gemini command-line interface, which outputs detailed token statistics, revealed it's much higher:
300x on average, and up to 4,000x.
Here’s why this high input-to-output ratio matters for anyone building with AI:
Cost Management is All About the Input. With API calls priced per token, a 300:1 ratio means costs are dictated by the context, not the answer. This pricing dynamic holds true across all major models.
On OpenAI’s pricing page, output tokens for GPT-4.1 are 4x as expensive as input tokens. But when the input is 300x more voluminous, the input costs are still 98% of the total bill.
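The 98% figure follows directly from the ratios above. A quick back-of-the-envelope check, using unit prices rather than any provider's actual rates:

```python
# Sanity check: if input volume is 300x the output, and output tokens cost
# 4x as much per token as input tokens, what fraction of the bill is input?
# Prices are illustrative units, not real API rates.

def input_cost_share(input_to_output_ratio: float, output_price_multiplier: float) -> float:
    """Fraction of total spend attributable to input tokens."""
    input_cost = input_to_output_ratio * 1.0        # input tokens at unit price
    output_cost = 1.0 * output_price_multiplier     # output tokens at a price premium
    return input_cost / (input_cost + output_cost)

share = input_cost_share(300, 4)
print(f"{share:.1%}")  # ~98.7% of the total bill is input
```

Even at a 20:1 ratio (the practitioners' intuition), input would still be roughly 83% of the bill, so the conclusion holds across the whole observed range.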
Latency is a Function of Context Size. An important factor determining how long a user waits for an answer is the time it takes the model to process the input.
It Redefines the Engineering Challenge. This observation proves that the core challenge of building with LLMs isn’t just prompting. It’s context engineering.
The critical task is building efficient data retrieval & context - crafting pipelines that can find the best information and distilling it into the smallest possible token footprint.
Caching Becomes Mission-Critical. If 99% of tokens are in the input, building a robust caching layer for frequently retrieved documents or common query contexts moves from a “nice-to-have” to a core architectural requirement for building a cost-effective & scalable product.
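A minimal sketch of such a caching layer, assuming a hypothetical `build_context` step that fetches and distills a document (the function names and cache policy here are illustrative, not any particular product's API):

```python
from functools import lru_cache

# Cache the expensive context-assembly step so frequently requested
# documents don't repeatedly incur retrieval and distillation work.
@lru_cache(maxsize=1024)
def build_context(doc_id: str) -> str:
    # Stand-in for fetching a document and distilling it into a
    # compact token footprint; in practice this is the costly step.
    return f"<distilled context for {doc_id}>"

def build_prompt(query: str, doc_id: str) -> str:
    context = build_context(doc_id)  # cache hit after the first call per doc
    return f"{context}\n\nQuestion: {query}"

build_prompt("What changed?", "doc-42")
build_prompt("Who signed off?", "doc-42")  # reuses the cached context
print(build_context.cache_info().hits)     # 1 cache hit
```

Real systems would key the cache on content hashes and layer in provider-side prompt caching, but the architectural point is the same: the retrieved context, not the answer, is what is worth caching.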
For developers, this means focusing on input optimization is a critical lever for controlling costs, reducing latency, and ultimately, building a successful AI-powered product.




Two reinforcing flywheels are accelerating AI IDE (integrated development environment) adoption:
1. Model-product flywheel:
As foundation models improve (reasoning, planning, tool use), AI IDEs built on top of them instantly get better, driving more product usage. As AI IDEs become major token consumers, foundation model providers are incentivized to optimize for those workflows — further increasing performance.
2. User-data flywheel:
As AI IDEs attract more developers, they generate proprietary usage data (what devs ask, how they code, where they get stuck). This data can be used to train specialized models fine-tuned for the IDE itself — leading to better UX and performance, and a deeper moat.
#1 benefits the category. #2 benefits the category leader.
It's funny to me that Claude does a better job of answering questions about my Gmail than Google's Gemini does.
The latest versions of Gemini crush it on benchmarks, and I've heard anecdotally that they're great at coding, but real-world use of the Gemini app has never matched up for me relative to ChatGPT, Claude, or Grok.
We published a whitepaper with Coinbase in January 2017, 8 months before they raised their Series D round at a $1.6 billion valuation. We admired the company's approach to onboarding new users to the crypto economy: leaning into innovation with high product velocity while playing by the rules. Today the company is worth ~$65 billion, representing a roughly 60% annualized return to Series D investors over the last ~8 years.
Tesla, Palantir, and now Coinbase: All companies that the market fundamentally did not understand and punished as if they were going bankrupt, now recognized as category leaders in their respective spaces and part of the S&P 500. While we still think some of the greatest years for each of these companies are ahead of them, accessing innovation early, before a company makes it into the big indices, is key to building a complete portfolio.

ARK Invest · 14 May 2025
Coinbase is joining the S&P 500. To most, it’s a crypto milestone. To us, it’s a signal. 🧵