
Santiago
Stop using out-of-the-box metrics to evaluate your AI applications. You are wasting your time, and your application is not improving.
Here are a few of these metrics: relevance, hallucination rate, correctness, completeness, naturalness, coherence, etc.
Stop.
I've seen applications excelling in every one of these metrics and still failing to produce good results.
What you need to do: run Error Analysis to identify specific metrics for your application.
Instead of "Completeness", I want to see you tracking something like "Response contains every relevant field", or "Scheduling time in response uses proper timezone."
Specific, custom metrics work. Out-of-the-box metrics don't.
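To make this concrete, here is a minimal sketch of what such custom metrics can look like in Python. The field names, response shape, and timezone are assumptions for illustration:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# Hypothetical: which fields count as "relevant" depends on your application.
REQUIRED_FIELDS = ["customer_name", "meeting_time", "location"]

def contains_every_relevant_field(response: dict) -> bool:
    """Custom metric: the response includes every field this app cares about."""
    return all(response.get(field) for field in REQUIRED_FIELDS)

def scheduled_time_uses_correct_timezone(response: dict, tz: str = "America/New_York") -> bool:
    """Custom metric: the scheduled time carries the expected UTC offset."""
    try:
        # Expects an ISO-8601 timestamp, e.g. "2024-08-04T10:00:00-04:00".
        ts = datetime.fromisoformat(response["meeting_time"])
    except (KeyError, ValueError):
        return False
    if ts.tzinfo is None:  # a naive timestamp fails the metric outright
        return False
    return ts.utcoffset() == ts.astimezone(ZoneInfo(tz)).utcoffset()
```

Each check is binary and auditable, which is what lets you trace a failure to a specific defect instead of a fuzzy score.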
I want to help every developer become an order of magnitude better in AI/ML Engineering.
Here are the topics I'll cover in my upcoming cohort.
We start in two weeks (Aug 4th).
• 20+ hours of live classes
• Hands-on practice with an end-to-end system implementation
• Open-source tools
• Build once, deploy anywhere
Students from Google, AWS, Netflix, and other top companies. More than 2,000 graduates.
Best engineering program online.

MCP's authentication is a huge step!
If we want agents to do practical work, we need them to:
• Call third-party APIs on your behalf
• Perform actions that should be traceable
• Enforce different permissions for different roles
For example, suppose you want your AI assistant to send emails on your behalf. You can now have the assistant use an MCP server, ask you to authenticate in a browser, and then use the generated token to send those emails.
This is pretty cool!
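A rough sketch of that flow in Python. Every endpoint, client ID, and scope below is a hypothetical placeholder for whatever a real MCP server actually exposes:

```python
import webbrowser
import requests

# Hypothetical placeholders; a real MCP server publishes its own endpoints.
AUTH_URL = "https://mcp.example.com/oauth/authorize"
TOKEN_URL = "https://mcp.example.com/oauth/token"
CLIENT_ID = "my-assistant"
REDIRECT_URI = "http://localhost:8765/callback"

# 1. Send the user to the browser to grant access (e.g., to their mailbox).
webbrowser.open(
    f"{AUTH_URL}?client_id={CLIENT_ID}&redirect_uri={REDIRECT_URI}"
    "&response_type=code&scope=email.send"
)

# 2. After the user approves, the server redirects back with a one-time code.
code = input("Paste the authorization code from the redirect URL: ")

# 3. Exchange the code for an access token tied to this user and scope.
token = requests.post(TOKEN_URL, data={
    "grant_type": "authorization_code",
    "code": code,
    "client_id": CLIENT_ID,
    "redirect_uri": REDIRECT_URI,
}).json()["access_token"]

# 4. The agent calls the email tool with a traceable, permission-scoped token.
requests.post(
    "https://mcp.example.com/tools/send_email",
    headers={"Authorization": f"Bearer {token}"},
    json={"to": "alice@example.com", "subject": "Hello", "body": "..."},
)
```

Because every call carries the user's token, actions are traceable to a person and limited to the scopes they granted, which covers all three requirements above.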
Caching is one of the hardest things you'll ever work on.
I'm currently working on a semantic cache for an AI application.
Based on what we are seeing, it's a massive trade-off between costs and accuracy.
We are saving around 20% of our inference bill: many queries hit the cache, so we don't have to pay for the model to generate answers for them.
However, many queries now return slightly outdated information: because they never reach the model, they get stale cached responses.
I'm spending most of my time working on the similarity function for detecting cache hits. It seems simple, but it's proving very hard to get right.
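A minimal sketch of this kind of similarity-based lookup, assuming cosine similarity over sentence-transformers embeddings (the model and threshold are illustrative assumptions, not necessarily what we use):

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Assumed embedding model; any text-embedding model works here.
model = SentenceTransformer("all-MiniLM-L6-v2")

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

class SemanticCache:
    def __init__(self, threshold: float = 0.92):
        # The threshold is where the cost/accuracy trade-off lives: too low
        # and merely similar queries get each other's answers; too high and
        # almost nothing hits, so you pay embedding latency for no savings.
        self.threshold = threshold
        self._entries: list[tuple[np.ndarray, str]] = []

    def get(self, query: str) -> str | None:
        q = model.encode(query)
        for vec, answer in self._entries:
            if cosine_similarity(q, vec) >= self.threshold:
                return answer  # hit: reuse the cached answer, skip the model
        return None  # miss: caller generates with the model, then calls put()

    def put(self, query: str, answer: str) -> None:
        self._entries.append((model.encode(query), answer))
```

A real deployment would replace the linear scan with a vector index, but the hard part is the same: picking the similarity function and threshold that decide what counts as "the same question."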
Side note: This semantic cache introduces extra latency, especially because most queries will miss the cache. When they hit, responses are faster, but those are the exception.
Either way, this has been well worth it.