
Elliot Hershberg
The word is out!
@packyM and I were stoked to help seed @Atelfo, @okaymaged and the @Convokebio crew while I was at @notboringco
Writers investing in writers 🤝🧬
Excited about this vision.

Convoke · Aug 20, 1:04 AM
We've raised $8.6m to build leading AI tools for the biopharma industry!
Our seed financing was led by @kleinerperkins and @_DimensionCap, with participation from @ACME, @CommaCapital, @Liquid2V, @notboringco, @AudaciousHQ, @Lux_Capital, and leaders in AI and biotechnology
I think one of the most promising aspects of AI for drug discovery is the benefit of multi-modality and the ability to build models for many distinct tasks.
In his 2012 essay outlining Eroom's Law (drug discovery's exponentially declining R&D efficiency), one of Jack Scannell's "diagnoses" for the problem was the 'basic research–brute force' bias.
We have a tendency to overestimate the impact of scaling early-stage discovery technologies. Oftentimes, these assays have low "predictive validity" for clinical success.
One way that AI models help solve this problem is that they can incorporate more translationally relevant predictions into the very earliest stages of discovery.
I think this is the genius of Brandon and Alex's vision at Axiom. By reducing the cost, time, and friction of toxicity testing, they let it be pulled much earlier in the discovery process, as soon as you have a molecule.
This is one of the most radical departures of molecular machine learning relative to early computational chemistry efforts.
A large number of different discovery criteria can be accounted for in a single forward pass.
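To make the "single forward pass" point concrete, here is a minimal sketch of a multi-task setup: one shared encoder embeds a molecular fingerprint, and separate linear heads score distinct discovery criteria at once. The fingerprint size, hidden width, and endpoint names (toxicity, solubility, potency) are illustrative assumptions, not any specific company's model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for the sketch.
N_BITS = 128   # length of a binary molecular fingerprint (assumed)
D_HID = 32     # shared embedding width (assumed)

# Shared encoder weights plus one linear "head" per discovery criterion.
W_enc = rng.normal(0, 0.1, (N_BITS, D_HID))
heads = {name: rng.normal(0, 0.1, D_HID)
         for name in ("toxicity", "solubility", "potency")}

def predict(fingerprint: np.ndarray) -> dict:
    """One forward pass through the shared trunk returns every endpoint."""
    h = np.tanh(fingerprint @ W_enc)              # shared representation
    return {name: float(h @ w) for name, w in heads.items()}

mol = rng.integers(0, 2, N_BITS).astype(float)    # toy fingerprint
scores = predict(mol)
print(scores)
```

The design choice being illustrated: adding another criterion is just another head on the shared representation, so the marginal cost of screening a molecule against many endpoints is one extra dot product, not one extra assay.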

What does it take for a computer to learn the rules of RNA base pairing?
People are training large language models for RNA structure prediction. Some of these models have hundreds of millions of parameters.
One exciting early result has been that these models learn the rules of Watson-Crick-Franklin base pairing directly from data.
A research group at Harvard set out to find the smallest possible model that could achieve this result.
They trained a tiny probabilistic model with only 21 parameters using gradient descent.
With as few as 50 RNA sequences—with no corresponding structures—the rules of base pairing would pop out after only a few training epochs.
So the answer to their original question was that it takes "a lot less than you may think" to learn this type of model.
I don't think this means that the large-scale training efforts are necessarily dumb or misguided. But this result suggests there's a lot of efficiency and performance that can still be eked out of architecture innovation.
There's a lot of underlying structure to the language of biology.
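To give a flavor of how few parameters this takes, here is a toy version of the experiment (my own sketch, not the Harvard group's actual model): 50 synthetic stem sequences where each base is observed alongside its partner, and a 16-parameter logit table trained by plain gradient descent. The complement table pops out after a handful of epochs.

```python
import numpy as np

rng = np.random.default_rng(1)
BASES = "ACGU"
COMP = {"A": "U", "U": "A", "C": "G", "G": "C"}  # Watson-Crick-Franklin

# Toy data: 50 random stems; each base co-occurs with its complement.
# The model only ever sees (base, partner) pairs, never a structure.
pairs = []
for _ in range(50):
    seq = rng.choice(list(BASES), size=8)
    pairs += [(BASES.index(b), BASES.index(COMP[b])) for b in seq]

# 16-parameter model: logits[i, j] scores base j as the partner of base i.
logits = np.zeros((4, 4))
lr = 0.5
for epoch in range(20):                        # a few epochs suffice
    for i, j in pairs:
        p = np.exp(logits[i]) / np.exp(logits[i]).sum()  # softmax
        grad = p.copy()
        grad[j] -= 1.0                         # cross-entropy gradient
        logits[i] -= lr * grad

learned = {BASES[i]: BASES[int(np.argmax(logits[i]))] for i in range(4)}
print(learned)  # {'A': 'U', 'C': 'G', 'G': 'C', 'U': 'A'}
```

In this toy the pairing signal is handed to the model directly, whereas the paper's result recovers it from sequence data alone; the sketch is only meant to show how little capacity a 4-letter pairing rule actually requires.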

It's awesome to have the word out for Tahoe's Series A.
@nalidoust and @iamjohnnyyu are a killer CEO/CSO technical founder duo. @genophoria and @kevansf are both incredible scientists. It's a dream team.
The scale of ambition to go after 1 *billion* cells across a wide range of patient backgrounds borders on absurd. But they are building the infrastructure to make it a reality.
And the commercial vision is equally intense.
I can't wait for more of Tahoe's story to be shared soon.
It's a privilege to partner with this team at Amplify.

Nima Alidoust · Aug 11, 11:25 PM
We’ve raised $30M to build the foundational dataset for Virtual Cell Models: 1Bn single-cell datapoints, mapping 1M drug-patient interactions, to be shared with one partner.
Our goal: move the frontier, from models to precision medicines that help patients.
@tahoe_ai 🧵

In biology, scaling laws work...
...until they don't.
For fitness prediction, protein language model performance increases with model size until it plateaus and then degrades.
As training loss (NLL) goes down, models start to predict higher sequence likelihoods and correlate less with underlying fitness.
Example 10,001 of why AI for biology requires careful consideration of underlying distributions, training objectives, and dozens of other details.
The intersection is rich but requires careful work across both disciplines.

Tired: Silicon-based computers
Wired (literally): fungal computers
From "Sustainable Memristors from Shiitake Mycelium for High-Frequency Bioelectronics"
The idea is to use adaptive electrical signaling in shiitake (which I didn't know existed) as a replacement for neural organoids in neuromorphic computing applications.
"Our findings show that fungal computers can provide scalable, eco-friendly platforms for neuromorphic tasks, bridging bioelectronics and unconventional computing."
100% one of the more "unconventional" research ideas I've recently come across...

Elliot Hershberg reposted
We talked to @ElliotHershberg (Partner @AmplifyPartners) about the challenges in life sciences technology.
"There has been a general sentiment that you can make a huge amount of progress with new data and new technology in life sciences."
"It actually turns out that it's really hard to one-shot a cure for cancer."
"There are breakthroughs like the Nobel Prize for Alphafold. AI is making real impacts on hard biology problems."
"DNA sequencing is decreasing in cost faster than Moore's Law."
"People are starting to scale models, and it's getting impressive fast."
What if flu vaccines really worked?
And I mean *really* worked—to the point that humanity's endemic relationship with influenza became history, not an ongoing global health challenge.
What if we could mitigate *all* rapidly mutating pathogens?
This is why Jake Glanville founded Centivax. His life mission is to "finish what Edward Jenner started" by developing universal vaccines that accelerate humanity's transition to a post-pathogen future.
Jake is exactly the type of Technical Founder we look to partner with at Amplify. He was an early pioneer of computational antibody design at Pfizer before becoming one of the first graduate students in Computational & Systems Biology at Stanford with Mark Davis.
He's synthesized a lifetime of work into a distinct—and somewhat contrarian—idea for developing universal vaccines.
And the team he's assembled is equally extraordinary. For example, Centivax's CMO, Jerry Sadoff, is one of the most prolific vaccine developers alive.
It's truly a privilege for us to participate in the Series A syndicate for Centivax. Over the last decade, Jake and the team have assembled a comprehensive pre-clinical data package for their lead flu program.
The only remaining experiment is to see if this translates to humans, which is what this round underwrites.
If this technology is successful, the impact will be enormous. And the story of this team's perseverance will require its own book in the biotech canon.
