
Jeffrey Emanuel
It sounds so stupid, but one of the biggest productivity hacks when using Claude Code with Opus 4.1 is this: after asking CC to implement a feature or fix a bug or whatever, and after it says it completed everything, you just repeatedly say the following to it until it can't find any more mistakes (which sometimes takes up to 7 or 8 tries!):
"Great, now I want you to carefully read over all of the new code you just wrote and other existing code you just modified with "fresh eyes," looking super carefully for any obvious bugs, errors, problems, issues, confusion, etc."
Yes, this does take a while, but that's why it's so handy to have a bunch of CC sessions open at once. Then you can just rotate through them, pasting that sentence in again and again.
Somehow the "fresh eyes" stuff changes how it perceives what it just wrote in a really helpful way.
Weirdly, this trick doesn't seem to work as well with GPT-5 Thinking: it tends to just say "Yup, everything looks right!" Claude is much more prone to second-guessing itself and to making careless mistakes the first time around, but it's good at catching them given enough chances.
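Since the loop is purely mechanical, you can even script it. Below is a minimal sketch in Python; it assumes the Claude Code CLI's headless flags (-p for a one-shot prompt, -c to continue the most recent session) behave as documented, and the stopping condition is a crude string heuristic of my own, not part of the trick.

```python
# Minimal sketch of automating the "fresh eyes" review loop via the Claude
# Code CLI. Assumed flags: -p (print/headless mode), -c (continue the most
# recent session). The stop check below is a naive heuristic, nothing more.
import subprocess

FRESH_EYES = (
    'Great, now I want you to carefully read over all of the new code you '
    'just wrote and other existing code you just modified with "fresh eyes," '
    'looking super carefully for any obvious bugs, errors, problems, issues, '
    'confusion, etc.'
)

for attempt in range(1, 9):  # the post says it can take 7 or 8 passes
    result = subprocess.run(
        ["claude", "-c", "-p", FRESH_EYES],
        capture_output=True,
        text=True,
    )
    print(f"--- pass {attempt} ---\n{result.stdout}")
    # Crude check: stop once the reply no longer mentions finding problems.
    reply = result.stdout.lower()
    if not any(word in reply for word in ("bug", "issue", "problem", "error")):
        break
```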
After several more days of intense usage of GPT-5 via Cursor and via the GPT-5 Pro model in the web app, I stand by everything I said about it being a much smarter model and better at coding than Opus 4.1.
I still like Opus and do find the ergonomics of Claude Code to be nicer in many ways, but if you’re trying to do truly difficult stuff that requires really clever first-principles thinking and computer science chops, GPT-5 is next level.
But I suspect this only emerges when the reasoning effort mode is set to at least medium, and really manifests itself with the high effort setting.
A good example problem is preparing document “redlines” of two long, complex legal documents. Not different versions of the same document, but two different documents that come from a shared general template.
This is a very, very hard problem to do a good job on, and requires many clever tricks and heuristics to give decent performance and output quality (I’m talking about using traditional programming techniques here, not using LLMs to do this comparison).
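To make "clever tricks and heuristics" concrete, here's a minimal sketch of one such trick in Python, using only the standard library: align the two documents at the paragraph level first, so reordered or inserted sections don't cascade into spurious diffs below them, then do word-level diffs inside the changed regions. This is illustrative only, not how any particular tool does it; real redlining layers many more heuristics (clause-number normalization, defined-term matching, and so on) on top.

```python
# Two-pass redline heuristic: coarse paragraph alignment, then fine
# word-level diffs inside changed regions. Deletions are marked [-...-],
# insertions {+...+}, in the style of wdiff output.
import difflib

def redline(doc_a: str, doc_b: str) -> list[str]:
    paras_a = [p for p in doc_a.split("\n\n") if p.strip()]
    paras_b = [p for p in doc_b.split("\n\n") if p.strip()]
    # Coarse pass: match whole paragraphs, so a moved or inserted section
    # doesn't poison the word-level comparison of everything after it.
    outer = difflib.SequenceMatcher(a=paras_a, b=paras_b, autojunk=False)
    out = []
    for op, a0, a1, b0, b1 in outer.get_opcodes():
        if op == "equal":
            out.extend(paras_a[a0:a1])
        elif op == "replace":
            # Fine pass: word-level diff within the changed region only.
            words_a = " ".join(paras_a[a0:a1]).split()
            words_b = " ".join(paras_b[b0:b1]).split()
            inner = difflib.SequenceMatcher(a=words_a, b=words_b)
            chunk = []
            for iop, ia0, ia1, ib0, ib1 in inner.get_opcodes():
                if iop == "equal":
                    chunk.extend(words_a[ia0:ia1])
                if iop in ("replace", "delete"):
                    chunk.append("[-" + " ".join(words_a[ia0:ia1]) + "-]")
                if iop in ("replace", "insert"):
                    chunk.append("{+" + " ".join(words_b[ib0:ib1]) + "+}")
            out.append(" ".join(chunk))
        elif op == "delete":
            out.extend("[-" + p + "-]" for p in paras_a[a0:a1])
        elif op == "insert":
            out.extend("{+" + p + "+}" for p in paras_b[b0:b1])
    return out
```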
GPT-5 with the Cursor agent can simply come up with more and better clever (yet pragmatic) ideas faster, and implement them correctly without much hand-holding, compared to Opus 4.1.
It depends on what you’re working on, though. I still think I prefer Opus for frontend code in NextJS, for example.
But you should absolutely check for yourself on your own actual problems and not trust all the many people saying the model sucks and that it’s proof we’ve hit a wall.
Either they’re using the bad free version without thinking enabled, or they have no clue how to prompt effectively, or they’re letting their feelings towards OpenAI and Altman color their views.
I think the highest compliment I can pay to @patrickc and the Stripe team is that they have such a great reputation and track record of making really polished and intuitive UI/UX for their services that it's very handy to reference them by name in coding prompts to get better results from AI coding agents.
For example, I have a variant of this saved in my text editor and paste it into Claude Code at least 10 times a day:
"I want you to do a spectacular job building absolutely world-class UI/UX components for displaying these grading reports, showing both the details and also as "badges" or "summary cards," with an intense focus on making the most visually appealing, user-friendly, intuitive, slick, polished, "Stripe-level" of quality UI/UX possible for this that leverages the good libraries that are already part of the project."
Then I tell it that whatever it made is really not that great ("god-awful" or "unbelievably bad") even if it is pretty good already, and that it has to DRAMATICALLY improve it to truly get to Stripe-class levels of user delight and slickness, polish, intuitiveness, etc.
Basically, applying the Steve Jobs Gaslighting Technique to iteratively achieve "insanely great" results.
And yes, this works unbelievably well if you keep doing it over and over again. The trick is that you need to include ALL those adjectives, or else it will devolve into icons spinning around and pulsing like acrobats ("slick" and "visually appealing"); you need the other terms like "polished" and "intuitive" and "Stripe-level" to temper that so it's also somewhat minimalistic and nice to use in practice.
I'm glad I don't have to work for me as an AI agent :/
Just read the new GSPO paper from the Qwen team.
It’s funny how much these big theoretical improvements, despite having a seemingly deep fundamental basis (in this case, that optimizing across token sequences is better than optimizing across individual tokens), ultimately come down to just letting the gradients flow better by avoiding numerical conditioning problems.
When you take a step back and look at it, GSPO is fundamentally a way to get better numerical conditioning: it averages things together more in each update to smooth out noisy bumps (almost like the exponential moving averages behind momentum in Adam or the squared-gradient accumulator in RMSProp), and it ignores updates that would lead to numerically “dangerous” situations in terms of conditioning.
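To anchor that claim, here is the central quantity as best I recall it from the paper (notation may differ slightly): GSPO replaces the per-token importance ratios of GRPO/PPO-style objectives with a single length-normalized sequence-level ratio, i.e. the geometric mean of the per-token ratios, which is exactly the averaging described above.

```latex
% GRPO/PPO-style objectives: one importance ratio per token, so a single
% extreme token can blow up or zero out the update
w_{i,t}(\theta) = \frac{\pi_\theta(y_{i,t} \mid x, y_{i,<t})}
                       {\pi_{\theta_\mathrm{old}}(y_{i,t} \mid x, y_{i,<t})}

% GSPO: one length-normalized sequence-level ratio, i.e. the geometric mean
% of the per-token ratios, which averages the noisy log-ratios before clipping
s_i(\theta) = \left( \frac{\pi_\theta(y_i \mid x)}
                          {\pi_{\theta_\mathrm{old}}(y_i \mid x)} \right)^{1/|y_i|}
            = \exp\!\left( \frac{1}{|y_i|} \sum_{t=1}^{|y_i|}
                           \log w_{i,t}(\theta) \right)
```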
But it all makes sense from a historical perspective, since deep learning really exploded when we figured out how to avoid the vanishing/exploding gradient problem by using things like momentum in optimizers. So in a way, this is simply the latest step in that tradition of navigating the loss landscape in a more robust way to avoid “driving into a ditch.”