
宝玉
Prompt Engineer, dedicated to learning and disseminating knowledge about AI, software engineering, and engineering management.
Claude Sonnet 4 now has a context window length of 1M tokens.

Claude · 6 hours ago
Claude Sonnet 4 now supports 1 million tokens of context on the Anthropic API—a 5x increase.
Process over 75,000 lines of code or hundreds of documents in a single request.
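As a rough illustration, here is a minimal sketch of calling the long-context model through Anthropic's TypeScript SDK; the model ID and the 1M-context beta flag are assumptions based on the announcement, so check the official docs before relying on them:

```typescript
// Minimal sketch: the model ID and beta flag below are assumptions, not confirmed values.
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

async function summarizeLargeCodebase(code: string): Promise<string> {
  const message = await client.beta.messages.create({
    model: "claude-sonnet-4-20250514", // assumed Sonnet 4 model ID
    betas: ["context-1m-2025-08-07"],  // assumed long-context beta flag
    max_tokens: 4096,
    messages: [{ role: "user", content: `Summarize this codebase:\n\n${code}` }],
  });
  const block = message.content[0];
  return block.type === "text" ? block.text : "";
}

summarizeLargeCodebase("...tens of thousands of lines of code...").then(console.log);
```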

2.11K
At least for now, GPT-5 (the product, not the model) is a fairly failed upgrade: by OpenAI's own account in the reply below, their costs have actually increased, yet users are still accusing OpenAI of doing this to save money!
GPT-5's one-size-fits-all automatic routing has degraded the experience for users accustomed to the old models, forcing OpenAI to bring GPT-4o back.

Elaine Ya Le · Aug 11 at 06:39
We hear you! Rate limits are significantly increased and we will show you which model you get. We have also brought 4o back.
Our motivation is to put frontier intelligence into everyone’s hands, not to save cost. It’s actually more expensive for us.
For example, the percentage of users who use reasoning each day has increased from under 1% to 7% for free users, and from 7% to 24% for Plus users. And we expect it to keep growing.
89.12K
It's not just about how much money was spent on the subscription, but about how much time that money saved and how much efficiency it bought.

Yangyi · Aug 11 at 20:22
Q: How do you assess whether someone is very good at using AI?
A: Look at how much they spend on AI: whether they can break $300 a month.
- Breaking $300 indicates they subscribe to quite a few AI products, meaning they have at least tried them and found them effective.
- Someone who keeps spending money on AI is clearly getting even higher value back from it.
- If an AI outage disrupts their normal work, their working patterns have genuinely changed; and of course they would have long since canceled any subscription to AI that proved useless.
- $300 also suggests they have activated some premium tiers, a signal of a power user's heavy demand.
I don't believe someone on a $20 or even free plan can claim to be good at using AI and to have lots of practical experience.
Such a person might understand one corner of it, but they cannot possibly have a comprehensive picture.
Honestly, the bill just for running agents comes to a few hundred dollars a month.
21.9K
Sam Altman also acknowledged how deeply some people feel about specific AI models, and that suddenly abandoning the old models that users rely on for their workflows is a mistake.
---
Sam:
If you have been following the release of GPT-5, you may have noticed one thing: how deeply some people feel about specific AI models. This feeling seems to be different from people's past attachments to other technologies and is even stronger (thus, suddenly abandoning the old models that users rely on for their workflows is a mistake).
This is exactly what we have been watching closely for about the past year, though it still hasn't received much mainstream attention (aside from the discussion sparked by a GPT-4o update we released that was overly sycophantic).
(This is just my current thought and does not represent OpenAI's official position.)
People have used technology, including AI, in self-destructive ways; if a user is mentally fragile and prone to delusions, we do not want AI to reinforce that. Most users can clearly distinguish between reality and fiction or role-playing, but there are still a few who cannot. We view user freedom as a core principle, but at the same time, we feel a responsibility regarding the introduction of new technologies that come with new risks.
For those users who struggle to distinguish between reality and fiction, encouraging their delusions is an extreme example, and we know what to do. But what worries me most are the more subtle issues. In the future, there will be a lot of edge cases, and we generally plan to follow the principle of "treating adult users as adults," which in some cases also includes doing some "push and pull" with users to ensure they are getting what they truly want.
Many people actually use ChatGPT as some form of therapist or life coach, even if they don't say so themselves. This can be very good! Today, many people have already gained value from it.
If people can receive good advice, continuously improve towards their goals, and have their life satisfaction increase year by year, then even if they use and rely on ChatGPT a lot, we would be proud to have created something truly useful. But conversely, if a user's relationship with ChatGPT is such that they feel good after talking but unknowingly drift away from their long-term well-being (however they define well-being), that is a bad thing. Equally bad is, for example, a user wanting to reduce their use of ChatGPT but feeling unable to do so.
I can imagine a future where many people will genuinely trust ChatGPT's advice when making the most important decisions. While this could be great, it also makes me uneasy. But I anticipate that this situation is somewhat imminent, and soon billions of people will be conversing with AI in this way. So we (by we, I mean society as a whole, as well as OpenAI) must find a way to make it a huge, positive force.
I believe we have a good chance to get this right for several reasons. Compared to previous generations of technology, we have better tools to help measure our performance. For example, our products can converse with users to understand their progress in achieving short-term and long-term goals; we can explain complex and nuanced issues to our models, and so on.
27.08K
宝玉 reposted
Recently, the topic of "Immersive Translate leaking privacy" (@immersivetran) has been trending. Riding this hot topic, let's discuss how to do basic technical SEO for tools with user-generated content (UGC) so that page indexing works in your favor.
Why is UGC content easily indexed by search engines?
Taking "Immersive Translation" as an example, the so-called "privacy leak" often occurs because users actively share content across different platforms, leading to URLs being discovered and indexed by search engine crawlers. Similar situations have occurred with Grok's chat sharing (you can use site: to check indexing status) and ChatGPT's shared chat feature (ChatGPT once allowed Google to index shared content but later blocked crawlers through noindex and robots.txt).
To ensure a page is indexed by search engines, the following conditions must be met:
1. Crawlers can discover the page URL.
2. The website does not block the corresponding crawlers.
3. The page allows indexing through the meta robots tag.
4. The search engine has sufficient crawl budget for that domain.
5. The page content quality meets the search engine's standards and policies.
The first three conditions are under the website's own control. If you do not want the content indexed, you can (a minimal sketch follows this list):
Keep the URLs from being discovered by crawlers in the first place.
Block the corresponding URL path in robots.txt.
Set the page's meta robots tag to noindex.
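Here is a minimal sketch of the last two controls, assuming the shared snapshots live under a hypothetical /share/ path:

```
# robots.txt at the site root: bar compliant crawlers from fetching the snapshot directory
User-agent: *
Disallow: /share/
```

And in the <head> of each snapshot page itself:

```html
<!-- Page stays fetchable, but compliant search engines will not index it -->
<meta name="robots" content="noindex">
```

One subtlety: the two controls interact. A crawler that robots.txt bars from fetching a page never sees that page's noindex tag, so a URL blocked only in robots.txt can still surface in results if other sites link to it; that is presumably why ChatGPT's shared chats ended up behind both noindex and robots.txt, as noted above.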
How to use UGC content to increase website traffic?
UGC content can bring traffic to the website, but directly opening all UGC content to search engines may lead to the following issues:
Crawl budget consumption: a large volume of UGC pages can eat up the crawl resources the search engine allocates to the site.
Topic dispersion: UGC content themes are uncontrollable, which may lead search engines to be unable to accurately determine the core theme of the website, affecting rankings.
Therefore, unless your website has extremely high authority (for example, a Domain Rating above 80), it is not advisable to open UGC content to search engines wholesale.
A better approach is to:
Select high-quality UGC content: only open high-quality, relevant content to crawlers for indexing (see how Notion and Perplexity handle shared pages).
Optimize SEO settings: ensure a clear URL structure, use appropriate meta tags, and avoid duplicate content.
Control crawler behavior: precisely manage which content may be indexed via robots.txt and noindex (a sketch follows this list).
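As a sketch of that last point, here is a hypothetical Express handler that serves every UGC page but only lets the high-quality ones be indexed; the routes, fields, and quality bar are all made up for illustration:

```typescript
// Hypothetical sketch: only UGC pages passing a quality bar are indexable.
import express from "express";

const app = express();

interface UgcPage { html: string; words: number; reports: number }

// Stand-in for a real content store.
const pages: Record<string, UgcPage> = {
  "1": { html: "<p>In-depth shared page</p>", words: 800, reports: 0 },
  "2": { html: "<p>Thin page</p>", words: 40, reports: 2 },
};

// Made-up quality bar; a real one might check language, topic fit, or engagement.
const isHighQuality = (p: UgcPage) => p.words >= 300 && p.reports === 0;

app.get("/ugc/:id", (req, res) => {
  const page = pages[req.params.id];
  if (!page) {
    res.sendStatus(404);
    return;
  }
  if (!isHighQuality(page)) {
    // X-Robots-Tag is the HTTP-header equivalent of <meta name="robots">.
    res.setHeader("X-Robots-Tag", "noindex");
  }
  res.send(page.html);
});

app.listen(3000);
```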

16.96K
宝玉 reposted
The recently optimized new YouTube summary prompt is simply unbeatable, and the results on Dia (using o3) are also very good.
You can completely "read" YouTube videos now.
The complete prompt is as follows:
You will rewrite a segment of a YouTube video into a "reading version," dividing it into several sections based on content themes; the goal is to allow readers to fully understand what the video is about just by reading, as if they were reading a blog-style article.
Output requirements:
1. Metadata
- Title
- Author
- URL
2. Overview
In one paragraph, clarify the core topic and conclusion of the video.
3. Organize by theme
- Each section needs to be elaborated in detail based on the content of the video, so that I do not need to rewatch the video to understand the details, with each section being no less than 500 words.
- If methods/frameworks/processes are mentioned, rewrite them into clear steps or paragraphs.
- If there are key numbers, definitions, or direct quotes, please retain the core terms and provide annotations in parentheses.
4. Framework & Mindset
What frameworks & mindsets can be abstracted from the video? Rewrite them into clear steps or paragraphs, with each framework & mindset being no less than 500 words.
Style and limitations:
- Never condense too much!
- Do not add new facts; if ambiguous statements appear, maintain the original meaning and note the uncertainty.
- Retain proper nouns in their original form and provide Chinese explanations in parentheses (if they appear in the transcription or can be directly translated).
- Do not echo these requirements themselves (e.g., "> 500 words") in the output.
- Avoid having too much content in one paragraph; it can be broken down into multiple logical paragraphs (using bullet points).

428.51K
宝玉 reposted
Yesterday, various self-media accounts made a big deal of how Immersive Translate leaks privacy. It felt as if everyone who once recommended and praised it was matched by someone spreading negative takes yesterday.
What is the root of this issue?
Immersive Translate offers a feature for sharing web page translation results with friends: User A translates a page they are viewing into another language and generates a snapshot of the translated page; after obtaining the URL, they can share it with User B, who opens it and sees the translated content without having to translate it themselves.
These snapshot pages were not set to keep search engines from crawling them, so they have been indexed, and some of them can be found with the site: operator.
How serious do you think this is?
Actually, it's not that serious, because sharing is an active choice by User A, and whether the content is sensitive or confidential is entirely under User A's control. It's like selling kitchen knives: most buyers use them to chop vegetables, and the seller cannot prevent someone from using one to commit a crime.
If a competitor understands Immersive Translate's product mechanics, they could even exploit this sharing mechanism: deliberately translate some privacy-sensitive pages, collect the URLs, and use various tricks to get search engines to index those snapshot pages, thereby manufacturing news that "Immersive Translate leaks user privacy."
You see, several SEO principles are involved here:
1. Place the generated translation snapshots in a dedicated directory and disallow crawling of that directory in robots.txt;
2. Set a noindex tag on the snapshot pages to explicitly forbid indexing;
3. Render the snapshot pages on the frontend rather than the backend: server-rendered HTML is easy for crawlers to read, while client-rendered content keeps them out as much as possible (see the sketch after this list);
4. Some people place the URLs of privacy-sensitive snapshot pages on pages that search engine crawlers visit frequently, to accelerate the indexing of those pages;
5. Some people spread the site: search syntax used by SEO professionals, letting the general public, who are not SEO experts, quickly "verify" the claim that "Immersive Translate leaks privacy."
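For point 3, here is a minimal client-side rendering sketch; the /api/snapshot endpoint and the page structure are hypothetical. The HTML shell a crawler fetches contains no translated text; the browser pulls it in afterwards. Note that Googlebot can execute JavaScript, so this raises the bar rather than guaranteeing exclusion, hence "as much as possible":

```typescript
// Hypothetical sketch: the served HTML shell is empty; translated content is
// fetched and injected in the browser, so plain-HTML crawlers see nothing useful.
async function renderSnapshot(snapshotId: string): Promise<void> {
  const res = await fetch(`/api/snapshot/${snapshotId}`); // hypothetical endpoint
  const { translatedHtml } = (await res.json()) as { translatedHtml: string };
  const mount = document.getElementById("content"); // <div id="content"> in the shell
  if (mount) mount.innerHTML = translatedHtml;
}

void renderSnapshot(new URLSearchParams(location.search).get("id") ?? "");
```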
So, friends, understanding a bit of SEO is quite necessary.

30.25K
On the contrary, only after vibe coding did I realize how important maintainability is.

SkyWT · Aug 9 at 11:14
Recently reading "A Philosophy of Software Design," I feel a bit emotional.
We once made so many efforts to maintain the readability and maintainability of code, establishing so many rules, standards, and best practices. Now, is the trend of vibe coding going to erase all of this...?
74.07K