
Yangyi
Human-machine collaboration content architect
Q: How do you assess whether someone is very good at using AI?
A: Look at how much they spend on AI: whether they can break $300 a month.
- Breaking $300 means they subscribe to quite a few AI products, so they have at least tried them and found them effective.
- Continuing to spend money on AI shows they are getting more value out of it than it costs.
- If AI downtime disrupts their normal work, their work patterns have genuinely changed; and of course they would have cancelled any useless subscriptions by now.
- $300 also suggests some premium tiers are active, which is a signal of heavy, geek-level demand.
I don't believe someone can claim to be good at using AI while on a $20 or even free plan, no matter how much practical experience they cite.
Such a person might understand one corner of the field, but they cannot possibly have a comprehensive picture.
Honestly, just keeping an agent running day to day racks up a bill of a few hundred dollars a month.
Teaching chatGPT, midjourney, and stable diffusion in 2023
Teaching comfyui, dify, and coze in 2024
Teaching deepseek, office automation, and vibe coding in 2025
I took a look, and 60% of the teachers are the same people 😂
The content changes, the character labels and titles change,
but the formula is the same.
On platforms, a 199-yuan starter course upsells into a big-ticket one,
or they publish a book to funnel readers into a course.
Either way, the biggest hurdle to teaching others may just be swallowing your pride.
40-out-of-100 teachers are everywhere,
and their courses still sell for thousands.
If we say people aren't FOMOing over AI, I think that's hard to argue.
But it seems there aren't many who can actually use it 😂
To create something that can go viral, it must hit the following points:
- Show off: I share this thing to let others understand me and think I'm impressive and knowledgeable.
- Fun: I share this thing to bring joy to others.
- Social recognition: I share this thing because it has value to others, and I can gain social recognition.
- Actual value: I share this thing to make money or gain attention.
- Herd mentality: Everyone else is sharing, and if I don't share, I seem out of place.
Whether it's content or a product, the more elements it hits, the more likely it is to go viral.
If it can also form a profit chain, then it can spread even further.
The above is from the perspective of sharing motivation, but sharing is a funnel; others must first see it, then understand it, before they will share it.
So, exposure must come first.
However, there are some differences between wanting exposure and wanting to share.
Exposure mostly relies on comments, and the logic of stimulating comments is different from that of stimulating shares.
I used Opus 4.1 to search Reddit for posts and found mentions of two MCPs that enhance Claude Code's capabilities: Zen and Serena.
However, opinions are completely split: some people really like them, while others think they are completely useless.
I believe that's how AI is: when it delivers great results, people love it, and when the results are zero, people hate it (because it wasted their time).
If a small group of people say an AI product is amazing, it's worth a try.
Has anyone tried these two MCPs? Please comment with your experience!


Anyone can do a vibe-coding mini project, like this one: a Reddit content listener.
It can be used to find study materials, translate content, and surface trending posts in specific subreddits, which gradually familiarizes you with each subreddit's style and makes it easier to write popular posts.
I personally use it as an information source (it even works for learning English). Agencies use it to find content inspiration, brands use it to monitor events involving other brands, and I mainly built it to prepare for content marketing on Reddit: an automated information source feeding prompts to my Reddit Agent.
Limited beta test code: RDDT-TDGIA-DIZ
[Real beta test, already deleted the database once, haha]
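A minimal sketch of what such a listener might look like, assuming you poll Reddit's public JSON listing endpoint. The parsing below matches that endpoint's actual response shape; the subreddit, score threshold, and user-agent string are illustrative, not from the post:

```python
import json
from urllib.request import Request, urlopen

def hot_posts(listing: dict, min_score: int = 0) -> list[dict]:
    """Extract title/score/url from a Reddit listing JSON payload."""
    posts = []
    for child in listing["data"]["children"]:
        d = child["data"]
        if d["score"] >= min_score:
            posts.append({"title": d["title"], "score": d["score"],
                          "url": "https://reddit.com" + d["permalink"]})
    return sorted(posts, key=lambda p: p["score"], reverse=True)

def fetch_hot(subreddit: str, limit: int = 25) -> list[dict]:
    """Poll the public JSON endpoint for a subreddit's hot posts."""
    req = Request(f"https://www.reddit.com/r/{subreddit}/hot.json?limit={limit}",
                  headers={"User-Agent": "reddit-listener-demo/0.1"})
    with urlopen(req) as resp:
        return hot_posts(json.load(resp))

# Offline demo with a payload in the listing endpoint's shape:
sample = {"data": {"children": [
    {"data": {"title": "Serena MCP review", "score": 120,
              "permalink": "/r/ClaudeAI/abc"}},
    {"data": {"title": "low-effort post", "score": 3,
              "permalink": "/r/ClaudeAI/def"}},
]}}
print(hot_posts(sample, min_score=10))
```

Run `fetch_hot` on a schedule and diff the results against what you've already seen, and you have the "listener" part; translation and summarization can then be chained on each new post.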
Here's another idea for unemployed workers to make money.
Use vibe coding to build iOS client applications, specifically skeleton apps.
What is a skeleton? For example, a categorized image gallery or a categorized audio listening app.
It's just a list that categorizes the content, making it easier for users to find.
Once you have a skeleton, you just need to keep the content interchangeable.
Vibe coding makes this very fast and easy to tweak, and it's relatively unlikely to be flagged by App Store review; it's essentially AI-assisted reskinning.
All you need to do is organize the content and package it all together.
Then, use something like Qimai to check ASO keywords.
Just lay it all out there.
The product is free, filled with ads, and an 8-dollar permanent membership removes ads.
You can package something like "Friends" or "Modern Times," or even some celebrity courses. Don't feel guilty about the piracy; it's fine.
You can even play Li Gui to someone else's Li Kui (the knockoff to the genuine article) and ride on others' brand names. It doesn't matter if the downloads are junk; let people uninstall as they please.
Previously, someone used vibe coding to generate tooling for App Store Connect, continuously producing content for listings; the process doesn't take much time.
At first it might take you 3 days to list one app, but later you can list a new one in just 3 hours. Keep this up across a hundred or so applications and you'll always find long-tail keywords that rank. It compounds over time, and the key point is that everything is packaged locally, consuming no server resources: zero-cost assets.
Find what can generate volume, and then invest in serious optimization.
In the AI era, there are just too many arbitrage opportunities like this. You can learn while you work; there's nothing you can't handle.
The most important thing is to believe in yourself and in AI. Don't magnify the difficulties for yourself.
If you do 20 of these, you'll naturally find the way and gain experience.
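One way to read the "skeleton" idea in code: the app is just a generic renderer over a local content manifest, so swapping the manifest swaps the app. A minimal sketch in Python (the manifest fields, app name, and categories are illustrative assumptions, not from the post):

```python
import json

# The "skeleton" never changes; only this bundled manifest differs per app.
MANIFEST = json.loads("""
{
  "app_name": "Classic Sitcom Audio",
  "categories": [
    {"name": "Season 1",
     "items": [{"title": "Episode 1", "file": "s01e01.mp3"}]},
    {"name": "Season 2",
     "items": [{"title": "Episode 1", "file": "s02e01.mp3"}]}
  ]
}
""")

def list_categories(manifest: dict) -> list[str]:
    """The top-level list the skeleton renders."""
    return [c["name"] for c in manifest["categories"]]

def items_in(manifest: dict, category: str) -> list[dict]:
    """The drill-down list for one category."""
    for c in manifest["categories"]:
        if c["name"] == category:
            return c["items"]
    return []

print(list_categories(MANIFEST))
print(items_in(MANIFEST, "Season 1")[0]["title"])
```

Shipping a "new" app then means regenerating the manifest (and the bundled media) and resubmitting, which is why the per-app turnaround can drop to hours.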

Yangyi, Aug 3 at 17:06
Is it true that if you don't work for a factory these days, you have no way to make a living?
There are so many ways to support yourself, why are people so fixated on just tightening screws?
As if you can't survive unless someone hands you a stable paycheck for ten hours of work a day.
If there really are no options, do content porting; there's always material you can move with your own hands, right?
If you don't know what to port, just look at what the market needs and port that.
So many people want to take AI overseas.
Just find all the brand names of AI products and search on various platforms.
Twitter, YouTube, Substack, Medium, Reddit: search them all.
Keep searching and you’ll find the sources of information, and you can establish content sources.
You can continue to search for these people's names, find their English interviews,
translate YouTube videos with VideoLingo.
Post the videos on Bilibili, then feed the subtitle files to Gemini for further translation; articles come out, which you post on your WeChat public account.
Then take that article and create a Notion page, add a couple of pictures for Xiaohongshu, and post that too.
When the traffic comes, you can sell guided readings, right?
You can build a reading community and call it a study-and-exchange group.
One piece of reading material a day, 365 days a year, and that traffic asset keeps compounding.
There are a lot of extremely simple things in this world with no barriers to entry.
Really, you won't starve. Don't think that just because of layoffs, the sky has fallen.
Most people probably earn around 20,000 to 30,000 yuan a month.
You can earn money back from whatever you do, so stop being anxious.
Everyone can do it; just get your hands dirty first.
I took a look at the Prompt 101 material shared by Anthropic.
There are two points I think are worth sharing that can level up your usage:
The first is called Pre-fill Response.
>What it does: It provides a starting point for the model's response directly in the API call.
>Why: This is an effective technique to force the model to output a specific structure (especially JSON).
>Example: In the "assistant" section of the API request, directly fill in a left curly brace { to guide the model to complete a full JSON object.
This pre-fill makes it easier to guide the model to respond according to your output pattern because LLMs are like a game of word association.
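A minimal sketch of that pre-fill against the Anthropic Messages API. The request body below just builds the payload (no API key needed); the model name is illustrative, and the key move is seeding the final "assistant" turn with `{` so the completion is forced to continue as JSON:

```python
import json

def prefilled_request(user_prompt: str, prefill: str = "{") -> dict:
    """Build a Messages API request body with a pre-filled assistant turn."""
    return {
        "model": "claude-sonnet-4-20250514",  # illustrative model name
        "max_tokens": 512,
        "messages": [
            {"role": "user", "content": user_prompt},
            # The pre-fill: the model must continue from this partial answer,
            # so its completion becomes the rest of a JSON object.
            {"role": "assistant", "content": prefill},
        ],
    }

body = prefilled_request(
    "Extract name and price from: 'Zen MCP, $0'. Reply as JSON.")
print(json.dumps(body, indent=2))

# To send it for real (requires the `anthropic` package and an API key):
#   import anthropic
#   client = anthropic.Anthropic()
#   msg = client.messages.create(**body)
#   print("{" + msg.content[0].text)  # reassemble the full JSON
```

Note the reassembly step: the pre-filled `{` is part of your request, not the response, so you prepend it back before parsing.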
The second is Prompt Debugging, utilizing the Extended Thinking / Scratchpad feature.
>Function: This is a powerful debugging tool. When this feature is enabled, the model will explicitly show its "thinking process" or "scratchpad."
>How to use: By analyzing the model's thought records, you can understand where it encountered difficulties or misunderstandings. Then, you can turn these insights into clearer step-by-step instructions, solidifying them into your prompts, making the prompts themselves stronger and more efficient.
This is actually an idea Yulong shared back in 2023: when a prompt's output misses expectations, talk to the model to pin down which part of the prompt it misunderstood semantically.
Printing the model's thoughts is a variant of the same idea.
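A sketch of the scratchpad variant of this idea: ask the model to think inside tags, then inspect that section when debugging why a prompt misfired. The tag names, the prompt wording, and the parsing helper are illustrative assumptions, not Anthropic's API:

```python
import re

PROMPT = """Before answering, reason step by step inside <scratchpad> tags,
then give the final answer inside <answer> tags.

Question: Which MCP should I try first, Zen or Serena?"""

def split_response(text: str) -> tuple[str, str]:
    """Separate the model's visible reasoning from its final answer."""
    scratch = re.search(r"<scratchpad>(.*?)</scratchpad>", text, re.S)
    answer = re.search(r"<answer>(.*?)</answer>", text, re.S)
    return (scratch.group(1).strip() if scratch else "",
            answer.group(1).strip() if answer else text.strip())

# Simulated model output, in the format the prompt requests:
reply = """<scratchpad>The user mentioned coding tasks, so Serena's
symbol-level tools seem more relevant than Zen's review flow.</scratchpad>
<answer>Try Serena first.</answer>"""

thinking, answer = split_response(reply)
print("DEBUG:", thinking)   # read this to see where the prompt was misread
print("ANSWER:", answer)
```

When the scratchpad shows a recurring misreading, you fold the correction back into the prompt as an explicit instruction, which is exactly the debugging loop described above.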
That's what I've learned; the video may have other information for learning 👇