LibTV review: a brand-new video Agent
It integrates with Claude Code and can produce a video from a single sentence...
LiblibAI has launched LibTV, a brand-new workflow-style AI video Agent platform: you create videos by dragging and dropping nodes onto an infinite canvas and combining various models.
The coolest part is that they released a Skill that integrates with Claude Code (and Little Lobster), letting the AI arrange the nodes for you automatically...
Claude Code and similar tools can call LibTV's various nodes through the Skill interface, building the workflow for you and running everything from script conception to final editing; you type one sentence, and the finished video comes out.
During my testing, I kept exclaiming how impressive it is...
Install the Skill with a single command: npx skills add libtv-labs/libtv-skills
Send it to Claude Code or Little Lobster, and it will install the Skill and set up the Access Key for you automatically; then you're done.

I used my own cartoon character, "Little Mutual", as the protagonist, and prompted:
"Make up a short story based on what you know about me and turn it into a complete short film of about 2 minutes."

Then you don't have to worry about it...
Just let it run automatically
It will also give you a link to the workflow, which you can open in your browser to watch the workflow run and check its status in real time ↓
First it writes the script, then it generates the character's three-view turnaround.

Next come the various storyboard scenes, followed by generation of the storyboard videos.

During the process, if you're not satisfied with a background or any other image, you can modify and adjust it directly.
This truly achieves full automation plus visualization, in parallel: the agent handles building out the automation, you act as the supervisor, and if something goes wrong you can step in at any time and change it directly.
Having this much control while it runs automatically is very nice...
A model of human-machine cooperation... 🤪

At the end, it automatically stitches the generated storyboard clips into a single video.
The whole process took about 20 minutes, and I didn't touch it at all. It consumed a bit over 5,000 points, which works out to a few dollars per video...
Take a look...
It's just an image + a sentence ↓
Claude can automatically queue up the generation tasks in the background for you.
You can go to sleep without worrying; it will organize the plan for you ↓
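To recap the flow in code form, here is a minimal Python sketch of the stages the agent chains together on the canvas. Every function below is a hypothetical placeholder standing in for a workflow node, not LibTV's actual API; it only illustrates the order of operations.

```python
# Hypothetical pipeline sketch; all helpers are placeholders, not LibTV nodes.
from dataclasses import dataclass

@dataclass
class Scene:
    description: str

def write_script(idea: str) -> list[Scene]:
    # Placeholder for the script-writing step: one sentence -> a list of scenes.
    return [Scene(f"{idea} - scene {i}") for i in range(1, 4)]

def generate_three_view(character_image: str) -> str:
    # Placeholder for the character step: produce a three-view turnaround sheet.
    return f"three_view({character_image})"

def generate_storyboard_clip(scene: Scene, turnaround: str) -> str:
    # Placeholder for the storyboard steps: render a still, then animate it.
    return f"clip({scene.description}, {turnaround})"

def stitch(clips: list[str]) -> str:
    # Placeholder for the final edit: concatenate the clips into one video.
    return " + ".join(clips)

def make_short_film(idea: str, character_image: str) -> str:
    scenes = write_script(idea)                        # 1. script
    turnaround = generate_three_view(character_image)  # 2. character three-view
    clips = [generate_storyboard_clip(s, turnaround)   # 3-4. storyboards -> clips
             for s in scenes]
    return stitch(clips)                               # 5. stitch the final cut

print(make_short_film("a short story about Little Mutual", "little_mutual.png"))
```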

Some issues
Content length is best kept to about 1 minute; anything longer gets slow and harder to control.
If it goes beyond 1 minute, you'll have to split it up yourself, since it currently can't handle a long video in one go.
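If you do need something longer, one workaround is to pre-split the script yourself and submit each part as its own task. A rough Python sketch of that idea, assuming you already have scenes with rough durations (none of this is LibTV's API):

```python
# Hypothetical pre-splitting: group scenes into chunks of at most ~60 seconds,
# then generate each chunk as a separate task and join the results afterwards.

def split_into_segments(scenes: list[tuple[str, float]],
                        max_seconds: float = 60.0) -> list[list[str]]:
    segments, current, elapsed = [], [], 0.0
    for description, duration in scenes:
        if current and elapsed + duration > max_seconds:
            segments.append(current)        # close the current <=60s chunk
            current, elapsed = [], 0.0
        current.append(description)
        elapsed += duration
    if current:
        segments.append(current)
    return segments

scenes = [("opening", 20.0), ("conflict", 35.0), ("twist", 30.0), ("ending", 25.0)]
for i, segment in enumerate(split_into_segments(scenes), start=1):
    print(f"segment {i}: {segment}")        # submit each segment as its own run
```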
One Key can only run one task at a time, so multiple tasks have to queue up rather than run in parallel. Also, opening multiple sessions at once can cause conflicts, and sometimes assets end up inserted in the wrong place.
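Given that constraint, the simple workaround is to feed jobs through one session and run them strictly one after another. A minimal sketch of that idea; submit_and_wait() is a stand-in for whatever actually launches a LibTV workflow, not a real call:

```python
# Hypothetical sequential queue: one task in flight per Access Key, to avoid
# the conflicts that show up when several sessions run at once.
from collections import deque
import time

def submit_and_wait(prompt: str) -> str:
    # Placeholder: start a workflow for `prompt` and block until it finishes.
    time.sleep(0.1)
    return f"done: {prompt}"

def run_queue(prompts: list[str]) -> list[str]:
    queue, results = deque(prompts), []
    while queue:
        results.append(submit_and_wait(queue.popleft()))  # strictly serial
    return results

print(run_queue(["episode 1", "episode 2", "episode 3"]))
```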
The automation still has some uncontrollable moments, so you still need to keep an eye on the workflow page and adjust things manually.
Chinese text and subtitles rendered in the images aren't great; it seems to be a model limitation, and manual touch-ups are still needed for better results.
However, as model capabilities upgrade, say once Seedance 2.0 is integrated, or as the workflows and Skills improve, this should get resolved and the whole thing should become smarter.
The model lineup here is quite comprehensive: for images there are all-purpose Image V2 and Seedream 5.0, for video Kling O3 and Wan 2.6, plus audio features including music generation and text-to-speech.
I suspect plugging in overseas models might give better results, but the cost would be higher.
Currently, I think the highlight is: this is a new mode of human-machine collaboration.
For more detailed evaluations and introductions, see here:
