
Haotian | CryptoInsight
Independent Researcher | Interpreting frontier blockchain technology from technical and business perspectives | ZK, AI Agent, DePIN, etc. | Hardcore science communication | Previously: @ambergroup_io | @peckshield | DMs for Collab | Community access is for Substack subscribers only
Once a low-key investor in Sei, @circle has now made a high-profile entry onto Wall Street. Is it finally time to "support old friends"? With native USDC now live on Sei, many may wonder: why @SeiNetwork? And what difference does Circle's "new identity" as a compliant stablecoin issuer make?
1) Sei is a high-performance layer 1 blockchain optimized specifically for digital asset trading. Its parallelized EVM architecture processes transactions concurrently, delivering 390-millisecond finality and up to 28,300 TPS, a typical representative of the new generation of high-performance chains.
The point, though, is not the benchmark numbers themselves: the goal of a high-performance layer 1 is to unlock entirely new high-frequency application scenarios, above all enterprise-grade, high-concurrency payments.
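To connect the headline figures to that payments scenario, a quick back-of-envelope calculation (a theoretical ceiling derived from the claimed numbers above, not a measured benchmark):

```python
# Theoretical daily capacity implied by the claimed figures; illustrative only.
tps = 28_300          # claimed parallel throughput
finality_ms = 390     # claimed time-to-finality
tx_per_day = tps * 86_400
print(f"{tx_per_day:,} tx/day ceiling at {finality_ms} ms finality")
# ~2.45 billion transactions/day: the order of magnitude that enterprise-grade,
# high-concurrency payment flows would actually require.
```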
Sei's differentiation lies in base-layer optimizations built for trading scenarios: a built-in native order matching engine (OME) that directly tackles MEV, a Twin-Turbo consensus mechanism, and a SeiDB storage layer that keeps on-chain data from bloating, making it a purpose-built track for high-frequency stablecoin circulation.
So USDC, now issued by a compliant listed company, teaming up with its old friend Sei (one supplies the highway, the other the sports car) could be a natural partnership. If Circle's early stake in Sei was a strategic investment, launching native USDC is the commercial realization of that investment.
2) As a listed, compliant issuer, Circle can channel significant liquidity into Sei through USDC, the most trusted and regulated stablecoin globally; Sei's DeFi, payments, gaming, and other verticals naturally benefit from gaining a compliant stablecoin as their underlying asset.
But the deeper significance, I feel, is that Wall Street is effectively publishing a "procurement standard" for crypto infrastructure.
Look back over three years: stablecoins went from being questioned, to BlackRock, Fidelity, and others actively building out digital-asset infrastructure, to the GENIUS Act and Circle becoming a new darling of the US stock market. Barring surprises, Wall Street will need to screen for more "qualified suppliers" able to accommodate institutional funds.
After going public, Circle carries the pressure of earnings growth and financial disclosure, so it can only push USDC harder into mainstream commercial applications. The only path that decouples stablecoin usage from purely speculative trading cycles is a large-scale breakthrough in enterprise-level high-frequency payments: real-time payroll, millisecond cross-border B2B settlement, supply-chain finance, and similar verticals.
That, I believe, is the core significance of Circle, a listed compliant stablecoin issuer, and $SEI, a new high-performance trading-optimized layer 1, so quickly "tying the knot".
On the recent Ethereum version of the "MicroStrategy Summer" craze: can $ETH really replicate the "positive flywheel" of BTC MicroStrategy? Some personal views:
1) The ETH MicroStrategy trade is indeed modeled on BTC MicroStrategy's success, and in the short term many US-listed companies will FOMO in, creating a wave of positive flywheel. However these stocks are being operated, the fact that traditional institutional and retail money is buying $ETH as a reserve asset has genuinely pulled Ethereum out of its long slump.
In other words, FOMO driving prices up is an unchanging rule of the crypto market; the difference is that this time the FOMO crowd is no longer just crypto-native retail but real money from Wall Street. That at least confirms ETH has escaped the trap of relying purely on crypto-native narratives and has begun attracting incremental funds from outside the circle.
2) BTC is closer to the positioning of "digital gold" as a reserve asset, with relatively stable value and clear expectations, while ETH is essentially a "productive asset," with its value tied to multiple factors such as the usage rate of the Ethereum network, gas fee income, and ecological development. This means that the volatility and uncertainty of ETH as a reserve asset are greater.
If the Ethereum ecosystem encounters significant technical security issues, or if regulators apply pressure on DeFi, staking, and other functions, the risks and volatility variables faced by ETH as a reserve asset could be much larger than those for BTC. Therefore, while the narrative logic of BTC MicroStrategy can be referenced, it does not mean that the market pricing and valuation logic can remain consistent.
3) The Ethereum ecosystem has a more mature DeFi infrastructure accumulation and richer narrative extensibility compared to BTC. Through the staking mechanism, ETH can generate about 3-4% native yield, making it akin to "on-chain interest-bearing treasury bonds" in the crypto world.
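To make the "on-chain interest-bearing bond" framing concrete, a minimal sketch using the roughly 3-4% native staking yield cited above (the position size, horizon, and annual compounding are hypothetical simplifications):

```python
# Hypothetical illustration of ETH staking yield compounding on a reserve position.

def compounded(principal_eth: float, apr: float, years: int) -> float:
    """Annual compounding of staking rewards (simplified; real rewards accrue continuously)."""
    return principal_eth * (1 + apr) ** years

treasury = 10_000.0  # hypothetical corporate ETH reserve
for apr in (0.03, 0.04):
    print(f"APR {apr:.0%}: {compounded(treasury, apr, 5):,.0f} ETH after 5 years")
```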
At first glance, institutions buying this story looks like a short-term negative for the BTC layer 2s and other infrastructure built to generate native yield on BTC, but in the long run the opposite is true: once ETH, as a programmable interest-bearing asset, plays a greater catalytic role in the ETH MicroStrategy trade, it will spur the BTC ecosystem to develop faster and fill in its missing infrastructure.
4) This round of MicroStrategy Summer is essentially a major reshuffle of crypto's narrative direction. Previously, project teams built products and pitched technical narratives to VCs and retail, storytelling aimed at crypto natives. Now the new narratives, whether RWA or TradFi integration, have to be told to Wall Street.
The key difference is that Wall Street does not buy pure concepts; it wants PMF: real user growth, revenue models, market size, and so on. That forces crypto projects to shift from a "technology-narrative orientation" to a "business-value orientation", exactly the kind of pressure its earlier competitor Solana put on Ethereum. Sooner or later it has to be faced.
5) The US-listed names in this round of the MicroStrategy concept, including SharpLink Gaming, Bitmine Immersion Tech, Bit Digital, BTCS Inc., and others, are mostly companies whose traditional businesses have stalled and who need crypto to find a new breakout. Their all-in bets on crypto assets usually reflect a lack of growth in the core business, forcing them to look for a new value-growth engine.
These operators dare to be so aggressive largely because of the "arbitrage window" created by the US government pushing bold crypto-industry reform before the regulatory framework has matured. In the short term they have exploited plenty of legal and compliance gray zones: ambiguous accounting standards for classifying crypto assets, lenient SEC disclosure requirements, gray areas in tax treatment, and so on.
MicroStrategy's success owed much to BTC's super bull market; the imitators may not have the same luck or operational skill. So the market heat these operators generate is not so different from earlier rounds of pure crypto-native narrative hype. It is essentially a gamble and trial-and-error, and investment risk deserves vigilance.
Note: this round of MicroStrategy Summer is more like a "grand rehearsal" for crypto entering the mainstream financial system. If it succeeds, everyone is happy; if it fails, it is still a small win (after all, any experiment that can pull ETH out of its narrative-fatigue quagmire counts as a success, whatever the outcome!).

In the past, a surge in $ETH would trigger a rotation through the Ethereum ecosystem: meme coins, DeFi blue chips, layer 2s, NFTs, GameFi, and narratives like ZK and modularity would take turns launching, a genuinely flourishing scene;
Now that $ETH is taking off again, the market's taste has completely changed. No one is watching the Ethereum ecosystem's native innovation narratives; instead, people are hunting for US stocks that wrap ETH exposure in TradFi packaging, with little regard for the underlying fundamentals of the companies doing the wrapping.
Well, BTC is printing new highs and ETH looks poised for one too, but will the long-lost crypto-native altcoin season return? Is this new round of TradFi-led RWA integration really more opportunity than trap? And is the wild, innovative, opportunity-rich, volatile, imagination-fueled crypto retail market we knew still out there?
Haotian | CryptoInsight reposted
AI is reshaping Web3. Will AgentFi be the core variable of the next wave?
Tomorrow night we will sit down with industry insiders for an in-depth discussion of "AgentFi: New Opportunities in the Web3 Intelligent Economy under the AI Wave":
Event details🔽
Date: July 18 (Friday)
Time: 20:00
Language: Chinese
Guest lineup:
- Haotian @tmel0211 (Independent Researcher)
- 0xTodd @0x_Todd (Partner at Nothing Research / Co-founder of Ebunker)
- LaoBai @wuhuoqiu (Former Partner in Investment and Research at ABCDE)
- Howe @0xcryptoHowe (Intern at Amber Group)
- Anne @AnneXingxb (AI Researcher at Amber Group)
Topics include entrepreneurial opportunities in Agents, structural variables, and the integration path and future outlook of AI + Crypto.
Come listen and scout the next breakout point of the intelligent economy with us 👉
See you tomorrow night!

Why are $ETH's new highs important for the altcoin market?
1) ETH is the flagship of the technical narrative. For market confidence in altcoin technical narratives to return, the market first has to re-trust ETH's technical roadmap and the application vision of its product protocols.
2) ETH is not merely a single technical-narrative asset: the technical interdependencies among its major ecosystem projects, the composability of TVL liquidity, and their mutual pricing and valuation systems are all anchored to Ethereum's market fundamentals and move directly with them.
3) The ETH ecosystem has accumulated too many old narratives trading below expectations: can DeFi revive, can NFTs spark another cultural wave, do layer 2s really carry mass-adoption value, can the new RWA narrative lead the next main wave, and so on. These narratives, oversold in the downturn, could begin a round of violent rotation once ETH breaks to new highs and its valuation logic is repaired;
More and more projects are clearly heading to TGE, yet the fundamentals of the altcoin market do not seem to have improved much:
1) Ethereum is still at a low level and has yet to kick off a new round of altseason. How far it can rise is hard to say, but sentiment has to get there first, and that will take the return of belief in the ETH-led technical narrative;
2) Facing stacked negatives such as depleted liquidity, a shortage of technical narratives, and no airdrop word-of-mouth effect, issuing now looks like a move of last resort for project teams. From another angle, though, projects with the courage to issue in a market trough and then keep operating and building are far more credible than those still waiting for abundant liquidity before launching a token;
3) The contrast between Bitcoin's solo dance and the loneliness of altcoins has long since polarized, and we cannot expect Bitcoin's spillover effect alone to rescue the altcoin market.
Taking stock of the popular Crypto+AI projects of the past month, I find three significant trend changes, with brief intros and comments on each project below:
1) Technical roadmaps are more pragmatic, focusing on performance data rather than pure concept packaging;
2) Vertical niche scenarios have become the expansion focus, with generalized AI giving way to specialized AI;
3) Capital pays more attention to business-model validation, and projects with cash flow are clearly favored;
Attached: project intro, highlights, and personal comments for each:
1. @yupp_ai
Intro: a decentralized AI model evaluation platform; completed a $33 million seed round in June, led by a16z with participation from Jeff Dean.
Highlights: it applies the strengths of human subjective judgment to AI's weak spot, evaluation. Crowdsourced raters score 500+ large models, and user feedback can be cashed out (1,000 points = 1 USD); companies such as OpenAI buy the data, so there is real cash flow.
Comment: a project with a relatively clear business model, not a pure cash-burn play. The anti-farming problem is a big challenge, though, and the anti-Sybil algorithms will need continuous optimization. Still, judging from the $33 million raise, capital clearly favors projects with monetization already validated.
2. @Gradient_HQ
Intro: a decentralized AI compute network; completed a $10 million seed round in June, led by Pantera Capital and Multicoin Capital.
Highlights: built on the Sentry Nodes browser extension, it already enjoys some market consensus in the Solana DePIN space, with team members from Helium and similar projects. The newly launched Lattica data-transmission protocol and Parallax inference engine are substantive explorations in edge computing and data verifiability, reportedly cutting latency by 40% and supporting heterogeneous device access.
Comment: the direction is right, sitting squarely on the trend of AI "sinking" to local devices. But handling complex tasks more efficiently than centralized platforms is unproven, and edge-node stability remains a problem. Even so, edge compute is fresh demand created by web2 AI's involution, and it plays to the strengths of web3 AI's distributed framework.
3. @PublicAI_
Intro: a decentralized AI data infrastructure platform that uses tokens to incentivize users worldwide to contribute data across domains (medical, autonomous driving, voice, etc.), with cumulative revenue above $14 million and a network of millions of data contributors.
Highlights: ZK verification combined with a BFT consensus algorithm to assure data quality, plus Amazon Nitro Enclaves privacy computing to meet compliance requirements. More interesting is the HeadCap brainwave-acquisition device, an extension from software into hardware. The economic model is well designed too: 10 hours of voice annotation can earn a user $16 plus 500,000 points, and enterprise data-service costs can reportedly fall by 45%.
Comment: I feel the project's greatest value is serving real demand in AI data annotation, especially in fields like healthcare and autonomous driving where data-quality and compliance requirements are extreme. But its 20% error rate is still above the roughly 10% of traditional platforms, and data-quality fluctuation is a problem needing continuous work. The brain-computer interface direction is highly imaginative, but execution will not be easy.
4. @sparkchainai
Intro: a distributed compute network on Solana; completed a $10.8 million round in June, led by OakStone Ventures.
Highlights: dynamic sharding aggregates idle GPU resources to support inference for large models such as Llama3-405B, at a claimed cost 40% below AWS. The tokenized compute-trading design is interesting: it turns hashrate contributors directly into stakeholders and incentivizes more people to join the network.
Comment: the classic "aggregate idle resources" model, logically sound. But the 15% cross-chain verification error rate is genuinely high, and technical stability needs polish. There is an edge in latency-tolerant workloads like 3D rendering; the key is whether the error rate can come down, because otherwise even the best business model gets dragged under by technical problems.
5. @olaxbt_terminal
Intro: an AI-driven cryptocurrency high-frequency trading platform; completed a $3.38 million seed round in June, led by @ambergroup_io.
Highlights: MCP technology dynamically optimizes trade routing to reduce slippage, with a measured 30% efficiency gain. Riding the #AgentFi trend, it has found an entry point in the still-underserved niche of DeFi quantitative trading, filling a real market demand. (A toy routing sketch follows after the comment below.)
Comment: the direction is fine; DeFi does need smarter trading tools. But high-frequency trading demands extreme latency and accuracy, and the real-time coupling of AI prediction with on-chain execution remains to be verified. MEV attacks are also a major risk, and technical protections must keep up.
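For intuition on what "dynamically optimized transaction paths" means in practice, here is a toy illustration of slippage-aware order routing. The venues, depths, and the linear price-impact cost model are all invented for illustration; this is not OlaXBT's actual algorithm.

```python
# Toy slippage-aware router: pick the venue with the lowest expected all-in cost.

VENUES = {
    "dexA": {"depth": 1_200_000.0, "fee": 0.003},  # deep pool, higher fee
    "dexB": {"depth": 400_000.0, "fee": 0.001},    # shallow pool, lower fee
}

def expected_cost(size: float, depth: float, fee: float) -> float:
    slippage = size / (2 * depth)          # crude linear price-impact estimate
    return size * (fee + slippage)

order = 50_000.0
best = min(VENUES, key=lambda v: expected_cost(order, **VENUES[v]))
print(f"route {order:,.0f} via {best}; "
      f"est. cost {expected_cost(order, **VENUES[best]):,.2f}")
```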
Note: feel free to add other new AI+Crypto projects in the comments; I will screen the ones with research value and follow up with shares, thanks.
Besides AI "sinking" to local devices, the biggest recent change in the AI track is the breakthrough in multimodal video generation: from generating video from plain text to fully integrated generation across text + image + audio.
A few breakthrough cases to get a feel for it:
1) ByteDance's open-source EX-4D framework: monocular video becomes free-viewpoint 4D content in seconds, with a user approval rate of 70.7%. In other words, given an ordinary video, AI can generate a view from any angle, something that previously required a professional 3D modeling team;
2) Baidu's "Drawing Imagination" platform: one image generates a 10-second video, with claimed "cinematic" quality. Whether that is marketing exaggeration remains to be seen after the Pro version update in August;
3) Google DeepMind's Veo: simultaneous generation of 4K video plus ambient sound. The key technical highlight is true "synchronization": previously video and audio came from separate systems and were spliced together, and real semantic-level matching, such as footstep audio aligned with walking motion in a complex scene, is a major challenge to overcome;
4) Douyin's ContentV: 8 billion parameters, a 1080p video generated in 2.3 seconds, at a cost of 3.67 yuan per 5 seconds. Honestly the cost control is decent, though generation quality on complex scenes still disappoints;
Why do these cases matter so much for video quality, generation cost, and application scenarios?
1. On the technical-value breakthrough: the complexity of multimodal video generation compounds rapidly. A single frame is on the order of 10^6 pixels; the video must hold temporal coherence across at least 100 frames; audio must stay synchronized at roughly 10^4 samples per second; and 3D spatial consistency has to hold throughout.
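A back-of-envelope on those orders of magnitude (illustrative only; real systems generate in compressed latent spaces rather than raw pixels):

```python
# Rough output dimensionality of one clip, using the figures cited above.
pixels_per_frame = 10**6   # ~1080p frame
frames = 100               # minimum for temporal coherence, per the text
audio_rate = 10**4         # audio samples per second
fps = 24                   # assumed frame rate

video_values = pixels_per_frame * frames
audio_values = audio_rate * frames // fps
print(f"video: {video_values:.1e} values, audio: ~{audio_values:.1e} values")
# Temporal, audio-sync, and 3D-consistency constraints couple all of these
# dimensions, which is why joint generation is far harder than per-frame work.
```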
All told, the technical complexity is substantial. The old approach was one super-large model brute-forcing every task; Sora reportedly burned tens of thousands of H100s to gain video-generation capability. Now the same result can be reached through modular decomposition plus division of labor among large models. ByteDance's EX-4D, for instance, breaks the task into a depth-estimation module, a view-transform module, a temporal-interpolation module, a rendering-optimization module, and so on. Each module does one thing well, and a coordination mechanism ties them together.
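A minimal sketch of that "modular decomposition + coordination" pattern. The module names follow the EX-4D description above, but every interface here is hypothetical, with strings standing in for real model calls:

```python
# Hypothetical modular pipeline: each module does one job; a coordinator
# threads shared state through the sequence.

class Module:
    def run(self, state: dict) -> dict:
        raise NotImplementedError

class DepthEstimation(Module):
    def run(self, state):
        state["depth"] = f"depth_map({state['video']})"
        return state

class ViewTransform(Module):
    def run(self, state):
        state["views"] = f"novel_views({state['depth']})"
        return state

class TemporalInterpolation(Module):
    def run(self, state):
        state["frames"] = f"interpolated({state['views']})"
        return state

class RenderOptimization(Module):
    def run(self, state):
        state["output"] = f"rendered({state['frames']})"
        return state

def coordinate(video: str, pipeline: list) -> dict:
    state = {"video": video}
    for module in pipeline:
        state = module.run(state)
    return state

result = coordinate("input.mp4", [DepthEstimation(), ViewTransform(),
                                  TemporalInterpolation(), RenderOptimization()])
print(result["output"])
```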
2. On cost reduction: the optimization is in the inference architecture itself. A hierarchical generation strategy first produces a low-resolution skeleton, then enhances the imagery at high resolution; a cache-reuse mechanism recycles results for similar scenes; and dynamic resource allocation adjusts model depth to the complexity of the specific content.
After a round of such optimization you get a result like Douyin ContentV's 3.67 yuan per 5 seconds.
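A sketch of those three optimizations side by side. Every function, threshold, and depth value here is hypothetical, standing in for real inference infrastructure:

```python
# Hypothetical sketches of hierarchical generation, cache reuse, and dynamic depth.
from functools import lru_cache

def generate(prompt: str, resolution: str, depth: int = 24) -> str:
    return f"{resolution}-res clip[{depth} layers]({prompt})"  # placeholder model call

def hierarchical_generate(prompt: str) -> str:
    skeleton = generate(prompt, "low")      # cheap structural pass first
    return f"upscaled({skeleton})"          # spend compute only on refinement

@lru_cache(maxsize=256)
def cached_scene(scene_signature: str) -> str:
    return generate(scene_signature, "high")  # similar scenes reuse one result

def pick_depth(prompt: str) -> int:
    complexity = min(len(prompt) / 100, 1.0)  # toy complexity estimate
    return 12 if complexity < 0.5 else 48     # shallow model for simple content

print(hierarchical_generate("a cat walking on a beach"))
print(generate("busy market scene", "high", depth=pick_depth("busy market scene")))
assert cached_scene("beach sunset") is cached_scene("beach sunset")  # cache hit
```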
3. On application impact: traditional video production is an asset-heavy game of equipment, sets, actors, and post-production, where a 30-second commercial costing hundreds of thousands of dollars is normal. Now AI compresses that process into a prompt plus a few minutes of waiting, and can reach camera angles and effects traditional shoots struggle to achieve.
This shifts the barriers of video production from technology and capital to creativity and taste, which may force a reshuffle of the entire creator economy.
The question is: with all these demand-side changes in web2 AI technology, what does any of it have to do with web3 AI?
1. First, the structure of compute demand has changed. AI used to compete on raw compute scale: whoever had more homogeneous GPU clusters won. Multimodal video generation instead needs a diversified mix of compute, which creates demand for distributed idle compute along with distributed fine-tuning models, algorithms, and inference platforms.
2. Second, demand for data annotation will strengthen. Generating professional-grade video needs precise scene descriptions, reference images, audio styles, camera-movement trajectories, lighting conditions, and more; these become new professional annotation requirements. Web3 incentives can motivate photographers, sound engineers, 3D artists, and others to supply professional data, strengthening AI video generation with specialized vertical annotations;
3. Finally, it is worth noting that as AI moves from centralized, large-scale resource allocation toward modular collaboration, that shift is itself new demand for decentralized platforms. Compute, data, models, and incentives will then combine into a self-reinforcing flywheel, driving the convergence of web3 AI and web2 AI scenarios.

Haotian | CryptoInsight · Jul 2, 10:37
Observing the AI industry recently, I see an increasingly clear "sinking" shift: from the old mainstream consensus of concentrated compute and ever-"larger" models, a branch favoring local small models and edge computing has evolved.
You can see it in Apple Intelligence covering 500 million devices, in Microsoft shipping Mu, a 330-million-parameter small model dedicated to Windows 11, in Google DeepMind's robots operating "off-network", and so on.
What's the difference? Cloud AI competes on parameter scale and training data, and the ability to burn money is the core competitiveness. Local AI competes on engineering optimization and scenario fit, and goes further on privacy protection, reliability, and practicality. (The hallucination problems of mainstream general-purpose models would seriously hurt penetration in vertical scenarios.)
This is actually a bigger opportunity for web3 AI. When everyone competed on "generalized" capability (compute, data, algorithms), the game was naturally monopolized by the traditional giants; going head-to-head with Google, AWS, or OpenAI was a pipe dream.
But in a world of local models plus edge computing, the situation facing blockchain-based services is very different.
When an AI model runs on a user's device, how do you prove its output hasn't been tampered with? How do you collaborate across models without compromising privacy? These questions are exactly where blockchain technology is strong...
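One minimal sketch of what "verifiable local output" could look like: sign a hash of (model, input, output) so a verifier holding the key can detect tampering. This is a generic HMAC commitment for illustration, not any specific project's protocol; real systems would lean on hardware attestation or ZK proofs rather than a shared secret:

```python
# Hypothetical tamper-evidence for on-device model output via an HMAC commitment.
import hashlib, hmac, json

DEVICE_KEY = b"hypothetical-provisioned-device-key"

def commit(model_id: str, prompt: str, output: str) -> dict:
    payload = json.dumps({"model": model_id, "in": prompt, "out": output},
                         sort_keys=True).encode()
    digest = hashlib.sha256(payload).hexdigest()
    tag = hmac.new(DEVICE_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return {"digest": digest, "tag": tag}

def verify(record: dict, model_id: str, prompt: str, output: str) -> bool:
    return hmac.compare_digest(commit(model_id, prompt, output)["tag"], record["tag"])

rec = commit("local-llm-v1", "summarize this doc", "the doc says ...")
print(verify(rec, "local-llm-v1", "summarize this doc", "the doc says ..."))  # True
print(verify(rec, "local-llm-v1", "summarize this doc", "tampered output"))   # False
```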
Some new web3 AI projects are already moving this way: Lattica, the data-communication protocol recently launched by @Gradient_HQ (fresh off a $10M round led by Pantera), targets the data-monopoly and black-box problems of centralized AI platforms; @PublicAI_'s HeadCap brainwave device collects real human data and builds a "human verification layer", with $14M in revenue already booked. Both are, in essence, attempts to solve the "trustworthiness" of local AI.
Bottom line: only when AI truly "sinks" to every device will decentralized collaboration go from concept to necessity.
#Web3AI projects, rather than continuing to grind in the generalization race, should think seriously about how to provide infrastructure support for the coming wave of localized AI.
Over the past few days, @plumenetwork's $PLUME has performed well in the secondary market, once again seemingly confirming that in the current crypto market, a Trump-family connection means steady, happy gains. What's going on?
Plume recently announced a major partnership with World Liberty Financial (WLFI) @worldlibertyfi, founded by the Trump family, becoming the first RWA chain to integrate USD1 into the reserve assets of its native stablecoin, pUSD.
USD1 is a Trump-family-backed stablecoin fully pegged 1:1 to US Treasuries, with a current market capitalization of $2.2 billion and the second-largest 24-hour trading volume among stablecoins, behind only USDT.
In other words, USD1 is no ordinary stablecoin project. With the dual endorsement of the Trump family and the SEC, and against the backdrop of a crypto-friendly administration, the RWA track has effectively gained a "policy express lane".
So, what are the core highlights of this collaboration?
1) RWA carrying institutional-grade funds at commercial scale is now a fact: earlier, UAE-backed MGX settled its $2 billion investment in Binance using USD1, verifying that the RWA narrative has large-scale commercial application.
For Plume, therefore, integrating USD1 is not just $2.2 billion of liquidity support; it means RWA has stepped out of the pure DeFi sandbox to become infrastructure capable of carrying enormous traditional-finance flows. In essence it marks RWA's transition from "proof of concept" to "commercialization";
2) The stablecoin landscape is in flux: USD1 has grown from zero to a $2.2 billion market cap, and its 24-hour trading volume can surpass USDC's for second place. Speed like that proves the USDT/USDC duopoly can be pried open by a USD1 armed with "political endorsement".
Plume's early move on USD1 is clearly a bid for an early position in the new stablecoin order. While other RWA projects are still wrestling with USDT or USDC, Plume has already planted itself on the side of policy certainty;
To sum up: in an environment where crypto is this exposed to policy, "political correctness" can sometimes deliver short-term alpha more readily than technological innovation. The market has built a mature playbook around Trump-family concepts, and this wave of short-term political dividend looks set to run for a while.
Of course, policy friendliness is only a short-term catalyst. In the long run, projects must return to the test of fundamentals: in particular, whether the RWA ecosystem can genuinely bring more traditional assets on-chain, and whether it can deliver differentiated product innovation within the compliance framework.

Plume - RWAfi Chain · Jul 1, 08:20
On the heels of our meetings with Treasury and the SEC's Crypto Task Force last month, we are grateful for the privilege of spending time with Donald J. Trump, the President of the United States, to discuss crypto policy, tokenization, and the future of RWAs in America.
The passing of the GENIUS Act sets the stage for the future of crypto in America. Our meeting with the SEC Crypto Task Force and the announcement from Paul Atkins referencing the innovation exemption that we introduced for onchain products like Nest underscore how serious this is.
We are excited to continue working with the President of the United States, Donald J. Trump, and the rest of the cabinet — including Scott Bessent, Vice President J.D. Vance, the SEC Crypto Task Force, and the Treasury to define and advance crypto policy in America, enabling a free, open, and permissionless world for everyone.
This is the start of much more, where permissionless innovation and regulatory clarity go hand-in-hand. Trillions of dollars of liquidity inbound.
Plumerica 🇺🇸
