
In the era of physical AI, a "visual data competition" will unfold

Morgan Stanley believes that just as chatbots require text data to train large language models (LLMs), physical robots need visual data to train their vision-language-action models (VLAs). The bank expects that as computing power continues to expand and efficiency improves, AI companies will need vast amounts of visual data to create a "digital twin" of the physical world, making visual data the next competitive focus for AI giants.

