AI Startup Hugging Face Is Building Small LMs for 'Next Stage Robotics'
Hugging Face plans to use small language models (LMs) to power the next stage of robotics, according to co-founder and Chief Science Officer Thomas Wolf. Robotic applications demand low latency, he argued, and small LMs are key to giving robots faster understanding and response times. Wolf noted that these models can now handle tasks once thought to require much larger models, and that they can run directly on devices such as laptops and smartphones. He pointed to Hugging Face's recently released 1-billion-parameter LLaMA model as evidence that small models can match the performance of far larger ones. The company's strategy, Wolf explained, is to train on specialized datasets, which enables efficient, rapid deployment for applications such as data handling and image processing. As the field evolves, he expects a split between large models aimed at solving complex human challenges and small, embedded models that enhance everyday tools and appliances.
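For readers who want a sense of what "running a small LM on a laptop" looks like in practice, here is a minimal sketch using Hugging Face's `transformers` library. The model id and prompt are illustrative assumptions, not details from the interview; any small instruction-tuned model from the Hub could be substituted.

```python
# Minimal sketch: run a small language model locally with the transformers pipeline.
# The model id below is an example small model; swap in any model available on the Hub.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="HuggingFaceTB/SmolLM2-1.7B-Instruct",  # illustrative choice of small model
    device_map="auto",  # uses a GPU if present, otherwise falls back to CPU
)

# A robotics-flavored prompt (hypothetical): for embedded use, latency matters
# as much as raw capability, which is the case Wolf makes for small models.
prompt = "List three checks a household robot should run before picking up a cup."
print(generator(prompt, max_new_tokens=80)[0]["generated_text"])
```

On a recent laptop CPU, a model of this size loads in a few gigabytes of memory and responds in seconds rather than requiring a round trip to a hosted large model, which is the latency advantage the article describes.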
Source 🔗