
Scaling AI Chatbots with Chatfuel

Posted: Tue Apr 22, 2025 10:04 am
by nusaibatara
For SaaS companies, delivering an exceptional customer experience is critical as it lays the foundation for trust and satisfaction.

Chatfuel, the leading AI-powered customer engagement automation platform, used Nebius AI Studio to run a cascade of Llama-405B models together with a custom SDK, achieving significant efficiency gains along with improved response quality and interaction speed for its AI chatbot agents.

Founded in 2015 by two Y Combinator alumni, Chatfuel's no-code, AI-powered platform is designed to automate communication and improve customer engagement for service businesses.

In partnership with Meta since 2016, the company serves thousands of customers, integrating its software directly with the technology giant's messaging platforms and streamlining interactions on WhatsApp, Facebook Messenger, Instagram and websites.

Chatfuel’s AI agents are pre-trained on billions of messages, freeing up human agents to focus on growth and creativity. Previously, Chatfuel relied exclusively on GPT-4 models for its chatbot agents. However, it needed more reliable, high-performance models to streamline onboarding processes and enhance agent capabilities. This required an infrastructure capable of handling demanding training and testing processes.

Through our partnership, Chatfuel deployed a cascade of state-of-the-art Llama-405B models on Nebius infrastructure to achieve exceptional chatbot performance. The Nebius team developed this cascade and a custom SDK, delivering significant efficiency gains alongside improved response quality and interaction speed.
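
Chatfuel's production cascade and SDK are not public, but as a rough illustration of the pattern: a cascade typically answers each request with a faster, cheaper model first and escalates to Llama-405B only when the first pass is uncertain. The sketch below assumes Nebius AI Studio's OpenAI-compatible endpoint; the base URL, model identifiers and escalation rule are placeholders, not Chatfuel's implementation.

```python
# Illustrative sketch only: Chatfuel's actual cascade and SDK are not public.
# Assumes an OpenAI-compatible Nebius AI Studio endpoint; the base URL and
# model names are placeholders and may differ from the real deployment.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.studio.nebius.ai/v1/",  # assumed endpoint
    api_key=os.environ["NEBIUS_API_KEY"],
)

FAST_MODEL = "meta-llama/Meta-Llama-3.1-70B-Instruct"     # hypothetical first stage
STRONG_MODEL = "meta-llama/Meta-Llama-3.1-405B-Instruct"  # hypothetical second stage


def ask(model: str, user_message: str) -> str:
    """Single chat completion against the chosen model."""
    resp = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": "You are a customer-support assistant."},
            {"role": "user", "content": user_message},
        ],
        temperature=0,
    )
    return resp.choices[0].message.content.strip()


def cascade(user_message: str) -> str:
    """Try the fast model first; escalate to the strong model if it abstains."""
    draft = ask(
        FAST_MODEL,
        f"{user_message}\n\nIf you are not confident, reply exactly: ESCALATE",
    )
    if draft == "ESCALATE":
        return ask(STRONG_MODEL, user_message)
    return draft


if __name__ == "__main__":
    print(cascade("How do I connect my WhatsApp number to the bot?"))
```

Letting the first stage abstain explicitly is one simple way to trade a little extra latency on hard requests for a much lower average cost per conversation.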

Balancing cost, performance and speed
It took just one month to launch the cascade of models into production using Nebius AI Studio. The Nebius team played a crucial role in developing and testing multiple versions of the model, helping Chatfuel identify the best balance between performance, cost, and speed. This collaboration ensured that Chatfuel’s new models met the stringent requirements needed for real-time chatbot performance and scalability.

Chatfuel used Nebius AI Studio to serve the Llama-405B models, enabling them to perform various tasks (constrained text generation, text classification, etc.) with state-of-the-art quality. Additionally, Chatfuel used Nebius Compute Cloud to host the solution components and evaluate their quality.
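
Constrained generation and classification of this kind can be served through the same OpenAI-compatible interface. The minimal sketch below, again with an assumed endpoint and model id, restricts the model to a fixed label set and validates the reply before using it; the labels are illustrative, not Chatfuel's.

```python
# Minimal sketch of text classification with a fixed label set, served through
# an OpenAI-compatible endpoint. The model id and base URL are assumptions.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.studio.nebius.ai/v1/",  # assumed endpoint
    api_key=os.environ["NEBIUS_API_KEY"],
)

LABELS = ["billing", "technical_issue", "sales", "other"]  # illustrative labels


def classify(message: str) -> str:
    """Constrain the model to one of LABELS and validate its output."""
    resp = client.chat.completions.create(
        model="meta-llama/Meta-Llama-3.1-405B-Instruct",  # assumed model id
        messages=[
            {
                "role": "system",
                "content": (
                    "Classify the user's message. "
                    f"Answer with exactly one of: {', '.join(LABELS)}."
                ),
            },
            {"role": "user", "content": message},
        ],
        temperature=0,
        max_tokens=10,
    )
    answer = resp.choices[0].message.content.strip().lower()
    return answer if answer in LABELS else "other"  # fall back on invalid output


print(classify("My card was charged twice this month."))  # expected: billing
```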

Adopting the Llama-405B model also improved the quality of Chatfuel's analysis of customers' free-form descriptions of their desired agents and reduced LLM-related costs. It also outperformed OpenAI's GPT-4 by 24% at routing user input to the correct agent.

Data manipulation
Previously, creating effective AI-powered chatbots required large amounts of data to fine-tune models. However, much of the data Chatfuel works with initially lacked proper labeling, making it difficult to achieve high-quality results.

The Nebius team helped optimize the training process, allowing Chatfuel to maintain excellent results even with smaller, well-curated data samples. This approach not only improved the quality of Chatfuel's solutions but also saved significant cost and resources, allowing the team to deliver powerful chatbots more efficiently.
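
A well-curated sample also makes quality checks cheap: a handful of hand-labeled examples is enough to track routing accuracy before and after a model change. The sketch below is a generic evaluation loop under assumed names; the sample messages, labels and stand-in router are illustrative, and in practice the router would call the served Llama-405B model as in the earlier sketches.

```python
# Sketch of measuring routing quality on a small, hand-curated labeled sample.
# The sample and the stand-in router are illustrative; swap in the model-backed
# classifier to evaluate the real system.
from typing import Callable

# A tiny curated evaluation set: (customer message, expected agent label).
CURATED_SAMPLE = [
    ("My card was charged twice this month.", "billing"),
    ("The Instagram integration stopped sending replies.", "technical_issue"),
    ("Do you offer a discount for annual plans?", "sales"),
    ("Thanks, that solved it!", "other"),
]


def evaluate(route: Callable[[str], str]) -> float:
    """Return the routing accuracy of `route` on the curated sample."""
    correct = sum(route(message) == label for message, label in CURATED_SAMPLE)
    return correct / len(CURATED_SAMPLE)


if __name__ == "__main__":
    # Keyword-based stand-in so the sketch runs offline.
    def keyword_route(message: str) -> str:
        text = message.lower()
        if "charged" in text or "refund" in text:
            return "billing"
        if "integration" in text or "error" in text:
            return "technical_issue"
        if "discount" in text or "plan" in text:
            return "sales"
        return "other"

    print(f"accuracy: {evaluate(keyword_route):.0%}")
```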