AI Translator Could Be ROI Boon

MIT researchers have created a way to turn baffling AI decisions into plain English, using an innovative system that acts like a translator between machines and humans.

The system, called EXPLINGO, pairs AI language models with a built-in fact-checker to turn complex machine-learning explanations into clear narratives. It’s part of an ongoing effort to make AI more explainable.

“Our goal with this research was to take the first step toward allowing users to have full-blown conversations with machine-learning models about the reasons they made certain predictions, so they can make better decisions about whether to listen to the model,” Alexandra Zytek, lead author of a paper on this technique, said in a news release.

This technology could help businesses provide clearer explanations to customers about AI-driven decisions, from product recommendations to loan applications. The ability to better understand AI reasoning could also help companies refine their automated systems to better serve customer needs while maintaining transparency about how decisions are made.

From Machine Language to Plain English

The system works in two stages. First, it takes dense technical data and visualizations that explain an AI decision and transforms them into readable paragraphs. Then, a quality-control mechanism automatically evaluates whether these plain-English explanations are accurate and reliable. Users can customize the explanations to match their expertise level or specific needs, making it possible for retailers and others to understand why an AI system made a particular choice.
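
To make that two-stage flow concrete, here is a minimal, LLM-free sketch of the narrate-then-grade pattern: stage one turns raw per-feature attributions into a readable sentence, and stage two checks the narrative against the source numbers before it is shown. The real system uses language models for both steps; the function names, example figures and crude string check below are illustrative assumptions, not EXPLINGO’s actual code.

```python
# Illustrative sketch of a "narrate, then grade" pipeline. All names and
# numbers here are hypothetical; EXPLINGO uses LLMs for both stages.

def narrate(attributions: dict[str, float]) -> str:
    """Stage 1: turn per-feature attributions into a plain-English sentence."""
    ranked = sorted(attributions.items(), key=lambda kv: -abs(kv[1]))
    parts = [
        f"{name.replace('_', ' ')} "
        f"{'raised' if weight > 0 else 'lowered'} the score by {abs(weight):.2f}"
        for name, weight in ranked
    ]
    return "The decision was driven mostly by these factors: " + "; ".join(parts) + "."

def grade(narrative: str, attributions: dict[str, float]) -> bool:
    """Stage 2: a crude fact-check that every source magnitude is cited."""
    return all(f"{abs(w):.2f}" in narrative for w in attributions.values())

# Usage with made-up loan-decision attributions (e.g., SHAP-style values).
attributions = {"income": 0.42, "credit_history": 0.31, "recent_defaults": -0.55}
story = narrate(attributions)
if grade(story, attributions):
    print(story)
```

Swapping both functions for prompted language-model calls, as the researchers describe, preserves the same contract: nothing reaches the user until the fact-checking stage signs off.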

“Retail was, for a long time, the land of gut instincts — except now, AI is supposed to have the better gut,” Lars Nyman, chief marketing officer at CUDO Compute, told PYMNTS. “If AI can say, ‘Stock 30% more parkas this month because snowfall forecasts increased by 25%, and similar patterns boosted sales last winter,’ it changes the game. Managers can now understand why the AI suggests what it does. It’s the difference between a general barking commands and a strategist briefing their team. With clear reasoning, managers can align faster, make sharper calls, and avoid the simple ‘computer says no’ position.”

The need for AI transparency extends beyond retail management to the customer experience. Dominique Ferraz, executive director of product and technology at digital product studio ustwo, told PYMNTS that customers are increasingly skeptical of opaque automated systems in retail, particularly AI-driven recommendations and pricing. While customers may not see the AI processes managing inventory and product availability, those processes significantly affect customer satisfaction.

Poor implementation of AI can erode brand trust and reduce sales, Ferraz said. However, when AI systems explain their reasoning through conversational interfaces that play a translator role, they can build customer relationships as effectively as traditional salespeople.

“This also applies to sales teams having to contend with pricing models and how to justify those to clients in this age of extremely tight supply chain tolerances and expectations of high availability or immediate satisfaction,” he added. “Making sure customers can see how pricing is affected by these models in a simple and conversational way can also help customers make sense of different options and better distinguish between retailers.”

AI’s Black Box Problem: When Machines Can’t Explain Themselves

MIT’s invention highlights a persistent challenge: explainability. Many AI systems operate as “black boxes,” producing decisions and recommendations without clear reasoning. This lack of transparency poses risks in areas like healthcare, finance and criminal justice, where understanding how decisions are made is essential.

The problem lies in the complexity of AI models, particularly neural networks, which process vast amounts of data to deliver outputs. While these systems excel at pattern recognition and prediction, they often fail to provide insight into their decision-making processes, leaving users unable to verify or challenge their conclusions.
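
The gap is easy to demonstrate. In the hedged sketch below, which assumes scikit-learn and synthetic data and is not drawn from the MIT work, a trained model emits a bare yes/no decision with no rationale attached; a post-hoc tool such as permutation importance recovers only a coarse, global ranking of features, not the case-by-case reasoning an affected customer would want.

```python
# A small illustration of the "black box" gap using scikit-learn (an assumed
# toolchain, unrelated to the MIT system). The data is synthetic.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in for a loan/credit dataset: 4 features, 2 informative.
X, y = make_classification(n_samples=500, n_features=4, n_informative=2,
                           random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# The model's output is a bare decision -- 0 or 1, no reasoning attached.
print(model.predict(X[:1]))

# Post-hoc tools recover only a coarse, global ranking of features.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance {score:.3f}")
```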

In fields where accountability is vital, the absence of clear explanations can erode trust and complicate oversight. Efforts to address this issue include developing interpretable models and implementing stricter transparency guidelines, but solutions remain incomplete, raising questions about the ethical and practical implications of deploying AI in sensitive contexts.
