“I think what we are starting to see is a maturation of generative AI in the real world,” said Jason Andersen, vice president and principal analyst at Moor Insights & Strategy. “Initially, generative AI was trying to gain momentum with the common user, who has diverse needs and questions. As internet users, we expect everything to be fast (purchases, searches, emails, etc.). That became a design point for AI, since it was assumed that if a prompt just sat there for minutes, users would not use it. So, speed vs. depth ended up being a trade-off to get users on board.
“Interestingly enough, data scientists make this type of trade-off every day, but the typical user just expects a certain type of response,” he said. “But now we are starting to see the value in asking different things of AI. For instance, I use AI for market research as well as content generation, so would I be willing to trade speed and content generation for a better research product? In my case, the answer is yes. But for an artist or designer or someone making blog posts, the answer could very well be no.”
“OpenAI’s deep research offering is compelling,” said Jeremy Roberts, senior research director at Info-Tech Research Group. “It’s a direct attempt to address the most common concerns about ChatGPT as it exists today: depth and reliability. By offering a product that is specifically designed to cite its sources and share its thinking, OpenAI addresses the criticism that its bot is unreliable and not suitable for real work. The examples they give are highly specific and technical, and they suggest that OpenAI is making headway in automating these specialized tasks to a greater degree than was possible with ChatGPT.