To understand where work will go under artificial intelligence automation, you first need to understand the idea of an AI agent.
Agent software in various forms has been around for decades. Traditionally, the term referred to small programs that monitored activity and reacted to particular circumstances. They might help process a data flow, alert specific people when problems arose, or otherwise take relatively limited actions under prearranged conditions.
According to a report from CB Insights, a market research and analyst firm, AI agents are more sophisticated versions of those small utility programs. They are powered by large language models, have a greater capacity to interact with humans, and can respond in more complex ways. The category has become very popular: more than 50 companies have focused on agents, agentic workflows, and agent infrastructure since 2022.
Those more complex interactions include the ability to schedule meetings, use websites, and plan vacations. The technology is “evolving quickly and becoming more capable — with varying degrees of autonomy,” according to CB Insights. Agents can pursue a directive and, in doing so, decide which tasks are necessary, arrange them in order, and complete them to reach the objective.
There are still significant limitations, such as working accurately with external tools like web pages or application programming interfaces, which allow one program to interact with or control another.
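To make the plan-and-execute idea concrete, here is a minimal, purely illustrative sketch of an agent loop. Everything in it is hypothetical: the planner is a hard-coded stand-in for a large language model, and the two tools are stubs rather than real calendar or booking APIs. A real agent would generate the plan dynamically and call actual web services, which is exactly where the accuracy problems described above tend to appear.

```python
# Hypothetical sketch of an agent's plan-and-execute loop.
# The planner and tools below are stand-ins, not real products or APIs.

from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Step:
    tool: str       # which tool the agent decided to use
    argument: str   # what input the agent decided to pass to it


def check_calendar(day: str) -> str:
    """Stub tool: pretend to look up free slots on a calendar."""
    return f"{day}: 10:00 and 14:00 are free"


def book_meeting(slot: str) -> str:
    """Stub tool: pretend to book a meeting through an external service."""
    return f"meeting booked for {slot}"


TOOLS: Dict[str, Callable[[str], str]] = {
    "check_calendar": check_calendar,
    "book_meeting": book_meeting,
}


def plan(directive: str) -> List[Step]:
    """Stand-in for the language model: break a directive into ordered tool calls."""
    return [
        Step(tool="check_calendar", argument="Tuesday"),
        Step(tool="book_meeting", argument="Tuesday 10:00"),
    ]


def run_agent(directive: str) -> List[str]:
    """Carry out the planned steps in order, collecting each tool's result."""
    results = []
    for step in plan(directive):
        results.append(TOOLS[step.tool](step.argument))
    return results


if __name__ == "__main__":
    for line in run_agent("Schedule a meeting with the sales team"):
        print(line)
```

The point of the structure is that the model, not a human, chooses which tool to call and with what argument; if either choice is wrong, the error propagates silently unless a person or a separate verification step catches it.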
Depending on how further development goes, vendors might be able to replace people in a large share of tasks. CB Insights said it had spoken to one company in 2019 that replaced 135 full-time employees involved in call routing by using Google’s Contact Center AI Dialogflow solution. Other areas likely to see heavy replacement of people include customer support and, to a degree, even software development.
However, there are limitations. Companies trying to use AI often don’t trust the results and need people to verify that the agents act correctly. The software may not be sophisticated enough to respond as a human would, and the technology may simply not be reliable.
In the meantime, companies trying to implement AI agents must understand how employees will react. They might get angry and seek other jobs, taking important intellectual capital with them. That could leave companies without the backup of human knowledge needed to correct problems.