Artificial intelligence is expected to contribute an estimated $15.7 trillion to the global economy by 2030, with the potential to boost the GDP of local economies by as much as 26%. The number of enterprises developing AI solutions and helping companies put them into operation has ballooned in tandem with growing demand.
AI has the power to transform the way organizations operate, reducing costs, increasing efficiency and improving compliance. But, as with any new technology, it’s not without its hurdles.
Some AI challenges are technical, like those relating to algorithms and data hygiene. Others are of the human variety, like the potential for user error. No matter their source, AI challenges must be part of the conversation when introducing intelligent systems, with proactive steps taken to prevent these pitfalls before they cause problems that outweigh the technology’s benefits.
Here are a few of the most common challenges to beware of when integrating AI and tips for how to avoid them.
- Unreliable data
AI systems rely on data to identify patterns and draw conclusions. If that data isn’t accurate, up-to-date or standardized, the resulting output isn’t reliable or useful.
In addition to reliability, there’s also the question of bias. Bias in AI systems frequently occurs when the data set used to build the algorithm doesn’t match or apply to the population the algorithm is being used on.
AI bias is a bit easier to grasp in the context of a real-world scenario, like in the case of machine-learning tools used to detect skin cancer. Some physicians have raised concerns about such tools because they’re “trained” on data sets from mostly white patients, making them less skilled at detecting melanoma on dark skin. To apply to all patients with a higher degree of accuracy, the algorithm would need to be trained on data from a much more diverse patient group.
To avoid data-based issues with AI systems, organizations must ensure that the data used to train the system is both ample and varied. Complex AI models require anywhere from thousands to millions of data points to perform calculations and make predictions that are reliable.
Additionally, the data that will be fed into the system on an ongoing basis must be unified. This means data from all sources, like different business units or geographical regions, is standardized into the same format (consistent naming conventions, units of measurement, and so on) and centralized in one place to produce a complete, non-siloed picture.
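As a minimal sketch of what that unification step can look like in practice, the snippet below normalizes customer records from two hypothetical source systems into one shared schema. The field names, regions, and exchange rate are all illustrative assumptions, not a prescribed format:

```python
# Hypothetical sketch: unify records from two business units that use
# different field names and units before feeding them to an AI system.

ASSUMED_EUR_TO_USD = 1.08  # illustrative rate, not a live value

def normalize_emea(record: dict) -> dict:
    # Assumed EMEA format: revenue in EUR thousands under "rev_k_eur".
    return {
        "customer_id": record["CustID"],
        "region": "EMEA",
        "annual_revenue_usd": record["rev_k_eur"] * 1000 * ASSUMED_EUR_TO_USD,
    }

def normalize_amer(record: dict) -> dict:
    # Assumed Americas format: revenue already in USD under "annualRevenue".
    return {
        "customer_id": record["id"],
        "region": "AMER",
        "annual_revenue_usd": record["annualRevenue"],
    }

emea_records = [{"CustID": "E-1", "rev_k_eur": 250}]
amer_records = [{"id": "A-7", "annualRevenue": 480000}]

# One standardized, centralized view across both sources.
unified = [normalize_emea(r) for r in emea_records] + \
          [normalize_amer(r) for r in amer_records]
```

The point isn’t the specific fields; it’s that every source is mapped into the same names and units before the data reaches the model, so no business unit’s records sit in their own silo.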
- Maintaining compliance
Staying within the bounds of the law is a key concern in any aspect of doing business, but AI introduces a new set of compliance challenges. Even those whose job it is to ensure compliance don’t always have a firm grasp on this new and uncharted territory; according to a KPMG survey, 80% of risk professionals were not confident about the governance in place around AI technologies.
First, companies must ensure the data their AI systems are using isn’t subject to privacy laws, like GDPR, and if it is, that the proper steps are taken to maintain compliance around it. Anonymization is one such strategy. In anonymization, data is stripped of personally identifying information while retaining other attributes that are pertinent to the algorithm.
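The sketch below shows one common approach: dropping direct identifiers and replacing the customer ID with a salted one-way hash so records can still be linked. The field names and salt are hypothetical, and strictly speaking this is pseudonymization rather than full anonymization; under GDPR, pseudonymized data is still treated as personal data, so it’s a risk-reduction step, not a compliance exemption on its own:

```python
import hashlib

PII_FIELDS = {"name", "email", "phone"}  # fields assumed to identify a person

def anonymize(record: dict, salt: str) -> dict:
    # Drop direct identifiers, keep attributes pertinent to the algorithm,
    # and replace the ID with a salted hash so records remain linkable
    # without exposing who they belong to.
    clean = {k: v for k, v in record.items() if k not in PII_FIELDS}
    clean["customer_id"] = hashlib.sha256(
        (salt + str(record["customer_id"])).encode()
    ).hexdigest()[:16]
    return clean

raw = {
    "customer_id": 1042,
    "name": "Jane Doe",           # PII: removed
    "email": "jane@example.com",  # PII: removed
    "plan": "enterprise",         # kept: pertinent to the algorithm
    "monthly_calls": 318,         # kept
}
safe = anonymize(raw, salt="rotate-me-periodically")
```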
Third-party AI vendors are another concern. With the exception of the largest enterprise organizations, most companies won’t manage every (or any) aspect of their AI setup and maintenance in-house. This means committing to work only with reputable outside partners who make staying abreast of the latest AI regulations a priority.
One of the best ways to mitigate risk is to automate compliance wherever possible. Compliance automation takes humans out of the equation and makes the steps involved in risk mitigation, like capturing consent, a built-in part of the process. In one such example, a caller might be automatically prompted to consent to call recording before being put through to a live agent.
Not only does compliance automation remove manual processes and reduce the risk of error, it automatically documents every step taken in the process, creating a paper trail that companies can refer back to if needed in the case of an audit or regulatory issue.
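The call-recording scenario above can be sketched as follows. This is an illustrative outline, not a real telephony API; the function names and log schema are assumptions. The key ideas are that consent is captured before recording ever starts, and that every step is timestamped into an audit trail automatically:

```python
from datetime import datetime, timezone

audit_log = []  # in practice: durable, append-only storage

def log_event(call_id: str, event: str, detail: str) -> None:
    # Every compliance step is documented automatically, creating
    # the paper trail an auditor could later refer back to.
    audit_log.append({
        "call_id": call_id,
        "event": event,
        "detail": detail,
        "at": datetime.now(timezone.utc).isoformat(),
    })

def start_recording(call_id: str, caller_consented: bool) -> bool:
    # Consent capture is built into the process, not left to an agent.
    log_event(call_id, "consent_prompt", "played automated consent message")
    if caller_consented:
        log_event(call_id, "recording_started", "caller consented")
        return True
    log_event(call_id, "recording_skipped", "caller declined consent")
    return False

recorded = start_recording("call-001", caller_consented=True)
```

Because the consent prompt and the logging live inside `start_recording` itself, there is no manual path where an agent could forget either step.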
- Failing to quantify results
One of the potential pitfalls of AI technology stems from all of the buzz around it; because it’s become such a big part of the conversation, some companies run the risk of investing in AI purely for AI’s sake. This would be a mistake and a costly one at that.
There are plenty of studies out there that support the business case for an AI strategy, but far fewer resources on how to actually quantify those benefits. Without hard benchmarks, measuring the impact of AI is more challenging than it is for other business investments. Still, finding ways to measure and assess AI’s influence is a necessary undertaking to avoid needless spending.
Before investing in AI, clearly define the benefits you hope to obtain or the goals you hope to achieve as a result of it. Soft goals like ‘achieve better usability for agents’ or ‘encourage more positive attitudes toward virtual agents’ can be useful for demonstrating impact in areas that aren’t easily quantified.
- Taking a ‘set it and forget it’ approach
Perhaps the biggest challenge of all when it comes to AI in business is the danger of adopting a ‘set it and forget it’ mentality. Just because the system can “think” for itself doesn’t mean it can run without human intervention. Without careful monitoring and correction, any flaws in an AI system are likely to get worse, not better, with time.
Thus, it’s imperative to establish a framework that facilitates collaboration: between people and technology, data scientists and business analysts, in-house and third-party teams, all working together and optimizing over time for a successful AI implementation.