Operational risks of AI include algorithmic bias, security vulnerabilities and programming errors that can produce misleading results.
Algorithmic bias
Machine-learning algorithms identify patterns in data and codify them in predictions, rules and decisions. If those patterns reflect an existing bias, machine-learning algorithms are likely to amplify that bias and may produce outcomes that reinforce existing patterns of discrimination.
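A minimal sketch in Python of how this can happen, assuming a hypothetical hiring dataset and scikit-learn (the group, skill and hiring variables are invented for illustration): the historical decisions used as training labels demanded a higher skill level from one group, and the fitted model codifies that preference in its predictions.

    # Hypothetical data: historical hiring decisions were biased against group 1.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 1_000
    group = rng.integers(0, 2, n)       # 0 or 1, assigned at random
    skill = rng.normal(size=n)          # skill is distributed identically in both groups
    # Biased historical rule: group 1 needed a higher skill level to be hired.
    hired = (skill > np.where(group == 0, 0.0, 1.0)).astype(int)

    model = LogisticRegression().fit(np.column_stack([group, skill]), hired)

    # Equally skilled candidates now receive different predicted outcomes by group,
    # so the learned rule perpetuates the historical bias.
    test_skill = np.full(100, 0.5)
    for g in (0, 1):
        X = np.column_stack([np.full(100, g), test_skill])
        print(f"group {g}: predicted hire rate = {model.predict(X).mean():.2f}")

The model is not malfunctioning here; it is faithfully reproducing a discriminatory pattern that was already present in its training labels.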
Cyber attacks
AI systems can be trained to detect, monitor and repel cyber attacks. They identify software with certain distinguishing features – for example, a tendency to consume large amounts of processing power or transmit large volumes of data – and then take action to close down the attack.
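An illustrative sketch of this kind of detection, assuming hypothetical per-process metrics (CPU usage and outbound traffic) and using an off-the-shelf anomaly detector from scikit-learn rather than any particular vendor's system:

    # Hypothetical metrics, not a production detector: flag processes whose CPU use
    # and outbound traffic look anomalous relative to a baseline of normal behaviour.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(1)
    # Baseline observations: [cpu_percent, mb_sent_per_min] for normal processes.
    normal = np.column_stack([rng.normal(20, 5, 500), rng.normal(2, 1, 500)])

    detector = IsolationForest(contamination=0.01, random_state=1).fit(normal)

    # New observations: two ordinary processes and one that burns CPU and sends a lot of data.
    observed = np.array([[22.0, 1.5],
                         [18.0, 2.8],
                         [95.0, 300.0]])   # suspicious: high CPU, heavy transmission
    flags = detector.predict(observed)     # -1 = anomaly, 1 = normal
    for row, flag in zip(observed, flags):
        status = "ANOMALY - investigate / close down" if flag == -1 else "normal"
        print(f"cpu={row[0]:5.1f}%  traffic={row[1]:6.1f} MB/min  -> {status}")

Real defensive systems combine many more signals, but the underlying idea is the same: learn what normal behaviour looks like and act on deviations from it.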
Programmatic errors
Errors are a common problem with all computer programs and AI is no exception. Where errors exist, algorithms may not perform as expected and may deliver misleading results that have serious consequences – for example, the integrity of an organization’s financial reporting could be compromised.
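A deliberately simple, hypothetical illustration of how a small coding error can silently misstate a reported figure; the transactions and function names are invented for the example:

    # Hypothetical transactions; nothing here reflects any real reporting system.
    transactions = [
        {"type": "sale",   "amount": 1200.00},
        {"type": "refund", "amount":  200.00},
        {"type": "sale",   "amount":  800.00},
    ]

    def reported_revenue(txns):
        # Bug: refunds are added instead of subtracted, overstating revenue.
        return sum(t["amount"] for t in txns)

    def correct_revenue(txns):
        return sum(t["amount"] if t["type"] == "sale" else -t["amount"] for t in txns)

    print(reported_revenue(transactions))  # 2200.0  (misleading)
    print(correct_revenue(transactions))   # 1800.0  (what should be reported)

The program runs without any visible failure, which is what makes this class of error dangerous when its output feeds decisions or published figures.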