The Hidden Bias in Machines
As image recognition technology continues to be refined, developers need to be conscious of the images they use in their solutions and the implications those images will have on their technology and, in Google's case, on society.
May 12, 2017
Vatsal Patel is Software Development Manager for Accusoft.
Michael Archambault is a Software Engineer for Accusoft.
Despite technology milestones in recent years, artificial intelligence (AI) can still suffer from unintended consequences. In 2016, Microsoft’s Twitter bot, Tay, tweeted racist and highly inappropriate statements after learning from her peers on the platform. Unable to overwrite the malicious information the bot had absorbed through social learning, Microsoft shut down Tay’s account, which still remains private.
Other incidents like Tay have also surfaced. One of the most notable involved Google’s image recognition software. Google Photos was accused of discrimination after labeling some non-white users as “gorillas.” Google claims the incident was unintentional, yet it remains a concern: an overlooked piece of code turned into an issue of race.
Unfortunately, algorithms like Microsoft’s and Google’s still depend on human input, and their context is limited by the algorithm’s parameters. This is why Tay could not tell truth from internet trolling, and why Google Photos could not differentiate some non-white users from gorillas. And this issue exists in more algorithms than we are aware of.
This machine-based bias stems from the point at which humans program artificial intelligence to automate machine learning. Because humans build the datasets used to train artificial intelligence, biases, limitations, and human error can affect the output; the fault lies in how humans train these machines from the beginning.
Machines Are the Products of Human Interaction
With AI, humans are the puppet masters. It is human input that guides machines as they process the information used to classify datasets. In its simplest form, AI compares unfamiliar inputs against a database of known values to arrive at the correct output. Just as in human learning, the more indexed images an algorithm is fed, the more accurate its processing becomes. If you train an algorithm with hundreds of cat photos, it will be able to classify a photo of a Siamese it has never seen before as “cat.” However, issues arise when algorithms are trained only with typical or perfect images captured in a controlled environment. If developers do not train these machines with data that represents diverse conditions, complications follow.
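To make the idea concrete, here is a minimal sketch of the kind of classification described above: a toy nearest-neighbor model that labels a new input by finding the closest example it was trained on. The feature values and labels are invented for illustration (this is not Google’s or Accusoft’s code), but the limitation it demonstrates is the real one: the model can only ever answer with labels that humans put into its training set, and an input unlike anything it has seen is still forced into one of those labels.

    import math

    # Hypothetical training set: each example is a tiny feature vector
    # paired with a human-assigned label. These labels are the ONLY
    # answers the model can ever give.
    training_data = [
        ((0.90, 0.80), "cat"),
        ((0.85, 0.75), "cat"),
        ((0.20, 0.30), "dog"),
        ((0.25, 0.35), "dog"),
    ]

    def classify(features):
        """Return the label of the closest known example (1-nearest-neighbor)."""
        nearest = min(training_data, key=lambda item: math.dist(item[0], features))
        return nearest[1]

    # A Siamese the model has never seen still lands near the "cat" examples.
    print(classify((0.88, 0.70)))  # -> "cat"

    # An input unlike anything in the training set is still forced into one of
    # the known labels -- the model has no way to say "I don't know."
    print(classify((0.50, 0.05)))  # -> "dog", confidently and possibly wrongly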
These issues can even affect unexpected applications of image processing software, like barcode recognition. Standard 1D barcodes consist of alternating black and white bars of varying widths that encode a value. Scanners read the code by measuring the widths of the bars and spaces and matching the pattern against a predefined set of parameters. If a bar is ambiguous due to poor lighting or print quality, the computer is unable to decipher the encoded data. In these cases, the computer can detect a variety of potential matches, but it needs additional information to identify the correct value.
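The mechanics of that matching step are easy to sketch. The toy decoder below measures relative bar widths and compares them against a table of known patterns; the patterns and tolerance are invented for illustration and are not a real symbology such as Code 128 or EAN-13. A clean scan yields a single candidate, while a smudged or poorly lit symbol yields several, and the software would need additional context to pick the right one.

    # Illustrative only: a made-up symbology where each digit is encoded as
    # four relative bar/space widths. Real barcode standards are more involved.
    PATTERNS = {
        (1, 1, 2, 1): "0",
        (2, 1, 1, 1): "1",
        (1, 2, 1, 1): "2",
        (1, 1, 1, 2): "3",
    }

    def decode_symbol(widths, tolerance=0.5):
        """Return every known pattern whose measured widths fall within tolerance."""
        candidates = []
        for pattern, value in PATTERNS.items():
            if all(abs(w - p) <= tolerance for w, p in zip(widths, pattern)):
                candidates.append(value)
        return candidates

    print(decode_symbol([1.0, 1.1, 2.0, 0.9]))  # clean scan    -> ['0']
    print(decode_symbol([1.5, 1.0, 1.5, 1.0]))  # degraded scan -> ['0', '1']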
A misread barcode can go beyond someone receiving an incorrect product from an online order. In hospitals, barcodes identify patients’ critical health information, like medication-specific allergies; an incorrect or partial scan could lead to serious consequences like anaphylactic shock or even death. Constantly having to correct machine mistakes leaves users vulnerable to these errors.
Accuracy Requires Holistic Inputs
Scanners are trained to recognize images using perfect examples, like a well-lit, clear photo. In reality, barcodes are often imperfect. Barcodes on shipping labels can easily become distorted in transit, leading to errors during processing. To preempt this, developers need to use a variety of conditions and expand the range of inputs when building algorithms.
In the case of barcode scanners, algorithms need to be trained on codes in imperfect condition. For applications like Google Photos, exposing the software to a diverse range of subjects allows it to identify them correctly and achieve the intended results. Like any good teacher, developers must create a realistic environment the computer can use to process and compare features. For example, a person may see both a tiger and a zebra and be able to differentiate the two based on the knowledge that they are different species. A computer that is not properly trained, however, will see the stripes and assume the two belong to the same classification. Humans know that it is illogical to classify a zebra and a tiger the same way, but a computer must be fed holistic inputs in order to clearly decipher the differences.
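One common way to provide that kind of exposure is to degrade clean training images on purpose, so the algorithm also learns from samples that look like real-world captures. The sketch below uses the Pillow imaging library; the specific transforms and parameter ranges are assumptions chosen for illustration, not a prescription for any particular product.

    import random
    from PIL import Image, ImageEnhance, ImageFilter

    def degrade(image):
        """Simulate imperfect capture conditions on a clean training image."""
        # Uneven or poor lighting.
        image = ImageEnhance.Brightness(image).enhance(random.uniform(0.5, 1.5))
        # The label is rarely squarely under the camera.
        image = image.rotate(random.uniform(-10, 10), expand=True, fillcolor=255)
        # Motion blur or an out-of-focus lens.
        image = image.filter(ImageFilter.GaussianBlur(radius=random.uniform(0, 2)))
        return image

    # Expand a pristine sample into many imperfect variants before training.
    clean = Image.new("L", (200, 80), color=255)  # stand-in for a clean scan
    augmented = [degrade(clean.copy()) for _ in range(20)]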
While it seems logical to create a comprehensive database with clear datasets, in reality most situations involve some ambiguity. AI-powered machines can be accurate when their algorithms encompass as many inputs as possible, but this is not a fix-all solution. More input can also expose the same biases present in humans, so how machines decipher the inputs and features is an essential factor. As image recognition technology continues to be refined, developers need to be conscious of the images they use in their solutions and the implications those images will have on their technology and, as in Google's case, on society.
Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Penton.