PopTika // Shutterstock

Artificial intelligence is technology built and programmed to help computer systems mimic human behavior. Algorithm training informed by experience and iterative processing allows the machine to learn, improve, and ultimately apply human-like reasoning to complex problems. Although there are several ways computers can be “taught,” reinforcement learning, in which the AI is rewarded for desired actions and penalized for undesirable ones, is one of the most common. This method, which allows the AI to become smarter as it processes more data, has been highly effective, especially in gaming.

AI can filter email spam, categorize and classify documents based on tags or keywords, launch or defend against missile attacks, and assist in complex medical procedures. However, if people feel that AI is unpredictable and unreliable, collaboration with the technology can be undermined by an inherent distrust of it. Diversity-informed algorithms can detect nuanced communication and distinguish behavioral responses, which could inspire more faith in AI as a collaborator rather than merely a gaming opponent.

Stacker assessed the current state of AI, from predictive models to learning algorithms, and identified the capabilities and limitations of automation in various settings. Keep reading for 15 things AI can and can’t do, compiled from sources at Harvard and the Lincoln Laboratory at MIT.

Can: Be trained and ‘learn’

Ground Picture // Shutterstock

AI combines data inputs with iterative processing algorithms to analyze and identify patterns. With each round of new inputs, AI “learns” through the deep learning and natural language processing built into its training algorithms. AI rapidly analyzes, categorizes, and classifies millions of data points, and it gets smarter with each iteration. Learning through feedback from accumulated data differs from traditional human learning, which is generally more organic. After all, AI can mimic human behavior but cannot create it.

Cannot: Get into college

Bas Nastassia // Shutterstock

AI cannot answer questions that require inference, a nuanced understanding of language, or a broad grasp of multiple topics. In other words, while scientists have managed to “teach” AI to pass standardized eighth-grade and even high school science tests, it has yet to pass a college entrance exam. Such exams demand greater logic and language capacity than AI currently possesses, and they often include open-ended questions in addition to multiple choice.

Can: Perpetuate bias

Proxima Studio // Shutterstock

The majority of employees in the tech industry are white men. And since AI is essentially an extension of those who build it, biases can (and do) emerge in systems designed to mimic human behavior. Only about 25% of computer jobs and 15% of engineering jobs are held by women, according to the Pew Research Center, and fewer than 10% of people employed by industry giants Google, Microsoft, and Meta are Black. This lack of diversity becomes increasingly magnified as AI “learns” through iterative processing and communication with other devices and bots. With chatbots increasingly repeating hate speech and facial recognition systems failing to recognize people with darker skin tones, diversity training is necessary.
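To make the mechanics of that bias concrete, here is a minimal, hypothetical sketch (plain Python, standard library only) of how a skewed training set can tilt a simple detector against an underrepresented group. The one-number “face score,” the group means, and the sample counts are illustrative assumptions, not a description of any real recognition system.

```python
# Hypothetical illustration: a detector tuned on skewed data vs. balanced data.
# All numbers are made up for demonstration purposes.
import random

random.seed(42)

def face_scores(n, mean):
    """Simulate detector confidence scores for n face images from one group."""
    return [random.gauss(mean, 0.5) for _ in range(n)]

def nonface_scores(n):
    """Simulate detector confidence scores for n images with no face in them."""
    return [random.gauss(0.0, 0.5) for _ in range(n)]

def best_threshold(pos, neg):
    """Pick the cutoff that maximizes plain accuracy on the training set."""
    candidates = [x / 100 for x in range(-100, 301)]  # scan -1.00 .. 3.00
    def accuracy(t):
        hits = sum(s >= t for s in pos)       # faces correctly flagged
        rejections = sum(s < t for s in neg)  # non-faces correctly ignored
        return (hits + rejections) / (len(pos) + len(neg))
    return max(candidates, key=accuracy)

def miss_rate(scores, t):
    """Fraction of real faces the detector fails to flag at threshold t."""
    return sum(s < t for s in scores) / len(scores)

# Held-out test faces: group A gives a strong signal, group B a weaker one.
group_a_test = face_scores(2000, 2.0)
group_b_test = face_scores(2000, 1.0)

for label, n_a, n_b in [("skewed 95/5", 950, 50), ("balanced 50/50", 500, 500)]:
    train_pos = face_scores(n_a, 2.0) + face_scores(n_b, 1.0)
    train_neg = nonface_scores(n_a + n_b)
    t = best_threshold(train_pos, train_neg)
    print(f"{label}: threshold={t:.2f}  "
          f"missed faces, group A: {miss_rate(group_a_test, t):.1%}  "
          f"group B: {miss_rate(group_b_test, t):.1%}")
```

With these made-up numbers, the threshold fit to the skewed training set misses roughly half of group B’s faces while barely missing any from group A; the threshold fit to the balanced set closes much of that gap at the cost of a few more false alarms. The same pattern, on a far larger scale, is what drives the real-world failures described above.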
Can: Identify images and sounds

Zephyr_p // Shutterstock

Unstructured data such as images, sounds, and handwriting makes up around 90% of the information companies receive, and AI’s ability to recognize it has almost unlimited applications, from medical imaging to autonomous vehicles to digital and video facial recognition and security. With the potential for this kind of autonomous power, diversity training is an imperative addition to university-level STEM pedagogy, where more than 80% of instructors are white men, to improve diversity in hiring practices and, in turn, in AI.

Cannot: Drive a car

Andrey_Popov // Shutterstock

Even with so much advanced automotive innovation, self-driving cars cannot yet reliably and safely handle busy roads. This means that full autopilot for passenger cars is likely still a long way off. Following a number of accidents, the industry is focusing on testing and development rather than pushing for full-scale commercial production.

Cannot: Judge beauty contests