Mira Murati, the Chief Technology Officer at OpenAI, has recently expressed her concerns about the potential addictive nature of artificial intelligence (AI). She warns that AI, if improperly designed, could become far more addictive than modern social media platforms and could potentially “enslave” humanity.
During the Atlantic Festival, Murati highlighted the “significant risk” of steering AI development in the wrong direction. She said, “With the enhanced capability comes the possibility that we design them in the wrong way, and they become extremely addictive, and we sort of become enslaved to them.”
She further stressed the importance of identifying the risks associated with such addictive technology and suggested that researchers need to understand how people would respond to potential addictions. “We really don’t know out of the box. We have to discover, we have to learn, and we have to explore,” she said. She emphasized the need for balance, stating, “I think about it in terms of trade-offs: How much value is this technology providing in the real world and how much we mitigate the risks.”
In an attempt to align AI with human values, Murati explained the use of reinforcement learning with human feedback. “What we were trying to really achieve was to align the AI system to human values and get it to receive human feedback, and based on that human feedback, it would be more likely to do the right thing [and] less likely to do the thing that you don’t want it to do,” she said.
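Murati does not describe the implementation, but the reward-modeling step she alludes to is often built on a preference-ranking loss: human raters pick which of two model responses is better, and a reward model is trained so that it scores the preferred response higher. A minimal sketch of that loss (the function name and example scores here are illustrative, not from OpenAI’s code):

```python
import math

def preference_loss(r_chosen: float, r_rejected: float) -> float:
    """Bradley-Terry style ranking loss used to train RLHF reward models.

    r_chosen:   reward-model score for the response humans preferred
    r_rejected: reward-model score for the response humans rejected

    The loss is small when the model already scores the preferred
    response higher, and large when it disagrees with the human rater.
    """
    return -math.log(1.0 / (1.0 + math.exp(-(r_chosen - r_rejected))))

# Agreement with the human rater -> small loss
print(round(preference_loss(2.0, 0.5), 4))  # 0.2014
# Disagreement with the human rater -> large loss
print(round(preference_loss(0.5, 2.0), 4))  # 1.7014
```

Minimizing this loss over many human comparisons nudges the reward model toward human values; that reward signal is then used to fine-tune the AI system so it is, as Murati put it, “more likely to do the right thing.”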
Earlier this year, Microsoft announced a significant investment in OpenAI to “accelerate AI breakthroughs to ensure these benefits are broadly shared with the world.” The tech giant outlined its primary focus areas for advancing AI technology, which included adding functions across applications and using OpenAI’s existing technology to build future apps.
The company also intends to continue “supercomputing at scale,” increasing investments in the development and deployment of specialized supercomputing systems to expedite OpenAI’s independent AI research.