Humans must police AI, argue Henry Kissinger, Eric Schmidt and MIT computing dean

Humans must police AI to make sure it aligns with our moral values and doesn’t displace us, argue former Secretary of State Henry Kissinger, former Google CEO Eric Schmidt and the MIT dean of computing

  • Kissinger, Schmidt and Daniel Huttenlocher penned an op-ed on Monday
  • They called for a government commission to regulate advancement of AI
  • They warned that AI is ‘obviating the primacy of human reason’
  • Argued that the ‘philosophical’ ramifications of AI should be considered

Former Secretary of State Henry Kissinger and former Google CEO Eric Schmidt have joined with an MIT professor to call for a government commission to regulate the development of Artificial Intelligence.

Kissinger, Schmidt and Daniel Huttenlocher, dean of the Schwarzman College of Computing at MIT, shared their arguments in an op-ed published on Monday in the Wall Street Journal. 

In it, they warned that AI has the potential to raise profound existential and philosophical questions about ‘the primacy of human reason’ and the role of humans in the world.

The trio called for the establishment of a commission tasked with ‘shaping AI with human values, including the dignity and moral agency of humans.’ 

Left to right: Henry Kissinger, Eric Schmidt and Daniel Huttenlocher, dean of the Schwarzman College of Computing at MIT, shared their arguments in an op-ed published on Monday

‘In the U.S., a commission, administered by the government but staffed by many thinkers in many domains, should be established. The advancement of AI is inevitable, but its ultimate destination is not,’ they wrote.  

The three men argued that the development of AI raises important questions about the nature of creativity and the role of humans in the world.

‘If an AI writes the best screenplay of the year, should it win the Oscar? If an AI simulates or conducts the most consequential diplomatic negotiation of the year, should it win the Nobel Peace Prize? Should the human inventors?’ they wrote.

‘For all of history, humans have sought to understand reality and our role in it,’ the essay stated.

‘Now AI, a product of human ingenuity, is obviating the primacy of human reason: It is investigating and coming to perceive aspects of the world faster than we do, differently from the way we do, and, in some cases, in ways we don’t understand,’ it continued. 


Last month, the White House Office of Science and Technology Policy called for the creation of a ‘bill of rights’ to guard against abuses of AI.

‘Our country should clarify the rights and freedoms we expect data-driven technologies to respect,’ wrote OSTP Director Dr. Eric Lander and OSTP Deputy Director for Science & Society Dr. Alondra Nelson in an op-ed.

‘In a competitive marketplace, it may seem easier to cut corners,’ they added.

‘But it’s unacceptable to create AI systems that will harm many people, just as it’s unacceptable to create pharmaceuticals and other products—whether cars, children’s toys, or medical devices—that will harm many people.’

In recent years, the Federal Trade Commission has also tried to regulate certain applications of AI in lending decisions.

Many have also raised concerns about the potential for racial bias in AI systems.

On Wednesday, Black Lives Matter co-founder Opal Tometi urged the tech sector to act quickly to avoid perpetuating racism in systems such as facial recognition.

‘A lot of the algorithms, a lot of the data is racist,’ the U.S. activist who co-founded BLM told Reuters on the sidelines of Lisbon’s Web Summit.

‘We need tech to truly understand every way it (racism) shows up in the technologies they are developing,’ she said. 
