In a policy document published this week, 23 AI experts, including two modern "godfathers" of the technology, said governments must be allowed to halt development of exceptionally powerful models. Gillian Hadfield, a co-author of the paper and the director of the Schwartz Reisman Institute for Technology and Society at the University of Toronto, said AI models being built over the next 18 months would be many times more powerful than those already in operation. "There are companies planning to train models with 100x more computation than today's state of the art, within 18 months," she said. "No one knows how powerful they will be. And there's essentially no regulation on what they'll be able to do with these models."
The paper, whose authors include Geoffrey Hinton and Yoshua Bengio — two winners of the ACM Turing award, the "Nobel prize for computing" — argues that powerful models must be licensed by governments and, if necessary, have their development halted. "For exceptionally capable future models, eg models that could circumvent human control, governments must be prepared to license their development, pause development in response to worrying capabilities, mandate access controls, and require information security measures robust to state-level hackers, until adequate protections are ready." The unrestrained development of artificial general intelligence, the term for a system that can carry out a wide range of tasks at or above human levels of intelligence, is a key concern among those calling for tighter regulation.

Further reading: AI Risk Must Be Treated As Seriously As Climate Crisis, Says Google DeepMind Chief