How science fiction helped write AI’s first rule book
The writer is vice-president of the European Commission for values and transparency
Shortly before the international artificial intelligence summit started at Bletchley Park last month, I went to grab a quick bite at a nearby pub. A man sat down opposite me and pulled out a crumpled paperback. On the cover was the name of Isaac Asimov, one of science fiction’s most famous authors. Surely, this was no coincidence. Asimov was ahead of his time when, during the mid-20th century, he anticipated the powerful role that AI would play in our lives. In his books, he envisaged a series of laws that would ensure robots did not hurt or harm humans, but would obey them.
That man with his book reminded me of the importance of us politicians securing a similar outcome as I headed into the summit to debate the future of safe AI. During my travels to London, Tokyo, Washington, Beijing and of course Brussels this past autumn, I pondered the fact that we were writing the world’s first rule book for regulating computer processes that are much faster and more powerful than humans.
Real business quickly overtook science fiction. With Chinese politicians in Beijing, I discussed their model legislation. It doesn’t differ so much from ours on the technical side of things, but rather in what it might contribute to state control over society. Representatives in the US, which has previously taken a rather unregulated approach, pointed to the Biden administration’s executive order on AI in late October. And closer to home, on behalf of the EU, I led negotiations in the G7 group of countries. There we were able to achieve a forerunner to binding legislation at a global level — a voluntary code for AI developers that builds in accountability for security and information sharing.
Europe has also responded swiftly to the demand for safe AI. The proposed framework of 2021 picked up pace as the urgent need to make the technology both safe and beneficial became evident. The so-called trialogue — the grand finale between the Spanish presidency, the parliament and the commission — lasted for 36 hours this month but eventually ended with a historic compromise.
The needs of individuals guided each and every paragraph. The act guarantees security and the protection of basic human rights in the face of superintelligent systems, which could in the long run prove better thinkers than us. We’ve come up with several risk categories for AI — low-risk ones include video games and algorithms that sort out our emails (things I’m sure we’d all benefit from). High-risk ones will have to meet stricter requirements, be they medical devices or systems that influence voter behaviour at the ballot box.
The list of the unacceptable includes that which threatens our fundamental human rights. This could include biometric categorisation systems based on religion or race, emotion recognition in the workplace, or the untargeted scraping of facial images from cameras in public places (exceptions will be made for national security issues).
But we are also aware of the potential rewards of safe AI and indeed want to make the EU a hub for it. That’s why we have decided to make our supercomputers available to European AI start-ups and SMEs. We will also invest more than €1bn a year in AI research from Horizon and Digital Europe.
Our political agreement is yet to be confirmed by the member states and the European parliament. The law will enter into force in phases, the full legislation provisionally set for 2026. In the meantime, AI will keep transforming all our lives. We will entrust it with many activities where it could replace humans, but not those in which it could take over our fundamental rights, such as free speech or the protection of intellectual property.
I have believed from the start that content created by AI must be labelled, so that human thinking and creativity would be left with something akin to “human copyright”. We are already learning how the technology can change our perceptions of reality and truth. Artificial intelligence works with the data it has at its disposal. It does not know what is true. And in a world where deepfakes can come out of nowhere, we are always in danger of losing our grasp on reality.
That’s what I was thinking about when the Englishman across the table reminded me of Asimov’s laws. These have now been transformed, along with other measures, into the first ever European legal norm, which may well become the basis for all similar regulations across the world. We must keep control of robots and artificial intelligence, to ensure truth and human rights can prevail in the future rather than becoming science fiction.