Governments around the world are increasingly debating how to regulate artificial intelligence. Among the most ambitious of the proposed regulations is the Artificial Intelligence Act currently making its way through the European Union’s legislative sausage-making. In the U.S., the Federal Trade Commission has issued a number of warnings about the controls a company should have in place if it is using algorithms to make decisions, and the agency has said it plans to begin rulemaking on the technology. But it is one thing to make new laws. It is another to be able to enforce them. Bryce Elder, a journalist with the Financial Times, makes this point in a well-argued opinion piece in the newspaper’s “Alphaville” section this week. Elder points out that the industry that is in many ways furthest along in deploying autonomous systems is finance, where firms have embraced algorithmic trading for more than two decades and are now increasingly replacing static, hard-coded algorithms with ones created through machine learning. Algorithms account for as much as 75% of U.S. equities trading volumes, and 90% on foreign exchanges, according to a 2018 SelectUSA study.
There are stringent rules on the books in most jurisdictions about these algorithms: European Union law requires that they be thoroughly tested before being set loose, with firms asked to certify that their trading bots won’t cause market disorder and that they will continue to operate correctly even “in stressed market conditions.” It also specifies that humans at the trading firms using the algorithms bear ultimate responsibility should the software run amok. Trading venues are also held responsible for ensuring market participants have tested their algorithms to this standard.
But as Elder points out, enforcement is patchy at best. The system relies heavily on self-certification by the trading firms deploying the algorithms. Worse, there are no standard testing mechanisms specified. Compliance is low, with industry consultant TraderServe estimating that fewer than half of all firms have stress-tested their algorithmic trading strategies to the appropriate level. In the U.S., there have been some record-breaking fines for market abuse using algorithms, including the $920 million settlement JPMorgan Chase agreed to pay in 2020 for manipulating the metals markets. But in Europe, there have been no equivalent enforcement actions.
Given this record, says Elder, “good luck with self-driving cars.” He could say the same thing about A.I. more broadly. Self-driving cars, despite the hype surrounding them, are still years—and maybe even decades—away from broad deployment on our roads. But autonomous software is making rapid inroads into other areas, such as health care and medical imaging, where the stakes are literally life and death. And yet, as in finance, there are very few rules governing exactly how rigorously these systems must be tested. The European A.I. Act says that such high-risk uses of A.I. should be held to stricter standards, with the firms deploying them needing to conduct risk assessments. Sounds good on paper. But making sure firms comply is another matter altogether.
Jeremy Kahn
@jeremyakahn
jeremy.kahn@fortune.com

Correction: In last week’s newsletter, I misspelled the last name of Adept co-founder and CEO David Luan and the first name of Adept co-founder and CTO Niki Parmar. I apologize for the errors.

And before we get to this week’s A.I. news, Fortune has a new vertical launching this week: Fortune Well. It is dedicated to health and wellness, which are increasingly top-of-mind issues for both C-suite executives and rank-and-file employees. You can check it out here.
Sharing deepfake porn should be illegal, a top U.K. advisory body says. The Law Commission, an independent body that examines whether existing laws in Britain need to be overhauled, has recommended that the country adopt new laws specifically making the sharing of deepfake porn illegal. There is currently no single criminal offense that covers deepfake porn, said the commission, which has been studying the issue since 2019. Deepfakes are highly realistic images and videos created using A.I., and in many cases the technique has been used to graft the head of a woman who has never appeared in a pornographic film onto the body of a pornographic actress. More here in the Financial Times.
FIFA will use A.I. to help with offside calls during the 2022 World Cup. The international governing body for soccer has said it will use a combination of sensors, including one in the ball itself, and stadium-mounted cameras, along with machine learning software that can track 29 different points on players' bodies, to help determine if any of those players are offside during the 2022 World Cup in Qatar in November. Alerts from this system will be sent to officials in a nearby control room, who will validate the decision and tell referees on the field what call to make, according to a story in tech publication The Verge.
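For a rough sense of what such a system has to compute, here is a minimal sketch of the offside check itself, given tracked positions at the moment the ball sensor registers a pass. Everything in it (the class, the function, the data layout) is hypothetical and greatly simplified; FIFA's actual software is not public.

```python
# Hypothetical sketch: flag attackers in an offside position from tracked data.
# Assumes x increases toward the defending team's goal line and that each
# player's tracked body points have already been reduced to x-coordinates.
from dataclasses import dataclass
from typing import List

@dataclass
class TrackedPlayer:
    team: str                 # "attacking" or "defending"
    keypoints_x: List[float]  # x-coordinates of tracked body points (metres)

def potential_offsides(players: List[TrackedPlayer],
                       ball_x: float,
                       halfway_x: float) -> List[TrackedPlayer]:
    """Return attackers whose furthest-forward tracked point is beyond the ball,
    beyond the second-last defender, and inside the opponents' half at the
    moment the pass is played (hands and arms are ignored here for simplicity)."""
    defender_fronts = sorted(
        (max(p.keypoints_x) for p in players if p.team == "defending"),
        reverse=True,
    )
    # The defender closest to the goal line is usually the goalkeeper;
    # the offside law is judged against the second-last defender.
    second_last_x = defender_fronts[1] if len(defender_fronts) >= 2 else float("-inf")

    flagged = []
    for p in players:
        if p.team != "attacking":
            continue
        furthest = max(p.keypoints_x)
        if furthest > halfway_x and furthest > ball_x and furthest > second_last_x:
            flagged.append(p)
    return flagged
```

In the workflow FIFA describes, any players flagged this way would be passed to the officials in the control room to validate, rather than the software making the call on its own.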
DeepMind sets up partnership with Francis Crick Institute to apply machine learning to genomics and protein structure. DeepMind, the A.I. research company owned by Alphabet, has set up a partnership with one of the U.K.'s top biomedical research labs, the Francis Crick Institute in London. The deal will see DeepMind establish a lab within the Crick to build machine learning models "to understand and design biological molecules," according to a press release from the two organizations. The lab will also work on genomics. The idea is that biologists at the Crick will be able to experimentally test various designs or hypotheses developed by the A.I. systems that DeepMind's team builds.
Chinese researchers say they can read people's thoughts with A.I., but the world cringes at the totalitarian vibe. A.I. researchers at an institute in Hefei, in China’s Anhui province, say they have developed software that can gauge how loyal people are to the ruling Communist Party by analyzing their facial expressions as they read Communist Party materials online. But the claims sparked immediate outcry, both internationally and among many Chinese citizens. Many international A.I. researchers say they doubt the technology works as well as the Chinese scientists say. But, even if the claims are true, there was widespread concern that the technology would reinforce the increasingly totalitarian control the Chinese government exercises. The Voice of America has more on the story.
Ian Goodfellow, a top A.I. researcher credited with inventing generative adversarial networks (GANs), the deep learning technique behind deepfakes and many other advances in generating synthetic images and data, has joined DeepMind as a research scientist, according to a tweet Goodfellow posted. He had most recently been at Apple, but had balked at that company's post-Covid return-to-work policies.
Meta unveils new language translation system that boasts big improvements for "low-resource" languages. Machine translation has made massive leaps in recent years thanks to breakthrough A.I. algorithms and improved training methods. But for languages with relatively little written material available in electronic form on which to train an A.I. system, progress has been limited. Now Meta's A.I. researchers have created a system called "No Language Left Behind" (or NLLB for short) that can translate between 200 different languages, including tough low-resource languages such as Kamba, Lao, and a number of African languages. In an overall translation benchmark judging all of the languages the A.I. system supports, NLLB improved on existing state-of-the-art results by 44%. For some Indian and African languages the improvement was as great as 70%.
Meta has begun using NLLB on its own Facebook and Instagram services, and it has also made many of the NLLB translation models freely available as open-source software. The open-source models could help many other businesses better serve the populations that speak these low-resource languages, and could also allow speakers of those languages to better access global markets and services online. You can read Meta's blog post about the breakthrough translation system here.
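For readers who want to try the open-source models, usage would look roughly like the sketch below, assuming the checkpoints are published on the Hugging Face Hub under a name such as facebook/nllb-200-distilled-600M and use FLORES-200-style language codes (eng_Latn, kam_Latn, and so on); check Meta's release for the exact identifiers.

```python
# Minimal sketch of translating with an open-source NLLB checkpoint via the
# Hugging Face transformers library. The checkpoint name and language codes
# below are assumptions; verify them against Meta's release.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

MODEL_NAME = "facebook/nllb-200-distilled-600M"  # assumed checkpoint name

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME, src_lang="eng_Latn")
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

text = "No language should be left behind."
inputs = tokenizer(text, return_tensors="pt")

# Force the decoder to start in the target language (here Kamba, one of the
# low-resource languages Meta highlights).
target_lang_id = tokenizer.convert_tokens_to_ids("kam_Latn")
output_ids = model.generate(**inputs, forced_bos_token_id=target_lang_id, max_length=64)

print(tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0])
```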
To solve the water crisis, companies are increasingly turning to A.I.—by Tony Listra
Amazon gives its smart shopping carts an upgrade and expands its checkout-free tech to a college football stadium—by Marco Quiroz-Gutierrez
Elon Musk claims Neuralink’s brain implants will ‘save’ memories like photos and help paraplegics walk again. Here’s a reality check—by Jeremy Kahn, Jonathan Vanian, and Mahnoor Khan
Europeans could be cut off from Facebook and Instagram as soon as September—and TikTok may be next on the block—by David Meyer
Is scale the secret to more powerful, advanced A.I.? There is certainly an entire camp of A.I. researchers who think so. Among the most prominent proponents of this view is Ilya Sutskever, the chief scientist at OpenAI. But other believers in the bigger-will-be-better approach to building more capable A.I. can be found scattered throughout most of the world's top A.I. research labs. A whole different group thinks that scale alone isn't the secret to getting us closer to artificial general intelligence (AGI)—the kind of A.I. you know from science fiction that can perform most cognitive tasks as well as or better than a human. This school thinks that today's A.I. is woefully inefficient compared to the human brain, both in its power consumption and in its ability to learn from only a handful of examples. These researchers believe more fundamental algorithmic breakthroughs are needed to get us to the lofty goal of AGI.
Now several researchers affiliated with New York University have created something they are calling the "Inverse Scaling Prize." It is a contest to find tasks where the performance of A.I. systems actually decreases as the size of the A.I. model grows. One known example: the increasingly popular ultra-large language models do better on overall benchmarks the bigger they get, but bigger models are also far more likely to output racist, sexist, homophobic, or otherwise stereotyped language. The prize aims to give researchers an incentive to find more such examples.
The winner of the competition will receive $100,000, with up to five second prizes of $20,000 each and up to 10 third prizes of $5,000 each also being awarded. You can read more about the contest here.
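To make the idea concrete, here is a minimal sketch of what "inverse scaling" means in practice: run the same task against models of increasing size and check whether accuracy trends downward. The evaluate_task function is a hypothetical stand-in for whatever benchmark harness a contestant would actually use, and the numbers in the example are made up.

```python
# Illustrative sketch of testing a task for inverse scaling. evaluate_task is a
# hypothetical callable that returns accuracy on the task for a given model size.
from typing import Callable, List

def shows_inverse_scaling(model_sizes: List[int],
                          evaluate_task: Callable[[int], float]) -> bool:
    """Return True if task accuracy strictly decreases as model size grows."""
    scores = [evaluate_task(size) for size in sorted(model_sizes)]
    return all(later < earlier for earlier, later in zip(scores, scores[1:]))

# Example with made-up numbers: a task that gets worse as models grow.
fake_scores = {125_000_000: 0.71, 1_300_000_000: 0.64, 13_000_000_000: 0.52}
print(shows_inverse_scaling(list(fake_scores), fake_scores.get))  # True
```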