Andrew Tutt put out an interesting position paper arguing that we need the equivalent of an FDA for algorithms. The paper
surveys the diversity of algorithms that already exist and those soon to come. In the future, most algorithms will be “trained,” not “designed.” That means the operation of many algorithms will be opaque and difficult to predict in border cases, and responsibility for their harms will be diffuse and hard to assign. Moreover, although “designed” algorithms already play important roles in many life-or-death situations (from emergency landings to automated braking systems), “trained” algorithms will increasingly be deployed in these mission-critical applications.
It’s an interesting argument. Two things come to mind when I think about this:
- The FDA ultimately deals with drugs, which all operate on the body. Algorithms, by contrast, apply across many different domains, so regulating them would require far more varied domain expertise, and it might be hard to assemble that within a single agency.
- A regulatory agency is slow. The FDA has been slow to react to the demands of personalized medicine, especially for rare diseases, where the normal expectations of drug protocols may be impossible to meet. How would a regulatory agency be nimble enough to keep up with the even more rapidly changing landscape of algorithm design?