The incredible pace of improvement in programs powered by machine learning has raised a lot of big questions that the broad public is only now, belatedly, starting to discuss. Is it wise to build machines that might one day become smarter than us, and then decide to dispense with their half-witted creators? Conversely, as machines get smarter, would it be ethical to keep them in what amounts to bondage? What sorts of safeguards should we build into these programs? What is society to do about the coming tsunami of plausible fakes? How should policymakers prepare for what promises to be a dramatic reshaping of the economy?
All these questions are important. Collectively, they might well provoke the most important debates humanity has ever had. But those debates will take a long time, and in the meantime, individuals need to figure out how to answer the smaller questions posed by AI, such as “How do I, personally, protect myself?”
Hence the discussion about safewords. Voice-cloning technology has improved to the point where predators are using it to try to defraud unwary people by fooling them into thinking that a loved one is in serious trouble. As my husband and I were chatting about one of these cases, it occurred to me that the two of us, both journalists who have provided ample audio and video samples for a machine mimic, needed a shorthand way to say, “Yes, honey, it’s really me,” in case we actually are kidnapped, or arrested, or stranded in a foreign country without a wallet. (Especially since this last actually happened to me a few years ago.)
We’re probably not the only ones who should have this discussion, because the internet holds more information than people realize. Almost everyone now has video or audio of themselves out there, on their social media feed or someone else’s. And an emergency phone call from a panicked loved one is not a good time to try to figure out what details of life the two of you remember that the cloud does not.
Nor are these the only discussions families should have. You should also think about how you’d prove that a malicious fake isn’t you — say, audio of you apparently trash-talking the boss or cheating on your spouse.
Sorry to say, I have no good answer to this one yet, except: Now more than ever, it pays to establish a reputation for being extremely honest and trustworthy.
Speaking of bosses, job loss stands to become a bigger risk for many people than impersonation. No, I don’t think AI will render humans fully redundant anytime soon; most likely, as with earlier technological revolutions, it will destroy some jobs but not others, and over time workers will shift into the fields where humans maintain a comparative advantage.
But there is a difference between this technological revolution and most earlier ones, which tended to automate away rote, brute-force tasks. The large language models and their brethren, in contrast, threaten to automate more abstract, skilled parts of many jobs, leaving humans with the more boring tasks at which we still comparatively excel, including talking with one another and manipulating a variety of objects without crushing them. While machine-learning models turn out to be surprisingly good at medical diagnosis, for example, and will probably only get better, no one has yet invented a robot that can come close to replacing a home health-care aide.
Presumably, more people would prefer to diagnose illnesses than to change adult diapers. A shift away from one and toward the other would be wrenching at both a social and a personal level.
Still, it is not all doom and gloom. AI may automate the rote parts of many skilled jobs, leaving the rest of us to focus on the more fun, creative bits. But at the very least, workers should prepare for their job descriptions to be rewritten and, possibly, eliminated.
Start by taking a fearless inventory: What parts of your job could a machine-learning model do — right away, or after a few more high-speed iterations? How quickly could your company or its competitors make such a shift? Could you survive it by learning to use ChatGPT and tools like it to make yourself more productive, differentiating yourself from the obsolete herd? If not, what sorts of jobs could you retrain for that aren’t likely to be outsourced to a computer anytime soon?
I suspect many of us have been avoiding these discussions for the same reason that society got a late start on deciding how to address AI risks: It’s a little scary (if also somewhat exhilarating), and it’s not obvious what to do. On the other hand, it’s quite clear what we should not do: pretend that the change already upon us isn’t happening.