Will Smart Homes Be a New Target for Subliminal Messaging?

Smart home devices made waves recently after a report that one sent the audio of a conversation to a user’s contact without the user’s knowledge. It was an innocuous discussion about hardwood flooring, and the errant recording was the result of a simple speech-recognition mistake that can happen when a smart speaker is listening for its key phrase: “Alexa,” or “Hey, Siri,” or “Okay, Google.” But a recent research paper suggests that smart speakers—which can control everything from light switches to front doors to bank accounts—may be susceptible to intentional hijacking.

Researchers at Berkeley published a paper claiming that they could embed voice commands in a music or speech recording that would, for example, make your smartphone visit a website without your knowledge or consent. The trick is that ordinary human listeners cannot discern the commands; only the machines can. If the researchers are correct, third parties could control or influence a device and its functions without accessing the device’s physical or logical controls. This sort of activity may be “hacking” that violates the Computer Fraud and Abuse Act’s (CFAA) prohibition on unauthorized access to protected computers, described below. And it raises other novel issues to watch for.
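For readers curious about the mechanics, the sketch below is a purely illustrative toy, not the Berkeley researchers’ method or code. It assumes a made-up stand-in recognizer (toy_recognizer) and shows only the general idea the paper describes: searching for an audio perturbation small enough that listeners are unlikely to notice it, but large enough to push a recognizer toward an attacker-chosen command.

```python
# Illustrative toy only -- NOT the researchers' method. A real attack would target
# an actual speech-to-text model (typically with gradient-based optimization);
# here a fixed random linear "recognizer" stands in so the sketch is self-contained.
import numpy as np

rng = np.random.default_rng(0)
N_SAMPLES = 16000                              # one second of fake audio at 16 kHz
WEIGHTS = rng.standard_normal((3, N_SAMPLES))  # fixed stand-in "model" parameters

def toy_recognizer(audio: np.ndarray) -> np.ndarray:
    """Stand-in for a speech recognizer: returns scores for three fake 'commands'."""
    return WEIGHTS @ audio

def craft_perturbation(audio, target_cmd, eps=0.005, steps=200, step_size=0.01):
    """Greedy random search for a perturbation bounded by eps (so it stays quiet)
    that raises the target command's score. Purely conceptual."""
    delta = np.zeros_like(audio)
    best = toy_recognizer(audio + delta)[target_cmd]
    for _ in range(steps):
        candidate = np.clip(delta + step_size * rng.standard_normal(audio.size),
                            -eps, eps)
        score = toy_recognizer(audio + candidate)[target_cmd]
        if score > best:
            best, delta = score, candidate
    return delta

song = rng.standard_normal(N_SAMPLES)              # fake "music" carrier signal
delta = craft_perturbation(song, target_cmd=2)
print("max |perturbation|:", np.abs(delta).max())  # tiny relative to the audio
print("target score before vs. after:",
      toy_recognizer(song)[2], toy_recognizer(song + delta)[2])
```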

What concerns does the Berkeley paper raise?

As is often the case, new technology can outpace relevant legal frameworks. Hacking connected devices constitutes a violation of law, but the activity contemplated by the Berkeley researchers implicates another set of issues that may escape direct regulation. One of the Berkeley researchers said of his role in the study, “My assumption is that the malicious people already employ people to do what I do.” It is conceivable that this kind of undetectable messaging could, as the New York Times pointed out, be used in an innocuous YouTube video to add something to your shopping list without your knowledge, and it is unclear whether existing legal regimes proscribe this kind of manipulation.

We will get to interesting questions about the CFAA below, but first, what about the message itself? Using a signal undetectable to humans bears a clear resemblance to subliminal messaging, which the Bureau of Alcohol, Tobacco, and Firearms (ATF) defines as “images or sounds of a very brief nature that cannot be perceived at a normal level of awareness.”

Isn’t subliminal messaging illegal?

Perhaps surprisingly, no federal law prohibits the use of subliminal messaging in media, other than ATF’s regulation regarding alcohol advertising. The Federal Communications Commission has discouraged subliminal messaging in advertising as “contrary to the public interest,” and the Federal Trade Commission, which enforces laws against deceptive trade practices, has stated it “would be deceptive for marketers to embed ads with so-called subliminal messages that could affect consumer behavior,” but neither statement is binding. Courts similarly have had little opportunity to rule on the legality of subliminal messaging: in 1990, the Nevada District Court determined that subliminal messaging generally is not speech and that, even if it were, it would invade listeners’ privacy and thus fall outside the First Amendment’s protection. The few other court cases, usually brought against musicians who allegedly embedded subliminal messages in songs, have looked back to this reasoning.

But even these scant legal frameworks have dealt only with subliminal messaging aimed at influencing human behavior, not technological behavior. According to the FTC and others, there is little proof that subliminal messaging to humans even works outside a lab. The stereotypical advertising gimmick of flashing images of refreshments between movie frames seeks to manipulate consumer behavior, but tricking speech-recognition technology subverts the consumer altogether. As smart speakers, and the smart homes they are connected to, become more capable, the range of functions that encoded messages could remotely control grows with them. Subliminal messaging aimed at machines may be able to access those devices without their authorized users noticing, posing a significant security risk.

Will the Computer Fraud and Abuse Act and analogous state laws apply to the surreptitious use of embedded messages?

Federal law regulates unauthorized access to computers. The Computer Fraud and Abuse Act (CFAA) prohibits, among other things, accessing protected computers without or in excess of authorization in order to cause damage, to defraud, or to obtain data or something else of value.  Although the CFAA was written before subliminal messaging attacks could have been a concern, the language in the statute is arguably broad enough to encompass that activity, depending on how it was conducted and what its effect was.

To the extent that transmitting audio commands to a device constitutes “access” to a computer, there are multiple ways that embedded messages may violate the CFAA. First, the CFAA prohibits “the transmission of a program, information, code or command” that damages a computer. 18 U.S.C. §§ 1030(a)(2), (5). Damage is broadly defined as causing “any impairment to the integrity or availability of data, a program, a system, or information.” § 1030(e)(8). A bad actor also potentially could violate the CFAA by transmitting commands to a connected device with “the intent to defraud” if the transmission furthers the fraud and the actor obtains more than $5,000 in value. § 1030(a)(4). Moreover, simply obtaining information from a protected computer via unauthorized access is illegal under the statute. Courts have dealt with all manner of litigation over the CFAA’s terms as the government’s use of the statute has evolved, and they have often rejected the more expansive approaches to unauthorized access, including attempts to criminalize activity that merely exceeds the terms of service of the website being accessed.

Many states have complementary laws about computer tampering, which could be triggered by the use of embedded commands. Alabama has a criminal statute that outlaws “[a]ccessing and altering . . . any computer, computer system, or computer network,” “[d]isrupting or causing the disruption of a computer,” or “[p]reventing a computer user from exiting a site, computer system, or network-connected location.” Arizona, New York, and other states have similar provisions. A fundamental question, as with the CFAA, is whether a voice command, pre-recorded and transmitted by someone other than the owner of the computer or system, would be considered “access.” While that is certainly plausible, the answer will depend on the particular scenario involved and on how courts interpret the statutory language.

What comes next?

Tech companies are constantly identifying and addressing innovative misuses of technology, and they respond to reports of vulnerabilities. In response to the researchers’ report, Amazon told the New York Times it has “taken steps to ensure its Echo smart speaker is secure.” Apple’s HomePod user guide advises that unlocking security accessories, such as door locks, requires a passcode or other authentication in addition to a voice command.

The advances that may be possible with connected smart home devices are exciting for users as well as the companies that create them. Not surprisingly, technology may enable novel behavior—some nefarious—that does not fit neatly into existing legal regimes. The kind of manipulation that the Berkeley researchers and their colleagues describe should keep everyone on their toes. Courts and federal agencies have looked unfavorably on subliminal messaging, so we can expect authorities to take a similarly dim view of the behavior the researchers identified, which may also come within the broad reach of the CFAA.

As an aside, this activity, if conducted by so-called “ethical hackers,” may also fall within proposed exceptions to the CFAA for research activities. That, too, will depend on the activity involved and its effects.

With each new technology comes interesting research and speculation about possible misuses, and those misuses raise novel legal questions. Existing laws may offer protection in such circumstances, but policymakers should remain watchful as new technologies emerge. Rather than rushing to respond or regulate before a problem arises, they should manage issues as they appear, and only to the extent existing tools prove inadequate.
