Smart home assistants function best when user commands are direct—e.g., "turn on the kitchen light"—or when hard-coded routines specify the right responses. However, more natural human communication is much less constrained, allowing us to describe goals (e.g., "make it cozy in here") rather than specific actions. Current smart homes cannot understand such commands, since acting on them requires creative reasoning about how devices and settings relate to human situations. We approach this problem using large language models (LLMs), exploring their potential to control devices and create automation routines in response to under-specified user commands. We analyze the quality and failure modes of LLM-created action plans in the base case and introduce a key metric—target relevance—for evaluating a model's performance at the task. Finally, we introduce Sasha, a smarter smart home assistant. Sasha responds to commands like "make it cozy" or "help me sleep better" by executing plans to achieve user goals—e.g., setting a mood with available devices, or devising automation routines. We demonstrate Sasha in a real-world implementation and user study, showing its capabilities and limitations when faced with unconstrained user-generated scenarios.
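The core loop the abstract describes—prompting an LLM with the available devices and an under-specified goal command, then parsing its reply into a concrete action plan—can be sketched roughly as follows. This is a minimal illustration, not the papers' actual prompt or schema: the JSON action format, device names, and the mocked model reply are all our assumptions, and the real system would obtain the reply from an LLM API call.

```python
import json

def build_prompt(command: str, devices: dict) -> str:
    """Assemble a prompt asking the LLM to map a goal-oriented
    command onto concrete settings for the available devices.
    (Illustrative format only; not the prompt used in the papers.)"""
    return (
        "You control a smart home. Available devices and their states:\n"
        f"{json.dumps(devices, indent=2)}\n"
        f'User command: "{command}"\n'
        "Respond with a JSON list of actions, each of the form "
        '{"device": ..., "setting": ..., "value": ...}.'
    )

def parse_plan(response_text: str) -> list:
    """Parse the model's reply into an action plan; an empty list
    signals an unparseable (failed) response."""
    try:
        plan = json.loads(response_text)
        return plan if isinstance(plan, list) else []
    except json.JSONDecodeError:
        return []

# Hypothetical device inventory for the example.
devices = {
    "living_room_light": {"power": "off", "brightness": 100, "color": "white"},
    "speaker": {"power": "off", "volume": 40},
}
prompt = build_prompt("make it cozy in here", devices)

# A mocked LLM reply; in a real deployment this string would come
# back from a chat-completion API call given `prompt`.
mock_reply = json.dumps([
    {"device": "living_room_light", "setting": "brightness", "value": 30},
    {"device": "living_room_light", "setting": "color", "value": "warm white"},
    {"device": "speaker", "setting": "volume", "value": 15},
])
plan = parse_plan(mock_reply)
```

Evaluating such plans is where a metric like target relevance comes in: a plan that parses cleanly may still act on devices irrelevant to the user's goal, so syntactic validity alone is not enough.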
Evan King, Haoxiang Yu, Sangsu Lee, Christine Julien. "Sasha: creative goal-oriented reasoning in smart homes with large language models." arXiv preprint arXiv:2305.09802 (2023).
Evan King, Haoxiang Yu, Sangsu Lee, Christine Julien. "Get ready for a party: Exploring smarter smart spaces with help from large language models." arXiv preprint arXiv:2303.14143 (2023).