Apple researchers have published a study examining how LLMs can analyze audio and motion data to build a clearer picture of a user's activities. Here are the details.
They’re good at it, but not in a creepy way
A new paper titled “Using LLMs for Late Multimodal Sensor Fusion for Activity Recognition” offers insight into how Apple may be considering using LLM analysis alongside traditional sensor models to gain a more precise understanding of user activity.
This, they argue, has great potential to make activity analysis more precise, even in situations where there isn’t enough sensor data.
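To make the “late fusion” idea concrete, here is a minimal, hypothetical sketch of what that setup could look like: per-modality models (an audio captioner, a motion classifier) run first, and only their text-level outputs are combined into a prompt for an LLM to make the final activity call. The function names, prompt wording, and example outputs below are illustrative assumptions, not Apple's actual implementation.

```python
# Hypothetical sketch of late multimodal fusion with an LLM.
# Per-modality models run first; only their text summaries are fused here.
from dataclasses import dataclass

@dataclass
class ModalityOutput:
    modality: str      # e.g. "audio" or "motion"
    description: str   # text summary produced by that modality's model

def build_fusion_prompt(outputs: list[ModalityOutput],
                        candidate_activities: list[str]) -> str:
    """Combine per-modality summaries into a single prompt for an LLM judge."""
    evidence = "\n".join(f"- {o.modality}: {o.description}" for o in outputs)
    options = ", ".join(candidate_activities)
    return (
        "Sensor-derived evidence about what a person is doing:\n"
        f"{evidence}\n\n"
        f"Choose the single most likely activity from: {options}.\n"
        "Answer with only the activity name."
    )

if __name__ == "__main__":
    # Imagined outputs from an audio captioner and an IMU-based motion classifier.
    outputs = [
        ModalityOutput("audio", "running water and clinking dishes"),
        ModalityOutput("motion", "repetitive arm movement while standing"),
    ]
    activities = ["washing dishes", "running", "typing", "cooking"]
    prompt = build_fusion_prompt(outputs, activities)
    print(prompt)  # send this to whichever LLM you have access to
```

The point of fusing at this late, text-level stage is that the LLM never sees raw audio or accelerometer streams, only compact descriptions, which is also why the approach can still produce a reasonable guess when one modality's data is sparse.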
From the researchers:
Source: Apple study shows LLMs can tell what you’re doing from audio data – 9to5Mac
