What are some of the key experiences users can expect from the new Bixby?
The most noticeable improvement is how intuitive device control has become.
Bixby understands user intent and recommends the most appropriate settings or features, eliminating the need to navigate menus or know exact feature names. Users can simply describe what they want in natural language.
For example, if a user says, “Make my screen visible only to me,” Bixby activates the Privacy Display feature.
Bixby can also answer questions about the device and provide personalised solutions based on current settings — essentially a service centre in your pocket. For example, asking “My eyes are tired — how can I make the screen easier to look at?” will prompt Bixby to recommend and activate the Eye comfort shield feature right then and there.
Users can get answers and solutions simply by asking questions during a conversation, without needing to search through settings or open separate apps such as a browser or maps.
In addition, Bixby is no longer limited to device-related queries. It can now analyse real-time web information and provide relevant answers. For example, users can ask, “Recommend three Korean restaurants in Seoul for a family of four,” and receive results directly within the conversation.
This allows users to ask follow-up questions naturally and get the information they need without interrupting their flow or switching contexts.
What was the most challenging part of the Bixby update process?
The biggest effort went into redesigning Bixby’s architecture from command-based to agentic, enabling it to better understand user intent and deliver optimal results.
Previously, Bixby classified user input and executed tasks based on preset scenarios. Now, with an LLM at its core, it can interpret intent more flexibly and generate its own execution plans.
More specifically, we transformed individual functions into callable agents and defined them in a way that allows the LLM to invoke them as needed. This enables the system to combine multiple functions and APIs to complete tasks more meaningfully, going beyond simple natural language understanding.
As a result, Bixby now handles complex, multi-step requests more naturally with greater contextual awareness, including scenarios that were previously difficult to process.
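The agentic pattern described above, individual functions exposed as callable agents that an LLM can chain into an execution plan, can be sketched in miniature like this. All function and agent names here are illustrative assumptions, not Samsung's actual APIs, and a canned plan stands in for the plan the LLM would generate.

```python
# Minimal sketch of an agentic architecture: device functions are registered
# as named, callable agents, and a planner (the LLM in the real system, a
# hard-coded plan here) chains several of them to satisfy a single request.

from typing import Callable

AGENTS: dict[str, Callable[..., str]] = {}

def agent(name: str):
    """Register a function so the planner can invoke it by name."""
    def register(fn: Callable[..., str]) -> Callable[..., str]:
        AGENTS[name] = fn
        return fn
    return register

@agent("set_brightness")
def set_brightness(level: int) -> str:
    return f"brightness set to {level}%"

@agent("enable_feature")
def enable_feature(feature: str) -> str:
    return f"{feature} enabled"

def execute_plan(plan: list[tuple[str, dict]]) -> list[str]:
    """Run each (agent name, arguments) step of the plan in order."""
    return [AGENTS[name](**kwargs) for name, kwargs in plan]

# A plan an LLM might produce for "My eyes are tired":
plan = [
    ("set_brightness", {"level": 40}),
    ("enable_feature", {"feature": "Eye comfort shield"}),
]
print(execute_plan(plan))
# → ['brightness set to 40%', 'Eye comfort shield enabled']
```

The key design point is the one the passage makes: because agents are invoked by name with arguments rather than matched against preset scenarios, the planner can compose multi-step plans for requests that no single scenario anticipated.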