Artificial intelligence is on everyone’s lips. And with good reason: it is now opening up a whole host of possibilities, particularly in our everyday objects, which are becoming smarter, more responsive and, above all, more autonomous.
From security cameras to vehicles, smart buildings, and medical devices, AI integrated directly into hardware now enables applications that were previously reserved for heavy, cloud-dependent infrastructure.
At Rtone, we have believed in the potential of embedded machine learning for several years.
After an initial demonstration in 2024, carried out in partnership with STMicroelectronics at Embedded World, we repeated the experiment in 2025 with a new practical application: detecting drowsiness at the wheel.
In this article, we will see how the combination of a camera and real-time AI processing on a microcontroller can meet very specific needs in a variety of sectors: automotive, industry, construction, healthcare, logistics, and more.
We will then present our latest demonstration, developed by our teams, as well as the technical choices that accompany it.
Combining a camera with embedded artificial intelligence opens up a wide range of practical applications.
Thanks to real-time local processing, it becomes possible to analyse behaviour, detect anomalies, and improve security, all without a cloud connection or network latency.
Let’s take a quick look at some examples 👇
Automotive Industry
Driving safety is a major issue, especially when faced with the risks of fatigue or inattention.
Thanks to on-board AI, it is now possible to detect signs of drowsiness (blinking, yawning, looking away from the road) in real time, as well as risky behaviour such as using a mobile phone while driving or not wearing a seatbelt. And all this works directly in the vehicle, without a connection to the cloud.

Construction & Industry
On construction sites or in factories, a lack of vigilance can have serious consequences.
Thanks to embedded vision, it is possible to detect a fall, illness or failure to wear PPE, and generate an immediate alert — without network infrastructure, even in isolated areas.
Smart Building
A smart building is also a building capable of responding quickly.
Intrusion, abnormal behaviour, occupancy of a space (meeting room, open space): embedded AI can detect all of this locally, in real time. Even during a network outage, the system keeps running.
Retail
Understanding how customers move around and what interests them allows you to rethink the shopping experience.
Embedded AI provides this insight directly on site: movements, waiting areas, flows… all useful information for improving the customer experience and optimising the layout.
Health / EHPAD
In healthcare facilities or for elderly people, responsiveness is crucial.
Embedded AI can detect falls, fainting spells or prolonged immobility, while ensuring confidentiality: nothing is sent to the cloud and no images are stored.

Robotics
Whether industrial, service or domestic robots, embedded vision improves their autonomy and interaction capabilities.
Object detection, people tracking, gesture recognition or automatic shutdown in the presence of humans: AI makes robots safer and more intuitive, even without a permanent connection.
These use cases are just a glimpse. Beyond their diversity, they all share the same requirements: analyse locally, react quickly and comply with field constraints. That is the strength of combining a camera with embedded AI: it meets specific needs in very different environments.
Drowsiness at the wheel remains one of the main causes of serious road accidents. It is often silent, difficult to anticipate and yet preventable.
Signs of fatigue such as yawning or drooping eyelids are short-lived events that can occur anywhere, including in areas without network coverage.
Based on this observation, we have developed an embedded AI demonstration capable of detecting the first signs of fatigue in real time.
Presented at Embedded World 2025, this solution is based on the brand-new STM32N6 microcontroller from our partner STMicroelectronics, combined with a camera that analyses the driver’s face.
Blinking, yawning, heavy eyelids… these are all signals our AI can recognise. As soon as risky behaviour is detected, an alert is generated to warn the driver immediately.
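As an illustration only (the demo’s firmware logic is not shown in this article), per-frame detections are usually debounced before raising an alert, so a normal blink does not trigger a warning. A minimal sketch in Python, assuming the model reports whether the eyes are closed in each frame; the threshold value is a hypothetical choice, not the demo’s setting:

```python
class DrowsinessAlert:
    """Fire an alert only after the eyes stay closed for several
    consecutive frames (hypothetical threshold, not the demo's value)."""

    def __init__(self, threshold_frames: int = 15):
        self.threshold = threshold_frames
        self.closed_streak = 0

    def update(self, eyes_closed: bool) -> bool:
        """Feed one frame's detection; return True when an alert fires."""
        if eyes_closed:
            self.closed_streak += 1
        else:
            self.closed_streak = 0  # a normal blink resets the counter
        return self.closed_streak >= self.threshold


detector = DrowsinessAlert(threshold_frames=3)
results = [detector.update(c) for c in [True, True, False, True, True, True]]
# The short blink (first two frames) is ignored; only the third
# sustained closure crosses the threshold and fires the alert.
```

The same debouncing idea applies to yawning or gaze direction: the model classifies each frame, and a small amount of temporal logic decides when the situation is actually dangerous.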
So how does it work in practice? What technical choices guided our team? And above all: why is this approach a game changer for embedded security?
For this demonstration, we decided to use an already recognised AI model, adapt it to our use case, and then optimise it so that it runs directly on a microcontroller.
Our solution is based on the following elements:
We chose YOLOv8, a benchmark in object detection, and then retrained it to specifically identify signs of fatigue such as closed or half-closed eyes and yawning.
In a typical configuration, the inference time is between 20 and 30 ms, which allows for real-time processing on a microcontroller.
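To put those numbers in perspective, a 20–30 ms inference time caps the achievable frame rate at roughly 33–50 fps, comfortably above what a drowsiness monitor needs. A quick back-of-the-envelope check:

```python
def max_fps(inference_ms: float) -> float:
    """Upper bound on frame rate if inference is the only per-frame cost."""
    return 1000.0 / inference_ms

# At the measured bounds of 20 ms and 30 ms per inference:
fast, slow = max_fps(20), max_fps(30)
```

In practice, capture and pre-processing take additional time, so the real frame rate sits somewhat below this ceiling.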
We used a public dataset available on Roboflow, without collecting any proprietary data. This dataset contains images of drivers showing various signs of fatigue, which allowed us to cover a wide variety of situations.
The YOLOv8n model was retrained from this dataset to specialise in detecting drowsiness.
Training relied on a standard set of YOLOv8 fine-tuning parameters.
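The exact values are not reproduced here; as a hedged illustration, a YOLOv8n fine-tuning run with the Ultralytics API typically looks like the following, where every parameter value is an assumption rather than the demo’s actual setting:

```python
# Typical YOLOv8 fine-tuning parameters (illustrative values only,
# not the settings actually used in the demo).
TRAIN_PARAMS = {
    "epochs": 100,   # full passes over the dataset
    "imgsz": 640,    # training image size in pixels
    "batch": 16,     # images per gradient step
    "lr0": 0.01,     # initial learning rate
}

def finetune(dataset_yaml: str):
    """Retrain the nano variant on a drowsiness dataset.

    Requires the `ultralytics` package (pip install ultralytics).
    """
    from ultralytics import YOLO
    model = YOLO("yolov8n.pt")  # pretrained nano weights as a starting point
    return model.train(data=dataset_yaml, **TRAIN_PARAMS)
```

Starting from pretrained weights rather than training from scratch is what makes specialisation on a few thousand drowsiness images feasible.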
To ensure reliable detection in a variety of conditions, we applied several data augmentation techniques. This increases the diversity of the dataset and therefore the robustness of the model, enabling it to cope with real-world situations that can be unpredictable (variations in brightness, viewing angles, driver movements).
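The specific augmentations used are not listed above; in the Ultralytics training API they are usually expressed as extra keyword arguments such as the following (the argument names exist in the library, but the values here are illustrative assumptions):

```python
# Illustrative augmentation settings in the Ultralytics style
# (values are assumptions, not the demo's configuration).
AUGMENT_PARAMS = {
    "hsv_v": 0.4,      # brightness jitter, for varying lighting
    "degrees": 10.0,   # small rotations, for head/camera angle changes
    "translate": 0.1,  # shifts, for driver movement in the frame
    "fliplr": 0.5,     # horizontal flip probability
}
```

These would simply be merged into the training call, e.g. `model.train(data=..., **AUGMENT_PARAMS)`, so the augmented images are generated on the fly rather than stored.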
For an AI model like this to work in real-world conditions, the choice of hardware is crucial.
When we think about running an AI model, we often imagine a powerful processor with embedded Linux.
However, in many cases, a modern microcontroller such as the STM32N6 represents an attractive compromise. For drowsiness detection, which requires neither a full OS nor a complex UI, the STM32N6 and its integrated NPU are enough to run the model locally and in real time.
In short, choosing the STM32N6 allows you to combine efficiency, simplicity and reliability.
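Before it can run on the STM32N6’s NPU, the trained model has to be converted and quantized; ST’s tooling (e.g. STM32Cube.AI) consumes exported formats such as TFLite. A hedged sketch of that export step, with illustrative parameters that are assumptions rather than the demo’s configuration:

```python
def export_for_mcu(weights: str):
    """Export trained weights to int8 TFLite for MCU toolchains.

    Requires `ultralytics`; the image size and quantization choice
    here are illustrative, not the demo's actual configuration.
    """
    from ultralytics import YOLO
    model = YOLO(weights)
    return model.export(format="tflite", int8=True, imgsz=256)

def input_buffer_bytes(imgsz: int, channels: int = 3) -> int:
    """RAM the firmware must reserve for one int8 RGB input tensor."""
    return imgsz * imgsz * channels
```

The buffer calculation is why input resolution matters so much on a microcontroller: a 640-pixel input needs over 1 MB for the image alone, while 256 pixels fits in under 200 KB.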
After this concrete example, let’s take a look at the broader potential of this approach.
Our demonstration is just one example among many. Above all, it highlights all that embedded AI can offer in real-life situations.
The YOLOv8 model we used for drowsiness at the wheel can be quickly adapted to other similar use cases.
By simply changing the dataset and a few training parameters, the same approach can be transferred to other detection tasks.
In summary, the same technological foundation can cover a wide variety of needs without having to start from scratch each time.
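As a sketch of that reuse (the class names below are hypothetical examples, not taken from this article), retargeting often comes down to pointing the same training code at a new dataset definition:

```python
def dataset_config(path: str, class_names: list[str]) -> dict:
    """Build a minimal Ultralytics-style dataset description.

    The structure mirrors the usual data YAML (path/train/val/names);
    the values are placeholders for whatever new use case is targeted.
    """
    return {
        "path": path,
        "train": "images/train",
        "val": "images/val",
        "names": dict(enumerate(class_names)),
    }

# Hypothetical retargeting example: same pipeline, new classes.
cfg = dataset_config("datasets/ppe", ["helmet", "no_helmet"])
```

The model architecture, training loop and deployment chain stay identical; only the data and the class list change.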
A key issue today: personal data protection.
With embedded AI, all analyses are performed locally, in real time, on a microcontroller.
Keeping the data on the device not only makes GDPR compliance far easier, it also ensures a high level of confidentiality, a decisive advantage for users and manufacturers alike.
The AI inference runs directly on the STM32N6 microcontroller, designed to deliver good performance while remaining very energy efficient.
As a result, the solution can be easily integrated into battery-powered systems without sacrificing autonomy.
This is a real advantage for all mobile or embedded applications, but also for those deployed in isolated environments where mains power is not always available.
In practical terms, this opens the door to smart cameras capable of operating for long periods of time, reliably and with low energy consumption, even in the field.
The strength of our demonstration lies in the combination of a latest-generation microcontroller and a proven AI model.
The STM32N6 from STMicroelectronics incorporates a dedicated Neural Processing Unit (NPU) capable of executing complex AI models with remarkable energy efficiency. The result is real-time performance with minimal power consumption.
YOLOv8 is one of the most powerful and flexible object detection models currently available. Its accuracy and adaptability make it an ideal choice for demanding embedded applications.
Together, these two technological building blocks deliver concrete benefits: real-time performance, low power consumption and fully local processing.
Embedded AI is no longer a promise: it is a reality that is becoming established in fields as varied as road safety, health and robotics.
The difference today is the speed at which everything is advancing. Models are being optimised, microcontrollers are becoming more powerful, and applications are being deployed more and more quickly. We are seeing the emergence of cascading AI models, the integration of vision-language models, and multi-sensor systems capable of making complex decisions on their own.
At Rtone, we support our customers in transforming these advances into concrete solutions: predictive maintenance, anomaly detection, signal classification, and local, rapid, and energy-efficient decision-making.
And if you want to go further, check out our demonstration of anomaly detection using embedded machine learning.
👉 Do you have a project in mind or an idea to explore? Contact us!