06 February 2026

Embedded AI and cameras: practical use cases for enhancing security

Artificial intelligence is on everyone’s lips. And with good reason: it is now opening up a whole host of possibilities, particularly in our everyday objects, which are becoming smarter, more responsive and, above all, more autonomous.


From security cameras to vehicles, smart buildings, and medical devices, AI integrated directly into hardware now enables applications that were previously reserved for heavy, cloud-dependent infrastructure.

At Rtone, we have believed in the potential of embedded machine learning for several years.

After an initial demonstration in 2024, carried out in partnership with STMicroelectronics at Embedded World, we repeated the experiment in 2025 with a new practical application: detecting drowsiness at the wheel.

In this article, we will see how the combination of a camera and real-time AI processing on a microcontroller can meet very specific needs in a variety of sectors: automotive, industry, construction, healthcare, logistics, and more.

We will then present our latest demonstration, developed by our teams, as well as the technical choices that accompany it.

 

Cameras with built-in AI: what are their uses in different sectors?  

Combining a camera with embedded artificial intelligence opens up a wide range of practical applications.

Thanks to real-time local processing, it becomes possible to analyse behaviour, detect anomalies, and improve security, all without a cloud connection or network latency.

Let’s take a quick look at some examples 👇

 

Automotive Industry

Driving safety is a major issue, especially when faced with the risks of fatigue or inattention.

Thanks to on-board AI, it is now possible to detect signs of drowsiness (blinking, yawning, looking away from the road) in real time, as well as risky behaviour such as using a mobile phone while driving or not wearing a seatbelt. And all this works directly in the vehicle, without a connection to the cloud.

 

 


Safety at Work

On construction sites or in factories, a lack of vigilance can have serious consequences.

Thanks to embedded vision, it is possible to detect a fall, illness or failure to wear PPE, and generate an immediate alert — without network infrastructure, even in isolated areas.

 

 

Smart Building

A smart building is also a building capable of responding quickly.

Intrusion, abnormal behaviour, occupation of a space (meeting room/open space): embedded AI can detect all of this locally, in real time. Even in the event of a network outage, the system continues to run.

 

 

Retail

Understanding how customers move around and what interests them allows you to rethink the shopping experience.

Embedded AI provides this insight directly on site: movements, waiting areas, flows… all useful information for improving the customer experience and optimising the layout.

 

 

Healthcare / Elderly Care Facilities (EHPAD)

In healthcare facilities or for elderly people, responsiveness is crucial.

Embedded AI can detect falls, fainting spells or prolonged immobility, while ensuring confidentiality: nothing is sent to the cloud and no images are stored.

 

 

Robotics

Whether industrial, service or domestic robots, embedded vision improves their autonomy and interaction capabilities.

Object detection, people tracking, gesture recognition or automatic shutdown in the presence of humans: AI makes robots safer and more intuitive, even without a permanent connection.

 

These use cases are just a glimpse.

Beyond their diversity, all these cases share the same requirements: analyse locally, react quickly and comply with field constraints.

This is the advantage of combining a camera with embedded AI, as it allows you to meet specific needs in very different environments.

 

Detecting drowsiness at the wheel: on-board AI to protect drivers

Drowsiness at the wheel remains one of the main causes of serious road accidents. It is often silent, difficult to anticipate and yet preventable.

Signs of fatigue such as yawning or drooping eyelids are short-lived events that can occur anywhere, including in areas without network coverage.

Based on this observation, we have developed an embedded AI demonstration capable of detecting the first signs of fatigue in real time.

Presented at Embedded World 2025, this solution is based on the brand-new STM32N6 microcontroller from our partner STMicroelectronics, combined with a camera that analyses the driver’s face.

Blinking, yawning, heavy eyelids… these are all signals that our AI can recognise to trigger an alert and avoid a dangerous situation.

As soon as risky behaviour is detected, an alert is generated to immediately warn of the danger.

So how does it work in practice? What technical choices guided our team? And above all: why is this approach a game changer for embedded security?

Technical Deep Dive

For this demonstration, we decided to use an already recognised AI model, adapt it to our use case, and then optimise it so that it runs directly on a microcontroller.

Our solution is based on the following elements:

The AI model

We chose YOLOv8, a benchmark in object detection, and then retrained it to specifically identify signs of fatigue such as closed or half-closed eyes and yawning.

In a typical configuration, the inference time is between 20 and 30 ms, which allows for real-time processing on a microcontroller.
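To give a feel for what this latency budget means in terms of frame rate, here is some back-of-the-envelope arithmetic (our own illustration, not a measured benchmark):

```python
# Rough frame-budget arithmetic: an inference time of 20-30 ms
# corresponds to a theoretical throughput of roughly 33-50 frames
# per second, comfortably above typical camera rates (15-30 FPS),
# which leaves headroom for capture and pre-processing.

def max_fps(inference_ms: float) -> float:
    """Upper bound on frames per second for a given inference time."""
    return 1000.0 / inference_ms

print(round(max_fps(30.0), 1))  # slowest case: 33.3 FPS
print(round(max_fps(20.0), 1))  # fastest case: 50.0 FPS
```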

 

Data set

We used a public dataset available on Roboflow, without collecting any proprietary data. This dataset contains images of drivers showing various signs of fatigue, which allowed us to cover a wide variety of situations.

 

Transfer learning

The YOLOv8n model was retrained on this dataset to specialise in drowsiness detection.

The main training parameters used are:

  • Image size: 416 x 416
  • Batch size: 64
  • Number of epochs: between 150 and 300, with early termination if no improvement was observed after 40 epochs
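The early-termination rule in the last bullet can be sketched as a simple patience counter (a generic illustration of the principle, not the trainer's actual code):

```python
# Generic early-stopping sketch: stop when the validation metric has
# not improved for `patience` consecutive epochs (here, 40), capped
# at `max_epochs` (here, 300). Illustrative only.

def train_with_early_stopping(metric_per_epoch, max_epochs=300, patience=40):
    """Return the number of epochs actually run before stopping."""
    best = float("-inf")
    since_improvement = 0
    for epoch, metric in enumerate(metric_per_epoch[:max_epochs], start=1):
        if metric > best:
            best = metric
            since_improvement = 0
        else:
            since_improvement += 1
        if since_improvement >= patience:
            return epoch  # no improvement for `patience` epochs: stop early
    return min(len(metric_per_epoch), max_epochs)

# A metric that improves for 100 epochs, then plateaus:
history = [i / 100 for i in range(100)] + [0.99] * 200
print(train_with_early_stopping(history))  # → 140 (100 + 40 patience epochs)
```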

 

Optimisation and continuous improvement

In order to ensure reliable detection in a variety of conditions, we applied different data augmentation techniques:

  • Mosaic: combining 4 images to improve the detection of small objects
  • MixUp: merging 2 images to enhance the model’s generalisation
  • Colour augmentation: adjusting brightness, contrast and saturation
  • Random transformations: flipping, rotating, blurring and adding noise
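MixUp, for instance, amounts to a pixel-wise weighted blend of two images. A minimal grayscale sketch (our own illustration, independent of the actual training pipeline):

```python
import random

def mixup(img_a, img_b, lam=None):
    """Blend two equally sized grayscale images pixel-wise:
    out = lam * a + (1 - lam) * b, with lam drawn uniformly if not given."""
    if lam is None:
        lam = random.random()
    return [lam * a + (1 - lam) * b for a, b in zip(img_a, img_b)]

# Two tiny 2x2 "images" flattened to pixel lists:
a = [0, 0, 255, 255]
b = [255, 255, 0, 0]
print(mixup(a, b, lam=0.5))  # → [127.5, 127.5, 127.5, 127.5]
```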

This approach increases the diversity of the dataset and therefore the robustness of the model, enabling it to function effectively in real-world situations, which can sometimes be unpredictable (variations in brightness, viewing angles, driver movements).

 

Why an STM32N6 rather than an MPU with embedded Linux?

For an AI model like this to work in real-world conditions, the choice of hardware is crucial.

When we think about running an AI model, we often imagine a powerful processor with embedded Linux.

However, in many cases, a modern microcontroller such as the STM32N6 represents an attractive compromise:

  • Competitive price: less than €7 in large quantities, well below the cost of an MPU.
  • Instant start-up: less than 100 ms, with no OS to load.
  • Ease of integration: no kernel, no file system, fewer software layers to maintain.
  • Enhanced security: a reduced attack surface, meaning fewer potential vulnerabilities.
  • Real-time reliability: essential for critical applications such as road safety.
  • Ultra-low power consumption: ideal for battery-powered systems.
  • Reduced maintenance: no critical Linux updates, fewer software dependencies.

In the specific case of drowsiness detection, which does not require a full OS or complex UI, the STM32N6 with integrated NPU enables:

  • Sufficient AI performance (optimised YOLOv8n inference),
  • Reduced hardware costs (simpler BoM),
  • And improved robustness in embedded environments.

In short, choosing the STM32N6 allows you to combine efficiency, simplicity and reliability.

After this concrete example, let’s take a look at the broader potential of this approach.

 

Why is it a technology of the future?

Our demonstration is just one example among many. Above all, it highlights all that embedded AI can offer in real-life situations.

A model that can be easily adapted to other use cases

The YOLOv8 model we used for drowsiness at the wheel can be quickly adapted to other similar use cases.

By simply changing the dataset and a few training parameters, the same approach can be used, for example, to:

  • Detect driver distraction (looking away from the road, loss of attention)
  • Identify mobile phone use while driving
  • Monitor the alertness of operators in control rooms or industrial sites

In summary, the same technological foundation can cover a wide variety of needs without having to start from scratch each time.

 

GDPR compliance and privacy protection

A key issue today: personal data protection.
With embedded AI, all analyses are performed locally, in real time, on a microcontroller.

  • No images are sent to the cloud.
  • No faces are stored.
  • No processing is delegated to a remote server.

This not only guarantees GDPR compliance, but also maximum confidentiality, which is a decisive advantage for users and manufacturers.

 

Energy efficiency for battery-powered systems

The AI inference runs directly on the STM32N6 microcontroller, designed to deliver good performance while remaining very energy efficient.

As a result, the solution can be easily integrated into battery-powered systems without sacrificing autonomy.

This is a real advantage for all mobile or embedded applications, but also for those deployed in isolated environments where mains power is not always available.

In practical terms, this opens the door to smart cameras capable of operating for long periods of time, reliably and with low energy consumption, even in the field.

 

Synergy between optimised hardware (STM32N6) and advanced AI (YOLOv8)

The strength of our demonstration lies in the combination of a latest-generation microcontroller and a proven AI model.

The STM32N6 from STMicroelectronics incorporates a dedicated Neural Processing Unit (NPU) capable of executing complex AI models with remarkable energy efficiency. The result is real-time performance with minimal power consumption.

YOLOv8 is one of the most powerful and flexible object detection models currently available. Its accuracy and adaptability make it an ideal choice for demanding embedded applications.

Together, these two technological building blocks offer several concrete benefits:

  • Local and instantaneous processing: no network latency
  • Cloud independence: no dependence on an external connection
  • Cost optimisation: no additional costs related to bandwidth or server infrastructure
  • Robustness in real-world conditions: the system remains functional even in the event of a network outage

 

To conclude

Embedded AI is no longer a promise: it is a reality that is becoming established in fields as varied as road safety, health and robotics.

The difference today is the speed at which everything is advancing. Models are being optimised, microcontrollers are becoming more powerful, and applications are being deployed more and more quickly. We are seeing the emergence of cascading AI models, the integration of vision-language models, and multi-sensor systems capable of making complex decisions on their own.

At Rtone, we support our customers in transforming these advances into concrete solutions: predictive maintenance, anomaly detection, signal classification, and local, rapid, and energy-efficient decision-making.

And if you want to go further, check out our demonstration of anomaly detection using embedded machine learning.

👉 Do you have a project in mind or an idea to explore? Contact us!

 
