Development of a Comprehensive Approach for Precise Positioning and Orientation of Multiple Mobile Robots in a Specified Formation Using Computer Vision and the Internet of Things
Abstract
The integration of Internet of Things (IoT) technologies into Multi-Robot Systems (MRS) marks a significant advancement in robotics and has opened new avenues for innovation in robotics applications. By leveraging IoT technologies, the robots in an MRS can be interconnected over a network, enabling seamless communication and data exchange for a wide range of industrial and commercial applications. As manufacturing processes evolve towards increased automation and connectivity, MRS provide a versatile solution for tasks such as logistics, transportation, and collaborative assembly. This work presents a complete MRS architecture that combines Message Queue Telemetry Transport (MQTT) based wireless communication, overhead camera–assisted pose estimation using Augmented Reality University of Cordoba (ArUco) markers, a graphical control interface, and onboard sensing to achieve accurate multi-robot formation control. Unlike high-cost optical motion-capture systems such as Vicon or infrastructure-dependent Ultra-Wide Band (UWB) localization, the proposed system attains centimeter-level accuracy using only a single overhead camera and low-power ESP32-based mobile robots. Real-time position and orientation feedback is computed using a Python–OpenCV pipeline, while MQTT ensures lightweight, low-latency communication between the master operating station and the robots. An ArUco marker mounted on top of each robot gives it a unique identity for easy identification and for position and orientation feedback. The system is experimentally validated across square, triangle, rectangle, and line formations. Each formation is executed over five independent trials, and statistical performance metrics in the form of mean ± standard deviation demonstrate consistent accuracy and repeatability. A baseline comparison using pure encoder odometry shows substantially higher drift, confirming the benefit of closed-loop visual feedback.
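As a rough illustration of the overhead-camera feedback described above, a robot's 2-D pose can be recovered from the four corner points of its detected ArUco marker: the corner centroid gives position, and the direction from the centroid to the midpoint of the marker's top edge gives heading. A minimal sketch (pure Python; the corner coordinates are assumed to come from OpenCV's `cv2.aruco.detectMarkers`, which is not invoked here, and the function name `marker_pose` is illustrative):

```python
import math

def marker_pose(corners):
    """Estimate a 2-D pose from the four corner points of an ArUco marker.

    `corners` lists the marker corners in OpenCV's order: top-left,
    top-right, bottom-right, bottom-left (pixel coordinates).
    Returns (cx, cy, heading_deg), with heading measured from the +x
    image axis toward the midpoint of the marker's top edge.
    """
    (x0, y0), (x1, y1), (x2, y2), (x3, y3) = corners
    # Marker center = centroid of the four corners.
    cx = (x0 + x1 + x2 + x3) / 4.0
    cy = (y0 + y1 + y2 + y3) / 4.0
    # Midpoint of the top edge (between top-left and top-right corners).
    mx, my = (x0 + x1) / 2.0, (y0 + y1) / 2.0
    heading = math.degrees(math.atan2(my - cy, mx - cx))
    return cx, cy, heading

# Axis-aligned marker whose top edge faces -y (typical image coordinates):
print(marker_pose([(100, 100), (140, 100), (140, 140), (100, 140)]))
# → (120.0, 120.0, -90.0)
```

In practice the pixel pose would be mapped to field coordinates with the camera calibration, but the per-marker geometry is no more than the centroid and edge-midpoint computation shown here.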
An ablation experiment that disables the magnetometer further quantifies its contribution to orientation stability. Additionally, a full timing and latency analysis covering frame rate, image-processing time, MQTT round-trip delay, and actuator reaction time verifies that the end-to-end control loop operates within real-time bounds. For the square formation task, the average position error remains below 10% of the robot's chassis size and under 1% of the overall field dimensions. Quantitative results and images are likewise discussed for the triangle, line, and rectangle formations. The results demonstrate that the proposed MRS achieves robust, precise, and repeatable formation control with minimal hardware cost and infrastructure requirements. The proposed algorithms and the overall MRS architecture for accurate positioning and orientation of mobile robots can be applied effectively in a variety of industrial settings, providing enhanced adaptability and flexibility in manufacturing, along with improved real-time communication and feedback through IoT.
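The per-formation statistics reported above (mean ± standard deviation over five trials, with the mean error expressed relative to the chassis size and the field dimensions) can be computed in a few lines. The error values, chassis size, and field size below are illustrative placeholders, not the paper's measured data:

```python
import statistics

def summarize_errors(errors_cm, chassis_cm, field_cm):
    """Mean ± sample standard deviation of absolute position error,
    plus the mean error as a percentage of chassis and field size."""
    mean = statistics.mean(errors_cm)
    std = statistics.stdev(errors_cm)  # sample (n-1) standard deviation
    return {
        "mean_cm": mean,
        "std_cm": std,
        "pct_of_chassis": 100.0 * mean / chassis_cm,
        "pct_of_field": 100.0 * mean / field_cm,
    }

# Hypothetical per-trial position errors for one formation (cm):
stats = summarize_errors([1.2, 1.5, 1.1, 1.4, 1.3],
                         chassis_cm=15.0, field_cm=200.0)
print(f"{stats['mean_cm']:.2f} ± {stats['std_cm']:.2f} cm "
      f"({stats['pct_of_chassis']:.1f}% of chassis, "
      f"{stats['pct_of_field']:.2f}% of field)")
```

With these sample numbers the mean error of 1.3 cm comes to under 10% of the assumed 15 cm chassis and under 1% of the assumed 2 m field, mirroring the form of the thresholds quoted for the square formation.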
Article Details

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.