Autonomous Intelligence Robotic Center - Projects

Development of Multi-Function Autonomous Robotic Systems to Improve Greenhouse Efficiencies

Status: Ongoing

Partner organization: Greenhouse Technology Network / Prism Farms

Grant: GTN

After the successful development of the AI-driven robotic pollinator, we aim to add new functionality to our autonomous robotic system. De-leafing plays a crucial role in cucumber cultivation within greenhouses: selectively removing unwanted leaves helps optimize plant health, productivity, and overall crop quality. This grant aims to design and build a prototype autonomous robotic system for de-leafing cucumber plants grown in greenhouses. The system consists of a mobile platform that carries the de-leafing equipment through the greenhouse and a 6-DoF (degree-of-freedom) robot manipulator equipped with a vision system and an end-effector. We will design the vision unit to detect and locate the leaves of each plant, and the end-effector to interact directly with the leaves and perform the removal task.
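The hand-off from the vision unit to the manipulator can be sketched as follows. This is a minimal illustration only: it assumes the vision unit reports each detected leaf as a point in the robot's base frame, and the height-based selection rule and data layout are illustrative assumptions, not the project's actual criteria.

```python
# Hypothetical hand-off from leaf detection to removal targets.
# The 0.6 m height threshold is an illustrative stand-in for whatever
# agronomic rule decides which leaves should come off.

def select_leaves_for_removal(leaves, max_height_m=0.6):
    """Return detected leaves low enough on the stem to be removal candidates.

    leaves: list of dicts like {"id": 3, "z": 0.42}, where z is height in meters.
    """
    return [leaf for leaf in leaves if leaf["z"] <= max_height_m]

detections = [{"id": 1, "z": 0.25}, {"id": 2, "z": 0.55}, {"id": 3, "z": 1.10}]
targets = select_leaves_for_removal(detections)
# leaves 1 and 2 fall below the threshold; leaf 3 stays on the plant
```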

 

Uncertainty-aware Pedestrian Behavior Prediction for Autonomous Vehicles

Status: Ongoing

Partner organization: NSERC/ Norwegian University of Science and Technology (NTNU)

Grant: International - Catalyst

With the rise of autonomous vehicles and the need for safer interactions between pedestrians and vehicles, pedestrian behavior prediction has become a highly relevant and sought-after research field. The main objective of this research project is to improve the prediction of pedestrian behavior in urban scenarios by incorporating uncertainty estimation techniques. Deep Neural Networks (DNNs) exhibit two main types of uncertainty: aleatoric and epistemic. In this research, we will model aleatoric uncertainty by incorporating probabilistic outputs. Epistemic uncertainty will be addressed through Bayesian neural networks, which capture and quantify the uncertainty arising from model parameters and architecture. By considering both aleatoric and epistemic uncertainties in DNNs, we can obtain more reliable and robust pedestrian behavior predictions.
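The two uncertainty types can be made concrete with a minimal sketch. Assuming the network's probabilistic output is a Gaussian over a pedestrian's future position, aleatoric uncertainty is trained in via the Gaussian negative log-likelihood; epistemic uncertainty can then be estimated from the spread of several stochastic forward passes (e.g. MC-dropout or a Bayesian ensemble). Function names and the 1-D setting are illustrative.

```python
import math

def gaussian_nll(mu, sigma, y):
    """Negative log-likelihood of observation y under N(mu, sigma^2).
    Training with this loss lets the network report aleatoric (data) noise."""
    return 0.5 * math.log(2.0 * math.pi * sigma ** 2) + (y - mu) ** 2 / (2.0 * sigma ** 2)

def epistemic_variance(samples):
    """Variance across predictions from repeated stochastic forward passes,
    a simple proxy for epistemic (model) uncertainty."""
    mean = sum(samples) / len(samples)
    return sum((s - mean) ** 2 for s in samples) / len(samples)
```

Note the behavior the loss encourages: an over-confident prediction (small sigma, large error) is penalized far more than a well-calibrated one, so the network learns to widen its predicted variance where the data are genuinely noisy.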

 

Learning-based Autonomous Robotic System for Package Sorting Application

Status: Ongoing

Partner organization: NSERC and Mitacs joint funding / Vistaprint

Grant: Mitacs Accelerate – NSERC Alliance

Despite previous efforts, most robotic sorting systems fail to work in complex environments in real-life applications. An autonomous sorting robot should have the following capabilities: (i) detection and classification of objects with different shapes, sizes, and physical properties; (ii) optimal object grasping; and (iii) trajectory generation and motion planning within the 3D environment. In this project, we will develop an autonomous object sorting system using a robotic grasping mechanism with deep learning. Object detection and classification will be performed by convolutional neural networks (CNNs) that process RGB images. For the second part, object grasping, we will use point cloud data generated by an RGB-D camera and another CNN to process the local geometry and graspable surfaces of the objects. To be suitable for real-time applications, processing time will be reduced by finding the best grasp candidates with a CNN trained on large online depth datasets. Finally, optimal path planning and trajectory optimization will be carried out by a deep reinforcement learning (DRL) algorithm.
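Step (ii) can be sketched as a ranking problem. In the project, the quality score for each candidate would come from the CNN over local point-cloud geometry; in this illustrative sketch the scores are given directly and the candidate fields are assumptions.

```python
# Hypothetical grasp selection: keep the candidate the network scored highest.
# Each candidate pairs a grasp pose (here just an x, y, z position) with a
# predicted quality score in [0, 1].

def best_grasp(candidates):
    """candidates: list of (pose, quality) tuples; returns the best pose."""
    pose, _ = max(candidates, key=lambda c: c[1])
    return pose

candidates = [((0.10, 0.20, 0.05), 0.31),
              ((0.12, 0.18, 0.06), 0.87),
              ((0.09, 0.22, 0.04), 0.55)]
# best_grasp(candidates) → (0.12, 0.18, 0.06), the highest-scored candidate
```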

 

Human-machine interaction with capacitive pressure sensing for industrial automation, accessibility, and sustainability

Status: Ongoing

Partner organization: University of Windsor/ Department of Chemistry and Biochemistry

Grant: Faculty of Engineering Innovating Sustainability Grant

This project aims to develop a sensor-based human-machine interaction system for recognizing hand movements in real time. We will design and develop pressure sensors to capture gestures, a signal-conditioning circuit to amplify the sensor output, a data acquisition module to convert the amplified analog signals to digital signals for processing, a gesture recognition module to classify different gestures based on pressure changes, and an actuation module that controls the robotic arm.
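The gesture recognition module can be sketched as a nearest-template classifier, assuming each gesture leaves a characteristic pattern of pressure values across the sensor array. The templates, gesture names, and four-sensor layout below are illustrative assumptions, not the project's design.

```python
import math

# Hypothetical pressure templates for a four-sensor array (values in
# normalized units after signal conditioning and A/D conversion).
TEMPLATES = {
    "grip":    [0.9, 0.8, 0.9, 0.7],
    "pinch":   [0.9, 0.9, 0.1, 0.1],
    "release": [0.1, 0.1, 0.1, 0.1],
}

def classify_gesture(reading):
    """Assign a digitized pressure reading to the nearest stored template
    by Euclidean distance."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(TEMPLATES, key=lambda name: dist(reading, TEMPLATES[name]))

# A noisy reading close to the "pinch" pattern is classified accordingly:
# classify_gesture([0.8, 0.95, 0.05, 0.2]) → "pinch"
```

A real system would replace the fixed templates with a trained classifier, but the pipeline shape (reading in, gesture label out, label driving the actuation module) is the same.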

 

Design and Development of AI-Driven Robotic Pollination in Greenhouses

Status: Completed (2024)

Partner organization: Greenhouse Technology Network / Zion Robotics and Controls

Grant: GTN

Utilizing robotic pollination in greenhouses represents a significant advancement in agricultural technology. We integrated an Intel RealSense RGB-D camera with YOLOv8, a deep learning-based object detection algorithm, to accurately identify tomato flowers in real time. All computations are performed on a Jetson Orin, which provides the processing power needed for efficient data handling and analysis. The Jetson sends each flower's location to the UR3e robotic arm for pollination, enhancing fruit set and yield. This innovative approach not only boosts productivity but also reduces reliance on traditional pollinators, addressing the challenges posed by declining bee populations and optimizing greenhouse operations.
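The step from a 2-D detection to a 3-D target for the arm can be sketched with the standard pinhole camera model: the bounding-box center plus the RealSense depth reading is deprojected into a point in the camera frame. The intrinsics below are illustrative placeholders, not the camera's calibrated values.

```python
# Pinhole-model deprojection: map a detected flower's pixel coordinates
# and depth to a 3-D point in the camera frame. (fx, fy) are focal lengths
# in pixels and (cx, cy) is the principal point; all values here are
# hypothetical, stand-ins for the RealSense's calibrated intrinsics.

def deproject(u, v, depth_m, fx, fy, cx, cy):
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

# A flower detected exactly at the image center lies on the optical axis:
# deproject(320, 240, 0.50, fx=615.0, fy=615.0, cx=320.0, cy=240.0)
# → (0.0, 0.0, 0.5)
```

The resulting point would still need a hand-eye transform into the arm's base frame before being sent as a pollination target.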

 

 

Cooperative Adaptive Cruise Control with Vehicle Trajectory Prediction Using Machine Learning Techniques

Status: Completed (2023)

Partner organization: NSERC/ University of Leeds

Grant: International - Catalyst

This research project aims to improve adaptive cruise control (ACC) performance by using vehicular communication and machine learning techniques. ACC relies on onboard sensors to match a vehicle's speed to that of the preceding vehicle. Since ACC performance is limited by its sensing range, cooperative adaptive cruise control (CACC) has emerged, supplementing onboard sensors with vehicular communication to exchange information between vehicles. The main objective of this project is to leverage machine learning techniques to predict vehicle trajectories in CACC. We will use a long short-term memory (LSTM) deep model to predict the positions of target vehicles at future time steps.
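Training such a predictor starts with turning each vehicle's position history into supervised samples: a window of past positions paired with the position at the next time step. A minimal sketch of that windowing step, with an illustrative window length:

```python
# Sliding-window sample preparation for a sequence model such as an LSTM.
# Each sample pairs seq_len past positions (the input sequence) with the
# next observed position (the prediction target).

def make_windows(positions, seq_len):
    return [(positions[i:i + seq_len], positions[i + seq_len])
            for i in range(len(positions) - seq_len)]

trajectory = [0.0, 1.1, 2.3, 3.4, 4.6]   # positions over five time steps
samples = make_windows(trajectory, seq_len=3)
# → [([0.0, 1.1, 2.3], 3.4), ([1.1, 2.3, 3.4], 4.6)]
```

In the CACC setting, the communicated states of surrounding vehicles would be appended to each input window; the pairing of history with next-step target is unchanged.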

 

Development of Automated Robotic Vision Inspection System for Accurate Quality Control

Status: Completed (2022)

Partner organization: NSERC/ Zion Robotics and Controls

Grant: Alliance

This project aims to replace human operators with robots that perform inspection tasks automatically in the factory environment. The main goal is to carry out dimensional measurement and flaw detection in bore cylinders automatically on-site. To be employed in dynamic and uncontrolled workplaces, robots must cope with unforeseen circumstances. Robot learning is a new paradigm that enables robots to handle variations in the working environment, such as differences in an item's position on the product line or illumination changes. We will use the learning-from-demonstration (LfD) technique along with other cutting-edge technologies, such as computer vision, artificial intelligence, and pattern recognition. The outcome of this project will have a considerable impact on the automotive sector as well as other critical industries.
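The core LfD idea can be illustrated in its simplest possible form: several human demonstrations of the same inspection motion, recorded as equal-length waypoint lists, are combined into one reference trajectory for the robot to replay. Practical LfD methods (e.g. dynamic movement primitives or Gaussian mixture regression) are far richer; this pointwise average is only a sketch of the concept.

```python
# Simplest-case learning from demonstration: average several recorded
# demonstrations, waypoint by waypoint, into one reference trajectory.

def mean_trajectory(demos):
    """demos: list of equal-length waypoint lists; returns the pointwise mean."""
    n = len(demos)
    return [sum(d[i] for d in demos) / n for i in range(len(demos[0]))]

demos = [[0.0, 1.0, 2.0],
         [1.0, 2.0, 3.0],
         [2.0, 3.0, 4.0]]
# mean_trajectory(demos) → [1.0, 2.0, 3.0]
```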

 

Collaborative Learning Environment for Remote Experimentation with Robots  

Status: Completed (2022)

Partner organization: eCampus Ontario

Grant: Virtual Learning Strategy

The pandemic has prompted a dramatic shift not only to online learning but also toward remote experimentation. This proposal aims to develop a collaborative, practical learning environment that enables students to remotely access a physical laboratory and perform experiments. Such an environment is extremely beneficial in many academic areas, such as engineering programs, which require both theoretical knowledge and practical training. The developed learning environment is used for an online robotics course as a case study. The course is based on passive and active learning methods: while students can passively learn from lectures and demonstrations, active learning through exercises gives them a unique opportunity to practice theoretical concepts. After successfully implementing their code in the ROS (Robot Operating System) simulation environment, students remotely connect to an on-site laboratory and run their code on a real robot. The project covers both mobile robots and robotic manipulators.

 

Design and Development of COVID-19 Automated UV Object Disinfection Cabinet

Status: Completed (2022)

Partner organization: Mitacs/ Level One Robotics and Controls Inc.

Grant: Accelerate

The main objective of this project was to design and develop an automated cabinet to disinfect objects of COVID-19 using ultraviolet germicidal irradiation. The proposed disinfection cabinet can detect objects, measure their sizes, and adjust the UV rate accordingly to ensure complete disinfection and meet stringent disinfection standards. Moreover, the system can decontaminate objects in a short time, making it a good candidate for major online retailers that distribute hundreds of thousands of items per day. With some modifications, the system could be used in many other sectors, from manufacturing to grocery stores, to disinfect other materials and items.
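The dose logic behind adjusting the UV rate to object size can be sketched with a simple point-source model: irradiance falls off with the square of the distance from the lamp, so the exposure time needed to reach a target germicidal dose grows quadratically with distance. The lamp power, target dose, and point-source assumption below are illustrative, not the cabinet's actual parameters.

```python
import math

# Hypothetical exposure-time calculation, treating the UV-C lamp as an
# isotropic point source. Irradiance at distance d is P / (4*pi*d^2);
# required exposure time is target dose divided by irradiance.

def exposure_time_s(target_dose_mj_cm2, lamp_power_mw, distance_cm):
    irradiance = lamp_power_mw / (4.0 * math.pi * distance_cm ** 2)  # mW/cm^2
    return target_dose_mj_cm2 / irradiance                           # seconds

# Doubling the distance quadruples the required exposure time, which is
# why measured object size/position feeds into the UV-rate adjustment.
```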