Authors: Zhou, H.; Schubert, I.; Toussaint, M.; Öğüz, Salih Özgür
Date accessioned: 2024-03-15
Date available: 2024-03-15
Date issued: 2023-10-01
ISSN: 2153-0858
eISSN: 2153-0866
URI: https://hdl.handle.net/11693/114794
Conference Name: IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2023
Date of Conference: 1 October 2023 through 5 October 2023
Title: Spatial reasoning via deep vision models for robotic sequential manipulation
Type: Conference Paper
DOI: 10.1109/IROS55552.2023.10342010
Language: English
Keywords: Computer vision; Decision making; Deep learning; Intelligent robots

Abstract: In this paper, we propose using deep neural architectures (i.e., vision transformers and ResNet) as heuristics for sequential decision-making in robotic manipulation problems. This formulation enables predicting the subset of objects that are relevant for completing a task. Such problems are often addressed by task and motion planning (TAMP) formulations combining symbolic reasoning and continuous motion planning. In essence, the action-object relationships are resolved for discrete, symbolic decisions that are used to solve manipulation motions (e.g., via nonlinear trajectory optimization). However, solving long-horizon tasks requires consideration of all possible action-object combinations, which limits the scalability of TAMP approaches. To overcome this combinatorial complexity, we introduce a visual perception module integrated with a TAMP solver. Given a task and an initial image of the scene, the learned model outputs the relevancy of objects for accomplishing the task. By incorporating the predictions of the model into a TAMP formulation as a heuristic, the size of the search space is significantly reduced. Results show that our framework finds feasible solutions more efficiently than a state-of-the-art TAMP solver.
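
The abstract describes a learned perception module that scores how relevant each scene object is to a given task, and a TAMP solver that prunes its symbolic search space using those scores. The paper's actual model is a vision transformer / ResNet; the following is only a minimal sketch of the pruning idea under stated assumptions, with a toy similarity-based scorer standing in for the learned network. All names (`relevancy_scores`, `prune_objects`, the example objects and feature vectors) are hypothetical, not from the paper.

```python
import numpy as np

def relevancy_scores(object_features, task_embedding):
    # Hypothetical stand-in for the learned vision model:
    # score each object by dot-product similarity to a task
    # embedding, normalized with a softmax.
    sims = object_features @ task_embedding
    e = np.exp(sims - sims.max())
    return e / e.sum()

def prune_objects(objects, scores, threshold=0.2):
    # Keep only objects the heuristic deems relevant, shrinking
    # the set of action-object combinations the TAMP solver
    # must enumerate.
    return [o for o, s in zip(objects, scores) if s >= threshold]

# Toy scene: four objects with made-up 2-D features.
objects = ["red_block", "blue_block", "tray", "obstacle"]
feats = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.2]])
task = np.array([1.0, 0.0])  # e.g., a "stack the red block" task embedding

scores = relevancy_scores(feats, task)
relevant = prune_objects(objects, scores)
print(relevant)  # the blocks survive pruning; tray and obstacle are dropped
```

In the paper's framework, only the predicted-relevant objects are passed to the symbolic planner, so plan search scales with the (small) relevant subset rather than with every object in the scene.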