Browsing by Author "Toussaint, M."
Now showing 1 - 5 of 5
Item Open Access
Learning robotic manipulation of natural materials with variable properties for construction tasks
Institute of Electrical and Electronics Engineers, 2022-03-15
Kalousdian, N. K.; Lochnicki, G.; Hartmann, V. N.; Leder, S.; Oğuz, Özgür S.; Menges, A.; Toussaint, M.
The introduction of robotics and machine learning to architectural construction is leading to more efficient construction practices. So far, robotic construction has largely been applied to standardized materials for simple, predictable, and repetitive tasks. We present a novel mobile robotic system and a corresponding learning approach that takes a step towards the assembly of natural materials with anisotropic mechanical properties, enabling more sustainable architectural construction. Through experiments both in simulation and in the real world, we demonstrate a dynamically adjusted curriculum and randomization approach for learning manipulation tasks involving materials with biological variability, namely bamboo. Using our approach, robots are able to transport bamboo bundles and reach goal positions during the assembly of bamboo structures.

Item Open Access
Leveraging building material as part of the in-plane robotic kinematic system for collective construction
Advanced Science, 2022-06-24
Leder, S.; Kim, H.; Oguz, Ozgur Salih; Kalousdian, N. K.; Hartmann, V. N.; Menges, A.; Toussaint, M.; Sitti, M.
Although collective robotic construction systems are beginning to showcase how multi-robot systems can contribute to building construction by efficiently building low-cost, sustainable structures, the majority of research utilizes non-structural or highly customized materials. We present a modular collective robotic construction system based on a robotic actuator, which leverages timber struts both for the assembly of architectural artifacts and as part of the robot body for locomotion. The system is co-designed for in-plane assembly from an architectural, robotic, and computer science perspective in order to integrate the various hardware and software constraints into a single workflow. The system is tested in five representative physical scenarios. These proof-of-concept demonstrations showcase three tasks required for construction assembly: the ability of the system to locomote, to dynamically change the topology of connecting robotic actuators and timber struts, and to collaborate in transporting timber struts. This lays the groundwork for a future autonomous collective robotic construction system that could address collective construction assembly and further increase the flexibility of on-site construction robots through its modularity.

Item Open Access
Long-horizon multi-robot rearrangement planning for construction assembly
Institute of Electrical and Electronics Engineers, 2022-08-26
Hartmann, V. N.; Orthey, A.; Driess, D.; Oğuz, Özgür S.; Toussaint, M.
Robotic construction assembly planning aims to find feasible assembly sequences and the corresponding robot paths, and can be seen as a special case of task and motion planning (TAMP). Since construction assembly parallelizes well, it is desirable to plan for multiple robots acting concurrently. Solving TAMP instances with many robots over a long time horizon is challenging due to coordination constraints and the difficulty of choosing the right task assignment. We present a planning system that enables parallelization of complex task and motion planning problems by iteratively solving smaller subproblems. Combining optimization methods that jointly solve for manipulation constraints with a sampling-based bidirectional space-time path planner enables us to plan cooperative multi-robot manipulation with unknown arrival times. Our solver can thus complete subproblems and tasks on differing timescales and synchronize them effectively. We demonstrate the approach on multiple construction case studies to show its robustness over long planning horizons and its scalability to many objects and agents. Finally, we execute the computed plans on two robot arms to showcase feasibility in the real world.

Item Open Access
RHH-LGP: Receding horizon and heuristics-based logic-geometric programming for task and motion planning
IEEE, 2022
Braun, C. V.; Ortiz-Haro, J.; Toussaint, M.; Oğuz, Özgür S.
Sequential decision-making and motion planning for robotic manipulation induce combinatorial complexity. For long-horizon tasks, especially when the environment comprises many objects that can be interacted with, planning efficiency becomes even more important. To plan such long-horizon tasks, we present the RHH-LGP algorithm for combined task and motion planning (TAMP). First, we propose a TAMP approach (based on Logic-Geometric Programming) that effectively uses geometry-based heuristics to solve long-horizon manipulation tasks. The efficiency of this planner is then further improved by a receding-horizon formulation, resulting in RHH-LGP. We demonstrate the robustness and effectiveness of our approach on a diverse range of long-horizon tasks that require reasoning about interactions with a large number of objects. Using our framework, we can solve tasks that require multiple robots, including a mobile robot and snake-like walking robots, to autonomously form novel heterogeneous kinematic structures. By combining geometry-based heuristics with iterative planning, our approach reduces planning time by an order of magnitude in all investigated problems.

Item Open Access
Spatial reasoning via deep vision models for robotic sequential manipulation
Institute of Electrical and Electronics Engineers, 2023-10-01
Zhou, H.; Schubert, I.; Toussaint, M.; Öğüz, Salih Özgür
In this paper, we propose using deep neural architectures (i.e., vision transformers and ResNet) as heuristics for sequential decision-making in robotic manipulation problems. This formulation enables predicting the subset of objects that are relevant for completing a task. Such problems are often addressed by task and motion planning (TAMP) formulations combining symbolic reasoning and continuous motion planning. In essence, the action-object relationships are resolved as discrete, symbolic decisions that are then used to solve for manipulation motions (e.g., via nonlinear trajectory optimization). However, solving long-horizon tasks requires considering all possible action-object combinations, which limits the scalability of TAMP approaches. To overcome this combinatorial complexity, we introduce a visual perception module integrated with a TAMP solver. Given a task and an initial image of the scene, the learned model outputs the relevancy of each object to accomplishing the task. By incorporating the model's predictions into a TAMP formulation as a heuristic, the size of the search space is significantly reduced. Results show that our framework finds feasible solutions more efficiently than a state-of-the-art TAMP solver.
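The relevancy-prediction idea in the last abstract can be illustrated with a minimal sketch: prune the objects a TAMP-style search must consider before enumerating action sequences. Everything here is an illustrative assumption, not the paper's implementation — in particular, `predict_relevancy` is a hand-coded stub standing in for the learned vision model (a vision transformer or ResNet in the paper).

```python
from itertools import permutations

def predict_relevancy(scene, task):
    # Hypothetical stand-in for the learned vision model: in the paper this
    # score comes from a network applied to an image of the scene. Here we
    # simply score goal objects high and everything else low.
    return {obj: (1.0 if obj in task["goal_objects"] else 0.1)
            for obj in scene["objects"]}

def candidate_action_sequences(objects, horizon):
    # Enumerate pick-and-place orderings over `objects` up to `horizon` steps.
    return list(permutations(objects, min(horizon, len(objects))))

def pruned_search(scene, task, horizon, threshold=0.5):
    # Keep only objects the model deems relevant, then enumerate sequences
    # over that reduced set -- this is the search-space reduction the
    # abstract describes.
    scores = predict_relevancy(scene, task)
    relevant = [o for o in scene["objects"] if scores[o] >= threshold]
    return candidate_action_sequences(relevant, horizon)

scene = {"objects": ["red_block", "blue_block", "green_block", "tray", "mug"]}
task = {"goal_objects": ["red_block", "tray"]}

full = candidate_action_sequences(scene["objects"], horizon=2)  # 5 * 4 = 20
pruned = pruned_search(scene, task, horizon=2)                  # 2 * 1 = 2
print(len(full), len(pruned))  # 20 2
```

Even in this toy setting the combinatorial effect is visible: two relevant objects out of five shrink the two-step candidate set from 20 orderings to 2, and the gap widens rapidly with more objects and longer horizons.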