Simon Christoph Stein

Group(s): Computer Vision


    Steingrube, S. and Timme, M. and Wörgötter, F. and Manoonpong, P. (2010).
    Self-Organized Adaptation of Simple Neural Circuits Enables Complex Robot Behavior. Nature Physics, 6, 224-230. DOI: 10.1038/nphys1508.
    BibTeX:
    @article{steingrubetimmewoergoetter2010,
      author = {Steingrube, S. and Timme, M. and Wörgötter, F. and Manoonpong, P.},
      title = {Self-Organized Adaptation of Simple Neural Circuits Enables Complex Robot Behavior},
      pages = {224-230},
      journal = {Nature Physics},
      year = {2010},
      volume = {6},
      doi = {10.1038/nphys1508},
      abstract = {Controlling sensori-motor systems in higher animals or complex robots is a challenging combinatorial problem, because many sensory signals need to be simultaneously coordinated into a broad behavioural spectrum. To rapidly interact with the environment, this control needs to be fast and adaptive. Present robotic solutions operate with limited autonomy and are mostly restricted to few behavioural patterns. Here we introduce chaos control as a new strategy to generate complex behaviour of an autonomous robot. In the presented system, 18 sensors drive 18 motors by means of a simple neural control circuit, thereby generating 11 basic behavioural patterns (for example, orienting, taxis, self-protection and various gaits) and their combinations. The control signal quickly and reversibly adapts to new situations and also enables learning and synaptic long-term storage of behaviourally useful motor responses. Thus, such neural control provides a powerful yet simple way to self-organize versatile behaviours in autonomous agents with many degrees of freedom.}}
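    The chaos-control strategy can be illustrated on a textbook system. The sketch below is purely illustrative (the paper controls a two-neuron circuit driving a robot, not the logistic map): it stabilizes the unstable fixed point x* of the chaotic logistic map with proportional feedback whose gain K = r - 2 cancels the local slope; r, K, and the activation threshold eps are invented for the example.

```python
# Chaos control on the logistic map x -> r*x*(1-x), a textbook stand-in
# for the chaotic neural circuit used in the paper (illustrative only).
r = 3.9                      # chaotic regime
x_star = 1.0 - 1.0 / r       # unstable fixed point of the map
K = r - 2.0                  # feedback gain cancelling f'(x_star) = 2 - r

def step(x, control=True, eps=0.05):
    """One iteration; proportional feedback is applied only near x_star."""
    u = K * (x - x_star) if control and abs(x - x_star) < eps else 0.0
    return r * x * (1.0 - x) + u

x = 0.1
for _ in range(1000):
    x = step(x)

print(abs(x - x_star))  # tiny: the orbit is pinned to the formerly unstable point
```

    Once the free chaotic orbit wanders into the eps-band, the feedback makes the fixed point superstable (the deviation shrinks quadratically), which is the essence of "quick and reversible" adaptation: switching the control off releases the chaotic dynamics again.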
    Stein, S. and Schoeler, M. and Papon, J. and Wörgötter, F. (2014).
    Object Partitioning using Local Convexity. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 304-311. DOI: 10.1109/CVPR.2014.46.
    BibTeX:
    @inproceedings{steinschoelerpapon2014,
      author = {Stein, S. and Schoeler, M. and Papon, J. and Wörgötter, F.},
      title = {Object Partitioning using Local Convexity},
      pages = {304-311},
      booktitle = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
      year = {2014},
      location = {Columbus, OH, USA},
      month = {06},
      doi = {10.1109/CVPR.2014.46},
      abstract = {The problem of how to arrive at an appropriate 3D-segmentation of a scene remains difficult. While current state-of-the-art methods continue to gradually improve in benchmark performance, they also grow more and more complex, for example by incorporating chains of classifiers, which require training on large manually annotated datasets. As an alternative to this, we present a new, efficient learning- and model-free approach for the segmentation of 3D point clouds into object parts. The algorithm begins by decomposing the scene into an adjacency-graph of surface patches based on a voxel grid. Edges in the graph are then classified as either convex or concave using a novel combination of simple criteria which operate on the local geometry of these patches. This way the graph is divided into locally convex connected subgraphs, which, with high accuracy, represent object parts. Additionally, we propose a novel depth-dependent voxel grid to deal with the decreasing point-density at far distances in the point clouds. This improves segmentation, allowing the use of fixed parameters for vastly different scenes. The algorithm is straightforward to implement and requires no training data, while nevertheless producing results that are comparable to state-of-the-art methods which incorporate high-level concepts involving classification, learning and model fitting.}}
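    The edge classification at the heart of the method reduces to a simple geometric test. A minimal sketch, assuming the commonly used form of the local-convexity criterion (the paper combines it with additional criteria): an edge between patches with centroids p1, p2 and unit normals n1, n2 counts as convex when (p1 - p2)·(n1 - n2) > 0, i.e. when the normals "open away" from each other along the connecting line.

```python
def is_convex(p1, n1, p2, n2):
    """Local convexity test for an edge between two surface patches.

    p1, p2: patch centroids; n1, n2: unit surface normals.
    Convex if (p1 - p2) . (n1 - n2) > 0, i.e. the normals open
    away from each other along the connecting direction.
    """
    d = [a - b for a, b in zip(p1, p2)]
    dn = [a - b for a, b in zip(n1, n2)]
    return sum(a * b for a, b in zip(d, dn)) > 0.0

# Exterior edge of a box: top-face patch vs. side-face patch -> convex
print(is_convex((0.5, 0, 1), (0, 0, 1), (1, 0, 0.5), (1, 0, 0)))   # True
# Interior corner of a room: floor patch vs. wall patch -> concave
print(is_convex((0.5, 0, 0), (0, 0, 1), (1, 0, 0.5), (-1, 0, 0)))  # False
```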
    Stein, S. and Wörgötter, F. and Schoeler, M. and Papon, J. and Kulvicius, T. (2014).
    Convexity based object partitioning for robot applications. IEEE International Conference on Robotics and Automation (ICRA), 3213-3220. DOI: 10.1109/ICRA.2014.6907321.
    BibTeX:
    @inproceedings{steinwoergoetterschoeler2014,
      author = {Stein, S. and Wörgötter, F. and Schoeler, M. and Papon, J. and Kulvicius, T.},
      title = {Convexity based object partitioning for robot applications},
      pages = {3213-3220},
      booktitle = {IEEE International Conference on Robotics and Automation (ICRA)},
      year = {2014},
      month = {05},
      doi = {10.1109/ICRA.2014.6907321},
      abstract = {The idea that connected convex surfaces, separated by concave boundaries, play an important role for the perception of objects and their decomposition into parts has been discussed for a long time. Based on this idea, we present a new bottom-up approach for the segmentation of 3D point clouds into object parts. The algorithm approximates a scene using an adjacency-graph of spatially connected surface patches. Edges in the graph are then classified as either convex or concave using a novel, strictly local criterion. Region growing is employed to identify locally convex connected subgraphs, which represent the object parts. We show quantitatively that our algorithm, although conceptually easy to grasp and fast to compute, produces results that are comparable to far more complex state-of-the-art methods which use classification, learning and model fitting. This suggests that convexity/concavity is a powerful feature for object partitioning using 3D data. Furthermore we demonstrate that for many objects a natural decomposition into}}
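    The region-growing step described in the abstract amounts to a connected-components pass over the patch adjacency graph in which only convex edges are traversable. A minimal sketch (the patch IDs and toy graph below are invented for illustration):

```python
from collections import deque

def convex_segments(patches, convex_edges):
    """Group patches into object parts: BFS over convex edges only."""
    adj = {p: [] for p in patches}
    for a, b in convex_edges:
        adj[a].append(b)
        adj[b].append(a)
    seen, parts = set(), []
    for start in patches:
        if start in seen:
            continue
        part, queue = [], deque([start])
        seen.add(start)
        while queue:
            p = queue.popleft()
            part.append(p)
            for q in adj[p]:
                if q not in seen:
                    seen.add(q)
                    queue.append(q)
        parts.append(sorted(part))
    return parts

# Toy graph: patches 0-1-2 joined by convex edges, 3-4 by a convex edge,
# while the 2-3 edge is concave and therefore absent from convex_edges.
print(convex_segments([0, 1, 2, 3, 4], [(0, 1), (1, 2), (3, 4)]))
# -> [[0, 1, 2], [3, 4]]
```

    Each concave edge thus acts as a cut: the locally convex subgraphs that remain are reported as the object parts.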
    Schoeler, M. and Stein, S. and Papon, J. and Abramov, A. and Wörgötter, F. (2014).
    Fast self-supervised on-line training for object recognition specifically for robotic applications. International Conference on Computer Vision Theory and Applications (VISAPP), 2, 94-103.
    BibTeX:
    @inproceedings{schoelersteinpapon2014,
      author = {Schoeler, M. and Stein, S. and Papon, J. and Abramov, A. and Wörgötter, F.},
      title = {Fast self-supervised on-line training for object recognition specifically for robotic applications},
      pages = {94-103},
      booktitle = {International Conference on Computer Vision Theory and Applications (VISAPP)},
      year = {2014},
      volume = {2},
      month = {Jan},
      abstract = {Today most recognition pipelines are trained at an off-line stage, providing systems with pre-segmented images and predefined objects, or at an on-line stage, which requires a human supervisor to tediously control the learning. Self-supervised on-line training of recognition pipelines without human intervention is a highly desirable goal, as it allows systems to learn unknown, environment-specific objects on-the-fly. We propose a fast and automatic system, which can extract and learn unknown objects with minimal human intervention by employing a two-level pipeline combining the advantages of RGB-D sensors for object extraction and high-resolution cameras for object recognition. Furthermore, we significantly improve recognition results with local features by implementing a novel keypoint orientation scheme, which leads to highly invariant but discriminative object signatures. Using only one image per object for training, our system is able to achieve a recognition rate of 79% for 18 objects, benchmarked on 42 scenes with random poses, scales and occlusion, while only taking 7 seconds for the training. Additionally, we evaluate our orientation scheme on the state-of-the-art 56-object SDU-dataset, boosting accuracy for one training view per object by +37% to 78% and peaking at a performance of 98% for 11 training views.}}
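    The general idea behind keypoint orientation assignment can be sketched in a few lines. Note this is a generic SIFT-style illustration, not the paper's specific scheme: build a histogram of gradient directions over the keypoint's patch, weighted by gradient magnitude, and take the peak as the reference orientation, so the descriptor can be rotated into a canonical frame.

```python
import math

def dominant_orientation(patch, bins=36):
    """Assign a reference orientation to a keypoint patch (generic
    SIFT-style sketch, not the paper's scheme): histogram of gradient
    directions weighted by gradient magnitude; return the peak angle."""
    h, w = len(patch), len(patch[0])
    hist = [0.0] * bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            dx = patch[y][x + 1] - patch[y][x - 1]   # central differences
            dy = patch[y + 1][x] - patch[y - 1][x]
            mag = math.hypot(dx, dy)
            ang = math.atan2(dy, dx) % (2 * math.pi)
            hist[int(ang / (2 * math.pi) * bins) % bins] += mag
    peak = max(range(bins), key=lambda i: hist[i])
    return (peak + 0.5) * 360.0 / bins   # bin centre, in degrees

# A horizontal intensity ramp: every gradient points along +x,
# so the dominant orientation falls in the bin around 0 degrees.
ramp = [[float(x) for x in range(8)] for _ in range(8)]
print(dominant_orientation(ramp))   # 5.0 (centre of the first 10-degree bin)
```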
    Sutterlütti, R. and Stein, S. C. and Tamosiunaite, M. and Wörgötter, F. (2014).
    Object names correspond to convex entities. Cognitive Processing, 15(1), 69-71. DOI: 10.1007/s10339-013-0597-6.
    BibTeX:
    @article{sutterluettisteintamosiunaite2014,
      author = {Sutterlütti, R. and Stein, S. C. and Tamosiunaite, M. and Wörgötter, F.},
      title = {Object names correspond to convex entities},
      pages = {69--71},
      journal = {Cognitive Processing},
      year = {2014},
      volume = {15},
      number = {1},
      language = {English},
      publisher = {Springer Berlin Heidelberg},
      doi = {10.1007/s10339-013-0597-6}}
    Kappel, D. and Legenstein, R. and Habenschuss, S. and Hsieh, M. and Maass, W. (2018).
    A dynamic connectome supports the emergence of stable computational function of neural circuits through reward-based learning. eNeuro, 5(2), ENEURO.0301-17.2018. DOI: 10.1523/ENEURO.0301-17.2018.
    BibTeX:
    @article{kappellegensteinhabenschuss2018,
      author = {Kappel, D. and Legenstein, R. and Habenschuss, S. and Hsieh, M. and Maass, W.},
      title = {A dynamic connectome supports the emergence of stable computational function of neural circuits through reward-based learning},
      pages = {ENEURO.0301-17.2018},
      journal = {eNeuro},
      year = {2018},
      volume = {5},
      number = {2},
      publisher = {Society for Neuroscience},
      url = {http://www.eneuro.org/content/5/2/ENEURO.0301-17.2018},
      doi = {10.1523/ENEURO.0301-17.2018},
      abstract = {Synaptic connections between neurons in the brain are dynamic because of continuously ongoing spine dynamics, axonal sprouting, and other processes. In fact, it was recently shown that the spontaneous synapse-autonomous component of spine dynamics is at least as large as the component that depends on the history of pre- and postsynaptic neural activity. These data are inconsistent with common models for network plasticity and raise the following questions: how can neural circuits maintain a stable computational function in spite of these continuously ongoing processes, and what could be functional uses of these ongoing processes? Here, we present a rigorous theoretical framework for these seemingly stochastic spine dynamics and rewiring processes in the context of reward-based learning tasks. We show that spontaneous synapse-autonomous processes, in combination with reward signals such as dopamine, can explain the capability of networks of neurons in the brain to configure themselves for specific computational tasks, and to compensate automatically for later changes in the network or task. Furthermore, we show theoretically and through computer simulations that stable computational performance is compatible with continuously ongoing synapse-autonomous changes. After reaching good computational performance it causes primarily a slow drift of network architecture and dynamics in task-irrelevant dimensions, as observed for neural activity in motor cortex and other areas. On the more abstract level of reinforcement learning the resulting model gives rise to an understanding of reward-driven network plasticity as continuous sampling of network configurations.}}
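    The paper's central observation, stable task performance despite continuously drifting synapses, can be caricatured in a few lines: a REINFORCE-style update on a two-armed bandit with additive weight noise. This is an invented toy, not the authors' synaptic-sampling model; all parameters below are illustrative. Performance stabilises even though the weight keeps wandering, because the saturating policy makes the drift task-irrelevant.

```python
import math
import random

def run(steps=2000, lr=0.2, sigma=0.05, seed=0):
    """Toy reward-based learning with continuously drifting parameters:
    REINFORCE on a two-armed bandit plus per-step Gaussian weight noise
    (a stand-in for spontaneous synapse-autonomous dynamics)."""
    rng = random.Random(seed)
    w, rewards = 0.0, []
    for _ in range(steps):
        p = 1.0 / (1.0 + math.exp(-w))              # prob. of choosing arm 0
        arm = 0 if rng.random() < p else 1
        r = 1.0 if arm == 0 else 0.0                # arm 0 always pays off
        grad = (1.0 - p) if arm == 0 else -p        # d/dw log pi(arm)
        w += lr * r * grad + rng.gauss(0.0, sigma)  # learning + ongoing noise
        rewards.append(r)
    return w, rewards

w, rewards = run()
print(sum(rewards[-200:]) / 200)   # near 1: performance is stable despite drift
```

    Once the sigmoid saturates, the noise moves the weight along directions that barely change the policy, mirroring the paper's "slow drift in task-irrelevant dimensions".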

    © 2011 - 2017 Dept. of Computational Neuroscience • comments to: sreich _at_ gwdg.de • Impressum / Site Info