Simon Reich

Group(s): Computer Vision
Email: sreich@gwdg.de
Phone: +49 551 / 39 10764
Room: E.01.103

    Reich, S. and Abramov, A. and Papon, J. and Wörgötter, F. and Dellen, B. (2013).
    A Novel Real-time Edge-Preserving Smoothing Filter. Proceedings of the International Conference on Computer Vision Theory and Applications - Volume 1: VISAPP, pp. 5-14, vol. 1. DOI: 10.5220/0004214300050014.
    BibTeX:
    @inproceedings{reichabramovpapon2013,
      author = {Reich, S. and Abramov, A. and Papon, J. and Wörgötter, F. and Dellen, B.},
      title = {A Novel Real-time Edge-Preserving Smoothing Filter},
      pages = {5-14},
      booktitle = {Proceedings of the International Conference on Computer Vision Theory and Applications - Volume 1: VISAPP},
      year = {2013},
      volume = {1},
      location = {Barcelona (Spain)},
      month = {February 21-24},
      organization = {INSTICC},
      publisher = {SciTePress},
      url = {http://www.visapp.visigrapp.org/Abstracts/2013/VISAPP_2013_Abstracts.htm},
      doi = {10.5220/0004214300050014},
      abstract = {The segmentation of textured and noisy areas in images is a very challenging task due to the large variety of objects and materials in natural environments, which cannot be solved by a single similarity measure. In this paper, we address this problem by proposing a novel edge-preserving texture filter, which smudges the color values inside uniformly textured areas, thus making the processed image more workable for color-based image segmentation. Due to the highly parallel structure of the method, the implementation on a GPU runs in real-time, allowing us to process standard images within tens of milliseconds. By preprocessing images with this novel filter before applying a recent real-time color-based image segmentation method, we obtain significant improvements in performance for images from the Berkeley dataset, outperforming an alternative version using a standard bilateral filter for preprocessing. We further show that our combined approach leads to better segmentations in terms of a standard performance measure than graph-based and mean-shift segmentation for the Berkeley image dataset.}}
    Abstract: The segmentation of textured and noisy areas in images is a very challenging task due to the large variety of objects and materials in natural environments, which cannot be solved by a single similarity measure. In this paper, we address this problem by proposing a novel edge-preserving texture filter, which smudges the color values inside uniformly textured areas, thus making the processed image more workable for color-based image segmentation. Due to the highly parallel structure of the method, the implementation on a GPU runs in real-time, allowing us to process standard images within tens of milliseconds. By preprocessing images with this novel filter before applying a recent real-time color-based image segmentation method, we obtain significant improvements in performance for images from the Berkeley dataset, outperforming an alternative version using a standard bilateral filter for preprocessing. We further show that our combined approach leads to better segmentations in terms of a standard performance measure than graph-based and mean-shift segmentation for the Berkeley image dataset.
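    The proposed GPU filter itself is not reproduced here. As a rough illustration of the kind of preprocessing pipeline the abstract compares against, the Python sketch below runs a standard bilateral filter (the baseline mentioned above) followed by an off-the-shelf color-based segmentation stand-in (OpenCV's mean-shift filtering); the image path and all parameter values are placeholder assumptions.

    # Baseline-style pipeline for illustration only: standard bilateral
    # smoothing followed by a color-based segmentation stand-in.
    # This is NOT the filter proposed in the paper; parameters are invented.
    import cv2

    img = cv2.imread("input.png")  # placeholder path
    assert img is not None, "replace input.png with a real image"

    # Edge-preserving smoothing: the color sigma smudges textured regions
    # while object boundaries stay sharp.
    smoothed = cv2.bilateralFilter(img, d=9, sigmaColor=75, sigmaSpace=75)

    # Simple color-based segmentation stand-in (mean-shift filtering).
    segmented = cv2.pyrMeanShiftFiltering(smoothed, sp=20, sr=40)

    cv2.imwrite("segmented.png", segmented)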
    Reich, S. and Wörgötter, F. and Dellen, B. (2018).
    A Real-Time Edge-Preserving Denoising Filter. Proceedings of the 13th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP): Visapp, pp. 85-94, vol. 4. DOI: 10.5220/0006509000850094.
    BibTeX:
    @inproceedings{reichwoergoetterdellen2018,
      author = {Reich, S. and Wörgötter, F. and Dellen, B.},
      title = {A Real-Time Edge-Preserving Denoising Filter},
      pages = {85-94},
      booktitle = {Proceedings of the 13th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP): Visapp},
      year = {2018},
      volume = {4},
      location = {Funchal, Madeira (Portugal)},
      month = {January 27-29},
      organization = {INSTICC},
      publisher = {SciTePress},
      doi = {10.5220/0006509000850094},
      abstract = {Even in today's world, where augmented-reality glasses and 3D sensors are rapidly becoming less expensive and more widely used, the most important sensor remains the 2D RGB camera. Every camera is an optical device and prone to sensor noise, especially in dark environments or environments with extremely high dynamic range. The filter introduced here removes a wide variety of noise, for example Gaussian noise and salt-and-pepper noise, while preserving edges. Due to the highly parallel structure of the method, the implementation on a GPU runs in real-time, allowing us to process standard images within tens of milliseconds. The filter is first tested on 2D image data, and on the Berkeley Image Dataset and the COCO Dataset it outperforms other standard methods. Afterwards, we show a generalization to arbitrary dimensions using noisy low-level sensor data. As a result, the filter can be used not only for image enhancement, but also for noise reduction on sensors such as accelerometers, gyroscopes, or GPS trackers, which are widely used in robotic applications.}}
    Abstract: Even in today's world, where augmented-reality glasses and 3D sensors are rapidly becoming less expensive and more widely used, the most important sensor remains the 2D RGB camera. Every camera is an optical device and prone to sensor noise, especially in dark environments or environments with extremely high dynamic range. The filter introduced here removes a wide variety of noise, for example Gaussian noise and salt-and-pepper noise, while preserving edges. Due to the highly parallel structure of the method, the implementation on a GPU runs in real-time, allowing us to process standard images within tens of milliseconds. The filter is first tested on 2D image data, and on the Berkeley Image Dataset and the COCO Dataset it outperforms other standard methods. Afterwards, we show a generalization to arbitrary dimensions using noisy low-level sensor data. As a result, the filter can be used not only for image enhancement, but also for noise reduction on sensors such as accelerometers, gyroscopes, or GPS trackers, which are widely used in robotic applications.
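    The paper's own filter is not reproduced here. The abstract notes that it generalizes from images to one-dimensional sensor streams such as accelerometers or gyroscopes; the sketch below only illustrates that general idea on a synthetic 1-D signal, using a plain median filter as a stand-in (it suppresses salt-and-pepper-like outliers while keeping step edges). The signal shape, noise levels, and window size are assumptions.

    # Illustration only: edge-preserving denoising of a 1-D sensor stream,
    # with a median filter standing in for the filter proposed in the paper.
    import numpy as np
    from scipy.signal import medfilt

    rng = np.random.default_rng(0)

    # Synthetic "accelerometer" trace: a step edge plus Gaussian noise
    # and a few salt-and-pepper-like outliers (placeholder data).
    t = np.arange(1000)
    signal = np.where(t < 500, 0.0, 1.0) + rng.normal(0.0, 0.05, t.size)
    outliers = rng.choice(t.size, size=20, replace=False)
    signal[outliers] += rng.choice([-2.0, 2.0], size=20)

    # Median filtering removes the outliers but keeps the step at t = 500 sharp.
    denoised = medfilt(signal, kernel_size=9)
    print("std before/after (flat region):",
          float(signal[:450].std()), float(denoised[:450].std()))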
    Reich, S. and Aein, M. J. and Wörgötter, F. (2018).
    Context Dependent Action Affordances and their Execution using an Ontology of Actions and 3D Geometric Reasoning. Proceedings of the 13th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP): Visapp, pp. 218-229, vol. 5. DOI: 10.5220/0006562502180229.
    BibTeX:
    @inproceedings{reichaeinwoergoetter2018,
      author = {Reich, S. and Aein, M. J. and Wörgötter, F.},
      title = {Context Dependent Action Affordances and their Execution using an Ontology of Actions and 3D Geometric Reasoning},
      pages = {218-229},
      booktitle = {Proceedings of the 13th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP): Visapp},
      year = {2018},
      volume = {5},
      location = {Funchal, Madeira (Portugal)},
      month = {January 27-29},
      organization = {INSTICC},
      publisher = {SciTePress},
      doi = {10.5220/0006562502180229},
      abstract = {When looking at an object, humans can quickly and efficiently assess which actions are possible given the scene context. This task remains hard for machines. Here we focus on manipulation actions and, in the first part of this study, define an object-action linked ontology for such context-dependent affordance analysis. We break down every action into three hierarchical pre-condition layers, starting at the top with abstract object relations (which need to be fulfilled) and arriving in three steps at the movement primitives required to execute the action. This ontology is then, in the second part of this work, linked to actual scenes. First, the system looks at the scene and suggests some actions for any selected object. Once one of them is chosen, a simple geometrical reasoning scheme fills that action's movement primitives with specific parameter values, which are then executed by the robot. The viability of this approach is demonstrated by analysing several scenes and a large number of manipulations.}}
    Abstract: When looking at an object, humans can quickly and efficiently assess which actions are possible given the scene context. This task remains hard for machines. Here we focus on manipulation actions and, in the first part of this study, define an object-action linked ontology for such context-dependent affordance analysis. We break down every action into three hierarchical pre-condition layers, starting at the top with abstract object relations (which need to be fulfilled) and arriving in three steps at the movement primitives required to execute the action. This ontology is then, in the second part of this work, linked to actual scenes. First, the system looks at the scene and suggests some actions for any selected object. Once one of them is chosen, a simple geometrical reasoning scheme fills that action's movement primitives with specific parameter values, which are then executed by the robot. The viability of this approach is demonstrated by analysing several scenes and a large number of manipulations.
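    As a toy illustration of the layered idea sketched in the abstract (abstract object relations at the top, movement primitives at the bottom), the snippet below defines a minimal two-level action table and an affordance check. All action names, relation predicates, and primitives are invented and do not reproduce the paper's ontology.

    # Hypothetical, much-simplified action ontology: each action lists the
    # abstract scene relations that must hold plus the movement primitives
    # that would execute it. Names are invented for illustration.
    ACTIONS = {
        "pick": {
            "preconditions": {("hand", "empty"), ("object", "graspable")},
            "primitives": ["approach", "grasp", "lift"],
        },
        "place": {
            "preconditions": {("hand", "holding_object"), ("surface", "free")},
            "primitives": ["approach", "lower", "release"],
        },
    }

    def afforded_actions(scene_relations):
        """Return the actions whose abstract preconditions hold in the scene."""
        return [name for name, spec in ACTIONS.items()
                if spec["preconditions"] <= scene_relations]

    # Example scene: an empty hand next to a graspable object.
    scene = {("hand", "empty"), ("object", "graspable")}
    print(afforded_actions(scene))  # -> ['pick']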
    Reich, S. and Seer, M. and Berscheid, L. and Wörgötter, F. and Braun, J. (2018).
    Omnidirectional visual odometry for flying robots using low-power hardware. Proceedings of the 13th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP): Visapp, pp. 499-507, vol. 5. DOI: 10.5220/0006509704990507.
    BibTeX:
    @inproceedings{reichseerberscheid2018,
      author = {Reich, S. and Seer, M. and Berscheid, L. and Wörgötter, F. and Braun, J.},
      title = {Omnidirectional visual odometry for flying robots using low-power hardware},
      pages = {499-507},
      booktitle = {Proceedings of the 13th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP): Visapp},
      year = {2018},
      volume = {5},
      location = {Funchal, Madeira (Portugal)},
      month = {January 27-29},
      organization = {INSTICC},
      publisher = {SciTePress},
      doi = {10.5220/0006509704990507},
      abstract = {Currently, flying robotic systems are being developed for package delivery, aerial exploration in catastrophe areas, or maintenance tasks. While many flying robots are used in connection with powerful, stationary computing systems, the challenge in autonomous devices, especially in indoor-rescue or rural missions, lies in the need to do all processing on board on low-power hardware. Furthermore, the device cannot rely on a well-ordered or marked environment. These requirements make computer vision an important and challenging task for such systems. To cope with the combined problems of low frame rates and high movement rates of the aerial device, a hyperbolic mirror is mounted on top of a quadrocopter, recording omnidirectional images that can capture features during fast pose changes. The viability of this approach is demonstrated by analysing several scenes. Here, we present a novel autonomous robot, which performs all computations online on low-power embedded hardware and is therefore truly autonomous. Furthermore, we introduce several novel algorithms with low computational complexity, which allow us to do without external resources.}}
    Abstract: Currently, flying robotic systems are being developed for package delivery, aerial exploration in catastrophe areas, or maintenance tasks. While many flying robots are used in connection with powerful, stationary computing systems, the challenge in autonomous devices, especially in indoor-rescue or rural missions, lies in the need to do all processing on board on low-power hardware. Furthermore, the device cannot rely on a well-ordered or marked environment. These requirements make computer vision an important and challenging task for such systems. To cope with the combined problems of low frame rates and high movement rates of the aerial device, a hyperbolic mirror is mounted on top of a quadrocopter, recording omnidirectional images that can capture features during fast pose changes. The viability of this approach is demonstrated by analysing several scenes. Here, we present a novel autonomous robot, which performs all computations online on low-power embedded hardware and is therefore truly autonomous. Furthermore, we introduce several novel algorithms with low computational complexity, which allow us to do without external resources.
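    The odometry pipeline itself is not reproduced here. As a small illustration of how images from a hyperbolic mirror are commonly prepared for feature tracking, the sketch below unwraps an omnidirectional frame into a panorama with a polar remapping in OpenCV; the image path, mirror center, and radii are placeholder assumptions.

    # Illustration only: unwrap a catadioptric (mirror) frame into a panorama
    # by polar-to-Cartesian remapping. Not the paper's odometry pipeline.
    import cv2
    import numpy as np

    img = cv2.imread("omni_frame.png")  # placeholder path
    assert img is not None, "replace omni_frame.png with a real frame"

    cx, cy = 320.0, 240.0        # assumed mirror center (pixels)
    r_min, r_max = 40.0, 230.0   # assumed inner/outer mirror radii (pixels)

    out_w, out_h = 720, int(r_max - r_min)  # panorama size: angle x radius
    theta = np.linspace(0.0, 2.0 * np.pi, out_w, endpoint=False)
    radius = np.linspace(r_min, r_max, out_h)

    # Each panorama pixel (row = radius, column = angle) samples the
    # corresponding point on the original mirror image.
    map_x = (cx + np.outer(radius, np.cos(theta))).astype(np.float32)
    map_y = (cy + np.outer(radius, np.sin(theta))).astype(np.float32)

    panorama = cv2.remap(img, map_x, map_y, interpolation=cv2.INTER_LINEAR)
    cv2.imwrite("panorama.png", panorama)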
    Ivanovska, T. and Reich, S. and Bevec, R. and Gosar, Z. and Tamosiunaite, M. and Ude, A. and Wörgötter, F. (2018).
    Visual Inspection And Error Detection In a Reconfigurable Robot Workcell: An Automotive Light Assembly Example. Proceedings of the 13th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP): Visapp (VCEA), pp. 607-615, vol. 5. DOI: 10.5220/0006666506070615.
    BibTeX:
    @inproceedings{ivanovskareichbevec2018,
      author = {Ivanovska, T. and Reich, S. and Bevec, R. and Gosar, Z. and Tamosiunaite, M. and Ude, A. and Wörgötter, F.},
      title = {Visual Inspection And Error Detection In a Reconfigurable Robot Workcell: An Automotive Light Assembly Example},
      pages = {607-615},
      booktitle = {Proceedings of the 13th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP): Visapp (VCEA)},
      year = {2018},
      volume = {5},
      location = {Funchal, Madeira (Portugal)},
      month = {January 27-29},
      organization = {INSTICC},
      publisher = {SciTePress},
      doi = {10.5220/0006666506070615},
      abstract = {Small and medium-sized enterprises (SMEs) often rely on small-batch production. This leads to decreasing product lifetimes and also to more frequent product launches. In order to assist such production, a highly reconfigurable robot workcell is being developed. In this work, a visual inspection system designed for this robot workcell is presented and discussed in the context of an automotive light assembly example. The proposed framework is implemented using the ROS and OpenCV libraries. We describe the hardware and software components of the framework and explain the system's benefits compared to other commercial packages.}}
    Abstract: Small and medium-sized enterprises (SMEs) often rely on small-batch production. This leads to decreasing product lifetimes and also to more frequent product launches. In order to assist such production, a highly reconfigurable robot workcell is being developed. In this work, a visual inspection system designed for this robot workcell is presented and discussed in the context of an automotive light assembly example. The proposed framework is implemented using the ROS and OpenCV libraries. We describe the hardware and software components of the framework and explain the system's benefits compared to other commercial packages.
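    The abstract states that the framework is built on ROS and OpenCV but gives no implementation details. The minimal node below only sketches the typical glue between the two, subscribing to a camera topic and handing frames to an OpenCV check; the topic name and the toy brightness check are invented for illustration.

    # Minimal ROS/OpenCV glue, for illustration only: subscribe to a camera
    # topic, convert to an OpenCV image, run a (toy) inspection check.
    import cv2
    import rospy
    from cv_bridge import CvBridge
    from sensor_msgs.msg import Image

    bridge = CvBridge()

    def on_image(msg):
        frame = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Toy "inspection": flag frames that are too dark to evaluate.
        if gray.mean() < 30:
            rospy.logwarn("frame too dark for inspection")

    rospy.init_node("visual_inspection_sketch")
    rospy.Subscriber("/camera/image_raw", Image, on_image)
    rospy.spin()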
