
Event-based Vision Resources

Survey paper

  • Gallego, G., Delbruck, T., Orchard, G., Bartolozzi, C., Taba, B., Censi, A., Leutenegger, S., Davison, A., Conradt, J., Daniilidis, K., Scaramuzza, D.,
    Event-based Vision: A Survey,
    IEEE Trans. Pattern Anal. Machine Intell. (TPAMI), 2020.

Workshops

  • MFI 2022 First Neuromorphic Event Sensor Fusion Workshop, with videos including an Event Sensor Fusion Jeopardy game - Virtual, 2022.
    • Video playlist
  • tinyML Neuromorphic Engineering Forum - Virtual, 2022.
    • Video playlist
  • CVPR 2021 Third International Workshop on Event-based Vision - Virtual.
    • Videos
  • Neuro-Inspired Computational Elements (NICE) Workshop Series
    • Videos
  • IEEE Embedded Vision Workshop Series, with focus on Biologically-inspired vision and embedded systems.
  • Capo Caccia Workshops toward Cognitive Neuromorphic Engineering.
  • The Telluride Neuromorphic Cognition Engineering Workshops.
    • Videos
    • Telluride 2020 (Online): YouTube playlist, Slides
  • CVPR 2019 Second International Workshop on Event-based Vision and Smart Cameras - Slides and Videos available on the website.
    • Videos
  • IROS 2018 Unconventional Sensing and Processing for Robotic Visual Perception.
  • ICRA 2017 First International Workshop on Event-based Vision - Slides and Videos available on the website.
    • Videos
  • IROS 2015 Event-Based Vision for High-Speed Robotics (slides), Workshop on Alternative Sensing for Robot Perception.
  • ICRA 2015 Workshop on Innovative Sensing for Robotics, with a focus on Neuromorphic Sensors.

Devices & Companies Manufacturing them

  • DVS (Dynamic Vision Sensor): Lichtsteiner, P., Posch, C., and Delbruck, T., A 128x128 120dB 15μs latency asynchronous temporal contrast vision sensor, IEEE J. Solid-State Circuits, 43(2):566-576, 2008. PDF
    • Product page at iniVation. Buy a DVS
    • Product specifications
    • User guide
    • Introductory videos about the DVS technology
    • iniVation AG invents, produces and sells neuromorphic technologies, with a special focus on event-based vision for business applications. Slides by S. E. Jakobsen, board member of iniVation.
    • Event Cameras - Tutorial - Tobi Delbruck, version 4
  • Samsung's DVS
    • Slides and Video by Hyunsurk Eric Ryu, Samsung Electronics (2019).
    • Suh et al., A 1280×960 Dynamic Vision Sensor with a 4.95-μm Pixel Pitch and Motion Artifact Minimization, IEEE Int. Symp. Circuits and Systems (ISCAS), 2020.
    • Son, B., et al., A 640×480 dynamic vision sensor with a 9µm pixel and 300Meps address-event representation, IEEE Int. Solid-State Circuits Conf. (ISSCC), 2017, pp. 66-67.
    • SmartThings Vision, a commercial product for home monitoring (sold in Australia).
    • Paper at IEDM 2019, about low-latency applications using Samsung's VGA DVS.
  • DAVIS (Dynamic and Active Pixel Vision Sensor) : Brandli, C., Berner, R., Yang, M., Liu, S.-C., Delbruck, T., A 240x180 130 dB 3 µs Latency Global Shutter Spatiotemporal Vision Sensor, IEEE J. Solid-State Circuits, 49(10):2333-2341, 2014. PDF
    • Product page at iniVation. Buy a DAVIS
    • Product specifications
    • User guide
    • Color-DAVIS: Li, C., Brandli, C., Berner, R., Liu, H., Yang, M., Liu, S.-C., Delbruck, T., Design of an RGBW Color VGA Rolling and Global Shutter Dynamic and Active-Pixel Vision Sensor, IEEE Int. Symp. Circuits and Systems (ISCAS), 2015, pp. 718-721.
    • SDAVIS192: Moeys, D. P., Corradi, F., Li, C., Bamford, S. A., Longinotti, L., Voigt, F. F., Berry, S., Taverni, G., Helmchen, F., Delbruck, T., A Sensitive Dynamic and Active Pixel Vision Sensor for Color or Neural Imaging Applications, IEEE Trans. Biomed. Circuits Syst. 12(1):123-136 2018.
    • DAVIS346: Taverni, G., Moeys, D. P., Li, C., Cavaco, C., Motsnyi, V., San Segundo Bello, D., Delbruck, T.,
      Front and Back Illuminated Dynamic and Active Pixel Vision Sensors Comparison,
      IEEE Trans. Circuits Syst. Express Briefs, 2018
  • Insightness's Silicon Eye QVGA event sensor.
    • The Silicon Eye Technology
    • Slides and Video by Stefan Isler (2019).
    • Slides and Video by Christian Brandli, CEO and co-founder of Insightness (2017).
  • PROPHESEE’s Metavision Sensor and Software
    • ATIS (Asynchronous Time-based Image Sensor): Posch, C., Matolin, D., Wohlgenannt, R., A QVGA 143 dB Dynamic Range Frame-Free PWM Image Sensor With Lossless Pixel-Level Video Compression and Time-Domain CDS, IEEE J. Solid-State Circuits, 46(1):259-275, 2011. PDF, YouTube, YouTube
    • Prophesee Gen4 is described in: Finateu et al., A 1280×720 Back-Illuminated Stacked Temporal Contrast Event-Based Vision Sensor with 4.86μm Pixels, 1.066GEPS Readout, Programmable Event-Rate Controller and Compressive Data-Formatting Pipeline, IEEE Int. Solid-State Circuits Conf. (ISSCC), 2020, pp. 112-114.
    • Buy a Prophesee packaged sensor VGA
    • Prophesee Cameras Specifications
    • What is event-based vision and sample applications, YouTube
    • Download free or buy our software
    • Documentation and tutorials
    • Knowledge Base and Community Forum
  • CelePixel, Shanghai. CeleX-V: the first 1 Mega-pixel event-camera sensor.
  • Sensitive DVS (sDVS)
    • Leñero-Bardallo, J. A., Serrano-Gotarredona, T., Linares-Barranco, B., A 3.6us Asynchronous Frame-Free Event-Driven Dynamic-Vision-Sensor, IEEE J. of Solid-State Circuits, 46(6):1443-1455, 2011.
    • Serrano-Gotarredona, T. and Linares-Barranco, B., A 128x128 1.5% Contrast Sensitivity 0.9% FPN 3us Latency 4mW Asynchronous Frame-Free Dynamic Vision Sensor Using Transimpedance Amplifiers, IEEE J. Solid-State Circuits, 48(3):827-838, 2013.
  • DLS (Dynamic Line Sensor): Posch, C., Hofstaetter, M., Matolin, D., Vanstraelen, G., Schoen, P., Donath, N., and Litzenberger, M., A dual-line optical transient sensor with on-chip precision time-stamp generation, IEEE Int. Solid-State Circuits Conf. - Digest of Technical Papers, Lisbon Falls, MN, US, 2007.
    • Fact sheet at AIT.
  • LWIR DVS: Posch, C., Matolin, D., Wohlgenannt, R., Maier, T., Litzenberger, M., A Microbolometer Asynchronous Dynamic Vision Sensor for LWIR, IEEE Sensors Journal, 9(6):654-664, 2009.
    • Prototype; not commercially available.
  • Smart DVS (GAEP): Posch, C., Hoffstaetter, M., Schoen, P., A SPARC-compatible general purpose Address-Event processor with 20-bit 10ns-resolution asynchronous sensor data interface in 0.18um CMOS, IEEE Int. Symp. Circuits and Systems (ISCAS), 2010.
    • Prototype; not commercially available.
  • PDAVIS: Haessig, G. et al., Bio-inspired Polarization Event Camera,
    arXiv [cs.CV] (2021) PDAVIS video
    • Prototype; not commercially available.
  • Center Surround Event Camera (CSDVS): Delbruck, T., Li, C., Graca, R. & Mcreynolds, B.,
    Utility and Feasibility of a Center Surround Event Camera
    arXiv [cs.CV] (2022) CSDVS videos
    • Proposed architecture.
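Common to all the sensors listed above is the temporal-contrast principle introduced with the DVS (Lichtsteiner et al., 2008): each pixel fires an event when its log-intensity changes by more than a contrast threshold. A minimal, idealized sketch of that event-generation model (the function name and threshold value are illustrative, not taken from any reference implementation):

```python
import numpy as np

def dvs_events(frames, timestamps, theta=0.2):
    # Idealized temporal-contrast model: a pixel emits an event whenever its
    # log-intensity deviates from a per-pixel reference by the threshold theta.
    # Multiple threshold crossings within one frame step are collapsed into a
    # single event here; a real sensor would report them separately.
    eps = 1e-6
    ref = np.log(np.asarray(frames[0], dtype=np.float64) + eps)
    events = []  # (t, x, y, polarity)
    for frame, t in zip(frames[1:], timestamps[1:]):
        log_i = np.log(np.asarray(frame, dtype=np.float64) + eps)
        diff = log_i - ref
        ys, xs = np.nonzero(np.abs(diff) >= theta)
        for x, y in zip(xs, ys):
            pol = 1 if diff[y, x] > 0 else -1
            events.append((t, int(x), int(y), pol))
            ref[y, x] += pol * theta  # reset reference toward the new level
    return events
```

Feeding two frames where a single pixel brightens produces a single positive-polarity event at that pixel, which is the asynchronous, sparse output that distinguishes these sensors from frame cameras.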

Companies working on Event-based Vision

  • iniVation AG invents, produces and sells neuromorphic technologies, with a special focus on event-based vision for business applications.
  • iniLabs AG invents neuromorphic technologies for research.
  • Samsung develops Gen2 and Gen3 dynamic vision sensors and event-based vision solutions.
    • IBM Research (Synapse project) and Samsung partnered to combine the TrueNorth chip (brain) with a DVS (eye).
  • Prophesee (formerly Chronocam) is the inventor and supplier of four generations of event-based sensors, including commercial-grade versions as well as the industry’s largest software suite. The company focuses on industrial, mobile-IoT and automotive applications.
  • Insightness AG builds visual systems to give mobile devices spatial awareness. The Silicon Eye Technology.
  • SLAMcore develops localisation and mapping solutions for AR/VR, robotics and autonomous vehicles.
  • CelePixel (formerly Hillhouse Technology) offers integrated sensory platforms that incorporate various components and technologies, including a processing chipset and an image sensor (a dynamic vision sensor called CeleX).
  • AIT Austrian Institute of Technology sells neuromorphic sensor products.
    • Inspection during production of carton packs
    • UCOS Universal Counting Sensor
    • IVS Industrial Vision Sensor

Neuromorphic Systems

  • Serrano-Gotarredona, T. , Andreou, A.G. , Linares-Barranco, B.,
    AER Image Filtering Architecture for Vision Processing Systems,
    IEEE Trans. Circuits Syst. I, Fundam. Theory Appl., 46(9):1064-1071, 1999.
  • Serrano-Gotarredona, R., Oster, M., Lichtsteiner, P., Linares-Barranco, A., Paz-Vicente, R., Gomez-Rodriguez, F., Riis, H.K., Delbruck, T., Liu, S.-C., Zahnd, S., Whatley, A.M., Douglas, R., Hafliger, P., Jimenez-Moreno, G., Civit, A., Serrano-Gotarredona, T., Acosta-Jimenez, A., Linares-Barranco, B.,
    AER building blocks for multi-layer multi-chip neuromorphic vision systems,
    Advances in neural information processing systems, 1217-1224, 2006.
  • Liu, S.-C. and Delbruck, T.,
    Neuromorphic sensory systems,
    Current Opinion in Neurobiology, 20:3(288-295), 2010.
  • Zamarreño-Ramos, C., Linares-Barranco, A., Serrano-Gotarredona, T., Linares-Barranco, B.,
    Multi-Casting Mesh AER: A Scalable Assembly Approach for Reconfigurable Neuromorphic Structured AER Systems. Application to ConvNets,
    IEEE Trans. Biomed. Circuits Syst., 7(1):82-102, 2013.
  • Liu, S.-C., Delbruck, T., Indiveri, G., Whatley, A., Douglas, R.,
    Event-Based Neuromorphic Systems,
    Wiley. ISBN: 978-1-118-92762-5, 2014.
  • Chicca, E., Stefanini, F., Bartolozzi, C., Indiveri, G.,
    Neuromorphic Electronic Circuits for Building Autonomous Cognitive Systems,
    Proc. IEEE, 102(9):1367-1388, 2014.
  • Vanarse, A., Osseiran, A., Rassau, A.,
    A Review of Current Neuromorphic Approaches for Vision, Auditory, and Olfactory Sensors,
    Front. Neurosci. (2016), 10:115.
  • Liu et al., Signal Process. Mag. 2019,
    Event-Driven Sensing for Efficient Perception: Vision and audition algorithms.
  • Event Cameras Tutorial - Tobi Delbruck, version 4.1, Sep. 18, 2020.
  • Kirkland, P., Di Caterina, G., Soraghan, J., Matich, G.,
    Neuromorphic technologies for defence and security,
    SPIE vol 11540, Emerging Imaging and Sensing Technologies for Security and Defence V; and Advanced Manufacturing Technologies for Micro- and Nanosystems in Security and Defence III; 2020.

Review / Overview papers

Sensor designs, Bio-inspiration

  • Delbruck, T.,
    Activity-driven, event-based vision sensors,
    IEEE Int. Symp. Circuits and Systems (ISCAS), 2010. PDF.
  • Posch, C.,
    Bio-inspired vision,
    J. of Instrumentation, 7 C01054, 2012. Bio-inspired explanation of the DVS and the ATIS. PDF
  • Posch, C., Serrano-Gotarredona, T., Linares-Barranco, B., Delbruck, T.,
    Retinomorphic Event-Based Vision Sensors: Bioinspired Cameras With Spiking Output,
    Proc. IEEE (2014), 102(10):1470-1484. PDF
  • Posch, C.,
    Bioinspired vision sensing,
    Biologically Inspired Computer Vision, Wiley-Blackwell, pp. 11-28, 2015. book index
  • Posch, C., Benosman, R., Etienne-Cummings, R.,
    How Neuromorphic Image Sensors Steal Tricks From the Human Eye, also published as Giving Machines Humanlike Eyes,
    IEEE Spectrum, 52(12):44-49, 2015. PDF
  • Cho, D., Lee, T.-J.,
    A Review of Bioinspired Vision Sensors and Their Applications,
    Sensors and Materials, 27(6):447-463, 2015. PDF

Algorithms, Applications

  • Delbruck, T.,
    Fun with asynchronous vision sensors and processing.
    Computer Vision - ECCV 2012. Workshops and Demonstrations. Springer Berlin/Heidelberg, 2012. A position paper and summary of recent accomplishments of the INI Sensors' group.
  • Delbruck, T.,
    Neuromorphic Vision Sensing and Processing (Invited paper),
    46th Eur. Solid-State Device Research Conference (ESSDERC), Lausanne, 2016, pp. 7-14.
  • Lakshmi, A., Chakraborty, A., Thakur, C.S.,
    Neuromorphic vision: From sensors to event-based algorithms,
    Wiley Interdiscip. Rev. Data Min. Knowl. Discov. 9(4), 2019.
  • Steffen, L. et al., Front. Neurorobot. 2019,
    Neuromorphic Stereo Vision: A Survey of Bio-Inspired Sensors and Algorithms.
  • Gallego et al., TPAMI 2020,
    Event-based Vision: A Survey.
  • Chen, G., Cao, H., Conradt, J., Tang, H., Rohrbein, F., Knoll, A.,
    Event-Based Neuromorphic Vision for Autonomous Driving: A Paradigm Shift for Bio-Inspired Visual Sensing and Perception,
    IEEE Signal Processing Magazine, 37(4):34-49, 2020.
  • Chen, G., Wang, F., Li, W., Hong, L., Conradt, J., Chen, J., Zhang, Z., Lu, Y., Knoll, A.,
    NeuroIV: Neuromorphic Vision Meets Intelligent Vehicle Towards Safe Driving With a New Database and Baseline Evaluations,
    IEEE Trans. Intelligent Transportation Systems (TITS), 2020.
  • Tayarani-Najaran, M.-H., Schmuker, M.,
    Event-Based Sensing and Signal Processing in the Visual, Auditory, and Olfactory Domain: A Review,
    Front. Neural Circuits 15:610446, 2021.
  • Sun, R., Shi, D., Zhang, Y., Li, R., Li, R.,
    Data-Driven Technology in Event-Based Vision,
    Complexity, Vol. 2021, Article ID 6689337.

Algorithms

Feature Detection and Tracking

  • Litzenberger, M., Posch, C., Bauer, D., Belbachir, A. N., Schon. P., Kohn, B., Garn, H.,
    Embedded Vision System for Real-Time Object Tracking using an Asynchronous Transient Vision Sensor,
    IEEE 12th Digital Signal Proc. Workshop and 4th IEEE Signal Proc. Education Workshop, Teton National Park, WY, 2006, pp. 173-178. PDF
    • Litzenberger, M., Kohn, B., Belbachir, A.N., Donath, N., Gritsch, G., Garn, H., Posch, C., Schraml, S.,
      Estimation of Vehicle Speed Based on Asynchronous Data from a Silicon Retina Optical Sensor,
      IEEE Intelligent Transportation Systems Conf. (ITSC), 2006, pp. 653-658. PDF
    • Bauer, D., Belbachir, A. N., Donath, N., Gritsch, G., Kohn, B., Litzenberger, M., Posch, C., Schön, P., Schraml, S.,
      Embedded Vehicle Speed Estimation System Using an Asynchronous Temporal Contrast Vision Sensor,
      EURASIP J. Embedded Systems, 2007:082174. PDF
    • Litzenberger, M., Belbachir, N., Schon, P., Posch, C.,
      Embedded Smart Camera for High Speed Vision,
      ACM/IEEE Int. Conf. on Distributed Smart Cameras, 2007. PDF
  • Ni, Z., Bolopion, A., Agnus, J., Benosman, R., Regnier, S.,
    Asynchronous event-based visual shape tracking for stable haptic feedback in microrobotics,
    IEEE Trans. Robot. (TRO), 28(5):1081-1089, 2012. PDF
    • Ni, Ph.D. Thesis, 2013,
      Asynchronous Event Based Vision: Algorithms and Applications to Microrobotics.
    • Ni, Z., Ieng, S. H., Posch, C., Regnier, S., Benosman, R.,
      Visual Tracking Using Neuromorphic Asynchronous Event-Based Cameras,
      Neural Computation (2015), 27(4):925-953. PDF, YouTube
  • Piatkowska, E., Belbachir, A. N., Schraml, S., Gelautz, M.,
    Spatiotemporal multiple persons tracking using Dynamic Vision Sensor,
    IEEE Conf. Computer Vision and Pattern Recognition Workshops (CVPRW), 2012, pp. 35-40. PDF
  • Lagorce, X., Ieng, S.-H., Clady, X., Pfeiffer, M., Benosman, R.,
    Spatiotemporal features for asynchronous event-based data,
    Front. Neurosci. (2015), 9:46.
    • Lagorce, X., Ieng, S. H., Benosman, R.,
      Event-based features for robotic vision,
      IEEE/RSJ Int. Conf. Intelligent Robots and Systems (IROS), 2013, pp. 4214-4219.
  • Saner, D., Wang, O., Heinzle, S., Pritch, Y., Smolic, A., Sorkine-Hornung, A., Gross, M.,
    High-Speed Object Tracking Using an Asynchronous Temporal Contrast Sensor,
    Int. Symp. Vision, Modeling and Visualization (VMV), 2014. PDF
  • Lagorce, X., Meyer, C., Ieng, S. H., Filliat, D., Benosman, R.,
    Asynchronous Event-Based Multikernel Algorithm for High-Speed Visual Features Tracking,
    IEEE Trans. Neural Netw. Learn. Syst. (TNNLS), 26(8):1710-1720, 2015. PDF, YouTube
    • Lagorce, X., Meyer, C., Ieng, S. H., Filliat, D., Benosman, R.,
      Live demonstration: Neuromorphic event-based multi-kernel algorithm for high speed visual features tracking,
      IEEE Biomedical Circuits and Systems Conference (BioCAS), 2014, pp. 178.
  • Reverter Valeiras, D., Lagorce, X., Clady, X., Bartolozzi, C., Ieng, S., Benosman, R.,
    An Asynchronous Neuromorphic Event-Driven Visual Part-Based Shape Tracking,
    IEEE Trans. Neural Netw. Learn. Syst. (TNNLS), 26(12):3045-3059, 2015. PDF, YouTube
  • Linares-Barranco, A., Gómez-Rodríguez, F., Villanueva, V., Longinotti, L., Delbrück, T.,
    A USB3.0 FPGA event-based filtering and tracking framework for dynamic vision sensors,
    IEEE Int. Symp. Circuits and Systems (ISCAS), 2015.
  • Leow, H. S., Nikolic, K.,
    Machine vision using combined frame-based and event-based vision sensor,
    IEEE Int. Symp. Circuits and Systems (ISCAS), 2015.
  • Liu, H., Moeys, D. P., Das, G., Neil, D., Liu, S.-C., Delbruck, T.,
    Combined frame- and event-based detection and tracking,
    IEEE Int. Symp. Circuits and Systems (ISCAS), 2016.
  • Tedaldi, D., Gallego, G., Mueggler, E., Scaramuzza, D.,
    Feature detection and tracking with the dynamic and active-pixel vision sensor (DAVIS),
    IEEE Int. Conf. Event-Based Control Comm. and Signal Proc. (EBCCSP), 2016. PDF, YouTube
    • Kueng et al., IROS 2016 Low-Latency Visual Odometry using Event-based Feature Tracks.
  • Braendli, C., Strubel, J., Keller, S., Scaramuzza, D., Delbruck, T.,
    ELiSeD - An Event-Based Line Segment Detector,
    Int. Conf. on Event-Based Control Comm. and Signal Proc. (EBCCSP), 2016. PDF
  • Glover, A. and Bartolozzi, C.,
    Event-driven ball detection and gaze fixation in clutter,
    IEEE/RSJ Int. Conf. Intelligent Robots and Systems (IROS), 2016, pp. 2203-2208. YouTube, Code
    • Glover, A. and Bartolozzi, C.,
      Robust Visual Tracking with a Freely-moving Event Camera,
      IEEE/RSJ Int. Conf. Intelligent Robots and Systems (IROS), 2017. YouTube, Code
    • Glover, A., Stokes, A.B., Furber, S., Bartolozzi, C.,
      ATIS + SpiNNaker: a Fully Event-based Visual Tracking Demonstration,
      IEEE/RSJ Int. Conf. Intelligent Robots and Systems Workshops (IROSW), 2018. Workshop on Unconventional Sensing and Processing for Robotic Visual Perception.
  • Clady, X., Maro, J.-M., Barré, S., Benosman, R. B.,
    A Motion-Based Feature for Event-Based Pattern Recognition.
    Front. Neurosci. (2017), 10:594. PDF
  • Zhu, A., Atanasov, N., Daniilidis, K.,
    Event-based Feature Tracking with Probabilistic Data Association,
    IEEE Int. Conf. Robotics and Automation (ICRA), 2017. PDF, YouTube, Code
  • Barrios-Avilés, J., Iakymchuk, T., Samaniego, J., Medus, L.D., Rosado-Muñoz, A.,
    Movement Detection with Event-Based Cameras: Comparison with Frame-Based Cameras in Robot Object Tracking Using Powerlink Communication,
    Electronics 2018, 7, 304. PDF pre-print
  • Li, J., Shi, F., Liu, W., Zou, D., Wang, Q., Park, P.K.J., Ryu, H.,
    Adaptive Temporal Pooling for Object Detection using Dynamic Vision Sensor,
    British Machine Vision Conf. (BMVC), 2017.
  • Peng, X., Zhao, B., Yan, R., Tang H., Yi, Z.,
    Bag of Events: An Efficient Probability-Based Feature Extraction Method for AER Image Sensors,
    IEEE Trans. Neural Netw. Learn. Syst. (TNNLS), 28(4):791-803, 2017.
  • Ramesh, B., Yang, H., Orchard, G., Le Thi, N.A., Xiang, C.,
    DART: Distribution Aware Retinal Transform for Event-based Cameras,
    IEEE Trans. Pattern Anal. Machine Intell. (TPAMI), 2019. PDF
  • Gehrig, D., Rebecq, H., Gallego, G., Scaramuzza, D.,
    EKLT: Asynchronous, Photometric Feature Tracking using Events and Frames,
    Int. J. Computer Vision (IJCV), 2019. YouTube, Tracking code, Evaluation code
    • Gehrig, D., Rebecq, H., Gallego, G., Scaramuzza, D.,
      Asynchronous, Photometric Feature Tracking using Events and Frames,
      European Conf. Computer Vision (ECCV), 2018. Poster, YouTube, Oral presentation, Tracking code, Evaluation code
  • Everding, L., Conradt, J.,
    Low-Latency Line Tracking Using Event-Based Dynamic Vision Sensors,
    Front. Neurorobot. 12:4, 2018. Videos
  • Linares-Barranco, A., Liu, H., Rios-Navarro, A., Gomez-Rodriguez, F., Moeys, D., Delbruck, T.
    Approaching Retinal Ganglion Cell Modeling and FPGA Implementation for Robotics,
    Entropy 2018, 20(6), 475.
  • Mitrokhin, A., Fermüller, C., Parameshwara, C., Aloimonos, Y.,
    Event-based Moving Object Detection and Tracking,
    IEEE/RSJ Int. Conf. Intelligent Robots and Systems (IROS), 2018. PDF, YouTube, Project page and Dataset
  • Iacono, M., Weber, S., Glover, A., Bartolozzi, C.,
    Towards Event-Driven Object Detection with Off-The-Shelf Deep Learning,
    IEEE/RSJ Int. Conf. Intelligent Robots and Systems (IROS), 2018.
  • Ramesh, B., Zhang, S., Lee, Z.-W., Gao, Z., Orchard, G., Xiang, C.,
    Long-term object tracking with a moving event camera,
    British Machine Vision Conf. (BMVC), 2018. Video
    • Ramesh, B., Zhang, S., Yang, H., Ussa, A., Ong, M., Orchard, G., Xiang, C.,
      e-TLD: Event-based Framework for Dynamic Object Tracking,
      arXiv, 2020.
  • Dardelet, L., Ieng, S.-H., Benosman, R.,
    Event-Based Features Selection and Tracking from Intertwined Estimation of Velocity and Generative Contours,
    arXiv:1811.07839, 2018.
  • Wu, J., Zhang, K., Zhang, Y., Xie, X., Shi, G.,
    High-Speed Object Tracking with Dynamic Vision Sensor,
    China High Resolution Earth Observation Conference (CHREOC), 2018.
  • Huang, J., Wang, S., Guo, M., Chen, S.,
    Event-Guided Structured Output Tracking of Fast-Moving Objects Using a CeleX Sensor,
    IEEE Trans. Circuits Syst. Video Technol. (TCSVT), vol. 28, no. 9, pp. 2413-2417, 2018.
  • Renner, A., Evanusa, M., Sandamirskaya, Y.,
    Event-based attention and tracking on neuromorphic hardware,
    IEEE Conf. Computer Vision and Pattern Recognition Workshops (CVPRW), 2019. Video pitch
  • Foster, B.J., Ye, D.H., Bouman, C.A.,
    Multi-target tracking with an event-based vision sensor and a partial-update GMPHD filter,
    IS&T International Symposium on Electronic Imaging 2019. Computational Imaging XVII.
  • Alzugaray, I., Chli, M.,
    Asynchronous Multi-Hypothesis Tracking of Features with Event Cameras,
    Int. Conf. 3D Vision (3DV), 2019. PDF, Code, YouTube
  • Linares-Barranco, A., Perez-Pena, F., Moeys, D.P., Gomez-Rodriguez, F., Jimenez-Moreno, G., Delbruck, T.
    Low Latency Event-based Filtering and Feature Extraction for Dynamic Vision Sensors in Real-Time FPGA Applications,
    IEEE Access, vol. 7, pp. 134926-134942, 2019. Code
  • Li, K., Shi, D., Zhang, Y., Li, R., Qin, W., Li, R.,
    Feature Tracking Based on Line Segments With the Dynamic and Active-Pixel Vision Sensor (DAVIS),
    IEEE Access, vol. 7, pp. 110874-110883, 2019.
  • Bolten T., Pohle-Fröhlich R., Tönnies K.D.,
    Application of Hierarchical Clustering for Object Tracking with a Dynamic Vision Sensor,
    Int. Conf. Computational Science (ICCS) 2019. PDF
  • Chen, H., Wu, Q., Liang, Y., Gao, X., Wang, H.,
    Asynchronous Tracking-by-Detection on Adaptive Time Surfaces for Event-based Object Tracking,
    ACM Int. Conf. on Multimedia (MM), 2019.
  • Reverter Valeiras, D., Clady, X., Ieng, S.-H., Benosman, R.,
    Event-Based Line Fitting and Segment Detection Using a Neuromorphic Visual Sensor,
    IEEE Trans. Neural Netw. Learn. Syst. (TNNLS), 30(4):1218-1230, 2019. PDF
  • Li, H., Shi, L.,
    Robust Event-Based Object Tracking Combining Correlation Filter and CNN Representation,
    Front. Neurorobot. 13:82, 2019. Dataset
  • Chen, H., Suter, D., Wu, Q., Wang, H.,
    End-to-end Learning of Object Motion Estimation from Retinal Events for Event-based Object Tracking,
    AAAI Conf. Artificial Intelligence, 2020. PDF, PDF.
  • Monforte, M., Arriandiaga, A., Glover, A., Bartolozzi, C.,
    Exploiting Event Cameras for Spatio-Temporal Prediction of Fast-Changing Trajectories,
    IEEE Int. Conf. Artificial Intelligence Circuits and Systems (AICAS), 2020.
  • Sengupta, J. P., Kubendran, R., Neftci, E., Andreou, A. G.,
    High-Speed, Real-Time, Spike-Based Object Tracking and Path Prediction on Google Edge TPU.
    IEEE Int. Conf. Artificial Intelligence Circuits and Systems (AICAS), 2020, pp. 134-135.
  • Seok, H., Lim, J.,
    Robust Feature Tracking in DVS Event Stream using Bezier Mapping,
    IEEE Winter Conf. Applications of Computer Vision (WACV), 2020. YouTube
  • Xu, L., Xu, W., Golyanik, V., Habermann, M., Fang, L., Theobalt, C.,
    EventCap: Monocular 3D Capture of High-Speed Human Motions using an Event Camera,
    IEEE Conf. Computer Vision and Pattern Recognition (CVPR), 2020. ZDNet news
  • Rodríguez-Gómez, J.P., Gómez Eguíluz, A., Martínez-de Dios, J.R., Ollero, A.,
    Asynchronous event-based clustering and tracking for intrusion monitoring,
    IEEE Int. Conf. Robotics and Automation (ICRA), 2020. PDF.
  • Boettiger, J. P., MSc 2020, A Comparative Evaluation of the Detection and Tracking Capability Between Novel Event-Based and Conventional Frame-Based Sensors.
  • Sarmadi, H., Muñoz-Salinas, R., Olivares-Mendez, M. A., Medina-Carnicer, R.,
    Detection of Binary Square Fiducial Markers Using an Event Camera,
    arXiv, 2020.
  • Alzugaray, I., Chli, M.,
    HASTE: multi-Hypothesis Asynchronous Speeded-up Tracking of Events,
    British Machine Vision Conf. (BMVC), 2020. PDF, Suppl. Mat., Code, Presentation, Youtube
  • Liu, Z., Fu, Y.,
    e-ACJ: Accurate Junction Extraction For Event Cameras,
    arXiv, 2021.
  • Dong, Y., Zhang, T.,
    Standard and Event Cameras Fusion for Feature Tracking,
    Int. Conf. on Machine Vision and Applications (ICMVA), 2021. Code
  • Mondal, A., Shashant, R., Giraldo, J. H., Bouwmans, T., Chowdhury, A. S.,
    Moving Object Detection for Event-based Vision using Graph Spectral Clustering,
    IEEE Int. Conf. Computer Vision Workshop (ICCVW), 2021. Youtube, Code.
  • Xiao Wang, Jianing Li, Lin Zhu, Zhipeng Zhang, Zhe Chen, Xin Li, Yaowei Wang, Yonghong Tian, Feng Wu,
    VisEvent: Reliable Object Tracking via Collaboration of Frame and Event Flows,
    arXiv, 2021. Code
  • Alzugaray, I., Ph.D. Thesis, 2022,
    Event-driven Feature Detection and Tracking for Visual SLAM.
  • Zhang, J., Zhao, K., Dong, B., Fu, Y., Wang, Y., Yang, X., Yin, B.,
    Multi-domain collaborative feature representation for robust visual object tracking,
    The Visual Computer, 2021. PDF, Project.
  • Zhang, J., Yang, X., Fu, Y., Wei, X., Yin, B., Dong, B.,
    Object Tracking by Jointly Exploiting Frame and Event Domain,
    IEEE Int. Conf. Computer Vision (ICCV), 2021. Project, PDF, code, dataset.
  • Zhang, J., Dong, B., Zhang, H., Ding, J., Heide, F., Yin, B., Yang, X.,
    Spiking Transformers for Event-based Single Object Tracking,
    IEEE Conf. Computer Vision and Pattern Recognition (CVPR), 2022. Project, PDF, code.
  • Gao et al., FPGA, 2022,
    REMOT: A Hardware-Software Architecture for Attention-Guided Multi-Object Tracking with Dynamic Vision Sensors on FPGAs.
  • El Shair, Z., Rawashdeh, S.A.,
    High-Temporal-Resolution Object Detection and Tracking using Images and Events,
    Journal of Imaging, 2022. PDF, dataset.
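Several of the trackers above (e.g., the adaptive time surfaces of Chen et al., or the speed-invariant variant under Corner Detection) operate on time surfaces: per-pixel maps derived from the most recent event timestamps. A minimal sketch of an exponentially decayed time surface; the decay constant `tau` and the function name are illustrative choices, not from any cited paper's code:

```python
import numpy as np

def time_surface(events, shape, t_now, tau=0.05):
    # Each pixel holds exp(-(t_now - t_last) / tau), where t_last is the
    # timestamp of the newest event at that pixel; pixels that never fired
    # stay at zero. Events are (x, y, t) tuples.
    t_last = np.full(shape, -np.inf)
    for x, y, t in events:
        t_last[y, x] = max(t_last[y, x], t)
    # exp(-inf) evaluates to 0 for the empty pixels
    return np.exp(-(t_now - t_last) / tau)
```

Recent activity shows up as values near 1 and stale pixels decay toward 0, which gives downstream detectors a smooth, motion-dependent representation instead of a raw event stream.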

Corner Detection and Tracking

  • Clady, X., Ieng, S.-H., Benosman, R.,
    Asynchronous event-based corner detection and matching,
    Neural Networks (2015), 66:91-106. PDF
  • Vasco, V., Glover, A., Bartolozzi, C.,
    Fast event-based Harris corner detection exploiting the advantages of event-driven cameras,
    IEEE/RSJ Int. Conf. Intelligent Robots and Systems (IROS), 2016, pp. 4144-4149. YouTube, Code
  • Mueggler, E., Bartolozzi, C., Scaramuzza, D.,
    Fast Event-based Corner Detection,
    British Machine Vision Conf. (BMVC), 2017. YouTube, Code
    • Liu, H., Kao, W.-T., Delbruck, T.,
      Live Demonstration: A Real-time Event-based Fast Corner Detection Demo based on FPGA,
      IEEE Conf. Computer Vision and Pattern Recognition Workshops (CVPRW), 2019.
  • Alzugaray, I., Chli, M.,
    Asynchronous Corner Detection and Tracking for Event Cameras in Real Time,
    IEEE Robotics and Automation Letters (RA-L), 3(4):3177-3184, Oct. 2018. PDF, YouTube, Code.
  • Alzugaray, I., Chli, M.,
    ACE: An Efficient Asynchronous Corner Tracker for Event Cameras,
    Int. Conf. 3D Vision (3DV), 2018. PDF, YouTube
  • Scheerlinck, C., Barnes, N., Mahony, R.,
    Asynchronous Spatial Image Convolutions for Event Cameras,
    IEEE Robotics and Automation Letters (RA-L), 4(2):816-822, Apr. 2019. PDF, Website
  • Manderscheid, J., Sironi, A., Bourdis, N., Migliore, D., Lepetit, V.,
    Speed Invariant Time Surface for Learning to Detect Corner Points with Event-Based Cameras,
    IEEE Conf. Computer Vision and Pattern Recognition (CVPR), 2019. PDF
  • Li, R., Shi, D., Zhang, Y., Li, K., Li, R.,
    FA-Harris: A Fast and Asynchronous Corner Detector for Event Cameras,
    IEEE/RSJ Int. Conf. Intelligent Robots and Systems (IROS), 2019. PDF
  • Mohamed, S. A. S., Yasin, J. N., Haghbayan, M.-H., Miele, A., Heikkonen, J., Tenhunen, H., Plosila, J.,
    Dynamic Resource-aware Corner Detection for Bio-inspired Vision Sensors,
    Int. Conf. Pattern Recognition (ICPR), 2020.
  • Mohamed, S. A. S., Yasin, J. N., Haghbayan, M.-H., Miele, A., Heikkonen, J., Tenhunen, H., Plosila, J.,
    Asynchronous Corner Tracking Algorithm based on Lifetime of Events for DAVIS Cameras,
    Int. Symposium on Visual Computing (ISVC), 2020.
  • Yılmaz, Ö., Simon-Chane, C., Histace, A.,
    Evaluation of Event-Based Corner Detectors,
    J. Imaging, 2021.
  • Chiberre, P., Perot, E., Sironi, A., Lepetit, V.,
    Detecting Stable Keypoints From Events Through Image Gradient Prediction,
    IEEE Conf. Computer Vision and Pattern Recognition Workshops (CVPRW), 2021. YouTube.
  • Sengupta, J. P., Villemur, M., Andreou, A. G.,
    Efficient, event-driven feature extraction and unsupervised object tracking for embedded applications,
    55th Annual Conf. on Information Sciences and Systems (CISS), 2021.
  • Alzugaray, I., Ph.D. Thesis, 2022,
    Event-driven Feature Detection and Tracking for Visual SLAM.
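The Harris-based detectors above (Vasco et al., Mueggler et al.) share one core idea: render recent events into a surface and score cornerness on it. A rough sketch under that reading — a binary event surface plus a standard Harris response; the window size, smoothing, and constant `k` are illustrative, not the tuned choices of any cited detector:

```python
import numpy as np

def event_harris_response(events, shape, k=0.04):
    # Accumulate recent events into a binary surface, then evaluate the
    # classic Harris corner response det(M) - k * trace(M)^2 on it.
    surface = np.zeros(shape, dtype=np.float64)
    for x, y in events:
        surface[y, x] = 1.0
    iy, ix = np.gradient(surface)

    def box(img):
        # Crude 3x3 box filter (with wrap-around at borders) to build the
        # structure tensor; a Gaussian window would be the usual choice.
        out = np.zeros_like(img)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                out += np.roll(np.roll(img, dy, 0), dx, 1)
        return out / 9.0

    sxx, syy, sxy = box(ix * ix), box(iy * iy), box(ix * iy)
    det = sxx * syy - sxy ** 2
    trace = sxx + syy
    return det - k * trace ** 2  # high values indicate corner-like structure
```

The asynchronous detectors in this section essentially update such a score per incoming event instead of recomputing it over the whole surface.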

Particle Detection and Tracking

  • Drazen, D., Lichtsteiner, P., Haefliger, P., Delbruck, T., Jensen, A.,
    Toward real-time particle tracking using an event-based dynamic vision sensor,
    Experiments in Fluids (2011), 51(1):1465-1469. PDF
  • Ni, Z., Pacoret, C., Benosman, R., Ieng, S., Regnier, S.,
    Asynchronous event-based high speed vision for microparticle tracking,
    J. Microscopy (2011), 245(3):236-244. PDF
  • Borer, D., Roesgen, T.,
    Large-scale Particle Tracking with Dynamic Vision Sensors,
    ISFV16 - 16th Int. Symp. Flow Visualization, Okinawa 2014. Project page, Poster
  • Wang, Y., Idoughi, R., Heidrich, W.,
    Stereo Event-based Particle Tracking Velocimetry for 3D Fluid Flow Reconstruction,
    European Conf. Computer Vision (ECCV), 2020. Suppl. Mat.

Eye Tracking

  • Angelopoulos, A.N., Martel, J.N.P., Kohli, A.P.S., Conradt, J., Wetzstein, G.,
    Event Based, Near-Eye Gaze Tracking Beyond 10,000Hz,
    IEEE Trans. Vis. Comput. Graphics (Proc. VR), 2021. YouTube, Dataset, Project page
  • Ryan, C., Sullivan, B. O., Elrasad, A., Lemley, J., Kielty, P., Posch, C., Perot, E.,
    Real-Time Face & Eye Tracking and Blink Detection using Event Cameras,
    arXiv, 2020.

Optical Flow Estimation

  • Delbruck, T.,
    Frame-free dynamic digital vision,
    Int. Symp. on Secure-Life Electronics, Advanced Electronics for Quality Life and Society, pp. 21-26, 2008. PDF
  • Cook et al., IJCNN 2011,
    Interacting maps for fast visual interpretation. (Joint estimation of optical flow, image intensity and angular velocity with a rotating event camera).
  • Benosman, R., Ieng, S.-H., Clercq, C., Bartolozzi, C., Srinivasan, M.,
    Asynchronous Frameless Event-Based Optical Flow,
    Neural Networks (2012), 27:32-37. PDF, Suppl. Mat.
  • Orchard, G., Benosman, R., Etienne-Cummings, R., Thakor, N.,
    A Spiking Neural Network Architecture for Visual Motion Estimation,
    IEEE Biomedical Circuits and Systems Conf. (BioCAS), 2013. PDF, Code
  • Benosman, R., Clercq, C., Lagorce, X., Ieng, S.-H., Bartolozzi, C.,
    Event-Based Visual Flow,
    IEEE Trans. Neural Netw. Learn. Syst. (TNNLS), 25(2):407-417, 2014. PDF, Code (jAER): LocalPlanesFlow
    • Clady et al., Front. Neurosci. 2014,
      Asynchronous visual event-based time-to-contact.
    • Mueggler, E., Forster, C., Baumli, N., Gallego, G., Scaramuzza, D.,
      Lifetime Estimation of Events from Dynamic Vision Sensors,
      IEEE Int. Conf. Robotics and Automation (ICRA), 2015, pp. 4874-4881. PDF, PPT, Code
    • Lee, A. J., Kim, A.,
      Event-based Real-time Optical Flow Estimation,
      IEEE Int. Conf. on Control, Automation and Systems (ICCAS), 2017.
    • Aung, M.T., Teo, R., Orchard, G.,
      Event-based Plane-fitting Optical Flow for Dynamic Vision Sensors in FPGA,
      IEEE Int. Symp. Circuits and Systems (ISCAS), 2018. Code
  • Barranco, F., Fermüller, C., Aloimonos, Y.,
    Contour motion estimation for asynchronous event-driven cameras,
    Proc. IEEE (2014), 102(10):1537-1556. PDF
  • Lee, J.H., Lee, K., Ryu, H., Park, P.K.J., Shin, C.W., Woo, J., Kim, J.-S.,
    Real-time motion estimation based on event-based vision sensor,
    IEEE Int. Conf. Image Processing (ICIP), 2014.
  • Richter, C., Röhrbein, F., Conradt, J.,
    Bio inspired optic flow detection using neuromorphic hardware,
    Bernstein Conf. 2014. PDF
  • Barranco, F., Fermüller, C., Aloimonos, Y.,
    Bio-inspired Motion Estimation with Event-Driven Sensors,
    Int. Work-Conf. Artificial Neural Networks (IWANN) 2015, Advances in Computational Intell., pp. 309-321. PDF
  • Conradt, J.,
    On-Board Real-Time Optic-Flow for Miniature Event-Based Vision Sensors,
    IEEE Int. Conf. Robotics and Biomimetics (ROBIO), 2015.
  • Brosch, T., Tschechne, S., Neumann, H.,
    On event-based optical flow detection,
    Front. Neurosci. (2015), 9:137.
    • Tschechne, S., Brosch, T., Sailer, R., von Egloffstein, N., Abdul-Kreem L.I., Neumann, H.,
      On event-based motion detection and integration,
      Int. Conf. Bio-inspired Information and Comm. Technol. (BICT), 2014. PDF
    • Tschechne, S., Sailer R., Neumann, H.,
      Bio-Inspired Optic Flow from Event-Based Neuromorphic Sensor Input,
      IAPR Workshop on Artificial Neural Networks in Pattern Recognition (ANNPR) 2014, pp. 171-182.
    • Brosch, T., Neumann, H.,
      Event-based optical flow on neuromorphic hardware,
      Int. Conf. Bio-inspired Information and Comm. Technol. (BICT), 2015. PDF
    • Brosch, T., Tschechne, S., Neumann, H.,
      Visual Processing in Cortical Architecture from Neuroscience to Neuromorphic Computing,
      Int. Workshop on Brain-Inspired Computing (BrainComp), 2015. LNCS, vol 10087.
    • Kosiorek, A., Adrian, D., Rausch, J., Conradt, J.,
      An Efficient Event-Based Optical Flow Implementation in C/C++ and CUDA,
      Tech. Rep. TU Munich, 2015.
  • Milde et al., EBCCSP 2015,
    Bioinspired event-driven collision avoidance algorithm based on optic flow.
  • Giulioni, M., Lagorce, X., Galluppi, F., Benosman, R.,
    Event-Based Computation of Motion Flow on a Neuromorphic Analog Neural Platform,
    Front. Neurosci. (2016), 10:35. PDF
    • Haessig, G., Galluppi, F., Lagorce, X., Benosman, R.,
      Neuromorphic networks on the SpiNNaker platform,
      IEEE Int. Conf. Artificial Intelligence Circuits and Systems (AICAS), 2019.
  • Rueckauer, B. and Delbruck, T.,
    Evaluation of Event-Based Algorithms for Optical Flow with Ground-Truth from Inertial Measurement Sensor,
    Front. Neurosci. (2016), 10:176. YouTube
    • Code (jAER)
  • Bardow, P. A., Davison, A. J., Leutenegger, S.,
    Simultaneous Optical Flow and Intensity Estimation from an Event Camera,
    IEEE Conf. Computer Vision and Pattern Recognition (CVPR), 2016. YouTube, YouTube 2, Dataset: 4 sequences
  • Stoffregen, T., Kleeman, L.,
    Simultaneous Optical Flow and Segmentation (SOFAS) using Dynamic Vision Sensor,
    Australasian Conf. Robotics and Automation (ACRA), 2017. PDF, YouTube
  • Haessig, G., Cassidy, A., Alvarez, R., Benosman, R., Orchard, G.,
    Spiking Optical Flow for Event-based Sensors Using IBM's TrueNorth Neurosynaptic System,
    IEEE Trans. Biomed. Circuits Syst., 12(4):860-870, 2018. PDF
  • Gallego et al., CVPR 2018,
    A Unifying Contrast Maximization Framework for Event Cameras, with Applications to Motion, Depth and Optical Flow Estimation.
    • Stoffregen, T., Kleeman, L.,
      Event Cameras, Contrast Maximization and Reward Functions: An Analysis,
      IEEE Conf. Computer Vision and Pattern Recognition (CVPR), 2019.
    • Gallego et al., CVPR 2019,
      Focus Is All You Need: Loss Functions For Event-based Vision.
    • Stoffregen et al., ICCV 2019,
      Event-Based Motion Segmentation by Motion Compensation.
    • Shiba et al., Sensors 2022,
      Event Collapse in Contrast Maximization Frameworks.
    • Shiba et al., ECCV 2022,
      Secrets of Event-based Optical Flow.
    • Zhang et al., arXiv 2021,
      Formulating Event-based Image Reconstruction as a Linear Inverse Problem using Optical Flow.
  • Zhu, A., Yuan, L., Chaney, K., Daniilidis, K.,
    EV-FlowNet: Self-Supervised Optical Flow Estimation for Event-based Cameras,
    Robotics: Science and Systems (RSS), 2018. PDF, YouTube, Code
    • Gehrig et al., ICCV 2019,
      End-to-End Learning of Representations for Asynchronous Event-Based Data.
  • Liu, M., Delbruck, T.,
    Adaptive Time-Slice Block-Matching Optical Flow Algorithm for Dynamic Vision Sensors,
    British Machine Vision Conf. (BMVC), 2018. Supplementary material, Video
    • Liu, M., Delbruck, T.,
      Block-Matching Optical Flow for Dynamic Vision Sensors: Algorithm and FPGA Implementation,
      IEEE Int. Symp. Circuits and Systems (ISCAS), 2017.
  • Ye, C., Mitrokhin, A., Parameshwara, C., Fermüller, C., Yorke, J. A., Aloimonos, Y.,
    Unsupervised Learning of Dense Optical Flow, Depth and Egomotion with Event-Based Sensors,
    IEEE/RSJ Int. Conf. Intelligent Robots and Systems (IROS), 2020. PDF, YouTube, Project page
  • Seifozzakerini, Ph.D. Thesis, 2018,
    Analysis of object and its motion in event-based videos.
  • Nagata, J., Sekikawa, Y., Hara, K., Aoki, Y.,
    FOE-based regularization for optical flow estimation from an in-vehicle event camera,
    Proc. SPIE 11049, Int. Workshop on Advanced Image Technology (IWAIT), 2019.
  • Paredes-Valles, F., Scheper, K. Y. W., de Croon, G. C. H. E.,
    Unsupervised Learning of a Hierarchical Spiking Neural Network for Optical Flow Estimation: From Events to Global Motion Perception,
    IEEE Trans. Pattern Anal. Machine Intell. (TPAMI), 2019. PDF, YouTube, Code.
  • Zhu, A. Z., Yuan, L., Chaney, K., Daniilidis, K.,
    Unsupervised Event-Based Learning of Optical Flow, Depth, and Egomotion,
    IEEE Conf. Computer Vision and Pattern Recognition (CVPR), 2019. PDF, YouTube, Patent
    • Zhu, A. Z., Yuan, L., Chaney, K., Daniilidis, K.,
      Unsupervised Event-Based Optical Flow Using Motion Compensation,
      European Conf. Computer Vision Workshops (ECCVW), 2018. PDF
  • Khoei, M.A., Benosman, R.,
    Asynchronous Event-Based Motion Processing: From Visual Events to Probabilistic Sensory Representation,
    Neural Computation (2019), 31(6):1114-1138. PDF
  • Almatrafi, M. M., Hirakawa, K.,
    DAViS Camera Optical Flow,
    IEEE Trans. Comput. Imag. (TCI), 6:396-407, 2019.
  • Almatrafi, M., Baldwin, R., Aizawa, K., Hirakawa, K.,
    Distance Surface for Event-Based Optical Flow,
    IEEE Trans. Pattern Anal. Machine Intell. (TPAMI), 2020. PDF, Dataset
  • Lee, C., Kosta, A., Zhu, A.Z., Chaney, K., Daniilidis, K., Roy, K.,
    Spike-FlowNet: Event-based Optical Flow Estimation with Energy-Efficient Hybrid Neural Networks,
    European Conf. Computer Vision (ECCV), 2020. Suppl. Mat., PDF
  • D'Angelo, G., Janotte, E., Schoepe, T., O'Keeffe, J., Milde, M. B., Chicca, E., Bartolozzi, C.,
    Event-Based Eccentric Motion Detection Exploiting Time Difference Encoding,
    Front. Neurosci. (2020), 14:451. Project page
  • Pan, L., Liu, M., Hartley, R.,
    Single Image Optical Flow Estimation with an Event Camera,
    IEEE Conf. Computer Vision and Pattern Recognition (CVPR), 2020.
  • Low, W. F., Gao, Z., Xiang, C., Ramesh, B.,
    SOFEA: A Non-Iterative and Robust Optical Flow Estimation Algorithm for Dynamic Vision Sensors,
    IEEE Conf. Computer Vision and Pattern Recognition Workshops (CVPRW), 2020. PDF, Suppl. Mat.
  • Akolkar, H., Ieng, S.-H., Benosman, R.,
    Real-time high speed motion prediction using fast aperture-robust event-driven visual flow,
    IEEE Trans. Pattern Anal. Machine Intell. (TPAMI), 2020. PDF
  • Pivezhandi, M., Jones, P. H., Zambreno, J.,
    ParaHist: FPGA Implementation of Parallel Event-Based Histogram for Optical Flow Calculation,
    IEEE Conf. Application-specific Systems, Architectures and Processors (ASAP), 2020. PDF
  • Kepple, D.R., Lee, D., Prepsius, C., Isler, V., Park, I. M., Lee, D. D.,
    Jointly learning visual motion and confidence from local patches in event cameras,
    European Conf. Computer Vision (ECCV), 2020. Suppl. Mat.
  • Nagata, J., Sekikawa, Y., Aoki, Y.,
    Optical Flow Estimation by Matching Time Surface with Event-Based Cameras,
    Sensors 2021, 21, 1150. PDF
  • Paredes-Valles et al., CVPR 2021,
    Back to Event Basics: Self-Supervised Learning of Image Reconstruction for Event Cameras via Photometric Constancy.
  • Hagenaars, J. J., Paredes-Valles, F., de Croon, G. C. H. E.,
    Self-Supervised Learning of Event-Based Optical Flow with Spiking Neural Networks,
    Advances in Neural Information Processing Systems 34 (NeurIPS), 2021. Project page, PDF, Suppl. Mat., Code.
  • Sikorski, O., Izzo, D., Meoni, G.,
    Event-Based Spacecraft Landing Using Time-To-Contact,
    IEEE Conf. Computer Vision and Pattern Recognition Workshops (CVPRW), 2021.
  • Peveri, F., Testa, S., Sabatini, S. P.,
    A Cortically-Inspired Architecture for Event-Based Visual Motion Processing: From Design Principles to Real-World Applications,
    IEEE Conf. Computer Vision and Pattern Recognition Workshops (CVPRW), 2021. YouTube.
  • Barbier, T., Teuliere, C., Triesch, J.,
    Spike Timing-Based Unsupervised Learning of Orientation, Disparity, and Motion Representations in a Spiking Neural Network,
    IEEE Conf. Computer Vision and Pattern Recognition Workshops (CVPRW), 2021. Suppl., YouTube.
  • Gehrig, M., Millhäusler, M., Gehrig, D., Scaramuzza, D.,
    E-RAFT: Dense Optical Flow from Event Cameras,
    IEEE Int. Conf. on 3D Vision (3DV), 2021. Code, Dataset, Youtube
  • Shiba, S., Aoki, Y., Gallego, G.,
    Event Collapse in Contrast Maximization Frameworks,
    Sensors, 2022. PDF
  • Shiba, S., Aoki, Y., Gallego, G.,
    Secrets of Event-based Optical Flow,
    European Conf. Computer Vision (ECCV), 2022. YouTube, Poster, Presentation at the PRG Seminar Series U. Maryland (Video), Presentation at the GRASP Laboratory (UPenn) seminar, Project page and Code
  • Brebion, V., Moreau, J., Davoine, F.,
    Real-Time Optical Flow for Vehicular Perception With Low- and High-Resolution Event Cameras,
    IEEE Trans. Intell. Transp. Syst. (T-ITS), 2021. PDF, Code, Dataset, YouTube.
  • Liu, M., Delbruck, T.,
    EDFLOW: Event Driven Optical Flow Camera with Keypoint Detection and Adaptive Block Matching,
    IEEE Trans. Circuits Syst. Video Technol. (TCSVT), 2022. Preprint PDF, Code and Dataset
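Several of the plane-fitting methods listed above (e.g., Benosman et al., TNNLS 2014; Mueggler et al., ICRA 2015; Aung et al., ISCAS 2018) share one core idea: fit a local plane to the timestamps of recent events and read normal flow off its gradient. A minimal NumPy sketch of that idea follows; the `(x, y, t)` event layout and the neighborhood radius are illustrative choices, not any specific library's API:

```python
import numpy as np

def local_plane_flow(events, cx, cy, radius=3):
    """Estimate normal flow at pixel (cx, cy) by least-squares fitting a
    plane t = a*x + b*y + c to the timestamps of nearby events, in the
    spirit of the plane-fitting ("local planes") methods.
    `events` is an (N, 3) array of (x, y, t) rows. Returns (vx, vy)
    in pixels per unit time, or None if the fit is degenerate."""
    x, y, t = events[:, 0], events[:, 1], events[:, 2]
    mask = (np.abs(x - cx) <= radius) & (np.abs(y - cy) <= radius)
    if mask.sum() < 3:
        return None  # not enough events to fit a plane
    A = np.stack([x[mask], y[mask], np.ones(mask.sum())], axis=1)
    (a, b, c), *_ = np.linalg.lstsq(A, t[mask], rcond=None)
    g2 = a * a + b * b          # squared gradient of the time surface
    if g2 < 1e-12:
        return None  # flat time surface: no observable motion
    return (a / g2, b / g2)     # normal flow from the plane's slope
```

For a vertical edge sweeping at 10 px/s, events at pixel x fire at t = x/10, so the fitted plane has a = 0.1 and the recovered flow is (10, 0); only the component normal to the edge is observable, which is the aperture problem those papers discuss.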

Scene Flow Estimation

  • Ieng, S.-H., Carneiro, J., Benosman, R.,
    Event-Based 3D Motion Flow Estimation Using 4D Spatio Temporal Subspaces Properties,
    Front. Neurosci. (2017), 10:596.
    • Carneiro, Ph.D. Thesis, 2014,
      Asynchronous Event-Based 3D Vision - Chapter 3.

Reconstruction of Visual Information

Intensity-Image Reconstruction from events

  • Cook, M., Gugelmann, L., Jug, F., Krautz, C., Steger, A.,
    Interacting maps for fast visual interpretation,
    Int. Joint Conf. on Neural Networks (IJCNN), San Jose, CA, 2011, pp. 770-776. PDF, YouTube
    • Martel, J. N. P., Cook, M.,
      A Framework of Relational Networks to Build Systems with Sensors able to Perform the Joint Approximate Inference of Quantities,
      IEEE/RSJ Int. Conf. Intelligent Robots and Systems Workshop (IROSW), 2015. Workshop on Unconventional Computing for Bayesian Inference. PDF
    • Martel, J. N. P., Chau, M., Dudek, P., Cook, M.,
      Toward joint approximate inference of visual quantities on cellular processor arrays,
      IEEE Int. Symp. Circuits and Systems (ISCAS), 2015.
  • Belbachir et al., CVPRW 2014,
    A Novel HDR Depth Camera for Real-time 3D 360-degree Panoramic Vision.
  • Kim, H., Handa, A., Benosman, R., Ieng, S.-H., Davison, A. J.,
    Simultaneous Mosaicing and Tracking with an Event Camera,
    British Machine Vision Conf. (BMVC), 2014. PDF, YouTube, YouTube 2
    • Code for intensity reconstruction.
    • YouTube TU Graz
  • Barua, S., Miyatani, Y., Veeraraghavan, A.,
    Direct face detection and video reconstruction from event cameras,
    IEEE Winter Conf. Applications of Computer Vision (WACV), 2016. YouTube
  • Bardow et al., CVPR 2016,
    Simultaneous Optical Flow and Intensity Estimation from an Event Camera.
  • Moeys, D. P., Li, C., Martel, J. N. P., Bamford, S., Longinotti, L., Motsnyi, V., Bello, D. S. S., Delbruck, T.,
    Color Temporal Contrast Sensitivity in Dynamic Vision Sensors,
    IEEE Int. Symp. Circuits and Systems (ISCAS), 2017. PDF.
  • Munda, G., Reinbacher, C., Pock, T.,
    Real-Time Intensity-Image Reconstruction for Event Cameras Using Manifold Regularisation,
    Int. J. of Computer Vision (IJCV), 2018.
    • Reinbacher, C., Graber, G., Pock, T.,
      Real-Time Intensity-Image Reconstruction for Event Cameras Using Manifold Regularisation,
      British Machine Vision Conf. (BMVC), 2016. PDF, YouTube, Code
  • Watkins, Y., Thresher, A., Mascarenas, D., Kenyon, G.T.,
    Sparse Coding Enables the Reconstruction of High-Fidelity Images and Video from Retinal Spike Trains,
    Int. Conf. Neuromorphic Systems (ICONS), 2018. Article No. 8. PDF
  • Scheerlinck, C., Barnes, N., Mahony, R.,
    Continuous-time Intensity Estimation Using Event Cameras,
    Asian Conf. Computer Vision (ACCV), 2018. PDF, YouTube, Website
  • Rebecq, H., Ranftl, R., Koltun, V., Scaramuzza, D.,
    High Speed and High Dynamic Range Video with an Event Camera,
    IEEE Trans. Pattern Anal. Machine Intell. (TPAMI), 2020. PDF, YouTube, Code, Project page
    • Rebecq, H., Ranftl, R., Koltun, V., Scaramuzza, D.,
      Events-to-Video: Bringing Modern Computer Vision to Event Cameras,
      IEEE Conf. Computer Vision and Pattern Recognition (CVPR), 2019. PDF, YouTube, Slides, Video pitch.
    • Scheerlinck, C., Rebecq, H., Gehrig, D., Barnes, N., Mahony, R., Scaramuzza, D.,
      Fast Image Reconstruction with an Event Camera,
      IEEE Winter Conf. Applications of Computer Vision (WACV), 2020. PDF, YouTube, Website
    • Stoffregen, T., Scheerlinck, C., Scaramuzza, D., Drummond, T., Barnes, N., Kleeman, L., Mahony, R.,
      Reducing the Sim-to-Real Gap for Event Cameras,
      European Conf. Computer Vision (ECCV), 2020. PDF, Suppl. Mat., YouTube, Project page
  • Mostafavi, M., Wang, L., Yoon, K.J.,
    Learning to Reconstruct HDR Images from Events, with Applications to Depth and Flow Prediction,
    Int. J. Computer Vision (IJCV), 2021.
    • Mostafavi I., S.M., Wang, L., Ho, Y.S., Yoon, K.J.,
      Event-based High Dynamic Range Image and Very High Frame Rate Video Generation using Conditional Generative Adversarial Networks,
      IEEE Conf. Computer Vision and Pattern Recognition (CVPR), 2019. PDF
  • Scheerlinck et al., CVPRW 2019,
    CED: Color Event Camera Dataset.
  • Nagata, J., Sekikawa, Y., Hara, K., Suzuki, T., Aoki, Y.,
    QR-code Reconstruction from Event Data via Optimization in Code Subspace,
    IEEE Winter Conf. Applications of Computer Vision (WACV), 2020.
  • Zhang, S., Zhang, Y., Jiang, Z., Zou, D., Ren, J., Zhou, B.,
    Learning to See in the Dark with Events,
    European Conf. Computer Vision (ECCV), 2020. Suppl. Mat.
  • Su, B., Yu, L., Yang, W.,
    Event-Based High Frame-Rate Video Reconstruction With A Novel Cycle-Event Network,
    IEEE Int. Conf. Image Processing (ICIP), 2020.
  • Gantier Cadena, P. R., Qian, Y., Wang, C., Yang, M.,
    SPADE-E2VID: Spatially-Adaptive Denormalization for Event-Based Video Reconstruction,
    IEEE Trans. Image Processing, 30:2488-2500, 2021. Project page
  • Baldwin et al., arXiv 2021.
    Time-Ordered Recent Event (TORE) Volumes for Event Cameras.
  • Paredes-Valles, F., de Croon, G. C. H. E.,
    Back to Event Basics: Self-Supervised Learning of Image Reconstruction for Event Cameras via Photometric Constancy,
    IEEE Conf. Computer Vision and Pattern Recognition (CVPR), 2021. Project page, PDF, Suppl. Mat., Code.
  • Zou, Y., Zheng, Y., Takatani, T., Fu, Y.,
    Learning To Reconstruct High Speed and High Dynamic Range Videos From Events,
    IEEE Conf. Computer Vision and Pattern Recognition (CVPR), 2021.
  • Zhang, X., Liao, W., Yu, L., Yang, W., Xia, G.-S.,
    Event-Based Synthetic Aperture Imaging With a Hybrid Network,
    IEEE Conf. Computer Vision and Pattern Recognition (CVPR), 2021. Suppl., PDF, YouTube.
  • Cohen Duwek, H., Shalumov, A., Ezra Tsur, E.,
    Image Reconstruction From Neuromorphic Event Cameras Using Laplacian-Prediction and Poisson Integration With Spiking and Artificial Neural Networks,
    IEEE Conf. Computer Vision and Pattern Recognition Workshops (CVPRW), 2021. YouTube.
  • Zhang, Z., Yezzi, A., Gallego, G.,
    Formulating Event-based Image Reconstruction as a Linear Inverse Problem using Optical Flow,
    arXiv, 2021.
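A recurring ingredient in the reconstruction works above is per-pixel temporal integration: each event changes log intensity by roughly the contrast threshold, and a leaky (high-pass) term bounds drift and noise accumulation, the idea behind continuous-time intensity estimation (e.g., Scheerlinck et al., ACCV 2018). A heavily simplified sketch; the contrast threshold `C` and `cutoff` values are assumed for illustration, not taken from any particular paper:

```python
import numpy as np

def highpass_reconstruct(events, shape, C=0.2, cutoff=5.0):
    """Integrate events into a per-pixel log-intensity image with an
    exponential high-pass (leaky integrator) to bound drift.
    `events` is an (N, 4) array of (x, y, t, polarity) rows with
    polarity in {-1, +1}; `cutoff` is the decay rate in 1/s."""
    L = np.zeros(shape)       # running log-intensity estimate
    last_t = np.zeros(shape)  # per-pixel time of the last update
    for x, y, t, p in events:
        ix, iy = int(x), int(y)
        L[iy, ix] *= np.exp(-cutoff * (t - last_t[iy, ix]))  # leak old value
        L[iy, ix] += C * p    # each event = +/-C step in log intensity
        last_t[iy, ix] = t
    return L
```

Pure integration (cutoff = 0) reconstructs only intensity *changes* since the start and drifts with sensor noise; the leak trades low-frequency content for stability, which is why several of the papers above fuse events with frames or learn the reconstruction instead.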

Video Synthesis

  • Brandli, C., Muller, L., Delbruck, T.,
    Real-time, high-speed video decompression using a frame- and event-based DAVIS sensor,
    IEEE Int. Symp. on Circuits and Systems (ISCAS), 2014.
    • Brandli, Ph.D. Thesis, 2014,
      Event-Based Machine Vision - Section 4.11.
  • Liu HC., Zhang FL., Marshall D., Shi L., Hu SM.,
    High-speed Video Generation with an Event Camera,
    The Visual Computer, 2017. PDF.
  • Shedligeri, P.A., Mitra, K.,
    Photorealistic Image Reconstruction from Hybrid Intensity and Event based Sensor,
    J. Electronic Imaging, 28(6), 063012 (2019). PDF
  • Wang, Z. W., Jiang, W., He, K., Shi, B., Katsaggelos, A., Cossairt, O.,
    Event-driven Video Frame Synthesis,
    IEEE Int. Conf. Computer Vision Workshops (ICCVW), 2019. PDF
  • Pan, L., Scheerlinck, C., Yu, X., Hartley, R., Liu, M., Dai, Y.,
    Bringing a Blurry Frame Alive at High Frame-Rate with an Event Camera,
    IEEE Conf. Computer Vision and Pattern Recognition (CVPR), 2019. PDF. Slides, Video CVPR, Video CVPRW, Code
    • Pan, L., Hartley, R., Scheerlinck, C., Liu, M., Yu, X., Dai, Y.,
      High Frame Rate Video Reconstruction based on an Event Camera,
      arXiv, 2019.
    • Open-source Rust implementation: davis-EDI-rs
  • Pini, S., Borghi, G., Vezzani, R., Cucchiara, R.,
    Video Synthesis from Intensity and Event Frames,
    Int. Conf. Image Analysis and Processing (ICIAP), 2019. LNCS, vol 11751. PDF
  • Pini S., Borghi G., Vezzani R.,
    Learn to See by Events: Color Frame Synthesis from Event and RGB Cameras,
    Int. Joint Conf. on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISAPP) 2020. PDF
  • Chen, H., Teng, M., Shi, B., Wang, Y., Huang, T.,
    Learning to Deblur and Generate High Frame Rate Video with an Event Camera,
    arXiv:2003.00847, 2020.
  • Jiang, Z., Zhang, Y., Zou, D., Ren, J., Lv, J., Liu, Y.,
    Learning Event-Based Motion Deblurring,
    IEEE Conf. Computer Vision and Pattern Recognition (CVPR), 2020.
  • Wang, B., He, J., Yu, L., Xia, G.-S., Yang, W.,
    Event Enhanced High-Quality Image Recovery,
    European Conf. Computer Vision (ECCV), 2020. Suppl. Mat., Video
  • Lin, S., Zhang, J., Pan, J., Jiang, Z., Zou, D., Wang, Y., Chen, J., Ren, J.,
    Learning Event-Driven Video Deblurring and Interpolation,
    European Conf. Computer Vision (ECCV), 2020. Suppl. Mat.
  • Zhang, L., Zhang, H., Chen, J., Wang, L.,
    Hybrid Deblur Net: Deep Non-Uniform Deblurring With Event Camera,
    IEEE Access, vol. 8, pp. 148075-148083, 2020.
  • Jiang, M., Liu, Z., Wang, B., Yu, L., Yang, W.,
    Robust Intensity Image Reconstruction Based On Event Cameras,
    IEEE Int. Conf. Image Processing (ICIP), 2020.
  • Zhang, L., Zhang, H., Zhu, C., Guo, S., Chen, J., Wang, L.,
    Fine-Grained Video Deblurring with Event Camera,
    MultiMedia Modeling (MMM) 2021. LNCS, vol 12572.
  • Tulyakov, S., Gehrig, D., Georgoulis, S., Erbach, J., Gehrig, M., Li, Y., Scaramuzza, D.,
    Time Lens: Event-Based Video Frame Interpolation,
    IEEE Conf. Computer Vision and Pattern Recognition (CVPR), 2021. Project page, Suppl., PDF, YouTube, Slides, Code
  • Paikin, G., Ater, Y., Shaul, R., Soloveichik, E.,
    EFI-Net: Video Frame Interpolation from Fusion of Events and Frames,
    IEEE Conf. Computer Vision and Pattern Recognition Workshops (CVPRW), 2021. YouTube, Suppl., Dataset.
  • Wang, Z., Ng, Y., Scheerlinck, C., Mahony., R.,
    An Asynchronous Kalman Filter for Hybrid Event Cameras,
    IEEE Int. Conf. Computer Vision (ICCV), 2021. PDF, Code, YouTube, Suppl.

Super-resolution

  • Li, H., Li, G., Shi, L.,
    Super-resolution of spatiotemporal event-stream image,
    Neurocomputing, vol. 335, pp. 206-214, 2019. PDF pre-print
  • Mostafavi I., S.M., Choi, J., Yoon, K.-J.,
    Learning to Super Resolve Intensity Images from Events,
    IEEE Conf. Computer Vision and Pattern Recognition (CVPR), 2020. PDF, Code
  • Wang, L., Kim, T.-K., Yoon, K.-J.,
    EventSR: From Asynchronous Events to Image Reconstruction, Restoration, and Super-Resolution via End-to-End Adversarial Learning,
    IEEE Conf. Computer Vision and Pattern Recognition (CVPR), 2020. PDF, YouTube, Dataset
  • Jing, Y., Yang, Y., Wang, X., Song, M., Tao, D.,
    Turning Frequency to Resolution: Video Super-Resolution via Event Cameras,
    IEEE Conf. Computer Vision and Pattern Recognition (CVPR), 2021.
  • Duan, P., Wang, Z. W., Zhou, X., Ma, Y., Shi, B.,
    EventZoom: Learning To Denoise and Super Resolve Neuromorphic Events,
    IEEE Conf. Computer Vision and Pattern Recognition (CVPR), 2021. Project page, Suppl.
  • Han, J., Yang, Y., Zhou, C., Xu, C., Shi, B.,
    EvIntSR-Net: Event Guided Multiple Latent Frames Reconstruction and Super-resolution,
    IEEE Int. Conf. Computer Vision (ICCV), 2021. PDF, Suppl.

Joint/guided filtering

  • Wang, Z. W., Duan, P., Cossairt, O., Katsaggelos, A., Huang, T., Shi, B.,
    Joint Filtering of Intensity Images and Neuromorphic Events for High-Resolution Noise-Robust Imaging,
    IEEE Conf. Computer Vision and Pattern Recognition (CVPR), 2020. YouTube, Dataset

Tone Mapping

  • Simon Chane, C., Ieng, S.-H., Posch, C., Benosman, R.,
    Event-Based Tone Mapping for Asynchronous Time-Based Image Sensor,
    Front. Neurosci. (2016), 10:391. PDF
  • Han, J., Zhou, C., Duan, P., Tang, Y., Xu, C., Xu, C., Huang, T., Shi, B.,
    Neuromorphic Camera Guided High Dynamic Range Imaging,
    IEEE Conf. Computer Vision and Pattern Recognition (CVPR), 2020. PDF, Suppl. Mat.

Visual Stabilization

  • Delbruck, T., Villanueva, V., Longinotti, L.,
    Integration of dynamic vision sensor with inertial measurement unit for electronically stabilized event-based vision,
    IEEE Int. Symp. Circuits and Systems (ISCAS), 2014. YouTube, YouTube 2: Stabilizing DVS output with IMU rate gyros, YouTube 3: hallway scene

Depth Estimation (3D Reconstruction)

Monocular Depth Estimation

  • Rebecq, H., Gallego, G., Mueggler, E., Scaramuzza, D.,
    EMVS: Event-Based Multi-View Stereo—3D Reconstruction with an Event Camera in Real-Time,
    Int. J. of Computer Vision (IJCV), 126(12):1394-1414, 2018. PDF, YouTube, Code.
    • Rebecq, H., Gallego, G., Scaramuzza, D.,
      EMVS: Event-based Multi-View Stereo,
      British Machine Vision Conf. (BMVC), 2016. PDF, YouTube, 3D Reconstruction Experiments from a Train using an Event Camera, Code.
  • Kim et al., ECCV 2016,
    Real-Time 3D Reconstruction and 6-DoF Tracking with an Event Camera.
  • Gallego, G., Rebecq, H., Scaramuzza, D.,
    A Unifying Contrast Maximization Framework for Event Cameras, with Applications to Motion, Depth and Optical Flow Estimation,
    IEEE Conf. Computer Vision and Pattern Recognition (CVPR), 2018. PDF, Poster, YouTube, Spotlight presentation.
  • Haessig, G., Berthelon, X., Ieng, S.-H., Benosman, R.,
    A Spiking Neural Network Model of Depth from Defocus for Event-based Neuromorphic Vision,
    Scientific Reports 9, Article number: 3744 (2019). PDF
  • Gallego, G., Gehrig, M., Scaramuzza, D.,
    Focus Is All You Need: Loss Functions For Event-based Vision,
    IEEE Conf. Computer Vision and Pattern Recognition (CVPR), 2019. PDF arXiv, Poster, YouTube
  • Chaney, K., Zhu, A., Daniilidis, K.,
    Learning Event-based Height from Plane and Parallax,
    IEEE Conf. Computer Vision and Pattern Recognition Workshops (CVPRW), 2019. PDF, Video pitch,
    IEEE/RSJ Int. Conf. Intelligent Robots and Systems (IROS), 2019.
  • Zhu et al., CVPR 2019,
    Unsupervised Event-Based Learning of Optical Flow, Depth, and Egomotion.
  • Hidalgo-Carrió J., Gehrig D., Scaramuzza, D.,
    Learning Monocular Dense Depth from Events,
    IEEE Int. Conf. on 3D Vision (3DV), 2020. PDF, YouTube, Code, Project Page.
  • Baudron, A., Wang, Z. W., Cossairt, O., Katsaggelos, A. K.,
    E3D: Event-Based 3D Shape Reconstruction,
    arXiv, 2020. Code.
  • Gehrig, D., Rüegg, M., Gehrig, M., Hidalgo-Carrió J., Scaramuzza, D.,
    Combining Events and Frames Using Recurrent Asynchronous Multimodal Networks for Monocular Depth Prediction,
    IEEE Robotics and Automation Letters (RA-L), 2021. PDF, Code, Project Page.
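The contrast-maximization entries in this section (Gallego et al., CVPR 2018; "Focus Is All You Need", CVPR 2019) score a candidate motion by warping events along it and measuring how sharp the resulting image of warped events is. A toy version of the objective for a constant-flow warp; the event layout, image size, and the variance focus score are illustrative assumptions:

```python
import numpy as np

def contrast(events, v, resolution=(32, 32)):
    """Contrast-maximization objective: warp (x, y, t) events back to t=0
    along a candidate flow v = (vx, vy), accumulate them into an image,
    and return the image variance. Well-aligned warps pile events onto
    the same pixels and therefore score higher."""
    x = events[:, 0] - v[0] * events[:, 2]  # warp each event to t = 0
    y = events[:, 1] - v[1] * events[:, 2]
    H, W = resolution
    img, _, _ = np.histogram2d(y, x, bins=(H, W), range=((0, H), (0, W)))
    return img.var()
```

A grid or gradient search over `v` then yields the motion estimate; the papers above differ mainly in the warp model (flow, depth, rotation) and the choice of focus score, with the CVPR 2019 paper comparing many such scores.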

Monocular Depth Estimation using Structured Light

  • Brandli, C., Mantel, T.A., Hutter, M., Hoepflinger, M.A., Berner, R., Siegwart, R., Delbruck, T.,
    Adaptive Pulsed Laser Line Extraction for Terrain Reconstruction using a Dynamic Vision Sensor,
    Front. Neurosci. (2014), 7:275. PDF, YouTube
  • Matsuda, N., Cossairt, O., Gupta, M.,
    MC3D: Motion Contrast 3D Scanning,
    IEEE Conf. Computational Photography (ICCP), 2015. PDF, YouTube, Project page
  • Leroux, T., Ieng, S.-H., Benosman, R.,
    Event-Based Structured Light for Depth Reconstruction using Frequency Tagged Light Patterns,
    arXiv:1811.10771, 2018.
  • Mangalore, A. R., Seelamantula, C. S., Thakur, C. S.,
    Neuromorphic Fringe Projection Profilometry,
    IEEE Signal Processing Letters (LSP), 2020. Project page
  • Wang et al., JSEN,
    Temporal Matrices Mapping Based Calibration Method for Event-Driven Structured Light Systems.
  • Takatani, T., Ito, Y., Ebisu, A., Zheng, Y., Aoto, T.,
    Event-Based Bispectral Photometry Using Temporally Modulated Illumination,
    IEEE Conf. Computer Vision and Pattern Recognition (CVPR), 2021. Project page, Suppl. Mat., YouTube.
  • Huang, X., Zhang, Y., Xiong, Z.,
    High-speed structured light based 3D scanning using an event camera,
    Optics Express, 2021. Video.
  • Muglikar, M., Gallego, G., Scaramuzza, D.,
    ESL: Event-based Structured Light,
    IEEE Int. Conf. 3D Vision (3DV), 2021. Poster, YouTube, Project page and Dataset, Code.
  • Muglikar, M., Moeys, D., Scaramuzza, D.,
    Event Guided Depth Sensing,
    IEEE Int. Conf. 3D Vision (3DV), 2021. YouTube.

Stereo Depth Estimation

  • Misha Mahowald's Stereo Chip - Tobi Delbruck - 2020 Telluride Neuromorphic workshop,
    A tour through Misha Mahowald's 1992 stereo fusion work at Caltech in Carver Mead's Physics of Computation lab.
    Mahowald's PhD thesis, 1992, VLSI Analogs of Neuronal Visual Processing: A Synthesis of Form and Function.
  • Schraml, C., Schon, P., Milosevic, N.,
    Smartcam for real-time stereo vision - address-event based embedded system,
    Int. Conf. Computer Vision Theory and Applications (VISAPP), 2007, pp. 466-471.
    • Schraml, S., Belbachir, A. N., Milosevic, N., Schon, P.,
      Dynamic stereo vision system for real-time tracking,
      IEEE Int. Symp. Circuits and Systems (ISCAS), 2010.
    • Schraml, S., Belbachir, A. N.,
      A spatio-temporal clustering method using real-time motion analysis on event-based 3D vision,
      IEEE Conf. Computer Vision and Pattern Recognition Workshops (CVPRW), 2010. PDF
    • Schraml, S., Belbachir, A. N., Braendle, N.,
      A Real-time Pedestrian Classification Method for Event-based Dynamic Stereo Vision,
      IEEE Conf. Computer Vision and Pattern Recognition Workshops (CVPRW), 2010. PDF
    • Schraml, S., Belbachir, A. N., Braendle, N.,
      Real-time classification of pedestrians and cyclists for intelligent counting of non-motorized traffic,
      IEEE Conf. Computer Vision and Pattern Recognition Workshops (CVPRW), 2010. PDF
  • Kogler, J., Sulzbachner, C., Kubinger, W.,
    Bio-inspired stereo vision system with silicon retina imagers,
    Int. Conf. Computer Vision Systems (ICVS), 2009.
    • Kogler, J., Humenberger, M., Sulzbachner, C.,
      Event-Based Stereo Matching Approaches for Frameless Address Event Stereo Data,
      Int. Symp. Visual Computing (ISVC) 2011, Advances in Visual Computing, pp. 674-685.
    • Kogler, J., Sulzbachner, C., Humenberger, M., Eibensteiner, F.,
      Address-Event Based Stereo Vision with Bio-Inspired Silicon Retina Imagers,
      Advances in Theory and Applications of Stereo Vision (2011), pp. 165-188.
    • Kogler, J., Ph.D. Thesis 2016,
      Design and evaluation of stereo matching techniques for silicon retina cameras.
  • Kogler, J., Sulzbachner, C., Eibensteiner, F., Humenberger, M.,
    Address-Event Matching for a Silicon Retina based Stereo Vision System,
    Int. Conf. from Scientific Computing to Computational Engineering (IC-SCCE), 2010.
    • Sulzbachner, C., Kogler, J., Eibensteiner, F.,
      A novel verification approach for silicon retina stereo matching,
      IEEE Int. Symp. Electronics in Marine (ELMAR), 2010.
    • Sulzbachner, C., Zinner, C., Kogler, J.,
      An optimized silicon retina stereo matching algorithm using time-space correlation,
      IEEE Conf. Computer Vision and Pattern Recognition Workshops (CVPRW), 2011. PDF
    • Eibensteiner, F., Kogler, J., Sulzbachner, C., Scharinger, J.,
      Stereo-Vision Algorithm Based on Bio-Inspired Silicon Retinas for Implementation in Hardware,
      Int. Conf. Computer Aided Systems Theory EUROCAST, LNCS, pp. 624–631, 2011.
    • Eibensteiner, F., Kogler, J., Scharinger, J.,
      A High-Performance Hardware Architecture for a Frameless Stereo Vision Algorithm Implemented on a FPGA Platform,
      IEEE Conf. Computer Vision and Pattern Recognition Workshops (CVPRW), 2014. PDF
    • Eibensteiner, F., Brachtendorf, H. G., Scharinger, J.,
      Event-driven stereo vision algorithm based on silicon retina sensors,
      27th Int. Conf. Radioelektronika, 2017.
  • Belbachir, A.N., Schraml, S., Nowakowska, A.,
    Event-Driven Stereo Vision for Fall Detection,
    IEEE Conf. Computer Vision and Pattern Recognition Workshops (CVPRW), 2011.
    • Belbachir, A.N., Nowakowska, A., Schraml, S., Wiesmann, G., Sablatnig, R.,
      Event-Driven Feature Analysis in a 4D Spatiotemporal Representation,
      IEEE Int. Conf. Computer Vision Workshops (ICCVW), 2011.
    • Belbachir, A.N., Litzenberger, M., Schraml, S., Hofstätter, M., Bauer, D., Schön, P., Humenberger, M., Sulzbachner, C., Lunden, T., Merne, M.,
      CARE: A dynamic stereo vision sensor system for fall detection,
      IEEE Int. Symp. Circuits and Systems (ISCAS), 2012.
    • Humenberger, M., Schraml, S., Sulzbachner, C., Belbachir, A.N., Srp A., Vajda, F.,
      Embedded Fall Detection with a Neural Network and Bio-inspired Stereo Vision,
      IEEE Conf. Computer Vision and Pattern Recognition Workshops (CVPRW), 2012.
  • Benosman, R., Ieng, S. H., Rogister, P., Posch, C.,
    Asynchronous Event-Based Hebbian Epipolar Geometry,
    IEEE Trans. Neural Netw., 22(11):1723-1734, 2011. PDF
  • Rogister, P., Benosman, R., Ieng, S.-H., Lichtsteiner, P., Delbruck, T.,
    Asynchronous Event-Based Binocular Stereo Matching,
    IEEE Trans. Neural Netw. Learn. Syst. (TNNLS), 23(2):347-353, 2012. PDF
  • Carneiro, J., Ieng, S.-H., Posch, C., Benosman, R.,
    Event-based 3D reconstruction from neuromorphic retinas,
    Neural Networks (2013), 45:27-38. PDF, YouTube videos
    • Carneiro, Ph.D. Thesis, 2014,
      Asynchronous Event-Based 3D Vision.
  • Lee et al., TNNLS 2014
  • Piatkowska, E., Belbachir, A. N., Gelautz, M.,
    Cooperative and asynchronous stereo vision for dynamic vision sensors,
    Meas. Sci. Technol. (2014), 25(5).
    • Piatkowska, E., Belbachir, A. N., Gelautz, M.,
      Asynchronous Stereo Vision for Event-Driven Dynamic Stereo Sensor Using an Adaptive Cooperative Approach,
      IEEE Int. Conf. Computer Vision Workshops (ICCVW), 2013.
    • Piatkowska, E., Kogler, J., Belbachir, A. N., Gelautz, M.,
      Improved Cooperative Stereo Matching for Dynamic Vision Sensors with Ground Truth Evaluation,
      IEEE Conf. Computer Vision and Pattern Recognition Workshops (CVPRW), 2017, pp. 370-377. PDF.
  • Camuñas-Mesa, L. A., Serrano-Gotarredona, T., Ieng, S. H., Benosman, R. B., Linares-Barranco, B.,
    On the use of orientation filters for 3D reconstruction in event-driven stereo vision,
    Front. Neurosci. (2014), 8:48. PDF
    • Camuñas-Mesa, L. A., Serrano-Gotarredona, T., Linares-Barranco, B., Ieng, S., Benosman, R.,
      Event-Driven Stereo Vision with Orientation Filters,
      IEEE Int. Symp. Circuits and Systems (ISCAS), 2014.
  • Kogler, J., Eibensteiner, F., Humenberger, M., Sulzbachner, C., Gelautz, M., Scharinger, J.,
    Enhancement of sparse silicon retina-based stereo matching using belief propagation and two-stage postfiltering,
    J. Electronic Imaging, 23(4), 043011 (2014).
    • Kogler, J., Ph.D. Thesis 2016,
      Design and evaluation of stereo matching techniques for silicon retina cameras.
  • Firouzi, M. and Conradt, J.,
    Asynchronous Event-based Cooperative Stereo Matching Using Neuromorphic Silicon Retinas,
    Neural Processing Letters, 43(2):311-326, Apr. 2016. PDF
    • Dikov, G., Firouzi, M., Röhrbein, F., Conradt, J., Richter, C.,
      Spiking Cooperative Stereo-Matching at 2 ms Latency with Neuromorphic Hardware,
      Living Machines 2017: Biomimetic and Biohybrid Systems, LNCS, vol. 10384, pp. 119-137, Springer, Cham. PDF, Videos
    • Kaiser, J., Weinland, J., Keller, P., Steffen, L., Vasquez Tieck, J.C., Reichard, D., Roennau, A., Conradt, J., Dillmann, R.,
      Microsaccades for Neuromorphic Stereo Vision,
      Int. Conf. Artificial Neural Networks (ICANN), 2018.
  • Zou, D., Guo, P., Wang, Q., Wang, X., Shao, G., Shi, F., Li, J., Park, P.K.J.,
    Context-Aware Event-driven Stereo Matching,
    IEEE Int. Conf. Image Processing (ICIP), 2016.
  • Osswald, M., Ieng, S.-H., Benosman, R., Indiveri, G.,
    A spiking neural network model of 3D perception for event-based neuromorphic stereo vision systems,
    Scientific Reports 7, Article number: 40703 (2017). PDF
    • Haessig et al., AICAS 2019,
      Neuromorphic networks on the SpiNNaker platform.
  • Zou, D., Shi, F., Liu, W., Li, J., Wang, Q., Park, P.K.J., Shi, C.-W., Roh, Y.J., Ryu, H.,
    Robust Dense Depth Map Estimation from Sparse DVS Stereos,
    British Machine Vision Conf. (BMVC), 2017. Supp. Material.
  • Camuñas-Mesa, L. A., Serrano-Gotarredona, T., Ieng, S., Benosman, R., Linares-Barranco, B.,
    Event-driven Stereo Visual Tracking Algorithm to Solve Object Occlusion,
    IEEE Trans. Neural Netw. Learn. Syst. (TNNLS), 2017.
  • Xie, Z., Chen, S., Orchard, G.,
    Event-Based Stereo Depth Estimation Using Belief Propagation,
    Front. Neurosci. (2017), 11:535. YouTube
  • Everding, L., Ph.D. Thesis 2018,
    Event-Based Depth Reconstruction Using Stereo Dynamic Vision Sensors.
    • Kaelber, F., Bachelor Thesis 2016,
      A probabilistic method for event stream registration.
    • Galanis, M., Bachelor Thesis 2016,
      DVS event stream registration.
  • Andreopoulos, A., Kashyap, H.J., Nayak, T.K., Amir, A., Flickner, M.D.,
    A Low Power, High Throughput, Fully Event-Based Stereo System,
    IEEE Conf. Computer Vision and Pattern Recognition (CVPR), 2018.
    • Stereo Dataset.
  • Ieng, S.-H., Carneiro, J., Osswald, M., Benosman, R.,
    Neuromorphic Event-Based Generalized Time-Based Stereovision,
    Front. Neurosci. (2018), 12:442.
    • Carneiro, Ph.D. Thesis, 2014,
      Asynchronous Event-Based 3D Vision - Chapter 4.
  • Zhu, A., Chen, Y., Daniilidis, K.,
    Realtime Time Synchronized Event-based Stereo,
    European Conf. Computer Vision (ECCV), 2018. YouTube
  • Zhou, Y., Gallego, G., Rebecq, H., Kneip, L., Li, H., Scaramuzza, D.,
    Semi-Dense 3D Reconstruction with a Stereo Event Camera,
    European Conf. Computer Vision (ECCV), 2018. Poster, YouTube.
    • Zhou et al., TRO 2021,
      Event-based Stereo Visual Odometry.
  • Dominguez-Morales, M., Dominguez-Morales, J. P., Jimenez-Fernandez, A., Linares-Barranco, A., Jimenez-Moreno, G.,
    Stereo Matching in Address-Event-Representation (AER) Bio-Inspired Binocular Systems in a Field-Programmable Gate Array (FPGA),
    Electronics 2019, 8(4), 410.
  • Steffen, L., Reichard, D., Weinland, J., Kaiser, J., Roennau, A., Dillmann, R.,
    Neuromorphic Stereo Vision: A Survey of Bio-Inspired Sensors and Algorithms,
    Front. Neurorobot. (2019) 13:28.
  • Steffen, L., Hauck, B., Kaiser, J., Weinland, J., Ulbrich, S., Reichard, D., Roennau, A., Dillmann, R.,
    Creating an Obstacle Memory Through Event-Based Stereo Vision and Robotic Proprioception,
    IEEE Int. Conf. Automation Science and Engineering (CASE), 2019.
  • Hadviger, A., Markovic, I., Petrovic, I.,
    Stereo Event Lifetime and Disparity Estimation for Dynamic Vision Sensors,
    European Conf. Mobile Robots (ECMR), 2019. PDF arXiv.
  • Tulyakov, S., Fleuret, F., Kiefel, M., Gehler, P., Hirsch., M.,
    Learning an event sequence embedding for dense event-based deep stereo,
    IEEE Int. Conf. Computer Vision (ICCV), 2019. PDF, Video
  • Steffen, L., Ulbrich, S., Roennau, A., Dillmann, R.,
    Multi-View 3D Reconstruction With Self-Organizing Maps on Event-Based Data,
    IEEE Int. Conf. Advanced Robotics (ICAR), 2019.
  • Hadviger, A., Marković, I., Petrović, I.,
    Stereo Dense Depth Tracking Based on Optical Flow using Frames and Events,
    Advanced Robotics, 2020.
  • Ahmed, S. H., Jang, H. W., Uddin, S. M. N., Jung, Y. J.,
    Deep Event Stereo Leveraged by Event-to-Image Translation,
    AAAI Conf. Artificial Intelligence, 2021. PDF, Video, Project page
  • Wang, Z., Pan, L., Ng, Y., Zhuang, Z., Mahony, R.,
    Stereo Hybrid Event-Frame (SHEF) Cameras for 3D Perception,
    IEEE/RSJ Int. Conf. Intelligent Robots and Systems (IROS), 2021. PDF, Video, Dataset
  • Ghosh, S., Gallego, G.,
    Multi-Event-Camera Depth Estimation and Outlier Rejection by Refocused Events Fusion (MCEMVS),
    Advanced Intelligent Systems (AISY), 2022. YouTube, Presentation at IEEE MFI workshop 2022 (YouTube), Slides MFI 2022, Presentation at the GRASP Laboratory (UPenn) seminar, Project page (with Code).
    • Ghosh, S., Gallego, G.,
      Silicon retinas to help robots navigate the world,
      Advanced Science News, 2022.
    • Ghosh, S., Gallego, G.,
      Event-based Stereo Depth Estimation from Ego-motion using Ray Density Fusion,
      2nd Int. Ego4D Workshop at European Conf. Computer Vision Workshops (ECCVW), 2022.
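
Many of the stereo works above exploit the same core cue: both sensors respond to the same scene change at nearly the same instant, so a left-camera event can be paired with the geometrically plausible right-camera event whose timestamp is closest. The sketch below illustrates that time-based matching for rectified cameras; all names and parameters are hypothetical, and the cited systems add epipolar, orientation, and cooperative consistency constraints on top of this:

```python
import numpy as np

def match_by_time(left_events, right_events, max_dt=1e-3, max_disparity=40):
    """Toy event-based stereo matching by temporal coincidence.

    Events are rows (x, y, t, polarity). For each left event, consider
    right events on the same row, with the same polarity, and within the
    disparity range (rectified cameras), then pick the one whose
    timestamp is closest. Returns (x, y, disparity) triples.
    """
    matches = []
    for x, y, t, p in left_events:
        c = right_events[(right_events[:, 1] == y)
                         & (right_events[:, 3] == p)
                         & (x - right_events[:, 0] >= 0)
                         & (x - right_events[:, 0] <= max_disparity)]
        if len(c) == 0:
            continue
        best = c[np.argmin(np.abs(c[:, 2] - t))]
        if abs(best[2] - t) <= max_dt:
            matches.append((x, y, x - best[0]))  # disparity = x_L - x_R
    return matches
```

Real implementations process events asynchronously and resolve matching ambiguities with neighborhood support (e.g., cooperative networks or belief propagation) rather than per-event nearest-timestamp search.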

Stereo Depth Estimation using Structured Light

  • Martel, J.N.P., Mueller, J., Conradt, J., Sandamirskaya, Y.,
    An Active Approach to Solving the Stereo Matching Problem using Event-Based Sensors,
    IEEE Int. Symp. Circuits and Systems (ISCAS), 2018.
    • Martel, J.N.P., Müller, J., Conradt, J., Sandamirskaya, Y.,
      Live Demonstration: An Active System for Depth Reconstruction using Event-Based Sensors,
      IEEE Int. Symp. Circuits and Systems (ISCAS), 2018.

Stereoscopic Panoramic Imaging

  • smart eye TUCO-3D camera,
    Stereoscopic panoramic imaging camera based on dynamic vision sensors. PDF
  • Belbachir, A. N., Pflugfelder, R., Gmeiner, P.,
    A Neuromorphic Smart Camera for Real-time 360deg distortion-free Panoramas,
    IEEE Conference on Distributed Smart Cameras (ICDSC), 2010. PDF
  • Belbachir, A.N., Mayerhofer, M., Matolin, D., Colineau, J.,
    Real-time 360 degrees Panoramic Views using BiCa360, the Fast Rotating Dynamic Vision Sensor to up to 10 Rotations per Sec,
    IEEE Int. Symp. Circuits and Systems (ISCAS), 2012.
  • Belbachir, A.N., Mayerhofer, M., Matolin, D., Colineau, J.,
    360SCAN: High-speed rotating line sensor for real-time 360 degrees panoramic vision,
    IEEE Int. Conf. Distributed Smart Cameras (ICDSC), 2012.
  • Belbachir, A. N., Schraml, S., Mayerhofer, M., Hofstatter, M.,
    A Novel HDR Depth Camera for Real-time 3D 360-degree Panoramic Vision,
    IEEE Conf. Computer Vision and Pattern Recognition Workshops (CVPRW), 2014, pp. 419-426. PDF
  • Schraml, S., Belbachir, A. N., Bischof, H.,
    Event-Driven Stereo Matching for Real-Time 3D Panoramic Vision,
    IEEE Conf. Computer Vision and Pattern Recognition (CVPR), 2015, pp. 466-474. PDF. Slides.
  • Schraml, S., Belbachir, A. N., Bischof, H.,
    An Event-Driven Stereo System for Real-Time 3-D 360° Panoramic Vision,
    IEEE Trans. Ind. Electron., 63(1):418-428, 2016.

SLAM (Simultaneous Localization And Mapping)

Localization, Ego-Motion Estimation

  • Weikersdorfer, D. and Conradt, J.,
    Event-based particle filtering for robot self-localization,
    IEEE Int. Conf. Robotics and Biomimetics (ROBIO), 2012. PDF
  • Censi, A., Strubel, J., Brandli, C., Delbruck, T., Scaramuzza, D.,
    Low-latency localization by Active LED Markers tracking using a Dynamic Vision Sensor,
    IEEE/RSJ Int. Conf. Intelligent Robots and Systems (IROS), 2013. PDF, Slides
  • Mueggler, E., Huber, B., Scaramuzza, D.,
    Event-based, 6-DOF Pose Tracking for High-Speed Maneuvers,
    IEEE/RSJ Int. Conf. Intelligent Robots and Systems (IROS), Chicago, IL, 2014, pp. 2761-2768. PDF, YouTube
  • Gallego, G., Forster, C., Mueggler, E., Scaramuzza, D.,
    Event-based Camera Pose Tracking using a Generative Event Model,
    arXiv:1510.01972, 2015.
  • Mueggler, E., Gallego G., Scaramuzza, D.,
    Continuous-Time Trajectory Estimation for Event-based Vision Sensors,
    Robotics: Science and Systems (RSS), 2015. PDF, PPT, Poster
  • Reverter Valeiras, D., Orchard, G., Ieng, S.-H., Benosman, R.,
    Neuromorphic Event-Based 3D Pose Estimation,
    Front. Neurosci. (2016), 9:522. PDF, Suppl. Mat., YouTube
  • Reverter Valeiras, D., Kime, S., Ieng, S.-H., Benosman, R.,
    An Event-Based Solution to the Perspective-n-Point Problem,
    Front. Neurosci. (2016), 10:208. PDF, Suppl. Mat.
  • Yuan, W., Ramalingam, S.,
    Fast Localization and Tracking using Event Sensors,
    IEEE Int. Conf. Robotics and Automation (ICRA), 2016. PDF, Video
  • Mueggler et al., IJRR 2017.
    The Event-Camera Dataset and Simulator: Event-based Data for Pose Estimation, Visual Odometry, and SLAM.
  • Gallego, G., Lund, J.E.A., Mueggler, E., Rebecq, H., Delbruck, T., Scaramuzza, D.,
    Event-based, 6-DOF Camera Tracking from Photometric Depth Maps,
    IEEE Trans. Pattern Anal. Machine Intell. (TPAMI), 2018. PDF, YouTube, Datasets
  • Nguyen, A., Do, T.-T., Caldwell, D. G., Tsagarakis, N. G.,
    Real-Time 6DOF Pose Relocalization for Event Cameras with Stacked Spatial LSTM Networks,
    IEEE Conf. Computer Vision and Pattern Recognition Workshops (CVPRW), 2019. PDF. Project page. Video pitch
  • Maqueda et al., CVPR 2018.
    Event-based Vision meets Deep Learning on Steering Prediction for Self-driving Cars.
  • Gallego et al., CVPR 2018,
    A Unifying Contrast Maximization Framework for Event Cameras, with Applications to Motion, Depth and Optical Flow Estimation.
  • Bryner, S., Gallego, G., Rebecq, H., Scaramuzza, D.,
    Event-based, Direct Camera Tracking from a Photometric 3D Map using Nonlinear Optimization,
    IEEE Int. Conf. Robotics and Automation (ICRA), 2019. PDF, YouTube, Project page and Datasets
  • Gallego et al., CVPR 2019,
    Focus Is All You Need: Loss Functions For Event-based Vision.
  • Zhu et al., CVPR 2019,
    Unsupervised Event-Based Learning of Optical Flow, Depth, and Egomotion.
  • Xu, J., Jiang, M., Yu, L., Yang, W., Wang, W.,
    Robust Motion Compensation for Event Cameras With Smooth Constraint,
    IEEE Trans. Comput. Imag. (TCI), 6:604-614, 2020.
  • Fischer, T., Milford, M.,
    Event-based visual place recognition with ensembles of spatio-temporal windows,
    IEEE Robotics and Automation Letters (RA-L), 5(4):6924-6931, 2020. PDF including Suppl. Mat., Code
  • Kreiser, R., Renner, A., Leite, V.R.C., Serhan, B., Bartolozzi, C., Glover, A., Sandamirskaya, Y.,
    An On-chip Spiking Neural Network for Estimation of the Head Pose of the iCub Robot,
    Front. Neurosci. (2020), 14:551.
  • Nunes, U.M., Demiris, Y.,
    Entropy Minimisation Framework for Event-based Vision Model Estimation,
    European Conf. Computer Vision (ECCV), 2020. Suppl. Mat., YouTube, Code
    • Nunes, U.M., Demiris, Y.,
      Live Demonstration: Incremental Motion Estimation for Event-Based Cameras by Dispersion Minimisation,
      IEEE Conf. Computer Vision and Pattern Recognition Workshops (CVPRW), 2021. Code
    • Nunes, U.M., Demiris, Y.,
      Robust Event-based Vision Model Estimation by Dispersion Minimisation,
      IEEE Trans. Pattern Anal. Machine Intell. (TPAMI), 2021. Suppl. Mat., Code
  • Peng, X., Wang, Y., Gao, L., Kneip, L.,
    Globally-Optimal Event Camera Motion Estimation,
    European Conf. Computer Vision (ECCV), 2020. PDF, Suppl. Mat., YouTube
    • Peng, X., Gao, L., Wang, Y., Kneip, L.,
      Globally-Optimal Contrast Maximisation for Event Cameras,
      IEEE Trans. Pattern Anal. Machine Intell. (TPAMI), 44(7):3479-3495, 2022. PDF
  • Bertrand, J., Yigit, A., Durand, S.,
    Embedded Event-based Visual Odometry,
    IEEE Int. Conf. Event-Based Control Comm. and Signal Proc. (EBCCSP), 2020. Code
  • Chamorro, W., Andrade-Cetto, J., Solà, J.,
    High-Speed Event Camera Tracking,
    British Machine Vision Conf. (BMVC), 2020. PDF, Suppl. Mat., PDF
  • Chen, G., Chen, W., Yang, Q., Xu, Z., Yang, L., Conradt, J., Knoll, A.,
    A Novel Visible Light Positioning System With Event-Based Neuromorphic Vision Sensor,
    IEEE Sensors Journal, 20(17):10211-10219, 2020.
  • Kong, D., Fang, Z., Li, H., Hou, K., Coleman, S., Kerr, D.,
    Event-VPR: End-to-End Weakly Supervised Network Architecture for Event-based Visual Place Recognition,
    arXiv, 2020.
  • Jiao, J., Huang, H., Li, L., He, Z., Zhu, Y., Liu, M.,
    Comparing Representations in Tracking for Event Camera-Based SLAM,
    IEEE Conf. Computer Vision and Pattern Recognition Workshops (CVPRW), 2021. YouTube, Code.
  • Gu, C., Learned-Miller, E., Sheldon, D., Gallego, G., Bideau, P.,
    The Spatio-Temporal Poisson Point Process: A Simple Model for the Alignment of Event Camera Data,
    IEEE Int. Conf. Computer Vision (ICCV), 2021. Project page, Code
  • Peng, X., Xu, W., Yang, J., Kneip, L.,
    Continuous Event-Line Constraint for Closed-Form Velocity Initialization,
    British Machine Vision Conf. (BMVC), 2021, PDF, Video
  • Lee, A. J., Kim, A.,
    EventVLAD: Visual Place Recognition with Reconstructed Edges from Event Cameras,
    IEEE/RSJ Int. Conf. Intelligent Robots and Systems (IROS), 2021, PDF
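
Several entries above (Gallego et al. CVPR 2018/2019, Peng et al., Liu et al., Nunes and Demiris, Kim and Kim) build on contrast maximization: warp each event along a candidate motion and score how sharp the resulting image of accumulated events is; the correct motion aligns events from the same edge onto the same pixels. The following is a minimal sketch for a constant-optical-flow model, using a grid search over hand-picked candidates rather than the gradient-based or globally optimal solvers of the papers (all names are illustrative):

```python
import numpy as np

def contrast(events, flow, shape=(64, 64)):
    """Warp events (x, y, t, polarity) back to t=0 along a constant
    optical-flow candidate, accumulate them into an image, and return
    the image variance (the 'contrast' objective)."""
    x, y, t, _ = events.T
    xw = np.clip(np.round(x - flow[0] * t), 0, shape[1] - 1).astype(int)
    yw = np.clip(np.round(y - flow[1] * t), 0, shape[0] - 1).astype(int)
    img = np.zeros(shape)
    np.add.at(img, (yw, xw), 1.0)  # unbuffered accumulation per pixel
    return img.var()

def estimate_flow(events, candidates):
    """Pick the flow candidate that maximizes contrast (grid search)."""
    scores = [contrast(events, c) for c in candidates]
    return candidates[int(np.argmax(scores))]
```

The same objective generalizes from optical flow to rotational, homographic, or 6-DOF motion models by changing the warp; the papers above differ mainly in the warp, the sharpness measure, and the optimizer.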

Visual Servoing

  • Muthusamy, R., Ayyad, A., Halwani, M., Swart, D., Gan, D., Seneviratne, L., Zweiri, Y.,
    Neuromorphic Eye-in-Hand Visual Servoing,
    IEEE Access, 2021. YouTube.
  • Gomez Eguiluz, A., Rodriguez-Gomez, J.P., Martinez-de Dios, J.R., Ollero, A.,
    Asynchronous Event-based Line Tracking for Time-to-Contact Maneuvers in UAS,
    IEEE/RSJ Int. Conf. Intelligent Robots and Systems (IROS), 2020. Youtube.
  • Gomez Eguiluz, A., Rodriguez-Gomez, J.P., Tapia, R., Maldonado, F.J., Acosta, J.A., Martinez-de Dios, J.R., Ollero, A.,
    Why fly blind? Event-based visual guidance for ornithopter robot flight,
    IEEE/RSJ Int. Conf. Intelligent Robots and Systems (IROS), 2021. PDF, Youtube.

Mapping

  • See Depth Estimation (3D Reconstruction)

Visual Odometry / SLAM

Monocular

  • Cook et al., IJCNN 2011,
    Interacting maps for fast visual interpretation. (Joint estimation of optical flow, image intensity and angular velocity with a rotating event camera).
  • Weikersdorfer, D., Hoffmann, R., Conradt. J.,
    Simultaneous localization and mapping for event-based vision systems.
    Int. Conf. Computer Vision Systems (ICVS), 2013, pp. 133-142. PDF, Slides
  • Kim et al., BMVC 2014,
    Simultaneous Mosaicing and Tracking with an Event Camera.
  • Censi, A. and Scaramuzza, D.,
    Low-latency Event-based Visual Odometry,
    IEEE Int. Conf. Robotics and Automation (ICRA), 2014, pp. 703-710. PDF, Slides
  • Weikersdorfer, D., Adrian, D. B., Cremers, D., Conradt, J.,
    Event-based 3D SLAM with a depth-augmented dynamic vision sensor,
    IEEE Int. Conf. Robotics and Automation (ICRA), 2014, pp. 359-364.
    • Weikersdorfer, Ph.D. Thesis, 2014,
      Efficiency by Sparsity: Depth-Adaptive Superpixels and Event-based SLAM.
  • Kueng, B., Mueggler, E., Gallego, G., Scaramuzza, D.,
    Low-Latency Visual Odometry using Event-based Feature Tracks,
    IEEE/RSJ Int. Conf. Intelligent Robots and Systems (IROS), 2016, pp. 16-23. PDF. YouTube
  • Kim, H., Leutenegger, S., Davison, A.J.,
    Real-Time 3D Reconstruction and 6-DoF Tracking with an Event Camera,
    European Conf. Computer Vision (ECCV), 2016, pp. 349-364. PDF, YouTube, YouTube 2
  • Gallego, G. and Scaramuzza, D.,
    Accurate Angular Velocity Estimation with an Event Camera,
    IEEE Robotics and Automation Letters (RA-L), 2(2):632-639, 2017. PDF, PPT, Youtube.
  • Rebecq, H., Horstschaefer, T., Gallego, G., Scaramuzza, D.,
    EVO: A Geometric Approach to Event-based 6-DOF Parallel Tracking and Mapping in Real-time,
    IEEE Robotics and Automation Letters (RA-L), 2(2):593-600, 2017. PDF, PPT, Poster, Youtube, Code.
  • Reinbacher, C., Munda, G., Pock, T.,
    Real-Time Panoramic Tracking for Event Cameras,
    IEEE Int. Conf. Computational Photography (ICCP), 2017. PDF, YouTube, Code
  • Mueggler et al., IJRR 2017.
    The Event-Camera Dataset and Simulator: Event-based Data for Pose Estimation, Visual Odometry, and SLAM.
  • Zhu, D., Dong, J., Xu, Z., Ye, C., Hu, Y., Su, H., Liu, Z., Chen, G.,
    Neuromorphic Visual Odometry System for Intelligent Vehicle Application with Bio-inspired Vision Sensor,
    IEEE Int. Conf. Robotics and Biomimetics (ROBIO), 2019. PDF
  • Park, P.K.J., Kim, J.-S., Shin, C.-W, Lee, H., Liu, W., Wang, Q., Roh, Y., Kim, J., Ater, Y., Soloveichik, E., Ryu, H. E.,
    Low-Latency Interactive Sensing for Machine Vision,
    IEEE Int. Electron Devices Meeting (IEDM), 2019.
  • Liu, D., Parra, A., Chin, T.-J.,
    Globally Optimal Contrast Maximisation for Event-based Motion Estimation,
    IEEE Conf. Computer Vision and Pattern Recognition (CVPR), 2020. Project page
  • Gehrig, M., Shrestha, S. B., Mouritzen, D., Scaramuzza, D.,
    Event-Based Angular Velocity Regression with Spiking Networks,
    IEEE Int. Conf. Robotics and Automation (ICRA), 2020. PDF, Code
  • Kim, H., Kim, H.J.,
    Real-Time Rotational Motion Estimation With Contrast Maximization Over Globally Aligned Events,
    IEEE Robotics and Automation Letters (RA-L), 6(3):6016-6023, 2021. Project page and Dataset, YouTube, Code
  • Liu, D., Parra, A., Chin, T.-J.,
    Spatiotemporal Registration for Event-Based Visual Odometry,
    IEEE Conf. Computer Vision and Pattern Recognition (CVPR), 2021. Suppl. Mat., PDF, Dataset
  • Wang, Y., Yang, J., Peng, X., Wu, P., Gao, L., Huang, K., Chen, J., Kneip, L.,
    Visual Odometry with an Event Camera Using Continuous Ray Warping and Volumetric Contrast Maximization,
    Sensors (2022), 22(15):5687.
  • Zuo, Y., Yang, J., Chen, J., Wang, X., Wang, Y., Kneip, L.,
    DEVO: Depth-Event Camera Visual Odometry in Challenging Conditions,
    IEEE Int. Conf. Robotics and Automation (ICRA), 2022. PDF, YouTube

Stereo

  • Zhou, Y., Gallego, G., Shen, S.,
    Event-based Stereo Visual Odometry,
    IEEE Trans. Robot. (TRO), 2021. Project page, PDF, YouTube, Code.
  • Xiao et al., arXiv 2021,
    Research on Event Accumulator Settings for Event-Based SLAM.

Visual-Inertial Odometry

  • Mueggler, E., Gallego, G., Rebecq, H., Scaramuzza, D.,
    Continuous-Time Visual-Inertial Odometry for Event Cameras,
    IEEE Trans. Robot. (TRO), 2018.
  • Mueggler et al., IJRR 2017.
    The Event-Camera Dataset and Simulator: Event-based Data for Pose Estimation, Visual Odometry, and SLAM.
  • Zhu, A., Atanasov, N., Daniilidis, K.,
    Event-based Visual Inertial Odometry,
    IEEE Conf. Computer Vision and Pattern Recognition (CVPR), 2017. PDF, Supplementary material, YouTube, YouTube 2
  • Rebecq, H., Horstschaefer, T., Scaramuzza, D.,
    Real-time Visual-Inertial Odometry for Event Cameras using Keyframe-based Nonlinear Optimization,
    British Machine Vision Conf. (BMVC), 2017. PDF, Appendix, YouTube, Project page, PPT, Oral presentation.
  • Rosinol Vidal, A., Rebecq, H., Horstschaefer, T., Scaramuzza, D.,
    Ultimate SLAM? Combining Events, Images, and IMU for Robust Visual SLAM in HDR and High Speed Scenarios,
    IEEE Robotics and Automation Letters (RA-L), 3(2):994-1001, Apr. 2018. PDF, YouTube, Poster, Project page, ICRA18 video pitch.
  • Nelson, K. J., MSc Thesis 2019,
    Event-Based Visual-Inertial Odometry on a Fixed-Wing Unmanned Aerial Vehicle.
  • Rebecq et al., TPAMI 2020,
    High Speed and High Dynamic Range Video with an Event Camera.
    • Rebecq et al., CVPR 2019,
      Events-to-Video: Bringing Modern Computer Vision to Event Cameras.
  • Friedel, Z. P., MSc Thesis 2020,
    Event-Based Visual-Inertial Odometry Using Smart Features.
  • Le Gentil, C., Tschopp, F., Alzugaray, I., Vidal-Calleja, T., Siegwart, R., Nieto, J.,
    IDOL: A Framework for IMU-DVS Odometry using Lines,
    IEEE/RSJ Int. Conf. Intelligent Robots and Systems (IROS), 2020. PDF
  • Xiao, K., Wang, G., Chen, Y., Xie, Y., Li, H.,
    Research on Event Accumulator Settings for Event-Based SLAM,
    arXiv, 2021. PDF (Monocular VIO only), Code (VIO and Stereo).

Segmentation

Object Segmentation

  • Barranco, F., Teo, C. L., Fermüller, C., Aloimonos, Y.,
    Contour Detection and Characterization for Asynchronous Event Sensors,
    IEEE Int. Conf. Computer Vision (ICCV), 2015, pp. 486-494. PDF, Project page
  • Marcireau, A., Ieng, S.-H., Simon-Chane, C., Benosman, R.,
    Event-Based Color Segmentation With a High Dynamic Range Sensor.
    Front. Neurosci. (2018), 12:135. PDF
  • Barranco, F., Fermüller, C., Ros, E.,
    Real-Time Clustering and Multi-Target Tracking Using Event-Based Sensors,
    IEEE/RSJ Int. Conf. Intelligent Robots and Systems (IROS), 2018. PDF, YouTube
  • Alonso I., Murillo A.,
    EV-SegNet: Semantic Segmentation for Event-based Cameras,
    IEEE Conf. Computer Vision and Pattern Recognition Workshops (CVPRW), 2019. PDF. Project page. Video pitch
  • Wang, L., Chae, Y., Yoon, S.-H., Kim, T.-K., Yoon, K.-J.,
    EvDistill: Asynchronous Events To End-Task Learning via Bidirectional Reconstruction-Guided Cross-Modal Knowledge Distillation,
    IEEE Conf. Computer Vision and Pattern Recognition (CVPR), 2021. Code.
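
Most learning-based methods in this and the following sections first convert the asynchronous event stream into a dense tensor that a CNN can consume. A common and simple choice is a two-channel image of per-pixel positive and negative event counts; the sketch below is an illustrative version of that idea, not the exact representation of any one paper above:

```python
import numpy as np

def event_image(events, shape=(180, 240)):
    """Build a 2-channel image of per-pixel event counts.

    Events are rows (x, y, t, polarity) with polarity in {-1, +1}.
    Channel 0 counts positive events, channel 1 negative events.
    """
    img = np.zeros((2, *shape), dtype=np.float32)
    x = events[:, 0].astype(int)
    y = events[:, 1].astype(int)
    pos = events[:, 3] > 0
    np.add.at(img[0], (y[pos], x[pos]), 1.0)
    np.add.at(img[1], (y[~pos], x[~pos]), 1.0)
    return img
```

Richer variants stack several temporal slices (voxel grids) or store per-pixel timestamps (time surfaces) instead of plain counts; the papers above explore many such encodings.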

Motion Segmentation

  • Glover et al., IROS 2016,
    Event-driven ball detection and gaze fixation in clutter.
    • Glover et al., IROS 2017,
      Robust Visual Tracking with a Freely-moving Event Camera.
  • Mishra, A., Ghosh, R., Principe, J.C., Thakor, N.V., Kukreja, S.L.,
    A Saccade Based Framework for Real-Time Motion Segmentation Using Event Based Vision Sensors,
    Front. Neurosci. (2017), 11:83.
  • Vasco, V., Glover, A., Mueggler, E., Scaramuzza, D., Natale, L., Bartolozzi, C.,
    Independent Motion Detection with Event-driven Cameras,
    Int. Conf. Advanced Robotics (ICAR), 2017, pp. 530-536. PDF
  • Stoffregen et al., ACRA 2017,
    Simultaneous Optical Flow and Segmentation (SOFAS) using Dynamic Vision Sensor.
  • Mitrokhin et al., IROS 2018,
    Event-based Moving Object Detection and Tracking.
  • Stoffregen, T., Gallego, G., Drummond, T., Kleeman, L., Scaramuzza, D.,
    Event-Based Motion Segmentation by Motion Compensation,
    IEEE Int. Conf. Computer Vision (ICCV), 2019. PDF (animations best viewed with Acrobat Reader), YouTube
  • Mitrokhin et al., IROS 2019,
    EV-IMO: Motion Segmentation Dataset and Learning Pipeline for Event Cameras.
  • Mitrokhin, A., Hua, Z., Fermüller, C., Aloimonos, Y.,
    Learning Visual Motion Segmentation Using Event Surfaces,
    IEEE Conf. Computer Vision and Pattern Recognition (CVPR), 2020. PDF, Suppl. Mat
  • Parameshwara, C. M., Sanket, N. J., Gupta, A., Fermüller, C., Aloimonos, Y.,
    MOMS with Events: Multi-Object Motion Segmentation With Monocular Event Cameras,
    arXiv, 2020.
  • Kepple et al., ECCV 2020,
    Jointly learning visual motion and confidence from local patches in event cameras.
  • Zhou, Y., Gallego, G., Lu, X., Liu, S., Shen, S.,
    Event-based Motion Segmentation with Spatio-Temporal Graph Cuts,
    IEEE Trans. Neural Netw. Learn. Syst. (TNNLS), 2021. Project page, YouTube, Code.

Pattern Recognition

Object Recognition

  • Serrano-Gotarredona, R., Oster, M., Lichtsteiner, P., Linares-Barranco, A., Paz-Vicente, R., Gómez-Rodríguez, F., Camuñas-Mesa, L., Berner, R., Rivas, M., Delbrück, T., Liu, S. C., Douglas, R., Häfliger, P., Jiménez-Moreno, G., Civit, A., Serrano-Gotarredona, T., Acosta-Jiménez, A., Linares-Barranco, B.,
    CAVIAR: A 45k-Neuron, 5M-Synapse, 12G-connects/sec AER Hardware Sensory-Processing-Learning-Actuating System for High Speed Visual Object Recognition and Tracking,
    IEEE Trans. on Neural Netw., 20(9):1417-1438, 2009. PDF
  • Belbachir, A. N., Hofstaetter, M., Litzenberger, M., Schoen, P.,
    High Speed Embedded Object Analysis Using a Dual-Line Timed-Address-Event Temporal Contrast Vision Sensor,
    IEEE Trans. Ind. Electron., 58(3):770-783, 2011.
  • Ghosh, R., Mishra, A., Orchard, G., Thakor, N.,
    Real-time object recognition and orientation estimation using an event-based camera and CNN,
    IEEE Biomedical Circuits and Systems Conf. (BioCAS), 2014.
  • Serrano-Gotarredona, T., Linares-Barranco, B., Galluppi, F., Plana, L., Furber, S.,
    ConvNets experiments on SpiNNaker,
    IEEE Int. Symp. Circuits and Systems (ISCAS), 2015.
  • Cohen, G., Orchard, G., Ieng, S.-H., Tapson, J., Benosman, R., van Schaik, A.,
    Skimming Digits: Neuromorphic Classification of Spike-Encoded Images,
    Front. Neurosci. (2016), 10:184. PDF
  • Barua et al., WACV 2016,
    Direct face detection and video reconstruction from event cameras.
  • Moeys, D., Corradi F., Kerr, E., Vance, P., Das, G., Neil, D., Kerr, D., Delbruck, T.,
    Steering a Predator Robot using a Mixed Frame/Event-Driven Convolutional Neural Network,
    IEEE Int. Conf. Event-Based Control Comm. and Signal Proc. (EBCCSP), 2016. PDF, YouTube 1, YouTube 2, YouTube 3
  • Li, H., Li, G., Shi, L.,
    Classification of Spatiotemporal Events Based on Random Forest,
    Int. Conf. Brain Inspired Cognitive Systems (BICS), 2016.
  • Ghosh, R., Siyi, T., Rasouli, M., Thakor, N. V., Kukreja, S. L.,
    Pose-Invariant Object Recognition for Event-Based Vision with Slow-ELM,
    Int. Conf. Artificial Neural Networks (ICANN), 2016. PDF
  • Cohen, G., Afshar, S., Orchard, G., Tapson, J., Benosman, R., van Schaik, A.,
    Spatial and Temporal Downsampling in Event-Based Visual Classification,
    IEEE Trans. Neural Netw. Learn. Syst. (TNNLS), 29(10):5030-5044, Oct. 2018.
  • Lenz, G., Ieng, S.-H., Benosman, R.,
    Event-Based Face Detection and Tracking Using the Dynamics of Eye Blinks,
    Front. Neurosci., 2020. PDF, YouTube
  • Ramesh, B., Ussa, A., Della Vedova, L., Yang, H., Orchard, G.,
    PCA-RECT: An Energy-Efficient Object Detection Approach for Event Cameras,
    Asian Conf. Computer Vision Workshops (ACCVW), 2018. Code
    • Ramesh, B., Ussa, A., Della Vedova, L., Yang, H., Orchard, G.,
      Low-Power Dynamic Object Detection and Classification With Freely Moving Event Cameras,
      Front. Neurosci. (2020), 14:135. N-SOD Dataset
  • Negri, P., Soto, M., Linares-Barranco, B., Serrano-Gotarredona, T.,
    Scene Context Classification with Event-Driven Spiking Deep Neural Networks,
    IEEE Int. Conf. Electronics, Circuits and Systems (ICECS), 2018.
  • Cannici, M., Ciccone, M., Romanoni, A., Matteucci, M.,
    Attention Mechanisms for Object Recognition With Event-Based Cameras,
    IEEE Winter Conf. Applications of Computer Vision (WACV), 2019.
  • Jiang, Z., Xia, P., Huang, K., Stechele, W., Chen, G., Bing, Z., Knoll, A.,
    Mixed Frame-/Event-Driven Fast Pedestrian Detection,
    IEEE Int. Conf. Robotics and Automation (ICRA), 2019.
  • Rueckauer, B., Kanzig, N., Liu, S.-C., Delbruck, T., Sandamirskaya, Y.,
    Closing the Accuracy Gap in an Event-Based Visual Recognition Task,
    arXiv, 2019.
  • Bi, Y., Chadha, A., Abbas, A., Bourtsoulatze, E., Andreopoulos, Y.,
    Graph-Based Object Classification for Neuromorphic Vision Sensing,
    IEEE Int. Conf. Computer Vision (ICCV), 2019. PDF, Suppl. Mat., PDF arXiv, Github Page, ASL-DVS Dataset.
    • Bi et al., arXiv 2019,
      Graph-based Spatial-temporal Feature Learning for Neuromorphic Vision Sensing.
  • Li, J., Dong, S., Yu, Z., Tian, Y., Huang, T.,
    Event-Based Vision Enhanced: A Joint Detection Framework in Autonomous Driving,
    IEEE Int. Conf. Multimedia and Expo (ICME), 2019.
  • Iacono, M., D'Angelo, G., Glover, A., Tikhanoff, V., Niebur, E., Bartolozzi, C.,
    Proto-Object Based Saliency for Event-Driven Cameras,
    IEEE/RSJ Int. Conf. Intelligent Robots and Systems (IROS), 2019.
  • Gao, S., Guo, G., Chen, C.L.P.,
    Event-Based Incremental Broad Learning System for Object Classification,
    IEEE Int. Conf. Computer Vision Workshop (ICCVW), 2019.
    • Gao, S., Guo, G., Huang, H., Cheng, X., Chen, C.L.P.,
      An End-to-End Broad Learning System for Event-Based Object Classification,
      IEEE Access, vol. 8, pp. 45974-45984, 2020.
  • Nan, Y., Xiao, R., Gao, S., Yan, R.,
    An Event-based Hierarchy Model for Object Recognition,
    IEEE Symp. Series in Computational Intell. (SSCI), 2019.
  • Damien, J., Hubert, K., Frederic, C.,
    Convolutional Neural Network for Detection and Classification with Event-based Data,
    Int. Joint Conf. Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP), 2019. PDF.
  • Xiao, R., Tang, H., Ma, Y., Yan, R., Orchard, G.,
    An Event-Driven Categorization Model for AER Image Sensors Using Multispike Encoding and Learning,
    IEEE Trans. Neural Netw. Learn. Syst. (TNNLS), 2019.
  • Chen, G., Cao, H., Ye, C., Zhang, Z., Liu, X., Mo, X., Qu, Z., Conradt, J., Röhrbein, F., Knoll, A.,
    Multi-Cue Event Information Fusion for Pedestrian Detection With Neuromorphic Vision Sensors,
    Front. Neurorobot. 13:10, 2019.
  • Li and Shi, Front. Neurorobot. 2019,
    Robust Event-Based Object Tracking Combining Correlation Filter and CNN Representation.
  • Camuñas-Mesa, L.A., Linares-Barranco, B., Serrano-Gotarredona, T.,
    Low-power hardware implementation of SNN with decision block for recognition tasks,
    IEEE Int. Conf. Electronics, Circuits and Systems (ICECS), 2019.
  • Park et al., IEDM 2019,
    Low-Latency Interactive Sensing for Machine Vision.
  • Liu, Q., Ruan, H., Xing, D., Tang, H., Pan, G.,
    Effective AER Object Classification Using Segmented Probability-Maximization Learning in Spiking Neural Networks,
    AAAI Conf. Artificial Intelligence, 2020.
  • Ramesh, B., Yang, H.,
    Boosted Kernelized Correlation Filters for Event-based Face Detection,
    IEEE Winter Conf. Applications of Computer Vision Workshops (WACVW), 2020.
  • Giannone, G., Anoosheh, A., Quaglino, A., D'Oro, P., Gallieri, M., Masci, J.,
    Real-time Classification from Short Event-Camera Streams using Input-filtering Neural ODEs,
    arXiv, 2020.
  • Bisulco, A., Cladera Ojeda, F., Isler, V., Lee, D. D.,
    Near-chip Dynamic Vision Filtering for Low-Bandwidth Pedestrian Detection,
    IEEE Computer Society Annual Symposium on VLSI (ISVLSI), 2020.
  • Lu, J., Dong, J., Yan, R., Tang, H.,
    An Event-based Categorization Model Using Spatio-temporal Features in a Spiking Neural Network,
    IEEE Int. Conf. Advanced Computational Intelligence (ICACI), 2020.
  • Perot, E., de Tournemire, P., Nitti, D., Masci, J., Sironi, A.,
    Learning to Detect Objects with a 1 Megapixel Event Camera,
    Advances in Neural Information Processing Systems 33 (NeurIPS), 2020. 1Mpx Detection Dataset
  • Ojeda, F. C., Bisulco, A., Kepple, D., Isler, V., Lee, D. D.,
    On-Device Event Filtering with Binary Neural Networks for Pedestrian Detection Using Neuromorphic Vision Sensors,
    IEEE Int. Conf. Image Processing (ICIP), 2020.
  • Deng, Y., Li, Y., Chen, H.,
    AMAE: Adaptive Motion-Agnostic Encoder for Event-Based Object Classification,
    IEEE Robotics and Automation Letters (RA-L), 5(3):4596-4603, July 2020.
  • Baldwin et al., arXiv 2021,
    Time-Ordered Recent Event (TORE) Volumes for Event Cameras.
  • Patel, H., Iaboni, C., Lobo, D., Choi, J., Abichandani, P.,
    Event Camera Based Real-Time Detection and Tracking of Indoor Ground Robots,
    arXiv, 2021.
  • Kim, J., Bae, J., Park, G., Zhang, D., and Kim, Y.,
    N-ImageNet: Towards Robust, Fine-Grained Object Recognition with Event Cameras,
    IEEE Int. Conf. Computer Vision (ICCV), 2021. Suppl. Mat., Github Page, N-ImageNet Dataset.
  • Li, Y., Zhou, H., Yang, B., Zhang, Y., Cui, Z., Bao, H., Zhang, G.,
    Graph-based Asynchronous Event Processing for Rapid Object Recognition,
    IEEE Int. Conf. Computer Vision (ICCV), 2021. PDF, Suppl.
  • Wang, Z., Hu, Y., Liu, S.-C.,
    Exploiting Spatial Sparsity for Event Cameras with Visual Transformers,
    IEEE Int. Conf. on Image Processing (ICIP), 2022. PDF.

Gesture Recognition

  • Lee, J. H., Delbruck, T., Pfeiffer, M., Park, P.K.J., Shin, C.-W., Ryu, H., Kang, B. C.,
    Real-Time Gesture Interface Based on Event-Driven Processing From Stereo Silicon Retinas,
    IEEE Trans. Neural Netw. Learn. Syst. (TNNLS), 25(12):2250-2263, 2014.
    • Lee, J., Delbruck, T., Park, P.K.J., Pfeiffer, M., Shin, C. W., Ryu, H., Kang, B. C.,
      Live demonstration: Gesture-Based remote control using stereo pair of dynamic vision sensors,
      IEEE Int. Symp. Circuits and Systems (ISCAS), 2012. PDF, YouTube
  • Kohn, B., Belbachir, A.N., Hahn, T., Kaufmann, H.,
    Event-driven Body Motion Analysis For Real-time Gesture Recognition,
    IEEE Int. Symp. Circuits and Systems (ISCAS), 2012. PDF
    • Kohn, B., Belbachir, A.N., Nowakowska, A.,
      Real-time Gesture Recognition using a Bio-inspired 3D Vision Sensor,
      IEEE Conf. Computer Vision and Pattern Recognition Workshops (CVPRW), 2012.
    • Kohn, B., Nowakowska, A., Belbachir, A.N.,
      Real-time Body Motion Analysis for Dance Pattern Recognition,
      IEEE Conf. Computer Vision and Pattern Recognition Workshops (CVPRW), 2012.
  • Orchard, G., Meyer, C., Etienne-Cummings, R., Posch, C., Thakor, N., Benosman, R.,
    HFIRST: A Temporal Approach to Object Recognition,
    IEEE Trans. Pattern Anal. Machine Intell. (TPAMI), 37(10):2028-2040, 2015. PDF, PDF
    • Code: HFIRST: A simple spiking neural network for recognition based on the canonical frame-based HMAX model.
  • Park, P.K.J., Lee, K., Lee, J.H., Kang, B., Shin, C.-W., Woo, J., Kim, J.-S., Suh, Y., Kim, S., Moradi, S., Gurel, O., Ryu, H.,
    Computationally efficient, real-time motion recognition based on bio-inspired visual and cognitive processing,
    IEEE Int. Conf. Image Processing (ICIP), 2015.
  • Park, P.K.J., Cho, B.H., Park, J.M., Lee, K., Kim, H.Y., Kang, H.A., Lee, H.G., Woo, J., Roh, Y., Lee, W.J., Shin, C.-W., Wang, Q., Ryu, H.,
    Performance improvement of deep learning based gesture recognition using spatiotemporal demosaicing technique,
    IEEE Int. Conf. Image Processing (ICIP), 2016.
  • Lungu, I.-A., Corradi, F., Delbruck, T.,
    Live Demonstration: Convolutional Neural Network Driven by Dynamic Vision Sensor Playing RoShamBo,
    IEEE Int. Symp. Circuits and Systems (ISCAS), 2017. YouTube, Slides 36-39
  • Amir, A., Taba, B., Berg, D., Melano, T., McKinstry, J., Di Nolfo, C., Nayak, T., Andreopoulos, A., Garreau, G., Mendoza, M., Kusnitz, J., Debole, M., Esser, S., Delbruck, T., Flickner, M., Modha, D.,
    A Low Power, Fully Event-Based Gesture Recognition System,
    IEEE Conf. Computer Vision and Pattern Recognition (CVPR), 2017. PDF, Dataset
    • YouTube: IBM Research demonstrates event-based gesture recognition using a brain-inspired chip
  • Maro, J.-M., Ieng, S.-H., Benosman, R.,
    Event-based Gesture Recognition with Dynamic Background Suppression using Smartphone Computational Capabilities,
    Front. Neurosci. (2020), 14:275. PDF arXiv
  • Wang, Q., Zhang, Y., Yuan, J., Lu, Y.,
    Space-time Event Clouds for Gesture Recognition: from RGB Cameras to Event Cameras,
    IEEE Winter Conf. Applications of Computer Vision (WACV), 2019.
  • Ghosh, R., Gupta, A., Nakagawa, A., Soares, A. B., Thakor, N. V.,
    Spatiotemporal Filtering for Event-Based Action Recognition,
    arXiv:1903.07067, 2019.
  • Wang, Y., Du, B., Shen, Y., Wu, K., Zhao, G., Sun, J., Wen, H.,
    EV-Gait: Event-Based Robust Gait Recognition Using Dynamic Vision Sensors,
    IEEE Conf. Computer Vision and Pattern Recognition (CVPR), 2019.
  • Calabrese, E., Taverni, G., Easthope, C., Skriabine, S., Corradi, F., Longinotti, L., Eng, K., Delbruck, T.,
    DHP19: Dynamic Vision Sensor 3D Human Pose Dataset,
    IEEE Conf. Computer Vision and Pattern Recognition Workshops (CVPRW), 2019. Project page, Video pitch.
  • Pradhan, B. R., Bethi, Y., Narayanan, S., Chakraborty, A., Thakur, C. S.,
    N-HAR: A Neuromorphic Event-Based Human Activity Recognition System Using Memory Surfaces,
    IEEE Int. Symp. Circuits and Systems (ISCAS), 2019.
  • Sokolova, A., Konushin, A.,
    Human identification by gait from event-based camera,
    Int. Conf. Machine Vision Applications (MVA), 2019.
  • Chadha, A., Bi, Y., Abbas, A., Andreopoulos, Y.,
    Neuromorphic Vision Sensing for CNN-based Action Recognition,
    IEEE Int. Conf. Acoust., Speech, Signal Proc. (ICASSP), 2019, Github Page.
  • Chen, H., Liu, W., Goel, R., Lua, R., Siddharth, M., Huang, Y., Veeraraghavan, A., Patel, A.,
    Fast Retinomorphic Event-Driven Representations for Video Gameplay and Action Recognition,
    IEEE Trans. Comput. Imag. (TCI), 6:276-290, 2019.
  • Lungu, I.-A., Liu, S.-C., Delbruck, T.,
    Incremental learning of hand symbols using event-based cameras,
    IEEE J. Emerging Sel. Topics Circuits Syst. (JETCAS), 2019.
    • Lungu, I. A., Liu, S.-C., Delbruck, T.,
      Fast event-driven incremental learning of hand symbols,
      IEEE Int. Conf. AI Circuits Syst. (AICAS), 2019.
  • Park et al., IEDM 2019,
    Low-Latency Interactive Sensing for Machine Vision.
  • Vasudevan, A., Negri, P., Linares-Barranco, B., Serrano-Gotarredona, T.,
    Introduction and Analysis of an Event-Based Sign Language Dataset,
    IEEE Int. Conf. Automatic Face and Gesture Recognition (FG), 2020. Dataset
  • Chen, J., Meng, J., Wang, X., Yuan, J.,
    Dynamic Graph CNN for Event-Camera Based Gesture Recognition,
    IEEE Int. Symp. Circuits and Systems (ISCAS), 2020.
  • Huang, C.,
    Event-based Action Recognition Using Timestamp Image Encoding Network,
    arXiv, 2020.
  • Innocenti, S. U., Becattini, F., Pernici, F., Del Bimbo, A.,
    Temporal Binary Representation for Event-Based Action Recognition,
    Int. Conf. Pattern Recognition (ICPR), 2020.
  • Harrigan, S., Coleman, S., Kerr, D., Yogarajah, P., Fang, Z., Wu, C.,
    Neural Coding Strategies for Event-Based Vision Data,
    IEEE Int. Conf. Acoust., Speech, Signal Proc. (ICASSP), 2020.
  • Chen, G., Hong, L., Dong, J., Liu, P., Conradt, J., Knoll, A.,
    EDDD: Event-based drowsiness driving detection through facial motion analysis with neuromorphic vision sensor,
    IEEE Sensors Journal, 20(11):6170-6181, 2020.
  • Chen, G., Liu, P., Liu, Z., Tang, H., Hong, L., Dong, J., Conradt, J., Knoll, A.,
    NeuroAED: Towards Efficient Abnormal Event Detection in Visual Surveillance With Neuromorphic Vision Sensor,
    IEEE Trans. Information Forensics and Security (TIFS), 16:923-936, 2021.
  • Liu, Q., Xing, D., Tang, H., Ma, D., Pan, G.,
    Event-based Action Recognition Using Motion Information and Spiking Neural Networks,
    Int. Joint Conf. Artificial Intelligence (IJCAI), 2021. PDF, DailyAction-DVS Dataset
  • Sabater, A., Montesano, L., Murillo, A.,
    Event Transformer. A sparse-aware solution for efficient event data processing,
    IEEE Conf. Computer Vision and Pattern Recognition Workshops (CVPRW), 2022. PDF, Supp. Video, Code.
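
Several entries in this section (e.g., the timestamp-image and temporal-binary encodings above) first collapse the asynchronous event stream into a dense frame that a conventional classifier can consume. A minimal sketch of one such encoding, a normalized latest-timestamp image; the function and variable names are illustrative, not taken from any specific paper:

```python
import numpy as np

def timestamp_image(events, height, width):
    """Encode an event stream as a per-pixel latest-timestamp image.

    events: iterable of (x, y, t, polarity) tuples with t non-decreasing.
    Returns a float image in [0, 1]: brighter pixels fired more recently.
    """
    img = np.zeros((height, width), dtype=np.float64)
    for x, y, t, p in events:
        img[y, x] = t          # keep only the most recent timestamp per pixel
    t_max = img.max()
    if t_max > 0:
        img /= t_max           # normalize so the newest event maps to 1.0
    return img

# Toy stream: three events on a 4x4 sensor
evts = [(0, 0, 1.0, 1), (1, 2, 2.0, -1), (3, 3, 4.0, 1)]
frame = timestamp_image(evts, 4, 4)
print(frame[3, 3])  # 1.0: the most recent event
```

Brighter pixels fired more recently, so a single frame preserves coarse temporal ordering of motion for a downstream frame-based network.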

Representation / Feature Extraction

  • Camuñas-Mesa, L., Zamarreño-Ramos, C., Linares-Barranco, A., Acosta-Jiménez, A., Serrano-Gotarredona, T., Linares-Barranco, B.
    An Event-Driven Multi-Kernel Convolution Processor Module for Event-Driven Vision Sensors,
    IEEE J. of Solid-State Circuits, 47(2):504-517, 2012.
  • Zhao, B., Ding, R., Chen, S., Linares-Barranco, B., Tang, H.,
    Feedforward Categorization on AER Motion Events using Cortex-like Features in a Spiking Neural Network,
    IEEE Trans. Neural Netw. Learn. Syst. (TNNLS), 26(9):1963-1978, 2015.
  • Lagorce, X., Orchard, G., Gallupi, F., Shi, B., Benosman, R.,
    HOTS: A Hierarchy Of event-based Time-Surfaces for pattern recognition,
    IEEE Trans. Pattern Anal. Machine Intell. (TPAMI), 39(7):1346-1359, 2017. PDF
    • Tapiador-Morales, R., Maro, J.-M., Jimenez-Fernandez, A., Jimenez-Moreno, G., Benosman, R., Linares-Barranco, A.,
      Event-Based Gesture Recognition through a Hierarchy of Time-Surfaces for FPGA,
      Sensors 2020, 20(12), 3404.
  • Clady et al., FNINS,
    A Motion-Based Feature for Event-Based Pattern Recognition.
  • Yousefzadeh, A., Masquelier, T., Serrano-Gotarredona, T., Linares-Barranco, B.,
    Live demonstration: Hardware implementation of convolutional STDP for on-line visual feature learning,
    IEEE Int. Symp. Circuits and Systems (ISCAS), 2017.
  • Sullivan, K., Lawson, W.,
    Representing motion information from event-based cameras,
    IEEE Int. Symp. Robot and Human Interactive Comm. (RO-MAN), 2017.
  • Li, H., Li, G., Ji, X., Shi, L.P.,
    Deep representation via convolutional neural network for classification of spatiotemporal event streams,
    Neurocomputing 299, 2018.
  • Sironi, A., Brambilla, M., Bourdis, N., Lagorce, X., Benosman, R.,
    HATS: Histograms of Averaged Time Surfaces for Robust Event-based Object Classification,
    IEEE Conf. Computer Vision and Pattern Recognition (CVPR), 2018. PDF.
    • N-CARS Dataset: A large real-world event-based dataset for car classification.
  • Liu, W., Chen, H., Goel, R., Huang, Y., Veeraraghavan, A., Patel, A.,
    Fast Retinomorphic Event-Driven Representations for Video Recognition and Reinforcement Learning,
    arXiv:1805.06374, 2018.
  • Cannici, M., Ciccone, M., Romanoni, A., Matteucci, M.,
    Asynchronous Convolutional Networks for Object Detection in Neuromorphic Cameras,
    IEEE Conf. Computer Vision and Pattern Recognition Workshops (CVPRW), 2019. PDF. Video pitch
  • Afshar, S., Hamilton, T. J., Tapson, J., van Schaik, A., Cohen, G.,
    Investigation of Event-Based Surfaces for High-Speed Detection, Unsupervised Feature Extraction, and Object Recognition,
    Front. Neurosci. (2018), 12:1047.
  • Sekikawa, Y., Ishikawa, K., Hara, K., Yoshida, Y., Suzuki, K., Sato, I., Saito, H.,
    Constant Velocity 3D Convolution,
    Int. Conf. 3D Vision (3DV), 2018.
    • Sekikawa, Y., Ishikawa, K., Saito, H.,
      Constant Velocity 3D Convolution,
      IEEE Access, vol. 6, pp. 76490-76501, 2018.
  • Chen, G., Chen, J., Lienen, M., Conradt, J., Roehrbein, F., Knoll, A.C.,
    FLGR: Fixed Length Gists Representation Learning for RNN-HMM Hybrid-Based Neuromorphic Continuous Gesture Recognition,
    Front. Neurosci. (2019), 13:73.
  • Tapiador-Morales, R., Linares-Barranco, A., Jimenez-Fernandez, A., Jimenez-Moreno, G.
    Neuromorphic LIF Row-by-Row Multiconvolution Processor for FPGA,
    IEEE Trans. Biomed. Circuits Syst., 2019, vol. 13, issue 1.
  • Ghosh, R., Gupta, A., Tang, S., Soares, A. B., Thakor, N. V.,
    Spatiotemporal Feature Learning for Event-Based Vision,
    arXiv:1903.06923, 2019.
  • Sekikawa, Y., Hara, K., Saito, H.,
    EventNet: Asynchronous Recursive Event Processing,
    IEEE Conf. Computer Vision and Pattern Recognition (CVPR), 2019. PDF, Slides, YouTube
  • Gehrig, D., Loquercio, A., Derpanis, K. G., Scaramuzza, D.,
    End-to-End Learning of Representations for Asynchronous Event-Based Data,
    IEEE Int. Conf. Computer Vision (ICCV), 2019. PDF, Suppl. Mat, YouTube, Project Page.
  • Linares-Barranco, A., Rios-Navarro, A., Tapiador-Morales, R., Delbruck, T.,
    Dynamic Vision Sensor integration on FPGA-based CNN accelerators for high-speed visual classification,
    arXiv:1905.07419, 2019.
  • Afshar, S., Xu, Y., Tapson, J., van Schaik, A., Cohen, G.,
    Event-based Feature Extraction using Adaptive Selection Thresholds,
    arXiv:1907.07853, 2019.
  • Zhu, A.Z., Wang, Z., Daniilidis, K.,
    Motion Equivariant Networks for Event Cameras with the Temporal Normalization Transform,
    arXiv:1902.06820, 2019.
  • Baldwin R.W., Almatrafi M., Kaufman J.R., Asari V., Hirakawa K.,
    Inceptive Event Time-Surfaces for Object Classification Using Neuromorphic Cameras,
    Int. Conf. on Image Analysis and Recognition (ICIAR), 2019. PDF, Code.
  • Bi, Y., Chadha, A., Abbas, A., Bourtsoulatze, E., Andreopoulos, Y.,
    Graph-based Spatial-temporal Feature Learning for Neuromorphic Vision Sensing,
    arXiv:1910.03579, 2019.
  • Zhu, A., Wang, Z., Khant, K., Daniilidis, K.,
    EventGAN: Leveraging Large Scale Image Datasets for Event Cameras,
    arXiv:1912.01584, 2019.
  • Cannici, M., Ciccone, M., Romanoni, A., Matteucci, M.,
    A Differentiable Recurrent Surface for Asynchronous Event-Based Data,
    European Conf. Computer Vision (ECCV), 2020. Suppl. Mat., PDF, Videos
  • Xu, F., Lin, S., Yang, W., Yu, L., Dai, D., Xia, G.,
    Matching Neuromorphic Events and Color Images via Adversarial Learning,
    arXiv:2003.00636, 2020.
  • Messikommer, N., Gehrig, D., Loquercio, A., Scaramuzza, D.,
    Event-based Asynchronous Sparse Convolutional Networks,
    European Conf. Computer Vision (ECCV), 2020. Suppl. Mat., PDF, YouTube, Code
  • Hu, Y., Delbruck, T., Liu, S.-C.,
    Learning to Exploit Multiple Vision Modalities by Using Grafted Networks,
    European Conf. Computer Vision (ECCV), 2020. Suppl. Mat., PDF
  • Samadzadeh, A., Far, F.S.T., Javadi, A., Nickabadi, A., Chehreghani, M. H.,
    Convolutional Spiking Neural Networks for Spatio-Temporal Feature Extraction,
    arXiv:2003.12346, 2020. Code
  • Lin, S., Xu, F., Wang., X., Yang, W., Yu, L.,
    Efficient Spatial-Temporal Normalization of SAE Representation for Event Camera,
    IEEE Robotics and Automation Letters (RA-L), 5(3):4265-4272, 2020.
  • Kostadinov, D., Scaramuzza, D.,
    Unsupervised Feature Learning for Event Data: Direct vs Inverse Problem Formulation,
    IAPR IEEE Int. Conf. Pattern Recognition (ICPR), Milan, 2021.
  • Wang, Y., Zhang, X., Shen, Y., Du, B., Zhao, G., Cui, L., Wen, H.,
    Event-Stream Representation for Human Gaits Identification Using Deep Neural Networks,
    IEEE Trans. Pattern Anal. Machine Intell. (TPAMI), 2021. Code.
  • Baldwin, R., Ruixu, L., Almatrafi, M., Asari, V., Hirakawa, K.,
    Time-Ordered Recent Event (TORE) Volumes for Event Cameras,
    arXiv:2103.06108, 2021. Code.
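
A recurring construct in this section is the time surface (HOTS, HATS, inceptive event time-surfaces): each pixel stores the timestamp of its most recent event, and an exponential decay turns that map into a smooth descriptor of recent activity. A minimal sketch under that definition; the decay constant tau is an illustrative choice:

```python
import numpy as np

def time_surface(last_t, t_now, tau=0.05):
    """Exponential-decay time surface from a per-pixel last-timestamp map.

    last_t: array of last event timestamps (-inf where no event has occurred).
    t_now:  query time; tau: decay constant in the same units as t.
    Returns values in [0, 1]: 1 at pixels that just fired, decaying with age.
    """
    return np.exp(-(t_now - last_t) / tau)

last = np.full((4, 4), -np.inf)   # no events yet anywhere
last[1, 1] = 0.10                 # event at t = 0.10 s
last[2, 3] = 0.15                 # event at t = 0.15 s
ts = time_surface(last, t_now=0.15)
print(ts[2, 3])  # 1.0: this pixel fired exactly at t_now
```

Pixels that never fired decay to exactly zero, so the surface doubles as an activity mask for sparse downstream processing.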

Regression Tasks

  • Maqueda, A.I., Loquercio, A., Gallego, G., Garcia, N., Scaramuzza, D.,
    Event-based Vision meets Deep Learning on Steering Prediction for Self-driving Cars,
    IEEE Conf. Computer Vision and Pattern Recognition (CVPR), 2018. PDF, Poster, YouTube.
  • Zhu et al., RSS 2018,
    EV-FlowNet: Self-Supervised Optical Flow Estimation for Event-based Cameras.
  • Paredes-Valles et al., TPAMI 2019,
    Unsupervised Learning of a Hierarchical Spiking Neural Network for Optical Flow Estimation: From Events to Global Motion Perception.
  • Gallego et al., CVPR 2019,
    Focus Is All You Need: Loss Functions For Event-based Vision.
  • Rebecq et al., TPAMI 2020,
    High Speed and High Dynamic Range Video with an Event Camera.
    • Rebecq et al., CVPR 2019,
      Events-to-Video: Bringing Modern Computer Vision to Event Cameras.
  • Zhu et al., CVPR 2019,
    Unsupervised Event-Based Learning of Optical Flow, Depth, and Egomotion.
  • Hu, Y., Chen, H. M., Delbruck, T.,
    Slasher: Stadium racer car for event camera end-to-end learning autonomous driving experiments,
    IEEE Int. Conf. AI Circuits Syst. (AICAS), 2019.
  • Hersche, M., Mello Rella, E., Di Mauro, A., Benini, L., Rahimi, A.,
    Integrating Event-based Dynamic Vision Sensors with Sparse Hyperdimensional Computing,
    Int. Symp. Low Power Electronics and Design (ISLPED), 2020. PDF
  • Hidalgo-Carrió et al., 3DV 2020, Learning Monocular Dense Depth from Events.
  • Gehrig et al., RA-L 2021,
    Combining Events and Frames Using Recurrent Asynchronous Multimodal Networks for Monocular Depth Prediction.
  • Hagenaars et al., NeurIPS 2021,
    Self-Supervised Learning of Event-Based Optical Flow with Spiking Neural Networks.

Learning Methods / Frameworks

  • Perez-Carrasco, J. A., Zhao, B., Serrano, C., Acha, B., Serrano-Gotarredona, T., Chen, S., Linares-Barranco, B.,
    Mapping from Frame-Driven to Frame-Free Event-Driven Vision Systems by Low-Rate Rate-Coding and Coincidence Processing. Application to Feed-Forward ConvNets,
    IEEE Trans. Pattern Anal. Machine Intell. (TPAMI), 35(11):2706-2719, 2013.
  • Stromatias, E., Soto, M., Serrano-Gotarredona, T., Linares-Barranco, B.,
    An Event-Based Classifier for Dynamic Vision Sensor and Synthetic Data,
    Front. Neurosci. (2017), 11:350.
  • Haessig, G. and Benosman, R.,
    A Sparse Coding Multi-Scale Precise-Timing Machine Learning Algorithm for Neuromorphic Event-Based Sensors,
    Proc. SPIE 10639, Micro- and Nanotechnology Sensors, Systems, and Applications X, 106391U, 2018. PDF
  • Shrestha, S., Orchard, G.,
    SLAYER: Spike Layer Error Reassignment in time,
    Advances in Neural Information Processing Systems (NeurIPS) 2018. PDF, YouTube.
  • Macanovic, M., Chersi, F., Rutard, F., Ieng, S.-H., Benosman, R.,
    When Conventional machine learning meets neuromorphic engineering: Deep Temporal Networks (DTNets) a machine learning framework allowing to operate on Events and Frames and implantable on Tensor Flow Like Hardware,
    arXiv:1811.07672, 2018.
  • Zanardi, A., Aumiller, A.J., Zilly, J., Censi, A., Frazzoli, E.,
    Cross-Modal Learning Filters for RGB-Neuromorphic Wormhole Learning,
    Robotics: Science and Systems (RSS), 2019. PDF
  • Kaiser, J., Friedrich, A., Vasquez Tieck, J.C., Reichard, D., Roennau, A., Neftci, E., Dillmann, R.,
    Embodied Neuromorphic Vision with Event-Driven Random Backpropagation,
    arXiv, 2019. PDF, Video
  • Gehrig, D., Gehrig, M., Hidalgo-Carrió, J., Scaramuzza, D.,
    Video to Events: Recycling Video Datasets for Event Cameras,
    IEEE Conf. Computer Vision and Pattern Recognition (CVPR), 2020. YouTube, Project page
  • Deng, Y., Chen, H., Chen, H. and Li, Y.,
    Learning From Images: A Distillation Learning Framework for Event Cameras,
    IEEE Trans. Image Process., vol. 30, pp. 4919-4931, 2021. Dataset

Signal Processing

  • Ieng, S.-H., Posch, C., Benosman, R.,
    Asynchronous Neuromorphic Event-Driven Image Filtering,
    Proc. IEEE, 102(10):1485-1499, 2014. PDF
  • Fillatre, L., Antonini, M.,
    Uniformly minimum variance unbiased estimation for asynchronous event-based cameras,
    IEEE Int. Conf. Image Processing (ICIP), 2014.
  • Mueggler et al., ICRA 2015,
    Lifetime Estimation of Events from Dynamic Vision Sensors.
  • Klein, P., Conradt, J., Liu, S.-C.,
    Scene stitching with event-driven sensors on a robot head platform,
    IEEE Int. Symp. Circuits and Systems (ISCAS), 2015.
  • Fillatre, L.,
    Bayes classification for asynchronous event-based cameras,
    European Signal Process. Conf. (EUSIPCO), 2015.
  • Sabatier, Q., Ieng, S.-H., Benosman, R.,
    Asynchronous Event-Based Fourier Analysis,
    IEEE Trans. Image Process., 2017, pp. 2192-2202. PDF, Suppl. Mat.
  • Scheerlinck et al., RAL 2019,
    Asynchronous Spatial Image Convolutions for Event Cameras.
  • Lee, S., Kim, H., Kim, H.J.,
    Edge Detection for Event Cameras using Intra-pixel-area Events,
    British Machine Vision Conf. (BMVC), 2019. PDF
  • Liu, S.-C., Rueckauer, B., Ceolini, E., Huber, A., Delbruck, T.,
    Event-Driven Sensing for Efficient Perception: Vision and audition algorithms,
    IEEE Signal Process. Mag., 36(6):29-37, Nov. 2019.
  • Sengupta, J. P., Villemur, M., Andreou, A. G.,
    A Spike-based Cellular-Neural Network Architecture for Spatiotemporal filtering,
    55th Annual Conf. on Information Sciences and Systems (CISS), 2021, pp. 1-6.

Event Denoising

  • Delbruck, T.,
    Frame-free dynamic digital vision,
    Proc. Int. Symp. on Secure-Life Electronics, Advanced Electronics for Quality Life and Society, 2008, vol. 1, pp. 21–26. PDF
  • Khodamoradi, A., Kastner, R.,
    O(N)-Space Spatiotemporal Filter for Reducing Noise in Neuromorphic Vision Sensors,
    IEEE Trans. Emerging Topics in Computing, 2018.
  • Berthelon, X., Chenegros, G., Finateu, T., Ieng, S.H., Benosman, R.,
    Effects of Cooling on the SNR and Contrast Detection of a Low-Light Event-Based Camera,
    IEEE Trans. Biomed. Circuits Syst., 2018, vol. 12, pp. 1467-1474.
  • Baldwin R.W., Almatrafi M., Asari V., Hirakawa K.,
    Event Probability Mask (EPM) and Event Denoising Convolutional Neural Network (EDnCNN) for Neuromorphic Cameras,
    IEEE Conf. Computer Vision and Pattern Recognition (CVPR), 2020. PDF, Dataset
  • Feng, Y., Lv, H., Liu, H., Zhang, Y., Xiao, Y., Han, C.,
    Event Density Based Denoising Method for Dynamic Vision Sensor,
    Appl. Sci., 10(6):2024, 2020.
  • Wu, J., Ma, C., Li, L., Dong, W., Shi, G.,
    Probabilistic Undirected Graph Based Denoising Method for Dynamic Vision Sensor,
    IEEE Trans. Multimedia, 2020.
  • Wu, J., Ma, C., Yu, X., Shi, G.,
    Denoising of Event-Based Sensors with Spatial-Temporal Correlation,
    IEEE Int. Conf. Acoust., Speech, Signal Proc. (ICASSP), 2020.
  • Guo, S., Kang, Z., Wang, L., Zhang, L., Chen, X., Li, S., Xu, W.,
    A Noise Filter for Dynamic Vision Sensors based on Global Space and Time Information,
    arXiv, 2020.
  • Baldwin et al., arXiv 2021,
    Time-Ordered Recent Event (TORE) Volumes for Event Cameras.
  • Alkendi, Y., Azzam, R., Ayyad, A., Javed, S., Seneviratne, L., Zweiri, Y.,
    Neuromorphic Camera Denoising using Graph Neural Network-driven Transformers,
    arXiv, 2021. YouTube, Dataset
  • Guo, S. and Delbruck, T.,
    Low Cost and Latency Event Camera Background Activity Denoising,
    IEEE Trans. Pattern Anal. Mach. Intell., 2022. Full PDF incl. Supplementary Material, DND21 DeNoising Dynamic vision sensors website
  • Wang, Z., Yuan, D., Ng, Y., Mahony, R.,
    A Linear Comb Filter for Event Flicker Removal,
    IEEE Int. Conf. Robotics and Automation (ICRA), 2022. PDF, Project Page
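
Many of the denoisers above refine the same baseline idea: background-activity events are uncorrelated in space and time, so an event is kept only if a neighbouring pixel fired within a short window. A minimal sketch of such a nearest-neighbour filter; the window dt and the 3x3 support are illustrative parameters, not those of any particular paper:

```python
import numpy as np

def ba_filter(events, height, width, dt=0.01):
    """Nearest-neighbour background-activity filter (sketch).

    Keeps an event only if some pixel in its 3x3 neighbourhood (including
    itself) fired within the last dt seconds; isolated events, which are
    likely sensor noise, are dropped. events: list of (x, y, t), t sorted.
    """
    last_t = np.full((height + 2, width + 2), -np.inf)  # 1-pixel border pad
    kept = []
    for x, y, t in events:
        support = last_t[y:y + 3, x:x + 3].max()  # most recent nearby firing
        last_t[y + 1, x + 1] = t                  # record this event
        if t - support <= dt:
            kept.append((x, y, t))
    return kept

evts = [(5, 5, 0.000), (6, 5, 0.002),  # correlated pair: second one survives
        (20, 9, 0.050)]                # isolated event: dropped as noise
print(ba_filter(evts, 16, 32))         # [(6, 5, 0.002)]
```

The O(1)-per-event cost and per-pixel state are what make this family of filters attractive for FPGA and near-sensor implementations.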

Compression

  • Khan, N., Iqbal, K., Martini, M. G.,
    Lossless compression of data from static and mobile Dynamic Vision Sensors - Performance and trade-offs,
    IEEE Access, 2020.
  • Banerjee, S., Wang, Z. W., Chopp, H. H., Cossairt, O., Katsaggelos, A.,
    Quadtree Driven Lossy Event Compression,
    arXiv 2020. PDF
  • Khan, N., Iqbal, K., Martini, M. G.,
    Time-Aggregation-Based Lossless Video Encoding for Neuromorphic Vision Sensor Data,
    IEEE Internet of Things Journal, 2021.
  • Schiopu, I., Bilcu, R. C.,
    Lossless Compression of Event Camera Frames,
    IEEE Signal Process. Lett., 2022.
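
The lossless schemes above exploit the structure of event streams, in particular the near-monotonic, slowly varying timestamps. As a toy illustration (not any of the papers' codecs), delta-encoding sorted timestamps before a generic deflate pass already yields large savings on regular streams:

```python
import struct
import zlib

def compress_timestamps(ts):
    """Delta-encode a sorted list of microsecond timestamps, then deflate."""
    deltas = [ts[0]] + [b - a for a, b in zip(ts, ts[1:])]
    raw = struct.pack(f"<{len(deltas)}I", *deltas)   # little-endian uint32
    return zlib.compress(raw, level=9)

def decompress_timestamps(blob):
    """Inverse: inflate, then cumulative-sum the deltas back to timestamps."""
    raw = zlib.decompress(blob)
    deltas = struct.unpack(f"<{len(raw) // 4}I", raw)
    out, acc = [], 0
    for d in deltas:
        acc += d
        out.append(acc)
    return out

ts = list(range(1_000_000, 1_000_000 + 5000, 7))  # regular 7 us spacing
blob = compress_timestamps(ts)
print(len(blob), "bytes vs", len(ts) * 4, "raw bytes")
```

Small, repetitive deltas are far more compressible than raw absolute timestamps, which is the intuition behind the time-aggregation encodings above.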

Control

  • Delbruck, T. and Lang, M.,
    Robotic Goalie with 3ms Reaction Time at 4% CPU Load Using Event-Based Dynamic Vision Sensor,
    Front. Neurosci. (2013), 7:223. PDF, YouTube
    • Delbruck, T. and Lichtsteiner, P.,
      Fast sensory motor control based on event-based hybrid neuromorphic-procedural system,
      IEEE Int. Symp. Circuits and Systems (ISCAS), 2007.
    • Cheng, R., Nikolic, K.,
      System integration of neuromorphic visual (DVS), processing (SpiNNaker) and servomotor modules into an autonomous robot controlled by a Spiking Neural Network,
      TechRxiv, 2020. YouTube
  • Conradt, J., Cook, M., Berner, R., Lichtsteiner, P., Douglas, R. J., Delbruck, T.,
    A Pencil Balancing Robot Using a Pair of AER Dynamic Vision Sensors,
    IEEE Int. Symp. Circuits and Systems (ISCAS), 2009. PDF, Poster, Project page, YouTube 1, YouTube 2, YouTube 3
    • Conradt, J., Berner, R., Cook, M., Delbruck, T.,
      An embedded AER dynamic vision sensor for low-latency pole balancing,
      IEEE Int. Conf. Computer Vision Workshops (ICCVW), 2009. PDF
  • Mueller, E., Censi, A., Frazzoli, E.,
    Efficient high speed signal estimation with neuromorphic vision sensors,
    IEEE Int. Conf. Event-Based Control Comm. and Signal Proc. (EBCCSP), 2015.
  • Censi, A.,
    Efficient Neuromorphic Optomotor Heading Regulation,
    American Control Conference (ACC), 2015.
  • Mueggler, E., Baumli, N., Fontana, F., Scaramuzza, D.,
    Towards Evasive Maneuvers with Quadrotors using Dynamic Vision Sensors,
    Eur. Conf. Mobile Robots (ECMR), Lincoln, 2015. PDF
  • Delbruck, T., Pfeiffer, M., Juston, R., Orchard, G., Mueggler, E., Linares-Barranco, A., Tilden, M. W.,
    Human vs. computer slot car racing using an event and frame-based DAVIS vision sensor,
    IEEE Int. Symp. Circuits and Systems (ISCAS), 2015. YouTube 1, YouTube 2, YouTube 3
  • Mueller, E., Censi, A., Frazzoli, E.,
    Low-latency heading feedback control with neuromorphic vision sensors using efficient approximated incremental inference,
    IEEE Conf. Decision and Control (CDC), 2015.
  • Moeys et al., EBCCSP 2016. VISUALISE Predator/Prey Dataset.
  • Vasco, V., Glover, A., Tirupachuri, Y., Solari, F., Chessa M., Bartolozzi C.,
    Vergence control with a neuromorphic iCub,
    IEEE Int. Conf. Humanoid Robotics (Humanoids), 2016.
  • Singh, P., Yong, S. Z., Gregoire, J., Censi, A., Frazzoli, E.,
    Stabilization of linear continuous-time systems using neuromorphic vision sensors,
    IEEE Conf. Decision and Control (CDC), 2016.
    • Singh, P., Yong, S. Z., Frazzoli, E.,
      Stabilization of stochastic linear continuous-time systems using noisy neuromorphic vision sensors,
      American Control Conference (ACC), 2017.
    • Singh, P., Yong, S. Z., Frazzoli, E.,
      Regulation of Linear Systems Using Event-Based Detection Sensors,
      IEEE Trans. Automatic Control, 2019.
  • Blum, H., Dietmüller, A., Milde, M., Conradt, J., Indiveri, G., Sandamirskaya, Y.,
    A neuromorphic controller for a robotic vehicle equipped with a Dynamic Vision Sensor,
    Robotics: Science and Systems (RSS), 2017.
  • Glover, A., Vasco, V., Bartolozzi, C.,
    A Controlled-Delay Event Camera Framework for On-Line Robotics,
    IEEE Int. Conf. Robotics and Automation (ICRA), 2018.
  • Falanga, D., Kim, S., Scaramuzza, D.,
    How Fast is Too Fast? The Role of Perception Latency in High-Speed Sense and Avoid,
    IEEE Robotics and Automation Letters (RA-L), 2019. YouTube
  • Sugimoto, R., Gehrig, M., Brescianini, D., Scaramuzza, D.,
    Towards Low-Latency High-Bandwidth Control of Quadrotors using Event Cameras,
    IEEE Int. Conf. Robotics and Automation (ICRA), 2020. PDF, YouTube
  • Youssef, I., Mutlu, M., Bayat, B., Crespi, A., Hauser, S., Conradt, J., Bernardino, A., Ijspeert, A. J.,
    A Neuro-Inspired Computational Model for a Visually Guided Robotic Lamprey Using Frame and Event Based Cameras,
    IEEE Robotics and Automation Letters (RA-L), 5(2):2395-2402, April 2020. PDF, YouTube.
  • Stagsted, R. K., Vitale, A., Binz, J., Renner, A., Larsen, L. B., Sandamirskaya, Y.,
    Towards neuromorphic control: A spiking neural network based PID controller for UAV,
    Robotics: Science and Systems (RSS), 2020. PDF, YouTube, Suppl. Video
  • Hagenaars, J. J., Paredes-Vallés, F., Bohté, S. M., de Croon, G. C. H. E.,
    Evolved Neuromorphic Control for High Speed Divergence-based Landings of MAVs,
    IEEE Robotics and Automation Letters (RA-L), 5(4):6239-6246, Oct. 2020. PDF, PDF.
  • Stagsted, R. K., Vitale, A., Renner A., Larsen, L. B., Christensen, A. L., Sandamirskaya, Y.,
    Event-Based PID Controller Fully Realized in Neuromorphic Hardware: A One DoF Study,
    IEEE/RSJ Int. Conf. Intelligent Robots and Systems (IROS), 2020.
  • Delbruck, T., Graca, R., Paluch, M.,
    Feedback control of event cameras,
    IEEE Conf. Computer Vision and Pattern Recognition Workshops (CVPRW), 2021.
  • Vitale, A., Renner, A., Nauer, C., Scaramuzza, D., Sandamirskaya, Y.,
    Event-driven Vision and Control for UAVs on a Neuromorphic Chip,
    IEEE Int. Conf. Robotics and Automation (ICRA), 2021. PDF, YouTube, PPT Slides.
  • Ayyad, A., Halwani, M., Swart, D., Muthusamy, R., Almaskari, F., Zweiri, Y.,
    Neuromorphic Vision Based Control for the Precise Positioning of Robotic Drilling Systems,
    arXiv, 2021. Video.
  • Wang, Z., Cladera Ojeda, F., Bisulco A., Lee, D., Taylor, C. J., Daniilidis, K., Hsieh, A. M., Lee, D. D., Isler, V.,
    EV-Catcher: High-Speed Object Catching Using Low-Latency Event-Based Neural Networks,
    IEEE/RSJ Int. Conf. Intelligent Robots and Systems (IROS), 2021.
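
The earliest systems in this section (robotic goalie, pencil balancer) close the loop directly on simple event statistics: estimate the target position from a small batch of recent events and apply a proportional correction, which is what makes millisecond-scale reaction times possible. A toy sketch of that pattern; the gain and names are illustrative, not taken from the papers above:

```python
def event_centroid_control(events, setpoint, kp=0.8):
    """Proportional control on the x-centroid of a recent event batch.

    Estimates the target position as the mean event x-coordinate and
    returns a proportional actuator correction. With no events there is
    no evidence of motion, so the output is zero.
    events: list of (x, y, t) tuples from the last few milliseconds.
    """
    if not events:
        return 0.0
    cx = sum(x for x, y, t in events) / len(events)
    return kp * (setpoint - cx)

# Target seen around x = 12, desired position x = 16
print(event_centroid_control([(10, 0, 0.0), (14, 1, 0.001)], setpoint=16))
# 3.2
```

Because only events are processed, the loop's latency scales with scene activity rather than with a fixed frame period.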

Obstacle Avoidance

  • Clady, X., Clercq, C., Ieng, S.H., Houseini, F., Randazzo, M., Natale, L., Bartolozzi, C., Benosman, R.,
    Asynchronous visual event-based time-to-contact,
    Front. Neurosci. (2014). 8:9. PDF
  • Milde, M. B., Bertrand, O.J.N., Benosman, R., Egelhaaf, M., Chicca, E.,
    Bioinspired event-driven collision avoidance algorithm based on optic flow,
    IEEE Int. Conf. Event-Based Control Comm. and Signal Proc. (EBCCSP), 2015. PDF.
  • Sanket, N.J., Parameshwara, C.M., Singh, C.D., Kuruttukulam, A.V., Fermüller, C., Scaramuzza, D., Aloimonos, Y.,
    EVDodgeNet: Deep Dynamic Obstacle Dodging with Event Cameras,
    IEEE Int. Conf. Robotics and Automation (ICRA), 2020. PDF, YouTube, Project page, Code.
  • Falanga, D., Kleber, K., Scaramuzza, D.,
    Dynamic obstacle avoidance for quadrotors with event cameras,
    Science Robotics, 5(40):eaaz9712, 2020. YouTube
  • Yasin, J. N., Mohamed, S. A. S., Haghbayan, M.-H., Heikkonen, J., Tenhunen, H., Yasin, M. M., Plosila, J.,
    Night vision obstacle detection and avoidance based on Bio-Inspired Vision Sensors,
    IEEE Sensors 2020. PDF.
  • Bisulco, A., Cladera Ojeda, F., Isler, V., Lee, D. D.,
    Fast Motion Understanding with Spatiotemporal Neural Networks and Dynamic Vision Sensors,
    IEEE Int. Conf. Robotics and Automation (ICRA), 2021. PDF.
  • Walters, C., Hadfield, S.,
    EVReflex: Dense Time-to-Impact Prediction for Event-based Obstacle Avoidance,
    IEEE/RSJ Int. Conf. Intelligent Robots and Systems (IROS), 2021.
  • He, B., Li, H., Wu, S., Wang, D., Zhang, Z., Dong, Q., Xu, C., Gao, F.,
    FAST-Dynamic-Vision: Detection and Tracking Dynamic Objects with Event and Depth Sensing,
    IEEE/RSJ Int. Conf. Intelligent Robots and Systems (IROS), 2021. PDF, YouTube.
  • Rodriguez-Gomez, J.P., Tapia, R., Guzman, M.M., Martinez-de Dios, J.R., Ollero, A.,
    Free as a Bird: Event-Based Dynamic Sense-and-Avoid for Ornithopter Robot Flight,
    IEEE Robotics and Automation Letters (RA-L), 2022. PDF, YouTube.

Space Applications

  • Cohen, G., Afshar, S., van Schaik, A., Wabnitz, A., Bessell, T., Rutten, M., Morreale, B.,
    Event-based Sensing for Space Situational Awareness,
    Proc. Advanced Maui Optical and Space Surveillance Technologies Conf. (AMOS), 2017.
  • Cheung, B., Rutten, M., Davey, S., Cohen, G.,
    Probabilistic Multi Hypothesis Tracker for an Event Based Sensor,
    Int. Conf. Information Fusion (FUSION) 2018, pp. 1-8.
  • Cohen, G., Afshar, S., van Schaik, A.,
    Approaches for Astrometry using Event-Based Sensors,
    Proc. Advanced Maui Optical and Space Surveillance Technologies Conf. (AMOS), 2018.
  • Chin, T.-J., Bagchi, S., Eriksson, A., van Schaik, A.,
    Star Tracking using an Event Camera,
    IEEE Conf. Computer Vision and Pattern Recognition Workshops (CVPRW), 2019. PDF. Project page. Video pitch
  • Western Sydney University ICNS
    • Astrosite: University News, ABC News.
    • Simultaneous Sky Mapping and Satellite Tracking
    • G. Cohen - The Astrosite mobile observatory - 2020 Telluride Neuromorphic
  • Zolnowski, M., Reszelewski, R., Moeys, D.P., Delbruck, T., Kaminski, K.,
    Observational Evaluation of Event Cameras Performance in Optical Space Surveillance,
    Proc. NEO and Debris Detection Conference, Darmstadt, Germany, Jan. 2019.
  • Bagchi, S., Chin, T.-J.,
    Event-based Star Tracking via Multiresolution Progressive Hough Transforms,
    IEEE Winter Conf. Applications of Computer Vision (WACV), 2020. PDF
  • Afshar, S., Nicholson, A. P., van Schaik, A., Cohen, G.,
    Event-based Object Detection and Tracking for Space Situational Awareness,
    arXiv:1911.08730, 2019. Dataset
  • Ralph, N.O., Maybour, D., Bethi, Y., Cohen, G.,
    Observations and Design of a new Neuromorphic Event-based All-Sky and Fixed Region Imaging System, Proc. Advanced Maui Optical and Space Surveillance Technologies Conf. (AMOS), 2019.
  • Roffe, S., Akolkar, H., George, A. D., Linares-barranco, B., Benosman, R.,
    Neutron-Induced, Single-Event Effects on Neuromorphic Event-based Vision Sensor: A First Step Towards Space Applications,
    arXiv, 2021.
  • McMahon-Crabtree, P., Monet, D.,
    Evaluation of Commercial-off-the-Shelf Event-Based Cameras for Space Surveillance Applications,
    Applied Optics, 2021.
  • Ralph, N.O., Joubert, D., Jolley, A., Afshar, S., Tothill, N., van Schaik, A. and Cohen, G.,
    Real-Time Event-Based Unsupervised Feature Consolidation and Tracking for Space Situational Awareness, Frontiers in Neuroscience, 2022.
  • Ralph, N.O., Marcireau, A., Afshar, S., Tothill, N., van Schaik, A. and Cohen, G.
    Astrometric Calibration and Source Characterisation of the Latest Generation Neuromorphic Event-based Cameras for Space Imaging, arXiv preprint arXiv:2211.09939, 2022. Dataset.

Tactile Sensing Applications

  • Rigi, A., Baghaei Naeini, F., Makris, D., Zweiri, Y.,
    A Novel Event-Based Incipient Slip Detection Using Dynamic Active-Pixel Vision Sensor (DAVIS),
    Sensors 2018, 18, 333.
  • Naeini, F. B., Alali, A., Al-Husari, R., Rigi, A., AlSharman, M. K., Makris, D., Zweiri, Y.,
    A Novel Dynamic-Vision-Based Approach for Tactile Sensing Applications,
    IEEE Trans. Instrum. Meas., 2019.
  • Muthusamy, R., Huang, X., Zweiri, Y., Seneviratne, L., Gan, D.,
    Neuromorphic Event-Based Slip Detection and suppression in Robotic Grasping and Manipulation,
    IEEE Access, 2020. PDF
  • Haessig, G., Milde, M.B., Aceituno, P.V., Oubari, O., Knight, J.C., van Schaik, A., Benosman, R. B., Indiveri, G.,
    Event-Based Computation for Touch Localization Based on Precise Spike Timing,
    Front. Neurosci. (2020), 14:420.
  • Naeini, F.B., Makris, D., Dongming, G., Zweiri, Y.,
    Dynamic-Vision-Based Force Measurements Using Convolutional Recurrent Neural Networks,
    Sensors 2020, 20, 16.
  • Taunyazov, T., Sng, W., See, H.H., Lim, B., Kuan, J., Ansari, A.F., Tee, B., Soh, H.,
    Event-Driven Visual-Tactile Sensing and Learning for Robots,
    Robotics: Science and Systems (RSS), 2020. PDF, YouTube, Project Page
  • Ward-Cherrier, B., Conradt, J., Catalano, M. G., Bianchi, M., Lepora, N.F.,
    A Miniaturised Neuromorphic Tactile Sensor Integrated with an Anthropomorphic Robot Hand,
    IEEE/RSJ Int. Conf. Intelligent Robots and Systems (IROS), 2020.

Object Pose Estimation

Human Pose Estimation

  • Calabrese et al., CVPRW 2019,
    DHP19: Dynamic Vision Sensor 3D Human Pose Dataset.
  • Zhu et al., arXiv 2019,
    EventGAN: Leveraging Large Scale Image Datasets for Event Cameras.
  • Xu et al., CVPR 2020,
    EventCap: Monocular 3D Capture of High-Speed Human Motions using an Event Camera.
  • Baldwin et al., arXiv 2021,
    Time-Ordered Recent Event (TORE) Volumes for Event Cameras.
  • Scarpellini, G., Morerio, P., Del Bue, A.,
    Lifting Monocular Events to 3D Human Poses,
    IEEE Conf. Computer Vision and Pattern Recognition Workshops (CVPRW), 2021. Project page, YouTube, Code.

Hand Pose Estimation

  • Rudnev, V., Golyanik, V., Wang, J., Seidel, H.-P., Mueller, F., Elgharib, M., Theobalt, C.,
    EventHands: Real-Time Neural 3D Hand Reconstruction from an Event Stream,
    arXiv, 2020. Project page

Indoor Lighting Estimation

  • Chen, Z., Zheng, Q., Niu, P., Tang, H., Pan, G.,
    Indoor Lighting Estimation Using an Event Camera,
    IEEE Conf. Computer Vision and Pattern Recognition (CVPR), 2021. Suppl.

Data Encryption

  • Du, B., Li, W., Wang, Z., Xu, M., Gao, T., Li, J., Wen, H.,
    Event Encryption for Neuromorphic Vision Sensors: Framework, Algorithm, and Evaluation,
    Sensors, 2021.

Nuclear Verification

  • A. Glaser,
    Keeping Secrets at a Distance: New Approaches to Nuclear Monitoring and Verification,
    Distinguished Lecture, Cyber Security in the Age of Large-Scale Adversaries (CASA), Ruhr-Universität Bochum, Germany, 2022.

Optical Communication

  • Wang, Z., Ng, Y., Henderson, J., Mahony, R.,
    Smart Visual Beacons with Asynchronous Optical Communications using Event Cameras,
    IEEE/RSJ Int. Conf. Intelligent Robots and Systems (IROS), 2022. PDF, Project Page

Datasets and Simulators (sorted by topic)

Emulators and Simulators

  • Katz, M. L., Nikolic, K., Delbruck, T. (2012),
    Live demonstration: Behavioural emulation of event-based vision sensors,
    IEEE Int. Symp. Circuits and Systems (ISCAS), 2012. PDF
  • Kaiser, J., Tieck, J. C. V., Hubschneider, C., Wolf, P., Weber, M., Hoff, M., Friedrich., A., Wojtasik, K., Roennau, A., Kohlhaas, R., Dillmann, R., Zoellner, M. (2016),
    Towards a framework for end-to-end control of a simulated vehicle with spiking neural networks,
    IEEE Int. Conf. on Simulation, Modeling, and Programming for Autonomous Robots (SIMPAR), 2016. PDF, Gazebo DVS plugin
  • Pineda García, G., Camilleri, P., Liu, Q., Furber, S.,
    pyDVS: An extensible, real-time Dynamic Vision Sensor emulator using off-the-shelf hardware,
    IEEE Int. Symp. Series on Computational Intelligence (SSCI), 2016. Code
  • E. Mueggler, H. Rebecq, G. Gallego, T. Delbruck, D. Scaramuzza,
    The Event-Camera Dataset and Simulator: Event-based Data for Pose Estimation, Visual Odometry, and SLAM,
    Int. J. Robotics Research, 36:2, pp. 142-149, 2017. PDF, PDF IJRR, YouTube, Dataset.
  • Bi, Y. and Andreopoulos, Y.,
    PIX2NVS: Parameterized conversion of pixel-domain video frames to neuromorphic vision streams,
    IEEE Int. Conf. Image Processing (ICIP), 2017. GitHub Page.
  • W. Li, S. Saeedi, J. McCormac, R. Clark, D. Tzoumanikas, Q. Ye, Y. Huang, R. Tang, S. Leutenegger,
    Interiornet: Mega-scale multi-sensor photo-realistic indoor scenes dataset,
    British Machine Vis. Conf. (BMVC), 2018. YouTube, Project Page.
  • H. Rebecq, D. Gehrig, D. Scaramuzza,
    ESIM: an Open Event Camera Simulator,
    Conf. on Robot Learning (CoRL), 2018. PDF, YouTube, Project Page.
  • Gehrig et al. CVPR 2020,
    Video to Events: Recycling Video Datasets for Event Cameras.
  • Hu, Y., Liu, S.-C., Delbruck, T.,
    v2e: From Video Frames to Realistic DVS Events,
    IEEE Conf. Computer Vision and Pattern Recognition Workshops (CVPRW), 2021. Project page, YouTube, Suppl., Code.
  • Nehvi, J., Golyanik, V., Mueller, F., Seidel, H.-P., Elgharib, M., Theobalt, C.,
    Differentiable Event Stream Simulator for Non-Rigid 3D Tracking,
    IEEE Conf. Computer Vision and Pattern Recognition Workshops (CVPRW), 2021. Project page, YouTube, Suppl.
  • Joubert, C., Marcireau, A., Ralph, N., Jolley, A., van Schaik, A., Cohen, G.,
    Event Camera Simulator Improvements via Characterized Parameters,
    Front. Neurosci., 2021. IEBCS simulator.
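
The frame-based emulators above (pyDVS, PIX2NVS, v2e) share one core idea: threshold the change in log intensity between successive video frames and emit ON/OFF events at the pixels that cross it. A minimal Python sketch of that principle; this is a toy model (one event per crossing, a single global threshold, function name ours), not the noise-aware pipelines of v2e or ESIM:

```python
import numpy as np

def frames_to_events(frames, timestamps, threshold=0.2, eps=1e-6):
    """Toy DVS emulation: emit an ON/OFF event wherever the log-intensity
    change since the memorized per-pixel reference exceeds the contrast
    threshold.

    frames: 2D arrays with values in [0, 1]; timestamps: one per frame.
    Returns a list of (t, x, y, polarity) tuples with polarity in {+1, -1}.
    """
    events = []
    ref = np.log(np.asarray(frames[0], dtype=float) + eps)  # reference log intensity
    for frame, t in zip(frames[1:], timestamps[1:]):
        log_i = np.log(np.asarray(frame, dtype=float) + eps)
        diff = log_i - ref
        fired = np.abs(diff) >= threshold
        ys, xs = np.nonzero(fired)
        for x, y in zip(xs, ys):
            events.append((t, int(x), int(y), 1 if diff[y, x] > 0 else -1))
        ref[fired] = log_i[fired]  # reset the reference only where events fired
    return events
```

Realistic emulators additionally model per-pixel threshold mismatch, leak and shot noise, and emit multiple timestamp-interpolated events when the change spans several thresholds.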

Datasets (sorted by topic)

  • Datasets from the Sensors group at INI (Institute of Neuroinformatics), Zurich:
    • DVS09 - DVS128 Dynamic Vision Sensor Silicon Retina
    • DVSFLOW16 - DVS/DAVIS Optical Flow Dataset
    • DVSACT16 - DVS Datasets for Object Tracking, Action Recognition and Object Recognition
    • PRED18 - VISUALISE Predator/Prey Dataset
    • DDD17 - DAVIS Driving Dataset 2017
    • ROSHAMBO17 - RoShamBo Rock Scissors Paper game DVS dataset
    • DHP19 - DAVIS Human Pose Estimation and Action Recognition
    • DDD20 - End-to-End Event Camera Driving Dataset
    • DND21 - DeNoising Dynamic vision sensors dataset
    • EDFLOW21 - Event Driven Flow dataset

Stereo Depth Estimation

  • Andreopoulos et al., CVPR 2018, A Low Power, High Throughput, Fully Event-Based Stereo System.
  • Zhu et al., RAL 2018: MVSEC The Multi Vehicle Stereo Event Camera Dataset.
  • Zhou et al., ECCV 2018: Semi-Dense 3D Reconstruction with a Stereo Event Camera.
  • Zhou et al., TRO 2021, Event-based Stereo Visual Odometry.
  • Gehrig, M., Aarents, W., Gehrig, D., Scaramuzza, D.,
    DSEC: A Stereo Event Camera Dataset for Driving Scenarios,
    IEEE Robotics and Automation Letters (RA-L), 2021. Dataset, PDF, Code, Video.
  • Gao et al., RAL 2022, VECtor: A Versatile Event-Centric Benchmark for Multi-Sensor SLAM.

Optical Flow

  • DVS/DAVIS Optical Flow Dataset associated to the paper Rueckauer and Delbruck, FNINS 2016.
  • Bardow et al., CVPR2016, Four sequences
  • Zhu et al., RAL2018: MVSEC The Multi Vehicle Stereo Event Camera Dataset.
  • Almatrafi et al. PAMI 2020: Distance Surface for Event-Based Optical Flow. DVSMOTION20 Dataset
  • EDFLOW21 Event Driven Optical Flow Camera dataset associated with the paper EDFLOW: Event Driven Optical Flow Camera with Keypoint Detection and Adaptive Block Matching.
  • EV-IMO Event based Independently Moving Objects dataset associated to the paper EV-IMO: Motion Segmentation Dataset and Learning Pipeline for Event Cameras (motion vector flow added Jan 2022)

Intensity-Image Reconstruction from events

  • Bardow et al., CVPR2016, Four sequences
  • Scheerlinck et al., ACCV2018, Continuous-time Intensity Estimation Using Event Cameras. Website
  • Scheerlinck, C., Rebecq, H., Stoffregen, T., Barnes, N., Mahony, R., Scaramuzza, D.,
    CED: Color Event Camera Dataset,
    IEEE Conf. Computer Vision and Pattern Recognition Workshops (CVPRW), 2019. Slides, Video pitch.
  • Rebecq et al., TPAMI 2020,
    High Speed and High Dynamic Range Video with an Event Camera. Project page
  • High Quality Frames (HQF) dataset associated to the paper Stoffregen et al., arXiv 2020.
  • Wang et al., CVPR 2020,
    Joint Filtering of Intensity Images and Neuromorphic Events for High-Resolution Noise-Robust Imaging. Project page

Visual Odometry and SLAM

  • Combined Dynamic Vision / RGB-D Dataset associated to the paper Weikersdorfer et al., ICRA 2014.
  • Barranco, F., Fermüller, C., Aloimonos, Y.,
    A Dataset for Visual Navigation with Neuromorphic Methods,
    Front. Neurosci. (2016), 10:49.
  • E. Mueggler, H. Rebecq, G. Gallego, T. Delbruck, D. Scaramuzza,
    The Event-Camera Dataset and Simulator: Event-based Data for Pose Estimation, Visual Odometry, and SLAM,
    Int. J. Robotics Research, 36:2, pp. 142-149, 2017. PDF, PDF IJRR, YouTube, Dataset.
  • Binas, J., Neil, D., Liu, S.-C., Delbruck, T.,
    DDD17: End-To-End DAVIS Driving Dataset,
    Int. Conf. Machine Learning, Workshop on Machine Learning for Autonomous Vehicles, 2017. Dataset
  • Zhu, A., Thakur, D., Ozaslan, T., Pfrommer, B., Kumar, V., Daniilidis, K.,
    The Multi Vehicle Stereo Event Camera Dataset: An Event Camera Dataset for 3D Perception,
    IEEE Robotics and Automation Letters (RA-L), 3(3):2032-2039, Feb. 2018. PDF, Dataset, YouTube.
  • Event-based, 6-DOF Camera Tracking from Photometric Depth Maps associated to the paper Gallego et al., PAMI 2018
  • Leung, S., Shamwell, J., Maxey, C., Nothwang, W. D.,
    Toward a large-scale multimodal event-based dataset for neuromorphic deep learning applications,
    Proc. SPIE 10639, Micro- and Nanotechnology Sensors, Systems, and Applications X, 106391T. PDF
  • Event-based, Direct Camera Tracking from a Photometric 3D Map using Nonlinear Optimization associated to the paper Bryner et al., ICRA 2019.
  • Delmerico, J., Cieslewski, T., Rebecq, H., Faessler, M., Scaramuzza, D.,
    Are We Ready for Autonomous Drone Racing? The UZH-FPV Drone Racing Dataset,
    IEEE Int. Conf. Robotics and Automation (ICRA), 2019. PDF, YouTube, Project page.
  • Lee, A. J., Cho, Y., Yoon, S., Shin, Y., Kim, A.,
    ViViD: Vision for Visibility Dataset,
    IEEE Int. Conf. Robotics and Automation (ICRA) Workshop: Dataset Generation and Benchmarking of SLAM Algorithms for Robotics and VR/AR, 2019.
  • Mitrokhin et al., IROS 2019.
    EV-IMO: Motion Segmentation Dataset and Learning Pipeline for Event Cameras
  • Hu, Y., Binas, J., Neil, D., Liu, S.-C., Delbruck, T.,
    DDD20 End-to-End Event Camera Driving Dataset: Fusing Frames and Events with Deep Learning for Improved Steering Prediction,
    IEEE Intelligent Transportation Systems Conf. (ITSC), 2020. Dataset, More datasets
  • Rodríguez-Gómez, J. P., Tapia, R., Paneque, J. L., Grau, P., Gómez Eguíluz, A., Martínez-de Dios, J. R., Ollero, A.,
    The GRIFFIN Perception Dataset: Bridging the Gap Between Flapping-Wing Flight and Robotic Perception,
    IEEE Robotics and Automation Letters (RA-L), 2021.
  • Klenk S., Chui, J., Demmel, N., Cremers, D.,
    TUM-VIE: The TUM Stereo Visual-Inertial Event Data Set,
    IEEE/RSJ Int. Conf. Intelligent Robots and Systems (IROS), 2021.
  • Lee, A. J., Cho, Y., Shin, Y., Kim, A., Myung, H.,
    ViViD++: Vision for Visibility Dataset,
    IEEE Robotics and Automation Letters (RA-L), 2022. Dataset
  • Gao, L., Liang, Y., Yang, J., Wu, S., Wang, C., Chen, J., Kneip, L.,
    VECtor: A Versatile Event-Centric Benchmark for Multi-Sensor SLAM,
    IEEE Robotics and Automation Letters (RA-L), 7(3):8217-8224, 2022. PDF, Dataset, MPL Calibration Toolbox, MPL Dataset Toolbox.

Segmentation

  • Mitrokhin et al., IROS 2018, Extreme Event Dataset (EED). Project page and Dataset
  • Mitrokhin, A., Ye, C., Fermüller, C., Aloimonos, Y., Delbrück, T.,
    EV-IMO: Motion Segmentation Dataset and Learning Pipeline for Event Cameras,
    IEEE/RSJ Int. Conf. Intelligent Robots and Systems (IROS), 2019. PDF, Dataset, Project page

Recognition

  • Orchard, G., Jayawant, A., Cohen, G.K., Thakor, N.,
    Converting Static Image Datasets to Spiking Neuromorphic Datasets Using Saccades,
    Front. Neurosci. (2015), 9:437. YouTube
    • Neuromorphic-MNIST (N-MNIST) dataset is a spiking version of the original frame-based MNIST dataset (of handwritten digits). YouTube
    • The Neuromorphic-Caltech101 (N-Caltech101) dataset is a spiking version of the original frame-based Caltech101 dataset. YouTube
  • Serrano-Gotarredona, T. and Linares-Barranco, B.,
    Poker-DVS and MNIST-DVS. Their History, How They were Made, and Other Details,
    Front. Neurosci. (2015), 9:481.
    • MNIST-DVS and FLASH-MNIST-DVS datasets are based on the original frame-based MNIST dataset. MNIST-DVS are DVS128 recordings of moving MNIST digits (at 3 scales), while FLASH-MNIST-DVS datasets are recorded by flashing the digits on a monitor.
    • POKER-DVS. From a set of DVS recordings of very fast poker card browsing, 32x32 pixel windows tracking the symbols are cropped. On average each symbol lasts about 10-30ms.
    • SLOW-POKER-DVS. Paper printed poker card symbols are moved at "human speed" in front of a DVS camera and recorded at 128x128 resolution.
  • VISUALISE Predator/Prey Dataset associated to the paper Moeys et al., EBCCSP 2016
  • Hu, Y., Liu, H., Pfeiffer, M., Delbruck, T.,
    DVS Benchmark Datasets for Object Tracking, Action Recognition, and Object Recognition,
    Front. Neurosci. (2016), 10:405. Dataset
  • Liu, Q., Pineda-García, G., Stromatias, E., Serrano-Gotarredona, T., Furber, S.B.,
    Benchmarking Spike-Based Visual Recognition: A Dataset and Evaluation,
    Front. Neurosci. (2016), 10:496. Dataset, Dataset
  • DVS128 Gesture Dataset: The dataset that was used to build the real-time gesture recognition system described in Amir et al., CVPR 2017.
  • N-CARS Dataset: A large real-world event-based dataset for car classification. Sironi et al., CVPR 2018.
  • Mitrokhin et al., IROS 2018 Event-based Moving Object Detection and Tracking. Project page and Dataset
  • ATIS Plane Dataset, associated to the paper Afshar et al., Front. Neurosci. 2018.
  • Cheng, W., Luo, H., Yang, W., Yu, L., Chen, S., Li, W.,
    DET: A High-resolution DVS Dataset for Lane Extraction,
    IEEE Conf. Computer Vision and Pattern Recognition Workshops (CVPRW), 2019. Project page.
  • Miao, S., Chen, G., Ning, X., Zi, Y., Ren, K., Bing, Z., Knoll, A.,
    Neuromorphic Vision Datasets for Pedestrian Detection, Action Recognition, and Fall Detection,
    Front. Neurorobot. (2019). Dataset
  • de Tournemire, P., Nitti, D., Perot, E., Migliore, D., Sironi, A.,
    A Large Scale Event-based Detection Dataset for Automotive,
    arXiv, 2020. Code, News
  • N-SOD Dataset associated to the paper Ramesh et al., FNINS 2020.
  • SL-ANIMALS-DVS Database associated to the paper Vasudevan et al., FG 2020. Recordings made using the sensitive DVS developed at IMSE.
  • Perot, E., de Tournemire, P., Nitti, D., Masci, J., Sironi, A.,
    1Mpx Detection Dataset: Learning to Detect Objects with a 1 Megapixel Event Camera,
    NeurIPS 2020.
  • Cannici, M., Plizzari, C., Planamente, M., Ciccone, M., Bottino, A., Caputo, B., Matteucci, M.,
    N-ROD: a Neuromorphic Dataset for Synthetic-to-Real Domain Adaptation,
    IEEE Conf. Computer Vision and Pattern Recognition Workshops (CVPRW), 2021. Project page, YouTube, Poster.

Event Denoising

  • DVSNOISE20 associated to the paper Event Probability Mask (EPM) and Event Denoising Convolutional Neural Network (EDnCNN) for Neuromorphic Cameras.
  • DND21 DeNoising Dynamic vision sensors dataset associated to the paper Low Cost and Latency Event Camera Background Activity Denoising
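
The denoising baseline these datasets are built to evaluate is spatiotemporal correlation filtering: a real edge produces events at neighboring pixels close together in time, while background-activity noise is isolated. A minimal sketch of such a filter (function name, the 3x3 support, and the 10 ms default window are illustrative choices, not the algorithm of any specific paper above):

```python
import numpy as np

def ba_filter(events, width, height, dt=10_000):
    """Spatiotemporal background-activity filter: keep an event only if
    some pixel in its 3x3 neighborhood fired within the last `dt`
    microseconds; isolated noise events are dropped.

    events: (t, x, y, p) tuples with t in microseconds, sorted by t.
    """
    last = np.full((height, width), -np.inf)  # last event time per pixel
    kept = []
    for t, x, y, p in events:
        y0, y1 = max(0, y - 1), min(height, y + 2)
        x0, x1 = max(0, x - 1), min(width, x + 2)
        if (t - last[y0:y1, x0:x1]).min() <= dt:
            kept.append((t, x, y, p))
        last[y, x] = t  # even filtered events count as support for later ones
    return kept
```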

Space Situational Awareness

  • The Event-Based Space Situational Awareness (EBSSA) Dataset associated to the paper Event-based Object Detection and Tracking for Space Situational Awareness.
  • The Event Based Space Imaging Slew Speed Star Dataset associated to the paper Astrometric Calibration and Source Characterisation of the Latest Generation Neuromorphic Event-based Cameras for Space Imaging.

Outdoor Monitoring / Surveillance

  • Bolten, T., Pohle-Frohlich, R., Tonnies, K. D.,
    DVS-OUTLAB: A Neuromorphic Event-Based Long Time Monitoring Dataset for Real-World Outdoor Scenarios,
    IEEE Conf. Computer Vision and Pattern Recognition Workshops (CVPRW), 2021. Project page, YouTube

Software

Drivers

  • jAER (java Address-Event Representation) project. Real time sensory-motor processing for event-based sensors and systems. github page. Wiki
  • caer (AER event-based framework, written in C, targeting embedded systems)
  • libcaer (Minimal C library to access, configure and get/send AER data from sensors or to/from neuromorphic processors)
  • evl (Open-source computer vision library for event-based cameras, written in C++)
  • ROS (Robotic Operating System)
  • YARP (Yet Another Robot Platform)
  • Prophesee ROS Wrapper ROS driver and messages for Prophesee event-based sensors
  • Prophesee camera plugins
  • CeleX5 ROS Wrapper A ROS driver and some other tools for the CeleX5_MP event-based sensor (which has a high resolution of 1280×800)

Synchronization

  • Sync Toolbox. This open-source toolbox provides a Qt-based GUI for hardware-level multi-sensor synchronization (Prophesee Gen 3.1 included and tested). After proper configuration, users can seamlessly record new ROS bags.

Lens Calibration

  • Lens focus adjustment or this other source.
  • For the DAVIS: use the grayscale frames to calibrate the optics for both frames and events.
    • ROS camera calibrator (monocular or stereo)
    • Kalibr software by ASL - ETH.
    • Basalt software by TUM.
  • For the DAVIS camera and IMU calibration: Kalibr software by ASL - ETH, using the grayscale frames.
  • For the DVS (events-only):
    • Calibration using blinking LEDs or computer screens by RPG - UZH.
    • DVS camera calibration by G. Orchard.
    • DVS camera calibration by VLOGroup at TU Graz.
  • For Prophesee Camera (events-only):
    • Focus adjustment and calibration using blinking LEDs or computer screens
  • Song, R., Jiang, Z., Li, Y., Shan, Y., Huang, K.,
    Calibration of Event-based Camera and 3D LiDAR,
    WRC Symposium on Advanced Robotics and Automation (WRC SARA), 2018.
  • Dominguez-Morales, M. J., Jimenez-Fernandez, A., Jimenez-Moreno, G., Conde, C., Cabello, E., Linares-Barranco, A.,
    Bio-Inspired Stereo Vision Calibration for Dynamic Vision Sensors,
    IEEE Access, vol. 7, pp. 138415-138425, 2019.
  • Wang, Z., Ng, Y., van Goor, P., Mahony, R.,
    Event Camera Calibration of Per-pixel Biased Contrast Threshold,
    Australasian Conf. Robotics and Automation (ACRA) 2019. PDF, Github.
  • Dubeau, E., Garon, M., Debaque, B., de Charette, R., Lalonde, J.-F.,
    RGB-DE: Event Camera Calibration for Fast 6-DOF Object Tracking,
    arXiv, 2020.
  • Wang, G., Feng, C., Hu, X., Yang, H.,
    Temporal Matrices Mapping Based Calibration Method for Event-Driven Structured Light Systems,
    IEEE Sensors Journal, 2020.
  • Muglikar, M., Gehrig, M., Gehrig, D., Scaramuzza, D.,
    How to Calibrate Your Event Camera,
    IEEE Conf. Computer Vision and Pattern Recognition Workshops (CVPRW), 2021. Project page, YouTube, Code.
  • Huang, K., Wang, Y., Kneip, L.,
    Dynamic Event Camera Calibration,
    IEEE/RSJ Int. Conf. Intelligent Robots and Systems (IROS), 2021. Video, Code.

Algorithms

  • Several event-processing filters in the jAER (java Address-Event Representation) project
  • A collection of tracking and detection algorithms using the YARP framework
  • Some detection and tracking algorithms in EVL
  • Prophesee Open Source library - OpenEB
  • Optical Flow
    • LocalPlanesFlow, inspired by the paper Benosman et al., TNNLS 2014.
    • Several algorithms compared in the paper by Rueckauer and Delbruck, FNINS 2016.
    • Event-Lifetime estimation, associated to the paper Mueggler et al., ICRA 2015.
    • EV-FlowNet, associated to the paper Zhu et al., RSS 2018.
  • Feature Tracking
    • Event-based Feature Tracking with Probabilistic Data Association, associated to the papers Zhu et al., ICRA 2017 and Zhu et al., CVPR 2017.
    • Tracking code associated to the paper Gehrig et al., IJCV 2019.
    • Evaluation code associated to the paper Gehrig et al., IJCV 2019.
  • Intensity-Image reconstruction from events
    • Code for intensity reconstruction, inspired by the paper Kim et al., BMVC 2014.
    • DVS Reconstruction code associated to the paper Reinbacher et al., BMVC 2016.
    • High-pass filter code associated to the paper Scheerlinck et al., ACCV 2018
    • E2VID code associated to the paper Rebecq et al., TPAMI 2020.
  • Localization and Ego-Motion Estimation
    • Panoramic tracking code associated to the paper Reinbacher et al., ICCP 2017.
  • Pattern Recognition
    • A simple spiking neural network for recognition associated to the paper Orchard et al., TPAMI 2015.

Utilities

  • Process AEDAT: useful scripts to work with data from jAER and cAER.
  • Matlab functions in jAER project
  • AEDAT Tools: scripts for Matlab and Python to work with aedat files.
  • aedat4to2: Python/DV script to convert AEDAT4 from DV to AEDAT2 for jAER.
  • aedat4tomat: Python/DV script to convert AEDAT4 from DV to matlab file.
  • Matlab AER functions by G. Orchard. Some basic functions for filtering and displaying AER vision data, as well as making videos.
  • Python code for AER vision data by G. Orchard.
  • edvstools, by D. Weikersdorfer: A collection of tools for the embedded Dynamic Vision Sensor eDVS.
  • Tarsier Framework for event-based Vision in C++.
  • events_h52bag C++ code to convert event data from HDF5 to ROSbags.
  • events_bag2h5 Python code to convert event data from ROSbags to HDF5.
  • CelexMatlabToolbox by Yuxin Zhang. Tools to decode events generated by the CeleX IV DVS, visualize them, and denoise them.
  • Loris Python package to read files from neuromorphic cameras.
  • Marcireau A., Ieng S.-H., Benosman R.,
    Sepia, Tarsier, and Chameleon: A Modular C++ Framework for Event-Based Computer Vision,
    Front. Neurosci. (2020), 13:1338. Code
  • BIMVEE Python tools for Batch Import, Manipulation, Visualisation and Export of Events and other timestamped data. Imports from various file formats into a common workspace format, including native Python import of rosbags.
  • Tonic provides publicly available event datasets and data transformations much like Torchvision/audio.
  • Prophesee automotive dataset toolbox, Code
  • dv_ros ROS package for accumulating event frames with iniVation Dynamic Vision System's dv-sdk.
  • dvs_event_server ROS package used to transport "dvs/events" ROS topic to Python through protobuf and zmq, because Python ROS callback has a large delay.
  • AEStream A fast C++ library with a Python interface for streaming Address Event representations directly from Inivation and Prophesee cameras to various outputs, such as STDOUT, UDP (network), or PyTorch.
  • AEDAT decoder A fast AEDAT 4 Python reader, with a Rust underlying implementation.
  • aedat-rs Standalone Rust library for decoding AEDAT 4 files for use in bespoke Rust event systems.
  • expelliarmus A pip-installable Python library to decode DAT, EVT2 and EVT3 files generated by Prophesee cameras to structured NumPy arrays.
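
Most of the converters above shuttle the same four-field event record (timestamp, x, y, polarity) between containers. A minimal sketch of such a round trip with a NumPy structured array and HDF5 via h5py (the dataset name, field names, and dtype are our assumption; every tool above defines its own schema and timestamp units):

```python
import numpy as np
import h5py

# Hypothetical schema: 64-bit timestamp, 16-bit coordinates, signed polarity.
EVENT_DTYPE = np.dtype([("t", "<u8"), ("x", "<u2"), ("y", "<u2"), ("p", "i1")])

def save_events(path, events):
    """Write a structured (t, x, y, p) array as one compressed HDF5 dataset."""
    with h5py.File(path, "w") as f:
        f.create_dataset("events", data=events, compression="gzip")

def load_events(path):
    """Read the events back as a structured NumPy array."""
    with h5py.File(path, "r") as f:
        return f["events"][:]
```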

Neuromorphic Processors and Platforms

  • Hoffstaetter, M., Belbachir, N., Bodenstorfer, E., Schoen, P.,
    Multiple Input Digital Arbiter with Timestamp Assignment for Asynchronous Sensor Arrays,
    IEEE Int. Conf. Electronics, Circuits and Systems (ICECS), 2006.
  • Belbachir, A., Hofstaetter, M., Reisinger, K., Litzenberger, M., Schoen, P.,
    High-Precision Timestamping and Ultra High-Speed Arbitration of Transient Pixels' Events,
    Int. Conf. on Electronics, Circuits and Systems (ICECS), 2008.
  • Hoffstaetter, M., Schoen, P., Posch, C., Bauer, D.,
    An integrated 20-bit 33/5M events/s AER sensor interface with 10ns time-stamping and hardware-accelerated event pre-processing,
    IEEE Biomedical Circuits and Systems Conference (BioCAS), 2009.
  • Hoffstaetter, M., Litzenberger, M., Matolin, D., Posch, C.,
    Hardware-accelerated address-event processing for high-speed visual object recognition,
    IEEE Int. Conf. Electronics, Circuits, and Systems (ICECS), 2011.
  • Dynamic Neuromorphic Asynchronous Processor (DYNAP) by aiCTX AG
    • Qiao, N., Mostafa, H., Corradi, F., Osswald, M., Stefanini, F., Sumislawska, D., Indiveri, G.,
      A reconfigurable on-line learning spiking neuromorphic processor comprising 256 neurons and 128K synapses,
      Front. Neurosci. (2015), 9:141. PDF
    • Indiveri, G., Qiao, N., Corradi, F.,
      Neuromorphic Architectures for Spiking Deep Neural Networks,
      IEEE Int. Electron Devices Meeting (IEDM), 2015. PDF
  • Wiesmann, G., Schraml, S., Litzenberger, M., Belbachir, A. N., Hofstatter, M., Bartolozzi, C.,
    Event-driven embodied system for feature extraction and object recognition in robotic applications,
    IEEE Conf. Computer Vision and Pattern Recognition Workshops (CVPRW), 2012.
  • Galluppi, F., Denk, C., Meiner, M. C., Stewart, T. C., Plana, L. A., Eliasmith, C., Furber, S., Conradt, J.,
    Event-based neural computing on an autonomous mobile platform,
    IEEE Int. Conf. Robotics and Automation (ICRA), 2014. PDF
  • Graf, R., King, R., Belbachir, A.,
    Braille Vision Using Braille Display and Bio-inspired Camera,
    Int. Conf. Computer Supported Education (CSEDU), SCITEPRESS Digital Library, 2014, pp. 214-219.
  • Sengupta, J. P., Villemur, M., Mendat, D. R., Tognetti, G., Andreou, A. G.,
    Architecture and Algorithm Co-Design Framework for Embedded Processors in Event-Based Cameras,
    IEEE Int. Symp. Circuits and Systems (ISCAS), 2020, pp. 1-5.
  • Gao, Y., Wang, S., So, H. K. H.,
    REMOT: A Hardware-Software Architecture for Attention-Guided Multi-Object Tracking with Dynamic Vision Sensors on FPGAs,
    ACM/SIGDA Int. Symp. Field-Programmable Gate Arrays (FPGA), 2022. Code

Courses (Educational content)

  • Event-based Robot Vision at TU Berlin.
    • YouTube videos and Slides with links
  • Projects course: Bio-inspired Computer (Event-based) Vision at TU Berlin

Theses and Dissertations

Dissertations

  • Mahowald, M.,
    VLSI Analogs of Neuronal Visual Processing: A Synthesis of Form and Function,
    Ph.D. thesis, California Inst. of Technology, Pasadena, CA, 1992. PDF
    She won Caltech's Clauser prize for the best PhD thesis for this work, which included the silicon retina, AER communication, and a beautiful stereopsis chip.
    • Kluwer book from Misha’s thesis: Mahowald, M., An Analog VLSI System for Stereoscopic Vision. Boston: Springer Science & Business Media, 1994.
  • Delbrück, T.,
    Investigations of Analog VLSI Visual Transduction and Motion Processing,
    Ph.D. Thesis, California Inst. of Technology, Pasadena, CA, 1993. PDF
  • Lichtsteiner, P.,
    A temporal contrast vision sensor,
    Ph.D. Thesis, ETH Zurich, Zurich, Switzerland, 2006. PDF
  • Matolin, D.,
    Asynchronous CMOS image sensor with extended dynamic range and suppression of time-redundant data,
    Ph.D. Thesis, TU Dresden & AIT, 2010 (in German).
  • Berner, R.,
    Building Blocks for Event-Based Sensors,
    Ph.D. Thesis, ETH Zurich, Zurich, Switzerland, 2011. PDF
  • Ni, Z.,
    Asynchronous Event Based Vision: Algorithms and Applications to Microrobotics,
    Ph.D. Thesis, Université Pierre et Marie Curie, Paris, France, 2013.
  • Carneiro, J.,
    Asynchronous Event-Based 3D Vision,
    Ph.D. Thesis, Université Pierre et Marie Curie, Paris, France, 2014.
  • Weikersdorfer, D.,
    Efficiency by Sparsity: Depth-Adaptive Superpixels and Event-based SLAM,
    Ph.D. Thesis, Technical University of Munich, Munich, Germany, 2014. PDF
  • Borer, D. J.,
    4D Flow Visualization with Dynamic Vision Sensors,
    Ph.D. Thesis, ETH-Zurich, Zurich, Switzerland, 2014. PDF
  • Yang, M.,
    Silicon Retina and Cochlea with Asynchronous Delta Modulator for Spike Encoding,
    Ph.D. Thesis, ETH-Zurich, Zurich, Switzerland, 2015.
  • Brändli, C.,
    Event-Based Machine Vision,
    Ph.D. Thesis, ETH-Zurich, Zurich, Switzerland, 2015. PDF
  • Lagorce, X.,
    Computational methods for event-based signals and applications,
    Ph.D. Thesis, Université Pierre et Marie Curie, Paris, France, 2015. PDF
  • Kogler, J.,
    Design and evaluation of stereo matching techniques for silicon retina cameras,
    Ph.D. Thesis, Technische Universität Wien, Vienna, Austria, 2016. PDF
  • Moeys, D. P.,
    Analog and digital implementations of retinal processing for robot navigation systems,
    Ph.D. Thesis, ETH-Zurich, Zurich, Switzerland, 2016. PDF
  • Cohen, G. K.,
    Event-Based Feature Detection, Recognition and Classification,
    Ph.D. Thesis, Université Pierre et Marie Curie, Paris, France, 2016. PDF
  • Li, C.,
    Two-stream vision sensors,
    Ph.D. Thesis, ETH Zurich, Zurich, Switzerland, 2017.
  • Neil, D.,
    Deep Neural Networks and Hardware Systems for Event-driven Data,
    Ph.D. Thesis, ETH Zurich, Zurich, Switzerland, 2017. PDF
  • Mueggler, E.,
    Event-based Vision for High-Speed Robotics,
    Ph.D. Thesis, University of Zurich, Zurich, Switzerland, 2017.
  • Kim, H.,
    Real-time visual SLAM with an event camera,
    Ph.D. Thesis, Imperial College London, United Kingdom, 2017.
  • Huang, J.,
    Asynchronous high-speed feature extraction image sensor (CelePixel),
    Ph.D. Thesis, Nanyang Technological University, Singapore, 2018.
  • Gibson, T. T.,
    Inspired by nature: timescale-free and grid-free event-based computing with spiking neural networks,
    Ph.D. Thesis, The University of Queensland, Brisbane, Australia, 2018.
  • Everding, L.,
    Event-Based Depth Reconstruction Using Stereo Dynamic Vision Sensors,
    Ph.D. Thesis, Technical University of Munich, Munich, Germany, 2018.
  • Seifozzakerini, S.,
    Analysis of object and its motion in event-based videos,
    Ph.D. Thesis, Nanyang Technological University, Singapore, 2018.
  • Martel, J.,
    Unconventional Processing with Unconventional Visual Sensing. Parallel, Distributed and Event Based Vision Algorithms & Systems,
    Ph.D. Thesis, ETH Zurich, Zurich, Switzerland, 2019.
  • Bardow, P. A.,
    Estimating General Motion and Intensity from Event Cameras,
    Ph.D. Thesis, Imperial College London, United Kingdom, 2019.
  • Ye, C.,
    Learning of Dense Optical Flow, Motion and Depth, from Sparse Event Cameras,
    Ph.D. Thesis, University of Maryland, USA, 2019.
  • Liu, H.,
    Neuromorphic Vision for Robotic Tracking and Navigation,
    Ph.D. Thesis, ETH Zurich, Zurich, Switzerland, 2019.
  • Zhu, A. Z.,
    Event-Based Algorithms for Geometric Computer Vision,
    Ph.D. Thesis, University of Pennsylvania, USA, 2019.
  • Rebecq, H.,
    Event Cameras: from SLAM to High Speed Video,
    Ph.D. Thesis, University of Zurich, Zurich, Switzerland, 2019.
  • Kaiser, J.,
    Synaptic Learning for Neuromorphic Vision,
    Ph.D. Thesis, Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany, 2020.
  • Wang, Z. (Winston),
    Synergy of physics and learning-based models in computational imaging and display,
    Ph.D. Thesis, Northwestern University, 2020. YouTube.
  • Mitrokhin, A.,
    Motion Segmentation and Egomotion Estimation with Event-Based Cameras,
    Ph.D. Thesis, University of Maryland, USA, 2020.
  • Scheerlinck, C.,
    How to See with an Event Camera,
    Ph.D. Thesis, Australian National University, Canberra, Australia, 2021. PDF
  • Stoffregen, T.,
    Motion Estimation by Focus Optimisation: Optic Flow and Motion Segmentation with Event Cameras,
    Ph.D. Thesis, Monash University, Melbourne, Australia, 2021.
  • Monforte, M.,
    Trajectory Prediction with Event-Based Cameras for Robotics Applications,
    Ph.D. Thesis, Italian Institute of Technology, Genoa, Italy, 2021. PDF
  • Lenz, G.,
    Neuromorphic algorithms and hardware for event-based processing,
    Ph.D. Thesis, Sorbonne University, Paris, France, 2021. PDF
  • Alzugaray, I.,
    Event-driven Feature Detection and Tracking for Visual SLAM,
    Ph.D. Thesis, ETH Zurich, Zurich, Switzerland, 2022. PDF
  • Liu, D.,
    Motion Estimation Using an Event Camera,
    Ph.D. Thesis, University of Adelaide, Adelaide, Australia, 2022.
  • See also Theses from Delbruck's group at INI

Master's (and Bachelor's) Theses

  • Reisinger, K.,
    EMC testing on Silicon Retinas,
    MSc. Thesis, TU Wien & AIT, Austria, 2006.
  • Nowakowska, A.,
    Recognition of a vision approach for fall detection using a biologically inspired dynamic stereo vision sensor,
    MSc. Thesis, TU Wien & AIT, Austria, 2011.
  • Reingruber, H.,
    An Asynchronous Data Interface for Event-based Stereo Matching,
    MSc. Thesis, TU Wien & AIT, Austria, 2011.
  • Zima, M.,
    Hand/Arm Gesture Recognition based on Address-Event-Representation Data,
    MSc. Thesis, TU Wien & AIT, Austria, 2012.
  • Huber, B.,
    High-Speed Pose Estimation using a Dynamic Vision Sensor,
    MSc. Thesis, University of Zurich, Switzerland, 2014.
  • Horstschaefer, T.,
    Parallel Tracking, Depth Estimation, and Image Reconstruction with an Event Camera,
    MSc. Thesis, University of Zurich, Switzerland, 2016.
  • Kaelber, F. (Everding, L., Conradt, J.),
    A probabilistic method for event stream registration,
    Bachelor's Thesis, TU Munich, Germany, 2016.
  • Galanis, M. (Everding, L., Conradt, J.),
    DVS event stream registration,
    Bachelor's Thesis, TU Munich, Germany, 2016.
  • Paredes-Valles, F.,
    Neuromorphic Computing of Event-Based Data for High-Speed Vision-Based Navigation,
    MSc. Thesis, TU Delft, The Netherlands, 2018.
  • Nelson, K. J.,
    Event-Based Visual-Inertial Odometry on a Fixed-Wing Unmanned Aerial Vehicle,
    MSc. Thesis, Air Force Institute of Technology, USA, 2019. PDF, PDF
  • Attanasio, G.,
    Event-based camera communications: a measurement-based analysis,
    MSc. Thesis, Politecnico di Torino, Italy, 2019.
  • Wang, Z.,
    Motion Equivariance of Event-based Camera Data with the Temporal Normalization Transform,
    MSc. Thesis, University of Pennsylvania, USA, 2019.
  • Boettiger, J. P.,
    A Comparative Evaluation of the Detection and Tracking Capability Between Novel Event-Based and Conventional Frame-Based Sensors,
    MSc. Thesis, Air Force Institute of Technology, USA, 2020. PDF
  • Friedel, Z. P.,
    Event-Based Visual-Inertial Odometry Using Smart Features,
    MSc. Thesis, Air Force Institute of Technology, USA, 2020.
  • Verecken, J.,
    Embedded real-time inference in spiking neural networks for neuromorphic IoT vision sensors,
    MSc. Thesis, Ecole polytechnique de Louvain, Université catholique de Louvain, Belgium, 2020.
  • Gava, L.,
    Event-driven Motion-In-Depth for Low Latency Control of the Humanoid Robot iCub,
    MSc. Thesis, University of Genoa, Italy, 2020.
  • Dubeau, E.,
    Suivi d'objet en 6 degrés de liberté avec caméra événementielle (Object Tracking in 6-DOF using an event camera),
    MSc. Thesis, Université Laval, Canada, 2022.

People / Organizations

  • Institute of NeuroInformatics (INI) of the University of Zurich (UZH) and ETH Zurich, Switzerland.
    • INI Sensors Group Videos
  • iniVation AG (commercialization of neuromorphic vision technology from INI), Switzerland.
  • Dynamic Vision Sensor (DVS) - asynchronous temporal contrast silicon retina
  • Robotics and Perception Group of the University of Zurich (UZH) and ETH Zurich, Switzerland.
  • Institut de la Vision Neuromorphics group Paris, France.
  • GRASP Lab at University of Pennsylvania, Kostas Daniilidis.
  • AIT Austrian Institute of Technology Sensing & vision solutions group in Vienna, Austria.
  • Event-Driven Perception for Robotics (EDPR) group at Istituto Italiano di Tecnologia (IIT), Italy.
  • Sinapse Singapore Institute for Neurotechnology, Singapore.
  • Western Sydney University’s International Centre for Neuromorphic Systems (ICNS), Australia.
  • Perception and Robotics Group at University of Maryland (UMD). Fermüller's Lab on Event-based vision
  • Intel Labs, Mike Davies (Intel’s neuromorphic computing program leader).
    • Video CVPRW 2019
    • Video Intel NICE 2018 Loihi, Video Intel NICE 2018 Day 3
    • Advancing neuromorphic computing from lab to mainstream applications - Telluride 2020
  • Robotics and Technology of Computers Lab - Sevilla (RTC) of the University of Seville (USE), Seville, Spain.
  • IMSE-CNM – Seville Institute of Microelectronics, Seville, Spain. News
  • Prophesee SA: Sensor and Software development and production

Press: EETimes

  • Neuromorphic Revolution to Start in 2024, 10.2019.
  • Neuromorphic Vision Sensors Eye the Future of Autonomy, 04.2020.
  • The Slow But Steady Rise of the Event Camera, 06.2020.
  • Europe Still the Focal Point for Neuromorphic Vision, 07.2020.
  • Telluride Neuromorphic Engineering Workshop Goes Large, 07.2020.
  • Prophesee Touts Toolkit for Event-based Vision, 09.2020.
  • Prophesee Showcases Neuromorphic Vision Systems from Biotech to Space Debris, 12.2021.
  • What Does “Neuromorphic” Mean Today?, EETimes Special Issue, 07.2022.
    • Exclusive: An Interview with Carver Mead, 07.2022.
    • Inspiration or Imitation: How Closely Should We Copy Biological Systems?, 07.2022.
    • A Shift in Computer Vision is Coming, 04.2022.
    • Neuromorphic Sensing: Coming Soon to Consumer Products, 07.2022.
    • Reverse-Engineering Insect Brains to Make Robots, 07.2022.
    • Cars That Think Like You, 07.2022.

Press

  • Event-Based Vision: Taking a Cue From Biology, 03.2021.
  • Silicon retinas to help robots navigate the world, Advanced Science News, 10.2022.

Contributing

Please see CONTRIBUTING for details.

