International Day of Light 2021 @ZESS

Smart Sensing with Light: from Photons to Information

On May 16th, 2021, from 14:00 to 18:40, our contribution to the International Day of Light took place at the Center for Sensor Systems.

[Logo of the International Day of Light]

The event took place entirely online as a video conference and was well attended, with 20-30 visitors on average.

In addition, our international postgraduate programme MENELAOSNT joined international science leaders in support of a global pledge to trust science (Press Release).

About the event:

Light is everywhere in our lives, yet despite its ubiquity we are often unaware of the crucial role it plays. From well-established optical communication systems, photographic and video sensors, to the latest developments in the very extensive fields of optical sensing, light is a fundamental transversal component. At this event we aimed to raise awareness of the importance of light for society by reviewing a set of high-impact active research fields in optical sensing. The overarching idea is the conversion of photons into high-level information, which is of paramount importance in a variety of application fields, ranging from medical and health monitoring to Earth Observation (EO), autonomous driving, 3D modelling and scene understanding, safety and security, and chemical sensing, to cite a few.

In this event we wanted to get together with researchers in the relevant fields on a journey that unveils, for instance, current trends on how scattered photons are converted into a 3D representation of the environment, how lamps can substitute for current WiFi networks, how that breakthrough paves the way for ubiquitous and passive 3D sensing, how light can sense our heartbeat or respiration rate, how photons can interrogate the chemical composition of exhaust gases, or how the occlusion of light by clouds can be bypassed in EO.


Our programme comprised 11 short contributions from researchers, in formats ranging from interactive live demonstrations to inspiring talks. Each contribution was 15 minutes long and was followed by 5 minutes of questions. Two 20-minute breaks allowed for open discussion between attendees and speakers.

Schedule:

Time | Name, Affiliation | Title/Abstract

14:00-14:10 | Thomas Seeger, ZESS, University of Siegen | Welcome message

14:10-14:30 | Peyman Fayyaz Shahandashti, CiTIUS, University of Santiago de Compostela | Beyond 2D CMOS Image Sensors: 3D Measurement Technology and its Applications
Two-dimensional solid-state imagers are now widely used not only in consumer electronics, such as digital cameras and cell phone cameras, but also in cameras used in security, robot vision, and biomedical applications. However, we live in a three-dimensional (3D) world, so scene depth extraction is of paramount importance in many applications. 3D range sensing methodologies can be divided into optical and non-optical sensing, depending on the physical principle and processing techniques used for depth determination. Among the most popular approaches are RADAR (Radio Detection And Ranging) and LiDAR (Light Detection And Ranging). In the non-optical sensing category, the well-known radar ranging system uses radio waves to measure the distance between the RF transmitter and an object. Optical sensing systems can be classified into geometry-based and time-of-flight (ToF) approaches. The operating principle of optical ToF is to measure the round-trip time delay or phase difference of the active light travelling between the sensor and the target object. A 3D-sensing system offers unique improvement opportunities in many fields such as automotive applications, security, robotics, industrial control, gaming, and virtual and augmented reality, by significantly increasing the robustness of object classification and reducing time and energy consumption.
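
As a back-of-the-envelope illustration of the ToF principle described in this abstract (a minimal sketch, not tied to any particular sensor), depth follows directly from either the round-trip time or the phase shift of a modulated light source:

```python
import math

# Minimal sketch of optical time-of-flight (ToF) depth recovery.
# An idealized sensor is assumed; real devices must additionally handle
# noise, multipath, and phase wrapping.

C = 299_792_458.0  # speed of light in m/s

def depth_from_round_trip_time(t_seconds: float) -> float:
    """Pulsed (direct) ToF: light travels to the target and back."""
    return C * t_seconds / 2.0

def depth_from_phase(phase_rad: float, f_mod_hz: float) -> float:
    """Continuous-wave (indirect) ToF: depth from the phase shift of an
    amplitude-modulated signal, unambiguous only up to C / (2 * f_mod)."""
    return C * phase_rad / (4.0 * math.pi * f_mod_hz)

# A 20 MHz modulation with a measured phase shift of pi/2 -> about 1.87 m
print(depth_from_phase(math.pi / 2, 20e6))
```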

14:30-14:50 | Faisal Ahmed, ZESS, University of Siegen | Passive Indoor 3D ToF Sensing via Visible Light Communication
Recent developments in semiconductor devices have transformed lighting paradigms from conventional lamps to LEDs. This transformation gave birth to optical wireless communication, namely Visible Light Communication (VLC). VLC has recently become one of the most relevant emerging technologies for indoor wireless communication, capable of providing dual functionality in indoor settings: illumination and communication. State-of-the-art active ToF sensing suffers from high power consumption due to its active illumination sources, compared to passive imaging modalities. This new technology has therefore created a strong push for passive ToF sensing that takes advantage of VLC sources as opportunity illuminators. In this talk we outline how to channel this synergistic potential and exploit it to obtain a novel passive 3D-sensing modality. This innovative idea has enormous application potential in homes, offices, industries, and vehicles, where ToF cameras are a valuable asset.

14:50-15:10 | Álvaro López Paredes, ZESS, University of Siegen | State-of-the-art ToF Cameras and Applications
Over the past decades, ToF imaging systems have become more and more relevant in our daily lives. Fields such as mobile robotics and autonomous driving have undoubtedly taken advantage of this rapid evolution. This demo will present a brief description of state-of-the-art ToF cameras. In addition, a brief hands-on demonstration of our experiments at ZESS will be given using the Microsoft® Azure Kinect.
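
For readers who would like to try a similar capture themselves, a minimal sketch using the open-source pyk4a Python bindings for the Azure Kinect (our assumption of tooling; this is not the demo's actual code) could look like this:

```python
import numpy as np
from pyk4a import PyK4A  # community bindings: pip install pyk4a

# Grab a single depth frame from a connected Azure Kinect.
k4a = PyK4A()
k4a.start()

capture = k4a.get_capture()
if capture.depth is not None:
    depth = capture.depth  # uint16 depth image in millimetres
    print("depth image shape:", depth.shape)
    print("median depth [mm]:", float(np.median(depth[depth > 0])))

k4a.stop()
```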

15:10-15:30 | Jochen Kempfle, ZESS, University of Siegen | A Breath of Light – How Light Enables us to Remotely Monitor a Human’s Respiration
This presentation will give you an insight into a recent method that makes it possible to remotely measure a human’s respiration from distances of several meters, without requiring any body contact, be it by a human or by a measuring device. You will learn how light plays a crucial role in this method, how it is used by a so-called depth camera, and how it enables us to monitor a human’s breathing.
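
To make the idea tangible: assuming the chest region has already been located in the depth stream, the respiration rate is essentially the dominant frequency of the mean chest depth over time. A minimal sketch (synthetic data; not the code from the talk):

```python
import numpy as np

def respiration_rate_bpm(chest_depth: np.ndarray, fps: float) -> float:
    """Estimate breaths per minute from a series of mean chest depth
    values, one sample per depth frame."""
    signal = chest_depth - chest_depth.mean()          # drop static offset
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)  # in Hz
    band = (freqs > 0.1) & (freqs < 0.7)               # ~6-42 breaths/min
    return freqs[band][np.argmax(spectrum[band])] * 60.0

# Synthetic example: 0.25 Hz breathing (15 breaths/min), 30 fps depth camera
t = np.arange(0, 32, 1 / 30)
chest = 1500 + 4 * np.sin(2 * np.pi * 0.25 * t)        # chest depth in mm
print(respiration_rate_bpm(chest, fps=30.0))           # -> 15.0
```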

15:30-15:50 | Coffee Break

15:50-16:10 | Zhouyan Qiu, INSITU Engineering | Light: Not Only for Color But Also For Distance
A camera is an optical instrument used to capture an image. In the field of mobile mapping, we use not only RGB cameras but also some specially designed cameras. Very similar to an RGB camera is the multispectral camera, which images different wavelengths of light. This kind of camera obtains colour information, but not the same colour that our eyes see. Another type of camera is based on the principle of the constant speed of light and obtains the distance to the target by measuring the time difference of the reflected light. An example of such an active sensor is LiDAR, which is now widely known to the public. High-precision LiDAR is very expensive, which is why we started to study multi-sensor fusion. A cheap alternative is the ToF camera, an active camera with sufficiently high accuracy at close range. My current research is therefore on integrating a low-cost ToF camera and an RGB camera to achieve the same results as high-precision LiDAR.
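
The geometric core of such a fusion is back-projecting each ToF depth pixel into a 3D point that can then be coloured from the RGB image. A minimal sketch with made-up intrinsics (the calibration values are placeholders, not from a real device):

```python
import numpy as np

# Hypothetical pinhole intrinsics of the ToF camera
FX, FY, CX, CY = 180.0, 180.0, 160.0, 120.0

def backproject(depth: np.ndarray) -> np.ndarray:
    """Turn an HxW depth map (metres) into HxWx3 points in the ToF
    camera frame using the pinhole model; colouring the points then
    only requires projecting them into the calibrated RGB camera."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - CX) / FX * depth
    y = (v - CY) / FY * depth
    return np.stack([x, y, depth], axis=-1)

# A flat wall 2 m in front of the camera
points = backproject(np.full((240, 320), 2.0))
print(points.shape, points[120, 160])  # (240, 320, 3), centre ray -> [0, 0, 2]
```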

16:10-16:30 | Rabia Rashdi, INSITU Engineering | Trends in LiDAR Technologies
Over the years, light technologies have revolutionized our daily lives, driving major growth in a wide range of applications such as remote sensing, the internet, virtual/augmented reality, medical imaging, and manufacturing. Recently developed georeferenced data-capture devices based on Light Detection and Ranging (LiDAR) are becoming very popular in remote sensing. LiDAR is a type of active remote sensing device that emits laser pulses to measure its surroundings. Due to its ability to capture accurate three-dimensional spatial data in a short acquisition time, LiDAR is used widely in applications from self-driving cars, robotics, and mobile phones to 3D modelling, environmental measurement, and forestry management. LiDAR provides data in the form of point clouds with high-quality information on the 3D geometry. The main issue with LiDAR data, however, is its heterogeneity and sheer volume. To address these challenges, my research aims to normalize and integrate LiDAR point clouds from different platforms, such as mobile, terrestrial, and aerial, into compatible 3D point clouds that can serve as a base for building BIM models for transport infrastructure.
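
One heavily simplified building block of such an integration is rigidly aligning two overlapping clouds from different platforms; a sketch using the open-source Open3D library (the file names are placeholders, and this is not the speaker's pipeline):

```python
import numpy as np
import open3d as o3d

# Placeholder inputs: two overlapping scans of the same scene,
# e.g. one mobile-mapping and one terrestrial acquisition.
source = o3d.io.read_point_cloud("mobile_scan.pcd")
target = o3d.io.read_point_cloud("terrestrial_scan.pcd")

# Point-to-point ICP estimates the rigid transform aligning source to target.
result = o3d.pipelines.registration.registration_icp(
    source, target,
    max_correspondence_distance=0.5,  # metres, data-dependent
    init=np.eye(4),                   # no prior alignment assumed
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(),
)
print("fitness:", result.fitness)
print("transform:\n", result.transformation)
```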

16:30-16:50 | Saqib Nazir, CEOSpaceTech, Politehnica University of Bucharest | Depth Estimation from Monocular Images using Artificial Intelligence
Monocular depth estimation has been a fundamental challenge since the early days of computer vision, with many real-world applications such as image segmentation, augmented reality, real-time continuous tracking, and especially self-driving cars. Depth maps with high spatial resolution, e.g., precise object boundaries, are particularly important for applications such as object recognition and depth-aware image re-rendering and editing. In this work, we used neural networks to estimate depth maps from single images and applied various smoothing regularization terms that increase smoothness within homogeneous regions of the predicted depth while preserving depth discontinuities and recovering sharp edges.
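
One common form of such a regularizer, sketched here in PyTorch (a generic edge-aware smoothness term, not necessarily the exact expressions used in this work), penalizes depth gradients except where the input image itself has strong edges:

```python
import torch

def edge_aware_smoothness(depth: torch.Tensor, image: torch.Tensor) -> torch.Tensor:
    """Edge-aware smoothness loss; depth: (B,1,H,W), image: (B,3,H,W).
    Depth gradients are down-weighted where the image has strong edges,
    so homogeneous regions get smoothed while discontinuities survive."""
    d_dx = torch.abs(depth[:, :, :, 1:] - depth[:, :, :, :-1])
    d_dy = torch.abs(depth[:, :, 1:, :] - depth[:, :, :-1, :])
    i_dx = torch.mean(torch.abs(image[:, :, :, 1:] - image[:, :, :, :-1]), 1, keepdim=True)
    i_dy = torch.mean(torch.abs(image[:, :, 1:, :] - image[:, :, :-1, :]), 1, keepdim=True)
    return (d_dx * torch.exp(-i_dx)).mean() + (d_dy * torch.exp(-i_dy)).mean()

# Typical use: loss = data_term + lambda_smooth * edge_aware_smoothness(pred, img)
```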

16:50-17:10 | Marko Jaklin, CiTIUS, University of Santiago de Compostela | Event Cameras: Bio-inspired Sensors
Conventional frame-based cameras output whole images synchronously in time. This often means transmitting information that is not actually useful. Unlike frame-based cameras, the photoreceptors in the human eye transmit information only when they detect a change in the scene. This evolutionary strategy gathers only the information that is useful to the brain. By discarding redundant data, the brain is able to process a situation quickly, giving it an evolutionary advantage. Event cameras try to mimic biological vision by outputting "events". Events are generated locally inside a pixel when a change in the scene occurs. Compared to frame-based cameras, event-based cameras achieve lower power consumption, higher temporal resolution, and higher dynamic range. Promising applications of event cameras are in robotics and wearable electronics. Some of their uses are object tracking and recognition, depth estimation, and surveillance and monitoring.
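
To make the "events" concrete, the sketch below accumulates a list of (x, y, timestamp, polarity) events into an image, one simple way to visualize an event stream (event formats vary between cameras; this layout is an assumption):

```python
import numpy as np

def events_to_frame(events: np.ndarray, height: int, width: int) -> np.ndarray:
    """Accumulate events into a signed image. Each event row is
    (x, y, t, polarity) with polarity +1 (brighter) or -1 (darker)."""
    frame = np.zeros((height, width), dtype=np.int32)
    x = events[:, 0].astype(int)
    y = events[:, 1].astype(int)
    pol = events[:, 3].astype(int)
    np.add.at(frame, (y, x), pol)  # handles repeated pixels correctly
    return frame

# Three events: two positive at (10, 5), one negative at (3, 2)
ev = np.array([[10, 5, 0.001, +1], [10, 5, 0.002, +1], [3, 2, 0.003, -1]])
frame = events_to_frame(ev, 8, 16)
print(frame[5, 10], frame[2, 3])  # 2 -1
```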

17:10-17:30 | Coffee Break

17:30-17:50 | Jonas Hölzer, ZESS, University of Siegen | Improvement of the Coherent Model Function for S-Branch Raman Linewidth Determination in Oxygen
Determination of S-branch Raman linewidths of oxygen from picosecond time-domain pure rotational coherent anti-Stokes Raman spectroscopy (RCARS) measurements requires consideration of coherence beating. We present an optimization of the established model for fitting the coherence decay in oxygen, which improves the quality of the Raman linewidth data, especially in the challenging regime of small signal intensities and decay constants, enabling application to low oxygen concentrations. Two modifications to the fitting procedure are discussed, which aim at reliably fitting the second coherence beat. These are evaluated statistically against simulated decay traces; weighting the data by the inverse of the data magnitude gives the best agreement. The temperature-dependent O2-O2 S-branch Raman linewidths from the modified model show improved data quality over the original model function for all studied temperatures. O2-N2 linewidths of oxygen in air over the temperature range from 295 K to 1900 K demonstrate the applicability to small concentrations. Using the determined RCARS O2-O2 S-branch linewidths instead of the commonly used Q-branch-derived linewidths lowers the evaluated RCARS temperature by about 21 K, giving much better agreement with thermocouple measurements.
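
The inverse-magnitude weighting mentioned here can be illustrated generically with scipy's curve_fit on a toy decay-with-beating model (a stand-in for illustration only, not the actual RCARS model function):

```python
import numpy as np
from scipy.optimize import curve_fit

def toy_decay(t, a, tau, m, omega):
    """Toy stand-in for a coherence decay with a beating term.
    t and tau in picoseconds, omega in rad/ps."""
    return a * np.exp(-t / tau) * (1.0 + m * np.cos(omega * t))

rng = np.random.default_rng(0)
t = np.linspace(0.0, 400.0, 200)  # delay in ps
y = toy_decay(t, 1.0, 120.0, 0.3, 0.05) + rng.normal(0.0, 0.01, t.size)

# curve_fit divides residuals by sigma, so sigma ~ |y| weights every
# point by the inverse of its magnitude: the weak late-time signal
# (e.g. the second coherence beat) counts as much as the strong early decay.
popt, _ = curve_fit(toy_decay, t, y,
                    p0=[1.0, 100.0, 0.2, 0.045],
                    sigma=np.abs(y) + 1e-3)
print("fitted decay constant [ps]:", popt[1])
```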

17:50-18:10 | Florian Wolling, ZESS, University of Siegen | On the Pulse in Light: Optics in Wearable Devices
Ever wondered what that green light on the back of your fitness tracker is for? In his 15-minute presentation, Florian Wolling from the University of Siegen, Germany, will spotlight the optical measurement of the human heartbeat in wearable devices. The underlying technology, called photoplethysmography, is a fascinating application of light that originated on regular hospital wards and is now enjoying a revival in modern fitness trackers and smartwatches. The talk will discuss the basic principle and then highlight the trends and challenges of next-generation devices.
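
In essence, photoplethysmography measures tiny periodic changes in reflected light as blood volume pulses through the skin; a toy heart-rate estimator on such a signal could look like this (synthetic data, not code from the talk):

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def heart_rate_bpm(ppg: np.ndarray, fs: float) -> float:
    """Estimate heart rate from a raw PPG trace sampled at fs Hz."""
    # Band-pass 0.7-3.5 Hz (~42-210 bpm) to isolate the cardiac component
    b, a = butter(3, [0.7, 3.5], btype="band", fs=fs)
    filtered = filtfilt(b, a, ppg)
    peaks, _ = find_peaks(filtered, distance=fs / 3.5)  # >= min beat spacing
    beat_intervals = np.diff(peaks) / fs                # seconds per beat
    return 60.0 / beat_intervals.mean()

# Synthetic 1.2 Hz (72 bpm) pulse with baseline drift and noise
fs = 50.0
t = np.arange(0, 20, 1 / fs)
ppg = np.sin(2 * np.pi * 1.2 * t) + 0.5 * np.sin(2 * np.pi * 0.1 * t)
ppg += np.random.default_rng(1).normal(0, 0.1, t.size)
print(heart_rate_bpm(ppg, fs))  # ~72
```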

18:10-18:30 | Omid Ghozatlou, CEOSpaceTech, Politehnica University of Bucharest | Deep Learning for Shadow and Cloud Removal from Multispectral Sentinel-2 Satellite Images
Some of the most popular openly accessible optical remote sensing imagery is provided by the Sentinel-2 satellites, a high-resolution optical Earth observation mission developed by ESA that provides data and applications for operational land monitoring, emergency response, and security services. One major problem with optical remote sensing images, however, is the presence of clouds and their associated shadows. Clouds hinder the retrieval of useful information from the images, and the shadows they cast worsen the situation by enlarging the unusable portion of the image. In order to recover the background information, we need clear-sky images even for times when none are actually available. We therefore improve deep neural networks by leveraging trustworthy and transparent physical properties to extract reliable information from corrupted images.
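
As a small practical illustration (not the speaker's method): Sentinel-2 Level-2A products ship a scene classification layer (SCL) from which a basic mask of the cloud- and shadow-corrupted pixels can be derived, following ESA's SCL class definitions:

```python
import numpy as np

# Sentinel-2 Level-2A scene classification (SCL) classes of interest:
# 3 = cloud shadow, 8 = cloud (medium prob.), 9 = cloud (high prob.), 10 = thin cirrus
CLOUD_AND_SHADOW = [3, 8, 9, 10]

def unusable_mask(scl: np.ndarray) -> np.ndarray:
    """Boolean mask of pixels hidden by clouds or their shadows."""
    return np.isin(scl, CLOUD_AND_SHADOW)

# Toy 2x3 SCL patch: vegetation (4), shadow (3), cloud (9),
# cloud (8), bare soil (5), cirrus (10)
scl = np.array([[4, 3, 9],
                [8, 5, 10]])
mask = unusable_mask(scl)
print(mask)
print("unusable fraction:", mask.mean())  # 4/6 ~ 0.67
```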

18:30-18:40 | Paula López Martínez, CiTIUS, University of Santiago de Compostela | Closing Remarks
 