Get Involved

Workshop Description and Topics
Speakers and Presentation Topics
Technology Showcase Presenters
Startup Showcase Presenters
Call for Speakers

Workshop Description and Topics

The conference will include a comprehensive workshop on automotive LIDAR with the following sessions:

  1. LIDAR fundamentals: how it works and why it is necessary for ADAS and autonomous vehicle applications. This session will also provide a historical perspective, explain why cameras and radar alone are not sufficient, and cover the differences between flash and scanned LIDAR.

  2. LIDAR scanning methods and optical considerations. This session will cover beam aperture, Rayleigh range, and optical aliasing, as well as scanning methods such as mechanical scanners, mirrors, optical phased arrays, liquid crystal waveguides, Risley prisms, and solid-state electro-optics. Monostatic and bistatic optical system architectures will also be discussed.

  3. Photon detection and interference. This session will cover photon detection methods such as direct and coherent detection, benign and malicious interferers, noise and how to cope with it, and eye safety considerations.

  4. Mini case studies and an overview of 25+ notable LIDAR companies such as Aeva, AEye, Analog Photonics, Baraja, Blackmore, Cepton, Continental (ASC), Fotonic, Hesai, Holosense, Ibeo/Valeo, Innoviz, LeddarTech, Luminar, Oryx Vision, Ouster, Panasonic, Phantom Intelligence, Pioneer, Princeton Lightwave, Quanergy, RoboSense, Strobe, TetraVue, Velodyne, Waymo, and Xenomatix.

This workshop will be presented by Harvey Weinberg, Automotive Division Technologist at Analog Devices.  The workshop has been developed in collaboration with Microtech Ventures, a global firm focused on M&A advisory services, management consulting, and business development for MEMS, sensors, and microtechnology companies.

Biography: Harvey Weinberg is the Division Technologist for the Automotive Business Unit at Analog Devices. Over the past few years he has been working on long-time-horizon technology identification for automotive, lately focused principally on LIDAR. Prior roles at ADI include System Application Engineering Manager for the Automotive BU and, before that, leader of the Applications Engineering group for MEMS inertial sensors. He holds 8 US patents in technologies ranging from ultrasonic airflow measurement to inertial sensor applications to LIDAR systems. He has been at ADI for 19 years. Before ADI, he worked for 12 years as a circuit and systems designer specializing in process control instrumentation. He holds a Bachelor of Electrical Engineering degree from Concordia University in Montreal, Canada.


Speakers and Presentation Topics

Edge, Centralized, and Fluid Real-Time Processing Techniques of LIDAR Data
Raul Bravo
CEO
Dibotics

When using a centralized architecture, the raw data from the LIDAR sensor is sent to the machine learning "brain" (AI), located in a central processing unit, often a high-end computing platform or GPU. In this scenario, the AI software can fuse data at a low level, but has to sift through high volumes of data, which can take up valuable network bandwidth and carry significant costs in terms of time and energy consumption. Smart sensors refer to sensor modules with local processing (i.e. edge computing), where the output is typically a high-level description of detected objects. This approach optimizes network use and can lower overall costs, but the AI cannot obtain rich enough information, especially for multi-sensor fusion purposes. The presentation will explain these approaches with specific use cases and will also introduce a third option, where the edge processing delivers a “fluid” stream of data: low-level enough to be fused with other LIDARs or radars/cameras, but "smart" enough to decrease network and central processing requirements.
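
As a rough, hypothetical illustration of the bandwidth trade-off between the three architectures described above, the Python sketch below compares the per-frame payload implied by shipping raw points, an object list, or a reduced low-level stream. The point counts, object sizes, and "dynamic fraction" are invented for illustration and do not represent Dibotics' Augmented LIDAR formats.

    # Illustrative only: a toy comparison of the per-frame data volumes implied by
    # the three architectures described above. All numbers are assumed.
    N_POINTS = 120_000        # points per frame for a hypothetical high-density LIDAR
    BYTES_PER_POINT = 16      # x, y, z, intensity stored as float32

    # 1. Centralized: ship the raw point cloud to the central AI every frame.
    raw_bytes = N_POINTS * BYTES_PER_POINT

    # 2. Smart sensor (edge): ship only a high-level object list.
    #    Each object: class id, 3D bounding box, velocity vector (~40 bytes).
    n_objects, bytes_per_object = 30, 40
    object_list_bytes = n_objects * bytes_per_object

    # 3. "Fluid" stream: ship a reduced but still low-level representation, e.g.
    #    only the points flagged as dynamic after ego-motion compensation.
    dynamic_fraction = 0.10
    fluid_bytes = int(N_POINTS * dynamic_fraction) * BYTES_PER_POINT

    for name, size in [("centralized (raw points)", raw_bytes),
                       ("edge (object list)", object_list_bytes),
                       ("fluid (reduced low-level)", fluid_bytes)]:
        print(f"{name:26s} ~{size / 1024:8.1f} KiB per frame")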

Biography: Raul Bravo is CEO and co-founder of Dibotics, and creator of the Augmented LIDAR technology. He is a serial entrepreneur in mobile robotics and a startup coach, with an extensive 15-year background in both bootstrapped and VC-backed startup creation and growth. He is an engineer from UPC (Barcelona, Spain) with an MBA from the Collège des Ingénieurs (Paris, France). He has filed 10 patents and received 27 awards over his engineering and entrepreneurial career, among them MIT Technology Review’s Top 10 Innovators Under 35 recognition.


Less Is More: Simpler LIDARs and Sensor Fusion for Efficient Autonomous Driving
Ambra Caprile, PhD
Sensor Expert
Magneti Marelli

Signals from several sensors, such as LIDARs, cameras, and radars, can be combined through sensor fusion techniques to provide a richer set of data and to improve the reliability of trajectory estimation for objects in common driving scenarios. Our autonomous vehicle proofs of concept follow the idea that using cost-effective sensors with a reduced feature set, taken individually, does not compromise the efficient and correct functioning of the overall system, provided that the characteristics of each sensor are deeply understood with respect to the needs of a reliable, repeatable, and generally valid sensor fusion. The state-of-the-art 3D scanning LIDARs usually installed on autonomous vehicles and prototypes (such as those from Uber, Waymo, Ford, and others) to observe the surroundings, match point clouds, and detect obstacles in the drivable space have been replaced by smaller and cheaper LIDARs with greatly reduced mechanical complexity. In addition to being small, robust, and easy to integrate, such devices open the opportunity for a low-impact equipment design. Their simpler performance and functioning raise the question: what accuracy of LIDAR sensor data allows a comprehensive reconstruction of the dynamic environment surrounding the vehicle? To provide an answer, it is necessary to conduct detailed characterization studies for each quantity the sensor returns as output, in order to achieve a deep knowledge of the sensor's features and its behavior in non-conventional scenarios. This presentation will explain these approaches and outline the challenges that still need to be addressed.
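
The core argument, that cheaper sensors remain useful as long as their error characteristics are precisely known, can be illustrated with a minimal inverse-variance fusion sketch. This is a generic textbook example with assumed standard deviations, not Magneti Marelli's actual fusion pipeline.

    # A minimal sketch (not Magneti Marelli's method) of why characterizing each
    # sensor's error matters: two independent range measurements of the same object
    # are fused by inverse-variance weighting, the static special case of a Kalman
    # filter update. The standard deviations below are assumed values.
    def fuse(measurements):
        """Fuse (value, std_dev) pairs assuming independent Gaussian errors."""
        weights = [1.0 / (s ** 2) for _, s in measurements]
        value = sum(w * v for w, (v, _) in zip(weights, measurements)) / sum(weights)
        std = (1.0 / sum(weights)) ** 0.5
        return value, std

    lidar_range = (42.3, 0.30)   # simple, low-cost LIDAR: 30 cm assumed std dev
    radar_range = (42.8, 0.80)   # radar range: 80 cm assumed std dev

    fused_value, fused_std = fuse([lidar_range, radar_range])
    print(f"fused range: {fused_value:.2f} m +/- {fused_std:.2f} m")
    # The fused estimate is tighter than either sensor alone, but only because each
    # sensor's variance was characterized beforehand, which is the point of the talk.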

Biography: Ambra Caprile graduated from the University of Turin in Physics of Advanced Technologies, and afterwards won a PhD scholarship at the Politecnico di Torino, also in Physics. Her scientific activity was centered on broadband magnetic properties of soft magnetic materials and was carried out within the Magnetic Materials group of the Italian National Metrology Institute (INRIM). During her PhD research, Ambra also worked at the PTB in Braunschweig, Germany, on a collaborative project on magnetic tunnel junctions. Her post-doctoral activity dealt with the analysis of spin waves in the time and frequency domains. Ambra worked in the Optics Division of the Quantum Optics group at INRIM to develop an innovative imaging setup for the investigation of magnetic Weiss domains. In 2015, Ambra won an ESRMG within the framework of the EU project SpinCal. At the National Physical Laboratory in Teddington, and in collaboration with Cambridge University and Bielefeld University, she carried out a project aimed at the production and characterization of ion-implanted devices. The investigation focused on transport properties through the anomalous Hall effect and the effects of FIB modifications. Currently, Ambra is employed in the Innovation Technology department of Magneti Marelli, in the Automated Driving Technologies group, where the focus is on the development of autonomous vehicles. Her role centers on coordinating the characterization of several types of sensors (LIDAR, radar, cameras, and ultrasonic sensors) that provide environmental perception to the central control of the driverless car.


LIDAR Gets Real: FMCW LIDAR vs. Traditional Pulsed LIDAR
Jim Curry
VP of Product
Blackmore Sensors and Analytics

This talk will discuss the differences between frequency modulated continuous wave (FMCW) LIDAR technologies and traditional pulsed LIDAR systems. Pulsed LIDAR systems can only measure range, not velocity, and are also susceptible to interference. FMCW LIDAR sensors simultaneously measure both the speed of and the distance to any object, giving self-driving systems critical information for safe navigation. This talk will demonstrate how FMCW LIDAR eliminates interference, improves long-range performance, and measures both range and velocity, a triple threat that makes autonomous driving safer. Challenges of FMCW LIDAR systems will also be discussed, including range-Doppler ambiguity: because the expected range frequency and Doppler frequency are both in the megahertz regime, it can be very difficult to separate the range measurement from the velocity measurement, which can result in errors. FMCW LIDAR is also distinctly different from other sensors currently on the market, requiring people to think differently about how LIDAR sensors are applied. In addition, while FMCW will ultimately be the solution for mass-market LIDAR, large-scale production requires new chipsets that are still in development. We will also cover how the combination of high-resolution 3D and velocity data in the same point cloud is enabling the development of new perception algorithms for advanced driver assistance systems (ADAS), autonomous driving, smart transportation, and other applications.
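
To make the range-Doppler ambiguity concrete, the sketch below applies the standard triangular-chirp FMCW relations with an assumed chirp slope, wavelength, and target parameters; it is a generic textbook illustration, not a description of Blackmore's sensor.

    # A back-of-the-envelope sketch of the range-Doppler coupling mentioned above,
    # using the standard triangular-chirp FMCW relations (sign conventions vary by
    # implementation; the chirp slope, wavelength, and target values are assumed).
    C = 3.0e8             # speed of light, m/s
    WAVELENGTH = 1.55e-6  # typical FMCW LIDAR wavelength, m
    SLOPE = 1.0e14        # chirp slope, Hz/s (e.g. 1 GHz swept in 10 us)

    def beat_frequencies(range_m, radial_velocity_mps):
        """Beat frequencies on the up- and down-ramps of a triangular chirp."""
        f_range = 2.0 * SLOPE * range_m / C                   # range-induced beat
        f_doppler = 2.0 * radial_velocity_mps / WAVELENGTH    # Doppler shift
        return f_range - f_doppler, f_range + f_doppler       # (up-ramp, down-ramp)

    f_up, f_down = beat_frequencies(range_m=100.0, radial_velocity_mps=30.0)
    print(f"up-ramp beat:   {f_up / 1e6:6.1f} MHz")
    print(f"down-ramp beat: {f_down / 1e6:6.1f} MHz")

    # Both contributions are in the MHz regime, which is why a single ramp cannot
    # tell range from velocity. Measuring both ramps disentangles them:
    recovered_range = C * (f_up + f_down) / (4.0 * SLOPE)
    recovered_velocity = WAVELENGTH * (f_down - f_up) / 4.0
    print(f"recovered: {recovered_range:.1f} m, {recovered_velocity:.1f} m/s")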

Biography: Mr. Jim Curry has 15 years of experience in the fields of computer science, computer engineering, signal processing, computer vision, and computer graphics. An expert in computer hardware and software architectures, Jim has spent his career using desktop, embedded, and GPGPU computing as well as FPGAs to acquire, process, and visualize data in real time. Before co-founding Blackmore, Jim led software development and FPGA design activities at Bridger Photonics, Inc. and was a Principal Member of the Technical Staff at Sandia National Laboratories. Jim has a Bachelor of Science in Computer Systems Engineering/Computer Science from Rensselaer Polytechnic Institute and a Master of Science in Computer Science from Stanford University.


Evolving ADAS and Autonomy Landscape: Impacts on LIDAR Architecture
Jean-Yves Deschênes
President and Co-Founder
Phantom Intelligence

LIDAR is a relatively late entrant in the automotive sensor set, complementing the video and radar sensor suite. As with any newer technology, expectations have been high, and practical solutions have taken time to reach the market as they mature. In its early years, LIDAR has not escaped the hype surrounding autonomous car development. Now that autonomous vehicle development has passed the "peak of inflated expectations," the intelligent car community (Level 4-5 as well as lower levels of autonomy) is revising its expectations and adjusting requirements. This creates strong pressure on LIDAR providers to review the fit of their solutions to the new, more pragmatic expectations of the automotive industry. It also creates opportunities for alternative and creative architectures to emerge. This presentation will highlight LIDAR architectures that have emerged in recent years and current trends leading to future designs. The talk will also provide an overview of existing challenges with LIDAR systems for automotive applications.

Biography: As Co-founder and President of Phantom Intelligence, a Tier Two company that develops core signal processing technology for use in LIDAR-based sensors, Jean-Yves Deschênes has been steering the strategic direction of technologies aimed at the automotive industry for the last seven years. Mr. Deschênes has a software engineering degree from Université Laval in Canada and over 30 years of combined software, optics, and now LIDAR technology experience. Over those years, he has worked on many projects involving international collaboration and completely new uses of emerging technologies, projects that have set standards in their respective industries.


Advanced Physics Based LIDAR Simulation Modeling
Tony Gioutsos
Director of Sales and Marketing
TASS International, A Siemens Business

In order to provide a “due care” testing approach for highly automated vehicles, advanced sensor simulation must be involved. Although real-world or field tests are required, as is test track testing, simulation can provide the bulk of the testing and can also provide tests that cannot be reproduced on a test track or on the road. However, to provide the most accurate validation, sensor simulation closest to “raw data” is preferred. In this talk, we will focus on detailed, advanced physics-based LIDAR modeling. This type of modeling can be used to produce ROC (receiver operating characteristic) curves, as well as other measures of detection and estimation system performance. Using these measures allows for systems that are robust in real-world operation. Simulations can also be used for the testing and training of AI algorithms. This talk will also include a discussion of the most common simulation challenges and emerging techniques.
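
As a simple illustration of what a ROC curve captures, the sketch below sweeps a detection threshold over simulated scores drawn from assumed Gaussian distributions; in a real workflow the scores would come from the physics-based LIDAR model rather than from a toy noise model.

    # A minimal sketch of how simulated detections can be turned into a ROC curve.
    # Detection scores are drawn from assumed Gaussian distributions purely for
    # illustration; a physics-based LIDAR simulation would supply these scores.
    import numpy as np

    rng = np.random.default_rng(0)
    noise_scores = rng.normal(0.0, 1.0, 100_000)    # frames with no target present
    target_scores = rng.normal(2.5, 1.0, 100_000)   # frames with a target present

    thresholds = np.linspace(-4.0, 8.0, 200)
    p_fa = [(noise_scores > t).mean() for t in thresholds]   # false-alarm rate
    p_d = [(target_scores > t).mean() for t in thresholds]   # detection rate

    # Each (p_fa, p_d) pair is one operating point; sweeping the threshold traces
    # the ROC curve used to compare detector designs or simulation fidelity.
    for t in (1.0, 1.5, 2.0):
        i = int(np.argmin(np.abs(thresholds - t)))
        print(f"threshold {t:.1f}: Pd = {p_d[i]:.3f}, Pfa = {p_fa[i]:.4f}")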

Biography: Mr. Tony Gioutsos has been involved with automotive safety systems since 1990. As Director of Electronics R&D for both Takata and Breed Technologies, he was at the forefront of the safety revolution. His cutting-edge work on passive safety algorithm design and testing led him to start the first automotive algorithm company in 1992. After receiving both his BSEE and MSEE (specializing in Communications and Signal Processing) from the University of Michigan, Mr. Gioutsos worked on satellites and radar imaging for Defense applications before joining Takata. He has been a consultant for various companies in areas such as biomedical applications, gaming software, legal expert advisory, and numerous automotive systems. Mr. Gioutsos is currently Director of Sales and Marketing in the Americas for Siemens PLM where he has continued to define active safety algorithm testing requirements, as well as working on various other state-of-the-art approaches to enhance automated and connected car robustness. He has been awarded over 20 patents and presented over 75 technical papers.


The Role of LIDAR in Multi-Modal, High-Reliability Sensing
Edwin Olson, PhD
Co-Founder and CEO
May Mobility

Developing autonomous vehicles that can operate at very high reliabilities (e.g., one failure per billion miles) requires sensor performance that cannot generally be obtained from a single sensor. In this talk, we describe approaches to developing integrated sensing systems by combining LIDAR with other sensor modalities (e.g., radar), and the value of using multiple types of LIDAR sensors within a single system. Fundamentally, the sensing system must be able to detect, identify, and track objects at long ranges. Ultimately, better systems allow better predictions of a target's future behavior, enabling a self-driving vehicle to produce better plans. We will describe some of the strengths and weaknesses of currently available sensor systems and show how careful system design can achieve high performance in real-world conditions at price points that allow autonomous vehicles to be not only safe but also economically viable.
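
The arithmetic behind pairing complementary modalities can be shown with a deliberately simplified sketch; the per-mile failure rates are invented, and the independence assumption is exactly what real-world correlated failures (fog, glare, blockage) tend to violate.

    # A toy redundancy calculation behind the "very high reliability" point above.
    # It assumes the failure modes of the two sensing paths are independent, which
    # real weather and lighting conditions rarely guarantee; the per-mile rates are
    # invented for illustration only.
    lidar_failures_per_mile = 1e-5   # assumed stand-alone miss/failure rate
    radar_failures_per_mile = 1e-4   # assumed stand-alone miss/failure rate

    # If either path alone can catch the event, the system fails only when both do.
    combined = lidar_failures_per_mile * radar_failures_per_mile
    print(f"combined failures per mile: {combined:.1e}")   # ~1 per billion miles
    # Correlated failures break the independence assumption, which is why careful
    # multi-modal system design matters more than the raw numbers.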

Biography: Edwin Olson is an Associate Professor of Computer Science and Electrical Engineering at the University of Michigan, and co-founder/CEO of May Mobility, Inc., which develops self-driving shuttles. He earned his PhD from MIT in 2008 for work in robot mapping. He has worked on autonomous vehicles for over a decade, including work on the 2007 DARPA Urban Challenge, vehicles for Ford and Toyota Research Institute, and now May Mobility. His academic research includes work on perception, planning, and mapping. He was awarded a DARPA Young Faculty Award, named one of Popular Science's "Brilliant 10", and was winner of the 2010 MAGIC robotics competition. He is perhaps best known for his work on AprilTags, SLAM using MaxMixtures and SGD, and Multi-Policy Decision Making.


An Overview of Photodetectors for Automotive LIDAR Applications
Slawomir Piatek, PhD
Scientific Consultant
Hamamatsu

Whether for a car safety feature or a fully autonomous vehicle, high-resolution information about the distance to other vehicles on the road, unexpected road obstacles, or permanent structures near the road is of paramount importance. It has been realized that LIDAR, due to its superior spatial resolution over radar, is indispensable for providing such information. Surprisingly, despite years of research and development, there is not yet consensus on which LIDAR concept will be adopted by the automotive market, as there are engineering challenges associated with each type. One of these challenges is photodetection. After briefly reviewing the major LIDAR designs (flash, mechanical, MEMS-based, optical phased array, and frequency modulated), this presentation discusses the technical aspects behind the selection of the photonics components embedded in these systems. Particular attention will be paid to photodetectors such as the silicon photomultiplier (SiPM), photodiode (PD), avalanche photodiode (APD), and single-photon avalanche diode (SPAD) and their suitability for the various LIDAR concepts.

Biography: Dr. Slawomir Piatek is a senior university lecturer of physics at the New Jersey Institute of Technology, where he has been measuring the proper motions of nearby galaxies using images obtained with the Hubble Space Telescope. As a science consultant at Hamamatsu Corporation in New Jersey, he has developed a photonics training program for engineers. Also at Hamamatsu, he is involved in popularizing the SiPM as a novel photodetector by writing and lecturing about it, and by experimenting with the device. He earned a PhD in Physics at Rutgers, The State University of New Jersey.


An Overview and Comparison of LIDAR Receiver Solutions
Marc Schillgalies, PhD
Vice President of Development
First Sensor

A key component in ensuring automotive LIDAR reliability and functionality is the light receiver/detector element. With the significant increase in LIDAR development activity, various detector technologies are now in use. Laser wavelengths vary from 850 nm to 1550 nm and require different types of detectors, each with its specific advantages and disadvantages. Different signal amplification concepts also lead to distinct detector technologies. In terms of specifications, signal-to-noise figures are not the only parameters to be considered; cost, reliability, as well as system capability and availability need to be factored in too. This talk will provide a comprehensive comparison of silicon photodiodes, silicon avalanche photodiodes, silicon SiPMs/SPADs, and InGaAs avalanche photodiodes with respect to SNR, performance under different ambient conditions, bandwidth, differences in signal path architecture, and cost. Furthermore, multi-channel and single-channel detector designs will be compared. The talk will also discuss the next steps in detector development for automotive LIDAR scanners, including robust semiconductor and packaging designs for harsher environmental conditions and higher integration density.
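
For readers unfamiliar with how detector gain and excess noise trade off, the snippet below evaluates the standard shot-noise plus thermal-noise SNR expression for a PIN photodiode versus a silicon APD. All component values are assumed textbook numbers, and the calculation is a generic illustration, not First Sensor's characterization methodology.

    # A textbook shot-noise/thermal-noise SNR sketch comparing a PIN photodiode
    # (gain M = 1) with an avalanche photodiode; all values are assumed.
    import math

    Q = 1.602e-19       # electron charge, C
    KT = 4.14e-21       # k_B * T at 300 K, J
    B = 100e6           # receiver bandwidth, Hz
    R_LOAD = 1e4        # load / transimpedance resistance, ohms
    RESPONSIVITY = 0.5  # A/W, assumed for silicon near 905 nm
    P_SIGNAL = 50e-9    # received optical power, W (assumed weak return)
    I_DARK = 1e-9       # dark current, A

    def snr(gain, k_ratio):
        """SNR of a photodiode receiver; k_ratio is the APD ionization ratio."""
        excess_noise = k_ratio * gain + (1 - k_ratio) * (2 - 1 / gain)
        signal = (gain * RESPONSIVITY * P_SIGNAL) ** 2
        shot = 2 * Q * (RESPONSIVITY * P_SIGNAL + I_DARK) * gain ** 2 * excess_noise * B
        thermal = 4 * KT * B / R_LOAD
        return signal / (shot + thermal)

    for label, gain, k in [("PIN photodiode", 1, 0.0), ("Si APD, M=100", 100, 0.02)]:
        print(f"{label:15s} SNR = {10 * math.log10(snr(gain, k)):5.1f} dB")
    # In this thermal-noise-limited example the APD gain buys tens of dB of SNR,
    # at the cost of added excess noise; the crossover depends on the assumptions.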

Biography: Dr. Marc Schillgalies is currently Vice President of Development at the German detector company First Sensor in Berlin. At First Sensor he has worked in various development and product management roles since 2010. Prior to First Sensor he developed semiconductor laser diodes at Osram Opto Semiconductors in Regensburg, Germany. He received a doctorate and an M.Sc. degree in physics from the University of Leipzig, Germany, and an M.Sc. degree in Optical Sciences from the University of Arizona in Tucson. In Tucson and Leipzig, his academic work focused on optical semiconductor devices. Furthermore, he received an M.B.A. degree in General Management from Steinbeis University in Berlin with a focus on innovation management.


Lasers and Detectors: Requirements, Considerations, and Emerging Trends for Automotive LIDAR
Rajeev Thakur, P.E.
Product Marketing Manager, Infrared Business Unit
OSRAM Opto Semiconductors

As a leading provider of lasers for LIDAR, we are often besieged with requests for new laser designs. This presentation will share some of our insights on the market, the challenges faced in evaluating various LIDAR concepts, and the resulting requirements for lasers. We will discuss range, resolution, field of view, eye safety, beam quality, and collimation, along with laser packages, laser drivers, detector considerations, and various related topics. We will also share a weighted quality function deployment (QFD) matrix of application requirements and LIDAR design properties, along with a few selected LIDAR architectures, and will discuss sensor fusion gaps, data throughput, and regulations. The presentation will also include an overview of some of the interesting and recently funded LIDAR startups, as well as notable technologies coming out of universities and R&D labs. Finally, we will present a “crystal ball” outlook for LIDAR technology growth for ADAS and self-driving cars.

Biography: Rajeev Thakur is currently a Product Marketing Manager at OSRAM Opto Semiconductors, where he is responsible for infrared product management and business development in the NAFTA automotive market. His current focus is on LIDAR, driver monitoring, night vision, blind spot detection, and other ADAS applications. Rajeev joined OSRAM Opto Semiconductors in 2014. His experience in the Detroit automotive industry spans over 28 years, working for companies such as Bosch, Johnson Controls, and Chrysler. He has concept-to-launch experience in occupant sensing, seating, and powertrain sensors. He holds a Master's Degree in Manufacturing Engineering from the University of Massachusetts, Amherst and a Bachelor's Degree in Mechanical Engineering from Guindy Engineering College in Chennai, India. He is a licensed professional engineer and holds a number of patents on occupant sensing. He is also a member of the SAE Active Safety Standards development committee.


Technology Showcase Presenters

(listed alphabetically, by company name)

Aidan Browne
Applications Engineer
ON Semiconductor

Rajeev Thakur
Product Marketing Manager
OSRAM Opto Semiconductors

Jane Zhang
CEO
Surestar Technology

Brian Wong
CEO
TriLumina

Georg Ockenfuss
Director WW FAE
VIAVI Solutions


Startup Showcase Presenters

(listed alphabetically, by company name)

Mohammad Musa
CEO and Co-Founder
Deepen

Kyle Bertin
Business Development
Deepscale

Joe LaChapelle
Vice President of Research and Development
Luminar

Gleb Akselrod, PhD
CTO and Founder
Lumotive

Andrew Miner, PhD
CEO and Founder
Mirada Technologies


Call for Speakers

If you’d like to participate as a speaker, please call Jessica Ingram at 360-929-0114 or send a brief email with your proposed presentation topic to jessica@memsjournal.com. All speakers will receive a complimentary pass to the conference.

Workshop and conference scope includes topics related to LIDAR in automotive applications, such as:

  • Expert reviews and analyses of state-of-the-art LIDAR technologies
  • Business trends, market projections, M&A developments, and startup activity
  • LIDAR data fusion with other types of sensors such as radar and camera
  • Impacts of enabling technologies such as artificial intelligence
  • Notable academic research related to LIDAR for automotive applications
  • LIDAR data processing techniques and algorithms
  • Supply chain trends and challenges, government regulations, and mandates
  • Fabrication, packaging, and system assembly techniques
  • Reliability testing methodologies and techniques
  • Technology transfer, ecosystems and research hubs, company formation