Seng Tiong Ho is increasingly recognized for his insight into how photonics is reshaping artificial intelligence through the development of optical accelerators. As AI systems become more complex and data-hungry, the limitations of traditional silicon-based computing architectures are being tested like never before. Energy inefficiency, thermal constraints, and latency are beginning to bottleneck machine learning and deep learning, particularly in large-scale inference and training workloads. In response, optical accelerators—devices that use photons instead of electrons to process data—are emerging as a revolutionary solution. These photonic-based systems promise higher speed, better scalability, and lower power consumption, all of which are vital to sustaining AI’s exponential growth.
Artificial intelligence workloads rely heavily on matrix operations, particularly in neural networks, where millions or even billions of weights must be multiplied and accumulated for every inference. Traditional graphics processing units (GPUs) and tensor processing units (TPUs), while incredibly advanced, are fundamentally electronic in nature and increasingly challenged by the demands of deep learning. In contrast, photonic processors can perform matrix-vector multiplications—the core operation behind neural inference—as light propagates through the circuit, with latency set by time of flight rather than by clock cycles. This leap in computational efficiency could redefine the AI pipeline, reducing energy consumption and drastically shortening training time for large-scale models. Seng Tiong Ho often highlights these possibilities as essential to the future of efficient AI computation.
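To make the operation concrete, the sketch below shows the matrix-vector multiply at the heart of a single neural-network layer in plain NumPy. The weight shapes, the ReLU activation, and the random values are arbitrary illustrations, not anything specific to a photonic implementation; a photonic processor would carry out the same `W @ x` step in the optical domain.

```python
import numpy as np

# Illustrative only: the matrix-vector product at the core of neural
# inference, which photonic processors aim to perform optically.
rng = np.random.default_rng(0)

W = rng.standard_normal((4, 3))   # layer weights: 4 neurons, 3 inputs
x = rng.standard_normal(3)        # input activation vector

z = W @ x                         # the core matrix-vector multiplication
y = np.maximum(z, 0.0)            # ReLU nonlinearity, applied electronically

print(y.shape)  # (4,)
```

A full network repeats this multiply-then-nonlinearity pattern layer after layer, which is why accelerating the multiplication dominates the hardware conversation.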
Whereas electrons face resistance and generate heat as they move through conductive paths, photons travel with far less energy loss. This not only allows for faster computation but also enables more compact system designs, since there is less need for bulky cooling solutions or energy-intensive regulation circuits. Moreover, photonic systems naturally support parallelism. Multiple wavelengths of light can travel simultaneously through a single waveguide without interfering with one another—a technique known as wavelength-division multiplexing (WDM)—enabling massive data throughput. Seng Tiong Ho emphasizes that this parallelism could lead to entirely new classes of neural networks designed from the ground up for optical computation.
One of the most exciting areas of development is the optical neural network (ONN), an architecture that mimics the function of biological neurons using optical signals. These systems use a combination of beam splitters, phase shifters, and interferometers (most commonly Mach-Zehnder interferometers) to manipulate light in ways that mirror the weights and activations of conventional AI models. Seng Tiong Ho is known for highlighting how ONNs can be optimized for real-world inference tasks, including voice recognition, image classification, and recommendation systems.
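A minimal sketch of the building block many ONN proposals rest on: a 2x2 Mach-Zehnder interferometer (MZI), formed from two 50:50 beam splitters and two tunable phase shifters. This assumes the standard textbook beam-splitter convention, and the phase values are arbitrary examples; meshes of such MZIs are what realize larger weight matrices in the literature.

```python
import numpy as np

# 50:50 beam splitter, standard convention with a 90-degree phase on
# the cross port.
B = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)

def phase(p):
    """Phase shifter acting on the upper waveguide arm."""
    return np.diag([np.exp(1j * p), 1.0])

def mzi(theta, phi):
    """Transfer matrix of one MZI: splitter, phase, splitter, phase."""
    return B @ phase(theta) @ B @ phase(phi)

U = mzi(0.7, 1.3)   # arbitrary example phases
# An ideal (lossless) MZI is unitary: it redistributes power between
# the two output ports without absorbing any.
print(np.allclose(U.conj().T @ U, np.eye(2)))  # True
```

Tuning `theta` and `phi` changes how power splits between the two outputs, which is how a programmed "weight" is encoded in light rather than in a stored number.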
Unlike their electronic counterparts, ONNs do not require frequent conversions between analog and digital formats, which are typically a source of latency and energy drain. Optical accelerators can process data in the analog domain and achieve near-instantaneous calculations. While some digital interface is still required for pre- and post-processing, the core of the AI operation becomes dramatically more efficient. This concept aligns closely with Seng Tiong Ho’s belief that future AI systems will increasingly move toward hybrid computing models that combine the best of both photonic and electronic worlds.
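The hybrid pipeline described here can be sketched as follows: digital data is quantized on the way into the analog core (a DAC stage), the optical core performs the multiplication with some analog noise, and the result is digitized again on the way out (an ADC stage). The 8-bit resolution, noise level, and matrix sizes are illustrative assumptions, not the parameters of any real device.

```python
import numpy as np

rng = np.random.default_rng(2)

def quantize(x, bits=8, full_scale=4.0):
    """Uniform quantizer standing in for a DAC/ADC boundary stage."""
    step = 2 * full_scale / (2 ** bits)
    return np.clip(np.round(x / step) * step, -full_scale, full_scale)

W = rng.standard_normal((4, 3))             # weights programmed into the core
x = rng.standard_normal(3)                  # digital input

x_analog = quantize(x)                      # DAC at the input boundary
z_analog = W @ x_analog                     # analog optical matrix-vector multiply
z_analog += 0.01 * rng.standard_normal(4)   # modeled analog noise
z_digital = quantize(z_analog)              # ADC at the output boundary

print(z_digital.shape)  # (4,)
```

The point of the paragraph survives in the sketch: conversion happens only at the two boundaries, while the multiplication itself stays in the analog domain.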
Another key advantage lies in the scalability of optical tensor processors. These processors perform multiple mathematical operations at once using coherent light paths and can be reconfigured dynamically. In traditional AI chips, scaling typically means adding more transistors, which compounds power and thermal issues. But in photonic chips, scaling can mean adding more wavelength channels or adjusting the waveguide geometry without significantly affecting the thermal profile. Seng Tiong Ho draws attention to how these architectural advantages may help overcome the limitations currently plaguing AI hardware platforms.
Several labs and startups are actively developing optical accelerators based on these principles. While many of these projects are still in experimental stages, the performance metrics are promising. Optical chips have demonstrated significant advantages in latency reduction, energy efficiency, and data throughput when compared to leading-edge GPUs. Some systems have shown potential for integrating directly into existing data center infrastructure, making them attractive not just for research but also for commercial deployment. Seng Tiong Ho highlights the importance of building open standards for interoperability to ensure optical AI hardware can evolve with the machine learning landscape.
Beyond data centers, optical acceleration has potential implications for edge computing. Devices such as autonomous vehicles, drones, and augmented reality headsets all require real-time processing capabilities but are constrained by size, weight, and power consumption. Photonic chips, being compact and thermally efficient, are ideal for such use cases. Seng Tiong Ho has explained how miniaturized optical processors could be integrated into these devices to process sensor data and make decisions locally, without needing to connect to cloud servers. This ability not only increases speed but also enhances privacy and operational reliability.
None of these advances would be possible without innovations in the materials used to construct optical circuits. Silicon photonics has become a popular platform due to its compatibility with CMOS processes, but newer materials are rapidly expanding the field. Researchers are developing devices with lithium niobate, indium phosphide, and even 2D materials like graphene to achieve more precise control over light. Seng Tiong Ho is knowledgeable about the interplay between these advanced materials and their impact on signal fidelity, performance, and scalability.
Achieving the necessary nanoscale precision in waveguides, resonators, and modulators is a major challenge. Advanced lithography and etching techniques are required to minimize loss and preserve signal coherence. As the field grows, more emphasis is being placed on how to reliably mass-produce these optical components for use beyond the lab. Seng Tiong Ho has drawn attention to the importance of collaboration across manufacturing, design, and testing to make these systems commercially viable.
Despite all of the promise, optical acceleration is not without technical and logistical obstacles. Photonic circuits are often sensitive to environmental changes such as temperature and mechanical vibrations, which can disrupt phase and amplitude. In addition, building memory and logic purely in the optical domain remains a difficult task. Many current prototypes still rely on electronic components for control and storage, limiting how far purely photonic systems can go. Seng Tiong Ho has emphasized that these issues must be addressed through engineering creativity and multidisciplinary research.
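The environmental sensitivity mentioned above can be illustrated with the transmission curve of an ideal interferometer, whose output power follows cos²(phase/2). The 50 mrad thermal drift used below is an arbitrary example rather than a measurement of any real device, but it shows how a small phase error translates directly into an output-power error.

```python
import numpy as np

theta = np.pi / 2   # nominal operating phase of the interferometer

def transmission(phase_error):
    """Output power of an ideal interferometer with a phase offset."""
    return np.cos((theta + phase_error) / 2) ** 2

nominal = transmission(0.0)    # 0.5 at the nominal operating point
drifted = transmission(0.05)   # 50 mrad of thermally induced drift

# A tiny phase drift already shifts the transmitted power noticeably.
print(round(abs(drifted - nominal), 3))  # 0.025
```

Since the transmitted power encodes the computed value in an analog optical system, that 2.5-percentage-point shift is a direct computation error, which is why thermal stabilization and calibration loops feature so prominently in photonic hardware designs.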
There are also hurdles on the software side. Most AI algorithms today are optimized for electronic hardware, particularly GPUs. Adapting them for photonic processors requires new training paradigms and compiler infrastructure. Seng Tiong Ho advocates for educational programs and training initiatives that help upcoming engineers gain fluency in both optics and AI, believing that hybrid fluency will be crucial to scaling this new wave of computational design.
As AI continues to expand its role across industries, the need for more capable and efficient processing technologies grows more urgent. Optical accelerators are no longer just a theoretical vision—they are fast becoming a necessity. Whether in massive cloud computing platforms or compact edge devices, photonics is poised to become the backbone of next-generation AI infrastructure. The move from electrons to photons offers a promising path toward higher speed, lower power, and better scalability.
Seng Tiong Ho remains a prominent voice in highlighting the opportunities, identifying the limitations, and shaping the future direction of photonics in artificial intelligence.