How do human beings experience the world? We have eyes to see, noses to smell, ears to hear, and fingers to touch and feel. But what allows us to experience and understand the world, and not just sense it?

The human brain, the world's most advanced computer. As we explore today's sensor industry, it will be useful to think of our eyes, ears, noses, and fingers as 'hardware' and our brain as the 'software'.

Balancing Software and Hardware

The miracle of life relies on the perfect balance between our (and other creatures’) ‘software’ and ‘hardware’. Achieving this balance, as it turns out, is the primary challenge faced by today’s sensor industry.

The sensor industry is part and parcel of a number of new and exciting industries, including driverless cars, artificial intelligence, machine vision for manufacturing, and medical devices. While precision optical components created by IRD Glass and others have pushed the hardware side forward, the sensor industry and the many industries it serves are limited by an imbalance between hardware and software capabilities.

The human brain is remarkably effective and efficient at adding context to unexpected or unknown sensory data. While sensors are excellent at measuring known stimuli, the devices in which they are used lack the advanced digital logic (the software) to cross-reference stimuli, predict unknown variables, and react quickly and reliably to new ones.
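To make that idea of cross-referencing stimuli a little more concrete, here is a minimal, hypothetical sketch in Python. The sensor names, confidence values, and thresholds are invented for illustration; the point is simply that two ambiguous readings, combined, can justify a decision that neither reading supports on its own.

```python
# Hypothetical example: cross-referencing two sensors to contextualize a stimulus.
# The sensor names and thresholds are illustrative, not taken from any real device.

def classify_obstacle(camera_confidence: float, lidar_distance_m: float) -> str:
    """Combine a camera's 'pedestrian' confidence with a lidar range reading."""
    if camera_confidence > 0.8 and lidar_distance_m < 30.0:
        return "pedestrian ahead - brake"
    if camera_confidence > 0.8:
        return "pedestrian detected - monitor"
    if lidar_distance_m < 5.0:
        return "unknown close object - slow down"
    return "no action"

# Each reading alone is ambiguous; together they support a confident reaction.
print(classify_obstacle(camera_confidence=0.92, lidar_distance_m=18.0))
```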

The bottom line is that new advances in semiconductor technology are needed if sensory equipment is to match or exceed the human capabilities of sight, smell, touch, taste, and hearing. That brings us to the semiconductor industry and its goal of matching the data-processing ability of the human brain.

Common Sense: Semiconductors & Sensors

Embedded vision has seen a rapid rise in the automotive and robotic sectors. Each presents its own challenges, as goals differ both from human vision and from each other. As we discussed in a recent blog, a driverless car needs to recognize the speed of other cars and of objects that may cross its path, and it needs to react in a fraction of a second. Furthermore, driverless cars and other robots will need better pattern recognition so that they can reliably make subtle distinctions, such as between a human face and a statue, or between a ledge and the edge of a carpet. We make these distinctions with ease, and that's thanks to our impeccable software.
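To give a sense of what "a fraction of a second" means here, the back-of-the-envelope sketch below (in Python, with invented numbers) estimates how little time the perception system has before a crossing object becomes unavoidable. The speed and distance are assumptions chosen only for illustration.

```python
# Back-of-the-envelope sketch with invented numbers: how much time does the
# perception system have before a crossing object becomes unavoidable?

def time_to_collision(distance_m: float, closing_speed_mps: float) -> float:
    """Seconds until impact if neither party changes speed."""
    return distance_m / closing_speed_mps

# A car travelling ~50 km/h (~14 m/s) toward an object 20 m ahead:
ttc = time_to_collision(distance_m=20.0, closing_speed_mps=14.0)
print(f"time to collision: {ttc:.2f} s")  # ~1.4 s to sense, decide, and brake
```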

The goal is to couple advanced sensors with 'intelligence': better image processing and higher-order logic that can interpret complex patterns and contextualize information. Leading the charge in this arena are semiconductor companies. The task is a mighty one, one that gets at the heart of what it means to be a living, sentient being. Work is underway, and advances are being made. Surveillance and automotive applications now make up 3% of the market for semiconductors, and they are predicted to reach 5% or more in the near future.