Over two and a half decades ago, I remember first picking up a mouse and keyboard to play Quake on a university lab computer for over eight hours straight. Fast forward to today, and that 3dfx Voodoo graphics card has been replaced with racks of memory-laden GPU cards chained together to crunch matrix calculations for machine learning algorithms.

But curiously, some things have remained the same: the mouse and keyboard. Why is it that while CPUs have evolved, GPUs have grown and moved to the cloud, and storage has jumped from spinning platters to solid state, the mouse and keyboard have hardly changed at all?

This question has stuck with me as my academic career has grown from student assistant at the Multimedia Design Center, to a Master's in Software Engineering with a thesis at the Carnegie Mellon Robotics Institute, to where it is now at Macquarie University's Computational NeuroSurgery Lab. Through all these years I waited excitedly for a peripheral that would finally introduce neural interfaces into computing in a meaningful way. Some companies did try: my first introduction to consumer neural interfaces came around 2010 with NeuroSky. It worked, and it remained a curiosity to my software consulting clients and friends, but nobody saw where it could fit into their lives.

Through much of that time, I was working at Disney as a freelance games developer, and I couldn't help but think that the tool you use to write essays and crunch numbers on a spreadsheet should be the same tool you use to navigate a game character through a maze. With the rise of console gaming, especially after the introduction of the PlayStation controller, an alternative to mouse-and-keyboard gaming emerged. Yet the persistence of the mouse and keyboard in competitive gaming is an indicator that, when it comes to user experience, they still reign supreme. Further still, most competitive gamers use wired peripherals to reduce latency and interference when competing. With this evidence, it might be easy to conclude that in those early years of computing we simply hit a home run with the design of the mouse and keyboard, and that they have not changed because they work so well. But this would be a lazy assumption. In other fields where precision and speed are critical, such as robot-assisted keyhole surgery, a unique physical peripheral interface was developed to control the many robotic arms in the device. Furthermore, new research by my colleagues and me is analysing and interpreting eye gaze patterns during free viewing of MRI scans, to see whether performance gains can be made and transferred between novice and expert doctors.

But with all this research going into reducing and optimising latency in interfaces used in critical medical procedures, could any of that knowledge be used in gaming, and specifically in esports? The answer can be seen plainly in the lengths Formula 1 competitors go to when developing new materials to reduce weight in their multimillion-dollar race cars, or in the elaborate measures competitive cyclists take with their clothing and gear. Perhaps an argument could be made that esports isn't yet big enough to warrant this kind of tinkering. But that isn't true either: esports matches were selling out the Los Angeles Staples Center even a decade ago. It is only a matter of time before a whiz kid who likes reading PubMed articles on eye tracking hacks together an electrooculogram interface with their mouse and keyboard to gain an edge over other esports competitors, launching a bio-peripheral latency arms race between elite esports competitors that eventually trickles down to consumers as new devices.
