Gary, in recent blog posts, touched on the future of the user interface found in CTRM systems, both near-term and futuristic (Zero UI). He and I are in complete agreement that improvements in UIs are occurring and innovations will continue, particularly in data visualization, as traders and trading staff deal with a continuing flood of information from existing and emerging data sources.
That said, it got me thinking (like a good blog post should), not as a human-machine interface (HMI) expert (which I’m not), or even as an analyst (which I am), but as someone who is blankity-blank years old and has seen a lot of trading floors. So, from this perspective, I’m going out on a limb to say that while the future of computer interactions may eventually be eye glances, facial tics, spoken words, fingers flapping midair, or mind reading – none of these methods will ever be used for managing business in a CTRM system.
Let’s take voice recognition first – it has gotten pretty reliable, as evidenced by the eavesdropping Amazon Alexa (which is now permanently muted in my house); and with a bit of coding, there is no question you could control any software, including a CTRM system, by voice command. But it just doesn’t make sense, particularly in CTRM. That technology existed in the 1990s, and the ETRM company I worked for at the time, TransEnergy, gave it a spin in a prototype. Unfortunately, though cool, it turned out to be neither useful nor compelling.
The problem is that traders use their voices for any number of things…making deals via phone, discussing strategies with their peers, shouting instructions or deal details across the trading floor, or even just talking via cell to their significant others about what to have for dinner that night. And, while these traders and support staff are having these conversations, they are multitasking – they’re doing all that talking and listening while simultaneously using their keyboards or mice to interact with their systems. Having to speak to your computer in order to get what you need, rather than just mousing around and/or hammering on your keyboard, would reduce that ability to multitask. Again, I’m far from an HMI expert, but until I can see that using voice commands to input data or otherwise control your CTRM system could actually improve efficiency on the trading floor, I’m not signing on.
Ok then, so what about keeping the multitasking going by using hand gestures instead of the keyboard to control the computer? Again, why? Is there any real efficiency to be gained by waving your hands around as opposed to using a mouse or trackball? We all remember the famous scene in Minority Report where Tom Cruise stands and waves his hands around a virtual screen…and if you’re Tom Cruise, it looks cool. If you’re a bit doughy and tire easily, like me, it’s going to look a lot less elegant. Give me a mouse and I can throw images around a screen all day, just like Mr. Cruise. Make me stand and flap my arms…well, you’re going to get about 3 minutes of effort before I double over with my hands on my knees and sweat dripping from my nose.
Along the same lines, remember virtual keyboards? They were briefly a thing around 2005 (though you can still buy them today). These are a laser-generated image of a keyboard projected on your desk that you poke at to input data. But again, why? They don’t provide any tactile feedback (making typing very error prone) and add nothing in terms of efficiency gains. Though they’ve been around for more than a decade, nobody but the geekiest of device nerds uses them for anything that requires actual productivity.
Eye control of cursor movements might be a bit more interesting, though frankly, I just have a hard time envisioning the use case. Think of a trader’s or a scheduler’s desk…there are anywhere from 3 to 8 LCD screens arrayed in front of them, giving them access to vital market information and allowing them to interact with exchanges, pipelines, TSOs, trading partners, etc. Having a system that tracks eye movements might make sense if it saves the user from having to swing the cursor from screen to screen via mouse in order to interact with those screens. But, if the benefit (not having to move your mouse two or three inches) doesn’t outweigh the cost, it’s probably not going to find traction.
What about a system that can read your thoughts for computer interactions? NO. Unless you’re slamming Ritalin 24/7 and floating in saltwater in a sensory deprivation tank, there is no way you could ever maintain enough thought control to operate a CTRM system…remember the old story about how guys think about sex 8,000 times a day? Yeah, thought control is not happening in CTRM.
There’s no doubt that the future is going to be substantially different. But it seems to me much more likely that commodity trading as we know it will be long gone before Mr. Cruise waves up a trade of the next cargo of LNG from Lake Charles to Singapore.
Maybe I’m not really a dinosaur…just jaded. As a kid, I was promised that by the dawn of the 21st century, we would all have flying cars and could vacation on giant wheel-like space stations (offering side trips to the moon) operated by Pan Am or TWA. None of that happened. We don’t even have the Dick Tracy wristwatch video communicator that was envisioned in the 1950s (Don’t say “Oh yeah, what about Apple watches?” Sorry smart guy, they only do FaceTime audio, not video).
It’s important to keep in mind that CTRM vendors are not technology incubators that exist on the bleeding edge of innovation. The markets they serve are not trillion-dollar consumer markets, and they don’t have billions to invest in the next “big thing”. CTRM software reflects the reality of the markets it serves – the business processes and commercial standards of commodity trading. While incremental improvements will continue in the user interfaces of these systems and screens will become more efficient, I will wager that your refrigerator will be sending you texts telling you to buy a gallon of milk years before you enter a gas deal by waving at your computer.