Dr. T.V. Raman earned his MSc in Mathematics from IIT Bombay in 1989 and his PhD from Cornell University in 1994. His PhD thesis was awarded the ACM Doctoral Dissertation Award in 1994.
Dr. Raman has over 25 years of leadership experience in advanced technology development. After graduating from IIT Bombay, he joined Cornell as a mathematician intending to apply his mathematics background to robotics and computer vision. While doing his computer science coursework, he discovered the potential of combining speech technology with richly structured electronic documents. He then began building a system that used text-to-speech (TTS) to render such content, which led to the Audio System for Technical Readings (AsTeR).
His dissertation defined several seminal concepts, including audio formatting and structured browsing, concepts that have continued to appear throughout his later work.
After finishing his graduate work at Cornell, Raman went on to apply audio formatting and structured browsing to desktop interfaces. To further explore these ideas, he designed Aural CSS (ACSS) and built Emacspeak, an open source project that he continues to use for all his daily work. In addition to being an effective research workbench for eyes-free interaction, Emacspeak has proven to be a useful accessibility solution on a wide range of platforms.
Prior to joining Google Research in 2005, Dr. Raman worked at Digital Equipment Corporation, Adobe Systems, and IBM Research. While at Adobe, he brought the lessons learnt from AsTeR to enrich PDF with additional document structure, enabling rich information extraction. At IBM, he worked on multimodal interfaces to the Web and made seminal contributions to several W3C specifications, including XForms.
Dr. Raman has authored three books and filed over 75 patents. His work on eyes-free interaction has been profiled in mainstream publications including the New York Times and Scientific American.
Dr. Raman now works on user-aware interfaces in the context of the Google Assistant. During his 12+ years as a Google Research Scientist, he has worked on Google Search, Android, and ChromeOS to build the underpinnings of accessibility on Google's core platforms. He works on the Assistant with the goal of creating friction-free user interfaces for eyes-free interaction. His objective is to deliver technologies that enable ubiquitous, eyes-free access to the cloud from a wide variety of devices, ranging from smartphones and tablets to wearables. According to Dr. Raman, speech is the next dimension in user interfaces, and he develops application frameworks that combine speech technologies with the power of the cloud to deliver user-aware interfaces enabling anytime, anywhere access to one's personal smart assistant.