Engineers are designing both computers and mobile devices to be increasingly hands-free, which means people will have more meaningful conversations with computers. The primary way engineers envision people communicating with devices is through voice. The future of hands-free human-machine interfaces depends on how well devices understand human speech, with all its mistakes, pauses and accents.

There will likely be multiple points at which people can use voice to converse with computers. Languages vary considerably, with complex features such as tones and stresses, and individual speakers differ widely too, talking at different volumes and across a wide range of pitches. The mistakes of Skype's real-time translation software illustrate how far artificial intelligence still has to go, even when translating between two variants of the same language.


The Rise of Digital Assistants

Digital assistants such as Alexa, Siri, Cortana and Google Assistant reveal a future in which the screen as a computing interface could be eliminated. Voice is becoming more of an interface mechanism, yet these devices seek to supplement voice with data. For example, Cortana mines its work users' emails and calendars, and with the Microsoft Office 365 cloud service it should soon gain the ability to search files and surface pertinent documents. Financial giants Wells Fargo and Visa are also working to create their own voice and biometric identification systems, which will use voice to authorise actions such as transferring funds. Hands-free computing through voice control promises the ability to engage with computers while going about business as usual.
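
To make the "voice plus data" idea concrete, here is a minimal, hypothetical sketch of an assistant answering a spoken question by combining it with the user's own calendar. The `CALENDAR` store and `handle_voice_query` function are illustrative inventions, not Cortana's actual API:

```python
from datetime import datetime

# Hypothetical in-memory calendar standing in for the email and calendar
# data an assistant such as Cortana mines on the user's behalf.
CALENDAR = [
    {"title": "Budget review", "start": datetime(2017, 3, 14, 10, 0)},
    {"title": "1:1 with manager", "start": datetime(2017, 3, 14, 15, 30)},
]

def handle_voice_query(transcript: str, now: datetime) -> str:
    """Answer a spoken request by combining it with the user's own data:
    the words alone ("what's next?") mean little without the calendar."""
    if "next" in transcript.lower() and "meeting" in transcript.lower():
        upcoming = [e for e in CALENDAR if e["start"] > now]
        if not upcoming:
            return "You have no more meetings today."
        nxt = min(upcoming, key=lambda e: e["start"])
        return f"Your next meeting is '{nxt['title']}' at {nxt['start']:%H:%M}."
    return "Sorry, I didn't catch that."

print(handle_voice_query("What's my next meeting?", datetime(2017, 3, 14, 12, 0)))
```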

But voice control poses a number of security risks. Even soft verbal commands can be picked up, and people can use mimicry to impersonate an authorised user and steal data. Voice control on a broad scale could therefore threaten both individual and corporate privacy and confidentiality. It may offer a new level of functionality without the need for people to hold a phone in front of their face, but how organisations will close these security gaps so that voice control can carry over into the world of work remains to be seen.
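
One common safeguard is speaker verification: comparing a voiceprint embedding of each spoken command against the embedding enrolled for the authorised user. The sketch below is a minimal, hypothetical illustration of the idea; the function names and the 0.85 threshold are assumptions, and a skilled mimic can still score close enough to pass, which is precisely the risk described above:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Standard cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_speaker(enrolled_voiceprint: np.ndarray,
                   utterance_embedding: np.ndarray,
                   threshold: float = 0.85) -> bool:
    """Accept a voice command only if the new utterance is close enough to
    the enrolled voiceprint. The threshold is a hypothetical tuning knob:
    too low and mimics get in, too high and the real user is locked out."""
    return cosine_similarity(enrolled_voiceprint, utterance_embedding) >= threshold

# Example: a genuine utterance (the enrolled voiceprint plus small
# variation) should clear the threshold.
rng = np.random.default_rng(0)
enrolled = rng.normal(size=256)
genuine = enrolled + rng.normal(scale=0.1, size=256)
print(verify_speaker(enrolled, genuine))  # True for a close match
```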

Multi-Threading: The Holy Grail

Devices or programmes that are multi-threaded, meaning they can keep track of multiple conversations at once, may be the key to improving conversations with AI bots. Today, a user usually must finish one use case before starting another, yet most people do not always finish an old conversation before beginning a new one. When a user can engage in multiple conversations with an AI bot at the same time, each at a different stage, AI bots may be able to achieve much more. Engineers are also working on giving devices the flexibility to offer both voice and text as an interface: voice works in the privacy of a home or car, particularly as more workers start to perform their jobs remotely, while in a crowded office, text is likely the better option.
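
In implementation terms, multi-threading amounts to keeping a separate state object per open conversation and resuming whichever one the user returns to. A minimal sketch, with hypothetical intent and slot names chosen for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class ConversationThread:
    """One in-progress use case, e.g. booking travel or filing an expense."""
    intent: str
    slots: dict = field(default_factory=dict)  # facts collected so far

class MultiThreadedBot:
    """Keeps several conversations open at once and resumes the right one."""
    def __init__(self):
        self.threads = {}  # thread_id -> ConversationThread

    def start(self, thread_id: str, intent: str) -> None:
        self.threads[thread_id] = ConversationThread(intent)

    def resume(self, thread_id: str, slot: str, value: str) -> str:
        """Pick up an earlier conversation exactly where the user left it."""
        thread = self.threads[thread_id]
        thread.slots[slot] = value
        return f"{thread.intent} so far: {thread.slots}"

# A user can interleave two use cases without finishing either one first.
bot = MultiThreadedBot()
bot.start("travel", "book_flight")
bot.start("expense", "file_expense")
print(bot.resume("expense", "amount", "42 EUR"))
print(bot.resume("travel", "destination", "London"))
```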


Building AI Into Different Systems

Engineers are also working on integrating AI into different systems and bots, which is a challenging task. AI is currently limited to natural language processing and some basic skills tied to fixed data, such as weather, traffic, trivia and enquiries about a company's inventory.
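
In practice, those "basic skills" often boil down to routing a recognised intent to a static lookup, as in this hypothetical sketch; the `FIXED_DATA` tables and fallback reply are assumptions for illustration, where a real assistant would query live services:

```python
# Hypothetical fixed-data tables standing in for weather, traffic and
# inventory feeds.
FIXED_DATA = {
    "weather": {"london": "light rain, 12C"},
    "inventory": {"laptops": 240, "monitors": 85},
}

def answer(intent: str, entity: str) -> str:
    """Map a recognised intent to a fixed-data lookup; anything outside
    these tables falls through to a canned fallback reply."""
    value = FIXED_DATA.get(intent, {}).get(entity.lower())
    if value is None:
        return "Sorry, I can't help with that yet."
    return f"{entity}: {value}"

print(answer("weather", "London"))     # London: light rain, 12C
print(answer("inventory", "laptops"))  # laptops: 240
print(answer("trivia", "capitals"))    # fallback: outside the fixed data
```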

If individuals want to communicate with bots in a more meaningful way, the bots must be "smarter", that is, more proactive and intuitive. They need to learn about the individual with whom they are communicating: a bot should know the user's preferences and behaviour so that it can anticipate and suggest potential needs. Without such "learning," the bot cannot really engage in meaningful two-way communication. Although an individual can be very forgiving in the beginning if the bot doesn't always get it right, users expect a bot to improve and learn from their behaviour. In 2017, users will be looking for bots that are dedicated to achieving more.
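
The simplest form of such learning is just remembering what the user chose before. The sketch below is a deliberately crude, hypothetical stand-in for real behavioural modelling: it counts past choices per context and suggests the most frequent one:

```python
from collections import Counter
from typing import Optional

class PreferenceLearner:
    """Records each choice a user makes and suggests the most frequent one,
    a crude stand-in for the behavioural learning described above."""
    def __init__(self):
        self.history = {}  # context -> Counter of past choices

    def observe(self, context: str, choice: str) -> None:
        self.history.setdefault(context, Counter())[choice] += 1

    def suggest(self, context: str) -> Optional[str]:
        counts = self.history.get(context)
        if not counts:
            return None  # nothing learned yet; the bot stays reactive
        return counts.most_common(1)[0][0]

# After a few mornings the bot can anticipate the usual order.
prefs = PreferenceLearner()
for coffee in ["flat white", "flat white", "espresso"]:
    prefs.observe("morning_order", coffee)
print(prefs.suggest("morning_order"))  # flat white
```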


Claus Jepsen, Chief Architect, Head of the People Platform Team and Innovation Labs, Unit4

Claus Jepsen is Unit4's Chief Architect and Head of the People Platform Team and Innovation Labs, focused on building cloud-based, super-scalable solutions and running technology and feature incubation and innovation projects. Claus and his team have recently been focusing on researching, designing and building the next generation of user experience for enterprise software, most notably Unit4's digital assistant Wanda and its supporting technologies. Moreover, Claus has spent the last seven years architecting and designing various service-oriented, cloud-based solutions with elasticity, fault tolerance and resilience as core design criteria, based on a microservice architecture.
