We gave our developers two weeks to produce a proof-of-concept (PoC) native iPad application that would fill the gaps in the receptionist’s working day.
After all, humans have to take lunch breaks, so who greets visitors in their absence? Enter the VR (virtual receptionist).
There are several AI frameworks and new technologies that our developers leverage for client work every day. But we can use them to improve our own office UX, too. At least, that’s the notion that Appscore set out to test with our latest experimental venture.
So is the latest AI technology sophisticated, interoperable and reliable enough to pull off a fluid and positive experience for visitors? The scenario goes like this: someone walks into the building and up to the reception desk. Our receptionist is on break. Before the visitor gets through to the office, the VR activates.
The app detects that a person is present and locates their face using Apple’s Vision framework (face detection); the first piece of tech leveraged for this experiment. Next, the ‘receptionist’ sends a photo of the face to an AI service from AWS – Rekognition – which compares it against staff profile pictures to see if we recognise the visitor.
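For the technically curious, here’s a minimal sketch of that detection step with Vision. The `cameraImage` frame and the callback are our own illustrative names; the face region it finds is what would be cropped and sent on to Rekognition.

```swift
import Vision
import CoreGraphics

// A minimal sketch of face detection with Apple's Vision framework.
// `cameraImage` stands in for a frame captured from the iPad's front camera.
func detectFace(in cameraImage: CGImage, completion: @escaping (Bool) -> Void) {
    let request = VNDetectFaceRectanglesRequest { request, _ in
        let faces = request.results as? [VNFaceObservation] ?? []
        // A non-empty result means at least one face is in frame, so the app
        // can crop the region and pass it on for recognition.
        completion(!faces.isEmpty)
    }
    let handler = VNImageRequestHandler(cgImage: cameraImage, options: [:])
    try? handler.perform([request])
}
```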
If the VR recognises the guest as someone who works at Appscore, it greets them and lets them pass. The greeting comes courtesy of AVSpeechSynthesizer, part of Apple’s AVFoundation framework, which converts text to spoken audio. Our developers programmed it to ‘speak’ to everyone it ‘meets’, welcoming employees and catching visitors’ attention.
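A minimal sketch of that greeting follows; the wording and the voice locale are just illustrative choices, not the production script.

```swift
import AVFoundation

// A minimal sketch of the text-to-speech greeting with AVSpeechSynthesizer.
// Keep a long-lived reference: a local synthesiser can be deallocated mid-utterance.
let synthesizer = AVSpeechSynthesizer()

func greet(_ name: String) {
    let utterance = AVSpeechUtterance(string: "Welcome back, \(name)!")
    utterance.voice = AVSpeechSynthesisVoice(language: "en-AU")
    utterance.rate = AVSpeechUtteranceDefaultSpeechRate
    synthesizer.speak(utterance)
}
```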
When someone the VR doesn’t recognise walks in, the speech synthesiser activates again. A pre-planned script greets the guest and asks who they’re here to see. We then capture the visitor’s reply as audio, convert it from speech to text, and pass it to Google’s AI chatbot service, Dialogflow, for processing.
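Here’s a hedged sketch of that transcription step using Apple’s Speech framework. For simplicity it transcribes a recorded audio file; a live build would more likely stream microphone buffers through an SFSpeechAudioBufferRecognitionRequest, and the locale and function names are our own assumptions.

```swift
import Speech

// A hedged sketch of the speech-to-text step with Apple's Speech framework.
// Assumes microphone and speech-recognition permissions are already granted.
let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-AU"))

func transcribe(audioFileAt url: URL, completion: @escaping (String) -> Void) {
    let request = SFSpeechURLRecognitionRequest(url: url)
    _ = recognizer?.recognitionTask(with: request) { result, _ in
        guard let result = result, result.isFinal else { return }
        // The finished transcription is what gets forwarded to Dialogflow.
        completion(result.bestTranscription.formattedString)
    }
}
```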
Dialogflow is configured with different conversation patterns. From the text it receives, it extracts the information it needs, such as who is visiting and why, in order to formulate a response. Different conversation flows adhere to different scenarios. The speech synthesiser reads Dialogflow’s response back to the visitor to continue the conversation, and whatever the visitor says next determines the following response.
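A rough sketch of handing the reply to Dialogflow via its v2 REST detectIntent endpoint might look like the following; the project ID, session ID and OAuth access token are placeholders for a real Google Cloud configuration.

```swift
import Foundation

// A hedged sketch of a Dialogflow ES v2 detectIntent call over REST.
// The response's queryResult.fulfillmentText carries the bot's next line.
func detectIntent(text: String, projectID: String, sessionID: String,
                  accessToken: String, completion: @escaping (Data?) -> Void) {
    let url = URL(string:
        "https://dialogflow.googleapis.com/v2/projects/\(projectID)/agent/sessions/\(sessionID):detectIntent")!
    var request = URLRequest(url: url)
    request.httpMethod = "POST"
    request.setValue("Bearer \(accessToken)", forHTTPHeaderField: "Authorization")
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    let body: [String: Any] = [
        "queryInput": ["text": ["text": text, "languageCode": "en"]]
    ]
    request.httpBody = try? JSONSerialization.data(withJSONObject: body)
    URLSession.shared.dataTask(with: request) { data, _, _ in
        completion(data)
    }.resume()
}
```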
In the final step, the chatbot detects that it has all the necessary information and that the conversation has finished. When this happens, it delivers an automated farewell, asking the guest to take a seat as they’ll be seen shortly, or similar. The VR simultaneously notifies the employee in question via Slack message, email or a backend record.
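The Slack side can be as simple as posting to an incoming webhook. A minimal sketch, with the webhook URL and message wording as placeholders:

```swift
import Foundation

// A minimal sketch of the Slack notification via an incoming webhook.
// `webhookURL` stands in for a real Slack incoming-webhook URL.
func notifyHost(name: String, visitor: String, webhookURL: URL) {
    var request = URLRequest(url: webhookURL)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    let payload = ["text": "\(visitor) is in reception to see \(name)."]
    request.httpBody = try? JSONSerialization.data(withJSONObject: payload)
    URLSession.shared.dataTask(with: request).resume()
}
```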
As for the user interface (UI), the app displays a simple loading indicator during the initial facial recognition and similar background processes, and a waveform animated with Core Graphics during speech. The waveform is programmed to activate only when the app is ‘speaking’, to show that it is working.
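For illustration, a stripped-down version of such a waveform view might look like this; the sine shape, colour and amplitude handling are our own assumptions rather than the production implementation, which would redraw with varying amplitude while the app ‘speaks’.

```swift
import UIKit

// A hedged sketch of a simple Core Graphics waveform view.
// This draws one static sine wave; animating it means updating `amplitude`.
class WaveformView: UIView {
    var amplitude: CGFloat = 0.5 { didSet { setNeedsDisplay() } }

    override func draw(_ rect: CGRect) {
        guard let context = UIGraphicsGetCurrentContext() else { return }
        context.setStrokeColor(UIColor.systemBlue.cgColor)
        context.setLineWidth(2)
        let midY = rect.midY
        context.move(to: CGPoint(x: 0, y: midY))
        // Trace a sine curve across the view, scaled by the current amplitude.
        for x in stride(from: 0, through: rect.width, by: 1) {
            let y = midY + sin(x / rect.width * .pi * 4) * amplitude * midY
            context.addLine(to: CGPoint(x: x, y: y))
        }
        context.strokePath()
    }
}
```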
Ultimately, this PoC is a fun and enriching learning exercise for our developers. The team is using the opportunity to test how mature some of these new AI services really are across different fields, and the result could become something we use ourselves to support the receptionist in an already stressful job.
So how feasible is all this in practice, and is the technology advanced enough? We’ll be sure to follow up with notable developments from this AI experiment on the blog in due course.
This kind of AI may pave the way for fresh experiences in office life, self-checkouts, virtual home care assistance and much more.
Have a business problem you need solved? Get in touch.