Experimenting with Voice Control

By Lorena Salamanca, Software Engineer

At Picnic, we are always up for designing and developing new prototypes, because we want to give our customers a unique in-store experience. To do this, I thought it would be cool to integrate our interaction flow with voice-controlled devices such as Amazon Echo or Google Home. The idea is that you're cooking, your hands are covered in grease, and you remember that you're out of milk. At that moment you can't touch your phone, so instead you just say 'Alexa, add milk to my Picnic basket'. That way you don't forget later, and your phone doesn't get covered in food.
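To give a feel for what the Alexa side of that flow could look like, here is a minimal sketch of a Lambda handler for a custom skill. The intent name (`AddToBasketIntent`), the `Item` slot, and the `add_to_basket` call to an internal basket service are hypothetical placeholders, not the actual implementation.

```python
import json
from urllib import request as urlrequest

# Hypothetical internal endpoint; the real basket service is not part of this sketch.
BASKET_API_URL = "https://example.internal.picnic/baskets/add"


def add_to_basket(user_id: str, item: str) -> None:
    """Send the spoken item to a (hypothetical) basket service."""
    payload = json.dumps({"userId": user_id, "query": item}).encode("utf-8")
    req = urlrequest.Request(
        BASKET_API_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    urlrequest.urlopen(req)


def lambda_handler(event, context):
    """Entry point for an Alexa custom skill hosted on AWS Lambda."""
    req = event["request"]

    # Alexa sends an IntentRequest when the user says something like
    # "Alexa, ask Picnic to add milk to my basket".
    if req["type"] == "IntentRequest" and req["intent"]["name"] == "AddToBasketIntent":
        item = req["intent"]["slots"]["Item"]["value"]  # e.g. "milk"
        user_id = event["session"]["user"]["userId"]
        add_to_basket(user_id, item)
        speech = f"I added {item} to your Picnic basket."
    else:
        speech = "Sorry, I didn't catch that."

    # Standard Alexa skill response envelope.
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }
```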

A voice-command feature is also of great value to our blind customers. The frontend team has already added assisted speech to the app, which helps customers shop by listening to the assortment, but it would be even easier if people could simply say the items and have them added to the basket automatically.

After developing this idea, I presented it and was given the time and resources to begin working on a prototype. It's been great to explore new technologies and frameworks, and to learn more about SSML, AI, and automatic speech recognition. The project is still in the prototype phase, but I've had great feedback from customers during user testing, so I'd love to develop it further.
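As an example of where SSML comes in: instead of returning plain text, an Alexa skill response can wrap its speech in SSML markup to control pauses and emphasis. The snippet below only illustrates the shape of such a response; the wording is made up and not Picnic's actual dialogue.

```python
# An Alexa response can use SSML instead of plain text, which lets you add
# pauses, emphasis, and pronunciation hints to the spoken confirmation.
ssml_response = {
    "version": "1.0",
    "response": {
        "outputSpeech": {
            "type": "SSML",
            "ssml": (
                "<speak>"
                "I added <emphasis level='moderate'>milk</emphasis> "
                "to your Picnic basket."
                "<break time='300ms'/> Anything else?"
                "</speak>"
            ),
        },
        "shouldEndSession": False,
    },
}
```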

There are many diverse projects to work on, and as we're growing so fast, there are always new opportunities to experiment with cool ideas and features. That's why we're constantly looking for new developers!