As we previously emphasized, many aspects of this project were uncertain. Even a month in (during the HCI tasks), we were still unsure where the bulk of the work would lie. Is it mostly server software? Does it have a UI? If so, is it an app, a desktop client, or something else? What job does it do, and who is the user?
Nonetheless, we had to produce a user-facing prototype, so we tried to guess what our project would require. Unsurprisingly, our first guess was wrong, and we later had to redesign everything around our research findings.
Prototype 1 - Browser Extension
Our first guess was that we needed an administrative browser extension. The reasoning was simple - since a major blocker was getting data out of Voiceflow, the likely approach seemed to be scraping that data with a browser extension. And once we had that extension, it seemed natural to add the rest of our functionality there too.
Prototype 2 - Desktop App
As our research progressed, we discovered a better way to extract data from Voiceflow by reverse engineering its APIs, explored here. That meant we were no longer limited to a browser extension. A desktop application gave us more control to make the tool reliable and robust. Browser extensions are designed for simple one-off tasks anyway, and didn't fit the seriousness of our administrative functions.
The End Result
As we were turning the ideas of Prototype 2 into an actual app, we received further feedback from our client and TA that helped us refine some application screens, and we designed the frontend with this feedback in mind. Following our more recent Voiceflow research, we also switched from reverse-engineering trickery to working with exported Voiceflow files, which removed some setup steps.
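To illustrate why exported files simplified setup: a Voiceflow project export is a plain JSON document, so reading it needs no authentication or API calls. The sketch below is a minimal, hypothetical example - it assumes the export contains a top-level `diagrams` object mapping IDs to diagrams with a `name` field, which may not match the real export schema exactly.

```python
import json

def diagram_names(vf_text: str) -> list[str]:
    """Return the names of all diagrams found in an exported Voiceflow file.

    Assumption: the export is JSON with a top-level "diagrams" mapping
    whose values carry a "name" field (adjust to the actual schema).
    """
    data = json.loads(vf_text)
    return [d.get("name", "<unnamed>") for d in data.get("diagrams", {}).values()]

# Hypothetical sample export, trimmed to just the fields used above.
sample = '{"diagrams": {"a1": {"name": "Home Flow"}, "b2": {"name": "Checkout"}}}'
print(diagram_names(sample))  # prints ['Home Flow', 'Checkout']
```

Because the input is just a file the user exports themselves, this replaces the login and request-replaying steps that the reverse-engineered API approach required.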
Initially, we described our UI screens and workflow in this HCI section. However, there was major content duplication between this section and our user guide, so we decided to keep only the user guide. You can explore our final UI design in the For Users section.