The Road to InterviewJS
Interactive Chat as a New Content Type

After well over six months of incredible effort, Al Jazeera has rolled out the public beta of a set of sophisticated tools that let you compose and publish an entirely new type of content: a scripted interactive chat.

The goal of the entire endeavour was to create a more engaging and inclusive storytelling experience around the interview format. We worked under the premise that we could achieve this by letting readers interact, seemingly directly, with the characters at the heart of a story, all through one-on-one message exchanges.

As the user experience designer and front-end developer on the project, I had the pleasure of helping design and build this groundbreaking product. Here’s my take on it.

Welcome InterviewJS

InterviewJS puts readers at the heart of a story, allowing them to seemingly engage directly with the characters involved via a chat-like app. Of course, chats have been around for a while, but the possibility to script a conversation and then edit, publish and share it, like you’d normally do with a blog post, is completely new.

Al Jazeera’s product is a workflow tool that lets you create and distribute scripted interactive chats the same way WordPress does with plain articles. It’s both an editor and a publishing platform. You can sign in, compose your piece and publish it as you would a blog post.

Access to the platform is entirely free, which enables anyone to start composing scripted chats. Best of all, Al Jazeera is completely opening up the source code, which means you can easily fork the repository and contribute back with pull requests, or simply set up your own instance of InterviewJS.

The scripted chat

InterviewJS scripted chat sample

A scripted chat is an interactive chat based on a real interview transcript or a script designed by the storyteller. End-readers interact with the interviewees directly by making comments and asking questions as if they were leading the conversation when, in fact, they’re following a path set out for them by the creator of the story. Readers’ choices affect only the order in which the content is served.

Such chats can carry any kind of content: the reader can request and receive not only text, but also links, images, videos, maps and other embeds. And although this makes it possible to script questionable story scenarios (where, for example, Barack Obama sends you an animated GIF), it does empower storytellers to craft their stories as they see fit.
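
To make the format a little more concrete, here’s a rough sketch of what a scripted-chat storyline could look like as data. The shape, field names and sample content below are purely illustrative; they are not the actual InterviewJS data format.

```js
// Purely illustrative storyline shape (not the actual InterviewJS format):
// a flat array of interviewee bubbles and reader choice points. The reader's
// taps decide only the order in which these items are played back.
const storyline = [
  { type: "bubble", interviewee: "amina", content: { text: "Ask me anything about the protests." } },
  {
    type: "choice",
    options: [
      { label: "What sparked them?", goTo: 2 },
      { label: "Were you there?", goTo: 3 }
    ]
  },
  { type: "bubble", interviewee: "amina", content: { text: "It all started with a single march." } },
  { type: "bubble", interviewee: "amina", content: { image: "https://example.com/march.jpg" } }
];
```

Whichever option the reader taps, every message still comes from the same scripted array; nothing is generated on the fly.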

What it’s best for

InterviewJS works best on stories with a few characters, preferably, though not exclusively, with contrasting views. In fact, one of InterviewJS’s pilot stories is a one-on-one interview with Snowden. Stories in which interviewees share multimedia, such as maps, videos and links, are likely to be more engaging for the end-reader.

And although InterviewJS is a tool created by journalists for journalists, that doesn’t mean it won’t work for anyone else. Quite the opposite. I’m curious to see the non-journalistic stories people may create with it: I can see it being useful in education, as an innovative take on FAQs, as an element of an escape-room experience or… you name it!

InterviewJS ecosystem

Our work on InterviewJS involved the creation of four different packages, each living its own life in a dedicated environment:

  1. The Story Composer — the only area protected by an authentication provider where story creators can sign in to manage, compose, edit and publish their stories.
  2. The Story Viewer — used to render published stories. It takes the dataset of a story created by the journalist and renders it as a navigable, interactive chat application. Each story has a unique URL where it can be accessed. And, as stories are not publicly listed anywhere, your piece remains secret unless you share it.
  3. The Style-guide running on Catalog — a “living” design documentation and front-end architecture reference. It lists the library of custom-made reusable React components we developed and used to assemble InterviewJS’ UIs.
  4. The public-facing website: https://interviewjs.io

The process

At one of our team meetings

InterviewJS is the fruit of the work of an incredible team of journalists, producers, designers and developers spread across 5 countries and just as many time zones. We did occasionally meet in person, though not all of us at once, but most of our collaboration was remote. Aside from the occasional team gatherings in London and Doha, we mainly used Slack and email to communicate, and appear.in for our meetings and remote usability testing sessions. It all worked wonders, with only a few occasional glitches.

Design

Our design phase went on for a little over two months. We worked off early sketches created internally at Al Jazeera, which I took as a base for subsequent design iterations. They were not “prescriptive”, as Juliana Ruhfus, the coordinator on the project, continually stressed from the early days, but I found them visionary and they greatly shaped the direction we took with the polished designs.

Early sketches by Juliana Ruhfus

From then onwards, I’d use Paper to draw rough sketches and early wireframes, and Sketch to design the user interface. While we’re at it, it’s worth mentioning that I found prototyping message exchanges incredibly frustrating and counter-productive. After a few trials I quickly stopped prototyping the editorial elements of the product and focused on the core UI.

Wondering what’s the best approach/process people follow to prototype chatbots. Because it can’t be this: pic.twitter.com/A9uMceGyob

— piotr f. (@presentday) October 26, 2017

Altogether, we spent roughly 30 days working on the final designs. The only way I know this is that each time I did revisions, I’d duplicate the previous version of the Sketch file and name it after the date of the edit.

From the very early wireframes to a nicely polished, sophisticated UI and design system—it took us over 30 iterations to get things right. Here’s our journey in a snap.#InterviewJS. Coming very soon.#storytelling #journalism #opennews #toolsfornews #GENSummit #googledni pic.twitter.com/AAH3PoPej5

— InterviewJS (@interview_js) March 26, 2018

Once I had the final designs ready to be tested, I quickly linked the views together with a prototyping tool. A couple of days later, we already had fellow designers and storytellers playing with the prototype and feeding back with invaluable insight.

The prototype

Development

We kicked off development by creating a mono repository holding all the required packages. Our living style-guide, running on Catalog, was the first to see the light of day. We needed it in order to feed all the other packages with a set of custom-made React components, built with styled-components, that we’d later use to assemble the UIs of both apps: the Composer and the Viewer.
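
To give an idea of what such a reusable component looks like, here’s a minimal styled-components sketch of a chat bubble. The component name, props and styling are my own simplification, not actual code from the InterviewJS style-guide.

```jsx
import React from "react";
import styled from "styled-components";

// Illustrative only: a simplified chat bubble in the spirit of the reusable
// components documented in the Catalog style-guide.
const Wrapper = styled.div`
  align-self: ${({ side }) => (side === "right" ? "flex-end" : "flex-start")};
  background: ${({ background }) => background || "#f2f2f2"};
  border-radius: 1em;
  margin: 0.25em 0;
  max-width: 70%;
  padding: 0.5em 1em;
`;

const Bubble = ({ side = "left", background, children }) => (
  <Wrapper side={side} background={background}>
    {children}
  </Wrapper>
);

export default Bubble;
```

Keeping components like this in a shared package is what lets both the Composer (for previews) and the Viewer render chats from the same building blocks.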

We have concluded and published our style-guide of reusable React components we’re using to build our UIs. It all will be up for grabs soon. In the meantime do have a look: https://t.co/0nvveKATv9 #opensource #storytelling #news #chatbots pic.twitter.com/n0Q9QidZ0e

— InterviewJS (@interview_js) February 19, 2018

We then moved on to building the Composer views and started feeding them with some dummy JSON data using Redux. This may have been the most problematic part for me, as I hadn’t really done much Redux beforehand. Enter Wes Bos and his thorough “Learn Redux” online intro course: after watching the thing a couple of times I was ready to bring the Composer to life.
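
For anyone else coming to Redux cold, the core of what we needed boils down to something like the sketch below. The action names and state shape are illustrative, not the actual Composer store.

```js
import { createStore } from "redux";

// Illustrative Composer-style store: a reducer holding a list of stories,
// initially fed with dummy JSON data.
const initialState = {
  stories: [{ id: "demo", title: "Dummy story", interviewees: [] }]
};

const reducer = (state = initialState, action) => {
  switch (action.type) {
    case "CREATE_STORY":
      return { ...state, stories: [...state.stories, action.payload] };
    default:
      return state;
  }
};

const store = createStore(reducer);

store.dispatch({
  type: "CREATE_STORY",
  payload: { id: "pilot", title: "My first scripted chat", interviewees: [] }
});

console.log(store.getState().stories.length); // 2 — the views re-render from this state
```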

It works! Now to redux all things. pic.twitter.com/VAIlpS3ZmN

— piotr f. (@presentday) February 9, 2018

What happened next was probably the hardest dev sprint I’ve been subject to in my entire career. Of my own free will, that is. Being the sole front-end developer on the project, I obviously wanted to make things shine; the perfectionist in me was convinced I was building a cathedral. This translated into many sleepless nights and short weekends spent obsessing over tiny details very few would notice. One of the lessons I’d like to take away from this endeavour is to adhere to the 80/20 rule more in the future.

Two months in, I handed the Composer’s front-end over to @gridinoc and switched to implementing the Viewer. That was fairly straightforward, with just a couple of exceptions:

  1. We wanted stories to be playable without readers needing to log in, yet we also needed to save readers’ progress so they could resume conversations where they left off. We therefore had to rely on localStorage, which has obvious size limitations depending on the device you’re accessing the site from. We chose to create a history array holding the reader’s path through the chat, referencing items from the source storyline array rather than duplicating them (see the sketch after this list). Although it wasn’t the easiest thing to debug later on, it probably saved us quite some hassle with inline base64 assets quickly filling up localStorage.
  2. When the reader is presented with a binary choice, which interviewee bubble should the Viewer display after they tap either of the CTAs? What happens if there are several interviewee bubbles in a row? Can the interviewee start a chat, or the user, or both? What happens when readers reach the end of a chat? Although the answers to these questions may seem obvious now, we really had to chew on them for a while and refactor our Chat.js multiple times to get it right.
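
Here’s the sketch mentioned in the first point above: a stripped-down illustration of persisting only a lightweight history of storyline indices per story, rather than the content itself. Key names and function signatures are mine, not the actual implementation.

```js
// Illustrative only: persist the reader's path as indices into the source
// storyline array, so localStorage never has to hold the (potentially heavy)
// media content itself.
const historyKey = storyId => `interviewjs-history-${storyId}`;

function saveProgress(storyId, history) {
  // history is e.g. [0, 1, 3] — storyline items already played back
  localStorage.setItem(historyKey(storyId), JSON.stringify(history));
}

function loadProgress(storyId, storyline) {
  const raw = localStorage.getItem(historyKey(storyId));
  const history = raw ? JSON.parse(raw) : [];
  // Rehydrate the chat by mapping indices back onto the storyline items.
  return history.map(index => storyline[index]);
}
```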

Testing

I had the pleasure of conducting just a couple of the many usability testing sessions AJ ran on InterviewJS. In the early days we tested remotely, via appear.in. Once the project reached alpha in late March, we set up a collaborative testing workshop in London. Unfortunately, I had to miss out on the following testing sessions in London and Doha. That said, after all these years, I still find such sessions to be the most gratifying and joyful learning experience for the designer in me.

We’ve invited a bunch of storytellers to help us test the alpha version of our Story Composer. So here they are, working very hard on their very first #InterviewJS pieces. pic.twitter.com/YLyn6E8L22

— InterviewJS (@interview_js) March 20, 2018

The Story Composer

All states of the Composer

The Composer is a fairly complex beast with quite a few views, independent user flows, a bunch of modals and other contextual items. The figure above illustrates all of the app’s states which, once you’re past the authentication screens, can be narrowed down to: the story library, the story creation wizard, the actual story editor and the story publishing wizard.

Story editor

The editor is where most of the magic happens: storytellers create new interviewees, store interview transcripts, and add user actions and interviewees’ responses. It’s also where they get to preview their chats. The central area of the editor serves as the storyline canvas, where story creators add interviewees’ speech bubbles (left panel) and end-readers’ actions (right panel).

The Story Viewer

At the core of the Story Viewer is the actual chat with an interviewee. It’s a one-on-one conversation which comes down to: a) the end-reader asking questions, with each user action becoming a speech bubble appearing from the right, and b) the interviewee replying with text or media bubbles appearing from the left.

Aside from the actual chat experience, InterviewJS gives the author the means to ease readers in and out of the chat with elegant intro and outro screens, all to guarantee a continuous storytelling experience.

Viewer intro screens

Viewer chat screens

Viewer outro screens

Each InterviewJS story has its own unique ID, used to generate its public URL and to reference the data saved in the browser’s localStorage. We use localStorage to save the end-reader’s chat history and poll choices. We need the former to allow readers to return to unfinished chats and pick up where they left off, as well as to calculate a score of how much information a single reader has consumed. We use the latter to block successive poll submits.
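
Blocking repeat poll votes follows the same localStorage pattern. A minimal sketch, with illustrative key names, might look like this:

```js
// Illustrative only: remember the reader's poll choice per story and refuse
// successive submits from the same browser.
function submitPollChoice(storyId, choice) {
  const key = `interviewjs-poll-${storyId}`;
  if (localStorage.getItem(key) !== null) {
    return false; // this browser has already voted in this story's poll
  }
  localStorage.setItem(key, choice);
  return true;
}
```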

Your readers will be able to switch between interviewees at any point in time. More importantly, they will be able to pick up where they left off afterwards!#InterviewJS. Coming very soon.#chatbots #chats #messages #opensource #storytelling #journalism #opennews #toolsfornews pic.twitter.com/XDrQtdujhQ

— InterviewJS (@interview_js) March 26, 2018

The navigation is linear and straightforward. There’s usually only one way to move forward. At any point in time readers can go back a step in the flow, all the way to the intro screens. Story elements, such as the title and byline, and information about the platform itself are also available at all times. These details populate the story’s meta tags, which social networks rely on to generate link previews.
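
For the curious, one common way to wire this up in a React app looks roughly like the sketch below; react-helmet is shown purely for illustration and InterviewJS may populate its meta tags differently.

```jsx
import React from "react";
import { Helmet } from "react-helmet";

// Illustrative only: derive social-preview meta tags from the story's
// title, byline and public URL.
const StoryMeta = ({ story }) => (
  <Helmet>
    <title>{story.title}</title>
    <meta property="og:type" content="article" />
    <meta property="og:title" content={story.title} />
    <meta property="og:description" content={story.byline} />
    <meta property="og:url" content={story.url} />
  </Helmet>
);

export default StoryMeta;
```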

The fine details

Having spent an enormous amount of time polishing up the InterviewJS designs and tweaking micro-interactions, I thought I’d share a small compilation of the “little big details” we thought about and implemented to make the product POP.

Responsive viewer

Your #InterviewJS stories will perform nicely on all devices thanks to responsive UI we’ll building. Coming very soon.#opensource #opennews #chatbot #chatbots #toolsfornews #storytelling #journalism #DniFund #googledni #hackshackers #NICAR18 pic.twitter.com/iGyNT2LHip

— InterviewJS (@interview_js) March 16, 2018

Emojis

At any point in time, your readers will be able to respond to your interviewees with emojis. #InterviewJS. Coming very soon.#storytelling #opennews #chatbots #chatbot #journalism #toolsfornews #emojis #gif #animation #googledni #DniFund #aljazeera #tools pic.twitter.com/4FnqL5YpA2

— InterviewJS (@interview_js) March 18, 2018

Dancing bubbles when editing previously added chat nodes

We’re adding the functionality to edit previously added speech bubbles. Now how’s that?#InterviewJS. Coming very soon.#opennews #storytelling #ui #design #news #journalism #chatbot #messaging #chatbots #googledni #toolsfornews #bots #interviews #reactjs pic.twitter.com/8bKi6ov8Fs

— InterviewJS (@interview_js) April 11, 2018

Intro tour to the composer

As we don’t want you to miss out on all the fancy things you can do with #InterviewJS, we’re preparing a quick welcome tour to greet you with the first time you’ll visit our story composer.

Coming very soon.#opennews #journalism #toolsfornews #storytelling #chatbots #tools pic.twitter.com/mCzUk0xhuI

— InterviewJS (@interview_js) March 27, 2018

Tablet-friendly composer

As our story Composer works actually pretty well on tablets, you’ll be able to script your stories on the go. #InterviewJS. Coming very soon.#journalism #newsroom #toolsfornews #storytelling #storytellers #storytellingsaveslives #opennews #opensource pic.twitter.com/juETWoVFtB

— InterviewJS (@interview_js) March 19, 2018

Drag & drop composer

We want to make editing of stories as smooth as possible. So today we added quick drag & drop support to our storyline composer. Whoah?#InterviewJs. Coming soon.#chatbots #storytelling #journalism #tools #toolsfornews #chatbot #ui #reactjs #googledni #news pic.twitter.com/22K7M59MTL

— InterviewJS (@interview_js) March 8, 2018

The possibility to re-order interviewees

Want an easy way to reorder interviewees? Like this? You’re welcome!#InterviewJS. Coming very soon.#news #toolsfornews #journalism #ui #design #product #opennews #opensource #reactjs #chatbots #chatbot #interviews #storytelling #gif #interface pic.twitter.com/k1P51T7eAt

— InterviewJS (@interview_js) April 14, 2018

Intelligent speech bubble colour coding

We’re integrating `get-contrast` library to calculate WCAG-appropriate text colour based on the custom background you assign to your interviewees. Pretty sweet, huh?#opennews #toolsfornews #journalism #interviewjs #littlebigdetails #ixd #ux #news #storytelling #DniFund pic.twitter.com/EDM57WI1pr

— InterviewJS (@interview_js) March 14, 2018
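
The tweet above refers to the get-contrast library. The gist of the approach is something like the sketch below; this is my simplification and the actual InterviewJS code may differ.

```js
import contrast from "get-contrast";

// Illustrative only: given an interviewee's custom background colour, pick
// whichever of black or white text yields the higher WCAG contrast ratio.
function textColourFor(background) {
  const withBlack = contrast.ratio(background, "#000000");
  const withWhite = contrast.ratio(background, "#ffffff");
  return withBlack >= withWhite ? "#000000" : "#ffffff";
}

textColourFor("#ffcc00"); // dark text on a light background
textColourFor("#1a1a2e"); // light text on a dark background
```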

Simple opinion poll

Let your readers have their say. We’re making it easy for you to create simple binary option polls you can close your #interviewjs stories with.#opensource #opennews #news #toolsfornews #journalism #storytelling #nicar18 #googledni #DniFund pic.twitter.com/woJSm6nP2X

— InterviewJS (@interview_js) March 13, 2018

The limitations

When talking about “scripted chats” we’re already discarding a subset of features that one would normally expect from a chat app. There is no sophisticated AI behind InterviewJS, just a very simple ruleset that enforces certain conversation scenarios, which is why end-readers can’t type in their own messages or respond with selfies.

During our early tests, we found that even simple scripting logic can get complex quite easily. And although we did explore the possibility of letting storytellers script more sophisticated “explore” loops and nested threads, it became evident soon enough that they struggle with more complex storylines, which is why we settled on simple branched narratives, as outlined below.

Simple branched narratives

“Simple branched narratives” is a fancy way of saying that end-readers, when presented with binary choices, can go either one way or the other. As a result, unless the script has been cleverly structured, readers may never consume 100% of the content. We’re cool with that.
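
In code terms, the branching can stay very small. The sketch below shows one way a viewer could pick the next run of bubbles after the reader taps a choice; it’s an illustration of the approach, not the actual Chat.js implementation.

```js
// Illustrative only: play back interviewee bubbles from the chosen index
// until the next choice point (or the end of the chat).
function nextRun(storyline, fromIndex) {
  const run = [];
  for (let i = fromIndex; i < storyline.length; i += 1) {
    if (storyline[i].type === "choice") break; // pause at the next decision
    run.push(storyline[i]);
  }
  return run;
}

// e.g. the reader tapped an option pointing at index 2:
// nextRun(storyline, 2) => the bubbles to display until the next choice
```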

The value

The release of InterviewJS is important for several reasons. Here are a few that matter to me the most:

  1. By launching InterviewJS, Al Jazeera enables storytellers to craft an entirely new type of story without having to worry about the technical side of things.
  2. By relying on web technologies, AJ makes scripted chats widely available across a complete range of platforms and devices. In fact, InterviewJS stories perform great on phones, tablets and desktops, no matter the operating system.
  3. By designing and developing the product in the open, we hope to create a community of web technologists willing to contribute, as well as to inspire and educate young designers and developers working in the field of online storytelling.
  4. By completely open-sourcing the code, AJ enables publishers to integrate the tool into their workflows, get involved and help evolve the product once it’s out of beta.
  5. By releasing it for free, AJ wants everyone to start creating their own scripted chats.

Next steps

These are early days for InterviewJS. As we’re currently testing the product ourselves, we’re also interested in comments, suggestions and bug reports from the community. We have already identified a bunch of issues we’re trying to fix, and enhancements that, we hope, will make their way into the next public release before the product goes out of beta. Our roadmap is open and available on GitHub, where anyone is welcome to join the conversation.

Closing remarks

InterviewJS is special to me in many ways. I enjoyed working with the talented team, I was thrilled to be involved in shaping a groundbreaking product, I found it incredibly exciting to be creating a new storytelling tool, and I loved that we could deliver such an outstanding piece of software in such a short amount of time while collaborating 100% remotely. But most of all, I found great pleasure in working openly on an open-source product. For all that, I’m very grateful to be part of the InterviewJS team.

Gotta love designing and developing in the open, being able to share progress as you go along, be receptive to suggestions coming from the community, respond to feedback—all makes it a very enjoyable process. So I’m having a blast working on @interview_js these days.

— piotr f. (@presentday) March 8, 2018

Where from here?

If you’re interested in learning more about the product, make sure to visit interviewjs.io. I warmly invite you to run through the pilot stories too — they’re great editorial pieces as well as fantastic examples of the full potential of the platform.

First stories

Share your comments and/or suggestions with the team on Twitter or via email at interviewjs@aljazeera.net. If you’d like to look through the source code, you’ll find it on GitHub. For all things server-side and infrastructure, hit up @gridinoc, while I’ll be happy to address any design- and front-end-related questions at @presentday. Thanks for reading!

This article was originally published at piotrf.pl.
