How human and artificial intelligence can augment one another
Mennatallah El-Assady was seven years old when she began learning the flute. At first, she found it hard to read all the black dots dancing about on the sheet of music. Her solution was to mark each individual note with a different colour. “That was a real breakthrough: suddenly I could read music,” she recalls. Only later would she discover that she had a form of dyslexia.
Today, aged 31, she develops smart software. Her speciality is interactive information visualisation, a skill she has brought to the ETH AI Center, where she was recently appointed a Post-Doc Fellow. Her graphics help picture the mathematical data behind AI applications. This enables people without any knowledge of programming to understand how algorithms work, and it lets them influence the decisions those algorithms take.
It’s been a while since El-Assady played the flute on a regular basis, but there’s something about the ability – or not – to read music that still intrigues her. Recently, she co-developed a web app that uses colour, as well as musical notation, to represent a melody. This can display any piece of music whatsoever, using her colour-augmented notation. In addition, the app boasts intuitive features that help users find their way around all the intricacies of a particular composition.
This can involve using the mouse to draw a sequence of high and low notes in a box. Search algorithms then hunt down the corresponding melody and highlight it in colour. “These are visual metaphors that help people interpret a song even if they can’t read music,” El-Assady explains.
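The search idea El-Assady describes can be sketched in a few lines: reduce both the melody and the user's drawn query to a contour of rising, falling and repeated notes, then scan for matching stretches. This is a minimal illustration, not the app's actual algorithm, and the function names and MIDI-pitch representation are assumptions.

```python
# Contour-based melody search: a toy sketch, assuming melodies are
# given as lists of MIDI pitch numbers (60 = middle C).

def contour(pitches):
    """Reduce a pitch sequence to Up/Down/Same steps."""
    steps = []
    for prev, curr in zip(pitches, pitches[1:]):
        steps.append("U" if curr > prev else "D" if curr < prev else "S")
    return steps

def find_contour(melody, query):
    """Return start indices where the drawn contour appears in the melody."""
    mc, qc = contour(melody), contour(query)
    n = len(qc)
    return [i for i in range(len(mc) - n + 1) if mc[i:i + n] == qc]

# Usage: the opening of "Ode to Joy" (E E F G G F E D) matched against
# a sketch whose shape is "same, up, up" - absolute pitch is ignored.
melody = [64, 64, 65, 67, 67, 65, 64, 62]
query = [60, 60, 62, 64]
print(find_contour(melody, query))  # -> [0]
```

Because only the shape of the line matters, the same drawn query finds the phrase in any key, which is what makes the "draw the melody" metaphor work for people who cannot read notation.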
An early fascination for all things AI
The music app is just one of many projects she was involved in during her doctoral studies – half of which she completed at the University of Konstanz and half at Ontario Tech. “It was really just a side project, something I developed more for fun at first,” she says. The idea came about, she adds, because of the many similarities between music and language.
El-Assady has been working in the field of computer-based language analysis for almost nine years now. As a student, she was fascinated by the idea of AI-based human support – in the form of search engines, voice assistants and chatbots. It wasn’t long, however, before she realised that human input is also vital – especially when algorithms are being trained to perform a specific task in everyday life.
By way of example, El-Assady cites a language model designed specifically to identify key terms and topics in a text database. Such an AI program could be useful for political scientists, she explains, because it would help them analyse long transcripts of political debates more quickly and easily. The trouble is that there are numerous ways of defining the category for each topic.
An interactive program requesting human input
“Linguistic models don’t look at a text in the same way a person would,” El-Assady explains. Rather, they combine text blocks that frequently occur together and assign them to a common category. “By contrast, a human reader also takes into account the wider context and brings along a broad range of general knowledge.”
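The grouping behaviour she describes can be illustrated with a deliberately simple sketch: merge two text blocks into one category when their vocabularies overlap enough. Real topic models are far more sophisticated; the similarity measure, threshold and function names below are assumptions for illustration only.

```python
# Toy co-occurrence grouping: blocks that share enough vocabulary
# land in the same category. Not El-Assady's actual model.

def word_overlap(a, b):
    """Jaccard similarity between the word sets of two text blocks."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def group_blocks(blocks, threshold=0.2):
    """Greedily merge blocks into categories by vocabulary overlap."""
    categories = []  # each category is a list of block indices
    for i, block in enumerate(blocks):
        for cat in categories:
            if any(word_overlap(block, blocks[j]) >= threshold for j in cat):
                cat.append(i)
                break
        else:
            categories.append([i])  # no close match: start a new category
    return categories

blocks = [
    "tax policy and budget reform",
    "the federal budget and tax cuts",
    "border security and immigration law",
]
print(group_blocks(blocks))  # -> [[0, 1], [2]]
```

Note what such a model misses: it groups the two budget sentences purely on shared words, with none of the wider context or general knowledge a human reader would bring, which is exactly the gap El-Assady points to.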
And that’s precisely why El-Assady is so interested in the collaboration between human and artificial intelligence: because they are different, and because each person brings along their own perspective. “Together, humans and computers perform better than they would on their own.” That said, the problem with conventional AI programs is that it’s complicated to tailor algorithms so that they accommodate individual preferences. As a rule, this is a job for the machine learning developer.
El-Assady has long been driven by the idea of overcoming this barrier and putting users in control. Working in AI-based text analysis, she has developed a series of interactive visual interfaces for refining linguistic models. These depict in real time how an algorithm assigns text blocks to new categories during the learning process.

If the AI model is unsure about which category to select, it provides the user with a range of options and asks them to help decide. The more feedback the model gets from a particular person, the more it learns about their preferences and adjusts the options accordingly.
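The ask-when-uncertain loop described above can be sketched as follows. The scoring function, the confidence margin and the way preferences are stored are all assumptions made for illustration; they stand in for whatever the real model uses.

```python
# A hedged sketch of human-in-the-loop category assignment: the model
# decides alone when confident, asks the user when it is not, and
# remembers the user's choices as a simple preference boost.

def score(block, category, preferences):
    """Count keyword hits, boosted by the user's past choices."""
    hits = sum(word in block.lower().split() for word in category["keywords"])
    return hits + preferences.get(category["name"], 0)

def assign(block, categories, preferences, margin=1):
    """Auto-assign when one category clearly wins; otherwise ask the user."""
    ranked = sorted(categories, key=lambda c: score(block, c, preferences),
                    reverse=True)
    best, second = ranked[0], ranked[1]
    if score(block, best, preferences) - score(block, second, preferences) >= margin:
        return best["name"], False  # confident: no question asked
    choice = ask_user(block, [best["name"], second["name"]])  # uncertain
    preferences[choice] = preferences.get(choice, 0) + 1      # learn preference
    return choice, True

def ask_user(block, options):
    # Stand-in for the interactive interface; here we just take the first option.
    return options[0]

categories = [
    {"name": "economy", "keywords": ["tax", "budget", "jobs"]},
    {"name": "security", "keywords": ["border", "police", "defense"]},
]
print(assign("tax and budget talks", categories, {}))  # -> ('economy', False)
```

Each answered question nudges future rankings towards that user's reading of the material, which is the adjustment of options the paragraph above describes.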
Big interest prompted by analysis of US presidential debate
But, as El-Assady explains, there’s more to this than simply tailoring AI to individuals: “This approach will also enable us to develop computer models based on input from people with different backgrounds.” And that’s especially vital in language analysis, she says, because you need as many different perspectives as possible in order to root out bias and discrimination.
Another of her visualisation tools offers a deep dive into political debates. This uses coloured animated bubbles to illustrate how much attention certain topics receive and which person is leading the conversation. El-Assady’s interactive visualisation also shows the patterns of argumentation followed by the speakers.
To showcase the powers of interactive language analysis, El-Assady focused on the TV debates between Donald Trump and Hillary Clinton in the autumn run-up to the US presidential election of 2016. Her bubble visualisation highlights how Trump was able to dominate the agenda, forcing Clinton and the moderator to follow his lead.
Looking forward to new, interdisciplinary projects
In the wake of its online publication, El-Assady’s analysis generated a big response. Since then, a number of US universities have invited her to give guest lectures. At the same time, the project has seeded new collaborations and novel avenues of research. Back in Zurich, where her AI-based political analysis also made a splash, scientists from the Swiss Data Science Center have now approached her about developing a similar program for Swiss-German parliamentary debates.
Meanwhile, El-Assady is turning her thoughts towards new areas of application for her visual analytics tools. She sees big scope for language analysis in the battle against fake news. For example, they could help evaluate the information content and lines of argumentation found in the echo chambers that flourish on social media.
The opportunity to follow up on these and many other ideas was what led her to apply for the Fellowship at the ETH AI Center. “The scientists here are working on AI applications in so many different areas,” she says. That’s why she can’t wait to get started on some new, interdisciplinary projects with other Fellows. This spring semester, which has just got underway, she will also be sharing her knowledge in a new elective lecture for computer science students.