Making AI work for everyone
Less than a year after the latest version of ChatGPT was unleashed on the world, the explosive growth in the use of generative AI tools and foundation machine learning models shows no sign of letting up.
A McKinsey Global Survey published this month found that one-third of respondents’ organizations are already using generative AI regularly in at least one business function. More than one-quarter of respondents from companies using AI say generative AI is already on their boards’ agendas, and 40 percent said their organizations will increase overall investment in AI because of recent advances in this technology.
Despite its rapid advance, it is unclear whether this game-changing technology will transform our societies for the better, pose serious risks, or both. It can be argued that the transformation of the past couple of decades into today’s digital world has made a few entrepreneurs and investors very wealthy and influential without really benefiting the majority.
Today at EPFL, a special Applied Machine Learning Days event on generative AI, large language models and other foundation models has been hearing from some of the world’s leading scientists and industry shapers in the field, with speakers discussing the latest research innovations as well as risks and solutions at both the political and societal levels.
EPFL Assistant Professor Antoine Bosselut, Head of the Natural Language Processing Lab in the School of Computer and Communication Sciences, is one of the event’s key organizers and says it’s critical there are more checks and balances on how Generative AI develops than there have been on earlier technologies.
"The immense power of Generative AI and other foundation models means that we need to reimagine how we will engage with the world around us in many ways. This tech has enormous potential to benefit society in areas ranging from novel drug development to interactive tutoring, but there are also risks that need to be considered from the outset. It is far from perfect, and we need to ensure that the roll-out of Generative AI is transparent and democratic," he explained.
One looming risk is the potential of Generative AI to put millions of people out of work. One of this morning’s speakers, Daniel Rock, an Assistant Professor at the University of Pennsylvania, researches the economic effects of digital technologies, with a particular emphasis on the economics of artificial intelligence.
He told the conference that while it’s difficult to know exactly how Large Language Models like ChatGPT are going to change the future of work, there are already ways of measuring their potential impact. “Our research found that there are two types of roles that will be most exposed to change. The first may seem obvious – quantitative knowledge workers like mathematicians and software developers, and other low-level clerical work. But interestingly, something that’s new about generative AI is that it’s the most highly paid work that is more exposed. So, expertise as we know it is likely to change. The jobs with the highest barriers to entry, the most training-intensive and highest-valued work, like doctors, lawyers and pharmacists, are much more exposed, and I think that’s what’s generating a lot of anxiety, because the places where we’ve made really big human capital investments are the jobs where we are going to start seeing a lot of risk.”
Another of this morning’s speakers was Dragoș Tudorache, a Member of the European Parliament and Vice-President of the Renew Europe Group.
The use of artificial intelligence in the EU will be regulated by the AI Act, the world’s first comprehensive AI law, to ensure that its development and use create benefits in areas including healthcare, transport, manufacturing and more sustainable energy. The new rules establish obligations for providers and users depending on the level of risk posed by an artificial intelligence system, categorized as unacceptable, high or limited.
“In the summer of 2022 we had already decided that we would have something in the legislation on foundation models, as it’s clear that they do undermine the way our societies work. First, these models are challenging the monopoly we, as humans, have had on creation and that’s a fundamental psychological shift. The second is truth and trust, as these models change the way we acquire knowledge. There were always liars and lies but in terms of scale and intensity these models are like nothing before. Add to this the geopolitics and leaders are wondering if there aren’t parallels to atomic energy and whether we need a global regulatory framework based on common understandings around rules and risk,” Tudorache told the conference.
Despite its potential risks, Generative AI is already revolutionizing some sectors, such as healthcare, in positive ways. One example is clinical predictive models that can help physicians and administrators make decisions.
Presenting this afternoon, Kyunghyun Cho, associate professor of computer science and data science at New York University, has just led a study that leverages recent advances in natural language processing to train a large language model for medical language (NYUTron) and subsequently fine-tune it across a wide range of clinical and operational predictive tasks.
“Existing structured-data-based clinical predictive models have limited use in everyday practice due to the complexity of data processing, model development and deployment. Our study used unstructured clinical notes from the electronic health record to train clinical language models, which outperformed traditional models and showed the potential of this approach,” Cho said.
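The core idea, predicting an outcome directly from the free text of a note rather than from hand-engineered structured fields, can be sketched in miniature. The toy classifier below is not NYUTron (which fine-tunes a large pretrained transformer on millions of real notes); it is a naive-Bayes-style bag-of-words model over invented notes and labels, purely to illustrate learning from unstructured clinical text:

```python
from collections import Counter
import math

# Tiny invented corpus: (note text, 1 = readmitted within 30 days).
notes = [
    ("patient stable discharged home no complications", 0),
    ("wound healing well follow up in two weeks", 0),
    ("persistent fever elevated white count possible infection", 1),
    ("shortness of breath worsening heart failure symptoms", 1),
    ("blood pressure controlled medication tolerated well", 0),
    ("recurrent chest pain abnormal ecg troponin elevated", 1),
]

def tokenize(text):
    return text.lower().split()

# Per-class word counts serve as a crude bag-of-words "model" of the notes.
class_counts = {0: Counter(), 1: Counter()}
class_totals = {0: 0, 1: 0}
for text, label in notes:
    tokens = tokenize(text)
    class_counts[label].update(tokens)
    class_totals[label] += len(tokens)

vocab_size = len(set(class_counts[0]) | set(class_counts[1]))

def predict(text, alpha=1.0):
    """Naive-Bayes-style scoring: pick the outcome under which the
    note's words are more likely, with Laplace smoothing."""
    scores = {}
    for label in (0, 1):
        score = 0.0
        for token in tokenize(text):
            p = (class_counts[label][token] + alpha) / (
                class_totals[label] + alpha * vocab_size)
            score += math.log(p)
        scores[label] = score
    return max(scores, key=scores.get)

print(predict("worsening infection and fever noted"))      # 1 (high risk)
print(predict("discharged home stable no complications"))  # 0
```

A real pipeline would replace the word counts with a pretrained language model fine-tuned on each downstream prediction task, but the input, raw clinical text, stays the same.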
“Given how ubiquitous artificial intelligence will be in our lives in the coming decades we need to ensure that Generative AI and foundation models are developed in ways that minimize potential risks,” said EPFL professor Maria Brbic, another conference organizer and head of the Machine Learning for Biomedical Discovery Laboratory.
“We are only at the beginning of this journey. The full capabilities, but also the failure modes and biases, of these technologies are unknown to us, so we need to proactively ensure that generative artificial intelligence technologies do not cause harm to society. To do this, everyone needs to be involved in the conversation. Events such as this are a start,” she concluded.
EPFL continues to work on this theme, with more than a dozen scientists contributing to research to ensure that the impact of Generative AI on society is positive. Applied Machine Learning Days (AMLD), founded by Professor Marcel Salathé, head of the Digital Epidemiology Lab, also focuses on these issues. Mark your calendars for the next full edition of Applied Machine Learning Days 2024, March 23-26 at the SwissTech Convention Center, EPFL.