There has been a dramatic increase in the use of algorithms and artificial intelligence to address a wide range of problems and challenges. While their adoption, especially with the rise of artificial intelligence, is reshaping nearly every industry, discipline and research area, such innovations often expose unforeseen consequences involving new norms, new expectations, and new rules and laws.
To facilitate deeper understanding, the Social and Ethical Responsibilities of Computing (SERC), a cross-cutting initiative of the MIT Schwarzman College of Computing, recently brought together social scientists and humanists with computer scientists, engineers and other computer science educators for an exploration of the ways in which the broad applicability of algorithms and artificial intelligence has presented both opportunities and challenges in many aspects of society.
“The very nature of our reality is changing. AI has the ability to do things that until recently were solely the realm of human intelligence, things that can challenge our understanding of what it means to be human,” noted Daniel Huttenlocher, dean of the MIT Schwarzman College of Computing, in his keynote address at the inaugural SERC Symposium. “This poses philosophical, conceptual, and practical questions on a scale not experienced since the beginning of the Enlightenment. Faced with such profound change, we need new conceptual maps to navigate the change.”
The symposium offered a look at SERC’s vision and activities in both research and education. “We believe our responsibility with SERC is to educate and empower our students and enable our faculty to contribute to the responsible development and implementation of technology,” said Georgia Perakis, the William F. Pounds Professor of Management at the MIT Sloan School of Management, co-associate dean of SERC, and the lead organizer of the symposium. “We are drawing on the many strengths and diversity of disciplines across MIT and beyond and bringing them together to gain more perspectives.”
Through a series of panels and sessions, the symposium delved into a variety of topics related to the social and ethical dimensions of computing. In addition, 37 undergraduate and graduate students from a range of majors, including urban studies and planning, political science, mathematics, biology, electrical and computer engineering, and brain and cognitive sciences, participated in a poster session to showcase their research in this space, covering topics such as quantum ethics, AI collusion in storage markets, cyber waste, and empowering users on social platforms for better content credibility.
A showcase for a diversity of work
In three sessions devoted to the topics of beneficial and equitable computing; equitable and personalized health; and algorithms and humans, the SERC Symposium showcased the work of 12 faculty members in these domains.
One such project by a multidisciplinary team of archaeologists, architects, digital artists and computational social scientists aimed to preserve endangered heritage sites in Afghanistan with digital twins. The project team produced highly detailed searchable 3D models of heritage sites, as well as extended reality and virtual reality experiences, as learning resources for audiences who cannot access these sites.
In a project for the United Network for Organ Sharing, researchers showed how they used applied analytics to optimize various aspects of an organ allocation system in the United States that is currently undergoing a major overhaul to make it more efficient, equitable and inclusive for different racial, age and gender groups, among others.
Another talk discussed an area that has not yet received adequate public attention: the broader equity implications that biased sensor data has for the next generation of models in computing and healthcare.
A talk on bias in algorithms considered both human bias and algorithmic bias, and the potential to improve results by accounting for differences in the nature of the two types of bias.
Other research highlighted included the interaction between online platforms and human psychology; a study on whether decision-makers make systematic forecast errors given the available information; and an illustration of how advanced analytics and computation can be leveraged to inform supply chain management, operations, and regulatory work in the food and pharmaceutical industries.
Improving the algorithms of tomorrow
“Algorithms are, without a doubt, influencing every aspect of our lives,” said Asu Ozdaglar, deputy dean of academics for the MIT Schwarzman College of Computing and head of the Department of Electrical Engineering and Computer Science, kicking off a panel she moderated on the implications of data and algorithms.
“Whether it’s social media, online commerce, automated business, and now a much wider range of creative interactions with the advent of generative AI tools and large language models, there’s no doubt that more is to come,” Ozdaglar said. “While the promise is obvious to all of us, there’s also a lot to worry about. This truly is the time for imaginative thinking and careful deliberation to improve the algorithms of tomorrow.”
Moving on to the panel, Ozdaglar asked experts in computer science, social sciences, and data science for insights into how to figure out what’s next and shape it to enrich the outcomes for the greater part of humanity.
Sarah Williams, an associate professor of technology and urban planning at MIT, stressed the critical importance of understanding how datasets are assembled, since data is the foundation of all models. She also emphasized the need for research on the implications of biases, which often enter algorithms through their creators and through the data used in their development. “It’s up to us to think of our own ethical solutions to these problems,” she said. “Just as it’s important to move forward with technology, we need to start looking at these questions: What biases are there in algorithms? What biases are there in data or the data journey?”
Shifting the focus to generative models and whether the development and use of these technologies should be regulated, the speakers, who also included MIT’s Srini Devadas, professor of electrical engineering and computer science; John Horton, professor of information technology; and Simon Johnson, professor of entrepreneurship, all agreed that regulating open-source algorithms, which are publicly accessible, would be difficult, as regulators are still catching up and struggling to establish guardrails for the now 20-year-old technology.
Returning to the question of how to effectively regulate the use of these technologies, Johnson proposed a progressive corporate tax system as a possible solution. He recommended basing companies’ tax payments on their profits, especially for large corporations whose huge earnings go largely untaxed due to offshore banking. In doing so, Johnson said, this approach could serve as a regulatory mechanism, imposing disincentives that discourage companies from trying to own the entire world.
The role of ethics in computer science education
As computing continues to advance with no signs of slowing down, it is imperative to educate students to be intentional about the social impact of the technologies they will develop and implement in the world. But can these things really be taught? If so, how?
Caspar Hare, a philosophy professor at MIT and co-associate dean of SERC, posed this looming question to faculty in a panel he moderated on the role of ethics in computer science education. Each an expert in teaching ethics and thinking about the social implications of computing, the speakers shared their perspectives and approaches.
A strong advocate of the importance of learning from history, Eden Medina, an associate professor of science, technology, and society at MIT, said that “often the way we frame computer science is that everything is new. One of the things I do in my teaching is look at how people have dealt with these issues in the past and try to draw on that as a way to think about possible ways forward.” Medina regularly uses case studies in her classes, and she referred to a paper written by Yale University science historian Joanna Radin on the Pima Indian Diabetes Dataset, which raised ethical questions about the history of that particular dataset that many do not consider, as an example of how decisions around technology and data can arise from very specific contexts.
Milo Phillips-Brown, an associate professor of philosophy at the University of Oxford, spoke about the Ethical Computing Protocol, which he co-created while a SERC postdoc at MIT. The protocol, a four-step approach to building technology responsibly, is designed to train computer science students to think better and more accurately about the social implications of technology by breaking the process down into more manageable steps. “The basic approach we take draws heavily on the fields of value-sensitive design, responsible research and innovation, and participatory design as guiding insights, and is therefore also fundamentally interdisciplinary,” he said.
Fields such as biomedicine and law have an ethical ecosystem that distributes the function of ethical reasoning across these areas. Oversight and regulation are provided to guide frontline stakeholders and decision-makers when problems arise, as are training programs and access to interdisciplinary expertise that they can draw upon. “In this space, we have none of that,” said John Basl, an associate professor of philosophy at Northeastern University, noting that the current generation of computer scientists and other decision-makers is effectively being asked to do the ethical reasoning on its own. Basl commented further that teaching basic ethical reasoning skills throughout the curriculum, not just in philosophy classes, is essential, and that the goal should not be for every computer scientist to become a professional ethicist, but to know enough of the landscape to ask the right questions and seek out the relevant expertise and resources that exist.
After the final session, interdisciplinary groups of professors, students, and researchers engaged in animated discussions about the topics covered throughout the day during a closing reception that marked the end of the symposium.