Artificial intelligence, or AI, is all over the news these days. For those who aren’t working in this sphere, it can feel mysterious, even like something out of a science fiction film. For researchers at UMBC, however, AI is just another tool in a growing collection of instruments that can make life better for their fellow human beings. AI-driven thinking opens up possibilities for improvement and problem solving in health care, the environment, civil engineering, and beyond. It can make previously unthinkable amounts of data easy to analyze. But work of this magnitude also calls for an ethical approach, both in how AI is taught and in how it is applied.
We sat down with Keith J. Bowman, dean of UMBC’s College of Engineering and Information Technology (COEIT), and Vandana Janeja, professor and chair of Information Systems, to talk about why taking a humanistic route to researching and teaching AI is such an important way of making a positive difference in our community and the world, and why UMBC is the perfect place for students with an interest in this emerging field.
UMBC Magazine: With UMBC’s “public research for public good” approach in mind, what are a few examples of creative ways our researchers are breaking boundaries with AI?
Vandana Janeja: If we can curate good data, there’s a lot of good we can do with AI. One example is an NSF-funded project at UMBC by Dr. Nirmalya Roy in the Information Systems (IS) department, which combines sensor data with social media. Sensors read water levels, and those readings are shared with the community through tweets; tweets about flood severity can in turn be quantified and confirmed against the water level sensors. It’s a very good example of how you can actually make an impact in the community right where you are. It actually impacted our neighborhood.
And then there are other things, like studying deepfakes. That’s an NSF-funded project happening in my lab along with Dr. Christine Mallinson in the Center for Social Science Scholarship (CS3). We are trying to understand how to better educate our students about deepfakes. On one side, these audio files are created as fakes through AI; on the other, we are working with our colleagues to improve the detection and discernment of them, either by training students or by making algorithms aware of the human side of things: introducing humanistic aspects to AI.
There are ways we can train our algorithms to be really, really precise at tasks that may be difficult for humans, but then we also have to be careful to balance that precision with well-curated training data.
Keith Bowman: Often people think about AI only in relation to computing topics. But Tyler Josephson in chemical engineering at UMBC is working to develop and use machine learning (where a machine learns to imitate human intelligence) and artificial intelligence tools to assess complex properties of materials. These computational tools can be applied to chemical reactions, phase changes, statistical variability, or even human factors involved in chemical processing and materials manufacturing. They can also be used to foster improvements in theoretical understanding. The artificial intelligence can work through all of the cases and examples that may be there.
Almost every engineering field has people working on how to apply AI and AI tools to problems that have been challenges for many years, or trying to find faster routes to a solution in areas as varied as the X-ray or ultrasound imaging used in human health care or the health of aging bridges and buildings.
There’s health monitoring of infrastructure that is useful for public safety, for instance. Radiography and ultrasound can be used to look for flaws or potential failures in bridges, buildings, or even aircraft components. But then human beings end up interpreting the images. Increasingly, we are able to collect massive amounts of information, and artificial intelligence tools can help quantify that information for predicting outcomes. For instances with a high degree of complexity, having computational tools assist in the analysis can enhance the quality of the result. These tools can also serve as a real backup in places where humans might drop the ball in some cases.
UMBC Magazine: It’s amazing to think about just how broad the use of AI could be.
Vandana Janeja: I want to emphasize one of the things that Keith said, because I think it’s really the crux of where AI can be helpful: the massive amounts of data we have, and the scale and complexity we are talking about, literally cannot be computed at the current capacity of our minds. And you’re talking about decisions that a human has to make. But if you augment that with AI, it does so much better.
And another NSF-funded project I should mention is iHarp, which focuses on climate data in polar regions. There are so many different systems and subsystems that just putting all of that data together and making these complex connections is something even thousands of scientists may not be able to do. But if you start making those connections across even some of those subsystems, it advances the science by leaps, by tens of years. So that’s the kind of impact that AI can have. Now the trick is, can we make those connections well? Can we train on well-curated data? Because all data is not good data.
UMBC Magazine: That’s a great segue into the human piece of all this. Can you talk to us about why UMBC takes such a humanistic approach to AI?
Vandana Janeja: You can look at it from multiple perspectives. We are also shaping how our students think. It’s very important to see who’s at the center of the AI application. You can ask: Who are we impacting? Who are we working with? And who’s helping us create these connections? And then finally, are we able to produce algorithms that don’t harm individuals? So there’s the positive impact, making sure it stays within ethical bounds, and also being mindful of who we are working with. In all of the project examples we mentioned, there is a community impact. If you go to the IS department research website, you’ll see that almost every project has a community partner. And then, most importantly, it’s hard, but you really have to work across different disciplines.
UMBC Magazine: So for a student who’s interested in this kind of work, what advice would you give them?
Keith Bowman: To me, the thing we need more of is students from other disciplines doing coursework in some of these related topics. That includes completing certificates and minors. A lot of COEIT students will do a second major or a minor elsewhere on campus, and I think that’s fine. But I also think we need more students from other areas who are willing and able to do the reverse and establish some technical background. We need a broader range of people, including those from the arts, humanities, social sciences, and life sciences, who also have enough technical background to ask better questions about AI.
Vandana Janeja: People will come at data from different angles. I had a student in one of my classes talk about social justice, about data and the use of data, and how it empowers or disempowers people. And I encourage this.
At the end of the day, I say to my students: keep asking questions. Can you connect what you are doing to the big picture? You want to use your own inner compass as a guide: how are you contributing? Not everything has to be this big life-shattering thing, but at the same time, are you chipping away at it? Are you contributing to society? The education and the ecosystem we have at UMBC really empower students to do that.