Panel Reminds Us That Artificial Intelligence Can Only Guess, Not Reason for Itself

Written by: Evan Koblentz

Expert panelists took a measured tone on the trends, challenges and ethics of artificial intelligence at a campus forum organized by NJIT’s Institute for Data Science this month.

The panel moderator was institute director David Bader, who is also a distinguished professor in NJIT’s Ying Wu College of Computing and who shared his own thoughts on AI in a separate Q&A recently. The panel members were Kevin Coulter, field CTO for AI, Dell Technologies; Grace Wang, distinguished professor and director of NJIT’s Center for Artificial Intelligence Research; and Mengjia Xu, assistant professor of data science. DataBank Ltd., a data center firm that hosts NJIT’s Wulver high-performance computing cluster, was the event sponsor.

“If you have not heard about AI, you must be in a cocoon. AI is really creating new and extensive opportunities for innovation and knowledge creation,” NJIT President Teik C. Lim said while introducing the panel. “AI is arguably going to have the greatest effect on the creation and the delivery of knowledge goods and services since the invention of the internet and the smartphone.”

“We have been working on AI before AI became a buzzword.”

“We have been working on AI before AI became a buzzword,” Lim noted. An archive search found that AI was part of university research before New Jersey Institute of Technology existed, with results as far back as 1974 by Newark College of Engineering doctoral student John Comerford.

Following are highlights of the panel comments, lightly edited for clarity, in the order they were presented.

Bader: “What do you see as the most pressing challenges and opportunities in the field of AI today?”

Xu: “I think we have a lot of different scientific questions to which we can apply AI techniques like large language models and foundation models. I think we need collaboration across different domains. And I think we need an understanding of the underlying principles of deep learning, including the very powerful transformer model [and] GPT models and so on, from the mathematical side, while also working with computer scientists [to] bridge the gap between the different domains.”
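The “underlying principles” Xu refers to come down, at the transformer’s core, to scaled dot-product attention. As a minimal, illustrative sketch (not from the panel, just standard textbook math in NumPy):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Core transformer operation: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                               # weighted average of values

# Toy example: 3 tokens with 4-dimensional embeddings, attending to themselves.
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(x, x, x)          # self-attention
print(out.shape)  # (3, 4)
```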

Wang: “The first thing I did was open ChatGPT and ask what ChatGPT believes are the challenges and also the opportunities. And I got a very long essay on ethical issues. There are a lot of different topics related to challenges and opportunities, but I wasn’t satisfied, to be frank. I think the biggest challenge, and also the opportunity, is how to monetize the investment. … So for a very big company, it’s very hard and you have to make a big investment. For a startup, the entrance barrier is actually very low, because if you want to play, you just call a large language model API. So basically everyone can have a small AI startup.”
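Wang’s point about the low entry barrier is concrete: a “small AI startup” can sit on top of a few lines of API calls. A minimal sketch, assuming the OpenAI Python SDK; the model name and prompts are illustrative, not anything discussed on the panel:

```python
# Minimal sketch of "just call the large language model API"
# (assumes the OpenAI Python SDK; model name is illustrative).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system", "content": "You summarize customer feedback."},
        {"role": "user", "content": "Summarize: 'Great product, slow shipping.'"},
    ],
)
print(response.choices[0].message.content)
```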

Bader: “I should come clean, since you mentioned you looked at ChatGPT about the questions I was asking. So I wrote a bunch of questions for the panel, and they’re okay, but I went to ChatGPT with the questions, with your biographies, with some of the goals I had for the panel, and my gosh, it came up with some much better questions. And I thought, this is an AI panel anyway [laughs]. So for full disclosure, thank you to ChatGPT and to Anthropic’s Claude, because I wanted to see if they would all give me different questions. And in fact, they agreed with each other as to the questions I should ask. So they really did help out. And I figured maybe we don’t even need me as moderator. There could be some AI sitting here! Maybe next year that’s what you’ll see at this lecture, but today you have us.”

“Every day something new comes out in AI, some new tool, a new startup. So as soon as you think you know something, all of a sudden you don’t have it.”

Coulter: “It’s the vastness of everything that’s out there. There’s so much to take in with AI, whether it’s the technology stack, the software stack, or who you’re talking to about AI. Talking to a data scientist is very different from talking to a CFO, and you have to be able to integrate those conversations. We talk within our company a lot about value versus feasibility, and how to align those two with our customers, because they vary from ‘I don’t know where to begin’ to organizations like NJIT, which has a whole board associated with AI. So it’s filling that gap, I think. And the other thing I think is a big challenge is just the amount of change. Every day something new comes out in AI, some new tool, a new startup. So as soon as you think you know something, all of a sudden you don’t have it.”

Bader: “There are also a lot of concerns that get raised with AI in terms of privacy, in terms of ethics, in terms of its usage. So I really want to understand your thoughts on how we ensure that AI systems are developed and deployed ethically. And are there specific frameworks or guidelines that you would follow?”

Coulter: “It’s interesting just seeing the dynamics going on right now. … I’m a musician who plays music, so I follow the music industry quite a bit, and there’s somebody who created literally hundreds of ideas and made ten million dollars on AI music. It wasn’t even his own; he just created a program to do this. And I think the way to tackle this, really, is in an ecosystem. We talk about ecosystems all the time: talking to each other, working with each other, having diversity in every aspect of what’s happening out there. That way you get different perspectives. … In innovation, I think ethics will start to become just a natural part of developing these AI systems.”

“Sometimes it looks ethical but maybe what’s behind it is amplifying the bias.”

Wang: “Well, I always believe that AI at its core is just a tool, so there’s no difference between AI and, say, lock-picking tools. Lock-picking tools can open your door if you lock yourself out, and they can also open someone else’s door; that’s a crime, right? So it depends on how AI is used. From that perspective, there’s not much special about AI ethics compared to, say, computer security ethics, or the ethics of how to use a gun, for example. But what is different is that AI is so complex that how it works is beyond the knowledge of many of us. Sometimes it looks ethical but maybe what’s behind it is amplifying the bias, by using the AI tools without our knowledge. So whenever we talk about AI ethics, I think the most important thing is education: knowing what AI is about, how it works, and what AI can and cannot do. For now we have the fear that AI is so powerful it can do anything, but actually, many of the things that people believe AI can do now could be done in the past by just about any software system. So education is very, very important to help us demystify AI, so that we can talk about AI ethics properly. I also want to emphasize transparency: if AI is used for decision making, understanding how the decision is made becomes very, very important. And another important topic related to AI ethics is auditing. If we don’t know what’s inside, at least we have some assessment tools to know whether there’s a risk or not in certain circumstances, whether it can generate a harmful result or not. It’s very much like the stress testing applied to the financial system after 2008.”
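Wang’s auditing analogy can be made concrete even when the model is a black box: measure outcome disparities across groups rather than inspecting the internals. A minimal, illustrative sketch (the loan-approval data here is hypothetical, not from the panel) computing a demographic parity gap:

```python
# Black-box audit sketch: demographic parity difference.
# The predictions and group labels below are toy placeholders.
import numpy as np

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-outcome rates across groups."""
    preds = np.asarray(predictions, dtype=float)
    groups = np.asarray(groups)
    rates = [preds[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

# Toy audit: loan approvals (1 = approved) for two demographic groups.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_gap(preds, groups)
print(f"Demographic parity gap: {gap:.2f}")  # 0.75 vs 0.25 -> 0.50
```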

Bader: “AI has really come into the mainstream. It’s probably the fastest we’ve seen a technology go from the lab to interacting with the general public. We’ve seen AI used for everything from lawyers writing cases to evaluating patients in healthcare by analyzing imagery; we’ve seen it in many, many sectors. And what I’m wondering about for AI research, something that NJIT does and is a leader in, is how should research interact with other fields such as law, healthcare and ethics, and how do we approach more of that interdisciplinary collaboration?”

Coulter: “Every time I talk to a customer about some of the concerns they bring up, I always have to highlight the fact that human discernment does not go away with AI. We still have to interact. Human annotation is one of the most important aspects of retraining an AI model. That interaction is important, as is using scientific methodologies. I think sometimes when we talk day-to-day to scientists, we forget they’re scientists. Let’s start with theory. Improve the theory. Have the peers all talk to each other. Can it be proven over and over again? All those methods are critical to how we’re using and leveraging the AI space. We really have to make sure we’re looking at this from a holistic perspective and using these research methods as part of our natural process of developing these tools. I think that’s the way we become more and more successful.”
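In machine-learning terms, Coulter’s “human annotation” point is a human-in-the-loop labeling cycle: route the model’s least confident predictions to people, then retrain on the corrected labels. A minimal, illustrative sketch with scikit-learn and toy data (not anything Dell-specific):

```python
# Human-in-the-loop sketch: send low-confidence predictions to annotators,
# fold the corrected labels back in, and retrain. (scikit-learn, toy data.)
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_labeled = rng.normal(size=(50, 4))
y_labeled = (X_labeled[:, 0] > 0).astype(int)   # toy ground-truth rule
X_pool = rng.normal(size=(200, 4))              # unlabeled production data

model = LogisticRegression().fit(X_labeled, y_labeled)

# Pick the 10 pool examples the model is least sure about.
proba = model.predict_proba(X_pool)[:, 1]
uncertain = np.argsort(np.abs(proba - 0.5))[:10]

# A human annotator labels them (simulated here by the toy rule).
y_human = (X_pool[uncertain, 0] > 0).astype(int)

# Retrain with the new annotations folded in.
X_labeled = np.vstack([X_labeled, X_pool[uncertain]])
y_labeled = np.concatenate([y_labeled, y_human])
model = LogisticRegression().fit(X_labeled, y_labeled)
```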

Audience member: “There is industry debate about whether AI can already, or soon will, have the ability to reason. What do you think?”

Wang: “This question has been asked many times, and there is ongoing heated debate about it. Some believe that the current path of large language models is hopeless, but others believe this is the right way. My own opinion is that we’re not there.”

Coulter: “It’s a tool. It’s a tool as far as learning from each other. It’s interesting, because there’s a tendency I find, a lot of times when customers come in, to be concerned about overlaying human psychology onto AI. I don’t think it’s even close to that. In fact, we had one banking customer who said, AI doesn’t really know anything about investment banking. It just knows the language about it, and can sort of figure out what to look for that would be investment banking-related, right? So I think that’s a long way from anything that’s sentient or knows what’s going on.”

“Be open to any new tools and the new skills, and be prepared that whatever skill you have may no longer be needed shortly. So just be prepared. … Being a lifetime learner is very, very important.”

Bader: “What advice would you give to aspiring AI professionals?”

Wang: “Be open to any new tools and new skills, and be prepared that whatever skill you have may no longer be needed shortly. So just be prepared. … Being a lifetime learner is very, very important. Because of the availability of AI tools, whenever you need help, you can have help. In the past, if you wrote an essay and wanted someone to revise it, you had to submit it to your professor or turn to friends, and it took a long time to get feedback. But now, because of the availability of the tools, you can get feedback immediately and improve yourself. [Also] in the past, if you did not keep up with the technology for five years, then you were probably really out; others accumulated skills that you didn’t have. But now, I think the nice thing is, if you are out for five years, because of the tools you can quickly catch up. So this is a very dynamic time.”

Coulter: “I’m going to put my dad hat on for a minute. Keep in mind, the answer you’re getting from AI might not be the right answer. Verify, validate! It’s this whole trust-but-verify conversation that I have with a lot of customers. I think the other important thing, too, is the social aspect of AI. You’re learning a lot, you’re going to be learning a lot, and you’re going to continue to learn at an accelerated pace, because the information is going to be readily available. Work with your peers. Talk about what you’re finding, talk about what’s going on. Talk about what’s right, what’s wrong and the ways people might be using AI. … Find out where the information is coming from and who wrote it. That’s just as important as finding the answer itself. So that’d be my recommendation. Just stay curious and do your due diligence.”

Wang: “Personally, I feel that the government should pay more attention to regulation. But on the other hand, at least this Supreme Court is super open to new technology. So here I want to emphasize again the importance of education, transparency and auditing. They require the judges and the attorneys to be aware of how AI works, so when they are presented with certain evidence that is related to AI, they know how to treat that evidence and how to decide whether that particular evidence can be presented properly or not.”

Bader: “How do you bridge the gap between the technical complexities of AI and the strategic needs of businesses?”

Coulter: “What we’re trying to do as a company is operationalize a lot of what’s happening. From a technology perspective, I think it’s simpler to deploy a use test, and then from a business perspective I can talk about value, I can talk about use cases, and then it’s more of, here’s the value. Let’s take a look at the underpinnings of what’s going on in your organization: do you have the right technology today? Do you have the data? Is it clean data? I can focus on business processes, and I don’t have to worry about the underlying technology stack, because we’re always deploying those and innovating them and working on them every single day. Nvidia and AMD and Qualcomm are all our partners out there who are always innovating, so it’s really just making the technology conversation easier in a way.”

https://news.njit.edu/panel-reminds-us-artificial-intelligence-can-only-guess-not-reason-itself

David A. Bader
Distinguished Professor and Director of the Institute for Data Science

David A. Bader is a Distinguished Professor in the Department of Computer Science at New Jersey Institute of Technology.