Kennedy: Alright, Professor Marcus, if you could be specific. This is your shot, man. (Laughter.) Talk in plain English and tell me what, if any, rules we ought to implement. And please don’t just use concepts. I’m looking for specificity.
Marcus: Number one, a safety review like we used with the FDA prior to widespread deployment. If you’re gonna introduce something to a hundred million people, somebody has to have their eyeballs on it.
Kennedy: There you go. Okay. That’s a good one. I’m not sure I agree with it, but that’s a good one. What else?
Marcus: You didn’t ask for three that you would agree with. Number two, a nimble monitoring agency to follow what’s going on, not just pre-review but also post-review as things are out there in the world, with authority to call things back, which we’ve discussed today. And number three would be funding geared towards things like AI constitution, AI that can reason about what it’s doing. I would not leave things entirely to current technology, which I think is poor at behaving in an ethical fashion and behaving in an honest fashion. And so I would have funding to try to basically focus on AI safety research. That term has a lot of complications in my field. There’s both safety, let’s say short term and long term, and I think we need to look at both rather than just funding models to be bigger, which is the popular thing to do. We need to fund models to be more trustworthy.
On 5/16/2023, the U.S. Senate Judiciary Committee Subcommittee on Privacy, Technology and the Law hosted a hearing titled “Oversight of A.I.: Rules for Artificial Intelligence.”
Witnesses included:
Samuel Altman, CEO, OpenAI
Christina Montgomery, Chief Privacy & Trust Officer, IBM
Gary Marcus, Professor Emeritus, New York University
Other clips from the full-length video are available here: https://youtu.be/k0YwwofDYWY