OpenAI CEO Sam Altman on AI Regulations & ChatGPT | No Equity in OpenAI | Salary Reveal | Senate


On 5/16/2023, the U.S. Senate Judiciary Committee Subcommittee on Privacy, Technology and the Law hosted a hearing titled “Oversight of A.I.: Rules for Artificial Intelligence.”
Witnesses included:
Samuel Altman, CEO, OpenAI
Christina Montgomery, Chief Privacy & Trust Officer, IBM
Gary Marcus, Professor Emeritus, New York University

Sen. John Kennedy of Louisiana asked Sam Altman, the CEO of ChatGPT maker OpenAI, if he could recommend some people to lead a new agency that would oversee A.I.—that is, to pick his own regulators.

Samuel Altman, CEO, OpenAI (ChatGPT) on AI regulations, Senate Hearing
Kennedy: Cuz I want to hear from Mr. Altman. Mr. Altman, here’s your shot.

Altman: Thank you Senator. Number one, I would form a new agency that licenses any effort above a certain scale of capabilities, and can take that license away and ensure compliance with safety standards. Number two, I would create a set of safety standards focused on what you said in your third hypothesis as the dangerous capability evaluations. One example that we’ve used in the past is looking to see if a model can self-replicate and self-exfiltrate into the wild. We can give your office a long list of other things that we think are important there, but specific tests that a model has to pass before it can be deployed into the world. And then third, I would require independent audits. So not just from the company or the agency, but experts who can say the model is or isn’t in compliance with these stated safety thresholds and these percentages of performance on question X or Y.

Kennedy: Can you send me that information?

Altman: We will do that.

Kennedy: You make a lot of money. Do you? Altman: enough for health insurance & no equity in OpenAI
Kennedy: Would you be qualified, if we promulgated those rules, to administer those rules?
Altman: I love my current job.
Kennedy: Cool. Are there people out there that would be qualified?
Altman: We’d be happy to send you recommendations for people out there. Yes.

Kennedy: Okay. You make a lot of money. Do you?
Altman: I make no… I get paid enough for health insurance. I have no equity in OpenAI.
Kennedy: Really? Yeah. That’s interesting. You need a lawyer.
Altman: I need a what?
Kennedy: You need a lawyer or an agent.
Altman: I’m doing this cuz I love it.
Kennedy: Thank you Mr. Chairman.

3 hypotheses from Kennedy: 1. Congress does not understand AI, 2. could hurt AI & 3. could hurt us
Kennedy: Thank you all for being here. Permit me to share with you three hypotheses that I would like you to assume for the moment to be true. Hypothesis number one, many members of Congress do not understand artificial intelligence. Hypothesis number two, that absence of understanding may not prevent Congress from plunging in with enthusiasm and trying to regulate this technology in a way that could hurt this technology. Hypothesis number three, that I would like you to assume there is likely a berserk wing of the artificial intelligence community that intentionally or unintentionally could use artificial intelligence to kill all of us and hurt us the entire time that we are dying. Assume all of those to be true. Please tell me in plain English two or three reforms, regulations, if any, that you would implement if you were queen or king for a day … This is your chance, folks, to tell us how to get this right. Please use it.

Montgomery: Right. I mean, I think, again, the rules should be focused on the use of AI in certain contexts …

Kennedy: So we ought to first pass a law that says you can use AI for these uses but not others. Is that, is that what you’re saying?

Montgomery: We need to define the highest risk uses of AI.

Kennedy: Alright, Professor Marcus, if you could be specific. This is your shot, man. Talk in plain English and tell me what rules, if any, we ought to implement. And please don’t just use concepts. I’m looking for specificity.

Marcus: Number one, a safety review like we used with the FDA prior to widespread deployment. If you’re gonna introduce something to a hundred million people, somebody has to have their eyeballs on it.

Kennedy: There you go. Okay. That’s a good one. I’m not sure I agree with it, but that’s a good one. What else?

Marcus: You didn’t ask for three that you would agree with. Number two, a nimble monitoring agency to follow what’s going on. Not just pre-review but also post, as things are out there in the world, with authority to call things back, which we’ve discussed today. And number three would be funding geared towards things like constitutional AI, AI that can reason about what it’s doing. I would not leave things entirely to …

https://www.facebook.com/HygoNewsUSA/videos/659801585964067
