Is Superintelligent AI Inevitable? – The Path to Controllable AGIs

Is superintelligent AI inevitable?

So asks Max Tegmark in his recent TED AI 2023 talk.

“Once the machine thinking method had started, it would not take long to outstrip our feeble powers. At some stage therefore we should have to expect the machines to take control.” – Alan Turing, 1951

The TL;DR: AI capabilities are arriving faster than we anticipated, and AI safety is paramount.

He talks about how we focus on training AI not to say certain things (e.g., give financial advice), when the real focus should be on ensuring it is not doing harmful things.

And on why guardrails matter: today’s guardrails are still in the realm of ‘trying’, while we need fool-proof mechanisms that guard against malicious intent, whether from the machine alone or from machine plus human.

While a lot of this can be dismissed as hype or science fiction, it is worth remembering that much of what we see around us was, at one time, just that – science fiction.

He introduces the paper he co-authored with Steve Omohundro, “Provably Safe Systems: The Only Path to Controllable AGI”.

He places mathematical proof at the center of their approach to AGI safety:

“We argue that mathematical proof is humanity’s most powerful tool for controlling AGIs. Regardless of how intelligent a system becomes, it cannot prove a mathematical falsehood or do what is provably impossible. And mathematical proofs are cheap to check with inexpensive, extremely reliable hardware.”
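To make “cheap to check” concrete, here is a toy example in Lean 4 (my own illustration, not from the paper). The kernel verifies the certificate mechanically in a fraction of a second, and no amount of computing power can push a proof of a false statement past it:

```lean
-- Toy illustration: Lean's kernel checks this certificate mechanically.
-- Verification is cheap and deterministic; a false statement
-- (say, a + 1 = a) simply has no proof that would pass the check.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```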

A Provably Compliant System (PCS) is a system (hardware, software, social or any combination thereof) that provably meets certain formal specifications.

“In terms of policy, AI would become more like biotech is today. Before a biotech company is allowed to market a new drug, they need to prove to government-appointed experts at e.g. the FDA that it satisfies certain safety standards.

“Analogously, before a potentially harmful AI is allowed to be deployed, it would need to prove that it satisfies certain safety standards. In both the biotech and AI cases, these safety standards would be agreed upon by society, but in the AI case, the enforcement would be fully automated: all providers of sufficiently powerful hardware would be mandated to ensure that their hardware would not run code which doesn’t carry a proof of compliance with these standards.”
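To make the enforcement idea concrete, here is a minimal sketch in Python of what a “proof-carrying code” gate could look like. This is my own toy illustration, not the paper’s mechanism: the spec names, the `ComplianceProof` bundle, and the checker are placeholders standing in for real formal-verification machinery.

```python
# Toy sketch of hardware-level gating: a program only runs if it carries a
# compliance certificate that a small, trusted checker accepts.
import hashlib
from dataclasses import dataclass


@dataclass(frozen=True)
class ComplianceProof:
    program_hash: str   # which binary the certificate is about
    spec_id: str        # which agreed-upon safety standard it certifies
    certificate: bytes  # the machine-checkable proof object (opaque here)


# Placeholder names for society-agreed safety standards.
TRUSTED_SPECS = {"no-self-replication-v1", "no-unauthorized-network-v1"}


def check_proof(program: bytes, proof: ComplianceProof) -> bool:
    """Stand-in for a real proof checker: cheap, deterministic verification."""
    if proof.spec_id not in TRUSTED_SPECS:
        return False
    if proof.program_hash != hashlib.sha256(program).hexdigest():
        return False
    # A real checker would verify the certificate against the formal spec;
    # here we only require that some certificate is attached at all.
    return len(proof.certificate) > 0


def run_if_compliant(program: bytes, proof: ComplianceProof) -> str:
    """The gate itself: no valid proof of compliance, no execution."""
    if not check_proof(program, proof):
        raise PermissionError("Refusing to run: missing or invalid compliance proof.")
    return "executed"  # stand-in for handing the program to the hardware


if __name__ == "__main__":
    prog = b"print('hello')"
    proof = ComplianceProof(hashlib.sha256(prog).hexdigest(),
                            "no-self-replication-v1", b"<certificate bytes>")
    print(run_if_compliant(prog, proof))  # executed
```

The point of the sketch is the shape of the gate, not the checker: the expensive work of constructing the proof happens once, before deployment, while the check at run time stays cheap and mechanical.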

Give it a read this weekend.

(P.S. – On a side note, much like what thinkers from Max Tegmark to Jeff Hawkins (A Thousand Brains) argue, we don’t need superintelligent AI for humanity to have a future that benefits from AI. But if you have read about AI, Moloch, and the race to the bottom, it helps to think through AI safety sooner rather than later.)

**************************************************

Ranjani Mani #GenerativeAIandI

#Technology | #Books | #BeingBetter
