How Dangerous Is AI?
As a budding software engineer, I am fascinated by AI, even though it is very likely that hundreds of thousands of software engineers may become redundant because of it.
An open letter—signed by Elon Musk and over 1,000 others with knowledge, power, and influence in the tech space—calls for a halt to all “giant AI experiments” for six months. According to Musk et al.:
- AI systems with human-competitive intelligence can pose profound risks to society and humanity;
- Contemporary AI systems are becoming human-competitive at general tasks; they could flood our information channels with propaganda and untruth, and make it possible to automate away all jobs, including the fulfilling ones;
- Nonhuman minds might eventually outnumber, outsmart, obsolete, and replace humans;
- Powerful AI systems should be developed only once humans are confident that their effects will be positive and their risks will be manageable;
- AI labs and independent experts should use the 6-month pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts; and
- In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems.
However, not everyone thinks AI is so dangerous that further development should be halted. Bill Gates does not think so, and neither do many AI labs, such as Anthropic.
The best argument against the six-month pause advocated by Elon Musk is that it will be adhered to by the good guys but not by the bad guys, who will then get ahead of the good guys.
So, let’s not cower in fear. Let’s harness AI. But let AI developers build robust governance systems, and let regulators come up with sensible rules and regulations!