
With the topic of artificial intelligence at the forefront, I asked students in my Information Science course what they hoped AI would accomplish. “Better detection of disease.” “End of boring jobs.” “Efficiency.” They were all good answers but curiously modest. If AI is as transcendent as some observers make it out to be, then why not reach farther? End of world hunger? Global peace?

A few days later, I asked ChatGPT some of these questions. On world hunger, the algorithm offered a 10-point plan that included sustainable agriculture, empowerment of women, better infrastructure, international agreements and—no surprise—more technology to regulate everything from seeds to distribution. Fleshed out, this response would make for a good A-minus/A undergraduate paper. Next was world peace. Interestingly, the responses were similar: international cooperation, disarmament, empowering women, climate change and personal responsibility. Again, as an undergraduate exercise on policy, all good points.

But policy is not why we have world hunger or war. The reasons lie within us; they are human nature. Alongside the beautiful and beneficial qualities of intelligence, mercy and grace lie hatred, violence and greed. Hence the rise and fall of civilizations, the double-edged sword of world religions, the agony and the ecstasy of the human condition.

It is against this backdrop that competing and contradictory views on AI arise. Optimists hope for singularity, the seamless threading of the human to the machine, boosting human potential to new levels of intelligence and productivity. Global efficiencies, underscored by trillions of dollars, are at the end of this rainbow. But then there are the darker aspects. AI will exacerbate existing challenges such as the adverse influence of social media on youth, disinformation or advanced persistent cyberthreats. More damaging problems could emerge. Machines may become so powerful as to manipulate us into caring for them at our own expense. To some, the destruction of humanity hangs in the balance.

When it was released to the public some 30 years ago, the internet both fascinated and frightened us. Developed within the trust of a closed environment among higher education, industry and the government, the internet surprised people once it went into the wild, as its agnostic protocols reflected both the good and the bad. We have managed the technology's defining qualities, its amplification and scope, poorly. Excitement about the technology and our market-driven greed have kept American society from establishing clear accountability for shoddy software and the damage it does to businesses and users. Privacy is a dream some of us had. Seriously negligent practices in social media adversely affect people, especially our youth. The internet has become a powerful vector for paranoid styles of American politics. We have no international framework to control either cybercrime or warfare.

A tip of the hat, then, to Sam Altman and some members of Congress who at least want to be on the record for having voiced concern. AI, the internet on a megadose of steroids, emerges amid these puzzles. Do not expect that these warning bells will result in anything consequential. With bipartisan attention to youth, a modest proposal to raise the age threshold in the Children's Online Privacy Protection Act from 13 to 16 might occur; the FTC under this administration is already after Facebook for failing to curb information gathering on youth. But accountability for software? Content moderation? Global agreement on cyber? Our government cannot even get beyond voluntary disclosure of security incidents from the private sector. We are utterly unprepared for a serious cyberattack on our infrastructure.

AI, for all its promise and peril, will not change that landscape. Workers will bear the brunt of workforce disruption. Having learned nothing from the dislocation of outsourcing, neither industry nor the government has a plan to deal with it. Already fabulously wealthy, investors will become richer yet, further separating rich and poor and squeezing the middle class. Well-intentioned legislation to help youth with social media will have little effect, because for the most part the law can only nudge the education and social norms that must do the real work. Thirteen billion dollars did little to aid homelessness in California. People defecate on city streets in San Francisco—and elsewhere—now; they will likely still be doing so as AI moves out of this hype cycle to a more mature place in our economy.

Policy could address these social, political and market issues, which now include aspects of AI, but it won't, because incumbents lack the incentive to act. Another point about human nature is that the powerful do not willingly give up power. Nothing less than force or real conversion prompts people who are doing well to change in a way that would be meaningful for everyone else. That observation brings us back to the start. We get the technology that we deserve. The tendency to anthropomorphize AI reveals the narcissism behind it, not least among those super-smart guys (and it is mostly guys) who made it. If there is one thing I would change about consumer AI, it would be to have it not present as human. It is not. Presenting as human is a self-serving trope that confuses people who do not have the education to know the difference. But, alas, we are human. And in our considerations of AI, that is the real challenge we confront.
