There are two schools of thought when it comes to the future of artificial intelligence (AI):
The utopian view: Intelligent systems will usher in a new age of enlightenment, freeing humans from work to pursue nobler goals. AI systems will be programmed to cure disease, settle disputes fairly and augment human existence only in ways that benefit us.
The apocalyptic view: Intelligent systems will take our jobs, outpace human evolution, become war machines and prioritize a distant future over present needs. Our dubious attempts to control them will only expose our own shortcomings and our limited ability to apply morality to technology we cannot control.
As with most things, the truth is probably somewhere in the middle. Regardless of where you fall on this spectrum, it’s important to consider how humans might influence AI as the technology evolves. One idea is that humans will largely form the conscience or moral fabric of AI. But how would we do that? And how can we apply ethics to AI to help prevent the worst from happening?