Welcome to the Technocracy A.I. Abstract Series for Published Scientific Work in the A.I. and Artificial General Intelligence field.
Title: Implementing a Seed Safe/Moral Motivational System with the Independent Core Observer Model (ICOM).
Peer reviewed at the Biologically Inspired Cognitive Architectures Association conference (BICA 2016) in New York City and published in Procedia Computer Science.
Authored by David Kelley and Mark Waser.
Abstract: Arguably, the most important questions about machine intelligences revolve around how they will decide what actions to take. If they decide to take actions that are deliberately, or even incidentally, harmful to humanity, then they would likely become an existential risk. If they were naturally inclined, or could be convinced, to help humanity, then it would likely lead to a much brighter future than would otherwise be the case. This is a true fork in the road toward humanity's future, and we must ensure that we engineer a safe solution to this most critical of issues.
As always, thank you for listening to the Technocracy Abstract Series, and a special thank you to our sponsors: the Foundation, Transhumanity.net, and the AGI Laboratory.