Welcome to the Technocracy A.I. Abstract Series for Published Scientific Work in the A.I. and Artificial General Intelligence field.

Title: Implementing a Seed Safe/Moral Motivational System with the Independent Core Observer Model (ICOM).

Peer reviewed by the Biologically Inspired Cognitive Architectures Association (BICA 2016) in New York City and published in Procedia Computer Science.

Authored by David Kelley and Mark Waser.

Abstract: Arguably, the most important questions about machine intelligences revolve around how they will decide what actions to take. If they decide to take actions which are deliberately, or even incidentally, harmful to humanity, then they would likely become an existential risk. If they were naturally inclined, or could be convinced, to help humanity, then it would likely lead to a much brighter future than would otherwise be the case. This is a true fork in the road towards humanity’s future and we must ensure that we engineer a safe solution to this most critical of issues.

As always, thank you for listening to the Technocracy Abstract Series, and a special thank you to our sponsors: the Foundation, Transhumanity.net, and the AGI Laboratory.

Welcome to The Technocracy!

The news podcast answering the single most important question:

What are the most important trends and news, from the standpoint of the Machine?

Where we remove humanity from the loop and let the machine and other Artificial Intelligence systems decide what is important to know, as we all work towards a technological singularity.