Welcome to the Technocracy A.I. Abstract Series for Published Scientific Work in the A.I. and Artificial General Intelligence field.

Title: Implementing a Seed Safe/Moral Motivational System with the Independent Core Observer Model (ICOM).

Peer reviewed at the Biologically Inspired Cognitive Architectures Association (BICA) 2016 conference in New York City and published in Procedia Computer Science.

Authored by David Kelley and Mark Waser.

Abstract: Arguably, the most important questions about machine intelligences revolve around how they will decide what actions to take. If they decide to take actions that are deliberately, or even incidentally, harmful to humanity, then they would likely become an existential risk. If they were naturally inclined, or could be convinced, to help humanity, then it would likely lead to a much brighter future than would otherwise be the case. This is a true fork in the road towards humanity’s future, and we must ensure that we engineer a safe solution to this most critical of issues.

As always, thank you for listening to the Technocracy Abstract Series, and a special thank you to our sponsors: the Foundation, Transhumanity.net, and the AGI Laboratory.


Title: Architecting a Human-like Emotion-driven Conscious Moral Mind for Value Alignment and AGI Safety.

Peer reviewed by AAAI at Stanford University, Palo Alto, California, March 2018.

(Provo, UT) Effective June 1, 2018, Transhumanity.net (owned and operated by the Foundation) and TheTechnocracy.com (owned and operated by the AGI Laboratory) have come to an arrangement to post all Technocracy posts to the Transhumanity.net website. The expectation is that this will provide a better, more active news site focused on futurists, technologists, and the like, and help The Technocracy grow as a podcast.