Welcome to the Technocracy A.I. Abstract Series for Published Scientific Work in the A.I. and Artificial General Intelligence field.

Today’s paper is titled: Long-Term Trajectories of Human Civilization

Authored By Seth D. Baum, Stuart Armstrong, Timoteus Ekenstedt, Olle Häggström, Robin Hanson, Karin Kuhlemann, Matthijs M. Maas, James D. Miller, Markus Salmela, Anders Sandberg, Kaj Sotala, Phil Torres, Alexey Turchin, and Roman V. Yampolskiy.

Welcome to the Technocracy A.I. Abstract Series for Published Scientific Work in the A.I. and Artificial General Intelligence field.

Today’s paper is titled: The Independent Core Observer Model Computational Theory of Consciousness and the Mathematical Model for Subjective Experience.

Peer reviewed by the International Conference on Information Science and Technology in China, April 20–22, 2018.

Welcome to the Technocracy A.I. Abstract Series for Published Scientific Work in the A.I. and Artificial General Intelligence field.

Today’s paper is titled: Feasibility Study and Practical Applications Using Independent Core Observer Model AGI Systems for Behavioral Modification in Recalcitrant Populations.

Peer reviewed by the Biologically Inspired Cognitive Architectures Association (BICA) 2018.

Authored By David Kelley and Mark Waser.

Welcome to the Technocracy A.I. Abstract Series for Published Scientific Work in the A.I. and Artificial General Intelligence field.

Title: Implementing a Seed Safe/Moral Motivational System with the Independent Core Observer Model (ICOM).

Peer reviewed by the Biologically Inspired Cognitive Architectures Association (BICA) 2016 in New York City and published in Procedia Computer Science.

Authored By David Kelley and Mark Waser.

Abstract: Arguably, the most important questions about machine intelligences revolve around how they will decide what actions to take. If they decide to take actions which are deliberately, or even incidentally, harmful to humanity, then they would likely become an existential risk. If they were naturally inclined, or could be convinced, to help humanity, then the result would likely be a much brighter future than would otherwise be the case. This is a true fork in the road towards humanity’s future, and we must ensure that we engineer a safe solution to this most critical of issues.

As always, thank you for listening to the Technocracy Abstract Series, and a special thank you to our sponsors: the Foundation, Transhumanity.net, and the AGI Laboratory.

Welcome to the Technocracy A.I. Abstract Series for Published Scientific Work in the A.I. and Artificial General Intelligence field.

Title: Architecting a Human-like Emotion-driven Conscious Moral Mind for Value Alignment and AGI Safety.

Peer reviewed by AAAI at Stanford University, Palo Alto, California, March 2018.

Welcome to The Technocracy!

The news podcast answering the single most important question:

What are the most important trends and news, from the standpoint of the Machine?

This is the show where we remove humanity from the loop and let the machine and other Artificial Intelligence systems decide what is important to know, as we all work towards a technological singularity.