Decentralised Protocol & Network for AI

Generic computing on Prometheus consensus with open/free market for software, computing power and data

Censorship-resistant consensus protocol for scalable trustless decentralised computing

Join us, receive the latest news about AI and Pandora Boxchain

Centralisation is The Problem

Centralisation creates fragility

by introducing a single point of failure (which is the center itself):

Centralised Computing

attacks and data leaks in data centers, or even at the processor level

Centralised AI

the risk of AI capturing power (a new «nuclear weapon»)

We need to be antifragile, and decentralisation is the key. Free and open markets are the most decentralised type of system humans have invented. However, a free market requires censorship resistance; the two are effectively synonyms, since every act of censorship is a constraint that closes off some part of the market.

That’s why regulations on AI and computing limit the free market, introducing fragility and long-term risks. They also carry a negative economic impact, including higher computing costs, less competition etc.

To secure steady progress and mitigate the risks from AI, humanity needs a decentralised, trustless global computing network with two key properties:

  1. it must be censorship-resistant (which implies strict privacy) at its very core technological level
  2. it must be scalable enough for high loads

Our GOAL is to launch a network that will solve these problems.


To achieve it we have found proper scientific models (byzantine fault tolerance, game theory, non-linear and complexity science) and we are developing formally-verified technology. We have an «unfair advantage», since most blockchain projects do not pay attention to censorship resistance (the only two exceptions are Bitcoin and Monero).


We see that in the future computing will be the core of the economy, and most of it will require running antifragile AI at its backends. Economically, we expect that by creating a free market and a fundamentally private, uncensored system we will attract a significant part of the global economy to the network, as the Internet has done over the past three decades.


Dr Maxim Orlovsky, MD

Core idea
& vision

neuroscience, machine learning, complexity science, business & scientific management

Founding director at Bitcoin Foundation Ukraine, Head of, Soros Prize laureate, multiple scientific awards

Sabina Sachtachtinskagia

game theory

game theory, governance systems, finance, deregulation, incentives, competition

PhD candidate and researcher in Athens University of Economics and Business

Andrey Sobol


blockchain, game theory, decentralised systems, software development, information security

CTO at Satoshi Fund

Olga Ukolova, MD

Agile software,
product design, HR

quality assurance & operations in software development, project management, biotechnology, brain-computer interfaces, machine learning, HR

COO, PMO and QA at software outsourcing & AI consulting companies

Vitalii Bulychov


business management, blockchain researcher and investor since 2012

Entrepreneur, investor in a number of crypto-related projects, including industrial-scale mining facilities

Andriy Khavryuchenko

machine learning

IT business, software engineering, machine learning, decentralisation technologies, blockchain

Dash Core member; Founder at

Andrii Dubetsky

Business development,

business development, decentralisation technologies, private equity, stock exchanges, finance

Executive Director at Warsaw Stock Exchange (WSE), Representative Office in Kyiv; Founding head & director of Bitcoin Foundation Ukraine

Dr Oleksandr Ivanov

Core research,
machine learning

nonlinear science, game theory, computer science, data science, AI, multiagent systems, technology, machine learning

University of Groningen

Julian Konchunas

Core dev:
blockchain development

Ubisoft team lead

software architecture, system development, C++

Sergey Korostelyov

Core dev:
computing kernels

10+ years of software architecture design & development

enterprise software architecture, system analysis, machine learning

Kostiantyn Smyrnov

Core dev:
web apps, APIs

20+ years of software architecture design & development

full stack web software architecture, business analysis, continuous integration

Dmitry Litvinov


Art-director, UI/UX product designer at software & design agencies

UI/UX, product design, conception, branding & identity, style guides, system design (22 years of experience)

Aliaksei Rubanau

Data Scientist,
generalist, polymath.
Mastery: algorithms,
neuroscience, topology

ANN, brain-computer interfaces, topology, learning algorithms

CV: and co-founder and CTO

Roadmap & Status

Testnet 1

Ethereum-based PoCW consensus

Aug '17
Start Testnet 1

Jan '18
Blockchain explorer

Mar '18
ANN training

Jun '18
real-world cases

Nov '17
Python node: ANN inference

Feb '18
Web client

May '18
Computing verification

Aug '18
Test token economy

Testnet 2

Prometheus PoCW+PoR


Dec '17
Start Testnet 2

Feb '18
Block production &

May '18
Token &
reputation economy

Sep '18
Public testnet

Jan '18
First node in Rust:
Kademlia-based p2p networking

Mar '18
Bitcoin Script support
in transactions

Jul '18
Computation channels


Prometheus PoCW+PoR

Feb '19

Production workflow:

  1. Formal specifications
  2. Yellow papers
  3. Formal verifications
  4. Scientific proofs


Formally verifiable with proper QA


Most of the popular decentralisation solutions take an ad-hoc approach: they create a technology and test it in the real world. While software development QA helps reduce the number of bugs, it cannot be applied either to the economic part of blockchain systems or to general consensus designs. This results in multiple hacks and value losses due to poor design or a bad balance of risks and rewards.

What we research

Byzantine Fault Tolerance

Game Theory

Non-linear and Complexity Science

Pandora uses proper scientific models (byzantine fault tolerance, game theory, non-linear and complexity science) to design Prometheus, a consensus protocol with provable properties. Its implementation in the Pandora network will be formally verified as well.


The core of the system is an algorithm for proving computing work in a trustless environment without repeating the actual computation. It utilises game theory with provable Nash equilibria. It works for:

  1. parallel computing (like AI inference, map-reduce tasks etc)
  2. sequential computing (like AI model training, smart contracts etc)
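The details of Prometheus's actual proof are not reproduced here, but the goal can be illustrated with a common technique for verifying parallel work without re-executing all of it: probabilistic spot-checking, where a verifier recomputes only a small, unpredictable sample of a worker's claimed results. Everything in this sketch (function names, the sampling fraction, the seed) is an illustrative assumption, not a protocol parameter.

```python
import hashlib
import random

def run_task(x):
    # Stand-in for one unit of parallel work (e.g. inference on one input).
    return x * x

def spot_check(inputs, claimed, sample_frac=0.1, seed=b"recent-block-hash"):
    """Verify a random fraction of a worker's claimed results.

    The sample is derived from a public seed (e.g. a recent block hash),
    so the worker cannot predict which items will be audited.
    """
    rng = random.Random(hashlib.sha256(seed).digest())
    n = len(inputs)
    k = max(1, int(n * sample_frac))
    for i in rng.sample(range(n), k):
        if run_task(inputs[i]) != claimed[i]:
            return False  # byzantine fault detected; reporter is rewarded
    return True

inputs = list(range(1000))
honest = [run_task(x) for x in inputs]
assert spot_check(inputs, honest)

# A worker faking a fraction f of results escapes one audit of k items
# with probability (1 - f)**k, which vanishes as k grows; the reward and
# slashing parameters are then tuned so honesty is the Nash equilibrium.
```

The verifier does `k` recomputations instead of `n`, which is what makes checking far cheaper than redoing the job.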

Hybrid consensus

Prometheus combines PoW and PoS mechanisms into a two-tier hybrid protocol.

Proof of Computing Work

At the first tier, workers prove their computing work in a trustless environment without repeating the actual computation; the incentive scheme utilises game theory with provable Nash equilibria.

Proof of Reputation

At the second tier, reputation (earned through proofs of computing work) drives proof of reputation: a kind of PoS, but with unalienable reputation in place of stake.

Solves PoS/PoW problems:

True randomness in leader selection:

instead of energy-consuming PoW and attackable BFT or PoS schemes, Prometheus uses external randomness derived from the actual computing work

Protection from long-range nothing-at-stake attacks:

two-tier consensus renders them economically inefficient

Provable absence of stake centralisation:

mining rewards come either from useful computing work or from unalienable reputation, and are not proportional to a node’s current stake

Low probability of network centralisation:

multiple types of nodes with different profit models create a truly decentralised environment
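One way to picture reputation-weighted leader selection seeded by randomness from real computing work is the sketch below. The node names, weights and digest are hypothetical; the point is that selection probability follows reputation, and the entropy source is a work result rather than something a staker can grind.

```python
import hashlib
from bisect import bisect_right
from itertools import accumulate

def select_leader(nodes, work_digest):
    """Pick a block producer with probability proportional to reputation.

    `nodes` maps node id -> reputation score; `work_digest` is entropy
    extracted from recently submitted computing-work results, so no
    single participant can bias the draw.
    """
    ids = sorted(nodes)                       # deterministic order on every node
    cumulative = list(accumulate(nodes[i] for i in ids))
    seed = int.from_bytes(hashlib.sha256(work_digest).digest(), "big")
    point = seed % cumulative[-1]             # uniform point on the reputation line
    return ids[bisect_right(cumulative, point)]

nodes = {"alice": 50, "bob": 30, "carol": 20}
leader = select_leader(nodes, b"result-hash-from-useful-computation")
```

Because every honest node hashes the same work digest, all of them select the same leader without any extra communication round.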

Node types

  1. Simple full node: listens to the network, detects byzantine faults and is rewarded for reporting them
  2. Node with locked PAN stake: performs useful computations (running/training AI models and other forms of computing)
  3. Worker with stake AND high reputation: checks computing results
  4. Verifier with higher reputation: participates in arbitrations in case of verifier failures
  5. Top-N reputation nodes of the network, N~1000: mint blocks at the top level of consensus (proof of reputation)
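The node roles form an escalating hierarchy (stake unlocks work, reputation unlocks checking and arbitration, top rank unlocks block production), which can be modelled as data. All thresholds in this sketch are invented for illustration; the real protocol parameters are not specified in this document.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative thresholds only; not actual Prometheus parameters.
MIN_STAKE = 1000           # PAN locked to become a worker
VERIFIER_REPUTATION = 50
ARBITER_REPUTATION = 200
TOP_N = 1000               # reputation rank needed to mint blocks

@dataclass
class Node:
    stake: int = 0
    reputation: int = 0
    rank: Optional[int] = None  # position in the network-wide reputation ordering

    def roles(self):
        """Every node is at least a full node; further roles stack on top."""
        r = ["full node (reports byzantine faults)"]
        if self.stake >= MIN_STAKE:
            r.append("worker (performs useful computations)")
            if self.reputation >= VERIFIER_REPUTATION:
                r.append("verifier (checks computing results)")
            if self.reputation >= ARBITER_REPUTATION:
                r.append("arbiter (resolves verifier failures)")
        if self.rank is not None and self.rank <= TOP_N:
            r.append("block producer (proof-of-reputation tier)")
        return r
```

A node with stake, high reputation and a top rank holds all five roles at once; a node with neither stake nor reputation still earns rewards by reporting faults.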

Scalability: beyond blockchain

Blockchain is a technology, not a goal in itself, and on its own it is fundamentally unscalable.
Scalability comes via:

  1. parallel computing (like AI inference, map-reduce tasks etc)
  2. sequential computing (like AI model training, smart contracts etc)

Economic model

Economic rewards are designed with game theory, leading to provable Nash equilibria for non-byzantine strategies. Two token types are required:

PAN: Currency

  • mined with useful computations
    and minted in new blockchain blocks
  • limited supply (10m)


Reputation

  • gives special rights
    (like minting blocks, governance etc)
  • untransferrable/unsellable
  • unlimited supply
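A toy ledger makes the asymmetry between the two tokens concrete: PAN is transferable but hard-capped at 10m, while reputation is uncapped and deliberately has no transfer operation, so it cannot be bought the way stake can. Class and method names here are illustrative assumptions.

```python
class Ledger:
    """Toy two-token ledger: transferable PAN vs unalienable reputation."""
    PAN_CAP = 10_000_000  # the hard supply limit stated in the economic model

    def __init__(self):
        self.pan = {}
        self.reputation = {}
        self.minted = 0

    def mint_pan(self, node, amount):
        # PAN is mined with useful computations, clamped to the fixed cap.
        amount = min(amount, self.PAN_CAP - self.minted)
        self.minted += amount
        self.pan[node] = self.pan.get(node, 0) + amount

    def transfer_pan(self, src, dst, amount):
        # PAN behaves like an ordinary currency.
        if self.pan.get(src, 0) < amount:
            raise ValueError("insufficient PAN")
        self.pan[src] -= amount
        self.pan[dst] = self.pan.get(dst, 0) + amount

    def award_reputation(self, node, amount):
        # Reputation has no cap and, crucially, no transfer method:
        # it is bound to the node that earned it.
        self.reputation[node] = self.reputation.get(node, 0) + amount
```

The absence of a `transfer` path for reputation is the whole design point: block-minting rights accrue only to nodes that actually did verifiable work.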

Ecosystem & Adoption

The key to adoption is applying the system to real-world scenarios. To achieve this, the system and the network will include:

  1. Marketplace(s) for AI models, software/algorithms & big data
  2. Computing resources auctions
  3. Support for other assets
    (like tokens/cryptocurrencies) for interoperability purposes and to allow AI startups to add their own incentivisation mechanics

Network will be used for:

  1. AI model learning/training
  2. AI inference on static and streaming data
  3. Performing other large-scale computing tasks

What we have already developed:

Ethereum-based implementation
of PoCW consensus (Pyrrha)


Python worker node running AI
model training & inference (Pynode)

Web client for consumers

Prometheus consensus
full node in Rust (Rustheus)

Source code:

Join the Community of Game Changers