🍼Overview
The system is divided into three layers: the Model Network Layer, the LLM Interoperability Layer, and the Decentralized GPU Layer.
Model Network Layer: This layer supports the integration of models (Models), applications (Spaces), and datasets (Datasets), with LLMs as a key model category. Mirror sites at global nodes ensure fast data transfer for designers and users. The layer is also compatible with Hugging Face interfaces, which broadens the variety and quantity of available models.
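As a rough illustration of how global mirrors could speed up downloads, the sketch below picks the lowest-latency mirror from a set of measurements. The mirror hostnames and latency figures are invented for this example; they are not real endpoints of the system.

```python
# Hypothetical sketch: route a model download to the nearest mirror.
# All hostnames and latency values below are illustrative assumptions.

def pick_mirror(latencies_ms: dict) -> str:
    """Return the mirror host with the lowest measured latency."""
    return min(latencies_ms, key=latencies_ms.get)

measured = {
    "mirror-us.example.org": 120.0,
    "mirror-eu.example.org": 45.0,
    "mirror-ap.example.org": 80.0,
}
print(pick_mirror(measured))  # → mirror-eu.example.org
```

In practice the client would probe each mirror before downloading, but the selection logic reduces to this comparison.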
LLM Interoperability Layer: This layer contains four core components: the LLM universal protocol, the LLM universal environment, the Workflow graphical editor, and the Agent optimal path module. They provide, respectively, a sharing and transmission protocol for LLMs, a training and testing environment, a graphical interface for building LLM workflows, and a module that autonomously explores optimal Agent paths.
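One way to read "autonomous exploration of optimal Agent paths" is as a shortest-path search over a workflow graph, where each edge weight is the cost of invoking an LLM step. The sketch below uses Dijkstra's algorithm on a tiny invented workflow; the node names and costs are assumptions for illustration, not the module's actual design.

```python
# Hedged sketch: treat "optimal Agent path" selection as shortest-path search
# over a workflow graph. Nodes and edge costs are invented for illustration.
import heapq

def optimal_path(graph, start, goal):
    """Dijkstra's algorithm: return (total_cost, node_list) for the cheapest path."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, step_cost in graph.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(queue, (cost + step_cost, nxt, path + [nxt]))
    return float("inf"), []

# Illustrative workflow: each edge weight is the cost of one LLM invocation.
workflow = {
    "prompt": {"summarize": 2, "translate": 3},
    "summarize": {"answer": 4},
    "translate": {"answer": 1},
    "answer": {},
}
print(optimal_path(workflow, "prompt", "answer"))  # → (4, ['prompt', 'translate', 'answer'])
```

A real module would likely learn edge costs from observed quality and latency rather than fix them up front, but the search step reduces to this kind of graph optimization.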
Decentralized GPU Layer: We will connect to existing GPU computing platforms and record the rewards generated by model training under the computing power provider's ID. Through a "joint mining" mechanism, providers can negotiate reward-sharing ratios with model trainers, effectively turning computing power into an investment.
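The negotiated sharing ratio amounts to a simple pro-rata split of each training reward. The sketch below shows the arithmetic; the 30/70 ratio and reward amount are invented examples, since the actual ratio is whatever the two parties agree on.

```python
# Hedged sketch of a "joint mining" payout split. The 30% provider share
# used below is an illustrative assumption, not a protocol constant.

def split_rewards(total_reward: float, provider_share: float):
    """Split a training reward between the compute provider and the model trainer."""
    provider = total_reward * provider_share
    trainer = total_reward - provider
    return provider, trainer

provider_cut, trainer_cut = split_rewards(1000.0, 0.30)
print(provider_cut, trainer_cut)  # → 300.0 700.0
```

Recording the split under the provider's ID then only requires attaching this pair of amounts to the provider and trainer accounts for each completed training job.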