EMETH White Paper

2-2-2. Hybrid Parallel Processing

In AI development, data parallelism and model parallelism are key techniques for efficiently processing large datasets and complex models. Data parallelism divides a large dataset into multiple parts and processes each part concurrently: each processing unit (a CPU, GPU, or machine) works on its subset of the data independently, and the results are then aggregated. Model parallelism divides a large, complex model into multiple parts and processes each part concurrently on different processing units: each unit is responsible for one part of the model and advances its computation while exchanging intermediate results with the other units.
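To illustrate the difference, the sketch below contrasts the two approaches on a toy two-layer linear model. The worker counts, array shapes, and thread-based workers are illustrative assumptions, not part of EMETH's implementation: in the data-parallel path each worker runs the full model on one shard of the data and the shard outputs are aggregated, while in the model-parallel path each worker would hold only one layer and pass its intermediate result on (simulated here sequentially in one process).

```python
# Minimal sketch (illustrative assumptions, not EMETH's implementation):
# data parallelism vs. model parallelism on a toy two-layer linear model.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 64))        # toy dataset
W1 = rng.normal(size=(64, 32)) * 0.1   # layer 1 weights
W2 = rng.normal(size=(32, 8)) * 0.1    # layer 2 weights

def forward(x):
    # The full model, run by a single worker on its shard of the data.
    return np.maximum(x @ W1, 0) @ W2

# --- Data parallelism: each worker runs the FULL model on a SHARD of the data,
# --- then the per-shard outputs are aggregated.
shards = np.array_split(X, 4)
with ThreadPoolExecutor(max_workers=4) as pool:
    outputs = list(pool.map(forward, shards))
data_parallel_out = np.concatenate(outputs)

# --- Model parallelism: each worker holds only PART of the model and forwards
# --- its intermediate result. Here the two "workers" run sequentially in one
# --- process, standing in for separate devices exchanging activations.
def layer1(x):
    return np.maximum(x @ W1, 0)   # held by worker A
def layer2(h):
    return h @ W2                  # held by worker B

model_parallel_out = layer2(layer1(X))

# Both strategies compute the same function over the same data.
assert np.allclose(data_parallel_out, model_parallel_out)
print(data_parallel_out.shape)     # (1000, 8) either way
```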

EMETH has developed hybrid parallel processing, which combines these two approaches. By running computation simultaneously across multiple nodes, hybrid parallel processing significantly reduces machine-learning computation time for workloads ranging from large-scale image classification to natural language processing, and makes more advanced capabilities possible.
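The sketch below shows how the two forms of parallelism can be combined, again under illustrative assumptions rather than as a description of EMETH's actual system: the batch is sharded across groups of workers (data parallelism), and within each group the model's layers are split into stages whose intermediate results are handed from one stage to the next (model parallelism), with the shard outputs aggregated at the end.

```python
# Minimal sketch (illustrative assumptions, not EMETH's implementation):
# hybrid parallelism = data parallelism across node groups + model
# parallelism (layer stages) inside each group.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

rng = np.random.default_rng(1)
X = rng.normal(size=(1200, 64))
W1 = rng.normal(size=(64, 32)) * 0.1   # stage 1, held by one worker of a group
W2 = rng.normal(size=(32, 8)) * 0.1    # stage 2, held by another worker

def group_forward(shard):
    """One data-parallel group. In a real deployment each stage would run on a
    separate worker/node and the intermediate result would be sent over the
    network; here the two stages are simulated sequentially."""
    h = np.maximum(shard @ W1, 0)      # stage 1 produces the intermediate result
    return h @ W2                      # stage 2 consumes it

# Data parallelism across groups: each group processes one shard of the batch.
shards = np.array_split(X, 3)
with ThreadPoolExecutor(max_workers=3) as pool:
    group_outputs = list(pool.map(group_forward, shards))

# Aggregation: shard outputs are reassembled into the full batch output.
hybrid_out = np.concatenate(group_outputs)
print(hybrid_out.shape)                # (1200, 8)
```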

Implementing hybrid parallel processing requires hardware and software frameworks that support parallel processing. It also demands proper partitioning of both data and models, efficient communication and synchronization, and balanced load distribution, capabilities the EMETH team has been able to deliver thanks to its specialized knowledge and experience in AI development.
