Release 2.2.0

Major Features and Improvements

  • Integrate the PDSS algorithm, a novel framework that enhances local small language models (SLMs) using differentially private Chain-of-Thought (CoT) rationales generated by remote LLMs:
    • Implement InferDPT for privacy-preserving CoT generation.
    • Support an encoder-decoder mechanism for privacy-preserving CoT generation.
    • Add prefix trainers for step-by-step distillation and text encoder-decoder training.
  • Integrate the FDKT algorithm, a framework that enables domain-specific knowledge transfer from LLMs to SLMs while preserving SLM data privacy.
  • Deployment Optimization: support installation of FATE-LLM via PyPI.

Release 2.1.0

Major Features and Improvements

  • New FedMKT Federated Tuning Algorithm: Federated Mutual Knowledge Transfer for Large and Small Language Models
    • Support three distinct scenarios: Heterogeneous, Homogeneous, and One-to-One
    • Support one-way knowledge transfer from LLM to SLM
  • Introduce the InferDPT algorithm, which leverages differential privacy (DP) to facilitate privacy-preserving inference for large language models.
  • Introduce FATE-LLM Evaluate: evaluate FATE-LLM models in a few lines with the Python SDK or simple CLI commands (fate_llm evaluate); built-in evaluation cases included

Release 2.0.0

Major Features and Improvements

  • Adapt to the FATE v2.0 framework:
    • Migrate parameter-efficient fine-tuning training methods and models.
    • Migrate Standard Offsite-Tuning and Extended Offsite-Tuning (Federated Offsite-Tuning+).
    • New trainer, dataset, and data_processing function design.
  • New FedKSeed Federated Tuning Algorithm: train large language models in a federated learning setting with extremely low communication cost (a toy sketch of the idea follows below)
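
The release notes only name the algorithm, so the following is a toy, self-contained sketch of the seed-based zeroth-order idea that FedKSeed builds on; it is illustrative only, not FATE-LLM code or the exact protocol. The point it shows is that every update can be described by a random seed plus a single scalar, so parties exchange a short log of (seed, scalar) pairs instead of full model weights.

```python
# Toy illustration (not the FATE-LLM API): seed-based zeroth-order updates,
# where each update is fully determined by (seed, scalar gradient estimate).
import numpy as np

def zo_grad_scalar(loss_fn, theta, seed, eps=1e-3):
    """Two-point zeroth-order gradient estimate along a direction derived from `seed`."""
    z = np.random.default_rng(seed).standard_normal(theta.shape)
    return (loss_fn(theta + eps * z) - loss_fn(theta - eps * z)) / (2 * eps)

def replay(theta0, update_log, lr=0.01):
    """Rebuild the current weights from the shared (seed, scalar) history."""
    theta = theta0.copy()
    for seed, g in update_log:
        z = np.random.default_rng(seed).standard_normal(theta.shape)
        theta -= lr * g * z
    return theta

loss = lambda w: float(np.sum((w - 1.0) ** 2))  # toy objective standing in for an LLM loss
theta0 = np.zeros(4)
seed_pool = [0, 1, 2, 3, 4]                     # finite pool of candidate seeds

log = []                                        # the only thing ever communicated
for step in range(500):
    seed = seed_pool[step % len(seed_pool)]
    theta = replay(theta0, log)                 # any party can reconstruct the model
    log.append((seed, zo_grad_scalar(loss, theta, seed)))

print(replay(theta0, log))                      # approaches the optimum at w = 1
```

In this sketch the communicated payload is a list of (seed, scalar) pairs whose size is independent of the model dimension, which is the intuition behind the "extremely low communication cost" claim.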

Release 1.3.0

Major Features and Improvements

  • FTL-LLM (Federated Learning + Transfer Learning + LLM)
    • Standard Offsite-Tuning and Extended Offsite-Tuning (Federated Offsite-Tuning+) now supported
    • Framework available for Emulator and Adapter development
    • New Offsite-Tuning Trainer introduced
    • Includes built-in models such as the GPT-2 family, LLaMA-7B, and the Bloom family
  • FedIPR
    • Introduced WatermarkDataset as the foundational dataset class for backdoor-based watermarks
    • Added SignConv and SignLayerNorm blocks for feature-based watermark models
    • New FedIPR Trainer available
    • Built-in models with feature-based watermarks include AlexNet, ResNet-18, DistilBERT, and GPT-2
  • More models support parameter-efficient fine-tuning: ChatGLM2-6B and Bloom-7B1

Release 1.2.0

Major Features and Improvements

  • Support Federated Training of LLaMA-7B with parameter-efficient fine-tuning.

Release 1.1.0

Major Features and Improvements

  • Support Federated Training of ChatGLM-6B with parameter-efficient fine-tuning adapters such as LoRA and P-Tuning v2.
  • Integration of peft, which supports many parameter-efficient adapters (a minimal sketch follows below).
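
Since peft is the upstream Hugging Face library, attaching an adapter is not specific to FATE-LLM. The sketch below is plain peft usage under placeholder assumptions (GPT-2 as a stand-in base model, arbitrary LoRA hyperparameters); FATE-LLM's federated trainers wrap this kind of adapter configuration rather than replace it.

```python
# Generic peft/LoRA sketch (plain peft usage, not FATE-LLM's federated trainer):
# wrap a base model so that only the small LoRA adapter weights are trainable.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")  # placeholder stand-in for ChatGLM-6B etc.
lora = LoraConfig(
    r=8,                        # low-rank dimension of the adapter
    lora_alpha=16,              # scaling factor applied to the adapter output
    target_modules=["c_attn"],  # attention projection in GPT-2; module names differ per model
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora)
model.print_trainable_parameters()  # typically well under 1% of parameters are trainable
```

Because only the adapter weights are trainable, a federated setting only needs to exchange those small adapter tensors rather than the full model.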