{
  "schemaVersion": "1.0",
  "item": {
    "slug": "peft",
    "name": "Peft Fine Tuning",
    "source": "tencent",
    "type": "skill",
    "category": "AI 智能",
    "sourceUrl": "https://clawhub.ai/Desperado991128/peft",
    "canonicalUrl": "https://clawhub.ai/Desperado991128/peft",
    "targetPlatform": "OpenClaw"
  },
  "install": {
    "downloadMode": "redirect",
    "downloadUrl": "/downloads/peft",
    "sourceDownloadUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=peft",
    "sourcePlatform": "tencent",
    "targetPlatform": "OpenClaw",
    "installMethod": "Manual import",
    "extraction": "Extract archive",
    "prerequisites": [
      "OpenClaw"
    ],
    "packageFormat": "ZIP package",
    "includedAssets": [
      "SKILL.md",
      "references/advanced-usage.md",
      "references/troubleshooting.md"
    ],
    "primaryDoc": "SKILL.md",
    "quickSetup": [
      "Download the package from Yavira.",
      "Extract the archive and review SKILL.md first.",
      "Import or place the package into your OpenClaw setup."
    ],
    "agentAssist": {
      "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
      "steps": [
        "Download the package from Yavira.",
        "Extract it into a folder your agent can access.",
        "Paste one of the prompts below and point your agent at the extracted folder."
      ],
      "prompts": [
        {
          "label": "New install",
          "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
        },
        {
          "label": "Upgrade existing",
          "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
        }
      ]
    },
    "sourceHealth": {
      "source": "tencent",
      "status": "healthy",
      "reason": "direct_download_ok",
      "recommendedAction": "download",
      "checkedAt": "2026-04-30T16:55:25.780Z",
      "expiresAt": "2026-05-07T16:55:25.780Z",
      "httpStatus": 200,
      "finalUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=network",
      "contentType": "application/zip",
      "probeMethod": "head",
      "details": {
        "probeUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=network",
        "contentDisposition": "attachment; filename=\"network-1.0.0.zip\"",
        "redirectLocation": null,
        "bodySnippet": null
      },
      "scope": "source",
      "summary": "Source download looks usable.",
      "detail": "Yavira can redirect you to the upstream package for this source.",
      "primaryActionLabel": "Download for OpenClaw",
      "primaryActionHref": "/downloads/peft"
    },
    "validation": {
      "installChecklist": [
        "Use the Yavira download entry.",
        "Review SKILL.md after the package is downloaded.",
        "Confirm the extracted package contains the expected setup assets."
      ],
      "postInstallChecks": [
        "Confirm the extracted package includes the expected docs or setup files.",
        "Validate the skill or prompts are available in your target agent workspace.",
        "Capture any manual follow-up steps the agent could not complete."
      ]
    },
    "downloadPageUrl": "https://openagent3.xyz/downloads/peft",
    "agentPageUrl": "https://openagent3.xyz/skills/peft/agent",
    "manifestUrl": "https://openagent3.xyz/skills/peft/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/peft/agent.md"
  },
  "agentAssist": {
    "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
    "steps": [
      "Download the package from Yavira.",
      "Extract it into a folder your agent can access.",
      "Paste one of the prompts below and point your agent at the extracted folder."
    ],
    "prompts": [
      {
        "label": "New install",
        "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
      },
      {
        "label": "Upgrade existing",
        "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
      }
    ]
  },
  "documentation": {
    "source": "clawhub",
    "primaryDoc": "SKILL.md",
    "sections": [
      {
        "title": "PEFT (Parameter-Efficient Fine-Tuning)",
        "body": "Fine-tune LLMs by training <1% of parameters using LoRA, QLoRA, and 25+ adapter methods."
      },
      {
        "title": "When to use PEFT",
        "body": "Use PEFT/LoRA when:\n\nFine-tuning 7B-70B models on consumer GPUs (RTX 4090, A100)\nNeed to train <1% parameters (6MB adapters vs 14GB full model)\nWant fast iteration with multiple task-specific adapters\nDeploying multiple fine-tuned variants from one base model\n\nUse QLoRA (PEFT + quantization) when:\n\nFine-tuning 70B models on single 24GB GPU\nMemory is the primary constraint\nCan accept ~5% quality trade-off vs full fine-tuning\n\nUse full fine-tuning instead when:\n\nTraining small models (<1B parameters)\nNeed maximum quality and have compute budget\nSignificant domain shift requires updating all weights"
      },
      {
        "title": "Installation",
        "body": "# Basic installation\npip install peft\n\n# With quantization support (recommended)\npip install peft bitsandbytes\n\n# Full stack\npip install peft transformers accelerate bitsandbytes datasets"
      },
      {
        "title": "LoRA fine-tuning (standard)",
        "body": "from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments, Trainer\nfrom peft import get_peft_model, LoraConfig, TaskType\nfrom datasets import load_dataset\n\n# Load base model\nmodel_name = \"meta-llama/Llama-3.1-8B\"\nmodel = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=\"auto\", device_map=\"auto\")\ntokenizer = AutoTokenizer.from_pretrained(model_name)\ntokenizer.pad_token = tokenizer.eos_token\n\n# LoRA configuration\nlora_config = LoraConfig(\n    task_type=TaskType.CAUSAL_LM,\n    r=16,                          # Rank (8-64, higher = more capacity)\n    lora_alpha=32,                 # Scaling factor (typically 2*r)\n    lora_dropout=0.05,             # Dropout for regularization\n    target_modules=[\"q_proj\", \"v_proj\", \"k_proj\", \"o_proj\"],  # Attention layers\n    bias=\"none\"                    # Don't train biases\n)\n\n# Apply LoRA\nmodel = get_peft_model(model, lora_config)\nmodel.print_trainable_parameters()\n# Output: trainable params: 13,631,488 || all params: 8,043,307,008 || trainable%: 0.17%\n\n# Prepare dataset\ndataset = load_dataset(\"databricks/databricks-dolly-15k\", split=\"train\")\n\ndef tokenize(example):\n    text = f\"### Instruction:\\n{example['instruction']}\\n\\n### Response:\\n{example['response']}\"\n    return tokenizer(text, truncation=True, max_length=512, padding=\"max_length\")\n\ntokenized = dataset.map(tokenize, remove_columns=dataset.column_names)\n\n# Training\ntraining_args = TrainingArguments(\n    output_dir=\"./lora-llama\",\n    num_train_epochs=3,\n    per_device_train_batch_size=4,\n    gradient_accumulation_steps=4,\n    learning_rate=2e-4,\n    fp16=True,\n    logging_steps=10,\n    save_strategy=\"epoch\"\n)\n\ntrainer = Trainer(\n    model=model,\n    args=training_args,\n    train_dataset=tokenized,\n    data_collator=lambda data: {\"input_ids\": torch.stack([f[\"input_ids\"] for f in data]),\n                                 \"attention_mask\": torch.stack([f[\"attention_mask\"] for f in data]),\n                                 \"labels\": torch.stack([f[\"input_ids\"] for f in data])}\n)\n\ntrainer.train()\n\n# Save adapter only (6MB vs 16GB)\nmodel.save_pretrained(\"./lora-llama-adapter\")"
      },
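      {
        "title": "Quick inference check (sketch)",
        "body": "A minimal sanity check after training, not from the upstream docs: reload the saved adapter and generate once. The adapter path matches the save step above; the prompt is illustrative and follows the instruction format used during training.\n\nimport torch\nfrom transformers import AutoTokenizer\nfrom peft import AutoPeftModelForCausalLM\n\n# Reload base model + adapter from the path saved above\nmodel = AutoPeftModelForCausalLM.from_pretrained(\"./lora-llama-adapter\", device_map=\"auto\")\ntokenizer = AutoTokenizer.from_pretrained(\"meta-llama/Llama-3.1-8B\")\n\n# Illustrative prompt in the training format\nprompt = \"### Instruction:\\nSummarize what LoRA does.\\n\\n### Response:\\n\"\ninputs = tokenizer(prompt, return_tensors=\"pt\").to(model.device)\n\nwith torch.no_grad():\n    output = model.generate(**inputs, max_new_tokens=100)\nprint(tokenizer.decode(output[0], skip_special_tokens=True))"
      },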
      {
        "title": "QLoRA fine-tuning (memory-efficient)",
        "body": "from transformers import AutoModelForCausalLM, BitsAndBytesConfig\nfrom peft import get_peft_model, LoraConfig, prepare_model_for_kbit_training\n\n# 4-bit quantization config\nbnb_config = BitsAndBytesConfig(\n    load_in_4bit=True,\n    bnb_4bit_quant_type=\"nf4\",           # NormalFloat4 (best for LLMs)\n    bnb_4bit_compute_dtype=\"bfloat16\",   # Compute in bf16\n    bnb_4bit_use_double_quant=True       # Nested quantization\n)\n\n# Load quantized model\nmodel = AutoModelForCausalLM.from_pretrained(\n    \"meta-llama/Llama-3.1-70B\",\n    quantization_config=bnb_config,\n    device_map=\"auto\"\n)\n\n# Prepare for training (enables gradient checkpointing)\nmodel = prepare_model_for_kbit_training(model)\n\n# LoRA config for QLoRA\nlora_config = LoraConfig(\n    r=64,                              # Higher rank for 70B\n    lora_alpha=128,\n    lora_dropout=0.1,\n    target_modules=[\"q_proj\", \"v_proj\", \"k_proj\", \"o_proj\", \"gate_proj\", \"up_proj\", \"down_proj\"],\n    bias=\"none\",\n    task_type=\"CAUSAL_LM\"\n)\n\nmodel = get_peft_model(model, lora_config)\n# 70B model now fits on single 24GB GPU!"
      },
      {
        "title": "Rank (r) - capacity vs efficiency",
        "body": "RankTrainable ParamsMemoryQualityUse Case4~3MMinimalLowerSimple tasks, prototyping8~7MLowGoodRecommended starting point16~14MMediumBetterGeneral fine-tuning32~27MHigherHighComplex tasks64~54MHighHighestDomain adaptation, 70B models"
      },
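      {
        "title": "Estimating rank cost (sketch)",
        "body": "A back-of-envelope check of the table above, not a PEFT API: each adapted d_out x d_in weight gains r*(d_in + d_out) LoRA parameters (A is r x d_in, B is d_out x r). The shapes below assume Llama 3.1 8B attention (32 layers, hidden 4096, GQA k/v dim 1024); in practice just call model.print_trainable_parameters().\n\n# Per-module LoRA params: A (r x d_in) + B (d_out x r)\ndef lora_params(d_in, d_out, r):\n    return r * (d_in + d_out)\n\nr = 16\nper_layer = (\n    lora_params(4096, 4096, r)    # q_proj\n    + lora_params(4096, 1024, r)  # k_proj\n    + lora_params(4096, 1024, r)  # v_proj\n    + lora_params(4096, 4096, r)  # o_proj\n)\ntotal = 32 * per_layer\nprint(total)  # 13,631,488 -- matches print_trainable_parameters() above\nprint(f\"{100 * total / 8_043_307_008:.2f}%\")  # ~0.17% of 8B params"
      },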
      {
        "title": "Alpha (lora_alpha) - scaling factor",
        "body": "# Rule of thumb: alpha = 2 * rank\nLoraConfig(r=16, lora_alpha=32)  # Standard\nLoraConfig(r=16, lora_alpha=16)  # Conservative (lower learning rate effect)\nLoraConfig(r=16, lora_alpha=64)  # Aggressive (higher learning rate effect)"
      },
      {
        "title": "Target modules by architecture",
        "body": "# Llama / Mistral / Qwen\ntarget_modules = [\"q_proj\", \"v_proj\", \"k_proj\", \"o_proj\", \"gate_proj\", \"up_proj\", \"down_proj\"]\n\n# GPT-2 / GPT-Neo\ntarget_modules = [\"c_attn\", \"c_proj\", \"c_fc\"]\n\n# Falcon\ntarget_modules = [\"query_key_value\", \"dense\", \"dense_h_to_4h\", \"dense_4h_to_h\"]\n\n# BLOOM\ntarget_modules = [\"query_key_value\", \"dense\", \"dense_h_to_4h\", \"dense_4h_to_h\"]\n\n# Auto-detect all linear layers\ntarget_modules = \"all-linear\"  # PEFT 0.6.0+"
      },
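      {
        "title": "Discovering module names (sketch)",
        "body": "If your architecture isn't listed above, one way to find candidate names is to scan the model for linear-like layers; this is a generic PyTorch sketch, not a PEFT API. target_modules matches the trailing name of each module.\n\nimport torch.nn as nn\nfrom transformers import AutoModelForCausalLM\n\nmodel = AutoModelForCausalLM.from_pretrained(\"gpt2\")  # any architecture\n\n# Collect the trailing name of every linear-like layer\n# (GPT-2 uses transformers' Conv1D in place of nn.Linear)\nnames = set()\nfor full_name, module in model.named_modules():\n    if isinstance(module, nn.Linear) or type(module).__name__ == \"Conv1D\":\n        names.add(full_name.split(\".\")[-1])\nprint(sorted(names))\n# gpt2 -> ['c_attn', 'c_fc', 'c_proj', 'lm_head']\n# Exclude the output head (lm_head) when choosing target_modules."
      },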
      {
        "title": "Load trained adapter",
        "body": "from peft import PeftModel, AutoPeftModelForCausalLM\nfrom transformers import AutoModelForCausalLM\n\n# Option 1: Load with PeftModel\nbase_model = AutoModelForCausalLM.from_pretrained(\"meta-llama/Llama-3.1-8B\")\nmodel = PeftModel.from_pretrained(base_model, \"./lora-llama-adapter\")\n\n# Option 2: Load directly (recommended)\nmodel = AutoPeftModelForCausalLM.from_pretrained(\n    \"./lora-llama-adapter\",\n    device_map=\"auto\"\n)"
      },
      {
        "title": "Merge adapter into base model",
        "body": "# Merge for deployment (no adapter overhead)\nmerged_model = model.merge_and_unload()\n\n# Save merged model\nmerged_model.save_pretrained(\"./llama-merged\")\ntokenizer.save_pretrained(\"./llama-merged\")\n\n# Push to Hub\nmerged_model.push_to_hub(\"username/llama-finetuned\")"
      },
      {
        "title": "Multi-adapter serving",
        "body": "from peft import PeftModel\n\n# Load base with first adapter\nmodel = AutoPeftModelForCausalLM.from_pretrained(\"./adapter-task1\")\n\n# Load additional adapters\nmodel.load_adapter(\"./adapter-task2\", adapter_name=\"task2\")\nmodel.load_adapter(\"./adapter-task3\", adapter_name=\"task3\")\n\n# Switch between adapters at runtime\nmodel.set_adapter(\"task1\")  # Use task1 adapter\noutput1 = model.generate(**inputs)\n\nmodel.set_adapter(\"task2\")  # Switch to task2\noutput2 = model.generate(**inputs)\n\n# Disable adapters (use base model)\nwith model.disable_adapter():\n    base_output = model.generate(**inputs)"
      },
      {
        "title": "PEFT methods comparison",
        "body": "MethodTrainable %MemorySpeedBest ForLoRA0.1-1%LowFastGeneral fine-tuningQLoRA0.1-1%Very LowMediumMemory-constrainedAdaLoRA0.1-1%LowMediumAutomatic rank selectionIA30.01%MinimalFastestFew-shot adaptationPrefix Tuning0.1%LowMediumGeneration controlPrompt Tuning0.001%MinimalFastSimple task adaptationP-Tuning v20.1%LowMediumNLU tasks"
      },
      {
        "title": "IA3 (minimal parameters)",
        "body": "from peft import IA3Config\n\nia3_config = IA3Config(\n    target_modules=[\"q_proj\", \"v_proj\", \"k_proj\", \"down_proj\"],\n    feedforward_modules=[\"down_proj\"]\n)\nmodel = get_peft_model(model, ia3_config)\n# Trains only 0.01% of parameters!"
      },
      {
        "title": "Prefix Tuning",
        "body": "from peft import PrefixTuningConfig\n\nprefix_config = PrefixTuningConfig(\n    task_type=\"CAUSAL_LM\",\n    num_virtual_tokens=20,      # Prepended tokens\n    prefix_projection=True       # Use MLP projection\n)\nmodel = get_peft_model(model, prefix_config)"
      },
      {
        "title": "With TRL (SFTTrainer)",
        "body": "from trl import SFTTrainer, SFTConfig\nfrom peft import LoraConfig\n\nlora_config = LoraConfig(r=16, lora_alpha=32, target_modules=\"all-linear\")\n\ntrainer = SFTTrainer(\n    model=model,\n    args=SFTConfig(output_dir=\"./output\", max_seq_length=512),\n    train_dataset=dataset,\n    peft_config=lora_config,  # Pass LoRA config directly\n)\ntrainer.train()"
      },
      {
        "title": "With Axolotl (YAML config)",
        "body": "# axolotl config.yaml\nadapter: lora\nlora_r: 16\nlora_alpha: 32\nlora_dropout: 0.05\nlora_target_modules:\n  - q_proj\n  - v_proj\n  - k_proj\n  - o_proj\nlora_target_linear: true  # Target all linear layers"
      },
      {
        "title": "With vLLM (inference)",
        "body": "from vllm import LLM\nfrom vllm.lora.request import LoRARequest\n\n# Load base model with LoRA support\nllm = LLM(model=\"meta-llama/Llama-3.1-8B\", enable_lora=True)\n\n# Serve with adapter\noutputs = llm.generate(\n    prompts,\n    lora_request=LoRARequest(\"adapter1\", 1, \"./lora-adapter\")\n)"
      },
      {
        "title": "Memory usage (Llama 3.1 8B)",
        "body": "MethodGPU MemoryTrainable ParamsFull fine-tuning60+ GB8B (100%)LoRA r=1618 GB14M (0.17%)QLoRA r=166 GB14M (0.17%)IA316 GB800K (0.01%)"
      },
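      {
        "title": "Measuring memory yourself (sketch)",
        "body": "To check the numbers above on your own hardware, a minimal sketch using PyTorch's CUDA allocator stats (assumes a single GPU at device 0):\n\nimport torch\n\ntorch.cuda.reset_peak_memory_stats()\n# ... run trainer.train() or a few training steps here ...\npeak_gb = torch.cuda.max_memory_allocated() / 1024**3\nprint(f\"Peak allocated: {peak_gb:.1f} GB\")\n# Allocator stats miss CUDA context overhead, so nvidia-smi will report more."
      },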
      {
        "title": "Training speed (A100 80GB)",
        "body": "MethodTokens/secvs Full FTFull FT2,5001xLoRA3,2001.3xQLoRA2,1000.84x"
      },
      {
        "title": "Quality (MMLU benchmark)",
        "body": "ModelFull FTLoRAQLoRALlama 2-7B45.344.844.1Llama 2-13B54.854.253.5"
      },
      {
        "title": "CUDA OOM during training",
        "body": "# Solution 1: Enable gradient checkpointing\nmodel.gradient_checkpointing_enable()\n\n# Solution 2: Reduce batch size + increase accumulation\nTrainingArguments(\n    per_device_train_batch_size=1,\n    gradient_accumulation_steps=16\n)\n\n# Solution 3: Use QLoRA\nfrom transformers import BitsAndBytesConfig\nbnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type=\"nf4\")"
      },
      {
        "title": "Adapter not applying",
        "body": "# Verify adapter is active\nprint(model.active_adapters)  # Should show adapter name\n\n# Check trainable parameters\nmodel.print_trainable_parameters()\n\n# Ensure model in training mode\nmodel.train()"
      },
      {
        "title": "Quality degradation",
        "body": "# Increase rank\nLoraConfig(r=32, lora_alpha=64)\n\n# Target more modules\ntarget_modules = \"all-linear\"\n\n# Use more training data and epochs\nTrainingArguments(num_train_epochs=5)\n\n# Lower learning rate\nTrainingArguments(learning_rate=1e-4)"
      },
      {
        "title": "Best practices",
        "body": "Start with r=8-16, increase if quality insufficient\nUse alpha = 2 * rank as starting point\nTarget attention + MLP layers for best quality/efficiency\nEnable gradient checkpointing for memory savings\nSave adapters frequently (small files, easy rollback)\nEvaluate on held-out data before merging\nUse QLoRA for 70B+ models on consumer hardware"
      },
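      {
        "title": "Held-out evaluation before merging (sketch)",
        "body": "One way to act on the evaluation practice above: compare held-out loss with the adapter enabled and disabled. A minimal sketch; eval_batches is a hypothetical iterable of tokenized batches like those built in the LoRA section.\n\nimport torch\n\n@torch.no_grad()\ndef heldout_loss(model, batches):\n    model.eval()\n    losses = []\n    for batch in batches:  # each batch: input_ids, attention_mask, labels\n        losses.append(model(**batch).loss.item())\n    return sum(losses) / len(losses)\n\nadapter_loss = heldout_loss(model, eval_batches)\nwith model.disable_adapter():  # PEFT context manager, as shown earlier\n    base_loss = heldout_loss(model, eval_batches)\nprint(f\"base: {base_loss:.3f}  adapter: {adapter_loss:.3f}\")\n# Only merge if the adapter actually improves held-out loss."
      },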
      {
        "title": "References",
        "body": "Advanced Usage - DoRA, LoftQ, rank stabilization, custom modules\nTroubleshooting - Common errors, debugging, optimization"
      },
      {
        "title": "Resources",
        "body": "GitHub: https://github.com/huggingface/peft\nDocs: https://huggingface.co/docs/peft\nLoRA Paper: arXiv:2106.09685\nQLoRA Paper: arXiv:2305.14314\nModels: https://huggingface.co/models?library=peft"
      }
    ]
  },
  "trust": {
    "sourceLabel": "tencent",
    "provenanceUrl": "https://clawhub.ai/Desperado991128/peft",
    "publisherUrl": "https://clawhub.ai/Desperado991128/peft",
    "owner": "Desperado991128",
    "version": "0.1.0",
    "license": null,
    "verificationStatus": "Indexed source record"
  },
  "links": {
    "detailUrl": "https://openagent3.xyz/skills/peft",
    "downloadUrl": "https://openagent3.xyz/downloads/peft",
    "agentUrl": "https://openagent3.xyz/skills/peft/agent",
    "manifestUrl": "https://openagent3.xyz/skills/peft/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/peft/agent.md"
  }
}