{
  "schemaVersion": "1.0",
  "item": {
    "slug": "keras",
    "name": "Keras",
    "source": "tencent",
    "type": "skill",
    "category": "AI",
    "sourceUrl": "https://clawhub.ai/ivangdavila/keras",
    "canonicalUrl": "https://clawhub.ai/ivangdavila/keras",
    "targetPlatform": "OpenClaw"
  },
  "install": {
    "downloadMode": "redirect",
    "downloadUrl": "/downloads/keras",
    "sourceDownloadUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=keras",
    "sourcePlatform": "tencent",
    "targetPlatform": "OpenClaw",
    "installMethod": "Manual import",
    "extraction": "Extract archive",
    "prerequisites": [
      "OpenClaw"
    ],
    "packageFormat": "ZIP package",
    "includedAssets": [
      "SKILL.md",
      "architectures.md",
      "layers.md",
      "memory-template.md",
      "setup.md",
      "training.md"
    ],
    "primaryDoc": "SKILL.md",
    "quickSetup": [
      "Download the package from Yavira.",
      "Extract the archive and review SKILL.md first.",
      "Import or place the package into your OpenClaw setup."
    ],
    "agentAssist": {
      "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
      "steps": [
        "Download the package from Yavira.",
        "Extract it into a folder your agent can access.",
        "Paste one of the prompts below and point your agent at the extracted folder."
      ],
      "prompts": [
        {
          "label": "New install",
          "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
        },
        {
          "label": "Upgrade existing",
          "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
        }
      ]
    },
    "sourceHealth": {
      "source": "tencent",
      "status": "healthy",
      "reason": "direct_download_ok",
      "recommendedAction": "download",
      "checkedAt": "2026-04-30T16:55:25.780Z",
      "expiresAt": "2026-05-07T16:55:25.780Z",
      "httpStatus": 200,
      "finalUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=keras",
      "contentType": "application/zip",
      "probeMethod": "head",
      "details": {
        "probeUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=keras",
        "contentDisposition": "attachment; filename=\"keras-1.0.0.zip\"",
        "redirectLocation": null,
        "bodySnippet": null
      },
      "scope": "source",
      "summary": "Source download looks usable.",
      "detail": "Yavira can redirect you to the upstream package for this source.",
      "primaryActionLabel": "Download for OpenClaw",
      "primaryActionHref": "/downloads/keras"
    },
    "validation": {
      "installChecklist": [
        "Use the Yavira download entry.",
        "Review SKILL.md after the package is downloaded.",
        "Confirm the extracted package contains the expected setup assets."
      ],
      "postInstallChecks": [
        "Confirm the extracted package includes the expected docs or setup files.",
        "Validate the skill or prompts are available in your target agent workspace.",
        "Capture any manual follow-up steps the agent could not complete."
      ]
    },
    "downloadPageUrl": "https://openagent3.xyz/downloads/keras",
    "agentPageUrl": "https://openagent3.xyz/skills/keras/agent",
    "manifestUrl": "https://openagent3.xyz/skills/keras/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/keras/agent.md"
  },
  "agentAssist": {
    "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
    "steps": [
      "Download the package from Yavira.",
      "Extract it into a folder your agent can access.",
      "Paste one of the prompts below and point your agent at the extracted folder."
    ],
    "prompts": [
      {
        "label": "New install",
        "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
      },
      {
        "label": "Upgrade existing",
        "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
      }
    ]
  },
  "documentation": {
    "source": "clawhub",
    "primaryDoc": "SKILL.md",
    "sections": [
      {
        "title": "Setup",
        "body": "On first use, check setup.md for integration guidelines. The skill stores preferences in ~/keras/ when the user confirms."
      },
      {
        "title": "When to Use",
        "body": "User builds neural networks with Keras or TensorFlow. Agent handles model architecture, layer configuration, training loops, callbacks, debugging loss issues, and deployment preparation."
      },
      {
        "title": "Architecture",
        "body": "Memory lives in ~/keras/. See memory-template.md for setup.\n\n~/keras/\n├── memory.md          # Preferred architectures, hyperparams\n└── models/            # Saved model configs (optional)"
      },
      {
        "title": "Quick Reference",
        "body": "Topic\tFile\nSetup process\tsetup.md\nMemory template\tmemory-template.md\nLayer patterns\tlayers.md\nTraining diagnostics\ttraining.md\nCommon architectures\tarchitectures.md"
      },
      {
        "title": "1. Sequential vs Functional API",
        "body": "Sequential: simple stacks, no branching\nFunctional: multi-input/output, skip connections, shared layers\nSubclassing: custom forward pass, dynamic architectures\n\n# Sequential - simple stack\nmodel = keras.Sequential([\n    layers.Dense(64, activation='relu'),\n    layers.Dense(10, activation='softmax')\n])\n\n# Functional - flexible graphs\ninputs = keras.Input(shape=(784,))\nx = layers.Dense(64, activation='relu')(inputs)\noutputs = layers.Dense(10, activation='softmax')(x)\nmodel = keras.Model(inputs, outputs)"
      },
      {
        "title": "2. Input Shape Patterns",
        "body": "First layer needs input_shape (exclude batch)\nImages: (height, width, channels) for channels_last\nSequences: (timesteps, features)\nTabular: (features,)\n\n# Image input\nlayers.Conv2D(32, 3, input_shape=(224, 224, 3))\n\n# Sequence input\nlayers.LSTM(64, input_shape=(100, 50))  # 100 timesteps, 50 features\n\n# Tabular input\nlayers.Dense(64, input_shape=(20,))  # 20 features"
      },
      {
        "title": "3. Activation Functions",
        "body": "Task\tOutput Activation\tLoss\nBinary classification\tsigmoid\tbinary_crossentropy\nMulti-class\tsoftmax\tcategorical_crossentropy\nMulti-label\tsigmoid\tbinary_crossentropy\nRegression\tlinear (none)\tmse or mae"
      },
      {
        "title": "4. Regularization Stack",
        "body": "Apply in this order for overfitting:\n\nDropout - after dense/conv layers (0.2-0.5)\nBatchNorm - before or after activation\nL2 regularization - in layer (0.01-0.001)\nEarly stopping - callback with patience\n\nlayers.Dense(64, activation='relu', kernel_regularizer=keras.regularizers.l2(0.01))\nlayers.Dropout(0.3)\nlayers.BatchNormalization()"
      },
      {
        "title": "5. Callbacks Essentials",
        "body": "callbacks = [\n    keras.callbacks.EarlyStopping(\n        monitor='val_loss', patience=5, restore_best_weights=True\n    ),\n    keras.callbacks.ModelCheckpoint(\n        'best_model.keras', save_best_only=True\n    ),\n    keras.callbacks.ReduceLROnPlateau(\n        monitor='val_loss', factor=0.5, patience=3\n    ),\n    keras.callbacks.TensorBoard(log_dir='./logs')\n]"
      },
      {
        "title": "6. Data Pipeline",
        "body": "# tf.data for performance\ndataset = tf.data.Dataset.from_tensor_slices((x, y))\ndataset = dataset.shuffle(10000).batch(32).prefetch(tf.data.AUTOTUNE)\n\n# ImageDataGenerator for augmentation\ndatagen = keras.preprocessing.image.ImageDataGenerator(\n    rotation_range=20,\n    horizontal_flip=True,\n    validation_split=0.2\n)"
      },
      {
        "title": "7. Compile Checklist",
        "body": "model.compile(\n    optimizer=keras.optimizers.Adam(learning_rate=0.001),\n    loss='categorical_crossentropy',\n    metrics=['accuracy']\n)\n\nLearning rate: start 0.001, reduce on plateau\nBatch size: 32-128 typical, larger = smoother gradients"
      },
      {
        "title": "Common Traps",
        "body": "Input shape mismatch → check data shape vs model input_shape, exclude batch dim\nLoss is NaN → reduce learning rate, check for inf/nan in data, add gradient clipping\nValidation loss diverges → add regularization, reduce model capacity, more data\nModel not learning → check labels are correct, verify loss function matches task\nGPU OOM → reduce batch size, use mixed precision, gradient checkpointing\nSlow training → use tf.data pipeline with prefetch, enable XLA compilation"
      },
      {
        "title": "External Endpoints",
        "body": "Endpoint\tData Sent\tPurpose\nTensorFlow model hub\tNone (download only)\tPretrained weights when using weights='imagenet'\n\nNote: Transfer learning examples download pretrained weights on first use. Use weights=None for fully offline operation."
      },
      {
        "title": "Security & Privacy",
        "body": "Data that stays local:\n\nModel architectures and configs in ~/keras/\nTraining preferences and hyperparameters\n\nThis skill does NOT:\n\nUpload models or data anywhere\nAccess files outside ~/keras/ and working directory\nStore training data"
      },
      {
        "title": "Related Skills",
        "body": "Install with clawhub install <slug> if user confirms:\n\ntensorflow — TensorFlow operations and deployment\npytorch — Alternative deep learning framework\nai — General AI and ML patterns\nmodels — Model architecture design"
      },
      {
        "title": "Feedback",
        "body": "If useful: clawhub star keras\nStay updated: clawhub sync"
      }
    ],
    "body": "Setup\n\nOn first use, check setup.md for integration guidelines. The skill stores preferences in ~/keras/ when the user confirms.\n\nWhen to Use\n\nUser builds neural networks with Keras or TensorFlow. Agent handles model architecture, layer configuration, training loops, callbacks, debugging loss issues, and deployment preparation.\n\nArchitecture\n\nMemory lives in ~/keras/. See memory-template.md for setup.\n\n~/keras/\n├── memory.md          # Preferred architectures, hyperparams\n└── models/            # Saved model configs (optional)\n\nQuick Reference\nTopic\tFile\nSetup process\tsetup.md\nMemory template\tmemory-template.md\nLayer patterns\tlayers.md\nTraining diagnostics\ttraining.md\nCommon architectures\tarchitectures.md\nCore Rules\n1. Sequential vs Functional API\nSequential: simple stacks, no branching\nFunctional: multi-input/output, skip connections, shared layers\nSubclassing: custom forward pass, dynamic architectures\n# Sequential - simple stack\nmodel = keras.Sequential([\n    layers.Dense(64, activation='relu'),\n    layers.Dense(10, activation='softmax')\n])\n\n# Functional - flexible graphs\ninputs = keras.Input(shape=(784,))\nx = layers.Dense(64, activation='relu')(inputs)\noutputs = layers.Dense(10, activation='softmax')(x)\nmodel = keras.Model(inputs, outputs)\n\n2. Input Shape Patterns\nFirst layer needs input_shape (exclude batch)\nImages: (height, width, channels) for channels_last\nSequences: (timesteps, features)\nTabular: (features,)\n# Image input\nlayers.Conv2D(32, 3, input_shape=(224, 224, 3))\n\n# Sequence input\nlayers.LSTM(64, input_shape=(100, 50))  # 100 timesteps, 50 features\n\n# Tabular input\nlayers.Dense(64, input_shape=(20,))  # 20 features\n\n3. Activation Functions\nTask\tOutput Activation\tLoss\nBinary classification\tsigmoid\tbinary_crossentropy\nMulti-class\tsoftmax\tcategorical_crossentropy\nMulti-label\tsigmoid\tbinary_crossentropy\nRegression\tlinear (none)\tmse or mae\n4. Regularization Stack\n\nApply in this order for overfitting:\n\nDropout - after dense/conv layers (0.2-0.5)\nBatchNorm - before or after activation\nL2 regularization - in layer (0.001-0.01)\nEarly stopping - callback with patience\nlayers.Dense(64, activation='relu', kernel_regularizer=keras.regularizers.l2(0.01))\nlayers.Dropout(0.3)\nlayers.BatchNormalization()\n\n5. Callbacks Essentials\ncallbacks = [\n    keras.callbacks.EarlyStopping(\n        monitor='val_loss', patience=5, restore_best_weights=True\n    ),\n    keras.callbacks.ModelCheckpoint(\n        'best_model.keras', save_best_only=True\n    ),\n    keras.callbacks.ReduceLROnPlateau(\n        monitor='val_loss', factor=0.5, patience=3\n    ),\n    keras.callbacks.TensorBoard(log_dir='./logs')\n]\n\n6. Data Pipeline\n# tf.data for performance\ndataset = tf.data.Dataset.from_tensor_slices((x, y))\ndataset = dataset.shuffle(10000).batch(32).prefetch(tf.data.AUTOTUNE)\n\n# ImageDataGenerator for augmentation\ndatagen = keras.preprocessing.image.ImageDataGenerator(\n    rotation_range=20,\n    horizontal_flip=True,\n    validation_split=0.2\n)\n\n7. Compile Checklist\nmodel.compile(\n    optimizer=keras.optimizers.Adam(learning_rate=0.001),\n    loss='categorical_crossentropy',\n    metrics=['accuracy']\n)\n\nLearning rate: start 0.001, reduce on plateau\nBatch size: 32-128 typical, larger = smoother gradients\nCommon Traps\nInput shape mismatch → check data shape vs model input_shape, exclude batch dim\nLoss is NaN → reduce learning rate, check for inf/nan in data, add gradient clipping\nValidation loss diverges → add regularization, reduce model capacity, more data\nModel not learning → check labels are correct, verify loss function matches task\nGPU OOM → reduce batch size, use mixed precision, gradient checkpointing\nSlow training → use tf.data pipeline with prefetch, enable XLA compilation\nExternal Endpoints\nEndpoint\tData Sent\tPurpose\nTensorFlow model hub\tNone (download only)\tPretrained weights when using weights='imagenet'\n\nNote: Transfer learning examples download pretrained weights on first use. Use weights=None for fully offline operation.\n\nSecurity & Privacy\n\nData that stays local:\n\nModel architectures and configs in ~/keras/\nTraining preferences and hyperparameters\n\nThis skill does NOT:\n\nUpload models or data anywhere\nAccess files outside ~/keras/ and working directory\nStore training data\nRelated Skills\n\nInstall with clawhub install <slug> if user confirms:\n\ntensorflow — TensorFlow operations and deployment\npytorch — Alternative deep learning framework\nai — General AI and ML patterns\nmodels — Model architecture design\nFeedback\nIf useful: clawhub star keras\nStay updated: clawhub sync"
  },
  "trust": {
    "sourceLabel": "tencent",
    "provenanceUrl": "https://clawhub.ai/ivangdavila/keras",
    "publisherUrl": "https://clawhub.ai/ivangdavila/keras",
    "owner": "ivangdavila",
    "version": "1.0.0",
    "license": null,
    "verificationStatus": "Indexed source record"
  },
  "links": {
    "detailUrl": "https://openagent3.xyz/skills/keras",
    "downloadUrl": "https://openagent3.xyz/downloads/keras",
    "agentUrl": "https://openagent3.xyz/skills/keras/agent",
    "manifestUrl": "https://openagent3.xyz/skills/keras/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/keras/agent.md"
  }
}