Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Detect and report the AI agent execution environment (sandboxed or bare metal and optionally installed agent skills) to an open research survey. Use when the...
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
> I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.

> I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
This skill runs a local detection script, shows the user the collected data, then submits it to the open research survey at internetwarte.eu/agentsetup. No personally identifying information is collected or transmitted.
| Field | Description |
| --- | --- |
| os | OS family: linux / windows / macos / other |
| arch | CPU architecture (e.g. x86_64, arm64) |
| container.is_container | Whether a container runtime was detected |
| container.type | docker / podman / lxc / none / hint |
| vm.is_vm | Whether a hypervisor was detected |
| vm.type | kvm / vmware / virtualbox / none / hint |
| signals.* | Five boolean detection indicators (see Privacy section) |

The backend additionally records the submitting IP address, anonymized at the /24 network level. The User-Agent header is set by the agent to identify itself (e.g. Claude Code, OpenAI Codex, or others) along with the model version (Sonnet 4.6, GPT-5.3, or others) that submitted the survey.
```shell
RESULT_FILE="$(mktemp outbox/agent-setup-survey-result.XXXXXX.json)"
python3 scripts/environment-survey.py > "$RESULT_FILE"
```

Show the JSON output to the user and ask about metadata before submitting.
Ask the user whether they want to share more information about installed skills:

- "How many skills does your agent have available?" (0, 1-10, >10) - default: 0
- "Which skill names would you like to share?" (prompt the user with a comma-separated list of installed skills, excluding the survey skill itself, and let them decide which ones to submit)
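The answers above can be mapped onto the `skill_disclosure` payload fields. A hypothetical helper sketch; the function name is illustrative, and the survey skill's own name (assumed here to be `agent-setup-survey`) is excluded from the shared list:

```python
def build_skill_disclosure(skill_count, shared_names):
    """Map user answers to the skill_disclosure payload object.

    skill_count: total number of skills the agent has available.
    shared_names: skill names the user agreed to share.
    """
    # Bucket the count into the options offered to the user.
    if skill_count == 0:
        bucket = "0"
    elif skill_count <= 10:
        bucket = "1-10"
    else:
        bucket = ">10"
    # Exclude the survey skill itself (name assumed here).
    names = [n for n in shared_names if n != "agent-setup-survey"]
    return {"skills_added": bucket, "skills_names": names}
```

The returned dict can be merged into the default payload shown below before submission.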
POST the payload with the metadata extension as JSON to https://internetwarte.eu/submit. Set User-Agent to the name of the executing agent (e.g. Claude Code, OpenAI Codex, Gemini CLI, ...). Default payload:

```json
{
  "os": "linux/windows/macOS",
  "arch": "<from_survey>",
  "container": { "is_container": true/false, "type": "<from_survey>" },
  "vm": { "is_vm": true/false, "type": "<from_survey>" },
  "signals": {
    "has_dockerenv": true/false,
    "has_containerenv": true/false,
    "cgroup_hints": true/false,
    "cpuinfo_hypervisor_flag": true/false,
    "dmi_vm_hints": true/false
  },
  "skill_disclosure": {
    "skills_added": "1-10",
    "skills_names": ["<skill_name1>", "<skill_name2>"]
  }
}
```

On failure, save the payload to outbox/ for manual upload.
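The submission flow with the outbox fallback could look roughly like this in Python. A minimal sketch, not the skill's actual implementation: the endpoint and User-Agent convention come from the docs above, while the function name and fallback file naming are assumptions:

```python
import json
import os
import time
import urllib.request


def submit_survey(payload, agent_name="Claude Code",
                  url="https://internetwarte.eu/submit"):
    """POST the survey payload; on failure, save it to outbox/ instead.

    Returns True on a 2xx response, False if the payload was written
    to outbox/ for manual upload.
    """
    data = json.dumps(payload).encode("utf-8")
    req = urllib.request.Request(
        url,
        data=data,
        headers={
            "Content-Type": "application/json",
            # Identifies the executing agent, per the survey docs.
            "User-Agent": agent_name,
        },
        method="POST",
    )
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return 200 <= resp.status < 300
    except OSError:
        # Network or server failure: keep the payload for manual upload.
        os.makedirs("outbox", exist_ok=True)
        fallback = os.path.join("outbox", f"survey-{int(time.time())}.json")
        with open(fallback, "w") as f:
            json.dump(payload, f, indent=2)
        return False
```

`urllib.error.URLError` subclasses `OSError`, so one `except` clause covers both connection failures and local filesystem surprises during the request.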
Signals collected:

- has_dockerenv - /.dockerenv file present
- has_containerenv - /run/.containerenv file present
- cgroup_hints - cgroup paths mention docker/kubepods/lxc/…
- cpuinfo_hypervisor_flag - /proc/cpuinfo contains hypervisor
- dmi_vm_hints - DMI strings match VM vendor keywords (raw strings are NOT sent)
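The checks above can be sketched in a few lines of Python. This is an illustration of how such signals might be gathered; the actual scripts/environment-survey.py may read different paths or use different heuristics:

```python
import os
import re


def collect_signals():
    """Gather the five boolean detection indicators (sketch)."""
    signals = {
        # Docker leaves /.dockerenv at the container root.
        "has_dockerenv": os.path.exists("/.dockerenv"),
        # Podman creates /run/.containerenv.
        "has_containerenv": os.path.exists("/run/.containerenv"),
        "cgroup_hints": False,
        "cpuinfo_hypervisor_flag": False,
        "dmi_vm_hints": False,
    }
    try:
        with open("/proc/1/cgroup") as f:
            signals["cgroup_hints"] = bool(
                re.search(r"docker|kubepods|lxc", f.read()))
    except OSError:
        pass
    try:
        with open("/proc/cpuinfo") as f:
            signals["cpuinfo_hypervisor_flag"] = "hypervisor" in f.read()
    except OSError:
        pass
    try:
        # Only the boolean match result is kept; raw DMI strings are not sent.
        with open("/sys/class/dmi/id/sys_vendor") as f:
            vendor = f.read().lower()
        signals["dmi_vm_hints"] = any(
            k in vendor for k in ("qemu", "kvm", "vmware", "virtualbox"))
    except OSError:
        pass
    return signals
```

Every probe is wrapped in a try/except so the script degrades gracefully on platforms where /proc or /sys is absent.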
Dashboard: https://internetwarte.eu/agentsetup