Requirements

- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Upload many files to S3 with automatic organization by first-character prefixes.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
Upload files to S3 with automatic organization using first-character prefixes (e.g., a/apple.txt, b/banana.txt, 0-9/123.txt).
Use the included script for bulk uploads:

```shell
# Basic upload
./s3-bulk-upload.sh ./files my-bucket

# Dry run to preview
./s3-bulk-upload.sh ./files my-bucket --dry-run

# Use sync mode (faster for many files)
./s3-bulk-upload.sh ./files my-bucket --sync

# With storage class
./s3-bulk-upload.sh ./files my-bucket --storage-class STANDARD_IA
```
Verify AWS credentials are configured:

```shell
aws sts get-caller-identity
```

If this fails, ensure `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` are set, or configure them via `aws configure`.
Files are organized by the first character of their filename:

| First character | Prefix |
| --- | --- |
| a-z | Lowercase letter (e.g., `a/`, `b/`) |
| A-Z | Lowercased letter (e.g., `a/`, `b/`) |
| 0-9 | `0-9/` |
| Other | `_other/` |
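The mapping above can be condensed into a small shell helper. This is a sketch, not part of the packaged script; the function name `compute_prefix` is illustrative:

```shell
# Hypothetical helper: returns the S3 prefix for a filename,
# following the first-character table above.
compute_prefix() {
  local first
  first=$(printf '%s' "$1" | cut -c1 | tr '[:upper:]' '[:lower:]')
  case "$first" in
    [a-z]) printf '%s\n' "$first" ;;   # letters (uppercase folded) -> a/ .. z/
    [0-9]) printf '0-9\n' ;;           # digits -> 0-9/
    *)     printf '_other\n' ;;        # anything else -> _other/
  esac
}

compute_prefix "Apple.txt"   # a
compute_prefix "123.txt"     # 0-9
compute_prefix "_notes.md"   # _other
```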
Upload a single file with automatic prefix:

```shell
FILE="example.txt"
BUCKET="my-bucket"

# Compute prefix from first character (uppercase folds to lowercase)
FIRST_CHAR=$(echo "${FILE}" | cut -c1 | tr '[:upper:]' '[:lower:]')
if [[ "$FIRST_CHAR" =~ [a-z] ]]; then
  PREFIX="$FIRST_CHAR"
elif [[ "$FIRST_CHAR" =~ [0-9] ]]; then
  PREFIX="0-9"
else
  PREFIX="_other"
fi

aws s3 cp "$FILE" "s3://${BUCKET}/${PREFIX}/${FILE}"
```
Upload all files from a directory:

```shell
SOURCE_DIR="./files"
BUCKET="my-bucket"

for FILE in "$SOURCE_DIR"/*; do
  [ -f "$FILE" ] || continue
  BASENAME=$(basename "$FILE")
  FIRST_CHAR=$(echo "$BASENAME" | cut -c1 | tr '[:upper:]' '[:lower:]')
  if [[ "$FIRST_CHAR" =~ [a-z] ]]; then
    PREFIX="$FIRST_CHAR"
  elif [[ "$FIRST_CHAR" =~ [0-9] ]]; then
    PREFIX="0-9"
  else
    PREFIX="_other"
  fi
  aws s3 cp "$FILE" "s3://${BUCKET}/${PREFIX}/${BASENAME}"
done
```
For large uploads, stage files with symlinks, then use `aws s3 sync` (which follows symlinks by default):

```shell
SOURCE_DIR="./files"
STAGING_DIR="./staging"
BUCKET="my-bucket"

# Create staging directory with prefix structure
rm -rf "$STAGING_DIR"
mkdir -p "$STAGING_DIR"

for FILE in "$SOURCE_DIR"/*; do
  [ -f "$FILE" ] || continue
  BASENAME=$(basename "$FILE")
  FIRST_CHAR=$(echo "$BASENAME" | cut -c1 | tr '[:upper:]' '[:lower:]')
  if [[ "$FIRST_CHAR" =~ [a-z] ]]; then
    PREFIX="$FIRST_CHAR"
  elif [[ "$FIRST_CHAR" =~ [0-9] ]]; then
    PREFIX="0-9"
  else
    PREFIX="_other"
  fi
  mkdir -p "$STAGING_DIR/$PREFIX"
  ln -s "$(realpath "$FILE")" "$STAGING_DIR/$PREFIX/$BASENAME"
done

# Sync entire staging directory to S3
aws s3 sync "$STAGING_DIR" "s3://${BUCKET}/"

# Clean up
rm -rf "$STAGING_DIR"
```
List files by prefix:

```shell
BUCKET="my-bucket"
PREFIX="a"
aws s3 ls "s3://${BUCKET}/${PREFIX}/" --recursive
```

Generate a manifest of all uploaded files:

```shell
BUCKET="my-bucket"
aws s3 ls "s3://${BUCKET}/" --recursive | awk '{print $4}'
```

Count files per prefix:

```shell
BUCKET="my-bucket"
for PREFIX in {a..z} 0-9 _other; do
  COUNT=$(aws s3 ls "s3://${BUCKET}/${PREFIX}/" --recursive 2>/dev/null | wc -l | tr -d ' ')
  [ "$COUNT" -gt 0 ] && echo "$PREFIX: $COUNT files"
done
```
Common issues and solutions:

| Error | Cause | Solution |
| --- | --- | --- |
| AccessDenied | Insufficient permissions | Check the IAM policy grants `s3:PutObject` on the bucket |
| NoSuchBucket | Bucket doesn't exist | Create the bucket or check the bucket name spelling |
| InvalidAccessKeyId | Bad credentials | Verify `AWS_ACCESS_KEY_ID` is correct |
| ExpiredToken | Session token expired | Refresh credentials or re-authenticate |

Test bucket access before a bulk upload:

```shell
BUCKET="my-bucket"
echo "test" | aws s3 cp - "s3://${BUCKET}/_test_access.txt" && \
  aws s3 rm "s3://${BUCKET}/_test_access.txt" && \
  echo "Bucket access OK"
```
Optimize costs with storage classes:

```shell
# Standard (default)
aws s3 cp file.txt s3://bucket/prefix/file.txt

# Infrequent Access (cheaper storage, retrieval fee)
aws s3 cp file.txt s3://bucket/prefix/file.txt --storage-class STANDARD_IA

# Glacier Instant Retrieval (archive with fast access)
aws s3 cp file.txt s3://bucket/prefix/file.txt --storage-class GLACIER_IR

# Intelligent Tiering (auto-optimize based on access patterns)
aws s3 cp file.txt s3://bucket/prefix/file.txt --storage-class INTELLIGENT_TIERING
```

Add `--storage-class` to the bulk upload loops for cost optimization on infrequently accessed files.
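As a sketch of wiring a storage class through the per-file prefix logic, the command can be assembled and echoed first as a manual dry run. The names below are illustrative, and nothing here touches AWS:

```shell
# Illustrative names only; echo the command instead of running it.
BUCKET="my-bucket"
STORAGE_CLASS="STANDARD_IA"
FILE="report.txt"

# Same first-character prefix rule as the upload loops above
FIRST_CHAR=$(printf '%s' "$FILE" | cut -c1 | tr '[:upper:]' '[:lower:]')
case "$FIRST_CHAR" in
  [a-z]) PREFIX="$FIRST_CHAR" ;;
  [0-9]) PREFIX="0-9" ;;
  *)     PREFIX="_other" ;;
esac

CMD="aws s3 cp $FILE s3://${BUCKET}/${PREFIX}/${FILE} --storage-class ${STORAGE_CLASS}"
echo "$CMD"   # review, then run manually or pipe to sh
```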