Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Azure Blob Storage SDK for Python. Use for uploading, downloading, listing blobs, managing containers, and blob lifecycle. Triggers: "blob storage", "BlobServiceClient", "ContainerClient", "BlobClient", "upload blob", "download blob".
Hand the extracted package to your coding agent with a concrete install brief rather than working through the steps manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
Client library for Azure Blob Storage: object storage for unstructured data.
```bash
pip install azure-storage-blob azure-identity
```
```bash
AZURE_STORAGE_ACCOUNT_NAME=<your-storage-account>
# Or use the full URL
AZURE_STORAGE_ACCOUNT_URL=https://<account>.blob.core.windows.net
```
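In Python, the account URL can be assembled from whichever variable is set. A minimal sketch; the variable names match the ones above:

```python
import os

# Prefer the full URL if set; otherwise build it from the account name.
account_url = os.environ.get("AZURE_STORAGE_ACCOUNT_URL") or (
    f"https://{os.environ['AZURE_STORAGE_ACCOUNT_NAME']}.blob.core.windows.net"
)
```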
```python
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

credential = DefaultAzureCredential()
account_url = "https://<account>.blob.core.windows.net"
blob_service_client = BlobServiceClient(account_url, credential=credential)
```
| Client | Purpose | Get from |
| --- | --- | --- |
| BlobServiceClient | Account-level operations | Direct instantiation |
| ContainerClient | Container operations | blob_service_client.get_container_client() |
| BlobClient | Single blob operations | container_client.get_blob_client() |
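To make the table concrete, here is how each narrower client is derived from the broader one (a minimal sketch reusing the blob_service_client created earlier; the container and blob names are placeholders):

```python
# Account-level client -> container-level client -> single-blob client.
container_client = blob_service_client.get_container_client("mycontainer")
blob_client = container_client.get_blob_client("sample.txt")
```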
```python
container_client = blob_service_client.get_container_client("mycontainer")
container_client.create_container()
```
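create_container() raises an error if the container already exists, so idempotent setup code typically catches it. A small sketch using azure-core's exception type:

```python
from azure.core.exceptions import ResourceExistsError

try:
    container_client.create_container()
except ResourceExistsError:
    pass  # Container already exists; safe to continue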
```python
# From file path
blob_client = blob_service_client.get_blob_client(
    container="mycontainer", blob="sample.txt"
)
with open("./local-file.txt", "rb") as data:
    blob_client.upload_blob(data, overwrite=True)

# From bytes/string
blob_client.upload_blob(b"Hello, World!", overwrite=True)

# From stream
import io
stream = io.BytesIO(b"Stream content")
blob_client.upload_blob(stream, overwrite=True)
```
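The SDK has no single call for uploading a directory tree; one common pattern is to walk the tree and map relative paths to folder-like blob names. A sketch with a hypothetical helper (upload_directory is not part of the SDK):

```python
from pathlib import Path

def upload_directory(container_client, local_dir, prefix=""):
    # Hypothetical helper: upload every file under local_dir,
    # preserving relative paths as folder-like blob names.
    for path in Path(local_dir).rglob("*"):
        if path.is_file():
            blob_name = f"{prefix}{path.relative_to(local_dir).as_posix()}"
            with path.open("rb") as data:
                container_client.upload_blob(blob_name, data, overwrite=True)
```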
```python
import io

blob_client = blob_service_client.get_blob_client(
    container="mycontainer", blob="sample.txt"
)

# To file
with open("./downloaded.txt", "wb") as file:
    download_stream = blob_client.download_blob()
    file.write(download_stream.readall())

# To memory
download_stream = blob_client.download_blob()
content = download_stream.readall()  # bytes

# Read into an existing buffer
stream = io.BytesIO()
num_bytes = blob_client.download_blob().readinto(stream)
```
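For blobs too large to hold in memory, the downloader can also be consumed incrementally via chunks(). A minimal sketch; the local file name is a placeholder:

```python
# Stream a large blob to disk chunk by chunk instead of buffering it all.
downloader = blob_client.download_blob()
with open("./downloaded-large.bin", "wb") as file:
    for chunk in downloader.chunks():
        file.write(chunk)
```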
```python
from azure.storage.blob import BlobPrefix

container_client = blob_service_client.get_container_client("mycontainer")

# List all blobs
for blob in container_client.list_blobs():
    print(f"{blob.name} - {blob.size} bytes")

# List with prefix (folder-like)
for blob in container_client.list_blobs(name_starts_with="logs/"):
    print(blob.name)

# Walk blob hierarchy (virtual directories)
for item in container_client.walk_blobs(delimiter="/"):
    if isinstance(item, BlobPrefix):
        print(f"Directory: {item.name}")
    else:
        print(f"Blob: {item.name}")
```
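list_blobs() returns a pager, so results can also be consumed page by page, for example to resume a long listing with a continuation token. A minimal sketch; the page size is arbitrary:

```python
# Page through results explicitly; the token lets a later call resume.
pager = container_client.list_blobs(results_per_page=100).by_page()
first_page = next(pager)
for blob in first_page:
    print(blob.name)

token = pager.continuation_token
resumed = container_client.list_blobs(results_per_page=100).by_page(
    continuation_token=token
)
```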
```python
blob_client.delete_blob()

# Delete with snapshots
blob_client.delete_blob(delete_snapshots="include")
```
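Deleting many blobs one call at a time is slow; ContainerClient.delete_blobs() batches several deletions into a single request (a minimal sketch; the blob names are placeholders, and the service caps each batch at 256 sub-requests):

```python
container_client = blob_service_client.get_container_client("mycontainer")
# Each name becomes one sub-request in a single batched call.
container_client.delete_blobs("old/a.txt", "old/b.txt", "old/c.txt")
```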
```python
from azure.storage.blob import BlobClient

# Configure chunk sizes for large uploads/downloads
blob_client = BlobClient(
    account_url=account_url,
    container_name="mycontainer",
    blob_name="large-file.zip",
    credential=credential,
    max_block_size=4 * 1024 * 1024,        # 4 MiB blocks
    max_single_put_size=64 * 1024 * 1024,  # 64 MiB single upload limit
)

# Parallel upload ('data' is a bytes object or file-like stream opened earlier)
blob_client.upload_blob(data, max_concurrency=4)

# Parallel download
download_stream = blob_client.download_blob(max_concurrency=4)
```
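Recent SDK releases also accept a progress_hook callback on uploads and downloads, which is handy for long transfers. A minimal sketch; it assumes your installed azure-storage-blob version supports progress_hook:

```python
def on_progress(current, total):
    # current: bytes transferred so far; total: expected size (may be None).
    print(f"{current}/{total} bytes")

with open("./large-file.zip", "rb") as data:
    blob_client.upload_blob(
        data, overwrite=True, max_concurrency=4, progress_hook=on_progress
    )
```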
```python
from datetime import datetime, timedelta, timezone
from azure.storage.blob import generate_blob_sas, BlobSasPermissions

sas_token = generate_blob_sas(
    account_name="<account>",
    container_name="mycontainer",
    blob_name="sample.txt",
    account_key="<account-key>",  # Or use a user delegation key
    permission=BlobSasPermissions(read=True),
    expiry=datetime.now(timezone.utc) + timedelta(hours=1),
)

# Use the SAS token
blob_url = f"https://<account>.blob.core.windows.net/mycontainer/sample.txt?{sas_token}"
```
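For the user delegation variant mentioned above, request a delegation key from the service client and pass it in place of the account key. A minimal sketch reusing blob_service_client, which must be authenticated with an Azure AD credential:

```python
from datetime import datetime, timedelta, timezone
from azure.storage.blob import generate_blob_sas, BlobSasPermissions

start = datetime.now(timezone.utc)
expiry = start + timedelta(hours=1)
delegation_key = blob_service_client.get_user_delegation_key(start, expiry)

sas_token = generate_blob_sas(
    account_name="<account>",
    container_name="mycontainer",
    blob_name="sample.txt",
    user_delegation_key=delegation_key,  # Instead of account_key
    permission=BlobSasPermissions(read=True),
    expiry=expiry,
)
```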
```python
from azure.storage.blob import ContentSettings

# Get properties
properties = blob_client.get_blob_properties()
print(f"Size: {properties.size}")
print(f"Content-Type: {properties.content_settings.content_type}")
print(f"Last modified: {properties.last_modified}")

# Set metadata
blob_client.set_blob_metadata(metadata={"category": "logs", "year": "2024"})

# Set content type
blob_client.set_http_headers(
    content_settings=ContentSettings(content_type="application/json")
)
```
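get_blob_properties() raises if the blob is missing, so probing code usually pairs it with exists() or catches the not-found error. A small sketch using azure-core's exception type:

```python
from azure.core.exceptions import ResourceNotFoundError

if blob_client.exists():
    properties = blob_client.get_blob_properties()

# Or handle the 404 directly:
try:
    properties = blob_client.get_blob_properties()
except ResourceNotFoundError:
    print("Blob not found")
```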
```python
from azure.identity.aio import DefaultAzureCredential
from azure.storage.blob.aio import BlobServiceClient

# Upload async
async def upload_async():
    credential = DefaultAzureCredential()
    async with BlobServiceClient(account_url, credential=credential) as client:
        blob_client = client.get_blob_client("mycontainer", "sample.txt")
        with open("./file.txt", "rb") as data:
            await blob_client.upload_blob(data, overwrite=True)
    await credential.close()

# Download async
async def download_async():
    credential = DefaultAzureCredential()
    async with BlobServiceClient(account_url, credential=credential) as client:
        blob_client = client.get_blob_client("mycontainer", "sample.txt")
        stream = await blob_client.download_blob()
        data = await stream.readall()
    await credential.close()
```
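Listing works the same way asynchronously, except iteration uses async for. A minimal sketch; account_url is assumed to be defined as in the earlier examples:

```python
from azure.identity.aio import DefaultAzureCredential
from azure.storage.blob.aio import BlobServiceClient

async def list_async():
    credential = DefaultAzureCredential()
    async with BlobServiceClient(account_url, credential=credential) as client:
        container_client = client.get_container_client("mycontainer")
        # Async pagers are consumed with "async for" instead of a plain loop.
        async for blob in container_client.list_blobs():
            print(blob.name)
    await credential.close()
```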
Best practices:
- Use DefaultAzureCredential instead of connection strings
- Use context managers for async clients
- Set overwrite=True explicitly when re-uploading
- Use max_concurrency for large file transfers
- Prefer readinto() over readall() for memory efficiency
- Use walk_blobs() for hierarchical listing
- Set appropriate content types for web-served blobs