Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
LLM chat interface using OpenAI-compatible APIs with streaming support and session management. Use when working with pywayne.llm.chat_bot module for creating...
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
This module provides a synchronous LLM chat interface compatible with OpenAI APIs (including local servers like Ollama).
```python
from pywayne.llm.chat_bot import LLMChat

# Create chat instance
chat = LLMChat(
    base_url="https://api.example.com/v1",
    api_key="your_api_key",
    model="deepseek-chat"
)

# Single-turn conversation (non-streaming)
response = chat.ask("Hello, LLM!", stream=False)
print(response)

# Streaming response
for token in chat.ask("Explain recursion", stream=True):
    print(token, end='', flush=True)
```
```python
# Use chat() for history tracking
for token in chat.chat("What is a class in Python?"):
    print(token, end='', flush=True)

# Continuation - remembers previous context
for token in chat.chat("How do I define a constructor?"):
    print(token, end='', flush=True)

# View history
for msg in chat.history:
    print(f"{msg['role']}: {msg['content']}")

# Clear history
chat.clear_history()
```
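Under the hood, multi-turn history in OpenAI-compatible APIs is just a list of role/content dictionaries. The sketch below illustrates that message format only; it is not pywayne's actual implementation:

```python
# Illustrative sketch of OpenAI-style history tracking
# (the message format, not pywayne's internal code).
history = []

def record_turn(user_msg: str, assistant_msg: str) -> None:
    """Append one user/assistant exchange in OpenAI message format."""
    history.append({"role": "user", "content": user_msg})
    history.append({"role": "assistant", "content": assistant_msg})

record_turn("What is a class in Python?", "A class is a blueprint for objects.")
record_turn("How do I define a constructor?", "Use the __init__ method.")

assert history[0]["role"] == "user"
assert len(history) == 4
```

Each call to `chat()` sends this accumulated list as the `messages` payload, which is why the model can resolve follow-up questions in context.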
```python
from pywayne.llm.chat_bot import LLMConfig

config = LLMConfig(
    base_url="https://api.example.com/v1",
    api_key="your_api_key",
    model="deepseek-chat",
    temperature=0.7,
    max_tokens=8192,
    top_p=1.0,
    frequency_penalty=0.0,
    presence_penalty=0.0,
    system_prompt="You are a helpful assistant"
)
chat = LLMChat(**config.to_dict())
```
```python
chat.update_system_prompt("You are now a Python expert, provide code examples")
```
```python
from pywayne.llm.chat_bot import ChatManager

manager = ChatManager(
    base_url="https://api.example.com/v1",
    api_key="your_api_key",
    model="deepseek-chat",
    timeout=300  # Session timeout in seconds
)

# Get or create chat instance (maintains per-session history)
chat1 = manager.get_chat("user1")
chat2 = manager.get_chat("user2")

# Sessions are independent
chat1.chat("Hello from user1")
chat2.chat("Hello from user2")

# Remove a session
manager.remove_chat("user1")
```
```python
custom_config = LLMConfig(
    base_url=base_url,
    api_key=api_key,
    model="deepseek-chat",
    temperature=0.9,
    system_prompt="You are a creative writer"
)
chat3 = manager.get_chat("user3", config=custom_config)
```
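The per-session behavior above (independent histories, a shared timeout) can be pictured with a plain dictionary keyed by session ID. This is a hypothetical sketch of the pattern, not ChatManager's actual code:

```python
import time

# Illustrative per-session store with an idle timeout,
# mirroring (not reproducing) ChatManager's behavior.
class SessionStore:
    def __init__(self, timeout: float):
        self.timeout = timeout
        self._sessions = {}  # chat_id -> (last_used, session_object)

    def get(self, chat_id: str):
        now = time.monotonic()
        # Drop sessions idle longer than the timeout.
        expired = [cid for cid, (ts, _) in self._sessions.items()
                   if now - ts > self.timeout]
        for cid in expired:
            del self._sessions[cid]
        # Create on first access, refresh the timestamp on every access.
        if chat_id not in self._sessions:
            self._sessions[chat_id] = (now, {"history": []})
        _, session = self._sessions[chat_id]
        self._sessions[chat_id] = (now, session)
        return session

store = SessionStore(timeout=300)
s1 = store.get("user1")
s2 = store.get("user2")
assert s1 is not s2              # sessions are independent
assert store.get("user1") is s1  # same id returns the same session
```

The design choice worth noting: expiring lazily on access (rather than with a background thread) keeps the store simple and thread-count low, at the cost of stale sessions lingering until the next lookup.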
| Method | Description |
| --- | --- |
| `ask(prompt, stream=False)` | Single-turn conversation without history |
| `chat(prompt, stream=True)` | Multi-turn conversation with history tracking |
| `update_system_prompt(prompt)` | Update system prompt in-place |
| `clear_history()` | Clear conversation history (keeps system prompt) |
| `history` (property) | Get copy of current conversation history |
| Method | Description |
| --- | --- |
| `get_chat(chat_id, stream=True, config=None)` | Get or create chat instance by ID |
| `remove_chat(chat_id)` | Remove chat session |
| Parameter | Default | Description |
| --- | --- | --- |
| `base_url` | required | API base URL (e.g., `https://api.deepseek.com/v1`) |
| `api_key` | required | API authentication key |
| `model` | `"deepseek-chat"` | Model name |
| `temperature` | `0.7` | Controls randomness (0-2) |
| `max_tokens` | `2048`/`8192` | Maximum output tokens |
| `top_p` | `1.0` | Nucleus sampling (0-1) |
| `frequency_penalty` | `0.0` | Reduces repetition (-2 to 2) |
| `presence_penalty` | `0.0` | Encourages new topics (-2 to 2) |
| `system_prompt` | `"你是一个严谨的助手"` ("You are a rigorous assistant") | System message |
| `timeout` | `inf` | Session timeout in seconds (ChatManager only) |
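These parameter names match the fields of an OpenAI-compatible `/chat/completions` request body, which suggests how a config maps onto a request. The sketch below is illustrative only (field names per the OpenAI API, not pywayne's actual serialization code):

```python
# Illustrative mapping from LLMConfig-style parameters to an
# OpenAI-compatible /chat/completions request body (an assumption
# about the wire format, not pywayne's internal code).
def build_payload(model, messages, temperature=0.7, max_tokens=8192,
                  top_p=1.0, frequency_penalty=0.0, presence_penalty=0.0):
    return {
        "model": model,
        "messages": messages,
        "temperature": temperature,
        "max_tokens": max_tokens,
        "top_p": top_p,
        "frequency_penalty": frequency_penalty,
        "presence_penalty": presence_penalty,
    }

payload = build_payload(
    model="deepseek-chat",
    messages=[
        {"role": "system", "content": "You are a helpful assistant"},
        {"role": "user", "content": "Hello, LLM!"},
    ],
    temperature=0.9,
)
assert payload["temperature"] == 0.9
assert payload["messages"][0]["role"] == "system"
```

The `system_prompt` is not a top-level field; it is delivered as the first message with `role: "system"`, which is why `update_system_prompt` can change it without touching the rest of the history.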