What if your real-world robot could get better simply by chatting with you?
Current physical-AI models (e.g., embodied VLA models) struggle to obtain dense, accurate reward signals from real-world interaction, which limits their ability to continuously improve robot manipulation performance.
PhysClaw leverages OpenClaw agents to efficiently convert simple human chat into dense, accurate reinforcement-learning signals, enabling robots to learn continuously from natural-language feedback.
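A minimal sketch of the chat-to-reward idea described above. The real pipeline uses VLM/LLM-based OpenClaw agents to score feedback; here a toy keyword lexicon stands in for the language model, and the function name and word lists are illustrative assumptions, not PhysClaw's actual API.

```python
# Toy stand-in for the LLM scorer: two hypothetical sentiment lexicons.
POSITIVE = {"good", "great", "perfect", "yes", "closer", "better"}
NEGATIVE = {"bad", "wrong", "no", "worse", "missed", "dropped"}

def chat_to_reward(feedback: str) -> float:
    """Convert one free-form chat message into a scalar reward in [-1.0, 1.0]."""
    words = feedback.lower().replace(",", " ").replace(".", " ").split()
    if not words:
        return 0.0
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    # Normalize by message length so long messages do not dominate the
    # reward scale, then clamp to the [-1, 1] range an RL loop expects.
    return max(-1.0, min(1.0, score / len(words) * 5))
```

In the full system, each scored message would be attached to the trajectory step it comments on, giving the policy a dense reward stream instead of a single sparse success flag.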
A user-friendly, Node-centric architecture for efficient management of physical-world AI models and entities.
We are actively opening more high-impact modules and benchmark directions for the community. Join as a contributor to co-build the next generation of physical-world continual learning.
Become a Contributor

Enable Vision-Language-Action models to improve continuously through simple conversational feedback.
Refine and update World Models using natural language interactions to better understand and predict physical environments.
Open for contributors: build richer failure-cause attribution, confidence tracking, and interpretable diagnostics for Value Models.
Open for contributors: extend unified sensor interfaces across robot types, with robust state normalization and plug-in adapters.
Open for contributors: design memory-retrieval strategies that decide when to prioritize Value Model tuning versus VLA/World Model tuning.
Open for contributors: improve how a VLM/LLM converts human feedback into stable and efficient reward signals for RL loops.