ProYaro Master Instructions
For AI Assistants: This folder contains comprehensive documentation and code templates for building applications using the ProYaro AI infrastructure.
Quick Start
For AI Assistants (Claude, Gemini, etc.)
When helping users build applications:
- READ FIRST: AI_ASSISTANT_INSTRUCTIONS.md - Core directive for using ProYaro services
- CHECK NETWORK: NETWORK_TOPOLOGY.md - Find service locations and ports
- CHECK SPECS: MACHINES_INFRASTRUCTURE.md - Understand capabilities and limits
- API REFERENCE: API_INTEGRATION_GUIDE.md - Complete API documentation
- USE SKILLS: skills/ folder - Ready-to-use code templates
For Developers
This folder provides:
- Complete API documentation for all ProYaro AI services
- Network topology and infrastructure details
- Ready-to-use code templates (skills) for common integrations
- Best practices and examples
File Guide
Core Documentation
| File | Purpose | Read This When... |
|---|---|---|
| AI_ASSISTANT_INSTRUCTIONS.md | AI assistant directive | You're an AI helping with development |
| API_INTEGRATION_GUIDE.md | Complete API reference | You need API endpoint details |
| NETWORK_TOPOLOGY.md | Network diagram & access | You need to know service locations |
| MACHINES_INFRASTRUCTURE.md | Hardware specs & limits | You need to understand capabilities |
Skills (Code Templates)
| Skill | Service | Use For... |
|---|---|---|
| mlx-chat-skill.md | MLX FastAPI | Text generation, chatbots |
| skills/ folder | Various | Additional integration patterns |
How to Use
Scenario 1: Building a Chatbot
1. Read: AI_ASSISTANT_INSTRUCTIONS.md → Confirms MLX is available
2. Check: NETWORK_TOPOLOGY.md → MLX at localhost:8004
3. Reference: API_INTEGRATION_GUIDE.md → MLX Chat API section
4. Template: skills/mlx-chat-skill.md → Ready-to-use code
5. Implement: Copy & customize the TypeScript/Python template
Scenario 2: Adding Voice Features
1. Read: AI_ASSISTANT_INSTRUCTIONS.md → Confirms Whisper + XTTS available
2. Check: NETWORK_TOPOLOGY.md → Ubuntu server (api.proyaro.com)
3. Reference: API_INTEGRATION_GUIDE.md → STT & TTS sections
4. Implement: Use the job-based API pattern
Scenario 3: Image Generation
1. Read: AI_ASSISTANT_INSTRUCTIONS.md → Two options: Mac or Ubuntu ComfyUI
2. Decide:
   - Mac Mini: Direct API, faster for single images
   - Ubuntu: Job queue, better for batches
3. Reference: API_INTEGRATION_GUIDE.md → ComfyUI sections
4. Implement: Based on the chosen approach
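The Mac-vs-Ubuntu decision in Scenario 3 can be encoded as a small helper. This is an illustrative sketch only; the URLs come from this guide's decision matrix and should be verified against NETWORK_TOPOLOGY.md before use.

```typescript
// Illustrative route selection: direct Mac Mini API for single images,
// Ubuntu job queue for batches. URLs are assumptions from this guide.
type ImageRoute =
  | { kind: "direct"; url: string }
  | { kind: "job-queue"; url: string };

function chooseImageRoute(batchSize: number): ImageRoute {
  return batchSize <= 1
    ? { kind: "direct", url: "http://localhost:8188" }            // Mac Mini ComfyUI
    : { kind: "job-queue", url: "https://api.proyaro.com/jobs" }; // Ubuntu queue
}
```

The threshold of one image is a simplification; in practice, latency tolerance and GPU load matter as much as batch size.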
Service Selection Guide
Decision Matrix
| Need | Use Service | Location | Skill File |
|---|---|---|---|
| Text Generation | MLX | Mac Mini :8004 | mlx-chat-skill.md |
| Image (Fast) | ComfyUI | Mac Mini :8188 | TBD |
| Image (Queue) | ComfyUI | Ubuntu :8188* | TBD |
| Speech-to-Text | Whisper | Ubuntu :8001* | TBD |
| Text-to-Speech | XTTS-v2 | Ubuntu :8002* | TBD |
| Embeddings (Small) | MLX | Mac Mini :8004 | TBD |
| Embeddings (Large) | E5-Large | Ubuntu :8003* | TBD |
* Access via Ubuntu backend API (/jobs endpoint)
Common Patterns
Pattern 1: Direct API (Mac Mini)

```typescript
// For: text generation, MLX embeddings, Mac ComfyUI
const response = await fetch('http://localhost:8004/v1/chat/completions', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ prompt: '...', max_tokens: 500 }),
});
const data = await response.json();
console.log(data.text);
```
Pattern 2: Job-Based API (Ubuntu)
// For: STT, TTS, embeddings (Ubuntu), image gen (Ubuntu)
// Step 1: Create job
const job = await fetch('https://api.proyaro.com/jobs', {
method: 'POST',
headers: { 'Authorization': `Bearer ${token}` },
body: JSON.stringify({
job_type: 'speech_to_text',
parameters: { audio_path: '...' }
}),
});
// Step 2: Poll or use WebSocket for result
const result = await pollJob(job.id);
// OR
websocket.onmessage = (update) => { /* handle result */ };
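The job-based pattern calls a `pollJob` helper that is not defined in this guide. A minimal sketch, assuming the backend exposes `GET /jobs/{id}` returning a `status` field with `"completed"` / `"failed"` values; check API_INTEGRATION_GUIDE.md for the real response shape:

```typescript
// Hypothetical polling helper; the endpoint and status values are
// assumptions, not confirmed by API_INTEGRATION_GUIDE.md.
async function pollJob(
  jobId: string,
  token: string,
  intervalMs = 2000,
  maxAttempts = 60,
): Promise<any> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const res = await fetch(`https://api.proyaro.com/jobs/${jobId}`, {
      headers: { Authorization: `Bearer ${token}` },
    });
    if (!res.ok) throw new Error(`Job status request failed: ${res.status}`);
    const job = await res.json();
    if (job.status === "completed") return job.result;
    if (job.status === "failed") throw new Error(job.error ?? "Job failed");
    // Wait before the next poll
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error(`Job ${jobId} did not finish within ${maxAttempts} polls`);
}
```

For long-running jobs, prefer the WebSocket path over polling to avoid burning requests.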
Authentication
All services require JWT authentication except direct MLX/ComfyUI calls on the Mac Mini (internal only).

```
# Get token
POST /api/auth/login
{
  "email": "admin@a2zadd.com",
  "password": "..."
}

# Use token
Authorization: Bearer <token>
```
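In TypeScript, the login flow above might be wrapped like this. The response field name `access_token` is an assumption; verify it against the auth section of API_INTEGRATION_GUIDE.md.

```typescript
// Hypothetical login wrapper. The token field name ("access_token")
// is an assumption; check API_INTEGRATION_GUIDE.md for the real field.
async function login(email: string, password: string): Promise<string> {
  const res = await fetch("https://api.proyaro.com/api/auth/login", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ email, password }),
  });
  if (!res.ok) throw new Error(`Login failed: ${res.status}`);
  const data = await res.json();
  return data.access_token;
}

// Build the Authorization header for subsequent requests:
function authHeaders(token: string): Record<string, string> {
  return { Authorization: `Bearer ${token}` };
}
```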
Architecture Overview

```
Your Application
         │
         ▼
┌───────────────────────┐
│  Choose Access Path   │
├───────────────────────┤
│                       │
│  Direct (Internal) ───┼──→ Mac Mini MLX/ComfyUI
│                       │    (http://10.0.0.188:xxxx)
│                       │
│  Ubuntu API (Prod) ───┼──→ Ubuntu Backend
│                       │    (https://api.proyaro.com)
│                       │         │
│                       │         Routes to:
│                       │         • Whisper (STT)
│                       │         • XTTS (TTS)
│                       │         • Embeddings
│                       │         • ComfyUI (GPU)
│                       │         • MLX (proxied)
└───────────────────────┘
```
Development Checklist
When starting a new project:
- Read AI_ASSISTANT_INSTRUCTIONS.md
- Identify required AI capabilities
- Check NETWORK_TOPOLOGY.md for service locations
- Review MACHINES_INFRASTRUCTURE.md for limits
- Find relevant endpoints in API_INTEGRATION_GUIDE.md
- Copy a code template from the skills/ folder
- Implement with proper error handling
- Test service availability
- Deploy
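The "test service availability" step can be automated against the health endpoints listed under Service Status. A minimal sketch; the endpoint URLs passed in are whatever your deployment actually exposes:

```typescript
// Hypothetical availability check: probe each health endpoint and
// report which services respond. URLs are supplied by the caller.
async function checkServices(
  endpoints: Record<string, string>,
): Promise<Record<string, boolean>> {
  const status: Record<string, boolean> = {};
  for (const [name, url] of Object.entries(endpoints)) {
    try {
      // Abort slow probes so one dead service doesn't stall the check
      const res = await fetch(url, { signal: AbortSignal.timeout(3000) });
      status[name] = res.ok;
    } catch {
      status[name] = false;
    }
  }
  return status;
}

// Example (Mac Mini endpoints from this guide):
// await checkServices({ mlx: "http://localhost:8004/health" });
```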
Additional Resources
In This Folder
- All documentation is versioned and dated
- Skills are production-ready templates
- API guide includes curl, TypeScript, and Python examples
External
- MLX: https://github.com/ml-explore/mlx
- ComfyUI: https://github.com/comfyanonymous/ComfyUI
- Whisper: https://github.com/openai/whisper
- XTTS: https://github.com/coqui-ai/TTS
Updates
This documentation is actively maintained. Check the "Last Updated" date in each file.
- Core Docs: Updated as infrastructure changes
- Skills: Updated when new patterns emerge
- API Guide: Updated when APIs change
Learning Path
Beginner
- Read README (this file)
- Read AI_ASSISTANT_INSTRUCTIONS.md
- Try mlx-chat-skill.md example
- Explore API_INTEGRATION_GUIDE.md examples
Intermediate
- Understand NETWORK_TOPOLOGY.md
- Review MACHINES_INFRASTRUCTURE.md
- Implement job-based workflows
- Add WebSocket real-time updates
Advanced
- Combine multiple services
- Build custom workflows
- Optimize performance
- Implement caching strategies
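As one example of a caching strategy from the advanced list, embedding calls for repeated inputs can be memoized. This is a sketch; `embedFn` is a placeholder for whatever function calls your embeddings endpoint.

```typescript
// Minimal in-memory memoization for embedding lookups. Identical
// inputs hit the cache instead of the embeddings service.
function memoizeEmbeddings(
  embedFn: (text: string) => Promise<number[]>,
): (text: string) => Promise<number[]> {
  const cache = new Map<string, Promise<number[]>>();
  return (text: string) => {
    let hit = cache.get(text);
    if (!hit) {
      hit = embedFn(text);
      cache.set(text, hit);
    }
    return hit;
  };
}
```

Caching the promise (rather than the resolved value) also deduplicates concurrent requests for the same text.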
Quick Commands

```shell
# Navigate to master instructions
cd /Users/yaro/Documents/a2zadd/master-instruction

# List all files
ls -la

# Read a specific doc
cat AI_ASSISTANT_INSTRUCTIONS.md

# View skills
ls skills/

# Search for a topic
grep -r "embeddings" .
```
Support
If you're an AI assistant and need clarification:
- Ask the user (Yaro) about infrastructure details
- Refer to the latest documentation in this folder
- Use the provided skills as starting points
If you're a developer:
- All services are documented in API_INTEGRATION_GUIDE.md
- Check service health endpoints before debugging
- Review skills folder for working examples
Service Status
To check if services are running:

```shell
# Mac Mini
curl http://localhost:8004/health              # MLX
curl http://localhost:8188/system_stats        # ComfyUI
curl http://localhost:3000/api/system/status   # Backend (all)

# Ubuntu
curl https://api.proyaro.com/health            # API
ssh root@10.0.0.11 "cd /mnt/storage/new-stack && docker compose ps"
```
Welcome to ProYaro AI Infrastructure!
Everything you need to build amazing AI-powered applications is documented here. Start with the Quick Start section above, and happy building!
Version: 1.0 Last Updated: 2025-01-01