PromptEval Documentation
Professional testing framework for LLM applications with enterprise license management and semantic validation.
v2.0.0 | Python 3.11+ | Licensed | JWT Auth
License Management
JWT authentication, test tracking, and machine binding
Test Tracking
Monitor test consumption with monthly limits per plan
Semantic Validation
ML-powered semantic matching with 85%+ accuracy
Dashboard
Web dashboard for license and usage management
Quick Start
Get started with PromptEval in under 5 minutes:
# 1. Install PromptEval
pip install prompteval-core
# 2. Activate your license
prompteval license activate PE-XXXX-XXXX-XXXX-XXXX
# 3. Check your license status
prompteval license status
# ✅ License is VALID
# Plan: professional
# Tests Remaining: 10000/10000
# 4. Create a test file
cat > test_example.yml << EOF
tests:
  - name: greeting_test
    prompt: "Say hello"
    expected: "Hello! Are you ready for fun?"
    threshold: 0.85
EOF
# 5. Run tests
prompteval run test_example.yml --report
# Tests automatically tracked in your license
Installation
Requirements
- Python 3.11 or higher
- Valid PromptEval license
Install via pip
pip install prompteval-core
Install from wheel
pip install prompteval_core-2.0.0-py3-none-any.whl
Verify installation
prompteval --version
# Output: PromptEval v2.0.0
prompteval license info
# Shows your license information
License Management
PromptEval v2.0 includes enterprise license management with test tracking and machine binding.
Activating Your License
# Activate license
prompteval license activate PE-XXXX-XXXX-XXXX-XXXX
# Output:
# ✅ License Activated Successfully!
#
# License Information:
# Plan: professional
# Tests Remaining: 10000/10000
# Expires: 2026-01-25
# Machines: 1/3
Checking License Status
prompteval license status
# Output:
# ✅ License is VALID
#
# Plan: professional
# Tests Used: 450/10000
# Tests Remaining: 9550
# Machines: 1/3
# Expires in: 365 days
License Plans
| Feature | Free | Starter | Professional | Enterprise |
|---|---|---|---|---|
| Tests/Month | 20 | 1,000 | 10,000 | Unlimited |
| Machines | 1 | 1 | 3 | Unlimited |
| Semantic Validation | ✅ | ✅ | ✅ | ✅ |
| Dashboard Access | ✅ | ✅ | ✅ | ✅ |
| API Access | - | - | ✅ | ✅ |
Deactivating License
# Deactivate on current machine
prompteval license deactivate
# This allows you to activate on another machine
# (if your plan allows multiple machines)
Test Tracking
All test execution is automatically tracked and counted against your monthly limit.
How It Works
- You activate your license
- Every time you run tests, each test case is counted (see the example after this list)
- Usage is reported to the license server
- You can see remaining tests anytime
- Limits reset on the 1st of each month
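For example, you can check a suite's syntax without spending any quota, then run it when you are ready. Both commands are documented in the CLI Reference below; the paths here are illustrative:
# Validate YAML syntax first (does not consume tests)
prompteval validate tests/
# Run the suite (each test case counts against your monthly limit)
prompteval run tests/ --report
# Confirm what is left afterwards
prompteval license status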
Checking Usage
# Via CLI
prompteval license status
# Output:
# ✅ License is VALID
#
# Plan: professional
# Tests Used: 450/10000
# Tests Remaining: 9550
# Machines: 1/3
# Expires in: 365 days
# Via Dashboard
# Log in to https://prompteval.com/dashboard
# View pie chart and daily usage graph
What Counts as a Test?
- Each test case in your YAML file
- Both passing and failing tests
- Tests run via CLI or programmatically
Example
# test_suite.yml has 10 test cases
tests:
  - name: test1
    ...
  - name: test2
    ...
# ... 8 more tests
# Running this file counts as 10 tests
prompteval run test_suite.yml
# ✅ Tests: 10 passed
# Tests Remaining: 9990/10000
CLI Reference
License Commands
# Activate license
prompteval license activate <LICENSE_KEY>
# Check license status
prompteval license status
# View license info
prompteval license info
# Deactivate license
prompteval license deactivate
# Check machine fingerprint
prompteval license machine
Test Commands
# Run tests
prompteval run <file_or_directory>
# Run with custom config
prompteval run tests/ --config=custom.yaml
# Generate HTML report
prompteval run tests/ --report
# Validate YAML syntax (doesn't consume tests)
prompteval validate tests/
# Show version
prompteval --version
Python API
from prompteval import TestRunner, SemanticValidator
# License is automatically validated on import
# Tests are automatically tracked
# Initialize runner
runner = TestRunner(config_path="prompteval.yaml")
# Run tests (counts against your limit)
results = runner.run("tests/")
# Semantic validation
validator = SemanticValidator(threshold=0.85)
similarity = validator.compare(
    text1="Hello, how can I help?",
    text2="Hi! How may I assist?"
)
print(f"Similarity: {similarity:.2%}")Ready to Get Started?
Ready to Get Started?
Get your license today and start testing your LLM applications professionally.