mini\Inference\InferenceServiceInterface
abstract interface
Documentation
Inference service interface for LLM-based structured evaluation
Applications implement this interface to provide LLM inference capabilities. The evaluate() method takes a prompt and a JSON Schema (as a Validator, any JsonSerializable object, or a plain array) and returns a response guaranteed to match the schema.
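As a sketch, the interface shape implied by this page looks like the following; evaluate() is taken verbatim from the example below, while the second (batch) method is only described under Methods, so it is left as a comment:

interface InferenceServiceInterface
{
    // Evaluate a prompt and return a structured response matching the schema.
    public function evaluate(string $prompt, Validator|\JsonSerializable|array $schema): mixed;

    // A second method (see Methods below) evaluates multiple prompts with
    // the same schema; its signature is not shown on this page.
}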
Example implementation using Ollama:
class OllamaInference implements InferenceServiceInterface
{
    public function __construct(
        private string $model = 'llama3.2',
        private string $baseUrl = 'http://localhost:11434'
    ) {}

    public function evaluate(string $prompt, Validator|\JsonSerializable|array $schema): mixed
    {
        // Ollama's `format` field accepts a JSON Schema and constrains the
        // model's output to match it; json_encode() serializes a
        // JsonSerializable schema automatically.
        $response = $this->request('/api/generate', [
            'model' => $this->model,
            'prompt' => $prompt,
            'format' => $schema,
            'stream' => false,
        ]);

        return json_decode($response['response'], true);
    }
}
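The example assumes a private request() helper, which is not part of the interface. A minimal sketch using PHP streams (no HTTP client dependency) could be added to the class:

// Hypothetical helper assumed by the example above; not part of the interface.
// Posts a JSON body to the Ollama API and decodes the JSON response.
private function request(string $path, array $body): array
{
    $context = stream_context_create([
        'http' => [
            'method'  => 'POST',
            'header'  => "Content-Type: application/json\r\n",
            'content' => json_encode($body),
        ],
    ]);

    $raw = file_get_contents($this->baseUrl . $path, false, $context);

    if ($raw === false) {
        throw new \RuntimeException("Request to {$path} failed");
    }

    return json_decode($raw, true);
}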
Usage:
use function mini\inference;
use function mini\validator;

// Boolean evaluation
$needsReview = inference()->evaluate(
    "Does this message require human action?\n\n$text",
    validator()->enum([true, false])
);

// Classification
$sentiment = inference()->evaluate(
    "Classify the sentiment:\n\n$text",
    validator()->enum(['positive', 'negative', 'neutral'])
);

// Structured extraction
$data = inference()->evaluate(
    "Extract contact info:\n\n$text",
    validator()->type('object')->properties([
        'name' => validator()->type('string')->required(),
        'email' => validator()->type('string'),
    ])
);
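Because evaluate() also accepts a plain array, a raw JSON Schema works in place of the validator() builder. This hypothetical variant is equivalent to the classification example above:

// Classification with the schema passed as a plain JSON Schema array
$sentiment = inference()->evaluate(
    "Classify the sentiment:\n\n$text",
    ['type' => 'string', 'enum' => ['positive', 'negative', 'neutral']]
);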
Methods (2)
evaluate(): Evaluate a prompt and return a structured response matching the schema
Batch evaluation: Evaluate multiple prompts with the same schema