SDK
Pre-built evaluators
Pre-built evaluators are a useful starting point for setting up evaluations. Refer to pre-built evaluators for how to use them with LangSmith.
Create your own LLM-as-a-judge evaluator
For complete control of evaluator logic, create your own LLM-as-a-judge evaluator and run it using the LangSmith SDK (Python / TypeScript). Requires langsmith>=0.2.0.
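As a reference, here is a minimal sketch of a custom LLM-as-a-judge evaluator run with the Python SDK. The OpenAI judge model, the dataset name, and the question/answer field names are illustrative assumptions, not part of the LangSmith API; adapt them to your own dataset schema and target application.

```python
# Minimal sketch of a custom LLM-as-a-judge evaluator (requires langsmith>=0.2.0).
# The judge model, dataset name, and question/answer field names are assumptions
# for illustration; adjust them to match your own dataset schema.
from langsmith import Client
from openai import OpenAI

judge = OpenAI()

def correctness_judge(inputs: dict, outputs: dict, reference_outputs: dict) -> dict:
    """Ask an LLM to grade the target's output against the reference output."""
    grading_prompt = (
        f"Question: {inputs['question']}\n"
        f"Answer: {outputs['answer']}\n"
        f"Reference answer: {reference_outputs['answer']}\n"
        "Reply with 1 if the answer agrees with the reference, otherwise 0."
    )
    verdict = judge.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": grading_prompt}],
    ).choices[0].message.content.strip()
    # Scores are logged under the feedback key "correctness" as 0 or 1.
    return {"key": "correctness", "score": int(verdict == "1")}

def target(inputs: dict) -> dict:
    # Replace with the application you want to evaluate.
    return {"answer": "42"}

client = Client()
client.evaluate(
    target,
    data="my-dataset",              # assumed dataset name
    evaluators=[correctness_judge],
)
```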
UI
Pre-built evaluators
Pre-built evaluators are a useful starting point when setting up evaluations. The LangSmith UI supports the following pre-built evaluators:
- Hallucination: Detect factually incorrect outputs. Requires a reference output.
- Correctness: Check semantic similarity to a reference.
- Conciseness: Evaluate whether an answer is a concise response to a question.
- Code checker: Verify correctness of code answers.
You can use these evaluators in the following ways:
- When running an evaluation using the playground
- As part of a dataset to automatically run evaluations on experiments
- When running an online evaluation
Customize your LLM-as-a-judge evaluator
Add specific instructions for your LLM-as-a-judge evaluator prompt and configure which parts of the input/output/reference output should be passed to the evaluator.
Select/create the evaluator
- In the playground or from a dataset: Select the +Evaluator button
- From a tracing project: Select Add rules, configure your rule and select Apply evaluator
Configure the evaluator
Prompt
Create a new prompt, or choose an existing prompt from the prompt hub.
- Create your own prompt: Create a custom prompt inline.
- Pull a prompt from the prompt hub: Use the Select a prompt dropdown to choose an existing prompt. You can’t edit these prompts directly within the prompt editor, but you can view the prompt and the schema it uses. To make changes, edit the prompt in the playground, commit a new version, and then pull the updated prompt into the evaluator.
Model
Select the desired model from the provided options.
Mapping variables
Use variable mapping to indicate which variables are passed into your evaluator prompt from your run or example. To aid with variable mapping, an example (or run) is provided for reference. Click on the variables in your prompt and use the dropdown to map them to the relevant parts of the input, output, or reference output. To add prompt variables, type the variable with double curly brackets, {{prompt_var}}, if using mustache formatting (the default), or with single curly brackets, {prompt_var}, if using f-string formatting.
You may remove variables as needed. For example, if you are evaluating a metric such as conciseness, you typically don’t need a reference output, so you can remove that variable.
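For example, a hypothetical conciseness prompt using mustache formatting might read: "Rate how concise the following answer to {{input}} is on a scale of 1 to 5: {{output}}". The {{input}} and {{output}} variable names here are illustrative; you would map them to the run's input and output, and you would not include a reference output variable at all.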
Preview
Previewing the prompt will show you what the formatted prompt will look like using the reference run and dataset example shown on the right.
Improve your evaluator with few-shot examples
To better align the LLM-as-a-judge evaluator with human preferences, LangSmith allows you to collect human corrections on evaluator scores. When this option is enabled, corrections are automatically inserted as few-shot examples into your prompt. Learn how to set up few-shot examples and make corrections.
Feedback configuration
Feedback configuration defines the scoring criteria that your LLM-as-a-judge evaluator will use. Think of this as the rubric your evaluator grades against. Scores are added as feedback to a run or example. To define feedback for your evaluator:
- Name the feedback key: This is the name that will appear when viewing evaluation results. Names should be unique across experiments.
- Add a description: Describe what the feedback represents.
- Choose a feedback type:
  - Boolean: True/false feedback.
  - Categorical: Select from predefined categories.
  - Continuous: Numerical scoring within a specified range.
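For example, a hypothetical conciseness evaluator might use the feedback key conciseness with a continuous score from 1 to 5, where 1 means verbose and 5 means maximally concise.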