Tool System Reference
Complete reference for Verdent's tool system
What You'll Learn
Comprehensive reference for Verdent's built-in tool system, including file operations, search capabilities, command execution, and integration tools.
File Operations
Tools for reading, editing, and creating files.
file_read
Read file contents with optional line ranges for large files. Works with all text formats. Essential for understanding code before modifications.
| Parameter | Description |
|---|---|
| path | File path to read |
| start_line | Starting line number (optional, for reading specific sections) |
| max_lines | Maximum number of lines to return (optional, for limiting output) |
Use Cases:
- Reading configuration files before editing them
- Understanding existing implementation patterns in a codebase
- Reviewing test files to understand current coverage
Example:
```
# Read entire file
file_read("src/components/Button.tsx")

# Read specific range for large files
file_read("package-lock.json", start_line=1, max_lines=50)
```
Best Practices:
- Use line ranges for files over 500 lines to avoid context overload
- Read only the sections relevant to your current task
- For large files, use grep_content first to identify relevant line numbers before reading
Limits:
- Files larger than 256KB return only the first 256KB of content
- Very large files (>10,000 lines) should be read in sections to avoid context window exhaustion and slow response times
For files over 500 lines, always use line ranges with file_read to maintain optimal performance.
file_edit
Precise text replacement using exact string matching. Supports multiple replacement operations and preserves file structure and formatting.
| Parameter | Description |
|---|---|
| path | File path to edit |
| old_text | The exact text to find and replace |
| new_text | The replacement text |
| multiple | Set to true to replace all occurrences of the match |
Use Cases:
- Updating function implementations with new logic
- Modifying configuration values across files
- Refactoring variable or function names
Best Practices:
- Ensure the old_text string is unique enough to avoid unintended matches
- Use multiple=true when renaming variables or making repeated changes
- Always verify file paths before editing to avoid modifying wrong files
- Use file_edit for targeted changes; for complete rewrites, use file_write instead
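Example (illustrative; the file paths and replacement strings are invented, following the call style of the file_read examples above):
```
# Flip a single configuration value
file_edit("config/app.json", old_text='"debug": true', new_text='"debug": false')

# Rename a variable everywhere it appears in the file
file_edit("src/utils.ts", old_text="sessionTimeout", new_text="sessionTtl", multiple=true)
```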
file_write
Create new files from scratch or completely replace existing file contents. Handles any text-based format.
| Parameter | Description |
|---|---|
| path | File path to create or overwrite |
| content | The complete content to write to the file |
Use Cases:
- Generating new components, modules, or configuration files
- Creating test files for new functionality
- Writing complete file rewrites when file_edit would be impractical
Best Practices:
- Double-check paths before writing to prevent accidental overwrites of important files
- Use only for new files or complete rewrites
- For partial modifications, prefer file_edit, which is safer and more precise
Overwrites existing files completely. Use file_edit for partial modifications.
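Example (illustrative; the path and content are invented):
```
# Create a new module from scratch
file_write("src/utils/formatDate.ts", content="export function formatDate(d: Date): string {\n  return d.toISOString().slice(0, 10);\n}\n")
```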
Search & Navigation
Tools for finding files and searching content. No limits on search results.
glob
Find files matching glob patterns like **/*.ts or src/**/*.js. Supports filtering by directory path, exclude patterns, and result limiting.
| Parameter | Description |
|---|---|
| pattern | Glob pattern to match files (e.g., **/*.ts, src/**/*.js) |
| exclude | Patterns to exclude from results (e.g., **/node_modules/**) |
| max_results | Maximum number of files to return |
Use Cases:
- Finding all components of a specific type in a project
- Locating test files across the codebase
- Identifying configuration files scattered across directories
Example Patterns:
```
**/*.tsx          # All TypeScript React files
src/**/*.test.js  # All test files in src
**/config.*       # All config files anywhere
```
Best Practices:
- Use specific patterns to narrow scope (src/**/*.ts instead of **/*)
- Exclude large directories like node_modules to improve performance
- Set max_results to prevent overwhelming output on large codebases
```
# Good: Specific scope
glob("src/components/**/*.tsx", max_results=50)

# Less efficient: Too broad
glob("**/*")  # Returns thousands of results
```
grep_content
Search file contents using regex patterns with context lines before and after matches. Supports case-insensitive searching and filtering by file type using glob patterns.
| Parameter | Description |
|---|---|
| pattern | Regex pattern to search for in file contents |
| glob | File pattern to filter which files are searched |
| context_before | Number of lines to show before each match |
| context_after | Number of lines to show after each match |
| case_sensitive | Set to false for case-insensitive matching |
Use Cases:
- Finding function definitions and their implementations
- Locating API endpoint handlers across the codebase
- Searching for specific error messages or log statements
Example:
```
# Find authentication-related code
grep_content("auth.*login", glob="**/*.ts")

# Search with context lines
grep_content("TODO", glob="src/**", context_before=2, context_after=2)
```
Performance Tips:
- Literal string searches are faster than complex regex patterns
- Case-insensitive searches (case_sensitive=false) are slower
- Request only the context lines you actually need
grep_file
List files containing pattern matches, returning only file paths without content. Faster than grep_content when you only need to know which files match. Supports regex patterns and glob filtering.
| Parameter | Description |
|---|---|
| pattern | Regex pattern to search for in file contents |
| glob | File pattern to filter which files are searched |
Use Cases:
- Identifying which files need refactoring before starting work
- Finding all files that import a specific module
- Locating files containing deprecated patterns or APIs
Recommended Workflow:
- Use grep_file to quickly identify relevant files
- Read specific files with file_read to examine details
- Use grep_content only when you need surrounding context
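A sketch of this workflow (the search pattern and file paths are invented):
```
# Step 1: Which files mention the deprecated helper?
grep_file("formatLegacyDate", glob="src/**/*.ts")

# Step 2: Read one of the matches in detail
file_read("src/reports/summary.ts")

# Step 3: Only if surrounding context is needed, pull matching lines
grep_content("formatLegacyDate", glob="src/reports/**", context_before=2, context_after=2)
```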
Display directory hierarchy with configurable depth and exclude patterns to filter the output.
| Parameter | Description |
|---|---|
| path | Directory path to list |
| max_depth | How many levels deep to traverse the hierarchy |
| exclude | Patterns to exclude from the output |
Use Cases:
- Understanding the overall structure of an unfamiliar project
- Verifying that directory organization matches expectations
- Finding specific subdirectories within a large codebase
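A hypothetical invocation, following the call style of the other tools (this reference does not give the tool's name, so list_dir is assumed here):
```
# Show the top two levels of src, skipping dependency folders
list_dir("src", max_depth=2, exclude="**/node_modules/**")
```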
Execution & Integration
Tools for running commands and coordinating tasks. No limits on concurrent subagents.
bash
Execute shell commands with configurable timeout, descriptive summaries, and support for command chaining via &&.
| Parameter | Description |
|---|---|
| command | The shell command to execute |
| timeout | Maximum execution time in milliseconds (hard limit: 120000ms / 2 minutes) |
| summary | Human-readable description of what the command does |
Use Cases:
- Running test suites and build processes
- Installing or updating dependencies
- Executing git operations (commit, push, pull)
- Running database migrations or scripts
Example:
```
# Clear summary and reasonable timeout
bash("npm test", timeout=60000, summary="Run Jest test suite")

# Chained dependent commands
bash("npm install && npm run build", timeout=120000)
```
Limits:
- Maximum timeout: 120 seconds (2 minutes, hard limit)
- Commands exceeding the timeout are automatically terminated
- Set explicit timeouts appropriate to expected execution time
Best Practices:
- Always provide clear summaries so the command's purpose is obvious
- Chain dependent commands with && to ensure proper sequencing
- Review destructive commands carefully before execution (rm, drop, truncate)
Security Considerations:
- Commands execute with your user permissions
- Never run commands from untrusted sources
- Use Plan Mode for review when working in shared codebases
- Avoid commands that might expose credentials or sensitive data
Always review bash commands in Plan Mode when working in shared codebases or production environments.
For operations exceeding 2 minutes:
- Break the work into smaller, sequential commands
- Run in background and check results separately
- Execute manually in your terminal for full control
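As a sketch of the background approach (the script name and log path are invented), a long job can be detached in one command and checked in a later one:
```
# Start the long job detached, logging output to a file
bash("nohup npm run build > build.log 2>&1 &", summary="Start build in background")

# Later, check on progress
bash("tail -n 20 build.log", summary="Check build output")
```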
Launch specialized subagents with isolated contexts to handle specific tasks without consuming main conversation context. Supports parallel execution for efficiency.
| Parameter | Description |
|---|---|
| type | The subagent type (e.g., verifier) |
| task | Description of the task for the subagent to perform |
Built-in Subagent:
- @verifier: Quick validation checks for implementation logic, syntax verification, and isolated testing
Use Cases:
- Delegating research and validation to keep main context focused on development
- Running verification checks without cluttering the conversation
- Performing parallel code review and security assessment
Best Practices:
- Delegate validation and review tasks to preserve main context for active development
- Launch multiple subagents in parallel when tasks are independent
- Use subagents for isolated verification that doesn't need conversational continuity
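An illustrative call (the launcher name task is assumed from the parameter table above; the task wording is invented):
```
# Delegate an isolated check to the built-in verifier subagent
task(type="verifier", task="Check that the new auth middleware rejects expired tokens")
```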
Performance Benefits:
- Reduced total execution time through parallel processing
- Efficient resource utilization with automatic task orchestration
- No limits on the number of concurrent subagents
Context Management Strategy:
- Read files strategically, requesting only what's needed for the current task
- Offload validation and review work to subagents
- Monitor context consumption during long sessions
- Break complex operations into tracked steps with todo_update
Efficient Workflow Pattern:
- Planning: Use glob/grep to identify the scope of changes needed
- Reading: Read only the relevant files or sections
- Execution: Delegate appropriate tasks to subagents
- Verification: Run quick checks with @verifier subagent
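The pattern above can be sketched as a sequence of calls (file paths, search strings, and task wording are illustrative, and the subagent launcher name task is assumed):
```
# Planning: scope the change
grep_file("createSession", glob="src/**/*.ts")

# Reading: examine only the relevant section
file_read("src/auth/session.ts", start_line=40, max_lines=60)

# Execution: make the targeted edit
file_edit("src/auth/session.ts", old_text="maxAge: 3600", new_text="maxAge: 7200")

# Verification: delegate a quick check
task(type="verifier", task="Confirm the session maxAge change is consistent across the codebase")
```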
todo_update
Create and manage task lists to track progress through complex implementations. Supports status tracking (pending, in_progress, completed).
| Parameter | Description |
|---|---|
| tasks | List of task objects with content and status |
| status | Current state: pending, in_progress, or completed |
Use Cases:
- Breaking down complex implementations into manageable, trackable steps
- Maintaining visibility into progress across multi-file changes
- Coordinating multi-step workflows with clear status indicators
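An illustrative call (the task wording is invented; the object shape follows the parameter table above):
```
todo_update(tasks=[
    {"content": "Add password reset endpoint", "status": "completed"},
    {"content": "Write integration tests", "status": "in_progress"},
    {"content": "Update API docs", "status": "pending"}
])
```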
Web Access
Tools for searching and fetching web content.
Query internet search engines with control over result count and freshness filtering to find recent information.
| Parameter | Description |
|---|---|
| query | The search query string |
| num_results | How many search results to return |
| freshness_days | Only return results from the last N days |
Use Cases:
- Finding official documentation for unfamiliar APIs or libraries
- Researching specific error messages to find solutions
- Checking current best practices and up-to-date recommendations
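A hypothetical query (this reference does not give the tool's name, so web_search is assumed; the query is invented):
```
# Recent results only, to avoid stale guidance
web_search("Next.js app router data fetching best practices", num_results=5, freshness_days=90)
```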
Retrieve web page contents and analyze them with specific queries to extract structured information.
| Parameter | Description |
|---|---|
| url | The web page URL to fetch and analyze |
| query | A specific question to answer based on the page content |
Use Cases:
- Reading and extracting key information from documentation pages
- Analyzing API documentation to understand usage patterns
- Extracting code examples and implementation details from tutorials
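A hypothetical call (the tool's name is not given in this reference, so web_fetch is assumed):
```
# Fetch a documentation page and ask a targeted question about it
web_fetch("https://docs.python.org/3/library/asyncio-task.html", query="How do I cancel a running task?")
```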
FAQs
How long can bash commands run?
Maximum timeout: 120 seconds (2 minutes)
Commands exceeding 2 minutes will be automatically terminated.
Alternatives for long operations:
- Break into smaller commands
- Run in background and check results separately
- Execute manually in your terminal
What's the difference between grep_file and grep_content?
grep_file returns only file paths that contain matches. Use it to quickly identify which files to examine.
grep_content returns the matching lines with optional context. Use it when you need to see the actual code.
Recommended workflow: Start with grep_file to find relevant files, then use file_read or grep_content for details.
When should I use file_edit vs file_write?
file_edit is for targeted changes. It replaces specific text while preserving the rest of the file.
file_write is for complete rewrites. It overwrites the entire file with new content.
Use file_edit when possible; it's safer and preserves file structure.