ONE Sentinel

AI / PROMPT ENGINEERING

datasette-llm-usage 0.2a0

Source: Simon Willison
April 1, 2026 · 1 min read

EXECUTIVE SUMMARY

Enhancements in datasette-llm-usage 0.2a0 Streamline AI Model Management

Summary

The article covers the release of datasette-llm-usage version 0.2a0, highlighting notable changes in functionality and configuration for managing AI model usage. Key changes include the removal of allowance and estimated-pricing features, which are now handled by datasette-llm-accountant, and the introduction of optional logging of prompts, responses, and tool calls.

Key Points

  • Version 0.2a0 of datasette-llm-usage has been released.
  • Features related to allowances and estimated pricing have been removed, now managed by datasette-llm-accountant.
  • The new version depends on datasette-llm for model configuration.
  • Full prompts, responses, and tool calls can be logged to the llm_usage_prompt_log table.
  • Logging can be enabled by setting the new datasette-llm-usage.log_prompts plugin configuration.
  • The /-/llm-usage-simple-prompt page has been redesigned and now requires specific permissions.
  • Tags associated with the release include llm and datasette.
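The logging option in the key points above can be sketched as a standard Datasette plugin configuration entry. The plugin key and setting name (`datasette-llm-usage.log_prompts`) come from the release notes; the boolean value shown here is an assumption, as the release summary does not state the accepted values:

```yaml
# datasette.yaml — hedged sketch, not verbatim from the release notes.
# The plugin name and log_prompts setting are from the release; the
# boolean value is assumed, following common Datasette plugin conventions.
plugins:
  datasette-llm-usage:
    log_prompts: true
```

If enabled, full prompts, responses, and tool calls are written to the llm_usage_prompt_log table, which can then be browsed or queried through Datasette's usual table and SQL interfaces.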

Analysis

The updates in datasette-llm-usage 0.2a0 reflect a shift towards more organized management of AI model interactions, particularly in logging and permissions. This is significant for IT professionals as it enhances accountability and traceability in AI usage, which is crucial for compliance and optimization.

Conclusion

IT professionals should consider implementing the latest version of datasette-llm-usage to take advantage of improved logging capabilities and better manage AI model configurations. Staying updated with these enhancements can lead to more efficient AI deployment and monitoring practices.