
Python: Allow @tool functions to return rich content (images, audio)#4331

Open
giles17 wants to merge 13 commits into microsoft:main from giles17:giles/tool-rich-content-results

Conversation

giles17 (Contributor) commented Feb 26, 2026

Description

Closes #4272 and #2513

When a @tool function returns a Content object (e.g. Content.from_data(image_bytes, "image/png")), the framework now preserves it as rich content that the model can perceive natively, instead of serializing it to a JSON string.

Problem

Previously, FunctionTool.parse_result() serialized any Content return to JSON text via _make_dumpable(). The model received a text blob, not the actual image. The same issue existed in MCP tool results where ImageContent was JSON-serialized.

Solution

Added an items field to function_result Content that carries rich Content objects (images, audio, files) alongside the text result. Providers format these items using their existing multi-modal content handling.

User API — no decorator changes needed:

@tool
async def capture_screenshot(url: str) -> Content:
    image_bytes = await take_screenshot(url)
    return Content.from_data(data=image_bytes, media_type="image/png")

@tool
async def render_chart(data: str) -> list[Content]:
    image_bytes = render(data)
    return [
        Content.from_text("Chart rendered."),
        Content.from_data(data=image_bytes, media_type="image/png"),
    ]

Changes

Core framework:

  • _types.py: Added items field to Content. Updated from_function_result() to accept str | list[Content] and split text from rich items internally.
  • _tools.py: Updated parse_result() to preserve Content returns instead of JSON-serializing. Updated invoke() return type.
  • _mcp.py: Updated _parse_tool_result_from_mcp() to return list[Content] for image/audio instead of JSON strings. Preserves original content ordering.
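The text/items split described above can be sketched as follows. This is a minimal illustration, not the framework's actual code: the `Content` dataclass here is a simplified stand-in for the real type (which has factory methods like `from_text()`/`from_data()` and many more fields), and `from_function_result` mirrors only the splitting behavior this PR describes.

```python
from dataclasses import dataclass, field
from typing import Optional

# Simplified stand-in for the framework's Content type (assumption:
# the real class carries more fields and validation).
@dataclass
class Content:
    type: str
    text: Optional[str] = None
    data: Optional[bytes] = None
    media_type: Optional[str] = None
    items: list = field(default_factory=list)

def from_function_result(result):
    """Split a str | list[Content] tool result into a text part
    plus rich items carried on the function_result's items field."""
    if isinstance(result, str):
        return Content(type="function_result", text=result)
    text = " ".join(c.text for c in result if c.type == "text" and c.text)
    rich = [c for c in result if c.type != "text"]
    return Content(type="function_result", text=text, items=rich)

parsed = from_function_result([
    Content(type="text", text="Chart rendered."),
    Content(type="data", data=b"\x89PNG", media_type="image/png"),
])
print(parsed.text, len(parsed.items))  # Chart rendered. 1
```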

All 6 providers updated:

  • OpenAI Responses: Injects rich items as user message with input_image after function_call_output
  • OpenAI Chat Completions: Injects rich items as follow-up user message (Chat Completions API only supports string content in tool messages)
  • Anthropic: Formats rich items as native image blocks in tool_result content array
  • Bedrock/Ollama/Azure-AI: Logs warning when rich items present (unsupported by these APIs)
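For a provider that supports rich tool results, the formatting step amounts to mapping each item onto the API's native block types. A hedged sketch in the spirit of the Anthropic path above: the `tool_result`/`image` block shapes follow Anthropic's documented Messages format, but the helper name and the dict-based items are illustrative, not the PR's actual code.

```python
import base64

def format_tool_result(tool_use_id, result_text, items):
    """Build an Anthropic-style tool_result content array.

    Data-backed images become base64 image blocks; unsupported media
    types are skipped, and an all-skipped result falls back to a text
    block so the content array is never empty.
    """
    blocks = []
    if result_text:
        blocks.append({"type": "text", "text": result_text})
    for item in items:
        if item.get("media_type", "").startswith("image/") and item.get("data"):
            blocks.append({
                "type": "image",
                "source": {
                    "type": "base64",
                    "media_type": item["media_type"],
                    "data": base64.b64encode(item["data"]).decode("ascii"),
                },
            })
    if not blocks:  # never send an empty content array
        blocks.append({"type": "text", "text": ""})
    return {"type": "tool_result", "tool_use_id": tool_use_id, "content": blocks}

msg = format_tool_result("toolu_01", "Chart rendered.",
                         [{"media_type": "image/png", "data": b"\x89PNG"}])
```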

Tests: 8 new tests + 2 updated existing tests, all passing.

…udio)

Add support for tool functions to return Content objects that the model can perceive natively. Closes microsoft#4272

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Copilot AI review requested due to automatic review settings February 26, 2026 19:48
markwallace-microsoft (Member) commented Feb 26, 2026

Python Test Coverage

Python Test Coverage Report

| File | Stmts | Miss | Cover | Missing |
| --- | --- | --- | --- | --- |
| packages/anthropic/agent_framework_anthropic/_chat_client.py | 428 | 34 | 92% | 430, 433, 514, 601, 603, 783–784, 862, 892–893, 938, 954–955, 962–964, 968–970, 974–977, 1091, 1101, 1153, 1274, 1301–1302, 1319, 1332, 1345, 1370–1371 |
| packages/azure-ai/agent_framework_azure_ai/_chat_client.py | 484 | 78 | 83% | 415–416, 418, 602, 607–608, 610–611, 614, 617, 619, 624, 885–886, 888, 891, 894, 897–902, 905, 907, 915, 927–929, 933, 936–937, 945–948, 958, 966–969, 971–972, 974–975, 982, 990–991, 999–1012, 1017, 1020, 1028, 1034, 1042–1044, 1047, 1067–1068, 1201, 1228, 1243, 1359, 1406, 1481 |
| packages/core/agent_framework/_mcp.py | 424 | 64 | 84% | 97–98, 108–113, 124, 129, 183–184, 196–201, 213–214, 224, 271, 280, 343, 351, 502, 569, 604, 606, 610–611, 613–614, 668, 683, 701, 742, 847, 860–865, 887, 936–937, 943–945, 964, 989–990, 994–998, 1015–1019, 1163 |
| packages/core/agent_framework/_tools.py | 792 | 83 | 89% | 168–169, 328, 330, 348–350, 358, 376, 390, 397, 404, 420, 422, 429, 437, 469, 494, 498, 515–517, 564–566, 589, 615, 658, 680, 747–753, 789, 800–811, 830–832, 836, 840, 854–856, 1195, 1215, 1291–1295, 1419, 1423, 1447, 1473, 1475, 1565, 1595, 1615, 1617, 1670, 1733, 1924–1925, 1976, 2045–2046, 2106, 2111, 2118 |
| packages/core/agent_framework/_types.py | 1021 | 72 | 92% | 58, 67–68, 122, 127, 146, 148, 152, 156, 158, 160, 162, 180, 184, 210, 232, 237, 242, 246, 276, 654–655, 1204, 1276, 1293, 1311, 1329, 1339, 1383, 1515–1517, 1705, 1796–1801, 1826, 1994, 2006, 2255, 2276, 2371, 2600, 2803, 2871, 2886, 2907, 3112–3114, 3117–3119, 3123, 3128, 3132, 3216–3218, 3247, 3301, 3320–3321, 3324–3328, 3334 |
| packages/core/agent_framework/openai/_chat_client.py | 312 | 27 | 91% | 210, 240–241, 245, 368, 375, 451–458, 460–463, 473, 551, 553, 570, 617, 630, 654, 670, 710 |
| packages/core/agent_framework/openai/_responses_client.py | 812 | 128 | 84% | 312–315, 319–320, 325–326, 336–337, 344, 359–365, 386, 394, 417, 514, 516, 613, 668, 672, 674, 676, 678, 746, 760, 840, 850, 855, 898, 977, 994, 1007, 1068, 1159, 1164, 1168–1170, 1174–1175, 1230, 1259, 1265, 1275, 1281, 1286, 1292, 1297–1298, 1359, 1381–1382, 1397–1398, 1416–1417, 1458–1461, 1623, 1678, 1680, 1760–1768, 1890, 1945, 1960, 1980–1990, 2003, 2014–2018, 2032, 2046–2057, 2066, 2098–2101, 2109–2110, 2112–2114, 2128–2130, 2140–2141, 2147, 2162 |
| TOTAL | 22726 | 2822 | 87% | |

Python Unit Test Overview

| Tests | Skipped | Failures | Errors | Time |
| --- | --- | --- | --- | --- |
| 4709 | 20 💤 | 0 ❌ | 0 🔥 | 1m 20s ⏱️ |

Copilot AI (Contributor) left a comment

Pull request overview

This PR enables @tool-decorated functions to return rich content (images, audio, files) that models can perceive natively, rather than having them serialized to JSON strings. This addresses issue #4272 by allowing vision-in-the-loop workflows where tools like capture_screenshot() or render_chart() can feed image content back into the model for analysis.

Changes:

  • Core framework now preserves Content objects with rich media instead of JSON-serializing them
  • Added items field to function_result Content to carry rich media alongside text results
  • Updated all 6 provider implementations to handle rich content (OpenAI Responses, OpenAI Chat, Anthropic support it natively; Bedrock, Ollama, Azure-AI log warnings)

Reviewed changes

Copilot reviewed 11 out of 11 changed files in this pull request and generated 1 comment.

Summary per file:

  • python/packages/core/agent_framework/_types.py: Added items parameter to Content.__init__ and from_function_result() to store rich media items; updated to_dict() to serialize items
  • python/packages/core/agent_framework/_tools.py: Updated parse_result() to return str or list[Content] instead of always serializing; added _build_function_result() helper to separate text and rich items; updated invoke() return type
  • python/packages/core/agent_framework/_mcp.py: Updated _parse_tool_result_from_mcp() to return list[Content] for results containing images/audio instead of JSON strings
  • python/packages/core/agent_framework/openai/_responses_client.py: Injects rich items as separate user message with input_image content after function_call_output
  • python/packages/core/agent_framework/openai/_chat_client.py: Formats tool message content as multi-part array with text and image_url/input_audio/file parts when items present
  • python/packages/anthropic/agent_framework_anthropic/_chat_client.py: Formats rich items as native image blocks in tool_result content array; handles both data and uri image types
  • python/packages/bedrock/agent_framework_bedrock/_chat_client.py: Logs warning when rich items present (Bedrock doesn't support them); omits items from tool result
  • python/packages/ollama/agent_framework_ollama/_chat_client.py: Logs warning when rich items present (Ollama doesn't support them); omits items from tool result
  • python/packages/azure-ai/agent_framework_azure_ai/_chat_client.py: Logs warning when rich items present (Azure AI Agents doesn't support them); omits items from tool output
  • python/packages/core/tests/core/test_types.py: Added 8 new tests for parse_result(), _build_function_result(), and Content.from_function_result() with items; updated 2 existing tests to expect list[Content] instead of JSON
  • python/packages/core/tests/core/test_mcp.py: Updated test_parse_tool_result_from_mcp to expect list[Content] for results with images; added test_parse_tool_result_from_mcp_audio_content

@eavanvalkenburg (Member) left a comment

We recently restricted tool return types, and one of the reasons was performance: the constant parsing of these results, both for OTel and for the client, is a bit wasteful. Could you look at whether a cache could be used in the parsing function in the various places? We also need integration testing for this, since OpenAI Chat shouldn't support it; let's verify with OpenAI, Azure OpenAI, Ollama, Foundry Local, and any other clients that derive from OpenAI Chat.

@eavanvalkenburg

This is also #2513

giles17 and others added 3 commits February 27, 2026 11:58
…esult, fix Chat client

- Preserve original content order in MCP tool results instead of text-first
- Move _build_function_result logic into Content.from_function_result()
- Chat Completions: inject user message for rich items (API only supports string tool content)
- Update tests for ordering and new from_function_result behavior

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
giles17 changed the title to "Python: Allow @tool functions to return rich content (images, audio)" Mar 2, 2026
giles17 and others added 3 commits March 2, 2026 20:02
- Responses client: put rich items directly in function_call_output's
  output field as list (native API support) instead of user message injection
- Chat client: warn and omit rich items (API doesn't support multi-part
  tool results), matching Ollama/Bedrock pattern
- Unify test image: use sample_image.jpg across all integration tests
- Add Azure OpenAI Responses integration test
- Assert model describes house image to verify perception
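The Responses-client change in the commit above (rich items placed directly in the function_call_output's output field as a list) can be sketched roughly as follows. The input_text/input_image part shapes follow the OpenAI Responses API; the helper name and dict-based items are illustrative assumptions, not the PR's actual code.

```python
def build_function_call_output(call_id, result_text, items):
    """Place rich items directly in a Responses-style
    function_call_output as a list of content parts, alongside the
    text part, instead of injecting a separate user message."""
    output = []
    if result_text:
        output.append({"type": "input_text", "text": result_text})
    for item in items:
        if item.get("media_type", "").startswith("image/") and item.get("uri"):
            output.append({"type": "input_image", "image_url": item["uri"]})
    return {"type": "function_call_output", "call_id": call_id, "output": output}

out = build_function_call_output(
    "call_1",
    "Chart rendered.",
    [{"media_type": "image/png", "uri": "data:image/png;base64,AAAA"}],
)
```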

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
@moonbox3 (Contributor) left a comment

Automated Code Review

Reviewers: 3 | Confidence: 86%

✗ Correctness

This PR adds rich content (images, audio) support in tool results across multiple LLM provider clients. The implementation is well-structured with proper tests. The main correctness issue is a missing test asset file: the Anthropic integration test references a sample_image.jpg in its own tests/assets/ directory, but the diff only adds this file under python/packages/core/tests/assets/. The Azure and OpenAI tests correctly use parent.parent to reach the core assets directory, but the Anthropic test uses parent which resolves to a non-existent path. The remaining changes are logically sound with appropriate fallback/warning behavior for providers that don't support rich tool results.

✓ Security Reliability

This PR adds rich content (images, audio) support to tool results across multiple LLM provider clients. The implementation is generally sound with appropriate fallback warnings for unsupported providers. There are no critical security issues, but there are a few reliability edge cases: Content.from_function_result lacks validation when result is a list, which can cause AttributeError on non-Content items; the Anthropic client can send an empty content array to the API if all rich items are unsupported; and the OpenAI Chat Completions client introduces a continue that may alter the original message-building control flow.

✗ Test Coverage

This diff adds rich content (images, audio) support in tool results across all providers. Core types and parse_result logic have solid unit tests (test_types.py), and MCP parsing is well-covered (test_mcp.py). However, the provider-specific formatting logic for rich content — the most complex new code — lacks unit tests entirely. The Anthropic client's new branching logic in _prepare_message_for_anthropic (data images, URI images, unsupported types) has zero unit tests. The OpenAI Responses client's new output_parts building in _prepare_content_for_openai also has no unit tests. The OpenAI Chat Completions client changed control flow (added continue statement) with no test verifying the warning/behavior with items. All three only have integration tests marked @pytest.mark.flaky, which won't catch regressions in normal CI runs.

Blocking Issues

  • The Anthropic integration test will fail with FileNotFoundError: Path(__file__).parent / "assets" / "sample_image.jpg" resolves to python/packages/anthropic/tests/assets/sample_image.jpg, but the image file is only added at python/packages/core/tests/assets/sample_image.jpg. Either copy the asset to the Anthropic tests directory or fix the path.
  • No unit tests for Anthropic _prepare_message_for_anthropic rich content handling. The new branching logic (lines 716-753 of _chat_client.py) covers three distinct paths — data images, URI images, and unsupported types — none of which are tested. The existing test_prepare_message_for_anthropic_function_result only covers the plain-text fallback path.
  • No unit tests for OpenAI Responses _prepare_content_for_openai rich content in function results. The new output_parts construction (lines 1214-1224 of _responses_client.py) recursively calls _prepare_content_for_openai for each item with no test coverage. Only a flaky integration test covers this path.
  • The OpenAI Chat Completions client (lines 578-583 of openai/_chat_client.py) changed the control flow for ALL function_result messages by adding an explicit append+continue, and added a warning path for items. There is no unit test verifying that function results with items produce a warning and that the result is still correctly appended.

Suggestions

  • In _tools.py parse_result, a Content with type="text" and empty/None text will fall through to JSON serialization via _make_dumpable. Consider returning "" for this edge case.
  • In _mcp.py, consider using Content.from_data (with base64-decoded bytes) instead of Content.from_uri with a synthetic data: URI for ImageContent/AudioContent. This avoids downstream consumers needing to parse the data: URI back out.
  • In _types.py from_function_result, the isinstance(result, list) branch assumes all items are Content objects (accesses .type, .text). If the list contains non-Content items (e.g., strings), this will raise AttributeError. Consider adding a guard like all(isinstance(c, Content) for c in result) or handling non-Content items gracefully, consistent with how parse_result does it.
  • In the Anthropic _chat_client.py, if content.items is truthy but all items have unsupported media types and content.result is falsy, tool_content will be an empty list sent to the API. Consider falling back to the non-rich-content path or adding a text placeholder when tool_content is empty.
  • Add a unit test for Content.from_function_result with a list containing only rich items (no text) to verify result is empty string and items are populated.
  • Add unit tests for the warning log paths in Bedrock, Azure AI, and Ollama when content.items is non-empty, to ensure warnings are emitted and results are still correctly formatted.
  • Consider adding a unit test for FunctionTool.parse_result with a list mixing Content and non-Content items to verify the Content.from_text(str(item)) fallback path.
  • The integration test assertions like assert 'house' in response.text.lower() are inherently fragile even with @pytest.mark.flaky. Consider asserting on structural properties (e.g., response contains text, tool was called) rather than model-generated content.
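The guard suggested for from_function_result above could look like the sketch below. Plain dicts with a "type" key stand in for Content instances here, so the real isinstance(item, Content) check becomes a dict check; the helper name is hypothetical.

```python
def normalize_result_list(result):
    """Coerce a mixed result list into Content-like entries.

    Non-Content items (e.g. plain strings, numbers) are wrapped as
    text items instead of letting attribute access on .type/.text
    raise AttributeError downstream.
    """
    normalized = []
    for item in result:
        # Stand-in for isinstance(item, Content) in the real framework.
        if isinstance(item, dict) and "type" in item:
            normalized.append(item)
        else:
            normalized.append({"type": "text", "text": str(item)})
    return normalized

mixed = normalize_result_list([
    {"type": "data", "media_type": "image/png"},
    "plain string",
    42,
])
```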

Automated review by moonbox3's agents

giles17 and others added 4 commits March 4, 2026 16:05
- Add isinstance guard in from_function_result for non-Content lists
- Fix Anthropic empty tool_content fallback to string result
- Fix Content(type='text', text=None) edge case in parse_result
- Rewrite MCP _parse_tool_result_from_mcp as single-pass (no index counters)
- Add Anthropic unit tests: data image, uri image, unsupported media, all-unsupported
- Add OpenAI Chat unit test: rich items warning and omission
- Add OpenAI Responses unit tests: function_result with/without items
- Add test_types tests: only-rich-items list, non-Content list fallback

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>


Development

Successfully merging this pull request may close these issues.

Python: [Feature]: Allow @tool functions to return image content that the model can analyze

5 participants