Code execution

The Gemini API provides a code execution tool that enables the model to generate and run Python code. The model can then learn iteratively from the code execution results until it arrives at a final output. You can use code execution to build applications that benefit from code-based reasoning. For example, you can use code execution to solve equations or process text. You can also use the libraries included in the code execution environment to perform more specialized tasks.

Gemini can only execute code in Python. You can still ask Gemini to generate code in another language, but the model can't use the code execution tool to run it.

Enable code execution

To enable code execution, configure the code execution tool on the model. This allows the model to generate and run code.

Python

```python
from google import genai
from google.genai import types

client = genai.Client()

response = client.models.generate_content(
    model="gemini-3-flash-preview",
    contents="What is the sum of the first 50 prime numbers? "
    "Generate and run code for the calculation, and make sure you get all 50.",
    config=types.GenerateContentConfig(
        tools=[types.Tool(code_execution=types.ToolCodeExecution)]
    ),
)

for part in response.candidates[0].content.parts:
    if part.text is not None:
        print(part.text)
    if part.executable_code is not None:
        print(part.executable_code.code)
    if part.code_execution_result is not None:
        print(part.code_execution_result.output)
```

JavaScript

```javascript
import { GoogleGenAI } from "@google/genai";

const ai = new GoogleGenAI({});

let response = await ai.models.generateContent({
  model: "gemini-3-flash-preview",
  contents: [
    "What is the sum of the first 50 prime numbers? " +
      "Generate and run code for the calculation, and make sure you get all 50.",
  ],
  config: {
    tools: [{ codeExecution: {} }],
  },
});

const parts = response?.candidates?.[0]?.content?.parts || [];
parts.forEach((part) => {
  if (part.text) {
    console.log(part.text);
  }
  if (part.executableCode && part.executableCode.code) {
    console.log(part.executableCode.code);
  }
  if (part.codeExecutionResult && part.codeExecutionResult.output) {
    console.log(part.codeExecutionResult.output);
  }
});
```

Go

```go
package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/genai"
)

func main() {
	ctx := context.Background()
	client, err := genai.NewClient(ctx, nil)
	if err != nil {
		log.Fatal(err)
	}

	config := &genai.GenerateContentConfig{
		Tools: []*genai.Tool{
			{CodeExecution: &genai.ToolCodeExecution{}},
		},
	}

	result, err := client.Models.GenerateContent(
		ctx,
		"gemini-3-flash-preview",
		genai.Text("What is the sum of the first 50 prime numbers? "+
			"Generate and run code for the calculation, and make sure you get all 50."),
		config,
	)
	if err != nil {
		log.Fatal(err)
	}

	fmt.Println(result.Text())
	fmt.Println(result.ExecutableCode())
	fmt.Println(result.CodeExecutionResult())
}
```

REST

```shell
curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-3-flash-preview:generateContent" \
  -H "x-goog-api-key: $GEMINI_API_KEY" \
  -H 'Content-Type: application/json' \
  -d '{
    "tools": [{"code_execution": {}}],
    "contents": {
      "parts": {
        "text": "What is the sum of the first 50 prime numbers? Generate and run code for the calculation, and make sure you get all 50."
      }
    }
  }'
```

Note: This REST example doesn't parse the JSON response as shown in the example output.
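If you call the REST endpoint directly, you can pull the parts out of the returned JSON yourself. A minimal sketch: the `extract_parts` helper below is our own illustration, not part of the API, and it assumes the camelCase field names used in JSON responses (`executableCode`, `codeExecutionResult`):

```python
def extract_parts(response_json):
    """Split a generateContent JSON response into text, generated code,
    and execution output (camelCase response field names)."""
    texts, code, outputs = [], [], []
    for part in response_json["candidates"][0]["content"]["parts"]:
        if "text" in part:
            texts.append(part["text"])
        if "executableCode" in part:
            code.append(part["executableCode"]["code"])
        if "codeExecutionResult" in part:
            outputs.append(part["codeExecutionResult"]["output"])
    return texts, code, outputs
```

You could feed it the decoded body of the curl call above, for example `extract_parts(json.loads(body))`.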

The output might look something like the following, which has been formatted for readability:

Okay, I need to calculate the sum of the first 50 prime numbers. Here's how I'll approach this:

  • Generate Prime Numbers: I'll use an iterative method to find prime numbers. I'll start with 2 and check if each subsequent number is divisible by any number between 2 and its square root. If not, it's a prime.
  • Store Primes: I'll store the prime numbers in a list until I have 50 of them.
  • Calculate the Sum: Finally, I'll sum the prime numbers in the list.

Here's the Python code to do this:

```python
def is_prime(n):
    """Efficiently checks if a number is prime."""
    if n <= 1:
        return False
    if n <= 3:
        return True
    if n % 2 == 0 or n % 3 == 0:
        return False
    i = 5
    while i * i <= n:
        if n % i == 0 or n % (i + 2) == 0:
            return False
        i += 6
    return True

primes = []
num = 2
while len(primes) < 50:
    if is_prime(num):
        primes.append(num)
    num += 1

sum_of_primes = sum(primes)
print(f'{primes=}')
print(f'{sum_of_primes=}')
```

```
primes=[2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67,
71, 73, 79, 83, 89, 97, 101, 103, 107, 109, 113, 127, 131, 137, 139, 149, 151,
157, 163, 167, 173, 179, 181, 191, 193, 197, 199, 211, 223, 227, 229]
sum_of_primes=5117
```

The sum of the first 50 prime numbers is 5117.

This output combines several content parts that the model returns when using code execution:

  • text: Inline text generated by the model
  • executableCode: Code generated by the model that is meant to be executed
  • codeExecutionResult: Result of the executable code

The naming conventions for these parts vary by programming language.
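In the Python SDK, for example, the parts use snake_case attributes, and you could fold them into one readable transcript. The `render_parts` helper below is our own illustration, not an SDK function, and the `[code]`/`[output]` markers are arbitrary:

```python
def render_parts(parts):
    """Join text, generated code, and execution results from a
    response's content parts into a single transcript string."""
    chunks = []
    for part in parts:
        if getattr(part, "text", None):
            chunks.append(part.text)
        if getattr(part, "executable_code", None):
            chunks.append("[code]\n" + part.executable_code.code)
        if getattr(part, "code_execution_result", None):
            chunks.append("[output]\n" + part.code_execution_result.output)
    return "\n\n".join(chunks)
```

With the earlier example, you might call `print(render_parts(response.candidates[0].content.parts))`.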

Code Execution with images (Gemini 3)

The Gemini 3 Flash model can write and execute Python code to actively manipulate and inspect images.

Use cases

  • Zoom and inspect: The model implicitly detects when details are too small (e.g., reading a distant gauge) and writes code to crop and re-examine the area at higher resolution.
  • Visual math: The model can run multi-step calculations using code (e.g., summing line items on a receipt).
  • Image annotation: The model can annotate images to answer questions, such as drawing arrows to show relationships.

Note: While the model automatically handles zooming for small details, you should prompt it explicitly to use code for other tasks, such as "Write code to count the number of gears" or "Rotate this image to make it upright".

Enable Code Execution with images

Code Execution with images is officially supported in Gemini 3 Flash. You can activate this behavior by enabling both Code Execution as a tool and Thinking.

Python

```python
from google import genai
from google.genai import types
import requests
from PIL import Image
import io

image_path = "https://goo.gle/instrument-img"
image_bytes = requests.get(image_path).content
image = types.Part.from_bytes(
    data=image_bytes, mime_type="image/jpeg"
)

# Ensure you have your API key set
client = genai.Client()

response = client.models.generate_content(
    model="gemini-3-flash-preview",
    contents=[image, "Zoom into the expression pedals and tell me how many pedals are there?"],
    config=types.GenerateContentConfig(
        tools=[types.Tool(code_execution=types.ToolCodeExecution)]
    ),
)

for part in response.candidates[0].content.parts:
    if part.text is not None:
        print(part.text)
    if part.executable_code is not None:
        print(part.executable_code.code)
    if part.code_execution_result is not None:
        print(part.code_execution_result.output)
    if part.as_image() is not None:
        # display() is a standard function in Jupyter/Colab notebooks
        display(Image.open(io.BytesIO(part.as_image().image_bytes)))
```

JavaScript

```javascript
import { GoogleGenAI } from "@google/genai";

async function main() {
  const ai = new GoogleGenAI({});

  // 1. Prepare Image Data
  const imageUrl = "https://goo.gle/instrument-img";
  const response = await fetch(imageUrl);
  const imageArrayBuffer = await response.arrayBuffer();
  const base64ImageData = Buffer.from(imageArrayBuffer).toString('base64');

  // 2. Call the API with Code Execution enabled
  const result = await ai.models.generateContent({
    model: "gemini-3-flash-preview",
    contents: [
      {
        inlineData: {
          mimeType: 'image/jpeg',
          data: base64ImageData,
        },
      },
      { text: "Zoom into the expression pedals and tell me how many pedals are there?" }
    ],
    config: {
      tools: [{ codeExecution: {} }],
    },
  });

  // 3. Process the response (Text, Code, and Execution Results)
  const candidates = result.candidates;
  if (candidates && candidates[0].content.parts) {
    for (const part of candidates[0].content.parts) {
      if (part.text) {
        console.log("Text:", part.text);
      }
      if (part.executableCode) {
        console.log(`\nGenerated Code (${part.executableCode.language}):\n`, part.executableCode.code);
      }
      if (part.codeExecutionResult) {
        console.log(`\nExecution Output (${part.codeExecutionResult.outcome}):\n`, part.codeExecutionResult.output);
      }
    }
  }
}

main();
```

Go

```go
package main

import (
	"context"
	"fmt"
	"io"
	"log"
	"net/http"

	"google.golang.org/genai"
)

func main() {
	ctx := context.Background()

	// Initialize Client (Reads GEMINI_API_KEY from env)
	client, err := genai.NewClient(ctx, nil)
	if err != nil {
		log.Fatal(err)
	}

	// 1. Download the image
	imageResp, err := http.Get("https://goo.gle/instrument-img")
	if err != nil {
		log.Fatal(err)
	}
	defer imageResp.Body.Close()
	imageBytes, err := io.ReadAll(imageResp.Body)
	if err != nil {
		log.Fatal(err)
	}

	// 2. Call the API with Code Execution enabled
	contents := []*genai.Content{{
		Role: genai.RoleUser,
		Parts: []*genai.Part{
			{InlineData: &genai.Blob{MIMEType: "image/jpeg", Data: imageBytes}},
			{Text: "Zoom into the expression pedals and tell me how many pedals are there?"},
		},
	}}
	config := &genai.GenerateContentConfig{
		Tools: []*genai.Tool{{CodeExecution: &genai.ToolCodeExecution{}}},
	}
	result, err := client.Models.GenerateContent(ctx, "gemini-3-flash-preview", contents, config)
	if err != nil {
		log.Fatal(err)
	}

	// 3. Print text, generated code, and execution results
	fmt.Println(result.Text())
	fmt.Println(result.ExecutableCode())
	fmt.Println(result.CodeExecutionResult())
}
```
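The Python example above renders returned images with display(), which only exists in Jupyter/Colab notebooks. Outside a notebook, one option is to write any returned image parts to disk instead. The `save_inline_images` helper below is our own sketch, not an SDK function, and the file-name prefix is arbitrary:

```python
def save_inline_images(parts, prefix="annotated"):
    """Write any image parts returned by code execution to PNG files
    and return the list of paths written."""
    paths = []
    for i, part in enumerate(parts):
        image = part.as_image() if hasattr(part, "as_image") else None
        if image is not None:
            path = f"{prefix}_{i}.png"
            with open(path, "wb") as f:
                f.write(image.image_bytes)
            paths.append(path)
    return paths
```

With the Python example above, you might call `save_inline_images(response.candidates[0].content.parts)` in place of the display() branch.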
