Laravel 13 – How to Integrate Gemini Using Prism.

Touseef Afridi
06 May 26

In this tutorial, we will learn how to build a real-time AI chat interface in Laravel 13 using Gemini AI and Prism. We will cover setup, API integration, and handling dynamic responses for a smooth interactive experience.


If you're a video person, feel free to skip the post and check out the video instead!


Quick Overview

This guide walks you through setting up a fresh Laravel 13 application and integrating Gemini AI using the Prism package. You start by creating a new Laravel project with a minimal setup, then install and configure Prism along with your Gemini API key in the .env file. After that, you create a controller to handle the chat logic, define routes for sending and receiving messages, and build a Blade view using Tailwind CSS for the chat interface. The frontend uses AJAX to send user prompts to the backend, where Prism communicates with Gemini AI and returns responses that are displayed dynamically in the chat window with a loading state for a smooth real-time experience.

Step # 1 : Set Up a Fresh Laravel 13 Project.

Before we begin building anything, make sure your system has Composer installed, since Laravel relies on it to manage dependencies and install packages. It’s also highly recommended to install the Laravel Installer globally, which allows you to create new Laravel projects quickly from your terminal. If you don’t already have it installed, run the following command.
composer global require laravel/installer
Once the installer is ready, create a new Laravel 13 project with the following command.
laravel new gemini
This starts an interactive setup where Laravel asks a few configuration questions that define your project structure. To keep things clean and lightweight, choose the following options during setup.
  • Which starter kit would you like to install? → Select None to start with a clean Laravel project without any pre-built authentication or UI scaffolding.

  • Which testing framework do you prefer? → Choose Pest for a modern, simple, and readable testing syntax.

  • Do you want to install Laravel Boost to improve AI-assisted coding? → Select No to keep the project lightweight and avoid adding extra AI tooling.

  • Which database will your application use? → Choose SQLite as your database for lightweight, file-based data storage.

  • Would you like to run npm install and npm run build? → Select Yes to install frontend dependencies and compile assets so the project is ready to run immediately.

After completing these steps, your Laravel 13 project will be set up in a clean and minimal state. It will contain no unnecessary scaffolding, giving you full control to build your Gemini AI application from scratch in a structured and scalable way.

Step # 2 : Navigate to Project Directory.

After creating your Laravel 13 project, move into the project folder using your terminal (Git Bash, Command Prompt, or any terminal you prefer) and run.
cd c:/xampp/htdocs/gemini
Now you are inside your Laravel project directory and can start running commands like installing packages and building features.

Step # 3 : Install Prism (AI Layer for Laravel).

Now we’ll add Prism, a powerful package that makes working with AI in Laravel super simple. It acts as a bridge between your application and AI providers like Gemini, OpenAI, and others so you can switch models without changing your code. Install it using Composer.
composer require prism-php/prism
Once installed, Prism is available in your project and ready for configuration.

Step # 4 : Publish Prism Config File.

After installing Prism, the next step is to publish its configuration file so you can customize its behavior for your project. Run the following Artisan command.
php artisan vendor:publish --tag=prism-config
This will create the configuration file inside your Laravel config directory, where you can modify the default settings based on your requirements.
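After publishing, config/prism.php is where provider credentials are mapped to environment variables. The exact contents vary between Prism versions, so treat the excerpt below as an illustrative sketch and compare it against the file that was actually published in your project:

```php
// config/prism.php (excerpt — key names may differ across Prism versions)
'providers' => [
    'gemini' => [
        // Read the API key from .env so it never lives in version control
        'api_key' => env('GEMINI_API_KEY', ''),
    ],
    // ... other providers (OpenAI, Anthropic, etc.)
],
```

The important takeaway is that Prism resolves credentials through env(), which is why the next step only touches the .env file.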

Step # 5 : Get Gemini API Key and Configure Environment Variable.

To generate your API key, follow these steps.
  1. Go to https://aistudio.google.com/ and sign in with your Google account.

  2. Click on Create API key; a popup will appear.

  3. Select Choose an imported project, then click Create project. Give your project a name and confirm by clicking Create project.

  4. Once the project is created, it will be automatically selected in the popup. Now click Create key.

  5. Your API key will be generated instantly. Copy it and keep it safe, as you'll need it in your application.

Now open your .env file and add the following line.
GEMINI_API_KEY=Define_Your_Gemini_Key_here
Replace Define_Your_Gemini_Key_here with the API key you copied from Google AI Studio, then save the file. If you change the environment variable name (GEMINI_API_KEY), make sure to update it in config/prism.php as well.

Step # 6 : Create the AI Controller.

Now we’ll create a controller that will handle everything related to our Gemini integration, including loading the UI and processing user prompts. Run the following Artisan command.
php artisan make:controller AIController
Once the controller is created, navigate to the app/Http/Controllers directory and open the AIController.php file. Replace its content with the following code.
<?php
namespace App\Http\Controllers;
use Illuminate\Http\Request;
use Prism\Prism\Facades\Prism;
use Prism\Prism\Enums\Provider;
class AIController extends Controller
{
    // Return the main view where users will interact with Gemini
    public function index()
    {
        return view('gemini');
    }
    // Handle the incoming request from the frontend (AJAX)
    public function ask(Request $request)
    {
        // Validate that a prompt is provided and is a string
        $request->validate([
            'prompt' => 'required|string'
        ]);
        // Send the user's prompt to Gemini via Prism
        $response = Prism::text()
            // Specify the provider (Gemini) and the model to use
            ->using(Provider::Gemini, 'gemini-2.5-flash')
            // Attach the user's input as the prompt
            ->withPrompt($request->prompt)
            // Request a plain text response from the model
            ->asText();
        // Return the generated response as JSON for frontend handling
        return response()->json([
            'answer' => $response->text
        ]);
    }
}
At this point, we’ve wired up the basic flow. The controller loads the view for the user and handles incoming prompts by sending them to Gemini using Prism, then returns the response so it can be displayed. The request is first validated to make sure we’re actually receiving usable input, and then everything is passed through Prism, which takes care of communicating with the Gemini API behind the scenes. The response is returned as JSON, making it easy to handle on the frontend, especially when working with AJAX or dynamic updates. This keeps the setup clean and focused, while giving you a solid base to build more interactive features on top of it.
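One thing the controller above assumes is that the Gemini call always succeeds. In practice the provider can time out or reject a request, which would surface as an unhandled exception. A minimal, hedged sketch of how you might guard the call (this is one possible approach, not part of the tutorial's required code):

```php
public function ask(Request $request)
{
    $request->validate([
        'prompt' => 'required|string|max:4000' // cap prompt length as a sanity limit
    ]);

    try {
        $response = Prism::text()
            ->using(Provider::Gemini, 'gemini-2.5-flash')
            ->withPrompt($request->prompt)
            ->asText();
    } catch (\Throwable $e) {
        // Log the failure and return a friendly error the frontend can display
        report($e);
        return response()->json([
            'answer' => 'Sorry, something went wrong. Please try again.'
        ], 500);
    }

    return response()->json([
        'answer' => $response->text
    ]);
}
```

Returning a JSON body even on failure keeps the frontend logic simple, since the chat window can render the error message like any other reply.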

Step # 7 : Define Routes for Gemini Integration.

Now we need to connect our controller methods to actual URLs so the application can load the interface and also handle requests sent to Gemini. First, open your routes/web.php file and import the controller at the top.
use App\Http\Controllers\AIController;
Next, define the routes that will handle both displaying the UI and processing the AI requests.
// Load the page where users can interact with Gemini
Route::get('/gemini', [AIController::class, 'index']);
// Handle user prompt, send it to Gemini, and return the generated response
Route::post('/gemini', [AIController::class, 'ask']);
At this point, our controller methods are now properly connected to the application through routes. The GET route loads the Gemini interface where users can enter their prompts, while the POST route handles the submitted input, passes it to the controller, and returns the response generated by Gemini.
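Since every prompt triggers a paid API call, you may also want to rate-limit the POST route. Laravel's built-in throttle middleware makes this a one-line change; the limit below is just an example value:

```php
// Allow at most 10 prompts per minute per client before returning HTTP 429
Route::post('/gemini', [AIController::class, 'ask'])->middleware('throttle:10,1');
```

This is optional for local testing, but worth considering before exposing the endpoint publicly.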

Step # 8 : Create a Blade View for Gemini.

Now we will create the Blade view that will act as the frontend for our Gemini AI chat system. Go to the resources/views directory in your Laravel project and create a new file named: gemini.blade.php. Paste the following code inside it.
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Code Shotcut - Laravel 13 Integrating Gemini Using Prism</title>
    <script src="https://cdn.tailwindcss.com"></script>
    <meta name="csrf-token" content="{{ csrf_token() }}">
    <style>
        /* chat scrollbar */
        #chatBox::-webkit-scrollbar {
            width: 6px;
        }
        #chatBox::-webkit-scrollbar-thumb {
            background: #334155;
            border-radius: 10px;
        }
        #chatBox::-webkit-scrollbar-track {
            background: transparent;
        }
    </style>
</head>
<body class="bg-gradient-to-br from-[#050814] via-[#0b1220] to-[#050814] text-white">
<div class="h-screen flex items-center justify-center p-4">
    <div class="w-full max-w-4xl h-full flex flex-col rounded-2xl overflow-hidden
                border border-white/10 bg-white/5 backdrop-blur-xl shadow-2xl">
        <!-- Header -->
        <div class="px-6 py-4 flex items-center justify-between border-b border-white/10 bg-white/5">
            <div>
                <h1 class="text-lg font-semibold">Code Shotcut</h1>
                <p class="text-xs text-gray-400">
                    Laravel 13 integrating Gemini using Prism to simplify AI integration
                </p>
            </div>
            <div class="w-3 h-3 rounded-full bg-green-400 shadow-[0_0_10px_#22c55e]"></div>
        </div>
        <!-- Chat area -->
        <div id="chatBox" class="flex-1 overflow-y-auto px-6 py-6 space-y-5">
            <div class="text-center text-gray-500 text-sm mt-10">
                👋 Start chatting with Gemini AI
            </div>
        </div>
        <!-- Input -->
        <div class="p-4 border-t border-white/10 bg-white/5">
            <div class="flex items-end gap-3">
                <textarea id="prompt"
                          rows="1"
                          class="flex-1 bg-[#0b1220]/70 border border-white/10 rounded-xl px-4 py-3 text-sm
                                 focus:outline-none focus:ring-2 focus:ring-cyan-500
                                 placeholder:text-gray-500 resize-none overflow-hidden"
                          placeholder="Ask Gemini anything..."></textarea>
                <button onclick="sendMessage()"
                        class="bg-gradient-to-r from-cyan-600 to-indigo-600
                               hover:from-cyan-500 hover:to-indigo-500
                               px-6 py-3 rounded-xl text-sm font-medium
                               text-white shadow-lg shadow-cyan-600/20 transition">
                    Send
                </button>
            </div>
        </div>
    </div>
</div>
<script>
const token = document.querySelector('meta[name="csrf-token"]').getAttribute('content');
const promptInput = document.getElementById('prompt');
/* Enter sends, Shift+Enter newline */
promptInput.addEventListener('keydown', function (e) {
    if (e.key === 'Enter' && !e.shiftKey) {
        e.preventDefault();
        sendMessage();
    }
});
/* Auto resize input */
promptInput.addEventListener('input', function () {
    this.style.height = 'auto';
    this.style.height = this.scrollHeight + 'px';
});
/* AI icon */
function aiIcon() {
    return `
        <div class="w-9 h-9 rounded-full bg-gradient-to-br from-cyan-500 to-indigo-600
                    flex items-center justify-center shadow-lg shadow-cyan-500/20">
            <span class="text-xs font-bold">AI</span>
        </div>
    `;
}
/* Append chat message */
function appendMessage(html) {
    let chatBox = document.getElementById('chatBox');
    chatBox.innerHTML += html;
    chatBox.scrollTop = chatBox.scrollHeight;
}
async function sendMessage() {
    let prompt = promptInput.value.trim();
    if (!prompt) return;
    /* User message */
    appendMessage(`
        <div class="flex justify-end items-start gap-3">
            <div class="bg-cyan-600/90 text-white px-4 py-3 rounded-2xl max-w-[75%] text-sm shadow-md">
                ${prompt}
            </div>
            <div class="w-9 h-9 rounded-md overflow-hidden border border-white/10">
    <img src="https://www.google.com/s2/favicons?domain=codeshotcut.com&sz=128"
         class="w-full h-full object-cover"
         alt="CodeShotcut Logo">
</div>
        </div>
    `);
    promptInput.value = '';
    promptInput.style.height = 'auto';
    /* Loading state */
    let loaderId = Date.now();
    appendMessage(`
        <div id="${loaderId}" class="flex justify-start items-start gap-3">
            ${aiIcon()}
            <div class="bg-white/10 border border-white/10 px-4 py-3 rounded-2xl text-sm text-gray-300 animate-pulse">
                Thinking...
            </div>
        </div>
    `);
    /* Ajax request to backend */
    let response = await fetch('/gemini', {
        method: 'POST',
        headers: {
            'Content-Type': 'application/json',
            'X-CSRF-TOKEN': token
        },
        body: JSON.stringify({ prompt })
    });
    let data = await response.json();
    /* Remove loader */
    document.getElementById(loaderId).remove();
    /* AI response */
    appendMessage(`
        <div class="flex justify-start items-start gap-3">
            ${aiIcon()}
            <div class="bg-white/10 border border-white/10 px-4 py-3 rounded-2xl
                        max-w-[75%] text-sm text-gray-100 leading-relaxed">
                ${data.answer}
            </div>
        </div>
    `);
}
</script>
</body>
</html>
The view uses Tailwind CSS to build a modern real-time chat interface with a message window, a textarea for user input, and a send button that sends requests using AJAX without reloading the page. When a message is submitted, it is sent to the Laravel backend through the /gemini route, processed using the Prism package with Gemini AI, and the response is returned and displayed dynamically inside the chat window. A loading state is shown while waiting for the response, creating a smooth and interactive AI chat experience inside a Laravel application.
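One caveat worth knowing: the script above interpolates both the user's prompt and Gemini's answer directly into innerHTML, so any HTML-like text in either string will be parsed as markup. A small escaping helper avoids this; as a sketch, you could add it to the view's script block and wrap the interpolations as escapeHtml(prompt) and escapeHtml(data.answer):

```javascript
// Escape HTML-special characters so user input and model output
// render as plain text instead of being parsed as markup.
function escapeHtml(str) {
    return String(str)
        .replace(/&/g, '&amp;')
        .replace(/</g, '&lt;')
        .replace(/>/g, '&gt;')
        .replace(/"/g, '&quot;')
        .replace(/'/g, '&#039;');
}
```

Note that escaping also strips any formatting Gemini returns (for example Markdown), so if you later want rich output, a dedicated sanitizer or Markdown renderer is the safer route.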

Step # 9 : Run and Test the Gemini Integration.

Now it’s time to test our Laravel 13 project and make sure the Gemini integration is working correctly. Start the Laravel development server by running the following command in your terminal.
php artisan serve
Once the server starts, open your browser and visit: http://127.0.0.1:8000/gemini. You should now see your Gemini interface.


Try entering a prompt and sending it; you'll receive a response generated by Gemini, confirming that your integration is working as expected. At this point, your setup is complete and you can successfully communicate with Gemini through your Laravel application.

Conclusion

By following this guide, you have successfully set up a Laravel 13 application and integrated Gemini AI using the Prism package. You learned how to install and configure Prism, connect your Gemini API key, create a controller to handle requests, define routes, and build a real-time chat interface using a Blade view with Tailwind CSS. You also implemented AJAX to send user prompts and display AI-generated responses dynamically with a loading state for a smooth experience. This gives you a solid foundation to build more advanced AI-powered features in Laravel applications, such as chat assistants, automation tools, or smart content generators.
For more details, refer to the official Prism documentation: https://github.com/prism-php/prism.
