1. Introduction

What is DeepSeek R1?

DeepSeek R1 is a powerful open-source large language model (LLM), comparable in capability to ChatGPT but free to use and self-hostable. Released by DeepSeek AI, it was trained with large-scale reinforcement learning on top of instruction tuning, which makes it particularly good at tasks like:

  • Summarizing long documents
  • Answering questions from text
  • Analyzing or rewriting content
  • Extracting structured data from unstructured text

Key Features of DeepSeek R1:

  • Multiple model sizes – distilled variants from 1.5B to 70B parameters (plus the full 671B model), so you can match the model to your hardware
  • Can run locally – No internet or cloud needed
  • Fully open-source – No license fees, usage limits, or vendor lock-in
  • Instruction-tuned – Responds well to natural prompts (like “Summarize this article”)

Use Cases of DeepSeek R1

You can use DeepSeek R1 in many real-world applications, such as:

  • Text Summarization – Convert long articles, emails, or reports into shorter summaries
  • Document Analysis – Extract key points from legal or business documents
  • Chat Assistants – Build your own AI assistant for customer support or internal tools
  • Sentiment Analysis – Determine tone or emotional context from user input
  • Entity Recognition – Extract names, dates, places, etc., from large chunks of text

In this blog, we’ll focus specifically on text summarization — letting users paste text or upload PDFs, and getting clear, concise summaries generated by DeepSeek R1.


What is LM Studio?

LM Studio (by lmstudio.ai) is a desktop application that makes it incredibly easy to run and use large language models locally on your own machine.

Why use LM Studio?

  • No setup hassle: No need to write server code — just install, load a model, and start using it.
  • API ready: It exposes an OpenAI-compatible API on http://localhost:1234, so your Laravel app can connect to it just like it would with ChatGPT.
  • Runs models locally: You get the power of DeepSeek or other models directly on your own hardware.
  • Developer-friendly: Supports advanced settings, prompt debugging, streaming, and more.

Think of it as your personal AI server, with a GUI and built-in API that works out of the box.


Why Laravel + Blade for This Project?

Laravel is a modern PHP framework known for being simple, elegant, and fast to work with. Blade is Laravel’s templating engine, perfect for creating clean user interfaces.

Here’s why it’s a great fit:

  • Quick to set up and extend
  • Handles file uploads and form input easily
  • Easily integrates with external APIs (like LM Studio’s)
  • Ideal for internal tools, dashboards, and prototypes

What Are We Building in This Blog?

We’ll create a Laravel web application that allows users to:

  • Paste in text or upload a PDF document
  • Select the NLP task: Summarization
  • Send the text to DeepSeek R1, running locally via LM Studio
  • Display a concise summary of the input text

All of this will happen on your local machine — no internet required, no API keys, no costs.

2. Project Setup

Before we start building the Laravel app that connects to DeepSeek R1 and performs summarization, let’s get our local development environment ready.


2.1 Laravel Installation

Prerequisites

Make sure the following are installed on your system:

  • PHP 8.1+
  • Composer (dependency manager for PHP)
  • MySQL (or MariaDB)
  • A web server (Apache, Nginx, or Laravel’s built-in development server)

Install Laravel

Open your terminal and run:

composer create-project laravel/laravel PrivateSummarizer "10.*"
cd PrivateSummarizer

This creates a new Laravel project in a folder named PrivateSummarizer.

Run the App Locally

Use Laravel’s built-in development server:

php artisan serve

Now visit http://localhost:8000 in your browser — you should see the Laravel welcome page.


2.2 Database Configuration (MySQL)

If you want to store chat or summarization history (optional but useful), you’ll need to set up your database.

1. Create a New MySQL Database

Use your preferred tool (e.g., phpMyAdmin or terminal):

CREATE DATABASE deepseek_nlp;

2. Update Laravel Environment File

Open .env in the Laravel root directory and update the following lines:

DB_CONNECTION=mysql
DB_HOST=127.0.0.1
DB_PORT=3306
DB_DATABASE=deepseek_nlp
DB_USERNAME=your_mysql_username
DB_PASSWORD=your_mysql_password

2.3 Install Bootstrap for the UI (Using CDN Only)

To keep things lightweight and simple, we’ll use Bootstrap via CDN (no npm or asset compilation required).

Open resources/views/layouts/app.blade.php (create the file if it doesn’t exist yet), and include this in the <head> section:

<!-- Bootstrap 4.5 CDN -->
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/bootstrap@4.5.2/dist/css/bootstrap.min.css">

Now you can use Bootstrap classes across all your Blade views without installing or compiling anything.

3. Setting Up DeepSeek R1 via LM Studio

To summarize text locally using a powerful LLM, we’ll use DeepSeek R1, a high-performance open-source model, and serve it with LM Studio, a user-friendly desktop app that makes the integration smooth and code-free.

Let’s walk through the setup step by step.


Install LM Studio

LM Studio is available for Windows, macOS, and Linux. It lets you download and run large language models directly on your machine, and provides a built-in API that your Laravel app can call.

Download:

Go to the official site:
🌐 https://lmstudio.ai

Choose the installer for your OS and follow the installation instructions.


Download the DeepSeek R1 Model

Once LM Studio is installed:

  1. Open LM Studio
  2. Go to the Models tab
  3. Click Download Model
  4. Search for: deepseek-ai/deepseek-r1-distill-qwen-7b
  5. Select the model and wait for the download to complete (it may take a while depending on your internet speed)

Note: DeepSeek’s older deepseek-llm-7b-instruct is a general chat model, not R1. Exact identifiers vary by quantization, so search for “deepseek r1” and pick a distilled variant that fits your hardware.

Load the Model in the GUI

After the model is downloaded:

  1. Go to the Chat tab
  2. Click the dropdown to select your model (deepseek-r1-distill-qwen-7b)
  3. Click Load
  4. Once loaded, you can test prompts like “Summarize this paragraph…” directly in the GUI

Tip: DeepSeek R1 models follow instructions like “Summarize the following text” very well. Note that R1-style models often emit their reasoning inside <think>…</think> tags before the final answer; we’ll strip those out in the controller (see the notes in Section 5).

Confirm the API Is Running

LM Studio includes a local HTTP server with an OpenAI-compatible API. Open the Local Server (Developer) tab and start the server; by default it listens at:

http://localhost:1234

This allows us to send requests programmatically — exactly what we’ll do from our Laravel backend.


Test the API Using cURL

To make sure the API is working, you can try a quick test in your terminal:

curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek-llm-7b-instruct",
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "Summarize this: Laravel is a popular PHP framework for building web applications quickly and easily."}
    ],
    "max_tokens": 100
  }'

You should receive a JSON response with the model’s reply.
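If everything is wired up correctly, the reply will look roughly like this (trimmed for brevity; the exact fields and values vary by LM Studio version and model):

{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "model": "deepseek-r1-distill-qwen-7b",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Laravel is a widely used PHP framework that makes it quick and easy to build web applications."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": { "prompt_tokens": 42, "completion_tokens": 19, "total_tokens": 61 }
}

The text we care about lives at choices[0].message.content, which is exactly the path our Laravel controller will read later.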

4. Building the Blade UI (Bootstrap-based)

To make our application user-friendly, we’ll create a simple Blade template that uses Bootstrap for styling. This UI will allow users to:

  • Paste plain text
  • Or upload a PDF file
  • Click a button to receive a summary

We’ll also display the model’s response below the form.


4.1 Basic Layout with Bootstrap

First, let’s create a layout file we can reuse.

resources/views/layouts/app.blade.php

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <title>DeepSeek Summarizer</title>
  <link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.5.2/css/bootstrap.min.css">
</head>
<body>
  <nav class="navbar navbar-expand-lg navbar-dark bg-dark">
    <div class="container">
      <a class="navbar-brand" href="#">DeepSeek Summarizer</a>
    </div>
  </nav>

  <main class="py-4">
    @yield('content')
  </main>
</body>
</html>

4.2 Form for Text Summarization

We’ll now create the actual view where users interact with the app.

resources/views/chat.blade.php

@extends('layouts.app')

@section('content')
<div class="container">
  <h2 class="mb-4">Summarize Text or PDF</h2>

  <form method="POST" action="/ask" enctype="multipart/form-data">
    @csrf

    <div class="form-group">
      <label for="text">Paste Text Here</label>
      <textarea name="text" class="form-control" rows="6" placeholder="Enter your content here...">{{ old('text') }}</textarea>
    </div>

    <div class="form-group mt-3">
      <label for="pdf">Or Upload a PDF</label>
      <input type="file" name="pdf" class="form-control-file">
    </div>

    <!-- Only one NLP task: Summarization -->
    <input type="hidden" name="mode" value="summarize">

    <button type="submit" class="btn btn-primary mt-3">Summarize</button>
  </form>

  @if(session('response'))
    <div class="alert alert-success mt-4" style="white-space: pre-wrap;">
      <h5>Summary:</h5>
      {{ session('response') }}
    </div>
  @endif
</div>
@endsection

5. Routes and Controller Logic

Now that our Blade UI is set up for summarization, we’ll connect it to the Laravel backend.

This section will walk you through:

  • Setting up web routes
  • Creating the controller
  • Handling text input and PDF upload
  • Sending the input to DeepSeek R1 via the LM Studio API
  • Returning the summary back to the view

5.1 Define Routes

Open routes/web.php and add the following:

use App\Http\Controllers\LLMController;

Route::get('/chat', [LLMController::class, 'view']);
Route::post('/ask', [LLMController::class, 'ask']);

  • GET /chat → Loads the Blade UI
  • POST /ask → Handles the summarization request

5.2 Create the Controller

Run this Artisan command:

php artisan make:controller LLMController

This creates app/Http/Controllers/LLMController.php.

Now, open the file and add the following code:


LLMController.php

namespace App\Http\Controllers;

use Illuminate\Http\Request;
use Illuminate\Support\Facades\Http;
use Smalot\PdfParser\Parser;

class LLMController extends Controller
{
    // Load the Blade view
    public function view()
    {
        return view('chat');
    }

    // Handle form submission
    public function ask(Request $request)
    {
        $text = $request->input('text');

        // Check if a PDF was uploaded
        if ($request->hasFile('pdf')) {
            $parser = new Parser();
            $pdf = $parser->parseFile($request->file('pdf')->getRealPath());
            $text = $pdf->getText();
        }

        // Bail out if neither text input nor a PDF was provided
        if (!$text) {
            return back()->with('response', 'Please enter text or upload a PDF.');
        }

        // Send the request to DeepSeek R1 via the LM Studio API
        // (local LLMs can be slow, so raise the default 30s HTTP timeout)
        $response = Http::timeout(120)->post('http://localhost:1234/v1/chat/completions', [
            'model' => 'deepseek-r1-distill-qwen-7b',
            'messages' => [
                ['role' => 'system', 'content' => 'You are a text summarizer.'],
                ['role' => 'user', 'content' => "Summarize this:\n\n" . $text],
            ],
            'max_tokens' => 300,
            'temperature' => 0.7,
        ]);

        $summary = $response->json('choices.0.message.content') ?? 'No response from model.';

        return redirect('/chat')->with('response', $summary)->withInput();
    }
}

💡 Notes:

  • Http::timeout(120)->post() sends the user’s input to DeepSeek R1; the generous timeout gives the local model time to respond
  • If a PDF is uploaded, it’s parsed using smalot/pdfparser (install it with composer require smalot/pdfparser)
  • The model is always given a system instruction to summarize
  • If neither text nor a PDF is provided, the user is asked to supply one
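Two optional hardening touches are worth adding, shown below as a sketch (the 10 MB upload cap and the <think>-stripping regex are assumptions; adjust to your needs). First, validate the upload before handing it to the parser; second, strip the reasoning block that R1-style models often emit before their final answer:

// At the top of ask(): reject anything that isn't a reasonably sized PDF
$request->validate([
    'text' => 'nullable|string',
    'pdf'  => 'nullable|file|mimes:pdf|max:10240', // max size in KB (~10 MB)
]);

// After reading the model output: drop any <think>...</think> block
// that R1-style models may prepend to the actual summary
$summary = trim(preg_replace('/<think>.*?<\/think>/s', '', $summary));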

6. Save Summary History in MySQL (Optional but Recommended)

Saving user input and the model’s summarized output in a database allows you to:

  • Review past summaries
  • Build user-specific histories (if login is added later)
  • Debug or analyze performance
  • Extend the app with stats or filters later

Let’s walk through how to do this in Laravel using a MySQL table.


6.1 Create a Migration for the conversations Table

Run this Artisan command to create a migration:

php artisan make:migration create_conversations_table

Open the generated file in database/migrations/ and update the up() method:

public function up()
{
    Schema::create('conversations', function (Blueprint $table) {
        $table->id();
        $table->string('mode')->default('summarize'); // Task type
        $table->longText('input');   // Original input text or PDF content
        $table->longText('response'); // Summarized result
        $table->timestamps();
    });
}

Now run the migration:

php artisan migrate

6.2 Create the Eloquent Model

Run this command:

php artisan make:model Conversation

This creates app/Models/Conversation.php. Make sure it’s ready to work with mass assignment:

namespace App\Models;

use Illuminate\Database\Eloquent\Model;

class Conversation extends Model
{
    protected $fillable = ['mode', 'input', 'response'];
}

6.3 Save Summaries in the Controller

In your LLMController, add this after receiving the model response:

use App\Models\Conversation;

Conversation::create([
    'mode' => 'summarize',
    'input' => $text,
    'response' => $summary,
]);

This stores every interaction in your MySQL database — input, output, and a timestamp.
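In context, the tail end of ask() now looks something like this:

$summary = $response->json('choices.0.message.content') ?? 'No response from model.';

// Persist the exchange before redirecting back to the form
Conversation::create([
    'mode' => 'summarize',
    'input' => $text,
    'response' => $summary,
]);

return redirect('/chat')->with('response', $summary)->withInput();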


6.4 Show History in Blade View

Modify your view() method in the controller to pass conversation history to the Blade file:

public function view()
{
    $conversations = \App\Models\Conversation::latest()->take(10)->get();
    return view('chat', compact('conversations'));
}

And update your chat.blade.php to display them:

@if(isset($conversations) && $conversations->count())
<div class="mt-5">
  <h4>Past Summaries</h4>
  @foreach($conversations as $c)
    <div class="card mt-3">
      <div class="card-header">
        {{ $c->created_at->format('d M Y, h:i A') }}
      </div>
      <div class="card-body">
        <p><strong>Input:</strong><br>{{ $c->input }}</p>
        <p><strong>Summary:</strong><br>{{ $c->response }}</p>
      </div>
    </div>
  @endforeach
</div>
@endif

7. Example Use Cases — Summarization Only

All of these run through the same pipeline; only the prompt changes:

Style                               | Use Case
“Summarize the following text”      | General-purpose
“Summarize into bullet points”      | Highlights or lists
“Summarize in one sentence”         | TL;DR style
“Detailed summary of the content”   | Longer, richer output
“Summarize each paragraph”          | Structured or segmented content
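If you later expose these styles in the UI, the controller only needs a small prompt map. A minimal sketch (the style form field and the array keys here are hypothetical):

// Hypothetical "style" form field mapped to a prompt prefix
$prompts = [
    'general'  => 'Summarize the following text:',
    'bullets'  => 'Summarize the following text into bullet points:',
    'tldr'     => 'Summarize the following text in one sentence:',
    'detailed' => 'Write a detailed summary of the following content:',
];

$style = $request->input('style', 'general');
$instruction = ($prompts[$style] ?? $prompts['general']) . "\n\n" . $text;
// Use $instruction as the user message in the chat/completions request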

8. Deployment Tips

Now that your Laravel app is fully functional and connected to DeepSeek R1 via LM Studio, let’s look at how to keep it running smoothly — locally on your machine.


Local Deployment (Personal or Internal Use)

This setup is ideal for individuals, developers, researchers, or small teams who want full control over their data and don’t need a cloud server.

Recommended Local Setup:

  • Run the Laravel app using:

php artisan serve

    or configure it with Apache or Nginx if preferred.

  • Run LM Studio on the same machine and ensure:
    • Your model (e.g., deepseek-r1-distill-qwen-7b) is loaded
    • The local server is running and the API is available at http://localhost:1234

Visit your app in the browser:

http://localhost:8000/chat

Keep LM Studio Running

  • LM Studio (and its local server) must stay open and active for the app to work
  • You can:
    • Minimize it to the tray
    • Add it to startup apps on your OS
    • Monitor it manually or script a launcher

Security (Local Use)

Since everything runs on your machine and binds to localhost:

  • The API isn’t reachable from other machines
  • No data is sent to third-party APIs
  • No additional authentication is needed for now

⚠️ Just make sure not to expose localhost:1234 on a public IP.


Laravel Environment File Example

To keep configuration clean, add the endpoint to your .env:

LLM_API_URL=http://localhost:1234/v1/chat/completions

Avoid reading env() directly in controllers; it returns null once the configuration is cached. Instead, expose the value through config/services.php:

'llm' => [
    'url' => env('LLM_API_URL', 'http://localhost:1234/v1/chat/completions'),
],

And in your controller:

$response = Http::timeout(120)->post(config('services.llm.url'), [...]);

Running Everything Locally

Component   | How to Run
Laravel App | php artisan serve, then visit http://localhost:8000
LM Studio   | Open the app, load the DeepSeek model, start the local server
Test        | Use the browser to summarize pasted text or an uploaded PDF

Want to Deploy to Production?

If you want to make your summarization tool available across your network or online, and need help with:

  • Securing LM Studio on a server
  • Hosting Laravel in the cloud
  • Setting up user authentication
  • Adding real-time features

Reach out to us for custom production deployment and integration support.

Contact: info@muneebdev.com

9. Future Enhancements

Now that you’ve built a fully functional text summarization web app using Laravel + DeepSeek R1 + LM Studio, there’s plenty of room to make it more powerful, interactive, and intelligent.

Here are some realistic next steps you can take to enhance the project for both personal and professional use:


1. Streamed Responses (Typing Effect)

Instead of showing the model’s output all at once, you can simulate real-time typing using:

  • JavaScript + AJAX polling
  • Laravel Echo (WebSockets)
  • Or simply animate text appearance for better UX

This gives users a more “chat-like” experience.


2. Summarization for Large Files

Right now, you send the entire document to the model at once. But for large PDFs or long text, you can:

  • Break content into smaller chunks
  • Summarize each section individually
  • Then combine those into a final, high-level summary

You could even let users choose summary depth (brief vs. detailed).
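Here’s a rough sketch of that chunk-then-combine flow. A helper like this could live in LLMController; the summarize() method is hypothetical (it would wrap the Http::post() call from Section 5), and the 6,000-character chunk size is an arbitrary starting point:

private function summarizeLongText(string $text, int $chunkSize = 6000): string
{
    // Split on paragraph boundaries so chunks stay coherent
    $paragraphs = preg_split("/\n\s*\n/", $text);

    $chunks = [''];
    foreach ($paragraphs as $p) {
        if (strlen(end($chunks)) + strlen($p) > $chunkSize) {
            $chunks[] = ''; // start a new chunk
        }
        $chunks[count($chunks) - 1] .= $p . "\n\n";
    }

    // Summarize each chunk individually...
    $partials = array_map(
        fn ($chunk) => $this->summarize("Summarize this:\n\n" . $chunk),
        $chunks
    );

    // ...then combine the partial summaries into one final summary
    return $this->summarize(
        "Combine these partial summaries into one concise summary:\n\n" . implode("\n\n", $partials)
    );
}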


3. Multi-Model Support (Optional)

LM Studio supports multiple models. In the future, you could:

  • Let users choose between DeepSeek, Mistral, or LLaMA
  • Compare outputs side by side
  • Assign models based on task type

4. Q&A Over Documents

Go beyond summarization — let users ask questions about a PDF they upload.

Example:

“What’s the main conclusion of this report?”

This can be done by changing the prompt structure and storing both the original text and question.
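In practice that’s a small variation on the summarization call. A sketch, where $question would come from a new form field:

$response = Http::timeout(120)->post(config('services.llm.url'), [
    'model' => 'deepseek-r1-distill-qwen-7b',
    'messages' => [
        ['role' => 'system', 'content' => 'Answer questions using only the provided document.'],
        ['role' => 'user', 'content' => "Document:\n\n" . $text . "\n\nQuestion: " . $question],
    ],
    'max_tokens' => 300,
]);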


5. Embedding + Vector Search

If your app grows to handle multiple documents, you can:

  • Convert text into embeddings (semantic vectors)
  • Store them in a vector index (like FAISS) or a vector database (like Weaviate)
  • Perform semantic search before summarization

This adds memory and knowledge retrieval capabilities to your app.


6. Add User Accounts

Use Laravel Breeze or Laravel Jetstream to:

  • Enable login and registration
  • Track history per user
  • Create admin tools to view summaries across the platform

7. Analytics Dashboard

Build an admin dashboard to see:

  • Most summarized topics
  • Daily/weekly usage
  • Summary length stats
  • Export history to CSV

8. Secure API Gateway

If you plan to allow other apps to call your Laravel summarizer, wrap it in an API layer with:

  • API key protection
  • Rate limiting
  • Token expiration

This transforms your app into a local, secure summarization microservice.
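In Laravel, the rate-limiting piece is a one-liner with the built-in throttle middleware. A minimal sketch (reusing the existing controller is an assumption; a dedicated API controller with token auth, e.g. via Laravel Sanctum, would be cleaner):

// routes/api.php
use App\Http\Controllers\LLMController;

Route::post('/summarize', [LLMController::class, 'ask'])
    ->middleware('throttle:30,1'); // at most 30 requests per minute per client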


Summary of Ideas

Feature               | Purpose
Streamed responses    | Improve UX
Chunked summarization | Handle long text
Q&A on documents      | Interactive understanding
Vector DB integration | Memory & semantic search
User accounts         | Personalization & multi-user support
Analytics             | Usage insights
Secure APIs           | External app integration

These enhancements can turn your simple summarizer into a powerful internal tool, a research assistant, or even a SaaS product powered by open-source AI.

10. Recap & Conclusion

Congratulations — you’ve just built a complete AI-powered text summarization web app, fully running locally, with no cloud dependencies, and 100% open-source technology.

Let’s quickly recap what you’ve accomplished:


What You Built

  • Loaded DeepSeek R1 using LM Studio
  • Connected Laravel to the local LLM API
  • Built a clean Bootstrap-based UI with Blade
  • Allowed users to paste text or upload PDF files
  • Generated high-quality summaries using DeepSeek R1
  • Optionally stored input/output in MySQL for history
  • Deployed and ran the entire stack locally on your machine

Tools Used

Tool        | Purpose
Laravel     | Backend logic and routing
Blade       | Templating engine for the UI
Bootstrap   | Simple, responsive design
LM Studio   | Runs the DeepSeek R1 model locally
DeepSeek R1 | Performs the natural-language summarization
MySQL       | Stores summary history (optional)

Why This Matters

This setup proves that you don’t need cloud APIs or expensive infrastructure to use powerful AI. With just your local machine, some open-source tools, and a bit of Laravel magic, you’ve created an app that can:

  • Save time
  • Improve productivity
  • Work offline
  • Keep sensitive data private

What’s Next?

From here, you can take the app even further:

  • Add document Q&A
  • Integrate with a vector search engine
  • Deploy to your team or organization
  • Turn it into a full SaaS product

The possibilities are wide open — and you’re already way ahead by running your own LLM locally.


Need Help or Want to Collaborate?

If you’d like help deploying this for your team, customizing it further, or integrating it into a larger system — reach out to us!

📧 Email: info@muneebdev.com
🌐 Website: muneebdev.com/hire


Thank you for following along — and enjoy building with DeepSeek R1 + Laravel!
