Gradio Python Tutorial: Create ML Demos and Share Your Models

You’ve trained a cool model. You want to share it with friends, add it to your portfolio, or let your manager test it. Streamlit requires understanding layouts and caching. Flask needs HTML templates. You just want something that works in 5 minutes so you can share a link and move on with your life. You don’t want to become a web developer — you just want to demo your model.

I discovered Gradio when I needed to share a computer vision model with a client who wanted to test it immediately. I built the demo in literally 10 lines of code, got a shareable link automatically, and was done in 15 minutes. No configuration, no deployment setup, no frontend code. Just a function and Gradio’s interface builder. For quick demos and model sharing, nothing beats Gradio’s simplicity.

Let me show you how to create shareable ML demos faster than you thought possible.

What Is Gradio and Why It’s Different

Gradio is a Python library for creating ML demos with minimal code. While Streamlit is for building apps, Gradio is specifically for wrapping models in interfaces and sharing them instantly.

What Gradio provides:

  • Automatic UI generation from function signatures
  • Built-in components for ML inputs/outputs
  • Instant shareable links (no deployment needed)
  • Hugging Face Hub integration
  • Flagging system for collecting feedback
  • Examples gallery

What makes Gradio special:

  • Minimal code: Often just 3–5 lines
  • Automatic sharing: Get a public link instantly
  • Zero configuration: No layout decisions needed
  • ML-focused: Built for model demos, not general apps

Think of Gradio as “function to web app in one line” — the absolute fastest way to demo a model.

Installation and First Demo (Stupidly Simple)

Install Gradio:

bash

pip install gradio

Create your first interface:

python

import gradio as gr

def greet(name):
    return f"Hello, {name}!"

demo = gr.Interface(fn=greet, inputs="text", outputs="text")
demo.launch()

Run it:

bash

python app.py

You get a local URL (by default http://127.0.0.1:7860) printed in your terminal, and the interface opens in your browser.

That's it. A handful of lines of code, instant demo. This is why Gradio exists.

Image Classification Demo (Actually Useful)

Let’s wrap a real model:

python

import gradio as gr
import torch
from torchvision import transforms, models
from PIL import Image

# Load model
model = models.resnet50(pretrained=True)
model.eval()

# Load ImageNet labels
with open('imagenet_classes.txt') as f:
    labels = [line.strip() for line in f.readlines()]

def classify_image(image):
    """Classify an image using ResNet50."""
    # Preprocess
    transform = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(
            mean=[0.485, 0.456, 0.406],
            std=[0.229, 0.224, 0.225]
        )
    ])
    img_tensor = transform(image).unsqueeze(0)

    # Predict
    with torch.no_grad():
        outputs = model(img_tensor)
    probabilities = torch.nn.functional.softmax(outputs[0], dim=0)

    # Get top 5 predictions
    top5_prob, top5_idx = torch.topk(probabilities, 5)

    # Return as dictionary {label: probability}
    return {labels[idx]: prob.item() for idx, prob in zip(top5_idx, top5_prob)}

# Create interface
demo = gr.Interface(
    fn=classify_image,
    inputs=gr.Image(type="pil"),
    outputs=gr.Label(num_top_classes=5),
    title="Image Classifier",
    description="Upload an image to classify it using ResNet50",
    examples=["cat.jpg", "dog.jpg", "car.jpg"]
)
demo.launch(share=True)  # share=True creates a public link

This creates a complete image classifier with:

  • Drag-and-drop image upload
  • Top 5 predictions with probabilities
  • Example images
  • Shareable public link

All in ~40 lines. No HTML, CSS, or frontend knowledge required.

Input Types

Gradio has pre-built components for common ML inputs:

Images

python

# Basic image input
gr.Image(type="pil")       # Returns PIL Image
gr.Image(type="numpy")     # Returns numpy array
gr.Image(type="filepath")  # Returns file path

# With options (note: source and shape are Gradio 3.x parameters;
# Gradio 4+ uses sources=[...] and dropped shape)
gr.Image(
    type="pil",
    label="Upload Image",
    source="upload",   # or "webcam", "canvas"
    shape=(224, 224)   # Resize automatically
)

Text

python

# Single line
gr.Textbox(label="Enter text", placeholder="Type here...")
# Multi-line
gr.Textbox(lines=5, label="Long text")
# Markdown rendering
gr.Markdown("**Bold** and *italic* text")

Audio

python

gr.Audio(
    type="filepath",   # or "numpy"
    label="Upload audio",
    source="upload"    # or "microphone"
)

Video

python

gr.Video(label="Upload video")

Files

python

gr.File(label="Upload file", file_types=[".csv", ".txt"])

Numeric Inputs

python

# Slider
gr.Slider(minimum=0, maximum=100, value=50, label="Temperature")
# Number input
gr.Number(value=0.5, label="Learning rate")

Categorical Inputs

python

# Dropdown
gr.Dropdown(
    choices=["Option 1", "Option 2", "Option 3"],
    label="Select option"
)

# Radio buttons
gr.Radio(choices=["A", "B", "C"], label="Pick one")

# Checkboxes
gr.CheckboxGroup(choices=["X", "Y", "Z"], label="Select multiple")

# Single checkbox
gr.Checkbox(label="Enable feature")

Output Types

Gradio provides components for displaying results:

Text Outputs

python

gr.Textbox(label="Result")
gr.Markdown() # Formatted text
gr.JSON() # JSON display
gr.HTML() # HTML rendering

Classification Outputs

python

# Label with probabilities
gr.Label(num_top_classes=5)
# Returns dict: {"class1": 0.8, "class2": 0.15, ...}

Images

python

gr.Image(label="Output Image")
gr.Gallery(label="Generated Images") # Multiple images

Plots

python

gr.Plot()  # Matplotlib or Plotly figures

Data

python

gr.Dataframe()  # Display pandas DataFrames

Multiple Inputs/Outputs Example

Gradio handles multiple inputs and outputs elegantly:

python

import gradio as gr
from transformers import pipeline

# Load sentiment model
sentiment_model = pipeline("sentiment-analysis")

# Load translation model
translator = pipeline("translation_en_to_fr")

def analyze_and_translate(text, translate):
    """Analyze sentiment and optionally translate."""
    # Sentiment analysis
    sentiment = sentiment_model(text)[0]
    sentiment_result = f"{sentiment['label']}: {sentiment['score']:.2%}"

    # Translation if requested
    if translate:
        translation = translator(text)[0]['translation_text']
    else:
        translation = "Translation not requested"

    return sentiment_result, translation

demo = gr.Interface(
    fn=analyze_and_translate,
    inputs=[
        gr.Textbox(label="Enter text", lines=3),
        gr.Checkbox(label="Translate to French")
    ],
    outputs=[
        gr.Textbox(label="Sentiment"),
        gr.Textbox(label="Translation")
    ],
    title="Text Analysis",
    description="Analyze sentiment and translate text"
)
demo.launch()

Gradio automatically arranges inputs and outputs in a logical layout.

Advanced Interface: Blocks API

For more control, use Blocks:

python

import gradio as gr

def process_image(image, model_name, threshold):
    """Placeholder — plug your own inference logic in here."""
    return image, {"cat": 0.9, "dog": 0.1}, 0.9

with gr.Blocks() as demo:
    gr.Markdown("# Image Processing Pipeline")

    with gr.Row():
        with gr.Column():
            input_img = gr.Image(label="Input")
            model_choice = gr.Dropdown(
                choices=["ResNet50", "VGG16", "EfficientNet"],
                label="Model"
            )
            threshold = gr.Slider(0, 1, 0.5, label="Threshold")
            submit_btn = gr.Button("Process")

        with gr.Column():
            output_img = gr.Image(label="Output")
            predictions = gr.Label(label="Predictions")
            confidence = gr.Number(label="Max Confidence")

    # Define interaction
    submit_btn.click(
        fn=process_image,
        inputs=[input_img, model_choice, threshold],
        outputs=[output_img, predictions, confidence]
    )

demo.launch()

Blocks gives you:

  • Custom layouts (rows, columns)
  • Multiple interactions
  • Complex workflows
  • Full control over design

Use Interface for simple demos, Blocks for custom layouts.

Real-World Example: Text Generation Demo

python

import gradio as gr
from transformers import pipeline

# Load model
generator = pipeline('text-generation', model='gpt2')

def generate_text(prompt, max_length, temperature, top_p):
    """Generate text based on prompt and parameters."""
    result = generator(
        prompt,
        max_length=max_length,
        temperature=temperature,
        top_p=top_p,
        num_return_sequences=1
    )
    return result[0]['generated_text']

# Create interface with Blocks for better layout
with gr.Blocks(theme=gr.themes.Soft()) as demo:
    gr.Markdown("# 🤖 Text Generation with GPT-2")
    gr.Markdown("Enter a prompt and adjust parameters to generate text")

    with gr.Row():
        with gr.Column(scale=2):
            prompt = gr.Textbox(
                label="Prompt",
                placeholder="Once upon a time...",
                lines=3
            )

            with gr.Row():
                max_length = gr.Slider(
                    minimum=50,
                    maximum=500,
                    value=100,
                    step=10,
                    label="Max Length"
                )
                temperature = gr.Slider(
                    minimum=0.1,
                    maximum=2.0,
                    value=1.0,
                    step=0.1,
                    label="Temperature"
                )

            top_p = gr.Slider(
                minimum=0.1,
                maximum=1.0,
                value=0.9,
                step=0.05,
                label="Top P"
            )

            generate_btn = gr.Button("Generate", variant="primary")

        with gr.Column(scale=3):
            output = gr.Textbox(
                label="Generated Text",
                lines=10,
                show_copy_button=True
            )

    # Examples
    gr.Examples(
        examples=[
            ["Once upon a time in a distant land", 100, 0.8, 0.9],
            ["The future of artificial intelligence", 150, 1.0, 0.95],
            ["In the year 2050, humans discovered", 200, 1.2, 0.9]
        ],
        inputs=[prompt, max_length, temperature, top_p]
    )

    # Connect function
    generate_btn.click(
        fn=generate_text,
        inputs=[prompt, max_length, temperature, top_p],
        outputs=output
    )

demo.launch(share=True)

This creates a professional-looking text generation interface with:

  • Custom layout
  • Parameter controls
  • Example inputs
  • Shareable link
  • Copy button for output

Flagging: Collect User Feedback

Gradio includes built-in feedback collection:

python

import gradio as gr

def classify(image):
    # Your classification logic here
    prediction = {"cat": 0.9, "dog": 0.1}  # placeholder
    return prediction

demo = gr.Interface(
    fn=classify,
    inputs=gr.Image(),
    outputs=gr.Label(),
    flagging_mode="manual",  # Users can flag results
    flagging_options=["incorrect", "offensive", "other"]
)
demo.launch()

When users click “Flag”, their input and output are saved to a CSV file. Perfect for collecting training data or identifying model failures.

Hugging Face Spaces Integration

Deploy to Hugging Face for permanent hosting:

python

# app.py
import gradio as gr

def predict(text):
    # Your model here
    result = text.upper()  # placeholder
    return result

demo = gr.Interface(fn=predict, inputs="text", outputs="text")

if __name__ == "__main__":
    demo.launch()

Deploy:

  1. Push to GitHub
  2. Connect to Hugging Face Spaces
  3. Your demo is live permanently

Free hosting, no server management, automatic HTTPS.
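You can also skip GitHub and push straight to the Space's own git repo. A rough sketch — the username and Space name are placeholders, and a `requirements.txt` listing your dependencies must sit alongside `app.py`:

```shell
# Hypothetical example — replace your-username/my-demo with your own Space
git clone https://huggingface.co/spaces/your-username/my-demo
cd my-demo

# A Space needs app.py plus a requirements.txt
echo "gradio" > requirements.txt
cp ../app.py .

git add app.py requirements.txt
git commit -m "Add Gradio demo"
git push   # the Space rebuilds and goes live automatically
```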

Sharing Options

Gradio makes sharing trivially easy:

Temporary Public Link

python

demo.launch(share=True)  # Creates 72-hour public link

Local Network Only

python

demo.launch(share=False)  # Only accessible locally

Custom Server Settings

python

demo.launch(
    server_name="0.0.0.0",  # Allow external connections
    server_port=7860,
    share=True
)

Authentication

python

demo.launch(
    auth=("username", "password"),  # Basic auth
    share=True
)

# Or multiple users
demo.launch(
    auth=[("user1", "pass1"), ("user2", "pass2")],
    share=True
)

Gradio vs Streamlit (When to Use Which)

Both are great, but different:

Use Gradio when:

  • Quick model demos
  • Sharing specific models
  • Want auto-generated interface
  • Need instant shareable links
  • Building for Hugging Face ecosystem
  • Minimal code is priority

Use Streamlit when:

  • Building full applications
  • Need complex layouts
  • Multiple pages/workflows
  • Custom dashboards
  • More control over design
  • Internal tools

IMO, Gradio for “demo this model quickly” and Streamlit for “build an application.” They’re complementary, not competitive.

Common Patterns and Best Practices

Pattern 1: Model Caching

python

import gradio as gr
from transformers import pipeline

# Load model once (module level)
model = pipeline("sentiment-analysis")

def analyze(text):
    return model(text)[0]

demo = gr.Interface(fn=analyze, inputs="text", outputs="label")
demo.launch()

Load models at module level, not inside functions. Gradio doesn’t reload your script on each request.

Pattern 2: Error Handling

python

def predict(input_data):
    try:
        result = model(input_data)
        return result
    except Exception as e:
        return f"Error: {str(e)}"

demo = gr.Interface(
    fn=predict,
    inputs="text",
    outputs="text",
    allow_flagging="never"  # Disable flagging entirely
)

Pattern 3: Progress Indicators

python

import gradio as gr
import time

def slow_function(text):
    for i in range(10):
        time.sleep(0.5)
        yield f"Processing... {i*10}%"
    yield "Complete!"

demo = gr.Interface(
    fn=slow_function,
    inputs="text",
    outputs="text"
)
demo.launch()

Functions that yield show progress updates.

Pattern 4: Examples Gallery

python

demo = gr.Interface(
    fn=classify,
    inputs=gr.Image(),
    outputs=gr.Label(),
    examples=[
        ["example1.jpg"],
        ["example2.jpg"],
        ["example3.jpg"]
    ],
    cache_examples=True  # Precompute example outputs
)

Examples let users test your model instantly without finding their own inputs.

Common Mistakes to Avoid

Learn from these Gradio failures:

Mistake 1: Loading Model Inside Function

python

# Bad - reloads model on every call
def predict(text):
    model = load_model()  # Slow!
    return model(text)

# Good - load once at module level
model = load_model()

def predict(text):
    return model(text)

Model loading inside the function makes every prediction slow.

Mistake 2: Wrong Input/Output Types

python

# Bad - type mismatch
def process(image):
    # Expects a PIL Image but receives a filepath string
    return image.size

demo = gr.Interface(
    fn=process,
    inputs=gr.Image(type="filepath"),  # Wrong type!
    outputs="text"
)

# Good - matching types
def process(image):
    # Now works with PIL
    return f"Size: {image.size}"

demo = gr.Interface(
    fn=process,
    inputs=gr.Image(type="pil"),  # Correct type
    outputs="text"
)

Match your function’s expected input types with Gradio component types.

Mistake 3: Not Using Examples

python

# Mediocre - no examples
demo = gr.Interface(fn=predict, inputs="text", outputs="text")

# Better - includes examples
demo = gr.Interface(
    fn=predict,
    inputs="text",
    outputs="text",
    examples=["Example 1", "Example 2", "Example 3"]
)

Examples dramatically improve user experience. Always include them.

Mistake 4: Forgetting share=True

python

# Bad - only accessible locally
demo.launch()
# Good - gets shareable link
demo.launch(share=True)

If you want to share your demo, don’t forget share=True. FYI, I've made this mistake more times than I'd like to admit. :/

The Bottom Line

Gradio exists for one purpose: make model demos absurdly easy. It’s not for building complex applications — it’s for wrapping a function in an interface and sharing it in minutes. For that specific use case, nothing beats Gradio’s simplicity.

Use Gradio when:

  • Demoing models quickly
  • Sharing with non-technical users
  • Building for Hugging Face
  • Prototyping ML interfaces
  • Portfolio projects
  • Client demos

Skip Gradio when:

  • Building production applications
  • Need complex workflows
  • Require precise layout control
  • Building internal tools (Streamlit better)

For ML practitioners who need to share models fast, Gradio is invaluable. The alternative is spending hours on frontend code or never sharing your work. Gradio makes “ship a demo” the path of least resistance.

Installation:

bash

pip install gradio

Stop avoiding demos because they take too long to build. Start using Gradio to wrap models in interfaces and get shareable links in minutes. Your portfolio needs demos, your clients want to test models, and your friends want to try your cool ML project. Gradio makes all of that trivially easy — no web development required, just a few lines of Python. :)
