TensorFlow Datasets (TFDS): Load and Preprocess Data Efficiently

You’re ready to train a model. You’ve got your architecture planned out. But first, you need data. So you start downloading CSVs, writing loading scripts, handling edge cases, normalizing values, and two hours later you’re still fighting with data pipelines instead of actually training anything.

I’ve been there way too many times. Here’s what nobody tells you: data loading and preprocessing eats up the majority of your working time on most machine learning projects — the commonly quoted figure is around 80%. It’s tedious, error-prone, and honestly kind of boring.

TensorFlow Datasets (TFDS) solves this. It’s a collection of ready-to-use datasets with preprocessing pipelines already built. One line of code gets you data that’s ready to train on. Let me show you how to use it properly, skip the common pitfalls, and actually save yourself some serious time.

What Is TensorFlow Datasets and Why It Matters

TensorFlow Datasets is a library that provides standardized datasets for machine learning. It handles downloading, preprocessing, and loading data into formats that work seamlessly with TensorFlow and Keras.

What you get:

  • 350+ ready-to-use datasets covering images, text, audio, video, and more
  • Standardized API that works the same across all datasets
  • Efficient loading with built-in performance optimizations
  • Preprocessing pipelines that handle the tedious stuff
  • Versioning so your experiments are reproducible

Think of it as a massive library of datasets that just work. No hunting for download links. No writing custom loading code. No dealing with corrupt files or weird formats.

When I discovered TFDS, it changed how I prototype models. I can go from idea to training in minutes instead of hours.

Installation and Setup (Keep It Simple)

Getting started is straightforward:

python

pip install tensorflow-datasets

That’s it. You’ll also need TensorFlow installed, obviously:

python

pip install tensorflow

If you’re on a machine with a GPU, note that recent TensorFlow releases (2.x) bundle GPU support into the standard tensorflow package — you just need working CUDA drivers. TFDS itself runs identically on CPU and GPU.

Import it in your code:

python

import tensorflow_datasets as tfds
import tensorflow as tf

Done. No configuration files. No environment variables. It just works. :)

Loading Your First Dataset (Actually Simple)

Let’s load MNIST — the “Hello World” of computer vision:

python

# Load the dataset
dataset, info = tfds.load('mnist', split='train', with_info=True)
# That's it. You're done.

Two lines. You now have 60,000 handwritten digit images loaded and ready to use. The first time you run this, TFDS downloads the data. After that, it uses the cached version.

The info object contains metadata about the dataset:

python

print(info)

You’ll see details like number of examples, feature types, splits available, and citation information. Actually useful stuff.

Understanding Splits

Most datasets have predefined splits — train, validation, test. You specify which one you want:

python

train_ds = tfds.load('mnist', split='train')
test_ds = tfds.load('mnist', split='test')

You can also create custom splits:

python

# First 80% for training, last 20% for validation
train_ds = tfds.load('mnist', split='train[:80%]')
val_ds = tfds.load('mnist', split='train[80%:]')

Pretty intuitive once you see it in action.

Working With the Data (Beyond Just Loading)

TFDS returns tf.data.Dataset objects. These are TensorFlow's way of representing datasets, and they're incredibly powerful.

Inspecting Your Data

See what you’re actually working with:

python

# Load dataset
ds = tfds.load('mnist', split='train')
# Look at a single example
for example in ds.take(1):
    image = example['image']
    label = example['label']
    print(f"Image shape: {image.shape}")
    print(f"Label: {label.numpy()}")

Each example is a dictionary with keys like ‘image’ and ‘label’. Different datasets have different keys — check the dataset documentation or the info object.

Basic Preprocessing

You’ll usually need to preprocess your data. Here’s how:

python

def normalize_img(image, label):
    """Normalizes images to [0, 1] range."""
    return tf.cast(image, tf.float32) / 255., label

ds = tfds.load('mnist', split='train')
ds = ds.map(normalize_img, num_parallel_calls=tf.data.AUTOTUNE)

The map function applies your preprocessing to each example. num_parallel_calls=AUTOTUNE makes it run faster by using multiple CPU cores. Always use AUTOTUNE unless you have a specific reason not to.

Batching and Shuffling

You need batched data for training:

python

ds = tfds.load('mnist', split='train')
ds = ds.map(normalize_img, num_parallel_calls=tf.data.AUTOTUNE)
ds = ds.shuffle(1024) # Shuffle with a buffer of 1024 examples
ds = ds.batch(32) # Batches of 32
ds = ds.prefetch(tf.data.AUTOTUNE) # Prefetch for performance

What’s happening:

  • shuffle: Randomizes order (important for training)
  • batch: Groups examples into batches
  • prefetch: Loads next batch while GPU trains on current batch

This pattern — map, shuffle, batch, prefetch — is what you’ll use 90% of the time.

Performance Optimization (Make It Fast)

Loading data efficiently matters. A slow data pipeline bottlenecks your entire training process.

The Standard Pipeline

Here’s the pipeline I use for almost every project:

python

def prepare_dataset(dataset, shuffle_buffer_size=1024, batch_size=32):
    # Normalize
    dataset = dataset.map(normalize_img, num_parallel_calls=tf.data.AUTOTUNE)

    # Cache to memory or disk
    dataset = dataset.cache()

    # Shuffle
    dataset = dataset.shuffle(shuffle_buffer_size)

    # Batch
    dataset = dataset.batch(batch_size)

    # Prefetch
    dataset = dataset.prefetch(tf.data.AUTOTUNE)

    return dataset

# Use it
train_ds = tfds.load('mnist', split='train')
train_ds = prepare_dataset(train_ds)

Understanding Cache

The cache() method is powerful but often misunderstood:

python

# Cache in memory (fast, but uses RAM)
ds = ds.cache()
# Cache to disk (slower than memory, but doesn't use RAM)
ds = ds.cache('/path/to/cache/file')

Use memory caching for small datasets that fit in RAM. Use disk caching for larger datasets. IMO, disk caching is underrated — it’s way faster than reprocessing data every epoch.

Ever wonder why your first epoch takes forever but subsequent epochs are fast? That’s caching in action.
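One caveat that’s easy to miss: cache() snapshots everything upstream of it. If you cache *after* random augmentation, your “random” augmentations get frozen after the first epoch. A minimal sketch of the safe ordering (the helper names here are mine, not TFDS API):

```python
import tensorflow as tf

def normalize(image, label):
    # Deterministic work: safe to cache
    return tf.cast(image, tf.float32) / 255.0, label

def augment(image, label):
    # Random ops must run AFTER cache so they differ every epoch
    return tf.image.random_flip_left_right(image), label

def build_pipeline(ds, batch_size=32):
    ds = ds.map(normalize, num_parallel_calls=tf.data.AUTOTUNE)
    ds = ds.cache()  # everything above this line is computed once
    ds = ds.map(augment, num_parallel_calls=tf.data.AUTOTUNE)
    ds = ds.shuffle(1024).batch(batch_size).prefetch(tf.data.AUTOTUNE)
    return ds
```

Deterministic preprocessing goes before the cache, anything random goes after it.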

Parallel Processing

Use num_parallel_calls everywhere you can:

python

ds = ds.map(preprocessing_function, num_parallel_calls=tf.data.AUTOTUNE)

This uses multiple CPU cores to process data in parallel. The performance difference is massive on multi-core machines.
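If you want to see the difference on your own machine, a rough timing harness (the `benchmark` helper is my own, not part of TFDS) looks like this:

```python
import time
import tensorflow as tf

def benchmark(dataset, num_batches=100):
    """Iterate a fixed number of batches and return elapsed seconds."""
    start = time.perf_counter()
    for _ in dataset.take(num_batches):
        pass  # stand-in for a training step consuming the batch
    return time.perf_counter() - start

# Compare sequential vs parallel map on a synthetic dataset
ds = tf.data.Dataset.from_tensor_slices(tf.zeros([1000, 64, 64, 3]))
slow_fn = lambda x: tf.image.adjust_brightness(x, 0.1)

sequential = ds.map(slow_fn).batch(32)
parallel = ds.map(slow_fn, num_parallel_calls=tf.data.AUTOTUNE).batch(32)

print(f"sequential: {benchmark(sequential):.3f}s")
print(f"parallel:   {benchmark(parallel):.3f}s")
```

The gap grows with heavier preprocessing functions and more CPU cores.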

Exploring Available Datasets

TFDS has hundreds of datasets. Here’s how to find what you need:

List All Datasets

python

# List all available datasets
print(tfds.list_builders())

You’ll see names like ‘mnist’, ‘cifar10’, ‘imagenet2012’, ‘imdb_reviews’, etc.

Get Dataset Information

Before loading a dataset, check what it contains:

python

# Get info without loading data
info = tfds.builder('cifar10').info
print(info)

This shows you splits, features, number of examples, and more. Saves you from downloading a dataset only to realize it’s not what you needed.

Popular Datasets to Know

Image Classification:

  • mnist: Handwritten digits
  • cifar10: 10 classes of 32x32 color images
  • cifar100: 100 classes of 32x32 color images
  • imagenet2012: The famous ImageNet dataset
  • oxford_flowers102: Flower classification

Object Detection:

  • coco/2017: Common Objects in Context
  • voc/2007: Pascal VOC 2007

Text:

  • imdb_reviews: Movie review sentiment
  • glue: Various NLP benchmark tasks
  • wikipedia: Wikipedia articles

Audio:

  • speech_commands: Simple audio classification
  • librispeech: Speech recognition

I’ve used most of these at some point. They’re well-maintained and just work.

Real-World Example: Image Classification Pipeline

Let’s build a complete pipeline for CIFAR-10:

python

import tensorflow as tf
import tensorflow_datasets as tfds

# Load data
(train_ds, val_ds, test_ds), info = tfds.load(
    'cifar10',
    split=['train[:90%]', 'train[90%:]', 'test'],
    as_supervised=True,
    with_info=True
)

# Deterministic preprocessing (safe to cache)
def normalize(image, label):
    return tf.cast(image, tf.float32) / 255.0, label

# Data augmentation for training (must run after cache so it stays random)
def augment(image, label):
    image = tf.image.random_flip_left_right(image)
    image = tf.image.random_brightness(image, 0.2)
    return image, label

# Prepare training dataset
train_ds = train_ds.map(normalize, num_parallel_calls=tf.data.AUTOTUNE)
train_ds = train_ds.cache()
train_ds = train_ds.map(augment, num_parallel_calls=tf.data.AUTOTUNE)
train_ds = train_ds.shuffle(1024)
train_ds = train_ds.batch(64)
train_ds = train_ds.prefetch(tf.data.AUTOTUNE)

# Prepare validation dataset (no augmentation)
val_ds = val_ds.map(normalize, num_parallel_calls=tf.data.AUTOTUNE)
val_ds = val_ds.cache()
val_ds = val_ds.batch(64)
val_ds = val_ds.prefetch(tf.data.AUTOTUNE)

# Now train your model (assuming `model` is a compiled Keras model)
model.fit(train_ds, validation_data=val_ds, epochs=10)

This is close to production-ready code. It handles loading, splitting, preprocessing, augmentation, and pipeline optimization — the only missing piece is defining and compiling the model itself.

Advanced Preprocessing Techniques

Sometimes basic normalization isn’t enough. Let’s get fancy.

Data Augmentation

For image tasks, augmentation improves generalization:

python

def augment(image, label):
    # Random crops: pad first, then crop back to the original size
    # (cropping to the image's own size would be a no-op)
    image = tf.image.resize_with_crop_or_pad(image, 32, 32)
    image = tf.image.random_crop(image, size=[28, 28, 1])

    # Random flips
    image = tf.image.random_flip_left_right(image)
    image = tf.image.random_flip_up_down(image)

    # Random brightness and contrast
    image = tf.image.random_brightness(image, max_delta=0.1)
    image = tf.image.random_contrast(image, lower=0.9, upper=1.1)

    return image, label

train_ds = train_ds.map(augment, num_parallel_calls=tf.data.AUTOTUNE)

Use augmentation only on training data. Don’t augment validation or test sets.

Text Preprocessing

For text datasets, you’ll need tokenization:

python

# Load text dataset
ds = tfds.load('imdb_reviews', split='train', as_supervised=True)

# Create tokenizer
tokenizer = tf.keras.preprocessing.text.Tokenizer(num_words=10000)

# Fit on training data (iterates the whole split once — slow, but one-time)
texts = [text.numpy().decode('utf-8') for text, _ in ds]
tokenizer.fit_on_texts(texts)

# Preprocessing function
def preprocess_text(text, label):
    # Tokenize
    text = tokenizer.texts_to_sequences([text.numpy().decode('utf-8')])[0]
    # Pad to fixed length
    text = tf.keras.preprocessing.sequence.pad_sequences([text], maxlen=200)[0]
    return text, label

# Apply to dataset (py_function lets us call regular Python per example)
ds = ds.map(lambda text, label: tf.py_function(
    preprocess_text, [text, label], [tf.int32, tf.int64]
), num_parallel_calls=tf.data.AUTOTUNE)

Text preprocessing is messier than image preprocessing, but TFDS still makes it manageable.
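Worth a footnote: the Tokenizer API above is the legacy route. Modern TensorFlow ships tf.keras.layers.TextVectorization, which runs inside the graph — no py_function needed. A minimal sketch on a toy corpus (standing in for imdb_reviews):

```python
import tensorflow as tf

# Build the vectorizer: lowercases, tokenizes, maps tokens to integer ids
vectorizer = tf.keras.layers.TextVectorization(
    max_tokens=10000,
    output_sequence_length=200,
)

# Adapt on raw text (a toy corpus here; use your training split in practice)
corpus = tf.data.Dataset.from_tensor_slices(
    ["a great movie", "a terrible movie", "great acting"]
)
vectorizer.adapt(corpus)

# Map it over a (text, label) dataset like TFDS returns with as_supervised=True
labeled = tf.data.Dataset.from_tensor_slices(
    (["a great movie", "a terrible movie"], [1, 0])
)
ds = labeled.batch(2).map(lambda text, label: (vectorizer(text), label))
```

Because the layer is graph-compatible, the whole pipeline stays inside tf.data and keeps its parallelism.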

Handling Large Datasets (When RAM Isn’t Enough)

Some datasets don’t fit in memory. TFDS handles this gracefully.

Streaming Large Datasets

Don’t load everything at once:

python

# This streams data from disk as needed
ds = tfds.load('imagenet2012', split='train', shuffle_files=True)

The shuffle_files=True parameter shuffles the file order, which helps with randomness even when you can't shuffle all examples in memory.

Using Smaller Subsets for Testing

Test your pipeline on a tiny subset first:

python

# Load only first 1000 examples
ds = tfds.load('imagenet2012', split='train[:1000]')

This lets you debug preprocessing code without waiting for the full dataset to load. FYI, I do this constantly — test on small data, scale up when it works.

Memory Management

python

ds = (tfds.load('imagenet2012', split='train')
      .map(preprocess, num_parallel_calls=tf.data.AUTOTUNE)
      .cache('/tmp/imagenet_cache')  # Cache to disk, not RAM
      .shuffle(10000)
      .batch(32)
      .prefetch(tf.data.AUTOTUNE))

Disk caching prevents out-of-memory errors on large datasets.

Common Mistakes and How to Fix Them

I’ve made all these mistakes. Learn from my suffering:

Mistake 1: Not Using as_supervised

Without as_supervised=True, you get dictionaries:

python

ds = tfds.load('mnist', split='train')
# Each example is {'image': ..., 'label': ...}

With as_supervised=True, you get tuples:

python

ds = tfds.load('mnist', split='train', as_supervised=True)
# Each example is (image, label)

Tuples work directly with model.fit(). Dictionaries require extra processing. Use as_supervised=True for standard supervised learning tasks.

Mistake 2: Wrong Shuffle Buffer Size

Too small:

python

ds = ds.shuffle(10)  # Only shuffles 10 examples

This barely shuffles anything. Use at least 1024 for decent randomization.

Too large:

python

ds = ds.shuffle(1000000)  # Uses tons of memory

This can cause out-of-memory errors. Balance memory usage and randomness — typically 1024 to 10000 works well.

Mistake 3: Forgetting Prefetch

Without prefetch, your GPU sits idle while waiting for data:

python

ds = ds.batch(32)  # GPU waits for CPU to load next batch

With prefetch, data loads in parallel:

python

ds = ds.batch(32).prefetch(tf.data.AUTOTUNE)  # GPU always has data ready

Always end your pipeline with .prefetch(tf.data.AUTOTUNE). Always.

Mistake 4: Augmenting Validation Data

Don’t do this:

python

val_ds = val_ds.map(augment)  # Wrong!

Augmentation is for training only. Validation and test data should be consistent across evaluations.

Custom Datasets (When TFDS Doesn’t Have What You Need)

Sometimes you need to use your own data. You can still leverage TFDS patterns:

python

# Create dataset from directory
train_ds = tf.keras.utils.image_dataset_from_directory(
    'path/to/train',
    image_size=(224, 224),
    batch_size=32
)

# Apply TFDS-style processing
def preprocess(image, label):
    image = tf.cast(image, tf.float32) / 255.0
    return image, label

# Note: image_dataset_from_directory returns already-batched data,
# so this shuffle reorders whole batches rather than individual examples
train_ds = (train_ds
            .map(preprocess, num_parallel_calls=tf.data.AUTOTUNE)
            .cache()
            .shuffle(1000)
            .prefetch(tf.data.AUTOTUNE))

You get the efficiency of TFDS pipelines with your own data.

Debugging Your Pipeline (When Things Break)

Data pipelines fail silently. Here’s how to debug them:

Check Pipeline Output

python

# Take one batch and examine it
for images, labels in train_ds.take(1):
    print(f"Batch shape: {images.shape}")
    print(f"Label shape: {labels.shape}")
    print(f"Image range: {tf.reduce_min(images)} to {tf.reduce_max(images)}")

This catches issues like wrong shapes, incorrect normalization, or missing data.

Visualize Samples

python

import matplotlib.pyplot as plt
# Visualize a batch
for images, labels in train_ds.take(1):
    plt.figure(figsize=(10, 10))
    for i in range(9):
        plt.subplot(3, 3, i + 1)
        plt.imshow(images[i])
        plt.title(f"Label: {labels[i]}")
        plt.axis('off')
    plt.show()

Seeing your data helps catch preprocessing errors immediately.

The Bottom Line

TensorFlow Datasets eliminates the drudgery of data loading. You spend less time writing boilerplate and more time building models. The standardized API means code you write for one dataset works for others with minimal changes.

Is it perfect? No. Some datasets have quirks. Documentation could be better in places. But compared to manually downloading, parsing, and preprocessing dozens of different data formats? TFDS is a massive productivity boost.

Start using it on your next project. Pick a dataset, load it in one line, apply the standard pipeline I showed you, and train a model. You’ll wonder how you ever did it any other way.

Now stop writing custom data loaders and go train something. :) Your data pipeline doesn’t need to be that complicated.
