Scikit-learn Pipeline Tutorial: Streamline Your ML Workflow in Python

Look, if you’ve been working with machine learning in Python for more than five minutes, you’ve probably found yourself copy-pasting the same preprocessing code over and over again. Sound familiar? Yeah, I’ve been there too, and honestly, it’s annoying as hell.

Here’s the thing: scikit-learn pipelines are about to change your life. I’m not being dramatic — okay, maybe a little — but seriously, once you start using pipelines, you’ll wonder how you ever lived without them. They’re like the Marie Kondo of machine learning: everything has its place, and your code actually sparks joy for once.


What Exactly Is a Scikit-learn Pipeline?

Think of a pipeline as a conveyor belt for your data. You put raw, messy data in one end, and it comes out the other end all cleaned up, transformed, and ready for modeling. The beauty? You define this conveyor belt once, and it handles everything automatically.

A pipeline chains together multiple steps — preprocessing, feature engineering, and model training — into a single object. This means you apply the same transformations to your training data and test data without writing the same code twice. No more forgetting to normalize your test set or accidentally fitting your scaler on test data (we’ve all done it, don’t lie).
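To make that scaler mistake concrete, here's a minimal sketch (with a made-up toy train/test split) contrasting the error-prone manual approach with the pipeline approach:

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

X_train = np.array([[1.0], [2.0], [3.0], [4.0]])
y_train = np.array([0, 0, 1, 1])
X_test = np.array([[2.5], [3.5]])

# Manual way: easy to call fit_transform on the test set by accident
scaler = StandardScaler()
X_train_s = scaler.fit_transform(X_train)
X_test_s = scaler.transform(X_test)  # transform only -- never fit here!

# Pipeline way: the fit/transform distinction is handled for you
pipe = Pipeline([('scaler', StandardScaler()),
                 ('clf', LogisticRegression())])
pipe.fit(X_train, y_train)
preds = pipe.predict(X_test)
```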

IMO, this is one of those tools that separates beginner data scientists from people who actually ship production-ready models. It’s that important.

Why Should You Care About Pipelines?

Ever shipped a model to production only to realize your preprocessing steps were slightly different between training and deployment? Yeah, that’s a nightmare scenario. Pipelines solve this problem elegantly.

Here’s what pipelines give you:

  • Consistency: Your data transformations happen the same way every single time
  • Simplicity: Less code means fewer bugs and easier maintenance
  • Cross-validation compatibility: Pipelines play nicely with GridSearchCV and cross_val_score
  • Production readiness: Save your entire pipeline as one object and deploy it

Think about it — would you rather maintain 15 different preprocessing scripts or one clean pipeline? The answer’s pretty obvious.

Building Your First Pipeline: The Basics

Let’s start simple. I’m talking really simple, because complex examples right off the bat just confuse everyone.

Here’s what a basic pipeline looks like:

```python
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

pipeline = Pipeline([
    ('scaler', StandardScaler()),
    ('classifier', LogisticRegression())
])

pipeline.fit(X_train, y_train)
predictions = pipeline.predict(X_test)
```

See that? Beyond the imports, it's just three statements, and you've got a complete workflow. The pipeline automatically scales your training data, fits the model, then scales your test data the same way before making predictions.

Each step in the pipeline is a tuple containing a name (you choose this) and a transformer or estimator object. The last step should always be your model — everything before that is preprocessing.

Understanding the Flow: What Happens Under the Hood

When you call fit() on a pipeline, something cool happens. The pipeline moves through each step sequentially:

  1. It fits the first transformer (StandardScaler) on your training data
  2. It transforms the training data using that fitted transformer
  3. It passes the transformed data to the next step
  4. Rinse and repeat until it reaches the final estimator
  5. The final estimator gets fitted on the fully transformed data
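Assuming the two-step scaler-plus-classifier pipeline from above (and a toy dataset), those five steps are equivalent to doing this by hand:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

X_train = np.array([[1.0], [2.0], [3.0], [4.0]])
y_train = np.array([0, 0, 1, 1])

# Steps 1-3: fit the transformer on training data, transform, pass along
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X_train)

# Steps 4-5: the final estimator is fitted on the transformed data
model = LogisticRegression()
model.fit(X_scaled, y_train)

# predict() repeats transform (not fit!) on new data, then predicts
X_new = np.array([[2.5]])
pred = model.predict(scaler.transform(X_new))
```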

When you call predict(), the magic continues. The pipeline transforms your new data through each fitted transformer, then makes predictions using the fitted model. You don't have to remember to scale your test data—the pipeline handles it.

Pretty neat, right? This is why pipelines prevent so many production bugs. You literally can’t forget a preprocessing step.

Column Transformers: Because Real Data Is Messy

Here’s where things get interesting. Real-world datasets don’t have just one type of feature — you’ve got numerical columns, categorical columns, text data, and who knows what else. Do you really want to preprocess each type separately and keep track of everything manually? :/

Enter ColumnTransformer. This lets you apply different preprocessing steps to different columns:

```python
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import StandardScaler, OneHotEncoder

preprocessor = ColumnTransformer([
    ('num', StandardScaler(), ['age', 'income']),
    ('cat', OneHotEncoder(), ['city', 'occupation'])
])

full_pipeline = Pipeline([
    ('preprocessor', preprocessor),
    ('classifier', LogisticRegression())
])
```

Now you’re preprocessing numerical and categorical features differently, all within one pipeline. Your numerical features get scaled, your categorical features get one-hot encoded, and everything happens automatically when you fit the pipeline.

This is what I use in literally every real-world project. FYI, you can mix and match as many transformers as you need.

Handling Missing Values Like a Pro

Missing data is inevitable. Your pipeline should handle it gracefully instead of throwing errors at the worst possible moment.

SimpleImputer is your friend here:

```python
from sklearn.impute import SimpleImputer

numeric_transformer = Pipeline([
    ('imputer', SimpleImputer(strategy='median')),
    ('scaler', StandardScaler())
])

categorical_transformer = Pipeline([
    ('imputer', SimpleImputer(strategy='constant', fill_value='missing')),
    ('onehot', OneHotEncoder(handle_unknown='ignore'))
])
```

Notice how I’m creating mini-pipelines for each column type? You can nest pipelines inside ColumnTransformers. It’s pipelines all the way down, and it actually makes sense once you get used to it.

The imputer fills missing values before scaling or encoding. Order matters here — you can’t scale data that has NaN values, so imputation comes first.
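A quick check of the imputer on its own (a toy array with one NaN) shows why it has to run first:

```python
import numpy as np
from sklearn.impute import SimpleImputer

X = np.array([[1.0], [np.nan], [3.0]])

# The NaN becomes the column median (2.0); StandardScaler would have
# raised an error if it saw the NaN directly
imputed = SimpleImputer(strategy='median').fit_transform(X)
```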



Custom Transformers: Making Pipelines Your Own

Sometimes scikit-learn’s built-in transformers aren’t enough. Maybe you need custom feature engineering, or you’re working with domain-specific data transformations. No worries — you can build your own.

Here’s a simple custom transformer:

```python
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin

class LogTransformer(BaseEstimator, TransformerMixin):
    def fit(self, X, y=None):
        return self

    def transform(self, X):
        return np.log1p(X)
```

You inherit from BaseEstimator and TransformerMixin, implement fit() and transform(), and boom—you've got a pipeline-compatible transformer. Use it just like any other transformer in your pipeline.

I’ve built custom transformers for text preprocessing, date feature extraction, and all sorts of domain-specific stuff. Once you learn this pattern, your pipelines become incredibly powerful.
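As a sanity check, here's the LogTransformer dropped into a pipeline on some toy skewed data (the feature values are invented):

```python
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

class LogTransformer(BaseEstimator, TransformerMixin):
    def fit(self, X, y=None):
        return self

    def transform(self, X):
        return np.log1p(X)

# Compress a heavily skewed feature before scaling
pipe = Pipeline([
    ('log', LogTransformer()),
    ('scaler', StandardScaler())
])
X = np.array([[1.0], [10.0], [10000.0]])
X_out = pipe.fit_transform(X)
```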

Hyperparameter Tuning with Pipelines

Ever wondered how to tune hyperparameters when everything’s wrapped in a pipeline? It’s actually super straightforward.

You access pipeline steps using double underscores:

```python
from sklearn.model_selection import GridSearchCV

param_grid = {
    # assumes 'num' is the imputer-plus-scaler mini-pipeline
    # from the previous section
    'preprocessor__num__imputer__strategy': ['mean', 'median'],
    'classifier__C': [0.1, 1.0, 10.0],
    # 'l1' needs a solver that supports it, so pin liblinear
    'classifier__penalty': ['l1', 'l2'],
    'classifier__solver': ['liblinear']
}

grid_search = GridSearchCV(full_pipeline, param_grid, cv=5)
grid_search.fit(X_train, y_train)
```

The naming convention is step_name__parameter_name. If you have nested pipelines, you keep adding underscores: step__substep__parameter. It looks weird at first, but you get used to it.

This is incredibly powerful. You’re tuning preprocessing and model parameters simultaneously, and cross-validation automatically applies all transformations correctly to each fold. Try doing that without pipelines — I dare you.
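If you're ever unsure what a parameter's full name is, the pipeline can tell you (shown here with the basic scaler-plus-classifier pipeline from earlier):

```python
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

pipeline = Pipeline([
    ('scaler', StandardScaler()),
    ('classifier', LogisticRegression())
])

# Every tunable parameter, already in step__parameter form
names = sorted(pipeline.get_params().keys())

# You can also set one directly with the same convention
pipeline.set_params(classifier__C=0.5)
```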

Saving and Loading Pipelines: Production Ready

You’ve built the perfect pipeline, tuned all the hyperparameters, and achieved amazing accuracy. Now what? You need to deploy it, and pipelines make this ridiculously easy:

```python
import joblib

# Save everything
joblib.dump(full_pipeline, 'my_model_pipeline.pkl')

# Load it later
loaded_pipeline = joblib.load('my_model_pipeline.pkl')
predictions = loaded_pipeline.predict(new_data)
```

One file. That’s it. Your entire workflow — preprocessing, feature engineering, the trained model — all saved together. No more “it works on my machine” problems because someone forgot a preprocessing step.

I’ve deployed dozens of models this way, and it’s saved me countless hours of debugging deployment issues.

Common Pitfalls and How to Avoid Them

Let me save you some frustration with mistakes I’ve made (so you don’t have to):

Pitfall 1: Fitting transformers on test data. The pipeline prevents this, but if you manually apply transformations before the pipeline, you’ll contaminate your test set. Let the pipeline handle everything.

Pitfall 2: Mixing up what each step needs to implement. Every intermediate step must be a transformer (it needs fit() and transform()); only the final step can be a plain estimator with just fit() and predict(). Put a model in the middle and the pipeline breaks. This trips people up constantly.

Pitfall 3: Column order matters. If your ColumnTransformer expects specific column names or positions, and your new data has columns in a different order, things break. Use column names instead of indices when possible.

Pitfall 4: Not handling unknown categories. Always use handle_unknown='ignore' in your OneHotEncoder unless you're 100% certain your production data won't have new categories.
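Pitfall 4 is easy to demonstrate with toy categories 'a' and 'b', then an unseen 'c':

```python
from sklearn.preprocessing import OneHotEncoder

enc = OneHotEncoder(handle_unknown='ignore')
enc.fit([['a'], ['b']])

# An unseen category encodes as all zeros instead of raising an error
row = enc.transform([['c']]).toarray()
```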

Learn from my pain. Trust me on these.

Putting It All Together: A Complete Example

Let me show you a realistic pipeline that handles everything:

```python
from sklearn.pipeline import Pipeline
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.impute import SimpleImputer
from sklearn.ensemble import RandomForestClassifier

# Define feature types
numeric_features = ['age', 'income', 'credit_score']
categorical_features = ['city', 'occupation', 'education']

# Numeric pipeline
numeric_transformer = Pipeline([
    ('imputer', SimpleImputer(strategy='median')),
    ('scaler', StandardScaler())
])

# Categorical pipeline
categorical_transformer = Pipeline([
    ('imputer', SimpleImputer(strategy='constant', fill_value='unknown')),
    ('onehot', OneHotEncoder(handle_unknown='ignore'))
])

# Combine transformers
preprocessor = ColumnTransformer([
    ('num', numeric_transformer, numeric_features),
    ('cat', categorical_transformer, categorical_features)
])

# Full pipeline
full_pipeline = Pipeline([
    ('preprocessor', preprocessor),
    ('classifier', RandomForestClassifier(random_state=42))
])

# Train and predict
full_pipeline.fit(X_train, y_train)
accuracy = full_pipeline.score(X_test, y_test)
```

This pipeline handles missing values, scales numerical features, encodes categorical features, and trains a model. All in one clean, reusable object. Copy this template, adjust the features and model, and you’re good to go for most projects.
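To see the template run end-to-end, here's the same pipeline on a small synthetic DataFrame (the column names match the example, but the data itself is invented):

```python
import numpy as np
import pandas as pd
from sklearn.pipeline import Pipeline
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.impute import SimpleImputer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Fabricate a dataset with the expected columns
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    'age': rng.integers(18, 70, n).astype(float),
    'income': rng.normal(50000, 15000, n),
    'credit_score': rng.integers(300, 850, n).astype(float),
    'city': rng.choice(['NY', 'LA', 'SF'], n),
    'occupation': rng.choice(['eng', 'teacher', 'nurse'], n),
    'education': rng.choice(['hs', 'bs', 'ms'], n),
})
df.loc[::25, 'income'] = np.nan  # sprinkle in missing values
y = (df['credit_score'] > 600).astype(int)

numeric_features = ['age', 'income', 'credit_score']
categorical_features = ['city', 'occupation', 'education']

numeric_transformer = Pipeline([
    ('imputer', SimpleImputer(strategy='median')),
    ('scaler', StandardScaler())
])
categorical_transformer = Pipeline([
    ('imputer', SimpleImputer(strategy='constant', fill_value='unknown')),
    ('onehot', OneHotEncoder(handle_unknown='ignore'))
])
preprocessor = ColumnTransformer([
    ('num', numeric_transformer, numeric_features),
    ('cat', categorical_transformer, categorical_features)
])
full_pipeline = Pipeline([
    ('preprocessor', preprocessor),
    ('classifier', RandomForestClassifier(random_state=42))
])

X_train, X_test, y_train, y_test = train_test_split(
    df, y, test_size=0.25, random_state=42)
full_pipeline.fit(X_train, y_train)
accuracy = full_pipeline.score(X_test, y_test)
```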

Final Thoughts: Make Pipelines Your Default

Here’s my advice: stop writing machine learning code without pipelines. I know it feels like extra work initially, but you’ll save so much time in the long run. Your code will be cleaner, your models will be more reliable, and deployment will be straightforward.

Pipelines aren’t just a nice-to-have feature — they’re essential for professional machine learning work. Every time I review code without pipelines, I immediately think “this person doesn’t deploy models.” Don’t be that person.

Start with simple pipelines and gradually add complexity as you need it. Build that custom transformer when you need it, not before. Keep your pipelines readable by using descriptive step names.

And remember: the best code is code you don’t have to maintain. Pipelines help you write that code. Now go forth and streamline your workflows — your future self will thank you! :)


If this article added value to your day, you can support my work at: buymeacoffee.com/samaustin. Thank you for reading.
