Understanding k-Fold Cross-Validation Data Splitting

Machine Learning
Author

Manuel Solano

Published

July 18, 2024

Model validation is a crucial step in machine learning to ensure that the model generalizes well to unseen data. One of the most effective techniques for evaluating the generalizability of a model is k-Fold cross-validation. This approach involves dividing the dataset into k folds or subsets, where each fold acts as a test set once, and the remaining k-1 folds serve as the training set. By doing so, every data point is used for both training and testing, leading to a more reliable estimate of the model’s performance.
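To make the idea concrete before building our own function, here is a minimal sketch of the k-fold split using NumPy's `np.array_split`, which partitions the row indices into k folds (this is just an illustration of the concept, not the implementation developed below):

```python
import numpy as np

data = np.array([[1, 2], [3, 4], [5, 6], [7, 8], [9, 10]])
k = 5

# np.array_split partitions the row indices into k folds and also
# handles the case where len(data) is not evenly divisible by k.
fold_indices = np.array_split(np.arange(len(data)), k)

splits = []
for i, test_idx in enumerate(fold_indices):
    # Every fold except the current one goes into the training set.
    train_idx = np.concatenate(
        [f for j, f in enumerate(fold_indices) if j != i]
    )
    splits.append((data[train_idx], data[test_idx]))
```

Each entry of `splits` pairs a training set of k-1 folds with a single held-out test fold.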

In this blog post, we will explore how to implement a k-Fold cross-validation data splitting function in Python. We’ll break down the process into clear, manageable steps and explain the benefits of this approach.

Steps in k-Fold Cross-Validation Data Split:

  1. Split the dataset into k groups: For this demonstration we skip shuffling and keep the data in its original order, so the resulting folds are easy to follow.

  2. Generate Data Splits: For each group, treat that group as the test set and the remaining groups as the training set.

Benefits of this Approach:

  • Ensures all data is used for both training and testing.
  • Reduces bias since each data point gets to be in a test set exactly once.
  • Provides a more robust estimate of model performance.
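To see why the averaged estimate is more robust, here is a short sketch of how the k folds feed an evaluation loop. The "model" here is a deliberately trivial stand-in (it always predicts the mean of the training targets); it is only meant to show the score-per-fold, average-at-the-end pattern:

```python
import numpy as np

data = np.array([[1, 2], [3, 4], [5, 6], [7, 8], [9, 10]])
k = 5
fold_size = len(data) // k

scores = []
for i in range(k):
    start = i * fold_size
    end = (i + 1) * fold_size if i != k - 1 else len(data)
    test = data[start:end]
    train = np.concatenate([data[:start], data[end:]])

    # Stand-in "model": always predict the mean of the training targets
    # (here, the second column), scored by mean absolute error on the fold.
    prediction = train[:, 1].mean()
    scores.append(np.abs(test[:, 1] - prediction).mean())

# One score per fold; their average is the cross-validated estimate.
mean_score = float(np.mean(scores))
```

Any single fold's score depends heavily on which points landed in the test set; averaging across all k folds smooths that variance out.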
Example:

    input: data = np.array([[1, 2], [3, 4], [5, 6], [7, 8], [9, 10]]), k = 5

    output: [[[[3, 4], [5, 6], [7, 8], [9, 10]], [[1, 2]]],
             [[[1, 2], [5, 6], [7, 8], [9, 10]], [[3, 4]]],
             [[[1, 2], [3, 4], [7, 8], [9, 10]], [[5, 6]]],
             [[[1, 2], [3, 4], [5, 6], [9, 10]], [[7, 8]]],
             [[[1, 2], [3, 4], [5, 6], [7, 8]], [[9, 10]]]]

    reasoning: The dataset is divided into 5 parts, each used once as the test set while the remaining parts serve as the training set.

Input

import numpy as np

data = np.array([[1, 2], [3, 4], [5, 6], [7, 8], [9, 10]])

k = 5

Function

Here is the function I wrote to solve this challenge:

def cross_validation_split(data: np.ndarray, k: int) -> list:
    # np.random.shuffle(data)  # uncomment to shuffle before splitting
    fold_size = len(data) // k
    folds = []

    for i in range(k):
        # The last fold absorbs any remainder when len(data)
        # is not evenly divisible by k.
        start = i * fold_size
        end = (i + 1) * fold_size if i != k - 1 else len(data)

        # The current slice is the test set; everything else is training.
        test = data[start:end]
        train = np.concatenate([data[:start], data[end:]])

        folds.append([train.tolist(), test.tolist()])

    return folds
    
cross_validation_split(data, k)
[[[[3, 4], [5, 6], [7, 8], [9, 10]], [[1, 2]]],
 [[[1, 2], [5, 6], [7, 8], [9, 10]], [[3, 4]]],
 [[[1, 2], [3, 4], [7, 8], [9, 10]], [[5, 6]]],
 [[[1, 2], [3, 4], [5, 6], [9, 10]], [[7, 8]]],
 [[[1, 2], [3, 4], [5, 6], [7, 8]], [[9, 10]]]]
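Since `len(data) // k` truncates, the `else len(data)` branch in the function matters whenever k does not divide the dataset evenly. A quick sketch of just the fold boundaries for k = 3 on the same 5 rows (not from the challenge, only to illustrate the edge case) shows the last fold absorbing the remainder:

```python
import numpy as np

data = np.array([[1, 2], [3, 4], [5, 6], [7, 8], [9, 10]])
k = 3
fold_size = len(data) // k  # 5 // 3 == 1

# Same boundary rule as in cross_validation_split above:
# every fold spans fold_size rows except the last, which runs to the end.
bounds = [
    (i * fold_size, (i + 1) * fold_size if i != k - 1 else len(data))
    for i in range(k)
]
test_sizes = [end - start for start, end in bounds]
```

With this rule the test folds have sizes 1, 1, and 3, so the last fold is noticeably larger; a splitter like `np.array_split` instead spreads the remainder one row at a time across the first folds.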