I'm trying to load a pre-trained PyTorch Lightning model from the DiffProtect repository (published in 2023) in Google Colab, but I'm encountering a NumPy compatibility error.

### **Environment:**

- Google Colab (Python 3.12)
- PyTorch Lightning: 2.5.6 (latest)
- NumPy: 2.2.1 (latest from Colab)
- PyTorch: 2.5.1+cu121
- Checkpoint: trained with older versions in 2023

### **Code:**
```python
import pytorch_lightning as pl
from experiment import LitModel

# Attempt 1: Direct loading
model = LitModel.load_from_checkpoint(
    'checkpoints/ffhq256_autoenc.ckpt',
    map_location='cuda',
    strict=False
)
```

### **Error:**

```
ModuleNotFoundError: No module named 'numpy.lib.function_base'

Traceback:
  File "/content/DiffProtect/experiment.py", line 8, in <module>
    import pytorch_lightning as pl
  ...
ModuleNotFoundError: No module named 'numpy.lib.function_base'
```
### **What I've Tried:**

**Attempt 1: Alternative loading method**
```python
import torch

checkpoint = torch.load('checkpoints/ffhq256_autoenc.ckpt', map_location='cuda')
model = LitModel(conf)
model.load_state_dict(checkpoint['state_dict'], strict=False)
```

Result: same error when importing the module.

**Attempt 2: Checking numpy version**
```python
import numpy as np
print(np.__version__)  # 2.2.1
```

The issue is that `numpy.lib.function_base` was removed in NumPy 2.0, but the checkpoint was created with NumPy 1.x.

### **Research:**
- The NumPy 2.0 migration guide mentions that `numpy.lib.function_base` was deprecated
- PyTorch Lightning changed its checkpoint format between versions
- The DiffProtect repo was last updated in 2023 (before NumPy 2.0)
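One idea I haven't verified yet is to register a compatibility shim for the missing module before anything imports it. This is only a sketch, assuming the old code merely needs the module name `numpy.lib.function_base` to resolve and the functions it pulls from it still exist at the top level of NumPy 2.x:

```python
# Speculative shim: re-create numpy.lib.function_base before importing
# pytorch_lightning / experiment.py. Only works if the names the old code
# needs still exist in the top-level numpy namespace (an assumption).
import sys
import types

import numpy as np

if "numpy.lib.function_base" not in sys.modules:
    shim = types.ModuleType("numpy.lib.function_base")
    # Re-export a few names that old code commonly imported from here;
    # extend this list if other attributes turn out to be missing.
    for name in ("average", "interp", "percentile", "angle"):
        if hasattr(np, name):
            setattr(shim, name, getattr(np, name))
    sys.modules["numpy.lib.function_base"] = shim
    np.lib.function_base = shim  # also expose it as an attribute of numpy.lib

import pytorch_lightning as pl  # should no longer fail on the missing module
from experiment import LitModel
```

I haven't tested whether this covers everything the rest of the repo needs from that module.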
### **Question:**

How can I successfully load this old checkpoint in a modern Google Colab environment? I need a solution that:
- ✅ Works in Google Colab (2024/2025 environment)
- ✅ Loads the 2023-era PyTorch Lightning checkpoint
- ✅ Doesn't break other dependencies
- ✅ Allows the model to run inference on GPU
### **Additional Context:**

The checkpoint structure includes:
- `state_dict`: model weights
- `hyper_parameters`: config, including a `conf` object
- PyTorch Lightning metadata
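For reference, this is roughly how I inspect the checkpoint structure (assuming `torch.load` can unpickle it at all in this environment; `weights_only=False` because the file also contains the config object, not just tensors):

```python
import torch

# Inspect the checkpoint on CPU without going through Lightning.
ckpt = torch.load('checkpoints/ffhq256_autoenc.ckpt',
                  map_location='cpu', weights_only=False)

print(ckpt.keys())                     # state_dict, hyper_parameters, ...
print(type(ckpt['hyper_parameters']))  # the config / conf object saved by Lightning
print(len(ckpt['state_dict']))         # number of weight tensors
```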
Is there a way to:

1. Downgrade NumPy to 1.x safely in Colab?
2. Load the checkpoint with a compatibility mode?
3. Extract just the model weights and load them manually (rough sketches of options 1 and 3 below)?
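To make options 1 and 3 concrete, this is roughly what I have in mind (untested; `LitModel(conf)` comes from the repo's `experiment.py`, and I'm assuming the config object is stored under `hyper_parameters['conf']`):

```python
# Option 1: pin NumPy back to 1.x in a Colab cell, then restart the runtime
# so the downgraded version is actually the one imported:
#   !pip install "numpy<2"

# Option 3: bypass Lightning's load_from_checkpoint and restore the weights manually.
import torch
from experiment import LitModel  # repo module; only importable once its own imports work

ckpt = torch.load('checkpoints/ffhq256_autoenc.ckpt',
                  map_location='cpu', weights_only=False)

conf = ckpt['hyper_parameters']['conf']   # assumed key; adjust to the actual structure
model = LitModel(conf)
missing, unexpected = model.load_state_dict(ckpt['state_dict'], strict=False)
print('missing keys:', missing)
print('unexpected keys:', unexpected)

model = model.cuda().eval()               # move to GPU for inference
```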
Any guidance on best practices for loading old ML model checkpoints in modern environments would be appreciated!
