3D Vision · April 26, 2026 · 8 min read · OmniE2E Engineering

Camera Calibration for Fisheye Lenses: Theory and Practice

Complete guide to intrinsic and extrinsic calibration of fisheye cameras, including automatic corner detection, optimization strategies, and accuracy validation for 3D spatial applications.


Accurate camera calibration is the foundation of any 3D vision system. For fisheye cameras—especially those mounted on ceilings looking downward—standard calibration approaches fail. This post covers the complete calibration pipeline from intrinsic parameter estimation to multi-camera extrinsic alignment, including the mathematical foundations and practical implementation details.

Why Fisheye Calibration is Different

Standard pinhole camera models assume a relatively narrow field of view, where radial distortion can be approximated with a few polynomial terms. Fisheye lenses (FOV approaching or exceeding 180°) require fundamentally different projection models.

Projection Models Comparison

Pinhole with Radial Distortion:

r_{distorted} = r(1 + k_1 r^2 + k_2 r^4 + k_3 r^6)

This polynomial expansion diverges for large angles, making it unsuitable for fisheye.

Equidistant (f-theta) Model:

r = f \cdot \theta

Linear relationship between image radius and incident angle. Most common for fisheye calibration.
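To get a feel for the f-theta relationship: the focal length directly sets the image-circle radius. A minimal sketch (the 600 px focal length is an arbitrary example value):

```python
import numpy as np

def equidistant_radius(theta_rad, f_px):
    """Image radius in pixels under the f-theta model: r = f * theta."""
    return f_px * theta_rad

# A ray 90 degrees off-axis (the rim of a 180-degree FOV) lands at
# r = f * pi/2, so the sensor needs an image circle ~1.57x the focal length.
r_rim = equidistant_radius(np.pi / 2, f_px=600.0)  # ~942.5 px
```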

Kannala-Brandt Generic Model:

r = k_1 \theta + k_2 \theta^3 + k_3 \theta^5 + k_4 \theta^7 + k_5 \theta^9

A flexible polynomial in \theta (not r); handles various fisheye designs.

OpenCV Fisheye Model

OpenCV implements a modified equidistant model:

\theta_d = \theta(1 + k_1\theta^2 + k_2\theta^4 + k_3\theta^6 + k_4\theta^8)

Where \theta = \arctan(r) is the angle of the incoming ray from the optical axis, computed from the normalized pinhole coordinates a = x/z, b = y/z with r = \sqrt{a^2 + b^2}.

The projection equations:

x' = \frac{a}{r} \cdot \theta_d, \quad y' = \frac{b}{r} \cdot \theta_d

u = f_x \cdot x' + c_x, \quad v = f_y \cdot y' + c_y
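Put together, the full forward projection is only a few lines. A self-contained sketch (not OpenCV's implementation; fx, fy, cx, cy and the four k coefficients stand in for calibration outputs):

```python
import numpy as np

def project_fisheye(point_cam, fx, fy, cx, cy, k):
    """Project a 3D point in camera coordinates with the OpenCV fisheye model."""
    x, y, z = point_cam
    a, b = x / z, y / z                  # normalized pinhole coordinates
    r = np.hypot(a, b)                   # radius in the normalized plane
    theta = np.arctan(r)                 # angle from the optical axis
    theta_d = theta * (1 + k[0] * theta**2 + k[1] * theta**4
                         + k[2] * theta**6 + k[3] * theta**8)
    scale = theta_d / r if r > 1e-12 else 1.0
    return fx * a * scale + cx, fy * b * scale + cy

# An on-axis point maps to the principal point regardless of distortion:
u, v = project_fisheye((0.0, 0.0, 1.0), 600, 600, 960, 540, [0, 0, 0, 0])
```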

Intrinsic Calibration Pipeline

Step 1: Calibration Target Design

For ceiling fisheye cameras, we use large ChArUco boards (2 m × 2 m) placed on the floor:

import cv2
import numpy as np
from cv2 import aruco

def create_calibration_board(
    rows=6, cols=8,
    square_size=0.15,  # meters
    marker_size=0.11,
    dictionary=aruco.DICT_4X4_100
):
    """Generate ChArUco board for printing."""
    aruco_dict = aruco.getPredefinedDictionary(dictionary)
    board = aruco.CharucoBoard(
        (cols, rows),
        squareLength=square_size,
        markerLength=marker_size,
        dictionary=aruco_dict
    )
    
    # Generate high-res image for printing
    img = board.generateImage((4000, 3000))
    return board, img

Why ChArUco over checkerboard?

  • Partial visibility handling: Only need a subset of markers
  • Unique corner identification: Each corner has a known ID
  • Robustness to occlusions: Critical for large boards with people walking
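Before printing, it is worth sanity-checking the physical size and print resolution the generated image implies. A small hypothetical helper; note the default 6 × 8 board with 0.15 m squares comes out at 1.2 m × 0.9 m, so rows/cols/square_size would be scaled up for a full 2 m target:

```python
def board_print_specs(rows, cols, square_size_m, img_px=(4000, 3000)):
    """Physical board size and the DPI implied by printing the generated
    image at exactly that size."""
    width_m = cols * square_size_m
    height_m = rows * square_size_m
    dpi_w = img_px[0] / (width_m / 0.0254)    # 0.0254 m per inch
    dpi_h = img_px[1] / (height_m / 0.0254)
    return width_m, height_m, min(dpi_w, dpi_h)

w, h, dpi = board_print_specs(rows=6, cols=8, square_size_m=0.15)
# 1.2 m x 0.9 m at roughly 85 DPI -- coarse, but ample for corner detection
```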

Step 2: Multi-pose Image Collection

Collect 50-100 images with the board at various positions:

class CalibrationImageCollector:
    def __init__(self, camera, board, output_dir):
        self.camera = camera
        self.board = board
        self.output_dir = output_dir
        self.detector = aruco.CharucoDetector(board)
        self.images = []
        self.corners_list = []
        self.ids_list = []
    
    def capture_and_detect(self):
        """Capture frame and detect ChArUco corners."""
        frame = self.camera.capture()
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        
        # Detect ChArUco
        charuco_corners, charuco_ids, marker_corners, marker_ids = \
            self.detector.detectBoard(gray)
        
        if charuco_corners is not None and len(charuco_corners) >= 6:
            # Require minimum corners for reliable pose
            self.corners_list.append(charuco_corners)
            self.ids_list.append(charuco_ids)
            self.images.append(frame.copy())
            
            # Visualize
            annotated = aruco.drawDetectedCornersCharuco(
                frame.copy(), charuco_corners, charuco_ids
            )
            return True, annotated
        
        return False, frame
    
    def get_coverage_map(self, img_size):
        """Visualize corner distribution across image."""
        coverage = np.zeros(img_size[:2], dtype=np.float32)
        
        for corners in self.corners_list:
            for corner in corners:
                x, y = corner.ravel().astype(int)
                cv2.circle(coverage, (x, y), 20, 1.0, -1)
        
        return coverage

Coverage requirements:

  • Corners distributed across entire image, especially edges
  • Multiple distances from camera
  • Various board orientations
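The coverage_threshold used later in the configuration (0.8 of the image area) can be checked directly against the map from get_coverage_map. A minimal helper sketch:

```python
import numpy as np

def coverage_fraction(coverage_map, hit_value=0.5):
    """Fraction of image pixels touched by at least one corner disc."""
    return np.count_nonzero(coverage_map >= hit_value) / coverage_map.size

# e.g. keep collecting images until coverage_fraction(...) exceeds 0.8
```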

Step 3: Fisheye Calibration

def calibrate_fisheye(corners_list, ids_list, board, image_size):
    """
    Calibrate fisheye camera using OpenCV.
    Returns: K (3x3), D (4x1), rvecs, tvecs, reproj_error
    """
    # Prepare object points for each detection
    obj_points = []
    img_points = []
    
    for corners, ids in zip(corners_list, ids_list):
        obj_pts, img_pts = board.matchImagePoints(corners, ids)
        if obj_pts is not None and len(obj_pts) >= 6:
            obj_points.append(obj_pts.astype(np.float32))
            img_points.append(img_pts.astype(np.float32))
    
    # Initial guess for intrinsics: fx = fy ~ half the width, principal
    # point at the image center (used via CALIB_USE_INTRINSIC_GUESS)
    K = np.array([
        [image_size[0] * 0.5, 0, image_size[0] * 0.5],
        [0, image_size[0] * 0.5, image_size[1] * 0.5],
        [0, 0, 1]
    ], dtype=np.float64)
    
    D = np.zeros((4, 1), dtype=np.float64)
    
    # Calibration flags
    flags = (
        cv2.fisheye.CALIB_USE_INTRINSIC_GUESS |
        cv2.fisheye.CALIB_RECOMPUTE_EXTRINSIC |
        cv2.fisheye.CALIB_CHECK_COND |
        cv2.fisheye.CALIB_FIX_SKEW
    )
    
    # Run calibration
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 100, 1e-6)
    
    ret, K, D, rvecs, tvecs = cv2.fisheye.calibrate(
        obj_points,
        img_points,
        image_size,
        K,
        D,
        flags=flags,
        criteria=criteria
    )
    
    return K, D, rvecs, tvecs, ret

# Usage
K, D, rvecs, tvecs, reproj_error = calibrate_fisheye(
    corners_list, ids_list, board, (1920, 1080)
)
print(f"Reprojection error: {reproj_error:.4f} pixels")

Step 4: Validation

Visualize reprojection errors to identify outliers:

def compute_per_image_error(obj_points, img_points, K, D, rvecs, tvecs):
    """Compute reprojection error for each calibration image."""
    errors = []
    
    for obj_pts, img_pts, rvec, tvec in zip(obj_points, img_points, rvecs, tvecs):
        projected, _ = cv2.fisheye.projectPoints(
            obj_pts.reshape(-1, 1, 3), rvec, tvec, K, D
        )
        
        error = np.sqrt(np.sum(
            (img_pts.reshape(-1, 2) - projected.reshape(-1, 2))**2, axis=1))
        errors.append({
            'mean': np.mean(error),
            'max': np.max(error),
            'std': np.std(error),
            'per_point': error
        })
    
    return errors

def filter_outlier_images(errors, threshold_factor=2.0):
    """Remove images with high reprojection error."""
    mean_errors = np.array([e['mean'] for e in errors])
    median = np.median(mean_errors)
    mad = np.median(np.abs(mean_errors - median))
    
    # 1.4826 * MAD estimates the standard deviation for Gaussian errors
    threshold = median + threshold_factor * 1.4826 * mad
    
    valid_indices = [i for i, e in enumerate(errors) if e['mean'] < threshold]
    return valid_indices
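filter_outlier_images is most useful in a loop: drop the worst images, recalibrate, and repeat until the error distribution stabilizes. The index bookkeeping can be sketched in isolation (a hypothetical driver; in practice each round would re-run calibrate_fisheye on the surviving images):

```python
import numpy as np

def iterative_mad_rejection(mean_errors, threshold_factor=2.0, max_rounds=3):
    """Repeatedly drop images whose mean reprojection error exceeds the
    MAD-based threshold, returning the surviving original indices."""
    idx = np.arange(len(mean_errors))
    errs = np.asarray(mean_errors, dtype=float)
    for _ in range(max_rounds):
        median = np.median(errs)
        mad = np.median(np.abs(errs - median))
        if mad == 0:                      # all errors identical: nothing to drop
            break
        threshold = median + threshold_factor * 1.4826 * mad
        keep = errs < threshold
        if keep.all():
            break
        idx, errs = idx[keep], errs[keep]
    return idx.tolist()

# One grossly misdetected image is rejected; the rest survive:
survivors = iterative_mad_rejection([0.30, 0.31, 0.29, 0.30, 5.0])  # [0, 1, 2, 3]
```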

Extrinsic Calibration: Camera-to-World

Once intrinsics are known, we need the camera's pose in world coordinates.

Ground Plane Calibration

For ceiling cameras, we calibrate to a floor-level coordinate system:

class FloorCalibrator:
    def __init__(self, K, D):
        self.K = K
        self.D = D
        self.floor_points_3d = []  # Known world coordinates
        self.floor_points_2d = []  # Corresponding image points
    
    def add_reference_point(self, world_xyz, image_uv):
        """Add a known floor point."""
        self.floor_points_3d.append(world_xyz)
        self.floor_points_2d.append(image_uv)
    
    def calibrate(self, method='pnp'):
        """Compute camera extrinsics from floor reference points."""
        obj_pts = np.array(self.floor_points_3d, dtype=np.float32)
        img_pts = np.array(self.floor_points_2d, dtype=np.float32)
        
        # Undistort image points
        img_pts_undist = cv2.fisheye.undistortPoints(
            img_pts.reshape(-1, 1, 2), self.K, self.D, P=self.K
        ).reshape(-1, 2)
        
        if method == 'pnp':
            success, rvec, tvec = cv2.solvePnP(
                obj_pts, img_pts_undist, self.K, None,
                flags=cv2.SOLVEPNP_ITERATIVE
            )
        elif method == 'ransac':
            success, rvec, tvec, inliers = cv2.solvePnPRansac(
                obj_pts, img_pts_undist, self.K, None,
                reprojectionError=5.0
            )
        else:
            raise ValueError(f"Unknown calibration method: {method}")
        
        if not success:
            raise RuntimeError("PnP pose estimation failed")
        
        R, _ = cv2.Rodrigues(rvec)
        
        # Camera position in world coordinates
        camera_position = -R.T @ tvec
        
        return R, tvec, camera_position

Multi-Camera Extrinsic Alignment

For multiple overlapping fisheye cameras, we need consistent world coordinates:

class MultiCameraCalibrator:
    def __init__(self, cameras):
        self.cameras = cameras
        self.shared_points = {}  # point_id -> {cam_id: image_coords}
    
    def add_shared_observation(self, point_id, cam_id, image_coords):
        """Record observation of shared point from a camera."""
        if point_id not in self.shared_points:
            self.shared_points[point_id] = {}
        self.shared_points[point_id][cam_id] = image_coords
    
    def triangulate_point(self, point_id):
        """Triangulate 3D position from multiple views."""
        observations = self.shared_points[point_id]
        
        if len(observations) < 2:
            return None
        
        # Build projection matrices
        A = []
        for cam_id, img_pt in observations.items():
            cam = self.cameras[cam_id]
            P = cam.K @ np.hstack([cam.R, cam.t])
            
            # Undistort point
            pt_undist = cv2.fisheye.undistortPoints(
                np.array([[img_pt]], dtype=np.float32),
                cam.K, cam.D, P=cam.K
            ).ravel()
            
            x, y = pt_undist
            A.append(x * P[2] - P[0])
            A.append(y * P[2] - P[1])
        
        A = np.array(A)
        _, _, Vt = np.linalg.svd(A)
        X = Vt[-1]
        X = X[:3] / X[3]
        
        return X
    
    def optimize_extrinsics(self, initial_poses, shared_points_3d):
        """Bundle adjustment to refine all camera poses."""
        from scipy.optimize import least_squares
        
        def residuals(params):
            # Unpack parameters
            poses = params.reshape(-1, 6)  # [rx, ry, rz, tx, ty, tz] per camera
            
            errors = []
            for point_id, point_3d in shared_points_3d.items():
                for cam_id, img_pt in self.shared_points[point_id].items():
                    cam_idx = list(self.cameras.keys()).index(cam_id)
                    rvec = poses[cam_idx, :3]
                    tvec = poses[cam_idx, 3:]
                    
                    cam = self.cameras[cam_id]
                    projected, _ = cv2.fisheye.projectPoints(
                        point_3d.reshape(1, 1, 3),
                        rvec, tvec, cam.K, cam.D
                    )
                    
                    errors.extend((projected.ravel() - np.array(img_pt)).tolist())
            
            return np.array(errors)
        
        # Initial parameter vector
        x0 = np.array([
            np.hstack([cv2.Rodrigues(p['R'])[0].ravel(), p['t'].ravel()])
            for p in initial_poses
        ]).ravel()
        
        result = least_squares(residuals, x0, method='lm')
        
        return result.x.reshape(-1, 6)

Practical Calibration Tips

1. Handling Extreme Distortion at Edges

Corner detection fails at image periphery. Use subpixel refinement with large search windows:

def refine_corners_fisheye(gray, corners, win_size=11):
    """Subpixel corner refinement for fisheye images."""
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 100, 0.001)
    
    refined = cv2.cornerSubPix(
        gray, 
        corners.astype(np.float32),
        (win_size, win_size),
        (-1, -1),
        criteria
    )
    
    return refined

2. Temperature Compensation

Lens focal length varies with temperature:

def temperature_compensate_intrinsics(K, D, temp_celsius, ref_temp=25):
    """
    Adjust intrinsics for temperature drift.
    Typical coefficient: ~20 ppm/°C for glass lenses
    """
    temp_coeff = 20e-6  # 20 ppm/°C expressed as a fraction
    delta_temp = temp_celsius - ref_temp
    
    scale = 1 + temp_coeff * delta_temp
    
    K_adjusted = K.copy()
    K_adjusted[0, 0] *= scale  # fx
    K_adjusted[1, 1] *= scale  # fy
    
    return K_adjusted
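A quick back-of-the-envelope check shows why this is a second-order correction: at 20 ppm/°C, a 10 °C swing moves a 1000 px focal length by only about 0.2 px, well within the reprojection error budget.

```python
temp_coeff = 20e-6            # fractional focal-length change per degC (20 ppm)
delta_temp = 35 - 25          # operating minus reference temperature, degC
scale = 1 + temp_coeff * delta_temp

fx = 1000.0
drift_px = fx * scale - fx    # ~0.2 px shift in fx over a 10 degC rise
```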

3. Validation with Known Distances

Always validate with independent measurements:

def validate_calibration(K, D, R, t, known_distances):
    """
    Validate calibration using known floor distances.
    known_distances: list of ((u1,v1), (u2,v2), ground_truth_meters)
    """
    errors = []
    
    for (pt1_img, pt2_img, gt_distance) in known_distances:
        # Project points to floor plane (Z=0 in world)
        pt1_world = image_to_floor(pt1_img, K, D, R, t)
        pt2_world = image_to_floor(pt2_img, K, D, R, t)
        
        measured = np.linalg.norm(pt1_world - pt2_world)
        error_percent = abs(measured - gt_distance) / gt_distance * 100
        errors.append(error_percent)
    
    print(f"Distance validation:")
    print(f"  Mean error: {np.mean(errors):.2f}%")
    print(f"  Max error: {np.max(errors):.2f}%")
    print(f"  Std: {np.std(errors):.2f}%")
    
    return errors
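The image_to_floor helper assumed above can be sketched as fisheye undistortion followed by ray–plane intersection. This version inverts the \theta_d polynomial by fixed-point iteration to stay NumPy-only (cv2.fisheye.undistortPoints does the same job); R, t are the world-to-camera extrinsics from FloorCalibrator:

```python
import numpy as np

def undistort_theta(theta_d, k, iters=10):
    """Invert theta_d = theta * (1 + k1 th^2 + ...) by fixed-point iteration."""
    theta = theta_d
    for _ in range(iters):
        theta = theta_d / (1 + k[0] * theta**2 + k[1] * theta**4
                             + k[2] * theta**6 + k[3] * theta**8)
    return theta

def image_to_floor(pt_img, K, D, R, t):
    """Back-project a pixel onto the Z=0 floor plane."""
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    xd = (pt_img[0] - cx) / fx            # distorted normalized coordinates
    yd = (pt_img[1] - cy) / fy
    theta_d = np.hypot(xd, yd)
    theta = undistort_theta(theta_d, np.ravel(D))
    scale = np.tan(theta) / theta_d if theta_d > 1e-12 else 1.0
    ray_cam = np.array([xd * scale, yd * scale, 1.0])
    # Camera center and ray direction in world coordinates
    cam_pos = -R.T @ np.asarray(t).reshape(3)
    ray_world = R.T @ ray_cam
    s = -cam_pos[2] / ray_world[2]        # intersect with the Z=0 plane
    return cam_pos[:2] + s * ray_world[:2]
```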

Complete Calibration Configuration

# calibration_config.yaml
intrinsics:
  board:
    type: "charuco"
    rows: 6
    cols: 8
    square_size: 0.15  # meters
    marker_size: 0.11
    dictionary: "DICT_4X4_100"
  
  collection:
    min_images: 50
    max_images: 100
    min_corners_per_image: 10
    coverage_threshold: 0.8  # 80% of image area
  
  optimization:
    fix_skew: true
    fix_principal_point: false
    recompute_extrinsic: true
    max_iterations: 100
    epsilon: 1e-6
  
  validation:
    max_reproj_error: 0.5  # pixels
    outlier_threshold: 2.0  # MAD multiplier

extrinsics:
  method: "floor_calibration"
  min_reference_points: 6
  ransac_threshold: 5.0  # pixels
  
  floor_markers:
    - id: "A1"
      world_position: [0.0, 0.0, 0.0]
    - id: "A2"
      world_position: [3.0, 0.0, 0.0]
    - id: "B1"
      world_position: [0.0, 4.0, 0.0]
    # ... more markers

validation:
  known_distances:
    - points: ["A1", "A2"]
      distance: 3.0
    - points: ["A1", "B1"]
      distance: 4.0
  max_distance_error_percent: 1.0
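Once parsed (e.g. with yaml.safe_load), it pays to validate the config's internal consistency before a long calibration run. A minimal sketch operating on the resulting dict (the specific checks shown are illustrative):

```python
def check_intrinsics_config(cfg):
    """Sanity-check the 'intrinsics' section of the calibration config dict."""
    board = cfg["board"]
    if board["marker_size"] >= board["square_size"]:
        raise ValueError("ArUco marker must fit inside its chessboard square")
    coll = cfg["collection"]
    if coll["min_images"] > coll["max_images"]:
        raise ValueError("min_images exceeds max_images")
    if not 0 < coll["coverage_threshold"] <= 1:
        raise ValueError("coverage_threshold must be in (0, 1]")
    return cfg

cfg = {
    "board": {"type": "charuco", "rows": 6, "cols": 8,
              "square_size": 0.15, "marker_size": 0.11,
              "dictionary": "DICT_4X4_100"},
    "collection": {"min_images": 50, "max_images": 100,
                   "min_corners_per_image": 10, "coverage_threshold": 0.8},
}
check_intrinsics_config(cfg)   # passes for the file above
```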

Conclusion

Fisheye camera calibration requires:

  1. Correct projection model: Use OpenCV's fisheye module, not standard calibration
  2. Appropriate targets: Large ChArUco boards for ceiling-mounted cameras
  3. Complete coverage: Corners across entire image, especially edges
  4. Iterative refinement: Bundle adjustment for multi-camera setups
  5. Independent validation: Always verify with known physical measurements

A well-calibrated system achieves < 0.5 pixel reprojection error and < 1% distance measurement error on the floor plane—essential for accurate human localization and spatial analytics.