
Integrating a Custom Background Removal Tool in iOS

The CE.SDK provides a flexible architecture that allows you to extend its functionality to meet your specific needs.

This guide demonstrates how to integrate a custom background removal feature into the Photo Editor. The same approach can be applied to all other editor solutions.

You will learn how to add a dedicated button to the editor’s UI that triggers your own background removal logic, processes the image, and then updates the editor with the result. This approach is ideal when you want to use a specific background removal technology developed by your team, such as Apple’s on-device Vision framework, a third-party library, or your own server-side API.

This guide will walk you through the following steps:

  • Step 1: Add a Custom Button: Create a new button in the editor’s dock to initiate the background removal process.
  • Step 2: Extract Image Data: Retrieve the selected image from the engine to prepare it for processing.
  • Step 3: Process the Image: Apply background removal using a custom implementation. We will use Apple’s Vision framework as an example.
  • Step 4: Replace the Image: Update the image block in the editor with the new, background-free version.

Step 1: Add a Custom Dock Button#

The first step is to add a new button to the editor’s main toolbar, known as the Dock. You can achieve this with the .imgly.modifyDockItems modifier, which allows you to add, remove, or replace items in the default dock layout. You can learn more about configuring the Dock in the iOS editor in the dedicated Dock configuration guide.

In the example below, we add a new Dock.Button at the front of the existing items. This button is configured with a unique ID, a label, and an action to be executed when tapped.

// 'modifyDockItems' lets you change the dock buttons; here we prepend
// a single custom button to the default ones.
.imgly.modifyDockItems { context, items in
  // Add the custom background removal button at the front of the default items.
  items.addFirst {
    backgroundRemovalButton(context: context)
  }
}

The button itself is defined as a separate function that returns a Dock.Item:

private func backgroundRemovalButton(context: Dock.Context) -> some Dock.Item {
  Dock.Button(
    id: "ly.img.backgroundRemoval",
    action: { context in
      Task {
        await performBackgroundRemoval(context: context)
      }
    },
    label: { _ in
      Label("Remove BG", systemImage: "person.crop.circle.fill.badge.minus")
    }
  )
}

Step 2: Extract the Image#

Now that we have a button in place, we need to implement the action that will be triggered when the user taps it.

With the context parameter, we have access to the engine and can retrieve the image. First, we access the current page, retrieve its fill, and verify that it is an image fill. This ensures that the background removal operation is only attempted on valid image layers.

// Get the current page (canvas) from the scene.
guard let currentPage = try engine.scene.getCurrentPage() else {
  throw BackgroundRemovalError.noPageFound
}

// Validate that the page contains an image fill.
let imageFill = try engine.block.getFill(currentPage)
let fillType = try engine.block.getType(imageFill)
guard fillType == FillType.image.rawValue else {
  throw BackgroundRemovalError.noImageFound(currentType: fillType)
}

// Put the block into a loading state while we process it.
try engine.block.setState(imageFill, state: .pending(progress: 0.5))

// Step 1: Extract the image data from the block.
let imageData = try await extractImageData(from: imageFill, engine: engine)

// Step 2: Convert it to a UIImage for processing.
guard let originalImage = UIImage(data: imageData) else {
  try engine.block.setState(imageFill, state: .ready)
  throw BackgroundRemovalError.imageConversionFailed
}

Step 3: Process the Image#

Now that you have the UIImage, you can integrate the background removal solution of your choice. Our example uses a custom BackgroundRemover class that leverages Apple’s Vision framework for high-quality, on-device person segmentation; you can replace it with your own implementation. The example first detects faces and human bodies to confirm a person is present, then generates a segmentation mask for the most accurate results.

// Step 3: Apply background removal.
// This example uses Apple's Vision framework via the BackgroundRemover utility.
guard let processedImage = await BackgroundRemover.removeBackground(from: originalImage) else {
  try engine.block.setState(imageFill, state: .ready)
  throw BackgroundRemovalError.backgroundRemovalFailed
}

Step 4: Replace the Image in the Editor#

After the background has been successfully removed, the resulting UIImage (with a transparent background) must be saved to a temporary location. We then use the URL of this new file to update the source of the original image block in the editor. This replaces the old image with the new one seamlessly.

// Step 4: Save the processed image to a temporary file.
let processedImageURL = try saveImageToCache(processedImage)

// Step 5: Replace the original image with the background-free version.
try await engine.block.addImageFileURIToSourceSet(
  imageFill,
  property: "fill/image/sourceSet",
  uri: processedImageURL
)

// Put the block back into the ready state.
try engine.block.setState(imageFill, state: .ready)

Full Implementation#

Here is the complete source code for the Editor view and the BackgroundRemover utility. You can adapt this code to fit your project’s needs and substitute the BackgroundRemover with your own implementation.

BackgroundRemovalEditorSolution.swift#

The main Editor view integrates the background removal functionality into the Photo Editor:

import IMGLYEngine
import IMGLYPhotoEditor
import SwiftUI

/// Photo Editor with AI-powered background removal capabilities.
///
/// This view demonstrates how to:
/// - Initialize the IMG.LY Photo Editor with an image URL
/// - Add custom functionality (background removal) to the editor toolbar
/// - Handle background removal processing and apply the result back to the PhotoEditor
struct BackgroundRemovalEditorSolution: View {
  let settings = EngineSettings(license: secrets.licenseKey,
                                userID: "<your unique user id>")

  // MARK: - Properties

  /// The URL of the image to be edited
  let url: URL

  // MARK: - State Properties

  /// Tracks whether background removal is currently in progress
  @State private var isProcessingBackgroundRemoval = false

  /// Stores any processing error to display to the user
  @State private var processingError: BackgroundRemovalError?

  // MARK: - Body

  var body: some View {
    PhotoEditor(settings)
      // Initialize the editor with the provided image.
      .imgly.onCreate { engine in
        // Load the selected image into the editor.
        try await OnCreate.loadImage(from: url)(engine)
      }
      // 'modifyDockItems' lets you change the dock buttons; here we prepend
      // a single custom button to the default ones.
      .imgly.modifyDockItems { context, items in
        // Add the custom background removal button at the front of the default items.
        items.addFirst {
          backgroundRemovalButton(context: context)
        }
      }
      .alert("Background Removal Error", isPresented: .constant(processingError != nil)) {
        Button("OK") {
          processingError = nil
        }
      } message: {
        Text(processingError?.localizedDescription ?? "An unexpected error occurred")
      }
  }

  // MARK: - UI Components

  private func backgroundRemovalButton(context: Dock.Context) -> some Dock.Item {
    Dock.Button(
      id: "ly.img.backgroundRemoval",
      action: { context in
        Task {
          await performBackgroundRemoval(context: context)
        }
      },
      label: { _ in
        Label("Remove BG", systemImage: "person.crop.circle.fill.badge.minus")
      }
    )
  }
  // MARK: - Background Removal Logic

  /// Performs AI-powered background removal on the current image.
  /// - Parameter context: The dock context containing the engine instance.
  private func performBackgroundRemoval(context: Dock.Context) async {
    do {
      // Prevent multiple simultaneous operations.
      guard !isProcessingBackgroundRemoval else { return }
      isProcessingBackgroundRemoval = true
      defer { isProcessingBackgroundRemoval = false }

      let engine = context.engine

      // Get the current page (canvas) from the scene.
      guard let currentPage = try engine.scene.getCurrentPage() else {
        throw BackgroundRemovalError.noPageFound
      }

      // Validate that the page contains an image fill.
      let imageFill = try engine.block.getFill(currentPage)
      let fillType = try engine.block.getType(imageFill)
      guard fillType == FillType.image.rawValue else {
        throw BackgroundRemovalError.noImageFound(currentType: fillType)
      }

      // Put the block into a loading state while we process it.
      try engine.block.setState(imageFill, state: .pending(progress: 0.5))

      // Step 1: Extract the image data from the block.
      let imageData = try await extractImageData(from: imageFill, engine: engine)

      // Step 2: Convert it to a UIImage for processing.
      guard let originalImage = UIImage(data: imageData) else {
        try engine.block.setState(imageFill, state: .ready)
        throw BackgroundRemovalError.imageConversionFailed
      }

      // Step 3: Apply background removal using Apple's Vision framework.
      guard let processedImage = await BackgroundRemover.removeBackground(from: originalImage) else {
        try engine.block.setState(imageFill, state: .ready)
        throw BackgroundRemovalError.backgroundRemovalFailed
      }

      // Step 4: Save the processed image to a temporary file.
      let processedImageURL = try saveImageToCache(processedImage)

      // Step 5: Replace the original image with the background-free version.
      try await engine.block.addImageFileURIToSourceSet(
        imageFill,
        property: "fill/image/sourceSet",
        uri: processedImageURL
      )

      // Put the block back into the ready state.
      try engine.block.setState(imageFill, state: .ready)
    } catch {
      // Surface any errors that occurred during processing.
      if let backgroundRemovalError = error as? BackgroundRemovalError {
        processingError = backgroundRemovalError
      } else {
        processingError = .unexpectedError(error)
      }
      print("Background removal failed: \(error.localizedDescription)")
    }
  }

  /// Extracts the image data from a design block.
  private func extractImageData(from block: DesignBlockID, engine: Engine) async throws -> Data {
    // Note: you could also check here whether the block uses a source set
    // and pick the most appropriate source from it.
    let imageFileURI = try engine.block.getString(block, property: "fill/image/imageFileURI")
    guard let url = URL(string: imageFileURI) else {
      throw BackgroundRemovalError.noImageSourceFound
    }
    let (data, _) = try await URLSession.shared.data(from: url)
    return data
  }

  /// Saves the processed image to the cache directory.
  private func saveImageToCache(_ image: UIImage) throws -> URL {
    guard let imageData = image.pngData() else {
      throw BackgroundRemovalError.imageSavingFailed
    }
    let cacheURL = try FileManager.default
      .url(for: .cachesDirectory, in: .userDomainMask, appropriateFor: nil, create: false)
      .appendingPathComponent(UUID().uuidString, conformingTo: .png)
    try imageData.write(to: cacheURL)
    return cacheURL
  }
}
// MARK: - Error Handling

enum BackgroundRemovalError: LocalizedError {
  case noPageFound
  case noImageFound(currentType: String)
  case noImageSourceFound
  case imageConversionFailed
  case backgroundRemovalFailed
  case imageSavingFailed
  case unexpectedError(Error)

  var errorDescription: String? {
    switch self {
    case .noPageFound:
      "No active page found in the editor."
    case let .noImageFound(currentType):
      "The current page doesn't contain an image. Current content type: \(currentType)"
    case .noImageSourceFound:
      "No image source found for background removal."
    case .imageConversionFailed:
      "Failed to convert image data for processing."
    case .backgroundRemovalFailed:
      "AI background removal failed. Please ensure the image contains a clearly visible person."
    case .imageSavingFailed:
      "Failed to save the processed image."
    case let .unexpectedError(error):
      "Unexpected error: \(error.localizedDescription)"
    }
  }
}
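To try the view out, present it from any SwiftUI entry point. A minimal sketch; `ContentView` and the sample image URL are placeholders you would replace with your own navigation and asset:

```swift
import SwiftUI

// Hypothetical entry point showing one way to present the editor view.
struct ContentView: View {
  @State private var showEditor = false

  var body: some View {
    Button("Edit Photo") { showEditor = true }
      // Present the editor full screen, as recommended for editor solutions.
      .fullScreenCover(isPresented: $showEditor) {
        BackgroundRemovalEditorSolution(
          url: URL(string: "https://example.com/sample.jpg")!
        )
      }
  }
}
```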

BackgroundRemover.swift#

The BackgroundRemover utility class uses Apple’s Vision framework to perform the actual background removal:

static func removeBackground(from image: UIImage) async -> UIImage? {
  // Convert the UIImage to a CIImage for processing.
  guard let ciImage = CIImage(image: image) else {
    debugPrint("❌ Failed to convert UIImage to CIImage")
    return nil
  }

  return await withCheckedContinuation { continuation in
    Task {
      // Step 1: Generate the person segmentation mask.
      guard let maskImage = await generatePersonMask(from: ciImage) else {
        debugPrint("❌ Failed to generate person mask")
        continuation.resume(returning: nil)
        return
      }

      // Step 2: Apply the mask to remove the background.
      guard let resultCIImage = applyTransparencyMask(maskImage, to: ciImage) else {
        debugPrint("❌ Failed to apply transparency mask")
        continuation.resume(returning: nil)
        return
      }

      // Step 3: Convert the result back to a UIImage.
      guard let resultUIImage = convertToUIImage(resultCIImage) else {
        debugPrint("❌ Failed to convert result to UIImage")
        continuation.resume(returning: nil)
        return
      }

      continuation.resume(returning: resultUIImage)
    }
  }
}

The background removal process involves generating a person segmentation mask:

private static func generatePersonMask(from image: CIImage) async -> CIImage? {
  await withCheckedContinuation { continuation in
    DispatchQueue.global(qos: .userInitiated).async {
      let requestHandler = VNSequenceRequestHandler()
      do {
        try requestHandler.perform([
          faceDetectionRequest,
          bodyDetectionRequest,
          personSegmentationRequest,
        ], on: image)

        // Validate that a person was detected.
        let faces = faceDetectionRequest.results ?? []
        let bodies = bodyDetectionRequest.results ?? []
        guard !faces.isEmpty || !bodies.isEmpty else {
          continuation.resume(returning: nil)
          return
        }

        // Extract the segmentation mask.
        guard let maskPixelBuffer = personSegmentationRequest.results?.first?.pixelBuffer else {
          continuation.resume(returning: nil)
          return
        }

        // Scale the mask to match the original image's dimensions.
        var maskImage = CIImage(cvPixelBuffer: maskPixelBuffer)
        let scaleTransform = calculateScaleTransform(from: maskImage.extent.size, to: image.extent.size)
        maskImage = maskImage.transformed(by: scaleTransform)
        continuation.resume(returning: maskImage)
      } catch {
        continuation.resume(returning: nil)
      }
    }
  }
}
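The three Vision requests referenced above (`faceDetectionRequest`, `bodyDetectionRequest`, `personSegmentationRequest`) are not shown in the snippet. A possible declaration, assuming they live as static properties on the `BackgroundRemover` type and that accurate-quality segmentation is desired:

```swift
import CoreVideo
import Vision

// Hypothetical declarations; place these inside the BackgroundRemover type
// so generatePersonMask(from:) can reference them.
private static let faceDetectionRequest = VNDetectFaceRectanglesRequest()
private static let bodyDetectionRequest = VNDetectHumanRectanglesRequest()
private static let personSegmentationRequest: VNGeneratePersonSegmentationRequest = {
  let request = VNGeneratePersonSegmentationRequest()
  // Favor mask quality over speed; use .balanced or .fast for live previews.
  request.qualityLevel = .accurate
  // A single-channel 8-bit mask is sufficient for blending.
  request.outputPixelFormat = kCVPixelFormatType_OneComponent8
  return request
}()
```

Reusing the same request instances across invocations is why `generatePersonMask` reads their `results` immediately after `perform` returns.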

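The `calculateScaleTransform` helper used when resizing the mask is also not shown. A minimal sketch, assuming the mask only needs to be stretched so its extent exactly covers the original image's extent:

```swift
import Foundation

/// Hypothetical helper: returns the affine transform that scales a rect of
/// `sourceSize` so that it exactly covers `targetSize`.
func calculateScaleTransform(from sourceSize: CGSize, to targetSize: CGSize) -> CGAffineTransform {
  CGAffineTransform(scaleX: targetSize.width / sourceSize.width,
                    y: targetSize.height / sourceSize.height)
}
```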
The mask is then applied to composite the subject over a transparent background:

private static func applyTransparencyMask(_ mask: CIImage, to image: CIImage) -> CIImage? {
  guard let blendFilter = CIFilter(name: "CIBlendWithRedMask") else { return nil }
  blendFilter.setDefaults()
  blendFilter.setValue(image, forKey: kCIInputImageKey)

  // Use a fully transparent background the same size as the input image.
  let transparentBackground = CIImage(color: CIColor.clear).cropped(to: image.extent)
  blendFilter.setValue(transparentBackground, forKey: kCIInputBackgroundImageKey)
  blendFilter.setValue(mask, forKey: kCIInputMaskImageKey)

  return blendFilter.outputImage
}
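Finally, the `convertToUIImage` helper referenced in `removeBackground(from:)` is not shown either. A minimal sketch that renders through a `CGImage` so the alpha channel is preserved; the shared `CIContext` is an assumption (creating a context per call is expensive):

```swift
import CoreImage
import UIKit

// Hypothetical helper; place inside the BackgroundRemover type.
private static let ciContext = CIContext()

/// Renders a CIImage into a UIImage, keeping transparency intact.
private static func convertToUIImage(_ image: CIImage) -> UIImage? {
  guard let cgImage = ciContext.createCGImage(image, from: image.extent) else {
    return nil
  }
  return UIImage(cgImage: cgImage)
}
```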