The CE.SDK provides a flexible architecture that allows you to extend its functionality to meet your specific needs.
This guide demonstrates how to integrate a custom background removal feature into the Photo Editor. The same approach can be applied to all other editor solutions.
You will learn how to add a dedicated button to the editor’s UI that triggers your own background removal logic, processes the image, and then updates the editor with the result. This approach is ideal when you want to use a specific background removal technology developed by your team, such as Apple’s on-device Vision framework, a third-party library, or your own server-side API.
This guide will walk you through the following steps:
- Step 1: Add a Custom Button: Create a new button in the editor’s dock to initiate the background removal process.
- Step 2: Extract Image Data: Retrieve the selected image from the engine to prepare it for processing.
- Step 3: Process the Image: Apply background removal using a custom implementation. We will use Apple’s Vision framework as an example.
- Step 4: Replace the Image: Update the image block in the editor with the new, background-free version.
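The steps above can be condensed into a simple pipeline. The sketch below is illustrative only: the closure names (`extract`, `process`, `replace`) and the function `runBackgroundRemovalFlow` are not CE.SDK API, and the real implementation in this guide is async and talks to the engine. It just shows the shape of the flow.

```swift
import Foundation

// A minimal sketch of the overall flow. All names here are illustrative,
// not part of CE.SDK; the real steps below are async and use the engine.
func runBackgroundRemovalFlow(
    extract: () throws -> Data,          // Step 2: pull the image bytes out of the engine
    process: (Data) throws -> Data,      // Step 3: your background removal logic
    replace: (Data) throws -> Void       // Step 4: update the image block with the result
) rethrows {
    let original = try extract()
    let cutout = try process(original)
    try replace(cutout)
}
```

Keeping the three stages separate like this makes it easy to swap the processing step for a different backend (on-device Vision, a third-party library, or a server-side API) without touching the editor integration.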
## Step 1: Add a Custom Dock Button
The first step is to add a new button to the editor's main toolbar, known as the Dock. We can achieve this using the `.imgly.modifyDockItems` modifier, which allows you to add, remove, or replace items in the default dock layout. You can learn more about configuring the Dock in the iOS editor in the dedicated guide.

In the example below, we add a new `Dock.Button` to the end of the existing items. This button is configured with a unique ID, a label, and an action to be executed when tapped.
```swift
PhotoEditor(editorSettings)
    .imgly.modifyDockItems { context, items in
        items.addLast {
            Dock.Button(
                id: "ly.img.backgroundRemoval",
                action: { context in
                    // Trigger the background removal process
                },
                label: { _ in
                    Label("Remove Background", systemImage: "person.crop.circle.fill.badge.minus")
                }
            )
        }
    }
```
## Step 2: Extract the Image
Now that we have a button in place, we need to implement the action that will be triggered when the user taps it.
With the context parameter, we have access to the engine and can retrieve the image. First, we access the current page, retrieve its fill, and verify that it is an image fill. This ensures that the background removal operation is only attempted on valid image layers.
```swift
private func performBackgroundRemoval(context: Dock.Context) async {
    do {
        let engine = context.engine

        // Get the current page from the scene
        guard let currentPage = try engine.scene.getCurrentPage() else {
            throw BackgroundRemovalError.noPageFound
        }

        // Validate that the page contains an image
        let imageFill = try engine.block.getFill(currentPage)
        let fillType = try engine.block.getType(imageFill)
        guard fillType == FillType.image.rawValue else {
            throw BackgroundRemovalError.noImageFound(currentType: fillType)
        }

        // Extract the image data using the URI from the block's properties.
        // For simplicity, we only read the imageFileURI property, but you could
        // also check whether the block is using a sourceSet.
        let imageFileURI = try engine.block.getString(imageFill, property: "fill/image/imageFileURI")
        guard let url = URL(string: imageFileURI) else {
            throw BackgroundRemovalError.noImageSourceFound
        }

        // Convert to Data
        let (imageData, _) = try await URLSession.shared.data(from: url)

        // Convert to UIImage for processing.
        // Note: Some images may not contain orientation metadata, but for
        // simplicity, we won't address this here.
        guard let originalImage = UIImage(data: imageData) else {
            throw BackgroundRemovalError.imageConversionFailed
        }

        // ... rest of the background removal logic ...
    } catch {
        // ... error handling ...
    }
}
```
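If you do decide to handle orientation metadata yourself, remember that EXIF orientations 5 through 8 rotate the pixel grid by 90°, so the displayed width and height swap. The helper below is a hedged, illustrative sketch of that bookkeeping (`displaySize` is not a CE.SDK or UIKit function); the raw values follow the standard EXIF/`CGImagePropertyOrientation` numbering (1 = up … 8 = left).

```swift
import Foundation

// Illustrative helper: computes the displayed dimensions for a given
// EXIF orientation value. Orientations 5-8 involve a 90° rotation,
// so width and height swap; 1-4 (upright, mirrored, upside-down) do not.
func displaySize(pixelWidth: Int, pixelHeight: Int, exifOrientation: Int) -> (width: Int, height: Int) {
    switch exifOrientation {
    case 5...8:
        return (pixelHeight, pixelWidth) // rotated left/right: dimensions swap
    default:
        return (pixelWidth, pixelHeight) // no rotation: dimensions unchanged
    }
}
```

This matters because a mask generated from rotated pixels will not line up with the image the editor displays, so normalizing the orientation before segmentation avoids skewed results.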
## Step 3: Process the Image
Now that you have the `UIImage`, you can integrate the background removal solution of your choice. Our example uses a custom `BackgroundRemover` class that leverages Apple's Vision framework for high-quality, on-device person segmentation.

This is just an example, and you can replace it with your own implementation. In this case, it tries to detect persons and faces for the most accurate results. The full implementation is at the end of the article.
```swift
guard let processedImage = await BackgroundRemover.removeBackground(from: originalImage) else {
    throw BackgroundRemovalError.backgroundRemovalFailed
}
```
## Step 4: Replace the Image in the Editor
After the background has been successfully removed, the resulting `UIImage` (with a transparent background) must be saved to a temporary location. We then use the URL of this new file to update the source of the original image block in the editor. This replaces the old image with the new one seamlessly.
```swift
// Save the processed image to a temporary cache URL
let processedImageURL = try saveImageToCache(processedImage)

// Replace the original image with the new one.
// As explained earlier, you can use a sourceSet instead of imageFileURI if you prefer.
try await engine.block.addImageFileURIToSourceSet(
    imageFill,
    property: "fill/image/sourceSet",
    uri: processedImageURL
)
```
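The temporary file needs a unique name so repeated runs don't overwrite each other. The sketch below shows one way to build such a cache URL with plain Foundation; `makeProcessedImageURL` is an illustrative helper name, not CE.SDK API, and it uses `appendingPathExtension("png")` rather than the `conformingTo:` overload so it doesn't depend on UniformTypeIdentifiers.

```swift
import Foundation

// Illustrative helper: builds a unique .png URL inside the user's caches
// directory. A UUID filename avoids collisions between repeated runs;
// the engine then reads the file back via this URL.
func makeProcessedImageURL(fileManager: FileManager = .default) throws -> URL {
    let cachesDir = try fileManager.url(
        for: .cachesDirectory,
        in: .userDomainMask,
        appropriateFor: nil,
        create: true
    )
    return cachesDir
        .appendingPathComponent(UUID().uuidString)
        .appendingPathExtension("png")
}
```

Files in the caches directory may be purged by the system, which is fine here: once `addImageFileURIToSourceSet` has run, the engine manages the image, so the temporary file does not need to persist.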
## Full Implementation
Here is the complete source code for the `Editor` view and the `BackgroundRemover` utility. You can adapt this code to fit your project's needs and substitute the `BackgroundRemover` with your own implementation.
### Editor.swift
```swift
import IMGLYEngine
import IMGLYPhotoEditor
import SwiftUI

/// Configuration settings for the IMG.LY Editor.
/// Replace the license key with your own valid license associated with your bundle identifier.
private let editorSettings = EngineSettings(
    license: "your_license_key_here",
    userID: "identifier-for-your-user"
)

/// Photo Editor with AI-powered background removal capabilities.
///
/// This view demonstrates how to:
/// - Initialize the IMG.LY Photo Editor with an image URL
/// - Add custom functionality (background removal) to the editor toolbar
/// - Handle background removal processing and apply the result back to the PhotoEditor
struct Editor: View {
    // MARK: - Properties

    /// The URL of the image to be edited
    let url: URL

    // MARK: - State Properties

    /// Tracks whether background removal is currently in progress
    @State private var isProcessingBackgroundRemoval = false

    /// Stores any processing errors to display to the user
    @State private var processingError: BackgroundRemovalError?

    // MARK: - Body

    var body: some View {
        PhotoEditor(editorSettings)
            // Initialize the editor with the provided image
            .imgly.onCreate { engine in
                // Load the selected image into the editor
                try await OnCreate.loadImage(from: url)(engine)
            }
            // 'modifyDockItems' lets you change the dock buttons; here we simply
            // append an extra button to the default ones
            .imgly.modifyDockItems { context, items in
                // Add the custom background removal button after the default dock items
                items.addLast {
                    backgroundRemovalButton(context: context)
                }
            }
            .alert("Background Removal Error", isPresented: .constant(processingError != nil)) {
                Button("OK") { processingError = nil }
            } message: {
                Text(processingError?.localizedDescription ?? "An unexpected error occurred")
            }
    }

    // MARK: - UI Components

    /// Creates the background removal button for the editor dock
    private func backgroundRemovalButton(context: Dock.Context) -> some Dock.Item {
        Dock.Button(
            id: "ly.img.backgroundRemoval",
            action: { context in
                Task {
                    await performBackgroundRemoval(context: context)
                }
            },
            label: { _ in
                Label("Remove Background", systemImage: "person.crop.circle.fill.badge.minus")
            }
        )
    }

    // MARK: - Background Removal Logic

    /// Performs AI-powered background removal on the current image.
    /// - Parameter context: The dock context containing the engine instance
    private func performBackgroundRemoval(context: Dock.Context) async {
        do {
            // Prevent multiple simultaneous operations
            guard !isProcessingBackgroundRemoval else { return }
            isProcessingBackgroundRemoval = true
            defer { isProcessingBackgroundRemoval = false }

            let engine = context.engine

            // Get the current page (canvas) from the scene
            guard let currentPage = try engine.scene.getCurrentPage() else {
                throw BackgroundRemovalError.noPageFound
            }

            // Validate that the page contains an image
            let imageFill = try engine.block.getFill(currentPage)
            let fillType = try engine.block.getType(imageFill)
            guard fillType == FillType.image.rawValue else {
                throw BackgroundRemovalError.noImageFound(currentType: fillType)
            }

            // Set the block into a loading state
            try engine.block.setState(imageFill, state: .pending(progress: 0.5))

            // Step 1: Extract image data from the block
            let imageData = try await extractImageData(from: imageFill, engine: engine)

            // Step 2: Convert to UIImage for processing
            guard let originalImage = UIImage(data: imageData) else {
                try engine.block.setState(imageFill, state: .ready)
                throw BackgroundRemovalError.imageConversionFailed
            }

            // Step 3: Apply background removal (here using Apple's Vision framework)
            guard let processedImage = await BackgroundRemover.removeBackground(from: originalImage) else {
                try engine.block.setState(imageFill, state: .ready)
                throw BackgroundRemovalError.backgroundRemovalFailed
            }

            // Step 4: Save the processed image
            let processedImageURL = try saveImageToCache(processedImage)

            // Step 5: Replace the original image with the background-free version
            try await engine.block.addImageFileURIToSourceSet(
                imageFill,
                property: "fill/image/sourceSet",
                uri: processedImageURL
            )

            // Set the block back into the ready state
            try engine.block.setState(imageFill, state: .ready)

        } catch {
            // Handle any errors that occurred during processing
            if let backgroundRemovalError = error as? BackgroundRemovalError {
                processingError = backgroundRemovalError
            } else {
                processingError = .unexpectedError(error)
            }
            print("Background removal failed: \(error.localizedDescription)")
        }
    }

    /// Extracts image data from a design block
    private func extractImageData(from block: DesignBlockID, engine: Engine) async throws -> Data {
        // You could also check here whether the block is using a sourceSet
        let imageFileURI = try engine.block.getString(block, property: "fill/image/imageFileURI")
        guard let url = URL(string: imageFileURI) else {
            throw BackgroundRemovalError.noImageSourceFound
        }

        let (data, _) = try await URLSession.shared.data(from: url)
        return data
    }

    /// Saves the processed image to the cache directory
    private func saveImageToCache(_ image: UIImage) throws -> URL {
        guard let imageData = image.pngData() else {
            throw BackgroundRemovalError.imageSavingFailed
        }

        let cacheURL = try FileManager.default
            .url(for: .cachesDirectory, in: .userDomainMask, appropriateFor: nil, create: false)
            .appendingPathComponent(UUID().uuidString, conformingTo: .png)

        try imageData.write(to: cacheURL)
        return cacheURL
    }
}

// MARK: - Error Handling

enum BackgroundRemovalError: LocalizedError {
    case noPageFound
    case noImageFound(currentType: String)
    case noImageSourceFound
    case imageConversionFailed
    case backgroundRemovalFailed
    case imageSavingFailed
    case unexpectedError(Error)

    var errorDescription: String? {
        switch self {
        case .noPageFound:
            return "No active page found in the editor."
        case let .noImageFound(currentType):
            return "The current page doesn't contain an image. Current content type: \(currentType)"
        case .noImageSourceFound:
            return "No image source found for background removal."
        case .imageConversionFailed:
            return "Failed to convert image data for processing."
        case .backgroundRemovalFailed:
            return "AI background removal failed. Please ensure the image contains a clearly visible person."
        case .imageSavingFailed:
            return "Failed to save the processed image."
        case let .unexpectedError(error):
            return "Unexpected error: \(error.localizedDescription)"
        }
    }
}
```
### BackgroundRemover.swift
```swift
@preconcurrency import CoreImage
import Foundation
import UIKit
import Vision

/// Background removal utility using Apple's Vision framework.
/// This implementation focuses on face and body detection; since you control
/// this code, you can adapt it to your needs.
enum BackgroundRemover {
    // MARK: - Vision Requests Configuration

    /// Vision request for detecting faces in images
    private static var faceDetectionRequest: VNDetectFaceRectanglesRequest = {
        let request = VNDetectFaceRectanglesRequest()
        request.revision = VNDetectFaceRectanglesRequestRevision3
        return request
    }()

    /// Vision request for detecting human bodies in images
    private static var bodyDetectionRequest: VNDetectHumanRectanglesRequest = {
        let request = VNDetectHumanRectanglesRequest()
        request.revision = VNDetectHumanRectanglesRequestRevision2
        return request
    }()

    /// Vision request for generating person segmentation masks
    private static var personSegmentationRequest: VNGeneratePersonSegmentationRequest = {
        let request = VNGeneratePersonSegmentationRequest()
        request.qualityLevel = .accurate // Use the highest quality setting
        request.outputPixelFormat = kCVPixelFormatType_OneComponent8
        request.revision = VNGeneratePersonSegmentationRequestRevision1
        return request
    }()

    // MARK: - Public Interface

    /// Removes the background from an image using AI-powered person segmentation.
    static func removeBackground(from image: UIImage) async -> UIImage? {
        // Convert UIImage to CIImage for processing
        guard let ciImage = CIImage(image: image) else {
            debugPrint("❌ Failed to convert UIImage to CIImage")
            return nil
        }

        // Step 1: Generate the person segmentation mask
        guard let maskImage = await generatePersonMask(from: ciImage) else {
            debugPrint("❌ Failed to generate person mask")
            return nil
        }

        // Step 2: Apply the mask to remove the background
        guard let resultCIImage = applyTransparencyMask(maskImage, to: ciImage) else {
            debugPrint("❌ Failed to apply transparency mask")
            return nil
        }

        // Step 3: Convert the result back to UIImage
        guard let resultUIImage = convertToUIImage(resultCIImage) else {
            debugPrint("❌ Failed to convert result to UIImage")
            return nil
        }
        return resultUIImage
    }

    // MARK: - Private Implementation

    /// Generates a person segmentation mask from the input image.
    private static func generatePersonMask(from image: CIImage) async -> CIImage? {
        await withCheckedContinuation { continuation in
            DispatchQueue.global(qos: .userInitiated).async {
                let requestHandler = VNSequenceRequestHandler()
                do {
                    try requestHandler.perform([
                        faceDetectionRequest,
                        bodyDetectionRequest,
                        personSegmentationRequest
                    ], on: image)

                    // Validate that a person was detected
                    let faces = faceDetectionRequest.results ?? []
                    let bodies = bodyDetectionRequest.results ?? []
                    guard !faces.isEmpty || !bodies.isEmpty else {
                        continuation.resume(returning: nil)
                        return
                    }

                    // Extract the segmentation mask
                    guard let maskPixelBuffer = personSegmentationRequest.results?.first?.pixelBuffer else {
                        continuation.resume(returning: nil)
                        return
                    }

                    // Scale the mask to match the input image's dimensions
                    var maskImage = CIImage(cvPixelBuffer: maskPixelBuffer)
                    let scaleTransform = calculateScaleTransform(from: maskImage.extent.size, to: image.extent.size)
                    maskImage = maskImage.transformed(by: scaleTransform)
                    continuation.resume(returning: maskImage)
                } catch {
                    continuation.resume(returning: nil)
                }
            }
        }
    }

    /// Applies a transparency mask to create a background-removed image.
    private static func applyTransparencyMask(_ mask: CIImage, to image: CIImage) -> CIImage? {
        guard let blendFilter = CIFilter(name: "CIBlendWithRedMask") else { return nil }
        blendFilter.setDefaults()
        blendFilter.setValue(image, forKey: kCIInputImageKey)
        let transparentBackground = CIImage(color: CIColor.clear).cropped(to: image.extent)
        blendFilter.setValue(transparentBackground, forKey: kCIInputBackgroundImageKey)
        blendFilter.setValue(mask, forKey: kCIInputMaskImageKey)
        return blendFilter.outputImage
    }

    /// Converts a CIImage to a UIImage.
    private static func convertToUIImage(_ ciImage: CIImage) -> UIImage? {
        let context = CIContext(options: [.useSoftwareRenderer: false])
        guard let cgImage = context.createCGImage(ciImage, from: ciImage.extent) else { return nil }
        return UIImage(cgImage: cgImage)
    }

    /// Calculates the transform needed to scale one size to match another.
    private static func calculateScaleTransform(from fromSize: CGSize, to toSize: CGSize) -> CGAffineTransform {
        let scaleX = toSize.width / fromSize.width
        let scaleY = toSize.height / fromSize.height
        return CGAffineTransform(scaleX: scaleX, y: scaleY)
    }
}
```