---
title: "Animation"
description: "Add motion to designs with support for keyframes, timeline editing, and programmatic animation control."
platform: ios
url: "https://img.ly/docs/cesdk/ios/animation-ce900c/"
---

> This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt).

## Related Pages

- [Overview](https://img.ly/docs/cesdk/ios/animation/overview-6a2ef2/) - Add motion to designs with support for keyframes, timeline editing, and programmatic animation control.
- [Supported Animation Types](https://img.ly/docs/cesdk/ios/animation/types-4e5f41/) - Explore the types of animations supported by CE.SDK, including object, text, and transition effects.

## More Resources

- **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation
- **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs)
- **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples
- **[Support](mailto:support@img.ly)** - Contact IMG.LY support

---

---
title: "Overview"
description: "Add motion to designs with support for keyframes, timeline editing, and programmatic animation control."
platform: ios
url: "https://img.ly/docs/cesdk/ios/animation/overview-6a2ef2/"
---
Animations in CreativeEditor SDK (CE.SDK) bring your designs to life by adding motion to images, text, and design elements. Whether you're creating a dynamic social media post, a video ad, or an engaging product demo, animations help capture attention and communicate ideas more effectively. With CE.SDK, you can create and edit animations either through the built-in UI timeline or programmatically using the CreativeEngine API. Animated designs can be exported as MP4 videos, allowing you to deliver polished, motion-rich content entirely client-side.

[Explore Demos](https://img.ly/showcases/cesdk?tags=ios) [Get Started](https://img.ly/docs/cesdk/ios/get-started/overview-e18f40/)

---

---
title: "Supported Animation Types"
description: "Explore the types of animations supported by CE.SDK, including object, text, and transition effects."
platform: ios
url: "https://img.ly/docs/cesdk/ios/animation/types-4e5f41/"
---
## Animation Categories

There are three categories of animations: *In*, *Out*, and *Loop* animations.

### In Animations

*In* animations animate a block for a specified duration after the block first appears in the scene. For example, if a block has a time offset of 4s in the scene and an *In* animation with a duration of 1s, the block's appearance is animated between 4s and 5s by the *In* animation.

### Out Animations

*Out* animations animate a block for a specified duration before the block disappears from the scene. For example, if a block has a time offset of 4s in the scene, a duration of 5s, and an *Out* animation with a duration of 1s, the block's appearance is animated between 8s and 9s by the *Out* animation.

### Loop Animations

*Loop* animations animate a block for the total duration that the block is visible in the scene. *Loop* animations also run simultaneously with *In* and *Out* animations, if those are present.
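The timing rules above can be sketched in plain Swift. The struct and helper names here are illustrative only (they are not CE.SDK types); the sketch just reproduces the arithmetic from the examples:

```swift
import Foundation

// Illustrative model of a block's animation timing (not a CE.SDK type).
struct AnimatedBlock {
    var timeOffset: Double  // seconds into the scene when the block appears
    var duration: Double    // how long the block stays visible
    var inDuration: Double? // duration of an optional In animation
    var outDuration: Double? // duration of an optional Out animation
}

// The In animation plays right after the block appears.
func inWindow(of block: AnimatedBlock) -> ClosedRange<Double>? {
    guard let d = block.inDuration else { return nil }
    return block.timeOffset...(block.timeOffset + d)
}

// The Out animation plays right before the block disappears.
func outWindow(of block: AnimatedBlock) -> ClosedRange<Double>? {
    guard let d = block.outDuration else { return nil }
    let end = block.timeOffset + block.duration
    return (end - d)...end
}

// A Loop animation spans the block's entire visibility window.
func loopWindow(of block: AnimatedBlock) -> ClosedRange<Double> {
    block.timeOffset...(block.timeOffset + block.duration)
}

// The example from the text: offset 4s, duration 5s, 1s In and Out animations.
let block = AnimatedBlock(timeOffset: 4, duration: 5, inDuration: 1, outDuration: 1)
assert(inWindow(of: block) == 4.0...5.0)   // In: right after the block appears
assert(outWindow(of: block) == 8.0...9.0)  // Out: right before it disappears
assert(loopWindow(of: block) == 4.0...9.0) // Loop: the whole visibility window
```

Note that the Loop window overlaps both the In and Out windows, matching the rule that Loop animations run simultaneously with In and Out animations.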
## Animation Presets

We currently support the following *In* and *Out* animation presets:

- `'//ly.img.ubq/animation/slide'`
- `'//ly.img.ubq/animation/pan'`
- `'//ly.img.ubq/animation/fade'`
- `'//ly.img.ubq/animation/blur'`
- `'//ly.img.ubq/animation/grow'`
- `'//ly.img.ubq/animation/zoom'`
- `'//ly.img.ubq/animation/pop'`
- `'//ly.img.ubq/animation/wipe'`
- `'//ly.img.ubq/animation/baseline'`
- `'//ly.img.ubq/animation/crop_zoom'`
- `'//ly.img.ubq/animation/spin'`
- `'//ly.img.ubq/animation/ken_burns'`
- `'//ly.img.ubq/animation/typewriter_text'` (text-only)
- `'//ly.img.ubq/animation/block_swipe_text'` (text-only)
- `'//ly.img.ubq/animation/merge_text'` (text-only)
- `'//ly.img.ubq/animation/spread_text'` (text-only)

and the following *Loop* animation types:

- `'//ly.img.ubq/animation/spin_loop'`
- `'//ly.img.ubq/animation/fade_loop'`
- `'//ly.img.ubq/animation/blur_loop'`
- `'//ly.img.ubq/animation/pulsating_loop'`
- `'//ly.img.ubq/animation/breathing_loop'`
- `'//ly.img.ubq/animation/jump_loop'`
- `'//ly.img.ubq/animation/squeeze_loop'`
- `'//ly.img.ubq/animation/sway_loop'`

## Animation Type Properties

## Baseline Type

A text animation that slides text in along its baseline. This section describes the properties available for the **Baseline Type** (`//ly.img.ubq/animation/baseline`) block type.
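Many of the animation types in this reference share an `animationEasing` property. As a rough sketch, the named curves correspond to standard easing functions like the ones below. These are illustrative textbook implementations, not CE.SDK's exact curves (in particular, the `Spring` variants likely use different parameters):

```swift
import Foundation

// Illustrative easing curves mapping progress t in 0...1 to eased progress.
func linear(_ t: Double) -> Double { t }

// Cubic ease-in / ease-out (a common choice for "EaseIn" / "EaseOut").
func easeIn(_ t: Double) -> Double { t * t * t }
func easeOut(_ t: Double) -> Double { 1 - pow(1 - t, 3) }

// Cubic ease-in-out: slow start, fast middle, slow end.
func easeInOut(_ t: Double) -> Double {
    t < 0.5 ? 4 * t * t * t : 1 - pow(-2 * t + 2, 3) / 2
}

// Quart variants simply raise the polynomial degree.
func easeInQuart(_ t: Double) -> Double { pow(t, 4) }
func easeOutQuart(_ t: Double) -> Double { 1 - pow(1 - t, 4) }

// "Back" easings overshoot slightly past the target before settling.
func easeOutBack(_ t: Double) -> Double {
    let c1 = 1.70158
    let c3 = c1 + 1
    return 1 + c3 * pow(t - 1, 3) + c1 * pow(t - 1, 2)
}

// Every curve starts at 0 and ends at 1 (Back may overshoot in between).
let curves: [(Double) -> Double] =
    [linear, easeIn, easeOut, easeInOut, easeInQuart, easeOutQuart, easeOutBack]
for f in curves {
    precondition(abs(f(0)) < 1e-9 && abs(f(1) - 1) < 1e-9)
}
```

The `In`/`Out`/`InOut` prefixes describe where the curve is slow: an `EaseIn` curve starts slowly, an `EaseOut` curve ends slowly, and `EaseInOut` does both.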
| Property | Type | Default | Description |
| --- | --- | --- | --- |
| `animation/baseline/direction` | `Enum` | `"Up"` | The direction of the baseline animation. Possible values: `"Up"`, `"Right"`, `"Down"`, `"Left"` |
| `animationEasing` | `Enum` | `"Linear"` | The easing function to apply to the animation. Possible values: `"Linear"`, `"EaseIn"`, `"EaseOut"`, `"EaseInOut"`, `"EaseInQuart"`, `"EaseOutQuart"`, `"EaseInOutQuart"`, `"EaseInQuint"`, `"EaseOutQuint"`, `"EaseInOutQuint"`, `"EaseInBack"`, `"EaseOutBack"`, `"EaseInOutBack"`, `"EaseInSpring"`, `"EaseOutSpring"`, `"EaseInOutSpring"` |
| `playback/duration` | `Double` | `0.6` | The duration of the animation in seconds. |
| `textAnimationOverlap` | `Float` | `0.35` | The overlap factor for text animations. |
| `textAnimationWritingStyle` | `Enum` | `"Line"` | The writing style for text animations (e.g., by character, by word). Possible values: `"Block"`, `"Line"`, `"Character"`, `"Word"` |

## Block Swipe Text Type

A text animation that reveals text with a colored block swiping across. This section describes the properties available for the **Block Swipe Text Type** (`//ly.img.ubq/animation/block_swipe_text`) block type.

| Property | Type | Default | Description |
| --- | --- | --- | --- |
| `animation/block_swipe_text/blockColor` | `Color` | `{"r":0,"g":0,"b":0,"a":1}` | The overlay block color. |
| `animation/block_swipe_text/direction` | `Enum` | `"Right"` | The direction of the block swipe animation. Possible values: `"Up"`, `"Right"`, `"Down"`, `"Left"` |
| `animation/block_swipe_text/useTextColor` | `Bool` | `true` | Whether the overlay block should use the text color. |
| `playback/duration` | `Double` | `1.2` | The duration of the animation in seconds. |
| `textAnimationOverlap` | `Float` | `0.35` | The overlap factor for text animations. |
| `textAnimationWritingStyle` | `Enum` | `"Line"` | The writing style for text animations (e.g., by character, by word). Possible values: `"Block"`, `"Line"`, `"Character"`, `"Word"` |

## Blur Type

An animation that applies a blur effect over time. This section describes the properties available for the **Blur Type** (`//ly.img.ubq/animation/blur`) block type.

| Property | Type | Default | Description |
| --- | --- | --- | --- |
| `animation/blur/fade` | `Bool` | `true` | Whether an opacity fade animation should be applied during the blur animation. |
| `animation/blur/intensity` | `Float` | `1` | The maximum intensity of the blur. |
| `animationEasing` | `Enum` | `"Linear"` | The easing function to apply to the animation. Possible values: `"Linear"`, `"EaseIn"`, `"EaseOut"`, `"EaseInOut"`, `"EaseInQuart"`, `"EaseOutQuart"`, `"EaseInOutQuart"`, `"EaseInQuint"`, `"EaseOutQuint"`, `"EaseInOutQuint"`, `"EaseInBack"`, `"EaseOutBack"`, `"EaseInOutBack"`, `"EaseInSpring"`, `"EaseOutSpring"`, `"EaseInOutSpring"` |
| `playback/duration` | `Double` | `0.6` | The duration of the animation in seconds. |
| `textAnimationOverlap` | `Float` | `0.35` | The overlap factor for text animations. |
| `textAnimationWritingStyle` | `Enum` | `"Line"` | The writing style for text animations (e.g., by character, by word). Possible values: `"Block"`, `"Line"`, `"Character"`, `"Word"` |

## Blur Loop Type

A looping animation that continuously applies a blur effect. This section describes the properties available for the **Blur Loop Type** (`//ly.img.ubq/animation/blur_loop`) block type.

| Property | Type | Default | Description |
| --- | --- | --- | --- |
| `animation/blur_loop/intensity` | `Float` | `1` | The maximum blur intensity of this effect. |
| `playback/duration` | `Double` | `1.2` | The duration of the animation in seconds. |

## Breathing Loop Type

A looping animation with a slow, breathing-like scale effect. This section describes the properties available for the **Breathing Loop Type** (`//ly.img.ubq/animation/breathing_loop`) block type.

| Property | Type | Default | Description |
| --- | --- | --- | --- |
| `animation/breathing_loop/intensity` | `Float` | `0` | Controls the intensity of the scaling. A value of 0 results in a maximum scale of 1.25. A value of 1 results in a maximum scale of 2.5. |
| `playback/duration` | `Double` | `1.2` | The duration of the animation in seconds. |

## Crop Zoom Type

An animation that zooms the content within the block's frame. This section describes the properties available for the **Crop Zoom Type** (`//ly.img.ubq/animation/crop_zoom`) block type.
| Property | Type | Default | Description |
| --- | --- | --- | --- |
| `animation/crop_zoom/fade` | `Bool` | `true` | Whether an opacity fade animation should be applied during the crop zoom animation. |
| `animation/crop_zoom/scale` | `Float` | `1.25` | The maximum crop scale value. |
| `animationEasing` | `Enum` | `"Linear"` | The easing function to apply to the animation. Possible values: `"Linear"`, `"EaseIn"`, `"EaseOut"`, `"EaseInOut"`, `"EaseInQuart"`, `"EaseOutQuart"`, `"EaseInOutQuart"`, `"EaseInQuint"`, `"EaseOutQuint"`, `"EaseInOutQuint"`, `"EaseInBack"`, `"EaseOutBack"`, `"EaseInOutBack"`, `"EaseInSpring"`, `"EaseOutSpring"`, `"EaseInOutSpring"` |
| `playback/duration` | `Double` | `1.2` | The duration of the animation in seconds. |

## Fade Type

An animation that fades the block in or out. This section describes the properties available for the **Fade Type** (`//ly.img.ubq/animation/fade`) block type.
| Property | Type | Default | Description |
| --- | --- | --- | --- |
| `animationEasing` | `Enum` | `"Linear"` | The easing function to apply to the animation. Possible values: `"Linear"`, `"EaseIn"`, `"EaseOut"`, `"EaseInOut"`, `"EaseInQuart"`, `"EaseOutQuart"`, `"EaseInOutQuart"`, `"EaseInQuint"`, `"EaseOutQuint"`, `"EaseInOutQuint"`, `"EaseInBack"`, `"EaseOutBack"`, `"EaseInOutBack"`, `"EaseInSpring"`, `"EaseOutSpring"`, `"EaseInOutSpring"` |
| `playback/duration` | `Double` | `0.6` | The duration of the animation in seconds. |
| `textAnimationOverlap` | `Float` | `0.35` | The overlap factor for text animations. |
| `textAnimationWritingStyle` | `Enum` | `"Line"` | The writing style for text animations (e.g., by character, by word). Possible values: `"Block"`, `"Line"`, `"Character"`, `"Word"` |

## Fade Loop Type

A looping animation that continuously fades the block in and out. This section describes the properties available for the **Fade Loop Type** (`//ly.img.ubq/animation/fade_loop`) block type.

| Property | Type | Default | Description |
| --- | --- | --- | --- |
| `playback/duration` | `Double` | `1.2` | The duration of the animation in seconds. |

## Grow Type

An animation that scales the block up from a point. This section describes the properties available for the **Grow Type** (`//ly.img.ubq/animation/grow`) block type.
| Property | Type | Default | Description |
| --- | --- | --- | --- |
| `animation/grow/direction` | `Enum` | `"All"` | Whether the grow animation should be applied to the width or height or both. Possible values: `"Horizontal"`, `"Vertical"`, `"All"` |
| `animationEasing` | `Enum` | `"Linear"` | The easing function to apply to the animation. Possible values: `"Linear"`, `"EaseIn"`, `"EaseOut"`, `"EaseInOut"`, `"EaseInQuart"`, `"EaseOutQuart"`, `"EaseInOutQuart"`, `"EaseInQuint"`, `"EaseOutQuint"`, `"EaseInOutQuint"`, `"EaseInBack"`, `"EaseOutBack"`, `"EaseInOutBack"`, `"EaseInSpring"`, `"EaseOutSpring"`, `"EaseInOutSpring"` |
| `playback/duration` | `Double` | `0.6` | The duration of the animation in seconds. |
| `textAnimationOverlap` | `Float` | `0.35` | The overlap factor for text animations. |
| `textAnimationWritingStyle` | `Enum` | `"Line"` | The writing style for text animations (e.g., by character, by word). Possible values: `"Block"`, `"Line"`, `"Character"`, `"Word"` |

## Jump Loop Type

A looping animation with a jumping motion. This section describes the properties available for the **Jump Loop Type** (`//ly.img.ubq/animation/jump_loop`) block type.
| Property | Type | Default | Description |
| --- | --- | --- | --- |
| `animation/jump_loop/direction` | `Enum` | `"Up"` | The direction of the jump animation. Possible values: `"Up"`, `"Right"`, `"Down"`, `"Left"` |
| `animation/jump_loop/intensity` | `Float` | `0.5` | Controls how far the block should move as a percentage of its width or height. |
| `playback/duration` | `Double` | `1.2` | The duration of the animation in seconds. |

## Ken Burns Type

An animation that simulates the Ken Burns effect by panning and zooming on content. This section describes the properties available for the **Ken Burns Type** (`//ly.img.ubq/animation/ken_burns`) block type.

| Property | Type | Default | Description |
| --- | --- | --- | --- |
| `animation/ken_burns/direction` | `Enum` | `"Right"` | The direction of the pan travel. Possible values: `"Up"`, `"Right"`, `"Down"`, `"Left"` |
| `animation/ken_burns/fade` | `Bool` | `false` | Whether an opacity fade animation should be applied during the animation. |
| `animation/ken_burns/travelDistanceRatio` | `Float` | `1` | The movement distance relative to the length of the crop. |
| `animation/ken_burns/zoomIntensity` | `Float` | `0.5` | The factor by which to zoom in or out. |
| `animationEasing` | `Enum` | `"EaseOutQuint"` | The easing function to apply to the animation. Possible values: `"Linear"`, `"EaseIn"`, `"EaseOut"`, `"EaseInOut"`, `"EaseInQuart"`, `"EaseOutQuart"`, `"EaseInOutQuart"`, `"EaseInQuint"`, `"EaseOutQuint"`, `"EaseInOutQuint"`, `"EaseInBack"`, `"EaseOutBack"`, `"EaseInOutBack"`, `"EaseInSpring"`, `"EaseOutSpring"`, `"EaseInOutSpring"` |
| `playback/duration` | `Double` | `2.4` | The duration of the animation in seconds. |

## Merge Text Type

A text animation where lines of text merge from opposite directions. This section describes the properties available for the **Merge Text Type** (`//ly.img.ubq/animation/merge_text`) block type.

| Property | Type | Default | Description |
| --- | --- | --- | --- |
| `animation/merge_text/direction` | `Enum` | `"Left"` | The in-animation direction of the first line of text. Possible values: `"Right"`, `"Left"` |
| `animation/merge_text/intensity` | `Float` | `0.5` | The intensity of the pan. |
| `animationEasing` | `Enum` | `"Linear"` | The easing function to apply to the animation. Possible values: `"Linear"`, `"EaseIn"`, `"EaseOut"`, `"EaseInOut"`, `"EaseInQuart"`, `"EaseOutQuart"`, `"EaseInOutQuart"`, `"EaseInQuint"`, `"EaseOutQuint"`, `"EaseInOutQuint"`, `"EaseInBack"`, `"EaseOutBack"`, `"EaseInOutBack"`, `"EaseInSpring"`, `"EaseOutSpring"`, `"EaseInOutSpring"` |
| `playback/duration` | `Double` | `1.2` | The duration of the animation in seconds. |

## Pan Type

An animation that pans the block across the view.
This section describes the properties available for the **Pan Type** (`//ly.img.ubq/animation/pan`) block type.

| Property | Type | Default | Description |
| --- | --- | --- | --- |
| `animation/pan/direction` | `Float` | `0` | The movement direction of the animation in radians. |
| `animation/pan/distance` | `Float` | `0.1` | The movement distance relative to the longer side of the page. |
| `animation/pan/fade` | `Bool` | `true` | Whether an opacity fade animation should be applied during the pan animation. |
| `animationEasing` | `Enum` | `"Linear"` | The easing function to apply to the animation. Possible values: `"Linear"`, `"EaseIn"`, `"EaseOut"`, `"EaseInOut"`, `"EaseInQuart"`, `"EaseOutQuart"`, `"EaseInOutQuart"`, `"EaseInQuint"`, `"EaseOutQuint"`, `"EaseInOutQuint"`, `"EaseInBack"`, `"EaseOutBack"`, `"EaseInOutBack"`, `"EaseInSpring"`, `"EaseOutSpring"`, `"EaseInOutSpring"` |
| `playback/duration` | `Double` | `0.6` | The duration of the animation in seconds. |
| `textAnimationOverlap` | `Float` | `0.35` | The overlap factor for text animations. |
| `textAnimationWritingStyle` | `Enum` | `"Line"` | The writing style for text animations (e.g., by character, by word). Possible values: `"Block"`, `"Line"`, `"Character"`, `"Word"` |

## Pop Type

An animation that quickly scales the block up and down. This section describes the properties available for the **Pop Type** (`//ly.img.ubq/animation/pop`) block type.
| Property | Type | Default | Description |
| --- | --- | --- | --- |
| `playback/duration` | `Double` | `0.6` | The duration of the animation in seconds. |
| `textAnimationOverlap` | `Float` | `0.35` | The overlap factor for text animations. |
| `textAnimationWritingStyle` | `Enum` | `"Line"` | The writing style for text animations (e.g., by character, by word). Possible values: `"Block"`, `"Line"`, `"Character"`, `"Word"` |

## Pulsating Loop Type

A looping animation with a pulsating scale effect. This section describes the properties available for the **Pulsating Loop Type** (`//ly.img.ubq/animation/pulsating_loop`) block type.

| Property | Type | Default | Description |
| --- | --- | --- | --- |
| `animation/pulsating_loop/intensity` | `Float` | `0` | Controls the intensity of the pulsating effect. A value of 0 results in a maximum scale of 1.25. A value of 1 results in a maximum scale of 2.5. |
| `playback/duration` | `Double` | `0.6` | The duration of the animation in seconds. |

## Slide Type

An animation that slides the block into or out of view. This section describes the properties available for the **Slide Type** (`//ly.img.ubq/animation/slide`) block type.
| Property | Type | Default | Description |
| --- | --- | --- | --- |
| `animation/slide/direction` | `Float` | `0` | The movement direction angle of the slide animation in radians. |
| `animation/slide/fade` | `Bool` | `false` | Whether an opacity fade animation should be applied during the slide animation. |
| `animationEasing` | `Enum` | `"Linear"` | The easing function to apply to the animation. Possible values: `"Linear"`, `"EaseIn"`, `"EaseOut"`, `"EaseInOut"`, `"EaseInQuart"`, `"EaseOutQuart"`, `"EaseInOutQuart"`, `"EaseInQuint"`, `"EaseOutQuint"`, `"EaseInOutQuint"`, `"EaseInBack"`, `"EaseOutBack"`, `"EaseInOutBack"`, `"EaseInSpring"`, `"EaseOutSpring"`, `"EaseInOutSpring"` |
| `playback/duration` | `Double` | `0.6` | The duration of the animation in seconds. |
| `textAnimationOverlap` | `Float` | `0.35` | The overlap factor for text animations. |
| `textAnimationWritingStyle` | `Enum` | `"Line"` | The writing style for text animations (e.g., by character, by word). Possible values: `"Block"`, `"Line"`, `"Character"`, `"Word"` |

## Spin Type

An animation that rotates the block. This section describes the properties available for the **Spin Type** (`//ly.img.ubq/animation/spin`) block type.
| Property | Type | Default | Description |
| --- | --- | --- | --- |
| `animation/spin/direction` | `Enum` | `"Clockwise"` | The direction of the spin animation. Possible values: `"Clockwise"`, `"CounterClockwise"` |
| `animation/spin/fade` | `Bool` | `true` | Whether an opacity fade animation should be applied during the spin animation. |
| `animation/spin/intensity` | `Float` | `1` | How far the animation should spin the block. 1.0 is a full rotation (360°). |
| `animationEasing` | `Enum` | `"Linear"` | The easing function to apply to the animation. Possible values: `"Linear"`, `"EaseIn"`, `"EaseOut"`, `"EaseInOut"`, `"EaseInQuart"`, `"EaseOutQuart"`, `"EaseInOutQuart"`, `"EaseInQuint"`, `"EaseOutQuint"`, `"EaseInOutQuint"`, `"EaseInBack"`, `"EaseOutBack"`, `"EaseInOutBack"`, `"EaseInSpring"`, `"EaseOutSpring"`, `"EaseInOutSpring"` |
| `playback/duration` | `Double` | `0.6` | The duration of the animation in seconds. |
| `textAnimationOverlap` | `Float` | `0.35` | The overlap factor for text animations. |
| `textAnimationWritingStyle` | `Enum` | `"Line"` | The writing style for text animations (e.g., by character, by word). Possible values: `"Block"`, `"Line"`, `"Character"`, `"Word"` |

## Spin Loop Type

A looping animation that continuously rotates the block. This section describes the properties available for the **Spin Loop Type** (`//ly.img.ubq/animation/spin_loop`) block type.
| Property | Type | Default | Description |
| --- | --- | --- | --- |
| `animation/spin_loop/direction` | `Enum` | `"Clockwise"` | The direction of the spin animation. Possible values: `"Clockwise"`, `"CounterClockwise"` |
| `playback/duration` | `Double` | `1.2` | The duration of the animation in seconds. |

## Spread Text Type

A text animation where letters spread apart or come together. This section describes the properties available for the **Spread Text Type** (`//ly.img.ubq/animation/spread_text`) block type.

| Property | Type | Default | Description |
| --- | --- | --- | --- |
| `animation/spread_text/fade` | `Bool` | `true` | Whether the text should fade in / out during the spread animation. |
| `animation/spread_text/intensity` | `Float` | `0.5` | The intensity of the spread. |
| `animationEasing` | `Enum` | `"Linear"` | The easing function to apply to the animation. Possible values: `"Linear"`, `"EaseIn"`, `"EaseOut"`, `"EaseInOut"`, `"EaseInQuart"`, `"EaseOutQuart"`, `"EaseInOutQuart"`, `"EaseInQuint"`, `"EaseOutQuint"`, `"EaseInOutQuint"`, `"EaseInBack"`, `"EaseOutBack"`, `"EaseInOutBack"`, `"EaseInSpring"`, `"EaseOutSpring"`, `"EaseInOutSpring"` |
| `playback/duration` | `Double` | `0.6` | The duration of the animation in seconds. |

## Squeeze Loop Type

A looping animation with a squeezing effect.
This section describes the properties available for the **Squeeze Loop Type** (`//ly.img.ubq/animation/squeeze_loop`) block type.

| Property | Type | Default | Description |
| --- | --- | --- | --- |
| `playback/duration` | `Double` | `1.2` | The duration of the animation in seconds. |

## Sway Loop Type

A looping animation with a swaying rotational motion. This section describes the properties available for the **Sway Loop Type** (`//ly.img.ubq/animation/sway_loop`) block type.

| Property | Type | Default | Description |
| --- | --- | --- | --- |
| `animation/sway_loop/intensity` | `Float` | `1` | The intensity of the animation. Defines the maximum sway angle between 15° and 45°. |
| `playback/duration` | `Double` | `1.2` | The duration of the animation in seconds. |

## Typewriter Text Type

A text animation that reveals text as if it's being typed. This section describes the properties available for the **Typewriter Text Type** (`//ly.img.ubq/animation/typewriter_text`) block type.

| Property | Type | Default | Description |
| --- | --- | --- | --- |
| `animation/typewriter_text/writingStyle` | `Enum` | `"Character"` | Whether the text should appear one character or one word at a time. Possible values: `"Character"`, `"Word"` |
| `playback/duration` | `Double` | `0.6` | The duration of the animation in seconds. |

## Wipe Type

An animation that reveals or hides the block with a wipe transition. This section describes the properties available for the **Wipe Type** (`//ly.img.ubq/animation/wipe`) block type.
| Property | Type | Default | Description |
| --- | --- | --- | --- |
| `animation/wipe/direction` | `Enum` | `"Right"` | The direction of the wipe animation. Possible values: `"Up"`, `"Right"`, `"Down"`, `"Left"` |
| `animationEasing` | `Enum` | `"Linear"` | The easing function to apply to the animation. Possible values: `"Linear"`, `"EaseIn"`, `"EaseOut"`, `"EaseInOut"`, `"EaseInQuart"`, `"EaseOutQuart"`, `"EaseInOutQuart"`, `"EaseInQuint"`, `"EaseOutQuint"`, `"EaseInOutQuint"`, `"EaseInBack"`, `"EaseOutBack"`, `"EaseInOutBack"`, `"EaseInSpring"`, `"EaseOutSpring"`, `"EaseInOutSpring"` |
| `playback/duration` | `Double` | `0.6` | The duration of the animation in seconds. |
| `textAnimationOverlap` | `Float` | `0.35` | The overlap factor for text animations. |
| `textAnimationWritingStyle` | `Enum` | `"Line"` | The writing style for text animations (e.g., by character, by word). Possible values: `"Block"`, `"Line"`, `"Character"`, `"Word"` |

## Zoom Type

An animation that scales the entire block. This section describes the properties available for the **Zoom Type** (`//ly.img.ubq/animation/zoom`) block type.
| Property | Type | Default | Description |
| --- | --- | --- | --- |
| `animation/zoom/fade` | `Bool` | `true` | Whether an opacity fade animation should be applied during the zoom animation. |
| `animationEasing` | `Enum` | `"Linear"` | The easing function to apply to the animation. Possible values: `"Linear"`, `"EaseIn"`, `"EaseOut"`, `"EaseInOut"`, `"EaseInQuart"`, `"EaseOutQuart"`, `"EaseInOutQuart"`, `"EaseInQuint"`, `"EaseOutQuint"`, `"EaseInOutQuint"`, `"EaseInBack"`, `"EaseOutBack"`, `"EaseInOutBack"`, `"EaseInSpring"`, `"EaseOutSpring"`, `"EaseInOutSpring"` |
| `playback/duration` | `Double` | `0.6` | The duration in seconds for which this block should be visible. |
| `textAnimationOverlap` | `Float` | `0.35` | The overlap factor for text animations. |
| `textAnimationWritingStyle` | `Enum` | `"Line"` | The writing style for text animations (e.g., by character, by word). Possible values: `"Block"`, `"Line"`, `"Character"`, `"Word"` |

--- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Automate Workflows" description: "Automate repetitive editing tasks using CE.SDK’s headless APIs to generate assets at scale." platform: ios url: "https://img.ly/docs/cesdk/ios/automation-715209/" --- > This is one page of the CE.SDK iOS documentation.
For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Automate Workflows](https://img.ly/docs/cesdk/ios/automation-715209/) --- --- ## Related Pages - [Overview](https://img.ly/docs/cesdk/ios/automation/overview-34d971/) - Automate repetitive editing tasks using CE.SDK’s headless APIs to generate assets at scale. - [Batch Processing](https://img.ly/docs/cesdk/ios/automation/batch-processing-ab2d18/) - Documentation for Batch Processing - [Auto-Resize Blocks (Fill Parent & Percent Sizing)](https://img.ly/docs/cesdk/ios/automation/auto-resize-4c2d58/) - Make blocks automatically fill their parent or resize proportionally using percent-based sizing. Learn when to use absolute vs. percent sizing, and how to build predictable, responsive layouts for automation. - [Automate Design Generation](https://img.ly/docs/cesdk/ios/automation/design-generation-98a99e/) - Generate on-brand designs programmatically using templates, variables, and CE.SDK’s headless API. - [Multiple Image Generation](https://img.ly/docs/cesdk/ios/automation/multi-image-generation-2a0de4/) - Create many image variants from structured data by interpolating content into reusable design templates. --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Auto-Resize Blocks (Fill Parent & Percent Sizing)" description: "Make blocks automatically fill their parent or resize proportionally using percent-based sizing. Learn when to use absolute vs. 
percent sizing, and how to build predictable, responsive layouts for automation." platform: ios url: "https://img.ly/docs/cesdk/ios/automation/auto-resize-4c2d58/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Automate Workflows](https://img.ly/docs/cesdk/ios/automation-715209/) > [Auto-Resize](https://img.ly/docs/cesdk/ios/automation/auto-resize-4c2d58/) --- Sometimes you don’t want to hard-code widths and heights. You want elements that *just fit*. You need a background that always covers the page, an overlay that scales with its container, or a column that takes up exactly half the parent. CE.SDK supports this through **size modes** and **percent-based sizing**, plus a one-liner convenience API that makes a block **fill its parent**. You set size modes per axis and use `0.0…1.0` values to express percentages; you can then query **computed frame dimensions** once layout stabilizes. ### Why It Matters for Automation When you generate designs in bulk, you can’t manually correct layout differences for each image. **Percentage-based sizing** and `fillParent()` guarantee that every inserted asset or background automatically scales to the right dimensions, regardless of its original size or aspect ratio. This ensures **reliable layouts** and **predictable exports** in automated pipelines. ## What You’ll Learn - Make a block **fill its parent** in one line. This is perfect for backgrounds and overlays. - Use **percent size modes** for responsive, predictable layouts. - Read **computed** width and height after layout to verify results. - Switch between **absolute** and **percent** sizing modes at runtime. - Build a **responsive background** that always fits the page. 
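The percent rules described above reduce to simple arithmetic: a percent value is a fraction of the parent's dimension. The following sketch makes that explicit; the `SizeMode` enum and `resolvedDimension` helper are hypothetical stand-ins for illustration, not CE.SDK APIs:

```swift
// Hypothetical mirror of the engine's per-axis size modes, for illustration only.
enum SizeMode {
    case absolute // value is in design units
    case percent  // value is a fraction of the parent (0.0 ... 1.0)
}

// Resolve one axis of a block's size from its mode, value, and parent dimension.
func resolvedDimension(value: Double, mode: SizeMode, parent: Double) -> Double {
    switch mode {
    case .absolute: return value
    case .percent: return value * parent
    }
}

// A 50%-wide child of an 800pt-wide page resolves to 400pt.
let width = resolvedDimension(value: 0.5, mode: .percent, parent: 800) // 400.0
```

In the engine, this resolution happens during layout, which is why computed frame values only become valid after a layout pass.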
## When You’ll Use It

- Full-bleed **background images** that cover the page.
- **Responsive overlays** and watermarks that track the parent’s size.
- **Adaptive layouts** across iPhone, iPad, and Mac (Catalyst).
- **Automation workflows** that replace assets of different sizes without breaking layout consistency.

## Fill the Parent (One-Liner)

The simplest way to auto-resize is to ask the engine to make a block fill its parent:

```swift
// Make 'block' fill its parent container (resizes & positions).
try engine.block.fillParent(block)
```

CE.SDK resizes and positions the block, resets crop values if applicable, and switches the content fill mode to `.cover` if needed to avoid invalid crops.

**Good for:** Page backgrounds, edge-to-edge color panels, full-page masks.

## Percent-Based Sizing (Responsive Layouts)

For finer control, switch the size modes for width and height to `.percent`, then assign values from `0.0 ... 1.0`:

```swift
// 100% width & height (fill parent)
try engine.block.setWidthMode(block, mode: .percent)
try engine.block.setHeightMode(block, mode: .percent)
try engine.block.setWidth(block, value: 1.0)
try engine.block.setHeight(block, value: 1.0)
```

In percent mode, `1.0` means 100% of the parent. Use:

- `.absolute` for fixed-size elements.
- `.auto` when the content determines its own size.

## Partial Fill Examples

```swift
// 50% width, 100% height (e.g., a left column)
try engine.block.setWidthMode(block, mode: .percent)
try engine.block.setHeightMode(block, mode: .percent)
try engine.block.setWidth(block, value: 0.5)
try engine.block.setHeight(block, value: 1.0)
```

Great for split layouts, sidebars, or variable-width panels.

## Reading Computed Dimensions

After the engine completes a layout pass, you can read **computed** dimensions with the frame accessors:

```swift
let w = try engine.block.getFrameWidth(block)
let h = try engine.block.getFrameHeight(block)
```

These values are available **after** layout updates.
If you query immediately after changes, you might see stale values. Yield a tick or wait for engine-driven updates before reading.

> **Note:** In Swift, using `await Task.yield()` or deferring the read to the next run loop often suffices for demos.

## Switching Between Absolute & Percent

You can toggle sizing modes at runtime to move between fixed and responsive layouts:

```swift
// Fixed (absolute) sizing
try engine.block.setWidthMode(block, mode: .absolute)
try engine.block.setHeightMode(block, mode: .absolute)
try engine.block.setWidth(block, value: 400.0)
try engine.block.setHeight(block, value: 300.0)

// Back to responsive sizing
try engine.block.setWidthMode(block, mode: .percent)
try engine.block.setHeightMode(block, mode: .percent)
try engine.block.setWidth(block, value: 0.75) // 75% width
try engine.block.setHeight(block, value: 1.0) // 100% height
```

Use **absolute** for fixed-size exports or print layouts, and **percent** for responsive layouts or template-based automation.

## Practical Example: Responsive Background

Here’s a common pattern.
Create a background that always fills the page:

```swift
// 1) Create a graphic block and set a placeholder color fill
let bg = try engine.block.create(.graphic)
let shape = try engine.block.createShape(.rect)
try engine.block.setShape(bg, shape: shape)
let solidColor = try engine.block.createFill(.color)
try engine.block.setFill(bg, fill: solidColor)
let rgbaGreen = Color.rgba(r: 0.5, g: 1, b: 0.5, a: 1)
try engine.block.setColor(solidColor, property: "fill/color/value", color: rgbaGreen)

// 2) Append to the page and send behind other content
try engine.block.appendChild(to: page, child: bg)
try engine.block.sendToBack(bg)

// 3) Either the one-liner:
try engine.block.fillParent(bg)

// 4) Or the percent-based equivalent:
try engine.block.setWidthMode(bg, mode: .percent)
try engine.block.setHeightMode(bg, mode: .percent)
try engine.block.setWidth(bg, value: 1.0)
try engine.block.setHeight(bg, value: 1.0)
```

The percent-based alternative mirrors the behavior of `fillParent` if your content and crop are already valid. The `fillParent` method guarantees coverage and sets a safe fill mode automatically.

## Automation Example: Batch Image Replacement

This automation scenario generates name tags:

- Each tag has a dedicated container, named `hero-frame`, that displays the person’s photo.
- The code prepares `hero-frame` **once** so it always fills its parent.
- It replaces the image fill inside `hero-frame` during batch generation.
- Layout stays stable regardless of the photo’s size and aspect ratio.
### Prepare Once

```swift
// One-time setup when building the template/scene
let heroFrame = try engine.block.create(.graphic)
let rect = try engine.block.createShape(.rect)
try engine.block.setShape(heroFrame, shape: rect)

// Give it a name for easy lookup in later steps / debugging
try engine.block.setString(heroFrame, key: "name", value: "hero-frame")

// Attach an image fill now (can be a placeholder)
let heroFrameImageFill = try engine.block.createFill(.image)
try engine.block.setFill(heroFrame, fill: heroFrameImageFill)

// Place it in the nametag layout once (e.g., inside a card group or page)
try engine.block.appendChild(to: faceGroup, child: heroFrame)

// Make the container auto-resize to its parent so the photo always fits
try engine.block.fillParent(heroFrame)
```

### Update During the Batch

At a later time, when the batch runs, it:

- Gets a reference to the `heroFrame` block.
- Calls a function to update the image fill during each pass.

```swift
func replaceHeroPhotoURL(engine: Engine, hero: DesignBlockID, with url: String) {
    do {
        // Try to get the current fill and swap the image on the same object
        let fill = try engine.block.getFill(hero)
        try engine.block.setString(
            fill,
            property: "fill/image/fileURI",
            value: url
        )
    } catch {
        // If no fill exists yet, create and attach one
        print("No fill found on hero-frame; creating new fill.")
        do {
            let newFill = try engine.block.createFill(.image)
            try engine.block.setString(
                newFill,
                property: "fill/image/fileURI",
                value: url
            )
            try engine.block.setFill(hero, fill: newFill)
        } catch {
            print("Failed to attach new fill: \(error)")
        }
    }
}
```

Because `heroFrame` used `fillParent`, every image conforms to its parent’s size, ensuring layouts remain consistent. This approach is ideal for:

- Product catalogs
- User-generated templates
- Any workflow where image dimensions vary widely.
![Example of batch image replacement for nametags](assets/resize-example-ios-160-0.png)

In the preceding diagram, the three input images are of **different sizes**. The engine **crops and resizes** them to fill the placeholder during the batch.

## Platform Notes & Limitations

- **Computed frame values are asynchronous.** Read them after a layout cycle. If you set percent sizing and immediately call `getFrameWidth`, you may get the previous value.
- **Groups vs. children.** Percent sizing relates a child to *its direct parent*. Ensure your block is appended where you expect before reading frames.
- **Fills and crops.** `fillParent` may reset crop values and/or set the fill mode to `.cover` to avoid invalid states.

## Troubleshooting

| Symptom | Likely Cause | Fix |
| --- | --- | --- |
| Block doesn’t resize with parent | Width/height modes aren’t set to `.percent` | Set `setWidthMode(.percent)` / `setHeightMode(.percent)` and assign `0…1` values. |
| Computed width/height are `0` or stale | Reading before layout settled | Defer reads; yield a tick before `getFrameWidth/Height`. |
| Only one axis resizes | Only one axis set to `.percent` | Set both axes to `.percent` (or use `fillParent`). |
| Unexpected crop after fill | `fillParent` adjusted crop/fill mode | Use percent sizing manually if you need to preserve a crop. |
| Child ignores parent size | Wrong parent in hierarchy | Verify `appendChild` target and recheck computed frames after update. |

## Next Steps

Explore a code sample of percentage resizing and `fillParent` on [GitHub](https://github.com/imgly/cesdk-swift-examples/tree/v$UBQ_VERSION$/engine_guides_autoresize). You can use auto-resize as you create compositions and make responsive designs. Here are some other guides to explore to deepen your understanding.

- [Resize blocks (manual)](https://img.ly/docs/cesdk/ios/edit-image/transform/resize-407242/) — control dimensions interactively.
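The troubleshooting advice to "yield a tick" before reading computed frames can be sketched as a small generic helper. This is a hypothetical utility, not a CE.SDK API; in practice the closure would wrap a call like `engine.block.getFrameWidth(block)`:

```swift
// Defer a read by one cooperative "tick" so pending engine/layout work can
// run first, then evaluate the closure and return its value.
func readAfterTick<T>(_ read: () throws -> T) async rethrows -> T {
    await Task.yield()
    return try read()
}

// In practice the closure would call engine.block.getFrameWidth(block);
// a plain value stands in here.
let width = await readAfterTick { 400.0 }
```

`Task.yield()` is a cooperative hint, not a guarantee that layout has finished; for production code, prefer reacting to engine-driven update events where available.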
--- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Batch Processing" description: "Documentation for Batch Processing" platform: ios url: "https://img.ly/docs/cesdk/ios/automation/batch-processing-ab2d18/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Automate Workflows](https://img.ly/docs/cesdk/ios/automation-715209/) > [Batch Processing](https://img.ly/docs/cesdk/ios/automation/batch-processing-ab2d18/) --- Batch processing lets your app automatically generate scores of assets from a single design template. For example, you might create 100 personalized posters or social posts from a CSV file of names and photos, without opening the editor for each one. CE.SDK’s headless engine makes this possible entirely in Swift. This guide shows you how to do that in Swift for iOS, macOS, and Catalyst. You’ll learn how to load a saved design, substitute text and images, and export each variation as an asset file. The same techniques apply to more complex outputs like PDFs or videos. ## What You’ll Learn - How to start CE.SDK’s **headless engine** without a UI editor. - How to **load a template** from an archive and attach it to a new scene. - How to **replace variables and images** for each record in your data. - How to **export** each generated design as a common format like PNG, JPEG or PDF. 
## When You’ll Use This

Headless batch generation is ideal for tasks that need automation, not user interaction. Use it to mass-produce:

- Branded materials
- Social media graphics
- Dynamic thumbnails

Because you're not displaying the editor UI, it works equally well on iOS, macOS, and Catalyst.

## Headless Engine

At the center of CE.SDK is the `Engine`, a lightweight rendering system you can use without the prebuilt editors. It can run in the background, respond to async tasks, and render scenes directly to image data.

```swift
let engine = try await Engine(license: "")
```

For automation, you’ll typically create one `Engine` instance for the full batch run.

- **On mobile**, a single-engine, sequential approach is safest.
- **On more powerful hardware**, you can explore modest parallelism, as each instance of `Engine` is independent.

## Loading Templates

The template defines the design you’ll use for all generated images. You can:

1. Create a template in the CE.SDK editor.
2. Save it as an archive.
3. Add that archive to your app bundle under **Copy Bundle Resources** in Xcode, or host it somewhere with a valid `URL` for the batch to use.

```swift
static var archiveURL: URL {
    guard let url = Bundle.main.url(forResource: "Template", withExtension: "archive") else {
        fatalError("Missing Template.archive in bundle")
    }
    return url
}
```

**Archives** are self-contained: they include your layout, your text, and all linked assets. They’re ideal for predictable batch exports. You can also save templates as `String` types, but in that case the `URL` of every asset must resolve correctly at runtime.

Once loaded, always validate the structure before using it.

```swift
let blocks = try await engine.block.loadArchive(from: url)
if blocks.isEmpty { throw BatchError.invalidTemplate }
```

This ensures that missing or corrupt templates don’t interrupt your batch.
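The validation snippet above throws a `BatchError` that this guide doesn't define. A minimal sketch of such an error type follows (hypothetical; add whatever cases your pipeline needs):

```swift
// Illustrative error type for batch-pipeline failures.
enum BatchError: Error {
    case invalidTemplate          // archive loaded, but contained no blocks
    case missingVariable(String)  // a record lacked a key the template expects
}

// Guard used after loadArchive(from:): an empty block list means the
// template is unusable, so fail fast before the batch starts.
func validate(blockCount: Int) throws {
    if blockCount == 0 { throw BatchError.invalidTemplate }
}
```

Failing fast here is cheaper than discovering a corrupt template halfway through a hundred-record export run.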
`loadArchive(from:)` returns the blocks for your design, which you then attach to a page so the engine can render, modify, and export it. If you want the archive to become the `.scene`, use the `loadArchive(from:)` version in the `scene` API.

```swift
let scene = try await engine.scene.loadArchive(from: url)
```

## Supplying Data from JSON

Every batch needs a list of records. Each record holds the values to apply to the template. A common pattern is:

1. Store them as a JSON array.
2. Decode them during the batch.

A record might have these properties:

```swift
struct Record: Codable, Hashable {
    var id: String
    var variables: [String: String]
    var outputFileName: String
    var images: [String: String]? // optional blockName → bundled image name
}
```

Then decode any JSON using a standard pattern.

```swift
func loadRecords() -> [Record] {
    guard let url = Bundle.main.url(forResource: "records", withExtension: "json"),
          let data = try? Data(contentsOf: url) else {
        return []
    }
    return (try? JSONDecoder().decode([Record].self, from: data)) ?? []
}
```

Example `records.json`:

```json
[
  {
    "id": "001",
    "variables": { "name": "Ruth", "tagline": "Ship great apps" },
    "outputFileName": "badge-ruth"
  },
  {
    "id": "002",
    "variables": { "name": "Chris", "tagline": "Move fast, polish later" },
    "outputFileName": "badge-chris"
  }
]
```

In a production environment, you’ll load data from an API or database instead of the bundle. If your dataset is large, consider streaming it in chunks instead of loading everything at once.

## Templates and Variables

Templates often include placeholders, or variables, that you can update with real data at runtime. In CE.SDK (Swift), template variables follow a key/value pattern and are **always stored as strings**. Your app can convert them into types like numbers or colors when needed. For text blocks, CE.SDK automatically matches placeholders in the template with variable names.
A text block that displays `{{username}}` exposes the variable `username`, which you can replace with a person’s name before exporting.

```swift
// All variables are set via (key: String, value: String)
try engine.variable.set(key: "name", value: "Chris") // text
try engine.variable.set(key: "price", value: "9.99") // number encoded as string
try engine.variable.set(key: "brandColor", value: "#FFD60A") // color as hex string
try engine.variable.set(key: "isFeatured", value: "true") // boolean as "true" / "false"
try engine.variable.set(key: "imageURL", value: record.imageURL.absoluteString) // URL as string
```

Discover the available variable keys at runtime to validate a template using:

```swift
let keys = engine.variable.findAll()
// assert or log missing keys before a long batch run
```

## Applying Data to the Template

Once the engine loads the template, you can fill in variables. These correspond to the placeholders you set in your CE.SDK scene, like `{{name}}` or `{{tagline}}`.

```swift
@MainActor
func applyVariables(_ engine: Engine, values: [String: String]) throws {
    for (key, value) in values {
        try engine.variable.set(key: key, value: value)
    }
}
```

You can also swap out placeholder images at runtime. The simplest method is to find the block by its name and update its image fill.

```swift
let matches = try engine.block.find(byName: "productImage")
if let imageBlock = matches.first {
    let fill = try engine.block.getFill(imageBlock)
    try engine.block.setString(fill, property: "fill/image/fileURI", value: record.imageURL.absoluteString)
    try engine.block.setFill(imageBlock, fill: fill)
    try engine.block.setKind(imageBlock, kind: "image")
}
```

This snippet looks up a block named `productImage` and replaces its image fill with the URL of the new image.

> **Note:** Using block names keeps your automation readable and less fragile than referencing IDs.
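The "assert or log missing keys" advice above is pure set logic once you have both key lists. A sketch, with hard-coded placeholder arrays standing in for the result of `engine.variable.findAll()` and a record's variables:

```swift
// Return the template keys that a record fails to provide, so the batch
// can log or abort before wasting time on hundreds of exports.
func missingKeys(templateKeys: [String], record: [String: String]) -> [String] {
    templateKeys.filter { record[$0] == nil }.sorted()
}

let templateKeys = ["name", "tagline"]   // in practice: engine.variable.findAll()
let record = ["name": "Ruth"]            // one record's variables
print(missingKeys(templateKeys: templateKeys, record: record)) // ["tagline"]
```

Running this check once per record before exporting turns silent blank placeholders into an actionable log line.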
## Create Thumbnails

You can generate previews by exporting a scaled version of each result:

```swift
func exportThumbnail(from engine: Engine, fileName: String, scale: CGFloat = 0.25) async throws -> URL {
    let dir = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask).first!
    let thumbURL = dir.appendingPathComponent("thumb_\(fileName).jpg")
    guard let root = try engine.scene.get() else { throw BatchError.invalidTemplate }
    let width = try engine.block.getFrameWidth(root) * Float(scale)
    let height = try engine.block.getFrameHeight(root) * Float(scale)
    let options = ExportOptions(jpegQuality: 0.7, targetWidth: width, targetHeight: height)
    let exportData = try await engine.block.export(root, mimeType: .jpeg, options: options)
    try exportData.write(to: thumbURL)
    return thumbURL
}
```

## Exporting to Multiple Formats

Exports can target different output types. Just switch the mime type you pass:

```swift
let pngData = try await engine.block.export(page, mimeType: .png, options: ExportOptions(targetHeight: 1080))
let pdfData = try await engine.block.export(page, mimeType: .pdf)
```

| Format | MimeType | Typical Use |
| --- | --- | --- |
| PNG | image/png | Lossless images with transparency |
| JPEG | image/jpeg | Photos and smaller files |
| PDF | application/pdf | Printable designs |
| MP4 | video/mp4 | Animated or timed templates |

Use an `ExportOptions` struct to tune the output quality, size, and other properties of the export. You can find the details in the [Export](https://img.ly/docs/cesdk/ios/export-save-publish/export-82f968/) guides. If you need multiple formats at once, run several export calls back-to-back using the same engine and page.

## Managing Memory and Resources

Each export involves GPU textures, image buffers, and temporary files. To keep your app responsive:

- Reuse a single engine for sequential jobs.
- Clean up temporary directories between batches.

## Performance Tuning Checklist

- Use JPEG quality 0.8–0.9 to balance file size and speed.
- Keep templates plain. Avoid unnecessary effects or large images.
- Chunk data into smaller groups for large datasets.
- Limit concurrency to 2–3 parallel tasks.
- Profile on the lowest-end device you support.

## Error Handling and Retries

Batch jobs can fail due to network hiccups or invalid data. Use Swift’s do/catch blocks to retry a few times before giving up.

```swift
for record in records {
    var attempts = 0
    while attempts < 3 {
        do {
            try await exportRecord(record)
            break
        } catch {
            attempts += 1
            // Back off a little longer after each failed attempt.
            try? await Task.sleep(nanoseconds: UInt64(Double(attempts) * 0.5e9))
        }
    }
}
```

You can also log each attempt for easier debugging.

## Logging and Monitoring Progress

Adding logging helps track how long each export takes:

```swift
import os.log

let logger = Logger(subsystem: "com.example.batch", category: "automation")
logger.info("Exported \(record.outputFileName, privacy: .public)")
```

Wrap your entire run in timestamps using standard Swift `Date` or `DispatchTime` to measure throughput and display progress in your SwiftUI interface.

## Batch Workflow

Batch processing isn’t limited to mobile apps. The same logic can run on backends or web services using CE.SDK for Web or Node. If your workload scales beyond device limits, consider:

1. Migrating automation to a server workflow.
2. Sending results back to the app.

The example batch process below calls `processRecord(_:)` for each record in the data set. The record is processed by:

1. loading the template
2. setting variables
3. replacing images
4.
exporting the result

```swift
@MainActor
func processRecord(_ record: Record) async throws -> URL {
    let engine = try await EngineFactory.make()
    let scene = try await engine.scene.loadArchive(from: Template.archiveURL)
    try applyVariables(engine, values: record.variables)
    if let imgs = record.images {
        for (blockName, fileName) in imgs {
            try replaceNamedImage(engine, name: blockName, fileName: fileName)
        }
    }
    let outURL = FileManager.default
        .urls(for: .documentDirectory, in: .userDomainMask).first!
        .appendingPathComponent("\(record.outputFileName).jpg")
    try await Exporter.exportJPEG(engine, sceneOrPage: scene, to: outURL, quality: 0.9)
    return outURL
}

struct EngineFactory {
    static func make() async throws -> Engine {
        let engine = try await Engine(license: secrets.licenseKey)
        return engine
    }
}

func replaceNamedImage(_ engine: Engine, name: String, fileName: String) throws {
    guard let fileURL = Bundle.main.url(forResource: fileName, withExtension: nil) else { return }
    if let block = try engine.block.find(byName: name).first {
        // Update the block's image fill via its fileURI
        let fill = try engine.block.getFill(block)
        try engine.block.setString(fill, property: "fill/image/fileURI", value: fileURL.absoluteString)
        try engine.block.setFill(block, fill: fill)
    }
}

enum Exporter {
    @MainActor
    static func exportJPEG(_ engine: Engine, sceneOrPage: DesignBlockID, to url: URL, quality: Float = 0.9) async throws {
        let options = ExportOptions(jpegQuality: quality)
        let exportedData = try await engine.block.export(sceneOrPage, mimeType: .jpeg, options: options)
        try exportedData.write(to: url)
    }
}
```

Use a small concurrency limit for parallel runs:

```swift
@MainActor
func runBatchParallel(records: [Record], maxConcurrent: Int = 3) async {
    await withTaskGroup(of: Void.self) { group in
        var iterator = records.makeIterator()
        // Seed the group with up to maxConcurrent tasks...
        for _ in 0..<maxConcurrent {
            if let record = iterator.next() {
                group.addTask { _ = try? await processRecord(record) }
            }
        }
        // ...then start one new task each time a running task finishes.
        for await _ in group {
            if let record = iterator.next() {
                group.addTask { _ = try? await processRecord(record) }
            }
        }
    }
}
```

--- title: "Automate Design Generation" description: "Generate on-brand designs programmatically using templates, variables, and CE.SDK’s headless API."
platform: ios url: "https://img.ly/docs/cesdk/ios/automation/design-generation-98a99e/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Automate Workflows](https://img.ly/docs/cesdk/ios/automation-715209/) > [Design Generation](https://img.ly/docs/cesdk/ios/automation/design-generation-98a99e/) ---

Automate on-brand output at scale. Feed data (JSON, APIs, user input) into CE.SDK templates and export print- or web-ready assets. No manual editing required. This guide shows the Swift/iOS workflow using the headless engine.

## What You’ll Learn

- Load a template scene from a bundle or URL.
- Populate **text variables** via the variable API.
- Replace **images** by updating a block’s image fill.
- Export to PDF/PNG/JPEG and batch the whole process.

## When to Use It

Use automation when you need to mass-produce personalized postcards, product cards, certificates, multi-locale variants, or nightly batch refreshes, either fully headless or hybrid with UI confirmation.

## Load a Template

You can host templated scenes on your CDN or bundle them with the app. Once you start the engine:

1. Load the template you want to populate.
2. Display the template with UI, use it headless, or mix and match.

```swift
import IMGLYEngine

let engine = try await Engine(license: "")
let templateURL = URL(string: "https://cdn.img.ly/assets/demo/v3/ly.img.template/templates/cesdk_postcard_2.scene")!
try await engine.scene.load(from: templateURL)
```

## Inject Dynamic Data

Templates expose placeholders for dynamic content. Text tokens, such as `{{first_name}}`, are set through the variable API. This is typically a straight mapping from your data model to the variable keys you’ve designed into the scene.
```swift
try engine.variable.set(key: "first_name", value: "John")
try engine.variable.set(key: "last_name", value: "DuPont")
try engine.variable.set(key: "address", value: "123 Main St.")
try engine.variable.set(key: "city", value: "Anytown")
```

If you prefer to discover what the template expects:

1. List the keys.
2. Fill them from your data store.

```swift
let keys = engine.variable.findAll()
// assert or log missing keys before a long batch run
```

This approach scales well for batch jobs and minimizes key-mismatch errors.

## Replace Images via Image Fill

Image placeholders in templates are just graphic blocks that use an image fill. To swap the picture, point the fill to your asset’s URL. In production, you’ll usually:

1. Target a specific block (via your own metadata or naming convention).
2. Set the file URI.

```swift
let graphic = try engine.block.find(byType: .graphic).first!
let imageFill = try engine.block.createFill(.image)
try engine.block.setFill(graphic, fill: imageFill)
try engine.block.setString(
    imageFill,
    property: "fill/image/fileURI",
    value: "https://cdn.example.com/assets/photo_001.jpg"
)
```

This property path sets the image file used by the fill. This is ideal for server-hosted libraries or results fetched from your API.

## Export the Final Design

After the template is populated, export the scene to your desired format (PDF for print, PNG/JPEG for web). In a batch flow, you’ll typically:

1. Export.
2. Store or upload.
3. Loop to the next record.

```swift
guard let scene = try engine.scene.get() else { fatalError("No scene") }
let pdfData = try await engine.block.export(scene, mimeType: .pdf)
let url = FileManager.default.temporaryDirectory.appendingPathComponent("Postcard.pdf")
try pdfData.write(to: url)
```

## Batch It

Automation shines when you repeat the same steps for many records. You can:

1. Load the template **once**.
2. Map your model into variables.
3. (Optional) Swap imagery.
4. Export.
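The "map your model into variables" step can be written as a pure function, which keeps the mapping testable and in one place. A sketch, using the `Recipient` model from the batch loop below (only the string fields are mapped; the photo is handled separately via the image fill):

```swift
import Foundation

struct Recipient {
    let firstName: String
    let lastName: String
    let address: String
    let city: String
    let photoURL: URL
}

// Flatten the typed model into the string pairs the variable API expects.
// Keys must match the tokens designed into the template scene.
func variables(for r: Recipient) -> [String: String] {
    [
        "first_name": r.firstName,
        "last_name": r.lastName,
        "address": r.address,
        "city": r.city,
    ]
}

// Each pair is then applied with engine.variable.set(key:value:).
```

Centralizing the mapping this way means a renamed template token only needs one code change.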
You can either:

- Run the preceding process completely headless.
- Pause between steps to let a user approve variants in the UI (if your workflow calls for it).

```swift
struct Recipient {
    let firstName: String
    let lastName: String
    let address: String
    let city: String
    let photoURL: URL
}

for recipient in recipients {
    try engine.variable.set(key: "first_name", value: recipient.firstName)
    try engine.variable.set(key: "last_name", value: recipient.lastName)
    try engine.variable.set(key: "address", value: recipient.address)
    try engine.variable.set(key: "city", value: recipient.city)

    let graphic = try engine.block.find(byType: .graphic).first!
    let imageFill = try engine.block.createFill(.image)
    try engine.block.setFill(graphic, fill: imageFill)
    try engine.block.setString(imageFill, property: "fill/image/fileURI", value: recipient.photoURL.absoluteString)

    _ = try await engine.block.export(try engine.scene.get()!, mimeType: .pdf)
}
```

## Troubleshooting

**❌ Text didn’t update**: Confirm variable names match the template’s tokens exactly; enumerating keys first helps prevent mismatches.

**❌ Image didn’t change**: Ensure you’re setting the image file on an image fill and that the fill is applied to the target block.

**❌ Export is empty**: Verify the scene is loaded and that you’re exporting the correct node (usually the scene).

**❌ Print colors look off**: Prepare templates with appropriate print settings and export to PDF for print workflows.

## Next Steps

Now that you’ve seen the general workflow for design generation, explore some of these topics to fine-tune your projects.

- [Create Templates](https://img.ly/docs/cesdk/ios/create-templates/overview-4ebe30/) – design for automation and add variables/placeholders.
- [Automate Workflows](https://img.ly/docs/cesdk/ios/automation-715209/) – patterns for client/server and hybrid flows.
- [Export Overview](https://img.ly/docs/cesdk/ios/export-save-publish/export-82f968/) – formats, presets, and print-ready output.
--- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Multiple Image Generation" description: "Create many image variants from structured data by interpolating content into reusable design templates." platform: ios url: "https://img.ly/docs/cesdk/ios/automation/multi-image-generation-2a0de4/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Automate Workflows](https://img.ly/docs/cesdk/ios/automation-715209/) > [Multiple Image Generation](https://img.ly/docs/cesdk/ios/automation/multi-image-generation-2a0de4/) --- Generate image variants, such as square, portrait, or landscape layouts, from a single data record using the CreativeEditor SDK’s Engine API. This pattern lets you populate templates programmatically with text, images, and colors to create consistent, on‑brand designs across all formats. ## What You’ll Learn - Load multiple templates into CE.SDK and populate them with structured data. - Replace text and image placeholders dynamically using variables and named blocks. - Apply consistent brand color themes across scenes. - Export each variant as PNG, JPEG, or PDF. - Build a SwiftUI preview for the generated images. ## When to Use It Use multi‑image generation when a single record (like a restaurant listing or product) needs to produce multiple layout variants. 
For larger datasets, where many records each generate many images, refer to the [Batch Processing](https://img.ly/docs/cesdk/ios/automation/batch-processing-ab2d18/) guide.

## Core Concepts

**Templates and Instances**: Templates define reusable layout and placeholders. An instance is a populated version with specific data. Use `scene.saveToString()` to serialize a template and `scene.load(from:)` to load it for processing.

**Variables for Dynamic Text**: Define variables in your templates for fields like `RestaurantName` or `Rating`. Set them at runtime with `engine.variable.setString(name:value:)`. Use `engine.variable.findAll()` to verify available variable names.

**Named Blocks for Image Replacement**: Name your image placeholders (for example, `RestaurantImage`, `Logo`). Retrieve them with `engine.block.findByName()`, access the fill with `getFill()`, then update its source URI using `setString(..., property: "fill/image/imageFileURI")`. Always reset the crop after replacing an image fill for proper framing.

**Brand and Conditional Styling**: Use predictable block naming for elements such as star ratings. Apply color changes programmatically with `setTextColor` or `setColor` to visualize rating or brand status.

**Sequential Template Processing**: Process each variant one at a time to reduce memory pressure and simplify export tracking.

## Prerequisites

- CE.SDK for iOS integrated through Swift Package Manager.
- A valid license key.
- Templates archived as `.scene` or `.archive` files.
- Template variables and named blocks prepared for population.

## Initialize the Engine

```swift
import IMGLYEngine
import IMGLYCore

@MainActor
func makeEngine() async throws -> Engine {
  let engine = try Engine(license: "")
  try await engine.addDefaultAssetSources()
  return engine
}
```

## Define Your Data Model

Your data model can use proper types for its fields. When you insert values into the templates, you will often need to convert them to strings.
```swift
struct Restaurant: Identifiable, Sendable {
  let id: UUID
  let name: String
  let rating: Double
  let reviewCount: Int
  let imageURL: String
  let logoURL: String
  let brandPrimary: String
  let brandSecondary: String
}
```

This model provides a data record for the example code below.

## Populate Templates and Export Variants

Use one template per format, such as:

- square
- portrait
- landscape

Populate the templates sequentially.

```swift
@MainActor
func generateVariants(engine: Engine, for restaurant: Restaurant) async throws -> [URL] {
  let templates = [
    "restaurant_square",
    "restaurant_portrait",
    "restaurant_landscape"
  ].compactMap { Bundle.main.url(forResource: $0, withExtension: "scene") }

  var results: [URL] = []

  for template in templates {
    _ = try await engine.scene.load(from: template)

    // Set text variables
    try engine.variable.setString("RestaurantName", value: restaurant.name)
    try engine.variable.setString("Rating", value: String(format: "%.1f ★", restaurant.rating))
    try engine.variable.setString("ReviewCount", value: "\(restaurant.reviewCount)")

    // Replace images
    try replaceImage(engine: engine, name: "RestaurantImage", with: restaurant.imageURL)
    try replaceImage(engine: engine, name: "Logo", with: restaurant.logoURL)

    // Apply brand theme
    try applyBrandTheme(engine: engine,
                        primary: Color.fromHex(restaurant.brandPrimary),
                        secondary: Color.fromHex(restaurant.brandSecondary))

    // Export variant
    let output = try await exportJPEG(engine: engine, name: outputName(for: restaurant, template: template))
    results.append(output)
  }
  return results
}
```

**Helper Functions**: The preceding code example uses some helper functions. These aren’t part of the CE.SDK. Possible implementations of the functions follow. The function for `Color.fromHex()` is at the end of the guide, as it’s used in another example as well.
```swift
private func replaceImage(engine: Engine, name: String, with uri: String) throws {
  if let block = engine.block.find(byName: name).first {
    let fill = try engine.block.getFill(block)
    try engine.block.setString(fill, property: "fill/image/imageFileURI", value: uri)
    // Reset the crop on the block that owns the fill so the new image is framed correctly.
    try engine.block.resetCrop(block)
  }
}

private func applyBrandTheme(engine: Engine, primary: Color, secondary: Color) throws {
  for block in try engine.block.findAll() {
    switch try engine.block.getType(block) {
    case "//ly.img.ubq/text":
      try engine.block.setTextColor(block, color: primary)
    case "//ly.img.ubq/graphic":
      if let fill = try? engine.block.getFill(block) {
        try engine.block.setColor(fill, property: "fill/color/value", color: secondary)
      }
    default:
      break
    }
  }
}

private func exportJPEG(engine: Engine, name: String) async throws -> URL {
  guard let page = try engine.block.find(byType: .page).first else {
    throw NSError(domain: "no-page", code: 1)
  }
  let data = try await engine.block.export(page, mimeType: .jpeg)
  let dir = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
  let url = dir.appendingPathComponent("\(name).jpg")
  try data.write(to: url, options: .atomic)
  return url
}
```

## Preview the Generated Variants

Use SwiftUI to display and share generated images.

```swift
struct VariantsGrid: View {
  let urls: [URL]

  var body: some View {
    ScrollView {
      LazyVGrid(columns: [GridItem(.adaptive(minimum: 160), spacing: 12)]) {
        ForEach(urls, id: \.self) { url in
          if let image = UIImage(contentsOfFile: url.path) {
            // ShareLink presents the system share sheet directly,
            // so no extra state or sheet modifier is needed.
            ShareLink(item: url) {
              Image(uiImage: image)
                .resizable().scaledToFit()
                .clipShape(RoundedRectangle(cornerRadius: 10))
                .shadow(radius: 2)
            }
          }
        }
      }.padding()
    }
  }
}
```

## Advanced Use Cases

**Conditional Content**: Show or hide elements based on data values, for example, color stars according to the rating.
```swift
func colorStars(engine: Engine, rating: Int, baseName: String = "Rating") throws {
  for index in 1...5 {
    guard let star = try engine.block.find(byName: "\(baseName)\(index)").first,
          let fill = try? engine.block.getFill(star) else { continue }
    let color = index <= rating ? Color.fromHex("#FFD60A") : Color.fromHex("#CCCCCC")
    try engine.block.setColor(fill, property: "fill/color/value", color: color)
  }
}
```

**Custom Assets**: Add your own logos or fonts by registering a custom asset source. See [Custom Asset Sources](https://img.ly/docs/cesdk/ios/import-media/asset-library-65d6c4/) for setup examples.

**Adopter Mode Editing**: Allow users to open the generated design in the editor UI for minor edits. Serialize the populated scene with `scene.saveToString()` and load it into the Design Editor configured for [restricted content](https://img.ly/docs/cesdk/ios/create-templates/lock-131489/) editing.

## Troubleshooting

**❌ Variables not updating**:

- Verify variable names in both template and code.

**❌ Images missing**:

- Confirm local path or remote URL points to a valid image.

**❌ Colors incorrect**:

- Check block type before applying color.

**❌ Memory spikes**:

- Process templates sequentially.

**❌ Export size unexpected**:

- Confirm consistent `page`, `scene`, and `block` dimensions across templates.

**Debugging Tips**:

- Print variable names using `engine.variable.findAll()`
- Log block names with `engine.block.getName(id)`
- Test with one minimal template before expanding

## Next Steps

Multi-image generation is one way to automate your workflow. Some other ways the CE.SDK can automate are in these guides:

- [Batch Processing](https://img.ly/docs/cesdk/ios/automation/batch-processing-ab2d18/) lets you process many data records at once.
- Adapt layouts across aspect ratios using [auto resize](https://img.ly/docs/cesdk/ios/automation/auto-resize-4c2d58/).
- Explore [export formats](https://img.ly/docs/cesdk/ios/export-save-publish/export-82f968/) and settings.
- Add branded fonts, logos, and graphics by creating [custom asset sources](https://img.ly/docs/cesdk/ios/import-media/overview-84bb23/).

---

## Utility Extension

Add this helper to convert hex strings into CE.SDK `Color` values. Use it in the guide examples as `Color.fromHex("#FFD60A")` or `Color.fromHex(restaurant.brandPrimary)`.

```swift
import IMGLYEngine

extension Color {
  /// Create a CE.SDK Color from a hex string like "#FFAA33" or "#FFAA33FF"
  static func fromHex(_ hex: String) -> Color {
    var hexString = hex.trimmingCharacters(in: .whitespacesAndNewlines)
      .replacingOccurrences(of: "#", with: "")
    if hexString.count == 6 { hexString.append("FF") } // add alpha if missing
    var hexValue: UInt64 = 0
    Scanner(string: hexString).scanHexInt64(&hexValue)
    let r = Float((hexValue & 0xFF000000) >> 24) / 255.0
    let g = Float((hexValue & 0x00FF0000) >> 16) / 255.0
    let b = Float((hexValue & 0x0000FF00) >> 8) / 255.0
    let a = Float(hexValue & 0x000000FF) / 255.0
    return Color.rgba(r: r, g: g, b: b, a: a)
  }
}
```

---

## More Resources

- **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation
- **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs)
- **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples
- **[Support](mailto:support@img.ly)** - Contact IMG.LY support

---

---
title: "Overview"
description: "Automate repetitive editing tasks using CE.SDK’s headless APIs to generate assets at scale."
platform: ios
url: "https://img.ly/docs/cesdk/ios/automation/overview-34d971/"
---

> This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt).
**Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Automate Workflows](https://img.ly/docs/cesdk/ios/automation-715209/) > [Overview](https://img.ly/docs/cesdk/ios/automation/overview-34d971/)

---

Workflow automation with CreativeEditor SDK (CE.SDK) enables you to programmatically generate, manipulate, and export creative assets at scale. Whether you're creating thousands of localized ads, preparing platform-specific variants of a campaign, or populating print-ready templates with dynamic data, CE.SDK provides a flexible foundation for automation.

You can run automation entirely on the client, integrate it with your backend, or build hybrid “human-in-the-loop” workflows where users interact with partially automated scenes before export. The automation engine supports static pipelines, making it suitable for a wide range of publishing, e-commerce, and marketing applications. Video support will follow soon.

[Explore Demos](https://img.ly/showcases/cesdk?tags=ios) [Get Started](https://img.ly/docs/cesdk/ios/get-started/overview-e18f40/)

---

## More Resources

- **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation
- **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs)
- **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples
- **[Support](mailto:support@img.ly)** - Contact IMG.LY support

---

---
title: "Bundle Size"
description: "Understand CE.SDK’s engine and editor bundle sizes and how they affect your mobile app’s download footprint."
platform: ios
url: "https://img.ly/docs/cesdk/ios/bundle-size-df9210/"
---

> This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt).
**Navigation:** [Compatibility & Security](https://img.ly/docs/cesdk/ios/compatibility-fef719/) > [Bundle Size](https://img.ly/docs/cesdk/ios/bundle-size-df9210/)

---

## Engine Download Size

The `IMGLYEngine.xcframework` file, which is downloaded by Swift Package Manager or CocoaPods, has a compressed size exceeding 130 MB. However, it's essential to understand that this does not directly translate to an equivalent increase in your application's size. In fact, the framework itself will only add around **11.9 MB** to your app's download size, which is relatively small considering the rich feature set that IMGLYEngine provides. The actual impact on your app's size may vary depending on various factors, as discussed in the sections below.

## Mobile Editor and Mobile Camera Download Size

The mobile editor and mobile camera are part of the [IMGLYUI package](https://github.com/imgly/IMGLYUI-swift) built on top of the [IMGLYEngine](https://github.com/imgly/IMGLYEngine-swift). This means that you can expect the download size of the mobile editor and camera to be around the size of the engine plus a few additional megabytes. The precise size increase may depend on the bundled assets (scene files, images, stickers); however, with the default resources you can expect it to be around **3.5 MB** plus the size of the engine.

## Assets

IMGLYEngine does not include any assets, such as scene files, images, or stickers. However, the engine does provide a convenient API for loading assets from your app's bundle or serving them from a remote location. The size of your assets will directly impact your app's size.

## Architectures

IMGLYEngine is designed to support the iOS platform and provides slices for both `x86_64` and `arm64` architectures. The `x86_64` architecture is specifically utilized for running apps within the iOS Simulator, whereas the `arm64` architecture is intended for executing apps on actual iOS devices.
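Because both slices ship in the framework, code that must behave differently per architecture can be guarded with Swift's compile-time `arch()` condition. A minimal sketch (the constant name is illustrative, not part of CE.SDK):

```swift
// arch(...) is evaluated at compile time, so only the branch matching the
// slice being built (arm64 device, or x86_64 simulator) is compiled in.
#if arch(arm64)
let compiledArchitecture = "arm64"
#elseif arch(x86_64)
let compiledArchitecture = "x86_64"
#else
let compiledArchitecture = "other"
#endif

print("Built for \(compiledArchitecture)")
```

This is useful, for example, for logging which slice a crash report came from or for simulator-only debug paths.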
## Debug symbols (dSYMs) Debug symbols, also known as dSYMs, are substantial in size but essential for comprehending crash logs and debugging your application. They establish a link between your app's binary code and the human-readable source code, enabling you to determine the cause of a crash. The `IMGLYEngine.xcframework` file includes debug symbols, which are primarily used for crash symbolication when uploading to external tools. Importantly, they won't negatively impact your app's size. --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Colors" description: "Manage color usage in your designs, from applying brand palettes to handling print and screen formats." platform: ios url: "https://img.ly/docs/cesdk/ios/colors-a9b79c/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Colors](https://img.ly/docs/cesdk/ios/colors-a9b79c/) --- --- ## Related Pages - [Overview](https://img.ly/docs/cesdk/ios/colors/overview-16a177/) - Manage color usage in your designs, from applying brand palettes to handling print and screen formats. - [Basics](https://img.ly/docs/cesdk/ios/colors/basics-307115/) - Learn how color is handled in CE.SDK and how to apply, modify, and manage it across elements. 
- [For Print](https://img.ly/docs/cesdk/ios/colors/for-print-59bc05/) - Use print-ready color models and settings for professional-quality, production-ready exports.
- [For Screen](https://img.ly/docs/cesdk/ios/colors/for-screen-1911f8/) - Use screen-oriented color settings for on-screen display and digital exports.
- [Apply Colors](https://img.ly/docs/cesdk/ios/colors/apply-2211e3/) - Apply solid colors to shapes, backgrounds, and other design elements.
- [Create a Color Palette](https://img.ly/docs/cesdk/ios/colors/create-color-palette-7012e0/) - Build reusable color palettes to maintain consistency and streamline user choices.
- [Color Conversion](https://img.ly/docs/cesdk/ios/colors/conversion-bcd82b/) - Convert between RGB, CMYK, and other color formats based on your project’s output requirements.

---

## More Resources

- **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation
- **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs)
- **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples
- **[Support](mailto:support@img.ly)** - Contact IMG.LY support

---

---
title: "Apply Colors"
description: "Apply solid colors to shapes, backgrounds, and other design elements."
platform: ios
url: "https://img.ly/docs/cesdk/ios/colors/apply-2211e3/"
---

> This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt).
**Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Colors](https://img.ly/docs/cesdk/ios/colors-a9b79c/) > [Apply Color](https://img.ly/docs/cesdk/ios/colors/apply-2211e3/) --- ```swift file=@cesdk_swift_examples/engine-guides-colors/Colors.swift reference-only import Foundation import IMGLYEngine @MainActor func colors(engine: Engine) async throws { let scene = try engine.scene.create() let page = try engine.block.create(.page) try engine.block.setWidth(page, value: 800) try engine.block.setHeight(page, value: 600) try engine.block.appendChild(to: scene, child: page) let block = try engine.block.create(.graphic) try engine.block.setShape(block, shape: engine.block.createShape(.rect)) try engine.block.setPositionX(block, value: 350) try engine.block.setPositionY(block, value: 400) try engine.block.setWidth(block, value: 100) try engine.block.setHeight(block, value: 100) let fill = try engine.block.createFill(.color) try engine.block.setFill(block, fill: fill) let rgbaBlue = Color.rgba(r: 0, g: 0, b: 1, a: 1) let cmykRed = Color.cmyk(c: 0, m: 1, y: 1, k: 0, tint: 1) let cmykPartialRed = Color.cmyk(c: 0, m: 1, y: 1, k: 0, tint: 0.5) engine.editor.setSpotColor(name: "Pink-Flamingo", r: 0.988, g: 0.455, b: 0.992) engine.editor.setSpotColor(name: "Yellow", c: 0, m: 0, y: 1, k: 0) let spotPinkFlamingo = Color.spot(name: "Pink-Flamingo", tint: 1.0, externalReference: "Crayola") let spotPartialYellow = Color.spot(name: "Yellow", tint: 0.3, externalReference: "") try engine.block.setColor(fill, property: "fill/color/value", color: rgbaBlue) try engine.block.setColor(fill, property: "fill/color/value", color: cmykRed) try engine.block.setColor(block, property: "stroke/color", color: cmykPartialRed) try engine.block.setColor(fill, property: "fill/color/value", color: spotPinkFlamingo) try engine.block.setColor(block, property: "dropShadow/color", color: spotPartialYellow) let cmykBlueConverted = try engine.editor.convertColorToColorSpace(color: 
rgbaBlue, colorSpace: .cmyk) let rgbaPinkFlamingoConverted = try engine.editor.convertColorToColorSpace( color: spotPinkFlamingo, colorSpace: .sRGB ) engine.editor.findAllSpotColors() // ["Crayola-Pink-Flamingo", "Yellow"] engine.editor.setSpotColor(name: "Yellow", c: 0.2, m: 0, y: 1, k: 0) try engine.editor.removeSpotColor(name: "Yellow") } ```

## Set up the scene

We first create a new scene with a graphic block that has a color fill.

```swift highlight-setup
let scene = try engine.scene.create()

let page = try engine.block.create(.page)
try engine.block.setWidth(page, value: 800)
try engine.block.setHeight(page, value: 600)
try engine.block.appendChild(to: scene, child: page)

let block = try engine.block.create(.graphic)
try engine.block.setShape(block, shape: engine.block.createShape(.rect))
try engine.block.setPositionX(block, value: 350)
try engine.block.setPositionY(block, value: 400)
try engine.block.setWidth(block, value: 100)
try engine.block.setHeight(block, value: 100)
let fill = try engine.block.createFill(.color)
try engine.block.setFill(block, fill: fill)
```

## Create colors

Here we instantiate a few colors in the RGB and CMYK color spaces. We also define two spot colors, one with an RGB approximation and another with a CMYK approximation. Note that a spot color can have both color space approximations.
```swift highlight-create-colors
let rgbaBlue = Color.rgba(r: 0, g: 0, b: 1, a: 1)
let cmykRed = Color.cmyk(c: 0, m: 1, y: 1, k: 0, tint: 1)
let cmykPartialRed = Color.cmyk(c: 0, m: 1, y: 1, k: 0, tint: 0.5)

engine.editor.setSpotColor(name: "Pink-Flamingo", r: 0.988, g: 0.455, b: 0.992)
engine.editor.setSpotColor(name: "Yellow", c: 0, m: 0, y: 1, k: 0)
let spotPinkFlamingo = Color.spot(name: "Pink-Flamingo", tint: 1.0, externalReference: "Crayola")
let spotPartialYellow = Color.spot(name: "Yellow", tint: 0.3, externalReference: "")
```

## Applying colors to a block

We can use the defined colors to modify certain properties of a fill or properties of a shape. Here we apply them to `'fill/color/value'`, `'stroke/color'`, and `'dropShadow/color'`.

```swift highlight-apply-colors
try engine.block.setColor(fill, property: "fill/color/value", color: rgbaBlue)
try engine.block.setColor(fill, property: "fill/color/value", color: cmykRed)
try engine.block.setColor(block, property: "stroke/color", color: cmykPartialRed)
try engine.block.setColor(fill, property: "fill/color/value", color: spotPinkFlamingo)
try engine.block.setColor(block, property: "dropShadow/color", color: spotPartialYellow)
```

## Converting colors

Using the utility function `convertColorToColorSpace`, we create a new color in the CMYK color space by converting the `rgbaBlue` color to CMYK. We also create a new color in the RGB color space by converting the `spotPinkFlamingo` color to RGB.

```swift highlight-convert-color
let cmykBlueConverted = try engine.editor.convertColorToColorSpace(color: rgbaBlue, colorSpace: .cmyk)
let rgbaPinkFlamingoConverted = try engine.editor.convertColorToColorSpace(
  color: spotPinkFlamingo,
  colorSpace: .sRGB
)
```

## Listing spot colors

This function returns the list of currently defined spot colors.
```swift highlight-find-spot engine.editor.findAllSpotColors() // ["Crayola-Pink-Flamingo", "Yellow"] ``` ## Redefine a spot color We can re-define the RGB and CMYK approximations of an already defined spot color. Doing so will change the rendered color of the blocks. We change it for the CMYK approximation of `'Yellow'` and make it a bit greenish. The properties that have `'Yellow'` as their spot color will change when re-rendered. ```swift highlight-change-spot engine.editor.setSpotColor(name: "Yellow", c: 0.2, m: 0, y: 1, k: 0) ``` ## Removing the definition of a spot color We can undefine a spot color. Doing so will make all the properties still referring to that spot color (`'Yellow'` in this case) use the default magenta RGB approximation. ```swift highlight-undefine-spot try engine.editor.removeSpotColor(name: "Yellow") ``` ## Full Code Here's the full code: ```swift import Foundation import IMGLYEngine @MainActor func colors(engine: Engine) async throws { let scene = try engine.scene.create() let page = try engine.block.create(.page) try engine.block.setWidth(page, value: 800) try engine.block.setHeight(page, value: 600) try engine.block.appendChild(to: scene, child: page) let block = try engine.block.create(.graphic) try engine.block.setShape(block, shape: engine.block.createShape(.rect)) try engine.block.setPositionX(block, value: 350) try engine.block.setPositionY(block, value: 400) try engine.block.setWidth(block, value: 100) try engine.block.setHeight(block, value: 100) let fill = try engine.block.createFill(.color) try engine.block.setFill(block, fill: fill) let rgbaBlue = Color.rgba(r: 0, g: 0, b: 1, a: 1) let cmykRed = Color.cmyk(c: 0, m: 1, y: 1, k: 0, tint: 1) let cmykPartialRed = Color.cmyk(c: 0, m: 1, y: 1, k: 0, tint: 0.5) engine.editor.setSpotColor(name: "Pink-Flamingo", r: 0.988, g: 0.455, b: 0.992) engine.editor.setSpotColor(name: "Yellow", c: 0, m: 0, y: 1, k: 0) let spotPinkFlamingo = Color.spot(name: "Pink-Flamingo", tint: 1.0, 
externalReference: "Crayola") let spotPartialYellow = Color.spot(name: "Yellow", tint: 0.3, externalReference: "") try engine.block.setColor(fill, property: "fill/color/value", color: rgbaBlue) try engine.block.setColor(fill, property: "fill/color/value", color: cmykRed) try engine.block.setColor(block, property: "stroke/color", color: cmykPartialRed) try engine.block.setColor(fill, property: "fill/color/value", color: spotPinkFlamingo) try engine.block.setColor(block, property: "dropShadow/color", color: spotPartialYellow) let cmykBlueConverted = try engine.editor.convertColorToColorSpace(color: rgbaBlue, colorSpace: .cmyk) let rgbaPinkFlamingoConverted = try engine.editor.convertColorToColorSpace( color: spotPinkFlamingo, colorSpace: .sRGB ) engine.editor.findAllSpotColors() // ["Crayola-Pink-Flamingo", "Yellow"] engine.editor.setSpotColor(name: "Yellow", c: 0.2, m: 0, y: 1, k: 0) try engine.editor.removeSpotColor(name: "Yellow") } ``` --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Basics" description: "Learn how color is handled in CE.SDK and how to apply, modify, and manage it across elements." platform: ios url: "https://img.ly/docs/cesdk/ios/colors/basics-307115/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). 
**Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Colors](https://img.ly/docs/cesdk/ios/colors-a9b79c/) > [Basics](https://img.ly/docs/cesdk/ios/colors/basics-307115/)

---

When specifying a color property, you can use one of three color spaces: [RGB](https://en.wikipedia.org/wiki/RGB_color_model), [CMYK](https://en.wikipedia.org/wiki/CMYK_color_model), and [spot color](https://en.wikipedia.org/wiki/Spot_color).

> **Note:** During export, only RGB and spot color values will be inserted into the
> resulting PDF. Any color specified in CMYK values will be converted to RGB
> using the standard conversion. Tint values will be retained in the alpha
> channel.

The following properties can be set with the function `setColor` and support all three color spaces:

- `'backgroundColor/color'`
- `'camera/clearColor'`
- `'dropShadow/color'`
- `'fill/color/value'`
- `'stroke/color'`

## RGB

RGB is the color space used when rendering a color to a screen. All values of `R`, `G`, and `B` must be between `0.0` and `1.0`. When using RGB, you typically also specify opacity or alpha as a value between `0.0` and `1.0`; the color is then referred to as `RGBA`. When an RGB color has an alpha value that is not `1.0`, it will be rendered to screen with corresponding transparency.

## CMYK

CMYK is the color space used when color printing. All values of `C`, `M`, `Y`, and `K` must be between `0.0` and `1.0`. When using CMYK, you can also specify a tint value between `0.0` and `1.0`. When a CMYK color has a tint value that is not `1.0`, it will be rendered to screen as if transparent over a white background. When rendering to screen, CMYK colors are first converted to RGB using a simple mathematical conversion. Currently, the same conversion happens when exporting a scene to a PDF file.

## Spot Color

Spot colors are typically used for special printers or other devices that understand how to use them.
Spot colors are defined primarily by their name, as that is the information the device will use to render. For the purpose of rendering a spot color to screen, it must be given either an RGB or a CMYK color approximation, or both. These approximations adhere to the same restrictions respectively described above. You can specify a tint as a value between `0.0` and `1.0`, which will be interpreted as opacity when rendering to screen. It is up to the special printer or device how to interpret the tint value. You can also specify an external reference, a string describing the origin of this spot color.

When rendering to screen, the spot color's RGB or CMYK approximation will be used, in that order of preference. When exporting a scene to a PDF file, spot colors will be saved as a [Separation Color Space](https://opensource.adobe.com/dc-acrobat-sdk-docs/pdfstandards/pdfreference1.6.pdf#G9.1850648).

Using a spot color is a two-step process:

1. Define a spot color with its name and color approximation(s) in the spot color registry.
2. Instantiate a spot color with its name, a tint, and an external reference.

The spot color registry allows you to:

- list the defined spot colors
- define a new spot color with a name and its RGB or CMYK approximation
- re-define an existing spot color's RGB or CMYK approximation
- retrieve the RGB or CMYK approximation of an already defined spot color
- remove a spot color from the list of defined spot colors

Multiple blocks and their properties can refer to the same spot color, and each can have a different tint and external reference.

> **Warning:** If a block's color property refers to an undefined spot color, the
> default color magenta with an RGB approximation of (1, 0, 1) will be used.

## Converting between colors

A utility function `convertColorToColorSpace` is provided to create a new color from an existing color and a new color space.
RGB and CMYK colors can be converted between each other and is done so using a simple mathematical conversion. Spot colors can be converted to RGB and CMYK simply by using the corresponding approximation. RGB and CMYK colors cannot be converted to spot colors. ## Custom Color Libraries You can configure CE.SDK with custom color libraries. More information is found [here](https://img.ly/docs/cesdk/ios/colors/create-color-palette-7012e0/). --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Color Conversion" description: "Convert between RGB, CMYK, and other color formats based on your project’s output requirements." platform: ios url: "https://img.ly/docs/cesdk/ios/colors/conversion-bcd82b/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Colors](https://img.ly/docs/cesdk/ios/colors-a9b79c/) > [Color Conversion](https://img.ly/docs/cesdk/ios/colors/conversion-bcd82b/) --- To ease implementing advanced color interfaces, you may rely on the engine to perform color conversions. Converts a color to the given color space. - `color`: The color to convert. - `colorSpace`: The color space to convert to. - Returns The converted color. ```swift // Convert a color let rgbaGreen = Color(cgColor: CGColor(red: 0, green: 1, blue: 0, alpha: 1))! 
let cmykGreen = try engine.editor.convertColorToColorSpace(color: rgbaGreen, colorSpace: .cmyk) ``` --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Create a Color Palette" description: "Build reusable color palettes to maintain consistency and streamline user choices." platform: ios url: "https://img.ly/docs/cesdk/ios/colors/create-color-palette-7012e0/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Colors](https://img.ly/docs/cesdk/ios/colors-a9b79c/) > [Create a Color Palette](https://img.ly/docs/cesdk/ios/colors/create-color-palette-7012e0/) --- ```swift file=@cesdk_swift_examples/editor-guides-configuration-color-palette/ColorPaletteEditorSolution.swift reference-only import IMGLYDesignEditor import SwiftUI struct ColorPaletteEditorSolution: View { let settings = EngineSettings(license: secrets.licenseKey, // pass nil for evaluation mode with watermark userID: "") var editor: some View { DesignEditor(settings) .imgly.colorPalette([ .init("Blue", .imgly.blue), .init("Green", .imgly.green), .init("Yellow", .imgly.yellow), .init("Red", .imgly.red), .init("Black", .imgly.black), .init("White", .imgly.white), .init("Gray", .imgly.gray), ]) } @State private var isPresented = false var body: some View { Button("Use the Editor") { isPresented = true } .fullScreenCover(isPresented: $isPresented) { ModalEditor { editor } } } } #Preview { 
ColorPaletteEditorSolution() } ``` In this example, we will show you how to configure color palettes for the mobile editor. The example is based on the `Design Editor`; however, it works exactly the same for all the other [solutions](https://img.ly/docs/cesdk/ios/prebuilt-solutions-d0ed07/). ## Modifiers After initializing an editor SwiftUI view, you can apply any SwiftUI *modifier* to customize it, just as you would for any other SwiftUI view. All public Swift `extension`s of existing types provided by IMG.LY, e.g., for the SwiftUI `View` protocol or for the `CGColor` class, are exposed in a separate `.imgly` property namespace. The color palette configuration to customize the editor is no exception to this rule and is implemented as a SwiftUI *modifier*. ```swift highlight-editor DesignEditor(settings) ``` - `colorPalette` - the color palette used for UI elements that contain predefined color options, e.g., for "Fill Color" or "Stroke Color". It expects an array of `NamedColor`s that are composed of a name, required for accessibility, and the actual `CGColor` to use. It should contain seven elements. Six of them are always shown. The seventh is only shown when a color property does not support a disabled state. This example shows the default configuration.
```swift highlight-colorPalette .imgly.colorPalette([ .init("Blue", .imgly.blue), .init("Green", .imgly.green), .init("Yellow", .imgly.yellow), .init("Red", .imgly.red), .init("Black", .imgly.black), .init("White", .imgly.white), .init("Gray", .imgly.gray), ]) ``` ## Full Code Here's the full code: ```swift import IMGLYDesignEditor import SwiftUI struct ColorPaletteEditorSolution: View { let settings = EngineSettings(license: secrets.licenseKey, userID: "") var editor: some View { DesignEditor(settings) .imgly.colorPalette([ .init("Blue", .imgly.blue), .init("Green", .imgly.green), .init("Yellow", .imgly.yellow), .init("Red", .imgly.red), .init("Black", .imgly.black), .init("White", .imgly.white), .init("Gray", .imgly.gray), ]) } @State private var isPresented = false var body: some View { Button("Use the Editor") { isPresented = true } .fullScreenCover(isPresented: $isPresented) { ModalEditor { editor } } } } #Preview { ColorPaletteEditorSolution() } ``` --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "For Print" description: "Use print-ready color models and settings for professional-quality, production-ready exports." platform: ios url: "https://img.ly/docs/cesdk/ios/colors/for-print-59bc05/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). 
**Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Colors](https://img.ly/docs/cesdk/ios/colors-a9b79c/) > [For Print](https://img.ly/docs/cesdk/ios/colors/for-print-59bc05/) --- --- ## Related Pages - [Spot Colors](https://img.ly/docs/cesdk/ios/colors/for-print/spot-c3a150/) - Learn how to define spot colors and set their color approximation in the CreativeEditor SDK. --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Spot Colors" description: "Learn how to define spot colors and set their color approximation in the CreativeEditor SDK." platform: ios url: "https://img.ly/docs/cesdk/ios/colors/for-print/spot-c3a150/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Colors](https://img.ly/docs/cesdk/ios/colors-a9b79c/) > [For Print](https://img.ly/docs/cesdk/ios/colors/for-print-59bc05/) > [Spot Colors](https://img.ly/docs/cesdk/ios/colors/for-print/spot-c3a150/) --- ```swift reference-only // Create a spot color with an RGB color approximation. engine.editor.setSpotColor(name: "Red", r: 1.0, g: 0.0, b: 0.0) // Create a spot color with a CMYK color approximation. // Add a CMYK approximation to the already defined 'Red' spot color. engine.editor.setSpotColor(name: "Yellow", c: 0.0, m: 0.0, y: 1.0, k: 0.0) engine.editor.setSpotColor(name: "Red", c: 0.0, m: 1.0, y: 1.0, k: 0.0) // List all defined spot colors. 
engine.editor.findAllSpotColors() // ["Red", "Yellow"] // Retrieve the RGB color approximation for a defined color. // The alpha value will always be 1.0. let rgbaSpotRed: RGBA = engine.editor.getSpotColor(name: "Red") // Retrieve the CMYK color approximation for a defined color. let cmykSpotRed: CMYK = engine.editor.getSpotColor(name: "Red") // Retrieving the approximation of an undefined spot color returns magenta. let cmykSpotUnknown: CMYK = engine.editor.getSpotColor(name: "Unknown") // Returns CMYK values for magenta. // Removes a spot color from the list of defined spot colors. try engine.editor.removeSpotColor(name: "Red") ``` In this example, we will show you how to use the [CreativeEditor SDK](https://img.ly/creative-sdk)'s CreativeEngine to manage spot colors in the `editor` API. ## Functions ```swift public func findAllSpotColors() -> [String] ``` Queries the names of all spot colors previously set with `setSpotColor`. - Returns: The names of set spot colors. ```swift public func getSpotColor(name: String) -> RGBA ``` Queries the RGB representation set for a spot color. If the value of the queried spot color has not been set yet, returns the default RGB representation (of magenta). The alpha value is always 1.0. - `name:`: The name of a spot color. - Returns: A result holding the four color components. ```swift public func getSpotColor(name: String) -> CMYK ``` Queries the CMYK representation set for a spot color. If the value of the queried spot color has not been set yet, returns the default CMYK representation (of magenta). - `name:`: The name of a spot color. - Returns: A result holding the four color components. ```swift public func setSpotColor(name: String, r: Float, g: Float, b: Float) ``` Sets the RGB representation of a spot color. Use this function either to create a new spot color or to update an existing one. - `name`: The name of a spot color. - `r`: The red color component in the range of 0 to 1.
- `g`: The green color component in the range of 0 to 1. - `b`: The blue color component in the range of 0 to 1. ```swift public func setSpotColor(name: String, c: Float, m: Float, y: Float, k: Float) ``` Sets the CMYK representation of a spot color. Use this function either to create a new spot color or to update an existing one. - `name`: The name of a spot color. - `c`: The cyan color component in the range of 0 to 1. - `m`: The magenta color component in the range of 0 to 1. - `y`: The yellow color component in the range of 0 to 1. - `k`: The key color component in the range of 0 to 1. ```swift public func removeSpotColor(name: String) throws ``` Removes a spot color from the list of set spot colors. - `name:`: The name of a spot color. ## Full Code Here's the full code: ```swift // Create a spot color with an RGB color approximation. engine.editor.setSpotColor(name: "Red", r: 1.0, g: 0.0, b: 0.0) // Create a spot color with a CMYK color approximation. // Add a CMYK approximation to the already defined 'Red' spot color. engine.editor.setSpotColor(name: "Yellow", c: 0.0, m: 0.0, y: 1.0, k: 0.0) engine.editor.setSpotColor(name: "Red", c: 0.0, m: 1.0, y: 1.0, k: 0.0) // List all defined spot colors. engine.editor.findAllSpotColors() // ["Red", "Yellow"] // Retrieve the RGB color approximation for a defined color. // The alpha value will always be 1.0. let rgbaSpotRed: RGBA = engine.editor.getSpotColor(name: "Red") // Retrieve the CMYK color approximation for a defined color. let cmykSpotRed: CMYK = engine.editor.getSpotColor(name: "Red") // Retrieving the approximation of an undefined spot color returns magenta. let cmykSpotUnknown: CMYK = engine.editor.getSpotColor(name: "Unknown") // Returns CMYK values for magenta. // Removes a spot color from the list of defined spot colors.
try engine.editor.removeSpotColor(name: "Red") ``` --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "For Screen" description: "Documentation for For Screen" platform: ios url: "https://img.ly/docs/cesdk/ios/colors/for-screen-1911f8/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Colors](https://img.ly/docs/cesdk/ios/colors-a9b79c/) > [For Screen](https://img.ly/docs/cesdk/ios/colors/for-screen-1911f8/) --- --- ## Related Pages - [P3 Colors](https://img.ly/docs/cesdk/ios/colors/for-screen/p3-706127/) - Documentation for P3 Colors --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "P3 Colors" description: "Documentation for P3 Colors" platform: ios url: "https://img.ly/docs/cesdk/ios/colors/for-screen/p3-706127/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). 
**Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Colors](https://img.ly/docs/cesdk/ios/colors-a9b79c/) > [For Screen](https://img.ly/docs/cesdk/ios/colors/for-screen-1911f8/) > [P3 Colors](https://img.ly/docs/cesdk/ios/colors/for-screen/p3-706127/) --- This guide explains how to check whether the P3 color space is supported on a given device using the `supportsP3()` function and how to handle scenarios where P3 is unavailable. `supportsP3` returns whether the engine supports displaying and working in the P3 color space on the current device. If supported, the engine can be switched to a P3 color space using the "features/p3WorkingColorSpace" setting. `checkP3Support` throws an error if the engine does not support working in the P3 color space; the error includes a description of why the P3 color space is not supported. ```swift // Check whether the current device supports working in the P3 color space let p3IsSupported = try engine.editor.supportsP3() do { try engine.editor.checkP3Support() } catch { // P3 is not supported. } ``` --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Overview" description: "Manage color usage in your designs, from applying brand palettes to handling print and screen formats." platform: ios url: "https://img.ly/docs/cesdk/ios/colors/overview-16a177/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt).
**Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Colors](https://img.ly/docs/cesdk/ios/colors-a9b79c/) > [Overview](https://img.ly/docs/cesdk/ios/colors/overview-16a177/) --- Colors are a fundamental part of design in the CreativeEditor SDK (CE.SDK). Whether you're designing for digital screens or printed materials, consistent color management ensures your creations look the way you intend. CE.SDK offers flexible tools for working with colors through both the user interface and programmatically, making it easy to manage color workflows at any scale. [Explore Demos](https://img.ly/showcases/cesdk?tags=ios) [Get Started](https://img.ly/docs/cesdk/ios/get-started/overview-e18f40/) --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "System Compatibility" description: "Learn how device performance and hardware limits affect CE.SDK editing, rendering, and export capabilities." platform: ios url: "https://img.ly/docs/cesdk/ios/compatibility-139ef9/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Compatibility & Security](https://img.ly/docs/cesdk/ios/compatibility-fef719/) > [System Compatibility](https://img.ly/docs/cesdk/ios/compatibility-139ef9/) --- ## Targets On Apple platforms, CE.SDK makes use of system-frameworks to benefit from hardware acceleration and platform native performance. 
The following targets are supported: - iOS & iPadOS 14 or later - macOS 12 or later ## Recommended Hardware - iPhone 8 or later - iPad (6th gen) or later - Macs released in the last 7 years ## Video Playback and exporting are **supported for all codecs** mentioned in the general section. However, mobile devices have stricter limits around the number of parallel encoders and decoders compared to fully fledged desktop machines. This means that very large scenes with more than 10 videos shown in parallel may fail to play all videos at the same time and can’t be exported. ## Export Limitations The export size is limited by the hardware capabilities of the device, e.g., due to the maximum texture size that can be allocated. The maximum possible export size can be queried via API, see [export guide](https://img.ly/docs/cesdk/ios/export-save-publish/export/overview-9ed3a8/). --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Compatibility & Security" description: "Learn about CE.SDK's compatibility and security features." platform: ios url: "https://img.ly/docs/cesdk/ios/compatibility-fef719/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Compatibility & Security](https://img.ly/docs/cesdk/ios/compatibility-fef719/) --- CE.SDK provides robust compatibility and security features across platforms.
Learn about supported browsers, frameworks, file formats, language support, and how CE.SDK ensures secure operation in your applications. --- ## Related Pages - [Bundle Size](https://img.ly/docs/cesdk/ios/bundle-size-df9210/) - Understand CE.SDK’s engine and editor bundle sizes and how they affect your mobile app’s download footprint. - [System Compatibility](https://img.ly/docs/cesdk/ios/compatibility-139ef9/) - Learn how device performance and hardware limits affect CE.SDK editing, rendering, and export capabilities. - [File Format Support](https://img.ly/docs/cesdk/ios/file-format-support-3c4b2a/) - See which image, video, audio, font, and template formats CE.SDK supports for import and export. - [Security](https://img.ly/docs/cesdk/ios/security-777bfd/) - Learn how CE.SDK keeps your data private with client-side processing, secure licensing, and GDPR-compliant practices. --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Concepts" description: "Key concepts and principles of CE.SDK" platform: ios url: "https://img.ly/docs/cesdk/ios/concepts-c9ff51/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Concepts](https://img.ly/docs/cesdk/ios/concepts-c9ff51/) --- Key Concepts and principles of CE.SDK. --- ## Related Pages - [Key Concepts](https://img.ly/docs/cesdk/ios/key-concepts-21a270/) - Explore CE.SDK’s key features—manual editing, automation, templates, AI tools, and full UI and API control. 
- [Key Capabilities](https://img.ly/docs/cesdk/ios/key-capabilities-dbb5b1/) - Explore CE.SDK’s key features—manual editing, automation, templates, AI tools, and full UI and API control. - [Editing Workflow](https://img.ly/docs/cesdk/ios/concepts/editing-workflow-032d27/) - Control editing access with Creator and Adopter roles, each offering tailored permissions and UI constraints. - [Blocks](https://img.ly/docs/cesdk/ios/concepts/blocks-90241e/) - Learn how blocks define elements in a scene and how to structure them for rendering in CE.SDK. - [Scenes](https://img.ly/docs/cesdk/ios/concepts/scenes-e8596d/) - Scenes act as the root container for blocks and define the full design structure in CE.SDK. - [Editor State](https://img.ly/docs/cesdk/ios/concepts/edit-modes-1f5b6c/) - Control how users interact with content by switching between edit modes like transform, crop, and text. - [Events](https://img.ly/docs/cesdk/ios/concepts/events-353f97/) - Subscribe to block creation, update, and deletion events to track changes in your CE.SDK scene. - [Buffers](https://img.ly/docs/cesdk/ios/concepts/buffers-9c565b/) - Use buffers to store temporary, non-serializable data in CE.SDK via the CreativeEngine API. - [Working With Resources](https://img.ly/docs/cesdk/ios/concepts/resources-a58d71/) - Preload all resources for blocks or scenes in CE.SDK to improve performance and avoid runtime delays. - [Undo and History](https://img.ly/docs/cesdk/ios/concepts/undo-and-history-99479d/) - Manage undo and redo stacks in CE.SDK using multiple histories, callbacks, and API-based controls. 
--- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Blocks" description: "Learn how blocks define elements in a scene and how to structure them for rendering in CE.SDK." platform: ios url: "https://img.ly/docs/cesdk/ios/concepts/blocks-90241e/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Concepts](https://img.ly/docs/cesdk/ios/concepts-c9ff51/) > [Blocks](https://img.ly/docs/cesdk/ios/concepts/blocks-90241e/) --- ## Lifecycle Only blocks that are direct or indirect children of a `page` block are rendered. Scenes without any `page` child may not be properly displayed by the CE.SDK editor. ## Functions ```swift public func create(_ type: DesignBlockType) throws -> DesignBlockID ``` Create a new block. - `type:`: The type of the block that shall be created. - Returns: The created block's handle. To create a scene, use [Open the Editor](https://img.ly/docs/cesdk/ios/open-the-editor-23a1db/) instead. ```swift public func saveToString(blocks: [DesignBlockID], allowedResourceSchemes: [String] = ["bundle", "file", "http", "https"]) async throws -> String ``` Saves the given blocks into a string. If given the root of a block hierarchy, e.g. a page with multiple children, the entire hierarchy is saved. - `blocks`: The blocks to save. - `allowedResourceSchemes`: If a resource URL has a scheme that is not in this list, an error will be thrown. - Returns: A string representation of the blocks.
```swift public func saveToArchive(blocks: [DesignBlockID]) async throws -> Blob ``` Saves the given blocks and all of their referenced assets into an archive. The archive contains all assets that were accessible when this function was called. Blocks in the archived scene reference assets relative to the location of the scene file. These references are resolved when loading such a scene via `scene.load(from url:)`. - `blocks:`: The blocks to save. - Returns: A serialized scene data blob. ```swift public func load(from string: String) async throws -> [DesignBlockID] ``` Loads existing blocks from the given string. The blocks are not attached by default and won't be visible until attached to a page or the scene. The UUID of the loaded blocks is replaced with a new one. - `string:`: A string representing the given blocks. - Returns: A list of loaded blocks. ```swift public func load(from url: URL) async throws -> [DesignBlockID] ``` Loads existing blocks from a URL. The URL should point to a blocks file within an unzipped archive directory previously saved with `block.saveToArchive`. The blocks are not attached by default and won't be visible until attached to a page or the scene. The UUID of the loaded blocks is replaced with a new one. - `url:`: The URL to the blocks file. - Returns: A list of loaded blocks. ```swift public func loadArchive(from url: URL) async throws -> [DesignBlockID] ``` Loads existing blocks from an archive. The blocks are not attached by default and won't be visible until attached to a page or the scene. The UUID of the loaded blocks is replaced with a new one. - `url:`: The URL of the blocks archive file. - Returns: A list of loaded blocks. ```swift public func getType(_ id: DesignBlockID) throws -> String ``` Get the type of the given block; fails if the block is invalid. - `id:`: The block to query. - Returns: The block's type. ```swift public func setName(_ id: DesignBlockID, name: String) throws ``` Update a block's name.
- `id`: The block to update. - `name`: The name to set. ```swift public func getName(_ id: DesignBlockID) throws -> String ``` Get a block's name. - `id:`: The block to query. - Returns: The block's name. ```swift public func duplicate(_ id: DesignBlockID) throws -> DesignBlockID ``` Duplicates a block including its children. Required scope: "lifecycle/duplicate" If the block is parented to a track that is set always-on-bottom, the duplicate is inserted in the same track immediately after the block. Otherwise, the duplicate is moved up in the hierarchy. - `id:`: The block to duplicate. - Returns: The handle of the duplicate. ```swift public func destroy(_ id: DesignBlockID) throws ``` Destroys a block. Required scope: "lifecycle/destroy" - `id:`: The block to destroy. ```swift public func isValid(_ id: DesignBlockID) -> Bool ``` Check if a block is valid. A block becomes invalid once it has been destroyed. - `id:`: The block to query. - Returns: `true`, if the block is valid. ### Full Code In this example, we will show you how to use the [CreativeEditor SDK](https://img.ly/products/creative-sdk)'s CreativeEngine to modify scenes through the `block` API. ```swift // Create, save and load blocks let block = try engine.block.create(.graphic) let savedBlocksString = try await engine.block.saveToString(blocks: [block]) let loadedBlocksString = try await engine.block.load(from: savedBlocksString) let savedBlocksArchive = try await engine.block.saveToArchive(blocks: [block]) let loadedBlocksArchive = try await engine.block.loadArchive(from: .init(string: "https://cdn.img.ly/assets/demo/v1/ly.img.template/templates/cesdk_postcard_1_blocks.zip")!) // Load blocks from an extracted zip file created with block.saveToArchive let loadedBlocksURL = try await engine.block.load(from: URL(string: "https://cdn.img.ly/assets/v6/ly.img.text.components/box/blocks.blocks")!) 
// Check a block's type let blockType = try engine.block.getType(block) // Alter a block's name try engine.block.setName(block, name: "someName") let name = try engine.block.getName(block) // You may duplicate or destroy blocks let duplicate = try engine.block.duplicate(block) try engine.block.destroy(duplicate) engine.block.isValid(duplicate) // false ``` ## Properties ### UUID A universally unique identifier (UUID) is assigned to each block upon creation and can be queried. This is stable across save & load and may be used to reference blocks. ```swift public func getUUID(_ id: DesignBlockID) throws -> String ``` Get a block's unique identifier. - `id:`: The block to query. - Returns: The block's UUID. ### Reflection For every block, you can get a list of all its properties by calling `findAllProperties(_:)`. Properties specific to a block are prefixed with the block's type followed by a forward slash. There are also common properties shared between blocks which are prefixed by their respective type. A list of all properties can be found in the [Blocks Overview](https://img.ly/docs/cesdk/ios/concepts/blocks-90241e/). ```swift public func findAllProperties(_ id: DesignBlockID) throws -> [String] ``` Get all available properties of a block. - `id:`: The block whose properties should be queried. - Returns: A list of the property names. Given a property, you can query its type using `getType(ofProperty:)`. ```swift public func getType(ofProperty property: String) throws -> PropertyType ``` Get the type of a property given its name. - `property:`: The name of the property whose type should be queried. - Returns: The property type. The property type `'Enum'` is a special type. Properties of this type only accept a set of certain strings. To get a list of possible values for an enum property, call `getEnumValues(ofProperty:)`.
```swift public func getEnumValues(ofProperty enumProperty: String) throws -> [String] ``` Get all the possible values of an enum given an enum property. - `enumProperty:`: The name of the property whose enum values should be queried. - Returns: A list of the enum value names as string. Some properties can only be written to or only be read. To find out what is possible with a property, you can use the `isPropertyReadable` and `isPropertyWritable` methods. ```swift public func isPropertyReadable(property: String) throws -> Bool ``` Check if a property with a given name is readable. - `property:`: The name of the property whose type should be queried. - Returns: Whether the property is readable or not. Will return false for unknown properties. ```swift public func isPropertyWritable(property: String) throws -> Bool ``` Check if a property with a given name is writable. - `property:`: The name of the property whose type should be queried. - Returns: Whether the property is writable or not. Will return false for unknown properties. ### Generic Properties There are dedicated setter and getter functions for each property type. You have to provide a block and the property path. Use `findAllProperties` to get a list of all the available properties a block has. > **Note:** Please make sure you call the setter and getter function matching the type of > the property you want to set or query or else you will get an error. Use > `getType` to figure out the pair of functions you need to use. ```swift public func findAllProperties(_ id: DesignBlockID) throws -> [String] ``` Get all available properties of a block. - `id:`: The block whose properties should be queried. - Returns: A list of the property names. ```swift public func setBool(_ id: DesignBlockID, property: String, value: Bool) throws ``` Set a bool property of the given design block to the given value. - `id`: The block whose property should be set. - `property`: The name of the property to set. 
- `value`: The value to set. ```swift public func getBool(_ id: DesignBlockID, property: String) throws -> Bool ``` Get the value of a bool property of the given design block. - `id`: The block whose property should be queried. - `property`: The name of the property to query. - Returns: The value of the property. ```swift public func setInt(_ id: DesignBlockID, property: String, value: Int) throws ``` Set an int property of the given design block to the given value. - `id`: The block whose property should be set. - `property`: The name of the property to set. - `value`: The value to set. ```swift public func getInt(_ id: DesignBlockID, property: String) throws -> Int ``` Get the value of an int property of the given design block. - `id`: The block whose property should be queried. - `property`: The name of the property to query. - Returns: The value of the property. ```swift public func setFloat(_ id: DesignBlockID, property: String, value: Float) throws ``` Set a float property of the given design block to the given value. - `id`: The block whose property should be set. - `property`: The name of the property to set. - `value`: The value to set. ```swift public func getFloat(_ id: DesignBlockID, property: String) throws -> Float ``` Get the value of a float property of the given design block. - `id`: The block whose property should be queried. - `property`: The name of the property to query. - Returns: The value of the property. ```swift public func setDouble(_ id: DesignBlockID, property: String, value: Double) throws ``` Set a double property of the given design block to the given value. - `id`: The block whose property should be set. - `property`: The name of the property to set. - `value`: The value to set. ```swift public func getDouble(_ id: DesignBlockID, property: String) throws -> Double ``` Get the value of a double property of the given design block. - `id`: The block whose property should be queried. - `property`: The name of the property to query. 
- Returns: The value of the property. ```swift public func setString(_ id: DesignBlockID, property: String, value: String) throws ``` Set a string property of the given design block to the given value. - `id`: The block whose property should be set. - `property`: The name of the property to set. - `value`: The value to set. ```swift public func getString(_ id: DesignBlockID, property: String) throws -> String ``` Get the value of a string property of the given design block. - `id`: The block whose property should be queried. - `property`: The name of the property to query. - Returns: The value of the property. ```swift public func setURL(_ id: DesignBlockID, property: String, value: URL) throws ``` Set a URL property of the given design block to the given value. - `id`: The block whose property should be set. - `property`: The name of the property to set. - `value`: The value to set. ```swift public func getURL(_ id: DesignBlockID, property: String) throws -> URL ``` Get the value of a URL property of the given design block. - `id`: The block whose property should be queried. - `property`: The name of the property to query. - Returns: The value of the property. ```swift public func setColor(_ id: DesignBlockID, property: String, color: Color) throws ``` Set a color property of the given design block to the given value. - `id`: The block whose property should be set. - `property`: The name of the property to set. - `color`: The value to set. ```swift public func getColor(_ id: DesignBlockID, property: String) throws -> Color ``` Get the value of a color property of the given design block. - `id`: The block whose property should be queried. - `property`: The name of the property to query. - Returns: The color value of the property. ```swift public func setEnum(_ id: DesignBlockID, property: String, value: String) throws ``` Set an enum property of the given design block to the given value. - `id`: The block whose property should be set.
- `property`: The name of the property to set. - `value`: The enum value as string. ```swift public func getEnum(_ id: DesignBlockID, property: String) throws -> String ``` Get the value of an enum property of the given design block. - `id`: The block whose property should be queried. - `property`: The name of the property to query. - Returns: The value as string. ```swift public func setGradientColorStops(_ id: DesignBlockID, property: String, colors: [GradientColorStop]) throws ``` Set a gradient color stops property of the given design block. - `id`: The block whose property should be set. - `property`: The name of the property to set. - `colors`: The colors to set. ```swift public func getGradientColorStops(_ id: DesignBlockID, property: String) throws -> [GradientColorStop] ``` Get the gradient color stops property of the given design block. - `id`: The block whose property should be queried. - `property`: The name of the property to query. - Returns: The gradient colors. ```swift public func setSourceSet(_ id: DesignBlockID, property: String, sourceSet: [Source]) throws ``` Set the source set of the given block. The crop and content fill mode of the associated block will be set to the default values. - `id`: The block whose source set should be set. - `property`: The name of the property to set. - `sourceSet`: The new source set. ```swift public func getSourceSet(_ id: DesignBlockID, property: String) throws -> [Source] ``` Returns the source set of the given block. - `id:`: The block whose source set should be returned. - Returns: The source set of the given block. ```swift public func addImageFileURIToSourceSet(_ id: DesignBlockID, property: String, uri: URL) async throws ``` Add a source to the `sourceSet` property of the given block. If the source set already contains an image with the same width, that existing image will be replaced. If the source set is or becomes empty, the crop and content fill mode of the associated block will be set to the default values.
Note: This fetches the resource from the given URI to obtain the image dimensions. It is recommended to use `setSourceSet` if the dimensions are known. - `id`: The block whose source set should be set. - `property`: The name of the property to modify. - `uri`: The source to add to the source set. ```swift public func addVideoFileURIToSourceSet(_ id: DesignBlockID, property: String, uri: URL) async throws ``` Add a source to the `sourceSet` property of the given block. If the source set already contains a video with the same width, that existing video will be replaced. If the source set is or becomes empty, the crop and content fill mode of the associated block will be set to the default values. Note: This fetches the resource from the given URI to obtain the video dimensions. It is recommended to use `setSourceSet` if the dimensions are known. - `id`: The block whose source set should be set. - `property`: The name of the property to modify. - `uri`: The source to add to the source set. ### Modifying Properties Here’s the full code snippet for modifying a block’s properties: ```swift let uuid = try engine.block.getUUID(block) let propertyNamesStar = try engine.block .findAllProperties(starShape) // Array [ "shape/star/innerDiameter", "shape/star/points", "opacity/value", ... ] let propertyNamesImage = try engine.block .findAllProperties(imageFill) // Array [ "fill/image/imageFileURI", "fill/image/previewFileURI", "fill/image/externalReference", ... ] let propertyNamesText = try engine.block .findAllProperties(text) // Array [ "text/text", "text/fontFileUri", "text/externalReference", "text/fontSize", "text/horizontalAlignment", ...
] let pointsType = try engine.block.getType(ofProperty: "shape/star/points") // "Int" let alignmentType = try engine.block.getType(ofProperty: "text/horizontalAlignment") // "Enum" try engine.block.getEnumValues(ofProperty: "text/horizontalAlignment") let readable = try engine.block.isPropertyReadable(property: "shape/star/points") let writable = try engine.block.isPropertyWritable(property: "shape/star/points") // Generic Properties try engine.block.setBool(scene, property: "scene/aspectRatioLock", value: false) try engine.block.getBool(scene, property: "scene/aspectRatioLock") let points = try engine.block.getInt(starShape, property: "shape/star/points") try engine.block.setInt(starShape, property: "shape/star/points", value: points + 2) try engine.block.setFloat(starShape, property: "shape/star/innerDiameter", value: 0.75) try engine.block.getFloat(starShape, property: "shape/star/innerDiameter") let audio = try engine.block.create(.audio) try engine.block.appendChild(to: scene, child: audio) try engine.block.setDouble(audio, property: "playback/duration", value: 1.0) try engine.block.getDouble(audio, property: "playback/duration") try engine.block.setString(text, property: "text/text", value: "*o*") try engine.block.getString(text, property: "text/text") try engine.block.setURL( imageFill, property: "fill/image/imageFileURI", value: URL(string: "https://img.ly/static/ubq_samples/sample_4.jpg")! 
) try engine.block.getURL(imageFill, property: "fill/image/imageFileURI") try engine.block.setColor(colorFill, property: "fill/color/value", color: .rgba(r: 1, g: 1, b: 1, a: 1)) // White try engine.block.getColor(colorFill, property: "fill/color/value") as Color try engine.block.setEnum(text, property: "text/horizontalAlignment", value: "Center") try engine.block.setEnum(text, property: "text/verticalAlignment", value: "Center") try engine.block.getEnum(text, property: "text/horizontalAlignment") try engine.block.getEnum(text, property: "text/verticalAlignment") try engine.block.setGradientColorStops(gradientFill, property: "fill/gradient/colors", colors: [ .init(color: .rgba(r: 1.0, g: 0.8, b: 0.2, a: 1.0), stop: 0), .init(color: .rgba(r: 0.3, g: 0.4, b: 0.7, a: 1.0), stop: 1) ]) try engine.block.getGradientColorStops(gradientFill, property: "fill/gradient/colors") let sourceSetImageFill = try engine.block.createFill(.image) try engine.block.setSourceSet(sourceSetImageFill, property: "fill/image/sourceSet", sourceSet: [ .init( uri: URL(string: "http://img.ly/my-image.png")!, width: 800, height: 600 ) ]) _ = try engine.block.getSourceSet(sourceSetImageFill, property: "fill/image/sourceSet") try await engine.block.addImageFileURIToSourceSet(sourceSetImageFill, property: "fill/image/sourceSet", uri: URL(string: "https://img.ly/static/ubq_samples/sample_1.jpg")!) let videoFill = try engine.block.createFill(.video) try await engine.block.addVideoFileURIToSourceSet(videoFill, property: "fill/video/sourceSet", uri: URL(string: "https://img.ly/static/example-assets/sourceset/1x.mp4")!) ``` ## Kind Property The `kind` of a design block is a custom string that can be assigned to a block in order to categorize it and distinguish it from other blocks that have the same type. The user interface can then customize its appearance based on the kind of the selected blocks. It can also be used for automation use cases in order to process blocks in a different way based on their kind.
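As a sketch of the automation use case, a script could branch on each block's kind. The calls below (`find(byKind:)`, `setFloat`, `setIncludedInExport`) are documented in this section, but the kinds `"title"` and `"sticker"` and the chosen font size are made-up values for illustration:

```swift
// Hypothetical sketch: process blocks differently based on their kind.
// Assumes a loaded scene and an initialized `engine`.

// Emphasize all blocks tagged as titles.
for title in try engine.block.find(byKind: "title") {
  try engine.block.setFloat(title, property: "text/fontSize", value: 32)
}

// Exclude all sticker blocks from the exported result.
for sticker in try engine.block.find(byKind: "sticker") {
  try engine.block.setIncludedInExport(sticker, enabled: false)
}
```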
```swift public func setKind(_ id: DesignBlockID, kind: String) throws ``` Set the kind of the given block, fails if the block is invalid. - `id`: The block whose kind should be set. - `kind`: The kind to set. ```swift public func getKind(_ id: DesignBlockID) throws -> String ``` Get the kind of the given block, fails if the block is invalid. - `id:`: The block to query. - Returns: The block's kind. ```swift public func find(byKind kind: String) throws -> [DesignBlockID] ``` Finds all blocks with the given kind. - `kind:`: The kind to search for. - Returns: A list of block ids. ### Full Code In this example, we will show you how to use the [CreativeEditor SDK](https://img.ly/products/creative-sdk)'s CreativeEngine to modify and query the kind property of design blocks through the `block` API. ```swift try engine.block.setKind(text, kind: "title") let kind = try engine.block.getKind(text) let allTitles = try engine.block.find(byKind: "title") ``` ## Selection & Visibility A block can potentially be *invisible* (in the sense that you can't see it), even though `isVisible()` returns true. This could be the case when a block has not been added to a parent, the parent itself is not visible, or the block is obscured by another block on top of it. ```swift public func setSelected(_ id: DesignBlockID, selected: Bool) throws ``` Update the selection state of a block. - Note: Previously selected blocks remain selected. Required scope: "editor/select" - `id`: The block to update. - `selected`: Whether or not the block should be selected. ```swift public func isSelected(_ id: DesignBlockID) throws -> Bool ``` Get the selected state of a block. - `id:`: The block to query. - Returns: `true` if the block is selected, `false` otherwise. ```swift public func select(_ id: DesignBlockID) throws ``` Selects the given block and deselects all other blocks. - `id:`: The block to be selected. ```swift public func findAllSelected() -> [DesignBlockID] ``` Get all currently selected blocks.
- Returns: An array of block ids. ```swift public var onSelectionChanged: AsyncStream<Void> { get } ``` Subscribe to changes in the current set of selected blocks. ```swift public var onClicked: AsyncStream<DesignBlockID> { get } ``` Subscribe to block click events. ```swift public func setVisible(_ id: DesignBlockID, visible: Bool) throws ``` Update a block's visibility. Required scope: "layer/visibility" - `id`: The block to update. - `visible`: Whether the block should be visible. ```swift public func isVisible(_ id: DesignBlockID) throws -> Bool ``` Query a block's visibility. - `id:`: The block to query. - Returns: `true` if visible, `false` otherwise. ```swift public func setClipped(_ id: DesignBlockID, clipped: Bool) throws ``` Update a block's clipped state. Required scope: "layer/clipping" - `id`: The block to update. - `clipped`: Whether the block should clip its contents to its frame. ```swift public func isClipped(_ id: DesignBlockID) throws -> Bool ``` Query a block's clipped state. If `true`, the block clips its contents to its frame. - `id:`: The block to query. - Returns: `true` if clipped, `false` otherwise. ```swift public func isIncludedInExport(_ id: DesignBlockID) throws -> Bool ``` Check if the given block is included in the exported result. - `id:`: The block id to query. - Returns: `true` if the block will be included in the exported result. ```swift public func setIncludedInExport(_ id: DesignBlockID, enabled: Bool) throws ``` Set whether the given design block should be included in the exported result. - `id`: The block to include in/exclude from export. - `enabled`: If `true`, the block will be included in the export. ### Full Code In this example, we will show you how to use the [CreativeEditor SDK](https://img.ly/products/creative-sdk)'s CreativeEngine to modify scenes through the `block` API.
```swift try engine.block.setSelected(block, selected: true) let isSelected = try engine.block.isSelected(block) try engine.block.select(block) let selectedIds = engine.block.findAllSelected() let isVisible = try engine.block.isVisible(block) try engine.block.setVisible(block, visible: true) let isClipped = try engine.block.isClipped(page) try engine.block.setClipped(page, clipped: true) let selectionTask = Task { for await _ in engine.block.onSelectionChanged { let selectedIDs = engine.block.findAllSelected() print("Selection changed: \(selectedIDs)") } } let clickedTask = Task { for await block in engine.block.onClicked { print("Block clicked: \(block)") } } let isIncludedInExport = try engine.block.isIncludedInExport(block) try engine.block.setIncludedInExport(block, enabled: true) ``` ## State Blocks can perform operations that take some time or that can fail. When that happens, blocks put themselves in a pending state or an error state, and visual feedback reflecting that state is shown. When an external operation is performed on blocks, for example by a plugin, you may want to manually set the block's state to pending (if that external operation takes time) or to error (if that operation resulted in an error). The possible states of a block are: ```swift public enum BlockState { /// The block is ready to be rendered. case ready /// There is an ongoing operation on the block. Rendering may be affected. /// - Parameter progress: The progress is in the range of [0, 1]. case pending(progress: Float) /// There's an error preventing rendering. case error(BlockStateError) } public enum BlockStateError { /// Failed to decode the block's audio stream. case audioDecoding /// Failed to decode the block's image stream. case imageDecoding /// Failed to retrieve the block's remote content. case fileFetch /// An unknown error occurred. case unknown /// Failed to decode the block's video stream.
case videoDecoding } ``` When calling `getState`, the returned state reflects the combined state of a block, the block's fill, the block's shape, and the block's effects. If any of these blocks is in an `.error` state, the returned state will reflect that error. If none of these blocks is in error state but any is in `.pending` state, the returned state will reflect the aggregate progress of the pending blocks. If none of the blocks are in error state or pending state, the returned state is `.ready`. ```swift public func getState(_ id: DesignBlockID) throws -> BlockState ``` Get the current state of a block. - Note: If this block is in error state, or this block has a `Shape`, `Fill`, or `Effect` block that is in error state, the returned state will be `.error`. Else, if this block is in pending state, or this block has a `Shape`, `Fill`, or `Effect` block that is in pending state, the returned state will be `.pending`. Else, the returned state will be `.ready`. - `id:`: The block whose state should be queried. - Returns: The state of the block. ```swift public func setState(_ id: DesignBlockID, state: BlockState) throws ``` Set the state of a block. - `id`: The block whose state should be set. - `state`: The new state to set. ```swift public func onStateChanged(_ ids: [DesignBlockID]) -> AsyncStream<[DesignBlockID]> ``` Subscribe to changes to the state of a block. - Note: Like `getState`, the state of a block is determined by the state of itself and its `Shape`, `Fill`, and `Effect` block(s). - `ids:`: A list of blocks to filter events by. If the list is empty, events for every block are sent. - Returns: A stream of block state change events. ### Full Code In this example, we will show you how to use [CreativeEditor SDK](https://img.ly/products/creative-sdk)'s CreativeEngine to retrieve a block's state and to manually set a block to a pending state, an error state, or back to a ready state.
```swift let stateTask = Task { for await blocks in engine.block.onStateChanged([block]) { print("State of blocks \(blocks) is updated.") } } let state = try engine.block.getState(block) try engine.block.setState(block, state: .pending(progress: 0.5)) try engine.block.setState(block, state: .ready) try engine.block.setState(block, state: .error(.imageDecoding)) ``` --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Buffers" description: "Use buffers to store temporary, non-serializable data in CE.SDK via the CreativeEngine API." platform: ios url: "https://img.ly/docs/cesdk/ios/concepts/buffers-9c565b/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). 
**Navigation:** [Concepts](https://img.ly/docs/cesdk/ios/concepts-c9ff51/) > [Buffers](https://img.ly/docs/cesdk/ios/concepts/buffers-9c565b/) --- ```swift reference-only // Create an audio block and append it to the page let audioBlock = try engine.block.create(.audio) try engine.block.appendChild(to: page, child: audioBlock) // Create a buffer let audioBuffer = engine.editor.createBuffer() // Reference the audio buffer resource from the audio block try engine.block.setURL(audioBlock, property: "audio/fileURI", value: audioBuffer) // Generate 10 seconds of stereo 48 kHz audio data let samples = ContiguousArray(unsafeUninitializedCapacity: 10 * 2 * 48000) { buffer, initializedCount in for i in stride(from: 0, to: buffer.count, by: 2) { let sample = sin((440.0 * Float(i) * Float.pi) / 48000.0) buffer[i + 0] = sample buffer[i + 1] = sample } initializedCount = buffer.count } // Assign the audio data to the buffer try samples.withUnsafeBufferPointer { buffer in try engine.editor.setBufferData(url: audioBuffer, offset: 0, data: Data(buffer: buffer)) } // We can get subranges of the buffer data let chunk = try engine.editor.getBufferData(url: audioBuffer, offset: 0, length: 4096) // Get current length of the buffer in bytes let length = try engine.editor.getBufferLength(url: audioBuffer) // Reduce the buffer to half its length, leading to 5 seconds worth of audio try engine.editor.setBufferLength(url: audioBuffer, length: UInt(truncating: length) / 2) // Free data try engine.editor.destroyBuffer(url: audioBuffer) ``` In this example, we will show you how to use the [CreativeEditor SDK](https://img.ly/products/creative-sdk)'s CreativeEngine to create buffers through the `editor` API. Buffers can hold arbitrary data.
> **Note:** **Limitations:** Buffers are intended for temporary data only. > * Buffer data is not part of the [scene serialization](https://img.ly/docs/cesdk/ios/concepts/scenes-e8596d/) > * Changes to buffers can't be undone using the [history system](https://img.ly/docs/cesdk/ios/concepts/undo-and-history-99479d/) ```swift public func createBuffer() -> URL ``` Create a resizable buffer that can hold arbitrary data. - Returns: A URL to identify the buffer. ```swift public func destroyBuffer(url: URL) throws ``` Destroy a buffer and free its resources. - `url`: The URL of the buffer to destroy. ```swift public func setBufferData(url: URL, offset: UInt, data: Data) throws ``` Set the data of a buffer. - `url`: The URL of the buffer. - `offset`: The offset in bytes at which to start writing. - `data`: The data to write. ```swift public func getBufferData(url: URL, offset: UInt, length: UInt) throws -> Data ``` Get the data of a buffer. - `url`: The URL of the buffer. - `offset`: The offset in bytes at which to start reading. - `length`: The number of bytes to read. - Returns: The data read from the buffer. ```swift public func setBufferLength(url: URL, length: UInt) throws ``` Set the length of a buffer. - `url`: The URL of the buffer. - `length`: The new length of the buffer in bytes. ```swift public func getBufferLength(url: URL) throws -> NSNumber ``` Get the length of a buffer. - `url`: The URL of the buffer. - Returns: The length of the buffer in bytes.
## Full Code Here's the full code: ```swift // Create an audio block and append it to the page let audioBlock = try engine.block.create(.audio) try engine.block.appendChild(to: page, child: audioBlock) // Create a buffer let audioBuffer = engine.editor.createBuffer() // Reference the audio buffer resource from the audio block try engine.block.setURL(audioBlock, property: "audio/fileURI", value: audioBuffer) // Generate 10 seconds of stereo 48 kHz audio data let samples = ContiguousArray(unsafeUninitializedCapacity: 10 * 2 * 48000) { buffer, initializedCount in for i in stride(from: 0, to: buffer.count, by: 2) { let sample = sin((440.0 * Float(i) * Float.pi) / 48000.0) buffer[i + 0] = sample buffer[i + 1] = sample } initializedCount = buffer.count } // Assign the audio data to the buffer try samples.withUnsafeBufferPointer { buffer in try engine.editor.setBufferData(url: audioBuffer, offset: 0, data: Data(buffer: buffer)) } // We can get subranges of the buffer data let chunk = try engine.editor.getBufferData(url: audioBuffer, offset: 0, length: 4096) // Get current length of the buffer in bytes let length = try engine.editor.getBufferLength(url: audioBuffer) // Reduce the buffer to half its length, leading to 5 seconds worth of audio try engine.editor.setBufferLength(url: audioBuffer, length: UInt(truncating: length) / 2) // Free data try engine.editor.destroyBuffer(url: audioBuffer) ``` --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Editor State" description: "Control how users interact with content by switching between edit modes like transform, crop, and text."
platform: ios url: "https://img.ly/docs/cesdk/ios/concepts/edit-modes-1f5b6c/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Concepts](https://img.ly/docs/cesdk/ios/concepts-c9ff51/) > [Editor State](https://img.ly/docs/cesdk/ios/concepts/edit-modes-1f5b6c/) --- ```swift reference-only let task = Task { for await _ in engine.editor.onStateChanged { print("Editor state has changed") } } // Native modes: 'Transform', 'Crop', 'Text' engine.editor.setEditMode(.crop) engine.editor.getEditMode() // 'Crop' engine.editor.unstable_isInteractionHappening() // Use this information to alter the displayed cursor engine.editor.getCursorType() engine.editor.getCursorRotation() // Query information about the text cursor position engine.editor.getTextCursorPositionInScreenSpaceX() engine.editor.getTextCursorPositionInScreenSpaceY() ``` The CreativeEditor SDK operates in different states called **Edit Modes**, each designed for a specific type of interaction on the canvas: - `Transform`: The default mode, which allows you to move, resize, and otherwise manipulate elements on the canvas - `Text`: Allows editing text elements on the canvas - `Crop`: Allows cropping media blocks (images, videos, etc.) - `Trim`: Allows trimming clips in video mode - `Playback`: Allows playing back media (mostly video) in video mode While users typically interact with these modes through the UI (e.g., showing or hiding specific controls based on the active mode), it’s also possible to manage them programmatically via the engine’s API, though this isn’t always required. In this example, we will show you how to use the [CreativeEditor SDK](https://img.ly/products/creative-sdk)'s CreativeEngine to set and query the editor state in the `editor` API, i.e., what type of content the user is currently able to edit.
## State The editor state consists of the current edit mode, which informs what type of content the user is currently able to edit. The edit mode can be set to either `'Transform'`, `'Crop'`, `'Text'`, or a custom user-defined one. You can also query the intended mouse cursor and the location of the text cursor while editing text. Instead of having to constantly query the state in a loop, you can also be notified when the state has changed to then act on these changes in a callback. ```swift public var onStateChanged: AsyncStream<Void> { get } ``` Subscribe to changes to the editor state. ```swift public func setEditMode(_ mode: EditMode) ``` Set the edit mode of the editor. An edit mode defines what type of content can currently be edited by the user. - Note: The initial edit mode is "Transform". - `mode:`: "Transform", "Crop", "Text", "Playback". ```swift public func getEditMode() -> EditMode ``` Get the current edit mode of the editor. An edit mode defines what type of content can currently be edited by the user. - Returns: "Transform", "Crop", "Text", "Playback". ```swift public func unstable_isInteractionHappening() throws -> Bool ``` Check whether a user interaction is happening, e.g., a resize edit with a drag handle or a touch gesture. - Returns: `true` if an interaction is happening. ## Cursor ```swift public func getCursorType() -> CursorType ``` Get the type of cursor that should be displayed by the application. - Returns: The cursor type. ```swift public func getCursorRotation() -> Float ``` Get the rotation with which to render the mouse cursor. - Returns: The angle in radians. ```swift public func getTextCursorPositionInScreenSpaceX() -> Float ``` Get the current text cursor's x position in screen space. - Returns: The text cursor's x position in screen space. ```swift public func getTextCursorPositionInScreenSpaceY() -> Float ``` Get the current text cursor's y position in screen space. - Returns: The text cursor's y position in screen space.
## Full Code Here's the full code: ```swift let task = Task { for await _ in engine.editor.onStateChanged { print("Editor state has changed") } } // Native modes: 'Transform', 'Crop', 'Text' engine.editor.setEditMode(.crop) engine.editor.getEditMode() // 'Crop' engine.editor.unstable_isInteractionHappening() // Use this information to alter the displayed cursor engine.editor.getCursorType() engine.editor.getCursorRotation() // Query information about the text cursor position engine.editor.getTextCursorPositionInScreenSpaceX() engine.editor.getTextCursorPositionInScreenSpaceY() ``` --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Editing Workflow" description: "Control editing access with Creator and Adopter roles, each offering tailored permissions and UI constraints." platform: ios url: "https://img.ly/docs/cesdk/ios/concepts/editing-workflow-032d27/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Concepts](https://img.ly/docs/cesdk/ios/concepts-c9ff51/) > [Editing Workflow](https://img.ly/docs/cesdk/ios/concepts/editing-workflow-032d27/) --- ## Roles User roles allow the CE.SDK to change and adapt its UI layout and functionality to provide the optimal editing experience for a specific purpose. ### Creator The `Creator` role is the most powerful and least restrictive role that is offered by the CE.SDK.
Running the editor with this role means that there are no limits to what the user can do with the loaded scene. Elements can be added, moved, deleted, and modified. All types of controls for modifying the selected elements are shown inside of the inspector. ### Adopter The `Adopter` role allows new elements to be added and modified. Existing elements of a scene are only modifiable based on the set of constraints that the `Creator` has manually enabled. This provides the `Adopter` with a simpler interface that is reduced to only the properties that they should be able to change and prevents them from accidentally changing or deleting parts of a design that should not be modified. An example use case for how such a distinction between `Creator` and `Adopter` roles can provide a lot of value is the process of designing business cards. A professional designer (using the `Creator` role) can create a template design of the business card with the company name, logo, colors, etc. They can then use the constraints to make only the name text editable for non-creators. Non-designers (either the employees themselves or the HR department) can then easily open the design in a CE.SDK instance with the `Adopter` role and are able to quickly change the name on the business card and export it for printing, without a designer having to get involved. ### Role customization Roles in the CE.SDK are sets of global scopes and settings. When changing the role via the `setRole` command in the EditorAPI, the internal defaults for that role are applied as described in the previous sections. The CE.SDK and Engine provide a `onRoleChanged` callback subscription on the EditorAPI. Callbacks registered here are invoked whenever the role changes and can be used to configure additional settings or adjust the default scopes and settings. 
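A minimal sketch of this pattern follows. The `setRole` command and the `onRoleChanged` subscription are named in the text above, but their exact Swift signatures, the `AsyncStream`-style iteration, and the `setGlobalScope` override are assumptions for illustration:

```swift
// Hypothetical sketch: re-apply custom overrides whenever the role changes,
// since switching roles resets global scopes and settings to that role's defaults.
let roleTask = Task {
  for await _ in engine.editor.onRoleChanged {
    // Example override: keep elements movable even for Adopters.
    try? engine.editor.setGlobalScope(key: "layer/move", value: .allow)
  }
}

// Switching the role applies the Adopter defaults and notifies the subscription above.
engine.editor.setRole("Adopter")
```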
--- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Events" description: "Subscribe to block creation, update, and deletion events to track changes in your CE.SDK scene." platform: ios url: "https://img.ly/docs/cesdk/ios/concepts/events-353f97/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Concepts](https://img.ly/docs/cesdk/ios/concepts-c9ff51/) > [Events](https://img.ly/docs/cesdk/ios/concepts/events-353f97/) --- ```swift reference-only let scene = try engine.scene.create() let page = try engine.block.create(.page) try engine.block.appendChild(to: scene, child: page) let block = try engine.block.create(.graphic) try engine.block.setShape(block, shape: engine.block.createShape(.star)) try engine.block.setFill(block, fill: engine.block.createFill(.color)) try engine.block.appendChild(to: page, child: block) let task = Task { for await events in engine.event.subscribe(to: [block]) { for event in events { print("Event: \(event.type) \(event.block)") if engine.block.isValid(event.block) { let type = try engine.block.getType(event.block) print("Block type: \(type)") } } } } try await Task.sleep(for: .seconds(1)) try engine.block.setRotation(block, radians: 0.5 * .pi) try await Task.sleep(for: .seconds(1)) try engine.block.destroy(block) try await Task.sleep(for: .seconds(1)) ``` In this example, we will show you how to use the [CreativeEditor SDK](https://img.ly/products/creative-sdk)'s 
CreativeEngine to subscribe to creation, update, and destruction events of design blocks. ## Subscribing to Events The event API provides a single function to subscribe to design block events. The types of events are: - `'Created'`: The design block was just created. - `'Updated'`: A property of the design block was updated. - `'Destroyed'`: The design block was destroyed. Note that a destroyed block will have become invalid and trying to use Block API functions on it will result in an exception. You can always use the Block API's `isValid` function to verify whether a block is valid before use. All events that occur during an engine update are batched, deduplicated, and always delivered at the very end of the engine update. Deduplication means you will receive at most one `'Updated'` event per block per subscription, even though there could potentially be multiple updates to a block during the engine update. To be clear, this also means the order of the event list provided to your event callback won't reflect the actual order of events within an engine update. ```swift public func subscribe(to blocks: [DesignBlockID]) -> AsyncStream<[BlockEvent]> ``` Subscribe to block life-cycle events. - `blocks:`: A list of blocks to filter events by. If the list is empty, events for every block are sent. - Returns: A stream of events. Events are bundled and sent at the end of each engine update. 
## Full Code ```swift let scene = try engine.scene.create() let page = try engine.block.create(.page) try engine.block.appendChild(to: scene, child: page) let block = try engine.block.create(.graphic) try engine.block.setShape(block, shape: engine.block.createShape(.star)) try engine.block.setFill(block, fill: engine.block.createFill(.color)) try engine.block.appendChild(to: page, child: block) let task = Task { for await events in engine.event.subscribe(to: [block]) { for event in events { print("Event: \(event.type) \(event.block)") if engine.block.isValid(event.block) { let type = try engine.block.getType(event.block) print("Block type: \(type)") } } } } try await Task.sleep(for: .seconds(1)) try engine.block.setRotation(block, radians: 0.5 * .pi) try await Task.sleep(for: .seconds(1)) try engine.block.destroy(block) try await Task.sleep(for: .seconds(1)) ``` --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Working With Resources" description: "Preload all resources for blocks or scenes in CE.SDK to improve performance and avoid runtime delays." platform: ios url: "https://img.ly/docs/cesdk/ios/concepts/resources-a58d71/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Concepts](https://img.ly/docs/cesdk/ios/concepts-c9ff51/) > [Resources](https://img.ly/docs/cesdk/ios/concepts/resources-a58d71/) --- ```swift reference-only let scene = try engine.scene.get()! 
// Forcing all resources of all the blocks in a scene or the resources of graphic block to load
try await engine.block.forceLoadResources([scene])
let graphics = try engine.block.find(byType: .graphic)
try await engine.block.forceLoadResources(graphics)
```

By default, a scene's resources are loaded on-demand. You can manually trigger the loading of all resources in a scene or for specific blocks by calling `forceLoadResources`. Any set of blocks can be passed as an argument, and whatever resources those blocks require will be loaded. In this example, we will show you how to use the [CreativeEditor SDK](https://img.ly/products/creative-sdk)'s CreativeEngine to forcibly pre-load all resources contained in a scene.

```swift
public func forceLoadResources(_ ids: [DesignBlockID]) async throws
```

Begins loading the resources of the given blocks and their children. If a resource was loaded earlier and resulted in an error, it will be reloaded. This function is useful for preloading resources before they are needed. Warning: For elements with a source set, all elements in the source set will be loaded. - `ids:`: The blocks whose resources should be loaded. The given blocks are not required to have resources themselves and can have child blocks (e.g. a scene block or a page block).

### Full Code

Here's the full code:

```swift
let scene = try engine.scene.get()!
// Forcing all resources of all the blocks in a scene or the resources of graphic block to load
try await engine.block.forceLoadResources([scene])
let graphics = try engine.block.find(byType: .graphic)
try await engine.block.forceLoadResources(graphics)
```

--- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Scenes" description: "Scenes act as the root container for blocks and define the full design structure in CE.SDK." platform: ios url: "https://img.ly/docs/cesdk/ios/concepts/scenes-e8596d/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Concepts](https://img.ly/docs/cesdk/ios/concepts-c9ff51/) > [Scenes](https://img.ly/docs/cesdk/ios/concepts/scenes-e8596d/) ---

```swift file=@cesdk_swift_examples/engine-guides-modifying-scenes/ModifyingScenes.swift reference-only
import Foundation
import IMGLYEngine

@MainActor
func modifyingScenes(engine: Engine) async throws {
  let scene = try engine.scene.get()
  /* In engine only mode we have to create our own scene and page. */
  if scene == nil {
    let scene = try engine.scene.create()
    let page = try engine.block.create(.page)
    try engine.block.appendChild(to: scene, child: page)
  }
  /* Find all pages in our scene. */
  let pages = try engine.block.find(byType: .page)
  /* Use the first page we found. */
  let page = pages.first!
  /* Create a graphic block and add it to the scene's page.
  */
  let block = try engine.block.create(.graphic)
  let fill = try engine.block.createFill(.image)
  try engine.block.setShape(block, shape: engine.block.createShape(.rect))
  try engine.block.setFill(block, fill: fill)
  try engine.block.setString(
    fill,
    property: "fill/image/imageFileURI",
    value: "https://img.ly/static/ubq_samples/imgly_logo.jpg"
  )
  /* The content fill mode 'Contain' ensures the entire image is visible. */
  try engine.block.setEnum(block, property: "contentFill/mode", value: "Contain")
  try engine.block.appendChild(to: page, child: block)
  /* Zoom the scene's camera on our page. */
  try await engine.scene.zoom(to: page)
}
```

Commonly, a scene contains several pages, which in turn contain other blocks such as images and text. A block (or design block) is the main building unit in CE.SDK. Blocks are organized in a hierarchy through parent-child relationships. A scene is a specialized block that acts as the root of this hierarchy. At any time, the engine holds only a single scene. Loading or creating a scene will replace the current one.

## Interacting With The Scene

### Creating or Using an Existing Scene

When using the Engine's API in the context of the CE.SDK editor, there's already an existing scene. You can obtain a handle to this scene by calling the [SceneAPI](https://img.ly/docs/cesdk/ios/concepts/scenes-e8596d/)'s `func get() throws -> DesignBlockID?` method. However, when using the Engine on its own, you first have to create a scene, e.g. using `func create() throws -> DesignBlockID`. See the [Creating Scenes](https://img.ly/docs/cesdk/ios/open-the-editor-23a1db/) guide for more details and options.

```swift
// In engine only mode we have to create our own scene and page.
if try engine.scene.get() == nil {
  let scene = try engine.scene.create()
```

Next, we need a page to place our blocks on. The scene automatically arranges its pages either in a vertical (the default) or horizontal layout.
Again, in the context of the editor, there's already an existing page. To fetch that page, call the [BlockAPI](https://img.ly/docs/cesdk/ios/concepts/blocks-90241e/)'s `func find(byType type: DesignBlockType) throws -> [DesignBlockID]` method and use the first element of the returned array. When only using the engine, you have to create a page yourself and append it to the scene. To do that, create the page using `func create(_ type: DesignBlockType) throws -> DesignBlockID` and append it to the scene with `func appendChild(to parent: DesignBlockID, child: DesignBlockID) throws`.

```swift
  let page = try engine.block.create(.page)
  try engine.block.appendChild(to: scene, child: page)
}

// Find all pages in our scene.
let pages = try engine.block.find(byType: .page)
// Use the first page we found.
let page = pages.first!
```

At this point, you should have a handle to an existing scene as well as a handle to its page. Now it gets interesting when we start to add different types of blocks to the scene's page.

### Modifying the Scene

As an example, we create a graphic block using the [BlockAPI](https://img.ly/docs/cesdk/ios/concepts/blocks-90241e/)'s `create()` method, which we already used for creating our page. Then we set a rect shape and an image fill on this newly created block to give it a visual representation. To see what other kinds of blocks are available, see the [Block Types](https://img.ly/docs/cesdk/ios/concepts/blocks-90241e/) in the API Reference.

```swift
// Create a graphic block and add it to the scene's page.
let block = try engine.block.create(.graphic)
let fill = try engine.block.createFill(.image)
try engine.block.setShape(block, shape: engine.block.createShape(.rect))
try engine.block.setFill(block, fill: fill)
```

We set a property of our newly created image fill by giving it a URL referencing an image file.
We also make sure the entire image stays visible by setting the block's content fill mode to `'Contain'`. To learn more about block properties, check out the [Block Properties](https://img.ly/docs/cesdk/ios/concepts/blocks-90241e/) API Reference.

```swift
try engine.block.setString(
  fill,
  property: "fill/image/imageFileURI",
  value: "https://img.ly/static/ubq_samples/imgly_logo.jpg"
)
// The content fill mode 'Contain' ensures the entire image is visible.
try engine.block.setEnum(block, property: "contentFill/mode", value: "Contain")
```

Finally, for our image to be visible, we have to add it to our page using `appendChild`.

```swift
try engine.block.appendChild(to: page, child: block)
```

To frame everything nicely and put it into view, we direct the scene's camera to zoom on our page.

```swift
// Zoom the scene's camera on our page.
try await engine.scene.zoom(to: page)
```

### Full Code

Here's the full code snippet:

```swift
import Foundation
import IMGLYEngine

@MainActor
func modifyingScenes(engine: Engine) async throws {
  let scene = try engine.scene.get()
  /* In engine only mode we have to create our own scene and page. */
  if scene == nil {
    let scene = try engine.scene.create()
    let page = try engine.block.create(.page)
    try engine.block.appendChild(to: scene, child: page)
  }
  /* Find all pages in our scene. */
  let pages = try engine.block.find(byType: .page)
  /* Use the first page we found. */
  let page = pages.first!
  /* Create a graphic block and add it to the scene's page. */
  let block = try engine.block.create(.graphic)
  let fill = try engine.block.createFill(.image)
  try engine.block.setShape(block, shape: engine.block.createShape(.rect))
  try engine.block.setFill(block, fill: fill)
  try engine.block.setString(
    fill,
    property: "fill/image/imageFileURI",
    value: "https://img.ly/static/ubq_samples/imgly_logo.jpg"
  )
  /* The content fill mode 'Contain' ensures the entire image is visible.
  */
  try engine.block.setEnum(block, property: "contentFill/mode", value: "Contain")
  try engine.block.appendChild(to: page, child: block)
  /* Zoom the scene's camera on our page. */
  try await engine.scene.zoom(to: page)
}
```

## Exploring Scene Contents Using The Scene API

Learn how to use the [CreativeEditor SDK](https://img.ly/products/creative-sdk)'s CreativeEngine to explore scene contents through the `scene` API.

```swift
public func getPages() throws -> [DesignBlockID]
```

Get the sorted list of pages in the scene. - Returns: The sorted list of pages in the scene.

```swift
public func setDesignUnit(_ designUnit: DesignUnit) throws
```

Converts all values of the current scene into the given design unit. - `designUnit:`: The new design unit of the scene.

```swift
public func getDesignUnit() throws -> DesignUnit
```

Returns the design unit of the current scene. - Returns: The current design unit.

```swift
public func getCurrentPage() throws -> DesignBlockID?
```

Get the current page, i.e., the page of the first selected element if this page is at least 25% visible; otherwise, the page nearest to the viewport center. - Returns: The current page in the scene or an error.

```swift
public func findNearestToViewPortCenter(byKind kind: String) throws -> [DesignBlockID]
```

Finds all blocks with the given kind sorted by distance to viewport center. - `kind:`: The kind to search for. - Returns: A list of block ids with the given kind sorted by distance to viewport center.

```swift
public func findNearestToViewPortCenter(byType type: DesignBlockType) throws -> [DesignBlockID]
```

Finds all blocks with the given type sorted by distance to viewport center. - `type:`: The type to search for. - Returns: A list of block ids with the given type sorted by distance to viewport center.
### Full Code

Here's the full code snippet for exploring a scene's contents using the `scene` API:

```swift
let pages = try engine.scene.getPages()
let currentPage = try engine.scene.getCurrentPage()
let nearestPageByType = try engine.scene.findNearestToViewPortCenter(byType: .page).first!
let nearestImageByKind = try engine.scene.findNearestToViewPortCenter(byKind: "image").first!
try engine.scene.setDesignUnit(.px)
/* Now returns DesignUnit.px */
_ = try engine.scene.getDesignUnit()
```

## Exploring Scene Contents Using The Block API

Learn how to use the [CreativeEditor SDK](https://img.ly/products/creative-sdk)'s CreativeEngine to explore scenes through the `block` API.

### Functions

```swift
public func findAll() -> [DesignBlockID]
```

Return all blocks currently known to the engine. - Returns: A list of block ids.

```swift
public func findAllPlaceholders() -> [DesignBlockID]
```

Return all placeholder blocks in the current scene. - Returns: A list of block ids.

```swift
public func find(byType type: DesignBlockType) throws -> [DesignBlockID]
```

Finds all blocks with the given type. - `type:`: The type to search for. - Returns: A list of block ids.

```swift
public func find(byType type: FillType) throws -> [DesignBlockID]
```

Finds all blocks with the given fill type. - `type:`: The type to search for. - Returns: A list of block ids.

```swift
public func find(byType type: ShapeType) throws -> [DesignBlockID]
```

Finds all blocks with the given shape type. - `type:`: The type to search for. - Returns: A list of block ids.

```swift
public func find(byType type: EffectType) throws -> [DesignBlockID]
```

Finds all blocks with the given effect type. - `type:`: The type to search for. - Returns: A list of block ids.

```swift
public func find(byType type: BlurType) throws -> [DesignBlockID]
```

Finds all blocks with the given blur type. - `type:`: The type to search for. - Returns: A list of block ids.
```swift
public func find(byKind kind: String) throws -> [DesignBlockID]
```

Finds all blocks with the given kind. - `kind:`: The kind to search for. - Returns: A list of block ids.

```swift
public func find(byName name: String) -> [DesignBlockID]
```

Finds all blocks with the given name. - `name:`: The name to search for. - Returns: A list of block ids.

### Full Code

Here's the full code snippet for exploring a scene's contents using the `block` API:

```swift
let allIds = engine.block.findAll()
let allPlaceholderIds = engine.block.findAllPlaceholders()
let allPages = try engine.block.find(byType: .page)
let allImageFills = try engine.block.find(byType: .image)
let allStarShapes = try engine.block.find(byType: .star)
let allHalfToneEffects = try engine.block.find(byType: .halfTone)
let allUniformBlurs = try engine.block.find(byType: .uniform)
let allStickers = try engine.block.find(byKind: "sticker")
let ids = engine.block.find(byName: "someName")
```

--- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Undo and History" description: "Manage undo and redo stacks in CE.SDK using multiple histories, callbacks, and API-based controls." platform: ios url: "https://img.ly/docs/cesdk/ios/concepts/undo-and-history-99479d/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt).
**Navigation:** [Concepts](https://img.ly/docs/cesdk/ios/concepts-c9ff51/) > [Undo and History](https://img.ly/docs/cesdk/ios/concepts/undo-and-history-99479d/) ---

```swift reference-only
// Manage history stacks
let newHistory = engine.editor.createHistory()
let oldHistory = engine.editor.getActiveHistory()
engine.editor.setActiveHistory(newHistory)
engine.editor.destroyHistory(oldHistory)

let historyTask = Task {
  for await _ in engine.editor.onHistoryUpdated {
    let canUndo = try engine.editor.canUndo()
    let canRedo = try engine.editor.canRedo()
    print("History updated: \(canUndo) \(canRedo)")
  }
}

// Push a new state to the undo stack
try engine.editor.addUndoStep()

// Perform an undo, if possible.
if try engine.editor.canUndo() {
  try engine.editor.undo()
}

// Perform a redo, if possible.
if try engine.editor.canRedo() {
  try engine.editor.redo()
}
```

In this example, we will show you how to use the [CreativeEditor SDK](https://img.ly/products/creative-sdk)'s CreativeEngine to undo and redo steps in the `editor` API.

## Functions

```swift
public func createHistory() -> History
```

Create a history, which consists of an undo/redo stack for editing operations. There can be multiple histories, but only one can be active at a time. - Returns: The handle to the created history.

```swift
public func destroyHistory(_ history: History)
```

Destroy the given history; returns an error if the handle doesn't refer to a history. - `history:`: The history to be destroyed.

```swift
public func setActiveHistory(_ history: History)
```

Mark the given history as active; returns an error if the handle doesn't refer to a history. All other histories get cleared from the active state. Undo/redo operations only apply to the active history. - `history:`: The history to be marked as active.

```swift
public func getActiveHistory() -> History
```

Get the handle to the currently active history. If there's none, it will be created. - Returns: The handle to the active history.
```swift
public func addUndoStep() throws
```

Adds a new history state to the stack, if undoable changes were made.

```swift
public func undo() throws
```

Undo one step in the history if an undo step is available.

```swift
public func canUndo() throws -> Bool
```

Checks whether an undo step is available. - Returns: `true` if an undo step is available.

```swift
public func redo() throws
```

Redo one step in the history if a redo step is available.

```swift
public func canRedo() throws -> Bool
```

Checks whether a redo step is available. - Returns: `true` if a redo step is available.

```swift
public var onHistoryUpdated: AsyncStream<Void> { get }
```

Subscribe to changes to the undo/redo history.

## Full Code

Here's the full code:

```swift
// Manage history stacks
let newHistory = engine.editor.createHistory()
let oldHistory = engine.editor.getActiveHistory()
engine.editor.setActiveHistory(newHistory)
engine.editor.destroyHistory(oldHistory)

let historyTask = Task {
  for await _ in engine.editor.onHistoryUpdated {
    let canUndo = try engine.editor.canUndo()
    let canRedo = try engine.editor.canRedo()
    print("History updated: \(canUndo) \(canRedo)")
  }
}

// Push a new state to the undo stack
try engine.editor.addUndoStep()

// Perform an undo, if possible.
if try engine.editor.canUndo() {
  try engine.editor.undo()
}

// Perform a redo, if possible.
if try engine.editor.canRedo() {
  try engine.editor.redo()
}
```

--- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Configuration" description: "Learn how to configure CE.SDK to match your application's functional, visual, and performance requirements."
platform: ios url: "https://img.ly/docs/cesdk/ios/configuration-2c1c3d/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Configuration](https://img.ly/docs/cesdk/ios/configuration-2c1c3d/) ---

```swift file=@cesdk_swift_examples/editor-guides-configuration-basics/BasicEditorSolution.swift reference-only
import IMGLYDesignEditor
import SwiftUI

struct BasicEditorSolution: View {
  let settings = EngineSettings(
    license: secrets.licenseKey, // pass nil for evaluation mode with watermark
    userID: "",
    baseURL: URL(string: "https://cdn.img.ly/packages/imgly/cesdk-engine/1.68.0/assets")!
  )

  var editor: some View {
    DesignEditor(settings)
  }

  @State private var isPresented = false

  var body: some View {
    Button("Use the Editor") {
      isPresented = true
    }
    .fullScreenCover(isPresented: $isPresented) {
      ModalEditor {
        editor
      }
    }
  }
}

#Preview {
  BasicEditorSolution()
}
```

In this example, we will show you how to make basic configurations for the mobile editor. The example is based on the `Design Editor`; however, it is the same for all the other [solutions](https://img.ly/docs/cesdk/ios/prebuilt-solutions-d0ed07/).

## Configuration

All the basic configuration settings are part of the `EngineSettings`, which are required to initialize the editor.

```swift highlight-editor
DesignEditor(settings)
```

- `license` - the license to activate the [Engine](https://img.ly/docs/cesdk/ios/get-started/overview-e18f40/) with.

```swift highlight-license
license: secrets.licenseKey, // pass nil for evaluation mode with watermark
```

- `userID` - an optional unique ID tied to your application's user. This helps us accurately calculate monthly active users (MAU).
Especially useful when one person uses the app on multiple devices with a sign-in feature, ensuring they're counted once. Providing it improves data accuracy. The default value is `nil`.

```swift highlight-userID
userID: "",
```

- `baseURL` - used to initialize the engine's [setting](https://img.ly/docs/cesdk/ios/settings-970c98/) before the editor's `onCreate` callback is run. It is the foundational URL for constructing absolute paths from relative ones. This URL enables the loading of specific scenes or assets using their relative paths. The default value points to the versioned IMG.LY CDN `https://cdn.img.ly/packages/imgly/cesdk-engine/$UBQ_VERSION$/assets`, but it should be changed in production environments.

```swift highlight-baseURL
baseURL: URL(string: "https://cdn.img.ly/packages/imgly/cesdk-engine/1.68.0/assets")!,
```

## Full Code

```swift
import IMGLYDesignEditor
import SwiftUI

struct BasicEditorSolution: View {
  let settings = EngineSettings(
    license: secrets.licenseKey,
    userID: "",
    baseURL: URL(string: "https://cdn.img.ly/packages/imgly/cesdk-engine/1.68.0/assets")!
  )

  var editor: some View {
    DesignEditor(settings)
  }

  @State private var isPresented = false

  var body: some View {
    Button("Use the Editor") {
      isPresented = true
    }
    .fullScreenCover(isPresented: $isPresented) {
      ModalEditor {
        editor
      }
    }
  }
}

#Preview {
  BasicEditorSolution()
}
```

--- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Conversion" description: "Convert designs into different formats such as PDF, PNG, MP4, and more using CE.SDK tools."
platform: ios url: "https://img.ly/docs/cesdk/ios/conversion-c3fbb3/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Conversion](https://img.ly/docs/cesdk/ios/conversion-c3fbb3/) --- --- ## Related Pages - [Overview](https://img.ly/docs/cesdk/ios/conversion/overview-44dc58/) - Convert designs into different formats such as PDF, PNG, MP4, and more using CE.SDK tools. - [Convert Compositions To PDF](https://img.ly/docs/cesdk/ios/conversion/to-pdf-eb937f/) - Convert your compositions to PDF for export and print. --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Overview" description: "Convert designs into different formats such as PDF, PNG, MP4, and more using CE.SDK tools." platform: ios url: "https://img.ly/docs/cesdk/ios/conversion/overview-44dc58/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). 
**Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Conversion](https://img.ly/docs/cesdk/ios/conversion-c3fbb3/) > [Overview](https://img.ly/docs/cesdk/ios/conversion/overview-44dc58/) --- CreativeEditor SDK (CE.SDK) allows you to export designs into a variety of formats, making it easy to prepare assets for web publishing, printing, storage, and other workflows. You can trigger conversions either programmatically through the SDK’s API or manually using the built-in export options available in the UI. [Explore Demos](https://img.ly/showcases/cesdk?tags=android) [Get Started](https://img.ly/docs/cesdk/ios/get-started/overview-e18f40/)

## Supported Input and Output Formats

CE.SDK accepts a range of input formats when working with designs. When exporting or converting designs, the SDK supports output formats such as PDF, PNG, and MP4. Each format serves different use cases, giving you the flexibility to adapt designs for your application’s needs.

--- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Convert Compositions To PDF" description: "Convert your compositions to PDF for export and print." platform: ios url: "https://img.ly/docs/cesdk/ios/conversion/to-pdf-eb937f/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt).
**Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Conversion](https://img.ly/docs/cesdk/ios/conversion-c3fbb3/) > [To PDF](https://img.ly/docs/cesdk/ios/conversion/to-pdf-eb937f/) --- Automate PDF creation entirely in Swift. Load assets, build single- or multi-page scenes, set page size and DPI, and export to PDF, all without presenting the editor UI.

## What You’ll Learn

- Create scenes and pages programmatically.
- Place images and fit/cover them on the page.
- Export a single page or the entire scene to PDF.
- Apply print options: scene DPI, high-compatibility, underlayer (spot color + offset).
- Persist/share the resulting PDF file.

## When to Use It

- Batch generation, server-style workflows on device.
- Custom UIs where you control layout and export.
- Printing to non-white stock requiring an underlayer.

## Plain PDF Creation

For creating PDFs, the CE.SDK provides an `.export` function in the `block` API. Exporting a `page` creates a single-page PDF. Export a `scene` when you want a multi-page PDF. The `Data` objects that the functions below return are PDFs. Your app can share them, send them to print, or handle them however your workflow requires.

```swift
import IMGLYEngine

@MainActor
func exportCurrentPageToPDF(engine: Engine) async throws -> Data {
  let page = try engine.scene.getCurrentPage()!
  return try await engine.block.export(page, mimeType: .pdf)
}

@MainActor
func exportWholeSceneToPDF(engine: Engine) async throws -> Data {
  let scene = try engine.scene.get()!
  return try await engine.block.export(scene, mimeType: .pdf)
}
```

> **Note:** Your app can create `.jpg`, `.png`, and other formats by changing the `mimeType` parameter of the `.export` function. Video and audio exports use other export functions.

## Build a Multi-Page Scene From Images

The function below starts with an array of image URLs. For each URL, the code: 1. Creates a page and adds it to the scene. 2.
Creates a rectangular block to hold the image. 3. Loads the contents of each URL as an image fill for a block. 4. Resizes the block to be the same size as the page. 5. Creates a PDF from the entire scene, which contains every page.

```swift
@MainActor
func buildMultiPagePDF(from urls: [URL], engine: Engine) async throws -> Data {
  // Create a scene
  let scene = try engine.scene.create()
  for url in urls {
    // Create a page and add it to the scene
    let page = try engine.block.create(.page)
    try engine.block.appendChild(to: scene, child: page)
    // Create a rectangular block to hold the image
    let graphic = try engine.block.create(.graphic)
    let shape = try engine.block.createShape(.rect)
    try engine.block.setShape(graphic, shape: shape)
    try engine.block.appendChild(to: page, child: graphic)
    // Fill the block with the image from the URL
    let fill = try engine.block.createFill(.image)
    try engine.block.setString(fill, property: "fill/image/imageFileURI", value: url.absoluteString)
    try engine.block.setFill(graphic, fill: fill)
    // Resize the block to fill its parent
    try engine.block.fillParent(graphic)
  }
  // Convert the scene and all of its children to a PDF.
  return try await engine.block.export(scene, mimeType: .pdf)
}
```

## Page Size & DPI

PDF size follows the page size. By default, scenes and pages have a DPI of 300 but don’t have a predefined paper size. Set the width and height of the pages **before** export. The helper below defines some common page sizes.
```swift
enum PagePreset {
  case usLetterPortrait, usLetterLandscape, a4Portrait, a4Landscape, custom(Float, Float)
}

@MainActor
func setPageSize(page: DesignBlockID, preset: PagePreset, engine: Engine) throws {
  let (w, h): (Float, Float) = switch preset {
  case .usLetterPortrait: (2550, 3300)
  case .usLetterLandscape: (3300, 2550)
  case .a4Portrait: (2480, 3508)
  case .a4Landscape: (3508, 2480)
  case let .custom(width, height): (width, height)
  }
  try engine.block.setWidth(page, value: w)
  try engine.block.setHeight(page, value: h)
}
```

The DPI of the scene:

- Affects the rasterization quality of the PDF (see the next section).
- Doesn’t change the page size when modified.
- Can be set and read using the `"scene/dpi"` property of a scene.

## High-compatibility & Underlayer (print)

The `.export` function accepts an optional options structure for configuration. This structure is the same for **every** mime type that `.export` supports, not just PDF. For a PDF export, these options apply:

- `exportPdfWithHighCompatibility` - Exports the PDF document with higher compatibility across different PDF viewers. Bitmap images and some effects like gradients are rasterized with the scene’s DPI setting instead of being embedded directly.
- `exportPdfWithUnderlayer` - Exports the PDF document with an underlayer. The export generates the underlayer by adding a graphics block, matching the shape of the existing elements, behind them. This is useful when printing on non-standard media, like glass.
- `underlayerSpotColorName` - The name of the spot color to use to fill the underlayer.
- `underlayerOffset` - The adjustment in size of the shape of the underlayer.
```swift
@MainActor
func exportWithPrintOptions(engine: Engine) async throws -> Data {
  let scene = try engine.scene.get()
  try engine.block.setFloat(scene, property: "scene/dpi", value: 300)
  engine.editor.setSpotColor(name: "RDG_WHITE", r: 0.8, g: 0.8, b: 0.8)
  let opts = ExportOptions(
    exportPdfWithHighCompatibility: true,
    exportPdfWithUnderlayer: true,
    underlayerSpotColorName: "RDG_WHITE",
    underlayerOffset: -2
  )
  return try await engine.block.export(scene, mimeType: .pdf, options: opts)
}
```

## Save/share

Once the export completes, your app can save the PDF data to the user’s directories or write it to a temporary URL and present `UIActivityViewController` or `ShareLink`.

## Troubleshooting

**❌ PDF is rasterized / too big**: Disable `exportPdfWithHighCompatibility` to preserve vectors where possible. If you need compatibility, reduce `"scene/dpi"` (for example, 150 instead of 300) to control size.

**❌ Underlayer isn’t visible in the printed result**: Make sure you **don’t flatten the PDF** in post-processing, the spot color name matches the print shop’s setup exactly, and the underlayer offset is appropriate for your media.

**❌ Colors don’t match the brand**: Confirm you’re using the correct color model. For brand-critical workflows, coordinate spot color naming with your printer and avoid unnecessary conversions downstream. Explore more in the [spot color](https://img.ly/docs/cesdk/ios/colors-a9b79c/) guide.

**❌ Can’t change PDF page size at export**: That’s expected. Set page `width`/`height` on your pages before export; the PDF follows the page size. The `targetWidth`/`targetHeight` export options are for some image formats (PNG/JPEG/WEBP), not PDF.

**❌ Multi-page export only gives one page**: Ensure you export the scene (not just a page) and that your scene actually contains more than one page block.
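As a minimal sketch of the save-and-share step described in the Save/share section, you can write the exported `Data` to a temporary file using plain Foundation APIs (the `export.pdf` file name is an arbitrary choice for this example):

```swift
import Foundation

// Write exported PDF data to a temporary URL so it can be handed
// to ShareLink or UIActivityViewController.
func writePDFToTemporaryURL(_ data: Data) throws -> URL {
    let url = FileManager.default.temporaryDirectory
        .appendingPathComponent("export.pdf")
    try data.write(to: url, options: .atomic)
    return url
}
```

In SwiftUI, the returned URL can be passed directly to `ShareLink(item:)`; in UIKit, hand it to a `UIActivityViewController` as an activity item.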
## Next Steps

Now that you're able to export your creations as PDF, explore some related topics to perfect your workflow:

- [Save Scenes](https://img.ly/docs/cesdk/ios/export-save-publish/save-c8b124/) – persist .scene/.zip files and restore them later to reproduce exports.
- [Asset Sources & Upload](https://img.ly/docs/cesdk/ios/import-media-4e3703/) – bring user images/videos in, then export as PDF.
- [Image Crop & Fit](https://img.ly/docs/cesdk/ios/edit-image/transform-9d189b/) – control how images fill pages before exporting (cover vs fit).

---

## More Resources

- **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation
- **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs)
- **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples
- **[Support](mailto:support@img.ly)** - Contact IMG.LY support

---

---
title: "Create Compositions"
description: "Combine and arrange multiple elements to create complex, multi-page, or layered design compositions."
platform: ios
url: "https://img.ly/docs/cesdk/ios/create-composition-db709c/"
---

> This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt).

**Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Create and Edit Compositions](https://img.ly/docs/cesdk/ios/create-composition-db709c/)

---

---

## Related Pages

- [Overview](https://img.ly/docs/cesdk/ios/create-composition/overview-5b19c5/) - Combine and arrange multiple elements to create complex, multi-page, or layered design compositions.
- [Positioning and Alignment](https://img.ly/docs/cesdk/ios/insert-media/position-and-align-cc6b6a/) - Precisely position, align, and distribute objects using guides, snapping, and alignment tools.
- [Group and Ungroup Objects](https://img.ly/docs/cesdk/ios/create-composition/group-and-ungroup-62565a/) - Group multiple elements to move or transform them together; ungroup to edit them individually. - [Layer Management](https://img.ly/docs/cesdk/ios/create-composition/layer-management-18f07a/) - Organize design elements using a layer stack for precise control over stacking and visibility. - [Blend Modes](https://img.ly/docs/cesdk/ios/create-composition/blend-modes-ad3519/) - Apply blend modes to elements to control how colors and layers interact visually. --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Blend Modes" description: "Apply blend modes to elements to control how colors and layers interact visually." platform: ios url: "https://img.ly/docs/cesdk/ios/create-composition/blend-modes-ad3519/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). 
**Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Create and Edit Compositions](https://img.ly/docs/cesdk/ios/create-composition-db709c/) > [Blend Modes](https://img.ly/docs/cesdk/ios/create-composition/blend-modes-ad3519/)

---

```swift reference-only
try engine.block.supportsOpacity(image)
try engine.block.setOpacity(image, value: 0.5)
try engine.block.getOpacity(image)

try engine.block.supportsBlendMode(image)
try engine.block.setBlendMode(image, mode: .multiply)
try engine.block.getBlendMode(image)

if try engine.block.supportsBackgroundColor(page) {
  try engine.block.setBackgroundColor(page, r: 1, g: 0, b: 0, a: 1) // Red
  try engine.block.getBackgroundColor(page)
  try engine.block.setBackgroundColorEnabled(page, enabled: true)
  try engine.block.isBackgroundColorEnabled(page)
}
```

In this example, we will show you how to use the [CreativeEditor SDK](https://img.ly/products/creative-sdk)'s CreativeEngine to modify a block's appearance through the `block` API.

## Common Properties

Common properties are properties that occur on multiple block types. For instance, fill color properties are available for all the shape blocks and the text block. That's why we built convenient setter and getter functions for these properties, so you don't have to use the generic setters and getters or provide a specific property path. There are also `supports*` functions to query whether a block supports a set of common properties.

### Opacity

Set the translucency of the entire block.

```swift
public func supportsOpacity(_ id: DesignBlockID) throws -> Bool
```

Query if the given block has an opacity.

- `id:`: The block to query.
- Returns: `true`, if the block has an opacity.

```swift
public func setOpacity(_ id: DesignBlockID, value: Float) throws
```

Set the opacity of the given design block. Required scope: "layer/opacity"

- `id`: The block whose opacity should be set.
- `value`: The opacity to be set. The valid range is 0 to 1.
```swift
public func getOpacity(_ id: DesignBlockID) throws -> Float
```

Get the opacity of the given design block.

- `id:`: The block whose opacity should be queried.
- Returns: The opacity.

### Blend Mode

Define the blending behavior of a block.

```swift
public func supportsBlendMode(_ id: DesignBlockID) throws -> Bool
```

Query if the given block has a blend mode.

- `id:`: The block to query.
- Returns: `true`, if the block has a blend mode.

```swift
public func setBlendMode(_ id: DesignBlockID, mode: BlendMode) throws
```

Set the blend mode of the given design block. Required scope: "layer/blendMode"

- `id`: The block whose blend mode should be set.
- `mode`: The blend mode to be set.

```swift
public func getBlendMode(_ id: DesignBlockID) throws -> BlendMode
```

Get the blend mode of the given design block.

- `id:`: The block whose blend mode should be queried.
- Returns: The blend mode.

### Background Color

Manipulate the background of a block. To understand the difference between fill and background color, consider the text block: the glyphs of the text itself are colored by the fill color, while the rectangular background given by the bounds of the block on which the text is drawn is colored by the background color.

```swift
public func supportsBackgroundColor(_ id: DesignBlockID) throws -> Bool
```

Query if the given block has background color properties.

- `id:`: The block to query.
- Returns: `true`, if the block has background color properties.

```swift
public func setBackgroundColor(_ id: DesignBlockID, r: Float, g: Float, b: Float, a: Float = 1) throws
```

Set the background color of the given design block. Required scope: "fill/change"

- `id`: The block whose background color should be set.
- `r`: The red color component in the range of 0 to 1.
- `g`: The green color component in the range of 0 to 1.
- `b`: The blue color component in the range of 0 to 1.
- `a`: The alpha color component in the range of 0 to 1.
```swift
public func getBackgroundColor(_ id: DesignBlockID) throws -> RGBA
```

Get the background color of the given design block.

- `id:`: The block whose background color should be queried.
- Returns: The background color.

```swift
public func setBackgroundColorEnabled(_ id: DesignBlockID, enabled: Bool) throws
```

Enable or disable the background of the given design block. Required scope: "fill/change"

- `id`: The block whose background should be enabled or disabled.
- `enabled`: If `true`, the background will be enabled.

```swift
public func isBackgroundColorEnabled(_ id: DesignBlockID) throws -> Bool
```

Query if the background of the given design block is enabled.

- `id:`: The block whose background state should be queried.
- Returns: `true`, if background is enabled.

## Full Code

Here's the full code:

```swift
try engine.block.supportsOpacity(image)
try engine.block.setOpacity(image, value: 0.5)
try engine.block.getOpacity(image)

try engine.block.supportsBlendMode(image)
try engine.block.setBlendMode(image, mode: .multiply)
try engine.block.getBlendMode(image)

if try engine.block.supportsBackgroundColor(page) {
  try engine.block.setBackgroundColor(page, r: 1, g: 0, b: 0, a: 1) // Red
  try engine.block.getBackgroundColor(page)
  try engine.block.setBackgroundColorEnabled(page, enabled: true)
  try engine.block.isBackgroundColorEnabled(page)
}
```

---

## More Resources

- **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation
- **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs)
- **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples
- **[Support](mailto:support@img.ly)** - Contact IMG.LY support

---

---
title: "Group and Ungroup Objects"
description: "Group multiple elements to move or transform them together; ungroup to edit them individually."
platform: ios
url: "https://img.ly/docs/cesdk/ios/create-composition/group-and-ungroup-62565a/"
---

> This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt).

**Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Create and Edit Compositions](https://img.ly/docs/cesdk/ios/create-composition-db709c/) > [Group and Ungroup Objects](https://img.ly/docs/cesdk/ios/create-composition/group-and-ungroup-62565a/)

---

```swift reference-only
// Create blocks and append to scene
let member1 = try engine.block.create(.graphic)
let member2 = try engine.block.create(.graphic)
try engine.block.appendChild(to: scene, child: member1)
try engine.block.appendChild(to: scene, child: member2)

// Check whether the blocks may be grouped
if try engine.block.isGroupable([member1, member2]) {
  let group = try engine.block.group([member1, member2])
  try engine.block.setSelected(group, selected: true)
  try engine.block.enterGroup(group)
  try engine.block.setSelected(member1, selected: true)
  try engine.block.exitGroup(member1)
  try engine.block.ungroup(group)
}
```

In this example, we will show you how to use the [CreativeEditor SDK](https://img.ly/products/creative-sdk)'s CreativeEngine to group blocks through the `block` API.

## Grouping

Multiple blocks can be grouped together to form a cohesive unit. Since a group is itself a block, it can in turn be part of another group.

> **Note:** **What cannot be grouped**
> * A scene
> * A block that already is part of a group

```swift
public func isGroupable(_ ids: [DesignBlockID]) throws -> Bool
```

Confirms that a given set of blocks can be grouped together.

- `ids:`: A non-empty array of block ids.
- Returns: Whether the blocks can be grouped together.
```swift public func group(_ ids: [DesignBlockID]) throws -> DesignBlockID ``` Group blocks together. - `ids:`: A non-empty array of block ids. - Returns: The block id of the created group. ```swift public func ungroup(_ id: DesignBlockID) throws ``` Ungroups a group. - `id:`: The group id from a previous call to `group`. ```swift public func enterGroup(_ id: DesignBlockID) throws ``` Changes selection from selected group to a block within that group. Nothing happens if `id` is not a group. Required scope: "editor/select" - `id:`: The group id from a previous call to `group`. ```swift public func exitGroup(_ id: DesignBlockID) throws ``` Changes selection from a group's selected block to that group. Nothing happens if the `id` is not part of a group. Required scope: "editor/select" - `id:`: A block id. ## Full Code Here's the full code: ```swift // Create blocks and append to scene let member1 = try engine.block.create(.graphic) let member2 = try engine.block.create(.graphic) try engine.block.appendChild(to: scene, child: member1) try engine.block.appendChild(to: scene, child: member2) // Check whether the blocks may be grouped if try engine.block.isGroupable([member1, member2]) { let group = try engine.block.group([member1, member2]) try engine.block.setSelected(group, selected: true) try engine.block.enterGroup(group) try engine.block.setSelected(member1, selected: true) try engine.block.exitGroup(member1) try engine.block.ungroup(group) } ``` --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Layer Management" description: "Organize design elements using a layer stack for precise control over stacking 
and visibility." platform: ios url: "https://img.ly/docs/cesdk/ios/create-composition/layer-management-18f07a/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Create and Edit Compositions](https://img.ly/docs/cesdk/ios/create-composition-db709c/) > [Layers](https://img.ly/docs/cesdk/ios/create-composition/layer-management-18f07a/) --- ```swift reference-only try engine.block.insertChild(into: page, child: block, at: 0) let parent = try engine.block.getParent(block) let childIds = try engine.block.getChildren(block) try engine.block.appendChild(to: parent!, child: block) ``` In this example, we will show you how to use the [CreativeEditor SDK](https://img.ly/products/creative-sdk)'s CreativeEngine to modify the hierarchy of blocks through the `block` API. ## Manipulate the hierarchy of blocks > **Note:** Only blocks that are direct or indirect children of a `page` block are > rendered. Scenes without any `page` child may not be properly displayed by the > CE.SDK editor. ```swift public func getParent(_ id: DesignBlockID) throws -> DesignBlockID? ``` Query a block's parent. - `id:`: The block to query. - Returns: The parent's handle or `nil` if the block has no parent. ```swift public func getChildren(_ id: DesignBlockID) throws -> [DesignBlockID] ``` Get all children of the given block. Children are sorted in their rendering order: Last child is rendered in front of other children. - `id:`: The block to query. - Returns: A list of block ids. ```swift public func insertChild(into parent: DesignBlockID, child: DesignBlockID, at index: Int) throws ``` Insert a new or existing child at a certain position in the parent's children. 
Required scope: "editor/add" - `parent`: The block whose children should be updated. - `child`: The child to insert. Can be an existing child of `parent`. - `index`: The index to insert or move to. ```swift public func appendChild(to parent: DesignBlockID, child: DesignBlockID) throws ``` Appends a new or existing child to a block's children. Required scope: "editor/add" - `parent`: The block whose children should be updated. - `child`: The child to insert. Can be an existing child of `parent`. When adding a block to a new parent, it is automatically removed from its previous parent. ## Full Code Here's the full code: ```swift try engine.block.insertChild(into: page, child: block, at: 0) let parent = try engine.block.getParent(block) let childIds = try engine.block.getChildren(block) try engine.block.appendChild(to: parent!, child: block) ``` --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Overview" description: "Combine and arrange multiple elements to create complex, multi-page, or layered design compositions." platform: ios url: "https://img.ly/docs/cesdk/ios/create-composition/overview-5b19c5/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). 
**Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Create and Edit Compositions](https://img.ly/docs/cesdk/ios/create-composition-db709c/) > [Overview](https://img.ly/docs/cesdk/ios/create-composition/overview-5b19c5/)

---

In CreativeEditor SDK (CE.SDK), a *composition* is an arrangement of multiple design elements—such as images, text, shapes, graphics, and effects—combined into a single, cohesive visual layout. Unlike working with isolated elements, compositions allow you to design complex, multi-element visuals that tell a richer story or support more advanced use cases. All composition processing is handled entirely on the client side, ensuring fast, secure, and efficient editing without requiring server infrastructure.

You can use compositions to create a wide variety of projects, including social media posts, marketing materials, collages, and multi-page exports like PDFs. Whether you are building layouts manually through the UI or generating them dynamically with code, compositions give you the flexibility and control to design at scale.

[Explore Demos](https://img.ly/showcases/cesdk?tags=ios) [Get Started](https://img.ly/docs/cesdk/ios/get-started/overview-e18f40/)

## Exporting Compositions

CE.SDK compositions can be exported in several formats.

---

## More Resources

- **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation
- **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs)
- **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples
- **[Support](mailto:support@img.ly)** - Contact IMG.LY support

---

---
title: "Create a precompiled XCFramework for offline builds"
description: "Compiling CE.SDK Swift packages and other project dependencies to a binary XCFramework to support easy building in airgapped environments."
platform: ios
url: "https://img.ly/docs/cesdk/ios/create-prebuilt-xcframework-c67971/"
---

> This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt).

**Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Prebuilt XCFramework](https://img.ly/docs/cesdk/ios/create-prebuilt-xcframework-c67971/)

---

This guide walks you through compiling the IMGLYUI Swift dependency and its dependencies into an XCFramework usable for Xcode builds without internet access or Swift Package Manager access.

## Requirements

To work with the SDK, you'll need:

- A Mac running a recent version of [Xcode](https://developer.apple.com/xcode/)
- macOS Tahoe (26) or newer, as required by Scipio
- Your application project for reference

## Install [Scipio](https://github.com/giginet/Scipio)

We will make use of the [Scipio](https://github.com/giginet/Scipio) tool, which automates the process of building XCFrameworks from Swift Package Manager dependencies.
You can install Scipio with standard Swift package management tools such as [nest](https://github.com/mtj0928/nest), [Mint](https://github.com/yonaskolb/Mint) or directly from source:

```bash
nest install giginet/Scipio
scipio --help

# Or
mint install giginet/Scipio
mint run scipio --help

# Or
git clone https://github.com/giginet/Scipio.git
cd Scipio
swift run -c release scipio --help
```

## Prepare a dummy Swift Package Manager project to pull in all the dependencies for precompilation

First, create an empty directory somewhere and create a `Package.swift` file inside with the following contents:

```swift
// swift-tools-version: 6.2
import PackageDescription

// Dummy package to bundle dependencies as a precompiled XCFramework
let package = Package(
  name: "DummyApp",
  // Match the app target version here
  platforms: [.iOS(.v16)],
  products: [
    .library(name: "DummyApp", targets: ["DummyApp"])
  ],
  // Custom dependencies can be added here
  dependencies: [
    .package(url: "https://github.com/imgly/IMGLYUI-swift.git", exact: "$UBQ_VERSION$"),
    // If you use these libraries in your app, make sure to match exact versions here
    .package(url: "https://github.com/siteline/SwiftUI-Introspect.git", exact: "26.0.0"),
    .package(url: "https://github.com/onevcat/Kingfisher.git", exact: "8.5.0"),
  ],
  targets: [
    .target(
      name: "DummyApp",
      // Make sure to add any custom packages to the list here too
      dependencies: [.product(name: "IMGLYUI", package: "IMGLYUI-swift")]
    )
  ]
)
```

You can tweak the dependency lists and versions as needed to precompile all your project dependencies into XCFramework bundles. Then, create an empty source file in `Sources/DummyApp/DummyApp.swift` to match the package definition.
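For example, the empty source file can be created from the package directory with standard shell commands:

```bash
mkdir -p Sources/DummyApp
touch Sources/DummyApp/DummyApp.swift
```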
Make sure the Swift package builds by running `xcodebuild` in the package directory:

```bash
xcodebuild -scheme DummyApp -destination 'platform=iOS Simulator,arch=arm64,OS=26.0,name=iPhone SE (3rd generation)' build
```

## Compile XCFrameworks for all the dependencies with Scipio

Run the following command to create a [mergeable](https://developer.apple.com/documentation/xcode/configuring-your-project-to-use-mergeable-libraries) XCFramework for every dependency of the project (including transitive dependencies):

```bash
scipio prepare --support-simulators --framework-type mergeable --enable-library-evolution --overwrite
```

## Use the resulting XCFrameworks

The resulting frameworks are located in the `XCFrameworks` subdirectory by default, and can be added to Xcode projects as dependencies.

---

## More Resources

- **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation
- **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs)
- **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples
- **[Support](mailto:support@img.ly)** - Contact IMG.LY support

---

---
title: "Create Templates"
description: "Learn how to create, import, and manage reusable templates to streamline design creation in CE.SDK."
platform: ios
url: "https://img.ly/docs/cesdk/ios/create-templates-3aef79/"
---

> This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt).
**Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Create and Use Templates](https://img.ly/docs/cesdk/ios/create-templates-3aef79/) --- --- ## Related Pages - [Overview](https://img.ly/docs/cesdk/ios/create-templates/overview-4ebe30/) - Learn how to create, import, and manage reusable templates to streamline design creation in CE.SDK. - [Create From Scratch](https://img.ly/docs/cesdk/ios/create-templates/from-scratch-663cda/) - Build and save reusable CE.SDK templates in Swift for iOS, macOS, and Mac Catalyst. - [Dynamic Content](https://img.ly/docs/cesdk/ios/create-templates/add-dynamic-content-53fad7/) - Use variables and placeholders to inject dynamic data into templates at design or runtime. - [Lock the Template](https://img.ly/docs/cesdk/ios/create-templates/lock-131489/) - Restrict editing access to specific elements or properties in a template to enforce design rules. - [Overview](https://img.ly/docs/cesdk/ios/use-templates/overview-ae74e1/) - Learn how to browse, apply, and dynamically populate templates in CE.SDK to streamline design workflows. - [Apply a Template](https://img.ly/docs/cesdk/ios/use-templates/apply-template-35c73e/) - Learn how to apply template scenes via API in the CreativeEditor SDK. - [Generate From Templates](https://img.ly/docs/cesdk/ios/use-templates/generate-334e15/) - Learn how to load, apply, and populate CE.SDK templates in Swift for iOS, macOS, and Mac Catalyst. --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Dynamic Content" description: "Use variables and placeholders to inject dynamic data into templates at design or runtime." 
platform: ios url: "https://img.ly/docs/cesdk/ios/create-templates/add-dynamic-content-53fad7/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Create and Use Templates](https://img.ly/docs/cesdk/ios/create-templates-3aef79/) > [Insert Dynamic Content](https://img.ly/docs/cesdk/ios/create-templates/add-dynamic-content-53fad7/) --- --- ## Related Pages - [Text Variables](https://img.ly/docs/cesdk/ios/create-templates/add-dynamic-content/text-variables-7ecb50/) - Use variables in scene documents to update content automatically. - [Placeholders](https://img.ly/docs/cesdk/ios/create-templates/add-dynamic-content/placeholders-d9ba8a/) - Use placeholders to mark editable image, video, or text areas within a locked template layout. --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Placeholders" description: "Use placeholders to mark editable image, video, or text areas within a locked template layout." platform: ios url: "https://img.ly/docs/cesdk/ios/create-templates/add-dynamic-content/placeholders-d9ba8a/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). 
**Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Create and Use Templates](https://img.ly/docs/cesdk/ios/create-templates-3aef79/) > [Insert Dynamic Content](https://img.ly/docs/cesdk/ios/create-templates/add-dynamic-content-53fad7/) > [Placeholders](https://img.ly/docs/cesdk/ios/create-templates/add-dynamic-content/placeholders-d9ba8a/) --- ```swift reference-only // Check if block supports placeholder behavior if try engine.block.supportsPlaceholderBehavior(block) { // Enable the placeholder behavior try engine.block.setPlaceholderBehaviorEnabled(block, enabled: true) let placeholderBehaviorIsEnabled = try engine.block.isPlaceholderBehaviorEnabled(block) } // Enable the placeholder capabilities (interaction in Adopter mode) try engine.block.setPlaceholderEnabled(block, enabled: true) let placeholderIsEnabled = try engine.block.isPlaceholderEnabled(block) // Check if block supports placeholder controls if try engine.block.supportsPlaceholderControls(block) { // Enable the visibility of the placeholder overlay pattern try engine.block.setPlaceholderControlsOverlayEnabled(block, enabled: true) let overlayEnabled = try engine.block.isPlaceholderControlsOverlayEnabled(block) // Enable the visibility of the placeholder button try engine.block.setPlaceholderControlsButtonEnabled(block, enabled: true) let buttonEnabled = try engine.block.isPlaceholderControlsButtonEnabled(block) } ``` In this example, we will demonstrate how to use the [CreativeEditor SDK](https://img.ly/products/creative-sdk)'s CreativeEngine to manage placeholder behavior and controls through the block API. ## Placeholder Behavior and Controls ```swift public func supportsPlaceholderBehavior(_ id: DesignBlockID) throws -> Bool ``` Query if the given block supports placeholder behavior. - `id:`: The block to query. - Returns: `true`, if the block supports placeholder behavior. 
```swift
public func setPlaceholderBehaviorEnabled(_ id: DesignBlockID, enabled: Bool) throws
```

Enable or disable the placeholder behavior for a block.

- `id`: The block whose placeholder behavior should be enabled or disabled.
- `enabled`: Whether the behavior should be enabled or disabled.

```swift
public func isPlaceholderBehaviorEnabled(_ id: DesignBlockID) throws -> Bool
```

Query whether the given block has placeholder behavior enabled.

- `id`: The block to query.
- Returns: `true` if the block has placeholder behavior enabled.

```swift
public func setPlaceholderEnabled(_ id: DesignBlockID, enabled: Bool) throws
```

Enable or disable the placeholder function for a block.

- `id`: The block whose placeholder function should be enabled or disabled.
- `enabled`: Whether the function should be enabled or disabled.

```swift
public func isPlaceholderEnabled(_ id: DesignBlockID) throws -> Bool
```

Query whether the placeholder function for a block is enabled.

- `id`: The block whose placeholder function state should be queried.
- Returns: The enabled state of the placeholder function.

```swift
public func supportsPlaceholderControls(_ id: DesignBlockID) throws -> Bool
```

Checks whether the block supports placeholder controls.

- `id`: The block to query.
- Returns: `true` if the block supports placeholder controls.

```swift
public func setPlaceholderControlsOverlayEnabled(_ id: DesignBlockID, enabled: Bool) throws
```

Enable or disable the visibility of the placeholder overlay pattern for a block.

- `id`: The block whose placeholder overlay should be enabled or disabled.
- `enabled`: Whether the placeholder overlay should be shown or not.

```swift
public func isPlaceholderControlsOverlayEnabled(_ id: DesignBlockID) throws -> Bool
```

Query whether the placeholder overlay pattern for a block is shown.

- `id`: The block whose placeholder overlay visibility state should be queried.
- Returns: The visibility state of the block's placeholder overlay pattern.
```swift
public func setPlaceholderControlsButtonEnabled(_ id: DesignBlockID, enabled: Bool) throws
```

Enable or disable the visibility of the placeholder button for a block.

- `id`: The block whose placeholder button should be enabled or disabled.
- `enabled`: Whether the placeholder button should be shown or not.

```swift
public func isPlaceholderControlsButtonEnabled(_ id: DesignBlockID) throws -> Bool
```

Query whether the placeholder button for a block is shown.

- `id`: The block whose placeholder button visibility state should be queried.
- Returns: The visibility state of the block's placeholder button.

## Full Code

Here's the full code:

```swift
// Check if the block supports placeholder behavior
if try engine.block.supportsPlaceholderBehavior(block) {
  // Enable the placeholder behavior
  try engine.block.setPlaceholderBehaviorEnabled(block, enabled: true)
  let placeholderBehaviorIsEnabled = try engine.block.isPlaceholderBehaviorEnabled(block)
}

// Enable the placeholder capabilities (interaction in Adopter mode)
try engine.block.setPlaceholderEnabled(block, enabled: true)
let placeholderIsEnabled = try engine.block.isPlaceholderEnabled(block)

// Check if the block supports placeholder controls
if try engine.block.supportsPlaceholderControls(block) {
  // Enable the visibility of the placeholder overlay pattern
  try engine.block.setPlaceholderControlsOverlayEnabled(block, enabled: true)
  let overlayEnabled = try engine.block.isPlaceholderControlsOverlayEnabled(block)

  // Enable the visibility of the placeholder button
  try engine.block.setPlaceholderControlsButtonEnabled(block, enabled: true)
  let buttonEnabled = try engine.block.isPlaceholderControlsButtonEnabled(block)
}
```

---

## More Resources

- **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation
- **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs)
- **[Web Documentation](https://img.ly/docs/cesdk/ios/)** -
Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Text Variables" description: "Use variables in scene documents to update content automatically." platform: ios url: "https://img.ly/docs/cesdk/ios/create-templates/add-dynamic-content/text-variables-7ecb50/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Create and Use Templates](https://img.ly/docs/cesdk/ios/create-templates-3aef79/) > [Insert Dynamic Content](https://img.ly/docs/cesdk/ios/create-templates/add-dynamic-content-53fad7/) > [Text Variables](https://img.ly/docs/cesdk/ios/create-templates/add-dynamic-content/text-variables-7ecb50/) --- Text variables let you design once and personalize infinitely. Instead of writing “Erika Mustermann” directly into a template, you insert the token `{{fullName}}`. At render time CE.SDK swaps that token for any value you supply in code or via an external data source. This is ideal for certificates, mail-merge, and multichannel campaigns. In this guide, you’ll learn how to set up variables in text areas at design time and populate them with dynamic text at run time. ## What You’ll Learn - Add a variable to a text block and to a scene. - Change the value of a variable at runtime. - Detect whether a block references any variables. - Discover and list all variables in a scene. ## When to Use It - You need mass personalization (names, codes, dates) across many exports. - The layout is fixed but copy changes per audience or locale. - You’re merging external data (CSV/JSON/API) into a single design. - You want a locked layout where only specific text is editable. - You run A/B tests or multi-variant creatives with consistent design. 
- You ship reusable templates across campaigns and need a quick reset between runs.

### Adding a Token to a Text Block

The most common use case for variables is to create scenes and templates with text blocks whose text value is dynamic:

- Lock properties such as font and position at design time.
- Update the text value at runtime.
- Preserve any styling applied to the block.

To place a variable onto the canvas:

1. Add a text block to a scene or template document.
2. Use curly-bracket notation with the variable name.

![Document with a dog-name and done-date variable.](assets/variables-ios-160-1.png)

In the preceding image:

- `{{dog-name}}` and `{{done-date}}` are **tokens** acting as placeholders in the text.
- The text references variables with the keys `dog-name` and `done-date`.

Tokens work this way:

- They **don't create variables**.
- They mark **where a variable's value appears** if a value exists in the engine's variable store.

A token can be part of a longer string. For example, "Greetings, `{{dog-name}}`" renders as "Greetings, Ruth" when the `dog-name` variable is set to "Ruth".

Another way to bind a variable to a token is in code:

```swift
let textBlockId = try engine.block.create(.text)
try engine.block.setString(
  textBlockId,
  property: "text/text",
  value: "Hello {{fullName}}!"
)
```

### Adding a Variable to a Scene

A **variable store** is a key/value store you can use for any purpose. Whenever the engine loads a scene or a template:

- It checks for tokens in the template.
- If it finds tokens that match variable names in the store, the engine replaces the tokens with the values of the matching variables.

Your app can manipulate the variables outside the scope of any scene document. Changing the value of a variable that's associated with a token immediately updates the value in the document.

> **Note:** Tokens in a document **are not** the same as variables.
Though the engine automatically replaces tokens with variable values, it **does not** automatically add tokens to the variable store as variables.

Create or update a variable using the `.set` command. Upon creation, the engine automatically checks any open documents for matching tokens.

```swift
try engine.variable.set(key: "fullName", value: "Marie Dupont")
```

Read a variable using `.get`.

```swift
let name = try engine.variable.get(key: "fullName") // "Marie Dupont"
```

Remove a variable using `.remove`.

```swift
try engine.variable.remove(key: "city")
```

Removing a variable that's associated with a token causes the UI to display the name of the token. To hide a token, set its variable value to an empty string.

### Determine if a Block References a Variable

For any given block, your code can determine whether it has an associated variable.

```swift
let hasTokens = try engine.block.referencesAnyVariables(textBlockId)
```

To find all the variables set in the engine:

```swift
let allVariables = try engine.variable.findAll()
```

This **won't** find tokens in a document. To find all tokens, your code needs to:

1. Traverse the block tree.
2. Look for tokens. A regular expression is a good way to extract them.

Here is a possible strategy:

```swift
import Foundation

func extractVariableKeys(from text: String) -> [String] {
  // matches {{foo}}, {{user.name}}, {{A-1}}
  let pattern = #"\{\{\s*([A-Za-z0-9_.\-]+)\s*\}\}"#
  let regex = try! NSRegularExpression(pattern: pattern)
  let nsrange = NSRange(text.startIndex..<text.endIndex, in: text)
  return regex.matches(in: text, range: nsrange).compactMap { match in
    guard match.numberOfRanges >= 2,
          let r = Range(match.range(at: 1), in: text) else { return nil }
    return String(text[r])
  }
}
```

The preceding function:

1. Searches a `String` for text enclosed in double curly brackets.
2. Maps each match to a new array entry.
3. Returns the new array.

**Remember**, a text block might contain more than one token.

An example workflow could be:

1. Your code traverses the block tree.
2. It extracts all of the token names.
3.
You create UI to let the user populate or otherwise manipulate them.

Because the variables in the engine form a key/value store, once your code has the `String` for a variable name, it can read and set values for that variable.

Here's a minimal code example for traversing the tree and extracting any tokens:

```swift
for id in try engine.block.find(byType: .text) {
  if let s = try? engine.block.getString(id, property: "text/text") {
    print(extractVariableKeys(from: s))
  }
}
```

## Troubleshooting

**❌ `findAll()` Returns an Empty Array**:

- `findAll()` lists the keys in an engine's variable store. Tokens like `{{dog-name}}` in text blocks aren't automatically registered.
- Either seed known keys at load time with `set(key:value:)`, or scan text blocks for tokens and call `set` with empty defaults.

**❌ `referencesAnyVariables(_:)` Always Returns `false`**:

- Make sure you're checking the correct block ID and property. Querying a parent block doesn't check its children.
- When using `styled/rich text` or another text-bearing property, make sure you are using that property to filter blocks.

**❌ No Visual Change When Setting `textVariableHighlightColor`**:

- iOS prebuilt editors don't show token highlights. Only web editors have that affordance.
- If you need a cue for users, add your own overlay or styling.

**❌ Token Appears Verbatim at Runtime**:

- The variable isn't registered with the engine.
- Call `set(key:value:)` before preview or export. If you want to hide optional token names, set the value to `""` and handle any surrounding punctuation.

**❌ Regex Misses Some Tokens**:

- Look for typographic ("smart") or Unicode brace characters, or spaces inside tokens.
- Normalize the string before processing, and make sure your regular expression pattern tolerates whitespace by using `\s*` in the pattern.
- Avoid using braces in regular typography.

**❌ Variables Disappear on Relaunch**:

- Variables persist with the scene upon save.
If they’re only set in memory and the document isn’t saved, they won’t appear next time. - Save the scene after seeding variables, or re-seed variables on load. **❌ Token is Still Visible after `.remove(key:)`**: - Removing a variable from the store doesn’t remove tokens from a document. When the engine cannot resolve a token to a variable, the token text gets displayed. - Either re-add the variable with a value or set it to `""` if you want the token location to disappear from the layout. ## Next Steps Variables provide a lightweight, scene-scoped key–value store that CE.SDK resolves inside text properties at render time. Use tokens (`{{key}}`) to bind content in your text blocks, and manage values programmatically through `engine.variable`. For production flows, choose one of two patterns: - Store-first: Seed known keys on load, drive a simple form, validate, export. - Reference-first: Scan for tokens, register keys, populate values, validate, export. Both patterns keep layout stable while allowing large-scale personalization without duplicating designs. Now that you can replace text, here are some related topics to explore: - Swap entire media blocks (images/video/audio) using [placeholders to replace content](https://img.ly/docs/cesdk/ios/create-templates/add-dynamic-content/placeholders-d9ba8a/). --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Create From Scratch" description: "Build and save reusable CE.SDK templates in Swift for iOS, macOS, and Mac Catalyst." 
platform: ios url: "https://img.ly/docs/cesdk/ios/create-templates/from-scratch-663cda/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Create and Use Templates](https://img.ly/docs/cesdk/ios/create-templates-3aef79/) > [Create From Scratch](https://img.ly/docs/cesdk/ios/create-templates/from-scratch-663cda/) ---

Templates define a reusable design pattern—text regions, image placeholders, and locked brand elements that your app can populate at runtime. This guide walks you through creating a template **from scratch** in Swift, enabling placeholder behavior and variable bindings, and saving the result as a string or archive for reuse.

## What You'll Learn

- Differences between **templates** and **scenes**.
- Programmatically build a template scene.
- Enable **placeholder behavior** and **variable** bindings.
- Save templates to a **string** or **archive**.
- Store basic **metadata** for library use.

## When to Use It

Choose this guide when you need to **author** templates programmatically, for example for:

- Automation pipelines
- Unit tests
- Code-generated layouts

Prefer the web-based CE.SDK editors if your goal is to let designers craft rich templates visually, including:

- Marking placeholders.
- Locking styles.
- Setting edit permissions.

## Templates vs Scenes

- **Scene**: a complete document (pages, blocks, assets). Edit and export it directly.
- **Template**: a reusable pattern applied to scenes; often includes placeholders and variables to control what's editable versus locked.

## Create Templates Programmatically

The web-based CE.SDK editors include built-in template logic and UI. You can use them to:

- Mark blocks as placeholders
- Bind variables
- Assign granular edit permissions.
For most teams, this is the recommended path to author templates. This guide shows how to achieve similar results **in Swift**, which is useful for code-driven generation, CI pipelines, or dynamic authoring.

In the code below, you'll:

- Create a scene.
- Add a page.
- Insert a text block bound to a variable.
- Add an image block with placeholder behavior.

```swift
let scene = try engine.scene.create()
let page = try engine.block.create(.page)
try engine.block.appendChild(to: scene, child: page)

// Text block bound to a variable (e.g., {{name}})
let text = try engine.block.create(.text)
try engine.block.setString(text, property: "text/text", value: "{{name}}")
try engine.block.setPositionX(text, value: 0.1)
try engine.block.setPositionY(text, value: 0.1)
try engine.block.appendChild(to: page, child: text)

// Image block acting as a placeholder
let image = try engine.block.create(.graphic)
if try engine.block.supportsPlaceholderBehavior(image) {
  try engine.block.setPlaceholderBehaviorEnabled(image, enabled: true)
}
try engine.block.setWidth(image, value: 300)
try engine.block.setHeight(image, value: 200)
try engine.block.setPositionX(image, value: 0.1)
try engine.block.setPositionY(image, value: 0.3)
try engine.block.appendChild(to: page, child: image)
```

## Binding Variables

- Use variables for [text substitution](https://img.ly/docs/cesdk/ios/create-templates/add-dynamic-content/text-variables-7ecb50/).
- Use [placeholders for media](https://img.ly/docs/cesdk/ios/create-templates/add-dynamic-content/placeholders-d9ba8a/) the user swaps at runtime.

Define a variable in text using curly brackets. The variable can be the entire string or part of a string, such as "Hello, `{{guest_name}}`".

Placeholder behavior works as follows:

- It is visible only in one of CE.SDK's predefined editors (iOS or web).
- In CI or headless workflows, replace the fill URL of the placeholder block directly rather than relying on interactive placeholder behavior.
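As a rough sketch of that headless path, you can point the placeholder's image fill at a new asset directly. This assumes the `engine` instance and the `image` block from the snippet above, an image fill on that block, and a stand-in sample URL:

```swift
// Headless placeholder replacement (sketch): swap the content by changing
// the image fill's URI instead of using interactive placeholder controls.
// Assumes `engine` and the `image` block created earlier; the URL is a stand-in.
let fill = try engine.block.getFill(image)
try engine.block.setString(
  fill,
  property: "fill/image/imageFileURI",
  value: "https://img.ly/static/ubq_samples/sample_1.jpg"
)
```

Because this writes the fill property directly, it works the same in automated pipelines as in an interactive session.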
## Saving Templates

Templates are scenes with some extra settings, so saving a template uses the same logic as saving a scene:

- Save as a **string** for a lightweight file; the template must be able to resolve all asset URLs at runtime.
- Save as an **archive** for a self-contained, portable file that bundles the assets into the file.

```swift
let sceneAsString = try await engine.scene.saveToString()
// Persist to your DB or send to a backend
```

```swift
let blob = try await engine.scene.saveToArchive()
// Upload or store `blob` as a portable template package
```

Once you've created the string or data blob, use standard methods to persist it.

## Add Template Metadata

Like other assets, you can:

- Load templates into the [asset library](https://img.ly/docs/cesdk/ios/import-media/asset-library-65d6c4/).
- Store metadata in your CMS or local database.
- Use the saved metadata later when you register the template as an `AssetDefinition` in an `AssetSource`. That way the UI can display names, thumbnails, and categories.

## Lock Template Properties

Templates can restrict editing at runtime so that users don't edit any part of the design that should remain static. To protect integrity, you can lock properties such as:

- Position
- Size
- Color
- Fill

The guide for [locking templates](https://img.ly/docs/cesdk/ios/create-templates/lock-131489/) provides details on which properties are lockable and how to set up editor and adopter rules.

## Troubleshooting

**❌ Placeholders not working**:

- Confirm the block type supports placeholder behavior before enabling it.
- Make sure that you have set up the correct permissions on the block and that the `"fill/change"` scope isn't locked.

**❌ Missing fonts/images at runtime**:

- Use an archive save to embed assets into the template for portability.
- Ensure that the asset URIs are reachable and stable.
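To reuse a persisted template later, load it back through the scene API. A minimal sketch, assuming an `engine` instance, the `sceneAsString` value produced above, and the scene API's string-loading method:

```swift
// Restore a previously saved template/scene string into the engine.
// Assumes `engine` and `sceneAsString` from the save example above.
try await engine.scene.load(from: sceneAsString)
```

After loading, the template's placeholders and variables behave exactly as they did when it was authored.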
## Next Steps Now that you can create templates, some related topics you may find helpful are: - [Generate scenes](https://img.ly/docs/cesdk/ios/use-templates/generate-334e15/) with templates as the source. - [Apply templates](https://img.ly/docs/cesdk/ios/use-templates/apply-template-35c73e/) to existing scenes. - Work with [dynamic content](https://img.ly/docs/cesdk/ios/create-templates/add-dynamic-content-53fad7/) to update templates at runtime --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Lock the Template" description: "Restrict editing access to specific elements or properties in a template to enforce design rules." platform: ios url: "https://img.ly/docs/cesdk/ios/create-templates/lock-131489/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). 
**Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Create and Use Templates](https://img.ly/docs/cesdk/ios/create-templates-3aef79/) > [Lock the Template](https://img.ly/docs/cesdk/ios/create-templates/lock-131489/) --- ```swift file=@cesdk_swift_examples/engine-guides-scopes/Scopes.swift reference-only import Foundation import IMGLYEngine @MainActor func scopes(engine: Engine) async throws { let scene = try engine.scene.create() let page = try engine.block.create(.page) try engine.block.setWidth(page, value: 800) try engine.block.setHeight(page, value: 600) try engine.block.appendChild(to: scene, child: page) let block = try engine.block.create(.graphic) try engine.block.setShape(block, shape: engine.block.createShape(.rect)) try engine.block.setWidth(block, value: 100) try engine.block.setHeight(block, value: 100) try engine.block.setFill(block, fill: engine.block.createFill(.color)) try engine.block.appendChild(to: page, child: block) let scopes = try engine.editor.findAllScopes() /* Let the global scope defer to the block-level. */ try engine.editor.setGlobalScope(key: "layer/move", value: .defer) /* Manipulation of layout properties of any block will fail at this point. */ do { try engine.block.setPositionX(block, value: 100) // Not allowed } catch { print(error.localizedDescription) } /* This will return `.defer`. */ try engine.editor.getGlobalScope(key: "layer/move") /* Allow the user to control the layout properties of the image block. */ try engine.block.setScopeEnabled(block, key: "layer/move", enabled: true) /* Manipulation of layout properties of any block is now allowed. */ do { try engine.block.setPositionX(block, value: 100) // Allowed } catch { print(error.localizedDescription) } /* Verify that the "layer/move" scope is now enabled for the image block. */ try engine.block.isScopeEnabled(block, key: "layer/move") /* This will return true as well since the global scope is set to `.defer`. 
*/
  try engine.block.isAllowedByScope(block, key: "layer/move")
}
```

CE.SDK allows you to control which parts of a block can be manipulated. Scopes describe different aspects of a block, e.g. layout or style, and can be enabled or disabled for every single block. There's also the option to control a scope globally.

When configuring a scope globally, you can set an override to always allow or deny a certain type of manipulation for every block, or you can configure the global scope to defer to the individual block scopes.

Initially, the block-level scopes are all disabled, while at the global level all scopes are set to `"Allow"`. This overrides the block level and allows any kind of manipulation.

If you want to implement a limited editing mode in your software, set the desired scopes on the blocks you want the user to manipulate and then restrict the available actions by globally setting the scopes to `"Defer"`. In the same way, you can prevent any manipulation of properties covered by a scope by setting the respective global scope to `"Deny"`.

## Available Scopes

You can retrieve all available scopes by calling `try engine.editor.findAllScopes()`.
```swift highlight-findAllScopes
let scopes = try engine.editor.findAllScopes()
```

We currently support the following scopes:

| Scope | Explanation |
| -------------------------- | -------------------------------------------------- |
| `"layer/move"` | Whether the block's position can be changed |
| `"layer/resize"` | Whether the block can be resized |
| `"layer/rotate"` | Whether the block's rotation can be changed |
| `"layer/flip"` | Whether the block can be flipped |
| `"layer/crop"` | Whether the block's content can be cropped |
| `"layer/clipping"` | Whether the block's clipping can be changed |
| `"layer/opacity"` | Whether the block's opacity can be changed |
| `"layer/blendMode"` | Whether the block's blend mode can be changed |
| `"layer/visibility"` | Whether the block's visibility can be changed |
| `"appearance/adjustments"` | Whether the block's adjustments can be changed |
| `"appearance/filter"` | Whether the block's filter can be changed |
| `"appearance/effect"` | Whether the block's effect can be changed |
| `"appearance/blur"` | Whether the block's blur can be changed |
| `"appearance/shadow"` | Whether the block's shadow can be changed |
| `"lifecycle/destroy"` | Whether the block can be deleted |
| `"lifecycle/duplicate"` | Whether the block can be duplicated |
| `"editor/add"` | Whether new blocks can be added |
| `"editor/select"` | Whether a block can be selected or not |
| `"fill/change"` | Whether the block's fill can be changed |
| `"fill/changeType"` | Whether the block's fill type can be changed |
| `"stroke/change"` | Whether the block's stroke can be changed |
| `"shape/change"` | Whether the block's shape can be changed |
| `"text/edit"` | Whether the block's text can be changed |
| `"text/character"` | Whether the block's text properties can be changed |

## Managing Scopes

First, we globally defer the `"layer/move"` scope to the block-level using `try engine.editor.setGlobalScope(key: "layer/move", value: .defer)`.
Since all blocks default to having their scopes set to `false` initially, modifying the layout properties of any block will fail at this point.

| Value | Explanation |
| -------- | ----------------------------------------------------------------- |
| `.allow` | Manipulation of properties covered by the scope is always allowed |
| `.deny` | Manipulation of properties covered by the scope is always denied |
| `.defer` | Permission is deferred to the scope of the individual blocks |

```swift highlight-setGlobalScope
/* Let the global scope defer to the block-level. */
try engine.editor.setGlobalScope(key: "layer/move", value: .defer)

/* Manipulation of layout properties of any block will fail at this point. */
do {
  try engine.block.setPositionX(block, value: 100) // Not allowed
} catch {
  print(error.localizedDescription)
}
```

We can verify the current state of the global `"layer/move"` scope using `try engine.editor.getGlobalScope(key: "layer/move")`.

```swift highlight-getGlobalScope
/* This will return `.defer`. */
try engine.editor.getGlobalScope(key: "layer/move")
```

Now we can allow the `"layer/move"` scope for a single block by setting it to `true` using `func setScopeEnabled(_ id: DesignBlockID, key: String, enabled: Bool) throws`.

```swift highlight-setScopeEnabled
/* Allow the user to control the layout properties of the image block. */
try engine.block.setScopeEnabled(block, key: "layer/move", enabled: true)

/* Manipulation of layout properties of any block is now allowed. */
do {
  try engine.block.setPositionX(block, value: 100) // Allowed
} catch {
  print(error.localizedDescription)
}
```

Again, we can verify this change by calling `func isScopeEnabled(_ id: DesignBlockID, key: String) throws -> Bool`.

```swift highlight-isScopeEnabled
/* Verify that the "layer/move" scope is now enabled for the image block.
*/ try engine.block.isScopeEnabled(block, key: "layer/move") ``` Finally, `func isAllowedByScope(_ id: DesignBlockID, key: String) throws -> Bool` will allow us to verify a block's final scope state by taking both the global state as well as block-level state into account. ```swift highlight-isAllowedByScope /* This will return true as well since the global scope is set to `.defer`. */ try engine.block.isAllowedByScope(block, key: "layer/move") ``` ## Full Code Here's the full code: ```swift import Foundation import IMGLYEngine @MainActor func scopes(engine: Engine) async throws { let scene = try await engine.scene.create(fromImage: .init(string: "https://img.ly/static/ubq_samples/imgly_logo.jpg")!) let block = try engine.block.find(byType: .graphic).first! let scopes = try engine.editor.findAllScopes() /* Let the global scope defer to the block-level. */ try engine.editor.setGlobalScope(key: "layer/move", value: .defer) /* Manipulation of layout properties of any block will fail at this point. */ do { try engine.block.setPositionX(block, value: 100) // Not allowed } catch { print(error.localizedDescription) } /* This will return `.defer`. */ try engine.editor.getGlobalScope(key: "layer/move") /* Allow the user to control the layout properties of the image block. */ try engine.block.setScopeEnabled(block, key: "layer/move", enabled: true) /* Manipulation of layout properties of any block is now allowed. */ do { try engine.block.setPositionX(block, value: 100) // Allowed } catch { print(error.localizedDescription) } /* Verify that the "layer/move" scope is now enabled for the image block. */ try engine.block.isScopeEnabled(block, key: "layer/move") /* This will return true as well since the global scope is set to `.defer`. 
*/ try engine.block.isAllowedByScope(block, key: "layer/move") } ``` --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Overview" description: "Learn how to create, import, and manage reusable templates to streamline design creation in CE.SDK." platform: ios url: "https://img.ly/docs/cesdk/ios/create-templates/overview-4ebe30/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Create and Use Templates](https://img.ly/docs/cesdk/ios/create-templates-3aef79/) > [Overview](https://img.ly/docs/cesdk/ios/create-templates/overview-4ebe30/) --- In CE.SDK, a *template* is a reusable, structured design that defines editable areas and constraints for end users. Templates can be based on static visuals or video compositions and are used to guide content creation, enable mass personalization, and enforce design consistency. Unlike a regular editable design, a template introduces structure through placeholders and constraints, allowing you to define which elements users can change and how. Templates support both static output formats (like PNG, PDF) and videos (like MP4), and can be created or applied using either the CE.SDK UI or API. Templates are a core part of enabling design automation, personalization, and streamlined workflows in any app that includes creative functionality. 
[Explore Demos](https://img.ly/showcases/cesdk?tags=ios) [Get Started](https://img.ly/docs/cesdk/ios/get-started/overview-e18f40/) --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Create Videos" description: "Learn how to create and customize videos in CE.SDK using scenes, assets, and timeline-based editing." platform: ios url: "https://img.ly/docs/cesdk/ios/create-video-c41a08/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Create and Edit Videos](https://img.ly/docs/cesdk/ios/create-video-c41a08/) --- --- ## Related Pages - [Create Videos Overview](https://img.ly/docs/cesdk/ios/create-video/overview-b06512/) - Learn how to create and customize videos in CE.SDK using scenes, assets, and timeline-based editing. - [Timeline Editor](https://img.ly/docs/cesdk/ios/create-video/timeline-editor-912252/) - Use the timeline editor to arrange and edit video clips, audio, and animations frame by frame. - [Control Audio and Video](https://img.ly/docs/cesdk/ios/create-video/control-daba54/) - Learn how to configure and control audio and video through offset, trim, playback, and resource control.
- [Transform](https://img.ly/docs/cesdk/ios/edit-video/transform-369f28/) - Documentation for Transform - [Add Captions](https://img.ly/docs/cesdk/ios/edit-video/add-captions-f67565/) - Documentation for adding captions to videos - [Annotation](https://img.ly/docs/cesdk/ios/edit-video/annotation-e9cbad/) - Documentation for Annotation --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Control Audio and Video" description: "Learn how to configure and control audio and video through offset, trim, playback, and resource control." platform: ios url: "https://img.ly/docs/cesdk/ios/create-video/control-daba54/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Create and Edit Videos](https://img.ly/docs/cesdk/ios/create-video-c41a08/) > [Control Audio and Video](https://img.ly/docs/cesdk/ios/create-video/control-daba54/) --- In this example, we will show you how to use the [CreativeEditor SDK](https://img.ly/products/creative-sdk)'s CreativeEngine to configure and control audio and video through the `block` API. ## Time Offset and Duration The time offset determines when a block becomes active during playback on the page's timeline, and the duration decides how long this block is active. 
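For instance, placing an audio block on the page's timeline might look like this (a minimal sketch; `engine` and `audio` are assumed to be set up as in the Full Code example below):

```swift
import IMGLYEngine

// Sketch: make an audio block active from second 2 to second 9 of the
// page's timeline. Not every block supports these properties, so we
// check before setting them.
@MainActor
func placeOnTimeline(engine: Engine, audio: DesignBlockID) throws {
  if try engine.block.supportsTimeOffset(audio) {
    try engine.block.setTimeOffset(audio, offset: 2) // becomes active at 2 s
  }
  if try engine.block.supportsDuration(audio) {
    try engine.block.setDuration(audio, duration: 7) // stays active for 7 s
  }
}
```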
Blocks within tracks are a special case in that they have an implicitly calculated time offset that is determined by their order and the total duration of their preceding blocks in the same track. As with any audio/video-related property, not every block supports these properties. Use `supportsTimeOffset` and `supportsDuration` to check. ```swift public func supportsTimeOffset(_ id: DesignBlockID) throws -> Bool ``` Returns whether the block has a time offset property. - `id`: The block to query. - Returns: `true`, if the block has a time offset property. ```swift public func setTimeOffset(_ id: DesignBlockID, offset: Double) throws ``` Set the time offset of the given block relative to its parent. The time offset controls when the block is first active in the timeline. - Note: The time offset is not supported by the page block. - `id`: The block whose time offset should be changed. - `offset`: The new time offset in seconds. ```swift public func getTimeOffset(_ id: DesignBlockID) throws -> Double ``` Get the time offset of the given block relative to its parent. - `id`: The block whose time offset should be queried. - Returns: The time offset of the block. ```swift public func supportsDuration(_ id: DesignBlockID) throws -> Bool ``` Returns whether the block has a duration property. - `id`: The block to query. - Returns: `true`, if the block has a duration property. ```swift public func setDuration(_ id: DesignBlockID, duration: Double) throws ``` Set the playback duration of the given block in seconds. The duration defines for how long the block is active in the scene during playback. If a duration is set on the page block, it becomes the duration source block. - Note: The duration is ignored when the scene is not in "Video" mode. - Note: This also adjusts the trim for non-looping blocks. - `id`: The block whose duration should be changed. - `duration`: The new duration in seconds. 
```swift public func getDuration(_ id: DesignBlockID) throws -> Double ``` Get the playback duration of the given block in seconds. The duration defines for how long the block is active in the scene during playback. - Note: The duration is ignored when the scene is not in "Video" mode. - `id`: The block whose duration should be returned. - Returns: The block's duration. ```swift public func supportsPageDurationSource(_ page: DesignBlockID, id: DesignBlockID) throws -> Bool ``` Returns whether the block can be marked as the element that defines the duration of the given page. - `page`: The page whose duration the block would define. - `id`: The block to query. - Returns: `true`, if the block can be marked as the page's duration source. ```swift public func setPageDurationSource(_ page: DesignBlockID, id: DesignBlockID) throws ``` Set a block as duration source so that the overall page duration is automatically determined by this block. If no defining block is set, the page duration is calculated over all children. Only one block per page can be marked as duration source; marking a block automatically unmarks the previously marked block. - Note: This is only supported for blocks that have a duration. - `page`: The page block for which it should be enabled. - `id`: The block which should be marked as duration source. ```swift public func isPageDurationSource(_ id: DesignBlockID) throws -> Bool ``` Returns whether the block is a duration source block. - `id`: The block whose duration source property should be queried. - Returns: `true`, if the block is a duration source block. ```swift public func removePageDurationSource(_ id: DesignBlockID) throws ``` Remove the block as duration source block for the page. If a scene or page is given as the block, it is deactivated for all blocks in the scene or page. - `id`: The block whose duration source property should be removed. ```swift public func setNativePixelBuffer(_ id: DesignBlockID, buffer: CVPixelBuffer) throws ``` Update the pixels of the given pixel stream fill block. - `id`: The pixel stream fill block. 
- `buffer`: The buffer to copy the pixel data from. ## Trim You can select a specific range of footage from your audio/video resource by providing a trim offset and a trim length. The footage will loop if the trim's length is shorter than the block's duration. This behavior can also be disabled using the `setLooping` function. ```swift public func supportsTrim(_ id: DesignBlockID) throws -> Bool ``` Returns whether the block has trim properties. - `id`: The block to query. - Returns: `true`, if the block has trim properties. ```swift public func setTrimOffset(_ id: DesignBlockID, offset: Double) throws ``` Set the trim offset of the given block or fill. Sets the time in seconds within the fill at which playback of the audio or video clip should begin. - Note: This requires the video or audio clip to be loaded. - `id`: The block whose trim should be updated. - `offset`: The new trim offset, measured in timeline seconds (scaled by playback rate). ```swift public func getTrimOffset(_ id: DesignBlockID) throws -> Double ``` Get the trim offset of this block. - Note: This requires the video or audio clip to be loaded. - `id`: The block whose trim offset should be queried. - Returns: The trim offset in timeline seconds. ```swift public func setTrimLength(_ id: DesignBlockID, length: Double) throws ``` Set the trim length of the given block or fill. The trim length is the duration of the audio or video clip that should be used for playback. - Note: After reaching this value during playback, the trim region will loop. - Note: This requires the video or audio clip to be loaded. - `id`: The block whose trim length should be updated. - `length`: The new trim length, measured in timeline seconds (scaled by playback rate). ```swift public func getTrimLength(_ id: DesignBlockID) throws -> Double ``` Get the trim length of the given block or fill. - `id`: The block whose trim length should be queried. 
- Returns: The trim length of the block, measured in timeline seconds (scaled by playback rate). ## Playback Control You can start and pause playback and seek to a certain point on the scene's timeline. There's also a solo playback mode to preview audio and video blocks individually while the rest of the scene stays frozen. Finally, you can enable or disable the looping behavior of blocks and control their audio volume. ```swift public func setPlaying(_ id: DesignBlockID, enabled: Bool) throws ``` Set whether the block should play its contents during active playback. - `id`: The block that should be updated. - `enabled`: Whether the block should be playing its contents. ```swift public func isPlaying(_ id: DesignBlockID) throws -> Bool ``` Returns whether the block is currently playing during active playback. - `id`: The block to query. - Returns: Whether the block is playing. ```swift public func setSoloPlaybackEnabled(_ id: DesignBlockID, enabled: Bool) throws ``` Set whether the given block or fill should play its contents while the rest of the scene remains paused. - Note: Setting this to true for one block will automatically set it to false on all other blocks. - `id`: The block or fill to update. - `enabled`: Whether the block's playback should progress as time moves on. ```swift public func isSoloPlaybackEnabled(_ id: DesignBlockID) throws -> Bool ``` Return whether the given block or fill is currently set to play its contents while the rest of the scene remains paused. - `id`: The block or fill to query. - Returns: Whether solo playback is enabled for this block. ```swift public func supportsPlaybackTime(_ id: DesignBlockID) throws -> Bool ``` Returns whether the block has a playback time property. - `id`: The block to query. - Returns: Whether the block has a playback time property. ```swift public func setPlaybackTime(_ id: DesignBlockID, time: Double) throws ``` Set the playback time of the given block. - `id`: The block whose playback time should be updated. 
- `time`: The new playback time of the block in seconds. ```swift public func getPlaybackTime(_ id: DesignBlockID) throws -> Double ``` Get the playback time of the given block. - `id`: The block to query. - Returns: The playback time of the block in seconds. ```swift public func isVisibleAtCurrentPlaybackTime(_ id: DesignBlockID) throws -> Bool ``` Returns whether the block is visible on the canvas at the current playback time. - `id`: The block to query. - Returns: The visibility state. ```swift public func supportsPlaybackControl(_ id: DesignBlockID) throws -> Bool ``` Returns whether the block supports playback control. - `id`: The block to query. - Returns: Whether the block has playback control. ```swift public func setLooping(_ id: DesignBlockID, looping: Bool) throws ``` Set whether the block should loop back to the beginning or stop once playback reaches the end of its content. - `id`: The block or video fill to update. - `looping`: Whether the block should loop to the beginning or stop. ```swift public func isLooping(_ id: DesignBlockID) throws -> Bool ``` Query whether the block is looping. - `id`: The block to query. - Returns: Whether the block is looping. ```swift public func setMuted(_ id: DesignBlockID, muted: Bool) throws ``` Set whether the audio of the block is muted. - `id`: The block or video fill to update. - `muted`: Whether the audio should be muted. ```swift public func isMuted(_ id: DesignBlockID) throws -> Bool ``` Query whether the block is muted. - `id`: The block to query. - Returns: Whether the block is muted. ```swift public func setVolume(_ id: DesignBlockID, volume: Float) throws ``` Set the audio volume of the given block. - `id`: The block or video fill to update. - `volume`: The desired volume in the range `[0, 1]`. ```swift public func getVolume(_ id: DesignBlockID) throws -> Float ``` Get the audio volume of the given block. - `id`: The block to query. - Returns: The volume in the range `[0, 1]`. 
```swift public func getVideoWidth(_ id: DesignBlockID) throws -> Int ``` Get the video width in pixels of the video resource that is attached to the given block. - `id`: The video fill. - Returns: The video width in pixels. ```swift public func getVideoHeight(_ id: DesignBlockID) throws -> Int ``` Get the video height in pixels of the video resource that is attached to the given block. - `id`: The video fill. - Returns: The video height in pixels. ## Playback Speed You can control the playback speed of audio and video blocks to create slow-motion or fast-forward effects. The playback speed is a multiplier that affects how quickly the content plays back. Audio blocks accept values from 0.25x (quarter speed) to 3.0x (triple speed). Video fills can be pushed beyond 3.0x when you need extreme fast-forward playback. Note that changing the playback speed automatically adjusts both the trim and duration of the block to maintain the same visual timeline length. ```swift public func setPlaybackSpeed(_ id: DesignBlockID, speed: Float) throws ``` Set the playback speed of the given block. - Note: This also adjusts the trim and duration of the block. Video fills running faster than 3.0x are force muted until their speed is reduced to 3.0x or below. - `id`: The block or video fill to update. - `speed`: The desired playback speed multiplier. Valid range is \[0.25, 3.0] for audio blocks and \[0.25, ∞) for video fills. ```swift public func getPlaybackSpeed(_ id: DesignBlockID) throws -> Float ``` Get the playback speed of the given block. - `id`: The block to query. - Returns: The playback speed multiplier. ## Resource Control Until an audio/video resource referenced by a block is loaded, properties like the duration of the resource aren't available, and accessing them will lead to an error. You can avoid this by forcing the resource you want to access to load using `forceLoadAVResource`. 
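As a sketch, guarding a resource-dependent query could look like this (`engine` and `videoFill` are assumed to exist as in the Full Code example below):

```swift
import IMGLYEngine

// Sketch: load the AV resource first, then read a property that is
// only available once the resource is loaded.
@MainActor
func footageDuration(engine: Engine, videoFill: DesignBlockID) async throws -> Double {
  // Loads the resource; a previously failed load is retried.
  try await engine.block.forceLoadAVResource(videoFill)
  // Safe now; this call would throw if the resource were not loaded.
  return try engine.block.getAVResourceTotalDuration(videoFill)
}
```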
```swift public func forceLoadAVResource(_ id: DesignBlockID) async throws ``` Begins loading the required audio and video resource for the given video fill or audio block. If the resource had been loaded earlier and resulted in an error, it will be reloaded. - `id`: The video fill or audio block whose resource should be loaded. ```swift public func unstable_isAVResourceLoaded(_ id: DesignBlockID) throws -> Bool ``` Returns whether the audio and video resource for the given video fill or audio block is loaded. - `id`: The video fill or audio block. - Returns: Whether the resource is loaded. ```swift public func getAVResourceTotalDuration(_ id: DesignBlockID) throws -> Double ``` Get the duration in seconds of the video or audio resource that is attached to the given block. - `id`: The video fill or audio block. - Returns: The video or audio file duration. ## Thumbnail Previews For a user interface, it can be helpful to have image previews in the form of thumbnails for any given video resource. For videos, the engine can provide one or more frames using `generateVideoThumbnailSequence`. Pass the video fill that references the video resource. In addition to video thumbnails, the engine can also render compositions of design blocks over time. To do this, pass in the respective design block. The video editor uses these to visually represent blocks in the timeline. To visualize audio signals, use `generateAudioThumbnailSequence`. It generates a sequence of values in the range of 0 to 1 that represent the loudness of the signal. These values can be used to render a waveform pattern in any custom style. Note: There can be at most one thumbnail generation request per block at any given time. If you don't want to wait for the request to finish before issuing a new request, you can cancel the task. 
```swift public func generateVideoThumbnailSequence(_ id: DesignBlockID, thumbnailHeight: Int, timeRange: ClosedRange<Double>, numberOfFrames: Int) -> AsyncThrowingStream<VideoThumbnail, Error> ``` Generate a thumbnail sequence for the given video fill or design block. - Note: There can only be one thumbnail generation request in progress for a given block. - Note: During playback, the thumbnail generation will be paused. - `id`: A video fill or a design block. - `thumbnailHeight`: The height of a thumbnail. The width will be calculated from the video aspect ratio. - `timeRange`: The time range of the generated thumbnails relative to the time offset of the design block. - `numberOfFrames`: The number of thumbnails to generate within the given time range. - Returns: A stream of `VideoThumbnail` objects. ```swift public func generateAudioThumbnailSequence(_ id: DesignBlockID, samplesPerChunk: Int, timeRange: ClosedRange<Double>, numberOfSamples: Int, numberOfChannels: Int) -> AsyncThrowingStream<AudioThumbnail, Error> ``` Generate a thumbnail sequence for the given audio block or video fill. A thumbnail in this case is a chunk of samples in the range of 0 to 1. In case stereo data is requested, the samples are interleaved, starting with the left channel. - Note: During playback, the thumbnail generation will be paused. - `id`: The audio block or video fill. - `samplesPerChunk`: The number of samples per chunk. - `timeRange`: The time range of the generated thumbnails. - `numberOfSamples`: The total number of samples to generate. - `numberOfChannels`: The number of channels in the output. 1 for mono, 2 for stereo. - Returns: A stream of `AudioThumbnail` objects. 
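As a usage sketch, a cancellable thumbnail request could be wrapped like this (`engine` and `videoFill` are assumed to exist as in the Full Code example below):

```swift
import IMGLYEngine

// Sketch: request 10 thumbnails for a video fill. Because only one
// request per block may be in flight, cancel the returned task before
// issuing a new request for the same block.
@MainActor
func requestThumbnails(engine: Engine, videoFill: DesignBlockID) -> Task<Void, Error> {
  Task {
    for try await thumbnail in engine.block.generateVideoThumbnailSequence(
      videoFill,
      thumbnailHeight: 64,    // width follows from the video's aspect ratio
      timeRange: 0.0 ... 5.0, // seconds, relative to the block's time offset
      numberOfFrames: 10
    ) {
      if Task.isCancelled { break }
      _ = thumbnail // hand the frame to your timeline UI here
    }
  }
}
```

Calling `cancel()` on the returned task stops the stream, freeing the block for a new request.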
## Full Code Here's the full code: ```swift file=@cesdk_swift_examples/engine-guides-control-av/ControlAudioVideo.swift import Foundation import IMGLYEngine // swiftlint:disable for_where @MainActor func controlAudioVideo(engine: Engine) async throws { // Setup a minimal video scene let scene = try engine.scene.createVideo() let page = try engine.block.create(.page) try engine.block.appendChild(to: scene, child: page) try engine.block.setWidth(page, value: 1280) try engine.block.setHeight(page, value: 720) // Create a video block and track let videoBlock = try engine.block.create(.graphic) try engine.block.setShape(videoBlock, shape: try engine.block.createShape(.rect)) let videoFill = try engine.block.createFill(.video) try engine.block.setString( videoFill, property: "fill/video/fileURI", // swiftlint:disable:next line_length value: "https://cdn.img.ly/assets/demo/v1/ly.img.video/videos/pexels-drone-footage-of-a-surfer-barrelling-a-wave-12715991.mp4", ) try engine.block.setFill(videoBlock, fill: videoFill) let track = try engine.block.create(.track) try engine.block.appendChild(to: page, child: track) try engine.block.appendChild(to: track, child: videoBlock) try engine.block.fillParent(track) // Create an audio block let audio = try engine.block.create(.audio) try engine.block.appendChild(to: page, child: audio) try engine.block.setString( audio, property: "audio/fileURI", value: "https://cdn.img.ly/assets/demo/v1/ly.img.audio/audios/far_from_home.m4a", ) // Time Offset and Duration try engine.block.supportsTimeOffset(audio) try engine.block.setTimeOffset(audio, offset: 2) try engine.block.getTimeOffset(audio) /* Returns 2 */ try engine.block.supportsDuration(page) try engine.block.setDuration(page, duration: 10) try engine.block.getDuration(page) /* Returns 10 */ // Duration of the page can be that of a block try engine.block.supportsPageDurationSource(page, id: videoBlock) try engine.block.setPageDurationSource(page, id: videoBlock) try 
engine.block.isPageDurationSource(videoBlock) try engine.block.getDuration(page) /* Returns duration plus offset of the block */ // Duration of the page can be the maximum end time of all page child blocks try engine.block.removePageDurationSource(page) try engine.block.getDuration(page) /* Returns the maximum end time of all page child blocks */ // Trim try engine.block.supportsTrim(videoFill) try engine.block.setTrimOffset(videoFill, offset: 1) try engine.block.getTrimOffset(videoFill) /* Returns 1 */ try engine.block.setTrimLength(videoFill, length: 5) try engine.block.getTrimLength(videoFill) /* Returns 5 */ // Playback Control try engine.block.setPlaying(page, enabled: true) try engine.block.isPlaying(page) try engine.block.setSoloPlaybackEnabled(videoFill, enabled: true) try engine.block.isSoloPlaybackEnabled(videoFill) try engine.block.supportsPlaybackTime(page) try engine.block.setPlaybackTime(page, time: 1) try engine.block.getPlaybackTime(page) try engine.block.isVisibleAtCurrentPlaybackTime(videoBlock) try engine.block.supportsPlaybackControl(videoFill) try engine.block.setLooping(videoFill, looping: true) try engine.block.isLooping(videoFill) try engine.block.setMuted(videoFill, muted: true) try engine.block.isMuted(videoFill) try engine.block.setVolume(videoFill, volume: 0.5) /* 50% volume */ try engine.block.getVolume(videoFill) // Playback Speed try engine.block.setPlaybackSpeed(videoFill, speed: 0.5) /* Half speed */ let currentSpeed = try engine.block.getPlaybackSpeed(videoFill) /* 0.5 */ try engine.block.setPlaybackSpeed(videoFill, speed: 2.0) /* Double speed */ try engine.block.setPlaybackSpeed(videoFill, speed: 1.0) /* Normal speed */ // Resource Control try await engine.block.forceLoadAVResource(videoFill) try engine.block.unstable_isAVResourceLoaded(videoFill) try engine.block.getAVResourceTotalDuration(videoFill) try engine.block.getVideoWidth(videoFill) try engine.block.getVideoHeight(videoFill) // Thumbnail Previews let videoThumbnailTask = 
Task { for try await thumbnail in engine.block.generateVideoThumbnailSequence( videoFill, /* video fill or page */ thumbnailHeight: 128, /* width will be calculated from aspect ratio */ timeRange: 0.5 ... 9.5, /* inclusive time range in seconds */ numberOfFrames: 10, /* number of thumbnails to generate */ ) { if Task.isCancelled { break } // Use the thumbnail... } } let audioThumbnailTask = Task { for try await thumbnail in engine.block.generateAudioThumbnailSequence( audio, samplesPerChunk: 20, timeRange: 0.5 ... 9.5, numberOfSamples: 10 * 20, numberOfChannels: 2, ) { if Task.isCancelled { break } // Draw wave pattern... } } // Piping a native camera stream into the engine var pixelBuffer: CVPixelBuffer? CVPixelBufferCreate(kCFAllocatorDefault, 600, 400, kCVPixelFormatType_32BGRA, nil, &pixelBuffer) let pixelStreamFill = try engine.block.createFill(.pixelStream) try engine.block.setNativePixelBuffer(pixelStreamFill, buffer: pixelBuffer!) _ = videoThumbnailTask _ = audioThumbnailTask } ``` --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Create Videos Overview" description: "Learn how to create and customize videos in CE.SDK using scenes, assets, and timeline-based editing." platform: ios url: "https://img.ly/docs/cesdk/ios/create-video/overview-b06512/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). 
**Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Create and Edit Videos](https://img.ly/docs/cesdk/ios/create-video-c41a08/) > [Overview](https://img.ly/docs/cesdk/ios/create-video/overview-b06512/) --- ```swift file=@cesdk_swift_examples/engine-guides-video/Video.swift reference-only import Foundation import IMGLYEngine @MainActor func editVideo(engine: Engine) async throws { let scene = try engine.scene.createVideo() let page = try engine.block.create(.page) try engine.block.appendChild(to: scene, child: page) try engine.block.setWidth(page, value: 1280) try engine.block.setHeight(page, value: 720) try engine.block.setDuration(page, duration: 20) let video1 = try engine.block.create(.graphic) try engine.block.setShape(video1, shape: engine.block.createShape(.rect)) let videoFill = try engine.block.createFill(.video) try engine.block.setString( videoFill, property: "fill/video/fileURI", // swiftlint:disable:next line_length value: "https://cdn.img.ly/assets/demo/v1/ly.img.video/videos/pexels-drone-footage-of-a-surfer-barrelling-a-wave-12715991.mp4", ) try engine.block.setFill(video1, fill: videoFill) let video2 = try engine.block.create(.graphic) try engine.block.setShape(video2, shape: engine.block.createShape(.rect)) let videoFill2 = try engine.block.createFill(.video) try engine.block.setString( videoFill2, property: "fill/video/fileURI", value: "https://cdn.img.ly/assets/demo/v3/ly.img.video/videos/pexels-kampus-production-8154913.mp4", ) try engine.block.setFill(video2, fill: videoFill2) let track = try engine.block.create(.track) try engine.block.appendChild(to: page, child: track) try engine.block.appendChild(to: track, child: video1) try engine.block.appendChild(to: track, child: video2) try engine.block.fillParent(track) try engine.block.setDuration(video1, duration: 15) // Make sure that the video is loaded before calling the trim APIs. 
try await engine.block.forceLoadAVResource(videoFill) try engine.block.setTrimOffset(videoFill, offset: 1) try engine.block.setTrimLength(videoFill, length: 10) try engine.block.setLooping(videoFill, looping: true) try engine.block.setMuted(videoFill, muted: true) let audio = try engine.block.create(.audio) try engine.block.appendChild(to: page, child: audio) try engine.block.setString( audio, property: "audio/fileURI", value: "https://cdn.img.ly/assets/demo/v1/ly.img.audio/audios/far_from_home.m4a", ) // Set the volume level to 70%. try engine.block.setVolume(audio, volume: 0.7) // Start the audio after two seconds of playback. try engine.block.setTimeOffset(audio, offset: 2) // Give the Audio block a duration of 7 seconds. try engine.block.setDuration(audio, duration: 7) // Export page as mp4 video. let mimeType: MIMEType = .mp4 let exportTask = Task { for try await export in try await engine.block.exportVideo(page, mimeType: mimeType) { switch export { case let .progress(renderedFrames, encodedFrames, totalFrames): print("Rendered", renderedFrames, "frames and encoded", encodedFrames, "frames out of", totalFrames) case let .finished(video: videoData): return videoData } } return Blob() } let blob = try await exportTask.value } ``` In addition to static designs, CE.SDK also allows you to create and edit videos. Working with videos introduces the concept of time into the scene, which requires you to switch the scene into the `"Video"` mode. In this mode, each page in the scene has its own separate timeline within which its children can be placed. The `"playback/time"` property of each page controls the progress of time through the page. In order to add videos to your pages, you can add a block with a `FillType.video` fill. As the playback time of the page progresses, the corresponding point in time of the video fill is rendered by the block. 
You can also customize the video fill's trim in order to control the portion of the video that should be looped while the block is visible. `DesignBlockType.audio` blocks can be added to the page in order to play an audio file during playback. The `playback/timeOffset` property controls after how many seconds the audio should begin to play, while the duration property defines how long the audio should play. The same APIs can be used for other design blocks as well, such as text or graphic blocks. Finally, the whole page can be exported as a video file using the `block.exportVideo` function. ## Creating a Video Scene First, we create a scene that is set up for video editing by calling the `scene.createVideo()` API. Then we create a page, add it to the scene and define its dimensions. This page will hold our composition. ```swift highlight-setupScene let scene = try engine.scene.createVideo() let page = try engine.block.create(.page) try engine.block.appendChild(to: scene, child: page) try engine.block.setWidth(page, value: 1280) try engine.block.setHeight(page, value: 720) ``` ## Setting Page Durations Next, we define the duration of the page using the `func setDuration(_ id: DesignBlockID, duration: Double) throws` API to be 20 seconds long. This will be the total duration of our exported video in the end. ```swift highlight-setPageDuration try engine.block.setDuration(page, duration: 20) ``` ## Adding Videos In this example, we want to show two videos, one after the other. For this, we first create two graphic blocks and assign two `'video'` fills to them. 
```swift highlight-assignVideoFill let video1 = try engine.block.create(.graphic) try engine.block.setShape(video1, shape: engine.block.createShape(.rect)) let videoFill = try engine.block.createFill(.video) try engine.block.setString( videoFill, property: "fill/video/fileURI", // swiftlint:disable:next line_length value: "https://cdn.img.ly/assets/demo/v1/ly.img.video/videos/pexels-drone-footage-of-a-surfer-barrelling-a-wave-12715991.mp4", ) try engine.block.setFill(video1, fill: videoFill) let video2 = try engine.block.create(.graphic) try engine.block.setShape(video2, shape: engine.block.createShape(.rect)) let videoFill2 = try engine.block.createFill(.video) try engine.block.setString( videoFill2, property: "fill/video/fileURI", value: "https://cdn.img.ly/assets/demo/v3/ly.img.video/videos/pexels-kampus-production-8154913.mp4", ) try engine.block.setFill(video2, fill: videoFill2) ``` ## Creating a Track While we could add the two blocks directly to the page and manually set their sizes and time offsets, we can instead use the `track` block to simplify this work. A `track` automatically adjusts the time offsets of its children to make sure that they play one after another without any gaps, based on each child's duration. Tracks themselves cannot be selected directly by clicking on the canvas, nor do they have any visual representation. We create a `track` block, add it to the page and add both videos in the order in which they should play as the track's children. Next, we use the `fillParent` API, which will resize all children of the track to the same dimensions as the page. The dimensions of a `track` are always derived from the dimensions of its children, so you should not call the `setWidth` or `setHeight` APIs on a track, but on its children instead if you can't use the `fillParent` API. 
```swift highlight-addToTrack let track = try engine.block.create(.track) try engine.block.appendChild(to: page, child: track) try engine.block.appendChild(to: track, child: video1) try engine.block.appendChild(to: track, child: video2) try engine.block.fillParent(track) ``` By default, each block has a duration of 5 seconds after it is created. If we want to show it on the page for a different amount of time, we can use the `setDuration` API. Note that we can just increase the duration of the first video block to 15 seconds without having to adjust anything about the second video. The `track` takes care of that for us automatically so that the second video starts playing after 15 seconds. ```swift highlight-setDuration try engine.block.setDuration(video1, duration: 15) ``` If the video is longer than the duration of the graphic block that it's attached to, it will cut off once the duration of the graphic is reached. If it is too short, the video will automatically loop for as long as its graphic block is visible. We can also manually define the portion of our video that should loop within the graphic using the `func setTrimOffset(_ id: DesignBlockID, offset: Double) throws` and `func setTrimLength(_ id: DesignBlockID, length: Double) throws` APIs. We use the trim offset to cut away the first second of the video and the trim length to only play 10 seconds of the video. Since our graphic is 15 seconds long, the trimmed video will be played fully once and then start looping for the remaining 5 seconds. ```swift highlight-trim // Make sure that the video is loaded before calling the trim APIs. try await engine.block.forceLoadAVResource(videoFill) try engine.block.setTrimOffset(videoFill, offset: 1) try engine.block.setTrimLength(videoFill, length: 10) ``` We can control if a video will loop back to its beginning by calling `func setLooping(_ id: DesignBlockID, looping: Bool) throws`. 
Otherwise, the video will simply hold its last frame and its audio will stop playing. Looping is enabled for all blocks by default.

```swift highlight-looping
try engine.block.setLooping(videoFill, looping: true)
```

## Audio

If the video of a video fill contains an audio track, that audio will play automatically by default when the video is playing. We can mute it by calling `func setMuted(_ id: DesignBlockID, muted: Bool) throws`.

```swift highlight-mute-audio
try engine.block.setMuted(videoFill, muted: true)
```

We can also add audio-only files to play together with the contents of the page by adding an `audio` block to the page and assigning it the URL of the audio file.

```swift highlight-audio
let audio = try engine.block.create(.audio)
try engine.block.appendChild(to: page, child: audio)
try engine.block.setString(
  audio,
  property: "audio/fileURI",
  value: "https://cdn.img.ly/assets/demo/v1/ly.img.audio/audios/far_from_home.m4a"
)
```

We can adjust the volume level of any audio block or video fill by calling `func setVolume(_ id: DesignBlockID, volume: Float) throws`. The volume is given as a fraction in the range 0 to 1.

```swift highlight-audio-volume
// Set the volume level to 70%.
try engine.block.setVolume(audio, volume: 0.7)
```

By default, our audio block will start playing at the very beginning of the page. We can change this by specifying how many seconds into the scene it should begin to play using the `func setTimeOffset(_ id: DesignBlockID, offset: Double) throws` API.

```swift highlight-timeOffset
// Start the audio after two seconds of playback.
try engine.block.setTimeOffset(audio, offset: 2)
```

By default, our audio block will have a duration of 5 seconds. We can change this by specifying its duration in seconds using the `func setDuration(_ id: DesignBlockID, duration: Double) throws` API.

```swift highlight-audioDuration
// Give the audio block a duration of 7 seconds.
try engine.block.setDuration(audio, duration: 7)
```

## Exporting Video

You can export the entire page as a video file by calling `func exportVideo(_ id: DesignBlockID, mimeType: MIMEType)`. The encoding process runs in the background, and the returned `async` stream notifies you about its progress. Since encoding runs in the background, the engine stays interactive, so you can continue to use it to manipulate the scene. Note that such changes won't be visible in the exported video file, because the scene's state is frozen at the start of the export.

```swift highlight-exportVideo
// Export page as mp4 video.
let mimeType: MIMEType = .mp4
let exportTask = Task {
  for try await export in try await engine.block.exportVideo(page, mimeType: mimeType) {
    switch export {
    case let .progress(renderedFrames, encodedFrames, totalFrames):
      print("Rendered", renderedFrames, "frames and encoded", encodedFrames, "frames out of", totalFrames)
    case let .finished(video: videoData):
      return videoData
    }
  }
  return Blob()
}
let blob = try await exportTask.value
```

## Full Code

Here's the full code:

```swift
import Foundation
import IMGLYEngine

@MainActor
func editVideo(engine: Engine) async throws {
  let scene = try engine.scene.createVideo()
  let page = try engine.block.create(.page)
  try engine.block.appendChild(to: scene, child: page)
  try engine.block.setWidth(page, value: 1280)
  try engine.block.setHeight(page, value: 720)
  try engine.block.setDuration(page, duration: 20)

  let video1 = try engine.block.create(.graphic)
  try engine.block.setShape(video1, shape: engine.block.createShape(.rect))
  let videoFill = try engine.block.createFill(.video)
  try engine.block.setString(
    videoFill,
    property: "fill/video/fileURI",
    // swiftlint:disable:next line_length
    value: "https://cdn.img.ly/assets/demo/v1/ly.img.video/videos/pexels-drone-footage-of-a-surfer-barrelling-a-wave-12715991.mp4"
  )
  try engine.block.setFill(video1, fill: videoFill)

  let video2 = try engine.block.create(.graphic)
  try engine.block.setShape(video2, shape: engine.block.createShape(.rect))
  let videoFill2 = try engine.block.createFill(.video)
  try engine.block.setString(
    videoFill2,
    property: "fill/video/fileURI",
    value: "https://cdn.img.ly/assets/demo/v3/ly.img.video/videos/pexels-kampus-production-8154913.mp4"
  )
  try engine.block.setFill(video2, fill: videoFill2)

  let track = try engine.block.create(.track)
  try engine.block.appendChild(to: page, child: track)
  try engine.block.appendChild(to: track, child: video1)
  try engine.block.appendChild(to: track, child: video2)
  try engine.block.fillParent(track)

  try engine.block.setDuration(video1, duration: 15)

  // Make sure that the video is loaded before calling the trim APIs.
  try await engine.block.forceLoadAVResource(videoFill)
  try engine.block.setTrimOffset(videoFill, offset: 1)
  try engine.block.setTrimLength(videoFill, length: 10)

  try engine.block.setLooping(videoFill, looping: true)

  try engine.block.setMuted(videoFill, muted: true)

  let audio = try engine.block.create(.audio)
  try engine.block.appendChild(to: page, child: audio)
  try engine.block.setString(
    audio,
    property: "audio/fileURI",
    value: "https://cdn.img.ly/assets/demo/v1/ly.img.audio/audios/far_from_home.m4a"
  )

  // Set the volume level to 70%.
  try engine.block.setVolume(audio, volume: 0.7)

  // Start the audio after two seconds of playback.
  try engine.block.setTimeOffset(audio, offset: 2)

  // Give the audio block a duration of 7 seconds.
  try engine.block.setDuration(audio, duration: 7)

  // Export page as mp4 video.
  let mimeType: MIMEType = .mp4
  let exportTask = Task {
    for try await export in try await engine.block.exportVideo(page, mimeType: mimeType) {
      switch export {
      case let .progress(renderedFrames, encodedFrames, totalFrames):
        print("Rendered", renderedFrames, "frames and encoded", encodedFrames, "frames out of", totalFrames)
      case let .finished(video: videoData):
        return videoData
      }
    }
    return Blob()
  }
  let blob = try await exportTask.value
}
```

---

---
title: "Timeline Editor"
description: "Use the timeline editor to arrange and edit video clips, audio, and animations frame by frame."
platform: ios
url: "https://img.ly/docs/cesdk/ios/create-video/timeline-editor-912252/"
---

Timeline editing is the heart of any professional video creation tool. With CE.SDK, you can build video editors that use a **timeline model**: each scene contains tracks and clips that align precisely over time. Developers can either launch and customize the **prebuilt VideoEditor UI** (which already includes a timeline, but is iOS only) or build a **custom headless timeline** using the `Engine` APIs.
## What You’ll Learn

- How the CE.SDK timeline hierarchy works (`Scene → Page → Track → Clip`).
- How to create and organize video tracks programmatically.
- How to trim and arrange video clips in a timeline.
- How to generate thumbnails for a timeline view.
- How to connect timeline scenes to export or playback features.

## When You’ll Use It

- You want to build a **custom video editing interface** that arranges clips.
- You want to integrate the **prebuilt VideoEditor** but still understand how it works under the hood.
- You need to **trim or rearrange** clips programmatically before export.
- You’re adding **thumbnail visualization** or building a playback scrubber.

## Understanding the Timeline Hierarchy

CE.SDK organizes time-based video projects into a structured hierarchy:

```text
Scene
└── Page (timeline segment)
    ├── Track (parallel layer)
    │   ├── Clip (video or audio content)
    │   ├── Clip
    │   └── …
```

- **Scene:** the root container of your video project.
- **Page:** a timeline segment (often a full video composition).
- **Track:** a parallel layer for clips (like separate video or audio lanes).
- **Clip:** an individual media item placed on a track.

> **Note:** By default, a new scene has dimensions of **100 × 100 units**, which is small. For realistic video compositions, explicitly set a size that matches your export or camera input:
>
> ```swift
> let scene = try engine.scene.createVideo()
> let page = try engine.block.create(.page)
> try engine.block.appendChild(to: scene, child: page)
>
> // Set page dimensions to match a 1080x1920 portrait video
> try engine.block.setWidth(page, value: 1080)
> try engine.block.setHeight(page, value: 1920)
> ```

## Using the Prebuilt Timeline Editor

The **prebuilt VideoEditor** component includes a fully interactive timeline UI for arranging, trimming, and syncing clips. It handles playback synchronization, audio alignment, and real-time preview automatically.
```swift
import IMGLYEngine
import IMGLYVideoEditor
import SwiftUI

struct VideoEditorDemo: View {
  private let engineSettings = EngineSettings(license: "")

  var body: some View {
    NavigationStack {
      VideoEditor(engineSettings)
        .imgly.onCreate { engine in
          // Optional: configure the editor or load your own scene.
        }
    }
  }
}
```

> **Note:** Omit `.imgly.onCreate` to launch the Video Editor with default settings and media. If you include `.imgly.onCreate`, you **must** explicitly load media and set up asset sources.

![The prebuilt VideoEditor includes a draggable timeline with trimming handles.](assets/timeline_ios_0.png)

The prebuilt timeline editor is ideal for apps that need a fast, ready-to-use UI with minimal setup. The prebuilt editors are **iOS only**. For more custom control, or when using macOS or Catalyst, follow the next sections to work directly with the `Engine` timeline API.

## Creating a Timeline Programmatically

When you’re building a custom UI, create the timeline structure directly through the block API.

```swift
let scene = try engine.scene.createVideo()
let page = try engine.block.create(.page)
try engine.block.appendChild(to: scene, child: page)

// Always set a realistic frame size.
try engine.block.setWidth(page, value: 1080)
try engine.block.setHeight(page, value: 1920)

// Create a video track.
let track = try engine.block.create(.track)
try engine.block.appendChild(to: page, child: track)

// Insert a video clip.
let clip = try engine.block.create(.graphic)
let fill = try engine.block.createFill(.video)
try engine.block.setString(fill, property: "fill/video/fileURI", value: fileURL.absoluteString)
try engine.block.setFill(clip, fill: fill)
try engine.block.appendChild(to: track, child: clip)
```

You can repeat this process for all clips and tracks, allowing for multi-layered compositions that include:

- background video
- overlays
- captions

When you append a clip to a track, CE.SDK automatically places the new clip **directly after the last clip in that track**.
This gives you a continuous, gap-free sequence, so playback flows cleanly from one clip to the next without extra timing math. If you need gaps or overlaps, either:

- Place the clips in separate tracks.
- Disable automatic offset management for the track and fully control offsets yourself.

```swift
// Disable automatic offset management for this track.
try engine.block.setBool(
  videoTrack,
  property: "track/automaticallyManageBlockOffsets",
  value: false)

// Manage playback/timeOffset on each clip manually.
try engine.block.setFloat(
  aRoll,
  property: "playback/timeOffset",
  value: 0.0)
try engine.block.setFloat(
  overlayClip,
  property: "playback/timeOffset",
  value: 3.0)
```

### Multi-Track Example (Video + Overlay + Audio)

You can build layered timelines by adding tracks to the same page. Each track maintains its own sequence of clips. The following code creates two video tracks to build a picture-in-picture display, accompanied by an audio track. The variables `primaryURL`, `overlayURL`, and `audioURL` resolve to `.mp4` and `.m4a` assets.
![Screenshot of video frame rendered by example code.](assets/timeline_ios_1.png)

```swift
// Create a video scene and page.
let scene = try engine.scene.createVideo()
let page = try engine.block.create(.page)
try engine.block.appendChild(to: scene, child: page)

// Set page dimensions.
try engine.block.setWidth(page, value: 1080)
try engine.block.setHeight(page, value: 1920)

// Focus the canvas on this page.
try await engine.scene.zoom(to: page)

// A-roll primary video track.
let videoTrack = try engine.block.create(.track)
try engine.block.appendChild(to: page, child: videoTrack)

let aRoll = try engine.block.create(.graphic)
try engine.block.setShape(aRoll, shape: engine.block.createShape(.rect))
let aRollFill = try engine.block.createFill(.video)
try engine.block.setString(aRollFill, property: "fill/video/fileURI", value: primaryURL.absoluteString)
try engine.block.setFill(aRoll, fill: aRollFill)
try engine.block.appendChild(to: videoTrack, child: aRoll)
let rollDuration = try engine.block.getAVResourceTotalDuration(aRollFill)
try engine.block.setDuration(aRoll, duration: rollDuration)
try engine.block.fillParent(videoTrack)

// Overlay track (B-roll or picture-in-picture).
let overlayTrack = try engine.block.create(.track)
try engine.block.appendChild(to: page, child: overlayTrack)

let overlayClip = try engine.block.create(.graphic)
try engine.block.setShape(overlayClip, shape: engine.block.createShape(.rect))
let overlayFill = try engine.block.createFill(.video)
try engine.block.setString(overlayFill, property: "fill/video/fileURI", value: overlayURL.absoluteString)
try engine.block.setFill(overlayClip, fill: overlayFill)

// Position the overlay visually.
try engine.block.setPositionX(overlayClip, value: 400)
try engine.block.setPositionY(overlayClip, value: 200)
try engine.block.setWidth(overlayClip, value: 225)
try engine.block.setHeight(overlayClip, value: 500)
try engine.block.appendChild(to: overlayTrack, child: overlayClip)
let duration = try engine.block.getAVResourceTotalDuration(overlayFill)
try engine.block.setDuration(overlayClip, duration: duration)

// Audio bed track.
let audioTrack = try engine.block.create(.track)
try engine.block.appendChild(to: page, child: audioTrack)

let audioClip = try engine.block.create(.audio)
try engine.block.setString(audioClip, property: "audio/fileURI", value: audioURL.absoluteString)
try engine.block.appendChild(to: audioTrack, child: audioClip)

// Set the duration of the composition to match the longer clip.
try engine.block.setDuration(page, duration: max(rollDuration, duration))

// Start playing.
try engine.block.setPlaying(page, enabled: true)
```

## Trimming and Clip Duration

The `duration` of the page block controls the length of the final composition. If you don’t set a duration for a clip, it keeps its short default duration. Setting a clip duration that’s longer than the clip’s video asset causes the asset to loop. Setting a page duration that’s longer than the duration of its clips results in a blank screen for the remainder. Use `getAVResourceTotalDuration()` on audio clips or video fills to get the duration of the underlying source media.

CE.SDK gives you fine control over:

- **trim start**
- **trim length**
- **timeline position**

Each clip can define how much of its source video to display and where it begins in the composition’s timeline. Assume `aRoll` is a `.graphic` block and `aRollFill` is its `.video` fill.
```swift
// Skip the first 2 seconds of the source.
try engine.block.setFloat(
  aRollFill,
  property: "playback/trimOffset",
  value: 2.0
)

// Play only 5 seconds after the trim offset.
try engine.block.setFloat(
  aRollFill,
  property: "playback/trimLength",
  value: 5.0
)
```

Use `playback/timeOffset` on the clip block to move it along the track:

```swift
// Start this clip 10 seconds into the track.
try engine.block.setFloat(
  aRoll,
  property: "playback/timeOffset",
  value: 10.0
)
```

## Timeline Playback Control

You can preview playback using the **Scene API** after you’ve placed and trimmed clips. The prebuilt editor handles this automatically, but if you’re implementing a custom player, use the functions shown in [Control Audio and Video](https://img.ly/docs/cesdk/ios/create-video/control-daba54/). That guide covers:

- Play, pause, and seek.
- Playback speed and looping.
- Current playback time queries.
- Synchronization across different tracks.

## Generating Timeline Thumbnails

You can render thumbnails directly from any video clip using CE.SDK’s **asynchronous** thumbnail generator.

```swift
@State private var thumbnails: [UIImage] = []

let stream = await MainActor.run {
  engine.block.generateVideoThumbnailSequence(
    id, // id can be a video fill OR a video clip block.
    thumbnailHeight: 45,
    timeRange: 0.0...3.0, // timeRange is relative to the design block’s playback/timeOffset.
    numberOfFrames: 10
  )
}

for try await thumb in stream {
  await MainActor.run {
    thumbnails.append(UIImage(cgImage: thumb.image))
  }
}
```

Each emitted image corresponds to a frame sample along the clip’s timeline. You can display these in a `LazyHStack` or `ScrollView` to create a scrubber or timeline strip.

![Video clip with rendered thumbnails in a LazyHStack](assets/timeline_ios_2.png)

> **Note:** Thumbnail generation renders frames using Metal. When you’re using the Engine without the prebuilt editor UI, you must have a `Canvas(engine:)` (it can be hidden) mounted in your SwiftUI view hierarchy.
> Otherwise the stream may produce no frames and you may see log messages like:
>
> `[CAMetalLayer nextDrawable] returning nil because allocation failed.`
> `Could not obtain a recording context`
>
> In other words, **thumbnail generation isn’t truly headless: it needs a live render surface.**

Generate **audio waveforms** in a similar way using `generateAudioThumbnailSequence`. This function emits an async stream of `AudioThumbnail` structs, which contain normalized audio samples (0…1). You can use the samples to render a waveform in a custom SwiftUI view. The function signature and async stream behavior mirror video thumbnails; only the output data differs.

```swift
// Generate audio “thumbnails” (sample chunks) for an audio block (or a video fill with audio).
let stream = await MainActor.run {
  engine.block.generateAudioThumbnailSequence(
    audioClip, // or a video fill id
    samplesPerChunk: 512,
    timeRange: 0.0...10.0, // seconds
    numberOfSamples: 8_192, // total samples to generate
    numberOfChannels: 1 // 1 = mono, 2 = stereo (interleaved L/R)
  )
}
```

> **Note:** Rendering a waveform is application-specific and must be implemented in a custom SwiftUI view.

## Exporting the Timeline

To export a timeline, you export the `page` block as a video file. `exportVideo` returns an async stream of export events, so you can report progress and receive the final video data.

```swift
// Export a page to MP4.
let stream = try await engine.block.exportVideo(page, mimeType: .mp4)

for try await event in stream {
  if case .finished(let data) = event {
    let url = FileManager.default.temporaryDirectory.appendingPathComponent("export.mp4")
    try data.write(to: url)
    print("Exported:", url)
  } else if case .progress(let progress) = event {
    print("Export progress:", progress)
  }
}
```

CE.SDK supports standard formats (MP4, MOV, WebM, and audio-only tracks).
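Picking up the waveform note from earlier: drawing the bars from the normalized samples is left to your app. A minimal sketch might bucket the samples and render one bar per bucket. The `downsample` helper and `WaveformView` below are illustrative only, not part of the CE.SDK API; they assume you have already collected the samples from `generateAudioThumbnailSequence` into an array.

```swift
import SwiftUI

/// Reduce a long sample buffer to roughly `count` bars by taking the
/// peak of each bucket. (Illustrative helper, not a CE.SDK API.)
func downsample(_ samples: [Float], to count: Int) -> [Float] {
  guard !samples.isEmpty, count > 0 else { return [] }
  let bucketSize = max(1, samples.count / count)
  return stride(from: 0, to: samples.count, by: bucketSize).map { start in
    samples[start..<min(start + bucketSize, samples.count)].max() ?? 0
  }
}

/// Minimal waveform strip: one capsule per sample, scaled to the view height.
struct WaveformView: View {
  /// Normalized samples in 0...1, e.g. collected from `generateAudioThumbnailSequence`.
  let samples: [Float]

  var body: some View {
    GeometryReader { geo in
      HStack(spacing: 1) {
        ForEach(Array(samples.enumerated()), id: \.offset) { item in
          Capsule()
            .frame(width: 2, height: max(2, CGFloat(item.element) * geo.size.height))
        }
      }
      .frame(maxHeight: .infinity, alignment: .center)
    }
  }
}
```

A scrubber could then show `WaveformView(samples: downsample(collected, to: 120))` beneath the thumbnail strip.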
## Troubleshooting

| Symptom | Likely Cause | Solution |
| --- | --- | --- |
| Clips overlap or play out of order | Misaligned `playback/timeOffset` values | Ensure each clip’s start time is unique and sequential |
| Trim changes ignored | Trim start + duration exceed source length | Confirm the source duration with `getAVResourceTotalDuration()` before trimming |
| Thumbnails are blank | Resource not loaded yet | Load the resource (e.g. `forceLoadAVResource`) before generating thumbnails |
| No thumbnails, no error | Canvas not mounted / playback active / second request in progress | Check that there is an active Canvas and that playback is paused. Check that you don’t have overlapping calls to the thumbnail generator |
| Playback stutters | Too many parallel HD tracks | Reduce simultaneous tracks or use compressed preview |

***

## Next Steps

- Use [Control Audio and Video](https://img.ly/docs/cesdk/ios/create-video/control-daba54/) to play, pause, seek, loop, and adjust volume or speed for timeline content.
- [Add Captions](https://img.ly/docs/cesdk/ios/edit-video/add-captions-f67565/) to place timed text that stays in sync with video and audio.

---

---
title: "Edit Image"
description: "Use CE.SDK to crop, transform, annotate, or enhance images with editing tools and programmatic APIs."
platform: ios
url: "https://img.ly/docs/cesdk/ios/edit-image-c64912/"
---

## Related Pages

- [Integrating a Custom Background Removal Tool in iOS](https://img.ly/docs/cesdk/ios/edit-image/remove-bg-9dfcf7/) - Learn how to add a custom button to the CE.SDK for iOS to trigger your own background removal logic using Apple's Vision framework.
- [Transform](https://img.ly/docs/cesdk/ios/edit-image/transform-9d189b/) - Crop, resize, rotate, scale, or flip images using CE.SDK's built-in transformation tools.

---

---
title: "Integrating a Custom Background Removal Tool in iOS"
description: "Learn how to add a custom button to the CE.SDK for iOS to trigger your own background removal logic using Apple's Vision framework."
platform: ios
url: "https://img.ly/docs/cesdk/ios/edit-image/remove-bg-9dfcf7/"
---

The CE.SDK provides a flexible architecture that allows you to extend its capabilities to meet your specific needs.
This guide demonstrates how to integrate a custom background removal feature into the `Photo Editor`. You can apply the same approach to all other [editor solutions](https://img.ly/docs/cesdk/ios/prebuilt-solutions-d0ed07/) and to other types of image processing.

> **Note:** Working with `Vision` code in a simulator is **not recommended**. Use a physical device when experimenting with the code from this guide.

## What You’ll Learn

- How to add a custom “Remove Background” button to the CE.SDK dock.
- How to pull the current image from the engine, run background removal, and write the result back.
- How to implement `VNGenerateForegroundInstanceMaskRequest` (iOS 17+) for general subject cut-outs.
- How to keep the UI responsive and handle errors gracefully.

> **Note:** On iOS 15 and 16, you can only use `VNGeneratePersonSegmentationRequest`, which handles people-only cut-outs. To go beyond people, you’ll have to use something other than the plain Vision framework. Everything else (creating the button, finding the selected image, extracting data, etc.) works as this guide describes.

## When To Use It

- You want a one-tap “Remove BG” action in the editor UI.
- You prefer on-device processing (no uploads) for latency, privacy, or offline use.
- You need to plug in your own image editing logic (Apple Vision, a third-party library, or your own API).

## Adding a Button To the Dock

You can learn more about adding buttons in the [customize dock](https://img.ly/docs/cesdk/ios/user-interface/customization/dock-cb916c/) guide. For this guide, you’ll use a basic example to add a single button to the main dock of the Photo Editor.
```swift
PhotoEditor(.init(license: ""))
  .imgly.modifyDockItems { context, items in
    items.addFirst {
      Dock.Button(
        id: "ly.img.backgroundRemoval",
        action: { context in
          Task {
            await performBackgroundRemoval(context: context)
          }
        },
        label: { _ in
          Label("Remove BG", systemImage: "person.crop.circle.fill.badge.minus")
        }
      )
    }
  }
```

![The dock button created by the code snippet](assets/remove-bg-dock-0.png)

The preceding code:

1. Creates an instance of a `Dock.Button`.
2. Adds it to the main dock of the editor in the leftmost position.

The button has the following properties:

- `id`
- `action`
- `label`

This is a common pattern for buttons in SwiftUI. The `context` parameter of `.imgly.modifyDockItems` holds a reference to the engine and the loaded assets. In the next few sections you’ll learn the steps of the extraction. Put together, they become the body of the `performBackgroundRemoval(context:)` function that the button calls.

![Editor with loaded image and new button](assets/remove-bg-before.png)

## Extracting the Image

A block that displays an image has an image fill which contains the URL of the underlying image. The next step in removing the background is to extract the image data. In the Photo Editor, the scene’s page has the fill. In other scenarios, your code could either:

- Look for the currently selected block.
- Use some other method to find the fill.

After extraction, the image is converted to a `UIImage` for the `Vision` framework to use.
```swift
// Get the current page (canvas) from the scene.
guard let currentPage = try engine.scene.getCurrentPage() else { return }

// Validate that the page contains an image.
let imageFill = try engine.block.getFill(currentPage)
let fillType = try engine.block.getType(imageFill)
guard fillType == FillType.image.rawValue else { return }

// Set the block into a loading state.
try engine.block.setState(imageFill, state: .pending(progress: 0.5))

// Step 1: Extract image data from the block.
let imageData = try await extractImageData(from: imageFill, engine: engine)

// Step 2: Convert to UIImage for processing.
guard let originalImage = UIImage(data: imageData) else {
  try engine.block.setState(imageFill, state: .ready)
  return
}
```

Below is an example function that actually extracts the data and returns it to the background removal function.

```swift
/// Extracts image data from a design block.
private func extractImageData(from block: DesignBlockID, engine: Engine) async throws -> Data {
  // You could also check here whether the block is using a source set.
  let imageFileURI = try engine.block.getString(block, property: "fill/image/fileURI")
  guard let url = URL(string: imageFileURI) else {
    throw URLError(.badURL)
  }
  let (data, _) = try await URLSession.shared.data(from: url)
  return data
}
```

## Processing the Image

With a `UIImage`, your code can now process the image using a background removal algorithm, or any image processing you can create. This guide uses a `BackgroundRemover.swift` structure that you’ll find at the end of the guide. Check the comments in the code about the `Vision` implementation.

```swift
guard let cutout = await BackgroundRemover.removeWithForegroundInstanceMask(from: originalImage) else {
  try engine.block.setState(imageFill, state: .ready)
  return
}
```

## Replace the Image in the Editor

With a processed image, the last step is to update the image fill with the new image:

1. Write the image to disk.
2. Return the URL.
3. Update the original image block with the new fill.

This replaces the old image with the new one seamlessly.

![Image with background removed.](assets/remove-bg-after.png)

```swift
let processedImageURL = try saveImageToCache(cutout)
try await engine.block.addImageFileURIToSourceSet(
  imageFill,
  property: "fill/image/sourceSet",
  uri: processedImageURL
)
```

An implementation of `saveImageToCache(_ image:)` might look like this:

```swift
private func saveImageToCache(_ image: UIImage) throws -> URL {
  guard let imageData = image.pngData() else {
    throw CocoaError(.fileWriteUnknown)
  }
  let cacheURL = try FileManager.default
    .url(for: .cachesDirectory, in: .userDomainMask, appropriateFor: nil, create: false)
    .appendingPathComponent(UUID().uuidString, conformingTo: .png)
  try imageData.write(to: cacheURL)
  return cacheURL
}
```

## Complete Function

Here is the complete function for the background removal processing described in this guide. In production code, you’d want to make the `guard` statements `throw` instead of just returning.
```swift
func performBackgroundRemoval(context: Dock.Context) async {
  do {
    let engine = context.engine
    guard let currentPage = try engine.scene.getCurrentPage() else { return }

    let imageFill = try engine.block.getFill(currentPage)
    let fillType = try engine.block.getType(imageFill)
    guard fillType == FillType.image.rawValue else { return }

    try engine.block.setState(imageFill, state: .pending(progress: 0.5))

    // Step 1: Extract image data from the block.
    let imageData = try await extractImageData(from: imageFill, engine: engine)

    // Step 2: Convert to UIImage for processing.
    guard let originalImage = UIImage(data: imageData) else {
      try engine.block.setState(imageFill, state: .ready)
      return
    }

    // Step 3: Remove the background.
    guard let cutout = await BackgroundRemover.removeWithForegroundInstanceMask(from: originalImage) else {
      try engine.block.setState(imageFill, state: .ready)
      return
    }

    // Step 4: Save the new image.
    let processedImageURL = try saveImageToCache(cutout)

    // Step 5: Replace the original image with the new one without a background.
    try await engine.block.addImageFileURIToSourceSet(
      imageFill,
      property: "fill/image/sourceSet",
      uri: processedImageURL
    )

    /* Optional: replace the entire source set instead. This keeps variants in check.
    try engine.block.setSourceSet(
      imageFill,
      property: "fill/image/sourceSet",
      sourceSet: [
        .init(uri: processedImageURL,
              width: UInt32(Int(cutout.size.width)),
              height: UInt32(Int(cutout.size.height)))
      ]
    )
    */

    // Set the block into the ready state again.
    try engine.block.setState(imageFill, state: .ready)
  } catch {
    // Surface or log the error in production code.
  }
}
```

## Troubleshooting

**❌ Button is enabled for non-image content**: Guard by checking the `FillType` of the block before doing any work. Optionally, disable the button dynamically by inspecting the current selection.

**❌ Mask looks jagged or haloed**: Try dilating and then slightly blurring the mask (`CIMorphologyMaximum`, then `CIGaussianBlur` with σ ≈ 1.0) before compositing in the `BackgroundRemover.swift` file.
**❌ Performance is poor on large images**: Downscale the image to a smaller size, generate the mask, then upscale the mask and image back to the original resolution before compositing.

**❌ Code doesn’t run as expected, or crashes**: Test on a physical device; some `Vision` requests may not return expected masks in the simulator.

## BackgroundRemover.swift

Here is a full, annotated implementation of the Vision functions that form the background removal code.

```swift
//
//  BackgroundRemover.swift
//
//  Performs on-device background removal using Apple’s Vision framework (iOS 17+).
//  Designed for use within CE.SDK or any app needing a quick subject cut-out.
//
//  The Vision framework performs semantic segmentation of the foreground,
//  returning an instance mask (a grayscale alpha mask) that identifies
//  the main subjects in the image. We then composite the original image
//  over a transparent background using Core Image.
//
//  © IMG.LY Documentation Example – Detailed Version
//

import Vision
import CoreImage
import CoreImage.CIFilterBuiltins
import UIKit

/// A helper struct providing one static method for background removal.
/// This version uses the Vision framework’s
/// `VNGenerateForegroundInstanceMaskRequest` (iOS 17+)
/// for general-purpose subject segmentation.
struct BackgroundRemover {

  /// Removes the background from a given UIImage using Vision.
  ///
  /// - Parameter uiImage: The source image to process.
  /// - Returns: A new UIImage with the detected foreground preserved
  ///   and the background made transparent, or `nil` if the operation fails.
  ///
  /// ### Implementation overview
  /// 1. Convert the UIImage to a Core Image (CIImage) for Vision and Core Image processing.
  /// 2. Run Vision’s `VNGenerateForegroundInstanceMaskRequest`
  ///    to produce an instance segmentation mask.
  /// 3. Merge all detected instances into a single grayscale alpha mask.
  /// 4. Composite the original image over a transparent background using that mask as alpha.
  @MainActor
  static func removeWithForegroundInstanceMask(from uiImage: UIImage) async -> UIImage? {
    // Convert the UIKit UIImage into a Core Image representation
    // which Vision and Core Image APIs operate on.
    guard let ciImage = CIImage(image: uiImage) else {
      print("❌ Failed to create CIImage from UIImage.")
      return nil
    }

    // 1️⃣ Create the Vision request that produces foreground instance masks.
    //    Each “instance” represents one segmented subject (e.g., person, pet, object).
    let request = VNGenerateForegroundInstanceMaskRequest()

    // 2️⃣ Create a Vision request handler that can process our image.
    //    VNImageRequestHandler wraps the input image and orchestrates the request execution.
    let handler = VNImageRequestHandler(ciImage: ciImage)

    do {
      // 3️⃣ Perform the Vision request synchronously.
      //    This will analyze the image and populate `request.results`.
      try handler.perform([request])

      // 4️⃣ Retrieve the segmentation results.
      //    We only handle the first result, though a request can return multiple.
      guard let result = request.results?.first else {
        print("❌ No mask results returned by Vision.")
        return nil
      }

      // 5️⃣ Merge all detected instances into one combined alpha mask.
      //    This creates a single-channel image (grayscale) that encodes
      //    the combined “foreground subject” region.
      //
      //    You can also choose to keep only specific instances (e.g., top confidence).
      let mergedMask = try result.generateScaledMaskForImage(
        forInstances: result.allInstances, // all detected subjects
        from: handler                      // reference to the original image handler
      )

      // 6️⃣ Convert the mask’s pixel buffer into a CIImage for compositing.
      let maskCIImage = CIImage(cvPixelBuffer: mergedMask)

      // 7️⃣ Blend the original image over a transparent background using the mask.
      //    This step is handled by a Core Image filter in `composite(ciImage:alphaMask:)`.
      return composite(ciImage: ciImage, alphaMask: maskCIImage)
    } catch {
      // If Vision throws an error (invalid image, unsupported format, etc.)
      print("❌ Vision background removal failed: \(error.localizedDescription)")
      return nil
    }
  }

  // MARK: - Core Image compositing

  /// Composites the original image over a transparent background,
  /// using the segmentation mask as the alpha channel.
  ///
  /// - Parameters:
  ///   - ciImage: The source image as a CIImage.
  ///   - alphaMask: The grayscale mask from Vision,
  ///     where white = subject (fully visible) and black = background (transparent).
  /// - Returns: A UIImage with the background removed.
  private static func composite(ciImage: CIImage, alphaMask: CIImage) -> UIImage? {
    // Vision’s mask output might not match the original image size.
    // Here, we scale it to align perfectly with the input image dimensions.
    let scaleX = ciImage.extent.width / alphaMask.extent.width
    let scaleY = ciImage.extent.height / alphaMask.extent.height
    let resizedMask = alphaMask.transformed(by: CGAffineTransform(scaleX: scaleX, y: scaleY))

    // Core Image needs a rendering context for filter operations.
    // The CIContext can reuse GPU/CPU resources for faster repeated processing.
    let context = CIContext()

    // 1️⃣ Create a Core Image filter to composite the subject over transparency.
    //    `CIBlendWithMask` takes three images:
    //    - inputImage:      the content we want to keep (our photo)
    //    - backgroundImage: what’s behind it (transparent color)
    //    - maskImage:       controls per-pixel opacity (white = opaque, black = transparent)
    let filter = CIFilter.blendWithMask()

    // Provide the three required inputs.
    filter.inputImage = ciImage
    filter.backgroundImage = CIImage(color: .clear).cropped(to: ciImage.extent)
    filter.maskImage = resizedMask

    // 2️⃣ Render the filtered output into a new CGImage.
    guard let output = filter.outputImage,
          let cg = context.createCGImage(output, from: output.extent) else {
      print("❌ Failed to create CGImage from composited output.")
      return nil
    }

    // 3️⃣ Convert the rendered CGImage back into a UIImage
    //    that can be displayed or saved in UIKit-based workflows.
    return UIImage(cgImage: cg, scale: UIScreen.main.scale, orientation: .up)
  }
}
```

--- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Transform" description: "Crop, resize, rotate, scale, or flip images using CE.SDK's built-in transformation tools." platform: ios url: "https://img.ly/docs/cesdk/ios/edit-image/transform-9d189b/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Create and Edit Images](https://img.ly/docs/cesdk/ios/edit-image-c64912/) > [Transform](https://img.ly/docs/cesdk/ios/edit-image/transform-9d189b/) --- --- ## Related Pages - [Move](https://img.ly/docs/cesdk/ios/edit-image/transform/move-818dd9/) - Position an image relative to its parent using either percentage or units - [Crop Images](https://img.ly/docs/cesdk/ios/edit-image/transform/crop-f67a47/) - Cut out specific areas of an image to focus on key content or change aspect ratio.
- [Rotate](https://img.ly/docs/cesdk/ios/edit-image/transform/rotate-5f39c9/) - Documentation for Rotate - [Resize](https://img.ly/docs/cesdk/ios/edit-image/transform/resize-407242/) - Change the size of individual elements or groups. - [Scale](https://img.ly/docs/cesdk/ios/edit-image/transform/scale-ebe367/) - Resize images uniformly in your app. - [Flip Images](https://img.ly/docs/cesdk/ios/edit-image/transform/flip-035e9f/) - Flip images horizontally or vertically, or mirror their content inside a crop frame. --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Crop Images" description: "Cut out specific areas of an image to focus on key content or change aspect ratio." platform: ios url: "https://img.ly/docs/cesdk/ios/edit-image/transform/crop-f67a47/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Create and Edit Images](https://img.ly/docs/cesdk/ios/edit-image-c64912/) > [Transform](https://img.ly/docs/cesdk/ios/edit-image/transform-9d189b/) > [Crop](https://img.ly/docs/cesdk/ios/edit-image/transform/crop-f67a47/) --- Cropping images is a fundamental editing operation that helps you frame your subject, remove unwanted elements, or prepare visuals for specific formats. With the CreativeEditor SDK (CE.SDK) for iOS, you can crop images either using the built-in user interface or programmatically via the engine API. 
This guide covers both methods and explains how to apply constraints such as fixed aspect ratios or exact dimensions when using templates. ## What You’ll Learn - How to enable and use the pre-built crop UI. - How to query whether a block supports cropping. - How to adjust crop via helper methods or properties for scale, translation, rotation, and flip. - How to reset a crop and how to fill the frame programmatically. - How to chain crop transformations. ## When To Use It Use the built-in UI when end-users should adjust the crop visually. Use the programmatic API when you need: - Automation. - To enforce brand layouts. - To drive cropping from templates or data. ## Using the Built-In Crop UI CE.SDK provides a user-friendly cropping tool in its default UI. Users can interactively: - **Adjust** crop areas. - **Select** preset aspect ratios. - Apply changes with **real-time** feedback. This makes it easy to support social media presets or maintain brand consistency. ![Crop tool appears when a selected image allows cropping.](assets/ios-crop-tool-161.png) ### User Interaction Workflow 1. **Select the image** you want to crop. 2. **Tap the crop icon** in the editor toolbar. 3. **Drag the corners or edges** to adjust the crop area. 4. **Use the tools** to crop-flip, rotate, or resize the image, or to reset it. 5. **Close the sheet** to finalize the crop. ![An image that has been scale-cropped and slightly rotated, showing the cropped and original image.](assets/ios-ui-crop-workflow-161.png) The cropped image appears in your project, but the underlying original image and crop values are preserved even when you rotate or resize the cropped image. ### Enable and Configure Crop Tool The default editor UI allows cropping. When you are creating your own UI or custom toolbars, you can configure editing behavior.
To ensure the crop tool is available in the UI, make sure you include it in your app in either: - The dock configuration - The quick actions

```swift
try engine.editor.setSettingBool("doubleClickToCropEnabled", value: true)
try engine.editor.setSettingBool("controlGizmo/showCropHandles", value: true)
try engine.editor.setSettingBool("controlGizmo/showCropScaleHandles", value: true)
```

The cropping handles are only available when a selected block has a fill of type `.image`. Otherwise, setting the edit mode of the `engine.editor` to `.crop` has no effect. ### Canvas vs. Prebuilt Editors The CE.SDK offers two UI paths: - Canvas (all platforms) view shows built-in crop controls for selected image blocks when the crop gizmos are enabled. No extra wiring is required. - Prebuilt Editors (iOS only) such as the Design Editor include an Inspector Bar with a Crop button. You can ensure the button is present or customize it using: - The `.imgly.inspectorBarItems` modifiers - The predefined `InspectorBar.Buttons.crop()` item ### Crop Presets CE.SDK has built-in crop presets as part of the default asset sources. You can also provide your own presets by adding to the default source or serving your own source.

```swift
let engine = Engine()
try await engine.addDefaultAssetSources() // includes ly.img.crop.presets
```

To create your own preset definitions, serve a custom set. Define presets using JSON in the `ly.img.crop.presets` asset source. Common types you can publish include: - FixedSize (absolute width/height in a unit) - FixedAspectRatio (ratio only) - FreeAspectRatio (unconstrained) Below is an example (FixedSize):

```json
{
  "id": "page-sizes-instagram-square",
  "label": { "en": "Square (1080×1080)" },
  "type": "FixedSize",
  "width": 1080,
  "height": 1080,
  "designUnit": "Pixel",
  "groups": ["instagram"]
}
```

Publish your JSON with other served assets [from your server](https://img.ly/docs/cesdk/ios/serve-assets-b0827c/) and register that source.
## Programmatic Cropping Programmatic cropping gives you complete control over: - Image boundaries - Dimensions - Integration with other transformations like rotation or flipping This is useful for: - Automation - Predefined layouts - Server-synced workflows When you initially create a fill to insert an image into a block, the engine: 1. Centers the image in the block. 2. Crops any dimension that doesn't match. For example: when a block with dimensions of 400.0 × 400.0 is filled with an image that is 600.0 × 500.0, there will be horizontal cropping. When cropping in code, it’s important to remember that you are modifying the scale, translation, rotation, etc. of the underlying image. The examples below always adjust the x and y values equally. This isn’t required, but adjusting them unequally distorts the image, which might be just what you want. ### Verify Crop Permission Before your code applies any cropping, it should first verify that the block currently supports cropping.

```swift
let canCrop = try engine.block.supportsCrop(imageBlock)
```

### Reset Crop When an image is initially placed into a block, it receives crop scale and crop translation values. Resetting the crop returns the image to those original values. ![Image with no additional crop applied shown in crop mode](../mobile-assets/crop-example-1.png) This is a block (called `imageBlock` in the example code) with dimensions of 400 × 400 filled with an image that has dimensions of 600 × 530. The image has slight scaling and translation applied so that it fills the block evenly. At any time, the code can execute the reset crop command to return it to this stage.

```swift
try engine.block.resetCrop(imageBlock)
```

### Crop Translation The translation values adjust the placement of the origin point of an image. You can read and change the values. They’re not pixel units or centimeters; they’re scaled percentages.
An image that has its origin point at the origin point of the crop block has a translation value of 0.0 for x and y. ![Image crop translated one quarter of its width to the right](../mobile-assets/crop-example-5.png)

```swift
try engine.block.setCropTranslationX(imageBlock, translationX: 0.250)
```

This image has had its translation in the x direction set to 0.25. That moved the image one quarter of its width to the right. Setting the value to -0.25 would change the offset of the origin to the left. These are absolute values. Setting the x value to 0.25 and then setting it to -0.25 does not move the image to an offset of 0.0. There is a `setCropTranslationY(_ id: DesignBlockID, translationY: Float)` function to adjust the translation of the image in the vertical direction. Negative values move the image up and positive values move the image down. To read the current crop translation values, use the convenience getters for the x and y values.

```swift
let currentX = try engine.block.getCropTranslationX(imageBlock)
let currentY = try engine.block.getCropTranslationY(imageBlock)
```

### Crop Scale The scale values adjust the height and width of the underlying image. Values larger than 1.0 make the image larger, while values less than 1.0 make it smaller. Unless the image also has offsetting translation applied, the center of the image will move. ![Image crop scaled by 1.5 with no corresponding translation adjustment](../mobile-assets/crop-example-6.png) This image has been scaled by 1.5 in the x and y directions, but the origin point has not been translated. So, the center of the image has moved.

```swift
try engine.block.setCropScaleX(imageBlock, scaleX: 1.50)
try engine.block.setCropScaleY(imageBlock, scaleY: 1.50)
```

To read the current crop scale values, use the convenience getters for the x and y values.
```swift
let currentX = try engine.block.getCropScaleX(imageBlock)
let currentY = try engine.block.getCropScaleY(imageBlock)
```

### Crop Rotate As when rotating blocks, the crop rotation function uses radians: - Positive values rotate clockwise. - Negative values rotate counterclockwise. The image rotates around its center. ![Image crop rotated by pi/4 or 45 degrees](../mobile-assets/crop-example-7.png)

```swift
try engine.block.setCropRotation(block, rotation: .pi / 4.0)
```

For working with radians, Swift has a constant defined for pi. It can be used as either `Float.pi` or `Double.pi`. Because the `setCropRotation` function takes a `Float` for the rotation value, you can use `.pi` and Swift will infer the correct type. ### Crop to Scale Ratio To center-crop an image, you can use the scale ratio. This adjusts the x and y scales of the image evenly, and adjusts the translation to keep it centered. ![Image cropped using the scale ratio to remain centered](../mobile-assets/crop-example-2.png) This image has been scaled by 2.0 in the x and y directions. Its translation has been adjusted by -0.5 in the x and y directions to keep the image centered.

```swift
try engine.block.setCropScaleRatio(imageBlock, scaleRatio: 2.0)
```

Using the crop scale ratio function is the same as calling the translation and scale functions, but in one line.

```swift
try engine.block.setCropScaleX(block, scaleX: 2.0)
try engine.block.setCropScaleY(block, scaleY: 2.0)
try engine.block.setCropTranslationX(block, translationX: -0.5)
try engine.block.setCropTranslationY(block, translationY: -0.5)
```

### Crop to Fixed Dimensions (Absolute Coordinates) Use fixed dimensions when you want to define the crop region explicitly in absolute coordinates, such as when either: - Matching a specified bounding box. - Recreating a design template.
```swift
let cropRect = CGRect(x: 100, y: 50, width: 300, height: 300)
try engine.block.setFloat(imageBlock, property: "crop/x", value: Float(cropRect.origin.x))
try engine.block.setFloat(imageBlock, property: "crop/y", value: Float(cropRect.origin.y))
try engine.block.setFloat(imageBlock, property: "crop/width", value: Float(cropRect.width))
try engine.block.setFloat(imageBlock, property: "crop/height", value: Float(cropRect.height))
```

The result of the preceding code is an image cropped to a 300 × 300 point square starting at (100, 50). This approach provides useful results during automation. ### Crop to Aspect Ratio When you want to target a specific aspect ratio, such as 4:5 or 16:9, you can calculate the crop rectangle based on the image dimensions.

```swift
let imageWidth: Float = 800
let targetRatio: Float = 4.0 / 5.0
let newHeight = imageWidth / targetRatio
try engine.block.setFloat(imageBlock, property: "crop/x", value: 0)
try engine.block.setFloat(imageBlock, property: "crop/y", value: 0)
try engine.block.setFloat(imageBlock, property: "crop/width", value: imageWidth)
try engine.block.setFloat(imageBlock, property: "crop/height", value: newHeight)
```

The preceding code crops an image to a portrait 4:5 ratio. ### Chained Crops Crop operations can be chained together. The order of the chaining impacts the final image. ![Image cropped and rotated](../mobile-assets/crop-example-3.png)

```swift
try engine.block.setCropScaleRatio(block, scaleRatio: 2.0)
try engine.block.setCropRotation(block, rotation: .pi / 3.0)
```

![Image rotated first and then scaled](../mobile-assets/crop-example-4.png)

```swift
try engine.block.setCropRotation(block, rotation: .pi / 3.0)
try engine.block.setCropScaleRatio(block, scaleRatio: 2.0)
```

### Flipping the Crop There are two functions for crop-flipping the image: one for horizontal and one for vertical. Each flips the image along its center.
![Image crop flipped vertically](../mobile-assets/crop-example-8.png)

```swift
try engine.block.flipCropVertical(imageBlock)
try engine.block.flipCropHorizontal(imageBlock)
```

The image is crop-flipped every time the function is called, so calling the function an even number of times returns the image to its original orientation. ### Filling the Frame When the various crop operations cause the background of the crop block to be displayed, such as in the **Crop Translation** example above, `adjustCropToFillFrame` adjusts the translation and scale values so that the entire crop block is filled:

```swift
try engine.block.adjustCropToFillFrame(imageBlock, minScaleRatio: 1.0)
```

This is not the same as resetting the crop. ## Legacy vs. Modern Presets Earlier versions of the SDK used editor keys such as `ui/crop/aspectRatios` to define ratio lists. These were deprecated and replaced by an asset source for presets, `ly.img.crop.presets`. When you encounter legacy configuration examples, migrate them by creating or editing the corresponding preset JSON objects instead. ## Relationship to Video Crop Cropping tools behave the same for still images as for video frames. Video crops also interact with clip trimming and frame bounds within the timeline. ## Troubleshooting **❌ Crop handles don’t appear**: - Ensure the selected block’s fill is an image. - The `controlGizmo/showCropHandles` editor setting should be `true`. **❌ Crop ignored**: - Confirm `supportsCrop(_:)` returns `true` for the block. **❌ Background visible after edits**: - Call `adjustCropToFillFrame(_:minScaleRatio:)` to restore coverage. **❌ Cropped image appears distorted**: - Check that the `setCropScaleX` and `setCropScaleY` values are as expected. - Use `setCropScaleRatio` for a uniform scale.
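At its core, the fill-frame adjustment described earlier is a cover-fit computation: the image must be scaled by at least `max(frameWidth / imageWidth, frameHeight / imageHeight)` so that no background remains visible. A pure-Swift sketch of that factor (`coverScale` is an illustrative helper, not an engine API; the engine performs this internally):

```swift
/// Illustrative only: the smallest uniform scale factor at which an image
/// of size (imageWidth, imageHeight) fully covers a frame of size
/// (frameWidth, frameHeight), leaving no background visible.
func coverScale(imageWidth: Double, imageHeight: Double,
                frameWidth: Double, frameHeight: Double) -> Double {
    max(frameWidth / imageWidth, frameHeight / imageHeight)
}

// A 600 × 500 image inside a 400 × 400 frame needs a scale of 0.8,
// driven by the height, which is the tighter dimension.
let s = coverScale(imageWidth: 600, imageHeight: 500, frameWidth: 400, frameHeight: 400)
print(s) // 0.8
```

Passing `minScaleRatio` to `adjustCropToFillFrame(_:minScaleRatio:)` sets a floor on the resulting scale, which is why a value of `1.0` never shrinks the image below its current size.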
## Next Steps Now that you’ve seen how to work with cropping your images, some other topics to explore are: - Other [image transformations](https://img.ly/docs/cesdk/ios/edit-image/transform-9d189b/) such as rotate, resize, scale, and flip. - Customize the [Dock](https://img.ly/docs/cesdk/ios/user-interface/customization/dock-cb916c/) or [Inspector Bar](https://img.ly/docs/cesdk/ios/user-interface/customization/inspector-bar-8ca1cd/) to add or replace the crop button or provide a custom preset toolbar. - Constrain who can crop using scopes and template rules to [lock a template](https://img.ly/docs/cesdk/ios/create-templates/lock-131489/). --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Flip Images" description: "Flip images horizontally or vertically, or mirror their content inside a crop frame." platform: ios url: "https://img.ly/docs/cesdk/ios/edit-image/transform/flip-035e9f/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Create and Edit Images](https://img.ly/docs/cesdk/ios/edit-image-c64912/) > [Transform](https://img.ly/docs/cesdk/ios/edit-image/transform-9d189b/) > [Flip](https://img.ly/docs/cesdk/ios/edit-image/transform/flip-035e9f/) --- Use CE.SDK to flip or mirror image and video elements horizontally or vertically in your app. 
This guide covers block-level and crop-level flipping, batch operations, mirror effects, and scope-based permissions. ## What you'll learn - Flip an image horizontally or vertically. - Understand the difference between block flip and content crop flip. - Use both dedicated methods and property-based approaches. - Flip multiple elements together. - Create mirrored or reflection effects. - Protect templates by locking flip permissions. ## When to use Flipping is helpful when: - Mirroring product or model images for layout consistency. - Creating stylistic reflections or symmetrical designs. - Adjusting orientation in right-to-left layouts. - Correcting flipped camera footage. *** ## Flip Types: Block vs. Crop There are two kinds of flips in CE.SDK:

| Flip type | Methods | What is mirrored | When to use |
| --- | --- | --- | --- |
| Block flip | `setFlipHorizontal`, `setFlipVertical` | Entire block — including borders, effects, and overlays; changes how the block is rendered on the canvas. | Layout corrections or composition changes |
| Crop flip | `flipCropHorizontal`, `flipCropVertical` | Only the content inside the crop frame; block layout and dimensions remain unchanged. | Adjust underlying image/video orientation without affecting placement |

Use **block flips** for layout corrections or composition changes, and **crop flips** to adjust underlying image or video orientation without affecting placement. ## Flip horizontally or vertically Use the `flip/horizontal` and `flip/vertical` properties to control mirroring. They are boolean properties and have dedicated helper functions defined. All flips are around the center point of a block.
```swift
try engine.block.setFlipVertical(imageBlock, flip: true)
try engine.block.setFlipHorizontal(imageBlock, flip: true)
```

To determine if a block has been flipped, you can query the properties or use helper functions.

```swift
let isFlippedHorizontally = try engine.block.getFlipHorizontal(imageBlock)
let isFlippedVertically = try engine.block.getFlipVertical(imageBlock)
```

### Property-Based Approach In addition to convenience methods, you can use the property API for dynamic or batch operations. Blocks have `"flip/horizontal"` and `"flip/vertical"` Boolean properties.

```swift
try engine.block.setBool(imageBlock, property: "flip/horizontal", value: true)
try engine.block.setBool(imageBlock, property: "flip/vertical", value: true)
```

| Approach | When to use | Notes |
| --- | --- | --- |
| Dedicated helper functions | Type safety when writing explicit flip operations | Prefer for explicit calls — safer, clearer API |
| Property-based approach | Flexible key-path manipulation in batch scripts or tooling | Better for dynamic/bulk updates; less type safety |

## Flip Multiple Elements Together Group blocks and apply flip to the group:

```swift
let groupId = try engine.block.group([imageId, textId])
try engine.block.setFlipHorizontal(groupId, flip: true)
```

While respecting scope permissions, you can also: 1. Iterate over all blocks of a type. 2. Flip each one individually.

```swift
let blocks = try engine.block.find(byType: .graphic)
for id in blocks {
  if try engine.block.isAllowedByScope(id, key: "layer/flip") {
    try engine.block.setFlipHorizontal(id, flip: true)
  }
}
```

![Items flipped individually and as a group](assets/flip-group-160.jpg) The preceding code: - Shows the original composition on the left. - Flips each item individually in the center composition. - Groups first, then flips the group for the composition on the right.
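To make the two flip semantics concrete: `setFlipVertical(_:flip:)` assigns an absolute boolean state, while `flipCropVertical(_:)` toggles on every call. A plain-Swift model of that difference (an illustration only, not CE.SDK code; `FlipModel` is a hypothetical type):

```swift
/// Illustrative model of the two flip semantics; not a CE.SDK type.
struct FlipModel {
    var blockFlipV = false  // absolute: set to an explicit value
    var cropFlipV = false   // toggling: inverts on every call

    mutating func setFlipVertical(_ flip: Bool) { blockFlipV = flip }
    mutating func flipCropVertical() { cropFlipV.toggle() }
}

var m = FlipModel()
m.setFlipVertical(true)
m.setFlipVertical(true)   // still flipped: the value is absolute
m.flipCropVertical()
m.flipCropVertical()      // back to original: each call toggles
print(m.blockFlipV, m.cropFlipV) // true false
```

This is why calling a crop-flip function an even number of times restores the original orientation, while setting a block flip to `true` repeatedly does not.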
## To Remove Any Flip Applied If you want to remove the flip, set the property to false.

```swift
try engine.block.setFlipVertical(block, flip: false)
```

Applying the flip multiple times doesn’t flip the image back to its original orientation. This code results in a flipped block.

```swift
try engine.block.setFlipVertical(block, flip: true)
try engine.block.setFlipVertical(block, flip: true)
```

## Flip Crop Flips Content Only When you need to flip the image inside its crop region without changing the block’s placement:

```swift
try engine.block.flipCropHorizontal(imageBlock)
try engine.block.flipCropVertical(imageBlock)
```

These operations invert the crop’s translation and scale values, producing a mirror effect within the same bounding box. Use them for correcting camera orientation or stylized reflections without shifting the layout. ## Create Mirror and Reflection Effects You can simulate reflections or mirrored designs by duplicating, flipping, and adjusting opacity and position:

```swift
let mirrored = try engine.block.duplicate(original)
try engine.block.setFlipVertical(mirrored, flip: true)
try engine.block.setOpacity(mirrored, value: 0.5)
try engine.block.setPositionY(mirrored, value: 200)
```

![Image mirrored using preceding code](assets/flip-mirror-160.png) > **Note:** Try combining vertical flips with gradients or masks for realistic water or glass reflections. ## Lock or constrain flipping (optional) When building templates, you might want to lock flipping to protect the layout:

```swift
try engine.block.setScopeEnabled(block, key: "layer/flip", enabled: false)
```

You can also disable all transformations by locking the block, whether or not you are working with a template.
```swift
try engine.block.setTransformLocked(block, locked: true)
```

## Troubleshooting

| Issue | Solution |
| --- | --- |
| Flipping doesn’t apply visually | Confirm the image is rendered and loaded |
| Image flips unexpectedly | Check that flipping is not being overridden by a grouped parent block |
| User can still flip in editor | Use the "layer/flip" constraint to prevent this |

--- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Move" description: "Position an image relative to its parent using either percentage or units" platform: ios url: "https://img.ly/docs/cesdk/ios/edit-image/transform/move-818dd9/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Create and Edit Images](https://img.ly/docs/cesdk/ios/edit-image-c64912/) > [Transform](https://img.ly/docs/cesdk/ios/edit-image/transform-9d189b/) > [Move](https://img.ly/docs/cesdk/ios/edit-image/transform/move-818dd9/) --- This guide shows how to move images on the canvas using CE.SDK in your iOS app. You’ll learn how to reposition single elements, move groups, and constrain movement behavior within templates. You can move elements programmatically or by using the built-in IMG.LY UI.
## What you'll learn - Move images programmatically using Swift - Use the IMG.LY UI to drag images - Adjust image position on the canvas - Move multiple blocks together - Constrain image movement in templates ## When to use Use movement to: - Position content precisely in designs - Align images with text, backgrounds, or grid layouts - Enable drag-and-drop or animated movement workflows *** ## Move an image block programmatically Image position is controlled using the `position/x` and `position/y` properties. They can use either absolute or percentage (relative) values. In addition to setting the properties, there are helper functions.

```swift
try engine.block.setFloat(imageBlock, property: "position/x", value: 150)
try engine.block.setFloat(imageBlock, property: "position/y", value: 100)
```

or

```swift
try engine.block.setPositionX(imageBlock, value: 150)
try engine.block.setPositionY(imageBlock, value: 100)
```

This moves the image to coordinates (150, 100) on the canvas.

```swift
try engine.block.setPositionXMode(imageBlock, mode: .percent)
try engine.block.setPositionYMode(imageBlock, mode: .percent)
try engine.block.setPositionX(imageBlock, value: 0.5)
try engine.block.setPositionY(imageBlock, value: 0.5)
```

This moves the image to the center of the canvas, regardless of the dimensions of the canvas. As with setting position, you can check or update the mode using the `position/x/mode` and `position/y/mode` properties.

```swift
let xPosition = try engine.block.getPositionX(imageBlock)
let yPosition = try engine.block.getPositionY(imageBlock)
```

*** ## Move images with the UI Users can drag and drop elements directly in the editor canvas. *** ## Move multiple elements together Group elements before moving to keep them aligned:

```swift
let groupId = try engine.block.group([imageBlockId, textBlockId])
try engine.block.setPositionX(groupId, value: 200)
```

This moves the entire group to 200 points from the left edge.
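In percent mode, the stored value is a fraction of the parent dimension, so converting between percent and absolute coordinates is simple arithmetic. A pure-Swift sketch of that conversion (these helper names are illustrative, not engine APIs):

```swift
/// Illustrative helpers, not CE.SDK APIs: convert between percent-mode
/// values (fractions of the parent) and absolute canvas coordinates.
func absolutePosition(percent: Double, parentSize: Double) -> Double {
    percent * parentSize
}

func percentPosition(absolute: Double, parentSize: Double) -> Double {
    absolute / parentSize
}

// A percent value of 0.5 on a 1080-wide canvas is 540 points from the left.
print(absolutePosition(percent: 0.5, parentSize: 1080)) // 540.0
```

This arithmetic is handy when mixing modes, for example when a template stores percentages but your layout math works in points.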
*** ## Move relative to current position To nudge an image instead of setting an absolute position:

```swift
let xPosition = try engine.block.getPositionX(imageBlock)
try engine.block.setPositionX(imageBlock, value: xPosition + 20)
```

This moves the image 20 points to the right. *** ## Lock movement (optional) When building templates, you might want to lock movement to protect the layout:

```swift
try engine.block.setScopeEnabled(block, key: "layer/move", enabled: false)
```

You can also disable all transformations by locking the block, whether or not you are working with a template.

```swift
try engine.block.setTransformLocked(block, locked: true)
```

*** ## Troubleshooting

| Issue | Solution |
| --- | --- |
| Image not moving | Ensure it is not constrained or locked |
| Unexpected position | Check canvas coordinates and alignment settings |
| Grouped items misaligned | Confirm all items share the same reference point |
| Can't move via UI | Ensure the move feature is enabled in the UI settings |

*** --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Resize" description: "Change the size of individual elements or groups." platform: ios url: "https://img.ly/docs/cesdk/ios/edit-image/transform/resize-407242/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt).
**Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Create and Edit Images](https://img.ly/docs/cesdk/ios/edit-image-c64912/) > [Transform](https://img.ly/docs/cesdk/ios/edit-image/transform-9d189b/) > [Resize](https://img.ly/docs/cesdk/ios/edit-image/transform/resize-407242/)

---

This guide shows how to resize image blocks using CE.SDK with Swift. The APIs and behavior are identical on iOS, macOS, and Catalyst. You’ll learn how to change how much space individual image elements or groups occupy on the canvas. You’ll also see how to maintain aspect ratios and apply responsive resizing rules within templates.

## What You'll Learn

- Resize blocks using the UI.
- Resize images programmatically using Swift.
- Choose the correct size mode for fixed, responsive, or content-driven layouts.
- Resize all blocks in a group.
- Lock a user’s ability to resize a block.
- Understand when to resize versus when to scale.

## When to Use

Resizing is the right tool when you need to:

- Match exact dimensions for layouts, templates, or exports.
- Build responsive designs that adapt to different canvas sizes.
- Control how much space an image occupies without distorting its content.
- Enforce consistent layout rules across different designs.
- Prepare designs for automated resizing workflows.

If you want to visually enlarge or shrink an image without changing its frame, use [Scale](https://img.ly/docs/cesdk/ios/edit-image/transform/scale-ebe367/) instead.

## Resize vs Scale

Resizing and scaling both change how much space a block occupies on the canvas, but they do so in different ways:

- Resize changes width and height independently
- Scale changes width and height by the same factor, preserving aspect ratio

Use resize when you need non-uniform control, such as stretching or constraining a layout. Use scale when you want to grow or shrink a block proportionally.
Cropping is a separate operation that affects the framing of image content inside the block without changing the block’s size.

## Resize a Block Using the UI

When a block is selected, handles appear on its four sides that allow the user to resize either the width or the height of the block. Control the **visibility of the handles** with a setting on the `editor`. When the handles are invisible, a user can’t resize using touch or a mouse.

```swift
try engine.editor.setSettingBool("controlGizmo/showResizeHandles", value: false)
```

![The image on the left has resize handles, the one on the right does not](../mobile-assets/resize-example-1.png)

In the image above, the block on the right has its resize handles hidden. It cannot be resized by the user. The scale handles on the corners and the rotate handle below are still visible, and the user can use those to modify the image block.

## Resize a Block Programmatically

Each block has a `width` and `height` property you can update to resize the block. Their modes can be `.absolute` or `.percent`.

```swift
try engine.block.setWidth(block, value: 400.0)
try engine.block.setHeight(block, value: 400.0)
```

This sets the block to 400 × 400 px.

## Resize Using Size Modes

Each dimension has an associated size mode that controls how the value is interpreted. CE.SDK supports three size modes:

- absolute
- percent
- auto

Understanding these modes is key to building predictable layouts.

### Absolute Size

Absolute sizing uses fixed design units. This is the most common and most explicit form of resizing.

```swift
try engine.block.setWidthMode(imageID, mode: .absolute)
try engine.block.setHeightMode(imageID, mode: .absolute)
try engine.block.setWidth(imageID, value: 300)
try engine.block.setHeight(imageID, value: 200)
```

Use absolute sizing when you need precise control, such as for print layouts or fixed UI designs.

### Percentage-Based Size

Percentage sizing makes the block responsive to its parent container.
A value of 1.0 represents 100% of the parent size.

```swift
try engine.block.setWidthMode(imageID, mode: .percent)
try engine.block.setWidth(imageID, value: 0.5)
```

This sets the image block to 50% of its parent’s width. Percentage sizing is especially useful for templates and dynamic layouts that must adapt to different formats.

### Automatic Size

Automatic sizing lets CE.SDK determine the block’s size based on its content and layout context.

```swift
try engine.block.setWidthMode(imageID, mode: .auto)
try engine.block.setHeightMode(imageID, mode: .absolute)
try engine.block.setHeight(imageID, value: 200)
```

The engine determines the width of the block during the layout pass. Auto sizing is useful when:

- The image’s intrinsic size should drive layout
- You want CE.SDK to resolve size during layout calculation
- You’re working with responsive or generated designs

## Maintain Aspect Ratio

For a block with an `imageFill`, modifying the block's width or height causes the fill to adapt its dimensions to maintain its aspect ratio. For other types of blocks, you need to calculate the dimensions yourself and apply them.

![Image that was originally 400 × 400 widened to 600 × 400](../mobile-assets/resize-example-2.png)

In the block above, the width has been changed from 400 to 600. Notice that the subject has been scaled when compared to the images earlier in this guide.

## Control Crop Behavior While Resizing

When you resize an image block, you are changing the size of its frame, not the image itself. The block’s current crop state and fill rules govern the image content inside that frame. The `maintainCrop` option lets you control whether resizing should preserve the current crop framing or allow CE.SDK to recalculate it.
### Preserve the Existing Crop

```swift
try engine.block.setWidth(imageID, value: 300, maintainCrop: true)
try engine.block.setHeight(imageID, value: 200, maintainCrop: true)
```

Use this when:

- The user has manually adjusted the crop.
- You are resizing as part of layout changes.
- Visual continuity matters.

Think of this as keeping the same view through a differently sized window.

### Allow the Crop to Adjust

```swift
try engine.block.setWidth(imageID, value: 300, maintainCrop: false)
```

Use this when:

- You want the image to re-fit the new size
- You are normalizing or generating layouts
- You expect CE.SDK to resolve the best framing automatically

If you don’t specify `maintainCrop`, CE.SDK may adjust the crop automatically. For predictable results, it’s best to set this value explicitly.

## Resizing a Group of Blocks

You can group multiple items and then change the `width` and `height` properties of the entire group. When resizing, the group always scales both dimensions by the same amount.

![Grouped images resized uniformly](../mobile-assets/resize-example-3.png)

```swift
let group = try engine.block.group([carDogBlock, lawnDogBlock])
try engine.block.setWidth(group, value: 400)
```

In this code, the group of images is resized to have a `width` of 400. Groups are always resized uniformly, so the `height` was also changed. Notice that, when selected, the group does not have resize handles, only scale and rotate handles.

## Lock or Constrain Resizing (optional)

When building templates, you might want to lock resizing of particular blocks to protect the layout:

```swift
try engine.block.setScopeEnabled(block, key: "layer/resize", enabled: false)
```

You can also disable all transformations for a block by locking it; this works whether or not you're using a template.
```swift
try engine.block.setTransformLocked(block, locked: true)
```

## Troubleshooting

|Problem|Likely Cause|Solution|
|---|---|---|
|Resize has no effect|Size mode isn’t compatible|Verify `widthMode` / `heightMode`|
|Image appears cropped|Frame resized but crop preserved|Adjust `maintainCrop` or crop settings|
|Block won’t resize|Editing or transform is locked|Check template constraints|
|Group resizes unexpectedly|Mixed size modes|Normalize size modes before resizing|

## Next Steps

- Resize images proportionally using [Scale](https://img.ly/docs/cesdk/ios/edit-image/transform/scale-ebe367/).
- Control image framing and visible content with [Crop](https://img.ly/docs/cesdk/ios/edit-image/transform/crop-f67a47/).
- Apply resizing programmatically across many designs with [Auto-Resize](https://img.ly/docs/cesdk/ios/automation/auto-resize-4c2d58/).

---

## More Resources

- **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation
- **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs)
- **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples
- **[Support](mailto:support@img.ly)** - Contact IMG.LY support

---

---
title: "Rotate"
description: "Documentation for Rotate"
platform: ios
url: "https://img.ly/docs/cesdk/ios/edit-image/transform/rotate-5f39c9/"
---

> This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt).
**Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Create and Edit Images](https://img.ly/docs/cesdk/ios/edit-image-c64912/) > [Transform](https://img.ly/docs/cesdk/ios/edit-image/transform-9d189b/) > [Rotate](https://img.ly/docs/cesdk/ios/edit-image/transform/rotate-5f39c9/)

---

Rotation is a common transform you apply to images to:

- straighten horizons
- add dynamic tilt
- correct orientation issues

Learn how to programmatically and interactively rotate images in your app using CE.SDK. Rotation applies at the block level, rotating the entire graphic on the canvas. This differs from [crop rotation](https://img.ly/docs/cesdk/ios/edit-image/transform/crop-f67a47/), which rotates the content inside the block frame. This guide focuses on block rotation and shows how to wire interactive controls into your SwiftUI app.

## What you'll learn

- How to rotate an image as a user using the handles
- Rotate an image block by a specific angle
- How to lock image rotation
- How to rotate multiple images as a group

## When You’ll Use It

Use rotation when:

- Straightening an image or aligning it with other design elements.
- Adding expressive tilt to photos or stickers.
- Correcting imported images that appear sideways.
- Building custom editing experiences where users can freely or incrementally rotate media.

## Understanding Rotation in CE.SDK

Rotation is centered on the block's center point and is specified in radians.

### Block Rotation vs Crop Rotation

- Block rotation moves the whole block on the canvas.
- Crop rotation turns the content inside the crop region and is part of the crop API.

Which one to use depends on your goal:

- To **tilt the image visually** on the canvas, use block rotation.
- To **rotate the content** inside a zoomed or constrained frame, use crop rotation.

See the Crop guide: [Crop](https://img.ly/docs/cesdk/ios/edit-image/transform/crop-f67a47/).
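The distinction can be sketched in code. The block-level call below matches the `setRotation` API used later in this guide; the crop-level call is an assumption based on the crop API described in the Crop guide, so verify its exact signature there before relying on it:

```swift
// Tilt the whole block on the canvas (block rotation, in radians).
try engine.block.setRotation(imageBlock, radians: .pi / 12)

// Rotate only the content inside the block's frame (crop rotation).
// Hypothetical signature; check the Crop guide for the exact API.
try engine.block.setCropRotation(imageBlock, rotation: .pi / 12)
```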
### Content Fill Mode

Rotation can change how your content fits inside its frame: for example, a rotated image using `.contain` may reveal empty areas that `.cover` would fill. You rarely need to update the fill mode for rotation, but know that rotated images sometimes behave differently.

## Rotate an Image Using the UI

By default, selecting a block shows handles for resizing and rotating. You can freeform rotate a block by dragging the rotation handle.

![Rotation handle of the control gizmo](assets/rotation-handle.png)

## Rotate an Image Using Code

You can rotate an image block using the `setRotation` function. It takes the `id` of the block and a rotation amount in radians.

```swift
try engine.block.setRotation(star, radians: .pi / 4)
```

If you need to convert between radians and degrees, multiply the number in degrees by pi and divide by 180.

```swift
let angleInRadians = angleInDegrees * Double.pi / 180
let backToDegrees = angleInRadians * 180 / Double.pi
```

You can discover the current rotation of a block using the `getRotation` function.

```swift
let rotationOfStar = try engine.block.getRotation(starID)
```

Reset the rotation at any time by setting `radians` to `0`.

```swift
try engine.block.setRotation(blockID, radians: 0)
```

You can rotate a block incrementally by reading its current value, adjusting it, and setting the new value.

```swift
let delta = 0.26
let currentRotation = try engine.block.getRotation(imageID)
try engine.block.setRotation(imageID, radians: currentRotation + delta)
```

## Lock Rotation

You can remove the rotation handle from the UI by changing a setting on the engine. This affects *all* blocks.

```swift
try engine.editor.setSettingBool("controlGizmo/showRotateHandles", value: false)
```

Though the handle is gone, the user can still use the two-finger rotation gesture on a touch device. You can disable that gesture with the following setting.
```swift
try engine.editor.setSettingBool("touch/rotateAction", value: false)
```

When you want to lock only certain blocks, you can toggle the transform lock property. This applies to all transformations for the block.

```swift
try engine.block.setTransformLocked(star, locked: true)
```

To lock just the rotation transform for a block, set its rotation scope to `false`.

```swift
try engine.block.setScopeEnabled(imageID, key: "layer/rotate", enabled: false)
```

Refer to the template constraints guide for more detailed examples.

## Rotate a Group of Images

To rotate multiple elements together, first add them to a `group` and then rotate the group.

```swift
let groupId = try engine.block.group([star, textBlock])
try engine.block.setRotation(groupId, radians: .pi / 2)
```

## Update UI During User Interaction (Advanced)

Users can tap an image to select it and rotate it with the gizmo handle or a gesture. To keep any UI in sync with these updates, you’ll need to subscribe to block update events using CE.SDK’s `event` API.

```swift
// Event subscription
func watchForUpdates() {
  guard let engine, let imageID else { return }
  Task {
    for await events in engine.event.subscribe(to: [imageID]) {
      // Look for updates to this specific block.
      guard events.contains(where: { $0.type == .updated && $0.block == imageID }) else {
        continue
      }
      // Read the updated rotation value from the engine and update the
      // view's rotation variable. Note that this fires on _all_ updates
      // to the block, not just rotation changes.
      if let newValue = try? engine.block.getRotation(imageID) {
        rotation = newValue
      }
    }
  }
}
```

The preceding code subscribes to updates from a single block. Whenever that block updates, it reads the block’s rotation value and updates a `rotation` variable. You would call a function like this one at the end of your setup code for the `View`.
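If your UI works in degrees, you can wrap the conversion shown earlier in a small helper. This is a hypothetical convenience function, not part of CE.SDK, and it assumes the `setRotation` parameter types match your engine version:

```swift
// Hypothetical helper (not a CE.SDK API): rotate a block using degrees
// by converting to radians before calling the engine.
func setRotation(of block: DesignBlockID, degrees: Float, engine: Engine) throws {
  try engine.block.setRotation(block, radians: degrees * .pi / 180)
}

// Usage: rotate the star block to 45 degrees.
try setRotation(of: starID, degrees: 45, engine: engine)
```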
## Troubleshooting

|Symptom|Likely Cause|Solution|
|----|----|----|
|Rotation does nothing|Block has rotation scope disabled or incorrect block ID|Enable `layer/rotate` or unlock the transform. Check the block ID value. Ensure the block has been appended to the page|
|Image appears offset after rotation|Pivot point isn’t at image center|Make sure the pivot point is centered (default is center).|
|Rotation resets unexpectedly|Setting crop rotation instead of block rotation|Use `setRotation`, not crop APIs|
|Image shows empty areas after rotation|Content fill mode exposing background|Use `.cover` or adjust the frame|
|Rotation handle not visible|Gizmo settings disabled|Check that interactive UI controls are enabled in the settings.|

## Next Steps

Explore a minimal but complete code sample on [GitHub](https://github.com/imgly/cesdk-swift-examples/tree/v$UBQ_VERSION$/editor-guides-images-rotate).

Continue shaping your transform workflow with these related guides:

- [Crop](https://img.ly/docs/cesdk/ios/edit-image/transform/crop-f67a47/) lets you control the visible region of an image.
- Use [Resize](https://img.ly/docs/cesdk/ios/edit-image/transform/resize-407242/) to change a block's width and height independently.
- Use [Scale](https://img.ly/docs/cesdk/ios/edit-image/transform/scale-ebe367/) to scale a block uniformly from its center.
- [Flip](https://img.ly/docs/cesdk/ios/edit-image/transform/flip-035e9f/) mirrors content horizontally or vertically.
- Learn how to subscribe to block updates and sync UI state using [Events](https://img.ly/docs/cesdk/ios/concepts/events-353f97/).
---

## More Resources

- **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation
- **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs)
- **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples
- **[Support](mailto:support@img.ly)** - Contact IMG.LY support

---

---
title: "Scale"
description: "Resize images uniformly in your app."
platform: ios
url: "https://img.ly/docs/cesdk/ios/edit-image/transform/scale-ebe367/"
---

> This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt).

**Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Create and Edit Images](https://img.ly/docs/cesdk/ios/edit-image-c64912/) > [Transform](https://img.ly/docs/cesdk/ios/edit-image/transform-9d189b/) > [Scale](https://img.ly/docs/cesdk/ios/edit-image/transform/scale-ebe367/)

---

Scaling lets users enlarge or shrink a block directly on the canvas. In CE.SDK, scaling is a transform property that applies uniformly to most block types.

This guide shows how to scale images using CE.SDK in your app. You’ll learn how to scale image blocks proportionally, scale groups, and apply scaling constraints to protect template structure. The standard UI already supports pinch-to-zoom and on-screen scale handles. Scaling programmatically gives you finer control. This is ideal for automation, custom UI, or template-driven apps. When you want to scale the image **inside** the block and leave the block dimensions unchanged, you’ll use [crop scale](https://img.ly/docs/cesdk/ios/edit-image/transform/crop-f67a47/) instead.

## What You’ll Learn

- Scale images programmatically using Swift.
- Scale images proportionally or non-uniformly.
- Scale grouped elements.
- Enable or disable scaling via pinch gestures or gizmo handles.

## When to Use

Use image scaling when your UI needs to:

- Let users zoom artwork smoothly without cropping
- Enforce a canonical image size in templates
- Support controls like sliders instead of gestures
- Scale multiple elements together (logos, product bundles, captions)

## Scaling Basics

On iOS, you scale blocks using the **block API**. The main pieces you’ll use are:

- `engine.block.scale(_ id: DesignBlockID, to: Float, anchorX: Float = 0, anchorY: Float = 0)`
- `width` / `height` and their modes (`width/mode`, `height/mode`)
- crop-related properties like `crop/scaleX`, `crop/scaleY`, and `crop/translationX` / `Y`

Control the size with the following scale values:

- `1.0`: represents the **original** size.
- Larger than `1.0`: **increases** the size.
- Smaller than `1.0`: **shrinks** the size.

> **Note:** The examples below use image blocks, but this same approach works for shapes, text, stickers, and groups as long as you have their `DesignBlockID`.

## Scale an Image Uniformly

Uniform scaling uses the `scale(_ id: DesignBlockID, to: Float)` function. A scale value of `1.0` is the original scale. Values larger than `1.0` increase the scale of the block and values lower than `1.0` shrink it. A value of `2.0`, for example, makes the block twice as large.

The following scales the image to 150% of its original size. Because the default anchor point is the top-left corner, the block grows outward from that corner.

```swift
try engine.block.scale(imageBlock, to: 1.5)
```

![Original image and scaled image](../mobile-assets/scale-example-1.png)

By default, the anchor point for the image when scaling is the origin point at the top left. The scale function has two optional parameters to move the anchor point in the x and y direction. They can have values between `0.0` and `1.0`.

The following scales the image to 150% of its original size. The anchor point is (0.5, 0.5), so the image expands from the center.
```swift
try engine.block.scale(block, to: 1.5, anchorX: 0.5, anchorY: 0.5)
```

![Original image placed over the scaled image, aligned on the center anchor point](../mobile-assets/scale-example-2.png)

## Scale Non-Uniformly

To stretch or compress only one axis, thus distorting an image, combine:

- The crop scale function
- The width or height function

Different ways of making the adjustment produce different results. Below are three examples of scaling the original image in the x direction only.

![Allowing the engine to scale the image as you adjust the width of the block](../mobile-assets/scale-example-3.png)

```swift
try engine.block.setWidthMode(imageBlock, mode: .absolute)
let newWidth: Float = try engine.block.getWidth(imageBlock) * 1.5
try engine.block.setWidth(imageBlock, value: newWidth)
```

The image continues respecting its fill mode (usually `.cover`), so the content scales automatically as the frame widens.

![Using crop scale for the horizontal axis and adjusting the width of the block](../mobile-assets/scale-example-4.png)

```swift
try engine.block.setCropScaleX(imageBlock, scaleX: 1.50)
try engine.block.setWidthMode(imageBlock, mode: .absolute)
let newWidth: Float = try engine.block.getWidth(imageBlock) * 1.5
try engine.block.setWidth(imageBlock, value: newWidth)
```

This uses crop scale to scale the image in a single direction and then adjusts the block's width to match the change. The change in width does not take the crop into account, so it distorts the image by scaling the already-scaled image.
![Using crop scale for the horizontal axis and using the maintainCrop property when changing the width](../mobile-assets/scale-example-5.png)

```swift
try engine.block.setCropScaleX(imageBlock, scaleX: 1.50)
try engine.block.setWidthMode(imageBlock, mode: .absolute)
let newWidth: Float = try engine.block.getWidth(imageBlock) * 1.5
try engine.block.setWidth(imageBlock, value: newWidth, maintainCrop: true)
```

By setting the `maintainCrop` option to true, expanding the width of the image by the scale factor respects the crop scale and the image is less distorted.

## Scale Images with Built-In Gestures or Gizmos

The CE.SDK UI supports these interactions automatically:

### Pinch to Zoom

Enabled by default:

```swift
try engine.editor.setSettingBool("touch/pinchAction", value: true)
```

Setting this to false disables pinch scaling entirely. For environments with keyboard and mouse, a similar property exists:

```swift
try engine.editor.setSettingBool("mouse/enableZoom", value: true)
```

### Gizmo Scale Handles

The UI can show corner handles for drag-scaling:

```swift
try engine.editor.setSettingBool("controlGizmo/showScaleHandles", value: true)
```

This mirrors the behavior of native editors.

> **Note:** Changing these settings affects how the CE.SDK interprets user input. It doesn’t prevent you from scaling blocks programmatically with `scale(_:to:)`.

## Scale Multiple Elements Together

If you combine multiple blocks into a group, scaling the group scales every member:

```swift
let groupId = try engine.block.group([imageBlock, textBlock])
try engine.block.scale(groupId, to: 0.75)
```

This scales the entire group to 75%.

## Lock Scaling

When working with templates, you can lock a block from scaling by setting its scope. The [guide on locking](https://img.ly/docs/cesdk/ios/create-templates/lock-131489/) provides more information.
```swift
try engine.block.setScopeEnabled(imageBlock, key: "layer/resize", enabled: false)
```

To prevent users from applying **any** transform to a block:

```swift
try engine.block.setTransformLocked(imageBlock, locked: true)
```

## Troubleshooting

|Symptom|Likely Cause|Fix|
|---|---|---|
|“Property not found: transform/scale/x”|Using old spec property names that no longer exist.|Replace with `engine.block.scale(_, to:)` for uniform scale. See [Crop](https://img.ly/docs/cesdk/ios/edit-image/transform/crop-f67a47/) for more on how crop scale affects scaling results.|
|Image changes size but looks oddly distorted|Combining crop and width changes in a surprising way.|Use a simpler pattern: either change the width alone, or use a controlled `crop/scaleX` + width approach and test with sample images.|
|Pinch does nothing on canvas|Pinch scaling disabled|Ensure `"touch/pinchAction"` is true (or not overridden in settings).|
|Scale handles don’t appear|Gizmo handles disabled in editor settings|Set `controlGizmo/showScaleHandles` to true.|
|Image won’t scale at all|Block is transform-locked or scope-locked|Check `transformLocked` and any related scopes like `"layer/resize"`. Unlock or re-enable the scope if needed.|

## Next Steps

Once you’re comfortable scaling images, explore the other transform tools:

- [Resize](https://img.ly/docs/cesdk/ios/edit-image/transform/resize-407242/) for changing the size of a block’s frame.
- [Crop](https://img.ly/docs/cesdk/ios/edit-image/transform/crop-f67a47/) for changing what part of the image is visible.
- [Rotate](https://img.ly/docs/cesdk/ios/edit-image/transform/rotate-5f39c9/) for rotating images around an anchor.
- [Flip](https://img.ly/docs/cesdk/ios/edit-image/transform/flip-035e9f/) to mirror images horizontally or vertically.
- [Move](https://img.ly/docs/cesdk/ios/edit-image/transform/move-818dd9/) to reposition blocks on the canvas.
Together, these guides give you a complete picture of how to position and transform images in CE.SDK on iOS, macOS, and Catalyst.

---

## More Resources

- **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation
- **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs)
- **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples
- **[Support](mailto:support@img.ly)** - Contact IMG.LY support

---

---
title: "Add Captions"
description: "Documentation for adding captions to videos"
platform: ios
url: "https://img.ly/docs/cesdk/ios/edit-video/add-captions-f67565/"
---

> This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt).

**Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Create and Edit Videos](https://img.ly/docs/cesdk/ios/create-video-c41a08/) > [Add Captions](https://img.ly/docs/cesdk/ios/edit-video/add-captions-f67565/)

---

For video scenes, open captions can be added in CE.SDK. These allow viewers to follow the content without audio. Two blocks are available for this: `DesignBlockType.caption` blocks hold the text of individual captions, and `DesignBlockType.captionTrack` is an optional structuring block that holds caption blocks, e.g., all captions for one video. The `"playback/timeOffset"` property of each caption block controls when the caption is shown, and the `"playback/duration"` property controls how long it is shown. Usually, the captions do not overlap. As the playback time of the page progresses, the corresponding caption is shown. With the `"caption/text"` property, the text of the caption can be set.
In addition, all text properties are also available for captions, e.g., to change the font, the font size, or the alignment. Position, size, and style changes on caption blocks are automatically synced across all caption blocks. Finally, the whole page can be exported as a video file using the `block.exportVideo` function.

## Creating a Video Scene

First, we create a video scene by calling the `scene.createVideo()` API and enable caption editing with the `features/videoCaptionsEnabled` setting. Then we create a page, add it to the scene, and define its dimensions. This page will hold our composition.

```swift highlight-setupScene
let scene = try engine.scene.createVideo()
let page = try engine.block.create(.page)
try engine.block.appendChild(to: scene, child: page)
try engine.block.setWidth(page, value: 1280)
try engine.block.setHeight(page, value: 720)
try engine.editor.setSettingBool("features/videoCaptionsEnabled", value: true)
```

## Setting Page Durations

Next, we define the duration of the page using the `func setDuration(_ id: DesignBlockID, duration: Double) throws` API to be 20 seconds long. This will be the total duration of our exported captions in the end.

```swift highlight-setPageDuration
try engine.block.setDuration(page, duration: 20)
```

## Adding Captions

In this example, we want to show two captions, one after the other. For this, we create two caption blocks.

```swift highlight-createCaptions
let caption1 = try engine.block.create(.caption)
try engine.block.setString(caption1, property: "caption/text", value: "Caption text 1")
let caption2 = try engine.block.create(.caption)
try engine.block.setString(caption2, property: "caption/text", value: "Caption text 2")
```

As an alternative to manually creating the captions, changing the text, and adjusting the timings, the captions can also be loaded from a caption file, i.e., an SRT or VTT file, with the `createCaptionsFromURI` API. This returns a list of caption blocks with the parsed texts and timings.
These can be added to a caption track as well.

```swift highlight-createCaptionsFromURI
// Captions can also be loaded from a caption file, i.e., from SRT and VTT files.
// The text and timing of the captions are read from the file.
let captions = try await engine.block
  .createCaptionsFromURI(URL(string: "https://img.ly/static/examples/captions.srt")!)
```

## Creating a Captions Track

While we could add the two blocks directly to the page, we can instead use the `captionTrack` block to group them. Caption tracks themselves cannot be selected directly by clicking on the canvas, nor do they have any visual representation. The dimensions of a `captionTrack` are always derived from the dimensions of its children, so you should not call the `setWidth` or `setHeight` APIs on a track, but on its children instead.

```swift highlight-addToTrack
let captionTrack = try engine.block.create(.captionTrack)
try engine.block.appendChild(to: page, child: captionTrack)
try engine.block.appendChild(to: captionTrack, child: caption1)
try engine.block.appendChild(to: captionTrack, child: caption2)
for caption in captions {
  try engine.block.appendChild(to: captionTrack, child: caption)
}
```

## Modifying Captions

By default, each caption block has a duration of 3 seconds after it is created. If we want to show it on the page for a different amount of time, we can use the `setDuration` API.

```swift highlight-setDuration
try engine.block.setDuration(caption1, duration: 3)
try engine.block.setDuration(caption2, duration: 5)
try engine.block.setTimeOffset(caption1, offset: 0)
try engine.block.setTimeOffset(caption2, offset: 3)
```

The position and size of the captions are automatically synced across all captions that are attached to the scene. Therefore, changes only need to be made on one of the caption blocks.
```swift highlight-positionAndSize
// Once the captions are added to the scene, the position and size are synced
// with all caption blocks in the scene, so they only need to be set once.
try engine.block.setPositionX(caption1, value: 0.05)
try engine.block.setPositionXMode(caption1, mode: .percent)
try engine.block.setPositionY(caption1, value: 0.8)
try engine.block.setPositionYMode(caption1, mode: .percent)
try engine.block.setHeight(caption1, value: 0.15)
try engine.block.setHeightMode(caption1, mode: .percent)
try engine.block.setWidth(caption1, value: 0.9)
try engine.block.setWidthMode(caption1, mode: .percent)
```

The styling of the captions is also automatically synced across all captions that are attached to the scene. For example, changing the text color on the first block changes it on all caption blocks.

```swift highlight-changeStyle
// The style is synced with all caption blocks in the scene, so it only needs to be set once.
try engine.block.setColor(caption1, property: "fill/solid/color", color: Color.rgba(r: 0.9, g: 0.9, b: 0.0, a: 1.0))
try engine.block.setBool(caption1, property: "dropShadow/enabled", value: true)
try engine.block.setColor(caption1, property: "dropShadow/color", color: Color.rgba(r: 0.0, g: 0.0, b: 0.0, a: 0.8))
```

## Exporting Video

You can start exporting the entire page as a video file by calling `func exportVideo(_ id: DesignBlockID, mimeType: MIMEType)`. The encoding process runs in the background. You can get notified about its progress via the `async` stream that's returned. Since the encoding process runs in the background, the engine stays interactive, so you can continue to use the engine to manipulate the scene. Note that these changes won't be visible in the exported video file because the scene's state is frozen at the start of the export.

```swift highlight-exportVideo
// Export page as mp4 video.
let mimeType: MIMEType = .mp4 let exportTask = Task { for try await export in try await engine.block.exportVideo(page, mimeType: mimeType) { switch export { case let .progress(renderedFrames, encodedFrames, totalFrames): print("Rendered", renderedFrames, "frames and encoded", encodedFrames, "frames out of", totalFrames) case let .finished(video: videoData): return videoData } } return Blob() } let blob = try await exportTask.value ``` ## Full Code Here's the full code: ```swift import Foundation import IMGLYEngine @MainActor func editVideoCaptions(engine: Engine) async throws { let scene = try engine.scene.createVideo() let page = try engine.block.create(.page) try engine.block.appendChild(to: scene, child: page) try engine.block.setWidth(page, value: 1280) try engine.block.setHeight(page, value: 720) try engine.editor.setSettingBool("features/videoCaptionsEnabled", value: true) try engine.block.setDuration(page, duration: 20) let caption1 = try engine.block.create(.caption) try engine.block.setString(caption1, property: "caption/text", value: "Caption text 1") let caption2 = try engine.block.create(.caption) try engine.block.setString(caption2, property: "caption/text", value: "Caption text 2") // Captions can also be loaded from a caption file, i.e., from SRT and VTT files. // The text and timing of the captions are read from the file. let captions = try await engine.block .createCaptionsFromURI(URL(string: "https://img.ly/static/examples/captions.srt")!) 
let captionTrack = try engine.block.create(.captionTrack) try engine.block.appendChild(to: page, child: captionTrack) try engine.block.appendChild(to: captionTrack, child: caption1) try engine.block.appendChild(to: captionTrack, child: caption2) for caption in captions { try engine.block.appendChild(to: captionTrack, child: caption) } try engine.block.setDuration(caption1, duration: 3) try engine.block.setDuration(caption2, duration: 5) try engine.block.setTimeOffset(caption1, offset: 0) try engine.block.setTimeOffset(caption2, offset: 3) // Once the captions are added to the scene, the position and size are synced with all caption blocks in the scene so // only needs to be set once. try engine.block.setPositionX(caption1, value: 0.05) try engine.block.setPositionXMode(caption1, mode: .percent) try engine.block.setPositionY(caption1, value: 0.8) try engine.block.setPositionYMode(caption1, mode: .percent) try engine.block.setHeight(caption1, value: 0.15) try engine.block.setHeightMode(caption1, mode: .percent) try engine.block.setWidth(caption1, value: 0.9) try engine.block.setWidthMode(caption1, mode: .percent) // The style is synced with all caption blocks in the scene so only needs to be set once. try engine.block.setColor(caption1, property: "fill/solid/color", color: Color.rgba(r: 0.9, g: 0.9, b: 0.0, a: 1.0)) try engine.block.setBool(caption1, property: "dropShadow/enabled", value: true) try engine.block.setColor(caption1, property: "dropShadow/color", color: Color.rgba(r: 0.0, g: 0.0, b: 0.0, a: 0.8)) // Export page as mp4 video. 
let mimeType: MIMEType = .mp4
let exportTask = Task {
  for try await export in try await engine.block.exportVideo(page, mimeType: mimeType) {
    switch export {
    case let .progress(renderedFrames, encodedFrames, totalFrames):
      print("Rendered", renderedFrames, "frames and encoded", encodedFrames, "frames out of", totalFrames)
    case let .finished(video: videoData):
      return videoData
    }
  }
  return Blob()
}
let blob = try await exportTask.value
}
```

---

## More Resources

- **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation
- **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs)
- **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples
- **[Support](mailto:support@img.ly)** - Contact IMG.LY support

---

---
title: "Annotation"
description: "Documentation for Annotation"
platform: ios
url: "https://img.ly/docs/cesdk/ios/edit-video/annotation-e9cbad/"
---

> This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt).

**Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Create and Edit Videos](https://img.ly/docs/cesdk/ios/create-video-c41a08/) > [Annotation](https://img.ly/docs/cesdk/ios/edit-video/annotation-e9cbad/)

---

Annotations are on-screen callouts that appear at precise moments in your video:

- text notes
- shapes
- highlights
- icons

With CE.SDK you can let users add and edit annotations using the **prebuilt VideoEditor UI**, or you can **create, update, and remove overlays programmatically** when building your own SwiftUI interface. This guide shows both.

## What You’ll Learn

- Launch the **prebuilt `VideoEditor`** and use its toolbar, inspector, and timeline duration handles.
- Add text and shape annotations to a video scene.
- Control **when** annotations appear with `timeOffset` and `duration`.
- Read and set the **current playback time** to sync your UI.
- Detect whether an annotation is **visible at the current time** and jump the playhead.

## When to Use It

Use annotations for tutorials, sports analysis, education, product demos, and any workflow where viewers should notice specific moments without scrubbing manually.

## Annotations in the VideoEditor

Insert annotations from the toolbar. Any static asset can become an annotation:

- images
- text
- stickers

![Toolbar with static image tools highlighted](assets/annotation-ios-0-159.png)

Once you’ve added an annotation, drag it around and use the standard block tools to position it in frame at the size you want. Use the timeline editor handles to change the duration and timing.

![UI with a selected annotation, arrow points to the clip handles.](assets/annotation-ios-1-159.png)

You can continue to add annotations. Each one appears with its own track and handles. More than one annotation can be on the screen at the same time.

![Annotation used as a title](assets/annotations-ios-161-3.png)

Annotations don’t have to be attached to a clip; you could use them to make simple title interstitial clips. Each text block is its own annotation, so you’d need another strategy for more complicated titles.

## Annotations in Code

> **Note:** When creating scenes and pages programmatically, set `width = 1080` and `height = 1920` to match the default VideoEditor scene. Otherwise, text or video clips may appear oddly sized relative to the canvas.

### Add a Text Annotation

The following code:

- Creates a text block.
- Positions it.
- Makes it visible from 5–10 seconds on the timeline.

The code is the same as for any other block except for the addition of the `timeOffset` and `duration` properties.
```swift
@MainActor
func addTextAnnotation(engine: Engine, page: DesignBlockID) throws -> DesignBlockID {
  let text = try engine.block.create(.text)
  try engine.block.replaceText(text, text: "Watch this part!")
  try engine.block.setTextFontSize(text, fontSize: 32)

  // Auto-size + place it visibly
  try engine.block.setWidthMode(text, mode: .auto)
  try engine.block.setHeightMode(text, mode: .auto)
  try engine.block.setPositionX(text, value: 160)
  try engine.block.setPositionY(text, value: 560)

  // Timeline: show between 5s and 10s
  try engine.block.setTimeOffset(text, offset: 5.0)
  try engine.block.setDuration(text, duration: 5.0)

  try engine.block.appendChild(to: page, child: text)
  return text
}
```

Any visual block (text, shapes, stickers) can serve as an annotation. Time properties control when it’s active on the page timeline.

### Add a Shape Annotation

Use a graphic block with a vector shape for pointers or highlights.

```swift
@MainActor
func addStarAnnotation(engine: Engine, page: DesignBlockID) throws -> DesignBlockID {
  let star = try engine.block.create(.graphic)
  try engine.block.setShape(star, shape: engine.block.createShape(.star))
  try engine.block.setFill(star, fill: engine.block.createFill(.color))
  try engine.block.setColor(star, property: "fill/color/value", color: .rgba(r: 1, g: 0, b: 0, a: 1))

  try engine.block.setPositionX(star, value: 320)
  try engine.block.setPositionY(star, value: 420)

  try engine.block.setTimeOffset(star, offset: 12.0)
  try engine.block.setDuration(star, duration: 4.0)

  try engine.block.appendChild(to: page, child: star)
  return star
}
```

## Timeline Sync: React to Playback & Highlight Active Annotations

Below is a partial SwiftUI pattern to keep your UI in sync with the editor’s timeline. It:

1. Retrieves the current page’s playback time on an interval.
2. Marks an annotation as **active** when it’s visible at that time.
3. Lets you **seek** the playhead to an annotation’s start time.
```swift
final class TimelineSync: ObservableObject {
  @Published var currentTime: Double = 0
  @Published var activeAnnotation: DesignBlockID?

  private var task: Task<Void, Never>?

  func start(engine: Engine, page: DesignBlockID, annotations: [DesignBlockID]) {
    task?.cancel()
    task = Task { @MainActor [weak self] in
      guard let self else { return }
      while !Task.isCancelled {
        // 1) Read the page’s current playback time
        let t = (try? engine.block.getPlaybackTime(page)) ?? 0
        self.currentTime = t

        // 2) Determine which annotation is currently visible
        var visible: DesignBlockID?
        for id in annotations {
          if (try? engine.block.isVisibleAtCurrentPlaybackTime(id)) == true {
            visible = id
            break
          }
        }
        self.activeAnnotation = visible

        try? await Task.sleep(nanoseconds: 200_000_000) // ~5 Hz polling
      }
    }
  }

  func stop() { task?.cancel() }

  @MainActor
  func seek(to seconds: Double, engine: Engine, page: DesignBlockID) {
    try? engine.block.setPlaybackTime(page, time: seconds)
  }
}
```

> **Note:**
> * Use a modest polling rate; start with 5–10 Hz. It keeps the UI responsive.
> * For tighter coupling, combine this with the SDK’s event subscriptions elsewhere in your app.

**Wire it into SwiftUI**:

```swift
struct AnnotationListView: View {
  @ObservedObject var sync: TimelineSync
  let engine: Engine
  let page: DesignBlockID
  let annotations: [DesignBlockID]

  var body: some View {
    List(annotations, id: \.self) { id in
      let isActive = (sync.activeAnnotation == id)
      HStack {
        Circle().frame(width: 8, height: 8)
        Text("Annotation \(id)")
      }
      .font(.body)
      .opacity(isActive ? 1 : 0.5)
      .contentShape(Rectangle())
      .onTapGesture {
        // Seek to this annotation’s start
        let start = (try? engine.block.getTimeOffset(id)) ?? 0
        sync.seek(to: start, engine: engine, page: page)
      }
    }
  }
}
```

The list:

- Dims non‑active annotations.
- Jumps the playhead when you tap an annotation.

This example doesn’t include:

- The main UI
- The video clips
- Any controls
## Controlling Playback (Play/Pause, Loop)

You can perform actions such as:

- Play/pause the page timeline
- Set looping
- Play a single block solo when previewing

```swift
@MainActor
func play(engine: Engine, page: DesignBlockID) throws {
  try engine.block.setPlaying(page, enabled: true)
}

@MainActor
func pause(engine: Engine, page: DesignBlockID) throws {
  try engine.block.setPlaying(page, enabled: false)
}

@MainActor
func setLooping(engine: Engine, id: DesignBlockID, enabled: Bool) throws {
  try engine.block.setLooping(id, looping: enabled)
}
```

## Edit & Remove Annotations

The following code shows the functions for:

- Updating text
- Moving an annotation
- Deleting an annotation entirely

```swift
@MainActor
func updateAnnotationText(engine: Engine, id: DesignBlockID, newText: String) throws {
  try engine.block.replaceText(id, text: newText)
}

@MainActor
func moveAnnotation(engine: Engine, id: DesignBlockID, x: Float, y: Float) throws {
  try engine.block.setPositionX(id, value: x)
  try engine.block.setPositionY(id, value: y)
}

@MainActor
func removeAnnotation(engine: Engine, id: DesignBlockID) throws {
  try engine.block.destroy(id)
}
```

## Design Tips (Quick Wins)

- **Readable contrast:** Light text over dark video (or add a translucent background for the text block).
- **Consistent rhythm:** Align callout durations to beats/phrases; use 2–5 s for most labels.
- **Safe zones:** Keep annotations away from edges (device notches, social crop areas). Pair with your existing Rules/Scopes.
- **Hierarchy:** Title (bolder), detail (smaller). Reserve color for emphasis.
- **Motion restraint:** Prefer fades and basic transforms over heavy effects for legibility.

## Testing & QA Checklist

- **Device playback:** Verify on physical devices; long H.265 exports may differ from simulator previews.
- **Performance:** Poll timeline at ~5–10 Hz for UI sync; avoid tight loops.
- **Edge timing:** Test annotations starting at `0s` and ending at page duration; confirm no off‑by‑one visibility.
- **Layer order:** Ensure annotations render above background clips; append after media or bring to front when needed.
- **Export parity:** Compare in‑editor preview vs `.mp4` export for small text and any blurs.

## Add a “Like” Button (Insert Annotation at Playhead)

The snippet below adds a like button to the dock. When tapped, it:

- Reads the page’s current playback time.
- Inserts a heart emoji annotation that starts exactly there.

```swift
import SwiftUI
import IMGLYVideoEditor
import IMGLYEngine

struct EditorWithMarkerButton: View {
  private let settings = EngineSettings(license: "")
  @State private var isPresented = false

  var body: some View {
    Button("Open Editor") {
      isPresented = true
    }
    .fullScreenCover(isPresented: $isPresented) {
      videoEditor
    }
  }

  @MainActor var videoEditor: some View {
    VideoEditor(settings).imgly.modifyDockItems { context, items in
      items.addFirst {
        Dock.Button(
          id: "ly.img.add.annotation",
          action: { context in
            Task {
              _ = try await addMarkerAnnotation(engine: context.engine, message: "❤️❤️❤️")
            }
          },
          label: { _ in Label("Add Annotation", systemImage: "heart.fill") }
        )
      }
    }
  }

  @MainActor
  private func addMarkerAnnotation(engine: Engine, message: String = "") async throws -> DesignBlockID {
    let page = try engine.scene.getCurrentPage()!
    let start = try engine.block.getPlaybackTime(page)

    let text = try engine.block.create(.text)
    try engine.block.replaceText(text, text: message)
    try engine.block.setTextFontSize(text, fontSize: 22)

    // Auto-size + place it visibly
    try engine.block.setWidthMode(text, mode: .auto)
    try engine.block.setHeightMode(text, mode: .auto)
    try engine.block.setPositionX(text, value: 10)
    try engine.block.setPositionY(text, value: 10)

    try engine.block.setTimeOffset(text, offset: start)
    try engine.block.setDuration(text, duration: 1.5) // default length

    try engine.block.appendChild(to: page, child: text)
    return text
  }
}
```

![Video Editor with custom annotation button](assets/annotation-ios-161-4.png)

In the preceding screenshot, the annotation button added **three different** annotations to the timeline.

## Troubleshooting

**❌ Annotation doesn’t show up**:

- Confirm you appended it to the **page** (or a track on the page).
- Ensure its `timeOffset`/`duration` place it within the page’s total duration.
- If hidden behind media, append it **after** the background or bring to front.

**❌ Jumps don’t seem to work**:

- Seek on the **page** block with `setPlaybackTime(page, time:)`, not on the annotation itself.

**❌ Performance stutters**:

- Poll the timeline at 5–10 Hz. Avoid tight loops.
- Batch UI updates on the main actor.

**❌ Exported video looks different**:

- Make sure the scene mode is **Video** and the page duration property has the correct value. Long blurs/glows may differ depending on codec.

## Next Steps

Now that you've explored annotation basics, these topics can deepen your understanding:

- [Add Captions & Subtitles](https://img.ly/docs/cesdk/ios/edit-video/add-captions-f67565/) to your clips.
- [Variables for Dynamic Labels](https://img.ly/docs/cesdk/ios/create-templates/add-dynamic-content/text-variables-7ecb50/) for displaying information like usernames or scores.
--- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Transform" description: "Documentation for Transform" platform: ios url: "https://img.ly/docs/cesdk/ios/edit-video/transform-369f28/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Create and Edit Videos](https://img.ly/docs/cesdk/ios/create-video-c41a08/) > [Transform](https://img.ly/docs/cesdk/ios/edit-video/transform-369f28/) --- --- ## Related Pages - [Transform Overview](https://img.ly/docs/cesdk/ios/edit-video/transform/overview-22ed11/) - Learn how CE.SDK applies geometric and crop transformations in video scenes, when to use them, and how they relate to the dedicated transform guides. 
- [Move](https://img.ly/docs/cesdk/ios/edit-video/transform/move-aa9d89/) - Position a video relative to its parent using either percentage or units - [Crop Video](https://img.ly/docs/cesdk/ios/edit-video/transform/crop-8b1741/) - Cut out specific areas of a video to focus on key content or change aspect ratio - [Rotate](https://img.ly/docs/cesdk/ios/edit-video/transform/rotate-eaf662/) - Rotate video clips either freeform or by set angles - [Resize](https://img.ly/docs/cesdk/ios/edit-video/transform/resize-b1ce14/) - Change the frame size of individual elements or groups - [Scale](https://img.ly/docs/cesdk/ios/edit-video/transform/scale-f75c8a/) - Scale video clips and streams uniformly in projects - [Flip](https://img.ly/docs/cesdk/ios/edit-video/transform/flip-a603b0/) - Flip video clips horizontally or vertically or both --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Crop Video" description: "Cut out specific areas of a video to focus on key content or change aspect ratio" platform: ios url: "https://img.ly/docs/cesdk/ios/edit-video/transform/crop-8b1741/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). 
**Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Create and Edit Videos](https://img.ly/docs/cesdk/ios/create-video-c41a08/) > [Transform](https://img.ly/docs/cesdk/ios/edit-video/transform-369f28/) > [Crop](https://img.ly/docs/cesdk/ios/edit-video/transform/crop-8b1741/)

---

Cropping video is a fundamental editing operation that helps you frame your subject, remove unwanted elements, or prepare visuals for specific formats. With the CreativeEditor SDK (CE.SDK) for iOS, you can crop video blocks either using the built-in user interface or programmatically via the engine API. This guide covers both methods and explains how to apply constraints such as fixed aspect ratios or exact dimensions when using templates.

## Using the built-in crop UI

CE.SDK provides a user-friendly cropping tool in its default UI. Users can interactively adjust crop areas, select preset aspect ratios, and apply changes with real-time feedback. This makes it easy to support social media presets or maintain brand consistency.

![Crop tools appear when the crop button is tapped in the editor](../mobile-assets/crop-tool.png)

### User interaction workflow

1. **Select the video** you want to crop.
2. **Tap the crop icon** in the editor toolbar.
3. **Adjust the crop area** by dragging the corners or edges or using two-finger gestures.
4. **Use the tools** to flip or rotate the crop, adjust its angle, or reset it.
5. **Close the sheet** to finalize the crop.

The cropped video appears in your project, but the underlying original video and crop values are preserved even when you rotate or resize the cropped video.

### Enable and configure crop tool

The default UI allows cropping. When you are creating your own UI or custom toolbars, you can configure the editing behavior. To ensure the crop tool is available in the UI, make sure it's included in your dock configuration or quick actions.
```swift
try engine.editor.setSettingBool("doubleClickToCropEnabled", value: true)
try engine.editor.setSettingBool("controlGizmo/showCropHandles", value: true)
try engine.editor.setSettingBool("controlGizmo/showCropScaleHandles", value: true)
```

The cropping handles are only available when a selected block has a fill of type `.video`. Otherwise, setting the edit mode of the `engine.editor` to `.crop` has no effect.

## Programmatic Cropping

Programmatic cropping gives you complete control over video block boundaries, dimensions, and integration with other transformations like rotation or flipping. This is useful for automation, predefined layouts, or server-synced workflows.

When you initially create a fill to insert a video into a block, the engine centers the video in the block and crops any dimension that doesn't match. For example: when a block with dimensions of 400.0 × 400.0 is filled with a video that is 600.0 × 500.0, there will be horizontal cropping.

When working with cropping in code, it's important to remember that you are modifying the scale, translation, rotation, etc. of the underlying video. The examples below always adjust the x and y values equally. This is not required, but adjusting them unequally can distort the video, which might be just what you want.

### Reset Crop

When a video is initially placed into a block, the engine applies crop scale and crop translation values. Resetting the crop returns the video to these original values.

![Video with no additional crop applied shown in crop mode](../mobile-assets/crop-example-1.png)

This is a block (called `videoBlock` in the example code) with dimensions of 400 × 400 filled with a video that has dimensions of 720 × 1280. The video has slight scaling and translation applied so that it fills the block evenly. At any time, the code can execute the reset crop command to return it to this state.
```swift
try engine.block.resetCrop(videoBlock)
```

### Crop Translation

The translation values adjust the placement of the origin point of a video. You can read and change the values. They are not pixel units or centimeters; they are scaled percentages. A video that has its origin point at the origin point of the crop block will have a translation value of 0.0 for x and y.

![Video crop translated one quarter of its width to the right](../mobile-assets/crop-example-5.png)

```swift
try engine.block.setCropTranslationX(videoBlock, translationX: 0.250)
```

This video has had its translation in the x direction set to 0.25. That moved the video one quarter of its width to the right. Setting the value to -0.25 would change the offset of the origin to the left. These are absolute values. Setting the x value to 0.25 and then setting it to -0.25 does not move the video to an offset of 0.0.

Use the `setCropTranslationY(_ id: DesignBlockID, translationY: Float)` function to adjust the translation of the video in the vertical direction. Negative values move the video up and positive values move the video down.

To read the current crop translation values you can use the convenience getters for the x and y values.

```swift
let currentX = try engine.block.getCropTranslationX(videoBlock)
let currentY = try engine.block.getCropTranslationY(videoBlock)
```

### Crop scale

The scale values adjust the height and width of the underlying video. Values larger than 1.0 make the video larger, while values less than 1.0 make the video smaller. Unless the video also has offsetting translation applied, the center of the video will move.

![Video crop scaled by 1.5 with no corresponding translation adjustment](../mobile-assets/crop-example-6.png)

This video has been scaled by 1.5 in the x and y directions, but the origin point has not been translated. The center of the video moves.
```swift
try engine.block.setCropScaleX(videoBlock, scaleX: 1.50)
try engine.block.setCropScaleY(videoBlock, scaleY: 1.50)
```

To read the current crop scale values, use the convenience getters for the x and y values.

```swift
let currentX = try engine.block.getCropScaleX(videoBlock)
let currentY = try engine.block.getCropScaleY(videoBlock)
```

### Crop rotate

Just as when rotating blocks, the crop rotation function uses radians. Positive values rotate clockwise and negative values rotate counterclockwise. The video rotates around its center.

![Video crop rotated by pi/4 or 45 degrees](../mobile-assets/crop-example-7.png)

```swift
try engine.block.setCropRotation(videoBlock, rotation: .pi / 4.0)
```

For working with radians, Swift has a constant defined for pi. It can be used as either `Float.pi` or `Double.pi`. Because the `setCropRotation` function takes a `Float` for the rotation value, you can use `.pi` and Swift will infer the correct type.

### Crop to scale ratio

To center-crop a video, use the scale ratio. This adjusts the x and y scales of the video evenly, and adjusts the translation to keep it centered.

![Video cropped using the scale ratio to remain centered](../mobile-assets/crop-example-2.png)

This video has been scaled by 2.0 in the x and y directions. Its translation has been adjusted automatically by -0.5 in the x and y directions to keep the video centered.

```swift
try engine.block.setCropScaleRatio(videoBlock, scaleRatio: 2.0)
```

Using the crop scale ratio function is the same as calling the translation and scale functions, but in one line.

```swift
try engine.block.setCropScaleX(videoBlock, scaleX: 2.0)
try engine.block.setCropScaleY(videoBlock, scaleY: 2.0)
try engine.block.setCropTranslationX(videoBlock, translationX: -0.5)
try engine.block.setCropTranslationY(videoBlock, translationY: -0.5)
```

### Chained crops

Crop operations can be chained together. The order of the chaining impacts the final video.
![Video cropped and rotated](../mobile-assets/crop-example-3.png)

```swift
try engine.block.setCropScaleRatio(videoBlock, scaleRatio: 2.0)
try engine.block.setCropRotation(videoBlock, rotation: .pi / 3.0)
```

![Video rotated first and then scaled](../mobile-assets/crop-example-4.png)

```swift
try engine.block.setCropRotation(videoBlock, rotation: .pi / 3.0)
try engine.block.setCropScaleRatio(videoBlock, scaleRatio: 2.0)
```

### Flipping the crop

Crop flipping has two functions, one for horizontal and one for vertical. Each flips the video along its center.

![Video crop flipped vertically](../mobile-assets/crop-example-8.png)

```swift
try engine.block.flipCropVertical(videoBlock)
try engine.block.flipCropHorizontal(videoBlock)
```

The video is crop flipped every time the function gets called. Calling the function an even number of times returns the video to its original orientation.

### Filling the frame

When the various crop operations cause the background of the crop block to be displayed, such as in the **Crop Translation** example above, the function

```swift
try engine.block.adjustCropToFillFrame(videoBlock, minScaleRatio: 1.0)
```

will adjust the translation values and the scale values of the video so that the entire crop block is filled. This is not the same as resetting the crop.

---

## More Resources

- **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation
- **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs)
- **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples
- **[Support](mailto:support@img.ly)** - Contact IMG.LY support

---

---
title: "Flip Videos"
description: "Flip videos horizontally or vertically to create mirror effects and symmetrical designs."
platform: ios url: "https://img.ly/docs/cesdk/ios/edit-video/transform/flip-a603b0/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Create and Edit Videos](https://img.ly/docs/cesdk/ios/create-video-c41a08/) > [Transform](https://img.ly/docs/cesdk/ios/edit-video/transform-369f28/) > [Flip](https://img.ly/docs/cesdk/ios/edit-video/transform/flip-a603b0/) --- Video flipping in CreativeEditor SDK (CE.SDK) allows you to mirror video content horizontally or vertically. This transformation is useful for creating symmetrical designs, correcting orientation issues, or achieving specific visual effects in your video projects. You can flip videos both through the built-in user interface and programmatically using the SDK's APIs, providing flexibility for different workflow requirements. [Launch Web Demo](https://img.ly/showcases/cesdk) [Get Started](https://img.ly/docs/cesdk/ios/get-started/overview-e18f40/) ## Available Flip Operations CE.SDK supports two types of video flipping: - **Horizontal Flip**: Mirror the video along its vertical axis, creating a left-right reflection - **Vertical Flip**: Mirror the video along its horizontal axis, creating a top-bottom reflection These operations can be applied individually or combined to achieve the desired visual effect. ## Applying Flips ### UI-Based Flipping You can apply flips directly in the CE.SDK user interface. The editor provides intuitive controls for horizontally and vertically flipping videos, making it easy for users to quickly mirror content without writing code. ### Programmatic Flipping Developers can also apply flips programmatically, using the SDK's API. 
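As a minimal sketch of what a programmatic flip might look like, the example below assumes block-level `setFlipHorizontal`/`setFlipVertical` accessors that take an absolute Boolean state (check the engine's block API reference for the exact signatures on your SDK version):

```swift
import IMGLYEngine

@MainActor
func mirrorVideo(engine: Engine, videoBlock: DesignBlockID) throws {
  // Mirror along the vertical axis (left-right reflection).
  try engine.block.setFlipHorizontal(videoBlock, flip: true)

  // Mirror along the horizontal axis (top-bottom reflection).
  try engine.block.setFlipVertical(videoBlock, flip: true)

  // Read back the current flip state.
  let isMirrored = try engine.block.getFlipHorizontal(videoBlock)
  print("Horizontally flipped:", isMirrored)
}
```

Assuming these setters take absolute state rather than toggling, passing `flip: false` restores the original orientation, unlike the `flipCrop*` functions on the Crop page, which toggle on every call.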
This allows for dynamic video adjustments based on application logic, user input, or automated processes. ## Combining with Other Transforms Video flipping works seamlessly with other transformation operations like rotation, scaling, and cropping. You can chain multiple transformations to create complex visual effects while maintaining video quality. ## Guides --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Move" description: "Position a video relative to its parent using either percentage or units" platform: ios url: "https://img.ly/docs/cesdk/ios/edit-video/transform/move-aa9d89/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Create and Edit Videos](https://img.ly/docs/cesdk/ios/create-video-c41a08/) > [Transform](https://img.ly/docs/cesdk/ios/edit-video/transform-369f28/) > [Move](https://img.ly/docs/cesdk/ios/edit-video/transform/move-aa9d89/) --- This guide shows how to move video blocks on the canvas using CE.SDK in your app. You’ll learn how to reposition single elements, move groups, and constrain movement behavior within templates. You can move elements programmatically or by using the built-in IMG.LY UI editors. 
## What You’ll Learn

- Move video programmatically using Swift
- Use the IMG.LY UI to drag images
- Adjust video position on the canvas
- Move multiple blocks together
- Constrain video movement in templates

## When to Use

Use movement to:

- Position content precisely in designs
- Align video with text, backgrounds, or grid layouts
- Enable drag-and-drop or animated movement workflows

***

## Move Videos With the UI

Users can drag and drop elements directly in the editor canvas.

***

## Move a Video Block Programmatically

Video block position is controlled using the `position/x` and `position/y` properties. They can use either absolute or percentage (relative) values. In addition to setting the properties, there are helper functions.

```swift
try engine.block.setFloat(videoBlock, property: "position/x", value: 150)
try engine.block.setFloat(videoBlock, property: "position/y", value: 100)
```

or

```swift
try engine.block.setPositionX(videoBlock, value: 150)
try engine.block.setPositionY(videoBlock, value: 100)
```

This moves the video to coordinates (150, 100) on the canvas. The origin point (0, 0) is at the top-left.

```swift
try engine.block.setPositionXMode(videoBlock, mode: .percent)
try engine.block.setPositionYMode(videoBlock, mode: .percent)
try engine.block.setPositionX(videoBlock, value: 0.5)
try engine.block.setPositionY(videoBlock, value: 0.5)
```

This moves the video to the center of the canvas, regardless of the dimensions of the canvas. As with setting the position, you can update or check the mode using the `position/x/mode` and `position/y/mode` properties.

```swift
let xPosition = try engine.block.getPositionX(videoBlock)
let yPosition = try engine.block.getPositionY(videoBlock)
```

***

## Move Multiple Elements Together

Group elements before moving to keep them aligned:

```swift
let groupId = try engine.block.group([videoBlockId, textBlockId])
try engine.block.setPositionX(groupId, value: 200)
```

This moves the entire group to 200 points from the left edge.
***

## Move Relative to Current Position

To nudge a video instead of setting an absolute position:

```swift
let xPosition = try engine.block.getPositionX(videoBlock)
try engine.block.setPositionX(videoBlock, value: xPosition + 20)
```

This moves the video 20 points to the right.

***

## Lock Movement (optional)

When building templates, you might want to lock movement to protect the layout:

```swift
try engine.block.setScopeEnabled(videoBlock, key: "layer/move", enabled: false)
```

You can also lock a block to disable all of its transformations, whether or not you are working with a template.

```swift
try engine.block.setTransformLocked(videoBlock, locked: true)
```

***

## Troubleshooting

| Issue | Solution |
| ------------------------ | ----------------------------------------------------- |
| Video block not moving | Ensure it is not constrained or locked |
| Unexpected position | Check canvas coordinates and alignment settings |
| Grouped items misaligned | Confirm all items share the same reference point |
| Can’t move via UI | Ensure the move feature is enabled in the UI settings |

***

---

## More Resources

- **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation
- **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs)
- **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples
- **[Support](mailto:support@img.ly)** - Contact IMG.LY support

---

---
title: "Transform Overview"
description: "Learn how CE.SDK applies geometric and crop transformations in video scenes, when to use them, and how they relate to the dedicated transform guides."
platform: ios
url: "https://img.ly/docs/cesdk/ios/edit-video/transform/overview-22ed11/"
---

> This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md).
For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Create and Edit Videos](https://img.ly/docs/cesdk/ios/create-video-c41a08/) > [Transform](https://img.ly/docs/cesdk/ios/edit-video/transform-369f28/) > [Overview](https://img.ly/docs/cesdk/ios/edit-video/transform/overview-22ed11/) --- Transforms control where a video block sits in the composition and how its content is framed. In CE.SDK, there are two families of transforms: - **Block-level transforms** change the block container (position, rotation, size, flip). - **Content-level transforms** (the `crop*` family) control how the video fills that container (pan, zoom, and content rotation). This overview explains the mental model and points you to focused sub‑guides for implementation details. ## What you’ll learn - The difference between **block** and **content (crop)** transforms. - Coordinate systems, anchor points, and units used in transforms. - Essential APIs & UI toggles for transform workflows. - How to restrict or lock transformations in templates. - Troubleshooting common issues. ## Understanding Transform Layers Each block in a scene has a transform that determines within the page: - The block’s position - Its rotation - Its scale Changing these values moves or rotates the block **relative to its parent**. When the block has a **video fill**, a second, content-level transform applies to the media *inside* the block. Use it to crop, pan, and zoom without changing the block’s geometry. In CE.SDK, all properties and methods beginning with `crop` belong to the **content-level transform**, such as: - `setCropScaleX`, `setCropScaleY` - `setCropRotation` - `setCropTranslationX`, `setCropTranslationY` These control how the video fills the block’s frame rather than altering the block’s geometry itself. 
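To make the two families concrete, here is a minimal sketch contrasting a block-level transform with the content-level `crop*` calls listed above. It assumes `engine` and a `videoBlock` ID from your scene; the exact parameter labels of the `crop*` methods may differ slightly from this sketch, so check the API reference.

```swift
// Block-level transform: rotates the container itself within its parent.
try engine.block.setRotation(videoBlock, radians: .pi / 4)

// Content-level transforms: reframe the video *inside* the block,
// leaving the block's own geometry untouched.
try engine.block.setCropRotation(videoBlock, rotation: .pi / 4)
try engine.block.setCropScaleX(videoBlock, scaleX: 1.2)
try engine.block.setCropScaleY(videoBlock, scaleY: 1.2)
```

In practice, use the first call to move or tilt a clip within the scene layout, and the `crop*` calls to pan or punch in on the footage without disturbing the layout.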
Use `crop` methods for these actions within a clip: - Reframing - Punch-in Use **block transforms** to apply actions within a scene: - Moving - Rotating ![Block vs Content Transforms](assets/transform-overview-rotate.png) The preceding diagram compares block-level and content-level transforms: - The **left** frame shows a block rotated. - The **right** frame shows crop rotation of the video within its block frame. ## Coordinate Systems and Units - **Position:** absolute or percentage modes. - **Rotation:** radians (positive = counterclockwise). - **Scale:** uniform; optional anchor in normalized 0–1 space (0=left/top, 0.5=center, 1=right/bottom). - **Crop translation/scale:** normalized so pan/zoom behave consistently across resolutions. - **UI vs API spaces:** gizmo movements operate in canvas/screen space, while API values are normalized or scene‑space; always verify the mode before mixing. *** ## Built-in Transform UI The editor provides optional gizmos and gestures for direct manipulation: - Handles: `controlGizmo/showRotateHandles`, `/showResizeHandles`, `/showMoveHandles`, `/showScaleHandles`, `/showCropHandles`. - Gestures: `touch/rotateAction`, `touch/pinchAction`. - Limits: `controlGizmo/blockScaleDownLimit` prevents accidental shrinking to zero size. These can be read or set using: ```swift try engine.editor.setSettingBool(key: "controlGizmo/showRotateHandles", value: true) try engine.editor.setSettingFloat(key: "controlGizmo/blockScaleDownLimit", value: 0.4) ``` **Defaults:** If not configured, the editor exposes a safe, minimal set of handles; crop handles only appear when the selected block is croppable. 
## Transform API Map (Quick Reference)

| Action | Methods |
|--------|----------|
| Move | `setPositionX`, `setPositionY`, `setPositionXMode`, `setPositionYMode` |
| Rotate | `setRotation` |
| Flip | `setFlipHorizontal`, `setFlipVertical` |
| Scale | `scale` |
| Resize | `setWidth`, `setHeight`, `setWidthMode`, `setHeightMode` |
| Crop | `setCropRotation`, `setCropScaleRatio`, `setCropTranslationX/Y`, `adjustCropToFillFrame`, `resetCrop` |

See the dedicated sub‑guides for full examples and edge cases.

## Animated Transforms

All transform properties in CE.SDK can be animated over time in video scenes. You can keyframe changes to create:

- Movement
- Zooms
- Transitions

Keep in mind:

- Keyframes live on the **timeline** associated with each block.
- Interpolation curves (easing) control how values change between keys.
- Programmatic animation uses standard transform methods, attached to timeline events.
- **Crop UI gating:** Crop handles appear only when a selected block is croppable (e.g., a `.video` fill).
- **Performance:** Transforms are GPU‑accelerated at playback; avoid heavy, stacked effects ahead of transform‑driven motion.

The [Animation](https://img.ly/docs/cesdk/ios/animation-ce900c/) guide shows how to add keyframes, adjust easing, and preview animations.

***

## Permissions and Locking

You can restrict or disable transforms in templates to prevent accidental edits:

```swift
try engine.block.setScopeEnabled(blockID, key: "layer/rotate", enabled: false)
try engine.block.setTransformLocked(blockID, locked: true)
```

These controls affect both UI gestures and API calls.
## Troubleshooting | Issue | Possible Cause | |--------|----------------| | Rotation appears wrong | Check radians vs degrees | | Block not responding | Transform locked or scope disabled | | Crop handles missing | Ensure block fill is croppable (e.g., `.video`) | | Unexpected scaling | Verify anchor and percentage mode | | Transforming group has no effect | Ensure blocks are grouped correctly | ## Next Steps Learn specific transformation APIs: - [Move](https://img.ly/docs/cesdk/ios/edit-video/transform/move-aa9d89/) - [Rotate](https://img.ly/docs/cesdk/ios/edit-video/transform/rotate-eaf662/) - [Flip](https://img.ly/docs/cesdk/ios/edit-video/transform/flip-a603b0/) - [Scale](https://img.ly/docs/cesdk/ios/edit-video/transform/scale-f75c8a/) - [Resize](https://img.ly/docs/cesdk/ios/edit-video/transform/resize-b1ce14/) Related: - [Create Video: Timeline Editor](https://img.ly/docs/cesdk/ios/create-video/timeline-editor-912252/) - [Animation Overview](https://img.ly/docs/cesdk/ios/animation/overview-6a2ef2/) - [Group Elements](https://img.ly/docs/cesdk/ios/create-composition/group-and-ungroup-62565a/) --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Resize Videos" description: "Change the dimensions of video elements to fit specific layout requirements." platform: ios url: "https://img.ly/docs/cesdk/ios/edit-video/transform/resize-b1ce14/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). 
**Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Create and Edit Videos](https://img.ly/docs/cesdk/ios/create-video-c41a08/) > [Transform](https://img.ly/docs/cesdk/ios/edit-video/transform-369f28/) > [Resize](https://img.ly/docs/cesdk/ios/edit-video/transform/resize-b1ce14/) --- Video resizing in CreativeEditor SDK (CE.SDK) allows you to change the dimensions of video elements to match specific layout requirements. Unlike scaling, resizing allows independent control of width and height dimensions, making it ideal for fitting videos into predefined spaces or responsive layouts. You can resize videos both through the built-in user interface and programmatically using the SDK's APIs, providing flexibility for different workflow requirements. [Launch Web Demo](https://img.ly/showcases/cesdk) [Get Started](https://img.ly/docs/cesdk/ios/get-started/overview-e18f40/) ## Resize Methods CE.SDK supports several approaches to video resizing: - **Absolute Dimensions**: Set specific pixel dimensions for precise control - **Percentage-based Resizing**: Size relative to parent container for responsive designs - **UI Resize Handles**: Interactive resize controls in the editor interface - **Aspect Ratio Constraints**: Maintain or ignore aspect ratios during resize operations ## Applying Resizing ### UI-Based Resizing You can resize videos directly in the CE.SDK user interface using resize handles. Users can drag edge and corner handles to adjust dimensions independently or proportionally, making it easy to fit videos into specific layouts visually. ### Programmatic Resizing Developers can apply resizing programmatically, using the SDK's API. This allows for precise dimension control, automated layout adjustments, and integration with responsive design systems or template constraints. ## Combining with Other Transforms Video resizing works seamlessly with other transformation operations like rotation, cropping, and positioning. 
You can chain multiple transformations to create complex layouts while maintaining video quality and performance.

## Guides

---

## More Resources

- **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation
- **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs)
- **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples
- **[Support](mailto:support@img.ly)** - Contact IMG.LY support

---

---
title: "Rotate"
description: "Rotate video clips either freeform or by set angles"
platform: ios
url: "https://img.ly/docs/cesdk/ios/edit-video/transform/rotate-eaf662/"
---

> This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt).

**Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Create and Edit Videos](https://img.ly/docs/cesdk/ios/create-video-c41a08/) > [Transform](https://img.ly/docs/cesdk/ios/edit-video/transform-369f28/) > [Rotate](https://img.ly/docs/cesdk/ios/edit-video/transform/rotate-eaf662/)

---

Learn how to programmatically and interactively rotate videos in your iOS app using CE.SDK. This guide walks you through rotating video blocks in the editor, rotating with Swift, giving your users intuitive controls, and ensuring predictable editing behavior.

## What you'll learn

- How to rotate a video as a user using the handles
- Rotate a video block by a specific angle
- How to lock video rotation
- How to rotate multiple blocks as a group

### Rotating a Video Using the UI

By default, selecting a block will show handles for resizing. You can freeform rotate a block using a standard two-finger rotation gesture. To give the user the rotation handles, set the `editor` configuration setting.
```swift
try engine.editor.setSettingBool("controlGizmo/showRotateHandles", value: true)
```

When working with an editor, it's best to modify settings in the `imgly.onCreate{}` callback. When working with the engine directly, they can be set at any time.

![Rotation handle of the control gizmo enabled for a video block](../mobile-assets/rotate-example-1.png)

Use the `Crop` tab to rotate a video up to 45 degrees using the sliding control and in 90 degree increments using the rotation button.

![Crop menu showing rotation slider and button](../mobile-assets/rotate-example-2.png)

> **Note:** Using the Crop menu rotates the video, but not the block containing the video.

### Rotating a video using code

You can rotate a video block using the `setRotation` function. It takes the `id` of the block and a rotation amount in radians. Positive rotation values rotate **counterclockwise**.

```swift
try engine.block.setRotation(videoBlock, radians: .pi / 4)
```

> **Note:** This rotates the entire block. If you want to rotate a video that is filling a block but not the block itself, explore the [crop rotate](https://img.ly/docs/cesdk/ios/edit-video/transform/crop-8b1741/) function.

To convert degrees to radians, multiply the value in degrees by pi and divide by 180; to convert back, multiply by 180 and divide by pi.

```swift
let angleInRadians: Double = angleInDegrees * Double.pi / 180
let angleInDegrees: Double = angleInRadians * 180 / Double.pi
```

You can discover the current rotation of a block using the `getRotation` function.

```swift
let rotationOfClip = try engine.block.getRotation(videoBlock)
```

### Rotating as a group

To rotate multiple elements together, first add them to a `group` and then rotate the group.

```swift
let groupId = try engine.block.group([videoBlock, textBlock])
try engine.block.setRotation(groupId, radians: .pi / 2)
```

### Locking rotation

You can remove the rotation handle from the UI by changing the setting for the engine. This will affect *all* blocks.
```swift
try engine.editor.setSettingBool("controlGizmo/showRotateHandles", value: false)
```

Though the handle is gone, the user can still use the two-finger rotation gesture on a touch device. You can disable that gesture with the following setting.

```swift
try engine.editor.setSettingBool("touch/rotateAction", value: false)
```

When you want to lock only certain blocks, you can toggle the transform lock property. This applies to all transformations for the block.

```swift
try engine.block.setTransformLocked(videoBlock, locked: true)
```

When working with templates, you can lock a block from rotating by setting its scope. Remember that the global layer has to defer to the blocks using `setGlobalScope`.

```swift
try engine.block.setScopeEnabled(videoBlock, key: "layer/rotate", enabled: false)
```

### Troubleshooting

| Issue | Solution |
| ----------------------------------- | ------------------------------------------------------------------------------- |
| Video appears offset after rotation | Make sure the pivot point is centered (default is center). |
| Rotation not applying | Confirm that the video block is inserted and rendered before applying rotation. |
| Rotation handle not visible | Check that interactive UI controls are enabled in the settings. |

---

## More Resources

- **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation
- **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs)
- **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples
- **[Support](mailto:support@img.ly)** - Contact IMG.LY support

---

---
title: "Scale"
description: "Scale video clips and streams uniformly in projects"
platform: ios
url: "https://img.ly/docs/cesdk/ios/edit-video/transform/scale-f75c8a/"
---

> This is one page of the CE.SDK iOS documentation.
For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt).

**Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Create and Edit Videos](https://img.ly/docs/cesdk/ios/create-video-c41a08/) > [Transform](https://img.ly/docs/cesdk/ios/edit-video/transform-369f28/) > [Scale](https://img.ly/docs/cesdk/ios/edit-video/transform/scale-f75c8a/)

---

This guide shows you how to scale video clips using CE.SDK in your iOS project. You'll learn how to scale video blocks proportionally, scale groups, and apply scaling constraints to protect template structure.

Because of the CE.SDK block architecture, many of the commands and concepts apply to all types of graphical fills. Methods for scaling video work the same when scaling images, text, and other types of blocks.

## What you'll learn

- Scale video using the UI
- Scale video programmatically using Swift
- Scale proportionally or non-uniformly
- Scale grouped elements
- Apply scale constraints in templates

## When to use

Use scaling to:

- Emphasize or de-emphasize elements
- Fit video to available space without cropping
- Enable pinch-to-zoom gestures or dynamic layouts

***

### Scale video using the UI

When using an editor such as the **Video Editor**, there are two methods for scaling video clip blocks: touch controls or the `Crop` menu.

CE.SDK supports the standard pinch-to-zoom gesture for scaling. Scaling using the touch controls changes the scale of the entire video block. Scaling in the `Crop` menu changes the scale of the underlying video, but leaves the block's scale unchanged. Learn more about scaling while cropping in the [Crop guide](https://img.ly/docs/cesdk/ios/edit-video/transform/crop-8b1741/).

The **Video Editor** also has a `Resize` menu, but those settings are for resizing the entire scene, not individual clips.
![Example of scaling the size of a block and crop scaling the underlying video](../mobile-assets/scale-example-2.png)

## Scale video programmatically using Swift

### Scale uniformly

Scaling uses the `scale(_ id: DesignBlockID, to scale: Float)` function. A scale value of `1.0` is the original scale. Values larger than `1.0` increase the scale of the block and values lower than `1.0` scale the block smaller. A value of `2.0`, for example, makes the block twice as large.

This scales the video to 150% of its original size. The origin anchor point is unchanged, so the video expands down and to the right.

```swift
try engine.block.scale(block, to: 1.5)
```

![Original image and scaled image](../mobile-assets/scale-example-3.png)

By default, the anchor point for the video when scaling is the origin point at the top left. The scale function has two optional parameters to move the anchor point in the x and y direction. They can have values between `0.0` and `1.0`.

This scales the video to 150% of its original size. The anchor point is (0.5, 0.5), so the video expands from the center.

```swift
try engine.block.scale(block, to: 1.5, anchorX: 0.5, anchorY: 0.5)
```

![Original video placed over the scaled video, aligned on the center anchor point](../mobile-assets/scale-example-4.png)

***

### Scale non-uniformly

To stretch or compress only one axis, thus distorting a video, use the crop scale function in combination with the width or height function. Each approach produces a different result. Below are three examples of scaling the original video in the x direction only.
![Allowing the engine to scale the video as you adjust the width of the block](../mobile-assets/scale-example-5.png)

```swift
try engine.block.setWidthMode(block, mode: .absolute)
let newWidth: Float = try engine.block.getWidth(block) * 1.5
try engine.block.setWidth(block, value: newWidth)
```

This adjusts the width of the block and allows the engine to adjust the scale of the video to maintain it as a fill. The video isn't distorted, but it no longer fits the frame of the block.

![Using crop scale for the horizontal axis and adjusting the width of the block](../mobile-assets/scale-example-6.png)

```swift
try engine.block.setCropScaleX(block, scaleX: 1.5)
try engine.block.setWidthMode(block, mode: .absolute)
let newWidth: Float = try engine.block.getWidth(block) * 1.5
try engine.block.setWidth(block, value: newWidth)
```

This uses crop scale to scale the video in a single direction and then adjusts the block's width to match the change. The change in width does not take the crop into account, so it distorts the video by scaling the already-scaled video.

![Using crop scale for the horizontal axis and using the maintainCrop property when changing the width](../mobile-assets/scale-example-7.png)

```swift
try engine.block.setCropScaleX(block, scaleX: 1.5)
try engine.block.setWidthMode(block, mode: .absolute)
let newWidth: Float = try engine.block.getWidth(block) * 1.5
try engine.block.setWidth(block, value: newWidth, maintainCrop: true)
```

By setting the `maintainCrop` option to true, expanding the width of the video by the scale factor respects the crop scale and the video is less distorted.

## Scale multiple elements together

Group blocks to scale them proportionally:

```swift
let groupId = try engine.block.group([videoBlock, textBlock])
try engine.block.scale(groupId, to: 0.75)
```

This scales the entire group to 75%.

***

## Lock scaling

A standard pinch-to-zoom gesture allows a user to scale a block.
Toggle this ability for users by changing the `touch/pinchAction` property of the `editor`:

```swift
// disable pinch-to-scale
try engine.editor.setSettingBool("touch/pinchAction", value: false)
```

By default, video clip blocks in the **Video Editor** do not enable their scale handles; toggle this ability using the `controlGizmo/showScaleHandles` property of the `editor`. Displaying the scale handles will allow the user to scale even when pinch-to-zoom is disabled.

```swift
// show scale handles
try engine.editor.setSettingBool("controlGizmo/showScaleHandles", value: true)
```

![Video clip with scale handles enabled in Video Editor](../mobile-assets/scale-example-1.png)

> **Note:** When working with an editor such as the **Video Editor**, editor settings are best set in the `imgly.onCreate` callback. When working directly with the **engine**, they can be set at any time.

When working with templates, you can lock a block from scaling by setting its scope. Remember that the global layer has to defer to the blocks using `setGlobalScope`.

```swift
try engine.block.setScopeEnabled(videoBlock, key: "layer/resize", enabled: false)
```

To prevent users from transforming an element at all:

```swift
try engine.block.setTransformLocked(videoBlock, locked: true)
```

***

---

## More Resources

- **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation
- **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs)
- **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples
- **[Support](mailto:support@img.ly)** - Contact IMG.LY support

---

---
title: "Engine Interface"
description: "Understand CE.SDK's architecture and learn when to use direct Engine access for automation workflows"
platform: ios
url: "https://img.ly/docs/cesdk/ios/engine-interface-6fb7cf/"
---

> This is one page of the CE.SDK iOS documentation.
For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt).

**Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Engine](https://img.ly/docs/cesdk/ios/engine-interface-6fb7cf/)

---

The Creative Engine is the powerhouse behind CE.SDK's cross-platform capabilities. While the UI components provide ready-to-use editing experiences, the Engine interface gives you direct programmatic control over all creative operations—from simple batch processing to complex automated workflows.

## Client-Side vs Server-Side Processing

Understanding when to use client-side versus server-side processing is crucial for building efficient creative automation workflows. Each approach offers distinct advantages depending on your use case requirements.

### Client-Side Processing (Mobile Device)

Client-side processing runs the Engine directly on the user's device — but importantly, this doesn't mean it's visible to the user. The Engine operates headlessly in the background, making it perfect for automation tasks that enhance user experience without interrupting the user's workflow.

**Common Implementation Patterns:**

**Hidden Engine Instances**: Run a second, invisible Engine instance alongside your main UI for background processing. While users edit in the primary interface, the hidden instance can validate designs, generate previews, or prepare export-ready assets.

**Underlying Engine Access**: Access the Engine API directly from prebuilt UI components for custom automation within existing workflows.

**Dedicated Engine Packages**: Use platform-specific Engine packages for specialized client-side automation without any UI overhead.
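The hidden-instance pattern above can be sketched as follows. This is a hedged sketch, not the definitive setup: the `Engine(license:userID:)` initializer and `scene.load(from:)` call are assumptions based on the Engine setup guide, and `savedSceneString` is a hypothetical variable holding a serialized scene.

```swift
import IMGLYEngine

// Create a second, headless Engine instance for background work.
// (Initializer signature is an assumption — verify against the
// Engine quickstart for your CE.SDK version.)
let headless = try await Engine(
  license: "<your-license>",
  userID: "background-worker"
)

// Load the same scene data the visible editor is using, then
// validate, render previews, or prepare exports invisibly.
try await headless.scene.load(from: savedSceneString)
```

Because this instance never attaches to a view, it consumes no UI resources and can run alongside the primary editor without interrupting the user.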
**Ideal Client-Side Use Cases:** - **Design Validation**: Check for empty placeholders, low-resolution images, or brand guideline violations in real-time - **Thumbnail Generation**: Create preview images for design galleries or version history - **Effect Previews**: Generate quick previews of filters or effects before applying them to the main design - **Auto-Save Optimization**: Compress and optimize scenes for storage while maintaining editability - **Real-Time Feedback**: Provide instant visual feedback for design rules or constraints ### Server-Side Processing Server-side processing moves the Engine to your backend infrastructure, unlocking powerful capabilities for resource-intensive operations and scalable workflows. **Key Advantages:** - **Enhanced Resources**: Access to more CPU, memory, and storage than client devices - **Secure Asset Access**: Process private assets without exposing them to client-side code - **Background Operations**: Handle long-running tasks without affecting user experience - **Scheduled Automation**: Trigger design generation based on events, schedules, or external APIs **Ideal Server-Side Use Cases:** - **High-Resolution Exports**: Generate print-quality assets that would be too resource-intensive for client devices - **Bulk Generation**: Create thousands of design variations for marketing campaigns or product catalogs - **Data Pipeline Integration**: Connect to databases, APIs, or file systems for automated content generation - **Multi-Format Output**: Export designs in multiple formats and resolutions simultaneously - **Workflow Orchestration**: Coordinate complex multi-step automation processes **Hybrid Workflows**: Often, the most effective approach combines both client and server-side processing. Users can design and preview on the client with instant feedback, while heavy processing happens on the server in the background. 
## Engine-Powered Use Cases The Engine interface unlocks [powerful automation scenarios](https://img.ly/docs/cesdk/ios/automation/overview-34d971/) that can scale creative workflows: ### Batch Processing Process multiple designs simultaneously with consistent results. Whether you're applying filters to hundreds of images or generating variations of a marketing template, the Engine handles bulk operations efficiently both client-side and server-side. ### Auto-Resize Automatically adapt designs to different aspect ratios and platforms. The Engine intelligently repositions elements, adjusts text sizes, and maintains visual hierarchy across formats—from Instagram stories to LinkedIn posts. ### Data Merge Connect external data sources (CSV, JSON, APIs) to templates for personalized content generation. Perfect for creating thousands of product cards, personalized certificates, or location-specific campaigns. ### Product Variations Generate multiple versions of product designs with different colors, sizes, or configurations. Ideal for e-commerce platforms needing to showcase product options without manual design work. ### Design Generation Create entirely new designs programmatically based on rules, templates, or AI inputs. The Engine can compose layouts, select appropriate fonts, and arrange elements according to your design guidelines. ### Multiple Image Generation Efficiently process and export designs in various formats and resolutions. Generate web-optimized previews alongside print-ready high-resolution files in a single workflow. ### Actions Implement complex multi-step operations as reusable actions. Chain together filters, transformations, and exports to create sophisticated automated workflows that can be triggered programmatically. 
---

## More Resources

- **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation
- **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs)
- **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples
- **[Support](mailto:support@img.ly)** - Contact IMG.LY support

---

---
title: "Create Thumbnail"
description: "Generate small preview images for scenes and pages using CE.SDK export options."
platform: ios
url: "https://img.ly/docs/cesdk/ios/export-save-publish/create-thumbnail-749be1/"
---

> This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt).

**Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Export Media Assets](https://img.ly/docs/cesdk/ios/export-save-publish/export-82f968/) > [Create Thumbnail](https://img.ly/docs/cesdk/ios/export-save-publish/create-thumbnail-749be1/)

---

Thumbnails are scaled-down previews of your designs. They let you show galleries or document picker previews without loading the full editor. CE.SDK generates thumbnails using the same API you use for final output, just with smaller target dimensions and often lower quality settings.

This guide focuses on **image thumbnails**: small PNG, JPEG, or WebP previews for use in grids, lists, and document icons. It **doesn’t cover audio waveforms** or arrays of **preview frames** for scrubbers.

## What You’ll Learn

- How to export a scene or page as a small preview image.
- How to control thumbnail dimensions while preserving aspect ratio.
- When to choose PNG, JPEG, or WebP for thumbnails.
- How to tune quality and file size using `ExportOptions`.
- How to batch-generate different thumbnail sizes safely.
## When You’ll Use It

- Showing a “My Designs” or “Recent Files” gallery.
- Rendering previews for templates or drafts.
- Generating document icons or share sheet previews.
- Creating thumbnail sizes for different UI contexts.

## How Thumbnail Export Works

In CE.SDK, `.export` means *rendering bitmap image data from the engine*. When you call `export`, the SDK:

1. Renders the current visual state of a block (for example, a page).
2. Composites all **visible** child blocks (images, text, shapes, effects).
3. Produces raw bitmap image data in the requested format (PNG, JPEG, or WebP).

Exporting **doesn’t** imply writing a file to disk. The result of the export call is an in-memory `Blob` (`Data`) that you can:

- Convert to a `UIImage`, `NSImage`, or SwiftUI `Image`.
- Cache in memory.
- Write to disk if needed.
- Upload elsewhere.

CE.SDK doesn’t provide a separate "thumbnail API". If you build your own UI, you call `engine.block.export(...)` directly whenever you want.

If you use the **prebuilt editor UI** (the default CE.SDK editors), there *is* a convenient hook for the built-in Export/Share button: the editor exposes an `OnExport` callback. The default export:

1. Renders the output (PDF for design scenes, MP4 for video scenes).
2. Writes the result to a temporary file.
3. Opens the system share sheet.

That hook is great for customizing what happens when the user taps Export in the prebuilt UI, but under the hood it still uses the same engine export APIs that you use for thumbnails.

## Export a Scene Thumbnail

You generate thumbnails by exporting a design block. In most cases this should be either:

- The **page block**, which represents the full visible canvas.
- The **scene**, if your design is single-page.

Exporting the page block is the safest choice when you want a thumbnail that matches what the user sees on screen.
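A minimal sketch of exporting the first page as a PNG thumbnail, following the `export(handle:mimeType:options:)` calls shown later in this guide. It assumes an `engine` instance and that `engine.scene.getPages()` returns the scene's page block IDs; verify the exact accessor against the scene API reference.

```swift
import IMGLYEngine
import UIKit

func makeThumbnail(engine: Engine) async throws -> UIImage? {
  // Export the page block so the thumbnail matches what the user sees.
  guard let page = try engine.scene.getPages().first else { return nil }

  let pngBlob = try await engine.block.export(
    handle: page,
    mimeType: .png,
    options: ExportOptions(targetWidth: 400, targetHeight: 300)
  )

  // The export result is in-memory data; convert it for display.
  return UIImage(data: pngBlob)
}
```

PNG is used here because it preserves transparency; switch `mimeType` and the quality option for JPEG or WebP as described below.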
## Control Thumbnail Size and Aspect Ratio ### How `targetWidth` and `targetHeight` behave When both `targetWidth` and `targetHeight` have values, CE.SDK renders the block large enough to **fill the target box while maintaining its aspect ratio**. Important implications: - You don’t need to calculate aspect-fit or aspect-fill yourself. - The exported image may exceed one of the target dimensions internally to preserve aspect ratio. - Consider `targetWidth` and `targetHeight` as a *desired bounding box*, not a hard crop. ### Typical Thumbnail Sizes Common choices include: - 150 × 150 for dense grids - 161 × 161 for Instagram Video Feeds - 55 × 55 or 222 × 150 for Pinterest - 400 × 300 for list previews - 800 × 600 for high-quality previews ## Choose the Right Thumbnail Format CE.SDK supports PNG, JPEG, and WebP for image export. It provides a `MIMEType` enum including `.jpeg`, `.png`, and `.webp`. ### PNG - Preserves transparency - Lossless quality - Compression affects speed, not quality - Best for stickers, cutouts, or UI elements ### JPEG - Smaller and faster for photographic content - No transparency - Control quality via `jpegQuality` ### WebP - Efficient compression - Supports lossless and lossy modes - Requires WebP support everywhere you display thumbnails Switching formats only requires changing the `mimeType` and the relevant quality option. ```swift let jpegBlob = try await engine.block.export( page, mimeType: .jpeg, options: ExportOptions( jpegQuality: 0.8, targetWidth: 400, targetHeight: 300 ) ) ``` When you need **different thumbnails of different sizes or image formats**, call `export` for each permutation. Pass in the correct mime type and an `ExportOptions` configuration.
```swift let smallBlob = try await engine.block.export( page, mimeType: .jpeg, options: ExportOptions(jpegQuality: 0.8, targetWidth: 22, targetHeight: 22 ) ) let mediumBlob = try await engine.block.export( page, mimeType: .jpeg, options: ExportOptions(jpegQuality: 0.8, targetWidth: 150, targetHeight: 150 ) ) ``` > **Note:** **Caching thumbnails:** Thumbnail export is expensive compared to image display. Even a basic in-memory cache (for example, `NSCache`) can dramatically improve scrolling performance in galleries and `List` views. ## Tune Quality and File Size with `ExportOptions` `ExportOptions` lets you balance visual quality, file size, and export speed. Key fields for thumbnails: - `pngCompressionLevel` (0–9, default 5) - `jpegQuality` (0–1, default 0.9) - `webpQuality` (0–1, default 1.0) - `targetWidth` / `targetHeight` CE.SDK applies only the options relevant to the chosen MIME type. Others are ignored. ## Headless and Background Thumbnail Generation CE.SDK offers two common ways to export without blocking your UI: ### Use Your Existing Engine For occasional thumbnail creation (for example, when a user saves a draft), it’s often fine to export from the same `Engine` instance that powers the editor. ### Use a Separate Headless Engine Instance For batch thumbnail generation (for example, populating a large gallery), you can create a separate `Engine` instance, load the same scene data into it, and export thumbnails there. > **Note:** When you’re using the **prebuilt editor UI** on iOS, you can also customize what happens when the user taps the Export button via the editor’s `OnExport` callback. The default callback writes the exported data to a temporary file and triggers the share sheet. You could generate thumbnails here and control the export instead.
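The caching note above can be implemented with a few lines of Foundation code. This sketch is illustrative (the class name and key scheme are not part of CE.SDK); it simply memoizes exported `Data` per scene ID and pixel size:

```swift
import Foundation

/// Minimal in-memory thumbnail cache keyed by scene ID and pixel size.
/// NSCache automatically evicts entries under memory pressure.
final class ThumbnailCache {
    private let cache = NSCache<NSString, NSData>()

    private func key(sceneID: String, width: Int, height: Int) -> NSString {
        "\(sceneID)-\(width)x\(height)" as NSString
    }

    func thumbnail(sceneID: String, width: Int, height: Int) -> Data? {
        cache.object(forKey: key(sceneID: sceneID, width: width, height: height)) as Data?
    }

    func store(_ data: Data, sceneID: String, width: Int, height: Int) {
        cache.setObject(data as NSData, forKey: key(sceneID: sceneID, width: width, height: height))
    }
}
```

Check the cache before calling `export`, store the resulting blob afterwards, and invalidate the entry whenever the scene changes.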
## Thumbnails from Video Blocks (Single Frame) Although this guide focuses on static image thumbnails, it’s worth calling out an important edge case that often surprises developers: If you export a **paused video fill block**, the result is a **single image thumbnail**, just like exporting a graphic or page. This is *not* the same as generating a stream of video thumbnails or scrubbing previews. ### How This Works - Video blocks render their current frame when exported. - You can control *which* frame becomes the thumbnail by setting the playhead time before calling `export`. Conceptually: 1. Seek the video to the desired time. 2. Pause playback. 3. Export the block using the thumbnail export flow shown earlier. This produces a single static image suitable for: - Gallery previews - Document icons - Poster-frame–style thumbnails ## Troubleshooting | Symptom | Likely Cause | Solution | |---|---|---| | Thumbnail only shows part of the design | Exported a child block instead of the page | Export the page block to capture the full visible canvas | | Thumbnail size looks wrong | Missing or zero target dimension | Set both `targetWidth` and `targetHeight` | | Export is slow | Large target size or high PNG compression | Reduce dimensions or compression level | | File size too large | Quality settings too high | Lower JPEG/WebP quality or size | | Thumbnail looks blurry | Target size too small | Increase target dimensions | | Export fails | Scene not loaded | Ensure `engine.scene.get()` returns a valid scene | ## Next Steps - To learn more about exporting images and controlling output quality, see [Export designs to image formats](https://img.ly/docs/cesdk/ios/export-save-publish/export/overview-9ed3a8/). - Reduce file size or tune quality for thumbnails and previews, with [Compress exported images](https://img.ly/docs/cesdk/ios/export-save-publish/export/compress-29105e/). 
- If you need to generate thumbnails at scale or as part of automated workflows, take a look at [Batch processing designs](https://img.ly/docs/cesdk/ios/automation/batch-processing-ab2d18/). --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Export" description: "Explore export options, supported formats, and configuration features for sharing or rendering output." platform: ios url: "https://img.ly/docs/cesdk/ios/export-save-publish/export-82f968/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Export Media Assets](https://img.ly/docs/cesdk/ios/export-save-publish/export-82f968/) --- --- ## Related Pages - [Options](https://img.ly/docs/cesdk/ios/export-save-publish/export/overview-9ed3a8/) - Explore export options, supported formats, and configuration features for sharing or rendering output. - [For Audio Processing](https://img.ly/docs/cesdk/ios/guides/export-save-publish/export/audio-68de25/) - Learn how to export audio in WAV or MP4 format from any block type in CE.SDK for iOS and macOS. - [To PDF](https://img.ly/docs/cesdk/ios/export-save-publish/export/to-pdf-95e04b/) - Learn how to export pages to PDF and automatically generate an underlayer. 
- [Compress Exports for Smaller Files](https://img.ly/docs/cesdk/ios/export-save-publish/export/compress-29105e/) - Learn how to reduce file sizes during export from CE.SDK for iOS, macOS, and Catalyst by tuning format-specific compression settings. - [Create Thumbnail](https://img.ly/docs/cesdk/ios/export-save-publish/create-thumbnail-749be1/) - Generate small preview images for scenes and pages using CE.SDK export options. --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Compress Exports for Smaller Files" description: "Learn how to reduce file sizes during export from CE.SDK for iOS, macOS, and Catalyst by tuning format-specific compression settings." platform: ios url: "https://img.ly/docs/cesdk/ios/export-save-publish/export/compress-29105e/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Export Media Assets](https://img.ly/docs/cesdk/ios/export-save-publish/export-82f968/) > [Compress](https://img.ly/docs/cesdk/ios/export-save-publish/export/compress-29105e/) --- The goal of compression is to reduce file sizes during export while maintaining as much visual quality as possible. With the CreativeEditor SDK (CE.SDK) for Swift, you can fine-tune compression settings for both images and videos. This allows your app to balance performance, quality, and storage efficiency across iOS, macOS, and Catalyst.
## What You’ll Learn - How to configure compression for PNG, JPEG, and WebP exports. - How to control video file size using bitrate and resolution scaling. - How to balance file size, quality, and export performance for different use cases. - How to configure compression programmatically during automation or batch operations. ## When to Use It Compression tuning is useful whenever: - Exported media is too large for upload limits - You need to optimize storage quotas - You have constrained network bandwidth Use it when preparing images or videos for any workflow that benefits from: - Faster load times and smaller files, like: - Social media - Web delivery - Consistent file size and predictable performance, like: - Batch export - Automation scenarios ## Understanding Compression Options by Format Each format supports its own parameters for balancing: - Speed - File size - Quality You pass these through the `ExportOptions` or `VideoExportOptions` structure when calling the export functions. | Format | Parameter | Type | Effect | Default | | ------- | ---------- | ---- | ------- | -------- | | PNG | `pngCompressionLevel` | 0–9 | Higher = smaller, slower (lossless) | 5 | | JPEG | `jpegQuality` | 0.0–1.0 | Lower = smaller, lower quality | 0.9 | | WebP | `webpQuality` | 0.0–1.0 | 1.0 = lossless, \<1.0 = lossy | 1.0 | | MP4 | `videoBitrate`, `audioBitrate` | bits/sec | Higher = larger, higher quality | 0 (auto) | ## Export Images with Compression Below is an example that exports a design block as PNG and JPEG while tuning compression options. ```swift import Foundation import IMGLYEngine #if canImport(UIKit) import UIKit #endif @MainActor func exportCompressedImages(engine: Engine) async throws { // Load a demo scene let sceneURL = URL(string: "https://cdn.img.ly/assets/demo/v1/ly.img.template/templates/cesdk_postcard_1.scene")! 
let scene = try await engine.scene.load(from: sceneURL) // Select the first graphic block to export let block = try engine.block.find(byType: .graphic).first! // Export PNG with maximum compression (lossless) let pngOptions = ExportOptions(pngCompressionLevel: 9) let pngData = try await engine.block.export(block, mimeType: .png, options: pngOptions) // Export JPEG with balanced quality (lossy) let jpegOptions = ExportOptions(jpegQuality: 0.7) let jpegData = try await engine.block.export(block, mimeType: .jpeg, options: jpegOptions) // Convert to UIImage for preview (iOS) // pass these to another part of the app for preview let pngImage = UIImage(data: pngData) let jpegImage = UIImage(data: jpegData) } ``` Choose a format depending on what matters the most for your output: - **PNG** is ideal for flat graphics or assets that require **transparency**. - **JPEG** is best for photographs where slight **compression** artifacts are acceptable. - **WebP** can serve **both** roles: it supports transparency like PNG and delivers smaller files like JPEG. ## Combine Compression with Resolution Scaling You can further reduce file size by downscaling exports: ```swift let scaledOptions = ExportOptions( pngCompressionLevel: 7, targetWidth: 1080, targetHeight: 1080 ) let scaledBlob = try await engine.block.export(block, mimeType: .png, options: scaledOptions) ``` When you specify only one dimension, CE.SDK automatically preserves aspect ratio for consistent results. ## Compress Video Exports The `VideoExportOptions` structure handles configuration for video compression. 
You can specify: - Bitrate - Framerate - H.264 profile - Target resolution ```swift let videoOptions = VideoExportOptions( h264Profile: .main, h264Level: 52, videoBitrate: 2_000_000, // 2 Mbps = moderate compression audioBitrate: 128_000, // 128 kbps AAC framerate: 30.0, targetWidth: 1280, targetHeight: 720 ) // Export a page as compressed MP4 if let page = try engine.scene.getCurrentPage() { for try await export in try await engine.block.exportVideo(page, mimeType: .mp4, options: videoOptions) { switch export { case let .progress(_, encodedFrames, totalFrames): print("Progress: \(encodedFrames)/\(totalFrames)") case let .finished(video: videoData): print("Export complete: \(videoData.count) bytes") } } } ``` As a rough guide to bitrate values: - **1–2 Mbps** produces high-quality results for **web** and social media clips. - **8–12 Mbps** is more appropriate for **downloadable HD video**. Setting `videoBitrate` to `0` allows CE.SDK to automatically choose an optimized bitrate based on resolution and frame rate. The H.264 `profile` and `level` determine compatibility and encoder features.\ Use `.baseline` for mobile-friendly playback, `.main` for standard HD, and `.high` for the highest quality exports targeting desktop or professional workflows. ## Performance and Trade-Offs Higher compression results in smaller files but slower export speeds. For example: - PNG Level 9 may take twice as long to encode as Level 3–5, though it produces smaller files. - JPEG and WebP are faster but can introduce visible compression artifacts. Video exports are more demanding and depend heavily on device CPU and GPU performance.
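The bitrate guidance above can be condensed into a small helper. The enum and the exact numbers are illustrative starting points drawn from the ranges mentioned, not CE.SDK API:

```swift
/// Illustrative delivery targets; tune the numbers for your platform.
enum VideoDeliveryTarget {
    case webSocial      // small files, smooth playback
    case downloadableHD // full-HD downloads
}

/// Suggests a bitrate (bits per second) and resolution for a delivery target.
func suggestedEncodingSettings(
    for target: VideoDeliveryTarget
) -> (videoBitrate: Int, targetWidth: Int, targetHeight: Int) {
    switch target {
    case .webSocial:
        // ~2 Mbps at 720p suits web and social media clips.
        return (2_000_000, 1280, 720)
    case .downloadableHD:
        // ~10 Mbps at 1080p for downloadable HD video.
        return (10_000_000, 1920, 1080)
    }
}
```

Feed the tuple into the `videoBitrate`, `targetWidth`, and `targetHeight` fields of `VideoExportOptions` when building export options.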
You can check available export limits before encoding: ```swift let maxSize = try engine.editor.getMaxExportSize() let availableMemory = try engine.editor.getAvailableMemory() print("Max export size: \(maxSize), Memory: \(availableMemory)") ``` ## Real-World Compression Comparison (1080 × 1080) The following table compares average results across different compression settings for photo-like and graphic-like images. | Format | Setting | Avg. File Size (KB) | Encode Time (ms) | PSNR (dB)\* | Notes | | ------- | -------- | ------------------- | ---------------- | ------------ | ------ | | **PNG** | Level 0 | ~1 450 | ~44 | ∞ (lossless) | Fastest, largest | | | Level 5 | ~1 260 | ~61 | ∞ | Balanced speed and size | | | Level 9 | ~1 080 | ~88 | ∞ | Smallest, slowest | | **JPEG** | Quality 95 | ~640 | ~24 | 43 | Near-lossless appearance | | | Quality 80 | ~420 | ~20 | 39 | Good default for photos | | | Quality 60 | ~290 | ~17 | 35 | Some artifacts visible | | | Quality 40 | ~190 | ~15 | 31 | Heavy compression | | **WebP** | Quality 95 | ~510 | ~27 | 44 | Smaller than JPEG | | | Quality 80 | ~350 | ~23 | 39 | Excellent web balance | | | Quality 60 | ~240 | ~20 | 35 | Mild artifacts | | | Quality 40 | ~160 | ~18 | 31 | Compact, noticeable loss | | | Lossless | ~830 | ~33 | ∞ | Smaller than PNG, keeps alpha | \*PSNR > 40 dB ≈ visually lossless; 30–35 dB shows mild artifacts. **Key Takeaways**: - **WebP** achieves 70–85 % smaller files than uncompressed PNG with high quality around `webpQuality = 0.8`. - **JPEG** performs well for photographs; use `jpegQuality = 0.8–0.9` for web or print, `0.6` for compact exports. - **PNG** is essential for transparency and vector-like shapes; higher levels reduce size modestly at the cost of speed. - Test on realistic assets: complex photos and flat graphics compress differently. ## Practical Presets These presets provide starting points for common export scenarios. 
| Use Case | Format | Typical Settings | Result | Notes | |-----------|---------|------------------|---------|-------| | **Web or Social Sharing** | JPEG / WebP | `jpegQuality: 0.8` or `webpQuality: 0.8` | ~60–70 % smaller than PNG | Balanced quality and size | | **UI Graphics / Transparent Assets** | PNG / WebP | `pngCompressionLevel: 6–8` or `webpQuality: 1.0 (lossless)` | ~25 % smaller than default PNG | Maintains transparency | | **High-Quality Print or Archival** | PNG / WebP Lossless | `pngCompressionLevel: 9` or `webpQuality: 1.0` | Maximum fidelity | Slower export, large files | | **Video for Web / Social** | MP4 | `videoBitrate: 2_000_000`, `audioBitrate: 128_000`, `targetWidth: 1280` | Smooth playback, small file | Adjust for platform | | **Video for Download / HD** | MP4 | `videoBitrate: 8_000_000`, `targetWidth: 1920`, `framerate: 30` | Full HD quality | Larger file, slower encode | **PDF and Print**: PDF exports aren’t compressed by default. Use `exportPdfWithHighCompatibility` when you need broad software support in print workflows. > **Note:** Consider showing users an **estimated file size** before export. It helps them make informed choices about quality vs. performance. ## Automating Compression in Batch Exports When exporting multiple elements, apply the same compression settings programmatically: ```swift for block in try engine.block.find(byType: .graphic) { let options = ExportOptions(jpegQuality: 0.8) _ = try await engine.block.export(block, mimeType: .jpeg, options: options) } ``` This ensures consistent quality and file size across all exported assets. ## Troubleshooting **❌ File size not reduced**: - Ensure you set the correct property name for the chosen format, such as `jpegQuality` or `webpQuality`. **❌ JPEG quality too low**: - Increase quality to 0.9 or use PNG/WebP lossless. **❌ Export slow**: - Check for an excessive compression level. - Lower the PNG level to 5–6. **❌ Video not compressing**: - Set `videoBitrate` to a reasonable non-zero value.
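The estimated-file-size suggestion in the note above can be approximated from bitrate and duration. This back-of-the-envelope helper is illustrative; it ignores container overhead and encoder variance, so treat the result as a rough UI hint only:

```swift
import Foundation

/// Rough MP4 size estimate: (video + audio bitrate in bits/sec) / 8 × seconds.
func estimatedVideoFileSize(videoBitrate: Int, audioBitrate: Int, duration: TimeInterval) -> Int {
    Int(Double(videoBitrate + audioBitrate) / 8.0 * duration)
}

/// Human-readable size string for display, e.g. in an export sheet.
func formattedSize(bytes: Int) -> String {
    ByteCountFormatter.string(fromByteCount: Int64(bytes), countStyle: .file)
}
```

For example, a 30-second clip at `videoBitrate: 2_000_000` and `audioBitrate: 128_000` comes out to roughly 8 MB.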
## Next Steps Compression is one of the most practical tools for optimizing export workflows.\ By adjusting the `ExportOptions` and `VideoExportOptions` structures in Swift, you can deliver high-quality results efficiently—whether your users are exporting social media posts, UI assets, or professional-grade print layouts. - [Export Overview](https://img.ly/docs/cesdk/ios/export-save-publish/export/overview-9ed3a8/) to learn about all available export formats. - Apply compression consistently in automated exports using [batch processing](https://img.ly/docs/cesdk/ios/automation/batch-processing-ab2d18/). - Combine scaling and compression for [thumbnails](https://img.ly/docs/cesdk/ios/export-save-publish/create-thumbnail-749be1/). --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Options" description: "Explore export options, supported formats, and configuration features for sharing or rendering output." platform: ios url: "https://img.ly/docs/cesdk/ios/export-save-publish/export/overview-9ed3a8/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). 
**Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Export Media Assets](https://img.ly/docs/cesdk/ios/export-save-publish/export-82f968/) > [Overview](https://img.ly/docs/cesdk/ios/export-save-publish/export/overview-9ed3a8/) --- ```swift file=@cesdk_swift_examples/engine-guides-exporting-blocks/ExportingBlocks.swift reference-only import Foundation import IMGLYEngine #if canImport(UIKit) import UIKit #endif #if canImport(AppKit) import AppKit #endif @MainActor func exportingBlocks(engine: Engine) async throws { try engine.editor.setSettingString("basePath", value: "https://cdn.img.ly/packages/imgly/cesdk-engine/1.68.0/assets") try await engine.addDefaultAssetSources() let sceneUrl = URL(string: "https://cdn.img.ly/assets/demo/v1/ly.img.template/templates/cesdk_postcard_1.scene")! try await engine.scene.load(from: sceneUrl) /* Export the scene as PDF. */ let scene = try engine.scene.get()! let mimeTypePdf: MIMEType = .pdf let sceneBlob = try await engine.block.export(scene, mimeType: mimeTypePdf) /* Export a block as PNG image. */ let block = try engine.block.find(byType: .graphic).first! let mimeTypePng: MIMEType = .png /* Optionally, the maximum supported export size can be checked before exporting */ let maxExportSizeInPixels = try engine.editor.getMaxExportSize() /* Optionally, the compression level and the target size can be specified. */ let options = ExportOptions(pngCompressionLevel: 9, targetWidth: 0, targetHeight: 0) let blob = try await engine.block.export(block, mimeType: mimeTypePng, options: options) /* Convert the blob to UIImage or NSImage. */ #if os(iOS) let exportedBlock = UIImage(data: blob) #endif #if os(macOS) let exportedBlock = NSImage(data: blob) #endif } ``` Exporting via the `block.export` endpoint allows fine-grained control of the target format. CE.SDK currently supports exporting scenes, pages, groups, or individual blocks in the PNG, JPEG, WEBP, BINARY and PDF formats. 
To specify the desired type, just pass in the corresponding `mimeType`. Pass additional options in a mime-type specific object: | Format | MimeType | Options (Default) | | ------ | -------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | PNG | `image/png` | `pngCompressionLevel (5)` - The compression level is a trade-off between file size and encoding/decoding speed, but doesn't affect quality. Valid values are `[0-9]` ranging from no to maximum compression. | | JPEG | `image/jpeg` | `jpegQuality (0.9)` - Directly influences the resulting file's visual quality. Smaller = worse quality, but lower file size. Valid values are `(0-1]` | | WEBP | `image/webp` | `webpQuality (1.0)` - Controls the desired output quality. 1.0 results in a special lossless encoding that usually produces smaller file sizes than PNG. Valid values are (0-1], higher means better quality. | | BINARY | `application/octet-stream` | No additional options. This type returns the raw image data in RGBA8888 order in a blob. | | PDF | `application/pdf` | `exportPdfWithHighCompatibility (true)` - Increase compatibility with different PDF viewers. Images and effects will be rasterized with regard to the scene's DPI value instead of simply being embedded. | | PDF | `application/pdf` | `exportPdfWithUnderlayer (false)` - An underlayer is generated by adding a graphics block, in the shape of the page's elements, behind those elements. | | PDF | `application/pdf` | `underlayerSpotColorName ("")` - The name of the spot color to be used for the underlayer's fill. | | PDF | `application/pdf` | `underlayerOffset (0.0)` - The adjustment in size of the shape of the underlayer. | Certain formats allow additional configuration, e.g. `options.jpegQuality` controls the output quality level when exporting to JPEG.
These format-specific options are ignored when exporting to other formats. You can choose which part of the scene to export by passing in the respective block as the first parameter. For all formats, an optional `targetWidth` and `targetHeight` in pixels can be specified. If used, the block will be rendered large enough that it fills the target size entirely while maintaining its aspect ratio. The supported export size limit can be queried with `editor.getMaxExportSize()`; the width and height should not exceed this value. Export details: - Exporting automatically performs an internal update to resolve the final layout for all blocks. - Only blocks that belong to the scene hierarchy can be exported. - The export will include the block and all child elements in the block hierarchy. - If the exported block itself is rotated it will be exported without rotation. - If a margin is set on the block it will be included. - If an outside stroke is set on the block it will be included except for pages. - Exporting a scene with more than one page may result in transparent areas between the pages; it is recommended to export the individual pages instead. - Exporting as JPEG drops any transparency on the final image and may lead to unexpected results. ```swift reference-only let scene = try engine.scene.get()! let page = try engine.scene.getCurrentPage()! let exportOptions = ExportOptions( /** * The PNG compression level to use when exporting to PNG. * Valid values are 0 to 9, higher means smaller, but slower. * Quality is not affected. * Ignored for other encodings. * The default value is 5. */ pngCompressionLevel: 5, /** * The JPEG quality to use when encoding to JPEG. * Valid values are (0F-1F], higher means better quality. * Ignored for other encodings. * The default value is 0.9F. */ jpegQuality: 0.9, /** * The WebP quality to use when encoding to WebP. Valid values are (0-1], higher means better quality.
* WebP uses a special lossless encoding that usually produces smaller file sizes than PNG. * Ignored for other encodings. Defaults to 1.0. */ webpQuality: 1.0, /** * An optional target width used in conjunction with target height. * If used, the block will be rendered large enough that it fills the target * size entirely while maintaining its aspect ratio. * The default value is 0. */ targetWidth: 0, /** * An optional target height used in conjunction with target width. * If used, the block will be rendered large enough that it fills the target * size entirely while maintaining its aspect ratio. * The default value is 0. */ targetHeight: 0, /** * Export the PDF document with higher compatibility with different PDF viewers. * Bitmap images and some effects like gradients will be rasterized with the DPI * setting instead of embedding them directly. * The default value is true. */ exportPdfWithHighCompatibility: true, /** * Export the PDF document with an underlayer. * An underlayer is generated by adding a graphics block, in the shape of the * page's elements, behind those elements. */ exportPdfWithUnderlayer: false, /** * The name of the spot color to be used for the underlayer's fill. */ underlayerSpotColorName: "", /** * The adjustment in size of the shape of the underlayer. */ underlayerOffset: 0.0 ) let blob = try await engine.block.export( scene, mimeType: MIMEType.png, options: exportOptions ) let colorMaskedBlob = try await engine.block.exportWithColorMask( scene, mimeType: MIMEType.png, maskColorR: 1, maskColorG: 0, maskColorB: 0, options: exportOptions ) let videoExportOptions = VideoExportOptions( /** * Determines the encoder feature set and in turn the quality, size and speed of the encoding process. * The default value is `.main`. */ h264Profile: .main, /** * Controls the H.264 encoding level. This relates to parameters used by the encoder such as bit rate, * timings and motion vectors. Defined by the spec are levels 1.0 up to 6.2.
To arrive at an integer value, * the level is multiplied by ten. E.g. to get level 5.2, pass a value of 52. * The default value is 52. */ h264Level: 52, /** * The video bitrate in bits per second. The maximum bitrate is determined by h264Profile and h264Level. * If the value is 0, the bitrate is automatically determined by the engine. */ videoBitrate: 0, /** * The audio bitrate in bits per second. If the value is 0, the bitrate is automatically determined by the engine (128kbps for stereo AAC stream). */ audioBitrate: 0, /** * The target frame rate of the exported video in Hz. * The default value is 30. */ framerate: 30.0, /** * An optional target width used in conjunction with target height. * If used, the block will be rendered large enough that it fills the target * size entirely while maintaining its aspect ratio. */ targetWidth: 1280, /** * An optional target height used in conjunction with target width. * If used, the block will be rendered large enough that it fills the target * size entirely while maintaining its aspect ratio. */ targetHeight: 720 ) let exportTask = Task { for try await export in try await engine.block.exportVideo(page, mimeType: MIMEType.mp4, options: videoExportOptions) { switch export { case let .progress(renderedFrames, encodedFrames, totalFrames): print("Rendered", renderedFrames, "frames and encoded", encodedFrames, "frames out of", totalFrames) case let .finished(video: videoData): return videoData } } return Blob() } let videoBlob = try await exportTask.value let maxExportSizeInPixels = try engine.editor.getMaxExportSize() let availableMemoryInBytes = try engine.editor.getAvailableMemory() ``` ## Export a Static Design ```swift func export(_ id: DesignBlockID, mimeType: MIMEType, options: ExportOptions = .init(), onPreExport: @Sendable (_ engine: Worker) async throws -> Void = { _ in }) async throws -> Blob ``` Exports a design block element as a file of the given mime type.
Performs an internal update to resolve the final layout for the blocks. - `id`: The design block element to export. - `mimeType`: The mime type of the output file. - `options`: The options for exporting the block type. - `onPreExport`: The closure to configure the engine before export. Note that the `engine` parameter of this closure is a separate engine that runs in the background. - Returns: The exported data. ## Export with a Color Mask ```swift func exportWithColorMask(_ id: DesignBlockID, mimeType: MIMEType, maskColorR: Float, maskColorG: Float, maskColorB: Float, options: ExportOptions = .init(), onPreExport: @Sendable (_ engine: Worker) async throws -> Void = { _ in }) async throws -> [Blob] ``` Exports a design block element as a file of the given mime type. Performs an internal update to resolve the final layout for the blocks. - `id`: The design block element to export. - `maskColorR`: The red mask color component in the range of 0 to 1. - `maskColorG`: The green mask color component in the range of 0 to 1. - `maskColorB`: The blue mask color component in the range of 0 to 1. - `mimeType`: The mime type of the output file. - `options`: The options for exporting the block type. - `onPreExport`: The closure to configure the engine before export. Note that the `engine` parameter of this closure is a separate engine that runs in the background. - Returns: A list of the exported image data and mask data. ## Export a Video Export a page as a video file of the given mime type. ```swift func exportVideo(_ id: DesignBlockID, mimeType: MIMEType = .mp4, options: VideoExportOptions = .init(), onPreExport: @Sendable (_ engine: Worker) async throws -> Void = { _ in }) async throws -> AsyncThrowingStream ``` Exports a design block as a video file of the given mime type. - `id`: The design block element to export. Currently, only page blocks are supported. - `mimeType`: The mime type of the output video file. - `options`: The options for exporting the video.
- `onPreExport`: The closure to configure the engine before export. Note that the `engine` parameter of this closure is a separate engine that runs in the background. - Returns: A stream of video export events that can be used to monitor the progress of the export and to receive the exported video data. ## Export Information Before exporting, the maximum export size and available memory can be queried. ```swift public func getMaxExportSize() throws -> Int ``` Get the export size limit in pixels on the current device. An export is only possible when both the width and height of the output are below or equal to this limit. However, this is only an upper limit as the export might also not be possible due to other reasons, e.g., memory constraints. - Returns: The upper export size limit in pixels or an unlimited size, i.e., the maximum signed 32-bit integer value, if the limit is unknown. ```swift public func getAvailableMemory() throws -> Int64 ``` Get the currently available memory in bytes. - Returns: The available memory in bytes. --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "To PDF" description: "Learn how to export pages to PDF and automatically generate an underlayer." platform: ios url: "https://img.ly/docs/cesdk/ios/export-save-publish/export/to-pdf-95e04b/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt).
**Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Export Media Assets](https://img.ly/docs/cesdk/ios/export-save-publish/export-82f968/) > [To PDF](https://img.ly/docs/cesdk/ios/export-save-publish/export/to-pdf-95e04b/) --- ```swift file=@cesdk_swift_examples/engine-guides-underlayer/Underlayer.swift reference-only import Foundation import IMGLYEngine @MainActor func underlayer(engine: Engine) async throws { let scene = try engine.scene.create() let page = try engine.block.create(.page) try engine.block.setWidth(page, value: 800) try engine.block.setHeight(page, value: 600) try engine.block.appendChild(to: scene, child: page) let block = try engine.block.create(.graphic) try engine.block.setShape(block, shape: engine.block.createShape(.star)) try engine.block.setPositionX(block, value: 350) try engine.block.setPositionY(block, value: 400) try engine.block.setWidth(block, value: 100) try engine.block.setHeight(block, value: 100) let fill = try engine.block.createFill(.color) try engine.block.setFill(block, fill: fill) let rgbaBlue = Color.rgba(r: 0, g: 0, b: 1, a: 1) try engine.block.setColor(fill, property: "fill/color/value", color: rgbaBlue) engine.editor.setSpotColor(name: "RDG_WHITE", r: 0.8, g: 0.8, b: 0.8) let mimeTypePdf: MIMEType = .pdf let options = ExportOptions(exportPdfWithUnderlayer: true, underlayerSpotColorName: "RDG_WHITE", underlayerOffset: -2.0) let blob = try await engine.block.export(page, mimeType: mimeTypePdf, options: options) } ``` When printing on a non-white medium or on a special medium like fabric or glass, printing your design over an underlayer helps achieve the desired result. An underlayer will typically be printed using a special ink and be of the exact shape of your design. When exporting to PDF, you can specify that an underlayer be automatically generated in the `ExportOptions`. 
An underlayer is generated by detecting the contour of all elements on a page and inserting a new block with the shape of the detected contour. This new block is positioned behind all existing blocks. After exporting, the new block is removed. The result is a PDF file containing an additional shape that matches the contour of your design and sits behind it.

The ink to be used by the printer is specified in the `ExportOptions` with a [spot color](https://img.ly/docs/cesdk/ios/colors-a9b79c/). You can also adjust the scale of the underlayer shape with a negative or positive offset, in design units.

> **Note:** **Warning** Do not flatten the resulting PDF file or you will lose the
> underlayer shape which sits behind your design.

## Set up the scene

We first create a new scene with a graphic block that has a color fill.

```swift highlight-setup
let scene = try engine.scene.create()

let page = try engine.block.create(.page)
try engine.block.setWidth(page, value: 800)
try engine.block.setHeight(page, value: 600)
try engine.block.appendChild(to: scene, child: page)

let block = try engine.block.create(.graphic)
try engine.block.setShape(block, shape: engine.block.createShape(.star))
try engine.block.setPositionX(block, value: 350)
try engine.block.setPositionY(block, value: 400)
try engine.block.setWidth(block, value: 100)
try engine.block.setHeight(block, value: 100)

let fill = try engine.block.createFill(.color)
try engine.block.setFill(block, fill: fill)
let rgbaBlue = Color.rgba(r: 0, g: 0, b: 1, a: 1)
try engine.block.setColor(fill, property: "fill/color/value", color: rgbaBlue)
```

## Add the underlayer's spot color

Here we instantiate a spot color with the known name of the ink the printer should use for the underlayer. The visual color approximation is not so important, so long as the name matches what the printer expects.
```swift highlight-create-underlayer-spot-color
engine.editor.setSpotColor(name: "RDG_WHITE", r: 0.8, g: 0.8, b: 0.8)
```

## Exporting with an underlayer

We enable the automatic generation of an underlayer on export with the option `exportPdfWithUnderlayer = true`. We specify the ink to use with `underlayerSpotColorName = "RDG_WHITE"`. In this instance, we make the underlayer a bit smaller than our design, so we specify a negative offset of 2 design units (e.g., millimeters) with `underlayerOffset = -2.0`.

```swift highlight-export-pdf-underlayer
let mimeTypePdf: MIMEType = .pdf
let options = ExportOptions(exportPdfWithUnderlayer: true, underlayerSpotColorName: "RDG_WHITE", underlayerOffset: -2.0)
let blob = try await engine.block.export(page, mimeType: mimeTypePdf, options: options)
```

## Full Code

Here's the full code:

```swift
import Foundation
import IMGLYEngine

@MainActor
func underlayer(engine: Engine) async throws {
  let scene = try engine.scene.create()

  let page = try engine.block.create(.page)
  try engine.block.setWidth(page, value: 800)
  try engine.block.setHeight(page, value: 600)
  try engine.block.appendChild(to: scene, child: page)

  let block = try engine.block.create(.graphic)
  try engine.block.setShape(block, shape: engine.block.createShape(.star))
  try engine.block.setPositionX(block, value: 350)
  try engine.block.setPositionY(block, value: 400)
  try engine.block.setWidth(block, value: 100)
  try engine.block.setHeight(block, value: 100)

  let fill = try engine.block.createFill(.color)
  try engine.block.setFill(block, fill: fill)
  let rgbaBlue = Color.rgba(r: 0, g: 0, b: 1, a: 1)
  try engine.block.setColor(fill, property: "fill/color/value", color: rgbaBlue)

  engine.editor.setSpotColor(name: "RDG_WHITE", r: 0.8, g: 0.8, b: 0.8)

  let mimeTypePdf: MIMEType = .pdf
  let options = ExportOptions(exportPdfWithUnderlayer: true, underlayerSpotColorName: "RDG_WHITE", underlayerOffset: -2.0)
  let blob = try await engine.block.export(page, mimeType: mimeTypePdf, options: options)
}
```

---
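The `Blob` returned by the export call is binary data that you typically persist or hand off to a print workflow. A minimal, engine-free sketch of writing such data to a temporary PDF file (the `blob` value here is only a stand-in for the data returned by `export(_:mimeType:options:)` above):

```swift
import Foundation

// Stand-in for the PDF data returned by `engine.block.export(...)`.
let blob = Data("%PDF-1.7".utf8)

// Write the exported PDF to a uniquely named file in the temporary directory.
let pdfURL = FileManager.default.temporaryDirectory
  .appendingPathComponent(UUID().uuidString)
  .appendingPathExtension("pdf")
try blob.write(to: pdfURL)

// The file can now be shared, uploaded, or handed to a print workflow.
```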
## More Resources

- **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation
- **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs)
- **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples
- **[Support](mailto:support@img.ly)** - Contact IMG.LY support

---

---
title: "Save"
description: "Save design progress locally or to a backend service to allow for later editing or publishing."
platform: ios
url: "https://img.ly/docs/cesdk/ios/export-save-publish/save-c8b124/"
---

> This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt).

**Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Save](https://img.ly/docs/cesdk/ios/export-save-publish/save-c8b124/)

---

The CreativeEngine allows you to save scenes in a binary format to share them between editors or store them for later editing. A scene can be saved either as a scene file or as an archive file. A scene file stores only the source URIs of assets, the general layout, and element properties. Conversely, an archive file contains the scene's assets within it and references them as relative URIs.

> **Note:** **Warning** A scene file does not include any fonts or images. Only the source
> URIs of assets, the general layout, and element properties are stored. When
> loading scenes in a new environment, ensure previously used asset URIs are
> available.
```swift file=@cesdk_swift_examples/engine-guides-save-scene-to-archive/SaveSceneToArchive.swift reference-only
import Foundation
import IMGLYEngine

@MainActor
func saveSceneToArchive(engine: Engine) async throws {
  let sceneUrl = URL(string: "https://cdn.img.ly/assets/demo/v1/ly.img.template/templates/cesdk_postcard_1.scene")!
  try await engine.scene.load(from: sceneUrl)

  let blob = try await engine.scene.saveToArchive()

  var request = URLRequest(url: .init(string: "https://example.com/upload/")!)
  request.httpMethod = "POST"
  let (data, response) = try await URLSession.shared.upload(for: request, from: blob)
}
```

## Save Scenes to an Archive

In this example, we will show you how to save scenes as an archive with the [CreativeEditor SDK](https://img.ly/products/creative-sdk). As an archive, the resulting `Blob` includes all pages, any hidden elements, and all the asset data.

To get hold of such a `Blob`, you need to use `engine.scene.saveToArchive()`. This is an asynchronous method. After awaiting its result, we receive a `Blob` holding the entire scene currently loaded in the editor, including its assets' data.

```swift highlight-saveToArchive
let blob = try await engine.scene.saveToArchive()
```

That `Blob` can then be treated as a form file parameter and sent to a remote location.

```swift highlight-create-form-data-archive
var request = URLRequest(url: .init(string: "https://example.com/upload/")!)
request.httpMethod = "POST"
let (data, response) = try await URLSession.shared.upload(for: request, from: blob)
```

### Full Code

Here's the full code:

```swift file=@cesdk_swift_examples/engine-guides-save-scene-to-archive/SaveSceneToArchive.swift
import Foundation
import IMGLYEngine

@MainActor
func saveSceneToArchive(engine: Engine) async throws {
  let sceneUrl = URL(string: "https://cdn.img.ly/assets/demo/v1/ly.img.template/templates/cesdk_postcard_1.scene")!
  try await engine.scene.load(from: sceneUrl)

  let blob = try await engine.scene.saveToArchive()

  var request = URLRequest(url: .init(string: "https://example.com/upload/")!)
  request.httpMethod = "POST"
  let (data, response) = try await URLSession.shared.upload(for: request, from: blob)
}
```

```swift file=@cesdk_swift_examples/engine-guides-save-scene-to-blob/SaveSceneToBlob.swift reference-only
import Foundation
import IMGLYEngine

@MainActor
func saveSceneToBlob(engine: Engine) async throws {
  let sceneUrl = URL(string: "https://cdn.img.ly/assets/demo/v1/ly.img.template/templates/cesdk_postcard_1.scene")!
  try await engine.scene.load(from: sceneUrl)

  let savedSceneString = try await engine.scene.saveToString()
  let blob = savedSceneString.data(using: .utf8)!

  var request = URLRequest(url: .init(string: "https://example.com/upload/")!)
  request.httpMethod = "POST"
  let (data, response) = try await URLSession.shared.upload(for: request, from: blob)
}
```

## Save Scenes to a Blob

In this example, we will show you how to save scenes as a `Blob` with the [CreativeEditor SDK](https://img.ly/products/creative-sdk). This is done by converting the contents of a scene to a string, which can then be stored or transferred. For sending these to a remote location, we wrap them in a `Blob` and treat it as a file object.

To get hold of the scene contents as a string, you need to use `engine.scene.saveToString()`. This is an asynchronous method. After awaiting its result, we receive a plain string holding the entire scene currently loaded in the editor. This includes all pages and any hidden elements, but none of the actual asset data.

```swift highlight-saveToBlob
let savedSceneString = try await engine.scene.saveToString()
```

The returned string consists solely of ASCII characters and can safely be used further or written to a database.

```swift highlight-create-blob
let blob = savedSceneString.data(using: .utf8)!
```

That object can then be treated as a form file parameter and sent to a remote location.

```swift highlight-create-form-data-blob
var request = URLRequest(url: .init(string: "https://example.com/upload/")!)
request.httpMethod = "POST"
let (data, response) = try await URLSession.shared.upload(for: request, from: blob)
```

### Full Code

Here's the full code:

```swift file=@cesdk_swift_examples/engine-guides-save-scene-to-blob/SaveSceneToBlob.swift
import Foundation
import IMGLYEngine

@MainActor
func saveSceneToBlob(engine: Engine) async throws {
  let sceneUrl = URL(string: "https://cdn.img.ly/assets/demo/v1/ly.img.template/templates/cesdk_postcard_1.scene")!
  try await engine.scene.load(from: sceneUrl)

  let savedSceneString = try await engine.scene.saveToString()
  let blob = savedSceneString.data(using: .utf8)!

  var request = URLRequest(url: .init(string: "https://example.com/upload/")!)
  request.httpMethod = "POST"
  let (data, response) = try await URLSession.shared.upload(for: request, from: blob)
}
```

```swift file=@cesdk_swift_examples/engine-guides-save-scene-to-string/SaveSceneToString.swift reference-only
import Foundation
import IMGLYEngine

@MainActor
func saveSceneToString(engine: Engine) async throws {
  let sceneUrl = URL(string: "https://cdn.img.ly/assets/demo/v1/ly.img.template/templates/cesdk_postcard_1.scene")!
  try await engine.scene.load(from: sceneUrl)

  let sceneAsString = try await engine.scene.saveToString()
  print(sceneAsString)
}
```

## Save Scenes to a String

In this example, we will show you how to save scenes as a string with the [CreativeEditor SDK](https://img.ly/products/creative-sdk). This is done by converting the contents of a scene to a single string, which can then be stored or transferred.

To get hold of such a string, you need to use `engine.scene.saveToString()`. This is an asynchronous method. After awaiting its result, we receive a plain string holding the entire scene currently loaded in the editor.
This includes all pages and any hidden elements, but none of the actual asset data. ```swift highlight-saveToString let sceneAsString = try await engine.scene.saveToString() ``` The returned string consists solely of ASCII characters and can safely be used further or written to a database. ```swift highlight-result-string print(sceneAsString) ``` ### Full Code Here's the full code: ```swift file=@cesdk_swift_examples/engine-guides-save-scene-to-string/SaveSceneToString.swift import Foundation import IMGLYEngine @MainActor func saveSceneToString(engine: Engine) async throws { let sceneUrl = URL(string: "https://cdn.img.ly/assets/demo/v1/ly.img.template/templates/cesdk_postcard_1.scene")! try await engine.scene.load(from: sceneUrl) let sceneAsString = try await engine.scene.saveToString() print(sceneAsString) } ``` ```swift file=@cesdk_swift_examples/engine-guides-save-scene-to-string-with-persistence-callback/SaveSceneToStringWithPersistenceCallback.swift reference-only import Foundation import IMGLYEngine @MainActor func saveSceneToStringWithPersistenceCallback(engine: Engine) async throws { try engine.editor.setSettingString("basePath", value: "https://cdn.img.ly/packages/imgly/cesdk-engine/1.68.0/assets") try await engine.addDefaultAssetSources() let sceneUrl = URL(string: "https://cdn.img.ly/assets/demo/v1/ly.img.template/templates/cesdk_postcard_1.scene")! 
  try await engine.scene.load(from: sceneUrl)

  let blob = try await engine.scene.saveToArchive()
  let sceneArchiveUrl = FileManager.default.temporaryDirectory.appendingPathComponent(
    UUID().uuidString,
    conformingTo: .zip,
  )
  try blob.write(to: sceneArchiveUrl)
  try await engine.scene.loadArchive(from: sceneArchiveUrl)

  var alreadyPersistedURLs: [String: URL] = [:]
  let sceneAsString = try await engine.scene.saveToString(allowedResourceSchemes: ["http", "https"]) { url, hash in
    guard let persistedURL = alreadyPersistedURLs[hash] else {
      do {
        var blob = Data()
        try engine.editor.getResourceData(url: url, chunkSize: 10_000_000) {
          blob.append($0)
          return true
        }
        let persistedURL = URL(string: "https://example.com/" + url.absoluteString.components(separatedBy: "://")[1])!
        var request = URLRequest(url: persistedURL)
        request.httpMethod = "POST"
        let (data, response) = try await URLSession.shared.upload(for: request, from: blob)
        alreadyPersistedURLs[hash] = persistedURL
        return persistedURL
      } catch {
        print("Failed to persist \(url):", error)
        return url
      }
    }
    return persistedURL
  }
  print(sceneAsString)
}
```

## Save Previously Archived Scenes to a String and Persist Resources

In this example, we will show you how to save scenes that were loaded from an archive to a string with the [CreativeEditor SDK](https://img.ly/products/creative-sdk).

Some scenes contain resources that are only transient and may be lost after the scene is destroyed. An example is a scene that was previously saved as an archive. When loaded, the data of the archived scene's resources is held in in-memory buffers. Saving that scene to a string again would result in a scene whose URLs use the `buffer` scheme, which is unusable in any other instance of the editor. It is best to put all resources online so they are always available.
To that end, you can use `saveToString()`'s `allowedResourceSchemes` and `onDisallowedResourceScheme` parameters to be notified of resources whose URLs should not end up in the final string. Set the `allowedResourceSchemes` argument to an array of schemes whose URLs can be kept as-is, and set `onDisallowedResourceScheme` to a closure that saves the resource's data to a permanent location and returns the new URL referring to that resource. Any resource whose URL scheme is not found in the `allowedResourceSchemes` array triggers a call of the `onDisallowedResourceScheme` closure with the resource's URL and a hash of its data; the URL the closure returns is embedded in the saved string.

```swift highlight-saveToStringWithPersistenceCallback
var alreadyPersistedURLs: [String: URL] = [:]
let sceneAsString = try await engine.scene.saveToString(allowedResourceSchemes: ["http", "https"]) { url, hash in
  guard let persistedURL = alreadyPersistedURLs[hash] else {
    do {
      var blob = Data()
      try engine.editor.getResourceData(url: url, chunkSize: 10_000_000) {
        blob.append($0)
        return true
      }
      let persistedURL = URL(string: "https://example.com/" + url.absoluteString.components(separatedBy: "://")[1])!
      var request = URLRequest(url: persistedURL)
      request.httpMethod = "POST"
      let (data, response) = try await URLSession.shared.upload(for: request, from: blob)
      alreadyPersistedURLs[hash] = persistedURL
      return persistedURL
    } catch {
      print("Failed to persist \(url):", error)
      return url
    }
  }
  return persistedURL
}
```

The returned string consists solely of ASCII characters and can safely be used further or written to a database.
```swift highlight-result-callback print(sceneAsString) ``` ### Full Code Here's the full code: ```swift file=@cesdk_swift_examples/engine-guides-save-scene-to-string-with-persistence-callback/SaveSceneToStringWithPersistenceCallback.swift import Foundation import IMGLYEngine @MainActor func saveSceneToStringWithPersistenceCallback(engine: Engine) async throws { try engine.editor.setSettingString("basePath", value: "https://cdn.img.ly/packages/imgly/cesdk-engine/1.68.0/assets") try await engine.addDefaultAssetSources() let sceneUrl = URL(string: "https://cdn.img.ly/assets/demo/v1/ly.img.template/templates/cesdk_postcard_1.scene")! try await engine.scene.load(from: sceneUrl) let blob = try await engine.scene.saveToArchive() let sceneArchiveUrl = FileManager.default.temporaryDirectory.appendingPathComponent( UUID().uuidString, conformingTo: .zip, ) try blob.write(to: sceneArchiveUrl) try await engine.scene.loadArchive(from: sceneArchiveUrl) var alreadyPersistedURLs: [String: URL] = [:] let sceneAsString = try await engine.scene.saveToString(allowedResourceSchemes: ["http", "https"]) { url, hash in guard let persistedURL = alreadyPersistedURLs[hash] else { do { var blob = Data() try engine.editor.getResourceData(url: url, chunkSize: 10_000_000) { blob.append($0) return true } let persistedURL = URL(string: "https://example.com/" + url.absoluteString.components(separatedBy: "://")[1])! 
var request = URLRequest(url: persistedURL) request.httpMethod = "POST" let (data, response) = try await URLSession.shared.upload(for: request, from: blob) alreadyPersistedURLs[hash] = persistedURL return persistedURL } catch { print("Failed to persist \(url):", error) return url } } return persistedURL } print(sceneAsString) } ``` --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Store Custom Metadata" description: "Attach and persist metadata alongside your design, such as tags, version info, or creator details." platform: ios url: "https://img.ly/docs/cesdk/ios/export-save-publish/store-custom-metadata-337248/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Store Custom Metadata](https://img.ly/docs/cesdk/ios/export-save-publish/store-custom-metadata-337248/) --- ```swift file=@cesdk_swift_examples/engine-guides-store-metadata/StoreMetadata.swift reference-only import Foundation import IMGLYEngine @MainActor func storeMetadata(engine: Engine) async throws { var scene = try await engine.scene.create(fromImage: .init(string: "https://img.ly/static/ubq_samples/imgly_logo.jpg")!) let block = try engine.block.find(byType: .page).first! 
  try engine.block.setMetadata(scene, key: "author", value: "img.ly")
  try engine.block.setMetadata(block, key: "customer_id", value: "1234567890")

  /* We can even store complex objects */
  struct Payment: Encodable {
    let id: Int
    let method: String
    let received: Bool
  }

  let payment = Payment(id: 5, method: "credit_card", received: true)
  try engine.block.setMetadata(
    block,
    key: "payment",
    value: String(data: JSONEncoder().encode(payment), encoding: .utf8)!,
  )

  /* This will return "img.ly" */
  try engine.block.getMetadata(scene, key: "author")

  /* This will return "1234567890" */
  try engine.block.getMetadata(block, key: "customer_id")

  /* This will return ["customer_id", "payment"] */
  try engine.block.findAllMetadata(block)

  try engine.block.removeMetadata(block, key: "payment")

  /* This will return false */
  try engine.block.hasMetadata(block, key: "payment")

  /* We save our scene and reload it from scratch */
  let sceneString = try await engine.scene.saveToString()
  scene = try await engine.scene.load(from: sceneString)

  /* This still returns "img.ly" */
  try engine.block.getMetadata(scene, key: "author")

  /* And this still returns "1234567890" */
  try engine.block.getMetadata(block, key: "customer_id")
}
```

CE.SDK allows you to store custom metadata in your scenes. You can attach metadata to your scene or directly to individual design blocks within the scene. This metadata is persistent across saving and loading of scenes. It simply consists of key-value pairs of strings. Using any string-based serialization format such as JSON will allow you to store even complex objects. Please note that when duplicating blocks, their metadata will also be duplicated.

## Working with Metadata

We can add metadata to any design block using `func setMetadata(_ id: DesignBlockID, key: String, value: String) throws`. This also includes the scene block.
```swift highlight-setMetadata
try engine.block.setMetadata(scene, key: "author", value: "img.ly")
try engine.block.setMetadata(block, key: "customer_id", value: "1234567890")

/* We can even store complex objects */
struct Payment: Encodable {
  let id: Int
  let method: String
  let received: Bool
}

let payment = Payment(id: 5, method: "credit_card", received: true)
try engine.block.setMetadata(
  block,
  key: "payment",
  value: String(data: JSONEncoder().encode(payment), encoding: .utf8)!,
)
```

We can retrieve metadata from any design block or scene using `func getMetadata(_ id: DesignBlockID, key: String) throws`. Before accessing the metadata, you should check for its existence using `func hasMetadata(_ id: DesignBlockID, key: String) throws -> Bool`.

```swift highlight-getMetadata
/* This will return "img.ly" */
try engine.block.getMetadata(scene, key: "author")

/* This will return "1234567890" */
try engine.block.getMetadata(block, key: "customer_id")
```

We can query all metadata keys from any design block or scene using `func findAllMetadata(_ id: DesignBlockID) throws -> [String]`. For blocks without any metadata, this will return an empty list.

```swift highlight-findAllMetadata
/* This will return ["customer_id", "payment"] */
try engine.block.findAllMetadata(block)
```

If you want to get rid of any metadata, you can use `func removeMetadata(_ id: DesignBlockID, key: String) throws`.

```swift highlight-removeMetadata
try engine.block.removeMetadata(block, key: "payment")

/* This will return false */
try engine.block.hasMetadata(block, key: "payment")
```

Metadata will automatically be saved and loaded as part of the scene, so you don't have to worry about it getting lost or having to save it separately.
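Because metadata values are plain strings, a complex value like the `Payment` struct above is stored as JSON and restored with a matching `Decodable` type. An engine-free sketch of that round trip (the `stored` string stands in for the value read back via `getMetadata`):

```swift
import Foundation

struct Payment: Codable, Equatable {
  let id: Int
  let method: String
  let received: Bool
}

// Encode to a JSON string, as done before calling `setMetadata`.
let payment = Payment(id: 5, method: "credit_card", received: true)
let stored = String(data: try JSONEncoder().encode(payment), encoding: .utf8)!

// Decode the string read back via `getMetadata` into the original type.
let restored = try JSONDecoder().decode(Payment.self, from: Data(stored.utf8))
assert(restored == payment)
```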
```swift highlight-persistence /* We save our scene and reload it from scratch */ let sceneString = try await engine.scene.saveToString() scene = try await engine.scene.load(from: sceneString) /* This still returns "img.ly" */ try engine.block.getMetadata(scene, key: "author") /* And this still returns "1234567890" */ try engine.block.getMetadata(block, key: "customer_id") ``` ## Full Code Here's the full code: ```swift import Foundation import IMGLYEngine @MainActor func storeMetadata(engine: Engine) async throws { var scene = try await engine.scene.create(fromImage: .init(string: "https://img.ly/static/ubq_samples/imgly_logo.jpg")!) let block = try engine.block.find(byType: .graphic).first! try engine.block.setMetadata(scene, key: "author", value: "img.ly") try engine.block.setMetadata(block, key: "customer_id", value: "1234567890") /* We can even store complex objects */ struct Payment: Encodable { let id: Int let method: String let received: Bool } let payment = Payment(id: 5, method: "credit_card", received: true) try engine.block.setMetadata( block, key: "payment", value: String(data: JSONEncoder().encode(payment), encoding: .utf8)! 
  )

  /* This will return "img.ly" */
  try engine.block.getMetadata(scene, key: "author")

  /* This will return "1234567890" */
  try engine.block.getMetadata(block, key: "customer_id")

  /* This will return ["customer_id", "payment"] */
  try engine.block.findAllMetadata(block)

  try engine.block.removeMetadata(block, key: "payment")

  /* This will return false */
  try engine.block.hasMetadata(block, key: "payment")

  /* We save our scene and reload it from scratch */
  let sceneString = try await engine.scene.saveToString()
  scene = try await engine.scene.load(from: sceneString)

  /* This still returns "img.ly" */
  try engine.block.getMetadata(scene, key: "author")

  /* And this still returns "1234567890" */
  try engine.block.getMetadata(block, key: "customer_id")
}
```

---

## More Resources

- **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation
- **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs)
- **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples
- **[Support](mailto:support@img.ly)** - Contact IMG.LY support

---

---
title: "File Format Support"
description: "See which image, video, audio, font, and template formats CE.SDK supports for import and export."
platform: ios
url: "https://img.ly/docs/cesdk/ios/file-format-support-3c4b2a/"
---

> This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt).
**Navigation:** [Compatibility & Security](https://img.ly/docs/cesdk/ios/compatibility-fef719/) > [File Format Support](https://img.ly/docs/cesdk/ios/file-format-support-3c4b2a/) --- ## Importing Media ### SVG Limitations ## Exporting Media ## Importing Templates ## Font Formats ## Video & Audio Codecs CE.SDK supports the most widely adopted video and audio codecs to ensure compatibility across platforms: ## Size Limits ### Image Resolution Limits ### Video Resolution & Duration Limits --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Fills" description: "Apply solid colors, gradients, images, or videos as fills to shapes, text, and other design elements." platform: ios url: "https://img.ly/docs/cesdk/ios/fills-402ddc/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Fills](https://img.ly/docs/cesdk/ios/fills-402ddc/) --- --- ## Related Pages - [Fills](https://img.ly/docs/cesdk/ios/fills/overview-3895ee/) - Apply solid colors, gradients, images, or videos as fills to shapes, text, and other design elements. 
--- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Fills" description: "Apply solid colors, gradients, images, or videos as fills to shapes, text, and other design elements." platform: ios url: "https://img.ly/docs/cesdk/ios/fills/overview-3895ee/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Fills](https://img.ly/docs/cesdk/ios/fills-402ddc/) > [Overview](https://img.ly/docs/cesdk/ios/fills/overview-3895ee/) --- ```swift file=@cesdk_swift_examples/engine-guides-using-fills/UsingFills.swift reference-only import Foundation import IMGLYEngine @MainActor func usingFills(engine: Engine) async throws { let scene = try engine.scene.create() let page = try engine.block.create(.page) try engine.block.setWidth(page, value: 800) try engine.block.setHeight(page, value: 600) try engine.block.appendChild(to: scene, child: page) try await engine.scene.zoom(to: page, paddingLeft: 40, paddingTop: 40, paddingRight: 40, paddingBottom: 40) let block = try engine.block.create(.graphic) try engine.block.setShape(block, shape: engine.block.createShape(.rect)) try engine.block.setWidth(block, value: 100) try engine.block.setHeight(block, value: 100) try engine.block.setFill(block, fill: engine.block.createFill(.color)) try engine.block.appendChild(to: page, child: block) try engine.block.supportsFill(scene) // Returns false try engine.block.supportsFill(block) 
  // Returns true

  let colorFill = try engine.block.getFill(block)
  let defaultRectFillType = try engine.block.getType(colorFill)
  let allFillProperties = try engine.block.findAllProperties(colorFill)
  try engine.block.setColor(colorFill, property: "fill/color/value", color: .rgba(r: 1.0, g: 0.0, b: 0.0, a: 1.0))

  try engine.block.setFillEnabled(block, enabled: false)
  try engine.block.setFillEnabled(block, enabled: !engine.block.isFillEnabled(block))

  let imageFill = try engine.block.createFill(.image)
  try engine.block.setString(
    imageFill,
    property: "fill/image/imageFileURI",
    value: "https://img.ly/static/ubq_samples/sample_1.jpg",
  )

  try engine.block.destroy(colorFill)
  try engine.block.setFill(block, fill: imageFill)

  /* The following line would also destroy imageFill */
  // try engine.block.destroy(block)

  let duplicateBlock = try engine.block.duplicate(block)
  try engine.block.setPositionX(duplicateBlock, value: 450)
  let autoDuplicateFill = try engine.block.getFill(duplicateBlock)
  try engine.block.setString(
    autoDuplicateFill,
    property: "fill/image/imageFileURI",
    value: "https://img.ly/static/ubq_samples/sample_2.jpg",
  )

  // let manualDuplicateFill = try engine.block.duplicate(autoDuplicateFill)
  // /* We could now assign this fill to another block. */
  // try engine.block.destroy(manualDuplicateFill)

  let sharedFillBlock = try engine.block.create(.graphic)
  try engine.block.setShape(sharedFillBlock, shape: engine.block.createShape(.rect))
  try engine.block.setPositionX(sharedFillBlock, value: 350)
  try engine.block.setPositionY(sharedFillBlock, value: 400)
  try engine.block.setWidth(sharedFillBlock, value: 100)
  try engine.block.setHeight(sharedFillBlock, value: 100)
  try engine.block.appendChild(to: page, child: sharedFillBlock)

  try engine.block.setFill(sharedFillBlock, fill: engine.block.getFill(block))
}
```

Some [design blocks](https://img.ly/docs/cesdk/ios/concepts/blocks-90241e/) in CE.SDK allow you to modify or replace their fill.
The fill is an object that defines the contents within the shape of a block. CreativeEditor SDK supports many different types of fills, such as images, solid colors, gradients, and videos. Like blocks, each fill has a numeric id that can be used to query and [modify its properties](https://img.ly/docs/cesdk/ios/concepts/blocks-90241e/). We currently support the following fill types: - `FillType.color` - `FillType.linearGradient` - `FillType.radialGradient` - `FillType.conicalGradient` - `FillType.image` - `FillType.video` - `FillType.pixelStream` ## Accessing Fills Not all types of design blocks support fills, so you should always first call the `func supportsFill(_ id: DesignBlockID) throws -> Bool` API before accessing any of the following APIs. ```swift highlight-supportsFill try engine.block.supportsFill(scene) // Returns false try engine.block.supportsFill(block) // Returns true ``` To retrieve the fill id of a design block, call the `func getFill(_ id: DesignBlockID) throws -> DesignBlockID` API. You can then pass this id into other APIs to query more information about the fill, e.g. its type via the `func getType(_ id: DesignBlockID) throws -> String` API. ```swift highlight-getFill let colorFill = try engine.block.getFill(block) let defaultRectFillType = try engine.block.getType(colorFill) ``` ## Fill Properties Just like design blocks, fills of different types have different properties that you can query and modify via the API. Use `func findAllProperties(_ id: DesignBlockID) throws -> [String]` to get a list of all properties of a given fill. For the solid color fill in this example, the call would return `["fill/color/value", "type"]`. Please refer to the [design blocks documentation](https://img.ly/docs/cesdk/ios/concepts/blocks-90241e/) for a complete list of all available properties for each type of fill.
```swift highlight-getProperties let allFillProperties = try engine.block.findAllProperties(colorFill) ``` Once we know the property keys of a fill, we can use the same APIs as for design blocks to modify those properties. For example, we can use `func setColor(_ id: DesignBlockID, property: String, color: Color) throws` to change the color of the fill to red. Once we do this, our graphic block with a rect shape will be filled with solid red. ```swift highlight-modifyProperties try engine.block.setColor(colorFill, property: "fill/color/value", color: .rgba(r: 1.0, g: 0.0, b: 0.0, a: 1.0)) ``` ## Disabling Fills You can disable and enable a fill using the `func setFillEnabled(_ id: DesignBlockID, enabled: Bool) throws` API, for example in cases where the design block should only have a stroke but no fill. Notice that you have to pass the id of the design block, not the id of the fill, to this API. ```swift highlight-disableFill try engine.block.setFillEnabled(block, enabled: false) try engine.block.setFillEnabled(block, enabled: !engine.block.isFillEnabled(block)) ``` ## Changing Fill Types All design blocks that support fills also allow you to exchange their current fill for any other type of fill. To do this, first create a new fill object using `func createFill(_ type: FillType) throws -> DesignBlockID`. ```swift highlight-createFill let imageFill = try engine.block.createFill(.image) try engine.block.setString( imageFill, property: "fill/image/imageFileURI", value: "https://img.ly/static/ubq_samples/sample_1.jpg", ) ``` To assign a fill to a design block, call `func setFill(_ id: DesignBlockID, fill: DesignBlockID) throws`. Make sure to destroy the previous fill of the design block first if you no longer need it; otherwise it is leaked into the scene and can no longer be accessed, because we don't know its id.
Notice that we don't use the `appendChild` API here, which only works with design blocks and not fills. When a fill is attached to one design block, it will be automatically destroyed when the block itself gets destroyed. ```swift highlight-replaceFill try engine.block.destroy(colorFill) try engine.block.setFill(block, fill: imageFill) /* The following line would also destroy imageFill */ // try engine.block.destroy(block) ``` ## Duplicating Fills If we duplicate a design block with a fill that is only attached to this block, the fill will automatically be duplicated as well. To modify the properties of the duplicate fill, we have to query its id from the duplicate block. ```swift highlight-duplicateFill let duplicateBlock = try engine.block.duplicate(block) try engine.block.setPositionX(duplicateBlock, value: 450) let autoDuplicateFill = try engine.block.getFill(duplicateBlock) try engine.block.setString( autoDuplicateFill, property: "fill/image/imageFileURI", value: "https://img.ly/static/ubq_samples/sample_2.jpg", ) // let manualDuplicateFill = try engine.block.duplicate(autoDuplicateFill) // /* We could now assign this fill to another block. */ // try engine.block.destroy(manualDuplicateFill) ``` ## Sharing Fills It is also possible to share a single fill instance between multiple design blocks. In that case, changing the properties of the fill will apply to all of the blocks that it's attached to at once. Destroying a block with a shared fill will not destroy the fill as long as other design blocks still use that fill.
```swift highlight-sharedFill let sharedFillBlock = try engine.block.create(.graphic) try engine.block.setShape(sharedFillBlock, shape: engine.block.createShape(.rect)) try engine.block.setPositionX(sharedFillBlock, value: 350) try engine.block.setPositionY(sharedFillBlock, value: 400) try engine.block.setWidth(sharedFillBlock, value: 100) try engine.block.setHeight(sharedFillBlock, value: 100) try engine.block.appendChild(to: page, child: sharedFillBlock) try engine.block.setFill(sharedFillBlock, fill: engine.block.getFill(block)) ``` ## Full Code Here is the full code for working with fills: ```swift import Foundation import IMGLYEngine @MainActor func usingFills(engine: Engine) async throws { let scene = try engine.scene.create() let page = try engine.block.create(.page) try engine.block.setWidth(page, value: 800) try engine.block.setHeight(page, value: 600) try engine.block.appendChild(to: scene, child: page) try await engine.scene.zoom(to: page, paddingLeft: 40, paddingTop: 40, paddingRight: 40, paddingBottom: 40) let block = try engine.block.create(.graphic) try engine.block.setShape(block, shape: engine.block.createShape(.rect)) try engine.block.setWidth(block, value: 100) try engine.block.setHeight(block, value: 100) try engine.block.setFill(block, fill: engine.block.createFill(.color)) try engine.block.appendChild(to: page, child: block) try engine.block.supportsFill(scene) // Returns false try engine.block.supportsFill(block) // Returns true let colorFill = try engine.block.getFill(block) let defaultRectFillType = try engine.block.getType(colorFill) let allFillProperties = try engine.block.findAllProperties(colorFill) try engine.block.setColor(colorFill, property: "fill/color/value", color: .rgba(r: 1.0, g: 0.0, b: 0.0, a: 1.0)) try engine.block.setFillEnabled(block, enabled: false) try engine.block.setFillEnabled(block, enabled: !engine.block.isFillEnabled(block)) let imageFill = try engine.block.createFill(.image) try engine.block.setString( imageFill, 
property: "fill/image/imageFileURI", value: "https://img.ly/static/ubq_samples/sample_1.jpg" ) try engine.block.destroy(colorFill) try engine.block.setFill(block, fill: imageFill) /* The following line would also destroy imageFill */ // try engine.block.destroy(circle) let duplicateBlock = try engine.block.duplicate(block) try engine.block.setPositionX(duplicateBlock, value: 450) let autoDuplicateFill = try engine.block.getFill(duplicateBlock) try engine.block.setString( autoDuplicateFill, property: "fill/image/imageFileURI", value: "https://img.ly/static/ubq_samples/sample_2.jpg" ) // let manualDuplicateFill = try engine.block.duplicate(autoDuplicateFill) // /* We could now assign this fill to another block. */ // try engine.block.destroy(manualDuplicateFill) let sharedFillBlock = try engine.block.create(.graphic) try engine.block.setShape(sharedFillBlock, shape: engine.block.createShape(.rect)) try engine.block.setPositionX(sharedFillBlock, value: 350) try engine.block.setPositionY(sharedFillBlock, value: 400) try engine.block.setWidth(sharedFillBlock, value: 100) try engine.block.setHeight(sharedFillBlock, value: 100) try engine.block.appendChild(to: page, child: sharedFillBlock) try engine.block.setFill(sharedFillBlock, fill: engine.block.getFill(block)) } ``` --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Filters and Effects" description: "Enhance visual elements with filters and effects such as blur, duotone, LUTs, and chroma keying." platform: ios url: "https://img.ly/docs/cesdk/ios/filters-and-effects-6f88ac/" --- > This is one page of the CE.SDK iOS documentation. 
For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Filters and Effects](https://img.ly/docs/cesdk/ios/filters-and-effects-6f88ac/) --- --- ## Related Pages - [iOS Filters & Effects SDK](https://img.ly/docs/cesdk/ios/filters-and-effects/overview-299b15/) - Enhance visual elements with filters and effects such as blur, duotone, LUTs, and chroma keying. - [Apply a Filter or Effect](https://img.ly/docs/cesdk/ios/filters-and-effects/apply-2764e4/) - Programmatically or manually add effects to design elements to modify their visual style. - [Chroma Key (Green Screen) in iOS, macOS & Catalyst (SwiftUI)](https://img.ly/docs/cesdk/ios/filters-and-effects/chroma-key-green-screen-1e3e99/) - Use CE.SDK's green/blue screen keyer to replace backgrounds, tune edges & spill, and composite subjects over virtual scenes. - [Blur](https://img.ly/docs/cesdk/ios/filters-and-effects/blur-71d642/) - Apply blur effects to soften backgrounds or create depth and focus in your designs. - [Create a Custom LUT Filter](https://img.ly/docs/cesdk/ios/filters-and-effects/create-custom-lut-filter-6e3f49/) - Create and apply custom LUT filters to achieve consistent, brand-aligned visual styles. --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Apply a Filter or Effect" description: "Programmatically or manually add effects to design elements to modify their visual style." 
platform: ios url: "https://img.ly/docs/cesdk/ios/filters-and-effects/apply-2764e4/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Filters and Effects](https://img.ly/docs/cesdk/ios/filters-and-effects-6f88ac/) > [Apply Filter or Effect](https://img.ly/docs/cesdk/ios/filters-and-effects/apply-2764e4/) --- ```swift file=@cesdk_swift_examples/engine-guides-using-effects/UsingEffects.swift reference-only import Foundation import IMGLYEngine @MainActor func usingEffects(engine: Engine) async throws { let scene = try engine.scene.create() let page = try engine.block.create(.page) try engine.block.setWidth(page, value: 800) try engine.block.setHeight(page, value: 600) try engine.block.appendChild(to: scene, child: page) try await engine.scene.zoom(to: page, paddingLeft: 40, paddingTop: 40, paddingRight: 40, paddingBottom: 40) let block = try engine.block.create(.graphic) try engine.block.setShape(block, shape: engine.block.createShape(.rect)) try engine.block.setPositionX(block, value: 100) try engine.block.setPositionY(block, value: 50) try engine.block.setWidth(block, value: 300) try engine.block.setHeight(block, value: 300) try engine.block.appendChild(to: page, child: block) let fill = try engine.block.createFill(.image) try engine.block.setString( fill, property: "fill/image/imageFileURI", value: "https://img.ly/static/ubq_samples/sample_1.jpg", ) try engine.block.setFill(block, fill: fill) try engine.block.supportsEffects(scene) // Returns false try engine.block.supportsEffects(block) // Returns true let pixelize = try engine.block.createEffect(.pixelize) let adjustments = try engine.block.createEffect(.adjustments) try engine.block.appendEffect(block, effectID: pixelize) try 
engine.block.insertEffect(block, effectID: adjustments, index: 0) // try engine.block.removeEffect(block, index: 0) // This will return [adjustments, pixelize] let effectsList = try engine.block.getEffects(block) let unusedEffect = try engine.block.createEffect(.halfTone) try engine.block.destroy(unusedEffect) let allPixelizeProperties = try engine.block.findAllProperties(pixelize) let allAdjustmentProperties = try engine.block.findAllProperties(adjustments) try engine.block.setInt(pixelize, property: "pixelize/horizontalPixelSize", value: 20) try engine.block.setFloat(adjustments, property: "effect/adjustments/brightness", value: 0.2) try engine.block.setEffectEnabled(effectID: pixelize, enabled: false) try engine.block.setEffectEnabled(effectID: pixelize, enabled: !engine.block.isEffectEnabled(effectID: pixelize)) } ``` Some [design blocks](https://img.ly/docs/cesdk/ios/concepts/blocks-90241e/) in CE.SDK, such as pages and graphic blocks, allow you to add effects to them. An effect can modify the visual output of a block's [fill](https://img.ly/docs/cesdk/ios/fills-402ddc/). CreativeEditor SDK supports many different types of effects, such as adjustments, LUT filters, pixelization, glow, vignette, and more. Like blocks, each effect instance has a numeric id that can be used to query and [modify its properties](https://img.ly/docs/cesdk/ios/concepts/blocks-90241e/). We create a scene containing a graphic block with an image fill and want to apply effects to this image.
```swift highlight-setup let scene = try engine.scene.create() let page = try engine.block.create(.page) try engine.block.setWidth(page, value: 800) try engine.block.setHeight(page, value: 600) try engine.block.appendChild(to: scene, child: page) try await engine.scene.zoom(to: page, paddingLeft: 40, paddingTop: 40, paddingRight: 40, paddingBottom: 40) let block = try engine.block.create(.graphic) try engine.block.setShape(block, shape: engine.block.createShape(.rect)) try engine.block.setPositionX(block, value: 100) try engine.block.setPositionY(block, value: 50) try engine.block.setWidth(block, value: 300) try engine.block.setHeight(block, value: 300) try engine.block.appendChild(to: page, child: block) let fill = try engine.block.createFill(.image) try engine.block.setString( fill, property: "fill/image/imageFileURI", value: "https://img.ly/static/ubq_samples/sample_1.jpg", ) try engine.block.setFill(block, fill: fill) ``` ## Accessing Effects Not all types of design blocks support effects, so you should always first call the `func supportsEffects(_ id: DesignBlockID) throws -> Bool` API before accessing any of the following APIs. ```swift highlight-supportsEffects try engine.block.supportsEffects(scene) // Returns false try engine.block.supportsEffects(block) // Returns true ``` ## Creating an Effect In order to add effects to our block, we first have to create a new effect instance, which we can do by calling `func createEffect(_ type: EffectType) throws -> DesignBlockID` and passing it the type of effect that we want. In this example, we create a pixelization and an adjustment effect. Please refer to [API Docs](https://img.ly/docs/cesdk/ios/concepts/blocks-90241e/) for a complete list of supported effect types. 
```swift highlight-createEffect let pixelize = try engine.block.createEffect(.pixelize) let adjustments = try engine.block.createEffect(.adjustments) ``` ## Adding Effects Now we have two effects, but the output of our scene looks exactly the same as before. That is because we still need to append these effects to the graphic design block's list of effects, which we can do by calling `func appendEffect(_ id: DesignBlockID, effectID: DesignBlockID) throws`. We can also insert or remove effects from specific indices of a block's effect list using the `func insertEffect(_ id: DesignBlockID, effectID: DesignBlockID, index: Int) throws` and `func removeEffect(_ id: DesignBlockID, index: Int) throws` APIs. Effects will be applied to the block in the order they are placed in the block's effects list. If the same effect appears multiple times in the list, it will also be applied multiple times. In our case, the adjustments effect will be applied to the image first, and the result is then pixelated. ```swift highlight-addEffect try engine.block.appendEffect(block, effectID: pixelize) try engine.block.insertEffect(block, effectID: adjustments, index: 0) // try engine.block.removeEffect(block, index: 0) ``` ## Querying Effects Use the `func getEffects(_ id: DesignBlockID) throws -> [DesignBlockID]` API to query the ordered list of effect ids of a block. ```swift highlight-getEffects // This will return [adjustments, pixelize] let effectsList = try engine.block.getEffects(block) ``` ## Destroying Effects If we created an effect that we don't want anymore, we have to make sure to destroy it using the same `func destroy(_ id: DesignBlockID) throws` API that we also call for design blocks. Effects that are attached to a design block will be automatically destroyed when the design block is destroyed.
```swift highlight-destroyEffect let unusedEffect = try engine.block.createEffect(.halfTone) try engine.block.destroy(unusedEffect) ``` ## Effect Properties Just like design blocks, effects of different types have different properties that you can query and modify via the API. Use `func findAllProperties(_ id: DesignBlockID) throws -> [String]` to get a list of all properties of a given effect. Please refer to the [API Docs](https://img.ly/docs/cesdk/ios/concepts/blocks-90241e/) for a complete list of all available properties for each type of effect. ```swift highlight-getProperties let allPixelizeProperties = try engine.block.findAllProperties(pixelize) let allAdjustmentProperties = try engine.block.findAllProperties(adjustments) ``` Once we know the property keys of an effect, we can use the same APIs as for design blocks to [modify those properties](https://img.ly/docs/cesdk/ios/concepts/blocks-90241e/). For example, our adjustments effect will not modify the output until we change at least one of its adjustment properties, such as the brightness, to a non-zero value. ```swift highlight-modifyProperties try engine.block.setInt(pixelize, property: "pixelize/horizontalPixelSize", value: 20) try engine.block.setFloat(adjustments, property: "effect/adjustments/brightness", value: 0.2) ``` ## Disabling Effects You can temporarily disable and enable individual effects using the `func setEffectEnabled(effectID: DesignBlockID, enabled: Bool) throws` API. When the effects are applied to a block, all disabled effects are simply skipped. Whether an effect is currently enabled or disabled can be queried with `func isEffectEnabled(effectID: DesignBlockID) throws -> Bool`.
```swift highlight-disableEffect try engine.block.setEffectEnabled(effectID: pixelize, enabled: false) try engine.block.setEffectEnabled(effectID: pixelize, enabled: !engine.block.isEffectEnabled(effectID: pixelize)) ``` ## Full Code Here's the full code: ```swift import Foundation import IMGLYEngine @MainActor func usingEffects(engine: Engine) async throws { let scene = try engine.scene.create() let page = try engine.block.create(.page) try engine.block.setWidth(page, value: 800) try engine.block.setHeight(page, value: 600) try engine.block.appendChild(to: scene, child: page) try await engine.scene.zoom(to: page, paddingLeft: 40, paddingTop: 40, paddingRight: 40, paddingBottom: 40) let block = try engine.block.create(.graphic) try engine.block.setShape(block, shape: engine.block.createShape(.rect)) try engine.block.setPositionX(block, value: 100) try engine.block.setPositionY(block, value: 50) try engine.block.setWidth(block, value: 300) try engine.block.setHeight(block, value: 300) try engine.block.appendChild(to: page, child: block) let fill = try engine.block.createFill(.image) try engine.block.setString( fill, property: "fill/image/imageFileURI", value: "https://img.ly/static/ubq_samples/sample_1.jpg" ) try engine.block.setFill(block, fill: fill) try engine.block.supportsEffects(scene) // Returns false try engine.block.supportsEffects(block) // Returns true let pixelize = try engine.block.createEffect(.pixelize) let adjustments = try engine.block.createEffect(.adjustments) try engine.block.appendEffect(block, effectID: pixelize) try engine.block.insertEffect(block, effectID: adjustments, index: 0) // try engine.block.removeEffect(rect, index: 0) // This will return [adjustments, pixelize] let effectsList = try engine.block.getEffects(block) let unusedEffect = try engine.block.createEffect(.halfTone) try engine.block.destroy(unusedEffect) let allPixelizeProperties = try engine.block.findAllProperties(pixelize) let allAdjustmentProperties = try 
engine.block.findAllProperties(adjustments) try engine.block.setInt(pixelize, property: "pixelize/horizontalPixelSize", value: 20) try engine.block.setFloat(adjustments, property: "effect/adjustments/brightness", value: 0.2) try engine.block.setEffectEnabled(effectID: pixelize, enabled: false) try engine.block.setEffectEnabled(effectID: pixelize, enabled: !engine.block.isEffectEnabled(effectID: pixelize)) } ``` --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Blur" description: "Apply blur effects to soften backgrounds or create depth and focus in your designs." platform: ios url: "https://img.ly/docs/cesdk/ios/filters-and-effects/blur-71d642/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Filters and Effects](https://img.ly/docs/cesdk/ios/filters-and-effects-6f88ac/) > [Apply Blur](https://img.ly/docs/cesdk/ios/filters-and-effects/blur-71d642/) --- Blur is a core visual effect in CE.SDK, useful for softening backgrounds, creating depth, or drawing attention toward specific elements. The SDK offers four blur types, each with its own behavior and adjustable properties. In this guide, you’ll learn where to find blur controls in the prebuilt UI and how to create and configure blur effects programmatically using Swift. ## What You’ll Learn - How to locate the Blur tool in the prebuilt UI. 
- How to create and attach blur effects to blocks in Swift. - How each blur type works and what its properties change. - How to update blur parameters programmatically. - How to remove or replace blur effects. ## When You’ll Use It Use blur when you want to: - soften an image - simulate depth of field - reduce distraction behind text - create stylistic transitions in image and video templates. ## Using Blur in the Prebuilt Editor ### Locate the Blur Tool The blur button appears in the Inspector when the user selects a block that supports blur. ![Blur picker button in the inspector](assets/blur-ios-button-163.png) Upon selection, controls appear to choose one of the blurs or to turn off the blur effect. ![Blur selection bar showing options](assets/blur-ios-controls-163.png) ### Access Property Controls After applying any of the blur effects, the editor reveals controls specific to the selected blur type. Tapping the control opens the options pane. ![Selected blur control showing options button](assets/blur-ios-options-button.png) The options pane is specific to the active blur. Sliders update blur radius, intensity, position, and other parameters. ![Blur parameter for Gaussian blur](assets/blur-ios-gaussian-options-163.png) Find descriptions of the differences between blurs and the impact of specific parameters below. ## Programmatic Blur Blur effects apply to blocks. Create them similarly to fills or shapes: 1. Access the `block` API of the engine. 2. Create the blur with `createBlur`. 3. Attach it to an existing block with `setBlur`. ```swift let uniformBlur = try engine.block.createBlur(.uniform) ``` The UI refers to the `uniform` type as a **Gaussian** blur. The other types have the same names as the UI controls: - `radial` - `linear` - `mirrored` Unlike other effects, **a block can only have one blur**, so instead of `appendEffect`, use `setBlur`.
```swift try engine.block.setBlur(imageBlockId, blurID: uniformBlur) ``` Determine whether a block supports blur using the `supportsBlur` function. ```swift let doesSupportBlur = try engine.block.supportsBlur(imageBlockId) ``` Once applied, you can use `setBlurEnabled` to toggle the blur effect on and off for a specific block. ```swift try engine.block.setBlurEnabled(imageBlockId, enabled: false) // Turn off blur ``` Its companion function, `isBlurEnabled`, returns the blur effect’s current state. ```swift let blurIsEnabled = try engine.block.isBlurEnabled(imageBlockId) ``` To remove a blur entirely, call `destroy`. This: - Permanently removes the blur block from the engine. - Detaches it from **every block that was using it**. Use this only when the blur should no longer exist anywhere. Otherwise, prefer `setBlurEnabled`. ```swift let blurToDestroy = try engine.block.getBlur(imageBlockId) try engine.block.destroy(blurToDestroy) ``` ## Blur Types and Their Properties Each blur type has a distinct gradient shape. Properties specific to each blur type control aspects such as the intensity and focus area of the image. All blurs have a string `type` property that identifies the blur type. The canvas updates immediately when blur properties change. > **Note:** In the examples below, coordinates (`x`, `y`, `x1`, `y1`, etc.) are relative values in the range `0.0–1.0`, where `0,0` is the top-left of the block and `1,1` is the bottom-right. ### Uniform/Gaussian Blur Applies even blurring across the entire block. Increasing the intensity makes the whole image softer. ![Uniform blur applied to an image with default property values](assets/blur-ios-gaussian-163.png) **Properties**: - `blur/uniform/intensity` blur strength. Higher values increase softness. - `type` returns a value of `//ly.img.ubq/blur/uniform` **Example**: ```swift try engine.block.setFloat(uniformBlur, property: "blur/uniform/intensity", value: 0.6) ``` ### Linear Blur Creates a directional blur using two control points.
Moving the control points rotates the blur direction and shifts where the transition occurs. ![Linear blur applied to an image with default property values](assets/blur-ios-linear-163.png) **Properties**: - `blur/linear/blurRadius` blur strength. - `blur/linear/x1`, `y1` starting point of gradient. - `blur/linear/x2`, `y2` ending point of gradient. - `type` returns a value of `//ly.img.ubq/blur/linear` **Example**: ```swift try engine.block.setFloat(blurID, property: "blur/linear/blurRadius", value: 20) try engine.block.setFloat(blurID, property: "blur/linear/x1", value: 0.1) try engine.block.setFloat(blurID, property: "blur/linear/x2", value: 0.9) ``` ### Radial Blur The blur radiates outward from a center point or circle of sharpness. ![Radial blur applied to an image with default property values](assets/blur-ios-radial-163.png) **Properties**: - `blur/radial/blurRadius` blur strength. - `blur/radial/gradientRadius` how quickly blur fades from center outward. - `blur/radial/radius` size of the sharp inner focus region. - `blur/radial/x`, `y` the blur’s center point. - `type` returns a value of `//ly.img.ubq/blur/radial` **Example**: ```swift try engine.block.setFloat(blurID, property: "blur/radial/blurRadius", value: 25) try engine.block.setFloat(blurID, property: "blur/radial/x", value: 0.5) try engine.block.setFloat(blurID, property: "blur/radial/y", value: 0.4) ``` ### Mirrored Blur A dual, symmetric gradient. Use this for tilt‑shift effects. ![Mirrored blur applied to an image with default property values](assets/blur-ios-mirrored-163.png) **Properties**: - `blur/mirrored/blurRadius` blur strength. - `blur/mirrored/gradientSize` width of the transition zones. - `blur/mirrored/size` width of the clear, unblurred band. - `blur/mirrored/x1`, `y1`, `x2`, `y2` define the two gradient axes. Change these to rotate or shift the plane. 
- `type` returns a value of `//ly.img.ubq/blur/mirrored` **Example**: ```swift try engine.block.setFloat(blurID, property: "blur/mirrored/size", value: 0.3) ``` ## Troubleshooting Here are some common issues when working with the blur effect. | Symptom | Cause | Solution | |--------|--------|----------| | No blur appears | Block doesn’t support blur or blur isn’t enabled. | Check values of `isBlurEnabled()` and `supportsBlur()` | | Property changes do nothing | Wrong key path | Verify exact property names. | | Blur appears clipped | Block is clipped | Disable clipping or resize block | | Linear/radial blur looks off-center | Incorrect coordinate values | Verify x/y ranges | ## Next Steps Now that you have an idea about working with blur effects, here are some other guides you may find useful. - [Fills & Effects Overview](https://img.ly/docs/cesdk/ios/fills/overview-3895ee/) - [Blend Modes](https://img.ly/docs/cesdk/ios/create-composition/blend-modes-ad3519/) --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Chroma Key (Green Screen) in iOS, macOS & Catalyst (SwiftUI)" description: "Use CE.SDK's green/blue screen keyer to replace backgrounds, tune edges & spill, and composite subjects over virtual scenes." platform: ios url: "https://img.ly/docs/cesdk/ios/filters-and-effects/chroma-key-green-screen-1e3e99/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). 
**Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Filters and Effects](https://img.ly/docs/cesdk/ios/filters-and-effects-6f88ac/) > [Apply Chroma Key (Green Screen)](https://img.ly/docs/cesdk/ios/filters-and-effects/chroma-key-green-screen-1e3e99/) --- Chroma keying removes a uniform background color (often green or blue) from a video or image so you can composite the foreground over a new scene. In CE.SDK, chroma keying is an **effect** you attach to an image or video block, with parameters for **color selection**, **similarity threshold**, **edge smoothing**, and **spill suppression**. This guide walks you through applying the effect in SwiftUI or in one of the prebuilt editors, dialing it in for clean edges, and compositing the keyed result with a replacement background. ## What You’ll Learn - How to add the **Green Screen** effect to **image** and **video** blocks. - How to set the key color (green by default, but any color works). - How to tune **colorMatch** (similarity), **smoothness** (edge falloff), and **spill** (desaturating color cast). - How to layer a new background behind the keyed subject. - How to persist, export, and protect templates that include chroma key. ## When to Use It Use chroma key when your source contains a uniform backdrop (green, blue, or a solid brand color) and you want to: - Replace the background with a **virtual set**, branded plate, or blurred depth backdrop. - Place talent over **slides** or **product footage**. - Standardize a team’s talking‑head videos with consistent backgrounds. - Composite when using asset formats such as MP4, H.264, or JPEG that don't support transparency. Avoid chroma key if the subject’s clothing, props, or lighting contains the same hue as your key color, or if the background is highly textured. > **Chroma Key vs. Background Removal:** The `effect/green_screen` shader operates on color similarity directly on the GPU.
> Unlike AI-based background removal such as Vision’s `VNGenerateForegroundInstanceMaskRequest`, chroma keying provides predictable, real‑time control for studio footage where lighting and backdrop color are controlled.

## Apply the Green Screen Effect In a Prebuilt Editor

Chroma key is one of the standard effects available for images and video clips in the prebuilt editors, such as the Design Editor and the Video Editor. Use it as follows:

1. Select the image or video clip you want to key.
2. Look for the `Effects` button in the inspector and tap it.

![Location of the Effect button in the Inspector](assets/chroma-key-ios-159-0.png)

Scroll through the effects until you find "Green Screen". Once you tap it, the effect is applied immediately.

![Arrow pointing to the Green Screen effect button](assets/chroma-key-ios-159-1.png)

An options indicator appears for the effect. Tap it to show the options.

![Green screen effect button showing options indicator](assets/chroma-key-ios-159-2.png)

Use the sliders and the color wheel to change the settings for:

- key color
- color match
- smoothness
- spill

![Effect controls for key color, color match, smoothness, and spill](assets/chroma-key-ios-159-3.png)

The "Tuning the Effect" section below explains each of these in detail.

## Apply the Green Screen Effect In Code

CE.SDK exposes chroma key as the `.greenScreen` effect type with the following key properties:

- `effect/green_screen/fromColor`: the color to key out (default green).
- `effect/green_screen/colorMatch`: similarity threshold \[0…1].
- `effect/green_screen/smoothness`: edge falloff \[0…1].
- `effect/green_screen/spill`: desaturates remaining color spill \[0…1].

Not all platforms expose a typed enum for every effect. The **string property keys** shown here are fully supported and future‑proof.
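If you're unsure which key paths an effect exposes on your SDK version, you can enumerate them at runtime. This is a minimal sketch; it assumes the engine's `findAllProperties` API, which returns every property key path on a block:

```swift
import IMGLYEngine

// Minimal sketch: create the keyer and print its adjustable properties.
@MainActor func listGreenScreenProperties(engine: Engine) throws {
  let keyer = try engine.block.createEffect(.greenScreen)
  // findAllProperties returns every property key path on the block,
  // including the effect/green_screen/* parameters listed above.
  for property in try engine.block.findAllProperties(keyer)
    where property.hasPrefix("effect/green_screen/") {
    print(property)
  }
  try engine.block.destroy(keyer) // clean up the unattached effect
}
```

Printing the discovered keys once during development is an easy way to confirm spelling before hard-coding property strings.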
### Key an Image Block

```swift
@MainActor func applyGreenScreenToImage(engine: Engine, imageBlock: DesignBlockID) throws {
  // 1) Create the effect and attach it to the block
  let keyer = try engine.block.createEffect(.greenScreen)
  try engine.block.appendEffect(imageBlock, effectID: keyer)

  // 2) Choose the key color (here: pure green); any color works
  try engine.block.setColor(
    keyer,
    property: "effect/green_screen/fromColor",
    color: .rgba(r: 0.0, g: 1.0, b: 0.0, a: 1.0)
  )

  // 3) Tune similarity, smoothness, and spill
  try engine.block.setFloat(keyer, property: "effect/green_screen/colorMatch", value: 0.40)
  try engine.block.setFloat(keyer, property: "effect/green_screen/smoothness", value: 0.08)
  try engine.block.setFloat(keyer, property: "effect/green_screen/spill", value: 0.15)
}
```

### Key a Video Block

Video blocks use video fills instead of image fills, but the rest of the workflow is identical.

```swift
@MainActor func applyGreenScreenToVideo(engine: Engine, videoBlock: DesignBlockID) throws {
  let keyer = try engine.block.createEffect(.greenScreen)
  try engine.block.appendEffect(videoBlock, effectID: keyer)

  // Blue screen example
  try engine.block.setColor(
    keyer,
    property: "effect/green_screen/fromColor",
    color: .rgba(r: 0.0, g: 0.25, b: 1.0, a: 1.0)
  )
  try engine.block.setFloat(keyer, property: "effect/green_screen/colorMatch", value: 0.35)
  try engine.block.setFloat(keyer, property: "effect/green_screen/smoothness", value: 0.10)
  try engine.block.setFloat(keyer, property: "effect/green_screen/spill", value: 0.25)
}
```

Order matters: if you add other effects, like color adjustments, place the **keyer first** in the stack so later effects operate on the premultiplied result.

### Pick the Key Color from the Image

Hard‑coding `fromColor` works for controlled shoots. In general, sample the background color under the user’s tap.
```swift
struct ColorPickerOverlay: View {
  let onPick: (Color) -> Void

  var body: some View {
    Rectangle().fill(.clear)
      .gesture(DragGesture(minimumDistance: 0).onEnded { value in
        // Map screen point -> scene pixel, then sample via your image source.
        // Convert sampled sRGBA to engine Color.rgba and call onPick.
      })
  }
}
```

> **Note:** `ColorPickerOverlay` is a conceptual example. CE.SDK doesn’t provide a built-in API to read a pixel at a screen coordinate. In your app, map the tap location to the image/video buffer you control and sample the pixel using APIs such as `CGImage` or `CIImage`. Convert the sampled sRGBA to `Color.rgba` and set `effect/green_screen/fromColor`. If you embed the `DesignEditor`, keep an app-level copy of the media to sample from, since the editor's preview is GPU-rendered.

Tie the sampled color back to the effect:

```swift
func setKeyColor(engine: Engine, keyer: DesignBlockID, rgba: (Double, Double, Double)) throws {
  try engine.block.setColor(
    keyer,
    property: "effect/green_screen/fromColor",
    color: .rgba(r: rgba.0, g: rgba.1, b: rgba.2, a: 1.0)
  )
}
```

For polished UIs, show a zoomed loupe and a live matte preview as the user drags.

### Composite over a Replacement Background

A keyed subject is transparent where the background was, so you **layer a background block beneath** the keyed block.
```swift
@MainActor func addBackgroundBehind(engine: Engine, subject: DesignBlockID, page: DesignBlockID, imageURL: URL) throws {
  let bg = try engine.block.create(.graphic)
  let shape = try engine.block.createShape(.rect)
  try engine.block.setShape(bg, shape: shape)

  let fill = try engine.block.createFill(.image)
  try engine.block.setString(fill, property: "fill/image/imageFileURI", value: imageURL.absoluteString)
  try engine.block.setFill(bg, fill: fill)

  // Place background **behind** the keyed subject, on the same page
  try engine.block.insertChild(into: page, child: bg, index: 0)
  // Make background full‑bleed on the page
  try engine.block.fillParent(bg)
  try engine.block.sendToBack(bg)
}
```

For video, create a video fill instead of an image fill and align durations in your export.

## Tuning the Effect

The three parameters for tuning chroma key composition are:

- color match
- smoothness
- spill

Knowing what each one affects helps you decide how to adjust when the composition doesn’t look right. The examples below all show how these values can change this chroma key image.

![Example composited image.](assets/chroma-key-ios-159-4.png)

> **Recommended Starting Values:**
>
> | Background | colorMatch | smoothness | spill |
> |-------------|-------------|------------|--------|
> | **Green Screen** | 0.35–0.45 | 0.08–0.12 | 0.15–0.25 |
> | **Blue Screen** | 0.30–0.40 | 0.10–0.15 | 0.25–0.35 |
> | **Custom Color** | 0.40–0.50 | 0.08–0.12 | 0.10–0.20 |

Tune `colorMatch` first for coverage, then refine edge softness with `smoothness`, and finally correct color tint with `spill`.

### Color Match

Color Match determines how close a pixel’s color has to be to the key color to be considered *background*. When the value is low, only exact matches are removed. When the value is high, a larger range of colors similar to the key color gets removed.

What to watch for when the value is wrong:

- Too low: you may see patches of the green screen still visible around edges, especially if lighting is uneven or shadows are present.
- Too high: you risk keying out part of the subject (hair strands, clothing edges, reflective items), creating holes or transparency because the effect is too aggressive.

![Color match range examples.](assets/color-match-range-ios.jpg)

The preceding image shows color match values of 0.0, 0.5 and 1.0.

### Smoothness

Smoothness controls how gradually or sharply the transition between keyed and un-keyed areas occurs, that is, how soft the matte edges are. A low value produces a sharp transition; a high value produces a softer one.

What to watch for when the value is wrong:

- Too low: harsh edges, visible fringes around hair, or "hard cutouts" that look unnatural.
- Too high: a halo effect, or the subject blends into the background.

![Smoothness range examples.](assets/smoothness-range-ios.jpg)

The preceding image shows smoothness values of 0.0, 0.5 and 1.0.

### Spill

Spill suppresses unwanted "color spill": the key color reflecting or bleeding onto the subject. This is especially noticeable around edges, hair, and shiny objects.

What to watch for when the value is wrong:

- Too low: you may see green reflection on the subject (especially edges/hair/shoulders) that doesn’t get cleaned up, making it look unnatural or floating.
- Too high: the subject’s actual color edges are desaturated, making hair or detail look gray, faded, or too soft.

![Spill range examples.](assets/spill-range-ios.jpg)

The preceding image shows spill values of 0.0, 0.5 and 1.0.

## Lighting & Capture Tips

- Keep your backdrop evenly lit and 1–2 stops brighter than your subject.
- Avoid shadows or wrinkles; uneven color creates transparency artifacts.
- Separate your subject from the background by at least 1 m to reduce spill.

## Template & Scope Considerations

If you ship templates that include a keyer, you might want to lock down parameters to protect quality:

- Use **Scopes/Permissions** to limit which effect properties the end‑user can change.
- Store platform‑tested defaults (match, smoothness, spill) in the template.
- Provide preset chips like **“Green Screen”**, **“Blue Screen”**, **“Brand Cyan”** to switch `fromColor` quickly.

## Performance and Rendering Pipeline

CE.SDK runs chroma keying directly on the graphics card for smooth, real-time results. Place the keyer near the start of your effect list so that later effects, like color or tone adjustments, apply correctly to the transparent areas. To keep playback fast, avoid heavy effects such as blur or LUTs before the keyer.

## Export Tips

- Prefer **ProRes 4444** (or other alpha‑carrying formats) when exporting an intermediate keyed asset to reuse elsewhere.
- For final composites, export with the background enabled and a standard delivery codec/format.

## Testing Checklist

- Verify background color is uniform and well lit.
- Check for reflective surfaces that might cause spill.
- Test both **720p** and **4K** previews to compare performance.
- Try different wardrobe colors. Avoid those close to the key color.
- Examine edges on hair or fine detail under motion.
- Validate output formats (e.g., MP4 with solid background vs. ProRes with alpha).

## Troubleshooting

**❌ Holes in the matte (background not fully removed)**:

- Increase `colorMatch` slightly. If edges get harsh, bump `smoothness` too.

**❌ Foreground punched out (you lose subject detail)**:

- Lower `colorMatch` until detail returns; then reduce `spill` if the subject appears tinted.

**❌ Green/blue color cast on edges**:

- Raise `spill` (try 0.2–0.4). If it looks gray, back it down.

**❌ Jagged edges**:

- Increase `smoothness` in small steps (0.05–0.15).
- Consider adding a light `effect/blur` **after** the keyer for video.

**❌ Uneven backgrounds / shadows**:

- Sample a darker patch of the backdrop or increase `colorMatch` and compensate with `spill`.

**❌ Nothing turns transparent:**

- Verify the effect is attached to the **right block** and not to the page.
- Check `fromColor` is close to the actual backdrop hue (sample it!).
- Ensure your block type supports effects (graphic and video blocks are supported).

**❌ Performance drops with 4K video**:

- Avoid stacking extra heavy effects **before** the keyer.
- Render proxies or downscale the preview while tuning; export at full res.

**❌ Skin tones look dull**:

- Reduce `spill` and re‑tune `colorMatch`.

**❌ Hair/fur looks crunchy:**

- Raise `smoothness` incrementally (and consider light post‑blur).

## Next Steps

With the core of chroma key compositing mastered, here are some other topics that may interest you:

- Learn about other [Filters & Effects](https://img.ly/docs/cesdk/ios/filters-and-effects/overview-299b15/) and try combining the keyer with adjustments for color matching.

---

## More Resources

- **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation
- **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs)
- **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples
- **[Support](mailto:support@img.ly)** - Contact IMG.LY support

---

---
title: "Create a Custom LUT Filter"
description: "Create and apply custom LUT filters to achieve consistent, brand-aligned visual styles."
platform: ios
url: "https://img.ly/docs/cesdk/ios/filters-and-effects/create-custom-lut-filter-6e3f49/"
---

> This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt).
**Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Filters and Effects](https://img.ly/docs/cesdk/ios/filters-and-effects-6f88ac/) > [Apply Custom LUT Filter](https://img.ly/docs/cesdk/ios/filters-and-effects/create-custom-lut-filter-6e3f49/)

---

```swift file=@cesdk_swift_examples/engine-guides-custom-lut-filter/CustomLUTFilter.swift reference-only
import Foundation
import IMGLYEngine

@MainActor
func customLutFilter(engine: Engine) async throws {
  let scene = try engine.scene.create()

  let page = try engine.block.create(.page)
  try engine.block.setWidth(page, value: 100)
  try engine.block.setHeight(page, value: 100)
  try engine.block.appendChild(to: scene, child: page)
  try await engine.scene.zoom(to: scene, paddingLeft: 40.0, paddingTop: 40.0, paddingRight: 40.0, paddingBottom: 40.0)

  let rect = try engine.block.create(.graphic)
  try engine.block.setShape(rect, shape: engine.block.createShape(.rect))
  try engine.block.setWidth(rect, value: 100)
  try engine.block.setHeight(rect, value: 100)
  try engine.block.appendChild(to: page, child: rect)

  let imageFill = try engine.block.createFill(.image)
  try engine.block.setString(
    imageFill,
    property: "fill/image/imageFileURI",
    value: "https://img.ly/static/ubq_samples/sample_1.jpg"
  )

  let lutFilter = try engine.block.createEffect(.lutFilter)
  try engine.block.setBool(lutFilter, property: "effect/enabled", value: true)
  try engine.block.setFloat(lutFilter, property: "effect/lut_filter/intensity", value: 0.9)
  try engine.block.setString(
    lutFilter,
    property: "effect/lut_filter/lutFileURI",
    // swiftlint:disable:next line_length
    value: "https://cdn.img.ly/packages/imgly/cesdk-js/1.68.0/assets/extensions/ly.img.cesdk.filters.lut/LUTs/imgly_lut_ad1920_5_5_128.png"
  )
  try engine.block.setInt(lutFilter, property: "effect/lut_filter/verticalTileCount", value: 5)
  try engine.block.setInt(lutFilter, property: "effect/lut_filter/horizontalTileCount", value: 5)
  try engine.block.appendEffect(rect, effectID: lutFilter)
  try engine.block.setFill(rect, fill: imageFill)
}
```

We use a technology called Lookup Tables (LUTs) to add new filters to our SDK. The main idea is that colors respond to operations carried out during the filtering process. We 'record' that very response by applying the filter to the identity image shown below.

Identity LUT

The resulting image can be used within our SDK, and the recorded changes can then be applied to any image by looking up the transformed colors in the modified LUT. If you want to create a new filter, you'll need to [download](content-assets/6e3f49/imgly_lut_ad1920_5_5_128.png) the identity LUT shown above, load it into an image editing application of your choice, apply your operations, save it, and add it to your app.

> **WARNING:** As any compression artifacts in the edited LUT could lead to distorted results when applying the filter, you need to save your LUT as a PNG file.

## Using Custom Filters

In this example, we will use a LUT filter file hosted on a CDN. We will create a scene, apply the LUT filter we provide to its image, and configure the necessary settings based on the file.

LUT file we will use:

Color grading LUT showcasing a grid of color variations used for applying a specific visual style to images.

## Load Scene

After the setup, we create a new scene. Within this scene, we create a page, set its dimensions, and append it to the scene. Lastly, we adjust the zoom level to properly fit the page into the view.

```swift highlight-load-scene
let page = try engine.block.create(.page)
try engine.block.setWidth(page, value: 100)
try engine.block.setHeight(page, value: 100)
try engine.block.appendChild(to: scene, child: page)
try await engine.scene.zoom(to: scene, paddingLeft: 40.0, paddingTop: 40.0, paddingRight: 40.0, paddingBottom: 40.0)
```

## Create Rectangle

Next, we create a rectangle with defined dimensions and append it to the page.
We will apply our LUT filter to this rectangle.

```swift highlight-create-rect
let rect = try engine.block.create(.graphic)
try engine.block.setShape(rect, shape: engine.block.createShape(.rect))
try engine.block.setWidth(rect, value: 100)
try engine.block.setHeight(rect, value: 100)
try engine.block.appendChild(to: page, child: rect)
```

## Load Image

After creating the rectangle, we create an image fill with a specified URL. This loads the image as a fill for the rectangle to which we will apply the LUT filter.

```swift highlight-create-image-fill
let imageFill = try engine.block.createFill(.image)
try engine.block.setString(
  imageFill,
  property: "fill/image/imageFileURI",
  value: "https://img.ly/static/ubq_samples/sample_1.jpg"
)
```

## Create LUT Filter

Now, we create a Look-Up Table (LUT) filter effect. We enable the filter, set its intensity, and provide a URL for the LUT file. We also define the tile count for the filter. Once the LUT filter effect is applied to the rectangle, the image should appear black and white.

```swift highlight-create-lut-filter
let lutFilter = try engine.block.createEffect(.lutFilter)
try engine.block.setBool(lutFilter, property: "effect/enabled", value: true)
try engine.block.setFloat(lutFilter, property: "effect/lut_filter/intensity", value: 0.9)
try engine.block.setString(
  lutFilter,
  property: "effect/lut_filter/lutFileURI",
  // swiftlint:disable:next line_length
  value: "https://cdn.img.ly/packages/imgly/cesdk-js/1.68.0/assets/extensions/ly.img.cesdk.filters.lut/LUTs/imgly_lut_ad1920_5_5_128.png"
)
try engine.block.setInt(lutFilter, property: "effect/lut_filter/verticalTileCount", value: 5)
try engine.block.setInt(lutFilter, property: "effect/lut_filter/horizontalTileCount", value: 5)
```

## Apply LUT Filter

Finally, we apply the LUT filter effect to the rectangle and set the image fill on the rectangle; setting the image fill replaces the rectangle's default fill.
```swift highlight-apply-lut-filter
try engine.block.appendEffect(rect, effectID: lutFilter)
try engine.block.setFill(rect, fill: imageFill)
```

## Full Code

Here's the full code:

```swift
import Foundation
import IMGLYEngine

@MainActor
func customLutFilter(engine: Engine) async throws {
  let scene = try engine.scene.create()

  let page = try engine.block.create(.page)
  try engine.block.setWidth(page, value: 100)
  try engine.block.setHeight(page, value: 100)
  try engine.block.appendChild(to: scene, child: page)
  try await engine.scene.zoom(to: scene, paddingLeft: 40.0, paddingTop: 40.0, paddingRight: 40.0, paddingBottom: 40.0)

  let rect = try engine.block.create(.graphic)
  try engine.block.setShape(rect, shape: engine.block.createShape(.rect))
  try engine.block.setWidth(rect, value: 100)
  try engine.block.setHeight(rect, value: 100)
  try engine.block.appendChild(to: page, child: rect)

  let imageFill = try engine.block.createFill(.image)
  try engine.block.setString(
    imageFill,
    property: "fill/image/imageFileURI",
    value: "https://img.ly/static/ubq_samples/sample_1.jpg"
  )

  let lutFilter = try engine.block.createEffect(.lutFilter)
  try engine.block.setBool(lutFilter, property: "effect/enabled", value: true)
  try engine.block.setFloat(lutFilter, property: "effect/lut_filter/intensity", value: 0.9)
  try engine.block.setString(
    lutFilter,
    property: "effect/lut_filter/lutFileURI",
    // swiftlint:disable:next line_length
    value: "https://cdn.img.ly/packages/imgly/cesdk-js/$UBQ_VERSION$/assets/extensions/ly.img.cesdk.filters.lut/LUTs/imgly_lut_ad1920_5_5_128.png"
  )
  try engine.block.setInt(lutFilter, property: "effect/lut_filter/verticalTileCount", value: 5)
  try engine.block.setInt(lutFilter, property: "effect/lut_filter/horizontalTileCount", value: 5)
  try engine.block.appendEffect(rect, effectID: lutFilter)
  try engine.block.setFill(rect, fill: imageFill)
}
```

---

## More Resources

- **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation
- **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs)
- **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples
- **[Support](mailto:support@img.ly)** - Contact IMG.LY support

---

---
title: "iOS Filters & Effects SDK"
description: "Enhance visual elements with filters and effects such as blur, duotone, LUTs, and chroma keying."
platform: ios
url: "https://img.ly/docs/cesdk/ios/filters-and-effects/overview-299b15/"
---

> This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt).

**Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Filters and Effects](https://img.ly/docs/cesdk/ios/filters-and-effects-6f88ac/) > [Overview](https://img.ly/docs/cesdk/ios/filters-and-effects/overview-299b15/)

---

In CreativeEditor SDK (CE.SDK), *filters* and *effects* refer to visual modifications that enhance or transform the appearance of design elements. Filters typically adjust an element’s overall color or tone, while effects add specific visual treatments like blur, sharpness, or distortion. You can apply both filters and effects through the user interface or programmatically using the CE.SDK API. They allow you to refine the look of images, videos, and graphic elements in your designs with precision and flexibility.
[Explore Demos](https://img.ly/showcases/cesdk?tags=ios) [Get Started](https://img.ly/docs/cesdk/ios/get-started/overview-e18f40/)

---

## More Resources

- **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation
- **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs)
- **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples
- **[Support](mailto:support@img.ly)** - Contact IMG.LY support

---

---
title: "Existing Project with SwiftUI"
description: "Integrating CE.SDK into an existing iOS project using SwiftUI"
platform: ios
url: "https://img.ly/docs/cesdk/ios/get-started/ios/existing-project/swiftui-u8789a/"
---

> This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt).

**Navigation:** [Get Started](https://img.ly/docs/cesdk/ios/get-started/overview-e18f40/) > [Quickstart SwiftUI](https://img.ly/docs/cesdk/ios/get-started/ios/new-project/swiftui-s6567y/)

---

This guide walks you through integrating the CE.SDK into an existing SwiftUI app. Whether your app already uses a `NavigationView` or not, this tutorial shows how to add a new screen that launches the editor.
## Requirements

To work with the SDK, you'll need:

- A Mac running a recent version of [Xcode](https://developer.apple.com/xcode/)
- A valid **CE.SDK license key** ([Get a free trial](https://img.ly/forms/free-trial))
- Your application project

## Add the CE.SDK Swift package

**1.** With your Xcode project open, use the `File` menu to select `Add Package Dependencies...`

![Screen grab of the dependencies menu option](assets/dependencies-menu.png)

**2.** Copy the following package URL and paste it into the Search field at the top right of the modal:

https://github.com/imgly/IMGLYUI-swift

![Image of the packages modal screen](assets/add-package.png)

**3.** Once you see the IMGLY UI package information in the window, click `Add Package`

**4.** After downloading the package and its dependencies, you'll be presented with a list of libraries. For this demo, choose the `IMGLYUI` library to add to your project target. This adds all of the SDK's capabilities to your project so you can explore them after installation. For a production app, include only the libraries you need to conserve app space.

![Image of the list of packages](assets/add-package-to-target.png)

## Create a Screen to Show the CE.SDK

**1.** Use the `File` menu to select `New` and then `File from Template...`. Choose the `SwiftUI View` template and click `Next`

![SwiftUI view chooser screen](assets/file-template.png)

**2.** Give the file a name, something like `EditorView.swift`.

**3.** Open the `EditorView.swift` file in your project. Import the SDK by adding the following import just below the import for `SwiftUI`:

```swift
import IMGLYDesignEditor
```

**4.** Create a variable to hold the engine settings for the editor. Just below the line that creates the `EditorView` struct and before the declaration of the body, add this code, and update it with your actual license key and user ID.
```swift
let engineSettings = EngineSettings(
    license: "", // pass nil for evaluation mode with watermark
    userID: ""
)
```

**5.** In the `body` variable of `EditorView`, replace the existing code with a `DesignEditor` that uses the `engineSettings`. It needs to be wrapped in a navigation-capable container, which is necessary for toolbars and controls to display properly.

```swift
NavigationView {
    DesignEditor(engineSettings)
}
.navigationViewStyle(.stack)
```

> If your app already uses `NavigationView` higher up in the view hierarchy, you can omit the `NavigationView` wrapper in `EditorView`.

> Avoid using layout containers like `VStack`, `ZStack`, or `ScrollView` as the direct parent of `DesignEditor`. These do not provide navigation context and will result in missing toolbars or UI elements.

When you're done, your `EditorView.swift` should look similar to this:

```swift
import SwiftUI
import IMGLYDesignEditor

struct EditorView: View {
    let engineSettings = EngineSettings(
        license: "", // pass nil for evaluation mode with watermark
        userID: ""
    )

    var body: some View {
        NavigationView {
            DesignEditor(engineSettings)
        }
        .navigationViewStyle(.stack)
    }
}
```

## Add a Navigation Link or Button to Launch the Editor

If your app already uses `NavigationView`, you can simply add a `NavigationLink`. Here is an example adding the link to a list of options.
```swift
struct OptionsView: View {
    var body: some View {
        NavigationView {
            List {
                NavigationLink("Image Gallery") { Text("Coming Soon") }
                // Adding new link to editor
                NavigationLink("Open Editor") { EditorView() }
                NavigationLink("Settings") { Text("Settings Page") }
            }
            .navigationTitle("My Options")
        }
        .navigationViewStyle(.stack)
    }
}
```

If your app doesn't use `NavigationView`, you can present the editor view modally instead:

```swift
struct HomeView: View {
    @State private var showEditor = false // var to determine whether to show or hide the editor

    var body: some View {
        VStack(spacing: 20) {
            Button("Image Gallery") {}
            Button("Open Editor") {
                showEditor = true // toggle visibility state
            }
            Button("Settings") {}
        }
        .sheet(isPresented: $showEditor) { // present modal sheet with editor
            NavigationView {
                EditorView()
            }
            .navigationViewStyle(.stack)
        }
    }
}
```

> Choose the approach that best fits your app’s architecture. The editor must be shown inside a `NavigationView` to function correctly.

Now select an iOS device or a Simulator, and Build and Run.

![Simulator screens running the demo app](assets/hello-world.png)

## Using the Editor

When the editor view launches you'll see a blank page with a toolbar at the bottom. Use the different buttons to add assets to your creation. You can add pages to your creation using the button in the top toolbar. Use the share button to export your creation as a PDF.

## Troubleshooting

If you run into issues, here are some common problems and solutions. For additional help, [visit our support page](https://img.ly/company/contact-us).

#### Import Errors: 'EngineSettings' or 'DesignEditor' Not Found

![Import error message](assets/import-error.png)

Make sure that every Swift file that uses the editor has an `import` statement before the first line of code.
Like this one, for example: `import IMGLYDesignEditor`

#### Build Errors: Missing Modules

![Examples of build errors](assets/missing-package.png)

Make sure that you didn't accidentally choose the wrong target when you added the SDK to your project. You can check a target's imports on the `General` tab of the target settings.

![Screen shot showing the settings pane with the right imports](assets/check-import.png)

If the SDK is missing, you can add it using the `+` button at the bottom of the list.

#### License Key Error at Runtime

![Modal of the error for a missing license](assets/license-error.png)

Double-check that your `EngineSettings` has the exact license key with proper capitalization. If you don't have a license, [register for a free trial](https://img.ly/forms/free-trial) to get a demonstration license.

#### Missing Toolbars or Controls

![Simulator screen with no top controls](assets/missing-controls.png)

Ensure the `DesignEditor` is wrapped in a `NavigationView`, either in the view that creates it or in one of the previous views in the navigation chain. Do **not** place it directly in a `VStack` or as the root of the view hierarchy.

Don't do this:

```swift
var body: some View {
    DesignEditor(engineSettings)
}
```

Or this:

```swift
var body: some View {
    VStack {
        DesignEditor(engineSettings)
    }
}
```

Instead, wrap it in a `NavigationView` as shown above with the `.stack` style.

### What's Next?

Now that your integration is working, you can explore more advanced features like custom templates, user uploads, and localized UI. Check out our [documentation site](https://img.ly/docs) for more tutorials and guides.
---

## More Resources

- **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation
- **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs)
- **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples
- **[Support](mailto:support@img.ly)** - Contact IMG.LY support

---

---
title: "Existing Project with UIKit"
description: "Integrating CE.SDK into an existing iOS project using UIKit"
platform: ios
url: "https://img.ly/docs/cesdk/ios/get-started/ios/existing-project/uikit-v9890b/"
---

> This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt).

**Navigation:** [Get Started](https://img.ly/docs/cesdk/ios/get-started/overview-e18f40/) > [Quickstart UIKit](https://img.ly/docs/cesdk/ios/get-started/ios/new-project/uikit-t7678z/)

---

This guide walks you through integrating the CE.SDK into an existing UIKit app. In this example, we'll assume your app has a `UIButton` or `UITableView` row that, when selected, presents the editor full screen.
## Requirements

To work with the SDK, you'll need:

- A Mac running a recent version of [Xcode](https://developer.apple.com/xcode/)
- A valid **CE.SDK license key** ([Get a free trial](https://img.ly/forms/free-trial))
- Your application project

## Add the CE.SDK Swift package

**1.** With your project open, use the `File` menu to select `Add Package Dependencies...`

![Screen grab of the dependencies menu option](assets/dependencies-menu.png)

**2.** Copy the package URL and add it to the Search field at the top right of the modal:

https://github.com/imgly/IMGLYUI-swift

![Image of the packages modal screen](assets/add-package.png)

**3.** Once you see the IMGLY UI package in the window, click `Add Package`

**4.** After downloading the package and its dependencies, you'll be presented with a list of libraries. For this demo, choose the `IMGLYUI` library to add to your project target. This adds all of the SDK's capabilities to your project. In a production app, select only the libraries you need to conserve app space.

![Image of the list of packages](assets/add-package-to-target.png)

## Add Code to Use the CE.SDK

**1.** Open the view controller of your app that will launch the editor. This demo uses the "Design Editor" features of the SDK, so at the top of your view controller with the other `import` statements, add these lines:

```swift
import IMGLYDesignEditor
import SwiftUI
```

**2.** Create the control to launch the editor. It might be a button, table row, or tab. As long as the element can trigger an action, it should work.

**3.** Add code to the element's action to launch the editor.
```swift
// Create the engine configuration for the editor
let engineSettings = EngineSettings(license: "", // pass nil for evaluation mode with watermark
                                    userID: "")

// Initialize an editor with the engine settings, wrapped in a navigation-aware container
let editorVC = UIHostingController(rootView:
    NavigationView {
        DesignEditor(engineSettings)
    }
    .navigationViewStyle(.stack)
)

// Set the presentation style to full screen
editorVC.modalPresentationStyle = .fullScreen

// Present the editor from the current view controller
present(editorVC, animated: true)
```

> The editor must be presented in a navigation-aware container like `NavigationView`. This ensures proper toolbar rendering.

Now select an iOS device or a Simulator, and Build and Run.

## Using the Editor in Your App

Navigate to the view with the control you built to launch the editor and tap it. When the editor launches, you'll see a blank page with a toolbar at the bottom. Use the different buttons to add assets to your creation. You can add pages to your creation using the button in the top toolbar. Once you're happy with it, use the share button to export your creation as a PDF.

![Simulator screens running the demo app](assets/hello-world.png)

## Troubleshooting

Here are some issues you may encounter and their causes. If you need additional help, you can [visit our support page](https://img.ly/company/contact-us).

- Xcode does not know about `EngineSettings` or `DesignEditor`.

  ![Import error message](assets/import-error.png)

  Make sure that every Swift file that needs to interact with the editor has an `import` statement before the first line of code, for example: `import IMGLYDesignEditor`

- You get build errors about missing modules.

  ![Examples of build errors](assets/missing-package.png)

  Make sure that you didn't accidentally choose the wrong target when you imported the SDK to your project. You can check a target's imports on the `General` tab of the target settings.
  ![Screen shot showing the settings pane with the right imports](assets/check-import.png)

  If you don't see the SDK, you can add it using the `+` button at the bottom of the list.

- When you run the application, you get an error message about the license key:

  ![Modal of the error for a missing license](assets/license-error.png)

  Make sure that your `EngineSettings` has the exact license key with matching capitalization, and that your app bundle id matches the license key. If you don't have a license, you can [register for a free trial](https://img.ly/forms/free-trial) to get a demonstration license.

- When you run the demo app you don't get any errors, but you have a blank top bar and are missing controls.

  ![Simulator screen with no top controls](assets/missing-controls.png)

  Make sure that the code that instantiates your `DesignEditor` is wrapped in a navigation view, such as a `NavigationView`, inside the `UIHostingController`. Don't place it at the root of the view hierarchy.

  Don't do this:

  ```swift
  let editor = DesignEditor(engineSettings)
  let editorVC = UIHostingController(rootView: editor)
  present(editorVC, animated: true)
  ```

  Instead, wrap it in a `NavigationView` as shown above.

### What's Next?

Now that your integration is working, you can explore more advanced features like custom templates, user uploads, and localized UI. Check out our [documentation site](https://img.ly/docs) for more tutorials and guides.
---

## More Resources

- **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation
- **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs)
- **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples
- **[Support](mailto:support@img.ly)** - Contact IMG.LY support

---

---
title: "New Project with SwiftUI"
description: "Setting up CE.SDK in a new iOS project using SwiftUI"
platform: ios
url: "https://img.ly/docs/cesdk/ios/get-started/ios/new-project/swiftui-s6567y/"
---

> This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt).

**Navigation:** [Get Started](https://img.ly/docs/cesdk/ios/get-started/overview-e18f40/) > [Quickstart SwiftUI](https://img.ly/docs/cesdk/ios/get-started/ios/new-project/swiftui-s6567y/)

---

This guide walks you through integrating the CE.SDK into a brand-new SwiftUI project. You'll be able to add professional-grade video and photo editing to your app with just a few simple steps.

## Requirements

To work with the SDK, you'll need:

- A Mac running a recent version of [Xcode](https://developer.apple.com/xcode/)
- A valid **CE.SDK license key** ([Get a free trial](https://img.ly/forms/free-trial))

## Creating a new Xcode Project

**1.** Launch Xcode and use the `File` menu to select `New` -> `Project...`.

![Screen grab of the Xcode file menu.](assets/file-menu.png)

**2.** Make sure the `iOS` tab is selected, and highlight the `App` template. Click `Next`.

![Screen grab of the template chooser](assets/template-chooser-ios.png)

**3.** Enter a name for your app and an identifier for your organization. These will be combined to form the bundle identifier of your app. A team is not required to deploy your app to the simulator.
Then click `Next`.

![Screen grab of the naming screen](assets/naming-options.png)

When you want to run on a physical device, you'll need to add a team. Learn how to [set up teams](https://help.apple.com/xcode/mac/current/#/dev60b6fbbc7) at Apple's help site.

**4.** Choose a location to save the project and click `Create`.

## Add the CE.SDK Swift package

**1.** With your Xcode project open, use the `File` menu to select `Add Package Dependencies...`

![Screen grab of the dependencies menu option](assets/dependencies-menu.png)

**2.** Copy the following package URL and paste it into the Search field at the top right of the modal:

https://github.com/imgly/IMGLYUI-swift

![Image of the packages modal screen](assets/add-package.png)

**3.** Once you see the IMGLY UI package information in the window, click `Add Package`.

**4.** After downloading the package and its dependencies, you'll be presented with a list of libraries. For this demo, choose the `IMGLYUI` library to add to your project target. This adds all of the capabilities of the SDK to your project. For a production app, you can include only the libraries you need, to conserve app space.

![Image of the list of packages](assets/add-package-to-target.png)

## Add Code to Use the CE.SDK

**1.** Open the `ContentView.swift` file in your project. Import the SDK by adding the following below `import SwiftUI`:

```swift
import IMGLYDesignEditor
```

**2.** Create a variable to hold the engine settings for the editor. Just below the line that declares the `ContentView` struct, and before the declaration of the body, add this code and update it with your actual license key and user ID. Pass `nil` for the license parameter to run the SDK in evaluation mode with a watermark.
```swift
let engineSettings = EngineSettings(license: "", // pass nil for evaluation mode with watermark
                                    userID: "")
```

**3.** In the `body` variable of `ContentView`, replace the existing code with a `DesignEditor` that uses the `engineSettings`. It needs to be wrapped in a navigation-capable container, which is necessary for toolbars and controls to display properly.

```swift
NavigationView {
    DesignEditor(engineSettings)
}
.navigationViewStyle(.stack)
```

> Note: Avoid using layout containers like `VStack`, `ZStack`, or `ScrollView` as the direct parent of `DesignEditor`. These do not provide navigation context and will result in missing toolbars or UI elements.

When you're done, your `ContentView.swift` should look similar to this:

```swift
import SwiftUI
import IMGLYDesignEditor

struct ContentView: View {
    let engineSettings = EngineSettings(license: "", // pass nil for evaluation mode with watermark
                                        userID: "")

    var body: some View {
        NavigationView {
            DesignEditor(engineSettings)
        }
        .navigationViewStyle(.stack)
    }
}
```

Now select an iOS device or a Simulator, and Build and Run.

![Simulator screens running the demo app](assets/hello-world.png)

## Using Your App

When the app launches, you'll see a blank page with a toolbar at the bottom. Use the different buttons to add assets to your creation. You can add pages to your creation using the button in the top toolbar. Use the share button to export your creation as a PDF.

## Troubleshooting

If you run into issues, here are some common problems and solutions. For additional help, [visit our support page](https://img.ly/company/contact-us).

#### Import Errors: 'EngineSettings' or 'DesignEditor' Not Found

![Import error message](assets/import-error.png)

Make sure that every Swift file that uses the editor has an `import` statement before the first line of code.
For example: `import IMGLYDesignEditor`

#### Build Errors: Missing Modules

![Examples of build errors](assets/missing-package.png)

Make sure that you didn't accidentally choose the wrong target when you added the SDK to your project. You can check a target's imports on the `General` tab of the target settings.

![Screen shot showing the settings pane with the right imports](assets/check-import.png)

If the SDK is missing, you can add it using the `+` button at the bottom of the list.

#### License Key Error at Runtime

![Modal of the error for a missing license](assets/license-error.png)

Double-check that your `EngineSettings` has the exact license key with proper capitalization. If you don't have a license, [register for a free trial](https://img.ly/forms/free-trial) to get a demonstration license.

#### Missing Toolbars or Controls

![Simulator screen with no top controls](assets/missing-controls.png)

Ensure the `DesignEditor` is wrapped in a `NavigationView`. Do **not** place it directly in a `VStack` or as the root of the view hierarchy.

Don't do this:

```swift
var body: some View {
    DesignEditor(engineSettings)
}
```

Or this:

```swift
var body: some View {
    VStack {
        DesignEditor(engineSettings)
    }
}
```

Instead, wrap it in a `NavigationView` as shown above with the `.stack` style.

### What's Next?

Now that your integration is working, you can explore more advanced features like custom templates, user uploads, and localized UI. Check out our [documentation site](https://img.ly/docs) for more tutorials and guides.
---

## More Resources

- **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation
- **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs)
- **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples
- **[Support](mailto:support@img.ly)** - Contact IMG.LY support

---

---
title: "New Project with UIKit"
description: "Setting up CE.SDK in a new iOS project using UIKit"
platform: ios
url: "https://img.ly/docs/cesdk/ios/get-started/ios/new-project/uikit-t7678z/"
---

> This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt).

**Navigation:** [Get Started](https://img.ly/docs/cesdk/ios/get-started/overview-e18f40/) > [Quickstart UIKit](https://img.ly/docs/cesdk/ios/get-started/ios/new-project/uikit-t7678z/)

---

This guide walks you through integrating the CE.SDK into a new UIKit-based iOS project. With just a few setup steps, you'll be able to add robust image and video editing features to your app using the CE.SDK.

## Requirements

To work with the SDK, you'll need:

- A Mac running a recent version of [Xcode](https://developer.apple.com/xcode/).
- A valid **CE.SDK license key** ([Get a free trial](https://img.ly/forms/free-trial)).

## Creating a new Xcode Project

**1.** Launch Xcode and use the `File` menu to select `New` -> `Project...`.

![Screen grab of the Xcode file menu.](assets/file-menu.png)

**2.** Ensure that the `iOS` tab is selected and the `App` template is highlighted, then click `Next`.

![Screen grab of the template chooser](assets/template-chooser-ios.png)

**3.** Enter a name for your app and an identifier for your organization. These will be combined to form the bundle identifier of your app, which must be unique.
A team is not required to deploy your app to the simulator. Then click `Next`.

![Screen grab of the naming screen](assets/naming-options.png)

> Make sure **Interface** is set to **Storyboard** and **Language** is set to **Swift**. While the CE.SDK is written in SwiftUI, it works seamlessly in UIKit projects by embedding it in a `UIHostingController`.

When you want to run on a physical device, you'll need to add a team. Learn how to [set up teams](https://help.apple.com/xcode/mac/current/#/dev60b6fbbc7) at Apple's help site.

**4.** Choose a location on your computer to save the project and click `Create`.

## Add the CE.SDK Swift package

**1.** With your new application open, use the `File` menu to select `Add Package Dependencies...`.

![Screen grab of the dependencies menu option](assets/dependencies-menu.png)

**2.** Copy the package URL and paste it into the Search field at the top right of the modal:

https://github.com/imgly/IMGLYUI-swift

![Image of the packages modal screen](assets/add-package.png)

**3.** Once you see the IMGLY UI package in the window, click `Add Package`.

**4.** After downloading the package and its dependencies, you'll be presented with a list of libraries. For this demo, choose the `IMGLYUI` library to add to your project target. This adds all of the capabilities of the SDK to your project. In a production app, you would select only the libraries you need, to conserve app space.

![Image of the list of packages](assets/add-package-to-target.png)

## Add Code to Use the CE.SDK

**1.** Open the `ViewController` swift file of your app. This demo will use the "Design Editor" features of the SDK, so below the line that reads `import UIKit`, add these lines:

```swift
import IMGLYDesignEditor
import SwiftUI
```

**2.** Create a button to launch the editor. It can go inside `viewDidLoad`. Here is some code to put a simple button in the center of the screen.
```swift
override func viewDidLoad() {
    super.viewDidLoad()
    // Add a launch button to present the editor
    let launchButton = UIButton(type: .system)
    launchButton.setTitle("Open Editor", for: .normal)
    launchButton.addTarget(self, action: #selector(openEditor), for: .touchUpInside)
    launchButton.translatesAutoresizingMaskIntoConstraints = false
    view.addSubview(launchButton)
    NSLayoutConstraint.activate([
        launchButton.centerXAnchor.constraint(equalTo: view.centerXAnchor),
        launchButton.centerYAnchor.constraint(equalTo: view.centerYAnchor)
    ])
}
```

> **Prefer using Storyboard?**
>
> You can also add a UIButton to your main storyboard and connect it to your view controller using an `@IBAction`. In your storyboard:
>
> 1. Drag a UIButton into your scene.
> 2. Set up constraints so it appears centered.
> 3. Control-drag from the button to your `ViewController.swift` file to create an `@IBAction`.

![Storyboard and an @IBAction](assets/storyboard.png)

**3.** Create the `openEditor` function (if you didn't already in the storyboard). If you made the button in code, the signature for the function should start with `@objc` because of the button selector.

```swift
@objc func openEditor() {

}
```

**4.** Inside the `openEditor` function, create a variable to hold the engine settings for the editor. Add this code, and update it with your actual license key and user ID.

```swift
let engineSettings = EngineSettings(license: "", // pass nil for evaluation mode with watermark
                                    userID: "")
```

**5.** Still in the `openEditor` function, create a `DesignEditor` with your `engineSettings`. It needs to be wrapped in a `UIHostingController` *and* a `NavigationView`: the hosting controller is the bridge between UIKit and SwiftUI, and the navigation view provides access to the iOS toolbars. Set the modal presentation style to `.fullScreen` and the `.navigationViewStyle` modifier to `.stack`.
```swift
@objc func openEditor() {
    let engineSettings = EngineSettings(license: "", // pass nil for evaluation mode with watermark
                                        userID: "")
    let editorVC = UIHostingController(rootView:
        NavigationView {
            DesignEditor(engineSettings)
        }
        .navigationViewStyle(.stack)
    )
    editorVC.modalPresentationStyle = .fullScreen
    present(editorVC, animated: true)
}
```

When you're done, your `ViewController` should look similar to this:

```swift
import UIKit
import SwiftUI
import IMGLYDesignEditor

class ViewController: UIViewController {

    override func viewDidLoad() {
        super.viewDidLoad()
        // Add a launch button to present the editor
        let launchButton = UIButton(type: .system)
        launchButton.setTitle("Open Editor", for: .normal)
        launchButton.addTarget(self, action: #selector(openEditor), for: .touchUpInside)
        launchButton.translatesAutoresizingMaskIntoConstraints = false
        view.addSubview(launchButton)
        NSLayoutConstraint.activate([
            launchButton.centerXAnchor.constraint(equalTo: view.centerXAnchor),
            launchButton.centerYAnchor.constraint(equalTo: view.centerYAnchor)
        ])
    }

    @objc func openEditor() {
        let engineSettings = EngineSettings(license: "", // pass nil for evaluation mode with watermark
                                            userID: "")
        let editorVC = UIHostingController(rootView:
            NavigationView {
                DesignEditor(engineSettings)
            }
            .navigationViewStyle(.stack)
        )
        editorVC.modalPresentationStyle = .fullScreen
        present(editorVC, animated: true)
    }
}
```

Now select an iOS device or a Simulator, and Build and Run.

![Screen shot of the iPhone with the "Open Editor" button](assets/uikitbutton.png)

## Using Your App

Tap your button to launch the editor. When the editor launches, you'll see a blank page with a toolbar at the bottom. Use the different buttons to add assets to your creation. You can add pages to your creation using the button in the top toolbar. Once you're happy with it, use the share button to export your creation as a PDF.
![Simulator screens running the demo app](assets/hello-world.png)

## Troubleshooting

Here are some issues you may encounter and their causes. If you need additional help, you can [visit our support page](https://img.ly/company/contact-us).

- Xcode does not know about `EngineSettings` or `DesignEditor`.

  ![Import error message](assets/import-error.png)

  Make sure that every Swift file that needs to interact with the editor has an `import` statement before the first line of code, for example: `import IMGLYDesignEditor`

- You get build errors about missing modules.

  ![Examples of build errors](assets/missing-package.png)

  Make sure that you didn't accidentally choose the wrong target when you imported the SDK to your project. You can check a target's imports on the `General` tab of the target settings.

  ![Screen shot showing the settings pane with the right imports](assets/check-import.png)

  If you don't see the SDK, you can add it using the `+` button at the bottom of the list.

- When you run the application, you get an error message about the license key:

  ![Modal of the error for a missing license](assets/license-error.png)

  Make sure that your `EngineSettings` has the exact license key with matching capitalization. If you don't have a license, you can [register for a free trial](https://img.ly/forms/free-trial) to get a demonstration license.

- When you run the demo app you don't get any errors, but you have a blank top bar and are missing controls.

  ![Simulator screen with no top controls](assets/missing-controls.png)

  Make sure that the code that instantiates your `DesignEditor` is wrapped in a `NavigationView` inside the `UIHostingController`. Don't place it at the root of the view hierarchy.

  Don't do this:

  ```swift
  let editor = DesignEditor(engineSettings)
  let editorVC = UIHostingController(rootView: editor)
  present(editorVC, animated: true)
  ```

  Instead, wrap it in a `NavigationView` as shown above.

### What's Next?
Now that your integration is working, you can explore more advanced features like custom templates, user uploads, and localized UI. Check out our [documentation site](https://img.ly/docs) for more tutorials and guides. --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "MCP Server" description: "Connect AI assistants to CE.SDK documentation using the Model Context Protocol (MCP) server." platform: ios url: "https://img.ly/docs/cesdk/ios/get-started/mcp-server-fde71c/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Get Started](https://img.ly/docs/cesdk/ios/get-started/overview-e18f40/) > [MCP Server](https://img.ly/docs/cesdk/ios/get-started/mcp-server-fde71c/) --- The CE.SDK MCP server provides a standardized interface that allows any compatible AI assistant to search and access our documentation. This enables AI tools like Claude, Cursor, and VS Code Copilot to provide more accurate, context-aware help when working with CE.SDK. ## What is MCP? The [Model Context Protocol (MCP)](https://modelcontextprotocol.io/) is an open standard that enables AI assistants to securely connect to external data sources. 
By connecting your AI tools to our MCP server, you get:

- **Accurate answers**: AI assistants can search and retrieve the latest CE.SDK documentation
- **Context-aware help**: Get platform-specific guidance for your development environment
- **Up-to-date information**: Always access current documentation without relying on training data

## Available Tools

The MCP server exposes two tools:

| Tool | Description |
|------|-------------|
| `search` | Search documentation by query string |
| `fetch` | Retrieve the full content of a document by ID |

## Server Endpoint

| URL | Transport |
|-----|-----------|
| `https://mcp.img.ly/mcp` | Streamable HTTP |

No authentication is required.

## Setup Instructions

### Claude Code

Add the MCP server with a single command:

```bash
claude mcp add --transport http imgly_docs https://mcp.img.ly/mcp
```

### Claude Desktop

1. Open Claude Desktop and go to **Settings** (click your profile icon)
2. Navigate to **Connectors** in the sidebar
3. Click **Add custom connector**
4. Enter the URL: `https://mcp.img.ly/mcp`
5. Click **Add** to connect

### Cursor

Add the following to your Cursor MCP configuration. You can use either:

- **Project-specific**: `.cursor/mcp.json` in your project root
- **Global**: `~/.cursor/mcp.json`

```json
{
  "mcpServers": {
    "imgly_docs": {
      "url": "https://mcp.img.ly/mcp"
    }
  }
}
```

### VS Code

Add to your workspace configuration at `.vscode/mcp.json`:

```json
{
  "servers": {
    "imgly_docs": {
      "type": "http",
      "url": "https://mcp.img.ly/mcp"
    }
  }
}
```

### Windsurf

Add the following to your Windsurf MCP configuration at `~/.codeium/windsurf/mcp_config.json`:

```json
{
  "mcpServers": {
    "imgly_docs": {
      "serverUrl": "https://mcp.img.ly/mcp"
    }
  }
}
```

### Other Clients

For other MCP-compatible clients, use the endpoint `https://mcp.img.ly/mcp` with HTTP transport. Refer to your client's documentation for the specific configuration format.
## Usage Once configured, your AI assistant will automatically have access to CE.SDK documentation. You can ask questions like: - "How do I add a text block in CE.SDK?" - "Show me how to export a design as PNG" - "What are the available blend modes?" The AI will search our documentation and provide answers based on the latest CE.SDK guides and API references. --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Get Started" description: "Documentation for Get Started" platform: ios url: "https://img.ly/docs/cesdk/ios/get-started/overview-e18f40/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Get Started](https://img.ly/docs/cesdk/ios/get-started/overview-e18f40/) --- Welcome to our documentation! This guide will help you get started with our SDK on your preferred platform. ## Choose Your Platform --- ## Related Pages - [iOS Creative Editor](https://img.ly/docs/cesdk/ios/what-is-cesdk-2e7acd/) - The iOS Mobile Design Editor SDK provides a comprehensive toolkit for building rich visual design and editing experiences directly within your iOS applications. 
- [New Project with SwiftUI](https://img.ly/docs/cesdk/ios/get-started/ios/new-project/swiftui-s6567y/) - Setting up CE.SDK in a new iOS project using SwiftUI - [New Project with UIKit](https://img.ly/docs/cesdk/ios/get-started/ios/new-project/uikit-t7678z/) - Setting up CE.SDK in a new iOS project using UIKit - [MCP Server](https://img.ly/docs/cesdk/ios/get-started/mcp-server-fde71c/) - Connect AI assistants to CE.SDK documentation using the Model Context Protocol (MCP) server. - [LLMs.txt](https://img.ly/docs/cesdk/ios/llms-txt-eb9cc5/) - Our documentation is available in LLMs.txt format - [Licensing](https://img.ly/docs/cesdk/ios/licensing-8aa063/) - Understand CE.SDK’s flexible licensing, trial options, and how keys work across dev, staging, and production. --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Guides" description: "Documentation for Guides" platform: ios url: "https://img.ly/docs/cesdk/ios/guides-8d8b00/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) --- --- ## Related Pages - [Configuration](https://img.ly/docs/cesdk/ios/configuration-2c1c3d/) - Learn how to configure CE.SDK to match your application's functional, visual, and performance requirements. - [Settings](https://img.ly/docs/cesdk/ios/settings-970c98/) - Explore all configurable editor settings and learn how to read, update, and observe them via the Settings API. 
- [Serve Assets From Your Server](https://img.ly/docs/cesdk/ios/serve-assets-b0827c/) - Set up and manage how assets are served to the editor, including local, remote, or CDN-based delivery. - [Engine Interface](https://img.ly/docs/cesdk/ios/engine-interface-6fb7cf/) - Understand CE.SDK's architecture and learn when to use direct Engine access for automation workflows - [Automate Workflows](https://img.ly/docs/cesdk/ios/automation-715209/) - Automate repetitive editing tasks using CE.SDK’s headless APIs to generate assets at scale. - [User Interface](https://img.ly/docs/cesdk/ios/user-interface-5a089a/) - Use CE.SDK’s customizable, production-ready UI or replace it entirely with your own interface. - [Open the Editor](https://img.ly/docs/cesdk/ios/open-the-editor-23a1db/) - Learn how to load and create scenes, set the zoom level, and configure file proxies or URI resolvers. - [Insert Media Into Scenes](https://img.ly/docs/cesdk/ios/insert-media-a217f5/) - Understand how insertion works, how inserted media behave within scenes, and how to control them via UI or code. - [Import Media](https://img.ly/docs/cesdk/ios/import-media-4e3703/) - Learn how to import, manage, and customize assets from local, remote, or camera sources in CE.SDK. - [Export](https://img.ly/docs/cesdk/ios/export-save-publish/export-82f968/) - Explore export options, supported formats, and configuration features for sharing or rendering output. - [Save](https://img.ly/docs/cesdk/ios/export-save-publish/save-c8b124/) - Save design progress locally or to a backend service to allow for later editing or publishing. - [Store Custom Metadata](https://img.ly/docs/cesdk/ios/export-save-publish/store-custom-metadata-337248/) - Attach and persist metadata alongside your design, such as tags, version info, or creator details. - [Edit Image](https://img.ly/docs/cesdk/ios/edit-image-c64912/) - Use CE.SDK to crop, transform, annotate, or enhance images with editing tools and programmatic APIs. 
- [Create Videos](https://img.ly/docs/cesdk/ios/create-video-c41a08/) - Learn how to create and customize videos in CE.SDK using scenes, assets, and timeline-based editing. - [Text](https://img.ly/docs/cesdk/ios/text-8a993a/) - Add, style, and customize text layers in your design using CE.SDK’s flexible text editing tools. - [Create and Edit Shapes](https://img.ly/docs/cesdk/ios/shapes-9f1b2c/) - Draw custom vector shapes, combine them with boolean operations, and insert QR codes into your designs. - [Create and Edit Stickers](https://img.ly/docs/cesdk/ios/stickers-3d4e5f/) - Create and customize stickers using image fills for icons, logos, emoji, and multi-color graphics. - [Create Compositions](https://img.ly/docs/cesdk/ios/create-composition-db709c/) - Combine and arrange multiple elements to create complex, multi-page, or layered design compositions. - [Create Templates](https://img.ly/docs/cesdk/ios/create-templates-3aef79/) - Learn how to create, import, and manage reusable templates to streamline design creation in CE.SDK. - [Colors](https://img.ly/docs/cesdk/ios/colors-a9b79c/) - Manage color usage in your designs, from applying brand palettes to handling print and screen formats. - [Fills](https://img.ly/docs/cesdk/ios/fills-402ddc/) - Apply solid colors, gradients, images, or videos as fills to shapes, text, and other design elements. - [Outlines](https://img.ly/docs/cesdk/ios/outlines-b7820c/) - Enhance design elements with strokes, shadows, and glow effects to improve contrast and visual appeal. - [Filters and Effects](https://img.ly/docs/cesdk/ios/filters-and-effects-6f88ac/) - Enhance visual elements with filters and effects such as blur, duotone, LUTs, and chroma keying. - [Animation](https://img.ly/docs/cesdk/ios/animation-ce900c/) - Add motion to designs with support for keyframes, timeline editing, and programmatic animation control. 
- [Rules](https://img.ly/docs/cesdk/ios/rules-1427c0/) - Define and enforce layout, branding, and safety rules to ensure consistent and compliant designs. - [Conversion](https://img.ly/docs/cesdk/ios/conversion-c3fbb3/) - Convert designs into different formats such as PDF, PNG, MP4, and more using CE.SDK tools. - [Create a precompiled XCFramework for offline builds](https://img.ly/docs/cesdk/ios/create-prebuilt-xcframework-c67971/) - Compiling CE.SDK Swift packages and other project dependencies to a binary XCFramework to support easy building in airgapped environments. --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "For Audio Processing" description: "Learn how to export audio in WAV or MP4 format from any block type in CE.SDK for iOS and macOS." platform: ios url: "https://img.ly/docs/cesdk/ios/guides/export-save-publish/export/audio-68de25/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Export Media Assets](https://img.ly/docs/cesdk/ios/export-save-publish/export-82f968/) > [For Audio Processing](https://img.ly/docs/cesdk/ios/guides/export-save-publish/export/audio-68de25/) --- Export audio from pages, video blocks, audio blocks, and tracks to WAV or MP4 format for external processing, transcription, or analysis. The `exportAudio` API allows you to extract audio from any block that contains audio content. 
This is particularly useful when integrating with external audio processing services like speech-to-text transcription, audio enhancement, or music analysis platforms.

Audio can be exported from multiple block types:

- **Page blocks** - Export the complete mixed audio timeline
- **Video blocks** - Extract audio tracks from videos
- **Audio blocks** - Export standalone audio content
- **Track blocks** - Export audio from specific timeline tracks

## Export Audio

Export audio from any block using the `exportAudio` API:

```swift
let page = try engine.scene.getCurrentPage()
let audioData = try await engine.block.exportAudio(
  page,
  mimeType: .wav,
  sampleRate: 48000,
  numberOfChannels: 2
)
print("Exported \(audioData.count) bytes")
```

### Export Options

Configure your audio export with these parameters:

- **`mimeType`** - `.wav` (uncompressed) or `.mp4` (compressed AAC)
- **`sampleRate`** - Audio quality in Hz (default: 48000)
- **`numberOfChannels`** - 1 for mono or 2 for stereo
- **`timeOffset`** - Start time in seconds (default: 0.0)
- **`duration`** - Length to export in seconds (0.0 = entire duration)
- **`onProgress`** - Callback receiving `(rendered, encoded, total)` for progress tracking

## Find Audio Sources

To find blocks with audio in your scene:

```swift
// Find audio blocks
let audioBlocks = try engine.block.findByType(.audio)

// Find video fills with audio
let videoFills = try engine.block.findByType(.videoFill)
let videosWithAudio = videoFills.filter { block in
  do {
    return try !engine.block.getAudioInfoFromVideo(block).isEmpty
  } catch {
    return false
  }
}
```

## Working with Multi-Track Video Audio

Videos can contain multiple audio tracks (e.g., different languages). CE.SDK provides APIs to inspect and extract specific tracks.
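When a video carries tracks in several languages, you will often want to select a track by its language tag rather than always taking the first one. The metadata returned by `getAudioInfoFromVideo` makes this plain Swift. Below is a minimal sketch; the `TrackInfo` struct is a simplified, hypothetical stand-in for the SDK's per-track info type (which also carries channels and sample rate):

```swift
import Foundation

// Hypothetical, simplified stand-in for the per-track metadata
// returned by `getAudioInfoFromVideo`.
struct TrackInfo {
    let language: String? // e.g. "en", "de", or nil when untagged
    let label: String?
}

/// Returns the index of the first track whose language tag matches
/// `preferredLanguage` (case-insensitively), falling back to the first
/// track (index 0) when no track matches.
func preferredTrackIndex(in tracks: [TrackInfo], preferredLanguage: String) -> Int {
    tracks.firstIndex { $0.language?.lowercased() == preferredLanguage.lowercased() } ?? 0
}
```

The resulting index can then be passed to `createAudioFromVideo(_:trackIndex:)` as shown in the sections that follow.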
### Check audio track count

```swift
guard let videoFillId = try engine.block.findByType(.videoFill).first else {
  // AudioExportError is a custom error type defined by your app.
  throw AudioExportError.noVideoFound
}

let trackCount = try engine.block.getAudioTrackCountFromVideo(videoFillId)
print("Video has \(trackCount) audio track(s)")
```

### Get track information

```swift
let audioTracks = try engine.block.getAudioInfoFromVideo(videoFillId)
for (index, track) in audioTracks.enumerated() {
  print("""
  Track \(index):
  - Channels: \(track.channels) // 1=mono, 2=stereo
  - Sample Rate: \(track.sampleRate) Hz
  - Language: \(track.language ?? "unknown")
  - Label: \(track.label ?? "Track \(index)")
  """)
}
```

### Extract a specific track

```swift
// Create audio block from track 0 (first track)
let audioBlockId = try engine.block.createAudioFromVideo(videoFillId, trackIndex: 0)

// Export just this track's audio
let trackAudioData = try await engine.block.exportAudio(
  audioBlockId,
  mimeType: .wav,
  sampleRate: 48000,
  numberOfChannels: 2
)
```

### Extract all tracks

```swift
// Create audio blocks for all tracks
let audioBlockIds = try engine.block.createAudiosFromVideo(videoFillId)

// Export each track
for (i, audioBlockId) in audioBlockIds.enumerated() {
  let trackData = try await engine.block.exportAudio(audioBlockId, mimeType: .wav)
  print("Track \(i): \(trackData.count) bytes")
}
```

## Complete Workflow: Audio to Captions

A common workflow is to export audio, send it to a transcription service, and use the returned captions in your scene.
### Step 1: Export Audio

```swift
let page = try engine.scene.getCurrentPage()
let audioData = try await engine.block.exportAudio(
  page,
  mimeType: .wav,
  sampleRate: 48000,
  numberOfChannels: 2
)
```

### Step 2: Send to Transcription Service

Send the audio to a service that returns SubRip (SRT) format captions:

```swift
func transcribeAudio(_ audioData: Data) async throws -> String {
  let boundary = UUID().uuidString
  var body = Data()

  // Add audio file
  body.append("--\(boundary)\r\n")
  body.append("Content-Disposition: form-data; name=\"audio\"; filename=\"audio.wav\"\r\n")
  body.append("Content-Type: audio/wav\r\n\r\n")
  body.append(audioData)
  body.append("\r\n")

  // Add format parameter
  body.append("--\(boundary)\r\n")
  body.append("Content-Disposition: form-data; name=\"format\"\r\n\r\n")
  body.append("srt")
  body.append("\r\n--\(boundary)--\r\n")

  var request = URLRequest(url: URL(string: "https://api.transcription-service.com/transcribe")!)
  request.httpMethod = "POST"
  request.setValue("multipart/form-data; boundary=\(boundary)", forHTTPHeaderField: "Content-Type")
  request.setValue("Bearer YOUR_API_KEY", forHTTPHeaderField: "Authorization")
  request.httpBody = body

  let (data, _) = try await URLSession.shared.data(for: request)
  return String(data: data, encoding: .utf8) ??
"" } extension Data { mutating func append(_ string: String) { if let data = string.data(using: .utf8) { append(data) } } } let srtContent = try await transcribeAudio(audioData) ``` ### Step 3: Import Captions from SRT Use the built-in API to create caption blocks from the SRT response: ```swift import Foundation // Save SRT to temporary file let tempDir = FileManager.default.temporaryDirectory let tempFile = tempDir.appendingPathComponent("captions.srt") try srtContent.write(to: tempFile, atomically: true, encoding: .utf8) // Import captions from file URL let captions = try await engine.block.createCaptionsFromURI(tempFile.absoluteString) // Clean up temporary file try FileManager.default.removeItem(at: tempFile) // Add captions to page let page = try engine.scene.getCurrentPage() let captionTrack = try engine.block.create(.captionTrack) for caption in captions { try engine.block.appendChild(to: captionTrack, child: caption) } try engine.block.appendChild(to: page, child: captionTrack) // Center the first caption as a reference point try engine.block.alignHorizontally([captions[0]], alignment: .center) try engine.block.alignVertically([captions[0]], alignment: .center) ``` ### Other Processing Services Audio export also supports these workflows: - **Audio enhancement** - Noise removal, normalization - **Music analysis** - Tempo, key, beat detection - **Language detection** - Identify spoken language - **Speaker diarization** - Identify who spoke when ## Next Steps Now that you understand audio export, explore related audio and video features in the [Create Video guides](https://img.ly/docs/cesdk/ios/create-video-c41a08/). 
---

## More Resources

- **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation
- **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs)
- **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples
- **[Support](mailto:support@img.ly)** - Contact IMG.LY support

---

---
title: "Import Media"
description: "Learn how to import, manage, and customize assets from local, remote, or camera sources in CE.SDK."
platform: ios
url: "https://img.ly/docs/cesdk/ios/import-media-4e3703/"
---

> This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt).

**Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Import Media Assets](https://img.ly/docs/cesdk/ios/import-media-4e3703/)

---

---

## Related Pages

- [Overview](https://img.ly/docs/cesdk/ios/import-media/overview-84bb23/) - Learn how to import, manage, and customize assets from local, remote, or camera sources in CE.SDK.
- [Concepts](https://img.ly/docs/cesdk/ios/import-media/concepts-5e6197/) - Understand key asset concepts like sources, formats, metadata, and how assets are integrated into designs.
- [Asset Library](https://img.ly/docs/cesdk/ios/import-media/asset-library-65d6c4/) - Manage how users browse, preview, and insert media assets into their designs with a customizable asset library.
- [Import From Local Source](https://img.ly/docs/cesdk/ios/import-media/from-local-source-39b2a9/) - Enable users to upload files from their device for use as design assets in the editor.
- [Import From Remote Source](https://img.ly/docs/cesdk/ios/import-media/from-remote-source-b65faf/) - Connect CE.SDK to external sources like servers or third-party platforms to import assets remotely.
- [Capture From Camera](https://img.ly/docs/cesdk/ios/import-media/capture-from-camera-92f388/) - Capture photos or videos directly from a connected camera for immediate use in your design.
- [Source Sets](https://img.ly/docs/cesdk/ios/import-media/source-sets-5679c8/) - Use multiple versions of an asset to support different resolutions or formats.
- [Retrieve Mimetype](https://img.ly/docs/cesdk/ios/import-media/retrieve-mimetype-ed13bf/) - Detect the file type of an asset to control how it’s handled or displayed.
- [File Format Support](https://img.ly/docs/cesdk/ios/import-media/file-format-support-8cdc84/) - Review the supported image, video, and audio formats for importing assets.
- [Size Limits](https://img.ly/docs/cesdk/ios/import-media/size-limits-c32275/) - Learn about file size restrictions and how to optimize large assets for use in CE.SDK.

---

## More Resources

- **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation
- **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs)
- **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples
- **[Support](mailto:support@img.ly)** - Contact IMG.LY support

---

---
title: "Asset Library"
description: "Manage how users browse, preview, and insert media assets into their designs with a customizable asset library."
platform: ios
url: "https://img.ly/docs/cesdk/ios/import-media/asset-library-65d6c4/"
---

> This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt).
**Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Import Media Assets](https://img.ly/docs/cesdk/ios/import-media-4e3703/) > [Asset Library](https://img.ly/docs/cesdk/ios/import-media/asset-library-65d6c4/)

---

---

## Related Pages

- [Customize](https://img.ly/docs/cesdk/ios/import-media/asset-panel/customize-c9a4de/) - Adapt the asset library UI and behavior to suit your application's structure and user needs.

---

## More Resources

- **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation
- **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs)
- **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples
- **[Support](mailto:support@img.ly)** - Contact IMG.LY support

---

---
title: "Customize"
description: "Adapt the asset library UI and behavior to suit your application's structure and user needs."
platform: ios
url: "https://img.ly/docs/cesdk/ios/import-media/asset-panel/customize-c9a4de/"
---

> This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt).
**Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Import Media Assets](https://img.ly/docs/cesdk/ios/import-media-4e3703/) > [Asset Library](https://img.ly/docs/cesdk/ios/import-media/asset-library-65d6c4/) > [Customize](https://img.ly/docs/cesdk/ios/import-media/asset-panel/customize-c9a4de/)

---

```swift file=@cesdk_swift_examples/editor-guides-configuration-asset-library/DefaultAssetLibraryEditorSolution.swift reference-only
import IMGLYDesignEditor
import SwiftUI

struct DefaultAssetLibraryEditorSolution: View {
  let settings = EngineSettings(license: secrets.licenseKey, // pass nil for evaluation mode with watermark
                                userID: "")

  @MainActor var editor: some View {
    DesignEditor(settings)
      .imgly.onCreate { engine in
        try await OnCreate.loadScene(from: DesignEditor.defaultScene)(engine)
        try engine.asset.addSource(UnsplashAssetSource(host: secrets.unsplashHost))
      }
      .imgly.assetLibrary {
        DefaultAssetLibrary(
          tabs: DefaultAssetLibrary.Tab.allCases.reversed().filter { tab in
            tab != .elements && tab != .photoRoll
          },
        )
        .images {
          AssetLibrarySource.image(.title("Unsplash"), source: .init(id: UnsplashAssetSource.id))
          DefaultAssetLibrary.images
        }
      }
  }

  @State private var isPresented = false

  var body: some View {
    Button("Use the Editor") {
      isPresented = true
    }
    .fullScreenCover(isPresented: $isPresented) {
      ModalEditor { editor }
    }
  }
}

#Preview {
  DefaultAssetLibraryEditorSolution()
}
```

```swift file=@cesdk_swift_examples/editor-guides-configuration-asset-library/CustomAssetLibraryEditorSolution.swift reference-only
import IMGLYDesignEditor
import SwiftUI

struct CustomAssetLibraryEditorSolution: View {
  let settings = EngineSettings(license: secrets.licenseKey, // pass nil for evaluation mode with watermark
                                userID: "")

  @MainActor var editor: some View {
    DesignEditor(settings)
      .imgly.onCreate { engine in
        try await OnCreate.loadScene(from: DesignEditor.defaultScene)(engine)
        try engine.asset.addSource(UnsplashAssetSource(host: secrets.unsplashHost))
      }
      .imgly.assetLibrary {
        CustomAssetLibrary()
      }
  }

  @State private var isPresented = false

  var body: some View {
    Button("Use the Editor") {
      isPresented = true
    }
    .fullScreenCover(isPresented: $isPresented) {
      ModalEditor { editor }
    }
  }
}

#Preview {
  CustomAssetLibraryEditorSolution()
}
```

```swift file=@cesdk_swift_examples/editor-guides-configuration-asset-library/CustomAssetLibrary.swift reference-only
import IMGLYEditor
import IMGLYEngine
import SwiftUI

@MainActor
struct CustomAssetLibrary: AssetLibrary {
  @AssetLibraryBuilder func photoRoll(_ sceneMode: SceneMode?) -> AssetLibraryContent {
    AssetLibrarySource.photoRoll(
      .title("Photo Roll"),
      media: sceneMode == .video ? [.image, .video] : [.image],
    )
  }

  @AssetLibraryBuilder var videosAndImages: AssetLibraryContent {
    AssetLibraryGroup.video("Videos") { videos }
    AssetLibraryGroup.image("Images") { images }
    AssetLibrarySource.photoRoll(.title("Photo Roll"), media: [.image, .video])
  }

  @AssetLibraryBuilder var videos: AssetLibraryContent {
    AssetLibrarySource.video(.title("Videos"), source: .init(demoSource: .video))
    AssetLibrarySource.photoRoll(.title("Photo Roll"), media: [.video])
  }

  @AssetLibraryBuilder var audio: AssetLibraryContent {
    AssetLibrarySource.audio(.title("Audio"), source: .init(demoSource: .audio))
    AssetLibrarySource.audioUpload(.title("Uploads"), source: .init(demoSource: .audioUpload))
  }

  @AssetLibraryBuilder var images: AssetLibraryContent {
    AssetLibrarySource.image(.title("Unsplash"), source: .init(id: UnsplashAssetSource.id))
    AssetLibrarySource.image(.title("Images"), source: .init(demoSource: .image))
    AssetLibrarySource.photoRoll(.title("Photo Roll"), media: [.image])
  }

  @AssetLibraryBuilder var text: AssetLibraryContent {
    AssetLibrarySource.text(.title("Plain Text"), source: .init(id: TextAssetSource.id))
    AssetLibrarySource.textComponent(.title("Text Designs"), source: .init(demoSource: .textComponents))
  }

  @AssetLibraryBuilder var shapes: AssetLibraryContent {
    AssetLibrarySource.shape(.title("Basic"),
                             source: .init(
                               defaultSource: .vectorPath,
                               config: .init(groups: ["//ly.img.cesdk.vectorpaths/category/vectorpaths"])))
    AssetLibrarySource.shape(.title("Abstract"),
                             source: .init(
                               defaultSource: .vectorPath,
                               config: .init(groups: ["//ly.img.cesdk.vectorpaths.abstract/category/abstract"])))
  }

  @AssetLibraryBuilder var stickers: AssetLibraryContent {
    AssetLibrarySource.sticker(.titleForGroup { group in
      if let name = group?.split(separator: "/").last {
        "\(name.capitalized)"
      } else {
        "Stickers"
      }
    }, source: .init(defaultSource: .sticker))
  }

  @AssetLibraryBuilder func elements(_ sceneMode: SceneMode?) -> AssetLibraryContent {
    photoRoll(sceneMode)
    if sceneMode == .video {
      AssetLibraryGroup.video("Videos") { videos }
      AssetLibraryGroup.audio("Audio") { audio }
    }
    AssetLibraryGroup.image("Images") { images }
    AssetLibraryGroup.text("Text", excludedPreviewSources: [Engine.DemoAssetSource.textComponents.rawValue]) { text }
    AssetLibraryGroup.shape("Shapes") { shapes }
    AssetLibraryGroup.sticker("Stickers") { stickers }
  }

  @ViewBuilder var photoRollTab: some View {
    AssetLibrarySceneModeReader { sceneMode in
      AssetLibraryTab("Photo Roll") { photoRoll(sceneMode) } label: { DefaultAssetLibrary.photoRollLabel($0) }
    }
  }

  @ViewBuilder var elementsTab: some View {
    AssetLibrarySceneModeReader { sceneMode in
      AssetLibraryTab("Elements") { elements(sceneMode) } label: { DefaultAssetLibrary.elementsLabel($0) }
    }
  }

  @ViewBuilder var videosTab: some View {
    AssetLibraryTab("Videos") { videos } label: { DefaultAssetLibrary.videosLabel($0) }
  }

  @ViewBuilder var audioTab: some View {
    AssetLibraryTab("Audio") { audio } label: { DefaultAssetLibrary.audioLabel($0) }
  }

  @ViewBuilder var imagesTab: some View {
    AssetLibraryTab("Images") { images } label: { DefaultAssetLibrary.imagesLabel($0) }
  }

  @ViewBuilder var textTab: some View {
    AssetLibraryTab("Text") { text } label: { DefaultAssetLibrary.textLabel($0) }
  }

  @ViewBuilder var shapesTab: some View {
    AssetLibraryTab("Shapes") { shapes } label: {
      DefaultAssetLibrary.shapesLabel($0)
    }
  }

  @ViewBuilder var stickersTab: some View {
    AssetLibraryTab("Stickers") { stickers } label: {
      DefaultAssetLibrary.stickersLabel($0)
    }
  }

  @ViewBuilder public var clipsTab: some View {
    AssetLibraryTab("Clips") { videosAndImages } label: { _ in EmptyView() }
  }

  @ViewBuilder public var overlaysTab: some View {
    AssetLibraryTab("Overlays") { videosAndImages } label: { _ in EmptyView() }
  }

  @ViewBuilder public var stickersAndShapesTab: some View {
    AssetLibraryTab("Stickers") {
      stickers
      shapes
    } label: { _ in EmptyView() }
  }

  var body: some View {
    TabView {
      AssetLibrarySceneModeReader { sceneMode in
        if sceneMode == .video {
          elementsTab
          photoRollTab
          videosTab
          audioTab
          AssetLibraryMoreTab {
            imagesTab
            textTab
            shapesTab
            stickersTab
          }
        } else {
          elementsTab
          imagesTab
          textTab
          shapesTab
          stickersTab
        }
      }
    }
  }
}
```

In this example, we show you how to customize the asset library for the mobile editor. The example is based on the `Design Editor`; however, it works exactly the same for all the other [solutions](https://img.ly/docs/cesdk/ios/prebuilt-solutions-d0ed07/).

Explore a full code sample on [GitHub](https://github.com/imgly/cesdk-swift-examples/tree/v$UBQ_VERSION$/editor-guides-configuration-asset-library/).

## Modifiers

After initializing an editor SwiftUI view, you can apply any SwiftUI *modifier* to customize it, just as you would for any other SwiftUI view. All public Swift `extension`s of existing types provided by IMG.LY, e.g., for the SwiftUI `View` protocol, are exposed in a separate `.imgly` property namespace. The asset library configuration to customize the editor is no exception to this rule and is implemented as a SwiftUI *modifier*.

```swift highlight-editor-default
DesignEditor(settings)
```

- `assetLibrary` - the asset library UI definition used by the editor. The result of the trailing closure needs to conform to the `AssetLibrary` protocol. By default, the predefined `DefaultAssetLibrary` is used.
```swift highlight-assetLibrary-default
.imgly.assetLibrary {
  DefaultAssetLibrary(
    tabs: DefaultAssetLibrary.Tab.allCases.reversed().filter { tab in
      tab != .elements && tab != .photoRoll
    },
  )
  .images {
    AssetLibrarySource.image(.title("Unsplash"), source: .init(id: UnsplashAssetSource.id))
    DefaultAssetLibrary.images
  }
}
```

### Custom Asset Source

To use custom asset sources in the asset library UI, the custom asset source must first be added to the engine. Like creating or loading a scene, registering asset sources should be done in the `onCreate` [callback](https://img.ly/docs/cesdk/ios/user-interface/events-514b70/). In this example, the `OnCreate.loadScene` default implementation is used and afterward, the [custom Unsplash asset source](https://img.ly/docs/cesdk/ios/import-media/from-remote-source/unsplash-8f31f0/) is added.

```swift highlight-assetSource-default
.imgly.onCreate { engine in
  try await OnCreate.loadScene(from: DesignEditor.defaultScene)(engine)
  try engine.asset.addSource(UnsplashAssetSource(host: secrets.unsplashHost))
}
```

### Default Asset Library

The `DefaultAssetLibrary` is a predefined `AssetLibrary` intended to quickly customize parts of the default asset library without implementing a complete `AssetLibrary` from scratch. It can be initialized with a custom selection and ordering of the available tabs. In this example, we reverse the ordering and exclude the elements and photo roll tabs.

```swift highlight-defaultAssetLibrary
DefaultAssetLibrary(
  tabs: DefaultAssetLibrary.Tab.allCases.reversed().filter { tab in
    tab != .elements && tab != .photoRoll
  },
)
```

### Asset Library Builder

The content of some of the tabs can be changed with *modifiers* that are defined on the `DefaultAssetLibrary` type and expect a trailing `@AssetLibraryBuilder` closure, similar to regular SwiftUI `@ViewBuilder` closures. These type-bound *modifiers* are `videos`, `audio`, `images`, `shapes`, and `stickers`. The elements tab is then generated from these definitions.
In this example, we reuse the `DefaultAssetLibrary.images` default implementation and add a new `AssetLibrarySource` for the [previously added](https://img.ly/docs/cesdk/ios/import-media/asset-panel/customize-c9a4de/) Unsplash asset source, which adds a new "Unsplash" section to the asset library UI.

```swift highlight-defaultAssetLibraryImages
.images {
  AssetLibrarySource.image(.title("Unsplash"), source: .init(id: UnsplashAssetSource.id))
  DefaultAssetLibrary.images
}
```

### Custom Asset Library

If the `DefaultAssetLibrary` is not customizable enough for your use case, you can create your own custom `AssetLibrary`.

```swift highlight-assetLibrary-custom
.imgly.assetLibrary {
  CustomAssetLibrary()
}
```

In this example, we did exactly that by creating the `CustomAssetLibrary`. It resembles the customized `DefaultAssetLibrary` above with the added `UnsplashAssetSource`, but without the custom tab configuration, which is not needed since you control every section, layout, and grouping.

```swift highlight-customAssetLibrary
import IMGLYEditor
import IMGLYEngine
import SwiftUI

@MainActor
struct CustomAssetLibrary: AssetLibrary {
```

As used above for customizing the `DefaultAssetLibrary` with its *modifiers*, the `@AssetLibraryBuilder` concept is the foundation for quickly creating any asset library hierarchy. It behaves and feels like the regular SwiftUI `@ViewBuilder` syntax. You compose your asset library out of `AssetLibrarySource`s that can be organized in named `AssetLibraryGroup`s. There are different flavors of these two for each asset type, which define the asset preview and section styling used.

```swift highlight-assetLibraryBuilder
@AssetLibraryBuilder func photoRoll(_ sceneMode: SceneMode?) -> AssetLibraryContent {
  AssetLibrarySource.photoRoll(
    .title("Photo Roll"),
    media: sceneMode == .video ?
      [.image, .video] : [.image],
  )
}

@AssetLibraryBuilder var videosAndImages: AssetLibraryContent {
  AssetLibraryGroup.video("Videos") { videos }
  AssetLibraryGroup.image("Images") { images }
  AssetLibrarySource.photoRoll(.title("Photo Roll"), media: [.image, .video])
}

@AssetLibraryBuilder var videos: AssetLibraryContent {
  AssetLibrarySource.video(.title("Videos"), source: .init(demoSource: .video))
  AssetLibrarySource.photoRoll(.title("Photo Roll"), media: [.video])
}

@AssetLibraryBuilder var audio: AssetLibraryContent {
  AssetLibrarySource.audio(.title("Audio"), source: .init(demoSource: .audio))
  AssetLibrarySource.audioUpload(.title("Uploads"), source: .init(demoSource: .audioUpload))
}

@AssetLibraryBuilder var images: AssetLibraryContent {
  AssetLibrarySource.image(.title("Unsplash"), source: .init(id: UnsplashAssetSource.id))
  AssetLibrarySource.image(.title("Images"), source: .init(demoSource: .image))
  AssetLibrarySource.photoRoll(.title("Photo Roll"), media: [.image])
}

@AssetLibraryBuilder var text: AssetLibraryContent {
  AssetLibrarySource.text(.title("Plain Text"), source: .init(id: TextAssetSource.id))
  AssetLibrarySource.textComponent(.title("Text Designs"), source: .init(demoSource: .textComponents))
}

@AssetLibraryBuilder var shapes: AssetLibraryContent {
  AssetLibrarySource.shape(.title("Basic"),
                           source: .init(
                             defaultSource: .vectorPath,
                             config: .init(groups: ["//ly.img.cesdk.vectorpaths/category/vectorpaths"])))
  AssetLibrarySource.shape(.title("Abstract"),
                           source: .init(
                             defaultSource: .vectorPath,
                             config: .init(groups: ["//ly.img.cesdk.vectorpaths.abstract/category/abstract"])))
}

@AssetLibraryBuilder var stickers: AssetLibraryContent {
  AssetLibrarySource.sticker(.titleForGroup { group in
    if let name = group?.split(separator: "/").last {
      "\(name.capitalized)"
    } else {
      "Stickers"
    }
  }, source: .init(defaultSource: .sticker))
}

@AssetLibraryBuilder func elements(_ sceneMode: SceneMode?)
  -> AssetLibraryContent {
  photoRoll(sceneMode)
  if sceneMode == .video {
    AssetLibraryGroup.video("Videos") { videos }
    AssetLibraryGroup.audio("Audio") { audio }
  }
  AssetLibraryGroup.image("Images") { images }
  AssetLibraryGroup.text("Text", excludedPreviewSources: [Engine.DemoAssetSource.textComponents.rawValue]) { text }
  AssetLibraryGroup.shape("Shapes") { shapes }
  AssetLibraryGroup.sticker("Stickers") { stickers }
}
```

To compose a SwiftUI view for any `AssetLibraryBuilder` result, you use an `AssetLibraryTab`, which can be added to your view hierarchy. In this example, we reuse the labels defined in the `DefaultAssetLibrary`, but you can also use your own SwiftUI `Label` or any other view. The argument of the `label` closure just forwards the title that was used to initialize the `AssetLibraryTab` so that you don't have to type it twice.

```swift highlight-assetLibraryView
@ViewBuilder var photoRollTab: some View {
  AssetLibrarySceneModeReader { sceneMode in
    AssetLibraryTab("Photo Roll") { photoRoll(sceneMode) } label: {
      DefaultAssetLibrary.photoRollLabel($0)
    }
  }
}

@ViewBuilder var elementsTab: some View {
  AssetLibrarySceneModeReader { sceneMode in
    AssetLibraryTab("Elements") { elements(sceneMode) } label: {
      DefaultAssetLibrary.elementsLabel($0)
    }
  }
}

@ViewBuilder var videosTab: some View {
  AssetLibraryTab("Videos") { videos } label: {
    DefaultAssetLibrary.videosLabel($0)
  }
}

@ViewBuilder var audioTab: some View {
  AssetLibraryTab("Audio") { audio } label: {
    DefaultAssetLibrary.audioLabel($0)
  }
}

@ViewBuilder var imagesTab: some View {
  AssetLibraryTab("Images") { images } label: {
    DefaultAssetLibrary.imagesLabel($0)
  }
}

@ViewBuilder var textTab: some View {
  AssetLibraryTab("Text") { text } label: {
    DefaultAssetLibrary.textLabel($0)
  }
}

@ViewBuilder var shapesTab: some View {
  AssetLibraryTab("Shapes") { shapes } label: {
    DefaultAssetLibrary.shapesLabel($0)
  }
}

@ViewBuilder var stickersTab: some View {
  AssetLibraryTab("Stickers") { stickers } label: {
    DefaultAssetLibrary.stickersLabel($0)
  }
}

@ViewBuilder public var clipsTab: some View {
  AssetLibraryTab("Clips") { videosAndImages } label: { _ in EmptyView() }
}

@ViewBuilder public var overlaysTab: some View {
  AssetLibraryTab("Overlays") { videosAndImages } label: { _ in EmptyView() }
}

@ViewBuilder public var stickersAndShapesTab: some View {
  AssetLibraryTab("Stickers") {
    stickers
    shapes
  } label: { _ in EmptyView() }
}
```

### Asset Library `body` View

Finally, multiple `AssetLibraryTab`s are ready to be used in a SwiftUI `TabView` environment. Use an `AssetLibraryMoreTab` if you have more than five tabs to work around various SwiftUI `TabView` shortcomings. Editor solutions with a floating "+" action button (FAB) show this `AssetLibrary.body` `View`.

```swift highlight-assetLibraryTabView
var body: some View {
  TabView {
    AssetLibrarySceneModeReader { sceneMode in
      if sceneMode == .video {
        elementsTab
        photoRollTab
        videosTab
        audioTab
        AssetLibraryMoreTab {
          imagesTab
          textTab
          shapesTab
          stickersTab
        }
      } else {
        elementsTab
        imagesTab
        textTab
        shapesTab
        stickersTab
      }
    }
  }
}
```

### Asset Library Tab Views

In addition to its `View.body`, the `AssetLibrary` protocol requires you to define `elementsTab`, `videosTab`, `audioTab`, `imagesTab`, `textTab`, `shapesTab`, and `stickersTab` `View`s. These are used when displaying isolated asset libraries just for the corresponding asset type, e.g., for replacing an asset.
```swift highlight-assetLibraryTabViews
@ViewBuilder var elementsTab: some View {
  AssetLibrarySceneModeReader { sceneMode in
    AssetLibraryTab("Elements") { elements(sceneMode) } label: {
      DefaultAssetLibrary.elementsLabel($0)
    }
  }
}

@ViewBuilder var videosTab: some View {
  AssetLibraryTab("Videos") { videos } label: {
    DefaultAssetLibrary.videosLabel($0)
  }
}

@ViewBuilder var audioTab: some View {
  AssetLibraryTab("Audio") { audio } label: {
    DefaultAssetLibrary.audioLabel($0)
  }
}

@ViewBuilder var imagesTab: some View {
  AssetLibraryTab("Images") { images } label: {
    DefaultAssetLibrary.imagesLabel($0)
  }
}

@ViewBuilder var textTab: some View {
  AssetLibraryTab("Text") { text } label: {
    DefaultAssetLibrary.textLabel($0)
  }
}

@ViewBuilder var shapesTab: some View {
  AssetLibraryTab("Shapes") { shapes } label: {
    DefaultAssetLibrary.shapesLabel($0)
  }
}

@ViewBuilder var stickersTab: some View {
  AssetLibraryTab("Stickers") { stickers } label: {
    DefaultAssetLibrary.stickersLabel($0)
  }
}
```

For the video editor solution, it is also required to define `clipsTab`, `overlaysTab`, and `stickersAndShapesTab` `View`s. These composed libraries are used as entry points instead of the FAB.
```swift highlight-assetLibraryVideoEditor
@ViewBuilder public var clipsTab: some View {
  AssetLibraryTab("Clips") { videosAndImages } label: { _ in EmptyView() }
}

@ViewBuilder public var overlaysTab: some View {
  AssetLibraryTab("Overlays") { videosAndImages } label: { _ in EmptyView() }
}

@ViewBuilder public var stickersAndShapesTab: some View {
  AssetLibraryTab("Stickers") {
    stickers
    shapes
  } label: { _ in EmptyView() }
}
```

---

## More Resources

- **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation
- **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs)
- **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples
- **[Support](mailto:support@img.ly)** - Contact IMG.LY support

---

---
title: "Capture From Camera"
description: "Capture photos or videos directly from a connected camera for immediate use in your design."
platform: ios
url: "https://img.ly/docs/cesdk/ios/import-media/capture-from-camera-92f388/"
---

> This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt).

**Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Import Media Assets](https://img.ly/docs/cesdk/ios/import-media-4e3703/) > [Capture From Camera](https://img.ly/docs/cesdk/ios/import-media/capture-from-camera-92f388/)

---

---

## Related Pages

- [Integrate Mobile Camera](https://img.ly/docs/cesdk/ios/import-media/capture-from-camera/integrate-33d863/) - Enable live camera capture in mobile apps to shoot and insert photos or videos.
- [Mobile Camera Configuration](https://img.ly/docs/cesdk/ios/import-media/capture-from-camera/camera-configuration-46afd0/) - Set up the visual interface and behavior when capturing with the IMGLY Camera.
- [Record Video](https://img.ly/docs/cesdk/ios/import-media/capture-from-camera/record-video-47819b/) - Record video directly inside the editor using a connected camera device.
- [Record Reaction](https://img.ly/docs/cesdk/ios/import-media/capture-from-camera/record-reaction-42e4c5/) - Record the user’s reaction while watching a video.
- [Access Recordings](https://img.ly/docs/cesdk/ios/import-media/capture-from-camera/recordings-c2ca1e/) - Manage access to recorded videos or reactions for playback or editing.
- [Dual Camera](https://img.ly/docs/cesdk/ios/import-media/capture-from-camera/dual-camera-ecf71f/) - Record with the front and back cameras at the same time.

---

## More Resources

- **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation
- **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs)
- **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples
- **[Support](mailto:support@img.ly)** - Contact IMG.LY support

---

---
title: "Mobile Camera Configuration"
description: "Set up the visual interface and behavior when capturing with the IMGLY Camera."
platform: ios
url: "https://img.ly/docs/cesdk/ios/import-media/capture-from-camera/camera-configuration-46afd0/"
---

> This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt).
**Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Import Media Assets](https://img.ly/docs/cesdk/ios/import-media-4e3703/) > [Capture From Camera](https://img.ly/docs/cesdk/ios/import-media/capture-from-camera-92f388/) > [Camera Configuration](https://img.ly/docs/cesdk/ios/import-media/capture-from-camera/camera-configuration-46afd0/) --- In this guide you'll learn how to apply `CameraConfiguration` to the **IMGLY Camera** to: - Adjust the visual accents and recording limits. - Discover the different `mode` properties of the camera. - Understand where to find the localizable strings. > **Note:** The IMGLY Camera **only supports video** capture. If you need to capture photos: 1) use the system `PHPickerViewController` or `AVCapturePhotoOutput`, then 2) load the images into the CE.SDK engine. ## What You’ll Learn - How to configure properties of the `Camera` using `CameraConfiguration` - How to initialize each of the available camera modes - How to localize the strings that the `Camera` shows to the user ### Using CameraConfiguration The `CameraConfiguration` structure has properties to: - Control the tint of various controls on the camera. - Limit the total duration of video. - Lock the camera into a particular screen mode. ![Camera with default colors and no limits on duration or mode](assets/configuration-ios-157-1.png) In the images above: - **On the left**, the mode switching button is enabled and visible. - **During recording**, the tint of the recording indicators is the default red color. - There is no limit on **the length** of video. ### Change the Properties To change the properties: - Create a `CameraConfiguration` structure. - Pass it to the `Camera` object **during initialization**.
```swift let engineSettings = EngineSettings(license: "") let cameraConfig = CameraConfiguration(recordingColor: .green, highlightColor: .purple, maxTotalDuration: 10.0, allowExceedingMaxDuration: false, allowModeSwitching: false) Camera(engineSettings, config: cameraConfig) { result in // Handle videos here } ``` The code above sets all available properties of `CameraConfiguration`. Each property **has a default value**, so when creating your structure, you only need to include the properties you want to configure. ![Camera with configuration applied](assets/configuration-ios-157-2.png) In the preceding images: - The **mode switching button** is no longer visible. - The **recording indicators** are now green. - The **limit** of 10 seconds appears in the time stamp window at the top. When the user reaches the limit, a message appears. - The highlight of the **control button** is purple. ### Camera Modes The Camera has a number of different modes. The mode is not set with a `CameraConfiguration` structure but is passed as an argument when initializing the camera. Each mode has its own guide that explores it in more detail. The modes are: - `.standard`: the regular camera. - `.dualCamera(layoutMode)`: records with both the front and back cameras at the same time. Learn more about this mode in the [Dual Camera guide](https://img.ly/docs/cesdk/ios/import-media/capture-from-camera/dual-camera-ecf71f/). - `.reaction(layoutMode, URL, positionsSwapped)`: records with the camera while playing back a video. Learn more about this mode in the [Record Reaction guide](https://img.ly/docs/cesdk/ios/import-media/capture-from-camera/record-reaction-42e4c5/). ```swift Camera(engineSettings, config: cameraConfig, mode: .standard) { result in // Do something with the resulting video } ``` The code above: 1. Initializes the camera with the same `engineSettings` and `cameraConfig` as earlier in the guide. 2. Sets the `mode` property.
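As a quick reference, all three modes could be initialized like this. This is a hedged sketch: `baseURL` is a placeholder for a video of your choosing, and the `engineSettings` and `cameraConfig` values come from earlier in this guide.

```swift
// A placeholder URL for the base video required by Reaction mode.
let baseURL = URL(string: "https://example.com/base-video.mp4")!

// Standard: the regular single-camera recording.
Camera(engineSettings, config: cameraConfig, mode: .standard) { result in
  // Handle the resulting videos here.
}

// Dual Camera: front and back cameras recording at the same time.
Camera(engineSettings, config: cameraConfig, mode: .dualCamera(.vertical)) { result in
  // Handle the resulting videos here.
}

// Reaction: record the front camera while playing back `baseURL`.
Camera(engineSettings, config: cameraConfig,
       mode: .reaction(.vertical, video: baseURL, positionsSwapped: false)) { result in
  // Handle the resulting videos here.
}
```

Each initializer returns an independent camera view; in practice you would present only one of them at a time.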
### Localization and Languages The CE.SDK camera currently supports these languages on iOS: - English - German However, it provides a convenient API to: - Replace the values of existing localization keys. - Add **support for more languages**. All the camera keys are located [in the GitHub repository](https://github.com/imgly/IMGLYUI-swift/tree/$UBQ_VERSION$/Sources/IMGLYCamera/IMGLYCamera.xcstrings) and they all follow **strict naming conventions**, making keys simple to locate and self-explanatory. For instance: - The `ly_img_camera_timer_option_off` key provides the label for the timer off button. - The `ly_img_camera_dialog_delete_last_recording_title` key configures the title of the alert dialog that appears when deleting the last recording. ### Replacing Existing Keys To replace any of the existing camera keys, either find the key for the desired text, add it to your app's `Localizable.xcstrings` file, and set the desired value, or copy the `IMGLYCamera.xcstrings` file into your app and edit it. Keys defined in `Localizable.xcstrings` take precedence over the ones defined in `IMGLYCamera.xcstrings`. ### Supporting New Languages To add support for a language that the CE.SDK camera does not yet support, add the new language to your `Localizable.xcstrings` or `IMGLYCamera.xcstrings` file and provide the desired translations. --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Dual Camera" description: "Record with the front and back cameras at the same time."
platform: ios url: "https://img.ly/docs/cesdk/ios/import-media/capture-from-camera/dual-camera-ecf71f/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Import Media Assets](https://img.ly/docs/cesdk/ios/import-media-4e3703/) > [Capture From Camera](https://img.ly/docs/cesdk/ios/import-media/capture-from-camera-92f388/) > [Dual Camera](https://img.ly/docs/cesdk/ios/import-media/capture-from-camera/dual-camera-ecf71f/) --- Dual Camera Mode lets your users record with both the front and back cameras simultaneously. This is ideal for vlogging, interviews, and live reactions where you want to capture the subject and the user’s perspective at the same time. The result is one or more recordings containing synchronized tracks from each camera that you can bring into the editor and arrange in layouts like split-screen or picture-in-picture. ## What You’ll Learn - How Dual Camera Mode differs from Standard and Reaction modes. - How to launch the CE.SDK camera in Dual Camera mode with a layout. - How to record with both cameras at once. - How to retrieve the dual-camera recordings and access their video URLs. ## When to Use It Choose Dual Camera when you want to capture two perspectives at once: - Interviews, conversations, or podcasts where both participants should be visible. - Reactions during events (e.g., filming a concert while capturing the audience’s response). - Vlogging and storytelling that show both the subject and the narrator. - Any scenario where capturing both front and back cameras adds context. 
**Not appropriate when:** - You only need a single selfie or back-camera video → use **Standard** mode - You want to play back a base video while recording → use **Reaction** mode - You expect an auto-composited video (e.g., side-by-side output) → Dual Camera returns two video assets; you assemble them in the editor. ### Understanding Dual Camera Mode ![A screenshot of a dual-camera mode recording in progress.](assets/dual-camera-ios-0.jpg) Initialize the `IMGLYCamera` in Dual Camera mode with: ```swift Camera(engineSettings, mode: .dualCamera(.vertical)) { result in // Handle results here } ``` - `.vertical` (or `.horizontal`) — defines how the preview windows are arranged during capture. - The recording `result` returns synchronized clips for both cameras. - When the recording finishes, you receive a `.recording([Recording])` result containing both the front and back camera recordings. Here is a minimal code example that extracts the `URL` for each recording: ```swift Camera(engineSettings, mode: .dualCamera(.vertical)) { result in switch result { case let .success(.recording(recordings)): let urls = recordings.flatMap { $0.videos.map(\.url) } print("Recorded videos:", urls) case let .failure(error): print("Error:", error.localizedDescription) default: break } } ``` You can learn more about the `Recording` struct in the [Access Recordings guide](https://img.ly/docs/cesdk/ios/import-media/capture-from-camera/recordings-c2ca1e/). Each returned `Recording` corresponds to a camera feed. The `videos` property contains the captured media tracks and their URLs. > **Note:** **About flipping cameras:** ![Location of the flip control in the UI](assets/ios-flip-button-161.jpeg) The two rectangles in Dual Camera mode aren’t permanently tied to the front or back camera.
When the user taps the **flip camera** control, the feed shown in each rectangle is swapped, but the rectangles themselves keep their identity. This means that if the user flips during recording, the video tracks will reflect that flip: each rectangle continues recording its assigned view, regardless of which camera it’s showing at that moment. The video previews are cropped to fit the screen, but the `Recording` struct contains full-screen data. All returned videos are time-synced so that they align correctly in the editor. ![Full screen images from the front and back cameras.](assets/dual-camera-ios-3.png) > **Note:** The layout of the preview windows (side-by-side or top-and-bottom) is controlled in `CameraMode.swift` in the CE.SDK package. You can change the preview `rect` values if you want to customize the live UI. ## Troubleshooting ❌ **Only one video returned** Be sure you’re using a device that supports simultaneous front-and-back capture. Some older iPhones only support one active camera. ❌ **Videos out of sync** All returned recordings are time-aligned. If playback appears unsynced, check how you’re handling the array of recordings — don’t manually offset them. ❌ **Performance issues** Capturing from two cameras at once can be demanding. Test on a range of devices and consider limiting resolution for smoother performance. ## Next Steps Dual Camera Mode is a powerful way to capture two perspectives at once. By recording both front and back cameras together, your users can create richer, more engaging stories. Continue exploring with these guides: - Learn how to [integrate the IMGLY Camera](https://img.ly/docs/cesdk/ios/import-media/capture-from-camera/integrate-33d863/) into your project. - [Configure](https://img.ly/docs/cesdk/ios/import-media/capture-from-camera/camera-configuration-46afd0/) the UI and other properties of the camera.
- Learn how to [retrieve and manage](https://img.ly/docs/cesdk/ios/import-media/capture-from-camera/recordings-c2ca1e/) recordings. --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Integrate Mobile Camera" description: "Enable live camera capture in mobile apps to shoot and insert photos or videos." platform: ios url: "https://img.ly/docs/cesdk/ios/import-media/capture-from-camera/integrate-33d863/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Import Media Assets](https://img.ly/docs/cesdk/ios/import-media-4e3703/) > [Capture From Camera](https://img.ly/docs/cesdk/ios/import-media/capture-from-camera-92f388/) > [Integrate Mobile Camera](https://img.ly/docs/cesdk/ios/import-media/capture-from-camera/integrate-33d863/) --- In this example, learn how to initialize the [CreativeEditor SDK](https://img.ly/products/creative-sdk)’s mobile camera in your iOS app. Explore a full code sample on [GitHub](https://github.com/imgly/cesdk-swift-examples/tree/v$UBQ_VERSION$/camera-guides-quickstart/). 
## Requirements The mobile camera requires: - iOS 16 - Swift 6.2 (Xcode 26.0.1) or later ### Using Swift Package Manager If you use [Swift Package Manager](https://github.com/apple/swift-package-manager) to build your app, and want to integrate the Creative Engine and UI modules using your regular workflows, add the [IMGLYUI Swift Package](https://github.com/imgly/IMGLYUI-swift) as a dependency to your project. ![](./assets/spm-ui-ios.png) This package provides multiple library products. Add the default `IMGLYUI` library to your app target to include all available UI modules from this package in your app. To keep your app size minimal, only add the library products that you need. For example, only add the `IMGLYCamera` library if you need to `import IMGLYCamera` in your code. ![Settings location for modifying which part of the library is added](assets/integrate-ios-157-10.png) On the *General* page of your app target's Xcode project settings, the *Frameworks, Libraries, and Embedded Content* section lists all used library products. You can use the `+` and `-` buttons to change them. ## Usage This example shows the basic usage of the camera. ### Launch the Camera You can get started right away by importing the camera module into your own code. ```swift import IMGLYCamera ``` In this integration example, the camera is presented as a modal view after tapping a button. ```swift Button("Use the Camera") { isPresented = true } ``` ```swift private lazy var button = UIButton( type: .system, primaryAction: UIAction(title: "Use the Camera") { [unowned self] _ in camera.modalPresentationStyle = .fullScreen present(camera, animated: true) } ) ``` ### Initialization The camera is initialized with `EngineSettings`. You need to provide the **license key** that you received from IMG.LY. Optionally, you can provide a **unique ID** tied to your application's user.
This helps us accurately calculate monthly active users (MAU). It is especially useful when one person uses the app on multiple devices with a sign-in feature, ensuring they’re counted only once. ```swift Camera(.init(license: secrets.licenseKey, userID: "")) { result in ``` ### Result The `Camera`’s `onDismiss` closure returns a `Result`. If the user has recorded videos, the `.success(_)` case contains the `CameraResult`. ```swift switch result { case let .success(.recording(recordings)): let urls = recordings.flatMap { $0.videos.map(\.url) } let recordedVideos = urls // Do something with the recorded videos print(recordedVideos) case .success(.reaction): print("Reaction case not handled here") case let .failure(error): print(error.localizedDescription) isPresented = false } ``` ```swift switch result { case let .success(.recording(recordings)): let urls = recordings.flatMap { $0.videos.map(\.url) } let recordedVideos = urls // Do something with the recorded videos print(recordedVideos) case .success(.reaction): print("Reaction case not handled here") case let .failure(error): print(error.localizedDescription) self.presentedViewController?.dismiss(animated: true) } ``` When using UIKit, the `Camera` view needs to be wrapped in a `UIHostingController` object and integrated into a UIKit view hierarchy.
```swift private lazy var camera = UIHostingController(rootView: Camera(.init(license: secrets.licenseKey, userID: "")) { result in switch result { case let .success(.recording(recordings)): let urls = recordings.flatMap { $0.videos.map(\.url) } let recordedVideos = urls // Do something with the recorded videos print(recordedVideos) case .success(.reaction): print("Reaction case not handled here") case let .failure(error): print(error.localizedDescription) self.presentedViewController?.dismiss(animated: true) } }) ``` ### Environment When using SwiftUI, the camera is best opened in a [`fullScreenCover`](https://developer.apple.com/documentation/swiftui/view/fullscreencover\(ispresented:ondismiss:content:\)). ```swift .fullScreenCover(isPresented: $isPresented) { ``` ## Full Code Here's the full code: ```swift import IMGLYCamera import SwiftUI struct CameraSwiftUI: View { @State private var isPresented = false var body: some View { Button("Use the Camera") { isPresented = true } .fullScreenCover(isPresented: $isPresented) { Camera(.init(license: secrets.licenseKey, userID: "")) { result in switch result { case let .success(.recording(recordings)): let urls = recordings.flatMap { $0.videos.map(\.url) } let recordedVideos = urls // Do something with the recorded videos print(recordedVideos) case .success(.reaction): print("Reaction case not handled here") case let .failure(error): print(error.localizedDescription) isPresented = false } } } } } ``` ```swift import IMGLYCamera import SwiftUI class CameraUIKit: UIViewController { private lazy var camera = UIHostingController(rootView: Camera(.init(license: secrets.licenseKey, userID: "")) { result in switch result { case let .success(.recording(recordings)): let urls = recordings.flatMap { $0.videos.map(\.url) } let recordedVideos = urls // Do something with the recorded videos print(recordedVideos) case .success(.reaction): print("Reaction case not handled here") case let .failure(error): 
print(error.localizedDescription) self.presentedViewController?.dismiss(animated: true) } }) private lazy var button = UIButton( type: .system, primaryAction: UIAction(title: "Use the Camera") { [unowned self] _ in camera.modalPresentationStyle = .fullScreen present(camera, animated: true) } ) override func viewDidLoad() { super.viewDidLoad() view.addSubview(button) button.translatesAutoresizingMaskIntoConstraints = false NSLayoutConstraint.activate([ button.centerXAnchor.constraint(equalTo: view.centerXAnchor), button.centerYAnchor.constraint(equalTo: view.centerYAnchor), ]) } } ``` --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Record Reaction" description: "Record user’s reaction while watching a video." platform: ios url: "https://img.ly/docs/cesdk/ios/import-media/capture-from-camera/record-reaction-42e4c5/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Import Media Assets](https://img.ly/docs/cesdk/ios/import-media-4e3703/) > [Capture From Camera](https://img.ly/docs/cesdk/ios/import-media/capture-from-camera-92f388/) > [Record Reaction](https://img.ly/docs/cesdk/ios/import-media/capture-from-camera/record-reaction-42e4c5/) --- Reaction Mode lets your users record themselves while watching a video. The base video plays back in the preview, while the front camera and microphone capture the user’s reaction. 
When recording stops, you get two assets: the original base video and one or more reaction clips. You can then bring both into the editor and place the reaction video as a picture-in-picture overlay for export. ## What You’ll Learn - How Reaction Mode differs from Standard and Dual Camera modes. - How to launch the CE.SDK camera in Reaction Mode with a base video URL. - How to record the user’s reaction (front camera + mic) while the base video plays. - How to retrieve the reaction recording as a separate file. ## When to Use It Choose Reaction Mode when you want users to capture their response to a video: - Watch-along commentary, tutorials, or educational content - Social media formats like reaction videos or duets - Sports replays or event commentary where facial expressions matter - Any scenario where the user’s reaction is the content 🚫 Not appropriate when: - You only need a selfie-style recording → use Standard mode. - You want to capture both front and back cameras simultaneously → use Dual Camera mode. - You expect an auto-composited reaction + base video → Reaction Mode only records the reaction; you compose both in the editor. ### Launching the Camera Initialize the IMGLYCamera in Reaction Mode with: ```swift Camera(engineSettings, mode: .reaction(.vertical, video: baseURL, positionsSwapped: false)) { result in // Handle results here } ``` - `video: baseURL` — the video to play back during recording - `positionsSwapped` — swaps layout between playback and selfie preview (UI only) - `.vertical` (or `.horizontal`) — how to lay out preview windows while recording ![Camera UI when in Reaction Mode](assets/reaction-ios-159-1.jpeg) It’s also a good idea to lock the mode so that the user cannot switch out of reaction mode. Learn how to lock the mode in the [Camera Configuration](https://img.ly/docs/cesdk/ios/import-media/capture-from-camera/camera-configuration-46afd0/) guide. 
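Putting that together, here is a sketch (assuming the `engineSettings` and `baseURL` values from above) that launches Reaction Mode with mode switching disabled:

```swift
// Lock the camera so users can't switch out of Reaction Mode mid-flow.
let lockedConfig = CameraConfiguration(allowModeSwitching: false)

Camera(engineSettings,
       config: lockedConfig,
       mode: .reaction(.vertical, video: baseURL, positionsSwapped: false)) { result in
  // Handle results here
}
```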
### Retrieving the Recording When the recording finishes, you receive a `.reaction(video: Recording, reaction: [Recording])` result with both the base and the reaction clips. Here is a minimal code example that extracts the `URL` for each of the recordings: ```swift Camera(engineSettings, mode: .reaction(.vertical, video: baseURL)) { result in switch result { case let .success(.reaction(video: base, reaction: reactions)): let baseVideoURL = base.videos.first?.url let reactionURL = reactions.first?.videos.first?.url print("Base video:", baseVideoURL as Any) print("Reaction video:", reactionURL as Any) case let .failure(error): print("Error:", error.localizedDescription) case let .success(.recording(recordings)): // This case is returned in Standard/Dual modes, not Reaction break } } ``` You can learn more about the `Recording` struct in the [Access Recordings guide](https://img.ly/docs/cesdk/ios/import-media/capture-from-camera/recordings-c2ca1e/). The reaction `URL` points to the `Caches` directory on the device. Be sure to copy it somewhere if you want to save it long-term. Here is a simple helper function to copy a file to the `Documents` directory and return the `URL` of the new file location. ```swift func persistFile(from sourceURL: URL, fileName: String) throws -> URL { let docs = try FileManager.default.url(for: .documentDirectory, in: .userDomainMask, appropriateFor: nil, create: true) let dest = docs.appendingPathComponent(fileName) if FileManager.default.fileExists(atPath: dest.path) { try FileManager.default.removeItem(at: dest) } try FileManager.default.copyItem(at: sourceURL, to: dest) return dest } ``` The video previews are cropped to fit the screen, but the `Recording` struct contains full-screen data. The reaction video starts at time 0 of the base video. If the user pauses, both the base and the reaction videos will pause to preserve the time sync. 
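For example, the helper above could be called from the camera's result closure to keep the first reaction clip. This is a sketch; the `reaction.mp4` file name is arbitrary.

```swift
Camera(engineSettings, mode: .reaction(.vertical, video: baseURL)) { result in
  if case let .success(.reaction(video: _, reaction: reactions)) = result,
     let cachedURL = reactions.first?.videos.first?.url {
    // Copy the clip out of Caches before the system can purge it.
    let savedURL = try? persistFile(from: cachedURL, fileName: "reaction.mp4")
    print("Saved reaction to:", savedURL as Any)
  }
}
```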
![Captured recordings from Reaction Mode](assets/reaction-ios-159-2.png) > **Note:** It is beyond the scope of this guide, but the `rect` of each of the previews > is set in `CameraMode.swift` in the CE.SDK package. You can change the layout > of the previews by changing each `rect`. Changing the `rect` values **only > affects the live UI, not the captured recording**. ## Troubleshooting ❌ **Reaction Video is Incomplete** When the user pauses and restarts the recording, the camera will create a new file for each segment. Process the array of recordings. ❌ **Audio Echo** The base video’s audio may be picked up by the mic. Lower preview volume or suggest headphones. ## Next Steps Reaction Mode is a powerful way to create engaging, social-friendly content. By combining playback and live recording, your users can produce watch-along or commentary videos with minimal setup. Continue exploring with these guides: - Learn how to [integrate the IMGLY Camera](https://img.ly/docs/cesdk/ios/import-media/capture-from-camera/integrate-33d863/) into your project. - [Configure](https://img.ly/docs/cesdk/ios/import-media/capture-from-camera/camera-configuration-46afd0/) the UI and other properties of the camera. - Learn how to [retrieve and manage recordings](https://img.ly/docs/cesdk/ios/import-media/capture-from-camera/recordings-c2ca1e/). --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Record Video" description: "Record video directly inside the editor using a connected camera device." 
platform: ios url: "https://img.ly/docs/cesdk/ios/import-media/capture-from-camera/record-video-47819b/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Import Media Assets](https://img.ly/docs/cesdk/ios/import-media-4e3703/) > [Capture From Camera](https://img.ly/docs/cesdk/ios/import-media/capture-from-camera-92f388/) > [Record Video](https://img.ly/docs/cesdk/ios/import-media/capture-from-camera/record-video-47819b/) --- ```swift file=@cesdk_swift_examples/engine-guides-using-camera/UsingCamera.swift reference-only import Foundation import IMGLYEngine @MainActor func usingCamera(engine: Engine) async throws { let scene = try engine.scene.createVideo() let stack = try engine.block.find(byType: .stack).first! let page = try engine.block.create(.page) try engine.block.appendChild(to: stack, child: page) let pixelStreamFill = try engine.block.createFill(.pixelStream) try engine.block.setFill(page, fill: pixelStreamFill) try engine.block.appendEffect(page, effectID: try engine.block.createEffect(.halfTone)) try engine.block.setEnum( pixelStreamFill, property: "fill/pixelStream/orientation", value: "UpMirrored", ) let camera = try Camera() Task { try await engine.scene.zoom(to: page, paddingLeft: 40, paddingTop: 40, paddingRight: 40, paddingBottom: 40) for try await event in camera.captureVideo() { switch event { case let .frame(buffer): try engine.block.setNativePixelBuffer(pixelStreamFill, buffer: buffer) case let .videoCaptured(url): // Use a `VideoFill` for the recorded video file. 
let videoFill = try engine.block.createFill(.video) try engine.block.setFill(page, fill: videoFill) try engine.block.setString( videoFill, property: "fill/video/fileURI", value: url.absoluteString, ) } } } // Stop capturing after 5 seconds. Task { try? await Task.sleep(nanoseconds: NSEC_PER_SEC * 5) camera.stopCapturing() } } ``` ```swift file=@cesdk_swift_examples/engine-guides-using-camera/Camera.swift reference-only import AVFoundation import Foundation enum VideoCapture: @unchecked Sendable { case frame(CVImageBuffer) case videoCaptured(URL) } final class Camera: NSObject, @unchecked Sendable { private lazy var queue = DispatchQueue(label: "ly.img.camera", qos: .userInteractive) private var videoContinuation: AsyncThrowingStream<VideoCapture, Error>.Continuation? private let videoInput: AVCaptureDeviceInput private let audioInput: AVCaptureDeviceInput private var captureSession: AVCaptureSession! private var movieOutput: AVCaptureMovieFileOutput init( videoDevice: AVCaptureDevice = .default(for: .video)!, audioDevice: AVCaptureDevice = .default(for: .audio)!
) throws { videoInput = try AVCaptureDeviceInput(device: videoDevice) audioInput = try AVCaptureDeviceInput(device: audioDevice) movieOutput = AVCaptureMovieFileOutput() } func captureVideo(toURL fileURL: URL = .init(fileURLWithPath: NSTemporaryDirectory() + UUID().uuidString + ".mp4")) -> AsyncThrowingStream<VideoCapture, Error> { .init { continuation in videoContinuation = continuation captureSession = AVCaptureSession() captureSession.addInput(videoInput) captureSession.addInput(audioInput) let videoOutput = AVCaptureVideoDataOutput() videoOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA] videoOutput.setSampleBufferDelegate(self, queue: queue) captureSession.addOutput(videoOutput) captureSession.addOutput(movieOutput) queue.async { self.captureSession.startRunning() self.movieOutput.startRecording(to: fileURL, recordingDelegate: self) } continuation.onTermination = { _ in self.queue.async { self.movieOutput.stopRecording() self.captureSession.stopRunning() } } } } func stopCapturing() { queue.async { self.movieOutput.stopRecording() self.captureSession?.stopRunning() } } } extension Camera: AVCaptureVideoDataOutputSampleBufferDelegate { func captureOutput( _: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from _: AVCaptureConnection, ) { guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return } videoContinuation?.yield(.frame(pixelBuffer)) } } extension Camera: AVCaptureFileOutputRecordingDelegate { func fileOutput( _: AVCaptureFileOutput, didStartRecordingTo _: URL, from _: [AVCaptureConnection], ) {} func fileOutput( _: AVCaptureFileOutput, didFinishRecordingTo url: URL, from _: [AVCaptureConnection], error: Error?, ) { if let error { videoContinuation?.finish(throwing: error) } else { videoContinuation?.yield(.videoCaptured(url)) videoContinuation?.finish() } } } ``` Other than having pre-recorded [video](https://img.ly/docs/cesdk/ios/create-video-c41a08/) in your scene, you can also have a live
preview from a camera in the engine. This allows you to make full use of the engine's capabilities such as [effects](https://img.ly/docs/cesdk/ios/filters-and-effects-6f88ac/), [strokes](https://img.ly/docs/cesdk/ios/outlines/strokes-c2e621/) and [drop shadows](https://img.ly/docs/cesdk/ios/outlines/shadows-and-glows-6610fa/), while the preview integrates with the composition of your scene. Simply swap out the `VideoFill` of a block with a `PixelStreamFill`. This guide shows you how the `PixelStreamFill` can be used in combination with a camera. We create a video scene with a single page. Then we create a `PixelStreamFill` and assign it to the page. To demonstrate the live preview capabilities of the engine, we also apply an effect to the page. ```swift highlight-setup let scene = try engine.scene.createVideo() let stack = try engine.block.find(byType: .stack).first! let page = try engine.block.create(.page) try engine.block.appendChild(to: stack, child: page) let pixelStreamFill = try engine.block.createFill(.pixelStream) try engine.block.setFill(page, fill: pixelStreamFill) try engine.block.appendEffect(page, effectID: try engine.block.createEffect(.halfTone)) ``` ## Orientation Rather than waste expensive compute time transforming the pixel data of the buffer itself, it's often beneficial to apply a transformation during rendering and let the GPU handle this work much more efficiently. For this purpose, the `PixelStreamFill` has an `orientation` property. You can use it to mirror the image or rotate it in 90° steps. This property lets you easily mirror an image from a front facing camera or rotate the image by 90° when the user holds a device sideways. ```swift highlight-orientation try engine.block.setEnum( pixelStreamFill, property: "fill/pixelStream/orientation", value: "UpMirrored", ) ``` ## Camera We use the `Camera` helper class that internally creates an `AVCaptureSession` and connects it with audio/video inputs and frame and file outputs.
We bring the page fully into view using `engine.scene.zoom`. Calling `camera.captureVideo()` starts the frame output and the file recording simultaneously. We can then switch on the `.frame` and `.videoCaptured` events. Once the recording is finished, we swap the `PixelStreamFill` for a `VideoFill` to play back the recorded video file.

```swift highlight-camera
let camera = try Camera()
Task {
  try await engine.scene.zoom(to: page, paddingLeft: 40, paddingTop: 40, paddingRight: 40, paddingBottom: 40)
  for try await event in camera.captureVideo() {
```

## Updating the Fill

In the `.frame` event we update the `PixelStreamFill` with the pixel buffer of the new video frame using `setNativePixelBuffer`, which accepts a `CVPixelBuffer`.

```swift highlight-setNativePixelBuffer
case let .frame(buffer):
  try engine.block.setNativePixelBuffer(pixelStreamFill, buffer: buffer)
```

## Full Code

Here's the full code for both files.

### UsingCamera.swift

```swift
import Foundation
import IMGLYEngine

@MainActor
func usingCamera(engine: Engine) async throws {
  let scene = try engine.scene.createVideo()
  let stack = try engine.block.find(byType: .stack).first!
  let page = try engine.block.create(.page)
  try engine.block.appendChild(to: stack, child: page)

  let pixelStreamFill = try engine.block.createFill(.pixelStream)
  try engine.block.setFill(page, fill: pixelStreamFill)
  try engine.block.appendEffect(page, effectID: try engine.block.createEffect(.halfTone))

  try engine.block.setEnum(
    pixelStreamFill,
    property: "fill/pixelStream/orientation",
    value: "UpMirrored"
  )

  let camera = try Camera()

  Task {
    try await engine.scene.zoom(to: page, paddingLeft: 40, paddingTop: 40, paddingRight: 40, paddingBottom: 40)
    for try await event in camera.captureVideo() {
      switch event {
      case let .frame(buffer):
        try engine.block.setNativePixelBuffer(pixelStreamFill, buffer: buffer)
      case let .videoCaptured(url):
        // Use a `VideoFill` for the recorded video file.
        let videoFill = try engine.block.createFill(.video)
        try engine.block.setFill(page, fill: videoFill)
        try engine.block.setString(
          videoFill,
          property: "fill/video/fileURI",
          value: url.absoluteString
        )
      }
    }
  }

  // Stop capturing after 5 seconds.
  Task {
    try? await Task.sleep(nanoseconds: NSEC_PER_SEC * 5)
    camera.stopCapturing()
  }
}
```

### Camera.swift

```swift
import AVFoundation
import Foundation

@frozen enum VideoCapture {
  case frame(CVImageBuffer)
  case videoCaptured(URL)
}

final class Camera: NSObject {
  private lazy var queue = DispatchQueue(label: "ly.img.camera", qos: .userInteractive)
  private var videoContinuation: AsyncThrowingStream<VideoCapture, Error>.Continuation?
  private let videoInput: AVCaptureDeviceInput
  private let audioInput: AVCaptureDeviceInput
  private var captureSession: AVCaptureSession!
  private var movieOutput: AVCaptureMovieFileOutput

  init(
    videoDevice: AVCaptureDevice = .default(for: .video)!,
    audioDevice: AVCaptureDevice = .default(for: .audio)!
  ) throws {
    videoInput = try AVCaptureDeviceInput(device: videoDevice)
    audioInput = try AVCaptureDeviceInput(device: audioDevice)
    movieOutput = AVCaptureMovieFileOutput()
  }

  func captureVideo(toURL fileURL: URL = .init(fileURLWithPath: NSTemporaryDirectory() + UUID().uuidString + ".mp4")) -> AsyncThrowingStream<VideoCapture, Error> {
    .init { continuation in
      videoContinuation = continuation
      captureSession = AVCaptureSession()
      captureSession.addInput(videoInput)
      captureSession.addInput(audioInput)
      let videoOutput = AVCaptureVideoDataOutput()
      videoOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]
      videoOutput.setSampleBufferDelegate(self, queue: queue)
      captureSession.addOutput(videoOutput)
      captureSession.addOutput(movieOutput)
      queue.async {
        self.captureSession.startRunning()
        self.movieOutput.startRecording(to: fileURL, recordingDelegate: self)
      }
      continuation.onTermination = { _ in
        self.queue.async {
          self.movieOutput.stopRecording()
          self.captureSession.stopRunning()
        }
      }
    }
  }

  func stopCapturing() {
    queue.async
 {
      self.movieOutput.stopRecording()
      self.captureSession?.stopRunning()
    }
  }
}

extension Camera: AVCaptureVideoDataOutputSampleBufferDelegate {
  func captureOutput(
    _: AVCaptureOutput,
    didOutput sampleBuffer: CMSampleBuffer,
    from _: AVCaptureConnection
  ) {
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
    videoContinuation?.yield(.frame(pixelBuffer))
  }
}

extension Camera: AVCaptureFileOutputRecordingDelegate {
  func fileOutput(
    _: AVCaptureFileOutput,
    didStartRecordingTo _: URL,
    from _: [AVCaptureConnection]
  ) {}

  func fileOutput(
    _: AVCaptureFileOutput,
    didFinishRecordingTo url: URL,
    from _: [AVCaptureConnection],
    error: Error?
  ) {
    if let error {
      videoContinuation?.finish(throwing: error)
    } else {
      videoContinuation?.yield(.videoCaptured(url))
      videoContinuation?.finish()
    }
  }
}
```

---

## More Resources

- **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation
- **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs)
- **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples
- **[Support](mailto:support@img.ly)** - Contact IMG.LY support

---

---
title: "Access Recordings"
description: "Manage access to recorded videos or reactions for playback or editing."
platform: ios
url: "https://img.ly/docs/cesdk/ios/import-media/capture-from-camera/recordings-c2ca1e/"
---

> This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt).
**Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Import Media Assets](https://img.ly/docs/cesdk/ios/import-media-4e3703/) > [Capture From Camera](https://img.ly/docs/cesdk/ios/import-media/capture-from-camera-92f388/) > [Access Recordings](https://img.ly/docs/cesdk/ios/import-media/capture-from-camera/recordings-c2ca1e/)

---

In this guide, you'll learn how to access user videos generated by the IMGLY Camera when it’s used as a standalone video camera. This guide covers the "Dual Camera", "Record Reaction" and "Standard" camera modes.

> **Note:** This guide is for working with the standalone camera. When you’re using the version of the IMGLY Camera in the **Video Editor**, you can access the videos using the [standard pipeline](https://img.ly/docs/cesdk/ios/import-media/from-local-source/user-upload-c6c7d9/). In that pipeline, dual and segmented captures appear as multiple assets and call `onUpload` multiple times.

## What You’ll Learn

- How to read the `onDismiss` completion result and extract recorded clips.
- How Dual Camera returns clips with layout frames, so you can recreate the layout.
- How Reaction Mode returns the base video plus separate reaction clips.
- How to persist temporary files.

## When to Use It

- After any camera session (Standard, Dual or Reaction) when you need to retrieve the captured media.
- When you want to reproduce the previewed layout in your editing flow.
- When you need to save recordings for later retrieval.

### Reading Camera Results

When the user dismisses the camera after recording, the result is delivered as a standard Swift `Result` with two cases:

- `CameraResult` for success
- `CameraError` for failure

`CameraResult` is itself an enum with two cases:

- `recording`, used for single and dual mode recordings, contains an array of `Recording` structures.
- `reaction`, used for reaction mode recordings, contains a `Recording` structure wrapping the original video that was reacted to.
It also contains the user's reactions as an array of `Recording` structures.

`CameraError` has three cases:

- `cancelled` when the user canceled the camera view.
- `permissionsMissing` when the user has denied your app permission to use the camera hardware.
- `failedToLoadVideo` (record reaction only) when the reaction video fails to load correctly.

A bare example of initializing an IMGLY Camera with an `onDismiss` closure is:

```swift
private let settings = EngineSettings(license: "<your license key>")

Camera(
  settings,
  config: CameraConfiguration(), // Adjust if you set max duration, mode switching, UI tint, etc.
  mode: .standard // .standard, .dualCamera(...), or .reaction(...)
) { result in
  switch result {
  case .success(let cameraResult):
    switch cameraResult {
    case .recording(let recordings): // [Recording]
      // Standard / Dual: one or more clips.
      break
    case .reaction(let baseVideo, let reactions): // Recording, [Recording]
      // Reaction: base video + separate reaction clips.
      break
    }
  case .failure(let error):
    // User canceled, permissions missing, or other camera error.
    break
  }
}
```

The `Recording` structure has a `duration` property of type `CMTime` and an array of `Video` values named `videos`. The `Video` type contains a `url` that points to the video file, which originates in the app’s caches or temp directory. To persist the video long term, you need to add some kind of storage mechanism to your code.

`Video` also has a `rect` property. For a standard video, `rect` is the full rect of the camera's video preview window. For Dual and Reaction modes, `rect` indicates which section of the screen the video comes from. The dimensions of the `rect` are the dimensions of the preview window; the video itself is recorded full screen.

### Video Segments

The results of the video capture are returned as an array of `Recording` values because the user can tap the record button on the camera to make multiple recording segments.
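Putting the segment model together, here's a minimal sketch of unpacking the `.recording` case. It assumes the `Recording` and `Video` shapes described above (`duration: CMTime`, `videos: [Video]`, `url: URL`, `rect: CGRect`); `handleRecordings` is a hypothetical helper, not part of the SDK.

```swift
import CoreMedia
import Foundation

// Hypothetical helper for the `.recording` success case, assuming the
// `Recording`/`Video` shapes described above.
func handleRecordings(_ recordings: [Recording]) {
  // Sum the segment durations to get the total capture length.
  let total = recordings.reduce(CMTime.zero) { CMTimeAdd($0, $1.duration) }
  print("Captured \(recordings.count) segment(s), \(CMTimeGetSeconds(total))s total")

  for (index, segment) in recordings.enumerated() {
    // Standard mode: one video per segment; Dual mode: two (one per lens).
    for video in segment.videos {
      print("Segment \(index): \(video.url.lastPathComponent) at \(video.rect)")
    }
  }
}
```

You would call a helper like this from the `.recording` branch of the `onDismiss` closure shown above.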
The segments appear as arcs around the record button, as shown below.

![Record button indicating four recording segments.](assets/ios-segments-161.png)

In Single/Reaction mode, `recording.videos.count` is typically `1`; in Dual mode, it’s `2` (one `Video` per lens) for each `Recording` segment.

![Debug window for dual camera recording with four video segments.](assets/ios-recording-struct-161.png)

The preceding debug window screenshot shows the `recordings` for a dual camera recording where the user created four segments. Each segment contains a `videos` array with two entries, one for each camera.

> **Note:** When the user taps the flip camera button during recording, it flips which camera feeds each preview rectangle. It **does not** create a new video segment.

![Flip button to change input of each preview rectangle.](assets/ios-flip-button-161.jpeg)

Try not to think of the two preview rectangles in dual mode as "front camera" and "back camera", as the user can change which camera feeds which rectangle while they’re capturing video.

### Persist Temporary Files

Video URLs are typically returned as temporary file URLs. If you want to use them later, copy or move them to an app-managed location like the app’s documents or library directory.
A minimal example of copying the video to the app’s documents directory and giving it a new filename follows:

```swift
func persistFile(from sourceURL: URL, fileName: String) throws -> URL {
  // Get the destination in the app's Documents directory.
  let docsURL = try FileManager.default.url(
    for: .documentDirectory,
    in: .userDomainMask,
    appropriateFor: nil,
    create: true
  )
  let destinationURL = docsURL.appendingPathComponent(fileName)

  // Remove the file if it already exists.
  if FileManager.default.fileExists(atPath: destinationURL.path) {
    try FileManager.default.removeItem(at: destinationURL)
  }

  // Copy from tmp to Documents. Use .moveItem(at:to:) if you prefer to move instead.
  try FileManager.default.copyItem(at: sourceURL, to: destinationURL)
  return destinationURL
}
```

## Next Steps

Now that you can extract the video, you can:

- Add it to the user's photo library.
- Open a scene with CE.SDK and insert the video.
- Create local assets and add them to one of the prebuilt editors' asset panels.

## Troubleshooting

**❌ URLs are invalid or empty**:

- Treat returned URLs as *temporary*: use them immediately after capture, or copy/move them to a safe folder.

**❌ User canceled**:

- Handle `.failure(.cancelled)` silently or with minimal UI cues. Don’t show an error or alert.

**❌ Videos don’t appear in Photos.app**:

- Saving to **Photos.app** isn’t automatic. Use `PHPhotoLibrary` from Apple’s `Photos` framework to save them.

**❌ Storage balloons during long sessions**:

- Prefer `moveItem(at:to:)` over `copyItem` after capture, and prune old temp files on app launch. For uploads, delete local files after you persist to a remote URL.
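For the last point, here's a minimal cleanup sketch. It assumes recordings are staged as `.mp4` files in the temporary directory with a 24-hour retention window; adjust both to wherever your app actually stages files.

```swift
import Foundation

// Remove stale capture files on app launch. The directory, file extension,
// and age threshold here are assumptions for illustration.
func pruneTemporaryRecordings(olderThan maxAge: TimeInterval = 24 * 60 * 60) {
  let fileManager = FileManager.default
  let tmpURL = URL(fileURLWithPath: NSTemporaryDirectory())
  guard let files = try? fileManager.contentsOfDirectory(
    at: tmpURL,
    includingPropertiesForKeys: [.contentModificationDateKey]
  ) else { return }

  let cutoff = Date().addingTimeInterval(-maxAge)
  for file in files where file.pathExtension == "mp4" {
    let modified = (try? file.resourceValues(forKeys: [.contentModificationDateKey]))?.contentModificationDate
    if let modified, modified < cutoff {
      // Best-effort delete; ignore failures for files still in use.
      try? fileManager.removeItem(at: file)
    }
  }
}
```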
--- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Concepts" description: "Understand key asset concepts like sources, formats, metadata, and how assets are integrated into designs." platform: ios url: "https://img.ly/docs/cesdk/ios/import-media/concepts-5e6197/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Import Media Assets](https://img.ly/docs/cesdk/ios/import-media-4e3703/) > [Concepts](https://img.ly/docs/cesdk/ios/import-media/concepts-5e6197/) --- ```swift reference-only let scene = try engine.scene.create() let page = try engine.block.create(.page) let block = try engine.block.create(.graphic) try engine.block.appendChild(to: scene, child: page) try engine.block.appendChild(to: page, child: block) let customSource = CustomAssetSource(engine: engine) let addedTask = Task { for await sourceID in engine.asset.onAssetSourceAdded { print("Added source: \(sourceID)") } } let removedTask = Task { for await sourceID in engine.asset.onAssetSourceRemoved { print("Removed source: \(sourceID)") } } let updatedTask = Task { for await sourceID in engine.asset.onAssetSourceUpdated { print("Updated source: \(sourceID)") } } try engine.asset.addSource(customSource) let localSourceID = "local-source" try engine.asset.addLocalSource(sourceID: localSourceID) let assetDefinition = AssetDefinition( id: "ocean-waves-1", meta: [ "uri": 
"https://example.com/ocean-waves-1.mp4", "thumbUri": "https://example.com/thumbnails/ocean-waves-1.jpg", "mimeType": MIMEType.mp4.rawValue, "width": "1920", "height": "1080", ], label: [ "en": "relaxing ocean waves", ], tags: [ "en": ["ocean", "waves", "soothing", "slow"], ] ) try engine.asset.addAsset(to: localSourceID, asset: assetDefinition) try engine.asset.removeAsset(from: localSourceID, assetID: assetDefinition.id) engine.asset.findAllSources() let mimeTypes = try engine.asset.getSupportedMIMETypes(sourceID: customSource.id) let credits = engine.asset.getCredits(sourceID: customSource.id) let license = engine.asset.getLicense(sourceID: customSource.id) let groups = try await engine.asset.getGroups(sourceID: customSource.id) let result = try await engine.asset.findAssets( sourceID: customSource.id, query: .init(query: "", page: 0, perPage: 10) ) let asset = result.assets[0] let sortByNewest = try await engine.asset.findAssets( sourceID: customSource.id, query: .init(query: nil, page: 0, perPage: 10, sortingOrder: .descending) ) let sortById = try await engine.asset.findAssets( sourceID: customSource.id, query: .init(query: nil, page: 0, perPage: 10, sortingOrder: .ascending, sortKey: "id") ) let sortByMetaKeyValue = try await engine.asset.findAssets( sourceID: customSource.id, query: .init(query: nil, page: 0, perPage: 10, sortingOrder: .ascending, sortKey: "someMetaKey") ) let search = try await engine.asset.findAssets( sourceID: customSource.id, query: .init(query: "banana", page: 0, perPage: 100) ) let sceneColorsResult = try await engine.asset.findAssets( sourceID: "ly.img.scene.colors", query: .init(query: nil, page: 0, perPage: 99999) ) let colorAsset = sceneColorsResult.assets[0] try await engine.asset.apply(sourceID: customSource.id, assetResult: asset) try await engine.asset.applyToBlock(sourceID: customSource.id, assetResult: asset, block: block) try engine.asset.assetSourceContentsChanged(sourceID: customSource.id) try 
engine.asset.removeSource(sourceID: customSource.id) try engine.asset.removeSource(sourceID: localSourceID) final class CustomAssetSource: NSObject, AssetSource { private weak var engine: Engine? init(engine: Engine) { self.engine = engine } var id: String { "foobar" } func findAssets(queryData: AssetQueryData) async throws -> AssetQueryResult { .init(assets: [ .init(id: "logo", meta: [ "uri": "https://img.ly/static/ubq_samples/imgly_logo.jpg", "thumbUri": "https://img.ly/static/ubq_samples/thumbnails/imgly_logo.jpg", "blockType": DesignBlockType.graphic.rawValue, "fillType": FillType.image.rawValue, "width": "320", "height": "116", ], context: .init(sourceID: "foobar")), ], currentPage: queryData.page, total: 1) } func apply(asset: AssetResult) async throws -> NSNumber? { if let id = try await engine?.asset.defaultApplyAsset(assetResult: asset) { .init(value: id) } else { nil } } func applyToBlock(asset: AssetResult, block: DesignBlockID) async throws { try await engine?.asset.defaultApplyAssetToBlock(assetResult: asset, block: block) } var supportedMIMETypes: [String]? { [MIMEType.jpeg.rawValue] } var credits: IMGLYEngine.AssetCredits? { nil } var license: IMGLYEngine.AssetLicense? { nil } } ``` In this example, we will show you how to use the [CreativeEditor SDK](https://img.ly/products/creative-sdk)'s CreativeEngine to manage assets through the `asset` API. To begin working with assets first you need at least one asset source. As the name might imply asset sources provide the engine with assets. These assets then show up in the editor's asset library. But they can also be independently searched and used to create design blocks. Asset sources can be added dynamically using the `asset` API as we will show in this guide. ## Defining a Custom Asset Source Asset sources need at least an `id` and a `findAssets` function. You may notice asset source functions are all `async`. 
This way you can use web requests or other long-running operations inside them and return results asynchronously.

```swift highlight-defineCustomSource
let customSource = CustomAssetSource(engine: engine)
```

All functions of the `asset` API refer to an asset source by its unique `id`. That's why the `id` is mandatory. Trying to add an asset source with an already registered `id` will fail.

```swift highlight-customSourceId
var id: String { "foobar" }
```

## Finding and Applying Assets

The `findAssets` function should return paginated asset results for the given `queryData`. The asset results have a set of mandatory and optional properties. For a listing with an explanation of each property, please refer to the [Integrate a Custom Asset Source](https://img.ly/docs/cesdk/ios/import-media/from-remote-source/unsplash-8f31f0/) guide. The properties of the `queryData` and the pagination mechanism are also explained in that guide.

```swift
public func findAssets(sourceID: String, query: AssetQueryData) async throws -> AssetQueryResult
```

Finds assets of a given type in a specific asset source.

- `sourceID`: The ID of the asset source.
- `query`: All the options to filter the search results by.
- Returns: The search results.

The optional `applyAsset` function defines what happens when an asset gets applied to the scene. You can use the engine's APIs to do whatever you want with the given asset result. In this case, we always create an image block and add it to the first page we find. If you don't provide this function, the engine's default behavior is to create a block based on the asset result's `meta.blockType` property, add the block to the active page, and sensibly position and size it.

```swift
public func apply(sourceID: String, assetResult: AssetResult) async throws -> DesignBlockID?
```

Apply an asset result to the active scene. The default behavior will instantiate a block and configure it according to the asset's properties.
- Note: This can be overridden by providing an `applyAsset` function when adding the asset source.
- `sourceID`: The ID of the asset source.
- `assetResult`: A single asset result of a `findAssets` query.

```swift
public func defaultApplyAsset(assetResult: AssetResult) async throws -> DesignBlockID?
```

The default implementation for applying an asset to the scene. This implementation is used when no `applyAsset` function is provided to `addSource`.

- `assetResult`: A single asset result of a `findAssets` query.

```swift
public func applyToBlock(sourceID: String, assetResult: AssetResult, block: DesignBlockID) async throws
```

Apply an asset result to the given block.

- `sourceID`: The ID of the asset source.
- `assetResult`: A single asset result of a `findAssets` query.
- `block`: The block the asset should be applied to.

```swift
public func defaultApplyAssetToBlock(assetResult: AssetResult, block: DesignBlockID) async throws
```

The default implementation for applying an asset to an existing block. This implementation is used when no `applyAssetToBlock` function is provided to `addSource`.

- `assetResult`: A single asset result of a `findAssets` query.
- `block`: The block to apply the asset result to.

```swift
public func getSupportedMIMETypes(sourceID: String) throws -> [String]
```

Queries the list of supported mime types of the specified asset source. An empty result means that all mime types are supported.

- `sourceID`: The ID of the asset source.

## Registering a New Asset Source

```swift
public func addSource(_ source: AssetSource) throws
```

Adds a custom asset source. Its ID has to be unique.

- `source`: The asset source.

```swift
public func addLocalSource(sourceID: String, supportedMimeTypes: [String]? = nil, applyAsset: (@Sendable (AssetResult) async throws -> DesignBlockID?)? = nil, applyAssetToBlock: (@Sendable (AssetResult, DesignBlockID) async throws -> Void)? = nil) throws
```

Adds a local asset source. Its ID has to be unique.
- `sourceID`: The unique ID of the asset source.
- `supportedMimeTypes`: The mime types of assets that are allowed to be added to this local source.
- `applyAsset`: An optional callback that can be used to override the default behavior of applying a given asset result to the active scene. Returns the newly created block, or `nil` if no new block was created.
- `applyAssetToBlock`: An optional callback that can be used to override the default behavior of applying an asset result to a given block.

```swift
public func findAllSources() -> [String]
```

Finds all registered asset sources.

- Returns: A list with the IDs of all registered asset sources.

```swift
public func removeSource(sourceID: String) throws
```

Removes an asset source with the given ID.

- `sourceID`: The ID to refer to the asset source.

```swift
public var onAssetSourceAdded: AsyncStream<String> { get }
```

Subscribe to changes whenever an asset source is added.

```swift
public var onAssetSourceRemoved: AsyncStream<String> { get }
```

Subscribe to changes whenever an asset source is removed.

```swift
public var onAssetSourceUpdated: AsyncStream<String> { get }
```

Subscribe to changes whenever an asset source's content is updated.

## Scene Asset Sources

A scene colors asset source that lists all colors in the scene is automatically available. This asset source is read-only and is updated when `findAssets` is called.

## Add an Asset

```swift
public func addAsset(to sourceID: String, asset: AssetDefinition) throws
```

Adds the given asset to an asset source.

- `to`: The ID of the asset source that the asset should be added to.
- `asset`: The asset to be added to the asset source.

## Remove an Asset

```swift
public func removeAsset(from sourceID: String, assetID: String) throws
```

Removes the specified asset from its asset source.

- `from`: The ID of the asset source that currently contains the asset.
- `assetID`: The ID of the asset to be removed.
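As a quick sketch of registering a local source with the optional callbacks documented above — the source ID `"user-media"` is an assumption, and the callback simply falls back to the engine's default placement behavior:

```swift
// Register a local source that accepts JPEG images and MP4 videos.
// "user-media" is a hypothetical source ID for illustration.
try engine.asset.addLocalSource(
  sourceID: "user-media",
  supportedMimeTypes: [MIMEType.jpeg.rawValue, MIMEType.mp4.rawValue],
  applyAsset: { assetResult in
    // Defer to the engine's default apply behavior.
    try await engine.asset.defaultApplyAsset(assetResult: assetResult)
  }
)
```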
## Asset Source Content Updates

If the contents of your custom asset source change, call the `assetSourceContentsChanged` API to notify all subscribers of the `onAssetSourceUpdated` stream.

```swift
public func assetSourceContentsChanged(sourceID: String) throws
```

Notifies the engine that the contents of an asset source changed.

- `sourceID`: The ID of the asset source.

## Groups in Assets

```swift
public func getGroups(sourceID: String) async throws -> [String]
```

Queries the asset source's groups for a certain asset type.

- `sourceID`: The ID of the asset source.
- Returns: The asset groups.

## Credits and License

```swift
public func getCredits(sourceID: String) -> AssetCredits?
```

Queries the asset source's credits info.

- `sourceID`: The ID of the asset source.
- Returns: The asset source's credits info, consisting of a name and an optional URL.

```swift
public func getLicense(sourceID: String) -> AssetLicense?
```

Queries the asset source's license info.

- `sourceID`: The ID of the asset source.
- Returns: The asset source's license info, consisting of a name and an optional URL.
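Combining the pieces above, here's a sketch of observing content updates after mutating a local source. `"local-source"` and `assetDefinition` are the names from the reference snippet; whether `addAsset` triggers the update stream on its own isn't stated here, so this sketch calls `assetSourceContentsChanged` explicitly.

```swift
// Observe content updates across all registered sources.
let updatesTask = Task {
  for await sourceID in engine.asset.onAssetSourceUpdated {
    print("Contents changed for source: \(sourceID)")
  }
}

// Mutate the local source, then notify subscribers.
try engine.asset.addAsset(to: "local-source", asset: assetDefinition)
try engine.asset.assetSourceContentsChanged(sourceID: "local-source")
```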
## Full Code Here's the full code: ```swift let scene = try engine.scene.create() let page = try engine.block.create(.page) let block = try engine.block.create(.graphic) try engine.block.appendChild(to: scene, child: page) try engine.block.appendChild(to: page, child: block) let customSource = CustomAssetSource(engine: engine) let addedTask = Task { for await sourceID in engine.asset.onAssetSourceAdded { print("Added source: \(sourceID)") } } let removedTask = Task { for await sourceID in engine.asset.onAssetSourceRemoved { print("Removed source: \(sourceID)") } } let updatedTask = Task { for await sourceID in engine.asset.onAssetSourceUpdated { print("Updated source: \(sourceID)") } } try engine.asset.addSource(customSource) let localSourceID = "local-source" try engine.asset.addLocalSource(sourceID: localSourceID) let assetDefinition = AssetDefinition( id: "ocean-waves-1", meta: [ "uri": "https://example.com/ocean-waves-1.mp4", "thumbUri": "https://example.com/thumbnails/ocean-waves-1.jpg", "mimeType": MIMEType.mp4.rawValue, "width": "1920", "height": "1080", ], label: [ "en": "relaxing ocean waves", ], tags: [ "en": ["ocean", "waves", "soothing", "slow"], ] ) try engine.asset.addAsset(to: localSourceID, asset: assetDefinition) try engine.asset.removeAsset(from: localSourceID, assetID: assetDefinition.id) engine.asset.findAllSources() let mimeTypes = try engine.asset.getSupportedMIMETypes(sourceID: customSource.id) let credits = engine.asset.getCredits(sourceID: customSource.id) let license = engine.asset.getLicense(sourceID: customSource.id) let groups = try await engine.asset.getGroups(sourceID: customSource.id) let result = try await engine.asset.findAssets( sourceID: customSource.id, query: .init(query: "", page: 0, perPage: 10) ) let asset = result.assets[0] let sortByNewest = try await engine.asset.findAssets( sourceID: customSource.id, query: .init(query: nil, page: 0, perPage: 10, sortingOrder: .descending) ) let sortById = try await 
engine.asset.findAssets( sourceID: customSource.id, query: .init(query: nil, page: 0, perPage: 10, sortingOrder: .ascending, sortKey: "id") ) let sortByMetaKeyValue = try await engine.asset.findAssets( sourceID: customSource.id, query: .init(query: nil, page: 0, perPage: 10, sortingOrder: .ascending, sortKey: "someMetaKey") ) let search = try await engine.asset.findAssets( sourceID: customSource.id, query: .init(query: "banana", page: 0, perPage: 100) ) let sceneColorsResult = try await engine.asset.findAssets( sourceID: "ly.img.scene.colors", query: .init(query: nil, page: 0, perPage: 99999) ) let colorAsset = sceneColorsResult.assets[0] try await engine.asset.apply(sourceID: customSource.id, assetResult: asset) try await engine.asset.applyToBlock(sourceID: customSource.id, assetResult: asset, block: block) try engine.asset.assetSourceContentsChanged(sourceID: customSource.id) try engine.asset.removeSource(sourceID: customSource.id) try engine.asset.removeSource(sourceID: localSourceID) final class CustomAssetSource: NSObject, AssetSource { private weak var engine: Engine? init(engine: Engine) { self.engine = engine } var id: String { "foobar" } func findAssets(queryData: AssetQueryData) async throws -> AssetQueryResult { .init(assets: [ .init(id: "logo", meta: [ "uri": "https://img.ly/static/ubq_samples/imgly_logo.jpg", "thumbUri": "https://img.ly/static/ubq_samples/thumbnails/imgly_logo.jpg", "blockType": DesignBlockType.graphic.rawValue, "fillType": FillType.image.rawValue, "width": "320", "height": "116", ], context: .init(sourceID: "foobar")), ], currentPage: queryData.page, total: 1) } func apply(asset: AssetResult) async throws -> NSNumber? { if let id = try await engine?.asset.defaultApplyAsset(assetResult: asset) { .init(value: id) } else { nil } } func applyToBlock(asset: AssetResult, block: DesignBlockID) async throws { try await engine?.asset.defaultApplyAssetToBlock(assetResult: asset, block: block) } var supportedMIMETypes: [String]? 
{ [MIMEType.jpeg.rawValue] } var credits: IMGLYEngine.AssetCredits? { nil } var license: IMGLYEngine.AssetLicense? { nil } } ``` --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "File Format Support" description: "Review the supported image, video, and audio formats for importing assets." platform: ios url: "https://img.ly/docs/cesdk/ios/import-media/file-format-support-8cdc84/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Import Media Assets](https://img.ly/docs/cesdk/ios/import-media-4e3703/) > [File Format Support](https://img.ly/docs/cesdk/ios/import-media/file-format-support-8cdc84/) --- CreativeEditor SDK (CE.SDK) supports importing high-resolution images, video, and audio content. --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Import From Local Source" description: "Enable users to upload files from their device for use as design assets in the editor." 
platform: ios url: "https://img.ly/docs/cesdk/ios/import-media/from-local-source-39b2a9/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Import Media Assets](https://img.ly/docs/cesdk/ios/import-media-4e3703/) > [Import From Local Source](https://img.ly/docs/cesdk/ios/import-media/from-local-source-39b2a9/) --- --- ## Related Pages - [Import Local Asset](https://img.ly/docs/cesdk/ios/import-media/from-local-source/local-asset-3f93f2/) - Import files directly from the user’s device and insert them into the design canvas. - [From Photo Roll](https://img.ly/docs/cesdk/ios/import-media/from-local-source/photo-roll-23820d/) - Import photos directly from the user’s photo library into your editor. - [From User Upload](https://img.ly/docs/cesdk/ios/import-media/from-local-source/user-upload-c6c7d9/) - Enable file picker uploads from end users for use in the editor. --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Import Local Asset" description: "Import files directly from the user’s device and insert them into the design canvas." platform: ios url: "https://img.ly/docs/cesdk/ios/import-media/from-local-source/local-asset-3f93f2/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). 
For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Import Media Assets](https://img.ly/docs/cesdk/ios/import-media-4e3703/) > [Import From Local Source](https://img.ly/docs/cesdk/ios/import-media/from-local-source-39b2a9/) > [Import Local Asset](https://img.ly/docs/cesdk/ios/import-media/from-local-source/local-asset-3f93f2/) --- This guide explains how to set up and manage **local asset sources** in the **CreativeEditor SDK (CE.SDK)** for iOS. Local assets are files stored on the user’s device, such as images, videos, or audio, that can be imported into the editor for use in designs. A local source can hold: - Images - Videos - Audio The Asset Panel **filters by type automatically**. ## What You’ll Learn - Set up local asset sources in CE.SDK - Integrate local asset sources into the Asset Panel of the prebuilt editors ## When to Use It Use this feature for: - Apps where users can add their personal assets from the documents directory. - Apps that ship with assets stored in their app bundle. ## Key Terms - **Asset:** a single media item (video, image, or audio) - **Asset Definition:** an asset with metadata (ID, URI, label, tags). - **Asset Source:** a named repository you register with the engine (e.g., “user-media,” Unsplash, your server) that owns a list of assets and can be local or remote. - **Asset Library** or **Asset Catalog:** all of the asset sources that are present in an app (the SDK uses the term "library", but "catalog" appears in the documentation and marketing materials sometimes as a synonym). - **Asset Panel:** the prebuilt editors’ UI that lists and searches across all registered sources. ![Dock with buttons to display parts of the asset library](assets/ios-local-1-159.png) In the dock of the Design Editor, there are buttons to display the app's asset library filtered by the type of asset. 
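To make the key terms above concrete, here is a minimal sketch (assuming an already initialized `engine`; `"user-media"` and the file path are hypothetical examples) that uses the engine calls covered later in this guide:

```swift
import Foundation
import IMGLYEngine

// 1. An Asset Source: a named repository registered with the engine.
try engine.asset.addLocalSource(sourceID: "user-media")

// 2. An Asset Definition: one asset plus its metadata (ID, URI, label, tags).
let definition = AssetDefinition(
  id: UUID().uuidString,
  meta: ["uri": "file:///path/to/photo.jpeg", "mimeType": MIMEType.jpeg.rawValue],
  label: ["en": "Example photo"],
  tags: ["en": ["example"]]
)

// 3. The definition becomes part of the source; the Asset Panel can then list it.
try engine.asset.addAsset(to: "user-media", asset: definition)
```

The rest of this guide walks through each of these steps in detail.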
![Asset Panel for images showing three asset sources](assets/ios-local-2-159.PNG)

When opening the *Images* Asset Panel, it displays three asset sources: "Dogs", "Images" and "Photo Roll".

In this guide you’ll:

- Create a **local source**.
- Add files as assets.
- Display the assets in the **catalog**.

Once the assets are in the catalog, you can place an asset from the catalog onto the canvas. The editor creates a block that references that asset. For information on working with the *Photo Roll* and *Camera* buttons in the prebuilt editors, and on dynamically adding assets to a local source at runtime, refer to the [photo roll](https://img.ly/docs/cesdk/ios/import-media/from-local-source/photo-roll-23820d/) and [user upload](https://img.ly/docs/cesdk/ios/import-media/from-local-source/user-upload-c6c7d9/) guides.

## Where to Store Local Assets

There are several common locations for storing local assets in an iOS application:

- The **application bundle**, for assets you want to include with the application itself. These assets will be read-only.
- The app’s `library` directory (`FileManager` calls it `applicationSupportDirectory`), if the app is going to manage the assets.
- The app’s `documents` directory or a sub-directory in `documents`. A location in `documents` can centralize user-generated assets. The user and you both have **full control of assets** in the `documents` directory; therefore, the user could **delete or move** them.

> **Note:** iOS also includes a `temporary` directory and a `caches` directory. These locations get cleared by the system without user interaction. They are not good long-term storage locations for assets, but are excellent for things like thumbnails and other assets that are easily regenerated.

## Creating an Asset Source

The first step is to create a local source repository for the assets.
The minimum requirement is to supply a unique source ID:

```swift
try engine.asset.addLocalSource(sourceID: "my-local-source")
```

The `addLocalSource` method has optional parameters to:

- **restrict mime types** allowed in the repository
- **modify the default behavior** of adding the asset to a scene or block when it is selected.

An asset source can contain different mime types. When the editor displays an asset source in the Asset Panel, it filters based on mime type. For example:

- The `Images` Asset Panel displays a particular asset source.
- It automatically filters out audio, video, and other non-image assets.

Whether your app has one asset source or multiple asset sources depends on how you want to organize the assets. If the Asset Panel may be open while you add or remove assets, notify the editor to **refresh immediately** using code similar to:

```swift
try engine.asset.assetSourceContentsChanged(sourceID: "my-local-source")
```

## Adding a Definition

The asset itself, combined with metadata, becomes an **asset definition**. This is what is stored in the asset source. To include the asset, these are the **minimum requirements** for `AssetDefinition`:

- A unique `id`.
- A `meta` property dictionary where you can supply a `uri` and optionally a `mimeType`.

An `AssetDefinition` can also include a `label` property for these features:

- **Voice over** uses `label` for the asset.
- **Free text search** matches on the `label` property.

It can also have a `tags` property to help organize and search. The `label` and `tags` properties can be localized.

**The local asset needs a URL that points to the actual file.** For example:

1. The assets are in the app bundle.
2. There is an Xcode project named "dogs".
3. You want to get an array of all of the `jpeg` images from a bundle folder in the project.

You could get the array with code that looks like this:

```swift
let bundleURLs: [URL]? = Bundle.main.urls(forResourcesWithExtension: "jpeg", subdirectory: "dogs")
```

or you can get a single asset using its name:

```swift
let bundleURL: URL? = Bundle.main.url(forResource: "mattie01", withExtension: "jpeg")
```

When the assets are in the app’s library directory, use the `FileManager` to get their URLs. Note that both `url(for:in:appropriateFor:create:)` and `contentsOfDirectory(at:includingPropertiesForKeys:options:)` can throw, so the calls need `try`:

```swift
let assetsURL = try FileManager.default.url(for: .applicationSupportDirectory,
                                            in: .userDomainMask,
                                            appropriateFor: nil,
                                            create: false)
let allJpegURLs = try FileManager.default.contentsOfDirectory(
  at: assetsURL,
  includingPropertiesForKeys: nil,
  options: [.skipsHiddenFiles]).filter { ["jpeg", "jpg"].contains($0.pathExtension.lowercased()) }
let singleImageURL = assetsURL.appendingPathComponent("mattie02", conformingTo: .jpeg)
```

In the code above:

- The `assetsURL` points to the app’s `library` directory.
- `contentsOfDirectory` returns an array of all of the files in a directory; the filter keeps only the `jpeg` files.
- The `singleImageURL` is the URL for a specific file in the library directory, assuming it exists.

An `AssetDefinition` for the `singleImageURL` asset above could look like this:

```swift
let mattieImage = AssetDefinition(
  id: UUID().uuidString,
  meta: ["uri": singleImageURL.absoluteString,
         "mimeType": MIMEType.jpeg.rawValue],
  label: ["en": "Mattie"],
  tags: ["en": ["local", "dog"]]
)
```

Some other properties you might include in the `meta` dictionary are:

- `width`
- `height`
- `thumbUri`

> **Note:** Video and audio assets may require more properties in `meta`. Video assets require an image for `thumbUri` and should also have a `duration`, for example.

Once an asset has a definition, the last step is to add it to an asset source.

```swift
try engine.asset.addAsset(to: "my-local-source", asset: mattieImage)
```

> **Note:** This guide uses an Asset Source that you define. When working with the prebuilt editors, you may decide that you want to append a few assets to the already existing sources.
> These are the default sources in the prebuilt editors:
>
> - `ly.img.image`
> - `ly.img.video`
> - `ly.img.audio`

### Adding to the Asset Panel

The prebuilt editors have a modifier specifically for updating the assets upon launch. In this guide, you've been working with images, so we'll add our assets to the default "Images" tab. Depending on the editor you may also have `video`, `audio`, `shape`, and others.

```swift
DesignEditor(engineSettings).imgly.onCreate { engine in
  // Set up the scene and populate the asset source
}.imgly.assetLibrary {
  // Extend the default asset library
  DefaultAssetLibrary(tabs: DefaultAssetLibrary.Tab.allCases)
    .images {
      // Add your directory asset source as a new tab
      AssetLibrarySource.image(
        .title("Dogs"),
        source: .init(id: "my-local-source")
      )
      // Include the default images tab
      DefaultAssetLibrary.images
    }
}
```

The example above:

1. Creates a `DesignEditor`.
2. Adds the "my-local-source" asset source to the "Images" tab with the title of "Dogs". Check the screenshot at the beginning of this guide for reference.

## Troubleshooting

❌ **Assets disappear after relaunch**

- Check that you are storing the assets in the app container before creating the asset definition.

❌ **Asset added programmatically doesn’t appear in the Asset Panel**

- If the panel was already open, notify the editor to refresh:

```swift
try engine.asset.assetSourceContentsChanged(sourceID: "your source id")
```

❌ **Video or Image Shows as Gray/Error Icon**

- Check that the `mimeType` is correct in the `AssetDefinition`.
- Check for a missing or invalid `thumbUri` on a video asset.
- Confirm that `uri` points to the asset.

❌ **My Mixed Asset Source (images + video) only shows the images, the video is missing**

- Make sure that a mixed asset source gets added to all of the tabs it matches.
- Ensure every asset has the proper `mimeType`.

❌ **Bundle Assets Aren’t Found**

- Verify the asset file is in the "Copy Bundle Resources" build step.
- Verify that the name and subdirectory match exactly.
- Confirm that you are pointing to the correct `Bundle` (many apps have more than one).

❌ **Users Can Delete Files via Files App and Assets Break**

- Store editor-managed files in `Application Support` (`Library`). Only use `Documents` when you expect user visibility and can handle missing assets gracefully.

❌ **Assets Added, but Search Doesn’t Find Them**

- Make sure to populate localized `label` and `tags` in the `AssetDefinition`. The Asset Panel’s search indexes these fields.

## Next Steps

Now that you've seen how to import assets stored locally, you might explore:

- Pulling assets [directly from the Photos library](https://img.ly/docs/cesdk/ios/import-media/from-local-source/photo-roll-23820d/).

---

## More Resources

- **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation
- **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs)
- **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples
- **[Support](mailto:support@img.ly)** - Contact IMG.LY support

---

---
title: "From Photo Roll"
description: "Import photos directly from the user’s photo library into your editor."
platform: ios
url: "https://img.ly/docs/cesdk/ios/import-media/from-local-source/photo-roll-23820d/"
---

> This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt).
**Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Import Media Assets](https://img.ly/docs/cesdk/ios/import-media-4e3703/) > [Import From Local Source](https://img.ly/docs/cesdk/ios/import-media/from-local-source-39b2a9/) > [From Photo Roll](https://img.ly/docs/cesdk/ios/import-media/from-local-source/photo-roll-23820d/) --- In this guide, you’ll learn how to use the built-in **Photo Roll** integration to let users add images from their iOS photo library into your CE.SDK-based app. Unlike a custom upload source, CE.SDK provides Photo Roll as a system-backed asset source with its own tab in the Asset Panel. ## What You’ll Learn - How the built-in Photo Roll tab works inside the Asset Panel. - How to handle assets selected from Photo Roll with the `onUpload` callback. - How to persist Photo Roll imports across app launches. ## When To Use This - When you want the fastest path to letting users select from their device’s photo library. - When you don’t need custom UI for picking photos. - When you want to extend Photo Roll imports with your own metadata or persistence logic. ## Using the Photo Roll Tab ![Design Editor tabs with the Photo Roll tab highlighted](assets/ios-photo-roll-1-159.png) CE.SDK includes a built-in **Photo Roll** tab in the Dock. You don’t need to register it manually. When tapped, it: 1. Launches the system photo picker (`PHPickerViewController`). 2. Returns one or more images. Each selected photo appears in a predefined asset source with the ID `"ly.img.image.upload"`. You can intercept these assets in your `onUpload` handler just like you would for a custom source. ![Asset catalog showing Photo Roll source.](assets/ios-photo-roll-2-159.png) *** ## The onUpload Callback for Photo Roll Whenever the user picks from Photo Roll, CE.SDK fires your `.onUpload` callback. 
You’ll receive:

- `engine`, a reference to the CE.SDK engine
- `sourceID`, with a value of `"ly.img.image.upload"`
- `asset`, an `AssetDefinition` with metadata for the photo

Here’s an example handler:

```swift
.imgly.onUpload { engine, sourceID, asset in
  guard sourceID == "ly.img.image.upload" else { return asset }
  var updated = asset
  // Add searchable, localized labels and tags
  updated.labels = ["en": "Camera Roll"]
  updated.tags = ["en": ["photo-roll", "imported"]]
  // Persist the file into Documents so it survives relaunch
  return persistAsset(updated)
}
```

The `persistAsset` function is an example described below. It’s not a standard function. It does the following:

1. Copies files into persistent storage.
2. Updates the URI with the new location.

## Persisting Photo Roll Imports

Just like with other asset uploads, Photo Roll imports only exist in memory during the current session by default. To persist them:

- Copy the selected photo to Documents/Uploads (or upload it to your backend).
- Update the URI in the asset definition.
- Save the metadata with:
  - UserDefaults
  - Core Data
  - A database.
- Re-register the saved definitions into your asset source when the app launches.

This ensures the Photo Roll tab looks the same even after restarting the app. Here is a minimal example that:

- Uses `FileManager` to copy the URL of an `AssetDefinition` to a safe place.
- Modifies the `AssetDefinition` to point to the new URL.

```swift
func persistAsset(_ asset: AssetDefinition) -> AssetDefinition {
  guard let originalURL = URL(string: asset.meta["uri"] ?? "") else { return asset }
  let docs = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask).first!
  // Build the destination path in Documents, keeping the original file name
  let dest = docs.appendingPathComponent(originalURL.lastPathComponent)
  do {
    // Copy the file if it’s not already in Documents
    if !FileManager.default.fileExists(atPath: dest.path) {
      try FileManager.default.copyItem(at: originalURL, to: dest)
    }
    var updated = asset
    updated.meta["uri"] = dest.absoluteString
    return updated
  } catch {
    print("Failed to persist asset: \(error)")
    return asset
  }
}
```

## Troubleshooting

**❌ Error**: Photo Roll tab doesn’t appear

- Make sure you’re including the default tabs in the `imgly.dock`. The photo roll is `Dock.Buttons.photoRoll()`.

**❌ Error**: Photos Disappear After Relaunch

- Photo Roll assets originate in temporary directories. Use a persistence function to save them to the app’s documents directory or your back end. Then reload the assets on app launch.

**❌ Error**: Duplicate Photos

- If the user imports the same photo multiple times, you may want to de-duplicate by comparing URLs or hashes **before** registering new photos.

## Next Steps

Now that you can import from the Photo Roll, you may want to explore:

- [From User Upload](https://img.ly/docs/cesdk/ios/import-media/from-local-source/user-upload-c6c7d9/) to let users add files via the camera or Files app.
- [Capture from Camera](https://img.ly/docs/cesdk/ios/import-media/capture-from-camera-92f388/) using the IMGLY standalone camera to record video.
- Import media [From a Remote Source](https://img.ly/docs/cesdk/ios/import-media/from-remote-source-b65faf/) to bring assets from a service or your backend.
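As a closing sketch of the hash-based de-duplication idea mentioned under Troubleshooting above: you can fingerprint files before registering them. These helpers are hypothetical, not part of CE.SDK, and use Apple's CryptoKit:

```swift
import CryptoKit
import Foundation

// Hypothetical helper: returns a stable fingerprint for a file so that
// re-imports of the same photo can be detected before registering them.
func fingerprint(of fileURL: URL) throws -> String {
  let data = try Data(contentsOf: fileURL)
  // SHA256.hash(data:) yields a digest that is a sequence of bytes
  return SHA256.hash(data: data).map { String(format: "%02x", $0) }.joined()
}

// Usage sketch: keep a set of known fingerprints (persisted e.g. in UserDefaults)
// and skip assets whose hash is already registered.
var knownHashes: Set<String> = []

func isDuplicate(_ url: URL) -> Bool {
  guard let hash = try? fingerprint(of: url) else { return false }
  // insert(_:).inserted is false when the hash was already in the set
  return !knownHashes.insert(hash).inserted
}
```

You would call `isDuplicate(_:)` in your `onUpload` handler before adding the asset to a source.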
--- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "From User Upload" description: "Enable file picker uploads from end users for use in the editor." platform: ios url: "https://img.ly/docs/cesdk/ios/import-media/from-local-source/user-upload-c6c7d9/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Import Media Assets](https://img.ly/docs/cesdk/ios/import-media-4e3703/) > [Import From Local Source](https://img.ly/docs/cesdk/ios/import-media/from-local-source-39b2a9/) > [From User Upload](https://img.ly/docs/cesdk/ios/import-media/from-local-source/user-upload-c6c7d9/) --- This guide shows how to let users add photos, videos, or audio from their device into your CE.SDK app, handle uploads with the `onUpload` callback, and optionally keep them for future sessions. ## What You’ll Learn - How to register an asset source so that it has an **+ Add** button - How to use the `onUpload` handler to process and edit uploaded assets - How to persist uploads so assets reappear the next time the app launches ## When to Use This - When you want users to add media from the system **photo library** or **Files app**. - When you want uploads to be saved across sessions. 
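Taken together, the pieces this guide covers can be sketched in one place. This is a sketch, not a drop-in implementation: the `"dogs-images-directory"` source ID and the tab title are examples reused later in this guide.

```swift
// Sketch: an upload-enabled image tab plus an onUpload handler.
DesignEditor(engineSettings)
  .imgly.assetLibrary {
    DefaultAssetLibrary(tabs: DefaultAssetLibrary.Tab.allCases)
      .images {
        // Upload-enabled tab backed by an example source ID
        AssetLibrarySource.imageUpload(
          .title("Dogs"),
          source: .init(id: "dogs-images-directory")
        )
        // Keep the default images tab
        DefaultAssetLibrary.images
      }
  }
  .imgly.onUpload { engine, sourceID, asset in
    // Inspect or modify the AssetDefinition before it is registered
    return asset
  }
```

Each piece is covered in detail in the sections that follow.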
### Register an Asset Source to Accept Uploads

![An Asset Source that is ready for uploads](assets/ios-image-upload-1-159.png)

When the user displays the Asset Panel, some sources have a **+ Add** button while others don’t. Tapping that button shows standard system options:

- "Choose Photo": pick a photo from the photo library
- "Take Photo": launch the system camera to take a photo
- "Select Photo": open the Files app to select an asset

> **Note:** Choosing **Take Photo** without the correct privacy permissions in your app’s `Info.plist` terminates the app. Add the `NSCameraUsageDescription` key before testing. You can learn more in [this guide](https://img.ly/docs/cesdk/ios/import-media/capture-from-camera/integrate-33d863/).

When registering an Asset Source, either local or remote, you can indicate that users are allowed to add to the source by using the `.imageUpload` initializer.

```swift
.imgly.assetLibrary {
  // Extend the default asset library
  DefaultAssetLibrary(tabs: DefaultAssetLibrary.Tab.allCases)
    .images {
      // Add your directory asset source as a new tab
      AssetLibrarySource.imageUpload(
        .title("Dogs"),
        source: .init(id: "dogs-images-directory")
      )
      // Include the default images tab
      DefaultAssetLibrary.images
    }
}
```

In the preceding code, an existing Asset Source with the `id` of "dogs-images-directory" gets added to the **Images** tab of a demo editor. To add the same source but **disallow uploads**, use the `.image` initializer of `AssetLibrarySource`.

> **Note:** This guide focuses on images. You can register other asset types using `.videoUpload` and `.audioUpload`.

### Use the Photo Roll Asset Source

Along with your custom sources, CE.SDK includes a built-in Photo Roll tab. The Photo Roll tab:

- Opens the system picker
- Adds selected items to a special “Photo Roll” source in the Asset Panel.

You don’t need to register this source yourself.
![Design Editor tabs with the Photo Roll tab highlighted](assets/ios-image-upload-2-159.png)

### The onUpload Event

No matter which entry point the user selects (camera, Files, or Photo Roll), CE.SDK calls the `imgly.onUpload` handler with an asset definition before adding it to the source. The callback passes three parameters:

- `engine`, a reference to the CE.SDK engine
- `sourceID`, an identifier for the Asset Source that initiated the capture
- `asset`, an `AssetDefinition` for the asset

If your app doesn’t have any code in the `.onUpload` callback, the default behavior is:

- The `asset` gets added to the Asset Source with the `sourceID`.
- The `asset` also gets added as a block to the main canvas of the app.

When the app restarts, since the asset is only part of the Asset Source at runtime:

- It no longer appears as part of the source.
- The underlying asset file may still be in the app’s temporary storage.

An example asset definition could have this format (pseudo-notation based on the Swift `AssetDefinition` fields, not strict JSON):

```
id: "string",
groups: nil,
meta: [
  "blockType": "//ly.img.ubq/graphic",
  "width": "3024",
  "thumbUri": "file:///long/file/url/to/the/Caches/directory/filename.jpg",
  "height": "4032",
  "fillType": "//ly.img.ubq/fill/image",
  "kind": "image",
  "uri": "file:///long/file/url/to/the/Caches/directory/filename.jpg"
],
payload: nil,
labels: nil,
tags: nil
```

The sample above is a definition for an image. Your app can modify the definition before passing it along. Some of the modifications might be:

- Adding values for the `labels` and `tags` fields, either automatically or from an extra form dialog.
- Generating an actual `thumbUri` at a thumbnail size instead of using the full-size image.
- Adding the asset to the local or remote source, so that it will persist in future app launches.

Here is a minimal example that adds values for `labels` and `tags`.
```swift
.imgly.onUpload { engine, sourceID, asset in
  var updated = asset
  // Add metadata to make assets searchable later
  updated.labels = ["en": "User Upload"]
  if sourceID == "dogs-upload" {
    updated.tags = ["en": ["custom", "dog"]]
  } else {
    updated.tags = ["en": ["custom", "session"]]
  }
  return updated
}
```

Use the `sourceID` to determine how to process an asset so that it aligns with the other assets in that source. After any modifications, the `.onUpload` handler should finish by returning an `AssetDefinition`, either the original one or a modified one.

## Persisting Uploads Across App Launches

To make uploads reappear next time the user opens the app, you need to:

- Copy the file into persistent local storage or upload it to your server.
- Store the definition metadata alongside the file, in UserDefaults, Core Data, or your backend server.
- Re-register the asset with the asset source on startup.

To keep your app performant, a good practice is to run any saves or uploads as a background task and make minimal changes to the `AssetDefinition` returned from `.onUpload`.

## Troubleshooting

❌ **App crashes when using “Take Photo”**

- Add `NSCameraUsageDescription` to Info.plist. Without it, iOS will terminate your app.

❌ **Assets disappear after relaunch**

- Save files to your app’s Documents or Library directory, or your backend, and re-register them at startup. Assets in Caches are temporary.

❌ **File picker shows unsupported types**

- Validate `asset.meta["kind"]` in `.onUpload`. Reject or filter out anything other than image, video, or audio.

❌ **Large or iCloud-backed photos load slowly**

- Some files may download from iCloud. Show a loading indicator, and consider downscaling or compressing before registering.

❌ **Images appear rotated**

- Normalize EXIF orientation in `.onUpload` before generating thumbnails or inserting blocks.
❌ **Duplicate uploads clutter the panel**

- Hash the file (e.g., MD5 or SHA256) and check against existing asset definitions before registering.
- Ensure that you are generating unique `id` values in the `AssetDefinition`.

## Next Steps

Now that you can let the user add to the asset sources, you may want to explore:

- Import media [from remote source](https://img.ly/docs/cesdk/ios/import-media/from-remote-source-b65faf/)

---

## More Resources

- **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation
- **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs)
- **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples
- **[Support](mailto:support@img.ly)** - Contact IMG.LY support

---

---
title: "Import From Remote Source"
description: "Connect CE.SDK to external sources like servers or third-party platforms to import assets remotely."
platform: ios
url: "https://img.ly/docs/cesdk/ios/import-media/from-remote-source-b65faf/"
---

> This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt).

**Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Import Media Assets](https://img.ly/docs/cesdk/ios/import-media-4e3703/) > [Import From Remote Source](https://img.ly/docs/cesdk/ios/import-media/from-remote-source-b65faf/)

---

---

## Related Pages

- [From A Custom Source](https://img.ly/docs/cesdk/ios/import-media/from-remote-source/unsplash-8f31f0/) - Browse and import royalty-free images from Unsplash into the editor.
--- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "From A Custom Source" description: "Browse and import royalty-free images from Unsplash into the editor." platform: ios url: "https://img.ly/docs/cesdk/ios/import-media/from-remote-source/unsplash-8f31f0/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Import Media Assets](https://img.ly/docs/cesdk/ios/import-media-4e3703/) > [Import From Remote Source](https://img.ly/docs/cesdk/ios/import-media/from-remote-source-b65faf/) > [From a Custom Source](https://img.ly/docs/cesdk/ios/import-media/from-remote-source/unsplash-8f31f0/) --- ```swift file=@cesdk_swift_examples/engine-guides-custom-asset-source/CustomAssetSource.swift reference-only import Foundation import IMGLYEngine @MainActor func customAssetSource(engine: Engine) async throws { let source = UnsplashAssetSource(host: secrets.unsplashHost) try engine.asset.addSource(source) let list = try await engine.asset.findAssets( sourceID: "ly.img.asset.source.unsplash", query: .init(query: "", page: 1, perPage: 10), ) let search = try await engine.asset.findAssets( sourceID: "ly.img.asset.source.unsplash", query: .init(query: "banana", page: 1, perPage: 10), ) try engine.asset.addLocalSource(sourceID: "background-videos") let asset = AssetDefinition(id: "ocean-waves-1", meta: [ "uri": "https://example.com/ocean-waves-1.mp4", "thumbUri": 
"https://example.com/thumbnails/ocean-waves-1.jpg", "mimeType": "video/mp4", "width": "1920", "height": "1080", ], label: [ "en": "relaxing ocean waves", "es": "olas del mar relajantes", ], tags: [ "en": ["ocean", "waves", "soothing", "slow"], "es": ["mar", "olas", "calmante", "lento"], ]) try engine.asset.addAsset(to: "background-videos", asset: asset) } ``` ```swift file=@cesdk_swift_examples/third-party/UnsplashAssetSource.swift reference-only import Foundation import IMGLYEngine public final class UnsplashAssetSource: NSObject { private lazy var decoder: JSONDecoder = { let decoder = JSONDecoder() decoder.keyDecodingStrategy = .convertFromSnakeCase return decoder }() private let host: String private let path: String public init(host: String, path: String = "/unsplashProxy") { self.host = host self.path = path } private struct Endpoint { let path: String let query: [URLQueryItem] static func search(queryData: AssetQueryData) -> Self { Endpoint( path: "/search/photos", query: [ .init(name: "query", value: queryData.query), .init(name: "page", value: String(queryData.page + 1)), .init(name: "per_page", value: String(queryData.perPage)), .init(name: "content_filter", value: "high"), ], ) } static func list(queryData: AssetQueryData) -> Self { Endpoint( path: "/photos", query: [ .init(name: "order_by", value: "popular"), .init(name: "page", value: String(queryData.page + 1)), .init(name: "per_page", value: String(queryData.perPage)), .init(name: "content_filter", value: "high"), ], ) } func url(with host: String, path: String) -> URL? { var components = URLComponents() components.scheme = "https" components.host = host components.path = path + self.path components.queryItems = query return components.url } } } extension UnsplashAssetSource: AssetSource { public static let id = "ly.img.asset.source.unsplash" public var id: String { Self.id } public func findAssets(queryData: AssetQueryData) async throws -> AssetQueryResult { let endpoint: Endpoint = queryData.query? 
.isEmpty ?? true ? .list(queryData: queryData) : .search(queryData: queryData) let data = try await URLSession.shared.data(from: endpoint.url(with: host, path: path)!).0 if queryData.query?.isEmpty ?? true { let response = try decoder.decode(UnsplashListResponse.self, from: data) let nextPage = queryData.page + 1 return .init( assets: response.map(AssetResult.init), currentPage: queryData.page, nextPage: nextPage, total: -1, ) } else { let response = try decoder.decode(UnsplashSearchResponse.self, from: data) let (results, total, totalPages) = (response.results, response.total ?? 0, response.totalPages ?? 0) let nextPage = (queryData.page + 1) == totalPages ? -1 : queryData.page + 1 return .init( assets: results.map(AssetResult.init), currentPage: queryData.page, nextPage: nextPage, total: total, ) } } public var supportedMIMETypes: [String]? { [MIMEType.jpeg.rawValue] } public var credits: AssetCredits? { .init( name: "Unsplash", url: URL(string: "https://unsplash.com/")!, ) } public var license: AssetLicense? { .init( name: "Unsplash license (free)", url: URL(string: "https://unsplash.com/license")!, ) } } private extension AssetResult { convenience init(image: UnsplashImage) { self.init( id: image.id, locale: "en", label: image.description ?? 
image.altDescription, tags: image.tags?.compactMap(\.title), meta: [ "uri": image.urls.full.absoluteString, "thumbUri": image.urls.thumb.absoluteString, "blockType": DesignBlockType.graphic.rawValue, "fillType": FillType.image.rawValue, "shapeType": ShapeType.rect.rawValue, "kind": "image", "width": String(image.width), "height": String(image.height), "looping": "false", ], context: .init(sourceID: "unsplash"), credits: .init(name: image.user.name!, url: image.user.links?.html), utm: .init(source: "CE.SDK Demo", medium: "referral"), ) } } ``` ```swift file=@cesdk_swift_examples/third-party/UnsplashResponse.swift reference-only import Foundation // MARK: - UnsplashResponse struct UnsplashSearchResponse: Decodable { let total, totalPages: Int? let results: [UnsplashImage] } typealias UnsplashListResponse = [UnsplashImage] // MARK: - Result struct UnsplashImage: Decodable { let id: String let createdAt, updatedAt: String let promotedAt: String? let width, height: Int let color, blurHash: String? let description: String? let altDescription: String? let urls: Urls let likes: Int? let likedByUser: Bool? let user: User let tags: [Tag]? } // MARK: - Tag struct Tag: Decodable { let type, title: String? } // MARK: - Urls struct Urls: Decodable { let raw, full, regular, small: URL let thumb, smallS3: URL } // MARK: - User struct User: Decodable { let id: String let updatedAt: String let username, name, firstName: String? let lastName, twitterUsername: String? let portfolioURL: String? let bio, location: String? let links: UserLinks? let instagramUsername: String? let totalCollections, totalLikes, totalPhotos: Int? let acceptedTos, forHire: Bool? } // MARK: - UserLinks struct UserLinks: Decodable { let linksSelf, html, photos, likes: URL? let portfolio, following, followers: URL? } ``` In this example, we will show you how to integrate your custom asset sources into [CE.SDK](https://img.ly/products/creative-sdk). 
With CE.SDK you can directly add external image providers like Unsplash or your own backend. A third option we will explore in this guide is using the engine's Asset API directly. Follow along with this example as we add the Unsplash library. Explore a full code sample on [GitHub](https://github.com/imgly/cesdk-swift-examples/tree/v$UBQ_VERSION$/engine-guides-custom-asset-source/CustomAssetSource.swift). Adding an asset source is done by creating an asset source definition and adding it using `func addSource(_ source: AssetSource) throws`. The asset source needs a unique identifier as part of an object implementing the interface of the source. All Asset API methods require the asset source's unique identifier. ```swift highlight-unsplash-definition let source = UnsplashAssetSource(host: secrets.unsplashHost) try engine.asset.addSource(source) ``` The most important function to implement is `func findAssets(sourceID: String, query: AssetQueryData) async throws -> AssetQueryResult`. With this function alone you can define the complete asset source. It receives the asset query as an argument and returns the results asynchronously. - The `queryData` argument describes the slice of data the engine wants to use. This includes a query string and pagination information. - The result of this query, besides the actual asset data, returns information like the current page, the next page, and the total number of assets available for this specific query. Providing an `async` function gives us great flexibility since we are completely agnostic of how we get the assets. We can use `URLSession`, local storage, a cache, or a third-party library to return the result. ```swift highlight-unsplash-findAssets let list = try await engine.asset.findAssets( sourceID: "ly.img.asset.source.unsplash", query: .init(query: "", page: 1, perPage: 10), ) ``` Let us implement an Unsplash asset source.
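Before the Unsplash-specific implementation, the paging contract that `findAssets` follows can be sketched with a self-contained, in-memory source. The `QueryData` and `QueryResult` structs below are simplified stand-ins for the engine's `AssetQueryData` and `AssetQueryResult` types (an assumption for illustration, not the real API), showing only the filter-and-slice math:

```swift
import Foundation

// Simplified stand-ins for the engine's query/result types (hypothetical,
// for illustration only; the real types live in IMGLYEngine).
struct QueryData {
    let query: String?
    let page: Int      // zero-based page index requested by the engine
    let perPage: Int
}

struct QueryResult {
    let assetIDs: [String]
    let currentPage: Int
    let nextPage: Int  // -1 when there are no further pages
    let total: Int
}

// A minimal in-memory "asset source": filter by query, slice by page,
// and report pagination info back to the caller.
func findAssets(in allAssets: [String], queryData: QueryData) -> QueryResult {
    let query = queryData.query ?? ""
    let matches = query.isEmpty
        ? allAssets
        : allAssets.filter { $0.localizedCaseInsensitiveContains(query) }
    let start = queryData.page * queryData.perPage
    let slice = start < matches.count
        ? Array(matches[start ..< min(start + queryData.perPage, matches.count)])
        : []
    let hasMore = start + queryData.perPage < matches.count
    return QueryResult(
        assetIDs: slice,
        currentPage: queryData.page,
        nextPage: hasMore ? queryData.page + 1 : -1,
        total: matches.count
    )
}
```

The same shape applies whether the data comes from memory, disk, or a network request; only the filtering and slicing would be delegated to the backend.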
Please note that this is for demonstration purposes only and may not be ideal if you want to integrate Unsplash in your production environment. We will create a class integrating two Unsplash REST endpoints. The setup part only contains the endpoint definitions as well as the JSON decoder. According to their documentation and guidelines, we have to create an access key and use a proxy to query the API, but this is out of scope for this example. Take a look at Unsplash's documentation for further details. ```swift highlight-unsplash-api-creation public final class UnsplashAssetSource: NSObject { private lazy var decoder: JSONDecoder = { let decoder = JSONDecoder() decoder.keyDecodingStrategy = .convertFromSnakeCase return decoder }() private let host: String private let path: String public init(host: String, path: String = "/unsplashProxy") { self.host = host self.path = path } private struct Endpoint { let path: String let query: [URLQueryItem] static func search(queryData: AssetQueryData) -> Self { Endpoint( path: "/search/photos", query: [ .init(name: "query", value: queryData.query), .init(name: "page", value: String(queryData.page + 1)), .init(name: "per_page", value: String(queryData.perPage)), .init(name: "content_filter", value: "high"), ], ) } static func list(queryData: AssetQueryData) -> Self { Endpoint( path: "/photos", query: [ .init(name: "order_by", value: "popular"), .init(name: "page", value: String(queryData.page + 1)), .init(name: "per_page", value: String(queryData.perPage)), .init(name: "content_filter", value: "high"), ], ) } func url(with host: String, path: String) -> URL? { var components = URLComponents() components.scheme = "https" components.host = host components.path = path + self.path components.queryItems = query return components.url } } } ``` Unsplash has different API endpoints for different use cases. If we want to search, we need to call a different endpoint than if we just want to display images without a search term.
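The two building blocks of this setup, the proxy URL construction and the snake-case JSON decoding, can be exercised in isolation with plain Foundation. The host and payload below are made up for illustration; `SearchPage` is a stripped-down stand-in for `UnsplashSearchResponse`:

```swift
import Foundation

// Build a proxy URL the same way the `Endpoint` struct above does.
// "example-proxy.test" is a placeholder host, not a real proxy.
var components = URLComponents()
components.scheme = "https"
components.host = "example-proxy.test"
components.path = "/unsplashProxy" + "/search/photos"
components.queryItems = [
    URLQueryItem(name: "query", value: "cat"),
    URLQueryItem(name: "page", value: "1"),
]
let url = components.url!

// Decode a snake_case payload with the same decoder configuration:
// "total_pages" maps onto the camelCase `totalPages` property.
struct SearchPage: Decodable {
    let total: Int?
    let totalPages: Int?
}

let decoder = JSONDecoder()
decoder.keyDecodingStrategy = .convertFromSnakeCase
let json = #"{"total": 133, "total_pages": 7}"#
let page = try! decoder.decode(SearchPage.self, from: Data(json.utf8))
```

Because `.convertFromSnakeCase` handles the key mapping globally, the Swift models can stay in idiomatic camelCase without per-field `CodingKeys`.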
Therefore, we need to check if the query data contains a `query` string. If `findAssets` was called with a non-empty `query`, we can call the `/search` endpoint. As we can see in the example, we are passing the `queryData` to this method, containing the following fields: - `queryData.query`: The current search string from the search bar in the asset library. - `queryData.page`: For Unsplash specifically, the requested page numbers start at 1. We do not query all assets at once but page by page. As the user scrolls down, more pages will be requested by calls to the `findAssets` method. - `queryData.perPage`: Determines how many assets we want included per page. This might change between calls: for instance, `findAssets` can be called with a small `perPage` value to display a small preview, but with a higher number, e.g. if we want to show more assets in a grid view. ```swift highlight-unsplash-query let endpoint: Endpoint = queryData.query? .isEmpty ?? true ? .list(queryData: queryData) : .search(queryData: queryData) ``` Once we receive the response and check for success, we need to map Unsplash's result to the format the asset source API expects. CE.SDK expects an object with the following properties: - `assets`: An array of assets for the current query. We will take a look at what these have to look like in the next paragraph. - `total`: The total number of assets available for the current query. If we search for "Cat" with `perPage` set to 30, we will get 30 assets, but `total` likely will be a much higher number. - `currentPage`: The current page that was requested. - `nextPage`: This is the next page that can be requested after the current one. In this Swift example, `-1` signals that there is no further page (no more assets). In this case we stop querying for more even if the user has scrolled to the bottom. ```swift highlight-unsplash-result-mapping if queryData.query?.isEmpty ??
true { let response = try decoder.decode(UnsplashListResponse.self, from: data) let nextPage = queryData.page + 1 return .init( assets: response.map(AssetResult.init), currentPage: queryData.page, nextPage: nextPage, total: -1, ) } else { let response = try decoder.decode(UnsplashSearchResponse.self, from: data) let (results, total, totalPages) = (response.results, response.total ?? 0, response.totalPages ?? 0) let nextPage = (queryData.page + 1) == totalPages ? -1 : queryData.page + 1 return .init( assets: results.map(AssetResult.init), currentPage: queryData.page, nextPage: nextPage, total: total, ) } ``` Every image we get as a result of Unsplash needs to be translated into an object that is expected by the asset source API. We will describe every mandatory and optional property in the following paragraphs. ```swift highlight-translateToAssetResult convenience init(image: UnsplashImage) { self.init( id: image.id, locale: "en", label: image.description ?? image.altDescription, tags: image.tags?.compactMap(\.title), meta: [ "uri": image.urls.full.absoluteString, "thumbUri": image.urls.thumb.absoluteString, "blockType": DesignBlockType.graphic.rawValue, "fillType": FillType.image.rawValue, "shapeType": ShapeType.rect.rawValue, "kind": "image", "width": String(image.width), "height": String(image.height), "looping": "false", ], context: .init(sourceID: "unsplash"), credits: .init(name: image.user.name!, url: image.user.links?.html), utm: .init(source: "CE.SDK Demo", medium: "referral"), ) } ``` `id`: The id of the asset (mandatory). This has to be unique for this source configuration. ```swift highlight-result-id id: image.id, ``` `locale` (optional): The language locale for this asset is used in `label` and `tags`. ```swift highlight-result-locale locale: "en", ``` `label` (optional): The label of this asset. It could be displayed in the tooltip as well as in the credits of the asset. ```swift highlight-result-label label: image.description ?? 
image.altDescription, ``` `tags` (optional): The tags of this asset. It could be displayed in the credits of the asset. ```swift highlight-result-tags tags: image.tags?.compactMap(\.title), ``` `meta`: The meta object stores asset properties that depend on the specific asset type. ```swift highlight-result-meta meta: [ "uri": image.urls.full.absoluteString, "thumbUri": image.urls.thumb.absoluteString, "blockType": DesignBlockType.graphic.rawValue, "fillType": FillType.image.rawValue, "shapeType": ShapeType.rect.rawValue, "kind": "image", "width": String(image.width), "height": String(image.height), "looping": "false", ], ``` `uri`: For an image asset this is the URL to the image file that will be used to add the image to the scene. Note that we have to use the Unsplash API to obtain a usable URL at first. ```swift highlight-result-uri "uri": image.urls.full.absoluteString, ``` `thumbUri`: The URI of the asset's thumbnail. It could be used in an asset library. ```swift highlight-result-thumbUri "thumbUri": image.urls.thumb.absoluteString, ``` `blockType`: The type id of the design block that should be created when this asset is applied to the scene. If omitted, CE.SDK will try to infer the block type from an optionally provided `mimeType` property (e.g. `image/jpeg`) or by loading the asset data behind `uri` and parsing the mime type from that. However, this will cause a delay before the asset can be added to the scene, which is why it is always recommended to specify the `blockType` upfront. ```swift highlight-result-blockType "blockType": DesignBlockType.graphic.rawValue, ``` `fillType`: The type id of the fill that should be attached to the block when this asset is applied to the scene. If omitted, CE.SDK will default to a solid color fill `//ly.img.ubq/fill/color`. ```swift highlight-result-fillType "fillType": FillType.image.rawValue, ``` `shapeType`: The type id of the shape that should be attached to the block when this asset is applied to the scene. 
If omitted, CE.SDK will default to a rect shape `//ly.img.ubq/shape/rect`. ```swift highlight-result-shapeType "shapeType": ShapeType.rect.rawValue, ``` `kind`: The kind that should be set on the block when this asset is applied to the scene. If omitted, CE.SDK will default to an empty string. ```swift highlight-result-kind "kind": "image", ``` `width`: The original width of the image. `height`: The original height of the image. ```swift highlight-result-size "width": String(image.width), "height": String(image.height), ``` `looping`: Determines whether the asset allows looping (applicable only to Video and GIF). When set to `true`, the asset can extend beyond its original length by looping for the specified duration. ```swift highlight-result-looping "looping": "false", ``` `context`: Adds contextual information to the asset. Right now, this only includes the source id of the source configuration. ```swift highlight-result-context context: .init(sourceID: "unsplash"), ``` `credits` (optional): Some image providers require displaying credits to the asset's artist. If set, it has to be an object with the artist's `name` and a `url` to the artist page. ```swift highlight-result-credits credits: .init(name: image.user.name!, url: image.user.links?.html), ``` `utm` (optional): Some image providers require adding UTM parameters to all links to the source or the artist. If set, it contains strings for the `source` (added as `utm_source`) and the `medium` (added as `utm_medium`). ```swift highlight-result-utm utm: .init(source: "CE.SDK Demo", medium: "referral"), ``` After translating the asset to match the interface from the asset source API, the array of assets for the current page can be returned. Going further with our Unsplash integration, we need to handle the case when no query was provided. Unsplash requires us to call a different API endpoint (`/photos`) with slightly different parameters, but the basics are the same.
We need to check for success, calculate `total` and `nextPage` and translate the assets. ```swift highlight-unsplash-list let search = try await engine.asset.findAssets( sourceID: "ly.img.asset.source.unsplash", query: .init(query: "banana", page: 1, perPage: 10), ) ``` We have already seen that an asset can define credits for the artist. Depending on the image provider you might need to add credits and the license for the source. In case of Unsplash, this includes a link as well as the license of all assets from this source. ```swift highlight-unsplash-credits-license public var credits: AssetCredits? { .init( name: "Unsplash", url: URL(string: "https://unsplash.com/")!, ) } public var license: AssetLicense? { .init( name: "Unsplash license (free)", url: URL(string: "https://unsplash.com/license")!, ) } ``` ## Local Asset Sources In many cases, you already have various finite sets of assets that you want to make available via asset sources. In order to save you the effort of having to implement custom asset query callbacks for each of these asset sources, CE.SDK also allows you to create "local" asset sources, which are managed by the engine and provide search and pagination functionalities. In order to add such a local asset source, simply call the `addLocalSource` API and choose a unique id with which you can later access the asset source. ```swift highlight-add-local-source try engine.asset.addLocalSource(sourceID: "background-videos") ``` The `addAsset(to: String, asset: AssetDefinition)` API allows you to add new asset instances to your local asset source. The local asset source then keeps track of these assets and returns matching items as the result of asset queries. Asset queries return the assets in the same order in which they were inserted into the local asset source. Note that the `AssetDefinition` type that we pass to the `addAsset` API is slightly different than the `AssetResult` type which is returned by asset queries. 
The `AssetDefinition` for example contains all localizations of the labels and tags of the same asset whereas the `AssetResult` is specific to the locale property of the query. ```swift highlight-add-asset-to-source let asset = AssetDefinition(id: "ocean-waves-1", meta: [ "uri": "https://example.com/ocean-waves-1.mp4", "thumbUri": "https://example.com/thumbnails/ocean-waves-1.jpg", "mimeType": "video/mp4", "width": "1920", "height": "1080", ], label: [ "en": "relaxing ocean waves", "es": "olas del mar relajantes", ], tags: [ "en": ["ocean", "waves", "soothing", "slow"], "es": ["mar", "olas", "calmante", "lento"], ]) try engine.asset.addAsset(to: "background-videos", asset: asset) ``` ## Full Code Explore a full code sample on [GitHub](https://github.com/imgly/cesdk-swift-examples/tree/v$UBQ_VERSION$/engine-guides-custom-asset-source/CustomAssetSource.swift). --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Overview" description: "Learn how to import, manage, and customize assets from local, remote, or camera sources in CE.SDK." platform: ios url: "https://img.ly/docs/cesdk/ios/import-media/overview-84bb23/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). 
**Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Import Media Assets](https://img.ly/docs/cesdk/ios/import-media-4e3703/) > [Overview](https://img.ly/docs/cesdk/ios/import-media/overview-84bb23/) --- In CE.SDK, assets are the building blocks of your creative workflow—whether they’re images, videos, audio, fonts, or templates. They power everything from basic image edits to dynamic, template-driven design generation. This guide gives you a high-level understanding of how to bring assets into CE.SDK, where they can come from, and how to decide on the right strategy for your application. Whether you're working with local uploads, remote storage, or third-party sources, this guide will help you navigate your options and build an efficient import pipeline. [Explore Demos](https://img.ly/showcases/cesdk?tags=ios) [Get Started](https://img.ly/docs/cesdk/ios/get-started/overview-e18f40/) ## File Type Support CreativeEditor SDK (CE.SDK) supports importing high-resolution images, video, and audio content. ## Media Constraints ### Image Resolution Limits ### Video Resolution & Duration Limits --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Retrieve Mimetype" description: "Detect the file type of an asset to control how it’s handled or displayed." platform: ios url: "https://img.ly/docs/cesdk/ios/import-media/retrieve-mimetype-ed13bf/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). 
**Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Import Media Assets](https://img.ly/docs/cesdk/ios/import-media-4e3703/) > [Retrieve Mimetype](https://img.ly/docs/cesdk/ios/import-media/retrieve-mimetype-ed13bf/) --- When working with media assets in CE.SDK, it is often necessary to determine the mimetype of a resource before processing it. This guide explains how to use the `getMIMEType(url:)` function to retrieve the mimetype of a given resource. It returns the mimetype of the resource at the given URL. If the resource is not already downloaded, this function will download it. - `url`: the URL of the resource. - Returns the mimetype of the resource. ```swift // Get the mimetype of a resource let mimeType = try await engine.editor.getMIMEType(url: URL(string: "https://cdn.img.ly/assets/demo/v1/ly.img.image/images/sample_1.jpg")!) ``` --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Size Limits" description: "Learn about file size restrictions and how to optimize large assets for use in CE.SDK." platform: ios url: "https://img.ly/docs/cesdk/ios/import-media/size-limits-c32275/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). 
**Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Import Media Assets](https://img.ly/docs/cesdk/ios/import-media-4e3703/) > [Size Limits](https://img.ly/docs/cesdk/ios/import-media/size-limits-c32275/) --- CreativeEditor SDK (CE.SDK) supports importing high-resolution images, video, and audio, but there are practical limits to consider based on the user's device capabilities. ## Image Resolution Limits ## Video Resolution & Duration Limits --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Source Sets" description: "Use multiple versions of an asset to support different resolutions or formats." platform: ios url: "https://img.ly/docs/cesdk/ios/import-media/source-sets-5679c8/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). 
**Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Import Media Assets](https://img.ly/docs/cesdk/ios/import-media-4e3703/) > [Source Sets](https://img.ly/docs/cesdk/ios/import-media/source-sets-5679c8/) --- ```swift file=@cesdk_swift_examples/engine-guides-source-sets/SourceSets.swift reference-only import Foundation import IMGLYEngine @MainActor func sourceSets(engine: Engine) async throws { let scene = try engine.scene.create() let page = try engine.block.create(.page) try engine.block.setWidth(page, value: 800) try engine.block.setHeight(page, value: 600) try engine.block.appendChild(to: scene, child: page) try await engine.scene.zoom(to: page, paddingLeft: 50, paddingTop: 50, paddingRight: 50, paddingBottom: 50) let block = try engine.block.create(DesignBlockType.graphic) try engine.block.setShape(block, shape: engine.block.createShape(.rect)) let imageFill = try engine.block.createFill(.image) try engine.block.setSourceSet(imageFill, property: "fill/image/sourceSet", sourceSet: [ .init( uri: URL(string: "https://img.ly/static/ubq_samples/sample_1_512x341.jpg")!, width: 512, height: 341, ), .init( uri: URL(string: "https://img.ly/static/ubq_samples/sample_1_1024x683.jpg")!, width: 1024, height: 683, ), .init( uri: URL(string: "https://img.ly/static/ubq_samples/sample_1_2048x1366.jpg")!, width: 2048, height: 1366, ), ]) try engine.block.setFill(block, fill: imageFill) try engine.block.appendChild(to: page, child: block) let assetWithSourceSet = AssetDefinition( id: "my-image", meta: [ "kind": "image", "fillType": "//ly.img.ubq/fill/image", ], payload: .init(sourceSet: [ .init( uri: URL(string: "https://img.ly/static/ubq_samples/sample_1_512x341.jpg")!, width: 512, height: 341, ), .init( uri: URL(string: "https://img.ly/static/ubq_samples/sample_1_1024x683.jpg")!, width: 1024, height: 683, ), .init( uri: URL(string: "https://img.ly/static/ubq_samples/sample_1_2048x1366.jpg")!, width: 2048, height: 1366, ), ]), ) try 
engine.asset.addLocalSource(sourceID: "my-dynamic-images") try engine.asset.addAsset(to: "my-dynamic-images", asset: assetWithSourceSet) // Could also acquire the asset using `findAssets` on the source let assetResult = AssetResult( id: assetWithSourceSet.id, meta: assetWithSourceSet.meta, context: AssetContext(sourceID: "my-dynamic-images"), ) let result = try await engine.asset.defaultApplyAsset(assetResult: assetResult) // Lists the entries from above again. _ = try engine.block.getSourceSet( try engine.block.getFill(result!), property: "fill/image/sourceSet", ) let videoFill = try engine.block.createFill(.video) try engine.block.setSourceSet(videoFill, property: "fill/video/sourceSet", sourceSet: [ .init( uri: URL(string: "https://img.ly/static/example-assets/sourceset/1x.mp4")!, width: 1920, height: 1080, ), ]) try await engine.block.addVideoFileURIToSourceSet( videoFill, property: "fill/video/sourceSet", uri: URL(string: "https://img.ly/static/example-assets/sourceset/2x.mp4")!, ) } ``` Source sets allow specifying an entire set of sources, each with a different size, that should be used for drawing a block. The appropriate source is then dynamically chosen based on the current drawing size. This allows using the same scene to render a preview on a mobile screen using a small image file and a high-resolution file for print in the backend. This guide will show you how to specify source sets both for existing blocks and when defining assets. ### Drawing When an image needs to be drawn, the current drawing size in screen pixels is calculated and the engine looks up the most appropriate source file to draw at that resolution. 1. If a source set is set, the source with the closest size exceeding the drawing size is used 2. 
If no source set is set, the full resolution image is downscaled to a maximum edge length of 4096 (configurable via the `maxImageSize` setting) and drawn to the target area. This also gives you more control over up- and downsampling, as all intermediate resolutions can be generated using tooling of your choice. **Source sets are also used during export of your designs and will resolve to the best matching asset for the export resolution.** ## Set up the scene We first create a new scene with a new page. ```swift highlight-setup let scene = try engine.scene.create() let page = try engine.block.create(.page) try engine.block.setWidth(page, value: 800) try engine.block.setHeight(page, value: 600) try engine.block.appendChild(to: scene, child: page) try await engine.scene.zoom(to: page, paddingLeft: 50, paddingTop: 50, paddingRight: 50, paddingBottom: 50) ``` ## Using a Source Set for an existing Block To make use of a source set for an existing image fill, we use the `setSourceSet` API. This defines a set of sources and specifies height and width for each of these sources. The engine then chooses the appropriate source during drawing. You may query an existing source set using `getSourceSet`. You can add sources to an existing source set with `addImageFileURIToSourceSet`. 
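As a rough illustration, the selection rule described above (the source with the closest size exceeding the drawing size, otherwise the largest available) could be approximated like this. `Source` and `bestSource` are hypothetical sketches for this guide, not the engine's actual implementation:

```swift
import Foundation

// Hypothetical simplified source entry; the engine's real source-set
// entries carry a URI plus pixel dimensions, as in the examples below.
struct Source {
    let uri: String
    let width: Int
    let height: Int
}

// Pick the smallest source whose width still covers the drawing size;
// if none is large enough, fall back to the largest available source.
func bestSource(from sourceSet: [Source], drawingWidth: Int) -> Source? {
    let sorted = sourceSet.sorted { $0.width < $1.width }
    return sorted.first { $0.width >= drawingWidth } ?? sorted.last
}
```

With the three sample sizes used in this guide (512, 1024, 2048), a 600-pixel drawing area would resolve to the 1024 source, while a 4000-pixel export would fall back to the 2048 source.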
```swift highlight-set-source-set let block = try engine.block.create(DesignBlockType.graphic) try engine.block.setShape(block, shape: engine.block.createShape(.rect)) let imageFill = try engine.block.createFill(.image) try engine.block.setSourceSet(imageFill, property: "fill/image/sourceSet", sourceSet: [ .init( uri: URL(string: "https://img.ly/static/ubq_samples/sample_1_512x341.jpg")!, width: 512, height: 341, ), .init( uri: URL(string: "https://img.ly/static/ubq_samples/sample_1_1024x683.jpg")!, width: 1024, height: 683, ), .init( uri: URL(string: "https://img.ly/static/ubq_samples/sample_1_2048x1366.jpg")!, width: 2048, height: 1366, ), ]) try engine.block.setFill(block, fill: imageFill) try engine.block.appendChild(to: page, child: block) ``` ## Using a Source Set in an Asset For assets, source sets can be defined in the `payload.sourceSet` field. This is directly translated to the `sourceSet` property when applying the asset. The resulting block is configured in the same way as the one described above. The code demonstrates how to add an asset that defines a source set to a local source and how `applyAsset` handles a populated `payload.sourceSet`. ```swift highlight-asset-definition let assetWithSourceSet = AssetDefinition( id: "my-image", meta: [ "kind": "image", "fillType": "//ly.img.ubq/fill/image", ], payload: .init(sourceSet: [ .init( uri: URL(string: "https://img.ly/static/ubq_samples/sample_1_512x341.jpg")!, width: 512, height: 341, ), .init( uri: URL(string: "https://img.ly/static/ubq_samples/sample_1_1024x683.jpg")!, width: 1024, height: 683, ), .init( uri: URL(string: "https://img.ly/static/ubq_samples/sample_1_2048x1366.jpg")!, width: 2048, height: 1366, ), ]), ) ``` ## Video Source Sets Source sets can also be used for video fills. This is done by setting the `sourceSet` property of the video fill. The engine will then use the source with the closest size exceeding the drawing size. 
Thumbnails will use the smallest source if `features/matchThumbnailSourceToFill` is disabled, which is the default. For low end devices or scenes with large videos, you can force the preview to always use the smallest source when editing by enabling `features/forceLowQualityVideoPreview`. On export, the highest quality source is used in any case. ```swift highlight-video-source-sets let videoFill = try engine.block.createFill(.video) try engine.block.setSourceSet(videoFill, property: "fill/video/sourceSet", sourceSet: [ .init( uri: URL(string: "https://img.ly/static/example-assets/sourceset/1x.mp4")!, width: 1920, height: 1080, ), ]) try await engine.block.addVideoFileURIToSourceSet( videoFill, property: "fill/video/sourceSet", uri: URL(string: "https://img.ly/static/example-assets/sourceset/2x.mp4")!, ) ``` ## Full Code Here's the full code: ```swift import Foundation import IMGLYEngine @MainActor func sourceSets(engine: Engine) async throws { let scene = try engine.scene.create() let page = try engine.block.create(.page) try engine.block.setWidth(page, value: 800) try engine.block.setHeight(page, value: 600) try engine.block.appendChild(to: scene, child: page) try await engine.scene.zoom(to: page, paddingLeft: 50, paddingTop: 50, paddingRight: 50, paddingBottom: 50) let block = try engine.block.create(DesignBlockType.graphic) try engine.block.setShape(block, shape: engine.block.createShape(.rect)) let imageFill = try engine.block.createFill(.image) try engine.block.setSourceSet(imageFill, property: "fill/image/sourceSet", sourceSet: [ .init( uri: URL(string: "https://img.ly/static/ubq_samples/sample_1_512x341.jpg")!, width: 512, height: 341 ), .init( uri: URL(string: "https://img.ly/static/ubq_samples/sample_1_1024x683.jpg")!, width: 1024, height: 683 ), .init( uri: URL(string: "https://img.ly/static/ubq_samples/sample_1_2048x1366.jpg")!, width: 2048, height: 1366 ), ]) try engine.block.setFill(block, fill: imageFill) try engine.block.appendChild(to: page, child: 
block) let assetWithSourceSet = AssetDefinition( id: "my-image", meta: [ "kind": "image", "fillType": "//ly.img.ubq/fill/image", ], payload: .init(sourceSet: [ .init( uri: URL(string: "https://img.ly/static/ubq_samples/sample_1_512x341.jpg")!, width: 512, height: 341 ), .init( uri: URL(string: "https://img.ly/static/ubq_samples/sample_1_1024x683.jpg")!, width: 1024, height: 683 ), .init( uri: URL(string: "https://img.ly/static/ubq_samples/sample_1_2048x1366.jpg")!, width: 2048, height: 1366 ), ]) ) try engine.asset.addLocalSource(sourceID: "my-dynamic-images") try engine.asset.addAsset(to: "my-dynamic-images", asset: assetWithSourceSet) // Could also acquire the asset using `findAssets` on the source let assetResult = AssetResult( id: assetWithSourceSet.id, meta: assetWithSourceSet.meta, context: AssetContext(sourceID: "my-dynamic-images") ) let result = try await engine.asset.defaultApplyAsset(assetResult: assetResult) // Lists the entries from above again. _ = try engine.block.getSourceSet( try engine.block.getFill(result!), property: "fill/image/sourceSet" ) let videoFill = try engine.block.createFill(.video) try engine.block.setSourceSet(videoFill, property: "fill/video/sourceSet", sourceSet: [ .init( uri: URL(string: "https://img.ly/static/example-assets/sourceset/1x.mp4")!, width: 1920, height: 1080 ), ]) try await engine.block.addVideoFileURIToSourceSet( videoFill, property: "fill/video/sourceSet", uri: URL(string: "https://img.ly/static/example-assets/sourceset/2x.mp4")! 
) } ``` --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Insert Media Into Scenes" description: "Understand how insertion works, how inserted media behave within scenes, and how to control them via UI or code." platform: ios url: "https://img.ly/docs/cesdk/ios/insert-media-a217f5/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Insert Media Assets](https://img.ly/docs/cesdk/ios/insert-media-a217f5/) --- --- ## Related Pages - [Overview](https://img.ly/docs/cesdk/ios/overview-491658/) - Understand how insertion works, how inserted media behave within scenes, and how to control them via UI or code. - [Insert Images](https://img.ly/docs/cesdk/ios/insert-media/images-63848a/) - Add still images to CE.SDK scenes programmatically using Swift or using the built-in iOS editor UI. Includes positioning, layering, sizing and format considerations. - [Insert Shapes or Stickers](https://img.ly/docs/cesdk/ios/insert-media/shapes-or-stickers-20ac68/) - Add shapes and stickers to your designs using CE.SDK. Create rectangles, ellipses, stars, polygons, lines, and custom vector paths programmatically or through the built-in UI. 
--- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Insert Images" description: "Add still images to CE.SDK scenes programmatically using Swift or using the built-in iOS editor UI. Includes positioning, layering, sizing and format considerations." platform: ios url: "https://img.ly/docs/cesdk/ios/insert-media/images-63848a/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Insert Media Assets](https://img.ly/docs/cesdk/ios/insert-media-a217f5/) > [Insert Images](https://img.ly/docs/cesdk/ios/insert-media/images-63848a/) --- You can insert images into a scene using CE.SDK, either through the prebuilt UI for iOS or programmatically via Swift on all platforms. This gives you the flexibility to build interactive design workflows, enable user-generated content, or automate image placement based on logic or data. > **Note:** CE.SDK supports a wide range of image formats, including `.png`, `.jpeg`/`.jpg`, `.gif`, `.webp`, `.svg`, and `.bmp`. See a [full list](https://img.ly/docs/cesdk/ios/file-format-support-3c4b2a/) of supported file types. ## What You’ll Learn - Two ways to insert images: - Programmatically (iOS/macOS/Catalyst) by creating a graphic block, applying an image fill, and setting its position/size/rotation/z-index.
- With Editor UI (iOS only) using the controls and asset libraries of a prebuilt editor such as the Design Editor or Photo Editor. - Supported image sources such as bundled assets, app file URLs, and remote URLs. - Practical transforms after insertion such as move, scale, rotate, and order. ## When to Use It - You’re building a custom UI or an automation flow to add images to compositions. - You want a ready-made editing experience on iOS with an image picker and panels. > **Note:** Prefer the programmatic approach and custom UI on macOS/Catalyst/iPad. Use the prebuilt editors on the iPhone only. ## Inserting Images Using the UI CE.SDK’s UI includes a built-in **image tool** that lets users add images from device sources directly onto the canvas. Once inserted, users can move, resize, crop, rotate, or stack images visually. Image controls on the IMGLY UI **Supported image sources:** - Photo Roll (Photos app) - Disk (Files app) - Camera (device camera) - Image (project asset library) In the Asset Library, a user can add images from the Photos app, the camera, or the Files app. Add button in the Asset Library You can customize how the image tool appears in the user interface. ## Inserting Images Programmatically For apps with automation, batch workflows, or logic-driven design experiences, you can insert images into a scene using the block API and the graphics engine. Here’s how to do it: ```swift // 1. Create a graphic block let imageBlock = try engine.block.create(.graphic) // 2. Create a shape for the image let shape = try engine.block.createShape(.rect) try engine.block.setShape(imageBlock, shape: shape) // 3. Create an image fill let imageFill = try engine.block.createFill(.image) try engine.block.setString( imageFill, property: "fill/image/imageFileURI", value: "https://img.ly/static/ubq_samples/sample_4.jpg" ) try engine.block.setFill(imageBlock, fill: imageFill) // 4.
(Optional) Set semantic kind to "image" for clarity try engine.block.setKind(imageBlock, kind: "image") // 5. Add image to the scene let page = try engine.block.find(byType: .page).first! try engine.block.appendChild(to: page, child: imageBlock) ``` The `shape` can be any of the supported shapes (`.rect`, `.star`, etc.) and masks the inserted image. The asset URI in step 3 can either be a remote URL or a local asset URI represented as a String. For assets in the app bundle, get a URL: ```swift let url = Bundle.main.url(forResource: "poster", withExtension: "jpg") ``` For file assets, use the standard `FileManager`: ```swift let docs = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0] let file = docs.appendingPathComponent("uploads/avatar.png") ``` When working with the asset catalog, you can apply an image that’s an `AssetResult` either to: - The scene directly - A block In the code below, `assetList` is an `AssetQueryResult`, which is the result of a call to `findAssets` to get assets from an asset catalog. ```swift guard let newAsset = assetList.assets.first else { return } // Creates a new block that contains the image let imageBlock = try await engine.asset.defaultApplyAsset(assetResult: newAsset) // Applies the image to a block that already exists try await engine.asset.defaultApplyAssetToBlock(assetResult: newAsset, block: someBlock) ``` ## Image Properties After inserting the image, you can change the block's layout properties using standard methods in the `engine.block` API. ### Positioning Refer to the [Move](https://img.ly/docs/cesdk/ios/edit-image/transform/move-818dd9/) guide in the Transform section for more details and other options.
```swift // Set X/Y position on the canvas (in absolute units) try engine.block.setPositionX(imageBlock, value: 100) try engine.block.setPositionY(imageBlock, value: 200) ``` ### Scaling ```swift // Uniform scale try engine.block.setFloat(imageBlock, property: "transform/scale/x", value: 1.5) try engine.block.setFloat(imageBlock, property: "transform/scale/y", value: 1.5) // Non-uniform (stretching) try engine.block.setFloat(imageBlock, property: "transform/scale/x", value: 2.0) try engine.block.setFloat(imageBlock, property: "transform/scale/y", value: 1.0) ``` ### Rotation ```swift // Rotate 45 degrees (setRotation expects radians) let degrees = 45.0 let radians = degrees * (.pi / 180) try engine.block.setRotation(imageBlock, radians: Float(radians)) ``` ### Layering Control stack order using the helper methods to move blocks forward (towards the user) or backward. You can also pin a block to the front or back of the stack. ```swift try engine.block.bringToFront(imageBlock) // Move above siblings try engine.block.sendToBack(imageBlock) // Move below siblings try engine.block.bringForward(imageBlock) // One step forward try engine.block.sendBackward(imageBlock) // One step backward try engine.block.setAlwaysOnTop(imageBlock, enabled: true) ``` > **Note:** You can also group images and other elements using `engine.block.group()` for easier layer management. ## Insert Into an Existing Block If your template exposes a placeholder block or you are creating an automated workflow, you can **replace an image fill** instead of creating a new block. Locate the block using its `name` property (this pairs well with the process for text variables) or by its known `id`.
When you know the `id` of the target: ```swift let fill = try engine.block.createFill(.image) try engine.block.setString(fill, property: "fill/image/imageFileURI", value: imageURI) try engine.block.setFill(targetBlock, fill: fill) ``` When you’re using the `name` property to locate the block, `find(byName:)` returns the ids of all matching blocks, so take the first match: ```swift guard let targetBlock = try engine.block.find(byName: "HeroTile").first else { return } let fill = try engine.block.createFill(.image) try engine.block.setString(fill, property: "fill/image/imageFileURI", value: imageURI) try engine.block.setFill(targetBlock, fill: fill) ``` > **Note:** When generating templates, assign names so downstream replacement stays straightforward: ```swift try engine.block.setString(imageBlock, property: "name", value: "HeroImage") ``` ## Troubleshooting **❌ Nothing appears after insert**: - Verify that the block is attached to the page. - Verify the URL string is correct (when starting from a `URL`, pass its `.absoluteString`). - Check the scene’s current zoom and camera framing. **❌ Remote images fail**: - Confirm HTTPS, CORS, or ATS settings. - Test the URL in a browser. **❌ Pixelated result**: - Change the block size or use a higher-resolution source image. **❌ Unexpected orientation of image**: - Some formats carry EXIF orientation information. Apply `setRotation` or normalize the asset during import. ## Next Steps Now that you’ve learned about inserting images into your compositions, here are some topics to explore to deepen your understanding. - Apply more [transformations](https://img.ly/docs/cesdk/ios/edit-image/transform-9d189b/) such as crop or scale. - Create [templates](https://img.ly/docs/cesdk/ios/create-templates-3aef79/) for automating content creation and formatting. - [Export](https://img.ly/docs/cesdk/ios/export-save-publish/export-82f968/) compositions in a variety of formats.
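As a recap, the insertion steps from this guide can be collected into a single helper. This is a minimal sketch, assuming an initialized engine and a loaded scene with at least one page; the function name `insertImage` and the error enum are illustrative additions, not part of the SDK:

```swift
import IMGLYEngine

enum InsertError: Error { case noPage }

/// Illustrative helper: create a graphic block, apply an image fill,
/// and attach the block to the first page of the current scene.
func insertImage(engine: Engine, uri: String) throws -> DesignBlockID {
    let imageBlock = try engine.block.create(.graphic)
    // A rect shape masks the image to a rectangle; other shapes work too.
    try engine.block.setShape(imageBlock, shape: engine.block.createShape(.rect))
    let fill = try engine.block.createFill(.image)
    try engine.block.setString(fill, property: "fill/image/imageFileURI", value: uri)
    try engine.block.setFill(imageBlock, fill: fill)
    try engine.block.setKind(imageBlock, kind: "image")
    // Attach to a page so the block participates in layout and rendering.
    guard let page = try engine.block.find(byType: .page).first else {
        throw InsertError.noPage
    }
    try engine.block.appendChild(to: page, child: imageBlock)
    return imageBlock
}
```

After the helper returns, the block id can be passed to the positioning, scaling, and layering calls shown above.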
--- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Positioning and Alignment" description: "Precisely position, align, and distribute objects using guides, snapping, and alignment tools." platform: ios url: "https://img.ly/docs/cesdk/ios/insert-media/position-and-align-cc6b6a/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Create and Edit Compositions](https://img.ly/docs/cesdk/ios/create-composition-db709c/) > [Position and Align](https://img.ly/docs/cesdk/ios/insert-media/position-and-align-cc6b6a/) --- ```swift reference-only let x = try engine.block.getPositionX(block) let xMode = try engine.block.getPositionXMode(block) let y = try engine.block.getPositionY(block) let yMode = try engine.block.getPositionYMode(block) try engine.block.setPositionX(block, value: 0.25) try engine.block.setPositionXMode(block, mode: .percent) try engine.block.setPositionY(block, value: 0.25) try engine.block.setPositionYMode(block, mode: .percent) let rad = try engine.block.getRotation(block) try engine.block.setRotation(block, radians: .pi) let flipHorizontal = try engine.block.getFlipHorizontal(block) let flipVertical = try engine.block.getFlipVertical(block) try engine.block.setFlipHorizontal(block, flip: true) try engine.block.setFlipVertical(block, flip: false) let width = try engine.block.getWidth(block) let widthMode = try 
engine.block.getWidthMode(block) let height = try engine.block.getHeight(block) let heightMode = try engine.block.getHeightMode(block) try engine.block.setWidth(block, value: 0.5) try engine.block.setWidth(block, value: 2.5, maintainCrop: true) try engine.block.setWidthMode(block, mode: .percent) try engine.block.setHeight(block, value: 0.5) try engine.block.setHeight(block, value: 2.5, maintainCrop: true) try engine.block.setHeightMode(block, mode: .percent) let frameX = try engine.block.getFrameX(block) let frameY = try engine.block.getFrameY(block) let frameWidth = try engine.block.getFrameWidth(block) let frameHeight = try engine.block.getFrameHeight(block) try engine.block.setAlwaysOnTop(block, enabled: false) let isAlwaysOnTop = try engine.block.isAlwaysOnTop(block) try engine.block.setAlwaysOnBottom(block, enabled: false) let isAlwaysOnBottom = try engine.block.isAlwaysOnBottom(block) try engine.block.bringToFront(block) try engine.block.sendToBack(block) try engine.block.bringForward(block) try engine.block.sendBackward(block) let globalX = try engine.block.getGlobalBoundingBoxX(block) let globalY = try engine.block.getGlobalBoundingBoxY(block) let globalWidth = try engine.block.getGlobalBoundingBoxWidth(block) let globalHeight = try engine.block.getGlobalBoundingBoxHeight(block) let screenSpaceRect = try engine.block.getScreenSpaceBoundingBox(containing: [block]) try engine.block.scale(block, to: 2.0, anchorX: 0.5, anchorY: 0.5) try engine.block.scale(block, to: 2.0, anchorX: 0.5, anchorY: 0.5) try engine.block.fillParent(block) let pages = try engine.scene.getPages() try engine.block.resizeContentAware(pages, width: 100.0, height: 100.0) // Create blocks and append to scene let member1 = try engine.block.create(.graphic) let member2 = try engine.block.create(.graphic) try engine.block.appendChild(to: scene, child: member1) try engine.block.appendChild(to: scene, child: member2) if try engine.block.isDistributable([member1, member2]) { try 
engine.block.distributeHorizontally([member1, member2]) try engine.block.distributeVertically([member1, member2]) } if try engine.block.isAlignable([member1, member2]) { try engine.block.alignHorizontally([member1, member2], alignment: .left) try engine.block.alignVertically([member1, member2], alignment: .top) } let isTransformLocked = try engine.block.isTransformLocked(block) if !isTransformLocked { try engine.block.setTransformLocked(block, locked: true) } ``` In this example, we will show you how to use the [CreativeEditor SDK](https://img.ly/products/creative-sdk)'s CreativeEngine to modify a scene's layout through the `block` API. ## Layout of Blocks > **Note on layout and frame size:** The frame size is determined during the layout phase of the render process inside the engine. This means that calling the frame getters (e.g. `getFrameWidth()`) immediately after modifying the scene might return an inaccurate result. The CreativeEngine supports three different modes for positioning blocks. These can be set for each block and both coordinates independently: - `'Absolute'`: the position value is interpreted in the scene's current design unit. - `'Percent'`: the position value is interpreted as a percentage of the block's parent's size, where 1.0 means 100%. - `'Auto'`: the position is automatically determined. Likewise, there are three different modes for controlling a block's size. Again, both dimensions can be set independently: - `'Absolute'`: the size value is interpreted in the scene's current design unit. - `'Percent'`: the size value is interpreted as a percentage of the block's parent's size, where 1.0 means 100%. - `'Auto'`: the block's size is automatically determined by the size of the block's content. ### Positioning ```swift public func getPositionX(_ id: DesignBlockID) throws -> Float ``` Query a block's x position. - `id:`: The block to query. - Returns: The value of the x position.
```swift public func getPositionY(_ id: DesignBlockID) throws -> Float ``` Query a block's y position. - `id:`: The block to query. - Returns: The value of the y position. ```swift public func getPositionXMode(_ id: DesignBlockID) throws -> PositionMode ``` Query a block's mode for its x position. - `id:`: The block to query. - Returns: The current mode for the x position: absolute, percent or undefined. ```swift public func getPositionYMode(_ id: DesignBlockID) throws -> PositionMode ``` Query a block's mode for its y position. - `id:`: The block to query. - Returns: The current mode for the y position: absolute, percent or undefined. ```swift public func setPositionX(_ id: DesignBlockID, value: Float) throws ``` Update a block's x position. The position refers to the block's local space, relative to its parent with the origin at the top left. Required scope: "layer/move" - `id`: The block to update. - `value`: The value of the x position. ```swift public func setPositionY(_ id: DesignBlockID, value: Float) throws ``` Update a block's y position. The position refers to the block's local space, relative to its parent with the origin at the top left. Required scope: "layer/move" - `id`: The block to update. - `value`: The value of the y position. ```swift public func setPositionXMode(_ id: DesignBlockID, mode: PositionMode) throws ``` Set a block's mode for its x position. Required scope: "layer/move" - `id`: The block to update. - `mode`: The x position mode: absolute, percent or undefined. ```swift public func setPositionYMode(_ id: DesignBlockID, mode: PositionMode) throws ``` Set a block's mode for its y position. Required scope: "layer/move" - `id`: The block to update. - `mode`: The y position mode: absolute, percent or undefined. ### Layers ```swift public func setAlwaysOnTop(_ id: DesignBlockID, enabled: Bool) throws ``` Set a block to be always-on-top. 
If true, this block's global sorting order is automatically adjusted to be higher than all other siblings without this property. If more than one block is set to be always-on-top, the child order decides which is on top. - `id`: The block to update. - `enabled`: The new state. ```swift public func isAlwaysOnTop(_ id: DesignBlockID) throws -> Bool ``` Query whether a block is set to be always-on-top. - `id`: The block to query. ```swift public func setAlwaysOnBottom(_ id: DesignBlockID, enabled: Bool) throws ``` Set a block to be always-on-bottom. If true, this block's global sorting order is automatically adjusted to be lower than all other siblings without this property. If more than one block is set to be always-on-bottom, the child order decides which is on the bottom. - `id`: The block to update. - `enabled`: The new state. ```swift public func isAlwaysOnBottom(_ id: DesignBlockID) throws -> Bool ``` Query whether a block is set to be always-on-bottom. - `id`: The block to query. ```swift public func bringToFront(_ id: DesignBlockID) throws ``` Updates the sorting order of this block and all of its manually created siblings so that the given block has the highest sorting order. If the block is parented to a track, it is first moved up in the hierarchy. - `id`: The block to update. ```swift public func sendToBack(_ id: DesignBlockID) throws ``` Updates the sorting order of this block and all of its manually created siblings so that the given block has the lowest sorting order. If the block is parented to a track, it is first moved up in the hierarchy. - `id`: The block to update. ```swift public func bringForward(_ id: DesignBlockID) throws ``` Updates the sorting order of this block and all of its superjacent siblings so that the given block has a higher sorting order than the next superjacent sibling. If the block is parented to a track, it is first moved up in the hierarchy. Empty tracks and empty groups are passed by. - `id`: The block to update.
```swift public func sendBackward(_ id: DesignBlockID) throws ``` Updates the sorting order of this block and all of its manually created and subjacent siblings so that the given block will have a lower sorting order than the next subjacent sibling. If the block is parented to a track, it is first moved up in the hierarchy. Empty tracks and empty groups are passed by. - `id`: The block to update. ### Size ```swift public func getWidth(_ id: DesignBlockID) throws -> Float ``` Query a block's width. - `id:`: The block to query. - Returns: The value of the block's width. ```swift public func getWidthMode(_ id: DesignBlockID) throws -> SizeMode ``` Query a block's mode for its width. - `id:`: The block to query. - Returns: The current mode for the width: absolute, percent or auto. ```swift public func getHeight(_ id: DesignBlockID) throws -> Float ``` Query a block's height. - `id:`: The block to query. - Returns: The value of the block's height. ```swift public func getHeightMode(_ id: DesignBlockID) throws -> SizeMode ``` Query a block's mode for its height. - `id:`: The block to query. - Returns: The current mode for the height: absolute, percent or auto. ```swift public func setWidth(_ id: DesignBlockID, value: Float, maintainCrop: Bool = false) throws ``` Update a block's width and optionally maintain the crop. If the crop is maintained, the crop values will be automatically adjusted. The content fill mode `Cover` is only kept if the `features/transformEditsRetainCoverMode` setting is enabled, otherwise it will change to `Crop`. Required scope: "layer/resize" - `id`: The block to update. - `value`: The new width of the block. - `maintainCrop`: Whether or not the crop values, if available, should be automatically adjusted. ```swift public func setWidthMode(_ id: DesignBlockID, mode: SizeMode) throws ``` Set a block's mode for its width. Required scope: "layer/resize" - `id`: The block to update. - `mode`: The width mode. 
```swift public func setHeight(_ id: DesignBlockID, value: Float, maintainCrop: Bool = false) throws ``` Update a block's height and optionally maintain the crop. If the crop is maintained, the crop values will be automatically adjusted. The content fill mode `Cover` is only kept if the `features/transformEditsRetainCoverMode` setting is enabled, otherwise it will change to `Crop`. Required scope: "layer/resize" - `id`: The block to update. - `value`: The new height of the block. - `maintainCrop`: Whether or not the crop values, if available, should be automatically adjusted. ```swift public func setHeightMode(_ id: DesignBlockID, mode: SizeMode) throws ``` Set a block's mode for its height. Required scope: "layer/resize" - `id`: The block to update. - `mode`: The height mode. ### Rotation ```swift public func getRotation(_ id: DesignBlockID) throws -> Float ``` Query a block's rotation in radians. - `id:`: The block to query. - Returns: The block's rotation around its center in radians. ```swift public func setRotation(_ id: DesignBlockID, radians: Float) throws ``` Update a block's rotation. Required scope: "layer/rotate" - `id`: The block to update. - `radians`: The new rotation in radians. Rotation is applied around the block's center. ### Flipping ```swift public func setFlipHorizontal(_ id: DesignBlockID, flip: Bool) throws ``` Update a block's horizontal flip. Required scope: "layer/flip" - `id`: The block to update. - `flip`: If the flip should be enabled. ```swift public func getFlipHorizontal(_ id: DesignBlockID) throws -> Bool ``` Query a block's horizontal flip state. - `id:`: The block to query. - Returns: A boolean indicating whether the block is flipped in the queried direction. ```swift public func setFlipVertical(_ id: DesignBlockID, flip: Bool) throws ``` Update a block's vertical flip. Required scope: "layer/flip" - `id`: The block to update. - `flip`: If the flip should be enabled.
```swift public func getFlipVertical(_ id: DesignBlockID) throws -> Bool ``` Query a block's vertical flip state. - `id:`: The block to query. - Returns: A boolean indicating whether the block is flipped in the queried direction. ### Scaling ```swift public func scale(_ id: DesignBlockID, to scale: Float, anchorX: Float = 0, anchorY: Float = 0) throws ``` Scales the block and all of its children proportionally around the specified relative anchor point. This updates the position, size, and style properties (e.g. stroke width) of the block and its children. Required scope: "layer/resize" - `id`: The block that should be scaled. - `scale`: The scale factor to be applied to the current properties of the block. - `anchorX`: The relative position along the width of the block around which the scaling should occur. (0 = left edge, 0.5 = center, 1 = right edge) - `anchorY`: The relative position along the height of the block around which the scaling should occur. (0 = top edge, 0.5 = center, 1 = bottom edge) ### Fill a Block's Parent ```swift public func fillParent(_ id: DesignBlockID) throws ``` Resize and position a block to entirely fill its parent block. The crop values of the block, except for the flip and crop rotation, are reset if it can be cropped. If the size of the block's fill is unknown, the content fill mode is changed from `Crop` to `Cover` to prevent invalid crop values. Required scopes: "layer/move" and "layer/resize" - `id:`: The block that should fill its parent. ### Resize Blocks Content-aware ```swift public func resizeContentAware(_ ids: [DesignBlockID], width: Float, height: Float) throws ``` Resize all blocks to the given size. The content of the blocks is automatically adjusted to fit the new dimensions. Throws an error if the blocks could not be resized. Required scope: "layer/resize" - `ids`: The blocks to resize. - `width`: The new width of the blocks. - `height`: The new height of the blocks.
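To make the difference between the three resizing helpers above concrete, here is a short sketch. It assumes an initialized `engine` and an existing `block`, as in the surrounding examples, and mirrors calls from the Full Code section:

```swift
// `scale` multiplies the block's current transform: double the size
// around its center; style properties such as stroke width scale too.
try engine.block.scale(block, to: 2.0, anchorX: 0.5, anchorY: 0.5)

// `fillParent` instead resizes and repositions the block so it
// exactly covers its parent, resetting most crop values.
try engine.block.fillParent(block)

// `resizeContentAware` targets whole pages: the engine re-fits the
// pages' content to the new dimensions automatically.
let pages = try engine.scene.getPages()
try engine.block.resizeContentAware(pages, width: 1080, height: 1080)
```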
### Even Distribution ```swift public func isDistributable(_ ids: [DesignBlockID]) throws -> Bool ``` Confirms that a given set of blocks can be distributed. - `ids:`: A non-empty array of block ids. - Returns: Whether the blocks can be distributed. ```swift public func distributeHorizontally(_ ids: [DesignBlockID]) throws ``` Distribute multiple blocks horizontally within their bounding box so that the space between them is even. Required scope: "layer/move" - `ids:`: A non-empty array of block ids. ```swift public func distributeVertically(_ ids: [DesignBlockID]) throws ``` Distribute multiple blocks vertically within their bounding box so that the space between them is even. Required scope: "layer/move" - `ids:`: A non-empty array of block ids. ### Alignment ```swift public func isAlignable(_ ids: [DesignBlockID]) throws -> Bool ``` Confirms that a given set of blocks can be aligned. - `ids:`: A non-empty array of block ids. - Returns: Whether the blocks can be aligned. ```swift public func alignHorizontally(_ ids: [DesignBlockID], alignment: HorizontalBlockAlignment) throws ``` Align multiple blocks horizontally within their bounding box or a single block to its parent. Required scope: "layer/move" - `ids:`: A non-empty array of block ids. - `alignment:`: How they should be aligned: left, right, or center ```swift public func alignVertically(_ ids: [DesignBlockID], alignment: VerticalBlockAlignment) throws ``` Align multiple blocks vertically within their bounding box or a single block to its parent. Required scope: "layer/move" - `ids:`: A non-empty array of block ids. - `alignment:`: How they should be aligned: top, bottom, or center ### Computed Dimensions ```swift public func getFrameX(_ id: DesignBlockID) throws -> Float ``` Get a block's layout position on the x-axis. The layout position is only available after an internal update loop, which may not happen immediately. - `id:`: The block to query. - Returns: The layout position on the x-axis. 
```swift public func getFrameY(_ id: DesignBlockID) throws -> Float ``` Get a block's layout position on the y-axis. The layout position is only available after an internal update loop, which may not happen immediately. - `id:`: The block to query. - Returns: The layout position on the y-axis. ```swift public func getFrameWidth(_ id: DesignBlockID) throws -> Float ``` Get a block's layout width. The layout width is only available after an internal update loop, which may not happen immediately. - `id:`: The block to query. - Returns: The layout width. ```swift public func getFrameHeight(_ id: DesignBlockID) throws -> Float ``` Get a block's layout height. The layout height is only available after an internal update loop, which may not happen immediately. - `id:`: The block to query. - Returns: The layout height. ```swift public func getGlobalBoundingBoxX(_ id: DesignBlockID) throws -> Float ``` Get the x position of the block's axis-aligned bounding box in the scene's global coordinate space. The scene's global coordinate space has its origin at the top left. - `id:`: The block whose bounding box should be calculated. - Returns: The x coordinate of the position of the axis-aligned bounding box. ```swift public func getGlobalBoundingBoxY(_ id: DesignBlockID) throws -> Float ``` Get the y position of the block's axis-aligned bounding box in the scene's global coordinate space. The scene's global coordinate space has its origin at the top left. - `id:`: The block whose bounding box should be calculated. - Returns: The y coordinate of the position of the axis-aligned bounding box. ```swift public func getGlobalBoundingBoxWidth(_ id: DesignBlockID) throws -> Float ``` Get the width of the block's axis-aligned bounding box in the scene's global coordinate space. The scene's global coordinate space has its origin at the top left. - `id:`: The block whose bounding box should be calculated. - Returns: The width of the axis-aligned bounding box.
```swift public func getGlobalBoundingBoxHeight(_ id: DesignBlockID) throws -> Float ``` Get the height of the block's axis-aligned bounding box in the scene's global coordinate space. The scene's global coordinate space has its origin at the top left. - `id:`: The block whose bounding box should be calculated. - Returns: The height of the axis-aligned bounding box. ```swift public func getScreenSpaceBoundingBox(containing blocks: [DesignBlockID]) throws -> CGRect ``` Get the position and size of the axis-aligned bounding box for the given blocks in screen space. - `blocks:`: The blocks whose bounding box should be calculated. - Returns: The position and size of the bounding box as `CGRect` (in points). ### Transform Locking You can lock the transform of a block to prevent changes to any of its transformations, that is, the block's position, rotation, scale, and size. ```swift public func isTransformLocked(_ id: DesignBlockID) throws -> Bool ``` Query a block's transform locked state. If `true`, the block's transform can't be changed. - `id:`: The block to query. - Returns: `true` if transform locked, `false` otherwise. ```swift public func setTransformLocked(_ id: DesignBlockID, locked: Bool) throws ``` Update a block's transform locked state. - `id`: The block to update. - `locked`: Whether the block's transform should be locked.
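A typical pattern is to check the lock state before attempting transform edits, so an editing flow skips locked blocks instead of failing mid-operation. A minimal sketch, assuming an initialized `engine` and an existing `block` as in the examples above:

```swift
// Only move or rotate the block if its transform is not locked.
if try !engine.block.isTransformLocked(block) {
    try engine.block.setRotation(block, radians: .pi / 4) // 45 degrees
    try engine.block.setPositionX(block, value: 100)
}
```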
## Full Code Here's the full code: ```swift let x = try engine.block.getPositionX(block) let xMode = try engine.block.getPositionXMode(block) let y = try engine.block.getPositionY(block) let yMode = try engine.block.getPositionYMode(block) try engine.block.setPositionX(block, value: 0.25) try engine.block.setPositionXMode(block, mode: .percent) try engine.block.setPositionY(block, value: 0.25) try engine.block.setPositionYMode(block, mode: .percent) let rad = try engine.block.getRotation(block) try engine.block.setRotation(block, radians: .pi) let flipHorizontal = try engine.block.getFlipHorizontal(block) let flipVertical = try engine.block.getFlipVertical(block) try engine.block.setFlipHorizontal(block, flip: true) try engine.block.setFlipVertical(block, flip: false) let width = try engine.block.getWidth(block) let widthMode = try engine.block.getWidthMode(block) let height = try engine.block.getHeight(block) let heightMode = try engine.block.getHeightMode(block) try engine.block.setWidth(block, value: 0.5) try engine.block.setWidth(block, value: 2.5, maintainCrop: true) try engine.block.setWidthMode(block, mode: .percent) try engine.block.setHeight(block, value: 0.5) try engine.block.setHeight(block, value: 2.5, maintainCrop: true) try engine.block.setHeightMode(block, mode: .percent) let frameX = try engine.block.getFrameX(block) let frameY = try engine.block.getFrameY(block) let frameWidth = try engine.block.getFrameWidth(block) let frameHeight = try engine.block.getFrameHeight(block) try engine.block.setAlwaysOnTop(block, enabled: false) let isAlwaysOnTop = try engine.block.isAlwaysOnTop(block) try engine.block.setAlwaysOnBottom(block, enabled: false) let isAlwaysOnBottom = try engine.block.isAlwaysOnBottom(block) try engine.block.bringToFront(block) try engine.block.sendToBack(block) try engine.block.bringForward(block) try engine.block.sendBackward(block) let globalX = try engine.block.getGlobalBoundingBoxX(block) let globalY = try 
engine.block.getGlobalBoundingBoxY(block) let globalWidth = try engine.block.getGlobalBoundingBoxWidth(block) let globalHeight = try engine.block.getGlobalBoundingBoxHeight(block) let screenSpaceRect = try engine.block.getScreenSpaceBoundingBox(containing: [block]) try engine.block.scale(block, to: 2.0, anchorX: 0.5, anchorY: 0.5) try engine.block.scale(block, to: 2.0, anchorX: 0.5, anchorY: 0.5) try engine.block.fillParent(block) let pages = try engine.scene.getPages() try engine.block.resizeContentAware(pages, width: 100.0, height: 100.0) // Create blocks and append to scene let member1 = try engine.block.create(.graphic) let member2 = try engine.block.create(.graphic) try engine.block.appendChild(to: scene, child: member1) try engine.block.appendChild(to: scene, child: member2) if try engine.block.isDistributable([member1, member2]) { try engine.block.distributeHorizontally([member1, member2]) try engine.block.distributeVertically([member1, member2]) } if try engine.block.isAlignable([member1, member2]) { try engine.block.alignHorizontally([member1, member2], alignment: .left) try engine.block.alignVertically([member1, member2], alignment: .top) } let isTransformLocked = try engine.block.isTransformLocked(block) if !isTransformLocked { try engine.block.setTransformLocked(block, locked: true) } ``` --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Insert Shapes or Stickers" description: "Add shapes and stickers to your designs using CE.SDK. Create rectangles, ellipses, stars, polygons, lines, and custom vector paths programmatically or through the built-in UI." 
platform: ios url: "https://img.ly/docs/cesdk/ios/insert-media/shapes-or-stickers-20ac68/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Insert Media Assets](https://img.ly/docs/cesdk/ios/insert-media-a217f5/) > [Insert Shapes or Stickers](https://img.ly/docs/cesdk/ios/insert-media/shapes-or-stickers-20ac68/) --- --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Key Capabilities" description: "Explore CE.SDK’s key features—manual editing, automation, templates, AI tools, and full UI and API control." platform: ios url: "https://img.ly/docs/cesdk/ios/key-capabilities-dbb5b1/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Concepts](https://img.ly/docs/cesdk/ios/concepts-c9ff51/) > [Key Capabilities](https://img.ly/docs/cesdk/ios/key-capabilities-dbb5b1/) --- This guide gives you a high-level look at what CreativeEditor SDK (CE.SDK) can do—and how deeply it can integrate into your workflows. Whether you’re building a design editor into your product, enabling automation, or scaling personalized content creation, CE.SDK provides a flexible and future-ready foundation. 
[Explore Demos](https://img.ly/showcases/cesdk/?tags=ios) It’s designed for developers, product teams, and technical decision-makers evaluating how CE.SDK fits their use case. - 100% client-side processing - Custom-built rendering engine for consistent cross-platform performance - Flexible enough for both low-code and fully custom implementations --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Key Concepts" description: "Explore CE.SDK’s key features—manual editing, automation, templates, AI tools, and full UI and API control." platform: ios url: "https://img.ly/docs/cesdk/ios/key-concepts-21a270/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Concepts](https://img.ly/docs/cesdk/ios/concepts-c9ff51/) > [Key Concepts](https://img.ly/docs/cesdk/ios/key-concepts-21a270/) --- CE.SDK is built on two distinct technical layers that work together seamlessly: - **User Interface** — Pre-built editors optimized for different use cases - **Engine Interface** — Core rendering and processing engine ![The different layers CE.SDK is made of, see description below.](layers.png) This intentional separation gives you powerful advantages: 1. **Cross-platform consistency** – The engine is cross-compiled to native web, iOS, Android, and Node.js, ensuring identical output everywhere 2. **Custom UI** – Build your own UI for simpler tools and workflows 3. 
**Headless automation** – Run the engine independently for automations and batch processing, both client-side and server-side ## Creative Engine The Creative Engine powers all core editing operations. It handles rendering, processing, and manipulation across images, layouts, text, video, audio, and vectors. **What the Engine Does:** - Maintains the scene file (your structured content) - Renders the canvas in real-time - Handles block positioning and resizing - Applies filters and effects to images - Manages text editing and typography - Controls templates with role-based permissions - Displays smart guides and snap lines Every engine capability is exposed through a comprehensive API, letting you build custom UIs, workflows, and automations. ## Headless / Engine only Use the engine without any UI for powerful automation scenarios: **Client-side automation** Perfect for in-browser batch operations and dynamic content generation without server dependencies. **Server-side automation with Node.js** Use the [Node.js SDK](https://img.ly/docs/cesdk/ios/what-is-cesdk-2e7acd/) for the following scenarios: - **High-resolution processing** – Edit on the client with preview quality, then render server-side with full-resolution assets - **Bulk generation** – Create a large volume of design variations for variable data printing - **Non-blocking workflows** – Let users continue designing while exports process in the background **Server-side export with the CE.SDK Renderer** When exporting complex graphics and videos, the CE.SDK Renderer can make use of GPU acceleration and video codecs on Linux server environments. **Plugin development** When building CE.SDK plugins, you get direct API access to manipulate canvas elements programmatically.
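The bulk-generation scenario above can be sketched in plain Swift. This is an illustration only, not a CE.SDK API: the `applyData` helper and the `{{placeholder}}` template format are assumptions made for the example.

```swift
// Illustrative sketch of variable-data bulk generation (not a CE.SDK API):
// substitute per-recipient values into a template's text placeholders.
func applyData(_ template: String, _ values: [String: String]) -> String {
    values.reduce(template) { result, pair in
        result.replacingOccurrences(of: "{{\(pair.key)}}", with: pair.value)
    }
}

let template = "Hello {{name}}, your code is {{code}}."
let recipients = [
    ["name": "Ada", "code": "A-1"],
    ["name": "Grace", "code": "G-2"],
]
// One rendered variation per recipient; in a real workflow, each variation
// would be fed into the engine and exported server-side.
let variations = recipients.map { applyData(template, $0) }
```

In a headless setup, a loop like this would drive the engine to fill text blocks and export one design per data row, without any UI.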
## User Interface Components CE.SDK includes pre-built UI configurations optimized for different use cases: - **Photo editing** — Advanced image editing tools and filters - **Video editing** — Timeline-based video editing and effects - **Design editing** — Layout and graphic design tools (similar to Canva) - **2D product design** — Apparel, postcards, and custom product templates More configurations are coming based on customer needs. ## UI Customization While UI configurations provide a solid foundation, you maintain control over the user experience: - Apply **custom color schemes** and branding to match your product - Add **custom asset libraries** with your own fonts, images, graphics, videos, and audio The plugin architecture lets you add custom buttons and panels throughout the interface, ensuring the editor feels native to your product. --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Licensing" description: "Understand CE.SDK’s flexible licensing, trial options, and how keys work across dev, staging, and production." platform: ios url: "https://img.ly/docs/cesdk/ios/licensing-8aa063/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Get Started](https://img.ly/docs/cesdk/ios/get-started/overview-e18f40/) > [Licensing](https://img.ly/docs/cesdk/ios/licensing-8aa063/) --- Thanks for your interest in CreativeEditor SDK (CE.SDK). 
We offer flexible commercial licensing options to support teams and projects of all sizes. Whether you're building a new product or scaling an existing one, our goal is to provide the best creative editing experience—backed by a licensing model that aligns with your needs. Get in touch with us through our [contact sales form](https://img.ly/forms/contact-sales). ## Commercial Licensing CE.SDK is offered through a subscription-based commercial model. This allows us to: - Deliver ongoing updates and performance improvements - Ensure compatibility with new browsers and devices - Provide dedicated technical support - Build long-term partnerships with our customers ## How Licensing Works CE.SDK licenses are tied to a single commercial product instance, verified by the hostname for web apps and bundle/app ID for mobile apps. Licensing typically uses remote validation and includes lightweight event tracking. It’s possible to disable tracking or use offline-compatible options. To explore these options, [contact our sales team](https://img.ly/forms/contact-sales). ## Trial License Key Trial licenses are available for evaluation and testing and are valid for **30 days**. They provide full access to CE.SDK’s features so you can explore its capabilities in your environment. If you need more time to evaluate, [contact our sales team](https://img.ly/forms/contact-sales). ## Testing and Production Paid license keys can be used across development, staging, and production environments. Multiple domains or app identifiers can be added to support this setup. 
--- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "LLMs.txt" description: "Our documentation is available in LLMs.txt format" platform: ios url: "https://img.ly/docs/cesdk/ios/llms-txt-eb9cc5/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Get Started](https://img.ly/docs/cesdk/ios/get-started/overview-e18f40/) > [Vibe Coding](https://img.ly/docs/cesdk/ios/llms-txt-eb9cc5/) --- > **Note:** You can also connect your AI assistant directly to our documentation using our [MCP Server](https://img.ly/docs/cesdk/ios/get-started/mcp-server-fde71c/). This enables real-time search and retrieval without downloading large files. Our documentation is now available in LLMs.txt format, optimized for AI reasoning engines. To better support platform-specific development, we've created separate documentation files for each platform. For developers, this means you can now access documentation tailored to your specific platform, whether it's iOS, Android, Web, or any other supported platform. This approach allows for a more focused and efficient use of AI tools in your development workflow. [Download](https://img.ly/docs/cesdk/ios/llms-full.txt) These documentation files are substantial in size, with token counts exceeding the context windows of many AI models.
This guide explains how to download and effectively use these platform-specific documentation files with AI tools to accelerate your development process. ## What are LLMs.txt files? LLMs.txt is an emerging standard for making documentation AI-friendly. Unlike traditional documentation formats, LLMs.txt: - Presents content in a clean, markdown-based format - Eliminates extraneous HTML, CSS, and JavaScript - Optimizes content for AI context windows - Provides a comprehensive view of documentation in a single file By using our platform-specific LLMs.txt files, you'll ensure that AI tools have the most relevant and complete context for helping with your development tasks. ## Handling Large Documentation Files Due to the size of our documentation files (upwards of 500,000 tokens), most AI tools will face context window limitations. Standard models typically have context windows ranging from 8,000 to 200,000 tokens, making it challenging to process our complete documentation in a single session. ### Recommended AI Model for Full Documentation For working with our complete documentation files, we recommend: - **Gemini 2.5 Flash**: Available via Google AI Studio with a context window of 1-2 million tokens, capable of handling even our largest documentation file --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Open the Editor" description: "Learn how to load and create scenes, set the zoom level, and configure file proxies or URI resolvers." platform: ios url: "https://img.ly/docs/cesdk/ios/open-the-editor-23a1db/" --- > This is one page of the CE.SDK iOS documentation.
For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Open the Editor](https://img.ly/docs/cesdk/ios/open-the-editor-23a1db/) --- --- ## Related Pages - [Overview](https://img.ly/docs/cesdk/ios/open-the-editor/overview-99444b/) - Learn how to load and create scenes, set the zoom level, and configure file proxies or URI resolvers. - [Load a Scene](https://img.ly/docs/cesdk/ios/open-the-editor/load-scene-478833/) - Load existing design scenes into the editor to resume or modify previous work. - [Start With Blank Canvas](https://img.ly/docs/cesdk/ios/open-the-editor/blank-canvas-18ff05/) - Launch the editor with an empty canvas as a starting point for new designs. - [Create From Image](https://img.ly/docs/cesdk/ios/open-the-editor/from-image-ad9b5e/) - Open the editor using an image as the base design, with tools ready for immediate editing. - [Create From Video](https://img.ly/docs/cesdk/ios/open-the-editor/from-video-86beb0/) - Load a video file into the editor to start editing frame-based or timeline-based video content. - [Set Zoom Level](https://img.ly/docs/cesdk/ios/open-the-editor/set-zoom-level-d31896/) - Programmatically adjust the zoom level of the canvas to focus on specific areas of the design. - [URI Resolver](https://img.ly/docs/cesdk/ios/open-the-editor/uri-resolver-36b624/) - Customize how asset URIs are resolved and loaded into the editor for full control over file handling. 
--- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Start With Blank Canvas" description: "Launch the editor with an empty canvas as a starting point for new designs." platform: ios url: "https://img.ly/docs/cesdk/ios/open-the-editor/blank-canvas-18ff05/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Open the Editor](https://img.ly/docs/cesdk/ios/open-the-editor-23a1db/) > [Start With Blank Canvas](https://img.ly/docs/cesdk/ios/open-the-editor/blank-canvas-18ff05/) ---

```swift file=@cesdk_swift_examples/engine-guides-create-scene-from-scratch/CreateSceneFromScratch.swift reference-only
import Foundation
import IMGLYEngine

@MainActor
func createSceneFromScratch(engine: Engine) async throws {
  let scene = try engine.scene.create()

  let page = try engine.block.create(.page)
  try engine.block.appendChild(to: scene, child: page)

  let block = try engine.block.create(.graphic)
  try engine.block.setShape(block, shape: engine.block.createShape(.star))
  try engine.block.setFill(block, fill: engine.block.createFill(.color))
  try engine.block.appendChild(to: page, child: block)
}
```

In this example, we will show you how to initialize the [CreativeEditor SDK](https://img.ly/products/creative-sdk) from scratch and add a star shape. We create an empty scene via `try engine.scene.create()`, which sets up the default scene block with a camera attached.
Afterwards, the scene can be populated by creating additional blocks and appending them to the scene. See [Modifying Scenes](https://img.ly/docs/cesdk/ios/concepts/blocks-90241e/) for more details.

```swift highlight-create
let scene = try engine.scene.create()
```

We first add a page with `func create(_ type: DesignBlockType) throws -> DesignBlockID`, specifying `.page`, and set a parent-child relationship between the scene and this page.

```swift highlight-add-page
let page = try engine.block.create(.page)
try engine.block.appendChild(to: scene, child: page)
```

To this page, we add a graphic design block, again with `func create(_ type: DesignBlockType) throws -> DesignBlockID`. To make it more interesting, we set a star shape and a color fill on this block to give it a visual representation. As with the page, we set the parent-child relationship between the page and the newly added block. From then on, modifications to this block are relative to the page.

```swift highlight-add-block-with-star-shape
let block = try engine.block.create(.graphic)
try engine.block.setShape(block, shape: engine.block.createShape(.star))
try engine.block.setFill(block, fill: engine.block.createFill(.color))
try engine.block.appendChild(to: page, child: block)
```

This example appends a page to the scene, as is typically done, but that is not strictly necessary: any child block can be appended directly to a scene. To later save your scene, see [Saving Scenes](https://img.ly/docs/cesdk/ios/export-save-publish/save-c8b124/).
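Conceptually, `appendChild` builds a simple parent-child tree: scene, then page, then block. The following standalone sketch in plain Swift mirrors that structure; the `Node` type is purely illustrative and not part of the IMGLYEngine API.

```swift
// Illustrative only: a plain-Swift stand-in for the scene hierarchy,
// not part of IMGLYEngine.
final class Node {
    let type: String
    private(set) var children: [Node] = []
    init(_ type: String) { self.type = type }
    func appendChild(_ child: Node) { children.append(child) }
}

let scene = Node("scene")
let page = Node("page")
let block = Node("graphic")
scene.appendChild(page)   // scene -> page
page.appendChild(block)   // page -> block
// The graphic is reachable through the page, which is why transforms on
// the page also affect the blocks it contains.
```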
### Full Code Here's the full code:

```swift
import Foundation
import IMGLYEngine

@MainActor
func createSceneFromScratch(engine: Engine) async throws {
  let scene = try engine.scene.create()

  let page = try engine.block.create(.page)
  try engine.block.appendChild(to: scene, child: page)

  let block = try engine.block.create(.graphic)
  try engine.block.setShape(block, shape: engine.block.createShape(.star))
  try engine.block.setFill(block, fill: engine.block.createFill(.color))
  try engine.block.appendChild(to: page, child: block)
}
```

--- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Create From Image" description: "Open the editor using an image as the base design, with tools ready for immediate editing." platform: ios url: "https://img.ly/docs/cesdk/ios/open-the-editor/from-image-ad9b5e/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt).
**Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Open the Editor](https://img.ly/docs/cesdk/ios/open-the-editor-23a1db/) > [Create From Image](https://img.ly/docs/cesdk/ios/open-the-editor/from-image-ad9b5e/) ---

```swift file=@cesdk_swift_examples/engine-guides-create-scene-from-image-blob/CreateSceneFromImageBlob.swift reference-only
import Foundation
import IMGLYEngine

@MainActor
func createSceneFromImageBlob(engine: Engine) async throws {
  let blob = try await URLSession.shared.data(from: .init(string: "https://img.ly/static/ubq_samples/sample_4.jpg")!).0

  let url = FileManager.default.temporaryDirectory
    .appendingPathComponent(UUID().uuidString)
    .appendingPathExtension("jpg")
  try blob.write(to: url, options: .atomic)

  let scene = try await engine.scene.create(fromImage: url)

  let page = try engine.block.find(byType: .page).first!
  let pageFill = try engine.block.getFill(page)
  let imageFillType = try engine.block.getType(pageFill)
}
```

```swift file=@cesdk_swift_examples/engine-guides-create-scene-from-image-url/CreateSceneFromImageURL.swift reference-only
import Foundation
import IMGLYEngine

@MainActor
func createSceneFromImageURL(engine: Engine) async throws {
  let scene = try await engine.scene.create(fromImage: URL(string: "https://img.ly/static/ubq_samples/sample_4.jpg")!)

  // Get the fill from the page and verify it's an image fill
  let page = try engine.block.find(byType: .page).first!
  let pageFill = try engine.block.getFill(page)
  let imageFillType = try engine.block.getType(pageFill)
}
```

Starting from an existing image allows you to use the editor for customizing individual assets. This is done by using `func create(from imageURL: URL, dpi: Float = 300, pixelScaleFactor: Float = 1) async throws -> DesignBlockID` and passing a URL as an argument. The `dpi` argument sets the dots per inch of the scene. The `pixelScaleFactor` sets the display's pixel scale factor.
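To make the `dpi` and `pixelScaleFactor` arguments concrete, here is a small plain-Swift sketch of how dots per inch relate a physical size to pixel dimensions. The `pixels` helper is purely illustrative, not part of the CE.SDK API.

```swift
// Illustrative helper (not part of IMGLYEngine): at `dpi` dots per inch,
// a physical size in inches maps to this many pixels, scaled by the
// display's pixel scale factor.
func pixels(inches: Double, dpi: Double, pixelScaleFactor: Double = 1) -> Double {
    inches * dpi * pixelScaleFactor
}

// A 4 x 6 inch postcard at the default 300 dpi:
let widthPx = pixels(inches: 4, dpi: 300)   // 1200
let heightPx = pixels(inches: 6, dpi: 300)  // 1800
```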
## Create a Scene From an Image In this example, we will show you how to initialize the [CreativeEditor SDK](https://img.ly/products/creative-sdk) with an initial image. Specify the source to use for the initial image. This can be a relative path or a remote URL.

```swift highlight-createFromImage-url
let scene = try await engine.scene.create(fromImage: URL(string: "https://img.ly/static/ubq_samples/sample_4.jpg")!)
```

When starting with an initial image, the scene's page dimensions match the given resource and the scene is configured to be in pixel design units. To later save your scene, see [Saving Scenes](https://img.ly/docs/cesdk/ios/export-save-publish/save-c8b124/). ### Full Code Here's the full code:

```swift
import Foundation
import IMGLYEngine

@MainActor
func createSceneFromImageURL(engine: Engine) async throws {
  let scene = try await engine.scene.create(fromImage: URL(string: "https://img.ly/static/ubq_samples/sample_4.jpg")!)
}
```

## Create a Scene From a Blob In this example, we will show you how to initialize the [CreativeEditor SDK](https://img.ly/products/creative-sdk) with an initial image provided from a blob. First, get hold of a `blob` by fetching an image from the web. This is just for demonstration purposes and your `blob` object may come from a different source.

```swift highlight-blob-swift
let blob = try await URLSession.shared.data(from: .init(string: "https://img.ly/static/ubq_samples/sample_4.jpg")!).0
```

Afterward, create a temporary URL and save the `Data`.

```swift highlight-objectURL-swift
let url = FileManager.default.temporaryDirectory
  .appendingPathComponent(UUID().uuidString)
  .appendingPathExtension("jpg")
try blob.write(to: url, options: .atomic)
```

Use the created URL as a source for the initial image.
```swift highlight-initialImageURL-swift
let scene = try await engine.scene.create(fromImage: url)
```

When starting with an initial image, the scene's page dimensions match the given image, and the scene is configured to be in pixel design units. To later save your scene, see [Saving Scenes](https://img.ly/docs/cesdk/ios/export-save-publish/save-c8b124/). ### Full Code Here's the full code:

```swift
import Foundation
import IMGLYEngine

@MainActor
func createSceneFromImageBlob(engine: Engine) async throws {
  let blob = try await URLSession.shared.data(from: .init(string: "https://img.ly/static/ubq_samples/sample_4.jpg")!).0

  let url = FileManager.default.temporaryDirectory
    .appendingPathComponent(UUID().uuidString)
    .appendingPathExtension("jpg")
  try blob.write(to: url, options: .atomic)

  let scene = try await engine.scene.create(fromImage: url)
}
```

--- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Create From Video" description: "Load a video file into the editor to start editing frame-based or timeline-based video content." platform: ios url: "https://img.ly/docs/cesdk/ios/open-the-editor/from-video-86beb0/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt).
**Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Open the Editor](https://img.ly/docs/cesdk/ios/open-the-editor-23a1db/) > [Create From Video](https://img.ly/docs/cesdk/ios/open-the-editor/from-video-86beb0/) ---

```swift file=@cesdk_swift_examples/engine-guides-create-scene-from-video-url/CreateSceneFromVideoURL.swift reference-only
import Foundation
import IMGLYEngine

@MainActor
func createSceneFromVideoURL(engine: Engine) async throws {
  let scene = try await engine.scene.create(fromVideo: URL(string: "https://img.ly/static/ubq_video_samples/bbb.mp4")!)

  // Find the automatically added graphic block in the scene that contains the video fill.
  let block = try engine.block.find(byType: .graphic).first!

  // Change its opacity.
  try engine.block.setOpacity(block, value: 0.5)
}
```

In this example, we will show you how to initialize the [CreativeEditor SDK](https://img.ly/products/creative-sdk) with an initial video. Starting from an existing video allows you to use the editor for customizing individual assets. This is done by using `func create(fromVideo url: URL) async throws -> DesignBlockID` and passing a URL as an argument. Specify the source to use for the initial video. This can be a relative path or a remote URL.

```swift highlight-createFromVideo
let scene = try await engine.scene.create(fromVideo: URL(string: "https://img.ly/static/ubq_video_samples/bbb.mp4")!)
```

We can retrieve the graphic block id of this initial video using `func find(byType type: DesignBlockType) throws -> [DesignBlockID]`. Note that this function returns an array. Since there's only a single graphic block in the scene, the block is at index `0`.

```swift highlight-findByType
// Find the automatically added graphic block in the scene that contains the video fill.
let block = try engine.block.find(byType: .graphic).first!
```

We can then manipulate and modify this block.
Here we modify its opacity with `func setOpacity(_ id: DesignBlockID, value: Float) throws`. See [Modifying Scenes](https://img.ly/docs/cesdk/ios/concepts/blocks-90241e/) for more details.

```swift highlight-setOpacity
// Change its opacity.
try engine.block.setOpacity(block, value: 0.5)
```

When starting with an initial video, the scene's page dimensions match the given resource and the scene is configured to be in pixel design units. To later save your scene, see [Saving Scenes](https://img.ly/docs/cesdk/ios/export-save-publish/save-c8b124/). ## Full Code Here's the full code:

```swift
import Foundation
import IMGLYEngine

@MainActor
func createSceneFromVideoURL(engine: Engine) async throws {
  let scene = try await engine.scene.create(fromVideo: URL(string: "https://img.ly/static/ubq_video_samples/bbb.mp4")!)

  // Find the automatically added graphic block in the scene that contains the video fill.
  let block = try engine.block.find(byType: .graphic).first!

  // Change its opacity.
  try engine.block.setOpacity(block, value: 0.5)
}
```

--- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Load a Scene" description: "Load existing design scenes into the editor to resume or modify previous work." platform: ios url: "https://img.ly/docs/cesdk/ios/open-the-editor/load-scene-478833/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt).
**Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Open the Editor](https://img.ly/docs/cesdk/ios/open-the-editor-23a1db/) > [Load a Scene](https://img.ly/docs/cesdk/ios/open-the-editor/load-scene-478833/) ---

```swift file=@cesdk_swift_examples/engine-guides-load-scene-from-blob/LoadSceneFromBlob.swift reference-only
import Foundation
import IMGLYEngine

@MainActor
func loadSceneFromBlob(engine: Engine) async throws {
  let sceneURL = URL(string: "https://cdn.img.ly/assets/demo/v1/ly.img.template/templates/cesdk_postcard_1.scene")!
  let sceneBlob = try await URLSession.shared.data(from: sceneURL).0

  let blobString = String(data: sceneBlob, encoding: .utf8)!
  let scene = try await engine.scene.load(from: blobString)

  let text = try engine.block.find(byType: .text).first!
  try engine.block.setDropShadowEnabled(text, enabled: true)
}
```

```swift file=@cesdk_swift_examples/engine-guides-load-scene-from-string/LoadSceneFromString.swift reference-only
import Foundation
import IMGLYEngine

@MainActor
func loadSceneFromString(engine: Engine) async throws {
  let sceneURL = URL(string: "https://cdn.img.ly/assets/demo/v1/ly.img.template/templates/cesdk_postcard_1.scene")!
  let sceneBlob = try await URLSession.shared.data(from: sceneURL).0

  let blobString = String(data: sceneBlob, encoding: .utf8)!
  let scene = try await engine.scene.load(from: blobString)

  let text = try engine.block.find(byType: .text).first!
  try engine.block.setDropShadowEnabled(text, enabled: true)
}
```

```swift file=@cesdk_swift_examples/engine-guides-load-scene-from-remote/LoadSceneFromRemote.swift reference-only
import Foundation
import IMGLYEngine

@MainActor
func loadSceneFromRemote(engine: Engine) async throws {
  let sceneUrl = URL(string: "https://cdn.img.ly/assets/demo/v1/ly.img.template/templates/cesdk_postcard_1.scene")!
  let scene = try await engine.scene.load(from: sceneUrl)

  let text = try engine.block.find(byType: .text).first!
  try engine.block.setDropShadowEnabled(text, enabled: true)
}
```

Loading an existing scene allows resuming work on a previous session or adapting an existing template to your needs. > **Warning:** A scene can be saved either as a scene file or as an archive file (cf. [Saving Scenes](https://img.ly/docs/cesdk/ios/export-save-publish/save-c8b124/)). A scene file does not include any fonts or images. Only the source URIs of assets, the general layout, and element properties are stored. When loading scenes in a new environment, ensure previously used asset URIs are available. Conversely, an archive file contains the scene's assets within it and references them as relative URIs. ## Load Scenes from a Remote URL Determine a URL that points to a scene file.

```swift highlight-url
let sceneUrl = URL(string: "https://cdn.img.ly/assets/demo/v1/ly.img.template/templates/cesdk_postcard_1.scene")!
```

We can then pass that URL to the `func load(from url: URL) async throws -> DesignBlockID` function. The editor will reset and present the given scene to the user. The function is asynchronous and throws if loading the scene fails.

```swift highlight-load-remote
let scene = try await engine.scene.load(from: sceneUrl)
```

From this point on we can continue to modify our scene. In this example, assuming the scene contains a text element, we add a drop shadow to it. See [Modifying Scenes](https://img.ly/docs/cesdk/ios/concepts/blocks-90241e/) for more details.

```swift highlight-modify-text-remote
let text = try engine.block.find(byType: .text).first!
try engine.block.setDropShadowEnabled(text, enabled: true)
```

Scene loads may be reverted using `engine.editor.undo()`. To later save your scene, see [Saving Scenes](https://img.ly/docs/cesdk/ios/export-save-publish/save-c8b124/).
### Full Code Here's the full code: ```swift import Foundation import IMGLYEngine @MainActor func loadSceneFromRemote(engine: Engine) async throws { let sceneUrl = URL(string: "https://cdn.img.ly/assets/demo/v1/ly.img.template/templates/cesdk_postcard_1.scene")! let scene = try await engine.scene.load(from: sceneUrl) let text = try engine.block.find(byType: .text).first! try engine.block.setDropShadowEnabled(text, enabled: true) } ``` ## Load Scenes from a String In this example, we fetch a scene from a remote URL and load it as a string. This string could also come from the result of `func saveToString() async throws -> String`. ```swift highlight-fetch-string let sceneURL = URL(string: "https://cdn.img.ly/assets/demo/v1/ly.img.template/templates/cesdk_postcard_1.scene")! let sceneBlob = try await URLSession.shared.data(from: sceneURL).0 let blobString = String(data: sceneBlob, encoding: .utf8)! ``` We can pass that string to the `func load(from string: String) async throws -> DesignBlockID` function. The editor will then reset and present the given scene to the user. The function is asynchronous and throws an error if the scene fails to load. ```swift highlight-load-string let scene = try await engine.scene.load(from: blobString) ``` From this point on we can continue to modify our scene. In this example, assuming the scene contains a text element, we add a drop shadow to it. See [Modifying Scenes](https://img.ly/docs/cesdk/ios/concepts/blocks-90241e/) for more details. ```swift highlight-modify-text-string let text = try engine.block.find(byType: .text).first! try engine.block.setDropShadowEnabled(text, enabled: true) ``` Scene loads may be reverted using `engine.editor.undo()`. To later save your scene, see [Saving Scenes](https://img.ly/docs/cesdk/ios/export-save-publish/save-c8b124/).
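As noted above, the string passed to `load(from:)` can come from `saveToString()`. A minimal round-trip sketch under that assumption (the function name is illustrative):

```swift
import IMGLYEngine

@MainActor
func roundTripScene(engine: Engine) async throws {
  // Serialize the current scene to a string, e.g. to persist a session...
  let sceneString = try await engine.scene.saveToString()
  // ...and later load it back, which resets the editor to that state.
  _ = try await engine.scene.load(from: sceneString)
}
```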
```swift import Foundation import IMGLYEngine @MainActor func loadSceneFromString(engine: Engine) async throws { let sceneURL = URL(string: "https://cdn.img.ly/assets/demo/v1/ly.img.template/templates/cesdk_postcard_1.scene")! let sceneBlob = try await URLSession.shared.data(from: sceneURL).0 let blobString = String(data: sceneBlob, encoding: .utf8)! let scene = try await engine.scene.load(from: blobString) let text = try engine.block.find(byType: .text).first! try engine.block.setDropShadowEnabled(text, enabled: true) } ``` ## Load Scenes From a Blob In this example, we fetch a scene from a remote URL and load it as `sceneBlob`. ```swift highlight-fetch-blob let sceneURL = URL(string: "https://cdn.img.ly/assets/demo/v1/ly.img.template/templates/cesdk_postcard_1.scene")! let sceneBlob = try await URLSession.shared.data(from: sceneURL).0 ``` To acquire a scene string from `sceneBlob`, we need to read its contents into a string. ```swift highlight-read-blob let blobString = String(data: sceneBlob, encoding: .utf8)! ``` We can then pass that string to the `func load(from string: String) async throws -> DesignBlockID` function. The editor will reset and present the given scene to the user. The function is asynchronous and throws an error if the scene fails to load. ```swift highlight-load-blob let scene = try await engine.scene.load(from: blobString) ``` From this point on we can continue to modify our scene. In this example, assuming the scene contains a text element, we add a drop shadow to it. See [Modifying Scenes](https://img.ly/docs/cesdk/ios/concepts/blocks-90241e/) for more details. ```swift highlight-modify-text-blob let text = try engine.block.find(byType: .text).first! try engine.block.setDropShadowEnabled(text, enabled: true) ``` Scene loads may be reverted using `engine.editor.undo()`. To later save your scene, see [Saving Scenes](https://img.ly/docs/cesdk/ios/export-save-publish/save-c8b124/).
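The blob does not have to come from the network. As a hypothetical variant, a `.scene` file bundled with the app can be read the same way; the resource name `postcard` and the helper name are assumptions for illustration:

```swift
import Foundation
import IMGLYEngine

@MainActor
func loadBundledScene(engine: Engine) async throws {
  // "postcard.scene" is a placeholder resource assumed to ship in the app bundle.
  let fileURL = Bundle.main.url(forResource: "postcard", withExtension: "scene")!
  // Read the blob from disk and decode it into a scene string.
  let sceneBlob = try Data(contentsOf: fileURL)
  let blobString = String(data: sceneBlob, encoding: .utf8)!
  _ = try await engine.scene.load(from: blobString)
}
```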
### Full Code Here's the full code: ```swift import Foundation import IMGLYEngine @MainActor func loadSceneFromBlob(engine: Engine) async throws { let sceneURL = URL(string: "https://cdn.img.ly/assets/demo/v1/ly.img.template/templates/cesdk_postcard_1.scene")! let sceneBlob = try await URLSession.shared.data(from: sceneURL).0 let blobString = String(data: sceneBlob, encoding: .utf8)! let scene = try await engine.scene.load(from: blobString) let text = try engine.block.find(byType: .text).first! try engine.block.setDropShadowEnabled(text, enabled: true) } ``` ## Loading Scene Archives Loading a scene archive requires unzipping the archive's contents to a location that's accessible to the CreativeEngine. You could, for example, unzip the archive via `unzip archive.zip` and then serve its contents using `npx serve`. This spins up a local test server that serves everything in the current directory at `http://localhost:3000`. The archive can then be loaded by calling `try await engine.scene.load(from: URL(string: "http://localhost:3000/scene.scene")!)`. See [loading scenes](https://img.ly/docs/cesdk/ios/open-the-editor/load-scene-478833/) for more details. All asset paths in the archive are resolved relative to the location of the `scene.scene` file. For an image, that would result in `http://localhost:3000/images/1234.jpeg`. After loading, all URLs are fully resolved against the location of the `scene.scene` file and the scene behaves like any other scene. ### Resolving assets from a different source The engine uses its [URI resolver](https://img.ly/docs/cesdk/ios/open-the-editor/uri-resolver-36b624/) to resolve all asset paths it encounters. This allows you to redirect requests for the assets contained in an archive to a different location. To do so, add a custom resolver that redirects those requests.
Assuming you store your archived scenes in a `scenes/` directory, a resolver could look like this: ```swift try engine.editor.setURIResolver { path in let components = URLComponents(string: path)! if components.host == "localhost" && components.path.hasPrefix("/scenes") && !components.path.hasSuffix(".scene") { // Apply custom logic here, e.g. redirect to a different server } // Use default behavior for everything else return URL(string: engine.editor.defaultURIResolver(relativePath: path))! } ``` --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Overview" description: "Learn how to load and create scenes, set the zoom level, and configure file proxies or URI resolvers." platform: ios url: "https://img.ly/docs/cesdk/ios/open-the-editor/overview-99444b/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Open the Editor](https://img.ly/docs/cesdk/ios/open-the-editor-23a1db/) > [Overview](https://img.ly/docs/cesdk/ios/open-the-editor/overview-99444b/) --- CreativeEditor SDK (CE.SDK) offers multiple ways to open the editor. Whether you're starting with a blank canvas or importing complex layered files, CE.SDK gives you the building blocks to launch an editing session tailored to your users' needs.
[Explore Demos](https://img.ly/showcases/cesdk?tags=ios) [Get Started](https://img.ly/docs/cesdk/ios/get-started/overview-e18f40/) --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Set Zoom Level" description: "Programmatically adjust the zoom level of the canvas to focus on specific areas of the design." platform: ios url: "https://img.ly/docs/cesdk/ios/open-the-editor/set-zoom-level-d31896/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Open the Editor](https://img.ly/docs/cesdk/ios/open-the-editor-23a1db/) > [Set Zoom Level](https://img.ly/docs/cesdk/ios/open-the-editor/set-zoom-level-d31896/) --- ```swift reference-only // Zoom to 100% try engine.scene.setZoom(1.0) // Zoom to 50% try engine.scene.setZoom(0.5 * engine.scene.getZoom()) // Bring entire scene in view with padding of 20px in all directions try await engine.scene.zoom(to: scene, paddingLeft: 20.0, paddingTop: 20.0, paddingRight: 20.0, paddingBottom: 20.0) try engine.scene.immediateZoom(to: scene, paddingLeft: 20.0, paddingTop: 20.0, paddingRight: 20.0, paddingBottom: 20.0) // Follow page with padding of 20px in both directions let page = try engine.scene.getPages().first! 
try engine.scene.enableZoomAutoFit( page, axis: .both, paddingLeft: 20, paddingTop: 20, paddingRight: 20, paddingBottom: 20 ) // Stop following page try engine.scene.disableZoomAutoFit(page) // Query if zoom auto-fit is enabled for page try engine.scene.isZoomAutoFitEnabled(page) // Keep the scene with padding of 10px within the camera try engine.scene.unstable_enableCameraPositionClamping( [scene], paddingLeft: 10, paddingTop: 10, paddingRight: 10, paddingBottom: 10, scaledPaddingLeft: 0, scaledPaddingTop: 0, scaledPaddingRight: 0, scaledPaddingBottom: 0 ) try engine.scene.unstable_disableCameraPositionClamping() // Query if camera position clamping is enabled for the scene try engine.scene.unstable_isCameraPositionClampingEnabled(scene) // Allow zooming from 12.5% to 800% relative to the size of a page try engine.scene.unstable_enableCameraZoomClamping( [page], minZoomLimit: 0.125, maxZoomLimit: 8.0, paddingLeft: 0, paddingTop: 0, paddingRight: 0, paddingBottom: 0 ) try engine.scene.unstable_disableCameraZoomClamping() // Query if camera zoom clamping is enabled for the scene try engine.scene.unstable_isCameraZoomClampingEnabled(scene) // Get notified when the zoom level changes let task = Task { for await _ in engine.editor.onZoomLevelChanged { let zoomLevel = try engine.scene.getZoom() print("Zoom level is now: \(zoomLevel)") } } task.cancel() ``` In this example, we will show you how to use the [CreativeEditor SDK](https://img.ly/products/creative-sdk)'s CreativeEngine to control and observe camera zoom via the `scene` API. ## Functions ```swift public func getZoom() throws -> Float ``` Query the camera zoom level of the active scene. - Returns: The current zoom level of the scene in unit 1/px, i.e., how large a pixel of the camera resolution is shown on the screen. A zoom level of 2.0 results in one pixel in the design being shown as two pixels on the screen. ```swift public func setZoom(_ level: Float) throws ``` Sets the zoom level of the active scene.
A zoom level of 2.0 results in one pixel in the design being shown as two pixels on the screen. - `level:`: The new zoom level with unit 1/px, i.e., how large a pixel of the camera resolution is shown on the screen. ```swift public func zoom(to id: DesignBlockID, paddingLeft: Float = 0, paddingTop: Float = 0, paddingRight: Float = 0, paddingBottom: Float = 0) async throws ``` Sets the zoom and focus to show a block. Without padding, this results in a tight view on the block. - `id`: The block that should be focused on. - `paddingLeft`: Optional padding in points to the left of the block. - `paddingTop`: Optional padding in points to the top of the block. - `paddingRight`: Optional padding in points to the right of the block. - `paddingBottom`: Optional padding in points to the bottom of the block. ```swift public func immediateZoom(to id: DesignBlockID, paddingLeft: Float = 0, paddingTop: Float = 0, paddingRight: Float = 0, paddingBottom: Float = 0, forceUpdate: Bool = false) throws ``` Sets the zoom and focus to show a block. Without padding, this results in a tight view on the block. Assumes layout has already been done; you can force a layout update first with the `forceUpdate` flag. - `id`: The block that should be focused on. - `paddingLeft`: Optional padding in points to the left of the block. - `paddingTop`: Optional padding in points to the top of the block. - `paddingRight`: Optional padding in points to the right of the block. - `paddingBottom`: Optional padding in points to the bottom of the block. - `forceUpdate`: Optional flag that runs a system update to refresh the layout before zooming. ```swift public func enableZoomAutoFit(_ id: DesignBlockID, axis: ZoomAutoFitAxis, paddingLeft: Float = 0, paddingTop: Float = 0, paddingRight: Float = 0, paddingBottom: Float = 0) throws ``` Continually adjusts the zoom level to fit the width or height of a block's axis-aligned bounding box.
This only shows an effect if the zoom level is not handled/overwritten by the UI. Without padding, this results in a tight view on the block. - Note: Calling `setZoom(level:)` or `zoom(to:)` disables the continuous adjustment. - `id`: The block for which the zoom is adjusted. - `axis`: The block axis for which the zoom is adjusted. - `paddingLeft`: Optional padding in points to the left of the block. - `paddingTop`: Optional padding in points to the top of the block. - `paddingRight`: Optional padding in points to the right of the block. - `paddingBottom`: Optional padding in points to the bottom of the block. ```swift public func disableZoomAutoFit(_ id: DesignBlockID) throws ``` Disables any previously set zoom auto-fit. - `id:`: The scene or a block in the scene for which to disable zoom auto-fit. ```swift public func isZoomAutoFitEnabled(_ id: DesignBlockID) throws -> Bool ``` Queries whether zoom auto-fit is enabled. - `id:`: The scene or a block in the scene for which to query the zoom auto-fit. - Returns: `true` if the given block has auto-fit set or the scene contains a block for which auto-fit is set, `false` otherwise. ```swift public func unstable_enableCameraPositionClamping(_ ids: [DesignBlockID], paddingLeft: Float = 0, paddingTop: Float = 0, paddingRight: Float = 0, paddingBottom: Float = 0, scaledPaddingLeft: Float = 0, scaledPaddingTop: Float = 0, scaledPaddingRight: Float = 0, scaledPaddingBottom: Float = 0) throws ``` Continually keeps the camera position within the width and height of the blocks' axis-aligned bounding box. Without padding, this results in a tight clamp on the blocks. Disables any previously set camera position clamping in the scene and also takes priority over clamp camera commands. - `ids`: The blocks that the camera position is clamped to, usually the scene or a page. - `paddingLeft`: Optional padding in points to the left of the block. - `paddingTop`: Optional padding in points to the top of the block.
- `paddingRight`: Optional padding in points to the right of the block. - `paddingBottom`: Optional padding in points to the bottom of the block. - `scaledPaddingLeft`: Optional padding in points to the left of the block that scales with the zoom level until five times the initial value. - `scaledPaddingTop`: Optional padding in points to the top of the block that scales with the zoom level until five times the initial value. - `scaledPaddingRight`: Optional padding in points to the right of the block that scales with the zoom level until five times the initial value. - `scaledPaddingBottom`: Optional padding in points to the bottom of the block that scales with the zoom level until five times the initial value. ```swift public func unstable_disableCameraPositionClamping() throws ``` Disables any previously set position clamping. ```swift public func unstable_isCameraPositionClampingEnabled(_ id: DesignBlockID) throws -> Bool ``` Queries whether position clamping is enabled. - `id:`: The scene or a block in the scene for which to query the position clamping. - Returns: `true` if the given block has position clamping set or the scene contains a block for which position clamping is set, `false` otherwise. ```swift public func unstable_enableCameraZoomClamping(_ ids: [DesignBlockID], minZoomLimit: Float = -1, maxZoomLimit: Float = -1, paddingLeft: Float = 0, paddingTop: Float = 0, paddingRight: Float = 0, paddingBottom: Float = 0) throws ``` Continually keeps the zoom level of the camera in the active scene within the given range. - `ids`: The blocks that the camera zoom limits are adjusted to, usually the scene or a page. - `minZoomLimit`: The minimum zoom limit in unit `dpx/dot` when zooming out, unlimited when negative. - `maxZoomLimit`: The maximum zoom limit in unit `dpx/dot` when zooming in, unlimited when negative. - `paddingLeft`: Optional padding in points to the left of the block. Only applied when the block is not a camera.
- `paddingTop`: Optional padding in points to the top of the block. Only applied when the block is not a camera. - `paddingRight`: Optional padding in points to the right of the block. Only applied when the block is not a camera. - `paddingBottom`: Optional padding in points to the bottom of the block. Only applied when the block is not a camera. ```swift public func unstable_disableCameraZoomClamping() throws ``` Disables previously set zoom clamping for the block, scene, or camera. ```swift public func unstable_isCameraZoomClampingEnabled(_ id: DesignBlockID) throws -> Bool ``` Queries whether zoom clamping is enabled. - `id:`: The scene or a block for which to query the zoom clamping. - Returns: `true` if the active scene has zoom clamping set, `false` otherwise. ```swift public var onZoomLevelChanged: AsyncStream<Void> { get } ``` Subscribe to changes to the zoom level. ## Settings See clamp camera settings in the [editor settings](https://img.ly/docs/cesdk/ios/settings-970c98/). ## Full Code Here's the full code: ```swift // Zoom to 100% try engine.scene.setZoom(1.0) // Zoom to 50% try engine.scene.setZoom(0.5 * engine.scene.getZoom()) // Bring entire scene in view with padding of 20px in all directions try await engine.scene.zoom(to: scene, paddingLeft: 20.0, paddingTop: 20.0, paddingRight: 20.0, paddingBottom: 20.0) try engine.scene.immediateZoom(to: scene, paddingLeft: 20.0, paddingTop: 20.0, paddingRight: 20.0, paddingBottom: 20.0) // Follow page with padding of 20px in both directions let page = try engine.scene.getPages().first!
try engine.scene.enableZoomAutoFit( page, axis: .both, paddingLeft: 20, paddingTop: 20, paddingRight: 20, paddingBottom: 20 ) // Stop following page try engine.scene.disableZoomAutoFit(page) // Query if zoom auto-fit is enabled for page try engine.scene.isZoomAutoFitEnabled(page) // Keep the scene with padding of 10px within the camera try engine.scene.unstable_enableCameraPositionClamping( [scene], paddingLeft: 10, paddingTop: 10, paddingRight: 10, paddingBottom: 10, scaledPaddingLeft: 0, scaledPaddingTop: 0, scaledPaddingRight: 0, scaledPaddingBottom: 0 ) try engine.scene.unstable_disableCameraPositionClamping() // Query if camera position clamping is enabled for the scene try engine.scene.unstable_isCameraPositionClampingEnabled(scene) // Allow zooming from 12.5% to 800% relative to the size of a page try engine.scene.unstable_enableCameraZoomClamping( [page], minZoomLimit: 0.125, maxZoomLimit: 8.0, paddingLeft: 0, paddingTop: 0, paddingRight: 0, paddingBottom: 0 ) try engine.scene.unstable_disableCameraZoomClamping() // Query if camera zoom clamping is enabled for the scene try engine.scene.unstable_isCameraZoomClampingEnabled(scene) // Get notified when the zoom level changes let task = Task { for await _ in engine.editor.onZoomLevelChanged { let zoomLevel = try engine.scene.getZoom() print("Zoom level is now: \(zoomLevel)") } } task.cancel() ``` --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "URI Resolver" description: "Customize how asset URIs are resolved and loaded into the editor for full control over file handling." 
platform: ios url: "https://img.ly/docs/cesdk/ios/open-the-editor/uri-resolver-36b624/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Open the Editor](https://img.ly/docs/cesdk/ios/open-the-editor-23a1db/) > [URI Resolver](https://img.ly/docs/cesdk/ios/open-the-editor/uri-resolver-36b624/) --- ```swift file=@cesdk_swift_examples/engine-guides-uri-resolver/URIResolver.swift reference-only import Foundation import IMGLYEngine @MainActor func uriResolver(engine: Engine) async throws { // This will return "https://cdn.img.ly/packages/imgly/cesdk-js/1.68.0/assets/banana.jpg" try engine.editor.getAbsoluteURI(relativePath: "/banana.jpg") // Replace all .jpg files with the IMG.LY logo! try engine.editor.setURIResolver { uri in if uri.hasSuffix(".jpg") { return URL(string: "https://img.ly/static/ubq_samples/imgly_logo.jpg")! } // Make use of the default URI resolution behavior. return URL(string: engine.editor.defaultURIResolver(relativePath: uri))! } // The custom resolver will return a path to the IMG.LY logo because the given path ends with ".jpg". // This applies regardless of whether the given path is relative or absolute. try engine.editor.getAbsoluteURI(relativePath: "/banana.jpg") // The custom resolver will not modify this path because it ends with ".png". try engine.editor.getAbsoluteURI(relativePath: "https://example.com/orange.png") // Because a custom resolver is set, relative paths that the resolver does not transform remain unmodified! try engine.editor.getAbsoluteURI(relativePath: "/orange.png") // Removes the previously set resolver.
try engine.editor.setURIResolver(nil) // Since we've removed the custom resolver, this will return // "https://cdn.img.ly/packages/imgly/cesdk-js/1.68.0/assets/banana.jpg" like before. try engine.editor.getAbsoluteURI(relativePath: "/banana.jpg") } ``` CE.SDK gives you full control over how URIs should be resolved. To register a custom resolver, use `setURIResolver` and pass in a function implementing your resolution logic. If a custom resolver is set, any URI requested by the engine is passed through the resolver. The URI your logic returns is then fetched by the engine. The resolved URI is used only for the current request and is not stored. If, and only if, no custom resolver is set, the engine performs the default behavior: absolute paths are unchanged and relative paths are prepended with the value of the `basePath` setting. > **Note:** **Warning** Your custom URI resolver must return a URL. We can preview the effects of setting a custom URI resolver with the function `func getAbsoluteURI(relativePath: String) throws -> String`. Before setting a custom URI resolver, the default behavior is as before and the given relative path will be prepended with the contents of `basePath`. ```swift highlight-get-absolute-base-path // This will return "https://cdn.img.ly/packages/imgly/cesdk-js/1.68.0/assets/banana.jpg" try engine.editor.getAbsoluteURI(relativePath: "/banana.jpg") ``` To show that the resolver can be fairly free-form, in this example we register a custom URI resolver that replaces all `.jpg` images with our company logo. The resolved URIs are expected to be absolute. Note: you can still access the default URI resolver by calling `func defaultURIResolver(relativePath: String) -> String`. ```swift highlight-resolver // Replace all .jpg files with the IMG.LY logo! try engine.editor.setURIResolver { uri in if uri.hasSuffix(".jpg") { return URL(string: "https://img.ly/static/ubq_samples/imgly_logo.jpg")! } // Make use of the default URI resolution behavior.
return URL(string: engine.editor.defaultURIResolver(relativePath: uri))! } ``` Given the same path as earlier, the custom resolver transforms it as specified. Note that after a custom resolver is set, relative paths that the resolver does not transform remain unmodified. ```swift highlight-get-absolute-custom // The custom resolver will return a path to the IMG.LY logo because the given path ends with ".jpg". // This applies regardless of whether the given path is relative or absolute. try engine.editor.getAbsoluteURI(relativePath: "/banana.jpg") // The custom resolver will not modify this path because it ends with ".png". try engine.editor.getAbsoluteURI(relativePath: "https://example.com/orange.png") // Because a custom resolver is set, relative paths that the resolver does not transform remain unmodified! try engine.editor.getAbsoluteURI(relativePath: "/orange.png") ``` To remove a previously set custom resolver, call the function with a `nil` value. The URI resolution is now back to the default behavior. ```swift highlight-remove-resolver // Removes the previously set resolver. try engine.editor.setURIResolver(nil) // Since we've removed the custom resolver, this will return // "https://cdn.img.ly/packages/imgly/cesdk-js/1.68.0/assets/banana.jpg" like before. try engine.editor.getAbsoluteURI(relativePath: "/banana.jpg") ``` ## Full Code Here's the full code: ```swift import Foundation import IMGLYEngine @MainActor func uriResolver(engine: Engine) async throws { // This will return "https://cdn.img.ly/packages/imgly/cesdk-js/$UBQ_VERSION$/assets/banana.jpg" try engine.editor.getAbsoluteURI(relativePath: "/banana.jpg") // Replace all .jpg files with the IMG.LY logo! try engine.editor.setURIResolver { uri in if uri.hasSuffix(".jpg") { return URL(string: "https://img.ly/static/ubq_samples/imgly_logo.jpg")! } // Make use of the default URI resolution behavior. return URL(string: engine.editor.defaultURIResolver(relativePath: uri))!
} // The custom resolver will return a path to the IMG.LY logo because the given path ends with ".jpg". // This applies regardless of whether the given path is relative or absolute. try engine.editor.getAbsoluteURI(relativePath: "/banana.jpg") // The custom resolver will not modify this path because it ends with ".png". try engine.editor.getAbsoluteURI(relativePath: "https://example.com/orange.png") // Because a custom resolver is set, relative paths that the resolver does not transform remain unmodified! try engine.editor.getAbsoluteURI(relativePath: "/orange.png") // Removes the previously set resolver. try engine.editor.setURIResolver(nil) // Since we've removed the custom resolver, this will return // "https://cdn.img.ly/packages/imgly/cesdk-js/$UBQ_VERSION$/assets/banana.jpg" like before. try engine.editor.getAbsoluteURI(relativePath: "/banana.jpg") } ``` --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Outlines" description: "Enhance design elements with strokes, shadows, and glow effects to improve contrast and visual appeal." platform: ios url: "https://img.ly/docs/cesdk/ios/outlines-b7820c/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt).
**Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Outlines](https://img.ly/docs/cesdk/ios/outlines-b7820c/) --- --- ## Related Pages - [Overview](https://img.ly/docs/cesdk/ios/outlines/overview-dfeb12/) - Enhance design elements with strokes, shadows, and glow effects to improve contrast and visual appeal. - [Using Strokes](https://img.ly/docs/cesdk/ios/outlines/strokes-c2e621/) - Add and customize outlines around shapes, text, or images using stroke settings. - [Shadows and Glows](https://img.ly/docs/cesdk/ios/outlines/shadows-and-glows-6610fa/) - Apply shadow and glow effects to elements for added depth, contrast, or emphasis. --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Overview" description: "Enhance design elements with strokes, shadows, and glow effects to improve contrast and visual appeal." platform: ios url: "https://img.ly/docs/cesdk/ios/outlines/overview-dfeb12/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Outlines](https://img.ly/docs/cesdk/ios/outlines-b7820c/) > [Overview](https://img.ly/docs/cesdk/ios/outlines/overview-dfeb12/) --- In CreativeEditor SDK (CE.SDK), *outlines* refer to visual enhancements added around design elements. They include strokes, shadows, and glows, each serving to emphasize, separate, or stylize content. 
Outlines help improve visibility, create visual contrast, and enhance the overall aesthetic of a design. You can add, edit, and remove outlines both through the CE.SDK user interface and programmatically via the API. [Explore Demos](https://img.ly/showcases/cesdk?tags=ios) [Get Started](https://img.ly/docs/cesdk/ios/get-started/overview-e18f40/) --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Shadows and Glows" description: "Apply shadow and glow effects to elements for added depth, contrast, or emphasis." platform: ios url: "https://img.ly/docs/cesdk/ios/outlines/shadows-and-glows-6610fa/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). 
**Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Outlines](https://img.ly/docs/cesdk/ios/outlines-b7820c/) > [Shadows and Glows](https://img.ly/docs/cesdk/ios/outlines/shadows-and-glows-6610fa/) --- ```swift reference-only // Configure a basic colored drop shadow if the block supports one if try engine.block.supportsDropShadow(block) { try engine.block.setDropShadowEnabled(block, enabled: true) try engine.block.setDropShadowColor(block, color: .rgba(r: 1.0, g: 0.75, b: 0.8, a: 1.0)) let dropShadowColor = try engine.block.getDropShadowColor(block) try engine.block.setDropShadowOffsetX(block, offsetX: -10) try engine.block.setDropShadowOffsetY(block, offsetY: 5) let dropShadowOffsetX = try engine.block.getDropShadowOffsetX(block) let dropShadowOffsetY = try engine.block.getDropShadowOffsetY(block) try engine.block.setDropShadowBlurRadiusX(block, blurRadiusX: 10) try engine.block.setDropShadowBlurRadiusY(block, blurRadiusY: 5) try engine.block.setDropShadowClip(block, clip: false) let dropShadowClip = try engine.block.getDropShadowClip(block) // Query a block's drop shadow properties let dropShadowIsEnabled = try engine.block.isDropShadowEnabled(block) let dropShadowBlurRadiusX = try engine.block.getDropShadowBlurRadiusX(block) let dropShadowBlurRadiusY = try engine.block.getDropShadowBlurRadiusY(block) } ``` In this example, we will show you how to use the [CreativeEditor SDK](https://img.ly/products/creative-sdk)'s CreativeEngine to modify a block's drop shadow through the `block` API. Drop shadows can be added to any shape, text, or image. You can adjust the shadow's offset relative to its block on the X and Y axes, its blur radius on the X and Y axes, and whether it is visible behind a transparent block. ## Functions ```swift public func supportsDropShadow(_ id: DesignBlockID) throws -> Bool ``` Query if the given block has a drop shadow property. - `id:`: The block to query. - Returns: `true` if the block has a drop shadow property.
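Since not every block type carries a drop shadow property, a typical call site guards with `supportsDropShadow` before touching any shadow setters. A minimal sketch using only the functions documented on this page (the helper name is illustrative):

```swift
import IMGLYEngine

@MainActor
func enableShadowIfSupported(engine: Engine, block: DesignBlockID) throws {
  // Skip blocks without a drop shadow property instead of failing.
  guard try engine.block.supportsDropShadow(block) else { return }
  try engine.block.setDropShadowEnabled(block, enabled: true)
}
```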
```swift
public func setDropShadowEnabled(_ id: DesignBlockID, enabled: Bool) throws
```

Enable or disable the drop shadow of the given design block. Required scope: "appearance/shadow"

- `id`: The block whose drop shadow should be enabled or disabled.
- `enabled`: If `true`, the drop shadow will be enabled.

```swift
public func isDropShadowEnabled(_ id: DesignBlockID) throws -> Bool
```

Query if the drop shadow of the given design block is enabled.

- `id`: The block whose drop shadow state should be queried.
- Returns: `true` if the block's drop shadow is enabled.

```swift
public func setDropShadowColor(_ id: DesignBlockID, color: Color) throws
```

Set the drop shadow color of the given design block. Required scope: "appearance/shadow"

- `id`: The block whose drop shadow color should be set.
- `color`: The color to set.

```swift
public func getDropShadowColor(_ id: DesignBlockID) throws -> Color
```

Get the drop shadow color of the given design block.

- `id`: The block whose drop shadow color should be queried.
- Returns: The drop shadow color.

```swift
public func setDropShadowOffsetX(_ id: DesignBlockID, offsetX: Float) throws
```

Set the drop shadow's X offset of the given design block. Required scope: "appearance/shadow"

- `id`: The block whose drop shadow's X offset should be set.
- `offsetX`: The X offset to be set.

```swift
public func setDropShadowOffsetY(_ id: DesignBlockID, offsetY: Float) throws
```

Set the drop shadow's Y offset of the given design block. Required scope: "appearance/shadow"

- `id`: The block whose drop shadow's Y offset should be set.
- `offsetY`: The Y offset to be set.

```swift
public func getDropShadowOffsetX(_ id: DesignBlockID) throws -> Float
```

Get the drop shadow's X offset of the given design block.

- `id`: The block whose drop shadow's X offset should be queried.
- Returns: The offset.
```swift
public func getDropShadowOffsetY(_ id: DesignBlockID) throws -> Float
```

Get the drop shadow's Y offset of the given design block.

- `id`: The block whose drop shadow's Y offset should be queried.
- Returns: The offset.

```swift
public func setDropShadowBlurRadiusX(_ id: DesignBlockID, blurRadiusX: Float) throws
```

Set the drop shadow's blur radius on the X axis of the given design block. Required scope: "appearance/shadow"

- `id`: The block whose drop shadow's blur radius should be set.
- `blurRadiusX`: The blur radius to be set.

```swift
public func setDropShadowBlurRadiusY(_ id: DesignBlockID, blurRadiusY: Float) throws
```

Set the drop shadow's blur radius on the Y axis of the given design block. Required scope: "appearance/shadow"

- `id`: The block whose drop shadow's blur radius should be set.
- `blurRadiusY`: The blur radius to be set.

```swift
public func setDropShadowClip(_ id: DesignBlockID, clip: Bool) throws
```

Set the drop shadow's clipping of the given design block. (Only applies to shapes.) Required scope: "appearance/shadow"

- `id`: The block whose drop shadow's clip should be set.
- `clip`: The drop shadow's clip to be set.

```swift
public func getDropShadowClip(_ id: DesignBlockID) throws -> Bool
```

Get the drop shadow's clipping of the given design block.

- `id`: The block whose drop shadow's clipping should be queried.
- Returns: The drop shadow's clipping.

```swift
public func getDropShadowBlurRadiusX(_ id: DesignBlockID) throws -> Float
```

Get the drop shadow's blur radius on the X axis of the given design block.

- `id`: The block whose drop shadow's blur radius should be queried.
- Returns: The blur radius.

```swift
public func getDropShadowBlurRadiusY(_ id: DesignBlockID) throws -> Float
```

Get the drop shadow's blur radius on the Y axis of the given design block.

- `id`: The block whose drop shadow's blur radius should be queried.
- Returns: The blur radius.
## Full Code

Here's the full code:

```swift
// Configure a basic colored drop shadow if the block supports them
if try engine.block.supportsDropShadow(block) {
  try engine.block.setDropShadowEnabled(block, enabled: true)
  try engine.block.setDropShadowColor(block, color: .rgba(r: 1.0, g: 0.75, b: 0.8, a: 1.0))
  let dropShadowColor = try engine.block.getDropShadowColor(block)
  try engine.block.setDropShadowOffsetX(block, offsetX: -10)
  try engine.block.setDropShadowOffsetY(block, offsetY: 5)
  let dropShadowOffsetX = try engine.block.getDropShadowOffsetX(block)
  let dropShadowOffsetY = try engine.block.getDropShadowOffsetY(block)
  try engine.block.setDropShadowBlurRadiusX(block, blurRadiusX: -10)
  try engine.block.setDropShadowBlurRadiusY(block, blurRadiusY: 5)
  try engine.block.setDropShadowClip(block, clip: false)
  let dropShadowClip = try engine.block.getDropShadowClip(block)

  // Query a block's drop shadow properties
  let dropShadowIsEnabled = try engine.block.isDropShadowEnabled(block)
  let dropShadowBlurRadiusX = try engine.block.getDropShadowBlurRadiusX(block)
  let dropShadowBlurRadiusY = try engine.block.getDropShadowBlurRadiusY(block)
}
```

---

## More Resources

- **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation
- **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs)
- **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples
- **[Support](mailto:support@img.ly)** - Contact IMG.LY support

---

---
title: "Using Strokes"
description: "Add and customize outlines around shapes, text, or images using stroke settings."
platform: ios
url: "https://img.ly/docs/cesdk/ios/outlines/strokes-c2e621/"
---

> This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt).
**Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Outlines](https://img.ly/docs/cesdk/ios/outlines-b7820c/) > [Stroke (Outline)](https://img.ly/docs/cesdk/ios/outlines/strokes-c2e621/)

---

In this example, we will show you how to use the [CreativeEditor SDK](https://img.ly/products/creative-sdk)'s CreativeEngine to modify strokes through the `block` API. Strokes can be added to any shape or text. Stroke styles range from plain solid lines to dashes and gaps of varying lengths, and strokes can have different end caps.

## Strokes

```swift
public func supportsStroke(_ id: DesignBlockID) throws -> Bool
```

Query if the given block has a stroke property.

- `id`: The block to query.
- Returns: `true` if the block has a stroke property.

```swift
public func setStrokeEnabled(_ id: DesignBlockID, enabled: Bool) throws
```

Enable or disable the stroke of the given design block. Required scope: "stroke/change"

- `id`: The block whose stroke should be enabled or disabled.
- `enabled`: If `true`, the stroke will be enabled.

```swift
public func isStrokeEnabled(_ id: DesignBlockID) throws -> Bool
```

Query if the stroke of the given design block is enabled.

- `id`: The block whose stroke state should be queried.
- Returns: `true` if the block's stroke is enabled.

```swift
public func setStrokeColor(_ id: DesignBlockID, color: Color) throws
```

Set the stroke color of the given design block. Required scope: "stroke/change"

- `id`: The block whose stroke color should be set.
- `color`: The color to set.

```swift
public func getStrokeColor(_ id: DesignBlockID) throws -> Color
```

Get the stroke color of the given design block.

- `id`: The block whose stroke color should be queried.
- Returns: The stroke color.

```swift
public func setStrokeWidth(_ id: DesignBlockID, width: Float) throws
```

Set the stroke width of the given design block. Required scope: "stroke/change"

- `id`: The block whose stroke width should be set.
- `width`: The stroke width to be set.

```swift
public func getStrokeWidth(_ id: DesignBlockID) throws -> Float
```

Get the stroke width of the given design block.

- `id`: The block whose stroke width should be queried.
- Returns: The stroke's width.

```swift
public func setStrokeStyle(_ id: DesignBlockID, style: StrokeStyle) throws
```

Set the stroke style of the given design block. Required scope: "stroke/change"

- `id`: The block whose stroke style should be set.
- `style`: The stroke style to be set.

```swift
public func getStrokeStyle(_ id: DesignBlockID) throws -> StrokeStyle
```

Get the stroke style of the given design block.

- `id`: The block whose stroke style should be queried.
- Returns: The stroke's style.

```swift
public func setStrokePosition(_ id: DesignBlockID, position: StrokePosition) throws
```

Set the stroke position of the given design block. Required scope: "stroke/change"

- `id`: The block whose stroke position should be set.
- `position`: The stroke position to be set.

```swift
public func getStrokePosition(_ id: DesignBlockID) throws -> StrokePosition
```

Get the stroke position of the given design block.

- `id`: The block whose stroke position should be queried.
- Returns: The stroke position.

```swift
public func setStrokeCornerGeometry(_ id: DesignBlockID, cornerGeometry: StrokeCornerGeometry) throws
```

Set the stroke corner geometry of the given design block. Required scope: "stroke/change"

- `id`: The block whose stroke join geometry should be set.
- `cornerGeometry`: The stroke join geometry to be set.

```swift
public func getStrokeCornerGeometry(_ id: DesignBlockID) throws -> StrokeCornerGeometry
```

Get the stroke corner geometry of the given design block.

- `id`: The block whose stroke join geometry should be queried.
- Returns: The stroke join geometry.
## Full Code

Here's the full code for using strokes:

```swift
// Check if block supports strokes
if try engine.block.supportsStroke(block) {
  // Enable the stroke
  try engine.block.setStrokeEnabled(block, enabled: true)
  let strokeIsEnabled = try engine.block.isStrokeEnabled(block)
  // Configure it
  try engine.block.setStrokeColor(block, color: .rgba(r: 1.0, g: 0.75, b: 0.8, a: 1.0))
  let strokeColor = try engine.block.getStrokeColor(block)
  try engine.block.setStrokeWidth(block, width: 5)
  let strokeWidth = try engine.block.getStrokeWidth(block)
  try engine.block.setStrokeStyle(block, style: .dashed)
  let strokeStyle = try engine.block.getStrokeStyle(block)
  try engine.block.setStrokePosition(block, position: .outer)
  let strokePosition = try engine.block.getStrokePosition(block)
  try engine.block.setStrokeCornerGeometry(block, cornerGeometry: .round)
  let strokeCornerGeometry = try engine.block.getStrokeCornerGeometry(block)
}
```

---

## More Resources

- **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation
- **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs)
- **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples
- **[Support](mailto:support@img.ly)** - Contact IMG.LY support

---

---
title: "Overview"
description: "Understand how insertion works, how inserted media behave within scenes, and how to control them via UI or code."
platform: ios
url: "https://img.ly/docs/cesdk/ios/overview-491658/"
---

> This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt).
**Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Insert Media Assets](https://img.ly/docs/cesdk/ios/insert-media-a217f5/) > [Overview](https://img.ly/docs/cesdk/ios/overview-491658/) --- In CE.SDK, *inserting media into a scene* means placing visual or audio elements directly onto the canvas—images, videos, audio clips, shapes, or stickers—so they become part of the design. This differs from *importing assets*, which simply makes media available in the asset library. This guide helps you understand how insertion works, how inserted media behave within scenes, and how to control them via UI or code. By the end, you'll know how media are represented, modified, saved, and exported. [Explore Demos](https://img.ly/showcases/cesdk?tags=ios) [Get Started](https://img.ly/docs/cesdk/ios/get-started/overview-e18f40/) ## Inserting Media --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Prebuilt Solutions" description: "Documentation for Prebuilt Solutions" platform: ios url: "https://img.ly/docs/cesdk/ios/prebuilt-solutions-d0ed07/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Solutions](https://img.ly/docs/cesdk/ios/prebuilt-solutions-d0ed07/) --- --- ## Related Pages - [iOS Photo Editor SDK](https://img.ly/docs/cesdk/ios/prebuilt-solutions/photo-editor-42ccb2/) - The CreativeEditor SDK provides a robust and user-friendly solution for photo editing on iOS devices. 
- [iOS Video Editor SDK](https://img.ly/docs/cesdk/ios/prebuilt-solutions/video-editor-9e533a/) - The CreativeEditor SDK offers a comprehensive and versatile solution for video editing on iOS devices. - [iOS Design Tool & Design Editor](https://img.ly/docs/cesdk/ios/prebuilt-solutions/design-editor-9bf041/) - Embed a ready-to-use design editor that lets users personalize templates while respecting layout constraints. - [T-Shirt Designer in iOS](https://img.ly/docs/cesdk/ios/prebuilt-solutions/t-shirt-designer-02b48f/) - Embed a t-shirt design editor with print areas, PDF export, and a focused UI for apparel customization. - [iOS Postcard Editor SDK](https://img.ly/docs/cesdk/ios/prebuilt-solutions/postcard-editor-61e1f6/) - Let users personalize postcards with templates, style presets, and print-ready exports—no design skills needed. - [iOS TikTok Clone](https://img.ly/docs/cesdk/ios/prebuilt-solutions/tiktok-clone-3fa605/) - Build a TikTok-style iOS app with in-app recording, editing, and export powered by CE.SDK’s mobile components. --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "iOS Design Tool & Design Editor" description: "Embed a ready-to-use design editor that lets users personalize templates while respecting layout constraints." platform: ios url: "https://img.ly/docs/cesdk/ios/prebuilt-solutions/design-editor-9bf041/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). 
**Navigation:** [Solutions](https://img.ly/docs/cesdk/ios/prebuilt-solutions-d0ed07/) > [Design Editor](https://img.ly/docs/cesdk/ios/prebuilt-solutions/design-editor-9bf041/) --- Give your users a fast, intuitive way to personalize templates with layout, text, and image editing — no design experience needed. The Design Editor comes ready to use and can be easily added to your iOS app with minimal setup. [Explore Demo](https://img.ly/showcases/cesdk/default-ui/ios) [View on GitHub](https://github.com/imgly/cesdk-swift-examples/tree/main/editor-guides-solutions-design-editor) ## What is the Design Editor Solution? The Design Editor is a pre-built configuration of the CreativeEditor SDK (CE.SDK), tailored for non-professional users to easily adapt and personalize existing templates. It’s optimized for workflows that focus on editing layout elements like text, images, and shapes — all within clearly defined design constraints. Whether you're building a product customization app, dynamic ad creator, or template-based marketing tool, the Design Editor brings a polished, user-friendly interface to your users — right out of the box. ## Key Features - **Template-based editing**\ Empower users to customize existing templates while preserving brand integrity and layout rules. - **Smart context menus**\ Clicking an element opens a simplified editing toolbar, showing only the most relevant actions — like replacing or cropping an image. - **Streamlined user interface**\ The interface is designed to surface essential tools first. A “More” button reveals the full set of features for deeper editing. - **Role-based permissions**\ Easily toggle between *Creator* and *Adopter* roles to define what elements users can modify, lock, or hide. - **Cross-platform support**\ Available for Web, iOS, Android, and Desktop — all powered by the same core SDK. ## Why Use This Solution? The Design Editor is the fastest way to offer layout editing with production-ready UX. 
It reduces the effort of building a complete UI from scratch, while giving you full control over customization and integration. Choose this solution if you want to: - Provide a ready-to-use template editor that feels intuitive to end users - Accelerate your time to market with a polished layout editing experience - Maintain creative control by restricting editable areas with template constraints - Avoid building custom design tooling for every use case --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "iOS Photo Editor SDK" description: "The CreativeEditor SDK provides a robust and user-friendly solution for photo editing on iOS devices." platform: ios url: "https://img.ly/docs/cesdk/ios/prebuilt-solutions/photo-editor-42ccb2/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Solutions](https://img.ly/docs/cesdk/ios/prebuilt-solutions-d0ed07/) > [Photo Editor](https://img.ly/docs/cesdk/ios/prebuilt-solutions/photo-editor-42ccb2/) --- The CreativeEditor SDK provides a robust and user-friendly solution for photo editing on iOS devices. The photo UI is a specific configuration of the CE.SDK UI which enables developers to seamlessly integrate essential photo editing features into their iOS applications, offering users a powerful yet intuitive editing experience. 
Whether you are developing an app for social media, content creation, or any other platform requiring photo editing tools, the CE.SDK iOS Photo Editor is designed to meet your needs. [Explore Demo](https://img.ly/showcases/cesdk/photo-editor-ui/ios) [View on GitHub](https://github.com/imgly/cesdk-swift-examples/blob/main/showcases-app/View/Showcases.swift)
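The pieces shown further down this page (`EngineSettings` and the `PhotoEditor` SwiftUI view) can be combined into a minimal integration sketch. The module name `IMGLYPhotoEditor`, the license-only `EngineSettings` initializer, and the placeholder key are assumptions here — check the implementation guide for the exact setup your SDK version expects:

```swift
import SwiftUI
import IMGLYPhotoEditor // assumed module name for the photo editor component

struct PhotoEditorScreen: View {
  // Replace the placeholder with your CE.SDK license key.
  private let settings = EngineSettings(license: "<your-license-key>")

  var body: some View {
    NavigationView {
      PhotoEditor(settings)
    }
  }
}
```

Presenting the editor inside a `NavigationView` lets the prebuilt UI place its own navigation bar items (export, undo/redo) without extra configuration.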
## What is the Photo Editor Solution?

The Photo Editor is a fully customizable CE.SDK configuration focused on photo-centric use cases. It strips down the editor interface to include only the most relevant features for image adjustments — giving users a focused and responsive experience.

Whether your users need to fine-tune selfies, prepare product photos, or create profile images, this solution makes it easy. Get a powerful photo editor into your app with minimal setup. The Photo Editor runs entirely client-side — which helps reduce cloud computing costs and improve privacy.

## Supported Platforms

The CE.SDK iOS Photo Editor is compatible with applications built using SwiftUI. A UIKit implementation is also available for developers working within that framework.

## Prerequisites

To get started with the CE.SDK Photo Editor on iOS, ensure you have the latest version of Xcode and Swift installed.

## Supported File Types

The CE.SDK Photo Editor supports various image formats, enabling users to work with popular file types.

### Importing Media

### Exporting Media

### Importing Templates

For detailed information, see the [full file format support list](https://img.ly/docs/cesdk/ios/file-format-support-3c4b2a/).

## Understanding CE.SDK Architecture & API

The following sections provide an overview of the key components of the CE.SDK photo editor UI and its API architecture. If you're ready to start integrating CE.SDK into your iOS application, check out our Implementation Guide.

### CreativeEditor SDK Mobile Photo UI

The CE.SDK photo editor UI is a streamlined configuration of the CreativeEditor SDK, focusing on essential photo editing features. This configuration is fully customizable, allowing developers to adjust the UI and functionality to suit different use cases. Key components include:

- **Canvas:** The primary workspace where users interact with their photo content.
- **Inspector Bar:** Offers tools for adjusting properties like size, position, and effects for selected elements.
- **Asset Library:** A collection of media resources available for use within the photo editor, including images and stickers.

Learn more about interacting with and customizing the photo editor UI in our design editor UI guide.

### CreativeEngine

At the heart of CE.SDK is the CreativeEngine, which powers all rendering and photo manipulation tasks. It can be used in headless mode or in combination with the CreativeEditor UI. Key features and APIs provided by CreativeEngine include:

- **Scene Management:** Create, load, save, and manipulate photo scenes programmatically.
- **Block Management:** Manage images, text, and other elements within the photo editor.
- **Asset Management:** Integrate and manage photo and image assets from various sources.
- **Variable Management:** Define and manipulate variables for dynamic content within photo scenes.
- **Event Handling:** Subscribe to events like image selection changes or editing actions for dynamic interaction.

## Customizing the iOS Photo Editor

CE.SDK provides extensive [customization options](https://img.ly/docs/cesdk/ios/user-interface-5a089a/), allowing you to tailor the UI and functionality to meet your specific needs. This can range from basic configuration settings to more advanced customizations involving callbacks and custom elements.

### Basic Customizations

- **Configuration Object:** Customize the editor’s appearance and functionality by passing a configuration object during initialization.

```swift
let settings = EngineSettings(
  license: secrets.licenseKey,
  userID: "",
  baseURL: URL(string: "https://cdn.img.ly/packages/imgly/cesdk-engine/$UBQ_VERSION$/assets")!
)
```

- **Custom Asset Sources:** Serve custom images or stickers from a remote URL.

### UI Customization Options

- **Theme:** Choose between 'dark' or 'light' themes.
```swift var editor: some View { PhotoEditor(settings) .preferredColorScheme(colorScheme == .dark ? .light : .dark) } ``` --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "iOS Postcard Editor SDK" description: "Let users personalize postcards with templates, style presets, and print-ready exports—no design skills needed." platform: ios url: "https://img.ly/docs/cesdk/ios/prebuilt-solutions/postcard-editor-61e1f6/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Solutions](https://img.ly/docs/cesdk/ios/prebuilt-solutions-d0ed07/) > [Postcard Editor](https://img.ly/docs/cesdk/ios/prebuilt-solutions/postcard-editor-61e1f6/) --- The Postcard Editor is a prebuilt CreativeEditor SDK (CE.SDK) solution designed for quickly creating and personalizing postcards and greeting cards. It provides an intuitive UI that guides users through selecting a design, editing its contents, and customizing messaging—all without needing design expertise. This ready-to-use editor can be easily added to your iOS app and fully customized to match your brand, making it ideal for direct mail campaigns, seasonal greetings, or personalized customer engagement. [Explore Demo](https://img.ly/showcases/cesdk/post-greeting-cards/ios) [View on GitHub](https://github.com/imgly/cesdk-swift-examples/blob/main/showcases-app/View/Showcases.swift) ## What is the Postcard Editor Solution? 
The Postcard Editor is a prebuilt CreativeEditor SDK (CE.SDK) solution designed for quickly creating and personalizing postcards and greeting cards. It provides an intuitive UI that guides users through selecting a design, editing its contents, and customizing messaging—all without needing design expertise. With built-in support for style presets, design constraints, and variable-driven personalization, the Postcard Editor enables scalable creation of high-quality, print-ready content for direct mail, seasonal greetings, and personalized campaigns. ## Key Features - **Style presets**\ Jump-start the design process with a collection of professionally crafted postcard templates. - **Design mode**\ After selecting a style, users can personalize the design. Depending on the template configuration, they can: - Change accent and background colors - Replace photos from a library or upload their own - Edit headings and body text (fonts, colors, layout) - Add stickers, shapes, or other decorative elements - **Write mode**\ Users can add a personal message and address the card. Text styling options include font, size, and color customization. - **Dynamic variables**\ Enable scalable personalization using variables like `{{firstname}}` or `{{address}}`. Templates can be connected to external data sources for automated batch generation. - **Print-ready export**\ Designs are exported in high-resolution, print-friendly formats, suitable for direct mail or digital delivery. ## Why Use This Solution? - **Accelerate development**\ Save time with a pre-configured UI that’s production-ready and easily customizable via CE.SDK’s headless API. - **Empower non-designers**\ Make creative tools accessible to any user by enforcing design constraints and simplifying the editing experience. - **Scale personalization**\ Integrate with external data to programmatically generate personalized postcards for marketing, e-commerce, or events. 
- **Cross-platform ready**\ The Postcard Editor works across web, mobile, and desktop environments, offering a seamless user experience wherever it’s deployed. --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "T-Shirt Designer in iOS" description: "Embed a t-shirt design editor with print areas, PDF export, and a focused UI for apparel customization." platform: ios url: "https://img.ly/docs/cesdk/ios/prebuilt-solutions/t-shirt-designer-02b48f/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Solutions](https://img.ly/docs/cesdk/ios/prebuilt-solutions-d0ed07/) > [T-Shirt Designer](https://img.ly/docs/cesdk/ios/prebuilt-solutions/t-shirt-designer-02b48f/) --- Quickly add a professional-grade t-shirt design editor to your iOS app with CE.SDK. [Explore Demo](https://img.ly/showcases/cesdk/apparel-editor-ui/ios) [View on GitHub](https://github.com/imgly/cesdk-swift-examples/tree/main/editor-guides-solutions-apparel-editor) ## What is the T-Shirt Designer Solution? The T-Shirt Designer is a pre-configured instance of the CreativeEditor SDK (CE.SDK) tailored for apparel design workflows. It enables your users to create high-quality, print-ready t-shirt designs directly in your app—whether for a custom merchandise platform, print-on-demand storefront, or internal design tool. This solution comes with a realistic t-shirt mockup background, precise boundary enforcement, and a simplified UI. 
You can easily integrate it across web, mobile, or desktop platforms and customize it to match your brand or workflow. ## Key Features - **T-Shirt backdrop with placement guidance**\ A visually accurate t-shirt background helps users place artwork exactly where it will appear when printed. - **Strict print area enforcement**\ Elements that extend beyond the defined print region are clipped automatically, ensuring print precision. - **Placeholder-based template editing**\ Supports templates with editable placeholders, such as swappable images. Define which parts of a design are user-editable by toggling between Creator and Adopter modes. - **Print-ready PDF export**\ Outputs print-quality PDFs to seamlessly fit into your existing production workflows. - **Fully customizable UI**\ Adapt the interface and features to suit your brand and user needs using the CE.SDK configuration API. ## Why Use This Solution? - **Accelerated development**\ Save engineering time with a ready-made editor specifically configured for t-shirt design. - **Better user experience**\ The focused UI reduces complexity, guiding users through apparel creation with built-in visual feedback and safeguards. - **Seamless print integration**\ The export format and boundary enforcement make it ideal for print-on-demand systems with no additional post-processing required. - **Flexible and extensible**\ As with all CE.SDK solutions, the T-Shirt Designer is deeply customizable—extend features, change design constraints, or integrate external data and asset libraries. 
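The print-ready PDF export step can be sketched with the engine's block export API. This is a minimal sketch, not the solution's exact code: it assumes an initialized headless `Engine` with a scene loaded, the `IMGLYEngine` module name, and an async `export(_:mimeType:)` overload returning `Data` with a `.pdf` MIME type:

```swift
import Foundation
import IMGLYEngine // assumed module name for the headless engine

enum ExportError: Error {
  case noPage // the scene contains no pages to export
}

// Export the first page of the current scene as a print-ready PDF.
func exportPrintPDF(engine: Engine) async throws -> Data {
  guard let page = try engine.scene.getPages().first else {
    throw ExportError.noPage
  }
  // Elements outside the print area are clipped by the editor
  // configuration, so the exported PDF matches the print region.
  return try await engine.block.export(page, mimeType: .pdf)
}
```

The returned `Data` can be uploaded directly to a print-on-demand backend or written to disk for inspection.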
--- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "iOS TikTok Clone" description: "Build a TikTok-style iOS app with in-app recording, editing, and export powered by CE.SDK’s mobile components." platform: ios url: "https://img.ly/docs/cesdk/ios/prebuilt-solutions/tiktok-clone-3fa605/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Solutions](https://img.ly/docs/cesdk/ios/prebuilt-solutions-d0ed07/) > [TikTok Clone](https://img.ly/docs/cesdk/ios/prebuilt-solutions/tiktok-clone-3fa605/) --- Create your own TikTok-style app using our **TikTok Clone for iOS**, powered by the CreativeEditor SDK (CE.SDK). This prebuilt solution demonstrates how to bring video recording, editing, and playback into your app using Swift and CE.SDK’s powerful mobile components. Easily embed this solution into your iOS app to give users a smooth, end-to-end video creation experience — from capturing footage to applying filters, effects, and music. [View on GitHub](https://github.com/imgly/tiktok-clone-ios-cesdk) ## What is the TikTok Clone? The TikTok Clone is a ready-to-use sample app that replicates the core experience of apps like TikTok. Built in Swift, it leverages CE.SDK’s Mobile Camera and Video Editor modules to let users: - Record short-form videos using their device camera. - Edit with a wide range of tools including filters, stickers, text overlays, and audio. - Preview their creations with looping playback. 
- Export the final result in a format ready for publishing or sharing. This solution is ideal for developers building social video apps, user-generated content platforms, or any app that benefits from short-form video creation. ## Key Features - **In-app video recording:** Capture high-quality video clips using the device camera with full permission handling and native UI. - **Video editing tools:** Add stickers, filters, text, music, and more — all customizable through CE.SDK. - **Looped playback:** Use AVPlayer for smooth video review with swipe navigation. - **One-click export:** Output edited videos to standard formats, ready for upload or sharing. - **Modular architecture:** Swap components, customize the UI, or add new features using CE.SDK’s APIs. ## Why Use This Solution? - **Accelerate development:** Skip boilerplate setup and get a feature-rich video creation flow out-of-the-box. - **Production-ready code:** Use the included Swift codebase as a starting point or as inspiration for your own implementation. - **Built with CE.SDK:** Tap into a robust editing engine designed for performance and flexibility on iOS. - **Easy customization:** Tailor the camera, editor, or asset library to match your brand and user needs. - **Designed for scalability:** Extend the app with additional editing workflows, user authentication, publishing endpoints, or monetization features. --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "iOS Video Editor SDK" description: "The CreativeEditor SDK offers a comprehensive and versatile solution for video editing on iOS devices." 
platform: ios url: "https://img.ly/docs/cesdk/ios/prebuilt-solutions/video-editor-9e533a/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Solutions](https://img.ly/docs/cesdk/ios/prebuilt-solutions-d0ed07/) > [Video Editor](https://img.ly/docs/cesdk/ios/prebuilt-solutions/video-editor-9e533a/) --- The CreativeEditor SDK offers a comprehensive and versatile solution for video editing on iOS devices. The CE.SDK video editor empowers developers to integrate powerful video editing capabilities into their iOS applications, providing users with an intuitive and fully customizable editing experience. Whether you're building an app for social media, content creation, or any other platform that requires robust video editing tools, the CE.SDK iOS Video Editor is designed to meet your needs. [Explore Demo](https://img.ly/showcases/cesdk/video-ui/ios) [View on GitHub](https://github.com/imgly/cesdk-swift-examples)
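Before diving into the architecture details below, here is a minimal SwiftUI integration sketch. The `IMGLYVideoEditor` module, `EngineSettings`, and `VideoEditor` names follow the configuration snippets further down this page; treat the exact initializer shape and view hierarchy as assumptions and refer to the implementation guide for the authoritative setup.

```swift
import SwiftUI
import IMGLYVideoEditor

struct VideoEditorScreen: View {
  // An empty license string runs the editor in evaluation mode (watermarked).
  private let settings = EngineSettings(license: "")

  var body: some View {
    NavigationView {
      VideoEditor(settings)
    }
  }
}
```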
## What is the Video Editor Solution?

The Video Editor is a prebuilt solution powered by the CreativeEditor SDK (CE.SDK) that enables fast integration of high-performance video editing into web, mobile, and desktop applications. It’s designed to help your users create professional-grade videos—from short social clips to long-form stories—directly within your app. Skip building a video editor from scratch. This fully client-side solution provides a solid foundation with an extensible UI and a robust engine API to power video editing in any use case.

## Supported Platforms

The iOS Mobile Video Editor SDK is compatible with applications built using SwiftUI. A UIKit implementation is also available for developers working within that framework.

## Prerequisites

To get started with the CE.SDK Video Editor on iOS, ensure you have the latest version of Xcode and Swift installed. The SDK requires a license key; use `nil` or an empty string to run in evaluation mode with a watermark.

## Supported Media Types

[IMG.LY](http://img.ly/)'s CreativeEditor SDK enables you to load, edit, and save **MP4 files** directly on the device without server dependencies. For detailed information, see the [full file format support list](https://img.ly/docs/cesdk/ios/file-format-support-3c4b2a/).

## Understanding CE.SDK Architecture & API

The following sections provide an overview of the key components of the CE.SDK video editor UI and its API architecture. If you're ready to start integrating CE.SDK into your iOS application, check out our [Implementation Guide](https://img.ly/docs/cesdk/ios/prebuilt-solutions/video-editor-9e533a/).

### CreativeEditor SDK Mobile Video UI

The CE.SDK video editor UI is a specific configuration of the CreativeEditor SDK, focusing on essential video editing features.
It includes robust tools for video manipulation, customizable to suit different use cases. Key components include:

- **Canvas:** The main workspace where users interact with their video content.
- **Timeline:** Provides control over the sequence and duration of video clips, images, and audio tracks.
- **Tool Bar:** Provides essential editing options like adjustments, filters, effects, layer management, and adding text or images, ordered by relevance.
- **Context Menu:** Presents relevant editing options for each selected element, simplifying the editing process for users.

Learn more about interacting with and customizing the video editor UI in our design editor UI guide.

### CreativeEngine

At the core of CE.SDK is the CreativeEngine, which handles all rendering and video manipulation tasks. It can be used in headless mode or alongside the CreativeEditor UI. Key features and APIs provided by CreativeEngine include:

- **Scene Management:** Create, load, save, and manipulate video scenes programmatically.
- **Block Management:** Manage video clips, images, text, and other elements within the timeline.
- **Asset Management:** Integrate and manage video, audio, and image assets from various sources.
- **Variable Management:** Define and manipulate variables for dynamic content within video scenes.
- **Event Handling:** Subscribe to events like clip selection changes or timeline updates for dynamic interaction.

## Customizing the iOS Video Editor

CE.SDK provides extensive customization options, allowing you to tailor the UI and functionality to meet your specific needs. This can range from basic configuration settings to more advanced customizations involving callbacks and custom elements.

### Basic Customizations

- [Configuration Object:](https://img.ly/docs/cesdk/ios/user-interface/customization-72b2f8/) Customize the editor’s appearance and functionality by passing a configuration object during initialization.
```swift
let settings = EngineSettings(
  license: secrets.licenseKey,
  userID: "",
  baseURL: URL(string: "https://cdn.img.ly/packages/imgly/cesdk-engine/$UBQ_VERSION$/assets")!
)
```

- [Custom Asset Sources:](https://img.ly/docs/cesdk/ios/import-media/asset-panel/customize-c9a4de/) Serve custom video clips or audio tracks from a remote URL.

### UI Customization Options

- **Theme:** Choose between 'dark' or 'light' themes.

```swift
var editor: some View {
  VideoEditor(settings)
    .preferredColorScheme(.dark) // or .light
}
```

- **Color Palette:** Configure the editor color palette to match a particular corporate identity (CI):

```swift
VideoEditor(settings)
  .imgly.colorPalette([
    .init("Blue", .imgly.blue),
    .init("Green", .imgly.green),
    .init("Yellow", .imgly.yellow),
    .init("Red", .imgly.red),
    .init("Black", .imgly.black),
    .init("White", .imgly.white),
    .init("Gray", .imgly.gray),
  ])
```

## Framework Support

CreativeEditor SDK’s video editor is compatible with Swift and Objective-C, making it easy to integrate into any iOS application.

--- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Rules" description: "Define and enforce layout, branding, and safety rules to ensure consistent and compliant designs." platform: ios url: "https://img.ly/docs/cesdk/ios/rules-1427c0/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). 
**Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Rules](https://img.ly/docs/cesdk/ios/rules-1427c0/) --- --- ## Related Pages - [Overview](https://img.ly/docs/cesdk/ios/rules/overview-e27832/) - Define and enforce layout, branding, and safety rules to ensure consistent and compliant designs. --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Overview" description: "Define and enforce layout, branding, and safety rules to ensure consistent and compliant designs." platform: ios url: "https://img.ly/docs/cesdk/ios/rules/overview-e27832/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Rules](https://img.ly/docs/cesdk/ios/rules-1427c0/) > [Overview](https://img.ly/docs/cesdk/ios/rules/overview-e27832/) --- In CreativeEditor SDK (CE.SDK), *rules*—referred to as **scopes** in the API and code—are automated validations that help enforce design and layout standards during editing. You can use scopes to maintain brand guidelines, ensure print readiness, moderate content, and enhance the overall editing experience. Scopes can be applied to both designs and videos, helping you deliver high-quality outputs while reducing the risk of common mistakes. 
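Programmatically, scopes can be toggled per block through the engine API. The sketch below assumes a headless `engine` instance and the `layer/move` scope key; the method names and argument labels follow the CE.SDK engine API but should be verified against your SDK version.

```swift
import IMGLYEngine

// Lock a block against moving: with the scope disabled, the UI hides the
// corresponding move interaction for that block.
func lockMovement(of block: DesignBlockID, in engine: Engine) throws {
  try engine.block.setScopeEnabled(block, scope: "layer/move", enabled: false)

  // Read the effective permission back, e.g. to drive custom UI state.
  let canMove = try engine.block.isAllowedByScope(block, scope: "layer/move")
  print("move allowed:", canMove)
}
```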
[Explore Demos](https://img.ly/showcases/cesdk?tags=ios) [Get Started](https://img.ly/docs/cesdk/ios/get-started/overview-e18f40/) --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Security" description: "Learn how CE.SDK keeps your data private with client-side processing, secure licensing, and GDPR-compliant practices." platform: ios url: "https://img.ly/docs/cesdk/ios/security-777bfd/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Compatibility & Security](https://img.ly/docs/cesdk/ios/compatibility-fef719/) > [Security](https://img.ly/docs/cesdk/ios/security-777bfd/) --- This document provides a comprehensive overview of CE.SDK's security practices, focusing on data handling, privacy, and our commitment to maintaining the highest standards of security for our customers and their end users. ## Key Security Features - **Client-Side Processing**: All image and design processing occurs directly on the user's device or your servers, not on our servers - **No Data Transmission**: Your content (e.g. images, designs, templates, videos, audio, etc.) 
is never uploaded to or processed on IMG.LY servers - **Minimal Data Collection**: We only collect device identifiers and count exports for licensing purposes - **GDPR Compliance**: Our data collection practices adhere to GDPR regulations - **Secure Licensing**: Enterprise licenses are secured with RSA SHA256 encryption ## Data Protection & Access Controls ### Data Collection CE.SDK requires minimal data to provide its services. The only potentially personally identifiable information (PII) collected includes device-specific identifiers such as `identifierForVendor` on iOS and `ANDROID_ID` on Android. These identifiers are: - Used solely for tracking monthly active users for our usage-based pricing models - Reset when the user reinstalls the app or resets their device - Collected under GDPR's legitimate interest provision (no explicit consent required as they are necessary for our licensing system) Additionally, we track export operations for billing purposes in usage-based pricing models. For enterprise customers who prefer more accurate tracking, integrators can provide their own userID. This allows for more precise measurement of usage without requiring additional device identifiers. ### Data Storage & Encryption **We do not collect or store user data beyond the device identifiers and export counts mentioned above.** Since CE.SDK operates entirely client-side: - All content processing happens on the user's device - No images, designs, or user content is transmitted to IMG.LY servers - No content data is stored on IMG.LY infrastructure We use standard HTTPS (SSL/TLS) encryption for all communications between CE.SDK instances and our licensing backend. ### Access Controls We are using established industry standard practices to handle sensitive customer data. Therefore access control concerns are minimized. 
The limited data we do handle is protected as follows: - Billing information is stored in Stripe and accessed only by members of our finance team and C-level executives - API keys and credentials are stored securely in 1Password or GitHub with granular access levels - All employees sign Confidentiality Agreements to protect customer information This refers to data of our direct customers, not their users or customers. ## Licensing System CE.SDK uses a licensing system that works as follows: 1. During instantiation, an API key is provided to the CE.SDK instance 2. This API key is held in memory (never stored permanently on the device) 3. The SDK validates the key with our licensing backend 4. Upon successful validation, the backend returns a temporary local license 5. This license is periodically refreshed to maintain valid usage For browser implementations, we protect licenses against misuse by pinning them to specific domains. For mobile applications, licenses are pinned to the application identifiers to prevent unauthorized use. For enterprise customers, we offer an alternative model: - A license file is passed directly to the instance - No communication with our licensing service is required - Licenses are secured using RSA SHA256 encryption ### CE.SDK Renderer CE.SDK Renderer is a specialized variant of CE.SDK that consists of a native Linux binary bundled in a Docker container. It uses GPU acceleration and native code to render scenes and archives to various export formats. Due to bundled third-party codecs (mainly H.264 & H.265) and their associated patent requirements, CE.SDK Renderer implements additional licensing communication beyond the standard licensing handshake: 1. **Initial License Validation**: The tool performs the standard license validation with our licensing backend 2. **Periodic Heartbeats**: After successful validation, it sends periodic heartbeats to our licensing backend to track the number of active instances 3. 
**Instance Limits**: We limit the maximum number of active instances per license based on the settings in your dashboard 4. **Activation Control**: If the instance limit is exceeded, further activations (launches) of the tool will fail with a descriptive error message This additional communication allows us to ensure compliance with codec licensing requirements while providing transparent usage tracking for your organization. As with all CE.SDK products, no user data (images, videos, designs, or other content) is transmitted to IMG.LY servers - only device identifiers and instance counts are collected for licensing purposes. ## Security Considerations for User Input As CE.SDK deals primarily with arbitrary user input, we've implemented specific security measures to handle data safely: - The CreativeEngine reads files from external resources to fetch images, fonts, structured data, and other sources for designs. These reads are safeguarded by platform-specific default measures. - The engine never loads executable code or attempts to execute any data acquired from dynamic content. It generally relies on provided mime types to decode image data or falls back to byte-level inspection to choose the appropriate decoder. - For data writing operations, we provide a callback that returns a pointer to the to-be-written data. The engine itself never unconditionally writes to an externally defined path. If it writes to files directly, these are part of internal directories and can't be modified externally. - Generated PDFs may have original image files embedded if the image was not altered via effects or blurs and the `exportPdfWithHighCompatibility` option was **not** enabled. This means a malicious image file could theoretically be included in the exported PDF. - Inline text-editing allows arbitrary input of strings by users. The engine uses platform-specific default inputs and APIs and doesn't apply additional sanitization. 
The acquired strings are stored and used exclusively for text rendering - they are neither executed nor used for file operations. ## Security Infrastructure ### Vulnerability Management We take a proactive approach to security vulnerability management: - We use GitHub to track dependency vulnerabilities - We regularly update affected dependencies - We don't maintain a private network, eliminating network vulnerability concerns in that context - We don't manually maintain servers or infrastructure, as we don't have live systems beyond those required for licensing - For storage and licensing, we use virtual instances in Google Cloud which are managed by the cloud provider - All security-related fixes are published in our public changelog at [https://img.ly/docs/cesdk/changelog/](https://img.ly/docs/cesdk/changelog/) ### Security Development Practices Our development practices emphasize security: - We rely on established libraries with proven security track records - We don't directly process sensitive user data in our code - Secrets (auth tokens, passwords, API credentials, certificates) are stored in GitHub or 1Password with granular access levels - We use RSA SHA256 encryption for our enterprise licenses - We rely on platform-standard SSL implementations for HTTPS communications ### API Key Management API keys for CE.SDK are handled securely: - Keys are passed during instantiation and held in memory only - Keys are never stored permanently on client devices - For web implementation, keys are pinned to specific domains to prevent unauthorized use - Enterprise licenses use a file-based approach that doesn't require API key validation ## Compliance IMG.LY complies with the General Data Protection Regulation (GDPR) in all our operations, including CE.SDK. Our Privacy Policy is publicly available at [https://img.ly/privacy-policy](https://img.ly/privacy-policy). 
Our client-side approach to content processing significantly reduces privacy and compliance concerns, as user content never leaves their device environment for processing. ## FAQ ### Does CE.SDK upload my images or designs to IMG.LY servers? No. CE.SDK processes all content locally on the user's device. Your images, designs, and other content are never transmitted to IMG.LY servers. ### What data does IMG.LY collect through CE.SDK? CE.SDK only collects device identifiers (such as identifierForVendor on iOS or ANDROID\_ID on Android) for licensing purposes and export counts. No user content or personal information is collected. ### How does IMG.LY protect API keys? API keys are never stored permanently; they are held in memory during SDK operation. For web implementations, keys are pinned to specific domains to prevent unauthorized use. ### Has IMG.LY experienced any security breaches? No, IMG.LY has not been involved in any cybersecurity breaches in the last 12 months. ### Does IMG.LY conduct security audits? As we don't store customer data directly, but rely on third parties to do so, we focus our security efforts on dependency tracking and vulnerability management through GitHub's security features. We don't conduct security audits. ## Additional Information For more detailed information about our data collection practices, please refer to our Data Privacy and Retention information below. Should you have any additional questions regarding security practices or require more information, please contact our team at [support@img.ly](mailto:support@img.ly). ## Data Privacy and Retention At IMG.LY, we prioritize your data privacy and ensure that apart from a minimal contractually stipulated set of interactions with our servers all other operations take place on your local device. Below is an overview of our data privacy and retention policies: ### **Data Processing** All data processed by CE.SDK remains strictly on your device. 
We do not transfer your data to our servers for processing. This means that operations such as rendering, editing, and other in-app functionalities happen entirely locally, ensuring that sensitive project or personal data stays with you. ### **Data Retention** We do not store any project-related data on our servers. Since all data operations occur locally, no information about your edits, images, or video content is retained by CE.SDK. The only data that interacts with our servers is related to license validation and usage telemetry tied to your pricing plan. ### **Event Tracking** CE.SDK does not track user actions through telemetry or analytics by default; the only exceptions are the specific events listed below, which are tracked to manage customer usage, particularly for API key usage. We gather the following information during these events: - **When the engine loads:** App identifier, platform, engine version, user ID (provided by the client), device ID (mobile only), and session ID. - **When a photo or video is exported:** User ID, device ID, session ID, media type (photo/video), resolution (width and height), FPS (video only), and duration (video only). This tracking is solely for ensuring accurate usage calculation and managing monthly active user billing. Enterprise clients can opt out of this tracking under specific agreements. ### **Personal Identifiable Information (PII)** The only PII that is potentially collected includes device-specific identifiers such as `identifierForVendor` on iOS and `ANDROID_ID` on Android. These IDs are used for tracking purposes and are reset when the user reinstalls the app or resets the device. 
No consent is required for these identifiers because they are crucial for our usage-based pricing models. This is covered by the GDPR as legitimate interest. ### **User Consent** As mentioned above, user consent is not required when solely using the CE.SDK. However, this may change depending on the specific enterprise agreement or additional regulatory requirements. IMG.LY is committed to maintaining compliance with **GDPR** and other applicable data protection laws, ensuring your privacy is respected at all times. For details consult our [privacy policy](https://img.ly/privacy-policy). --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Serve Assets From Your Server" description: "Set up and manage how assets are served to the editor, including local, remote, or CDN-based delivery." platform: ios url: "https://img.ly/docs/cesdk/ios/serve-assets-b0827c/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Serve Assets](https://img.ly/docs/cesdk/ios/serve-assets-b0827c/) --- In this example, we explain how to configure the Creative Engine to use assets hosted on your own servers. While we serve all assets from our own CDN by default, it is highly recommended to serve the assets from your own servers in a production environment. ## 1. 
Register IMG.LY's default assets

If you want to use our default asset sources in your integration, call `engine.addDefaultAssetSources(baseURL:exclude:)` right after initialization:

```swift
let engine = Engine()
Task {
  try await engine.addDefaultAssetSources()
}
```

This call adds IMG.LY's default asset sources for stickers, vector paths, and filters to your engine instance. By default, these include the following sources with their corresponding IDs (as `rawValue`):

- `.sticker` - `ly.img.sticker` - Various stickers.
- `.vectorPath` - `ly.img.vectorpath` - Shapes and arrows.
- `.filterLut` - `ly.img.filter.lut` - LUT effects of various kinds.
- `.filterDuotone` - `ly.img.filter.duotone` - Color effects of various kinds.
- `.colorsDefaultPalette` - `ly.img.colors.defaultPalette` - Default color palette.
- `.effect` - `ly.img.effect` - Default effects.
- `.blur` - `ly.img.blur` - Default blurs.
- `.typeface` - `ly.img.typeface` - Default typefaces.
- `.cropPresets` - `ly.img.crop.presets` - Default crop presets.
- `.pagePresets` - `ly.img.page.presets` - Default page resize presets.

If you don't specify a `baseURL`, the assets are served from the IMG.LY CDN. It is highly recommended to serve the assets from your own servers in a production environment if you decide to use them. To do so, follow the steps below and pass a `baseURL` option to `addDefaultAssetSources`. If you only need a subset of the categories above, use the `exclude` option to pass a set of ignored sources.

## 2. Copy Assets

Download the IMG.LY default assets from [our CDN](https://cdn.img.ly/packages/imgly/cesdk-engine/$UBQ_VERSION$/imgly-assets.zip). Copy the IMGLYEngine *default* asset folders to your application bundle. Place the default asset folders in a new `.bundle` folder; this creates a nested `Bundle` object that can be loaded by your app. The folder structure should look like this:

![](./assets/bundle-ios.png)


## 3. Configure the IMGLYEngine to use your self-hosted assets

Next, we need to configure the SDK to use the copied assets instead of the ones served via the IMG.LY CDN. `engine.addDefaultAssetSources` offers a `baseURL` option that needs to be set to an absolute URL pointing to a valid `Bundle` or a remote location. If you use your own server, the `baseURL` should point to the root of your asset folder, e.g. `https://cdn.your.custom.domain/assets`:

```swift
let remoteURL = URL(string: "https://cdn.your.custom.domain/assets")!
Task {
  try await engine.addDefaultAssetSources(baseURL: remoteURL)
}
```

If you use a local `Bundle`, the `baseURL` should point to the `.bundle` folder that we created in the previous step:

```swift
let bundleURL = Bundle.main.url(forResource: "IMGLYAssets", withExtension: "bundle")!
Task {
  try await engine.addDefaultAssetSources(baseURL: bundleURL)
}
```

--- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Settings" description: "Explore all configurable editor settings and learn how to read, update, and observe them via the Settings API." platform: ios url: "https://img.ly/docs/cesdk/ios/settings-970c98/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Settings](https://img.ly/docs/cesdk/ios/settings-970c98/) --- All keys listed below can be modified through the Editor API. 
The nested settings inside `UBQSettings` can be reached via key paths, e.g. `page/title/show`.

## Settings

### `BlockAnimationSettings`

| Member | Type | Default | Description |
| --- | --- | --- | --- |
| enabled | `bool` | `true` | Whether animations should be enabled or not. |

### `CameraClampingSettings`

| Member | Type | Default | Description |
| --- | --- | --- | --- |
| overshootMode | `CameraClampingOvershootMode` | `Reverse` | Controls what happens when the clamp area is smaller than the viewport. Center: the clamp area is centered in the viewport. Reverse: the clamp area can move inside the viewport until it hits the edges. |

### `CameraSettings`

| Member | Type | Default | Description |
| --- | --- | --- | --- |
| clamping | `CameraClampingSettings` | `{}` | Clamping settings for the camera. |

### `ControlGizmoSettings`

| Member | Type | Default | Description |
| --- | --- | --- | --- |
| blockScaleDownLimit | `float` | `8.0` | Scale-down limit for blocks in screen pixels when scaling them with the gizmos or with touch gestures. The limit is ensured to be at least 0.1 to prevent scaling to size zero. |
| showCropHandles | `bool` | `true` | Whether or not to show the handles to adjust the crop area during crop mode. |
| showCropScaleHandles | `bool` | `true` | Whether or not to display the outer handles that scale the full image during crop. |
| showMoveHandles | `bool` | `true` | Whether or not to show the move handles. |
| showResizeHandles | `bool` | `true` | Whether or not to display the non-proportional resize handles (edge handles). |
| showRotateHandles | `bool` | `true` | Whether or not to show the rotation handles. |
| showScaleHandles | `bool` | `true` | Whether or not to display the proportional scale handles (corner handles). |

### `DebugFlags`

Flags that control debug outputs.

| Member | Type | Default | Description |
| --- | --- | --- | --- |
| enforceScopesInAPIs | `bool` | `false` | Whether API calls that perform edits should throw errors if the corresponding scope does not allow the edit. |
| showHandlesInteractionArea | `bool` | `false` | Display the interaction area around the handles. |
| useDebugMipmaps | `bool` | `false` | Enable the use of colored mipmaps to see which mipmap is used. |

### `MouseSettings`

| Member | Type | Default | Description |
| --- | --- | --- | --- |
| enableScroll | `bool` | `true` | Whether the engine processes mouse scroll events. |
| enableZoom | `bool` | `true` | Whether the engine processes mouse zoom events. |

### `PageSettings`

| Member | Type | Default | Description |
| --- | --- | --- | --- |
| allowCropInteraction | `bool` | `true` | If crop interaction (by handles and gestures) should be possible when the enabled arrangements allow resizing. |
| allowMoveInteraction | `bool` | `false` | If move interaction (by handles and gestures) should be possible when the enabled arrangements allow moving and if the page layout is not controlled by the scene, e.g., in a 'VerticalStack'. |
| allowResizeInteraction | `bool` | `false` | If a resize interaction (by handles and gestures) should be possible when the enabled arrangements allow resizing. |
| allowRotateInteraction | `bool` | `false` | If rotation interaction (by handles and gestures) should be possible when the enabled arrangements allow rotation and if the page layout is not controlled by the scene, e.g., in a 'VerticalStack'. |
| dimOutOfPageAreas | `bool` | `true` | Whether the opacity of the region outside of all pages should be reduced. |
| innerBorderColor | `Color` | `createRGBColor(0.0, 0.0, 0.0, 0.0)` | Color of the inner frame around the page. |
| marginFillColor | `Color` | `createRGBColor(0.79, 0.12, 0.40, 0.1)` | Color filled into the bleed margins of the pages. |
| marginFrameColor | `Color` | `createRGBColor(0.79, 0.12, 0.40, 0.0)` | Color of the frame around the bleed margin area of the pages. |
| moveChildrenWhenCroppingFill | `bool` | `false` | Whether the children of the page should be transformed to match their old position relative to the page fill when a page fill is cropped. |
| outerBorderColor | `Color` | `createRGBColor(1.0, 1.0, 1.0, 0.0)` | Color of the outer frame around the page. |
| restrictResizeInteractionToFixedAspectRatio | `bool` | `false` | If the resize interaction should be restricted to fixed aspect ratio resizing. |
| title | `PageTitleSettings(bool show, bool showOnSinglePage, bool showPageTitleTemplate, bool appendPageName, string separator, Color color, string fontFileUri)` | | Page title settings. |

### `PageTitleSettings`

| Member | Type | Default | Description |
| --- | --- | --- | --- |
| appendPageName | `bool` | `true` | Whether to append the page name to the title if a page name is set, even if the name is not specified in the template or the template is not shown. |
| color | `Color` | `createRGBColor(1., 1., 1.)` | Color of page titles visible in preview mode; can change with different themes. |
| fontFileUri | `string` | `DEFAULT_FONT` | Font of page titles. |
| separator | `string` | `"-"` | Title label separator between the page number and the page name. |
| show | `bool` | `true` | Whether to show titles above each page. |
| showOnSinglePage | `bool` | `true` | Whether to show the page title when only a single page is given. |
| showPageTitleTemplate | `bool` | `true` | Whether to include the default page title from `page.titleTemplate`. |

### `PlaceholderControlsSettings`

| Member | Type | Default | Description |
| --- | --- | --- | --- |
| showButton | `bool` | `true` | Show the placeholder button. |
| showOverlay | `bool` | `true` | Show the overlay pattern. |
| ### `Settings` | Member | Type | Default | Description | | ------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | alwaysHighlightPlaceholders | `bool` | `false` | Whether placeholder elements should always be highlighted in the scene. | | basePath | `string` | `""` | The root directory to be used when resolving relative paths or when accessing `bundle://` URIs on platforms that don't offer bundles. | | blockAnimations | `BlockAnimationSettings: bool enabled` | `{}` | Settings that configure the behavior of block animations. | | borderOutlineColor | `Color` | `createRGBColor(0., 0., 0., 1.0)` | The border outline color, defaults to black. | | camera | `CameraSettings: CameraClampingSettings clamping` | `{}` | Settings that configure the behavior of the camera. | | clearColor | `Color` | `createClear()` | The color with which the render target is cleared before scenes get rendered. Only used while renderMode == Preview, else #00000000 (full transparency) is used. | | colorMaskingSettings | `ColorMaskingSettings(Color maskColor, bool secondPass)` | `{}` | A collection of settings used to perform color masking. 
| | controlGizmo | `ControlGizmoSettings(bool showCropHandles, bool showCropScaleHandles, bool showMoveHandles, bool showResizeHandles, bool showScaleHandles, bool showRotateHandles, float blockScaleDownLimit)` | `{}` | Settings that configure which touch/click targets for move/scale/rotate/etc. are enabled and displayed. | | cropOverlayColor | `Color` | `createRGBColor(0., 0., 0., 0.39)` | Color of the dimming overlay that's added in crop mode. | | debug | `DebugFlags(bool useDebugMipmaps, bool showHandlesInteractionArea, bool enforceScopesInAPIs)` | `{}` | ? | | defaultEmojiFontFileUri | `string` | `EMOJI_FONT` | URI of default font file for emojis. | | defaultFontFileUri | `string` | `DEFAULT_FONT` | URI of default font file This font file is the default everywhere unless overriden in specific settings. | | doubleClickSelectionMode | `DoubleClickSelectionMode` | `Hierarchical` | The current mode of selection on double-click. | | doubleClickToCropEnabled | `bool` | `true` | Whether double clicking on an image element should switch into the crop editing mode. | | emscriptenCORSConfigurations | `vector< CORSConfiguration >` | `{}` | CORS Configurations: `` pairs. See `FetchAsyncService-emscripten.cpp` for details. | | errorStateColor | `Color` | `createRGBColor(1., 1., 1., 0.7)` | The error state color for design blocks. | | fallbackFontUri | `string` | `""` | The URI of the fallback font to use for text that is missing certain characters. | | forceSystemEmojis | `bool` | `true` | Whether the system emojis should be used for text. | | globalScopes | `GlobalScopes(Text text, Fill fill, Stroke stroke, Shape shape, Layer layer, Appearance appearance, Lifecycle lifecycle, Editor editor)` | `Allow)` | Global scopes. | | handleFillColor | `Color` | `createWhite()` | The fill color for handles. | | highlightColor | `Color` | `createRGBColor(0.2, 85. / 255., 1.)` | Color of the selection, hover, and group frames and for the handle outlines for non-placeholder elements. 
| | license | `string` | `""` | A valid license string in JWT format. | | maxImageSize | `int` | `4096` | The maximum size at which images are loaded into the engine. Images that exceed this size are down-scaled prior to rendering. Reducing this size further reduces the memory footprint. Defaults to 4096x4096. | | mouse | `MouseSettings(bool enableZoom, bool enableScroll)` | `{}` | Settings that configure the behavior of the mouse. | | page | `PageSettings(PageTitleSettings title, Color marginFillColor, Color marginFrameColor, Color innerBorderColor, Color outerBorderColor, bool dimOutOfPageAreas, bool allowCropInteraction, bool allowResizeInteraction, bool restrictResizeInteractionToFixedAspectRatio, bool allowRotateInteraction, bool allowMoveInteraction, bool moveChildrenWhenCroppingFill)` | `{}` | Page related settings. | | pageHighlightColor | `Color` | `createRGBColor(0.5, 0.5, 0.5, 0.2)` | Color of the outline of each page. | | placeholderControls | `PlaceholderControlsSettings(bool showOverlay, bool showButton)` | `{}` | Supersedes how the blocks' placeholder controls are applied. | | placeholderHighlightColor | `Color` | `createRGBColor(0.77, 0.06, 0.95)` | Color of the selection, hover, and group frames and for the handle outlines for placeholder elements. | | positionSnappingThreshold | `float` | `4.` | Position snapping threshold in screen space. | | progressColor | `Color` | `createRGBColor(1., 1., 1., 0.7)` | The progress indicator color. | | renderTextCursorAndSelectionInEngine | `bool` | `true` | Whether the engine should render the text cursor and selection highlights during text editing. This can be set to false, if the platform wants to perform this rendering itself. | | rotationSnappingGuideColor | `Color` | `createRGBColor(1., 0.004, 0.361)` | Color of the rotation snapping guides. | | rotationSnappingThreshold | `float` | `0.15` | Rotation snapping threshold in radians. 
| | ruleOfThirdsLineColor | `Color` | `createRGBColor(0.75, 0.75, 0.75, 0.75)` | Color of the rule-of-thirds lines. | | showBuildVersion | `bool` | `false` | Show the build version on the canvas. | | snappingGuideColor | `Color` | `createRGBColor(1., 0.004, 0.361)` | Color of the position snapping guides. | | textVariableHighlightColor | `Color` | `createRGBColor(0.7, 0., 0.7)` | Color of the text variable highlighting borders. | | touch | `TouchSettings(bool dragStartCanSelect, bool singlePointPanning, PinchGestureAction pinchAction, RotateGestureAction rotateAction)` | `{}` | Settings that configure which touch gestures are enabled and which actions they trigger. | | useSystemFontFallback | `bool` | `false` | Whether the IMG.LY hosted font fallback is used for fonts that are missing certain characters, covering most of the unicode range. | ### `TouchSettings` | Member | Type | Default | Description | | ------------------ | --------------------- | -------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | dragStartCanSelect | `bool` | `true` | Whether dragging an element requires selecting it first. When not set, elements can be directly dragged. | | pinchAction | `PinchGestureAction` | `Scale` | The action to perform when a pinch gesture is performed. | | rotateAction | `RotateGestureAction` | `Rotate` | Whether or not the two finger turn gesture can rotate selected elements. | | singlePointPanning | `bool` | `true` | Whether or not dragging on the canvas should move the camera (scrolling). When not set, the scroll bars have to be used. This setting might get overwritten with the feature flag `preventScrolling`. 
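The settings listed above are read and written through the typed `editor` API shown below, with nested members addressed via slash-separated key paths (the `page/title/show` example from the top of this page). A brief sketch, assuming an initialized `engine`:

```swift
// Hide page titles via a nested key path (PageSettings -> PageTitleSettings -> show).
try engine.editor.setSettingBool("page/title/show", value: false)

// Tighten position snapping (a top-level float setting from the table above).
try engine.editor.setSettingFloat("positionSnappingThreshold", value: 2.0)

// Read a value back with the matching typed getter.
let snapping = try engine.editor.getSettingFloat("positionSnappingThreshold")
```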
```swift reference-only
engine.editor.findAllSettings()
try engine.editor.getSettingType("doubleClickSelectionMode")

let settingsTask = Task {
  for await _ in engine.editor.onSettingsChanged {
    print("Editor settings have changed")
  }
}
let roleTask = Task {
  for await role in engine.editor.onRoleChanged {
    print("Role changed to \(role)")
  }
}

try engine.editor.setSettingBool("doubleClickToCropEnabled", value: true)
try engine.editor.getSettingBool("doubleClickToCropEnabled")
try engine.editor.setSettingInt("integerSetting", value: 0)
try engine.editor.getSettingInt("integerSetting")
try engine.editor.setSettingFloat("positionSnappingThreshold", value: 2.0)
try engine.editor.getSettingFloat("positionSnappingThreshold")
try engine.editor.setSettingString("license", value: "invalid")
try engine.editor.getSettingString("license")
try engine.editor.setSettingColor("highlightColor", color: .rgba(r: 1, g: 0, b: 1, a: 1)) // Pink
try engine.editor.getSettingColor("highlightColor") as Color
try engine.editor.setSettingEnum("doubleClickSelectionMode", value: "Direct")
try engine.editor.getSettingEnum("doubleClickSelectionMode")
try engine.editor.getSettingEnumOptions("doubleClickSelectionMode")

try engine.editor.getRole()
try engine.editor.setRole("Adopter")
```

## Change Settings

In this example, we show you how to use the [CreativeEditor SDK](https://img.ly/products/creative-sdk)'s CreativeEngine to control settings with the `editor` API. A list of all available settings can be found above.

### Exploration

```swift
public func findAllSettings() -> [String]
```

Get a list of all available settings.

- Returns: A list of all available settings.

```swift
public func getSettingType(_ keypath: String) throws -> PropertyType
```

Get the type of a setting.

- `keypath`: The settings keypath, e.g. `doubleClickSelectionMode`.
- Returns: The type of the setting.

### Functions

```swift
public var onSettingsChanged: AsyncStream { get }
```

Subscribe to changes to the editor settings.

```swift
public var onRoleChanged: AsyncStream { get }
```

Subscribe to changes to the editor role.

```swift
public func setSettingBool(_ keypath: String, value: Bool) throws
```

Set a boolean setting.

- `keypath`: The settings keypath, e.g. `doubleClickToCropEnabled`.
- `value`: The value to set.

```swift
public func getSettingBool(_ keypath: String) throws -> Bool
```

Get a boolean setting.

- `keypath`: The settings keypath, e.g. `doubleClickToCropEnabled`.
- Returns: The current value.

```swift
public func setSettingInt(_ keypath: String, value: Int) throws
```

Set an integer setting.

- `keypath`: The settings keypath.
- `value`: The value to set.

```swift
public func getSettingInt(_ keypath: String) throws -> Int
```

Get an integer setting.

- `keypath`: The settings keypath.
- Returns: The current value.

```swift
public func setSettingFloat(_ keypath: String, value: Float) throws
```

Set a float setting.

- `keypath`: The settings keypath, e.g. `positionSnappingThreshold`.
- `value`: The value to set.

```swift
public func getSettingFloat(_ keypath: String) throws -> Float
```

Get a float setting.

- `keypath`: The settings keypath, e.g. `positionSnappingThreshold`.
- Returns: The current value.

```swift
public func setSettingString(_ keypath: String, value: String) throws
```

Set a string setting.

- `keypath`: The settings keypath, e.g. `license`.
- `value`: The value to set.

```swift
public func getSettingString(_ keypath: String) throws -> String
```

Get a string setting.

- `keypath`: The settings keypath, e.g. `license`.
- Returns: The current value.

```swift
public func setSettingColor(_ keypath: String, color: Color) throws
```

Set a color setting.

- `keypath`: The settings keypath, e.g. `highlightColor`.
- `color`: The value to set.

```swift
public func getSettingColor(_ keypath: String) throws -> Color
```

Get a color setting.

- `keypath`: The settings keypath, e.g. `highlightColor`.
- Returns: The current value.

```swift
public func setSettingEnum(_ keypath: String, value: String) throws
```

Set an enum setting.

- `keypath`: The settings keypath, e.g. `doubleClickSelectionMode`.
- `value`: The enum value as string.

```swift
public func getSettingEnum(_ keypath: String) throws -> String
```

Get an enum setting.

- `keypath`: The settings keypath, e.g. `doubleClickSelectionMode`.
- Returns: The value as string.

```swift
public func getSettingEnumOptions(_ keypath: String) throws -> [String]
```

Get the available options for an enum setting.

- `keypath`: The settings keypath, e.g. `doubleClickSelectionMode`.
- Returns: The available options as string array.

```swift
public func getRole() throws -> String
```

Get the current role of the user.

- Returns: The current role of the user.

```swift
public func setRole(_ role: String) throws
```

Set the role of the user and apply role-dependent defaults for scopes and settings.

- `role`: The role of the user.
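The exploration and mutation calls combine naturally: query the valid options for an enum setting before writing it. A short sketch, assuming an initialized `engine` (the available option list depends on your engine version):

```swift
// List every value the enum setting accepts, then pick one.
let options = try engine.editor.getSettingEnumOptions("doubleClickSelectionMode")
if options.contains("Direct") {
  try engine.editor.setSettingEnum("doubleClickSelectionMode", value: "Direct")
}

// Switching roles re-applies role-dependent defaults for scopes and settings.
try engine.editor.setRole("Adopter")
print(try engine.editor.getRole())
```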
---

## More Resources

- **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation
- **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs)
- **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples
- **[Support](mailto:support@img.ly)** - Contact IMG.LY support

---

---
title: "Create and Edit Shapes"
description: "Draw custom vector shapes, combine them with boolean operations, and insert QR codes into your designs."
platform: ios
url: "https://img.ly/docs/cesdk/ios/shapes-9f1b2c/"
---

> This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt).

**Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Create and Edit Shapes](https://img.ly/docs/cesdk/ios/shapes-9f1b2c/)

---

---

## Related Pages

- [Create Shapes](https://img.ly/docs/cesdk/ios/stickers-and-shapes/create-edit/create-shapes-64acc0/) - Draw custom vector shapes and insert them into your design canvas.
- [Edit Shapes](https://img.ly/docs/cesdk/ios/stickers-and-shapes/create-edit/edit-shapes-d67cfb/) - Modify shape properties like size, color, position, and border radius.
- [Combine](https://img.ly/docs/cesdk/ios/stickers-and-shapes/combine-2a9e26/) - Group and merge multiple stickers or shapes into a single element for easier manipulation.
- [Insert QR Code](https://img.ly/docs/cesdk/ios/stickers-and-shapes/insert-qr-code-b6cc53/) - Generate a QR code with Core Image and insert it into a scene as an image fill, with positioning, sizing, and optional metadata for later updates.

---

## More Resources

- **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation
- **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs)
- **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples
- **[Support](mailto:support@img.ly)** - Contact IMG.LY support

---

---
title: "Create and Edit Stickers"
description: "Create and customize stickers using image fills for icons, logos, emoji, and multi-color graphics."
platform: ios
url: "https://img.ly/docs/cesdk/ios/stickers-3d4e5f/"
---

> This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt).

**Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Create and Edit Stickers](https://img.ly/docs/cesdk/ios/stickers-3d4e5f/)

---

---

## Related Pages

- [Create Cutout](https://img.ly/docs/cesdk/ios/stickers-and-shapes/create-cutout-384be3/) - Create cutouts from images or shapes by masking or removing specific areas.

---

## More Resources

- **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation
- **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs)
- **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples
- **[Support](mailto:support@img.ly)** - Contact IMG.LY support

---

---
title: "Combine"
description: "Group and merge multiple stickers or shapes into a single element for easier manipulation."
platform: ios
url: "https://img.ly/docs/cesdk/ios/stickers-and-shapes/combine-2a9e26/"
---

> This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md).
For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt).

**Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Create and Edit Shapes](https://img.ly/docs/cesdk/ios/shapes-9f1b2c/) > [Combine](https://img.ly/docs/cesdk/ios/stickers-and-shapes/combine-2a9e26/)

---

```swift file=@cesdk_swift_examples/engine-guides-bool-ops/BoolOps.swift reference-only
import Foundation
import IMGLYEngine

@MainActor
func boolOps(engine: Engine) async throws {
  let scene = try engine.scene.create()
  let page = try engine.block.create(.page)
  try engine.block.setWidth(page, value: 800)
  try engine.block.setHeight(page, value: 600)
  try engine.block.appendChild(to: scene, child: page)

  let circle1 = try engine.block.create(.graphic)
  try engine.block.setShape(circle1, shape: engine.block.createShape(.ellipse))
  try engine.block.setFill(circle1, fill: engine.block.createFill(.color))
  try engine.block.setPositionX(circle1, value: 30)
  try engine.block.setPositionY(circle1, value: 30)
  try engine.block.setWidth(circle1, value: 40)
  try engine.block.setHeight(circle1, value: 40)
  try engine.block.appendChild(to: page, child: circle1)

  let circle2 = try engine.block.create(.graphic)
  try engine.block.setShape(circle2, shape: engine.block.createShape(.ellipse))
  try engine.block.setFill(circle2, fill: engine.block.createFill(.color))
  try engine.block.setPositionX(circle2, value: 80)
  try engine.block.setPositionY(circle2, value: 30)
  try engine.block.setWidth(circle2, value: 40)
  try engine.block.setHeight(circle2, value: 40)
  try engine.block.appendChild(to: page, child: circle2)

  let circle3 = try engine.block.create(.graphic)
  try engine.block.setShape(circle3, shape: engine.block.createShape(.ellipse))
  try engine.block.setFill(circle3, fill: engine.block.createFill(.color))
  try engine.block.setPositionX(circle3, value: 50)
  try engine.block.setPositionY(circle3, value: 50)
  try engine.block.setWidth(circle3, value: 50)
  try engine.block.setHeight(circle3, value: 50)
  try engine.block.appendChild(to: page, child: circle3)

  let union = try engine.block.combine([circle1, circle2, circle3], booleanOperation: .union)

  let text = try engine.block.create(.text)
  try engine.block.replaceText(text, text: "Removed text")
  try engine.block.setPositionX(text, value: 10)
  try engine.block.setPositionY(text, value: 40)
  try engine.block.setWidth(text, value: 80)
  try engine.block.setHeight(text, value: 10)
  try engine.block.appendChild(to: page, child: text)

  let image = try engine.block.create(.graphic)
  try engine.block.setShape(image, shape: engine.block.createShape(.rect))
  let imageFill = try engine.block.createFill(.image)
  try engine.block.setFill(image, fill: imageFill)
  try engine.block.setPositionX(image, value: 0)
  try engine.block.setPositionY(image, value: 0)
  try engine.block.setWidth(image, value: 100)
  try engine.block.setHeight(image, value: 100)
  try engine.block.setString(
    imageFill,
    property: "fill/image/imageFileURI",
    value: "https://img.ly/static/ubq_samples/sample_1.jpg"
  )
  try engine.block.appendChild(to: page, child: image)
  try engine.block.sendToBack(image)

  let difference = try engine.block.combine([image, text], booleanOperation: .difference)
}
```

You can use four different boolean operations on blocks to combine them into unique shapes. These operations are:

- `'Union'`: adds all the blocks' shapes into one
- `'Difference'`: removes from the bottom-most block the shapes of the other blocks overlapping with it
- `'Intersection'`: keeps only the overlapping parts of all the blocks' shapes
- `'XOR'`: removes the overlapping parts of all the blocks' shapes

Combining blocks allows you to create a new block with a customized shape. Combining blocks with the `union`, `intersection` or `XOR` operation results in a new block whose fill is that of the top-most block and whose shape is the result of applying the operation pair-wise on blocks from the top-most block to the bottom-most block. Combining blocks with the `difference` operation results in a new block whose fill is that of the bottom-most block and whose shape is the result of applying the operation pair-wise on blocks from the bottom-most block to the top-most block. The combined blocks will be destroyed.

> **Note:** Only the following blocks can be combined:
> * A graphics block
> * A text block

```swift
public func isCombinable(_ ids: [DesignBlockID]) throws -> Bool
```

Checks whether blocks could be combined. Only graphics blocks and text blocks can be combined. All blocks must have the "lifecycle/duplicate" scope enabled.

- `ids`: The blocks for which to confirm combinability.
- Returns: Whether the blocks can be combined.

```swift
public func combine(_ ids: [DesignBlockID], booleanOperation: BooleanOperation) throws -> DesignBlockID
```

Perform a boolean operation on the given blocks. All blocks must be combinable. See `isCombinable`. The parent, fill and sort order of the new block is that of the prioritized block. When performing a `Union`, `Intersection` or `XOR`, the operation is performed pair-wise starting with the element with the highest sort order. When performing a `Difference`, the operation is performed pair-wise starting with the element with the lowest sort order. Required scope: "editor/select"

- `ids`: The blocks to combine. They will be destroyed if the "lifecycle/destroy" scope is enabled.
- `booleanOperation`: The boolean operation to perform.
- Returns: The newly created block.
Here's the full code:

```swift
// Create blocks and append to scene
let star = try engine.block.create(.starShape)
let rect = try engine.block.create(.rectShape)
try engine.block.appendChild(to: scene, child: star)
try engine.block.appendChild(to: scene, child: rect)

// Check whether the blocks may be combined
if try engine.block.isCombinable([star, rect]) {
  let combined = try engine.block.combine([star, rect], booleanOperation: .union)
}
```

## Combining three circles together

We create three circles and arrange them in a recognizable pattern. Combining them with `'Union'` results in a single block with a unique shape. The result will inherit the top-most block's fill, in this case `circle3`'s fill.

```swift highlight-combine-union
let circle1 = try engine.block.create(.graphic)
try engine.block.setShape(circle1, shape: engine.block.createShape(.ellipse))
try engine.block.setFill(circle1, fill: engine.block.createFill(.color))
try engine.block.setPositionX(circle1, value: 30)
try engine.block.setPositionY(circle1, value: 30)
try engine.block.setWidth(circle1, value: 40)
try engine.block.setHeight(circle1, value: 40)
try engine.block.appendChild(to: page, child: circle1)

let circle2 = try engine.block.create(.graphic)
try engine.block.setShape(circle2, shape: engine.block.createShape(.ellipse))
try engine.block.setFill(circle2, fill: engine.block.createFill(.color))
try engine.block.setPositionX(circle2, value: 80)
try engine.block.setPositionY(circle2, value: 30)
try engine.block.setWidth(circle2, value: 40)
try engine.block.setHeight(circle2, value: 40)
try engine.block.appendChild(to: page, child: circle2)

let circle3 = try engine.block.create(.graphic)
try engine.block.setShape(circle3, shape: engine.block.createShape(.ellipse))
try engine.block.setFill(circle3, fill: engine.block.createFill(.color))
try engine.block.setPositionX(circle3, value: 50)
try engine.block.setPositionY(circle3, value: 50)
try engine.block.setWidth(circle3, value: 50)
try engine.block.setHeight(circle3, value: 50)
try engine.block.appendChild(to: page, child: circle3)

let union = try engine.block.combine([circle1, circle2, circle3], booleanOperation: .union)
```

To create a special effect of text punched out from an image, we create an image and a text block. We ensure that the image is at the bottom, as that is the base block from which we want to remove shapes. The result will be a block with the size, shape and fill of the image, but with a hole in it in the shape of the removed text.

```swift highlight-combine-difference
let text = try engine.block.create(.text)
try engine.block.replaceText(text, text: "Removed text")
try engine.block.setPositionX(text, value: 10)
try engine.block.setPositionY(text, value: 40)
try engine.block.setWidth(text, value: 80)
try engine.block.setHeight(text, value: 10)
try engine.block.appendChild(to: page, child: text)

let image = try engine.block.create(.graphic)
try engine.block.setShape(image, shape: engine.block.createShape(.rect))
let imageFill = try engine.block.createFill(.image)
try engine.block.setFill(image, fill: imageFill)
try engine.block.setPositionX(image, value: 0)
try engine.block.setPositionY(image, value: 0)
try engine.block.setWidth(image, value: 100)
try engine.block.setHeight(image, value: 100)
try engine.block.setString(
  imageFill,
  property: "fill/image/imageFileURI",
  value: "https://img.ly/static/ubq_samples/sample_1.jpg"
)
try engine.block.appendChild(to: page, child: image)
try engine.block.sendToBack(image)

let difference = try engine.block.combine([image, text], booleanOperation: .difference)
```

### Full Code

Here's the full code:

```swift
import Foundation
import IMGLYEngine

@MainActor
func boolOps(engine: Engine) async throws {
  let scene = try engine.scene.create()
  let page = try engine.block.create(.page)
  try engine.block.setWidth(page, value: 800)
  try engine.block.setHeight(page, value: 600)
  try engine.block.appendChild(to: scene, child: page)

  let circle1 = try engine.block.create(.graphic)
  try engine.block.setShape(circle1, shape: engine.block.createShape(.ellipse))
  try engine.block.setFill(circle1, fill: engine.block.createFill(.color))
  try engine.block.setPositionX(circle1, value: 30)
  try engine.block.setPositionY(circle1, value: 30)
  try engine.block.setWidth(circle1, value: 40)
  try engine.block.setHeight(circle1, value: 40)
  try engine.block.appendChild(to: page, child: circle1)

  let circle2 = try engine.block.create(.graphic)
  try engine.block.setShape(circle2, shape: engine.block.createShape(.ellipse))
  try engine.block.setFill(circle2, fill: engine.block.createFill(.color))
  try engine.block.setPositionX(circle2, value: 80)
  try engine.block.setPositionY(circle2, value: 30)
  try engine.block.setWidth(circle2, value: 40)
  try engine.block.setHeight(circle2, value: 40)
  try engine.block.appendChild(to: page, child: circle2)

  let circle3 = try engine.block.create(.graphic)
  try engine.block.setShape(circle3, shape: engine.block.createShape(.ellipse))
  try engine.block.setFill(circle3, fill: engine.block.createFill(.color))
  try engine.block.setPositionX(circle3, value: 50)
  try engine.block.setPositionY(circle3, value: 50)
  try engine.block.setWidth(circle3, value: 50)
  try engine.block.setHeight(circle3, value: 50)
  try engine.block.appendChild(to: page, child: circle3)

  let union = try engine.block.combine([circle1, circle2, circle3], booleanOperation: .union)

  let text = try engine.block.create(.text)
  try engine.block.replaceText(text, text: "Removed text")
  try engine.block.setPositionX(text, value: 10)
  try engine.block.setPositionY(text, value: 40)
  try engine.block.setWidth(text, value: 80)
  try engine.block.setHeight(text, value: 10)
  try engine.block.appendChild(to: page, child: text)

  let image = try engine.block.create(.graphic)
  try engine.block.setShape(image, shape: engine.block.createShape(.rect))
  let imageFill = try engine.block.createFill(.image)
  try engine.block.setFill(image, fill: imageFill)
  try engine.block.setPositionX(image, value: 0)
  try engine.block.setPositionY(image, value: 0)
  try engine.block.setWidth(image, value: 100)
  try engine.block.setHeight(image, value: 100)
  try engine.block.setString(
    imageFill,
    property: "fill/image/imageFileURI",
    value: "https://img.ly/static/ubq_samples/sample_1.jpg"
  )
  try engine.block.appendChild(to: page, child: image)
  try engine.block.sendToBack(image)

  let difference = try engine.block.combine([image, text], booleanOperation: .difference)
}
```

---

## More Resources

- **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation
- **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs)
- **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples
- **[Support](mailto:support@img.ly)** - Contact IMG.LY support

---

---
title: "Create Cutout"
description: "Create cutouts from images or shapes by masking or removing specific areas."
platform: ios
url: "https://img.ly/docs/cesdk/ios/stickers-and-shapes/create-cutout-384be3/"
---

> This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt).
**Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Create and Edit Stickers](https://img.ly/docs/cesdk/ios/stickers-3d4e5f/) > [Create Cutout](https://img.ly/docs/cesdk/ios/stickers-and-shapes/create-cutout-384be3/)

---

```swift file=@cesdk_swift_examples/engine-guides-cutouts/Cutouts.swift reference-only
import Foundation
import IMGLYEngine

@MainActor
func cutouts(engine: Engine) async throws {
  let scene = try engine.scene.create()
  let page = try engine.block.create(.page)
  try engine.block.setWidth(page, value: 800)
  try engine.block.setHeight(page, value: 600)
  try engine.block.appendChild(to: scene, child: page)

  let circle = try engine.block.createCutoutFromPath("M 0,25 a 25,25 0 1,1 50,0 a 25,25 0 1,1 -50,0 Z")
  try engine.block.setFloat(circle, property: "cutout/offset", value: 3.0)
  try engine.block.setEnum(circle, property: "cutout/type", value: "Dashed")

  var square = try engine.block.createCutoutFromPath("M 0,0 H 50 V 50 H 0 Z")
  try engine.block.setFloat(square, property: "cutout/offset", value: 6.0)

  var union = try engine.block.createCutoutFromOperation(containing: [circle, square], cutoutOperation: .union)
  try engine.block.destroy(circle)
  try engine.block.destroy(square)

  engine.editor.setSpotColor(name: "CutContour", r: 0.0, g: 0.0, b: 1.0)
}
```

Cutouts are a special feature for use with cutting printers. When printing a PDF file containing cutout paths, a cutting printer will cut along these paths with a cutter rather than print them with ink. Use cutouts to create stickers, iron-on decals, etc.

Cutouts can be created from an SVG path string describing their underlying shape. Cutouts can also be created by combining multiple existing cutouts using the boolean operations `union`, `difference`, `intersection` and `xor`.

Cutouts have a type property which can take one of two values: `solid` or `dashed`. Cutting printers recognize cutout paths through their specially named spot colors.
By default, `solid` cutouts have the spot color `"CutContour"` to produce a continuous cutting line, and `dashed` cutouts have the spot color `"PerfCutContour"` to produce a perforated cutting line. You may need to adjust these spot color names for your printer.

> **Note:** The actual color approximation given to the spot color does not affect how the cutting printer interprets the cutout, only how it is rendered. The default color approximations are magenta for "CutContour" and green for "PerfCutContour".

Cutouts have an offset property that determines the distance between the rendered cutout path and the underlying path set at creation.

## Set up the scene

We first create a new scene with a new page.

```swift highlight-setup
let scene = try engine.scene.create()
let page = try engine.block.create(.page)
try engine.block.setWidth(page, value: 800)
try engine.block.setHeight(page, value: 600)
try engine.block.appendChild(to: scene, child: page)
```

## Create cutouts

Here we add two cutouts. First, a circle of type `dashed` with an offset of 3.0. Second, a square of the default type `solid` with an offset of 6.0.

```swift highlight-create-cutouts
let circle = try engine.block.createCutoutFromPath("M 0,25 a 25,25 0 1,1 50,0 a 25,25 0 1,1 -50,0 Z")
try engine.block.setFloat(circle, property: "cutout/offset", value: 3.0)
try engine.block.setEnum(circle, property: "cutout/type", value: "Dashed")

var square = try engine.block.createCutoutFromPath("M 0,0 H 50 V 50 H 0 Z")
try engine.block.setFloat(square, property: "cutout/offset", value: 6.0)
```

## Combining multiple cutouts into one

Here we use the `union` operation to create a new cutout that combines the two cutouts we created earlier. Note that we destroy the previously created `circle` and `square` cutouts, as we don't need them anymore and we certainly don't want the printer to cut along those paths as well.
When combining multiple cutouts, the resulting cutout will have the type of the first given cutout and an offset of 0. In this example, since the `circle` cutout is of type `dashed`, the newly created cutout will also be of type `dashed`.

> **Warning:** When using the `difference` operation, the first cutout is the one that is subtracted from. For the other operations, the order of the cutouts doesn't matter.

```swift highlight-cutout-union
var union = try engine.block.createCutoutFromOperation(containing: [circle, square], cutoutOperation: .union)
try engine.block.destroy(circle)
try engine.block.destroy(square)
```

## Change the default color for Solid cutouts

Suppose we'd like cutouts of type `solid` to render not as magenta but as blue. Knowing that `"CutContour"` is the spot color associated with `solid`, we change its RGB approximation to blue. Though the cutout will render as blue, the printer will still interpret this path as a cutting path because of its special spot color name.
```swift highlight-spot-color-solid engine.editor.setSpotColor(name: "CutContour", r: 0.0, g: 0.0, b: 1.0) ``` ## Full Code Here's the full code: ```swift import Foundation import IMGLYEngine @MainActor func cutouts(engine: Engine) async throws { let scene = try engine.scene.create() let page = try engine.block.create(.page) try engine.block.setWidth(page, value: 800) try engine.block.setHeight(page, value: 600) try engine.block.appendChild(to: scene, child: page) let circle = try engine.block.createCutoutFromPath("M 0,25 a 25,25 0 1,1 50,0 a 25,25 0 1,1 -50,0 Z") try engine.block.setFloat(circle, property: "cutout/offset", value: 3.0) try engine.block.setEnum(circle, property: "cutout/type", value: "Dashed") var square = try engine.block.createCutoutFromPath("M 0,0 H 50 V 50 H 0 Z") try engine.block.setFloat(square, property: "cutout/offset", value: 6.0) var union = try engine.block.createCutoutFromOperation(containing: [circle, square], cutoutOperation: .union) try engine.block.destroy(circle) try engine.block.destroy(square) engine.editor.setSpotColor(name: "CutContour", r: 0.0, g: 0.0, b: 1.0) } ``` --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Create Shapes" description: "Draw custom vector shapes and insert them into your design canvas." platform: ios url: "https://img.ly/docs/cesdk/ios/stickers-and-shapes/create-edit/create-shapes-64acc0/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). 
**Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Create and Edit Shapes](https://img.ly/docs/cesdk/ios/shapes-9f1b2c/) > [Create Shapes](https://img.ly/docs/cesdk/ios/stickers-and-shapes/create-edit/create-shapes-64acc0/) --- ```swift file=@cesdk_swift_examples/engine-guides-using-shapes/UsingShapes.swift reference-only import Foundation import IMGLYEngine @MainActor func usingShapes(engine: Engine) async throws { let scene = try engine.scene.create() let graphic = try engine.block.create(.graphic) let imageFill = try engine.block.createFill(.image) try engine.block.setString( imageFill, property: "fill/image/imageFileURI", value: "https://img.ly/static/ubq_samples/sample_1.jpg", ) try engine.block.setFill(graphic, fill: imageFill) try engine.block.setWidth(graphic, value: 100) try engine.block.setHeight(graphic, value: 100) try engine.block.appendChild(to: scene, child: graphic) try await engine.scene.zoom(to: graphic, paddingLeft: 40, paddingTop: 40, paddingRight: 40, paddingBottom: 40) try engine.block.supportsShape(graphic) // Returns true let text = try engine.block.create(.text) try engine.block.supportsShape(text) // Returns false let rectShape = try engine.block.createShape(.rect) try engine.block.setShape(graphic, shape: rectShape) let shape = try engine.block.getShape(graphic) let shapeType = try engine.block.getType(shape) let starShape = try engine.block.createShape(.star) try engine.block.destroy(engine.block.getShape(graphic)) try engine.block.setShape(graphic, shape: starShape) /* The following line would also destroy the currently attached starShape */ // engine.block.destroy(graphic) let allShapeProperties = try engine.block.findAllProperties(starShape) try engine.block.setInt(starShape, property: "shape/star/points", value: 6) } ``` The CE.SDK provides a flexible way to create and customize shapes, including rectangles, circles, lines, and polygons. 
## Supported Shapes The following shapes are supported in CE.SDK: - `ShapeType.rect` - `ShapeType.line` - `ShapeType.ellipse` - `ShapeType.polygon` - `ShapeType.star` - `ShapeType.vectorPath` ## Creating Shapes `graphic` blocks don't have any shape after you create them, which leaves them invisible by default. In order to make them visible, we need to assign both a shape and a fill to the `graphic` block. You can find more information on fills [here](https://img.ly/docs/cesdk/ios/fills-402ddc/). In this example we have created and attached an image fill. In order to create a new shape, we must call the `func createShape(_ type: ShapeType) throws -> DesignBlockID` API. ```swift highlight-createShape let rectShape = try engine.block.createShape(.rect) ``` In order to assign this shape to the `graphic` block, call the `func setShape(_ id: DesignBlockID, shape: DesignBlockID) throws` API. ```swift highlight-setShape try engine.block.setShape(graphic, shape: rectShape) ``` Just like design blocks, shapes with different types have different properties that you can set via the API. Please refer to the [API docs](https://img.ly/docs/cesdk/ios/stickers-and-shapes/create-edit/edit-shapes-d67cfb/) for a complete list of all available properties for each type of shape. 
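The same pattern extends to shape-specific properties. As a minimal sketch — assuming the property key `shape/polygon/sides`, which you should verify via `findAllProperties` for your SDK version — a `graphic` block could be given a six-sided polygon shape like this:

```swift
// Hypothetical sketch: attach a polygon shape to an existing `graphic`
// block and adjust its side count. The key "shape/polygon/sides" is an
// assumption; confirm it with engine.block.findAllProperties(polygonShape).
let polygonShape = try engine.block.createShape(.polygon)
try engine.block.setShape(graphic, shape: polygonShape)
try engine.block.setInt(polygonShape, property: "shape/polygon/sides", value: 6)
```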
## Full Code Here's the full code: ```swift import Foundation import IMGLYEngine @MainActor func usingShapes(engine: Engine) async throws { let scene = try engine.scene.create() let graphic = try engine.block.create(.graphic) let imageFill = try engine.block.createFill(.image) try engine.block.setString( imageFill, property: "fill/image/imageFileURI", value: "https://img.ly/static/ubq_samples/sample_1.jpg" ) try engine.block.setFill(graphic, fill: imageFill) try engine.block.setWidth(graphic, value: 100) try engine.block.setHeight(graphic, value: 100) try engine.block.appendChild(to: scene, child: graphic) try await engine.scene.zoom(to: graphic, paddingLeft: 40, paddingTop: 40, paddingRight: 40, paddingBottom: 40) try engine.block.supportsShape(graphic) // Returns true let text = try engine.block.create(.text) try engine.block.supportsShape(text) // Returns false let rectShape = try engine.block.createShape(.rect) try engine.block.setShape(graphic, shape: rectShape) } ``` --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Edit Shapes" description: "Modify shape properties like size, color, position, and border radius." platform: ios url: "https://img.ly/docs/cesdk/ios/stickers-and-shapes/create-edit/edit-shapes-d67cfb/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). 
**Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Create and Edit Shapes](https://img.ly/docs/cesdk/ios/shapes-9f1b2c/) > [Edit Shapes](https://img.ly/docs/cesdk/ios/stickers-and-shapes/create-edit/edit-shapes-d67cfb/) --- ```swift file=@cesdk_swift_examples/engine-guides-using-shapes/UsingShapes.swift reference-only import Foundation import IMGLYEngine @MainActor func usingShapes(engine: Engine) async throws { let scene = try engine.scene.create() let graphic = try engine.block.create(.graphic) let imageFill = try engine.block.createFill(.image) try engine.block.setString( imageFill, property: "fill/image/imageFileURI", value: "https://img.ly/static/ubq_samples/sample_1.jpg", ) try engine.block.setFill(graphic, fill: imageFill) try engine.block.setWidth(graphic, value: 100) try engine.block.setHeight(graphic, value: 100) try engine.block.appendChild(to: scene, child: graphic) try await engine.scene.zoom(to: graphic, paddingLeft: 40, paddingTop: 40, paddingRight: 40, paddingBottom: 40) try engine.block.supportsShape(graphic) // Returns true let text = try engine.block.create(.text) try engine.block.supportsShape(text) // Returns false let rectShape = try engine.block.createShape(.rect) try engine.block.setShape(graphic, shape: rectShape) let shape = try engine.block.getShape(graphic) let shapeType = try engine.block.getType(shape) let starShape = try engine.block.createShape(.star) try engine.block.destroy(engine.block.getShape(graphic)) try engine.block.setShape(graphic, shape: starShape) /* The following line would also destroy the currently attached starShape */ // engine.block.destroy(graphic) let allShapeProperties = try engine.block.findAllProperties(starShape) try engine.block.setInt(starShape, property: "shape/star/points", value: 6) } ``` The `graphic` [design block](https://img.ly/docs/cesdk/ios/concepts/blocks-90241e/) in CE.SDK allows you to modify and replace its shape. 
CreativeEditor SDK supports many different types of shapes, such as rectangles, lines, ellipses, polygons and custom vector paths. Similarly to blocks, each shape object has a numeric id which can be used to query and [modify its properties](https://img.ly/docs/cesdk/ios/concepts/blocks-90241e/). ## Accessing Shapes In order to query whether a block supports shapes, you should call the `func supportsShape(_ id: DesignBlockID) throws -> Bool` API. Currently, only the `graphic` design block supports shape objects. ```swift highlight-supportsShape try engine.block.supportsShape(graphic) // Returns true let text = try engine.block.create(.text) try engine.block.supportsShape(text) // Returns false ``` To query the shape of a design block, call the `func getShape(_ id: DesignBlockID) throws -> DesignBlockID` API. You can now pass the returned result into other APIs in order to query more information about the shape, e.g. its type via the `func getType(_ id: DesignBlockID) throws -> String` API. ```swift highlight-getShape let shape = try engine.block.getShape(graphic) let shapeType = try engine.block.getType(shape) ``` When replacing the shape of a design block, remember to destroy the previous shape object if you don't intend to use it any further. Shape objects that are not attached to a design block will never be automatically destroyed. Destroying a design block will also destroy its attached shape block. ```swift highlight-replaceShape let starShape = try engine.block.createShape(.star) try engine.block.destroy(engine.block.getShape(graphic)) try engine.block.setShape(graphic, shape: starShape) /* The following line would also destroy the currently attached starShape */ // engine.block.destroy(graphic) ``` ## Shape Properties Just like design blocks, shapes with different types have different properties that you can query and modify via the API. 
Use `func findAllProperties(_ id: DesignBlockID) throws -> [String]` in order to get a list of all properties of a given shape. For the star shape in this example, the call would return `["name", "shape/star/innerDiameter", "shape/star/points", "type", "uuid"]`. Please refer to the [API docs](https://img.ly/docs/cesdk/ios/stickers-and-shapes/create-edit/edit-shapes-d67cfb/) for a complete list of all available properties for each type of shape. ```swift highlight-getProperties let allShapeProperties = try engine.block.findAllProperties(starShape) ``` Once we know the property keys of a shape, we can use the same APIs as for design blocks in order to modify those properties. For example, we can use `func setInt(_ id: DesignBlockID, property: String, value: Int) throws` in order to change the number of points of the star to six. ```swift highlight-modifyProperties try engine.block.setInt(starShape, property: "shape/star/points", value: 6) ``` ## Full Code Here's the full code: ```swift import Foundation import IMGLYEngine @MainActor func usingShapes(engine: Engine) async throws { let scene = try engine.scene.create() let graphic = try engine.block.create(.graphic) let imageFill = try engine.block.createFill(.image) try engine.block.setString( imageFill, property: "fill/image/imageFileURI", value: "https://img.ly/static/ubq_samples/sample_1.jpg" ) try engine.block.setFill(graphic, fill: imageFill) try engine.block.setWidth(graphic, value: 100) try engine.block.setHeight(graphic, value: 100) try engine.block.appendChild(to: scene, child: graphic) try await engine.scene.zoom(to: graphic, paddingLeft: 40, paddingTop: 40, paddingRight: 40, paddingBottom: 40) try engine.block.supportsShape(graphic) // Returns true let text = try engine.block.create(.text) try engine.block.supportsShape(text) // Returns false let rectShape = try engine.block.createShape(.rect) try engine.block.setShape(graphic, shape: rectShape) let shape = try engine.block.getShape(graphic) let shapeType = 
try engine.block.getType(shape) let starShape = try engine.block.createShape(.star) try engine.block.destroy(engine.block.getShape(graphic)) try engine.block.setShape(graphic, shape: starShape) /* The following line would also destroy the currently attached starShape */ // engine.block.destroy(graphic) let allShapeProperties = try engine.block.findAllProperties(starShape) try engine.block.setInt(starShape, property: "shape/star/points", value: 6) } ``` --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Insert QR Code" description: "Generate a QR code with Core Image and insert it into a scene as an image fill, with positioning, sizing, and optional metadata for later updates." platform: ios url: "https://img.ly/docs/cesdk/ios/stickers-and-shapes/insert-qr-code-b6cc53/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Create and Edit Shapes](https://img.ly/docs/cesdk/ios/shapes-9f1b2c/) > [Insert QR Code](https://img.ly/docs/cesdk/ios/stickers-and-shapes/insert-qr-code-b6cc53/) --- QR codes are a practical way to turn any design into a scannable gateway for: - Landing pages - App installs - Product info - Event tickets - You name it CE.SDK doesn't include a built-in QR generator, but you can create the image with **Core Image** in just a few lines and place it on the canvas as an **image fill**. 
This guide shows the full workflow with Swift examples. ## What You'll Learn - Generate a QR code image from a `String` using **Core Image**. - Add it to a CE.SDK scene as an **image fill** on a shape block. - Control **size**, **position**, and **color** for brand consistency. - Store **metadata** for quick regeneration later. ## When You'll Use It - Business cards, flyers, or packaging that need a **scannable link**. - Apps that let users personalize templates with their own URLs. - Automated workflows that embed links into generated designs. ```swift file=@cesdk_swift_examples/engine-guides-shapes-qrcode/QRCodeGenerator.swift reference-only import CoreImage.CIFilterBuiltins import IMGLYEngine import SwiftUI #if canImport(UIKit) import UIKit private typealias PlatformColor = UIColor private typealias PlatformImage = UIImage #elseif canImport(AppKit) import AppKit private typealias PlatformColor = NSColor private typealias PlatformImage = NSImage #endif struct QRCanvasExampleView: View { // CE.SDK @State private var engine: Engine? 
@State private var scene: DesignBlockID = 0 @State private var page: DesignBlockID = 0 // UI state @State private var urlString: String = "https://example.com" var body: some View { VStack(spacing: 16) { // Canvas renders the current engine scene Group { if let engine { Canvas(engine: engine, isPaused: .constant(false)) .frame(minHeight: 280) .clipShape(RoundedRectangle(cornerRadius: 12)) .overlay(RoundedRectangle(cornerRadius: 12).stroke(Color.secondary.opacity(0.3))) } else { ZStack { RoundedRectangle(cornerRadius: 12).fill(Color.secondary.opacity(0.08)) Text("Canvas will appear after Engine is created") .foregroundColor(.secondary) .padding() } .frame(minHeight: 280) } } // Controls VStack(spacing: 12) { HStack(spacing: 0) { TextField("https://example.com", text: $urlString) #if os(iOS) .autocapitalization(.none) .keyboardType(.URL) #endif .textFieldStyle(.roundedBorder) Spacer() Button("Insert QR") { Task { await insertQR() } } .disabled(engine == nil || page == 0) .padding(.horizontal, 12) .padding(.vertical, 8) .background(Color.accentColor) .foregroundColor(.white) .cornerRadius(8) } } } .padding() .onAppear { Task { await setupEngineIfNeeded() } } } // MARK: - Engine Setup @MainActor private func setupEngineIfNeeded() async { guard engine == nil else { return } do { let e = try await Engine(license: "") engine = e let s = try e.scene.create() scene = s let p = try e.block.create(.page) try e.block.appendChild(to: s, child: p) page = p } catch { print("Engine setup error:", error) } } // MARK: - Insert QR @MainActor private func insertQR() async { guard let e = engine, page != 0 else { return } do { _ = try await insertQRCode( engine: e, page: page, urlString: urlString, position: CGPoint(x: 200, y: 200), size: 180, ) } catch { print("Insert QR failed:", error) } } } // MARK: - QR Generation (Core Image) /// Generate a QR code with brand colors. /// - Parameters: /// - string: Content to encode (use a full URL with scheme). 
/// - correction: Error correction level (L, M, Q, H). "M" is a good default. /// - scale: Pixel scale factor (increase for print). /// - foreground: Dark module color. /// - background: Light background color. private func makeQRCode( from string: String, correction: String = "M", scale: CGFloat = 10, foreground: PlatformColor = .black, background: PlatformColor = .white, ) -> PlatformImage? { guard let data = string.data(using: .utf8) else { return nil } let qr = CIFilter.qrCodeGenerator() qr.setValue(data, forKey: "inputMessage") qr.setValue(correction, forKey: "inputCorrectionLevel") guard let output = qr.outputImage else { return nil } // Map black/white to brand colors let falseColor = CIFilter.falseColor() falseColor.inputImage = output #if canImport(UIKit) falseColor.color0 = CIColor(color: foreground) falseColor.color1 = CIColor(color: background) #elseif canImport(AppKit) falseColor.color0 = CIColor(color: foreground) ?? CIColor.black falseColor.color1 = CIColor(color: background) ?? 
CIColor.white #endif guard let colored = falseColor.outputImage else { return nil } // Scale up without interpolation let scaled = colored.transformed(by: CGAffineTransform(scaleX: scale, y: scale)) let context = CIContext(options: [.useSoftwareRenderer: false]) guard let cg = context.createCGImage(scaled, from: scaled.extent) else { return nil } #if canImport(UIKit) return UIImage(cgImage: cg, scale: 1.0, orientation: .up) #elseif canImport(AppKit) return NSImage(cgImage: cg, size: NSSize(width: cg.width, height: cg.height)) #endif } // MARK: - CE.SDK Block Creation @MainActor func insertQRCode( engine: Engine, page: DesignBlockID, urlString: String, position: CGPoint = .init(x: 200, y: 200), size: CGFloat = 160, ) async throws -> DesignBlockID { guard let qr = makeQRCode(from: urlString, correction: "M", scale: 10, foreground: .black, background: .white) else { throw NSError(domain: "QR", code: 1, userInfo: [NSLocalizedDescriptionKey: "Failed to generate QR image"]) } // Get PNG data from the image (platform-specific) #if canImport(UIKit) guard let png = qr.pngData() else { throw NSError(domain: "QR", code: 2, userInfo: [NSLocalizedDescriptionKey: "Failed to encode QR as PNG"]) } #elseif canImport(AppKit) guard let tiffRepresentation = qr.tiffRepresentation, let bitmap = NSBitmapImageRep(data: tiffRepresentation), let png = bitmap.representation(using: .png, properties: [:]) else { throw NSError(domain: "QR", code: 2, userInfo: [NSLocalizedDescriptionKey: "Failed to encode QR as PNG"]) } #endif let fileURL = FileManager.default.temporaryDirectory .appendingPathComponent(UUID().uuidString) .appendingPathExtension("png") try png.write(to: fileURL) // Create a visible graphic block with a rect shape let graphic = try engine.block.create(.graphic) let rectShape = try engine.block.createShape(.rect) try engine.block.setShape(graphic, shape: rectShape) // Create an image fill and point it to the QR file URL let imageFill = try engine.block.createFill(.image) try 
engine.block.setString(imageFill, property: "fill/image/imageFileURI", value: fileURL.absoluteString) try engine.block.setFill(graphic, fill: imageFill) // Size & position (keep square) try engine.block.setWidth(graphic, value: Float(size)) try engine.block.setHeight(graphic, value: Float(size)) try engine.block.setPositionX(graphic, value: Float(position.x)) try engine.block.setPositionY(graphic, value: Float(position.y)) // Optional metadata for future updates try? engine.block.setMetadata(graphic, key: "qr/url", value: urlString) // Add to page try engine.block.appendChild(to: page, child: graphic) return graphic } /// Update an existing QR code block with a new URL. /// - Parameters: /// - engine: The CE.SDK engine instance. /// - qrBlock: The existing QR code block to update. /// - newURL: The new URL to encode. @MainActor func updateQRCode(engine: Engine, qrBlock: DesignBlockID, newURL: String) throws { guard let qr = makeQRCode(from: newURL) else { return } // Get PNG data from the image (platform-specific) #if canImport(UIKit) guard let png = qr.pngData() else { return } #elseif canImport(AppKit) guard let tiffRepresentation = qr.tiffRepresentation, let bitmap = NSBitmapImageRep(data: tiffRepresentation), let png = bitmap.representation(using: .png, properties: [:]) else { return } #endif let fileURL = FileManager.default.temporaryDirectory .appendingPathComponent(UUID().uuidString) .appendingPathExtension("png") try png.write(to: fileURL) let fill = try engine.block.getFill(qrBlock) try engine.block.setString(fill, property: "fill/image/imageFileURI", value: fileURL.absoluteString) try? engine.block.setMetadata(qrBlock, key: "qr/url", value: newURL) } #Preview { QRCanvasExampleView() } ``` ## Platform Setup The example uses type aliases to abstract platform differences between iOS (`UIKit`) and macOS (`AppKit`). `PlatformColor` maps to `UIColor` or `NSColor`, and `PlatformImage` maps to `UIImage` or `NSImage`. 
```swift highlight-qr-imports import CoreImage.CIFilterBuiltins import IMGLYEngine import SwiftUI #if canImport(UIKit) import UIKit private typealias PlatformColor = UIColor private typealias PlatformImage = UIImage #elseif canImport(AppKit) import AppKit private typealias PlatformColor = NSColor private typealias PlatformImage = NSImage #endif ``` ## Generate a QR Code Image Use **Core Image** to create a high-resolution QR code, then colorize it to match your brand. ```swift highlight-qr-generate /// Generate a QR code with brand colors. /// - Parameters: /// - string: Content to encode (use a full URL with scheme). /// - correction: Error correction level (L, M, Q, H). "M" is a good default. /// - scale: Pixel scale factor (increase for print). /// - foreground: Dark module color. /// - background: Light background color. private func makeQRCode( from string: String, correction: String = "M", scale: CGFloat = 10, foreground: PlatformColor = .black, background: PlatformColor = .white, ) -> PlatformImage? { guard let data = string.data(using: .utf8) else { return nil } let qr = CIFilter.qrCodeGenerator() qr.setValue(data, forKey: "inputMessage") qr.setValue(correction, forKey: "inputCorrectionLevel") guard let output = qr.outputImage else { return nil } // Map black/white to brand colors let falseColor = CIFilter.falseColor() falseColor.inputImage = output #if canImport(UIKit) falseColor.color0 = CIColor(color: foreground) falseColor.color1 = CIColor(color: background) #elseif canImport(AppKit) falseColor.color0 = CIColor(color: foreground) ?? CIColor.black falseColor.color1 = CIColor(color: background) ?? 
CIColor.white #endif guard let colored = falseColor.outputImage else { return nil } // Scale up without interpolation let scaled = colored.transformed(by: CGAffineTransform(scaleX: scale, y: scale)) let context = CIContext(options: [.useSoftwareRenderer: false]) guard let cg = context.createCGImage(scaled, from: scaled.extent) else { return nil } #if canImport(UIKit) return UIImage(cgImage: cg, scale: 1.0, orientation: .up) #elseif canImport(AppKit) return NSImage(cgImage: cg, size: NSSize(width: cg.width, height: cg.height)) #endif } ``` Keep the foreground dark and the background light for reliable scanning. ## Insert the QR as an Image Fill Create a `graphic` block, assign it a `rect` shape, and fill it with your generated QR image. ```swift highlight-qr-insert @MainActor func insertQRCode( engine: Engine, page: DesignBlockID, urlString: String, position: CGPoint = .init(x: 200, y: 200), size: CGFloat = 160, ) async throws -> DesignBlockID { guard let qr = makeQRCode(from: urlString, correction: "M", scale: 10, foreground: .black, background: .white) else { throw NSError(domain: "QR", code: 1, userInfo: [NSLocalizedDescriptionKey: "Failed to generate QR image"]) } // Get PNG data from the image (platform-specific) #if canImport(UIKit) guard let png = qr.pngData() else { throw NSError(domain: "QR", code: 2, userInfo: [NSLocalizedDescriptionKey: "Failed to encode QR as PNG"]) } #elseif canImport(AppKit) guard let tiffRepresentation = qr.tiffRepresentation, let bitmap = NSBitmapImageRep(data: tiffRepresentation), let png = bitmap.representation(using: .png, properties: [:]) else { throw NSError(domain: "QR", code: 2, userInfo: [NSLocalizedDescriptionKey: "Failed to encode QR as PNG"]) } #endif let fileURL = FileManager.default.temporaryDirectory .appendingPathComponent(UUID().uuidString) .appendingPathExtension("png") try png.write(to: fileURL) // Create a visible graphic block with a rect shape let graphic = try engine.block.create(.graphic) let rectShape = try 
engine.block.createShape(.rect) try engine.block.setShape(graphic, shape: rectShape) // Create an image fill and point it to the QR file URL let imageFill = try engine.block.createFill(.image) try engine.block.setString(imageFill, property: "fill/image/imageFileURI", value: fileURL.absoluteString) try engine.block.setFill(graphic, fill: imageFill) // Size & position (keep square) try engine.block.setWidth(graphic, value: Float(size)) try engine.block.setHeight(graphic, value: Float(size)) try engine.block.setPositionX(graphic, value: Float(position.x)) try engine.block.setPositionY(graphic, value: Float(position.y)) // Optional metadata for future updates try? engine.block.setMetadata(graphic, key: "qr/url", value: urlString) // Add to page try engine.block.appendChild(to: page, child: graphic) return graphic } ``` The preceding code creates a QR code and then saves it to a temporary directory to generate a file URL the block can use. ## Add Optional Metadata Store the URL alongside the block for quick updates later. Metadata `key` values are anything you want. The `key` and the `value` must be `String` types. ```swift highlight-qr-metadata // Optional metadata for future updates try? engine.block.setMetadata(graphic, key: "qr/url", value: urlString) ``` ## Update an Existing QR Code If data changes, just regenerate the QR image and update the fill URI. ```swift highlight-qr-update /// Update an existing QR code block with a new URL. /// - Parameters: /// - engine: The CE.SDK engine instance. /// - qrBlock: The existing QR code block to update. /// - newURL: The new URL to encode. 
@MainActor func updateQRCode(engine: Engine, qrBlock: DesignBlockID, newURL: String) throws { guard let qr = makeQRCode(from: newURL) else { return } // Get PNG data from the image (platform-specific) #if canImport(UIKit) guard let png = qr.pngData() else { return } #elseif canImport(AppKit) guard let tiffRepresentation = qr.tiffRepresentation, let bitmap = NSBitmapImageRep(data: tiffRepresentation), let png = bitmap.representation(using: .png, properties: [:]) else { return } #endif let fileURL = FileManager.default.temporaryDirectory .appendingPathComponent(UUID().uuidString) .appendingPathExtension("png") try png.write(to: fileURL) let fill = try engine.block.getFill(qrBlock) try engine.block.setString(fill, property: "fill/image/imageFileURI", value: fileURL.absoluteString) try? engine.block.setMetadata(qrBlock, key: "qr/url", value: newURL) } ``` To generate many QR codes, for instance during a batch run, loop through your data and call `insertQRCode` for each. ## Troubleshooting | Symptom | Cause | Solution | |----------|--------|-----------| | QR looks blurry | Image scaled too small | Increase the Core Image `scale` and block size. | | QR won't scan | Low contrast or invalid URL | Use dark-on-light colors and percent-encode URLs. | | QR not visible | Shape missing from block | Call `setShape` before applying the fill. | | App crash writing file | Invalid temp URL | Always use `FileManager.default.temporaryDirectory`. | ## Next Steps Now that you can generate QR codes, here are some related guides to help you learn more. Explore a complete code sample on [GitHub](https://github.com/imgly/cesdk-swift-examples/tree/v$UBQ_VERSION$/engine-guides-shapes-qrcode). - [Insert Shapes or Stickers](https://img.ly/docs/cesdk/ios/insert-media/shapes-or-stickers-20ac68/) — Learn how fills and shapes interact. - [Batch Processing](https://img.ly/docs/cesdk/ios/automation/batch-processing-ab2d18/) — Automate multiple QR insertions. 
- [Export to PDF](https://img.ly/docs/cesdk/ios/export-save-publish/export/to-pdf-95e04b/) — Prepare print-ready designs. - [Use Templates: Overview](https://img.ly/docs/cesdk/ios/create-templates/overview-4ebe30/) — Add a placeholder for QR blocks in templates. --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Text" description: "Add, style, and customize text layers in your design using CE.SDK’s flexible text editing tools." platform: ios url: "https://img.ly/docs/cesdk/ios/text-8a993a/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Create and Edit Text](https://img.ly/docs/cesdk/ios/text-8a993a/) --- --- ## Related Pages - [Overview](https://img.ly/docs/cesdk/ios/text/overview-0bd620/) - Add, style, and customize text layers in your design using CE.SDK’s flexible text editing tools. - [Add Text](https://img.ly/docs/cesdk/ios/text/add-4f5011/) - Insert text blocks into your CE.SDK scene. - [Edit Text](https://img.ly/docs/cesdk/ios/text/edit-c5106b/) - Edit text content directly on the canvas or through the properties panel. - [Text Styling](https://img.ly/docs/cesdk/ios/text/styling-269c48/) - Apply fonts, colors, alignment, and other styling options to customize text appearance. 
- [Text Designs](https://img.ly/docs/cesdk/ios/text/text-designs-a1b2c3/) - Create and customize text component libraries using predefined text designs that appear in your asset library. - [Emojis](https://img.ly/docs/cesdk/ios/text/emojis-510651/) - Insert and style emojis alongside text for expressive, modern typographic designs. --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Add Text" description: "Insert text blocks into your CE.SDK scene." platform: ios url: "https://img.ly/docs/cesdk/ios/text/add-4f5011/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Create and Edit Text](https://img.ly/docs/cesdk/ios/text-8a993a/) > [Add Text](https://img.ly/docs/cesdk/ios/text/add-4f5011/) --- Text is often the first dynamic element you introduce into a design. Whether you’re inserting personalized names, generating labels, or building your own editor UI, CE.SDK lets you create and configure text blocks from the UI or programmatically with just a few lines of Swift. This guide walks through creating a text block, setting its initial content, choosing a typeface and size, and placing it on the canvas. You’ll also see how this ties into styling and templates. ## What You’ll Learn You’ll be able to: - Add text using one of the prebuilt editors. - Create a text block programmatically. - Set a text block’s content with `replaceText()`. 
- Adjust width and wrapping behavior.
- Choose a typeface and font size.
- Position the text block on a page.
- Understand where this fits in broader text workflows.

## When You’ll Use It

Programmatic text insertion is ideal when:

- Text comes from user input or API data.
- You’re building custom UI instead of the prebuilt editor.
- You want consistent layout across many scenes.
- You’re generating [compositions](https://img.ly/docs/cesdk/ios/create-composition-db709c/) automatically.
- You’re filling placeholders in templates.

## Adding Text with the Prebuilt Editors (iOS only)

When you’re using the prebuilt editors on iOS, you can add text instantly using the **Text** button on the toolbar.

![Toolbar highlighting Text button](assets/ios-toolbar-add-text.png)

After tapping the button, choose one of the predefined plain-text or formatted-text options.

![Drawer with plain text and formatted text options.](assets/ios-toolbar-text-choices-163.png)

See the [guide on font combinations](https://img.ly/docs/cesdk/ios/text/text-designs-a1b2c3/) to learn about creating and using predefined text art boxes in your compositions.

For **existing text**, select the text box in the scene to make further changes using controls in the inspector.

![Inspector for text boxes](assets/ios-inspector-text-options-163.png)

Use:

- The **Edit** button to change the text string.
- **Fill & Stroke** and **Background** to configure color.
- The **Format** button to change:
  - font family
  - weight
  - size
  - alignment

![Format options drawer](assets/ios-inspector-text-format-163.png)

## Adding a Text Block to the Canvas

CE.SDK represents text as a block. You can:

1. Create a `.text` block.
2. Update its contents using the `replaceText` method.

`replaceText` works both for setting the initial text and for subsequent updates.

```swift
// 1. Create the text block
let textID = try engine.block.create(.text)

// 2. Set its content
try engine.block.replaceText(textID, text: "Hello from CE.SDK")
```

Use `replaceText` when:

- The text block displays literal text.
- You are updating labels, titles, captions, or any non-variable content.
- The block isn’t bound to a template variable.

When your text block contains template variables (for example, "\{\{user-name}}"), you generally don’t call `replaceText`. Instead, you [update the matching variable](https://img.ly/docs/cesdk/ios/create-templates/add-dynamic-content/text-variables-7ecb50/) value, and CE.SDK automatically refreshes all text blocks that reference it.

At this point, you can append the block to the scene to display the text. You can set configuration options before or after displaying the text block.

## Choose a Font Size

Set a clear, readable starting size for the text:

```swift
try engine.block.setTextFontSize(textID, fontSize: 36)
```

The typeface (font family) that the block uses comes from the project’s font configuration. Refer to the [Styling](https://img.ly/docs/cesdk/ios/text/styling-269c48/) guide for changing fonts, styling, colors, and more.

## Typeface vs Font in CE.SDK

CE.SDK distinguishes between typefaces and fonts, and each serves a different role:

**Typeface**:

- A family of letterforms (such as System Sans, Inter, or Roboto).
- Set with `setTypeface`.
- Formatting (bold, italic) is preserved where possible.

**Font**:

- A specific font file such as `Inter-Regular.otf` or `Brand-BoldItalic.ttf`.
- Set with `setFont`.
- This overrides the family entirely and resets formatting.

Rule of thumb:

- Use **typefaces** for general text styling.
- Use **fonts** when you need exact control over a specific font file.

> **Note:** CE.SDK exposes both typed APIs (`setFont`, `setTypeface`, `setTextFontSize`) and raw block properties ("text/fontFileUri", "text/fontSize"). When a helper function exists, prefer it over editing the property directly.
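The typeface-vs-font distinction can be sketched in a few lines. This is an illustrative example, not part of the official API surface: the function name `applyBoldBranding` and the "Open Sans" query are assumptions, while `findAssets`, `setTypeface`, and `setFont` are the Engine APIs covered in the Edit Text guide, and `"ly.img.typeface"` is the built-in typeface asset source.

```swift
import IMGLYEngine

/// Sketch: switch a text block's family via a typeface, then pin one
/// exact font file. Assumes an initialized `engine` and an existing
/// text block `textID`.
@MainActor
func applyBoldBranding(engine: Engine, textID: DesignBlockID) async throws {
  // Look up a typeface from the built-in typeface asset source.
  let results = try await engine.asset.findAssets(
    sourceID: "ly.img.typeface",
    query: .init(query: "Open Sans", page: 0, perPage: 10)
  )
  guard let typeface = results.assets.first?.payload?.typeface else { return }

  // Typeface: changes the family, preserving bold/italic where possible.
  try engine.block.setTypeface(textID, typeface: typeface)

  // Font: pins one exact font file and resets existing formatting.
  if let bold = typeface.fonts.first(where: { $0.subFamily == "Bold" }) {
    try engine.block.setFont(textID, fontFileURL: bold.uri, typeface: typeface)
  }
}
```

Reaching for `setTypeface` first keeps per-range formatting intact; fall back to `setFont` only when a design requires one specific file.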
## Control the Width of the Text Block

The width of a text box defines how the block wraps in the scene. For a text box, you can set:

- a fixed width (`absolute`)
- a percentage of the parent width (`percent`)
- let the engine decide (`auto`)

This code sets the width to 280 design units:

```swift
try engine.block.setWidthMode(textID, mode: .absolute)
try engine.block.setWidth(textID, value: 280)
```

This sets the width of the box to 100% of its parent’s width:

```swift
try engine.block.setWidthMode(textID, mode: .percent)
try engine.block.setWidth(textID, value: 1.0) // 100%
```

## Place the Text Block

New blocks start at a default location. You can move the text block along the X axis and Y axis. Set the position in the scene using either:

- `absolute` mode
- `percent` mode

Set the mode on both axes using the functions:

- `setPositionXMode` for the position along the x axis.
- `setPositionYMode` for the position along the y axis.

```swift
try engine.block.setPositionXMode(textID, mode: .absolute)
try engine.block.setPositionYMode(textID, mode: .absolute)
try engine.block.setPositionX(textID, value: 40)
try engine.block.setPositionY(textID, value: 120)
```

## Troubleshooting

| Symptom | Possible Cause | Fix |
| --- | --- | --- |
| Text not visible | Not attached to page | Use `appendChild` |
| Text off-screen | Incorrect position | Reset X/Y |
| Text doesn’t wrap | Width not set | Set width + width mode |
| Font looks too small/large | Design units vs points | Adjust font size |
| Literal text not updating | Wrong API | Use `replaceText` |
| Variable text not updating | Modified content instead of variable | Update the variable |

## Next Steps

Explore more text features:

- [Text Variables](https://img.ly/docs/cesdk/ios/create-templates/add-dynamic-content/text-variables-7ecb50/) — Bind text to dynamic data.
- [Auto-Size](https://img.ly/docs/cesdk/ios/automation/auto-resize-4c2d58/) — Let text blocks expand or shrink with content.
- [Edit Text](https://img.ly/docs/cesdk/ios/text/edit-c5106b/) — Interactively edit text.

---

## More Resources

- **[iOS Documentation
Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Edit Text" description: "Edit text content directly on the canvas or through the properties panel." platform: ios url: "https://img.ly/docs/cesdk/ios/text/edit-c5106b/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Create and Edit Text](https://img.ly/docs/cesdk/ios/text-8a993a/) > [Edit Text](https://img.ly/docs/cesdk/ios/text/edit-c5106b/) --- ```swift reference-only let text = try engine.block.create(.text) try engine.block.replaceText(text, text: "Hello World") try engine.block.removeText(text, from: "Hello World".range(of: "Hello ")!) 
try engine.block.setTextColor(
  text,
  color: .rgba(r: 0, g: 0, b: 0),
  in: "World".index(after: "World".startIndex) ..< "World".index(before: "World".endIndex)
)
let colorsInRange = try engine.block.getTextColors(text)

try engine.block.setTextFontWeight(
  text,
  fontWeight: .bold,
  in: "World".index(after: "World".startIndex) ..< "World".index(before: "World".endIndex)
)
let fontWeights = try engine.block.getTextFontWeights(text)

try engine.block.setTextFontSize(
  text,
  fontSize: 14,
  in: "World".index(after: "World".startIndex) ..< "World".index(before: "World".endIndex)
)
let fontSizes = try engine.block.getTextFontSizes(text)

try engine.block.setTextFontStyle(
  text,
  fontStyle: .italic,
  in: "World".index(after: "World".startIndex) ..< "World".index(before: "World".endIndex)
)
let fontStyles = try engine.block.getTextFontStyles(text)

try engine.block.setTextCase(text, textCase: .titlecase)
let textCases = try engine.block.getTextCases(text)

let canToggleBold = try engine.block.canToggleBoldFont(text)
let canToggleItalic = try engine.block.canToggleItalicFont(text)
try engine.block.toggleBoldFont(text)
try engine.block.toggleItalicFont(text)

let typefaceAssetResults = try await engine.asset.findAssets(
  sourceID: "ly.img.typeface",
  query: .init(
    query: "Open Sans",
    page: 0,
    perPage: 100
  )
)
let typeface = typefaceAssetResults.assets[0].payload?.typeface
let font = typeface!.fonts.first { font in
  font.subFamily == "Bold"
}
try engine.block.setFont(text, fontFileURL: font!.uri, typeface: typeface!)
try engine.block.setTypeface(
  text,
  typeface: typeface!,
  in: "World".index(after: "World".startIndex) ..< "World".index(before: "World".endIndex)
)
try engine.block.setTypeface(text, typeface: typeface!)
let defaultTypeface = try engine.block.getTypeface(text)
let typefaces = try engine.block.getTypefaces(text)

let selectedRange = try engine.block.getTextCursorRange()
let textString = try engine.block.getString(text, property: "text/text")
try engine.block.setTextCursorRange(textString.startIndex ..< textString.endIndex)
```

```swift
public func replaceText(_ id: DesignBlockID, text: String, in subrange: Range<String.Index>? = nil) throws
```

Replaces the given text in the selected range of the text block.
Required scope: "text/edit"

- `id`: The text block into which to insert the given text.
- `text`: The text which should replace the selected range in the block.
- `subrange`: The subrange of the string to replace. The bounds of the range must be valid indices of the string.
- Note: Passing `nil` to subrange is equivalent to the entire existing string.

```swift
public func removeText(_ id: DesignBlockID, from subrange: Range<String.Index>? = nil) throws
```

Removes the selected range of text from the given text block. Required scope: "text/edit"

- `id`: The text block from which the selected text should be removed.
- `subrange`: The subrange of the string to remove. The bounds of the range must be valid indices of the string.
- Note: Passing `nil` to subrange is equivalent to the entire existing string.

```swift
public func setTextColor(_ id: DesignBlockID, color: Color, in subrange: Range<String.Index>? = nil) throws
```

Changes the color of the text in the selected range to the given color. Required scope: "fill/change"

- `id`: The text block whose color should be changed.
- `color`: The new color of the selected text range.
- `subrange`: The subrange of the string whose colors should be set. The bounds of the range must be valid indices of the string.
- Note: Passing `nil` to subrange is equivalent to the entire existing string.

```swift
public func getTextColors(_ id: DesignBlockID, in subrange: Range<String.Index>? = nil) throws -> [Color]
```

Returns the ordered unique list of colors of the text in the selected range.

- `id`: The text block whose colors should be returned.
- `subrange`: The subrange of the string whose colors should be returned. The bounds of the range must be valid indices of the string.
- Note: Passing `nil` to subrange is equivalent to the entire existing string.
- Returns: The text colors from the selected subrange.

```swift
public func setTextFontWeight(_ id: DesignBlockID, fontWeight: FontWeight, in subrange: Range<String.Index>? = nil) throws
```

Sets the given font weight for the selected range of text. Required scope: "text/character"

- `id`: The text block whose text weight should be changed.
- `fontWeight`: The new weight of the selected text range.
- `subrange`: The subrange of the string whose weight should be set. The bounds of the range must be valid indices of the string.
- Note: Passing `nil` to subrange is equivalent to the entire existing string.

```swift
public func getTextFontWeights(_ id: DesignBlockID, in subrange: Range<String.Index>? = nil) throws -> [FontWeight]
```

Returns the ordered unique list of font weights of the text in the selected range.

- `id`: The text block whose font weights should be returned.
- `subrange`: The subrange of the string whose font weights should be returned. The bounds of the range must be valid indices of the string.
- Note: Passing `nil` to subrange is equivalent to the entire existing string.
- Returns: The font weights from the selected subrange.

```swift
public func setTextFontSize(_ id: DesignBlockID, fontSize: Float, in subrange: Range<String.Index>? = nil) throws
```

Sets the given font size for the selected range of text. If the font size is applied to the entire text block, its font size property will be updated. Required scope: "text/character"

- `id`: The text block whose font size should be changed.
- `fontSize`: The new font size of the selected text range.
- `subrange`: The subrange of the string whose size should be set. The bounds of the range must be valid indices of the string.
- Note: Passing `nil` to subrange is equivalent to the entire existing string.

```swift
public func getTextFontSizes(_ id: DesignBlockID, in subrange: Range<String.Index>? = nil) throws -> [Float]
```

Returns the ordered unique list of font sizes of the text in the selected range.

- `id`: The text block whose font sizes should be returned.
- `subrange`: The subrange of the string whose font sizes should be returned. The bounds of the range must be valid indices of the string.
- Note: Passing `nil` to subrange is equivalent to the entire existing string.
- Returns: The font sizes from the selected subrange.

```swift
public func setTextFontStyle(_ id: DesignBlockID, fontStyle: FontStyle, in subrange: Range<String.Index>? = nil) throws
```

Sets the given font style for the selected range of text. Required scope: "text/character"

- `id`: The text block whose text style should be changed.
- `fontStyle`: The new style of the selected text range.
- `subrange`: The subrange of the string whose style should be set. The bounds of the range must be valid indices of the string.
- Note: Passing `nil` to subrange is equivalent to the entire existing string.

```swift
public func getTextFontStyles(_ id: DesignBlockID, in subrange: Range<String.Index>? = nil) throws -> [FontStyle]
```

Returns the ordered unique list of font styles of the text in the selected range.

- `id`: The text block whose font styles should be returned.
- `subrange`: The subrange of the string whose font styles should be returned. The bounds of the range must be valid indices of the string.
- Note: Passing `nil` to subrange is equivalent to the entire existing string.
- Returns: The font styles from the selected subrange.

```swift
public func setTextCase(_ id: DesignBlockID, textCase: TextCase, in subrange: Range<String.Index>? = nil) throws
```

Sets the given text case for the selected range of text. Required scope: "text/character"

- `id`: The text block whose text case should be changed.
- `textCase`: The new text case value.
- `subrange`: The subrange of the string whose text cases should be set. The bounds of the range must be valid indices of the string.
- Note: Passing `nil` to subrange is equivalent to the entire existing string.

```swift
public func getTextCases(_ id: DesignBlockID, in subrange: Range<String.Index>? = nil) throws -> [TextCase]
```

Returns the ordered list of text cases of the text in the selected range.

- `id`: The text block whose text cases should be returned.
- `subrange`: The subrange of the string whose text cases should be returned. The bounds of the range must be valid indices of the string.
- Note: Passing `nil` to subrange is equivalent to the entire existing string.
- Returns: The text cases from the selected subrange.

```swift
public func canToggleBoldFont(_ id: DesignBlockID, in subrange: Range<String.Index>? = nil) throws -> Bool
```

Returns whether the font weight of the given block can be toggled between bold and normal.

- `id`: The text block whose font weight should be toggled.
- `subrange`: The subrange of the string whose font weight should be toggled. The bounds of the range must be valid indices of the string.
- Returns: `true`, if the font weight of the given block can be toggled between bold and normal, `false` otherwise.

```swift
public func canToggleItalicFont(_ id: DesignBlockID, in subrange: Range<String.Index>? = nil) throws -> Bool
```

Returns whether the font style of the given block can be toggled between italic and normal.

- `id`: The text block whose font style should be toggled.
- `subrange`: The subrange of the string whose font style should be toggled. The bounds of the range must be valid indices of the string.
- Returns: `true`, if the font style of the given block can be toggled between italic and normal, `false` otherwise.

```swift
public func toggleBoldFont(_ id: DesignBlockID, in subrange: Range<String.Index>? = nil) throws
```

Toggles the font weight of the given block between bold and normal. Required scope: "text/character"

- `id`: The text block whose font weight should be toggled.
- `subrange`: The subrange of the string whose font weight should be toggled. The bounds of the range must be valid indices of the string.

```swift
public func toggleItalicFont(_ id: DesignBlockID, in subrange: Range<String.Index>? = nil) throws
```

Toggles the font style of the given block between italic and normal. Required scope: "text/character"

- `id`: The text block whose font style should be toggled.
- `subrange`: The subrange of the string whose font style should be toggled. The bounds of the range must be valid indices of the string.

```swift
public func setFont(_ id: DesignBlockID, fontFileURL: URL, typeface: Typeface) throws
```

Sets the given font and typeface for the text block. Existing formatting is reset. Required scope: "text/character"

- `id`: The text block whose font should be changed.
- `fontFileURL`: The URL of the new font file.
- `typeface`: The typeface of the new font.

```swift
public func setTypeface(_ id: DesignBlockID, typeface: Typeface, in subrange: Range<String.Index>? = nil) throws
```

Sets the given typeface for the text block. The current formatting, e.g., bold or italic, is retained as far as possible. Some formatting might change if the new typeface does not support it, e.g., thin might change to light, bold to normal, and/or italic to non-italic. If the typeface is applied to the entire text block, its typeface property will be updated. If a run does not support the new typeface, it will fall back to the default typeface from the typeface property. Required scope: "text/character"

- `id`: The text block whose typeface should be changed.
- `typeface`: The new typeface.
- `subrange`: The subrange of the string whose typeface should be set. The bounds of the range must be valid indices of the string.

```swift
public func getTypeface(_ id: DesignBlockID) throws -> Typeface
```

Returns the typeface property of the text block. Does not return the typefaces of the text runs.

- `id`: The text block whose typeface should be returned.
- Returns: The typeface of the text block.

```swift
public func getTypefaces(_ id: DesignBlockID, in subrange: Range<String.Index>? = nil) throws -> [Typeface]
```

Returns the typefaces of the text block.

- `id`: The text block whose typefaces should be returned.
- `subrange`: The subrange of the string whose typefaces should be returned. The bounds of the range must be valid indices of the string.
- Returns: The typefaces of the text block.

```swift
public func getTextCursorRange() throws -> Range<String.Index>?
```

Returns the indices of the selected grapheme range of the text block that is currently being edited. If both the start and end index of the returned range have the same value, then the text cursor is positioned at that index.

- Returns: The selected grapheme range or `nil` if no text block is currently being edited.

```swift
public func setTextCursorRange(_ range: Range<String.Index>) throws
```

Sets the text cursor range (selection) within the text block that is currently being edited. Required scope: "text/edit"

- `range`: The grapheme range to set as the selection. If the range has equal bounds, the cursor is positioned at that index. To select all text, use `text.startIndex ..< text.endIndex`.

```swift
public func getTextVisibleLineCount(_ id: DesignBlockID) throws -> Int
```

Returns the number of visible lines in the given text block.

- `id`: The text block whose line count should be returned.
- Returns: The number of lines in the text block.

```swift
public func getTextLineBoundingBoxRect(_ id: DesignBlockID, index: Int) throws -> CGRect
```

Returns the bounds of the visible area of the given line of the text block. The values are in the scene's global coordinate space (which has its origin at the top left).

- `id`: The text block whose line bounding box should be returned.
- `index`: The index of the line whose bounding box should be returned.
- Returns: The bounding box of the line.

## Full Code

Here's the full code:

```swift
let text = try engine.block.create(.text)
try engine.block.replaceText(text, text: "Hello World")
try engine.block.removeText(text, from: "Hello World".range(of: "Hello ")!)

try engine.block.setTextColor(
  text,
  color: .rgba(r: 0, g: 0, b: 0),
  in: "World".index(after: "World".startIndex) ..< "World".index(before: "World".endIndex)
)
let colorsInRange = try engine.block.getTextColors(text)

try engine.block.setTextFontWeight(
  text,
  fontWeight: .bold,
  in: "World".index(after: "World".startIndex) ..< "World".index(before: "World".endIndex)
)
let fontWeights = try engine.block.getTextFontWeights(text)

try engine.block.setTextFontSize(
  text,
  fontSize: 14,
  in: "World".index(after: "World".startIndex) ..< "World".index(before: "World".endIndex)
)
let fontSizes = try engine.block.getTextFontSizes(text)

try engine.block.setTextFontStyle(
  text,
  fontStyle: .italic,
  in: "World".index(after: "World".startIndex) ..< "World".index(before: "World".endIndex)
)
let fontStyles = try engine.block.getTextFontStyles(text)

try engine.block.setTextCase(text, textCase: .titlecase)
let textCases = try engine.block.getTextCases(text)

let canToggleBold = try engine.block.canToggleBoldFont(text)
let canToggleItalic = try engine.block.canToggleItalicFont(text)
try engine.block.toggleBoldFont(text)
try engine.block.toggleItalicFont(text)

let typefaceAssetResults = try await engine.asset.findAssets(
  sourceID: "ly.img.typeface",
  query: .init(
    query: "Open Sans",
    page: 0,
    perPage: 100
  )
)
let typeface = typefaceAssetResults.assets[0].payload?.typeface
let font = typeface!.fonts.first { font in
  font.subFamily == "Bold"
}
try engine.block.setFont(text, fontFileURL: font!.uri, typeface: typeface!)
try engine.block.setTypeface(
  text,
  typeface: typeface!,
  in: "World".index(after: "World".startIndex) ..< "World".index(before: "World".endIndex)
)
try engine.block.setTypeface(text, typeface: typeface!)
let defaultTypeface = try engine.block.getTypeface(text)
let typefaces = try engine.block.getTypefaces(text)

let selectedRange = try engine.block.getTextCursorRange()
let textString = try engine.block.getString(text, property: "text/text")
try engine.block.setTextCursorRange(textString.startIndex ..< textString.endIndex)
```
--- title: "Emojis" description: "Insert and style emojis alongside text for expressive, modern typographic designs." platform: ios url: "https://img.ly/docs/cesdk/ios/text/emojis-510651/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Create and Edit Text](https://img.ly/docs/cesdk/ios/text-8a993a/) > [Emojis](https://img.ly/docs/cesdk/ios/text/emojis-510651/) --- ```swift file=@cesdk_swift_examples/engine-guides-text-with-emojis/TextWithEmojis.swift reference-only import Foundation import IMGLYEngine @MainActor func textWithEmojis(engine: Engine) async throws { let uri = try engine.editor.getSettingString("ubq://defaultEmojiFontFileUri") // From a bundle try engine.editor.setSettingString( "ubq://defaultEmojiFontFileUri", value: "bundle://ly.img.cesdk/fonts/NotoColorEmoji.ttf", ) // From a URL try engine.editor.setSettingString( "ubq://defaultEmojiFontFileUri", value: "https://cdn.img.ly/assets/v4/emoji/NotoColorEmoji.ttf", ) let scene = try engine.scene.create() let page = try engine.block.create(.page) try engine.block.setWidth(page, value: 800) try engine.block.setHeight(page, value: 600) try engine.block.appendChild(to: scene, child: page) try await engine.scene.zoom(to: page, paddingLeft: 40, paddingTop: 40, paddingRight: 40, paddingBottom: 40) let text = try engine.block.create(.text) try engine.block.setString(text, property: "text/text", value: "Text with an emoji 🧐") try engine.block.setWidth(text, value: 50) try engine.block.setHeight(text, value: 10) try engine.block.appendChild(to: page, child: text) } ``` Text blocks in CE.SDK support the use of emojis. A default emoji font is used to render these independently from the target platform. 
This guide shows how to change the default font and how to use emojis in text blocks.

## Change the Default Emoji Font

The default font URI can be changed when another emoji font should be used, or when the font should be served from another website, a content delivery network (CDN), or a file path. By default, CE.SDK uses the [NotoColorEmoji](https://github.com/googlefonts/noto-emoji) font loaded from our [CDN](https://cdn.img.ly/assets/v4/emoji/NotoColorEmoji.ttf). This font file supports a wide variety of emojis and is licensed under the [Open Font License](https://cdn.img.ly/assets/v4/emoji/LICENSE.txt). The file is relatively small at 9.9 MB, but stores the emojis as PNG images.

For higher-quality emojis, the [NotoColorEmoji](https://fonts.google.com/noto/specimen/Noto+Color+Emoji) font from Google Fonts can be used as an alternative. It also supports a wide variety of emojis and is licensed under the [SIL Open Font License, Version 1.1](https://fonts.google.com/noto/specimen/Noto+Color+Emoji/license). The file is significantly larger at 24.3 MB, but stores the emojis as vector graphics.

To change the emoji font URI, call the `func setSettingString(_ keypath: String, value: String) throws` [Editor API](https://img.ly/docs/cesdk/ios/settings-970c98/) with "ubq://defaultEmojiFontFileUri" as the keypath and the new URI as the value.

```swift highlight-change-default-emoji-font
let uri = try engine.editor.getSettingString("ubq://defaultEmojiFontFileUri")

// From a bundle
try engine.editor.setSettingString(
  "ubq://defaultEmojiFontFileUri",
  value: "bundle://ly.img.cesdk/fonts/NotoColorEmoji.ttf"
)

// From a URL
try engine.editor.setSettingString(
  "ubq://defaultEmojiFontFileUri",
  value: "https://cdn.img.ly/assets/v4/emoji/NotoColorEmoji.ttf"
)
```

## Add a Text Block with an Emoji

To add a text block with an emoji, create a text block and set the emoji as part of its text content.
```swift highlight-add-text-with-emoji let text = try engine.block.create(.text) try engine.block.setString(text, property: "text/text", value: "Text with an emoji 🧐") try engine.block.setWidth(text, value: 50) try engine.block.setHeight(text, value: 10) try engine.block.appendChild(to: page, child: text) ``` ## Full Code Here's the full code: ```swift import Foundation import IMGLYEngine @MainActor func textWithEmojis(engine: Engine) async throws { let uri = try engine.editor.getSettingString("ubq://defaultEmojiFontFileUri") // From a bundle try engine.editor.setSettingString( "ubq://defaultEmojiFontFileUri", value: "bundle://ly.img.cesdk/fonts/NotoColorEmoji.ttf" ) // From a URL try engine.editor.setSettingString( "ubq://defaultEmojiFontFileUri", value: "https://cdn.img.ly/assets/v4/emoji/NotoColorEmoji.ttf" ) let scene = try engine.scene.create() let page = try engine.block.create(.page) try engine.block.setWidth(page, value: 800) try engine.block.setHeight(page, value: 600) try engine.block.appendChild(to: scene, child: page) try await engine.scene.zoom(to: page, paddingLeft: 40, paddingTop: 40, paddingRight: 40, paddingBottom: 40) let text = try engine.block.create(.text) try engine.block.setString(text, property: "text/text", value: "Text with an emoji 🧐") try engine.block.setWidth(text, value: 50) try engine.block.setHeight(text, value: 10) try engine.block.appendChild(to: page, child: text) } ``` --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Overview" description: "Add, style, and customize text layers in your design using CE.SDK’s flexible text editing tools." 
platform: ios url: "https://img.ly/docs/cesdk/ios/text/overview-0bd620/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Create and Edit Text](https://img.ly/docs/cesdk/ios/text-8a993a/) > [Overview](https://img.ly/docs/cesdk/ios/text/overview-0bd620/) --- In CreativeEditor SDK (CE.SDK), a *text element* is an editable, stylable block that you can add to your design. Whether you're creating marketing graphics, videos, social media posts, or multilingual layouts, text plays a vital role in conveying information and enhancing your visuals. You can fully manipulate text elements using both the user interface and programmatic APIs, giving you maximum flexibility to control how text behaves and appears. Additionally, text can be animated to bring motion to your designs. [Explore Demos](https://img.ly/showcases/cesdk?tags=ios) [Get Started](https://img.ly/docs/cesdk/ios/get-started/overview-e18f40/) --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Text Styling" description: "Apply fonts, colors, alignment, and other styling options to customize text appearance." platform: ios url: "https://img.ly/docs/cesdk/ios/text/styling-269c48/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). 
For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Create and Edit Text](https://img.ly/docs/cesdk/ios/text-8a993a/) > [Text Styling](https://img.ly/docs/cesdk/ios/text/styling-269c48/) --- ```swift file=@cesdk_swift_examples/engine-guides-text-properties/TextProperties.swift reference-only import Foundation import IMGLYEngine @MainActor func textProperties(engine: Engine) async throws { let scene = try engine.scene.create() let text = try engine.block.create(.text) try engine.block.appendChild(to: scene, child: text) try engine.block.setWidthMode(text, mode: .auto) try engine.block.setHeightMode(text, mode: .auto) try engine.block.replaceText(text, text: "Hello World") // Add a "!" at the end of the text try engine.block.replaceText(text, text: "!", in: "Hello World".endIndex ..< "Hello World".endIndex) // Replace "World" with "Alex" try engine.block.replaceText(text, text: "Alex", in: "Hello World".range(of: "World")!) try await engine.scene.zoom(to: text, paddingLeft: 100, paddingTop: 100, paddingRight: 100, paddingBottom: 100) // Remove the "Hello " try engine.block.removeText(text, from: "Hello Alex".range(of: "Hello ")!) try engine.block.setTextColor(text, color: .rgba(r: 1, g: 1, b: 0)) try engine.block.setTextColor(text, color: .rgba(r: 0, g: 0, b: 0), in: "Alex".range(of: "lex")!) let allColors = try engine.block.getTextColors(text) let colorsInRange = try engine.block.getTextColors(text, in: "Alex".range(of: "lex")!) 
try engine.block.setBool(text, property: "backgroundColor/enabled", value: true) try engine.block.getColor(text, property: "backgroundColor/color") as Color try engine.block.setColor(text, property: "backgroundColor/color", color: .rgba(r: 0.0, g: 0.0, b: 1.0, a: 1.0)) try engine.block.setFloat(text, property: "backgroundColor/paddingLeft", value: 1) try engine.block.setFloat(text, property: "backgroundColor/paddingTop", value: 2) try engine.block.setFloat(text, property: "backgroundColor/paddingRight", value: 3) try engine.block.setFloat(text, property: "backgroundColor/paddingBottom", value: 4) try engine.block.setFloat(text, property: "backgroundColor/cornerRadius", value: 4) let animation = try engine.block.createAnimation(AnimationType.slide) try engine.block.setEnum(animation, property: "textAnimationWritingStyle", value: "Block") try engine.block.setInAnimation(text, animation: animation) try engine.block.setOutAnimation(text, animation: animation) try engine.block.setTextCase(text, textCase: .titlecase) let textCases = try engine.block.getTextCases(text) let typeface = Typeface( name: "Roboto", fonts: [ Font( uri: URL(string: "https://cdn.img.ly/assets/v4/ly.img.typeface/fonts/Roboto/Roboto-Bold.ttf")!, subFamily: "Bold", weight: .bold, style: .normal, ), Font( uri: URL(string: "https://cdn.img.ly/assets/v4/ly.img.typeface/fonts/Roboto/Roboto-BoldItalic.ttf")!, subFamily: "Bold Italic", weight: .bold, style: .italic, ), Font( uri: URL(string: "https://cdn.img.ly/assets/v4/ly.img.typeface/fonts/Roboto/Roboto-Italic.ttf")!, subFamily: "Italic", weight: .normal, style: .italic, ), Font( uri: URL(string: "https://cdn.img.ly/assets/v4/ly.img.typeface/fonts/Roboto/Roboto-Regular.ttf")!, subFamily: "Regular", weight: .normal, style: .normal, ), ], ) try engine.block.setFont(text, fontFileURL: typeface.fonts[3].uri, typeface: typeface) try engine.block.setTypeface(text, typeface: typeface, in: "Alex".range(of: "lex")!) 
try engine.block.setTypeface(text, typeface: typeface) let currentDefaultTypeface = try engine.block.getTypeface(text) let currentTypefaces = try engine.block.getTypefaces(text) let currentTypefacesOfRange = try engine.block.getTypefaces(text, in: "Alex".range(of: "lex")!) if try engine.block.canToggleBoldFont(text) { try engine.block.toggleBoldFont(text) } if try engine.block.canToggleBoldFont(text, in: "Alex".range(of: "lex")!) { try engine.block.toggleBoldFont(text, in: "Alex".range(of: "lex")!) } if try engine.block.canToggleItalicFont(text) { try engine.block.toggleItalicFont(text) } if try engine.block.canToggleItalicFont(text, in: "Alex".range(of: "lex")!) { try engine.block.toggleItalicFont(text, in: "Alex".range(of: "lex")!) } try engine.block.setTextFontWeight(text, fontWeight: .bold) let fontWeights = try engine.block.getTextFontWeights(text) try engine.block.setTextFontStyle(text, fontStyle: .italic) let fontStyles = try engine.block.getTextFontStyles(text) } ``` This example shows how to read and modify a text block's contents via the CreativeEngine API. ## Editing the Text String You can edit the text string contents of a text block using the `func replaceText(_ id: DesignBlockID, text: String, in subrange: Range? = nil) throws` and `func removeText(_ id: DesignBlockID, from subrange: Range? = nil) throws` APIs. The range of text that should be edited is defined using the native Swift `Range` type. When passing `nil` to the `subrange` argument, the entire existing string is replaced. ```swift highlight-replaceText try engine.block.replaceText(text, text: "Hello World") ``` When specifying an empty range, the new text is inserted at its lower bound. ```swift highlight-replaceText-single-index // Add a "!" at the end of the text try engine.block.replaceText(text, text: "!", in: "Hello World".endIndex ..< "Hello World".endIndex) ``` To replace a specific text, `.range(of:)` can be used to find the range of the text to be replaced.
```swift highlight-replaceText-range // Replace "World" with "Alex" try engine.block.replaceText(text, text: "Alex", in: "Hello World".range(of: "World")!) ``` Similarly, the `removeText` API can be called to remove either a specific range or the entire text. ```swift highlight-removeText // Remove the "Hello " try engine.block.removeText(text, from: "Hello Alex".range(of: "Hello ")!) ``` ## Text Colors Text blocks in the CreativeEngine allow different ranges to have multiple colors. Use the `func setTextColor(_ id: DesignBlockID, color: Color, in subrange: Range? = nil) throws` API to change either the color of the entire text ```swift highlight-setTextColor try engine.block.setTextColor(text, color: .rgba(r: 1, g: 1, b: 0)) ``` or only that of a range. After these two calls, the text "Alex!" now starts with one yellow character, followed by three black characters and two more yellow ones. ```swift highlight-setTextColor-range try engine.block.setTextColor(text, color: .rgba(r: 0, g: 0, b: 0), in: "Alex".range(of: "lex")!) ``` The `func getTextColors(_ id: DesignBlockID, in subrange: Range? = nil) throws -> [Color]` API returns an ordered list of unique colors in the requested range. Here, `allColors` will be an array containing the colors yellow and black (in this order). ```swift highlight-getTextColors let allColors = try engine.block.getTextColors(text) ``` When only the colors in the specific range are requested, the result will be an array containing black and then yellow, since black appears first in the requested range. ```swift highlight-getTextColors-range let colorsInRange = try engine.block.getTextColors(text, in: "Alex".range(of: "lex")!) ``` ## Text Background You can create and edit the background of a text block by setting specific block properties. To add a colored background to a text block use the `func setBool(_ id: DesignBlockID, property: String, value: Bool)` API and enable the `backgroundColor/enabled` property. 
```swift highlight-backgroundColor-enabled try engine.block.setBool(text, property: "backgroundColor/enabled", value: true) ``` The color of the text background can be queried (with the `func getColor(_ id: DesignBlockID, property: String) throws -> Color` API) and changed (with the `func setColor(_ id: DesignBlockID, property: String, color: Color) throws` API). ```swift highlight-backgroundColor-get-set try engine.block.getColor(text, property: "backgroundColor/color") as Color ``` The padding of the rectangular background shape can be edited by using the `func setFloat(_ id: DesignBlockID, property: String, value: Float) throws` API and setting the target value for one of the padding properties: - `backgroundColor/paddingLeft` - `backgroundColor/paddingRight` - `backgroundColor/paddingTop` - `backgroundColor/paddingBottom` ```swift highlight-backgroundColor-padding try engine.block.setFloat(text, property: "backgroundColor/paddingLeft", value: 1) ``` Additionally, the rectangular shape of the background can be rounded by setting a corner radius with the same `func setFloat(_ id: DesignBlockID, property: String, value: Float) throws` API to adjust the value of the `backgroundColor/cornerRadius` property. ```swift highlight-backgroundColor-cornerRadius try engine.block.setFloat(text, property: "backgroundColor/cornerRadius", value: 4) ``` Text backgrounds inherit the animations assigned to their respective text block when the animation text writing style is set to `Block`. ```swift highlight-backgroundColor-animation let animation = try engine.block.createAnimation(AnimationType.slide) ``` ## Text Case You can apply text case modifications to ranges of text in order to display them in upper case, lower case, or title case. It is important to note that these modifiers do not change the `text` string value of the text block but are only applied when the block is rendered.
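As a quick illustration of this render-time behavior, the following sketch (assuming the `engine` and `text` block from the surrounding example; the function name is illustrative) reads the stored string back after applying a text case modifier:

```swift
import IMGLYEngine

// Sketch only: a text case modifier leaves the underlying "text/text"
// string untouched; the casing is applied during rendering.
@MainActor
func textCaseIsRenderTimeOnly(engine: Engine, text: DesignBlockID) throws {
  try engine.block.setTextCase(text, textCase: .uppercase)
  // Still returns the original, mixed-case string.
  let stored = try engine.block.getString(text, property: "text/text")
  print(stored)
}
```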
By default, the text case of all text within a text block is set to `.normal`, which does not modify the appearance of the text at all. The `func setTextCase(_ id: DesignBlockID, textCase: TextCase, in subrange: Range? = nil) throws` API sets the given text case for the selected range of text. Possible values for `TextCase` are: - `.normal`: The text string is rendered without modifications. - `.uppercase`: All characters are rendered in upper case. - `.lowercase`: All characters are rendered in lower case. - `.titlecase`: The first character of each word is rendered in upper case. ```swift highlight-setTextCase try engine.block.setTextCase(text, textCase: .titlecase) ``` The `func getTextCases(_ id: DesignBlockID, in subrange: Range? = nil) throws -> [TextCase]` API returns the ordered list of text cases of the text in the selected range. ```swift highlight-getTextCases let textCases = try engine.block.getTextCases(text) ``` ## Typefaces To change the font of a text block, call the `setFont(_ id: DesignBlockID, fontFileURL: URL, typeface: Typeface) throws` API and provide it with both the URL of the font file to be actively used and the complete typeface definition of the corresponding typeface. Existing formatting of the block is reset. A typeface definition consists of the unique typeface name (as it is defined within the font files) and a list of all font definitions that belong to this typeface. Each font definition must provide a `uri` which points to the font file and a `subFamily` string which is this font's effective name within its typeface. The subfamily value is typically also defined within the font file. For the sake of this example, we define a `Roboto` typeface with only four fonts: `Regular`, `Bold`, `Italic`, and `Bold Italic`, and we change the font of the text block to the Roboto Regular font.
```swift highlight-setFont let typeface = Typeface( name: "Roboto", fonts: [ Font( uri: URL(string: "https://cdn.img.ly/assets/v4/ly.img.typeface/fonts/Roboto/Roboto-Bold.ttf")!, subFamily: "Bold", weight: .bold, style: .normal, ), Font( uri: URL(string: "https://cdn.img.ly/assets/v4/ly.img.typeface/fonts/Roboto/Roboto-BoldItalic.ttf")!, subFamily: "Bold Italic", weight: .bold, style: .italic, ), Font( uri: URL(string: "https://cdn.img.ly/assets/v4/ly.img.typeface/fonts/Roboto/Roboto-Italic.ttf")!, subFamily: "Italic", weight: .normal, style: .italic, ), Font( uri: URL(string: "https://cdn.img.ly/assets/v4/ly.img.typeface/fonts/Roboto/Roboto-Regular.ttf")!, subFamily: "Regular", weight: .normal, style: .normal, ), ], ) try engine.block.setFont(text, fontFileURL: typeface.fonts[3].uri, typeface: typeface) ``` If the formatting, e.g., bold or italic, of the text should be kept, call the `func setTypeface(_ id: DesignBlockID, typeface: Typeface, in subrange: Range? = nil) throws` API instead and provide it with the complete typeface definition. The font within the typeface that matches the current formatting is chosen automatically, retaining the formatting as much as possible. If the new typeface does not support the current formatting, the formatting changes to a reasonably close one, e.g., thin might change to light, bold to normal, and/or italic to non-italic. If no reasonably close font can be found, a fallback font, e.g., `Regular`, from the typeface is used. ```swift highlight-setTypeface try engine.block.setTypeface(text, typeface: typeface, in: "Alex".range(of: "lex")!) try engine.block.setTypeface(text, typeface: typeface) ``` You can query the currently used typeface definition of a text block by calling the `getTypeface(_ id: DesignBlockID) throws -> Typeface` API.
It is important to note that new text blocks don't have any explicit typeface set until you call the `setFont` API. In this case, the `getTypeface` API will throw an error. ```swift highlight-getTypeface let currentDefaultTypeface = try engine.block.getTypeface(text) ``` ## Font Weights and Styles Text blocks can have multiple ranges with different weights and styles. In order to toggle the text of a text block between the normal and bold font weights, first call the `canToggleBoldFont(_ id: DesignBlockID, in subrange: Range? = nil) throws -> Bool` API to check whether such an edit is possible and if so, call the `toggleBoldFont(_ id: DesignBlockID, in subrange: Range? = nil) throws` API to change the weight. ```swift highlight-toggleBold if try engine.block.canToggleBoldFont(text) { try engine.block.toggleBoldFont(text) } if try engine.block.canToggleBoldFont(text, in: "Alex".range(of: "lex")!) { try engine.block.toggleBoldFont(text, in: "Alex".range(of: "lex")!) } ``` In order to toggle the text of a text block between the normal and italic font styles, first call the `canToggleItalicFont(_ id: DesignBlockID, in subrange: Range? = nil) throws -> Bool` API to check whether such an edit is possible and if so, call the `toggleItalicFont(_ id: DesignBlockID, in subrange: Range? = nil) throws` API to change the style. ```swift highlight-toggleItalic if try engine.block.canToggleItalicFont(text) { try engine.block.toggleItalicFont(text) } if try engine.block.canToggleItalicFont(text, in: "Alex".range(of: "lex")!) { try engine.block.toggleItalicFont(text, in: "Alex".range(of: "lex")!) } ``` In order to change the font weight or style, the typeface definition of the text block must include a font definition that corresponds to the requested font weight and style combination. 
For example, if the text block currently uses a bold font and you want to toggle the font style to italic - such as in the example code - the typeface must contain a font that is both bold and italic. The `func setTextFontWeight(_ id: DesignBlockID, fontWeight: FontWeight, in subrange: Range? = nil) throws` API sets a font weight in the requested range, similar to the `setTextColor` API described above. ```swift highlight-setTextFontWeight try engine.block.setTextFontWeight(text, fontWeight: .bold) ``` The `func getTextFontWeights(_ id: DesignBlockID, in subrange: Range? = nil) throws -> [FontWeight]` API returns an ordered list of unique font weights in the requested range, similar to the `getTextColors` API described above. For this example text, the result will be `[.bold]`. ```swift highlight-getTextFontWeights let fontWeights = try engine.block.getTextFontWeights(text) ``` The `func setTextFontStyle(_ id: DesignBlockID, fontStyle: FontStyle, in subrange: Range? = nil) throws` API sets a font style in the requested range. ```swift highlight-setTextFontStyle try engine.block.setTextFontStyle(text, fontStyle: .italic) ``` The `func getTextFontStyles(_ id: DesignBlockID, in subrange: Range? = nil) throws -> [FontStyle]` API returns an ordered list of unique font styles in the requested range. For this example text, the result will be `[.italic]`. ```swift highlight-getTextFontStyles let fontStyles = try engine.block.getTextFontStyles(text) ``` ## Full Code Here's the full code: ```swift import Foundation import IMGLYEngine @MainActor func textProperties(engine: Engine) async throws { let scene = try engine.scene.create() let text = try engine.block.create(.text) try engine.block.appendChild(to: scene, child: text) try engine.block.setWidthMode(text, mode: .auto) try engine.block.setHeightMode(text, mode: .auto) try engine.block.replaceText(text, text: "Hello World") // Add a "!" 
at the end of the text try engine.block.replaceText(text, text: "!", in: "Hello World".endIndex ..< "Hello World".endIndex) // Replace "World" with "Alex" try engine.block.replaceText(text, text: "Alex", in: "Hello World".range(of: "World")!) try await engine.scene.zoom(to: text, paddingLeft: 100, paddingTop: 100, paddingRight: 100, paddingBottom: 100) // Remove the "Hello " try engine.block.removeText(text, from: "Hello Alex".range(of: "Hello ")!) try engine.block.setTextColor(text, color: .rgba(r: 1, g: 1, b: 0)) try engine.block.setTextColor(text, color: .rgba(r: 0, g: 0, b: 0), in: "Alex".range(of: "lex")!) let allColors = try engine.block.getTextColors(text) let colorsInRange = try engine.block.getTextColors(text, in: "Alex".range(of: "lex")!) try engine.block.setBool(text, property: "backgroundColor/enabled", value: true) try engine.block.getColor(text, property: "backgroundColor/color") as Color try engine.block.setColor(text, property: "backgroundColor/color", color: .rgba(r: 0.0, g: 0.0, b: 1.0, a: 1.0)) try engine.block.setFloat(text, property: "backgroundColor/paddingLeft", value: 1) try engine.block.setFloat(text, property: "backgroundColor/paddingTop", value: 2) try engine.block.setFloat(text, property: "backgroundColor/paddingRight", value: 3) try engine.block.setFloat(text, property: "backgroundColor/paddingBottom", value: 4) try engine.block.setFloat(text, property: "backgroundColor/cornerRadius", value: 4) let animation = try engine.block.createAnimation(AnimationType.slide) try engine.block.setEnum(animation, property: "textAnimationWritingStyle", value: "Block") try engine.block.setInAnimation(text, animation: animation) try engine.block.setOutAnimation(text, animation: animation) try engine.block.setTextCase(text, textCase: .titlecase) let textCases = try engine.block.getTextCases(text) let typeface = Typeface( name: "Roboto", fonts: [ Font( uri: URL(string: "https://cdn.img.ly/assets/v4/ly.img.typeface/fonts/Roboto/Roboto-Bold.ttf")!, 
subFamily: "Bold", weight: .bold, style: .normal ), Font( uri: URL(string: "https://cdn.img.ly/assets/v4/ly.img.typeface/fonts/Roboto/Roboto-BoldItalic.ttf")!, subFamily: "Bold Italic", weight: .bold, style: .italic ), Font( uri: URL(string: "https://cdn.img.ly/assets/v4/ly.img.typeface/fonts/Roboto/Roboto-Italic.ttf")!, subFamily: "Italic", weight: .normal, style: .italic ), Font( uri: URL(string: "https://cdn.img.ly/assets/v4/ly.img.typeface/fonts/Roboto/Roboto-Regular.ttf")!, subFamily: "Regular", weight: .normal, style: .normal ), ] ) try engine.block.setFont(text, fontFileURL: typeface.fonts[3].uri, typeface: typeface) try engine.block.setTypeface(text, typeface: typeface, in: "Alex".range(of: "lex")!) try engine.block.setTypeface(text, typeface: typeface) let currentDefaultTypeface = try engine.block.getTypeface(text) let currentTypefaces = try engine.block.getTypefaces(text) let currentTypefacesOfRange = try engine.block.getTypefaces(text, in: "Alex".range(of: "lex")!) if try engine.block.canToggleBoldFont(text) { try engine.block.toggleBoldFont(text) } if try engine.block.canToggleBoldFont(text, in: "Alex".range(of: "lex")!) { try engine.block.toggleBoldFont(text, in: "Alex".range(of: "lex")!) } if try engine.block.canToggleItalicFont(text) { try engine.block.toggleItalicFont(text) } if try engine.block.canToggleItalicFont(text, in: "Alex".range(of: "lex")!) { try engine.block.toggleItalicFont(text, in: "Alex".range(of: "lex")!) 
} try engine.block.setTextFontWeight(text, fontWeight: .bold) let fontWeights = try engine.block.getTextFontWeights(text) try engine.block.setTextFontStyle(text, fontStyle: .italic) let fontStyles = try engine.block.getTextFontStyles(text) } ``` --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Text Designs" description: "Create and customize text component libraries using predefined text designs that appear in your asset library." platform: ios url: "https://img.ly/docs/cesdk/ios/text/text-designs-a1b2c3/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Create and Edit Text](https://img.ly/docs/cesdk/ios/text-8a993a/) > [Text Designs](https://img.ly/docs/cesdk/ios/text/text-designs-a1b2c3/) --- Text Designs (also known as Text Components) are pre-designed text layouts that appear in your asset library. Users can click on these components to automatically insert them into their designs. This guide explains how to prepare and customize the `content.json` file that defines these components. ## What are Text Designs? Text Designs are serialized text blocks or groups of text blocks configured with specific styling, layout constraints, and behavior. They provide users with professionally designed text layouts that are easy to customize while maintaining their visual integrity. When users browse the asset library, they see thumbnails of these text components.
Clicking on a component automatically loads and inserts it into their current scene. ## Default Components CE.SDK ships with over 20 pre-built text designs including: - **Box** - Text with decorative border elements - **Breaking** - Bold, attention-grabbing headlines - **Cinematic** - Movie poster-style text effects - **Glow** - Text with luminous glow effects - **Greetings** - Welcoming message layouts - **Promo** - Promotional and sale-focused designs - **Quote** - Quote bubble and callout styles - **Speech** - Dialog and conversation layouts - **Valentine** - Romantic and heart-themed designs - **Handwriting** - Script and handwritten font styles - And many more... ## Content.json Structure Text designs are defined in a `content.json` file with the following structure: ```json { "version": "3.0.0", "id": "ly.img.textComponents", "assets": [ { "id": "ly.img.textComponents.box", "label": { "en": "Box" }, "meta": { "uri": "{{base_url}}/ly.img.textComponents/data/box/blocks.blocks", "thumbUri": "{{base_url}}/ly.img.textComponents/thumbnails/box.png", "mimeType": "application/ubq-blocks-string" } } ], "blocks": [] } ``` ### Key Properties - **version**: Content format version (currently "3.0.0") - **id**: Unique identifier for the asset source ("ly.img.textComponents") - **assets**: Array of component definitions ### Asset Properties Each component in the assets array has: - **id**: Unique identifier following the pattern `ly.img.textComponents.[name]` - **label**: Display name object with language codes (e.g., `{"en": "Box"}`) - **meta**: - **uri**: Path to the `.blocks` file containing the serialized component - **thumbUri**: Path to the thumbnail image (400x320px PNG recommended) - **mimeType**: Always `"application/ubq-blocks-string"` for text components The `{{base_url}}` placeholder gets replaced with your configured base URL. ## Creating Custom Components ### 1. 
Design Your Component Follow these best practices when designing text components: #### Text Settings - Use **variable text** with a range of 0-1000 characters - Set **fixed frame** with **clipping enabled** - Avoid growing or shrinking frames to prevent scaling issues #### Constraints Setup - **Parent Group**: Give the parent group all available constraint options for maximum flexibility - **Child Elements**: Set constraints relative to the parent group to maintain proper relationships during resizing #### Design Considerations - Use **scopes** and **auto font-size** features to enable easy editing - Test components by dropping them into new files to verify constraint behavior - Ensure components work as cohesive units that are easy to edit but difficult to accidentally break ### 2. Export Your Component Once your design is ready: 1. Select the complete text component (parent group with all children) 2. Use the BlockAPI (not the SceneAPI) to serialize it to an archive: ```swift // Save the component to a blocks archive file let blocksArchive = try await engine.block.saveToArchive([componentBlockId]) ``` #### Resource Management Text components often reference external resources like fonts and images. When using `saveToArchive()`, these resources can be stored. If you later serve all the resources together with the blocks file, the component can be used in other editors. Using `saveToArchive()` ensures that: - Font references remain valid across different environments - Components can be safely used in any scene - Serialized scenes maintain all resource references **Best Practices:** 1. **Ensure resource availability**: Make sure all resources used in your components are served 2. **Test in isolation**: Always test components in fresh editor instances to verify resource loading 3. **Validate references**: Check that all asset URIs are accessible from your target environments ### 3. 
Create Component Files #### Save the Blocks Archive File Save the component archive and extract it: - Use descriptive names matching your component ID (e.g., `customBox`) - Extract the zip file and store it in your `/data/customBox` directory structure - All files should be included in the same file structure as in the archive Example with only a blocks file: ``` /data/customBox/blocks.blocks ``` Example with images and fonts: ``` /data/customBox/blocks.blocks /data/customBox/fonts/59251598.ttf /data/customBox/fonts/355809377.ttf /data/customBox/images/3255389386.jpeg /data/customBox/images/3302885400.jpeg ``` #### Create Thumbnails Generate 400x320px PNG thumbnails: 1. Remove page background color from your design 2. Export as PNG using the block export API: ```swift // Export component as 400x320px thumbnail let thumbnailData = try await engine.block.export(componentBlockId, mimeType: "image/png", options: ExportOptions( targetWidth: 400, targetHeight: 320 ) ) // Save thumbnail to file // Save thumbnailData to your thumbnail file (e.g., customBox.png) ``` ### 4. Update content.json Add your new component to the assets array: ```json { "id": "ly.img.textComponents.customBox", "label": { "en": "Custom Box", "de": "Eigene Box" }, "meta": { "uri": "{{base_url}}/ly.img.textComponents/data/customBox/blocks.blocks", "thumbUri": "{{base_url}}/ly.img.textComponents/thumbnails/customBox.png", "mimeType": "application/ubq-blocks-string" } } ``` ## Hosting Custom Components ### Backend Setup 1. **Host your files**: Upload your modified `content.json`, `.blocks` files, and thumbnails to your web server 2. **Maintain structure**: Keep the same directory structure: ``` /ly.img.textComponents/ ├── content.json ├── data/ │ ├── box/blocks.blocks │ ├── customBox/blocks.blocks │ ├── customBox/fonts/59251598.ttf │ ├── customBox/fonts/355809377.ttf │ ├── customBox/images/3255389386.jpeg │ ├── customBox/images/3302885400.jpeg │ └── ... 
└── thumbnails/ ├── box.png ├── customBox.png └── ... ``` ### Configuration To customize your application to use your custom assets, refer to [Serve Assets](https://img.ly/docs/cesdk/ios/serve-assets-b0827c/). Your custom text designs will now appear in the text components section of the asset library. --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "To v1.19" description: "Learn what changed in v1.19 and how to update your implementation to stay compatible." platform: ios url: "https://img.ly/docs/cesdk/ios/to-v1-19-55bcad/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Upgrading](https://img.ly/docs/cesdk/ios/upgrade-4f8715/) > [To v1.19](https://img.ly/docs/cesdk/ios/to-v1-19-55bcad/) --- Version v1.19 of CreativeEngineSDK and CreativeEditorSDK introduces structural changes to many of the current design blocks, making them more composable and more powerful. Along with this update, there are mandatory license changes that require attention. This comes with a number of breaking changes. This document will explain the changes and describe the steps you need to take to adapt them to your setup. ## **Initialization** The initialization of the `Engine` has changed. Now the `Engine` initializer is async and failable. It also requires a new parameter `license` which is the API key you received from our dashboard. 
There is also a new optional parameter `userID`, a unique ID tied to your application's user. It helps us accurately calculate monthly active users (MAU), which is especially useful when one person uses the app on multiple devices with a sign-in feature, ensuring they're counted only once.

```swift
try await Engine(license: "", userID: "")
```

Please see the [updated Quickstarts](https://img.ly/docs/cesdk/ios/get-started/overview-e18f40/) for complete SwiftUI, UIKit, and AppKit integration examples.

## **DesignBlockType**

These are the transformations of all `DesignBlockType` types:

Removed:

- `DesignBlockType.image`
- `DesignBlockType.video`
- `DesignBlockType.sticker`
- `DesignBlockType.vectorPath`
- `DesignBlockType.rectShape`
- `DesignBlockType.lineShape`
- `DesignBlockType.starShape`
- `DesignBlockType.polygonShape`
- `DesignBlockType.ellipseShape`
- `DesignBlockType.colorFill`
- `DesignBlockType.imageFill`
- `DesignBlockType.videoFill`
- `DesignBlockType.linearGradientFill`
- `DesignBlockType.radialGradientFill`
- `DesignBlockType.conicalGradientFill`

Added:

- `DesignBlockType.graphic`
- `DesignBlockType.cutout`

Note that `DesignBlockType.allCases` can be used to get the list of all instances mentioned above.

## **Graphic Design Block**

A new generic `DesignBlockType.graphic` type has been introduced that forms the basis of the new unified block structure.

## **Shapes**

Similar to how the fill of a block is a separate object that can be attached to and replaced on a design block, we have now introduced a similar concept for the shape of a block. Use the new `createShape`, `getShape`, and `setShape` APIs to define the shape of a design block. Only the new `DesignBlockType.graphic` block allows changing its shape with these APIs.
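As a sketch of the new shape workflow (mirroring the migration examples later in this document, including the `shape/star/points` property key used there):

```swift
// Create a generic graphic block and attach a star shape to it
let block = try engine.block.create(.graphic)
let star = try engine.block.createShape(.star)
try engine.block.setShape(block, shape: star)

// Shape-specific properties live on the shape instance, addressed
// with the singular "shape/…" property keys
let shape = try engine.block.getShape(block)
try engine.block.setInt(shape, property: "shape/star/points", value: 8)
```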
The new available shape types are:

- `ShapeType.rect`
- `ShapeType.line`
- `ShapeType.ellipse`
- `ShapeType.polygon`
- `ShapeType.star`
- `ShapeType.vectorPath`

Note that `ShapeType.allCases` can be used to get the list of all instances mentioned above.

The following design block types are now removed in favor of using a `DesignBlockType.graphic` block with one of the above shape instances:

- `DesignBlockType.rectShape`
- `DesignBlockType.lineShape`
- `DesignBlockType.ellipseShape`
- `DesignBlockType.polygonShape`
- `DesignBlockType.starShape`
- `DesignBlockType.vectorPath`

This structural change means that the shape-specific properties (e.g. the number of sides of a polygon) are no longer available on the design block but on the shape instance instead. You will have to add calls to `getShape` to get the id of the shape instance and then pass that to the property getter and setter APIs. Also remember to change property key strings in the getter and setter calls from plural `shapes/…` to singular `shape/…` to match the new type identifiers.

## **Image and Sticker**

Previously, `DesignBlockType.image` and `DesignBlockType.sticker` were their own high-level design block types. They supported neither the fill APIs nor the effects APIs. Both of these blocks are now removed in favor of using a `DesignBlockType.graphic` block with an image fill (`FillType.image`) and using the effects APIs instead of the legacy image block’s numerous effects properties.

At its core, the sticker block has always just been an image block that is heavily limited in its capabilities: you can neither crop it nor apply any effects to it. In order to replicate this difference as closely as possible in the new unified structure, more fine-grained scopes have been added. You can now limit the adopter’s ability to crop a block and to edit its appearance.
Note that since these scopes only apply to a user of the editor with the “Adopter” role, a “Creator” user will now have all of the same editing options for both images and for blocks that used to be stickers.

## **Scopes**

The following is the list of changes to the design block scopes:

- (Breaking) The permission to crop a block was split from `content/replace` and `design/style` into a separate scope: `layer/crop`.
- Deprecated the `design/arrange` scope and renamed:
  - `design/arrange/move` → `layer/move`
  - `design/arrange/resize` → `layer/resize`
  - `design/arrange/rotate` → `layer/rotate`
  - `design/arrange/flip` → `layer/flip`
- Deprecated the `content/replace` scope. For `DesignBlockType.text` blocks, it is replaced with the new `text/edit` scope. For other blocks it is replaced with `fill/change`.
- Deprecated the `design/style` scope and replaced it with the following fine-grained scopes: `text/character`, `stroke/change`, `layer/opacity`, `layer/blendMode`, `layer/visibility`, `layer/clipping`, `appearance/adjustments`, `appearance/filter`, `appearance/effect`, `appearance/blur`, `appearance/shadow`
- Introduced `fill/change`, `stroke/change`, and `shape/change` scopes that control whether the fill, stroke, or shape of a block may be edited by a user with the “Adopter” role.
- For now, the deprecated scopes are automatically mapped to their new corresponding scopes by the scope APIs until they are removed completely in a future update.

## **Kind**

While the new unified block structure both simplifies a lot of code and makes design blocks more powerful, it also means that many design blocks that used to have unique type ids now all share the same generic `DesignBlockType.graphic` type, so calls to `findByType` can no longer be used to filter blocks based on their legacy type ids.
Simultaneously, there are many instances in which different blocks in the scene might have the same type and underlying technical structure but different semantic roles in the document, and should therefore be treated differently by the user interface.

To solve both of these problems, we have introduced the concept of a block “kind”. This is a mutable string that can be used to tag different blocks with a semantic label. You can get the kind of a block using the `getKind` API and you can query blocks with a specific kind using the `findByKind` API.

CreativeEngine provides the following default kind values:

- image
- video
- sticker
- scene
- camera
- stack
- page
- audio
- text
- shape
- group

Unlike the immutable design block type id, you can change the kind of a block with the new `setKind` API. It is important to remember that the underlying structure and properties of a design block are not strictly defined by its kind, since the kind, shape, fill, and effects of a block can be changed independently of each other. Therefore, a user interface should not make assumptions about the available properties of a block purely based on its kind.

> **Note:** Due to legacy reasons, blocks with the kind "sticker" will continue
> to not allow their contents to be cropped. This special behavior will be
> addressed and replaced with a more general-purpose implementation in a future
> update.

## **Asset Definitions**

The asset definitions have been updated to reflect the deprecation of legacy block type ids and the introduction of the “kind” property. In addition to the `blockType` meta property, you can now also define the `shapeType`, `fillType`, and `kind` of the block that should be created by the default implementation of the applyAsset function.

- `blockType` defaults to `DesignBlockType.graphic.rawValue` (`"//ly.img.ubq/graphic"`) if left unspecified.
- `shapeType` defaults to `ShapeType.rect.rawValue` (`"//ly.img.ubq/shape/rect"`) if left unspecified.
- `fillType` defaults to `FillType.color.rawValue` (`"//ly.img.ubq/fill/color"`) if left unspecified.

Video block asset definitions used to specify the `blockType` as `"//ly.img.ubq/fill/video"` (`FillType.video.rawValue`). The `fillType` meta asset property should now be used instead for such fill type ids.

## **Automatic Migration**

CreativeEngine will always continue to support scene files that contain the now removed legacy block types. Those design blocks will be automatically replaced by the equivalent new unified block structure when the scene is loaded, which means that the types of all legacy blocks will change to `DesignBlockType.graphic`.

Note that this can mean that a block gains new capabilities that it did not have before. For example, the line shape block did not have any stroke properties, so the `hasStroke` API used to return `false`. However, after the automatic migration its `DesignBlockType.graphic` replacement supports both strokes and fills, so the `hasStroke` API now returns `true`. Similarly, the image block did not support fills or effects, but the `DesignBlockType.graphic` block does.

## **Types and API Signatures**

To improve the type safety of our APIs, we have moved away from using a single `DesignBlockType` enum and split it into multiple types (revised `DesignBlockType`, `FillType`, `EffectType`, and `BlurType`). Those changes have affected the following APIs:

- `BlockAPI.create(_:)`
- `BlockAPI.createFill(_:)`
- `BlockAPI.createEffect(_:)`
- `BlockAPI.createBlur(_:)`
- `BlockAPI.find(byType:)`

> **Note:** All the functions above still support the string overload variants; however, their
> usage will cause lint warnings in favor of the type-safe overloads.

> **Attention:** `find(byType:)` now provides overloads for `DesignBlockType` and the new `FillType`.
> If the type-inferred `find(byType: .image)` version is used, it will still compile
> without warnings, but it now returns image fills (`FillType.image`) and no longer the
> removed legacy high-level image design block type (`DesignBlockType.image`).
> Please see the "Block Exploration" example below ("Query all images in the scene
> after migration") to migrate your code base.

## **Code Examples**

This section shows some code examples of the breaking changes and what the code looks like after migrating.

```swift
/** Block Creation */

// Creating an image before migration
let image = try engine.block.create(.image)
try engine.block.setString(
  image,
  property: "image/imageFileURI",
  value: "https://domain.com/link-to-image.jpg"
)

// Creating an image after migration
let block = try engine.block.create(.graphic)
let rectShape = try engine.block.createShape(.rect)
let imageFill = try engine.block.createFill(.image)
try engine.block.setString(
  imageFill,
  property: "fill/image/imageFileURI",
  value: "https://domain.com/link-to-image.jpg"
)
try engine.block.setShape(block, shape: rectShape)
try engine.block.setFill(block, fill: imageFill)
try engine.block.setKind(block, kind: "image")

// Creating a star shape before migration
let star = try engine.block.create(.starShape)
try engine.block.setInt(star, property: "shapes/star/points", value: 8)

// Creating a star shape after migration
let block = try engine.block.create(.graphic)
let starShape = try engine.block.createShape(.star)
let colorFill = try engine.block.createFill(.color)
try engine.block.setInt(starShape, property: "shape/star/points", value: 8)
try engine.block.setShape(block, shape: starShape)
try engine.block.setFill(block, fill: colorFill)
try engine.block.setKind(block, kind: "shape")

// Creating a sticker before migration
let sticker = try engine.block.create(.sticker)
try engine.block.setString(
  sticker,
  property: "sticker/imageFileURI",
  value: "https://domain.com/link-to-sticker.png"
)

// Creating a sticker after migration
let block = try engine.block.create(.graphic)
let rectShape = try engine.block.createShape(.rect)
let imageFill = try engine.block.createFill(.image)
try engine.block.setString(
  imageFill,
  property: "fill/image/imageFileURI",
  value: "https://domain.com/link-to-sticker.png"
)
try engine.block.setShape(block, shape: rectShape)
try engine.block.setFill(block, fill: imageFill)
try engine.block.setKind(block, kind: "sticker")

/** Block Creation */
```

```swift
/** Block Exploration */

// Query all images in the scene before migration
let images = try engine.block.find(byType: .image)

// Query all images in the scene after migration
let images = try engine.block.find(byType: .graphic).filter { block in
  let fill = try engine.block.getFill(block)
  return try engine.block.isValid(fill) && engine.block.getType(fill) == FillType.image.rawValue
}

// Query all stickers in the scene before migration
let stickers = try engine.block.find(byType: .sticker)

// Query all stickers in the scene after migration
let stickers = try engine.block.find(byKind: "sticker")

// Query all polygon shapes in the scene before migration
let polygons = engine.block.find(byType: .polygonShape)

// Query all polygon shapes in the scene after migration
let polygons = try engine.block.find(byType: .graphic).filter { block in
  let shape = try engine.block.getShape(block)
  return try engine.block.isValid(shape) && engine.block.getType(shape) == ShapeType.polygon.rawValue
}

/** Block Exploration */
```

---

## More Resources

- **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation
- **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs)
- **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples
- **[Support](mailto:support@img.ly)** - Contact IMG.LY support

---

---
title: "Upgrade"
description: "Learn how to upgrade CE.SDK and apply required
changes when migrating between major SDK versions."
platform: ios
url: "https://img.ly/docs/cesdk/ios/upgrade-4f8715/"
---

> This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt).

**Navigation:** [Upgrading](https://img.ly/docs/cesdk/ios/upgrade-4f8715/)

---

---

## Related Pages

- [To v1.19](https://img.ly/docs/cesdk/ios/to-v1-19-55bcad/) - Learn what changed in v1.19 and how to update your implementation to stay compatible.

---

## More Resources

- **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation
- **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs)
- **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples
- **[Support](mailto:support@img.ly)** - Contact IMG.LY support

---

---
title: "Apply a Template"
description: "Learn how to apply template scenes via API in the CreativeEditor SDK."
platform: ios
url: "https://img.ly/docs/cesdk/ios/use-templates/apply-template-35c73e/"
---

> This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt).

**Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Create and Use Templates](https://img.ly/docs/cesdk/ios/create-templates-3aef79/) > [Apply a Template](https://img.ly/docs/cesdk/ios/use-templates/apply-template-35c73e/)

---

```swift reference-only
try await engine.scene.applyTemplate(from: "UBQ1ewoiZm9ybWF0Ij...")
try await engine.scene
  .applyTemplate(
    from: .init(
      string: "https://cdn.img.ly/assets/demo/v1/ly.img.template/templates/cesdk_postcard_1.scene"
    )!
  )
```

In this example, we show you how to use the [CreativeEditor SDK](https://img.ly/products/creative-sdk)'s CreativeEngine to apply the contents of a given template scene to the currently loaded scene through the `scene` API.

## Applying Template Scenes

```swift
public func applyTemplate(from string: String) async throws
```

Applies the contents of the given template scene to the currently loaded scene. This loads the template scene while keeping the design unit and page dimensions of the current scene. The content of the pages is automatically adjusted to fit the new dimensions.

- `string`: The template scene file contents, a base64 string.

```swift
public func applyTemplate(from url: URL) async throws
```

Applies the contents of the given template scene to the currently loaded scene. This loads the template scene while keeping the design unit and page dimensions of the current scene. The content of the pages is automatically adjusted to fit the new dimensions.

- `url`: The URL of the template scene file.

## Full Code

Here's the full code:

```swift
try await engine.scene.applyTemplate(from: "UBQ1ewoiZm9ybWF0Ij...")
try await engine.scene
  .applyTemplate(
    from: .init(
      string: "https://cdn.img.ly/assets/demo/v1/ly.img.template/templates/cesdk_postcard_1.scene"
    )!
  )
```

---

## More Resources

- **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation
- **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs)
- **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples
- **[Support](mailto:support@img.ly)** - Contact IMG.LY support

---

---
title: "Generate From Templates"
description: "Learn how to load, apply, and populate CE.SDK templates in Swift for iOS, macOS, and Mac Catalyst."
platform: ios
url: "https://img.ly/docs/cesdk/ios/use-templates/generate-334e15/"
---

> This is one page of the CE.SDK iOS documentation.
For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt).

**Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Create and Use Templates](https://img.ly/docs/cesdk/ios/create-templates-3aef79/) > [Generate From Template](https://img.ly/docs/cesdk/ios/use-templates/generate-334e15/)

---

Once you create templates, either in CE.SDK’s web-based editor or programmatically, your app can load and apply them to scenes at runtime. This guide explains **how to load templates**, populate them with variables and images, and use template libraries to integrate them into your app.

## What You’ll Learn

- Load and apply templates from a string or URL.
- Launch the editor with a template as the initial scene.
- Populate templates with dynamic content using variables and placeholders.
- Create custom template libraries with thumbnails and metadata.
- Adapt templates automatically to target dimensions.

## When to Use It

Use this guide when your app needs to **load and apply existing templates** to generate new scenes, such as:

- Starting from a brand template.
- Building a “Start from Template” screen.
- Populating designs dynamically with user or product data.

## Applying Templates

A template can replace or populate an existing scene while keeping the current page size and units.

### Apply from String

Use `.applyTemplate(from:)` with a `String` when you’ve saved the template using `.saveToString()` or when your back end returns the raw string encoding instead of a `.scene` file or a `Blob`.

```swift
try await engine.scene.applyTemplate(from: "UBQ1ewoiZm9ybWF0Ij...")
```

### Apply from URL

```swift
let url = URL(string: "https://cdn.img.ly/assets/demo/v1/ly.img.template/templates/cesdk_postcard_1.scene")!
try await engine.scene.applyTemplate(from: url)
```

When applying a template to an existing scene, CE.SDK automatically adjusts template content to fit the current scene’s dimensions.

> **Note:** When using a prebuilt editor, end users can:
> - Visually replace images by drag-and-drop.
> - Update text directly in the editor interface.
>
> In code-only, CI, or headless workflows, you must replace media or text programmatically by:
> - Changing the fill URI for images.
> - Updating variable text values in code.

## Loading Templates as Scenes

Instead of applying a template to an existing scene, you can load a template as the active scene when you either:

- Start the engine.
- Launch a prebuilt editor with a template as the active scene.

This is ideal when you want to either:

- Open directly into a predefined layout.
- Start an editing session from a template.

### Load from String

```swift
try await engine.scene.load(from: "UBQ1ewoiZm9ybWF0Ij...")
```

### Load from URL

```swift
let url = URL(string: "https://cdn.img.ly/assets/demo/v1/ly.img.template/templates/cesdk_postcard_1.scene")!
try await engine.scene.load(from: url)
```

### Load from Archive

Use `.loadArchive(from:)` when you've saved the template using `.saveToArchive()` to bundle resources.

```swift
let url = URL(string: "https://cdn.img.ly/assets/demo/postcard.archive")!
try await engine.scene.loadArchive(from: url)
```

The preceding code would typically appear in the `.imgly.onCreate` modifier when using one of the prebuilt editors.
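As a sketch, loading a template as the initial scene of the prebuilt `DesignEditor` might look like the following (the closure form of `.imgly.onCreate` and the `secrets.licenseKey` helper are assumptions; adapt them to your setup):

```swift
import IMGLYDesignEditor
import SwiftUI

struct TemplateEditor: View {
  // secrets.licenseKey is a placeholder for however you store your license
  let settings = EngineSettings(license: secrets.licenseKey, userID: "")

  var body: some View {
    DesignEditor(settings)
      .imgly.onCreate { engine in
        // Load a template as the initial scene instead of the default one
        let url = URL(string: "https://cdn.img.ly/assets/demo/v1/ly.img.template/templates/cesdk_postcard_1.scene")!
        try await engine.scene.load(from: url)
      }
  }
}
```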
## Comparison

| Use Case | Method | Behavior |
|---|---|---|
| Apply a template to an existing design | `.applyTemplate(from:)` | Merges the template layout into the current scene while preserving size |
| Launch with a predefined template | `.load(from:)` | The template becomes the initial scene |
| Automate batch generation | Headless mode `.load(from:)` | Loads and renders templates programmatically |

## Template Libraries

You can present templates in the Asset Library along with other assets via a custom `AssetSource`. Each entry includes metadata that points to your template file and a preview image.

## Dynamic Population (Variables & Placeholders)

Templates can include variable placeholders like `{{name}}` or image placeholders. Your app can inject values at runtime.

> **Note:** Interactive placeholder behavior (tap-to-replace, drag-drop) is available only in **CE.SDK’s predefined editors**. In **code-only, CI, or headless workflows**, use the `Variable` and `Block` APIs to replace media by:
> - Updating the image fill URI.
> - Updating text via variables.

```swift
// Example: Replace an image by setting a new fill URI
try engine.block.setString(
  block,
  property: "fill/image/imageFileURI",
  value: "https://cdn.example.com/images/new_photo.jpg"
)
```

```swift
// Example: Update a variable-based text field
try engine.variable.set(key: "name", value: "Chris")
```

## Template Adaptation

When you apply a template, CE.SDK keeps the current scene’s design unit and page size, automatically fitting the template content to match the target dimensions. This makes it easy to reuse templates for multiple aspect ratios.

## Troubleshooting

**❌ Wrong size or scaling**

- Ensure the scene is initialized with the correct dimensions before applying the template.

**❌ Missing assets**

- Confirm that external URLs or archives are accessible.

**❌ Text variables not updating**

- Check that variable keys match the placeholders used.
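To check the last point, you can print the variable keys defined in the loaded scene and compare them against your template's placeholders (a sketch; a `findAll()` method returning the defined variable keys is an assumption here):

```swift
// List every variable key defined in the current scene so you can
// compare them against the {{placeholder}} names in your template.
// findAll() returning the defined keys is an assumption.
let keys = try engine.variable.findAll()
keys.forEach { print($0) }
```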
## Next Steps Now that you can generate creations from templates, some related topics you may find helpful are: - [Create templates](https://img.ly/docs/cesdk/ios/create-templates/from-scratch-663cda/) from scratch. - [Apply templates](https://img.ly/docs/cesdk/ios/use-templates/apply-template-35c73e/) to existing scenes. - Work with [dynamic content](https://img.ly/docs/cesdk/ios/create-templates/add-dynamic-content-53fad7/) to update templates at runtime. --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Overview" description: "Learn how to browse, apply, and dynamically populate templates in CE.SDK to streamline design workflows." platform: ios url: "https://img.ly/docs/cesdk/ios/use-templates/overview-ae74e1/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [Create and Use Templates](https://img.ly/docs/cesdk/ios/create-templates-3aef79/) > [Use Templates Overview](https://img.ly/docs/cesdk/ios/use-templates/overview-ae74e1/) --- Templates in CreativeEditor SDK (CE.SDK) are pre-designed layouts that serve as starting points for generating static designs, videos, or print-ready outputs. Templates can be used to produce a wide range of media, including images, PDFs, and videos. 
Instead of creating a design from scratch, you can use a template to quickly produce content by adapting pre-defined elements like text, images, and layout structures. Using templates offers significant advantages: faster content creation, consistent visual style, and scalable design workflows across many outputs.

CE.SDK supports two modes of using templates:

- **Fully Programmatic**: Generate content variations automatically by merging external data into templates without user intervention.
- **User-Assisted**: Let users load a template, customize editable elements, and export the result manually.

Template-based generation can be performed entirely on the client, entirely on a server, or in a hybrid setup where users interact with templates client-side before triggering automated server-side generation.

[Explore Demos](https://img.ly/showcases/cesdk?tags=ios) [Get Started](https://img.ly/docs/cesdk/ios/get-started/overview-e18f40/)

## Output Formats When Using Templates

When generating outputs from templates, CE.SDK supports multiple output formats. Templates are format-aware, allowing you to design once and export to multiple formats seamlessly. For example, a single marketing template could be used to produce a social media graphic, a printable flyer, and a promotional video, all using the same underlying design.

---

## More Resources

- **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation
- **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs)
- **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples
- **[Support](mailto:support@img.ly)** - Contact IMG.LY support

---

---
title: "User Interface"
description: "Use CE.SDK’s customizable, production-ready UI or replace it entirely with your own interface."
platform: ios
url: "https://img.ly/docs/cesdk/ios/user-interface-5a089a/"
---

> This is one page of the CE.SDK iOS documentation.
For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [User Interface](https://img.ly/docs/cesdk/ios/user-interface-5a089a/) --- --- ## Related Pages - [Overview](https://img.ly/docs/cesdk/ios/user-interface/overview-41101a/) - Use CE.SDK’s customizable, production-ready UI or replace it entirely with your own interface. - [Appearance](https://img.ly/docs/cesdk/ios/user-interface/appearance-b155eb/) - Customize the visual style of the editor UI, including themes, fonts, labels, and icons. - [Customization](https://img.ly/docs/cesdk/ios/user-interface/customization-72b2f8/) - Control which features are available and how UI components behave, appear, or are arranged in the editor. - [UI Extensions](https://img.ly/docs/cesdk/ios/user-interface/ui-extensions-d194d1/) - Extend the editor interface with custom components, panels, actions, and dialogs tailored to your workflow. - [Localization](https://img.ly/docs/cesdk/ios/user-interface/localization-508e20/) - Learn how to configure and manage multiple languages in the CE.SDK editor using the built-in internationalization API. - [UI Events](https://img.ly/docs/cesdk/ios/user-interface/events-514b70/) - Listen to UI events and trigger custom logic based on user interactions in the editor interface. 
--- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Appearance" description: "Customize the visual style of the editor UI, including themes, fonts, labels, and icons." platform: ios url: "https://img.ly/docs/cesdk/ios/user-interface/appearance-b155eb/" --- > This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt). **Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [User Interface](https://img.ly/docs/cesdk/ios/user-interface-5a089a/) > [Appearance](https://img.ly/docs/cesdk/ios/user-interface/appearance-b155eb/) --- --- ## Related Pages - [Theming](https://img.ly/docs/cesdk/ios/user-interface/appearance/theming-4b0938/) - Customize the editor's visual theme to match your brand using flexible theming options. --- ## More Resources - **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation - **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs) - **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples - **[Support](mailto:support@img.ly)** - Contact IMG.LY support --- --- title: "Theming" description: "Customize the editor's visual theme to match your brand using flexible theming options." platform: ios url: "https://img.ly/docs/cesdk/ios/user-interface/appearance/theming-4b0938/" --- > This is one page of the CE.SDK iOS documentation. 
For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt).

**Navigation:** [Guides](https://img.ly/docs/cesdk/ios/guides-8d8b00/) > [User Interface](https://img.ly/docs/cesdk/ios/user-interface-5a089a/) > [Appearance](https://img.ly/docs/cesdk/ios/user-interface/appearance-b155eb/) > [Theming](https://img.ly/docs/cesdk/ios/user-interface/appearance/theming-4b0938/)

---

```swift file=@cesdk_swift_examples/editor-guides-configuration-theming/ThemingEditorSolution.swift reference-only
import IMGLYDesignEditor
import SwiftUI

struct ThemingEditorSolution: View {
  let settings = EngineSettings(license: secrets.licenseKey, // pass nil for evaluation mode with watermark
                                userID: "")

  @Environment(\.colorScheme) private var colorScheme

  var editor: some View {
    DesignEditor(settings)
      .preferredColorScheme(colorScheme == .dark ? .light : .dark)
  }

  @State private var isPresented = false

  var body: some View {
    Button("Use the Editor") { isPresented = true }
      .fullScreenCover(isPresented: $isPresented) {
        ModalEditor { editor }
      }
  }
}

#Preview {
  ThemingEditorSolution()
}
```

In this example, we show you how to make theming configurations for the mobile editor. The example is based on the `Design Editor`; however, it works exactly the same for all the other [solutions](https://img.ly/docs/cesdk/ios/prebuilt-solutions-d0ed07/).

## Modifiers

After initializing an editor SwiftUI view, you can apply any SwiftUI *modifier* to customize it, just like for any other SwiftUI view. Theming the mobile editor works the same way. The editor respects the SwiftUI [`colorScheme` environment](https://developer.apple.com/documentation/swiftui/colorscheme).
It can be configured with the [`preferredColorScheme` modifier](https://developer.apple.com/documentation/swiftui/view/preferredcolorscheme\(_:\)) to override the system's color scheme, which is the default if it is not already overridden somewhere in your view hierarchy. In this example, we use the opposite of the color scheme currently in use.

```swift
import IMGLYDesignEditor
import SwiftUI

struct ThemingEditorSolution: View {
  let settings = EngineSettings(license: secrets.licenseKey,
                                userID: "")

  @Environment(\.colorScheme) private var colorScheme

  var editor: some View {
    DesignEditor(settings)
      .preferredColorScheme(colorScheme == .dark ? .light : .dark)
  }

  @State private var isPresented = false

  var body: some View {
    Button("Use the Editor") { isPresented = true }
      .fullScreenCover(isPresented: $isPresented) {
        ModalEditor { editor }
      }
  }
}

#Preview {
  ThemingEditorSolution()
}
```

---

## More Resources

- **[iOS Documentation Index](https://img.ly/docs/cesdk/ios.md)** - Browse all iOS documentation
- **[Complete Documentation](https://img.ly/docs/cesdk/ios/llms-full.txt)** - Full documentation in one file (for LLMs)
- **[Web Documentation](https://img.ly/docs/cesdk/ios/)** - Interactive documentation with examples
- **[Support](mailto:support@img.ly)** - Contact IMG.LY support

---

---
title: "Customization"
description: "Control which features are available and how UI components behave, appear, or are arranged in the editor."
platform: ios
url: "https://img.ly/docs/cesdk/ios/user-interface/customization-72b2f8/"
---

> This is one page of the CE.SDK iOS documentation. For a complete overview, see the [iOS Documentation Index](https://img.ly/docs/cesdk/ios.md). For all docs in one file, see [llms-full.txt](https://img.ly/docs/cesdk/ios/llms-full.txt).
---

## Related Pages

- [Page Format](https://img.ly/docs/cesdk/ios/user-interface/customization/page-format-496315/) - Define default page size, orientation, and other format settings for your design canvas.
- [Crop Presets](https://img.ly/docs/cesdk/ios/user-interface/customization/crop-presets-f94f26/) - Define crop preset settings for your design.
- [Force Crop](https://img.ly/docs/cesdk/ios/user-interface/customization/force-crop-c2854e/) - Programmatically apply crop presets to design blocks with automatic best-match selection and flexible UI behavior.
- [Rearrange Buttons](https://img.ly/docs/cesdk/ios/user-interface/customization/rearrange-buttons-97022a/) - Reorder UI buttons across editor components to guide user actions and streamline workflows.
- [Navigation Bar](https://img.ly/docs/cesdk/ios/user-interface/customization/navigation-bar-4e5d39/) - Show, hide, or customize the editor’s top navigation bar to match your app layout.
- [Dock](https://img.ly/docs/cesdk/ios/user-interface/customization/dock-cb916c/) - Configure the dock area to show or hide tools, panels, or quick access actions.
- [Panel](https://img.ly/docs/cesdk/ios/user-interface/customization/panel-7ce1ee/) - Show or hide panels to focus the user interface on what matters most for your use case.
- [Inspector Bar](https://img.ly/docs/cesdk/ios/user-interface/customization/inspector-bar-8ca1cd/) - Customize the inspector bar for editing properties like position, color, and size.
- [Canvas Menu](https://img.ly/docs/cesdk/ios/user-interface/customization/canvas-menu-0d2b5b/) - Control visibility and customize the contextual popup menu that appears when selecting design elements on the canvas.
- [Hide Elements](https://img.ly/docs/cesdk/ios/user-interface/customization/hide-elements-fe945c/) - Hide the dock completely or remove specific items from UI components to create customized editing experiences.

---

---
title: "Canvas Menu"
description: "Control visibility and customize the contextual popup menu that appears when selecting design elements on the canvas."
platform: ios
url: "https://img.ly/docs/cesdk/ios/user-interface/customization/canvas-menu-0d2b5b/"
---
---

```swift file=@cesdk_swift_examples/editor-guides-configuration-canvas-menu/CanvasMenuEditorSolution.swift reference-only
// swiftlint:disable unused_closure_parameter
// swiftformat:disable unusedArguments
import IMGLYDesignEditor
import SwiftUI

struct CanvasMenuEditorSolution: View {
  let settings = EngineSettings(license: secrets.licenseKey,
                                // pass nil for evaluation mode with watermark
                                userID: "")

  var editor: some View {
    DesignEditor(settings)
      .imgly.canvasMenuItems { context in
        CanvasMenu.Buttons.selectGroup()
        CanvasMenu.Divider()
        CanvasMenu.Buttons.bringForward()
        CanvasMenu.Buttons.sendBackward()
        CanvasMenu.Divider()
        CanvasMenu.Buttons.duplicate()
        CanvasMenu.Buttons.delete()
      }
      .imgly.modifyCanvasMenuItems { context, items in
        items.addFirst {
          CanvasMenu.Button(id: "my.package.canvasMenu.button.first") { context in
            print("First Button action")
          } label: { context in
            Label("First Button", systemImage: "arrow.backward.circle")
          }
        }
        items.addLast {
          CanvasMenu.Button(id: "my.package.canvasMenu.button.last") { context in
            print("Last Button action")
          } label: { context in
            Label("Last Button", systemImage: "arrow.forward.circle")
          }
        }
        items.addAfter(id: CanvasMenu.Buttons.ID.bringForward) {
          CanvasMenu.Button(id: "my.package.canvasMenu.button.afterBringForward") { context in
            print("After Bring Forward action")
          } label: { context in
            Label("After Bring Forward", systemImage: "arrow.forward.square")
          }
        }
        items.addBefore(id: CanvasMenu.Buttons.ID.sendBackward) {
          CanvasMenu.Button(id: "my.package.canvasMenu.button.beforeSendBackward") { context in
            print("Before Send Backward action")
          } label: { context in
            Label("Before Send Backward", systemImage: "arrow.backward.square")
          }
        }
        items.replace(id: CanvasMenu.Buttons.ID.duplicate) {
          CanvasMenu.Button(id: "my.package.canvasMenu.button.replacedDuplicate") { context in
            print("Replaced Duplicate action")
          } label: { context in
            Label("Replaced Duplicate", systemImage: "arrow.uturn.down.square")
          }
        }
        items.remove(id: CanvasMenu.Buttons.ID.delete)
      }
  }

  @State private var isPresented = false

  var body: some View {
    Button("Use the Editor") {
      isPresented = true
    }
    .fullScreenCover(isPresented: $isPresented) {
      ModalEditor {
        editor
      }
    }
  }
}

#Preview {
  CanvasMenuEditorSolution()
}
```

```swift file=@cesdk_swift_examples/editor-guides-configuration-canvas-menu/CanvasMenuItemEditorSolution.swift reference-only
// swiftlint:disable unused_closure_parameter
// swiftformat:disable unusedArguments
import IMGLYDesignEditor
import SwiftUI

struct CanvasMenuItemEditorSolution: View {
  let settings = EngineSettings(license: secrets.licenseKey,
                                // pass nil for evaluation mode with watermark
                                userID: "")

  var editor: some View {
    DesignEditor(settings)
      .imgly.canvasMenuItems { context in
        CanvasMenu.Buttons.duplicate()
        CanvasMenu.Buttons.delete(
          action: { context in
            context.eventHandler.send(.deleteSelection)
          },
          label: { context in
            Label {
              Text("Delete")
            } icon: {
              Image.imgly.delete
            }
          },
          isEnabled: { context in
            true
          },
          isVisible: { context in
            try context.engine.block.isAllowedByScope(context.selection.block, key: "lifecycle/destroy")
          }
        )
        CanvasMenu.Button(
          id: "my.package.canvasMenu.button.newButton"
        ) { context in
          print("New Button action")
        } label: { context in
          Label("New Button", systemImage: "star.circle")
        } isEnabled: { context in
          true
        } isVisible: { context in
          true
        }
        CustomCanvasMenuItem()
      }
  }

  @State private var isPresented = false

  var body: some View {
    Button("Use the Editor") {
      isPresented = true
    }
    .fullScreenCover(isPresented: $isPresented) {
      ModalEditor {
        editor
      }
    }
  }
}

private struct CustomCanvasMenuItem: CanvasMenu.Item {
  var id: EditorComponentID {
    "my.package.canvasMenu.newCustomItem"
  }

  func body(_ context: CanvasMenu.Context) throws -> some View {
    ZStack {
      RoundedRectangle(cornerRadius: 10)
        .fill(.conicGradient(colors: [.red, .yellow, .green, .cyan, .blue, .purple, .red], center: .center))
      Text("New Custom Item")
        .padding(4)
    }
    .onTapGesture {
      print("New Custom Item action")
    }
  }

  func isVisible(_ context: CanvasMenu.Context) throws -> Bool {
    true
  }
}

#Preview {
  CanvasMenuItemEditorSolution()
}
```

Customize the contextual popup menu through two distinct approaches: complete replacement for strict control, or modification for flexible extension. We configure the canvas menu to streamline editing workflows by controlling which actions appear when users select design elements.

![Canvas Menu](./assets/canvas-menu-ios.png)

Explore the complete code sample on [GitHub](https://github.com/imgly/cesdk-swift-examples/tree/v$UBQ_VERSION$/editor-guides-configuration-canvas-menu).

## Understanding Canvas Menu

The canvas menu displays contextual editing actions when users select design elements. CE.SDK iOS provides two distinct approaches for customizing this menu, each suited for different use cases.
**Architecture**: The canvas menu system consists of:

- **Items** - Protocol-based components (Button, Divider, custom Item)
- **Context** - Access to engine, assetLibrary, eventHandler, and **cached selection**
- **Configuration** - Two mutually exclusive approaches (replacement or modification)

**Key Distinction**:

| Approach | Method | Result | Version Safety |
|----------|--------|--------|----------------|
| **Complete Replacement** | `.imgly.canvasMenuItems` | Exact control over items and order | ✅ Safe - you define everything |
| **Modification** | `.imgly.modifyCanvasMenuItems` | Extends defaults with flexible operations | ⚠️ Caution - default order may change between versions |

**Context Properties**: The `CanvasMenu.Context` provides:

| Property | Type | Available | Description |
|----------|------|-----------|-------------|
| engine | Engine | ✅ | Current editor engine instance |
| eventHandler | EditorEventHandler | ✅ | Handler for editor events |
| assetLibrary | any AssetLibrary | ✅ | Configured asset library |
| **selection** | Selection | ✅ | **Cached selection info (optimized for UI)** |

**Selection Properties** (cached for performance):

| Property | Type | Description |
|----------|------|-------------|
| block | DesignBlockID | Currently selected block |
| parentBlock | DesignBlockID? | Parent of the selected block |
| type | DesignBlockType? | Type of the selected block (e.g., "//ly.img.ubq/text") |
| fillType | FillType? | Fill type, if applicable |
| kind | String? | Kind property of the block |
| siblings | \[DesignBlockID] | Reorderable siblings |
| canMove | Bool | Whether the block can be reordered |

**Critical**: Use `context.selection` instead of querying the engine directly—it's optimized for UI presentation timing.

## Complete Replacement Approach

We use `.imgly.canvasMenuItems` when we need strict control over the exact items and their order. This approach provides version-safe configuration by explicitly defining every item.
**Use when:**

- Need exact control over item ordering
- Building a minimal or custom menu from scratch
- Want version-safe configuration (default order won't affect you)
- Creating a simplified interface for specific workflows

```swift highlight-canvasMenu-canvasMenuItems
.imgly.canvasMenuItems { context in
  CanvasMenu.Buttons.selectGroup()
  CanvasMenu.Divider()
  CanvasMenu.Buttons.bringForward()
  CanvasMenu.Buttons.sendBackward()
  CanvasMenu.Divider()
  CanvasMenu.Buttons.duplicate()
  CanvasMenu.Buttons.delete()
}
```

**Key Points**:

- Complete control over items and order
- No default items included—build from scratch
- Builder pattern with `@CanvasMenu.Builder`
- Items only shown when `isVisible` returns `true`
- **Version-safe**: Changes to the default menu won't affect your configuration

## Modification Approach

We use `.imgly.modifyCanvasMenuItems` when we want to extend or adjust the default configuration without rebuilding from scratch. This approach provides flexibility through operations that add, remove, or reorder items.

**Use when:**

- Extending the default configuration with custom buttons
- Removing unwanted default buttons
- Reordering a few items relative to defaults
- Quick customization without rebuilding the entire menu

### Modification Operations

All modification operations work with the existing default item list:

| Operation | Purpose | Throws on Missing ID |
|-----------|---------|---------------------|
| `items.addFirst(_:)` | Prepend items at the beginning | No |
| `items.addLast(_:)` | Append items at the end | No |
| `items.addBefore(id:_:)` | Insert before a specific item | Yes ✅ |
| `items.addAfter(id:_:)` | Insert after a specific item | Yes ✅ |
| `items.replace(id:_:)` | Replace an existing item | Yes ✅ |
| `items.remove(id:)` | Remove an item by ID | Yes ✅ |

**Important**: Operations targeting specific IDs throw errors if the ID doesn't exist or was already removed.
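The ordering and throwing semantics of these operations can be sketched with a small, self-contained mock. Note that `MockItems`, `MissingIDError`, and the string IDs below are hypothetical stand-ins, not the CE.SDK API; the sketch only models the behavior described in the table above.

```swift
import Foundation

// Conceptual mock, NOT the CE.SDK API: models only the ordering and
// throwing semantics of the modification operations described above.
struct MissingIDError: Error { let id: String }

struct MockItems {
    var ids: [String]

    private func index(of id: String) throws -> Int {
        guard let i = ids.firstIndex(of: id) else { throw MissingIDError(id: id) }
        return i
    }

    // Untargeted operations never throw.
    mutating func addFirst(_ id: String) { ids.insert(id, at: 0) }
    mutating func addLast(_ id: String) { ids.append(id) }

    // Targeted operations throw when the target ID is missing.
    mutating func addBefore(id target: String, _ id: String) throws { ids.insert(id, at: try index(of: target)) }
    mutating func addAfter(id target: String, _ id: String) throws { ids.insert(id, at: try index(of: target) + 1) }
    mutating func replace(id target: String, with id: String) throws { ids[try index(of: target)] = id }
    mutating func remove(id target: String) throws { ids.remove(at: try index(of: target)) }
}

var items = MockItems(ids: ["bringForward", "sendBackward", "duplicate", "delete"])
items.addFirst("custom.first")
try items.addAfter(id: "bringForward", "custom.afterBringForward")
try items.replace(id: "duplicate", with: "custom.duplicate")
try items.remove(id: "delete")
print(items.ids)
// ["custom.first", "bringForward", "custom.afterBringForward", "sendBackward", "custom.duplicate"]

// Targeting an ID that was already removed throws:
do {
    try items.remove(id: "delete")
} catch let error as MissingIDError {
    print("missing ID: \(error.id)") // missing ID: delete
}
```

The same failure mode applies to the real operations: a second `remove` (or an `addAfter`/`replace`) targeting an already-removed default button fails, which is why order-sensitive modifications should be grouped carefully.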
### Adding Items

We prepend custom actions at the beginning:

```swift highlight-canvasMenu-addFirst
items.addFirst {
  CanvasMenu.Button(id: "my.package.canvasMenu.button.first") { context in
    print("First Button action")
  } label: { context in
    Label("First Button", systemImage: "arrow.backward.circle")
  }
}
```

We append custom actions at the end:

```swift highlight-canvasMenu-addLast
items.addLast {
  CanvasMenu.Button(id: "my.package.canvasMenu.button.last") { context in
    print("Last Button action")
  } label: { context in
    Label("Last Button", systemImage: "arrow.forward.circle")
  }
}
```

### Positioning Relative to Existing Items

We insert items before specific buttons:

```swift highlight-canvasMenu-addBefore
items.addBefore(id: CanvasMenu.Buttons.ID.sendBackward) {
  CanvasMenu.Button(id: "my.package.canvasMenu.button.beforeSendBackward") { context in
    print("Before Send Backward action")
  } label: { context in
    Label("Before Send Backward", systemImage: "arrow.backward.square")
  }
}
```

We insert items after specific buttons:

```swift highlight-canvasMenu-addAfter
items.addAfter(id: CanvasMenu.Buttons.ID.bringForward) {
  CanvasMenu.Button(id: "my.package.canvasMenu.button.afterBringForward") { context in
    print("After Bring Forward action")
  } label: { context in
    Label("After Bring Forward", systemImage: "arrow.forward.square")
  }
}
```

### Replacing and Removing Items

We replace default buttons with custom implementations:

```swift highlight-canvasMenu-replace
items.replace(id: CanvasMenu.Buttons.ID.duplicate) {
  CanvasMenu.Button(id: "my.package.canvasMenu.button.replacedDuplicate") { context in
    print("Replaced Duplicate action")
  } label: { context in
    Label("Replaced Duplicate", systemImage: "arrow.uturn.down.square")
  }
}
```

We remove unwanted buttons:

```swift highlight-canvasMenu-remove
items.remove(id: CanvasMenu.Buttons.ID.delete)
```

**Warning**: Default item order may change between editor versions. Use complete replacement if strict ordering is required across versions.
## How Replacement and Modification Interact

The two approaches are **mutually exclusive**—use one or the other, not both. If both are specified, replacement takes precedence.

**Decision Matrix**:

| Need | Approach | Reason |
|------|----------|--------|
| Exact control over order | **Replacement** | Version-safe, explicit control |
| Extend default configuration | **Modification** | Builds on defaults, less code |
| Minimal menu from scratch | **Replacement** | Start with an empty slate |
| Add one custom button | **Modification** | Quick, leverages defaults |
| Version-safe configuration | **Replacement** | Immune to default changes |
| Quick customization | **Modification** | Flexible operations |

**When Replacement Wins**:

```swift
DesignEditor(settings)
  .imgly.canvasMenuItems { _ in
    // This takes precedence
    CanvasMenu.Buttons.duplicate()
    CanvasMenu.Buttons.delete()
  }
  .imgly.modifyCanvasMenuItems { _, items in
    // This is ignored
    items.addFirst { /* ... */ }
  }
```

## Item Types and Creation

The canvas menu supports three item types: predefined buttons, custom buttons, and fully custom items.
### Predefined Buttons

We use predefined buttons for common editing actions:

```swift highlight-canvasMenu-predefinedButton
CanvasMenu.Buttons.duplicate()
```

**Available Predefined Buttons**:

| Button | ID | Description | Default Visibility |
|--------|----|-----------|--------------------|
| `CanvasMenu.Buttons.bringForward` | `.bringForward` | Brings selected block forward | `selection.canMove` |
| `CanvasMenu.Buttons.sendBackward` | `.sendBackward` | Sends selected block backward | `selection.canMove` |
| `CanvasMenu.Buttons.duplicate` | `.duplicate` | Duplicates selected block | Scope: `lifecycle/duplicate` |
| `CanvasMenu.Buttons.delete` | `.delete` | Deletes selected block | Scope: `lifecycle/destroy` |
| `CanvasMenu.Buttons.selectGroup` | `.selectGroup` | Selects parent group | Parent is group |

### Customizing Predefined Buttons

We override default parameters to customize behavior:

```swift highlight-canvasMenu-customizePredefinedButton
CanvasMenu.Buttons.delete(
  action: { context in
    context.eventHandler.send(.deleteSelection)
  },
  label: { context in
    Label {
      Text("Delete")
    } icon: {
      Image.imgly.delete
    }
  },
  isEnabled: { context in
    true
  },
  isVisible: { context in
    try context.engine.block.isAllowedByScope(context.selection.block, key: "lifecycle/destroy")
  }
)
```

**Available Parameters**:

| Parameter | Type | Required | Default | Description |
|-----------|------|----------|---------|-------------|
| action | `Context.To` | ❌ | Default action | Closure executed when tapped |
| label | `Context.To