
Integrating AI Capabilities into CreativeEditor SDK

Learn how to integrate different AI plugins into CE.SDK

This tutorial will guide you through integrating AI-powered generation capabilities into your CreativeEditor SDK application using the @imgly/plugin-ai-apps-web package. You'll learn how to set up various AI providers for generating images, videos, audio, and text.

Prerequisites

  • Basic knowledge of JavaScript/TypeScript and React
  • Familiarity with CreativeEditor SDK
  • API keys for AI services (Anthropic, fal.ai, ElevenLabs, etc.)

1. Project Setup

First, set up your project and install the necessary packages:

# Initialize a new project or use an existing one
npm install @cesdk/cesdk-js
npm install @imgly/plugin-ai-apps-web
# Install individual AI generation packages as needed
npm install @imgly/plugin-ai-image-generation-web
npm install @imgly/plugin-ai-video-generation-web
npm install @imgly/plugin-ai-audio-generation-web
npm install @imgly/plugin-ai-text-generation-web

2. Full Integration Example

Here's a comprehensive example demonstrating how to integrate all AI capabilities with CreativeEditor SDK:

import CreativeEditorSDK from '@cesdk/cesdk-js';
import AiApps from '@imgly/plugin-ai-apps-web';
import { useRef } from 'react';
// Import providers from individual AI generation packages
import Elevenlabs from '@imgly/plugin-ai-audio-generation-web/elevenlabs';
import FalAiImage from '@imgly/plugin-ai-image-generation-web/fal-ai';
import AnthropicProvider from '@imgly/plugin-ai-text-generation-web/anthropic';
import FalAiVideo from '@imgly/plugin-ai-video-generation-web/fal-ai';
// Import middleware utilities
import { uploadMiddleware } from '@imgly/plugin-ai-generation-web';

function App() {
  const cesdk = useRef<CreativeEditorSDK>();
  return (
    <div
      style={{ width: '100vw', height: '100vh' }}
      ref={(domElement) => {
        if (domElement != null) {
          CreativeEditorSDK.create(domElement, {
            license: 'your-license-key',
            callbacks: {
              onUpload: 'local',
              onExport: 'download'
            },
            ui: {
              elements: {
                navigation: {
                  action: {
                    load: true,
                    export: true
                  }
                }
              }
            }
          }).then(async (instance) => {
            cesdk.current = instance;
            // Add asset sources
            await Promise.all([
              instance.addDefaultAssetSources(),
              instance.addDemoAssetSources({ sceneMode: 'Video' })
            ]);
            // Configure AI Apps dock position
            instance.ui.setDockOrder([
              'ly.img.ai/apps.dock',
              ...instance.ui.getDockOrder()
            ]);
            // Add AI options to canvas menu
            instance.ui.setCanvasMenuOrder([
              'ly.img.ai.text.canvasMenu',
              'ly.img.ai.image.canvasMenu',
              ...instance.ui.getCanvasMenuOrder()
            ]);
            // Add the AI Apps plugin with all providers
            instance.addPlugin(
              AiApps({
                providers: {
                  // Text generation and transformation
                  text2text: AnthropicProvider({
                    proxyUrl: 'https://your-server.com/api/anthropic-proxy'
                  }),
                  // Image generation
                  text2image: FalAiImage.RecraftV3({
                    proxyUrl: 'https://your-server.com/api/fal-ai-proxy',
                    // Add upload middleware to store generated images on your server
                    middleware: [
                      uploadMiddleware(async (output) => {
                        // Upload the generated image to your server
                        const response = await fetch(
                          'https://your-server.com/api/store-image',
                          {
                            method: 'POST',
                            headers: { 'Content-Type': 'application/json' },
                            body: JSON.stringify({
                              imageUrl: output.url,
                              metadata: { source: 'ai-generation' }
                            })
                          }
                        );
                        const result = await response.json();
                        // Return the output with your server's URL
                        return {
                          ...output,
                          url: result.permanentUrl
                        };
                      })
                    ]
                  }),
                  image2image: FalAiImage.GeminiFlashEdit({
                    proxyUrl: 'https://your-server.com/api/fal-ai-proxy'
                  }),
                  // Video generation
                  text2video: FalAiVideo.MinimaxVideo01Live({
                    proxyUrl: 'https://your-server.com/api/fal-ai-proxy'
                  }),
                  image2video: FalAiVideo.MinimaxVideo01LiveImageToVideo({
                    proxyUrl: 'https://your-server.com/api/fal-ai-proxy'
                  }),
                  // Audio generation
                  text2speech: Elevenlabs.ElevenMultilingualV2({
                    proxyUrl: 'https://your-server.com/api/elevenlabs-proxy'
                  }),
                  text2sound: Elevenlabs.ElevenSoundEffects({
                    proxyUrl: 'https://your-server.com/api/elevenlabs-proxy'
                  })
                }
              })
            );
            // Create a video scene to utilize all capabilities
            await instance.createVideoScene();
          });
        } else if (cesdk.current != null) {
          cesdk.current.dispose();
        }
      }}
    ></div>
  );
}

export default App;

3. AI Provider Configuration

Each AI provider type serves a specific purpose and creates different types of content:

Text Generation (Anthropic)

text2text: AnthropicProvider({
  proxyUrl: 'https://your-server.com/api/anthropic-proxy'
})

The text provider enables capabilities like:

  • Improving writing quality
  • Fixing spelling and grammar
  • Making text shorter or longer
  • Changing tone (professional, casual, friendly)
  • Translating to different languages
  • Custom text transformations
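Every provider's `proxyUrl` points at a route on your own backend that injects the secret API key, so credentials never ship to the browser. As a rough sketch (the route path and the `ANTHROPIC_API_KEY` environment variable are assumptions for illustration; adapt the URL and framework to your stack), a forwarding handler might look like this:

```javascript
// Sketch of a server-side proxy (Express-style handler assumed; header names
// follow Anthropic's public HTTP API).
function buildUpstreamHeaders(apiKey) {
  return {
    'Content-Type': 'application/json',
    'x-api-key': apiKey, // the secret key stays on the server
    'anthropic-version': '2023-06-01'
  };
}

// Hypothetical route handler: the editor's AnthropicProvider posts here, and
// the handler forwards the request body upstream with the injected headers.
async function anthropicProxyHandler(req, res) {
  const upstream = await fetch('https://api.anthropic.com/v1/messages', {
    method: 'POST',
    headers: buildUpstreamHeaders(process.env.ANTHROPIC_API_KEY),
    body: JSON.stringify(req.body)
  });
  res.status(upstream.status).json(await upstream.json());
}
```

The same pattern applies to the fal.ai and ElevenLabs proxies: a thin route per upstream service that attaches the credential server-side.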

Image Generation (fal.ai)

// Text-to-image generation
text2image: FalAiImage.RecraftV3({
  proxyUrl: 'https://your-server.com/api/fal-ai-proxy'
}),
// Image-to-image transformation
image2image: FalAiImage.GeminiFlashEdit({
  proxyUrl: 'https://your-server.com/api/fal-ai-proxy'
})

Image generation features include:

  • Creating images from text descriptions
  • Multiple style options (realistic, illustration, vector)
  • Various size presets and custom dimensions
  • Transforming existing images based on text prompts

Video Generation (fal.ai)

// Text-to-video generation
text2video: FalAiVideo.MinimaxVideo01Live({
  proxyUrl: 'https://your-server.com/api/fal-ai-proxy'
}),
// Image-to-video transformation
image2video: FalAiVideo.MinimaxVideo01LiveImageToVideo({
  proxyUrl: 'https://your-server.com/api/fal-ai-proxy'
})

Video generation capabilities include:

  • Creating videos from text descriptions
  • Transforming still images into videos
  • Fixed output dimensions (typically 1280×720)
  • 5-second video duration

Audio Generation (ElevenLabs)

// Text-to-speech generation
text2speech: Elevenlabs.ElevenMultilingualV2({
  proxyUrl: 'https://your-server.com/api/elevenlabs-proxy'
}),
// Sound effect generation
text2sound: Elevenlabs.ElevenSoundEffects({
  proxyUrl: 'https://your-server.com/api/elevenlabs-proxy'
})

Audio generation features include:

  • Text-to-speech with multiple voices
  • Multilingual support
  • Adjustable speaking speed
  • Sound effect generation from text descriptions
  • Creating ambient sounds and effects

4. UI Integration

The AI Apps plugin registers several UI components with CreativeEditor SDK:

AI Dock Button

The main entry point for AI features is the AI dock button, which you can position in the dock:

// Add the AI dock component to the beginning of the dock
cesdk.ui.setDockOrder(['ly.img.ai/apps.dock', ...cesdk.ui.getDockOrder()]);
// Or add it at a specific position
const currentOrder = cesdk.ui.getDockOrder();
currentOrder.splice(2, 0, 'ly.img.ai/apps.dock');
cesdk.ui.setDockOrder(currentOrder);

Canvas Menu Options

AI text and image transformations are available in the canvas context menu:

cesdk.ui.setCanvasMenuOrder([
  'ly.img.ai.text.canvasMenu',
  'ly.img.ai.image.canvasMenu',
  ...cesdk.ui.getCanvasMenuOrder()
]);

AI Apps Menu

In video mode, clicking the AI dock button shows cards for all available AI generation types. This menu automatically adjusts based on the available providers you've configured.

5. Using Middleware

The AI generation framework supports middleware that can enhance or modify the generation process. Middleware functions are executed in sequence and can perform operations before generation, after generation, or both.
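Conceptually, each middleware receives the generation input, the options, and a `next` function, and decides what to do before and after delegating down the chain. The following self-contained sketch illustrates this onion-style execution order; the `logBefore` middleware, the `fakeGenerate` core, and the composition helper are made up for the demo and are not part of the plugin API:

```javascript
// Illustrative middleware chain: each middleware can act before and/or after
// calling next(). All names here are invented for the demonstration.
const calls = [];

const logBefore = async (input, options, next) => {
  calls.push('before');                 // runs before generation
  const output = await next(input, options);
  calls.push('after');                  // runs after generation
  return output;
};

// Stand-in for the actual generation step at the end of the chain.
const fakeGenerate = async (input) => {
  calls.push('generate');
  return { url: `https://example.com/${input.prompt}.png` };
};

// Compose right-to-left so the first middleware wraps everything after it.
const run = [logBefore].reduceRight(
  (next, mw) => (input, options) => mw(input, options, next),
  fakeGenerate
);

run({ prompt: 'sunset' }, {}).then(() => {
  console.log(calls); // → [ 'before', 'generate', 'after' ]
});
```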

Common Middleware Types

Upload Middleware

The uploadMiddleware is useful when you need to store generated content on your server before it's used:

import { uploadMiddleware } from '@imgly/plugin-ai-generation-web';

// In your provider configuration
middleware: [
  uploadMiddleware(async (output) => {
    // Upload the output to your server
    const response = await fetch('https://your-server.com/api/upload', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ url: output.url })
    });
    const result = await response.json();
    // Return updated output with your server's URL
    return {
      ...output,
      url: result.permanentUrl
    };
  })
];

Use cases for upload middleware:

  • Storing generated assets in your own cloud storage
  • Adding watermarks or processing assets before use
  • Tracking/logging generated content
  • Implementing licensing or rights management

Rate Limiting Middleware

To prevent abuse of AI services, you can implement rate limiting:

import { rateLimitMiddleware } from '@imgly/plugin-ai-generation-web';

// In your provider configuration
middleware: [
  rateLimitMiddleware({
    maxRequests: 10,
    timeWindowMs: 60 * 60 * 1000, // 1 hour
    onRateLimitExceeded: (input, options, info) => {
      // Show a notice to the user
      console.log(
        `Rate limit reached: ${info.currentCount}/${info.maxRequests}`
      );
      return false; // Reject the request
    }
  })
];

Custom Error Handling Middleware

You can create custom middleware for error handling:

const errorMiddleware = async (input, options, next) => {
  try {
    return await next(input, options);
  } catch (error) {
    // Handle error (show UI notification, log, etc.)
    console.error('Generation failed:', error);
    // You can rethrow or return a fallback
    throw error;
  }
};
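Following the same `(input, options, next)` pattern, you could also absorb transient failures with a retry wrapper. This is a sketch, not a built-in of the plugin; the attempt count is arbitrary, and `next` stands in for the rest of the chain:

```javascript
// Hypothetical retry middleware: re-invokes next() up to maxAttempts times
// before giving up and rethrowing the last error.
const retryMiddleware =
  (maxAttempts = 3) =>
  async (input, options, next) => {
    let lastError;
    for (let attempt = 1; attempt <= maxAttempts; attempt++) {
      try {
        return await next(input, options);
      } catch (error) {
        lastError = error;
        console.warn(`Generation attempt ${attempt} failed, retrying...`);
      }
    }
    throw lastError; // all attempts exhausted
  };
```

With a flaky `next` that fails once and then succeeds, the wrapper resolves on the second attempt instead of surfacing the first error to the user.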

Middleware Order

The order of middleware matters, because the functions execute in the sequence provided:

middleware: [
  // Executes first
  rateLimitMiddleware({ maxRequests: 10, timeWindowMs: 3600000 }),
  // Executes second (only if rate limit wasn't exceeded)
  loggingMiddleware(),
  // Executes third (after generation completes)
  uploadMiddleware(async (output) => {
    /* ... */
  })
];

Conclusion

By following this tutorial, you've learned how to integrate powerful AI generation capabilities into your CreativeEditor SDK application. You now know how to:

  1. Set up various AI providers for different content types
  2. Configure the AI Apps plugin with all necessary providers
  3. Integrate AI features into the editor UI

This integration enables your users to create impressive content with AI assistance directly within your application, enhancing their creative workflow and the overall value of your product.

For more details about each AI provider, refer to the individual package documentation: