Moderate Content

Use CE.SDK’s engine APIs to extract images and text from designs in Node.js, then integrate with third-party moderation services to detect inappropriate content.

CE.SDK does not provide prebuilt content moderation workflows. Instead, it provides powerful engine APIs that make it straightforward to extract images and text from designs for moderation by third-party services of your choice. This approach is intentional: content moderation requirements are highly specific to each business, including which categories to check, what thresholds to apply, and which services to use. Similarly, when and where to check content (during editing, before export, on upload) varies based on your workflow and user experience goals.

Content moderation helps maintain quality standards and comply with content policies. Unlike built-in validation rules that check technical aspects like resolution or layout, content moderation requires integration with external AI-powered services (Sightengine, AWS Rekognition, OpenAI Moderation API) to analyze visual content (images for weapons, drugs, nudity) and textual content (profanity, hate speech, threats).

This guide demonstrates how to use CE.SDK’s engine APIs to find and extract content from designs in Node.js, send it to moderation APIs, and validate designs before export in server environments.

Finding Content in Designs

Locate all images and text blocks, then extract the data needed for moderation.

Images: Use findByKind('image') to find image blocks, then extract URLs from fill properties:

/**
 * Extracts the image URL from a block's fill property
 */
function getImageUrl(engine, blockId) {
  try {
    const imageFill = engine.block.getFill(blockId);
    const fillImageURI = engine.block.getString(
      imageFill,
      'fill/image/imageFileURI'
    );
    if (fillImageURI) {
      return fillImageURI;
    }
    const sourceSet = engine.block.getSourceSet(
      imageFill,
      'fill/image/sourceSet'
    );
    if (sourceSet && sourceSet.length > 0) {
      return sourceSet[0].uri;
    }
    return null;
  } catch (error) {
    return null;
  }
}

Process all images by checking each URL against the moderation API:

/**
 * Checks all images in the design for inappropriate content
 */
async function checkImageContent(engine) {
  const imageBlockIds = engine.block.findByKind('image');
  const imageBlocksData = imageBlockIds
    .map((blockId) => ({
      blockId,
      url: getImageUrl(engine, blockId),
      blockType: engine.block.getType(blockId),
      blockName: engine.block.getName(blockId)
    }))
    .filter((data) => data.url !== null);
  const imagesWithValidity = await Promise.all(
    imageBlocksData.map(async (imageBlockData) => {
      const categories = await checkImageContentAPI(imageBlockData.url);
      return categories.map((checkResult) => ({
        ...checkResult,
        blockId: imageBlockData.blockId,
        blockType: imageBlockData.blockType,
        blockName: imageBlockData.blockName,
        url: imageBlockData.url,
        id: `${imageBlockData.blockId}-${checkResult.name}`
      }));
    })
  );
  return imagesWithValidity.flat();
}

Text: Use findByType('//ly.img.ubq/text') to find text blocks, then extract content using getString():

/**
 * Extracts text content from a text block
 */
function getTextContent(engine, blockId) {
  try {
    return engine.block.getString(blockId, 'text/text');
  } catch (error) {
    return '';
  }
}

Process all text blocks by checking each text string against the moderation API:

/**
 * Checks all text blocks in the design for inappropriate content
 */
async function checkTextContent(engine) {
  const textBlockIds = engine.block.findByType('//ly.img.ubq/text');
  const textBlocksData = textBlockIds
    .map((blockId) => ({
      blockId,
      text: getTextContent(engine, blockId),
      blockType: engine.block.getType(blockId),
      blockName: engine.block.getName(blockId)
    }))
    .filter((data) => data.text.trim().length > 0);
  const textsWithValidity = await Promise.all(
    textBlocksData.map(async (textBlockData) => {
      const categories = await checkTextContentAPI(textBlockData.text);
      return categories.map((checkResult) => ({
        ...checkResult,
        blockId: textBlockData.blockId,
        blockType: textBlockData.blockType,
        blockName: textBlockData.blockName,
        text: textBlockData.text,
        id: `${textBlockData.blockId}-${checkResult.name}`
      }));
    })
  );
  return textsWithValidity.flat();
}

Both processes use Promise.all() to handle multiple items concurrently.
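
Note that Promise.all() fires every request at once, which can trip rate limits on large designs. Below is a minimal batching sketch; the batch size is an assumption you should tune to your provider's limits:

/**
 * Runs checks in fixed-size batches to avoid moderation API rate limits.
 * Batches run sequentially; items within a batch run concurrently.
 */
async function processInBatches(items, check, batchSize = 5) {
  const results = [];
  for (let i = 0; i < items.length; i += batchSize) {
    const batch = items.slice(i, i + batchSize);
    results.push(...(await Promise.all(batch.map(check))));
  }
  return results;
}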

Integrating Moderation APIs

Integrate external AI services (Sightengine, AWS Rekognition, OpenAI Moderation API) to analyze content. In server environments you can call moderation APIs directly, keeping credentials in environment variables; when requests originate from client code, proxy them through your backend so API keys are never exposed.

Image Moderation — This example shows a simulated API call that you’ll replace with your actual moderation service. The function returns content categories, each with a validation state derived from a confidence score:

/**
 * Simulates an image content moderation API call
 * In production, replace this with your actual moderation service
 */
async function checkImageContentAPI(url) {
  if (imageCache.has(url)) {
    return imageCache.get(url);
  }
  // Simulate API delay
  await new Promise((resolve) => setTimeout(resolve, 100));
  const results = [
    {
      name: 'Weapons',
      description: 'Handguns, rifles, machine guns, threatening knives',
      state: percentageToState(Math.random() * 0.3)
    },
    {
      name: 'Alcohol',
      description: 'Wine, beer, cocktails, champagne',
      state: percentageToState(Math.random() * 0.4)
    },
    {
      name: 'Drugs',
      description: 'Cannabis, syringes, glass pipes, bongs, pills',
      state: percentageToState(Math.random() * 0.2)
    },
    {
      name: 'Nudity',
      description: 'Raw or partial nudity',
      state: percentageToState(Math.random() * 0.3)
    }
  ];
  imageCache.set(url, results);
  return results;
}

In production, replace the simulated logic with a real API call to services like Sightengine or AWS Rekognition. Store API keys securely using environment variables.
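
As an illustration, here is a sketch of such a call against Sightengine’s check.json endpoint. The model names and response fields are assumptions drawn from Sightengine’s public documentation; verify them for the models you enable:

/**
 * Sketch of a real image moderation call against Sightengine's
 * check.json endpoint. Model names and response fields vary by
 * model version — verify them against the Sightengine docs.
 */
async function checkImageContentAPI(url) {
  if (imageCache.has(url)) {
    return imageCache.get(url);
  }
  const params = new URLSearchParams({
    url,
    models: 'weapon,alcohol,recreational_drug,nudity',
    api_user: process.env.SIGHTENGINE_API_USER,
    api_secret: process.env.SIGHTENGINE_API_SECRET
  });
  const response = await fetch(
    `https://api.sightengine.com/1.0/check.json?${params}`
  );
  if (!response.ok) {
    throw new Error(`Image moderation failed: HTTP ${response.status}`);
  }
  const data = await response.json();
  // Map provider scores onto the category/state shape used above.
  // The score fields below are illustrative — check the response
  // format for the models you actually enable.
  const results = [
    {
      name: 'Alcohol',
      description: 'Wine, beer, cocktails, champagne',
      state: percentageToState(data.alcohol?.prob ?? 0)
    },
    {
      name: 'Nudity',
      description: 'Raw or partial nudity',
      state: percentageToState(data.nudity?.raw ?? 0)
    }
  ];
  imageCache.set(url, results);
  return results;
}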

Text Moderation — Similar to image moderation, this simulates an API call that checks text content. Replace this with your actual text moderation service:

/**
 * Simulates a text content moderation API call
 * In production, replace this with your actual moderation service
 */
async function checkTextContentAPI(text) {
  if (textCache.has(text)) {
    return textCache.get(text);
  }
  // Simulate API delay
  await new Promise((resolve) => setTimeout(resolve, 100));
  const results = [
    {
      name: 'Profanity',
      description: 'Offensive or vulgar language',
      state: percentageToState(Math.random() * 0.3)
    },
    {
      name: 'Hate Speech',
      description: 'Content promoting hatred or discrimination',
      state: percentageToState(Math.random() * 0.2)
    },
    {
      name: 'Threats',
      description: 'Threatening or violent language',
      state: percentageToState(Math.random() * 0.1)
    }
  ];
  textCache.set(text, results);
  return results;
}

In production, replace the simulation with calls to services like OpenAI Moderation API or Perspective API.
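
For example, a sketch against OpenAI’s /v1/moderations endpoint. The category keys shown are a small subset of the response; check OpenAI’s documentation for the full list and current model options:

/**
 * Sketch of a real text moderation call against OpenAI's
 * /v1/moderations endpoint. The category keys used here are a
 * subset of the response — consult the OpenAI docs for the full set.
 */
async function checkTextContentAPI(text) {
  if (textCache.has(text)) {
    return textCache.get(text);
  }
  const response = await fetch('https://api.openai.com/v1/moderations', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({ input: text })
  });
  if (!response.ok) {
    throw new Error(`Text moderation failed: HTTP ${response.status}`);
  }
  const data = await response.json();
  const scores = data.results[0].category_scores;
  // Map provider category scores onto the shared result shape
  const results = [
    {
      name: 'Hate Speech',
      description: 'Content promoting hatred or discrimination',
      state: percentageToState(scores.hate ?? 0)
    },
    {
      name: 'Threats',
      description: 'Threatening or violent language',
      state: percentageToState(scores.violence ?? 0)
    }
  ];
  textCache.set(text, results);
  return results;
}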

Processing Results: Map confidence scores to severity levels (failed > 0.8, warning > 0.4, success ≤ 0.4):

/**
 * Maps confidence scores to validation states
 */
function percentageToState(percentage) {
  if (percentage > 0.8) {
    return 'failed';
  } else if (percentage > 0.4) {
    return 'warning';
  } else {
    return 'success';
  }
}

Implement caching using Map or Redis to avoid redundant API calls for the same content.
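
The functions above assume imageCache and textCache already exist. A minimal in-memory version:

// In-memory caches keyed by image URL and text string.
// For multi-process deployments, back these with Redis instead
// (see Best Practices below).
const imageCache = new Map();
const textCache = new Map();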

Displaying Validation Results

Group results by severity (failed, warning, success) and display them in the console:

/**
 * Displays validation results grouped by severity
 */
function displayResults(results) {
  const failed = results.filter((r) => r.state === 'failed');
  const warnings = results.filter((r) => r.state === 'warning');
  const passed = results.filter((r) => r.state === 'success');
  console.log('\n=== Content Moderation Results ===\n');
  console.log(`Total checks: ${results.length}`);
  console.log(`  Violations: ${failed.length}`);
  console.log(`  Warnings: ${warnings.length}`);
  console.log(`  Passed: ${passed.length}\n`);
  if (failed.length > 0) {
    console.log('❌ VIOLATIONS:');
    failed.forEach((result) => {
      const content = result.url || result.text?.substring(0, 50) + '...';
      console.log(
        `  - ${result.name}: Block ${result.blockId} (${result.blockType})`
      );
      console.log(`    ${result.description}`);
      console.log(`    Content: ${content}\n`);
    });
  }
  if (warnings.length > 0) {
    console.log('⚠️ WARNINGS:');
    warnings.forEach((result) => {
      const content = result.url || result.text?.substring(0, 50) + '...';
      console.log(
        `  - ${result.name}: Block ${result.blockId} (${result.blockType})`
      );
      console.log(`    ${result.description}`);
      console.log(`    Content: ${content}\n`);
    });
  }
  return { failed, warnings, passed };
}

Export Validation

Validate content before allowing export in server workflows:

/**
 * Validates content before allowing export
 */
async function validateForExport(engine) {
  console.log('Validating design before export...');
  const imageResults = await checkImageContent(engine);
  const textResults = await checkTextContent(engine);
  const allResults = [...imageResults, ...textResults];
  const { failed } = displayResults(allResults);
  if (failed.length > 0) {
    console.error(
      `\n❌ Export blocked: ${failed.length} policy violation(s) detected.\n`
    );
    return false;
  }
  console.log('\n✅ Content validation passed. Export allowed.\n');
  return true;
}

This pattern ensures content is checked at the critical point before generating final outputs.

Server Environment Setup

Configuration: Use environment variables to store API credentials and load scenes from templates:

import CreativeEngine from '@cesdk/node';

/**
 * Main demonstration function
 */
async function main() {
  console.log('🚀 Starting CE.SDK Content Moderation Demo...\n');
  // Initialize the engine
  const config = {
    license: process.env.CESDK_LICENSE || '',
    logger: (level, ...args) => {
      if (level === 'error' || level === 'warn') {
        console.log(`[${level}]`, ...args);
      }
    }
  };
  const engine = await CreativeEngine.init(config);
  console.log('✓ Engine initialized\n');
  // Load a sample scene/template
  // In production, use your actual scene URL or template
  const templateURL =
    'https://cdn.img.ly/packages/imgly/cesdk-js/latest/assets/templates/cesdk_postcard_1.scene';
  await engine.scene.loadFromURL(templateURL);
  const page = engine.block.findByType('page')[0];
  console.log('✓ Scene loaded\n');

Headless Execution: The server example loads a template scene that already contains content to moderate:

  // The loaded scene already contains images and text
  // In production, you would modify the scene content as needed
  console.log('✓ Scene contains sample content for moderation\n');

Command-Line Interface: Run validation checks from the command line:

  // Validate content before export
  const canExport = await validateForExport(engine);
  if (canExport) {
    // Export the design
    const blob = await engine.block.export(page, 'image/png');
    const buffer = Buffer.from(await blob.arrayBuffer());
    // Save to file (optional)
    const fs = await import('fs/promises');
    await fs.writeFile('output.png', buffer);
    console.log('✓ Design exported to output.png\n');
  }
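
The snippets in this section all run inside main(). A sketch of how the function might close out, disposing of the engine and surfacing failures (the exit code convention is an assumption, not a CE.SDK requirement):

  // Release engine resources once validation and export are done
  engine.dispose();
}

main().catch((error) => {
  console.error('Moderation run failed:', error);
  process.exit(1);
});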

Best Practices

Security: Store API keys in environment variables, never in code. Use .env files for local development and secret managers in production.
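
For example, a package such as dotenv can load a local .env file before any credentials are read (a sketch; install dotenv first):

// Load variables from a local .env file into process.env before
// reading CESDK_LICENSE or any moderation API credentials.
import 'dotenv/config';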

Performance: Cache results using Redis or similar for production workloads. Process items in parallel with Promise.all().
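
When several worker processes need to share results, a Redis-backed cache can replace the in-memory Map. A sketch using the node-redis client (the key prefix and TTL are assumptions):

import { createClient } from 'redis';

const redis = createClient({ url: process.env.REDIS_URL });
await redis.connect();

/**
 * Returns a cached moderation result, or runs the check and
 * caches its result for 24 hours.
 */
async function cachedCheck(key, check) {
  const cached = await redis.get(`moderation:${key}`);
  if (cached) {
    return JSON.parse(cached);
  }
  const result = await check();
  await redis.set(`moderation:${key}`, JSON.stringify(result), {
    EX: 60 * 60 * 24 // expire after 24 hours
  });
  return result;
}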

Error Handling: Implement proper error handling with retries for API calls. Log all checks for compliance and auditing.
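
A minimal retry helper with exponential backoff (the attempt count and delays are assumptions; tune them to your provider's rate limits):

/**
 * Retries an async operation with exponential backoff:
 * waits 500ms, 1000ms, 2000ms, ... between attempts.
 */
async function withRetry(operation, maxAttempts = 3) {
  let lastError;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await operation();
    } catch (error) {
      lastError = error;
      if (attempt < maxAttempts) {
        const delay = 500 * 2 ** (attempt - 1);
        await new Promise((resolve) => setTimeout(resolve, delay));
      }
    }
  }
  throw lastError;
}

// Usage: const categories = await withRetry(() => checkImageContentAPI(url));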

Timing: Validate at export time for the best balance between policy enforcement and creative freedom.

Troubleshooting

Checks not running: Verify that the engine is initialized, the scene contains content, the moderation API endpoint is reachable, and credentials are valid.

Content not found: Ensure blocks have the correct kind or type, images have fills, text blocks aren’t empty, and the scene has finished loading.

API errors: Check API key validity, endpoint URL, image URL accessibility, rate limits, and service-specific error codes.

Inconsistent results: Verify caching works correctly, threshold values are appropriate, and API responses parse correctly.

API Reference

Finding Content:

  • engine.block.findByKind('image') — Find all image blocks
  • engine.block.findByType('//ly.img.ubq/text') — Find all text blocks

Getting Data:

  • engine.block.getFill(blockId) — Get fill object for image
  • engine.block.getString(id, 'text/text') — Get text content
  • engine.block.getString(fill, 'fill/image/imageFileURI') — Get image URL
  • engine.block.getSourceSet(fill, 'fill/image/sourceSet') — Get image source set

Export:

  • engine.block.export(blockId, mimeType) — Export block as blob

Next Steps

Now that you understand content moderation in Node.js, explore related CE.SDK features and guides.