This page covers common issues you may encounter when deploying and running the CE.SDK Renderer, along with solutions and diagnostic steps.
## Verifying Your Environment
Before debugging specific issues, verify that your server meets all requirements by running our GPU requirements checker script inside your deployment environment (not inside the container). Download check-requirements.sh and run it:
```shell
bash check-requirements.sh
```

The script checks for:
- NVIDIA driver and GPU detection
- NVENC hardware encoder availability
- EGL context support (required for GPU-accelerated rendering)
- Required system libraries
- Docker GPU runtime configuration
All items marked with a red X need to be addressed before the Renderer can use GPU acceleration. See the Server setup guide for installation instructions.
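A quick end-to-end check that the Docker GPU runtime is wired up correctly is to run `nvidia-smi` inside a CUDA base container (a sketch; the `nvidia/cuda` image tag is illustrative and not part of the CE.SDK Renderer itself):

```shell
# If this prints the GPU table, the container runtime can see the GPU
sudo docker run --rm --runtime=nvidia --gpus all \
  nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```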
## Reproduction baseline
Our live demo is powered by a reference deployment of the CE.SDK Renderer. You can upload your scene or archive there to compare results against a known-good baseline before investigating further.
## GPU and Rendering Issues

### GPU is not being used for rendering
**Symptoms:** Exports are significantly slower than expected, or the verbose logs show the Renderer falling back to CPU rendering.
Possible causes and solutions:
- **Missing `--runtime=nvidia` flag:** When using `docker run`, you must pass both `--runtime=nvidia` and `--gpus all`. Using only `--gpus all` is not sufficient.

  ```shell
  # Correct
  sudo docker run --runtime=nvidia --gpus all ...

  # Incorrect - GPU will not be available for rendering
  sudo docker run --gpus all ...
  ```

- **Missing `runtime: nvidia` in Docker Compose:** When using Docker Compose, add `runtime: nvidia` to your service configuration:

  ```yaml
  services:
    renderer:
      image: docker.io/imgly/cesdk-renderer:1.73.0
      runtime: nvidia
      deploy:
        resources:
          reservations:
            devices:
              - driver: nvidia
                count: all
                capabilities: [gpu]
  ```

- **NVIDIA Container Runtime not installed or configured:** Run the requirements checker script to verify. If it is missing, install the NVIDIA Container Toolkit and configure the runtime:

  ```shell
  sudo nvidia-ctk runtime configure --runtime=docker
  sudo systemctl restart docker
  ```

- **Kubernetes / containerd environments:** If you're using containerd instead of Docker, ensure the NVIDIA device plugin is installed and your pod spec requests GPU resources. The `docker` and `nvidia-container-runtime` checks from the script may show failures, which is expected in a Kubernetes environment.
### EGL context creation failure

**Error:** `Error encountered while creating an EGL hardware-accelerated context, falling back to CPU rendering: EGL initialize error: UNKNOWN`
This means the Renderer could not create a hardware-accelerated OpenGL context. The Renderer will still work but will fall back to CPU rendering, which is significantly slower for video exports.
- Verify the NVIDIA driver is installed on the host: `nvidia-smi`
- Verify the NVIDIA EGL libraries are available: check for `libEGL_nvidia.so` and the vendor config at `/usr/share/glvnd/egl_vendor.d/10_nvidia.json`
- Ensure the container has GPU access (see above)
- If running on a machine without a GPU, this message is expected. Image exports will work but be slower, and video export performance depends on available CPU resources.
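These checks can be scripted from a shell on the host, for example (a sketch; exact library locations depend on your distribution and how the driver was packaged):

```shell
nvidia-smi                                        # driver is installed and sees the GPU
ldconfig -p | grep libEGL_nvidia                  # NVIDIA EGL library is registered
cat /usr/share/glvnd/egl_vendor.d/10_nvidia.json  # glvnd vendor config points at the NVIDIA EGL driver
```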
### Forcing GPU-only rendering for validation

To verify GPU acceleration is working correctly, pass `--render-device gpu` to the Renderer. This causes the export to fail if GPU acceleration is unavailable, rather than silently falling back to CPU:
```shell
sudo docker run --runtime=nvidia --gpus all \
  docker.io/imgly/cesdk-renderer:1.73.0 \
  --render-device gpu \
  -i /input/scene.scene -o /output/result.mp4
```

If this command fails, your GPU setup needs to be fixed before you can benefit from GPU acceleration.
## Video Encoding Performance

### Video exports are slower than expected
The CE.SDK Renderer uses the CPU for video encoding by default, because CPU encoding supports higher output bitrates than GPU-based NVIDIA NVENC encoding.
Key factors that affect encoding performance:
| Factor | Impact |
|---|---|
| Output bitrate | Higher bitrates require more CPU processing power |
| CPU cores | More cores speed up CPU-based encoding |
| Scene complexity | More layers, effects and transitions increase rendering time |
| Output resolution | Higher resolutions (4K) take longer |
If exports are taking much longer than expected:
- **Check that GPU rendering is active:** The EGL fallback warning is always printed to the output when GPU acceleration is unavailable. If you see it, GPU rendering is not being used.
- **Compare against the baseline:** Export the same scene using our live demo to establish a reference time.
- **Consider your CPU power:** If your instance has few CPU cores, CPU-based encoding at high bitrates will be slow. Options:
  - Use an instance with more CPU cores
  - Force GPU-based encoding (bitrate limited by resolution, see below)
  - Lower the requested output bitrate
### Forcing GPU-based video encoding

You can force the Renderer to use the GPU for H.264 encoding by setting the `UBQ_AV_OVERRIDE_H264_ENCODER` environment variable. This speeds up encoding but limits the maximum output bitrate depending on the output resolution, due to NVIDIA NVENC hardware constraints:
```shell
# Open-source image
sudo docker run --runtime=nvidia --gpus all \
  -e UBQ_AV_OVERRIDE_H264_ENCODER=vulkanh264enc \
  docker.io/imgly/cesdk-renderer:1.73.0 \
  -i /input/scene.scene -o /output/result.mp4
```

```shell
# Licensed codec image
sudo docker run --runtime=nvidia --gpus all \
  -e UBQ_AV_OVERRIDE_H264_ENCODER=fluhwvanvench264enc \
  docker.io/imgly/cesdk-renderer-avlicensed:1.73.0 \
  -i /input/scene.scene -o /output/result.mp4
```

### Choosing between CPU and GPU encoding
| Encoding Mode | Bitrate Limit | Speed | When to Use |
|---|---|---|---|
| CPU (default) | Unlimited | Depends on CPU cores | High-quality output at any bitrate |
| GPU (NVENC) | Resolution-dependent | Fast | Fast exports where the NVENC bitrate limit is acceptable |
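To decide between the two modes empirically, you can time the same export in both, reusing the commands shown earlier on this page (a sketch; the input and output paths are illustrative):

```shell
# CPU encoding (default)
time sudo docker run --runtime=nvidia --gpus all \
  docker.io/imgly/cesdk-renderer:1.73.0 \
  -i /input/scene.scene -o /output/cpu.mp4

# GPU (NVENC) encoding
time sudo docker run --runtime=nvidia --gpus all \
  -e UBQ_AV_OVERRIDE_H264_ENCODER=vulkanh264enc \
  docker.io/imgly/cesdk-renderer:1.73.0 \
  -i /input/scene.scene -o /output/gpu.mp4
```

Compare both the wall-clock times and the resulting files to confirm the NVENC bitrate limit is acceptable for your output.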
## Licensed Codec Issues

### Open-source vs. licensed codec differences
| Aspect | Open-source (cesdk-renderer) | Licensed (cesdk-renderer-avlicensed) |
|---|---|---|
| Patent coverage | None (use at own risk) | H.264, H.265, AAC covered |
| Output quality | Standard | Higher quality encoding |
| Performance | Faster | Slower due to higher quality encoding and license handshake |
| Network requirement | None when using archives or bundled assets | License activation requires network access |
| Use case | Evaluation, prototyping | Production, commercial use |
For details, see the Licensed codec setup page.
## Container and Deployment Issues

### Container fails to start
- **Image not found:** Verify you are pulling the correct image name and tag. Licensed codec images require authentication:

  ```shell
  sudo docker login container.img.ly -u "oauth" -p "YOUR-API-KEY"
  ```

- **Architecture mismatch:** The CE.SDK Renderer is built for `x86_64` (AMD64) Linux only. ARM-based instances (e.g. AWS Graviton) are not supported.
### Using containerd instead of Docker
If your environment uses containerd (common in Kubernetes), the Docker-specific checks from the requirements script will fail. This is expected. Ensure:
- The NVIDIA device plugin for Kubernetes is installed
- Your pod spec requests `nvidia.com/gpu` resources
- The container runtime is configured to use the NVIDIA runtime class
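As a sketch, a minimal pod spec that requests one GPU and the NVIDIA runtime class might look like the following. The runtime class name, volume mounts, and arguments are assumptions that depend on your cluster setup; only the image name and the `-i`/`-o` flags come from the examples on this page:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cesdk-renderer
spec:
  runtimeClassName: nvidia          # provided by the NVIDIA device plugin / GPU Operator setup
  containers:
    - name: renderer
      image: docker.io/imgly/cesdk-renderer:1.73.0
      args: ["-i", "/input/scene.scene", "-o", "/output/result.mp4"]
      resources:
        limits:
          nvidia.com/gpu: 1         # request one GPU from the device plugin
```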
## Getting Help
If you’ve verified your environment, compared results against the baseline, and still encounter issues:
- Run the requirements checker script and include the output
- Include the `--verbose` logs from the Renderer
- Note the Renderer container version and variant you're using
- Contact support at support@img.ly