How to Build a TikTok Clone for iOS with Swift & VideoEditor SDK

Learn how to build a TikTok clone for iOS with Swift and VideoEditor SDK with this step-by-step in-depth tutorial.


With TikTok facing a possible ban in the US, we had better be waiting in the wings with an alternative ready to go, set to scoop up those millions of hobby dancers, micro bloggers, and would-be influencers.

Today, we'll show you how you can utilize VideoEditor SDK to create your very own version of this beloved app. With VE.SDK, you can easily create a platform to display your dance moves, lip-syncing skills, or your pet's tricks. Whether you're a developer hoping to enhance your skill set or just curious about the mechanics of social media apps, our tutorial will guide you through the steps necessary to create your own TikTok clone.

Who knows, your new creation might just turn you into the next big social media sensation! So let's get started and explore the world of social media app creation.

In this article you will see how to use Swift and the IMG.LY VideoEditor SDK to create a simple iOS app that can record, edit and view short videos.

Record and trim your video, apply filters and doodles: In this tutorial, you will start building your own TikTok clone. 🎉

This example uses three view controllers to provide the basic features of a video sharing application like TikTok.

  1. First you'll see a camera controller with some real time filtering where you can make clips.
  2. Next you'll configure an editing controller to use with clips.
  3. Finally you'll see how to make a playback controller. For playback we'll just use some standard AVFoundation code but for the other two, we'll leverage the VideoEditorSDK.

By using the VideoEditorSDK to handle most of the hard work, you can build a robust app quickly and then add more features, filters, and options as you need them.

This app uses the CameraViewController from the VideoEditorSDK to record some video and apply basic filters. It then passes that data to the VideoEditViewController for more extensive editing. Finally, it exports the finished video, with all of the edits, as a movie file.

Setting Up Your Project

To add the VideoEditorSDK to an Xcode project, include it with your other package dependencies. Use the URL below to add the package, either to your Package.swift file or using File > Add Packages...


You can also include the SDK using Cocoapods or manually.
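If you are adding the dependency in a Package.swift manifest rather than through the Xcode UI, the entry looks something like the sketch below. The repository URL and version shown are assumptions based on IMG.LY's published Swift Package builds; confirm both against the current VideoEditor SDK installation docs.

```swift
// Package.swift (excerpt) — URL and version are assumptions;
// verify against the current VideoEditor SDK documentation.
dependencies: [
    .package(url: "https://github.com/imgly/vesdk-ios-build", from: "11.0.0")
]
```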

Once Xcode downloads and resolves the packages, you can import VideoEditorSDK in any of the classes in your app. Though this article uses UIKit, the VideoEditor SDK also supports SwiftUI.

Asking for Permissions

Before you can start, you must ask the user's permission to record audio and video, as well as to access the user's photo library. If you don't secure these permissions properly before trying to use the camera, the app may crash, and it will not get through Apple's app review. During development, the app will show console messages but continue to function for some missing permissions. Once the app is deployed, though, it is more likely to crash. The dialog requesting permissions is a system dialog and looks like this:

You do not have the ability to change this dialog. You can supply the reason you are asking for the permission using a key in the info.plist file in the Xcode project. In the example above, "Lets you record videos" is in the info.plist. For the video camera you will have to ask for both video and audio permission. The first step is to add a short message to the user in the info.plist.

For video the key to add is NSCameraUsageDescription and for the microphone you need to add NSMicrophoneUsageDescription. Whenever your app attempts to use the camera, iOS will first check to see if the user has already granted access. If not, it will then display the dialogs using your entries in the info.plist, or crash if you have not provided entries. The user might be surprised by these dialog boxes and may accidentally tap Don't Allow while trying to quickly launch the camera. It is better practice to set up some kind of onboarding view and secure the permissions before you need them. You might have a view that explains what you are going to ask for and then displays the permission dialog.

switch AVCaptureDevice.authorizationStatus(for: .video) {
  case .authorized:
    //the user has authorized permission!
    break
  case .denied:
    //the user has previously denied permission!
    //perhaps we should ask them again
    break
  case .restricted:
    //the user cannot grant permission!
    break
  case .notDetermined:
    //the user has never been asked for permission
    //so let's ask them now
    AVCaptureDevice.requestAccess(for: .video) { accessGranted in
      if accessGranted {
        //the user has authorized permission!
      } else {
        //the user has denied permission!
      }
    }
  @unknown default:
    break
}

The snippet above lets your app read the permission status for the video camera and ask for permission if it has not been granted. To request access to the microphone pass .audio to the AVCaptureDevice.authorizationStatus(for:) and AVCaptureDevice.requestAccess(for:) functions. The AVCaptureDevice.requestAccess(for:) command is what displays the system dialog actually requesting access. The accessGranted variable reports back to your app what the user chose.

The .restricted case covers situations where policies on the phone, such as parental controls, prohibit the user from granting permission to your app. In addition to asking for permissions during onboarding, it is good practice to check for permission every time before you attempt to launch the camera. The user can change permissions using the Settings app on their phone at any time. If the user has denied permissions and you present the video camera anyway, your app will record black video frames and silence on audio tracks.

In addition to asking for camera and microphone permissions, your app will probably want to access the photos on the user's phone. You will need to add an entry to the info.plist for NSPhotoLibraryUsageDescription. As with the camera and the microphone, the dialog will appear when the VideoEditor SDK first attempts access, but you can give your user some ability to grant permission during onboarding. For the user's photos you can check the authorization status using

let status = PHPhotoLibrary.authorizationStatus(for: .readWrite)

As with the camera, the photo library has PHPhotoLibrary.requestAuthorization(for: .readWrite), but instead of just returning a "granted" or "denied" status, this command returns the actual new status. In addition to the status values for the camera, the photo library may return a .limited status meaning that the user has granted permission to only some of the photos. If the user has chosen to share only some of their photos, Apple provides some views you can present so that the user can manage the photos. Any videos that your app saves to the user's photo library will always be available when the user has chosen .limited. You can read more about how to work with permissions and the user's photo library in this Apple article: Delivering an Enhanced Privacy Experience in Your Photos App.
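A minimal sketch of requesting read/write photo access during onboarding. Unlike AVCaptureDevice, the handler receives the new status itself:

```swift
import Photos

PHPhotoLibrary.requestAuthorization(for: .readWrite) { status in
  switch status {
  case .authorized:
    break // full access granted
  case .limited:
    break // the user granted access to only a subset of photos
  case .denied, .restricted:
    break // direct the user to the Settings app if they want to change this
  case .notDetermined:
    break // should not occur once the request has returned
  @unknown default:
    break
  }
}
```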

Asking for Location Permissions

By default, the VideoEditorSDK is set up to tag the user's video and photos with the current location. Unlike camera, microphone and photos access, the user may not understand why your app is requesting location data. You may decide to simply disable the tagging by setting one of the configuration options for the CameraViewController. When setting options, include this line to turn off location tagging:

options.includeUserLocation = false

Alternatively, you can include your own code to display a message before the permissions dialog. As you will see later in this tutorial, for a number of customization options with the VideoEditorSDK, you are provided a closure where you can execute your own code. The closure for location is .locationAccessRequestClosure, and it passes in a CLLocationManager instance from Core Location. So, you can customize how to present the dialog.

cameraViewController.locationAccessRequestClosure = { locationManager in
  //present your own explanation first, then trigger the system dialog
  if locationManager.authorizationStatus == .notDetermined {
    locationManager.requestWhenInUseAuthorization()
  }
}

The code above customizes the location request after the CameraViewController is created but before it is presented.

Even if you decide to turn off the location options, it is a good idea to include an entry for NSLocationWhenInUseUsageDescription in the info.plist file. During Apple's app review, a code scan may flag the core location code in the VideoEditorSDK and will look for an info.plist entry. Your app may get rejected if there is no entry even if you turned off the location code.

Icon Replacement

Most of the customizations you can make to the VideoEditorSDK are done using the Configuration builders, as you will see later. However, button icons are customized slightly differently.

You can create a closure to replace any of the stock SDK icons with your own. If you manually installed the frameworks, you can also dig into the packages and modify the graphics directly. One of the first things we want to do is to swap out some of the default icons for new ones. For this article we will replace the icon for switching between the front and back camera and the icon to show the filtering options on the camera. Assign this closure to IMGLY.bundleImageBlock and make sure the code runs somewhere in the app before you launch either the CameraViewController or the VideoEditViewController. The bundleImageBlock is called every time the SDK requests an icon image.

IMGLY.bundleImageBlock = { imageName in
  // logger.debug("requested icon: \(imageName)")
  switch imageName {
    case "imgly_icon_cam_switch":
      return UIImage(named: "336-reloop")
    case "imgly_icon_show_filter":
      return UIImage(systemName: "camera.filters", withConfiguration: UIImage.SymbolConfiguration(scale: .large))?.icon(pt: 48, alpha: 1)
    default:
      return nil
  }
}

The imageName that gets passed in to the closure is the name of the icon that the SDK is about to render. If your code returns nil, the default icon appears. But if you supply an alternate UIImage then that icon will appear. The code above replaces the icon to switch between the front and back cameras with an icon from the asset catalog. It replaces the icon to show the camera filters with an SF Symbol. As long as your code returns a UIImage, it doesn't matter where it is stored. Uncomment the logger.debug("requested icon: \(imageName)") line and run the app to discover what other icons it uses to render the camera and editing screens. The VideoEditor SDK documentation contains more info on how to customize the icons.
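The .icon(pt:alpha:) call in the snippet above is not part of UIKit; it is a small helper that pads an SF Symbol into a fixed-size square image so it lines up with the SDK's own icons. Its name and behavior are assumptions based on how it is used above; a sketch of such an extension might look like this:

```swift
import UIKit

extension UIImage {
  // Hypothetical helper: draws the image centered in a square canvas
  // of `pt` points so SF Symbols match the SDK's fixed icon sizes.
  func icon(pt: CGFloat, alpha: CGFloat) -> UIImage? {
    let canvas = CGSize(width: pt, height: pt)
    let renderer = UIGraphicsImageRenderer(size: canvas)
    return renderer.image { _ in
      let origin = CGPoint(x: (pt - size.width) / 2,
                           y: (pt - size.height) / 2)
      draw(at: origin, blendMode: .normal, alpha: alpha)
    }
  }
}
```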

Making Video Recordings

The start of a great TikTok-like experience is the camera. When our user wants to make a new video clip with our app, they tap on the button at the bottom of the initial screen to start the creation workflow.

The VideoEditorSDK contains a CameraViewController class that lets you provide your users with a video camera and a variety of filters to apply in real time. The SDK handles most of the camera functions but you do have some opportunities to customize.

The image above shows three different camera controllers. The one on the left is from the TikTok app, the one in the middle is the default settings of the VideoEditorSDK, and the one on the right is the customized controller for the app we are building in this article. The comparable controls for each are marked with green numbers:

1 - close button
2 - flip camera button
3 - flash button
4 - filters/effects button
5 - import image button

The TikTok camera view does some things during video capture that the VideoEditorSDK does during the editing phase. Import image (5) is part of TikTok stitching together stills and videos into a single movie. The VideoEditorSDK has this same feature, but does it in the editing screen. For filters and effects (4), the default filters that the VideoEditorSDK provides are similar to those that TikTok uses, but the more complicated "effects" are in the editor. The VideoEditorSDK also allows you to create your own filters.

Configuring the Camera Controls

The VideoEditorSDK uses a pattern where a Configuration object provides a closure with a builder. You modify the options by modifying the Configuration object. Then you instantiate the camera or the editor using the Configuration object. You can make one giant Configuration or smaller ones as makes sense in your workflow. Here is a simple example of how to configure the CameraViewController.

let config = Configuration { builder in
  builder.configureCameraViewController { options in

    // allow cancel
    options.showCancelButton = true

    //set more options
  }
}

let cameraViewController = CameraViewController(configuration: config)

To set the other options for the camera, you modify other properties of the options object in the closure. Here is how to allow the user to flip between the back and front cameras, force the user to only take videos and force the camera to remain in portrait mode.

// configure camera
options.allowedCameraPositions = [.back, .front]
options.allowedRecordingModes = [.video]
options.allowedRecordingOrientations = [.portrait]

You can customize other visual aspects of the camera controls and even add additional actions using the closure pattern like you saw with the icons. The VideoEditorSDK provides closures for each of the camera controls. The closure will be given the UIButton object to modify.

options.filterSelectorButtonConfigurationClosure = { button in
  button.layer.cornerRadius = 20.0
  button.layer.borderWidth = 2.0
  button.layer.backgroundColor = UIColor(red: 0.0, green: 0.0, blue: 1.0, alpha: 0.3).cgColor
  button.layer.borderColor = UIColor.white.cgColor
  button.actionClosure = { _ in
    logger.info("filter tapped")
  }
}

In the code above, we modify the background and border of the button that is used to show and hide the filter drawer. We also add a logging statement that will execute when the user taps the button. Any code you add to the .actionClosure will get executed in addition to the VideoEditorSDK code.

Discovering All Customization Options

The definitions of the configuration options are well documented in the headers. A good way to discover all of the options, and how to construct the closures, is to add some options to your configuration and then place the cursor over one of the options and right-click to show the context menu. Then choose "Jump to Definition".

By scrolling around in the header file you can read about the different options and also see what types of data you need to pass to each one.

The documentation website is another resource for understanding how to customize and present the camera controller and how to discover all of the available options.

Presenting the Controller

Once the view controller configurations are set, your app should present the CameraViewController. One way to present it is modally so that it covers the entire screen.

cameraViewController.modalPresentationStyle = .fullScreen
self.present(cameraViewController, animated: true)

Capturing the Video

The user can start their capture by tapping the large, red button on the camera controller. They end recording by tapping the button a second time. When the user taps the button to end recording another closure allows your app to decide what to do next. The .completionBlock closure passes in a result object that has a URL to the video file and a model. The model object contains any information about filters that the user applied. As with the other closures, the app should set the .completionBlock before presenting the CameraViewController.

For this example, the app can just dismiss the camera controller and pass the video and the filter information to the editor controller.

cameraViewController.completionBlock = { [weak self] result in
  // Create a `Video` from the `URL` object.
  guard let url = result.url else { return }
  let video = Video(url: url)

  // Dismiss the camera view controller and open the video editor after dismissal.
  // Set the animated property to false to make the transition cleaner.
  self?.dismiss(animated: false, completion: {
    let config = Configuration { builder in
      builder.configureVideoEditViewController { options in
        //set any configuration for the editor
      }
    }

    // Create and present the video editor.
    // Pass the `result.model` to the editor to preserve selected filters.
    let videoEditViewController = VideoEditViewController(videoAsset: video, configuration: config, photoEditModel: result.model)
    videoEditViewController.modalPresentationStyle = .fullScreen
    self?.present(videoEditViewController, animated: false, completion: nil)
  })
}
In the code above, the app queries the result object for a .url. If the camera had been capturing still images the result would pass the image as a .data object. Video is much larger so it would be inefficient to read it into memory just to pass it along. Then the app dismisses the camera controller and immediately configures and presents an editing controller. The pattern of configuration is the same, using the builder and Configuration objects. We can explore those more in the next section.

Once the VideoEditViewController is configured and has a video asset to work with, the app presents the controller.

Editing Your Clips

After capturing a video, the app displays an editor to complete the creation. In the image above, each app has a cancel button (1) and a save/complete button (2). The editing tools are in the red circles. TikTok provides about ten different tools on the editing screen, and, as we saw earlier, some editing tools on the capture screen. The VideoEditor SDK apps provide a scrolling set of tools along the bottom. By default the editor provides tools for video composition, audio overlays, trim, transform, filter, adjust, sticker, text, text design, overlay, frame, brush, and focus.

How to configure and customize each of these tools is beyond the scope of this tutorial. But, customizing each one uses the same Configuration and builder pattern and closures that you've already seen. Additionally, for tools with assets such as filters and stickers, the VideoEditorSDK provides methods to append custom assets so that your app can continually provide new stickers, filters, fonts and text options to your users.

Configuring the Editor

As with the camera controller, the VideoEditViewController uses a Configuration object and a set of closures. Here is an example configuration

let config = Configuration { builder in
  builder.configureBrushToolController { options in
    options.allowedBrushTools = [.color, .hardness]
  }

  builder.configureVideoEditViewController { options in
    options.backgroundColor = UIColor.brown
    options.menuBackgroundColor = UIColor.brown

    options.overlayButtonConfigurationClosure = { button, action in
      styleCameraButton(button)
    }
    options.discardButtonConfigurationClosure = { button in
      styleCameraButton(button)
    }
    options.applyButtonConfigurationClosure = { button in
      styleCameraButton(button)
    }

    options.titleViewConfigurationClosure = { view in
      view.backgroundColor = UIColor.brown
    }

    options.menuItems = [PhotoEditMenuItem.tool(.createTrimToolItem()!),
                         PhotoEditMenuItem.tool(.createFilterToolItem()!),
                         PhotoEditMenuItem.tool(.createBrushToolItem()!)]
  }
}
In the code above, we customize the settings for the brush tool. Notice that this is done outside of the VideoEditViewController configuration closure. Then we apply the same background and border styling to each of the buttons. The styleCameraButton is simply a function that applies the cornerRadius, borderWidth, and other properties from earlier in the article. Finally, this configuration limits the tools menu to just three of the tools, just for the example. The default for .menuItems is to display all of the tools, so unless you are adding custom tools, or want to remove some of the tools, don't make any reference to .menuItems.
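The styleCameraButton function is our own helper, not an SDK API; a sketch that collects the layer styling used for the camera buttons earlier in the article might look like this:

```swift
import UIKit

// Our own helper (not part of the SDK): applies the rounded,
// translucent styling used for the camera buttons earlier.
func styleCameraButton(_ button: UIButton) {
  button.layer.cornerRadius = 20.0
  button.layer.borderWidth = 2.0
  button.layer.borderColor = UIColor.white.cgColor
  button.layer.backgroundColor = UIColor(red: 0.0, green: 0.0,
                                         blue: 1.0, alpha: 0.3).cgColor
}
```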

Exporting the Finished Video

Once the user has finished with their creation, they tap the completion button and the VideoEditor SDK composes all of the video, audio, filters, and static items into a single .mp4 file. The VideoEditViewController uses a different pattern for passing the video back to your app. You may remember from earlier that the CameraViewController had a .completionBlock and a .cancelBlock closure. The VideoEditViewController uses a delegate pattern instead.

In your app, before you present the VideoEditViewController you need to set its .delegate. Then you need to implement the delegate methods for videoEditViewControllerDidFinish, videoEditViewControllerDidFail, and videoEditViewControllerDidCancel.
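Wiring up the delegate might look like this sketch, assuming the presenting view controller adopts the SDK's VideoEditViewControllerDelegate protocol (the class and method names here are illustrative):

```swift
import ImglyKit
import UIKit

class CreationViewController: UIViewController, VideoEditViewControllerDelegate {

  func presentEditor(for video: Video, config: Configuration) {
    let editor = VideoEditViewController(videoAsset: video, configuration: config)
    editor.delegate = self   // must be set before presenting
    editor.modalPresentationStyle = .fullScreen
    present(editor, animated: false)
  }

  // ...implement videoEditViewControllerDidFinish, videoEditViewControllerDidFail,
  // and videoEditViewControllerDidCancel here, as shown in this section.
}
```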

In the cases of failure or cancel, you can log any errors and dismiss the editor.

func videoEditViewControllerDidFail(_ videoEditViewController: ImglyKit.VideoEditViewController, error: ImglyKit.VideoEditorError) {
  logger.error("editor failed: \(error.localizedDescription)")
  // Dismissing the editor.
  self.dismiss(animated: true, completion: nil)
}

func videoEditViewControllerDidCancel(_ videoEditViewController: ImglyKit.VideoEditViewController) {
  self.dismiss(animated: true, completion: nil)
}
As with the CameraViewController your app will get a URL of the completed video. By default, the VideoEditorSDK saves the video to the app's temporary directory. You should move it to a safer location as the temporary directory will get purged over time. Either the user's Documents directory or the camera roll are two common locations. Here is some code to move the completed video composition to the Documents directory and dismiss the editor.

func videoEditViewControllerDidFinish(_ videoEditViewController: ImglyKit.VideoEditViewController, result: ImglyKit.VideoEditorResult) {
  logger.info("new video: \(result.output.url)")
  do {
    guard let documentDirectory = try? FileManager.default.url(for: .documentDirectory, in: .userDomainMask, appropriateFor: nil, create: true) else { logger.error("Documents directory not found"); return }
    let filePath = documentDirectory.appending(component: result.output.url.lastPathComponent)
    try FileManager.default.moveItem(atPath: result.output.url.path(), toPath: filePath.path())
  } catch let error as NSError {
    logger.error("could not copy finished file: \(error.localizedDescription)")
  }
  self.dismiss(animated: true, completion: nil)
}

The code above gets a handle to the user's documents directory and then moves the file from the temporary directory (result.output.url) to the new location (filePath.path()). Finally it dismisses the editor.

Setting Up a Playback Controller

For this example app, the playback controller will play any clips in the app's Documents directory. Each clip plays on a loop. The user can swipe up to get the next clip and tap to pause or restart the clip. If there are no clips, the user will be encouraged to make a new one.

When the video player view controller appears, the first task is to see if there are any videos to load. Any video files in the Documents directory are assumed to be ready to play.

func loadVideos() {
  //get a handle to the documents directory for this app
  guard let documentDirectory = try? FileManager.default.url(for: .documentDirectory, in: .userDomainMask, appropriateFor: nil, create: true) else { logger.error("Documents directory not found"); return }

  if let files = try? FileManager.default.contentsOfDirectory(at: documentDirectory, includingPropertiesForKeys: nil) {
    videos = files
  }
}

The code above reads all of the filenames from the documents directory into a files array and then assigns that array to a videos variable.

Each time the app is launched or the VideoEditViewController is dismissed, the video player controller will get the viewWillAppear message. This is a good place to check for videos using the loadVideos function. After the videos are loaded, the app can start to play the first one.
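In the player controller, that check can be as simple as calling the two functions from this section (a sketch, assuming both live on the same view controller):

```swift
override func viewWillAppear(_ animated: Bool) {
  super.viewWillAppear(animated)
  loadVideos()        // refresh the list from the Documents directory
  setupVideoPlayer()  // start playing the first clip, if any
}
```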

func setupVideoPlayer() {
  guard let currentVideo = videos.first else { logger.error("No videos to play"); return }

  emptyDirectoryLabel.isHidden = true
  self.currentVideo = currentVideo

  let video = AVPlayerItem(url: currentVideo)

  let playerLayer = AVPlayerLayer(player: player)
  playerLayer.frame = self.view.layer.bounds
  playerLayer.videoGravity = .resizeAspect
  self.view.layer.addSublayer(playerLayer)

  // Create a new player looper with the queue player and template item
  playerLooper = AVPlayerLooper(player: player, templateItem: video)

  player.play()
}
In the code above, we first check to see if there are any video files. If there are, we hide the empty-directory message and create a new AVPlayerItem using the URL of the first video. Next we create an AVPlayerLayer to display the video, set its size, and add it to the main view. Then we create an AVPlayerLooper to play the video in a loop. The looper can either loop a single video or take a number of videos and play them all in sequence. Note that the player is an AVQueuePlayer and not a simple AVPlayer object. Finally, we tell the player to .play().

In the TikTok app, you can advance to the next video by swiping up. Additionally, you can start and stop any video by tapping on the screen. We can add both of those features using gesture recognizers.
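If you prefer wiring the recognizers in code rather than in a storyboard, the setup is a couple of lines in viewDidLoad (this assumes the handler methods shown below exist on the same view controller; IBAction methods are exposed to Objective-C, so #selector works with them):

```swift
override func viewDidLoad() {
  super.viewDidLoad()

  // swipe up to advance to the next clip
  let swipeUp = UISwipeGestureRecognizer(target: self, action: #selector(nextVideo(_:)))
  swipeUp.direction = .up
  view.addGestureRecognizer(swipeUp)

  // tap anywhere to pause or resume playback
  let tap = UITapGestureRecognizer(target: self, action: #selector(togglePlayback(_:)))
  view.addGestureRecognizer(tap)
}
```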

First, we can configure a Swipe Gesture Recognizer to display the next video.

@IBAction func nextVideo(_ sender: Any) {
  logger.info("Swiped Up")
  guard let currentVideo = self.currentVideo else { logger.error("No current video"); return }
  let currentIndex = videos.firstIndex(of: currentVideo)
  var nextVideo = videos.index(after: currentIndex!)
  if nextVideo == videos.count {
    nextVideo = 0
  }

  self.currentVideo = videos[nextVideo]
  let newVideo = AVPlayerItem(url: self.currentVideo!)

  //Disable looping so the looper can clean up before we switch to the next video
  playerLooper?.disableLooping()

  // Create a new player looper with the queue player and template item
  playerLooper = AVPlayerLooper(player: player, templateItem: newVideo)
  player.play()
}

The code above searches the array of videos for the index of the current video. Then it gets the URL of the next video in the array using its index. If the index goes beyond the end of the array of videos, it resets to 0 (the first video). Then we create a new AVPlayerItem using the URL of the new video. If you don't call disableLooping() before you change the videos in the looper, it will throw an exception. So, after disabling looping, we can create a new looper and play the new video.

The code for play and pause is attached to a Tap Gesture Recognizer.

@IBAction func togglePlayback(_ sender: Any) {
  if player.rate > 0.0 {
    player.rate = 0.0
  } else {
    player.rate = 1.0
  }
}
Because of how we have constructed the looper, it becomes a little difficult to use the standard .play() and .pause() commands, and we cannot ask the AVPlayerLayer if it is currently playing. So, we can just adjust the playback rate: a value of 1.0 is normal speed and 0.0 is paused.

With this view controller, the user can create new videos by tapping the creation button and swipe to view all of their created videos.

Where to Go From Here

This tutorial has focused on how the VideoEditor SDK can help you quickly make a video creation app like TikTok. Good next steps would be to further customize the editing tools and build out the network for sharing and tagging videos.

Something that is important to consider is the data structure for each video. The iOS system is optimized to read only as much of a video file off of disk as is needed at any moment. Your app should use those optimizations to run faster. So, don't load the whole video into memory as a Data object. Your data structures should keep the large video files somehow separate from the much smaller metadata (likes, comments, etc.). Consider storing the URL or filename of the video in the same object as the likes and comments. This will also allow you to cache video files after they have been downloaded so that you don't need to re-download them when other data such as comments or number of likes changes.
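One way to model this (the type and field names here are purely illustrative) is to keep only a reference to the media file alongside the lightweight metadata:

```swift
import Foundation

// Illustrative model: the video stays on disk (or on a CDN) and only
// its location travels with the small, frequently changing metadata.
struct Clip: Codable {
  let id: UUID
  let remoteURL: URL          // where to download the video from
  var localFilename: String?  // cache location once downloaded
  var likeCount: Int
  var comments: [String]
}
```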

Thanks for reading! We hope that you've gotten a better understanding for how a tool like the VideoEditor SDK can bring your ideas to market faster. Feel free to reach out to us with any questions, comments, or suggestions.

Looking for more video capabilities? Check out our solutions for Short Video Creation, and Camera SDK!

To stay in the loop, subscribe to our Newsletter or follow us on LinkedIn and Twitter.