Video Calling

Set up video calls with the Sinch iOS Voice & Video SDK.

Setting Up a Video Call

Just like audio calls, video calls are placed through the SINCallClient and events are received using the SINCallClientDelegate and SINCallDelegate. For a more general introduction to calling and SINCallClient, see Calling.
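As a minimal sketch, placing a video call could look like the following (this assumes `client` is an already started SINClient; see Calling for how to set it up):

// Assumes `client` is a started SINClient.
id<SINCallClient> callClient = [client callClient];
id<SINCall> call = [callClient callUserVideoWithId:@"<remote user id>"];
call.delegate = self; // Receive SINCallDelegate events for this call.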

Before you start, ensure your application requests the user's permission to use the camera.

Showing the Video Streams

The following examples for showing video streams assume a view controller with the following properties:

@interface MyViewController : UIViewController

@property (weak, nonatomic) IBOutlet UIView *remoteVideoView;
@property (weak, nonatomic) IBOutlet UIView *localVideoView;

@end

Showing a Preview of the Local Video Stream

The locally captured stream is rendered into the view provided by -[SINVideoController localView] when it's attached to the application UI view hierarchy.

- (void)viewDidLoad {
  [super viewDidLoad];

  id<SINVideoController> videoController = ... // get video controller from SINClient.

  [self.localVideoView addSubview:[videoController localView]];
}

Showing Remote Video Streams

Once the remote video stream is available, the delegate method -[SINCall callDidAddVideoTrack:] will be called and you can use that to attach the Sinch video controller view (-[SINVideoController remoteView]) to your application view hierarchy so that the stream is rendered.

- (void)callDidAddVideoTrack:(id<SINCall>)call {
  id<SINVideoController> videoController = ... // get video controller from SINClient.

  // Add the remote video view to your view hierarchy
  [self.remoteVideoView addSubview:[videoController remoteView]];
}

(The remote stream will automatically be rendered into the view provided by -[SINVideoController remoteView].)

Pausing and Resuming a Video Stream

To pause the local video stream, use the method -[SINCall pauseVideo]. To resume the local video stream, use the method -[SINCall resumeVideo].

// Pause the video stream.
[call pauseVideo];

// Resume the video stream.
[call resumeVideo];

The call delegate will be notified of pause and resume events via the delegate callback methods -[SINCallDelegate callDidPauseVideoTrack:] and -[SINCallDelegate callDidResumeVideoTrack:]. Based on these events it's appropriate to, for example, show a pause indicator in the UI and remove it again on resume.
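A minimal sketch of handling these callbacks, assuming a hypothetical pauseIndicatorView property on your view controller:

- (void)callDidPauseVideoTrack:(id<SINCall>)call {
  // Show a pause indicator (pauseIndicatorView is a hypothetical UIView).
  self.pauseIndicatorView.hidden = NO;
}

- (void)callDidResumeVideoTrack:(id<SINCall>)call {
  // Hide the pause indicator again.
  self.pauseIndicatorView.hidden = YES;
}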

Video Content Fitting and Aspect Ratio

How the rendered video stream is fitted into a view can be controlled by the regular property UIView.contentMode. Assigning contentMode on a view returned by -[SINVideoController remoteView] or -[SINVideoController localView] will affect how the video content is laid out. Note though that only UIViewContentModeScaleAspectFit and UIViewContentModeScaleAspectFill will be respected.

Example

id<SINVideoController> videoController = ... // get video controller from SINClient.

[videoController remoteView].contentMode = UIViewContentModeScaleAspectFill;

Full Screen Mode

The Sinch SDK provides helper functions to transition a video view into fullscreen mode. These are provided as Objective-C category methods for the UIView class and are defined in SINUIView+Fullscreen.h (SINUIViewFullscreenAdditions).

Example

- (IBAction)toggleFullscreen:(id)sender {
  id<SINVideoController> videoController = ... // get video controller from SINClient.

  UIView *view = videoController.remoteView;

  if ([view sin_isFullscreen]) {
    view.contentMode = UIViewContentModeScaleAspectFit;
    [view sin_disableFullscreen:YES]; // Pass YES to animate the transition
  } else {
    view.contentMode = UIViewContentModeScaleAspectFill;
    [view sin_enableFullscreen:YES];  // Pass YES to animate the transition
  }
}

Camera Selection (Front/Back)

Select the front or back camera using the captureDevicePosition property on SINVideoController.

Example

- (IBAction)onUserSelectedBackCamera:(id)sender {
  id<SINVideoController> videoController = ... // get video controller from SINClient.

  videoController.captureDevicePosition = AVCaptureDevicePositionBack;
}

- (IBAction)onUserSelectedFrontCamera:(id)sender {
  id<SINVideoController> videoController = ... // get video controller from SINClient.

  videoController.captureDevicePosition = AVCaptureDevicePositionFront;
}

Accessing Raw Video Frames from Remote and Local Streams

The Sinch SDK provides access to the raw video frames of the remote and local video streams. This allows you to process the video frames with your own implementation to achieve rich functionality, for example applying filters, adding stickers to the video frames, or saving a frame as an image.

Perform custom video frame processing by implementing SINVideoFrameCallback and registering the callback using -[SINVideoController setRemoteVideoFrameCallback:] and -[SINVideoController setLocalVideoFrameCallback:]. The callback handler provides the frame as a CVPixelBufferRef, along with a completion handler block that you must invoke, passing the processed output frame (also as a CVPixelBufferRef) as the argument. The implementation of the frame callback handler must retain (and release) the buffer using CVPixelBufferRetain and CVPixelBufferRelease.

Example:

// Implementation of -[SINVideoFrameCallback onFrame:completionHandler:]
- (void)onFrame:(CVPixelBufferRef)pixelBuffer completionHandler:(void (^)(CVPixelBufferRef))completionHandler {
  if (!pixelBuffer)
    return;

  CVPixelBufferRetain(pixelBuffer);

  // It's important to dispatch the filter operations on a different queue
  // so that the SDK won't be blocked while the filter is being applied.
  dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
    CIImage *sourceImage = [CIImage imageWithCVPixelBuffer:(CVPixelBufferRef)pixelBuffer options:nil];
    CGRect sourceExtent = sourceImage.extent;

    CIFilter *vignetteFilter = [CIFilter filterWithName:@"CIVignetteEffect"];

    [vignetteFilter setValue:sourceImage forKey:kCIInputImageKey];
    [vignetteFilter setValue:[CIVector vectorWithX:sourceExtent.size.width / 2 Y:sourceExtent.size.height / 2]
                      forKey:kCIInputCenterKey];
    [vignetteFilter setValue:@(sourceExtent.size.width / 2) forKey:kCIInputRadiusKey];
    CIImage *filteredImage = [vignetteFilter outputImage];

    CIFilter *effectFilter = [CIFilter filterWithName:@"CIPhotoEffectInstant"];
    [effectFilter setValue:filteredImage forKey:kCIInputImageKey];
    filteredImage = [effectFilter outputImage];

    CIContext *ciContext = [CIContext contextWithOptions:nil];
    [ciContext render:filteredImage toCVPixelBuffer:pixelBuffer];

    // Send processed CVPixelBufferRef back
    if (completionHandler) {
      completionHandler(pixelBuffer);
    } else {
      NSLog(@"completionHandler is nil!");
    }
    CVPixelBufferRelease(pixelBuffer);
  });
}
Notes:
  • It's recommended to perform frame processing asynchronously using GCD, with a dedicated queue whose priority is tuned to your use case. If you process every frame (e.g. applying a filter), use a high-priority queue. If you only process some frames, e.g. saving snapshot frames based on a user action, a low-priority background queue may be more appropriate.
  • The approach shown in the example above may crash on older iOS versions (e.g. iOS 11.x and iOS 12.x) due to a bug in CoreImage (see StackOverflow threads 1 and 2). If your deployment target is lower than iOS 13.0, consider using an image processing library other than CoreImage.

Converting a Video Frame to UIImage

The Sinch SDK provides the helper function SINUIImageFromPixelBuffer(CVPixelBufferRef) to convert CVPixelBufferRef to UIImage*.

#import "SINVideoController.h" // To use SINUIImageFromPixelBuffer()

// Get CVPixelBufferRef from -[SINVideoFrameCallback onFrame:completionHandler:] callback
CVPixelBufferRef pixelBuffer = ...
UIImage *image = SINUIImageFromPixelBuffer(pixelBuffer);
Important!

The helper function won't release the frame buffer; you must still call CVPixelBufferRelease after using it.

Request User Permission for Using the Camera

Recording video always requires explicit permission from the user. Your app must provide a description of its use of the camera via the Info.plist key NSCameraUsageDescription.

See the Apple documentation on +[AVCaptureDevice requestAccessForMediaType:completionHandler:] for details on how to request user permission.
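As a sketch, requesting camera permission with AVFoundation could look like the following:

#import <AVFoundation/AVFoundation.h>

[AVCaptureDevice requestAccessForMediaType:AVMediaTypeVideo
                         completionHandler:^(BOOL granted) {
  if (!granted) {
    // The user denied camera access; local video can't be captured.
  }
}];

Note that the completion handler may be invoked on an arbitrary queue, so dispatch back to the main queue before updating any UI.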
