Recommended open-source iOS frameworks and libraries: saving a set of images as a video with sound

Answered with ChatGPT 4o

Question:

What open-source frameworks or libraries for iOS can save a set of images as a video, with sound?

Answer:

In iOS development, several frameworks and libraries can help you turn a set of images into a video and add audio. Here are some commonly used options:

1. GPUImage

GPUImage is a powerful image- and video-processing framework that supports a wide range of filters and effects. You can use it to turn a set of images into a video and add audio.

2. AVFoundation

Although AVFoundation is Apple's native framework rather than an open-source library, it is very powerful and handles all kinds of video and audio operations. You can use AVFoundation to create a video from images and add an audio track.

3. ImageVideoGenerator

ImageVideoGenerator is a simple library that converts a set of images into a video and can add background music.

4. NextLevel

NextLevel is an open-source iOS framework focused on media capture and processing. It offers a rich set of features for working with video and audio.

5. VideoToolbox

VideoToolbox is a low-level framework that provides hardware-accelerated video encoding and decoding. Although it is not designed specifically for turning image sets into video, you can combine it with AVFoundation for efficient video processing.

Sample code

Here is a simple example that uses AVFoundation to turn a set of images into a video and then add audio:

import AVFoundation
import UIKit

func createVideoFromImages(images: [UIImage], audioURL: URL, outputURL: URL, completion: @escaping (Bool) -> Void) {
    let videoSize = CGSize(width: 1920, height: 1080)
    // AVAssetWriter fails if a file already exists at the output URL.
    try? FileManager.default.removeItem(at: outputURL)
    guard let videoWriter = try? AVAssetWriter(outputURL: outputURL, fileType: .mp4) else {
        completion(false)
        return
    }
    
    let videoSettings: [String: Any] = [
        AVVideoCodecKey: AVVideoCodecType.h264,
        AVVideoWidthKey: videoSize.width,
        AVVideoHeightKey: videoSize.height
    ]
    
    let videoWriterInput = AVAssetWriterInput(mediaType: .video, outputSettings: videoSettings)
    let sourceBufferAttributes: [String: Any] = [
        kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32ARGB),
        kCVPixelBufferWidthKey as String: videoSize.width,
        kCVPixelBufferHeightKey as String: videoSize.height
    ]
    
    // The adaptor must be attached to the input before startWriting() is called.
    let pixelBufferAdaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: videoWriterInput, sourcePixelBufferAttributes: sourceBufferAttributes)
    
    videoWriter.add(videoWriterInput)
    videoWriter.startWriting()
    videoWriter.startSession(atSourceTime: .zero)
    
    var frameCount: Int64 = 0
    // Each image is shown for exactly one frame at 30 fps; use a larger value
    // (e.g. CMTime(value: 30, timescale: 30)) to show each image for longer.
    let frameDuration = CMTime(value: 1, timescale: 30)
    
    for image in images {
        // Busy-wait until the writer can accept more data. This blocks the
        // calling thread; in production code prefer
        // requestMediaDataWhenReady(on:using:) over polling.
        while !videoWriterInput.isReadyForMoreMediaData {
            Thread.sleep(forTimeInterval: 0.1)
        }
        
        let presentationTime = CMTimeMultiply(frameDuration, multiplier: Int32(frameCount))
        let pixelBuffer = pixelBufferFromImage(image: image, size: videoSize)
        pixelBufferAdaptor.append(pixelBuffer, withPresentationTime: presentationTime)
        frameCount += 1
    }
    
    videoWriterInput.markAsFinished()
    videoWriter.finishWriting {
        addAudioToVideo(videoURL: outputURL, audioURL: audioURL, completion: completion)
    }
}
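
The presentation-time arithmetic in the loop above is easy to get wrong, and it can be checked in isolation: a CMTime is essentially a value/timescale pair. Below is a minimal sketch of the same calculation with plain integers; the helper names are illustrative, not AVFoundation API:

```swift
// Frame i at `fps` frames per second has presentation time i/fps seconds,
// which corresponds to CMTime(value: Int64(i), timescale: Int32(fps)).
func presentationSeconds(frameIndex: Int, fps: Int) -> Double {
    return Double(frameIndex) / Double(fps)
}

// Total clip length for n images shown for one frame each.
func clipSeconds(imageCount: Int, fps: Int) -> Double {
    return Double(imageCount) / Double(fps)
}

print(presentationSeconds(frameIndex: 90, fps: 30))  // 3.0
print(clipSeconds(imageCount: 300, fps: 30))         // 10.0
```

This makes it clear why 300 images at the example's one-frame-per-image rate produce only a 10-second clip; repeat each image for several frames if you want a slideshow pace.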

func pixelBufferFromImage(image: UIImage, size: CGSize) -> CVPixelBuffer {
    let options: [String: Any] = [
        kCVPixelBufferCGImageCompatibilityKey as String: true,
        kCVPixelBufferCGBitmapContextCompatibilityKey as String: true
    ]
    
    var pixelBuffer: CVPixelBuffer?
    let status = CVPixelBufferCreate(kCFAllocatorDefault, Int(size.width), Int(size.height), kCVPixelFormatType_32ARGB, options as CFDictionary, &pixelBuffer)
    precondition(status == kCVReturnSuccess && pixelBuffer != nil, "Failed to create pixel buffer")
    let buffer = pixelBuffer!
    
    CVPixelBufferLockBaseAddress(buffer, [])
    defer { CVPixelBufferUnlockBaseAddress(buffer, []) }
    
    let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
    // 32ARGB layout: alpha in the first byte, ignored via .noneSkipFirst.
    let context = CGContext(data: CVPixelBufferGetBaseAddress(buffer),
                            width: Int(size.width), height: Int(size.height),
                            bitsPerComponent: 8,
                            bytesPerRow: CVPixelBufferGetBytesPerRow(buffer),
                            space: rgbColorSpace,
                            bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue)
    
    // Note: this stretches the image to fill the canvas.
    if let cgImage = image.cgImage {
        context?.draw(cgImage, in: CGRect(origin: .zero, size: size))
    }
    
    return buffer
}
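
Note that the draw call above stretches every image to fill the 1920×1080 canvas, which distorts images with a different aspect ratio. Aspect-fitting them instead is plain arithmetic; here is a sketch with simple doubles, independent of CoreGraphics (`aspectFitRect` is an illustrative helper, not a system API):

```swift
// Aspect-fit rectangle (x, y, w, h) for an image of size (iw, ih)
// centered inside a canvas of size (cw, ch).
func aspectFitRect(iw: Double, ih: Double, cw: Double, ch: Double) -> (x: Double, y: Double, w: Double, h: Double) {
    // Scale to whichever dimension is the tighter fit.
    let scale = min(cw / iw, ch / ih)
    let w = iw * scale
    let h = ih * scale
    // Center the scaled image on the canvas.
    return ((cw - w) / 2, (ch - h) / 2, w, h)
}

// A square 1000×1000 image on a 1920×1080 canvas is scaled to 1080×1080
// and centered horizontally with 420-point bars on each side.
let fitted = aspectFitRect(iw: 1000, ih: 1000, cw: 1920, ch: 1080)
print(fitted)  // (x: 420.0, y: 0.0, w: 1080.0, h: 1080.0)
```

To use this in `pixelBufferFromImage`, draw into `CGRect(x:y:width:height:)` built from the fitted values instead of the full canvas rect, after filling the background with a solid color.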

func addAudioToVideo(videoURL: URL, audioURL: URL, completion: @escaping (Bool) -> Void) {
    let mixComposition = AVMutableComposition()
    
    let videoAsset = AVURLAsset(url: videoURL)
    let audioAsset = AVURLAsset(url: audioURL)
    guard let videoTrack = videoAsset.tracks(withMediaType: .video).first,
          let audioTrack = audioAsset.tracks(withMediaType: .audio).first,
          let videoCompositionTrack = mixComposition.addMutableTrack(withMediaType: .video, preferredTrackID: kCMPersistentTrackID_Invalid),
          let audioCompositionTrack = mixComposition.addMutableTrack(withMediaType: .audio, preferredTrackID: kCMPersistentTrackID_Invalid) else {
        completion(false)
        return
    }
    
    do {
        try videoCompositionTrack.insertTimeRange(CMTimeRange(start: .zero, duration: videoAsset.duration), of: videoTrack, at: .zero)
        // Clamp the audio to the shorter of the two durations so the insert
        // cannot run past the end of a shorter audio file.
        let audioDuration = CMTimeMinimum(videoAsset.duration, audioAsset.duration)
        try audioCompositionTrack.insertTimeRange(CMTimeRange(start: .zero, duration: audioDuration), of: audioTrack, at: .zero)
    } catch {
        completion(false)
        return
    }
    
    let outputURL = videoURL.deletingLastPathComponent().appendingPathComponent("output_with_audio.mp4")
    // AVAssetExportSession fails if a file already exists at the output URL.
    try? FileManager.default.removeItem(at: outputURL)
    guard let exporter = AVAssetExportSession(asset: mixComposition, presetName: AVAssetExportPresetHighestQuality) else {
        completion(false)
        return
    }
    exporter.outputURL = outputURL
    exporter.outputFileType = .mp4
    exporter.exportAsynchronously {
        completion(exporter.status == .completed)
    }
}

This example shows how to use AVFoundation to convert a set of images into a video and add audio. Because the writing loop blocks while waiting for the writer, call it from a background queue rather than the main thread. You can modify and extend it as needed.