
AVMutableComposition() only playing first track (new edit below)

I have already looked at

AVMutableComposition - Only Playing First Track (Swift)

but it does not provide the answer I am looking for. I have an AVMutableComposition(). I am trying to apply MULTIPLE AVCompositionTracks, all of the single type AVMediaTypeVideo, to this single composition. This is because I am using 2 different AVMediaTypeVideo sources, coming from different AVAssets with different CGSizes and different preferredTransforms.

So, the only way to apply their specified preferredTransforms is to supply them in 2 different tracks. But, for whatever reason, only the first track ever provides any video, almost as if the second track is never there.

So, what I have tried:

1) Using AVMutableVideoCompositionLayerInstructions applied along with an AVVideoComposition and an AVAssetExportSession. This works fine; I am still working out the transforms, but it is doable. The problem is that processing the video takes WELL OVER 1 minute, which is unusable in my situation.

2) Using multiple tracks, without an AVAssetExportSession, and the second track of the same type never appears. Now, I could put everything on one track, but then every video takes on the size and preferredTransform of the first video, which is absolutely not what I want, since it stretches them in every direction.

So my question is, is it possible to

1) Apply instructions to just one track WITHOUT using an AVAssetExportSession? // Preferred way BY FAR.

2) Decrease the export time? (I have tried PresetPassthrough, but you cannot use it if you have an exporter.videoComposition, which is where my instructions are; that is the only place I know of where instructions can go, not sure if they can be placed somewhere else.)

Here is some of my code (without the exporter, because I don't need to export anything anywhere, just to do stuff after the AVMutableComposition combines the items).

func merge() { 
    if let firstAsset = controller.firstAsset, secondAsset = self.asset { 

     let mixComposition = AVMutableComposition() 

     let firstTrack = mixComposition.addMutableTrackWithMediaType(AVMediaTypeVideo, 
                    preferredTrackID: Int32(kCMPersistentTrackID_Invalid)) 
     do { 
      //Don't need now according to not being able to edit first 14seconds. 

      if(CMTimeGetSeconds(startTime) == 0) { 
       self.startTime = CMTime(seconds: 1/600, preferredTimescale: Int32(600)) 
      } 
      try firstTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, CMTime(seconds: CMTimeGetSeconds(startTime), preferredTimescale: 600)), 
              ofTrack: firstAsset.tracksWithMediaType(AVMediaTypeVideo)[0], 
              atTime: kCMTimeZero) 
     } catch _ { 
      print("Failed to load first track") 
     } 


     //This secondTrack never appears, doesn't matter what is inside of here, like it is blank space in the video from startTime to endTime (rangeTime of secondTrack) 
     let secondTrack = mixComposition.addMutableTrackWithMediaType(AVMediaTypeVideo, 
                    preferredTrackID: Int32(kCMPersistentTrackID_Invalid)) 
//   secondTrack.preferredTransform = self.asset.preferredTransform 
     do { 
      try secondTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, secondAsset.duration), 
              ofTrack: secondAsset.tracksWithMediaType(AVMediaTypeVideo)[0], 
              atTime: CMTime(seconds: CMTimeGetSeconds(startTime), preferredTimescale: 600)) 
     } catch _ { 
      print("Failed to load second track") 
     } 

     //This part appears again, at endTime, which is right after the 2nd track is supposed to end.
     do { 
      try firstTrack.insertTimeRange(CMTimeRangeMake(CMTime(seconds: CMTimeGetSeconds(endTime), preferredTimescale: 600), firstAsset.duration-endTime), 
              ofTrack: firstAsset.tracksWithMediaType(AVMediaTypeVideo)[0] , 
              atTime: CMTime(seconds: CMTimeGetSeconds(endTime), preferredTimescale: 600)) 
     } catch _ { 
      print("failed") 
     } 
     if let loadedAudioAsset = controller.audioAsset { 
      let audioTrack = mixComposition.addMutableTrackWithMediaType(AVMediaTypeAudio, preferredTrackID: 0) 
      do { 
       try audioTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, firstAsset.duration), 
               ofTrack: loadedAudioAsset.tracksWithMediaType(AVMediaTypeAudio)[0] , 
               atTime: kCMTimeZero) 
      } catch _ { 
       print("Failed to load Audio track") 
      } 
     } 
    } 
} 

EDIT

Apple states that instructions "Indicate instructions for video composition via an NSArray of instances of classes implementing the AVVideoCompositionInstruction protocol. For the first instruction in the array, timeRange.start must be less than or equal to the earliest time for which playback or other processing will be attempted (note that this will typically be kCMTimeZero). For subsequent instructions, timeRange.start must be equal to the prior instruction's end time. The end time of the last instruction must be greater than or equal to the latest time for which playback or other processing will be attempted (note that this will often be the duration of the asset with which the instance of AVVideoComposition is associated)."

This just says that the entire composition has to be layered inside instructions if you decide to use ANY instructions (that is how I understand it). Why is that? Is there a way to apply instructions to, say, just track 2 without changing track 1 or 3 at all:

Track 1 from 0 - 10 sec, Track 2 from 10 - 20 sec, Track 3 from 20 - 30 sec.

Any explanation of this would likely answer my question (if it is doable).
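For reference, the contiguity rule quoted above would look roughly like the sketch below for that track 1/2/3 layout. This is only an illustration with placeholder names (track1/track2/track3 are assumed to be composition tracks already populated for 0-10s, 10-20s and 20-30s), not code from my project:

let tenSeconds = CMTimeMakeWithSeconds(10, 600)

let instruction1 = AVMutableVideoCompositionInstruction()
instruction1.timeRange = CMTimeRangeMake(kCMTimeZero, tenSeconds)            //starts at kCMTimeZero
instruction1.layerInstructions = [AVMutableVideoCompositionLayerInstruction(assetTrack: track1)]

let instruction2 = AVMutableVideoCompositionInstruction()
instruction2.timeRange = CMTimeRangeMake(tenSeconds, tenSeconds)             //starts exactly where instruction1 ends
instruction2.layerInstructions = [AVMutableVideoCompositionLayerInstruction(assetTrack: track2)]

let instruction3 = AVMutableVideoCompositionInstruction()
instruction3.timeRange = CMTimeRangeMake(CMTimeMakeWithSeconds(20, 600), tenSeconds)   //ends at (or past) the composition's duration
instruction3.layerInstructions = [AVMutableVideoCompositionLayerInstruction(assetTrack: track3)]

let videoComposition = AVMutableVideoComposition()
videoComposition.instructions = [instruction1, instruction2, instruction3]   //no gaps, no overlaps
videoComposition.frameDuration = CMTimeMake(1, 30)
videoComposition.renderSize = CGSize(width: 1280, height: 720)               //whatever your output size is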


When you say _the second track is never there_, do you mean you see the background of the composition, or does playback just stop right after the first track? –

+0

I mean the first track plays, then it is blank where the second track should be, and when the second track finishes it goes back to the first track – impression7vx

+0

What transform do you have on the second track? Maybe it just sits outside the frame of the videoComposition. –

Answers


Ok, so for my exact problem, I had to apply specific transforms (CGAffineTransform) in Swift to get the particular result we wanted. The version I am posting here works with any picture taken or picked as well as with video

//This method gets the orientation of the current transform. This method is used below to determine the orientation 
func orientationFromTransform(_ transform: CGAffineTransform) -> (orientation: UIImageOrientation, isPortrait: Bool) { 
    var assetOrientation = UIImageOrientation.up 
    var isPortrait = false 
    if transform.a == 0 && transform.b == 1.0 && transform.c == -1.0 && transform.d == 0 { 
     assetOrientation = .right 
     isPortrait = true 
    } else if transform.a == 0 && transform.b == -1.0 && transform.c == 1.0 && transform.d == 0 { 
     assetOrientation = .left 
     isPortrait = true 
    } else if transform.a == 1.0 && transform.b == 0 && transform.c == 0 && transform.d == 1.0 { 
     assetOrientation = .up 
    } else if transform.a == -1.0 && transform.b == 0 && transform.c == 0 && transform.d == -1.0 { 
     assetOrientation = .down 
    } 

    //Returns the orientation as a variable 
    return (assetOrientation, isPortrait) 
} 
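As a quick usage note, a minimal sketch of calling this on a track (videoAsset here is just a placeholder for whichever AVAsset you are inspecting, not a name from my project):

let videoTrack = videoAsset.tracks(withMediaType: AVMediaTypeVideo)[0]
let assetInfo = orientationFromTransform(videoTrack.preferredTransform)
if assetInfo.isPortrait {
    //The recording is portrait, so its naturalSize width/height are effectively swapped for display
    print("Portrait video, orientation: \(assetInfo.orientation)")
}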

//Method that lays out the instructions for each track I am editing and does the transformation on each individual track to get it lined up properly 
func videoCompositionInstructionForTrack(_ track: AVCompositionTrack, _ asset: AVAsset) -> AVMutableVideoCompositionLayerInstruction { 

    //This method Returns set of instructions from the initial track 

    //Create inital instruction 
    let instruction = AVMutableVideoCompositionLayerInstruction(assetTrack: track) 

    //This is whatever asset you are about to apply instructions to. 
    let assetTrack = asset.tracks(withMediaType: AVMediaTypeVideo)[0] 

    //Get the original transform of the asset 
    var transform = assetTrack.preferredTransform 

    //Get the orientation of the asset and determine whether it is portrait or landscape. (Whether you take a picture or grab one from the camera roll, it is always reported as landscape at first; this method accounts for that.)
    let assetInfo = orientationFromTransform(transform) 

    //You need a little background to understand this part. 
    /* MyAsset is my original video. I need to combine a lot of other segments, according to the user, into this original video. So I have to make all the other videos fit this size. 
     This is the width and height ratios from the original video divided by the new asset 
    */ 
    let width = MyAsset.tracks(withMediaType: AVMediaTypeVideo)[0].naturalSize.width/assetTrack.naturalSize.width 
    var height = MyAsset.tracks(withMediaType: AVMediaTypeVideo)[0].naturalSize.height/assetTrack.naturalSize.height 

    //If it is in portrait 
    if assetInfo.isPortrait { 

     //We actually change the height variable to divide by the width of the old asset instead of the height. This is because of the flip since we determined it is portrait and not landscape. 
     height = MyAsset.tracks(withMediaType: AVMediaTypeVideo)[0].naturalSize.height/assetTrack.naturalSize.width 

     //We apply the transform and scale the image appropriately. 
     transform = transform.scaledBy(x: height, y: height) 

     //We also have to move the image or video appropriately. Since we scaled it, it could be wayy off on the side, outside the bounds of the viewing. 
     let movement = ((1/height)*assetTrack.naturalSize.height)-assetTrack.naturalSize.height 

     //This lines it up dead center on the left side of the screen perfectly. Now we want to center it. 
     transform = transform.translatedBy(x: 0, y: movement) 

     //This calculates how much black there is. Cut it in half and there you go! 
     let totalBlackDistance = MyAsset.tracks(withMediaType: AVMediaTypeVideo)[0].naturalSize.width-transform.tx 
     transform = transform.translatedBy(x: 0, y: -(totalBlackDistance/2)*(1/height)) 

    } else { 

     //Landscape! We don't need to change the variables, it is all defaulted that way (iOS prefers landscape items), so we scale it appropriately. 
     transform = transform.scaledBy(x: width, y: height) 

     //This is a little complicated haha. So because it is in landscape, the asset fits the height correctly, for me anyway; It was just extra long. Think of this as a ratio. I forgot exactly how I thought this through, but the end product looked like: Answer = ((Original height/current asset height)*(current asset width))/(Original width) 
     let scale:CGFloat = ((MyAsset.tracks(withMediaType: AVMediaTypeVideo)[0].naturalSize.height/assetTrack.naturalSize.height)*(assetTrack.naturalSize.width))/MyAsset.tracks(withMediaType: AVMediaTypeVideo)[0].naturalSize.width 
     transform = transform.scaledBy(x: scale, y: 1) 

     //The asset can be way off the screen again, so we have to move it back. This time we can have it dead center in the middle, because it wasn't backwards because it wasn't flipped because it was landscape. Again, another long complicated algorithm I derived. 
     let movement = ((MyAsset.tracks(withMediaType: AVMediaTypeVideo)[0].naturalSize.width-((MyAsset.tracks(withMediaType: AVMediaTypeVideo)[0].naturalSize.height/assetTrack.naturalSize.height)*(assetTrack.naturalSize.width)))/2)*(1/MyAsset.tracks(withMediaType: AVMediaTypeVideo)[0].naturalSize.height/assetTrack.naturalSize.height) 
     transform = transform.translatedBy(x: movement, y: 0) 
    } 

    //This creates the instruction and returns it so we can apply it to each individual track. 
    instruction.setTransform(transform, at: kCMTimeZero) 
    return instruction 
} 

Now that we have these methods, we can apply the correct and appropriate transforms to our assets and get everything fitting nicely and cleanly.

func merge() { 
    if let firstAsset = MyAsset, let newAsset = newAsset { 

     //This creates our overall composition, our new video framework 
     let mixComposition = AVMutableComposition() 

     //One by one you create tracks (could use loop, but I just had 3 cases) 
     let firstTrack = mixComposition.addMutableTrack(withMediaType: AVMediaTypeVideo, 
                    preferredTrackID: Int32(kCMPersistentTrackID_Invalid)) 

     //You have to use a try, so need a do 
     do { 

      //Inserting a timerange into a track. I already calculated my time, I call it startTime. This is where you would put your time. The preferredTimeScale doesn't have to be 600000 haha, I was playing with those numbers; it just allows precision. The "at" time is not where it begins within this individual track, but where it starts in the composition as a whole. As you'll notice below, my at times are different. You also need to give it which track.
      try firstTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, CMTime(seconds: CMTimeGetSeconds(startTime), preferredTimescale: 600000)), 
              of: firstAsset.tracks(withMediaType: AVMediaTypeVideo)[0], 
              at: kCMTimeZero) 
     } catch _ { 
      print("Failed to load first track") 
     } 

     //Create the 2nd track 
     let secondTrack = mixComposition.addMutableTrack(withMediaType: AVMediaTypeVideo, 
                     preferredTrackID: Int32(kCMPersistentTrackID_Invalid)) 

     do { 

      //Apply the 2nd timeRange you have. Also apply the correct track you want 
      try secondTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, self.endTime-self.startTime), 
              of: newAsset.tracks(withMediaType: AVMediaTypeVideo)[0], 
              at: CMTime(seconds: CMTimeGetSeconds(startTime), preferredTimescale: 600000)) 
      secondTrack.preferredTransform = newAsset.preferredTransform 
     } catch _ { 
      print("Failed to load second track") 
     } 

     //We are not sure we are going to use the third track in my case, because they can edit to the end of the original video, causing us not to use a third track. But if we do, it is the same as the others! 
     var thirdTrack:AVMutableCompositionTrack! 
     if(self.endTime != controller.realDuration) { 
      thirdTrack = mixComposition.addMutableTrack(withMediaType: AVMediaTypeVideo, 
                     preferredTrackID: Int32(kCMPersistentTrackID_Invalid)) 

     //This part appears again, at endTime, which is right after the 2nd track is supposed to end.
      do { 
       try thirdTrack.insertTimeRange(CMTimeRangeMake(CMTime(seconds: CMTimeGetSeconds(endTime), preferredTimescale: 600000), self.controller.realDuration-endTime), 
              of: firstAsset.tracks(withMediaType: AVMediaTypeVideo)[0] , 
              at: CMTime(seconds: CMTimeGetSeconds(endTime), preferredTimescale: 600000)) 
      } catch _ { 
       print("failed") 
      } 
     } 

     //Same thing with audio! 
     if let loadedAudioAsset = controller.audioAsset { 
      let audioTrack = mixComposition.addMutableTrack(withMediaType: AVMediaTypeAudio, preferredTrackID: 0) 
      do { 
       try audioTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, self.controller.realDuration), 
               of: loadedAudioAsset.tracks(withMediaType: AVMediaTypeAudio)[0] , 
               at: kCMTimeZero) 
      } catch _ { 
       print("Failed to load Audio track") 
      } 
     } 

     //So, now that we have all of these tracks we need to apply those instructions! If we don't, then they could be different sizes. Say my newAsset is 720x1080 and MyAsset is 1440x900 (These are just examples haha), then it would look a tad funky and possibly not show our new asset at all. 
     let mainInstruction = AVMutableVideoCompositionInstruction() 

     //Make sure the overall time range matches that of the individual tracks, if not, it could cause errors. 
     mainInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, self.controller.realDuration) 

     //For each track we made, we need an instruction. Could set loop or do individually as such. 
     let firstInstruction = videoCompositionInstructionForTrack(firstTrack, firstAsset) 
     //You know, not 100% why this is here. This is 1 thing I did not look into well enough or understand enough to describe to you. 
     firstInstruction.setOpacity(0.0, at: startTime) 

     //Next Instruction 
     let secondInstruction = videoCompositionInstructionForTrack(secondTrack, self.asset) 

     //Again, not sure we need 3rd one, but if we do. 
     var thirdInstruction:AVMutableVideoCompositionLayerInstruction! 
     if(self.endTime != self.controller.realDuration) { 
      secondInstruction.setOpacity(0.0, at: endTime) 
      thirdInstruction = videoCompositionInstructionForTrack(thirdTrack, firstAsset) 
     } 

     //Okay, now that we have all these instructions, we tie them into the main instruction we created above. 
     mainInstruction.layerInstructions = [firstInstruction, secondInstruction] 
     if(self.endTime != self.controller.realDuration) { 
      mainInstruction.layerInstructions += [thirdInstruction] 
     } 

     //We create a video framework now, slightly different than the one above. 
     let mainComposition = AVMutableVideoComposition() 

     //We apply these instructions to the framework 
     mainComposition.instructions = [mainInstruction] 

     //How long are our frames, you can change this as necessary 
     mainComposition.frameDuration = CMTimeMake(1, 30) 

     //This is your render size of the video. 720p, 1080p etc. You set it! 
     mainComposition.renderSize = firstAsset.tracks(withMediaType: AVMediaTypeVideo)[0].naturalSize 

     //We create an export session (you can't use PresetPassthrough because we are manipulating the transforms of the videos and the quality, so I just set it to highest) 
     guard let exporter = AVAssetExportSession(asset: mixComposition, presetName: AVAssetExportPresetHighestQuality) else { return } 

     //Provide type of file, provide the url location you want exported to (I don't have mine posted in this example). 
     exporter.outputFileType = AVFileTypeMPEG4 
     exporter.outputURL = url 

     //Then we tell the exporter to export the video according to our video framework, and it does the work! 
     exporter.videoComposition = mainComposition 

     //Asynchronous methods FTW! 
     exporter.exportAsynchronously(completionHandler: { 
      //Do whatever when it finishes! 
     }) 
    } 
} 
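One thing the snippet above leaves empty is the completion handler. Just as a sketch (not part of my original code), you would typically check the exporter's status there before using the output file; the hop back onto the main queue is only a common pattern, adapt as needed:

exporter.exportAsynchronously(completionHandler: {
    DispatchQueue.main.async {
        switch exporter.status {
        case .completed:
            print("Export finished: \(String(describing: exporter.outputURL))")
        case .failed, .cancelled:
            print("Export failed: \(String(describing: exporter.error))")
        default:
            break
        }
    }
})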

There is a lot going on here, but it has to be done, for my example anyway! Sorry it took so long to post, and let me know if you have questions.


Yes, you can absolutely apply an individual transform to each layer of an AVMutableComposition.

Here is an overview of the process - I have personally done this in Objective-C, so I can't give you the exact Swift code, but I know these same functions work just the same in Swift.

  1. Create an AVMutableComposition.
  2. Create an AVMutableVideoComposition.
  3. Set the render size and frame duration of the video composition.
  4. Now for each AVAsset:
    • Get the asset's video AVAssetTrack and its audio AVAssetTrack.
    • Create an AVMutableCompositionTrack for each (one for video, one for audio) by adding each to the mutableComposition.

This is where it gets more complicated.. (sorry, AVFoundation is not easy!)

  • Create an AVMutableVideoCompositionLayerInstruction that refers to each video's AVAssetTrack. For each AVMutableVideoCompositionLayerInstruction you can set its transform. You can also do things like set a cropping rectangle.
  • Add each AVMutableVideoCompositionLayerInstruction to an array of layerInstructions. When all of the layer instructions have been created, the array gets set on an AVMutableVideoCompositionInstruction, which in turn goes into the AVMutableVideoComposition's instructions.
  • And finally..

  • You will end up with an AVPlayerItem that you use to play this back later (on an AVPlayer). You create the AVPlayerItem with the AVMutableComposition, and then set the AVMutableVideoComposition on the AVPlayerItem itself (setVideoComposition..) - a short Swift sketch of this step follows the list.
  • Easy eh?
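To make that last playback step concrete, here is a minimal Swift sketch (only an illustration of the route described above, not tested code from this answer; mixComposition, videoComposition and containerView are placeholder names, and it assumes AVFoundation/UIKit are imported):

let playerItem = AVPlayerItem(asset: mixComposition)     //the AVMutableComposition you built
playerItem.videoComposition = videoComposition           //the AVMutableVideoComposition holding your layer instructions

let player = AVPlayer(playerItem: playerItem)
let playerLayer = AVPlayerLayer(player: player)
playerLayer.frame = containerView.bounds
containerView.layer.addSublayer(playerLayer)
player.play()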

It took me a few weeks to get this stuff working well. It's totally unforgiving and, as you mention, if you do something wrong it doesn't tell you what you did wrong - it simply doesn't appear.

But when you crack it, it works quickly and well.

Finally, all of the stuff I have outlined is available in the AVFoundation docs. It's a lengthy read, but you need to know it to achieve what you are trying to do.

Best of luck!


I appreciate the help; I have already found the answer, just haven't posted it yet. Thank you! – impression7vx


@impression7vx any progress on this? Anything to help the community? Hitting a roadblock with this and haven't found a good answer. Thanks! – simplexity


Yeah man. I had surgery yesterday, so I have some time at home and will post some code today or tomorrow. Cool? – impression7vx
