The most closely related Stack Overflow post, from a few years ago, is the following, where they attempt to use ARKit with multiple cameras; in my scenario, though, I don't need ARKit functionality in the selfie camera feed: ARKit and AVCamera simultaneously
I'd like to know if it's possible to run ARKit alongside another camera session using the front-facing camera. I'm also wondering whether there's a low-level solution that bypasses some of ARKit's predefined functionality to make this work, but still gives me something similar to using ARKit with the rear-facing camera while running a video feed from the front-facing camera. I don't need the front-facing camera to have ARKit functionality; I just want a picture-in-picture of a regular front-facing/selfie video feed while an ARKit session runs in the rear-camera feed. It's essentially the simultaneous capture shown in Apple's sample code here, except they don't implement an ARSession with it: https://developer.apple.com/documentation/avfoundation/capture_setup/avmulticampip_capturing_from_multiple_cameras
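For context, here is a minimal sketch of the setup I'm after, assuming a plain AVCaptureSession were enough for the front feed (arView and frontCameraView refer to the outlets in the code further down); whether this front session can keep running while ARKit owns the rear camera is exactly what I'm unsure about:
// Rear camera: let ARKit drive it as usual
let arConfig = ARWorldTrackingConfiguration()
arConfig.planeDetection = [.horizontal]
arView.session.run(arConfig)
// Front camera: a separate, non-AR capture session just for a PiP preview
let frontSession = AVCaptureSession()
if let device = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .front),
   let input = try? AVCaptureDeviceInput(device: device),
   frontSession.canAddInput(input) {
    frontSession.addInput(input)
    let preview = AVCaptureVideoPreviewLayer(session: frontSession)
    preview.frame = frontCameraView.bounds
    preview.videoGravity = .resizeAspectFill
    frontCameraView.layer.addSublayer(preview)
    frontSession.startRunning()
}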
The ViewController.swift script I have below "works", but it seems like there may be a resource conflict: the front-facing camera will begin its feed, but it freezes as soon as the rear-facing camera using ARKit begins its feed, so I'm essentially stuck with a still image for the front-facing camera. (See image below.)
What I’ve tried (I bought the identical outcomes with each of those strategies/See picture beneath):
- Initially I tried the AVCaptureMultiCamSession with only the front/selfie camera, because the ARView (back-camera view) from ARKit has its own AVCapture session that it runs automatically
- I tried putting both the ARView and the UIView (front/selfie camera) into an AVCaptureMultiCamSession
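For reference, a quick check I can add inside the setup code before either attempt, to rule out missing multi-cam support or an over-budget configuration (the properties are AVFoundation's; the logging is just illustrative):
// Rough diagnostic sketch: verify multi-cam support and watch the cost metrics after startRunning()
guard AVCaptureMultiCamSession.isMultiCamSupported else {
    print("AVCaptureMultiCamSession is not supported on this device")
    return
}
let session = AVCaptureMultiCamSession()
// ... add the same inputs/outputs as in the code below ...
session.startRunning()
// A cost greater than 1.0 means the configuration exceeds what the capture hardware can run
print("hardwareCost: \(session.hardwareCost), systemPressureCost: \(session.systemPressureCost)")
Here is the full ViewController.swift: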
import SwiftUI
import RealityKit
import UIKit
import ARKit
import AVFoundation
class ViewController : UIViewController {
@IBOutlet var arView: ARView!
@IBOutlet var frontCameraView: UIView! //small PiP view
// Capture session for the front camera
var captureSession: AVCaptureMultiCamSession?
var rearCameraLayer: AVCaptureVideoPreviewLayer?
var frontCameraLayer: AVCaptureVideoPreviewLayer?
override func viewDidLoad(){
super.viewDidLoad()
// AR session setup for the rear camera is already handled by arView
// start plane detection from ARKit data
startPlaneDetection()
// set up gesture recognizer for placing 3D objects
// second point
arView.addGestureRecognizer(UITapGestureRecognizer(target: self, action: #selector(handleTap(recognizer:))))
// set up the front camera for PiP
setupFrontCamera()
}
func setupARSession()
{
let configuration = ARWorldTrackingConfiguration()
configuration.planeDetection = [.horizontal]
configuration.environmentTexturing = .automatic
arView.session.run(configuration)
}
// setting up both the rear & selfie cameras as one multi-camera session
func setupDualCameraSession() {
captureSession = AVCaptureMultiCamSession()
// Rear camera (primary AR view)
guard let rearCamera = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back),
let rearInput = try? AVCaptureDeviceInput(device: rearCamera),
captureSession?.canAddInput(rearInput) == true else {
return
}
captureSession?.addInput(rearInput)
let rearOutput = AVCaptureVideoDataOutput()
if captureSession?.canAddOutput(rearOutput) == true {
captureSession?.addOutput(rearOutput)
}
// Add rear camera preview to arView (ensure it doesn't interfere with ARKit)
rearCameraLayer = AVCaptureVideoPreviewLayer(session: captureSession!)
rearCameraLayer?.frame = arView.bounds
rearCameraLayer?.videoGravity = .resizeAspectFill
arView.layer.insertSublayer(rearCameraLayer!, at: 0) // Rear camera under AR content
// Front camera (PiP view)
guard let frontCamera = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .front),
let frontInput = try? AVCaptureDeviceInput(device: frontCamera),
captureSession?.canAddInput(frontInput) == true else {
return
}
captureSession?.addInput(frontInput)
let frontOutput = AVCaptureVideoDataOutput()
if captureSession?.canAddOutput(frontOutput) == true {
captureSession?.addOutput(frontOutput)
}
// Add front camera preview to frontCameraView (PiP)
frontCameraLayer = AVCaptureVideoPreviewLayer(session: captureSession!)
frontCameraLayer?.frame = frontCameraView.bounds
frontCameraLayer?.videoGravity = .resizeAspectFill
frontCameraView.layer.addSublayer(frontCameraLayer!)
// Start the session
captureSession?.startRunning()
}
func setupFrontCamera(){
captureSession = AVCaptureMultiCamSession()
// Front camera (PiP view)
guard let frontCamera = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .front),
let frontInput = try? AVCaptureDeviceInput(device: frontCamera),
captureSession?.canAddInput(frontInput) == true else {
return
}
captureSession?.addInput(frontInput)
let frontOutput = AVCaptureVideoDataOutput()
if captureSession?.canAddOutput(frontOutput) == true {
captureSession?.addOutput(frontOutput)
}
// Add front camera preview to the small UIView (PiP view)
frontCameraLayer = AVCaptureVideoPreviewLayer(session: captureSession!)
frontCameraLayer?.frame = frontCameraView.bounds
frontCameraLayer?.videoGravity = .resizeAspectFill
frontCameraView.layer.addSublayer(frontCameraLayer!)
// Start the session
captureSession?.startRunning()
}
func createSphere() -> ModelEntity
{
// Mesh; why do we want these as immutables?
let sphere = MeshResource.generateSphere(radius: 0.5)
// Assign material
let sphereMaterial = SimpleMaterial(color: .blue, roughness: 0, isMetallic: true)
// Model entity; what's the difference between an entity and a mesh + material
// in the context of Swift and how it handles these?
// Model entity
let sphereEntity = ModelEntity(mesh: sphere, materials: [sphereMaterial])
return sphereEntity
}
@objc
func handleTap(recognizer: UITapGestureRecognizer)
{
// Touch location, on screen
let tapLocation = recognizer.location(in: arView)
//Raycast (2D -> 3D)
let results = arView.raycast(from: tapLocation, allowing: .estimatedPlane, alignment: .horizontal)
// I'm assuming the raycast can return several possible results, so .first selects the first
// returned value (verify)
if let firstResult = results.first {
// 3D point (x, y, z)
let worldPos = simd_make_float3(firstResult.worldTransform.columns.3)
//Create sphere
let sphere = createSphere()
//place sphere
placeObject(object: sphere, at: worldPos)
}
}
func placeObject(object: ModelEntity, at location: SIMD3<Float>)
{
//Anchor
let objectAnchor = AnchorEntity(world: location)
// Tie model to anchor
objectAnchor.addChild(object)
// Add Anchor to scene
arView.scene.addAnchor(objectAnchor)
}
func startPlaneDetection(){
arView.automaticallyConfigureSession = true
let configuration = ARWorldTrackingConfiguration()
configuration.planeDetection = [.horizontal]
configuration.environmentTexturing = .automatic
arView.session.run(configuration)
}
}
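One thing I plan to try next is observing the capture session's interruption and runtime-error notifications (e.g. in viewDidLoad), to see whether the front feed is actually being interrupted once ARKit starts rather than silently freezing; the notification names and userInfo keys below are AVFoundation's, and the handlers are just a logging sketch:
// Sketch: log why the front capture session stops delivering frames
NotificationCenter.default.addObserver(
    forName: .AVCaptureSessionWasInterrupted,
    object: captureSession,
    queue: .main
) { note in
    // The reason arrives as a number wrapping AVCaptureSession.InterruptionReason's raw value
    if let value = note.userInfo?[AVCaptureSessionInterruptionReasonKey] as? Int,
       let reason = AVCaptureSession.InterruptionReason(rawValue: value) {
        print("Front capture session interrupted: \(reason)")
    }
}
NotificationCenter.default.addObserver(
    forName: .AVCaptureSessionRuntimeError,
    object: captureSession,
    queue: .main
) { note in
    print("Front capture session runtime error: \(String(describing: note.userInfo?[AVCaptureSessionErrorKey]))")
}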
Additionally, when running the application from Xcode, I get the following warnings:
Could not locate file 'default-binaryarchive.metallib' in bundle.
Registering library (/System/Library/PrivateFrameworks/CoreRE.framework/default.metallib) that already exists in shader manager. Library will be overwritten.
Could not resolve material name 'engine:BuiltinRenderGraphResources/AR/suFeatheringCreateMergedOcclusionMask.rematerial' in bundle at
I also get the following lines appearing in my console:
<<<< FigCaptureSourceRemote >>>> Fig assert: "err == 0 " at bail (FigCaptureSourceRemote.m:275) - (err=-12784)
<<<< FigCaptureSourceRemote >>>> Fig assert: "err == 0 " at bail (FigCaptureSourceRemote.m:511) - (err=-12784)
<<<< FigCaptureSourceRemote >>>> Fig assert: "err == 0 " at bail (FigCaptureSourceRemote.m:275) - (err=-12784)