iOS – RealityKit: Rotate part of a 3D model toward the camera


I have a 3D model of an object with an animation embedded in it. The model also has a skeleton. I want the object displayed in AR to turn its head toward the camera. How can this be done programmatically?

I thought about implementing this by manipulating bones, but I found that RealityKit (iOS 16) doesn't provide this option.

If I load the model as an Entity, I get a graph of Entity objects (the geometry of the model) but no bones.

If I load the model as a ModelEntity, I don't have direct access to the tree of child elements at all, but there are two lists: jointNames and jointTransforms. As I understand it, these are the lists of bone names and bone transforms, respectively.
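
A minimal sketch of how the two loading paths differ (assuming the asset is named "model.usdz"; the joint paths in the comments are just examples from my model):

import RealityKit

func inspectModel() throws {
    // Loading as a plain Entity exposes the node hierarchy, but no bones.
    let entity = try Entity.load(named: "model.usdz")
    print(entity.children.map(\.name))  // child entities (geometry nodes), no joints here

    // Loading as a ModelEntity hides the child tree but exposes the joint lists.
    let modelEntity = try Entity.loadModel(named: "model.usdz")
    print(modelEntity.jointNames)       // e.g. ["root", "root/body", "root/body/neck", ...]
    print(modelEntity.jointTransforms)  // one Transform per joint, in the same order
}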

Below is the code for the basic loading of the object:

import SwiftUI
import RealityKit
import ARKit
import Combine

struct ARViewContainer: UIViewRepresentable {
    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)
        
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = [.horizontal]
        arView.session.run(configuration)
        
        addCoachingOverlay(arView: arView)
        addModelToARView(arView: arView, context: context)
        
        return arView
    }
    
    func updateUIView(_ uiView: ARView, context: Context) { }
    
    func makeCoordinator() -> ARCoordinator {
        ARCoordinator()
    }
    
    private func addModelToARView(arView: ARView, context: Context) {
        Entity.loadModelAsync(named: "model.usdz").sink(
            receiveCompletion: { completion in
                if case let .failure(error) = completion {
                    print("Error loading model: \(error)")
                }
            },
            receiveValue: { modelEntity in
                configureModel(context: context, arView: arView, modelEntity: modelEntity)
            }
        ).store(in: &context.coordinator.cancellables)
    }
}
 

extension ARViewContainer {
    private func addCoachingOverlay(arView: ARView) {
        let coachingOverlay = ARCoachingOverlayView()
        coachingOverlay.autoresizingMask = [.flexibleWidth, .flexibleHeight]
        coachingOverlay.session = arView.session
        coachingOverlay.goal = .horizontalPlane
        arView.addSubview(coachingOverlay)
    }
    
    private func configureModel(context: Context, arView: ARView, modelEntity: ModelEntity) {
        let anchorEntity = AnchorEntity(plane: .horizontal)
        
        anchorEntity.addChild(modelEntity)
        arView.scene.addAnchor(anchorEntity)
        
        // Keep the model from shrinking below a minimum scale.
        let minScale: Float = 0.001
        modelEntity.scale = [minScale, minScale, minScale]
        context.coordinator.modelEntity = modelEntity
        
        arView.scene.subscribe(to: SceneEvents.Update.self) { _ in
            let currentScale = modelEntity.scale.x
            if currentScale < minScale {
                modelEntity.scale = [minScale, minScale, minScale]
            }
        }.store(in: &context.coordinator.cancellables)
    }
}

class ARCoordinator {
    var cancellables: Set<AnyCancellable> = []
    
    var modelEntity: ModelEntity?
}

It seems to me that the most correct way is to manipulate the bones of the 3D model's skeleton, but in RealityKit for iOS 16 this functionality is not directly available. Or are there ways to animate bones/joints?

During my experiments I discovered that when I change the transforms in the jointTransforms list, the model itself visually changes. So I even tried to build my own tree of Entity objects, setting names and transforms from the jointNames and jointTransforms lists. I then store them in the coordinator object for quick access, along with the indices of the joints I need, so I can later update the values in jointTransforms.
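
To illustrate, this is the kind of direct mutation I mean (a minimal sketch; the "/head" suffix matches the joint naming in my model):

import RealityKit

func nudgeHead(of modelEntity: ModelEntity) {
    // Find the head joint by its name suffix.
    guard let idx = modelEntity.jointNames.firstIndex(where: { $0.hasSuffix("/head") }) else { return }
    
    // Overwriting the transform at that index visibly moves the mesh.
    var transform = modelEntity.jointTransforms[idx]
    transform.rotation = simd_quatf(angle: .pi / 8, axis: [0, 1, 0]) * transform.rotation
    modelEntity.jointTransforms[idx] = transform
}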

I run a function on a timer that changes the bone transform according to the following logic:

  1. take the entities of the head, the neck, and the model itself
  2. get the position of the head relative to the neck
  3. compute a transform for the head using the look(at:from:upVector:relativeTo:) function
  4. overwrite the value in jointTransforms at the head's index

Here is an example of the code:

import Foundation
import RealityKit
import Combine

class ARCoordinator {
    var cancellables: Set<AnyCancellable> = []
    
    var modelEntity: ModelEntity?
    var entity: Entity?
    
    var neck: Entity?
    var head: Entity?
    var lEye: Entity?
    var rEye: Entity?
    
    var headTransformIdx = 0
    var lEyeTransformIdx = 0
    var rEyeTransformIdx = 0
    
    private var trackingTimer: Timer?
    
    private var skeletonIsBuilt = false
    private var entities: [String: Entity] = [:]
    
    func startCameraTrackingTimer(arView: ARView) {
        trackingTimer = Timer.scheduledTimer(withTimeInterval: 0.1, repeats: true) { [weak self] _ in
            if let isBuilt = self?.skeletonIsBuilt, isBuilt {
                self?.updateModelOrientation(arView: arView)
            }
        }
    }
    
    private func prepareEntities() {
        let neckName = "root/body/neck"
        let headName = "root/body/neck/head"
        let eyeLName = "root/body/neck/head/eye_left"
        let eyeRName = "root/body/neck/head/eye_right"
        
        if let entity = entities[neckName] {
            neck = entity
        }
        
        if let entity = entities[headName],
           let idx = modelEntity?.jointNames.firstIndex(where: { $0.hasSuffix("/head") }) {
            head = entity
            headTransformIdx = idx
        }
        
        if let entity = entities[eyeLName],
           let idx = modelEntity?.jointNames.firstIndex(where: { $0.hasSuffix("/eye_left") }) {
            lEye = entity
            lEyeTransformIdx = idx
        }
        
        if let entity = entities[eyeRName],
           let idx = modelEntity?.jointNames.firstIndex(where: { $0.hasSuffix("/eye_right") }) {
            rEye = entity
            rEyeTransformIdx = idx
        }
    }
    
    func stopCameraTrackingTimer() {
        trackingTimer?.invalidate()
    }

    private func updateModelOrientation(arView: ARView) {
        if let modelEntity = modelEntity,
           let neck = neck,
           let head = head {
            
            let position = head.position(relativeTo: neck)
            head.look(
                at: arView.cameraTransform.translation,
                from: position,
                upVector: [0, 1, 0],
                relativeTo: neck
            )
            
            modelEntity.jointTransforms[headTransformIdx] = head.transform
        }
    }
    
    func buildGraphAsync(jointNames: [String], jointTransforms: [Transform]) async -> Entity? {
        let graphRoot: Entity? = await withTaskGroup(of: Entity?.self) { group in
            group.addTask { [weak self] in
                return self?.buildGraph(jointNames: jointNames, jointTransforms: jointTransforms)
            }
            
            return await group.first(where: { $0 != nil }) ?? nil
        }
        
        skeletonIsBuilt = true
        prepareEntities()
        
        return graphRoot
    }
    
    private func buildGraph(jointNames: [String], jointTransforms: [Transform]) -> Entity? {
        guard jointNames.count == jointTransforms.count else {
            print("Error: the number of names and transforms doesn't match.")
            return nil
        }

        // The first joint name is the root; every later name's parent is its path prefix.
        var idx = 0
        let root = Entity.create(name: jointNames[idx], transform: jointTransforms[idx])
        entities = [jointNames[idx]: root]
        idx += 1

        while idx < jointNames.count {
            let name = jointNames[idx]
            let transform = jointTransforms[idx]
            
            let parentPathComponents = name.split(separator: "/").dropLast()
            let parentName = parentPathComponents.joined(separator: "/")

            let entity = Entity.create(name: name, transform: transform)
            entities[name] = entity

            if let parent = entities[parentName] {
                parent.addChild(entity)
            } else {
                print("Parent not found for: \(name)")
            }
            
            idx += 1
        }
        
        return root
    }
}

extension Entity {
    static func create(name: String, transform: Transform) -> Entity {
        let entity = Entity()
        entity.name = name
        entity.transform = transform
        
        return entity
    }
}


struct ARViewContainer: UIViewRepresentable {
    func makeUIView(context: Context) -> ARView {
        // ...
        
        context.coordinator.startCameraTrackingTimer(arView: arView)
        
        return arView
    }
    
    private func configureModel(context: Context, arView: ARView, modelEntity: ModelEntity) {
        // ...
        
        Task.detached {
            let entity = await context.coordinator.buildGraphAsync(
                jointNames: modelEntity.jointNames,
                jointTransforms: modelEntity.jointTransforms
            )
            
            await MainActor.run {
                context.coordinator.entity = entity
            }
        }
    }
}

I experimented with different options for calling the look(at:from:upVector:relativeTo:) function (sketched below):

  • changed the values in upVector
  • passed the neck object, and also nil, as relativeTo
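
Concretely, two of the variants looked roughly like this (a sketch wrapped in a hypothetical helper; cameraPosition is arView.cameraTransform.translation, as in updateModelOrientation above):

import RealityKit

func tryLookVariants(head: Entity, neck: Entity, cameraPosition: SIMD3<Float>) {
    let position = head.position(relativeTo: neck)
    
    // Variant: a different up vector, still relative to the neck
    head.look(at: cameraPosition, from: position, upVector: [0, 0, 1], relativeTo: neck)
    
    // Variant: world space (relativeTo: nil) instead of the neck's space
    head.look(at: cameraPosition, from: position, upVector: [0, 1, 0], relativeTo: nil)
}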

I currently have the following problems with this approach:

  • if the up vector is chosen correctly, the head rotates, but only horizontally; there is no movement along the vertical axis.
  • I noticed that the model's head abruptly flips in the opposite direction even before the camera is on the other side. It looks as if the point from which the direction toward the camera is calculated is noticeably closer to the observer than the object itself, i.e. this point lies between the object and the camera. Perhaps I need to set the reference point correctly somehow; a sketch of one idea I had is below.
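
For example, if I understand the documentation correctly, look(at:from:upVector:relativeTo:) interprets the target in the coordinate space of relativeTo, so I wondered whether the camera position first needs to be converted from world space into the neck's space (a sketch of the idea, replacing updateModelOrientation in the coordinator; I haven't verified that this is the right fix):

private func updateModelOrientation(arView: ARView) {
    guard let modelEntity = modelEntity, let neck = neck, let head = head else { return }
    
    // Bring the camera position from world space (nil) into the neck's local space,
    // so the target and the head position share one coordinate space.
    let cameraInNeckSpace = neck.convert(position: arView.cameraTransform.translation, from: nil)
    let headPosition = head.position(relativeTo: neck)
    
    head.look(at: cameraInNeckSpace, from: headPosition, upVector: [0, 1, 0], relativeTo: neck)
    modelEntity.jointTransforms[headTransformIdx] = head.transform
}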

Can someone help me solve this problem? Above all, I'm interested in the correct way to make the model's head face the camera.

Thanks in advance for any help)
