I've recently gotten interested in generated audio again, but I'm having a bit of trouble. I've been following this tutorial, converting it into Swift:
https://gist.github.com/gcatlin/0dd61f19d40804173d015c01a80461b8
However, when I play back the audio, all I get is some rather icky white noise rather than the pure tone I was expecting. Here's the code I'm using to create the tone unit:
private func createToneUnit() throws {
    // Configure the search parameters to find the default playback output unit
    var outputDesc = AudioComponentDescription()
    outputDesc.componentType = kAudioUnitType_Output
    outputDesc.componentSubType = kAudioUnitSubType_RemoteIO
    outputDesc.componentManufacturer = kAudioUnitManufacturer_Apple
    outputDesc.componentFlags = 0
    outputDesc.componentFlagsMask = 0

    // Get the default playback output unit
    guard let output = AudioComponentFindNext(nil, &outputDesc) else {
        throw AudioError.cannotFindOutput
    }

    // Create a new unit based on this that we'll use for output
    var error = AudioComponentInstanceNew(output, &toneUnit)
    guard let toneUnit = toneUnit, error == noErr else {
        throw AudioError.cannotCreateComponent
    }

    // Set our tone rendering function on the unit
    var callback = AURenderCallbackStruct()
    callback.inputProcRefCon = UnsafeMutableRawPointer(Unmanaged.passUnretained(self).toOpaque())
    callback.inputProc = {
        (userData, actionFlags, timeStamp, busNumber, frameCount, data) -> OSStatus in
        let _self = Unmanaged<MainViewController>.fromOpaque(userData).takeUnretainedValue()
        return _self.renderTone(actionFlags: actionFlags, timeStamp: timeStamp, busNumber: busNumber, frameCount: frameCount, data: data)
    }
    error = AudioUnitSetProperty(
        toneUnit,
        kAudioUnitProperty_SetRenderCallback,
        kAudioUnitScope_Input,
        0,
        &callback,
        UInt32(MemoryLayout.size(ofValue: callback))
    )
    guard error == noErr else {
        throw AudioError.cannotSetCallback
    }

    // Set the format to 32 bit, single channel, floating point, linear PCM
    var streamFormat = AudioStreamBasicDescription()
    streamFormat.mSampleRate = sampleRate
    streamFormat.mFormatID = kAudioFormatLinearPCM
    streamFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked
    streamFormat.mFramesPerPacket = 1
    streamFormat.mChannelsPerFrame = 1
    streamFormat.mBitsPerChannel = 16
    streamFormat.mBytesPerFrame = streamFormat.mChannelsPerFrame * streamFormat.mBitsPerChannel / 8
    streamFormat.mBytesPerPacket = streamFormat.mBytesPerFrame * streamFormat.mFramesPerPacket
    error = AudioUnitSetProperty(
        toneUnit,
        kAudioUnitProperty_StreamFormat,
        kAudioUnitScope_Input,
        0,
        &streamFormat,
        UInt32(MemoryLayout<AudioStreamBasicDescription>.size)
    )
    guard error == noErr else {
        throw AudioError.cannotSetStreamFormat
    }
}
And here's the render function:
func renderTone(
    actionFlags: UnsafeMutablePointer<AudioUnitRenderActionFlags>,
    timeStamp: UnsafePointer<AudioTimeStamp>,
    busNumber: UInt32,
    frameCount: UInt32,
    data: UnsafeMutablePointer<AudioBufferList>?
) -> OSStatus {
    // Get buffer
    let bufferList = UnsafeMutableAudioBufferListPointer(data!)
    let increment = MainViewController.fullCycle * frequency / sampleRate

    // Generate samples
    for buffer in bufferList {
        for frame in 0 ..< frameCount {
            if let audioData = buffer.mData?.assumingMemoryBound(to: Float64.self) {
                audioData[Int(frame)] = sin(theta) * amplitude
            }
            // Note: this would NOT work for a stereo output
            theta += increment
            while theta > MainViewController.fullCycle {
                theta -= MainViewController.fullCycle
            }
        }
    }
    return noErr
}
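For what it's worth, the phase-accumulator math itself seems fine when I test it in isolation as plain Swift (the sample rate, frequency, and amplitude values here are just illustrative stand-ins for my instance properties):

```swift
import Foundation

// Standalone check of the phase-accumulator logic used in renderTone,
// with illustrative values standing in for the real properties.
let sampleRate = 44100.0
let frequency = 440.0
let amplitude = 0.25
let fullCycle = 2.0 * Double.pi  // stand-in for MainViewController.fullCycle

var theta = 0.0
let increment = fullCycle * frequency / sampleRate

var samples = [Double]()
for _ in 0 ..< 512 {
    samples.append(sin(theta) * amplitude)
    theta += increment
    // Wrap the phase back into [0, fullCycle] to avoid unbounded growth
    while theta > fullCycle {
        theta -= fullCycle
    }
}

// Every sample should stay within the requested amplitude
print(samples.allSatisfy { abs($0) <= amplitude })
```

Running that prints `true`, so I don't think the waveform math is the problem.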
Anyone see anything obviously bad about this? I'd really much rather be using Swift than Obj-C, but I can't find a working example of how to accomplish this, just some (admittedly helpful) partial examples of how to set things up that don't actually perform any tone rendering.