First thoughts on using Core Audio with Swift

I built a quick project (available on GitHub) to work through some of the challenges of using the Core Audio APIs with Swift. The project uses an AUGraph to connect and control AudioFilePlayer and RemoteIO AudioUnits, then schedules and plays a bundled audio file. Simple as it is, it illustrates the impact of Swift's type safety on Core Audio calls.
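The setup is conventional AUGraph plumbing. A condensed sketch of the idea follows; the variable names are illustrative (not taken from the actual repo) and error checking is omitted:

```swift
import AudioToolbox

// Build a graph: AudioFilePlayer -> RemoteIO.
var graph = AUGraph()
NewAUGraph(&graph)

// Note: Swift requires every member of AudioComponentDescription,
// with the constants cast to OSType.
var playerDesc = AudioComponentDescription(
    componentType: OSType(kAudioUnitType_Generator),
    componentSubType: OSType(kAudioUnitSubType_AudioFilePlayer),
    componentManufacturer: OSType(kAudioUnitManufacturer_Apple),
    componentFlags: 0, componentFlagsMask: 0)
var outputDesc = AudioComponentDescription(
    componentType: OSType(kAudioUnitType_Output),
    componentSubType: OSType(kAudioUnitSubType_RemoteIO),
    componentManufacturer: OSType(kAudioUnitManufacturer_Apple),
    componentFlags: 0, componentFlagsMask: 0)

var playerNode = AUNode()
var outputNode = AUNode()
AUGraphAddNode(graph, &playerDesc, &playerNode)
AUGraphAddNode(graph, &outputDesc, &outputNode)
AUGraphOpen(graph)

// Player output bus 0 feeds RemoteIO input bus 0; then start the graph.
AUGraphConnectNodeInput(graph, playerNode, 0, outputNode, 0)
AUGraphInitialize(graph)
AUGraphStart(graph)
```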

Because of C’s (and, by extension, Objective-C’s) rather relaxed view of type safety, we could freely intermingle types when making Core Audio calls and filling out its structs. For example, a call to AudioUnitSetProperty would look like this:

                AudioUnitSetProperty(filePlayerUnit, 
                kAudioUnitProperty_ScheduledFileIDs, 
                kAudioUnitScope_Global, 0, 
                filesToSchedule, sizeof(AudioFileID));

In Swift, we have to cast everything (including system constants) to the proper type when making the call:

                 AudioUnitSetProperty(filePlayerUnit, 
                 AudioUnitPropertyID(kAudioUnitProperty_ScheduledFileIDs), 
                 AudioUnitScope(kAudioUnitScope_Global), 0, 
                 filesToSchedule, UInt32(sizeof(AudioFileID)))

Not too much of a change, just something to be aware of.

Previously, we could get away with partially filled-out structs, as long as the information we needed was there. If we wanted to use an AudioTimeStamp without touching the mSMPTETime member, we could just leave it zeroed and move on. Swift doesn’t like partially completed structs — its memberwise initializers demand a value for every field — so it appears we will have to do a little more typing here as well. I haven’t researched a way around this, so if there is one, please let me know and I will update the project accordingly.
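To make that concrete, here is roughly what filling out an AudioTimeStamp looks like in Swift — a sketch, assuming we only care about mSampleTime but must still supply every member, including a fully built SMPTETime we never use:

```swift
import AudioToolbox

// Swift's memberwise initializer wants every field, even mSMPTETime.
var startTime = AudioTimeStamp(
    mSampleTime: -1,   // -1 means "start as soon as possible"
    mHostTime: 0,
    mRateScalar: 0,
    mWordClockTime: 0,
    mSMPTETime: SMPTETime(
        mSubframes: 0, mSubframeDivisor: 0,
        mCounter: 0, mType: 0, mFlags: 0,
        mHours: 0, mMinutes: 0, mSeconds: 0, mFrames: 0),
    mFlags: UInt32(kAudioTimeStampSampleTimeValid),
    mReserved: 0)
```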

Lastly, while not a part of this project, it doesn’t look like Core Audio render callbacks are possible in pure Swift without jumping through a lot of hoops — Swift currently has no way to produce the C function pointers these APIs require. Many developers have requested that Apple address this and I’m sure a fix is forthcoming.
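For reference, the wiring that gets blocked looks something like this sketch, where myRenderCallback is a hypothetical function that would have to be defined in a bridged C/Objective-C file, since Swift can’t currently create the C function pointer itself:

```swift
import AudioToolbox

// inputProc must be a C function pointer; for now that pointer has to
// come from C/Objective-C (here, a hypothetical bridged function).
var callbackStruct = AURenderCallbackStruct(
    inputProc: myRenderCallback,   // hypothetical, defined in a .c file
    inputProcRefCon: nil)

AudioUnitSetProperty(ioUnit,   // the RemoteIO unit from the graph
    AudioUnitPropertyID(kAudioUnitProperty_SetRenderCallback),
    AudioUnitScope(kAudioUnitScope_Input), 0,
    &callbackStruct, UInt32(sizeof(AURenderCallbackStruct)))
```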

I was able to build this project nearly as quickly as with Objective-C, only being slowed down by a little extra typing due to Swift’s type safety (which is a good tradeoff, in my opinion).
