Introduction
The Mach1 Spatial SDK includes APIs that allow developers to design applications that encode or pan audio streams into a spatial audio render, and to play back and decode Mach1Spatial 8-channel spatial audio mixes with orientation, producing the correct stereo output sum for the user's current orientation. Additionally, the Mach1 Spatial SDK allows users to safely convert surround/spatial audio mixes to and from the Mach1Spatial or Mach1Horizon VVBP formats.
VVBP (Virtual Vector Based Panning) is a controlled, virtual version of traditional VBAP (Vector Based Amplitude Panning) or SPS (Spatial PCM Sampling). These formats are designed for simplicity and ease of use and implementation, both for content creators and for developers. The spatial audio mixes are based only on amplitude coefficient changes for both encoding and decoding, and unlike many other spatial audio approaches there are no additional signal-altering processes (such as room modeling, delays or filters); this creates coherent and accurate spatial sound fields that can be played back from a first person headtracked perspective. Because of the simplicity of the format and the cuboid vector space it relies on, it is also ideal for converting and carrying surround and spatial audio mixes without altering them in the process, making it an ideal server-side audio middleman container for bringing controlled, post-produced spatial audio into new mediums easily.
Overview
The Mach1 Spatial SDK includes the following components and libraries:
- Mach1Encode lib: Encode and process input streams/audio into a Mach1Spatial VVBP format.
- Mach1Decode lib: Decode and process a Mach1Spatial VVBP format with device orientation / headtracking to output directional spatial audio.
- Mach1DecodePositional lib: Add additional optional decoding layer to decode spatial mixes with 6DOF for positional and orientational decoding.
- Mach1Transcode lib: Transcode / convert any audio format (surround/spatial) to or from a Mach1Spatial VVBP format.
Mach1Encode and Mach1Decode are C based and cross-compiler friendly, with pre-built library files supported on OSX 10.7+, Windows 10+, ARM based devices (Raspberry Pi), iOS 9.0+ and Android API 19+. Unity 4.0+ and Unreal Engine 4.10+ examples are available, and those engines are supported on the aforementioned platforms as well.
Mach1Transcode is supported on macOS, Linux and Windows, with game engine support coming soon.
Mach1 Internal Angle Standard
We deliberately chose how we think about and describe rotations & translations in space, aiming to unify creators and developers and working from a first person perspective. After long deliberation over various existing standards, we found each had places where it worked and places where it didn't, and none were very "humanized"; in an effort to fix this, we follow these guidelines:
Coordinate / Angle / Rotation Description Expectations:
- Rotations can be individually explained per axis with signed rotations
- Rotations are explained from a center perspective point of view (FPV - First Person View)
Mach1 YPR Polar Expectation of Describing Orientation:
Common use: Mach1Decode API, Mach1DecodePositional API
- Yaw (left -> right | where rotating left is negative)
- Pitch (down -> up | where rotating down is negative)
- Roll (top-pointing-left -> top-pointing-right | where rotating top of object left is negative)
Mach1 AED Expectation of Describing Polar Points:
Common use: Mach1Encode API
- Azimuth (left -> right | where rotating left is negative)
- Elevation (down -> up | where rotating down is negative)
- Diverge (backward -> forward | where behind origin of Azimuth/Elevation is negative)
Mach1 XYZ Coordinate Expectation of Vector Points:
- X (left -> right | where -X is left)
- Y (front -> back | where -Y is back)
- Z (top -> bottom | where -Z is bottom)
Positional 3D Coords
- X+ = strafe right
- X- = strafe left
- Y+ = up
- Y- = down
- Z+ = forward
- Z- = backward
Orientation Euler
- Yaw[0]+ = rotate right [Range: 0->360 | -180->180]
- Yaw[0]- = rotate left [Range: 0->360 | -180->180]
- Pitch[1]+ = rotate up [Range: -90->90]
- Pitch[1]- = rotate down [Range: -90->90]
- Roll[2]+ = tilt right [Range: -90->90]
- Roll[2]- = tilt left [Range: -90->90]
The orientation convention is based on a first person perspective point of view to make interfacing as easy to interpret as possible.
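As a quick illustration of these conventions, a listener turned to the right and looking slightly down could be described like this (a minimal sketch, assuming a configured Mach1Decode instance named mach1Decode as shown in the Mach1Decode API section):
float yawDegrees = 90.0f; // rotated right from the forward direction (right is positive)
float pitchDegrees = -30.0f; // looking down (down is negative)
float rollDegrees = 0.0f; // head level, no tilt
mach1Decode.setRotationDegrees(yawDegrees, pitchDegrees, rollDegrees);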
JSON Descriptions
When utilizing the CustomPoints format for the Mach1Transcode API for either the input or output format description, you can describe the custom format via JSON using the following syntax and example:
Concept
Each input channel is described as a "point"; a point can be described either spherically via usePolar or via cartesian coordinates, either of which should match the descriptions outlined in Mach1 Internal Angle Standard.
Advanced
- [IN DEVELOPMENT] gain descriptions can be added for further custom normalization schemes.
Point Description
Using Cartesian
- x with values between -1.0 and 1.0
- y with values between -1.0 and 1.0
- z with values between -1.0 and 1.0
Using Polar / Spherical (in degrees)
- azimuth with values between -180.0 and 180.0 (conversions to radians happen internally)
- elevation with values between -180.0 and 180.0 (conversions to radians happen internally)
- diverge with values between -1.0 and 1.0
- name: a string description of the channel or point
- usePolar: a boolean that skips the cartesian input of the point, expected as 0/1 or false/true (REQUIRED FOR POLAR/SPHERICAL DESCRIPTIONS OF EACH POINT)
Example
The following is a 2 channel example description
{
"points": [
{
"x": 0.0,
"y": 0.0,
"z": 0.0,
"usePolar": true,
"azimuth": -45.0,
"elevation": 0.0,
"diverge": 1.0,
"name": "L"
},
{
"x": 1.0,
"y": 1.0,
"z": 0.0,
"name": "R"
}
]
}
Mach1Encode API
Mach1Encode allows you to transform input audio streams into the Mach1Spatial VVBP 8 channel format. Included are functions needed for mono, stereo or quad/FOA audio streams. The input streams are referred to as Points in our SDK.
The typical encoding process starts with creating an object of the Mach1EncodeCore class and setting it up as described below. After that, generate Points by calling generatePointResults() on that object. You'll get as many points as there are input channels, and as many gains in each point as there are output channels. You then copy each input channel to each output channel with the corresponding gain.
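A rough sketch of that copy loop (assumptions: an already configured Mach1Encode instance named m1Encode, a hypothetical audio callback providing deinterleaved buffers inputBuffers[point][sample] and outputBuffers[channel][sample], and that getGains() returns a [input point][output channel] gain matrix):
m1Encode.generatePointResults();
std::vector<std::vector<float>> gains = m1Encode.getGains(); // assumed [point][output channel] layout
for (size_t point = 0; point < gains.size(); ++point) {
    for (size_t out = 0; out < gains[point].size(); ++out) {
        for (int sample = 0; sample < numSamples; ++sample) {
            // accumulate each input channel into each output channel with its gain
            outputBuffers[out][sample] += inputBuffers[point][sample] * gains[point][out];
        }
    }
}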
Summary of Use
void update(){
m1Encode.setAzimuth(azimuth);
m1Encode.setElevation(elevation);
m1Encode.setDiverge(diverge);
m1Encode.setAutoOrbit(autoOrbit);
m1Encode.setStereoRotate(sRotation);
m1Encode.setStereoSpread(sSpread);
m1Decode.setDecodeAlgoType(Mach1DecodeAlgoSpatial);
m1Encode.setPannerMode(Mach1EncodePannerMode::Mach1EncodePannerModeIsotropicLinear);
m1Encode.setInputMode(Mach1EncodeInputModeType::Mach1EncodeInputModeMono);
m1Encode.setOutputMode(Mach1EncodeOutputModeType::Mach1EncodeOutputModeM1Spatial_8);
mtx.lock();
m1Encode.generatePointResults();
m1Decode.beginBuffer();
decoded = m1Decode.decode(decoderRotationY, decoderRotationP, decoderRotationR, 0, 0);
m1Decode.endBuffer();
std::vector<float> gains = this->gains;
mtx.unlock();
}
func update(decodeArray: [Float], decodeType: Mach1DecodeAlgoType){
m1Encode.setAzimuth(azimuth: azimuth)
m1Encode.setElevation(elevation: elevation)
m1Encode.setDiverge(diverge: diverge)
m1Encode.setAutoOrbit(setAutoOrbit: true)
m1Encode.setStereoRotate(setStereoRotation: sRotation)
m1Encode.setStereoSpread(setStereoSpread: stereoSpread)
m1Encode.setPannerMode(pannerMode: type)
m1Encode.setInputMode(inputMode: type)
m1Encode.setOutputMode(outputMode: type)
m1Encode.generatePointResults()
//Use each coeff to decode multichannel Mach1 Spatial mix
var gains : [Float] = m1Encode.getResultingCoeffsDecoded(decodeType: decodeType, decodeResult: decodeArray)
for i in 0..<players.count {
players[i].volume = gains[i] * volume
}
}
let m1Encode = null;
Mach1EncodeModule().then(function(m1EncodeModule) {
m1Encode = new(m1EncodeModule).Mach1Encode();
});
function update() {
m1Encode.setAzimuth(params.azimuth);
m1Encode.setElevation(params.elevation);
m1Encode.setDiverge(params.diverge);
m1Encode.setStereoRotate(params.sRotation);
m1Encode.setStereoSpread(params.sSpread);
m1Encode.setAutoOrbit(params.autoOrbit);
m1Encode.setPannerMode(params.pannerMode);
m1Encode.generatePointResults();
var encodeCoeffs = m1Encode.getGains();
}
The Mach1Encode API is designed to aid in developing tools for inputting to a Mach1 VVBP/SPS format. It gives access to the common calculations needed for the audio processing and UI/UX handling when panning/encoding Mach1 VVBP/SPS formats, via the following common structure:
Installation
Import and link the appropriate target device's / IDE's library file.
Generate Point Results
m1Encode.generatePointResults();
m1Encode.generatePointResults()
m1Encode.generatePointResults();
Returns the resulting points' coefficients based on the selected and calculated input/output configuration.
Set Input Mode
if (inputKind == 0) { // Input: MONO
m1Encode.inputMode = M1Encode::INPUT_MONO;
}
if (inputKind == 1) { // Input: STEREO
m1Encode.inputMode = M1Encode::INPUT_STEREO;
}
if (inputKind == 2) { // Input: Quad
m1Encode.inputMode = M1Encode::INPUT_QUAD;
}
if (inputKind == 3) { // Input: AFORMAT
m1Encode.inputMode = M1Encode::INPUT_AFORMAT;
}
if (inputKind == 4) { // Input: BFORMAT
m1Encode.inputMode = M1Encode::INPUT_FOAACN;
}
var type : Mach1EncodeInputModeType = Mach1EncodeInputModeMono
m1Encode.setInputMode(inputMode: type)
if(soundFiles[soundIndex].count == 1) {
type = Mach1EncodeInputModeMono
}
else if(soundFiles[soundIndex].count == 2) {
type = Mach1EncodeInputModeStereo
}
else if (soundFiles[soundIndex].count == 4) {
if (quadMode){
type = Mach1EncodeInputModeQuad
}
if (aFormatMode){
type = Mach1EncodeInputModeAFormat
}
if (bFormatMode){
type = Mach1EncodeInputModeBFOAACN
}
}
if (params.inputKind == 0) { // Input: MONO
m1Encode.setInputMode(m1Encode.Mach1EncodeInputModeType.Mach1EncodeInputModeMono);
}
if (params.inputKind == 1) { // Input: STEREO
m1Encode.setInputMode(m1Encode.Mach1EncodeInputModeType.Mach1EncodeInputModeStereo);
}
if (params.inputKind == 2) { // Input: Quad
m1Encode.setInputMode(m1Encode.Mach1EncodeInputModeType.Mach1EncodeInputModeQuad);
}
if (params.inputKind == 3) { // Input: AFORMAT
m1Encode.setInputMode(m1Encode.Mach1EncodeInputModeType.Mach1EncodeInputModeAFormat);
}
if (params.inputKind == 4) { // Input: 1st Order Ambisonics (ACNSN3D)
m1Encode.setInputMode(m1Encode.Mach1EncodeInputModeType.Mach1EncodeInputModeBFOAACN);
}
Sets the number of input streams to be positioned as points.
- INPUT_MONO
- INPUT_STEREO
- INPUT_QUAD
- INPUT_LCRS
- INPUT_AFORMAT
- INPUT_FOAACN
- INPUT_FOAFUMA
- INPUT_2OAACN
- INPUT_2OAFUMA
- INPUT_3OAACN
- INPUT_3OAFUMA
- INPUT_LCR
Set Output Mode
if (outputKind == 0) { // Output: 4CH Mach1Horizon
m1Encode.outputMode = M1Encode::Mach1EncodeOutputModeM1Horizon_4;
}
if (outputKind == 1) { // Output: 8CH Mach1Spatial
m1Encode.outputMode = M1Encode::Mach1EncodeOutputModeM1Spatial_8;
}
if (outputKind == 0) { // Output: 4CH Mach1Horizon
m1Encode.setOutputMode(outputMode: Mach1EncodeOutputModeM1Horizon_4)
}
if (outputKind == 1) { // Output: 8CH Mach1Spatial
m1Encode.setOutputMode(outputMode: Mach1EncodeOutputModeM1Spatial_8)
}
if (params.outputKind == 0) { // Output: 4CH Mach1Horizon
m1Encode.setOutputMode(m1Encode.Mach1EncodeOutputModeType.Mach1EncodeOutputModeM1Horizon_4);
}
if (params.outputKind == 1) { // Output: 8CH Mach1Spatial
m1Encode.setOutputMode(m1Encode.Mach1EncodeOutputModeType.Mach1EncodeOutputModeM1Spatial_8);
}
Sets the output spatial format, Mach1Spatial or Mach1Horizon
- Mach1EncodeOutputModeM1Spatial_8 (8ch) [Yaw, Pitch, Roll] {default}
- Mach1EncodeOutputModeM1Horizon_4 (4ch) [Yaw]
- Mach1EncodeOutputModeM1Spatial_12 (12ch) [Yaw, Pitch, Roll]
- Mach1EncodeOutputModeM1Spatial_14 (14ch) [Yaw, Pitch, Roll]
- Mach1EncodeOutputModeM1Spatial_18 (18ch) [Yaw, Pitch, Roll]
- Mach1EncodeOutputModeM1Spatial_32 (32ch) [Yaw, Pitch, Roll]
- Mach1EncodeOutputModeM1Spatial_36 (36ch) [Yaw, Pitch, Roll]
- Mach1EncodeOutputModeM1Spatial_48 (48ch) [Yaw, Pitch, Roll]
- Mach1EncodeOutputModeM1Spatial_60 (60ch) [Yaw, Pitch, Roll]
Set Azimuth
m1Encode.setAzimuth(azimuthFromMinus1To1);
m1Encode.setAzimuth(azimuth: azimuthFromMinus1To1)
m1Encode.setAzimuth(params.azimuthFromMinus1To1);
Rotates the point(s) around the center origin of the vector space.
UI value range: 0.0 -> 1.0 (0 -> 360)
Set Azimuth Degrees
m1Encode.setAzimuthDegrees(azimuthDegrees);
m1Encode.setAzimuthDegrees(azimuth: azimuthDegrees)
m1Encode.setAzimuthDegrees(params.azimuthDegrees);
Rotates the point(s) around the center origin of the vector space.
UI value range: 0.0 -> 360.0
Set Azimuth Radians
m1Encode.setAzimuthRadians(azimuthRadians);
m1Encode.setAzimuthRadians(azimuth: azimuthRadians)
m1Encode.setAzimuthRadians(params.azimuthRadians);
Rotates the point(s) around the center origin of the vector space.
UI value range: 0 -> 2PI (0 -> 360)
Set Diverge
m1Encode.setDiverge(diverge);
m1Encode.setDiverge(diverge: diverge)
m1Encode.setDiverge(params.diverge);
Moves the point(s) to/from center origin of the vector space.
UI value range: -1.0 -> 1.0
Set Elevation
m1Encode.setElevation(elevationFromMinus1to1);
m1Encode.setElevation(elevation: elevationFromMinus1to1)
m1Encode.setElevation(params.elevationFromMinus1to1);
Moves the point(s) up/down the vector space.
UI value range: -1.0 -> 1.0 (-90 -> 90)
Set Elevation Degrees
m1Encode.setElevationDegrees(elevationFromMinus90to90);
m1Encode.setElevationDegrees(elevation: elevationFromMinus90to90)
m1Encode.setElevationDegrees(params.elevationFromMinus90to90);
Moves the point(s) up/down the vector space.
UI value range: -90 -> 90
Set Elevation Radians
m1Encode.setElevationRadians(elevationFromMinusHalfPItoHalfPI);
m1Encode.setElevationRadians(elevation: elevationFromMinusHalfPItoHalfPI)
m1Encode.setElevationRadians(params.elevationFromMinusHalfPItoHalfPI);
Moves the point(s) up/down the vector space.
UI value range: -PI/2 -> PI/2 (-90 -> 90)
Set Stereo Rotation
m1Encode.setStereoRotate(sRotation);
m1Encode.setStereoRotate(setStereoRotate: stereoRotate)
m1Encode.setStereoRotate(params.sRotation);
Rotates the two stereo points around the axis of the center point between them.
UI value range: -180.0 -> 180.0
Set Stereo Spread
m1Encode.setStereoSpread(sSpread);
m1Encode.setStereoSpread(setStereoSpread: stereoSpread)
m1Encode.setStereoSpread(params.sSpread);
Increases or decreases the space between the two stereo points.
UI value range: 0.0 -> 1.0
Set Auto Orbit
m1Encode.setAutoOrbit(autoOrbit);
m1Encode.setAutoOrbit(setAutoOrbit: true)
m1Encode.setAutoOrbit(params.autoOrbit);
When active, both stereo points rotate in relation to the center point between them so that they always triangulate toward the center of the cuboid.
default value: true
Inline Mach1Encode Object Decoder
//Use each coeff to decode multichannel Mach1 Spatial mix
std::vector<float> volumes = m1Encode.getResultingCoeffsDecoded(decodeType, decodeArray);
for (int i = 0; i < 8; i++) {
players[i].volume = volumes[i] * volume;
}
//Use each coeff to decode multichannel Mach1 Spatial mix
var volumes : [Float] = m1Encode.getResultingCoeffsDecoded(decodeType: decodeType, decodeResult: decodeArray)
for i in 0..<players.count {
players[i].volume = volumes[i] * volume
}
m1Encode.generatePointResults();
m1Decode.beginBuffer();
var decoded = m1Decode.decode(params.decoderRotationY, params.decoderRotationP, params.decoderRotationR);
m1Decode.endBuffer();
var vol = [];
if (params.outputKind == 1) { // Output: Mach1Spatial
vol = m1Encode.getResultingCoeffsDecoded(m1Decode.Mach1DecodeAlgoType.Mach1DecodeAlgoSpatial, decoded);
}
This function allows designs where only previewing or live rendering to a decoded audio output is required, without any step of rendering or exporting to disk. It enables designs where developers can stack and sum the decoded outputs of multiple Mach1Encode objects instead of using Mach1Encode objects to write to a master 8 channel intermediary file, allowing shorthand Mach1Encode->Mach1Decode->Stereo paths when only live playback is needed.
This can also be used to add object audio design to your application from the Mach1 Spatial APIs and add further control to an application to layer pre-rendered spatial audio and runtime spatial audio as needed.
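A minimal sketch of that stacking pattern (assumptions: two configured Mach1Encode objects m1EncodeA/m1EncodeB, per-channel audio players playersA/playersB of equal count, and that the C++ getResultingCoeffsDecoded takes the decode algorithm type plus the vector returned by decodeCoeffs, mirroring the Swift and JS examples above):
m1EncodeA.generatePointResults();
m1EncodeB.generatePointResults();
m1Decode.beginBuffer();
std::vector<float> decoded = m1Decode.decodeCoeffs(); // uses the last setRotation* values
m1Decode.endBuffer();
// each encode object shares the same decode orientation and is summed at the players
std::vector<float> gainsA = m1EncodeA.getResultingCoeffsDecoded(Mach1DecodeAlgoSpatial, decoded);
std::vector<float> gainsB = m1EncodeB.getResultingCoeffsDecoded(Mach1DecodeAlgoSpatial, decoded);
for (size_t i = 0; i < playersA.size(); ++i) {
    playersA[i].volume = gainsA[i] * masterVolume;
    playersB[i].volume = gainsB[i] * masterVolume;
}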
Mach1Decode API
Mach1Decode supplies the functions needed to play back Mach1 Spatial VVBP formats as a stereo stream based on the device's orientation. This can be used for mobile device windowing or first person based media such as AR/VR/MR without any additional processing effects required.
Summary Use
void setup(){
mach1Decode.setDecodeAlgoType(Mach1DecodeAlgoSpatial_8);
mach1Decode.setPlatformType(Mach1PlatformDefault);
mach1Decode.setFilterSpeed(0.95f);
}
void loop(){
mach1Decode.setRotation(deviceYaw, devicePitch, deviceRoll); // normalized rotation input
mach1Decode.beginBuffer();
auto gainCoeffs = mach1Decode.decodeCoeffs();
mach1Decode.endBuffer();
// Apply gainCoeffs to gain/volume of array of audioplayers for custom spatial audio mixer
}
override func viewDidLoad() {
mach1Decode.setDecodeAlgoType(newAlgorithmType: Mach1DecodeAlgoSpatial_8)
mach1Decode.setPlatformType(type: Mach1PlatformiOS)
mach1Decode.setFilterSpeed(filterSpeed: 1.0)
}
func update() {
mach1Decode.beginBuffer()
let decodeArray: [Float] = mach1Decode.decode(Yaw: Float(deviceYaw), Pitch: Float(devicePitch), Roll: Float(deviceRoll))
mach1Decode.endBuffer()
// Apply gainCoeffs to gain/volume of array of audioplayers for custom spatial audio mixer
}
let m1Decode = null;
Mach1DecodeModule().then(function(m1DecodeModule) {
m1Decode = new(m1DecodeModule).Mach1Decode();
m1Decode.setPlatformType(m1Decode.Mach1PlatformType.Mach1PlatformDefault);
m1Decode.setDecodeAlgoType(m1Decode.Mach1DecodeAlgoType.Mach1DecodeAlgoSpatial_8);
m1Decode.setFilterSpeed(0.95);
});
function update() {
m1Decode.beginBuffer();
var decoded = m1Decode.decode(params.decoderRotationY, params.decoderRotationP, params.decoderRotationR);
m1Decode.endBuffer();
// Apply gainCoeffs to gain/volume of array of audioplayers for custom spatial audio mixer
}
The Mach1Decode API is designed to be used the following way:
Setup Step (setup/start):
- setDecodeAlgoType
- setPlatformType
- setFilterSpeed
Audio Loop:
- beginBuffer
- setRotation / setRotationDegrees / setRotationRadians / setRotationQuat
- decode / decodeCoeffs
- getCurrentAngle (debug/optional)
- endBuffer
Using Mach1Decode decode
Callback Options
The Mach1Decode class's decode function returns the coefficients for each external audio player's gain/volume to create the spatial decode per update at the current angle. The timing of when this is called can be designed with two different modes of use:
- Update decode results via the used audio player/engine's audio callback
- Update decode results via main loop (or any call)
To utilize the first mode, supply the buffer size (int) your audio players are using and the current sample index within that buffer to the decode(yaw, pitch, roll, bufferSize, sampleIndex) function, to synchronize and update with the audio callback.
To utilize the second mode, simply supply 0 values for bufferSize and sampleIndex.
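For example (a sketch, using Euler degree inputs):
// Mode 1: called from the audio callback; pass the player buffer size and
// the current sample index so the decode stays synchronized with playback
std::vector<float> gainsAudioThread = mach1Decode.decode(deviceYaw, devicePitch, deviceRoll, bufferSize, sampleIndex);
// Mode 2: called from the main loop (or any other call site); pass 0 for both
std::vector<float> gainsMainLoop = mach1Decode.decode(deviceYaw, devicePitch, deviceRoll, 0, 0);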
Installation
Import and link the appropriate target device's / IDE's library file.
Set Platform Type
Set the Angular Type for the target device via the enum
mach1Decode.setPlatformType(Mach1PlatformDefault);
mach1Decode.setPlatformType(type: Mach1PlatformType.Mach1PlatformiOS)
m1Decode.setPlatformType(m1Decode.Mach1PlatformType.Mach1PlatformOfEasyCam);
Use the setPlatformType function to set the device's angle order and convention if applicable:
Preset Types (enum):
- Mach1PlatformDefault = 0
- Mach1PlatformUnity
- Mach1PlatformUE
- Mach1PlatformOfEasyCam
- Mach1PlatformAndroid
- Mach1PlatformiOS
Angle Order Conventions
- Order of Yaw, Pitch, Roll (Defined as angle applied first, second and third).
- Direction of transform around each pole's positive movement (left or right rotation).
- Range of values before the transform completes a full rotation of 2(PI).
Euler Angle Orders:
- Mach1 (default): (yaw-pitch-roll)
- Unity: Left handed x-y-z (pitch-roll-yaw)
- Unreal Engine: Right handed z-y-x
Set Filter Speed
float filterSpeed = 1.0f;
mach1Decode.setFilterSpeed(filterSpeed);
mach1Decode.setFilterSpeed(filterSpeed: 1.0)
mach1Decode.setFilterSpeed(0.95);
Filter speed determines the amount of angle smoothing applied to the orientation angles used for the Mach1DecodeCore class. 1.0 would mean that there is no filtering applied, 0.1 would add a long ramp effect of intermediary angles between each angle sample. It should be noted that you will not have any negative effects with >0.9 but could get some orientation latency when <0.85. The reason you might want angle smoothing is that it might help remove a zipper effect seen on some poorer performing platforms or devices.
Set Decoding Algorithm
void setDecodeAlgoType(Mach1DecodeAlgoType newAlgorithmType);
func setDecodeAlgoType(newAlgorithmType: Mach1DecodeAlgoType)
m1Decode.setDecodeAlgoType(m1Decode.Mach1DecodeAlgoType.Mach1DecodeAlgoSpatial_8);
Use this function to set up and choose the required Mach1 decoding algorithm.
Mach1 Decoding Algorithm Types:
- Mach1DecodeAlgoSpatial_8 = 0 (default spatial | 8 channels)
- Mach1DecodeAlgoHorizon_4 (compass / yaw | 4 channels)
- Mach1DecodeAlgoHorizonPairs (compass / yaw | 4x stereo mastered pairs)
Mach1DecodeAlgoSpatial_8
Mach1Spatial. 8 Channel spatial mix decoding from our cuboid configuration. This is the default and recommended decoding utilizing isotropic decoding behavior.
Mach1DecodeAlgoHorizon_4
Mach1Horizon. 4 channel spatial mix decoding for compass / yaw only configurations. Also able to decode and virtualize a first person perspective of Quad Surround mixes.
Mach1DecodeAlgoHorizonPairs
Mach1HorizonPairs. 8 channel spatial mix decoding for compass / yaw only that can support headlocked / non-diegetic stereo elements to be mastered within the mix / 8 channels. Supports and decodes Quad-Binaural mixes.
Begin Buffer
mach1Decode.beginBuffer();
mach1Decode.beginBuffer()
mach1Decode.beginBuffer();
Call this function before reading from the Mach1Decode buffer.
End Buffer
mach1Decode.endBuffer();
mach1Decode.endBuffer()
mach1Decode.endBuffer();
Call this function after reading from the Mach1Decode buffer.
Decode
There are four exposed functions that can be called at any time to set the next rotation. This maximizes design possibilities where rotation updates may need to be called more or less often than the calculated coefficients are needed. In general there are three ways to calculate decode values for your native audio player:
- Use setRotation / setRotationDegrees / setRotationRadians / setRotationQuat before you call decodeCoeffs()
- Use decode() with inline arguments in Euler degrees
- Use decodeCoeffsUsingTranscodeMatrix()
1. DecodeCoeffs()
mach1Decode.setRotationDegrees(float deviceYaw, float devicePitch, float deviceRoll);
std::vector<float> decodedGains = mach1Decode.decodeCoeffs();
mach1Decode.setRotationDegrees(Yaw: Float(deviceYaw), Pitch: Float(devicePitch), Roll: Float(deviceRoll))
let decodedGains: [Float] = mach1Decode.decodeCoeffs()
m1Decode.setRotationDegrees(params.decoderRotationY, params.decoderRotationP, params.decoderRotationR);
var decodedGains = m1Decode.decodeCoeffs();
For easier use across more design cases we have a "decode" function that uses the last called setRotation value, in case your use case needs different input rotation descriptions or updates orientation in a different place than the audio thread where your decode is applied.
Please view the section on setRotation to learn more about the different functions for updating input rotations to Mach1Decode.
2. Decode()
std::vector<float> decodedGains = mach1Decode.decode(float deviceYaw, float devicePitch, float deviceRoll);
let decodedGains: [Float] = mach1Decode.decode(Yaw: Float(deviceYaw), Pitch: Float(devicePitch), Roll: Float(deviceRoll))
var decodedGains = m1Decode.decode(params.decoderRotationY, params.decoderRotationP, params.decoderRotationR);
An all-in-one call to decode(float yaw, float pitch, float roll) with the input orientation rotation described in absolute Euler degrees can be used.
Decoding Design
Default Isotropic Decoding [recommended]:
// lower performance version for non audio thread operation or for use in managed languages
std::vector<float> volumes = mach1Decode.decode(float deviceYaw, float devicePitch, float deviceRoll);
// you can get a per sample gains/volumes frame if you specify the buffer size and the current sample index
std::vector<float> decodedGains = mach1Decode.decode(float deviceYaw, float devicePitch, float deviceRoll, int bufferSize, int sampleIndex);
// high performance version is meant to be used on the audio thread, it puts the resulting channel gains/volumes
// into a float array instead of allocating a result vector. Notice the pointer to the decodedGainsFrame array passed. The array itself has to have a size of 18 floats
float decodedGainsFrame[18];
mach1Decode.decode(float deviceYaw, float devicePitch, float deviceRoll, float *decodedGainsFrame, int bufferSize, int sampleIndex);
The decode function's purpose is to give you updated gains/volumes for each input audio channel for each frame in order for spatial effect to manifest itself. There are two versions of this function - one for cases when you might not need very low latency or couldn't include C/C++ directly, and another version for C/C++ high performance use.
If using on the audio thread, the high performance version is recommended where possible.
Example of Using Decoded Coefficients
Sample based example
decodedGains = mach1Decode.decode(deviceYaw, devicePitch, deviceRoll);
for (int i = 0; i < 8; i++) {
playersLeft[i].setVolume(decodedGains[i * 2] * overallVolume);
playersRight[i].setVolume(decodedGains[i * 2 + 1] * overallVolume);
}
//Send device orientation to mach1Decode object with the preferred algo
mach1Decode.beginBuffer()
let decodedGains: [Float] = mach1Decode.decode(Yaw: Float(deviceYaw), Pitch: Float(devicePitch), Roll: Float(deviceRoll))
mach1Decode.endBuffer()
//Use each coeff to decode multichannel Mach1 Spatial mix
for i in 0...7 {
players[i * 2].volume = Double(decodedGains[i * 2])
players[i * 2 + 1].volume = Double(decodedGains[i * 2 + 1])
}
Buffer based example
//16 coefficients of spatial, 2 coefficients of headlocked stereo
float decodedGains[18];
mach1Decode.beginBuffer();
for (size_t i = 0; i < samples; i++)
{
float sndL = 0.0f, sndR = 0.0f; // stereo accumulators for this sample
size_t idx = bufferRead + i; // read position into each source channel buffer (assumes bufferRead tracks playback position)
mach1Decode.decode(Yaw, Pitch, Roll, decodedGains, samples, i);
for (int j = 0; j < 8; j++)
{
sndL += decodedGains[j * 2 + 0] * buffer[j][idx];
sndR += decodedGains[j * 2 + 1] * buffer[j][idx];
}
buf[i * 2 + 0] = (short) (sndL * (SHRT_MAX-1));
buf[i * 2 + 1] = (short) (sndR * (SHRT_MAX-1));
}
mach1Decode.endBuffer();
bufferRead += samples;
Input the orientation angles and return the current sample/buffer's coefficients.
Set Rotation
Call one of these functions before you call decodeCoeffs() to properly update the calculated gains before applying them to your native audio player/handler. Whichever setRotation variant you use, decodeCoeffs() will then return the calculated spatial gains you apply to your native audio player/handler as per our examples. To make the expected ranges and values more human interpretable, we have explicitly named the input arguments of each of these setRotation functions.
Rotation: Normalized
mach1Decode.setRotation(float deviceYawNorm, float devicePitchNorm, float deviceRollNorm);
mach1Decode.setRotation(Yaw: Float(deviceYawNorm), Pitch: Float(devicePitchNorm), Roll: Float(deviceRollNorm))
m1Decode.setRotation(params.decoderRotationYNorm, params.decoderRotationPNorm, params.decoderRotationRNorm);
- Yaw: float for device/listener yaw angle: [Range: -1.0 -> 1.0]
- Pitch: float for device/listener pitch angle: [Range: -0.25 -> 0.25]
- Roll: float for device/listener roll angle: [Range: -0.25 -> 0.25]
Rotation: Degrees
mach1Decode.setRotationDegrees(float deviceYawDegrees, float devicePitchDegrees, float deviceRollDegrees);
mach1Decode.setRotationDegrees(Yaw: Float(deviceYawDegrees), Pitch: Float(devicePitchDegrees), Roll: Float(deviceRollDegrees))
m1Decode.setRotationDegrees(params.decoderRotationYDegrees, params.decoderRotationPDegrees, params.decoderRotationRDegrees);
- Yaw: float for device/listener yaw angle: [Range: 0->360 | -180->180]
- Pitch: float for device/listener pitch angle: [Range: -90->90]
- Roll: float for device/listener roll angle: [Range: -90->90]
Rotation: Radians
mach1Decode.setRotationRadians(float deviceYawRads, float devicePitchRads, float deviceRollRads);
mach1Decode.setRotationRadians(Yaw: Float(deviceYawRads), Pitch: Float(devicePitchRads), Roll: Float(deviceRollRads))
m1Decode.setRotationRadians(params.decoderRotationYRads, params.decoderRotationPRads, params.decoderRotationRRads);
- Yaw: float for device/listener yaw angle: [Range: 0->2PI | -PI->PI]
- Pitch: float for device/listener pitch angle: -PI/2 -> PI/2
- Roll: float for device/listener roll angle: -PI/2 -> PI/2
Rotation: Quaternion
mach1Decode.setRotationQuat(float deviceW, float deviceX, float deviceY, float deviceZ);
mach1Decode.setRotationQuat(W: Float(deviceW), X: Float(deviceX), Y: Float(deviceY), Z: Float(deviceZ))
m1Decode.setRotationQuat(params.deviceW, params.deviceX, params.deviceY, params.deviceZ);
- W: float for device/listener W: [Range: -1.0->1.0]
- X: float for device/listener X: [Range: -1.0->1.0]
- Y: float for device/listener Y: [Range: -1.0->1.0]
- Z: float for device/listener Z: [Range: -1.0->1.0]
Get Current Time
Returns the current elapsed time in milliseconds (ms) since Mach1Decode object's creation.
Get Log
Returns a string of the last log message (or an empty string if none) from the Mach1DecodeCAPI binary library. Use this to assist in debugging with a list of input angles and the associated output coefficients from the Mach1Decode function used.
Get Current Angle
Use this to get the current angle being processed by Mach1Decode, good for orientation latency checks.
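A sketch of these debug helpers together; the exact C++ names and return types shown here are assumptions based on the section titles above, so check the header for your language for the precise signatures:
long elapsedMs = mach1Decode.getCurrentTime(); // assumed name: ms since the object's creation
std::string lastLog = mach1Decode.getLog(); // assumed name: last log message or empty string
Mach1Point3D currentAngle = mach1Decode.getCurrentAngle(); // assumed name: current processed angle, useful for latency checks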
Mach1DecodePositional API
Mach1DecodePositional extends the 3DOF orientation decoding to environments that support 6DOF with positional movement. It does this by referencing the user's device to a location and adding an additional layer of rotations and attenuations to the spatial decoding.
Unity & Unreal Engine
Please view the examples in examples/Unity|UnrealEngine to see deployment of Mach1Spatial mixes with positional rotation and attenuation applied. These functions can be viewed from the M1Base class used in both examples and are called by creating a new object in the game engine and attaching Mach1SpatialActor or Mach1SpatialDecode.cs to view the setup for a Mach1 Spatial mix layer.
Summary Use
void setup(){
mach1DecodePositional.setDecodeAlgoType(Mach1DecodeAlgoSpatial_8);
mach1DecodePositional.setPlatformType(Mach1PlatformDefault);
mach1DecodePositional.setUseAttenuation(bool useAttenuation);
mach1DecodePositional.setAttenuationCurve(float attenuationCurve);
mach1DecodePositional.setUsePlaneCalculation(bool usePlaneCalculation);
}
void loop(){
mach1DecodePositional.setListenerPosition(Mach1Point3D devicePos);
mach1DecodePositional.setListenerRotation(Mach1Point3D deviceRot);
mach1DecodePositional.setDecoderAlgoPosition(Mach1Point3D objPos);
mach1DecodePositional.setDecoderAlgoRotation(Mach1Point3D objRot);
mach1DecodePositional.setDecoderAlgoScale(Mach1Point3D objScale);
mach1DecodePositional.getDist();
mach1DecodePositional.getCoefficients(float* result);
}
override func viewDidLoad() {
mach1DecodePositional.setDecodeAlgoType(newAlgorithmType: Mach1DecodeAlgoSpatial_8)
mach1DecodePositional.setPlatformType(type: Mach1PlatformiOS)
mach1DecodePositional.setFilterSpeed(filterSpeed: 1.0)
mach1DecodePositional.setUseAttenuation(useAttenuation: true)
mach1DecodePositional.setUsePlaneCalculation(bool: false)
}
func update() {
mach1DecodePositional.setListenerPosition(point: (devicePos))
mach1DecodePositional.setListenerRotation(point: Mach1Point3D(deviceRot))
mach1DecodePositional.setDecoderAlgoPosition(point: (objectPosition))
mach1DecodePositional.setDecoderAlgoRotation(point: Mach1Point3D(objectRotation))
mach1DecodePositional.setDecoderAlgoScale(point: Mach1Point3D(x: 0.1, y: 0.1, z: 0.1))
mach1DecodePositional.evaluatePositionResults()
var attenuation : Float = mach1DecodePositional.getDist()
attenuation = mapFloat(value: attenuation, inMin: 0, inMax: 3, outMin: 1, outMax: 0)
attenuation = clampFloat(value: attenuation, min: 0, max: 3)
mach1DecodePositional.setAttenuationCurve(attenuationCurve: attenuation)
var decodeArray: [Float] = Array(repeating: 0.0, count: 18)
mach1DecodePositional.getCoefficients(result: &decodeArray)
}
The Mach1DecodePositional API is designed to be added when 6DOF or positional placement of Mach1Decode objects is needed. Once added and used to update the object and the referenceable device/camera, it calculates the positional and rotational angles and distances and returns them via the same usable coefficients that are used for Mach1Decode, as per the following flow:
Setup Step (setup/start):
- setDecodeAlgoType
- setPlatformType
- setUseAttenuation: set distance attenuation for the soundfield
- setAttenuationCurve: design custom distance attenuation curves
- setUsePlaneCalculation: reference rotations use a plane instead of a point, or the closest plane of a shape if needed
Audio Loop:
- update device/camera position & rotation (can use Euler or Quat)
- update m1obj decode position & rotation (can use Euler or Quat)
- getDist: used for attenuation/falloff results
- getCoefficients: resulting coeffs for players
Installation
Import and link the appropriate target device's / IDE's library file.
For Unity: - Import the Custom Asset Package
For Unreal Engine: - Add the Mach1Spatial Plugin to your project
Setup per Spatial Soundfield Position
The following are functions to aid in how positional distance affects the overall gain of a mach1decode object in any design. The resulting distance calculations can also be used for any external effect if created.
Attenuation/Falloff
void setUseAttenuation(bool useAttenuation);
func setUseAttenuation(useAttenuation: Bool)
Boolean turning on/off distance attenuation calculations on that mach1decode object
Reference positional point/plane/shape
void setUsePlaneCalculation(bool usePlaneCalculation);
func setUsePlaneCalculation(bool usePlaneCalculation: Bool)
This function sets whether the rotational pivots of a mach1decode soundfield reference the device/camera to a positional point or to the closest point of a plane (and further, the closest plane of a shape). This gives each mach1decode object more placement design options and prevents soundfield scenes from rotating when not needed.
Set Filter Speed
float filterSpeed = 1.0f;
mach1Decode.setFilterSpeed(filterSpeed);
mach1Decode.setFilterSpeed(filterSpeed: 1.0)
Filter speed determines the amount of angle smoothing applied to the orientation angles used for the Mach1DecodeCore class. 1.0 would mean that there is no filtering applied, 0.1 would add a long ramp effect of intermediary angles between each angle sample. It should be noted that you will not have any negative effects with >0.9 but could get some orientation latency when <0.85. The reason you might want angle smoothing is that it might help remove a zipper effect seen on some poorer performing platforms or devices.
Setup for Advanced Settings
Mute Controls
void setMuteWhenOutsideObject(bool muteWhenOutsideObject);
func setMuteWhenOutsideObject(muteWhenOutsideObject: Bool)
Similar to setUseClosestPointRotationMuteInside, these functions give further control over placing a soundfield positionally and determining when it should or shouldn't output results.
Mutes the mach1decode object (all coefficient results become 0) when outside the positional reference shape/point.
void setMuteWhenInsideObject(bool muteWhenInsideObject);
func setMuteWhenInsideObject(muteWhenInsideObject: Bool)
Mutes the mach1decode object (all coefficient results become 0) when inside the positional reference shape/point.
Manipulate input angles for positional rotations
void setUseYawForRotation(bool useYawForRotation);
func setUseYawForRotation(bool useYawForRotation: Bool)
Ignore Yaw angle rotation results from pivoting positionally
void setUsePitchForRotation(bool usePitchForRotation);
func setUsePitchForRotation(bool usePitchForRotation: Bool)
Ignore Pitch angle rotation results from pivoting positionally
void setUseRollForRotation(bool useRollForRotation);
func setUseRollForRotation(bool useRollForRotation: Bool)
Ignore Roll angle rotation results from pivoting positionally
Update per Spatial Soundfield Position
Updatable variables for each mach1decode object. These are also able to be set once if needed.
void setListenerPosition(Mach1Point3DCore* pos);
func setListenerPosition(point: Mach1Point3D)
Updates the device/camera's position in desired x,y,z space
void setListenerRotation(Mach1Point3DCore* euler);
func setListenerRotation(point: Mach1Point3D)
Updates the device/camera's orientation with Euler angles (yaw, pitch, roll)
void setListenerRotationQuat(Mach1Point4DCore* quat);
func setListenerRotationQuat(point: Mach1Point4D)
Updates the device/camera's orientation with Quaternion
void setDecoderAlgoPosition(Mach1Point3DCore* pos);
func setDecoderAlgoPosition(point: Mach1Point3D)
Updates the decode object's position in desired x,y,z space
void setDecoderAlgoRotation(Mach1Point3DCore* euler);
func setDecoderAlgoRotation(point: Mach1Point3D)
Updates the decode object's orientation with Euler angles (yaw, pitch, roll)
void setDecoderAlgoRotationQuat(Mach1Point4DCore* quat);
func setDecoderAlgoRotationQuat(point: Mach1Point4D)
Updates the decode object's orientation with Quaternion
void setDecoderAlgoScale(Mach1Point3DCore* scale);
func setDecoderAlgoScale(point: Mach1Point3D)
Updates the decode object's scale in desired x,y,z space
Applying Resulting Coefficients
void evaluatePositionResults();
func evaluatePositionResults()
Runs the positional calculations; call this each update before getCoefficients.
void getCoefficients(float *result);
func getCoefficients(result: inout [Float])
Get the coefficient results, incorporating both rotation and position, for applying to the mach1decode object's players; this replaces the results from mach1Decode.decode.
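A sketch of pulling the positional results each update and applying them, mirroring the Mach1Decode buffer examples above (assumptions: 8 stereo-pair players named playersLeft/playersRight and the 18-coefficient layout of 16 spatial gains plus 2 headlocked stereo gains):
mach1DecodePositional.evaluatePositionResults();
float decodedGains[18];
mach1DecodePositional.getCoefficients(decodedGains);
for (int i = 0; i < 8; i++) {
    playersLeft[i].setVolume(decodedGains[i * 2 + 0] * overallVolume);
    playersRight[i].setVolume(decodedGains[i * 2 + 1] * overallVolume);
}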
Return Relative Comparisons
Distance
float getDist();
func getDist() -> Float
Get normalized distance between mach1decode object and device/camera
Current Angle
Mach1Point3D getCurrentAngle();
func getCurrentAngle() -> Mach1Point3D
Get the current angle of the mach1decode object
Current Rotation
Mach1Point3D getCoefficientsRotation();
func getCoefficientsRotation() -> Mach1Point3D
Get the current rotation of the mach1decode object
Update Falloff/Attenuation
void setAttenuationCurve(float attenuationCurve);
func setAttenuationCurve(attenuationCurve: Float)
Set the resulting attenuation value of your falloff curve for the current buffer (see the summary example above, where the value is derived from getDist).
Mach1Transcode CommandLine
Mach1Transcode includes functions for use cases that utilize Mach1Spatial's agnostic abilities and allows 1:1 VBAP-style conversions from any surround or spatial audio format to any other surround or spatial audio format. This is very helpful for apps that have certain input requirements but different output requirements based on whether the app is launched for VR/AR/MR or just mobile use, without completely redesigning the application's audio structure. This is also a recommended method of carrying one master spatial audio container and converting it at endpoints as needed, without the adverse signal-altering effects seen in other spatial audio formats.
Usage
Rapidly offline render to and from Mach1 formats.
Example in command line for converting Mach1Spatial mix to First Order ambisonics: ACNSN3D
m1-transcode -in-file /path/to/file.wav -in-fmt M1Spatial-8 -out-fmt ACNSN3D -out-file /path/to/output.wav -out-file-chans 0
Example in command line for converting 7.1 film mix to Mach1Spatial
m1-transcode -in-file /path/to/file.wav -in-fmt 7.1_C -out-fmt Mach1Spatial-8 -out-file /path/to/output.wav
Example in command line for converting Mach1Spatial to Mach1HorizonPairs (quad-binaural compliant)
m1-transcode -in-file /path/to/file.wav -in-fmt M1Spatial-8 -out-fmt Mach1HorizonPairs -out-file /path/to/output.wav -out-file-chans 2
Suggested Metadata Spec [optional]
- Mach1Spatial-8 = mach1spatial-8
- Mach1Spatial-12 = mach1spatial-12
- Mach1Spatial-14 = mach1spatial-14
- Mach1 StSP = mach1stsp-2
- Mach1Spatial-4 = mach1horizon-4
- Mach1Horizon Pairs = mach1horizon-8
Metadata is not required for decoding any Mach1 Spatial VVBP format, and it is often not recommended to rely on auto-detection methods; instead rely on UI/UX for user input upon uploading a Mach1 multichannel audio file for safest handling. This is because there are several possible 8 channel formats, and unless there are proper methods to filter, detect and handle each one, user input will be a safer option. There are many opportunities for transcoding or splitting a multichannel audio file, any of which could undo metadata or apply false-positive metadata, because many audio engines are not built to handle multichannel solutions safely.
If autodetection is still required, use the following suggested specifications, which are applied to mixes that run out of M1-Transcoder and soon m1-transcode directly:
Example:
Metadata:
comment : mach1spatial-8
Examples of Metadata Spec
ffmpeg (wav output): -metadata ICMT="mach1spatial-8"
ffmpeg (vorbis output): -metadata spatial-audio='mach1spatial-8'
ffmpeg (aac output): -metadata comment='mach1spatial-8'
libsndfile (wav output): outfiles[i].setString(0x05, "mach1spatial-8");
Formats Supported
The most up to date location for the supported formats is the Supported Formats List.
Mach1 & Vector Based Formats
Surround Formats
Ambisonic & Spherical Formats (special thanks to VVAudio)
Mic Array Formats
Custom Format/Configuration
./m1-transcode -in-file /path/to/16channel.wav -in-fmt CustomPoints -in-json /path/to/16ChannelDescription.json -out-file /path/to/output-m1spatial.wav -out-fmt M1Spatial-8 -out-file-chans 0
Input a JSON description of the surround/spatial soundfield setup per your design and supply it with the -in-json argument for any custom input or output transcoding.
To use this, set the -in-fmt or -out-fmt as CustomPoints.
Additional Features
LFE/SUB Channel Filter
Example of low pass filtering every channel but the Front-Right of the Mach1 Spatial mix and outputting it to stereo.
./m1-transcode -in-file /path/to/input-m1spatial.wav -in-fmt M1Spatial-8 -out-file /path/to/output-stereo.wav -lfe-sub 0,2,3,4,6,7 -out-fmt Stereo -out-file-chans 0
Use the -lfe-sub argument to indicate which input channels you want to apply a Low Pass Filter to; the argument expects a list of ints separated by commas.
Spatial Downmixer
./m1-transcode -in-file /path/to/input-fiveOne.wav -in-fmt 5.1_C -spatial-downmixer 0.9 -out-file /path/to/output-m1spatial.wav -out-fmt M1Spatial-8 -out-file-chans 0
For scaling audio output to streaming use cases of Mach1Decode, and for use cases using the Mach1 Spatial output from Mach1Transcode, we have included a way to compare the top vs. bottom of the input soundfield; if the difference is less than the set threshold (float), the output format will be Mach1 Horizon. This allows soundfields that do not have much top vs. bottom difference to output to the lower channel count Mach1 Horizon format, saving filesize while streaming.
The -spatial-downmixer argument can be used to set this; the float after the argument is used as the threshold. If used, this effectively adds an additional transcoding after anything outputting to Mach1 Spatial, to then transcode from Mach1 Spatial to Mach1 Horizon while respecting the content of the soundfield.
Metadata Extractor
./m1-transcode -in-file /path/to/input-ADM.wav -in-fmt 7.1.4_C_SIM -out-file /path/to/output.wav -extract-metadata -out-fmt M1Spatial-8 -out-file-chans 0
An ADM metadata reader and parser is embedded into m1-transcode binary executable to help with custom pipelines using Mach1Encode API to render Object Audio Soundfields into Mach1 Spatial mixes/renders for easier handling.
The -extract-metadata argument will dump any found XML ADM metadata in the audio binary as a text file with the same output name and path.
Mach1Transcode API
Mach1Transcode leverages the Mach1 Spatial virtual vector based panning (VVBP) principles to enable faster than real time multichannel audio format conversions that safely retain the soundfield and do not use any additional processing effects for simulation. The lack of processing effects ensures transparent input and output soundfields during conversion, and the lightweight modular design of the API allows use with any audio handler or media library already natively installed.
Multichannel audio development and creative use currently face many challenges caused by legacy surround implementations; the Mach1Transcode API can be used to help customize multichannel and spatial audio pipelines in development and gain control without requiring adoption of legacy practices.
Summary of Use
static void* decode(void* v);
Mach1Transcode m1Transcode;
static std::vector<std::vector<float>> m1Coeffs; //2D array, [input channel][input channel's coeff]
Mach1TranscodeFormatType inputMode;
Mach1TranscodeFormatType outputMode;
// Mach1 Transcode Setup
inputMode = "ACNSN3DmaxRE3oa";
outputMode = "M1Spatial-8";
//resize coeffs array to the size of the current output
m1Transcode.setOutputFormat(outputMode);
for (int i = 0; i < m1Coeffs.size(); i++){
m1Coeffs[i].resize(m1Transcode.getOutputNumChannels(), 0.0f);
}
m1Transcode.setInputFormat(inputMode);
m1Transcode.setOutputFormat(outputMode);
// Called to update Mach1Transcode
m1Transcode.setSpatialDownmixer();
m1Transcode.processConversionPath();
m1Coeffs = m1Transcode.getMatrixConversion();
import Mach1SpatialAPI
private var m1Decode = Mach1Decode()
private var m1Transcode = Mach1Transcode()
// Mach1 Transcode Setup
m1Transcode.setInputFormat(inFmt: Mach1TranscodeFormatType)
m1Transcode.setOutputFormat(outFmt: "M1Spatial-8")
m1Transcode.processConversionPath()
matrix = m1Transcode.getMatrixConversion()
// Mach1 Decode Setup
m1Decode.setPlatformType(type: Mach1PlatformiOS)
m1Decode.setDecodeAlgoType(newAlgorithmType: Mach1DecodeAlgoSpatial_8)
m1Decode.setFilterSpeed(filterSpeed: 1.0)
// Called when updating InputFormat for Mach1Transcode
m1Decode.beginBuffer()
m1Decode.setRotationDegrees(newRotationDegrees: Mach1Point3D(x: 0, y: 0, z: 0))
let result: [Float] = m1Decode.decodeCoeffsUsingTranscodeMatrix(matrix: matrix, channels: m1Transcode.getInputNumChannels())
m1Decode.endBuffer()
// Called when updating input orientation for Mach1Decode
m1Decode.beginBuffer()
m1Decode.setRotationDegrees(newRotationDegrees: Mach1Point3D(x: Float(deviceYaw), y: Float(devicePitch), z: Float(deviceRoll)))
let result: [Float] = m1Decode.decodeCoeffsUsingTranscodeMatrix(matrix: matrix, channels: m1Transcode.getInputNumChannels())
m1Decode.endBuffer()
The Mach1Transcode API is designed openly, supplying a coefficient matrix for conversion to be interpreted as needed. The following is an example of setting up Mach1Transcode for any input and for direct conversion to Mach1Spatial, to be decoded with orientation to stereo for spatial previewing applications:
Installation
Import and link the appropriate target device's / IDE's library file and headers.
Set / Get Input Format
Mach1TranscodeFormatType inputMode;
m1Transcode.setInputFormat(inputMode);
m1Transcode.setInputFormat(inFmt: Mach1TranscodeFormatFiveOneFilm_Cinema)
Set or return the input format/configuration for processing.
Set / Get Output Format
Mach1TranscodeFormatType outputMode;
m1Transcode.setOutputFormat(outputMode);
m1Transcode.setOutputFormat(outFmt: Mach1TranscodeFormatM1Spatial)
Set or return the output format/configuration for processing.
Set / Get Spatial Downmixer
m1Transcode.setSpatialDownmixer();
m1Transcode.setSpatialDownmixer()
Sets the threshold float for the getSpatialDownmixerPossibility calculation. getSpatialDownmixerPossibility returns true if the compared signals differ by less than the setSpatialDownmixer(corrThreshold) value.
The threshold is a float from 0.0 to 1.0, where 0.0 is no difference and values toward 1.0 allow more difference. When true is returned, transcodings that are set to output to Mach1Spatial will process an additional conversion to Mach1Horizon.
Set LFE / Sub Channels
Applies a low pass filter (LPF) to each indicated channel index of the input format and soundfield. Takes a vector of ints representing the indexes of the input channels to be processed.
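A sketch of what that call might look like in C++; the setter name setLFESub here is an assumption taken from this section title and the -lfe-sub command line argument, not a confirmed signature:
std::vector<int> lfeChannelIndexes = { 3 }; // e.g. the LFE channel index of a 5.1 input
m1Transcode.setLFESub(lfeChannelIndexes); // assumed function name; applies an LPF to the listed input channels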
Set Input as Custom Points
Sets the input format for transcoding from an external JSON source. View the JSON spec for describing a format here: https://dev.mach1.tech/#json-descriptions.
Set Input as ADM
Sets the input format for transcoding from the parsed ADM metadata within the audiofile.
Process Master Gain
Applies an input gain to the output soundfield.
Parameters: input buffer, integer count of input samples, float gain multiplier.
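A sketch of the call; the function name processMasterGain is an assumption taken from this section title, using the parameter order listed above:
float masterGain = 0.5f; // hypothetical gain multiplier (about -6 dB)
m1Transcode.processMasterGain(inBufs, numSamples, masterGain); // assumed name: buffer, number of samples, gain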
Process Conversion Path
Use this function to control when the format transcoding calculations are performed.
Get Conversion Path
Returns the shortest found conversion path to get from input format X to output format Y, both set by Mach1Transcode::setInputFormat(Mach1TranscodeFormatType inFmt) and Mach1Transcode::setOutputFormat(Mach1TranscodeFormatType outFmt). The majority of format conversions will use Mach1Spatial as the middle format for non-Mach1-format -> non-Mach1-format transcodings, because Mach1 Spatial is a platonic solid format, ideal for safe calculations without loss.
Process Conversion Matrix
std::vector<std::vector<float>> m1Coeffs; //2D array, [input channel][input channel's coeff]
m1Coeffs = m1Transcode.getMatrixConversion();
private var matrix: [[Float]] = []
matrix = m1Transcode.getMatrixConversion()
Returns the transcoding matrix of coefficients based on the set input and output formats.
Process Conversion
m1Transcode.processConversion(float: inBufs, float: outBufs, int: numSamples)
Call to process the conversion as set by previous functions.
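A fuller sketch of an offline conversion pass, assuming processConversion() takes deinterleaved float channel buffers (inBufs[inputChannel][sample], outBufs[outputChannel][sample]); the buffer layout is an assumption, and only the call order mirrors the Summary of Use above:
m1Transcode.setInputFormat(inputMode);
m1Transcode.setOutputFormat(outputMode);
m1Transcode.processConversionPath();
// allocate deinterleaved buffers sized from the selected formats
std::vector<std::vector<float>> in(m1Transcode.getInputNumChannels(), std::vector<float>(numSamples, 0.0f));
std::vector<std::vector<float>> out(m1Transcode.getOutputNumChannels(), std::vector<float>(numSamples, 0.0f));
// ...fill `in` with samples read from your input audio file...
std::vector<float*> inPtrs, outPtrs;
for (auto& ch : in) inPtrs.push_back(ch.data());
for (auto& ch : out) outPtrs.push_back(ch.data());
m1Transcode.processConversion(inPtrs.data(), outPtrs.data(), numSamples);
// `out` now holds the converted soundfield ready to write to disk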
Direct Agnostic Playback of All Input Formats via Mach1Decode
// Basic struct for input audio/format
struct AudioInput {
var name: String
var format: Mach1TranscodeFormatType
var files: [String]
}
// Declarations
private var m1Decode = Mach1Decode()
private var m1Transcode = Mach1Transcode()
private var players: [AVAudioPlayer] = []
private var matrix: [[Float]] = []
// Setup
m1Transcode.setInputFormat(inFmt: AudioInput.format)
m1Transcode.setOutputFormat(outFmt: Mach1TranscodeFormatM1Spatial)
m1Transcode.processConversionPath()
matrix = m1Transcode.getMatrixConversion()
// Loop
m1Decode.beginBuffer()
m1Decode.setRotationDegrees(newRotationDegrees: Mach1Point3D(x: 0, y: 0, z: 0)) // Update orientation as needed
let result: [Float] = m1Decode.decodeCoeffsUsingTranscodeMatrix(matrix: matrix, channels: m1Transcode.getInputNumChannels())
m1Decode.endBuffer()
//Use each coeff to decode the multichannel Mach1 Spatial
for i in 0..<result.count {
players[i].setVolume(result[i], fadeDuration: 0)
}
Common Issues
The following is a list of commonly encountered issues during implementation, including audio tools to help find these issues as well as basic descriptions of their behavior and how they can be avoided.
Orientation Latency Issues
Orientation Rate Issues (Zipper)
Audio/Visual Sync Issues
Spatial Decoding Phase Issues
Last updated: 2023-July-5
© Copyright 2017-2023, Mach1. All rights reserved.