public interface AudioReceiveHandler
Modifier and Type | Field | Description |
---|---|---|
static javax.sound.sampled.AudioFormat | OUTPUT_FORMAT | Audio output format used by JDA. |
Modifier and Type | Method | Description |
---|---|---|
boolean | canReceiveCombined() | If this method returns true, then JDA will generate combined audio data and provide it to the handler. |
boolean | canReceiveUser() | If this method returns true, then JDA will provide audio data to the handleUserAudio(UserAudio) method. |
void | handleCombinedAudio(CombinedAudio combinedAudio) | If canReceiveCombined() returns true, JDA will provide a CombinedAudio object to this method every 20 milliseconds. |
void | handleUserAudio(UserAudio userAudio) | If canReceiveUser() returns true, JDA will provide a UserAudio object to this method every time the user speaks. This method fires only when Discord sends audio data, unlike the fixed 20 millisecond schedule of handleCombinedAudio(CombinedAudio). |
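The two "can" methods gate which "handle" callbacks the library invokes. The following is a minimal, JDA-free sketch of that dispatch pattern; Receiver, handleCombined, and handleUser are hypothetical stand-ins for the real interface, which works with CombinedAudio and UserAudio objects.

```java
import java.util.ArrayList;
import java.util.List;

public class HandlerDispatchSketch {
    // Mirrors the shape of AudioReceiveHandler for illustration only.
    interface Receiver {
        boolean canReceiveCombined();
        boolean canReceiveUser();
        void handleCombined(byte[] pcm);          // stands in for handleCombinedAudio(CombinedAudio)
        void handleUser(long userId, byte[] pcm); // stands in for handleUserAudio(UserAudio)
    }

    public static void main(String[] args) {
        List<String> calls = new ArrayList<>();
        Receiver r = new Receiver() {
            public boolean canReceiveCombined() { return true; }
            public boolean canReceiveUser() { return false; }
            public void handleCombined(byte[] pcm) { calls.add("combined:" + pcm.length); }
            public void handleUser(long userId, byte[] pcm) { calls.add("user:" + userId); }
        };

        byte[] packet = new byte[3840]; // one 20 ms packet of 48KHz 16bit stereo PCM
        // The library only invokes a handler whose "can" method returns true:
        if (r.canReceiveCombined()) r.handleCombined(packet);
        if (r.canReceiveUser()) r.handleUser(42L, packet);

        System.out.println(calls); // only the combined callback fired
    }
}
```

Returning false from both methods means the handler receives nothing, so an implementation should enable at least one of the two.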
static final javax.sound.sampled.AudioFormat OUTPUT_FORMAT

Audio output format used by JDA: 48KHz 16bit stereo signed BigEndian PCM.

boolean canReceiveCombined()

If this method returns true, then JDA will generate combined audio data and provide it to the handler.

boolean canReceiveUser()

If this method returns true, then JDA will provide audio data to the handleUserAudio(UserAudio) method.

void handleCombinedAudio(CombinedAudio combinedAudio)

If canReceiveCombined() returns true, JDA will provide a CombinedAudio object to this method every 20 milliseconds. The data provided by CombinedAudio is all audio that occurred during the 20 millisecond period, mixed together into a single 20 millisecond packet. If no users spoke, this method will still be provided with a CombinedAudio object containing 20 milliseconds of silence, and the list returned by CombinedAudio.getUsers() will be empty.
The main use of this method is audio recording. Because it automatically combines audio and maintains the timeline (no gaps in the audio due to silence), it is an excellent resource for recording. If you want to do audio processing (such as voice recognition), or you only want to deal with a single user's audio, consider handleUserAudio(UserAudio) instead.
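A recording handler typically appends each 20 millisecond packet to a buffer and writes the result out when the session ends. The sketch below uses only javax.sound.sampled, with synthetic silence standing in for the bytes a real handleCombinedAudio implementation would take from the CombinedAudio object (JDA exposes them via getAudioData(double volume)); the byte swap is needed because WAV stores little-endian samples while the documented format is big-endian.

```java
import javax.sound.sampled.AudioFileFormat;
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.IOException;

public class CombinedAudioRecorder {
    // WAV wants little-endian; otherwise identical to the documented
    // 48KHz 16bit stereo signed PCM output format.
    static final AudioFormat WAV_FORMAT = new AudioFormat(48000.0f, 16, 2, true, false);
    static final int PACKET_BYTES = 3840; // one 20 ms packet

    private final ByteArrayOutputStream buffer = new ByteArrayOutputStream();

    // In a real handler this body would live in handleCombinedAudio(CombinedAudio).
    void onPacket(byte[] bigEndianPcm) {
        buffer.write(bigEndianPcm, 0, bigEndianPcm.length);
    }

    // Swap each 16 bit sample from big-endian (JDA output) to little-endian (WAV).
    static byte[] toLittleEndian(byte[] bigEndian) {
        byte[] out = new byte[bigEndian.length];
        for (int i = 0; i + 1 < bigEndian.length; i += 2) {
            out[i] = bigEndian[i + 1];
            out[i + 1] = bigEndian[i];
        }
        return out;
    }

    void saveTo(File file) throws IOException {
        byte[] pcm = toLittleEndian(buffer.toByteArray());
        long frames = pcm.length / WAV_FORMAT.getFrameSize();
        try (AudioInputStream stream =
                 new AudioInputStream(new ByteArrayInputStream(pcm), WAV_FORMAT, frames)) {
            AudioSystem.write(stream, AudioFileFormat.Type.WAVE, file);
        }
    }

    public static void main(String[] args) throws IOException {
        CombinedAudioRecorder recorder = new CombinedAudioRecorder();
        for (int i = 0; i < 50; i++) {                 // 50 packets * 20 ms = 1 second
            recorder.onPacket(new byte[PACKET_BYTES]); // silence stands in for real audio
        }
        File out = File.createTempFile("recording", ".wav");
        out.deleteOnExit();
        recorder.saveTo(out);
        System.out.println(out.length() > 50L * PACKET_BYTES); // WAV = header + PCM
    }
}
```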
Output audio format: 48KHz 16bit stereo signed BigEndian PCM, as defined by AudioReceiveHandler.OUTPUT_FORMAT.
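The stated format can be built directly with javax.sound.sampled, and the size of one 20 millisecond packet follows from it; a minimal sketch:

```java
import javax.sound.sampled.AudioFormat;

public class OutputFormatSketch {
    public static void main(String[] args) {
        // 48KHz, 16 bit, stereo, signed, big-endian — matches the documented format.
        AudioFormat format = new AudioFormat(48000.0f, 16, 2, true, true);

        // One frame = 16 bit * 2 channels = 4 bytes;
        // one 20 ms packet = sampleRate * frameSize * 0.020 bytes.
        int frameSize = format.getFrameSize();
        int packetBytes = (int) (format.getSampleRate() * frameSize * 0.020);

        System.out.println(frameSize + " " + packetBytes);
    }
}
```

So every CombinedAudio delivery carries 3840 bytes of PCM per 20 millisecond interval.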
combinedAudio - The combined audio data.

void handleUserAudio(UserAudio userAudio)

If canReceiveUser() returns true, JDA will provide a UserAudio object to this method every time the user speaks. Note that this method fires only when Discord provides audio data, unlike the fixed 20 millisecond schedule of handleCombinedAudio(CombinedAudio).
The UserAudio object provided to this method contains the User that spoke, along with only the audio data sent by that specific user.
The main use of this method is listening to specific users, whether for audio recording, custom mixing (possibly for user muting), or voice recognition. If you want to record all audio, consider handleCombinedAudio(CombinedAudio), as it was created for exactly that purpose.
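Since each call delivers one user's audio, a common pattern is to key per-user buffers by user id. The sketch below uses only the standard library, with long ids standing in for JDA User objects; in a real handler the id and bytes would come from the UserAudio object (its getUser() and getAudioData(double volume) accessors).

```java
import java.io.ByteArrayOutputStream;
import java.util.HashMap;
import java.util.Map;

public class PerUserAudioSketch {
    // One growing PCM buffer per speaking user, keyed by user id.
    final Map<Long, ByteArrayOutputStream> byUser = new HashMap<>();

    // In a real handler this body would live in handleUserAudio(UserAudio).
    void onUserPacket(long userId, byte[] pcm) {
        byUser.computeIfAbsent(userId, id -> new ByteArrayOutputStream())
              .write(pcm, 0, pcm.length);
    }

    public static void main(String[] args) {
        PerUserAudioSketch sketch = new PerUserAudioSketch();
        byte[] packet = new byte[3840];  // one 20 ms packet
        sketch.onUserPacket(1L, packet); // user 1 spoke twice
        sketch.onUserPacket(1L, packet);
        sketch.onUserPacket(2L, packet); // user 2 spoke once
        System.out.println(sketch.byUser.get(1L).size() + " " + sketch.byUser.get(2L).size());
    }
}
```

Keeping streams separate like this also makes per-user muting trivial: simply skip the write for muted ids.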
Output audio format: 48KHz 16bit stereo signed BigEndian PCM, as defined by AudioReceiveHandler.OUTPUT_FORMAT.
userAudio
- The user audio data