Interface AudioReceiveHandler
public interface AudioReceiveHandler
Interface used to receive audio from Discord through JDA.
Field Summary
Fields
static javax.sound.sampled.AudioFormat OUTPUT_FORMAT
    Audio Output Format used by JDA.
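The stated output format maps directly onto javax.sound.sampled.AudioFormat. Below is a minimal, self-contained sketch reconstructing an equivalent format object (the class name OutputFormatDemo is illustrative; the real constant is AudioReceiveHandler.OUTPUT_FORMAT):

```java
import javax.sound.sampled.AudioFormat;

public class OutputFormatDemo {
    // 48KHz sample rate, 16-bit samples, 2 channels (stereo),
    // signed, big-endian -- matching the documented output format.
    public static final AudioFormat OUTPUT_FORMAT =
            new AudioFormat(48000.0f, 16, 2, true, true);

    public static void main(String[] args) {
        System.out.println(OUTPUT_FORMAT);
    }
}
```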
Method Summary
boolean canReceiveCombined()
    If this method returns true, then JDA will generate combined audio data and provide it to the handler.
boolean canReceiveUser()
    If this method returns true, then JDA will provide audio data to the handleUserAudio(UserAudio) method.
void handleCombinedAudio(CombinedAudio combinedAudio)
    If canReceiveCombined() returns true, JDA will provide a CombinedAudio object to this method every 20 milliseconds.
void handleUserAudio(UserAudio userAudio)
    If canReceiveUser() returns true, JDA will provide a UserAudio object to this method every time the user speaks.
Method Detail
canReceiveCombined
boolean canReceiveCombined()
If this method returns true, then JDA will generate combined audio data and provide it to the handler.
Only enable this if you specifically want combined audio, because combining audio is costly when it goes unused.
Returns:
    If true, JDA enables subsystems to combine all user audio into a single provided data packet.
canReceiveUser
boolean canReceiveUser()
If this method returns true, then JDA will provide audio data to the handleUserAudio(UserAudio) method.
Returns:
    If true, JDA enables subsystems to provide user-specific audio data.
handleCombinedAudio
void handleCombinedAudio(CombinedAudio combinedAudio)
If canReceiveCombined() returns true, JDA will provide a CombinedAudio object to this method every 20 milliseconds. The data provided by CombinedAudio is all audio that occurred during the 20 millisecond period mixed together into a single 20 millisecond packet. If no users spoke, this method will still be provided with a CombinedAudio object containing 20 milliseconds of silence, and CombinedAudio.getUsers()'s list will be empty.
The main use of this method is audio recording. Because it automatically combines audio and maintains the timeline (no gaps in audio due to silence), it is an excellent resource for recording.
If you want to do audio processing (e.g. voice recognition), or you only want to deal with a single user's audio, consider handleUserAudio(UserAudio) instead.
Output audio format: 48KHz 16-bit stereo signed big-endian PCM, as defined by AudioReceiveHandler.OUTPUT_FORMAT.
Parameters:
    combinedAudio - The combined audio data.
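Given the 20 millisecond cadence and the 48KHz 16-bit stereo output format, each packet has a fixed size. A small sketch of the arithmetic (the class and method names here are illustrative, not part of JDA):

```java
public class PacketSizeDemo {
    // One 20 ms packet of 48KHz 16-bit stereo PCM:
    // 48000 samples/s / 50 packets/s = 960 samples per channel,
    // times 2 channels, times 2 bytes per 16-bit sample.
    public static int bytesPer20ms() {
        int samplesPerPacket = 48000 / 50; // 960
        int channels = 2;
        int bytesPerSample = 2;            // 16-bit
        return samplesPerPacket * channels * bytesPerSample;
    }

    public static void main(String[] args) {
        System.out.println(bytesPer20ms()); // 3840
    }
}
```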
handleUserAudio
void handleUserAudio(UserAudio userAudio)
If canReceiveUser() returns true, JDA will provide a UserAudio object to this method every time the user speaks. Note that this method only fires when Discord provides us audio data, which is very different from the scheduled firing time of handleCombinedAudio(CombinedAudio).
The UserAudio object provided to this method will contain the User that spoke, along with only the audio data sent by that specific user.
The main use of this method is listening to specific users. Whether that is for audio recording, custom mixing (possibly for user muting), or even voice recognition, this is the method you will want.
If you want to record audio, consider handleCombinedAudio(CombinedAudio) instead, as it was created for exactly that purpose.
Output audio format: 48KHz 16-bit stereo signed big-endian PCM, as defined by AudioReceiveHandler.OUTPUT_FORMAT.
Parameters:
    userAudio - The user audio data.
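As a sketch of the listening-to-specific-users use case, the handler below buffers each speaker's PCM separately. The FakeUser and FakeUserAudio classes are hypothetical stand-ins for JDA's User and UserAudio, included only so the example is self-contained; in a real bot you would implement AudioReceiveHandler and JDA would pass you the actual objects:

```java
import java.io.ByteArrayOutputStream;
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-ins for JDA's User and UserAudio types.
class FakeUser {
    final long id;
    FakeUser(long id) { this.id = id; }
}

class FakeUserAudio {
    private final FakeUser user;
    private final byte[] pcm;
    FakeUserAudio(FakeUser user, byte[] pcm) { this.user = user; this.pcm = pcm; }
    FakeUser getUser() { return user; }
    byte[] getAudioData(double volume) { return pcm; } // volume scaling omitted
}

public class PerUserRecorder {
    // One PCM buffer per speaking user, keyed by user id.
    private final Map<Long, ByteArrayOutputStream> buffers = new HashMap<>();

    public void handleUserAudio(FakeUserAudio userAudio) {
        buffers.computeIfAbsent(userAudio.getUser().id, id -> new ByteArrayOutputStream())
               .writeBytes(userAudio.getAudioData(1.0));
    }

    public int bytesRecorded(long userId) {
        ByteArrayOutputStream buf = buffers.get(userId);
        return buf == null ? 0 : buf.size();
    }
}
```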