Interface ConnectionListener

All Known Implementing Classes:
ListenerProxy

public interface ConnectionListener
Used to monitor an audio connection, ping, and speaking users.
This provides functionality similar to what the Discord client itself exposes for an audio connection.
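
A minimal sketch of how a listener might be registered, assuming JDA 5's net.dv8tion.jda.api packages; instances are typically attached through AudioManager.setConnectionListener(ConnectionListener). The class name VoiceMonitor and the attach method are hypothetical placeholders.

import javax.annotation.Nonnull;

import net.dv8tion.jda.api.audio.hooks.ConnectionListener;
import net.dv8tion.jda.api.audio.hooks.ConnectionStatus;
import net.dv8tion.jda.api.entities.Guild;
import net.dv8tion.jda.api.managers.AudioManager;

public class VoiceMonitor
{
    // Registers a listener on the guild's AudioManager; 'guild' is resolved elsewhere.
    public static void attach(Guild guild)
    {
        AudioManager manager = guild.getAudioManager();
        manager.setConnectionListener(new ConnectionListener()
        {
            @Override
            public void onStatusChange(@Nonnull ConnectionStatus status)
            {
                System.out.println("Audio connection status: " + status);
            }
        });
    }
}

The listener applies to the audio connection managed by that guild's AudioManager.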
  • Method Details

    • onPing

      default void onPing(long ping)
      Called when JDA sends a heartbeat packet to Discord and Discord acknowledges it. The ping is calculated as the time difference between sending the heartbeat and receiving the acknowledgement.
      Parameters:
      ping - The time, in milliseconds, for round-trip packet travel to Discord.
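
      For example, a listener might log unusually high latency. A minimal sketch of such an override inside a ConnectionListener implementation; the 250 ms threshold is an arbitrary value chosen for illustration:

      @Override
      public void onPing(long ping)
      {
          // Round-trip time of the voice heartbeat, in milliseconds
          if (ping > 250)
              System.out.println("High voice ping: " + ping + "ms");
      }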
    • onStatusChange

      default void onStatusChange(@Nonnull ConnectionStatus status)
      Called when the status of the audio connection changes. Used to track the connection state for easier debugging and for displaying status to clients.
      Parameters:
      status - The new ConnectionStatus of the audio connection.
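
      As a sketch, a listener could react to a specific state such as ConnectionStatus.CONNECTED, assuming the override lives in a registered ConnectionListener like the one shown above:

      @Override
      public void onStatusChange(@Nonnull ConnectionStatus status)
      {
          // CONNECTED marks a fully established audio connection
          if (status == ConnectionStatus.CONNECTED)
              System.out.println("Voice connection established");
      }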
    • onUserSpeakingModeUpdate

      default void onUserSpeakingModeUpdate(@Nonnull User user, @Nonnull EnumSet<SpeakingMode> modes)
      This method is used to listen for users changing their speaking mode.

      Whenever a user joins a voice channel, this is fired once to define the initial speaking modes.

      To detect when a user is speaking, an AudioReceiveHandler should be used instead.

      Note: This requires the user to be currently cached. You can use MemberCachePolicy.VOICE to cache currently connected users. Alternatively, use onUserSpeakingModeUpdate(UserSnowflake, EnumSet) to avoid the cache requirement.

      Parameters:
      user - The user who changed their speaking mode
      modes - The new speaking modes of the user
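
      A sketch of this overload, assuming connected members are cached (for example via MemberCachePolicy.VOICE) and java.util.EnumSet is imported:

      @Override
      public void onUserSpeakingModeUpdate(@Nonnull User user, @Nonnull EnumSet<SpeakingMode> modes)
      {
          // modes may contain SpeakingMode.VOICE, SOUNDSHARE, and/or PRIORITY
          System.out.println(user.getName() + " now uses speaking modes " + modes);
      }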
    • onUserSpeakingModeUpdate

      default void onUserSpeakingModeUpdate(@Nonnull UserSnowflake user, @Nonnull EnumSet<SpeakingMode> modes)
      This method is used to listen for users changing their speaking mode.

      Whenever a user joins a voice channel, this is fired once to define the initial speaking modes.

      To detect when a user is speaking, an AudioReceiveHandler should be used instead.

      This method works independently of the user cache. The provided user might not be cached.

      Parameters:
      user - The user who changed their speaking mode
      modes - The new speaking modes of the user
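
      A sketch of the cache-independent overload; only the snowflake ID is relied on here:

      @Override
      public void onUserSpeakingModeUpdate(@Nonnull UserSnowflake user, @Nonnull EnumSet<SpeakingMode> modes)
      {
          // Works even when the user is not cached; identify them by ID
          System.out.println("User " + user.getId() + " now uses speaking modes " + modes);
      }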