Add new linux app#241

Draft
kavishdevar wants to merge 58 commits into main from linux/rust

Conversation

@kavishdevar
Owner

@kavishdevar kavishdevar commented Nov 9, 2025

not sure if this is gonna stay in this repo (reason below)

So, the current Qt app wasn't really kept as up to date as the Android app: the maintainer didn't have time, and it was too different from the Android implementation for me to easily port new features.

This new app is built in Rust, using the iced library, and borrows the AACP and ATT implementations from the Android app. It also has a much better UI, and is no longer limited to AirPods! The basic AirPods functionality, like ear detection and conversational awareness, is already in place, along with battery reporting and changing the ANC mode.

[screenshot: linux-new-app]

To-Do List

  • Add basic UI
  • Device state and connection handling skeleton
    • Handle new connections
    • Handle disconnections
  • Improve the tray
    • add a channel between the window and the tray so the tray updates when customized
    • handle multiple devices
    • handle disconnections
    • use device managers instead of a channel between the Bluetooth thread and the tray thread
  • Misc (bugs/other enhancements)
    • take the ear status from LE when taking ownership of audio, since it isn't received over AACP without taking control first
    • first connection on a clean startup is a bit messy because the AirPods might not send the information packet right away; the app doesn't store the LEInfo in that case
    • battery is only updated for one AirPods device in the sidebar when multiple are connected
    • handle Bluetooth availability properly (currently throws "UI message channel closed")
    • turn off conversational awareness when any (physical) microphone is in use
    • add an option for the preferred A2DP codec
  • Add AACP settings
    • Implement the aacp handler
    • Add an advanced page that allows sending custom packets and control command values
    • Add everything to the UI
  • ATT related
    • Implement the manager
    • For some reason, the connection is sometimes refused by the AirPods; haven't traced the cause yet
    • Handle the ATT manager while keeping it optional; not everyone will change their BlueZ config
    • Add the UI: yet to decide whether I want a separate settings page per device, or just put everything on a single page under different sections (transparency settings and hearing aid settings will take up space, and may be better off separated)
  • Expose everything over D-Bus
  • Beyond-AirPods support (nothing will be worked on here until after AirPods, so this can be merged first)
    • Parse notifications
    • Read battery status from BlueZ (or something else; it's not sent over ATT)
  • Prepare for distribution
    • Add CI for Flatpak
    • Add flake.nix
    • Possibly, bundle it as an AppImage

@kavishdevar kavishdevar self-assigned this Nov 9, 2025
Comment thread .github/workflows/ci-linux-rust.yml Fixed
@kavishdevar kavishdevar changed the base branch from main to multi-device-and-accessibility November 10, 2025 06:17
@kavishdevar kavishdevar changed the base branch from multi-device-and-accessibility to main November 10, 2025 06:18
@kavishdevar kavishdevar force-pushed the linux/rust branch 3 times, most recently from 91f5884 to 5e5b428 Compare November 10, 2025 06:32
@YuseiRun

This comment was marked as off-topic.

@kavishdevar
Owner Author

kavishdevar commented Feb 10, 2026

@YuseiRun you are running the old version, not this rewrite. Please limit the comments here to the rewrite.

andreyondemand and others added 2 commits March 31, 2026 09:32
…ort (#469)

* feat: add stem press track control and headless mode support

- Parse STEM_PRESS packets and emit AACPEvent::StemPress with press type and bud side
- Enable double/triple tap detection on init via StemConfig control command (0x06)
- Double press → next track, triple press → previous track via MPRIS D-Bus
- Add next_track() and previous_track() to MediaController
- Add --no-tray flag for headless operation without a GUI
- Replace unwrap() on ui_tx.send() calls with graceful warn! logging

(vibecoded)

* Update main.rs

* feat: make stem press track control optional with GUI toggle

Add a --no-stem-control CLI flag and a toggle in the Settings tab for
environments that handle AirPods AVRCP commands natively (e.g. via
BlueZ/PipeWire). The feature remains enabled by default.

- Load stem_control from app settings JSON on startup; --no-stem-control
  overrides it to false regardless of the saved value
- Share an Arc<AtomicBool> between the async backend and the GUI thread;
  AirPodsDevice holds the Arc directly so the event loop reads the live
  value on every stem press — toggle takes effect immediately without
  reconnecting
- Persist stem_control to settings JSON alongside theme and tray_text_mode
- Add a "Controls" section to the Settings tab with a toggler labelled
  "Stem press track control", with a subtitle explaining the AVRCP
  conflict scenario
- Fix StemConfig bitmask comment to clarify it uses a separate numbering
  scheme from the StemPressType event enum values (0x05–0x08)
@randshell

randshell commented Apr 8, 2026

thanks so much for the continued work on this!

one minor issue I'd like to report is missing the case battery percentage, using AirPods Pro 3.

[screenshot]

More importantly, in my opinion, conversation awareness doesn't take into account when you're in a call, i.e. when the microphone is in use. This causes conversation awareness to trigger and lower the volume of the call. If there's a way to check for ongoing microphone use, conversation awareness shouldn't lower the volume then. The workaround is manually disabling the feature and re-enabling it as needed.

I'm using latest artifact from 2f1208c.

@kavishdevar
Owner Author

@randshell the battery is only shown when one of the AirPods is in the case. From your screenshot, I believe none of the AirPods are charging.
[screenshot]
(perhaps hiding the case icon might avoid confusion)

Thanks for the suggestion about the conversational awareness. I'll implement it when I get the time.

@randshell

@randshell the battery is only shown when one of the AirPods is in the case. From your screenshot, I believe none of the AirPods are charging.

Yes, your assumption is correct.
Yes, your assumption is correct.
I was going by the battery widget on iPhone, which shows the case battery all the time. I suppose, as you proposed, the icon could be hidden until a packet is received; then the value is updated and persists until the next packet that updates the battery percentage. At the same time, if the AirPods aren't connected, the battery shouldn't be persisted for either the AirPods or the case, as it'll be outdated.

Thanks for the suggestion about the conversational awareness. I'll implement it when I get the time.

If it helps, I remember that pactl list sources short provides an easy way to do that: the state column (SUSPENDED or RUNNING) can be used to determine whether the microphone is being used.

@Krakish6

I have noticed that the app forces AirPods Pro 3 to SBC-XQ. Is this intended behaviour, and if so, why?

@Ari-43

Ari-43 commented Apr 17, 2026

I have noticed that the app forces AirPods Pro 3 to SBC-XQ. Is this intended behaviour, and if so, why?

I can reproduce this.
It is not always preferable to use SBC-XQ, so librepods should let the user choose the audio profile like they normally would with any other device or without librepods.

@kavishdevar
Owner Author

The app handles the audio profiles to route all audio properly based on whether the AirPods are worn. The app also activates/deactivates the profile when some other device connected to the AirPods plays audio.

It is specifically SBC-XQ because I remember reading somewhere that it's better than AAC, but I can add an option in the settings to pick which codec to use.

For using the headset profile: as long as no media starts playing (anything that creates an MPRIS service) when you manually switch to HSP/HFP, the app will not switch to A2DP.

What codec do you usually use for A2DP, @Ari-43 @Krakish6?

@Ari-43

Ari-43 commented Apr 17, 2026

SBC-XQ does indeed have higher quality than AAC. Under most circumstances it is the best choice, but it can reduce battery life slightly.
Also, I personally sometimes have multiple Bluetooth audio devices in use at once, and need the extra bandwidth from using AAC or SBC instead to prevent crackling. For the same bandwidth reasons, AAC and SBC also seem to work slightly better under adverse RF conditions, but I have not properly tested this.

If librepods could record what A2DP profile the user sets on the Pulse/PW side and use that, that would work and be transparent. I understand that's more complicated to implement than a static default, though.

@randshell

SBC-XQ does indeed have higher quality than AAC. Under most circumstances it is the best choice, but it can reduce battery life slightly. Also, I personally sometimes have multiple Bluetooth audio devices in use at once, and need the extra bandwidth from using AAC or SBC instead to prevent crackling. For the same bandwidth reasons, AAC and SBC also seem to work slightly better under adverse RF conditions, but I have not properly tested this.

If librepods could record what A2DP profile the user sets on the Pulse/PW side and use that, that would work and be transparent. I understand that's more complicated to implement than a static default, though.

omg, this 100%.
With the (external) Bluetooth adapter I have, I can barely go outside the room before I get crackling and extended breaks in the audio. I've never figured it out, because on iPhone I can go anywhere without any issue.

I quit LibrePods-rust and changed from SBC-XQ to AAC, and the link is now more stable when I'm out of the room, and the breaks in the audio are considerably rarer. It's just more usable overall like this.
The reason I couldn't try AAC before is that LibrePods overwrites the codec while the app is open, so setting it manually never changed anything.

@Krakish6

The app handles the audio profiles to route all audio properly based on whether the AirPods are worn. The app also activates/deactivates the profile when some other device connected to the AirPods plays audio.

It is specifically SBC-XQ because I remember reading somewhere that it's better than AAC, but I can add an option in the settings to pick which codec to use.

For using the headset profile: as long as no media starts playing (anything that creates an MPRIS service) when you manually switch to HSP/HFP, the app will not switch to A2DP.

What codec do you usually use for A2DP, @Ari-43 @Krakish6?

I prefer AAC, which I think is also native for AirPods, but I agree with what @Ari-43 said: it would be great if it could get the A2DP profile from the system settings.

@Ari-43

Ari-43 commented Apr 17, 2026

I prefer AAC, which I think is also native for AirPods

I think SBC-XQ is still a fairly sensible default for most users as long as there is some way to choose other codecs.

If it is not possible to have LibrePods remember what the user sets in PulseAudio/PipeWire (the most transparent solution), then an option in LibrePods would suffice.
The output of pactl list cards contains the current active profile and all supported profiles.

Ap0ll02 and others added 2 commits April 20, 2026 13:56
…495)

This fix allows the send thread to make 10 attempts to send its data.
This gives access to the battery status.
@kavishdevar kavishdevar force-pushed the main branch 6 times, most recently from 4bbaa29 to cb246d1 Compare April 26, 2026 12:06
@caughtquick


I prefer AAC, which I think is also native for AirPods

I think SBC-XQ is still a fairly sensible default for most users as long as there is some way to choose other codecs.

If it is not possible to have LibrePods remember what the user sets in PulseAudio/PipeWire (the most transparent solution), then an option in LibrePods would suffice. The output of pactl list cards contains the current active profile and all supported profiles.

I personally think that SBC-XQ would be the better default as well. I've noticed not-insignificant audio latency with AAC that doesn't exist with SBC-XQ on my APP2; not sure if it's an issue with the BlueZ implementation of AAC or with the APP2, however.

Repository owner deleted a comment from coderabbitai Bot Apr 28, 2026
@kavishdevar
Owner Author

First off, I wanted to ask everyone in the thread how often they use the app's UI. I'm looking for a clear view on whether I should separate the background functionality (ear detection, conversational awareness, etc.) from the UI entirely, or leave it in the same program with a headless mode like it currently has.

Secondly, I know not everyone here is a developer looking to build a frontend for this, but what IPC should the app use? D-Bus is my top preference right now, with the app exposing only the parts that are going to be shown, like control commands (config such as Listening Mode and whether Conversational Awareness is enabled), battery, etc.; i.e. the in-ear status, conversational awareness state, and other info would not be available over this.
The other option is a UNIX socket, where selected programs can write/read raw AACP packets. This would also mean releasing a library for a few languages to parse/create the packets. It gives more flexibility, but I'm not sure it's really practical.

Third: would you use the AirPods microphone if you could keep using A2DP and not switch to HSP/HFP? This would involve creating a virtual audio source, processing the audio stream, and feeding it in; a lot of overhead, IMO.

And lastly, I wanted to go a little off-topic and throw an idea out: would it be useful to have a Linux machine as a relay between AirPods and an Android TV? I'd guess not everyone in the Apple ecosystem has an Apple TV, so having access to basic features like ear detection and conversational awareness sounds like a good idea to me. Of course there are several barriers, like having a Bluetooth adapter capable of two continuous audio streams, and perhaps even more if head-tracked audio ever comes to life.

@waltmck

waltmck commented May 1, 2026

First off, I wanted to ask everyone in the thread how often they use the app's UI. I'm looking for a clear view on whether I should separate the background functionality (ear detection, conversational awareness, etc.) from the UI entirely, or leave it in the same program with a headless mode like it currently has.

Secondly, I know not everyone here is a developer looking to build a frontend for this, but what IPC should the app use? D-Bus is my top preference right now, with the app exposing only the parts that are going to be shown, like control commands (config such as Listening Mode and whether Conversational Awareness is enabled), battery, etc.; i.e. the in-ear status, conversational awareness state, and other info would not be available over this. The other option is a UNIX socket, where selected programs can write/read raw AACP packets. This would also mean releasing a library for a few languages to parse/create the packets. It gives more flexibility, but I'm not sure it's really practical.

Third: would you use the AirPods microphone if you could keep using A2DP and not switch to HSP/HFP? This would involve creating a virtual audio source, processing the audio stream, and feeding it in; a lot of overhead, IMO.

And lastly, I wanted to go a little off-topic and throw an idea out: would it be useful to have a Linux machine as a relay between AirPods and an Android TV? I'd guess not everyone in the Apple ecosystem has an Apple TV, so having access to basic features like ear detection and conversational awareness sounds like a good idea to me. Of course there are several barriers, like having a Bluetooth adapter capable of two continuous audio streams, and perhaps even more if head-tracked audio ever comes to life.

I'll throw in my two cents as a user. I rarely use the app's UI: basically only ever to toggle transparency mode. I wish I could set a keyboard shortcut to do this, which leads into my response to your second question.

I think it would be most useful and idiomatic to expose a D-Bus API. The most efficient and reliable apps I use generally take this approach (in particular iwd and mullvad). Aside from the maintainability benefits of strictly separating the UI from the backend, this architecture would make it easy to write a minimal CLI client that controls the daemon over D-Bus. That would allow using librepods on headless machines, as well as letting users script keyboard shortcuts to use librepods functionality without the overhead of a GUI app.

As for your last question: if I had an extra Linux machine, I would personally prefer to just run Kodi with librepods on it. (By the way, supporting this use case is another argument for implementing a D-Bus service with a CLI client: users could write scripts to control transparency etc. with a button on a remote, even on a minimal setup like Kodi that doesn't have a full desktop environment capable of running the GUI app.) However, maybe it would be useful to others.

@Ari-43

Ari-43 commented May 1, 2026

I wanted to ask everyone in the thread how often they use the app’s UI?

I mostly use this application to reliably monitor battery levels via the tray. Optimally this would also be accessible via a script-friendly CLI so I can put it in my bar directly without having a tray icon at all, similar to what I do with my mouse via mxw:
[screenshot]
I have an iPhone, so I usually end up doing any configuration using that.

would you use the AirPods microphone if you could continue using A2DP

I always assumed this was due to a Bluetooth bandwidth limitation on the earbuds' end. Is this no longer the case on modern earbuds?
If it is possible, then yes, this would be extremely handy in a whole bunch of scenarios.

The main issue I have with HFP, besides the obvious quality drop that severely harms media playback, is that it's significantly louder than the A2DP modes. That can be remedied by changing per-application playback volumes, but doing so regularly is exceedingly annoying.
Currently, I do not ever use HFP for these reasons and would therefore be very grateful to have microphone input in A2DP mode.

@jessicarod7

jessicarod7 commented May 1, 2026

First off, I wanted to ask everyone in the thread how often they use the app’s UI?

Not often; I mostly use the tray icon for controls. Although it seems the state of Conversation Awareness isn't always consistent between the UI and the tray icon's menu, as if the state doesn't copy from one to the other.

Third: would you use the AirPods microphone if you could keep using A2DP and not switch to HSP/HFP? This would involve creating a virtual audio source, processing the audio stream, and feeding it in; a lot of overhead, IMO.

If it’s possible to get improved audio while using the mic I would, whether that involves manually switching profiles or not. Currently I leave my AirPods in A2DP and use my webcam’s mic. I’ve seen this article about Apple implementing AAC-ELD in HFP, which seems difficult to say the least (I think this is the same as #545, but only in HFP).

So for improved audio it would be nice, but otherwise I don't mind.

@thomaseizinger

I'd love it if the microphone could work over A2DP. I often only have my AirPods with me when I'm travelling, and being able to take video calls without using my laptop mic would be great.

As for the app itself: given that it is by design Linux-only, I think a headless binary with a D-Bus interface would make the most sense, perhaps accompanied by a GNOME extension that integrates nicely into the desktop. But once the API is out there, this could easily be built by other people.

