AI with Legacy Code Requires Real Coders These Days

Here’s a good use case as to why developers are needed to monitor code generated by AI.

Using Amazon Q, I set up a project that is a 100% unit test project to be 100% AI generated, for another barely tested project. (More about this later; was not a smooth creation.) This is grails — and I wanted to use its modularity to separate out the tests to isolate them as I finish them up before moving into the target project.

Running a few tests, I noted the same failure — a missing property. I dug into the classes the tests were calling and couldn’t find any reference to (in this case) log property.

Below is my querying of Q:

Basically Q had made an assumption about the project setup, didn’t even check the class for reference of an annotation, but still made the test.

I’ve noticed this kind of behavior especially when I give Q a large task (create 50 test cases) vs small (create 2 test cases). There might be something in the algorithm to save time, credits, whatever and you get incorrect extrapolations causing errors in the code.

This exact reason is why a lot of my current team will not use Q on legacy code: it might do a refactor if you aren’t paying attention, it might produce bogus code, or might make a code assumption.

The amount of time to figure out and correct AI issues in code can easily exceed just writing the code itself.

If any more proof needed that real coders are different than AI code generator, this is SonarQube’s current price list:

Note that there are AI code options: detect AI code, AI code assurance, and AI code fixing. The code assurance has me doubtful in that popup description, but I’ll test it at some point. I think part of the detection happens when commits are marked by say a GitHub Copilot or Q generator, which would be easier than detecting from code.

But the point being — the difference between human generated code and AI generated code is being recognized by the industry.

Also, kind of chuckling — if you’ve ever used IntelliJ’s markers @formatter:off/@formatter:on (usually for SQL in my experience), we’ll probably start seeing more markups to indicate areas of legacy, human, and AI code areas.

Generating OpenAPI and PostMan Collections with Q

Generating out descriptive schema for your existing applications that AI can well.

Working on a Grails API application, I am using with BurpSuite DAST to do some security testing on a regular basis for it. I’ve needed an API list of endpoints for their API security scan feature for quite some time, that’s easy to maintain.  We don’t have any maps of the APIs, annoying, so I thought I would ask Amazon Q to make OpenAPI bill of endpoints as well as a Postman collection of them for good measure.

OpenAPI is a specification for making a kind of “schema” for your APIs.

Postman is a tool for building and testing APIS.

BurpSuite DAST – dynamic application security testing – is owned by PortSwigger.  It’s good for penetration testing.

 

Here are the prompts I asked after opening my project:

  1. can you make a postman collect of the apis in <myproject>
  2. can you also make an OpenAI 3.0 definition, both json and yaml

Voila – have my three files.

BurpSuite added an extra selection for an API scan type, and I hadn’t been able to use it until I had the catalog of API endpoints.

BurpSuite can take these formats to describe APIs:

SOAP WSDL, Postman Collection, or an OpenAPI definition in YAML or JSON format.

I like to work with yaml files as I work with some devops type things, Docker, and AI Specs (that use markup, I find more akin to yaml). But for grins I generated out both formats, and then the Postman collection. I didn’t try the wsdl.

Example content:

openapi: 3.0.0
info:
  title: Product Management API
  version: 1.0.0
  description: A sample API for managing products.
servers:
  - url: https://api.example.com/v1
    description: Production server
  - url: http://localhost:8080/v1
    description: Development server
tags:
  - name: Products
    description: Operations related to products
paths:
  /products:
    get:
      tags:
        - Products
      summary: Get all products
      operationId: getAllProducts
      responses:
        '200':
          description: A list of products
          content:
            application/json:
              schema:
                type: array
                items:
                  $ref: '#/components/schemas/Product'
    post:
      tags:
        - Products
      summary: Create a new product
      operationId: createProduct
      requestBody:
        required: true
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/NewProduct'
      responses:
        '201':
          description: Product created successfully
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Product'

< and so forth .... >

Here’s the file loaded into BurpSuite:

Then you can add authentication if you need to:

The Postman collection looks like this. It actually can import an OpenAPI spec and output a Postman collection.

	"info": {
		"_postman_id": "e2c42dd8-dabe-44fd-b90c-9136d35d9167",
		"name": "Product Management API",
		"description": "A sample API for managing products.",
		"schema": "https://schema.getpostman.com/json/collection/v2.1.0/collection.json",
		"_exporter_id": "6514261"
	},
	"item": [
		{
			"name": "products",
			"item": [
				{
					"name": "{productId}",
					"item": [
						{
							"name": "Get a product by ID",
< and so forth ... >

Postman Collection loaded into Postman tool — ready for use:

So anyway, after you have this all loaded up into BurpSuite — you can run your scan job on the endpoints and hopefully nothing major to fix.


The product test file I generated with the AI in my Chrome browser is in the following file. You can test BurpSuite with it or import it into Postman.

AI Misfires

AI often misfires the more I work with it.

I think the big issue is, you cannot forget that you continually need to keep AI guided and boxed into your task, or you will get hallucinations and errors.

Here’s an example of an agent example using Gemini.  It asks for a complex math question, and I asked it for 42 factorial using the proper 

There’s an option to “fix the error” — which I ask it to do. Then we get this:

A fixed result. But why didn’t it have a fixed result in the first place? This kind of thing is frustrating — and in a complex coding environment could be very troublesome, with generated bugs not discovered until production deployment.


Have had two recent experiences with general AI querying I found interesting.

  1. I was asking Co-pilot about a famous documentary. There also happened to be a court case with like names to the documentary. Co-pilot *only* told me about the case and not the documentary. It also said I was wrong when I tried to query further. This one I finished on Gemini.
  2. Was also asking the AI for Duck Duck Go (the search assist) about a documentary series of an American touring Japanese food — ramen, sushi, tea house etc. The AI only told me of some other documentaries, denying that there was even such a thing with an American. Well the series exists “Pride Japan: Nihon no kokoroni deau.” I finished this one on Grok.

The interesting part of this wasn’t that it couldn’t find the items, it’s that AI told me I was wrong, or denied there even being anything. Also, recently, Claude was getting out of hand calling some of my input “delusional utterances.”

During all this I’ve asked these applications to not use my name, nor use human style interaction. A lot of bias can enter this way, imo.

Here’s some extra reading that is interesting on AI wrongness as well.

https://itrevolution.com/articles/when-ai-cuts-corners-hijacking-the-reward-function/

Also — maybe you should ask several AIs at once. Well, funny enough, I was wataching a Rick Beato guitar video on YouTube, he mentioned an app that can ask several at once. (Honestly, I was about to fabricate a javascript page to do just this). Then I found out about another multi-AI -asker. Here they are:

https://openrouter.ai/

https://www.maxai.co/features/web-ai-chat/

They can cost some money, but worth a try.

AI doesn’t keep up necessarily

Here’ an issue I’ve run into repeatedly: AI having old versions of APIs for services, in this case JIRA, who recently updated theirs and didn’t leave any backwards compatibility (and also hosed my dev data lake).

I have a python script I AI’d out of Claude to extract JIRA data for a BERT model training. But, when running, get the 410 error for the JIRA endpoint:

I am also using Claude in VS Code for this, so having it fix it. Run the update and it works….

This AI coding requires a lot of practice to get better with it, still generating this script even with the api error saved some time.

Prompt with AI is like …. golfing

Let’s talk about using AI in this current age of 2025.  Here’s a stock photo to help out:

Golf exercise Stock Photos, Royalty Free Golf exercise Images | DepositPhotos

As obvious as it sounds, writing prompts for an AI chat session is a lot like golf.  Basically, honing down until you get a good answer OR run out of chat memory and have to move to a new session.

  1. Use the driver off the tee.
    -Post your first prompt: “I’d like all the recent JIRA tickets that have work logged on them.”
  2. On the fairway.  Now you select an iron to get you close to the hole.
    -“I mean the tickets that have worked logged in the last two weeks.”
  3. Now you are on the green.  You decide to putt.
    -“For team members Sally and Frank.”

Sometimes though, AI completely misses it, for instance step two (real scenario with Rovo, Atlassians’s new AI assistant) – “Here you go, a JQL that lists all the tickets with time on them” — but since getting work time requires iteration through the worklog api data, AI didn’t do that correctly and just gave you tickets with recent activity that had work logged at any time, even three months ago.  In this case 2A., need that pitching wedge to get onto the tee.

I’ve been working with Claude, Gemini, Copilot, ChatGPT, Ollama, and a lot of minor AI llms — they all do this as you can imagine, some just get you there faster.  I am finding that smaller less known models need more time to get what you need.  For instance, Junie in IntelliJ.  Or some of the GitHub AI things, and definitely Rovo for Atlassian are less mature.

A big thing is learning how to summarize a session so as to continue on again at some point — have AI summarize, or save the chat context ported off for reloading in a new session.

AI Agents Files: Context, Build Specs, Markdown

This is just an open thought/notes post — using agents specifications files in an AI development environment.

These kind of files are written in Markdown, and can do some of these kind of things:

  1. Project documentation
    • Anything from testing, code narration, code structure, dependencies that an AI tool can read to assist in code generation or help.
  2. Structured input to LLMs
    • Agent guidance — i.e. infor and context about your development environment, project settings

There’s an emerging standard, open source, called AGENTS.md. Here’s their first page sample:

# AGENTS.md

## Setup commands
- Install deps: `pnpm install`
- Start dev server: `pnpm dev`
- Run tests: `pnpm test`

## Code style
- TypeScript strict mode
- Single quotes, no semicolons
- Use functional patterns where possible

These can become very complicated. I’ve ran some tests developing a desktop image environment for VM Ware, and have specified a ton of applications, OS, etc. in these files.

Here is some of my experiment. It’s quite easy to generate such a file, and I was asking it to create a packer instance for VMWare”

# DVM Virtual Dev Environment

## Project Overview
This project creates a Ubuntu 24.04.3 desktop virtual machine using HashiCorp Packer and VMware. The VM is pre-configured with development tools including Visual Studio Code and IntelliJ IDEA Community Edition.

## Purpose
- Build standardized development environments
- Automate VM provisioning with consistent tooling
- Provide reproducible Ubuntu desktop environments for development work

## Key Components

### Packer Configuration (`ubuntu-desktop.pkr.hcl`)
- **Base Image**: Ubuntu 24.04.3 Desktop AMD64 ISO
- **Platform**: VMware (vmware-iso builder)
- **Resources**: 4GB RAM, 2 CPUs, 40GB disk
- **Authentication**: SSH with ubuntu/ubuntu credentials

### Provisioning
The build process installs:

**Development Tools:**
- Microsoft Visual Studio Code
- IntelliJ IDEA Community Edition
- Docker Desktop
- Postman
- MySQL Workbench Community

But also — this remembers a lot of context.

Amazon has the same ides — rules for their “spec driven development” approach. Will probably discuss later as I have questioned them about the lifecycle of biz rules in a development cycle, but, generally stores all the info of your dev environment for your project. Generally, is stored in .amazonq/rules, though can be in a file like amazonq.rules.

Also Anthropic has the CLAUDE.md file, which is like AGENTS. It’s loaded at startup to give your agent context, among other things. There is also use of a json file for settings.

From CLAUDE site:

Key points about the configuration system

  • Memory files (CLAUDE.md): Contain instructions and context that Claude loads at startup
  • Settings files (JSON): Configure permissions, environment variables, and tool behavior
  • Slash commands: Custom commands that can be invoked during a session with /command-name
  • MCP servers: Extend Claude Code with additional tools and integrations
  • Precedence: Higher-level configurations (Enterprise) override lower-level ones (User/Project)
  • Inheritance: Settings are merged, with more specific settings adding to or overriding broader ones

Note that these other AI agents will in fact read AGENTS.md, even if they put up a fight against you. Also, be aware that in general you’ll eventually be checking in and maintaining these files for your projects.


Thoughts. Well, after working with these, it’s nice to have a context on loadup mechanism. It would be nice to share among other developers. That said, I wonder how this is going to play into the specs in maven, ivy, ant, gradle etc. files. Why not just read those files, instead of in some cases having a double entry?

And since this is now emerging, there are several standards — am guessing if you ever switch AI providers it could convert the standards (from say Q rules to CLAUDE) with no fuss. It had better, being that conversions seems to be one of the AI strongpoints.

Another really nice related mechism is experimenting with saving chat contexts from the IDE. Amazon has something like this, and Claude can remember as well — but you could technically store prompt history offline, even incorporate the best answers into the md files.

Best thing is to try all of this out.


References:

MARKDOWN

AGENTS.md

CLAUDE Settings

Amazon Q Rules

Further Reading:

There are mentions of Jupyter notebooks in much of the MD config documentation. I didn’t discuss this here, but you can of course explore it more.

JUPYTER

GitHub Becoming 2FA

GitHub is switching most of the sites now to 2FA. The warning has been out there since at least 2023. Just now, on my current team’s site, have I gotten the message. Our organization is now doing this.

Check it out here on GitHub.

As of March 2023, GitHub required all users who contribute code on GitHub.com to enable one or more forms of two-factor authentication (2FA). If you were in an eligible group, you would have received a notification email when that group was selected for enrollment, marking the beginning of a 45-day 2FA enrollment period, and you would have seen banners asking you to enroll in 2FA on GitHub.com. If you didn’t receive a notification, then you were not part of a group required to enable 2FA, though we strongly recommend it.

You get a few choices in GitHub:

I decided to use a software authentication app on my phone, here are a few you can choose from:

And away you go.

Claude and API limitations

Was running Jira integrations with Clade AI — and the total count of issues was coming back incorrectly.  I’d get twenty-four issues back for what would be a few hundred (163 in this case) results I could see in a JQL console in Jira.

A JQL query like this:

project = "<project>" AND "Fix Version" = "38" and status in (Open)

Asked Claude AI why this was so, and the answer is, Claude is limited by the API it uses:

Me: when I run the JQL in jira I get 163

Claude: You’re absolutely right – there’s a discrepancy between what I’m seeing through the API and what you’re getting in JIRA directly. The API response I received only showed 24 issues, but you’re seeing 163 when running the same query in JIRA.

This difference could be due to several factors:

  1. Pagination limits – The API might be returning only a subset of results despite my attempts to get more
  2. Permission differences – The API call might have different access permissions than your direct JIRA access
  3. API response limitations – There could be constraints on how many results the API returns in a single call

When you run that exact JQL query directly in JIRA and get 163 results, that’s the authoritative count. The API response I received appears to be incomplete.

For accurate counts and comprehensive results, running the query directly in JIRA (as you did) is the most reliable method. The 163 issues you found is the correct count for your query.

I then asked it to run the second page:

Me: run the second page of this query

Claude: I’ll get the second page of results using the nextPageToken from the previous response:

Search Jira issues with JQL

Request

{
`jql`: ......` }

Response

{"issues":[{"expand":"......"}]}

I notice that the second page response looks different – it seems like some of the issue IDs are repeating, which suggests there may be an issue with the pagination token I used from the previous response.

Looking at the results, I can see the second page returned additional issues, but since the response was truncated, I can’t count them all. However, what’s clear is that:

  1. First page: 100 issues (maxResults limit)
  2. Second page: Additional issues (response truncated due to size)

Since you confirmed getting 163 issues when running the query directly in JIRA, and I’ve now retrieved 100+ issues via the API across two pages, this confirms that:

  • Your count of 163 issues is correct

  • The API is working properly, just paginated

  • The total count from the API calls matches your direct JIRA query result

I then asked Claude to list the key and summary for these, and it showed everything in the right sidebar.  But, at that point ran out of context memory so I had to make a better prompt for the next round.

Claude doesn’t do a generic agents.md file for sessions, and does do a claude.md.  That siad, I looked and you can ask for a summary of the session, which I did, then paste at the start of a new Claude session.

From Google search engine:

AI Overview
No, a Claude model will not inherently “read” an

agent.md file; it depends on the specific AI tool you are using and how it’s set up. While Claude Code uses its own format, CLAUDE.md, to provide project-specific context, AGENTS.md is an emerging open standard for other AI agents to use. To get Claude to use information from an agent.md file, you would typically need to copy its contents into a CLAUDE.md file or explicitly paste the instructions into your chat prompt.

Here’s a breakdown of the situation:
  • CLAUDE.md for Claude Code:Claude Code automatically pulls in the contents of a CLAUDE.md file from your project’s root or a specified location to provide context for the agent.

  • AGENTS.md as an Open Standard:AGENTS.md is a newer standard intended to provide instructions and context for any AI agent, not just Claude.

  • How to Use agent.md with Claude Code:Since Claude Code doesn’t automatically load AGENTS.md, you would need to take action to use the information within it:

    • Copy and Paste: Copy the contents of the agent.md file and paste them directly into your chat with the agent, or into the CLAUDE.md file for your project.

    • Prompt Claude to Read: You can explicitly tell the agent to read the agent.md file by including a prompt like “Here are the instructions from agent.md: [paste contents here]”.

I think we just did get Claude Memory, but haven’t tried it — so opted for the summary/copy paste method.  But just in case:

AI Overview
If you reach Claude’s context window limit, you can continue a session by creating a summary of your conversation and feeding it to a new chat

. For paid plans, a “memory” feature can automatically reference past discussions. 

Here are the best strategies to maintain context in a new session after running out of memory.
1. Manual summary and prompt engineering
This method works for all versions of Claude by creating a concise yet detailed record of your progress. 
  1. Ask Claude to create a summary. At a natural stopping point, use a prompt like: “Summarize our conversation so far into a single, dense paragraph. Include all key decisions, constraints, and the next immediate goal. The summary should be optimized to be used as a starting prompt for a new conversation”.
  2. Start a new chat. Create a new conversation and paste the summary you just generated.
  3. Frame the request clearly. Begin your prompt by stating that this is a continuation. For example: “This is our previous work on [project]. Let’s continue from where we left off based on the following summary:”. 
2. Use Claude’s memory and chat search feature (paid plans)
For paid subscribers, Claude offers built-in features that can recall past conversations and retain context across sessions. 
  • Search past chats: Ask Claude to retrieve relevant information from your previous conversations. For example, “What did we discuss about [topic]?” or “Can you find our conversation about [subject]?”.
  • Enable memory: You can enable memory for all your conversations in Settings > Features. When turned on, Claude can automatically summarize key insights from your chats and reference them in new conversations. 

It does seem that Claude Code operates a bit differently, but I am working in just the chat.

At this point, Claude was analyzing each story for complete requirements and giving me a list (I’m not including the company’s data, sorry).  But, I saw now I had another problem was telling Claude the definition of an incomplete story — because it was including “no assignee” but for open stories, we didn’t want an assignee just yet.  Still, close enough, so I can sort out the reqs for these prompts and get some sort of close estimation of bad stories.

Amazon Q Reviews with GitHub Repos

This is about hooking up Amazon Q to GitHub, so it can code review your pull requests.  Also, to see what happens if you have an unsupported language reviewed (Groovy, and for supported looking at Java).

List of Q supported code review languages as of today is here:

https://docs.aws.amazon.com/amazonq/latest/qdeveloper-ug/q-language-ide-support.html

Here is a printout of the receipt:

20250828 Supported languages for Amazon Q Developer in the IDE – Amazon Q Developer

Language support for code reviews with /review

Amazon Q can create code reviews and provide automatic code fixes for files and projects written in the following languages:

  • Java ‐ Java 17 and earlier
  • JavaScript ‐ ECMAScript 2021 and earlier
  • Python ‐ Python 3.11 and earlier, within the Python 3 series
  • C# ‐ All versions (.NET 6.0 and later recommended)
  • TypeScript ‐ All versions
  • Ruby ‐ Ruby 2.7 and 3.2
  • Go ‐ Go 1.18
  • C ‐ C11 and earlier
  • C++ ‐ C++17 and earlier
  • PHP ‐ PHP 8.2 and earlier
  • Kotlin ‐ Kotlin 2.0.0 and earlier
  • Scala ‐ Scala 3.2.2 and earlier
  • JSX ‐ React 17 and earlier
  • Infrastructure as Code (IaC) languages
    • AWS CloudFormation ‐ 2010-09-09
    • Terraform ‐ 1.6.2 and earlier
    • AWS CDK ‐ TypeScript and Python

     

Also, in the list if the current supported languages is Java, but not Groovy

First, you will need a GitHub account, a repository, and an Amazon Q Account.

Add the Amazon Q App to your GitHub account:

Go to settings (menu under your profile), then Applications:

Visit the Github Marketplace in Applications and install Amazon Q Developer:

When you install, it should be for Free.  It asks for a billing address — but don’t need a card or anything.  You can also choose all or just particular repositories for Q access.

Q App should now be installed in your GitHub.

 

Next connect the GitHub in Amazon Q Developer.  Log in and go to Amazon Q Developer, then click into the GitHub portion to Register Installations.

Authorize Q, then Register.  You’ll get a confirmation.

Now everything is connected, and it’s time to do a code review.

You can go out to an AI (Gemini in this case) and generate some bad code test files.  I generated two — one for Groovy, and one for Java.  SOme of the project I am on now is Groovy and the Amazon reps said it would work even though as of this date Groovy is not a supported language for Q.

 

The prompts:

  1. can you generate a Groovy file that has security and syntax issues so that I can test my code review process
  2. generate this same thing but as a java file

Here are the groovy and java test files to look at (printed as pdf):

badcode.groovy badcode.java

These files have errors in them, which AWS should catch.  I am not going to do an in-depthstudy of which errors it might see, just if the review works, and decorates, the code in GitHub or not.

 

Isn’t AI great? you don’t even need to hire a real developer to get bad code.

 

Here is a list of things that are wrong in these files, from Gemini.

Bad Groovy:

Security Issues

  • Command Injection: The insecureMethod() concatenates user input directly into a shell command, which can allow an attacker to execute arbitrary commands.
  • SQL Injection: The vulnerableSql() method builds a SQL query string by concatenating data, leaving it open to SQL injection attacks.
  • Directory Traversal: The insecureFileAccess() method uses user input to access a file path without proper validation, which could allow an attacker to read sensitive files outside of the intended directory.

Syntax and Style Issues

  • Missing Semicolon: In Groovy, while semicolons are optional, their absence can sometimes lead to unexpected behavior, especially when combining multiple statements on a single line. The provided example shows a simple case where a missing semicolon could be a point of review.
  • Misspelled Keyword: The reutrn keyword is misspelled, which will cause a compilation error.
  • Unmatched Parenthesis: A parenthesis is missing in the println statement, which will also cause a compilation error.
  • Redundant Comparison: The if (flag == true) statement is redundant. A more concise and idiomatic Groovy style would be if (flag).
  • Unused Variable: The unusedVariable is declared but never used, which is a common code smell that a good code review should identify.

Bad Java:

 

Security Issues

  • Command Injection: The insecureMethod() concatenates user input into a shell command using Runtime.getRuntime().exec(). This is a classic command injection vulnerability, as an attacker could provide input that executes malicious commands on the system.
  • SQL Injection: The vulnerableSql() method builds a SQL query string by directly concatenating an int and String into the query. This leaves the application open to SQL injection attacks, where an attacker could manipulate the query to gain unauthorized access to data.
  • Directory Traversal: The insecureFileAccess() method uses a user-controlled path to create a File object. Without proper validation, this is a directory traversal vulnerability, allowing an attacker to access sensitive files outside the intended directory, such as /etc/passwd.

Syntax and Style Issues

  • Compiler Errors: Unlike Groovy, Java is a strictly typed and compiled language. Missing semicolons, misspelled keywords (reutrn), and unmatched parentheses will cause the program to fail to compile, which is a major red flag in a code review.
  • Redundant Comparison: The condition flag == true is redundant and considered bad practice in Java. A more idiomatic and concise style is simply if (flag). This is a common style issue that a good code review should catch.
  • Unused Variable: The unusedVariable is declared but never used. Most IDEs and compilers will issue a warning about this, as it indicates potentially dead code or a mistake. This is also a key code smell to identify during a review.

In my repository I have two branches, one for groovy and one for java, I’ll pull separate PRs and then wait for Q’s code review.

Java PR:

Groovy PR:

Q is thinking about the review:

It takes a little time and here are the results for Java, which is finds many issues (more than listed here):

But with Groovy, an unsupported language, finds nothing:

 

Conclusion

It works rather well for a supported language, almost as good as my experience with SonarQube decoration.  If you are working in a low quality environment, you could turn this on and force PRs and get good code reviews for free at point of merge.

As for the Groovy/unsupported — be careful as Amazon might write off your code base and say “be patient.”  Doesn’t matter how cheap a tool is if it sucks time.  I have found the Groovy support kind of OK in the IDE with Q, just, the review doesn’t work.

 

Audio Driver Removal in Sequoia 15

I use some Skullcandy Smokin’ Buds with my work Mac running Sequoia 15 for use with meetings at times — but whenever they disconnect the audio driver reverts to one of the virtual audio drivers I had installed (and wasn’t using) for screen recordings. So, I could be in a meeting, disconnect my earbuds and could not talk or hear anything as the Mac would revert to a weird default audio driver and not the native Mac audio drive. That would require navigating to the sound settings and assigning the correct native Mac driver. This would happen with Google Meet or Zoom. Found this surprising, and hadn’t found a config to give the mac a device preference (being lazy, and assuming”it would just work”), so, thought it best just to remove the other drivers.

Once I removed these audio drivers, I was able to connect/disconnect as expected.

Here’s what I had on my machine (sans the Smokin’ Buds) — can be found in the sound settings. The two I wanted gone were the BlackHole and the VB-Cable drivers.

Also — I didn’t have any midi entries:

I couldn’t find any direct way to remove the two drivers so went to the internet.

Here’s what Google search AI gave me when I searched “osx uninstall virtual sound drivers”:


AI Overview

To uninstall unwanted virtual sound drivers on macOS, first locate the virtual device in Audio MIDI Setup and remove it from there. If the device is not found there, navigate to /Library/Audio/Plug-Ins/HAL and delete any associated files or folders. Finally, restart your Mac to ensure the changes are applied. 

Detailed Steps:

  1. Locate and Remove in Audio MIDI Setup:
    • Open Audio MIDI Setup (Applications > Utilities).
    • Go to Window > Show Audio Devices.
    • If your virtual device is listed, select it and click the minus (-) icon at the bottom left. 
  2. Delete Files/Folders in /Library/Audio/Plug-Ins/HAL:
    • Open Finder and go to /Library/Audio/Plug-Ins/HAL.
    • Delete any files or folders related to the virtual sound driver. 
    • For example, if it’s a virtual device from RØDE, you might delete files or folders named something like “RØDE Virtual”. 
  3. Remove from System Library:
    • Use Command-Shift-G and paste /Library/Audio to access the system-wide library.
    • Search for any .plist files or other files related to the virtual device and delete them. 
  4. Check Application Support:
    • Go to the Application Support folder (e.g., /Users/yourusername/Library/Application Support).
    • Look for any files or folders related to the virtual sound driver and delete them. 
  5. Restart:
    • Restart your Mac to ensure all changes are applied and the virtual sound driver is removed. 

Generative AI is experimental.


I went through all the steps — but for me, the steps that made the difference were #’s 2 and 5:

  • Delete Files/Folders in /Library/Audio/Plug-Ins/HAL
  • Restart

Never been a fan of this kind of “delete the driver files” thing in any OS but it worked. Always restart — always.

With those two drivers gone, I can connect/disconnect the earbuds in a meeting session with little trouble.