Automated Recurring Emails to Multiple Accounts for GMail

I am sure there are people with better solutions, but this is my recent journey trying to find a way to send recurring emails to multiple accounts in GMail.

Silly as this may sound, there doesn’t seem to be a way to send automated recurring emails to multiple accounts in GMail without third-party intervention.  I also couldn’t find anything good for Google Tasks, the answer being “Creating a recurring task for a group in Google Tasks is a bit tricky because Google’s native “Shared Tasks” (found in Google Chat Spaces) do not currently support the “Repeat” feature,” with some “workarounds.”

Yes, I did the usual AI inquiries (Gemini and Claude for me).  I also contacted our Google accounts admin, who confirmed that he couldn’t find any such feature.  We shared some solutions:

  • Boomerang for Gmail: Offers “Send Later” with recurring options (daily, weekly, etc.) and other productivity features.
  • IFTTT (If This Then That): Connects Gmail to other services for powerful triggers (e.g., send email when a new contact is added).
  • RightInBox: Email productivity.

Security Issues

The Chrome plugins and Google plugins don’t pass my company’s security measures. They want too much access, so these are a no-go (at least the ones I tested).

Google Apps Script Solution

Curiously enough, his research also pointed to Apps Script as a solution.

I have a few Google Apps Scripts for doing data mining, so this sounded fun.  I opened up VS Code and, using the Gemini Code Assist plugin, generated a solution.

Also, Apps Script comes with every Gmail account, and as a company we are OK’d for Google’s security measures.

Here’s the de-identified solution I am running now.  I am putting it in here in its raw form. I’ve noticed that the vibe coding results don’t do very good cleanup/review without a few iterations; interesting, and it means a lot of non-AI code review is necessary.  Note for the two methods (one for text emails, the other for HTML emails) that the comments are slightly different.  I just globalized some of the shared parameters to make it a little easier.  Feel free to clean this up and curse my name.

// Javascript - Code.js
const recipients = "team@yourcompany.com";

const subject = "Weekly Team Reminder";
/**
 * Sends a recurring email to a predefined list of recipients.
 */
function sendRecurringEmail() {
  // --- Configuration ---
  // --- Text Body ---
  // Set the email body. You can use simple text.
  const body = "Team,\n\nPlease remember to do your task for this week.\n\nThank you.\n\nSpartacus";

  // --- Sending the Email ---
  try {
    MailApp.sendEmail(recipients, subject, body);
    Logger.log("Email sent successfully to: " + recipients);
  } catch (e) {
    Logger.log("Error sending email: " + e.toString());
  }
}

/**
 * Sends a recurring email with an HTML body to a predefined list of recipients.
 */
function sendRecurringHtmlEmail() {
  // --- Configuration ---
  // --- HTML Body ---
  // Create an HTML template for the email body for richer formatting.
  const htmlBody = `
    <html>
      <body>
        <h2>Weekly Reminder</h2>
        <br>
        <p>Team,</p>
        <br>
        <p>Please remember to do your task for this week.</p>
        <br>
        <p>Thank you.</p>
        <br>
        <p>Hannibal</p>
      </body>
    </html>
  `;

  // --- Sending the Email ---
  try {
    MailApp.sendEmail({
      to: recipients,
      subject: subject,
      htmlBody: htmlBody // Use the 'htmlBody' option for HTML content
    });
    Logger.log("HTML email sent successfully to: " + recipients);
  } catch (e) {
    Logger.log("Error sending HTML email: " + e.toString());
  }
}

Making the code work

Log into Google Apps Script.

Add your code into the Apps Script editor. Usually the platform wants to call it Code.js.

You can pick which method you wish to run.

You do NOT need to Deploy the code for any of this, so ignore that. (I asked the AI about it; it said no need.)

Test your code with RUN. Usually I use my own email.

Adding a GMail group as the target

Now you can create a GMail group for the target. It’s pretty easy, just use Google Groups. It will allow you to create an email group. Add members and you’ll have something with a special email like your-team@yourcompany.com.

I did try to use a contact grouping but those don’t have a recognizable email address, so more code would be needed in your script to mine that out. I found the use of groups simpler.

Make the Email Recurring

This is pretty cool — Apps Script has trigger functionality that can make this recurring.
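Triggers can be set up through the Apps Script UI (the clock icon), or created from code. As a sketch, assuming the sendRecurringEmail function above, a one-time setup function might look like this (the day and hour are arbitrary choices):

```javascript
// Run this once from the Apps Script editor to schedule sendRecurringEmail.
// Apps Script stores the trigger; no deployment is needed.
function createWeeklyTrigger() {
  ScriptApp.newTrigger("sendRecurringEmail")
    .timeBased()
    .everyWeeks(1)
    .onWeekDay(ScriptApp.WeekDay.MONDAY) // fire on Mondays
    .atHour(9)                           // roughly 9 AM in the script's time zone
    .create();
}
```

After running createWeeklyTrigger once, the trigger shows up under the clock icon, and sendRecurringEmail fires on the schedule.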

Once you set this up you have your recurring email to your group.

Creating a CLI: Language Choices

After a lot of my work gathering JIRA data, I’ve had to go use the API from Postman or execute a script more often than not, as opposed to using AI.  Even with Claude, Amazon, Rovo, and Gemini, many times I will be given a script to execute because the AIs cannot use my credentials, or cannot do a two-step process like gather info, then iterate through that info — the needed REST + code solution.

I decided to take a shot at creating a simple JIRA CLI (apart from Atlassian’s, which requires special permissions) based on these APIs and scripts I’ve been creating.

The first thing I am curious about is which language is best for developing a CLI.  Considerations are easy cross-platform deployability and less setup (like needing a special runtime version or a code platform).  Also, I intend to vibe-code most of this and do some conversions, so I would not like any crazy languages as suggestions.

I asked AI.


“Best” Language to Write a CLI?

Here is a summary that Gemini in GDocs created from Q, Claude, and Gemini results when asked about the “best” code platform to write a CLI:

Best Language for CLI Development

The document recommends Go (Golang) as the top choice for professional, cross-platform CLIs.

  • Go (Recommended): The “industry standard” due to its ability to compile to a single static binary, which makes distribution incredibly simple, as the end-user does not need to install a runtime environment (like Python or Node.js).
  • Rust: The “performance powerhouse,” ideal for system tools and high-performance applications, also offering excellent distribution via static binaries but with a steeper learning curve.
  • Python: The “king of prototyping,” favored for rapid development and automation, despite distribution being more difficult due to the requirement of a Python runtime and managing dependencies.

Languages for Specific CLIs

The document details the languages used by several major CLI tools, which often reflects a balance between the speed of development and ease of distribution:

  • AWS CLI (Amazon’s primary tool): Python. V2 is distributed as a self-contained installer that bundles its own Python runtime, giving it a single-file-like feel. The document does not specify Amazon Q CLI, but the highly relevant AWS CLI is Python-based.
  • Claude Code (CLI): TypeScript/Node.js. Distributed via NPM or self-contained installers that can bundle Node.js. The latest version of Anthropic’s open-source tool is primarily written this way.
  • Gemini CLI: TypeScript/Node.js. Distributed via NPM, Homebrew, or npx, requiring Node.js 20+ for installation via NPM. This is based on the open-source google-gemini/gemini-cli repository.
  • KIRO CLI: TypeScript/Node.js. Leverages the NPM ecosystem for development. This is a third-party AI CLI tool.
  • GitHub CLI: Go. Follows the industry standard for easy distribution via a single binary. Mentioned as an example of a popular modern CLI written in Go.
I found it curious that Gemini didn’t add TypeScript/Node.js to the list of best choices, given that Claude, Gemini, and KIRO are all written in it. Here is what Gemini had to say:

Why isn’t TypeScript/Node.js a top choice?

TypeScript/Node.js is not generally considered the top choice for all CLIs (compared to Go or Rust) primarily due to distribution and runtime requirements.

The document highlights the following downsides:

  • Requires a Runtime: An end-user must have the Node.js runtime installed on their system to run a standard Node.js/TypeScript CLI, which adds a dependency for the user. Go and Rust, in contrast, compile to a single static binary that runs directly without needing an interpreter or runtime.
  • Perceived “Heaviness”: The necessary Node.js dependencies, often contained in a large node_modules folder, can make the tool feel “heavy” compared to a self-contained Go or Rust binary.
  • Larger Memory Footprint: The Node.js runtime environment generally has a larger memory footprint than a native executable.

Despite these drawbacks, the language is still an excellent choice for specific use cases, such as:

  • Tools for the web development ecosystem.
  • Projects that need to leverage the massive NPM ecosystem and existing JavaScript/TypeScript tooling.
  • Tools where the development team is already familiar with JavaScript/TypeScript.

In modern distributions (like AWS CLI v2 or the Claude Code CLI), this downside is sometimes mitigated by shipping the CLI with an embedded runtime so the end-user doesn’t have to install Node.js separately.


DevOps vs Developer CLIs: Language Platforms Differ

Finally, I was seeing a bit of a pattern: Go is used (as mentioned by one AI) for Docker, Kubernetes, and Hugo.  These are explicitly DevOps-oriented platforms, whereas the coding CLIs would seem to be mostly used by developers. I asked Gemini about it:

So do devops CLIs tend to be written in GO, and application developer CLIs tend to be not written with Go?

That is a very accurate observation of the current landscape. There is a strong “language divide” between tools built to manage infrastructure (DevOps) and tools built for application developers.

The primary reason for this split isn’t just aesthetic; it’s about distribution constraints versus ecosystem integration.

1. DevOps CLIs: The Reign of Go

If you look at the heavy hitters in DevOps—Docker, Kubernetes, Terraform, Helm, and the GitHub CLI—they are almost all written in Go.

  • The Problem: DevOps tools often need to run on “clean” servers, inside bare-bones Docker containers, or on a colleague’s machine that doesn’t have any specific software installed.

  • The Go Solution: Go compiles to a statically linked binary. This means the executable contains every single piece of code it needs to run.

  • The Result: You can just curl a binary and run it. No “You need Python 3.9,” no “Your Node version is too old,” and no pip install errors. For someone managing a server, this reliability is non-negotiable.

Core DevOps Stack (Go-Dominant)

  • Docker: (Go)

  • Kubernetes (kubectl): (Go)

  • Terraform: (Go)

  • Prometheus: (Go)

  • GitHub CLI (gh): (Go)

2. Developer CLIs: The NPM & Python World

When a CLI is meant for Application Developers (especially Web or AI developers), the language choice usually shifts to TypeScript (Node.js) or Python.

  • The Reason: These tools live where the developers already spend their time.

  • Ecosystem Symbiosis: If you are a frontend dev, you already have Node installed. If you are an AI researcher, you already have Python. Forcing these users to learn Go just to contribute to a CLI’s codebase would create a “contribution tax.”

  • Rich UI Libraries: TypeScript has libraries like Ink (which lets you build CLI layouts using React components), making it much easier to build the highly interactive, colorful UIs that modern app devs expect (like the Gemini or Claude CLIs).

Developer Tooling Stack (Node/Python Dominant)

  • Vercel / Netlify CLIs: (TypeScript/Node)

  • Claude Code: (TypeScript/Node)

  • AWS CLI: (Python)

  • Gemini CLI: (TypeScript)

  • Firebase CLI: (TypeScript/Node)

Summary: Go vs. The Rest

  • Main priority: DevOps (Go) must “just work” on any server; App Dev (TS/Python) prioritizes integration with the dev workflow.
  • Install method: a single-file download for Go; npm install or pip install for TS/Python.
  • Runtime: none required for Go (built in); TS/Python needs Node or Python installed.
  • Startup time: near-instant for Go; a small delay for TS/Python (interpreter boot).

The “Hybrid” Exception

You will occasionally see a shift toward Rust for both groups (like the Bun runtime or Oxc). Rust offers the “single binary” benefit of Go but with even higher performance and better safety. It is currently the “cool” choice for developers who want the portability of a DevOps tool but the high-end UX of an app-dev tool.


Direction

Finally, for my purposes — writing a quick JIRA CLI, asking Gemini for a recommendation considering my bulk-operation need:

Language Comparison for a Bulk JIRA CLI

  • Go (Golang): Superior for bulk operations (goroutines for parallel API calls); excellent distribution (single static binary). Weaknesses: terminal UI/TUI libraries may be less mature than Node.js’s; less native JSON handling than TypeScript.
  • Python: Rapid development, familiar for data handling. Weaknesses: not ideal for high-speed parallel API calls (Global Interpreter Lock); difficult distribution (requires a Python runtime and dependencies).
  • TypeScript/Node.js: Excellent for highly interactive CLIs (TUIs); easy to implement concurrency (Promise.all). Weaknesses: “cold start” overhead; distribution headache (requires the Node.js runtime); slower for high-CPU data manipulation.

The Verdict: Choose Go if you want a fast, portable utility that feels like a native system command and can handle massive bulk operations. Choose TypeScript if your priority is building a highly interactive, menu-driven tool.
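The Promise.all concurrency credited to TypeScript/Node.js above can be sketched in a few lines. This is a hypothetical helper, not part of any real CLI; fetchJson, the base URL, and the issue keys are all placeholders:

```javascript
// Sketch: bulk-fetching JIRA issues concurrently with Promise.all.
// fetchJson is injected so the same logic works with fetch, axios,
// or a test stub; the URL shape mirrors the JIRA REST issue endpoint.
async function bulkFetchIssues(issueKeys, fetchJson, baseUrl) {
  const requests = issueKeys.map((key) =>
    fetchJson(`${baseUrl}/rest/api/latest/issue/${key}`)
  );
  // All requests are in flight at once; results come back in input order.
  return Promise.all(requests);
}
```

Injecting the fetch function keeps the concurrency logic testable without a live JIRA instance.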

I was also given the exact steps to proceed.
Here is the raw information from the prompting, if you are more curious:
Best CLI Language AI Prompts

Deleting Excessive JIRA Web Links

I have a Jira ticket with a ton of web links all created by the Jira/Burp Suite connector.  

While I can delete them manually, I wanted to see if I could do this via the API.

There is a way to look up all the links on a ticket with the API, and then use each link id to delete it. You can use it in Postman, etc.

Do a GET with this (paying mind to your security credentials):

GET
https://bluesage.atlassian.net/rest/api/latest/issue/ISSUE-1234/remotelink

This gives you back a list of the links and their ids. From here it is simple enough to iterate through and delete the links (REST DELETE operation). You’ll have to do this for each link, as there is no bulk API statement for Atlassian. In this case the link ID is 876.

DELETE
https://bluesage.atlassian.net/rest/api/latest/issue/ISSUE-1234/remotelink/876

I didn’t write any code to do this call/iteration, but it would be simple enough to even vibe code it. Also, there is an Atlassian CLI I haven’t tried yet; it might be easy to write bash, groovy, or python to iterate.
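The call-and-iterate step could be sketched roughly like this; a hedged example assuming the GET /remotelink response is an array of objects with an id field, with the delete call injected so you can dry-run it:

```javascript
// Sketch: delete every remote link on a JIRA issue, one DELETE per link
// (there is no bulk endpoint). deleteFn is injected so you can plug in
// fetch/curl logic, or a stub for a dry run; baseUrl/issueKey are placeholders.
async function deleteRemoteLinks(baseUrl, issueKey, links, deleteFn) {
  const deleted = [];
  for (const link of links) {
    // One DELETE call per link id, matching the URL shown above.
    await deleteFn(`${baseUrl}/rest/api/latest/issue/${issueKey}/remotelink/${link.id}`);
    deleted.push(link.id);
  }
  return deleted;
}
```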

AI with Legacy Code Requires Real Coders These Days

Here’s a good use case as to why developers are needed to monitor code generated by AI.

Using Amazon Q, I set up a project that is a 100% AI-generated unit test project for another, barely tested project. (More about this later; it was not a smooth creation.) This is Grails, and I wanted to use its modularity to separate out the tests, isolating them as I finish them up before moving them into the target project.

Running a few tests, I noted the same failure: a missing property. I dug into the classes the tests were calling and couldn’t find any reference to (in this case) a log property.

Below is my querying of Q:

Basically, Q had made an assumption about the project setup and didn’t even check the class for the annotation reference, but still made the test.

I’ve noticed this kind of behavior especially when I give Q a large task (create 50 test cases) vs small (create 2 test cases). There might be something in the algorithm to save time, credits, whatever and you get incorrect extrapolations causing errors in the code.

This exact reason is why a lot of my current team will not use Q on legacy code: it might do a refactor if you aren’t paying attention, it might produce bogus code, or might make a code assumption.

The amount of time to figure out and correct AI issues in code can easily exceed just writing the code itself.

If any more proof is needed that real coders are different from AI code generators, this is SonarQube’s current price list:

Note that there are AI code options: detect AI code, AI code assurance, and AI code fixing. The code assurance has me doubtful in that popup description, but I’ll test it at some point. I think part of the detection happens when commits are marked by say a GitHub Copilot or Q generator, which would be easier than detecting from code.

But the point being — the difference between human generated code and AI generated code is being recognized by the industry.

Also, kind of chuckling: if you’ve ever used IntelliJ’s @formatter:off/@formatter:on markers (usually for SQL in my experience), we’ll probably start seeing more markers to indicate areas of legacy, human, and AI code.

Generating OpenAPI and PostMan Collections with Q

Generating descriptive schemas for your existing applications is something AI can do well.

Working on a Grails API application, I am using Burp Suite DAST to do security testing on it on a regular basis. I’ve needed an easy-to-maintain list of API endpoints for its API security scan feature for quite some time.  We don’t have any maps of the APIs, which is annoying, so I thought I would ask Amazon Q to make an OpenAPI bill of endpoints, as well as a Postman collection for good measure.

OpenAPI is a specification for making a kind of “schema” for your APIs.

Postman is a tool for building and testing APIs.

BurpSuite DAST – dynamic application security testing – is owned by PortSwigger.  It’s good for penetration testing.

 

Here are the prompts I asked after opening my project:

  1. can you make a postman collection of the apis in <myproject>
  2. can you also make an OpenAPI 3.0 definition, both json and yaml

Voila, I have my three files.

BurpSuite added an extra selection for an API scan type, and I hadn’t been able to use it until I had the catalog of API endpoints.

BurpSuite can take these formats to describe APIs:

SOAP WSDL, Postman Collection, or an OpenAPI definition in YAML or JSON format.

I like to work with YAML files since I work with some DevOps-type things, Docker, and AI specs (which use markup I find more akin to YAML). But for grins I generated both formats, and then the Postman collection. I didn’t try the WSDL.

Example content:

openapi: 3.0.0
info:
  title: Product Management API
  version: 1.0.0
  description: A sample API for managing products.
servers:
  - url: https://api.example.com/v1
    description: Production server
  - url: http://localhost:8080/v1
    description: Development server
tags:
  - name: Products
    description: Operations related to products
paths:
  /products:
    get:
      tags:
        - Products
      summary: Get all products
      operationId: getAllProducts
      responses:
        '200':
          description: A list of products
          content:
            application/json:
              schema:
                type: array
                items:
                  $ref: '#/components/schemas/Product'
    post:
      tags:
        - Products
      summary: Create a new product
      operationId: createProduct
      requestBody:
        required: true
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/NewProduct'
      responses:
        '201':
          description: Product created successfully
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Product'

< and so forth .... >

Here’s the file loaded into BurpSuite:

Then you can add authentication if you need to:

The Postman collection looks like this. Postman can actually import an OpenAPI spec and output a collection.

	"info": {
		"_postman_id": "e2c42dd8-dabe-44fd-b90c-9136d35d9167",
		"name": "Product Management API",
		"description": "A sample API for managing products.",
		"schema": "https://schema.getpostman.com/json/collection/v2.1.0/collection.json",
		"_exporter_id": "6514261"
	},
	"item": [
		{
			"name": "products",
			"item": [
				{
					"name": "{productId}",
					"item": [
						{
							"name": "Get a product by ID",
< and so forth ... >

Postman Collection loaded into Postman tool — ready for use:

So anyway, after you have this all loaded up into BurpSuite, you can run your scan job on the endpoints, and hopefully there’s nothing major to fix.


The product test file I generated with the AI in my Chrome browser is in the following file. You can test BurpSuite with it or import it into Postman.

AI Misfires

AI often misfires the more I work with it.

I think the big issue is, you cannot forget that you continually need to keep AI guided and boxed into your task, or you will get hallucinations and errors.

Here’s an example of an agent run using Gemini.  I asked it a complex math question, 42 factorial, using the proper 

There’s an option to “fix the error” — which I ask it to do. Then we get this:

A fixed result. But why didn’t it have a fixed result in the first place? This kind of thing is frustrating — and in a complex coding environment could be very troublesome, with generated bugs not discovered until production deployment.


I’ve had two recent experiences with general AI querying that I found interesting.

  1. I was asking Copilot about a famous documentary. There also happened to be a court case with a name similar to the documentary’s. Copilot *only* told me about the case, not the documentary. It also said I was wrong when I tried to query further. This one I finished on Gemini.
  2. I was also asking DuckDuckGo’s AI (the search assist) about a documentary series about an American touring Japanese food — ramen, sushi, tea houses, etc. The AI only told me of some other documentaries, denying that there was even such a thing with an American. Well, the series exists: “Pride Japan: Nihon no kokoroni deau.” I finished this one on Grok.

The interesting part of this wasn’t that it couldn’t find the items, it’s that AI told me I was wrong, or denied there even being anything. Also, recently, Claude was getting out of hand calling some of my input “delusional utterances.”

During all this I’ve asked these applications to not use my name, nor use human style interaction. A lot of bias can enter this way, imo.

Here’s some extra reading that is interesting on AI wrongness as well.

https://itrevolution.com/articles/when-ai-cuts-corners-hijacking-the-reward-function/

Also, maybe you should ask several AIs at once. Funny enough, I was watching a Rick Beato guitar video on YouTube, and he mentioned an app that can ask several at once. (Honestly, I was about to fabricate a JavaScript page to do just this.) Then I found out about another multi-AI asker. Here they are:

https://openrouter.ai/

https://www.maxai.co/features/web-ai-chat/

They can cost some money, but worth a try.

AI doesn’t necessarily keep up

Here’s an issue I’ve run into repeatedly: AI having old versions of APIs for services, in this case JIRA, which recently updated its API and didn’t leave any backwards compatibility (and also hosed my dev data lake).

I have a Python script I AI’d out of Claude to extract JIRA data for BERT model training. But when I run it, I get a 410 error from the JIRA endpoint:

I am also using Claude in VS Code for this, so I had it fix the problem. I ran the update and it works.

AI coding requires a lot of practice to get better at, but even with the API error, generating this script saved some time.
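One way to fail loudly on this in future scripts is to check for the 410 status up front, so a removed endpoint reads as an API deprecation rather than a data problem. A minimal sketch with an injected fetch-like function (all names hypothetical):

```javascript
// Sketch: surface HTTP 410 ("Gone") explicitly. fetchJson is any function
// returning an object with a numeric status; inject fetch, axios, or a stub.
async function getOrExplain(fetchJson, url) {
  const res = await fetchJson(url);
  if (res.status === 410) {
    // The endpoint was removed, not just empty; point at the vendor docs.
    throw new Error(`410 Gone for ${url}: endpoint removed, check the API deprecation notes`);
  }
  return res;
}
```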

Prompting with AI is like … golfing

Let’s talk about using AI in this current age of 2025.

As obvious as it sounds, writing prompts for an AI chat session is a lot like golf: honing down until you get a good answer, OR you run out of chat memory and have to move to a new session.

  1. Use the driver off the tee.
    -Post your first prompt: “I’d like all the recent JIRA tickets that have work logged on them.”
  2. On the fairway.  Now you select an iron to get you close to the hole.
    -“I mean the tickets that have worked logged in the last two weeks.”
  3. Now you are on the green.  You decide to putt.
    -“For team members Sally and Frank.”

Sometimes, though, AI completely misses it. For instance step two (real scenario with Rovo, Atlassian’s new AI assistant): “Here you go, a JQL that lists all the tickets with time on them.” But since getting work time requires iterating through the worklog API data, the AI didn’t do that correctly and just gave you tickets with recent activity that had work logged at any time, even three months ago.  In this case, 2A: need that pitching wedge to get onto the green.
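The worklog iteration that got skipped can be sketched as a plain filter. This assumes a simplified, hypothetical issue shape where each issue already carries its worklog entries with a started timestamp (against a real JIRA instance you would fetch the worklogs per issue first):

```javascript
// Sketch: keep only issues with work actually logged in the last N days,
// rather than issues with any recent activity.
function issuesWithRecentWork(issues, days, now = Date.now()) {
  const cutoff = now - days * 24 * 60 * 60 * 1000;
  return issues.filter((issue) =>
    // At least one worklog entry must have been started after the cutoff.
    issue.worklogs.some((w) => new Date(w.started).getTime() >= cutoff)
  );
}
```

This is the two-step process the AIs struggle with: a JQL query alone can’t express it, because the date lives inside each worklog entry.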

I’ve been working with Claude, Gemini, Copilot, ChatGPT, Ollama, and a lot of minor LLMs — they all do this, as you can imagine; some just get you there faster.  I am finding that smaller, less-known models need more time to get what you need: for instance, Junie in IntelliJ, some of the GitHub AI tools, and definitely Rovo for Atlassian are less mature.

A big thing is learning how to summarize a session so as to continue on again at some point — have AI summarize, or save the chat context ported off for reloading in a new session.

AWS Adding a Repository to Q

  1. Visit the GitHub Marketplace in Applications and install Amazon Q Developer. This plugin in GitHub is where you decide which repos Q has access to: Amazon Q Developer – GitHub Marketplace
  2. Log in and go to Amazon Q Developer (it is in the BlueSage-Shared account), then click into the GitHub portion to Register Installations in the Q admin screen.
  3. Authorize Q, then Register.  You’ll get a confirmation.

AI Agents Files: Context, Build Specs, Markdown

This is just an open thought/notes post — using agents specifications files in an AI development environment.

These kinds of files are written in Markdown, and can do some of the following:

  1. Project documentation
    • Anything from testing, code narration, code structure, dependencies that an AI tool can read to assist in code generation or help.
  2. Structured input to LLMs
    • Agent guidance — i.e. info and context about your development environment and project settings

There’s an emerging standard, open source, called AGENTS.md. Here’s their first page sample:

# AGENTS.md

## Setup commands
- Install deps: `pnpm install`
- Start dev server: `pnpm dev`
- Run tests: `pnpm test`

## Code style
- TypeScript strict mode
- Single quotes, no semicolons
- Use functional patterns where possible

These can become very complicated. I’ve run some tests developing a desktop image environment for VMware, and have specified a ton of applications, OS settings, etc. in these files.

Here is some of my experiment. It’s quite easy to generate such a file; I was asking it to create a Packer instance for VMware:

# DVM Virtual Dev Environment

## Project Overview
This project creates a Ubuntu 24.04.3 desktop virtual machine using HashiCorp Packer and VMware. The VM is pre-configured with development tools including Visual Studio Code and IntelliJ IDEA Community Edition.

## Purpose
- Build standardized development environments
- Automate VM provisioning with consistent tooling
- Provide reproducible Ubuntu desktop environments for development work

## Key Components

### Packer Configuration (`ubuntu-desktop.pkr.hcl`)
- **Base Image**: Ubuntu 24.04.3 Desktop AMD64 ISO
- **Platform**: VMware (vmware-iso builder)
- **Resources**: 4GB RAM, 2 CPUs, 40GB disk
- **Authentication**: SSH with ubuntu/ubuntu credentials

### Provisioning
The build process installs:

**Development Tools:**
- Microsoft Visual Studio Code
- IntelliJ IDEA Community Edition
- Docker Desktop
- Postman
- MySQL Workbench Community

But also, this captures a lot of context.

Amazon has the same idea: rules for their “spec driven development” approach. I will probably discuss this later, as I have questioned them about the lifecycle of business rules in a development cycle, but generally it stores all the info of your dev environment for your project. It is generally stored in .amazonq/rules, though it can be in a file like amazonq.rules.

Also, Anthropic has the CLAUDE.md file, which is like AGENTS.md. It’s loaded at startup to give your agent context, among other things. There is also a JSON file for settings.

From CLAUDE site:

Key points about the configuration system

  • Memory files (CLAUDE.md): Contain instructions and context that Claude loads at startup
  • Settings files (JSON): Configure permissions, environment variables, and tool behavior
  • Slash commands: Custom commands that can be invoked during a session with /command-name
  • MCP servers: Extend Claude Code with additional tools and integrations
  • Precedence: Higher-level configurations (Enterprise) override lower-level ones (User/Project)
  • Inheritance: Settings are merged, with more specific settings adding to or overriding broader ones

Note that these other AI agents will in fact read AGENTS.md, even if they put up a fight against you. Also, be aware that in general you’ll eventually be checking in and maintaining these files for your projects.


Thoughts. Well, after working with these, it’s nice to have a context-on-load mechanism. It would be nice to share these among other developers. That said, I wonder how this is going to play with the specs in Maven, Ivy, Ant, Gradle, etc. files. Why not just read those files, instead of (in some cases) having double entry?

And since this is just now emerging, there are several standards. I am guessing if you ever switch AI providers, it could convert between standards (from, say, Q rules to CLAUDE.md) with no fuss. It had better, given that conversion seems to be one of AI’s strong points.

Another really nice related mechanism I am experimenting with is saving chat contexts from the IDE. Amazon has something like this, and Claude can remember as well; but you could technically store prompt history offline, even incorporate the best answers into the .md files.

Best thing is to try all of this out.


References:

MARKDOWN

AGENTS.md

CLAUDE Settings

Amazon Q Rules

Further Reading:

There are mentions of Jupyter notebooks in much of the MD config documentation. I didn’t discuss them here, but you can of course explore them more.

JUPYTER